On Cellular Complexity and the Future of Biological Intelligence: A Q&A with Sam Gershman

"I've always been interested in understanding what makes humans and other animals smart, and how this happens at the biological level," says Samuel Gershman, a Kempner associate faculty member and professor in the Department of Psychology.
Photo credit: Anthony Tulliani
What is the logic underlying human and animal intelligence? This is the motivating question behind the research of Samuel J. Gershman, Kempner Institute associate faculty member and professor in the Department of Psychology. Gershman’s lab studies a wide spectrum of phenomena related to intelligence, ranging from the complexity of human cooperation to learning in single-celled organisms. The Kempner’s science writer, Yohan John, sat down with Gershman to chat about his research and the evolution of learning mechanisms.
How do you see your research in relation to the Kempner Institute’s mission to study both artificial and natural intelligence together?
I’ve always been interested in understanding what makes humans and other animals smart, and how this happens at the biological level. Ideas from machine learning have been useful in developing formal accounts of how intelligence arises. The idea is that we don’t really understand intelligence until we can build a version of it.
How do you approach the study of intelligence?
I’ve always been particularly interested in reconciling two points of view. On the one hand, within AI, humans are often treated as the benchmark for intelligence. But within psychology, there’s a long tradition of showing how ‘stupid’ people are. Reconciling those two perspectives has always perplexed me. The approach I take is to try to understand the deeper underlying logic of human intelligence, which requires taking into account limitations on computation, memory, and data.
“The approach I take is to try to understand the deeper underlying logic of human intelligence, which requires taking into account limitations on computation, memory, and data.”
Samuel Gershman
The question central to my research is: How well could a rationally designed information-processing system perform under the constraints imposed on it? And then how do you actually do something useful, algorithmically, within those constraints?
My research focuses on explaining apparent inefficiencies in natural intelligence by developing algorithms that can do useful computation under resource constraints. What happens when we only have a small amount of data? What happens when we can only think a small number of thoughts? What happens when we can only store a small number of memories? Interestingly, you can explain a lot of cognition in terms of approximately optimal algorithms subject to those constraints.
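As a purely illustrative sketch (not a model from Gershman's lab), the resource-rational idea can be captured by an agent that estimates a quantity with a deliberately limited sample budget: the answer is noisier than an unconstrained one, but it is roughly the best achievable given how few "thoughts" the agent can afford. All names here are hypothetical.

```python
import random

def budgeted_estimate(sample, budget):
    """Approximate an expectation under a limited 'thought' budget:
    draw only `budget` samples and average them."""
    draws = [sample() for _ in range(budget)]
    return sum(draws) / len(draws)

# A toy world: noisy observations centered on a true value of 3.0.
random.seed(0)
world = lambda: random.gauss(3.0, 1.0)

# With a large budget the estimate tightens around the true value;
# with a small budget the agent trades accuracy for speed and memory.
small = budgeted_estimate(world, budget=5)
large = budgeted_estimate(world, budget=5000)
print(small, large)
```

The point of the sketch is that the small-budget estimate is not a failure of rationality; it is the rational answer once the cost of sampling is part of the problem.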
What has your lab been looking into recently?
My lab’s work is focused on understanding computational principles of intelligence. We explore this through various avenues. One area of interest is common-sense reasoning in humans, particularly intuitive physics [our understanding of how the physical world operates]. Another focus is how groups of people can collaborate to achieve results beyond any individual’s capabilities. This involves investigating the cognitive foundations of collaboration and explaining how people use sophisticated theory of mind to reason about others, learning adaptively from their actions.
My research also delves into the role of dopamine in the brain, where reinforcement learning theories used in AI intersect with biological processes. We collaborate with researchers in Harvard’s Department of Molecular and Cellular Biology and the Department of Neurobiology to test these ideas experimentally.
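The standard reinforcement-learning account of dopamine holds that dopamine neurons signal the temporal-difference (TD) reward prediction error. A minimal sketch of that quantity (illustrative only, not the lab's actual model) on a toy cue-then-reward sequence:

```python
# Minimal temporal-difference (TD) value learning on a toy chain.
# The TD error delta = r + gamma * V(s') - V(s) is the quantity
# dopamine neurons are hypothesized to signal.

def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta                  # move V(s) toward target
    return delta

V = {"cue": 0.0, "reward": 0.0, "end": 0.0}
deltas = []
for _ in range(200):
    # one episode: cue -> reward state (r = 1) -> end
    td_update(V, "cue", 0.0, "reward")
    d = td_update(V, "reward", 1.0, "end")
    deltas.append(d)

# Early in learning, the error at reward delivery is large; after
# learning, it shrinks as the reward becomes predicted by the cue.
print(deltas[0] > deltas[-1])
```

The signature behavior, matching classic recordings, is that the prediction error migrates away from the now-expected reward and toward the cue that predicts it.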
Perhaps my most exciting current pursuit is exploring the potential for non-synaptic forms of plasticity. Synaptic plasticity is often assumed to be the primary mechanism for learning, but memory existed even before the evolution of brains. This leads me to consider the origins of memory itself. I believe that some learning mechanisms we have in the brain evolved from pre-brain mechanisms and that the brain is an elaboration on these earlier processes.
How do you study these non-synaptic mechanisms?
We’ve been studying single-celled organisms as a kind of testbed for this idea. Single-celled organisms have no synapses since they’re just one cell. If they can learn, it raises the question of how they learn. We’ve been focusing on a particular organism called Stentor, which in certain ways is remarkably like a neuron. It has action potentials and calcium signaling and second messengers similar to those found in neurons. And it produces learned behaviors similar to animals. We are trying to characterize these behaviors systematically, and to ultimately understand their neurobiological basis.
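One of the simplest learned behaviors reported in single-celled organisms is habituation: the response to a repeated, harmless stimulus weakens over successive presentations. A toy decay model of that phenomenon (a hypothetical illustration, not the lab's characterization of Stentor):

```python
# Toy habituation model: response strength decays multiplicatively
# with each repeated stimulus -- the kind of simple learned behavior
# observable even without synapses. Parameters are illustrative.

def habituate(n_stimuli, decay=0.7, response=1.0):
    """Return the response to each of n_stimuli repeated stimuli."""
    responses = []
    for _ in range(n_stimuli):
        responses.append(response)
        response *= decay  # each stimulus weakens the next response
    return responses

rs = habituate(5)
print(rs)  # monotonically decreasing responses
```

Even this one-parameter sketch makes the empirical question concrete: characterizing such behaviors systematically means asking whether the decline follows a lawful curve, recovers with rest, and so on.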
How do you see the study of intelligence evolving in the near future? What research directions would you like to see pursued more?
I think we’ve reached a local optimum in how we model the biology of intelligence. It’s common to treat single neurons as simple computing elements that only become intelligent when combined into specific architectures. We have built useful technologies from this approach. However, if we take seriously the question of how biology actually works, we must confront the fact that cells are incredibly complex. This complexity might be irrelevant for intelligence, but alternatively it might be essential. We focus too much on a simplistic view of biology, even as we talk about “biologically plausible” models; many of the most powerful models are fundamentally impossible from a biological perspective. I would like to see more attention paid to computational complexity at the cellular level.
This interview was edited for brevity and clarity. To learn more about Gershman’s research, check out his lab homepage: Computational Cognitive Neuroscience Lab
About the Kempner Institute
The Kempner Institute seeks to understand the basis of intelligence in natural and artificial systems by recruiting and training future generations of researchers to study intelligence from biological, cognitive, engineering, and computational perspectives. Its bold premise is that the fields of natural and artificial intelligence are intimately interconnected: the next generation of artificial intelligence (AI) will require the same principles that our brains use for fast, flexible natural reasoning, and how our brains compute and reason can be elucidated by theories developed for AI. Join the Kempner mailing list to learn more, and to receive updates and news.
PRESS CONTACT:
Deborah Apsel Lang | (617) 495-7993