Bridging Neural Dynamics To Goal-Directed Behavior Across Timescales

Speaker: Aran Nayebi

Date: Tuesday, February 20, 2024
Time: 10:05–11:05 am


Humans and animals exhibit a range of interesting behaviors in dynamic environments, and it is unclear how the brain reformats dense sensory information to enable these behaviors. To gain traction on this problem, new recording paradigms now make it possible to record and manipulate hundreds to thousands of neurons in awake, behaving animals. Consequently, a pressing need arises to distill these data into interpretable insights about how neural circuits give rise to intelligent behaviors.

To engage with these issues, I take a computational approach, known as “goal-driven modeling”, that leverages recent advancements in artificial intelligence (AI) to express hypotheses about the evolutionary constraints on neural circuits in mathematically closed form. These constraints guide the iterative optimization of artificial neural networks to achieve a specific behavior (“goal”). By carefully analyzing the factors that contribute to model fidelity in predicting large-scale neural response patterns, we can gain insight into why certain brain areas respond as they do, and what selective pressures over evolutionary and developmental timescales give rise to the diversity of observed neural responses.
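To make the two-step logic of this approach concrete, the following is a minimal, purely illustrative sketch: a small network is optimized for a behavioral goal, and its internal activations are then linearly regressed onto recorded responses to assess neural predictivity. All data here are synthetic stand-ins, and the network, task, and fitting procedure are simplified assumptions, not the speaker's actual models.

```python
# Minimal sketch of "goal-driven modeling" (all data synthetic / hypothetical):
# 1) optimize a small network for a behavioral goal, then
# 2) ask how well its internal activations linearly predict neural responses.
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: train a tiny network on a toy "goal" (binary classification) ---
X = rng.normal(size=(200, 10))             # stimuli (200 trials, 10 features)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # behavioral target

W1 = rng.normal(scale=0.1, size=(10, 16))  # hidden-layer weights
w2 = rng.normal(scale=0.1, size=16)        # behavioral readout weights

def forward(X):
    h = np.tanh(X @ W1)                    # hidden "neural" activations
    p = 1 / (1 + np.exp(-(h @ w2)))        # predicted behavior
    return h, p

lr = 0.5
for _ in range(500):                       # plain gradient descent on logistic loss
    h, p = forward(X)
    err = p - y
    w2 -= lr * h.T @ err / len(y)
    dh = np.outer(err, w2) * (1 - h**2)    # backprop through tanh
    W1 -= lr * X.T @ dh / len(y)

# --- Step 2: regress hidden activations onto (synthetic) neural recordings ---
neural = np.tanh(X @ rng.normal(scale=0.3, size=(10, 5)))  # stand-in data
h, _ = forward(X)
beta = np.linalg.lstsq(h, neural, rcond=None)[0]           # linear mapping
pred = h @ beta
r = [np.corrcoef(pred[:, i], neural[:, i])[0, 1] for i in range(5)]
print("median neural predictivity:", round(float(np.median(r)), 3))
```

In practice this comparison is done with deep networks trained on rich tasks and with regularized regression under cross-validation; the point of the sketch is only the structure of the argument, in which the goal shapes the representation, and predictivity of neural data is the measure of model fidelity.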

In this talk, I apply this approach to examine the functional constraints of brain areas in adaptive behaviors across three timescales: (1) the role of recurrent processing in rapid object recognition (within 250 ms), (2) visually-grounded mental simulation of future environmental states over longer timescales (within several seconds), and (3) identifying operative rules during task learning (within an organism’s lifetime). Finally, I conclude with future directions towards closing the perception-action loop by building integrative, embodied agents that serve as normative accounts of the interaction of brain areas that enable taking meaningful actions in complex, dynamic environments. The design of these agents will elucidate the mathematical and algorithmic principles of natural intelligence, yielding safer, more adept, physically-grounded AI algorithms.

Location: Science and Engineering Complex (SEC), SEC 1.413


Aran Nayebi is an ICoN Postdoctoral Fellow at MIT, currently working with Robert Yang and Mehrdad Jazayeri. He completed his PhD in Neuroscience at Stanford University, co-advised by Daniel Yamins and Surya Ganguli. His interests lie at the intersection of neuroscience and artificial intelligence (AI), where he uses tools from AI and optimization to better understand natural intelligence. His long-term aim is to focus on the sensorimotor loop essential for survival and physical interaction, in order to produce normative accounts of how brain areas collaborate to give rise to complex embodied behaviors, yielding more physically-grounded, common-sense AI algorithms along the way.