Announcing 2025 Kempner Institute Research Fellows
Eleven innovative early-career scientists awarded fellowships to undertake research that advances the understanding of intelligence

The 2025 Kempner Research Fellows are (left to right, from top): David Clark, Ruojin Cai, Elom Amemastro, Gizem Ozdil, Hadas Orgad, Mark Goldstein, Greta Tuckute, Gabriel Poesia, Alexandru Damian, Richard Hakim and William Dorrell.
Cambridge, MA – The Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard is pleased to announce the recipients of its 2025 Kempner Institute Research Fellowships. The 2025 recipients are Elom Amemastro, Ruojin Cai, David Clark, Alexandru Damian, William Dorrell, Mark Goldstein, Richard Hakim, Hadas Orgad, Gizem Ozdil, Gabriel Poesia and Greta Tuckute.
The eleven fellowship recipients are all early-career scientists with a wide variety of skill sets and educational backgrounds. Each pursues novel research at the intersection of natural and artificial intelligence.
Each fellowship runs for up to three years and includes salary and research funds, office space, and mentorship. While fellows set their own research agenda, they are strongly encouraged to undertake interdisciplinary projects and to collaborate with experts at the Kempner Institute and throughout Harvard University.
About the fellows
Elom Amemastro focuses on how humans learn new skills from experience, with an emphasis on reinforcement learning and the neural mechanisms that support flexible skill acquisition and generalization. His work integrates behavioral experiments, large-scale neural recordings, and computational modeling to investigate how neural circuits encode structured knowledge, adapt to novel tasks, and leverage past experiences for efficient learning. His research addresses fundamental challenges in understanding how biological systems achieve sample-efficient learning, with the goal of developing AI systems that learn more like humans and informing interventions that enhance cognitive flexibility and recovery.
Ruojin Cai studies 3D computer vision, with the goal of building models that can perceive and reason about the 3D world, advancing spatial intelligence in machines. Her work addresses core challenges in 3D reconstruction under sparse-view or ambiguous settings, where traditional geometric methods often fail. Her key insight is to tackle these challenges by leveraging learned priors from generative video models and geometric vision models to improve robustness under limited or ambiguous observations. Her long-term goal is to develop truly spatially intelligent systems that can not only perceive but also reason about and act within complex, real-world environments.
David Clark aims to develop a deeper theoretical understanding of how neural systems interact with environmental data to generate representations and perform useful computations. Neural circuits in the brain are characterized by their large-scale, nonlinear dynamics, complex recurrent interactions, and connections that change across multiple timescales. To understand how these features enable computation and learning, he uses theoretical tools from physics (e.g., dynamical systems, statistical mechanics, path integrals, replica and cavity methods, and random matrix theory) and machine learning (e.g., convex optimization, deep and recurrent networks, and sequence models).
Alexandru Damian aims to develop a mathematical foundation for deep learning, with a focus on optimization and representation learning. He is especially interested in how optimization algorithms, such as stochastic gradient descent and Adam, navigate the high-dimensional non-convex loss landscapes in deep learning, how this process is influenced by the choice of optimizer and its hyperparameters, and how these decisions shape the representations learned by the model.
William Dorrell seeks to understand how biological neurons implement cognitive computations. His approach involves asking why neurons fire the way they do and building mathematical theories to explain it. These theories are usually framed as optimization problems, leading to hypotheses of the form: “if the neurons were trying to perform this computation optimally then, under some constraints, they should behave like this.” Dorrell compares the predictions of these theories to neural recordings from brains or artificial neural networks. He hopes these approaches will help uncover the algorithms the brain uses to do clever things like play board games, tap rhythms, or reason.
Mark Goldstein studies generative modeling, probability density estimation, and sampling. A central theme of his work is rethinking foundational design choices in generative model training and analyzing how these choices affect performance and efficiency. He has explored these questions in the context of diffusion and consistency models. Looking ahead, Mark aims to investigate how compositionality and sequential decision-making can augment existing model classes, or give rise to new ones. Such capabilities could also be integrated with simulation: in the context of scientific discovery, for example, a diffusion or language model might simulate molecular dynamics to observe a phenomenon before refining a proposed molecule.
Richard Hakim studies neural decoding and brain-computer interfaces (BCIs). His prior work is primarily experimental and includes studies on how movement is encoded in the motor cortex, how brain oscillations are generated, and the development of a suite of open-source computational tools. Moving forward, he is excited by emerging research into foundation models for BCI decoding and aims to leverage principles derived from artificial intelligence to better understand the structure and function of biological brains.
Hadas Orgad investigates the internal mechanisms of AI models to better understand and mitigate failures in safety, fairness, and reliability. Her research bridges interpretability and practical deployment, focusing on harmful model behaviors such as hallucinations, bias, privacy violations, and unsafe outputs. By analyzing the internal structure of models, she develops actionable tools and interventions to improve model behavior and better align it with human values and incentives. Her long-term goal is to advance interpretability and control techniques so that AI systems are fully transparent, trustworthy, and steerable.
Gizem Ozdil bridges systems neuroscience, artificial intelligence, and robotics to uncover the principles that enable adaptive behavior in biological systems. She is particularly interested in how biological insights, such as structural constraints, can inform the design of more flexible and autonomous agents. To explore this, she develops biologically inspired neural networks and trains embodied agents in complex physical environments that require learning, memory retention, and planning. In turn, these models can be used for reverse-engineering brain function and inspiring the development of more efficient and adaptable artificial systems.
Gabriel Poesia investigates formal reasoning in humans and machines. This involves defining a suitable “game of mathematics” on top of a formal foundation like dependent type theory; learning to find proofs using language models and deep reinforcement learning; discovering increasingly high-level mathematical abstractions; and ultimately using these tools to build joyful and scalable experiences for mathematics education. His recent research builds heavily on ideas from intrinsically motivated learning, and also explores program verification.
Greta Tuckute studies how language is processed in the human brain and in artificial neural networks. Her research broadly follows three directions. First, she works to precisely characterize the neural architecture and functions that support language processing in the human brain. Second, she investigates whether the human brain and artificial networks share representations and computational principles during language processing. Third, she develops biologically plausible artificial networks that learn language in more human-like ways. Collectively, these three directions inform one another, advancing our understanding of how language serves as an efficient interface to a wide range of downstream behaviors in both biological and artificial systems.
About the Kempner
The Kempner Institute seeks to understand the basis of intelligence in natural and artificial systems by recruiting and training future generations of researchers to study intelligence from biological, cognitive, engineering, and computational perspectives. Its bold premise is that the fields of natural and artificial intelligence are intimately interconnected: the next generation of artificial intelligence (AI) will require the same principles that our brains use for fast, flexible natural reasoning, and theories developed for AI can in turn elucidate how our brains compute and reason. Join the Kempner mailing list to learn more, and to receive updates and news.
PRESS CONTACT:
Deborah Apsel Lang | (617) 495-7993
kempnercommunications@harvard.edu