AI and the Brain
What We Do
A core precept of the Kempner Institute is that AI and ML will advance our understanding of the human brain, and that insights gained from studying the brain will generate new strategies to better understand, and potentially advance, AI architectures and capabilities. AI models of brain computation, learning, and memory, built on experimental data from humans and animals, will yield testable hypotheses. Our researchers will use these virtual models to generate new insights into cognition and computation that advance our fundamental understanding of the brain and how it is perturbed in disease.
How the Brain and AI Reuse Old Knowledge in New Situations
Kempner Institute Investigator SueYeon Chung and collaborators introduce a mathematical theory showing how the geometric structure of neural activity patterns governs generalization in both brains and artificial intelligence systems. Across artificial neural networks and experimental data from rats and monkeys, the same geometric properties accurately predicted generalization performance, revealing commonalities between artificial and biological intelligence.
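To give a flavor of what "geometric properties of neural activity" means in practice, the toy sketch below measures two simple descriptors of a point-cloud "neural manifold" (its radius and its effective dimensionality) and a centroid-separation score. This is an illustrative simplification, not the theory from the paper; all function names and the separation score are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def manifold_geometry(X):
    """Simple geometric descriptors of a point cloud of neural activity
    (trials x neurons): centroid, radius (mean distance to centroid), and
    effective dimensionality via the participation ratio of the covariance
    spectrum."""
    centroid = X.mean(axis=0)
    centered = X - centroid
    radius = np.linalg.norm(centered, axis=1).mean()
    eig = np.linalg.eigvalsh(np.cov(centered.T))
    dim = eig.sum() ** 2 / (eig ** 2).sum()  # participation ratio
    return centroid, radius, dim

# Two toy condition manifolds in a 50-dimensional "neural" space.
A = rng.normal(0.0, 1.0, size=(200, 50))
B = rng.normal(0.5, 1.0, size=(200, 50))

cA, rA, dA = manifold_geometry(A)
cB, rB, dB = manifold_geometry(B)

# Centroid separation relative to manifold spread: larger values make it
# easier for a linear readout to generalize to held-out points.
separation = np.linalg.norm(cA - cB) / np.sqrt(rA * rB)
print(f"radius A={rA:.2f}, dim A={dA:.1f}, separation={separation:.2f}")
```

Manifolds that are compact and low-dimensional relative to the distance between them are easier to separate with a linear readout; descriptors in this spirit are what link geometry to generalization.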
Research Projects
A Foundation Model for Neuroscience
Forecasting the Brain: Scalable Neural Prediction with POCO
Kempner Institute Investigator Kanaka Rajan and her team introduce POCO, a unified forecasting model trained on diverse calcium imaging datasets from multiple species, including zebrafish and mice. POCO achieves state-of-the-art accuracy by combining lightweight individual predictors with a global population encoder. It also adapts rapidly to new individuals and can uncover meaningful embeddings without supervision.
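The design pattern of a shared population encoder plus lightweight per-individual predictors can be sketched as follows. This is a minimal illustration of the general idea, not the actual POCO architecture or API; the class, its methods, and the training setup are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

class SharedEncoderForecaster:
    """Sketch of a shared-encoder forecaster in the spirit of POCO
    (names and structure are illustrative): one population encoder shared
    across individuals, plus a small per-individual linear readout that
    forecasts the next activity frame."""

    def __init__(self, n_neurons, latent_dim, lr=0.05):
        self.W_enc = rng.normal(0, 0.3, (latent_dim, n_neurons))  # shared encoder
        self.heads = {}   # individual id -> lightweight readout matrix
        self.lr = lr
        self.head_shape = (n_neurons, latent_dim)

    def predict(self, individual, x_t):
        if individual not in self.heads:
            # Adapting to a new animal only requires fitting a small head.
            self.heads[individual] = np.zeros(self.head_shape)
        z = np.tanh(self.W_enc @ x_t)          # shared population embedding
        return self.heads[individual] @ z

    def train_step(self, individual, x_t, x_next):
        pred = self.predict(individual, x_t)
        err = pred - x_next
        z = np.tanh(self.W_enc @ x_t)
        # Gradient step on this individual's head only (encoder frozen here
        # for brevity; a full model would train the encoder jointly).
        self.heads[individual] -= self.lr * np.outer(err, z)
        return float(np.mean(err ** 2))

# Toy calcium-like traces for one individual: smooth sinusoids plus noise.
T, n_neurons = 300, 20
t = np.arange(T)
freqs = rng.uniform(0.02, 0.1, n_neurons)
traces = np.sin(np.outer(t, freqs)) + 0.05 * rng.normal(size=(T, n_neurons))

model = SharedEncoderForecaster(n_neurons, latent_dim=8)
epoch_losses = []
for epoch in range(5):
    losses = [model.train_step("fish_1", traces[i], traces[i + 1])
              for i in range(T - 1)]
    epoch_losses.append(float(np.mean(losses)))
print(f"loss: epoch 0 = {epoch_losses[0]:.3f}, epoch 4 = {epoch_losses[-1]:.3f}")
```

The split matters for adaptation: the shared encoder captures population structure common across recordings, so fitting a new individual means estimating only a small readout rather than retraining the whole model.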
Biologically-grounded Models of Neural Computation
Measuring and Controlling Solution Degeneracy Across Task-Trained Recurrent Neural Networks
A Kempner team introduces a new framework for measuring and controlling solution degeneracy in task-trained recurrent neural networks. By studying more than 3,400 RNNs on four neuroscience-related tasks, they show that factors such as task difficulty, learned features, network size, and regularization influence the different internal strategies networks develop, even when they achieve the same level of performance.
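One generic way to quantify such degeneracy is to compare the hidden-state representations of networks trained on the same trials; the source does not state which metric the team uses, so the sketch below uses linear Centered Kernel Alignment (CKA), a standard representation-similarity measure, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices of
    shape (time, units). Values near 1 mean similar internal representations;
    low values across networks with equal task performance are one signature
    of solution degeneracy."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# Hidden states of three hypothetical task-trained RNNs on the same trials.
latent = rng.normal(size=(500, 5))             # shared low-dim task dynamics
net_a = latent @ rng.normal(size=(5, 64))      # same solution, different basis
net_b = latent @ rng.normal(size=(5, 64))      # same solution, different basis
net_c = rng.normal(size=(500, 64))             # a genuinely different solution

same = linear_cka(net_a, net_b)
diff = linear_cka(net_a, net_c)
print(f"CKA same-solution pair: {same:.2f}, different-solution pair: {diff:.2f}")
```

Because CKA is invariant to rotations of the unit axes, two networks that implement the same dynamics in different coordinates score high, while networks with genuinely different internal strategies score low even if their task outputs match.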
Interpretive Models of Neural Activity
Mechanistic Interpretability: A Challenge Common to Both Artificial and Biological Intelligence
Researchers at the Kempner Institute have developed a family of interpretable models to explain neural activity in structured settings.