Spring into Science 2026

All members of the Kempner community are invited to this full day of scientific exchange focused on research, conversation, and connection across the science of intelligence.

Join us for the Kempner Institute’s 2026 Spring into Science annual community retreat! The retreat brings together students, fellows, faculty, and researchers to share work spanning neuroscience, AI, machine learning, theory, engineering, and related fields.

The event features talks, poster sessions, and roundtable discussions, showcasing research from abstract theory to real-world applications while creating space for new conversations and collaborations.

Registration Deadline: Thursday, May 14, at 11:59 PM EDT


Program

9:00 AM Registration & Breakfast
9:30 AM Welcome
9:35 AM Presentation: Ella Batty, “A Cortico-Hippocampal Circuit for Visual Spatial Mapping and Goal-Directed Navigation in 3D Arenas”
10:00 AM Presentation: Sonja Johnson-Yu, “Reverse-engineering natural intelligence in weakly electric fish: a virtual laboratory approach”
10:25 AM Presentation: Shuze Liu, “Reframing problem solving via cached solutions”
10:50 AM Break
11:10 AM Presentation: Houman Safaai, “Dendritic Neural Networks for Structured Computation and Learning”
11:35 AM Presentation: William Dorrell, “An Efficient Computing Theory of Representations in Prefrontal Cortex & Recurrent Neural Networks”
12:00 PM Lunch & Discussion Tables
1:30 PM Presentation: Gabriel Poesia Reis e Silva, “Formal Disco: Open-Ended Generation of Formally Verified Programs at Scale”
1:55 PM Presentation: Ruojin Cai, “3D World Models for Understanding the Spatial and Physical World”
2:20 PM Break
2:40 PM Presentation: Nikhil Anand, “How do architectural inductive biases affect generalization?”
3:05 PM Presentation: Jaeyeon Kim, “Beyond Autoregressive: Toward Any-Order Intelligence in Generative Models”
3:30 PM Break
4:00 PM – 6:00 PM Poster Reception (refreshments will be served)


Discussion Topics

This year’s discussion tables will broadly cluster around themes that, together, reflect the Kempner community’s distinctive mix of neuroscience, cognitive science, machine learning, systems, and applied AI.
  • AI for Science beyond Disciplines: Challenges and Frontiers
  • AI Interpretability, Fairness, and Safety
  • Automating Research with Agents: Zeitgeist and Possible Futures
  • Biologically-Inspired Learning Rules: Bridging Neuroscience and AI for Enhanced Computational Methods
  • Communication in multi-agent systems
  • Dendritic Computations And Their Role In Credit Assignment
  • How can LLMs accelerate (your) research?
  • How—if at all—do we use biological brains to develop intelligent systems?
  • Human-Computer Interaction
  • Major axes of differences between AI and biological intelligence
  • Mathematical Theories in Neuroscience
  • Model-Based RL
  • Neural AI: the structural relationship between natural and AI models
  • Performance Is Relative: Efficiency Across the Cluster
  • Problem solving and reasoning in naturalistic domains: humans, LLMs, and crosstalk
  • What are commercial models missing to model more diverse human behavior?
  • What Does Scaling Mean in Your Research?
  • What should a modern training in cognitive science/neuroscience/science of machine intelligence look like?

Posters

See abstracts and poster numbers here! Poster presenters, please refer to the list below for updated poster numbers.

  • James Arnold, “VLM-Personalized Smartwatch Exercise Recognition for Home Stroke Rehabilitation”
  • Alexander Cai, “Grammar learning trajectories of bilingual language models”
  • George Cai, “Error-driven representation learning in the mesolimbic system”
  • Colton Casto, “The cerebellum plays a privileged role during language development”
  • Hamza Chaudhry, “Adaptive Systems for Scientific Discovery: Closed-Loop Biological Sequence Optimization via Calibrated Uncertainty Quantification”
  • Audrey Cherilyn, “Supernodes and Halos: Loss-Critical Hubs in LLM Feed-Forward Layers”
  • Chi-Ning Chou, “Diagnosing Generalization Failures from Representational Geometry Markers”
  • Shubham Choudhary, “A Normative Theory of Spike-Based Computation via Piecewise Deterministic Markov Processes”
  • Haylin Diaz, “How do LLMs Reinforce Self-Isolating Beliefs in Support Conversations?”
  • William Dorrell, “How Optimality Structures Sparse Dictionaries: A Theory for Understanding SAE Representations”
  • Yu Duan, “From forecasting to causal prediction in data-driven models of neural population dynamics”
  • Ada Fang, “Long Running AI Agents for Scientific Research”
  • Juan Carlos Fernandez del Castillo, “Information processing in early olfactory circuits”
  • Emma Finn, “A Wavelet View of Diffusion”
  • Nic Fishman, “Generative Modeling on the Space of Distributions”
  • Xiaomeng Han, “From Synapses to Intelligence: Internal-State Computation in Hypothalamic Social Behavior Circuits”
  • Helen He, “Murals in Motion: Reimagining Medieval Chinese Dance from Poses in Mural Paintings”
  • Sumedh Hindupur, “Geometry as Algorithmic Fingerprint: Helical Number Embeddings emerge in the Running Maximum Task”
  • Ann Huang, “InputDSA: Demixing then comparing recurrent and externally driven dynamics”
  • Hammad Izhar, “Remote Tracking and Autonomous Planning for Whale Rendezvous using Robots”
  • Sonja Johnson-Yu, “Active Electrosensing and Communication in MARL-trained Weakly Electric Fish Collectives”
  • Cristine Kalinski, “Generative models reveal temporal coding in primate visual cortex”
  • Sara Kangaslahti, “Boomerang Distillation Enables Zero-Shot Model Size Interpolation”
  • Amani Kiruga, “Unified Generative-Predictive Modeling for 4D Scene Understanding”
  • Mujin Kwun, “Matching Features, Not Tokens: Energy-Based Fine-Tuning of Language Models”
  • Ava Lakmazaheri, “Inferring Motor Control Objectives from Human Locomotion”
  • Mary Letey, “Sequential correlations and in-context learning”
  • Karen Li, “Motion-Uncertainty-Aware Next-Best-View Planning for Wild Animal Reconstruction”
  • Victoria Li, “Do Vision-Language Models See Charts Like Humans Do? Behavioral and Attentional Evidence”
  • Shijia Liu, “Time to Stop: Neural Mechanisms of Self-Generated Stopping Decisions”
  • Shuze Liu, “Reframing problem solving via cached solutions”
  • Tianhao Luo, “PATCH: Panel-Aware Hierarchical Conformal Cell Typing for Spatial Proteomics under Marker-Panel Shift”
  • Arnau Marin-Llobet, “The parameters in sparse-weight transformers are interpretable”
  • Ammar Marvi, “Representational dynamics in inferotemporal cortex depend on image manifold scale”
  • Cameron McNamee, “Feature Structure and control: a network science perspective for neural network steering”
  • Clara Mohri, “RL Excursions during Pre-training: How Early is Too Early for On-policy Learning?”
  • May Ng, “A Machine Learning Pipeline for Classifying Synaptic Inputs Underlying Hypothalamic Circuit Computation”
  • Philip Nielsen, “MALA-Guided Mirror Descent”
  • Mohammed Osman, “Modeling cortical resilience to cell loss using Hebbian/anti-Hebbian networks”
  • Jorin Overwiening, “The Artificial Thalamus: State-Dependent Attention Routing in Large Language Models”
  • William (Billy) Qian, “Escaping the simplicity bias in recurrent neural networks”
  • Giordano Ramos-Traslosheros, “Mapping the substructure of visual computations to bridge the perceptual interpretability gap”
  • Daniel Ritter, “LLMs can learn to reason from off-policy data”
  • Kyran Romero, “Tailoring AI Assistance to Individual Cognitive Differences in a Visual Classification Task”
  • Itai Shapira, “How RLHF Amplifies Sycophancy”
  • Josh Stern, “A retinotopic scaffold shapes spontaneous visual cortex activity prior to hippocampal ripple onset”
  • Marina Tiuleneva, “Macaque Gaze Behavior Reflects Sensitivity to Local Visual Structure”
  • William Tong, “Boule or Baguette? A Study on Task Topology, Length Generalization, and the Benefit of Reasoning Traces”
  • Áron Vékássy and Thomas Kaminsky, “Safe Multi-Robot Communication With Uniform Exclusion Guarantees Using Trust”
  • Binxu Wang, “A Regression Theory of dissociation between brain prediction and control”
  • Yangdong Wang, “Compartment-Specific Rules for Dendritic Excitation in L5 neurons”
  • Jiageng Wu, “EHR-Gemma: Training Large Language Models on Large-Scale Real-World Electronic Health Records”
  • Ningjing Xia, “Occam’s razor in the mouse brain”
  • Guowei Xu, “Sparse Reward Subsystem in Large Language Models”
  • Lance Ying, “CogGym: Towards Large-Scale Comparative Evaluation of Human and Machine Cognition”
  • Charles Zhang, “MIMIC-MJX: Neuromechanical Emulation of Animal Behavior”
  • Yuyang Zhang, “Decentralized Diffusion Policies for Improved Exploration in Multi-agent Reinforcement Learning”
  • Minda Zhao, “Large Language Models Are Bad Dice Players: LLMs Struggle to Generate Random Numbers from Statistical Distributions”