Kempner Seminar Series

The Kempner Seminar Series brings leading intelligence researchers to Harvard to discuss their latest research and findings.

About

Join us for our research-level seminar series exploring recent advances in the field of intelligence. Seminars are held in person on Fridays at Harvard’s Science and Engineering Complex in Allston, MA. The talks in this series are free and open to the public.

To explore upcoming talks, visit our event calendar.

Explore the series

Most of our Kempner Seminar Series talks are recorded and available on our YouTube channel.

Recorded talks

Past speakers

  • Alison Gopnik: Transmission Versus Truth: AI Models as Cultural Technologies and Epistemic Agents
  • Brian DePasquale: Constructing Spiking Networks as a Foundation for the Theory of Manifolds as Computational Substrate
  • Kim Stachenfeld: Structured Predictive Models for Neuroscience + AI
  • Stefano Ermon: Score Entropy Discrete Diffusion Models
  • Andrea Montanari: Solving Overparametrized Systems of Nonlinear Equations
  • Tom Griffiths: Using the Tools of Cognitive Science to Understand the Behavior of Large Language Models
  • Larry Abbott: Modeling the Navigational Circuitry of the Fly
  • Rajesh Rao: Active Predictive Coding: A Sensory-Motor Theory of the Neocortex and a Unifying Framework for AI
  • Noam Brown: CICERO: Human-Level Performance in the Game of Diplomacy by Combining Language Models with Strategic Reasoning
  • Carsen Stringer: Unsupervised Pretraining in Biological Neural Networks
  • Emmanuel Abbe: Logic Reasoning and Generalization on the Unseen
  • Denny Zhou: Teach Language Models to Reason
  • Tom Goldstein: Dataset Security Issues in Generative AI
  • Yann LeCun: Towards Machines That Can Learn, Reason and Plan
  • Timothy Lillicrap: Model-Based Reinforcement Learning and the Future of Language Models
  • Yejin Choi: Common Sense: The Dark Matter of Language Intelligence
  • Jamie Morgenstern: Shifts in Distributions and Preferences in Learning
  • Uri Hasson: Deep Language Models as a Cognitive Model for NLP in the Human Brain
  • Sébastien Bubeck: First Contact
  • Ludwig Schmidt: A Data-Centric View on Reliable Generalization: From ImageNet to LAION-5B
  • Chelsea Finn: Neural Networks Make Stuff Up. What Should We Do About It?
  • Roger Grosse: Studying Neural Net Generalization through Influence Functions
  • Matthieu Wyart: Data Structure and Curse of Dimensionality in Deep Learning