Research Fellow Candidate Presentations

Date: Thursday, November 21, 2024
Time: 9:00am - 12:00pm
Location: Kempner Large Conference Room (SEC 6.242)

Please hold this time for presentations from our 2025-2026 Research Fellow candidates.

9:00-9:45am – Andrew Campbell, University of Oxford

Talk Title: Generative Models for Generic Data

  • Data in the real world often has complex structure that makes it difficult to apply standard generative modelling paradigms such as diffusion models. In this talk, I will cover my recent work expanding the diffusion methodology to a wide range of data types such as discrete data, trans-dimensional data and multi-modal data. I will focus on generative modelling for multi-modal discrete-continuous data using generative flows with particular application to protein structure-sequence co-generation.

9:45-10:30am – Elom Amemastro, Columbia University

Talk Title: Hierarchical Priors for Continual Learning of Motor Skills

  • Humans have an exceptional ability to learn many skills, enabling them to tackle new challenges with remarkable flexibility. In this talk, I will discuss recent experimental work uncovering the neural basis of flexible motor skill execution, revealing that these skills are not random but are hierarchically organized, where simple actions, or subgoals, combine to achieve complex objectives. Further, I will discuss ongoing work on how to endow artificial networks with similar abilities.

10:30-11:15am – Eli Sennesh, Vanderbilt University

Talk Title: Scaling NeuroAI to Cog Sci

  • Many cognitive scientists focus on explaining intelligent behavior computationally, but neuroscience suggests that the brain itself still has lessons to teach in scalability and flexibility. I will describe my efforts, both experimental and computational, to bridge brain and cognition in terms of predictive coding, a popular theoretical framework in neuroscience. Looking forward, I will suggest how my work also points towards scalable modeling of ad-hoc concepts, a ubiquitous cognitive phenomenon not yet captured (to my knowledge) computationally.

11:15am-12:00pm – Gabriel Poesia, Stanford University

Talk Title: Learning Formal Reasoning from Intrinsic Motivation

  • Formal systems such as type theories and logics can encode an extremely wide range of important reasoning domains, including mathematics and program verification. While solving general problems in these systems is computationally undecidable, humans rely on their ability to learn from experience, adapt to particular domains, and create increasingly higher-level abstractions over time. I will show how learning formal mathematics and program verification can be operationalized in an open-ended setting, with no a priori goals (theorems or programs to verify) given; agents nonetheless improve at proving theorems and verifying programs of interest even when those were not specifically targeted during training.

Moderator: Bernardo Sabatini