T. Anderson Keller

Kempner Research Fellow

Preferred Pronouns: He/Him

I speak: English, French


About

Though his first name is Thomas, most people know T. Anderson Keller by his middle name, “Andy”. He is a machine learning researcher who began studying computer science at Caltech and UCSD, worked on deep learning research at Intel Nervana, and most recently completed his PhD at the University of Amsterdam under the supervision of Max Welling. During his PhD, Keller became fascinated with the brain and how its physical organization might shape computation. At the same time, he became interested in the question of how abstract computational systems might represent the complex, ever-changing world in structured ways. As a result, his studies naturally gravitated toward the intersection of these two ideas, largely through building new deep neural network models to test long-standing hypotheses from neuroscience relating structure to function. Keller’s work at the Kempner Institute for the Study of Natural and Artificial Intelligence is a direct continuation of these ideas, with the ultimate goal of developing the next generation of artificial neural networks to help us understand the brain in new and meaningful ways.

Research Focus

Keller’s current research focuses on structured representation learning, probabilistic generative modeling, and biologically plausible learning. His research explores ways to develop deep probabilistic generative models that are meaningfully structured with respect to observed, real-world transformations. Such structure permits both improved generalization in previously unobserved settings and reduced sample complexity on natural tasks, thereby addressing two fundamental limitations of modern deep neural networks. The approaches Keller has taken to develop such structured representation learning algorithms are directly motivated by observations from neuroscience, such as topographic organization and cortical traveling waves, and are further reinforced by ideas from machine learning and cognitive theory, such as equivariance, optimal transport, and intuitive physics. In the long term, Keller aims to understand the abstract mechanisms underlying the apparent sample efficiency and generalizability of natural intelligence, and to integrate these mechanisms into artificially intelligent systems. In the short term, he hopes to answer how transformations and invariances are learned and encoded in the brain, what inductive biases underlie our natural abilities, and how the two-dimensional structure of the cortical surface shapes learning.