Kempner Institute Celebrates the Innovative Work of Research Fellows Jennifer Hu and Isabel Papadimitriou
The institute bids farewell to two research fellows as they move on to faculty positions for the next academic year

Isabel Papadimitriou (left) and Jennifer Hu both pursue research that advances the field of intelligence by investigating large language models (LLMs), the computer models at the heart of the ongoing revolution in generative AI.
Photo credit: Anthony Tulliani
As members of the first two cohorts of research fellows at the Kempner Institute, Jennifer Hu and Isabel Papadimitriou both arrived at Harvard to pursue research that advances the field of intelligence by investigating large language models (LLMs), the computer models at the heart of the ongoing revolution in generative AI. LLMs can generate text and computer code that is almost indistinguishable from what is produced by humans.
Hu and Papadimitriou both research aspects of how LLMs work, as well as how they might shed light on human cognition. Their research projects at the Kempner have led to important insights, including how both LLMs and humans can, at times, produce intuitive yet incorrect answers to questions, as well as how LLMs can be expanded to incorporate more kinds of input, such as visual data.
Jennifer Hu: Peering into “black boxes” of LLMs and human minds
Hu, who will join the Department of Cognitive Science at Johns Hopkins University as an assistant professor in July, undertakes research that uses theories of human minds to study AI models.
“I’m trying to understand how AI models work using what we know about our own minds, and then also in the other direction, using AI models to test new theories about how human minds work,” she says.
In a recent project at the Kempner, Hu investigated whether tasks that are hard for human beings are also hard for LLMs. In particular, humans are sometimes tricked by questions that have an intuitive but incorrect answer, and Hu wanted to learn whether models are also fooled by these kinds of questions.
“Let’s say I ask a human what the capital of Illinois is,” explains Hu. “They might immediately think of Chicago because it’s the most famous and populous city. But it’s not the correct answer, which is Springfield.”
Studying how humans answer such questions can reveal a lot about the link between the way people produce an answer and the accuracy of that answer. For instance, a person’s reaction time can provide clues about how confident they are in the answer they give, and overconfidence can lead to mistakes.
With questions that have an intuitive but wrong answer, a human who is overconfident might go with their gut, answering quickly but incorrectly. A human who is asked to think carefully might consider the wrong-but-intuitive answer (Chicago) for a moment before settling on the correct one (Springfield). In other words, answering a tricky question correctly can take a little longer because the intuitive answer needs to be suppressed, and that extra time is a clue that the person is overriding a gut instinct that might otherwise lead them astray.
Hu and her collaborators have tested LLMs on datasets involving questions that might elicit wrong but plausible answers. They have found patterns in LLMs that are roughly analogous to reaction times in humans.
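To give a flavor of what such an analysis can look like, here is a minimal, purely illustrative sketch, not the team’s actual code or method. It applies a “logit lens”-style probe to a model’s intermediate layers to see at which depth the intuitive answer (“Chicago”) gives way to the correct one (“Springfield”); the model name, prompt, and probing approach are all assumptions made for the sketch.

```python
# Illustrative sketch (assumptions: a GPT-2-style HuggingFace causal LM whose
# final layer norm is `transformer.ln_f`; prompt and candidate answers are
# examples only). Projects each layer's last-position state through the LM
# head ("logit lens") and tracks the probability of two candidate answers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any GPT-2-family model would do
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

prompt = "Q: What is the capital of Illinois? A:"
# First subword token of each candidate answer.
intuitive_id = tok(" Chicago", add_special_tokens=False).input_ids[0]
correct_id = tok(" Springfield", add_special_tokens=False).input_ids[0]

with torch.no_grad():
    out = model(**tok(prompt, return_tensors="pt"))

for layer, h in enumerate(out.hidden_states):
    # Apply the final layer norm and LM head to this layer's last-token state.
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    probs = torch.softmax(logits, dim=-1)
    print(f"layer {layer:2d}: "
          f"P(' Chicago')={probs[intuitive_id].item():.4f}  "
          f"P(' Springfield')={probs[correct_id].item():.4f}")
```

In a probe like this, the layer at which the correct answer overtakes the intuitive one is loosely analogous to how long a person deliberates before suppressing their gut response.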
These patterns shed light on the inner workings of LLMs, which are often described as “black boxes,” since scientists don’t fully understand why they work the way they do. However, the patterns Hu is discovering also have the potential for important practical applications.
“If you can establish a tight link between model processing difficulties and human processing difficulties, then you can use the models to make new predictions about what kinds of things will be hard or easy for humans,” says Hu. In other words, patterns learned from LLM analysis could shed light on another black box: the human mind.
Going forward, in her new faculty role and as the director of the Group for Language and Intelligence at Johns Hopkins, Hu plans to build upon the insights from her work at the Kempner and delve deeper into the complex black boxes that are AI models and human minds.
And for aspiring researchers in the field of intelligence, Hu has some words of advice that reflect the important role that her time at the Kempner has played in advancing her research.
“The pace of research in AI and related fields is so fast,” she says. “There’s so much pressure. But if you get a Kempner research fellowship you’re not tied to a specific project. So take advantage of that freedom!”
Isabel Papadimitriou: Expanding the horizons of language research
Isabel Papadimitriou, who will join the Department of Linguistics at the University of British Columbia as an assistant professor in September, investigates LLMs and natural language processing, specifically researching how LLMs develop certain linguistic abilities.
“What I do is focus on two aspects of LLM research,” says Papadimitriou. “One looks at how language models organize their internal structure to encode linguistic variables that we’re interested in. The other takes a more interventionist approach, enabling us to identify how LLMs acquire particular linguistic capabilities by disrupting different aspects of their training.”
LLMs acquire language in very different ways from human beings. Most LLMs learn only through exposure to text, working out what a word or phrase means by discovering statistical patterns in vast corpora of text. Humans, by contrast, can “ground” the meanings of words and concepts in other information, such as sensory data. For example, when a human child first hears the word “chair,” they are normally shown at least one chair. This allows the sound of the word “chair” to be grounded in the visual experience of seeing a chair.
Papadimitriou and her collaborators study how a type of grounding occurs in machine learning models that accept additional kinds of input beyond text, such as images or audio. In a recent study, they investigated vision-language models (VLMs), which can process images in addition to text. These models are called “multimodal” because they involve two or more different “modalities” of information, as opposed to “unimodal” models that process a single modality, such as text in the case of standard LLMs.
The study looked into whether a VLM trained on both visual and textual data came up with a unified multimodal encoding of a concept like “chair,” or separate encodings for each modality: one encoding for pictures of chairs and a separate encoding for text about chairs.
Papadimitriou and her team found evidence for the latter hypothesis: vision and text were encoded in distinct internal structures. However, they also discovered that the VLM created “bridges” linking the different encodings. This insight could make it possible to merge separate unimodal AI models efficiently, rather than training multimodal models from scratch.
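For readers curious what comparing modalities can look like in practice, here is a minimal, purely illustrative sketch. It uses CLIP, a different, contrastively trained vision-language model, as a stand-in rather than the study’s own models: each modality has its own encoder, and projection heads map both into a shared space where image and text representations of a concept can be compared. The model name, image URL, and text prompts are all assumptions for the sketch.

```python
# Illustrative sketch only (not the study's code or models): CLIP keeps
# separate image and text encoders, but its projection heads place both
# modalities in one shared embedding space, a rough analogue of a "bridge"
# between modality-specific encodings.
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image (a widely used COCO example photo); substitute any image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

texts = ["a photo of a cat", "a photo of a chair"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)

# Normalize the modality-specific embeddings, then compare them in the shared space.
img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
sims = (img @ txt.T).squeeze(0)
for text, sim in zip(texts, sims.tolist()):
    print(f"similarity(image, {text!r}) = {sim:.3f}")
```

The point of the sketch is simply that the image and text pathways remain separate until a learned mapping relates them, which is the spirit of the “bridges” the study describes, though the mechanisms inside the VLMs studied are different.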
Beyond specific projects, Papadimitriou sees LLMs as a way to expand the horizons of language research. Linguistics has traditionally relied on conceptual analysis of small amounts of human data, leading to theories that may be logically coherent but cannot themselves generate natural language. Now, LLMs can serve as testing grounds for new kinds of hypotheses about language acquisition and the processes underlying it.
“Linguists aren’t used to having such strong computational models of language,” says Papadimitriou. This has changed with LLMs. Researchers like Papadimitriou can intervene during LLM training or during the language generation process to determine the causal factors that are most important for acquisition and performance. Intervening in analogous ways in a human brain is virtually impossible and would in any case be unethical.
Papadimitriou says this kind of ability also encourages language researchers to “expand their hypothesis-space” by formulating and testing hypotheses about language that were hard to even imagine before the arrival of LLMs.
The Kempner Institute: Driving innovation and collaboration
Hu and Papadimitriou describe the Kempner as a unique research environment that has been incredibly valuable for intellectual collaboration and scientific advancement.
“It’s a great research community,” says Papadimitriou. “People from so many different fields are thinking about language models — it’s good to have that interaction.”
Hu agrees. For her, the Kempner has offered the time and freedom to pursue ambitious lines of research, and, in a larger sense, has expanded her sense of what is possible. “I really feel that the Kempner has helped me take a step back and build collaborations that I wouldn’t have normally thought of doing,” she said. “That’s just been an amazing experience.”
Interested in the Kempner Research Fellowship?
The Kempner Institute will be recruiting a new class of research fellows this fall. Applications open in late summer 2025. Find out more here: Kempner Research Fellowship.
Interested in learning more about the research above?
- Read the recent Deeper Learning blog post about the work of Papadimitriou and her collaborators on VLMs: Interpreting the Linear Structure of Vision-Language Model Embedding Spaces.
- Read the preprint by Hu and collaborators comparing human real-time processing with LLM processing: Signatures of human-like processing in Transformer forward passes.
About the Kempner Institute
The Kempner Institute seeks to understand the basis of intelligence in natural and artificial systems by recruiting and training future generations of researchers to study intelligence from biological, cognitive, engineering, and computational perspectives. Its bold premise is that the fields of natural and artificial intelligence are intimately interconnected: the next generation of artificial intelligence (AI) will require the same principles that our brains use for fast, flexible natural reasoning, and how our brains compute and reason can be elucidated by theories developed for AI. Join the Kempner mailing list to learn more, and to receive updates and news.
PRESS CONTACT:
Deborah Apsel Lang | (617) 495-7993