Beyond Scaling: Frontiers of Retrieval-Augmented Language Models

Akari Asai

Date: Thursday, March 27, 2025 Time: 2:30 - 3:30pm

Abstract:  Large Language Models (LMs) have achieved remarkable progress by scaling training data and model sizes. However, they continue to face critical limitations, including hallucinations and outdated knowledge, which hinder their reliability—especially in expert domains such as scientific research and software development. In this talk, I will argue that addressing these challenges requires moving beyond monolithic LMs and toward Augmented LMs—a new AI paradigm that designs, trains, and deploys LMs alongside complementary modules to enhance reliability and efficiency. Focusing on my research on Retrieval-Augmented LMs, one of the most impactful and widely adopted forms of Augmented LMs today, I will begin by presenting systematic analyses of current LM shortcomings and demonstrating how retrieval augmentation offers a more scalable and effective path forward. I will then discuss my work on establishing new foundations for these systems, including novel training approaches and retrieval mechanisms that enable LMs to dynamically adapt to diverse inputs. Finally, I will showcase the real-world impact of such models through OpenScholar, our fully open Retrieval-Augmented LM for assisting scientists in synthesizing literature—now used by over 30,000 researchers and practitioners worldwide. I will conclude by outlining my vision for the future of Augmented LMs, emphasizing advancements in abilities to handle heterogeneous modalities, more efficient and flexible integration with diverse components, and rigorous evaluation through interdisciplinary collaboration.

Speaker Bio: Akari Asai is a Ph.D. candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research focuses on overcoming the limitations of large language models (LMs) by developing advanced systems, such as Retrieval-Augmented LMs, and applying them to real-world challenges, including scientific research and underrepresented languages. Her contributions have been widely recognized, earning multiple paper awards at top NLP and ML conferences, the IBM Global Fellowship, and industry grants. She was also named an EECS Rising Star (2022) and one of MIT Technology Review’s Innovators Under 35 Japan. Her work has been featured in outlets such as Forbes and MIT Technology Review. Beyond her research, Akari actively contributes to the NLP and ML communities as a co-organizer of high-impact tutorials and workshops, including the first tutorial on Retrieval-Augmented LMs at ACL 2023, as well as workshops on Multilingual Information Access (NAACL 2022) and Knowledge-Augmented NLP (NAACL 2025).

View this event on the Harvard SEAS website.