24 March 2025
TxAgent: An AI Agent for Therapeutic Reasoning Across a Universe of 211 Tools
A smarter way to navigate complex drug decisions
By: Shanghua Gao and Marinka Zitnik
The authors introduce TxAgent, a first-of-its-kind AI agent for therapeutic reasoning across a universe of 211 tools, and compare it against DeepSeek-R1 671B.
20 March 2025
Archetypal SAEs: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models
By: Thomas Fel*, Ekdeep Singh Lubana*, Jacob S. Prince, Matthew Kowal, Victor Boutin, Isabel Papadimitriou, Binxu Wang, Martin Wattenberg, Demba Ba, Talia Konkle (*denotes equal contribution)
The authors find that Archetypal SAEs anchor concepts in the convex hull of the real data and deliver consistent, stable dictionaries.
10 March 2025
Traveling Waves Integrate Spatial Information Through Time
By: Mozes Jacobs, Roberto Budzinski, Lyle Muller, Demba Ba, and T. Anderson Keller
Using recurrent neural networks with constrained local connectivity, trained to solve tasks that require integrating global information, the authors find that neurons learn to encode and transmit information to spatially distant neurons through traveling waves.
10 February 2025
Alignment Reduces Conceptual Diversity of Language Models
By: Sonia Murthy, Tomer Ullman, and Jennifer Hu
The authors introduce a new way of measuring the conceptual diversity of synthetically generated LLM “populations” and use it to investigate whether LLMs capture the conceptual diversity of human populations.
19 December 2024
ProCyon: A Multimodal Foundation Model for Protein Phenotypes
By: Owen Queen, Robert Calef, and Marinka Zitnik
The authors introduce ProCyon, a multimodal foundation model for modeling, generating, and predicting protein phenotypes.
9 December 2024
Loss-to-Loss Prediction: Scaling Laws for All Datasets
By: David Brandfonbrener, Nikhil Anand, Nikhil Vyas, Eran Malach, and Sham Kakade
The authors develop a method to predict how large language models scale with compute across different datasets, enabling more efficient training and better understanding of data-compute tradeoffs.
22 November 2024
How Does Critical Batch Size Scale in Pre-training? (Decoupling Data and Model Size)
By: Hanlin Zhang, Depen Morwani, Nikhil Vyas, Udaya Ghai, Jingfeng Wu, and Difan Zou
The authors empirically show that the critical batch size for pre-training scales with data size rather than model size, and provide theoretical justification for this finding.
28 October 2024
Mixture of Parrots 🦜🦜🦜: Experts Improve Memorization More Than Reasoning
By: Samy Jelassi and Eran Malach
The authors demonstrate, through theory and experiments, that in Mixture-of-Experts (MoE) models, experts improve memorization more than reasoning.
25 September 2024
Contrastive Learning Explains the Emergence and Function of Visual Category Selectivity
By: Jacob Prince, George Alvarez, and Talia Konkle
The authors introduce an updated framework for understanding visual object recognition and category selectivity: contrastive coding.
16 August 2024
Context Matters for Foundation Models in Biology
By: Michelle M. Li and Marinka Zitnik
The authors introduce PINNACLE, a contextual AI model for single-cell protein biology that supports a broad array of biomedical AI tasks by tailoring its outputs to the cell-type context in which the model makes predictions.