Towards Universality in Representation Learning
Marco Fumero, Institute of Science and Technology Austria (ISTA)
Designing robust and interpretable learning systems naturally suggests models whose internal representations reflect the underlying structure of the observed world. Yet machine learning models often capture spurious correlations tied to specific datasets or training conditions, and their behavior can shift unpredictably when those conditions change. This raises fundamental questions about how different learning processes organize information and what kinds of structure support robust and transferable representations.
In this talk, I will present a geometric perspective on representation learning that links causal factors, invariances, and the observation that distinct learning processes, across architectures, modalities, and tasks, tend to produce related geometric structures. I will show how these latent geometries can be characterized and aligned, enabling meaningful comparison and transfer of information across models and transforming the rapidly expanding landscape of pretrained networks into a resource for reusable inductive biases. Building on this foundation, I will outline a broader research direction centered on unifying causal learning and representation alignment. This perspective opens the door to learning systems that are more reliable, interpretable, and adaptable, and that offer new opportunities for accelerating discovery in complex, multi-modal scientific domains.
