Ludwig Schläfli lecture 2025

Monday, November 3, 2025, 17:15
University of Bern, venue TBA.


Kathlén Kohn
KTH
Algebraic Geometry of Neural Networks

The space of functions parametrized by a fixed neural network architecture is known as its "neuromanifold", a term coined by Amari. Training the network means solving an optimization problem over the neuromanifold, so a complete understanding of its intricate geometry would shed light on the mysteries of deep learning. This talk explores the approach of approximating neural networks by algebraic ones, which have semialgebraic neuromanifolds. Such an approximation is possible for any continuous network on a compact data domain. By the universal approximation theorem, algebraic neural networks are essentially the only ones whose neuromanifolds span finite-dimensional ambient spaces. In this setting, we can interpret training the network as finding a "closest" point on the neuromanifold to some data point in the ambient space. This perspective enables us to better understand the loss landscape, i.e., the graph of the loss function over the neuromanifold. In particular, the singularities (and boundary points) of the neuromanifold can cause a tradeoff between efficient optimization and good generalization: on the one hand, singularities can yield numerical instability and slow down the learning process (as already observed by Amari); on the other hand, we will observe how the same singularities cause an implicit bias towards stable and sparse solutions. Computing the singularities is often a technical endeavor and requires us to determine both the hidden parameter symmetries of the network and the critical points of the network's parametrization map. In this talk, we will carefully compare three popular architectures: multilayer perceptrons, convolutional networks, and self-attention networks. The results presented in this talk are based on several joint works with Nathan Henry, Giovanni Marchetti, Stefano Mereta, Vahid Shahverdi, and Matthew Trager.
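As a rough illustration of the "closest point" perspective mentioned in the abstract (a minimal sketch, not part of the official abstract; the symbols Phi, Theta and f_data, as well as the squared-error loss, are chosen here only for illustration):

\[
  \min_{\theta \in \Theta} \bigl\| \Phi(\theta) - f_{\mathrm{data}} \bigr\|^{2}
  \;=\;
  \min_{f \in \mathcal{M}} \bigl\| f - f_{\mathrm{data}} \bigr\|^{2},
  \qquad
  \mathcal{M} = \Phi(\Theta),
\]

where \(\Phi\) denotes the network's parametrization map from the parameter space \(\Theta\) into a finite-dimensional ambient function space, \(\mathcal{M}\) is the neuromanifold (the image of \(\Phi\)), and \(f_{\mathrm{data}}\) is the data point to be approximated; the equality holds simply because minimizing over parameters is the same as minimizing over the image of the parametrization.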

The Ludwig Schläfli lecture has taken place every two years since 2006 (currently in odd years) at the University of Bern; it is a joint colloquium with the University of Fribourg, which organizes the Plancherel Lectures in the even years. (Due to the corona pandemic, even and odd years have swapped in the past.) The lecture is named after Ludwig Schläfli, who was a member of the mathematics department of the University of Bern between 1848 and 1891.

Directions to the lecture hall: TBA

For any additional information, please e-mail Jan Draisma.

Past Lectures:
2024: Günter M. Ziegler (Freie Universität Berlin)
(The 2023 Schläfli lecture was absorbed into the Einstein Lectures by Maryna Viazovska.)
2021: Joshua E. Greene (Boston College)
2018: Tamar Ziegler (Hebrew University)
(The 2016 Schläfli lecture was absorbed into the Einstein Lectures by Martin Hairer.)
2014: Gérard Besson (Grenoble)
2012: Albrecht Böttcher (Chemnitz)
2010: Bart M. ter Haar Romeny (Eindhoven)
2008: Günter M. Ziegler (TU Berlin)
2006: Oleg Viro (Uppsala)
Design by Emanuele Delucchi