Research Talks at the Zuse Institute Berlin
Welcome to the ZIB's Research Seminars and Colloquia page. Explore our schedule of research talks in mathematics and computer science. We currently host the following series at our institute:
- The Zuse Research Colloquium, a series of talks by high-profile invited speakers from academia and industry.
- The Zuse Research Seminar, a series of talks by researchers from the Zuse Institute Berlin and select external speakers from the Berlin math community.
- The IOL Seminar and Lecture Series, the research seminar of the IOL research lab of Sebastian Pokutta at ZIB and TU Berlin.
For questions relating to an individual seminar or colloquium, please contact the organizers of that series. For questions relating to this homepage, please contact Christoph Spiegel. The ZIB also hosts an overview of all mathematical research seminars happening in and around Berlin at seminars.zib.de.
Upcoming Talks
In this talk, I present a unified variational framework for modeling population dynamics from sparse, unaligned snapshots. I begin with the classical static Monge problem and its dynamic reformulation by Benamou and Brenier, showing how minimal-energy velocity fields connect two marginal distributions. Building on this foundation, I introduce a general Lagrangian cost to enable richer action modeling and add an entropy regularizer to capture diffusion effects. By leveraging the smoothness and convexity of the Lagrangian, I derive a dual formulation that reveals the coupled Hamilton-Jacobi-Bellman (HJB)/Fokker-Planck (FP) PDE system as an alternative perspective. Next, I survey four machine-learning methods—NLSB (Neural Lagrangian Schrödinger Bridge), DeepGSB (Deep Generalized Schrödinger Bridge), Action Matching, and GSB-Matching—and compare them in terms of convergence guarantees, supported cost functions, and computational scalability. Finally, I provide practical guidance on selecting and applying the right approach for real-world population-inference tasks.
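The entropy-regularized static problem underlying this framework can be illustrated in a few lines of NumPy. The following Sinkhorn iteration is a standard solver for entropic optimal transport between two discrete marginals (it is not one of the surveyed methods, and all grid sizes and parameter values below are illustrative):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=500):
    """Entropic OT between discrete marginals mu, nu with cost matrix C."""
    K = np.exp(-C / eps)                  # Gibbs kernel from the regularized dual
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)                # alternately rescale to match each marginal
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]    # transport plan P = diag(u) K diag(v)

# two Gaussian-like marginals on a 1-D grid
x = np.linspace(0.0, 1.0, 50)
mu = np.exp(-((x - 0.2) ** 2) / 0.01); mu /= mu.sum()
nu = np.exp(-((x - 0.8) ** 2) / 0.01); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2        # squared-distance cost

P = sinkhorn(mu, nu, C)
print(np.allclose(P.sum(axis=1), mu))     # the plan reproduces the first marginal
```

As the regularization parameter `eps` shrinks, the plan approaches the unregularized optimal transport coupling; larger `eps` corresponds to stronger diffusion in the dynamic picture.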
Generative modeling via measure transport can be effectively understood through the lens of dynamical systems that describe the evolution from a prior to a prescribed target measure. Specifically, this involves deterministic or stochastic evolutions described by ODEs or SDEs, respectively, which are to be learned in such a way that the respective process is distributed according to the target measure at terminal time. In this talk, we show that this principled framework naturally leads to underlying PDEs connected to the density evolution of the processes. On the computational side, those PDEs can then be approached via variational techniques such as backward SDEs (BSDEs) or physics-informed neural networks (PINNs). Using the former, we can draw connections to optimal control theory and recover trajectory-based sampling methods such as diffusion models or Schrödinger bridges, however without relying on the concept of time reversal. PINNs, on the other hand, offer the appealing numerical property that no trajectories need to be simulated and no time discretization has to be considered, leading to efficient training and better mode coverage in the sampling task. We investigate different learning strategies (admitting either unique or infinitely many solutions) on multiple high-dimensional multimodal examples.
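As a toy illustration of the SDE viewpoint (not the learned dynamics from the talk), the following Euler-Maruyama simulation pushes samples from a prior through an Ornstein-Uhlenbeck drift whose invariant measure is the standard Gaussian; by terminal time the particle cloud is approximately distributed according to that target. The drift, time horizon, and prior are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(x):
    return -x              # Ornstein-Uhlenbeck drift; invariant law is N(0, 1)

T, n_steps, n_particles = 5.0, 500, 20000
dt = T / n_steps
sigma = np.sqrt(2.0)       # diffusion coefficient matching the N(0, 1) target

x = rng.normal(3.0, 0.5, size=n_particles)   # prior measure, far from the target
for _ in range(n_steps):                     # Euler-Maruyama discretization of the SDE
    x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)

print(x.mean(), x.var())   # approximately 0 and 1 at terminal time
```

In the learned setting discussed in the talk, the drift above would be a neural network trained so that the terminal law matches a target known only through samples or an unnormalized density.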
We tackle the Optimal Experiment Design Problem, which consists of choosing experiments to run or observations to select from a finite set in order to estimate the parameters of a system. The objective is to maximize some measure of information gained about the system from the observations, leading to a convex integer optimization problem. We leverage Boscia.jl, a recent algorithmic framework based on a nonlinear branch-and-bound algorithm whose node relaxations are solved to approximate optimality using Frank-Wolfe algorithms. One particular advantage of the method is its efficient use of the polytope formed by the original constraints, which is preserved by the method, unlike alternative approaches relying on epigraph-based formulations. We assess our method against both generic and specialized convex mixed-integer approaches. Computational results highlight the performance of our proposed method, especially on large and challenging instances.
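The Frank-Wolfe subroutine used inside such node relaxations can be sketched on the continuous relaxation of the D-optimality criterion. The following minimal NumPy example (illustrative only, not Boscia.jl itself) maximizes log det of the information matrix over the probability simplex; the linear minimization oracle reduces to picking the best simplex vertex, and the step size uses the exact line search for the D-criterion:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 30, 5
A = rng.normal(size=(m, d))              # rows = candidate experiments (synthetic data)

w = np.full(m, 1.0 / m)                  # design weights, start at the barycenter
for _ in range(500):
    M = A.T @ (w[:, None] * A)           # information matrix M(w) = sum_i w_i a_i a_i^T
    grad = np.einsum("ij,jk,ik->i", A, np.linalg.inv(M), A)  # d/dw_i log det M(w)
    j = np.argmax(grad)                  # linear minimization oracle: best vertex
    g = grad[j]
    gamma = (g - d) / (d * (g - 1.0))    # exact line search for the D-criterion
    w = (1 - gamma) * w                  # Frank-Wolfe update toward vertex e_j
    w[j] += gamma

# Kiefer-Wolfowitz equivalence: at a D-optimal design, max_i a_i^T M(w)^-1 a_i = d
print(grad.max())
```

Because every iterate is a convex combination of simplex vertices, the method never leaves the original feasible polytope, which is the structural property the abstract highlights.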
Riemannian optimization refers to the optimization of functions defined over Riemannian manifolds. Such problems arise when the constraints of Euclidean optimization problems can be viewed as Riemannian manifolds, such as the symmetric positive-definite cone, the sphere, or the set of orthogonal linear layers for a neural network. This Riemannian formulation enables us to leverage the geometric structure of such problems by viewing them as unconstrained problems on a manifold. The convergence rates of Riemannian optimization algorithms often rely on geometric quantities depending on the sectional curvature and the distance between iterates and an optimizer. Numerous previous works bound the latter only by assumption, resulting in incomplete analysis and unquantified rates. In this talk, I will discuss how to remove this limitation for multiple algorithms and as a result quantify their rates of convergence.
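A minimal sketch of Riemannian gradient ascent on the unit sphere, one of the manifolds named above, is shown below: maximizing the Rayleigh quotient x^T S x recovers the top eigenvector, with normalization serving as the retraction. The matrix, spectrum, step size, and iteration count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
vals = np.array([1.0, 2.0, 3.0, 4.0, 8.0])        # chosen spectrum, top eigenvalue 8
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
S = Q @ np.diag(vals) @ Q.T                       # symmetric test matrix

def riemannian_grad(x):
    g = 2 * S @ x                                 # Euclidean gradient of f(x) = x^T S x
    return g - (x @ g) * x                        # project onto the tangent space at x

x = rng.normal(size=5)
x /= np.linalg.norm(x)                            # start on the unit sphere
for _ in range(300):
    x = x + 0.05 * riemannian_grad(x)             # ascent step in the tangent direction
    x /= np.linalg.norm(x)                        # retraction: renormalize to the sphere

print(x @ S @ x)                                  # converges to the top eigenvalue
```

The tangent-space projection and retraction are exactly the ingredients whose convergence analysis, on curved manifolds, involves the sectional-curvature and iterate-distance quantities mentioned in the abstract.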