Zuse Research Seminar

What? The Zuse Research Seminar serves as an interdisciplinary forum for researchers in Applied Mathematics and Computer Science.

Who? Talks are given by researchers from ZIB showcasing their department’s work, as well as invited external speakers from academia and industry. The seminar is open to the public.

When? About five talks per semester, usually 11:30–12:30 or 13:30–14:30. Coffee is sometimes available before or after the talk.

Where? Usually the institute’s lecture hall.

Contact Tim Conrad or Christoph Spiegel for organisational questions.

Upcoming Talks

There are no upcoming talks scheduled.

Past Talks

60 min · ZIB Roter Salon (Room 4027)
Paolo Villani FUB
Clustering with mixture models to identify model deficiencies

When representing a complex system through a computational model, a mismatch between simulation results and real-world measurements is usually unavoidable. If such a discrepancy renders the model unreliable, and thus useless, improving it becomes a necessity. The work presented in this talk proposes a non-intrusive, mixture-model-based strategy that clusters sensor readings using multiple sets of parameters, aiming to give the modeler insight into the model biases behind structured discrepancies so that the model can be improved.
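As a toy illustration of the mixture-model clustering idea (not the speaker's actual method), a two-component 1-D Gaussian mixture fitted with EM separates two regimes in a stream of readings; the data and all parameters below are made up:

```python
import math

# Fit a two-component 1-D Gaussian mixture with EM (illustrative sketch).
def em_gmm_1d(data, iters=100):
    mu = [min(data), max(data)]   # initialize means at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: update mixture weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, pi

data = [0.1, -0.2, 0.0, 0.15, 5.0, 5.2, 4.9, 5.1]   # two clear clusters
mu, var, pi = em_gmm_1d(data)
```

On these synthetic readings, EM recovers one component near 0 and one near 5; in the talk's setting, clusters would correspond to distinct parameter sets explaining structured discrepancies.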

Carlos Soto U Mass
Differential Privacy over Manifolds and Shape Space

In this work we consider the problem of releasing a differentially private statistical summary that resides on a Riemannian manifold. We present extensions of the Laplace and K-norm mechanisms that utilize intrinsic distances and volumes on the manifold. We also consider in detail the specific case where the summary is the Fréchet mean of data residing on a manifold. We demonstrate that our mechanism is rate optimal and depends only on the dimension of the manifold, not on the dimension of any ambient space, while also showing how ignoring the manifold structure can decrease the utility of the sanitized summary. Lastly, we illustrate our framework in three examples of particular interest in statistics: the space of symmetric positive definite matrices, which is used for covariance matrices; the sphere, which can be used as a space for modeling discrete distributions; and Kendall’s 2D planar shape space.
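For readers less familiar with the Euclidean starting point, the classical Laplace mechanism that the talk generalizes to manifolds can be sketched as follows (a minimal illustration, not the manifold-valued mechanism from the talk):

```python
import math
import random

# Euclidean Laplace mechanism: release value + Laplace(0, sensitivity/epsilon)
# noise, giving epsilon-differential privacy for a query with the given
# L1 sensitivity.
def laplace_mechanism(value, sensitivity, epsilon, rng=random):
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                      # uniform on (-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise
```

The manifold versions in the talk replace the additive noise with distributions defined via intrinsic distances and volumes.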

Certified Reduced-Order Methods for Model Predictive Control of Time-Varying Evolution Processes

In this talk, model predictive control (MPC) is utilized to stabilize a class of linear time-varying parabolic partial differential equations (PDEs). In our first example the control input is only finite-dimensional, i.e., it enters as a time-dependent linear combination of finitely many indicator functions whose total supports cover only a small part of the spatial domain. In the second example the PDE involves switching coefficient functions. We discuss stabilizability and the application of reduced-order models to derive algorithms with closed-loop guarantees.

This is joint work with Behzad Azmi (Konstanz), Michael Kartmann (Konstanz), Mattia Manucci (Stuttgart), Jan Rohleff (Konstanz), and Benjamin Unger (Stuttgart).

Stefan Volkwein
Learning to Compute Gröbner bases

Solving a polynomial system, or computing an associated Gröbner basis, is a fundamental task in computational algebra. However, it is also notorious for its doubly exponential worst-case time complexity in the number of variables. In this talk, I present a new paradigm for addressing such problems, namely a machine-learning approach using a Transformer. The learning approach does not require an explicit algorithm design and can return solutions in (roughly) constant time. This talk covers our initial results on this approach and the relevant computational-algebra and machine-learning challenges.

Hiroshi Kera
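For contrast with the learning approach, the classical computation the talk aims to sidestep can be reproduced with an off-the-shelf computer algebra system; here is a small SymPy example (an illustration, not part of the speaker's work):

```python
# Classical Gröbner basis computation with SymPy (Buchberger-style) --
# the doubly-exponential baseline that the learning approach contrasts with.
from sympy import groebner, symbols

x, y = symbols("x y")
# A small polynomial system: the unit circle intersected with the hyperbola xy = 1
G = groebner([x**2 + y**2 - 1, x * y - 1], x, y, order="lex")
print(list(G.exprs))
```

With the lexicographic order, the last basis element depends on `y` alone, which is what makes Gröbner bases useful for elimination and system solving.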
Nicholas Charron ZIB (SC)
Machine Learned Force Fields, Coarse Graining, HPC, & Beyond

Machine learned force fields (MLFFs), particularly those using deep neural networks to model interaction potentials, are quickly becoming a powerful tool for modelling complex molecular systems at scale with both classical and quantum accuracy. In this talk, we demonstrate the development of transferable coarse-grained (CG) MLFFs for proteins using HPC resources, showing how machine learning can be used easily and effectively on compute clusters to solve relevant chemical/physical problems. We furthermore discuss growing trends in the usage of MLFFs with HPC resources, including emerging datasets, hardware demands, and integrations of machine learned potentials with existing simulation software.

Swarm-Performance of Multi-Agent Systems and Connections to Equity

Many real-world systems are composed of agents whose interactions result in a collective swarm behavior that may be complex, unexpected, and/or unintended. We highlight intriguing cases of interplay between the micro-scale behavior of agents and the macro-scale performance of the swarm, with a particular emphasis on heterogeneous systems composed of different types of agents, such as: traffic flow (the role of automation/connectivity on the energy footprint of urban traffic flow), mixed human/robotic groups (transportation of supplies to a disaster area), and biological systems (schools of fish and colonies of penguins). We show in particular how behavior interpretable as ‘equitable’ or ‘altruistic’ can arise from purely survival-of-the-fittest objective functions.

Benjamin Seibold
Geometric Machine Learning and Graph Machine Learning
Melanie Weber
Backpropagation and Nonsmooth Optimization for Machine Learning

Backpropagation is the major workhorse for many machine learning algorithms. In this presentation, we will examine the theory behind backpropagation as provided by the technique of algorithmic differentiation. Subsequently, we will discuss how this classic derivative information can be used for nonsmooth optimization. Real-life examples will illustrate the application of the proposed nonsmooth optimization algorithm.

Andrea Walther
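The reverse-mode algorithmic differentiation underlying backpropagation can be sketched in a few lines; this is a deliberately naive illustration (a real implementation sweeps the computation graph in topological order rather than recursing):

```python
# Minimal reverse-mode algorithmic differentiation: each operation records
# its parents and local partial derivatives; a backward sweep accumulates
# adjoints (gradients) via the chain rule.
class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Naive recursion -- fine for tree-shaped expressions like the one below
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x * x      # z = x*y + x^2, so dz/dx = y + 2x, dz/dy = x
z.backward()
```

After the backward sweep, `x.grad` and `y.grad` hold the partial derivatives of `z`, exactly the quantities backpropagation computes for network weights.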
Christoph von Tycowicz ZIB (VDCC)
Geometric Deep Learning

The increasing success of deep learning techniques during the last decade expresses a paradigm shift in machine learning and data science. While learning generic functions in high dimensions is a cursed estimation problem, many challenging tasks, such as protein folding or image-based diagnosis, have now been shown to be achievable with appropriate computational resources.

These breakthroughs can be attributed to the fact that most tasks of interest aren’t actually generic; they possess inherent regularities derived from the effective low-dimensionality and structure of the physical world.

In this talk, we will see how geometric concepts allow us to expose these regularities and how we can use them to incorporate prior (physical) knowledge into neural architectures.

Sebastian Pokutta ZIB (AIS2T)
An Introduction to Conditional Gradients

Conditional Gradient methods are an important class of methods to minimize (non-)smooth convex functions over (combinatorial) polytopes. Recently, these methods have received a lot of attention as they allow for structured optimization, and hence learning, incorporating the underlying polyhedral structure into solutions. In this talk I will give a broad overview of these methods and their applications, and present some recent results in traditional optimization and learning as well as in deep learning.

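The basic Frank-Wolfe (conditional gradient) iteration is short enough to sketch; the quadratic objective and the probability simplex below are illustrative choices, not taken from the talk:

```python
# Frank-Wolfe sketch: minimize f(x) = 0.5 * ||x - b||^2 over the probability
# simplex. The linear minimization oracle (LMO) over the simplex just picks
# the vertex e_i with the smallest gradient coordinate -- no projections.
def frank_wolfe(b, steps=500):
    n = len(b)
    x = [1.0 / n] * n                        # start at the simplex barycenter
    for t in range(steps):
        grad = [xi - bi for xi, bi in zip(x, b)]   # gradient of f at x
        i = min(range(n), key=grad.__getitem__)    # LMO: best simplex vertex
        gamma = 2.0 / (t + 2)                      # standard open-loop step
        x = [(1 - gamma) * xj for xj in x]         # move toward vertex e_i,
        x[i] += gamma                              # staying inside the polytope
    return x

b = [0.1, 0.2, 0.7]     # b lies in the simplex, so the minimizer is b itself
x = frank_wolfe(b)
```

Note that every iterate is a convex combination of simplex vertices, which is the "structured solutions" property the abstract refers to.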
Pedro Maristany de las Casas ZIB (NEO)
Multiobjective Shortest Path Problems

In this talk we discuss new algorithms for the Multiobjective Shortest Path (MOSP) problem. The baseline algorithm, the Multiobjective Dijkstra Algorithm (MDA), has already been introduced in seminars at ZIB. New aspects discussed in this talk are its output-sensitive running time bound and how this bound compares to those derived for previously existing MOSP algorithms, a version of the MDA for One-to-One MOSP instances, and the usage of the MDA as a subroutine. The discussed applications in which the MDA acts as a subroutine are the Multiobjective Minimum Spanning Tree problem and the K-Shortest Simple Path problem.

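One core building block of labeling algorithms such as the MDA is keeping only Pareto-optimal cost vectors at each node; here is a minimal sketch (the label values are made up):

```python
# Pareto-dominance filtering of cost vectors, as used at every node in
# multiobjective shortest-path labeling algorithms.
def dominates(a, b):
    """a is componentwise no worse than b and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(labels):
    """Keep only labels not dominated by any other label."""
    return [a for a in labels if not any(dominates(b, a) for b in labels)]

labels = [(3, 7), (4, 4), (5, 5), (2, 9)]
front = pareto_filter(labels)      # (5, 5) is dominated by (4, 4)
```

In a MOSP run, the number of such non-dominated labels drives the output-sensitive running time the abstract mentions.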
Thoughts on Machine Learning

Techniques of machine learning (ML) and what is called “artificial intelligence” (AI) today find a rapidly increasing range of applications touching upon social, economic, and technological aspects of everyday life. They are also being used increasingly and with great enthusiasm to fill in gaps in our scientific knowledge by data-based modelling approaches. I have followed these developments over the past almost 20 years with interest and concern, and with mounting disappointment. This leaves me sufficiently worried to raise here a couple of pointed remarks.

Rupert Klein
Thorsten Koch ZIB (AAIM)
On the state of QUBO solving

It is regularly claimed that quantum computers will bring breakthrough progress in solving challenging combinatorial optimization problems relevant in practice. In particular, Quadratic Unconstrained Binary Optimization (QUBO) problems are said to be the model of choice for use in (adiabatic) quantum systems during the NISQ era. Even the first commercial quantum-based systems are advertised to solve such problems, and QUBOs are certainly an interesting way of modeling combinatorial optimization problems. Theoretically, any Integer Program can be converted into a QUBO. In practice, however, there are some caveats. Furthermore, even for problems that can be nicely modeled as a QUBO, this might not be the most effective way to solve them. We review the state of QUBO solving on digital and quantum computers and give some insights regarding current benchmark instances and modeling.

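To make the QUBO format concrete, here is a toy encoding of max-cut on a triangle as a QUBO, solved by brute force (illustrative only; real instances require the specialized solvers the talk surveys):

```python
from itertools import product

# QUBO: minimize x^T Q x over x in {0, 1}^n. For max-cut, an edge (i, j)
# contributes x_i + x_j - 2*x_i*x_j, which equals 1 exactly when the edge is
# cut; we negate it so that minimizing the QUBO maximizes the cut.
edges = [(0, 1), (1, 2), (0, 2)]      # a triangle graph
n = 3
Q = [[0.0] * n for _ in range(n)]
for i, j in edges:
    Q[i][i] -= 1
    Q[j][j] -= 1
    Q[i][j] += 2

def qubo_value(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Brute force over all 2^n assignments -- fine for tiny n, and a reminder of
# why the problem gets hard as n grows.
best = min(product((0, 1), repeat=n), key=qubo_value)
```

The optimum value is −2, i.e., any non-trivial partition of the triangle cuts two edges.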
The few and the many

The talk will give a short introduction to the complex dynamics of interacting systems of individual units, which can be particles (molecules, …) or agents (individual humans, media agents, …). We are interested in systems with at least two types of such units: of one type only a “few” individual units are present, while of the other there are “many”. For such systems we will review mathematical models on different levels: from the micro-level, in which all particles/agents are described individually, to the macro-level, where the “many” are modelled in an aggregated way. The effective dynamics given by these models will be illustrated by examples from cellular systems (neurotransmission processes) and opinion dynamics in social networks. You will be able to follow the talk even if you do not have any detailed knowledge about particles/agents or cellular/social processes (at least I hope).

Christof Schütte
Sparse Personalized PageRank: New results on the 25 billion dollar eigenvector problem

This talk will cover the basics of the PageRank problem, studied initially by the founders of Google, who created their search engine by applying it to the web graph, with links defining edges. We will then explain some of our results on the problem for undirected graphs, whose main application is finding local clusters in networks, a task used in many branches of science. We can now find local clusters fast, in time that depends not on the whole graph but only on the local cluster itself.

This is joint work with Elias Wirth and Sebastian Pokutta.

David Martínez-Rubio
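To make the personalized PageRank iteration concrete, here is a minimal power-iteration sketch on a made-up four-node graph (not the sparse algorithm from the talk):

```python
# Personalized PageRank by power iteration: p = alpha * s + (1 - alpha) * W p,
# where W is the column-stochastic random-walk matrix of an undirected graph
# and s is the seed (personalization) distribution.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # toy undirected graph
alpha = 0.15
s = {0: 1.0}                       # restart at node 0

p = {v: 0.0 for v in adj}
for _ in range(200):
    nxt = {v: alpha * s.get(v, 0.0) for v in adj}
    for u, nbrs in adj.items():
        share = (1 - alpha) * p[u] / len(nbrs)     # spread mass to neighbors
        for v in nbrs:
            nxt[v] += share
    p = nxt
```

Mass concentrates around the seed node, which is why personalized PageRank vectors are nearly sparse and can reveal the local cluster around the seed.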