7–11 Apr 2025
Lecture and Conference Centre
Europe/Warsaw timezone

Learning regularizers - bilevel optimization or unrolling?

Speaker

Dirk Lorenz

Description

In this talk we will consider the problem of learning a convex regularizer from a theoretical perspective. In general, variational methods can be learned by bilevel optimization, where the variational problem is the lower level problem and the upper level problem minimizes over some parameter of the lower level problem. However, this is usually too difficult in practice, and a popular alternative is so-called unrolling (or unfolding) of a solver for the lower level problem. There, one replaces the lower level problem by an algorithm that converges to a solution of that problem, fixes a number N of iterations to be performed, and uses the N-th iterate as a substitute for the true solution.
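To fix ideas, the bilevel problem can be sketched schematically as follows; the quadratic data fidelity with forward operator A, data y, parametrized regularizer R_theta, and upper level loss ell are generic placeholders, not notation from the talk:

    \min_{\theta} \; \ell\bigl(x^*(\theta)\bigr)
    \quad\text{s.t.}\quad
    x^*(\theta) \in \operatorname*{arg\,min}_x \; \tfrac{1}{2}\|Ax - y\|^2 + R_\theta(x),

and unrolling replaces the exact lower level solution x^*(\theta) by the N-th iterate of a convergent algorithm, e.g. gradient descent with stepsize tau:

    x_{k+1} = x_k - \tau\, \nabla_x \Bigl( \tfrac{1}{2}\|A x_k - y\|^2 + R_\theta(x_k) \Bigr),
    \quad k = 0, \dots, N-1, \qquad x_N(\theta) \approx x^*(\theta).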

While this approach is often successful in practice, few theoretical results are available. In this talk we will consider a setting, built around a quite simple toy example, in which a thorough comparison of the bilevel approach and the unrolling approach is possible. Even though the example is quite simple, the situation is already complex and reveals several phenomena that have been observed in practice: deeper unrolling is often not beneficial, especially if algorithm parameters such as stepsizes are not learned as well; and with learned stepsizes, deeper unrolling often does not improve performance further, but already shallow unrolling gives good results.
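As a rough illustration of unrolling with learned stepsizes, here is a minimal sketch in JAX. The quadratic toy problem, all variable names, and all parameter values are assumptions made for illustration; this is not the example analyzed in the talk.

    import jax
    import jax.numpy as jnp

    # Toy lower-level problem (an illustrative assumption):
    #   min_x  0.5 * ||x - y||^2 + 0.5 * theta * ||x||^2,
    # whose exact minimizer is x*(theta) = y / (1 + theta).

    def lower_grad(x, y, theta):
        # Gradient in x of the lower-level objective.
        return (x - y) + theta * x

    def unrolled_solver(y, theta, stepsizes):
        # N unrolled gradient-descent steps, N = len(stepsizes),
        # each with its own (learnable) stepsize.
        x = jnp.zeros_like(y)
        for k in range(stepsizes.shape[0]):
            x = x - stepsizes[k] * lower_grad(x, y, theta)
        return x

    def upper_loss(params, y, x_true):
        # Upper-level loss evaluated at the N-th iterate,
        # used as a substitute for the exact solution.
        theta, stepsizes = params
        x_N = unrolled_solver(y, theta, stepsizes)
        return 0.5 * jnp.sum((x_N - x_true) ** 2)

    # Synthetic data: a known signal plus noise.
    key = jax.random.PRNGKey(0)
    x_true = jnp.ones(10)
    y = x_true + 0.3 * jax.random.normal(key, (10,))

    # Learn the regularization weight theta and the per-iteration
    # stepsizes jointly by differentiating through the unrolled loop.
    N = 5
    params = (jnp.array(1.0), 0.5 * jnp.ones(N))
    grad_fn = jax.grad(upper_loss)
    for _ in range(200):
        g_theta, g_tau = grad_fn(params, y, x_true)
        params = (params[0] - 0.05 * g_theta, params[1] - 0.05 * g_tau)

    print("learned theta:", params[0])
    print("learned stepsizes:", params[1])

Freezing the stepsizes (removing g_tau from the update) recovers the fixed-stepsize variant, against which the effect of learned stepsizes can be compared for different unrolling depths N.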

Primary author

Dirk Lorenz
Presentation materials

There are no materials yet.