7–11 Apr 2025
Lecture and Conference Centre

A multilevel proximal trust-region method for nonsmooth optimization with applications to scientific machine learning

Speaker

Qi Wang

Description

Many applications in PDE-constrained optimization and data science require minimizing the sum of a smooth and a nonsmooth function. For example, training neural networks may require minimizing a mean squared error plus an l₁ regularization term to induce sparsity in the weights. In this talk, we introduce a multilevel proximal trust-region method to minimize the sum of a nonconvex, smooth function and a convex, nonsmooth function. Exploiting ideas from the multilevel literature allows us to reduce the cost of the step computation, which is a major bottleneck in single-level procedures. Our work unifies the theory behind proximal trust-region methods and certain multilevel recursive strategies. We prove global convergence of our method in ℝⁿ and provide an efficient nonsmooth subproblem solver. We demonstrate the efficiency and robustness of our algorithm on numerical examples from training neural networks to solve PDEs.
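
For readers unfamiliar with this problem class, the sketch below illustrates a composite objective of the kind described in the abstract: a smooth least-squares (mean squared error) term plus a convex nonsmooth l₁ term, minimized here with a plain single-level proximal gradient baseline. It is not the multilevel proximal trust-region method of the talk, and the names and data (A, b, lam) are purely illustrative assumptions.

import numpy as np

# Composite problem class:  minimize  f(x) + phi(x),
# with f smooth (here a least-squares loss) and phi(x) = lam * ||x||_1 convex nonsmooth.
# This is a simple single-level proximal gradient baseline, not the talk's algorithm.

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))   # illustrative data matrix
b = rng.standard_normal(50)         # illustrative targets
lam = 0.1                           # l1 regularization weight

def f_grad(x):
    """Gradient of the smooth part f(x) = 0.5/m * ||A x - b||^2."""
    m = A.shape[0]
    return A.T @ (A @ x - b) / m

def prox_l1(v, t):
    """Proximal operator of t * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

x = np.zeros(A.shape[1])
step = 1.0 / (np.linalg.norm(A, 2) ** 2 / A.shape[0])  # 1/L for this choice of f
for _ in range(200):
    x = prox_l1(x - step * f_grad(x), step)             # forward-backward step

print("nonzeros in x:", np.count_nonzero(x))

The soft-thresholding step shows how the l₁ term induces exact zeros in the iterate, which is the sparsity effect mentioned above; the multilevel trust-region approach of the talk targets the cost of computing such steps at scale.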
