7–11 Apr 2025
Lecture and Conference Centre
Europe/Warsaw timezone

Neural Operator-accelerated Parallel-in-Time Methods

9 Apr 2025, 08:30
20m
Room 0.29

Speaker

Sebastian Götschel

Description

Many models in computational science and engineering are based on time-dependent partial differential equations. Hence, integration along the time axis arises as an important numerical problem in many domains. While parallelization by decomposing the spatial computational domain is an established technique, on its own it will not suffice to provide the massive degree of concurrency required by upcoming exascale systems. Parallel-in-time integration (PinT) methods provide additional concurrency along the temporal axis and can improve parallel scaling. Classical PinT methods like Parareal, MGRIT, or PFASST rely on a computationally cheap coarse integrator to propagate information forward in time, while an expensive but parallelizable fine propagator provides accuracy. Typically, the coarse method is a numerical integrator using a lower resolution, a reduced-order model, or a simplified model. Similarly, iterative methods like spectral deferred correction (SDC), the main ingredient of PFASST, require good initial guesses for fast convergence.
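To make the coarse/fine interplay concrete, the following is a minimal sketch of the Parareal iteration in plain Python. The single-slice propagators `coarse` and `fine`, and the toy ODE used to exercise them, are illustrative assumptions and not the integrators or problems used in the talk.

def parareal(u0, coarse, fine, n_slices, n_iters):
    """Sketch of the Parareal iteration.

    `coarse` and `fine` advance a state over one time slice; both are
    hypothetical callables for illustration only.
    """
    # Serial predictor pass with the cheap coarse propagator
    U = [u0]
    for n in range(n_slices):
        U.append(coarse(U[-1]))

    for _ in range(n_iters):
        # Expensive fine propagation of every slice; in practice this
        # loop is executed in parallel across time slices
        F = [fine(U[n]) for n in range(n_slices)]
        G_old = [coarse(U[n]) for n in range(n_slices)]
        # Sequential correction sweep:
        #   U_{n+1} = G(U_n^{new}) + F(U_n^{old}) - G(U_n^{old})
        U_new = [u0]
        for n in range(n_slices):
            U_new.append(coarse(U_new[-1]) + F[n] - G_old[n])
        U = U_new
    return U

# Toy usage on the scalar ODE u' = -u over [0, 1] (illustrative only)
lam, T, n_slices = -1.0, 1.0, 10
dt = T / n_slices
coarse = lambda u: u * (1.0 + lam * dt)      # one explicit Euler step per slice
def fine(u, m=100):                          # m Euler sub-steps per slice
    for _ in range(m):
        u = u * (1.0 + lam * dt / m)
    return u

U = parareal(1.0, coarse, fine, n_slices, n_iters=3)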

Considering that machine learning-based methods for approximating PDEs are becoming increasingly successful, in this talk we propose to use neural operators as coarse propagators in Parareal or to initialize SDC iterations. Using Rayleigh–Bénard convection as a benchmark problem, we discuss the design and training of suitable neural operators, investigate their performance, and demonstrate space-time parallel scaling. This is joint work with Andreas Herten, Chelsea John, Stefan Kesselheim (FZ Jülich), Abdul Qadir Ibrahim, Thibaut Lunet, and Daniel Ruprecht (TUHH).
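As a rough illustration of the idea, a trained neural operator can be wrapped so that a single forward pass plays the role of the coarse propagator G in the Parareal sketch above. Here `model` and `trained_operator` are hypothetical callables mapping the state at the start of a time slice to the state at its end; they stand in for, but are not, the operators discussed in the talk.

def neural_coarse(model):
    """Wrap a trained neural operator (hypothetical callable `model`)
    so it can serve as the coarse propagator in Parareal."""
    def coarse(u):
        # One forward pass of the learned operator replaces the
        # coarse numerical time step over a slice
        return model(u)
    return coarse

# e.g. U = parareal(u0, neural_coarse(trained_operator), fine, n_slices, n_iters)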
