Description
Neural network architectures based on overlapping domain decomposition have emerged as a powerful framework for enhancing the efficiency, scalability, and robustness of physics-informed neural networks (PINNs). In this work, we apply this approach to randomized neural networks (RaNNs) for solving partial differential equations (PDEs). On each subdomain, a separate neural network is independently initialized, with its parameters drawn from a uniform distribution, and the local networks are combined via a partition of unity. In contrast to classical PINNs, only the final layers of these networks are trained, which has a significant impact on the structure of the resulting optimization problem.
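To make the setup concrete, the following minimal sketch (not the authors' implementation; the subdomain layout, window functions, and network widths are illustrative assumptions) assembles localized random features on overlapping subdomains of [0, 1], blended by a normalized partition of unity. Only the coefficients multiplying these feature columns, i.e., the final linear layer, would be trained.

```python
# Sketch of a partition-of-unity RaNN on overlapping subdomains.
# All hidden weights are fixed after uniform random initialization;
# only the final linear layer (the coefficients of the columns of A)
# is trained. Layout and widths are illustrative assumptions.
import numpy as np

def random_features(x, W, b):
    """Fixed hidden layer of a RaNN: tanh(x * W + b); never trained."""
    return np.tanh(np.outer(x, W) + b)

def pou_window(x, center, radius):
    """Smooth bump supported on one subdomain; normalized below so that
    the windows sum to one (a partition of unity)."""
    r = np.clip(np.abs(x - center) / radius, 0.0, 1.0)
    return np.where(r < 1.0, np.exp(1.0 - 1.0 / (1.0 - r**2 + 1e-14)), 0.0)

rng = np.random.default_rng(0)
n_sub, n_feat = 4, 50                   # subdomains, features per subdomain
centers = np.linspace(0.0, 1.0, n_sub)  # overlapping cover of [0, 1]
radius = 0.35                           # larger than half the spacing: overlap

x = np.linspace(0.0, 1.0, 200)
windows = np.stack([pou_window(x, c, radius) for c in centers])
windows /= windows.sum(axis=0)          # normalize: partition of unity

# Global feature matrix: each subdomain's random features, localized by
# its window function.
blocks = []
for k in range(n_sub):
    W = rng.uniform(-1.0, 1.0, n_feat)  # uniform random initialization
    b = rng.uniform(-1.0, 1.0, n_feat)
    blocks.append(windows[k][:, None] * random_features(x, W, b))
A = np.hstack(blocks)                   # shape (200, n_sub * n_feat)
print(A.shape)
```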
For linear PDEs, the optimization problem reduces to a linear least-squares problem, which can be solved using direct solvers for small systems or iterative solvers for larger ones. However, these least-squares problems are generally ill-conditioned, and iterative solvers converge slowly without appropriate preconditioning. To address this, we first apply a singular value decomposition (SVD) and discard components associated with small singular values, which improves the conditioning of the system. In addition, we employ a second type of overlapping domain decomposition in the form of additive and restricted additive Schwarz preconditioners for the least-squares problem, further enhancing solver efficiency.
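As a hedged illustration of the SVD-based pruning step, the sketch below solves a least-squares problem after discarding singular components below a relative tolerance; the matrix A merely stands in for the collocation system above, and the tolerance 1e-8 is an assumed, illustrative value.

```python
# Truncated-SVD least-squares solve: drop components with small singular
# values to improve conditioning before solving. Matrix and tolerance are
# illustrative stand-ins, not the authors' setup.
import numpy as np

def truncated_svd_solve(A, f, tol=1e-8):
    """Solve min ||A c - f||_2, keeping singular values > tol * s_max."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]                 # discard ill-conditioned components
    c = Vt[keep].T @ ((U[:, keep].T @ f) / s[keep])
    return c, s[0] / s[keep][-1]          # coefficients, reduced condition no.

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 120)) * (10.0 ** (-0.1 * np.arange(120)))
f = rng.standard_normal(200)
c, cond = truncated_svd_solve(A, f)
print(c.shape, cond)
```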
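The Schwarz preconditioning can likewise be sketched, here on the normal equations: a one-level additive Schwarz preconditioner sums local solves on overlapping diagonal blocks, while the restricted variant would instead weight contributions in the overlap. The block partition of the unknowns, the overlap width, and the use of CG are assumptions made for illustration.

```python
# One-level additive Schwarz preconditioner for the (SPD) normal equations
# N c = A^T f, with overlapping column blocks as "subdomains". Block sizes
# and overlap are illustrative assumptions.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
m, n, nb = 400, 120, 4                   # rows, unknowns, subdomain blocks
A = rng.standard_normal((m, n))
f = rng.standard_normal(m)
N = A.T @ A                              # normal-equation matrix (SPD here)
rhs = A.T @ f

size, ovl = n // nb, 8                   # block size and overlap width
blocks = [slice(max(0, i * size - ovl), min(n, (i + 1) * size + ovl))
          for i in range(nb)]
local_inv = [np.linalg.inv(N[b, b]) for b in blocks]  # local direct solves

def apply_as(r):
    """Additive Schwarz: sum of local solves on overlapping blocks."""
    z = np.zeros_like(r)
    for b, Ninv in zip(blocks, local_inv):
        z[b] += Ninv @ r[b]
    return z

prec = LinearOperator((n, n), matvec=apply_as)
c, info = cg(N, rhs, M=prec, maxiter=500)
print(info, np.linalg.norm(A @ c - f))
```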
Numerical experiments demonstrate that this dual use of domain decomposition significantly reduces computational time while maintaining accuracy, particularly for multi-scale and time-dependent problems.