Speaker
Description
Measurement data is often sampled irregularly, i.e., not on equidistant time grids. This is also true for Hamiltonian systems. However, existing machine learning methods that learn symplectic integrators, such as SympNets [2] and HénonNets [1], still require training data generated with fixed step sizes. To learn time-adaptive symplectic integrators, an extension of SympNets, which we call TSympNets, was introduced in [2]. We adapt the architecture of TSympNets and extend it to non-autonomous Hamiltonian systems. So far, the approximation capabilities of TSympNets have been unknown. We close this gap by proving a universal approximation theorem for separable Hamiltonian systems and by showing that it cannot be extended to non-separable Hamiltonian systems. We investigate these theoretical approximation capabilities in several numerical experiments.
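The building block behind SympNet-style architectures is a parameterized map that is symplectic by construction for any parameter values. A minimal sketch (an illustration under standard assumptions, not the authors' TSympNet code; the names `layer`, `K`, `a`, `b` are hypothetical) is the "gradient module" of [2]: a shear p ↦ p + ∇V(q), q ↦ q, which is exactly symplectic because the Hessian of V is symmetric. The check below verifies J^T Ω J = Ω numerically for the finite-difference Jacobian J:

```python
# Sketch of a SympNet-style "gradient module" (illustrative, not the
# authors' code): the shear
#   p_new = p + K^T (a * sigmoid(K q + b)),  q_new = q
# adds the gradient of the scalar potential
#   V(q) = sum_i a_i * Sigmoid_antiderivative(K_i . q + b_i),
# so the map is symplectic for arbitrary parameters K, a, b.
import numpy as np

rng = np.random.default_rng(0)
n, width = 2, 5                       # q and p each live in R^n; hidden width
K = rng.standard_normal((width, n))
a = rng.standard_normal(width)
b = rng.standard_normal(width)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer(z):
    """One symplectic shear layer acting on z = (q, p)."""
    q, p = z[:n], z[n:]
    p_new = p + K.T @ (a * sigmoid(K @ q + b))  # gradient "kick" on p only
    return np.concatenate([q, p_new])

def jacobian(f, z, h=1e-6):
    """Central finite-difference Jacobian of f at z."""
    m = z.size
    J = np.zeros((m, m))
    for i in range(m):
        e = np.zeros(m); e[i] = h
        J[:, i] = (f(z + e) - f(z - e)) / (2 * h)
    return J

# Symplecticity test: J^T Omega J = Omega, with the standard Omega.
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
z0 = rng.standard_normal(2 * n)
J = jacobian(layer, z0)
err = np.max(np.abs(J.T @ Omega @ J - Omega))
print(f"symplecticity residual: {err:.2e}")  # small (finite-difference error only)
```

Composing many such shears, alternating kicks on p and on q, yields the SympNet architectures of [2]; a time-adaptive variant additionally feeds the step size into the parameters.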
References
[1] J. W. Burby, Q. Tang, and R. Maulik. Fast neural Poincaré maps for toroidal magnetic fields. Plasma Physics and Controlled Fusion, 63(2):024001, 2020. https://doi.org/10.1088/1361-6587/abcbaa
[2] P. Jin, Z. Zhang, A. Zhu, Y. Tang, and G. E. Karniadakis. SympNets: intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems. Neural Networks, 132:166–179, 2020. https://doi.org/10.1016/j.neunet.2020.08.017