Description
Calculating exact derivatives is a cornerstone of scientific computing. It enables deep learning, the solution of nonlinear partial differential equations, and asset optimization in finance. For real-world applications, the calculation of derivatives poses challenges due to the complexity of the algorithms that represent the underlying function. Algorithmic Differentiation (AD) addresses this challenge.
Despite its advantages, applying AD naively to numerical algorithms such as fixed-point iterations can lead to inefficiencies and incorrect derivatives. In such cases, knowledge of the problem structure should be leveraged. Recently, Walther and Sander extended the established results for first-order derivatives of fixed-point iterations to second-order derivatives using the two-phase approach.
In this talk, we give a brief introduction to AD and present the corresponding theoretical results. We further explain the integration of these results into the C++ AD tool ADOL-C and demonstrate their application to a geometric finite element approximation.