Speaker
Description
Many overdetermined inverse problems for parameter estimation can be formulated as minimizing the sum of squared residuals between some reference data and a corresponding parametrized simulation. The least squares objective is geometrically tied to the Euclidean metric on points in a high-dimensional ambient space. The outputs of a differentiable simulation model, swept over its parameters, can be interpreted as points on a (relatively) low-dimensional Riemannian manifold embedded in that space. It follows that the Euclidean metric is not necessarily a proper measure of distance between points on the model manifold, which explains the non-linearity of the least squares objective. The geodesic metric could define a convex objective given two known points on the manifold. However, the second point is precisely what we lack: the parameters we wish to identify that satisfy the minimization objective.
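In symbols, with reference data y and a differentiable simulation f parametrized by θ (notation ours, chosen for illustration), the problem described above reads:

```latex
\min_{\theta} \; \tfrac{1}{2} \left\| f(\theta) - y \right\|_2^2
  \;=\; \min_{\theta} \; \tfrac{1}{2} \sum_{i=1}^{m} r_i(\theta)^2,
\qquad r(\theta) = f(\theta) - y ,
```

where the Euclidean norm on the residual r(θ) is exactly the ambient-space metric whose mismatch with the manifold geometry is at issue.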
Assuming the set of potential minimizers does not lead to self-intersection of the manifold, a transformation can be found which non-linearly projects the ambient space into a latent space equipped with a metric that faithfully reflects the distances between parameter combinations. Autoencoders are particularly well suited for such a task. We propose a novel autoencoder architecture that finds such a differentiable projection, leading to an approximate linearization of the manifold with respect to its parametrization while preserving the orthogonality of noise below a desired signal-to-noise ratio. This allows rapid minimization of the original objective using the encoded residuals, for which the Gauss-Newton method in particular attains superlinear convergence rates. In theory, perfect linearization of the tubular neighborhood of the manifold would even allow the problem to be solved trivially, in the sense of linear least squares. The procedure is illustrated using both academic and applied examples.
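As background for the minimization step, the following is a minimal, hypothetical sketch (ours, not the speaker's implementation) of the Gauss-Newton iteration on a toy one-parameter residual, identifying the decay rate k of y(t) = exp(-k t) from reference data:

```python
import numpy as np

# Toy example (not from the talk): reference data generated by the
# "true" parameter we wish to recover.
t = np.linspace(0.0, 2.0, 20)
k_true = 1.5
y_ref = np.exp(-k_true * t)

def residual(k):
    # r(k) = f(k) - y, the quantity whose squared norm is minimized
    return np.exp(-k * t) - y_ref

def jacobian(k):
    # dr/dk, analytic here; a differentiable simulator would provide this
    return (-t * np.exp(-k * t)).reshape(-1, 1)

k = 0.1  # initial guess
for _ in range(20):
    r = residual(k)
    J = jacobian(k)
    # Gauss-Newton step: least-squares solution of J dk = -r,
    # equivalent to solving (J^T J) dk = -J^T r
    dk = np.linalg.lstsq(J, -r, rcond=None)[0]
    k = k + float(dk)

print(k)  # converges toward k_true = 1.5
```

In the proposed method, the residual fed to such an iteration would be the encoded one, where the learned projection makes the problem nearly linear in the parameters.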