Description
Optimal feedback control for nonlinear systems is a powerful tool with applications in engineering, physics, and many other fields. However, a significant drawback of this approach is that the numerical treatment of the resulting nonlinear first-order partial differential equation—the Hamilton-Jacobi-Bellman (HJB) equation—can be challenging. In this talk, we will show that the HJB equation is linked to a nonlinear operator equation very similar to the Riccati equation.
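For orientation, and under standard assumptions that are not spelled out in this abstract (an infinite-horizon problem with dynamics \(\dot{x} = f(x) + g(x)u\), running cost \(\ell(x) + \|u\|^2\), and value function \(V\)), the HJB equation takes the form
\[
\nabla V(x)^\top f(x) - \tfrac{1}{4}\, \nabla V(x)^\top g(x) g(x)^\top \nabla V(x) + \ell(x) = 0 .
\]
In the linear quadratic case \(f(x) = Ax\), \(g(x) = B\), \(\ell(x) = x^\top Q x\), the quadratic ansatz \(V(x) = x^\top P x\) reduces this to the algebraic Riccati equation \(A^\top P + P A - P B B^\top P + Q = 0\), which is the prototype for the operator equation discussed below.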
To establish this connection, we introduce weighted L^p-spaces and develop a theory based on the Koopman operator that generalizes many concepts known from linear quadratic control.
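As a point of reference (a standard definition, not specific to this work): for an autonomous system \(\dot{x} = f(x)\) with flow \(\Phi^t\), the Koopman semigroup acts linearly on observables \(\varphi\) by composition with the flow,
\[
(\mathcal{K}^t \varphi)(x) = \varphi(\Phi^t(x)), \qquad (\mathcal{L}\varphi)(x) = f(x) \cdot \nabla \varphi(x),
\]
with generator \(\mathcal{L}\); the weighted L^p-spaces serve as the function spaces on which this linear viewpoint is developed.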
We then demonstrate that the HJB equation can be formulated as a minimization problem over a set of nuclear operators, where the solution is characterized by a nonlinear operator equation analogous to the Riccati equation. Furthermore, we show that policy iteration can be interpreted as a specific method for solving this operator equation. However, this method has some unfavorable properties, which we address by introducing a modification.
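To illustrate the linear quadratic special case, here is a minimal sketch of policy iteration in that setting (Kleinman's iteration), in which each policy-evaluation step solves a Lyapunov equation and the iterates converge to the solution of the algebraic Riccati equation; the matrices and the initial feedback below are illustrative placeholders, not taken from the talk.

import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Illustrative linear quadratic problem (placeholder data): dynamics x' = A x + B u,
# running cost x^T Q x + u^T u (i.e., R = I).
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

# Initial stabilizing feedback u = -K x (chosen by hand for this example).
K = np.array([[0.0, 4.0]])

for _ in range(10):
    A_cl = A - B @ K
    # Policy evaluation: solve the Lyapunov equation A_cl^T P + P A_cl + Q + K^T K = 0.
    P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ K))
    # Policy improvement: K <- B^T P (optimal feedback for the evaluated value function).
    K = B.T @ P

# The iterates converge to the stabilizing solution of the algebraic Riccati equation.
print(np.allclose(P, solve_continuous_are(A, B, Q, np.eye(1))))

In the nonlinear setting, the policy-evaluation step becomes a linear PDE for the value function of the current policy, and the operator equation described above takes the place of the Riccati equation.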
We believe these results may pave the way for convergence proofs for the modified policy iteration, or even offer an alternative to the theory of viscosity solutions. Finally, we present numerical experiments that illustrate the theoretical properties derived in this work.