Speaker
Description
Chemical kinetic models pose particular challenges for Bayesian parameter estimation due to their high nonlinearity and sensitivity. Moreover, priors are largely uninformative, whereas the experimental data are comparatively accurate. As a consequence, posteriors can become highly complex and localized relative to the prior. As a possible route to address such problems, we explore an approach based on normalizing flows in conjunction with quasi-Monte Carlo sampling. This approach learns a bijective neural network for parameter transformation such that uniform sampling in the transformed parameter space yields an efficient importance sampler for the Bayesian posterior. The network is trained sequentially, exploiting the bijectivity to draw samples from the current approximation in order to learn the next layer. Because of the strong localization, directly learning the posterior is not robust, and we investigate the use of tempering to improve robustness. As a realistic showcase, we evaluate the performance of this method on an established empirical model for methanol synthesis over Cu-based catalysts, using both synthetic and experimental data. Our findings indicate a high potential of this approach for Bayesian inference of chemical kinetic models, but also highlight its challenges, e.g., the large number of neural-network hyperparameters that need to be tuned.
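To make the core idea concrete, the following is a minimal, hypothetical sketch of flow-based importance sampling in one dimension: a bijection maps uniform quasi-Monte Carlo nodes into parameter space, and importance weights correct for the mismatch between the induced density and the posterior. A fixed affine Gaussian map stands in for the learned normalizing flow, and a narrow Gaussian stands in for the localized posterior; all numbers are illustrative, not from the talk.

```python
from statistics import NormalDist

# Toy stand-in for a sharply localized posterior (values are hypothetical).
post = NormalDist(mu=2.0, sigma=0.1)

# Bijective map T: (0,1) -> parameter space, here a fixed affine flow via a
# Gaussian inverse CDF. In the method described above this map would be a
# learned normalizing flow, refined layer by layer; we use a deliberately
# imperfect approximation to show that the weights correct for it.
flow = NormalDist(mu=2.05, sigma=0.15)

n = 10_000
# 1D quasi-Monte Carlo nodes (midpoint lattice) instead of pseudo-random draws.
us = [(i + 0.5) / n for i in range(n)]

thetas = [flow.inv_cdf(u) for u in us]                 # push uniforms through T
weights = [post.pdf(t) / flow.pdf(t) for t in thetas]  # importance weights p/q

# Self-normalized importance-sampling estimate of the posterior mean.
mean_est = sum(w * t for w, t in zip(weights, thetas)) / sum(weights)
# Effective sample size diagnoses how well the flow matches the posterior.
ess = sum(weights) ** 2 / sum(w * w for w in weights)
print(f"posterior mean ~ {mean_est:.3f}, ESS = {ess:.0f} of {n}")
```

A poorly adapted flow shows up directly as a collapsing effective sample size, which is one way the localization problem mentioned above manifests in practice.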