In aerodynamic flows, viscous effects are concentrated in thin boundary layers along solid surfaces. Numerical simulation at high Reynolds numbers requires the turbulent boundary layer to be correctly described, and the modeling of turbulence is still an indispensable prerequisite. Modern turbulence modeling involves one to seven additional equations with deliberately formulated source terms. As a consequence, the resulting stiff system of flow and turbulence equations leads to severe challenges with respect to an efficient integration towards steady state.
Despite decades-long efforts, up to now no “universal” turbulence model has evolved which can be applied with reasonable reliability to various types of flows, with respect to numerical robustness and efficiency as well as to predictive quality. However, concerning zero-equation or algebraic turbulence models, since about the 1990s there has been unanimous consensus that such models are not sufficiently accurate, and these models are not in use anymore. On the other hand, algebraic turbulence models are very efficient, since no additional equations with source terms are introduced. Thus, the objective of the present contribution is to make the predictive capabilities of algebraic models comparable to modern equation-based models.
Algebraic turbulence models mainly rely on the “Mixing Length” hypothesis, which Ludwig Prandtl first proposed 100 years ago at the 1925 GAMM conference in Dresden. Based on this Mixing Length hypothesis and further modifications, algebraic turbulence models like the Cebeci-Smith and the Baldwin-Lomax model were derived, and extensively used in the aircraft industry until the 1990s. Algebraic turbulence models were numerically robust, but for more complex airfoil and wing flows with shock-boundary layer interaction and/or flows being close to separation, these turbulence models proved to be inadequate by predicting shock locations too far downstream and/or too large regions of attached flow.
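For orientation, the Mixing Length hypothesis expresses the turbulent (eddy) viscosity algebraically in terms of the local mean shear; in a standard textbook form (not the specific formulation derived in this contribution),

\[ \nu_t = \ell_m^2 \left| \frac{\partial \bar{u}}{\partial y} \right|, \qquad \ell_m = \kappa\, y \ \text{(near the wall)}, \]

with \(\kappa\) the von Kármán constant and \(y\) the wall distance; algebraic models such as Cebeci-Smith and Baldwin-Lomax replace the outer-layer mixing length by empirical correlations instead of solving additional transport equations.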
In the present contribution, an algebraic turbulence model is derived with a predictive quality comparable to contemporary one- and two-equation turbulence models. Here, the classical Baldwin-Lomax model is revised with a formulation very close to the original approach of Ludwig Prandtl. Experimental evidence and a shear stress sensor function are used to enhance the prediction of flows with shocks and close to separation. Flow computations around airfoils and wings show that the resulting model provides predictive properties similar to the most advanced modern one- and two-equation turbulence models. This convincingly confirms that the now 100-year-old Mixing Length hypothesis of Ludwig Prandtl is still of high relevance for today’s aerodynamic problems.
This contribution presents the concept of full-field microstructure-based models that can be considered digital shadows of the metallic material's microstructure during numerical simulation of thermo-mechanical treatment. Recent progress in the area of full-field modelling is directly driven by the development of modern metallic materials, often of a multiphase nature. Such microstructure types lead to local heterogeneities influencing material behaviour and, eventually, macroscopic properties of the final product. Commonly used microstructure evolution models and experimental techniques often fail to capture the true complexity of such modern multiphase materials and multistage processes. Information from simplified phenomenological or mean-field models can be deceiving and does not describe the true geometry of microstructural features. In contrast, full-field 2D/3D modelling allows for a comprehensive and closer-to-reality visualisation of microstructures as they evolve, enabling researchers to observe the interactions between grains/phases in 2D/3D at an unprecedented level. At the same time, it has to be emphasised that numerical modelling is not an isolated endeavour; it is closely integrated with experimental methods. The synergy of the two approaches is the basis for modern computational materials science.
The concept of the digital material shadow, stages of the model development, and examples of practical applications to the simulation of microstructure evolution during forming and heat treatment operations will be discussed (e.g. recrystallisation, phase transformations). The selected results will demonstrate the capabilities and limitations of such microstructure-based models in computational material design.
The Generalized Minimal Residual method (GMRES) for the solution of general square linear systems Ax=b is often combined with a preconditioner to improve the convergence speed and the overall computing performance of the method. Successful mixed precision implementations for the application of the preconditioner inside GMRES have been previously proposed: some strategies prescribe applying the preconditioner in low precision to reduce overall time and memory consumption, while others propose applying the preconditioner in higher precision to improve robustness and accuracy. These existing studies tend to focus on one kind of preconditioner combined with one kind of preconditioning technique (left-, right-, or flexible-preconditioning). In this talk, we propose to unify most of the state-of-the-art mixed precision implementations for preconditioned GMRES under the same comprehensive theory and give a clear and exhaustive list of the possible strategies to set the precisions and how they compare numerically. To achieve this, we derive new descriptive bounds for the attainable forward errors of the computed solutions. From the study of these bounds, we uncover as yet unknown mixed precision implementations that achieve new tradeoffs between computing performance and accuracy. As importantly, we explain that there are critical differences in robustness and accuracy between left-, right-, and flexible-preconditioning for the same given set of precisions and that, therefore, the choice between the three preconditioning techniques has higher stakes in mixed precision. We substantiate our theoretical findings with a comprehensive experimental study on random dense and SuiteSparse matrices with various preconditioners.
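As a minimal illustration of one such strategy (an illustrative sketch on synthetic data, not the authors' implementation), the following Python/SciPy snippet applies an incomplete-LU preconditioner whose action is rounded to single precision inside an otherwise double-precision, left-preconditioned GMRES:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 500
A = sp.random(n, n, density=0.01, random_state=rng, format="csc") + 4.0 * sp.identity(n, format="csc")
b = rng.standard_normal(n)

ilu = spla.spilu(A)  # preconditioner factorization; could itself be computed in reduced precision

def apply_preconditioner_low_precision(r):
    # Round input and output to float32 to mimic a low-precision preconditioner application.
    r32 = r.astype(np.float32)
    z = ilu.solve(r32.astype(np.float64))
    return z.astype(np.float32).astype(np.float64)

M = spla.LinearOperator((n, n), matvec=apply_preconditioner_low_precision)

x, info = spla.gmres(A, b, M=M, restart=50, maxiter=500)
print("converged:", info == 0, "relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))

Analogous placements of the precisions (e.g., a higher-precision preconditioner with a lower-precision operator, or right or flexible preconditioning) yield the other strategies whose forward-error behavior the talk compares.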
We investigate the iterative refinement method applied to the solution of linear discrete inverse problems by considering its application to the Tikhonov problem in mixed precision. Previous works on mixed precision iterative refinement methods for the solution of symmetric positive definite linear systems and least-squares problems have shown regularization to be a key requirement when computing low-precision factorizations. For problems that are naturally severely ill-posed, we formulate the iterates of iterative refinement in mixed precision as a filtered solution using the preconditioned Landweber method with a Tikhonov-type preconditioner. Through numerical examples simulating various mixed precision choices, we showcase the filtering properties of the method and demonstrate accuracy comparable or superior to results computed in double precision as well as to another approximate method.
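A bare-bones sketch of mixed precision iterative refinement for the Tikhonov-regularized normal equations (illustrative only, using a synthetic well-conditioned operator rather than a genuinely ill-posed one) could look as follows:

import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
m, n = 300, 200
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 1e-2  # Tikhonov regularization parameter

# Factorize the regularized normal-equations matrix once, in single precision.
N32 = (A.T @ A + lam**2 * np.eye(n)).astype(np.float32)
factor = cho_factor(N32)

x = np.zeros(n)
for _ in range(20):
    # Residual of the Tikhonov normal equations, evaluated in double precision.
    r = A.T @ (b - A @ x) - lam**2 * x
    # Correction computed with the single-precision factors.
    d = cho_solve(factor, r.astype(np.float32)).astype(np.float64)
    x = x + d

print("normal-equations residual norm:", np.linalg.norm(A.T @ (b - A @ x) - lam**2 * x))

In the ill-posed setting considered in the talk, the successive corrections act as filtered (preconditioned Landweber) iterates rather than converging to the unregularized solution.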
Inverse problems focus on reconstructing hidden objects from indirect and often noisy measurements and are prevalent in numerous scientific and engineering disciplines. However, these reconstructions are typically highly sensitive to perturbations such as measurement errors, making regularization essential to obtain meaningful approximations.
Krylov subspace methods are a class of very popular projection methods with regularization properties, which typically construct stable bases for Krylov subspaces related to the original linear system using orthogonalization. However, there are scenarios where the inner products required in this process can hinder the usability of the solvers. For example, in low-precision arithmetic, standard Krylov solvers might break down too early, or there can be overflows or underflows in the computation of norms. Moreover, inner products can be a limiting factor for high-performance computing, since they require global communication. On the other hand, the most used inner-product free solvers, e.g. Chebyshev semi-iterations, can show very poor convergence. In this work, I present a family of solvers which leverage the fast convergence of Krylov methods while being inherently inner-product free, and I show that these can be used to tackle large-scale linear inverse problems efficiently. In particular, I revisit the changing minimal residual Hessenberg method (CMRH) in the context of inverse problems, showing that it has regularization properties, and I introduce a new method, the least squares LU (LSLU). Both methods rely on the (possibly modified) Hessenberg iterative algorithm and are based on implicit LU factorizations of the Krylov basis. Moreover, this framework is extended to include Tikhonov regularization, in the fashion of hybrid regularization, so that the regularization parameters can be chosen on-the-fly. Theoretical results and extensive numerical experiments suggest that these inner-product free variants exhibit performance comparable to the established methods.
The block classical Gram--Schmidt (BCGS) algorithm and its reorthogonalized variant are widely used methods for computing the economic QR factorization of block columns X due to their lower communication cost compared to other approaches such as modified Gram--Schmidt and Householder QR. To further reduce communication, i.e., synchronization, there has been a long ongoing search for a variant of reorthogonalized BCGS that achieves O(u) loss of orthogonality while requiring only one synchronization point per block column, where u represents the unit roundoff. Utilizing Pythagorean inner products and delayed normalization techniques, we propose the first provably stable one-synchronization reorthogonalized BCGS variant, demonstrating that it has O(u) loss of orthogonality under the condition O(u)κ^2(X) ≤ 1/2, where κ represents the condition number.
By incorporating one additional synchronization point, we develop a two-synchronization reorthogonalized BCGS variant which maintains O(u) loss of orthogonality under the improved condition O(u)κ(X) ≤ 1/2. An adaptive strategy is then proposed to combine these two variants, ensuring O(u) loss of orthogonality while using as few synchronization points as possible under the less restrictive condition O(u)κ(X) ≤ 1/2. As an example of where this adaptive approach is beneficial, we show that using the adaptive orthogonalization variant, s-step GMRES achieves a backward error comparable to s-step GMRES with BCGSI+, also known as BCGS2, both theoretically and numerically, but requires fewer synchronization points.
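For reference, a plain (non-reorthogonalized) block classical Gram-Schmidt loop is sketched below; the one- and two-synchronization variants discussed above reorganize and reorthogonalize exactly these block operations (illustrative sketch, not the proposed algorithms):

import numpy as np

def bcgs(X_blocks):
    # Plain block classical Gram-Schmidt QR of [X_1, ..., X_p], no reorthogonalization.
    Q_blocks = []
    for Xk in X_blocks:
        W = Xk.copy()
        for Qj in Q_blocks:
            W -= Qj @ (Qj.T @ Xk)   # inter-block projections; fusable into one synchronization
        Qk, _ = np.linalg.qr(W)     # intra-block orthogonalization
        Q_blocks.append(Qk)
    return np.hstack(Q_blocks)

rng = np.random.default_rng(0)
X = [rng.standard_normal((1000, 4)) for _ in range(8)]
Q = bcgs(X)
# Loss of orthogonality ||I - Q^T Q||; it grows with the condition number of the concatenated X.
print(np.linalg.norm(np.eye(Q.shape[1]) - Q.T @ Q))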
Hybrid constitutive modeling combines two complementary approaches for describing and predicting a material’s mechanical behavior: data-driven black-box methods and physically constrained, theory-based models [1,2]. While black-box methods can achieve high accuracy, they often lack interpretability and extrapolation capabilities. In contrast, physics-based models offer theoretical insight and generalizability but may struggle to capture complex behaviors with the same precision. Traditionally, hybrid modeling has required a compromise between these strengths.
In this presentation, we demonstrate how recent advancements in symbolic machine learning—particularly Kolmogorov-Arnold Networks (KANs)—help mitigate this trade-off. We present Constitutive Kolmogorov-Arnold Networks (CKANs) [3] as a novel class of hybrid constitutive models. By integrating a post-processing symbolification step, CKANs retain the predictive accuracy of data-driven models while enhancing interpretability and extrapolation through symbolic expressions, effectively bridging machine learning and physical modeling.
References:
[1] K. Linka, M. Hillgärtner, K.P. Abdolazizi, R.C. Aydin, M. Itskov, C.J. Cyron, Constitutive artificial neural networks: A fast and general approach to predictive data-driven constitutive modeling by deep learning, Journal of Computational Physics, 429:110010, 2021.
[2] K.P. Abdolazizi, K. Linka, C.J. Cyron, Viscoelastic constitutive artificial neural networks (vCANNs) - A framework for data-driven anisotropic nonlinear finite viscoelasticity, Journal of Computational Physics, 499:112704, 2024.
[3] K.P. Abdolazizi, R.C. Aydin, C.J. Cyron, K. Linka, Constitutive Kolmogorov–Arnold Networks (CKANs): Combining Accuracy and Interpretability in Data-Driven Material Modeling, Preprint, https://arxiv.org/abs/2502.05682, 2025.
The formulation and calibration of constitutive models remain challenging for materials exhibiting complex nonlinear elastic or inelastic behavior. In recent years, data-based or data-driven approaches have gained significant attention within the computational mechanics community to address these challenges. However, these methods typically require extensive datasets, often consisting of stress-strain relationships, which are fundamental in solid mechanics.
This contribution introduces a robust two-step methodology for the automated calibration of hyperelastic constitutive models, relying solely on experimentally measurable data. In the first step, data-driven identification (DDI) is employed to extract pairs of stress and strain states [1]. This approach requires only the application of boundary conditions and the displacement field, which can be obtained through full-field measurement techniques such as digital image correlation (DIC). The second step involves calibrating a physics-augmented neural network (PANN) [2] using the identified plane stress data. The PANN framework inherently satisfies the common principles of hyperelasticity by construction while offering remarkable flexibility. Furthermore, its implementation into finite element (FE) codes is straightforward.
To illustrate the effectiveness of the proposed methodology, we present several descriptive examples. Synthetic two-dimensional data are generated using a reference constitutive model and subsequently used to train the PANN. The calibrated PANN is then validated through three-dimensional FE simulations, where its results are benchmarked against the reference model.
[1] Leygue, A., Coret, M., Réthoré J., Stainier, L. and Verron, E., Data-based derivation of material response, Computer Methods in Applied Mechanics and Engineering 331 (2018).
[2] Linden, L., Klein, D. K., Kalina, K. A., Brummund, J., Weeger, O. and Kästner, M., Neural networks meet hyperelasticity: A guide to enforcing physics, Journal of the Mechanics and Physics of Solids 179 (2023).
Physics-enforced neural-network inelastic material models guarantee adherence to the laws of physics, even before training. There are currently two main approaches to enforce this, either by using neural networks with convexity restrictions to model thermodynamical potentials (known as Physics-Augmented Neural Networks, cf. [1]), or by embedding neural networks in the evolution laws such that positive dissipation is ensured by construction (cf. [2]). The similarities and differences between these approaches, in the context of plasticity and viscoelasticity, will be highlighted in this talk. Furthermore, the possibilities for discovering interpretable material models from these neural networks are discussed.
[1] Rosenkranz, M., Kalina, K. A., Brummund, J., Sun, W. C., & Kästner, M. (2024). Viscoelasticty with physics-augmented neural networks: model formulation and training methods without prescribed internal variables. Computational Mechanics.
https://doi.org/10.1007/s00466-024-02477-1
[2] K. A. Meyer and F. Ekre, “Thermodynamically consistent neural network plasticity modeling and discovery of evolution laws,” J. Mech. Phys. Solids, vol. 180, p. 105416, 2023,
https://doi.org/10.1016/j.jmps.2023.105416.
The characterization of mechanical properties is essential not only in traditional engineering applications but also in food science, where it serves as a powerful tool for texture profile analysis (TPA) of food. Standardized TPA is commonly used to extract properties such as hardness and springiness [1]. To modify food texture, additional ingredients, such as thickeners, are often introduced to alter peak forces during chewing. Often, these ingredients are added successively, which is not only time-consuming but also tedious. Studying food from an engineering perspective offers new ways to improve the properties. Mechanically, food can be treated as a visco-elastic solid, allowing its mechanical properties – such as relaxation behavior and stiffness – to be systematically analyzed. Instead of introducing artificial additives to achieve a desired texture, an alternative might be evaluating the inherent properties of existing ingredients through material modeling. Therefore, we aim to identify the material model that best describes the mechanical behavior of the examined food. Rather than presupposing a specific behavior, such as that described by the elastic Arruda-Boyce or inelastic Drucker-Prager models, we leverage the recent advancements in constitutive modeling using neural networks. By employing interpretable and physically consistent models, we aim to determine the most suitable material description in an unbiased manner. Using compression test data for tofu obtained from recent experiments, we investigate the presence of inelasticity through inelastic constitutive artificial neural networks (iCANNs) [2].
[1] Rahman, M. S., Al-Attabi, Z. H., Al-Habsi, N., & Al-Khusaibi, M. (2021). Measurement of instrumental texture profile analysis (TPA) of foods. Techniques to Measure Food Safety and Quality: Microbial, Chemical, and Sensory, 427-465.
[2] Holthusen, H., Linka, K., Kuhl, E., & Brepols, T. (2025). A generalized dual potential for inelastic Constitutive Artificial Neural Networks: A JAX implementation at finite strains. arXiv preprint arXiv:2502.17490.
Slender, beam-like structures with hyperelastic base materials and complex cross-sectional shapes exhibit highly nonlinear constitutive responses in terms of their effective strain measures and stress resultants. This behavior is determined by the base material as well as geometric nonlinearities on the cross-sectional scale. While classical constitutive models for beams are typically restricted to linear elasticity and rigid cross-sections, modeling hyperelastic beams with deformable cross-sections requires multiscale beam simulations. Here, the effective strain measures of the beam serve as inputs to a microscale simulation, which evaluates the cross-sectional deformation and the beam’s effective constitutive response. However, this results in massive computational overhead, diminishing the main purpose of using a beam formulation in the first place. In this contribution, we present physics-augmented neural network constitutive models for beams and explore their application as surrogates for speeding up hyperelastic, multiscale beam simulations. Effective strains and curvatures are used as inputs for feed-forward neural networks, which represent the hyperelastic beam potential. Forces and moments are obtained as the gradients of the potential, ensuring thermodynamic consistency. The potential is complemented with normalization terms guaranteeing stress and energy normalization. We further extend the model to transverse isotropy and a less restrictive point symmetry constraint. To improve scaling, data augmentation is applied. Lastly, we introduce a scalar parametrization for different ring-shaped cross-sections. All models are calibrated to data of circular or ring-shaped deformable hyperelastic cross-sections, showing excellent accuracy and generalization. The straightforward applicability of the models is demonstrated in isogeometric beam simulations.
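Schematically, with notation assumed here for illustration, the network potential W takes the effective strains \(\varepsilon\) and curvatures \(\kappa\) as inputs, and the stress resultants follow as its gradients,

\[ n = \frac{\partial W(\varepsilon, \kappa)}{\partial \varepsilon}, \qquad m = \frac{\partial W(\varepsilon, \kappa)}{\partial \kappa}, \qquad W(0,0) = 0, \quad \left.\frac{\partial W}{\partial(\varepsilon,\kappa)}\right|_{(0,0)} = 0, \]

where the last two conditions correspond to the energy and stress normalization terms mentioned above.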
The growing demand for high-performance materials has led to significant advancements in engineering composite structures, such as laminates and fiber-reinforced materials, as well as novel developments such as voided slabs for reinforced concrete. While these materials optimize the structural performance by taking advantage of the unique properties of the different constituents, developing effective material models based on classical phenomenological approaches remains challenging. Full 3D solutions, on the other hand, which resolve the complex microstructures, quickly encounter the bottleneck of high computational effort. Alternative approaches, as developed in, e.g., [1] for shell structures, employ multiscale techniques based on consistent homogenization schemes, which effectively capture the microstructural constitutive response at each macroscopic integration point. In this contribution, we seek to further reduce the computational cost by taking advantage of the consistent homogenization scheme in order to train an artificial neural network (ANN) material model for arbitrary history-dependent stress-strain paths. As ANNs are capable of capturing inelastic material behavior, see, e.g., [2], this work focuses on the application to high-dimensional stress-strain relations for shell structures with underlying viscoelastic microstructures. We demonstrate that a small database comprising uniaxial synthetic material tests on a representative microstructure in combination with derivative information is sufficient to train a feasible ANN material model [3]. Furthermore, we explore the limits of the approximation capabilities of the ANN by including non-physical parameters of the microstructure, e.g. volume fractions, as inputs to the material model. Studies include comparisons with full-scale and multiscale models, highlighting computational efficiency and practical feasibility in application to real-world engineering problems involving complex microstructures.
[1] Gruttmann, F. and Wagner, W.: A coupled two-scale shell model with applications to layered structures. International Journal for Numerical Methods in Engineering 94(13), pp. 1233-1254, 2013.
[2] Rosenkranz, M, Kalina, K.A., Brummund, J., Kästner, M.: A comparative study on different neural network architectures to model inelasticity. International Journal for Numerical Methods in Engineering 124(21), pp. 4802-4840, 2023.
[3] Geiger, J., Wagner, W., Freitag, S.: Multiscale modeling of viscoelastic shell structures with artificial neural networks. Computational Mechanics 2025, under review
In this talk, we present splitting methods for closed port-Hamiltonian systems, focusing on preserving their internal structure, particularly the dissipation inequality. Port-Hamiltonian systems are characterized by their ability to describe energy-conserving and dissipative processes. The physical properties are encoded in the algebraic structure of the system. Operator splitting takes advantage of the decomposition of the underlying problem into subproblems of profoundly different behavior. Classical splitting schemes of order p≥3 involve negative step sizes. For time-irreversible systems, such as port-Hamiltonian systems with dissipation, the positivity of the step sizes is essential. Negative step sizes can be avoided with the help of commutator-based methods. We introduce an energy-associated decomposition that exploits the system’s energy properties. Then, the numerical structure preservation depends crucially on the properties of the designed commutator. We set up skew-symmetric commutators for linear systems and discuss generalizations for non-linear systems. In particular, we present splitting schemes up to order four for special port-Hamiltonian systems. The proposed approaches are validated through theoretical analysis and numerical experiments.
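For a closed linear port-Hamiltonian system, written here in a generic form for illustration,

\[ \dot{x} = (J - R)\,\nabla H(x), \qquad J = -J^{\top}, \quad R = R^{\top} \succeq 0, \qquad \frac{\mathrm{d}}{\mathrm{d}t} H(x) = -\nabla H(x)^{\top} R\, \nabla H(x) \le 0, \]

an operator splitting integrates the conservative subflow \(\dot{x} = J\nabla H(x)\) and the dissipative subflow \(\dot{x} = -R\nabla H(x)\) separately; negative substeps applied to the dissipative part would violate the dissipation inequality, which is why positivity of the step sizes is essential here.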
Port-Hamiltonian systems are an extension of Hamiltonian systems that incorporate network structure and energy exchanges through ports, enabling the modeling of open, interconnected physical systems from various domains. The interconnection of network components often leads to differential-algebraic equations, which also include algebraic constraints, like in the case of Kirchhoff's laws. To ensure that these constraints are not violated, additional care is necessary when applying numerical methods to differential-algebraic equations.
In this talk, we discuss the application of discrete gradient methods to nonlinear port-Hamiltonian descriptor systems, with a specific focus on the case of semi-explicit differential-algebraic equations. Discrete gradient methods are particularly suitable for the time discretization of port-Hamiltonian systems since they are structure-preserving regardless of the form of the Hamiltonian, unlike other common methods whose structure-preserving characteristics are limited to quadratic Hamiltonians, like the implicit midpoint rule. We also present numerical results to demonstrate the capabilities of our methods. This is joint work with Philipp L. Kinon (KIT) and Philipp Schulze (TU Berlin).
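As a reminder of the key ingredient, stated here for the unconstrained case, a discrete gradient \(\bar\nabla H\) satisfies

\[ \bar\nabla H(x, y)^{\top} (y - x) = H(y) - H(x), \qquad \bar\nabla H(x, x) = \nabla H(x), \]

so that the one-step scheme \((x_{n+1} - x_n)/\Delta t = (J - R)\,\bar\nabla H(x_n, x_{n+1})\) reproduces the energy balance \(H(x_{n+1}) - H(x_n) = -\Delta t\, \bar\nabla H^{\top} R\, \bar\nabla H \le 0\) exactly, independently of the form of the Hamiltonian; the descriptor (differential-algebraic) case treated in the talk additionally enforces the algebraic constraints in each step.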
In this work we study the structure-preserving space discretization of port-Hamiltonian (pH) systems defined with implicit or differential constitutive relations. Using the concept of Stokes-Lagrange structure to describe these relations, we show how these can be reduced to a finite-dimensional Lagrange subspace of a pH system thanks to a structure-preserving Finite Element Method. As an application, we first consider the linear nanorod case in 1-D, which is described by a nonlocal (implicit) constitutive relation for which a Stokes-Lagrange structure along with boundary energy ports naturally occurs. Then, these ideas are applied to a linear nonlocal example of a seepage model in 2-D, namely the Dzektser equation, where distributed boundary energy ports have to be taken into account. In both cases we identify the underlying Stokes-Lagrange structure that describes the Hamiltonian of the system; a focus is given on the boundary energy ports that appear because of the differential operator inside the Hamiltonian. This procedure yields two pH systems defined with Stokes-Dirac (power routing) and Stokes-Lagrange (energy definition) structures. Lastly, these structures are discretized into discrete Dirac and Lagrange structures along with their corresponding discrete control ports. Moreover, the computational burden of the implicit constitutive relation is shown to be lower than that of its explicit counterpart, even though it requires solving a linear system.
Finally, we will extend our results to the nonlinear 2-D incompressible Navier-Stokes equation written in vorticity-stream function formulation. In this case, since a differential operator appears in the constitutive relations, it will be recast as a pH system defined with a Stokes-Lagrange structure and a Stokes-Dirac structure modulated by the energy variable.
Geometrically exact beam models provide a powerful way to accurately model slender elastic elements undergoing large deformations, which is essential in many applications such as medical or soft robotics, cable and pipe modeling, or even biophysics. Due to the underlying kinematic description based on a Cosserat continuum, these models possess an inherent geometric structure, as their configuration space has the structure of a Lie group – more precisely, it is an infinite-dimensional product of the Special Euclidean group SE(3).
Preserving this structure during space and time discretization brings several advantages: it allows the definition of discrete strain measures that are inherently objective and simultaneously enables interpolation schemes directly on the Lie group, it gives discrete numerical methods highly favorable properties (such as stability and accuracy at coarse discretizations), and it simplifies the integration of the discrete model into rigid-flexible multibody systems.
In this talk, we present a structure-preserving approach for the space and time discretization of geometrically exact beam models in the framework of variational integrators, which provide an elegant way to obtain highly stable, accurate, and efficient fully discrete models. The core of the approach is to define discrete velocity and deformation variables in the Lie algebra of SE(3), which are used to compute the group updates between space and time nodes using the retraction map [1], a generalization of the Lie group exponential map. We further illustrate how switching to a relative-kinematic description of the discretized beam – i.e., using discrete deformation variables as states – brings important practical advantages [2]: The resulting discrete model has a minimum number of states, can be integrated into rigid-flexible multibody systems in a remarkably elegant way, and highly stiff deformation modes can be excluded without algebraic constraints, which is essential for numerical performance.
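Schematically, with notation assumed here for illustration, the configuration update between two nodes (in space or time) reads

\[ q_{k+1} = q_k \, \tau(h\, \xi_k), \qquad q_k \in SE(3), \quad \xi_k \in \mathfrak{se}(3), \]

where \(\tau\) is a retraction map, e.g. the exponential map on SE(3), \(h\) is the node spacing, and \(\xi_k\) collects the discrete velocity or deformation variables in the Lie algebra.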
[1] F. Demoures, F. Gay-Balmaz, S. Leyendecker, S. Ober-Blöbaum, T. S. Ratiu, and Y. Weinand, "Discrete variational Lie group formulation of geometrically exact beam dynamics," Numer. Math., vol. 130, no. 1, pp. 73–123, 2015,
doi: 10.1007/s00211-014-0659-4.
[2] M. Herrmann and P. Kotyczka, "Relative-kinematic formulation of geometrically exact beam dynamics based on Lie group variational integrators," Computer Methods in Applied Mechanics and Engineering, vol. 432, p. 117367, 2024,
doi: 10.1016/j.cma.2024.117367.
The introduction of input-to-state stability (ISS) in control systems research has resulted in numerous investigations into its properties, particularly for finite-dimensional systems. While ISS for these finite-dimensional systems is well-established, its extension to infinite-dimensional systems still presents unresolved challenges. One of these challenges is the determination of ISS gain functions for problems on complex geometries. This work establishes connections between ISS gain functions of finite-dimensional numerical approximations and those of the corresponding continuous systems, addressing both bounded and unbounded control operators. We demonstrate the applicability of these results to dissipative systems.
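For reference, a system is input-to-state stable if its state satisfies an estimate of the form

\[ \|x(t)\| \le \beta\big(\|x(0)\|, t\big) + \gamma\Big(\sup_{0 \le s \le t} \|u(s)\|\Big), \qquad t \ge 0, \]

with \(\beta\) of class \(\mathcal{KL}\) and \(\gamma\) of class \(\mathcal{K}\); \(\gamma\) is the ISS gain function whose relation between the numerical approximation and the continuous system is studied in this work.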
I will show how techniques in Geometric Numerical Integration can be exploited for data-driven system identification. I will demonstrate that exploiting Hamiltonian or variational structure can lead to increased accuracy in system identification by machine learning. Moreover, exploiting data-driven symmetries can improve the extrapolation performance of machine-learned models and enable the detection of conservation laws.
In this talk, we consider a class of variational problems with integral functionals involving nonlocal gradients. These models have been recently proposed as refinements of classical hyperelasticity, aiming for an effective framework to capture discontinuous and singular material effects, such as fracture. Specific to our set-up is that the nonlocal gradient has a space-dependent interaction range that vanishes at the boundary of the reference domain. In particular, the nonlocal operator only depends on values inside the domain and localizes to the classical gradient on the boundary. Our main contribution consists of a comprehensive treatment of the associated Sobolev spaces, including the analysis of a trace operator and the validity of a Poincaré inequality. As an application, we obtain the existence of minimizers for quasiconvex or polyconvex integral functionals involving these heterogeneous nonlocal gradients subject to local boundary conditions of Dirichlet-, Neumann- or mixed-type.
We present some recent developments in the derivation of a model for three-dimensional crystal plasticity via a Gamma convergence approach. It has long been understood that, in crystalline materials such as metals, a major role in the mechanisms involved in plastic deformations is played by a particular kind of defects of the underlying crystal lattice called dislocations. In the two-dimensional setting in [1], the authors studied a linear quadratic elastic energy in the presence of a diverging number of point dislocations, deriving a strain gradient plastic energy in the sense of Gamma convergence assuming well separation of the defects. The nonlinear case was later studied in [2]. To tackle the three-dimensional case, in [3] we studied an intermediate problem consisting in a (rescaled) version of a line tension energy that allows us to understand the behaviour of the self-energy of the dislocations as the total number of defects diverges. The two-dimensional counterpart of this result was originally obtained in [4]. As a byproduct of this analysis, we obtain a density result for 1-rectifiable fields without boundary in the class of divergence-free measures. In this seminar, we present the results contained in [3] and some recent developments obtained in collaboration with Roberta Marziani and Adriana Garroni for the nonlinear three-dimensional elastic energy in the quadratic case.
[1] Garroni, A., Leoni, G., and Ponsiglione, M. "Gradient theory for plasticity via homogenization of discrete dislocations." Journal of the European Mathematical Society 012.5 (2010): 1231-1266
[2] Müller, S., Scardia, L., and Zeppieri, C. I. "Geometric Rigidity for Incompatible Fields, and an Application to Strain-Gradient Plasticity". Indiana University Mathematics Journal, 63(5) (2014): 1365–1396
[3] M. Fortuna, A. Garroni. "Homogenization of line tension energies". Nonlinear Analysis, 250 (2025): 113656
[4] Conti, S., Garroni, A. and Müller, S. “Homogenization of vector-valued partition problems and dislocation cell structures in the plane.” Bollettino dell'Unione Matematica Italiana, 10 (2017): 3-17.
In this talk, I will study a discrete model of brittle damage in different regimes where the damaged zone concentrates on vanishingly small sets. We will identify the nature of the effective limit models obtained by means of an asymptotic analysis based on the Gamma-convergence of the total energies. I will recall the mechanical model of brittle damage introduced by Francfort and Marigo (1993), specified to the discrete setting where the total energies are restricted to piecewise affine continuous displacements.
We study the (lower) scaling behavior of a class of compatible two-well problems for higher order, homogeneous linear differential operators. To this end, we first deduce general lower scaling bounds which are determined by the vanishing order of the symbol of the operator on the unit sphere in the direction of the associated element in the wave cone. We complement the lower bound estimates by a detailed analysis of the two-well problem for generalized (tensor-valued) symmetrized derivatives with the help of the (tensor-valued) Saint-Venant compatibility conditions. In two spatial dimensions for highly symmetric boundary data (but arbitrary tensor order m) we provide upper bound constructions matching the lower bound estimates. This illustrates that for the two-well problem for higher order operators new scaling laws emerge which are determined by the Fourier symbol in the direction of the wave cone.
In this talk, we explore a scaling law for the infimal energy of a one-dimensional nonlocal variant of the Canham-Helfrich functional in terms of problem parameters. This functional models the formation of periodic structures in cellular membranes, known as lipid rafts, by incorporating a coupling term between the order parameter and the local curvature of the membrane. A key tool in establishing the scaling result is a new set of nonlinear interpolation inequalities in fractional Sobolev spaces. Some of these inequalities, which bound the fractional Sobolev seminorm in terms of a Modica-Mortola energy, are used to obtain an Ansatz-free lower bound for the functional. On the other hand, to show the upper bound we use suitable periodic test functions, which depend on the parameter regime. Additionally, some results for the two-dimensional functional will be presented. The talk is based on joint work with Barbara Zwicknagl and Janusz Ginster.
We discuss energy scaling laws for simplified, singularly perturbed, double-well nucleation problems confined in a half-space, for an inclusion of fixed volume. Motivated by models for boundary nucleation of a single-phase martensite inside a parental phase of austenite, our main focus in this nonlocal isoperimetric problem is how the relationship between the rank-1 direction and the orientation of the half-space influences the energy scaling with respect to the fixed volume of the inclusion. This is joint work with Antonio Tribuzio.
We consider N spherical particles suspended in a Stokes flow in R³. The particles satisfy a no-slip boundary condition and are subject to constant gravity with a mean-field scaling. The dynamics of the particles is modeled by Newton's laws. We discuss the rigorous derivation of the Vlasov-Stokes equation in the limit N →∞. The main difficulty arises from the implicit and singular particle interaction, which we can only approximate well by an effective interaction as long as we have sufficient control on the inter-particle distances. Currently, we are able to achieve this in 1) the monokinetic setting and 2) when the inertia of the particles vanishes as N →∞. The proof is based on an adaptation of an approach of Hauray based on 2-Wasserstein distances, and on a stability estimate for the Vlasov-Stokes equation that builds on the energy dissipation of the system.
We consider the homogenization of the time-dependent compressible Navier–Stokes equations in the low Mach number regime in domains with critically scaled perforations. In analogy to a classical result of Allaire (ARMA90) for incompressible fluids, we obtain in the limit the incompressible Navier–Stokes system with an additional Brinkman friction term.
Recent years have seen the derivation of different kinds of mean-field models for the behaviour of particles in suspensions. This is typically based on the relatively mild singularity of the interaction. However, in models where the orientations of the particles interact to leading order, the typical strategy runs into problems. We present here a negative result that suggests that no mean-field result for the behaviour of particle orientations can exist. In particular, we present the specific model of spherical inertialess particles suspended in a Stokes flow in the three-dimensional torus. The particles perturb a linear extensional flow due to their rigidity constraint. We show that the macroscopic distribution of the orientations is not enough information to predict the evolution even approximately.
The Becker-Döring equations are the simplest discrete coagulation-fragmentation model, where the cluster interaction only occurs through the monomers. Nonetheless, they are able to describe the phase transition of the underlying physical system and offer a rich mathematical study. We consider a straightforward extension, where clusters are built from two different types of monomers. The added dimension introduces surprising amounts of complexity, both physically and mathematically. We investigate the resulting phase transition and propose a rigorous approach via an entropy-entropy dissipation estimate.
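For reference, the classical single-species Becker-Döring system for the cluster concentrations \(c_n\) reads

\[ \dot{c}_n = J_{n-1} - J_n \ (n \ge 2), \qquad J_n = a_n c_1 c_n - b_{n+1} c_{n+1}, \qquad \dot{c}_1 = -J_1 - \sum_{n \ge 1} J_n, \]

with coagulation and fragmentation rates \(a_n, b_n\); the extension considered here replaces the scalar cluster size \(n\) by a pair of indices counting the two monomer species.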
We investigate the dynamics of two immiscible viscous fluids in a two-dimensional domain, governed by the Stokes-transport system. Our focus lies on the evolution of the free interface separating the fluids. Using a contour dynamics approach, we derive a nonlinear, nonlocal equation for the free boundary and exploit it to study the stability and long-time behavior of the system. This work is in collaboration with F. Gancedo and R. Granero-Belinchón.
Current research is shifting from minimizing or eliminating residual stresses in components to intentionally introducing them during the manufacturing process. Hot bulk forming enables the controlled modification of residual stresses by leveraging thermal, mechanical, and metallurgical interactions in order to evoke, e.g., improved durability. Typically, hot bulk forming involves heating the material, here 100Cr6, above 1000°C, leading to full austenitization and an assumed stress-free initial state, followed by forming and cooling down to room temperature. Rapid cooling, e.g. by quenching in water, initiates a diffusionless phase transformation from austenite to martensite in the considered material [1]. Hot bulk forming with adapted cooling strategies helps to avoid the need for costly and time-consuming subsequent steps such as heat treatments.
This contribution focuses on modeling the cooling process and the associated phase transformation generating residual stresses at both the micro- and macroscale of the component. To differentiate between the various types of residual stresses, which are classified based on the scale at which they occur, two-scale finite element (FE²) simulations may be conducted [2,3]. These simulations require a representative volume element that accurately reflects the phase transformation and other microscopic features. Single-scale finite element simulations are first performed to determine the macroscopic thermal and mechanical conditions, which then serve as the driving force for a subsequent thermo-mechanically coupled multi-phase-field model at the microscale [4]. This approach allows for the evaluation and analysis of martensitic evolution and microscopic residual stresses.
References
[1] B.-A. Behrens, J. Schröder, D. Brands, K. Brunotte, H. Wester, L. Scheunemann, S. Uebing and C. Kock. Numerische Prozessauslegung zur gezielten Eigenspannungseinstellung in warmmassivumgeformten Bauteilen unter Berücksichtigung von Makro- und Mikroskala, Forschung im Ingenieurwesen (Engineering Research), 85, 757-771, 10.1007/s10010-021-00482-x, (2021).
[2] S. Uebing, D. Brands, L. Scheunemann and J. Schröder. Residual stresses in hot bulk formed parts: microscopic stress analysis for austenite-to-martensite phase transformation, Archive of Applied Mechanics, 91, 3603–3625, (2021).
[3] J. Schröder. A numerical two-scale homogenization scheme: the FE2-method. In J. Schröder and K. Hackl (Eds.), Plasticity and Beyond - Microstructures, Crystal-Plasticity and Phase Transitions, Volume 550 of CISM Courses and Lectures, 1–64. Springer, (2014).
Phase-transforming solids, including shape memory alloys (SMAs), represent a class of multifunctional materials renowned for their remarkable thermo-mechanical properties, such as superelasticity and shape recovery. These materials undergo microstructure evolution during stress- and temperature-induced phase transformations between austenite and multi-variant martensite. The energy dissipation associated with these transformations gives rise to hysteretic behavior under thermo-mechanical cyclic loading. Notably, experiments have revealed a rate-independent response under quasi-static conditions, with hysteresis loops maintaining a finite width even at extremely low loading rates. The phase-field method has become a powerful tool for modeling complex interface topologies that evolve during phase transformations. Despite numerous important contributions addressing the continuum mechanical description of SMAs, see [1] and the references therein, many existing phase-field models employ rate-dependent dissipation formulations, limiting their ability to accurately replicate the thermo-elastic hysteresis loops observed under quasi-static conditions. This work focuses on a recently developed thermomechanically coupled and variationally consistent phase-field approach that addresses these limitations [2]. The study provides a comprehensive guide for implementing phase-field models capable of capturing the rate-independent hysteretic behavior of phase-transforming solids. The approach integrates rate-independent and rate-dependent driving force formulations while incorporating temperature-dependent local energetic minima to reflect experimentally observed behaviors, such as sigmoidal undercooling hysteresis and microstructural evolution. The practical applicability of this approach is demonstrated in two-dimensional finite element simulations, which include studies of microstructure formation in twinned martensite and the evolution of remanent microstructure under cyclic loading conditions. The influence of driving force thresholds is examined, and it is shown how model parameters can be calibrated using experimentally observed martensite start and finish temperatures. Finally, the work outlines future extensions of the framework to multi-phase and multi-variant alloy systems. This methodology provides a deeper understanding of the complex microstructural properties and behavior of SMAs, facilitating the design of SMA-based devices.
References
[1] L. Xu, T. Baxevanis, D. C. Lagoudas, 2019, Smart Mater. Struct. 28(7), 074004.
[2] O. El Khatib, V. von Oertzen, S. A. Patil, B. Kiefer, 2022, Proc. Appl. Math. Mech. 23(2), e202300273.
Magnesium is distinguished by a highly anisotropic inelastic deformation involving a profuse activity of deformation twinning. Modeling deformation twinning poses additional challenges in comparison to plastic slip due to the critical dissimilarities in the underlying mechanisms and characteristics. A finite-strain model of coupled twinning and plastic slip is formulated by combining the phase-field method and crystal plasticity. A distinct feature of the model lies in the treatment of the kinematics of deformation twinning, which, rather than being formulated as a shear-based deformation, is expressed as a sequential operation of a volume-preserving stretch and a rigid-body rotation. The stretch-based kinematics is particularly relevant when conjugate twinning systems are crystallographically equivalent.
Instrumented micro/nano-indentation technique has been widely applied to characterize the mechanical properties of magnesium. This is usually done through the analysis of the load-indentation depth response, surface topography, and less frequently, the post-mortem in-depth microstructure. Experimental limitations, however, hinder the real-time observation of the evolving twin microstructure. Motivated by this, our model is employed to simulate the evolution of the twin microstructure and its interaction with plastic slip in a magnesium single crystal subjected to indentation. Special emphasis of the study is laid on two aspects: orientation-dependent inelastic deformation and indentation size effects. The 2D simulations reveal several interesting effects, in agreement with the existing experimental data, including an intriguing twin microstructure at large spatial scales. To further investigate size effects, we extend the model by incorporating gradient-enhanced crystal plasticity, and re-examine the notion of ‘smaller is stronger’.
Technical surfaces are rarely perfectly smooth; instead, they typically exhibit some degree of roughness. Furthermore, as micro-production techniques continue to advance, it has become possible to manufacture geometrical surface structures at the microscale, posing challenges for wetting models, which are often designed for smooth surfaces. In [1, 2], a phase-field model for surface wetting was proposed and serves as the foundation for this investigation. The model's ability to handle non-smooth surfaces was examined in [3, 4]. It was observed that while the model struggles to provide reliable results for randomized rough surfaces, it performs well for sinusoidally shaped surfaces within the investigated scope. In this study, we propose a simple geometrical model that represents the droplet as part of a circle intersecting a sine function. This model is used to generate potential configurations for a given set of parameters, which are then compared to the results of phase-field simulations to validate the simulations.
[1] F. Diewald, C. Kuhn, M. Heier, M. Horsch, K. Langenbach, H. Hasse, and R. Müller, 2017, Surface wetting with droplets: A phase field approach., PAMM 17(1), 501–502.
[2] F. Diewald, M. Heier, M. Horsch, C. Kuhn, K. Langenbach, H. Hasse, and R. Müller, 2018, Three-dimensional phase field modeling of inhomogeneous gas-liquid systems using the PeTS equation of state, The Journal of Chemical Physics 149(6), 064701.
[3] J. Wolf, Y. Flieger, F. Diewald, K. Langenbach, S. Stephan, H. Hasse, and R. Müller, 2023, Wetting of rough surfaces in a phase field model. PAMM, 22: e202200275.
[4] J. Kunz, L. Leroux, H. Hasse, R. Müller, 2024, Behavior of a phase field model for wetting on structured surfaces. PAMM, 24, e202400198.
Phase-field modelling of fracture is gaining popularity in the fracture mechanics community, particularly for its ability to generate cracks with arbitrarily complex geometries and topologies in two and three dimensions without the need for ad hoc criteria. The model first introduced in [1] has a clear connection with Griffith’s propagation criterion via Gamma convergence tools, and recent results [2] have shown that, in addition to propagation, it can quantitatively predict crack nucleation for mode-I loading. However, the initial model cannot flexibly reproduce the experimentally measured strengths under multiaxial loads. Moreover, a modification is necessary to avoid the interpenetration of crack surfaces in compression and to reflect the physical asymmetry of fracture behaviour between tension and compression [3].
In this presentation, staying within the realm of variational approaches, we discuss existing modifications based on energy decomposition, their shortcomings, and the requirements for an effective energy decomposition method to model crack nucleation and propagation. Finally, we introduce a new energy decomposition, the star-convex model, that solves (at least partially) the issues with the existing ones [4].
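For context, in standard notation (and not yet the star-convex construction), energy-decomposition models degrade only a 'tensile' part of the strain energy density,

\[ \mathcal{E}_{\ell}(u, \alpha) = \int_{\Omega} \Big[ g(\alpha)\, \psi^{+}\big(\varepsilon(u)\big) + \psi^{-}\big(\varepsilon(u)\big) \Big]\, \mathrm{d}x + \frac{G_c}{4 c_w} \int_{\Omega} \Big( \frac{w(\alpha)}{\ell} + \ell\, |\nabla\alpha|^2 \Big)\, \mathrm{d}x, \]

with degradation function \(g(\alpha) = (1-\alpha)^2\), local dissipation function \(w(\alpha)\), normalization constant \(c_w\), and regularization length \(\ell\); the choice of the split \(\psi = \psi^{+} + \psi^{-}\) is precisely what distinguishes the existing decompositions from the star-convex model discussed here.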
REFERENCES
[1] B. Bourdin, G. A. Francfort and J. J. Marigo, Numerical experiments in revisited brittle fracture. Journal of the Mechanics and Physics of Solids 48 (2000) 797–826.
[2] E. Tanné, T. Li, B. Bourdin, J. J. Marigo and C. Maurini. Crack nucleation in variational phase-field models of brittle fracture. Journal of the Mechanics and Physics of Solids 110 (2018) 80–99.
[3] H. Amor, J. J. Marigo and C. Maurini. Regularized formulation of the variational brittle fracture with unilateral contact: Numerical experiments. Journal of the Mechanics and Physics of Solids 57 (2009) 1209–1229.
[4] F. Vicentini, C. Zolesi, P. Carrara, C. Maurini, L. De Lorenzis. On the energy decomposition in variational phase-field models for brittle fracture under multi-axial stress states. Preprint (2023): hal-04231075.
In recent times, material models that rely on neural network-based methods have become increasingly popular, with many applications to soft solids. Moreover, for the modelling of fracture phenomena, the phase-field approach has proven to be a powerful tool. In this contribution, we combine these two concepts.
We present a hybrid phase-field model of fracture at finite deformation and its application to the response of quasi-incompressible, hyperelastic rubber [1]. The key idea is to combine the predictive capability of the phase-field approach to fracture with a physics-augmented neural network (PANN) that serves as a flexible, high-fidelity model of the response of the bulk material. To this end, recently developed neural network approaches are modified to better meet specific requirements of the phase-field framework. In particular, a novel invariant-based architecture for a hyperelastic PANN is presented, that enables a decoupled description of the volumetric and the isochoric response. This is of particular interest when modelling fracture of soft quasi-incompressible solids with the phase-field approach, where a weakening of the incompressibility constraint in fracturing material may be required. It can be easily proven that the PANN fulfils all desirable properties of hyperelastic potentials by construction.
The proposed model is implemented in the finite element framework FEniCSx. Its performance is studied by means of several numerical examples and discussed with respect to experimental data.
References
[1] F. Dammaß, K. A. Kalina, M. Kästner: Neural networks meet phase-field: A hybrid fracture model, under preparation.
Failure of materials and structural components has been an important issue for as long as man-made constructions have existed. The section focuses on damage mechanics and fracture mechanics for all kinds of solid materials and structures. It aims at bringing together related original research covering experimental observations, modeling approaches and numerical techniques. Moreover, material failure is a complex process, which may be considered on different length scales ranging from the atomistic scale up to the macro scale of engineering structures. Since the failure behavior of materials strongly depends on the loading situation, contributions addressing static, dynamic and multi-axial failure are welcome, as are contributions on fatigue problems.
We explore experimentally and computationally an unconventional class of fractures in elastomers: sideways cracks. Under certain conditions, a crack propagates in a (sideways) direction parallel to the loading direction rather than perpendicularly in the (forward) direction of the notch. Then, the crack arrests, and the material ahead of the crack can be further deformed, enabling giant stretchability. This fracture mode results from higher resistance to propagation perpendicular to the principal stretch direction, driven by deformation-induced anisotropy. Our research efforts show that the tendency of a crack to propagate sideways in the two components of Elastosil P7670 increases with the degree of cross-linking. We show that fracture anisotropy can be modulated during the synthesis of the polymer through the mixing ratio of the raw phases. To assist the investigations, we construct a novel phase-field model for sideways fracture where the critical energy release rate is related to the crosslinking degree. Unlike existing approaches in the literature, we propose a phenomenological model that integrates deformation-induced fracture anisotropy as the fundamental mechanism driving lateral cracking. Our approach leaves the crack surface density (γ) unaltered and introduces an anisotropic critical energy release rate in the form of a material function Gc = Gc (F, ϕ), with F and ϕ the deformation gradient and damage order parameter, respectively. Eventually, we propose a roadmap with composite soft structures with low and highly crosslinked phases that allow for control over fracture, arresting and/or directing the fracture. The smart combination of the phases enables soft structures with enhanced fracture tolerance and reduced stiffness.
References:
M.A. Moreno-Mateos, P. Steinmann. “Crosslinking degree variations enable programming and controlling soft fracture via sideways cracking”. Accepted, In Press in Npj Computational Materials.
The phase-field approach to fracture is widely recognized as an effective method for simulating crack initiation and propagation. The standard phase-field method is based on the crack propagation model formulated by Francfort and Marigo (1998), which is based on energy minimization and was later regularized by Bourdin et al. (2000). This study compares simple experiments with phase-field simulations. In the experiments, the critical force and crack path are affected by uncertainties. To evaluate such uncertainties, Monte-Carlo simulations are performed with a simplified setup. Furthermore, a series of numerical examples demonstrates the effects of non-perfect experimental conditions. On the numerical side, deterministic phase-field fracture simulations are performed for the complete setup, and the influence of model uncertainties, like the non-local length parameter and the crack driving energy formulation, is evaluated. In general, we found that the uncertainties of straightforward phase-field fracture simulations are of the same order of magnitude as the uncertainties of the experiment.
Cohesive zone models provide a promising framework for modeling nonlinear fracture processes. Unlike brittle fracture, where the crack energy remains constant, cohesive zone models assume that the energy depends on the crack opening. This dependency gives rise to a traction-separation law, which describes the traction as a function of the crack opening.
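As a simple illustration (not the specific law of the model presented below), a linear-softening traction-separation law with strength \(t_c\) and critical opening \(\delta_c\) reads

\[ t(\delta) = t_c \Big( 1 - \frac{\delta}{\delta_c} \Big)_{+}, \qquad G_c = \int_0^{\delta_c} t(\delta)\, \mathrm{d}\delta = \tfrac{1}{2}\, t_c\, \delta_c, \]

so that the dissipated energy depends on the crack opening \(\delta\), in contrast to brittle fracture, where it equals the constant \(G_c\) from the onset.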
In the finite element approximation of fractures, two main approaches can be identified. The first approach treats the crack as a sharp interface, which can be modeled using interface elements or by embedding sharp interfaces within elements (e.g., XFEM). The second approach approximates the crack surface as diffuse or smeared, assigning a finite interface thickness. The phase-field method offers such a diffuse representation by introducing an additional order parameter.
A notable phase-field method for cohesive fracture was proposed by Conti et al. [1] and has since been further developed (e.g., [2,3,4]). This model is supported by a proven Gamma-convergence result and requires only two field variables: the displacement field and an order parameter. In this talk, the phase-field model for cohesive fracture will be extended to incorporate finite strain. Furthermore, the MCR effect, which limits crack evolution to tensile states, will be included.
S. Conti and M. Focardi and F. Iurlano, “Phase field approximation of cohesive fracture models”, Annales de l’Institut Henri Poincaré C, Analyse non linéaire Vol. 33, pp. 1033–1067, (2016).
F. Freddi and F. Iurlano, “Numerical insight of a variational smeared approach to cohesive fracture”, Journal of the Mechanics and Physics of Solids, Vol. 98, pp. 156–171, (2017).
H. Lammen and S. Conti and J. Mosler, “A finite deformation phase field model suitable for cohesive fracture”, Journal of the Mechanics and Physics of Solids, Vol. 178, 105349, (2023).
H. Lammen and S. Conti and J. Mosler, “Approximating arbitrary traction-separation-laws by means of phase-field theory – mathematical foundation and numerical implementation”, Journal of the Mechanics and Physics of Solids, Accepted for publication, (2025).
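As an illustration of the traction-separation concept underlying the cohesive phase-field models above, the following Python sketch evaluates a generic exponential traction-separation law and verifies that the area under the curve equals the associated fracture energy; the specific law and the parameter values are illustrative, not those of the cited model:

import numpy as np

# Generic exponential traction-separation law: t(d) = t_max * (d/dc) * exp(1 - d/dc).
# The dissipated energy (area under the curve) is Gc = e * t_max * dc.
t_max, dc = 2.0, 0.05            # assumed cohesive strength and characteristic opening

def traction(d):
    return t_max * (d / dc) * np.exp(1.0 - d / dc)

d = np.linspace(0.0, 20.0 * dc, 20001)
t = traction(d)
gc_numeric = np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(d))   # trapezoidal rule
gc_closed = np.e * t_max * dc
print(f"Gc from numerical integration : {gc_numeric:.4f}")
print(f"Gc from closed form e*t_max*dc: {gc_closed:.4f}")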
Hyperelastic materials play a crucial role in current applications due to their unique properties, especially their flexibility, stretchability, and resilience based on incompressibility. However, the incompressibility constraint is an unavoidable challenge in the modeling of most hyperelastic materials, and makes it even more challenging to successfully explain or reproduce crack propagation via numerical simulations.
Mixed formulations, e.g. the so-called Q1P0 approach [1], are commonly used to avert locking issues due to incompressibility at finite deformations. In the present study, the Q1P0 approach is derived based on the Hu-Washizu three-field variational principle [2], yielding a single-field displacement formulation. Afterwards, the phase-field approach is incorporated to predict finite strain fracture of nearly incompressible hyperelastic materials.
Following the basic idea in [3], the phase-field is coupled to release the incompressibility constraint in damaged material, while ensuring incompressibility for intact material. With a consistently derived and condensed description, the multi-field formulation is reduced to a standard displacement-phase-field approach. A special phase-field degradation function is particularly incorporated into the volumetric contribution to release the pressure term faster and consequently to allow crack opening, which mediates the innate contradiction between the incompressibility constraint and diffuse crack opening. Subsequently, several numerical examples are presented to illustrate the characteristics of the proposed formulation.
References
[1] J.C. Simo, R.L. Taylor and K.S. Pister, 1985, Variational and projection methods for the volume constraint in finite deformation elasto-plasticity, Computer Methods in Applied Mechanics and Engineering 51, pp. 177–208.
[2] G.A. Holzapfel, 2000, Nonlinear solid mechanics: a continuum approach for engineering, Wiley, Chichester.
[3] B. Li, N. Bouklas, 2020, A variational phase-field model for brittle fracture in polydisperse elastomer networks, International Journal of Solids and Structures 182, pp. 193–204.
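The idea of degrading the volumetric energy contribution faster than the isochoric one, as described above, can be sketched as follows in Python; the Neo-Hookean split and the degradation exponents are assumptions made for illustration, not the formulation of the talk:

import numpy as np

# Nearly incompressible Neo-Hookean energy with an isochoric/volumetric split;
# the volumetric part is degraded with a higher power so that the pressure is
# released faster and the diffuse crack can open.
mu, kappa = 1.0, 1.0e3          # shear modulus and (large) bulk modulus

def energy(F, phi, n_vol=4):
    J = np.linalg.det(F)
    Fbar = J**(-1.0 / 3.0) * F                 # isochoric part of F
    psi_iso = 0.5 * mu * (np.trace(Fbar.T @ Fbar) - 3.0)
    psi_vol = 0.5 * kappa * (J - 1.0)**2
    g_iso = (1.0 - phi)**2                     # standard quadratic degradation
    g_vol = (1.0 - phi)**n_vol                 # faster volumetric degradation
    return g_iso * psi_iso + g_vol * psi_vol

F = np.diag([1.2, 0.95, 0.95])                 # combined volumetric and isochoric stretch
for phi in (0.0, 0.5, 0.9):
    print(f"phi = {phi:.1f}: psi = {energy(F, phi):.5f}")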
Concrete and cement-based materials are known to exhibit brittle failure. Thus, fiber reinforcement is used to enhance the post-cracking properties of the material. In this regard, polymer fibers arise as an environmentally friendlier alternative to steel fibers. However, the effect of their characteristic viscoelastic behavior on the performance of fiber-reinforced concrete requires further research. Therefore, the influence of the viscoelastic behavior of the polymer fibers on the failure of high-performance concrete (HPC) is investigated. The three-point bending boundary value problem is used with two loading stages: a short monotonic loading stage is followed by a constant load. The test is repeated at different load levels to reach the non-linear viscoelastic regime of the fibers. For that purpose, the developed phenomenological model for fiber-reinforced high-performance concrete, see [1], combined with the non-linear viscoelastic Schapery model from [2], see also [3], is implemented. The failure behaviors of the HPC in tension and compression are governed by step-wise linearly approximated degradation functions, see [4]. Orientation distribution functions (ODF) with different distributions and orientations are implemented, as proposed in [5]. Finally, the performance of the numerical model is discussed using load-CMOD (crack mouth opening displacement) curves.
References
[1] M. Pise, D. Brands, J. Schröder. Development and calibration of a Phenomenological Material Model for Steel-Fiber-Reinforced-High-Performance Concrete Based on Unit Cell Calculations. Materials, 17(10), 2247, 2024.
[2] R.A. Schapery. On the characterization of nonlinear viscoelastic materials. Polymer Engineering & Science, 9, 295-310, 1969.
[3] M.A. Margalho de Barros, S. Govindjee, J. Schröder. A note on the nonlinear viscoelastic Schapery model applied to PMMA and asphalt. Proceedings in Applied Mathematics and Mechanics, 24, e202400214, 2024.
[4] J. Schröder, M. Pise, D. Brands, G. Gebuhr, S. Anders. Phase-field modeling of fracture in high performance concrete during low-cycle fatigue: Numerical calibration and experimental validation. Computer Methods in Applied Mechanics and Engineering, 398, 115181, 2022.
[5] G. Gebuhr, M. Pise, S. Anders, D. Brands, J. Schröder. Damage Evolution of Steel Fibre-Reinforced High-Performance Concrete in Low-Cycle Flexural Fatigue: Numerical Modeling and Experimental Validation. Materials, 15(3), 1179, 2022.
In recent years, the phase-field method for fracture has gained considerable attention. It has become the most frequently used method for the simulation of quasi-static and dynamic fracture processes for brittle and ductile materials. Its biggest advantages are the simplicity of the implementation and the fact that it can capture crack propagation, branching, coalescence and initiation without the evaluation of additional criteria in a post-processing step. Despite its great success, the classical phase-field method has one severe disadvantage if standard Lagrange finite elements are employed. Due to the necessity of very fine meshes in the vicinity of an existing crack and its front, the computational effort is very high. This computational effort is further amplified by the highly nonlinear behaviour even for the simulation of linear elastic fracture mechanics processes. The extended phase-field method (XPFM) combines the phase-field method for fracture with concepts from the extended/generalized finite element method. The concept aims at a significant reduction of computational effort in comparison to the standard phase-field method while keeping the advantages of not having to explicitly track the crack geometry and introduce additional crack propagation criteria. The XPFM is based on a transformed phase-field ansatz in combination with an enriched displacement field ansatz which depends on the phase-field. In the current approach, the enrichment function of the displacement field is formulated in a discrete way which avoids the potentially difficult calculation of the crack geometry. The enrichment function is calculated by solving phase-field dependent Laplacian equations on the element level which can be done in an efficient way. In this contribution, the XPFM, its algorithmic treatment as well as its application to common academic examples are presented.
[1] Loehnert, S.; Krüger, C.; Klempt, V.; Munk, L.: An enriched phase-field method for the efficient simulation of fracture processes. Computational Mechanics, vol. 71(5), pp. 1015-1039 (2023)
[2] Krüger, C.; Curosu, V.; Loehnert, S.: An Extended Phase-Field Approach for the Efficient Simulation of Fatigue Fracture Processes. International Journal for Numerical Methods in Engineering, vol. 125, e7422
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
The problem of sliding of an elastic rod partially inserted into a sleeve is attracting increasing attention from researchers. Dynamics, stability and locomotion are just some of the mechanical phenomena intrinsic to the set-up, whose modelling requires novel analytical and numerical techniques. The behavior of the structure is strongly determined by a configurational force acting between the rod and the sleeve at the transition point separating the overlapping region (rod inside the sleeve) and the free segment. While the investigation of such Eshelby-like forces (named so because of the analogy to fracture mechanics) in the context of structural mechanics dates back to the mid-1980s, their role in various problems featuring relative sliding has been understood relatively recently.
In the present contribution, we release the assumption that the sleeve is rigid and consider frictionless contact between an elastic rod and a flexible sleeve, also treated as a rod. Because of the partial insertion, we deal with a compound beam with piecewise constant bending stiffness (as both the rod and the sleeve contribute to the bending stiffness in the overlapping region) and moving boundaries between the three segments. Configurational forces at the transition points repel the rod and the sleeve from each other, eventually causing full ejection when the external loads exceed a certain threshold. Considering the outer ends of the rod and of the sleeve to be simply supported and loaded by bending moments, we develop the governing equations for the static equilibrium by means of a variational procedure, in which the length of the overlapping region is an unknown configurational parameter, whose variation is independent. This results in a specific sliding condition, which relates the state variables of all three segments and provides a well-posed boundary value problem. Alternatively, stable equilibria can be obtained by means of a non-material finite element formulation. Further investigations on the model demonstrate two scenarios of a catastrophe as the moment load reaches the critical value: the rods may detach either when the overlapping region vanishes, or because of a snapping instability (fold point).
While being powerful in computing equilibria, the compound beam model lacks the ability to predict the forces acting between the rod and the sleeve along the overlapping region and at the transition points. We address this problem in the final part of the talk, demonstrating that it is necessary to take the extensibility of the sub-rods into account to resolve the statical indeterminacy.
The shift towards more sustainable energy sources allows for lattice structures to play an emergent role in the design of future generations of energy storage and conversion devices, such as Li-ion batteries and thermoelectric generators. Particularly, three-dimensional electrode architectures have the potential to provide shorter ion-diffusion paths due to the greater surface-to-volume ratios of the active electrode material. Nevertheless, the modeling and simulation of lattice structures (let alone their optimization) with commonly used methods, such as continuum finite elements, is computationally expensive. A way to mitigate this adversity is the use of beam theories. However, an efficient numerical scheme for the modeling and simulation of 3D beams that allows for large volumetric strain, induced primarily by Li-ion diffusion, has not been found in the literature. This contribution aims to provide essential steps for closing this gap.
For the Cosserat beam, a mixed isogeometric collocation method (IGA-C) that alleviates locking phenomena has already been developed and validated. In this work, this model is further enhanced to incorporate axial and radial strains, both small and large, that result from the beam’s interaction with a temperature or a concentration field. The latter is achieved following a multiplicative split of the deformation gradient. Moreover, the coupling of the mechanical with the thermal/chemical system is realized through a staggered scheme, where the diffusion equation is also solved following an IGA-C approach. Assuming rotational symmetry of the beam's internal temperature or concentration field, the diffusion equation is reduced to a 2D problem, thus improving the overall computational effort while retaining reliable results.
The novelty of the presented method is twofold. First, it relates beam theory, and consequently small elastic strains, with large swelling deformation stemming from diffusion phenomena. Second, it also provides insight into the implementation of IGA-C for solving diffusion equations subject to large deformations. Ultimately, the current model represents the starting point for the coupling between thermo- and chemo-mechanics, beam theory, and IGA-C.
In automotive applications, cable and cable bundle structures range from single conductors and unshielded twisted cable pairs to wiring harnesses consisting of up to 100 different cable types. All mentioned structures consist of several components, which leads to complex deformation characteristics.
In this work, the mechanical behaviour of simple cable bundle structures is investigated by Finite Element (FE) simulations and experiments on twisted strands of metal wires. In applications, the observed double-helix structures are relevant in unshielded twisted cable pairs which are used to reduce electromagnetic radiation and cross talk between cable pairs. However, their electrical properties are sensitive to mechanical deformations which might influence the symmetry of the structure. Nonlinear and inelastic effects can already be observed in uncoupled planar bending and simple torsion. In reality however, states where bending and torsion are coupled occur and typically determine the three-dimensional shape of the cable structure. Hence, we aim at investigating bending, torsion and coupled loading of twisted pairs in simulations and experiments. The twisted wire strands are therefore simulated in FE using finite beam elements with quadratic shape functions, accounting for frictional contact between the wires using the approach presented in [1, 2]. The simulations’ boundary conditions are derived from those of the experiments, allowing for a comparison of simulation and experimental results by evaluating the relevant reaction forces and moments.
Acknowledgment: This work was supported within the Fraunhofer and DFG transfer programme under grant no. DI 430/37-1.
References:
[1] Hawwash, M., Dörlich, V., Linn, J., Müller, R., and Keller, R.: Effective Inelastic Bending Behavior of Multi-Wire Cables Using Finite Elements Accounting for Wire Contact. In ECCOMAS Thematic Conference on Multibody Dynamics, pp. 369 – 379, Budapest, 2021.
[2] Hawwash, M., Dörlich, V., Linn, J., Keller, R., and Müller, R.: Modeling the Effective Inelastic Behavior of Multi-Wire Cables Under Mechanical Load Using Finite Elements. In European Congress on Computational Methods in Applied Sciences and Engineering, 2022.
Machining operations are susceptible to different kinds of adverse dynamic phenomena. This is especially true for the band sawing process with its slender endless-moving blade that may exhibit forced oscillations, self-excited vibrations, or even torsional flexural buckling under high load magnitudes. Current research primarily focuses on the cutting of metal slabs and is motivated by the need to improve the surface quality of the cut, to increase productivity, to minimize scrap, and to reduce tool wear. In the present study, a mechanical model that accurately captures the dynamics of the moving blade under different working conditions is developed and verified by comparison against physical experiments. The bandsaw blade is modelled as an unshearable Kirchhoff rod with a thin rectangular cross-section. Linear modal and buckling analyses are performed with the incremental rod theory of second order that accounts for axial pre-tension and pre-twisting of the blade. This pre-twist is imposed by the tilting angle between the linear blade guides and the wheels of the drive system. A large pre-twist occurs when the wheel axes are deliberately not parallel to the plane of the cut surface, as is typical for horizontal bandsawing. Due to the Wagner effect, pre-tensioning and pre-twisting alter the effective torsional behaviour owing to the non-trivial uniaxial stress distribution over the width of the rectangular blade cross-section. The torsional rigidity of the rod must be modified accordingly. Forces in the cut are approximated by prescribed distributed loadings allowing for an estimation of how the load affects the modal spectrum of the blade; both follower and dead loadings are considered. The model may be extended in the future with respect to the tool-workpiece interaction in order to capture self-excited vibrations due to regenerative chatter. A non-material finite element model is implemented to compute actual numerical solutions and perform parameter studies. Numerical results are further compared with experimental measurement data for certain parameter configurations to empirically justify the simulation model.
It is well-established that slender columns subjected to compression may fail by buckling and that the introduction of imperfections can significantly reduce the load-bearing capacity. It is therefore necessary to ensure an accurate representation of imperfections to correctly determine the failure loads of slender structures. Imperfections in slender columns are typically modeled using either geometric imperfections or load eccentricity. However, the existing imperfection types are unable to represent imperfections like, e.g., a footpoint rotation of a bearing or foundation, erroneous angular alignment of fixtures, or deformation of structural elements to which a slender column is rigidly connected. Here, rotations of a clamped boundary result in an additional type of imperfection.
In this work, we introduce rotation imperfections into the classical Euler buckling problem, transforming the eigenvalue problem into a deformation problem. We present an analytical closed-form solution for the load-deformation behavior of a column with rotation imperfections. The model is compared to a numerical study, showing very good agreement. Analyzing a simple study of columns connected to a surrounding structure, we show that the introduction of rotation imperfections can lead to clamped columns exhibiting lower critical loads than pinned columns with free end-point rotations. Additionally, the model is expanded to allow for kinematically controlled boundary rotations using a semi-analytical approach.
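A minimal illustration of how a rotation imperfection turns the buckling eigenvalue problem into a load-deflection problem is the cantilever-type Python sketch below; the closed-form expression is a simple textbook-style derivation made only for this illustration and is not the closed-form solution presented in the talk:

import numpy as np

# Cantilever column of length L, axially loaded by P, with an imperfect clamp
# rotated by a small angle theta0. Solving EI w'' + P w = P*delta with
# w(0)=0, w'(0)=theta0, w(L)=delta gives the tip deflection
#   delta(P) = (theta0 / k) * tan(k L),  k = sqrt(P / EI),
# which diverges as P approaches the Euler load pi^2 EI / (4 L^2).
E, I, L = 210e9, 1.0e-8, 1.0        # assumed stiffness and geometry
theta0 = 1.0e-3                     # imperfection: clamp rotation [rad]
P_cr = np.pi**2 * E * I / (4.0 * L**2)

for ratio in (0.2, 0.5, 0.8, 0.95, 0.99):
    P = ratio * P_cr
    k = np.sqrt(P / (E * I))
    delta = theta0 / k * np.tan(k * L)
    print(f"P/Pcr = {ratio:4.2f}: tip deflection = {delta*1e3:8.3f} mm")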
The section focuses on constitutive modelling of natural and artificial materials subject to elastic and inelastic deformation processes. The aim is to compare new constitutive models formulated on both the phenomenological and the micromechanics basis to determine their validity by comparison of simulations with experiments. A wide range of open problems will be considered in the section, like multi-scale modelling of heterogeneous materials, implementation of constitutive models in numerical applications, and the virtual testing of structural systems.
Crystal plasticity theory offers a powerful framework for analyzing plasticity at different scales while incorporating crystallographic information into the constitutive model. Unfortunately, classic rate-independent crystal plasticity models face long-standing problems regarding the determination of active slip systems and the computation of non-unique solutions. A very popular and already well-established algorithm is based on an augmented Lagrangian formulation. Due to its fixed-point update, the algorithm diminishes the effect of the ill-posedness of crystal plasticity and therefore provides a robust algorithmic framework. Unfortunately, this scheme also makes the algorithm slow due to the necessary inner and outer iterations. Therefore, measures to improve the efficiency are discussed first. Subsequently and based thereon, a completely novel algorithm is developed which combines the benefits of an augmented Lagrangian formulation with those of nonlinear complementarity problems (NCP). The novel algorithm also does not suffer from ill-conditioning and is significantly more robust and efficient than the original augmented Lagrangian formulation. Several numerical examples including statistical evaluations of various random crystal orientations as well as RVE simulations of a polycrystal demonstrate the properties and limits of the proposed algorithm.
Simulating the deformation behavior of crystals provides fundamental insights into the mechanics of polycrystalline materials like metals and alloys. Single-crystal plasticity models, based on the crystallographic structure of a single grain and formulated through multisurface plasticity, describe this behavior as an optimization problem governed by the principle of maximum plastic dissipation and constrained by the crystal's slip systems. However, in rate-independent models, the non-uniqueness of active slip systems presents algorithmic challenges, necessitating robust, efficient methods to handle computationally demanding simulations. Conventional approaches include active set searches with regularization [1] or simplifications of the numerical problem to ensure uniqueness.
In recent years, new approaches have been developed to solve this constrained optimization problem. One of the most promising methods is the primal-dual interior-point method (PDIPM), which addresses the ill-posed nature of the problem without relying on perturbation techniques [2,3]. In PDIPM, barrier functions are used to penalize infeasible solutions. However, unlike classical penalty methods, this penalization is applied smoothly and gradually intensifies as the limit of the feasible domain is approached. This contribution examines the effectiveness of PDIPM for single-crystal plasticity, focusing on primal variable selection and its impact on performance, as well as its extension to account for complex hardening functions.
[1] C. Miehe and J. Schröder. A comparative study of stress update algorithms for rate-independent and rate-dependent crystal plasticity. International Journal for Numerical Methods in Engineering, 50:273–298, 2001.
[2] L. Scheunemann, P. Nigro, J. Schröder, and P. Pimenta. A novel algorithm for rate independent small strain crystal plasticity based on the infeasible primal-dual interior point method. International Journal of Plasticity, 124:1–19, 2020.
[3] E. S. Perdahcıoğlu. A rate-independent crystal plasticity algorithm based on the interior point method. Computer Methods in Applied Mechanics and Engineering, 418:116533, 2024.
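The barrier idea behind interior-point treatments of rate-independent plasticity can be sketched on a toy single-slip return mapping. The following Python example uses a primal log-barrier with Newton iterations; it is a deliberate simplification of the primal-dual schemes of [2,3], with assumed values:

import numpy as np

# Toy return mapping: min 0.5*(tau - tau_trial)^2  s.t.  |tau| <= tau_y,
# solved with a log-barrier that is tightened in outer iterations.
tau_trial, tau_y = 1.5, 1.0
tau, mu = 0.0, 1.0                       # strictly feasible start

for outer in range(12):
    for _ in range(50):                  # Newton loop for a fixed barrier parameter
        grad = (tau - tau_trial) + mu / (tau_y - tau) - mu / (tau + tau_y)
        hess = 1.0 + mu / (tau_y - tau)**2 + mu / (tau + tau_y)**2
        step = -grad / hess
        while abs(tau + step) >= tau_y:  # keep the iterate strictly inside |tau| < tau_y
            step *= 0.5
        tau += step
        if abs(grad) < 1e-12:
            break
    mu *= 0.2                            # tighten the barrier
print(f"converged stress tau = {tau:.6f} "
      f"(exact projection: {np.sign(tau_trial) * min(abs(tau_trial), tau_y):.6f})")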
Additively manufactured materials enable new possibilities in the construction of components by realizing complex geometries. One example of such materials is the aluminum alloy AlSi10Mg, which is manufactured by powder-based laser fusion and demonstrates a mechanical behavior comparable to conventionally processed materials. The manufacturing process results in an inhomogeneous structure that is characterized by voids. For the numerical description of the aluminum alloy AlSi10Mg, a coupled thermomechanical Chaboche-Gurson-Tvergaard-Needleman (GTN) model is applied in this work. The model was implemented as a user material subroutine (UMAT) in the finite element software Abaqus. As a result, it enables a precise description of rate-dependent material behavior under consideration of damage mechanisms and establishes the fundamental framework for the development of innovative design methods.
The focus of the research is a numerical analysis of a complex riveted connection with AlSi10Mg joining partners. In the first step, the model parameters are determined based on static thermomechanical tests to describe the material properties under different loading conditions. The model is then transferred to a complex shear tensile test of the riveted connection for validation of the model and the basic methodology. A numerical analysis shows the potential of the coupled Chaboche-GTN model to describe additively manufactured components. This enables an optimized component design and provides important knowledge for the development of load-optimized connections in additively manufactured structures.
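For reference, the textbook form of the GTN yield function used in such models can be evaluated as in the Python sketch below; the parameters q1, q2, q3 are typical literature values, not the calibrated ones of this work:

import numpy as np

# GTN yield function: Phi = (q/sig_y)^2 + 2*q1*f*cosh(1.5*q2*p/sig_y) - (1 + q3*f^2),
# with q the von Mises equivalent stress and p the hydrostatic stress.
q1, q2, q3 = 1.5, 1.0, 2.25

def gtn_yield(sigma, sig_y, f):
    p = np.trace(sigma) / 3.0
    s = sigma - p * np.eye(3)                       # deviatoric stress
    q = np.sqrt(1.5 * np.tensordot(s, s))
    return (q / sig_y)**2 + 2.0 * q1 * f * np.cosh(1.5 * q2 * p / sig_y) - (1.0 + q3 * f**2)

sigma = np.diag([300.0, 100.0, 50.0])               # illustrative stress state [MPa]
for f in (0.0, 0.01, 0.05):                         # increasing void volume fraction
    print(f"porosity f = {f:.2f}: Phi = {gtn_yield(sigma, sig_y=250.0, f=f):+.3f}")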
Mechanism-based constitutive models for creep-fatigue analysis and the corresponding user material subroutines for finite element codes have been developed over many years by numerous research groups. However, the application of such models in industry is limited by the availability of accurate experimental data and the time-consuming parameter identification.
The presentation focuses on the identification of material parameters with artificial neural networks (NNs) and the revision of a mechanism-based constitutive model for 9-12%Cr heat-resistant steels. The use of NNs is realized in a Python environment using the machine learning libraries Keras and TensorFlow. To test the revised constitutive model, the Simcenter Nastran software, specifically an NXUMAT subroutine, is used.
The results of multiple calculations, which are run for different stresses and temperatures, are compared with experimental data. The ability of the subroutine to handle variable mechanical and thermal loads and complex geometries is tested by LCF and TMF tests. Moreover, a steam turbine rotor and an inner casing of the rotor under realistic boundary conditions are analyzed to test the subroutine's ability to handle real components. Examples of benchmark tests and analyses of power plant components are presented.
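A minimal Keras sketch of the kind of feed-forward network used for such parameter identification is given below; the synthetic Norton-type creep data and the two identified parameters are hypothetical placeholders for the actual mechanism-based model and test data:

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)

# Hypothetical data: log-strains of a Norton creep law eps(t) = A * sigma^n * t,
# sampled at two stress levels, are the input; the parameters (log10 A, n) the output.
n_samples = 5000
logA = rng.uniform(-12.0, -8.0, n_samples)
n_exp = rng.uniform(3.0, 8.0, n_samples)
t = np.linspace(1.0, 1000.0, 10)             # sampling times [h]
stresses = np.array([100.0, 150.0])          # test stresses [MPa]
feat = (logA[:, None, None]
        + n_exp[:, None, None] * np.log10(stresses)[None, :, None]
        + np.log10(t)[None, None, :])        # log10 of the creep strains
X = feat.reshape(n_samples, -1)
X = (X - X.mean(axis=0)) / X.std(axis=0)     # simple feature standardisation
Y = np.stack([logA, n_exp], axis=1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),                # outputs: log10(A) and n
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=50, batch_size=64, verbose=0)

print("predicted:", model.predict(X[:2], verbose=0))
print("true:     ", Y[:2])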
Machine Learning (ML) methods have proven their potential in material modeling with the goal to replace analytical constitutive models by machine-learned relationships [1]. Apart from the possible simplification of existing approaches, the application of ML is motivated by the goal to accurately characterize materials by a quick, integrated and automated process [2]. The present talk particularly focuses on modeling of cyclic plasticity, assuming the von Mises flow rule and the associated plasticity framework as a basis. With the use of pseudo-experimental data, it is shown that lightweight Feed Forward Neural Networks can completely replace the assumptions on the mixed kinematic and isotropic hardening. The numerical investigations performed with different network architectures indicate that a direct training of a six-dimensional stress tensor is not feasible and show that the reduction of the problem complexity is key for a successful training and a low generalization error. One of the steps to achieve that is learning the plastic multiplier and the evolution of backstresses. The training data set is generated by random walks constructed by Gaussian Processes, which is then optimized to reduce the overlap between its entries. The input strains are transformed into the principal axes to exploit isotropy for further efficiency gains, and physics-informed regularization [3] is applied to ensure the accuracy and stability of the ML model. An optimal lightweight architecture is determined by systematically varying the number of hidden layers, neurons and activation functions. In terms of the computational effort necessary for training and prediction, the final configuration shows a significant improvement compared to alternative approaches based on recurrent Neural Networks, while requiring significantly less training data. The validation of the model has been performed on the example of dual-phase steels and indicates high accuracy and cyclic stability of the results.
[1] Hildebrand S., Klinge, S. Hybrid data-driven and physics-informed regularized learning of cyclic plasticity with Neural Networks. Mach. Learn.: Sci. Technol. 5 045058 (2024). DOI 10.1088/2632-2153/ad95da
[2] Hildebrand S., Friedrich, J.G., Mohammadkhah, M., Klinge, S. Coupled CANN-DEM Simulation in Solid Mechanics, Machine Learning: Science and Technology, 2024 (accepted)
[3] Hildebrand, S., Klinge, S. Comparison of neural FEM and neural operator methods for applications in solid mechanics. Neural Comput & Applic 36, 16657–16682 (2024). DOI 10.1007/s00521-024-10132-2
This contribution examines the unification of tensor and matrix approaches in continuum mechanics, with a particular emphasis on their relationships and the potential for smooth interchange between the two frameworks. While tensor algebra provides a coordinate-independent framework, matrix algebra offers a computationally accessible representation. The tensor and matrix representations are both powerful tools for describing physical phenomena; however, their distinct notations often lead to confusion. By examining the connections between these two approaches, the presentation demonstrates how tensor operations, such as contraction, inner product and linear transformation, translate directly into matrix operations, and vice versa. This unification allows for a more rationalized and efficient treatment of continuum problems, fostering clearer insights into the underlying physics while enhancing computational efficiency. By clarifying the relationship between these approaches, this work aims to enhance both theoretical understanding and computational mechanics.
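The translation between tensor and matrix operations discussed above can be made explicit in Mandel notation, as in the following Python sketch, which verifies numerically that the double contraction C : A coincides with the corresponding matrix-vector product:

import numpy as np

# Mandel notation: inner products and double contractions carry over exactly
# from tensor algebra to matrix-vector algebra.
pairs = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
w = np.array([1.0, 1.0, 1.0, np.sqrt(2.0), np.sqrt(2.0), np.sqrt(2.0)])

def to_vec(A):                       # symmetric 2nd-order tensor -> 6-vector
    return np.array([w[I] * A[i, j] for I, (i, j) in enumerate(pairs)])

def to_mat(C):                       # minor-symmetric 4th-order tensor -> 6x6 matrix
    M = np.empty((6, 6))
    for I, (i, j) in enumerate(pairs):
        for J, (k, l) in enumerate(pairs):
            M[I, J] = w[I] * w[J] * C[i, j, k, l]
    return M

rng = np.random.default_rng(0)
A = rng.random((3, 3))
A = 0.5 * (A + A.T)                                            # symmetric tensor
C = rng.random((3, 3, 3, 3))                                   # enforce minor symmetries
C = 0.25 * (C + C.transpose(1, 0, 2, 3) + C.transpose(0, 1, 3, 2) + C.transpose(1, 0, 3, 2))

lhs = to_vec(np.einsum("ijkl,kl->ij", C, A))                   # tensor contraction
rhs = to_mat(C) @ to_vec(A)                                    # matrix-vector product
print("max difference:", np.max(np.abs(lhs - rhs)))            # ~ machine precision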
Coupled problems arise in numerous applications. From a general point of view, each problem containing more than one primary field is called a coupled problem. Usually the class of coupled problems is subdivided into volumetrically coupled problems and problems with surface coupling. The class of volumetrically coupled problems contains e.g. the fluid flow in porous solids described by mixture theory, thermo-mechanically coupled problems, chemo-mechanically coupled problems and electro- or magneto-mechanically coupled problems, while the second class includes problems like the fluid-solid interaction via an interface. Common to all these problems is that the presence of different fields requires special attention in the numerical treatment with respect to the multi-field formulation and the solution strategy. The session on coupled problems deals with all aspects mentioned above, i.e. ranging from modelling aspects to numerical solution strategies.
In this talk, the linear model of Moore-Gibson-Thompson (MGT) thermoelasticity for materials with voids is introduced, in which the deformation of the material, the volume fraction of the pore network and the MGT law of heat conduction are coupled. The basic boundary value problems (BVP) of steady vibrations of this model are investigated. Indeed, the fundamental solution of the system of steady vibration equations is constructed explicitly. Green's identities are obtained and the uniqueness theorems for the classical solutions of the BVPs are proved. The surface and volume potentials are constructed and their basic properties are given. The BVPs are reduced to always solvable singular integral equations for which Noether's theorems are valid. Finally, the existence theorems for classical solutions of the internal and external BVPs are proved by means of the potential method and the theory of singular integral equations.
Acknowledgements: This work was supported by Shota Rustaveli National Science Foundation of Georgia (SRNSFG) [Grant nr. FR-23-4905].
Fractured and fracturing porous media are commonly encountered in a wide range of scientific and engineering disciplines such as geomechanics and environmental applications. Understanding the complex coupled behaviour between fractures, fluid flow and mechanical deformation in porous materials is crucial to better evaluate applications such as geothermal energy production.
Experimental methods to study the coupled processes are often limited by practical constraints and the difficulty of accurately reproducing conditions in naturally fractured or fracturing porous media. On the other hand, numerical models often lack information on appropriate modelling assumptions or quantitative knowledge of material parameters. Therefore, we follow a combined numerical-experimental strategy, where we take on the role of modelling and simulation, but build upon data from tailor-made experiments investigated at the Porous Media Lab (PML) of the University of Stuttgart. It should be noted that a successful combination of experiments and simulations requires a very close (iterative) collaboration, so that both areas can benefit significantly.
Methodologically, we apply well-established biphasic Theory of Porous Media (TPM) models with an embedded phase-field approach to describe fracture processes in porous media in a diffusive manner. In this fracture description, two key material parameters govern the phase-field equation: (i) the critical energy release rate (crack resistance) and (ii) the length-scale parameter (governing the transition length). To investigate these two parameters, two customised experimental campaigns are studied: for (i), triaxial tests of cylindrical sandstone cores with pressure-diffusion-controlled tensile fractures, and for (ii), microfluidic experiments on pre-defined fracture geometries to study the interaction between free flow and porous-media flow at the interface.
In this talk, we will present the developed models and discuss the findings based on the experimental and numerical results. It is concluded that the combined numerical-experimental strategy provides a powerful framework for analysing and predicting the coupled behaviour of fractures and fracturing porous media under various conditions.
In models describing hydraulic fracturing problems by use of the Theory of Porous Media (TPM) with an integrated phase-field approach, see e.g. [1-3], fracturing scenarios such as natural fracturing and classical hydraulic fracturing processes can be reliably described. However, state-of-the-art models do not capture a local porosity change, as the fractured rock or soil particles remain in the newly created fracture region. From a physical point of view, it is expected that the solid particles will be mobilised and transported by the fluid over time in case of a (dynamic) flow in the fracture. Therefore, we propose an entropy-principle-based, thermodynamically consistent description of the mass transfer processes, similar to [4], between the mobile solid particles and the pore fluid. The pore fluid itself is considered a mixture of liquid water and the contained solid particles with similar velocities, in the form of a linear viscous suspension. In this talk, we will present the methodological framework for a rigorous description of the coupled problem and discuss the numerical implementation strategy using the mixed finite-element method. Finally, we will discuss the application of the model in terms of different numerical test cases.
[1] Heider, Y., Markert, B.: A phase-field modeling approach of hydraulic fracture in saturated porous media. Mechanics Research Communications 80, 38–46 (2017).
[2] Ehlers, W., Luo, C.: A phase-field approach embedded in the theory of porous media for the description of dynamic hydraulic fracturing. Computer Methods in Applied Mechanics and Engineering 315, 348–368 (2017).
[3] Ehlers, W., Luo, C.: A phase-field approach embedded in the Theory of Porous Media for the description of dynamic hydraulic fracturing, Part II: The crack-opening indicator. Computer Methods in Applied Mechanics and Engineering 341, 429–442 (2018).
[4] Steeb, H., Diebels, S.: A thermodynamic-consistent model describing growth and remodeling phenomena. Computational Materials Science 28, 597–607 (2003).
We consider the coupled, multi-domain, multi-physics problem posed by a fluid-filled, pressurised fracture in an intact solid medium surrounding the fluid. For example, such a setting may be found in Enhanced Geothermal Systems or nuclear waste management.
A well-established method for dealing with cracks is the phase-field fracture (PFF) approach. This method is effective as it can deal with propagating fractures, splitting fractures and topology changes. This approach captures the fracture interface by approximating the fracture by a diffusive zone. While this gives the method its favourable properties, the smeared interface causes complications when complex physics at the boundary between the fluid-filled crack and the surrounding medium are needed.
We consider a geometry reconstruction approach to capture the effects at the interface. Based on the crack opening displacements (or fracture width), we obtain a description of the interface between the solid and the fluid-filled crack. With this geometry at hand, we can pose a stationary thermo-fluid-structure interaction problem, which incorporates the physics at the interface. In particular, we can use interface tracking methods, such as arbitrary Lagrangian-Eulerian finite element methods, to solve the fluid-structure interaction (FSI) problem in this domain.
The challenge now lies in coupling results from the FSI problem back to the PFF. The temperature and pressure drive the fracture problem. However, the FSI temperature and pressure exist on a different geometry than the PFF problem. Furthermore, the FSI pressure only exists inside the fracture, while the PFF model assumes the existence of a global pressure. To deal with this problem, we propose two approaches. The first is based on an averaged pressure, which may be evaluated globally. The second approach derives a novel phase-field fracture model, incorporating the driving force contributions through appropriate boundary integrals. Consequently, the open crack geometry can also be utilised for the PFF problem. We present several numerical examples illustrating the potential of both approaches.
Diffusion chronometry is an important tool in understanding various aspects of geological processes, e.g., processes in magma reservoirs [1]. However, the timescales which can be accessed by diffusion chronometry are restricted by recrystallization. While the coupling of mechanical and chemical processes has not yet been explored in a quantitative framework, it has been observed both experimentally [2] and in the field [3].
In this presentation we expand on the model introduced by Haddenhorst et al. [4] to describe the evolution of olivine crystals surrounded by a melt. While the original model was intended to describe the evolution of magnesium-based forsterite crystals exchanging iron as a fayalite component, thanks to its general nature it can be used to describe the behavior of various crystals present in volcanic eruptive products under the influence of any diffusing component.
Furthermore, the existing model is expanded by the introduction of a phase field approach and the inclusion of heat conduction. As the original model was restricted to idealized spherical crystals, the phase field approach enables us to simulate olivine crystals of arbitrary shapes.
We discuss the introduction of the phase-field model and present initial results.
References:
[1]: Chakraborty, S., & Dohmen, R. (2022). Diffusion chronometry of volcanic rocks: looking backward and forward. Bulletin of Volcanology, 84 (6), 57.
Retrieved from https://doi.org/10.1007/s00445-022-01565-5
[2]: Nachlas, W., & Hirth, G. (2015). Experimental constraints on the role of dynamic recrystallization on resetting the ti-in-quartz thermobarometer. Journal of Geophysical Research: Solid Earth, 120 (12), 8120–8137.
[3]: Bestmann, M., Pennacchioni, G., Grasemann, B., Huet, B., Jones, M. W. M., & Kewish, C. M. (2021). Influence of deformation and fluids on ti exchange in natural quartz. Journal of Geophysical Research: Solid Earth, 126 (12), e2021JB022548. Retrieved from https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2021JB022548
[4]: Haddenhorst, H. H., Chakraborty, S., & Hackl, K. (2023). A model for the evolution of size and composition of olivine crystals. Proceedings in Applied Mathematics and Mechanics, 00, e202300081. https://doi.org/10.1002/pamm.202300081
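A minimal Python sketch of the idealised spherical-crystal diffusion setting of the original model [4] is given below; it solves Fick's second law in a sphere with a fixed rim composition via the substitution u = r*c, and the radius, diffusivity and compositions are placeholder values, not olivine data:

import numpy as np

# Radial diffusion in a sphere: with u = r*c, u obeys the plain 1D heat equation
# du/dt = D d^2u/dr^2 with u(0)=0 (symmetry) and a fixed rim value u(R)=R*c_rim.
R, D = 0.5e-3, 1.0e-17          # crystal radius [m], diffusivity [m^2/s] (placeholders)
nr, years = 200, 50.0
dr = R / nr
dt = 0.4 * dr**2 / D            # explicit stability limit is dt <= dr^2 / (2 D)
r = np.linspace(0.0, R, nr + 1)

c0, c_rim = 0.9, 0.7            # initial core and imposed rim composition (arbitrary units)
u = c0 * r
u[-1] = c_rim * R               # boundary condition at the rim

n_steps = int(years * 3.15e7 / dt)
for _ in range(n_steps):
    u[1:-1] += D * dt / dr**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = 0.0                  # symmetry at the centre

c = np.empty_like(u)
c[1:] = u[1:] / r[1:]
c[0] = c[1]                     # regularity at the centre
print("composition profile (core -> rim):", np.round(c[::40], 3))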
The presence of hydrogen in steels leads to a significant loss of ductility, giving rise to a ductile to brittle transition with increasing hydrogen concentration, which is a major concern in the safety assessment of structural components. Among other effects, the hydrogen-enhanced decohesion (HEDE) and the hydrogen-enhanced localized plasticity (HELP) mechanisms are primarily associated with this ductile to brittle transition [M.B. Djukic, et al., Eng. Fract. Mech., 2019, 216:106528]. In the current contribution, a monolithically coupled chemo-mechanics framework is presented that incorporates both the transient diffusion of hydrogen and the gradient-extended nonlocal Gurson-Tvergaard-Needleman (GTN) damage model [O. El Khatib, et al., Int. J. Fract., 2023, 241:73-94]. As the framework is inspired by a rate-type variational principle, the diffusion problem is formulated in terms of the chemical potential, which naturally accounts for the influence of the stress on the species flux. Thus, the tedious computation of the gradient of the hydrostatic stress by means of the projection of quadrature point values and shape function derivatives is completely avoided. Furthermore, the influence of hydrogen on the porosity evolution is included in the nonlocal GTN model by linear scaling functions as proposed in [R. Depraetere, et al., Comp Mater Sci, 2019, 200:110857]. The implementation of the model as a user element (UEL) in Abaqus and in particular the return mapping based on an Augmented Lagrangian formulation will be discussed in detail. In addition, representative boundary value problems illustrate the ability of the proposed, phenomenological model to describe the upper shelf of the hydrogen-induced ductile to brittle transition.
This section is dedicated to discuss recent advances in multiscale and homogenization techniques for static and dynamic problems. Topics of particular interest are nonlinear homogenization techniques, multiscale modelling of failure processes and localization phenomena, FE2 methods, atomistic to continuum coupling, contact homogenization, model reduction techniques and furthermore homogenization schemes incorporating experimentally determined microstructure data.
The lecture focuses on understanding and describing, within multiscale modelling frameworks, the void growth and coalescence leading to ductile damage of polycrystalline metals and alloys deforming by slip and twinning. Our motivation stems from solids with a high plastic anisotropy, like magnesium with its hexagonal close packed (HCP) lattice, which are known to suffer from reduced ductility and fracture toughness. These drawbacks result from an insufficient number of easy slip systems and the activity of twinning. The void growth failure mechanism under the condition of locally constrained plastic deformation and a strongly heterogeneous stress field in a polycrystalline volume is not yet well understood and incorporated into validated constitutive models.
The large strain elastic-viscoplastic model of the single crystal with twinning is a basis for the performed analyses [1]. This local model accounts for mutual interactions between different slip modes and twinning, which modify hardening laws for material parameters such as critical shear stresses, as well as for an abrupt lattice reorientation in part of the considered grain by a probabilistic twin-volume consistent reorientation scheme. Two approaches are proposed to analyse overall behaviour of voided crystalline material:
i. the full-field finite element analyses of unit cells of FCC and HCP crystals containing voids, performed to understand local mechanisms affecting porosity evolution and cavity coalescence [2], especially in the presence of twinning,
ii. the multiscale micromechanical framework formulated in a small-strain format, including a mean-field model of a two-phase medium: an anisotropic elastic-(visco)plastic matrix, governed by the crystal plasticity constitutive rule, with an embedded void inclusion, and the self-consistent framework to describe a porous polycrystal of prescribed texture. The outcomes of the mean-field approach are compared to the results of the full-field analyses.
The study of multiple factors influencing material ductility will be presented, including the overall loading scheme, local variations of crystal orientation and initial porosity. Moreover, the effect of void growth and coalescence on the microstructure evolution in the matrix material will be discussed.
Acknowledgements.
The research was partially supported by project No.2021/41/B/ST8/03345 of the National Science Centre, Poland.
[1] Frydrych, K., Maj, M., Urbański, L., & Kowalczyk-Gajewska, K. (2020). Twinning-induced anisotropy of mechanical response of AZ31B extruded rods. Materials Science and Engineering: A, 771, 138610. https://doi.org/10.1016/j.msea.2019.138610
[2] Virupakshi S., Kowalczyk-Gajewska K., Cylindrical void growth vs. grain fragmentation in FCC single crystals: CPFEM study for two types of loading conditions, International Journal of Solids and Structures, 280, 2023, 112397, https://doi.org/10.1016/j.ijsolstr.2023.112397.
The predictive accuracy of computational multiscale methods depends on detailed microstructural representations and sophisticated material models for the microscale constituents. However, with increasing (physical) complexity of the cell problem, the computational cost keeps increasing as well. This necessitates the development of tailored solution approaches, among which FFT-based spectral methods have emerged as particularly promising options.
Against this background, we focus on a critical limitation of current FFT-based approaches: their reliance on regular, structured grids. To overcome this, we exploit the hierarchical structure and inherent adaptivity of wavelets. By representing the governing fields in a wavelet basis and by utilizing wavelet transforms, we derive higher-order stress approximations in a nested set of approximation spaces. This enables the precise detection and resolution of localized features while significantly reducing the number of material model evaluations. Because the computational cost of the wavelet transforms scales linearly in the number of voxels and the per-voxel overhead is negligible compared to typical material model evaluations, substantial gains in computational efficiency are thus achieved.
We use a wavelet-enhanced version of the classic Moulinec-Suquet basic scheme as our point of departure and validate the proposed approach through detailed studies of representative boundary value problems in one- and two-dimensional settings. Notably, we demonstrate that the numerical grid in the hybrid wavelet-FFT approach naturally adapts to the solution profile based on a predefined refinement tolerance which results in a 95% reduction in the number of material model evaluations.
[1] T. Kaiser, T. Raasch, J.J.C. Remmers and M.G.D. Geers: A wavelet-enhanced adaptive hierarchical FFT-based approach for the efficient solution of microscale boundary value problems, Computer Methods in Applied Mechanics and Engineering, 409, 115959, 2023, https://doi.org/10.1016/j.cma.2023.115959
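The adaptivity idea can be illustrated with a single Haar wavelet level in Python: detail coefficients above a tolerance flag the cells where localized features require refined material model evaluations. This is a toy 1D illustration, not the cited solver:

import numpy as np

n, tol = 256, 1e-3
x = np.linspace(0.0, 1.0, n, endpoint=False)
field = np.tanh((x - 0.5) / 0.01)                     # smooth field with one sharp transition

approx = (field[0::2] + field[1::2]) / np.sqrt(2.0)   # Haar scaling coefficients
detail = (field[0::2] - field[1::2]) / np.sqrt(2.0)   # Haar wavelet (detail) coefficients

refine = np.abs(detail) > tol                         # flag coarse cells with large details
print(f"coarse cells flagged for refinement: {refine.sum()} of {refine.size}")
print("flagged cell centres:", np.round(x[0::2][refine] + 1.0 / n, 3))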
FFT-based computational homogenization methods [1, 2] have been shown to efficiently solve multi-scale problems that are discretized on a regular grid. However, due to the grid-like structure, the obtained local solution fields lack accuracy in the vicinity of material interfaces that are not parallel to the grid. Simple workarounds typically compromise the numerical efficiency of the solver. In this contribution, we present a computational homogenization approach that guarantees the accuracy of interface-conforming finite elements while maintaining the computational efficiency of FFT-based computational homogenization methods for two-dimensional thermal conductivity problems. In particular, we propose a discretization by the extended finite element method (X-FEM) with modified absolute enrichment [3] and present an associated fast Lippmann-Schwinger solver that is numerically robust independently of the mesh. We analyze the properties of the corresponding X-FFT solver in a series of computational experiments.
REFERENCES
[1] Moulinec H., Suquet P., A fast numerical method for computing the linear and nonlinear mechanical properties of composites, Comptes Rendus de l’Académie des sciences. Série II. Mécanique, physique, chimie, astronomie, Vol. 318 (11), pp. 1417–1423, 1994.
[2] Moulinec H., Suquet P., A numerical method for computing the overall response of nonlinear composites with complex microstructure, Computer Methods in Applied Mechanics and Engineering, Vol. 157 (1-2), pp. 69–94, 1998.
[3] Moës N., Cloirec M., Cartraud P., Remacle J.-F., A computational approach to handle complex microstructure geometries, Computer Methods in Applied Mechanics and Engineering, Vol. 192 (28-30), pp. 3163–3177, 2003.
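For orientation, the basic scheme of Moulinec and Suquet [1,2], which the X-FFT approach above builds upon, can be written for a 2D thermal conductivity cell problem in a few lines of Python; grid size, contrast and inclusion geometry are illustrative choices:

import numpy as np

# Basic scheme: fixed-point iteration on the Lippmann-Schwinger equation
# e = E - Gamma0 * ((k - k0) e) for the temperature gradient field e.
N = 128
x = np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.where((X - 0.5)**2 + (Y - 0.5)**2 < 0.25**2, 10.0, 1.0)   # circular inclusion
k0 = 0.5 * (k.min() + k.max())                                    # reference conductivity
E = np.array([1.0, 0.0])                                          # macroscopic gradient

xi = np.fft.fftfreq(N)
XI0, XI1 = np.meshgrid(xi, xi, indexing="ij")
xi_sq = XI0**2 + XI1**2
xi_sq[0, 0] = 1.0                                                 # avoid division by zero

e = np.empty((N, N, 2))
e[..., 0], e[..., 1] = E
for it in range(200):
    tau = (k - k0)[..., None] * e                                 # polarization field
    tau_hat = np.fft.fft2(tau, axes=(0, 1))
    proj = (XI0 * tau_hat[..., 0] + XI1 * tau_hat[..., 1]) / (k0 * xi_sq)
    e_hat = np.empty_like(tau_hat)
    e_hat[..., 0] = -XI0 * proj                                   # apply the Green operator
    e_hat[..., 1] = -XI1 * proj
    e_hat[0, 0, :] = E * N**2                                     # enforce the mean gradient
    e_new = np.fft.ifft2(e_hat, axes=(0, 1)).real
    if np.max(np.abs(e_new - e)) < 1e-8:
        e = e_new
        break
    e = e_new

q = k[..., None] * e                                              # heat flux
print(f"iterations: {it + 1}, effective conductivity k_xx: {q[..., 0].mean():.4f}")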
Within the framework of the project “Multifunctional High-Performance Components made of hybrid porous materials” (HyPo, TRR375), an efficient numerical multiscale model is being developed to describe the microscopic and macroscopic behavior of hybrid porous materials using a Finite Element - Fast Fourier Transform (FE-FFT) approach. The influence of different microstructural morphologies, e.g. varying porosities, as well as hybrid material compositions on the macroscopic component properties is to be simulated. In order to prepare the basis for the multiscale simulation framework, micro- and macroscopic modeling approaches for porous structures are implemented, analyzed, and compared. On the macroscopic scale, hyperelastic and elastoplastic material models for foams are evaluated, see e.g. [1], [2]. These models are integrated into the FE environment Ferrite.jl [3] and applied to various boundary value problems. Complementary microscopic simulations, using the FFT-based solver FeelMath [4], are conducted on microstructures with defined porosities under diverse macroscopic deformation states. This presentation will compare the macroscopic model results with the homogenized responses obtained from the microscopic model and analyze limitations and possibilities for improvement.
[1] M. Dhondt. The Finite Element Method for Three-Dimensional Thermomechanical Applications, pages 190–195, 2004.
[2] V. S. Deshpande and N. A. Fleck. Isotropic constitutive models for metallic foams. Journal of the Mechanics and Physics of Solids, 48(6–7):1253–1283, 2000.
[3] K. Carlsson, F. Ekre, and Ferrite.jl contributors. Ferrite.jl [Computer software]. URL: https://github.com/Ferrite-FEM/Ferrite.jl.
[4] M. Kabel and H. Andrae. Fast numerical computation of precise bounds of effective elastic moduli. Berichte des Fraunhofer ITWM, 224, 2013.
In order to meet the high demands on usability, structural components in the aerospace and automotive industries are typically made from high performance metals. Throughout the life-cycle of a component, from its manufacturing to its application, the respective material behavior is strongly influenced not only by mechanical but also by thermal loads, which may for example lead to phase transformations [1]. The overall material behavior depends on the properties of the underlying microstructure, which is typically polycrystalline and whose complexity is defined by the distribution, size and orientation of the individual grains. In general, a two-scale simulation approach allows to predict the overall material behavior while taking into account a highly resolved simulation of the microstructural behavior. In this context, the FE-FFT-based two-scale method [2], combining the finite element (FE) method on the macroscale with a fast Fourier transform (FFT)-based simulation technique on the microscale, serves as an efficient alternative to the classical FE² approach for the simulation of periodic microstructures. We present the extension of an efficient FE-FFT-based two-scale simulation framework for the investigation of purely mechanical boundary value problems [3] to a fully thermo-mechanically coupled framework [4] which accounts for both thermal and mechanical influences. This simulation approach will be used to model the material behavior of polycrystalline metals, taking into account microstructural phenomena such as crystal plasticity or phase transformations. To demonstrate the feasibility of the proposed simulation framework, several numerical examples are presented.
[1] Waimann, J. & Reese, S. (2022). Variational modeling of temperature induced and cooling-rate dependent phase transformations in polycrystalline steel. Mechanics of Materials, Vol. 170, 104299
[2] Spahn, J., Andrä, H., Kabel, M., & Müller, R. (2014). A multiscale approach for modeling progressive damage of composite materials using fast Fourier transforms. Computer Methods in Applied Mechanics and Engineering, Vol. 268, pp. 871-883
[3] Gierden, C., Kochmann, J., Waimann, J., Kinner-Becker, T., Sölter, J., Svendsen, B., & Reese, S. (2021). Efficient two-scale FE-FFT-based mechanical process simulation of elasto-viscoplastic polycrystals at finite strains. Computer Methods in Applied Mechanics and Engineering, Vol. 374, 113566
[4] Schmidt, A., Gierden, C., Fechte-Heinen, R., Reese, S., & Waimann, J. (2024). Efficient thermo-mechanically coupled and geometrically nonlinear two-scale FE-FFT-based modeling of elasto-viscoplastic polycrystalline materials. Computer Methods in Applied Mechanics and Engineering, Vol. 435, 117648
The topic of this session is the analysis and modeling of turbulent non-reactive and reactive flows based on DNS, LES, RANS, and experiments. A special focus is on fundamentals in turbulence, turbulent reactive flows, turbulent multi-phase flows, turbulence of atmosphere, atmosphere/ocean interaction, modeling and simulation of complex turbulent flows, the interface of numerical algorithms, chemical and physical modeling, as well as high-performance computing with its application to turbulence.
First shown in Oberlack (2001) and significantly extended to higher moments in Oberlack et al. (2022), we presently show that in near-wall turbulent shear flows, i.e. specifically for the log-region as well as for the core region of a channel flow, streamwise velocity moment scaling laws up to an arbitrary order can be derived using symmetry theory. Recently, we have extended the theory to wall-normal and spanwise velocity moments, which do not contain a mean velocity. The new theoretical results are fully consistent with those for the streamwise scaling laws in Oberlack et al. (2022), and the symmetry predicts that the key scaling exponents σ in the log-region, as well as σ₁ and σ₂ in the core-region, are identical for all velocity moments. For its validation, and to compensate for the known weak convergence of the moment statistics of the spanwise and wall-normal velocities, we have again doubled the DNS run time of Hoyas et al. (2022) for a wall-shear Reynolds number of Re_τ = 10⁴. DNS data and the extended symmetry theory show excellent agreement for both wall-normal and spanwise velocity moments in the above two areas, and, further, we confirm an essential result of the symmetry theory that the above scaling exponents are identical for all velocity moments. Further, the symmetry theory for arbitrary moments is detailed for all three velocities, and we explicitly show for both regions that the above scaling laws are invariant solutions of the infinite set of moment equations, i.e. a symmetry reduction has been achieved. In particular, the resulting equations contain the scaling parameter ω, which appears as a kind of eigenvalue of the reduced equations. Finally, we also give integral forms of the aforementioned reduced equations, which imply a Reynolds number dependence of the scaling laws via this route. We compare this dependence with channel DNS and experimental data for Re_τ = 180 − 9.4 ⋅ 10⁴.
The pursuit of sustainable mobility requires innovative solutions, one of which involves applying riblets—a specific form of small surface corrugation—to smooth surfaces immersed in turbulent flows. Riblets have demonstrated the ability to reduce friction drag below that of a smooth wall without any external energy input, making them highly promising for industrial applications. Their performance is closely tied to their size relative to the characteristic length scale of small turbulent eddies.
For small riblets, the mechanisms responsible for drag reduction are well understood and can be quantified using the difference between parallel and perpendicular protrusion heights. In this regime, friction reduction improves with increasing riblet size. However, for larger riblets, this trend reverses, and the linear relationship between drag reduction and riblet size breaks down. The underlying causes of this breakdown remain poorly understood, with prior studies identifying potential contributors such as Kelvin-Helmholtz instabilities, secondary motions, and a non-monotonic drag behavior.
This study seeks to elucidate the physical mechanisms driving the observed increase in friction drag for larger riblets. By gaining a deeper understanding of these mechanisms, we aim to identify strategies for mitigating drag penalties and expanding the range of industrially relevant riblet geometries. Particular attention is given to the emergence of a fully rough regime, characterized by a specific range of nondimensional riblet sizes. This regime is surprising since riblets generate no pressure drag, yet still exhibit a notable increase in skin friction.
To investigate this phenomenon, Direct Numerical Simulations (DNS) of fully developed incompressible channel flows with trapezoidal riblets of varying sizes are performed in the drag-increasing regime, up to a friction Reynolds number of 938. The numerical results are validated against the experimental findings of von Deyn et al. (J. Fluid Mech., 951, 2022) for identical riblet geometries. Notably, the experimental study revealed a plateau in the skin-friction coefficient as a function of dimensionless riblet size following the fully rough regime—a feature observed for the first time. This behavior is confirmed in the present numerical analysis. At the conference, we will discuss possible reasons for this behaviour, providing critical insights into the drag-generating mechanisms of large riblets.
Axisymmetric turbulent wakes generated by bluff bodies are relevant to several applications, including aviation, road transport, and the design of wind turbine masts, among others. This experimental study focuses on the unsteady properties of turbulence dissipation resulting from the non-equilibrium energy cascade in the turbulent axisymmetric wake. Recent numerical studies (e.g., Goto and Vassilicos, Physical Review E, Volume 94, 2016, and Alves Portela et al., Physical Review Fluids, 2018) suggest two key findings: (1) the turbulence production rate may be correlated with the turbulent energy dissipation rate (denoted as ε) with a spatio-temporal lag, and (2) the relationship εₗₒw / ε = constant (where εₗₒw represents the low-pass filtered ε) may hold over a range of streamwise distances. Experimentally verifying these predictions is very challenging because it requires resolving various quantities, such as turbulence production and ε, in both time and space.
We conducted a series of experiments combining 2D2C particle image velocimetry (PIV) and hot-wire anemometry (HWA) to characterize some unsteady aspects of the energy cascade. A bluff fractal plate, suspended perpendicular to the freestream flow and with a characteristic length of 64 mm (defined as the square root of the frontal area), was tested in the LMFL Boundary Layer Wind Tunnel. This facility has a measurement section 20 m long and a cross-sectional area of 2 m by 1 m. The freestream velocity, U_∞ (measured with a pitot tube at the beginning of the test section), was maintained at a constant value of 8.5 m/s, resulting in a Reynolds number of approximately 35,000 based on the plate's characteristic length.
Four cameras with a temporal resolution of 4 Hz were aligned longitudinally to capture PIV fields covering 1.35 m in length and 30 cm in height. These fields included the wake's centerline and lower part. By repositioning the cameras, we recorded three measurement stations spanning the ranges 24 < x/D < 40, 38 < x/D < 54, and 54 < x/D < 70, respectively. The spatial resolution was approximately 3 mm, sufficient to resolve several low-pass filtered turbulence quantities. Within the last camera's field of view, HWA measurements, sampled at 50 kHz and synchronized with the PIV data, were performed. These HWA measurements enabled the correlation of fully resolved turbulence quantities, such as ε, with low-pass and coherent fluctuation quantities (such as εₗₒw) and turbulence production rates obtained from the PIV data.
Atmospheric turbulence is of key importance in numerical weather prediction and climate modeling. Particularly intriguing are the convective structures, which give rise to cloud formations and affect intra-cloud processes. Although impactful, these structures can rarely be resolved directly and have to be modeled instead. The challenging aspect of atmospheric turbulence is its anisotropy and inhomogeneity. It exhibits coherent structures such as convective plumes and thermal vortex rings, which persist over long time scales and often dominate the local dynamics. Such a system differs significantly from the turbulence described by the classical Kolmogorov-Kraichnan theory. Exceeding its assumptions, atmospheric convection exhibits a broader range of behaviors.
A particularly interesting feature is the inversion of the energy cascade in some regions of the flow. This can be related to strong anisotropy induced by a large coherent structure. The associated preferential directions and intense stretching can dynamically lead three-dimensional turbulence to locally resemble a two-dimensional one. This can take place at the boundaries of a strong updraft, which ultimately leads to the formation of a cumulus cloud. In this study, we consider the environment of an isolated updraft. The focus is on the small-scale phenomena excited by the anisotropy introduced by the bulk flow. We argue that the dynamics of its interfacial regions can be understood by referring to the plane, two-dimensional Kelvin-Helmholtz instability. Further, we investigate the interscale energy transfer, looking for symptoms of an inverse cascade. The limitations of this approach are discussed, together with the circumstances necessary for such a phenomenon to occur. The insights and conclusions of this work can contribute to the development of subgrid-scale models in geophysical applications, by linking selected features of the small scales to the local bulk flow and its anisotropy.
Liquid metals, such as sodium, gallium and gallium alloys, have applications as heat transfer fluids over a broad temperature range. They possess low Prandtl numbers, which makes them ideal as a heat transfer medium for applications such as concentrated solar power plants (CSPs). Due to their low Prandtl numbers, the heat transfer mechanism in liquid metals differs from that in medium or high Prandtl number fluids such as air or water. Therefore, a deeper understanding of the heat transfer mechanisms in low Prandtl number fluids is necessary for a safe and optimal design of concentrated solar power plants.
Most of the existing literature on this topic considers only canonical flows without the thermal entrance region. This study now includes the thermal entrance region in numerical large-eddy simulations at a bulk Reynolds number of Re_b = 5300. Two different Prandtl numbers are considered: a low Prandtl number of Pr = 0.025, corresponding to a typical class of liquid metals, and a medium Prandtl number of Pr = 0.71, corresponding to air. The results are validated against a reference simulation for the fully developed state (S. Straub (2019), Azimuthally inhomogeneous thermal boundary conditions in turbulent forced convection pipe flow for low to medium Prandtl numbers). The numerical simulation provides additional information about the thermal entrance region that was not considered in the reference. An azimuthally non-homogeneous boundary condition is set to investigate the effect of the inhomogeneity on the thermal statistics. In this study, the non-homogeneous wall heat flux consists of an adiabatic wall on the lower half of the pipe and a constant wall heat flux on the upper half of the pipe. The whole setup is commonly denoted as the turbulent Graetz problem with homogeneous and azimuthally non-homogeneous wall heat flux.
Furthermore, the numerically determined turbulent solution is compared with the analytically determined laminar solution. For the laminar case, a series solution for the thermal entrance was obtained for both the homogeneous and non-homogeneous boundary condition.
Waves are a ubiquitous natural phenomenon, and acoustic waves are, besides surface water waves, their most obvious representatives, familiar to anybody and quantitatively known to any student of mathematics, physics or a technical subject. A long mathematical tradition corresponds to this, continuing today in the accurate numerical computation of linear and nonlinear wave phenomena. Our session is devoted to the simulation and understanding of waves and wave interactions. The range of applications is thus very broad, while the focus is meant to be on the unifying physical phenomenon. In past years we have had numerous contributions from solid mechanics, porous media flow, turbulence and aeroacoustics, from crack detection to explosions.
The historical development of the concept of caustics and its applications in science and technology is shown, starting from the works of Leonardo da Vinci (LdV) to contemporary achievements in the field of radio astronomy.
Starting from LdV's research in the field of light propagation, the caustics present in his drawings on the reflection of light by concave mirrors are analyzed. The discovery of LdV is presented, according to which, for an infinitely distant source of rays, a small fragment of the mirror cap is enough to generate a focus, while the rest of the mirror forms a caustic, for which LdV did not indicate an application.
Based on the principles of geometrical optics, an analytical description of caustics generated in a spherical mirror is presented. A mathematical description of the concentration of energy on caustics is given, taking into account the energy loss during reflection. The occurrence of a singularity, whose physical equivalent is the focus of the mirror, is shown. Attention is drawn to the difference in the formation of caustics in spherical and parabolic mirrors. In the first case, the caustic is an inherent element of the mirror's operation, while in the second, under appropriate conditions, the caustic can completely disappear, collapsing into a point focus. Using acoustic and electromagnetic waves as examples, the formation of caustics in a wave field is investigated, taking into account the scalar and vector nature of the field. Based on the general principles of wave motion, a symmetry is shown in the description of the energy relations in acoustics and electromagnetism.
In the practical part, the presence of caustics in room acoustics and radio astronomy is discussed. It is explained why, in the sound field of existing halls, instead of the entire caustic only its cusp is observed, which is perceived as a point-like focus of sound. In relation to radio astronomy, the influence of caustics on the efficiency of spherical and parabolic antennas is presented. It is shown how the development of receiving techniques allows the energy contained in caustics to be used to increase the aperture of a spherical mirror, which significantly improves the observation possibilities. The implementation of these improvements is demonstrated for the 305-meter Arecibo radio telescope in Puerto Rico and the 500-meter FAST radio telescope in Dawodang, China.
Fibre Metal Laminates (FML) constitute an advanced composite material class that combines the strength and ductility of metals with the lightweight and high-stiffness properties of fibre-reinforced polymers, and thus represent a promising material system for applications in aerospace, automotive engineering, and other engineering fields. Structural Health Monitoring (SHM) using Guided Ultrasonic Waves (GUW) represents a state-of-the-art approach for the non-destructive testing of such engineering structures.
When applied to FML, SHM is of key importance in assessing structural integrity over time and detecting potential damage such as delamination, fibre breakage, or other structural inhomogeneities. In SHM involving GUW, actuators emit a wave-field that interacts with structural inhomogeneities due to a change in the acoustic impedance. These interactions can lead to reflections, scattering, and mode conversion of the wave-field, which can be measured by sensors, enabling damage detection.
In this study, a central aspect is the embedding of sensors within the laminate structure to facilitate damage monitoring in the middle layers of FML, thus enabling more advanced and precise monitoring capabilities compared to conventional measurement techniques, e.g. laser vibrometry (LSV). Unlike LSV, the embedded sensors are not limited to measuring deflections on the external surface of the structure. Depending on the damage type, wave modes can occur that do not cause deflections on the external surface and therefore cannot be measured with LSV. However, embedded sensors can themselves be considered inhomogeneities and lead to interactions with the wave-field. Analogous to damage, disturbances caused by embedded sensors will be measured by other sensors and can thus be misinterpreted as damage.
In previous research, the incorporation of an artificial interphase to match the acoustic impedances between sensor and FML showed a reduction of wave-field disturbances for anti-symmetric modes. In FML, GUW form so-called Lamb waves, which occur in different wave modes. Based on the previous work, the current study extends the concept beyond the anti-symmetric A0 mode to the symmetric S0 mode.
This contribution presents a comprehensive numerical study of a two-dimensional FML model with embedded sensors. The implemented interphase enhances signal transmission while minimising unwanted reflections. The study explores various interphase configurations across a broad frequency range, demonstrating improved sensor integration for more accurate and reliable SHM systems. These findings offer a valuable basis for future experimental validation and the development of advanced FML structures with embedded sensors.
Tapered edges in beams have been attracting attention in engineering science for their passive vibration damping properties [1]. Flexural waves propagating into a power-law-shaped beam edge slow down with decreasing beam thickness until the phase and group velocities are zero at the tip. Standing waves, often undesired in engineering, cannot form in a beam with a perfectly tapered edge due to the inhibited wave propagation and reflection at the tip. Due to their similarities with astronomical black holes, structures like those described are called ‘acoustic black holes’ (ABH).
Theoretical and experimental research on ABHs has so far focused only on elastic waves in solids that are substantially thinner than the acoustic wavelengths of interest [1]. Already the earliest literature [2] approximated the problem in the long-wavelength regime. For thicker plates, however, these formulas yield unphysically large phase and group velocities, exceeding even the longitudinal wave velocity.
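For orientation, and in line with the long-wavelength treatments of [1,2], classical thin-plate (Kirchhoff) theory gives for the flexural phase velocity
\[
c_p(x) = \sqrt{\omega}\,\left(\frac{D(x)}{\rho\,h(x)}\right)^{1/4}, \qquad D(x) = \frac{E\,h(x)^3}{12\,(1-\nu^2)},
\]
so that for a power-law thickness profile $h(x)=\varepsilon x^m$ the phase (and group) velocity tends to zero towards the tip; it is precisely this estimate that loses validity once the plate thickness becomes comparable to the wavelength.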
According to the Rayleigh-Lamb theory, in plates that are thicker than the wavelengths, flexural waves turn into surface acoustic waves (SAW) [3]. Reducing the reflection of SAW is of interest in microelectromechanical systems such as SAW sensors or microfluidic actuators [4]. In our study, we investigate the applicability of the Rayleigh-Lamb theory to the ABH problem for thicker plates and thereby extend the ABH concept to SAW. We present a procedure to find analytical solutions of the Rayleigh-Lamb equation in an ABH via MATLAB® and compare the results with numerical simulation.
Our results show that the Rayleigh-Lamb theory is useful for understanding the interplay of surface and flexural waves along an ABH and can help optimize the design of ABH when SAW reflections must be suppressed.
References
[1] A. Pelat, F. Gautier, S. C. Conlon and F. Semperlotti, "The acoustic black hole: A review of theory and applications," Journal of Sound and Vibration, p. 115316, 2020.
[2] M. A. Mironov, “Propagation of a flexural wave in a plate whose thickness decreases smoothly to zero in a finite interval,” Akust. Zh., pp. 546-547, 1988.
[3] H. Lamb, “On waves in an elastic plate,” Proceedings of the Royal Society of London, pp. 114-128, 1917.
[4] L. Y. Yeo and J. R. Friend, “Surface Acoustic Wave Microfluidics,” Annual Review of Fluid Mechanics, pp. 379-406, 2014.
The propagation of mechanical waves in an unbounded, non-uniform medium can be described using curvilinear coordinates centred at the source position. The Frenet coordinate system, with origin at a point riding a wave, is used as the basis for the curvilinear coordinates. In this way a principal coordinate is determined that describes the evolution of the wave field and runs along a ray, orthogonal to the successive wave fronts. A linear expansion of the refraction law at the riding point connects the gradient of the speed-of-sound field to the curvatures of the wave path and of the wave front. We use this connection to develop a numerical solution for the wave field, after recalling recent developments concerning the historical determination of the refraction law.
Kinematically, the elastic deflection in a solid can be traced back to three irreducible types of deformation {Dil, Rot, Dev}: the dilatation Dil is defined by the div operator, the change in direction Rot by the rot (curl) operator, and the shape change Dev by the dev operator. These deformations are automatically separated thanks to the well-known mathematical identities rot grad = 0 and div rot = 0 and are governed by the longitudinal, transverse and deviatoric wave equations. All three types of waves are based on a local Euler momentum balance and can be confirmed physically by the impedance theorem and mathematically by factorization. The longitudinal and transverse waves are space waves that are free of transverse expansion in a medium that is unbounded on all sides. The deviatoric wave is divergence- and rotation-free and describes a surface wave. All three types of equations are first-order partial differential equations (PDEs); for the homogeneous solid they provide the same solutions as the classical Cauchy wave equation (a second-order PDE) and simplify the calculation of inhomogeneous waveguides.
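For the first two wave types this corresponds to the classical Helmholtz decomposition of the displacement field (quoted here only as standard background, not as the author's factorized first-order formulation),
\[
\mathbf{u} = \nabla\varphi + \nabla\times\boldsymbol{\psi}, \qquad c_L=\sqrt{\frac{\lambda+2\mu}{\rho}}, \qquad c_T=\sqrt{\frac{\mu}{\rho}},
\]
where the scalar potential $\varphi$ carries the dilatational (longitudinal) wave and the vector potential $\boldsymbol{\psi}$ the rotational (transverse) wave, with $\lambda,\mu$ the Lamé constants and $\rho$ the density.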
The session especially welcomes contributions to the following topics: uncertainty quantification; risk analysis and assessment; Bayesian methods in engineering; decision analysis; stochastic modeling (including spatio-temporal modeling); stochastic mechanics; stochastic algorithms and simulation; stochastic processes (including time series); resampling methods; stochastic networks. Applications to large-scale problems are encouraged.
This work's main aim is to quantify uncertainty in some elastoplasticity problems using probabilistic entropy and probabilistic distance. Classical theoretical and computational approaches based on probabilistic moments and characteristics are contrasted with Shannon entropy in order to obtain and test a single uncertainty measure in certain nonlinear problems of solid mechanics. A general hypothesis that probabilistic entropy may serve this purpose is tested using the Stochastic Finite Element Method (SFEM) implemented via Monte Carlo simulation. Probabilistic moments computed as the reference solution are determined using a triple numerical strategy, where Monte Carlo simulation is accompanied by the perturbation technique as well as the semi-analytical approach. This SFEM implementation has been completed using the ABAQUS system and the computer algebra system MAPLE. Additionally, the apparatus of probabilistic distance (relative probabilistic entropy) is presented and applied to study two limit functions defined using admissible and extreme deformations and stresses in the elastoplastic regime. It is demonstrated that such a relative entropy may serve for reliability assessment and exhibits variations with respect to the input uncertainty level very similar to those of the classical First Order Reliability Method index. A few mathematical models of probabilistic distance will be discussed, including the Bhattacharyya, Hellinger, Kullback-Leibler, and Jensen models. Some issues concerning the existence of the integrals representing probabilistic distance will be discussed separately. A few computational experiments illustrating this approach will address uncertainty quantification in the Ramberg-Osgood constitutive law, probabilistic convergence of the Monte Carlo simulation, the numerical error of probabilistic entropy determination, as well as the influence of the choice of the probability density function.
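For reference, two of the probabilistic distances mentioned above are, for probability densities $p$ and $q$ (standard definitions, quoted here only as background),
\[
D_{KL}(p\,\|\,q) = \int p(x)\,\ln\frac{p(x)}{q(x)}\,dx, \qquad D_B(p,q) = -\ln\int\sqrt{p(x)\,q(x)}\,dx,
\]
i.e. the Kullback-Leibler relative entropy and the Bhattacharyya distance; the Hellinger distance is obtained from the same Bhattacharyya integral via $H^2(p,q) = 1-\int\sqrt{p(x)\,q(x)}\,dx$.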
The main aim is to study natural vibrations of Kirchhoff-Love elastic plates by various probabilistic extensions of the Boundary Element Method. Uncertainty quantification is provided for various material and geometrical parameters, in particular the plate thickness [1,2]. They are traditionally considered Gaussian due to the Maximum Entropy Principle; nevertheless, the methodology presented allows for other probability distributions as well. The Response Function Method using polynomial and spline functions enables numerical recovery of an analytical representation of the structural responses versus the given input uncertainty source. Then, the probabilistic moments of these responses are determined using three concurrent probabilistic methods, namely the semi-analytical direct integration method, Monte Carlo simulation with statistical estimation, and the Taylor-series-based perturbation method [1,2]. The numerical efficiency of this approach, especially of the first method, strongly depends upon the application of a computer algebra system enabling analytical derivation of all probability integrals, which frequently have complex forms. Additionally, a relative entropy quantification is applied here [3] to check the probabilistic distance between some input and output probability distributions. It may have some engineering importance in reliability analysis [4], where limit functions are specific cases of such distances. This apparatus also serves for traditional and probabilistic sensitivity analysis, where the first two gradients with respect to all plate parameters are inherent in the Taylor expansion of the perturbation-based method. Verification and discussion of the probabilistic sensitivity will be presented, which is possible because all moments and distances are parametrized with respect to the input statistical scattering. Finally, let us note that an extension of this approach towards a BEM formulation using the so-called double collocation point will also be demonstrated.
References
[1] Guminiak M., Kamiński M. On semi-analytical Stochastic Boundary Element Method and its application to eigenproblem of thin elastic plate immersed into a fluid. Engineering Analysis with Boundary Elements. 2022; 134: 219–230.
[2] Kamiński M. On iterative scheme in determination of the probabilistic moments of the structural response in the Stochastic perturbation-based Finite Element Method. International Journal for Numerical Methods in Engineering. 2015; 104(11):1038–1060.
[3] Bhattacharyya A. On a measure of divergence between two multinomial populations. Indian J. Stat. 7:401–406, 1946.
[4] Xiang Y, Liu Y. Application of inverse first-order reliability method for probabilistic fatigue life prediction. Probabilistic Engineering Mechanics. 2011; 26(2): 148–156.
Calibrating constitutive models based on experimental data is a common task to ensure the reliability of numerical models in solid mechanics. This calibration is usually accompanied by quantifying material parameter uncertainties, either through approximations assuming asymptotic normality or through stochastic approaches such as Bayesian methods. However, propagating these material parameter uncertainties to estimate the uncertainty of simulation results is essential for validating numerical models. In recent years, numerous methods have been developed for this purpose, each with its own advantages and disadvantages. Apart from many sampling-based methods, the first-order second-moment method provides reasonable uncertainty approximations provided the input parameter uncertainties are small. In this contribution, we demonstrate that the first-order second-moment method can be seamlessly integrated into the nonlinear finite element solution scheme for transient thermal problems. Although intrusive in the sense that modifications to the finite element code are required, this approach pays off by enabling efficient uncertainty quantification without significant additional effort. In particular, the required first-order derivatives are computed using internal numerical differentiation, which is based on analytical derivatives and the additional solution of a linear system with multiple right-hand sides. This allows us to consider the uncertainties of material parameters and boundary conditions in our numerical model and propagate them to the simulation results. The efficacy of this approach is demonstrated for the calibration and uncertainty quantification of thermal material parameters in transient thermal finite element computations. Additionally, the uncertainty propagation is studied for a validation example. It is found that an additional computational effort of approximately 16% is required for integrated uncertainty quantification compared to a regular finite element simulation. The quantified uncertainties using the first-order second-moment method are compared to reference results obtained with the Monte Carlo method, revealing only minor deviations.
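As a reminder of the underlying approximation (the standard first-order second-moment relations, not specific to this implementation): for a simulation result $u(\boldsymbol{\theta})$ depending on uncertain parameters $\boldsymbol{\theta}$ with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$,
\[
\mathrm{E}[u] \approx u(\boldsymbol{\mu}), \qquad \mathrm{Cov}[u] \approx \mathbf{J}\,\boldsymbol{\Sigma}\,\mathbf{J}^{\mathsf T}, \qquad \mathbf{J} = \left.\frac{\partial u}{\partial \boldsymbol{\theta}}\right|_{\boldsymbol{\mu}},
\]
which is why only first-order derivatives of the finite element solution with respect to the uncertain material parameters and boundary conditions are required.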
Capillary flow through porous media plays a crucial role in applications ranging from material science to biological systems. We are interested in quantifying the effect of hydrophobic defects in a heterogeneous wetting problem, which can, for example, occur in powder processing for food technologies [1]. In real-world applications such as this one, the number and spatial positions of these defects are highly uncertain, and quantifying their effect on the functionality, i.e. the wettability and wetting dynamics of the composite, is crucial in the design process for such applications.
Our model is a revision of the classical problem of liquid imbibition in a single pore, focusing on spatially varying wettability of the pore surface. For single pores consisting of two different hydrophilic materials, we recently found [2] that the order of the materials matters for the wetting dynamics, with the shortest wetting times achieved when the liquid first completely passes the less wettable material, followed by the material with higher wettability. Extending these results to cases where the less wettable material is even hydrophobic raises new, interesting questions.
Firstly, in contrast to the case of two hydrophilic materials, the presence of hydrophobic defects can lead to a complete stoppage of the capillary rise under certain critical conditions.
Secondly, the complex dynamics of the dynamic contact angle as well as free surface oscillations make it harder to develop macroscopic ODE models for the overall wetting dynamics, which are available for the case with two hydrophilic materials, at least under some assumptions [2,3].
For this reason, we need to rely on direct numerical simulations (DNS) of the capillary rise problem. Quantifying the effect of hydrophobic defects of unknown size distribution, number and position leads to a multidimensional forward UQ problem, and, given the computational expense of DNS, classical sampling methods such as Monte Carlo quickly reach their limits. To circumvent this issue, we construct Polynomial Chaos-based surrogate responses to our simulations, which subsequently enables us to identify critical parameters for the wetting process under uncertain conditions. This ultimately aims at obtaining robust process designs in the numerous wetting-based technologies.
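Schematically (standard polynomial chaos notation, with details such as the truncation and the coefficient computation left open here), the surrogate takes the form
\[
u(\boldsymbol{\xi}) \;\approx\; \sum_{|\boldsymbol{\alpha}|\le p} c_{\boldsymbol{\alpha}}\,\Psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}),
\]
with multivariate polynomials $\Psi_{\boldsymbol{\alpha}}$ orthonormal with respect to the input density and coefficients $c_{\boldsymbol{\alpha}}$ determined from a modest number of DNS runs, e.g. by regression or pseudo-spectral projection; moments then follow directly as $\mathrm{E}[u]=c_{\mathbf 0}$ and $\mathrm{Var}[u]=\sum_{\boldsymbol{\alpha}\neq\mathbf 0} c_{\boldsymbol{\alpha}}^2$.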
[1] J. Kammerhofer et al. (2018), Powder Technology, 328, 367-74
[2] M. Fricke, L. Gossel, J. De Coninck (2024), Optimizing Capillary Flow: The Role of Material Distribution in Fluid Transport, to be published.
[3] M. Fricke et al. (2023), Physica D: Nonlinear Phenomena, 455, 133895
The paper is concerned with a coupled piezoelectric problem with incompletely known coefficients of the elasticity tensor and of two other tensors that define the electric properties of the medium. Due to this uncertainty, the problem possesses a set (cloud) of equally probable solutions instead of a unique solution. Quantitative characteristics of this set are derived by a posteriori estimates of the functional type. They give an upper bound of the cloud diameter and a lower bound of the maximal diameter of an inscribed ball. The estimates are fully computable. They are based on solving algebraic optimisation problems of low dimensionality related to the sets containing the possible coefficients.
The aim of this section is to bring together experts in the field of applied and numerical linear algebra, discussing recent theoretical and algorithmic developments.
The operator scaling problem is a generalization of the famous matrix scaling problem and admits several important applications. The operator Sinkhorn iteration (OSI) is an iterative algorithm for finding a solution. In the talk we will introduce the basic convergence theory of this method based on the Hilbert metric on positive definite matrices. In addition, we propose accelerated versions of OSI using overrelaxation and investigate their convergence properties. In particular, the local convergence analysis allows one to determine the asymptotically optimal relaxation parameter based on Young's classical SOR theorem. For a geodesic version of overrelaxation, we also obtain a global convergence result in a specific range of relaxation parameters. Numerical experiments demonstrate that the accelerated methods outperform the original OSI in certain applications.
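As a point of reference, the sketch below shows the classical matrix Sinkhorn iteration with a geometric overrelaxation exponent. It is only the commutative analogue of the operator Sinkhorn iteration discussed in the talk (where positive vectors are replaced by positive definite matrices), and the relaxation shown is a simple illustration rather than the scheme analysed there.

```python
import numpy as np

def sinkhorn_matrix_scaling(A, omega=1.0, tol=1e-12, max_iter=10000):
    """Classical Sinkhorn iteration: scale a positive square matrix A to a
    doubly stochastic form diag(r) @ A @ diag(c).  The exponent omega mixes
    the old and the exact new scaling factors geometrically; omega = 1
    recovers the standard method."""
    m, n = A.shape
    r, c = np.ones(m), np.ones(n)
    for _ in range(max_iter):
        r_new = 1.0 / (A @ c)                  # exact row normalization
        r = r**(1.0 - omega) * r_new**omega
        c_new = 1.0 / (A.T @ r)                # exact column normalization
        c = c**(1.0 - omega) * c_new**omega
        P = np.diag(r) @ A @ np.diag(c)
        err = max(np.abs(P.sum(0) - 1).max(), np.abs(P.sum(1) - 1).max())
        if err < tol:
            break
    return r, c

# small usage example with a mild overrelaxation parameter
A = np.random.rand(4, 4) + 0.1
r, c = sinkhorn_matrix_scaling(A, omega=1.3)
```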
Tensor Train (TT) decomposition is a widely used low-rank tensor factorization technique known for its memory efficiency and scalability in high-dimensional data. Its advantages have motivated the development of various TT-based methods and applications across fields such as chemistry, quantum physics, and machine learning. However, constructing low-rank tensors and performing operations in TT arithmetic can be computationally intensive, often due to the cost of tensor construction and the complexity of numerical operations. To address these issues, high-performance computing (HPC) techniques such as parallelism and mixed-precision arithmetic have become essential tools for enhancing computational efficiency and reducing memory and communication requirements in (multi)linear algebra. In this talk, we discuss recent advances in HPC for TT arithmetic and introduce parallel TT operations using mixed-precision arithmetic. We then explore the potential of these developments to improve large-scale tensor computations and discuss their implications for future applications in scientific computing.
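In the standard notation, the TT format represents a $d$-way tensor entrywise as a product of small core slices,
\[
\mathcal{X}(i_1,\dots,i_d) \;=\; G_1(i_1)\,G_2(i_2)\cdots G_d(i_d), \qquad G_k(i_k)\in\mathbb{R}^{r_{k-1}\times r_k},\; r_0=r_d=1,
\]
so that storage scales as $\mathcal{O}(d\,n\,r^2)$ instead of $\mathcal{O}(n^d)$; it is the cost of building the cores and of the arithmetic on them (additions, contractions, roundings) that the HPC techniques mentioned above target.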
Continuous-time algebraic Lyapunov equations arise in the fields of, e.g., optimal control and model order reduction. For many applications, the coefficient matrices are large and sparse, while the solution matrix has a low numerical rank. In this setting, the alternating-directions implicit (ADI) method, which operates directly on the low-rank factors of the solution matrix, is one of the most widely used algorithms for this type of equation. We report on our progress in applying mixed-precision techniques to the low-rank Lyapunov ADI, namely the use of multi-precision low-rank factorizations, as well as a mixed-precision variant of the inexact low-rank Lyapunov ADI.
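For orientation (the standard setting, not specific to the variants discussed here), the equation considered is
\[
A X + X A^{\mathsf T} + B B^{\mathsf T} = 0, \qquad A\in\mathbb{R}^{n\times n} \text{ large and sparse},\; B\in\mathbb{R}^{n\times m},\; m\ll n,
\]
whose solution is approximated by a low-rank factorization $X \approx Z Z^{\mathsf T}$ with $Z\in\mathbb{R}^{n\times k}$, $k\ll n$; each ADI step enlarges $Z$ by a few columns obtained from a sparse shifted solve with $A + p_k I$, which is where mixed-precision factorizations and inexact solves enter.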
In this talk, we introduce a mesh-free two-level hybrid Tucker tensor format for the approximation of multivariate functions. This new format combines product Chebyshev interpolation with an ALS-based Tucker decomposition of the coefficient tensor. The benefits of this tensor approximation are two-fold. On the one hand, it avoids the rank-structured approximation of functional tensors defined on large spatial grids, while on the other hand, it leads to a Tucker decomposition of the core coefficient tensor with nearly optimal $\varepsilon$-rank parameters, which are shown to be much smaller than both the polynomial degree of the Chebyshev interpolant and the potential number of spatial grid points in commonly used grid-based discretizations. We discuss error and complexity estimates of the presented method and demonstrate its efficiency on the demanding example of multi-particle interaction potentials generated by the 3D Newton kernel.
We investigate the use of mixed precisions in an iterative refinement framework for solving low-rank Lyapunov matrix equations, where the sign function method with Newton iteration is used as the solver. We also discuss the possibility of using mixed precisions within the Newton iteration itself and propose a mixed-precision truncation scheme based on the QR factorization with column pivoting. Numerical experiments are presented to verify the accuracy of our mixed-precision algorithms, and accordingly, we also discuss the potential speedup based on current and future hardware.
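As background (Roberts' sign function approach in its standard form; the mixed-precision and truncation aspects of the talk are not reflected here): for the stable Lyapunov equation $A^{\mathsf T}X + XA + Q = 0$, the Newton iteration for the matrix sign function,
\[
A_{k+1} = \tfrac12\left(A_k + A_k^{-1}\right), \qquad Q_{k+1} = \tfrac12\left(Q_k + A_k^{-\mathsf T} Q_k A_k^{-1}\right), \qquad A_0 = A,\; Q_0 = Q,
\]
converges quadratically, and the solution is recovered as $X = \tfrac12\lim_{k\to\infty} Q_k$; in the low-rank setting, $Q_k = L_k L_k^{\mathsf T}$ is kept in factored form and the growing factor is compressed after each step, e.g. by a pivoted QR factorization.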
For all fields of applications the mathematical models are primarily based on differential equations. Hence, their numerical solution plays a fundamental role in numerical mathematics. This section covers mainly the construction and the behavior of numerical methods for differential equations including those of ordinary as well as of partial differential type.
Recent modeling frameworks to predict the mechanics of additive manufacturing processes involve both fluid and solid mechanics, with the former often described adopting the lattice Boltzmann method. Motivated by the wish to model all physics with the same method, we proposed a novel lattice Boltzmann formulation to solve the equations of linear elastostatics [1,2] and we now aim at extending the scope to elastodynamics [3]. In comparison to previous attempts in the same direction, our approach aims at higher accuracy and efficiency, as well as at retaining the computational benefits of the lattice Boltzmann method.
In this contribution we outline the systematic construction of a second-order consistent lattice Boltzmann formulation to solve the equations of linear elastodynamics with Dirichlet and Neumann boundary conditions. To this end, we reformulate the target equation as a first-order hyperbolic system and use the so-called vectorial LBM for its numerical approximation [3]. Using the asymptotic expansion technique [4], we formally show second-order consistency. Additionally, we establish a CFL-like stability criterion based on the notion of so-called pre-stability structures [5]. Lastly, we propose novel second-order consistent and stable boundary formulations for Dirichlet and Neumann boundary conditions.
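For readers less familiar with the method, the generic lattice Boltzmann update in its simplest single-relaxation-time (BGK) form, given here only for orientation and not identical to the vectorial collision model of [3], advances populations $f_i$ along discrete velocities $\mathbf{c}_i$ by a local collision followed by streaming,
\[
f_i(\mathbf{x}+\mathbf{c}_i\,\Delta t,\;t+\Delta t) \;=\; f_i(\mathbf{x},t) \;-\; \frac{\Delta t}{\tau}\,\Big(f_i(\mathbf{x},t)-f_i^{\mathrm{eq}}(\mathbf{x},t)\Big),
\]
and it is precisely this fully local, explicit structure whose computational benefits the elastodynamic formulation aims to retain.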
All derivations are verified by numerical experiments using manufactured solutions and standard benchmark test cases.
References
[1] Boolakee, O., Geier, M. and De Lorenzis, L. A new lattice Boltzmann scheme for linear elastic solids: periodic problems. Comp. Meth. Appl. Mech. Eng. (2023) 404:115756.
[2] Boolakee, O., Geier, M. and De Lorenzis, L. Dirichlet and Neumann boundary conditions for a lattice Boltzmann scheme for linear elastic solids on arbitrary domains. Comp. Meth. Appl. Mech. Eng. (2023) 415:116225.
[3] Boolakee, O., Geier, M. and De Lorenzis, L. Lattice Boltzmann for linear elastodynamics: Periodic problems and Dirichlet boundary conditions. Comp. Meth. Appl. Mech. Eng. (2024) 433:117469.
[4] Junk, M., Klar, A. and Lou, L. S. Asymptotic analysis of the lattice Boltzmann equation. J. Comp. Phys. (2005) 210:676–704.
[5] Banda, M. K., Yong, W. A., Klar, A. A stability notion for lattice Boltzmann equations. SIAM J. Sci. Comput. (2006) 27:2098–2111.
The lattice Boltzmann method has recently been developed to solve problems in solid mechanics, starting with linear elastostatics [1, 2] and continuing with linear elastodynamics [3] in two dimensions. Based on these works, we propose an extension of the lattice Boltzmann method to linear elastodynamics in three dimensions. As in the 2D case, we transform the system into an equivalent first-order hyperbolic system of equations, to which the lattice Boltzmann scheme with vector-valued populations is applied. Using the asymptotic expansion technique, we prove second-order consistency for both periodic and Dirichlet boundary conditions for 3D prismatic domains. The scheme is stable under a CFL-like condition. Moreover, we propose a projection of the solution onto a 2D rectangular domain, which recovers the solution obtained with a 2D lattice Boltzmann formulation. Finally, we conduct numerical experiments to verify our theoretical derivations.
References:
[1] Boolakee, O., Geier, M. and De Lorenzis, L., A new lattice Boltzmann scheme for linear elastic solids: periodic problems. Comp. Meth. Appl. Mech. Eng. (2023) 404:115756.
[2] Boolakee, O., Geier, M. and De Lorenzis, L., Dirichlet and Neumann boundary conditions for a lattice Boltzmann scheme for linear elastic solids on arbitrary domains. Comp. Meth. Appl. Mech. Eng. (2023) 415:116225.
[3] Boolakee, O., Geier, M. and De Lorenzis, L., Lattice Boltzmann for linear elastodynamics: periodic problems and Dirichlet boundary conditions. Preprint (2024). https://doi.org/10.48550/arXiv.2408.01081
Fractional calculus is a powerful extension of traditional calculus, which has found its way into the modeling of complex phenomena such as anomalous diffusion, heat conduction, elasticity and plasticity, or micro/nano beams. The wide spectrum of applications of this branch of mathematical analysis is related to its nonlocal nature, which allows one to describe problems where it is necessary to take into account the influence of the surroundings of a point in a given space while analyzing the behavior of the phenomenon there. Since in many cases the affecting neighborhood is not only one-sided, special two-sided operators, such as compositions of left- and right-sided fractional derivatives, have been gaining popularity.
In our work we consider the fractional Euler-Bernoulli beam equation including the composition of the left and right Caputo derivatives. We analyze two types of boundary conditions, namely a beam with fixed-supported and with fixed-free ends, and employ numerical methods based on the trapezoidal rule to obtain approximate solutions of the given equation. We perform numerical simulations for three particular downward transverse loads per unit length: a constant, a power and a trigonometric function. Additionally, for each case, we conduct an analysis of the order of accuracy of the proposed method.
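For completeness, the left- and right-sided Caputo derivatives entering such compositions are (standard definitions; the particular orders and interval used in the beam model are as specified in the work itself)
\[
{}^{C}D^{\alpha}_{a+}u(x) = \frac{1}{\Gamma(n-\alpha)}\int_a^x \frac{u^{(n)}(t)}{(x-t)^{\alpha-n+1}}\,dt, \qquad
{}^{C}D^{\alpha}_{b-}u(x) = \frac{(-1)^n}{\Gamma(n-\alpha)}\int_x^b \frac{u^{(n)}(t)}{(t-x)^{\alpha-n+1}}\,dt,
\]
with $n-1<\alpha<n$; the trapezoidal rule is then applied to these weakly singular integrals in their discretized form.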
Fractional calculus is gaining increasing recognition among scientists from various fields of science, and one of its key applications is continuum mechanics. The introduction of nonlocality into models in this field has enabled the development of a fractional theory of continuum mechanics, which opens up new perspectives in modeling and analyzing the behavior of materials and structures. The newly created models contain fractional operators with a fixed memory length, which poses a problem in determining their solutions. Existing analytical solutions cover only selected types of equations, which emphasizes the need to develop numerical and approximate methods that allow for the effective application of this theory in practice. In our work, we discuss the numerical approximation of fractional compositions of Caputo derivatives with fixed memory length. We also present the application of this type of operator to modeling a one-dimensional problem of continuum mechanics. The composition of fractional operators is approximated by the trapezoidal rule. The obtained numerical solution is compared with approximate solutions for selected functions. Then, we use the derived numerical scheme to determine the solution for beam displacement under given Dirichlet boundary conditions.
Diabetes is rapidly emerging as a global epidemic, posing a significant threat to public health. Modeling the spread and management of diabetes is crucial for monitoring its growing prevalence and developing cost-effective strategies to mitigate its incidence and complications. This paper presents a fractional order nonlinear model for diabetes mellitus that incorporates the cumulative effect of media-driven diabetes awareness and education programs. The model is analyzed using a two-step Newtonian polynomial approach with the Caputo derivative. Key aspects such as equilibrium points, stability, and the existence and uniqueness of solutions are examined to ensure the robustness of the model. The obtained results are validated against previously published findings. Graphical and numerical results are obtained for different values of the fractional order. This research provides valuable insights for predicting disease trends and planning effective clinical management for diabetes patients.
In this session, novel developments devoted to optimization and optimal control problems governed by ordinary or partial differential equations will be discussed. The focus is on theoretical investigations, numerical analysis and algorithmic issues, as well as on applications.
Shape optimization focuses on determining the optimal shape of a domain—such as a boundary or region in space—to minimize or maximize an objective function. This process is typically subject to constraints, often modeled by partial differential equations (PDEs), as the underlying physical phenomena, such as fluid flow, heat conduction, or elasticity, are governed by these equations. Common applications include minimizing drag in fluid dynamics, maximizing structural strength, and optimizing material distribution in a design domain.
A central challenge in this field is the effective modeling of shapes. Currently, there is no universally accepted approach, and several techniques are employed, including level set methods, the method of mappings, and boundary parametrization. Each method has its own strengths and limitations.
In this talk, we will focus on the Riemannian perspective of shape optimization. Within this framework, Riemannian geometry provides a rigorous mathematical foundation for understanding and navigating the space of shapes. The core concept is to treat the space of admissible shapes as a Riemannian manifold, where each shape—such as a curve in 2D or a surface in 3D—represents a point in either an infinite- or finite-dimensional manifold. Riemannian metrics are used to define distances and gradients on this manifold, enabling the application of optimization techniques, such as gradient descent, in a mathematically consistent way.
We will explore various metrics for infinite-dimensional and finite-dimensional shape manifolds, examine commonly used numerical methods within this framework, and present several illustrative numerical examples.
In this talk we extend the work of Deckelnick et al. (arXiv.2301.08690) to shape optimization with parabolic PDE constraints, where we consider time-dependent as well as time-independent domains. We use the method of mappings and consider steepest descent and Newton-type directions for the solution of the underlying optimization problems. The PDEs appearing for the state and the adjoint are formulated using a least-squares space-time approach requiring only minimal regularity. The descent directions for the shape transformations are computed in the Lipschitz topology, where the respective linear programmes are numerically solved with an interior point method. We illustrate our approach with a selection of numerical examples which demonstrate the performance of the method.
We present a general framework for PDE-constrained shape optimization which combines the phase field method with a sharp-interface approach based on the method of Lipschitz mappings. On the one hand, phase field methods are very well suited to numerically determine the shape, size and topology of a sought domain; on the other hand, they have difficulties sharpening out domains where these, e.g., should develop corners. This, however, is the strength of a sharp-interface approach developed in our group, which provides shape updates in the Lipschitz topology. This leads to a two-stage process that first determines an optimal shape numerically using the phase field method discretized with the finite element method. The resulting domain is the starting solution for the finite-element-based sharp-interface shape optimization method. We describe the underlying construction process in detail and investigate the performance of our method on a selection of test problems from the literature and from applications.
A common approach to simulating fracture propagation is by means of phase field methods, in which the discrete fracture is replaced by a continuous phase-field variable. The phase-field variable then indicates the material state. Together with the material behavior, it can be obtained by the numerical solution of partial differential equations. This replacement, however, introduces additional parameters, which require additional effort in order to be chosen appropriately.
Instead of interpreting the problem of fracture propagation as an optimization problem with respect to the phase-field, alternative optimization algorithms can also be employed, e.g., shape or topology optimization. Additional challenges arise when considering the spectral decomposition of the strain tensor to exclude unphysical fracture paths.
In the first part of the talk, we present the optimization problem to be solved. In the second part, we then employ a shape optimization approach, address possible additional challenges in doing so, and solve the optimization problem numerically.
Scientific Computing is concerned with the efficient numerical solution of mathematical models from both science and engineering. The field covers a wide range of topics: from mathematical modeling over the development, analysis and efficient implementation of numerical methods and algorithms to software and finally application for the solution of complex real-world problems on modern computing systems. This interdisciplinary field combines approaches from applied mathematics, computer science and a wide range of applications in which in-silico experiments play an increasingly important role.
We consider the frequency domain simulation of second-order linear dynamical systems, which is crucial for many applications ranging from vibroacoustics to electrical circuits. These problems are parameter-dependent, involving frequency and additional parameters to account for uncertainties. This dependence on a large number of parameters results in a very high numerical cost. To address these challenges, surrogate modeling based on polynomial, rational, and reduced-order models has recently gained significant attention. In this contribution, we will present multiple perspectives on efficient numerical methods with emphasis on rational modeling.
In the first part (joint work with J. Bect, N. Georg, and S. Schöps) we will consider the frequency as the only parameter and introduce a hybrid method that combines a parametric rational model with a complex-valued kernel approximation. We will discuss the role of pseudo-kernels and the underlying reproducing kernel Hilbert spaces and compare the performance of the method against established rational surrogates, such as vector fitting and the Adaptive Antoulas-Anderson (AAA) algorithm [1].
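For context, the rational surrogates mentioned above represent the frequency response in barycentric form; AAA, for instance, uses
\[
r(s) \;=\; \frac{\displaystyle\sum_{j=1}^{m} \frac{w_j\,f_j}{s - s_j}}{\displaystyle\sum_{j=1}^{m} \frac{w_j}{s - s_j}},
\]
with greedily selected support points $s_j$, data $f_j$ and weights $w_j$, while vector fitting works with a closely related pole-residue representation with iteratively relocated poles; the hybrid method of the talk combines such a rational model with a complex-valued kernel correction, whose exact form is given in [1].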
In the second part (joint work with M. Bollhöfer, H. Sreekumar, S. Langer), we will consider both the frequency and additional model parameters with an underlying probability distribution. The frequency dependence is captured again with a rational model, specifically, a moment matching-based reduced order model. This reduced order model is then interpolated on sparse grids to account for the high-dimensional parameter space. We outline an adaptive procedure, driven by adjoint error indicators, that balances the reduced order and sparse grid approximation errors [2].
Finally, we will provide a brief outlook (joint work with J. Heiland, I.V. Gosea, D. Pradovera) on rational approximation of matrix- or tensor-valued data, offering a highly structured representation of snapshots over the frequency domain.
[1] Bect, J., Georg, N., Römer, U., and Schöps, S. (2024). Rational kernel-based interpolation for complex-valued frequency response functions. SIAM Journal on Scientific Computing, 46(6), A3727-A3755.
[2] Römer, U., Bollhöfer, M., Sreekumar, H., Blech, C., and Langer, S. (2021). An adaptive sparse grid rational Arnoldi method for uncertainty quantification of dynamical systems in the frequency domain. International Journal for Numerical Methods in Engineering, 122(20), 5487-5511.
The classical (weak) greedy algorithm is widely used within model order reduction to compute a reduced basis in the offline training phase: an a posteriori error estimator is maximized and the snapshot corresponding to the maximizer is added to the basis. Since these snapshots are determined by a sufficiently detailed discretization, the offline phase is often computationally extremely costly. We suggest replacing the serial determination of one snapshot after the other with a parallel approach. In order to do so, we introduce a batch size b and add b snapshots to the current basis in every greedy iteration. These snapshots are computed in parallel. We prove convergence rates for this new batch greedy algorithm and compare them to those of the classical (weak) greedy algorithm in the Hilbert and Banach space case. Then, we present numerical results where we apply a (parallel) implementation of the proposed algorithm to the linear elliptic thermal block problem. We analyze the convergence rate as well as the offline and online wall-clock times for different batch sizes. We show that the proposed variant can significantly speed up the offline phase while the size of the reduced problem is only moderately increased. Additionally, the benefit of the parallel batch greedy algorithm increases for more complicated problems. A preprint is available.
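To make the structure of the proposed variant concrete, the following schematic sketch shows a weak greedy loop with batch size b; the callables `error_estimator`, `compute_snapshot` and `extend_basis` are placeholders for the problem-specific ingredients and are not part of the referenced implementation.

```python
import numpy as np

def batch_weak_greedy(training_set, error_estimator, compute_snapshot,
                      extend_basis, tol=1e-6, batch_size=1, max_iter=100):
    """Sketch of the (weak) greedy basis generation with batch size b:
    in every iteration, the b parameters with the largest a posteriori error
    estimates are selected, their snapshots are computed (in a real
    implementation: in parallel) and added to the reduced basis."""
    basis = []
    for _ in range(max_iter):
        errors = np.array([error_estimator(mu, basis) for mu in training_set])
        if errors.max() < tol:
            break
        worst = np.argsort(errors)[-batch_size:]   # batch of worst parameters
        snapshots = [compute_snapshot(training_set[i]) for i in worst]
        basis = extend_basis(basis, snapshots)     # e.g. orthonormalization
    return basis
```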
Many model order reduction (MOR) methods rely on the computation of an orthonormal basis of a subspace onto which the large full order model is projected. Numerically, this entails the orthogonalization of a set of vectors. The nature of the MOR process imposes several requirements on the orthogonalization process. Firstly, MOR is oftentimes performed in an adaptive or iterative manner, where the quality of the reduced order model, i.e., the dimension of the reduced subspace, is decided on the fly. Therefore, it is important that the orthogonalization routine can be executed iteratively. Secondly, one possibly has to deal with high-dimensional arrays of abstract vectors that do not allow explicit element-wise access to their entries, making it difficult to employ so-called orthogonal triangularization algorithms, such as Householder QR. For these reasons, (modified) Gram-Schmidt-type algorithms are commonly used in applications. These methods belong to the category of triangular orthogonalization algorithms, which do not rely on element-wise access to the vectors and can easily be updated. Recently, shifted-Cholesky-QR-type algorithms have gained attention. These also belong to the aforementioned category and have proven their aptitude for MOR algorithms in previous studies. A key benefit of these methods is that they are communication-avoiding, leading to vastly superior performance on memory-bandwidth-limited problems and parallel or distributed architectures. This work formulates an efficient updating scheme for Cholesky-QR-type algorithms and proposes an improved shifting strategy for highly ill-conditioned matrices. Driven by the MOR applications, in the numerical experiments we further introduce the dominant subspace angle as a quality measure, in addition to classic measures like deviation from orthogonality or the reconstruction error.
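As an illustration of the class of methods discussed, a minimal sketch of (shifted) CholeskyQR is given below; the shift heuristic shown follows the shifted CholeskyQR literature and is not the improved shifting strategy proposed in this work, and the updating scheme for iteratively grown bases is likewise omitted.

```python
import numpy as np

def shifted_cholesky_qr(X, shift=None):
    """One pass of shifted CholeskyQR (sketch).  For ill-conditioned X the
    Gram matrix X^T X may fail to be numerically positive definite; a small
    diagonal shift restores the Cholesky factorization.  Only matrix-matrix
    products and a small n-by-n factorization are needed, which is what makes
    the method communication-avoiding."""
    m, n = X.shape
    G = X.T @ X
    if shift is None:
        # simple heuristic shift, proportional to unit roundoff and ||X||_2^2
        shift = 11 * (m * n + n * (n + 1)) * np.finfo(X.dtype).eps * np.linalg.norm(G, 2)
    R = np.linalg.cholesky(G + shift * np.eye(n)).T   # upper triangular factor
    Q = np.linalg.solve(R.T, X.T).T                    # Q = X R^{-1}
    return Q, R

def cholesky_qr2(X):
    """Two passes (CholeskyQR2): re-orthogonalize and accumulate the R factors."""
    Q1, R1 = shifted_cholesky_qr(X)
    Q2, R2 = shifted_cholesky_qr(Q1)
    return Q2, R2 @ R1
```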
In this talk, we will introduce the formulation and theoretical analysis of a reduced-order numerical method constructed by proper orthogonal decomposition (POD) for nonlocal diffusion problems with a finite range of interactions. Due to the nonlocality, the corresponding discrete systems of nonlocal models are less sparse than those for PDEs. Given the challenge of repeatedly handling large systems of linear equations with much lower sparsity, we establish a reduced-order model (ROM) for nonlocal diffusion problems to expedite the iterative solution process. The ROM is constructed using FE solutions in a very small time interval as snapshot data and has far fewer degrees of freedom than the FEM. In this contribution, we focus on discussing mathematical justifications for the existence, stability, and error estimates of the ROM, which have not been considered in previous research on nonlocal models. Another important component of our work is that we systematically explore the effect of different parameters on the behavior of the POD algorithms. Numerical examples are finally presented to validate the theoretical conclusions and to illustrate the efficiency of the proposed method.
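For orientation, a minimal sketch of the snapshot-based POD construction is given below (generic SVD-based POD, not the specific implementation analysed in the talk); in the nonlocal setting the snapshot columns are the FE solutions on the small initial time interval mentioned above.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-8):
    """Sketch: POD basis from a snapshot matrix (columns = FE solutions at a
    few early time steps).  The basis size r is chosen from the decay of the
    singular values; the reduced system then follows by Galerkin projection
    of the (dense) nonlocal stiffness matrix onto the returned basis."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)      # captured "energy" fraction
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]
```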
The convergence speed of linear model order reduction techniques cannot be better than the Kolmogorov N-width, and it is well known that the Kolmogorov N-width of wave-type problems decays only slowly.
In this talk we will address the question to what extent discontinuous Galerkin Trefftz formulations of parametrized partial differential equations and its associated Trefftz spaces are suitable for model order reduction techniques. We construct non-linear model order reduction approaches using a specific discontinuous Galerkin Trefftz formulation of the wave equation.
Trefftz finite element formulations use trial and test functions that solve the corresponding PDE locally on each mesh element. In order to approximate the boundary conditions well, they cannot be continuous on the whole domain and therefore are discontinuous Galerkin methods by nature.
We are building on a specific formulation of the homogeneous wave equation, published by Andrea Moiola and Ilaria Perugia in 2018. It is a space-time approach considering the wave equation as a first-order system.
Subject to investigation is whether in non-linear model order reduction approaches the Trefftz spaces will provide reduced spaces that outperform the Kolmogorov N-width. The approaches are tested in numerical experiments. Although the work is still in its early stages, the results of the numerical experiments indicate that dG Trefftz methods have the potential to be well suited for efficient non-linear model order reduction techniques.
This section will provide a forum for the presentation of historical and/or speculative studies on mechanics focusing on the relations between concepts from antiquity up to now, which could be of interest to historians of mechanics and physics as well as researchers in mechanics. The contributions of those authors who have benefited from the study of old theories for developing current models in the field of applied mechanics and mathematics will also be considered. This is in order to highlight the importance of a historical and epistemological perspective for current and future developments in science. With these aims in mind, topics of application will include (but are not limited to): -Virtual work laws -Variational methods and conservation principles -Molecular foundation for continuum mechanics -Coarse graining processes and the use of the Cauchy-Born rule -Continuum and statistical thermodynamics -Material deformation theories and experimental results in solid mechanics -Inelastic deformations, damage and fracture -Development and use of Ricci's tensor calculus in deformation theories of solids, shells and fluids -Mechanical models for ancient constructions
Fritz Noether, Emmy Noether's younger brother, was a German mathematician who contributed in particular to the development of a new scientific discipline, applied mathematics. He worked as a professor at the Technische Hochschule Breslau and the State University of Tomsk, which brought him into conflict with two dictatorships. His tragic death can help us to better understand current political problems. At the same time, as a doctoral supervisor, he ensured that the ZAMM journal had an excellent editor-in-chief for many years.
The approach based on the Lagrangian formulation allows the equations of motion to be established. These can be integrated with numerical methods and the trajectories calculated. Physical properties can be assigned to the mass. The spin is remarkable: although the energy input is low, it has a significant influence on the motion of the ball. Friction can be taken into account as rolling and boring friction.
Conformal mappings, i.e., complex differentiable bijective mappings from one domain in the complex plane onto another domain, preserve angles locally and therefore respect plane potential functions. This makes them useful in physics, for example in fluid mechanics and electrostatics. In particular, one is interested in finding a conformal mapping from a domain in the plane where a physical problem occurs onto the complex unit disc, since potential functions can easily be calculated there.
Bernhard Riemann (1826–1866) proved in [Riemann 1851, Art. 21] that such a mapping always exists if the starting domain is simply connected, i.e., has no holes. However, even though he drew the plausibility of his proof from physics, he only showed the mere existence of the mapping, without any concrete expression that could be used for explicit calculations.
As early as the winter semester 1863/64 Franz / Franciszek Mertens (1840–1927) remarked “that it was odd that Riemann had already proven the existence of a function which conformally maps, e.g., the area of a plane straight-sided triangle onto the area of a circle whereas for the present time the effective determination of such a function seems to surmount the powers of analysis” [Schwarz 1869, 105]. (Mertens was born in Schroda / Środa Wielkopolska. His father’s family came from Bremen. On his mother’s side, he had a Polish grandmother and a French grandfather. From 1865 to 1884 he was professor of mathematics at Kraków, where he lectured in Polish and published both in German and in Polish.) Hermann Amandus Schwarz (1843–1921) took up that problem and, encouraged by his academic teacher Karl Weierstraß (1815–1897), published an article with explicit formulas for conformal mappings in the case of polygons [Schwarz 1869]. In the meantime, Elwin Bruno Christoffel (1829–1900) had also attacked this problem [Christoffel 1867].
References
Christoffel, Elwin Bruno: Sul problema delle temperature stazionarie e la rappresentazione di una data superficie. Annali di Matematica Pura ed Applicata 1 (1867), 89–103.
Riemann, Bernhard: Grundlagen für eine allgemeine Theorie der Functionen einer veränderlichen complexen Grösse. Ph.D. thesis. Göttingen 1851.
Schwarz, Hermann Amandus: Über einige Abbildungsaufgaben. Journal für die reine und angewandte Mathematik 70 (1869), 105–120.
The design of a new machine is preceded by an analysis focused on the optimum realisation of the expected utility functions. It is motivated by the desire to obtain the simplest possible solution that minimises power consumption. In the most general definition, synthesis is the design of a mechanism involving the selection of the structural scheme and the dimensions of the links in order to obtain the desired motion. When a designer identifies the type of mechanism and searches for its optimal dimensions, such a task is called dimensional synthesis. The earliest methods were developed graphically. They have not only practical but also scientific and didactic value. The constructions presented here undoubtedly develop the geometrical imagination needed in an engineer's work. Despite the obvious limitations of geometric constructions, due to the rather low complexity of the tasks that can be solved, such approaches – although historical – are still used today and often form the basis for the development of analytical and computer methods.
In this work, the synthesis problem of a mechanism for emptying containers was solved graphically. The method is based on the use of certain geometric-kinematic properties of the four-bar linkage mechanism, which constitutes the main kinematic chain of the device. The container that needs to be emptied is attached to the coupler. In order to make optimum use of the kinematics of this mechanism, the coupler moves between two preset extreme positions in which the direction of its rotation changes. At these positions, the instantaneous angular velocity of the container is zero. In the first, simpler approach, it is assumed that the coupler rotates by 90 degrees when the container is emptied and that the active link also rotates by the preset angle. The way the mechanism is designed allows the lengths of some links and the points at which the links are pivoted to the frame to be influenced. Thus the size of the working area can be approximately prescribed. In computer methods, these features are often checked after the dimensions of the mechanism have been found with respect to the main objective function, and this causes the solution to be rejected in the final stages of the design process. The task was then extended to the case in which the rotation angle of the container is prescribed and greater than 90 degrees. The graphical relationship was then described analytically and implemented. Example solutions were presented.
In this talk, we address the problem of uniqueness in learning physical laws for systems of partial differential equations (PDEs) [1]. Contrary to most existing approaches, we consider a framework of structured model learning, where existing, approximately correct physical models are augmented with components that are learned from data. Our main result is a uniqueness result that covers a large class of PDEs and a suitable class of neural networks used for approximating the unknown model components. The uniqueness result shows that, in the idealized setting of full, noiseless measurements, a unique identification of the unknown model components is possible as the regularization-minimizing solution of the PDE system. Furthermore, we provide a convergence result showing that model components learned on the basis of incomplete, noisy measurements approximate the ground truth model component in the limit. These results are possible under specific properties of the approximating neural networks and due to a dedicated choice of regularization. With this, a practical contribution is to provide a class of model learning frameworks different to standard settings where uniqueness can be expected in the limit of full measurements.
Reference:
[1] Martin Holler and Erion Morina. On uniqueness in structured model learning, 2024. [ArXiv preprint https://arxiv.org/abs/2410.22009 ]
Adversarial training procedures can alleviate the problem of overfitting in multi-class classification. Exploring this approach from a distributional perspective leads to robust optimization using a Wasserstein distance. Recent theoretical results showed a similarity to the multi-marginal formulation of Wasserstein barycenters in optimal transport. Unfortunately, both problems suffer from the curse of dimension, making it hard to exploit the nice linear program structure of the problems for numerical calculations; the number of unknowns scales polynomially with the number of data points and even exponentially with the number of classes. We investigate how ideas from genetic column generation for multi-marginal optimal transport can be used to overcome the curse of dimension in computing the minimal adversarial risk in multi-class classification. For details, see the accompanying preprint: arXiv:2406.08331.
We consider general linear PDE boundary value problems in the strong form on arbitrary bounded Lipschitz domains. For such problems, we recently presented a scale of meshless greedy kernel-based collocation techniques [1]. The approximation spaces are incrementally constructed by carefully collecting Riesz-representers of (derivative operator) point-evaluation functionals. The approximants are obtained by generalized interpolation [2, Chap. 16]. The scale of methods naturally generalizes existing approaches of PDE approximation [3] as well as function approximation techniques [4,5]. Assuming well-posedness and a stability estimate of the given PDE problem, we can rigorously prove the convergence rates of the resulting approximation schemes [1]. Interestingly, those rates show that it is possible to break the curse of dimensionality and potentially reach high input dimensions. For cases with domains in small input space dimensions the schemes allow experimental comparison with, e.g., standard finite element methods. The strength of the procedure, however, is the ease of treating high-dimensional input space dimensions due to the mesh independence and omitting spatial integrals. When considering additional parametric inputs, the overall procedure can be interpreted as an apriori surrogate modelling approach. Herewith, we can especially address (non-affine) parametric geometries, moving source terms, or high-dimensional domains, which pose obstacles to standard model reduction techniques. We present numerical experiments demonstrating these aspects, especially exponential convergence rates for problems with smooth solutions and smooth kernels.
References:
[1] S. Dutta, M. W. Farthing, E. Perracchione, G. Savant, and M. Putti. A greedy non-intrusive reduced order model for shallow water equations. Journal of Computational Physics, 439:110378, 2021.
[2] R. Schaback. A greedy method for solving classes of PDE problems, 2019. arXiv 1903.11536.
[3] H. Wendland. Scattered Data Approximation, volume 17 of Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, 2005.
[4] T. Wenzel, G. Santin, and B. Haasdonk. Analysis of target data-dependent greedy kernel algorithms: Convergence rates for f-, f/P- and f.P-greedy. Constructive Approximation, 57(1):45–74, Feb 2023.
[5] T. Wenzel, D. Winkle, G. Santin, and B. Haasdonk. Adaptive meshfree approximation for linear elliptic partial differential equations with PDE-greedy kernel methods. Preprint, arXiv:2207.13971v2, 2024.
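As a rough, self-contained illustration of the greedy kernel idea referenced in the abstract above (plain f-greedy kernel interpolation of scattered data with a Gaussian kernel, not the PDE collocation scheme itself):

```python
import numpy as np

def gauss_kernel(X, Y, eps=2.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

def f_greedy_interpolant(X, f_vals, n_centers=20):
    """Greedily pick centers where the current residual is largest, then interpolate there."""
    centers = [int(np.argmax(np.abs(f_vals)))]
    for _ in range(n_centers - 1):
        K = gauss_kernel(X[centers], X[centers])
        coef = np.linalg.solve(K, f_vals[centers])
        resid = f_vals - gauss_kernel(X, X[centers]) @ coef
        centers.append(int(np.argmax(np.abs(resid))))
    K = gauss_kernel(X[centers], X[centers])
    coef = np.linalg.solve(K, f_vals[centers])
    return lambda Z: gauss_kernel(Z, X[centers]) @ coef

# toy usage on scattered points in [0,1]^2
X = np.random.rand(500, 2)
f = np.sin(4 * X[:, 0]) * X[:, 1]
s = f_greedy_interpolant(X, f)
print(np.max(np.abs(s(X) - f)))   # residual on the data after 20 greedy centers
```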
Metamaterials are artificial and architected materials, offering various possible designs for achieving peculiar mechanical properties thanks to their structural arrangement. Although promising, with potentially broad applications in, e.g., medicine [1] or mobility [2], apprehending their geometry is challenging due to their complex and often disordered configuration. In this regard, applied topology and its tools, such as persistent homology [3], emerged as a great mathematical method to grasp their geometrical organization and to use it in the design and inverse design of metamaterials. Indeed, persistent homology allows the local geometry of the structure to be captured and converted into point clouds representing the structural features. In other words, the geometrical problem is translated into a data analysis problem, suitable for statistical methods, for example via machine learning models [4]. We discuss the application of persistent homology to point patterns [5] and we show that the resulting data, although obtained from local geometrical features, can capture with great accuracy global properties only appearing on large scales, for instance, the hyperuniformity of the arrangement [6]. We illustrate how the persistent homology delivers an exploitable description of the geometry, and we expose different data analysis methods that can be applied, ranging from machine learning to minimization processes including the Wasserstein distance. Finally, we discuss the highly promising extensions offered by these results to the case of hyperuniform fields, with a large range of applications to the mechanical engineering of metamaterials.
[1] Masoud Shirzad, Ali Zolfagharian, Mahdi Bodaghi, Seung Yun Nam, Enhanced manufacturing possibilities using multi-materials in laser metal deposition, European Journal of Mechanics - A/Solids, volume 98 (2023)
[2] Anastasiia Krushynska, Metamaterials in flexible wings, Society of Acoustics (2024)
[3] Gunnar Carlsson, Topology and data, Bull. Amer. Math. Soc. 46, 255-308 (2009)
[4] Chi Seng Pun, Si Xian Lee, Kelin Xia, Persistent-homology-based machine learning: a survey and a comparative study, Artif. Intell. Rev. 55, 5169-5213 (2022)
[5] Abel H. G. Milor, Marco Salvalaglio, Inferring hyperuniformity from local structures via persistent homology, arXiv preprint (arXiv:2409.08899) (2024)
[6] Salvatore Torquato, Hyperuniform states of matter, Physics Reports, volume 745, 1-95 (2018)
In this talk we introduce some urban transport networks that can be analyzed using multilayer complex networks. For these, we present several multilayer centrality measures and show how they can be computed efficiently.
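As a simple sketch of one such measure (a hypothetical choice: eigenvector centrality of the supra-adjacency matrix; the specific measures and fast algorithms of the talk are not reproduced here):

```python
import numpy as np

def supra_adjacency(layers, omega=1.0):
    """Block matrix: intra-layer adjacencies on the diagonal, inter-layer coupling omega off it."""
    L, n = len(layers), layers[0].shape[0]
    S = np.zeros((L * n, L * n))
    for a in range(L):
        S[a*n:(a+1)*n, a*n:(a+1)*n] = layers[a]
        for b in range(L):
            if a != b:
                S[a*n:(a+1)*n, b*n:(b+1)*n] = omega * np.eye(n)
    return S

def multilayer_eigenvector_centrality(layers, omega=1.0):
    S = supra_adjacency(layers, omega)
    w, V = np.linalg.eigh(S)                # S is symmetric for undirected layers
    return np.abs(V[:, np.argmax(w)])       # leading eigenvector as node-layer centralities
```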
This talk introduces two types of system identification methods based on sparse identification and neural network frameworks: a fast SINDy method based on model selection and dictionary learning, and an ODE-RC algorithm combining a symplectic integrator with reservoir computing. Identification and prediction results for several types of dynamical systems verify the advantages of these methods in terms of interpretability, training speed, and prediction accuracy.
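For readers unfamiliar with the SINDy building block, a minimal sketch of sequentially thresholded least squares on a fixed polynomial dictionary (the fast model-selection and dictionary-learning variants of the talk, as well as the ODE-RC part, are not shown):

```python
import numpy as np

def sindy(X, dX, threshold=0.1, iters=10):
    """X: (m, n) states, dX: (m, n) time derivatives; returns sparse dictionary coefficients."""
    # simple dictionary: [1, x_i, x_i * x_j]
    cols = [np.ones((X.shape[0], 1)), X]
    for i in range(X.shape[1]):
        for j in range(i, X.shape[1]):
            cols.append((X[:, i] * X[:, j])[:, None])
    Theta = np.hstack(cols)
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(iters):                       # sequentially thresholded least squares
        Xi[np.abs(Xi) < threshold] = 0.0
        for k in range(dX.shape[1]):
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
    return Xi
```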
Physics-based model order reduction presents a great potential for digital twins, providing highly accurate yet efficient approximations to solutions of parametrized partial differential equations. However, embedding such physics-based reduced order models in digital twin frameworks presents significant challenges. In this talk, we consider some of these challenges (as well as opportunities), particularly in the context of multi-scale materials.
In the first part, we consider model order reduction for two-scale materials simulations. Two-scale simulations are often employed to analyze the effect of the microstructure on a component’s macroscopic properties. Understanding these structure–property relations is essential in the optimal design of materials, or to enable (for example) estimation of microstructure parameters through macroscale measurements. Since these two-scale simulations are typically far too expensive computationally to use in digital twins, we explore the use of parametric MOR. To this end, we briefly present some recent work on reduced basis methods to construct inexpensive surrogates for parametrized microscale problems, and also highlight difficulties for MOR presented by nonlinear constitutive relations in (multi-scale) problems in mechanics.
In the second part, we discuss the embedding of reduced order models in inverse and data assimilation problems, which are essential in digital twins. We will delve into strategies to mitigate the approximation error, to reduce both the offline and online computational cost, and to choose the most informative data. Finally, in conclusion, we offer insights into how we can further enhance and exploit the power of model order reduction in digital twins.
Model Predictive Control (MPC) and Reinforcement Learning (RL) are two of the most prominent methods for computing control laws based on optimization. In both cases, the resulting controllers approximate infinite-horizon optimal controllers, where the objective of the optimization may range from stabilization of a set-point to energy efficiency to yield maximization. However, for both methods the computational effort may make their application infeasible for large-scale or complex problems. In this talk we explain the basic functioning of both methods and then present situations in which the methods provably work well, by identifying beneficial structures of the solutions of optimal control problems. In the case of MPC we focus on the so-called turnpike property of optimal trajectories, while for Deep RL (i.e., RL with deep neural networks as approximators) we look at the compositional structure of optimal value functions. Examples and use cases from academia and from industry illustrate the theoretical findings.
Lunch will be served at the conference venue in Rooms 51 and 53; a pre-purchased voucher is required.
In many modern computer application problems, the classification of image data plays an important role. Among many different supervised machine learning models, convolutional neural networks (CNNs) and linear discriminant analysis (LDA) as well as sophisticated variants thereof are popular techniques. Divide-and-conquer algorithms in combination with machine learning methods have been proven to be an efficient approach for image classification problems, yielding both higher accuracy and good parallelization properties.
In this talk, two different decomposed CNN models are experimentally compared for different image classification problems. Both models are loosely inspired by domain decomposition methods and are, in addition, combined with a transfer learning strategy. The resulting models show improved classification accuracies compared to the corresponding composed global CNN model without transfer learning and also help to speed up the training process. Moreover, a novel decomposed LDA strategy is discussed which also relies on a localization approach and which is combined with a small neural network model. In comparison with a global LDA applied to the entire input data, the presented decomposed LDA approach shows increased classification accuracies for the considered test problems.
We present and discuss a parallel training strategy for the training of neural networks. Our strategy is based on a domain decomposition-like approach, which is combined with trust region as a convergence control strategy. The resulting additive non-linear preconditioner APTS (Additively Preconditioned Trust-region Strategy) provides a general framework for the parallel training of neural networks, which includes the decomposition of the network's parameters (model-parallelism) or of the samples of the training data set (data parallelism). The combination with a Trust-Region strategy ensures global convergence and eliminates the need for extensive hyper-parameter tuning. We furthermore remark on SAPTS, a stochastic variant of APTS.
We compare (S)APTS in terms of convergence behavior and hyper-parameter sensitivity to traditional training methods like Stochastic Gradient Descent (SGD), ADAptive Moment estimation (Adam), and Limited-memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) algorithm. Our numerical experiments, which are conducted on benchmark problems from the field of image and text classification, showcase the capabilities, strengths, and limitations of APTS in training neural networks. The experiments demonstrate that APTS applied to the parameter space (model parallelism), especially with an increased number of subdomains, achieves comparable or superior generalization capabilities and faster convergence compared to traditional optimizers while offering inherent parallelism for the training procedure. APTS applied to the data space, however, shows competitive generalization capabilities with a small number of "data-subdomains" only, as its performance does not scale optimally with an increasing number of subdomains.
Neural network architectures based on overlapping domain decomposition approaches have emerged as a powerful framework for enhancing the efficiency, scalability, and robustness of physics-informed neural networks (PINNs). In this work, we apply this approach to randomized neural networks (RaNNs) for solving partial differential equations (PDEs). A separate neural network is independently initialized on each subdomain using a uniform distribution, and the networks are combined via a partition of unity. Unlike classical PINNs, only the final layers of these networks are trained, which has a significant impact on the resulting optimization problem.
The optimization problem, for linear PDEs, reduces to a least-squares formulation, which can be solved using direct solvers for small systems or iterative solvers for larger ones. However, the least-squares problems are generally ill-conditioned, and iterative solvers converge slowly without appropriate preconditioning. To address this, we first apply singular value decomposition to remove components with low singular values, improving the system’s conditioning. Additionally, we employ a second type of overlapping domain decomposition in the form of additive and restricted additive Schwarz preconditioners for the least-squares problem, further enhancing solver efficiency.
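A minimal sketch of the first remedy mentioned above, i.e. discarding small singular values before solving the least-squares system (the Schwarz-type preconditioning is not shown):

```python
import numpy as np

def truncated_svd_lstsq(A, b, rel_tol=1e-8):
    """Solve min ||A x - b|| after discarding singular values below rel_tol * s_max."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# usage: x = truncated_svd_lstsq(A, b) regularizes the ill-conditioned collocation system
```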
Numerical experiments demonstrate that this dual use of domain decomposition significantly reduces computational time while maintaining accuracy, particularly for multi-scale and time-dependent problems.
Over the last years, Transformer-based models have achieved cutting-edge results in areas such as natural language processing, computer vision, multimodality, and robotics due to the parallelization of their attention mechanism and its direct access to distant tokens in the sentence. Nonetheless, such parallelization can only be carried out along the sentence length, not the number of layers (i.e., depth). Despite yielding an amazing performance, the rising scaling of Transformers' depth and dimension entails a high computational cost. By formulating the forward and backward propagations of the Transformer as ODEs, we explore parallel-in-time and multilevel methods to mitigate the computational cost caused by a large depth. We present numerical experiments from the field of large-language modeling that demonstrate the effectiveness of the proposed training strategies.
In the field of experimental mechanics, strain is a very frequently measured variable. Furthermore, the experimental determination of inhomogeneous strains is often necessary. This requires powerful field measurement methods which, by their nature, provide a high information density. One measurement method that is becoming increasingly important in this context is digital image correlation (DIC), which is highly developed with increasingly powerful hardware and software. For small deformations, however, this method has system-related limitations due to its limited sensitivity. In contrast, for large deformations, this method is very well suited.
In this contribution, a method is presented that enables an extension of the measuring range of DIC to smaller deformations. For this purpose, a B-spline approximation of the displacements (DIC raw data) with optimized smoothing is carried out. To assess the method, it was analyzed regarding precision and accuracy in a four-point bending test with an aluminum beam in the elastic strain range of the metal. A 3D DIC strain analysis was performed at the front side between the load application sections. The very precise and accurate strain measurement using electrical strain gages, which were applied to the top and bottom of the bending beam between the load application sections, served as a reference measurement method. The results show good agreement of the strains, which proves the high precision and accuracy of the strain determination with the approximation-based method. In addition, the results are compared with strain distributions obtained using the standard evaluation method.
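To illustrate the basic idea of a smoothing-spline approximation of noisy DIC displacements with strains obtained by differentiation (a one-dimensional toy version on synthetic data; the actual implementation, its extension to surfaces, and the optimized choice of the smoothing parameter are not reproduced here):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 100.0, 200)                 # position along the beam [mm]
u = 1e-3 * x + 5e-4 * np.random.randn(x.size)    # noisy displacement [mm] (synthetic)

# cubic smoothing spline; the smoothing factor s is chosen here from the assumed noise level
spl = UnivariateSpline(x, u, k=3, s=x.size * (5e-4) ** 2)
strain = spl.derivative()(x)                     # eps_xx = du/dx, close to 1e-3 in this toy case
```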
Furthermore, an application example for the method, which has been extended with regard to circular boundaries, is shown. Therein, the deformation analysis of a borehole environment at a loaded plastic specimen based on 2D DIC data is presented, which is potentially to be used in the context of residual stress analyses. The strain fields obtained in this way show high precision likewise.
Damage development in power engineering steel structures requires instant monitoring to maintain their properties, ensure the safety of working components, and estimate their service life. One should highlight that the operational loads and simultaneous microstructural changes occurring due to high-temperature exposure accelerate the development of damage dynamics significantly. Thus, it is of the highest importance to maintain the safe state of power engineering steel pipes subjected to continuous operations under high pressure and temperature to further minimize the operating costs of industrial structures. Therefore, the first main goal of this research is to assess and describe the effect of 280 000 h of operation on the microstructure, strength properties, and dynamics of fatigue damage development of 10CrMo9-10 power engineering steel (10H2M). The quantitative assessment of the degradation state in 10H2M steel was described as a function of the fatigue damage measure, ϕ, and the fatigue damage parameter D.
The second goal of this research was to compare the effectiveness of two different optical measurement techniques (Digital Image Correlation – DIC and Electronic Speckle Pattern Interferometry – ESPI) during fatigue damage development monitoring in X10CrMoVNb9-1 (P91) power engineering steel for pipes. The specimens machined from the as-received pipe were subjected to fatigue loadings and monitored simultaneously using DIC and ESPI techniques. It was found that DIC makes it possible to monitor the fatigue behaviour and to accurately indicate the area of potential failure within the early stage of fatigue damage development. In contrast, the application of the ESPI method was less successful. It also indicated a location of the potential damage area, however significantly later than DIC. The main limitation of the ESPI technique results from its high sensitivity, which causes many difficulties when working with servo-controlled hydraulic testing machines. Such machines generate high-frequency vibrations during experiments due to the oil flow supplying the machine loading systems. These vibrations significantly disturb the operation of the ESPI cameras and greatly narrow their measuring capabilities.
The paper presents experimental methods for determining the fatigue characteristics of various materials. Basic fatigue characteristics are obtained under tension-compression loads. However, few structures operate under this type of loading. More often we are dealing with some form of bending. In this case, we are also dealing with variable normal stresses. However, these characteristics are not the same. Fatigue tests should reflect the performance of the structure. Therefore, such tests can be implemented as two-point bending (buckling-like), bending with restraint, and rotational bending. This work presents a new three-point bending station. The station's operation was verified based on fatigue tests of rubber belts used on conveyors. An analogous station can be proposed for four-point bending.
This paper was funded by the National Science Centre, Poland, under the OPUS call in the Weave programme No 2022/47/I/ST8/00003.
In this research talk, we delve into the intricacies of selected optical measurement techniques applied to experimental mechanics of complex and big (in relation to the actual field-of-view) objects. The focus is on two major approaches: multimodal measurements and synthetic aperture. We explore the challenges faced by two groups of experimental methods: (i) digital image correlation (or shearography) combined with structured light and spectral methods dedicated to scattering objects and (ii) digital holographic microscopy/ptychography (2D measurements and monitoring) or optical diffraction tomography (3D measurements and monitoring) also combined with spectral methods dedicated to weakly scattering phase-amplitude objects. These two groups have very different applications, namely the first group – big engineering or cultural heritage objects and the second – technical or biological transparent or quasi-transparent micro-objects; however, the metrological challenges (accuracy, reliability, and efficiency of the methods) are similar. Through a comprehensive analysis, we shed light on the advancements and practical implications of applying multimodal and synthetic aperture approaches in the measurements of full-field, mechanical features of investigated objects. We focus on the problems of the determination of a common coordinate system for data distributed in space, time, and modalities. Also, we discuss the need for standardised procedures and calibration bodies for these complex measurements.
In digital image correlation (DIC), different methods to determine the displacement fields on the surface of a specimen are proposed in the literature. Here, we assume that a commercial DIC system provides the three-dimensional coordinates of material points on the curved surface of a specimen, and one does not have any access to the pixel data. Based on this point cloud data, a point on the surface in the initial configuration can be obtained by interpolation, where we draw on radial basis functions (RBF) to obtain a continuous interpolation. The same can be carried out for each individual picture and, thus, for the motion. From this, in-plane strains, principal strains and principal directions can be determined (Lagrangian description). Pictures of an infrared thermographic (IRT) camera provide the temperature assigned to a spatial point (Eulerian description). To assign the temperature to a material point, both systems must be combined. A first approach makes use of a purely geometrical consideration requiring an object with specified points and the distance between the IRT camera and the object. Alternatively, a camera model can be considered. This is demonstrated for particular applications, namely purely theoretical verification examples and real specimens.
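A possible minimal version of the RBF interpolation step described above, using SciPy's RBFInterpolator on a synthetic point cloud (the real pipeline, including the curved 3D surface, strain evaluation, and the IRT camera model, is of course more involved):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# reference coordinates of material points on the surface (synthetic point cloud)
X_ref = np.random.rand(300, 2) * 10.0
# their coordinates in a deformed configuration (synthetic motion)
X_def = X_ref + np.column_stack([0.01 * X_ref[:, 0], -0.005 * X_ref[:, 1]])

# continuous interpolation of the motion x = phi(X) via radial basis functions
motion = RBFInterpolator(X_ref, X_def, kernel="thin_plate_spline")

# evaluate at arbitrary material points, e.g. points where the temperature is to be assigned
X_eval = np.array([[2.0, 3.0], [5.0, 7.5]])
x_eval = motion(X_eval)
```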
Accurate and efficient numerical methods for solving differential equations remain fundamental to modern computational engineering. Challenges arise, e.g., from costly vector field evaluations, high-dimensional states, multirate dynamics, long-term integration, real-time requirements in control, or ensemble forecasting.
This work explores neural network-based approximations for the integration errors inherent in standard numerical methods, as well as the development of more advanced integration schemes. By combining the strengths of traditional physics-based numerical methods with the universal approximation capabilities of neural networks, we seek to enhance both computational efficiency and the mitigation of errors in classical integration techniques. Concretely, we propose NN-enhanced Runge-Kutta schemes as well as structure-preserving enhanced symplectic integration methods.
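As a conceptual sketch of what an NN-enhanced Runge-Kutta step could look like (our own illustrative ansatz with an additive learned correction scaled like the local RK4 error; the actual schemes and their symplectic, structure-preserving counterparts from the talk are not reproduced here):

```python
import torch

class CorrectedRK4(torch.nn.Module):
    """Classical RK4 step plus a small learned correction of its local error."""
    def __init__(self, f, dim, hidden=32):
        super().__init__()
        self.f = f                                   # known (possibly coarse) vector field
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim))

    def forward(self, x, h):
        k1 = self.f(x)
        k2 = self.f(x + 0.5 * h * k1)
        k3 = self.f(x + 0.5 * h * k2)
        k4 = self.f(x + h * k3)
        rk4 = x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        h_col = torch.full_like(x[..., :1], h)
        return rk4 + h ** 5 * self.net(torch.cat([x, h_col], dim=-1))  # O(h^5) correction ansatz

# the correction network would be trained so that corrected steps match a fine reference solution
```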
The approach is applied to the simulation of wind turbines based on open-source automatically generated wind turbine models with parameters derived from the open-source reference simulation framework OpenFAST. These multi-body models generated using CADynTurb contain modal representations of the elastic tower and blades and modular aerodynamics models. Large simulation tasks occur, e.g., in fatigue analysis or when predicting the remaining useful lifetime for which ranges of initial conditions, system parameters, and exogenous inputs have to be considered.
Extended dynamic mode decomposition (EDMD), embedded in the Koopman framework, is a widely-applied technique for prediction and control of dynamical control systems. In this talk, we discuss recent uniform error bounds for kernel-based EDMD. Leveraging the interpolation property of regression problems in Reproducing Kernel Hilbert Spaces, we deduce uniform error bounds for kernel-based EDMD. A particular feature is an explicit dependence of the error bound on the distance to the data set. We show that this property is a crucial ingredient for data-driven stability analysis, feedback control and predictive control with guarantees.
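For readers less familiar with EDMD, a bare-bones (non-kernelized) variant computed from snapshot pairs; the kernel-based version and the error bounds discussed in the talk build on the same regression structure:

```python
import numpy as np

def edmd(X, Y, dictionary):
    """X, Y: (m, n) snapshot pairs with Y[i] = flow(X[i]); dictionary maps (m, n) -> (m, N)."""
    PX, PY = dictionary(X), dictionary(Y)
    K = np.linalg.lstsq(PX, PY, rcond=None)[0]   # Koopman matrix approximation: PX @ K ≈ PY
    return K

# example dictionary: monomials up to degree 2 for a 2-d state
def dictionary(Z):
    x, y = Z[:, 0], Z[:, 1]
    return np.column_stack([np.ones(len(Z)), x, y, x * x, x * y, y * y])
```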
Data-driven virtual sensing methods enable cost efficient monitoring of critical system components in predictive maintenance applications. The parameterization of these methods mainly requires the collection of representative usage datasets for the system of interest. In most applications, very long times to failure are a significant challenge for the validation of these approaches. This contribution therefore provides an experimental example dataset, where notched specimens are subjected to individual service loads until failure, using a fatigue test bench.
The dataset is based on acceleration and strain measurements from an instrumented e-bike [1], which are randomly resampled and transformed into new service loads. The experimental setup consists of two servo hydraulic cylinders, which control both force and displacement of the steel specimens. These cylinders are also equipped with acceleration sensors. Separate service loads ensure independence of the parameterization and validation datasets, and a component SN-curve is obtained for subsequent fatigue analysis.
Similar to predictive maintenance applications in vehicle monitoring, the virtual sensing task is to predict the fatigue damage accumulation in the specimens from the acceleration measurements. This is achieved using two different strategies. In the first method [2], the force signal of the specimens is directly predicted from the acceleration measurements. Following the nominal stress concept, fatigue damage sums are computed using cycle counting and the elementary Miner rule (a minimal sketch of this step is given after the references below). The second virtual sensing approach [3] instead characterizes the acceleration data using the wavelet scattering transform and principal component analysis, which is subsequently used for a direct fatigue damage regression. Both approaches are evaluated and compared using an independent validation dataset.
References:
[1] L. Heindel, P. Hantschke and M. Kästner. "eBike measurements for fatigue monitoring and maneuver identification tasks". OpARA, TU Dresden, 2022.
[2] L. Heindel, P. Hantschke and M. Kästner. "Assessment of hybrid machine learning models for non-linear system identification of fatigue test rigs". Franklin Open, 2024.
[3] L. Heindel, P. Hantschke and M. Kästner. "Fatigue monitoring and maneuver identification for vehicle fleets using a virtual sensing approach". International Journal of Fatigue, 2023.
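A small illustration of the cycle-counting and Miner-rule step mentioned in the first virtual sensing strategy above (the SN-curve parameters sd, nd and k are hypothetical placeholders, and the rainflow package from PyPI is assumed to be available for the cycle counting):

```python
import rainflow   # PyPI package providing rainflow cycle counting

def miner_damage(stress_history, sd=200.0, nd=2e6, k=5.0):
    """Elementary Miner rule with a Basquin-type SN curve N(s_a) = nd * (sd / s_a)**k."""
    damage = 0.0
    for rng, count in rainflow.count_cycles(stress_history):
        s_a = 0.5 * rng                       # stress amplitude from the cycle range
        damage += count / (nd * (sd / s_a) ** k)
    return damage                             # failure is predicted when the damage sum reaches 1
```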
In maritime engineering ocean waves often play an important role. The sea state generates dynamic loads on structures and floating bodies from which motion or stresses result. Often the ocean wave state is irregular and may be understood as chaotic or turbulent. As a consequence of the corresponding high-dimensional nature of the wave states under consideration, numerical simulation of the evolution of the sea state forms a considerable challenge. Moreover, identifying initial and boundary conditions, and reconciling them with the wave evolution, forms another task. In the present talk different approaches to the problem of wave identification and wave simulation will be presented. The approaches range from ideas anchored more in traditional numerical simulation, to ideas employing machine learning, including physics informed neural networks for envelope equations and potential flow. It is demonstrated, how the different approaches deal with differing efficiency and accuracy objectives, and how the turbulent ocean wave dynamics, even including rare rogue wave events, can be identified and mapped out.
The prediction of forced vibrations in nonlinear systems is typically based on solving differential equations derived from 'first principles'. Such modeling requires a high level of expertise, experience, and prior knowledge about the system and its relevant parameters. In cases where such in-depth understanding is not available but observations of the system's behavior are accessible, data-driven approaches offer a promising alternative, as they demand less user expertise and system-specific information.
This contribution introduces a data-driven approach employing stabilized autoregressive neural networks (s-ARNNs) to model nonlinear transfer behavior. The performance of these s-ARNNs is compared to that of long short-term memory (LSTM) and gated recurrent unit (GRU) architectures, as well as classical linear methods in both frequency and time domains. Particular attention is given to addressing stability issues inherent to autoregressive architectures. Additionally, it is demonstrated that performance can be significantly enhanced through a hybrid architecture, which incorporates frequency response functions (FRFs) as information about the linearized transfer behavior to enrich the dataset.
For demonstration and comparison, the proposed s-ARNNs, alongside the alternative approaches, are applied to a forced Duffing oscillator – as a well-documented reference example – and to data from a real-world application from automotive engineering. Across all time series in both examples, the s-ARNN approach consistently exhibits superior accuracy and an enhanced ability to handle nonlinear effects compared to the other methods. Moreover, the implemented stabilization technique ensures robustness, making the approach highly applicable beyond academic scenarios.
In the final step, aleatoric and epistemic uncertainties are estimated using a mean-variance approach combined with ensembling the outputs of the s-ARNNs. Validation of this method confirms that combining s-ARNN with FRF achieves the highest accuracy and reliability in both prediction performance and uncertainty quantification.
Soft robots have emerged as a subclass of continuum robots as a young research field in robotics. These soft robots are made from soft materials and can undergo large continuous deformations. Our study considers soft pneumatic actuators (SPA) with a slender cylindrical structure. The continuous deformations of soft robots are advantageous, however, the description of this behavior is highly non-linear and presents a vast possible design space. The design is typically driven by trial-and-error approaches, accompanied by simulations and experimental verification, furthermore, it is generally very task-specific. Machine Learning offers a promising way to enrich the design process of soft robots. It can restrict the initial design space and guide practitioners to good designs. In this talk, we show how Machine Learning can be used to support the inverse design process of slender soft robots. In our case, a workspace of the soft robot end-effector can be described at different pneumatic pressures. This workspace serves as the input for design optimization to obtain an optimal cross-section geometry for a slender soft robot. As this cross-section could be highly detailed in design, it is also necessary to favor manufacturable designs during the optimization stage.
Designing mechanisms and machines that execute periodic motions typically requires considerations of speed and energy efficiency. This is a challenging task due to the deep interdependence created by the mechanical dynamics between the hardware (morphology) and the control (motion). The co-design of such mechatronic systems involves the simultaneous and integrated development of both the physical components (such as actuators and mechanical structures) and the control algorithms that govern the system's behavior.
To address this in a design assistant, we formulate an extended trajectory optimization problem to simultaneously identify optimal designs that generate optimal periodic motions. Such problems are inherently non-convex due to nonlinear dynamics and implicit boundary constraints coupling initial and final conditions.
To cope with this challenge, we explore the use of the Koopman operator framework to tackle trajectory optimization problems with a partially convex approach. While the Koopman operator has been successfully applied in model predictive control, handling mixed boundary constraints within this framework remains an open challenge. This work makes two key contributions: (1) we demonstrate why full convexification of the problem is fundamentally infeasible within the Koopman operator framework, and (2) we propose a bilevel optimization approach that convexifies the high-dimensional lower-level problem, resulting in a simplified, low-dimensional upper-level problem. This decomposition not only facilitates efficient computation but also supports global optimization strategies.
The bi-level structure is particularly suited for co-design, as it allows the morphology design (upper-level) and motion planning (lower-level) to be addressed independently yet collaboratively. To validate the method, we present case studies on two systems: the mathematical pendulum and the compass-gait walker, showcasing the approach's effectiveness in optimizing periodic motion.
Tracking control of multibody systems typically requires a deep understanding of system kinematics and dynamics. For closed-loop mechanisms, however, this becomes challenging, which is why this study introduces a novel method that utilizes artificial intelligence (AI) to simplify the process, making it possible to achieve effective trajectory control with minimal technical expertise [1].
A key component is to use surrogates for inverse models based on artificial neural networks, which will be purely trained from data being generated, e.g., by classical simulation of a nominal model of the lambda robot. With these nominal surrogates for inverse kinematics and inverse dynamics, a well-performing linear quadratic regulator (LQR) controller based on linear feedback approximation is designed to track the desired trajectory.
This works perfectly well if the plant is consistent with the nominal model used for data generation. The behavior of a real system with disturbed parameters, however, may deviate from ideal tracking. Such model uncertainties, as well as disturbances and external forces, can be taken into account by an additional feedback loop that adjusts the control signal w.r.t. discrepancies between the actual tracking path of the system and the desired track.
Different parametric uncertainties are considered to examine the robustness of the proposed concept. First results indicate excellent tracking performance within an adequate range of error, and validate the efficacy of this surrogate-based control approach, highlighting its adaptability and reliability for complex robotic systems without insight into the details of their equations of motion.
[1] Hajipour, S.; Bestle, D. (2024) “Data-based Design of a Tracking Controller for Planar Closed-loop Mechanisms”. In: Proc. of NAFEMS Nordic Conference on AI and ML in Simulation Driven Design, Lund.
After decades of research, four-bar linkages are still an active research topic in engineering, as they can convert a rotary motion into a linear or other, more complex motion if carefully designed. One of the central theorems relating to four-bar linkages is Grashof's law. Based on the four given link lengths, this law dictates the resulting mechanism type and its behaviour, e.g., crank-rocker, double-crank or double-rocker mechanism. In particular, Grashof's law must be fulfilled to obtain a functioning mechanism.
For the rapid development of task-specific mechanisms, the inverse problem must be learned, i.e.: Given a mechanism path, what are appropriate mechanism parameters to produce a similar motion? In recent years, the application of data-driven methods to the design and dimensional synthesis task of four-bar linkages has yielded impressive results. However, when predicting the link lengths of a four-bar linkage, a neural network can also propose infeasible mechanism configurations, especially when the given input is far from the patterns of the training distribution. This raises interest in neural networks that realize prescribed constraints, such that the outputs always guarantee the fulfilment of certain physical conditions, leading to more robust predictions and a better network performance.
We present a novel neural network architecture that incorporates Grashof's law directly into the final layers of the neural network, ensuring that the law is always satisfied for any predicted link lengths. The network thus guarantees the validity of the proposed designs. To evaluate the robustness and network accuracy in the presence of outliers, we compare our proposed method against a naive feedforward MLP approach for paths from a test set that matches the training distribution, as well as with hand-drawn paths for which there may not exist a mechanism that can reproduce the motion exactly. Furthermore, we investigate the impact of different dataset sizes on the performance of the presented architecture.
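One conceivable way to hard-wire Grashof's condition into the final layer is sketched below (our own illustrative parametrization, not necessarily the architecture of the talk): predict four positive link lengths and then shrink the longest one just enough that shortest plus longest does not exceed the sum of the two remaining lengths.

```python
import torch

class GrashofLayer(torch.nn.Module):
    """Map raw network outputs to four link lengths that satisfy Grashof's condition."""
    def forward(self, z):                                      # z: (batch, 4) raw outputs
        lengths = torch.nn.functional.softplus(z) + 1e-3       # strictly positive lengths
        vals, idx = torch.sort(lengths, dim=-1)                # a1 <= a2 <= a3 <= a4
        a1, a2, a3, a4 = vals.unbind(-1)
        a4 = torch.minimum(a4, a2 + a3 - a1)                   # enforce a1 + a4 <= a2 + a3
        clamped = torch.stack([a1, a2, a3, a4], dim=-1)
        out = torch.empty_like(clamped)
        out.scatter_(-1, idx, clamped)                         # restore the original link order
        return out
```

Since the shrunken longest link remains at least as long as the second-longest one, the sorted order is preserved and the returned lengths always fulfil Grashof's condition.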
The growing use of quadrotors in fields like delivery services, infrastructure inspection, industrial agriculture, and surveillance has increased the demand for their autonomous capabilities, including path-following control. Existing methods, such as backstepping, feedback linearization, and learning-based techniques like deep learning, have proven effective. The present work addresses the path-following problem for quadrotors using model predictive path-following control, an approach that has so far mostly been applied to planar and industrial robots.
The proposed model predictive path-following controller extends prior work on trajectory tracking model predictive control. Therein, a timing law is introduced to render the path parameter time-dependent. This timing law includes a virtual input to steer the path parameter and its associated dynamics within the optimal control problem. The deviation of the predicted output compared to the predicted evolution of the path is then penalized through a quadratic cost function, similar to established results from model predictive control. Stability is ensured through stabilizing terminal conditions designed using the quasi-infinite horizon approach. The key advantage of this control scheme over previous approaches is its explicit inclusion of state and input constraints while using the quadrotor’s full potential to track the desired path. Unlike conventional methods, which often rely on complex pre-processing algorithms to transform a path into a trajectory under such constraints, this approach integrates trajectory generation directly into the optimal control problem.
The control scheme is validated on three geometric paths, a spiral, a lemniscate, and a straight line. Simulations are first conducted in MATLAB/Simulink and then in greater detail using CrazySim, a Gazebo-based simulation framework that incorporates firmware and communication infrastructure. Additionally, the numerical properties of the optimal control problem are analysed to improve computation time by leveraging the inherent structure of the resulting nonlinear program.
It is also investigated how the control scheme can be employed for real-world hardware, using the CrazyFlie platform from Bitcraze, to validate the simulation results.
Concentric tube continuum robots (CTCRs) make it possible to consider minimally invasive deep brain surgeries which could not be realized with straight cannulas and manual surgery planning, as is the current state of the art. We present a design assistant for finding personalized tube configurations (e.g. thicknesses and precurvatures of the nested cannulae) for the CTCR and an automated path planning. The core of the design assistant is the modelling of a high-dimensional nonlinear constrained optimization problem. It is derived from the brain topology obtained from labeling MRI scans, further medical constraints and objectives, such as minimally invasive interventions (i.e. shortest tube lengths or minimal penetrations), as well as mechanical constraints on the cannulae design, e.g. given by material parameters, and a cannulae model. The optimization problem is then solved by gradient-based solution methods in combination with evolutionary algorithms, such that an optimal design together with the optimal surgery plan is returned.
In this talk, we explore the representation of control Lyapunov functions using neural networks. First, we demonstrate that, under suitable assumptions regarding the decomposition of a given control system into subsystems, a smooth control Lyapunov function with a separable structure exists. This separable structure enables its representation via neural networks requiring a number of neurons that increases only polynomially with the state dimension, thereby avoiding the curse of dimensionality. Next, we address the practically relevant scenario where a smooth control Lyapunov function does not exist. We establish conditions under which nonsmooth control Lyapunov functions exist that can be represented using neural networks with a suitable number of ReLU layers. These theoretical results are supported by illustrative numerical test cases.
The choice of the step size (or learning rate) in stochastic optimization algorithms, such as stochastic gradient descent, plays a central role in the training of machine learning models.
Both theoretical investigations and empirical analyses emphasize that an optimal step size not only requires taking into account the nonlinearity of the underlying problem, but also relies on accounting for the local variance within the search directions. In this presentation, we introduce a novel method capable of estimating these fundamental quantities and subsequently using these estimates to derive an adaptive step size for stochastic gradient descent. Our proposed approach leads to a nearly hyperparameter-free variant of stochastic gradient descent with provable convergence guarantees.
We provide a convergence analysis for the ideal step sizes, as well as for the approximated step sizes.
In addition, we perform numerical experiments focusing on classical image classification tasks. Remarkably, our algorithm exhibits truly problem-adaptive behavior when applied to these problems, which lie beyond the scope of the theoretical assumptions. Moreover, our framework facilitates the potential incorporation of a preconditioner, thereby enabling adaptive step sizes for stochastic second-order optimization methods.
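Purely as a toy illustration of the general idea of variance-aware step sizes (this is not the estimator or the step size rule of the talk, just a hypothetical signal-to-noise heuristic on per-sample gradients):

```python
import numpy as np

def variance_adapted_step(grads, base_lr=0.1):
    """Toy rule: shrink the step when the per-sample gradients of a mini-batch disagree.

    grads: (batch, dim) per-sample gradients.
    """
    g_mean = grads.mean(axis=0)
    noise = np.mean(np.sum((grads - g_mean) ** 2, axis=1)) / grads.shape[0]
    signal = np.sum(g_mean ** 2)
    lr = base_lr * signal / (signal + noise + 1e-12)
    return lr, g_mean

# one SGD update with the adapted step (illustrative):
# lr, g = variance_adapted_step(per_sample_grads); w -= lr * g
```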
Graph Neural Networks (GNNs) have become powerful tools for modeling complex relationships in graph-structured data across various domains. The success of GNNs comes from their graph convolution process, which allows information to propagate through the graph structure, so that each node can aggregate information from its neighbours. This process uses graph information (the connections between nodes) and node features (attributes specific to each node) to create representations that can be used for various tasks. In this presentation, we investigate how much of the information provided by the graph and the node features contribute to the prediction of GNNs in a semi-supervised learning setting. We derive the exact generalization error for linear GNNs under a theoretical framework, where node features and the graph convolution are partial spectral observations of the underlying data. We investigate the generalization error to evaluate the learning capabilities of Graph Convolutional Networks (GCNs), a specific type of GNN that uses graph convolution operations. A key insight from our analysis is that GCNs fail to utilize graph and feature information when graph and feature information are not aligned. We conclude with ongoing work on extending our analysis to other state-of-the-art GNNs and graph attention mechanisms. Our goal is to develop an architecture that better exploits graph and feature information.
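For reference, one standard graph convolution step of a GCN in its common normalized form (a generic sketch, not the specific theoretical framework of the talk):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: H' = relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# A: (n, n) adjacency, H: (n, f) node features, W: (f, f') trainable weights
```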
In this talk, I will discuss various ways to introduce spatial adaptivity into filter-based regularisation functionals. With this adaptivity, we are able to cancel out the filter responses to structure. Hence, we can interpret it as boosting the initial regulariser based on the data. A direct question is whether we can repeat this process to get even better solutions. If we try this naively, the answer is sadly no in many cases. However, we can instead train the model in such a way that this procedure works, which opens up many interesting relations with other image reconstruction approaches. Our numerical results are on par with other approaches that rely on spatial adaptivity.
We deal with the task of sampling from an unnormalized Boltzmann density ρ_D by learning a Boltzmann curve given by energies f_t starting in a simple density ρ_Z. First, we examine conditions under which Fisher-Rao flows are absolutely continuous in the Wasserstein geometry. Second, we address specific interpolations f_t and the learning of the related density/velocity pairs (ρ_t, v_t). It was numerically observed that the linear interpolation, which requires only a parametrization of the velocity field v_t, suffers from a "teleportation-of-mass" issue. Using tools from the Wasserstein geometry, we give an analytical example, where we can precisely measure the explosion of the velocity field. Inspired by Máté and Fleuret, who parametrize both f_t and v_t, we propose an interpolation which parametrizes only f_t and fixes an appropriate v_t. This corresponds to the Wasserstein gradient flow of the Kullback-Leibler divergence related to Langevin dynamics. We demonstrate by numerical examples that our model provides a well-behaved flow field which successfully solves the above sampling task.
Gradient descent methods are a popular choice for minimizing the approximation error of different neural network models. Understanding their trajectories can be used to study implicit bias or the existence of spurious minima in the optimization process. In this work we study the polynomials defining algebraic varieties that contain the trajectories of the Gradient flow for deep linear neural networks. These polynomials are invariants of the gradient flow. We use combinatorial methods to compute the number of independent invariants that are satisfied by all flows and differentiable loss functions. This computation relies on an identification of the neural networks with quiver representations, and becomes tractable by the introduction of a double Poisson bracket related to the quadrupling of the underlying quiver.
We consider the optimization of the topology and geometry of an elastic structure subjected to a set of boundary loads. Specifically, we aim to minimize the energy, which is the sum of the material volume and the structure's compliance (defined as the work done by the load). This theory has been extensively developed by many authors, such as Allaire, Murat, and Kohn, in the case of a single load applied to the boundary of the geometry. However, only a few non-numerical results have been obtained for structures that are optimal with respect to a distribution of loads.
In this talk, we aim to characterize the optimal structures for some simple sets of loads applied to the boundary. We observe how the choice of a suitable reference system connects the symmetry of the boundary data to certain optimality conditions for the elasticity tensor of our design. This approach allows us to compute a lower bound for the energy and provides a characterization of the minimizers within the set of laminate designs.
The computations suggest a bifurcation in the solutions. For example in the particular case of two uniaxial loads: when the two loads have nearly the same direction, we observe multiple optimal configurations in the set of rank-3 laminates, in contrast to the unique optimal configuration in the set of rank-2 laminates when the two loads are almost perpendicular.
All computations are supported by numerical results obtained by optimizing the energy of the system in MATLAB.
Unlike their crystalline counterparts, glasses have a complex structure that lacks any long-range order, resulting in the system possessing a large number of metastable states. The system transitions between these metastable states, giving rise to a complex mechanical response when subjected to mechanical deformation. These transitions manifest as localized atomic rearrangements also known as plastic events. It has been postulated that certain regions that are prone to rearrangements can be identified in the stress-free configuration. Efforts to predict these regions have employed a wide range of methods, from computationally expensive local mechanical simulations to data-intensive machine learning techniques that require large training datasets. In contrast, we propose the use of the fabric tensor as a simple, geometry-based predictor for soft spots. The fabric tensor relies solely on atomic positions to characterize bond directionality within the system. We demonstrate a strong correlation between certain features of the fabric tensor and soft spots in two-dimensional silica samples generated using the Monte Carlo bond-switching algorithm and subjected to pure shear under athermal quasistatic conditions. These results show that a purely geometrical measure can effectively predict soft spots in disordered solids independent of the underlying potential energy landscape.
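For concreteness, the second-order fabric tensor can be assembled directly from bond unit vectors; a minimal sketch for a 2D configuration with a given bond list (neighbor detection and the per-atom, local version used for the soft-spot prediction are simplified away here):

```python
import numpy as np

def fabric_tensor(positions, bonds):
    """positions: (n, 2) atomic coordinates; bonds: list of (i, j) index pairs.

    Returns the 2x2 fabric tensor F = <n ⊗ n> averaged over all bonds.
    """
    F = np.zeros((2, 2))
    for i, j in bonds:
        n = positions[j] - positions[i]
        n /= np.linalg.norm(n)
        F += np.outer(n, n)
    return F / len(bonds)

# the deviatoric part and its norm give a simple local anisotropy measure:
# dev = F - 0.5 * np.trace(F) * np.eye(2); anisotropy = np.linalg.norm(dev)
```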
We study the behaviour of a given volume of liquid confined between two rough solid plates. When the separation between the plates is small relative to the liquid volume, capillary bridges are expected to form, which minimise Gauss' capillary energy locally. We derive a Γ-expansion for the energy as the plate separation approaches zero, yielding a dimensionally reduced problem in terms of the wetted regions on the plates. At leading order, the energy is determined by the area of the wetted regions, while the second-order term is given by their perimeter, weighted by appropriate functions of the relative adhesion coefficients. This provides a framework for a successive phase-field approximation, which is employed in numerical simulations to study the evolution of the droplets under the normal and shear movement of the plates.
The talk is concerned with rate-independent dissipative processes and their approximations by means of balanced-viscosity solutions.
Within the first part of the talk, algorithms for approximating balanced-viscosity solutions by means of arc-length methods are discussed. The focus is on the modeling of brittle fracture, cf. [1]. It is shown that balanced-viscosity solutions indeed capture the underlying physics well.
The second part deals with dissipation potentials that depend not only on the rate but also on the state of the solution. An approximation scheme whose convergence proves the existence of balanced-viscosity solutions is presented. One possible application is given by models in non-associated plasticity frequently employed in soil mechanics. A variational structure thereof can be recovered based on an idea of Laborde, resulting in a state-dependent dissipation [2].
[1] F. Rörentrop, S. Boddin, D. Knees, J. Mosler. A time-adaptive finite element phase-field model suitable for rate-independent fracture mechanics. CMAME, vol. 431, 117240, 2024,
https://doi.org/10.1016/j.cma.2024.117240.
[2] J.-F. Babadjian, G. Francfort, M.G. Mora. Quasi-static Evolution in Nonassociative Plasticity: The cap Model. SIAM J. Math. Anal., vol. 44, pp. 245-292, 2012,
https://doi.org/10.1137/110823511.
We give a brief overview of our previous work analyzing models for dynamic phase-field fracture in viscoelastic materials under small strains [1]. Building on this, the initial steps towards an extension of the framework to finite strains are addressed. In this context, we adopt a first-order formulation for the momentum balance to describe the dynamics of the elastic solid. To model the material response, a polyconvex stored elastic energy density W = W(F, H, J) is employed, depending on the gradient of the deformation F, its cofactor H, and Jacobian J. For a fully discrete approximation scheme, existence, stability, and consistency conditions for the discrete solutions are established, and convergence to a measure-valued limit in the sense of [2] and [3] is discussed.
This work is financially supported by the German Research Foundation (DFG) in the priority programme “Variational Methods for Predicting Complex Phenomena in Engineering Structures and Materials” (SPP 2256) within the project “Nonlinear Fracture Dynamics: Modeling, Analysis, Approximation, and Applications”.
References:
[1] M. Thomas, S. Tornquist, C. Wieners, Approximating dynamic phase-field fracture in viscoelastic materials with a first-order formulation for velocity and stress. Preprint, Weierstraß-Institut für Angewandte Analysis und Stochastik, Leibniz-Institut im Forschungsverbund Berlin e.V., 2023.
[2] S. Demoulini, D. M. A. Stuart, A. E. Tzavaras, A variational approximation scheme for three-dimensional elastodynamics with polyconvex energy, Arch. Rational Mech. Anal. 157 (2001), no. 4, 325-344
[3] E. Feireisl, M. Lukáčová-Medvid'ová, H. Mizerová, K-convergence as a new tool in numerical analysis, IMA J. Num. Anal. 40 (2020), 2227–2255.
The numerical simulation of finite strain damage models often faces challenges owing to the non-convexity of the underlying strain energy densities, such as mesh-dependent approximations and stability issues. In this talk, we focus on recent developments in computational relaxation by semiconvexification of a pseudo-time-incremental damage model, including the efficient approximation of polyconvex and rank-one convex envelopes. For the approximation of the polyconvex envelope, a dimension reduction is achieved by employing the signed singular value characterization of isotropic functions, transitioning the convexification from the d × d-dimensional deformation-gradient space to a d-dimensional manifold in the space of signed singular values. The combination of this dimension reduction with well-known algorithms for the resulting convexification process results in a significant increase in computational efficiency. On the other hand, the approximation of the rank-one convex envelope is accelerated by a hierarchical rank-one sequence convexification approach, which computes locally optimal rank-one sequences at essentially the cost of one-dimensional convexifications. Although this approach provides only an upper bound on the rank-one convex envelope in general, it yields viable results in the intended application setting. A series of numerical experiments demonstrates the substantial computational speed-up and enhanced stability, thus paving the way for concurrent relaxation in simulations of boundary value problems. The feasibility of the presented approaches is illustrated by application to the isotropic damage model, showcasing mesh-insensitivity in the approximations, the ability to capture strain-softening and the microstructural evolution inherent to the relaxation process.
Background: Advanced sensor insoles and motion capture technology can significantly enhance the monitoring of the rehabilitation progress of patients with distal tibial fractures [1], [2]. This study utilizes the potential of these innovative tools to provide a more comprehensive assessment of a patient's gait and weight-bearing capacity following surgical intervention, offering a possibility for improved patient outcomes.
Methods: A patient who underwent distal medial tibial LCP surgery in August 2023 and a subsequent revision surgery involving the insertion of an intramedullary nail in December 2023 was meticulously monitored over 12 weeks. Initial assessments in November 2023 revealed pain upon full weight-bearing without crutches. Post-revision, precise weekly measurements were taken starting two days after surgery, allowing the patient's progress to be tracked accurately from initial crutch-assisted walking to full recovery.
Objectives: The study aimed to evaluate the hypothesis that the approximation and formation of a healthy gait curve are decisive for healing. Specifically, it investigated whether cadence, imbalance factors, ground reaction forces, joint angles, and segment acceleration could be significant indicators of the healing status and potential disorders.
Results: The kinetic and kinematic gait parameters significantly correlate with the patient's recovery trajectory. These metrics allow for the early identification of deviations from expected healing patterns and facilitate timely interventions, underscoring the transformative potential of these technologies in patient care.
Conclusion: Integrating sensor insoles and motion capture technology offers a promising approach for monitoring the recovery process in patients with distal tibial fractures. This method provides valuable insights into the patient's healing status, potentially predicting and addressing healing disorders more effectively. Future studies are recommended to validate these findings across a larger cohort and explore the possibility of integrating these technologies into clinical practice.
REFERENCES
[1] Orth, M., et al., Simulation-based prediction of bone healing and treatment recommendations for lower leg fractures: Effects of motion, weight-bearing and fibular mechanics, Front. Bioeng. Biotechnol., 11(February), 1–13. (2023)
[2] Warmerdam, E., et al., Gait Analysis to Monitor Fracture Healing of the Lower Leg, Bioengineering, 10(2), MDPI. (2023)
Background: Femoral fractures are common injuries in orthopaedic trauma surgery that usually require surgical intervention with internal fixation devices such as intramedullary nails or implants. In the implant case studied here, the number of screws, their positioning and the corresponding angle of insertion can significantly influence the mechanical environment at the fracture site, which in turn can affect healing outcomes. A better understanding of how screw configurations affect local micromechanics and interfragmentary motion can be used to optimize patient-specific implant setups.
Methods: 3D computer models were generated based on clinical image data of several femurs from body donations. These models were virtually given fracture patterns that are treated with implants in everyday clinical practice and were then virtually treated accordingly. To investigate the influence of the screw configuration, a series of configurations was then created for each model and analyzed using biomechanical finite element (FEM) simulations. To ensure realistic boundary conditions, physiological loading conditions were applied as hip forces and moments corresponding to those of a normal healing progression from partial to full weight bearing. The key parameters evaluated included the von Mises stress distribution in the implant and the associated screws as well as the degree of interfragmentary movement and the strain state of the fracture gap.
Results: The simulations showed that for some combinations of fracture type and screw configuration the influence on the micromechanics is substantial, whereas in other cases varying the screws has only a minor effect. In addition, the results are influenced by the bone quality, as seen when comparing healthy bone with osteoporotic bone.
Conclusion: The study shows that the screw configuration does have an influence, which can be highly relevant in individual cases. The mechanical stability of the bone-implant system also affects the interfragmentary movement, depending on the level of partial weight bearing.
In this presentation, we report on ongoing in silico research aimed at a better understanding of an experimental study on meniscus regeneration. In essence, this experiment uses a nonwoven scaffold that is colonized by chondrocytes and human mesenchymal stem cells. The mathematical description involves active processes at the cell level, as well as macroscopic effects. The corresponding mathematical model consists of a set of coupled nonlinear parabolic partial differential equations where further effects, such as the mechanical deformation, can also be taken into account.
From the numerical point of view, the vastly differing time scales, ranging from seconds for the mechanical deformation to days for the cell-related processes, also pose hard problems that have to be addressed.
In view of efficient and fast computation, we transform the PDE system into a proper ODE system. The latter can be used to verify the numerical results of the PDE model and to get a better insight into the interactions between the quantities. However, both the PDE model and the ODE model depend on many parameters, and several of them have to be regarded as unknowns. To deal with this issue, we perform a sensitivity analysis to identify the most important and most influential parameters. We see this also as a key step towards a deeper understanding of the model and as a starting point for possible simplification strategies. It becomes apparent that our proposed ODE ansatz is especially suitable for performing such a sensitivity analysis. More precisely, we analyze one classical approach and two statistical methods. The first one is a so-called local sensitivity analysis, where sensitivities are computed directly by varying input parameters. The latter two are so-called global sensitivity analyses. We focus on the Sobol' method, which treats the input parameters as random variables and decomposes the variance of the model output into single contributions of the parameters, as well as on the extended Fourier amplitude sensitivity test, which varies the input parameters with certain frequencies and performs a Fourier analysis on the model output. The numerical results of these different approaches will be compared and discussed.
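As an illustration of the global approach, the sketch below (an assumption for illustration only, not the authors' implementation; SALib and a toy two-compartment ODE serve as stand-ins for the actual model) computes first-order and total Sobol' indices for a scalar quantity of interest:

```python
# Illustrative Sobol' sensitivity analysis of a generic ODE model output.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from scipy.integrate import solve_ivp

# Hypothetical reduced ODE model: two interacting cell populations with
# three uncertain parameters (growth, interaction, decay).
def model_output(p):
    growth, interaction, decay = p
    def rhs(t, y):
        c, s = y  # e.g. chondrocytes and stem cells (purely illustrative)
        return [growth * c - interaction * c * s,
                interaction * c * s - decay * s]
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5], rtol=1e-6)
    return sol.y[0, -1]  # scalar quantity of interest at final time

problem = {
    "num_vars": 3,
    "names": ["growth", "interaction", "decay"],
    "bounds": [[0.1, 1.0], [0.01, 0.5], [0.05, 0.8]],
}

X = saltelli.sample(problem, 256)           # Sobol'/Saltelli sampling scheme
Y = np.array([model_output(x) for x in X])  # evaluate the ODE model
Si = sobol.analyze(problem, Y)              # variance decomposition
print(Si["S1"], Si["ST"])                   # first-order and total indices
```

A local analysis would instead perturb one parameter at a time around a nominal point, while the extended FAST method replaces the Saltelli sampling by frequency-based sampling of the inputs.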
Simulation-based, patient-specific risk assessment via a digital liver twin has enormous potential in clinical applications such as personalized drug dosing or evaluation of the status and impairment of liver grafts before transplantation [1]. We present a flexible software framework for coupling tissue-scale and cellular-scale processes using FEniCS [2], libRoadRunner [3], and preCICE [4]. Tissue-scale phenomena like deformation and fluid transport are modeled with partial differential equations (PDEs) within the framework of the Theory of Porous Media (TPM). These advection-diffusion-reaction systems are extended by cellular-scale reaction terms to model hepatocyte functions. Efficient and robust coupling between the tissue-scale solver FEniCS (PDE) and the cellular-scale solver libRoadRunner (ODE) is facilitated through the preCICE coupling library. A Python-based package, the Micro Manager [5], controls all the micro-simulations and is itself coupled via preCICE. This segregation of the managing software from the coupling library enables adaptive and parallel execution of cellular models. This modular framework supports the flexible exchange of metabolic models, extending potential use cases to various clinical scenarios. The numerical study investigates different coupling schemes (explicit/implicit) and their interplay with different boundary value problems (BVPs). To demonstrate the capabilities of such digital lobular structures, we showcase the coupled framework with different micro models, e.g. ischemic-reperfusion-injury (IRI) or substrate-product-toxin (SPT) metabolism. The outcome of this approach paves the way for further methods of data integration or surrogate modeling on the roadmap towards the digital liver twin.
[1] Tautenhahn H. et al. GAMM-Mitteilungen. (2024)47
[2] Baratta I. A. et al (2023). Zenodo. doi: 10.5281/zenodo.10447666.
[3] Welsh C. et al., Bioinformatics (2023)39
[4] Chourdakis G. et al. Open Research (2022)2
[5] Desai I. et al. Journal of Open Source Software (2023)
[6] Gerhäusser S. et al. bioRxiv doi: https://doi.org/10.1101/2024.03.26.586870
Computer models of the human heart can lead to a better understanding of cardiac function. Since the objective of many of these models is to be used in a clinical setting, a compromise between computational cost and numerical accuracy is needed. Due to the high mathematical complexity of the underlying model, the finite element discretization commonly used may not be the optimal balance between efficiency, reliability, and accuracy. To investigate the impact of different spatial discretization schemes on cardiac mechanics, we realized a benchmark configuration that considers the hyper-elastic problem of inflating and actively contracting an idealized left ventricle with transversely isotropic and nearly incompressible properties. In this study, we examined the influence of three different finite elements — conforming Galerkin (cG), discontinuous Galerkin (dG), and enriched Galerkin (eG) elements — by investigating the cavity volume and apex shortening for four mesh refinements. Furthermore, we compare the various spatial discretizations concerning the number of degrees of freedom and computational time. All simulations were conducted using both linear and quadratic elements for all methods. We demonstrate that the cG scheme leads to the occurrence of locking phenomena for coarse mesh resolutions using linear elements. However, locking can be mitigated by using finer mesh resolutions, higher-order elements, or by adopting the dG or eG elements. The dG elements have notably more degrees of freedom than the cG method, while the eG discretization adds only one additional degree of freedom per element. However, both eG and dG schemes cause higher computational costs, particularly the dG method. Furthermore, simulations utilizing the eG and dG schemes demonstrate enhanced robustness and stability compared to the conforming method. In conclusion, the eG approach offers a favorable balance between computational efficiency and numerical robustness in cardiac modeling applications.
Cerebral aneurysms (CAs), saccular dilations in the cerebral arteries affecting 3.2% of the world's population, are a leading cause of stroke. Prone to rupture, they often cause subarachnoid bleeding, resulting in high mortality and severe disability. Endovascular coiling, a minimally invasive procedure, prevents rupture by altering blood flow and triggering clot formation within the aneurysm. Despite its widespread use, coiling can fail, leading to aneurysm recurrence.
To better understand treatment outcomes, we develop a mathematical model to simulate thrombosis growth in coiled aneurysms in silico. Using our discrete-elastic rod (DER) model, patient-specific aneurysms are virtually embolized with coils, capturing their physical properties like natural curvature and flexibility. Blood flow dynamics are simulated via the lattice Boltzmann method in the parallelizable framework waLBerla, incorporating the non-Newtonian behavior of blood through the Carreau-Yasuda model and dynamic inflow conditions from a 1D arterial model.
Clot formation is modeled using advection-diffusion-reaction equations to transport clotting factors and platelets. A simplified coagulation cascade reduces model complexity and addresses parameter uncertainties. Clotting occurs near walls when concentration and shear-rate thresholds are met. The resulting thrombus, represented as a porosity field, integrates into the simulation via volume-averaged Navier-Stokes equations (VANSE), which impose resistance on blood flow.
Virtual angiography evaluates clot effectiveness, with outcomes classified using the Raymond-Roy occlusion grading scale to predict long-term treatment success and aneurysm recurrence. Our work offers insights into the interplay between coil placement, blood flow, and thrombus formation, advancing endovascular treatment strategies.
The dissipative anomaly, known as the zeroth law of turbulence, states that the rescaled mean kinetic energy dissipation rate ⟨ε⟩ L/Urms³ remains constant in the bulk of a turbulent flow away from surfaces when the kinematic viscosity ν approaches zero. Here, L is a characteristic outer scale of the flow and Urms is the root mean square velocity.
The framework first suggested by Duchon and Robert [1] is used to determine the anomalous dissipation term in the turbulent kinetic energy balance of a three-dimensional homogeneous isotropic incompressible Navier-Stokes flow. The incompressible anomalous dissipation term for a finite filter scale is defined by filtering the cube of the velocity increment. The anomalous contribution to the energy balance remains finite as the coarse-graining filter scale approaches zero.
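For orientation, the local term introduced by Duchon and Robert [1] for a filter kernel φ_ℓ can be written (standard notation, stated here for reference) as

$$ D_\ell(\mathbf{x}) \;=\; \tfrac{1}{4}\int \nabla\varphi_\ell(\boldsymbol{\xi})\cdot \delta\mathbf{u}\,\lvert\delta\mathbf{u}\rvert^{2}\,\mathrm{d}\boldsymbol{\xi}, \qquad \delta\mathbf{u} \;=\; \mathbf{u}(\mathbf{x}+\boldsymbol{\xi})-\mathbf{u}(\mathbf{x}), $$

and the anomalous dissipation is the limit of $D_\ell$ as the filter scale $\ell \to 0$, which does not vanish in turbulent flows.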
In the current study, we extend this approach to compressible flow, introducing new dissipation terms that emerge from density gradients. To compute these terms in the balances, a continuous test function with compact support is chosen based on wavelet transform theory. Initially, we validate our method using an analytical solution for a single Burgers vortex [2]. The analysis is further extended using the Hatakeyama and Kambe model [3] for an ensemble of Burgers vortices. This method was proposed more than 20 years ago as a kinematic building block model for the turbulent cascade in three-dimensional homogeneous isotropic turbulence. For comparison with realistic turbulence, we calculate the anomalous dissipation for several intense vortex stretching events in a turbulent flow from a simulation. We compare the analysis of the anomalous dissipation for all three models with different levels of complexity.
To investigate dissipation terms in compressible flow, we focus on a 1D fully compressible flow, particularly to highlight dissipative effects in shock waves. The Duchon and Robert framework for compressible turbulence is applied and compared with the approach proposed by Aluie [4]. This comparison provides a better understanding of how the different methods capture the dissipation mechanisms, in particular for strong density gradients and shock-induced dissipation.
[1] Duchon and Robert, Nonlinearity 13, 249 (2000)
[2] Burgers, J. Adv. Appl. Mech. 1, 171 (1948)
[3] Hatakeyama and Kambe, Phys. Rev. Lett. 79, 1257 (1997)
[4] Aluie, Phys D: Nonl. Phen. 247, 54 (2013)
We analyze Lundgren's infinite multipoint hierarchy of probability density functions (PDFs) in the context of homogeneous isotropic turbulence (HIT). To account for the physical properties of HIT, we express the PDF in terms of the scalar invariants under the SO(3) group of rotations, yielding new coordinates for a homogeneous isotropic PDF and leading to a dimensional reduction of the problem. Furthermore, we transform Lundgren's hierarchy to a description in spatial increment variables in order to obtain a formulation consistent with the famous approach chosen by von Kármán and Howarth for the description of two-point velocity moments which may be derived once the PDF is known. The new reduced PDF equation thus represents a higher-level equation for the moment hierarchy. Introducing the new coordinates describing homogeneous isotropic PDFs allows us to dimensionally reduce the hierarchy, finally describing multi-point statistics in HIT with a minimal set of coordinates. For further dimensional reduction compared to the homogeneous isotropic case, invariance of the PDF under one more group of rotations per additional point is introduced, which greatly simplifies the hierarchy. For the simplified hierarchy, a solution approach of superposed eigenfunctions is introduced. Together with the side conditions of the hierarchy, this leads to a formally closed eigenvalue problem.
The statistical behaviors of hyperbolic conservation laws with random initial data and/or stochastic forcing terms are crucial for uncertainty quantification and deeper insights into turbulent flows. We start from derivations of known hierarchies governing the evolution of probability distribution functions f, and propose a new set of hierarchies involving local higher-order spatial derivatives. For such kinetic-like equations, a viscosity-induced unclosed term remains, which is demonstrated to play a central role in preserving the positivity of f in the scalar case. Illustrating examples will be discussed to shed light on the properties of the unclosed term. For general cases, closure through the piecewise solution of the underlying conservation laws is desirable. To mitigate the computational difficulties arising from the viscosity term with second-order spatial derivatives, we propose a novel first-order relaxation system to approximate the incompressible Navier-Stokes equations, and prove the convergence of the solutions to those of the limit system when the two relaxation parameters both approach zero. The new system is promising for our multiscale numerical approach to the statistical behaviors, and this is our ongoing work within the project SPP 2410.
Flows with high Mach and Reynolds numbers exhibit pronounced multi-scale features due to turbulence and compressibility effects. Implicit large-eddy simulation (ILES) models for such flows require numerical discretizations with tailored truncation error properties, enabling coarse-resolution simulations to closely approximate low-pass filtered direct numerical simulation (DNS) data. These ILES discretizations must adapt to local flow characteristics to best approximate subgrid-scale structures while maintaining stability and robustness in the presence of shocks. In this SPP-2410 project, we explore the potential of machine-learned implicit large-eddy simulations (ML-ILES) for modeling compressible turbulence. As an initial step, we replace the classical ENO-type cell-face reconstruction operator within a Godunov-type finite-volume framework with a machine learning surrogate. Specifically, shallow artificial neural networks (ANNs) substitute classical smoothness indicators and are hybridized with standard Harten-type interpolation polynomials. Separate reconstruction ANNs are used for each physical flow quantity, facilitating dedicated cell-face reconstruction. The hybrid approach ensures Galilean invariance of the model and convergence upon mesh refinement. The ANNs are trained end-to-end within the automatically differentiable JAX-Fluids computational fluid dynamics solver, using a training data set composed of coarse-grained spatio-temporal DNS trajectories of compressible homogeneous isotropic turbulence (HIT). A posteriori tests on unseen HIT data demonstrate a promising performance of the trained ML-ILES model compared to established ILES discretizations. Notably, ML-ILES shows improved dissipation characteristics at high wave numbers and generalizes well to unseen computational grids. Finally, application of ML-ILES to more complex test cases, along with potential extensions on the modeling side, are discussed.
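As a conceptual sketch of the hybrid reconstruction idea (illustrative only; the shallow network, its size, and the normalization are assumptions and do not reproduce the JAX-Fluids implementation), a small network can output convex blending weights for the standard third-order candidate polynomials of a five-point stencil:

```python
# Conceptual sketch: an ANN replaces classical smoothness indicators by
# producing convex weights for the WENO5-type candidate reconstructions.
import jax
import jax.numpy as jnp

def init_params(key, width=16):
    k1, k2 = jax.random.split(key)
    return {"W1": jax.random.normal(k1, (5, width)) * 0.1, "b1": jnp.zeros(width),
            "W2": jax.random.normal(k2, (width, 3)) * 0.1, "b2": jnp.zeros(3)}

def face_reconstruction(params, u_stencil):
    """u_stencil = (u_{i-2},...,u_{i+2}); returns the value at face i+1/2."""
    um2, um1, u0, up1, up2 = u_stencil
    # Third-order candidate polynomials of the three sub-stencils (WENO5).
    p0 = (2*um2 - 7*um1 + 11*u0) / 6.0
    p1 = (-um1 + 5*u0 + 2*up1) / 6.0
    p2 = (2*u0 + 5*up1 - up2) / 6.0
    # Centred and normalized input: a constant shift of all cell values does
    # not change the weights, so the reconstruction stays Galilean invariant.
    x = u_stencil - jnp.mean(u_stencil)
    x = x / (jnp.linalg.norm(x) + 1e-12)
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    w = jax.nn.softmax(h @ params["W2"] + params["b2"])  # convex weights
    return w[0]*p0 + w[1]*p1 + w[2]*p2

params = init_params(jax.random.PRNGKey(0))
print(face_reconstruction(params, jnp.array([1.0, 1.1, 1.3, 1.2, 1.0])))
```

In an end-to-end training setting, such a routine would be differentiated through the full solver update so that the weights are tuned against coarse-grained DNS targets rather than against pointwise reconstruction errors.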
Multibody dynamics enables the simulation of a wide variety of systems, all characterized by having multiple parts in relative motion with one another. Applications span from biological to engineering systems, requiring diverse capabilities which range from real-time simulation to high-fidelity modeling of complex multidisciplinary systems. The goal of this mini-symposium is to present a view on the latest developments in models and advanced numerical methods in multibody dynamics. The focus is on techniques that enable applications to complex real-life problems.
In robotics, trajectory generation is a vital topic, especially in the case of planning motion that dynamically changes in response to the robot's interaction with its environment. Generating a feasible trajectory in the task space requires considering the position, velocity, and acceleration bounds imposed on the joint space motion. Due to nonlinear mapping between the task and joint spaces, trajectory generation is non-trivial. It becomes even more involved in the case of redundant manipulators.
A trajectory consists of a path and a time law deciding how fast that path is traversed. For trajectories that are feasible at the position level but too demanding in terms of velocities or accelerations, trajectory scaling, which consists in slowing down the motion, is a reasonable approach that allows the manipulator task to be fulfilled. In the simplest case, the whole trajectory—calculated in advance—is uniformly scaled. However, the necessity for off-line calculations and slowing down the motion, even in its less demanding parts, makes this approach impractical.
In this contribution, the gradual development of novel algorithms for trajectory scaling is presented. First, we discuss an algorithm that can be executed online and applies the scaling only to the part of the originally planned trajectory when necessary. The proposed approach analyzes the planned motion in a specified prediction horizon. Moreover, the initially planned path and velocity profile must be known in advance only within this horizon, allowing for a dynamic replanning of the robot’s tasks beyond the prediction horizon. For this algorithm, an inverse kinematic (IK) solver is assumed to be available.
Next, considerations are extended to the case of redundant manipulators. Trajectory scaling is combined with solving the IK. We start by formulating redundancy resolution as a quadratic programming (QP) problem with joint accelerations as the decision variables. Equality constraints enforce task space trajectory, whereas joint acceleration, velocity-, and position-level limits are represented by inequality constraints. Supplementary tasks may be incorporated into the goal function.
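A minimal sketch of such a QP (illustrative only, with hypothetical dimensions and a randomly generated Jacobian standing in for the real manipulator model) could look as follows:

```python
# Redundancy resolution as a QP with joint accelerations as decision variables:
# task-space equality constraint plus box bounds on the accelerations.
import numpy as np
import cvxpy as cp

n, m = 7, 3                              # joints, task-space dimension (assumed)
rng = np.random.default_rng(0)
J = rng.standard_normal((m, n))          # task Jacobian J(q) (placeholder)
Jdot_qdot = rng.standard_normal(m)       # drift term  dJ/dt * qdot (placeholder)
xdd_des = np.array([0.2, -0.1, 0.05])    # desired task-space acceleration
qdd_max = 5.0 * np.ones(n)               # acceleration limits; velocity- and
                                         # position-level limits can be mapped
                                         # to equivalent acceleration bounds.

qdd = cp.Variable(n)
# Secondary objective: minimum acceleration norm (supplementary tasks could
# be added here as further quadratic terms).
objective = cp.Minimize(cp.sum_squares(qdd))
constraints = [J @ qdd == xdd_des - Jdot_qdot,
               qdd <= qdd_max, qdd >= -qdd_max]
cp.Problem(objective, constraints).solve()
print(qdd.value)
```

Trajectory scaling then enters by treating the time-law rate (or its square) as an additional decision variable that relaxes the task-space equality constraint when the bounds would otherwise be violated.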
Then, the QP-based algorithm is enhanced by introducing trajectory scaling for the next control step. Single-step scaling is effective in many situations, but it may be insufficient in the case of more demanding trajectories. Therefore, the algorithm is further improved by including a prediction window inside which the IK problem is solved and the trajectory can be scaled. As a result, the novel algorithm with real-time capabilities ensures constraint fulfillment, non-uniformly scales the trajectory, and accepts additional tasks to exploit redundancy.
Friction is vital for the functionality of many engineering applications. In walking and driving, for example, friction with the ground is exploited for locomotion. From a mathematical point of view, one of the main hurdles for the optimal control of systems with friction is the strong nonlinearity or even nonsmoothness introduced, for example, by slip-stick transitions.
In this talk, we employ a pendulum driven by a frictional clutch as a representative benchmark problem to delve into the intricate challenges posed by friction in optimal control problems. We examine how the nonlinearity and nonsmoothness inherent in friction affect the optimal control problem and introduce different numerical solution approaches. Finally, the different approaches are compared to each other.
Traditionally, building an inverse dynamics model to control a multibody system (MBS) involves deriving it from first principles. Such physics-based representations lead to ordinary differential or differential-algebraic formulations, which extrapolate well by design and are usually preferred in model-based control strategies. Nevertheless, it often happens that parameters of the model remain unknown, that models of the governing physical phenomena are overly simplified, or that it is not known at the design stage which of them would impact the system behavior significantly.
Data-driven techniques that identify dynamics directly from data are advancing quickly. The emerging dynamic mode decomposition (DMD) appears to be a highly versatile and powerful approach to discovering dynamics from time-series recordings or numerical simulations.
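For reference, a minimal exact-DMD computation on snapshot matrices (a generic sketch, not the authors' DID pipeline) reads:

```python
# Exact DMD: identify a linear operator A with x_{k+1} ≈ A x_k from snapshots.
import numpy as np

def dmd(X, Xp, r):
    """X, Xp: snapshot matrices (states as columns), Xp shifted by one step."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]           # rank-r truncation
    A_tilde = U.conj().T @ Xp @ Vh.conj().T / s     # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)             # DMD eigenvalues
    modes = Xp @ Vh.conj().T / s @ W                # DMD modes
    return A_tilde, eigvals, modes

# Hypothetical usage with recorded states/inputs stacked as columns of X:
# A_tilde, lam, Phi = dmd(X[:, :-1], X[:, 1:], r=10)
```

Online variants update the SVD (or the reduced operator) over a moving window of recent snapshots, which matches the weighting/decay of older measurements described below.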
We propose using DMD to directly derive the inverse dynamics model, which is subsequently employed for control. The model is updated online as new measurements are acquired. We refer to this approach as data-driven inverse dynamics (DID), a method we introduced in our recent work to assist feedback and feedforward controllers based on physics-informed neural networks by modeling their error. In this study, however, we focus on an inverse controller that operates either alongside a simple PID controller or independently. The DID model is trained to capture the complete dynamics of the multibody system under control. The PID controller is used for the starting phase, in which DID is initialized and, as DMD generally linearizes the problem, a preliminary tracking of the desired trajectory is needed.
We evaluate the proposed method in a simulation of a five-bar linkage mechanism, where motors attached to the outer links are controlled, and the positions and velocities of these links are measured. The online updates of the DID model employ a moving window over the collected data, with older measurements being weighted or decayed. The results demonstrate that the DID controller significantly reduces the signal magnitudes produced by the PID controller and achieves satisfactory trajectory tracking independently. This includes tracking complex paths, such as Lissajous curves and trajectories with sharp, non-differentiable turns.
Although the tested system is relatively low-dimensional, the flexibility of DMD extensions allows us to adjust model dimensionality through embeddings or mode reductions. This capability enables the proposed updating method to be applied to more complex systems. We are also conducting real-world experiments to validate these findings.
Lie group integrators help to avoid singularities in the dynamical simulation of multibody systems with large rotations by solving initial value problems for (ordinary) differential equations on manifolds with Lie group structure.
For one-step methods, the application of classical ODE time integration methods to a locally defined equivalent ODE in terms of local coordinates has become a quasi-standard. These local coordinates are elements of the corresponding Lie algebra. They are mapped by the exponential map to the Lie group itself. For typical applications in multibody system dynamics, there are closed-form expressions that allow this coordinate map and the right-hand side of the locally defined equivalent ODE to be evaluated efficiently.
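As a minimal illustration of this idea (a generic sketch assuming a body-fixed angular velocity, not the specific integrators discussed here), one explicit Lie-Euler step on SO(3) uses the closed-form exponential map given by the Rodrigues formula:

```python
# One explicit Lie-Euler step on SO(3): R_{n+1} = R_n * exp(hat(h * omega)).
import numpy as np

def hat(w):
    """Skew-symmetric matrix of w, i.e. an element of the Lie algebra so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Closed-form exponential map so(3) -> SO(3) (Rodrigues formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + hat(w)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def lie_euler_step(R, omega_body, h):
    """Update the rotation for a body-fixed angular velocity omega_body."""
    return R @ expm_so3(h * omega_body)

R = lie_euler_step(np.eye(3), np.array([0.0, 0.0, 1.0]), 0.01)
```

Higher-order one-step methods replace the simple increment h*omega by a stage-wise combination of evaluations of the locally equivalent ODE, but the update through the exponential map has the same structure.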
For multi-step methods and for the generalized-α method with its subsidiary variables, the situation is more complex since frequent re-parametrizations of the manifold need to be avoided. As a practical consequence, the corresponding Lie group methods suffer from extra local error terms that may, however, be eliminated by appropriate correction terms (V. Wieloch, M. Arnold: BDF integrators for mechanical systems on Lie groups, NUMDIFF-15, September 2018). Recently, these modified Lie group integrators have been interpreted in terms of time derivatives of the local coordinates. In that way, the accuracy of simulation results was substantially improved (S. Holzinger, M. Arnold, J. Gerstmayr: Improving the accuracy of Newmark-based time integration methods, IMSD 2024, June 2024, and M. Arnold, J. Gerstmayr, S. Holzinger: A Lie group generalized-α method with improved accuracy, NUMDIFF-17, September 2024).
In the present paper, we analyse local and global errors of these modified generalized-α methods, discuss some implementation issues and present numerical test results that illustrate the improved accuracy.
Failure of materials and structural components is an important issue as long as man-made constructions exist. The section focuses on damage mechanics and fracture mechanics for all kinds of solid materials and structures. It aims at bringing together related original research covering experimental observations, modeling approaches and numerical techniques. Moreover, material failure is a complex process, which may be considered on different length scales ranging from atomistic scale up to macro scale of engineering structures. Since failure behavior of materials strongly depends on loading situations, contributions addressing static, dynamic and multi-axial failure are welcome as well as fatigue problems.
Ice shelves are large floating plates of ice in the ocean that are still connected to the inland ice of a glacier. Due to elevations in the bathymetry, the ice shelf can be partially grounded. These areas are called pinning points and satellite images show that cracks often form at these locations.
To better understand the crack formation, three-dimensional fracture simulations are carried out. The crack is modeled using the phase field method for fracture, where an additional scalar field represents whether the material is intact or broken.
Glacier ice is a Maxwell-type material with a short-term elastic and long-term viscous behavior. In addition to the viscoelastic behavior, ice is a non-Newtonian fluid with a strain-thinning behavior characterized by Glen’s flow law. The viscosity of glacier ice is influenced by the stress and temperature distribution within the ice shelf. These material characteristics are taken into account in the crack simulation by incorporating a nonlinear viscosity into the phase field method for the fracture of a viscoelastic material. Finite strain theory is used to adequately represent the high strains typically found in ice shelves.
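For reference, Glen's flow law is commonly written as

$$ \dot{\varepsilon}_{ij} \;=\; A(T)\,\tau_e^{\,n-1}\, s_{ij}, \qquad \eta \;=\; \tfrac{1}{2}\, A^{-1/n}\,\dot{\varepsilon}_e^{\,(1-n)/n}, $$

with deviatoric stress $s_{ij}$, effective stress $\tau_e$, effective strain rate $\dot{\varepsilon}_e$, temperature-dependent rate factor $A(T)$ and typically $n \approx 3$, so that the effective viscosity decreases with increasing strain rate.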
This approach allows the simulation of crack initiation at pinning points and contributes to the understanding of ice shelf dynamics and calving processes.
Clinching, as a versatile joining process capable of bonding different sheet materials, is widely used in the automotive industry to meet the demand for lightweight structures. Consequently, understanding and improving the fatigue strength of clinched joints is of significant importance. Accurate fatigue life prediction during the design phase can help reduce costs and minimize safety risks.
However, simulating the fatigue life of clinched joints in a 3D model for millions of cycles until failure is computationally prohibitive due to the high costs associated with modeling contact mechanics between the metal sheets. Recent research has proposed an alternative approach: computing the stress distribution of a 3D model for a single cycle and applying the stress history of the element with the highest von Mises stress to Lemaitre’s two-scale damage model to predict fatigue life. While this method significantly reduces computational effort, since only one element is simulated to failure, it overlooks the evolution of stresses and strains in the entire structure.
To address these limitations, this study introduces a 2D substitute numerical model that incorporates the effects of friction between sheets and thermal influences. The proposed model eliminates the need to fully simulate contact mechanics by implementing a slip condition on the internal surface of the upper sheet. This simplification balances computational efficiency with physical accuracy.
In this work, the 2D numerical model is first described in detail. The model is then validated against experimental data to ensure its reliability. Finally, the influence of friction and thermal effects on fatigue life is investigated through a series of test cases, highlighting the model’s predictive capability and potential applications.
Peridynamics is a nonlocal continuum mechanics formulation widely used for modeling fracture phenomena. The standard peridynamic models, introduced by Silling in 2000, are particularly effective in modeling damage and fracture under large deformations. However, these models have significant discretization errors in elastic modeling due to nonlocality and surface effects, especially in thin structures. The correspondence formulation mitigates these limitations, utilizing an approximated deformation gradient to calculate stress forces. Although this reformulation improves the accuracy of elastic modeling, it introduces instability through zero-energy modes. These problems combined make it hard to model dynamic fracture with large deformations in thin structures.
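For context, the approximated deformation gradient of the correspondence formulation is commonly computed from the bond deformations over the horizon $\mathcal{H}$ (standard form, stated here for reference):

$$ \mathbf{F}(\mathbf{x}) \;\approx\; \left[\int_{\mathcal H} \omega(|\boldsymbol{\xi}|)\,\big(\mathbf{y}(\mathbf{x}+\boldsymbol{\xi})-\mathbf{y}(\mathbf{x})\big)\otimes\boldsymbol{\xi}\;\mathrm{d}V_{\boldsymbol{\xi}}\right]\mathbf{K}^{-1}, \qquad \mathbf{K} \;=\; \int_{\mathcal H} \omega(|\boldsymbol{\xi}|)\,\boldsymbol{\xi}\otimes\boldsymbol{\xi}\;\mathrm{d}V_{\boldsymbol{\xi}}, $$

where $\omega$ is the influence function and $\mathbf{K}$ the shape tensor. Because $\mathbf{F}$ is only an averaged measure of the bond deformations, deformation modes that leave this average unchanged produce no force, which is the origin of the zero-energy modes mentioned above; bond-associated formulations evaluate analogous gradients per bond over sub-horizons and thereby suppress these modes.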
This study presents a comprehensive review of bond-associated peridynamic formulations as a promising solution to these challenges. Various approaches within bond-associated modeling are compared, focusing on computational efficiency and other factors regarding the discretization. The analysis confirms that bond-associated models effectively capture the correct fragmentation process in thin structures. Explanatory examples of dynamic fracture in thin structures are provided, showcasing the capabilities and practical relevance of these models.
In mechanical engineering, aerospace engineering, and civil engineering there are many situations where multilayer structures are applied. This can, for instance, be the overlap region of an adhesive joint, a local patch reinforcement on a substrate, or a substrate with a protective layer. Under a given mechanical or thermal loading, stress concentrations will typically arise at the ends of the involved layers due to the dissimilar material properties, which might lead to the onset of debonding cracks and a subsequent debonding failure.
In order to assess such situations in the framework of linear elasticity theory, a closed-form analytical approach is suggested based on layerwise displacement representations that take into account singular stress concentrations. From this, by means of the principle of minimum potential energy, an approximate closed-form description of the resultant deformation and stress distribution can be derived and validated by accompanying finite element calculations.
In order to assess potential failure in the framework of finite fracture mechanics, debonding cracks of finite length are introduced and the corresponding incremental energy release is identified. Using the hybrid failure criterion suggested by Leguillon [1], it can then be clearly concluded whether such debonding failure will occur in reality or not. By means of a corresponding optimization problem, the minimal critical loading leading to failure can also be determined. This is of high significance for the practical application.
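In generic form (notation assumed here for illustration), the coupled stress-energy criterion states that a debond of finite length $\Delta a$ initiates once

$$ \bar G(\Delta a) \;=\; \frac{\Pi(0)-\Pi(\Delta a)}{\Delta a} \;\ge\; G_c \qquad\text{and}\qquad \sigma(x) \;\ge\; \sigma_c \quad \forall\, x \in [0,\Delta a], $$

i.e. the incremental energy release rate reaches the fracture toughness while the stress along the prospective debond exceeds the interface strength. The critical load is then the smallest load for which both conditions can be fulfilled simultaneously for some admissible $\Delta a$.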
References:
[1] Dominique Leguillon: Strength or toughness? A criterion for crack onset at a notch. European Journal of Mechanics – A/Solids 21 (2002) 61-72.
Explicit finite element solvers are commonly used for the simulation of sophisticated industrial processes. These methods are particularly beneficial in scenarios characterised by high strain rates, complex contact interactions, or rapid failure processes, making them indispensable for simulating manufacturing operations such as forming processes or impact loading. To capture the localisation and evolution of damage and to obtain mesh objective solutions, the incorporation of regularisation approaches is essential, with gradient-enhancement emerging as one of the most promising options. However, classic micromorphic formulations of gradient-enhanced damage models, as discussed in e.g. [1], result in elliptic partial differential equations, which are inherently incompatible with explicit solver frameworks. To address this incompatibility, a reformulation of the field equation into a hyperbolic structure was proposed in [2]. In view of these developments, this contribution presents a comprehensive discussion of such dynamic micromorphic formulations. A dimensional analysis is performed to establish the scaling characteristics. The regularisation properties are explored based on wave propagation analyses. An implementation into an explicit finite element solver is presented, with particular attention dedicated to the stability of the time integration scheme. Representative boundary value problems are presented to validate the formulation and to demonstrate its potential for practical applications in process simulations.
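Schematically (a generic form stated for orientation, not the specific constitutive model of this contribution), the micromorphic enhancement and its dynamic counterpart can be contrasted as

$$ \bar\varphi - l^2\,\Delta\bar\varphi = \varphi \qquad\longrightarrow\qquad \rho_\varphi\,\ddot{\bar\varphi} + \bar\varphi - l^2\,\Delta\bar\varphi = \varphi, $$

where $\varphi$ is the local damage-driving variable, $\bar\varphi$ its nonlocal counterpart, $l$ the internal length, and $\rho_\varphi$ a micro-inertia parameter. The added inertia term turns the elliptic equation into a hyperbolic one that fits into explicit time stepping, with $l$ and $\rho_\varphi$ setting the propagation speed of the regularising field and thereby influencing the critical time step.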
[1] Forest S. (2009) Micromorphic approach for gradient elasticity, viscoplasticity, and damage. Journal of Engineering Mechanics 135(3) 117-131.
https://doi.org/10.1061/(ASCE)0733-9399(2009)135:3(117).
[2] Saanouni K., Hamed M. (2013) Micromorphic approach for finite gradient-elastoplasticity fully coupled with ductile damage: Formulation and computational aspects. International Journal of Solids and Structures 50(14) 2289-2309.
http://dx.doi.org/10.1016/j.ijsolstr.2013.03.027.
Cracks and the associated frictional forces that arise under combined compressive and shear loads at the crack surfaces can significantly impact the material behaviour, which is particularly relevant if the load direction changes after the formation of the cracks.
The phase-field method is a powerful approach for the simulation of cracks and their propagation, enabling automatic handling of crack propagation, including the detection of the direction and length of the propagation, and it can even capture the coalescence and branching of cracks. However, to the best of the author's knowledge, no generalisable method for the simulation of crack surface friction under combined compressive and shear loads within the phase-field method has been published yet. This contribution can therefore be considered as a first attempt in this direction.
Since the cracks are not modelled discretely in the phase-field method but are represented by a reduction of the material stiffness, which is applied in a certain width that is determined by the internal length parameter, this represents a particular challenge.
Coulomb's friction law is used for the simulation of crack surface friction, and it is adapted for the calculation of the stick and slip state analogously to elastoplastic models.
The method has been tested with a set of numerical tests showing promising results indicating that the approach has the potential to correctly represent crack surface friction in rather general cases. Thus the approach can be a valuable contribution to a more accurate modeling of material behaviour and failure within the framework of the phase-field method.
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
The numerical treatment of Kirchhoff–Love shells is challenging due to fourth-order derivatives in the displacement-based strong form of the governing mechanical model. For the finite element analysis, the employed shape functions must either meet increased continuity requirements such as those provided in isogeometric analysis or—to continue using standard C0‑continuous elements—mixed formulations are needed. Then, the components of the displacement vector and the moment tensor are typically present as primary variables, largely increasing the number of unknowns. A hybridized formulation, see [1], may address this disadvantage, where an element-wise static condensation reduces the number of overall DOFs, leading only to displacements and rotations on element boundaries as the resulting primary variables. The moment tensor may be retrieved in a post-processing step with improved convergence rates compared to the moment tensor computed in a displacement-based formulation. We present a novel mixed-hybrid finite element approach, where classical C0‑continuous shape functions based on higher-order Lagrange elements are employed without the need for special finite element spaces such as in [2]. All mechanically useful boundary conditions are systematically considered. The weak form is formulated in the frame of the tangential differential calculus (TDC) following our works in [3, 4, 5], thus being applicable to explicitly and implicitly defined shell geometries. A new set of benchmark test cases featuring smooth mechanical fields is proposed, where the numerical results confirm optimal higher-order convergence rates.
REFERENCES
[1] D. Boffi, F. Brezzi, M. Fortin, Mixed Finite Element Methods and Applications, 1 ed., Springer-Verlag Berlin Heidelberg, Berlin, 2013.
[2] M. Neunteufel, J. Schöberl, The Hellan–Herrmann–Johnson method for nonlinear shells, Comput. Struct., 225, 106109, 2019.
[3] D. Schöllhammer, T.-P. Fries, Kirchhoff–Love shell theory based on tangential differential calculus, Comput. Mech., 64, 113–131, 2019.
[4] D. Schöllhammer, T.-P. Fries, Reissner–Mindlin shell theory based on tangential differential calculus, Comput. Method. Appl. M., 352, 172–188, 2019.
[5] D. Schöllhammer, T.-P. Fries, A higher‐order Trace finite element method for shells, Int. J. Numer. Meth. Eng., 122, 1217–1238, 2021.
The heterogeneity of natural materials is inherent and poses significant challenges in determining their mechanical properties, such as stiffness and density. Since biological samples are often very fragile, special care is necessary to examine them with traditional material testing. Moreover, applying the standard methods might be impossible in in-vivo conditions. These issues cause increasing interest in non-destructive methods, including inverse approaches based on full-field experimental data. This study employs the Finite Element Model Updating (FEMU) method for material identification of heterogeneous Kirchhoff-Love shells and planar Bernoulli-Euler beams, which are analyzed numerically with isogeometric analysis. The unknown material fields are discretized with a separate mesh based on Lagrange interpolation, referred to as the material mesh. The nodal values of the material mesh become discrete unknowns of the inverse problem. The framework focuses on reconstructing stiffness parameters through inverse analysis based on nonlinear statics, followed by identifying density unknowns using modal dynamics. The FEMU objective function is formulated as a least-squares problem, mainly consisting of the differences between experimental and FE displacements. The objective function may also include, for instance, resultants of the contact forces or natural frequencies, depending on the underlying FE problem. Quasi-experimental data is artificially generated using high-resolution FE models with random noise added subsequently. To solve the nonlinear least-squares problem, the Trust Region Interior Reflective algorithm is used. Furthermore, analytical gradients and Jacobians are derived and implemented to accelerate the computations. The FEMU approach is tested with several numerical examples, including a shell strip subjected to uniform pressure and an abdominal wall under rigid contact with a spherical probe. Results show that the approach is sensitive to noise. However, with sufficient experimental data, i.e., higher resolution of experimental mesh or the number of experiments involved, the influence of noise can be significantly reduced. The method enables the identification of smooth and discontinuous distributions, even for large noise levels. Additionally, the density can be accurately reconstructed based on the previously obtained stiffness. The analytical sensitivities provide a huge decrease in the computational time required for the inverse analysis. The material mesh can act as a filter to prevent overfitting. Its adaptivity is crucial and will be explored in future research.
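To make the inverse step concrete, the sketch below (a toy stand-in, assuming a one-dimensional bar surrogate instead of the actual shell/beam FE models) casts FEMU as a nonlinear least-squares problem solved with SciPy's trust-region reflective algorithm:

```python
# Toy FEMU-type identification: recover a piecewise stiffness field from
# noisy "measured" displacements via trust-region reflective least squares.
import numpy as np
from scipy.optimize import least_squares

def forward(E, h=0.1):
    """Hypothetical forward model: displacements of a 1D bar with piecewise
    stiffness E under a unit end load, u_i = sum_{j<=i} h / E_j."""
    return np.cumsum(h / E)

rng = np.random.default_rng(1)
E_true = np.array([2.0, 1.0, 3.0, 2.5, 1.5])
u_exp = forward(E_true) + 1e-4 * rng.standard_normal(E_true.size)  # noisy data

def residuals(E):
    """Difference between simulated and 'experimental' displacements."""
    return forward(E) - u_exp

# Analytical Jacobians (as in the actual framework) can be passed via jac=...
fit = least_squares(residuals, x0=np.ones(5), method="trf",
                    bounds=(1e-3, np.inf))  # keep stiffness physically positive
print(fit.x)
```

In the actual framework the residual vector additionally collects contact-force resultants or natural frequencies, and the unknowns are the nodal values of the coarser material mesh rather than element-wise constants.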
Topology optimization is a powerful and common tool in lightweight design. It is possible to use topology optimization to optimize the surface of an existing part by modelling a shell onto an existing FE model and optimizing the thickness of the shell. However, if the goal is to reduce the mass, it is currently necessary to remodel the existing model with smaller dimensions before running this type of optimization. The reason for this remodelling step is that mass can only be added, because available optimization tools only allow for an increase of the thickness of the shell. To eliminate this time-consuming remodelling step, we propose elements with negative thickness values. As a first step towards this goal, we prove that the laws of mechanics also apply when areas are subtracted from an existing solid body and that no mechanical or mathematical inconsistencies arise. The considerations consist of an analytical evaluation in accordance with established engineering mechanics. Examples are given of the necessary next steps for preparing the transfer of the approach into FE simulation, with an outlook on the modelling and numerical challenges that derive from our approach.
Recent developments in digitalization and digital fabrication have enforced an increasing complexity of the structural components for the analysis. A promising way to discretize components is to rely on polygonal meshes, as they provide advantages such as simplified mesh generation and highly localized mesh refinement, for example, Voronoi tessellation and quadtree meshes. This calls for finite element formulations that can handle an arbitrary number of edges, such as the scaled boundary finite element method that allows the use of both convex and concave elements. A particular focus is placed on thin-walled structures, widely used in different domains of engineering. This encouraged the development of various thin plate and shell theories. Among these, Reissner-Mindlin theory has gained prominence due to its C0-continuity requirement, simplifying the selection of interpolation functions in comparison to other plate theories. Despite being computationally more efficient, low-order Reissner-Mindlin plate elements face a significant disadvantage. They are prone to locking effects known as shear locking in the thin plate limit. Further, the plate formulation requires a specific treatment of the constitutive laws, which restricts the plate’s application. Accounting for additional thickness stretches at the element level allows the computation to be carried out with a full three-dimensional material description. However, it comes with an additional locking phenomenon, called Poisson’s thickness locking.
The presented work focuses on the reduction of shear locking in polygonal Reissner-Mindlin plate elements. It follows a formulation for a scaled boundary finite element that employs mixed interpolation techniques for the bending and shear components. An assumed natural strain method is derived to interpolate the shear strains at the section level of a scaled boundary element. It reduces transversal shear locking significantly for the polygonal scaled boundary finite plate element. Additionally, three-dimensional material laws are incorporated by enhanced thickness strains with a linear interpolation along the thickness direction, effectively addressing Poisson’s thickness locking.
Hoses for applications in the automotive industry are mainly made of fiber-reinforced rubber, with reinforcements ranging from nylon to steel. The properties of this reinforcement layer largely determine the deformation behaviour under internal pressure loading. For optimal space utilization and to meet certain design requirements, hoses are rarely straight but are often curved. These curved sections can be regarded as parts of a torus. This work investigates the role of fiber orientation in the analysis and simulation of helically reinforced, pressurized tori. Therefore, the concept of the neutral wrapping angle of a straight hose is analytically extended to a torus. This specific angle establishes an iso-sinusoidal load condition by balancing axial and circumferential stresses. It varies for a torus along the cross-section due to fluctuating circumferential stresses. Additionally, because of the curvature of the torus, the geometrical wrapping angle is not constant. Bending a reinforced straight hose or winding fibers onto a toroidal part inherently creates varying geometrical wrapping angles, which follow a similar pattern to the neutral wrapping angles but differ quantitatively. Using the finite element method and suitable modeling approaches, we analyze this difference as well as the interplay between fiber orientation and curvature effects on toroidal deformations. Specifically, we examine how deviations from the neutral wrapping angle superpose with the Bourdon effect. These interactions underscore the need for precise modeling of varying wrapping angles, especially in highly curved toroidal geometries. Finally, the potential of toroidal geodesics for optimal reinforcement paths is explored, as they wind around the torus in a spiral manner.
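For reference, for a straight thin-walled hose under internal pressure the circumferential stress is twice the axial stress, and netting analysis yields the classical neutral wrapping angle

$$ \tan^2\alpha_0 \;=\; \frac{\sigma_\theta}{\sigma_z} \;=\; 2 \qquad\Rightarrow\qquad \alpha_0 \;=\; \arctan\sqrt{2} \;\approx\; 54.7^\circ, $$

measured from the hose axis. On a torus, both the stress ratio and the geometric winding angle vary over the cross-section, which is precisely the deviation investigated in this contribution.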
Thin-walled structures are part of many engineering applications, such as wind turbine blades, pressure vessels and sheet metal products. A thin-walled structure is characterized by one dimension being significantly smaller than the others. To numerically support the design process of such industrial structures, a robust and efficient approximation algorithm is required.
This work aims to examine the behavior of a nonlinear formulation for shell models using an efficient triangular shell element. Thereby, the proposed displacement-based triangular shell element, which comprises six nodes, has a compatible linear interpolation scheme for displacements and a non-conforming linear rotation field, resulting in a very efficient finite element formulation.
This particular formulation is based on Reissner-Mindlin kinematic assumptions and an initial plane reference configuration for the shell. The proposed formulation also accounts for finite strains, large displacements, and rotations. Additionally, the rotation field has been re-parameterized using the Rodrigues rotation parameters (Argyris, 1982; Campello, 2011), facilitating an efficient update of the rotational field in comparison to the classical Euler rotation vector.
Furthermore, a comparison with the T6-3i element introduced in Campello, 2003 is performed to illustrate the robustness of the proposed formulation.
The section focuses on constitutive modelling of natural and artificial materials subject to elastic and inelastic deformation processes. The aim is to compare new constitutive models formulated on both the phenomenological and the micromechanics basis to determine their validity by comparison of simulations with experiments. A wide range of open problems will be considered in the section, like multi-scale modelling of heterogeneous materials, implementation of constitutive models in numerical applications, and the virtual testing of structural systems.
This presentation introduces a thermodynamically consistent formulation for the heat source arising from plastic deformation within a small deformation framework (cf. [1,2]). The formulation leads to an additive decomposition of the flow stress into two distinct parts: one associated with dissipation and another linked to the energy stored within the material due to microstructural changes, such as dislocation evolution and interaction. This provides a framework for investigating the thermomechanical coupling during plastic deformation. The consequences of the underlying assumptions for the dependence of the material functions and the free energy function are discussed.
The proposed decomposition provides a structured approach to understanding energy and stress partitioning during plastic deformation of metallic materials. Although established in a small deformation setting, the framework and its implications can be generalized to large deformations and may serve as a starting point for more refined studies of dislocation-based thermoplasticity in mono- and polycrystals. By focusing on the thermodynamic consistency of the formulation, this work contributes to the fundamental understanding of plastic deformation and lays the foundation for further exploration of the interplay between thermomechanical and microstructural effects. Selected experimental results based on in-situ infrared thermography measurements [3] during tensile tests, together with micromechanical model cases, are discussed. Limitations and prospects for generalization are highlighted in the presentation.
[1] Bertram, A., \& Krawietz, A. (2012). On the introduction of thermoplasticity. Acta Mechanica, 223, 2257-2268.
[2] Rosakis, P., Rosakis, A. J., Ravichandran, G., \& Hodowany, J. (2000). A thermodynamic internal variable model for the partition of plastic work into heat and stored energy in metals. Journal of the Mechanics and Physics of Solids, 48(3), 581-607.
[3] Chrysochoos, A., \& Louche, H. (2000). An infrared image processing to analyse the calorific effects accompanying strain localisation. International Journal of Engineering Science, 38(16), 1759-1788.
The Portevin-Le Chatelier (PLC) effect is a propagative instability phenomenon appearing in metals, for example in steel and aluminum alloys at elevated and room temperature, respectively. It is characterized by serrations (repetitive changes from hardening to softening) visible in load-displacement diagrams and by traveling strain rate bands that can be observed in the sample. At the atomic level, dislocations are stopped by defects or solute atoms, causing them to pile up while stresses rise. When a sufficient load is applied, the dislocations are freed and the stress drops. The cycle then repeats; the underlying phenomenon is called Dynamic Strain Aging (DSA). Lueders bands are another propagative instability phenomenon appearing in metals, characterized by a plastic front that moves through the sample right after yielding. The strains first localize in a shear band during a transient softening stage. At the edge of the shear band, a plastic front forms and then moves through the sample while a plateau is visible in the load-displacement diagram. When the plastic front reaches the end of the sample, hardening starts and the sample subsequently deforms uniformly.
Both Lueders bands and the PLC effect can be observed during a tensile test of the aluminum alloy AW5083. Such tests have been performed on dog-bone-shaped samples for three strain rates at room temperature. After the initial elastic deformation, a small stress drop is visible, followed by a short plateau characteristic of Lueders bands, and by saturation hardening with serrations indicating the PLC effect. DIC images first show the elastic deformation, followed by localization in a single band. Next, the plastic front forms and moves. During hardening, when the PLC effect is activated, shear bands appear and disappear at different places in the sample.
A large strain thermo-visco-plastic Estrin-McCormick model, i.e. a phenomenological description of DSA, is used to simulate the behavior combining Lueders bands and the PLC effect. Different yield functions, both isotropic and anisotropic, are used in the formulation to determine the most suitable one. Computations are performed with the AceGen and AceFEM packages for Wolfram Mathematica. The identification of the model parameters is part of future research.
ACKNOWLEDGMENT:
The work is supported within the Weave-UNISONO call by the German Research Foundation (DFG grant 527828607) and by the National Science Centre, Poland (NCN grant 2023/05/Y/ST8/00006).
We consider the initial value problem for the nonlinear partial differential equations describing the motion of an inhomogeneous and anisotropic hyperelastic medium. We assume that the stored energy function of the hyperelastic material depends on the point x and on the nonlinear Green-St. Venant strain tensor e_jk. Moreover, we assume that the stored energy function is C^∞ with respect to x and e_jk. In our description the Piola-Kirchhoff stress tensor p_jk depends on the tensor e_jk, i.e. we consider the so-called physically nonlinear hyperelasticity theory. We prove (local in time) existence and uniqueness of a smooth solution to this initial value problem. Under additional assumptions on the stored energy function of the hyperelastic material, we prove blow-up of the solution in finite time.
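For reference, the Green-St. Venant strain tensor used above is, in terms of the displacement field u,
\[
e_{jk} = \tfrac{1}{2}\left(\partial_j u_k + \partial_k u_j + \partial_j u_m\,\partial_k u_m\right),
\]
with summation over m; the quadratic term makes the problem geometrically nonlinear even before the physically nonlinear stress-strain relation enters.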
In this study, the smoothed particle hydrodynamics (SPH) method is used for modeling and simulating the solid-state deposition process, friction surfacing (FS), involving severe plastic deformation at elevated temperatures. To address the high computational cost of SPH, a GPU-based framework was developed with additional enhancement techniques such as particle switching and subdomain decomposition to further reduce the computational time. Moreover, to ensure the stability of the model, artificial viscosity and artificial stress are incorporated into the framework. The model was validated against experimental data and successfully captured key features of the FS process, including the formation of rod flash and the deposition profile. The simulation provides valuable insights into the deposition mechanism, specifically the material flow from the rod to the deposition and the distribution of rod material within the deposition.
Coupled problems arise in many applications. From a general point of view, each problem containing more than one primary field is called a coupled problem. Usually the class of coupled problems is subdivided into volumetrically coupled problems and problems with surface coupling. The class of volumetrically coupled problems contains, e.g., fluid flow in porous solids described by mixture theory, thermo-mechanically coupled problems, chemo-mechanically coupled problems and electro- or magneto-mechanically coupled problems, while the second class includes problems such as fluid-solid interaction via an interface. Common to all these problems is that the presence of different fields in the numerical treatment requires special attention with respect to the multi-field formulation and the solution strategy. The session on coupled problems deals with all aspects mentioned above, ranging from modelling aspects to numerical solution strategies.
Deep-hole drilling is applied for machining holes with a length-to-diameter ratio larger than ten. Applications are found in the automotive, aerospace, and medical industries, as well as in machine fabrication for the food industry. Different deep-hole drilling methods are available, covering diameters down to less than one millimeter. All methods are characterized by the high hole quality and the high level of productivity achieved. One example of a deep-hole drilling method is single-lip drilling, also known as gun drilling.
Single-lip drilling of deep holes is performed using high feed rates in a single pass. Therefore, chips must be removed by the metalworking fluid (mwf), which is pumped under high pressure through an internal cooling channel to the cutting head. Consequently, the mwf plays an essential role in process reliability. However, chip jamming is a significant problem, in which chips wrap around the tool, leading to marks on the borehole wall and an increased drilling torque and causing sudden tool failure. Modeling the deep-hole drilling process is therefore essential for improving the process. The removal of chips, in particular, is a challenge due to the dynamic chip position and the resulting constantly changing fluid-structure interfaces between the mwf and the chips.
Smoothed Particle Hydrodynamics (SPH) is applied to model the deep-hole drilling process, including the dynamic chip motions. Its Lagrangian and meshfree nature allows SPH to describe arbitrarily moving surfaces and interfaces. It is particularly suited to problems involving irregular, moving free surfaces or dynamic fluid-structure interactions. Depending on the investigated problem, the chips are modeled either as rigid bodies using the Discrete Element Method or as flexible bodies, thus requiring coupled fluid-solid modeling.
The talk will briefly introduce SPH and the modeling of deep-hole drilling before problems of recent research are presented. Possible improvements to the process include, for example, improved chip removal by optimizing the drilling tool used [3] or reducing the amount of mwf required to ensure stable chip removal. Furthermore, it will be outlined how SPH modeling can be expanded to investigate the problematic chip jamming, which can result in drill breakage.
In grinding processes, process parameters are currently set based on experience or empirical trial-and-error tests since little knowledge about basic interactions between process parameters exists, and contrary influences must be balanced. Especially when developing new cutting fluids, a conflict between cooling effects and hydrodynamic load-bearing effects arises. To resolve this conflict, a model should be developed to investigate these interactions and optimise wet grinding processes.
This model can be divided into two parts. A macroscopic description of the grinding wheel is first developed to model the pressure build-up, cavitation and thermal effects driven by viscous shearing. The numerical analysis on this scale is conducted via isogeometric analysis (IGA) based on NURBS. This approach, while mainly used for structural problems, is employed here to efficiently compute hydrodynamic parameters, especially for problems with complex-shaped geometries.
However, this modelling approach can hardly be used to capture the coefficient of friction arising from the engagement of the abrasive grains on a microscopic scale. To account for this influence, a microscopic model has been built based on a probability density function derived from measured deterministic height profiles. This approach, adapted from metal forming simulations, enables a comparably fast computation of the real contact area and an approximate penetration depth of the abrasive grains into the workpiece. By accounting for the influence of surface roughness and contact areas on the hydrodynamic pressure build-up on the microscopic scale, this model opens up the possibility of approximating a global coefficient of friction by linking from the microscopic scale back to the macroscopic scale.
This integrated modelling framework demonstrates how these considerations can be combined to create a lean modelling procedure for optimizing wet grinding processes through numerical analysis.
A body immersed in a fluid (swimmer) can move by deformation or by shifting its centre of mass. We assume that both actions are periodic and that the force acting on the body is zero. For simplicity, we assume that the problem has cylindrical symmetry.
Considering a linear fluid-solid model, the body will exhibit oscillatory motion, but with zero mean velocity. This changes when the nonlinearities of the Navier-Stokes equations are introduced.
We aim to identify configurations that lead to large average velocities. The numerical approximation of this problem is challenging for two reasons: first, the average velocity is always orders of magnitude smaller than the amplitude of the oscillation; second, the transition of the dynamical problem to the periodic limit, where the average can be reliably observed, can take many iterations. The convergence is slow, because it is governed only by the viscous damping, and it is usually not monotonic.
We present the numerical modeling of this problem using an ALE formulation of the fluid-solid interaction problem. We also present ideas and realizations to improve the convergence towards the periodic limit cycles.
Many applications in engineering and the applied sciences require coupling free incompressible flow with porous media flow. Such multi-physics coupling can be achieved via a split-domain approach, in which numerical techniques are tailored to free flow and porous media flow in the respective domains. This, however, introduces the problem of tracking the interface between subdomains on which explicit boundary conditions have to be formulated. A different approach considers the hydro-mechanical Darcy-Brinkman-Stokes model that formulates mass and momentum balance of an incompressible fluid on a domain of varying permeability. It implicitly contains free flow governed by the classical Navier-Stokes equations and porous media flow governed by the Darcy equation as permeability limits.
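In one common incompressible form (a sketch, with density ρ, viscosity μ, pressure p, velocity u, body force f and spatially varying permeability K), the Darcy-Brinkman-Stokes balance equations read
\[
\rho\left(\partial_t \mathbf u + (\mathbf u\cdot\nabla)\mathbf u\right) = -\nabla p + \mu\,\Delta\mathbf u - \mu K^{-1}\mathbf u + \rho\,\mathbf f, \qquad \nabla\cdot\mathbf u = 0;
\]
for large K the drag term vanishes and the Navier-Stokes equations are recovered, while for small K the drag dominates and the momentum balance degenerates to Darcy's law, \(\mathbf u \approx -(K/\mu)\,(\nabla p - \rho\,\mathbf f)\).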
While the conceptual simplicity of the Darcy-Brinkman-Stokes model makes it attractive from a mathematical modeling perspective, the numerical solution procedure is typically challenging. Stability requirements depend on the flow regime and the design of accurate, uniformly stable Finite Element discretization hence requires special care. Furthermore, strong nonlinearities in material parameters, such as large localized variations in permeability, adversely affect the conditioning of resulting linear systems. In this presentation, we will introduce a mixed Discontinuous Galerkin (DG) finite element discretization for the Darcy-Brinkman-Stokes model that addresses these challenges and covers both free flow and porous media flow regimes.
The method integrates recently proposed ideas for DG methods that solve incompressible Navier-Stokes equations and Darcy flow to obtain a robust solver in both permeability limits via systematic mass-flux stabilization. The method is also stable for sudden spatio-temporal changes in the permeability pattern. We conduct numerical experiments using the discontinuous generalization of triangular and quadrilateral Taylor-Hood finite elements to demonstrate near-optimal L2 convergence for both Stokes and Darcy regimes, as well as for coupled Stokes-Darcy regimes. We show robustness of the method on a variety of 2D numerical benchmark problems, including problems with discontinuous, anisotropic, and time-dependent permeability fields, and discuss the challenges moving forward.
In today’s era, most nations worldwide are undertaking efforts to enhance the exploration of locally available energy sources in order to attain independence from external supplies. The production of alternative fuels from biomass and waste through thermal treatment, or their direct utilization in the combustion process, remains a predominant method for expeditious and cost-effective heat generation. However, the physical and chemical characteristics of these types of fuels can lead to complications in the operation of heating units, resulting in elevated levels of air pollution. Consequently, the analysis of the thermal treatment of solid fuels remains a significant practical concern. The present study aims to examine biomass thermal treatment in a small-scale reactor utilizing the in-house eXtended-DEM (XDEM) method based on a mixed Lagrangian-Eulerian approach. This is facilitated by a novel, independently developed coupling computational interface that enables seamless integration between CFD and DEM, enhancing computational efficiency and accuracy. The granular system is treated as a porous zone with reactive flow through it. The OpenFOAM software with the porous zone module is used to predict the parameters of the continuous phase flow by the Eulerian approach. Significant advances are also made in the underlying physical models. Within the DEM framework, each solid particle undergoing thermochemical processes is tracked by the Lagrangian approach, which allows for the prediction of its movement as well as its shape and structural changes during interaction with the hot gas flowing over it. Collectively, these improvements contribute to a more robust and reliable simulation tool, which can provide detailed insights into complex multi-phase flows and granular material behavior. A range of numerical results was obtained for a variety of reactor types to assess the influence of geometry and flow conditions on the distribution of parameters within the reactor. Furthermore, the study incorporated different shapes and sizes of particles. The findings demonstrate the efficacy of XDEM in predicting the phenomena occurring during the thermal treatment of solid fuels, particularly by providing comprehensive information regarding the movement of all particles undergoing chemical reactions, which is otherwise challenging to obtain through measurement-based methods.
Coupled problems often include simulations on moving domains, where objects fully rotate or undergo large displacements, hence an update of the mesh is required. This mesh update is a crucial and notoriously difficult task. Different methods have been proposed, such as [1, 2], each coming with its own limitations. The proposed approach is based on a block-structured meshing scheme. Generally, meshing can be categorized into structured and unstructured approaches. The handling of complicated geometries is easier with unstructured meshes; with the structured approach, fewer elements may be used, and boundary layers can be handled straightforwardly, supporting anisotropic node distributions. Within this work, we utilize the concept of block-structured meshing to enable large displacements and full rotations. A block-structure is a valid, linear but coarse mesh representing the topology. The geometry description and grading are assigned to the respective block edges, and submeshes are then generated inside the blocks using transfinite maps, based on the desired (spatial) resolution and mesh order. These submeshes are combined into the final simulation mesh. For moderately large displacements, one may move these blocks based on the simulation results and generate a new mesh in every time step without any connectivity changes, see [3, 4]. For full rotations or very large displacements, a connectivity change of the block-structure is necessary and a new simulation mesh is generated. Therefore, a projection of the results between the meshes, which feature different connectivities, is required. To restrict the search for nodes to a certain area, one may connect the old and the new block-structure, which results in blocks in space and time. The data projection may be optimized based on the proposed space-time block-structure. Furthermore, a connectivity change is needed only rarely and, in most simulations, only for a part of the mesh, resulting in a fast procedure. Numerical results confirm the success of the proposed approach.
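To illustrate the transfinite maps mentioned above, the following minimal Python sketch (with hypothetical edge-curve functions supplied by the caller) evaluates a standard Coons-patch transfinite interpolation that maps the unit square into a block bounded by four edge curves; evaluating it on a tensor grid of (xi, eta) values yields the structured submesh of one block.

import numpy as np

def transfinite_map(xi, eta, bottom, top, left, right):
    # Coons-patch transfinite interpolation: maps (xi, eta) in [0, 1]^2
    # into the block bounded by the four edge curves. Each edge function
    # takes a parameter in [0, 1] and returns a point as a numpy array.
    # Matching corners (e.g. bottom(0.0) == left(0.0)) are assumed.
    p = (1.0 - eta) * bottom(xi) + eta * top(xi) \
        + (1.0 - xi) * left(eta) + xi * right(eta)
    # subtract the bilinear corner interpolation counted twice above
    p -= ((1.0 - xi) * (1.0 - eta) * bottom(0.0) + xi * (1.0 - eta) * bottom(1.0)
          + (1.0 - xi) * eta * top(0.0) + xi * eta * top(1.0))
    return p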
REFERENCES
[1] M. Behr, T. Tezduyar, The shear-slip mesh update method. Comput. Methods Appl. Mech. Eng., Vol. 174, pp. 261–274, 1999.
[2] F. Key, L. Pauli and S. Elgeti, The virtual ring shear-slip mesh update method. Comput. Fluids, Vol. 172, pp. 352 – 361, 2018.
[3] T. Schwentner and T.P. Fries, Fluid-structure interaction with fully coupled mesh generation, Proc. Appl. Math. Mech., Vol. 23, e202300067, 2023.
[4] T. Schwentner and T.P. Fries, Fully coupled, higher-order, block-structured mesh generation in fluid-structure interaction. Int. J. Numer. Meth. Fluids, 2024.
This section is dedicated to discussing recent advances in multiscale and homogenization techniques for static and dynamic problems. Topics of particular interest are nonlinear homogenization techniques, multiscale modelling of failure processes and localization phenomena, FE2 methods, atomistic-to-continuum coupling, contact homogenization, model reduction techniques, as well as homogenization schemes incorporating experimentally determined microstructure data.
Model order reduction has become an established tool to reduce costs in computational engineering. It is based on the idea that potential solutions of parameterized problems possess a certain kind of similarity, i.e. that the actual solution vectors reside on a lower-dimensional solution manifold. Vice versa, a given solution can be represented in terms of a low-dimensional solution vector.
This approach has been used widely for multiscale simulations, where the microscale FE solutions on the RVE level exhibit a high degree of repetitiveness, so that the dimension of the microscale kinematics can be reduced drastically using a reduced basis. On the kinetic side, the respective equations of motion can be derived by means of a Galerkin projection of the original internal force vector. Computing the latter by numerical integration in the elements is, however, still an expensive task.
Different hyperintegration methods have been proposed to reduce the number of quadrature points and thus the computational cost. Hyperintegration methods identify quadrature points and their weights based on a least-squares error minimization on training data, incorporating certain criteria such as the internal forces in the empirical cubature method (ECM) or the elastic free energy in the reduced energy optimal cubature (REOC), while preserving the integration of the volume.
The present contribution formulates additional generalized criteria for hyperintegration. Their individual and combined effects on the computational cost are evaluated for a number of reduced-order FE² problems. The computational online costs can be further decreased when the criteria are combined, owing to the better generalization of the resulting hyperintegration scheme. The scheme is implemented in an element-wise manner as the Empirical Hyper Element Integration Method (EHEIM), enabling a modular integration into existing FE² codes.
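As a rough illustration of the least-squares weight fitting underlying such cubature schemes (a simplified Python sketch, not the EHEIM algorithm itself; actual ECM/REOC implementations combine a greedy point selection with the non-negative fit), the weights of candidate quadrature points can be fitted to training integrals as follows:

import numpy as np
from scipy.optimize import nnls

def fit_empirical_weights(G, b, tol=1e-8):
    # G: (number of training quantities x number of candidate points); each
    #    column holds the candidate point's contributions to the training
    #    integrals (e.g. reduced internal force snapshots plus a volume row).
    # b: full-quadrature values of the same training integrals.
    # The non-negative least-squares fit tends to return sparse weights;
    # points with (near-)zero weight are discarded.
    w, residual = nnls(G, b)
    keep = np.flatnonzero(w > tol * w.max())
    return keep, w[keep], residual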
We present a statistically compatible hyper-reduction method and its application to time-dependent diffusion processes in a homogenized two-phase material consisting of inclusions in a matrix. We use a variationally consistent homogenization. The hyper-reduction method introduces generalized integration points which ensure consistency with the first and second statistical moments of the fully integrated model. Because diffusion in the matrix is assumed to be much faster than diffusion inside the inclusions, the micro-scale cannot be assumed to be at equilibrium, which introduces a time dependence.
Fully coupled high-fidelity simulations using the multiscale finite element (FE2) method are prohibitively expensive for many relevant multiscale problems. The computational cost of FE2 simulations is dominated by the assembly and solution of large linear equation systems which are required by a nonlinear solver on the microscale [1,2]. Additionally, obtaining homogenised macroscopic stresses and stiffnesses from microscale representative volume element solutions also requires the assembly and solution of large linear equation systems [3].
Projection-based model order reduction (MOR) methods address the cost of solving linear equation systems by searching for solutions in a low-dimensional approximation space [2]. Existing MOR methods utilise linear, piecewise linear, or specific nonlinear Ansatz approximation spaces [4,5,6]. Additionally, hyperreduction techniques efficiently estimate quantities appearing in the reduced linear equation systems by integrating over a reduced integration domain [4,5,6].
In a recent publication [2], we proposed a nonlinear MOR scheme which uses manifold learning techniques to obtain a flexible, continuously nonlinear approximation space. This facilitates the construction of smaller reduced order models than those obtained by alternative methods, while obtaining similar levels of accuracy. In this contribution, we discuss a tailored hyperreduction methodology and the resulting computationally efficient nonlinear MOR algorithm. We investigate the performance of the proposed scheme for hyperelastic microstructures within a homogenisation framework and conduct performance comparisons.
[1] F. Fritzen, M. Hodapp, The finite element square reduced (FE2R) method with GPU acceleration: Towards three-dimensional two-scale simulations, International Journal for Numerical Methods in Engineering 107 (10) (2016) 853–881. https://doi.org/10.1002/nme.5188
[2] L. Scheunemann, E. Faust, A manifold learning approach to nonlinear model order reduction of quasi-static problems in solid mechanics (2024). arXiv:2408.12415
[3] C. Miehe, J. Schotte, J. Schröder, Computational micro–macro transitions and overall moduli in the analysis of polycrystals at large strains, Computational Materials Science 16 (1-4) (1999) 372–382. https://doi.org/10.1016/S0927-0256(99)00080-4
[4] A. Radermacher, S. Reese, POD-based model reduction with empirical interpolation applied to nonlinear elasticity, International Journal for Numerical Methods in Engineering 107 (6) (2016) 477–495. https://doi.org/10.1002/nme.5177
[5] D. Amsallem, M. J. Zahr, C. Farhat, Nonlinear model order reduction based on local reduced-order bases, International Journal for Numerical Methods in Engineering 92 (10) (2012) 891–916. https://doi.org/10.1002/nme.4371
[6] J. Barnett, C. Farhat, Quadratic approximation manifold for mitigating the Kolmogorov barrier in nonlinear projection-based model order reduction, Journal of Computational Physics 464 (2022) 111348. https://doi.org/10.1016/j.jcp.2022.111348
Ferroelectric as well as ferromagnetic materials are widely used in smart structures and devices as actuators, sensors, etc. Regarding their nonlinear behavior, a variety of models has been established in the past decades. When investigating hysteresis loops or electromechanical/magnetoelectric coupling effects, usually only simple boundary value problems (BVP) are considered. In [1] a new scale-bridging approach, the so-called Condensed Method (CM), is introduced to investigate the polycrystalline ferroelectric behavior at a macroscopic material point (MMP) without any kind of discretization scheme. Besides classical ferroelectrics, other fields of application of the CM have been exploited, e.g. [2, 3, 5]. Since only the behavior at a MMP is represented by the CM, the method itself is unable to solve complex BVP, which is technically disadvantageous if a structure with e.g. notches or cracks is to be investigated. In this paper, a concept is presented which integrates the CM into a Finite Element (FE) environment [4]. Considering the constitutive equations of a homogenized MMP in the weak formulation, the FE framework represents the polycrystalline behavior of the whole discretized structure, which finally enables the CM to handle arbitrary BVP. A more sophisticated approach, providing a basis for model order reduction, completely decouples the constitutive structure from the FE discretization by introducing an independent material grid. Numerical examples are finally presented in order to verify the approach.
References
[1] Lange, S. and Ricoeur, A., A condensed microelectromechanical approach for modeling tetragonal ferroelectrics, International Journal of Solids and Structures 54, 2015, pp. 100-110.
[2] Lange, S. and Ricoeur, A., High cycle fatigue damage and life time prediction for tetragonal ferroelectrics under electromechanical loading, International Journal of Solids and Structures 80, 2016, pp. 181-192.
[3] Ricoeur, A. and Lange, S., Constitutive modeling of polycrystalline and multiphase ferroic materials based on a condensed approach, Archive of Applied Mechanics 89, 2019, pp. 973-994.
[4] Wakili, R., Lange, S. and Ricoeur, A., FEM-CM as a hybrid approach for multiscale modeling and simulation of ferroelectric boundary value problems, Computational Mechanics 72, 2023, pp. 1295-1313.
[5] Warkentin, A. and Ricoeur, A., A semi-analytical scale bridging approach towards polycrystalline ferroelectrics with mutual nonlinear caloric-electromechanical couplings, International Journal of Solids and Structures 200-201, 2020, pp. 286-296.
A novel hyper-reduction technique is proposed and applied to nonlinear magnetostatic and mechanical computational homogenization. The method combines the idea of microstructural clustering with the empirical identification of a reduced set of integration points. For a porous microstructure with a phase contrast of 1000, the macroscopic magnetic response (2D) is already hardly distinguishable from the finite element results with as few as 12 integration points. Similar values have been found for mechanics.
The topic of this session is the analysis and modeling of turbulent non-reactive and reactive flows based on DNS, LES, RANS, and experiments. A special focus is on fundamentals in turbulence, turbulent reactive flows, turbulent multi-phase flows, turbulence of atmosphere, atmosphere/ocean interaction, modeling and simulation of complex turbulent flows, the interface of numerical algorithms, chemical and physical modeling, as well as high-performance computing with its application to turbulence.
Turbulent flow is characterized by a wide spectrum of temporal and spatial scales, giving rise to complex dynamics. Turbulent flow often cannot be fully resolved in a direct numerical simulation (DNS), so coarsened approximate models describing only the dynamics of the larger scales are considered. An important reduced-order model is the so-called large-eddy simulation (LES), motivated by the application of a spatial convolution filter to the Navier-Stokes equations coupled to the combustion dynamics equations.
Filtering the nonlinear terms of even the most basic combustion models gives rise to a complex closure problem involving many terms. This prompts the introduction of sub-filter models to represent the interactions among the various combustion and flow scales. In LES, several sub-filter models have been developed. A major issue concerns the uncertainty with which coarsened combustion can be predicted by a specific sub-filter model. This modelling uncertainty can be addressed by turning to self-contained modelling principles such as the dynamic procedure, approximate deconvolution modelling (ADM) or regularisation such as Leray and NS-α modelling. The translation of an LES formulation into a computational model introduces further approximations of a numerical nature. These two sources of error, i.e., the sub-filter modelling error and the discretisation error, together induce a total simulation error that needs to be understood and quantified to instill confidence in a simulation.
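In its simplest momentum-equation guise, the closure problem arises because filtering (denoted by an overbar) does not commute with the nonlinearity, leaving the unclosed sub-filter stress
\[
\tau_{ij} = \overline{u_i u_j} - \bar u_i\,\bar u_j,
\]
which the sub-filter models listed above approximate in terms of the resolved field; analogous unclosed terms appear in the filtered combustion source terms.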
In the presentation, the LES closure problem and the decomposition of the total simulation error into modelling and discretisation components will be described. We first focus on the generic turbulence problem of homogeneous isotropic flow and compute the error landscape of eddy-viscosity models. From this, we infer optimal simulation conditions minimising the total simulation error. Subsequently, this approach is applied to large-eddy simulation of the turbulent non-premixed Sydney bluff-body flame. The error-landscape approach yields optimal parameter settings for accurate simulations at considerably coarsened resolutions compared to DNS requirements.
Large-Eddy Simulation (LES) is state-of-the-art in engineering applications of Computational Fluid Dynamics (CFD). Fine grids offer high accuracy at high cost, whereas coarse grids require additional modeling assumptions that limit predictive capabilities. This dilemma is addressed by Extended LES (XLES). The original XLES formulation [1] utilized the One-Dimensional Turbulence (ODT) model [2] as a high-fidelity stochastic subgrid-scale (SGS) model that autonomously evolves instantaneous turbulent microscales in a dimensionally reduced setting.
For incompressible flow, XLES utilizes a coarse VLES grid of control volumes, on which the Poisson equation for the pressure is solved to communicate large-scale information to the three auxiliary ODT grids carrying the SGS information. These grids have resolution N_VLES² · N_ODT, where N_VLES denotes the number of grid cells of the coarse VLES grid and N_ODT the number of cells of a fully resolved 1-D ODT grid oriented perpendicular to a VLES plane of resolution N_VLES², see [3]. The three ODT grids compose the 3-D Cartesian domain. The XLES model is more costly than any VLES, but less expensive than highly resolved LES and, naturally, than DNS, for which the spatial resolution requirement can be roughly estimated as N_DNS³ ≅ N_ODT³. The XLES approach effectively removes the bottleneck of the Poisson solver, which operates on the coarse VLES grid with N_VLES³ cells.
Building upon a recently developed C++ solver [4], the contribution to the conference will demonstrate and discuss relevant capabilities of XLES considering 3-D turbulent channel flow. The discussion will focus on the convergence properties of the XLES, evaluating how the LES resolution N_VLES affects the flow statistics disentangled from the stochastic SGS model.
References
[1] C. Glawe, H. Schmidt, A. R. Kerstein, and R. Klein. "XLES Part I: Introduction to Extended Large Eddy Simulation." (2015) arXiv preprint arXiv:1506.04930.
[2] A. R. Kerstein. "One-Dimensional Turbulence: Model Formulation and Application to Homogeneous Turbulence, Shear Flows, and Buoyant Stratified Flows." Journal of Fluid Mechanics 392 (1999): 277-334. https://doi.org/10.1017/S0022112099005376
[3] J. A. Medina Méndez, C. Glawe, T. Starick, M. S. Schöps, and H. Schmidt. "IMEX-ODTLES: A Multi-Scale and Stochastic Approach for Highly Turbulent Flows." Proceedings in Applied Mathematics and Mechanics 19 (2019): e201900433. https://doi.org/10.1002/pamm.201900433
[4] P. Marinković, J. A. Medina, M. S. Schöps, M. Klein, and H. Schmidt. "Experiences From the Bottom-Up Development of an Object-Oriented CFD Solver with Prospective Hybrid Turbulence Model Applications." Proceedings in Applied Mathematics and Mechanics 25 (2025): e202400190. https://doi.org/10.1002/pamm.202400190
This study presents a numerical investigation of passive scalar mixing in Homogeneous Isotropic Turbulence (HIT). Different volumetric forcing schemes have been used in the literature, but their side effects are rarely discussed, either because they are assumed irrelevant or because it is too costly to conduct such an analysis with a high-fidelity model. Nevertheless, various HIT forcing schemes have been specifically designed to counteract long transients by accelerating statistical convergence through a relaxation term (e.g. [1]). The relaxation term is artificial and has a spectral imprint, so the possibility that it adversely affects the inferred mixing properties, particularly intermittency, cannot be excluded. Case-dependent, non-universal statistics may result, at least at finite Reynolds number.
Coaxial pipe flow is encountered in many engineering applications, such as heat exchangers, chemical reactors or cooling systems in nuclear engineering. These applications require accurate modelling of the momentum transport, as it is a prerequisite for the transport of additional scalar quantities. The presence of an inner curved wall in coaxial pipe flows poses a challenge for modelling, especially when the viscous length scale is of the same order of magnitude as the span-wise curvature radius of the inner wall. Previously, various experimental and numerical studies have been conducted for different radius ratios to investigate the effects of the inner-wall span-wise curvature radius on the flow. In particular, for small radius ratios (wide annular gaps), recent work has shown that traditional wall models fail to predict bulk quantities due to an insufficient representation of the inner wall. In this contribution, we evaluate the suitability of high Reynolds number (HRN) Reynolds-Averaged Navier-Stokes (RANS) models to predict mean flow profiles under the usual assumptions from boundary layer theory. Specifically, we check whether data-informed HRN-RANS model coefficients can reliably predict mean flow statistics. Model coefficients are obtained by regressing simulation results to match available reference data. As an additional insight, we test Clauser's assumptions on the outer-layer dynamics of wall-bounded flows, related to the constancy of the turbulent viscosity at very large Reynolds numbers.
We develop and analyze a random field model for the reconstruction of inhomogeneous turbulence from characteristic flow quantities provided by k-epsilon simulations. The model is based on stochastic integrals that combine moving average and Fourier-type representations in time and space, respectively, where both the time integration kernel and the spatial energy spectrum depend on the macroscopically varying characteristic quantities. The structure of the model is derived from standard spectral representations of homogeneous fields by means of a two-scale approach in combination with specific stochastic integral transformations. Our approach allows for a rigorous analytical verification of the desired statistical properties and is accessible to numerical simulation.
The understanding and control of interfacial phenomena is one of the main challenges in multiphase fluid mechanics, at the crossroads of scientific disciplines like Mathematics, Physics, Chemistry and Engineering. Examples are particle-laden flows, bubble columns, flows with cavitation, jet atomization, casting, oil recovery, film, boiling and foaming flows, as well as spreading and dewetting of (complex) liquids and biofluids. All these systems are central for technological advances in the chemical, pharmaceutical, energy, environmental and food industries. In addition, their behavior depends strongly on the typical time and length scales under consideration that are for example crucial for the development of micro and nano fluidics. In the latter case the validity of continuum fluid mechanics might be even questionable. The goal of this section is to provide an overview of the latest developments in this area, covering models at different scales, numerical, statistical and cognitive methods as well as experimental techniques but also surveying new physical insight and recent technical advancements.
A new arbitrary Lagrangian-Eulerian (ALE) formulation for area-incompressible Navier-Stokes flow on evolving surfaces is presented. The new formulation extends the surface ALE formulation of [1] to more general surface motions. It is based on a new curvilinear surface parameterization that describes the motion of the ALE frame. Its in-plane part becomes fully arbitrary, while its out-of-plane part follows the material motion of the surface. This allows for the description of flows on deforming surfaces using only surface meshes. The unknown fields are the fluid pressure, fluid velocity and surface motion, where the latter two share the same normal velocity. The new theory is implemented in the nonlinear finite element framework of [2] using the pressure stabilization scheme of [3] and the mesh stabilization scheme of [4], which are all adapted here to the new ALE frame. The implementation is verified through several manufactured steady and transient solutions, obtaining optimal convergence rates in all cases. The new formulation allows for a detailed study of fluidic membranes such as soap films, capillary menisci and lipid bilayers.
[1] A. Sahu, Y.A.D. Omar, R.A. Sauer and K.K. Mandadapu (2020), Arbitrary Lagrangian-Eulerian finite element method for curved and deforming surfaces: I. General theory and application to fluid interfaces, J. Comput. Phys., 407:109253
[2] R.A. Sauer, T.X. Duong and C.J. Corbett (2014), A computational formulation for constrained solid and liquid membranes considering isogeometric finite elements, Comput. Methods Appl. Mech. Engrg., 271:48-68
[3] C.R. Dohrmann and P.B. Bochev (2004), A stabilized finite element method for the Stokes problem based on polynomial pressure projections, Int. J. Numer. Methods Fluids, 46:183-201
[4] R.A. Sauer (2014), Stabilized finite element formulations for liquid membranes and their application to droplet contact, Int. J. Numer. Meth. Fluids, 75(7):519-545
In the field of polymer injection molding, high-resolution numerical simulations play a vital role in mitigating manufacturing defects. However, the complexity of these simulations presents significant challenges, and efficient numerical analysis remains an area of ongoing research.
In this study, we present a macro-scale two-phase flow formulation for polymer injection molding. We use an in-house finite-element solver to simulate the polymer and air flows as an interfacial flow, utilizing the level-set method in a space-time framework. A space-time discretization is of special interest here when combined with adaptive temporal refinement. This approach enables enhanced resolution around the moving polymer-air interface without substantially increasing computational time.
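In the level-set method used here, the polymer-air interface is represented as the zero iso-contour of a scalar field φ transported with the flow velocity u,
\[
\frac{\partial \phi}{\partial t} + \mathbf u\cdot\nabla\phi = 0, \qquad \Gamma(t) = \{\mathbf x : \phi(\mathbf x, t) = 0\},
\]
which is why refining the space-time mesh locally around the zero level set directly sharpens the interface resolution.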
We will present this methodology along with its application to high-fidelity simulations of the injection molding process. Furthermore, we will discuss the potential benefits of adaptive time refinement in improving the accuracy and efficiency of interfacial flow simulations.
Fibres are integrated into high-performance concrete and polymer matrices to improve the durability and structural integrity of these materials. A critical factor in achieving optimal resistance is the precise alignment of the fibres within the rigid matrix. Particle Image Velocimetry (PIV) is used to determine velocity vector fields within a specially designed test rig.
The rig was created on the basis of a spherical probe rheometer, which is used in materials science to determine the flow properties of concretes [1]. In addition, fibre orientation measurements were taken throughout the container and processed using computer vision and artificial intelligence algorithms to produce a comprehensive orientation analysis. In this study, the decisive influence of the flow properties on the movement of the fibres and the resulting orientation is investigated. The Jeffery equation was crucial in this study, as it allows the rate of change of the fibre orientation to be calculated depending on the prevailing orientation and flow conditions. This work represents a methodological approach that, through the synergistic use of PIV and orientation data, enables an accurate analysis of fibre motion with the Jeffery equation.
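For reference, Jeffery's equation for the unit orientation vector p of a rigid ellipsoidal fibre in a Newtonian flow reads
\[
\dot{\mathbf p} = \mathbf W\cdot\mathbf p + \lambda\left(\mathbf D\cdot\mathbf p - (\mathbf p\cdot\mathbf D\cdot\mathbf p)\,\mathbf p\right), \qquad \lambda = \frac{r_e^2 - 1}{r_e^2 + 1},
\]
where D and W are the symmetric and antisymmetric parts of the velocity gradient and r_e is the fibre aspect ratio; the PIV velocity fields provide exactly the D and W needed to evaluate it.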
[1] Gerland, Florian, Tim Vaupel, Thomas Schomberg, and Olaf Wünsch. "Analysing the Influence of Fibers on Fresh Concrete Rheometry by the Use of Numerical Simulation." Construction Materials 4, No. 1 (25 January 2024): 128-53. https://doi.org/10.3390/constrmater4010008
We develop a computational approach for two-phase interfacial flows by applying the diffuse interface (DI) method based on the conservative Allen-Cahn equation. Weak compressibility (WC) is assumed in order to model low-speed flows; mass and momentum are conserved. We report some modelling and numerical issues caused by the WC assumption, along with the proposed remedies. In particular, the interface region needs to be uniquely identified from the density field, which serves as the phase indicator in the present one-fluid formulation. Standard numerical techniques, i.e. the finite volume method and explicit time integration, are applied for the numerical solution. Therefore, the approach allows for efficient computations on multicore CPUs and can readily be implemented in other, existing flow solvers. A successful validation is reported for a selection of 2D and 3D benchmark cases. For a prospective use of the proposed method for flows with phase change, such as nucleate boiling, a physical equation of state and the source terms need to be properly dealt with. The DI will capture the vapour-liquid interface treated as a macroscopic transition layer.
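One widely used form of the conservative Allen-Cahn equation (a sketch, with mobility γ and interface thickness parameter ε as model constants, and φ as the phase indicator) is
\[
\frac{\partial\phi}{\partial t} + \nabla\cdot(\mathbf u\,\phi) = \gamma\,\nabla\cdot\!\left(\epsilon\,\nabla\phi - \phi(1-\phi)\,\frac{\nabla\phi}{|\nabla\phi|}\right),
\]
in which the diffusive and sharpening fluxes balance to maintain a smooth interface profile of prescribed thickness while the divergence form conserves the phase indicator.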
The prototypical phase-field models for incompressible N-phase fluid mixtures are the Navier-Stokes Cahn-Hilliard (NSCH) models. Over the last few decades, many NSCH models with non-matching densities have been proposed. Even though these phase-field models aim to represent the same physical phenomena, they seem to differ at first sight. The first objective of this talk is to present a modeling framework that unites these NSCH models.
From the perspective of mixture theory, NSCH models may be understood as reduced models. Namely, the evolution equations for the diffusive fluxes are replaced by constitutive models. The second objective of this talk is to present a new incompressible phase-field model that guarantees full compatibility with mixture theory by replacing the energy-dissipation law with the second law of thermodynamics for mixtures. We compare this model, analytically and computationally, to existing NSCH models with non-matching densities. We conclude the talk by discussing structure-preserving finite element discretizations, and showing relevant benchmark computations such as a rising air bubble in water and the contraction of a liquid filament.
Waves are a ubiquitous natural phenomenon, and acoustic waves are, besides surface water waves, their most obvious representatives, familiar to anybody and quantitatively known to any student of mathematics, physics or a technical subject. A long mathematical tradition corresponds to this, continuing today in the accurate numerical computation of linear and nonlinear wave phenomena. Our session is devoted to the simulation and understanding of waves and wave interactions. The range of applications is thus very broad, while the focus is meant to be on the unifying physical phenomenon. In past years we had numerous contributions from solid mechanics, porous media flow, turbulence and aeroacoustics, from crack detection to explosions.
Acoustic waves at very low frequencies (straddling infrasound and sound) are produced by various sources, including wind turbines. These waves, with frequencies even below 20 Hz, can propagate over long distances and easily penetrate buildings. Wind turbines generate infrasound primarily due to the movement of the blades and mechanical components. The acoustic insulation of building facades is crucial to reduce the impact of these waves. Materials such as laminated glass, multilayer walls, and sound-absorbing panels can improve acoustic insulation by attenuating vibrations and reducing the noise perceived inside buildings. However, the effectiveness of the insulation depends on the correct design and installation of the materials, as well as the design data available. This work aims to review the calculation and evaluation models of this phenomenon.
Issues of church design are inseparably linked with the sound function. In the contemporary Poznań churches studied by the author, built between 1970 and 2000, there are no deliberate solutions to influence the interior acoustics; suitable acoustic conditions were obtained accidentally. The last two decades have seen a new trend in church interior design: targeted measures are being introduced to reduce the reverberation time. This paper presents a detailed analysis of two churches in which acoustic solutions in the form of stretched ceilings were used. These are the Church of St Padre Pio, built in 2013, and the Church of St John Paul II, built in 2018. In the cases studied, the stretched ceiling consists of 30 cm of mineral wool attached to the existing ceiling, a 5 cm air void and a tensioned membrane without perforations. The acoustic parameters assessed include Reverberation Time (RT), Early Decay Time (EDT), Clarity (C80 and C50), Definition (D50) and the Speech Transmission Index (STI). The research was related to studies of churches in Poznań that lack acoustic solutions.
The eigenfrequencies of a vibrating membrane generally depend on its shape. The associated inverse problem was made famous by the work of Kac and has been widely discussed in the literature. We consider this problem in the context of cracks. Is it possible to identify a crack in a membrane if its eigenfrequencies are known?
To answer this question, we first present an analytical approach based on the short-time asymptotic expansion of the heat trace, a well-known spectral invariant. Then, we introduce a data-based method that can be applied in practice. In particular, we train a neural network with simulated data to predict the shape of a crack from the corresponding eigenfrequencies of the domain. The data is computed using isogeometric analysis, a numerical method known for its excellent spectral approximation properties. The underlying eigenfunctions have a singularity at the crack tip, therefore the corresponding eigenvalues cannot be approximated well by standard mesh refinement. We remedy this with a local refinement scheme based on a singular isogeometric mapping and illustrate optimal convergence orders for the eigenfunctions and eigenvalues.
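As an illustration of the spectral invariants involved, for the Dirichlet Laplacian on a smooth bounded planar domain Ω the heat trace admits the short-time expansion
\[
\sum_{n} e^{-\lambda_n t} \sim \frac{|\Omega|}{4\pi t} - \frac{|\partial\Omega|}{8\sqrt{\pi t}} + \frac{\chi(\Omega)}{6} + o(1), \qquad t \to 0^+,
\]
so area, boundary length and Euler characteristic are audible; a slit modifies these terms through its length and its tips, which is the kind of information the analytical approach can exploit.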
Sound propagation in brass instruments is dominated by spatial and non-linear effects arising from high sound pressure levels. To simplify analysis, one-dimensional linear frequency-domain models, such as the Transfer-Matrix Method, are commonly used to calculate the input impedance of brass instruments. These models neglect non-linear effects and typically assume planar or spherical wave propagation. Finite Element Methods in the frequency domain can improve accuracy for linear spatial wave propagation, particularly for sound radiation from the instrument’s bell. While effective for low-volume, linear regimes, these methods are limited in capturing non-linear dynamics.
To address wave propagation across the full volume range, time-domain models offer a more suitable approach as they account for non-linear effects like wave steepening. In engineering, the one-dimensional Method of Characteristics (MOC) is a proven tool for simulating pulsations in piping systems, known for its low diffusion and dispersion errors compared to other numerical methods.
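As a textbook illustration of the characteristic structure exploited by the MOC, for one-dimensional homentropic flow with velocity u and sound speed c the Riemann invariants
\[
J_\pm = u \pm \frac{2c}{\gamma - 1}
\]
are constant along the characteristics dx/dt = u ± c; integrating along these curves keeps numerical diffusion and dispersion low and naturally captures nonlinear wave steepening.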
This study extends the MOC to spatial problems through a dimensional splitting method, enabling the modelling of non-linear behaviour within brass instruments and non-planar wave propagation in their bells. Non-reflective boundary conditions are implemented to accurately simulate sound radiation into the far field. Results are evaluated and compared with those from linear models, demonstrating the advantages of the extended MOC for capturing complex acoustic phenomena in brass instruments.
This session is devoted to the mathematical analysis of natural phenomena and engineering problems. In this area PDEs play a basic role. Therefore lectures discussing analytical aspects of PDE problems as well as problems in the Calculus of Variations are welcome.
We consider a general compressible viscous, heat- and magnetically conducting fluid described by the compressible Navier-Stokes-Fourier system coupled with the induction equation. In particular, we do not assume conservative boundary conditions for the temperature and allow heating or cooling on the surface of the domain. We are interested in the mathematical analysis when the Mach, Froude, and Alfvén numbers are small, converging to zero. We give a rigorous mathematical justification that in the limit, in the case of low stratification, one obtains a modified Oberbeck-Boussinesq-MHD system with a nonlocal term or a nonlocal boundary condition for the temperature deviation. Choosing a proper form of the background magnetic field and gravitational potential, and a domain between parallel plates, one also finds that the limit flow is horizontal. The proof is based on the analysis of weak solutions to the primitive system and the relative entropy method. This is recent joint work with Florian Oschmann and Piotr Gwiazda.
In this presentation, we discuss the existence proof of weak solutions in three spatial dimensions to an anisotropic Navier--Stokes--Nernst--Planck--Poisson system that satisfy an energy inequality. This system models the electrokinetic flow generated by charged particles dissolved in a liquid crystal with a constant director field. The existence proof is based on an approximation scheme and the weak sequential compactness of the approximating sequence, which is derived from the energy law. Additionally, weak-strong uniqueness is established using a relative energy inequality.
We study the compressible Euler equations on the real line subject to frictional damping. We assume that the velocity vanishes at spatial infinity, and for the density we prescribe two positive limit values. If these values coincide, solutions converge to a constant state in the long-time limit. However, in the case of different limit values, there is no convergence towards a steady state, and the long-time behavior is more involved. For its study, we first transform the system into parabolic scaling variables and derive a relative-entropy inequality. In the end, we show convergence of the density towards a unique self-similar solution of the porous medium equation, while the limit velocity is determined by Darcy's law.
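In standard notation (a sketch, with pressure law p(ρ)), the damped system reads
\[
\partial_t \rho + \partial_x(\rho u) = 0, \qquad \partial_t(\rho u) + \partial_x\!\left(\rho u^2 + p(\rho)\right) = -\rho u,
\]
and in the large-time, parabolically scaled regime the momentum balance formally degenerates to Darcy's law, \(\partial_x p(\rho) = -\rho u\), which combined with mass conservation yields the porous medium equation \(\partial_t \rho = \partial_{xx} p(\rho)\).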
We study a viscous Burgers equation, where the viscosity is governed by a positively 1-homogeneous potential. This problem is motivated by the so-called Hibler’s sea ice model, which treats sea ice as a Non-Newtonian fluid, where the stress tensor includes such a term in order to account for the plastic response of the ice. For this simplified model given by the viscoplastic Burgers equation we introduce a suitable notion of weak solution, study their existence, and further discuss their properties. This is joint work in progress with Edriss Titi (U Cambridge and Texas A & M) and Xin Liu (Texas A & M), supported by the DFG within project C09 of CRC 1114.
We investigate the evolutionary incompressible inhomogeneous Navier-Stokes equations in a perforated domain. Provided the inclusions are large enough, we show that, as their number tends to infinity, the limiting system is given by Darcy's law. This result is already known for 1) purely incompressible fluids with constant density, and 2) compressible fluids when the inclusions are the same size as their mutual distance. We generalize these results for incompressible fluids with non-constant density. Additionally, we give convergence rates.
The session especially welcomes contributions to the following topics: uncertainty quantification; risk analysis and assessment; Bayesian methods in engineering; decision analysis; stochastic modeling (including spatio-temporal modeling); stochastic mechanics; stochastic algorithms and simulation; stochastic processes (including time series); resampling methods; stochastic networks. Applications to large scaled problems are encouraged.
In this talk we consider stochastic Galerkin approximations of linear elliptic partial differential equations with stochastic forcing terms and stochastic diffusion coefficients that cannot be bounded uniformly away from zero and infinity. A traditional numerical method for solving the resulting high-dimensional coupled system of PDEs is replaced by deep learning techniques. To achieve this, physics-informed neural networks, which typically operate on the strong residual of the PDE and can therefore be applied in a wide range of settings, are considered. As a second approach, the Deep Ritz method is employed, i.e. a neural network that minimizes the Ritz energy functional to find the weak solution. While the second approach only works in special cases, it avoids the need for test functions in variational problems while maintaining mathematical rigor and ensuring the existence of a unique solution. Furthermore, the residual is of a lower differentiation order, reducing the training cost considerably. The efficiency of the method is demonstrated on several model problems.
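For a deterministic elliptic model problem \(-\nabla\cdot(a\nabla u) = f\) on a domain D with homogeneous Dirichlet data, the Ritz energy minimized by the Deep Ritz network is (a sketch; in the stochastic Galerkin setting an additional expectation over the random coefficient and forcing enters)
\[
I(u) = \int_D \left(\tfrac{1}{2}\,a\,|\nabla u|^2 - f\,u\right)\mathrm{d}x,
\]
whose unique minimizer over \(H_0^1(D)\) is the weak solution; note that only first derivatives of the network output are required, in contrast to the strong PDE residual.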
We consider systems of delay differential equations (DDEs) with discrete delays, where the right-hand side is linear or a polynomial of low degree. Physical parameters are replaced by random variables to model and quantify uncertainties. Thus the solutions represent random processes. We expand a random process into the generalised polynomial chaos, which is a series consisting of orthogonal basis polynomials and unknown time-dependent coefficient functions. A stochastic Galerkin method yields a larger deterministic system of DDEs, whose solution represents an approximation of the coefficient functions. We investigate the properties of the stochastic Galerkin systems. In particular, the stability of stationary solutions is examined, where the associated characteristic equations typically yield an infinite set of eigenvalues. Bifurcation analysis can also be considered with respect to the delay as parameter. We present results of numerical computations using DDEs of epidemiological models as test examples.
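In this setting the generalised polynomial chaos expansion of the random solution takes the form (a sketch, truncated after N + 1 terms)
\[
x(t,\xi) \approx \sum_{i=0}^{N} v_i(t)\,\Phi_i(\xi), \qquad \mathbb{E}\!\left[\Phi_i(\xi)\,\Phi_j(\xi)\right] = \delta_{ij},
\]
where ξ collects the random parameters, the Φ_i are the orthonormal basis polynomials, and the deterministic coefficient functions v_i solve the larger stochastic Galerkin system of DDEs obtained by projecting the residual onto each basis polynomial.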
In various application domains, one wishes to determine which parameter values should be used for a model to match its simulation output with measurement data. In practice, however, measurement error on the data means that, at best, one can produce a so-called posterior probability distribution of these parameter values, given an assumed noise model. The Markov chain Monte Carlo method is a popular approach that constructs a Markov chain with this posterior distribution as its invariant distribution. The parameter samples in the chain are selected through an accept-reject strategy that accepts proposal samples based on their likelihood relative to that of the previously accepted sample.
Evaluating this likelihood requires the solution of the given model. Therefore, any errors in the discrete solver will result in errors in the likelihood evaluation. In this presentation, we discuss the case where the Markov chain Monte Carlo method is run on top of a stochastic solver, such as a Monte Carlo particle solver. In this case, the likelihood—and thus the acceptance probability—becomes a random variable whose variance scales with the number of random trajectories simulated by the solver. We discuss the mismatch between theory and practice in this setting. To this end, we combine classical error analysis and simulation results to understand the behavior of the pseudomarginal Markov chains. We then present practical approaches for efficient estimation in such settings.
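To make the accept-reject mechanism concrete, here is a minimal random-walk Metropolis sketch in Python in which the log-likelihood is returned by a noisy (particle-like) estimator, so the acceptance probability itself becomes a random variable; the estimator and all parameters are illustrative placeholders, not the solver discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_log_likelihood(theta, n_particles=1000):
    # Placeholder for a stochastic solver: returns a noisy likelihood estimate
    # whose variance decreases with the number of simulated trajectories.
    exact = -0.5 * np.sum(theta**2)
    return exact + rng.normal(scale=1.0 / np.sqrt(n_particles))

def metropolis(theta0, n_steps=5000, step=0.5):
    theta = np.asarray(theta0, dtype=float)
    ll = noisy_log_likelihood(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step * rng.normal(size=theta.shape)
        ll_prop = noisy_log_likelihood(proposal)
        # Accept with probability min(1, L(proposal)/L(current)); keeping the stored
        # noisy estimate for the current state is the pseudo-marginal convention.
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        chain.append(theta.copy())
    return np.array(chain)

samples = metropolis(np.zeros(2))
```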
Quasi-Monte Carlo methods are very popular in uncertainty quantification because of their high convergence order compared to Monte Carlo methods. Their downside is that they sample from a uniform distribution and, therefore, become inefficient for problems with complex and concentrated distributions. We have developed an adaptive quasi-Monte Carlo (aQMC) quadrature which concentrates the sampling in those subdomains with a high expected error. This is achieved by a greedy subdivision algorithm which employs an error indicator based on the modulus of continuity to control the subdivision and the subsequent resampling. We demonstrate the approach on Genz' test functions and discuss the benefits and limitations of the approach. As a semi-realistic showcase, we consider a simple problem from Bayesian inference of chemical kinetic models, where concentrated posteriors naturally appear due to the high nonlinearity and sensitivity of these models and the typically rather uninformative priors.
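The following is a rough schematic of such a greedy subdivision strategy, using plain pseudo-random points in place of a low-discrepancy rule and a crude max-min spread as a stand-in for the modulus-of-continuity indicator; it is not the authors' algorithm, only an illustration of the structure.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_samples(lo, hi, n):
    # In the actual method these would be low-discrepancy (QMC) points.
    return lo + (hi - lo) * rng.random((n, lo.size))

def adaptive_quadrature(f, dim=2, n_per_box=64, n_refine=50):
    """Greedy subdivision of [0,1]^dim driven by a crude error indicator."""
    boxes = [(np.zeros(dim), np.ones(dim))]
    for _ in range(n_refine):
        # Indicator: box volume times spread of f, a rough proxy for the modulus of continuity.
        indicators = []
        for lo, hi in boxes:
            vals = f(box_samples(lo, hi, n_per_box))
            indicators.append(np.prod(hi - lo) * (vals.max() - vals.min()))
        lo, hi = boxes.pop(int(np.argmax(indicators)))
        axis = int(np.argmax(hi - lo))                 # split the longest edge
        mid = 0.5 * (lo[axis] + hi[axis])
        hi_left, lo_right = hi.copy(), lo.copy()
        hi_left[axis], lo_right[axis] = mid, mid
        boxes += [(lo, hi_left), (lo_right, hi)]
    # Final estimate: mean value per box times box volume, summed over all boxes.
    return sum(np.prod(hi - lo) * f(box_samples(lo, hi, n_per_box)).mean()
               for lo, hi in boxes)

val = adaptive_quadrature(lambda x: np.exp(-10.0 * np.sum((x - 0.5)**2, axis=1)))
```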
Ropes are used in modern structures as load-bearing elements for various applications, e.g. the main cables or cross-ties in cable-stayed bridges, cable roof structures, or high-voltage transmission lines. Guy lines are designed to stabilize the structure and keep it in the right position against external loads.
Since the ropes do not transfer compressive forces, they require significant pre-tension forces to operate properly. However, even if the pre-tension force is very high, a sag of the rope is observed under the action of its own weight due to the significant lengths of the guy lines. Therefore, these elements should be analyzed at least as cables with a small sag. Due to their significant dimensions and slenderness, guyed towers are very sensitive to dynamic excitations such as earthquakes. The flexibility of the structure leads to significant deformation of the tower under dynamic loads, which in turn leads to the excitation of the guy lines. Since earthquakes are burdened with high uncertainties, the vibrations caused by this type of load should be analyzed by using stochastic methods.
The subject of consideration is a simplified model of a tower treated as a cantilever pole with a single guy line, modelled as a small-sag cable, attached to the structure at a certain height and anchored to the foundation at the other end. Due to the position of the cable in the considered system, it is excited by two sources under seismic loads: by ground motions at the point of rigid attachment of the cable in the foundation and by displacements of the tower resulting from its bending deformations at the point of anchoring the cable. A Gaussian white noise process is used for simplified modelling of the earthquake ground motions. The Ritz method and Lagrange's equations are used to obtain the non-linear system of differential equations of motion where the coupled longitudinal and lateral vibrations are observed. In previous works, the authors considered the system simplified to a single (fundamental) mode approximation. However, in real systems the vibration results from the superposition of the responses in different modes. Therefore, in the proposed approach, the multimodal response of the system is considered in order to take into account the influence of the selected modes on the obtained results. The mean values and variances of particular random state variables are obtained by using the equivalent linearization technique and verified against Monte Carlo simulations.
Optimization is the next natural step after simulation, with increasing importance in the future. The aim of this session is to provide the basis of a holistic overview of all areas of optimization. Thus, abstracts from both theoretical and applied perspectives are welcome.
Shape optimization problems are ill-posed in general because of the lack of convexity. Therefore, appropriate regularization is required for the existence of an optimal shape in applications to solid mechanics, as well as fluid and gas mechanics. In the lecture, the mathematical model of a shape optimization problem is considered. The convergence of the associated gradient flow dynamical system is established. In this way, the convergence of the gradient method in shape optimization is shown for the first time in the literature. Numerical examples are presented for shape and topology optimization in elasticity.
References:
[1] Plotnikov, P. I., Sokolowski, J. Gradient Flows in Shape Optimization Theory. Dokl. Math. 108, 387–391 (2023).
[2] Plotnikov, P. I., Sokolowski, J. Gradient flow for Kohn-Vogelius functional. Sib. Elektron. Mat. Izv. 20 (2023), no. 1, 524–579.
[3] Plotnikov, P. I., Sokolowski, J. Geometric aspects of shape optimization. J. Geom. Anal. 33 (2024), no. 7, Paper No. 206, 57 pp.
[4] Plotnikov, P. I., Sokolowski, J. Geometric Framework for Gradient Flow in Shape Optimization, Springer Briefs, submitted.
[5] Sokolowski, J. ; Yixin, Tan. Shape and Topology Optimization of Control Problems in Elasticity, submitted.
The shape and topology optimization is considered for linear elasticity. There are two methods to compute admissible solutions. The first is the gradient flow dynamical system, which leads to a gradient-type algorithm. The second is a level set method, which is based on the numerical solution of Hamilton-Jacobi equations. The first method is superior from a mathematical point of view: its convergence to an optimal solution can be established by using the advanced theory of quasilinear partial differential equations. However, the topology is in principle determined by the starting point of the numerical procedure. The change of topology could be performed by the application of the topological derivative method. In contrast, the level set method is popular and, to the best of our knowledge, still poorly analyzed from the mathematical point of view. This presentation introduces in detail a level set method that combines topological derivatives with shape derivatives and applies them to shape and topology optimization. In our method, the topological derivatives can effectively identify and guide new holes or connection changes that may appear in the structure, while the shape derivatives provide the velocity field for interface evolution, thereby achieving control of complex geometric boundaries. To further improve the efficiency and stability of the algorithm, the gradient flow method is also incorporated into the overall framework and combined with the shape gradient to naturally handle the dynamics of shape evolution in a continuous iteration process. The level set method captures the movement, merging or splitting of geometric shapes by updating the level set function in the embedded domain. The effectiveness of the method is verified by multiple numerical examples.
References:
[1] Plotnikov, P. I., Sokolowski, J. Geometric aspects of shape optimization. J. Geom. Anal. 33 (2023), no. 7, Paper No. 206, 57 pp.
[2] Plotnikov, P. I., Sokolowski, J. Geometric Framework for Gradient Flow in Shape Optimization, Springer Briefs, submitted.
[3] Sokolowski, J., Yixin, Tan., Shape and Topology Optimization of Control Problems in Elasticity, to appear.
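As a generic illustration of the level-set update described above (not the authors' implementation), one explicit Hamilton-Jacobi step with a shape-derivative velocity and an optional topological-derivative test for nucleating holes might look as follows; sign conventions and the nucleation rule are assumptions.

```python
import numpy as np

def level_set_step(phi, velocity, dt, topo_derivative=None, threshold=None):
    """One explicit update of a level-set function on a uniform grid.

    phi             : nodal values of the level-set function (2D array)
    velocity        : normal velocity obtained from the shape derivative (same shape)
    topo_derivative : optional field of topological derivatives
    threshold       : nucleate a hole where the topological derivative falls below it
    """
    # |grad phi| by central differences (grid spacing absorbed into velocity and dt).
    gx, gy = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2) + 1e-12

    # Hamilton-Jacobi transport: d(phi)/dt + V |grad phi| = 0.
    phi_new = phi - dt * velocity * grad_norm

    # Topology change: switch the sign of phi where the topological derivative indicates a gain.
    if topo_derivative is not None and threshold is not None:
        phi_new = np.where(topo_derivative < threshold, -np.abs(phi_new), phi_new)
    return phi_new
```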
Topology optimization is a valuable tool in engineering, facilitating the design of optimized structures. However, topological changes often require a remeshing step, which can become challenging. In this work, we propose an isogeometric approach to topology optimization driven by topological derivatives. The combination of a level-set method together with an immersed isogeometric framework allows seamless geometry updates without the necessity of remeshing. Topological derivatives provide topological modifications without the need to define initial holes [1, 2]. This approach is implemented within an open-source isogeometric analysis code [3] and a quadrature library for implicitly defined geometries [4, 5]. We provide several numerical examples to demonstrate the effectiveness of the proposed approach.
References
[1] S. Amstutz and H. Andrä, Journal of Computational Physics 216(8), 573–588 (2006).
[2] P. Gangl, Computer Methods in Applied Mechanics and Engineering 366(7) (2020).
[3] R. Vázquez, Computers and Mathematics with Applications 72(8), 523–554 (2016).
[4] R. I. Saye, SIAM Journal on Scientific Computing 37, A993–A1019 (2015).
[5] R. I. Saye, Journal of Computational Physics 448(1) (2022).
The standard approach to mechanical design is based on strength hypotheses. However, structural optimization methods do not take into account this important condition determining the correctness of the engineering solution. The situation is different in the case of biological systems, where reference to material strength is a basic condition for the formation of functional mechanically loaded systems. The team developed an optimization system modeled on the phenomenon of bone remodeling, based on rigorous theoretical studies in the field of material continuum optimization, where the condition for achieving the optimal solution is the equalization of the strain energy density on the structural surface. Furthermore, the strength hypotheses are expressed in terms of strain energy. The aim of the research presented in this paper is to use a precisely estimated relationship between the condition of a constant strain energy density on the structural surface and the material strength, according to yield criteria, for numerical studies. The given numerical examples contain a reference to analytical results and indicate a unique feature of the presented method. An important element of the concept of building a system for the biomimetic structural optimization method is the notion of an insensitivity zone. The use of this concept allows for regularization without focusing on the existence of the Lagrange multiplier corresponding to the volume constraint. The biomimetic heuristics used will be presented. The approach presented in the paper can be used by engineers as a method for structural optimization no longer bound to the phenomenon of trabecular bone remodeling. However, the discussion of the use of observations of Nature in mechanical design remains open.
Fused deposition modeling (FDM) is a widely used extrusion-based additive manufacturing process, primarily for 3D printing. It facilitates the cost-effective production of customized products. The polymer is melted and then extruded through a nozzle onto the product, where the polymer solidifies. The quality of parts produced with FDM depends on multiple factors, including the extrusion nozzle and the process parameters. In this work, we will focus on the impact of the nozzle, which ensures precise material deposition but often limits the maximum achievable printing speed due to pressure losses. The polymer melt's high viscosity and shear-thinning behavior result in significant pressure drops within the nozzle. These can hinder printing speed and lead to feeder slip, reducing dimensional accuracy. Depending on the nozzle geometry, vortex formations of the polymer flow can occur inside the nozzle, contributing to the pressure drop. Reducing this pressure drop can enable higher printing speeds, improved line precision, and better overall quality. To address this, we are optimizing the nozzle shape. We have developed a computational framework integrating fluid dynamics simulations with optimization algorithms to enhance nozzle performance. Polymer melt flow is simulated using a generalized Newtonian model with the Cross-WLF viscosity model to capture the material's shear-thinning behavior. For shape optimization, we use free-form deformation (FFD), a spline-based parameterization method that enables low-dimensional geometry modification through control point adjustments. Gradient-free optimization algorithms are employed to explore design variations effectively. We present results comparing optimal nozzle designs under different manufacturing constraints and material parameters. The optimized designs demonstrate reduced pressure drop, enabling higher printing speeds and improved dimensional accuracy. This highlights the potential of shape optimization to advance FDM processes.
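For reference, the Cross-WLF viscosity evaluation mentioned above can be sketched as follows; the parameter values are generic placeholders, not the material data used in the study.

```python
import numpy as np

def cross_wlf_viscosity(gamma_dot, T, n=0.3, tau_star=3.0e4,
                        D1=1.0e12, D2=373.15, A1=30.0, A2=51.6):
    """Cross-WLF viscosity model for a shear-thinning polymer melt.

    gamma_dot : shear rate [1/s]
    T         : temperature [K]
    The remaining parameters are placeholder values for a generic thermoplastic.
    """
    # Zero-shear viscosity from the WLF temperature shift.
    eta0 = D1 * np.exp(-A1 * (T - D2) / (A2 + (T - D2)))
    # Cross model for the shear-thinning branch.
    return eta0 / (1.0 + (eta0 * gamma_dot / tau_star) ** (1.0 - n))

# Example: viscosity over a range of shear rates at 230 °C (503.15 K).
rates = np.logspace(-1, 4, 6)
print(cross_wlf_viscosity(rates, T=503.15))
```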
The aim of this section is to bring together experts in the field of applied and numerical linear algebra, discussing recent theoretical and algorithmic developments.
Rational matrices, that is, matrices whose entries are univariate rational functions, appear in control problems and also in the numerical solution of non-linear eigenvalue problems as approximations of other matrices whose entries are more general univariate functions. Very often the rational matrices arising in applications have particular structures that should be preserved/used in the numerical computation of their poles, zeros and minimal indices. In this talk, we consider three classes of structured rational matrices R(z) that are Hermitian upon evaluation on (a) the real axis, (b) the imaginary axis, or (c) the unit circle. Our goal is to show how to construct linear polynomial system matrices, i.e., linearizations, for those R(z) that preserve the corresponding structures and are strongly minimal, a property that guarantees that such polynomial system matrices allow for a complete recovery of the poles, zeros, and minimal indices of R(z). Thus, structured generalized eigenvalue algorithms applied to these pencils will allow us to compute all these quantities in a structure preserving manner. We will present several approaches for solving these problems developed by the authors in the last few years.
A well known realisation theorem says that a rational matrix function, contractive on the disc, has a representation F(z) = D + C(I − zA)⁻¹B, with appropriately chosen matrices A, B, C, D such that the block matrix [A, B; C, D] is a contraction. A multivariate version of this result was obtained in [Grinshpan et al. 2016]. We will use that result to obtain a one-variable result for F(z) rational without poles in the annulus, or, more generally, in a multihole domain. The notion of contractivity needs to be adapted accordingly.
Bundles of matrix pencils (under strict equivalence) are sets of pencils having the same Kronecker canonical form, up to the eigenvalues (namely, they are an infinite union of orbits under strict equivalence). The notion of a bundle for matrix pencils was introduced in the 1990s, following the same notion for matrices under similarity, introduced by Arnold in 1971, and it has been extensively used since then.
During my talk we will describe the closure of the bundle of a pencil L, denoted by B(L), as the union of B(L) itself with a finite number of other bundles, where the dimension of each of these bundles is strictly smaller than the dimension of B(L).
To this end, we derive a formula for the (co)dimension of the bundle of a matrix pencil in terms of the Weyr characteristics of the partial multiplicities of the eigenvalues and of the (left and right) minimal indices. The talk is based on the papers:
[1] F. De Teran and F. M. Dopico, On bundles of matrix pencils under strict equivalence, Linear Algebra Appl., 658 (2023), pp. 1–31.
[2] F. De Teran, F. M. Dopico, V. Koval and P. Pagacz, On bundle closures of matrix pencils and matrix polynomials, Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, 2024.
The structure-preserving regularization, stabilization, and passivation of (possibly non-regular) linear port-Hamiltonian descriptor (pHDAE) systems by output feedback is discussed. For general descriptor systems, the characterization of when there exist output feedbacks that lead to an asymptotically stable closed-loop system is a very hard and partially open problem. For systems in pHDAE representation this problem can be completely solved. Necessary and sufficient conditions are presented that guarantee that there exists a proportional and/or derivative output feedback such that the resulting closed-loop port-Hamiltonian descriptor system is regular, asymptotically stable, and strictly passive.
We explore a class of port-Hamiltonian partial differential algebraic equations (pH-pDAEs), characterized by a self-adjoint linear relation and a skew-adjoint operator. We parametrize all their self-adjoint and skew-adjoint realizations making use of the extension theory of symmetric relations. To this end, we explicitly construct a boundary triplet for a class of linear relations in range representation defined by two matrix differential operators on one-dimensional spatial domains. This immediately enables us to formulate the corresponding port-Hamiltonian system dynamics via differential inclusions that fit into the classical geometric framework. The proposed methodology is demonstrated through applications to the Dzektser equation and an elastic rod model with a non-local elasticity condition.
In all fields of application, mathematical models are primarily based on differential equations. Hence, their numerical solution plays a fundamental role in numerical mathematics. This section mainly covers the construction and the behavior of numerical methods for differential equations, including those of ordinary as well as of partial differential type.
Biofilms describe the colonization of bacteria on surfaces. They attract more and more bacteria and express an extracellular matrix (EPS). By that, a growing solid domain is formed. Biofilms mainly form in fluid domains. The bacteria are described as concentrations in our continuum description, where attraction and growth are described by an advection-diffusion-reaction equation (PDE) and the growth of EPS by a source term in the mass-balance equation. We solve the coupled fluid and growing solid (FSI) system using an interface tracking method: the arbitrary Lagrangian-Eulerian (ALE) approach for finite elements. Thus, the solid domain is modeled in Lagrangian coordinates and the fluid domain is modeled in Eulerian coordinates. In order to ensure that we are able to accurately resolve narrow areas resulting from the growth process, we solve a biharmonic extension with a mixed formulation as an additional partial differential equation. The resulting model is a highly nonlinear, nonstationary, coupled chemical-growth-FSI variational-monolithic system in which we seek four solution variables: velocities, pressure, displacements, concentrations. As biofilm growth is a slow process (days), an implicit backward Euler scheme is used as the time-stepping scheme. Then, a Galerkin finite element scheme is employed for spatial discretization with inf-sup stable finite elements for the flow part. A Newton method is used for linearization. Therein, the arising linear systems are solved with a sparse direct solver. Our approach is substantiated with several numerical tests on different mesh levels and time step sizes in order to investigate computational stability.
In this presentation, we propose and computationally investigate a monolithic space–time multi-rate scheme for coupled problems. The novelty lies in the monolithic formulation of the multi-rate approach, as this requires a careful design of the functional framework, corresponding discretization, and implementation. Our method of choice is a tensor-product Galerkin space–time discretization. The developments are carried out for both prototype interface- and volume-coupled problems such as coupled wave-heat problems and a displacement equation coupled to Darcy flow in a poroelastic medium. The latter is applied to the well-known Mandel's benchmark and a three-dimensional footing problem. Detailed computational investigations and convergence analyses give evidence that our monolithic multi-rate framework performs well.
We present a local hp space-time multigrid method for tensor-product finite element discretizations of the Stokes equations, using Qₚ/Pₚ₋₁ᵈⁱˢᶜ elements in space and a discontinuous Galerkin (DG) discretization in time. A key novelty of this work is the application of hp multigrid techniques in both space and time. The method is facilitated by the matrix-free capabilities of the deal.II library. While multigrid methods are well-established for stationary problems, their application in space-time formulations presents unique challenges, particularly in constructing suitable smoothers. To address these challenges, we employ a space-time cell-wise Vanka smoother. Extensive tests on high-performance computing platforms demonstrate its efficiency, handling large-scale problems with over a trillion degrees of freedom, confirming its potential for complex, high-fidelity simulations.
Instationary convection-diffusion problems arise in many applications, such as pollution simulations, heat transfer problems between thin domains, or the modelling of flow and transport problems, to name but a few. In the advection-dominated case, the solutions are characterised by boundary layers, which lead to numerical instabilities and hence unphysical solutions when discretised with standard finite element methods. Known strategies to obtain stable solutions include the Streamline-Upwind Petrov-Galerkin (SUPG) method or a residual minimisation/least-squares approach. In this talk we focus on the latter approach. We will present an abstract least-squares framework that includes a built-in error estimator that can be used in a space-time adaptive refinement scheme. Furthermore, we will show that the instationary convection-diffusion equation fits into this framework and conclude with numerical examples that confirm our theoretical findings.
We present a continuous and a discontinuous linear Finite Element method based on a predictor-corrector scheme for the numerical approximation of the Ericksen-Leslie equations, a model for nematic liquid crystal flow including the non-convex unit-sphere constraint. As a predictor step, we propose a linear semi-implicit Finite Element discretization which naturally offers a local orthogonality relation between the approximate director field and its time derivative. Afterwards, an explicit discrete projection onto the unit-sphere constraint is applied without increasing the modeled energy. We compare the established method of using a discrete inner product, usually referred to as mass-lumping, for a globally continuous Finite Element discretization against a piecewise constant discontinuous Galerkin approach. Discrete well-posedness results and energy laws are established. Conditional convergence of subsequences of the approximate solutions to energy-variational solutions of the Ericksen-Leslie equations is shown under a time-step restriction. Computational studies indicate the efficiency of the proposed linearization and the improved accuracy gained by including a projection step in the algorithm.
In this session, novel developments devoted to optimization and optimal control problems governed by ordinary or partial differential equations will be discussed. The focus is on theoretical investigations, numerical analysis, and algorithmic issues, as well as on applications.
We aim to solve a topology optimization problem where the distribution of material in the design domain is represented by a scalar-valued level set function. The assignment of material at a point in the design domain is then determined by the sign of the level set function, which is approximated by a smoothed Heaviside function.
To obtain candidates for local minima, we want to solve the first order optimality system. The typically nonconvex structure of these problems might cause Newton's method with a line search strategy to fail. We therefore opt for a homotopy (continuation) approach which is based on solving a sequence of parameterized problems to approach the solution of the original problem. In addition to the homotopy parameter, the smoothing parameter of the Heaviside function can be used as a continuation parameter. The arising Newton-type method also allows for employing deflation techniques for finding multiple distinct solutions as well as for efficiently tracing Pareto optimal points in multi-objective optimization problems.
First numerical results for PDE-constrained design optimization problems are presented.
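A minimal sketch of the smoothed Heaviside material assignment and the continuation in the smoothing parameter is given below; all names and values are illustrative.

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    # Smooth approximation of the Heaviside function; eps is the smoothing
    # (continuation) parameter, recovering a sharp interface as eps -> 0.
    return 0.5 * (1.0 + np.tanh(phi / eps))

def material_field(phi, eps, E_solid=1.0, E_void=1e-6):
    # Interpolate the material property from the sign of the level set function.
    return E_void + (E_solid - E_void) * smoothed_heaviside(phi, eps)

# Continuation in the smoothing parameter: the first-order optimality system
# would be solved for each eps, warm-started from the previous solution.
phi = np.linspace(-1.0, 1.0, 11)           # placeholder level set values
for eps in [0.5, 0.2, 0.1, 0.05]:
    E = material_field(phi, eps)           # material field entering the state equation
```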
This talk presents an application of a linear transient PDE-constrained inverse problem to advection-diffusion problems in a real-world environment Ω ⊂ ℝⁿ for n ∈ {2,3}. We consider a forward operator ℱ(m) = ℬu, where u ∈ V is the solution of the linear partial differential equation r(m, u) = f, and ℬ : V → ℝ^q is the observation operator. The objective is to utilize discrete measurements, represented by the vector d ∈ ℝ^q, to infer the parameter set mₒₚ, defined by
mₒₚ = arg min_{m∈ℳ} ( ½ ‖ℱ(m) − d‖²_{Γ⁻¹ₙₒᵢₛₑ} + ℛ(m) ).
In this application, the transport of pollutants is simulated using the advection-diffusion equation. The inverse problem aims to identify the source of the pollutant and predict its dispersion. This leads to a highly underdetermined inverse problem, necessitating the use of a regularization term ℛ(m). We compare classical L²-regularization [1] with a sparsity-enforcing regularization [2], defined as
ℛ(m) := α ‖m‖_{ℳ(Ω)},
where
‖m‖_{ℳ(Ω)} = sup { ⟨m, φ⟩ : φ ∈ C(Ω̄), ‖φ‖_{C(Ω̄)} = 1 }.
For both methods, we present numerical implementations and evaluate their applicability to critical infrastructure protection.
Moreover, a Bayesian formulation of the inverse problem allows quantification of the uncertainty associated with the prediction of the parameter m. In particular, goal-oriented uncertainty estimation represents a promising approach for evacuation scenarios. This approach quantifies uncertainty for a specific quantity of interest [3]. In a crisis management context, this could pertain to a particular evacuation point or route. Additionally, this talk discusses the application of goal-oriented uncertainty estimation in the context of pollutant transport in complex environments.
[1] Villa, U., Petra, N., Ghattas, O. (2021). h-Adaptivity and Goal-Oriented Optimal Experimental Design for Infinite-Dimensional Bayesian Linear Inverse Problems. arXiv:1308.4084.
[2] Pieper, K., Sprungk, B., Stadler, G. (2021). Sparse Deterministic Approximation of Bayesian Inverse Problems. arXiv:1103.4522.
[3] Spantini, A., Bigoni, D., Marzouk, Y. M. (2017). Goal-Oriented Optimal Experimental Design for Infinite-Dimensional Bayesian Linear Inverse Problems. arXiv:1308.4084.
Industrial manufacturing processes are often described by systems of differential equations with respective material laws. Since these material laws depend on material-specific parameters, which are frequently unknown or can only be obtained through a range of expensive experiments, there is a need for numerical parameter estimation strategies. In this context, we consider a fiber spinning process modeled by a boundary value problem (BVP) of ordinary differential equations where a certain material law needs to be estimated on the basis of few experimental data. First, we consider the problem where the structural form of the material law is known. We have developed a numerical method involving collocation and continuation methods to solve BVPs that arise in a generalized Newtonian setting. Furthermore, we are able to calculate gradients of the BVP solutions with respect to the material parameters and apply nonlinear optimization techniques to optimize the material parameters of interest with respect to given measurement data. In this talk our aim is to extend the approach to the case where the structural form of the material law is not known. We discuss arising difficulties and propose a method for estimating the values of the material law along the fiber to be able to frame the problem in the context of symbolic regression.
CANCELLED
——————
We consider elliptic optimal control problems with total variation penalty of the control. Using the predual of the BV space, we provide a new form of optimality conditions. We discuss algorithmic opportunities to solve the control problems and present numerical results.
A common ansatz in PDE-constrained problems for distributed parameters is that the sought solutions are piecewise constant, modelling situations like localized inclusions of different material properties within an otherwise homogeneous medium. In this situation, variational regularization with a total variation penalty balances compatibility with piecewise constant minimizers against retaining convexity of the regularizer. However, its lack of differentiability means that most numerical methods require some level of smoothing, so that such piecewise constant structures can be observed only approximately and/or at very fine resolutions.
In this work, we instead consider generalized conditional gradient methods that provably approximate minimizers as linear combinations of characteristic functions, by alternating insertion and correction steps. Specifically, we focus on a discretised setting of functions defined on triangulations. This framework allows standard FEM discretizations to coexist with fast graph cut approaches to the total variation, which have long been used in image segmentation and related tasks. We present variants of such methods which allow for pointwise constraints and insertion steps with as small as possible computational cost. After considering some convergence results, these are applied in various canonical test cases, such as elliptic and parabolic inverse source problems with different kinds of measurements.
This talk focuses on the bilinear optimal control of a Fokker-Planck or a transport equation. We consider a drift field with very low regularity (specifically, BV-regularity in space), which necessitates the use of renormalized solutions, a technique developed by Ronald DiPerna and Pierre-Louis Lions. This foundational theory has been significantly extended by Luigi Ambrosio to encompass cases with BV-regular drift fields. The theory of renormalized solutions is essential not only for establishing the uniqueness of solutions to the controlled PDE but also serves as a method for defining these solutions. Our findings indicate that this approach is particularly natural in the context of optimal control. I am going to outline the PDE-constrained optimization problem we have considered and present our functional-analytic framework concerning the existence of optimal controls and the corresponding optimality criteria.
Dynamics and control is an interdisciplinary section which in particular addresses mathematical systems theory and control engineering. The contributions to this section are also concerned with the mathematical understanding and design of controllers which appear in actual applications.
This talk addresses the design of predictive controllers that ensure the satisfaction of safety-critical constraints for uncertain dynamical systems. In particular, model predictive control (MPC) provides a natural approach to satisfy safety-critical constraints for general nonlinear dynamical systems, assuming an accurate model is available. In this talk, I will present robust design methods that account for the mismatch between our model and the true system dynamics. Furthermore, I will discuss data-driven techniques to extract information about the model and its uncertainty from experimental data, providing an automated design pipeline.
A key challenge is to efficiently account for the uncertainty in the prediction of nonlinear systems. I will present a framework that addresses this problem using contraction metrics, a tool from nonlinear control theory. Specifically, this allows us to efficiently predict sets containing all uncertain future trajectories, which are then embedded into standard MPC solvers. I will demonstrate that this framework can account for a wide range of model uncertainties, such as disturbances, uncertain parameters, noisy measurements, unmodeled nonlinearities, and even (unbounded) stochastic noise. I will also highlight how online model adaptation can improve performance while maintaining safe operation. Overall, this provides a modern control framework that integrates techniques from predictive control, nonlinear control, robust designs, and online adaptation to guarantee stability and safety for nonlinear uncertain systems. Practicality will be demonstrated with numerical and experimental applications.
In the second part of the talk, I will present our recent framework, data to predictive control (D2PC). This is an automated design pipeline for predictive controllers for the special case of linear system dynamics subject to Gaussian process and measurement noise. This approach extracts the relevant model information and characterization of uncertainty directly from data using maximum likelihood estimation and asymptotically correct uncertainty quantification. This is accomplished with a generalized expectation-maximization approach, which allows one to incorporate quite general prior information while keeping the computational complexity low. Then, tailored robust control and robust-stochastic-predictive control implementations are developed that directly account for the structure in the model error resulting from the model identification. In combination, this provides a design that is completely automated and guarantees safe operation and stability with a user-chosen probability.
In this talk we present first results for near optimality in expectation of the closed-loop solutions for stochastic economic MPC. The approach relies on a recently developed turnpike property for stochastic optimal control problems at an optimal stationary process, combined with techniques for analyzing time-varying economic MPC schemes. We obtain near optimality in finite time as well as overtaking and average near optimality on infinite time horizons.
Magnetically levitated (Maglev) vehicles aim to address the growing demand for efficient and sustainable high-speed transportation solutions. They offer a promising alternative to traditional modes of transportation like air travel or wheel-on-rail systems. Based on the principle of electromagnetic suspension (EMS), these vehicles need to be actively controlled to maintain stable levitation. Current efforts focus on developing Maglev trains capable of operating at speeds up to 600 km/h. At these high speeds, vertical vibrations caused by track irregularities and other disturbances pose challenges to ride safety and passenger comfort, which have to be addressed by the controller. This work proposes the application of Nonlinear Model Predictive Control (NMPC) for reducing vertical vibrations of the vehicle's car body, thereby increasing passenger comfort while ensuring stable levitation. The novel aspect of this approach is the integration of the mechanical suspension dynamics into the prediction model, allowing the controller to predict and minimize the car body's vertical movement. NMPC has emerged as a promising solution for the control of high-speed Maglev vehicles due to its ability to effectively handle system nonlinearities and constraints. A dynamic model is utilized to capture the interaction between the levitation magnets and the car body by incorporating an electromagnetic and a mechanical model. While the nonlinear magnet model describes the electromagnetic behavior, the two-mass model represents the suspension dynamics. The track irregularities are interpreted as unknown disturbances. The NMPC algorithm predicts the system behavior over a finite horizon and computes an optimal control input by minimizing a cost function while respecting dynamic and input constraints. The cost function is designed such that closed-loop stability and minimal vibrations are ensured. Simulations using a detailed multibody model based on the Transrapid 09 show that this approach significantly reduces vertical vibrations compared to existing methods while ensuring stable levitation.
CANCELLED
———————
An accurate simulation of gas networks has been an objective of numerical analysis over the last decades, since it can bring improvements in the operation of the network and reduced costs. However, when considering real-world application networks, simulating the gas network becomes computationally expensive due to the scale, complexity, and dynamic nature of the system. In this work we consider the isothermal Euler equations for ideal gas flow with nonlinear damping. Due to the computational cost, we employ a three-level model hierarchy for the optimization: the full model, the linearized model, and the reduced-order model. The full model ensures an accurate representation of the system. The linearized model simplifies the system by capturing its essential dynamics, offering a balance between computational efficiency and fidelity. Additionally, Space Mapping techniques are used to improve the linearized model with respect to the input-output structure. Finally, the reduced-order model employs proper orthogonal decomposition (POD) to retain the system features while reducing the computational cost.
With the isothermal model hierarchy, the next step is to develop a controller for the network that can anticipate the demand from the consumers, but also respond to varying demands. For this reason, we develop an MPC strategy that includes constraints on the state variables along the network. Using the hierarchy, we seek to minimize the computational effort without compromising the accuracy required for practical applications, offering an effective framework for handling complex, high-cost simulations and controller design. We present the current results of this investigation.
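For the reduced-order level of the hierarchy, a basic POD construction from solution snapshots via the singular value decomposition can be sketched as follows; the snapshot matrix and tolerances are placeholders.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Compute a POD basis from a snapshot matrix (n_dofs x n_snapshots)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    # Keep the smallest number of modes capturing the requested energy fraction.
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]

# Reduced-order dynamics: project the full state onto the POD subspace,
# x ≈ V a, and evolve the reduced coordinates a = V^T x.
X = np.random.rand(1000, 200)       # placeholder snapshot matrix
V = pod_basis(X)
a = V.T @ X[:, 0]                   # reduced representation of one state
```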
Scientific Computing is concerned with the efficient numerical solution of mathematical models from both science and engineering. The field covers a wide range of topics: from mathematical modeling over the development, analysis and efficient implementation of numerical methods and algorithms to software and finally application for the solution of complex real-world problems on modern computing systems. This interdisciplinary field combines approaches from applied mathematics, computer science and a wide range of applications in which in-silico experiments play an increasingly important role.
CANCELLED
——————
We address the computational bottleneck that arises in solving high-dimensional problems such as 6D Boltzmann, Fokker-Planck, or Vlasov equations, multi-body Hamiltonian systems, and the inference of governing equations in complex self-organizing systems. Specifically, the challenge lies in numerically computing function expansions and their derivatives fast, while achieving high approximation power.
We present the Fast Newton Transform (FNT), a novel algorithm for multivariate polynomial interpolation with a runtime of nearly N log N, where N scales only sub-exponentially with the spatial dimension, surpassing the runtime of the tensorial Fast Fourier Transform (FFT). We prove and demonstrate that the optimal geometric approximation rates for a class of analytic functions—termed Bos–Levenberg–Trefethen functions—are reached by the FNT and maintained for the derivatives of the interpolants. This establishes the FNT as a new standard in spectral methods, particularly suitable for high-dimensional, non-periodic PDE problems, interpolation tasks, and signal processing. We discuss further applications, such as the Newton Neural Operator, realizing fast (de-)convolution in machine learning tasks.
In this talk, a geometric multigrid solution technique for the incompressible Navier-Stokes equations in three dimensions is presented, utilizing discretely divergence-free finite elements. For this purpose, a set of shape functions is constructed in an a priori manner to span the subspace of discretely divergence-free functions for the Rannacher-Turek finite element pair under consideration. Compared to primal formulations, this approach yields smaller system matrices without a saddle point structure. This prevents the use of complex Schur complement solution techniques and more general preconditioners can be employed. Furthermore, the prolongation operator is designed to preserve globally linear functions while maintaining a high computational efficiency.
Constructing a basis for the subspace of discretely divergence-free finite elements in three dimensions poses significant challenges, making it a bottleneck of this approach. Therefore, the explicit basis construction is utilized only on the coarsest mesh level of the multigrid algorithm for the construction of direct solvers. On finer grids, this information is extrapolated to enforce boundary conditions efficiently. Here, special attention is required for meshes introducing bifurcations in the flow. In such cases, 'global' shape functions are incorporated, which describe the net flux through different branches. This framework also facilitates enforcing total fluxes, which is notoriously difficult in primal approaches.
Various numerical examples for meshes with different shapes and boundary conditions illustrate the strengths, limitations, and future challenges of this solution concept for three-dimensional flow simulations.
Partial differential equations are used to model the morphological evolution of printed organic solar cells, capturing the underlying complex processes. In this talk, we present a Cahn-Hilliard model that describes the dynamics of a polymer, a non-fullerene acceptor, and a solvent, coupled to the Navier-Stokes equations for the fluid's macroscopic motion. To account for solvent evaporation, we incorporate an Allen-Cahn equation into the framework. We discretize the model using a finite-element method with a semi-implicit time-stepping scheme. The resulting (non)linear systems are large-scale and tightly coupled, posing significant computational challenges. We propose a preconditioned iterative scheme that solves these coupled equations efficiently and is robust to variations in discretization parameters. Numerical experiments demonstrate the efficiency of the model and the proposed methodology through several numerical examples.
Additive overlapping Schwarz Methods are iterative methods of the domain decomposition type for the solution of partial differential equations. Numerical and parallel scalability of these methods can be achieved by adding coarse levels. A successful coarse space, inspired by iterative substructuring, is the generalized Dryja–Smith–Widlund (GDSW) space. In Heinlein, Hochmuth, Klawonn (2019), based on the GDSW approach, two-level monolithic overlapping Schwarz preconditioners for saddle point problems were introduced. We present parallel results for the solution of incompressible fluid problems using two approaches: a multilevel monolithic approach and one based on the pressure Poisson equation. These results are achieved through the combination of the additive overlapping Schwarz solvers implemented in the Fast and Robust Overlapping Schwarz (FROSch) library, which is part of the Trilinos package ShyLU, and the FEATFLOW library using a scalable interface for the efficient coupling of the two libraries. This work is part of the project StroemungsRaum - Novel Exascale-Architectures with Heterogeneous Hardware Components for Computational Fluid Dynamics Simulations, funded by the German Bundesministerium für Bildung und Forschung (BMBF) as part of the program on New Methods and Technologies for Exascale Computing (SCALEXA).
In this work, we model and simulate long-term damage in a fluid-structure interaction problem under a cyclic-in-time fluid regime. A temporal multiscale technique is proposed to efficiently compute the accumulation of internal damage under prolonged periodic loading conditions. This effect eventually manifests as macroscopic defects, such as cracks, or leads to failure. The study and modelling of these damage accumulation processes are addressed through the concept of Continuum Damage Mechanics (CDM) (Lemaitre and Chaboche, 1994). The coupled dynamics are driven by the fluid and structural stresses, as well as the effective stress of the structure (a homogeneous elastic solid), which is considered the main source of damage. We consider an elastic object attached to the bottom of a flow channel. The flow is driven by a periodic inflow boundary at the left of the channel and an open boundary at the right, which is assumed to remain unchanged over time. A Variational Multiscale Method (VMM) (Guo and Lin, 2015) is applied to efficiently compute the problem, leveraging the isolation of the periodic-in-time fast-scale fluid-structure interaction part of the problem, which allows larger time-stepping of the averaged slow-scale damage process. The spatial domain is described in Arbitrary Lagrangian Eulerian (ALE) coordinates. The computational time is drastically reduced, by a factor of about 10⁴, enabling the fitting of the parameters of the damage model to literature or experimental data using a novel gradient-based method (Dominguez et al., 2023), as shown in preliminary results. The proposed approach not only demonstrates a variational multiscale methodology for coupling fluid-structure interaction problems with structural damage but also introduces the starting point for a temporal multiscale gradient-based fitting procedure specifically designed for this kind of problem.
Multi-objective optimization for a hydraulic turbine blade is a significant challenge due to the high computational cost of the large number of computational fluid dynamics (CFD) simulations required. It becomes even more challenging when dealing with a large number of parameters. This research addresses this issue by proposing an innovative approach that leverages Autoencoder techniques to reduce the dimensionality of the problem, specifically by working with the latent space representation of the original parameter space. This latent space is then used as the domain of optimization, potentially offering a more efficient search landscape. The study compares the performance of optimization runs conducted in the latent space against traditional optimization in the original parameter space. For both scenarios, the genetic algorithm "Non-dominated Sorting Genetic Algorithm II" (NSGA-II) with an island model is utilized, allowing parallel exploration of the search space and promoting diversity in the population. Experiments encompass a range of multi-objective optimization benchmarks as well as evaluations in terms of computational time. Furthermore, the study analyzes the trade-offs between the time invested in training an efficient Autoencoder and the subsequent gains in optimization speed. This research contributes to the field by offering a methodology for accelerating multi-objective optimization in the field of hydraulic turbine design.
Understanding structural vibrations is critical for the design of mechanical structures such as airplanes or cars. In the context of vibroacoustic design the dynamical response of the system in the frequency domain is central. Such frequency response functions are usually computed using discretization techniques such as the finite element method. To this end, potentially large linear systems have to be solved at each queried frequency. For tasks like design space exploration, optimization or uncertainty quantification, the system has to be evaluated many times, quickly resulting in a prohibitive computational burden. Neural network surrogate models offer a promising alternative to traditional numerical simulation. These models can be evaluated rapidly but require large datasets for training. Our work focuses on enhancing the data efficiency of deep learning approaches by incorporating physical knowledge into the network architecture.
In this work, we consider harmonically excited plates with beadings as a benchmark dynamical system. The beading patterns define a geometry modification which affects the local stiffness of the plate and consequently the frequency response function (FRF). Our network is trained to predict the FRF given a beading pattern.
We propose a physically inspired final network layer. Specifically, our network represents the FRF as a (weighted) sum of rational basis functions derived from the FRF of a single mass oscillator model. This design ensures smooth predictions and allows the network to directly predict the eigenfrequency, height, and damping coefficient per peak. Using these parameters, we define a novel loss function that incorporates their contribution explicitly. During training, we have to account for its special numerical properties.
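A possible form of such a rational basis, assembled from single-mass-oscillator responses with per-peak eigenfrequency, weight, and damping, is sketched below; the exact parametrization of the final layer is not spelled out in the abstract, so this is an assumed illustration.

```python
import numpy as np

def frf_from_modes(omega, eigenfrequencies, heights, dampings):
    """Assemble an FRF as a weighted sum of single-mass-oscillator responses.

    omega            : angular frequency axis (array)
    eigenfrequencies : modal angular frequencies omega_k (network outputs)
    heights          : modal weights A_k (network outputs)
    dampings         : modal damping ratios zeta_k (network outputs)
    """
    H = np.zeros_like(omega, dtype=complex)
    for w_k, A_k, z_k in zip(eigenfrequencies, heights, dampings):
        H += A_k / (w_k**2 - omega**2 + 2j * z_k * w_k * omega)
    return np.abs(H)

# Example: three modes on a frequency axis up to 500 Hz.
omega = 2 * np.pi * np.linspace(1, 500, 1000)
frf = frf_from_modes(omega,
                     eigenfrequencies=2 * np.pi * np.array([80.0, 210.0, 340.0]),
                     heights=np.array([1.0, 0.6, 0.3]),
                     dampings=np.array([0.02, 0.015, 0.01]))
```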
We compare our physics-constrained architecture to a generic architecture that directly predicts the FRFs trained on the same data. Our method predicts the eigenfrequencies more accurately, especially in the low data regime. However, the generic method achieves a smaller FRF mean squared error in the high data regime. Potential future work includes generalizing our method to predict velocity fields, where generic networks currently outperform. Overall, our approach offers a tradeoff: improved eigenfrequency prediction and physical interpretability at the cost of a more complicated architecture and training process.
Determination of future states based on the actual one is a key task in many areas of dynamic system analysis like forecasting, stability analysis or model predictive control. For this purpose, classically a mathematical model needs to be set up from physical principles. The resulting equations of motion are then usually transformed into a system of 1st-order ordinary differential equations (ODE) [1] and solved with established numerical integration methods to determine future system states starting from an initial condition.
However, as mentioned, this classical approach requires the equations of motion to be formulated, which can be rather challenging due to the required expert knowledge or even impossible if the underlying physics is unknown. In such cases, machine learning approaches may be used to first extract the system behavior from data and then train surrogates predicting future states. To make use of established numerical integration methods in the forecasting process, the surrogate should substitute the state function of the underlying ODE. This goal can be reached, e.g., by using divided differences to first generate approximations of the state derivatives and then train an ODE-surrogate as proposed in [2]. However, divided differences will fail in case of noise or for large step sizes.
To overcome these difficulties, here a new approach is proposed, which may be called Artificial Neural Ordinary Differential Equation (ANODE). An explicit numerical integration scheme is wrapped around a feed-forward neural network representing the state function of the ODE to be learned. For training the ANODE, first multiple initial conditions are forward-propagated by solving the corresponding initial value problems up to specified termination times using any explicit numerical integration scheme. Second, losses are generated by comparing the predicted output states with the respective exact ones. Finally, these losses are used to update the weights of the embedded neural network using backpropagation methods.
The feasibility of ANODE is demonstrated for various planar systems using a fourth-order Runge-Kutta integration scheme to forecast system states based on the trained surrogate. Generally speaking, the proposed ANODE approach works with any explicit single- or multistep integration scheme, it only has to be connected to the backpropagation graph to calculate all required derivatives.
[1] Bestle, D. (1994). Analyse und Optimierung von Mehrkörpersystemen: Grundlagen und rechnergestützte Methoden. Springer, Berlin.
[2] Bestle, D. & Bielitz, T. (2024). Real‐time models for systems with costly or unknown dynamics. Proceedings in Applied Mathematics & Mechanics. https://doi.org/10.1002/pamm.202400008 .
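A minimal sketch of the ANODE idea, with a feed-forward network in the role of the state function and a fixed-step fourth-order Runge-Kutta scheme wrapped around it so that gradients flow through the integrator, could look as follows in PyTorch; the toy data and all hyperparameters are illustrative.

```python
import torch

# Feed-forward network representing the unknown state function f(x) of dx/dt = f(x).
f_net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),
)

def rk4_step(f, x, h):
    # Classical fourth-order Runge-Kutta step; differentiable end to end.
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def forecast(x0, n_steps, h=0.01):
    x = x0
    for _ in range(n_steps):
        x = rk4_step(f_net, x, h)
    return x

# Training loop: propagate initial conditions, compare with reference end states.
opt = torch.optim.Adam(f_net.parameters(), lr=1e-3)
x0 = torch.randn(128, 2)                      # batch of initial conditions (toy data)
x_ref = x0 * torch.exp(torch.tensor(-0.5))    # placeholder "exact" states at t = 0.5
for epoch in range(500):
    loss = torch.mean((forecast(x0, n_steps=50) - x_ref) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```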
The behavior of materials is influenced by a variety of phenomena that take place across diverse time and length scales. Multiscale modeling strategies are required to have a better understanding of the effect of the microstructure on the macroscopic response. Numerical approaches, such as the FE2 method, consider the macro-micro interaction to predict a global response in a concurrent manner. However, such methods are computationally expensive due to the repeated evaluation of the micro-scale. This limitation has motivated the integration of deep learning methods into a computational homogenization framework to accelerate multi-scale simulations. One such approach is to use a neural network-based surrogate model to replace the micro-scale analysis. Such models are purely data-driven and are faster without compromising accuracy. However, the state of the micro-scale is not available when using a substitutive surrogate model and incorporating known physical relations of the microstructure such as the equilibrium conditions is not feasible.
In this contribution, we use neural operators to predict the micro-scale physics, resulting in a hybrid data-driven and physics-based model. This enables physics-guided learning and is flexible for different materials and spatial discretizations. The applied multiscale FE² simulations are based on periodic homogenization theory and consist of a representative volume element (RVE) at each Gauss integration point of the macro-scale. We approximate the solutions of the RVE by constructing a physics-informed operator network. Homogenized stresses and strains are then computed using conventional finite element methods, and the consistent tangent matrix is computed via automatic differentiation.
We apply the proposed approach as two variants to time-dependent problems in solid mechanics considering viscoelastic material behavior, where the state is represented as internal variables only at the micro-scale. In the first variant, the internal variables are computed based on known physics whereas in the second variant, the internal variables are learnt from data. The results of homogenized stresses and strains from both these methods are analyzed. Additionally, we demonstrate that integrating cutting-edge high-performance computing tools and just-in-time compilation can significantly accelerate performance, achieving speed gains by multiple orders of magnitude.
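As an illustration of obtaining the consistent tangent by automatic differentiation of a surrogate stress response, a small JAX sketch is given below; the surrogate itself is a placeholder for the operator network of the contribution.

```python
import jax
import jax.numpy as jnp

def homogenized_stress(strain, params):
    # Placeholder for the neural-operator-based RVE surrogate: maps the
    # macroscopic strain (Voigt notation, 6 components) to homogenized stress.
    W = params["W"]
    return jnp.tanh(W @ strain)

# Consistent tangent d(sigma)/d(epsilon) via automatic differentiation.
tangent_fn = jax.jacfwd(homogenized_stress, argnums=0)

params = {"W": jnp.eye(6)}
strain = jnp.array([1e-3, 0.0, 0.0, 0.0, 0.0, 0.0])
sigma = homogenized_stress(strain, params)
C_consistent = tangent_fn(strain, params)     # 6x6 tangent for the macro FE solver
```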
Data-driven mechanics, introduced by Kirchdoerfer and Ortiz [1], replaces traditional material models with datasets of discrete stress-strain pairs. The solution to boundary value problems is obtained by minimizing a distance between these pairs, denoted material states, and pairs of stress and strain, which fulfill equilibrium and kinematic compatibility and are termed mechanical states.
While the original framework was introduced for elasticity, an extension to encompass inelastic material behavior is a crucial yet challenging step. In our extension [2,3], we address this challenge by introducing a history surrogate, which stores essential information about stress-strain paths of an associated material. At the end of each time step, a propagator updates this history surrogate. For truss structures, intuitive choices for the history surrogate are feasible and accurate results can be obtained, as shown in [3]. However, in higher dimensions, a manual construction of this quantity becomes impractical. To overcome this, we employ a neural network as the propagator, which autonomously handles the task of defining and updating the history surrogate based solely on discrete stress and strain data. We thereby eliminate the need for explicit update rules or user intervention and obtain a scalable framework for addressing higher-dimensional problems.
We apply this framework to truss structures with a path-dependent material response and compare results for an intuitive and neural network propagator. Furthermore, we extend the discussion to two-dimensional problems, examine the challenges associated with this higher-dimensional setting and offer insights into potential strategies for overcoming related issues. We demonstrate the scalability and flexibility of the enhancement by presenting results for two-dimensional boundary value problems.
[1] T. Kirchdoerfer, M. Ortiz, Data-driven computational mechanics, Comput. Methods Appl. Mech. Engrg. 304 (2016) 81-101
[2] T. Bartel, M. Harnisch, B. Schweizer, A. Menzel, A data-driven approach for plasticity using history surrogates: Theory and application in the context of truss structures, Comput. Methods Appl. Mech. Engrg. 414 (2023), 116-138
[3] K. Poelstra, T. Bartel, B. Schweizer, A data-driven framework for evolutionary problems in solid mechanics, J. Appl. Math. Mech (3) (2022), e202100538
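To illustrate the distance minimization at the heart of the data-driven paradigm described above [1], the following sketch assigns each mechanical state the closest entry of a one-dimensional strain-stress database in an energy-like norm; the database, the modulus-like weighting C and all numbers are illustrative assumptions.

# Sketch: nearest material-state assignment in data-driven computational mechanics (1D).
# Distance per point: C*(strain difference)^2 + (1/C)*(stress difference)^2.
import numpy as np

def assign_material_states(mech_strain, mech_stress, db_strain, db_stress, C=1.0):
    # pairwise squared distances between mechanical states and database states
    d = (C * (mech_strain[:, None] - db_strain[None, :])**2
         + (1.0 / C) * (mech_stress[:, None] - db_stress[None, :])**2)
    return np.argmin(d, axis=1)              # index of the closest database pair per point

# hypothetical database and mechanical states
db_strain = np.linspace(-0.01, 0.01, 201)
db_stress = 210e3 * db_strain                # e.g. sampled linear-elastic data [MPa]
idx = assign_material_states(np.array([0.002]), np.array([400.0]),
                             db_strain, db_stress, C=210e3)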
The finite element method is a cornerstone of computational science and engineering, providing approximate solutions to boundary value problems. The solution quality is highly dependent on the underlying mesh, traditionally optimized through iterative adaptive refinement. We present a novel deep learning architecture that limits training data requirements by leveraging invariance and equivariance properties. Our approach directly generates high-quality h-adaptive meshes for specified boundary value problems and a target approximation error without iterative steps. Applied to 2D linear-elastic problems, our method achieves a median error reduction of 22.6% compared to uniform meshes with equivalent computational cost.
The Data-Driven Identification (DDI) method introduced by Leygue et al. (2018) aims to estimate the mechanical response of materials without relying on a predefined constitutive model. The main output of DDI is the stress field identified from full-field kinematic measurements and load cell measurements, with the intuition that similar strain values should yield similar stress values. It depends on the availability of heterogeneous and rich data, which are clustered to build a strain-stress database. Given the displacement and strain fields, DDI solves an inverse problem by minimizing the distance between the current mechanical state and its nearest representative in a material database, while ensuring equilibrium is satisfied.
DDI has been tested and applied to both synthetic and real data, including studies on elasticity, hyperelasticity, viscoelasticity, elastoplasticity, and large-deformation behavior. Despite these successful applications, a comprehensive analysis of the method confirming that DDI can accurately estimate and converge to the true mechanical stress is still missing. In addition, the method relies on algorithmic parameters that are often chosen empirically.
In this work, we investigate numerically and algorithmically the preliminary work of Leygue (2024), where a criterion for the uniqueness of the solution was introduced. We first illustrate the well-posedness of the DDI minimization problem when this criterion is verified, and then propose three methods for numerically characterizing the criterion. Additionally, we analyze these methods and compare their computational costs. This study helps in the selection of the parameters for DDI. As the DDI parameters are pushed to gain accuracy, the problem becomes ill-posed. However, it is possible to detect and correct the stress estimate in this case. This can be done through additional non-invasive regularization or through the detection of the few degrees of freedom responsible for the loss of uniqueness.
Machine learning (ML) methods have been successfully applied to a broad variety of problems, such as structural engineering, elastostatics, heat transfer, fluid flow and material modelling [1-3]. A general, unified neural network approach is suggested as an ML-based counterpart of the finite element method without the need for analytic expressions for material laws [4]. The entire simulation process, from material characterization to simulations on the structural level, takes place in the proposed neural network framework. Specifically, an adaptation of the Deep Energy Method (DEM) [5] is combined with a Constitutive Artificial Neural Network (CANN) [6] and trained on measured displacement fields and prescribed boundary conditions in a coupled procedure. Tests on compressible and incompressible Neo-Hookean solids with up to twelve CANN parameters show high accuracy of the approach and very good generalization. Only a small amount of data is required for robust and reliable training.
[1] Le Clainche S., Ferrer E., et al. Improving aircraft performance using machine learning: A review. Aerospace Science and Technology 138, 108354 (2023). https://doi.org/10.1016/j.ast.2023.108354
[2] Hildebrand S., Klinge S. Comparison of neural FEM and neural operator methods for applications in solid mechanics. Neural Computing and Applications 36, 16657–16682 (2024). https://doi.org/10.1007/s00521-024-10132-2
[3] Hildebrand S., Klinge S. Hybrid data-driven and physics-informed regularized learning of cyclic plasticity with Neural Networks. Mach. Learn.: Sci. Technol. 5, 045058 (2024). DOI 10.1088/2632-2153/ad95da
[4] Hildebrand S., Friedrich J.G., Mohammadkhah M., Klinge S., et al. Coupled CANN-DEM Simulation in Solid Mechanics. Machine Learning: Science and Technology, 2024 (accepted).
[5] Nguyen-Thanh V.M., Zhuang X., Rabczuk T. A deep energy method for finite deformation hyperelasticity. European Journal of Mechanics – A/Solids 80, 103874 (2020). https://doi.org/10.1016/j.euromechsol.2019.103874
[6] Linka K., Hillgärtner M., et al. Constitutive artificial neural networks: A fast and general approach to predictive data-driven constitutive modeling by deep learning. Journal of Computational Physics 429, 110010 (2021). https://doi.org/10.1016/j.jcp.2020.110010
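The core idea of the Deep Energy Method combined here with a CANN is to minimize the potential energy of a neural displacement field rather than solving the weak form directly. The following sketch illustrates this for a one-dimensional bar; the small network, material data and load are illustrative assumptions and not the coupled CANN-DEM setup of [4].

# Sketch of the Deep Energy Method idea for a 1D bar (hypothetical setup):
# minimize total potential energy of a neural displacement field.
import jax
import jax.numpy as jnp

E, f, L = 1.0, 1.0, 1.0                          # modulus, body force, bar length
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params = (0.1 * jax.random.normal(k1, (1, 16)), jnp.zeros(16),
          0.1 * jax.random.normal(k2, (16, 1)), jnp.zeros(1))

def u(params, x):
    w1, b1, w2, b2 = params
    h = jnp.tanh(x[:, None] @ w1 + b1)
    return x * (h @ w2 + b2)[:, 0]               # multiplying by x enforces u(0) = 0

def potential_energy(params, x):
    u_scalar = lambda xi: u(params, jnp.array([xi]))[0]
    dudx = jax.vmap(jax.grad(u_scalar))(x)       # strains at collocation points
    strain_energy = 0.5 * E * jnp.mean(dudx**2) * L
    external_work = f * jnp.mean(u(params, x)) * L
    return strain_energy - external_work         # quantity minimized during training

x = jnp.linspace(0.0, L, 64)
grads = jax.grad(potential_energy)(params, x)    # gradients for one optimizer step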
Numerical simulations of turbulent flows represent a significant challenge in fluid dynamics due to their complexity and high computational cost. High-resolution techniques such as Large Eddy Simulation (LES) are able to capture fine details of turbulent structures but are generally not computationally affordable, particularly for technologically relevant problems. Recent advances in machine learning, specifically in generative probabilistic models, offer promising alternatives for turbulence modeling to overcome this issue. For example, our previous work introduced generative adversarial networks (GAN) as a mathematically well-founded approach for turbulence modeling and demonstrated the generalization capabilities of GAN-based synthetic turbulence generators under geometric flow configuration changes. In this work, we compare three generative models - Variational Autoencoders (VAE), Deep Convolutional GAN (DCGAN), and Denoising Diffusion Probabilistic Models (DDPM) - in simulating a 2D Kármán vortex street around a fixed cylinder. Training data were obtained by means of LES. We evaluate each model's ability to capture the statistical properties and spatial structures of the turbulent flow. Our results demonstrate that DDPM and DCGAN effectively replicate the flow distribution, highlighting their potential as efficient and accurate tools for turbulence modeling. We find a strong argument for DCGANs: although they are more difficult to train, they offer the fastest inference and training times, require less training data than VAEs and DDPMs, and provide the results most closely aligned with the input flow. In contrast, VAEs train and generate samples quickly but do not produce adequate results, and DDPMs, whilst effective, are significantly slower in both inference and training.
Live broadcast provided in LCC to Rooms 1, 2, 3, 7, 8, 9 (next to Aula Magna) and to Room 001 in building A30 (Rychlewskiego 2, Poznań).
We would be grateful if you could use the link below to confirm your attendance. In the form, please provide your name and surname.
https://docs.google.com/forms/d/e/1FAIpQLScRJuOksdx_2zpKbT5PvAhdD3g2_VugfOU5ApaVZqNp5xpg_Q/viewform
Professor Andrzej Dragan is a Polish theoretical physicist, photographer, composer, filmmaker, and science popularizer. He studied at the University of Warsaw, where he defended his Ph.D. with distinction in 2006. He is a professor of physical sciences at the Faculty of Physics at the University of Warsaw and a visiting professor at the National University of Singapore. He specializes in relativistic quantum information, in which he habilitated in 2014 and obtained the title of professor in 2023. He has also worked at Imperial College London and the University of Nottingham. His research includes quantum optics, relativistic quantum information theory, and quantum field theory in curved spacetimes.
Professor Dragan is the author of 50 scientific papers and books. He has received many prestigious scientific awards, including those from the Polish Physical Society and the Foundation for Polish Science. He has twice been a scholar at the University of Oxford. He actively promotes science by participating in TEDx conferences and giving interviews.
Professor Dragan has also gained recognition as an artist. As a photographer, he developed a unique portrait style known as the Dragan effect. His works have been exhibited in international galleries, and his photos have been published on the covers of magazines worldwide. He has also created music, short films, commercials, and music videos.
In this public lecture he will explore the question "Do quantum measurements affect the past?". His short answer is "Probably not...".
Multibody dynamics enables the simulation of a wide variety of systems, all characterized by having multiple parts in relative motion with one another. Applications span from biological to engineering systems, requiring diverse capabilities which range from real-time simulation to high-fidelity modeling of complex multidisciplinary systems. The goal of this mini-symposium is to present a view on the latest developments in models and advanced numerical methods in multibody dynamics. The focus is on techniques that enable applications to complex real-life problems.
The energy transition is one of the great challenges for engineers in the coming decades. Because of their energy density, wind turbines will be one of the major energy sources of the future. The tasks of the energy transition comprise not only pure energy production, but also ensuring the energy supply for industrial and private consumers at any time. For economic reasons, lossy energy storage through mechanical and chemical conversion will not solve the overall problem. A possible approach is to make energy producers and consumers more adaptive with respect to the available energy. In the case of wind turbines, this is primarily a challenge for the control of the system.
In numerous projects carried out in cooperation with the wind turbine developer W2E Wind to Energy and the Institute of Automatic Control of RWTH Aachen University, the possibilities of Model Predictive Control have been investigated since 2014. The central idea of using Model Predictive Control is to overcome the problem of contrary objectives, such as power output, structural integrity and grid support. While the first studies were based on pure simulation using coupled multibody simulations, the research has focussed on field testing since 2019. From 2020 to 2024, five field tests on a 3 MW wind turbine located near Rostock (Germany) were carried out.
As the dynamic behaviour of wind turbines can be represented appropriately by coupled multibody simulation, a three-level testing procedure has been developed for applying Model Predictive Control to a real prototype. In the first stage, the control algorithms are tested against the coupled multibody simulation, forming a software-in-the-loop simulation. In the second stage, the control algorithms are deployed on the same programmable logic controller as used on the real wind turbine. The final level is field testing on the real wind turbine.
While the first field tests in 2020 and 2022 can be understood as a proof of concept, the field tests carried out in 2023 and 2024 showed that Model Predictive Control and ML-assisted MPC are able to control current wind turbines while ensuring structural integrity under arbitrary wind conditions. The presentation will give hints on developing a safe testing procedure for Model Predictive Control on real wind turbines. Furthermore, aspects of the field tests that did not work as expected will also be shared with the community.
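To make the idea of balancing contrary objectives over a prediction horizon concrete, the following toy receding-horizon sketch trades power tracking against a structural-load penalty; the scalar model, weights and gradient-descent solver are purely illustrative and are not the controller used in the field tests above.

# Generic receding-horizon (MPC) sketch with a toy scalar turbine model.
import jax
import jax.numpy as jnp

def rollout_cost(u_seq, x0, p_ref, w_power=1.0, w_load=0.1):
    def step(x, u):
        x_next = 0.9 * x + 0.1 * u               # toy rotor-speed dynamics
        power = x_next * u                        # toy power output
        cost = w_power * (power - p_ref)**2 + w_load * u**2   # tracking vs. load penalty
        return x_next, cost
    _, costs = jax.lax.scan(step, x0, u_seq)
    return jnp.sum(costs)

horizon, x = 10, 1.0
u_seq = jnp.ones(horizon)
for _ in range(50):                               # solve the open-loop problem by gradient descent
    u_seq = u_seq - 0.05 * jax.grad(rollout_cost)(u_seq, x, 1.5)
x = 0.9 * x + 0.1 * u_seq[0]                      # apply only the first control, then re-plan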
Large deformations cannot be neglected in many engineering dynamics applications due to their significant contribution to the overall system dynamics. For instance, human gait analysis often employs multibody models that conventionally represent body segments as rigid bodies. However, bones are covered with muscles and other soft tissues that are not rigidly connected with the bones but act as a wobbling mass. While the dynamics of bones can be adequately modeled under the assumption of small deformations, accurately capturing the dynamics of wobbling masses requires accounting for nonlinear elastic forces due to large deformations and nonlinear material behavior. The nodal-based floating frame of reference formulation offers an efficient computational framework for modeling linearly elastic flexible multibody systems. However, incorporating nonlinear elastic forces within the modally-reduced nodal-based floating frame of reference formulation remains challenging, particularly in a computationally efficient and non-intrusive manner. State-of-the-art approaches, such as projection-based reduction combined with hyper-reduction techniques, allow the inclusion of fully nonlinear force expressions. Despite their accuracy, these methods are computationally demanding and require low-level access to a finite element code, making them less practical for many applications. This work explores different approaches for considering nonlinear elastic forces in the modally reduced nodal-based floating frame of reference formulation, which does not require low-level access to a finite element code. Three methods are explored in this work. The first method involves using multiple floating frames to account for nonlinear effects. The second method involves the successive linearization of the nonlinear elastic forces. In the third method, we aim to derive an expression for the nonlinear elastic forces by integrating the components of the tangent stiffness matrix. To achieve this, multiple tangent stiffness matrices corresponding to different nonlinear static finite element solutions are exported for both methods two and three. We then apply matrix interpolation techniques from parametric model order reduction to obtain a computable and integrable representation of the tangent stiffness matrix, ultimately enabling a closed-form expression for the nonlinear elastic forces. Numerical experiments demonstrate the computational efficiency and accuracy of the methods considered.
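A minimal sketch of the idea behind the third method is given below: exported tangent stiffness matrices are interpolated between sampled configurations and then integrated along the deformation path to reconstruct a nonlinear elastic force. The sampled matrices, the hat-function weights and the scalar amplitude parameterization are illustrative assumptions, not the interpolation scheme used in this work.

# Sketch: interpolate exported tangent stiffness matrices and reconstruct a
# nonlinear elastic force by integrating K along a straight deformation path.
import numpy as np

q_samples = np.array([0.0, 0.5, 1.0])                  # sampled deformation amplitudes
K_samples = np.stack([np.eye(2) * (1.0 + 0.3 * s**2)   # exported tangent stiffness matrices
                      for s in q_samples])

def K_interp(s):
    # simple linear (hat-function) interpolation between neighbouring sampled matrices
    w = np.clip(1.0 - np.abs(q_samples - s) / 0.5, 0.0, None)
    w = w / w.sum()
    return np.einsum("i,ijk->jk", w, K_samples)

def elastic_force(q, direction, n_steps=50):
    # f(q) approximated by integrating K(s*q) over s in [0, 1] along a fixed direction
    s_vals = np.linspace(0.0, 1.0, n_steps)
    K_path = np.stack([K_interp(s * q) for s in s_vals]).mean(axis=0)   # simple quadrature
    return K_path @ (q * direction)

f = elastic_force(0.8, np.array([1.0, 0.0]))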
Fatigue estimation is an essential aspect of robust machine design. To prevent potential problems with fatigue in operating machines, typical designs are conservative in excluding fatigue damage under predicted working conditions. However, such an approach results in heavy, bulky, energy-inefficient designs with restricted payload and operation speed. We need a robust and online fatigue assessment system (virtual sensor) to enable lightweight designs with modern materials, like high-strength steel and composites. Virtual sensing of fatigue assists an operator, guides the control system, and monitors machine damage.
We propose a framework based on recovering stresses from flexible real-time multibody simulations and rainflow cycle counting of the nominal stresses of a structure to provide an online fatigue assessment system.
The traditional approach to estimating fatigue in complex systems uses very detailed finite element models to obtain stresses. Typically, that implies that only a small fragment of the structure is considered in the finite element analysis, with the boundary conditions obtained from significantly simplified simulations of the entire system. While this approach has proven to provide sufficiently accurate results in numerous scenarios, its application is limited by two inherent properties of the method. The first relates to the computational cost of detailed finite element analysis, which makes it infeasible to perform such simulations in real time. The second is the accuracy limitation resulting from the boundary conditions coming from simplified dynamic simulations.
In the suggested approach, we aim to relax the limitation mentioned above related to the computational speed while keeping the accuracy of stress calculations at a sufficient level.
We study techniques for recovering stresses directly from dynamic simulations with reduced-order models and investigate the feasibility of obtaining a stress history suitable for fatigue estimation in this manner. A model of the PATU crane available at LUT for research purposes was used to test the proposed approach. The technique of storing stress modes to generate a reduced order model and using them to calculate stress fields in the entire structure was computationally effective for the stated purposes. Conclusions about requirements for the finite element mesh used for generating reduced order models and the parameters of the reduction method for achieving the desired accuracy of dynamic stress fields in the regions of interest are drawn.
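The stress-mode idea described above can be sketched as follows: the stress history is reconstructed as a linear combination of stored stress modes weighted by the modal coordinates of the reduced-order simulation, and the resulting hot-spot history is passed to a cycle-counting routine. Mode shapes, coordinates and the rainflow placeholder are illustrative assumptions, not the PATU crane model data.

# Sketch: stress recovery from reduced-order multibody results via stored stress modes.
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_steps, n_points = 4, 1000, 250
stress_modes = rng.random((n_modes, n_points))     # stress mode shapes (e.g. per node)
q = rng.random((n_steps, n_modes))                 # modal coordinates from the real-time simulation

stress_history = q @ stress_modes                  # sigma_j(t) = sum_i q_i(t) * phi_ij
hot_spot = np.argmax(stress_history.max(axis=0))   # node with the largest peak stress

def count_cycles(signal):
    # placeholder for a rainflow counting routine returning (range, count) pairs
    raise NotImplementedError

# cycles = count_cycles(stress_history[:, hot_spot])   # input to a damage accumulation rule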
Magnetic Track Brakes (MTBs) are used in railway systems as emergency braking mechanisms that involve mechanical contact between a braking element (pole shoe) and the rail. Accurate prediction of both local and global contact forces and associated deformations occurring during braking is essential for optimizing the performance of the overall braking system. To this end, it is necessary to develop a precise and computationally efficient sliding contact model that can later be used to study the multibody dynamics of the entire braking system.
The proposed model uses quasi-static assumptions to simplify the complex contact dynamics; furthermore, it relies on elastic half-space theory for localized contact interactions. The study focuses exclusively on mechanical loading, excluding electromagnetic forces for the sake of simplicity and also because the source of the loading, whether mechanical or electromagnetic, is irrelevant for the demonstrated model and effects. The model employs a quasi-rigid body approach, allowing overall rigid body motion while permitting localized elastic deformations at the contact surfaces only. The elastic half-space theory is used to enhance computational efficiency, eliminating the need for extensive parameter identification, which often requires costly experimental studies. The model incorporates Coulomb friction for sliding interactions and simplifies computation by pre-calculating matrix coefficients for undeformed states, making it highly efficient for quasi-static simulations.
The variation in the contact pattern formed on the contact surface of the pole shoe sliding at different speeds on a frictional surface, as well as the distribution of forces across this surface, has been investigated. Comparisons of the obtained results against finite element simulations show good agreement in the global force distribution on the contact areas but highlight limitations at the boundaries where stress concentrations occur. The comparisons indicate that this approach significantly reduces computational effort while maintaining a balance between accuracy and efficiency. Additionally, the parametric analysis explores how changes in the pole shoe's motion and loading conditions affect the numerical results, emphasizing the model's efficiency and practicality for scenarios with small elastic deformations. The model demonstrates strong applicability for integration within multibody dynamic simulations, offering an efficient tool for evaluating MTB performance under operational conditions.
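As a rough illustration of the half-space influence-coefficient idea with pre-computed matrix coefficients, the sketch below builds a Boussinesq-type point-force influence matrix for a strip of surface elements, regularizes the self term with an effective radius, and removes tensile forces with a crude active-set loop. Discretization, material data and the regularization are simplifying assumptions, not the authors' model.

# Sketch: elastic half-space influence matrix for a sliding contact patch.
import numpy as np

E, nu = 210e9, 0.3
a = 1e-3                                      # element size [m]
xs = np.arange(10) * a                        # 1D strip of surface elements
X1, X2 = np.meshgrid(xs, xs)
r = np.abs(X1 - X2)
r[r == 0] = 0.5 * a                           # regularize the self term with an effective radius
C = (1.0 - nu**2) / (np.pi * E * r)           # deflection per unit force (point-force approximation)

def solve_contact(gap):
    active = np.ones(len(gap), dtype=bool)
    f = np.zeros(len(gap))
    for _ in range(20):                       # simple active-set style iteration
        f[:] = 0.0
        f[active] = np.linalg.solve(C[np.ix_(active, active)], gap[active])
        if np.all(f >= 0.0):
            break
        active &= f >= 0.0                    # drop elements carrying tensile forces
    return f

normal_forces = solve_contact(gap=np.full(10, 1e-6))   # prescribed approach [m]
friction_forces = 0.3 * normal_forces                   # Coulomb friction during sliding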
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
Coverings made of technical fabrics are a solution widely used by engineers to cover a large area without the need for intermediate supports. They are mainly made of two types of composites: polyester threads coated with PVC and fiberglass threads coated with Teflon. Manufacturers of both types of fabrics guarantee their durability for 10 and sometimes even 50 years. However, coverings made of such fabrics sometimes operate in difficult environmental conditions. Examples are the membrane hanging roof of the Forest Opera in Sopot (glass threads – erected in 2012) and the road salt warehouse near Wejherowo (polyester threads – erected in 2022). In the first case, especially in summer, the fabric is exposed to high temperatures, while in the second case, it is exposed to an aggressive environment (moisture with a high salt content).
After several years of operation, both roofs failed. In the Forest Opera covering, under the influence of a heavy snow load in February 2021, one of the segments of the roof cover was torn. The roof of the warehouse near Wejherowo was completely destroyed after just one year of operation in the winter of 2023. In order to determine the causes of the failure, tests of the mechanical properties of the technical woven fabric from which they were made were carried out. In the case of the Forest Opera, tests of the strength of the fabric were also carried out before the covering was made. In the case of the covering near Wejherowo, only the data of the fabric manufacturer was available.
In the case of the coated woven fabric used in the Forest Opera, a decrease in the load-bearing capacity of the fabric of up to 30% was found. It was determined that, in addition to the decrease in the load-bearing capacity of the fabric, the cause of the failure was the accumulation of snow in the lower part of the destroyed segment.
In the case of the warehouse near Wejherowo, no significant decrease in load capacity was found. It was established that the disaster was caused by serious errors in the design of the roofing, and the direct cause of the failure was the destruction of the steel beams with which the roof was tensioned.
During the presentation, the results of laboratory tests and finite element calculations of both coverings will be shown.
The static design of reinforced concrete slabs is mainly based on finite element calculations. Ever tighter construction schedules and cost pressure on construction sites ultimately mean that the slab support times specified in the static calculation can no longer be met. On site, after the early stripping of the ceilings, temporary ceiling supports are installed in order to comply with the required support time. The same scaffolding uprights are used here as for the production of the ceiling. This common procedure is not examined in the context of the static calculation of the floor slab.
In addition, the dimensioning of the reinforced concrete slabs is carried out taking into account the cracked state. The latter, however, is undesirable in classic residential buildings, where ceiling soffits are painted white as standard. In addition, climatic conditions and the construction phases are not taken into account in the calculations. This leads to problems with the subsequent finishing trades, for example with the façade assembly or the installation of the floor covering. In order to investigate real ceiling deformations, a research project was carried out in cooperation between the Vienna University of Technology and PORR Bau GmbH [1]. The aim was to determine the deformations of ceilings, taking into account formwork and scaffolding uprights. For this reason, during the construction of a residential complex in Vienna's 10th district, deformation measurements of the different ceilings were carried out from the day of construction to the installation of the floor structure.
In this contribution, the empirically determined load approach [2] for finite element calculations is extended to account for different formwork support times of the floor slabs, with a focus on the multi-storey support of slabs that have not yet reached their full elastic modulus. The research project is intended to help prevent damage and to optimize the construction processes and the time required for the subsequent finishing work.
[1] S. Filipas: Experimental Validation of an Innovative Method for Reducing the Formwork Support Time of Reinforced Concrete Slabs, Master Thesis, Vienna University of Technology, 2025, in German.
[2] H.W. Müllner, S. Akyol, P. Hofer: Experimental Validation of an Innovative Method for Minimization of Deformation Tolerances of Reinforced Concrete Ceilings; PAMM, 24 (2024), https://doi.org/10.1002/pamm.202400036.
Sandwich plates with perforated metal facings are increasingly used in acoustic constructions, where effective management of sound wave scattering and damping is essential, such as along roads and highways as components of acoustic barriers. This paper presents an experimental and numerical analysis of acoustic sandwich plates with a mineral wool core and perforated metal facings. The study focuses on the proper modeling of the metal facing with perforation bands. From a mechanical point of view, the facing is not bonded to the core in the perforated areas, meaning there is no cooperation between the facing and the core in these regions.
Therefore, to assess the behavior of the panels with perforated facings, the experimental research concentrated on two aspects: determining the appropriate mechanical properties of the individual materials of the plate, including the characteristics of the perforated facing, and examining the structural behavior of the entire plate in terms of its bending strength with the perforated facing. The results showed that perforation of the metal facing significantly reduces its stiffness.
After conducting the experiments, a spatial numerical model was developed and calibrated. The numerical analysis was carried out using the Finite Element Method (FEM), which allowed for the detailed geometry of the perforation in the metal facing to be accounted for. The study considered the material variability of the facing, modeling it as a thin layer with specific thickness and known mechanical properties, such as Young’s modulus, Poisson’s ratio, and density. To accurately reproduce the actual mechanical behavior of the facing, the analysis also included the impact of perforation on its stiffness.
The conclusions from the numerical and experimental analysis emphasize the importance of precise modeling of the perforated metal facing in sandwich plates, particularly in the context of optimizing their acoustic and mechanical properties. The obtained results may serve as a basis for further research into the design of acoustic materials and the engineering of sandwich plates with perforated metal facings.
In lightweight bar-membrane structures, it is essential to shape their joints properly. Especially in tensegrity structures, the reliability of such a connection plays a key role. This work aims to perform numerical and experimental analyses of selected bar-membrane joints that can be used in tensegrity systems. Although the textile used for the membrane is a nonlinear material, it can be linearized under the assumption that it operates within a specific range of tension; thus, a linear orthotropic material model may be justified here.
Two polyvinyl chloride (PVC) coated high-tenacity polyester fiber textiles: Serge Ferrari® Flexlight Perform 502S2 and Serge Ferrari® Flexlight Classic 402N were considered for the membrane part of the tensegrity structure. Uniaxial tensile tests on those two materials were performed using Zwick/Roell Z-20 tensile testing machine to determine the tensile strength in warp and fill directions according to the PN-EN ISO 1421:2017-02 code. Based on those results, biaxial non-destructive tensile tests were performed using a BIAX 020 testing machine (Zwick/Roell, Germany) equipped with a video-extensometer to identify the orthotropic material parameters following methods A, C, and D according to the PN-EN 17117-1:2019-02 code. All samples were cruciform shaped with arms parallel to the fabric’s warp and fill so that warp and fill directions were aligned with tension axes during the tests. Then, finite element method (FEM) models simulating biaxial tensile tests were prepared and analyzed using the identified material parameters to validate the adopted numerical model. All numerical analyses were performed using the Hexagon® Marc/Mentat 2024.2 software and correlation coefficients were calculated.
After determining the parameters for the linear orthotropic material and obtaining satisfactory agreement between the numerical results and the experimental data, an initial FEM analysis of the selected membrane tensegrity structure was performed to estimate the forces that may occur in the bar-membrane joint. Based on these data, detailed models representing the selected variants of the joints were prepared and analyzed numerically. Finally, analogous experiments were performed in the laboratory. The experimental results were compared with the numerical model and showed relatively good agreement.
ACKNOWLEDGEMENTS
Financial support of these studies from the Gdańsk University of Technology by the DEC-7/2022/IDUB/III.4.3/Pu grant under the Plutonium Supporting Student Research Teams - ‘Excellence Initiative - Research University’ program is gratefully acknowledged.
The section focuses on constitutive modelling of natural and artificial materials subject to elastic and inelastic deformation processes. The aim is to compare new constitutive models formulated on both the phenomenological and the micromechanics basis to determine their validity by comparison of simulations with experiments. A wide range of open problems will be considered in the section, like multi-scale modelling of heterogeneous materials, implementation of constitutive models in numerical applications, and the virtual testing of structural systems.
In shape memory alloys, the competition between interfacial and elastic strain energy contributions leads to twin branching, i.e., to refinement of the twin laminates close to the macroscopic interface between twinned martensite and austenite. We have developed a 1D model of twin branching in which the average twin spacing is a continuous function of the distance from the austenite-twinned martensite interface. The free energy of the branched microstructure comprises the interfacial and elastic strain energy contributions, the latter calibrated using the respective upper-bound estimate derived by Seiner et al. (2020). The total free energy is then minimized, and the corresponding Euler-Lagrange equation is solved numerically using the finite element method. The results show a good agreement with the model of Seiner et al. (2020) in the entire range of physically relevant parameters. Importantly, our continuum framework admits incorporation of energy dissipation into an incremental problem of evolution of the branched microstructure. The effect of rate-independent dissipation is thus studied for an evolving microstructure. The results show that significant effects on the microstructure and energy of the system are expected only for relatively small domain sizes.
Multiphase-field models allow for the simulation of microstructural evolution and phase transformations in materials, eliminating the necessity for computationally expensive interface tracking. Furthermore, multiphase-field methods facilitate the integration of various driving forces, e.g., chemical and mechanical driving forces, within a unified framework. The multiphase-field method provides a powerful approach within the field of chemo-mechanical modeling, enabling an in-depth examination of the complex interplay between diffusional processes and mechanical deformation and stresses in materials. In light of the numerous potential applications of this method and its ongoing development, it is crucial to establish standardized benchmark tests. In this work, the coupling between chemical, capillary, and mechanical driving forces is investigated employing a coupled phase-field model [1]. Herein, chemical driving forces based on the grand potential density [2, 3] are employed and parameters from a Calphad-database are incorporated to approximate the Gibbs free energy and, thus, quantify the chemical driving forces. Furthermore, mechanical driving forces are formulated based on the mechanical jump conditions [4]. The chemo-mechanical coupling is discussed in terms of a series of phase equilibrium simulations of Fe-C binary alloys, which contribute to the development of standard benchmark examples to validate chemical, capillary, and mechanical driving forces [5]. Thermodynamic and mechanical equilibrium conditions are derived from sharp interface formulations and compared with phase-field simulations.
References
[1] B. Svendsen, P. Shanthraj and D. Raabe, J. Mech. Phys. Solids 112, 619–36, 2018.
[2] A. Choudhury and B. Nestler, Phys Rev E, Vol. 85 (2), 021602, 2012.
[3] M. Plapp, Phys Rev E, Vol. 84 (3), 031601, 2011.
[4] D. Schneider, F. Schwab, E. Schoof, A. Reiter, C. Herrmann, M. Selzer, T. Böhlke, B. Nestler, Comput. Mech., Vol. 60 (2), 203–217, 2017.
[5] T. Kannenberg, A. Prahs, B. Svendsen, B. Nestler, D. Schneider, Modelling Simul. Mater. Sci. Eng. 33(1) 015004, 2025.
Solid-state processing techniques like friction extrusion (FE), friction stir welding (FSW), and friction surfacing (FS), represent advanced methods for processing Al alloys. These techniques utilize frictional heat and intense mechanical deformation to induce microstructural evolution without reaching the melting point. This study focuses on Al-Cu-Li alloys, which are widely employed in the aerospace industry due to their exceptional strength-to-weight ratio and superior mechanical properties. The extreme stress and strain conditions inherent to these processes pose challenges for directly observing microstructural transformations, necessitating the use of numerical modelling to simulate microstructural evolution and optimize processing parameters.
Different dynamic recrystallization mechanisms (DRX) occur during FE, FS and FSW, depending on the stacking fault energy of the processed alloy but also on the specific processing conditions. To investigate these phenomena, this study presents a fully coupled computational framework that integrates the multiphase-field (MPF) method with a crystal plasticity (CP) model. The MPF method is used to simulate nucleation and grain boundary migration, while the CP model captures anisotropic mechanical behavior, including strain hardening, lattice rotation, and texture evolution. The framework employs a Fast Fourier Transform (FFT) based finite-strain elasticity solver to enhance computational efficiency. This integrated framework establishes a direct linkage between microstructural evolution and macroscopic mechanical responses, enabling a detailed understanding of the interplay between processing conditions, microstructural changes, and material performance in polycrystalline Al alloys during solid-state processing.
The so-called phase-field crystal (PFC) model [1, 2] emerged as a prominent approach to describe crystal structures at large (diffusive) timescales through a continuous, periodic order parameter representing the atomic density. It reproduces the main phenomenology of crystalline systems, including solidification and crystal growth, capillarity, lattice deformations as well as nucleation and defect kinematics. The PFC model describes self-consistently anisotropies resulting from the lattice structure and inherently includes elasticity effects. We discuss several extensions of the classical PFC model and showcase their capabilities through a broad range of benchmark simulations. We examine realistic solidification settings in both open and closed systems [3]. Then, we discuss in detail the coupling of the PFC model with a macroscopic velocity field, explicitly accounting for the relaxation of elastic excitations [4]. We analyze how this model extension further improves the description of elasticity within the PFC framework, providing deeper insights during complex grain boundary evolution. Specifically, we study direction-dependent mobilities and unidirectional motion of grain boundaries under oscillatory driving forces or cyclic thermal annealing for both bicrystal and polycrystalline microstructures. Consistent with experimental results and molecular dynamics simulations, new insights into grain boundary kinetics are provided [5].
[1] K.R. Elder, M. Katakowski, M. Haataja, M. Grant, Modeling Elasticity in Crystal Growth, Physical Review Letters 88, 245701 (2002)
[2] K.R. Elder, M. Grant, Modeling elastic and plastic deformations in nonequilibrium processing using phase field crystals, Physical Review E 70, 051605 (2004)
[3] M. Punke, S. M. Wise, A. Voigt, M. Salvalaglio, A Non-Isothermal Phase-Field Crystal Model with Lattice Expansion: Analysis and Benchmarks, arXiv preprint arXiv:2408.16449 (2024)
[4] V. Skogvoll, M. Salvalaglio, L. Angheluta, Hydrodynamic phase field crystal approach to interfaces, dislocations, and multi-grain networks, Modelling and Simulation in Materials Science and Engineering 30, 084002 (2022)
[5] C. Qiu, M. Punke, Y. Tian, Y. Han, S. Wang, Y. Su, M. Salvalaglio, X. Pan, D. J. Srolovitz, J. Han, Grain boundaries are Brownian ratchets, Science 385, 6712 (2024)
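For readers unfamiliar with the classical PFC dynamics discussed above, the following sketch performs semi-implicit pseudo-spectral time steps of the standard conserved PFC equation; grid size, the parameter r, the mean density and the time step are illustrative choices and do not correspond to the benchmarks of this contribution.

# Sketch: semi-implicit spectral time stepping of the classical PFC equation
#   d(psi)/dt = laplacian( (r + (1 + laplacian)^2) psi + psi^3 )
import numpy as np

N, L, r, dt = 128, 32 * np.pi, -0.25, 0.5
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
lin = r + (1.0 - k2)**2                       # Fourier symbol of r + (1 + laplacian)^2

rng = np.random.default_rng(0)
psi = -0.3 + 0.01 * rng.standard_normal((N, N))   # mean density with small noise

def pfc_step(psi):
    psi_hat = np.fft.fft2(psi)
    nl_hat = np.fft.fft2(psi**3)
    psi_hat = (psi_hat - dt * k2 * nl_hat) / (1.0 + dt * k2 * lin)   # implicit linear part
    return np.real(np.fft.ifft2(psi_hat))

for _ in range(100):
    psi = pfc_step(psi)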
Cryogenic structural components, including collars, bladders, and keys for superconducting magnets, as well as elements for liquid hydrogen storage systems, are often fabricated from austenitic stainless steel (e.g., 316L) due to favorable mechanical properties and corrosion resistance. However, producing these complex geometries through traditional methods is challenging. Additive manufacturing presents a promising alternative, though the numerical understanding of material behavior under extreme cryogenic conditions remains limited. This study advances the numerical simulation of deformation-induced martensitic transformation (DIMT) in additively manufactured fused filament fabricated (FFF) 316L stainless steel. Central to this effort is the prediction of tensile behavior at temperatures ranging from ambient down to 4K. Supporting experiments—including tensile tests and microstructural characterization via scanning electron microscopy (SEM) and computed tomography (CT)—provide essential input parameters and validation data for the numerical framework.
The numerical modelling in this study is based on a nonlinear, temperature-dependent finite element approach incorporating a newly developed constitutive material law. This law couples a phase-kinetic description of the martensitic transformation with a mixed kinematic/isotropic plastic hardening formulation. By solving the underlying conservation laws and boundary conditions while considering temperature-dependent material parameters, the model provides a realistic representation of stress-strain states and evolving martensitic phase fractions across a wide range of thermal conditions. The implementation within a commercial finite element software relies on user-defined subroutines that integrate the constitutive relations and transformation kinetics. The simulations use adaptive time-stepping and iterative strategies to handle highly nonlinear, cryogenic loading scenarios efficiently. After parameter identification through experimental data, the numerical results are systematically compared with measured values from tensile tests and microstructural analyses. This iterative validation process continuously enhances the predictive capability of the model.
By merging advanced material-theoretical concepts with robust numerical methods, the presented framework offers deeper insight into the mechanical behavior of additively manufactured austenitic steels under extreme thermal conditions. Ultimately, it supports the targeted design and optimization of cryogenic lightweight components and contributes to the fundamental understanding of material modeling challenges in applied mechanics.
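As background to the phase-kinetic part of the constitutive law described above, one widely used strain-driven kinetics form for deformation-induced martensite is the Olson-Cohen type law sketched below; the parameters are purely illustrative placeholders, not values identified for the FFF 316L material, and in practice they are strongly temperature dependent.

# Sketch: Olson-Cohen type martensite kinetics,
#   f(eps) = 1 - exp(-beta * (1 - exp(-alpha * eps))**n)
import numpy as np

def martensite_fraction(eps, alpha=6.0, beta=4.0, n=4.5):
    # alpha, beta are temperature dependent in real DIMT models; constants here are illustrative
    shear_band_fraction = 1.0 - np.exp(-alpha * eps)
    return 1.0 - np.exp(-beta * shear_band_fraction**n)

eps = np.linspace(0.0, 0.4, 5)
frac = martensite_fraction(eps)               # evolving martensite phase fraction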
Coupled problems arise in several applications. From a general point of view, each problem containing more than one primary field is called a coupled problem. Usually, the class of coupled problems is subdivided into volumetrically coupled problems and problems with surface coupling. The class of volumetrically coupled problems contains, e.g., fluid flow in porous solids described by mixture theory, thermo-mechanically coupled problems, chemo-mechanically coupled problems and electro- or magneto-mechanically coupled problems, while the second class includes problems like fluid-solid interaction via an interface. Common to all these problems is that the presence of different fields in the numerical treatment requires special attention with respect to the multi-field formulation and the solution strategy. The session on coupled problems deals with all aspects mentioned above, ranging from modelling aspects to numerical solution strategies.
Ferroelectrics exhibit many interesting effects, both linear and nonlinear, which is why these materials are widely used in science and industry. Recently, the nonlinear effects have also been employed in the field of energy harvesting [1,2,3], while for a long time only linear effects were exploited. Moreover, nonlinear effects are irreversible and are accompanied by energy dissipation, which generally leads to a temperature rise of the material. For modeling the characteristic nonlinear effects of ferroelectric materials, there are various possibilities, in particular microphysical and phenomenological models. Ferroelectric energy harvesting concepts are investigated theoretically, based on a microphysically motivated thermo-electromechanical multiscale constitutive framework. Taking advantage of comparatively large changes of strain and polarization due to domain switching, the electric output is higher compared to what is commonly known as piezoelectric energy harvesting. Dissipative self-heating and augmented damage accumulation, on the other hand, may impede the operability of the harvesting device, in particular if tensile stress is required for depolarization, as suggested by recent works [1,3]. A new harvesting cycle [1] thus dispenses with tensile stresses and instead exploits the potential of existing residual stresses. It is further investigated to what extent a bias field, commonly applied to support repolarization as an important stage of the cycle, can be omitted, saving considerable effort in the technical implementation. Process parameters are obtained from various simulations by Pareto optimization, considering, inter alia, the effect of ambient temperature. Furthermore, interactions between the electrical circuit and the multiscale framework are investigated, as well as their influence on the electrical output.
[1] A. Warkentin, L. Behlen and A. Ricoeur, Smart Mater. Struct. 10.1088/1361-665X/acafba (2023).
[2] L. Behlen, A. Warkentin and A. Ricoeur, Smart Mater. Struct. 30 035031 (2021).
[3] W. Kang, L. Chang and J. E. Huber, Nano Energy 93 (2022), p. 106862.
Structured magnetorheological elastomers (MREs) are composite materials which show magneto-mechanical coupling effects. They consist of magnetizable particles arranged in a chain-like pattern, embedded in a soft elastomer matrix. Explicitly resolving the microstructure of real-world samples is infeasible, necessitating the adoption of a multiscale modeling approach. In this work, we introduce a macroscale modeling framework for structured MREs using physics-augmented neural networks (PANNs) [1,2], with consideration of the material's transversely isotropic behavior. The proposed PANN macromodel adheres to essential physical principles such as objectivity, material symmetry, thermodynamic consistency, the volumetric growth condition, and ensures zero free energy, stress, and magnetization in the absence of external mechanical and magnetic loads [1]. Under the assumptions of negligible electric fields and current densities, and focusing on quasi-stationary processes, the framework employs finite element (FE) simulations based on a variational principle for magneto-hyperelastic materials [3]. For addressing the quasi-incompressibility of the matrix, a four-field formulation is chosen. The framework begins with data generation, wherein a representative volume element (RVE) undergoes sampled macroscopic magneto-mechanical loadings in FE simulations. The resulting homogenized microscale variables are used to construct a macroscale dataset for the training and testing of the PANN macromodel. Training of the PANN employs a Sobolev training approach, in which the automatic detection of the particle chain direction is incorporated [2]. When trained on the complete dataset, the PANN accurately predicts magnetization, mechanical stress, and total stress within the range of the training data.
REFERENCES
[1] Kalina, K. A., Gebhart, P., Brummund, J., Linden, L., Sun, W. and Kästner, M. Neural network-based multiscale modeling of finite strain magneto-elasticity with relaxed convexity criteria. Computer Methods in Applied Mechanics and Engineering 421, 2024.
[2] Kalina, K. A., Brummund, J., Sun, W. and Kästner, M. Neural networks meet anisotropic hyperelasticity: A framework based on generalized structure tensors and isotropic tensor functions. Preprint, arXiv:2410.03378, 2024.
[3] Gebhart, P. Skalenübergreifende Modellierung magneto-aktiver Polymere auf Grundlage energie-basierter Variationsprinzipien. PhD thesis. TU Dresden, 2024.
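To give a flavour of the physics-augmented construction discussed above, the sketch below shows a purely mechanical, isotropic free-energy network: the energy depends on invariants, the stress follows by automatic differentiation, and correction terms enforce zero energy and zero stress in the undeformed state. The tiny architecture, the weights and the invariant basis are illustrative assumptions; the magneto-mechanical and transversely isotropic ingredients of [1,2] are omitted.

# Sketch: a minimal physics-augmented free-energy network (mechanical, isotropic case).
import jax
import jax.numpy as jnp

def psi_nn(F, w):
    C = F.T @ F
    I1, J = jnp.trace(C), jnp.linalg.det(F)
    feats = jnp.array([I1 - 3.0, (J - 1.0)**2, jnp.log(J)**2])
    return jnp.sum(jax.nn.softplus(w) * feats)       # non-negative weights via softplus

def psi(F, w):
    # normalization terms: zero free energy and zero stress at F = I
    I = jnp.eye(3)
    P0 = jax.grad(psi_nn)(I, w)
    return psi_nn(F, w) - psi_nn(I, w) - jnp.sum(P0 * (F - I))

first_pk_stress = jax.grad(psi)(1.05 * jnp.eye(3), jnp.ones(3))   # stress via autodiff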
Industry 4.0 requires more and more data from the production process, e.g., on transmitted forces, torques and wear, in order to reduce expensive downtimes and to increase productivity. Direct or indirect methods can be used to collect this data. Indirect methods include, for example, determining the transmitted torque of a coupling by measuring the motor current. For direct measurement, the sensors can be attached externally to the respective machines or integrated directly into the load-carrying machine elements. This last variant is referred to as sensor-integrating machine elements.
It is advantageous to create a numerical model for such a sensor-integrating machine element in order to be able to predict the effect of design decisions on, e.g., (i) the performance of the machine element or (ii) that of the sensor. The focus of this work lies on a sensor-integrating jaw coupling with which the torque can be measured via dielectric elastomer sensors [1]. These sensors are located in the individual teeth of the gear rim of the coupling. They change their capacitance when deformed, which is used to predict the torque.
The gear rim of the jaw coupling is made of thermoplastic polyurethane, which has viscoelastic and temperature-dependent properties. In order to be able to take these temperature-dependent properties into account, an existing finite element model, in which a mechanical-electrical model is implemented, is extended with the thermal field. Due to the use case as a sensor, it was shown that a one-way coupling between the mechanical and electrical field is sufficient. With the proposed model, the heating of the gear rim due to loads with high frequencies and the effect of this heating on the capacitance change is analyzed. Due to the temperature dependency, temperature sensors must be used in the real coupling in order to take into account the change in stiffness due to a temperature change. Another important point is therefore to determine a suitable position for the temperature sensors in the model setup.
[1] Ewert, A., Menning, J. D., Prokopchuk, A., Rosenlöcher, T., Henke, E. F. M., Wallmersperger, T., & Schlecht, B. (2024). Concept of a sensor-integrating jaw coupling for measuring operating data. Forschung im Ingenieurwesen, 88(1), 28.
The consistent numerical analysis of balance laws in multiphysical systems demands specialized time integration methods. These methods must respect the system's invariants, as outlined by Noether's theorem, while adhering to key thermodynamic principles: conservation of linear and angular momentum, energy conservation, and positive dissipation. The GENERIC (General Equation for Non-Equilibrium Reversible-Irreversible Coupling) framework provides a foundation for creating such integrators in discrete systems. A thermo-visco-elastic pendulum undergoing large deformations is used as a model problem, featuring heat transfer and viscous dissipation, both of which pose challenges for numerical methods. Energy-Momentum-Entropy (EME) schemes, based on discrete gradients as proposed by Gonzalez [1], are developed for this model, demonstrating consistency with energy and entropy principles. The implications of selecting temperature, entropy, internal energy, or total energy as independent variables are explored. Special attention is given to a specific GENERIC formulation based on Mielke's approach [2], as well as the introduction of auxiliary variables like strain and their impact on the discrete system's structure. The model is extended to include electromagnetic forces, enabling the consistent simulation of coupled electro-mechanical and magneto-mechanical systems, which are essential in modern engineering. The development of GENERIC-based integrators for discrete systems represents an initial step towards applying these methods to continuum mechanics.
REFERENCES
[1] Gonzalez, O. Time integration and discrete Hamiltonian systems. Journal of Nonlinear Science 6, 449–467 (1996).
[2] Mielke, A. Formulation of thermoelastic dissipative material behavior using GENERIC. Continuum Mechanics and Thermodynamics (2011).
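To make the discrete-gradient ingredient [1] concrete, the following sketch implements the Gonzalez discrete gradient for a scalar potential and uses it in an energy-momentum step for a conservative mass-spring (elastic pendulum) system; thermal and dissipative contributions of the full GENERIC setting are deliberately omitted, and all data are illustrative.

# Sketch: Gonzalez discrete gradient and an energy-momentum step for a stiff elastic pendulum.
import numpy as np

def V(x):                                         # elastic potential of a stiff spring of length 1
    return 0.5 * 100.0 * (np.linalg.norm(x) - 1.0)**2

def gradV(x):
    n = np.linalg.norm(x)
    return 100.0 * (n - 1.0) * x / n

def discrete_gradient(x0, x1):
    dx = x1 - x0
    xm = 0.5 * (x0 + x1)
    if dx @ dx < 1e-14:
        return gradV(xm)
    g = gradV(xm)
    corr = (V(x1) - V(x0) - g @ dx) / (dx @ dx)   # enforces exact directionality / energy balance
    return g + corr * dx

def em_step(x, v, dt=0.01, m=1.0, iters=20):
    x1, v1 = x + dt * v, v.copy()                 # explicit predictor
    for _ in range(iters):                        # fixed-point iteration for the implicit step
        dgV = discrete_gradient(x, x1)
        v1 = v - dt / m * dgV
        x1 = x + 0.5 * dt * (v + v1)
    return x1, v1

x, v = np.array([1.2, 0.0]), np.array([0.0, 0.5])
for _ in range(100):
    x, v = em_step(x, v)                          # total energy is preserved at the fixed point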
We propose a unifying approach combining a mixed polyconvexity-inspired framework with the thermodynamically consistent GENERIC (General Equation for Non-Equilibrium Reversible-Irreversible Coupling) formulation to simulate coupled nonlinear thermo-elastodynamic problems while preserving specific structural properties, depending on the chosen thermodynamic variable. GENERIC provides a thermodynamically consistent framework that splits the underlying evolution equations additively into reversible and irreversible parts (see [5]). By selecting suitable thermodynamic variables, GENERIC ensures structural properties such as the conservation of total energy or a non-decreasing total entropy when discretized in time and space (see e.g. [4]). Building on this, we extend the framework to incorporate a mixed Hu-Washizu type formulation, inspired by the polyconvex stored energy functions considered in [1]. This formulation employs the properties of the tensor cross product [2] to enforce the right Cauchy-Green strain tensor, along with its cofactor and determinant, as independent variables through a set of kinematic constraints. We apply the special operator form of GENERIC (see [3]) to this mixed formulation to obtain a new mixed GENERIC formalism and subsequently the strong form of a mixed nonlinear fully coupled thermo-elastodynamic framework, where the system's temperature, internal energy or entropy can be chosen as the thermodynamic variable, each offering distinct advantages. Eventually, the numerical performance of the newly designed method is investigated in representative numerical examples.
[1] Betsch, P., Janz, A., and Hesch, C. A mixed variational framework for the design of energy-momentum schemes inspired by the structure of polyconvex stored energy functions. In: Comput. Methods Appl. Mech. Engrg., 335: 660–696, 2018.
[2] Bonet, J., Gil, A. J., and Ortigosa, R. On a tensor cross product based formulation of large strain solid mechanics. In: Int. J. Solids Structures, 84: 49–63, 2016.
[3] Mielke, A. Formulation of thermoelastic dissipative material behavior using GENERIC. In: Continuum Mech. Thermodyn., 23: 233–256, 2011.
[4] Schiebl, M. and Betsch, P. Structure-preserving space-time discretization of large-strain thermo-viscoelasticity in the framework of GENERIC. In: Int. J. Numer. Methods. Eng., 122(14),
[5] Öttinger, H. Beyond equilibrium thermodynamics. John Wiley & Sons, 2005.
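The tensor cross product used above [2] can be stated compactly as (A x B)_{iI} = eps_{ijk} eps_{IJK} A_{jJ} B_{kK}, so that the cofactor obeys cof(F) = 0.5 * (F x F). The short sketch below verifies this identity numerically; it is a self-contained illustration, not part of the proposed mixed formulation.

# Sketch: tensor cross product of two second-order tensors and the cofactor identity.
import numpy as np

def levi_civita():
    e = np.zeros((3, 3, 3))
    e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1.0
    e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1.0
    return e

def tensor_cross(A, B):
    e = levi_civita()
    # (A x B)_{iI} = eps_{ijk} eps_{IJK} A_{jJ} B_{kK}
    return np.einsum("ijk,IJK,jJ,kK->iI", e, e, A, B)

F = np.eye(3) + 0.1 * np.random.default_rng(0).random((3, 3))
cof = 0.5 * tensor_cross(F, F)
assert np.allclose(cof, np.linalg.det(F) * np.linalg.inv(F).T)   # cof(F) = det(F) F^{-T}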
The topic of this session is the analysis and modeling of turbulent non-reactive and reactive flows based on DNS, LES, RANS, and experiments. A special focus is on fundamentals in turbulence, turbulent reactive flows, turbulent multi-phase flows, turbulence of atmosphere, atmosphere/ocean interaction, modeling and simulation of complex turbulent flows, the interface of numerical algorithms, chemical and physical modeling, as well as high-performance computing with its application to turbulence.
In turbulent reactive flows, the chemical reactions occur at scales that are even smaller than the smallest turbulence scales. As a result, the highly non-linear chemical reaction source terms appear in unclosed (averaged or filtered) form and must be modeled in a similar fashion in both RANS and LES approaches. The probability density function (PDF) method offers the distinct advantage of being able to treat non-linear chemical reactions without any assumptions or approximations, a capability not offered by any other approach. In this study, the LES/PDF methodology and an OpenFOAM-based numerical solution algorithm are presented. A hybrid solution method is used to solve the filtered mass and momentum equations (LES solver), while a Lagrangian particle-based Monte Carlo algorithm (PDF solver) is used to solve the transport equation of the modeled filtered joint PDF of compositions. Both LES and PDF solvers are developed entirely within the OpenFOAM framework and are designed to simulate turbulent reacting flows in complex geometries of practical combustion devices. The combustion chemistry is treated using the ISAT method and differential diffusion is partially taken into account. Sample simulation results are presented to show the performance of the LES/PDF methodology and numerical solution algorithm. The method is first applied to the non-swirling Cambridge/Sandia turbulent stratified flames, including the premixed (SwB1), moderately stratified (SwB5) and highly stratified (SwB9) cases. The methane chemistry is represented by the 16-species reduced ARM1 mechanism. The results are found to be in good agreement with the experimental data. The method is then applied to the swirling cases in various stratification conditions. The results are again found to be in good agreement with the experimental data, demonstrating the superior performance of the LES/PDF methodology.
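For readers unfamiliar with particle-based PDF methods, the sketch below shows one fractional step for notional particles in a single cell using the IEM (Interaction by Exchange with the Mean) mixing model followed by a placeholder reaction step. This is a generic illustration of the Lagrangian Monte Carlo idea, not the specific closure, mixing model or ISAT-based chemistry of the study above.

# Sketch: mixing (IEM) and reaction fractional steps for notional particles.
import numpy as np

def reaction_rate(phi):
    return -2.0 * phi * (1.0 - phi)           # placeholder single-scalar source term

def pdf_substep(phi, dt, omega, c_phi=2.0):
    mean = phi.mean()                          # cell-filtered mean over the particle ensemble
    phi = mean + (phi - mean) * np.exp(-0.5 * c_phi * omega * dt)   # IEM mixing, integrated exactly
    return phi + dt * reaction_rate(phi)       # explicit reaction step per particle

rng = np.random.default_rng(1)
particles = rng.uniform(0.0, 1.0, 1000)        # notional-particle compositions in one LES cell
for _ in range(100):
    particles = pdf_substep(particles, dt=1e-3, omega=50.0)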
To estimate the kinetic parameters of surface catalytic reactions, idealized reactor models such as the CSTR, stagnation-point flow, or plug flow are ubiquitously exploited. This allows the system to be described by either an analytical solution or a simple ODE-based one. However, modern in-situ atomic-scale surface characterization experiments seldom allow such flow geometries, e.g. because of the need to install devices like probes and pumps in the vicinity of the catalyst. Also in such scenarios, the coupling of transport and kinetics can be expected to be crucial, and computational modelling requires full-scale CFD, since the above-mentioned models may lead to misleading information about the surface coverage, activity or reaction conditions over the surface of the catalyst. However, the concomitant huge computational burden prevents conventional CFD from being a practical tool in these cases beyond a few validation runs, particularly when solving optimization problems for kinetic parameter estimation. Therefore, there is a need for surrogate or reduced-order CFD models for more general flows and arbitrary reactor geometries.
In this study, we present a kinetic parameter estimation case study employing a reduced-order CFD method (ROM) for non-ideal reactor cases. The ROM relies on the limiting case where experiments are conducted with an excess of one species, e.g. a buffer gas or one of the reactants. In this limit, mass density and transport coefficients become independent of the concentration changes caused by the catalytic conversion. This allows the decoupling of transport and kinetics: only the transport needs to be solved in the pre-processing (the offline phase of the ROM), which, while still being a CFD problem, is rather cheap compared to conventional ROM approaches. The coupling of transport and kinetics then reduces to a small nonlinear algebraic problem with a computational complexity comparable to a simple CSTR model, while nevertheless accounting for the non-ideal flow behavior expected from conventional CFD. Together with the offline phase being completely independent of the kinetic model, this makes the approach particularly suited for extensive testing of kinetic models or parameter estimation for these models based on complex experimental setups, e.g. combinations of surface characterization with non-intrusive concentration imaging.
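To make the structure of such an online coupling problem concrete, the following minimal sketch (entirely our own illustration; the transport matrix M, the rate law and all parameters are placeholders, not the authors' model) solves a small nonlinear algebraic balance between an offline-computed transport operator and a surface rate law:

import numpy as np
from scipy.optimize import fsolve

# Hypothetical offline result: a transport matrix M (n x n) that maps surface
# reaction fluxes to concentration changes at the catalyst surface cells.
rng = np.random.default_rng(0)
n = 20
M = -0.05 * np.eye(n) - 0.01 * rng.random((n, n))   # placeholder transport operator
c_in = np.ones(n)                                    # inlet concentration at each surface cell

def rate(c, k):
    # Placeholder first-order surface rate law r = k * c.
    return k * c

def residual(c, k):
    # Online coupling: surface concentrations balance transport and reaction.
    return c - (c_in + M @ rate(c, k))

k_guess = 2.0
c_surface = fsolve(residual, x0=np.ones(n), args=(k_guess,))
print("mean surface concentration:", c_surface.mean())

In a parameter estimation loop, only this cheap nonlinear solve is repeated for each candidate rate constant, while the transport operator is computed once offline.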
The pressure to reduce greenhouse gas emissions is growing, which demands new and innovative technologies for mobile as well as stationary energy production. CO2 methanation offers a pathway to reduce greenhouse gas emissions by directly converting CO2 to CH4. It also plays a crucial role in "power-to-gas" (P2G) technologies by providing an approach to store excess renewable energy in the form of methane in the existing natural gas infrastructure without requiring investment in additional storage infrastructure, e.g. for H2, while the stored energy can be used when needed. However, methanation is a complex process due to its exothermic nature, the interaction of the gas species with the catalyst, and possible catalyst degradation. Therefore, a deeper understanding is required of the methanation reaction, its different reaction pathways and side reactions. In this work, we aim to understand the direct production of synthetic natural gas from CO2 and H2 in a Sabatier process with the help of experiments over a Ni/Al2O3 catalyst. A detailed surface reaction mechanism is developed to extend the study numerically by validating the simulation results against the experimental data. A one-dimensional model, LOGEcat, based on a single-channel catalyst model, is used for kinetic modelling. Experiments as well as simulations have been performed at various conditions, such as temperature variation, N2 dilution of the inlet composition, and different H2/CO2 ratios. We have successfully captured the experimental trends using the kinetic model developed for the conditions considered in the analysis. The backward reactions become dominant at higher temperatures, and the methanation is kinetically limited at low temperatures. We note that the conversion of H2 and CO2 to CH4 reaches a maximum around 623 K and that a higher H2/CO2 ratio further increases the conversion of CO2 to CH4 for ratios above H2/CO2 = 4.
The methanation of carbon monoxide (CO) and carbon dioxide (CO₂) over nickel-based catalysts plays a critical role in addressing global energy and environmental challenges. As a cornerstone process in synthetic natural gas (SNG) production, catalytic methanation enables renewable energy storage and its use in transportation. It also underpins CO₂ utilization strategies, converting greenhouse gases into valuable methane, and supports hydrogen purification for industrial applications. These capabilities position methanation as a vital contributor to decarbonization and sustainable energy systems. This study investigates the reaction kinetics of methanation over a Ni-CeO₂ catalyst using a one-dimensional computational model. The model provides an efficient framework for simulating catalytic reactions and evaluating the influence of different reaction conditions. A detailed reaction mechanism will be developed, incorporating irreversible elementary steps parameterized by pre-exponential factors, activation energies, and temperature exponents of the modified Arrhenius rate expression. A sensitivity analysis will highlight the key kinetic parameters affecting the system performance. The model will be validated against experimental data before being applied to new, unexplored simulation conditions. This work aims to understand the kinetics involved in methanation processes using Ni as the active material and CeO₂ or Ce-Sm mixed oxides as support and promoter. The redox capability of ceria potentially increases the activity of the catalyst by facilitating oxygen exchange between different reaction intermediates. This oxygen storage capacity of ceria is enhanced by incorporating samarium into the ceria lattice. This contributes to the design of more efficient catalytic systems for sustainable energy applications.
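For reference, the modified Arrhenius form commonly used to parameterize such elementary steps is (generic notation; the actual values for the Ni-CeO₂ mechanism are not given in this abstract)
\[
k(T) = A\, T^{\beta} \exp\!\left(-\frac{E_a}{R T}\right),
\]
with pre-exponential factor A, temperature exponent β, activation energy E_a and gas constant R.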
The understanding and control of interfacial phenomena is one of the main challenges in multiphase fluid mechanics, at the crossroads of scientific disciplines like Mathematics, Physics, Chemistry and Engineering. Examples are particle-laden flows, bubble columns, flows with cavitation, jet atomization, casting, oil recovery, film, boiling and foaming flows, as well as spreading and dewetting of (complex) liquids and biofluids. All these systems are central for technological advances in the chemical, pharmaceutical, energy, environmental and food industries. In addition, their behavior depends strongly on the typical time and length scales under consideration, which are, for example, crucial for the development of micro- and nanofluidics. In the latter case the validity of continuum fluid mechanics might even be questionable. The goal of this section is to provide an overview of the latest developments in this area, covering models at different scales, numerical, statistical and cognitive methods as well as experimental techniques, but also surveying new physical insight and recent technical advancements.
This presentation focuses on the development and application of Extended Discontinuous Galerkin (XDG) methods, which have been developed to simulate multiphase flows with high accuracy. From a numerical point of view, the challenge of multiphase flows is the low regularity of the solution. Due to abrupt changes in density and viscosity and due to surface tension effects, there are discontinuities in the pressure and in the velocity gradient. If evaporation is present, there is even a jump in the velocity. The XDG method addresses these issues by offering high-accuracy approximations and solutions for such discontinuous phenomena. The fluid interface is embedded within a Cartesian background mesh, and the discontinuous finite elements defined on the background mesh are extended in such a fashion that they are capable of approximating singularities (e.g., jumps and kinks) in the pressure and velocity fields with a high order of accuracy. This talk aims to provide a comprehensive overview of the XDG methodology. Furthermore, various results of the application of the XDG method to multiphase flows will be shown, e.g. droplet rebound, wetting and dewetting, and evaporating droplets on soft surfaces in a Leidenfrost state.
Computational Fluid Dynamics is becoming increasingly important for industrial applications that involve multiphase flows. For such flows at medium and large scales, the Euler-Euler approach is most frequently used and often the only feasible one. In many flow situations, the involved interfaces cover a wide range of scales, leading to different coexisting morphologies. Established simulation methods differ for the individual interfacial scales. Large interfaces are represented in a resolved manner, typically based on the one-fluid approach, e.g. Volume of Fluid (VOF) or Level Set. Unresolved (dispersed) flows are modelled using the two- or multi-fluid approach. A simulation method that requires less knowledge about the flow in advance would be desirable and should allow both interfacial structures (resolved and unresolved) to be described in a single computational domain.
The morphology-adaptive multifield two-fluid model MultiMorph, developed at HZDR, is able to handle unresolved and resolved interfacial structures coexisting in the computational domain with a unified set of equations. An interfacial drag formulation for large interfacial structures is used to describe them in a VOF-like manner, while the usual closure models are applied for the unresolved phases. In addition, MultiMorph allows the simulation of transitions between the morphologies. This concerns both physical transitions, such as entrainment and detrainment, and transitions resulting from a change in the size of the numerical mesh within the domain, if this changes the resolvability of a phase interface. This also includes the appropriate modelling of under-resolved interfaces and over-resolved bubbles or droplets.
Application examples include 3D simulations of bubble entrainment by a plunging jet, the complex flow at a tray of a distillation column, and drag reduction by bubble injection below a ship hull.
Meller, R., Schlegel, F., \& Lucas, D. (2021). Basic verification of a numerical framework applied to a morphology adaptive multifield two-fluid model considering bubble motions. International Journal for Numerical Methods in Fluids, 93(3), 748-773. https://doi.org/10.1002/fld.4907
Meller, R., Tekavčič, M., Krull, B., \& Schlegel, F. (2023). Momentum exchange modeling for coarsely resolved interfaces in a multifield two-fluid model. International Journal for Numerical Methods in Fluids, 95(9), 1521-1545. https://doi.org/10.1002/fld.5215
Schlegel, F., Meller, R., Krull, B., Lehnigk, R., Tekavčič, M. (2023). OpenFOAM-Hybrid - A Morphology Adaptive Multifield Two-Fluid Model. Nuclear Science and Engineering, 197(10), 2620-2633. https://doi.org/10.1080/00295639.2022.2120316
Schlegel et al. (2024). Multiphase Code Repository by HZDR for OpenFOAM Foundation Software (Version 12-s.1-hzdr.1), Rodare. https://doi.org/10.14278/rodare.3284
Flotation is used to separate particle intermixtures in many industrial applications, most notably in the mining industry for extracting valuable minerals from mined ores. During the flotation process, particles attach to air bubbles based on their hydrophobicity. More efficient recovery of fine and ultra-fine particles (smaller than 10 µm) benefits from highly turbulent flows and smaller bubble sizes.
We present results from a two-phase model experiment of a pressurised pneumatic flotation cell. The cell is constructed based on the principles of the Concorde Cell™ (Metso, see Jameson, Miner. Eng. 23, 835–841, 2010 and Yáñez et al., Miner. Eng. 206, 108538, 2024). In a vertical pipe, air bubbles are generated by a plunging jet: a water jet impinges on a free water surface, entrains the surrounding air and generates sub-millimetre sized bubbles. These are transported downwards to the lower tip of the pipe, where a nozzle accelerates the flow and ejects a bubbly jet into a water-filled cell. Due to the pressure drop over the tip nozzle, the air surrounding the plunging jet has to be pressurised.
The properties of the generated bubbles and their evolution along the downcomer height are investigated by recording images of the bubbly flow using optical shadowgraphy. The bubble size is determined with a machine learning algorithm to segment the bubbles in the shadowgraphs. Control parameters of the process are the water and air flow rates, as well as the surfactant concentration in the water. The generated bubbles are found to considerably decrease in size for higher surfactant concentrations up to a certain critical concentration, beyond which the size stays constant. Along the height of the downcomer, it can also be verified that the surfactant successfully suppresses coalescence. The bubble size increases slightly with the air flow rate and decreases with the water flow rate. Both dependencies can be collapsed by expressing the bubble size as a Weber number.
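A Weber number comparing inertial and capillary forces, for instance
\[
\mathrm{We} = \frac{\rho\, u^{2}\, d_{b}}{\sigma},
\]
with liquid density ρ, a characteristic velocity u, bubble diameter d_b and surface tension σ, is the kind of dimensionless group that permits such a collapse; the specific velocity and length scales used in the study are not detailed in this abstract.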
The pseudopotential method [Shan, Chen (1993), https://doi.org/10.1103/PhysRevE.47.1815] is a popular multiphase extension to the lattice Boltzmann method (LBM). Several improvements to the method have been proposed, allowing for better thermodynamic consistency, surface tension control, or stability for high density ratios [Czelusniak et al. (2020), https://doi.org/10.1103/PhysRevE.102.033307]. However, pseudopotential LBMs are rarely tested for grid convergence, which is essential in complex applications.
The pseudopotential method is a diffuse interface approach, where the interface has a finite thickness and the balance equations for mass, momentum, and energy are applied in this region with additional terms. Discretization errors can significantly impact results near the interface, making the use of finer grids critical. In general, grid convergence studies of pseudopotential LBMs follow one of two refinement principles, which can be distinguished with the help of the Cahn number Ch. The first refinement principle increases the total number of grid nodes N along with the number of grid nodes across the interface W, approaching the continuous diffuse-interface solution, i.e. Ch = W/N = constant. The second refinement principle increases N but keeps W constant, so that Ch approaches 0. The latter reduces the thickness of the physical interface, so that the solution should approach a sharp-interface limit. In this work, we focus on the first principle (Ch = constant) and discuss how to achieve consistency in the case of a diffuse interface. Our approach is based on keeping the pressure tensor invariant in physical units for different resolutions.
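In compact form, the two refinement principles can be written as
\[
\text{(i)}\quad \mathrm{Ch}=\frac{W}{N}=\text{const.}\;(W\propto N),
\qquad
\text{(ii)}\quad W=\text{const.}\;\Rightarrow\;\mathrm{Ch}=\frac{W}{N}\to 0,
\]
where (i) keeps the interface thickness fixed in physical units and converges to the diffuse-interface solution, while (ii) approaches a sharp-interface limit.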
With our strategy, we are able to validate the second-order accuracy of the pseudopotential LBM in several benchmarks, such as a planar interface flow, a static droplet simulation, and a two-phase flow between parallel plates. In addition, we also use our procedure to validate the second-order accuracy of the pseudopotential LBM in application-relevant benchmark problems, such as the impact of a droplet on solid and structured surfaces. All test cases are implemented in the parallel open-source library OpenLB [Krause et al. (2021), https://doi.org/10.1016/j.camwa.2020.04.033].
In conclusion, our proposed validation strategy will be essential for establishing the use of efficient LBMs in industrial simulations to develop technologies such as waterproof surfaces [Evans, Bryan (1999), https://doi.org/10.1016/S0007-8506(07)63233-8] or to improve the heat exchange efficiency of droplet evaporation [Misyura et al. (2022), https://doi.org/10.1016/j.ijheatmasstransfer.2022.122843].
Waves are a ubiquitous natural phenomenon, and acoustic waves are, besides surface water waves, their most obvious representative, familiar to anybody and quantitatively known to any student of mathematics, physics or a technical subject. This is matched by a long mathematical tradition, continuing today in the accurate numerical computation of linear and nonlinear wave phenomena. Our session is devoted to the simulation and understanding of waves and wave interactions. The range of applications is thus very broad, while the focus is meant to be on the unifying physical phenomenon. In past years we have had numerous contributions from solid mechanics, porous media flow, turbulence and aeroacoustics, from crack detection to explosions.
Modeling wave propagation in soil or soil-structure domains using conventional finite element methods necessitates the simulation of extensive spatial domains to avoid reflections at artificial boundaries and to ensure the accuracy of absorbing boundary conditions. Especially for three-dimensional problems, the computational costs associated with this approach increase significantly, making it impractical for large-scale applications.
To address these challenges, the scaled boundary finite element method (SBFEM) provides an efficient alternative by reducing the near field to a minimum and defining an unbounded subdomain around it. This subdomain is constructed by scaling the soil-structure interface towards infinity, enabling the representation of infinite domains without requiring excessively large finite models. Within this framework, the reaction forces at the interface are computed using the acceleration unit-impulse response method, introducing a convolution integral into the equations of motion governing wave propagation in the bounded domain.
This contribution examines the idea of dividing the unbounded subdomain into further subdomains to enable a more efficient evaluation of the convolution integral and a faster treatment of the corresponding terms in the time solver. While the partitioning of the unbounded domain into decoupled subdomains has been proposed before, this study focuses on its evaluation and assessment of the number of divisions with respect to computational efficiency and accuracy. The results underscore the potential of this strategy to enhance computational performance while maintaining acceptable accuracy, offering significant advantages regardless of the problem size.
In mechanical engineering, structural vibrations are one of the primary causes of sound radiation. Given the growing demands for environmental protection, Noise, Vibration, and Harshness (NVH) analysis has emerged as a distinct engineering discipline in recent years. To address these NVH challenges, simulations are typically employed, following a structured process generally divided into three sequential steps: 1) structural analysis, 2) dynamic analysis, and 3) acoustic analysis. The structural analysis is considered state of the art. For the dynamic analysis, flexible multibody simulations have demonstrated their effectiveness as a tool for predicting structural vibrations. These simulations provide the boundary conditions for the acoustic analysis directly in the time domain. To cut down computational complexity, reduced flexible bodies are often used, allowing industrial applications to be computed in acceptable times.

Acoustic analyses are typically performed in the frequency domain; for transient signals, however, transformations are required, which can introduce challenges such as leakage effects. Direct calculation of sound radiation in the time domain is potentially advantageous but either computationally expensive under specific formulations or not yet sufficiently explored for practical applications. The Boundary Element Method (BEM) is particularly attractive for acoustic problems in NVH analysis, as it only requires discretizing the boundary of the vibrating structure, reducing complexity and making the method appealing to small and medium-sized enterprises. Galerkin formulations provide numerical stability but are associated with significantly high computational costs, limiting their practical applicability.

This underscores the importance of our research, which focuses on the further development of a stable boundary integral formulation using the hypersingular boundary integral operator. The formulation leads to a space-time representation, where a specifically chosen set of basis and test functions results in a system of equations with a triangular Galerkin matrix that can be efficiently solved via a time-stepping procedure. To minimize the computational effort for large models and long-term simulations in practical applications, the operators are projected onto a reduced problem. The reduced basis, which can be interpreted as acoustic mode shapes, is determined using Proper Orthogonal Decomposition (POD). This approach proves to be especially effective in combination with reduced flexible multibody simulations, where structural vibrations and, consequently, the Neumann data for the acoustic calculations are affine-linear. Preliminary investigations on practical test cases reveal a substantial reduction in computational effort with negligible errors, establishing this approach as exceptionally efficient for industrial applications.
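As a generic illustration of why a lower block-triangular space-time system allows an efficient time-stepping solve (our own toy example, not the authors' specific discretization), a block-Toeplitz system of the kind arising in time-domain boundary element formulations can be solved by marching in time, reusing the factorized diagonal block:

import numpy as np

# Hypothetical convolution weights A[0..N] (dense blocks) and right-hand sides b[0..N];
# the system sum_{j=0}^{n} A[j] x[n-j] = b[n] is lower block-triangular in time.
rng = np.random.default_rng(1)
m, N = 8, 50                                   # block size, number of time steps
A = [np.eye(m) + 0.1 * rng.random((m, m)) for _ in range(N + 1)]
b = [rng.random(m) for _ in range(N + 1)]

A0_inv = np.linalg.inv(A[0])                   # invert/factorize the diagonal block once
x = []
for n in range(N + 1):
    # subtract the "history" contribution of all previous time steps
    rhs = b[n] - sum(A[j] @ x[n - j] for j in range(1, n + 1))
    x.append(A0_inv @ rhs)                     # forward-substitution step (time marching)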
In this contribution, a finite element scheme to impose mixed boundary conditions without introducing Lagrange multipliers is presented. The strategy relies on finite element exterior calculus and a domain decomposition to interconnect two systems with different causalities. The spatial domain of the dynamical system is split into two parts by introducing an arbitrarily selected interface. Each subdomain is discretized with a mixed finite element formulation that introduces, in a natural way, a uniform boundary condition as the input. In the mixed formulations, the spaces are selected from a finite element subcomplex to obtain a stable discretization. The two systems are then interconnected by making use of a feedback interconnection. This is achieved by discretizing the boundary inputs using appropriate spaces that couple the two mixed finite element formulations. The final system includes all boundary conditions explicitly and does not contain any Lagrange multiplier. Each subdomain is integrated with an implicit midpoint scheme, decoupled from the other subdomain by means of a leapfrog scheme. The proposed strategy is tested on three different examples: the Euler-Bernoulli beam, the wave equation and the Maxwell equations. Numerical tests assess the conservation properties of the scheme and the effectiveness of the methodology.
Unstable acoustic modes in high Mach number boundary layer flows contribute to the laminar-turbulent transition and play an important role in boundary layer noise emission. Therefore, we investigate the influence of porous wall linings on the acoustic modes in high-velocity boundary layers. These porous walls, which are characterized by their porosity and porous layer thickness, are commonly used for flow stabilization. However, Fedorov et al. (2003) showed an amplification of the first mode instabilities for certain porous wall configurations.
The aim of the current work is to deepen the understanding of the destabilization mechanism and to get a complete picture of which wall configurations lead to different types of instabilities. For the inviscid problem, we find unstable modes depending on the porous layer thickness, which do not persist in the case of rigid walls. It should be noted that the amplification induced by thin porous layers is significantly different from that observed in the literature for thick layers.
For an exponential mean flow profile with a shape factor $H\approx 1.3$, the compressible Rayleigh equation, i.e. the linearized Euler equations for compressible shear flows, is solved exactly in terms of Heun functions. With the impedance wall boundary condition, the stability problem is reduced to an algebraic eigenvalue equation, which allows us to compute the complete inviscid eigenvalue spectrum without spurious modes.
Based on the knowledge that modes supersonic with respect to the far field can radiate into the far field (Tam \& Burton, 1984), we classify the newly appearing instabilities with respect to their sound radiating character. Furthermore, for subsonic boundary layers, we identify those wall parameter ranges that lead to the onset of inviscid instabilities.
This session is devoted to the mathematical analysis of natural phenomena and engineering problems. In this area PDEs play a basic role. Therefore lectures discussing analytical aspects of PDE problems as well as problems in the Calculus of Variations are welcome.
In this presentation, we address the application of variational methods to partial differential equation (PDE) systems, particularly within the context of geological phenomena. Our work leverages the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) framework and its variants, such as damped Hamiltonian or extended gradient systems, to describe porosity-wave processes. Porosity waves are fluid-filled pressure waves that propagate through porous media due to localized variations in porosity and permeability. The presentation also outlines the mathematical analysis that supports these models. The focus is on ensuring well-posedness and structure preservation. By integrating the principles of GENERIC, we ensure that the models respect key physical constraints, such as energy conservation and entropy production, while remaining adaptable to diverse geological scenarios. This is a joint work with Dirk Peschka (WIAS Berlin) and Marita Thomas (WIAS and FU Berlin) within the project C09 "Dynamics of rock dehydration on multiple scales" of the CRC 1114 "Scaling Cascades in Complex Systems".
We consider different measure-valued solvability concepts from the literature and show that they can be simplified by using the energy-variational structure of the underlying system of partial differential equations. In the considered examples, we prove that a measure-valued solution can be equivalently expressed as an energy-variational solution. The first concept represents the solution as a high-dimensional Young measure, whereas for the second concept only a scalar auxiliary variable is introduced and the formulation is relaxed to an energy-variational inequality. We investigate several examples stemming from interface dynamics, elasticity, and liquid crystals. The wide range of examples shows that this is a recurrent feature of evolution equations in general.
The Prandtl boundary layer theory has influenced many scientific disciplines, from aerodynamics to rheology. Analytically, it is nowadays widely recognized that the Prandtl equations are ill-posed in Sobolev spaces, even near certain shear flows, unless additional structural assumptions of monotonicity are imposed. However, they are well-posed in Gevrey class 2 spaces, and whether well-posedness or ill-posedness holds in spaces between Sobolev and Gevrey class 2 remains an open problem. This talk presents a recent result, in collaboration with Joshua Kortum (University of Würzburg), which constructs a suitable family of explicit solutions to the Prandtl equations around a quadratic shear flow. These solutions are derived using a formula based on hypergeometric functions, allowing for a direct classification of those exhibiting a dispersion relation of Gevrey class 2 type. Due to the explicit formulation, we further establish a result on the ill-posedness of the Prandtl equations within certain weighted Gevrey classes.
In this talk, we investigate the linearized Prandtl equations around a shear flow with a non-degenerate critical point. Under a separation-of-variables ansatz, we completely classify all separated solutions of the equation and discuss their qualitative properties. The solutions of the corresponding (algebraic) eigenvalue problem can be computed via the well-known harmonic oscillator. As a consequence, we give explicit values of the eigenvalues and eigenfunctions for which the linearized Prandtl equations are unstable. However, we also show that this instability cannot be prolonged in a simple way outside the well-known Sobolev regime (in particular not to Gevrey spaces).
Coagulation equations describe the evolution in time of a system of particles that are characterized by their volume. Multi-dimensional coagulation equations have been used in recent years in order to include information about the system of particles which cannot be otherwise incorporated. Depending on the model, we can describe the evolution of the shape, chemical composition or position in space of clusters. In this talk, we focus on a model that is inhomogeneous in space and contains a transport term in the spatial variable modeling the sedimentation of clusters. We prove local existence of mass-conserving solutions for a class of coagulation rates for which in the space homogeneous case instantaneous gelation (i.e., instantaneous loss of mass) occurs.
The session especially welcomes contributions to the following topics: uncertainty quantification; risk analysis and assessment; Bayesian methods in engineering; decision analysis; stochastic modeling (including spatio-temporal modeling); stochastic mechanics; stochastic algorithms and simulation; stochastic processes (including time series); resampling methods; stochastic networks. Applications to large scaled problems are encouraged.
Uncertainty quantification for Gaussian random fields as coefficients in partial differential equations is an often-studied problem. Here we review UQ for Lévy random fields and prove the convergence of a Karhunen-Loève-like expansion. We also deal with (learned) sparse grid quadratures for multivariate Lévy distributions and apply this to the Darcy flow equation.
Contemporary issues in the dynamics of engineering systems are becoming increasingly complex due to growing demands for reliability, durability, and efficiency in structural designs. One of the key aspects of analyzing such systems is the consideration of uncertainties, which may arise from various sources, such as material variability, geometric imperfections, or difficulties in accurately modeling boundary conditions and loads. In dynamic analysis, uncertainties have a particularly significant impact on critical parameters such as natural frequencies, damping coefficients, and dynamic responses to external excitations. There are various approaches to modeling uncertainties, including probabilistic and non-probabilistic methods. Probabilistic approaches, although widely used, require extensive data, which may not always be available. In such cases, non-probabilistic approaches, such as modeling with bounded parameter ranges, allow for obtaining satisfactory estimations even in the absence of a known probability distribution of uncertainties. In the context of systems with elements made of viscoelastic materials, estimating the probability distribution of design parameter uncertainties can be particularly challenging, while determining their upper and lower bounds is often feasible. This study presents a dynamic analysis of systems with viscoelastic elements whose parameters are uncertain. The proposed method, which combines interval analysis and the Laplace transform, is applied to systems with viscoelastic elements for the first time. The mechanical behavior of viscoelastic elements is described using rheological models. Design parameters are represented as interval numbers, where the lower and upper bounds of the parameter values are known. The Laplace transform is employed to convert the equations of motion into a linear system of equations, while the inverse Laplace transform is used to obtain the dynamic response of the system. To reduce overestimation, a typical drawback of interval analysis, the element-by-element method is applied, and the efficiency of this approach is analyzed. The effectiveness of the proposed method is demonstrated through numerical examples, and the obtained results are compared with those derived using the vertex method.
Simulative cabin noise assessment at an early design stage is indispensable when designing novel aircraft. At this stage, many crucial parameters needed for the simulation, such as material parameters and excitation, are not yet known, which makes the noise assessment challenging. Therefore, uncertainties should be considered in the simulation chain in order to obtain a more robust early noise assessment.
This contribution builds on a recently published work of the authors (https://doi.org/10.48550/arXiv.2408.08402) that allows for the inclusion of stochastic excitations in the noise assessment process and yields efficient approximations of the mean sound pressure and its covariance. In this contribution, the existing framework is extended to also include material uncertainties. Here, a low-rank decomposition of the covariance matrix of the excitation is used as input for a model order reduction algorithm to yield rapidly evaluable models for many different excitations. When we introduce uncertainties in the material parameters, however, we also have to consider the change in the underlying system matrix. Here, the covariance of the output is approximated by the First Order Second Moment method (FOSM). FOSM not only requires the covariance matrix of the excitation, but also the derivatives of the system matrices with respect to the uncertain parameters and the covariance of the uncertain parameters. While the covariance of the excitation can be computed efficiently through the previously proposed method, the latter requires efficient methods for evaluating matrix products and for solving linear systems, on which this contribution will also shed light. Subsequently, the covariance of the output is composed of a sum of these matrix products, depending on how many uncertain material parameters are considered.
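In its simplest generic form (our notation, written for a single output vector y = f(θ) with uncertain parameters θ), FOSM propagates the parameter covariance through the Jacobian of the model,
\[
\mathrm{Cov}(y) \;\approx\; J\,\mathrm{Cov}(\theta)\,J^{\top},
\qquad J = \left.\frac{\partial f}{\partial \theta}\right|_{\bar{\theta}},
\]
to which, in the setting described above, the covariance contribution stemming from the stochastic excitation is added.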
The method is developed for a simple vibroacoustic problem, a plate-cavity system, as a prior step to applying it to a full-scale aircraft cabin noise model. As a starting point, the Young's modulus is chosen as the only uncertain parameter. The excitation is chosen to be a realistic turbulent boundary layer excitation, meaning it includes inherent underlying uncertainties. With the proposed method we can predict the sound pressure inside the cavity and its covariance for changing material parameters and stochastic excitations, while maintaining a reasonable computational cost.
With the rapid adoption of Digital Twins in recent years, simulation models designed to replicate real-world physical systems have become increasingly common. To achieve accurate representations, it is typically necessary to update model parameters based on observations collected from sensors or measurements of the physical asset. However, no model can fully capture the infinitely complex nature of reality. As a result, quantifying the uncertainty in model predictions is essential for reliable decision-making. Bayesian updating frameworks provide an appealing approach for parameter calibration, inherently accounting for such uncertainties.
One often-overlooked source of error is model form uncertainty. This type of uncertainty arises from the fundamental discrepancies between the model and reality, stemming from the assumptions and simplifications made during model construction. Ignoring model form uncertainty can lead to overly confident predictions that fail to accurately reflect sensor observations. To address this, we propose an embedded model form uncertainty framework that attributes the model variability to a stochastic extension of the model's latent parameters. This approach enables the quantification of uncertainties that can be represented by a variation in the model parameters. Of particular interest are scenarios involving noisy observations or additional discrepancies that cannot be directly integrated into the model.
By incorporating uncertainty through the parameters, this method not only quantifies uncertainty in predictions but also propagates model form uncertainty to other Quantities of Interest (QoI) that rely on the same model or its parameters. Consequently, QoI computations yield more reliable values, accounting for the potential uncertainties introduced by imperfect models during parameter updating. Moreover, this approach facilitates a more comprehensive statistical analysis of QoI distributions, offering deeper insights into the model's reliability and highlighting areas for potential improvement. By incorporating model form uncertainty, decision-makers can achieve a more robust and nuanced understanding of system behavior and prediction quality.
For all fields of applications the mathematical models are primarily based on differential equations. Hence, their numerical solution plays a fundamental role in numerical mathematics. This section covers mainly the construction and the behavior of numerical methods for differential equations including those of ordinary as well as of partial differential type.
Capturing and preserving physical properties, e.g., system energy, stability and passivity, using data-driven methods is currently a highly researched topic in surrogate modeling. To ensure that the desired physical properties are retained, structure-preserving projection techniques are used in the field of model order reduction (MOR).
In this talk, we present structure-preserving MOR with nonlinear projections, which are needed for problems with slowly decaying Kolmogorov n-widths. In particular, we focus on the reduction of the class of port-Hamiltonian systems. Based on a differential geometric framework, we derive novel MOR methods, ensuring that the pH structure is preserved in the reduced-order model (ROM). Further, we present numerical results, which show that the newly presented methods preserve the interconnectedness property within the port-Hamiltonian ROMs.
This talk considers Poisson systems with a Hamiltonian describing the energy in the system. If the Hamiltonian is quadratic, the energy is exactly conserved when Gauss-Legendre methods are used as numerical integrators. Since Gauss methods are implicit Runge-Kutta methods, a system of equations must be solved at each time step. The computation can be very expensive for high-dimensional systems, which calls for fast structure-preserving iterative solvers. In the linear case, the Gauss methods can be reformulated via their respective stability functions. These stability functions coincide with the diagonal Padé approximations of the exponential function. They can be understood as matrix-valued functions that map elements of a Lie algebra to a quadratic Lie group, which characterizes the conservation of energy. When it comes to evaluation, there are two options: either the Padé approximation is computed directly, or a further reformulation based on partial fraction decomposition is considered. In both cases, one or more systems of linear equations have to be solved because inverse matrices appear. If the underlying matrix of the Poisson system is sparse, Krylov subspace methods are a good choice. A special variant of a Krylov subspace method is the Arnoldi approximation. We show that the Arnoldi method, through a clever choice of the basis of the Krylov subspace, leads to energy-conserving iterations.
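As a minimal illustration of the starting point of this approach (our own toy example with dense linear algebra and no Krylov acceleration), the one-stage Gauss method (implicit midpoint) applied to a linear Poisson system with a quadratic Hamiltonian amounts to applying the Cayley transform, i.e. the diagonal (1,1) Padé approximant of the matrix exponential, and conserves the energy up to round-off:

import numpy as np

# Linear Poisson system  dx/dt = B*Q*x  with skew-symmetric structure matrix B and
# quadratic Hamiltonian H(x) = 0.5 * x^T Q x (Q symmetric positive definite).
rng = np.random.default_rng(0)
n = 10
S = rng.random((n, n)); B = S - S.T                  # skew-symmetric structure matrix
R = rng.random((n, n)); Q = R @ R.T + n * np.eye(n)  # symmetric positive definite
A = B @ Q

h = 0.01
# Implicit midpoint = one-stage Gauss method; its stability function is the
# (1,1) Pade approximant of exp, i.e. the Cayley transform of h*A.
cayley = np.linalg.solve(np.eye(n) - 0.5 * h * A, np.eye(n) + 0.5 * h * A)

H = lambda x: 0.5 * x @ Q @ x
x = rng.random(n)
H0 = H(x)
for _ in range(1000):
    x = cayley @ x
print("relative energy drift:", abs(H(x) - H0) / H0)   # at round-off level

Building the dense Cayley transform is of course only feasible for small systems; for large sparse A, the same linear solves are performed iteratively, which is exactly where the energy-conserving Krylov iterations discussed in the talk come in.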
Accurate simulations of gas flow through pipeline networks can offer invaluable insights for network transmission and design operators. This becomes increasingly important with the integration of hydrogen-blended fuels in the development of hydrogen-based energy systems. Existing models primarily address single-component, one-dimensional flow within individual pipe elements, interconnected at network junctions. Recently, composite flow models within pipes, often utilizing mixture fraction methods, have gained significant attention. These methods employ a segregated approach to flow and composite transport, albeit within the constraints of one-dimensional modeling. Physically, due to hydrogen's low molecular weight, the dynamics of constituent flow and advection in blended gas mixtures can be challenging to predict. Exploring higher-dimensional models can provide deeper insights into the underlying physics. This study focuses on 2D and 3D pipe flow simulations for composite gases performed using finite-element space discretizations and IMEX time-stepping. A comparison is subsequently performed against existing 1D models.
In this presentation, we focus on the numerical approximation of ground states of rotating Bose-Einstein condensates. This requires the minimization of the Gross-Pitaevskii energy functional on a Riemannian manifold. As an iterative solver for finding such minimizers we propose a generalized Riemannian gradient method with Sobolev gradients and an adaptively changing metric. We prove that the scheme reduces the energy in each iteration and we further explore its global and local convergence properties. In particular, the local convergence rates can be explicitly quantified in terms of spectral gaps involving E''(u), where E is the energy functional and u a ground state. The theoretical findings are supported by numerical experiments. Additionally, we discuss recent numerical results related to the performance of the Riemannian conjugate gradient method in our setting.
In this session novel developments devoted to optimization and optimal control problems governed by ordinary or partial differential equations will be discussed. The focus is on theoretical investigations, numerical analysis, algorithmic issues as well as on application.
We discuss connections between the problem of approximate sampling from a given probability measure and the problem of minimizing functions defined on the space of probability measures, motivated by machine learning applications. For both these problems, one can construct continuous-time stochastic processes that converge to the target measure as time goes to infinity. Among many ways of studying convergence rates for such processes, we focus on the approach via functional inequalities. In particular, we discuss the Polyak-Lojasiewicz inequality on the space of measures and its relation to the classical log-Sobolev inequality, and we explain why it is a natural condition for obtaining exponential convergence of Fisher-Rao gradient flows. We also mention applications of such flows to solving min-max problems on the space of probability measures, motivated by the problem of training Generative Adversarial Networks. Based on joint papers with Razvan-Andrei Lascu (Heriot-Watt), Linshan Liu (Heriot-Watt) and Lukasz Szpruch (University of Edinburgh).
We analyze the robustness of optimal control problems with respect to spatially localized perturbations. We prove that, if the involved dynamic or stationary constraint satisfies a domain-uniform stabilizability and detectability assumption, then these localized perturbations only have a local effect on the optimal solution, even if the (uncontrolled) state equation gives rise to global transport. We provide various applications of the theory to stationary Helmholtz-type problems or dynamic equations such as advection-diffusion-reaction and transport equations.
Solving PDE-constrained generalized Nash equilibrium problems (GNEPs) poses significant mathematical and computational challenges. Inspired by applications in gas markets, we study a GNEP where multiple agents seek to maximise their individual profit on a commodity transported through the network. The dynamics of the shared state variable, representing the commodity flow, are modeled using a viscosity-regularized transport equation on the network. Each agent controls this state through their individual strategies, which are subject to private constraints and unknown to other agents. Further, each agent must abide by shared state constraints, which are managed using Moreau-Yosida regularization. As the agents’ strategies and their constraints are unknown to each other, we propose a centralized, distributed approach to solving the GNEP. While the state is solved via a global operator who observes the actions of all agents, the adjoint-based computation of the individual updates to the strategies is distributed across the agents. This natural parallelization creates an exchange of information between the global and local levels, which allows for scalability and efficiency in solving large-scale optimization problems. In this talk, we will detail the proposed method and discuss its convergence properties. Numerical experiments will illustrate the algorithm’s performance, demonstrating its robustness and efficiency.
Column liquid chromatography plays a vital role in the so-called downstream processing of biopharmaceuticals, where the goal is to extract and purify certain proteins from a mixture. Our goal is to employ a model-based approach for process optimization to improve the yield and purity of the product, while also achieving further economical and ecological benefits, e.g., due to an economical use of buffer components. Rate models in combination with suitable reaction schemes that model the specific adsorption process are often employed to describe chromatographic processes. The optimal control problems (OCPs) we are going to set up are hence governed by advection-diffusion-reaction-type partial differential equations (PDEs) with typically highly nonlinear reaction terms. Furthermore, since at least one flow reversal is often performed in practical applications to obtain sharper elution profiles, we are facing switching dynamics. Lastly, it is important to determine robust solutions in order to safeguard against, e.g., uncertain model parameters, such as reaction rates and feed composition. In this talk we present several strategies towards robustly optimal switching control applied to chromatographic separation processes. We discuss the obtained results, focusing on the quality of the product and how reversing the flow direction may play a role in terms of robustness.
Dynamics and control is an interdisciplinary section which in particular addresses mathematical systems theory and control engineering. The contributions to this section are also concerned with the mathematical understanding and design of controllers which appear in actual applications.
Simulations of complex dynamical systems using the Finite Element Method (FEM) can become computationally very expensive since large systems of equations have to be solved for multiple instances in time or frequency. Projection-based model order reduction (MOR) methods are state-of-the-art methods to compute an accurate approximate solution of the high-dimensional model with significantly less computational effort. For multi-query applications, the reduced model should also maintain parametric dependencies of the original model, which can be achieved by parametric Model Order Reduction (pMOR) methods. Many of those methods require an affine representation of the parametric dependency, which is difficult to realize for, e.g., geometric parameters [1].
PMOR based on matrix interpolation does not require such an affine parametric dependency [3]. In this method, local reduced models are computed for a set of samples in the parameter space. Afterwards, the reduced operators are transformed to a common basis and interpolated, so that reduced models can be predicted for queried parameter points; a simplified sketch of this step is given after the references below. To judge the accuracy of the predicted reduced model without having to evaluate the high-dimensional model, error estimators can be used. Recently, a survey on a-posteriori error estimators for parametric reduced models has been published [2]. All of these error estimators require the full operators, the reduced operators and the reduced basis at the queried parameter point. In pMOR by matrix interpolation, however, only the predicted reduced operators are available after the interpolation. In this work, different ways in which the error estimators can be applied in the context of pMOR by matrix interpolation are investigated and compared.
References
[1] P. Benner, S. Gugercin, and K. Willcox. A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Review, 57(4):483-531, Jan 2015.
[2] L. Feng, S. Chellappa, and P. Benner. A posteriori error estimation for model order reduction of parametric systems. Advanced Modeling and Simulation in Engineering Sciences, 11(1):5, Mar 2024.
[3] H. Peuscher, J. Mohring, R. Eid, and B. Lohmann. Parametric model order reduction by matrix interpolation. Automatisierungstechnik, 58:475-484, Aug 2010.
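To illustrate the basic mechanics of pMOR by matrix interpolation referred to above, the following sketch uses a simplified one-sided (Galerkin) variant with notation of our own choosing; the method of [3] employs a more general two-sided transformation. Local reduced operators are re-expressed with respect to a common reference basis obtained from an SVD of the collected local bases and are then interpolated over the parameter:

import numpy as np

# Hypothetical full-order operators A(p) at two parameter samples (placeholders only).
rng = np.random.default_rng(0)
n, r = 200, 10
A1 = -np.eye(n) + 0.01 * rng.random((n, n))
A2 = -2 * np.eye(n) + 0.01 * rng.random((n, n))

# Local orthonormal projection bases (random here for illustration; in practice Krylov/POD).
V1, _ = np.linalg.qr(rng.random((n, r)))
V2, _ = np.linalg.qr(rng.random((n, r)))
A1r, A2r = V1.T @ A1 @ V1, V2.T @ A2 @ V2      # local reduced operators

# Common reference basis R from an SVD of the concatenated local bases.
U, _, _ = np.linalg.svd(np.hstack([V1, V2]), full_matrices=False)
R = U[:, :r]

def to_common(V, Ar):
    T = np.linalg.inv(R.T @ V)                  # align local coordinates with R
    # Congruence transform of the reduced operator (the reduced mass matrix,
    # not shown, transforms in the same way in this Galerkin variant).
    return T.T @ Ar @ T

A1c, A2c = to_common(V1, A1r), to_common(V2, A2r)

# Entry-wise linear interpolation of the transformed reduced operators at a new parameter.
w = 0.3                                         # interpolation weight between the two samples
A_interp = (1 - w) * A1c + w * A2c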
We present a framework for developing non-intrusive reduced-order models (ROMs) to predict nonlinear dynamical systems' frequency response functions (FRFs), focusing on gear transmission systems. This approach addresses the challenge of modeling complex, multi-valued, and non-monotonic FRFs across a wide frequency range of up to 10 kHz. The methodology combines the Craig-Bampton method with harmonic balance analysis to efficiently generate high-fidelity snapshots. Data preprocessing uses parametric spline interpolation for consistent representation of non-monotonic responses. Autoencoders are employed to reduce the dimensionality of response data, while deep feedforward neural networks map input parameters to the reduced latent space. The methodology is validated against high-fidelity simulations of a single-DOF gear system with non-monotonic solutions and a real multi-DOF gear transmission system. The results show that the framework accurately predicts FRFs for nonlinear systems while minimizing computational costs. The proposed method can efficiently accelerate the optimization of nonlinear dynamical systems.
Constrained mechanical systems occur in many applications, such as the modeling of robots and other multibody systems. In this case, the motion is governed by a system of differential-algebraic equations (DAE), often with large and sparse system matrices. The problem dimension strongly influences the effectiveness of simulations for system analysis, optimization, and control, given limited computational resources. Therefore, we aim to obtain a simplified surrogate model with a small number of states that is able to accurately represent the motion and other important properties of the original high-dimensional DAE model. Classical model reduction methods intrusively exploit the system matrices to construct the projection of the high-fidelity model onto a low-dimensional subspace. In practice, the dynamical equations are often an inaccessible part of proprietary software, i.e., there is a need for equivalent model-free reduction approaches that generate reduced models using only accessible simulation data. In this work, we show an application of the non-intrusive operator inference (OpInf) method to DAE systems of index 1, 2, and 3. Considering the fact that for proper DAEs there exists an ODE realization on the so-called hidden manifold, the OpInf optimization problem directly provides the underlying ODE representation of the given DAE system in the reduced subspace. An important advantage is that only the DAE solution snapshots, in a compressed form, are required for the identification of the reduced system matrices. Stability and interpretability of the reduced-order model are guaranteed by enforcing the symmetric positive definite structure of the system operators using semidefinite programming. The numerical results show the implementation of the proposed methodology for different examples of constrained mechanical systems, tested for various loading conditions.
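A minimal sketch of the operator inference idea for a plain linear system (our own toy example and notation; the index-aware treatment of DAEs and the semidefinite-programming constraint mentioned above are not reproduced): snapshots are compressed with a POD basis, and the reduced operator is identified from the compressed data by least squares, without access to the system matrices.

import numpy as np

# Hypothetical snapshot data of a stable linear system x' = A x (A unknown to the method).
rng = np.random.default_rng(0)
n, r, nt, dt = 100, 8, 400, 1e-3
A_true = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
X = np.empty((n, nt)); X[:, 0] = rng.random(n)
for k in range(nt - 1):                      # simple explicit Euler to generate data
    X[:, k + 1] = X[:, k] + dt * A_true @ X[:, k]

# POD basis from the snapshots.
U, _, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]
Xr = V.T @ X                                 # compressed snapshots
Xr_dot = np.gradient(Xr, dt, axis=1)         # approximate reduced time derivatives

# Operator inference: least-squares fit of the reduced operator from data only.
Ar, *_ = np.linalg.lstsq(Xr.T, Xr_dot.T, rcond=None)
Ar = Ar.T
print("reduced operator shape:", Ar.shape)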
The research area of model reduction enables the acceleration of engineering design cycles through the construction of low-dimensional surrogate models. In this talk, we focus on the reduction of linear structured systems (LSS) from the perspective of balanced truncation. Such systems exhibit particular structures, which may involve second-order derivatives, time delays, or integro-differential operators. Balanced truncation eliminates states that are both hard to reach and hard to observe, based on the energy interpretation of the Gramians. However, computing the controllability and observability Gramians or their square-root factors can be challenging and computationally expensive. Here, we focus on their efficient computation for LSS. While for standard linear time-invariant systems, one can efficiently solve Lyapunov equations to compute the Gramians, there is no such general algebraic Lyapunov equation that encodes the Gramians for LSS. Instead, natural approaches consider the integral frequency domain representation to approximate the Gramians using quadrature rules. We identify these approximations with the solutions to generalized Sylvester equations, allowing us to employ and adapt an existing active sampling strategy and compute low-rank factors of the Gramians as solutions to the matrix equations. The procedure iteratively selects quadrature nodes to serve as interpolation points that provide the most relevant information while constructing low-rank solutions to the derived matrix equation. Thereby, each step involves solving a corresponding optimization problem in a low-dimensional subspace and the computation of the maximum residual error leading to the next quadrature node. In addition, we illustrate the proposed method for obtaining reduced-order models via balanced truncation on some numerical examples and compare it to standard methods for computing Gramians.
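For the standard linear time-invariant special case (our notation), the frequency-domain representation of the controllability Gramian and its quadrature-based low-rank approximation read
\[
P=\frac{1}{2\pi}\int_{-\infty}^{\infty}(\mathrm{i}\omega I-A)^{-1}BB^{*}(\mathrm{i}\omega I-A)^{-*}\,\mathrm{d}\omega
\;\approx\; ZZ^{*},\qquad
Z=\big[\sqrt{w_1}\,(\mathrm{i}\omega_1 I-A)^{-1}B,\;\dots,\;\sqrt{w_K}\,(\mathrm{i}\omega_K I-A)^{-1}B\big],
\]
with quadrature nodes ω_k and weights w_k (absorbing the factor 1/(2π)); for structured systems, the resolvent is replaced by the corresponding structured transfer matrix.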
Solving data assimilation (DA) problems often involves expensive forward model simulations to compute the expected output for multiple initial conditions. Since this quickly becomes infeasible in high dimensions, reduced models can play an important role in making such computations tractable. In particular, model reduction methods from control (systems) theory are well established for obtaining efficient reduced models while preserving the map from control input to observed output. In this talk, we present a new interpretation of the Bayesian inverse problem, which is essential in DA, as a control problem. We then discuss how established systems-theoretic methods for model order reduction can be adapted to this new setting.
Over the last decade, mathematics has become the cornerstone of Signal and Image Processing, ranging from various methods for signal reconstruction and the modelling of imaging modalities, over the classical disciplines of compression, denoising, segmentation, and registration, to feature extraction. The methodologies used include such diverse fields as harmonic analysis, inverse problems, variational analysis, mathematical statistics, partial differential equations, optimization, approximation theory and sampling theory. The aim of this section is to foster interdisciplinary collaboration and the development of new directions in mathematical signal and image processing spawned from the interaction of various mathematical communities.
Nonlinear eigenproblems are an interesting tool for image processing and clustering tasks in data science. Recent approaches aim to determine eigenvectors, and in particular ground states, of nonlinear operators via respective energy functionals such as the total variation regularization. Typically, these eigenvectors are numerically approximated using gradient flows to minimize a nonlinear Rayleigh quotient, which is challenging due to the nonconvex and nondifferentiable nature of the problem.
In this talk, we formulate a dual theory for nonlinear eigenvectors based on the Fenchel conjugate in the setting of a broad class of energy functions on Banach spaces, e.g., the popular p-Dirichlet energy on Sobolev spaces. Within this framework we identify the dual problem as the nonlinear eigenproblem of the inverse operator. In particular, in our formulation the minimization of the nonlinear Rayleigh quotient translates to the maximization of a dual Rayleigh quotient.
We show that the simple inverse power method possesses the ability to maximize the dual Rayleigh quotient and converges to eigenvectors even in our abstract setting with only mild assumptions on the energy functions. Furthermore, we give an intuitive geometric characterization which connects the primal and dual problem of computing nonlinear eigenvectors. To validate our theoretical results, we perform various numerical experiments for computing nonlinear eigenfunctions.
In this talk, we address the challenge of determining the norm of a linear operator solely by evaluating it. This problem naturally arises whenever the adjoint is not known and cannot be attained. After giving an overview of algorithms fitted to this problem, we propose an algorithm that employs a semi-stochastic approach to iteratively refine the estimate, achieving almost sure convergence to the true norm. Beginning with the underlying problem, we will explore the construction of an algorithm that requires only oracle access to the operators, avoids explicit matrix storage and ensures minimal memory usage. We will calculate optimal step sizes and provide a visual depiction of the proof of almost sure convergence. Numerical experiments demonstrate the practical efficiency of the method across various scenarios, highlighting its potential applications whenever adjoint consistency cannot be guaranteed.
Scientific Computing is concerned with the efficient numerical solution of mathematical models from both science and engineering. The field covers a wide range of topics: from mathematical modeling over the development, analysis and efficient implementation of numerical methods and algorithms to software and finally application for the solution of complex real-world problems on modern computing systems. This interdisciplinary field combines approaches from applied mathematics, computer science and a wide range of applications in which in-silico experiments play an increasingly important role.
Many models in computational science and engineering are based on time-dependent partial differential equations. Hence, integration along the time axis arises as an important numerical problem in many domains. While parallelization by decomposing the spatial computational domain is an established technique, on its own it will not suffice to provide the massive degree of concurrency required by upcoming exascale systems. Parallel-in-time integration (PinT) methods provide additional concurrency along the temporal axis and can improve parallel scaling. Classical PinT methods like Parareal, MGRIT, or PFASST rely on a computationally cheap coarse integrator to propagate information forward in time, while a parallelizable expensive fine propagator provides accuracy. Typically, the coarse method is a numerical integrator using lower resolution, reduced order or a simplified model. Similarly, iterative methods like spectral deferred correction (SDC) methods, the main ingredient of PFASST, require good initial guesses for fast convergence.
Considering that machine learning-based methods to approximate PDEs are becoming more and more successful, in this talk we propose to use neural operators as coarse propagators in Parareal or to initialize SDC iterations. Using Rayleigh–Bénard convection as a benchmark problem, we discuss design and training of suitable neural operators, investigate their performance and demonstrate space-time parallel scaling. This is joint work with Andreas Herten, Chelsea John, Stefan Kesselheim (FZ Jülich), Abdul Qadir Ibrahim, Thibaut Lunet, and Daniel Ruprecht (TUHH).
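Schematically, with a fine propagator F and a coarse propagator G (the trained neural operator would take the role of G), the standard Parareal iteration referred to above reads

U_{n+1}^{k+1} = \mathcal{G}(U_n^{k+1}) + \mathcal{F}(U_n^{k}) - \mathcal{G}(U_n^{k}),

so that all fine propagations within iteration k can be carried out in parallel across the time slices.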
This paper introduces a novel approach combining physics-informed neural networks (PINN) with the generalized finite difference method (GFDM) to solve groundwater flow equations. In recent years, with advancements in computational performance and algorithm improvements, artificial intelligence has rapidly developed and been applied to many areas. One neural network architecture that incorporates physical meaning by integrating governing equations, known as the PINN, has gained widespread attention. By utilizing neural networks to approximate functional values and employing automatic differentiation for derivative computation, while incorporating the residual of governing equations (partial differential equations, PDE), boundary conditions, initial conditions, and practical measurement data into the loss function for optimization, the neural network can be trained to solve boundary value problems (BVPs) and initial boundary value problems (IBVPs). However, automatic differentiation is computationally expensive. To address this issue, the GFDM is integrated into the PINN framework to replace automatic differentiation and accelerate the training procedure of the original PINN. The GFDM, which adopts Taylor series expansion and the moving least square method, approximates partial derivatives as a linear accumulation of functional values and weighting coefficients at each node and nearby nodes within the computational domain. Moreover, by employing a space-time (ST) coupling concept, known as ST-GFDM, the time axis can be treated as an additional spatial dimension, facilitating the solution of time-dependent problems more efficiently. This study demonstrates the feasibility and enhanced computational performance of the PINN with ST-GFDM approach for solving BVPs and IBVPs in groundwater flow scenarios. Several numerical examples validate the method’s effectiveness, confirming its accuracy and superior computational performance compared to the original PINN.
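In schematic form, the composite loss described above reads (the weights \lambda are training hyperparameters; the notation is illustrative)

L(\theta) = \lambda_{\mathrm{PDE}} \, \|r_{\mathrm{PDE}}(u_\theta)\|^2 + \lambda_{\mathrm{BC}} \, \|r_{\mathrm{BC}}(u_\theta)\|^2 + \lambda_{\mathrm{IC}} \, \|r_{\mathrm{IC}}(u_\theta)\|^2 + \lambda_{\mathrm{data}} \, \|u_\theta - u_{\mathrm{meas}}\|^2,

where in the original PINN the derivatives in r_{\mathrm{PDE}} are obtained by automatic differentiation, while in the PINN with (ST-)GFDM they are replaced by weighted sums of nodal values.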
Model order reduction is a technique for reducing the complexity of high-dimensional systems by employing a low-dimensional parametrization of system states. It facilitates efficient computations and reduces memory requirements, albeit at the expense of decreased simulation accuracy and challenges in preserving essential system properties such as sparsity, positivity, and physical laws. Thus, selecting an appropriate model order reduction method with a suitable latent dimension is crucial for achieving a trade-off between accuracy and efficiency.
In this work, we propose autoencoders that decode latent states using actual system states selected from a dataset (known as the matrix C from CUR decompositions). We compare this approach with two established techniques: proper orthogonal decomposition and proper CUR decomposition. To evaluate the methods, we construct reduced-order models and simulate the wake flow past a single cylinder governed by the incompressible Navier-Stokes equations. The investigation focuses on simulation accuracy and the ability to preserve underlying physical laws, such as incompressibility, in the reduced-order models.
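For orientation, a minimal numpy sketch of the POD baseline via the singular value decomposition of a snapshot matrix (illustrative only; the autoencoder and CUR-based decoders compared in this work are not reproduced here):

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 200))       # placeholder snapshot matrix: state dimension x number of snapshots
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 10                                     # latent dimension
Phi = U[:, :r]                             # POD basis
X_r = Phi @ (Phi.T @ X)                    # project snapshots onto the basis and reconstruct
rel_err = np.linalg.norm(X - X_r) / np.linalg.norm(X)
print(f"relative reconstruction error with r={r}: {rel_err:.3f}")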
[1] Danny C. Sorensen, Mark Embree (2016) A DEIM Induced CUR Factorization, SIAM J. Sci. Comput., Vol.38, No.3, pp.A1454-A1482
[2] Jan Heiland, Yongho Kim (2025) Polytopic Autoencoders with Smooth Clustering for Reduced-order Modeling of Flows, J. Comput. Phys., Vol.521, pp.113526
In recent years, the simulation and scientific computation of electroplating processes have become increasingly important for addressing the growing requirements on layer properties. In particular, the current-density and coating-thickness distributions on the electrodes are of special interest and are modelled via couplings between (quasi)static electric fields, transport processes and fluid dynamics. Due to the non-linear couplings between transport processes and (quasi)static electric fields as well as nonlinear boundary conditions, the modelling and numerical treatment are challenging in this field of study. The talk gives an overview of the state of the art and the related challenges as well as the current limitations in modeling electroplating processes. It will discuss possible strategies and concepts to overcome the challenges and limitations of state-of-the-art models as well as the potential of scientific computing and numerical analysis in this area of research.
Sea ice is an important part of Earth’s climate system, and the accurate, large-scale simulation of sea ice dynamics remains challenging. As the development of faster processors has slowed down, a turn to more specialized hardware is needed to achieve more accurate and higher-resolution simulations. Graphics processing units (GPUs) offer an order of magnitude higher floating-point performance and efficiency compared to CPUs, but often require significant engineering effort to utilize effectively. Therefore, several frameworks have emerged in recent years which aim to simplify general-purpose GPU programming. Heterogeneous compute frameworks such as SYCL and Kokkos make it possible to develop a unified code base that works across GPUs and CPUs. Machine learning frameworks like PyTorch combine an easy-to-use interface with highly specialized backends that can make use of new hardware features such as tensor cores to accelerate large-scale linear algebra workloads and, furthermore, provide a simple pathway to integrate machine learning components.
In this talk, we compare available options for the GPU parallelization of the novel sea-ice code neXtSIM-DG. Its dynamical core is based on higher-order finite elements for the momentum equation and discontinuous Galerkin elements for the advection and is highly parallelizable. We discuss the characteristics of our discretization and its consequences for the GPU implementation. For the full port of the dynamical core we use Kokkos since, based on our assessment, it combines usability with good performance. With moderate changes compared to the OpenMP-based CPU code, the new implementation achieves a sixfold speedup on the GPU while being as fast as the reference on the CPU.
Artificial Neural Networks (ANNs) are algorithmic structures used in Machine Learning (ML), i.e., the ability of a system to learn without being explicitly programmed. Recently, ANNs have been used to solve boundary value problems as an alternative to traditional numerical methods in computational mechanics. Physics-informed Neural Networks (PINNs) are the most promising type of ANNs in this context. PINNs are guided by loss functions that incorporate the laws of physics relevant to the given boundary value problem, defining the error to be minimized. This is therefore referred to as unsupervised training, i.e., no labeled training data is required. Different approaches are used to construct the loss function of PINNs, such as using the strong form of the governing partial differential equation or evaluating the variational energy formulation of the boundary value problem, thereby minimizing the energy.
In this regard, it is an open question for PINNs how to optimally select the number of collocation points. Moreover, it is unclear how the smoothness of the solution influences their performance. Therefore, we perform a systematic study to evaluate the performance of PINNs when dealing with different combinations of inputs, viz., the location and number of collocation points, and the smoothness of the body load.
In this study, we keep the geometry as simple as a 3D unit cube but vary the body force by changing the respective polynomial degree. The performance is evaluated using error norms established in computational mechanics, such as the normalized L2 norm. This norm is calculated using the predictions from the PINN and the analytical solution for various combinations of surface and interior points of the unit cube. We observe the minimum number of collocation points required to fit the solution and discuss the associated computational efforts.
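The error measure referred to above has the usual form

e_{L^2} = \frac{\|u_{\mathrm{PINN}} - u_{\mathrm{exact}}\|_{L^2}}{\|u_{\mathrm{exact}}\|_{L^2}},

evaluated at the chosen combinations of surface and interior points of the unit cube.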
Centuries of physics have established thermodynamically sound laws for solids in continuum mechanics, forming the basis for hyperelastic material models. However, these models often struggle with diverse materials, where pre-selecting a model to fit experimental data risks inadequacy. This challenge intensifies when modeling not just the (elastic) stress-strain relationship but also inelastic responses such as visco-elasticity and elasto-plasticity as well as growth and remodeling of living tissues. To overcome this, machine learning offers an alternative but often yields unphysical predictions. Constitutive Artificial Neural Networks (CANNs) were therefore developed to embed thermodynamic principles into the network’s architecture, ensuring physically consistent results [1]. Previously, iCANNs [2], extending CANNs to general inelastic behavior, were proposed. Here, inspired by the universal approximation capabilities of traditional neural networks, we extend the original iCANN architecture to cover a broader spectrum of inelasticity [3]. We apply the iCANN to uncover visco-elasticity [2,3], elasto-plasticity [4], and tensional homeostasis in living tissues [5]. To this end, we employ the multiplicative decomposition of the deformation gradient into elastic and inelastic parts [6]. A dual potential in terms of stresses is postulated [7] to capture inelasticity, while additionally a yield surface is introduced for elasto-plasticity and homeostasis is taken into account through homeostatic surfaces. We evaluate our iCANN by its ability to discover artificial data and real experiments of a visco-elastic polymer, elasto-plastic steel and the homeostasis of bioengineered tissues. We discuss and highlight various challenges that arise during discovery, with a particular focus on the inelastic regime.
[1] K. Linka and E. Kuhl. Computer Methods in Applied Mechanics and Engineering 403, 115731 (2023).
[2] H. Holthusen, L. Lamm, T. Brepols, S. Reese, E. Kuhl. Computer Methods in Applied Mechanics and Engineering 428, 117063 (2024).
[3] H. Holthusen, K. Linka, E. Kuhl, T. Brepols. arXiv:2502.17490, https://arxiv.org/abs/2502.17490 (2025).
[4] B. Boes, J.-W. Simon, H. Holthusen. arXiv:2407.19326, https://arxiv.org/abs/2407.19326 (2024).
[5] H. Holthusen, T. Brepols, K. Linka, E. Kuhl. Computers in Biology and Medicine 186, 109691 (2025).
[6] J. Casey. Mathematics and Mechanics of Solids 22, 528-537 (2017).
[7] P. Germain. Meccanica 33, 433-444 (1998).
For the optimization of semiconductor components, a detailed understanding of low-resistance ohmic contacts between metal and semiconductor is essential. This process is known as silicidation. On a macroscopic level, it can be described mathematically by coupled reaction-diffusion equations. A promising approach to solving such systems of non-linear differential equations is the use of physics-informed neural networks (PINNs). We will showcase the advantages of using PINNs in comparison to traditional solvers. The focus of this talk is on the choice of the representation model in the PINN method. Firstly, we will review various neural network architectures, including the recently introduced Kolmogorov-Arnold networks (KANs) and their adaptations. We will provide a thorough comparison of the performances of the different architectures for the silicidation problem. Secondly, we discuss how to incorporate different physical priors into the neural network's architecture. Such physical priors may include boundary conditions, initial conditions, or constraints on the range of the solution's output. Hard-constraining physical priors allows one to reduce the number of optimization tasks and can thus improve the training performance. In particular, we present a new method of hard-constraining discontinuous initial conditions adapted to diffusion problems. This work was supported by the Fraunhofer Internal Programs under Grant No. PREPARE 40-08394.
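A common way to hard-constrain an initial condition, given here only as a generic illustration and not as the specific construction for discontinuous initial data presented in the talk, is to build it into the ansatz

u_\theta(x, t) = u_0(x) + t \, N_\theta(x, t),

so that u_\theta(x, 0) = u_0(x) holds exactly and no initial-condition term is needed in the loss; the constraint is thereby removed from the optimization task.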
The formulation and calibration of constitutive models remains a challenging task for materials which exhibit complex nonlinear elastic or inelastic behavior. For this reason, data-driven methods and in particular the use of neural networks (NNs) have become increasingly popular in recent years. NN-based approaches that fulfill essential physical conditions a priori, often referred to as physics-augmented neural networks (PANNs), have proven to be particularly suitable for this purpose [1]. In this contribution, we present an approach based on PANNs [1,2,3] that are applied as macroscopic surrogate models for the expensive computational homogenization of representative volume elements (RVEs). Our approach allows the efficient finite element simulation of materials with complex underlying microstructures which reveal an overall anisotropic and nonlinear elastic behavior on the macroscale within a data-driven decoupled multiscale scheme [4]. By using a set of problem-specific invariants as the input of the PANN and the Helmholtz free energy density as the output, essential physical principles, e.g., objectivity, material symmetry or thermodynamic consistency, are fulfilled by construction [1]. The invariants are formed from structure tensors of 2nd, 4th or 6th order and the right Cauchy-Green deformation tensor. Besides the network parameters, the structure tensors are automatically calibrated during training so that the underlying anisotropy of the RVE is reproduced optimally. In addition, a trainable gate layer in combination with lp regularization is included to remove unneeded invariants automatically and improve interpretability. Within the proposed algorithm, a suitable set of structure tensors is automatically chosen [3]. The developed approach is applied to several illustrative examples. Necessary data for the training of the PANN surrogate model are collected via computational homogenization of RVEs.
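In compact notation, and only as a schematic summary of the construction described above, the surrogate takes invariants of the right Cauchy-Green tensor C and the trainable structure tensors G_i as input and returns the free energy, from which the stress follows by differentiation:

\psi = \psi_{\mathrm{NN}}\big(I_1(\mathbf{C}, \mathbf{G}_i), \ldots, I_n(\mathbf{C}, \mathbf{G}_i)\big), \qquad \mathbf{S} = 2 \, \frac{\partial \psi}{\partial \mathbf{C}},

so that objectivity, material symmetry and thermodynamic consistency are satisfied by construction.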
[1] Linden et al. (2023). Neural networks meet hyperelasticity: A guide to enforcing physics. Journal of the Mechanics and Physics of Solids, 179, 105363.
[2] Kalina et al. (2024). Neural network-based multiscale modeling of finite strain magneto-elasticity with relaxed convexity criteria. Computer Methods in Applied Mechanics and Engineering, 421, 116739.
[3] Kalina et al. (2024). Neural networks meet anisotropic hyperelasticity: A framework based on generalized structure tensors and isotropic tensor functions. arXiv preprint arXiv:2410.03378.
[4] Kalina et al. (2023). FEANN: an efficient data-driven multiscale approach based on physics-constrained neural networks and automated data mining. Computational Mechanics, 71(5), 827-851.
The accurate modeling of soft tissue material behavior remains a challenging task in biomechanics due to its complex, non-linear, and anisotropic characteristics. Traditional methods, such as Finite Element Methods (FEM), while reliable, demand significant computational resources and expert knowledge for implementing physical laws. Machine Learning (ML) techniques, particularly Neural Networks (NN), offer an innovative alternative by automating data-driven tasks, enabling more efficient and scalable modeling frameworks.
This work focuses on evaluating two state-of-the-art ML architectures—Neural Ordinary Differential Equations (N-ODEs) and Unconstrained Monotonic Neural Networks (UMNNs)—for simulating hyperelastic responses in soft tissue materials. Both architectures were implemented in Julia and benchmarked using experimental stress-deformation data. The results indicate that while both approaches deliver high accuracy, UMNNs outperform N-ODEs in computational efficiency and numerical stability. The intrinsic monotonicity of UMNNs allows for robust convergence, even in cases where N-ODEs exhibit instability.
Beyond hyperelastic modeling, this study extends the ML framework to include deteriorated material behavior, focusing on the softening effects observed in one-dimensional fibrous structures. By integrating a data-driven deterioration function into the ML models, the framework accurately captures non-monotonic stress-deformation responses. The deterioration function, validated against experimental data, effectively simulates dissipative material behaviors. This extension enables modeling of both the elastic and deteriorative phases of material response, providing a comprehensive solution for biomechanical simulations.
Key findings emphasize the versatility and efficiency of UMNNs as a computational tool for material modeling. Compared to traditional methods, the ML-based approach demonstrates superior scalability, adaptability, and performance, offering a promising direction for future research in computational biomechanics. By bridging advanced ML techniques with classical continuum mechanics and FEM, this work contributes to the development of innovative, data-driven solutions for simulating the complex behavior of soft tissue materials. In addition, an artificial deterioration data set shows that the Machine Learning model does not necessarily need to recover the exact virtual elastic representation or deterioration parameters as an intermediate step in order to achieve good convergence of the actual response. How important the uniqueness of these quantities is for the actual simulation is left for further study.
We will discuss models used in classical molecular dynamics, and some mathematical questions raised by their simulations. In particular, we will present recent results on the connection between a metastable Markov process with values in a continuous state space (satisfying e.g. the Langevin or overdamped Langevin equation) and a jump Markov process with values in a discrete state space. This is useful to analyze and justify numerical methods which use the jump Markov process underlying a metastable dynamics as a support to efficiently sample the state-to-state dynamics (accelerated dynamics techniques à la A.F. Voter). It also provides a mathematical framework to justify the use of transition state theory and the Eyring-Kramers formula to build kinetic Monte Carlo or Markov state models.
References:
[1] G. Di Gesù, T. Lelièvre, D. Le Peutrec and B. Nectoux, Jump Markov models and transition state theory: the Quasi-Stationary Distribution approach, Faraday Discussion, 195, 469-495, (2016).
[2] T. Lelièvre and D. Perez, Recent advances in Accelerated Molecular Dynamics Methods: Theory and Applications, In: Comprehensive Computational Chemistry, Vol. 3, p. 360-383, (2024).
[3] T. Lelièvre, D. Le Peutrec and B. Nectoux, Eyring-Kramers exit rates for the overdamped Langevin dynamics: the case with saddle points on the boundary, https://hal.archives-ouvertes.fr/hal-03728053.
Most simulations of continuum models require the repetitive evaluation of some non-linear functions. If these functions are only implicitly given by the outcome of some expensive high-fidelity model, these evaluations can easily become the computational bottleneck of the coupled simulation. A surrogate model for the high-fidelity part is therefore needed. However, if the input dimension of the high-fidelity model is high, the training of the surrogate often requires infeasible numbers of simulations, the so-called curse of dimensionality. During the last years, we have developed a multilevel on-the-fly sparse grid approach to address this problem [1]. This approach exploits the good convergence behavior of sparse grids in higher-dimensional settings and, additionally, the fact that only a small, low-dimensional subset of the high-dimensional input space is visited during a continuum simulation. We present the extension of this approach to cases in which the evaluation of the high-fidelity model requires statistical sampling. Each evaluation therefore carries some finite noise error, whose variance depends inversely on the invested CPU time. We present ideas for balancing these errors with the sparse grid approximation error to ensure convergence and, at the same time, to avoid unnecessarily accurate and expensive sampling. We showcase the approach on realistic problems from heterogeneous catalysis, where continuum reactor models are coupled with kinetic Monte Carlo simulations for the surface reactions. We find that the proposed approach requires only modest computational resources for these problems, where a direct coupling would be infeasible. Finally, we discuss how an automatic termination can be realized for these kinds of models.
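The balancing idea can be sketched as follows (with generic constants, purely for illustration): since the variance of a sampled evaluation decays inversely with the invested CPU time, its standard deviation decays like one over the square root of that time, and the total error behaves roughly like

e_{\mathrm{total}} \approx e_{\mathrm{grid}} + \frac{C}{\sqrt{t_{\mathrm{CPU}}}},

so the CPU time per evaluation is chosen just large enough that the sampling contribution does not dominate the current sparse-grid approximation error.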
[1] T. Hülser, B. Kreitz, C.F. Goldsmith, S. Matera, Computers and Chemical Engineering (2024), https://doi.org/10.1016/j.compchemeng.2024.108922
Stochastic dynamics with metastability are a recurring theme in many scientific disciplines, for instance, in simulations of macro-molecules, in climate systems, and in applications of uncertainty quantification. Metastability describes the existence of long-lived macro-states in a dynamical system's state space, such that transitions between these macro-states are rare events. It is thus also closely related to control systems. There is a wide range of biased sampling algorithms, which seek to overcome the rare event nature of the dynamics using a time-dependent input.
In this study, we join the ideas of Koopman-based modeling and biased sampling. The key ingredient is the generator extended dynamic mode decomposition algorithm (gEDMD), a variant of EDMD to approximate the Koopman generator. For control-affine stochastic differential equations (SDEs), the application of gEDMD reduces the Kolmogorov backward equation into an ODE that is bi-linear in expectation and input. This simplified structure can be utilized for designing controllers which are geared towards accelerated sampling of rare events.
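Schematically, for a control-affine SDE and a dictionary of observables \psi, gEDMD yields matrices L_0, L_i such that the expectations evolve approximately according to the bilinear ODE

\frac{d}{dt} \, \mathbb{E}[\psi(X_t)] \approx \Big( L_0 + \sum_i u_i(t) \, L_i \Big) \, \mathbb{E}[\psi(X_t)],

which is the structure exploited for designing controllers that accelerate rare transitions.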
In this talk, I will report on recent progress concerning the data-driven analysis of metastable systems using Koopman generators. First, I will introduce Koopman operators for (controlled) stochastic systems, the gEDMD method, and its application to optimal control problems. Second, I will present the numerical results showing that the gEDMD method for control-affine SDEs can be used to a) accurately predict the expectation of observable functions of interest for fixed control input; b) solve optimal control problems (OCPs) with integrated running cost and terminal cost; c) design OCPs which enforce accelerated transitions between metastable states.
We present our work on simulating the self-assembly of large protein systems, with a focus on the assembly of tobacco mosaic virus coat proteins into a helical shell. The size and timescales of these systems make all-atom molecular dynamics simulations unfeasible.
To tackle this, we constrain the proteins to a single conformation and introduce a coarse-grained energy model that includes van der Waals, electrostatic, and hydrophobic interactions. A key challenge is that the energy model needs to account for the rigidity assumption. This means that directly implementing physical interactions, like exact van der Waals radii or Poisson-Boltzmann electrostatics, is insufficient and requires additional considerations.
Our simulations are driven by the hybrid Monte Carlo algorithm to efficiently sample the Gibbs distribution of possible configurations. We discuss results from numerical experiments with different versions of the energy model. A key finding is that achieving high interaction specificity through the collective interplay of van der Waals, electrostatic, and hydrophobic interactions is essential for self-assembly.
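To illustrate the sampling machinery only (the coarse-grained protein energy model of this work is not reproduced here), a generic hybrid Monte Carlo step on a toy quadratic energy looks as follows:

import numpy as np

rng = np.random.default_rng(2)

def energy(x):                 # placeholder energy; stands in for the coarse-grained interaction model
    return 0.5 * np.dot(x, x)

def grad_energy(x):
    return x

def hmc_step(x, eps=0.1, n_leap=20):
    p = rng.standard_normal(x.shape)               # sample auxiliary momenta
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * eps * grad_energy(x_new)        # leapfrog integration of the Hamiltonian dynamics
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new -= eps * grad_energy(x_new)
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_energy(x_new)
    dH = (energy(x_new) + 0.5 * np.dot(p_new, p_new)) - (energy(x) + 0.5 * np.dot(p, p))
    return x_new if rng.random() < np.exp(-dH) else x   # Metropolis step targets the Gibbs distribution

x = np.zeros(3)
samples = []
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x.copy())
print(np.var(np.array(samples), axis=0))           # close to 1 for this Gaussian toy energy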
Participants explore the fundamentals and applications of quantum computing in applied mathematics and mechanics through an interactive tutorial that delves into revolutionary concepts and simulation tools.
The presentation will outline the activities of the PSNC, the challenges addressed with the use of cutting-edge data processing infrastructure (including quantum computers), as well as the scope of the project under which the tutorial is being delivered.
The presentation will address two topics. First, a brief introduction to quantum computing to capture its current landscape. Second, an overview of gate-based quantum algorithms and their applications in mechanical design, optimization, system dynamics, and machine learning.
The hands-on session on the basics of quantum computing will introduce participants to creating quantum circuits; using this framework, basic quantum phenomena such as superposition and entanglement will be explained.
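A minimal example of the kind of circuit typically used to demonstrate superposition and entanglement, assuming Qiskit as the framework (the description above does not specify which framework is used in the tutorial):

from qiskit import QuantumCircuit

bell = QuantumCircuit(2, 2)
bell.h(0)            # Hadamard puts qubit 0 into an equal superposition
bell.cx(0, 1)        # CNOT entangles the two qubits into a Bell state
bell.measure([0, 1], [0, 1])
print(bell.draw())   # measurement yields 00 or 11 with equal probability on a simulator or device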
The second part of the hands-on session will dive deeper into the realm of quantum algorithms. Participants will be shown how selected quantum and hybrid algorithms work and how they can be used in real-world applications and use cases such as factoring, optimization and machine learning.
In this poster session, GAMM Juniors share highlights of their ongoing research during the coffee breaks between sessions. These emerging scholars in applied mathematics and mechanics engage in a variety of activities, including organizing summer schools and interdisciplinary workshops, to promote the interests of young academics both within GAMM and in the broader scientific community.
Lunch will be served at the conference venue in Rooms 51 and 53 to holders of a pre-purchased voucher.
Young Academics Meet Mentors provides a forum for early career researchers to engage with seasoned mentors. The discussion will unfold in a relaxed "World Café" atmosphere, with food and drinks provided. Pre-registration is required.
The YAMM Lunch takes place in the building A30 (Faculty of Architecture and Faculty of Engineering Management building), in Room 001, opposite the main entrance from Rychlewski Street. The location is approximately 350 meters from the LCC (a 5-minute walk).
https://maps.app.goo.gl/pouXstysmNJZMbrV6
2025 marks the 17th anniversary of our first foundational paper on the Discontinuous Petrov-Galerkin method, in short the DPG method. In the talk, I will attempt to present the DPG fundamentals for linear problems [1], illustrated with a few numerical examples, and an outlook on the generalization of the DPG methodology to nonlinear problems, represented by nonlinear elasticity [2].
[1] L. Demkowicz and J. Gopalakrishnan, The Discontinuous Petrov-Galerkin Method, Acta Numerica, 2025.
[2] J. Zhang and L. Demkowicz, Nonlinear Elasticity with the Discontinuous Petrov-Galerkin Method. I. Various Variational Formulations, Oden Institute Report 2024/5.
Multibody dynamics enables the simulation of a wide variety of systems, all characterized by having multiple parts in relative motion with one another. Applications span from biological to engineering systems, requiring diverse capabilities that range from real-time simulation to high-fidelity modeling of complex multidisciplinary systems. The goal of this mini-symposium is to present a view of the latest developments in models and advanced numerical methods in multibody dynamics. The focus is on techniques that enable applications to complex real-life problems.
Various approaches exist to preserve the underlying geometric structures of mechanical systems under spatial discretization or numerical integration. They can be constructed to guarantee symplecticity of the discrete Hamiltonian flow, to respect the manifold structure of the configuration space (both are true in Lie group variational integrators, see e.g. [1]), or to preserve the energy balance. In a multi-body system, different parameterizations of the configuration space are possible: minimal coordinates, which intrinsically satisfy the kinematic constraints, redundant coordinates, which require the explicit handling of the algebraic constraint equations, or the solution update on the configuration manifold, as in global schemes on Lie groups. A simple planar pendulum is used in [2] to illustrate the application of their energy- and constraint-preserving Petrov-Galerkin numerical integration approach to constrained mechanical systems, written in a form generalizing Hamiltonian and gradient systems. In this talk, we present the application of the scheme to planar serial multi-body systems with multi-articular elastic couplings, whose potential energy contribution can nicely be expressed through the chosen Cartesian coordinates. We provide comparisons with the integration of minimal models in joint coordinates and global formulations on the configuration manifold.
[1] Herrmann, M. and Kotyczka, P. "Relative-kinematic formulation of geometrically exact beam dynamics based on Lie group variational integrators". Computer Methods in Applied Mechanics and Engineering, vol. 432 A, 2024, article 117367. doi: 10.1016/j.cma.2024.117367.
[2] Egger, H., Habrich, O. and Shashkov, V. "On the Energy Stable Approximation of Hamiltonian and Gradient Systems" Computational Methods in Applied Mathematics, vol. 21, no. 2, 2021, pp. 335-349. doi: 10.1515/cmam-2020-0025.
Simulating the dynamics of multibody systems involves dealing with finite rotations, which presents a key challenge in numerical mechanics due to the fact that rotations are governed by nonlinear transformations. In this contribution, we utilize a mixed variational approach in time to analyze rigid body dynamics. Time integration is based on a discretization of Hamilton's principle for constrained systems, using a finite element formulation in time. Consequently, the equations of motion assume the form of differential-algebraic equations (DAEs). Therefore, an approximation of the Lagrange multipliers is also necessary. Moreover, quaternions offer an efficient way to describe rotations, avoiding singularities encountered with other representations. This procedure provides both a systematic framework for generating time-stepping methods and the possibility of a global solution over the entire time interval. The latter can be used to solve boundary value problems that require a flow of information backwards in time. This approach may serve as a foundation for addressing more advanced problems in inverse dynamics and optimal control for rigid bodies. We apply the integration scheme to representative mechanical rigid body systems to investigate its numerical properties.
Model predictive control (MPC) is a leading approach in modern control system design, as it effectively considers system dynamics and operational constraints and as it can anticipate behavior. However, when dealing with unknown or overly complex systems that cannot be fully described with first-principles models, can we still predict and control their behavior effectively with MPC? Recently, data-enabled predictive control (DeePC) has gained increasing attention as a promising alternative. This approach eliminates the need for prior knowledge of the control system and negates the requirement for system identification to derive an exact model. Instead, the system can be treated as a black box, allowing direct use of input and output measurement data for control purposes. But should we really disregard all existing knowledge about the system when it comes to systems that we usually know at least partially? How operational is the data-enabled method in the real world compared to a model-based method, where the model may also be identified from data using system identification?
This work provides a comprehensive discussion of these questions based on practical experience. Analyzing an utterly unknown system and conducting a competitive analysis can be quite challenging. Therefore, an omnidirectional robot designed and built at our institute is selected to generate measurement data. An experimental performance comparison for tracking a steady state is provided using two prominent predictive control approaches: data-enabled predictive control based on Willems' fundamental lemma and model predictive control utilizing a model identified with subspace identification methods. Unlike DeePC, which does not rely on a predefined model, MPC based on subspace identification methods involves constructing a state-space model of the system through input/output observations. To ensure comparability, the same measurement data is employed to build the Hankel matrix and to generate the state-space model.
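For illustration, the block Hankel matrix at the heart of Willems' fundamental lemma can be assembled from a recorded signal as follows (a generic sketch with hypothetical dimensions, not the robot data of this work):

import numpy as np

def block_hankel(w, L):
    """Stack windows of length L of the signal w (shape: samples x channels) as columns."""
    T, m = w.shape
    cols = T - L + 1
    H = np.zeros((L * m, cols))
    for i in range(cols):
        H[:, i] = w[i:i + L, :].reshape(-1)
    return H

u = np.random.default_rng(3).standard_normal((200, 2))   # hypothetical input trajectory with 2 inputs
H_u = block_hankel(u, L=10)
print(H_u.shape)   # (20, 191); the same construction is applied to the measured outputs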
Many publications focus primarily on simulated results, often neglecting critical aspects of hardware implementation, such as data collection, computational complexity, and noise management. These elements are crucial for the practical application of the methods and require further investigation. This work addresses these concerns and explores recent advancements to enhance the performance of DeePC, including regularization. The trade-offs between model complexity, computational efficiency, and control performance in hardware experiments for each approach are discussed. This comparison provides valuable insights for practitioners and researchers in selecting the most appropriate predictive control strategy for their specific applications.
Section S02 will focus on numerical and experimental techniques to study the structure, function and evolution of biological systems for a broad spectrum of scales, i.e., addressing the cellular, tissue, organ, or organ system scale. The topics will focus on, but are not limited to, multiscale modelling, gait analysis, foot biomechanics, musculoskeletal and orthopedic biomechanics, remodeling, cardiovascular biomechanics, multiphase modeling of biological tissues, tumor growth modeling, transport oncophysics, nanomechanics of biological materials, nanomedicine, modeling of drug delivery, and image-based methods for assessing and interpreting clinical data.
The physiological behaviour of the cardiovascular system is strongly affected by the mechanical response of arterial segments, which in turn depends on both the histological architecture of the tissue and the contractile tone of smooth muscle cells. This work presents a comprehensive multi-scale and multi-field computational framework that accounts for: i) a lumped 1D description of the arterial vascular tree network; ii) a continuum 3D model at the microscale of the local chemo-mechano-biological response of arterial tissues, accounting for both passive and active tissue behavior; iii) biochemically-dependent vasoconstriction and vasodilation mechanisms, such as those induced by the baroreflex and the NO-ROS-PN biochemical chain. Two-way coupling is considered: simulations from the 3D chemo-mechano-biological models drive how the parameters of the lumped description vary as a function of segment dilation, tissue histology, or vasoconstriction; in turn, local variations of the properties of key vessel segments affect global cardiovascular function. The applicative case study investigates the role of vasodilation and vasoconstriction in carotid and cerebral arteries as a protective mechanism against changes in blood flow in the brain through the circle of Willis. Healthy homeostatic states are first reproduced and discussed, and pathological conditions are then analyzed.
The mechanical characterisation of ultrasoft materials, such as brain tissue, over a wide time range is often limited by inconsistent responses when using different experimental techniques. These inconsistencies are predominantly attributable to the disparate testing conditions across the experiments. However, multi-modality tests are essential to calibrate a model that is applied to finite element simulation in the time and frequency domains. Consequently, to achieve reliable mechanical parameters, a robust identification strategy for all experiments with distinct time scales is required.
This study aims to identify and combine the viscoelastic material parameters obtained from experiments conducted in the time and frequency domains. A phantom material based on oxidized hyaluronic acid (OHA) and gelatin (GEL), showing potential to mimic the viscoelastic behavior of brain tissue, was examined via three testing techniques. Quasi-static experiments on the rheometer provide insight into the time-dependent behaviour of the hydrogel. The material response is studied through the nonlinear deformation of the sample in compression and torsional shear. Via a vibration table, the frequency-dependent behaviour in the medium range, from 20 to 200 Hz, is analysed. This technique enables the study of the oscillation response of a hydrogel sample under excited vibration. The material response in the high-frequency domain is investigated with magnetic resonance elastography. A 0.5 T magnet measured the vibrations induced in the material by a piezoelectric actuator, enabling the acquisition of data for frequencies up to several kHz.
A finite element simulation of the experiments in Abaqus was employed to determine the mechanical parameters of the OHA-GEL hydrogel. For the calibration, a hyperelastic Ogden model in combination with a viscoelastic Prony series was selected. To capture both the time and the frequency response of the hydrogel, a combination of multiple testing modalities was integrated into the inverse material parameter identification process.
Traumatic spinal cord injury (SCI) in humans and many mammals is a non-regenerative condition that can lead to motor function loss and disability. Mechanical factors are increasingly recognized to be influencing spinal cord regeneration, yet accurate characterization of the mechanical behavior of spinal cord tissue is lacking. To address this gap, we employ a multimodal approach that combines indentation test data with macroscale test data conducted on the same sample. The spinal cord is a composite material with grey matter in the center surrounded by white matter, both with distinct mechanical properties. Furthermore, the mechanical behavior exhibits strong variations that are typical for biological systems. Therefore, to obtain insight into the local behavior, indentation experiments are conducted over a spatial grid that includes grey matter, white matter, and the interface between the two. Furthermore, cyclic loading in tension, compression, and torsion of the same sample informs the collective behavior of the tissue under varying large-strain loading conditions. The experiments are followed up with finite element simulations of the indentation and rheometer experiments, incorporating realistic boundary conditions and geometry. By regressing the simulated force data from all the different tests together against the experimental data, we identify the hyperelastic and viscoelastic material parameters of the spinal cord and its individual components.
Colorectal cancer remains a major global health concern, and understanding the mechanisms of its progression is critical for advancing treatment and prevention strategies. In this presentation, we explore how PDE-based models can be applied to study tumor invasion and basement membrane dynamics, with an explicit focus on colorectal cancer. These models offer valuable insights into the interactions between tumor development and the surrounding tissue environment, providing insights into the complex mechanisms underlying cancer progression.
The colonic crypts, which are fundamental structures in the epithelial layer of the colon, play a critical role in regulating cell proliferation and differentiation. Disruptions to these processes can lead to the initiation and progression of colorectal cancer. In this context, mathematical modeling provides a powerful tool to examine the dynamics of tumor invasion, including how tumor development progresses and breaches the basement membrane, a key barrier in tissue architecture. This talk examines the application of partial differential equation (PDE) models to simulate tumor development and basement membrane dynamics at the crypt level in three dimensions. To address the complexities of colorectal cancer progression, we employ a combination of mathematical frameworks tailored to different aspects of the problem: diffuse-interface models to capture transitions between tumor and healthy tissue, geometry-tracking methods for evolving tumor boundaries, and mechanical models to represent the deformation and stresses within the crypt structure. Each framework addresses specific aspects of tumor dynamics, and their combination provides a more comprehensive representation of the underlying processes.
Preliminary simulation results will be presented, demonstrating how continuous mathematical models can capture the spatial progression of tumor development and the structural interactions with the basement membrane. These findings emphasize the role of crypt geometry and epithelial organization in shaping cancer dynamics. By calibrating model parameters, we aim to improve the predictive accuracy of these frameworks and enhance our understanding of tumor growth and invasion.
Microorganisms are among the most successful forms of life on our planet. They are omnipresent, even in and around our bodies. For example, the oral cavity is home to hundreds of indigenous species of microorganisms that form complex communities known as biofilms. These biofilms can cause infections, particularly when foreign objects, such as implants, are introduced into a patient’s body. Oral implants are especially vulnerable, as part of the implant must protrude into the oral cavity to provide a base for the artificial tooth. In the worst case, these biofilms can lead to peri-implantitis, an infection of the gums and bones, which can result in bone loss and subsequent implant failure.
To prevent such diseases, it is crucial to understand and model the behavior and interactions within the biofilm. In this context, in silico experiments are a valuable complement to in vitro and in vivo studies. In this work, we present a growth model capable of simulating a wide range of interactions between different species within the biofilm. Typically, in the literature, interactions between species are described by introducing additional state variables along with some ad-hoc expressions in the mathematical model. However, this approach results in a computationally expensive model. In contrast, the approach used in this study employs an interaction matrix that describes how species interact with one another, eliminating the need for extra variables. Furthermore, the model includes nutrients and antibiotics as input parameters. The interactions between different species can thus be investigated under a changing nutrient supply or when a treatment with antibiotics is started or stopped. The model thereby predicts the growth induced by the availability of nutrients (or of other species serving as such) and the death due to antibiotics, a scarcity of space, or the presence of other species.
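Schematically, the role of the interaction matrix can be illustrated by a growth law of the form (an illustration only, not the exact model equations of this work)

\dot{\phi}_i = \phi_i \Big( g_i(c_{\mathrm{nutrient}}) + \sum_j A_{ij} \, \phi_j \Big) - d_i(c_{\mathrm{antibiotic}}) \, \phi_i,

where \phi_i is the volume fraction of species i and the entry A_{ij} encodes whether species j promotes or inhibits species i, so that no additional state variables per interaction pair are needed.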
The model is derived from the Hamiltonian principle and, as such, automatically satisfies the laws of thermodynamics. Using such a continuum framework for deriving governing equations provides a powerful and efficient way to model complex systems at a macroscopic level.
The study presents an approach to the optimisation of the thickness field of surgical implants used in the treatment of abdominal hernias in humans, a medical condition with a high recurrence rate. Building upon previous research, this investigation aims to enhance the mechanical compatibility between the native tissues of the human abdominal wall and the implant, reducing the likelihood of hernia recurrence. The primary objective is to achieve reduced and uniformly distributed reaction forces at the interface between the patient's tissue and the implant through optimisation of the implant's topology by varying the thickness along the surface of the implant. The research employs a combination of commercial finite element software (Marc Hexagon) and custom-developed Python code for optimisation and control. Four distinct implant materials are analysed and compared. The shape of the implant is a decagonal membrane. The implant model is constructed with 6108 4-node membrane finite elements, each node having three degrees of freedom. The first implant model is isotropic and the other three are orthotropic with different orthotropy ratios: 3.19, 2.02, and 18.84, representing the commercial implants Parietex, Bard, and DynaMesh, respectively. The loading conditions are simulated through forced displacements of the model supports located at ten places on the implant where it is connected to the abdominal wall. The displacements differ in 5 of the supports, are symmetrical about the Y axis, range between 2-5.75 mm, and correspond to the deformation of the abdominal wall during human physiological activities. The optimisation process utilises a surface equation to describe the thickness in the spatial coordinates of the implant instead of assigning a separate value to each element. This allowed the number of unknown variables in the optimisation to be reduced. This approach provides a balance between computational efficiency and the ability to generate complex optimised surfaces. For the implants with different material properties, the optimisation achieved slightly different thickness fields. In all of them, a better reaction force distribution than before the thickness optimisation is observed.
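By way of illustration (the specific form of the surface equation is not given above; the polynomial form below is only an assumed example), describing the thickness field by a smooth surface

t(x, y) = \sum_{i + j \le p} a_{ij} \, x^i y^j

reduces the unknowns of the optimisation from one thickness value per element to the small set of coefficients a_{ij}.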
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
Thin-walled composite structures, valued for their lightweight potential, find extensive application in the aerospace and shipbuilding industries. However, the stability behavior of these structures needs to be considered. As a locally post-buckled structure demonstrates the capacity to withstand increasing loads without immediate failure, it necessitates not only a buckling analysis but also a post-buckling analysis to fully leverage its lightweight potential.
Many commonly used composites exhibit bending-twisting coupling effects, a significant factor influencing the buckling behavior. Despite this, computationally efficient buckling analyses often neglect bending-twisting coupling. To achieve optimized designs, it is crucial to employ analysis methods that consider these effects. Therefore, a Ritz method tailored for post-buckling analysis of rectangular simply supported plates featuring bending-twisting coupling is introduced. Derived using energy methods, this approach enables the description of stability behavior, modeling deformation, load distributions, and characteristic quantities such as effective width. The novel computational model is utilized to evaluate the impact of nondimensional parameters associated with bending-twisting coupling on the buckling and post-buckling behavior of composite plates. This research contributes to the development of computationally efficient Ritz methods for designing optimized thin-walled composite structures. This is achieved by reducing the number of integrals to be evaluated while, at the same time, efficiently tracing the equilibrium path in the post-buckling regime.
Current simulation models for continuous sheet metal forming usually rely on transient simulation strategies that yield the stationary forming state after a large number of time steps. This transient progression of states involves a gradual downstream transport of internal plastic variables. Since the intermediate states have little practical relevance, this established strategy is both numerically inefficient and inconvenient for conducting parameter studies with respect to the desired stationary state. The method presented here overcomes these disadvantages by removing the time dependence altogether. In particular, a novel stationary predictor-corrector algorithm is developed for the iterative solution of the problem of elastic-plastic bending of axially moving plates in an established finite element framework.
A mixed kinematic parametrisation in the spirit of arbitrary Lagrangian Eulerian methods (ALE) is used for the finite element discretisation of a thin rectangular metal sheet that is modelled as an unshearable Kirchhoff-Love plate. Out-of-plane distributed, self-equilibrated loadings are imposed at spatially fixed lines to mimic the continuous, bending dominant roll forming process. Higher load magnitudes induce plastic deformations, which need to be transported in downstream direction through the non-material finite element mesh. A previously developed structural plasticity model is employed to formulate the corresponding constitutive laws directly in terms of plate curvature strains and stress resultants.
The iterative solution of the axially moving plate bending problem is achieved by repeated application of elastic predictor and plastic corrector steps. Contrary to standard return-mapping schemes typically employed by transient algorithms, the plastic corrector phase is modified to additionally account for the advection of plastic variables in downstream direction. The condition of stationary operation is imposed directly such that the change of the plastic variables for a given material point is solely determined by convection. A spatial finite difference scheme is applied to solve the corresponding stationary advection problem along the streamlines of material particles, which are in alignment with the integration points of the regular finite element mesh.
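Schematically, stationarity means that the material rate of each plastic variable q reduces to its convective part (notation illustrative),

v \, \frac{\partial q}{\partial x} = \dot{q}_{\mathrm{pl}},

with axial transport speed v; this advection relation is what the finite difference scheme integrates along the streamlines of the material particles.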
Clamped and free boundary conditions are imposed at the upstream and downstream boundaries of the open control domain, respectively. At steady state operation, plastic deformations arise in close proximity to the distributed external loading and persist in downstream direction. Conventional transient time-stepping simulations, conducted for the sake of reference, are clearly outperformed by the proposed stationary algorithm in terms of numerical efficiency.
We propose a novel variationally consistent membrane wrinkling model for analyzing the mechanical responses of wrinkled thin membranes [1]. The elastic strain energy density is split into tensile and compressive terms via a spectral decomposition of the strain tensor. Tensile and compressive parts of the stress and constitutive tensors are then obtained via consistent variation from the respective strain energies. Considering only the positive part of the strain energy in the variational formulation, we obtain a membrane with zero compressive stiffness. By adding the negative strain energy multiplied with a very small factor, we further obtain a residual compressive stiffness, which improves stability and allows handling also states of slackening. Comparison with results from analytical, numerical and experimental examples from the literature on membrane wrinkling problems demonstrate the great performance and capability of the proposed approach, which is also compatible with commercial finite element software.
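In compact form, the modelling idea of [1] can be summarized as (notation schematic)

\psi(\boldsymbol{\varepsilon}) = \psi^{+}(\boldsymbol{\varepsilon}) + \psi^{-}(\boldsymbol{\varepsilon}), \qquad \psi_{\mathrm{wrinkling}} = \psi^{+} + k \, \psi^{-}, \quad 0 \le k \ll 1,

where the tensile and compressive parts follow from the spectral decomposition of the strain tensor; k = 0 gives zero compressive stiffness, while a small k > 0 provides the residual stiffness used to improve stability and handle slackening.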
References:
[1] D. Zhang, J. Kiendl; A variationally consistent membrane wrinkling model based on spectral decomposition of the strain tensor, Computer Methods in Applied Mechanics and Engineering, 432:117386, 2024.
Thin-walled composite structures are widely used in weight-critical applications such as aircraft and spacecraft. However, ensuring the stability of such structures under various load cases remains a key challenge in their design and optimization. For omega-stringer stiffened panels, the local buckling and postbuckling behavior is investigated using closed-form analytical solutions. The stiffened panel under consideration consists of the skin plate with eccentrically attached stringer feet along the longitudinal sides of the panel, while the remaining part of the omega-stringer is modeled by corresponding elastically restrained edges. The computational model is based on the energy methods and approximates the postbuckling behavior near the bifurcation point using a simplified plate model. To evaluate the new analysis method, a comparison with the finite element analysis is being drawn. Compared to numerical methods, the present model reduces the computational effort, which is particularly advantageous in the design phases.
The use of multilayer structural elements is widespread in modern industrial applications. Recently, so-called metamaterials have come into widespread use, in which the necessary mechanical properties are provided by a cellular structure. Determining the physical and mechanical properties of such materials experimentally is in many cases impossible or very difficult. Therefore, numerical methods are used for this purpose, primarily the Finite Element Method (FEM). At the same time, especially in cases of a new, non-standard cell configuration, significant errors may occur in determining the physical and mechanical properties of a material with generalized, for example orthotropic, properties. Therefore, it is useful to verify the obtained data against the results of numerical and experimental modeling under a complex stress state and, possibly, to refine the data on the physical and mechanical properties.
Such an approach is used in this paper. Three-layer plates with an auxetic inner layer under static and impact loading are considered. The physical and mechanical properties of the averaged orthotropic material of the inner layer are obtained using finite element modeling. The results of numerical modeling of bending and tension of a plate with averaged orthotropic properties are compared with the data of full three-dimensional modeling and experimental research.
The obtained results are used to analyze the dynamic behavior of three-layer plates whose outer layers consist of carbon fiber material and whose inner layer consists of the studied material with auxetic properties. Problems of impact loading of the plates during their interaction with a metal striker are considered. The deformation is analyzed taking into account the impact on the outer plate, which can lead to damage to the cells of the internal layer or to complete penetration of the plate. Variants of plates made of different materials are considered.
The subject of this article is sandwich panels commonly used as building envelope elements. Typical sandwich panels are three-layer elements consisting of thin but rigid facings and a thick but shear-deformable core. The produced panels are subjected to systematic strength tests, on the basis of which the parameters used in the design of the panels are determined. The produced panels differ in many geometric and technological parameters: they differ in panel thickness, width and facing thickness, and the profiling of the facings also varies. Moreover, the same quantities are determined in tests that differ in the static scheme of loading and supporting the elements. Despite this, the panels are usually grouped into so-called families, for which representative strength parameters are determined. Grouping panels into families aims to reduce the costs of strength tests on the one hand, and to simplify the data used in the design process on the other.
The aim of this article is to statistically approach the problem of the influence of geometric and technological factors on specific strength parameters of sandwich panels. From the manufacturer's point of view, knowledge of the significance of a given production parameter on the strength of a sandwich panel can be very important in the context of minimizing costs or maximizing the load-bearing capacity of the panels. The basic strength parameter that determines the scope of the panel's application is the wrinkling stress. This results from the fact that most panels are damaged in the span or on the support due to the wrinkling of the steel facing, which is compressed. Therefore, the results of several years of laboratory tests performed for different panel thicknesses, facing thicknesses, profiling types, core densities and manufacturers were used. The statistical analysis of the results confirmed a significant relationship between the profiling type and wrinkling stress, but also showed that the results of some types of tests should not be combined to create a family. The conclusions resulting from the analyses are of both cognitive and practical nature.
The scientific activity of the Institute of Mathematics, Poznan University of Technology was funded under grant no. 0213/SBAD/0122.
Coupled problems arise in several applications. From a general point of view, each problem containing more than one primary field is called a coupled problem. Usually the class of coupled problems is subdivided into volumetrically coupled problems and problems with surface coupling. The class of volumetrically coupled problems contains e.g. the fluid flow in porous solids described by mixture theory, thermo-mechanically coupled problems, chemo-mechanically coupled problems and electro- or magneto-mechanically coupled problems, while the second class includes problems like the fluid-solid interaction via an interface. Common to all problems is that the presence of different fields requires special attention in the numerical treatment with respect to the multi-field formulation and the solution strategy. The session on coupled problems deals with all aspects mentioned above, i.e. ranging from modelling aspects to numerical solution strategies.
Macromolecules in polymeric liquids (polymer melts, solutions, foams and gels, polymer liquid crystals) are usually distributed randomly, but under large external loads, deformations and high temperatures they can change shapes and orientations. The oriented macromolecules can move with a high degree of friction anisotropy. How can the anisotropic friction forces and moments acting on the macromolecules be determined? In this contribution, making several assumptions, models of anisotropic internal friction in polymeric liquids are proposed in continuous and discretized forms. This follows common approaches to systems of oriented polymeric macromolecules on the microscopic scale: they are studied as continuum media or as assemblies of discrete models of individual macromolecules. In the first approach, Langevin equations of motion describe the Brownian random movements of polymer macromolecules in stochastic dynamics. The Langevin equations have anisotropic viscous friction terms and stochastic noise terms. The proposed friction models include translational and rotational anisotropic viscous friction and various types of friction anisotropy. The anisotropic viscous friction is studied in the frame of continuum modelling. In the second approach, the macromolecule systems are modelled as assemblies of a large number of isolated microscopic elements. Considering a single macromolecule, the following problems are analysed: various discrete models of the macromolecules, different forms of their kinematics, and the presence of anisotropic dry friction in contact with a hypothetical base plane. Anisotropic dry friction forces and moments are investigated in the following cases: sliding of bead-like macromolecules (i.e. spheres connected by springs), rolling without or with slipping of rod-like macromolecules, spinning and sliding of disc-like macromolecules, and snake-like sliding of long macromolecules. Macromolecule dynamics is usually investigated numerically with the aid of known computational methods, e.g. molecular dynamics, multi-body dynamics, FEM and others. The anisotropic friction models have practical applications in predictions of structural and dynamical properties of polymeric liquids and in simulation models of polymer processing (e.g. polymer controlled decompositions and recycling techniques).
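As an illustration of the first approach, the following minimal sketch (with purely illustrative parameters, not the proposed models) performs one Euler–Maruyama step of an overdamped Langevin equation whose anisotropic mobility is built from friction coefficients along and across the axis of a rod-like segment:

```python
import numpy as np

# Minimal sketch (illustrative assumptions): one overdamped Langevin step for a
# rod-like particle with anisotropic viscous friction. zeta_par / zeta_perp are
# hypothetical friction coefficients along / across the rod axis n.
rng = np.random.default_rng(0)
kT = 1.0
dt = 1e-3
zeta_par, zeta_perp = 1.0, 2.0           # anisotropic friction coefficients
n = np.array([1.0, 0.0, 0.0])            # current rod orientation (unit vector)
x = np.zeros(3)                          # particle position
force = np.array([0.0, 1.0, 0.0])        # deterministic (e.g. spring) force

P_par = np.outer(n, n)                   # projector along the axis
P_perp = np.eye(3) - P_par               # projector across the axis
M = P_par / zeta_par + P_perp / zeta_perp                     # anisotropic mobility tensor
B = P_par / np.sqrt(zeta_par) + P_perp / np.sqrt(zeta_perp)   # satisfies B @ B.T == M

# Euler-Maruyama step: drift from the force plus anisotropic thermal noise
x = x + M @ force * dt + np.sqrt(2.0 * kT * dt) * (B @ rng.standard_normal(3))
print(x)
```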
The successful development of ultrasonic drives, e.g. for high-precision positioning, requires a thorough understanding of the material behavior of the piezoceramics utilized. In particular, the material behavior during dynamic high-power operation at high resonance frequencies and large vibration amplitudes plays a decisive role. The microstructure experiences high loading rates at high cyclic mechanical stress magnitudes. The experimental findings indicate that the material reacts by dissipating energy while simultaneously heating. This phenomenon can be attributed to internal friction effects and/or domain switching processes. In order to enhance the operational efficiency of ultrasonic drives, micromechanical material models can be used to predict the behavior of the piezoceramic. This model approach attempts to represent the macroscopic phenomenological material behavior by considering the microscopic processes occurring within the microstructure. With this information in hand, an attempt can be made to attribute the experimentally observed performance losses and the actuator heating to processes in the material. The micromechanical modeling approach is introduced and a concept for an efficient integration into a reduced modeling framework of an ultrasonic drive system is presented.
This work demonstrates a fully thermomechanically coupled material model for shape memory alloys (SMAs) that accurately predicts key behaviors occurring in SMAs, such as the shape memory effect, superelasticity, stress and strain recovery, and martensite reorientation. Developed within the generalized standard material (GSM) framework, the model is derived from a rate potential, with its variations producing the governing equations: linear momentum balance, energy balance, and the evolution of internal variables. Building on the energy and dissipation formulations of the Sedlák [1] model, the proposed framework is applied to the design of an SMA-based out-of-plane bistable microactuator. This actuator features two antagonistically coupled SMA microbridges, enabling bistable behavior where the device snaps between two stable states under thermomechanical loading, exploiting constrained recovery forces to perform mechanical work. Simulation results demonstrate the model's accuracy and computational efficiency in capturing the complex thermomechanical response of SMA devices, highlighting its suitability for advanced bistable actuator design.
[1] Sedlak P, Frost M, Benešová B, Zineb TB, Šittner P. Thermomechanical model for NiTi-based shape memory alloys including R-phase and material anisotropy under multi-axial loadings. International Journal of Plasticity. 2012 Dec 1;39:132-51.
[2] Sielenkämper M, Wulfinghoff S. A thermomechanical finite strain shape memory alloy model and its application to bistable actuators. Acta Mechanica. 2022 Aug;233(8):3059-94.
We present a multi-phase field model to study break-up and droplet formation in thin metal films. This model represents individual metal grains as distinct phase-field variables and captures the effects of coarsening and surface diffusion [1]. By coupling the phase-field model to a chemical potential field through the grand potential formulation, we develop a robust framework for simulating dewetting phenomena. Our work investigates the role of interfacial energy, which can exhibit significant spatial variations. These are caused by a variation in the relative crystallographic orientation of the film and substrate, together with anisotropic properties of one of the components of the system. The inhomogeneity of interfacial energy has a significant impact on grain coarsening rates, droplet formation and the overall stability of the thin film. We systematically analyse how variations in interfacial energy inhomogeneity affect these processes, revealing critical links between material properties and film behaviour. The model is applied to thin nickel films commonly used in solid oxide fuel cells, where dewetting can adversely affect performance by reducing electrical conductivity [2]. To validate our results, we collaborate with experimental colleagues by comparing our simulations with experimental observations of nickel film dewetting. This combined approach of modelling and laboratory experiments provides a deeper understanding of dewetting dynamics and offers processing pathways to optimise thin film performance in technological applications.
Thin metal films exposed to a hydrogen atmosphere are experimentally studied as a hydrogen uptake results in compressive stresses [2]. Compressive stresses are critical for hydride formation. Depending on the atmospheric state, a release of hydrogen can counteract the thermodynamic driving force of hydride formation. However, the thermomechanical state is path-dependent due to the plastic deformation history. In [1] this behavior has been investigated, and an analytical model has been developed accounting for the elasto-plastic behavior of thin niobium hydrogen films. As purely elastically deforming films have been found to suppress hydride formation at ambient temperature, the mechanical state, i.e. the plastic deformation, becomes essential for the thermodynamic behavior of the films.
To deal with this issue, the analytical model developed in [1] is extended to simulate palladium hydrogen films, which undergo strong plastic deformations upon hydrogen cycling. A cycle of loading and unloading includes an uptake of hydrogen as well as a release of hydrogen, which is considered the main influence on the mechanical stress evolution. During the loading half-cycle, a mechanical stress state including elasto-plastic deformations is obtained. The plastic deformations lead to eigenstresses influencing the following half-cycle of unloading. Considering multiple cycles, the eigenstresses have an influence on both half-cycles.
The simulation results for the mechanical state as well as for the chemical state are compared to experimental results of cycled palladium thin films of different thickness. The comparison reveals that, in a first attempt, the outlined model is suitable for describing the related phenomena.
REFERENCES
[1] A. Dyck, T. Böhlke, A. Pundt, and S. Wagner. Phase transformation in the niobium hydrogen system: Effects of elasto-plastic deformations on phase stability predicted by a thermodynamic model. 251.
[2] Stefan Wagner, Thilo Kramer, Helmut Uchida, Patrik Dobron, Jakub Cizek, and Astrid Pundt. Mechanical stress and stress release channels in 10–350 nm palladium hydrogen thin films with different micro-structures. 114:116–125.
Laser beam welding has emerged as an advanced, contactless joining technique. This is primarily due to its high feed rates and low thermal distortion compared to conventional welding processes, [1]. These advantages stem from the focused energy input, enabling precise and controlled execution. Additionally, the automation potential of laser beam welding broadens its applicability across various industries, including automotive and aerospace engineering, shipbuilding, medical technology, electronics, and tool manufacturing, [2]. Despite its benefits, the process is not without challenges, particularly the formation of solidification cracks. These cracks develop during the solidification phase, originating from microcracks within the weld bead and propagating to the surface as cooling progresses, [3]. These critical states form in the mushy zone, which forms behind the weld pool and contains the transition between the fully liquid and solid phases. This region exhibits a dendritic microstructure that can trap liquid inclusions. When these inclusions solidify, the absence of liquid replenishment generates critical material states, potentially inducing stresses that lead to crack formation. This study addresses the issue on macroscopic and microscopic levels. On a macroscopic scale, the mushy zone in the welded specimen is identified using suitable heat source models, [4,5]. On a microscopic scale, the evolving dendritic microstructure stemming from prior phase field simulations, [6], is analyzed, incorporating thermal and elastoplastic effects to capture the inherent stress and strain states. This dual approach provides insights into critical conditions that may contribute to the identification of material failure.
[1] M. Dal and R. Fabbro. An overview of the state of art in laser welding simulation. Optics & Laser Technology, 78, 2-14, 2016.
[2] U. Dilthey. Schweißtechnische Fertigungsverfahren 1: Schweiß- und Schneidetechnologien, 3., bearbeitete Auflage ed., Springer-Verlag: Berlin, Heidelberg, 2006.
[3] E. Folkhard, G. Rabensteiner, E. Pertender, H. Schabereiter and J. Tösch. Metallurgie der Schweißung nichtrostender Stähle, Springer-Verlag: Vienna, 1984.
[4] A. Artinov, V. Karkhin, N. Bakir, X. Meng, M. Bachmann, A. Gumenyuk and M. Rethmeier. Lamé curve approximation for the assessment of the 3D temperature distribution in keyhole mode welding processes. Journal of Laser Applications, 32, 022042 (8 pages), 2020.
[5] P. Hartwig, L. Scheunemann and J. Schröder. A volumetric heat source model for the approximation of the melting pool in laser beam welding. Proceedings in Applied Mathematics and Mechanics, 23(4), e202300173, 2023.
[6] M. Umar, D. Schneider and B. Nestler. Solidification of quaternary X5CrNi18-10 alloy after laser beam welding: A phase-field approach. Procedia CIRP, 124, 460-463, 2024.
This section is dedicated to discuss recent advances in multiscale and homogenization techniques for static and dynamic problems. Topics of particular interest are nonlinear homogenization techniques, multiscale modelling of failure processes and localization phenomena, FE2 methods, atomistic to continuum coupling, contact homogenization, model reduction techniques and furthermore homogenization schemes incorporating experimentally determined microstructure data.
Many applications in the field of nonlinear elasticity aim at finding a global minimizer of functionals of the form
I(u) = ∫_Ω W(∇u(x))dx
over a domain Ω ⊂ ℝᵈ in spatial dimension d ∈ {2, 3} for a suitable weak class of deformations u : Ω → ℝᵈ. In many relevant cases, the density W : ℝᵈˣᵈ → ℝ̄ ≔ ℝ ∪ {+∞} does not satisfy a suitable notion of convexity, and the existence of minimizers cannot be guaranteed. In fact, the infimum may not be attained, and nonconvexity, e.g. a multiwell structure of the energy density, may lead to the emergence of increasingly fine microstructures within the minimising sequences. Moreover, the application of standard discretisation methods for the minimisation of I typically leads to mesh-dependent results with oscillations in the discrete deformation gradient at the length scale of the mesh size. Therefore, alternative approaches are introduced for both mathematical analysis and numerical simulation using relaxed formulations, e.g. based on polyconvexification of the energy density, which focus on the macroscopic features responsible for the global behaviour by extracting the relevant information from the unresolved microstructures. Different algorithms have been developed in the literature to perform the polyconvexification numerically. However, the computational time of these algorithms remains high due to the high dimensionality of the problem. In this talk, by combining the recent singular value polyconvexification with fully input convex neural networks (FICNN), we accelerate the prediction of the polyconvex envelope. The significant speed-up associated with the neural network compression is demonstrated in a series of numerical experiments.
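A minimal sketch of such an input convex network is given below (illustrative architecture only, not the authors' implementation): softplus activations and non-negative weights on the hidden path are one common way to enforce convexity in the input, here taken to be the singular values of the deformation gradient.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch (illustrative): a fully input convex neural network. Convexity in
# the input x is enforced by non-negative weights on the hidden "z-path" and by
# convex, non-decreasing activations (softplus).
class ICNN(nn.Module):
    def __init__(self, dim_in=3, width=32, depth=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim_in, width) for _ in range(depth)])
        self.Wz = nn.ModuleList([nn.Linear(width, width, bias=False) for _ in range(depth - 1)])
        self.out_x = nn.Linear(dim_in, 1)
        self.out_z = nn.Linear(width, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            z = F.softplus(Wx(x) + Wz(z))
        return self.out_x(x) + self.out_z(z)

    def clamp_convex(self):
        # project the z-path weights onto the non-negative orthant after each
        # optimizer step to preserve convexity in x
        for lin in list(self.Wz) + [self.out_z]:
            lin.weight.data.clamp_(min=0.0)

model = ICNN(dim_in=3)          # e.g. x = singular values of the deformation gradient
y = model(torch.rand(8, 3))     # predicted (poly)convex envelope values
model.clamp_convex()
```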
Mean-field homogenization theories mainly rely on the Eshelby problem, which provides solutions for eigenstrain concentration in an ellipsoidal inclusion embedded within an infinite matrix. In nature, however, microstructures often feature inclusions with non-ellipsoidal shapes for which no analytical Eshelby solutions exist [1]. To address this limitation, we aim to develop a generalizable solution that accommodates arbitrary inclusion shapes by adapting the recently developed deep material networks (DMNs).
DMNs [2, 3] learn the underlying microstructure of a material and are able to predict the material's linear and, to a certain extent, nonlinear macroscopic behavior. DMNs are hierarchical, tree-shaped artificial neural networks. Each neuron represents a classical laminate for which stiffness homogenization, strain concentration, and rotation operations are defined. They are typically trained on finite element (FE) results of geometrically exact representations of a specific microstructure configuration. The trained network can be used to predict the material's nonlinear behavior. However, DMNs are restricted to one single microstructure. Accounting for varying volume fractions of the material constituents, varying orientations of inclusion-type constituents, or a multi-scale hierarchical material is not straightforward and requires new training data generation and, thus, numerous additional FE simulations [1].
Inspired by these challenges, we propose a novel modeling framework utilizing DMNs to solve the Eshelby problem. Our approach, termed the Deep Eshelby Network (DEN), integrates seamlessly with classical mean-field homogenization theories, bypassing the need for exhaustive microstructure-specific training. Similar to the DMNs, the deep Eshelby network is a tree-shaped neural network where each neuron performs a stiffness homogenization and strain concentration based on the laminate homogenization theory. Unlike DMNs, which are trained based on homogenized stiffnesses, the DEN is trained on strain concentration tensors associated with the Eshelby problem, which are derived from FE simulations of superellipsoidal inclusions embedded in an infinite matrix. By leveraging superellipsoid parameterization, the DEN can predict strain concentration tensors for a wide range of physically occurring inclusion geometries. The proposed and already-trained DEN can be used to homogenize a large variety of multi-scale microstructures without the need for any additional training, overcoming the limitations of analytical approaches while being more efficient than FE simulations.
[1] Traxl Roland, and Lackner Roman (2018). Mech Mater, 126, 126–39.
[2] Liu Zeliang, and Wu C. T. (2019). J Mech Phys Solids, 127, 20–46.
[3] Gajek Sebastian, Schneider Matti, and Böhlke Thomas (2020). J Mech Phys Solids, 142, 103984.
3D printing of sand cores enables the design of complex cast parts and an easier prototyping approach. However, the 3D printed materials behave differently from the traditional, core-blown sand core materials. In particular, the layer-by-layer printing process may introduce some anisotropy in the macroscopic thermal, mechanical and flow properties. These macroscopic properties directly influence the surface quality of the molded part, as well as its mechanical properties. Hence, it is of interest to understand how the printing parameters influence the behavior of the sand core. In this contribution, we focus on the link between the microstructure of the 3D printed material and its macroscopic properties. More specifically, we present a microstructure generation approach that aims to reproduce the 3D printing process. Such an approach relies on an approximation of the sand grains by clusters of overlapping spheres, on a directional contraction method that reproduces the layer-by-layer deposition process, and on a grid-free addition of binder between the grains. We compute the macroscopic properties using an FFT-based homogenization solver [1]. We compare the computational results to experimental results and conclude on the relevance of the proposed microstructure generation method.
[1] H. Moulinec and P. Suquet, “A fast numerical method for computing the linear and nonlinear mechanical properties of composites,” Comptes Rendus de l’Académie des Sciences, Série II, Mécanique, Physique, Chimie, Sciences de l’Univers, Sciences de la Terre, vol. 318, no. 11, pp. 1417–1423, 1994.
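For orientation, a minimal sketch of the basic fixed-point scheme of [1] is given below for a scalar (thermal-conductivity) analogue on a periodic pixel grid; the material field, contrast, grid size and iteration count are purely illustrative and the elastic case used in the contribution is analogous in structure.

```python
import numpy as np

# Minimal sketch (scalar analogue, illustrative only): basic Moulinec-Suquet
# fixed-point scheme on a periodic N x N pixel grid.
N = 64
k = np.ones((N, N)); k[16:48, 16:48] = 10.0       # toy two-phase conductivity field
k0 = 0.5 * (k.min() + k.max())                     # reference medium
E = np.array([1.0, 0.0])                           # prescribed mean temperature gradient

xi = np.fft.fftfreq(N) * 2.0 * np.pi               # Fourier frequencies
XI = np.array(np.meshgrid(xi, xi, indexing="ij"))  # shape (2, N, N)
xi2 = XI[0]**2 + XI[1]**2
xi2[0, 0] = 1.0                                    # avoid division by zero at zero frequency

e = np.broadcast_to(E[:, None, None], (2, N, N)).copy()   # initial gradient field
for _ in range(200):
    tau = (k - k0) * e                             # polarization field
    tau_h = np.fft.fft2(tau, axes=(1, 2))
    xit = XI[0] * tau_h[0] + XI[1] * tau_h[1]      # xi . tau_hat
    e_h = -XI * xit / (k0 * xi2)                   # Green operator applied in Fourier space
    e_h[:, 0, 0] = E * N * N                       # enforce the prescribed mean gradient
    e = np.real(np.fft.ifft2(e_h, axes=(1, 2)))

k_eff = np.mean(k * e[0]) / E[0]                   # effective property in the loading direction
print(k_eff)
```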
Composite materials often exhibit complex anisotropic mechanical responses governed by their microstructural geometry and material phase distributions. Capturing these responses and their dependence on microstructural parameters is thus a critical challenge. In this work, we address this problem through a data-driven model-discovery framework that infers interpretable constitutive relationships directly from homogenized stress-strain data. By sparsely selecting relevant invariants of the Cauchy-Green deformation tensor, this approach aims to construct a strain energy density function that naturally encodes anisotropy while fulfilling thermodynamic constraints.
Previous works have introduced surrogate modeling techniques, such as neural networks, to approximate the homogenized response of a Representative Volume Element (RVE) as functions of various design parameters. These methods can learn anisotropy types and preferred directions but often yield “black-box” models that lack interpretability and require extensive hyperparameter tuning. In contrast, the proposed model-discovery approach delivers interpretable constitutive relationships by design and facilitates the direct inference of anisotropic features.
With the discovered model at hand, one obtains a surrogate that can be readily integrated into computational frameworks, potentially offering an alternative to more expensive concurrent multiscale methods like FE². We demonstrate the effectiveness of our approach on both synthetic and homogenized datasets, showing that it yields improved interpretability, reduced computational costs, and reliable material responses compared to existing methodologies.
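As a rough illustration of the model-discovery idea (not the authors' framework), the following sketch builds a small library of isotropic and fiber-type invariants of the right Cauchy–Green tensor and selects terms by sparse (Lasso) regression on synthetic energy data; the fiber direction, library terms and regularization strength are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Minimal sketch (illustrative): sparse selection of invariant-based terms for a
# strain energy density from (C, W) data pairs.
rng = np.random.default_rng(1)
a = np.array([1.0, 0.0, 0.0])                       # assumed preferred (fiber) direction

def invariants(C):
    I1 = np.trace(C)
    I2 = 0.5 * (np.trace(C) ** 2 - np.trace(C @ C))
    I4 = a @ C @ a                                  # anisotropic (fiber) invariant
    return I1, I2, I4

# synthetic homogenized data from a hypothetical "true" model
Fs = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(200)]
Cs = [F.T @ F for F in Fs]
W = np.array([2.0 * (i1 - 3) + 5.0 * (i4 - 1) ** 2
              for i1, _, i4 in map(invariants, Cs)])

# candidate library of invariant terms (columns) evaluated at each sample (rows)
X = np.array([[i1 - 3, i2 - 3, (i1 - 3) ** 2, i4 - 1, (i4 - 1) ** 2]
              for i1, i2, i4 in map(invariants, Cs)])

fit = Lasso(alpha=1e-3, fit_intercept=False).fit(X, W)
print(dict(zip(["I1-3", "I2-3", "(I1-3)^2", "I4-1", "(I4-1)^2"], fit.coef_)))
```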
We present a deep-learning-based approach to the compression of multi-scale partial differential operators characterized by highly heterogeneous coefficients. Such operators naturally appear in many science and engineering disciplines when modeling processes in heterogeneous media, that are marked by the interaction of effects on multiple scales. In order to simulate the effective behaviour of the operator on a macroscopic target scale of interest without having to resolve all microscopic features in the model with a computational mesh, many numerical homogenization methods that compress such operators reliably to macroscopic surrogate models suitable for this task have been developed over the years. We propose to approximate these surrogates with a neural network in a hybrid offline-online algorithm, that aims at combining the advantages of classical model-based numerical homogenization with those of the data-driven regime of deep learning. In the offline phase, a neural network is trained to approximate the coefficient-to-surrogate map from a dataset consisting of coefficient-surrogate-pairs, which is computed with classical numerical homogenization algorithms. This has the advantage that in the subsequent online phase, compressing previously unseen coefficients via forward passes through the trained network is significantly accelerated compared to the classical homogenization algorithm used in the offline phase. This makes multi-query applications where online efficiency is crucial, for example the simulation of evolution equations with time-dependent multi-scale coefficients, computationally feasible. We apply this hybrid framework to a prototypical elliptic homogenization problem in connection with a representative modern numerical homogenization method, the Localized Orthogonal Decomposition. To justify our approach, we prove that the surrogates generated by Localized Orthogonal Decomposition can be approximated to arbitrary accuracy by a feedforward neural network. The practical feasibility of our method is demonstrated in a series of numerical experiments.
The presented complete simulation chain connects the simulation of the reactive injection molding process with the structural mechanical properties based on computational homogenization.
The simulation of the foam expansion is carried out macroscopically with ITWM's FOAM solver. The rheology is calibrated in terms of rising-pipe experiments for different pipe diameters and reaction masses, [1].
The results of these FOAM simulations include, among others, material densities and pore size distributions. These quantities are then applied to generate so-called representative volume elements (RVE), and the effective mechanical properties are computed with ITWM's solver FeelMath, [2],[3]. The required microscopic material properties of the PUR foam are calibrated in terms of compression tests on homogeneous foam samples.
Finally, an inhomogeneous beam-like structure is simulated on the macroscale by using the effective foam properties, and the results are compared to corresponding experiments.
Therefore, the established method yields a Digital Twin of the PUR foam component and is applicable to more complex foam structures.
[1] Schäfer, K., D. Nestler, J. Tröltzsch, I. Ireka, D. Niedziela, K. Steiner, and L. Kroll. “Numerical Studies of the Viscosity of Reacting Polyurethane Foam with Experimental Validation,” 2020. https://doi.org/10.3390/polym12010105
[2] Kabel, M., and H. Andrä. “Fast Numerical Computation of Precise Bounds of Effective Elastic Moduli.” Fraunhofer ITWM, 2013. https://doi.org/10.24406/publica-fhg-296341
[3] Fraunhofer ITWM: FeelMath, www.itwm.fraunhofer.de/feelmath, Accessed: 20.12.2024
The topic of this session is the analysis and modeling of turbulent non-reactive and reactive flows based on DNS, LES, RANS, and experiments. A special focus is on fundamentals in turbulence, turbulent reactive flows, turbulent multi-phase flows, turbulence of atmosphere, atmosphere/ocean interaction, modeling and simulation of complex turbulent flows, the interface of numerical algorithms, chemical and physical modeling, as well as high-performance computing with its application to turbulence.
We investigate the effect of a porous insert located upstream of the separation edge of a backward-facing step (BFS) in the early transitional regime as a function of Reynolds number. This is an example of a hydrodynamic system that combines a separated shear flow with large amplification potential and porous materials known for efficient flow destabilisation. Spectral analysis reveals that the dynamics of the BFS is dominated by spectral modes that remain globally coherent along the streamwise direction. We detect two branches of characteristic frequencies in the flow and, with the Hilbert transform, we characterise their spatial support. For low Reynolds numbers, the dynamics of the flow is dominated by the lower frequency, whereas for sufficiently large Reynolds numbers a cross-over to higher frequencies is observed. Increasing the permeability of the porous insert decreases the Reynolds number at which the frequency cross-over occurs. By comparing normalised frequencies on each branch with local stability analysis, we attribute Kelvin–Helmholtz and Tollmien–Schlichting instabilities to the upper and lower frequency branches, respectively. Finally, our results show that porous inserts enhance the Kelvin–Helmholtz instability and promote transition to oscillator-type dynamics. Specifically, the amplitude of vortical (BFS) structures associated with the higher-frequency branch follows the Landau model prediction for all investigated porous inserts (https://doi.org/10.1017/jfm.2024.639).
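A minimal sketch (with a synthetic mode shape standing in for measured data) of how a Hilbert-transform envelope characterises the streamwise support of such a spectral mode:

```python
import numpy as np
from scipy.signal import hilbert

# Minimal sketch (illustrative): spatial support of a spectral mode via its
# Hilbert-transform amplitude envelope along the streamwise direction.
x = np.linspace(0.0, 30.0, 2000)                        # streamwise coordinate
mode = np.exp(-0.5 * ((x - 12.0) / 4.0) ** 2) * np.cos(2.0 * np.pi * 0.8 * x)  # toy mode shape
envelope = np.abs(hilbert(mode))                        # local amplitude of the mode
x_support = x[envelope > 0.5 * envelope.max()]          # region where the mode is active
print(x_support.min(), x_support.max())
```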
The air-sea gas flux is proportional to the difference of partial pressure between the sea water and the overlying atmosphere multiplied by the gas transfer velocity k, a measure of the effectiveness of the gas exchange. Because wind is the source of the turbulence making the gas exchange more effective, k is usually parameterized by wind speed. Unfortunately, measured values of gas transfer velocity at a given wind speed have a large spread. Surfactants have long been suspected as the main reason for this variability, but few simultaneous measurements of gas exchange and surfactants have been performed at open sea, and therefore the results were inconclusive. Only recently has it been shown that surfactants may decrease the CO2 air-sea exchange by up to 50%. However, the labour-intensive methods used for surfactant study make it impossible to collect enough data to map the surfactant coverage or even to create a gas transfer velocity parameterization involving a measure of surfactant activity.
Previous research done by our group showed that fluorescence parameters allow estimation of the surfactant enrichment of the surface microlayer, as well as of the types and origin of the fluorescent organic matter involved. We plan to measure, from a research ship, all the variables needed for the calculation of the gas transfer velocity k (namely the CO2 partial pressure both in water and in air as well as the vertical flux of this trace gas) and to use mathematical optimization methods to look for a parameterization involving wind speed and one of the fluorescence parameters which minimizes the residual variability of k. Although our research will still involve water sampling and laboratory fluorescence measurements, the knowledge of which absorption and fluorescence emission bands are the best proxy for surfactant activity may allow the creation of remote sensing products (fluorescence lidars) enabling continuous measurements of surfactant activity, at least from the ship board. The improved parameterization of the CO2 gas transfer velocity will allow better constraining of basin-wide and global air-sea fluxes, an important component of the global carbon budget. Our group is currently in the middle of a four-year project, SURETY (NCN grant no. 2021/41/B/ST10/00946), aiming at estimating how much surfactants affect the gas transfer velocity and at choosing optical parameters suitable for parameterization of the process. This presentation aims to present our motivation and first results.
The atmospheric boundary layer (ABL) is the lowest part of the Earth's atmosphere, where interactions between the surface and the atmosphere play a crucial role in shaping weather patterns, climate processes, and air quality. Stably stratified ABLs form mostly overnight due to surface radiative cooling; they also prevail in the snow-covered polar regions. Under stable stratification, air near the surface is cooler than the air above it, inhibiting vertical mixing. Turbulence within the stably stratified boundary layer is produced by shear; however, due to the negative buoyancy flux, turbulent motions can be locally suppressed or significantly reduced. The characteristic features of the stably stratified ABL are intermittency, non-stationarity and the presence of gravity wave motions. These additional complexities make turbulence parametrization challenging. In this work we study observational data from the MOSAiC (Multidisciplinary drifting Observatory for the Study of Arctic Climate) expedition and calculate the terms in the budget of the turbulence kinetic energy. We focus in particular on the periods of turbulence decay and turbulence development. We calculate the non-dimensional dissipation coefficient and show that it varies significantly, especially during the transient periods. We put forward a parametrization scheme for the strongly stratified ABL and present its experimental verification.
Cumulonimbus clouds, commonly called thunderclouds, are widely associated with thunder and lightning. Even though they develop originally from cumulus clouds, very little is known about their behavior, especially in terms of turbulent statistics. In this study, using data obtained from measurement campaigns, we analyze, in parallel, the variation of the electric field, temperature, and wind velocity. Furthermore, we estimate the turbulence kinetic energy dissipation rate from the wind velocity spectra and the second-order structure functions. A comparison of turbulent statistics between non-thunderstorm clouds, such as stratocumulus clouds, and cumulonimbus clouds will also be performed.
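A minimal sketch of a dissipation-rate estimate from the second-order structure function (with a white-noise placeholder standing in for a measured velocity series and illustrative sampling parameters):

```python
import numpy as np

# Minimal sketch (illustrative): estimating the turbulence kinetic energy
# dissipation rate from the longitudinal second-order structure function,
# S2(r) = C2 (eps * r)^(2/3), with the Kolmogorov constant C2 ~ 2.0.
rng = np.random.default_rng(2)
fs = 100.0                        # sampling frequency [Hz]
U = 5.0                           # mean wind speed for Taylor's hypothesis [m/s]
u = rng.standard_normal(100_000)  # placeholder for a measured velocity fluctuation series

C2 = 2.0
lags = np.arange(1, 50)
r = lags * U / fs                 # spatial separations via Taylor's frozen-turbulence hypothesis
S2 = np.array([np.mean((u[lag:] - u[:-lag]) ** 2) for lag in lags])
eps = (S2 / C2) ** 1.5 / r        # dissipation-rate estimate for each separation
print(np.median(eps))             # e.g. the median over inertial-range lags
```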
The outflow of a gas from a pressurised vessel is one of the classical fundamental problems in fluid mechanics. However, the compressible and subcritical case is more complex than one might think and there is no general analytical solution. This case was examined in more detail.
Based on Bernoulli's equation, a fast and accurate model for the prediction of the transient gas outflow from vessels is derived. The system is considered adiabatic, isentropic, and subcritical, and the gas is considered ideal and compressible. Two cases are considered. In the first case, a thin-wall vessel is considered in which frictional pressure losses are negligible. This case leads to the Inertial Loss Model (ILM). In the second case, a thick-wall vessel is considered in which frictional pressure losses inside the outlet must be taken into account. This leads to the Frictional Inertial Loss Model (FILM). For the first case, a dimensionless number is identified based on the Buckingham pi-theorem, which is used to describe any subcritical, adiabatic discharge. The model and the dimensionless number are validated by numerical simulations and experiments.
A plot of the dimensionless number against the dimensionless pressure difference results in a single curve that covers all parameter settings. This means that it is possible to determine all isentropic, subcritical discharges from thin-wall, adiabatic pressure vessels with two equations within a few seconds. The mean deviation of the inertial loss model from the simulations is only 1.939 %, and that of the correlation from the simulations only −2.343 %. The correlation is characterised by its generality and its fast and accurate solution; in particular, because it requires neither complex numerical simulations nor the solution of differential equations, it is a practical tool for calculating the discharge duration of any isentropic, adiabatic, subcritical pressure discharge in daily engineering life.
The mean deviation between the FILM for thick-wall vessels and the experiments is ‑1.75 %. This shows that it is possible to determine the outflow duration, even for more complex problems with frictional pressure losses, quickly and accurately.
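For orientation, a minimal sketch of a generic quasi-steady, subcritical, isentropic blow-down is given below (not the ILM/FILM presented here; the gas properties, vessel and orifice data are purely illustrative), using the standard St. Venant–Wantzel mass-flow relation:

```python
import numpy as np

# Minimal sketch (illustrative): subcritical, isentropic blow-down of an adiabatic
# vessel through an orifice, integrated quasi-steadily in time.
kappa, R = 1.4, 287.0            # ideal gas (air)
V, A, Cd = 0.05, 1e-4, 0.85      # vessel volume [m^3], orifice area [m^2], discharge coefficient
p_a = 1.0e5                      # ambient pressure [Pa]
p, T = 1.6e5, 293.0              # initial (subcritical) vessel state
rho = p / (R * T)
p0, rho0 = p, rho                # reference state for the isentropic vessel relation

dt, t = 1e-4, 0.0
while p > 1.001 * p_a:
    pr = p_a / p                 # subcritical pressure ratio
    mdot = Cd * A * p * np.sqrt(2.0 * kappa / ((kappa - 1.0) * R * T)
                                * (pr ** (2.0 / kappa) - pr ** ((kappa + 1.0) / kappa)))
    rho -= mdot / V * dt         # mass balance of the vessel
    p = p0 * (rho / rho0) ** kappa   # isentropic vessel state
    T = p / (rho * R)
    t += dt
print(f"discharge time ~ {t:.3f} s")
```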
This session is devoted to the mathematical analysis of natural phenomena and engineering problems. In this area PDEs play a basic role. Therefore lectures discussing analytical aspects of PDE problems as well as problems in the Calculus of Variations are welcome.
In extremely thin ferromagnetic films, an additional interaction, the so-called Dzyaloshinskii-Moriya interaction (DMI), arises in the micromagnetic energy along with a magnetocrystalline anisotropy favoring the magnetization to be out-of-plane. In such materials, topologically nontrivial, point-like configurations of the magnetization called magnetic skyrmions are observed, which are of great interest in the physics community due to possible applications in high-density data storage. Although typically thought of as stabilized by DMI, skyrmions have also been demonstrated to arise due to stray field effects. For sufficiently thin films, we will characterize skyrmions as local minimizers of a 2D micromagnetic energy augmented by DMI and describe their asymptotics in the regime of dominating exchange (or Dirichlet) energy. In this regime, skyrmions collapse into a point and quantitative rigidity estimates for such maps are required to capture the asymptotics. However, the skyrmions might become smaller than the film thickness, indicating a breakdown of the 2D model. We will present a more carefully obtained thin film model and investigate the question of (non-)existence of stray-field stabilized skyrmions.
The Swift-Hohenberg equation arises as a basic pattern forming model. We consider the dynamics of a space fractional version of this model near instability using amplitude equations. More precisely, we prove that there exists an approximation by a Ginzburg-Landau equation near the first bifurcation point.
In this talk, we discuss a quasi-stationary model for stress-modulated growth of elastic materials. Inspired by models for crystal plasticity, the model features a multiplicative decomposition of the total deformation gradient into an elastic part and the growth tensor. The growth tensor is given by the solution to an ordinary differential equation on a suitable Banach space that depends, possibly non-linearly, on the elastic stress which is induced by the subsequent elastic deformation. The elastic deformation is given by the minimizer of a variational problem with a determinant constraint, which in turn depends on the solution to the ordinary differential equation, the growth tensor, by means of a coordinate transformation. Moreover, the growth process is driven by the presence of nutrients. In this model the nutrient concentration is determined by an elliptic reaction-diffusion equation. The ordinary differential equation that determines the growth process depends on the solution to this reaction-diffusion equation and the coefficients of the reaction-diffusion equation depend on the growth tensor and the elastic deformation.
We discuss existence and regularity of global solutions to this model in the case of one spatial dimension. Furthermore, we give an outlook onto the multi-dimensional case. Existence of solutions to the ordinary differential equation is proved by means of Picard-Lindelöf's theorem. Hence, for existence and regularity results, it is crucial to prove that the elastic stress depends Lipschitz-continuously on the growth tensor. While in the one-dimensional case the variational approach yields good results due to the fact that minimizers to the variational problem solve the associated Euler-Lagrange equations despite the determinant constraint, this poses a major obstacle in the analysis of the multi-dimensional case in which analyzing the associated equations seems to be more promising.
A hierarchy of classical plate models such as nonlinear bending theory, von Karman theory and linearized von Karman theory was derived from three-dimensional nonlinear elasticity by Gamma-convergence (see [2], [3]). The justification of the linear Timoshenko beam model and the Reissner-Mindlin plate model via Gamma-limit of the linear-elasticity model for a three-dimensional body was obtained in [1], [4]. We are looking for a way to justify the Timoshenko model by Gamma-convergence starting from a nonlinear model for a three-dimensional elastic body.
REFERENCES
[1] Falach L., Paroni R., Podio-Guidugli P., A justification of the Timoshenko beam model through Γ-convergence, Analysis and Applications 15:02, 261-277 (2017).
[2] Friesecke, G., James, R.D., Müller, S., A hierarchy of plate models derived from nonlinear elasticity by Gamma-convergence, Arch. Rational Mech. Anal., 180, 183-236 (2006).
[3] Friesecke, G., James, R.D., Müller, S., Rigorous derivation of nonlinear plate theory and geometric rigidity, C. R. Acad. Sci. Paris, Sér. I, 334, 173-178 (2002).
[4] Paroni R., Podio-Guidugli P., Tomassetti G., The Reissner–Mindlin plate theory via Γ-convergence, C. R. Math. Acad. Sci. Paris, 343, 437–440 (2006).
Pointwise convexity conditions of the Legendre-Hadamard type for energy minimizers are derived in the context of a special Cosserat elasticity model for fiber-reinforced solids [1]. In this approach, the Cosserat rotation tensor accounts for the kinematics of embedded fibers, regarded as continuously distributed Kirchhoff rods. Firstly, we obtain new Legendre-Hadamard inequalities for three-dimensional fiber-reinforced elastic solids using variational calculus as in [2]. Secondly, we consider nonlinear elastic shells reinforced by a family of fibers [3] and we derive the counterparts of these necessary conditions for energy minimizers in the theory of shells.
References:
[1] M. Shirani and D.J. Steigmann. A Cosserat model of elastic solids reinforced by a family of curved and twisted fibers. Symmetry, 12: 1133, 2020.
[2] M. Shirani, D.J. Steigmann and M. Birsan. Legendre-Hadamard conditions for fiber-reinforced materials with one, two or three families of fibers. Mechanics of Materials, 184: 104745, 2023.
[3] M. Birsan, M. Shirani and D.J. Steigmann. Convexity conditions for fiber-reinforced elastic shells, Mathematics and Mechanics of Solids, https://doi.org/10.1177/10812865241261485, 2024.
The session especially welcomes contributions to the following topics: uncertainty quantification; risk analysis and assessment; Bayesian methods in engineering; decision analysis; stochastic modeling (including spatio-temporal modeling); stochastic mechanics; stochastic algorithms and simulation; stochastic processes (including time series); resampling methods; stochastic networks. Applications to large scaled problems are encouraged.
Many real-world systems exhibit inherently nonlinear dynamics which can induce sudden, large and oftentimes irreversible changes when an external forcing parameter is varied. This phenomenon is known as tipping, with Greenland Ice Sheet melting and the weakening of the Atlantic Meridional Overturning Circulation as prominent examples of climate tipping elements. We model these tipping phenomena via bifurcation theory and analyse these bifurcations in the realm of uncertainty [1]. In this talk, we will focus on parameter uncertainty. This arises naturally when model parameters need to be estimated from measurement data or cannot even be measured directly but need to be inferred from other observable quantities.
A natural question is how this parameter uncertainty propagates through the nonlinear dynamics and how it affects the bifurcation landscape, in particular the bifurcation curves. We use sensitivity analysis to tackle this question. In [2], the authors introduced a sensitivity analysis of uncertainty propagation for differential equations with random inputs subject to perturbations of the input measures. To build upon these results, we choose the Fréchet distance as a metric on the space of bifurcation curves. We transfer results from [2] to bifurcation theory. Thereby, we provide worst case estimates on the distance between bifurcation curves based on the distance of given model input parameter distributions contributing to a risk assessment of climate tipping phenomena.
[1] K. Lux, P. Ashwin, R. Wood, and C. Kuehn. Assessing the impact of parametric uncertainty on tipping points of the atlantic meridional overturning circulation. Environmental Research Letters, 17(7):075002, 2022.
[2] O. G. Ernst, A. Pichler, and B. Sprungk. Wasserstein Sensitivity of Risk and Uncertainty Propagation. SIAM/ASA Journal on Uncertainty Quantification, 10(3), 2022.
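A minimal sketch of the discrete Fréchet distance between two sampled bifurcation curves (toy curves for illustration, not the estimates of the talk):

```python
import numpy as np

# Minimal sketch (illustrative): discrete Frechet distance between two sampled
# curves P and Q (arrays of points), via the classical dynamic program.
def discrete_frechet(P, Q):
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]

# two hypothetical bifurcation curves in the (parameter, state) plane
mu = np.linspace(0.0, 1.0, 200)
curve_nominal = np.column_stack([mu, np.sqrt(mu)])
curve_perturbed = np.column_stack([mu, np.sqrt(0.9 * mu) + 0.02])
print(discrete_frechet(curve_nominal, curve_perturbed))
```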
Probability theory offers a practical and sound framework for assessing the reliability (or its complement, the failure probability) of engineering systems. In such a framework, the uncertainty associated with the input variables of a numerical model is described in terms of a joint probability distribution. Then, the probability of failure of the system is computed by integrating this joint probability distribution over the set of input variables that lead to an undesirable behavior. However, in cases of practical interest, it may be challenging to define a crisp probability distribution due to lack of knowledge, data scarcity and corrupted data, among other issues. Under such a situation, one possibility is describing uncertainty through probability models whose parameters (such as mean value or standard deviation) are described considering intervals. This corresponds to a so-called parametric probability box (p-box) approach. By considering a p-box, it is possible to capture both aleatoric and epistemic sources of uncertainty in a problem. In this setting, the failure probability is no longer a crisp, deterministic value but becomes an interval as well, that is, an imprecise failure probability. Assessing this interval is of utmost importance, as it provides a measure of how sensitive a particular system is with respect to the effect of epistemic uncertainty. Nonetheless, the calculation of imprecise probabilities is usually a demanding task, as it becomes necessary to propagate aleatoric and epistemic uncertainty in a double-loop fashion, which demands repeated evaluations of the numerical model describing the behavior of an engineering system.
In view of the challenges described above, this work presents an approach for estimating imprecise failure probabilities. The approach is based on the concept of an augmented reliability problem, where the epistemic distribution parameters are artificially regarded as aleatoric. The augmented reliability problem is solved using the First-Order Reliability Method (FORM). The functional dependence of the failure probability with respect to the distribution parameters can be calculated explicitly based on the FORM assumption. This allows determining the bounds of the imprecise failure probability in closed form once the design point associated with the augmented reliability problem has been located. An example illustrates the application of the proposed approach, indicating that it can provide a good estimate on the intervals of an imprecise failure probability with reduced numerical efforts.
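To make the idea concrete, a minimal sketch with a linear limit state and normal variables is given below (an illustrative special case, not the augmented FORM formulation of this work), where an interval-valued mean resistance yields an interval failure probability:

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch (illustrative): bounds of an imprecise failure probability when a
# distribution parameter is only known to lie in an interval.
def beta_form(mu_R, sigma_R, mu_S, sigma_S):
    # FORM reliability index of the linear limit state g = R - S with normal R, S
    return (mu_R - mu_S) / np.sqrt(sigma_R**2 + sigma_S**2)

sigma_R, mu_S, sigma_S = 0.6, 5.0, 0.8
mu_R_interval = (7.0, 8.0)                    # epistemic interval for the mean resistance

# p_f(theta) = Phi(-beta(theta)); sweep the interval to bound the failure probability
mus = np.linspace(*mu_R_interval, 101)
pf = norm.cdf(-beta_form(mus, sigma_R, mu_S, sigma_S))
print(f"imprecise failure probability: [{pf.min():.3e}, {pf.max():.3e}]")
```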
In engineering practice, epistemic uncertainty widely exists due to imperfect modeling, simplifications, and limited data. Among these uncertainties, uncertainty in distribution parameters has drawn significant attention. To comprehensively evaluate structural reliability under the distribution parameter uncertainty, often referred to as conditional failure probability, several methods have been proposed. However, these methods have not addressed the reliability evaluation involving correlated random variables. In this study, an explicit calculation formula for the quantile of the conditional failure probability involving correlated random variables is proposed. The proposed method starts by transforming the limit state function with correlated random variables into the standard normal space using orthogonal normal transformation. Subsequently, the first three central moments of the conditional reliability index are calculated through sparse grid quadrature, enabling the derivation of its probability distribution. Finally, an explicit expression for the quantile values of the conditional failure probability is formulated based on the distribution of the conditional reliability index. The effectiveness and accuracy of the proposed method are demonstrated through two illustrative examples, where the results closely match those obtained using Monte Carlo simulations.
References
Li, P.P., Lu, Z.H. and Zhao, Y.G. An effective and efficient method for structural reliability considering the distributional parametric uncertainty, Applied Mathematical Modelling, 106: 507-523, 2022.
Li, P.P., Zhao, Y.G., Dang, C., Broggi, M., Valdebenito, M.A. and Faes, M.G. An efficient Bayesian updating framework for characterizing the posterior failure probability, Mechanical Systems and Signal Processing, 222: 111768, 2025.
He, J., Gao, S.B. and Gong, J.H. A sparse grid stochastic collocation method for structural reliability analysis, Structural Safety, 51: 29-34, 2014.
Zhao, Y.G., Weng, Y.Y. and Lu, Z.H. An orthogonal normal transformation of correlated non-normal random variables for structural reliability, Probabilistic Engineering Mechanics, 64: 103130, 2021.
Computational models play a crucial role in decision-making across various disciplines, including engineering, science, economics, policy-making, and beyond, especially when input randomness introduces stochasticity into the outputs. Sensitivity estimation (SE) is essential for understanding how input variability influences these stochastic outputs. Unlike traditional methods that focus directly on input values, this study addresses a critical gap by examining the impact of input distribution hyperparameters on output uncertainty.
We present an SE methodology that is designed to evaluate how the parameters of input distributions influence the moments and cumulative distributions of the outputs. Novel sensitivity indices (SIs) are proposed, grounded in the first three moments and the cumulative distribution function of the outputs. This approach can as such be naturally extended towards exceedance probabilities. A numerical approach is developed to compute these SIs as part of the post-processing phase of uncertainty quantification, utilizing a moment-based model to approximate the output distribution. With the developed approach, no extra model evaluations are required for SE.
Three illustrative examples are investigated, including nonlinear expressions and finite element models, which demonstrate the efficiency and versatility of the proposed SE method. The results highlight its potential to provide a more comprehensive understanding of the interplay between input distribution parameters and stochastic outputs, offering valuable insights for more informed decision-making in computational modeling.
The structural design optimization aims for a robust structure with minimal use of material resources, leading to slender and thin-walled structures, which entails a higher risk of stability problems. This issue is further influenced by geometrical imperfections, which are subject to uncertainties. In a common stochastic approach, imperfections can be simulated as random fields with the Karhunen-Loeve-Expansion (KLE) and applied to a finite element (FE) model as geometric deviations. Thus, the probability of a loss of stability can be determined via Monte Carlo Simulation (MCS). To quantify additional epistemic uncertainties within random field simulations, the concept of polymorphic uncertainty modeling is used. In that case, optimization-based interval or fuzzy analyses are required to compute, e.g., imprecise failure probabilities.
In the context of a reliability analysis in civil engineering, the probability of failure has to be lower than a given threshold value, depending on the structure's consequence class. Numerical calculations of such low probabilities require a very large number of samples, resulting in high computation times. Accounting for polymorphic uncertainty within optimization tasks for shell structures further increases the computational effort significantly, as it requires a triple loop of optimization, fuzzy/interval analysis, and stochastic analysis.
A challenge in surrogate modeling with random fields is the high number of input variables. In order to reduce the computation time, a surrogate model based on Artificial Neural Networks (ANNs) is developed to replace the FE buckling analysis for the stochastic analysis. The idea is to use the random numbers of the KLE series as inputs for the ANN. In the training process, the training samples are computed by a geometrically nonlinear FE model. After successful training, the surrogate model is able to predict the buckling load with negligible computation time, compared to the MCS using the computationally demanding FE model. The novelty of the presented method is the ANN surrogate model for the buckling analysis to perform optimization tasks considering polymorphic uncertainties. As an advantage, the computation time can be further reduced by truncating the KLE series according to a quality index. The required number of terms in the KLE series determines the number of input neurons and therefore the computational effort of the ANN training. Numerical examples demonstrate that the presented ANN approach can be used reliably in structural optimization problems with polymorphic uncertain parameters.
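A minimal sketch of this first step (an illustrative 1D field with exponential covariance, not the actual imperfection model): the truncated KLE random numbers xi would be the ANN surrogate inputs, while the sampled field w(x) is the imperfection applied to the FE model.

```python
import numpy as np

# Minimal sketch (illustrative): sampling geometric imperfections from a truncated
# discrete Karhunen-Loeve expansion on a 1D grid.
rng = np.random.default_rng(3)
n, L, sigma, l_c = 200, 1.0, 1.0, 0.2            # grid points, length, std. dev., correlation length
x = np.linspace(0.0, L, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)   # exponential covariance matrix

lam, phi = np.linalg.eigh(C)                      # eigenpairs of the covariance matrix
lam, phi = lam[::-1], phi[:, ::-1]                # sort descending
M = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95)) + 1  # keep ~95% of the variance

xi = rng.standard_normal(M)                       # KLE random numbers = ANN input vector
w = phi[:, :M] @ (np.sqrt(lam[:M]) * xi)          # one imperfection field sample on the grid
print(M, w.shape)
```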
We present a high-performance multi-level stochastic gradient descent method to optimally control the state of systems guided by partial differential equations under uncertain input data. The gradient descent method, used to find the optimal control, leverages a parallel budgeted multi-level Monte Carlo method as stochastic sub-gradient estimator. As a result, we get tight control over the sub-gradient’s bias, introduced by numerical discretizations, and the sub-gradient’s variance with respect to the invested computational resources. The method is particularly well-suited for high-dimensional control problems by exploiting the parallelism and the distributed data structure of the budgeted multi-level Monte Carlo method. Lastly, we study the method’s performance at hand of a three-dimensional elliptic problem with log-normal coefficients and Matérn covariance functions.
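A minimal sketch of the telescoping multilevel Monte Carlo structure of such a sub-gradient estimator (with a toy level-dependent gradient in place of the PDE and adjoint solves; the level count and sample allocation are illustrative, not the budgeted allocation of the method):

```python
import numpy as np

# Minimal sketch (illustrative): a multilevel Monte Carlo estimate of a stochastic
# sub-gradient, E[g] ~ sum_l mean(g_l - g_{l-1}), where g_l is the gradient
# evaluated on discretization level l for one random sample.
rng = np.random.default_rng(4)

def grad_level(level, omega, dim=5):
    # hypothetical stand-in for "solve PDE + adjoint on level `level` for sample omega";
    # the discretization bias decays like 2**(-level) in this toy model
    return omega[:dim] + 2.0 ** (-level) * np.sin(omega[:dim])

levels, samples_per_level = [0, 1, 2, 3], [1000, 400, 150, 50]
g_mlmc = np.zeros(5)
for l, N_l in zip(levels, samples_per_level):
    corr = np.zeros(5)
    for _ in range(N_l):
        omega = rng.standard_normal(5)            # same sample on both levels (level coupling)
        fine = grad_level(l, omega)
        coarse = grad_level(l - 1, omega) if l > 0 else np.zeros(5)
        corr += fine - coarse
    g_mlmc += corr / N_l
print(g_mlmc)   # this estimate would drive one stochastic gradient-descent step
```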
Optimization is the next natural step after simulation with increasing importance in the future. The aim of this session is to provide the basis of a holistic overview of all areas of optimization. Thus abstracts from both a theoretical as well as an applied perspective are welcome.
Additive manufacturing techniques such as 3D printing, selective laser sintering and others have enabled the fabrication of complex geometries such as meta-materials. Generally, the resulting geometry may be separated into scales, assuming sufficient characteristic length differences. For lattice cell meta-materials, a 'macro-scale' detailing the overall structure and its partitioning into subcells and a 'micro-scale', which describes the geometry of these subcells, may be defined. For Finite Element computations, the dependency of the mechanical behaviour of the macro-scale on the micro-scale geometry necessitates fine meshing, which results in prohibitively high computational costs, especially for multi-query analyses. One viable approach is the usage of effective material parameters obtained via homogenisation approaches, which show high efficiency but are limited to periodic designs. For multi-query analyses such as topology optimisation, a precomputed reduced order model (e.g. response surfaces or neural networks) may be introduced to approximate a unit cell's effective elastic material parameters based on specified design parameters [1], [2]. A drawback of these methods is the significant computational effort required for the training stage for general anisotropic materials. Schwahofer et al. performed Free Material Optimization for orthotropic unit cells and semi-periodic designs based on simplified modelling of a unit cell's trusses via beams, showcasing the promise of beam-based modelling for these applications [3]. In this contribution, topology optimisation is conducted for anisotropic unit cells without defining periodicity constraints. The truss-based geometry of the unit cells is modelled in a simplified manner with beam elements, allowing for an efficient evaluation of the structure's performance. This method, while preserving the physicality of the problem, requires no previous computational steps and significantly reduces computational costs.
[1]: C. Imediegwu, R. Murphy, R. Hewson, and M. Santer, “Multiscale structural optimization towards three-dimensional printable structures,” Struct Multidisc Optim, vol. 60, no. 2, pp. 513–525, Aug. 2019, https://doi.org/10.1007/s00158-019-02220-y
[2]: N. Black and A. R. Najafi, “Deep neural networks for parameterized homogenization in concurrent multiscale structural optimization,” Struct Multidisc Optim, vol. 66, no. 1, p. 20, Jan. 2023, https://doi.org/10.1007/s00158-022-03471-y
[3]: O. Schwahofer, S. Büttner, J. Binder, D. Colin, and K. Drechsler, “Multiscale Optimization of 3D‐Printed Beam‐Based Lattice Structures through Elastically Tailored Unit Cells,” Adv Eng Mater, vol. 25, no. 20, p. 2201385, Oct. 2023, https://doi.org/10.1002/adem.202201385
3D printing techniques based on the photopolymerization process stand out from other printing processes due to their high resolution. Applications include, for example, the fabrication of microfluidic devices and dental models. However, the dimensional conformity of the 3D printed objects is still a challenge. The properties and geometry are highly dependent on the process parameters.
In particular, using a grayscale masked stereolithography apparatus (gMSLA), a liquid polymer resin is selectively exposed to UV light in a layer-by-layer fashion to create three-dimensional structures. Using a projection LCD mask, the UV-light exposure of each point can be adjusted accurately by varying the local grayscale value. This determines where the material is cured or remains liquid within each layer.
A simulation of the printing process is developed, in order to get a deeper insight into the interaction of the relevant process parameters. This enables an accurate prediction of the geometry and material properties of the printed object.
The process model provides the basis for subsequent optimization. The input parameters, particularly the grayscale input, are optimized such that errors due to the printing process can be compensated. Thereby, the accuracy of the printed parts can be improved. Moreover, the optimization procedure can be integrated into topology optimization, such that the grayscale mask input is used directly as a design variable.
The paper concerns the problem of minimization of the compliance of linear elastic structures made of an isotropic material. The bulk and shear moduli are the design variables, both viewed as non-negative fields on the design domain. The design variables are subject to an isoperimetric condition imposing an upper bound on two kinds of Lp-norms of the elastic moduli. The case p=1 corresponds to the original concept of the Isotropic Material Design (IMD) method proposed in the paper: S. Czarnecki, Isotropic material design, Computational Methods in Science and Technology, 21 (2), 49–64, 2015. In the present paper the IMD method is extended by assuming Lp-norm-based cost conditions. In each case the optimum design problem is reduced to a pair of mutually dual problems with the mathematical structure of a theory of elasticity of an isotropic body with nonlinear power-law type constitutive equations. Both the states of stress and strain determine the optimal layouts of the bulk and shear moduli of the least compliant structure. The new methods proposed deliver both lower and upper estimates of the optimal compliance predicted by the original IMD method.
The mechanical efficiency of civil engineering structures is a key factor in the sustainable transformation of the building sector. Topology optimization is a powerful tool for the preliminary draft of mechanically improved structures. The interpretation as a material distribution problem in a predefined design space that is discretized by finite elements is a proven approach to the method; the optimization is performed with respect to a given load profile [1]. The static load of civil engineering structures is typically dominated by their self-weight, which introduces design-dependent loads into the optimization problem. A custom topology optimization routine based on Sequential Quadratic Programming has been developed and applied in the context of sustainable civil engineering design. Previous work has shown that minimizing the elastic strain energy with a constraint on the disposable amount of material yields structures that are characterized by their load transfer mainly by normal forces [2]. Concrete, the most popular construction material in the building sector, exhibits a pronounced tension/compression strength anisotropy. It is known for its high compressive strength and its brittle failure at comparatively low tensile stress [3]. Steel reinforcement increases the tensile strength of the material but is costly compared to fresh concrete and can increase structural self-weight due to its higher mass density. It is therefore desirable that civil engineering structures display load transfer by compressive normal forces.
This study presents topology optimization in civil engineering design with tensile/compressive anisotropic strength, and builds on the findings presented in [4]. A penalization scheme for the stiffness of finite elements subjected primarily to tensile stresses is introduced. The load transfer of each finite element is evaluated in terms of the principal stress with the highest absolute value at its nodes. The approach is tested using benchmark problems considering self-weight. The load transfer of the resulting structures is investigated.
[1] M. P. Bendsøe. “Optimal shape design as a material distribution problem.” Structural Optimization, 1(4):193 – 202, (1989).
[2] Masarczyk, Daniela, et al. "Sustainability in bridge design—investigation of the potential of topology optimization and additive manufacturing on a model scale." PAMM: e202400147, (2024).
[3] B. P. Hughes, and J. E. Ash. “Anisotropy and failure criteria for concrete." Matériaux et Construction 3: 371-374, (1970).
[4] K. Cai. "A simple approach to find optimal topology of a continuum with tension-only or compression-only material." Structural and Multidisciplinary Optimization 43: 827-835, (2011).
This study focuses on the development and application of topology optimization methods for buckling structures under size constraints. Buckling, as a common failure mode in mechanical structures, can significantly affect the structural performance and reliability. Therefore, it is of great significance to optimize the topology of buckling structures to improve their stability and strength.
In this research, the density-based method is used to obtain an initial design by iteratively updating the material distribution based on the compliance criterion. However, this method may generate complex, fine-scale geometries that are difficult to fabricate. To address this issue, constraints are imposed on the density distribution so that the material is distributed as uniformly as possible throughout the structure, mitigating material clustering, avoiding excessively thin material layers, and thereby benefiting both the manufacturing process and the structural performance.
To evaluate the effectiveness of the proposed methods, numerical examples are conducted to optimize the topology of buckling structures with different constraints. The optimization results show that the proposed methods can effectively obtain optimal topologies with improved stability and strength, while satisfying the size constraints.
In addition, this study also investigates the influence of different optimization parameters on the optimization results. It is found that the selection of design variables, optimization algorithm, and convergence criteria plays a critical role in the optimization process. By analyzing the optimization results, insights into the optimization design of buckling structures with size constraints are obtained.
In conclusion, this research presents a comprehensive study on topology optimization methods for buckling structures with size constraints. The proposed methods provide a new approach for the optimization design of buckling structures, which can enhance the stability and strength of mechanical structures while considering practical manufacturing constraints. Further research will focus on the integration of these methods into more complex structures and the development of new optimization algorithms for better performance and efficiency.
Design optimization of frame structures is crucial for numerous civil engineering applications requiring lightweight yet dynamically robust designs. The minimum weight design of such structures under dynamic loading presents significant computational challenges due to the non-convexity of the feasible domain, polynomial dependence of stiffness on design variables, and disconnected feasible set with singularities. Traditional nonlinear programming approaches often fail to provide reliable solutions to these problems, necessitating more sophisticated mathematical tools.
We present a novel topology optimization approach based on the moment-sum-of-squares (mSOS) hierarchy of semidefinite programming relaxations to address these challenges. This marks the first application of mSOS techniques to dynamic topology optimization. Our method builds upon our previous research in weight minimization under fundamental free-vibration eigenvalue constraints, extending it to encompass both standard and robust dynamic compliance constraints, as well as peak power constraints.
Our contribution establishes theoretical connections between these formulations, demonstrating that for loads excited below the lowest resonance frequency, both dynamic compliance and peak power optimization constraints can be viewed as special cases of the free-vibration eigenvalue constraints. We also establish a direct relationship between the worst-case state fields in robust optimization scenarios and the free-vibration eigenmodes associated with the lowest non-zero eigenvalue.
The efficacy of our approach is validated through numerical examples, demonstrating: (1) finite convergence of the mSOS hierarchy, and (2) the ability to generate high-quality feasible points even at low relaxation degrees. These results enable minimum weight design of frame structures under dynamic loads with guaranteed global ε-optimality certificates.
The aim of this section is to bring together experts in the field of applied and numerical linear algebra, discussing recent theoretical and algorithmic developments.
Taking the 1952 landmark paper of Hestenes and Stiefel on the conjugate gradient method as their historical starting point, Krylov subspace methods for solving linear algebraic systems have been around for more than 70 years. Tens of thousands of research articles on Krylov subspace methods and their applications have been published by authors coming from the most diverse scientific backgrounds. Nevertheless, many questions about the behavior of Krylov subspace methods both in exact arithmetic and finite precision computations remain open.
In this talk we will present several examples from the recent paper [1] that illustrate important and practically relevant open questions about Krylov subspace methods. We will focus on the nontrivial nonlinear behavior of the methods, which is their main mathematical asset as well as their beauty. Our main goal is to argue that, despite their long history and widespread use in practical applications, Krylov subspace methods should still be seen as mathematical objects that are worth studying. Any progress in their understanding, even of their mathematical fundamentals, will bring us a step further in exploiting their full nonlinear computational potential.
The talk will be based on joint work with Erin Carson and Zdenek Strakos (Charles University, Prague).
[1] E. Carson, J. Liesen and Z. Strakos, Towards understanding CG and GMRES through examples, Linear Algebra and its Applications, 692 (2024), pp. 241-291.
Linear algebraic systems in saddle point form arise in numerous applications in science and engineering. In general, saddle point matrices are highly indefinite, which slows down the convergence of many iterative methods, including Krylov subspace methods. By negating the second block row, symmetry of the saddle point matrix is traded for a spectrum that is contained in the right half plane. Further, the resulting matrix splits 'naturally' into symmetric and skew-symmetric parts.
In this talk we will present a general analysis of the spectral properties of the modified saddle point matrix, derive sufficient conditions under which it is diagonalizable and has a real and positive spectrum, and explain how an inner product in which the modified saddle point matrix is self-adjoint can be constructed.
This is joint work with Jörg Liesen (TU Berlin).
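To make the block-row negation concrete, the following small numpy sketch builds a random saddle point matrix with a symmetric positive definite (1,1) block (an assumed toy example, not taken from the talk), negates the second block row, checks numerically that the spectrum moves to the closed right half plane, and forms the symmetric/skew-symmetric splitting:

# Illustrative example with random blocks: negating the second block row of a
# saddle point matrix yields a spectrum in the right half plane and a natural
# symmetric/skew-symmetric splitting.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
Q = rng.standard_normal((n, n))
Apos = Q @ Q.T + n * np.eye(n)                                # SPD (1,1) block
B = rng.standard_normal((m, n))                               # full-rank constraint block

S  = np.block([[Apos, B.T], [B, np.zeros((m, m))]])           # symmetric, indefinite
Sm = np.block([[Apos, B.T], [-B, np.zeros((m, m))]])          # second block row negated

print("indefinite:", np.linalg.eigvalsh(S).min() < 0 < np.linalg.eigvalsh(S).max())
print("min real part after negation:", np.linalg.eigvals(Sm).real.min())   # >= 0

H = 0.5 * (Sm + Sm.T)   # symmetric part: blkdiag(Apos, 0)
N = 0.5 * (Sm - Sm.T)   # skew-symmetric part: the off-diagonal coupling
print("splitting exact:", np.allclose(Sm, H + N))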
When using implicit Runge-Kutta methods (IRK) for solving linear parabolic or hyperbolic PDEs, solving the system of stage equations M * k = b is usually the computational bottleneck, as the dimension of this problem for an s-stage IRK method becomes O(sn), where the spatial discretization dimension n can be very large. Hence the solution process requires the use of iterative solvers, whose convergence can be less than satisfactory. Moreover, due to the structure of the stage equations, the matrix M does not directly inherit any of the preferable properties of the spatial operator, making GMRES the go-to solver. Therefore we need a preconditioner, and in [Neytcheva & Axelsson, 2020] and also [Howle et al., 2021, 2022 & 2024] a family of block preconditioners was utilized and numerically tested with promising results. These preconditioners are all based on the underlying Kronecker product structure of M and the fact that all manipulations with the s-by-s Butcher matrix A are cheap. Recently, in [Gander & Outrata, 2024] and also [Dravins et al., 2023 & 2024], two seemingly different approaches for a spectral analysis of the preconditioned systems were proposed. In the first paper, the emphasis is on using these results for estimating the preconditioned GMRES convergence, while in the others, the authors derive appealing asymptotic results on the spectrum clustering.
In this talk we will show the equivalence of the two approaches. We will also extend the techniques used for estimating the GMRES convergence profile to the framework used by Dravins et al., thus covering a larger family of parabolic or hyperbolic PDEs for which we can get an insight into the preconditioned GMRES behavior. The main tools used are the Kronecker product structure, conformal mapping theory and Schwarz-Christoffel maps. We will demonstrate our results on several numerical examples.
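To illustrate the Kronecker product structure of the stage equations mentioned above, here is a minimal, hedged numpy sketch for the linear test problem u' = K u with a 1D finite-difference Laplacian and the two-stage Gauss method; the block preconditioners and spectral analyses of the cited works are not reproduced, and all problem data are illustrative assumptions:

# Minimal sketch: Kronecker structure of the IRK stage system for u' = K u.
import numpy as np
from scipy.linalg import expm

n, h = 50, 1e-3                                   # spatial dofs, time step
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
K = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2       # Dirichlet Laplacian (heat equation)

s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],            # Butcher matrix of the 2-stage Gauss method
              [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])
s = A.shape[0]

u0 = np.exp(-100.0 * (x - 0.5) ** 2)              # initial condition

# Stage system (I_s (x) I_n - h A (x) K) k = 1_s (x) (K u): dimension s*n
M_stage = np.kron(np.eye(s), np.eye(n)) - h * np.kron(A, K)
rhs = np.kron(np.ones(s), K @ u0)
k = np.linalg.solve(M_stage, rhs).reshape(s, n)

u1 = u0 + h * b @ k                               # IRK update
u_ref = expm(h * K) @ u0                          # exact propagator for comparison
print("one-step error vs. matrix exponential:", np.linalg.norm(u1 - u_ref))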
In this talk, we want to introduce a novel framework for solving nonlinear matrix equations by inexact matrix-valued Newton-Krylov methods. In this framework, we consider global Krylov subspace methods for solving the correction equation and discuss the benefits of this approach. Finally, we show how to use the proposed algorithm to solve nonlinear eigenvector dependent eigenvalue problems (NEPv) and compare with the well-established self-consistent field iteration on different examples arising in quantum physics and data science applications.
Quaternions are a four-dimensional non-commutative algebra and a division ring of numbers introduced by Hamilton in 1843. The main obstacle in deriving eigenvalue algorithms for matrices of quaternions, due to non-commutativity, is the efficient implementation of shifts. Other linear algebra concepts naturally carry over from real or complex numbers. Reduced biquaternions are a four-dimensional commutative number algebra, introduced by Segre in 1892. The main obstacles when deriving algorithms for matrices of reduced biquaternions are the existence of non-invertible non-zero elements, and the need to consistently define some basic linear algebra concepts in this setting. We present efficient algorithms for the QR factorization, eigenvalue decomposition, and singular value decomposition of real, complex, quaternion, and reduced biquaternion matrices, as well as matrices of dual numbers of those four number types. The algorithms are kept as generic as possible. We present applications to computing generalized inverses and image analysis. The algorithms are efficiently implemented in Julia. This work has been partially supported by the Croatian Science Foundation under the project IP-2020-02-2240.
In this talk we will present eigenvalue estimates for the solutions of operator Lyapunov equations with a non-compact (but relatively Hilbert-Schmidt) control operator. We compute eigenvalue estimates from Galerkin discretizations of Lyapunov equations and discuss the appearance of spurious (nonconvergent) discrete eigenvalues. This phenomenon is called spectral pollution. Our main tools, which are of independent interest in their own right, are new improved asymptotic estimates for the eigenvalue decay in the case of control operators of large or infinite rank, as well as a rank criterion for determining the part of the spectrum of the discrete Gramian which converges to the eigenvalues of the full Gramian (the pollution-free part of the discrete spectrum). We test our theoretical results on a collection of academic prototypes using both finite element and spectral element discretizations. We will also give a general overview of the regularity theory for the eigenvectors of solutions of the Lyapunov operator equation and its influence on the construction of high-order piecewise polynomial approximations (hp-adaptive finite elements).
In virtually all fields of application, mathematical models are primarily based on differential equations. Hence, their numerical solution plays a fundamental role in numerical mathematics. This section mainly covers the construction and the behavior of numerical methods for differential equations, including those of ordinary as well as of partial differential type.
In order to achieve realistic simulations of complex dynamics, a detailed mathematical model and a precise numerical method are required. However, there are often inaccuracies or terms in the model that are uncertain or unknown. This is where data assimilation comes in, which corrects the model with the help of measurement data. Since data assimilation typically involves numerous simulations of the underlying system, model order reduction is employed to mitigate computational complexity. In this talk, we investigate model order reduction and data assimilation techniques for the inverse problem to infer unknown model parameters in order to identify possible damages in layered materials. This research is part of the DFG project FOR3022 “Ultrasonic Monitoring of Fibre Metal Laminates Using Integrated Sensors”.
The Reduced Basis Method (RBM) is a well-established model reduction technique to realize multi-query and/or real-time applications of parameterized partial differential equations (PPDEs). The RBM relies on a well-posed variational formulation of the PPDE under consideration. Since the RBM is a linear approximation method, the best possible rate of convergence is given by the Kolmogorov N-width. It is well known that the decay of the Kolmogorov N-width is exponentially fast for suitable elliptic and parabolic problems, but poor for transport- or wave-type problems. This motivates our goal of developing a well-posed variational formulation for the wave equation which also allows for a nonlinear model reduction in order to overcome the limitations of a possibly poor Kolmogorov N-width. To this end, starting from a standard weak formulation of the wave equation, we construct an extended formulation with a parameter-dependent ansatz space. This new formulation is well-posed and optimally stable in the sense that the inf-sup and continuity constants are both equal to one. Due to the parameter-dependent ansatz space, this formulation naturally leads to a nonlinear model order reduction. Further, we present numerical experiments regarding the performance of the RBM. Thereby we use an unconditionally stable space-time Petrov-Galerkin discretization for the wave equation based upon a modified Hilbert-type transformation, leading to a parameter-independent discretization of the ansatz space with a parameter-dependent topology. Following the idea of a nonlinear model order reduction, a fully parameter-dependent discretization, not only in the topology, is work in progress.
We present a novel, fast solver for the numerical approximation of linear, time-dependent partial differential equations, leveraging model order reduction techniques and the Laplace transform. Specifically, we consider the application of this method to the linear, second-order wave equation.
We begin by applying the Laplace transform to the evolution problem, which results in a time-independent boundary value problem depending solely on the complex Laplace parameter and the problem’s data. During an offline stage, we carefully sample the Laplace parameter and solve the associated collection of high-fidelity problems. Subsequently, we apply proper orthogonal decomposition (POD) to this collection of solutions to obtain a reduced basis. The linear evolution problem is then projected onto this basis and solved using any suitable time-stepping method.
Numerical experiments demonstrate the method’s performance in terms of accuracy and, particularly, speed-up when compared to standard approaches.
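As a rough illustration of the offline stage described above, the following numpy sketch samples an (assumed) set of Laplace parameters, solves the corresponding frequency-domain problems for a toy second-order operator, and extracts a POD basis from the snapshot matrix; the actual sampling strategy and the wave-equation discretization of the talk are not reproduced:

# Hedged sketch of the offline POD stage with placeholder problem data.
import numpy as np

n = 200
rng = np.random.default_rng(1)
K = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
f = rng.standard_normal(n)

# Laplace parameters sampled along a contour in the complex plane (assumed choice)
s_values = 1.0 + 1j * np.linspace(-50.0, 50.0, 40)

# High-fidelity solves: (s^2 I + K) u(s) = f, the Laplace transform of u'' + K u = f*delta(t)
snapshots = np.column_stack([np.linalg.solve(s**2 * np.eye(n) + K, f) for s in s_values])

# POD: left singular vectors of the snapshot matrix, truncated by energy content
U, sigma, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 1.0 - 1e-8)) + 1
V = U[:, :r]                                   # reduced basis
print("reduced dimension:", r)

# Reduced operator for a subsequent time-stepping scheme (Galerkin projection)
K_r = V.conj().T @ K @ V
print("reduced operator shape:", K_r.shape)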
The stratification of lakes plays an important role for understanding and analysing the behaviour of lakes, as well as determining potential turning points of the system. In stably stratified lakes, the water layers can have drastically different chemical compositions, and an overturn may lead to dangerous reactions affecting the lake and surrounding environments.
Many common lake models use the potential density to compare water parcels and compute transportation effects. This approach can lead to inaccuracies, as it neglects thermobaric effects arising from the interplay between temperature and pressure. This becomes particularly apparent in deep lakes with low salinity, as density differences in deeper regions are so low that thermobaricity drives the water exchange.
The aim of this project is the incorporation of thermobaricity in commonly used lake models, as well as an analysis of thermobaric effects in model cases. In a first step, a minimal 1D model for the temperature profile of lakes is developed. This model is based on the heat equation, and an additional term is introduced to reflect the transport of heat by buoyancy-driven mixing. The resulting model is used to illustrate characteristic thermobaric effects and as a starting point for a stability analysis of the system.
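The following, deliberately crude numpy sketch indicates how such a minimal 1D column model might be set up: vertical heat diffusion plus a convective-adjustment step that mixes neighbouring cells whenever the (assumed, simplified) density profile is unstable; the equation of state, the depth dependence of the density maximum and all parameters are placeholders rather than the project's model:

# Highly simplified 1D lake temperature column: diffusion + convective adjustment.
import numpy as np

nz, dz, dt, nsteps = 100, 1.0, 60.0, 5000       # cells, spacing [m], time step [s], steps
kappa = 1.0e-4                                   # assumed effective diffusivity [m^2/s]
T = np.full(nz, 6.0)
T[:10] = 4.0                                     # cooled surface layer over warmer deep water

def density(T, z):
    # Placeholder equation of state: quadratic around the density maximum; a real
    # thermobaric model would let the maximum shift with pressure/depth.
    T_md = 3.98 - 0.002 * z                      # assumed depth-dependent temperature of maximum density
    return 1000.0 - 0.008 * (T - T_md) ** 2

z = dz * np.arange(nz)
for _ in range(nsteps):
    # explicit diffusion step (interior points, insulated boundaries)
    T[1:-1] += dt * kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
    # convective adjustment: mix adjacent cells wherever denser water lies on top
    rho = density(T, z)
    unstable = np.where(rho[:-1] > rho[1:])[0]
    for i in unstable:
        T[i] = T[i + 1] = 0.5 * (T[i] + T[i + 1])

print("surface / bottom temperature after mixing:", T[0], T[-1])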
The aviation sector is expanding annually, with an expected market size of nearly 340 billion USD in 2024. This growth raises significant concerns about fuel consumption and the corresponding CO2 emissions. Every day, approximately 1.3 million people commute within Europe using airplanes, resulting in roughly 240,000 tonnes of CO2 emitted into the atmosphere. Therefore, reducing fuel emissions is of utmost importance. Here we focus on implicitly reducing fuel consumption and CO2 emissions by optimizing the routes that aircraft take.
We present an algorithm capable of finding a continuous globally optimal trajectory for an aircraft in a stationary wind field. The algorithm effectively solves a Hamilton-Jacobi-Bellman equation associated with the flight trajectory optimization problem, employing linear finite elements and effective parallelization. Furthermore, we demonstrate linear convergence of the algorithm by giving an explicit bound for the error estimate of arrival time. Finally, we sketch an approach to global optimality guarantees. By combining the algorithm with existing optimal control approaches, we can find a globally optimal path within a desired accuracy.
In this session novel developments devoted to optimization and optimal control problems governed by ordinary or partial differential equations will be discussed. The focus is on theoretical investigations, numerical analysis, algorithmic issues as well as on application.
In this talk, we will present a well-posed optimal control problem subject to the Navier-Stokes equations with inhomogeneous do-nothing boundary conditions, as they are frequently used in hemodynamic computations. The simulation of cardiovascular blood flow is an area of research rapidly gaining in significance due to the increased availability of computational resources and the widespread occurrence of cardiovascular diseases, which these simulations can help diagnose and treat. As a full CFD model of the whole blood circuit remains infeasible, however, most blood flow simulations are carried out on truncated domains with different types of boundary conditions: Dirichlet boundary conditions for walls (homogeneous) and inflow boundaries (inhomogeneous), as well as do-nothing boundary conditions on outflow boundaries. For the clinical diagnosis, the spatial distribution of blood pressure is of most importance, which however cannot be measured directly. Due to recent advancements in technology, it is however possible to obtain fully space- and time-dependent data of the blood velocity (4D-MRI). Pressure can then be recovered by solving an optimal control problem, where the unknown boundary data on the in- and outflow boundaries act as controls. Until now, no well-posedness results for such an optimal control problem were available, due to the Navier-Stokes equations with do-nothing boundary conditions only being solvable for small data. As this issue is inherent to the choice of boundary condition, and also present in the 2D Navier-Stokes equations, we present a formulation of an optimal control problem in 2D that is well-posed, despite the issues of the boundary condition. We deduce first- and second-order optimality conditions and discuss regularity of the optimal state, adjoint state and control. To this end, we present some specially tailored regularity results for the instationary Stokes equations subject to mixed inhomogeneous boundary conditions. Our theoretical results are illustrated with numerical examples.
A receding horizon control framework is coupled with a Luenberger observer to construct an output-based control input stabilizing parabolic equations. The actuators and sensors are indicator functions of small subdomains, representing localized actuation and localized measurements. It is shown that, for a class of explicitly given sets of actuators and sensors, we can guarantee the stabilizing property of the constructed input. Results of numerical simulations are presented validating the theoretical findings.
The synthesis of optimal feedback laws for optimal control problems is a challenging topic. Classical approaches consist in constructing an optimal feedback law by approximating the value function of the problem. However, these types of methods suffer from the curse of dimensionality, namely, their computational cost increases exponentially with the dimension of the underlying control problem. For this reason, machine learning based methods have been proposed to remedy this problem. Although there are promising results from the practical point of view, performance guarantees for feedback laws produced by these methods are still necessary. Recently, performance guarantees have been provided by controlling the semi-concavity and the H¹ error of the approximation. Nevertheless, this is not possible for usual machine learning models. This talk focuses on a machine learning based approach for approximating semi-concave functions. The main novelty of this approach lies in preserving the semi-concavity of the approximation, which is crucial for ensuring performance guarantees of the synthesis of optimal feedback laws. Further, we demonstrate that any semi-concave function can be approximated by this method. The performance of this approach will be illustrated by numerical examples, where the approximated function is known to be semi-concave.
The minimum energy estimator - also called Mortensen observer - was originally designed for the reconstruction of the state of nonlinear finite-dimensional dynamical systems subject to deterministic disturbances based on partial and flawed measurements. In this presentation we propose a generalization to systems governed by PDEs. Using the example of a nonlinear defocusing wave equation we formulate the underlying optimal control problem and formally derive the associated observer. After discussing theoretical results on well-posedness we introduce a spatial discretization of the wave equation inspired by spectral methods. This allows the numerical realization of the observer based on a polynomial approximation of the value function. We conclude with a comparison of the obtained state reconstruction to the well-known extended Kalman filter.
Range invariance is a property that, like the tangential cone condition, enables a proof of convergence of iterative methods for inverse problems. In contrast to the tangential cone condition, it can also be verified for some parameter identification problems in partial differential equations (PDEs) from boundary measurements, as relevant, e.g., in tomographic applications. The goal of this talk is to a) highlight some of these examples of coefficient identification from boundary observations in elliptic and parabolic PDEs, and b) provide convergence rates under additional regularity (source conditions).
Dynamics and control is an interdisciplinary section which in particular addresses mathematical systems theory and control engineering. The contributions to this section are also concerned with the mathematical understanding and design of controllers which appear in actual applications.
We investigate the small-time local controllability of a nonlinear control-affine system modeling the rotational motion of a satellite on a circular orbit, focusing on an underactuated scenario where the control torque is generated solely by magnetorquers. The satellite is treated as a rigid body subject to electromagnetic actuation. The main contributions include establishing small-time local controllability around the relative equilibrium under natural assumptions on the mass distribution of the rigid body. This is achieved using the Lie algebra rank condition and Sussmann's controllability condition. The study also reveals that the linearized system is not controllable in a neighborhood of the equilibrium. Hence, the presented research contributes to the advancement of essentially nonlinear controllability theory for dynamical systems with drift.
This talk is devoted to the study of periodic solutions for a class of weakly nonlinear control systems in a Hilbert space. We assume that the linear part of the system generates a C₀-semigroup of bounded linear operators and that the nonlinearity is Lipschitz continuous. For an arbitrary bounded measurable input defined on the interval [0, T], we formulate the problem of finding a solution x(t) satisfying the periodic boundary condition x(0) = x(T). Under technical assumptions concerning the resolvent of the infinitesimal generator and the Lipschitz constant, we prove an existence and uniqueness result for such periodic solutions. The proof is based on the Banach contraction principle and is used to derive an iterative scheme for approximating the periodic solutions.
As an example, we consider a mathematical model of a dispersed-flow tubular reactor (DFTR) with boundary control under Danckwerts-type boundary conditions. This nonlinear parabolic model is represented as an abstract differential equation, for which the periodic solutions are constructed using the proposed iterative scheme.
For the design of reference tracking controllers for nonlinear systems, linearising controllers are often applied. Especially input-output linearisation is a widely used approach. However, this is only applicable for a specific system class. In particular, a system output with well-defined relative degree must exist, and there are several systems that do not fulfill this requirement.
One well-known example of such a system is a rolling ball on a beam, which can be tilted by a control input. The linearization of this system in any equilibrium is controllable, and we find that a linear combination of the ball position and the beam tilt is a flat output. Thus, a feedforward control can be directly computed and any stable error dynamics may be prescribed. However, for the nonlinear system the relative degree of this flat output is not everywhere well defined: While the relative degree is one less than the state dimension for most states, it increases especially at those instants where the beam is at rest. Thus, one cannot construct a controller based on input-output linearization.
Several authors have studied this particular system and gave different approaches for approximate or even exact tracking controllers. In this contribution we construct a sequence of tracking controllers that approximate a prescribed error dynamics with different accuracy, starting with a linear controller for the linearised system. These controllers impose higher demands on the differentiability of the reference trajectory. However, the tracking error can be reduced in the vicinity of an equilibrium. This method is applicable also for other systems, whose linearization is controllable.
Event-triggered state estimation has gained significant attention in resource-constrained environments such as networked control systems due to its ability to save communication and energy resources. For example, it has been observed that event-triggered Kalman filtering-based approaches can achieve resource efficiency in general nonlinear systems [1]. Also the idea of Moving Horizon Estimation (MHE), with its application in event-triggered settings, is known to address state estimation tasks effectively [2]. The presented work investigates and compares several such event-triggered state estimation techniques for nonlinear systems, focusing on their performance under different mechanisms and triggering rules.
In this talk, we introduce the principles of event-triggered state estimation and focus on two types of triggering mechanisms: innovation-based and send-on-delta, combined with stochastic and deterministic triggering rules. We compare the common state estimation methods, including the Kalman filtering-based approaches: Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF) and Cubature Kalman Filter (CKF), as well as optimization-based MHE. Numerical results are presented for four nonlinear systems, including a two-link robot arm. Due to the limitations of event-triggered MHE for nonlinear systems, a linear system-based design is used for comparison.
The results demonstrate that the CKF achieves the best performance in highly nonlinear systems, while the EKF and MHE are more effective in linear or weakly nonlinear settings. Notably, an innovation-based mechanism consistently yields more appropriate triggering intervals, while a deterministic rule enhances the estimation performance. Despite its accuracy, MHE incurs the highest computational costs, followed by the CKF, UKF, and EKF.
[1] Marzieh Kooshkbaghi, Horacio J Marquez, and Wilsun Xu. “Event-triggered approach to dynamic state estimation of a synchronous machine using cubature Kalman filter”. In: IEEE Transactions on Control Systems Technology 28.5 (2019), pp. 2013–2020.
[2] Xunyuan Yin and Jinfeng Liu. “Event-triggered state estimation of linear systems using moving horizon estimation”. In: IEEE Transactions on Control Systems Technology 29.2 (2020), pp. 901–909.
The design of flatness-based open-loop control for various kinds of hyperbolic partial differential equations (PDEs) has been discussed in several publications [1,2], including its application to quasi-linear hyperbolic PDEs [3]. Furthermore, flatness-based tracking control for quasi-linear hyperbolic PDEs has already been analyzed in [4]. However, the study of flatness-based observer design remains pending, despite energy-based and backstepping-based approaches already being considered (see e.g. [5]). This contribution, therefore, focuses on the flatness-based observer design for quasi-linear systems, specifically accounting for non-collocated measurements for error injection. The design approach relies on the so-called hyperbolic observer form [6]. Transforming a given system into this specific state-space representation can be achieved through a flatness-based analysis of the system under consideration. In these new normal form coordinates, the transient response of the observer is governed by a stable difference-differential equation. The design methodology is illustrated using the well-known Saint-Venant equations [7]. These equations are widely employed to model shallow water in open channels, such as irrigation channels with rectangular cross-sections [8, 9], and for modeling snow avalanches [10]. The system discussed in [11] has been analyzed for flatness-based open-loop control, while flatness-based tracking control has been studied in [12]. Additionally, a model transformation using material-fixed coordinates, as presented in [13], leads to an adapted quasi-linear hyperbolic PDE. This transformation significantly simplifies the model structure for observer design. The proposed design utilizes position measurement alongside the non-collocated measurement of the water level and transforms the given system into a hyperbolic observer canonical form. Finally, an efficient implementation strategy based on higher-order approximation methods, as discussed in [14], is implemented in an energy-preserving manner.
[1] J. Rudolph, Int. J. Control 81, 2008.15
[2] F. Woittennek, IFAC Proc. Vol. 46, 2013.
[3] T. Knüppel, IEEE Trans. Autom. Control 60, 2015.
[4] N. Gehring, IFAC-PapersOnLine 51, 2018.
[5] T. Strecker, Automatica, 2022.
[6] F. Woittennek, On the hyperbolic observer canonical form, 2013.
[7] A. J. C. B. d. Saint-Venant, Théorie du mouvement non permanent des eaux, avec application aux crues des rivières et à l’introduction des marées dans leur lit, 1871.
[8] H. Chanson, Hydraulics of Open Channel Flow (Elsevier, May 2004).
[9] W. H. Graf, Fluvial Hydraulics: Flow and Transport Processes in Channels of Simple Geometry, 1998.
[10] A. N. Nazarov, Fluid Dynamics, 1991.
[11] J. Kopp, Automatica, 2020.
[12] J. Wurm, AT.
[13] J. Wurm, IFAC-PapersOnLine, 2022.
[14] L. Mayer, PAMM, 2023.
Nonlinear observability based on the concept of indistinguishability of states was introduced by Hermann and Krener in 1977. In this approach, the observability map needs to be tested for injectivity. In general, observability is difficult to verify for nonlinear systems. The well-known observability rank condition, which is based on the inverse function theorem, only makes a sufficient statement about a version of local observability.
The entries of the observability map consist of time derivatives of the output along the system’s dynamics. These derivatives can be expressed as Lie derivatives. For linear time-invariant systems, the number of output time derivatives required for an observability test is bounded by the system’s dimension due to the Cayley–Hamilton theorem. In this case, the observability rank condition is both sufficient and necessary. Recently, it has been shown that the observability of polynomial systems can be decided in a finite number of steps using Hilbert's basis theorem. In this paper, the methodology developed for polynomial systems will be extended to rational systems.
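A small symbolic sketch of the observability rank condition for a polynomial toy system (an assumed example, not taken from the paper) can be written with sympy; the output and its Lie derivatives along the drift form the observability map, whose Jacobian rank is then checked:

# Observability rank condition via symbolic Lie derivatives for a polynomial toy system.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1 + x1**3])       # assumed polynomial drift
h = sp.Matrix([x1**2])                 # assumed output y = x1^2

def lie_derivative(h_expr, f_vec, x_vec):
    # L_f h = (dh/dx) * f
    return h_expr.jacobian(x_vec) * f_vec

# Observability map: output and its first n-1 Lie derivatives along f
obs = [h]
for _ in range(len(x) - 1):
    obs.append(lie_derivative(obs[-1], f, x))
O = sp.Matrix.vstack(*obs)

# Jacobian of the observability map; its generic rank is 2, but it drops at x1 = 0,
# illustrating the local nature of the rank condition.
dO = O.jacobian(x)
print(dO)
print("generic rank:", dO.rank())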
Over the last decade, mathematics has become the cornerstone of signal and image processing, ranging from various methods for signal reconstruction and the modelling of imaging modalities over the classical disciplines of compression, denoising, segmentation, and registration to feature extraction. The methodologies used include such diverse fields as harmonic analysis, inverse problems, variational analysis, mathematical statistics, partial differential equations, optimization, approximation theory and sampling theory. The aim of this section is to foster interdisciplinary collaboration and the development of new directions in mathematical signal and image processing spawned from the interaction of various mathematical communities.
Hyperspectral image (HSI) denoising is an important preprocessing step for downstream applications. Fully characterizing the spatial-spectral priors of HSI is crucial for denoising tasks. In recent years, denoising methods based on low-rank subspaces have garnered widespread attention. In the low-rank matrix factorization framework, the restoration of HSI can be formulated as a task of recovering two subspace factors. However, hyperspectral images are inherently three-dimensional tensors, and transforming the tensor into a matrix for operations inevitably disrupts the spatial structure of the data. To address this issue and better capture the spatial-spectral priors of HSI, this paper proposes a modeling approach based on Tucker decomposition with subspace continuity prior parameterization. This data-driven and model-driven joint modeling mechanism has the following two advantages: 1) Tucker decomposition allows for the characterization of the low-rank properties across multiple dimensions of the hyperspectral image, leading to a more accurate representation of spectral priors; 2) Implicit neural representation tools enable the adaptive and precise characterization of the subspace factor continuity under Tucker decomposition, which we have discovered for the first time. Extensive experiments demonstrate that our method outperforms a series of competing methods.
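As a minimal stand-in for the Tucker-based modeling described above, the following numpy sketch computes a truncated higher-order SVD of a synthetic low-rank 'HSI' cube and reconstructs it from the core and factor matrices; the implicit neural parameterization of the subspace factors and the denoising algorithm itself are not reproduced:

# Truncated higher-order SVD (HOSVD) of a synthetic low-rank spatial-spectral cube.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):             # core = T x_1 U1^T x_2 U2^T x_3 U3^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(5)
# synthetic "HSI": 32x32 spatial pixels, 50 spectral bands, multilinear rank (4, 4, 3) plus noise
G = rng.standard_normal((4, 4, 3))
A = [rng.standard_normal((32, 4)), rng.standard_normal((32, 4)), rng.standard_normal((50, 3))]
X = np.einsum('abc,ia,jb,kc->ijk', G, *A) + 0.01 * rng.standard_normal((32, 32, 50))

core, U = hosvd(X, ranks=(4, 4, 3))
X_rec = np.einsum('abc,ia,jb,kc->ijk', core, *U)
print("relative reconstruction error:", np.linalg.norm(X_rec - X) / np.linalg.norm(X))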
We introduce a novel relaxation strategy for denoising hyperbolic-valued data. The main challenge is here the non-convexity of the hyperbolic sheet. Instead of considering the denoising problem directly on the hyperbolic space, we exploit the Euclidean embedding and encode the hyperbolic sheet using a novel matrix representation. For denoising, we employ the Euclidean Tikhonov and total variation (TV) model, where we incorporate our matrix representation. The major contribution is then a convex relaxation of the variational ansätze allowing the utilization of well-established convex optimization procedures like the alternating directions method of multipliers (ADMM). The resulting denoisers are applied to a real-world Gaussian image processing task, where we simultaneously restore the pixel-wise mean and standard deviation of a retina scan series.
This paper introduces patch assignment flows for metric data labeling on graphs. Labelings are determined by regularizing initial local labelings through the dynamic interaction of both labels and label assignments across the graph, entirely encoded by a dictionary of competing labeled patches and mediated by patch assignment variables. Maximal consistency of patch assignments is achieved by geometric numerical integration of a Riemannian ascent flow, as the critical point of a Lagrangian action functional. Experiments illustrate properties of the approach, including uncertainty quantification of label assignments.
We study the minimization of smooth, possibly nonconvex functions over the positive orthant, a key setting in Poisson inverse problems, using the exponentiated gradient (EG) method. Interpreting EG as Riemannian gradient descent (RGD) with the e-Exp map from information geometry as a retraction, we prove global convergence under weak assumptions -- without the need for L-smoothness -- and finite termination of Riemannian Armijo line search. Numerical experiments, including an accelerated variant, highlight EG's practical advantages, such as faster convergence compared to RGD based on interior-point geometry.
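A minimal numpy sketch of the EG update on the positive orthant is given below, applied to an assumed Poisson-type (Kullback-Leibler) data-fit objective with a fixed small step size; the Riemannian Armijo line search and the accelerated variant of the talk are omitted:

# Exponentiated gradient on the positive orthant: x <- x * exp(-t * grad f(x)).
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 10
A = rng.uniform(0.1, 1.0, (m, n))
x_true = rng.uniform(0.5, 2.0, n)
y = A @ x_true                                   # noiseless Poisson-type data

def objective(x):
    Ax = A @ x
    return np.sum(Ax - y * np.log(Ax))           # KL-type data fit, smooth on the open orthant

def grad(x):
    return A.T @ (1.0 - y / (A @ x))

x = np.ones(n)
t = 0.01                                         # fixed small step (Armijo search omitted)
f0 = objective(x)
for _ in range(5000):
    x = x * np.exp(-t * grad(x))                 # EG step: the e-Exp retraction keeps x > 0

print("objective: start", f0, "end", objective(x))
print("relative error to x_true:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))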
Scientific Computing is concerned with the efficient numerical solution of mathematical models from both science and engineering. The field covers a wide range of topics: from mathematical modeling over the development, analysis and efficient implementation of numerical methods and algorithms to software and finally application for the solution of complex real-world problems on modern computing systems. This interdisciplinary field combines approaches from applied mathematics, computer science and a wide range of applications in which in-silico experiments play an increasingly important role.
This talk addresses the mathematical challenges and numerical treatment of large-scale sea-ice dynamics in climate models. The underlying model adopts a viscous-plastic framework to describe sea-ice as a two-dimensional thin layer on the ocean surface.
In the first part of the talk, we explore the formulation of the model, focusing on developing a presentation that is well-suited for modern numerical approximation techniques. In the second part we will focus on numerical tools for an approximation of the model with finite elements. Specifically, we present a new approach which approximates the flow via edge integration on local flat triangles using the nonconforming linear Crouzeix-Raviart element. This discretization is implemented in the ICON weather and climate model [3].
To address oscillations in the velocity field caused by the discretization of the viscous-plastic stress tensor with the Crouzeix-Raviart element, we propose an edge-based stabilization technique [1]. On the meshes used in climate models, the Crouzeix-Raviart discretization is a good compromise between accuracy and efficiency of the numerical setup [2].
In the final part of the talk, we derive optimal error bounds for the Crouzeix-Raviart element in both the H1-norm and L2-norm. These theoretical findings are validated by numerical experiments [4].
[1] C. Mehlmann, P. Korn: Sea-ice dynamics on triangular grids, Journal of Computational Physics, 428, e-id 110086, 2021, https://doi.org/10.1016/j.jcp.2020.110086
[2] C. Mehlmann, S. Danilov, L. Losch, J.F. Lemieux, P. Blain, C. Hunke, P. Korn: Simulating linear kinematic features in viscous-plastic sea ice models on quadrilateral and triangular grids, Journal of Advances in Modeling Earth Systems, 13, e2021MS002523, 2021, https://doi.org/10.1029/2021MS002523
[3] C. Mehlmann, O. Gutjahr: Modeling sea-ice dynamics in the tangential plane on the sphere, Journal of Advances in Modeling Earth Systems, 14, e2022MS003010, 2022, https://doi.org/10.1029/2022MS003010
[4] C. Mehlmann: Analysis of the Crouzeix-Raviart Surface Finite Element Method for vector-valued Laplacians, arXiv:2312.16541, 2024, https://arxiv.org/abs/2312.16541
Modelling the dynamics of particles moving in a liquid is crucial for numerous applications, including marine snow aggregation in oceans and the movement of Lagrangian devices in chemical reactors. While very small particles can be accurately modeled as passive tracers, larger particles require consideration of additional forces. The Maxey-Riley-Gatignol equations (MaRGE) describe the motion of particles that are larger than passive tracers yet not large enough to significantly disturb the surrounding fluid.
However, the Basset history term arising in the MaRGE makes their numerical solution difficult, because the force acting on a particle at a given time depends on its full past trajectory. Therefore, this term is often neglected in practical applications. By analyzing the Finite Time Lyapunov Exponents (FTLE) of a large number of simulated particles in different flows, we show that ignoring the history term leads to substantial qualitative differences not only in individual trajectories but also in the larger scale Lagrangian dynamics.
Based on a recent reformulation of the MaRGE by [S. G. Prasath, V. Vasan, R. Govindarajan, Accurate solution method for the Maxey–Riley equation, and the effects of Basset history, Journal of Fluid Mechanics 868 (2019) 428–460] we present a numerical approach for their solution based on finite differences and compare it against other schemes. We also investigate the use of MaRGE as a dynamic model in a filtering algorithm to track particles via dead reckoning.
Magnetostriction refers to the deformation of magnetostrictive materials under the influence of a magnetic field, as well as the effect of mechanical stresses on their magnetic properties. These effects are important in the design of electromechanical devices, such as electric motors and transformers, where magnetostriction can lead to additional stresses, vibrations, and a reduction in the overall lifespan of the machine. To accurately model and mitigate these effects, a coupled approach that considers both electromagnetic and mechanical interactions is necessary.
In this work, we use a coupled formulation to describe the interaction between the magnetic and mechanical fields. For spatial discretization, we use isogeometric analysis (IGA), a numerical method particularly well-suited for problems involving deformations. By utilizing CAD-based basis functions such as B-splines and NURBS for both the geometry representation and the solution space, IGA enables an exact geometric representation while allowing deformations to be incorporated directly. This results in a highly accurate modeling of magnetostrictive effects. The effectiveness of the approach is demonstrated through numerical examples.
Isogeometric Analysis (IGA) aims to enhance the seamless integration between computer-aided design (CAD) and computer-aided engineering (CAE). Local refinement is essential for efficiently resolving fine details in numerical simulations. In the context of IGA, hierarchical B-splines have gained prominence, providing an effective framework for local refinement. The work applies the methodology of truncated hierarchical B-splines (THB-splines), which enhance the refinement process by preserving the partition of unity and ensuring linear independence, making them particularly suitable for adaptive and efficient numerical simulations. The framework is further enriched by Bézier extraction, facilitating the efficient implementation of spline-based methods within finite element frameworks and in combination with the hierarchical structure of THB-splines, resulting in the multi-level Bézier extraction method. This discretization method is applied to 2D magnetostatic problems. The implementation is based on an open-source Octave/MATLAB IGA code called GeoPDEs, which allows us to compare our routines with globally refined spline models and locally refined ones where the solver does not rely on Bézier extraction.
In this talk we will consider the problem of learning a convex regularizer from a theoretical perspective. In general, learning of variational methods can be done by bilevel optimization where the variational problem is the lower level problem and the upper level problem minimizes over some parameter of the lower level problem. However, this is usually too difficult in practice and a popular method is the approach by so-called unrolling (or unfolding) of a solver for the lower level problem. There, one replaces the lower level problem by an algorithm that converges to a solution of that problem, chooses a number N of iterations to be performed and uses the N-th iterate as a substitute for the true solution.
While this approach is often successful in practice, few theoretical results are available. In this talk we will consider a situation in which one can make a thorough comparison of the bilevel approach and the unrolling approach in a particular case of a quite simple toy example. Even though the example is quite simple, the situation is already complex and reveals a few phenomena that have been observed in practice: deeper unrolling is often not beneficial, especially if algorithm parameters such as stepsizes are not learned as well, and even with learned stepsizes, deeper unrolling often does not improve performance, while shallow unrolling already gives good results.
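The following toy numpy sketch mirrors this comparison in the simplest possible setting: the lower-level problem is a quadratic (Tikhonov) denoising problem with a closed-form solution, the unrolled surrogate performs N gradient steps with a fixed, non-learned step size, and the upper level picks the regularization parameter by grid search; all numbers are illustrative assumptions:

# Toy comparison of the exact lower-level solution map and its unrolled surrogate.
import numpy as np

rng = np.random.default_rng(3)
x_clean = rng.standard_normal(50)
y = x_clean + 0.3 * rng.standard_normal(50)      # noisy data
tau = 0.4                                        # fixed (not learned) step size

def unrolled(alpha, N):
    # N gradient steps on min_x 0.5*||x - y||^2 + (alpha/2)*||x||^2, started at y;
    # the exact solution would be y / (1 + alpha).
    x = y.copy()
    for _ in range(N):
        x = x - tau * ((x - y) + alpha * x)
    return x

alphas = np.linspace(0.0, 2.0, 201)
for N in (1, 2, 5, 20, 200):
    # upper level: pick alpha minimizing the reconstruction error of the surrogate
    errs = [np.linalg.norm(unrolled(a, N) - x_clean) for a in alphas]
    print(f"N = {N:3d}: learned alpha = {alphas[int(np.argmin(errs))]:.2f}, "
          f"error = {min(errs):.3f}")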
Measurement data is often sampled irregularly, i.e., not on equidistant time grids. This is also true for Hamiltonian systems. However, existing machine learning methods which learn symplectic integrators, such as SympNets [2] and HénonNets [1], still require training data generated with fixed step sizes. To learn time-adaptive symplectic integrators, an extension of SympNets, which we call TSympNets, was introduced in [2]. We adapt the architecture of TSympNets and extend them to non-autonomous Hamiltonian systems. So far, the approximation capabilities of TSympNets have been unknown. We close this gap by providing a universal approximation theorem for separable Hamiltonian systems and show that it is not possible to extend it to non-separable Hamiltonian systems. To investigate these theoretical approximation capabilities, we perform different numerical experiments.
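As a small numerical illustration of the structural property that such symplectic-network architectures are built to preserve, the following numpy sketch checks the condition DPhi^T J DPhi = J for one step of symplectic Euler versus explicit Euler on a pendulum (an assumed toy example, unrelated to the cited works):

# Numerical symplecticity check of one integrator step via finite-difference Jacobians.
import numpy as np

h = 0.1
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def symplectic_euler(z):
    q, p = z
    p_new = p - h * np.sin(q)          # pendulum: H(q, p) = p^2/2 - cos(q)
    q_new = q + h * p_new
    return np.array([q_new, p_new])

def explicit_euler(z):
    q, p = z
    return np.array([q + h * p, p - h * np.sin(q)])

def jacobian(phi, z, eps=1e-6):
    D = np.zeros((2, 2))
    for k in range(2):
        e = np.zeros(2); e[k] = eps
        D[:, k] = (phi(z + e) - phi(z - e)) / (2 * eps)
    return D

z0 = np.array([0.7, 0.3])
for name, phi in (("symplectic Euler", symplectic_euler), ("explicit Euler", explicit_euler)):
    D = jacobian(phi, z0)
    print(name, "symplecticity defect:", np.linalg.norm(D.T @ J @ D - J))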
References
[1] J. W. Burby, Q. Tang, and R. Maulik. Fast neural Poincaré maps for toroidal magnetic fields. Plasma Physics and Controlled Fusion, 63(2):024001, 2020. https://doi.org/10.1088/1361-6587/abcbaa
[2] P. Jin, Z. Zhang, A. Zhu, Y. Tang, and G. E. Karniadakis. SympNets: intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems. Neural Networks, 132:166–179, 2020. https://doi.org/10.1016/j.neunet.2020.08.017
Data-driven modeling represents a fundamental technique in the analysis and design of complex dynamical systems. However, the direct application of machine learning algorithms is not an appropriate methodology for high-dimensional systems. A common approach to address this challenge is to utilize dimension reduction techniques, which inherently introduce additional errors in surrogate systems. Recently, some studies have focused on learning high-dimensional systems by exploiting the geometric structure of the underlying dynamics via the method of sparse Full-Order Model (sFOM) inference [1, 2, 3]. In this talk, we further investigate the sFOM inference in the context of incompressible fluid dynamics. We exploit the physics-based quadratic structure of the Navier-Stokes equations, as well as a sparse approximation for the system operators, dictated by geometrical adjacency. Using training data for the system states, we then solve “local”, regularized least-squares problems for each system degree of freedom. This renders the data-driven inference and storage of the full-order operators feasible. We thus investigate the properties of the learned, sparse models for two incompressible fluid dynamics test-cases, namely the flow over a cylinder and the lid-driven cavity. We also examine the predictions of reduced-order models, derived from the inferred sFOM, via projection to a low-dimensional manifold through the Proper Orthogonal Decomposition (sFOM-POD). We highlight the potential change in stability and predictive capabilities of the learned dynamical systems.
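To indicate what such a "local", adjacency-based regression looks like in the simplest case, the following numpy sketch infers a sparse operator for an assumed linear diffusion problem from snapshot data, fitting each degree of freedom only to its immediate neighbours with ridge regularization; the quadratic Navier-Stokes structure and the POD projection of the talk are not reproduced:

# Adjacency-based sparse operator inference from snapshots of an assumed linear problem.
import numpy as np

n, dt, nsteps = 60, 1e-3, 400
K_true = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) * 50.0            # true (sparse, tridiagonal) operator

rng = np.random.default_rng(4)
U = np.empty((n, nsteps + 1)); U[:, 0] = rng.standard_normal(n)
for k in range(nsteps):                                     # generate training snapshots
    U[:, k + 1] = U[:, k] + dt * (K_true @ U[:, k])         # explicit Euler

dUdt = (U[:, 1:] - U[:, :-1]) / dt                          # time-derivative data
X = U[:, :-1]

K_inf = np.zeros((n, n))
lam = 1e-8                                                  # ridge regularization
for i in range(n):
    nbrs = [j for j in (i - 1, i, i + 1) if 0 <= j < n]     # adjacency stencil of dof i
    Xi = X[nbrs, :].T                                       # (nsteps, len(nbrs))
    rhs = dUdt[i, :]
    coef = np.linalg.solve(Xi.T @ Xi + lam * np.eye(len(nbrs)), Xi.T @ rhs)
    K_inf[i, nbrs] = coef                                   # local, sparse row of the operator

print("relative operator error:",
      np.linalg.norm(K_inf - K_true) / np.linalg.norm(K_true))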
References
[1] Y. Schumann and P. Neumann. On linear models for discrete operator inference in time dependent problems. Journal of Computational and Applied Mathematics, 425:115022, 2023.
[2] L. Gkimisis, T. Richter, and P. Benner. Adjacency-based, non-intrusive model reduction for vortex-induced vibrations. Computers & Fluids 275 (2024): 106248.
[3] A. Prakash and Y. J. Zhang. Nonintrusive projection-based reduced order modeling using stable learned differential operators, arXiv preprint arXiv:2410.11253 (2024).
Hydrogels are highly versatile and environmentally responsive materials, designed through specific chemical composition and synthesis methods. The solvent absorption capability in hydrogels can be either enhanced or inhibited by various environmental stimuli, such as temperature, pH, light, or specific ions and molecules. This responsiveness makes hydrogels ideal materials for developing smart devices capable of adapting to changing conditions.
In our previous work, we explored the relationship between processing and properties by predicting the swelling behavior of temperature-responsive hydrogels based on their synthesis procedures. In the current study, we focus on investigating the relationship of structural parameters of hydrogels with their swelling behavior, as conceptualized based on the Flory-Rehner theory. These conceptualized parameters are challenging to determine experimentally but are highly valuable for explaining the swelling behavior of hydrogels with different compositions and synthesis procedures. This approach aims to unravel the relationship between inner structure and properties.
Building on Flory-Rehner theory, the basic understanding of swelling behavior, including the volume phase transition, is discussed from the viewpoints of structure, dynamics, kinetics and equilibrium thermodynamics. The structural parameters describing the polymer structure include the degree of polymerization between crosslinks, the number of ionic groups on the chain between crosslinks, and the polymer volume fraction in the reference state. Regarding the thermodynamic aspect, the Flory-Huggins interaction parameter quantifies the energy of interdispersed polymer and solvent molecules. In temperature-sensitive hydrogels, the interaction parameter varies with temperature.
Based on models of experimentally observed phenomena, various mathematical expressions for the interaction parameter as a function of temperature are provided in the literature. Inserting the mathematical expression of the interaction parameter into Flory-Rehner theory, we are able to model the swelling behavior of temperature-responsive hydrogels. Inversely, we can investigate the structural parameters of hydrogels based on the swelling behavior data collected from experiments utilizing a data-driven approach. Specifically, through Bayesian optimization, we optimize the structural parameters of the model built on Flory-Rehner theory by training it with the experimental swelling behavior data. This makes the present research a crucial step toward establishing the complete PSPP (Processing-Structure-Property-Performance) linkage for hydrogels.
Training a neural network to learn differential equations requires a significant amount of data. Collecting the data set is sometimes difficult or expensive. When we have a limited budget, it is important to optimize the process of selecting the data we should use to train the neural network. We present two methods for selecting an optimal data set. First, we present a way to select the optimal data set through Optimal Experimental Design (OED) and learn the differential equations with the optimal data set. Second, inspired by the Dual Weighted Residual (DWR) method, we optimize the data selection process for learning differential equations using the Deep Ritz method.
We present a novel training algorithm based on adaptive random Fourier features sampling [1] for the learning of the drift and the diffusion components of stochastic differential equations (SDEs). Specifically, we consider Itô diffusion processes and a loss function [2] derived from the Euler-Maruyama (EM) integration scheme. Promising results, improving both the loss and the training time, have been observed compared to traditional gradient-based methods.
[1] Aku Kammonen, Jonas Kiessling, Petr Plecháč, Mattias Sandberg, and Anders Szepessy. Adaptive random Fourier features with Metropolis sampling, 2020.
[2] Felix Dietrich, Alexei Makeev, George Kevrekidis, Nikolaos Evangelou, Tom Bertalan, Sebastian Reich, and Ioannis G. Kevrekidis. Learning effective stochastic differential equations from microscopic simulations: Linking stochastic numerics to deep learning, 2023.
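A minimal sketch of the Euler-Maruyama-based likelihood that such a loss function [2] builds on is given below, assuming a one-dimensional Itô process, a drift parameterized by random Fourier features with fixed frequencies (the adaptive Metropolis resampling of [1] is not shown) and a constant diffusion; all values are hypothetical, and a generic quasi-Newton optimizer stands in for the proposed training algorithm.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# random Fourier features for the drift: f(x) = sum_k a_k cos(w_k x) + b_k sin(w_k x)
K = 32
w = rng.normal(0.0, 2.0, K)          # fixed frequencies (adaptively resampled in [1])

def features(x):
    return np.concatenate([np.cos(np.outer(x, w)), np.sin(np.outer(x, w))], axis=1)

def em_negloglik(theta, x, dt):
    # negative log-likelihood of the EM transition density
    # x_{n+1} ~ N(x_n + f(x_n) dt, sigma^2 dt) for a 1D Ito diffusion
    coef, log_sigma = theta[:-1], theta[-1]
    sigma2 = np.exp(2.0 * log_sigma)
    drift = features(x[:-1]) @ coef
    resid = x[1:] - x[:-1] - drift * dt
    return 0.5 * np.sum(resid**2 / (sigma2 * dt) + np.log(2.0 * np.pi * sigma2 * dt))

# synthetic Ornstein-Uhlenbeck data as a stand-in for microscopic simulation output
dt, n = 1e-2, 5000
x = np.empty(n); x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - 1.5 * x[i] * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()

theta0 = np.zeros(2 * K + 1)
res = minimize(em_negloglik, theta0, args=(x, dt), method="L-BFGS-B")
print(np.exp(res.x[-1]))   # estimated diffusion; close to the true 0.3 for a reasonable fit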
Many approximate solutions of the time-dependent Schrödinger equation can be formulated as exact solutions of a nonlinear Schrödinger equation with an effective Hamiltonian operator depending on the state of the system. We show that Heller's thawed Gaussian approximation, Coalson and Karplus's variational Gaussian approximation, and other Gaussian wavepacket dynamics methods fit into this framework if the effective potential is a quadratic polynomial with state-dependent coefficients. We study such a nonlinear Schrödinger equation in full generality [1]: after deriving general equations of motion for the Gaussian's parameters, we demonstrate the time reversibility and norm conservation and analyze conservation of the energy, effective energy, and symplectic structure. We also describe efficient, high-order geometric integrators for the numerical solution of this nonlinear Schrödinger equation. The general theory is illustrated by known as well as new examples of this family based on various approximations for the potential energy. In my talk, I will accompany the theory with applications of the different versions of Gaussian wavepacket dynamics to the calculation of vibrationally resolved electronic spectra as well as with numerical examples demonstrating the geometric properties and fast convergence of the integrators. Time permitting, I will mention how the method can be generalized to propagating non-Gaussian states (using Hagedorn wavepackets [2,3] ) and to nonzero temperatures (using thermofield dynamics [4]).
[1] J. J. L. Vaníček, J. Chem. Phys. 159, 014114 (2023).
[2] Z. Tong Zhang and J. J. L. Vaníček, communication, J. Chem. Phys. 161, 111101 (2024).
[3] Z. Tong Zhang, Máté Visegrádi, and J. J. L. Vaníček, letter, Phys. Rev. A 111, L010801 (2025).
[4] T. Begušić and J. Vaníček, J. Chem. Phys. 153, 024105 (2020).
This work focuses on numerical methods for simulating the time evolution of the Schrödinger equation, primarily in one dimension. Symplectic splitting algorithms are employed to separate the kinetic and potential components of the propagator. The semigroup associated with the free-particle Schrödinger operator is then represented in a multiwavelet basis. In the talk, I will demonstrate how to efficiently discretize this representation and leverage it for adaptive simulations.
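For orientation, a generic Fourier-based Strang splitting for the one-dimensional Schrödinger equation is sketched below (with a hypothetical harmonic potential and hbar = m = 1); it illustrates the kinetic/potential splitting of the propagator, but not the multiwavelet representation discussed in the talk.

import numpy as np

# 1D Schrödinger equation: i psi_t = -1/2 psi_xx + V(x) psi, with hbar = m = 1
n, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x**2                                  # hypothetical harmonic potential
psi = np.exp(-(x - 1.0) ** 2) * (1.0 + 0.0j)    # displaced Gaussian initial state
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))

dt, steps = 1e-3, 5000
half_V = np.exp(-0.5j * dt * V)                 # half-step potential propagator
kin = np.exp(-0.5j * dt * k**2)                 # full-step kinetic propagator (Fourier space)
for _ in range(steps):                          # Strang splitting: V/2 -> T -> V/2
    psi = half_V * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = half_V * psi
print(np.sum(np.abs(psi) ** 2) * (L / n))       # norm is preserved up to round-off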
We present a novel numerical method for solving nonlinear Schrödinger (NLS) equations with time-dependent potentials on the real line. The spatial discretization employs a Hermite spectral decomposition, providing analytical expressions for the evolution of basis functions under the free Schrödinger operator. A linearization approach for solutions to the underlying NLS equation with time-dependent potential leads to a second-order Strang-type time-splitting scheme, derived by combining the variation-of-constants formula with quadrature rules. Numerical experiments on the Gross-Pitaevskii equation, widely used in quantum physics to describe Bose-Einstein condensates, are presented to support the theoretical results.
Supercooled liquids exhibit spatial heterogeneity in the dynamics of their fluctuating atomic arrangements, with particle dynamics often dominated by stringlike motions where lines of particles perform activated hops cooperatively [1]. In glassy systems, a void takes the form of a quasivoid composed of a few neighboring free volumes and is transported through the stringlike motions it induces [2]. Similar quasivoid actions are also observed in fully glassy systems with a large polydispersity. In our study, we are investigating a three-dimensional polydisperse repulsive potential system using the Activation-Relaxation Technique nouveau (ARTn) [3], an algorithm that explores the potential energy landscape by converging from a local minimum energy configuration to nearby saddle-point configurations. We aim to collect thousands of saddle points to analyze the distribution of activation energies [4] and compare the positions of quasivoids identified by ARTn and iso-configuration methods. Additionally, we will compare the corresponding hopping rates calculated using iso-configuration with the results from ARTn. The eventual aim is to find the interactions between strings and build a transition state theory based on quasivoids.
[1] A. H. Marcus, J. Schofield, and S. A. Rice, Experimental observations of non-Gaussian behavior and stringlike cooperative dynamics in concentrated quasi-two-dimensional colloidal liquids, Phys. Rev. E 60, 5725 (1999).
[2] C. T. Yip, M. Isobe, C. H. Chan, S. Ren, K. P. Wong, Q. Huo, C.-S. Lee, Y.-H. Tsang, Y. Han, and C. H. Lam, Direct Evidence of Void-Induced Structural Relaxations in Colloidal Glass Formers, Phys. Rev. Lett. 125, 258001 (2020).
[3] N. Mousseau and G. Barkema, Traveling through potential energy landscapes of disordered materials: The activation-relaxation technique, Phys. Rev. E 57, 2419 (1998).
[4] S. Swayamjyoti, J. F. Löffler, and P. M. Derlet, Local structural excitations in model glasses, Phys. Rev. B 89, 224201 (2014).
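ARTn converges from a local minimum towards saddle points by following the lowest Hessian eigenmode; the toy sketch below is only a crude stand-in on a hypothetical two-dimensional double-well landscape, locating a stationary point by minimizing the squared gradient norm, and merely illustrates what collecting saddle points and activation energies means.

import numpy as np
from scipy.optimize import minimize

def V(p):
    x, y = p
    return (x**2 - 1.0) ** 2 + y**2         # hypothetical double-well energy landscape

def gradV(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

# start from a trial configuration between the two wells; minimizing |grad V|^2
# drives the search to a nearby stationary point (here the saddle at the origin)
start = np.array([0.3, 0.1])
res = minimize(lambda p: np.dot(gradV(p), gradV(p)), start, method="BFGS")
saddle = res.x
E_min = V(np.array([1.0, 0.0]))              # energy of the minimum at (1, 0)
E_act = V(saddle) - E_min                    # activation energy of the hop between wells
print(saddle, E_act)                         # roughly (0, 0) and 1.0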
The conductor-like screening model (COSMO) is used for computing the electrostatic interaction between a solvent and a particular solute molecule. Mathematically, this involves solving a Laplace boundary value problem on a domain consisting of a large number of intersecting balls. We propose to solve this problem more efficiently by employing a numerical method based on Schwarz's domain decomposition method. The purpose of our work is to investigate the methodology, set up a theoretical foundation of the approach, and study the convergence behavior of the inexact ddCOSMO method. In inexact ddCOSMO, we solve the sub-problems approximately using a finite number of spherical harmonics. We focus on deriving error bounds and studying the dependence of the error on the iteration number of the Schwarz algorithm and the number of spherical harmonics used. Our analysis tackles challenges beyond classical domain decomposition theory, as we lose the zero trace value of the solution due to the inexactness.
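A one-dimensional toy version of the underlying alternating Schwarz iteration on two overlapping intervals is sketched below (neither the ball geometry nor the spherical-harmonics truncation of ddCOSMO is represented); it illustrates the geometric convergence of the interface values that the error analysis quantifies.

import numpy as np

def solve_subdomain(a, b, ua, ub, f, n=50):
    # finite-difference solve of -u'' = f on [a, b] with Dirichlet data ua, ub
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    A = np.zeros((n, n)); rhs = np.full(n, f)
    A[0, 0] = A[-1, -1] = 1.0; rhs[0] = ua; rhs[-1] = ub
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
        A[i, i] = 2.0 / h**2
    return x, np.linalg.solve(A, rhs)

# alternating Schwarz for -u'' = 1, u(0) = u(1) = 0, on the overlapping
# subdomains [0, 0.6] and [0.4, 1]; interface values start from a zero guess
u_at_04, u_at_06 = 0.0, 0.0
for it in range(20):
    x1, u1 = solve_subdomain(0.0, 0.6, 0.0, u_at_06, f=1.0)
    u_at_04 = np.interp(0.4, x1, u1)
    x2, u2 = solve_subdomain(0.4, 1.0, u_at_04, 0.0, f=1.0)
    u_at_06 = np.interp(0.6, x2, u2)

exact = lambda x: 0.5 * x * (1.0 - x)
print(abs(u_at_06 - exact(0.6)))   # interface error contracts geometrically with the iteration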
The meeting takes place in the building A30 (Faculty of Architecture and Faculty of Engineering Management building), in Room 001, opposite the main entrance from Rychlewski Street. The location is approximately 350 meters from the LCC (a 5-minute walk).
https://maps.app.goo.gl/pouXstysmNJZMbrV6
Topological Data Analysis (TDA) introduces innovative approaches for interpreting high-dimensional data by leveraging underlying structures and relationships. TDA uncovers qualitative similarities and differences, offering a unique perspective in various applications. In particular, the concept of persistence diagram vectorization enhances Machine Learning (ML) algorithms by providing an intermediary layer that improves performance and yields alternative insights into medical datasets. The assessment of surface topography and roughness is crucial for applications such as friction analysis, contact deformation studies, and coating adhesion investigations. While traditional roughness parameters—such as Ra, Rq, and Rz—offer quantitative measures of surface irregularities, they often struggle to differentiate between surface samples subjected to different treatment stages. These conventional methods primarily rely on statistical descriptors of height variations, which may overlook critical geometric and structural patterns inherent in complex surfaces.
To address this limitation, we explore computational topology techniques, particularly persistent homology, to capture essential geometric and topological features of surfaces. By comparing traditional roughness metrics with topological descriptors, we highlight how persistent homology-based invariants offer a richer characterization of surface morphology. Unlike standard roughness measures that focus on local height variations, topological methods track the evolution of connected components and voids across multiple scales, enabling a more robust differentiation between surface treatments. Furthermore, techniques from scalar field topology are used in the exploratory phase to give insights into different local phenomena.
This study extends the earlier effort in [1] to construct novel roughness parameters derived from persistent homology and to enhance the classification of surface samples. We will discuss key design choices in integrating TDA into existing ML pipelines. By comparing traditional roughness quantification techniques with TDA-driven methodologies, we demonstrate the advantages of incorporating computational topology for improved surface characterization, with the goal of enhancing the accuracy and interpretability of ML models in materials science applications.
[1] Senge, J.F., Astaraee, A.H., Dlotko, P., Bagherifard, S., Bosbach, W.A., Extending conventional surface roughness ISO parameters using topological data analysis for shot peened surfaces, Scientific Reports, 12, 5538 (2022).
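As a minimal, library-free illustration of the persistent-homology ingredient, the sketch below computes the 0-dimensional sublevel-set persistence pairs of a one-dimensional roughness profile via the elder rule; surface data as in [1] would instead be processed with cubical complexes, which is not shown here.

import numpy as np

def sublevel_persistence_0d(f):
    # 0-dimensional sublevel-set persistence of a 1D profile (union-find, elder rule)
    n = len(f)
    order = np.argsort(f, kind="stable")
    parent = {}                      # union-find over already-inserted indices
    birth = {}                       # root -> birth value of its component

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for v in order:
        parent[v] = v
        birth[v] = f[v]
        for nb in (v - 1, v + 1):
            if 0 <= nb < n and nb in parent:
                rv, rn = find(v), find(nb)
                if rv != rn:
                    # merge: the component born later dies now (elder rule)
                    old, young = (rv, rn) if birth[rv] <= birth[rn] else (rn, rv)
                    pairs.append((birth[young], f[v]))
                    parent[young] = old
    roots = {find(v) for v in range(n)}
    for r in roots:
        pairs.append((birth[r], np.inf))   # the oldest component never dies
    return pairs

profile = np.array([0.0, 2.0, 1.0, 3.0, 0.5, 2.5])
print(sublevel_persistence_0d(profile))
# pairs with death > birth correspond to valleys separated by peaks;
# zero-persistence pairs can simply be discarded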
Dynamical systems analysis plays a crucial role in understanding the behavior of complex phenomena across various fields, including engineering, physics, and biology. However, the inherent complexity of these systems often presents significant challenges in their analysis, such as non-linearity and high-dimensionality. Topological methods have emerged as a powerful toolkit for addressing these difficulties, offering insights into the qualitative structure of dynamical systems without relying heavily on traditional metrics. By capturing the underlying shape and connectivity of data, these methods facilitate a deeper understanding of system dynamics. One noteworthy approach within the topological analysis that will be presented is the Euler Characteristic Profile. This method offers a robust framework for quantifying topological features and their evolution over time. Furthermore, we will illustrate its potential applicability in the analysis of biomedical data, highlighting how topological methods can deepen our understanding of intricate biological systems.
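A minimal sketch of an Euler characteristic curve is given below for the sublevel sets of a scalar field sampled on a grid (vertices minus edges plus squares of the associated cubical complex); for trajectory point clouds one would typically build, e.g., a Vietoris-Rips filtration instead, which is not shown.

import numpy as np

def euler_characteristic_curve(img, thresholds):
    # Euler characteristic of sublevel sets of a 2D field; each cell enters the
    # filtration at the maximum of its incident pixel values
    img = np.asarray(img, dtype=float)
    h_edges = np.maximum(img[:, :-1], img[:, 1:])       # horizontal edges
    v_edges = np.maximum(img[:-1, :], img[1:, :])       # vertical edges
    squares = np.maximum(np.maximum(img[:-1, :-1], img[:-1, 1:]),
                         np.maximum(img[1:, :-1], img[1:, 1:]))   # 2x2 faces
    ecc = []
    for t in thresholds:
        chi = (np.count_nonzero(img <= t)
               - np.count_nonzero(h_edges <= t)
               - np.count_nonzero(v_edges <= t)
               + np.count_nonzero(squares <= t))
        ecc.append(chi)
    return np.array(ecc)

field = np.random.default_rng(1).random((64, 64))
ts = np.linspace(0.0, 1.0, 50)
print(euler_characteristic_curve(field, ts)[-1])   # the full grid has Euler characteristic 1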
A key concept in the comparison and classification of dynamical systems is the notion of topological conjugacy. In the talk we will consider the problem of testing topological (semi-)conjugacy of two trajectories coming from unknown dynamical systems when only finite samples of those trajectories are given. A number of tests and various numerical examples indicating their scalability and robustness will be discussed. In addition, we will see how the presented methods apply to the search for a sufficient embedding dimension, as appearing in Takens' Embedding Theorem, which provides a theoretical framework for time series analysis in various applications.
The Dowker complex of a relation was introduced by C.H. Dowker in 1952. Given a relation R on X times Y, the Dowker complex D(X, Y, R) is a simplicial complex with vertex set X, and a subset sigma of X forms a simplex if there is a y in Y to which every element of sigma is related. The number of such elements y is called the 'weight' of the simplex. Although the construction attracted a steady interest over the few decades following its introduction, this interest has vastly increased with the appearance of Topological Data Analysis at the beginning of this century. This follows from the fact that the Dowker construction is a very good mechanism for turning data sets (aka point clouds) into filtered complexes.
I will start by introducing the Dowker construction via a sequence of examples. Next, I will recall a number of results in this area, especially those concerned with extending Dowker duality to the filtered setting. Given a measure on the space Y, I will present a bifiltration on D(X, Y, R), obtained by requiring that the intersection of balls of appropriate size around the points of sigma has sufficiently large measure. The stability of this construction will be discussed. Finally, I will present a few applications.
This is joint work with Niklas Hellmer.
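A minimal sketch of the weighted Dowker construction for a small boolean relation is given below; the toy relation and the brute-force enumeration are illustrative only.

import numpy as np
from itertools import combinations

def dowker_simplices(R, max_dim=2):
    # Dowker complex D(X, Y, R) from a boolean relation matrix R[x, y]:
    # sigma subset of X is a simplex iff some y is related to every x in sigma;
    # its weight is the number of such witnesses y
    R = np.asarray(R, dtype=bool)
    n_x = R.shape[0]
    simplices = {}
    for k in range(1, max_dim + 2):              # simplices with k vertices
        for sigma in combinations(range(n_x), k):
            weight = int(np.all(R[list(sigma), :], axis=0).sum())   # common witnesses
            if weight > 0:
                simplices[sigma] = weight
    return simplices

# toy relation: 3 elements of X, 4 potential witnesses in Y
R = [[1, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 1, 1]]
print(dowker_simplices(R))
# e.g. {(0,): 2, (1,): 2, (2,): 3, (0, 1): 1, (0, 2): 1, (1, 2): 1}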
A. Blumberg and M. Lesnick, Stability of 2-parameter persistent homology, Found. Comput. Math. (2022).
M. Brun and L. Salbu, The Rectangle Complex of a Relation, Mediterranean Journal of Mathematics 20.1, (2023).
M. Brun, B. Garcia Pascual, and L. Salbu, Determining Homology of an Unknown Space from a Sample, European Journal of Mathematics 9.4, (2023).
C. H. Dowker, Homology Groups of Relations, Annals of Mathematics, 56.1 (1952).
N. Hellmer and J. Spaliński, Density Sensitive Bifiltered Dowker Complexes via Total Weight, ArXiv: 2405.15592.
M. Robinson, Cosheaf representations of relations and Dowker complexes, J. Appl. and Comput. Topology, 6.1 (2022).
D. R. Sheehy, A Multicover Nerve for Geometric Inference, CCCG: Canadian Conference in Computational Geometry, (2012).
Section S02 will focus on numerical and experimental techniques to study the structure, function and evolution of biological systems for a broad spectrum of scales, i.e., addressing the cellular, tissue, organ, or organ system scale. The topics will focus on, but are not limited to, multiscale modelling, gait analysis, foot biomechanics, musculoskeletal and orthopedic biomechanics, remodeling, cardiovascular biomechanics, multiphase modeling of biological tissues, tumor growth modeling, transport oncophysics, nanomechanics of biological materials, nanomedicine, modeling of drug delivery, and image-based methods for assessing and interpreting clinical data.
Coupled multibody-finite element (MB-FE) simulations are state-of-the-art when characterizing a system's global dynamic behavior as well as local stress and strain phenomena due to deformation. In biomechanical research, this approach can address different hierarchical levels, including cellular, tissue, or organ system levels. Artisynth is an open-source Java-based biomechanical simulation framework for co-simulating multibody and finite element models with forward and inverse simulation capabilities. It therefore has huge potential in future biomechanical research. Despite its ability to import and edit OpenSim models directly, a fully functional Artisynth model of the lower limb that can be readily extended by finite element structures to study gait is missing up to this day. Here, we present the current state of our multibody model of the lower limb, which is based on OpenSim’s Gait2392 model. The model produces physiologically reasonable joint angles during gait for a given set of experimental marker trajectories and calculates promising joint moments and muscle forces when compared to simulations of the same model and motion in OpenSim 4.5. Experimentally measured ground reaction forces are applied directly to the foot segment of the model, as performed in OpenSim 4.5. Calculated angles for right hip adduction, flexion, and rotation reach from -10.3° to 3.2°, -22.8° to 25.2° and -9.1° to 0.5°, whereas corresponding angles in OpenSim span -7.7° to 6.9°, -21.9° to 22.1° and -11.1° to 7.5°. For the left, our angles are -8.5° to 4.2°, -20.8° to 25.1° and -11.3° to 3.0° vs. -7.9° to 5.5°, -22.6° to 21.7° and -10.3° to 6.5°. Our right knee angles range from -69.5° to -0.2° vs. -69.8° to 1.0° in OpenSim, whereas corresponding left angles are -69.8° to -0.1° and -70.7° to 0.1° in OpenSim, respectively. Finally, calculated right ankle angles cover -17.4° to 11.5°, whereas they are -8.8° to 16.0° in OpenSim. For the left, the ankle angles are -18.1° to 11.5° vs. -9.7° to 14.3° in OpenSim. This model can serve as a foundation for coupled MB-FE simulations that we intend to extend in our future research. Yet the principle is also transferable to biomechanical research in adjacent fields, where research can be accelerated using already existing OpenSim models.
Accurate motion capture is essential for the analysis and reproduction of human motion, particularly in dynamic activities like running. While traditional camera-based systems with retro-reflective markers remain the gold standard, they are limited by their reliance on controlled environments, restricted spatial range, and extensive setup requirements. Emerging alternatives utilizing inertial measurement units (IMUs)—comprising accelerometers and gyroscopes—offer significant advantages, including environmental independence, portability, and minimal setup time. However, leveraging IMUs for precise motion tracking presents two key technical challenges. The first challenge arises from the nature of IMU data, which consist of accelerations and angular velocities measured in the sensor's local frame. Transforming these local-frame measurements into global orientations and positions requires integrating them. This integration, however, is susceptible to drift, an accumulation of measurement errors, and necessitates a reliable initial pose estimate. The second challenge is the alignment of IMU-derived orientations with anatomically meaningful joint angles. Achieving this requires a robust "latching" mechanism, which maps IMU measurements from a global inertial frame to a neutral anatomical position. Current methods addressing the first challenge often focus on the foot segment and exploit the quasi-cyclic nature of running motion. For instance, principal component analysis (PCA) of angular velocity data over multiple strides has been used to estimate orientations, achieving approximately 7° one-dimensional errors. However, such approaches incur high computational costs and require significant data storage. Solutions to the second challenge frequently depend on exact sensor placement, additional hardware like cameras, or specific user postures or movement during calibration. These methods either negate the practical benefits of IMUs compared to traditional systems or are prone to errors due to sensor misplacement, inclined running surfaces, or user execution inaccuracies. In this study, we propose a novel method to address these challenges. The orientation problem is decoupled into two sub-problems: "leveling" and "heading," which are solved using sensor fusion techniques and physiological properties. The "latching" mechanism for the shank is implemented at the heel strike, utilizing an empirical relationship between shank orientation and impact accelerations. The sagittal foot angle is refined through precise analysis of acceleration data over one running cycle. The proposed latching methods demonstrate high robustness and reliability even on inclined running surfaces. This method was evaluated with 30 runners on a treadmill, achieving sagittal angle estimates within 2° of the gold standard. This demonstrates the feasibility of achieving accurate, reliable, and practical motion capture with IMUs, addressing the limitations of existing methods.
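As a generic illustration of the "leveling" idea (not the authors' algorithm), the sketch below fuses gyroscope integration with the accelerometer's gravity direction in a one-axis complementary filter; the axis conventions, filter gain and synthetic signals are assumptions.

import numpy as np

def complementary_filter(acc, gyro, dt, alpha=0.98):
    # minimal one-axis "leveling" sketch: blend the gyro-integrated pitch (drifts)
    # with the accelerometer's gravity-based pitch estimate (noisy, drift-free)
    pitch = np.zeros(len(acc))
    pitch[0] = np.arctan2(acc[0, 0], acc[0, 2])    # assumes a quasi-static start
    for k in range(1, len(acc)):
        gyro_pitch = pitch[k - 1] + gyro[k, 1] * dt
        acc_pitch = np.arctan2(acc[k, 0], acc[k, 2])
        pitch[k] = alpha * gyro_pitch + (1.0 - alpha) * acc_pitch
    return pitch

# noise-free synthetic motion just to exercise the filter
t = np.arange(0.0, 5.0, 0.01)
true_pitch = 0.2 * np.sin(2.0 * np.pi * t)
gyro = np.zeros((len(t), 3)); gyro[:, 1] = np.gradient(true_pitch, t)
acc = np.column_stack([np.sin(true_pitch), np.zeros_like(t), np.cos(true_pitch)])
est = complementary_filter(acc, gyro, dt=0.01)
print(np.max(np.abs(est - true_pitch)))   # small tracking error in this idealized case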
Finite element analysis (FEA) can be a powerful tool for studying the biomechanical behavior of human tissues, offering insights into the performance of biological systems under various loading conditions. However, finite element analysis has so far been applied only sporadically to the elbow joint [1]. This presentation outlines the process of developing a finite element model of the human elbow joint system, emphasizing geometry creation and material property implementation.
The modeling workflow begins with the acquisition of bone geometry through computed tomography. These datasets are processed to segment and reconstruct three-dimensional anatomical structures, ensuring accuracy in capturing intricate geometrical details. Subsequently, the elbow ligaments are modeled manually based on anatomical literature sources. The 3D geometries are converted into finite element meshes, striking a balance between computational efficiency and model fidelity.
Material property definition is a crucial step in simulating realistic biomechanical behavior. Bones are assigned orthotropic properties to replicate their complex material behavior, while ligaments are modeled as hyperelastic materials to account for their nonlinear stress-strain response.
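As a simple illustration of the nonlinear, hyperelastic response assigned to the ligaments, the sketch below evaluates an incompressible neo-Hookean model under uniaxial stretch; the shear modulus is a hypothetical placeholder, and the actual constitutive law and parameters of the study may differ.

import numpy as np

def neo_hookean_uniaxial(stretch, mu=1.5):
    # nominal (1st Piola-Kirchhoff) stress of an incompressible neo-Hookean
    # material under uniaxial stretch: P = mu * (lambda - lambda**-2);
    # mu [MPa] is a hypothetical ligament shear modulus
    lam = np.asarray(stretch, dtype=float)
    return mu * (lam - lam**-2)

lam = np.linspace(1.0, 1.15, 7)
print(neo_hookean_uniaxial(lam))   # nonlinear, stiffening response even at modest stretch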
This presentation will provide a walkthrough of the geometric modeling and material property assignment. The challenges and solutions associated with finite element simulations of biological tissues will be introduced, fostering a deeper understanding of its applications in biomechanical research.
This study is a follow-up to the study "Finite element modeling of concentrated impact loads on the masticatory muscles at the head" presented at GAMM 2024 in Magdeburg. A human-subject study with collision loads is an appropriate methodology to determine thresholds for safe human-robot collaboration. However, the ethical requirements for such a study are very strict. Therefore, a suitable FE model has to be developed to simulate human-robot collisions, especially collisions with the human head. An FE model of the head is presented that is specifically designed to replicate the response to impact loads at various locations on the human head (forehead, temple, and masticatory muscles). The model is based on the structures of the THUMS model and the anatomy of the encephalon (MRI axial slices). To enable the intended use for collision simulations with cobots, the model is tailored to the biomechanical thresholds listed in ISO/TS 15066 and focuses on the characteristics of soft tissues. For the model development, optimization and, most importantly, validation, we used experimental data (force-deformation curves) from the said human-subject study, processed with a novel technique for developing biomechanical corridors. The primary objective was to precisely replicate the biomechanical response of the body locations under consideration (mainly the upper tissue layer) in order to ensure the reliability of the simulation and provide meaningful results that can be used as threshold values for pain onset for different collision scenarios. The results of the optimization and validation show that the degree of overlap between the reference data and the response from the simulation is considerably high. The biomechanical properties of the soft tissues at the three body locations on the head have been successfully reconstructed in the FE model. Under specific conditions, we can now use the model to validate the pain-onset thresholds determined in the human-subject study. Afterward, we conducted a series of collision simulations with a collaborative robot at the said body locations using the new head model. In the last step, we compared the simulation outcomes with the thresholds from ISO/TS 15066 and the human-subject study.
A concept for the embedding of homogenized fibers into bulk materials is proposed and generalized from [1, 2, 3]. Each fiber family in the bulk material is defined implicitly by a pair of level-set functions: the intersections of the level sets of these function pairs are the continuously embedded fiber geometries. A mechanical model for the simultaneous consideration of all fibers in a large-displacement, hyperelastic context is proposed. The fiber model is coupled to classical mechanical models of homogeneous and isotropic bulk materials. This enables a new concept for advanced, fibrous materials such as in biological tissues and textiles. For the numerical analysis, the bulk domain is meshed using classical, higher-order elements. It is noteworthy that these elements by no means align with the embedded fibers, which is characteristic for a fictitious domain method (FDM). However, the present approach does not come with the usual challenges of FDMs and was called Bulk Trace FEM in [1]. Boundary conditions and numerical integration are handled as in the classical FEM and there is no need for stabilization. Numerical results confirm the success of the proposed embedding of fibers in various contexts, even enabling optimal, higher-order accurate results when smooth solutions are expected.
REFERENCES
[1] T.P. Fries, M.W. Kaiser: On the Simultaneous Solution of Structural Membranes on all Level Sets within a Bulk Domain, Comp. Methods in Appl. Mech. Engrg., 415, 116223, 2023. DOI: 10.1016/j.cma.2023.116223
[2] M.W. Kaiser, T.P. Fries: Simultaneous analysis of continuously embedded Reissner-Mindlin shells in 3D bulk domains, Internat. J. Numer. Methods Engrg., 125, e7495, 2024. DOI: 10.1002/nme.7495
[3] T.P. Fries, J. Neumeyer, M.W. Kaiser: A new concept for embedding sub-structures via level-sets, Proceedings of the 16th World Congress on Computational Mechanics (WCCM 2024), Vancouver, Canada, 2024. DOI: 10.23967/wccm.2024.025
Bamboo is characterized by a high growth rate and mechanical properties comparable to those of wood. Therefore, bamboo is of great interest as a sustainable building material while being an efficient CO2 absorber. The material has a hierarchical structure with heterogeneously distributed properties and correspondingly exhibits a strongly anisotropic behaviour [1,2]. Previous research has investigated the mechanics and the structure of vascular bundles, fiber-like structures present at the mesoscopic scale of the bamboo culm wall [3,4]. The vascular bundles are composed of a conductive tissue surrounded by sclerenchymatic fibre bundles and are embedded in a matrix of parenchyma cells [1]. However, a profound characterization of the structural and mechanical properties at this scale is not available. In our contribution, computational methods for investigating the statistically inhomogeneous mesostructure and the linear elastic properties of moso bamboo are presented. The mesostructure of the culm wall is characterized using state-of-the-art computational image processing. The characterization process includes the segmentation of cross-section images of the bamboo culm wall, AI-based detection of individual vascular bundles, and a statistical analysis of the image data to derive the characteristics of the structure. In addition to the volume fractions of the fibre bundles and the conductive tissue, characteristics for describing the size, morphology, orientation, and arrangement of the vascular bundles are determined for the first time. This method and the associated characteristics can be applied to any statistically inhomogeneous structure with inclusions. With the structural characteristics at hand, a statistically homogeneous representative volume element of the mesostructure can be reconstructed at any radial position of the wall. With the local linear elastic properties of matrix and fibre bundles given in the literature [3], an FFT-based homogenization model is set up to determine the macroscopic stiffness of the culm wall. This method is used to investigate the anisotropy and inhomogeneity of the material and gives an insight into the local phenomena on a mesoscopic level.
References
[1] W. Liese, The Anatomy of Bamboo Culms, 1998, Vol. 18, 11-208
[2] F. Nogata, H. Takahashi, Intelligent functionally graded material: Bamboo, 1995, Vol. 5.7, 743–751.
[3] P. Dixon, L. Gibson, The structure and mechanics of Moso bamboo material, 2014, Vol. 11.99, 20140321
[4] L. Al-Rukaibawi et al, A numerical anatomy-based modelling of bamboo microstructure, 2021, Vol. 30, 125036
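The FFT-based homogenization itself is beyond a short sketch; as an elementary plausibility check of the macroscopic stiffness, the Voigt and Reuss (rule-of-mixtures) bounds can be evaluated from the fibre-bundle volume fraction, as sketched below with hypothetical constituent moduli (this is not the homogenization scheme used in the contribution).

def voigt_reuss_bounds(vf, E_fibre=40.0e3, E_matrix=2.0e3):
    # upper (Voigt) and lower (Reuss) bounds on the effective Young's modulus [MPa]
    # for fibre volume fraction vf; the constituent moduli are hypothetical placeholders
    E_voigt = vf * E_fibre + (1.0 - vf) * E_matrix
    E_reuss = 1.0 / (vf / E_fibre + (1.0 - vf) / E_matrix)
    return E_voigt, E_reuss

print(voigt_reuss_bounds(0.3))   # bracket for the effective axial stiffness at vf = 0.3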
Failure of materials and structural components has been an important issue for as long as man-made constructions have existed. The section focuses on damage mechanics and fracture mechanics for all kinds of solid materials and structures. It aims at bringing together related original research covering experimental observations, modeling approaches and numerical techniques. Moreover, material failure is a complex process, which may be considered on different length scales ranging from the atomistic scale up to the macro scale of engineering structures. Since the failure behavior of materials strongly depends on the loading situation, contributions addressing static, dynamic and multi-axial failure are welcome, as well as fatigue problems.
Phase-field modelling of fatigue fracture has been approached by many different models in recent years. Yet due to the high number of load cycles involved, computational time remains one of the main challenges, especially for fracture in ductile materials such as metals.
In this contribution, we revisit our efficient phase-field model for fatigue fracture [1] with a simplified consideration of cyclic plasticity. We combined the phase-field method for brittle fracture with the Local Strain Approach [2], a traditional fatigue concept from structural durability. It involves assumptions for the stress-strain behaviour including local plasticity and the damaging effect of load cycles, based on experimental material data.
Now, we improve the model by refining both the approximation of the stress-strain behaviour and the evaluation of the damaging effect of the load cycles with a new damage parameter. In a second step, we introduce a comprehensive phase-field model with an elastic-plastic material law [3]. We use this to evaluate the two efficient models with the simplified integration of plasticity. The range of application of the three models is discussed, weighing accuracy against computational time. The results of the comparison are published in [4].
[1] Kalina, M., Schöne, V., Spak, B., Paysan, F., Breitbarth, E., Kästner, M. (2023). Fatigue crack growth in anisotropic aluminium sheets - Phase-field modelling and experimental validation. International Journal of Fatigue, 176, 107874.
https://doi.org/10.1016/j.ijfatigue.2023.107874
[2] T. Seeger, Grundlagen für Betriebsfestigkeitsnachweise, Stahlbau-Handbuch. Vol. 1 Teil B (1996) 5-123.
[3] Ulloa, J., Wambacq, J., Alessi, R., Degrande, G., François, S. (2021). Phase-field modeling of fatigue coupled to cyclic plasticity in an energetic formulation. Computer Methods in Applied Mechanics and Engineering, 373, 113473.
https://doi.org/10.1016/j.cma.2020.113473.
[4] Kalina, M., Schneider, T., Waisman, H., Kästner, M. (2024). Phase-field models for ductile fatigue fracture (arXiv:2411.05015). arXiv. http://arxiv.org/abs/2411.05015
Fragmentation of tempered glass is a dynamic fracture process that is continuously fed by internal residual stresses, where uninterrupted crack propagation, branching, merging of cracks, and formation of fragments take place in a glass pane until complete failure. The simulation of those complex crack patterns using fracture approaches and the finite element method can be challenging, due to the complexity of the underlying phenomena and the sensitivity of the predicted crack paths to spatial discretization, i.e., mesh dependence of the predicted fracture patterns.
In the current contribution, a modelling approach to brittle fracture is presented, where a fracture criterion is developed addressing both a sound emulation of crack-related phenomena and numerical remedies, which are applied to reduce the influence of spatial discretization on the predicted crack paths. The main idea is the setting of a local fracture criterion, which is evaluated subdomain-wise and is simultaneously influenced by spatially nonlocal information. A purely local ground-state fracture criterion is initially formulated in terms of crack driving and fracture resistance energies. Subsequently, this local fracture criterion is modified using nonlocally-informed scaling processes. The proposed method makes it possible to take into account the influence of surrounding broken material and micro-cracks on the local fracture resistance energy, which is evaluated subdomain-wise. Moreover, it is demonstrated that the approach emulates crack initiation in a basically sound manner, which excludes predicting unrealistic fracture initiation at regions situated away from crack tips. Besides that, crack merging, which is an important aspect to consider when simulating the formation of fragments, is shown to be properly addressed within the suggested approach. The model implementation consists of an element-wise evaluation of the fracture criterion within the discretized domain, where the associated numerical treatment is similar to the implementation of eigenfracture and eigenerosion methods. A parallel implementation of the framework is undertaken, where associated synchronization algorithms are developed. The plausibility of the fracture modelling approach is evaluated by simulating complex fracture patterns and fragmentation in tempered glass, with an insight into corresponding experimental results.
Anisotropic damage models are required to fulfill the damage growth criterion and to yield mesh-independent results in structural simulations. The former can be achieved by a proper design of the elastic free energy and the latter by a suitable regularization, e.g. by a gradient-extension. Here, we compare the performance of different elastic strain energy formulations for anisotropic damage at finite strains from [1] and [2] on a structural level. For the models’ regularization, we utilize identical micromorphic gradient-extensions.
[1] van der Velden, T., Brepols, T., Reese, S., Holthusen, H., 2024. A comparative study of micromorphic gradient-extensions for anisotropic damage at finite strains. Int. J. Numer. Methods Eng. 125(24), e7580.
[2] van der Velden, T., Reese, S., Holthusen, H., Brepols, T., 2024. An anisotropic, brittle damage model for finite strains with a generic damage tensor regularization. arXiv, 2408.06140.
Controlling damage evolution in the context of fatigue plays a key role in extending the number of operational cycles. A major challenge lies in accurately capturing the material response and loading conditions. This work particularly aims at the numerical prediction of fatigue in quasi-brittle materials. Such simulations are relevant to complement experiments by extending the range of cycle numbers and load paths. For this reason, the proposed study introduces a modeling framework inspired by the concept of endurance surfaces [1, 2]. This concept will be integrated into an established damage model [3], within a thermodynamically consistent and gradient-enhanced framework [4, 5].
Benchmark problems under monotonic loading conditions are first analyzed for different geometries to demonstrate the potential of this approach. Special emphasis is placed on damage mitigation by varying the loading parameters. In addition, preliminary investigations of cyclic load paths are performed in order to transition to operational loading scenarios. This study lays the foundation for future extensions to more complex load paths and a coupling with ductile damage, ultimately leading to a unified model capable of capturing fatigue behavior over a wide range of materials and cycle numbers.
[1] Ottosen, N. et al. (2008). Continuum approach to high-cycle fatigue modeling. International Journal of Fatigue 30. 996-1006.
[2] Lindström, S. et al. (2020). Continuous-time, high-cycle fatigue model: Validity range and computational acceleration for cyclic stress. International Journal of Fatigue 136. 105582.
[3] Menzel, A. et al. (2002). Anisotropic damage coupled to plasticity: Modelling based on the effective configuration concept. International Journal for Numerical Methods in Engineering 54. 1409-1430.
[4] Forest, S. (2009). Micromorphic Approach for Gradient Elasticity, Viscoplasticity, and Damage. Journal of Engineering Mechanics 135.
[5] Langenfeld, K. et al. (2020). A micromorphic approach for gradient-enhanced anisotropic ductile damage. Computer Methods in Applied Mechanics and Engineering 360. 112717.
Bonded CFRP structures offer advantages over conventionally manufactured structures due to their potential processing speeds, pre-equipment capabilities, reparability, and maintenance friendliness. Fundamentally, an as-yet untapped potential lies in the ability for damage-free debonding and rebonding, enabling structural components to be repurposed for new applications at the end of their initial service life, particularly in the context of sustainability.
In this work, we developed and tested an analytical approach for the mathematical description of the debonding process using a wedge effect. To this end, the bending stress of a simplified beam element is analyzed. Using the safety factor SF under the action of the fracture force, the corresponding maximum bending stress σ_b,max can be determined analytically. Interestingly, the analytical considerations reveal that the calculation of the safety factor SF is entirely independent of the previously considered beam width B and the associated crack length c.
Subsequently, based on this approach, we developed a numerical model employing the Finite Element Method (FEM) and a cohesive zone model (CZM) to demonstrate the applicability and suitability of this method for describing the physical phenomena occurring during damage-free debonding. The results show that this approach enables a physically based description of the damage-free debonding of CFRP structures. This lays a foundational step toward the sustainable disassembly of lightweight structures.
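The specific cohesive zone formulation is not detailed here; a common choice is a bilinear traction-separation law, sketched below with hypothetical parameters (initial stiffness, strength and fracture energy).

def bilinear_traction(delta, K=1.0e4, t_max=30.0, G_c=0.6):
    # bilinear mode-I cohesive law with hypothetical parameters:
    # initial stiffness K [N/mm^3], strength t_max [MPa], fracture energy G_c [N/mm]
    d0 = t_max / K                   # separation at damage onset
    df = 2.0 * G_c / t_max           # separation at complete failure (area under curve = G_c)
    if delta <= d0:
        return K * delta             # undamaged, linear branch
    if delta >= df:
        return 0.0                   # fully debonded
    return t_max * (df - delta) / (df - d0)   # linear softening branch

print(bilinear_traction(0.02))       # traction [MPa] in the softening regime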
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
Consistent one- and two-dimensional theories are derived from the three-dimensional linear theory of elasticity by combining the consistent-approximation method with the pseudo-reduction technique. Well-known and higher-order differential equations are obtained for structural components like plates and beams by using neither a priori assumptions nor (engineering-motivated) correction factors.
We construct a one-dimensional first-order theory for functionally graded elastic beams using the variational-asymptotic method. This approach yields asymptotically exact one-dimensional equations, allowing for the precise determination of effective stiffnesses in extension, bending, and torsion via numerical solutions of the dual variational problems on the cross-section. Our theory distinguishes itself by offering a rigorous error estimation based on the Prager-Synge identity, which highlights the limits of accuracy and applicability of the derived one-dimensional model for beams with continuously varying elastic moduli across the cross-section.
This work revisits the linearized theory of micropolar elasticity for small deformations and presents a numerical implementation using mixed finite elements in the open-source software FEniCSx. Three boundary value problems are explored to showcase the physical implications of a micropolar formulation and give intuition to how the elastic parameters of the theory can relate to substructural effects on the microscale. A new analytical solution is derived for a two-dimensional shearing problem, incorporating forces and moment couples. This solution, as well as the well known analytical solution to the bending test for the state of pure bending, are used to gain an understanding of the elastic parameters in micropolar solids and to validate the numerical implementation. The torsion of a rectangular cuboid under rotational displacement boundary conditions is also examined numerically, demonstrating the characteristic micropolar effect of non-zero shear strain along the cuboid edges for a set of boundary conditions representative of those that might be prescribed in an experimental laboratory setting.
The rope curve, or catenary, is a mechanical problem that has been studied since the 17th century. The hyperbolic cosine curve as the shape of a hanging rope is the subject of many mechanics textbooks. Nevertheless, it is worth taking another look at the catenary because technical applications, e.g. cable cars or suspension cables, have different boundary conditions which require different approaches to the calculation. The equations of the catenary are usually non-linear and in some cases cannot be solved analytically, which means that there are always steps in the calculation process that have to be solved numerically. In the first part, a numerically robust solution scheme is presented for the case of a rope with a given rope length between two fixed suspension points. The main focus here is on the simple determination of initial values for the non-linear equations to be solved, taking extreme cases into account. In the second part, a rope is considered in which one suspension point can be moved horizontally and a counterforce acts there. This case occurs, for example, with loop-shaped supply cables on cranes. Finally, the third part looks at the case that occurs with ropeways, in which the rope is held at a suspension point with a constant rope force. This can be achieved using a tensioning weight, for example. This case leads to non-unique solutions. In addition, a universal constant can be derived here, which can be used to determine the minimum force required to hold a suspended rope at one end.
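For the first case (given rope length between two fixed suspension points), one numerically robust route is to reduce the problem to the scalar equation sinh(z)/z = sqrt(S^2 - v^2)/h with z = h/(2a); a minimal sketch with hypothetical support coordinates is given below (the specific initial-value strategy and extreme-case handling of the talk are not reproduced).

import numpy as np
from scipy.optimize import brentq

def catenary_through_points(p1, p2, S):
    # catenary y = a*cosh((x - x0)/a) + c of length S hung between p1 and p2
    (x1, y1), (x2, y2) = p1, p2
    h, v = x2 - x1, y2 - y1
    if S <= np.hypot(h, v):
        raise ValueError("rope length must exceed the chord length")
    r = np.sqrt(S**2 - v**2) / h
    # solve sinh(z)/z = r for z = h/(2a); sinh(z)/z grows monotonically from 1
    z_hi = 1.0
    while np.sinh(z_hi) / z_hi < r:
        z_hi *= 2.0
    z = brentq(lambda z: np.sinh(z) / z - r, 1e-12, z_hi)
    a = h / (2.0 * z)
    x0 = 0.5 * (x1 + x2) - a * np.arctanh(v / S)
    c = y1 - a * np.cosh((x1 - x0) / a)
    return a, x0, c

a, x0, c = catenary_through_points((0.0, 0.0), (10.0, 2.0), 14.0)
x = np.linspace(0.0, 10.0, 200)
length = np.trapz(np.sqrt(1.0 + np.sinh((x - x0) / a) ** 2), x)
print(a, length)   # recomputed arc length is close to the prescribed 14.0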
As structures are scaled down to the microscale, the influence of microstructural effects becomes increasingly significant in determining their mechanical behavior. Accurately capturing the scale effect is essential for understanding and predicting the performance of micro-scaled structures.
This study focuses on developing a space-fractional finite element method tailored for scale-sensitive truss structures. The space-fractional truss model (s-FTM) is derived within the framework of space-fractional continuum mechanics and the principle of virtual work. In this approach, the non-locality is incorporated using fractional derivatives, and the scale effect depends on the order of fractional derivatives and length scale. Importantly, the classical (local) truss theory emerges as a special case of s-FTM.
The two-dimensional (planar) trusses are then modeled as systems formed by assembling space-fractional finite elements. Numerical analyses are conducted on micro-scale truss structures, with a focus on static analysis. The results highlight the role of the non-local parameters (fractional derivative order and the length-scale function) in capturing the non-local behavior of scale-sensitive structures.
ACKNOWLEDGEMENTS
This research was funded in whole by the National Science Centre, Poland, grant number 2022/45/N/ST8/02421. For the purpose of Open Access, the authors have applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission.
REFERENCES
[1] W. Sumelka and T. Blaszczyk. Fractional continua for linear elasticity. Archives of Mechanics, 66:147-172, 2014.
[2] P. Stempin and W. Sumelka. Space-fractional Euler-Bernoulli beam model - Theory and identification for silver nanobeam bending. International Journal of Mechanical Sciences, 186:105902, 2020.
[3] P. Stempin and W. Sumelka. Formulation and experimental validation of space-fractional Timoshenko beam model with functionally graded materials effects. Computational Mechanics, 68:697-708, 2021.
[4] P. Stempin and W.Sumelka. Space-fractional small-strain plasticity model for microbeams including grain size effect. International Journal of Engineering Science, 175:103672, 2022.
[5] P. Stempin, T.P. Pawlak and W. Sumelka. Formulation of non-local space-fractional plate model and validation for composite micro-plates. International Journal of Engineering Science, 192:103932, 2023.
The section covers all fields of vibrational problems in solid mechanics or mechatronics including nonlinear effects. Submissions may address, for example, systems with nonlinear material behavior, nonlinearities in joints, mathematical solution methods (analytical or numerical), control or description of nonlinear behavior like bifurcations or chaos, or experimental identification of nonlinearities.
Two bodies of the same mass attached to opposite ends of a massless spring of natural length l₀ move on a smooth semi-plane (x<0) of a horizontal plane with the same velocity v₀, and the spring is not deformed. Assume that at the initial instant of time, the second body crosses the line x=0 and starts to slide on the rough semi-plane (x>0) while the first body continues to move on the smooth semi-plane. According to the Amontons-Coulomb law, the second body is acted on by the dry friction force F_fr=μN that is proportional to the normal reaction force N=mg, where μ is the coefficient of friction and g is the gravitational acceleration. As the friction force is directed opposite to the velocity v₂>0 of the second body, the spring is compressed and the bodies are acted on by the elastic force F₂=−k(x₂−x₁−l₀)=−F₁, where k is the stiffness of the spring, and x₁, x₂ are the x-coordinates of the bodies. The system starts to oscillate while the velocity of its center of mass decreases and may become zero. Depending on the initial velocity v₀ of the bodies, different kinds of motion of the system can arise as a result. In particular, if the spring is asymmetric and its stiffness k for extension is greater than its stiffness for compression, it may happen that the first body, moving to the left (v₁<0), starts to pull the second body and the system moves to the left. Therefore, the center of mass of the system starts to move to the left, and this phenomenon may be interpreted as reflection of the bodies by friction. In this talk, we analyze and classify possible motions of the system depending on the initial conditions.
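A minimal numerical sketch of the described system is given below (two equal masses, asymmetric spring, kinetic Coulomb friction acting only on the rough half-plane x > 0); the parameter values are hypothetical, friction is regularized and stiction is neglected, so the simulation only qualitatively indicates whether a reflection of the centre of mass occurs.

import numpy as np
from scipy.integrate import solve_ivp

m, l0, mu, g = 1.0, 1.0, 0.3, 9.81
k_comp, k_ext = 100.0, 300.0           # asymmetric spring stiffness (hypothetical values)

def rhs(t, s):
    x1, v1, x2, v2 = s
    d = x2 - x1 - l0
    k = k_ext if d > 0 else k_comp
    f_spring = k * d                   # pulls body 1 toward body 2 when the spring is stretched
    # regularized kinetic Coulomb friction on body 2, active only on the rough half-plane
    f_fric = -mu * m * g * np.tanh(v2 / 1e-3) if x2 > 0 else 0.0
    return [v1, f_spring / m, v2, (-f_spring + f_fric) / m]

v0 = 1.0
sol = solve_ivp(rhs, (0.0, 3.0), [-l0, v0, 0.0, v0], max_step=1e-3)
v_cm = 0.5 * (sol.y[1] + sol.y[3])
print(v_cm[-1])   # a negative value would indicate reflection of the centre of mass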
Today’s road traffic is characterized by an enormous effort in the consumption of space, energy, and resources. The initiative "Pure Mobility" intends to develop a mobility system in addition to public transport based on small electric vehicles in combination with innovative roads made of water-permeable, sustainable, and resource-efficient materials. The driving resistances, and in particular the rolling resistance, depend on tire geometry, tire material, and tire-road interaction. This study focuses on the investigation of the vehicle rolling resistance in tire-road interaction. In order to accomplish this, the commercial finite element software Abaqus is used for tire analysis, which includes tire inflation, footprint analysis, steady-state rolling analysis as well as an explicit analysis step of the tire rolling on a surface. A Python-based post-processing algorithm is employed for identifying the hysteresis losses in tires rolling on a rigid surface. These hysteresis losses appear due to the material behavior of the tire rubber, which is modeled using a hyperelastic material model in conjunction with a Prony series. A parameter study varying inflation pressure, wheel load, and translational tire velocity is conducted to check the plausibility of the developed method. The results are input to a standard automotive simulation tool to compare the energy consumption of tire-road combinations in the WLTC. The rolling resistance, with the resulting energy demand, is a key factor in the design of the Pure Mobility Vehicle in combination with the Pure Mobility Road.
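The essential post-processing step is the loop integral of stress over strain for one deformation cycle; the sketch below evaluates it with the trapezoidal rule and checks it against the closed-form dissipation pi*sigma0*eps0*sin(delta) of a linear viscoelastic cycle (the actual extraction of field data from Abaqus is not shown).

import numpy as np

def hysteresis_loss(strain, stress):
    # energy dissipated per cycle and unit volume: the (signed) area enclosed
    # by the stress-strain loop, i.e. the loop integral of stress d(strain)
    eps = np.append(strain, strain[0])     # close the loop
    sig = np.append(stress, stress[0])
    return np.trapz(sig, eps)

# verification on a linear viscoelastic cycle with phase lag delta
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
eps0, sig0, delta = 0.02, 1.2, 0.3
loss = hysteresis_loss(eps0 * np.sin(t), sig0 * np.sin(t + delta))
print(loss, np.pi * eps0 * sig0 * np.sin(delta))   # the two values agree closely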
The classical problem of (n+1) bodies with variable masses attracting each other according to Newton's law of universal gravitation is investigated. The central body P0 is assumed to lose mass isotropically, while the masses of the bodies P1,…, Pn can change anisotropically at different rates, which leads to the appearance of reactive forces. The laws of mass variation are arbitrary, given, twice differentiable functions of time. Since the differential equations of motion are not integrable, the problem is studied using perturbation theory methods. An exact solution to the two-body problem with variable masses, which describes the aperiodic motion of the bodies along quasi-conical sections, is used as the first approximation. It is assumed that the bodies P1,…, Pn move around the central body P0 along quasi-elliptical orbits in such a way that their orbits do not intersect and mean-motion resonances are absent in the system.
Differential equations determining the perturbed motion of the bodies are obtained in terms of the osculating elements of aperiodic motion on quasi-conical sections in the framework of Newton’s formalism. In the case of small eccentricities and inclinations of the orbits, the perturbing forces are expanded into power series in small parameters and contain the secular terms and the short-period perturbations due to the orbital motion of the bodies. Averaging the equations of the perturbed motion over mean longitudes of the bodies P1,…, Pn , we eliminate the short-period oscillations and obtain the evolutionary equations which describe the orbital elements behavior on long-time intervals. These equations are solved numerically for different laws of body mass variations and numerical parameters corresponding to the exoplanetary system TOI-700 composed of five bodies. All the relevant computations are carried out using Wolfram Mathematica.
The conventional ring-spinning process has been used for over a century, being one of the most used processes in the textile industry. However, this process has some disadvantages at high spindle speeds, such as friction in the traveler/ring components that generates heat, resulting in yarn breakages or lower-quality production.
New technological advances like superconducting magnetic bearing (SMB) systems have been implemented to overcome the limitations on spindle speed. The SMBs use superconductors and permanent magnets to achieve magnetic levitation, thereby drastically decreasing friction and enabling spindle speeds of up to 50,000 rpm.
Previous studies have investigated SMB dynamics, and it has been shown that the mathematical model for the six-dimensional motion can be reduced to a second-order ordinary differential system in matrix form consisting of mass, stiffness, and damping matrices. However, due to the freedom given by the magnetic levitation, external forces induce oscillations of the permanent magnet. A recent study implemented an Eddy Current Damper (ECD) based on copper rings that increases the damping coefficients, resulting in a reduction of the oscillations of the permanent magnet.
In this contribution, the study of the SMB with ECD is extended to analyze the frequency response in all directions of movement for different thicknesses of the copper rings. During the theoretical analyses, it was deduced that the natural frequency associated with the tilting angles splits into two distinct natural frequencies, which is characteristic of spinning disks due to gyroscopic effects.
Experiments were conducted in an SMB tester, and the motion of the permanent magnet was captured using laser position sensors. The oscillations were measured for different angular velocities to study the resonance frequencies involved during the acceleration phase. Then, parametric identification was performed to determine the stiffness and damping coefficients for all the copper ring configurations. Forces and torques due to external forces were estimated based on the model.
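The reported split of the tilting natural frequency into backward and forward whirl can be reproduced with a minimal two-degree-of-freedom rigid-rotor model with a gyroscopic matrix; the inertia, stiffness and damping values below are hypothetical, not the identified SMB parameters.

import numpy as np

It, Ip = 2.0e-4, 3.5e-4      # transverse and polar moments of inertia [kg m^2], hypothetical
kt, ct = 50.0, 0.02          # tilting stiffness [Nm/rad] and damping [Nms/rad], hypothetical

M = np.diag([It, It])
K = np.diag([kt, kt])
C = np.diag([ct, ct])

def whirl_frequencies(Omega):
    # eigenfrequencies [Hz] of M q'' + (C + Omega*G) q' + K q = 0 for the two tilt DOFs
    G = Ip * np.array([[0.0, 1.0], [-1.0, 0.0]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C + Omega * G)]])
    lam = np.linalg.eigvals(A)
    return np.sort(np.abs(lam.imag)) / (2.0 * np.pi)

for rpm in (0.0, 10000.0, 30000.0):
    print(rpm, whirl_frequencies(rpm * 2.0 * np.pi / 60.0))
# at standstill both tilt frequencies coincide; with increasing speed they split
# into a backward and a forward whirl branch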
Composite thin-walled shell and plate elements are indispensable components of modern critical structures and technical devices for various purposes. This is due to their efficient material usage and ability to provide the required stiffness in specific directions under operational conditions. Intense cyclic loads cause geometrically nonlinear vibrations in these elements. Preventing resonance during operation requires determining amplitude-frequency characteristics at the design stage. Research in the nonlinear mechanics of thin-walled structural elements began with Kármán's quadratic plate theory, an extension of Kirchhoff-Love’s classical linear theory. Later, some researchers used nonlinear technical theory based on S. P. Timoshenko's shear model. However, these theories do not adequately account for the specific deformation characteristics of composite plates and shells. The refined theory developed by the authors addresses this issue by approximating stress-strain characteristics using finite sums of Legendre polynomials with respect to the coordinate normal to the mid-surface. This method, based on I. Vekua’s ideas and further developed by B. Pelekh and M. Sukhorolskyi, satisfies boundary conditions on the surface. The resulting two-dimensional relations involve minimal-order differential operators while accounting for the material's compliance to transverse shear and compression — key properties of polymer-matrix reinforced composites — for both linear and nonlinear deformation. In Kármán’s theory, known relations describe the dependence between fundamental frequency and vibration amplitude for rectangular plates and cylindrical shells. These nonlinear relations, derived using A. Volmir’s method of integrating over one vibration period, are polynomial or transcendental. However, applying a similar approach in the case of the generalized and aforementioned refined theory leads to the appearance of secular terms in the amplitude-frequency characteristics, which contradict the physical nature of free vibrations. The authors addressed this issue by employing the generalized perturbation method, previously applied by R. Lewandowski to problems of geometrically nonlinear beam vibrations. Both approaches yield close results within certain ranges of physical-mechanical characteristics and geometric parameters of the thin-walled elements considered. However, significant discrepancies in specific cases prompt a more detailed analysis of the mathematical deformation models and their differential operators. Some aspects and results obtained by the authors will be presented for discussion.
Piezoelectric actuators are used to drive high-precision positioning systems. Ceramic actuators can withstand very high compressive loads but are susceptible to tensile forces. Cyclic loads and frequent load changes can cause cracks to form in the material over its lifespan. The influence of these cracks on the behavior and the durability of the actuator is the subject of current research. The long-term target is to clarify if a prediction of failure is possible based on the dynamic behavior of the actuator.
Cracks reduce the stiffness of the actuator locally, which changes the dynamics of the entire system. In such a mechatronic system, changes in the structure can also be detected in the electrical variables via the electromechanical coupling. Since cracks have a major influence on the lifetime of the actuators, it is extremely important to investigate and build up knowledge about the dynamic behavior and simulation of the actuators, in particular in the presence of cracks.
With the shift in resonance frequencies and the appearance of additional frequency components, it is the aim to determine the status of an actuator. The first step is to identify whether the actuator is damaged or not. In addition, the aim is to obtain a conclusion about the crack size and position based on the dynamic behavior.
The section focuses on constitutive modelling of natural and artificial materials subject to elastic and inelastic deformation processes. The aim is to compare new constitutive models formulated on both the phenomenological and the micromechanics basis to determine their validity by comparison of simulations with experiments. A wide range of open problems will be considered in the section, like multi-scale modelling of heterogeneous materials, implementation of constitutive models in numerical applications, and the virtual testing of structural systems.
Elastomers are renowned for their ability to undergo significant deformations and recover their original shape when the acting forces are removed. However, prolonged exposure of this type of polymer to heat, UV light or chemicals can alter the original elastic material structure and therefore be the reason for a pronounced decrease in material performance. Given the importance of lifetime estimation in the field of design and manufacturing, the elastomer characteristics and their dependence on environmental influences must be properly investigated to prevent premature component failure. This study, currently in its very early stages, focuses on the comparison of selected filled and unfilled natural rubber compounds and their differences in mechanical capabilities. The environmental effects are represented by thermal ageing; its basic principle, the applied methods and the results will be presented and discussed with the goal of establishing the critical ageing time for the individual materials.
The mechanical behavior of elastomers, such as natural rubber, is significantly influenced by intrinsic reinforcement effects occurring at large strains, namely strain-induced crystallization (SIC). The outstanding fracture resistance of the material is essentially attributed to the presence of this effect, which makes it interesting for a wide variety of applications. Common constitutive modelling approaches to SIC usually incorporate micro-mechanical considerations, consequently resulting in computationally expensive formulations. Nevertheless, certain limitations are still not overcome, so that an exact depiction of the complete phenomenon is yet to be accomplished. Recently emerged data-driven approaches using physics-augmented neural networks (PANNs) have been successfully applied to model non-linear inelastic material behavior, cf. [1], achieving high representational quality. In this contribution, a PANN-based approach for the modeling of SIC from a continuum-level point of view is proposed, providing a flexible, efficient and precise method for the computational treatment of this effect. In doing so, the suggested invariants-based model, which is constructed to describe purely incompressible deformations, ensures thermodynamic consistency of the dissipative phase transition process through a formal construction as a generalized standard material. Moreover, the fulfillment of further desirable properties of material models, such as convexity considerations and normalized initial states of the descriptive potentials, contributing to a plausible extrapolation behavior of the model, is ensured by definition. The model, calibrated on experimentally obtained data, is subsequently validated for unseen deformation sequences, exhibiting significant advantages over classical constitutive models, both in computational efficiency and accuracy.
References:
[1] M. Rosenkranz, K. A. Kalina, J. Brummund, W. Sun, M. Kästner [2024]: "Viscoelasticity with physics augmented neural networks: model formulation and training methods without prescribed internal variables", Computational Mechanics 74, 1279–1301
Damping elements made of elastomers are often used in dynamic systems such as vehicles or centrifuges. Numerical simulation methods in general and multi-body simulation (MBS) in particular are applied to predict loads and kinematic quantities in these technical systems. This requires knowledge of mass, damping and stiffness properties. Therefore, the modeling of elastomer components is essential for high prediction accuracy. However, the complex and extremely non-linear material behavior of elastomers requires material models capable of reproducing these mechanical properties. Various phenomenological or physically-based approaches exist. The latter offer the advantage that the few material parameters are valid independent of the load conditions; hence, the experimental effort for material characterization is limited. The dynamic flocculation model (DFM) by Lorenz \& Klüppel [1,2] is such a physically-motivated model, which reproduces the material response of filled elastomers in one dimension. For a more realistic modeling of elastomer components, a three-dimensional formulation is required, which is implemented for finite element analyses (FEA). For this purpose, Freund [3,4] proposed a generalization method from 1D to 3D by establishing the concept of representative directions (CRD). This is based on a finite number of directions, approximately equally distributed in space, for which the corresponding one-dimensional material response is calculated and integrated over all directions. Furthermore, an efficient coupling to FEA is necessary to integrate the DFM into MBS. Consequently, the computation time of the FE implementation of the material model should be minimized. This conflicts with the CRD, so this contribution investigates strategies to improve the efficiency of the CRD applied to the DFM. These include a method to reduce the number of directions and an approach to minimize the memory requirements for a constant number of directions.
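As a rough illustration of the concept of representative directions (a schematic sketch only, not the authors' implementation and not the DFM itself; the direction set and the one-dimensional stress law below are placeholders), a three-dimensional stress response can be assembled from a 1D stress function evaluated along a finite set of unit directions:

    import numpy as np

    def stress_from_directions(F, sigma_1d, directions, weights):
        # Assemble a stress tensor from a 1D stress law evaluated along
        # representative unit directions (schematic CRD sketch, not the DFM).
        C = F.T @ F                                   # right Cauchy-Green tensor
        S = np.zeros((3, 3))                          # 2nd Piola-Kirchhoff-type stress
        for d, w in zip(directions, weights):
            lam = np.sqrt(d @ C @ d)                  # stretch in direction d
            S += w * (sigma_1d(lam) / lam) * np.outer(d, d)
        return F @ S @ F.T / np.linalg.det(F)         # push forward to a Cauchy-type stress

    # placeholder 1D law (neo-Hookean-like), standing in for the 1D material response
    sigma_1d = lambda lam: 1.0 * (lam**2 - 1.0 / lam)
    # crude direction set: octahedron vertices with equal weights (illustrative only)
    dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1]], float)
    w = np.full(len(dirs), 1.0 / len(dirs))
    print(stress_from_directions(np.diag([1.1, 0.95, 0.96]), sigma_1d, dirs, w))

In practice, far finer direction sets are used, which is exactly the computational cost the contribution aims to reduce.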
References
[1] Lorenz, H., Klüppel, M.: A microstructure-based model of the stress–strain behavior of filled elastomers. In: Heinrich, G. et al. (Eds.), Constitutive Models for Rubber VI, pp. 423-428 (2009)
[2] Raghunath, R.: A new physically motivated thermoviscoelastic model for filled elastomers. PhD thesis, OvGU Magdeburg (2017)
[3] Freund, M. et al.: Finite element implementation of a microstructure-based model for filled elastomers. International Journal of Plasticity 27, pp. 902-919 (2011)
[4] Freund, M. : Verallgemeinerung eindimensionaler Materialmodelle für die Finite-Elemente-Methode. PhD thesis, TU Chemnitz (2012)
This study focuses on a detailed investigation of the material behavior of rubber blends composed of natural rubber (NR) and styrene-butadiene rubber (SBR) within the frequency domain. The goal is to analyze the specific properties of these blends to understand the significance of the phase transition between the individual phases of the two components. Due to the immiscibility of the two components, separate phases form at the microscale. A key characteristic of these NR-SBR blends is that the rubber phases are not strictly segregated. Instead, an interphase exists where the polymer chains of NR and SBR overlap. This interphase significantly influences the mechanical behavior of elastomer blends. Previous considerations were based on simplified assumptions about the material distribution within the interphase, using a basic modeling approach involving linear interpolation with a phase parameter.
CANCELLED
The self-heating of rubber is a little-understood topic with only a few publications in the scientific literature. Vibration absorber systems are constructed to reduce the motion of machines, vehicles and other devices. If vibration absorbers include rubber elements, the vibration energy can be transferred into heat by the dissipation of the material. This publication concentrates on the self-heating of rubber through the comparison of measurements and numerical calculations of the inner heating in the rubber element. Here, the material is simulated with an inner point-like energy source. When the basic system is excited at its eigenfrequency without the use of the absorber, the rubber mounting between the shaker test rig and the base heats up strongly, because the vibration amplitude of the system is large. With the same performance of the shaker test rig but with the rubber-based absorber in use, the self-heating of the rubber mounting on the base is reduced to a low level. This demonstrates the functionality of the absorber. The main focus of the publication is the comparison of the temperature measurements of the rubber parts with the numerical calculation. The simulation is based on a difference method and shows good agreement with the measurements. In conclusion, this publication is a dedicated attempt to investigate the self-heating of rubber elements and thereby to increase knowledge of this little-studied topic.
Coupled problems arise in several applications. From a general point of view, each problem containing more than one primary field is called a coupled one. Usually the class of coupled problems is subdivided into volumetrically coupled problems and problems with surface coupling. The class of volumetrically coupled problems contains e.g. the fluid flow in porous solids described by mixture theory, thermo-mechanically coupled problems, chemo-mechanically coupled problems and electro- or magneto-mechanically coupled problems, while in the second class problems like the fluid-solid interaction via an interface are included. Common to all these problems is that the presence of different fields in the numerical treatment requires special attention with respect to the multi-field formulation and the solution strategy. The session on coupled problems deals with all aspects mentioned above, i.e. ranging from modelling aspects to numerical solution strategies.
The climate crisis represents one of the most pressing and intricate challenges humanity faces in the modern era. The effects of rising global temperatures are already evident and cannot be ignored. There is widespread agreement that the most effective approach to mitigating global warming and its consequences is a significant and immediate reduction in greenhouse gas emissions worldwide. In this effort, renewable energy sources are pivotal, with green hydrogen energy storage playing a key role as a powerful carbon-neutral energy source. Proton Exchange Membrane Water Electrolysis (PEMWE) emerges as a promising technology for producing high-purity hydrogen. It offers several advantages, including high efficiency, flexibility in adapting to dynamic electrical loads, and the ability to operate under high current densities and pressures.
This work presents a comprehensive framework aimed at capturing the complex multi-physical phenomena occurring within the membranes of PEMWE cells. The most efficient solid electrolyte for water electrolysis is Nafion, a perfluorinated sulfonic acid with sulfonic acid groups that are covalently bonded to a polymer matrix. These polar functional groups enable the membrane to absorb water, resulting in the nanoscale segregation of the polymeric matrix from the water-filled channels, forming a porous structure. In the aqueous phase, the dissociated protons are free to move, facilitating ionic conductivity, while the fixed anions in the membrane structure do not contribute to this conductivity.
The proposed framework is based on the theory of porous media (TPM), traditionally valid for immiscible phases, extended by the Theory of Mixtures (TM) that includes single or multiple dissolved solutes or mobile ions in electrodynamics. This approach is found to be robust for modeling the strongly coupled interactions that take place within PEMWE systems, including the mechanical behavior of the Nafion polymeric material, water transport and pressure within the membrane's nanopores and proton diffusion driven by concentration and electric potential gradients. These factors are essential for understanding and optimizing the performance of PEMWE systems.
REFERENCES:
[1] Antonini A, Heider Y, Xotta G, Salomoni V, Aldakheel F (2025) Computational multi-physic modeling of membranes in proton exchange membrane water electrolyzers. “Submitted”
[2] Aldakheel F, Kandekar C, Bensmann B, Dal H, Hanke-Rauschenbach R (2022) Electro-chemo-mechanical induced fracture modeling in proton exchange membrane water electrolysis for sustainable hydrogen production. Comput. Methods Appl. Mech. 400:115580, DOI https://doi.org/10.1016/j.cma.2022.115580
The Structural Battery represents an innovative carbon fiber-reinforced polymer composite, designed to serve a dual purpose: As a storage device for electrical energy (viz. as a battery) and as a robust support for mechanical loads. Carbon fibers play a multifaceted role, acting as an active electrode material, current collector, and mechanical reinforcement. These fibers are integrated into a Structural Battery Electrolyte, comprising a solid phase (a porous polymer network) and a liquid electrolyte that facilitates the movement of ions, particularly Li-ions. The ion-mobility is brought about by stress-assisted diffusion (driven by the chemical potential gradient), migration (induced by the electric field), and convection (resulting from fluid motion, i.e., seepage) [1]. In summary, the liquid phase within the porous polymer network promotes ion transport between electrodes, while the solid phase effectively distributes mechanical stresses.
This presentation showcases the capabilities of a recently developed computational two-scale modeling framework, exemplified by [2,3], in evaluating the integrated electro-chemo-mechanical properties of Structural Battery electrode materials. The governing equations of the problem are established upon coupling Gauss law and mass conservation for each mobile species with mechanical (quasi-static) equilibrium. By utilizing Variationally Consistent Homogenization, we are able to establish a two-scale model where both the macro- and the sub-scale equation systems emerge from a single-scale formulation. We explore various couplings and their properties across the scales through numerical assessment of the intricate characteristics of Structural Battery electrode materials.
[1] Carlstedt, D., Runesson, K., Larsson, F., Jänicke, R., \& Asp, L. E. (2023). Variationally consistent modeling of a sensor-actuator based on shape-morphing from electro-chemical–mechanical interactions. Journal of the Mechanics and Physics of Solids, 179, 105371.
[2] Rollin, D. R., Larsson, F., Runesson, K., \& Jänicke, R. (2023). Upscaling of chemo-mechanical properties of battery electrode material. International Journal of Solids and Structures, 281, 112405.
[3] Tu, V., Larsson, F., Runesson, K., \& Jänicke, R. (2023). Variationally consistent homogenization of electrochemical ion transport in a porous structural battery electrolyte. European Journal of Mechanics-A/Solids, 98, 104901.
Electrochemical machining (ECM) is a non-conventional machining process which allows the contactless manufacturing of high-strength and hard to machine polycrystalline materials without inducing residual stresses or thermo-mechanical surface changes in the workpiece [1]. Furthermore, since the ECM technology allows high productivity rates, it is used to machine high-precision components in the aerospace and automotive industries. However, determining the optimal machining parameters, such as the required tool shape, the composition and flow rate of the electrolyte and the strength of the electric current, is time-consuming and cost-intensive. Hence, the ECM process is predominantly used for applications requiring large batch sizes. To extend its industrial applicability, for example by numerically predicting the desired machining parameters in advance, detailed knowledge and a deep understanding of the machining process is required. In this context, van der Velden et al. [2] introduced a thermoelectrically coupled finite element simulation approach where the anodic dissolution process is modeled using effective material parameters. The dissolution state of the finite elements is indicated by a dissolution variable which evolves according to Faraday’s law. The contact between the workpiece and the electrolyte, which is required for the material to dissolve, is ensured by an activation function which introduces a mesh-dependency to the simulation approach. In order to avoid this mesh-dependency, in this work the anodic dissolution process is described by means of a phase-field approach, where the dissolution variable serves as the scalar phase-field variable and its evolution in time follows the Allen-Cahn equation. The applicability of the proposed model is demonstrated by different numerical examples.
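In generic form (the mobility, potential and interface parameter below are placeholders, not necessarily the functional used in this work), an Allen-Cahn-type evolution of a dissolution phase field $\varphi$ reads
\[
\dot{\varphi} = -M\,\frac{\delta \Psi}{\delta \varphi}
             = -M\Bigl(\frac{\partial \psi(\varphi)}{\partial \varphi} - \epsilon^{2}\,\Delta\varphi\Bigr),
\]
with a mobility $M$, a local (e.g. double-well) potential $\psi$, and an interface width parameter $\epsilon$; in the ECM context, the driving force would additionally encode the current-density-dependent dissolution rate following Faraday's law.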
[1] Klocke, F.; Zeis, M.; Klink, A.: Technological and Economical Capabilities of Manufacturing Titanium- and Nickel-Based Alloys via Electrochemical Machining (ECM). Key Engineering Materials, Vol. 504–506, pp. 1237–1242, 2012
[2] van der Velden, T.; Rommes, B.; Klink, A.; Reese, S.; Waimann, J; A novel approach for the efficient modeling of material dissolution in electrochemical machining. International Journal of Solids and Structures, Vol. 229, 111106, 2021
In this talk, we discuss the morphology formation within the active layer of organic solar cells. The active layer consists of regions of pure electron donor and acceptor and is formed during the drying of a thin film. This process is based on a spinodal decomposition of the donor and acceptor in a solvent, where the solvent is allowed to evaporate. The obtained model couples the phase separation process with hydrodynamic and evaporative effects. We will present the derivation of a thermodynamically consistent model coupling these phenomena, and show numerical examples based on finite element simulations.
Osteoporosis is the most common bone disease worldwide. The disease is characterized by a loss in bone density, which reduces the bone stiffness over time and thus increases the likelihood of fractures. In past contributions (e.g., [1-3]), we introduced a multiscale material model of cancellous bone considering mechanical, electric and magnetic effects. An important application of this model is the simulation of early detection of osteoporosis via sonography, which may be a viable future diagnostics tool. In this talk, we present important developments in our modeling approaches. First, we introduce a dimensionless (scaled) formulation of the underlying PDE system to generalize the problem and to enhance its numerical stability. Furthermore, we derive a simplified, decoupled model from the original equations and discuss its advantages and disadvantages in comparison to the fully coupled model. To solve the new model, we then apply the finite element (FE) fast Fourier transform (FFT) method, which has been used in the past to solve a variety of multiscale problems [4]. We show numerical results and discuss them with regard to accuracy, applicability and computational costs.
References:
[1] Blaszczyk, M., Hackl, K., “Multiscale modeling of cancellous bone considering full coupling of mechanical, electric and magnetic effects”, Biomech. Model. Mechanobiol., (2021).
[2] Stieve, V., Blaszczyk, M., Hackl, K., “Inverse modeling of cancellous bone using artificial neural networks”, Z. Angew. Math. Mech., (2022).
[3] Blaszczyk, M., Hackl, K., “On the effects of a surrounding medium and phase split in coupled bone simulations”, Z. Angew. Math. Mech., (2024).
[4] Gierden, C., Kochmann, J., Waimann, J., Svendsen, B., Reese, S., “A review of FE-FFT-based two-scale methods for computational modeling of microstructure evolution and macroscopic material behavior”, Arch. Comput. Methods. Eng., (2022).
This section is dedicated to discussing recent advances in multiscale and homogenization techniques for static and dynamic problems. Topics of particular interest are nonlinear homogenization techniques, multiscale modelling of failure processes and localization phenomena, FE² methods, atomistic-to-continuum coupling, contact homogenization, model reduction techniques and, furthermore, homogenization schemes incorporating experimentally determined microstructure data.
Solid oxide fuel cells (SOFCs) are among the most promising energy conversion technologies, as they (i) enable a significant reduction in emissions of air-polluting gases and of carbon dioxide and (ii) simultaneously achieve high energy efficiencies. The overall performance of the fuel cell strongly depends on the porous electrodes - anode and cathode - at which the electrochemical reactions occur. The microstructure of these electrodes is characterized by key factors, such as the material composition and the volume fractions, the n-point correlation functions, the tortuosity, the percolation, and the density of the double or triple phase boundary. The objective of the current work is to present the algorithms that are used to determine these descriptors, which affect the transport of gas, electrons and ions as well as the mechanical behaviour. The computational first-order homogenization is applied to determine the most important physical effective properties, in particular various conductivities, the permeability and the mechanical behaviour including creep. In addition, the open-source tool MCRpy - a differentiable microstructure reconstruction algorithm - is used to characterize and reconstruct the microstructure. A thorough comparison between an original and a reconstructed anode in terms of their geometrical and physical properties will be presented. With the proposed analysis, it is possible to build correlations between geometrical microstructure properties and the resulting effective physical behaviour. Thus, microstructure-property relationships of SOFC electrodes can be established.
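Two of the descriptors mentioned above, the volume fraction and the two-point correlation, can be sketched for a voxelized microstructure as follows (synthetic random data here, not an actual electrode reconstruction):

    import numpy as np

    def two_point_correlation(indicator):
        # Periodic two-point probability S2 of a binary voxel field via FFT autocorrelation.
        f = np.fft.fftn(indicator.astype(float))
        return np.fft.ifftn(f * np.conj(f)).real / indicator.size

    phase = np.random.rand(32, 32, 32) < 0.3      # synthetic pore phase, ~30% target
    print("volume fraction:", phase.mean())
    s2 = two_point_correlation(phase)
    print("S2 at zero lag :", s2[0, 0, 0])        # equals the volume fraction

Descriptors such as tortuosity, percolation and phase-boundary densities require additional topological analysis of the voxel data and are not shown here.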
We present an approach to simulate electrochemical processes in the porous electrodes of solid oxide fuel cells, focusing on the interplay of diffusion and ion exchange in their complex material structure. The ion exchange that takes place at the material interfaces leads to discontinuities that pose a challenge for numerical modeling. Solving diffusion problems within these domains leads to strongly oscillating solutions due to the complex microstructure. To overcome the computational challenges of directly resolving the microstructure, we apply a homogenization technique based on the asymptotic expansion approach. By separating macroscopic and microscopic scales, we derive a cell problem and a macroscopic problem. The homogenized model serves as an efficient approximation to the original microscopic model. An important aspect of this work is the treatment of interface discontinuities. Both the solutions of the microscopic problem and the cell problem exhibit discontinuities at material interfaces. To accurately model these discontinuities, we use extended finite elements (XFEM). Finally, the results of the microscopic and macroscopic models are compared, highlighting the effectiveness of the homogenized approach and the XFEM framework in capturing the multiscale behavior of the system.
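Schematically, for a scalar diffusion problem with a rapidly oscillating coefficient $A(x/\epsilon)$ (the interface exchange terms of the actual model are omitted here), the asymptotic expansion, the cell problem and the homogenized coefficient read
\[
u^{\epsilon}(x) \approx u_0(x) + \epsilon\, u_1\!\bigl(x, \tfrac{x}{\epsilon}\bigr) + \dots,
\qquad
-\nabla_y\!\cdot\!\bigl(A(y)\,(\nabla_y \chi_j + e_j)\bigr) = 0 \ \text{in } Y,
\qquad
A^{\mathrm{hom}}_{ij} = \frac{1}{|Y|}\int_Y e_i \cdot A(y)\,(\nabla_y \chi_j + e_j)\,\mathrm{d}y,
\]
with $Y$-periodic correctors $\chi_j$; the XFEM enrichment enters in the numerical solution of both the cell problem and the microscopic reference problem.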
The aim of this work is to develop and implement an advanced two-scale simulation framework that links the nanoscale to the length scale of microns via computational homogenization. We propose a novel framework that integrates the LAMMPS molecular statics solver with a finite element method (FEM). The core objective of this work is to reduce computation time by enabling parallel execution of nanodomain simulations. Key novelties are the establishment of robust communication between the FE solver and LAMMPS for data exchange similar to the FE² method. We address the architectural design of the proposed system with a focus on parallelization techniques, performance optimization, and the high-level client-server design pattern adopted for efficient scale bridging. The implementation allows for more complex and larger-scale simulations, positioning it as a significant advancement in the field of multiscale modeling.
The quantitative prediction of macroscopic mechanical properties of materials requires the consideration of the material microstructure. The focus of this talk is on metallic material systems with lamellar microstructures, such as NiAl-Cr(Mo) [1] or binary Fe-Al [2]. These consist of individual domains in which the material's constituents are arranged in fine layers (thickness in the micrometre range) with a distinct layer normal direction. The layer interfaces act as obstacles for dislocations, leading to dislocation pileup and subsequently to an increase in the macroscopic yield stress. This effect strongly depends on the width of the layers of the individual phases.
We propose a physically motivated material model for the mechanical behaviour of a single domain. To this end, we describe the microstructure of the domain as a rank-1 laminate, which allows for efficient two-scale homogenisation. Exact localisation relations are used to explicitly resolve the local stress and strain fields. Within the framework of gradient crystal plasticity [3,4], the yield conditions take the form of a system of coupled Fredholm integro-differential equations for the plastic slip, which is solved semi-analytically.
The model allows for a physically motivated description of the Hall-Petch effect, taking into account the material contrast and relative orientation of the constituents, as well as the lamella widths.
[1] D. Wicht, M. Schneider and T. Böhlke, On Quasi-Newton methods in fast Fourier transform-based micromechanics, International Journal for Numerical Methods in Engineering 121 (2019) 1665-1694
[2] A. Schmitt, K.S. Kumar, A. Kauffmann, M. Heilmaier, Microstructural evolution during creep of lamellar eutectoid and off-eutectoid FeAl/FeAl2 alloys, Intermetallics, Volume 107, (2019), Pages 116-125
[3] H. Erdle, T. Böhlke, Analytical investigation of a grain boundary model that accounts for slip system coupling in gradient crystal plasticity frameworks, Proc. R. Soc. A 479 (2023) 20220737.
[4] Morton E. Gurtin, A gradient theory of single-crystal viscoplasticity that accounts for geometrically necessary dislocations, Journal of the Mechanics and Physics of Solids, Volume 50, Issue 1, (2002), Pages 5-32
Cement paste serves as a binder for cementitious materials, playing a crucial role in the development of construction materials. Its composition and constituent ratios can be adjusted to meet specific requirements. The microstructure of cement paste, formed during the hydration process, typically comprises a combination of various hydration products, unreacted clinker particles and pores. Geometric models representing the microstructure of cement paste can be generated based on experimental methods, such as X-ray microtomography, or by using software dedicated to simulating the hydration process. Various numerical models of cement paste microstructure have been proposed in the literature, differing in their level of detail and the properties assigned to individual phases. This work concentrates on investigating the influence of the cement paste’s microstructure model parameters on its effective elastic properties. During this study, the Virtual Cement and Concrete Testing Laboratory (VCCTL) software is used to establish the microstructure model, including detailed information related to different phases. In this case, various clinker phases (alite, belite, tricalcium aluminate), as well as various hydration products (calcium silicate hydrate, portlandite, ettringite, etc.), can be distinguished. On the other hand, models presented in the literature often simplify this description for practical reasons. The simplifications include, for instance, treating clinker phases as a single entity and unifying the properties of hydration products. It is predicted that the results of this study will allow assessing the impact of such simplifications on stiffness predictions. Another factor to be investigated is the influence of the size of the representative volume element. For the determination of the effective elastic constants, computational homogenization based on the fast Fourier transform (FFT) is adopted. The microstructures corresponding to the state after 28 days of hydration are considered; two cases involving different water-to-cement ratios are analysed. It is expected that the obtained results will facilitate the preparation of virtual models of the cement paste microstructure, allowing achieving a balance between model simplicity and accuracy of homogenization. Such models can later be used in the framework of multiscale simulations involving interactions across different scales.
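The FFT-based homogenization workflow can be illustrated by a Moulinec-Suquet-type basic scheme; the sketch below solves a scalar conductivity analogue on a synthetic two-phase image (placeholder data and contrast, not the elastic cement-paste model itself):

    import numpy as np

    def effective_conductivity(k, E=np.array([1.0, 0.0]), tol=1e-8, maxit=500):
        # Moulinec-Suquet-type basic scheme for scalar conductivity on a periodic 2D grid.
        # k: (N, N) local conductivity field, E: prescribed mean temperature gradient.
        N = k.shape[0]
        k0 = 0.5 * (k.min() + k.max())                         # reference medium
        xi = np.fft.fftfreq(N) * N
        XI = np.stack(np.meshgrid(xi, xi, indexing="ij"))      # discrete frequency vectors
        xi2 = (XI**2).sum(0)
        xi2[0, 0] = 1.0                                        # avoid division by zero
        g = np.broadcast_to(E.reshape(2, 1, 1), (2, N, N)).copy()   # gradient field
        for _ in range(maxit):
            tau_hat = np.fft.fft2((k - k0) * g)                # polarization in Fourier space
            g_new_hat = -XI * (XI * tau_hat).sum(0) / (k0 * xi2)    # apply the Green operator
            g_new_hat[:, 0, 0] = E * N * N                     # enforce the prescribed mean gradient
            g_new = np.fft.ifft2(g_new_hat).real
            converged = np.linalg.norm(g_new - g) < tol * np.linalg.norm(g)
            g = g_new
            if converged:
                break
        return (k * g).mean(axis=(1, 2)) @ E / (E @ E)         # effective conductivity along E

    phase = np.random.rand(64, 64) < 0.4                       # synthetic two-phase image
    print(effective_conductivity(np.where(phase, 5.0, 1.0)))

The elastic case used in this study replaces the scalar Green operator by its fourth-order counterpart and the voxel field by the multiphase cement-paste microstructure.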
Nanostructures are at the forefront of research due to their extraordinary mechanical, thermal, and chemical properties. Slender atomic arrangements, such as nanorods and nanobeams, open up possibilities for developing highly sensitive sensors. This contribution investigates these specific structures. If local effects such as vacancy defects and bond breaking or making scenarios play a role in such structures, a numerical simulation must be carried out within the framework of molecular dynamics [1]. However, as this becomes inefficient for very large nanostructures, multiscale models are developed and used. Numerous multiscale models already exist for modelling rod- and beam-like nanostructures for static and dynamic problems. These include, for example, surface elasticity theory, nonlocal elasticity theory, nonlocal strain gradient theory, the Cauchy-Born rule, and the FE² method. Our focus is on the modelling of static problems using the FE² approach, based on the works of Miehe and Koch [2]. The homogenisation methods and the requirements for representative volume elements (RVE) are well established for continuous microstructures. This also applies to the context of structural elements such as beams on the macro scale (e.g. Klarmann et al. [3]). However, suitable approaches for linking atomistic micro-models with macro-level structural elements are still lacking. To address this, we propose a hierarchical multiscale model that integrates atomistic simulations on the micro-scale with structural elements, specifically Timoshenko beams, on the macro scale. The atomistic simulations are conducted within the framework of the finite element method [4][5], allowing for the incorporation of various types of atomic interactions and potentials. To ensure that the homogenised values are independent of the RVE length, new constraints are introduced. The multiscale model is validated against fully atomistic simulations and compared in terms of computational speed and implementation effort.
[1] D. J. Tildesley, M. P. Allen: Computer simulation of liquids, Clarendon Oxford (1987).
[2] C. Miehe, A. Koch: Computational micro-to-macro transition of discretized microstructures undergoing small strain, Archive of Applied Mechanics 72, 300–317 (2002).
[3] S. Klarmann, F. Gruttmann, S. Klinkel: Homogenization assumptions for coupled multiscale analysis of structural elements: beam kinematics. Comput Mech 65, 635–661 (2019).
[4] J. Wackerfuß: Molecular mechanics in the context of the finite element method, Int. J. Numer. Meth. Engng. 77, 969-997 (2009).
[5] J. Wackerfuß, F. Niederhöfer: Using finite element codes as a numerical platform to run molecular dynamics simulations, Computational Mechanics 63(2), 271–300 (2019).
The topic of this session is the analysis and modeling of turbulent non-reactive and reactive flows based on DNS, LES, RANS, and experiments. A special focus is on fundamentals in turbulence, turbulent reactive flows, turbulent multi-phase flows, turbulence of atmosphere, atmosphere/ocean interaction, modeling and simulation of complex turbulent flows, the interface of numerical algorithms, chemical and physical modeling, as well as high-performance computing with its application to turbulence.
Hydrogen's role in energy systems has become a central focus of global interest due to its potential as a zero-carbon fuel. Hydrogen provides a transformative alternative to hydrocarbons, exhibiting unique combustion properties. However, its high reactivity often leads to combustion instabilities, e.g., flame flashback or blowout, which increase the risk of combustor failure. Therefore, to mitigate these challenges, fuels such as methane, propane, and ammonia are added to the hydrogen-air mixtures, aiming to influence the laminar burning velocity (LBV), a critical property in all fuel types that significantly affects the combustion process. In this work, we aim to investigate the effects of these gaseous additives on the behavior of the laminar burning velocity and the formation of nitrogen oxide (NO_X) emissions. Updated detailed and reduced reaction mechanisms are utilized to ensure accurate representation of the chemical kinetics for the selected fuels. The simulations are performed using a one-dimensional freely-propagating adiabatic premixed flame (FPPF) model in Cantera, incorporating both kinetic and thermodynamic modeling. The results are validated against experimental data. The study examines a range of operating conditions, including variations in inlet pressure, temperature, and H₂/CH₄, H₂/C₃H₈, and H₂/NH₃ ratios. We have successfully predicted the hydrogen behavior with each fuel selected under the conditions considered for the analysis. The results reveal that among the fuel mixtures studied, the H₂/C₃H₈ blend exhibits the lowest LBV at high equivalence ratios, reaching 45 cm/s at ϕ = 3.8. Similarly, the flame temperature for this mixture follows the same trend, with a minimum value of 1485 K at ϕ = 3.8. NO_X emissions decrease with increasing equivalence ratio, with NO concentrations reaching a minimum of 0.35 ppm at ϕ = 3.8.
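A minimal Cantera sketch of this type of freely-propagating flame computation (mechanism file, blend ratio and conditions below are placeholders, not those used in the study) could look as follows:

    import cantera as ct

    # placeholder mechanism, blend and conditions -- not those of the study
    gas = ct.Solution("gri30.yaml")
    gas.TP = 300.0, ct.one_atm
    gas.set_equivalence_ratio(phi=1.0, fuel="H2:0.8, CH4:0.2", oxidizer="O2:1.0, N2:3.76")

    flame = ct.FreeFlame(gas, width=0.03)          # 1D freely-propagating premixed flame
    flame.set_refine_criteria(ratio=3, slope=0.06, curve=0.12)
    flame.solve(loglevel=0, auto=True)

    print("laminar burning velocity [m/s]:", flame.velocity[0])
    print("peak flame temperature [K]   :", flame.T.max())
    print("outlet NO mole fraction [ppm]:", flame.X[gas.species_index("NO"), -1] * 1e6)

Sweeping this setup over equivalence ratio, pressure, temperature and blend ratio reproduces the type of parameter study described above.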
Turbulent mixing is a ubiquitous phenomenon and plays an important role in our everyday life and numerous industrial applications. The numerical study of turbulent mixing processes poses a standing challenge, since it requires capturing all relevant length and time scales while still being computationally feasible. To resolve this dilemma, an efficient and full-scale resolving modeling approach for turbulent mixing, termed Hierarchical Parcel-Swapping (HiPS), was introduced by A.R. Kerstein [J. Stat. Phys. 153, 142-161 (2013)]. HiPS mimics the effects of turbulence on time-evolving, diffusive scalar fields by a stochastic, hierarchical swapping mechanism. The diffusive scalar fields are interpreted as a binary tree structure. Length scales decrease geometrically with increasing tree level, and corresponding time scales follow Kolmogorov's inertial range scaling. The state variables reside only at the base of the tree and are understood as fluid parcels. To model the effects of turbulent advection, sub-trees are swapped stochastically at rates determined by turbulent time scales. Mixing only takes place between adjacent fluid parcels at rates consistent with the prevailing diffusion time scales. The HiPS formulation is extended to consider multiple scalars with arbitrary diffusivities. In the talk, we will detail HiPS as a mixing model and put it into the context of the most common mixing models. Results for the scalar power spectra and scalar dissipation rate will be shown for varying Schmidt numbers. Additionally, particle dispersion is compared to experimental measurements. To demonstrate the efficiency and capabilities of HiPS, results of a full engine cycle simulation using HiPS as mixing model will be presented. Considering the reduced order and associated computational efficiency, HiPS is an attractive mixing model, which can be used as a subgrid model for coarse-grained flow simulations.
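The scaling of the hierarchy can be sketched in a few lines (illustrative parameter values; the actual HiPS implementation involves subtree swaps and parcel-level mixing that are not shown here):

    import numpy as np

    tau0, levels = 1.0e-2, 8                     # integral time scale and tree depth (illustrative)
    lengths = 2.0 ** -np.arange(levels)          # level length scales decrease geometrically
    taus = tau0 * lengths ** (2.0 / 3.0)         # Kolmogorov inertial-range scaling of eddy times
    rates = 1.0 / taus                           # swap rates per tree level

    rng = np.random.default_rng(0)
    level = rng.choice(levels, p=rates / rates.sum())   # level of the next swap event
    dt = rng.exponential(1.0 / rates.sum())             # waiting time until that event
    print(level, dt)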
The study examines the impact of geometric reduction and simplification of the swirl burner on the morphology of the lean flame and the NO emissions produced when ammonia is gradually added to fuel. Analyses were conducted in both two-dimensional and three-dimensional formats where the most commonly employed geometric reductions, axisymmetric swirl and periodicity, were subjected to the analysis and compared with the full geometry of a swirl burner featuring a 30-degree blade swirl outflow. The analysis has been performed on a swirl flame with a convergent-divergent nozzle, where the flow swirl was derived from a profile of the velocity vectors at the inlet. The simulation was conducted using the EDC combustion model and RANS RSM-Omega turbulence with Xiao and Okafor mechanisms, also considering heat transfer and radiation (DO). The ammonia share was increased gradually up to 25% by volume for a lean mixture with an equivalence ratio of 0.7.
The prediction of NO emissions was successful for the full combustion chamber geometry, with a slight over-prediction for the periodic system. The tested models demonstrated a high degree of accuracy in predicting the share of NO in the flue gas for fuel compositions containing 2.5% and 5% ammonia by volume. The NO emission results obtained for a fuel containing 5% ammonia were in good agreement with the experimental values, even for the 2D case. However, the results for a fuel containing 10% ammonia diverged from experimental values, indicating a significant influence of secondary flow effects on the flame shape and its emissions. The anticipated reduction in emissions was observed in the 3D analyses; however, the values were slightly overestimated in comparison to the experimental data due to the reaction kinetics mechanism employed. Additionally, there are uncertainties regarding the boundary conditions for the heat transfer of the chamber and these may potentially influence the temperature distribution and the final NO_X emissions.
In 1928, André Lévêque defended his PhD thesis in Paris. Its title was Les Lois de la Transmission de Chaleur par Convection - The Laws of Convective Heat Transfer - and it was published that same year in Annales des Mines, the famous French mining engineering journal. Lévêque's contribution was a new way to think about how heat moves across a thin layer of fluid close to a wall. Lévêque is sometimes credited with solving a thermal boundary-layer problem. This is not exactly true.
André Lévêque seems to have been the first to think about the transition from surface to freestream temperature across a very thin region close to the surface. He observed that in this region, the most important fluid velocities change linearly with normal distance from the surface. Lévêque's solution was specific to heat transfer into a Poiseuille flow. In this type of flow, fluid velocity is a function of normal distance from the wall only; it does not change with streamwise location.
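In modern notation, the argument reduces to a similarity solution: with a linear near-wall velocity $u \approx \gamma y$ and thermal diffusivity $\alpha$, the thin-layer energy equation and its solution read
\[
\gamma y\,\frac{\partial T}{\partial x} = \alpha\,\frac{\partial^{2} T}{\partial y^{2}},
\qquad
\eta = y\left(\frac{\gamma}{9\,\alpha\,x}\right)^{1/3},
\qquad
\frac{T - T_\infty}{T_w - T_\infty} = 1 - \frac{1}{\Gamma(4/3)}\int_0^{\eta} e^{-s^{3}}\,\mathrm{d}s ,
\]
so the wall heat flux decays like $x^{-1/3}$ in the streamwise direction.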
In 1953, Schuh showed how to apply this idea to boundary layers, modifying Lévêque's solution so that the wall tangent became a function of streamwise location. Kestin and Persen came to this idea independently, outlining with clarity the solution that Schlichting describes in Boundary-Layer Theory.
This solution, of the thermal boundary-layer equation for flows of large Prandtl number, appears to be Kestin and Persen’s, not Lévêque’s.
This session is devoted to the mathematical analysis of natural phenomena and engineering problems. In this area PDEs play a basic role. Therefore lectures discussing analytical aspects of PDE problems as well as problems in the Calculus of Variations are welcome.
In recent years, there has been a spike in interest in multiphase tissue growth models. Depending on the type of tissue, the velocity is linked to the pressure through the nonlocal Brinkman law or the local Darcy law. While both velocity-pressure relations have been studied in the literature separately, little emphasis has been placed on the fine relationship between them. In this talk, we report on several advances on localisation limits connecting both frameworks.
We start by demonstrating that the interplay between advection and diffusion in the incompressible porous media equation with diffusion -- a classical active scalar transport equation -- can lead to enhanced dissipation. Subsequently, we derive a scaling limit that perfectly balances these two physical mechanisms. The high degeneracy of the limiting equation prevents us from proving existence of weak solutions in the distributional sense. To address this challenge and finish the existence proof, we use the gradient flow structure of the equation to define weak solutions within a more robust "geometric" framework.
This talk is about a convergence result for the spatial discretization of a reaction-diffusion system. The approximation is based on a homogeneous lattice, where in each node a reaction-ODE system describes the evolution of the concentrations, and where the transport between the different lattice nodes is given by additional exchange reactions. Assuming detailed balance, this large coupled reaction system can be understood as a (generalized) gradient flow characterized by cosh-type dissipation potentials and the relative entropy. Sending the lattice width to zero, we show how the limit reaction-diffusion-PDE system can be recovered with variational methods and an energy-dissipation principle. The talk is based on joint work with Alexander Mielke and Artur Stephan.
A gradient system (X, ℰ, ℛ) consists of a state space X (a separable, reflexive Banach space), an energy functional ℰ : dom(ℰ) ⊂ X → ℝ ∪ {∞}, and a dissipation potential ℛ : X → [0, ∞), which is convex, lower semicontinuous, and satisfies ℛ(0) = 0. The associated gradient-flow equation is given by
0 ∈ ∂ ℛ(u'(t)) + ∇ ℰ(u(t)) or equivalently u'(t) ∈ ∂ ℛ*(−∇ ℰ(u(t))).
In this talk, we consider the case where the dual dissipation potential ℛ* is given by the sum ℛ* = ℛ₁* + ℛ₂* for two dissipation potentials ℛⱼ : Xⱼ → [0, ∞), where Xⱼ ⊂ X. This decomposition also leads to a splitting of the right-hand side of the combined gradient-flow equation:
u'(t) ∈ ∂ (ℛ₁* + ℛ₂*)(−∇ ℰ(u(t))) = ∂ ℛ₁*(−∇ ℰ(u(t))) + ∂ ℛ₂*(−∇ ℰ(u(t))).
This enables the construction of solutions via a split-step method.
To do this, let τ = T/N define a uniform partition of the interval [0,T]. Starting from an initial datum u₀ ∈ dom(ℰ), we define a piecewise constant time-discrete solution U_τ : [0,T] → dom(ℰ) ⊂ X₁ by setting U_τ(0) = u₀ and performing the Alternating Minimizing Movement, given by:
U_τ(t) = Uₖ¹ for t ∈ ((k−1)τ, (k−1/2)τ], U_τ(t) = Uₖ² for t ∈ ((k−1/2)τ, kτ],
where Uₖ¹ ∈ Argmin_{U ∈ X₁} { τ/2 ℛ₁(2/τ (U − Uₖ₋₁²)) + ℰ((k−1/2)τ, U) },
Uₖ² ∈ Argmin_{U ∈ X₂} { τ/2 ℛ₂(2/τ (U − Uₖ¹)) + ℰ(kτ, U) }.
In this talk, I will demonstrate how the curves U_τ converge to the solution of the combined gradient-flow equation (involving ℛ* = ℛ₁* + ℛ₂*) as N → ∞. The analysis relies on methods from the calculus of variations and the use of the energy-dissipation principle for gradient flows.
This talk is based on joint work with Alexander Mielke (Berlin) and Riccarda Rossi (Brescia).
Self-similar behavior is a well-studied phenomenon in extended systems. However, the consideration is often restricted to scalar problems having exact self-similar solutions (e.g. the porous medium equation) or to problems with a trivial behavior at infinity. In this talk, we study a reaction-diffusion system and other related dissipative systems on the whole real line with prescribed non-trivial limits at infinity to investigate their solutions' long-time behavior. The system under consideration has the special property that it possesses a continuum of constant solutions. By assuming that the solutions are in equilibria at infinity, we study the convergence towards so-called self-similar profiles. With this, we answer how the solutions mix the two stable asymptotic boundary values when time increases. The key idea is to rescale space and time into parabolic scaling variables and to derive energy-dissipation estimates for the relative Boltzmann entropy. In the original variables, these profiles correspond to asymptotically self-similar behavior describing the phenomenon of diffusive mixing of the different states at infinity.
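For the scalar diffusion prototype (the systems treated in the talk are more involved), the parabolic scaling variables take the form
\[
y = \frac{x}{\sqrt{1+t}}, \qquad \tau = \log(1+t), \qquad u(t,x) = U(\tau, y),
\]
which turns $u_t = u_{xx}$ into $U_\tau = U_{yy} + \tfrac{y}{2}\,U_y$; self-similar profiles then appear as steady states of the rescaled equation connecting the two prescribed limits as $y \to \pm\infty$.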
We present a thermodynamically consistent framework for reaction-diffusion systems modeling the evolution of a finite number of charged species in a temperature-dependent setting. Thermodynamical consistency guarantees, in particular, that the fundamental laws of charge and energy conservation hold, and that the total entropy is monotone as time evolves. This is achieved by starting from an abstract gradient flow system in Onsager form coupled to Poisson's equation and by specifying various model parameters subsequently. The main goal of the talk is the derivation of an entropy-entropy production (EEP) inequality for a class of electro-energy-reaction-diffusion systems. First of all, one has to introduce an appropriate relative entropy functional and to calculate the corresponding entropy production. Taking the complexity of these functionals into account, we consider a two-level semiconductor system for electrons and holes (keeping the electrostatic and energetic coupling). Assuming that some of the involved functions are bounded, we are able to establish an EEP inequality, which entails the exponential equilibration of appropriately bounded global solutions.
The session especially welcomes contributions to the following topics: uncertainty quantification; risk analysis and assessment; Bayesian methods in engineering; decision analysis; stochastic modeling (including spatio-temporal modeling); stochastic mechanics; stochastic algorithms and simulation; stochastic processes (including time series); resampling methods; stochastic networks. Applications to large scaled problems are encouraged.
We consider the inverse problem of recovering the shape of an object from measurements of how it scatters time-harmonic waves. To quantify the uncertainty, the problem is cast in a Bayesian framework and we discuss possible choices for the prior distribution for the shape. As a prototype of time-harmonic scattering, we consider the Helmholtz equation to relate the shape to the measurements and obtain the likelihood, from which the Bayesian posterior is obtained. We study the well-posedness of the inverse problem as well as numerical methods to sample from the posterior. In doing this, we focus on the role of the frequency of the incoming wave excitation on the result of the inversion. This is rigorously quantified in our well-posedness results via frequency-explicit stability estimates for the posterior, and observed numerically via simulations for different frequencies. This is joint work with Safiere Kuijpers (University of Groningen).
Chemical kinetic models pose particular challenges to Bayesian parameter estimation due to their high nonlinearity and sensitivity. Additionally, priors are largely uninformative, whereas experimental data have a comparatively high accuracy. As a consequence, posteriors can become highly complex and localized compared to the prior. As a possible route to address such problems, we explore an approach based on normalizing flows in conjunction with Quasi-Monte Carlo sampling. This approach involves learning a bijective neural network for parameter transformation such that uniform sampling in the transformed parameter space yields an efficient importance sampler for the Bayesian posterior. This is done in a sequential fashion, exploiting the bijectivity to draw samples from an existing approximation for learning the next layer in the network. Due to the high localization, directly learning the posterior will not be robust, and we will investigate the use of tempering to increase robustness. As a realistic showcase, we evaluate the performance of this method on an established empirical model for methanol synthesis over Cu-based catalysts, using both synthetic and experimental data. Our findings indicate a high potential of this approach for Bayesian inference of chemical kinetic models, but also its challenging nature due to, e.g., the high number of hyperparameters in the neural network which need to be tuned.
Robust optimization is a crucial technique for enabling design optimization in safety-critical applications, such as those found in the aerospace industry. The objective is to identify an optimal design that is relatively insensitive to uncertainties arising from uncertain model parameters, the model form itself, or unknown environmental conditions. These uncertainties introduce randomness into the objective function, and quantiles serve as appropriate robustness measures, representing an effective optimization target.
The main challenge for quantile-based robust optimization problems is the computational burden, since each objective function evaluation requires solving an uncertainty forward problem. Surrogate-based optimization techniques can alleviate this computational burden, and in particular, Bayesian optimization techniques based on Gaussian processes (GP) have been proposed in the literature.
In this study, we propose a mono-level approach for quantile-based robust optimization. Mono-level approaches construct a global surrogate in the augmented design space consisting of the design and stochastic domains. Such mono-level approaches have been applied in the context of reliability-based design optimization (RBDO) and for robust optimization in the case of a linear robustness measure. Since the computation of a quantile is a nonlinear operation, we rely on sampling-based techniques to compute quantile estimates. We construct an error indicator of the quantile by propagating the epistemic uncertainty of the GP surrogate to the quantile estimate. The quantile estimate and the error indicator are then used within an acquisition policy to select the next infill point in the design space. Our investigation shows that the accuracy of the sampling-based quantile estimate is crucial for the success of the acquisition policy. To allow for large sample sizes, we therefore incorporate an efficient posterior sampling approach introduced in [1], which has the advantage of linear scaling with increasing sample size.
We compare our novel approach to a state-of-the-art bi-level surrogate-based approach. The bi-level approach performs an outer-loop Bayesian optimization in the design space and requires a second-level inner-loop Bayesian optimization at each design point to compute the target quantiles. This approach has the disadvantage that a GP surrogate is built from scratch at each design point evaluation, neglecting the available information gained in previous iterations. For moderate dimensional stochastic domains, our proposed mono-level approach outperforms the bi-level approach in terms of model evaluations. We demonstrate the results on analytical models as well as on benchmarks from structural mechanics.
[1] Wilson et al.: Efficiently sampling functions from Gaussian process posteriors. PMLR, 2020.
Semiconductor technology plays an essential role in modern life today. Devices at the nanoscale, including sensors using semiconductor technology, have crucial real-world applications ranging from bio-sensing medical applications to energy conversion in solar cells as well as the generation of security keys in cyber-security. Modeling charge transport in semiconductor devices has been of great importance, especially when random effects of dopant atoms are taken into account. Furthermore, reconstructing the unknown doping profile introduces a challenging infinite-dimensional inverse problem in semiconductor devices. Here, first, an overview of the mathematical modeling of charge transport in semiconductor devices using PDE models incorporating uncertainty sources is given. Then, a PDE-based Bayesian inverse problem for semiconductors is formulated, in which the goal is to reconstruct the uncertain doping function. To solve this inverse problem, an infinite-dimensional Bayesian approach is presented to sample the posterior using a Markov chain Monte Carlo method with a preconditioned Crank-Nicolson proposal. Furthermore, the presented approach is enhanced by a physics-informed prior to tackle the challenges including the high-dimensionality, ill-posedness, and limited electrode measurement data. Some numerical results of the reconstruction from the voltage-current measurements are presented.
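The preconditioned Crank-Nicolson step used for posterior sampling can be sketched as follows (a finite-dimensional toy version with a Gaussian prior; the misfit Phi below is a placeholder, not the semiconductor forward model):

    import numpy as np

    def pcn_sampler(Phi, prior_sample, u0, beta=0.2, n_steps=5000, seed=0):
        # Preconditioned Crank-Nicolson MCMC: prior-reversible proposal,
        # acceptance depends on the (negative log-) likelihood Phi only.
        rng = np.random.default_rng(seed)
        u, phi_u, chain = u0, Phi(u0), []
        for _ in range(n_steps):
            v = np.sqrt(1.0 - beta**2) * u + beta * prior_sample(rng)   # pCN proposal
            phi_v = Phi(v)
            if np.log(rng.uniform()) < phi_u - phi_v:    # accept with min(1, exp(Phi(u) - Phi(v)))
                u, phi_u = v, phi_v
            chain.append(u.copy())
        return np.array(chain)

    # toy setup: 2D standard Gaussian prior, quadratic misfit around a fictitious measurement
    prior_sample = lambda rng: rng.standard_normal(2)
    Phi = lambda u: 0.5 * np.sum((u - np.array([1.0, -0.5]))**2) / 0.1**2
    samples = pcn_sampler(Phi, prior_sample, u0=np.zeros(2))
    print(samples.mean(axis=0))

The dimension-robust acceptance rule is what makes this proposal attractive for the infinite-dimensional doping reconstruction described above.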
Quantum computing harnesses the principles of quantum mechanics to tackle computational tasks that are infeasible for classical machines. Qubits, capable of existing in superposition and forming entangled states, enable quantum algorithms like Shor’s factoring method to surpass traditional techniques. This technology holds promise across numerous domains, including cryptography, pharmaceutical research, and artificial intelligence, addressing complex challenges beyond classical capabilities. Despite its potential, quantum computing is still in its infancy. Current devices in the Noisy Intermediate-Scale Quantum (NISQ) era suffer uncertainty due to limited qubit counts and high susceptibility to environmental disturbances and hardware flaws. While techniques such as surface codes can reduce errors, they also increase circuit complexity and computational demands, introducing additional error sources. Various strategies have been developed to model uncertainty in quantum systems, including worst-case fidelity analysis. However, these methods often rely on probabilistic assumptions about noise and tend to overlook critical output-level information such as observables. As a novel approach, quantum uncertainty shall be considered from the perspective of imprecise probabilities, where qubits are represented as ranges of possible states rather than fixed values. This view aligns well with possibility theory, which uses possibility and necessity measures to represent uncertainty without requiring precise probability distributions. This makes possibility theory particularly suitable for capturing qubit disturbances caused by noise and imprecision. To explore this perspective, a possible modeling framework for quantum algorithms is proposed and benchmarked against established methods. The framework is evaluated on various quantum algorithms, demonstrating its capacity to manage quantum noise and uncertainty effectively. It enables a detailed analysis of conditions under which quantum algorithms lose informational reliability, offering a unique way to reason about qubit behavior in uncertain environments. Additionally, the proposed framework supports the comparison of quantum algorithms regarding their robustness against noise. By considering both theoretical modeling and algorithmic performance, this approach highlights the potential of possibility theory as a tool for simplifying quantum system analysis, improving noise management, and advancing quantum computing reliability.
Optimization is the next natural step after simulation, with increasing importance in the future. The aim of this session is to provide the basis of a holistic overview of all areas of optimization. Thus, abstracts from both a theoretical and an applied perspective are welcome.
A number of interesting optimization problems are nonconvex and thus hard to optimize, particularly if the optimization variable is additionally high- or infinite-dimensional (e.g. a function on some domain). Often the nonconvexity is associated with some kind of discreteness or sparsity: For instance, the function to be optimized may only assume particular values or have sparse support. A typical strategy then is to instead solve a simpler, relaxed problem (for instance integer programs are often reduced to corresponding linear programs), which usually convexifies (part of) the optimization problem. Using the example of so-called branched transport, a nonconvex version of optimal transport, I will illustrate a few convexification concepts and associated numerical methods.
Motivated by the conditional gradient descent methods, a.k.a. the Frank-Wolfe algorithms in the smooth case, we present here an adaptation of these Frank-Wolfe methods for non-smooth problems. Needless to say, the smooth Frank-Wolfe algorithms have seen many applications in various fields of optimization and data science. We analysed the non-smooth adaptation of the Frank-Wolfe algorithms, called the Abs-Smooth Frank-Wolfe method. We prove primal-dual convergence rates for convex abs-smooth functions similar to those in the smooth setting. We also look into various factors that help in accelerating these convergence rates. The approach has now been implemented and made available as a Julia package. This package - AbsSmoothFrankWolfe.jl - was tested using various non-smooth benchmark problems and the convergence rates were verified.
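For orientation, the classical smooth Frank-Wolfe iteration over a feasible set with a linear minimization oracle looks as follows (this is the smooth baseline only, not the Abs-Smooth variant and not the AbsSmoothFrankWolfe.jl implementation):

    import numpy as np

    def frank_wolfe(grad, lmo, x0, n_iter=200):
        # Classical (smooth) Frank-Wolfe: linearize, call the LMO, take a convex combination step.
        x = x0.copy()
        for k in range(n_iter):
            s = lmo(grad(x))                 # vertex minimizing <grad f(x), s> over the feasible set
            gamma = 2.0 / (k + 2.0)          # standard open-loop step size
            x = (1.0 - gamma) * x + gamma * s
        return x

    # toy example: minimize ||x - b||^2 over the unit l1 ball
    b = np.array([0.8, -0.3, 0.1])
    grad = lambda x: 2.0 * (x - b)
    def lmo(g):
        s = np.zeros_like(g)
        i = np.argmax(np.abs(g))
        s[i] = -np.sign(g[i])                # signed coordinate vertex of the l1 ball
        return s

    print(frank_wolfe(grad, lmo, x0=np.zeros(3)))

The abs-smooth adaptation discussed in the talk replaces the linearization by a piecewise-linear model built from the abs-normal form.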
In a recent paper, the authors introduced a model for analyzing the strategic interactions of retail companies in the energy sector, where customers choose retail contracts over a longer period of time based on price and non-price characteristics. This framework is applicable to various energy markets, such as electricity, hydrogen, and natural gas [1]. In this talk, we extend this model by incorporating risk-averse customer behavior using the Conditional Value at Risk (CVaR) model. This extension introduces a maximum value function inside the objective function of the model, resulting in a non-smooth, non-linear optimization problem.
To solve this problem, we will utilize a solver called SALMIN, specifically designed for non-smooth, non-linear problems [2]. Towards the end of the talk, we will present initial numerical results from test problems, demonstrating the impact of our model extension.
[1]: Wiertz, A.-K., Walther, A., Zöttle, G. (2024). Strategic Retailers in the Energy Sector. (Under review).
[2]: Fiege, S., Walther, A., Griewank, A. (2019). An algorithm for nonsmooth optimization by successive piecewise linearization. Mathematical Programming, 177(1), 343-370.
Abs-smooth optimization problems consist of objective functions, and possibly constraints, that are built as finite compositions of smooth elementals and the absolute value function. These optimization problems can be written in the so-called abs-normal form by replacing all arguments of the absolute value by new variables and adding these relations as an additional constraint called switching equation to the problem. The linear independence kink qualification (LIKQ) plays an important role in the analysis of abs-smooth optimization problems in abs-normal form. In particular, provided that LIKQ holds, it is possible to derive optimality conditions for abs-smooth optimization problems that can be checked in polynomial time. The talk focuses on the question of how stringent the LIKQ assumption is. Using a generalization of the classical jet transversality theorem, it can be shown that the subset of problems that satisfy the LIKQ condition at all feasible points is dense and open with respect to the strong Whitney topology.
We propose a method for the optimization of discontinuous functions whose discontinuities are contained in lower-dimensional manifolds. Traditional methods construct a robust optimization problem by either sampling or smoothing indifferently over the entire parameter space. However, these methods suffer from exponentially increasing cost with increasing size of the parameter set. The key to our method is to approximate the globally smoothed objective function through a local smoothing function, which is applied in the vicinity of the discontinuity only. The function exploits the structure of the discontinuity set to dynamically detect the location and orientation of the discontinuity. Consequently, integration is restricted to a low-dimensional space, namely the one-dimensional line perpendicular to the discontinuity set. We show that this smoothing function is continuously differentiable on the parameter space and demonstrate the performance for the discussed setting. Furthermore, we demonstrate a significant reduction in computational cost when comparing our method with a globally smoothing approach. Additionally, we propose a method for the detection of discontinuities by tracking the minimal eigenvalue for problems in which the objective function relies on the evaluation of another energy minimization problem.
In all fields of application, the mathematical models are primarily based on differential equations. Hence, their numerical solution plays a fundamental role in numerical mathematics. This section mainly covers the construction and the behavior of numerical methods for differential equations, of ordinary as well as of partial differential type.
Elliptic partial differential equations are often formulated as a variational problem, more precisely as a minimum problem for a suitable energy. A discretization by conforming finite elements provides an upper bound of the energy. Similarly the discretization of the dual variational problem yields a lower bound. The difference of the two energies provides an a posteriori error bound without generic constants with respect to the energy norm. We show that a useful approximate solution of the dual problem is obtained by a cheap postprocessing of the finite element solution of the primal problem. The procedure is easily understood for the Poisson equation. It is only slightly more involved for other differential equations.
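For the Poisson model problem with homogeneous Dirichlet data, the mechanism can be sketched as follows (a standard computation, included here for orientation): with the primal energy J(v) = ½∫|∇v|² − ∫fv and any flux σ ∈ H(div) satisfying −div σ = f, one has

\[
  \tfrac12\,\|\nabla(u - u_h)\|_{L^2}^2
  \;=\; J(u_h) - J(u)
  \;\le\; J(u_h) - J^*(\sigma)
  \;=\; \tfrac12\,\|\nabla u_h - \sigma\|_{L^2}^2 ,
\]

so the computable difference of primal and dual energies bounds the energy error without generic constants; the postprocessing mentioned above serves to produce such an admissible flux σ cheaply.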
Goal-oriented error estimation is a powerful tool for efficient and reliable numerical simulations. It allows one to control the error for specific quantities of interest and thus enables more reliable and efficient simulations. To compute the numerical solution, we utilize the Virtual Element Method (VEM). The VEM can handle general polygonal meshes, which allows for an easy incorporation of hanging nodes that arise due to mesh refinement. In this talk, we present the first comprehensive framework for goal-oriented error estimation in VEM. We address two key challenges: First, the VEM stabilization term introduces additional error contributions that need to be estimated. To do so, the virtual shape functions need to be approximated inside the elements. Second, the standard techniques for approximating the exact adjoint solution fail in the VEM context, especially when exploiting VEM's key advantage of handling general polygonal meshes. We address these challenges by introducing efficient approximation techniques for virtual shape functions and introduce the Gauss-Point Reconstruction Method (GPRM) to approximate the exact adjoint solution. Through various numerical experiments, we validate the effectiveness of our proposed approaches and demonstrate their application to adaptive mesh refinement procedures. The results illustrate the capabilities as well as the limitations of the proposed framework.
In structural modeling, adaptive techniques based on a posteriori error estimation are well accepted for finite element discretizations of partial differential equations. Based on goal-oriented a posteriori error estimation, the quality of computational results is evaluated according to the physical significance of a specific quantity. In this contribution, several new methods of error representation are presented, using the errors of the primal and dual solutions to improve accuracy and efficiency in a proposed adaptive framework of goal-oriented a posteriori error estimation. These error representations are derived from different adjoint-based error models for the governing equations of an elasto-plastic primal problem and related local and global variational forms. The framework generates a balanced mesh consisting of fine, medium and coarse elements for accurate results, avoiding a numerically expensive simulation with only fine elements. The effectiveness of the different proposed error representations is illustrated by numerical examples of a perforated sheet and a CT specimen for adaptive mesh refinement, leading to an effective reduction in computational effort. Furthermore, the results obtained from the new error representations are compared with those obtained from error representation methods in the literature.
References
[1] A. T. Simeu and R. Mahnken: Error representations for goal-oriented a posteriori error estimation in elasto-plasticity with applications to mesh adaptivity, Engineering Computations, EC-12-2023-0975.R1, (2024).
[2] R. Mahnken and A. T. Simeu: Downwind and upwind approximations for primal and dual problems of elasto-plasticity with Prandtl–Reuss type material laws, Computer Methods in Applied Mechanics and Engineering, Vol. 432, 117277, (2024).
This talk presents a quantum algorithm for the solution of prototypical second-order linear elliptic partial differential equations discretized by d-linear finite elements on Cartesian grids of a bounded d-dimensional domain. An essential step in the construction is a BPX preconditioner, which transforms the linear system into a sufficiently well-conditioned one, making it amenable to quantum computation. We provide a constructive proof demonstrating that, for any fixed dimension, our quantum algorithm can compute suitable functionals of the solution to a given tolerance tol with an optimal complexity inversely proportional to tol up to logarithmic terms, significantly improving over existing approaches. Notably, this approach does not rely on regularity of the solution and achieves quantum advantage over classical solvers in two dimensions, whereas prior quantum methods required at least four dimensions for asymptotic benefits. We further detail the design and implementation of a quantum circuit capable of executing our algorithm, present simulator results, and report numerical experiments on current quantum hardware, confirming the feasibility of preconditioned finite element methods for near-term quantum computing.
Solutions to second-order partial differential equations (PDEs) in nondivergence form are, in general, difficult to approximate by finite element methods due to the lack of a variational formulation. In such cases, minimal residual methods may be the method of choice due to their wide accessibility. The residual considered here stems from the Alexandrov–Bakelman–Pucci maximum principle for the Pucci extremal operators. The minimization of this residual in suitable finite element spaces leads to a sequence of discrete approximations that converges uniformly to the exact strong solution, provided the PDE satisfies further assumptions. Since only local regularity is required, the domain is allowed to be non-convex and non-smooth.
We consider a new approach to low-rank approximation of solutions to parameter-dependent PDEs that combines a representation in hierarchical tensor format with sparse polynomial expansions. We construct a low-rank adaptive Galerkin method for parametric elliptic problems that uses a tensor soft thresholding operation for rank reduction and discretization refinement based on lower-dimensional projected quantities. Unlike existing adaptive low-rank schemes, we obtain near-optimal ranks and discretizations without coarsening the iterates. In particular, for parametric problems with an anisotropic dependence on many variables, the new method leads to improved performance compared to existing adaptive tensor approximations that separate all variables into different tensor modes. Numerical experiments illustrate the effectiveness of the scheme.
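Soft thresholding as a rank-reduction device can be illustrated in its simplest matrix form (a sketch only; the actual method applies an analogous operation in the hierarchical tensor format):

import numpy as np

def soft_threshold_svd(A, tau):
    """Shrink singular values by tau and drop those that become zero,
    yielding a low-rank approximation of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    r = int(np.count_nonzero(s_shrunk))        # resulting rank
    return (U[:, :r] * s_shrunk[:r]) @ Vt[:r, :], r

A = np.random.default_rng(0).standard_normal((40, 30))
A_lr, rank = soft_threshold_svd(A, tau=2.0)
print(rank, np.linalg.norm(A - A_lr))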
In this session, novel developments devoted to optimization and optimal control problems governed by ordinary or partial differential equations will be discussed. The focus is on theoretical investigations, numerical analysis, and algorithmic issues as well as on applications.
Optimal control problems with the control variable appearing linearly occur in many applications. However, this setting poses several challenges, from the analytical as well as from the numerical point of view. The optimal control can be of bang-bang or singular type, or even a combination of both is possible.
In the case of ordinary differential equations (ODE), the situation is thoroughly investigated. Specifically, for pure bang-bang controls, second order sufficient optimality conditions have been developed which are based on the so-called induced optimization problem. This method of optimizing the switching times gives the opportunity to prove optimality in a finite-dimensional manner and, furthermore, provides a convenient numerical solution method. For controls concatenated by bang-bang and singular arcs, however, there are still certain gaps concerning the theory and numerical verification of second order sufficient conditions in this context.
For constraints given by partial differential equations (PDE), the situation is much more complicated and there are still many open questions. Some results on sufficient conditions for bang-bang controls have been published. Numerical examples of bang-bang controls can also be found in the literature, but optimality could be shown only in very few cases. Recently, an example with a bang-singular control has been published where optimality has been conjectured by numerical arguments.
In this talk, we consider optimal control problems with the control variable appearing linearly subject to partial differential equations, particularly for boundary controls. We are interested in numerical methods as well as in optimality conditions. We will extend the induced optimization problem technique including the numerical method of arc parameterization from the ODE to this PDE setting. Necessary as well as sufficient optimality conditions therein are developed and compared to known conditions for the control problem. Connections to the ODE theory will be discussed by semi-discretization of the PDE in space. The results will be illustrated with numerical examples.
In this talk, we consider Newton’s method for finding zeros of nonlinear mappings from a manifold X into a vector bundle E. This setting requires a connection on E to render the Newton equation well defined, and a retraction on X is needed to compute a Newton update. As applications we will discuss variational problems involving mappings between manifolds, and, in particular, the numerical computation of geodesics under force fields.
We explore ideas drawn from discrete optimization and domain decomposition to solve necessary optimality conditions given by Pontryagin's principle, specifically when applied to non-convex optimal control problems. By leveraging these concepts, we aim to identify potential advantages compared to classical solution approaches and demonstrate practical applications for mixed-integer optimal control problems. Notably, these types of problems frequently arise, for example, in natural gas or hydrogen transport networks utilizing pipelines. We present convergence results for linear-quadratic problems with mixed-integer constraints and discuss extensions towards a broader class of objectives and system dynamics.
The talk is concerned with the numerical analysis of the Beckmann problem of optimal transport. We apply a quadratic regularization of the Beckmann problem and discretize the regularized problem by means of Raviart–Thomas finite elements. Moreover, we provide a priori estimates of the regularization and discretization error for the minimal objective value. Together with H²-regularity estimates for the regularized solution, these estimates allow an error balancing resulting in an optimal coupling of regularization parameter and mesh size. Numerical tests confirm the theoretical findings.
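For orientation, the quadratically regularized Beckmann problem can be written schematically as (marginals μ and ν, regularization parameter ε > 0; sign and boundary conventions may differ from those used in the talk)

\[
  \min_{\sigma}\; \int_\Omega |\sigma|\,dx \;+\; \frac{\varepsilon}{2} \int_\Omega |\sigma|^2\,dx
  \quad\text{s.t.}\quad
  \operatorname{div}\sigma = \mu - \nu \ \text{in } \Omega, \qquad \sigma\cdot n = 0 \ \text{on } \partial\Omega,
\]

for which Raviart–Thomas elements provide a natural H(div)-conforming discretization of the flux σ.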
This talk deals with an optimal control problem where the state variable is given as a parametrized balanced viscosity solution of a rate-independent system. Under certain assumptions on the data, one can prove the existence of globally optimal solutions for external loads in H¹(0,T). Moreover, we investigate the approximability of optimal solutions by viscous regularized problems. However, the latter requires the existence of a continuous solution z̃ of the initial problem, since the approximating sequence is constructed not only via viscous regularization but also with an additional penalty term in the energy depending on z̃, which can be interpreted as a part of the external load ℓ. The analysis, however, is based on the fact that ℓ ∈ H¹(0,T). Therefore, in order to weaken this assumption and allow for jumps of z̃, one has to deal with a more general definition in which external forces ℓ ∈ BV(0,T) are included. However, already existing concepts are not suitable in this context of application, so that we introduce a new solution concept.
Dynamics and control is an interdisciplinary section which in particular addresses mathematical systems theory and control engineering. The contributions to this section are also concerned with the mathematical understanding and design of controllers which appear in actual applications.
Optimal feedback control for nonlinear systems is a powerful tool with applications in engineering, physics, and many other fields. However, a significant drawback of this approach is that the numerical treatment of the resulting nonlinear first-order partial differential equation—the Hamilton-Jacobi-Bellman (HJB) equation—can be challenging. In this talk, we will show that the HJB equation is linked to a nonlinear operator equation very similar to the Riccati equation.
To establish this connection, we define weighted Lp-spaces and develop a theory rooted in the Koopman operator, which generalizes many concepts known from linear quadratic control problems.
We then demonstrate that the HJB equation can be formulated as a minimization problem over a set of nuclear operators, where the solution is characterized by a nonlinear operator equation analogous to the Riccati equation. Furthermore, we show that policy iteration can be interpreted as a specific method for solving this operator equation. However, this method has some unfavorable properties, which we address by introducing a modification.
We believe these results may pave the way for convergence proofs of the modified policy iteration or even offer an alternative theory to viscosity solutions. Finally, we present numerical experiments that illustrate the theoretical properties derived in this work.
Generalizations and variations of the fundamental lemma by Willems et al. are an active topic of recent research. In this note, we explore and formalize the links between kernel regression and known nonlinear extensions of the fundamental lemma. Applying a transformation to the usual linear equation in Hankel matrices, we arrive at an alternative implicit representation of the system trajectories while keeping the requirements on persistency of excitation. We show that the new representation is equivalent to the solution of a specific kernel regression problem. We explore the possible structures of the underlying kernel as well as the system classes to which they correspond.
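For readers unfamiliar with the data-driven setting, the basic object is the Hankel matrix of a recorded trajectory; a minimal construction is sketched below (hypothetical data, scalar signal for simplicity):

import numpy as np

def hankel_matrix(w, L):
    """Depth-L Hankel matrix of a trajectory w = (w_0, ..., w_{T-1}).
    Column j contains the window (w_j, ..., w_{j+L-1})."""
    T = len(w)
    return np.column_stack([w[j:j + L] for j in range(T - L + 1)])

# In the fundamental lemma, persistently exciting data make the columns
# of this matrix span all length-L trajectories of the (linear) system.
w = np.sin(0.3 * np.arange(20))        # hypothetical recorded signal
H = hankel_matrix(w, L=5)
print(H.shape)                          # (5, 16)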
Nowadays, the turnpike phenomenon is a well-known concept in optimal control, which has been the subject of several studies over the last decade. This notion, first introduced in economics, refers to the particular structure of certain solutions that remain close to a turnpike most of the time except at the beginning and at the end. Most recent works on turnpikes in optimal control focus on steady-state turnpikes, i.e. on problems where the turnpike can be understood as the steady-state equilibrium of infinite-horizon optimal solutions. In this context, one of the most important results is the exponential turnpike property, which quantifies the exponential convergence of the solution to the steady-state turnpike. However, it is not always possible to have a static turnpike, especially for systems with symmetries, which we are interested in. In this presentation, we extend the well-known exponential turnpike to the exponential trim turnpike for symmetric optimal control problems. The main steps of this study are the following. First, we use a geometric reduction procedure to obtain from the initial problem a reduced one without symmetries. Then we establish, for this reduced problem, the exponential turnpike property. Finally, we use the group symmetry to recover the exponential trim turnpike for the full system. We present some examples to illustrate our result.
Addressing optimal control problems for transport-dominated partial differential equations (PDEs) can be computationally intensive, particularly for high-dimensional systems. To mitigate this, we focus on developing reduced-order models that can serve as surrogates for the full PDE system in solving these problems. Earlier works in this regard have explored the use of proper orthogonal decomposition (POD) in an optimal control context. In our work, we investigate the shifted proper orthogonal decomposition (sPOD) method, which is well-suited for capturing high-fidelity, low-dimensional representations of transport-dominated phenomena.
We propose two frameworks: one involves constructing the reduced-order model first and then optimizing the reduced system, while the other optimizes the original PDE system first, with the reduced-order model applied to the resulting optimality system. A 1D linear advection equation is used as a test case to evaluate the computational performance of the shifted POD method compared to conventional approaches, such as the standard POD, when employed as surrogates in a backtracking line search.
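As a point of reference, the standard POD basis is obtained from an SVD of the snapshot matrix; a minimal sketch is given below (the shifted POD additionally transports the snapshots along the dominant wave speeds before this step, which is not reproduced here):

import numpy as np

def pod_basis(snapshots, r):
    """snapshots: array of shape (n_dof, n_snapshots); returns the first
    r left singular vectors as the POD basis plus all singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# Hypothetical snapshot matrix of a 1D advected pulse.
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack([np.exp(-200.0 * (x - 0.2 - 0.01 * k) ** 2)
                         for k in range(50)])
basis, sv = pod_basis(snaps, r=10)
print(basis.shape, sv[:5])

The slow singular value decay of such advected snapshots is precisely the difficulty that motivates the shifted variant.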
In this work, we propose a new framework to treat optimal control problems with second order control systems. Such control systems are usually encountered in mechanics in the form of Euler-Lagrange equations with forcing. The new approach is based on a reformulation of the optimal control problem as a regular Lagrangian variational problem. Such a reformulation is possible because of the second order form of the control system. Classical methods of optimal control theory lead to a non-regular Hamiltonian formulation of optimality conditions. Applying variational methods to the new Lagrangian setting, we obtain the optimality conditions in a new form. We show equivalence between the new conditions and the classical ones and provide a geometrical characterization of the relationships behind the two settings. The proposed framework allows for a deeper understanding of the structure of optimal control problems with second order system constraints and permits the use of variational integrators for the discretization of the optimal control problems, preserving the geometrical structure of optimal solutions.
In optimal control, Pontryagin's maximum principle provides necessary conditions for optimality of a solution. Generally, however, solutions satisfying these conditions cannot be found analytically, and instead we need to rely on numerical methods. The convergence rate of the resulting approximations is not the only important measure of the quality of the method. The preservation of qualitative features such as conservation laws is also crucial. In an earlier work, we proposed a new way of stating optimal control problems for second order differential equations as a Lagrangian system in the space of states and costates. This provides a novel way of deriving discrete necessary conditions for optimality by discretising these new Lagrangians and applying a variational principle. In this work, we investigate these discrete new Lagrangians and the resulting discrete optimality conditions, which are symplectic by construction. A comparison with standard approaches is performed, concerning e.g. conservation of symmetries, by solving, among others, the low-thrust orbital transfer problem.
Over the last decade, mathematics has become the cornerstone of signal and image processing, ranging from various methods for signal reconstruction and the modelling of imaging modalities over its classical disciplines of compression, denoising, segmentation, and registration to feature extraction. The methodologies used include such diverse fields as harmonic analysis, inverse problems, variational analysis, mathematical statistics, partial differential equations, optimization, approximation theory and sampling theory. The aim of this section is to foster interdisciplinary collaboration and the development of new directions in mathematical signal and image processing spawned from the interaction of various mathematical communities.
We revisit the so-called exit wave reconstruction problem in the variational setting. Here, exit wave reconstruction means reconstructing the complex-valued electron wave in a transmission electron microscope (TEM) right before it passes the objective lens, i.e., the exit wave, from a focus series of real-valued TEM images. This is a non-linear inverse problem that is a variant of the well-known phase retrieval problem. First, we discuss a classical variational approach to this setting. Here, existence of minimizers can be established in the functional space setting, and numerical minimization is usually done with gradient descent based approaches. Applying the proximal gradient algorithm to a specific simplified version of this problem leads to an iterative algorithm well suited for deep unfolding. The latter is a technique that recasts iterative algorithms into neural networks and allows to introduce data-driven learning to a large class of classical iterative model-based approaches, creating new hybrid methods. By extending a proof technique proposed by Behboodi, Rauhut and Schnoor for a similar unfolded network with a linear forward operator to our non-linear forward operator, we can show a generalization error bound for the resulting network. A key ingredient for the extension to our non-linear setting is to ensure and exploit firm non-expansiveness.
In this talk, I will present recent results regarding super-resolution properties of infinite-width shallow neural networks.
It is well-known that the training of shallow neural networks can be convexified by lifting the weights space to measures defined on a sphere. This gives rise to a convex optimization problem on measures, that, depending on the chosen activation function, exhibits different degrees of smoothness and sparsity. While representer theorems have been established, showing the existence of at least one sparse solution, much less is known regarding the exact reconstruction and the sparse stability properties of minimizers.
After reviewing the super-resolution results available in the literature, I will focus on the ReLU and Leaky ReLU activation functions, demonstrating that in this case, the minimizers are always sparse (achieving exact reconstruction). Specifically, these minimizers are composed of finite linear combinations of Dirac deltas, whose support is determined by the behavior of the dual-certificate. Next, I will discuss the stability of sparsity for minimizers with respect to label perturbations and modifications of the regularization parameters. I will show that, for small perturbations and under suitable assumptions on the input, the perturbed minimizer remains sparse. Additionally, it is possible to estimate the locations of the Dirac deltas based on the perturbation level. Such results are achieved by a careful analysis of the dual-certificate of the minimizer for Dirac deltas that either belong to the decision boundary or lie in the interior of a decision region determined by the inputs.
Spontaneous formation of patterns by reaction-diffusion systems was discovered by Alan Turing in the 1950s and has been investigated from many different angles since then. We are interested in utilising this phenomenon in the context of texture analysis and synthesis. As a first task, we investigate the estimation of parameters of a pattern-generating reaction-diffusion system of Gray-Scott type from a single pattern representing the steady state distribution of one reactant. Thereby we are able to describe a texture, albeit from a limited class so far, by a set of numerical parameters. Unlike existing quantitative texture descriptors, these parameters allow reconstruction of a visually similar texture. We consider this as a step towards a novel class of generative texture descriptors capable of closing the loop between texture analysis and synthesis, which would constitute a new powerful tool for the processing of textured images. We demonstrate our approach on synthetic and real-world textures. We give an outlook with preliminary results on a generalisation from parameter to model identification as well as decomposition of complex textures into simpler components.
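For concreteness, one explicit time step of a Gray-Scott system on a periodic grid looks as follows (a standard textbook discretization; the parameter values are illustrative only and not those estimated in our work):

import numpy as np

def laplacian(z):
    """5-point Laplacian with periodic boundary conditions (grid spacing 1)."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0)
            + np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of u_t = Du*Lap(u) - u v^2 + F (1 - u),
    v_t = Dv*Lap(v) + u v^2 - (F + k) v."""
    uvv = u * v * v
    u_new = u + dt * (Du * laplacian(u) - uvv + F * (1.0 - u))
    v_new = v + dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u_new, v_new

# Iterating such steps from a perturbed homogeneous state produces the
# spot/stripe patterns whose parameters we estimate from a single image.
u = np.ones((64, 64)); v = np.zeros((64, 64))
v[28:36, 28:36] = 0.5
for _ in range(100):
    u, v = gray_scott_step(u, v)
print(u.min(), u.max())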
We introduce a class of processing architectures for node features on a graph that are equivariant with respect to a local G-action, where G is a group. A local group action acts on feature vectors at graph nodes via group elements, which need not be equal for all graph nodes. Local symmetries are already considered in several works in the context of data science, ranging from color image processing to equivariant neural networks. Locally equivariant processing architectures provide an attractive direction for research because they promise to produce architectures robust to local perturbations of inputs. Our contributions to the design of locally equivariant processing architectures are the following:
Our devised architectures arise from differential operators acting on sections of specific vector bundles. Based on generalized Laplacian operators, we construct a bundle scale space representation of input data, akin to scale space methods in PDE image processing. These bundle scale space representations extend classical Gaussian scale-space representations in two ways. First, they inherently respect local symmetries. Second, the generalized Laplacians -- that generate the bundle scale spaces -- possess non-trivial null-spaces, which enhances expressivity. The generalized Laplacians that induce our novel architectures are parametrized by geometric quantities (Riemannian metrics and connection 1-forms) associated to the graph. By referring to methods from lattice gauge theory and vector diffusion maps we describe how to extract implementable discretizations of bundle scale-space representations, while preserving equivariance under local group actions. Furthermore, we show how to parametrize our model efficiently, resulting in architectures similar to message passing networks on graphs. This enables us to extend the usefulness of our methods via machine learning.
We study multilevel techniques, commonly used in PDE multigrid literature, to solve structured optimization problems. For a given hierarchy of levels, we formulate a coarse model that approximates the problem at each level and provides a descent direction for the fine-grid objective using fewer variables. Unlike common algebraic approaches, we assume the objective function and its gradient can be evaluated at each level. Under the assumptions of strong convexity and gradient L-smoothness, we analyze convergence and extend the method to box-constrained optimization. Large-scale numerical experiments on a discrete tomography (DT) problem show that the multilevel approach converges rapidly when far from the solution and performs competitively with state-of-the-art methods.
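A two-level version of the coarse-correction step described above can be sketched as follows (hypothetical restriction and prolongation operators R and P; the actual method uses a full hierarchy together with line searches and box constraints):

import numpy as np

def two_level_step(x, grad_f, grad_fc, P, R, alpha=1.0):
    """One coarse correction: build a first-order coherent coarse model
    around R @ x, take a coarse gradient step, prolongate the correction."""
    xc0 = R @ x
    # Gradient correction term making the coarse model first-order coherent.
    kappa = R @ grad_f(x) - grad_fc(xc0)
    coarse_grad = lambda xc: grad_fc(xc) + kappa
    xc = xc0 - alpha * coarse_grad(xc0)        # one coarse gradient step
    return x + P @ (xc - xc0)                  # prolongated descent direction

# Hypothetical quadratic example with simple averaging/interpolation.
n, nc = 8, 4
R = np.repeat(np.eye(nc), 2, axis=1) / 2.0     # averaging restriction
P = 2.0 * R.T                                   # corresponding prolongation
grad_f  = lambda x: x - 1.0
grad_fc = lambda xc: xc - 1.0
print(two_level_step(np.zeros(n), grad_f, grad_fc, P, R))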
Scientific Computing is concerned with the efficient numerical solution of mathematical models from both science and engineering. The field covers a wide range of topics: from mathematical modeling over the development, analysis and efficient implementation of numerical methods and algorithms to software and finally application for the solution of complex real-world problems on modern computing systems. This interdisciplinary field combines approaches from applied mathematics, computer science and a wide range of applications in which in-silico experiments play an increasingly important role.
Calculating exact derivatives is a cornerstone of scientific computing. It enables deep learning, solving nonlinear partial differential equations, and asset optimization in finance. For real-world applications, the calculation of derivatives poses challenges due to the complexity of the algorithms that represent the underlying function. Algorithmic Differentiation (AD) addresses this challenge.
Despite its advantages, applying AD naively to numerical algorithms such as fixed-point iterations can lead to inefficiencies and incorrect derivatives. In this case, knowledge of the problem structure should be leveraged. Recently, Walther and Sander extended the established results for first-order derivatives of fixed-point iterations to second-order derivatives using the two-phase approach.
In addition to a brief introduction to AD, we present the corresponding theoretical results in this talk. We further explain the integration of the theoretical results into the C++ AD tool ADOL-C and demonstrate their application to a geometric finite element approximation.
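To recall the structure being exploited: if x*(p) denotes the limit of a contractive fixed-point iteration x_{k+1} = g(x_k, p), differentiating the fixed-point relation x* = g(x*, p) gives the standard implicit relation

\[
  \frac{dx^*}{dp} \;=\; \big(I - \partial_x g(x^*, p)\big)^{-1}\, \partial_p g(x^*, p),
\]

and the two-phase approach organizes the evaluation of such expressions; the second-order results mentioned above follow by differentiating once more.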
Modern scientific computing experiments often require the combination of multiple numerical packages, which are implemented in different programming languages. Thus, computational scientists face the task of combining these solvers by removing the inter-language barrier via bindings or inter-process communication. Also, in cases where packages solving the same numerical problem (such as integration or optimization) need to be compared, the scientists face additional workload in adapting function calls to a particular package. Such activities are non-trivial and error-prone in terms of the required programming and testing efforts. Moreover, this leads to duplicated work between different research groups, as usually such codes are used only internally and never published.
To address these issues, we work on the development of Open interfaces for Scientific Computing. This project has two main aspects: first, it simplifies crossing the inter-language barrier by automating data type conversion and function calls between different languages, and second, it provides generic interfaces for typical numerical problems. Therefore, users of Open Interfaces can switch more easily from one numerical implementation to another (which can be written in different languages and have different function signatures).
Our overall goal with this project is to improve interoperability and reproducibility in Scientific Computing by providing to the community a set of generic interfaces for common numerical problems, with numerical implementations adapted to these interfaces or even written against them, due to potential widespread use in the community.
This work is part of the Germany-wide project Mathematical Research Data Initiative (MaRDI) of the National Research Data Infrastructure (Nationale Forschungsdateninfrastruktur, NFDI).
PDE-constrained optimization problems arise in a wide range of applications, and therefore the numerical analysis of these problems is a highly relevant research topic. In this talk, we consider a time-dependent optimization problem with a tracking-type objective function and standard regularization. The equality constraint is a parabolic linear time-dependent PDE. We also consider additional box constraints on the control variable.
There are several approaches to solving these optimization problems: one can first discretize and then optimize, or first optimize and then discretize and solve the resulting system. This is usually done by applying some time-stepping scheme and using a different discretization in space. We follow the approach of first optimizing and then discretizing the problem. We utilize a space-time variational formulation of the problem and apply a semi-smooth Newton method in the occurring Lebesgue-Bochner spaces. For the discretization, a simultaneous finite element discretization in time and space is used.
At last year's annual GAMM conference, we presented the theory behind the application of the semi-smooth Newton method in our setting. This year we want to take a closer look at the discretization and the implementation of this approach. The discretization of the Newton system in each iteration leads to unsymmetric saddle point systems. If the discretization is done naively, this leads to very large sparse systems which quickly become infeasible to solve. To mitigate this, we use a tensor discretization approach which greatly reduces the storage needed for the discretized systems.
In this presentation we want to show:
- how to develop an iterative matrix-free solver for the arising Newton system,
- the discretization leading to Kronecker structures,
- how the Kronecker structures arising from the chosen discretization can be exploited for an efficient implementation (a sketch of the underlying identity follows below),
- the possibilities for parallelization using this approach,
- numerical results for large-scale systems.
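The identity that makes Kronecker-structured systems cheap to apply in a matrix-free fashion is sketched below (illustrative matrices only):

import numpy as np

# (A kron B) vec(X) = vec(B X A^T): the Kronecker product never needs
# to be formed explicitly to evaluate matrix-vector products.
rng = np.random.default_rng(1)
A, B = rng.standard_normal((6, 6)), rng.standard_normal((8, 8))
X = rng.standard_normal((8, 6))               # vec uses column-major order

lhs = np.kron(A, B) @ X.flatten(order="F")
rhs = (B @ X @ A.T).flatten(order="F")
print(np.allclose(lhs, rhs))                  # True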
We investigate the applicability of automatically generated code using AceGen to compute local quadrature point data in finite-strain hyperelasticity. We compare the matrix-free implementation of hand-written and AceGen-generated codes of compressible neo-Hookean models. To make the comparison more insightful, a traditional sparse matrix implementation is also included. Two strategies are examined: one that computes all data on the fly, and another that caches intermediate results. The obtained results are compared with the hand-written code taken from existing literature. Our results show that there is no overhead caused by automatic code generation. Moreover, the AceGen-generated code achieves superior performance in matrix-free computations. We also include the results for another hyperelastic model as an exemplary application.
We present the toolkit IFDIFF [1] for integration and sensitivity generation in parameterized implicitly (state-dependent) switched ODEs whose right-hand side is given as Matlab code containing non-differentiable operators (max, abs, etc.) and conditionals (if).
Naive implementations using IF-THEN-ELSE branching give unreliable simulation results without warning, as switching events are undetectable by standard integrators. The widespread belief that this can be countered using more stringent integration tolerances is wrong: we give a simple example where the integrator's error estimation always delivers zero. Correct treatment of switched systems requires elaborate formulation of switching functions and tailored integrators, placing high mathematical demands on modelers. Even small model changes often imply considerable reformulation effort. Further, n switches generate up to 2^n possible program flows and switching functions, rendering a priori formulations infeasible already in medium-sized models.
IFDIFF programmatically handles switching events, auto-generating only required switching functions. It determines switching times up to machine precision, thus ensuring accurate simulation and sensitivity results. Transparently extending the Matlab integrators (ode45, ode15s, etc.), IFDIFF is applicable to existing code containing state- and parameter-dependent conditionals with only minor code adjustments, thus enabling fast prototyping and relieving modelers of mathematical-technical effort.
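The underlying issue is generic and not tied to Matlab; purely for illustration, the following SciPy sketch contrasts a naive branching right-hand side with explicit event detection for a single, known switching function (IFDIFF automates exactly this bookkeeping, including the generation of the switching functions, for arbitrarily nested conditionals):

import numpy as np
from scipy.integrate import solve_ivp

# ODE with a state-dependent switch at y = 1: dy/dt = 1 if y < 1 else -0.5*y.
def rhs(t, y):
    return [1.0] if y[0] < 1.0 else [-0.5 * y[0]]

# Naive integration: the integrator steps blindly over the kink, which can
# silently degrade accuracy and makes sensitivities unreliable.
naive = solve_ivp(rhs, (0.0, 3.0), [0.0], rtol=1e-10)

# Event-based integration: stop at the switching surface, then restart
# with the other branch, so the switching time is located accurately.
switch = lambda t, y: y[0] - 1.0
switch.terminal, switch.direction = True, 1
part1 = solve_ivp(lambda t, y: [1.0], (0.0, 3.0), [0.0], events=switch)
t_sw = part1.t_events[0][0]
part2 = solve_ivp(lambda t, y: [-0.5 * y[0]], (t_sw, 3.0), part1.y[:, -1])
print(naive.y[0, -1], t_sw, part2.y[0, -1])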
In this talk, we will discuss Filippov extensions that allow IFDIFF to detect and cope with certain Filippov behavior during integration. Automatically switching into sliding mode, IFDIFF determines a suitable convex combination of multiple right-hand sides and integrates along the switching manifold.
[1] IFDIFF - A Matlab Toolkit for ODEs with State-Dependent Switches
https://andreassommer.github.io/ifdiff/
During the last fifteen years, computer-assisted proofs have led to proofs of property (T) for several classes of groups. This property is in turn related to expanders, that is, infinite families of sparse graphs with a growing number of vertices and good connectivity properties. The quality of the connectivity properties can be further estimated by means of Kazhdan constants. The common approach in such computer-assisted proofs is the use of semidefinite programming. One can characterize property (T) by positivity of specific matrices originating from the group rings associated to groups. Due to the works of N. Ozawa, and recently U. Bader and P. W. Nowak, one can distinguish two alternative conditions for property (T): the first one making use of generators of a group and the second one involving a group presentation. In both approaches one aims for an explicit estimate of the Kazhdan constant. One achieves this by estimating the spectral gap of a specific Laplace operator, defined in the group ring setting. In this talk, I am going to prove the estimates for the spectral gaps of the Laplace operators coming from the presentation of the symplectic groups over the integers. This is a natural continuation of my previous result on this topic concerning, among others, special linear groups. We customize the induction technique and reduce the proof to specific computations for particular low-degree symplectic groups.
Another important aspect of this approach is to turn numerical solutions for such specific computations into a rigorous mathematical proof. This is done by the certification strategy (applied before by e.g. M. Kaluba, D. Kielak, P. W. Nowak, N. Ozawa). We comment here on our recent improvement of this strategy which can lead to faster computations. In order to accelerate them, we also applied Wedderburn decomposition.
If time permits, I would like to discuss possibilities of modifying semidefinite problems in such a way that they would provide us with human-readable solutions. In this approach I would propose adding some type of discrete objective function to the corresponding simplex-like problem. This is joint work with Jakub Szymański.
A spatiotemporal deep learning framework is proposed that is capable of two-dimensional full-field prediction of fracture in concrete mesostructures. This framework not only predicts fractures but also captures the entire history of the fracture process, from the crack initiation in the interfacial transition zone (ITZ) to the subsequent propagation of the cracks in the mortar matrix. Additionally, a convolutional neural network (CNN) is developed which is capable of predicting the averaged stress-strain curve of the mesostructures. The UNet modeling framework, which comprises an encoder-decoder section with skip connections, is used as the deep learning surrogate model. Training and test data are generated from high-fidelity fracture simulations of randomly generated concrete mesostructures. These mesostructures include geometric variabilities such as different aggregate particle geometrical features, spatial distribution, and the total volume fraction of aggregates. The fracture simulations are carried out in Abaqus/CAE, utilizing the cohesive phase-field fracture modeling technique as the fracture modeling approach. In this work, to reduce the number of training datasets, the spatial distribution of three sets of material properties for three-phase concrete mesostructures, along with the spatial phase-field damage index, are fed to the UNet to predict the corresponding stress and spatial damage index at the subsequent step. It is shown that after the training process using this methodology, the UNet model is capable of accurately predicting damage on the unseen test dataset by using just 470 datasets. Moreover, another novel aspect of this work is the conversion of irregular finite element data into regular grids using a developed pipeline. This approach allows for the implementation of less complex UNet architecture and facilitates the integration of phase-field fracture equations into surrogate models for future developments.
The integration of hybrid machine learning (ML) models and substructuring methods holds significant potential for advancing dynamic analysis in complex systems. This approach aims to reduce the reliance on extensive physical testing by leveraging data-driven techniques to enhance model accuracy and efficiency.
This work presents a baseline study on integrating machine learning techniques with substructuring methods to predict the dynamic behavior of multi-degree-of-freedom (MDOF) systems. The goal is to develop a hybrid framework that allows for accurate predictions throughout the entire development process, reducing the need for physical prototypes and testing at each phase. A multi-mass oscillator is separated into two linear subsystems, and an ML model is then trained to predict the dynamic behavior of the entire system based on the dynamics of the individual subsystems and their coupling interactions. Additionally, the approach is tested with different numbers of degrees of freedom (DOFs) for both the system and subsystems to ensure flexibility for later application to more complex MDOF systems. Initially, the model is trained using simulation data for the linear subsystems and their couplings. The predictions made by this model are then compared with experimental measurement data to adapt the model and improve its ability to predict real-world behavior, which was not captured by the simulations.
Ultimately, this work lays the initial groundwork for more efficient and reliable product development in complex systems, offering a potential approach to address the challenges of vibration load prediction and dynamic behavior analysis in engineering applications. While this study focuses on a simplified case, it provides a foundation for further research to explore the feasibility and effectiveness of this hybrid approach in more complex scenarios.
Modeling hierarchical nanoporous metals, characterized by complex ligament networks across multiple length scales, is computationally demanding. Multiaxial stresses occur in higher hierarchy ligaments, adding to the complexity of the problem and requiring an understanding of multiaxial material behavior. For finite element (FE) modeling, we separate the hierarchical nanoporous structure into upper and lower levels. To reduce computational cost, we aim to use surrogate models and FE-beam models for predicting the homogenized mechanical behavior of the lower level of hierarchy.
For model evaluation and improved interpretability of the evolution of trajectories with the introduction of the lower level of hierarchy, a 2D model is employed at the upper level. For the lower level, a 3D FE-beam model with diamond architecture is used to represent the porous network. We introduce a new physics-informed recurrent neural network (RNN)-based architecture to represent the homogenized mechanical material response of the structure at the lower level of hierarchy. The RNN predicts the tangent stiffness matrix as a primary output from given strain trajectories via a recurrent layer. Secondary outputs such as stress, plastic strain, and plastic energy increments are derived through embedded physical relationships, ensuring physical consistency across the outputs. Positive energy dissipation is inherently ensured through positive eigenvalues of the tangent stiffness matrix and further through penalization of negative energy increments, ensuring thermodynamically consistent predictions.
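One common way to guarantee a symmetric positive definite tangent output from a network, sketched here purely for illustration (the architecture in [1] may realize this constraint differently), is to predict a Cholesky-type factor and assemble the stiffness from it:

import numpy as np

def spd_from_raw(raw, dim=6):
    """Map an unconstrained vector of length dim*(dim+1)//2 (e.g. a network
    output) to a symmetric positive definite matrix via a Cholesky factor."""
    L = np.zeros((dim, dim))
    idx = np.tril_indices(dim)
    L[idx] = raw
    diag = np.arange(dim)
    L[diag, diag] = np.exp(L[diag, diag]) + 1e-8   # strictly positive diagonal
    return L @ L.T                                  # eigenvalues are positive

raw = np.random.default_rng(2).standard_normal(6 * 7 // 2)
C = spd_from_raw(raw)
print(np.all(np.linalg.eigvalsh(C) > 0.0))          # True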
FE simulations incorporating the trained RNN as a material user subroutine show that the inclusion of the lower level structure significantly modifies the trend in the strain trajectories observed at the upper level, while stress trajectories mainly experience changes in magnitude, although maintaining their general trend. This approach demonstrates the capability to efficiently simulate hierarchical materials, capturing the influence of lower-level porosity on the upper-level material behavior while maintaining physical consistency. Additionally, the model allows for straightforward integration into finite element frameworks like Abaqus, offering a computationally efficient method for studying complex hierarchical materials [1].
References:
[1] L. Dyckhoff, N. Huber, 2025, Journal of the Mechanics and Physics of Solids, under review
Traditional tool development processes in mechanical forming, especially for pressing tools, are often limited by time-consuming trial-and-error cycles that rely heavily on expert knowledge. These cycles can create bottlenecks in production and increase costs. Machine learning can be employed here to greatly speed up these processes by using advanced models. In this regard, a Denoising Diffusion Model (DDM) has been proposed and employed in previous work to inversely model effective tool surfaces from final product geometries [1]. This model leverages spatial and temporal attention mechanisms to enhance its predictive capabilities. While the DDM achieved commendable overall predictions, it struggled with accuracy in reconstructing individual denoised frames during later forming steps. To address these challenges without significantly increasing computational demands, we explore integrating Pixel-Adaptive Convolutional Neural Networks (PAC) [2]. PAC refines predictions from pre-trained models using high-resolution guidance images and adaptive kernels, enhancing both final results and intermediate frame accuracy. Additionally, the refinement could be further enhanced by incorporating reliability estimates for each pixel prediction. PAC has already been demonstrated in benchmark tasks like optical flow and semantic segmentation and thus aligns well with the forming process simulation by capturing motion and deformation over time. By incorporating these techniques, we aim to refine stress distributions and enhance the overall prediction reliability of the DDM, ultimately supporting a more efficient and accurate tool development process.
[1] Hupfeld, H. K., Teshima, Y., Ali, S. S., Dröder, K., Herrmann, C., & Hürkamp, A. (2024). Accelerating the design of the effective surface of pressing tools with probabilistic inverse modeling approaches. Proceedings in Applied Mathematics and Mechanics, 24, e202400177.
[2] Su, H., Jampani, V., Sun, D., Gallo, O., Learned-Miller, E. G., & Kautz, J. (2019). Pixel-Adaptive Convolutional Neural Networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11158-11167.
For machine learning regression tasks, the combination of physics-based models with data-driven models can enhance prediction performance, data efficiency and physical consistency due to the specific use of prior knowledge in the form of validated physical laws. Because a data-driven correction is assumed to be less complex when fundamental relationships are already represented in the physics-based model, this correction task also requires less data than a task to map the entire problem. Consequently, fewer samples and tests are needed when the physics-based model is effectively utilized, which results in savings of material, energy, and time. In the presented hybrid approach, a physics-based process model that exhibits a discrepancy from the experimental target solution, based on inherent assumptions and simplifications, is corrected via machine learning. This hybrid model is implemented for the solid-state process of Friction Surfacing (FS), a solid-state materials processing technique to produce fine-grained coatings with superior corrosion and wear properties. A smoothed-particle hydrodynamics (SPH) model serves as the physics-based model, and a scarce target data set based on a Box-Behnken design of experiments is used to train the data-driven correction model. The process parameters force, rotation speed and travel speed are used to predict the geometry, i.e. thickness and width, of the deposited layer. To perform physics-based feature engineering, the Buckingham Pi theorem is used to obtain dimensionless features, to reduce model inaccuracies and to increase model generalization. In addition, the computational costs of the SPH model are significantly reduced by replacing it with a surrogate ML model that is then corrected instead; therefore, the hybrid model enables rapid computation of predictions that are in very good agreement with experimental measurements.
In this talk, we present a hybrid finite element/neural network method for correcting coarse finite element solutions of partial differential equations. The method works by computing fine-scale corrections for the coarse solutions using a neural network trained offline. A neural network takes as input the local values of the coarse solution, as well as other local information such as values of the source term or the velocity field, and predicts the local correction. Such a network can be trained in two ways - directly using the fine finite element solution and using the residual of the equation. The key observation is the dependence between the quality of the predictions and the size of the training set, which consists of a number of different problems, e.g. with different source terms or velocity fields and corresponding fine and coarse solutions. We also present the a priori error analysis of the method together with the stability analysis of the neural network. To support the theoretical claims, we present the results of the numerical experiments. We also illustrate the generalization of the network to other problems in terms of domain, source term, etc.
An accurate numerical method is presented for computing the ground state of a many-electron system in the low-density limit first considered by Seidl (Phys. Rev. A 1999). In this limit the ground state problem reduces to a multi-marginal optimal transport (MMOT) problem. The method is an improved version of the Genetic Column Generation algorithm by Friesecke, Schulz and Voegler introduced in SIAM J. Sci. Comput. 44 (2022). Via a more global search rule, it allows finding the ground state of MMOT problems with ~ 10⁶⁰ degrees of freedom to machine precision. Applying the method to the uniform electron gas in two dimensions confirms the effect of Wigner crystallization, that is, formation of a periodic lattice of electrons. Joint work with A. Parmiggiani, M. Penka, A. S. Schulz.
In this talk, I will present a mathematical analysis of Dynamical Mean-Field Theory (DMFT), a Green's function method developed in the 1990s, which provides dynamical correlations associated with the Gibbs state of fermionic quantum lattice models (e.g. the Hubbard model). In particular, I will show the existence of a solution to the corresponding equations in the framework of Iterated Perturbation Theory (IPT).
After a reminder of the models considered by this method (the Hubbard and Anderson impurity models), I will introduce the quantum (one-body time-ordered) Green's function, which can be seen as a time-dependent generalization of the one-particle reduced density matrix (1-pdm). In particular, I will introduce the self-energy and the hybridization function, two central objects in DMFT which share the property of being (negatives of) Nevanlinna-Pick functions (analytic maps from the upper half plane to itself).
After introducing these objects, I give the DMFT equations in the IPT approximation and prove the existence of solutions in functional sets modeling Anderson impurity models with infinite-dimensional baths. This is done by reformulating the equations as a fixed point problem in the space of probability measures, a setting in which we apply the Schauder(-Singbal) theorem. Moreover, we establish some properties of the solution(s). (Joint work with Éric Cancès & Alfred Kirsch, arXiv:2406.03384.)
We deal with efficient and certified numerical approximation of parametric Hermitian eigenproblems. To this aim, we rely on projection-based model order reduction, i.e. we approximate a possibly large-scale problem with one of a much smaller dimension by projecting it onto a suitable subspace. Such a space is constructed by means of weak-greedy type strategies applied either on a continuum or a discrete parameter domain. After discussing the connections with the reduced basis method for source problems, we introduce a novel a posteriori error estimate for the eigenspace associated with the smallest eigenvalue. It turns out that the approximation of the difference between the second smallest and the smallest eigenvalues, the so-called spectral gap, is crucial for the reliability of the error estimate. Therefore, we propose new efficiently computable upper and lower bounds for the spectral gap, which allow for an approximation through a greedy procedure. Our framework is well-suited to tackle the cases where the smallest eigenvalue is not simple. Our work is motivated by a particular application, namely the repeated evaluation of the ground state of parametric quantum spin system (QSS) models. In this framework, the ground state corresponds to the eigenspace associated with the smallest eigenvalue of the QSS Hamiltonian. Besides that, finding the ground state of a parametric Hermitian eigenproblem is omnipresent in physics, chemistry, and engineering.
This talk addresses the computation of ground states of multicomponent Bose-Einstein condensates, defined as the global minimizer of an energy functional on an infinite-dimensional generalized oblique manifold. We establish the existence of the ground state and characterize it as the solution to a coupled nonlinear eigenvector problem. By equipping the manifold with several Riemannian metrics, we introduce a suite of Riemannian gradient descent and Riemannian Newton methods. Metrics that incorporate first- or second-order information about the energy are particularly advantageous, effectively preconditioning the resulting methods. For a Riemannian gradient descent method with an energy-adaptive metric, we provide a qualitative global and quantitative local convergence analysis, confirming its reliability and robustness with respect to the choice of spatial discretization. Numerical experiments highlight the computational efficiency of both the Riemannian gradient descent and Newton methods.
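To convey the basic structure, a plain (Euclidean-metric) Riemannian gradient step for a single-component discrete model on the unit sphere is sketched below; the methods in the talk replace this metric by energy-adaptive choices and work on the generalized oblique manifold for several components:

import numpy as np

def riemannian_gradient_step(u, A, kappa=10.0, tau=0.1):
    """One step for E(u) = 0.5 u^T A u + 0.25 kappa sum(u^4) on the
    sphere ||u|| = 1, with normalization as retraction."""
    g = A @ u + kappa * u ** 3                    # Euclidean gradient of E
    g_tan = g - (u @ g) * u                       # project onto tangent space
    u_new = u - tau * g_tan
    return u_new / np.linalg.norm(u_new)          # retract back to the sphere

# Hypothetical 1D discrete Laplacian as the linear part of the energy.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u = np.ones(n) / np.sqrt(n)
for _ in range(200):
    u = riemannian_gradient_step(u, A)
print(0.5 * u @ (A @ u) + 0.25 * 10.0 * np.sum(u ** 4))   # monitored energy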
Eigenvector problems of the Kohn-Sham type often arise in computational physics and chemistry and can be formulated as minimization of an energy functional on the Stiefel manifold. We propose to use the Hamiltonian operator of the system to construct a Riemannian metric on the Stiefel manifold. This allows us to formulate the energy-adaptive Riemannian conjugate gradient method. Numerical experiments illustrate that for ground-state calculations of solid-state materials, our method can compete with state-of-the-art self-consistent field (SCF) based methods.
In this contribution, we present a blended learning concept designed to maximize the benefits students can gain from their teachers. The approach combines individual study with collaborative learning experiences and tutoring.
Students engage with content independently through a digital learning platform featuring videos and exercises while sitting together with their peers in a classroom.
This setup fosters discussions among students about their current challenges and provides opportunities for direct and individual tutoring from teachers. To realize this concept, we designed and implemented an innovative digital exercise type tailored to the specific needs of engineering mechanics education using the newest HTML5 technologies.
We created a digital course covering the content of the traditional frontal presentations as short videos. In between the series of videos, small exercises, which require no pen and paper, allow students to apply and rethink the concepts presented in the videos, fostering active engagement and enabling self-assessment.
While developing this course, we identified significant limitations in existing learning platforms. Many platforms disrupt the learning flow by requiring users to open each video and exercise separately. Additionally, exercise formats are often very limited. Multiple-choice questions, for example, are insufficient for fostering the critical competencies required in mechanics: we want students to think about the system, solve the problem, and then formulate this solution as vector-matrix equations. Other options would have been STACK or programming interfaces, but since mechanical equations consist of symbols that often carry many indices, an equation typed into such a linear text input is hard to read. Therefore, we developed a custom digital exercise type tailored to engineering mechanics, in which students use drag-and-drop to assemble vector-matrix equations from predefined symbols and operators. The equations are evaluated on a server using SymPy, a computer algebra system, providing immediate feedback to students similarly to the way STACK works. A sketch of this server-side evaluation idea is shown below.
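A minimal sketch (Python/SymPy) of this evaluation step: a submitted vector equation is checked against a reference solution up to algebraic equivalence. The nodal-equilibrium example, the variable names, and the transfer format are illustrative assumptions, not the actual exercise backend.

```python
import sympy as sp

F, alpha, m, g = sp.symbols("F alpha m g", real=True)

# Reference left-hand side of a nodal force equilibrium sum(F) = 0, as an
# instructor might define it (illustrative example, not an actual exercise).
reference = sp.Matrix([F * sp.cos(alpha), F * sp.sin(alpha) - m * g])

# Equation assembled by the student via drag-and-drop, after being parsed
# into SymPy objects on the server.
student = sp.Matrix([sp.cos(alpha) * F, -m * g + F * sp.sin(alpha)])

def is_correct(student_lhs, reference_lhs):
    # Algebraic equivalence check, robust against reordered terms and factors.
    return sp.simplify(student_lhs - reference_lhs) == sp.zeros(*reference_lhs.shape)

print(is_correct(student, reference))  # immediate feedback: True
```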
Our teaching concept combines digital learning with on-site support to maximize value. During in-person sessions, teachers use the time they gained from digitization for personalized tutoring. Students study the content individually but are invited to do that in a shared physical environment. Those who encounter difficulties can seek help from peers or the teachers, making individual discussions with the teacher possible even in larger courses. This approach also addresses the diversity of incoming student knowledge by allowing individuals to begin with introductory or review materials as needed, ensuring a personalized learning experience.
In undergraduate engineering education, foundational courses in engineering mechanics pose considerable challenges for students due to the abstract and analytical nature of the subject matter. To enhance learning outcomes and provide immediate, formative feedback, automated STACK assignments incorporating the Meclib library for parameterized graphics have been implemented in Bachelor-level courses on statics and dynamics. This paper presents an experience report focusing on the effectiveness of STACK-based assessments and the specific topics and problem types where students encountered significant learning obstacles.
The analysis reveals common challenges related to fundamental topics, including equilibrium analysis, free-body diagrams, and the application of Newton’s laws. Additionally, trends in student errors and misconceptions, particularly related to numerical calculations, correct handling of physical units, and problem interpretation, are examined. Unlike typical symbolic problems in lectures and exercises, the STACK assignments require students to compute numerical results, introducing additional challenges such as rounding errors and error propagation. The assignments are also structured to support the targeted practice of specific mathematical skills essential in mechanics, such as vector decomposition and the application of differentiation rules—including fundamental concepts like the product, quotient, and chain rules. It is discussed how automated, feedback-driven STACK assignments not only help students develop these core problem-solving skills but also motivate them to engage consistently throughout the semester, while enabling instructors to detect and address both conceptual misunderstandings and gaps in essential mathematical skills early in the learning process.
Through this study, it is aimed to provide insights into the design of more effective didactic strategies in mechanics and advocate for broader adoption of automated assessment tools in technical education. The findings underscore the importance of targeted feedback and adaptable problem sets in fostering a deeper understanding of mechanics fundamentals among engineering students.
A common challenge in education is that students attend classes to write down what is explained, remaining passive otherwise. Even if every part of the practiced exercises is understood, much more thinking, questioning and understanding arises when exercising actively. It is highly desirable to encourage this on a regular basis during the semester to achieve continuous learning and to prevent students from being overwhelmed by quickly rising levels of difficulty.
So far, this encouragement has been achieved by one qualifying exam written during the semester in our engineering mechanics courses. That exam has to be passed (with at most two attempts) to gain admission to the final exam at the end of the semester, and it is a good opportunity for students to recapitulate and practice what has been taught. Moreover, they get an impression of the required standard of knowledge as well as of the structure and grading of an exam. These advantages are contrasted by two main drawbacks: 1. the qualifying exam adds only one occasion to the course where students are required to exercise and therefore provides little encouragement for continuous learning, and 2. grading the qualifying exams is a laborious process.
In this context, we present a concept for electronic assignments as a means of continuous exercising. It is planned to demand the completion of an assignment every two weeks which aims at recapitulating the recent topics of the course and which also has to be passed by the students to be allowed to take the final exam. The tasks are randomized to encourage individual working and discussing the approaches for solutions in study groups and they are developed using STACK (System for Teaching and Assessment using a Computer algebra Kernel) within the Moodle environment. We also use M. Kraska's Meclib library which significantly eases the development, provides high-quality mechanical plot elements and allows for interactive tasks, which are one focal point of the work.
In the presentation, we will show our approach to developing the assignments, which aim to be close to real-world applications, and present example tasks.
Structural Analysis is a compulsory subject in the B.Sc. Programme Civil Engineering. It conveys fundamental concepts related to the design of structures, such as the idealization of real-world structures as well as the calculation of internal forces and deformations due to given loading scenarios. Structural Analysis is typically assessed in a written exam and perceived as “difficult” by the majority of students. This is also reflected in high failure rates, which in turn correlate with relatively low attendance rates in class and insufficient participation in teaching and learning activities. In summary, a lack of understanding of fundamental concepts of engineering mechanics and an inability to transfer learned concepts to new exercise tasks are frequently observed. At the same time, competency-based examination is difficult, given cohorts as large as 200-300 students.
The above challenges have recently been addressed through the development of digital tutorial and examination tools within the scope of the project PITCH at the University of Duisburg-Essen (UDE). The project in general targets the development of digital examinations for a wide range of subjects, considering aspects of equal opportunity and taking into account didactical, technical and legal aspects. With respect to Structural Analysis, our objective is two-fold. Firstly, we aim to increase students’ motivation and to encourage students to regularly participate in teaching and learning activities by providing weekly digital tutorial tasks that are marked automatically. Secondly, we strive to develop digital exam tasks that facilitate fair and timely examination practices while maintaining the competency-based focus required in engineering disciplines.
To this end, the web-based automatic tutorial and exam system JACK of UDE is employed. It facilitates the use of various question types such as multiple choice or multi-stage with numerical input and it supports the development of randomized tasks. Another important aspect is the incorporation of problem- and response-specific feedback to support students’ learning. This contribution showcases some of the developed questions, summarizes the challenges encountered in the design of digital tutorial tasks for theoretical engineering subjects and presents the results of a first test run of weekly digital tutorial activities and a digital mock exam in winter semester 2024/25.
Mechanics is often regarded as a pure “weeder course” in engineering study programs. The content is generally perceived as extremely difficult, incomprehensible and without any reference to real-life problems. University education often fails to convey the importance and benefits of mechanics for engineering even rudimentarily. In addition, learning objectives are usually only achieved at rather low taxonomy levels, so that students are virtually denied the opportunity to acquire contemporary or, in other words, future skills.
The “DTM – Digitale Technische Mechanik” (loosely translated as “Digital Engineering Mechanics”) project as part of the OERContent.nrw initiative of the state of North Rhine-Westphalia has therefore set itself the goal of reforming the basic teaching of mechanics and bringing it up to date. In a network consisting of eight universities, learning material is being developed that can be used on a modular basis and addresses a much wider range of taxonomy levels and explicitly defined learning objectives. It is based on constructivist learning theories and takes into account concepts such as constructive alignment.
Projects in which realistic problems are posed on the basis of real demonstrators play a central role in the didactic concept. These projects are structured modularly and organized according to learning objectives defined at different taxonomy levels. In addition, these projects require and promote a significantly higher degree of independent learning and independent exploration and validation of solution approaches for engineering problems. In this contribution, we present one of these projects as an example and focus in particular on the problems posed and the associated learning material. We highlight the challenges in design and implementation, as well as the possibilities and advantages that can be achieved through the concept.
A large toolbox of numerical schemes to approximate the time dynamics of nonlinear PDEs has meanwhile been established, based on different discretization techniques such as discretizing the variation-of-constants formula (e.g., exponential integrators) or splitting the full equation into a series of simpler subproblems (e.g., splitting methods). In many situations these classical schemes allow a precise and efficient approximation. This, however, changes drastically whenever non-smooth phenomena enter the scene, such as for problems at low regularity and with high oscillations. Classical schemes fail to capture the oscillatory nature of the solution, which may lead to severe instabilities and loss of convergence. Nevertheless, non-smooth phenomena play a fundamental role in modern physical modeling, e.g., singularity and shock formation, turbulence, etc., which makes it an essential task to develop suitable numerical schemes that capture their dynamics effectively.
In this talk I present a new class of resonance based schemes. The key idea in the construction of the new schemes is to tackle and deeply embed the underlying structure of resonances of nonlinear PDEs into the numerical discretization. As in the continuous case, these terms are central to structure preservation and offer the new schemes strong geometric properties even down to very low regularity.
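For orientation, the variation-of-constants (Duhamel) formula mentioned above reads, for a generic semilinear evolution equation $\partial_t u = Lu + N(u)$ (notation chosen here only for illustration),

\[
u(t_n+\tau) \;=\; \mathrm{e}^{\tau L}\, u(t_n) \;+\; \int_0^{\tau} \mathrm{e}^{(\tau-s)L}\, N\bigl(u(t_n+s)\bigr)\,\mathrm{d}s .
\]

Exponential integrators arise from approximating the integral, while splitting methods instead solve the flows of $L$ and $N$ separately; roughly speaking, resonance-based schemes embed the dominant oscillatory interactions generated by $\mathrm{e}^{\tau L}$ into the discretization instead of expanding them in powers of the time step.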
Oscillations of nonlinear systems contain, compared to linear oscillations, a broad variety of possible phenomena, like existence of multiple solutions, super- (and sub-) harmonics in case of harmonic excitation, period multiplication or chaos. While methods for identification and solution of the underlying equations of motion in linear cases still offer some challenges, the investigation of nonlinear oscillations is much more challenging. The plenary lecture shows results of the work of the speaker and his research group of the past 25 years on nonlinear oscillations.
We start from classical semi-analytical methods, which still offer interesting open questions regarding implementation and the validity of the solutions, and then look at classical numerical procedures as well as current machine learning applications. We consider fundamental phenomena such as multi-stable or intra- and interwell solutions as well as the corresponding basins of attraction. A broad range of fields from academic to industrial applications is covered, including the classical Duffing oscillator, low-degree-of-freedom oscillators as well as multistable energy harvesting systems and squealing brakes.
With the application of new data analysis and machine learning methods, the classical field of nonlinear oscillations has received new, challenging impulses, and further possibilities remain to be discovered.
Lunch will be served at the conference venue in Rooms 51 and 53 for a pre-purchased voucher.
Metallic porous materials are used in wide-ranging practical applications, from lightweight metal structures to medical implants. However, their use in a given application depends on obtaining the desired mechanical properties, such as Young’s modulus and yield stress. Advances in 3D printing enable the creation of a wider choice of prescribed structures than ever before. However, to take advantage of this opportunity, one must predict which structures will exhibit the desired mechanical properties. Instead of testing each candidate structure directly with computationally expensive simulation methods, our strategy rests on using Fast Fourier Transform-based computational methods to create a database of sample structures, which is then used to build predictive models based on variables defined by topological data analysis (TDA); the results of these models can be extrapolated to a wider space of possible structures.
The talk will present results on predicting the elastic properties of metallic 3D porous materials using convolutional networks and additional structural descriptors. Even for relatively small datasets, the developed machine learning models achieve a prediction accuracy of Young’s modulus at R² ≥ 0.95 across a wide range of porous structures. The reference Young’s modulus values were obtained using molecular dynamics simulations and the finite element method. We will show that the approach is robust to distinct topologies of the porous media. The impact of adding topological summaries to the machine-learning model will be discussed.
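A minimal sketch (Python/PyTorch) of the kind of model described above: a small 3D convolutional network regressing Young's modulus from a voxelized structure, optionally concatenated with additional scalar descriptors (e.g., porosity or TDA summaries). Architecture, sizes, and inputs are illustrative assumptions, not the authors' actual networks.

```python
import torch
import torch.nn as nn

class YoungModulusCNN(nn.Module):
    """Small 3D CNN regressor; architecture and sizes are illustrative only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        # two extra scalar descriptors (e.g. porosity and one TDA summary)
        self.head = nn.Sequential(nn.Linear(32 + 2, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, voxels, descriptors):
        z = self.features(voxels).flatten(1)                    # [N, 32]
        return self.head(torch.cat([z, descriptors], dim=1)).squeeze(1)

model = YoungModulusCNN()
voxels = torch.rand(4, 1, 32, 32, 32)      # dummy voxelized porous structures
descriptors = torch.rand(4, 2)             # dummy porosity / TDA descriptors
print(model(voxels, descriptors).shape)    # torch.Size([4]) -> predicted moduli
```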
A Reeb graph is a discrete invariant of a function on a space that provides information about the evolution of topology of the level sets. It is formed as the quotient of the space by contracting connected components of level sets of the function. Under sufficiently nice assumptions, e.g. for smooth functions on closed manifolds with finitely many critical values, it actually has the structure of a finite graph. Reeb graphs and their algorithmic version, mapper graphs, are widely used in computational topology, visualization and data analysis. In this talk we will discuss what information about the shape of space is encoded in Reeb graphs. In fact, there are only two necessary and sufficient conditions for a graph to be realized, up to homeomorphism, as the Reeb graph of a Morse function on a given manifold.
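For reference, the quotient construction described above can be written as follows: for $f\colon X \to \mathbb{R}$, points are identified by

\[
x \sim y \quad :\Longleftrightarrow \quad f(x) = f(y) \ \text{ and } \ x,\,y \text{ lie in the same connected component of } f^{-1}\bigl(f(x)\bigr),
\]

and the Reeb graph is the quotient space $\mathcal{R}_f(X) = X/\!\sim$, equipped with the quotient topology.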
This project explores the intersection of mathematics, crystallography, and machine learning by developing a method to detect symmetry groups of tilings (crystals). The approach originates from a mathematical technique I introduced in [1], which can be effectively modeled using classical machine learning methods. The work aligns with the principles of explainable AI, as preliminary results suggest the existence of a precise conjecture that should be provable based on data-driven modeling. This poster presents the methodology, key findings, and potential directions for collaboration in bridging theoretical mathematics with computational techniques in crystallography.
Failure of materials and structural components has been an important issue for as long as man-made constructions have existed. The section focuses on damage mechanics and fracture mechanics for all kinds of solid materials and structures. It aims at bringing together related original research covering experimental observations, modeling approaches and numerical techniques. Moreover, material failure is a complex process, which may be considered on different length scales ranging from the atomistic scale up to the macro scale of engineering structures. Since the failure behavior of materials strongly depends on the loading situation, contributions addressing static, dynamic and multi-axial failure are welcome as well as fatigue problems.
Accurate lifetime prediction of concrete structures under fatigue loading is vital, particularly in scenarios involving nonuniform loading histories where load sequencing critically influences structural durability. Addressing this complexity requires advanced modeling approaches capable of capturing the intricate relationship between loading sequences and fatigue life [1,2]. Traditional high-cycle fatigue simulations are computationally prohibitive, necessitating more efficient methods. This contribution explores the potential of physics-informed machine learning to predict the fatigue lifetime of high-strength concrete, explicitly considering the effects of loading sequences in nonuniform loading scenarios [3,4]. A deep neural network was trained using numerical simulations generated by a physically-based anisotropic continuum damage fatigue model of concrete that was calibrated and validated against experimental fatigue data of cylinder specimens tested in uniaxial compression [5]. The simulations used for training quantified the effects of load sequences at two different amplitude levels. The deep neural network incorporates physical constraints derived from experimental evidence into the loss function of the neural network to improve its prediction accuracy, along with initial and boundary conditions. The proposed approach demonstrates superior accuracy compared to purely data-driven neural networks, achieving realistic predictions of damage accumulation. Furthermore, the model has been successfully applied to predict fatigue lifetimes under complex loading scenarios with three to five amplitude variables, serving as a surrogate model to estimate damage evolution across loading jumps. This work emphasizes the potential of physics-based neural networks as a promising technique for efficient and reliable fatigue life prediction of concrete structures susceptible to fatigue.
References:
[1] A. Baktheer, E. Martínez-Pañeda, F. Aldakheel, Phase field cohesive zone modeling for fatigue crack propagation in quasi-brittle materials, Comput. Methods Appl. Mech. Eng. 422 (2024) 116834. https://doi.org/10.1016/j.cma.2024.116834.
[2] A. Baktheer, C. Goralski, J. Hegger, R. Chudoba, Stress configuration-based classification of current research on fatigue of reinforced and prestressed concrete, Struct. Concr. 25 (2024) 1765–1781. https://doi.org/10.1002/suco.202300667.
[3] F. Aldakheel, R. Satari, P. Wriggers, Feed-Forward Neural Networks for Failure Mechanics Problems, Appl. Sci. 11 (2021). https://doi.org/10.3390/app11146483.
[4] A. Tragoudas, M. Alloisio, E.S. Elsayed, T.C. Gasser, F. Aldakheel, An enhanced deep learning approach for vascular wall fracture analysis, Arch. Appl. Mech. 94 (2024) 2519–2532. https://doi.org/10.1007/s00419-024-02589-3.
[5] A. Alliche, Damage model for fatigue loading of concrete, Int. J. Fatigue 26 (2004) 915–921. https://doi.org/10.1016/j.ijfatigue.2004.02.006.
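To make the idea of the physics-informed loss described above concrete, the following minimal sketch (Python/PyTorch) penalizes, in addition to the data mismatch, any decrease of the predicted damage with the cycle count and a nonzero initial damage. The network architecture, the input layout (two amplitude levels plus a normalized cycle count), and the penalty weights are illustrative assumptions, not the authors' calibrated model.

```python
import torch
import torch.nn as nn

# Small fully connected surrogate: inputs -> damage (illustrative architecture)
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def loss_fn(inputs, damage_ref):
    """inputs: [N, 3] = (amplitude level 1, amplitude level 2, normalized cycles)."""
    inputs = inputs.requires_grad_(True)
    d_pred = net(inputs)
    data_loss = nn.functional.mse_loss(d_pred, damage_ref)
    # physics constraint: damage must not decrease with the cycle count
    dd_dn = torch.autograd.grad(d_pred.sum(), inputs, create_graph=True)[0][:, -1]
    mono_loss = torch.relu(-dd_dn).mean()
    # initial condition: zero damage at zero cycles
    d0 = net(torch.cat([inputs[:, :2], torch.zeros_like(inputs[:, 2:])], dim=1))
    ic_loss = (d0 ** 2).mean()
    return data_loss + mono_loss + ic_loss

x = torch.rand(128, 3)          # dummy training inputs
d = torch.rand(128, 1)          # dummy damage values from the fatigue model
loss_fn(x, d).backward()        # train with any optimizer afterwards
```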
Damage simulations play a critical role in assessing structural integrity and understanding material failure under various conditions. However, their computational cost is often a significant challenge, especially in scenarios requiring repeated simulations such as design optimization, uncertainty quantification, and real-time decision-making. Traditional model reduction techniques such as the proper orthogonal decomposition and especially hyper-reduction methods [1] can reduce the simulation time significantly but often necessitate intrusive modifications to existing simulation codes and are bound to the classical finite element framework. In this contribution, an autoencoder-based non-intrusive model reduction framework for gradient-extended damage simulations [2] is introduced. Autoencoders, a class of neural networks designed for dimensionality reduction and feature extraction, are employed to construct a compact latent space representation of high-fidelity simulation data. The autoencoder can be trained to capture complex, nonlinear relationships within the data, enabling significant reductions in computation time while maintaining high accuracy [3]. It is also easily possible to incorporate real-world data directly into the reduced order model because the data does not have to be transformed to e.g., Neumann or Dirichlet boundary conditions. The proposed methodology is validated using numerical examples involving complex structural damage simulations. Different neural network structures and techniques are investigated with respect to their influence on accuracy. Results demonstrate the framework's ability to achieve orders-of-magnitude reduction in computational time while maintaining high accuracy in predicting damage evolution and structural behavior. The model's robustness and efficiency make it a promising tool for applications requiring rapid simulation capabilities, such as real-time monitoring, predictive forecasting, and uncertainty quantification in the context of digital twins.
[1] Farhat, C., Avery, P., Chapman, T., & Cortial, J. (2014). Dimensional reduction of nonlinear finite element dynamic models with finite rotations and energy‐based mesh sampling and weighting for computational efficiency. International Journal for Numerical Methods in Engineering, 98(9), 625-662.
[2] Brepols, T., Wulfinghoff, S., & Reese, S. (2020). A gradient-extended two-surface damage-plasticity model for large deformations. International Journal of Plasticity, 129, 102635.
[3] Simpson, T., Dervilis, N., & Chatzi, E. (2021). Machine learning approach to model order reduction of nonlinear systems via autoencoder and LSTM networks. Journal of Engineering Mechanics, 147(10), 0402106
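A minimal sketch (Python/PyTorch) of the non-intrusive reduction idea described above: an autoencoder compresses full-field snapshots into a low-dimensional latent vector and reconstructs them; a separate regression (not shown) can then map parameters or loading history to the latent coordinates. All dimensions and layer sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_dof, n_latent = 10_000, 8   # full-order vs. reduced dimension (assumed)

class Autoencoder(nn.Module):
    """Dense autoencoder compressing full-field snapshots; sizes illustrative."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_dof, 256), nn.ELU(),
                                     nn.Linear(256, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 256), nn.ELU(),
                                     nn.Linear(256, n_dof))

    def forward(self, u):
        return self.decoder(self.encoder(u))

model = Autoencoder()
snapshots = torch.rand(32, n_dof)                      # dummy high-fidelity snapshots
loss = nn.functional.mse_loss(model(snapshots), snapshots)
loss.backward()                                        # train with any optimizer
```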
Simulating fracture, and especially the fracture of thermoplastics, still proves to be difficult. It remains unclear what exactly happens at the molecular and atomistic level when thermoplastics fail, which is rooted in the multiscale nature of the problem. Before thorough multiscale simulations based on the “Capriccio” method as a multiscale domain-decomposition approach can be set up, appropriate preparation is in order. Therefore, two pure MD models of a general thermoplastic polymer (GTP) are investigated. The first focuses on the model size and captures up to 30 million superatoms. By varying material parameters and pre-crack sizes, the respective influences could be investigated. Furthermore, questions regarding the size of the process zone and the K-determined field could be answered, while interesting hypotheses regarding crack propagation in thermoplastics could be derived. In the second model, a “Capriccio light” approach was introduced that operates fully in MD but mimics a full multiscale “Capriccio“ simulation. With this cheaper and more flexible model, the relevant influences of parameters that only appear in multiscale simulations, such as the multiscale boundary conditions or the size of the MD and FE regions, could be investigated, leading to a better understanding of the problem at hand.
Accurate prediction of fracture mechanical quantities based on the atomistic and molecular structure of matter has the capability to provide a deeper understanding of failure processes and can open new approaches to material design. However, many challenges exist towards this ambitious goal particularly for amorphous materials – mostly rooted in the tremendous difference in length and time scales involved when linking atomistic simulations to continuum approaches, which are required to mimic typical fracture mechanical setups.
This contribution gives an overview of the Capriccio method as a multiscale domain-decomposition strategy that employs the fine-scale, in the sense of a discrete atomistic or molecular description, in the vicinity of crack tips as the regions of a specimen exposed to particularly high loads. Based on this, we present our recent findings in the context of quantifying stress intensity factors of inorganic glasses together with a discussion of appropriate measures to indicate the onset of crack propagation. Furthermore, we discuss peculiarities arising from the chosen setups and their impact on our findings.
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
Lattice structures have gained increasing popularity due to their remarkable strength-to-weight ratio. With advancements in material extrusion additive manufacturing (MEX), complex designs of lattice structures have become more accessible and widely applicable. However, their slender components face the challenges of elastic buckling at low densities.
Novel lightweight additively manufactured stayed polymer lattices [1] are presented, based on a previously published theoretical concept [2]. The buckling and post-buckling behaviour of stayed unit cells (UCs) was investigated experimentally through quasi-static uniaxial compression tests [3]. Simulations are conducted using an analytical model based on the general theory of elastic stability, implemented in the Python-based module “Pyfurc” [4].
Through an iterative process of fabrication (via MEX), testing (compression experiments) and simulation (analytical model), the mechanical behaviour of stayed UCs can be tuned by varying geometric parameters to achieve a higher strength-to-weight ratio. The analytical model enables predictive simulations across a broader range of parameters, reducing time and material costs while minimizing the reliance on direct experimental trials.
A set of parameters has been identified that can further enhance the ultimate load of the UCs, such as specific geometrical imperfections or specialized intersection designs. These findings provide a mechanical evaluation of the UCs' design, focusing on their buckling behaviour under varying geometrical parameters, which contributes to the identification of more effective lightweight stayed UC types.
[1] Ou, Y., Köllner, A., Dönitz, A. G., Richter, T. E., & Völlmecke, C. (2024). Material extrusion additive manufacturing of novel lightweight collinear stayed polymer lattices. International Journal of Mechanics and Materials in Design, 1-17.
[2] Zschernack, C., Wadee, M. A., & Völlmecke, C. (2016). Nonlinear buckling of fibre-reinforced unit cells of lattice materials. Composite Structures, 136, 217-228.
[3] Ou, Y.,& Völlmecke, C. (n.d.). Experimental study on the effect of geometric parameters on the buckling behavior of MEX-additively manufactured PLA collinear stayed unit cells. Progress in Additive Manufacturing. [Manuscript submitted for publication].
[4] pyfurc Documentation — pyfurc 0.2.3 documentation, 2023-01-12. URL: https://pyfurc.readthedocs.io/en/latest/ (visited on 07/26/2023).
3D concrete printing (3DCP) aims to revolutionize construction by increasing automation, reducing material usage, and enabling customized designs. Despite its potential, the lack of regulations and reliance on trial-and-error methods result in significant waste and inefficiencies. Reliable models are needed to predict and control the complex printing process with its various influencing factors from material, process, and environment.
This study aims to develop a structural model to predict print stability and prevent buckling and material failure in extrusion-based 3D concrete printing (3DCP), with a focus on environmental influences, particularly temperature. The structural build-up of the material is crucial for stability but is influenced by material ingredients, water-binder ratio, and ambient conditions, which vary in real-world projects. It describes the process of cementitious material gaining strength and stability during its early ages due to thixotropy and early hydration. To model this, a time- and temperature-dependent model for the evolution of early-age material parameters, such as stiffness, is derived. The model employs the maturity method, using an equivalent time to capture the temperature influence (see the formulation sketched below). It is verified using experimental data on stiffness evolution from squeeze flow tests and yield stress evolution measured from rotational rheometer tests. The model parameters are estimated using Bayesian inference, and validation shows good agreement with experimental data for both parameters at the material level. Subsequently, the derived time- and temperature-dependent stiffness model is adapted into a structural simulation using an elastoplastic material law with nonlinear hardening to study the temperature effect on the stability of 3DCP. Layers are activated sequentially based on a pseudo-density approach. The method is illustrated with an example of a printed wall one layer wide under varying ambient temperatures. The temperature impact on buckling and material failure during printing is demonstrated and numerically investigated through a sensitivity study.
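The maturity method referred to above is commonly formulated with an Arrhenius-type equivalent age; one widely used form (an assumption here, not necessarily the exact variant employed in the study) is

\[
t_e(t) \;=\; \int_0^{t} \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_{\mathrm{ref}}} - \frac{1}{T(s)}\right)\right]\mathrm{d}s ,
\qquad E(t) \;=\; \hat{E}\bigl(t_e(t)\bigr),
\]

so that the early-age stiffness evolution measured at the reference temperature, $\hat{E}$, is evaluated at the equivalent time $t_e$ to account for the actual temperature history $T(s)$.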
The Laser Powder Bed Fusion (LPBF) process is a pivotal additive manufacturing technique that enables layer-by-layer fabrication of intricate metallic components with exceptional precision, material efficiency, and design flexibility. In the LPBF process, a high-intensity laser selectively melts and fuses the powder in each layer along with the layer beneath it. However, the localized heating inherent in the process generates significant temperature gradients, resulting in residual stresses that can compromise structural integrity, induce warping, and affect overall part performance. Addressing these challenges is crucial for ensuring the reliability of LPBF-fabricated parts, particularly for demanding applications in advanced industries. This study investigates the impact of laser path strategies on temperature distribution and residual stress formation in LPBF through thermo-mechanical simulations. A methodology based on the Julia programming language leverages a finite difference method (FDM) framework to simulate thermal and mechanical behaviour for different laser paths defined by G-Codes. The analysis spans multiple layers, enabling an evaluation of how scanning strategies influence heat dissipation and stress accumulation. Additionally, the effects of varying process parameters on residual stress development are examined. Benchmark geometries are employed to validate the model and explore the effects of laser path selection, timestep resolution, and scanning strategies on the results. By analysing diverse scanning patterns and their implications for heat flow and stress buildup, this research provides insights into optimizing LPBF processes. The findings aim to establish a deeper understanding of the relationship between laser paths, thermal gradients, and residual stress distributions. These insights are expected to contribute to the development of improved strategies for minimizing residual stresses, enhancing the quality and reliability of LPBF-fabricated components. The outcomes of this work offer practical guidelines for optimizing LPBF scanning strategies, advancing process control, and fostering broader adoption of additive manufacturing technologies in critical sectors. This research not only enhances the understanding of LPBF processes but also provides actionable strategies for improving part quality, supporting the growth of additive manufacturing in high-performance applications.
Funding Acknowledgement: funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 543895078
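The thermal part of such a simulation can be illustrated with a minimal explicit finite-difference sketch; the study itself uses a Julia-based FDM framework with G-Code-defined paths, so the Python snippet below, its grid, material parameters, and the Gaussian source model are purely illustrative assumptions.

```python
import numpy as np

nx = ny = 100
dx = 1e-4                      # grid spacing [m]
alpha = 5e-6                   # thermal diffusivity [m^2/s]
dt = 1e-5                      # time step [s], satisfies dt < dx^2 / (4*alpha)
T = np.full((nx, ny), 300.0)   # temperature field [K]

coords = np.arange(nx) * dx
X, Y = np.meshgrid(coords, coords, indexing="ij")

def step(T, laser_xy, q=2e7, r=2e-4):
    """One explicit update: diffusion plus a Gaussian heating rate q [K/s]."""
    src = q * np.exp(-((X - laser_xy[0])**2 + (Y - laser_xy[1])**2) / r**2)
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
                       - 4.0 * T[1:-1, 1:-1]) / dx**2
    return T + dt * (alpha * lap + src)

# simple unidirectional scan along x at mid-height (a stand-in for a G-Code path)
for k in range(400):
    T = step(T, laser_xy=(coords[10] + 2.0e-5 * k, coords[ny // 2]))
print(T.max())
```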
Metamaterials are artificially designed structures that exhibit tailored mechanical properties. These properties result primarily from their underlying small-scale architecture and can therefore be optimised by adapting this structure. Owing to this, metamaterials are suitable for a wide range of applications, such as energy absorption, sound isolation, or seismic protection [1]. The future aim is to develop a three-dimensional metamaterial in the form of a slender, chiral lattice with rigid inclusions, with superior properties reminiscent of the fundamental work of [2]. As a first step, the mechanical behaviour of the two-dimensional projection of the underlying structure is investigated herein. In a single unit, square-shaped rigid inclusions are linked by simply supported struts. This unit of the metamaterial is investigated using three complementary methods: 3D printing, compression tests, and simulation. The methods are used iteratively to carry out parameter studies in order to optimise the model. A key focus of the talk is on designing the CAD model with regard to fabricating the unit cell using extrusion-based additive manufacturing technology, so-called 3D printing. Further tailoring of detailed structures such as hinges is achieved by tuning the printing parameters for fused filament fabrication. This tailoring process is presented together with experimental test results. The experimental tests are complemented by a theoretical framework for the simulation. This is based on the general theory of elastic stability [3]. The buckling and post-buckling behaviour is determined by minimising the total potential energy with respect to the generalized coordinates. The equations are solved with the in-house developed, open-source Python-based module 'pyfurc' [4]. Results of the first pilot studies, which exhibit the desired rotation of the rigid inclusions under a compressive load, will be presented.
References:
[1] J. Liu et al. “A Review of Acoustic Metamaterials and Phononic Crystals”. In: Crystals 10.4 (Apr. 2020), p. 305. issn: 2073-4352. doi: 10.3390/cryst10040305.
[2] J. Li et al. “Observation of Squeeze–Twist Coupling in a Chiral 3D Isotropic Lattice”. In: physica status solidi (b) 257.10 (2020), p. 1900140. issn: 1521-3951. doi: 10.1002/pssb.201900140.
[3] J. M. T. Thompson et al. A general theory of elastic stability. John Wiley & Sons, 1973.
[4] Klunkean. pyfurc - Auto-07P made accessible through python. URL: https://github.com/klunkean/pyfurc.
Additively manufactured sandwich structures with periodic cores are widely used in lightweight construction due to their high stiffness-to-weight ratio and geometric flexibility. However, in the laser powder bed fusion (LPBF) process, the enclosed nature of unit cells such as FCC (face-centered cubic) and BCC (body-centered cubic) poses challenges for powder removal, potentially compromising manufacturability and part quality. To address this, spherical cut-outs were introduced at cell corners to enable powder evacuation. Finite element simulations were carried out to assess the impact of these changes on the effective material properties, including stiffness and Poisson's ratio, under different loading conditions.
In contrast, TPMS (triply periodic minimal surface) unit cells, which are inherently self-supporting and open-porous by design, offer a promising alternative for LPBF applications. Preliminary simulations demonstrate their potential to maintain mechanical integrity while overcoming powder removal challenges. A MATLAB-based framework was developed to generate and analyze TPMS unit cells, with features for seamless meshing, direct export to Abaqus, and a novel gradation approach enabling tailored density distributions across sandwich cores.
The results highlight the trade-offs created by spherical cut-outs in FCC and BCC unit cells, where manufacturability is improved at the expense of stiffness. TPMS-based structures, on the other hand, maintain their mechanical performance while simplifying the manufacturing process. This study provides important insights into the design and evaluation of periodic core structures for LPBF, which has significant implications for lightweight applications in the aerospace and automotive industries.
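A minimal sketch (Python/NumPy) of the level-set idea behind such TPMS cores, including a simple density gradation through a spatially varying iso-level; the actual framework is MATLAB-based with meshing and Abaqus export, so everything below is an illustrative assumption.

```python
import numpy as np

n = 64
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")

# classical gyroid level-set approximation
gyroid = np.sin(X) * np.cos(Y) + np.sin(Y) * np.cos(Z) + np.sin(Z) * np.cos(X)

# graded iso-level along z: the relative density of the network-type solid
# varies smoothly across the core thickness
c = np.linspace(-0.6, 0.6, n)[None, None, :]
solid = gyroid <= c                       # boolean voxel model of the solid phase

rel_density = solid.mean(axis=(0, 1))     # relative density per z-layer
print(rel_density[0], rel_density[-1])    # lower near one face, higher near the other
```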
Additive Manufacturing in Construction (AMC) offers a unique advantage by combining automation with customization. Moreover, AMC introduces an innovative design approach, adding material only where it is structurally or functionally necessary. This makes AMC both cost-effective and resource-efficient. However, 3D printing with concrete presents specific challenges related to buildability. The deposited strands must support not only their own weight but also the weight of the layers above. Additionally, during the construction of concrete structures via 3D printing, issues such as elastic buckling and plastic deformation can lead to structural failure. Therefore, numerical modelling of the deposition process, considering process parameters, is crucial for predicting structural stability, optimizing the printing process, and minimizing trial-and-error experimentation.
This contribution outlines the framework and numerical implementation of a path- and process-based finite element approach to simulate the structural build-up during deposition. The modelling approach establishes a direct coupling between process parameters and simulation, accounting for the time-dependent material behaviour of fresh concrete. Various numerical examples related to the optimization of process parameters are presented to demonstrate the functionality of the approach. These studies analyse the convergence and numerical stability of the implementation concerning the sensitivity of time-dependent material properties as well as time and spatial discretisation. Additionally, explicit examples are used to apply the proposed approach for conducting parametric studies that investigate the buildability of printed structures through the optimization of process parameters.
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
The application of Lattice Boltzmann (LB) methods to solid mechanics has recently garnered significant interest. These methods promise exceptional performance in terms of computational complexity, exhibiting linear scaling behavior. Moreover, meshless approaches, such as LB schemes, demonstrate favorable properties when addressing large deformations or fractures.
Different schemes have been proposed with the Navier-Cauchy equation as the target equation, which are based on either wave decompositions [1,2] or the asymptotic expansion [3]. Another approach uses a moment-chain to replicate balance equations [4]. This relies on a constitutive forcing term [5] to model the material behavior of elastic solids. However, the existing LB schemes for solids have been restricted to linear elastic materials under small strains.
This presentation extends the moment-chain approach to accommodate non-linear materials with finite strains. By modifying the constitutive forcing term, we integrate general hyperelastic material models into the LB framework. Stress and strain measures are computed with regard to the reference configuration.
We validate the approach through numerical benchmarks, showcasing its performance across various material models. Furthermore, simulations of wave propagation illustrate the capability of the scheme in fully dynamical systems and demonstrate the method's potential for advancing computational techniques in non-linear elastodynamics.
[1] A. Schlüter, S. Yan, T. Reinirkens, C. Kuhn, and R. Müller, ‘Lattice Boltzmann Simulation of Plane Strain Problems’, PAMM (2021), doi: 10.1002/pamm.202000119.
[2] A. Schlüter, H. Müller, and R. Müller, ‘Boundary Conditions in a Lattice Boltzmann Method For Plane Strain Problems’, PAMM (2021), doi: 10.1002/pamm.202100085.
[3] O. Boolakee, M. Geier, and L. De Lorenzis, ‘A new lattice Boltzmann scheme for linear elastic solids: periodic problems’, CMAME (2023), doi: 10.1016/j.cma.2022.115756.
[4] E. Faust, A. Schlüter, H. Müller, F. Steinmetz, and R. Müller, ‘Dirichlet and Neumann boundary conditions in a lattice Boltzmann method for elastodynamics’, Comput Mech (2024), doi: 10.1007/s00466-023-02369-w.
[5] M. Escande, P. K. Kolluru, L. M. Cléon, and P. Sagaut, ‘Lattice Boltzmann Method for wave propagation in elastic solids with a regular lattice: Theoretical analysis and validation’, preprint at arXiv (2020), doi: 10.48550/arXiv.2009.06404.
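For readers unfamiliar with the basic LB building blocks, the following minimal sketch (Python/NumPy) shows a generic D2Q9 BGK collide-and-stream update and the place where an additive source/forcing term enters the populations. It is deliberately generic and is not the moment-chain solid scheme of the references above; the lattice, relaxation time, and simple source model are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
nx, ny, tau = 64, 64, 0.8
f = np.ones((9, nx, ny)) * w[:, None, None]      # populations at rest

def collide_and_stream(f, source):
    rho = f.sum(axis=0)                          # zeroth moment
    u = np.einsum("ia,ixy->axy", c, f) / rho     # first moment / density
    cu = np.einsum("ia,axy->ixy", c, u)
    usq = (u ** 2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)
    f = f - (f - feq) / tau                      # BGK collision
    f = f + w[:, None, None] * source            # simple additive forcing term
    for i in range(9):                           # streaming (periodic domain)
        f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))
    return f

f = collide_and_stream(f, source=np.zeros((nx, ny)))
print(f.sum())                                   # mass is conserved for zero source
```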
Stabilization techniques in combination with reduced integration are often employed in numerical methods to address the problems of locking while reducing computational costs. In finite elements, concepts such as enhanced assumed strain methods (EAS) [1] are usually employed in combination with stabilization techniques to tackle certain locking phenomena, such as volumetric or shear locking. This combination has been shown to improve the performance of low-order finite element formulations while addressing the downsides of reduced integration [2].
In recent years, the virtual element method (VEM) has emerged as a numerical approach capable of handling arbitrary polygonal and polyhedral meshes, offering flexibility in mesh generation and refinement [3]. This flexibility makes VEM particularly suitable for a wide range of applications involving distorted, non-convex, and irregular meshes. Similar to reduced integration, VEM formulations require stabilization to avoid rank deficiencies and ensure numerical consistency. Different stabilization techniques have been employed to overcome this issue. In addition to that, connections between classical stabilization strategies, such as hourglass stabilization, and stabilization techniques applied in VEM have been established [4].
In this contribution, different stabilization techniques are presented and investigated, with a particular focus on their application within the framework of the VEM. The discussion addresses the connections between classical approaches and emerging strategies, emphasizing their role in tackling challenges in computational mechanics. To validate the methodology, numerical examples, including benchmark problems, are conducted and the performance is quantified.
[1] J. C. Simo and F. Armero. Geometrically non-linear enhanced strain mixed methods and the method of incompatible modes, International journal for numerical methods in engineering, 33(7):1413–1449, 1992.
[2] N. Pacolli, A. Awad, J. Kehls, B. Sauren, S. Klinkel, S. Reese, and H. Holthusen. An enhanced single Gaussian point continuum finite element formulation using automatic differentiation, Dec. 2024.
[3] L. Veiga, F. Brezzi, A. Cangiani, G. Manzini, L. Marini, and A. Russo. Basic principles of virtual element methods, Mathematical Models and Methods in Applied Sciences, 23, 2013.
[4] A. Cangiani, G. Manzini, A. Russo, and N. Sukumar. Hourglass stabilization and the virtual element method, International Journal for Numerical Methods in Engineering, 102(3):404–436, 2015.
With the rising complexity of architectural design, a growing interest in topology optimization, and the increasing need to model crack propagation, there is a higher demand of highly flexible geometry descriptions that go in hand with robust element formulations. Methods relying on polygonal meshes, such as the virtual element method (VEM), element formulations based on Wachspress shape functions, and the Scaled Boundary Finite Element Method (SBFEM) have proven to be versatile in solving complicated physical problems. Arbitrarily shaped elements are beneficial when it comes to highly localized meshes, treatment of hanging nodes and flexibility of the meshing of complex domains. Hence, it is crucial to have a comparison of polygonal element formulations to gain a better understanding of advantages and drawbacks.
This presentation provides a comparative study of polygonal element formulations, focusing specifically on the VEM, the semi-analytical SBFEM, and the fully discretized SBFEM. The study evaluates these methods in terms of stability, convergence behavior, and flexibility in the context of linear elasticity. The potential of the formulations is evaluated by applying them to several benchmark problems and by comparing the results. In doing so, the flexibility of the considered formulations is also proven. To this end, several measurements are considered such as error norms, convergence rates and deformation measures.
References:
Beirão da Veiga, L., Franco Brezzi, Andrea Cangiani, Gianmarco Manzini, L. Donatella Marini, and Alessandro Russo. "Basic principles of virtual element methods." Mathematical Models and Methods in Applied Sciences 23, no. 01 (2013): 199-214.
Klinkel, S., and R. Reichel. "A finite element formulation in boundary representation for the analysis of nonlinear problems in solid mechanics." Computer Methods in Applied Mechanics and Engineering 347 (2019): 295-315.
Song, Chongmin. The scaled boundary finite element method: introduction to theory and implementation. John Wiley & Sons, 2018.
In industrial production, a substantial proportion of machining tools are manufactured from hard materials such as tungsten carbide and cobalt due to their exceptional hardness and durability. Both of these raw materials are considered critical from a social perspective [1]. To address this issue, the project 'Fairtools' aims to substitute these materials in industrial tool applications and, in cases where their replacement is technically infeasible, to significantly reduce their utilization. Large deformations in tooling systems arise from factors such as high operational stresses, extreme temperatures and others, resulting in significant material strain and geometric changes. For modelling such complex deformation problems, the Particle Finite Element Method (PFEM) has proven to be a robust solution [2]. The traditional Finite Element Method (FEM) may encounter excessive mesh distortion and a subsequent loss of accuracy in these scenarios, but the PFEM addresses these challenges with efficient re-meshing techniques that can yield more precise solutions. To assess the capabilities of PFEM, simulations using this technique were performed to investigate material behaviour under tensile loads. The study was conducted in two stages: first, a baseline specimen with uniform material properties was simulated to establish a reference for comparison. In the second stage, a specimen with spatially varying material properties, representative of composite materials, was taken into consideration. In both scenarios, the results of PFEM and FEM were compared with respect to accuracy and computational time. The findings obtained from this study further demonstrate the potential of PFEM in modelling large deformations and composite materials, highlighting its suitability for a diverse range of advanced industrial applications.
References
[1] Marscheider-Weidemann, F. (2021): Rohstoffe für Zukunftstechnologien 2021. – DERA Rohstoffinformationen 50: 366 pp., Berlin.
[2] Cremonesi, M., Franci, A., Idelsohn, S. et al. A State of the Art Review of the Particle Finite Element Method (PFEM). Arch Computat Methods Eng 27, 1709–1735 (2020).
The equilibrium approach to the finite element method (FEM) is applied to the equilibrium problem for the Reissner-Mindlin plate. This is achieved using the Southwell stress functions [2], which are exploited to express stress-related quantities. Bending and twisting moments depend on the first derivatives of the stress functions, whereas the transverse shearing forces are determined by their second derivatives.
The stress functions are approximated using two C1 triangular elements, developed by Argyris and by Hsieh, Clough and Tocher, respectively, as well as a C1 rectangular element developed by Bogner, Fox, and Schmit [1]. Each of the two components of the stress functions is approximated separately using the element shape functions and degrees of freedom, which include the values of the Southwell functions, their first-order derivatives, and their second-order derivatives. Consequently, the three elements have 42, 24, and 32 degrees of freedom, respectively.
Additionally, Lagrange multipliers are defined as extra degrees of freedom at the corner nodes of the elements to satisfy equilibrium conditions for point forces acting at these corners. The boundary conditions are formulated in terms of stress-related quantities and are enforced using multi-point constraint elements, which employ Lagrange multipliers as degrees of freedom. This allows the imposition of conditions that are linear combinations of the degrees of freedom of the triangular plate elements. The stress-based formulation for FEM is derived using the principle of minimum complementary energy (or, equivalently, the complementary work equation).
The results obtained using this equilibrium approach are compared to those derived using the displacement-based element method, which involves the use of triangular isoparametric elements with 12 and 22 degrees of freedom. Upper and lower bounds for the strain energy are determined using the dual properties of the two FEM formulations. The relative errors of the approximate solutions are calculated using the Synge method [3].
References:
[1] Ciarlet, P.G. The Finite Element Method for Elliptic Problems. North-Holland Publishing Company, 1978.
[2] Fraeijs de Veubeke, B., Zienkiewicz, O.C. Strain-energy bounds in finite-element analysis by slab analogy. Journal of Strain Analysis, 1967; 2(4):265-271. doi:10.1243/03093247V024265.
[3] Synge, J.L. The Hypercircle in Mathematical Physics. Cambridge: Cambridge University Press, 1957.
This paper presents the parameter estimation of a mathematical model regarding the natural vibrations of soda-lime-silica float glass panels. The proposed approach was based on experimental research conducted on a real soda-lime-silicate sample. The estimated parameters include the modulus of elasticity and the thickness of the panel. Other parameters of the model were sourced from literature data and an inventory of the real float glass panel. The literature gives the Young's modulus of glass panels in the range of 68–72 GPa. Because of the specificity of the material, even an experimental determination of this value may be subject to some inaccuracy. Due to the manufacturing process, the thickness of the float glass panel is variable along the sample surface. Although these changes are slight, in combination with the small thickness of the panel itself, they have a significant impact on the obtained values of the dynamic characteristics. This justifies the selection of these parameters for the estimation process. The development of a discrete computational model of the soda-lime-silica float glass panel was undertaken using the Rigid Finite Element Method (RFEM) because this method can be easily implemented in the MATLAB programming environment. The MATLAB software facilitates the implementation of optimisation algorithms, which, when integrated with the proprietary model software developed within the same environment, enable the estimation of parameters associated with float glass panels. The choice of appropriate criteria in the process of parameter estimation of float glass panel models has a significant impact on the accuracy of the obtained results. The estimation criteria were selected to ensure consistency between the natural frequencies and modes measured experimentally and those obtained from the mathematical model. The parameter values achieved for the soda-lime-silicate samples in the estimation process by the RFEM model were then verified using the Finite Element Method and Abaqus software to compare the obtained dynamic results.
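The estimation loop described above can be illustrated with a small sketch (Python/SciPy); for brevity it replaces the RFEM/MATLAB model with the classical simply supported Kirchhoff-plate frequency formula, and the plate geometry, density, and "measured" frequencies below are purely illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

a, b = 1.0, 0.8              # plate dimensions [m] (assumed)
rho, nu = 2500.0, 0.22       # density [kg/m^3] and Poisson's ratio of glass
modes = [(1, 1), (2, 1), (1, 2), (2, 2)]
f_meas = np.array([37.5, 82.5, 106.5, 152.0])   # dummy "measured" frequencies [Hz]

def f_model(E, h):
    """Natural frequencies of a simply supported Kirchhoff plate."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))
    return np.array([0.5 * np.pi * ((m / a)**2 + (n / b)**2) * np.sqrt(D / (rho * h))
                     for m, n in modes])

def residuals(p):
    E, h = p
    return f_model(E, h) - f_meas

sol = least_squares(residuals, x0=[70e9, 0.006],
                    bounds=([60e9, 0.004], [80e9, 0.008]))
print(sol.x)   # estimated Young's modulus [Pa] and thickness [m]
```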
The section covers all fields of vibrational problems in solid mechanics or mechatronics including nonlinear effects. Submissions may address, for example, systems with nonlinear material behavior, nonlinearities in joints, mathematical solution methods (analytical or numerical), control or description of nonlinear behavior like bifurcations or chaos, or experimental identification of nonlinearities.
Systems with more than one equilibrium belong to the group of nonlinear multi-stable structures. Multi-stability can be created by specific devices, for example by a system producing a nonlinear magnetic force that yields a potential function with two or more minima (potential wells). Another option is to add an axial force to the structure, which shifts the vibrating system to a buckling point with two coexisting equilibria (two potential wells). Composite technology offers new possibilities to create multi-stable structures by a proper selection of the geometry and material properties.
In this paper, we propose a specific shell design considered in two configurations, called shell A and shell B. The geometry of shell A is based on a conical surface and an unsymmetrical composite layer layout. Depending on the parameters, it may exhibit mono- or bi-stability. The shell, when excited periodically, may oscillate around a single potential well (in-well dynamics) or move between two potential wells (cross-well dynamics) with a rapid jump, called snap-through, from one equilibrium to another [1]. Shell B is based on a pseudo-conical surface and has the same composite layer layout. The shell B created in this way exhibits five equilibria and very complex dynamics: in-well or cross-well oscillations with snap-through between two potential wells, or cross-well oscillations with jumps among more than two wells. The unique nonlinear properties of the shells are attractive for the design of efficient energy harvesters or for dedicated control techniques when a rapid change between equilibria is required. The main goal of this paper is to present the atypical nonlinear effects of multi-stable systems and their application to energy harvesting, morphing, and control [2].
Acknowledgment
This research was funded in part by National Science Centre, Poland 2021/41/B/ST8/03190.
References
[1] Mitura, A., Brunetti, M., Kloda, L., Romeo, F., Warminski, J.: Experimental nonlinear dynamic regimes for energy harvesting from cantilever bistable shells. Mechanical Systems and Signal Processing 206 (9), 110890, 2024
[2] Warminski J.: Nonlinear dynamics of self-, parametric, and externally excited oscillator with time delay: van der Pol versus Rayleigh models. Nonlinear Dynamics 99, 35-56, 2020.
Nonlinear mechanical vibrations under harmonic forcing can be well approximated by Fourier series. For a finite number of harmonics, the error is minimized over one period of vibration. This technique, known as the Fourier-Galerkin method (FGM) or multi-harmonic balance method (MHBM), is today widely used in academic as well as industrial applications, e.g. friction damping in turbomachinery. The idea is often motivated by the original Harmonic Balance approach using the fundamental harmonic only, which is often referred to as single-term Harmonic Balance or the Describing Function Method. One can show analytically that a single-term approach applied to a one-degree-of-freedom system results in an oscillator with equivalent stiffness and equivalent damping changing with vibration amplitude. However, an equivalent mass which changes for different energy levels has not been considered so far. In this work we will present how the standard Harmonic Balance procedure can be applied to systems with energy-dependent inertia behaviour. First, examples for this kind of dynamic system are provided to motivate the problem. Three different real systems will be shown to exhibit inertia nonlinearity, namely a pendulum with varying length, a slender cantilever beam at moderately large deflection and a particle container. For fundamental investigations two academic systems are chosen. The first one is a reference system with a nonlinear acceleration term in cubic form. It can be seen as an inertia version of the Duffing oscillator and will be used to point out numerical challenges for both time integration and Harmonic Balance. A second reference system is linear in acceleration but the leading factor representing the oscillator's inertia is a function of the generalized coordinates. The latter system is found to be a suitable minimal model for the above-mentioned examples. Single-term Harmonic Balance is applied to obtain an equivalent mass and the related nonlinear eigenfrequency (an illustrative single-term calculation is sketched below). It will be shown how MHBM can be modified to tackle vibration problems with amplitude-dependent inertia, and the characteristic vibration behaviour is analysed. Reference results are computed by time integration, which may be challenging in cases where nonlinear behaviour is directly expressed in the accelerations. As stability information is crucial in nonlinear system design, this work will point out how stability can be assessed in the frequency domain by extending Hill’s theory for computing Floquet exponents in the case of inertia nonlinearity.
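As an illustration of the single-term idea (with a minimal model assumed here, not necessarily the exact reference system of the talk), consider a coordinate-dependent inertia $m(q)=m_0(1+\beta q^2)$ in $m(q)\,\ddot q + k\,q = F\cos(\Omega t)$. Inserting the ansatz $q(t)=a\cos(\Omega t)$ and keeping only the fundamental harmonic gives

\[
m_0\bigl(1+\beta q^2\bigr)\ddot q\,\Big|_{\text{1st harmonic}}
= -\,m_0\Bigl(1+\tfrac{3}{4}\beta a^2\Bigr)\,a\,\Omega^2\cos(\Omega t),
\]

so the oscillator behaves like a linear one with an amplitude-dependent equivalent mass $m_{\mathrm{eq}}(a)=m_0\bigl(1+\tfrac{3}{4}\beta a^2\bigr)$ and a nonlinear eigenfrequency $\omega(a)=\sqrt{k/m_{\mathrm{eq}}(a)}$, in complete analogy to the equivalent stiffness and damping of the classical describing-function picture.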
This paper presents a method for integrating base excitations into the harmonic balance method within the context of path continuation. Footpoint excitation can be applied to the mechanical system via linear or nonlinear elements. First, a brief introduction to the harmonic balance method and the continuation procedure is given. Then, different types of base excitation are discussed: displacement-, velocity- or acceleration-driven. The relevant relations in the time and the frequency domain as well as the required derivatives are also derived. Afterwards, the handling of the linear coupling case is shown, as it is straightforward and only requires the derivatives declared beforehand. The equations of motion can be rearranged such that the base excitation appears exclusively on the right-hand side as an external force, which may depend on the excitation frequency. Following this, two methods are presented to handle the nonlinear coupling case: the first uses relative coordinates so that the base excitation is again included in the external forces, as in the linear case. The downside of this approach is that it is not generally applicable, which will be discussed. A second, more general approach includes the base excitation in the nonlinear force term, which then depends on the excitation frequency. Again, the derivatives needed for continuation are derived. This approach allows the original set of coordinates to be retained. The proposed method is then applied to a single-degree-of-freedom system. The system shows a behavior where, for certain choices of the excitation amplitude and the nonlinearity, there is no turning point near resonance and the FRF splits into two separate curves. This behavior is further investigated using an analytic approach which allows the problem to be reformulated as a third-order polynomial. Several parameter studies are shown, including the locus curves of the local maximum and the turning point under variation of the base excitation amplitude. Additionally, frequency sweep data are shown to verify the effect. The proposed method is then applied to a multi-degree-of-freedom system which additionally has nonlinear couplings within its physical coordinates. The FRF under variation of the base excitation amplitude shows the same behavior of having no turning point. Moreover, the applicability to arbitrary n-DOF systems is demonstrated.
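For orientation, the relative-coordinate idea can be sketched for a generic single-degree-of-freedom case; the notation and the assumption that all restoring forces act on the relative displacement are illustrative and not taken from the paper. With prescribed base motion x_b(t) and relative coordinate r = x − x_b,

```latex
\begin{aligned}
 m\,\ddot{x} + c\,(\dot{x}-\dot{x}_b) + k\,(x-x_b) + f_{\mathrm{nl}}(x-x_b) &= 0,\\
 \Rightarrow\quad m\,\ddot{r} + c\,\dot{r} + k\,r + f_{\mathrm{nl}}(r) &= -m\,\ddot{x}_b(t),
\end{aligned}
```

so the base excitation acts purely as a frequency-dependent external force. If some force terms depend on the absolute rather than the relative motion, this rearrangement fails, which is precisely where the more general second approach becomes necessary.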
Time-varying physical properties of mechanical systems can lead to parametrically excited oscillations. A notable example of this is the time-varying stiffness of meshing gear teeth in the analysis of gearbox dynamics. In gear dynamics, the vibration behavior is a key design criterion, as it can lead to increased wear and noise. A mechanical model with two degrees of freedom can be used to mathematically represent the rotational equation of motion of the gear pair. The torsional vibrations occurring around the rigid body rotation can then be simplified to a single-degree-of-freedom parametric oscillator. Mathematically, this results in a description using the ordinary differential equation known as the Hill equation. In addition to an assessment of the vibration amplitudes, an assessment of the system stability is a crucial aspect to consider. In the stability analysis of such a mechanical system, a stability chart, also known as an Ince-Strutt diagram, is utilized. These graphical representations of stability regions can be created for various combinations of system parameters to predict whether the system will be stable or unstable. In order to gain a comprehensive understanding of the stability behavior, a range of different methodologies can be employed. In the context of periodically time-varying systems, three methods are particularly prevalent: the perturbation method and two approaches based on Hill's and on Floquet's theory, respectively. The latter serves as the basis for the methodology presented in this contribution. In general, stability charts are created by solving the differential equation for a large number of discrete parameter values and their combinations; the resulting stability statements are then mapped into the Ince-Strutt diagram. However, the stability boundaries separating stable and unstable behavior are of primary interest. For this reason, this contribution introduces an approach to determine the boundaries between stable and unstable regions without the need to calculate parameter values within the region of interest. In numerical analysis, continuation methods represent a class of techniques based on predictor-corrector schemes to compute parametrized curves. This contribution presents the method in general and conducts parameter studies on the above-mentioned application example. It will be shown that the proposed approach is suitable for generating stability charts for the analyzed parametric oscillator by continuation of stability boundaries.
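For reference, the Floquet-based point-wise stability evaluation that such brute-force chart generation relies on can be sketched as follows; this is a minimal Python example for the undamped Mathieu form of Hill's equation, x'' + (δ + ε·cos t)·x = 0, where the equation form and parameter values are illustrative and not those of the gear model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(delta, eps, T=2*np.pi):
    """Floquet multipliers of the Mathieu/Hill equation x'' + (delta + eps*cos(t))*x = 0."""
    def rhs(t, y):
        x, v = y
        return [v, -(delta + eps*np.cos(t))*x]
    # Monodromy matrix: propagate the two unit initial vectors over one excitation period.
    cols = []
    for y0 in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.column_stack(cols)
    return np.linalg.eigvals(M)

# One point of the Ince-Strutt chart: stable if no multiplier leaves the unit circle.
mults = floquet_multipliers(delta=0.3, eps=0.2)
print("stable" if np.max(np.abs(mults)) <= 1.0 + 1e-6 else "unstable", mults)
```

The boundary continuation pursued in the contribution can be understood as replacing this scan by tracing the parameter curves on which a multiplier crosses the unit circle.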
Stationary solutions of dynamical systems play an important role in engineering and science. These solutions are characterised by the fact that their underlying solution type does not change for infinite time. Very well-known stationary solutions are equilibria and periodic solutions. Beyond that, quasi-periodic solutions may occur. This solution type can be understood as an extension of periodic solutions, showing multiple rationally independent base frequencies. While equilibria and periodic solutions, as well as their entire trajectories, can be calculated directly using numerical methods such as shooting, this is not possible for quasi-periodic solutions as they do not exhibit a finite period. However, it can be shown that a quasi-periodic trajectory fills the surface of a two-dimensional torus densely as time approaches infinity. This enables the calculation of the underlying torus as invariant manifold in state space instead of parts of the solution trajectory. Since equilibria and periodic solutions can also be regarded as solutions on a lower-dimensional torus, all three solution types fit into a common framework.
Using a parametrization in torus coordinates (so-called “hyper-times”), the MATLAB toolbox CoSTAR (Continuation of Solution Torus AppRoximations), developed by the authors, aims at calculating parameter-dependent solution branches of the underlying tori. Currently, the toolbox features Fourier-Galerkin (i.e. Harmonic Balance) methods, Finite Difference methods as well as shooting methods for periodic and quasi-periodic solutions. As these methods exhibit individual advantages and disadvantages, a modular design allows the results of the implemented approximation methods to be compared easily. Apart from that, stability analysis and bifurcation detection are of great interest in engineering and science. While the stability of equilibria and periodic solutions can be calculated using well-established methods, stability computation is still challenging for quasi-periodic solutions. A method for the stability computation of quasi-periodic solutions, based on Lyapunov exponents and developed by the research group, has been implemented in CoSTAR for the quasi-periodic shooting method [1,2]. This enables the continuation and stability calculation of each solution type within a unified framework.
To demonstrate the applicability of the toolbox, a continuation example is presented.
Bibliography
[1] Fiedler, R., Hetzler, H. & Bäuerle, S. Efficient numerical calculation of Lyapunov-exponents and stability assessment for quasi-periodic motions in nonlinear systems. Nonlinear Dyn 112, 8299–8327 (2024). https://doi.org/10.1007/s11071-024-09497-9
[2] Hetzler, H. & Bäuerle, S. Stationary solutions in applied dynamics: A unified framework for the numerical calculation and stability assessment of periodic and quasi-periodic solutions based on invariant manifolds. GAMM-Mitteilungen 46 (2023), e202300006. https://doi.org/10.1002/gamm.202300006
The section focuses on constitutive modelling of natural and artificial materials subject to elastic and inelastic deformation processes. The aim is to compare new constitutive models formulated on both the phenomenological and the micromechanics basis to determine their validity by comparison of simulations with experiments. A wide range of open problems will be considered in the section, like multi-scale modelling of heterogeneous materials, implementation of constitutive models in numerical applications, and the virtual testing of structural systems.
It is generally accepted that soft biological tissue shows inelastic material behavior, such as viscoelasticity and loss of stiffness after primary loading, akin to the Mullins effect in rubber. Recently, there has been experimental evidence suggesting a coupling of these rate-dependent and damage-related effects in porcine thoracic aorta [1]. Two observations in favor of this hypothesis are (i) rate-dependent equilibrium relations and (ii) considerable differences in viscous dissipation between loading and unloading.
With respect to continuum-mechanical modeling, thermodynamically consistent formulations of anisotropic viscoelasticity at finite strains can be achieved with and without a multiplicative split of the deformation gradient, cf. [2] and [3], respectively. However, by definition, these approaches do not include damage-related effects. Regarding the latter, a widespread description of the Mullins effect has been discussed in detail by [4]. Here, the rate of deformation in turn does not influence the magnitude of damage, since the initial material response is taken to be purely elastic. As is often the case when modeling different sources of dissipation, one can superpose these two inelastic effects without truly coupling the phenomena. Consequently, the material response might allow for both viscous and damage-related dissipation, but rate-dependent equilibrium hystereses nonetheless remain impossible.
We attempt to address these deficiencies by extending the damage model of [4] to initially viscoelastic materials. This leads to a set of simple expressions which can straightforwardly be brought into line with the dissipation inequality, while still allowing for a wide range of possible material behaviors. We then discuss the consequences of this mutual coupling both from a purely theoretical point of view and with respect to experimental data from aortic and gastric porcine tissue.
References:
[1] Bogoni, F., Wollner, M.P., Holzapfel, G.A., 2024. On the experimental identification of equilibrium relations and the separation of inelastic effects in soft biological tissues. J. Mech. Phys. Solids 193, 105868.
[2] Ciambella, J., Lucci, G., Nardinocchi, P., 2024. Anisotropic evolution of viscous strain in soft biological materials. Mech. Mater. 192, 104976.
[3] Liu, J., Latorre, M., Marsden, A.L., 2021. A continuum and computational framework for viscoelastodynamics: I. Finite deformation linear models. Comput. Methods Appl. Mech. Engrg. 385, 114059.
[4] Naumann, C., Ihlemann, J., 2015. On the thermodynamics of pseudo-elastic material models which reproduce the Mullins effect. Int. J. Solids Struct. 69–70, 360–369.
In recent years, polymer materials have attracted considerable interest due to their exceptional suitability for lightweight engineering applications. To model their thermomechanical behavior, it is crucial to sufficiently capture relaxation effects in mechanical as well as caloric quantities [1]. Possible approaches to handle these effects within a thermodynamically consistent framework are, e.g., the introduction of internal variables such as a viscous strain [2] or the extension of the variable space by additional non-equilibrium fluxes such as the heat flux or the viscous stress tensor [3], both of which are related to dissipative effects. In this talk we present a summary of different views on relaxation equations governing thermo-visco-elastic material behavior [3, 4]. Special focus will be put on the connection to standard rheological models, e.g. the generalized Maxwell model or the Poynting-Thomson model, since they have proven effective in delivering reasonable results for basic cases under certain assumptions. Additionally, possible approaches for non-linear generalizations extending their validity to a wider spectrum of applications will be addressed.
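As a point of reference for the rheological models mentioned above, the following minimal Python sketch integrates the overstress of a single Maxwell branch, σ_v' = G·ε' − σ_v/τ, under a constant strain rate; the parameter values are purely illustrative, and the exponential update is just one convenient integration choice, not the formulation of the talk.

```python
import numpy as np

# Single branch of a generalized Maxwell model:  sigma_v' = G*eps_rate - sigma_v/tau.
# For a piecewise-constant strain rate the exact (exponential) update reads
#   sigma_v(t+dt) = sigma_inf + (sigma_v(t) - sigma_inf)*exp(-dt/tau),  sigma_inf = G*tau*eps_rate.
G, tau = 1.0e3, 2.0                      # assumed modulus and relaxation time
dt, t_end, eps_rate = 0.01, 10.0, 0.01   # assumed time step, duration, strain rate

t = np.arange(0.0, t_end, dt)
sigma_v = np.zeros_like(t)
sigma_inf = G * tau * eps_rate           # stationary overstress reached after relaxation
for n in range(len(t) - 1):
    sigma_v[n+1] = sigma_inf + (sigma_v[n] - sigma_inf) * np.exp(-dt / tau)

print(sigma_v[-1], sigma_inf)            # the overstress has relaxed towards G*tau*eps_rate
```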
REFERENCES
[1] Kehrer, L., Keursten, J., Hirschberg, V. and Böhlke, T., 2023. Dynamic mechanical analysis of PA 6 under hydrothermal influences and viscoelastic material modeling, Journal of Thermoplastic Composite Materials, Volume 36.
[2] Maugin, G. and Muschik, W., 1994. Thermodynamics with Internal Variables. Part II. Applications, Journal of Non-Equilibrium Thermodynamics, Volume 19.
[3] Jou, D., Lebon, G. and Casas-Vázquez, J., 2010. Extended Irreversible Thermodynamics, Springer Netherlands, Dordrecht.
[4] Šilhavý, M., 1997. The Mechanics and Thermodynamics of Continuous Media, Springer Berlin Heidelberg, Heidelberg.
In recent years, Additive Manufacturing in Construction (AMC) has been gradually transforming the industry by offering a faster and more efficient way to build with concrete. Unlike traditional methods that rely on molds, AMC deposits material layer by layer, allowing new possibilities for custom designs and complex shapes. AMC also digitalizes the construction process, which helps reduce labor costs, cut material waste, and streamline the transition from design to construction. To optimize these advantages, AMC requires appropriate modeling of building components and deposition processes.
Many developments have focused on digital methods for designing parts and planning AMC processes while others provide numerical simulations to predict failures of AMC structures. Our study contributes to the latter, aiming to develop a modeling and simulation approach for bulk-deposited AMC concrete and its interlayers. From this, a reduced substitute model is developed to reduce the computational cost of simulating complex geometries on a part-scale.
Our starting point is the development and validation of a viscoplastic material model for shotcrete 3D printing. A Bingham-type rheology is used to model the thixotropic behavior of freshly printed shotcrete. In addition, interlayers are represented by cohesive zone elements to simulate how layers settle as new ones are added. Finally, the material model is incorporated into the finite element method and validated against experimental results.
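For orientation, a Bingham-type viscoplastic response is often handled numerically through a regularized apparent viscosity; the short Python sketch below uses a Papanastasiou-type regularization with assumed material data, which is one common choice and not necessarily the formulation adopted in this study.

```python
import numpy as np

def apparent_viscosity(gamma_dot, tau_y, mu_p, m_reg=1000.0):
    """Regularized Bingham (Papanastasiou-type) apparent viscosity.

    gamma_dot : equivalent shear rate [1/s]
    tau_y     : yield stress of the fresh material [Pa]  (assumed value in the demo below)
    mu_p      : plastic viscosity [Pa s]
    m_reg     : regularization parameter avoiding the singularity as gamma_dot -> 0
    """
    gamma_dot = np.maximum(gamma_dot, 1e-12)
    return mu_p + tau_y * (1.0 - np.exp(-m_reg * gamma_dot)) / gamma_dot

# Demo with assumed material data: yield stress 200 Pa, plastic viscosity 50 Pa s.
rates = np.logspace(-4, 2, 7)
print(apparent_viscosity(rates, tau_y=200.0, mu_p=50.0))
```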
Crushed salt is used as a backfill material for closing nuclear waste repositories, with the advantage that it is a residual material of, e.g., salt mining. As backfill it has different functions such as mechanical stabilization, heat transport and sealing. Numerical simulations play a crucial role in the long-term assessment of these safety functions and in enhancing our understanding of repository systems. Simulations are instrumental not only in the safety analysis of repositories, but also in aiding the search for suitable repository sites. High-quality constitutive models are indispensable for these simulations, ensuring accurate and reliable results. The visco-plastic material behavior of crushed salt deformation can be decomposed into compaction creep, dislocation creep, fracture and grain rearrangement creep, and pressure solution creep [1]. In this contribution, a constitutive model that handles the above-mentioned deformation mechanisms is presented. The underlying creep mechanisms are derived from the examination of microscale processes within a Representative Volume Element (RVE). Compaction and dislocation creep initially occur in the grain contact zone, where the applied stress is localized. A high void ratio at the beginning of the creep process leads to an increased creep rate. Pressure solution creep is modeled as a diffusion-controlled process in the contact zone; the creep rate is derived from the mass balance of salt ions. The material model is implemented in the finite element software JIFEMP and calibrated against an experiment of the KOMPASS-II project [2].
[1] Munson, D., & Dawson, P. (1984). Salt constitutive model using mechanism maps. The Mechanical Behavior of Salt. Proc. of the First Conf. on Salt, 673–680.
[2] Friedenberg, L., et al. (2024). KOMPASS-II: Compaction of Crushed Salt for Safe Containment - Phase 2. Gesellschaft für Anlagen- und Reaktorsicherheit (GRS).
In recent years, 4D printing has gained significant attention in the material modeling community. Unlike classical 3D Fused Deposition Modeling (FDM), 4D printing incorporates the dimension of time into the printing process. Recent publications have focused on characterizing printed materials, predicting their time-dependent behavior, and investigating the influence of infill patterns on the shape memory effect. Current investigations reveal that the direction of the stored strain held in printed structures strongly correlates with the print orientation. This work focuses on the characterization of the printed material and the quantification of the recovery strain imposed during the printing process, together with the subsequent shape-changing capability. Shape Memory Polymers (SMPs) are a subcategory of smart materials that have the capability to store an externally imposed shape, which can be recovered with an activation trigger, most commonly temperature. This trigger facilitates the formation and breakup of cross-links in the material, allowing for the capture of entropic deformation energy. During printing, this effect is imposed locally on the filament, influenced by parameters such as temperature and printer speed. This research emphasizes a new, more precise approach to modeling the material behavior and describing the recovery strain. An intermediate configuration is employed to predict the recovery after activation of the programmed state. By mixing free energy functions for both low and high temperatures, the model becomes adaptable to various types of polymers, though it is currently limited to mostly amorphous polymers due to the complexity introduced by crystallization. Uniaxial tensile tests are used to capture the elastic properties of the printed samples, extended with strain recovery tests to measure the stored strain that is imposed on the filament during the printing process. The imposed strain is assumed to be constant at each point, where the magnitude of the strain depends on the layer orientation angle around the vertical print axis. This constant recovery strain is linked to the printing parameters, storing the orientation as an internal field. Future research will extend this approach to more complex orientations and material behaviors, such as crystallization and visco-elastoplastic materials.
This study presents a thermodynamically consistent framework for modeling the viscoelastic behavior of ice under finite deformations, capturing its complex time-dependent mechanical response. A multiplicative decomposition of the deformation gradient into an elastic and a viscous component is employed, reflecting the material's ability to accommodate both reversible and irreversible deformations, see [1]. The formulation ensures compliance with the second law of thermodynamics through an appropriate ansatz for the evolution equation. A critical feature of the model is the integration of an exponential update algorithm for the evolution of the internal variables, as in [2], for the sake of numerical stability and accuracy in the large-deformation regime. The performance of the model is demonstrated using benchmark problems that reflect experimental observations of the deformation behavior of ice.
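To illustrate why exponential updates are attractive in this setting, the schematic Python snippet below advances an assumed viscous part of the deformation gradient with a matrix exponential; because the exponential of a trace-free tensor has unit determinant, viscous incompressibility is preserved exactly. The update form, the constant viscous velocity gradient and all numbers are illustrative stand-ins, not the algorithm of [2].

```python
import numpy as np
from scipy.linalg import expm

# Schematic exponential update of the viscous part of the deformation gradient,
#   Fv_{n+1} = expm(dt * Lv) @ Fv_n,
# where Lv is an (here assumed constant and deviatoric) viscous velocity gradient
# returned by the evolution equation. All numbers are purely illustrative.
def update_Fv(Fv_n, Lv, dt):
    return expm(dt * Lv) @ Fv_n

Fv = np.eye(3)
Lv = np.array([[ 0.01,  0.0,   0.0 ],
               [ 0.0,  -0.005, 0.0 ],
               [ 0.0,   0.0,  -0.005]])   # trace-free -> det(Fv) stays (numerically) 1
for _ in range(100):
    Fv = update_Fv(Fv, Lv, dt=0.1)
print(np.linalg.det(Fv))   # close to 1: the exponential map preserves incompressibility
```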
References:
[1] J. Christmann, R. Müller, and A. Humbert. On nonlinear strain theory for a viscoelastic material model and its implications for calving of ice shelves. Journal of Glaciology, 65(250):212–224, 2019.
[2] S. Govindjee and S. Reese. A presentation and comparison of two large deformation viscoelasticity models. Journal of Engineering Materials and Technology, 119(3):251–255, 1997.
Coupled problems arise in several applications. From a general point of view, each problem containing more than one primary field is called a coupled one. Usually the class of coupled problems is subdivided into volumetrically coupled problems and problems with surface coupling. The class of volumetrically coupled problems contains, e.g., the fluid flow in porous solids described by mixture theory, thermo-mechanically coupled problems, chemo-mechanically coupled problems and electro- or magneto-mechanically coupled problems, while the second class includes problems like fluid-solid interaction via an interface. Common to all these problems is that the presence of different fields requires special attention in the numerical treatment with respect to the multi-field formulation and the solution strategy. The session on coupled problems deals with all aspects mentioned above, ranging from modelling aspects to numerical solution strategies.
Hydrodynamic journal bearings are essential machine parts that are used in particular for applications with high rotational speeds. Their precise simulation requires the consideration of thermomechanical interactions between solids and fluid. During operation, the shear stresses in the fluid, with lubricant film heights in the range of 5 to 100 µm, lead to significant friction losses acting as a considerable heat source. This heating not only causes a non-linear change in the fluid viscosity, but also raises the temperature of the adjacent solid bodies. The resulting deformation of these solids in turn influences the pressure in the fluid film due to the thermomechanical coupling.
These coupling effects dynamically change the geometry of the lubricating film and significantly influence the load carrying capacity and dynamic properties (stiffness and damping) of the bearing.
Against this background, the present contribution investigates the thermomechanical interactions within a cylindrical radial journal bearing consisting of shaft and bearing shell including suitable boundary conditions. For this purpose, a temperature model for the solids is developed and implemented to solve the heat conduction equation and is coupled with a mechanical model. To achieve a high numerical efficiency, the p-finite element method is utilized and studied concerning its numerical behavior. The fluid temperature is described by the energy equation, which is approximated using the finite volume method.
After validating the model, it is applied to the thermomechanical analysis of a bearing to gain a deeper understanding of the coupling effects as well as their impact on the bearing’s performance including numerical effort and solution quality.
District heating grids (DHGs) are considered a key technology for decarbonizing the heating sector. In a DHG, heat energy is transported from heat sources to heat sinks through a network of pipes. Heat sources and heat sinks are usually connected to the district heating network via counter-flow heat exchangers, so that they are hydraulically separated but thermally coupled. Therefore, a precise understanding of the behavior of heat exchangers is required for efficient control of heat sources and heat sinks in DHGs. In this context, reduced-order models based on ordinary differential equations (ODEs) are usually utilized to derive control strategies.
State-of-the-art ODE-based models rely on empirical Nusselt correlations to describe the dependence of the heat transfer on the mass flow rates. This approach assumes fully developed flow and statistically stationary operating conditions, which is insufficient in various respects, especially when considering the requirements of modern multi-energy systems. As an alternative, computational fluid dynamics (CFD) could be utilized to capture the non-linear heat exchanger dynamics more accurately. However, CFD-based models are too costly for the direct derivation of control strategies. Therefore, in order to make CFD results usable in ODE-based models, in this work CFD-based parameter identification is performed for ODE-based heat exchanger models adapted from the literature.
In the contribution, the CFD model of an idealized counter-flow heat exchanger will be formulated and validated for transient flow and conjugate heat transfer. After that, it is demonstrated how CFD-based predictions of the heat transfer can be utilized for parameter identification of selected ODE-based heat exchanger models. The forthcoming analysis extends to a quantitative assessment of the differences and improvements.
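To make the structure of such reduced models tangible, the sketch below implements a deliberately simple two-node ("stirred tank") heat exchanger ODE with an assumed Nusselt-type mass-flow scaling of the heat transfer, UA ∝ ṁ^0.8; the model form and all parameter values are illustrative assumptions and not the specific models identified in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-node lumped heat exchanger: each side is treated as a stirred tank exchanging
# heat Q = UA*(Th - Tc), with UA scaled empirically by the mass flow rates.
c_p   = 4180.0            # J/(kg K), water
m_h, m_c = 2.0, 2.0       # kg, fluid mass held on the hot/cold side (assumed)
UA0, mdot0 = 500.0, 0.5   # W/K at the reference mass flow rate of 0.5 kg/s (assumed)

def rhs(t, T, mdot_h, mdot_c, Th_in, Tc_in):
    Th, Tc = T
    UA = UA0 * ((0.5 * (mdot_h + mdot_c)) / mdot0) ** 0.8   # Nusselt-type correlation
    Q  = UA * (Th - Tc)
    dTh = (mdot_h * c_p * (Th_in - Th) - Q) / (m_h * c_p)
    dTc = (mdot_c * c_p * (Tc_in - Tc) + Q) / (m_c * c_p)
    return [dTh, dTc]

sol = solve_ivp(rhs, (0.0, 600.0), [80.0, 20.0],
                args=(0.4, 0.6, 80.0, 20.0), max_step=1.0)
print(sol.y[:, -1])   # outlet temperatures approached after the transient
```

CFD-based parameter identification, as described above, would then fit quantities such as UA0 and the flow exponent to resolved conjugate heat transfer results instead of relying on textbook correlations.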
The heat sink (HS) is a heat exchanger that transfers the heat generated by an electronic or mechanical device to a fluid medium. It prevents damage to electronic equipment by continuously dissipating heat, enabling its sustained advancement and widespread use in applications [1]. In the last decades, researchers have pursued different approaches to improve the performance of HSs. Ghadhban and Jaffal [2] studied the effect of channel configuration and pressure drop on the thermal performance and characteristics of microchannel heat sinks. Peng et al. [3-4] investigated the influence of geometric parameters on the heat transfer efficiency of rectangular microchannels. They observed that optimal results were obtained under fixed height and total cross-sectional area. Arif et al. [5] explored the influence of the thermal interface between fin and base plate in pin-fin heat sinks, considering various interface geometries and contact properties. In order to further improve HS efficiency, more scientific work is required. The current work focuses on the effects of the channel geometric structure (wavy-shaped and rectangular-shaped geometry) and the fluid inlet velocity on the thermal performance and pressure drop characteristics of a mini-channel HS. The effects of the heat flux generated at the hotspot and background on channel HS efficiency are also included in this work. Conjugate heat transfer and fluid flow analysis in the HS were implemented using Comsol Multiphysics. The current simulation results of the heat sink were compared with the state of the art, and the obtained findings are presented and discussed.
Acknowledgment
This work has been supported by a grant from the Ministry of Science and Higher Education in Poland: 0612/SBAD/3628 (2024-25). Simulations were carried out at the Institute of Applied Mechanics, Poznan University of Technology (website: http://am.put.poznan.pl/en/home/).
References
[1] H. T. Dhaiban and M. A. Hussein, JACM, vol. 6, no. 4, pp. 1030–1043, 2020.
[2] F. N. Ghadhban and H. M. Jaffal, Results in Engineering, vol. 17, p. 100839, 2023.
[3] X. Peng, et al., International journal of heat and mass transfer, vol. 38, no. 1, pp. 127–137, 1995.
[4] B. Burlaga et al., Materials, vol. 15, no. 19, p. 6545, 2022.
[5] A. Arif et al., Journal of Electronic Packaging, vol. 134, no. 4, p. 041005, 2012.
Metastable alloys, like stainless steels, applied in the proximity of absolute zero undergo three coupled dissipative phenomena at extremely low temperatures: intermittent plastic flow (IPF), plastic strain-induced phase transformation, as well as the evolution of micro-damage, including radiation-induced damage. IPF is characteristic of low (LSFE) and high stacking fault energy (HSFE) materials strained at extremely low temperatures. It represents an oscillatory mode of plastic deformation and reflects the discontinuous nature of the plastic flow. Low-temperature serrated yielding (IPF) occurs below threshold temperatures: T1 for LSFE and T0 for HSFE materials. IPF consists of frequent abrupt drops of stress as a function of strain during kinematically controlled loading. The mechanism of IPF is related to the formation of dislocation pile-ups at the internal lattice barriers (the Lomer-Cottrell locks). Accumulation of dislocations in the pile-up leads to an increase of the resolved shear stress at the head of the pile-up until the stress reaches the level of the cohesive strength and the lattice barrier breaks. Another important phenomenon that occurs in metastable materials at extremely low temperatures is the plastic strain-induced fcc-bcc phase transformation. It manifests itself by a rapid change of crystallographic structure from the fcc parent phase to the bcc secondary phase. The plastic strain-induced martensitic transformation in stainless steels is related to the TRIP (transformation-induced plasticity) effect, resulting in a uniform, unrecoverable, macroscopic strain. Finally, a third important phenomenon is related to the evolution of micro-damage, including radiation damage. During irradiation, energetic particles penetrating the lattice displace the atoms from their original positions. Exposure to a flux of particles leads to the creation of clusters of defects. As a result of the cascade process, pairs of interstitial atoms and vacancies (the Frenkel pairs) are massively created. The vacancies often accumulate in clusters by means of diffusion. The nature of mechanically induced micro-damage, comprising micro-voids and micro-cracks, is different from the nature of radiation-induced micro-damage. Experimental identification of the plastic strain-induced phenomena at extremely low temperatures is quite complex. It involves a set-up consisting of a double-wall cryostat, a well-instrumented insert containing the sample, and a transfer line linking the liquid helium (or liquid nitrogen) dewar with the cryostat mounted in a standard traction machine. Specific tests aimed at understanding the IPF as well as the plastic strain-induced fcc-bcc phase transformation are reported. The relevant constitutive models are shown. Moreover, dedicated experiments related to macro-crack initiation and propagation in the presence of IPF and the fcc-bcc phase transformation are discussed.
This section is dedicated to discuss recent advances in multiscale and homogenization techniques for static and dynamic problems. Topics of particular interest are nonlinear homogenization techniques, multiscale modelling of failure processes and localization phenomena, FE2 methods, atomistic to continuum coupling, contact homogenization, model reduction techniques and furthermore homogenization schemes incorporating experimentally determined microstructure data.
The numerical modeling of asphalt's mechanical behavior poses significant challenges due to its anisotropic and nonlinear viscoplastic characteristics. This paper presents a thermodynamically consistent multiscale material model based on the Microlayer framework (ML). In contrast to conventional approaches, the ML model dispenses with discrete modeling of the microstructure and instead uses geometric shapes inspired by the real microstructure. This enables analytical determination of microscale unknowns without relying on computationally intensive FE² methods.
The ML models the microstructure using an infinitely rigid aggregate onto which thin, deformable layers are imprinted. Each Microlayer exhibits a unique transient mechanical evolution, resulting in the macroscopic anisotropy of asphalt. A key advantage of this approach is that the macroscopic material response can be described using established isotropic material models [1]. This simplifies the experimental validation of the model.
Classical laboratory tests for the quality assurance of German asphalt roads and optimization techniques are conducted to validate the proposed approach. The results demonstrate excellent agreement between experimental data and model predictions, highlighting the ML framework's ability to precisely and efficiently capture the anisotropic behavior of asphalt. The new model offers considerable potential for practical applications such as pavement structure simulations.
[1] May, M.; Platen, J.; Wollny, I. and Kaliske, M.: Microlayer Framework: Extension to Viscoelastic Material Behaviour for Finite Strains. Proceedings in Applied Mathematics and Mechanics, 2024.
Mechanical metamaterials are architected materials whose mechanical properties are determined by the combination of a bulk material and a designed microstructure. Therefore, these materials are ideally suited for inverse design tasks, i.e., for finding a specific metamaterial for a given property. Within this contribution, we present a neural network (NN)-based inverse design approach and apply it to two well-known classes of metamaterials, triply periodic minimal surfaces (TPMS) [1] and spinodoid metamaterials [2]. TPMS metamaterials are mathematically described through combinations of trigonometric functions and are characterized by having zero mean surface curvature and periodicity in 3D space. Within the scope of inverse design, it becomes necessary to describe a structure with a set of design parameters, also referred to as descriptors, enabling the seamless tuning of the geometry to meet target mechanical properties. Even though there is a wide variety of TPMS types exhibiting different topologies, they lack such a common descriptor space, complicating their application in inverse design tasks. In recent years, spinodoid metamaterials inspired by naturally occurring spinodal topologies have gained considerable attention due to favourable properties such as their tunable anisotropy and non-periodicity, and thus robustness against symmetry-breaking defects [2]. These structures offer a vast design space defined by only a small set of design parameters, i.e., descriptors. The relatively simple parametric representation of these complex spinodoid structures offers a unique opportunity to create architected materials with precisely adjustable mechanical properties, making them favourable for application in inverse design [3]. After introducing the design possibilities of both architecture types, they are compared with regard to achievable elastic properties based on the formulation of an inverse design task. Specifically, we solve two different design tasks: maximizing stiffness in one direction for a given density and mimicking a fully specified stiffness.
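To give an impression of how compact the spinodoid descriptor space is, the following Python sketch generates a spinodoid-like structure as the super-level set of a Gaussian random field built from plane waves with cone-restricted directions, broadly following the construction of [2]; the implementation details (rejection sampling of directions, wavenumber, grid resolution, parameter values) are simplifying assumptions for illustration only.

```python
import numpy as np
from scipy.special import erfinv

def spinodoid_sample(n_pts=32, n_waves=200, beta=10*np.pi,
                     thetas=(np.deg2rad(15), np.deg2rad(15), 0.0),
                     rel_density=0.5, seed=0):
    """Simplified spinodoid generator: Gaussian random field from plane waves whose
    directions lie in cones of half-angles `thetas` around the coordinate axes; the
    solid phase is the super-level set matching the target relative density."""
    rng = np.random.default_rng(seed)
    dirs = []
    while len(dirs) < n_waves:                       # rejection sampling of admissible directions
        v = rng.normal(size=3); v /= np.linalg.norm(v)
        if any(th > 0 and abs(v[k]) >= np.cos(th) for k, th in enumerate(thetas)):
            dirs.append(v)
    dirs = np.array(dirs)
    phases = rng.uniform(0, 2*np.pi, n_waves)

    x = np.linspace(0, 1, n_pts)
    X = np.stack(np.meshgrid(x, x, x, indexing="ij"), axis=-1)          # (n, n, n, 3)
    phi = np.sqrt(2.0/n_waves) * np.cos(beta * X @ dirs.T + phases).sum(axis=-1)

    level = np.sqrt(2.0) * erfinv(1.0 - 2.0*rel_density)                # unit-variance GRF
    return phi > level                                                   # boolean solid mask

solid = spinodoid_sample()
print(solid.mean())   # should be close to the requested relative density
```

The cone half-angles and the relative density thus act as the descriptors an inverse-design network can operate on.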
References
[1] Fisher, J.W. et al. Catalog of triply periodic minimal surfaces, equation-based lattice structures, and their homogenized property data. Data in Brief 49, (2023).
[2] Kumar, S. et al. Inverse-designed spinodoid metamaterials. npj Comput. Mater. 6, 73 (2020).
[3] Raßloff, A. et al. Inverse design of spinodoid structures using Bayesian optimization, (2024).
The homogenization of solids with a fixed microstructure has been extensively studied over the last decades for a variety of material systems. However, homogenizing phase-transforming materials with an evolving microstructure still remains an unresolved challenge, even from a methodological perspective. Specifically, the conventional definition of a microscopic length scale is based on the wavelength of fluctuations in material properties, such as density, mechanical stiffness, or heat conductivity, as determined by the physical modeling problem. In this regard, the ratio of micro- and macroscopic scales characterizes the scale separation and therefore determines the mathematical structure of the homogenized continuum theory. For phase-transforming materials, however, the lengths of these fluctuations change over time. This makes it almost impossible to properly define representative volume elements, as the microstructure in general evolves heterogeneously. Consequently, conventional two-scale homogenization approaches, such as the FE2-method, cannot be consistently applied when considering phase transformation mechanisms with complex interface topologies.
To address these issues, the concept of variable scale separations was recently developed in [1] as a fundamental homogenization framework for phase transforming solids. In the present work, this rather theoretical approach is applied to homogenize the phase state of an Allen-Cahn-type system for different spatial scale separations. More precisely, the ratio between micro- and macroscopic scales is now defined through a scale separation factor, which is independent of any wavelength of property fluctuations, and can therefore be chosen arbitrarily. The procedure for deriving the macroscopic phase evolution equations is motivated by spatial regularization methods known from non-local damage theories and micromorphic continua, see [2] and [3], respectively. Whereas the micro-macro transition is formulated only based on the unweighted average of the system's energy within conventional homogenization schemes, the outlined approach incorporates arbitrary measures of effectiveness. As an illustrative example, the martensite laminate orientation is homogenized as the result of spatially and temporally resolved two-dimensional finite element simulations of microstructure formation in a multigrain structure, demonstrating the method's applicability to systems with complex phase topologies.
References:
[1] von Oertzen, V. and Kiefer, B., 2024, J. Mech. Phys. 195, 105961
[2] Peerlings, R. H. J., De Borst, R., Brekelmans, W. A. M. and De Vree, J. H. P., 1996, Int. J. Numer. Methods Eng. 39, 3391–3403
[3] Forest, S. and Trinh, D. K., 2011, Z. Angew. Math. Mech. 91, 90–109
Fused Filament Fabrication (FFF), a widely used additive manufacturing method, is widely recognized for its cost-effectiveness and ability to produce complex geometries with high design flexibility. Despite these advantages, the layer-by-layer deposition process introduces inherent challenges, including anisotropic mechanical properties, internal voids, and weak interlayer bonding, all of which degrade the performance of printed parts. Addressing these issues requires a deeper understanding of the relationship between microstructure and the resulting mechanical properties.
To address these issues, this study presents a new algorithm for implementing periodic boundary conditions (PBCs) in Representative Volume Element (RVE) analysis, specifically designed to handle the complexities of simulating materials with complex geometries and microstructural variations.
The newly developed algorithm introduces an efficient formulation to ensure deformation compatibility across RVE boundaries, enabling accurate simulation of mechanical responses under various loading conditions. A key improvement of this algorithm is its simplified approach to identifying corresponding nodes on opposite boundaries, which significantly reduces computational complexity. As a result, the script's length was reduced by more than 50% compared to the well-established reference algorithm, making it more compact and computationally efficient.
The algorithm was implemented in Python and integrated into the Abaqus finite element software for user-friendly application. It facilitates the calculation of effective elastic constants, such as Young’s modulus, shear moduli, and Poisson’s ratios, with high precision and was verified by comparison with a well-established algorithm from the literature, confirming its accuracy and reliability.
This streamlined algorithm and its efficient implementation enhance the computational process for modeling in additive manufacturing. By improving the efficiency of simulations, this study offers a practical tool for addressing challenges in the mechanical characterization of 3D-printed structures, contributing to the optimization of materials and components for various applications.
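As an illustration of the node-matching step that such PBC implementations hinge on, the following Python sketch pairs nodes on opposite RVE faces by lexicographic sorting of their in-plane coordinates; it assumes geometrically matching meshes on both faces and is a generic stand-in, not the algorithm developed in this study.

```python
import numpy as np

def match_face_nodes(coords, axis, lo, hi, tol=1e-8):
    """Pair nodes on two opposite RVE faces by lexicographic sorting of their
    in-plane coordinates (uniformly matching meshes on both faces assumed)."""
    in_plane = [a for a in range(coords.shape[1]) if a != axis]
    minus = np.where(np.abs(coords[:, axis] - lo) < tol)[0]
    plus  = np.where(np.abs(coords[:, axis] - hi) < tol)[0]
    order = lambda ids: ids[np.lexsort(coords[ids][:, in_plane].T)]
    minus, plus = order(minus), order(plus)
    assert np.allclose(coords[minus][:, in_plane], coords[plus][:, in_plane], atol=tol)
    return list(zip(minus, plus))

# Each pair (n_minus, n_plus) then yields a linear constraint of the form
#   u(n_plus) - u(n_minus) = E_macro @ (x_plus - x_minus),
# which can be written, e.g., as constraint equations in the FE model.
coords = np.array([[x, y, z] for x in (0., 1.) for y in (0., .5, 1.) for z in (0., .5, 1.)])
print(match_face_nodes(coords, axis=0, lo=0.0, hi=1.0))
```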
Cellular materials, with their unique combination of light weight, high strength, and good deformability, are promising for engineering applications. They may initially be divided into irregular structures and regularly repeating arrangements. Both categories can be open-celled or closed-celled. This contribution investigates the energy-absorbing properties of foamy and lattice-like structures with defined volume fractions in static and dynamic compression experiments. The specimens are manufactured by SLA printing of a viscoelastic polymeric material. The structures are loaded in a tension test machine and by using a Split-Hopkinson pressure bar with a modified setup. From the measured strains, we can calculate how much of the applied energy was absorbed by the different structures. We then deduce a mass-specific energy absorption. The choice of material implies that the lattice structures can withstand multiple loads and return to their original state after some recovery, cf. [1, 2]. Additionally, we present finite element simulations of our experiments and discuss how far these calculations are able to predict the different material responses.
[1] S. Bieler, K. Weinberg: Energy absorption of sustainable lattice structures under static compression. arXiv:2412.06493 [physics.app-ph]
[2] S. Bieler, K. Weinberg. Energy absorption of sustainable lattice structures under impact loading. arXiv:2412.06547 [physics.app-ph]
Discontinuous fiber-reinforced composites offer a high potential for lightweight applications due to their favorable specific stiffness and design freedom. Micro-CT imaging reveals that their microstructure is highly anisotropic, random and heterogeneous, which needs to be considered to predict their mechanical properties properly. For such materials, computational multiscale methods are capable tools, relying on representative geometrical descriptions of the microstructure. However, to use these methods, synthetic microstructures representing the considered materials are necessary first. For long fiber reinforced composites, the microstructure is mainly described by the fiber volume fraction, the fiber length and orientation distributions as well as the fiber curvature [1]. To realize the fiber curvature of long fiber reinforced composites, the fused sequential addition and migration (fSAM) method [2] models the fibers as polygonal chains. Based on an optimization scheme, the method searches for non-penetrating fiber arrangements which fulfil the further desired descriptors, e.g., the fiber orientation distribution. As the polygonal chains are defined on a curved manifold, an adapted gradient descent scheme is applied moving along the geodesics, i.e., the shortest lines between two points on the manifold. The fSAM algorithm prevents unrealistically high fiber bending by restricting the angles between adjacent segments. However, besides this angle control, the fSAM algorithm does not account for the degree of fiber curvature, even though this property may influence the mechanical behavior of the microstructure. Thus, we present an extension of the fSAM algorithm to account for the fiber curvature as a desired quantity. To this end, we first discuss the implemented curvature measure for polygonal chains. Then, we introduce the adapted objective function to account for the desired degree of curvature and present the algorithmic choices within the fSAM algorithm to realize microstructures with varying degrees of curvature. Based on the extended fSAM algorithm, we conduct numerical studies with respect to the influence of the curvature on the effective properties of long fiber reinforced composites.
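As a simple illustration of what a curvature measure for a fiber modeled as a polygonal chain can look like, the snippet below evaluates the turning angle per unit length at each interior vertex; this particular discrete measure is an assumption for illustration and not necessarily the one implemented in the extended fSAM algorithm.

```python
import numpy as np

def discrete_curvature(points):
    """Turning angle at each interior vertex of a polygonal chain, divided by the
    mean length of the two adjacent segments (one possible discrete curvature)."""
    p = np.asarray(points, float)
    seg = np.diff(p, axis=0)                       # segment vectors
    length = np.linalg.norm(seg, axis=1)
    kappa = []
    for i in range(1, len(p) - 1):
        t0, t1 = seg[i-1] / length[i-1], seg[i] / length[i]
        angle = np.arccos(np.clip(np.dot(t0, t1), -1.0, 1.0))
        kappa.append(angle / (0.5 * (length[i-1] + length[i])))
    return np.array(kappa)

# Straight chain -> zero curvature; a kinked chain -> nonzero values.
straight = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
kinked   = [[0, 0, 0], [1, 0, 0], [1, 1, 0]]
print(discrete_curvature(straight), discrete_curvature(kinked))
```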
References:
[1] S. Fliegener, M. Luke, and P. Gumbsch: 3D microstructure modeling of long fiber reinforced thermoplastics. Composites Science and Technology, 104, 136-145 (2014).
[2] C. Lauff, M. Schneider, J. Montesano, and T. Böhlke: Generating microstructures of long fiber reinforced composites by the fused sequential addition and migration method. International Journal for Numerical Methods in Engineering, 125(22), e7573 (2024).
This section will focus on the analysis and modeling of transition from laminar to turbulent flow using DNS, LES, RANS equations, and experiments. Contributions are expected in, but not limited to, the following topics: stability of incompressible and compressible flows, fundamental study of the dynamics of transition, influence of the wall roughness on transition, transition modelling for LES and RANS equations, transition in flows with complex geometries, subcritical transition.
Background: PHACES syndrome is regarded as a rare condition with associated disorders that affects multiple parts of the child's body. The presence of this syndrome can be identified through a range of manifestations, including cardiac and arterial anomalies, ocular anomalies or even dermatological issues. The risk of cerebrovascular accident (CVA) resulting from PHACES syndrome is multifactorial, although the mechanistic principles remain unclear. The current literature lacks information on the potential influence of arteriopathy on the risk of CVA. Consequently, the primary objective of this research was to assess the impact of the arteriopathy present in PHACES syndrome in order to identify a possible cause of CVA.
Methods: A spatial model of an arterial system was prepared on the basis of biomedical imaging data. Subsequently, the reference geometry was modified to reflect three scenarios of the internal carotid artery: a) PHACES syndrome (severe tortuosity and high hypoplasia), b) severe tortuosity, c) high hypoplasia. The authors then conducted numerical investigations utilising the fluid-structure interaction (FSI) method to accurately predict flow hemodynamics under pulsatile flow conditions. Additionally, a blood washout study was incorporated into the analysis.
Results: The results of the numerical simulations demonstrated that the simultaneous combination of severe tortuosity and high hypoplasia can result in the creation of a thrombogenic environment, with hypoplasia appearing to be the dominant factor. A notable reduction in blood flow intensity was observed in the affected artery, accompanied by an increase in blood viscosity and a rise in blood stagnation. Furthermore, a considerable increase in the proportion of "non-washed-out" blood was evident in the PHACES syndrome case study.
Conclusions: It has been demonstrated that numerical simulations can facilitate statistical analyses and provide insights into the mechanistic underpinnings of potential cerebral venous sinus thrombosis (CVST) associated with PHACES syndrome. This research elucidated the potential for thrombus formation in multiple regions of the PHACES syndrome geometry, which could potentially detach and obstruct further cerebral arteries, leading to CVST.
Background: Rupture of a giant intracranial aneurysm (GIA, an aneurysm larger than 25 mm) can result in either serious brain damage or death. Bypassing is one of the available methods to prevent the aneurysm from rupturing. This involves neurosurgeons carefully cutting off the blood inflow to the aneurysm sac, while also creating an additional connection to the distal arteries. It is thought that such a connection may help to preserve the proper blood supply to all efferent arteries. However, to date there is no information on which cerebral perforator should be selected as the ‘anchoring’ vessel. It is challenging to predict the outcomes of each possibility in clinical practice; however, it is possible with the use of computational fluid dynamics (CFD). Thus, the main objective of this study was to perform virtual bypass procedures and observe which configuration was optimal for the given patient.
Methods: The geometry of the arterial system, including the giant aneurysm, was reconstructed from biomedical imaging scans using custom software. Two bypass options were then proposed. Afterwards, a series of fluid-structure interaction (FSI) simulations were performed assuming physiological vasomotion of the walls. This allowed us to assess the flow-haemodynamics in the cerebral region, including shear stress, blood distribution, pressure distribution and aneurysm volume.
Results: Comparative analysis of the results from three case studies indicated that the configuration of the bypass may have a significant impact on the internal flow conditions within the aneurysmal region. Notable differences were observed in the shear-related parameters as well as in viscosity. A notable decrease in pressure was observed, which may have contributed to the observed aneurysm shrinkage. Additionally, areas that may be susceptible to thrombosis were identified.
Conclusions: In-silico analyses show that bypassing surgeries reduce mechanical stress on vessel walls and that the risk of thrombus formation varies depending on the bypass location.
Background: It is estimated that 25-50% of cerebral aneurysm rupture cases end with the patient's death, while around 50% of survivors are left with permanent brain damage. Thus, it is extremely important to secure the aneurysm before it ruptures. Each endovascular treatment procedure can be carried out with various implantable devices, and the final choice of the treatment method is made by the neurosurgeon based on their experience. The main objective of this research was to evaluate blood flow changes resulting from applying varied treatment options for a particular patient with the use of computational fluid dynamics (CFD) tools.
Methods: The patient-specific model of the cerebral aneurysm with afferent and efferent arteries was reconstructed based on biomedical imaging data. Then, idealized perforating vessels were added to the 3D model to mimic the presence of the real small-caliber vessels. The authors then performed virtual stenting with Flow Diverters manufactured by different companies (P64, PED, Surpass Streamline and Evolve) and virtual stent-assisted coiling with varied coil-packing densities (0, 10, 15, 20, 25, and 30%). Finally, transient simulations of the pulsatile flow were carried out, utilizing the Immersed Solid Method to model the presence of the particular stent.
Results: We noted significant changes in velocity- and viscosity-related parameters, as well as pressure and wall shear stress (WSS) magnitudes. Additionally, stent/coils presence resulted in more uniform pressure distribution at the aneurysm wall. Furthermore, a drastic reduction in the blood supply to the perforators covered by the stent wirebraid was observed. This confirms the statement that small-caliber vessels cannot be neglected while analyzing an influence of the stent on the overall flow hemodynamics.
Conclusions: Numerical analyses proved that magnitude of the changes in all hemodynamic parameters is strictly related to the topology of the stent wirebraid and coil packing density. Thus, this suggests that CFD tools can provide valuable information during selection of the most optimal treatment method for particular patient.
Flow-induced red blood cell damage (hemolysis) occurs when flow forces induce excessive deformation or rupture of red blood cells (RBCs). While physiological flow conditions are generally well tolerated by RBCs, blood-handling medical devices can expose them to high stresses. Hemolysis is therefore a key factor in the design process of ventricular assist devices.
The numerical prediction of this phenomenon is based on computational fluid dynamics (CFD) simulations. Simple stress-based hemolysis models post-process the CFD results by reducing three-dimensional fluid stresses to representative scalar stresses. Flow vorticity, i.e., the antisymmetric part of the velocity gradient, is generally neglected in these models. More complex strain-based models resolve RBC deformation and orientation. This way, they can account for the viscoelastic behavior of the RBC membrane and take into account all flow components, including vorticity.
Due to the widespread popularity of stress-based models, vorticity is not commonly taken into account when modeling hemolysis. We systematically compare stress-based and strain-based models to investigate the influence of vorticity on RBCs. We find that vorticity affects cell orientation and deformation. We show a selection of test cases to illustrate the effect of vorticity on red blood cells. Moreover, we show how our findings help explain experimental observations on the significance of extensional stresses over shear stresses.
Overall, the results suggest that vorticity is an important factor in hemolysis modeling. Strain-based models can naturally account for such effects by modeling the cell’s response to the local flow field. Stress-based models, however, need to be extended to include vorticity. This also has implications for the design and interpretation of hemolysis experiments, which have historically focused on shear stresses.
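To make the distinction concrete, the short Python sketch below splits an assumed velocity gradient into its strain-rate and vorticity parts and evaluates a Frobenius-norm-based scalar stress, which is the kind of quantity stress-based models reduce the flow to; the specific gradient, the viscosity value, and this particular scalar-stress definition are illustrative choices, not the models compared in this work.

```python
import numpy as np

# What a stress-based hemolysis model "sees": only the symmetric (strain-rate) part of
# the velocity gradient enters the scalar representative stress, while the antisymmetric
# (vorticity) part is discarded. Values are illustrative.
mu = 3.5e-3                                   # dynamic viscosity of blood [Pa s]
L = np.array([[0.0, 500.0, 0.0],              # simple-shear-dominated velocity gradient [1/s]
              [0.0,   0.0, 0.0],
              [0.0,   0.0, 0.0]])

D = 0.5 * (L + L.T)                           # strain-rate tensor
W = 0.5 * (L - L.T)                           # vorticity tensor (ignored by stress-based models)

tau = 2.0 * mu * D                            # viscous stress tensor
sigma_scalar = np.sqrt(0.5 * np.tensordot(tau, tau))   # Frobenius-norm-based scalar stress
print(sigma_scalar, np.linalg.norm(W))        # for pure shear this recovers mu * shear rate
```

A strain-based model would instead feed the full gradient L, including W, into an evolution equation for cell shape and orientation.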
The arterial walls become less flexible as individuals age or due to specific diseases. This leads to the heart pressure wave being less dampened, potentially damaging the sensitive tissues of organs such as the brain and kidneys. Additionally, the pressure wave reflected from the arterial branches combines with the primary wave from the heart, leading to hypertension. The traditional method of assessing arterial wall stiffness involves measuring the times at which pressure maxima occur in the carotid and femoral arteries. This approach utilizes the Moens-Korteweg equation to calculate the average arterial stiffness between these points. However, accurately determining the distance and average diameter of the arteries between these points introduces challenging measurement errors.
The paper presents an innovative technique to determine the local stiffness of the carotid artery by measuring its diameter changes with an ultrasound scanner and blood pressure using an applanation tonometer. Using these noninvasive techniques, an equivalent Young's modulus, representing the arterial wall stiffness, is calculated through a Kalman filter.
Two vessel deformation models are applied: an analytical model of a linearly elastic cylinder under internal pressure and a numerical model utilizing a neo-Hookean constitutive law. The Dual Extended Kalman Filter is used, wherein the first phase filters the measured pressure, which is then used in the Extended Kalman Filter to determine Young's modulus of the artery wall. Both the analytical and numerical models yielded close results.
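For readers unfamiliar with the estimation step, the following minimal Python sketch runs a scalar extended Kalman filter that identifies an equivalent Young's modulus from synthetic pressure-diameter data, using a thin-walled linearly elastic cylinder as the measurement model; the measurement law, noise levels and all parameter values are illustrative assumptions, and this is a plain EKF, not the dual filter of the contribution.

```python
import numpy as np

# Assumed thin-walled cylinder measurement model:  D(p) = D0 * (1 + p*D0/(2*E*h)).
D0, h = 6.0e-3, 0.6e-3                 # assumed reference diameter and wall thickness [m]
Q, R  = (5e3)**2, (0.02e-3)**2         # assumed process and measurement noise variances

def ekf_estimate(p_meas, D_meas, E0=300e3, P0=(200e3)**2):
    E, P = E0, P0
    for p, D in zip(p_meas, D_meas):
        P = P + Q                                        # predict (random-walk model for E)
        D_pred = D0 * (1.0 + p*D0/(2.0*E*h))             # predicted diameter
        H = -p * D0**2 / (2.0 * E**2 * h)                # dD/dE (measurement Jacobian)
        K = P*H / (H*P*H + R)                            # Kalman gain
        E = E + K * (D - D_pred)                         # state update
        P = (1.0 - K*H) * P
    return E

# Synthetic check: data generated with an assumed E_true = 400 kPa plus measurement noise.
rng = np.random.default_rng(1)
p = np.linspace(8e3, 16e3, 200)                          # pressure over a cardiac cycle [Pa]
D = D0*(1 + p*D0/(2*400e3*h)) + rng.normal(0, 0.02e-3, p.size)
print(ekf_estimate(p, D))                                # should approach roughly 4e5 Pa
```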
This session is devoted to the mathematical analysis of natural phenomena and engineering problems. In this area PDEs play a basic role. Therefore lectures discussing analytical aspects of PDE problems as well as problems in the Calculus of Variations are welcome.
We present a new diffuse interface model describing the growth of a tumor, whose evolution is assumed to be governed by biological mechanisms such as proliferation of cells via nutrient consumption and apoptosis. In this context, the tumor is seen as an expanding mass surrounded by healthy tissues, while the interface in between contains a mixture of both healthy and tumor cells. More precisely, we model the process through a non-isothermal Allen-Cahn system coupled with a reaction-diffusion equation for the nutrient, following the approach based on microforce balance and then study its well-posedness. Namely, we are able to prove the existence and uniqueness of a solution to our model by Galerkin's approach. This is a joint project with Stefania Gatti and Alain Miranville.
Viscoelastic phase separation plays an important role in biological cells; for instance, RNA and proteins can undergo phase separation to form membrane-less condensates, which is crucial for biological function. In our contribution, we consider a model for phase separation in polymer solutions consistent with the second law of thermodynamics, introduced by Zhou et al. 2006. For the full model with additional stress diffusion, existence of global-in-time weak solutions is established by Brunk–Lukáčová-Medvid’ová 2022 via a Faedo–Galerkin ansatz for both degenerate and non-degenerate mobilities.
In this talk, we neglect the hydrodynamic transport and consider a constant mobility and a regular potential. Moreover, we focus on the case of a constant bulk modulus and a constant relaxation time. Exploiting the gradient-flow structure, we establish the global well-posedness (meaning existence, uniqueness and stability estimate) of the initial-boundary-value problem using a time-incremental minimisation scheme. We then address a singular limit through evolutionary $\Gamma$-convergence and Evolutionary Variational Inequality (EVI) solutions, tackling the primary challenge posed by the lack of equi-compactness of the energies.
This is a joint work with Katharina Hopf (WIAS) and Matthias Liero (WIAS).
A model describing both viscoelastic and viscoplastic behavior of two-phase flows has been developed. This model is used in geodynamics, e.g., to describe the evolution of fault systems in the lithosphere over geological time scales. The system consists of a momentum balance, an evolution law for an extra stress tensor, and a Cahn-Hilliard-type evolution law that governs phase separation. The Cauchy stress in the momentum balance is constituted by a viscoelastic Stokes-like term as well as by a contribution from the extra stress. The evolution law for the extra stress incorporates the Zaremba-Jaumann time derivative and a non-smooth viscoplastic dissipation mechanism. To analyze this system, appropriate concepts of generalized solutions are addressed. This is joint work with Marita Thomas (WIAS and FU Berlin) and Robert Lasarzik (WIAS Berlin) within project C09 "Dynamics of rock dehydration on multiple scales" of CRC 1114 "Scaling Cascades in Complex Systems" funded by the German Research Foundation.
We propose a variational model for two-phase surfactant films separating aqueous and oily fluids. Considering two species of surfactant molecules, we describe a phase separation within the film. The analysis builds on a model for single-species lipid biomembranes proposed by Peletier and Röger [ARMA 2009]. We prove a Gamma-convergence result in the limit of vanishing surfactant length and show that the limit inherits a phase separation and a bending energy.
We consider a Stokes flow coupled with advective-diffusive transport in an evolving domain with boundary conditions allowing for inflow and outflow. The domain evolution is induced by the transport process, leading to a fully coupled problem. We prove existence of weak solutions to the system using a fixed-point method which allows us to treat the nonlinear coupling. Our approach aims to model the thermal control of blood flow in human skin and the underlying physiological processes. To this end, the model takes into account temperature-dependent production of biochemical substances, the subsequent dilation of blood vessels, and the resulting changes in convective heat transfer.
We study the effective behavior of random, heterogeneous, anisotropic, second order phase transitions energies that arise in the study of pattern formations in physical-chemical systems. Specifically, we study the asymptotic behavior, as $\varepsilon$ goes to zero, of random heterogeneous anisotropic functionals in which the second order perturbation competes not only with a double well potential but also with a possibly negative contribution given by the first order term. We prove that, under suitable growth conditions and under a stationarity assumption, the functionals $\Gamma$-converge almost surely to a surface energy whose density is independent of the space variable. Furthermore, we show that the limit surface density can be described via a suitable cell formula and is deterministic when ergodicity is assumed.
Optimization is the natural next step after simulation, and its importance will continue to grow. The aim of this session is to provide the basis for a holistic overview of all areas of optimization. Thus, abstracts from both a theoretical and an applied perspective are welcome.
It is well known that the compliance minimization of trusses is an ill-posed problem. Even for the simplest loads, one is forced to keep adding joints and bars of decreasing cross sections to ensure that the objective function decreases. In the limit, we find ourselves in the realm of Michell structures, which combine a continuum medium with bars, usually curved. In the bending regime, the analogous problem is that of finding an optimal grillage. So far, in the literature, the design of grillages has been addressed by adopting the methods established for Michell trusses, both from the theoretical and the numerical standpoint. In this talk I will present an alternative perspective on the grillage optimization problem. Through a novel Kantorovich–Rubinstein-type duality, an optimal transport reformulation can be found. New theoretical insights follow. We prove that, when only point loads are applied, a classical optimal grillage (made up of a finite number of beams) does exist. For more general loading, including uniform pressure, we obtain structure theorems for the optimal trajectories of bending moments: they concentrate on an infinite graph; in particular, there is no need for curved beams. The new perspective also unlocks optimal transport numerical methods. They are not only much more efficient, but they also allow the positions of the extra joints to be identified exactly, without the need for discretizing the design space.
This talk will discuss the results of the joint paper with Guy Bouchitté, see https://arxiv.org/abs/2412.00516.
Isometric deformations of elastic thin plates are very rigid. Folds enable increased flexibility of elastic deformations. We ask for optimal fold-patterns on an elastic thin plate that minimize some cost of the resulting deformation subject to suitable boundary conditions. To approximate the elastic deformation for given fold-pattern and the optimization problem, we will take into account a phase field model of Ambrosio–Tortorelli-type. We show results of the forward simulation and the PDE-constrained optimization problem for different forces, boundary conditions, and cost functionals. This is joint work with Martin Rumpf.
The interaction between actuation and mechanical structures causes complexity in lightweight design optimizations of arm-like manipulators, e.g., industrial robots. While changes in the actuation concept cause changes in load cases which lead to different optimized structures, changes in structural components influence their inertial properties which influence the required actuation forces or torques for predefined motions. Design and parameter changes in actuation and structure have an impact on the overall system. Optimization is a way to combine the design of actuation and structure.
The human musculoskeletal system is an example of an optimal system that considers the interaction between structure and actuation. Muscle contractions cause bone movement because they are connected via tendons. The application of force via the tendons minimizes bending stresses in the bones. Bone structure absorbs the resulting compressive stresses optimally. In arm-like manipulators, bending is as important as in the human arm. For technical applications, actuation concepts based on the human musculoskeletal system use, e.g., ropes to move the components and influence bending.
In this contribution, a coupled optimization problem with a nested loop is formulated to coordinate a rope-based actuation design with a topology optimization. With a multibody simulation (outer loop) and a predefined motion with an external load at the end effector, static load cases are derived for topology optimization (inner loop). The structural properties of the optimized structure:
- Masses
- Center of gravity
- Moments and products of inertia
are considered in the next multibody simulation, which provides new load cases for topology optimization. This repeats until the structural properties and the objective function of the topology optimization converge. This process is done for different actuation concept parameters, e.g., rope end point positions. The different results show the complexity between actuation concept parameters and topology-optimized structure.
The presentation will explore part of the stochastic model of phase transformations in eutectoid steel that has been developed (see [1,2,3]). In this model, the nucleation process is modeled using a sequence of random variables following the Bernoulli distribution with a given probability of success, which we interpret as the initiation of nucleation. The initiation time plays a crucial role, as the volume fraction depends on how long the process has been running when it is measured. I will restrict my considerations to the model describing an isothermal process, in which the probability of nucleation occurring at a given time remains constant. For this simplified model, I will present an alternative approach for generating the nucleation start time, utilizing an exponential distribution for this purpose. Moreover, I will use this distribution of the nucleation start time to determine the theoretical distribution of the volume fraction. Finally, the theoretical results will be used as a reference for the results obtained from the computer simulations to generate the benchmark function for this process. In this context, the primary goal is to determine the optimal simulation parameters that not only ensure a high level of consistency with the theoretical results but also guarantee simulation performance.
[1] Szeliga D., Czyżewska N., Klimczak K., Kusiak J., Morkisz P., Oprocha P., Pietrzyk M., Przybyłowicz P., Accounting for random character of some metallurgical phenomena and uncertainty of process parameters in modelling phase transformations in steels, Canadian Metallurgical Quarterly, 63(2), 2024, 460–467.
https://doi.org/10.1080/00084433.2023.2219948
[2] Szeliga D., Foryś J., Jażdżewska N., Kusiak J., Morkisz P., Nadolski R., Oprocha P., Pietrzyk M., Przybyłowicz P., Inverse problem in the stochastic approach to modelling of phase transformations in metallic materials during cooling after hot forming, Journal of Materials Engineering and Performance, 2024 (published online),
https://doi.org/10.1007/s11665-024-10458-x
[3] Szeliga D., Jażdżewska N., Foryś J., Kusiak J., Nadolski R., Oprocha P., Pietrzyk M., Potorski P., Rauch Ł., Zalecki W., Stochastic model of accelerated cooling of eutectoid steel rails, Modelling and Simulation in Materials Science and Engineering, 2025 (published online),
https://doi.org/10.1088/1361-651X/ada81c
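As a minimal illustration of the alternative sampling approach mentioned above (under the stated assumption of a constant per-step nucleation probability), the following sketch compares the nucleation start time obtained from sequential Bernoulli trials with one drawn directly from an exponential distribution with rate q/dt; all numerical values are illustrative only.

import numpy as np

# Hedged sketch: for a constant per-step success probability q and small time
# step dt, the nucleation start time from step-by-step Bernoulli trials is
# statistically close to an exponentially distributed start time with rate q/dt.
rng = np.random.default_rng(1)
dt, q, n_samples = 0.01, 0.002, 100_000      # time step [s], per-step probability (illustrative)
lam = q / dt                                 # equivalent exponential rate [1/s]

t_bernoulli = rng.geometric(q, size=n_samples) * dt          # first success of repeated trials
t_exponential = rng.exponential(1.0 / lam, size=n_samples)   # direct sampling alternative

print(f"mean start time  Bernoulli: {t_bernoulli.mean():.3f} s,"
      f"  exponential: {t_exponential.mean():.3f} s  (theory {1.0 / lam:.3f} s)")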
Many overdetermined inverse problems for parameter estimation can be formulated as the minimization of the sum of squares of residuals between some reference and a corresponding parametrized simulation. The least squares objective is geometrically closely related to the Euclidean metric of points in a high-dimensional ambient space. The output of a differentiable simulation model with respect to its parameters can be interpreted as belonging to the set of points on a (relatively) low-dimensional Riemannian manifold. It follows that the Euclidean metric is not necessarily a proper measure of distance between points on the model manifold, which explains the non-linearity of the least squares objective. The geodesic metric could define a convex objective given two known points on the manifold. However, we lack knowledge of precisely the second point, namely the parameters we wish to identify that satisfy the minimization objective.
Assuming the set of potential minimizers does not lead to self-intersection of the manifold, a transformation can be found which non-linearly projects the ambient space into a latent space equipped with a metric that properly reflects the distances between parameter combinations. Autoencoders are particularly suited for such a task. We propose a novel autoencoder architecture that finds such a differentiable projection, leading to an approximate linearization of the manifold with respect to its parametrization while preserving orthogonality of noise below a desired signal-to-noise ratio. This allows rapid minimization of the original objective using the encoded residuals, for which especially the Gauss-Newton method leads to superlinear convergence rates. In theory, perfect linearization of the tubular neighborhood of the manifold would even allow the problem to be solved trivially in the sense of linear least squares. The procedure is illustrated using both academic and applied examples.
In virtually all fields of application, the mathematical models are primarily based on differential equations. Hence, their numerical solution plays a fundamental role in numerical mathematics. This section mainly covers the construction and the behavior of numerical methods for differential equations, both ordinary and partial.
We address the optimal-order approximation of the pressure trajectory for an equal-order in time variational discretization of velocity and pressure in Stokes systems. In the literature, the pressure approximation of Stokes and Navier-Stokes systems has attracted less attention than the velocity approximation, even though it is of equal importance for applications, for instance, for the computation of the drag and lift coefficients of flows around obstacles. The difficulties in the pressure approximation arise from the saddle point structure of the Stokes system and the lack of information regarding the computation of discrete initial pressure values [J. Heywood, R. Rannacher: Finite element approximation of the nonstationary Navier–Stokes problem. I. Regularity of solutions and second-order error estimates for spatial discretization, SIAM J. Numer. Anal., 19 (1982), pp. 275–311].
We consider approximations of the Stokes equations by space-time finite element methods (STFEMs) of equal order in time. For brevity, piecewise linear approaches are studied here. STFEMs feature the natural construction of higher-order discretization schemes for partial differential equations and coupled systems [M. Anselmann, M. Bause, A geometric multigrid method for space-time finite element discretizations of the Navier–Stokes equations and its application to 3d flow simulation, ACM Trans. Math. Softw., 49 (2023), Article No.: 5, pp. 1–25; https://doi.org/10.1145/3582492]. We show that the pressure trajectory is not defined uniquely. We propose two variants for a post-processed pressure in the set of pressure solutions which guarantee optimal second-order error estimates in time and optimal-order estimates in space for the pressure error in the L2-norm. The first one yields a globally continuous trajectory and is based on collocation conditions that are imposed in the discrete time nodes. The second one uses the idea of interpolation based on the accurate discrete pressure values in the Gauss nodes (midpoints of the subintervals). This approach leads to a global pressure trajectory that is, in general, discontinuous at the endpoints of the time intervals. The post-processing and error analysis are presented [M. Anselmann, M. Bause, G. Matthies, F. Schieweck, Optimal order pressure approximation for the Stokes problem by a variational method in time with post-processing, in progress (2024), pp. 1–28]. The theoretical findings are illustrated by numerical computations.
We present residual-based stabilization techniques for finite element methods to simulate Navier-Stokes flows in stress-divergence form on curved surfaces. The mixed formulation in velocity and pressure variables is subject to the inf-sup condition. One may use Taylor-Hood elements, e.g., [1,2], or equal-order element pairs for velocity and pressure together with a stabilization scheme, e.g., the Brezzi-Pitkäranta stabilization in [3], to achieve stable results. In this work, we extend the PSPG method [4] for classical Euclidean geometries to Navier-Stokes flows on surfaces to enable consistent equal-order approximations. Furthermore, for Navier-Stokes flows at high Reynolds numbers and transport-dominated advection-diffusion problems, the SUPG stabilization technique [5] is applied to obtain stable solutions, herein extended to convection-dominated applications on curved surfaces. A crucial aspect of flow problems on surfaces is the consideration of the tangential velocity in the governing equations, e.g., [1–3,6], which also significantly influences the formulation of the residual-based stabilization techniques. A model with a Lagrange multiplier [1,2,6] for the tangentiality constraint is compared with a projection of the velocities onto the tangent space [3,6]. Results obtained with PSPG and equal-order element pairs for velocity and pressure are compared to stable solutions computed with Taylor-Hood elements to verify the stabilized finite elements.
REFERENCES
[1] T.P. Fries: Higher-order surface FEM for incompressible Navier-Stokes flows on manifolds, Internat. J. Numer. Methods Fluids, 88, 55–78, 2018.
[2] T.P. Fries and M.W. Kaiser, Modeling and simulation of simultaneous transport and incompressible flows on all level sets in a bulk domain, Proceedings in Applied Mathematics and Mechanics, 24, e202400037, 2024.
[3] M.A. Olshanskii, A. Quaini, A. Reusken, and V. Yushutin, A finite element method for the surface Stokes problem, SIAM J. Sci. Comput., 40, A2492–A2518, 2018.
[4] T.J.R. Hughes, L.P. Franca, and M. Balestra, A new finite element formulation for computational fluid dynamics: V. Circumventing the Babuška-Brezzi condition: a stable Petrov-Galerkin formulation of the Stokes problem for accommodating equal-order interpolations, Comp. Methods Appl. Mech. Engrg., 59, 85-99, 1986.
[5] A.N. Brooks and T.J.R. Hughes, Streamline upwind/Petrov-Galerkin formulations for convection dominated flows with particular emphasis on the incompressible Navier-Stokes equations, Comp. Methods Appl. Mech. Engrg., 32, 199-259, 1982.
[6] T. Jankuhn, M. Olshanskii, and A. Reusken, Incompressible fluid problems on embedded surfaces: Modeling and variational formulations, Interface Free Bound., 20, 353-377, 2018.
In this contribution, we present a novel three-field finite element for the Stokes eigenvalue problem. To this end, we approximate the Hellinger-Reissner (mixed) formulation, where the symmetry of the stress tensor σ is dealt with in a weak form by introducing a Lagrange multiplier that represents the conservation of angular momentum. We consider the space of tensors whose rows consist of an element of the RT space for the stress, the space of discontinuous piecewise linear vectors for the velocity, and the space of skew-symmetric continuous piecewise linear tensors to impose the symmetry weakly. We end up with a stress-velocity-vorticity formulation discretized with RTₖᵈ − DPₖᵈ − Pₖ^{d(d−1)/2} with k ≥ 1. This formulation has notable benefits, as it arises directly from the fundamental physical principles of momentum balance, constitutive law, and mass conservation. Moreover, it provides a direct approximation of the stress, which is particularly crucial in certain applications. Numerical examples in both convex and non-convex two- and three-dimensional domains are presented to illustrate the efficiency of the proposed methodology.
In this talk, we consider a coupled Chemotaxis--(Navier--)Stokes system on a bounded domain Ω ⊂ ℝ². The system consists of two parabolic partial differential equations coupled with an incompressible (Navier--)Stokes equation, describing the functions (z, c, 𝐮), where z = z(𝐱, t) denotes the cell density, c = c(𝐱, t) the substance concentration, 𝐮 = 𝐮(𝐱, t) with 𝐮 = (u₁, u₂)ᵀ the velocity field of the fluid at position 𝐱 ∈ Ω at time t ∈ (0, T], and p = p(𝐱, t) is the pressure.
The Chemotaxis--(Navier--)Stokes system describes the interaction between cells (e.g., bacteria) and a chemical signal or substance in liquid environments. This phenomenon plays an important role in biological applications. It is well known that physical quantities, such as density, should remain positive. Therefore, it is of great importance to develop numerical methods that ensure the approximate solutions also remain positive.
The fully discrete scheme, based on the linear conforming finite element method for the discretization of the spatial variable and the backward Euler method for the temporal variable, fails to preserve the positivity of the approximate solution. To ensure positivity preservation, we propose a stabilized scheme using the Algebraic Flux Correction method. The resulting fully discrete scheme is nonlinear, and a fixed-point argument is employed alongside a superconvergence argument to demonstrate its existence and uniqueness. Furthermore, we prove that its solutions remain uniformly bounded, provided the solution is sufficiently regular and the time and mesh steps are appropriately chosen, i.e., k = 𝒪(h^{1+ε}), 0 < ε < 1/3.
In addition, we derive error estimates in L^∞ (0, T; L²), L²(0, T; H¹) for the cell density, and L^∞ (0, T; H¹) for the substance concentration as well as for the velocity field. We also present numerical experiments validating our theoretical results.
The Dual Weighted Residual (DWR) method has attracted researchers’ interest in many fields of application since it was introduced by Becker and Rannacher at the turn of the last millennium. With regard to an efficient numerical approximation of the underlying model problem, the DWR approach yields an a posteriori error estimator measured in goal quantities of physical interest, which can be used for adaptive mesh refinement in space and time. Here, we apply this goal-oriented error control combined with residual-based stabilization techniques to convection-dominated (transport) problems. We consider challenges regarding this type of model problem as well as the practical realization of the underlying approach. The performance properties of the space-time adaptive algorithm are studied by means of well-known benchmarks for convection-dominated problems and examples of physical relevance. We show robustness and computational efficiency results and demonstrate the importance of stabilization in a strongly convection-dominated case. Furthermore, we give insight into the application of the DWR approach to nonstationary incompressible flow problems in combination with efficient iterative solver technologies using a flexible geometric multigrid preconditioner.
In models of porous media flows, the porosity of the solid matrix is often treated as a static quantity. However, under certain circumstances, such as in soft sedimentary rocks or in magma flows, the porosity of the solid material can evolve under the influence of fluid pressure which can lead to the formation of solitary porosity waves and of higher-porosity channels. We consider a system of nonlinear PDEs for porosity and effective pressure, based on a poroviscoelastic model, which describes such phenomena. We first discuss well-posedness of this PDE problem, which has been established in the literature only for initial porosities of high Sobolev smoothness. We present several results for porosities of low regularity, including cases with jump discontinuities that are of particular interest in geological applications. Then we turn to results on a space-time adaptive numerical method, which is based on a fixed-point scheme inspired by the analysis, combined with a space-time least-squares formulation. This yields an appropriate treatment of discontinuities and enables spatially varying time steps, which are required for efficient approximations of the strongly spatially and temporally localized features of solutions.
Dynamics and control is an interdisciplinary section which in particular addresses mathematical systems theory and control engineering. The contributions to this section are also concerned with the mathematical understanding and design of controllers which appear in actual applications.
Exponential splitting is a well-established and widely used technique for finding approximate solutions of linear differential equations of the type u′ = (A + B)u. It can also be used in the case of a time-dependent component B(t) after application of the midpoint quadrature, where the error estimate is unclear in singular cases involving unbounded operators B(t).
In this talk I will show how second- and fourth-order splittings for a possibly time-dependent component can be derived using the Duhamel formula. Based on this approach, I will present a new proof of convergence of the scheme and elaborate on the possibilities it opens up. The analysis of the error estimates will be illustrated by the example of the hydrogen atom featuring the Coulomb potential. Results of numerical simulations will be presented.
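As a minimal illustration of the second-order case (a small matrix example, not the unbounded-operator setting of the talk), the following sketch applies Strang splitting to u′ = (A + B(t))u, with the time-dependent part evaluated at the interval midpoint; the matrices A and B(t) are illustrative assumptions.

import numpy as np
from scipy.linalg import expm

# Hedged sketch: second-order (Strang) splitting for u' = (A + B(t)) u, with the
# time-dependent flow approximated via the midpoint quadrature B(t + dt/2).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])                                  # autonomous part
B = lambda t: np.array([[0.0, 0.0], [0.0, -0.5 * (1.0 + np.sin(t))]])    # time-dependent part

def strang_step(u, t, dt):
    half = expm(0.5 * dt * A)
    mid = expm(dt * B(t + 0.5 * dt))      # midpoint evaluation of the B(t) flow
    return half @ (mid @ (half @ u))

def integrate(u0, t_end, n_steps):
    u, t, dt = u0.copy(), 0.0, t_end / n_steps
    for _ in range(n_steps):
        u = strang_step(u, t, dt)
        t += dt
    return u

print(integrate(np.array([1.0, 0.0]), 2.0, 200))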
This talk introduces a new approach for solving extremum seeking problems. Extremum seeking methods are widely employed for real-time optimization without requiring explicit knowledge of the cost function’s gradient. However, conventional approaches relying on constant gain strategies often lead only to practical asymptotic stability. This means that the trajectories of a system converge to a neighborhood of the optimum, with the radius of this neighborhood decreasing as the control amplitude and frequency increase. Our work overcomes these limitations by leveraging a time-varying gain, whose properties ensure improved convergence characteristics. Namely, we establish conditions on the gain under which the trajectories of the system asymptotically converge to the optimum. Key requirements include the monotonic decrease of the gain and certain integral bounds over time. This approach is inspired by conceptual similarities between stochastic gradient descent and extremum seeking algorithms. We demonstrate that the derived sufficient convergence conditions are conceptually similar to step-size rules in stochastic gradient descent and stochastic approximation algorithms. The theoretical foundation is based on Lie bracket approximations; however, unlike the classical framework, our approach does not require uniform asymptotic stability of the optimal state for the associated Lie bracket system. Additionally, we estimate the speed of convergence of the solutions of the derived extremum seeking system and investigate extremum seeking systems with time-varying frequencies. The obtained results are demonstrated through numerical simulations.
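The following minimal sketch illustrates the general idea for a scalar cost (it is not the specific algorithm of the talk): a classical Lie-bracket-type extremum seeking loop in which the constant gain is replaced by a monotonically decreasing one, so that both the averaged gradient step and the dither amplitude shrink over time. The cost, gain schedule, and dither frequency are illustrative assumptions.

import numpy as np

# Hedged sketch: scalar extremum seeking with a decreasing gain a(t); the cost J
# is accessed only through its values. The averaged (Lie bracket) dynamics of
# this vector field is approximately dx/dt = -(a(t)/2) * dJ/dx.
J = lambda x: (x - 2.0) ** 2                 # unknown cost with minimum at x* = 2
omega = 50.0                                 # dither frequency
a = lambda t: 1.0 / (1.0 + 0.05 * t)         # monotonically decreasing gain (illustrative)

dt, T = 1e-3, 200.0
x, t = -1.0, 0.0
for _ in range(int(T / dt)):
    gain = np.sqrt(a(t) * omega)
    dx = gain * np.cos(omega * t) - gain * J(x) * np.sin(omega * t)
    x += dt * dx
    t += dt

print(f"final estimate x ~ {x:.3f} (optimum at 2.0)")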
Increasing energy efficiency and saving raw materials promote the lightweight design of mechanical structures, like beams and strings. Such systems are typically underactuated and require an accurate inverse model, which is essential in the two-degree-of-freedom control structure for robust trajectory tracking. The dynamic inversion of underactuated mechanical systems can be formulated in the servo-constraint framework using a set of differential-algebraic equations (DAEs). In case of a high differentiation index, the inversion-based feedforward control design poses significant challenges. For instance, standard time-integration methods solving the inverse problem sequentially over time with a sufficiently small time step size can show numerical instabilities. Although index reduction techniques might be a common approach to addressing these issues, their practical applicability is limited. In this contribution, we present a simultaneous inversion method for flat and minimum phase systems to overcome the shortcomings of sequential methods due to a high differentiation index. The key idea is to discretize the equations of motion on the entire time grid, which yields the continuity conditions for all trajectories. By combining these conditions with servo-constraints at all time points, a global system of equations with a large number of unknowns is obtained. Despite its high dimensionality, an efficient solution of the inverse dynamics is made possible by exploiting the sparse Jacobian matrix. The performance of the simultaneous approach is demonstrated with a numerical experiment, in which we generate a feedforward trajectory controller for a large-scale finite-segment model of a nonlinear string.
In rural Ethiopia, 70% of the population lacks reliable access to electricity. Extending the main grid is prohibitively expensive. Mini-grids emerge as a cost-effective and dependable solution. Mini-grids are localized energy networks. They typically consist of small-scale renewable power generation systems and distribution infrastructure. The typical installation includes a back-up diesel generator and a battery energy storage system, which stores excess power when available and provides it during periods of insufficient output from renewable energy sources. Designing such power systems is a non-trivial task. Taking into account battery degradation dynamics, a multi-scale control problem for the optimal design of such power systems is set up. Based on the KKT conditions of the multi-scale system and leveraging well-known averaging techniques, the concept of a two-time scale solution is introduced. The resulting two-time scale gradient descent algorithm is implemented to enable a numerical comparison of the two solution concepts.
We consider the task of data-driven identification of dynamical systems, specifically for systems whose behavior at large frequencies is nonproper, as encoded by a higher relative degree.
In this work, we review existing approaches for interpolating transfer function data, most of which rely on a priori knowledge of the relative degree or on high-frequency data, with the term "high" remaining rather unspecified. Then, we present our newly developed surrogate modeling strategies [1] that allow state-of-the-art rational approximation algorithms (e.g., AAA and vector fitting) to better handle data coming from such systems with non-trivial relative degrees.
The approach rests on two pillars. First, the desired relative degree of the surrogate model is enforced through constraints on the barycentric coefficients, rather than through ad-hoc modifications of the rational expressions. Second, we combine the model synthesis with an index-identification routine that allows one to estimate the relative degree of a system from low-frequency data.
Once the relative degree is identified, we can build a surrogate model that, in addition to matching the data well (at low frequencies), has enhanced extrapolation capabilities (at high frequencies). We showcase the effectiveness and robustness of the method through a suite of numerical tests.
References
[1] D. Pradovera, V. Gosea, J. Heiland. Barycentric rational approximation for learning the index of a dynamical system from limited data, 2024. https://arxiv.org/abs/2410.02000
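For orientation, the following sketch shows the barycentric rational form on which such surrogates are built and its high-frequency limit; it only illustrates why linear constraints on the barycentric coefficients control the behaviour at infinity (and hence the relative degree), and does not reproduce the algorithm of [1]. The support points, samples, and weights are illustrative assumptions.

import numpy as np

# Hedged sketch: barycentric rational form r(s) = sum_j w_j h_j/(s - s_j) / sum_j w_j/(s - s_j).
# As |s| -> infinity, r(s) tends to sum(w*h)/sum(w); forcing sum(w*h) = 0 would
# instead enforce decay at high frequencies, i.e. a relative degree of at least one.
def barycentric_eval(s, nodes, values, weights):
    s = np.atleast_1d(np.asarray(s, dtype=complex))
    C = 1.0 / (s[:, None] - nodes[None, :])          # Cauchy matrix
    return (C @ (weights * values)) / (C @ weights)

nodes = np.array([1j, 2j, 4j])                       # support points (illustrative)
values = np.array([1.0, 0.5, 0.2], dtype=complex)    # transfer function samples (illustrative)
weights = np.array([1.0, -0.5, 0.25])                # here sum(w*values) != 0

print(barycentric_eval(1e6j, nodes, values, weights))    # value at a very high frequency
print(np.dot(weights, values) / np.sum(weights))         # predicted limit at infinity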
Over the last decade, mathematics has become the cornerstone of signal and image processing, ranging from various methods for signal reconstruction and the modelling of imaging modalities, over the classical disciplines of compression, denoising, segmentation, and registration, to feature extraction. The methodologies used include such diverse fields as harmonic analysis, inverse problems, variational analysis, mathematical statistics, partial differential equations, optimization, approximation theory, and sampling theory. The aim of this section is to foster interdisciplinary collaboration and the development of new directions in mathematical signal and image processing spawned by the interaction of various mathematical communities.
Massive multiple-input multiple-output (MIMO) communication systems are very promising for wireless communication and fifth generation (5G) cellular networks. In massive MIMO, a large number of antennas are employed at the base station (BS), which provides a high degree of spatial freedom and enables the BS to simultaneously communicate with multiple user terminals. Due to the limited angular spread, the user channel vectors lie on low-dimensional subspaces. For each user, we aim to find a low-dimensional beamforming subspace that captures a large amount of the power of its channel vectors. We address this signal subspace estimation problem by finding a good estimator of the signal covariance matrix in terms of a truncated version of the nuclear norm based on the received data samples at the BS. In this talk, theoretical guarantees for signal covariance and subspace estimation are investigated. We derive optimal expectation bounds for every singular value of the deviation of the sample covariance from the true covariance matrix of i.i.d. centered Gaussian random variables. As a consequence, we present an optimal bound on the estimation error in the Massive MIMO setting in terms of the number of observed time samples, the number of sampled entries (antennas), the truncation and noise level.
In this talk, we propose mathematical models for reconstructing the optical flow in time-harmonic elastography. In this image acquisition technique, the object undergoes a special time-harmonic oscillation with known frequency, so that only the spatially varying amplitude of the velocity field has to be determined. This allows for a simpler multi-frame optical flow analysis using Fourier-analytic tools in time. We propose three variational optical flow models and show how their minimization can be tackled via the Fourier transform in time and an iteratively reweighted least squares algorithm. Numerical examples with synthetic as well as real-world data demonstrate the benefits of our approach.
In applications such as Magnetic Resonance Imaging (MRI), data are acquired in the Fourier domain, and to accelerate imaging, a subsample of the discrete Fourier data is often taken. In order to recover the image, we need to reconstruct the missing Fourier data and then apply the discrete inverse Fourier transform. However, reconstructing missing data in Fourier space poses significant challenges. Unlike the image domain, the Fourier domain is not localised, which means the data are not as easily reconstructed. In this talk, we will see why linear interpolation in the Fourier domain is futile and discuss alternative reconstruction methods, namely “Generalized Autocalibrating Partially Parallel Acquisition” (GRAPPA), a widely adopted approach in MRI reconstruction. GRAPPA leverages the “Auto-Calibration Signal” (ACS) region, an area in the low-pass part of the Fourier data that is fully sampled. In GRAPPA and many other MRI methods, data outside of the ACS region are often subsampled with a constant subsampling rate R. We will examine the critical role of subsampling strategies outside the ACS region, show how these influence reconstruction quality, and consider alternative subsampling methods to utilise GRAPPA to its full capacity.
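As a minimal sketch of the sampling geometry discussed here (not of the GRAPPA kernel calibration itself), the following code builds a Cartesian phase-encoding mask with a fully sampled ACS block and a constant rate R outside of it; the array sizes are illustrative assumptions.

import numpy as np

# Hedged sketch: 1D Cartesian undersampling pattern with a fully sampled
# Auto-Calibration Signal (ACS) block in the centre and every R-th line elsewhere.
def sampling_mask(n_lines=256, acs_width=32, R=4):
    mask = np.zeros(n_lines, dtype=bool)
    mask[::R] = True                                                   # regular subsampling, rate R
    centre = n_lines // 2
    mask[centre - acs_width // 2: centre + acs_width // 2] = True      # fully sampled ACS region
    return mask

mask = sampling_mask()
print(f"acquired lines: {mask.sum()} of {mask.size} "
      f"(effective acceleration ~ {mask.size / mask.sum():.2f})")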
FLASH is a free-electron laser capable of emitting femtosecond pulses of light in the X-ray spectrum. Before the beam is used in experiments, it must be focused. This is done by a Kirkpatrick-Baez mirror system, which consists of two mirrors that can be bent, rotated, and translated. At the moment, this mirror system has to be tuned by hand before each experiment, which is very time-consuming. The goal is to find a method to choose the 12 parameters of the KB optics automatically, depending on the varying properties of the incoming beam and on the experiment's requirements. We present our model for simulating the propagation through the mirror system as well as methods to solve the resulting optimization problem.
We consider the block Bregman–Kaczmarz method for finite dimensional linear inverse problems. The block Bregman–Kaczmarz method uses blocks of the linear system and performs iterative steps with these blocks only. We assume a noise model that we call independent noise, i.e., each time the method performs a step for some block, one obtains a noisy sample of the respective part of the right-hand side which is contaminated with new noise that is independent of all previous steps of the method. One can view this noise model as making a fresh noisy measurement of the respective block each time it is used. In this framework, we are able to show that a well-chosen adaptive stepsize of the block Bregman–Kaczmarz method converges to the exact solution of the linear inverse problem. The plain form of this adaptive stepsize relies on unknown quantities (like the Bregman distance to the solution), but we show how these quantities can be estimated purely from given data. We illustrate the findings in numerical experiments and confirm that these heuristic estimates lead to effective stepsizes.
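The following simplified sketch illustrates the independent-noise setting for a plain block Kaczmarz iteration with a generic decaying relaxation parameter; it does not reproduce the Bregman setting or the adaptive stepsize of the talk, and all problem sizes, the stepsize rule, and noise levels are illustrative assumptions.

import numpy as np

# Hedged sketch: block Kaczmarz with a fresh noisy sample of the selected block
# of the right-hand side in every step ("independent noise") and a decaying
# relaxation parameter.
rng = np.random.default_rng(0)
m, n, block_size, sigma = 60, 20, 10, 1e-2
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b_exact = A @ x_true

x = np.zeros(n)
blocks = [np.arange(i, i + block_size) for i in range(0, m, block_size)]
for k in range(1, 5001):
    idx = blocks[rng.integers(len(blocks))]
    A_blk = A[idx]
    b_blk = b_exact[idx] + sigma * rng.standard_normal(block_size)      # fresh noise each step
    step = 1.0 / np.sqrt(k)                                             # generic decaying stepsize
    residual = A_blk @ x - b_blk
    x -= step * A_blk.T @ np.linalg.solve(A_blk @ A_blk.T, residual)    # relaxed block projection

print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.2e}")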
Consider the broken random sample problem introduced by DeGroot, Feder, and Goel (1971, Ann. Math. Statist.): In each observation, a random sample of M point pairs ((Xᵢ,Yᵢ))ᵢ₌₁ᴹ is drawn from a joint distribution with density f(x,y). Assume we can only observe (Xᵢ)ᵢ₌₁ᴹ and (Yᵢ)ᵢ₌₁ᴹ separately, i.e., the pairing information is lost. We are interested in estimating f given N i.i.d. observations. Naive maximization of the likelihood quickly becomes numerically intractable as M increases, due to the combinatorial number of possible pairings between the points. Instead, we propose a quasi-likelihood and provide convergence results for the corresponding estimator. Moreover, our ongoing research shows that, under some mild assumptions, the convergence is uniform in M. This allows estimation of f even when the “density” of the observed point clouds is very high. Applications of the broken sample setting are colocalization analysis in fluorescence microscopy and dynamical systems, where we obtain encouraging results in numerical experiments.
Scientific Computing is concerned with the efficient numerical solution of mathematical models from both science and engineering. The field covers a wide range of topics: from mathematical modeling over the development, analysis and efficient implementation of numerical methods and algorithms to software and finally application for the solution of complex real-world problems on modern computing systems. This interdisciplinary field combines approaches from applied mathematics, computer science and a wide range of applications in which in-silico experiments play an increasingly important role.
Non-constant viscosity plays a crucial role in numerous incompressible flow problems of interest in science and engineering, with prime examples being polymer or blood flow. In such applications, the fluid’s viscosity depends on the local shear rate, where generalized Newtonian fluid models are considered an acceptable middle ground between complexity and model capability. Standard discretizations based on coupled velocity–pressure formulations in an implicit finite element scheme lead to saddle-point systems posing a challenge to state-of-the-art solvers and preconditioners.
Projection schemes, on the contrary, decouple the balance equations governing velocity and pressure, giving rise to multiple, simpler, and sequentially solved problems incorporating standard linear systems. Following such an approach requires caution when deriving suitable boundary conditions for the intermediate steps to preserve accuracy. For Newtonian incompressible fluids, several projection and splitting methods of high-order accuracy are available (see, e.g. [1, 2]). The extension to generalised Newtonian fluids, however, is often found to be non-trivial, as such schemes often build on the assumption of constant viscosity.
In this contribution we address this shortcoming and present an extension of the work by Karniadakis et al. [1] towards generalized Newtonian fluids. The presented method is thus based on an explicit-implicit treatment of convective, viscous, and pressure terms and includes the typical pressure Poisson and projection steps. Numerical results obtained through a matrix-free implementation in ExaDG [3] showcase temporal stability, accuracy, and efficiency of the higher-order splitting scheme in challenging numerical examples of practical interest.
REFERENCES
[1] Karniadakis, G. E., Israeli, M. and Orszag, S. A. High-order splitting methods for the incompressible Navier-Stokes equations. J. Comput. Phys., 1991.
[2] Timmermans, L.J.P., Minev, P.D. and Van de Vosse, F. N. An approximate projection scheme for incompressible flow using spectral elements. Int. J. Numer. Methods Fluids, Vol. 22(7), pp. 673–688, 1996.
[3] Arndt, D., Fehn, N., Kanschat, G. and others. ExaDG: High-order discontinuous Galerkin for the exa-scale. In Bungartz, H.-J., Reiz, S., Uekermann and others (eds.), Software for exascale computing-SPPEXA 2016–2019, pp 189–224, 2020, Cham, Springer.
The development of new numerical methods for fluid flow simulations is challenging but such tools may help to understand flow problems better. Here, the Boundary-Domain Integral Method is applied to simulate laminar fluid flow governed by a dimensionless transient velocity-vorticity formulation of the Navier-Stokes equation. The Reynolds number is chosen in all examples small enough to ensure laminar flow conditions. The false transient approach is utilised to improve stability.
Like all boundary element methods, the Boundary-Domain Integral Method has quadratic complexity. Here, the H2-methodology is applied to obtain an almost linear complexity. This acceleration technique is applied not only to the boundary part but, more importantly, to the domain-related part of the formulation, i.e., to approximate the matrix related to the domain integrals. The strongly singular integrals and the integral-free term appearing in boundary element formulations are often handled indirectly by utilising the rigid body movement. This is not possible for formulations based on the H2-methodology because the matrix is never assembled. Here, it is shown how to apply the technique of Guiggiani and Gigante to handle the strongly singular integrals.
The presented examples are a lid-driven cavity, Hagen-Poiseuille flow and the flow around a rigid cylinder. In the latter example, the behaviour of the method on an unstructured grid is presented. The Reynolds number was also increased to a value at which a transient numerical simulation had to be performed. All examples show that the proposed method results in an almost linear complexity, as the mathematical analysis promises.
Many modern engineering applications have to deal with heterogeneous and often anisotropic material behavior, relevant for thermal diffusion, groundwater flow, or electrostatics. The numerical solution of these types of problems is challenging, as applying the finite element method results in badly scaled and ill-conditioned linear systems to be solved. To maintain efficiency and scalability of the linear solver, iterative solution techniques preconditioned by algebraic multigrid methods (AMG) are widely used. However, traditional AMG lacks a proper handling of the material information and thus converges slowly, if at all, for these types of problems. While recent advances have been made for anisotropy related to stretched meshes, the proper handling of heterogeneous and anisotropic materials remains an open topic for now.
This talk will propose a new smoothed aggregation procedure in the context of AMG to incorporate the material information into the coarsening process for the coarse level construction. While classical strength-of-connection measures struggle with coefficient jumps and anisotropy, the new approach shows robustness and ease-of-use in the envisaged scenarios. Especially in cases with a strongly directional material component, it can be shown that the iteration count is independent of the material property. In addition, care has to be taken in regard to the Jacobi based prolongator smoothing step, especially for ill-conditioned fine level operators. Matrix filtering based on the proposed dropping scheme can control the operator complexity of the multigrid hierarchy and thus keeps the cost of applying the preconditioner low.
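For context, the following minimal sketch implements a classical strength-of-connection test (an off-diagonal entry counts as strong if |a_ij| ≥ θ·sqrt(|a_ii·a_jj|)), which is exactly the kind of measure that degrades under coefficient jumps and anisotropy; it is not the material-aware aggregation proposed in this talk, and the small matrix and threshold are illustrative assumptions.

import numpy as np

# Hedged sketch: classical strength-of-connection filtering used as the starting
# point of aggregation-based AMG coarsening.
def strong_connections(A, theta=0.25):
    d = np.sqrt(np.abs(np.diag(A)))
    S = np.abs(A) >= theta * np.outer(d, d)     # keep entries that are "strong" relative to the diagonal
    np.fill_diagonal(S, False)
    return S

# tiny matrix mimicking strong coupling to one neighbour and weak coupling to another
A = np.array([[ 2.02, -1.00, -0.01],
              [-1.00,  2.02, -1.00],
              [-0.01, -1.00,  2.02]])
print(strong_connections(A))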
The continuum viscous–plastic sea-ice model is widely used in climate models for simulating large-scale sea-ice dynamics, usually on grids of several kilometers (10 km). Recently, there has been increasing interest in modelling small-scale processes that have the potential to impact large-scale dynamics, such as sea-ice–iceberg interactions. The influence of small-scale icebergs on sea-ice dynamics has not yet been studied in the context of climate models, as efficient numerical realizations are missing. To address this shortcoming, we develop a new numerically efficient hybrid model in which the sea-ice momentum equation is modified and icebergs are included as particles on a subgrid level. We show that the time-discretized modified momentum equation is equivalent to a convex energy minimization problem. In a series of test cases we demonstrate that subgrid dynamics, such as polynya formation due to grounded icebergs, can be captured with this new approach. The resulting hybrid ice model is both numerically efficient and suitable for large-scale climate models.
We propose a matrix-free inexact preconditioned solution strategy for elliptic partial differential equations discretized by the Galerkin method on structured grids. We base our preconditioner on an approximation of the discrete linear operator by a sum of Kronecker product matrices. The action of the inverse of the approximation on a vector of coefficients is approximated by an inner preconditioned Conjugate Gradient solver. The complexity of the Kronecker matrix-vector product in the inner iteration is lower than the complexity of the matrix-vector product for the forward problem, leading to a fast solution strategy. The proposed method is implemented in our open-source Julia framework for spline based discretization methods. We show the robustness, efficiency and effectiveness of our approach for benchmark problems in linear elasticity and anisotropic heat conduction, and illustrate the performance gain with respect to the state-of-the-art Fast Diagonalization and approximate Kronecker inverse preconditioning techniques.
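A minimal sketch of the central building block, under illustrative assumptions (1D finite-difference-type factor matrices and an identity mass matrix): the action of a sum of Kronecker products on a coefficient vector is realised matrix-free via reshapes and used inside a conjugate gradient solver. This is not the authors' Julia implementation.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Hedged sketch: apply (A (x) M + M (x) A) to a coefficient vector without ever
# forming the Kronecker products, using the identity (A (x) B) vec(V) = vec(A V B^T)
# for row-major vectorization.
def lap1d(n):
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D stiffness-like factor

n = 50
A, M = lap1d(n), np.eye(n)                                      # identity as placeholder mass matrix

def kron_sum_matvec(v):
    V = np.asarray(v).reshape(n, n)
    return (A @ V @ M.T + M @ V @ A.T).ravel()

op = LinearOperator((n * n, n * n), matvec=kron_sum_matvec)
rhs = np.ones(n * n)
u, info = cg(op, rhs)                                           # inner Krylov solve with the Kronecker operator
print("CG converged:", info == 0,
      " residual norm:", np.linalg.norm(kron_sum_matvec(u) - rhs))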
CANCELLED
——————
A key challenge in hybrid lightweight design is the joining of dissimilar materials such as metal and plastic. Environmentally friendly and resource-oriented solutions are form-fit connections, as there is no need for additional adhesives. A current approach is the use of aluminium foam structures established in lightweight design to serve as undercuts that can be used for bonding with polymer. In this study, the polymer is additively manufactured on top of the aluminium foam. The filling of the pores in the aluminium foam by the polymer essentially influences the bond strength of the composite. A simulation-based process model was created to investigate and evaluate the mechanical interlocking. This virtual process chain includes the geometric modelling of aluminium foam structures, the process simulation of additive manufacturing using a Computational Fluid Dynamics (CFD) model with Fluid-Structure Interaction (FSI) and a subsequent structural-mechanical analysis using the Finite Element Method (FEM). Hence, for a numerical analysis of the mechanical interlocking, the pore filling from the CFD needs to be mapped onto the mechanical model. The generation of the geometry of the polymer in the pores from the CFD model and the subsequent meshing for the structural-mechanical model often leads to discretization problems due to highly distorted elements. In order to overcome this problem, the geometric modelling of the polymer surface with Non-Uniform Rational B-Splines (NURBS) is investigated in this contribution. In particular, the fitting of multi-patch NURBS surfaces to the nodes of the polymer mesh calculated in the CFD model is investigated. With this method, the polymer surface can be interpolated for different pore fillings by the displacement of the control points, which reduces the number of simulations of the additive manufacturing process and contributes to the optimisation of the process parameters. Furthermore, this approach can be adapted to model the free surface in common additively manufactured polymer parts.
Understanding the microstructures of materials has become increasingly crucial for studying their structure-property relationships, i.e., the influence of structural descriptors on macroscopic material properties. In materials with polycrystalline microstructures, the 3D grain architecture significantly influences mechanical properties. Imaging techniques like diffraction contrast tomography and 3D electron backscattered diffraction (EBSD), which can provide informative 3D image data (i.e., grain maps) of polycrystalline materials, are often not readily available. However, 3D image data is highly valuable not only for computing structural descriptors that quantify the material microstructures but also for serving as input for spatially resolved simulations, enabling a deeper understanding of material properties. Conversely, 2D image data acquired via 2D EBSD is more accessible, but it has limitations, such as its inability to serve as input for spatially resolved 3D simulations of material behavior, and quantifying 3D structures from 2D data is often non-trivial. This talk introduces a computational method to address these challenges by generating digital twins of the 3D morphology of material microstructures using stochastic 3D modeling calibrated with 2D image data [1]. The method employs parametric models from stochastic geometry, particularly parametric random tessellations, to generate random virtual 3D grain architectures, including those with curved grain boundaries. Calibration of model parameters to 2D data is achieved through methods of generative artificial intelligence (AI). Specifically, a neural network (discriminator) is trained to guide the selection of model parameters such that 2D cross-sections of generated structures statistically match the experimentally measured 2D image data. The method is demonstrated using 2D EBSD data of recycled AA6082 aluminum alloys, where the reconstructed 3D grain architectures enable a quantitative characterization of the 3D structure. Then, the reconstructed 3D grain maps can serve as input for spatially resolved 3D simulations of crystal plasticity [2]. The parametric nature of the stochastic 3D models allows for systematic parameter variation, enabling the generation of additional structural scenarios beyond those calibrated to experimental data. In this way, the database of 3D microstructures can be expanded to derive data-driven process-structure-property relationships and to virtually design materials with tailored mechanical properties.
References
[1] L. Fuchs, O. Furat, D.P. Finegan, J. Allen, F.L.E. Usseglio-Viretta, B. Ozdogru, P.J. Weddle, K. Smith and V. Schmidt. “Generating multi-scale Li-ion battery cathode particles with radial grain architectures using stereological generative adversarial networks.” Communications Materials (in print).
[2] B. Klusemann, B. Svendsen and H. Vehoff. International Journal of Plasticity 50 (2013) 109–126.
To approximate n-point probability functions, different techniques are feasible. Machine learning has proven to be very effective in a wide variety of applications, such as statistical homogenization, where n-point probability functions play a significant role.
To specifically extract 2-point probability functions from isotropic two-phase microstructures, we achieved good results by employing a coupled neural network structure that combines a convolutional (CNN) and a fully connected network (FCNN). In a second approach, we propose a neural operator that not only approximates these probability functions but also captures the infinite dimensionality of the target function space, in contrast to a classical artificial neural network (ANN) approach.
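For reference, the quantity being learned can be computed directly for a periodic binary image by FFT-based autocorrelation; the following minimal sketch (with a synthetic, uncorrelated two-phase image as an illustrative assumption) evaluates the two-point probability function S2 and checks its limits, S2(0) = phase fraction and S2 at large separations ≈ phase fraction squared.

import numpy as np

# Hedged sketch: two-point probability function S2(r) of one phase of a binary
# 2D microstructure via FFT-based autocorrelation (periodic boundary conditions).
rng = np.random.default_rng(0)
micro = (rng.random((128, 128)) < 0.3).astype(float)        # synthetic two-phase image, 30% phase fraction

F = np.fft.fft2(micro)
S2 = np.real(np.fft.ifft2(F * np.conj(F))) / micro.size     # S2(r) on the periodic pixel grid

print(f"S2(0) = {S2[0, 0]:.3f}  (phase fraction {micro.mean():.3f})")
print(f"S2 at large separation ~ {S2[0, 64]:.3f}  (phase fraction squared ~ {micro.mean() ** 2:.3f})")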
Mechanical metamaterials or architected materials are becoming increasingly popular due to rapid developments in additive manufacturing. These architected materials exhibit mechanical properties that significantly differ from those of their base material and offer a high degree of customizability. This is enabled by a rationally designed spatial structure of the base material on the mesoscale, which allows tuning the effective properties on the macroscale. This freedom in design naturally raises the question: How can we systematically find a structure that matches some given requirements? Since working directly on the structure itself is difficult to handle practically, the most important features of the structure are usually captured in low-dimensional descriptors. Consequently, the inverse design approach used here focuses on suggesting appropriate descriptor values for given target properties. This contribution focuses on spinodoid metamaterials [1] with primarily elastic target features. Thus, the goal is to identify the most suitable spinodoid metamaterial for a given nonlinear-elastic behavior. First, a neural network-based surrogate model is created, which predicts the corresponding elastic material behavior for a given descriptor. Creating such a model requires a sufficiently large dataset of descriptor-property pairs. To efficiently generate this computationally expensive dataset, an appropriate sampling method is presented. After calibrating the surrogate model, inverse design for complex elastic target properties can be efficiently performed using various methods [1,2]. The effectiveness of the framework is demonstrated with selected examples and the results are compared with existing work [3].
[1] Kumar, S., Tan, S., Zheng, L. & Kochmann, D. (2020). Inverse-designed spinodoid metamaterials. Computational Materials, 6, 73. https://doi.org/10.1038/s41524-020-0341-6
[2] Asmus, J., Müller, C.L. & Sbalzarini, I.F. (2017) Lp-Adaptation: Simultaneous Design Centering and Robustness Estimation of Electronic and Biological Systems. Scientific Reports, 7, 6660. https://doi.org/10.1038/s41598-017-03556-5
[3] Thakolkaran, P., Espinal, M., Dhulipala, S., Kumar, S., Portela, C.M. (2023). Experiment-informed finite-strain inverse design of spinodal metamaterials. arXiv-preprint.
https://doi.org/10.48550/arXiv.2312.11648
This study presents a deep learning framework for the inverse reconstruction of open-cell porous metamaterials, targeting specific hydraulic properties such as porosity and intrinsic permeability. Utilizing a combination of synthetic 3D porous microstructures and CT-scan-based representative volume elements (RVEs) of porous open-foam specimens, the framework leverages a property-variational autoencoder (pVAE) to establish robust mappings between 3D microstructures and their corresponding hydraulic properties. The pVAE integrates a variational autoencoder (VAE) with a regression network, providing a compact latent space that enables efficient interpolation and optimization. Moreover, a convolutional neural network (CNN) is trained to predict the hydraulic properties, thereby generating a computationally efficient dataset to train the pVAE, circumventing the high cost of direct numerical simulations.
In particular, the VAE comprises a convolutional encoder, which compresses input structures into a low-dimensional latent space modeled as a Gaussian distribution, and a decoder, which reconstructs 3D geometries. The training process minimizes a composite loss function that includes a reconstruction loss, ensuring fidelity to the input data, and a Kullback-Leibler (KL) divergence term, which regularizes the latent space for smoothness and semantic continuity. Additionally, a regression loss term aligns the latent space with target hydraulic properties, optimizing the model's ability to predict and generate structures with specific attributes.
By leveraging the continuous and interpretable latent space for optimization and sampling, the pVAE generates porous structures tailored to desired hydraulic properties. This approach enables scalable, data-driven design of porous metamaterials for multiscale and multi-functional engineering systems, advancing structure-property mapping methodologies.
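A minimal numpy sketch of the composite training loss described above, under illustrative assumptions for the weighting factors and with random arrays standing in for network outputs; the actual pVAE architecture and hyperparameters are not reproduced here.

import numpy as np

# Hedged sketch: composite pVAE loss = reconstruction error
# + Kullback-Leibler regularisation of a diagonal-Gaussian latent code
# + regression error on the predicted hydraulic properties.
def pvae_loss(x, x_rec, mu, log_var, props, props_pred, beta=1.0, lam=1.0):
    rec = np.mean((x - x_rec) ** 2)                                   # reconstruction loss
    kl = -0.5 * np.mean(1.0 + log_var - mu ** 2 - np.exp(log_var))    # KL(q(z|x) || N(0, I))
    reg = np.mean((props - props_pred) ** 2)                          # property regression loss
    return rec + beta * kl + lam * reg

# toy call with random placeholders standing in for encoder/decoder/regressor outputs
rng = np.random.default_rng(0)
x = rng.random((4, 32, 32, 32)); x_rec = x + 0.05 * rng.standard_normal(x.shape)
mu, log_var = rng.standard_normal((4, 8)), -1.0 + 0.1 * rng.standard_normal((4, 8))
props, props_pred = rng.random((4, 2)), rng.random((4, 2))
print(pvae_loss(x, x_rec, mu, log_var, props, props_pred))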
Numerical investigations of real engineering problems have become indispensable due to the continuously increasing complexity of the issues at hand. Especially when analytical solution methods reach their limits and experimental investigations are associated with high effort and costs, simulation offers a pragmatic and promising approach. The finite element method, whose development only became possible in the last century thanks to the available computing power, is now a standard tool in engineering practice.
Engineering applications are diverse, and simulations can become very complex. Accounting for geometric and physical nonlinearities, time-dependent simulations, multiscale modeling, multicriteria optimization, or analyses with uncertain variables requires adequate computer-based models. It is often practical and sometimes necessary to replace the actual model with more efficient ones while keeping the loss of accuracy minimal. One approach that will continue to gain importance in the future is AI-based surrogate modeling, utilizing and combining different neural networks. Even in choosing the appropriate networks, such as FFNN, RNN, and CNN, and their corresponding architectures, artificial intelligence can assist, taking into account the available data for a specific problem.
Beyond the solution phase, where defined output parameters are determined based on available input parameters using an AI-based model, the use of artificial intelligence in the preprocessing and postprocessing steps leads to a holistic approach. At the conference, we will present and discuss how model creation in the preprocessing and model evaluation in the postprocessing can be supported by AI. An automated process will be presented using academic examples with various engineering-related questions.
This study presents an approach to solving an inverse problem through the application of Deep Reinforcement Learning (DRL) coupled with homogenization. The underlying objective is to determine the micro-structural parameters of a composite material, including particle radius, Young's moduli and Poisson's ratios, in order to achieve a specific target bulk modulus at the macro-scale using DRL. This approach is later extended to a multi-objective task that also decreases the total material weight by incorporating density and volume fraction as additional parameters. Employing homogenization under Periodic Boundary Conditions (PBCs), a 3D mesh comprising a matrix and particles is analyzed to identify a Representative Volume Element (RVE), thereby reducing computational complexity and allowing efficient Finite Element Method (FEM) calculations in subsequent steps. An Advantage Actor-Critic (A2C) model is employed, using the FEM analysis as feedback, to iteratively adjust the micro-structural parameters and incrementally approach the target properties. A Genetic Algorithm (GA) is implemented to fine-tune the hyperparameters of the neural network, enhancing the ability of the model to effectively explore different parameter combinations. While the tuning of hyperparameters is conducted in 2D, the findings are transferred to 3D to verify the approach in more realistic scenarios. The DRL approach is compared, in selected aspects, with Bayesian optimization, a well-established algorithm class in the field of inverse material design, in order to give an idea of the different application circumstances. For A2C, the development of a specific reward function enables the DRL algorithm to approach the solution consistently, leading to many different solutions of the inverse problem. In addition, the search space is decreased significantly without limiting the variety of determined micro-configurations by effectively balancing exploration and exploitation. Extensive tuning of hyperparameters enables the adjustment of the algorithm to specific desired outcomes, such as increased sample efficiency or a larger number of diverse solutions. Using this hands-on learning approach can unveil innovative material configurations by exploring a broader range of design scenarios, including those that might be overlooked or deemed non-intuitive by traditional methodologies. Hence, it is positioned to establish an alternative methodology for designing novel material combinations with tailored properties.
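To make the role of the reward function more tangible, one possible environment step is sketched below. The FEM homogenization call, the parameter layout and the weighting of the mass objective are hypothetical placeholders and do not reproduce the reward actually developed in this study.

```python
TARGET_BULK_MODULUS = 10.0e9   # Pa, hypothetical target value
WEIGHT_MASS = 0.1              # weighting of the secondary (weight) objective

def homogenized_bulk_modulus(radius, E_m, nu_m, E_p, nu_p):
    """Placeholder for the FEM homogenization of the RVE under periodic BCs."""
    raise NotImplementedError("call the FEM solver for the RVE here")

def step(params):
    """One environment step: evaluate micro-parameters and return a reward."""
    radius, E_m, nu_m, E_p, nu_p, rho_p, vol_frac = params
    K_eff = homogenized_bulk_modulus(radius, E_m, nu_m, E_p, nu_p)
    rel_density = vol_frac * rho_p     # crude proxy for the weight contribution
    # Reward: approach the target bulk modulus, penalize heavy configurations.
    reward = (-abs(K_eff - TARGET_BULK_MODULUS) / TARGET_BULK_MODULUS
              - WEIGHT_MASS * rel_density)
    return reward
```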
Gamification is an innovative approach to promoting motivation. It refers to the use of game principles (e.g. competition and cooperation) and elements (e.g. challenges, points and levels) in non-game contexts. It has proven to be a widespread and powerful technique for driving behaviour. Theoretical studies have shown that gamification potentially appeals to motivational mechanisms (Mora et al. 2017) and is a relevant approach for engagement rather than focusing on pure entertainment (Domínguez et al. 2013). Escape games and computer games have become a popular leisure activity for a broad section of the population (Demmler et al. 2014).
The creation and process of building an open-source physical and virtual mechanics-themed Escape Room will be presented. Selected topics of basic mechanics (e.g. material stiffness, equilibrium, bending, ...) were chosen, and each puzzle is directly linked to one of these topics. Thus, visitors can playfully engage with the questions. Additionally, they learn something about the historical or scientific context thanks to an interwoven story that uses social media elements. The target group is primarily the interested public, school pupils and freshers, but the room can also be modified for children to further an interest in STEM subjects at an early stage. Positive evaluation results and experiences will be shared.
The physical Pop-Up Escape Room will be presented on-site for the conference participants to experience and can be re-built. The virtual room can be visited any time (available on gaming platform itch.io[2]). The virtual room has been developed in the spirit of open and transparent science using the free and open-source game engine Godot[1]. All material is available open-source on a GitHub repository[3] with all instructions and development files being shared.
GAMEchanics, the mechanics-themed Escape Room, is an example of using gamification as a method to engage with topics in mechanics through a memorable experience. It has been featured in, e.g., Tagesschau and Tagesspiegel [4][5].
This project was funded by the Klaus Tschira Foundation.
References:
Demmler, Kathrin; Lutz, Klaus; Ring, Sebastian (Eds.) (2014): Computerspiele und Medienpädagogik. München: kopaed (Materialien zur Medienpädagogik, vol. 11).
Domínguez, Adrián; Saenz-de-Navarrete, Joseba; de-Marcos, Luis; Fernández-Sanz, Luis; Pagés, Carmen; Martínez-Herráiz, José-Javier (2013): Gamifying learning experiences: Practical implications and outcomes. Computers & Education 63, pp. 380–392.
Mora, Alberto; Riera, Daniel; González, Carina; Arnedo-Moreno, Joan (2017): Gamification: a systematic review of design frameworks. J Comput High Educ 29 (3), pp. 516–548.
[1] https://godotengine.org/
[2] https://gamechanics.itch.io/
[3] https://github.com/SVFS-TUBerlin/Project_GAMEchanics
[4] www.tagesspiegel.de/themenspeziale/bildungundforschung/moderne-lehre-an-der-tu-berlin-spater-im-job-gibt-es-auch-keine-musterlosungen-12542045.html
[5] www.tagesspiegel.de/wissen/projekt-an-der-tu-berlin-ein-neues-spiel-soll-studierenden-die-gesetze-der-mechanik-erklaren-10419310.html
As part of a laboratory course in the chemical engineering study program, a traditional cookbook-based experiment with a didactic concept based on constructive alignment was developed and implemented. The design ensured alignment between learning objectives, teaching-learning activities, and assessments, with the learning objectives covering all levels of Bloom's taxonomy. The experiment focused on measuring lift and drag forces on different bodies in a wind channel and applying principles of similarity mechanics. To enhance understanding, an augmented reality (AR) app was utilized, enabling students to visualize pressure fields, velocity fields, and streamlines around an airfoil, depending on the measured flow velocities and the set angle of attack. The experiment targeted 45 fourth-semester bachelor's students who had completed a 3-hour-per-week lecture and a 3-hour-per-week exercise on the subject. The didactic framework included multiple phases: a pre-test to assess prior knowledge, distribution of a script providing theoretical background and experimental procedures as well as addressing the important learning objectives, execution of the experiment, and subsequent analysis and discussion of the results. The teaching-learning activity concluded with a post-test, after the students had given a short presentation and discussed their results with the supervisor, to evaluate the learning outcomes. Pre-test and post-test were both carried out on the university's learning management system, Moodle. To measure the effectiveness of the teaching-learning activity, the pre- and post-test are identical. The effectiveness was assessed using both quantitative and qualitative results from the pre- and post-tests. The post-test results showed significant improvement, demonstrating the overall success of the experiment. However, areas for improvement were identified. The pre-test revealed that several learning objectives were already achieved by many students, suggesting that the experiment could shift its focus in these areas. Conversely, the post-test highlighted certain learning objectives that were not met, which would have remained unnoticed without this form of assessment. These findings emphasize the importance of precise learning assessments and iterative refinement of teaching-learning activities to address students' needs effectively. Future adjustments could reallocate the experiment's emphasis to address underachieved learning objectives while reducing redundancy, thereby improving the efficiency and impact of the teaching-learning activity.
Contemporary education must be student-centered and competence-oriented. The focus must be on independently tackling and solving problems in line with constructivist learning theories. This also requires a transition in teaching activities away from the pure transfer of knowledge towards supporting students in achieving their learning objectives: the well-known shift from teaching to learning. This quickly leads to a seemingly insurmountable problem: lecturers quickly reach the limits of feasibility depending on the number of students. In addition, in the spirit of the "Universal Design for Learning", it is desirable for students to be able to receive support not only at certain times, but whenever they find the time to work independently on specific topics and problems.
One idea for solving this problem is the use of GPTs, i.e. generative pre-trained transformers. Probably the best-known example of this is chatGPT, a large language model (LLM) that provides knowledge in some impressive ways. Based on chatGPT, it is also possible to create your own GPTs for specific topics. This suggests the idea of using such specifically customized GPTs to support students. The following scientific questions arise from this idea:
What is chatGPT (in different versions) already capable of in terms of solving problems in mathematics and mechanics?
How accurate and reliable are the solution approaches and solutions provided by chatGPT/GPTs?
Is it possible to additionally instruct GPTs in terms of how and in what way feedback and answers should be provided? In particular, how can it be ensured that GPTs do not directly "give away" complete solution paths?
In this contribution, we will present a case study that relates to the content of one of our undergraduate mechanics lectures and specifically addresses the answers to these questions.
Visualizing complex phenomena in fluid mechanics has long been a challenge in education. Traditional experimental approaches often focus on macroscopic effects, such as lift forces, while smaller-scale interactions remain theoretical and difficult for students to grasp. This limitation hinders the connection between theoretical knowledge and practical applications, particularly in illustrating multi-scale fluid mechanics phenomena. To address these challenges, we developed a student-centered teaching solution combining a theoretical laboratory script with an augmented reality (AR) application. The script features a curated collection of fluid mechanics experiments grounded in constructive alignment and aligned with the SOLO taxonomy to promote deep understanding over surface learning. Embedded QR codes link to corresponding AR-based simulations, allowing students to visualize and interact with fluid mechanics concepts. Following a constructivist learning approach, the AR app enables students to explore micro-scale flow patterns and dynamic transitions, transforming previously abstract phenomena into accessible, self-directed learning opportunities. The AR app supports independent, remote experimentation through HomeLabs, enabling students to perform experiments without specialized equipment or materials. This setup creates a flexible, dynamic learning environment where students can engage with content at their own pace and receive immediate feedback. By eliminating the need for physical laboratories or computational resources for complex CFD simulations, the app reduces barriers to advanced learning while minimizing costs and instructor workload. Designed as an open-source platform, the project is scalable and adaptable to varying educational needs. Educators can customize the app to suit specific teaching goals and seamlessly integrate it into existing curricula without requiring significant infrastructure investments. Surveys show that students, particularly the "tablet generation," prefer accessible, digital experiments. Thus, the app was developed using the Unity Engine to ensure a user-friendly experience compatible with tablets and smartphones. This innovative approach bridges the gap between theoretical and practical learning by combining interactive tools, advanced visualization, and cost-effective accessibility. It enhances student engagement and understanding while providing a scalable and sustainable model for modernizing laboratory education in fluid mechanics. By promoting open-source collaboration, this project extends its benefits to a broader academic community, fostering resource sharing and collective advancement in STEM education.
This paper presents a new education concept for the course "Mechanics 1", which was specially developed for the new Bachelor's degree program "AI Engineering - Artificial Intelligence in Engineering Science". The Bachelor's degree program is the first German-speaking degree program that provides an integrated education in artificial intelligence, engineering science, and computer science. The focus is on the systematic integration of coding exercises in the practical lessons, as a result of which the students acquire practical programming skills in addition to the theoretical foundation of mechanics. The presented teaching methodology requires students to solve exercises not through manual calculations but by using computer algebra systems. In conjunction with this, students learn to model mechanical problems using the programming language Python. This shifts the educational focus away from manual analytical problem-solving skills towards a more refined problem understanding and in-depth knowledge of the mathematical and software-technical modeling of corresponding problems. This enables an increased focus on understanding mechanical systems and the discussion of more complex and practice-oriented problems. A key element of the concept is the structure of the course: At the beginning, students receive partly pre-structured programs, which they complete step by step by adding the governing equations. This enables both the computation of large systems of equations and the graphical representation of mechanical relationships, for example of internal forces. As the semester progresses, the level of guidance decreases, which encourages the development of increasingly independent problem-solving strategies. The concept is accompanied by the use of electronic response systems (ERS), which enables continuous monitoring of learning progress and provides students with immediate feedback on their understanding. Initial evaluations indicate that the integrated approach both deepens the understanding of mechanical principles and conveys important digital competencies for modern engineering practice.
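As an illustration of the kind of partly pre-structured exercise described above (not taken from the actual course material), a snippet might ask students to fill in only the equilibrium equations of a simply supported beam, while the computer algebra system handles the algebra:

```python
import sympy as sp

# Simply supported beam of length L with a point load F at position a.
L, F, a = sp.symbols("L F a", positive=True)
A_y, B_y = sp.symbols("A_y B_y")               # unknown support reactions

# Students complete the equilibrium conditions; the solver does the algebra.
sum_forces_y  = sp.Eq(A_y + B_y - F, 0)        # sum of vertical forces
sum_moments_A = sp.Eq(B_y * L - F * a, 0)      # sum of moments about support A

reactions = sp.solve([sum_forces_y, sum_moments_A], [A_y, B_y])
print(reactions)   # {A_y: F*(L - a)/L, B_y: F*a/L}
```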
Experts discuss modern technical solutions and challenges in building engineering and transport, showcasing practical implementations and innovations shaping the future of construction and transit.
Over the last 15 years, Europe has seen a very dynamic development and implementation of zero-emission city buses. The leading products here are battery electric buses (BEB), fuel cell electric buses (FCEB) and modern trolleybuses equipped with high-capacity traction batteries. The development of energy storage systems and electric drive train systems and the evolutionary adaptation of vehicles enable full operability of zero-emission city buses in everyday use. High technical reliability, efficiency and low noise levels contribute to the high level of acceptance among passengers, drivers and technical service staff. In 2024, approximately 60% of new city buses in Europe were zero-emission vehicles. For many years, Solaris has been introducing innovative BEBs, FCEBs and trolleybuses to the market, successfully competing with the giants of the bus industry from Europe and China. The culmination of many years of consistent strategic actions aimed at the electrification of city buses is the prestigious European Bus of the Year 2025 award won by the Solaris E18 H2 bus equipped with a hydrogen fuel cell.
Failure of materials and structural components has been an important issue for as long as man-made constructions have existed. The section focuses on damage mechanics and fracture mechanics for all kinds of solid materials and structures. It aims at bringing together related original research covering experimental observations, modeling approaches and numerical techniques. Moreover, material failure is a complex process, which may be considered on different length scales ranging from the atomistic scale up to the macro scale of engineering structures. Since the failure behavior of materials strongly depends on the loading situation, contributions addressing static, dynamic and multi-axial failure are welcome, as well as fatigue problems.
Understanding the fracture mechanism of elastomeric materials is essential for ensuring the reliability and durability of numerous engineering applications. Rubbers, which exhibit large elastic deformations and energy-dissipative behavior, are susceptible to crack initiation and propagation under monotonic and cyclic loading. This work therefore presents a mixed multi-field (u, p, d) formulation for phase-field fracture in nearly incompressible hyperelasticity undergoing failure. We rely on the phase-field approach to fracture, which is a widely adopted framework for modeling and computing fracture phenomena in solids. We incorporate a hybrid formulation with an additive strain energy decomposition to account for the different behaviors in tension and compression. A mixed displacement-pressure formulation must satisfy the inf-sup condition for solution stability. Hence, we utilize a perturbed Lagrangian formulation which enforces the incompressibility constraint in the undamaged material and reduces the pressure effect in the damaged material, so that the discrete Ladyzhenskaya-Babuska-Brezzi (LBB) condition is not violated. In this work, we also present crack propagation experiments, evaluated by digital image correlation (DIC), for an SBR elastomer and compare them with the proposed numerical approach. Two numerical examples are provided to illustrate the capability and efficiency of the model, thereby capturing complex fracture behavior under realistic loading conditions. Experimental verification of quasi-statically driven crack growth was performed under fully relaxed conditions. An experimental calibration was first conducted to determine the hyperelastic constitutive parameters and the material's fracture toughness. Finally, the effectiveness of the scheme is further evaluated by comparing the crack paths, maximum force response, and force-displacement curves obtained from both experimental and numerical results.
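For orientation, one common form of such a mixed, perturbed-Lagrangian phase-field potential is sketched below, with an AT2-type crack surface density and quadratic degradation; the precise energy split, degradation of the volumetric term and pressure treatment used in this work may differ.

```latex
\Pi(\mathbf{u},p,d)=\int_{\Omega}\Big[g(d)\,\bar{\Psi}_{\mathrm{iso}}(\bar{\mathbf{F}})
  + p\,(J-1) - \tfrac{1}{2\kappa}\,p^{2}\Big]\,\mathrm{d}V
  + \frac{G_c}{2}\int_{\Omega}\Big(\frac{d^{2}}{\ell}+\ell\,\lvert\nabla d\rvert^{2}\Big)\,\mathrm{d}V,
\qquad g(d)=(1-d)^{2},
```

where the perturbation term -p^2/(2*kappa) with a large modulus kappa regularizes the incompressibility constraint J = 1 and yields p = kappa (J - 1) at stationarity.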
The virtual element method (VEM) is a modern discretization scheme for the numerical solution of boundary value problems on polytopal grids. It has proven to be an efficient alternative to the Finite Element Method (FEM) in recent years and is most prominently known for offering considerable flexibility in the meshing process. In the context of numerical applications of fracture mechanics, one of its most attractive features results from the possibility to employ elements of complex shape with an arbitrary number of nodes, which may be convex as well as non-convex and may even contain crack tips. Consequently, crack growth simulations with the VEM benefit from the fact that incremental changes of the geometry of a crack do not necessitate remeshing, but rather crack paths can traverse through the existing elements, enabling the realization of simulations with significantly reduced computational cost. While classical methods for crack analysis have already been successfully applied within the VEM framework, further research is still required regarding the implementation of efficient and precise methods to evaluate crack tip loading and crack deflection. Therefore, the concept of configurational forces in material space is employed, which already proved to be highly effective for the calculation of these quantities in the context of the FEM. However, the calculations yield certain challenges that need to be dealt with, e.g., due to discontinuous stresses and strains across element edges in the vicinity of the crack tip, and require additional effort in connection with curved crack faces. This work discusses the theoretical and computational aspects of employing configurational forces for mixed-mode crack analysis within the VEM. A methodology for calculating nodal configurational forces is presented and comparative studies are conducted, carefully investigating challenges and opportunities emerging from the discretization method for assessing crack tip loading and crack path prediction, with results benchmarked against analytical and FEM-based solutions.
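For reference, nodal configurational forces are typically evaluated in a Galerkin setting from the Eshelby (energy-momentum) stress; a common small-strain form, given here merely for illustration and independent of the VEM-specific projection issues discussed above, reads

```latex
\boldsymbol{\Sigma} \;=\; \Psi\,\mathbf{I} \;-\; (\nabla\mathbf{u})^{\mathrm{T}}\,\boldsymbol{\sigma},
\qquad
\mathbf{G}_{I} \;=\; \int_{\Omega} \boldsymbol{\Sigma}^{\mathrm{T}}\,\nabla N_{I}\;\mathrm{d}V ,
```

with strain energy density Psi, Cauchy stress sigma and the shape (or, in the VEM, projected virtual) function N_I of node I; sign conventions differ between authors. The configurational force evaluated at a crack-tip node is related to the energy release rate and serves as input to crack deflection criteria.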
Cracks in elastic materials represent an extreme form of notches which, due to high local stress concentrations at their tips, tend to grow even under moderate external loads. This growth can compromise the integrity of technical or natural structures and potentially leads to failure. Highly dynamic loads, such as impacts, further exacerbate this process and may result in catastrophic failure.
For decades, numerical methods have been applied for stress analysis of cracks, making significant contributions to the safety of structures, the prediction of lifetimes, and the enhancement of durability to ensure safe operation. In addition to the Finite Element Method (FEM), which has established itself as a standard tool for analysing cracks, the Boundary Element Method (BEM) has been extensively studied for applications in fracture mechanics. Particle-based approaches and damage-mechanics models, such as those utilizing phase-field methods, provide qualitative insights but are not suitable for quantitative investigations or validation in safety-critical applications. The Virtual Element Method (VEM) is a generalization of the FEM that employs accurate approximations of virtual shape functions by using suitable projection operators. As a result, VEM allows for discretizations with elements of arbitrary shape, which may even be non-convex and include crack tips. For crack analysis, VEM offers significant advantages due to its high flexibility in element design, enabling accurate calculations of loading quantities and efficient predictions of crack paths.
This presentation focuses on the dynamic analysis of stationary cracks using VEM. Key developments include integrating inertia into VEM, implementing advanced time integration schemes and performing dynamic crack loading analyses as a preparatory step towards simulating dynamic crack propagation. Results are validated against analytical solutions and benchmarks from the literature.
This work focuses on the simulation-based determination of the effective crack resistance in heterogeneous, and specifically porous, materials. We follow the approach of Hossain et al. [1] for a numerical experiment that enables the identification of the effective crack resistance as a material parameter. Displacement boundary conditions corresponding to a steadily growing crack on the macroscopic scale are applied to a representative microstructure. On the microscale, crack growth is simulated without making a priori assumptions about crack paths, continuity of crack propagation, etc. The maximum value of the macroscopically acting J-integral represents the driving force necessary to advance the crack by a macroscopic length increment without crack arrest and is defined as the effective crack resistance.
For simulations on the representative microstructure without a priori assumptions about crack growth, phase-field models are particularly suitable and are therefore employed. Phase-field models introduce a regularized approximation of cracks and define an inherent length scale. In this work, we discuss the determination of the effective fracture toughness in metallic foams. The representative microstructures are obtained from CT scans of real materials. We examine the influence of various parameters on the effective crack resistance, with a particular focus on the interplay between the length scales of the phase-field model and the heterogeneities.
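In compact form, with J_macro(t) denoting the macroscopically acting J-integral during the prescribed steady macroscopic crack advance, the effective crack resistance used here (following [1]) can be written as

```latex
R_{\mathrm{eff}} \;=\; \max_{t}\; J_{\mathrm{macro}}(t),
```

i.e. the largest driving force that must be supplied so that the crack traverses the representative microstructure by one macroscopic length increment without arrest.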
References
[1] M. Z. Hossain, C. J. Hsueh, B. Bourdin, K. Bhattacharya, Effective toughness of heterogeneous media, Journal of the Mechanics and Physics of Solids 71 (2014), pp. 15–32.
The main aim of this research is to study the mechanical behavior and damage evolution of additively manufactured spinodoid metamaterials under quasi-static loading. Additive manufacturing (AM) is a transformative method that constructs parts layer-by-layer from a digital 3D model, enabling the creation of complex geometries and reducing costs associated with assembly. Spinodoid metamaterials, a class of architected metamaterials derived from spinodal decomposition principles, are particularly well-suited for fabrication using AM techniques like laser powder bed fusion (LPBF). These structures possess tunable mechanical properties, such as direction-specific strength and smooth property gradation, making them highly versatile for a wide variety of engineering applications in the aerospace, construction, and medical industries.
The design process involves generating spinodoid structures through Gaussian Random Fields (GRFs), which produce microstructures with spatially varying densities. These variations significantly influence the mechanical behavior of the structure, especially under tensile and compressive loading. Damage initiation and progression play a pivotal role in determining how these materials deform. Understanding these processes is essential for optimizing their performance and ensuring reliable designs. To achieve accurate predictions of mechanical behavior, the present study incorporates both local and non-local damage models into the simulation framework. Non-local approaches, such as implicit gradient damage models, address issues like localization and enhance physical accuracy, which is very important for designing these spinodoids for engineering applications.
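A minimal sketch of the GRF construction commonly used for spinodoid design, an anisotropic superposition of random plane waves followed by thresholding to a target relative density, is given below. The wave number, number of waves and density value are arbitrary example inputs, and the actual design pipeline of this study may differ in detail (e.g. in how the wave directions are restricted to cones).

```python
import numpy as np

def spinodoid_field(points, n_waves=100, beta=10 * np.pi, seed=0):
    """Gaussian-random-field value at given points: sum of random plane waves."""
    rng = np.random.default_rng(seed)
    # Random unit wave vectors (isotropic here; anisotropy is obtained by
    # restricting the directions to cones around chosen axes).
    n = rng.normal(size=(n_waves, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    gamma = rng.uniform(0.0, 2.0 * np.pi, size=n_waves)    # random phases
    phase = beta * points @ n.T + gamma
    return np.sqrt(2.0 / n_waves) * np.cos(phase).sum(axis=1)

# Voxelize a unit cell and threshold the field to a target relative density.
grid = np.linspace(0.0, 1.0, 48)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
pts = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
phi = spinodoid_field(pts).reshape(X.shape)
rel_density = 0.4
threshold = np.quantile(phi, 1.0 - rel_density)
solid = phi >= threshold            # boolean voxel model of the spinodoid
```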
By integrating advanced damage modeling techniques, the current study aims to establish a robust framework for simulating the performance of additively manufactured structures, while underscoring the immense potential of spinodoid metamaterials for applications requiring customized mechanical properties. The integration of damage modeling enables precise predictions of mechanical performance, supporting the robust design of next-generation metamaterials. The findings are particularly relevant for industries prioritizing lightweight structures, biomedical implants, and energy absorption systems. This research lays the foundation for broader adoption by demonstrating a robust framework for designing innovative and customizable materials.
The classical phase-field approach [1] assumes a homogenized distribution of the critical energy release rate, which leads to catastrophic failure and brittle fracture [2,3]. However, experiments on the fracture behaviour of rubber-like materials imply the existence of a ductile-like progressive failure mechanism [4,5]. The bulk response of test specimens is not strongly sensitive to perturbations of the material parameters governing the elastic mechanical response. However, the crack initiation and the crack path are more sensitive to perturbations of the fracture parameters. Exploring this behaviour is important, as progressive crack growth is crucial for understanding the fatigue failure of rubber-like materials.
This study introduces sampling of the critical entropic energy and the material parameters from various probability distributions to capture the inherent stochasticity of rubber-like materials. To validate the concept, experiments were conducted on die-cut, V-shaped, double-edge notched specimens of unfilled styrene-butadiene rubber (SBR), revealing non-repeatable crack paths and varying load-displacement curves beyond the threshold for crack initiation. Results obtained using die-cut specimens demonstrated ductile-like fracture behaviour, whereas laser-cut specimens exhibited sharp brittle fracture behaviour, demonstrating the sensitivity of crack initiation to initial flaws and defects on the surface. In addition to in-house experiments, data obtained from the literature were used for validation. Analysis of the simulations and experiments has shown that ductile-like fractures grow in an irregular manner and follow ornate, non-linear paths for both asymmetrical and symmetrical specimens with the introduction of stochasticity in the material and fracture parameters. The study challenges the classical approach to phase-field fracture for rubber-like materials and offers insights for future work on fracture and fatigue crack growth.
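As a simple illustration of how such stochasticity can be introduced (not the specific distributions or values calibrated in this study), the critical energy can be sampled element-wise, e.g. from a lognormal law around a nominal value:

```python
import numpy as np

def sample_fracture_energy(n_elements, g_c_mean=2.5, cov=0.15, seed=1):
    """Element-wise critical energy sampled from a lognormal distribution.

    g_c_mean : nominal critical energy (units as in the model, e.g. N/mm)
    cov      : coefficient of variation controlling the scatter
    """
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + cov**2))     # lognormal shape parameter
    mu = np.log(g_c_mean) - 0.5 * sigma**2    # preserves the mean value
    return rng.lognormal(mean=mu, sigma=sigma, size=n_elements)

g_c = sample_fracture_energy(10000)   # one realization for a 10k-element mesh
```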
References
[1] Miehe, C., Hofacker, M., Welschinger, F. (2010). A phase field model for rate-independent crack propagation: Robust algorithmic implementation based on operator splits. Computer Methods in Applied Mechanics and Engineering, 199, 2765-2778.
[2] Açıkgöz, K., & Dal, H. (2022). A generalized phase-field approach for the failure of rubberlike materials. Constitutive Models for Rubber XII, 312–320.
[3] Schänzel, L., Dal, H., Miehe, C., (2013). Phase field modeling of fracture in rubbery polymers. Constitutive Models for Rubber VIII, 335–341.
[4] Wu, J., McAuliffe, C., Waisman, H., Deodatis, G. (2016). Stochastic analysis of polymer composites rupture at large deformations modeled by a phase field method. Computer Methods in Applied Mechanics and Engineering, 312, 596-634.
[5] Açıkgöz, K., Tanış, B. E., Dal, H. (2024). A stochastic phase-field approach for the failure of rubberlike materials. Constitutive Models for Rubber XIII, accepted.
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
Port-Hamiltonian (PH) systems modeling [1] — a framework that inherently guarantees energetic consistency and facilitates the interconnection of physical systems — is gaining increasing popularity also in computational mechanics. Structural elements such as strings [2] and beams, which often represent submodules in flexible multibody systems, are particularly well-suited for this approach [3]. This talk focuses on the application of PH modeling to the dynamics of geometrically exact beams, also known as Simo-Reissner beams or special Cosserat rods. While the linearized Timoshenko beam, that assumes small deformations and rotations, is a common application example in PH works, we focus on two aspects that are only rarely addressed: Firstly, we analyze computational aspects linked to the spatial and temporal discretization of PH beams. We demonstrate how the PH framework naturally mitigates numerical locking effects. Secondly, we present a PH formulation of the geometrically exact beam, which is amenable to large deformations and rotations. We further discuss a structure-preserving spatial discretization, using mixed finite elements. By maintaining the PH properties for the spatially discrete system, this approach provides a robust foundation for accurate and physically consistent simulations. Additionally, an appropriate time discretization ensures an exact representation of the energy balance in discrete time. Simulation results illustrate the performance of the developed approach. The gained insights highlight the potential of the PH framework for extending to more complex systems and advancing numerical treatments in structural mechanics.
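For readers less familiar with the framework, a finite-dimensional port-Hamiltonian system has the generic input-state-output form (see [1]); the beam-specific operators of this talk are of course infinite-dimensional before discretization:

```latex
\dot{\mathbf{x}} = \big(\mathbf{J}(\mathbf{x})-\mathbf{R}(\mathbf{x})\big)\,\nabla H(\mathbf{x}) + \mathbf{B}(\mathbf{x})\,\mathbf{u},
\qquad
\mathbf{y} = \mathbf{B}(\mathbf{x})^{\mathrm{T}}\,\nabla H(\mathbf{x}),
```

with skew-symmetric interconnection matrix J, positive semi-definite dissipation matrix R and Hamiltonian H, which immediately yields the power balance dH/dt = -grad(H)^T R grad(H) + y^T u <= y^T u; preserving this structure under discretization is what the talk refers to as structure-preserving.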
References:
[1] V. Duindam, A. Macchelli, S. Stramigioli and H. Bruyninckx: "Modeling and Control of Complex Physical Systems: The Port-Hamiltonian Approach". Berlin, Heidelberg: Springer, 2009.
[2] P. L. Kinon, T. Thoma, P. Betsch and P. Kotyczka: “Generalized Maxwell Viscoelasticity for Geometrically Exact Strings: Nonlinear Port-Hamiltonian Formulation and Structure-Preserving Discretization”. In: IFAC-PapersOnLine 58(6), pp. 101–106, 2024.
[3] A. Warsewa, M. Böhm, O. Sawodny and C. Tarín: “A Port-Hamiltonian Approach to Modeling the Structural Dynamics of Complex Systems”. In: Applied Mathematical Modelling 89, pp. 1528–1546, 2021.
CANCELLED
————
From a computational perspective, the analysis of wave propagation and other high-frequency dynamic simulations is rather costly and presents considerable challenges. In such cases, explicit time integration is frequently employed, as the time step for numerical simulation is limited by the physical characteristics of the system. Nevertheless, the efficiency of explicit time integration depends on the availability of a lumped mass matrix (LMM). In fact, no single mass lumping technique delivers satisfactory accuracy for all types of elements. The spectral element method (SEM), however, is widely utilized for transient simulations, as it yields an LMM by construction and provides optimal rates of convergence for quadrilateral and hexahedral elements. In this context, lumping is achieved through the application of the nodal quadrature technique, where Gauß-Lobatto-Legendre (GLL) points are utilized both for the definition of the Lagrangian shape functions and for the quadrature rule. It should be noted, however, that this method is not suitable for other point distributions, as it can result in zero or negative masses, which makes it inappropriate for arbitrary element types. Conversely, established mass lumping techniques, such as the row-sum method and the diagonal scaling method (HRZ lumping), can be employed to transform a consistent mass matrix into a diagonal matrix for arbitrary element types. These approaches, however, frequently cause complications such as reduced convergence rates and an inability to guarantee the positive definiteness of the mass matrix. Therefore, it is highly desirable to develop a new method that addresses these shortcomings.
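For context, the two established lumping techniques mentioned above can be written in a few lines; the sketch below shows them for an arbitrary consistent element mass matrix of a scalar field (for vector-valued problems HRZ scaling is applied per direction) and is not the SDT-based method proposed in this contribution.

```python
import numpy as np

def row_sum_lumping(M):
    """Row-sum technique: each diagonal entry collects the full row of M."""
    return np.diag(M.sum(axis=1))

def hrz_lumping(M):
    """Diagonal scaling (HRZ): scale the diagonal of M to preserve total mass."""
    d = np.diag(M).copy()
    total_mass = M.sum()             # total mass represented by the element
    return np.diag(d * total_mass / d.sum())
```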
A first step in this direction is achieved in the contribution at hand. Here, we introduce a novel approach based on the spectral decomposition theorem (SDT) that enables the transformation of a consistent mass matrix (cSEM) into a lumped mass matrix (lSEM). This technique ensures that the lumped mass matrix is positive definite and guarantees exponential convergence rates, as it exactly reproduces the SEM mass matrix obtained through nodal quadrature. The proposed SDT-based mass lumping method thus provides a foundation for the development of advanced mass lumping schemes for other element types, for which no such methods currently exist.
It is well known that in Finite Element (FE) simulations, the selected mesh has a strong influence on the quality of the results. Especially in the case of highly distorted meshes, large discrepancies between the numerical and the analytical solution can be observed. To address this issue, various elements based on the Petrov-Galerkin FE method have been developed in recent years (see, e.g., [1, 3]). In contrast to the Bubnov-Galerkin method, which is commonly used in most FE formulations, the Petrov-Galerkin method employs different ansatz spaces for the test and trial functions. While this approach typically improves accuracy in elastostatic simulations, its extension to elastodynamic problems is not straightforward. In [2], it is shown that special discretization techniques are necessary to achieve conservation of energy in the case of unsymmetric mass and stiffness matrices, as they occur in the Petrov-Galerkin FE method. Otherwise, unbounded energy growth may occur over time. Wang and Hillman [2] proposed a modification to the Newmark method to address this issue. In this contribution, we present an alternative approach. The core idea is to introduce the velocity field as an independent variable and enforce its relationship with the displacement field through a special constraint. We demonstrate that, under certain conditions, this approach enables energy-conserving simulations.
References
[1] Pfefferkorn R, Betsch P. Mesh distortion insensitive and locking-free Petrov-Galerkin low-order EAS elements for linear elasticity, Int J Numer Methods Eng. 2021; 122(23):6924-6954.
https://doi.org/10.1002/nme.6817
[2] Wang J, Hillman MC. Temporal stability of collocation, Petrov-Galerkin, and other non-symmetric methods in elastodynamics and an energy conserving time integration, Comput. Methods Appl. Mech. Engrg. 2022; 393:114738.
https://doi.org/10.1016/j.cma.2022.114738
[3] Xie Q, Sze KY, Zhou YX. Modified and Trefftz unsymmetric finite element models, Int J Mech Mater Des. 2016; 12:53-70.
https://doi.org/10.1007/s10999-014-9289-3
Impulse-based substructuring (IBS) is the time-domain counterpart of the well-established frequency-based substructuring method (FBS). The IBS method is especially suited for determining shock responses, where the main goal might be to correctly predict the maximum amplitudes, e.g., of the accelerations. Its advantages originate from it being a time-domain method, i.e., high-frequency content, potentially consisting of many modes, can be represented in a short time series.
While this method has been successfully applied for numerical models in the past, experimental applications are just now starting to be conducted. In the authors' previous work, it was shown that the IBS method can be used experimentally to determine the initial response peaks of aluminum and polyoxymethylene (POM) rods considered as one-dimensional, using a time domain deconvolution procedure to experimentally estimate the impulse response functions (IRF) and additionally downsampling with a low-pass filter for the POM rods. Whereas this showed that an experimental application of IBS is possible, the practical use-cases were limited to one-dimensional systems.
Therefore, the Virtual Point Transformation (VPT), commonly used within FBS to realize experimental 6 degree of freedom interface coupling, was adapted for the IBS method. The performance of the adapted IBS scheme using VPT is showcased in an experimental example. With the approaches shown, it is possible to correctly reconstruct the initial acceleration response peaks to impulse hammer impacts. The results show that combining a time domain deconvolution, downsampling with a low-pass filter, and the VPT fundamentally enables experimental substructuring of three-dimensional structures in the time domain using the IBS method.
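At its core, the response prediction in IBS rests on a discrete convolution of the (experimentally estimated) impulse response functions with the excitation and interface forces; a minimal single-channel sketch of this building block, without the interface coupling itself and with synthetic example signals, is:

```python
import numpy as np

def predict_response(h, f, dt):
    """Discrete Duhamel convolution: response from impulse response h and force f."""
    u_full = np.convolve(h, f) * dt       # length len(h) + len(f) - 1
    return u_full[: len(f)]               # response over the excitation time window

# Example with synthetic data: a decaying oscillation as IRF, a short hammer pulse.
dt = 1e-4
t = np.arange(0.0, 0.5, dt)
h = np.exp(-20.0 * t) * np.sin(2.0 * np.pi * 200.0 * t)   # synthetic IRF
f = np.where(t < 1e-3, 1.0, 0.0)                          # idealized impact force
u = predict_response(h, f, dt)
```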
Accurate NVH simulation results require the use of models that precisely represent the physical dynamics and interactions occurring under real-world operating situations. This research focuses on the development of a mechanical equivalent model for trailers that accounts for different loading conditions to improve the accuracy of these simulations. Central to this study is the use of experimental modal analysis techniques to investigate the structural dynamics of both a scaled-down trailer model and its full-size counterpart.
The scaled trailer model, designed with dynamic similarity to its real-world counterpart, is tested on a specialized bench allowing excitation at the axle and recording of forces at the kingpin as well as accelerations of the trailer. The experiments include various loading scenarios, with shifts in the centre of gravity, different payload types, load couplings and weight distributions. In addition, a scaled-down version of a fifth wheel is used to investigate the interface forces that occur with different coupling types such as material combinations and different clearances. In parallel, the full-size trailer is tested on a multi-piston test rig to evaluate its dynamic behaviour. Key characteristics, including natural frequencies, mode shapes, and damping ratios, are derived for a comparative analysis.
The results show a correlation between the scaled and full-size models, with the position of the centre of gravity and the clearance in the fifth wheel coupling emerging as the most influential factors for the dynamic behaviour. Local stiffening effects due to load connections are observed, although their effects are secondary to the overall positioning of the load. Finite element simulations including these effects align with the experimental results and underline the validity of the scaled model for dynamic evaluations.
This research confirms the feasibility of using a scaled model to predict the dynamics of full-size trailers and provides data to create a mechanical equivalent model for integration into NVH simulations. These findings contribute to improved simulation accuracy and enable better design and performance optimization of semi-trailers and tractor units in the transportation industry.
Dynamic Substructuring provides a framework for analyzing the dynamics of complex, large-scale systems at the subcomponent level, often integrating model reduction techniques in numerical workflows. For machines and components with geometric and dynamic complexities, incorporating experimental data for component characterization is crucial. A frequency-based approach is commonly adopted for efficiently coupling experimentally tested components. While Substructuring focuses on assembling passive components, Transfer Path Analysis (TPA) addresses the characterization of active components and the propagation of vibrations to the passive system.
The chain of operations for experimental substructuring and TPA in the frequency domain begins with well-designed measurements that generate passive frequency response functions (FRFs) representing subcomponent dynamics and active operational measurements to characterize unknown sources. Interface compatibility is often weakened, reducing the problem to a few generalized degrees of freedom while retaining key dynamic features. The substructuring assembly (e.g. using Lagrange multipliers) and source characterization (e.g. through in-situ blocked forces) both involve solving inverse problems.
While these operations may seem straightforward, several sources of error can undermine the methodology, leading to unreliable predictions. Proper modeling of interfaces and degrees of freedom is crucial, and filtering and regularization techniques can help mitigate vibration noise and distortions. This paper evaluates the effectiveness of filtering (e.g., modal filtering, PRANK) and regularization (e.g., truncated SVD, Tikhonov regularization, added compliance) techniques within the framework of Frequency-Based Substructuring and TPA. The physical consistency, usability, and effectiveness of these methods are discussed and benchmarked using both numerical and experimental examples.
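As a concrete reference for the regularization step, the two standard techniques named above can be written compactly for the inverse problem Y f = u (e.g. blocked-force identification from measured FRFs Y and operational responses u); the sketch is generic and not tied to the specific benchmarks of this paper.

```python
import numpy as np

def tikhonov(Y, u, lam):
    """Tikhonov-regularized least squares: f = (Y^H Y + lam^2 I)^-1 Y^H u."""
    n = Y.shape[1]
    return np.linalg.solve(Y.conj().T @ Y + lam**2 * np.eye(n), Y.conj().T @ u)

def truncated_svd(Y, u, rank):
    """Truncated-SVD pseudo-inverse keeping only the `rank` largest singular values."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]
    return Vh.conj().T @ (s_inv * (U.conj().T @ u))
```

Both estimators trade bias against noise amplification; the regularization parameter lam or the truncation rank is typically chosen per frequency line, e.g. from an L-curve criterion.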
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
Iron-based shape memory alloys (Fe-SMAs) exhibit unique shape memory effect (SME) behavior upon thermal activation, making them advantageous for various structural applications such as prestressing. Introducing short Fe-SMA fibers into concrete structures allows for a uniform and localized distribution of prestressing forces within the concrete matrix. In this study, an experimental campaign was conducted to evaluate the efficiency of prestressing concrete using short Fe-SMA fibers. Concrete samples reinforced with Fe-SMA fibers, steel fibers, and plain concrete were tested under three-point bending after exposure to ambient temperature, 160°C, and 200°C. All fiber-reinforced samples contained a 2% volume fraction of fibers with identical geometries featuring end-hooked shapes for enhanced pull-out resistance. The results indicated that at ambient temperature, samples with steel fibers exhibited higher strength (22.95 MPa) than those with Fe-SMA fibers (20.2 MPa) due to the absence of phase transformation in the Fe-SMA fibers. At elevated temperatures of 160°C and 200°C, however, samples with Fe-SMA fibers demonstrated higher strength (26.65 MPa and 24.39 MPa, respectively) compared to those with steel fibers (19.46 MPa and 16.67 MPa, respectively), highlighting the activation of the SME. Based on these findings, a numerical model was developed to simulate the behavior of concrete composites reinforced with Fe-SMA fibers. An algorithm was created to define the random distribution of fibers, and a novel modeling approach accounted for the end-hooked geometry by assigning different contact properties to the modeled straight fiber ends and middle sections. A mesh sensitivity analysis was performed to determine the optimal mesh size, and the model was validated by comparing numerical results with experimental data.
Acknowledgements:
This research was partially funded by the National Science Centre of Poland (Grant No. 2023/49/N/ST8/03063) and Poznań University of Technology (Grant No. 0412/SBAD/0091).
The research presented in this paper refers to a method of reinforcing thin-walled cold-formed steel (TWCFS) elements using carbon fibre (CF) composites joined by a bonded connection. The proposed reinforcing method responds well to the needs of designers of TWCFS structures, who strive for solutions that are fast, easy and at the same time safe. The originality of the proposed strengthening method lies in the fact that the mats are bonded across the cross-section to form a segmentally closed cross-section. It should be noted that thin-walled cold-formed steel elements are widely used in civil engineering practice due to their beneficial ratio of load-bearing capacity to material consumption. Unfortunately, the increased use of such structures may, in the event of damage or overloading, require reinforcement. Due to the very small wall thickness of the cross-section, traditional methods using welded connections or mechanical fasteners are not recommended because they can weaken the cross-section of TWCFS elements. The main result of the presented investigation is the identification of the failure mechanisms of CFS "sigma" beams strengthened with CFRP textiles. Very complex failure mechanisms were observed, including debonding of the adhesive layer, local plastic deformation, and local, global and distortional buckling deformation. It was also found that the proposed strengthening of CFS "sigma"-type beams using CFRP wraps is most effective in the case of distortional buckling deformation. To investigate this phenomenon, a numerical model was built in the ABAQUS software, using the Finite Element Method (FEM). In this model, the TWCFS sigma beams were modelled with C3D8R hexahedral elements. The supports, stiffeners and plates were modelled with discrete rigid shell elements with linear shape functions of the R3D4 type. The CFRP mat was modelled with M3D4 four-node membrane elements. The verification consisted of comparing the force-displacement relationship of the numerical model with results from full-scale laboratory tests. Based on the conducted analysis, three forms of joint damage were observed, namely failure at the steel-adhesive interface, fibre rupture, and mixed damage behaviour, i.e. a combination of adhesive failure and fabric rupture, suggesting complex interactions between the materials. It was also noted that the proposed method can be very beneficial due to its simplicity and non-destructive nature, while simultaneously providing high bearing capacity of the reinforced element.
The main objective of this work is to present an idea of the Finite Difference Method (FDM) application for calculations of variable cross-section beams. The differential equation has an equivalent difference equation in FDM [3,4], where unfortunately the flexibility matrix is not symmetric because the bending stiffness EJ is no longer constant along the beam [1]. To avoid these difficulties, the following solution idea is developed. Instead of solving fourth-order equations, the calculations are performed in two steps using the FDM approximation of two second-order differential equations. In the first step, the matrix equation equivalent to the first differential equation, which relates the bending moments to the given load, is solved. In the next step, after determining the values of the bending moments, the second matrix equation is solved. In this step, the nodal displacement vector is obtained as the solution of the matrix equation equivalent to the second second-order differential equation. The vector on the right-hand side of the equations contains the nodal bending moments obtained in the previous step. In both steps, each row of the narrow-band matrix contains only the three coefficients of the central second-order difference operator and, where applicable, entries corresponding to the boundary conditions. To obtain the eigenfrequencies of beams, the standard solution of the matrix eigenvalue problem is used. In the stability problem of beams, the equation terms containing the normal force N are treated as nodal forces. The most important advantage of the presented idea is that the variable bending stiffness EJ of the beam appears only on the right-hand side of second-order equations with constant coefficients. A significant number of numerical calculations have shown the effectiveness and high accuracy of the elaborated method.
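A minimal sketch of the described two-step procedure for a simply supported beam under a distributed load q(x) is given below; the load, the stiffness distribution EJ(x) and the grid size are example inputs only, and other boundary conditions require the corresponding modifications of the difference equations.

```python
import numpy as np

def fdm_two_step(q, EJ, L, n=101):
    """Deflection of a simply supported beam by the two-step FDM described above.

    Step 1: solve M'' = -q      (with M = 0 at both ends).
    Step 2: solve w'' = -M/EJ   (with w = 0 at both ends).
    The same constant-coefficient difference matrix is reused in both steps;
    the variable stiffness EJ(x) appears only on the right-hand side.
    """
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    m = n - 2                                    # interior nodes
    # Central second-order difference operator (tridiagonal, constant coefficients).
    D2 = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
          + np.diag(np.ones(m - 1), -1)) / h**2

    M = np.zeros(n)
    M[1:-1] = np.linalg.solve(D2, -q(x[1:-1]))               # step 1: bending moments

    w = np.zeros(n)
    w[1:-1] = np.linalg.solve(D2, -M[1:-1] / EJ(x[1:-1]))    # step 2: deflections
    return x, M, w

# Example: uniform load, linearly varying stiffness.
x, M, w = fdm_two_step(q=lambda x: 5.0e3 * np.ones_like(x),
                       EJ=lambda x: 2.0e6 * (1.0 + 0.5 * x),
                       L=4.0)
```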
Literature
[1] Timoshenko S.P., Goodier J.N., Theory of elasticity, McGraw-Hill, New York, 1934
[2] Timoshenko S.P., Gere J.M., Theory of elastic stability, McGraw-Hill, New York, 1961
[3] Levy H., Lessman F., Równania różnicowe skończone, Warszawa, PWN 1966 (in Polish)
[4] Pawlak Z., Rakowski J., Fundamental solutions for regular discrete slabs, Zeitschrift für Angewandte Mathematik und Mechanik, 77, 1997, 261-262
The section covers all fields of vibrational problems in solid mechanics or mechatronics including nonlinear effects. Submissions may address, for example, systems with nonlinear material behavior, nonlinearities in joints, mathematical solution methods (analytical or numerical), control or description of nonlinear behavior like bifurcations or chaos, or experimental identification of nonlinearities.
We investigate the oscillations of a stiff, longitudinally elastic double pendulum. The model is motivated by the oscillations of a tethered satellite system with negligible bending stiffness, where we expected that the fast longitudinal oscillations have no significant influence on the slow transversal swinging. Numerical simulations show that even for very stiff pendula a significant amount of the energy is transferred to the longitudinal oscillations, leading to irregular system behavior.
In this talk, we investigate how a periodic oscillation of a rigid double pendulum loses stability as the longitudinal stiffness is decreased and how the method of adiabatic invariants can be applied to correctly approximate the system behaviour.
Evaluating the integrity measure of safe basins is a critical task in engineering, as it provides insight into the robustness of operational states in various machines and systems. A classical local approach involves determining the largest-radius hypersphere (or, in a suitably scaled context, a hyperellipsoid) fully contained within the safe region. However, in high-dimensional spaces, even infinitesimal discrepancies between the basin boundary and the shape of the l^2 hypersphere lead to a drastic reduction in its relevance. As the number of dimensions n grows, a fixed-radius l^2 hypersphere occupies an increasingly negligible portion of the basin's hypervolume, rendering the conventional l^2-based measure less meaningful. In this work, we elucidate the fundamental inadequacy of the standard l^2 measure in high-dimensional settings. To overcome this shortcoming, we propose an alternative approach for identifying the optimal l^p-norm that best aligns with the basin geometry, thereby minimizing the volume loss at the basin's boundary. By analyzing a set of high-dimensional escape problems, we show that an appropriately chosen l^p-based measure offers a more robust and consistent framework for quantifying safe basins, ultimately enhancing the reliability of safety evaluations in engineering applications.
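The geometric effect described above can be quantified directly: the volume of an l^p ball of radius r in n dimensions is V_p(n, r) = (2r)^n Gamma(1 + 1/p)^n / Gamma(1 + n/p), so the fraction of a hypercube covered by its inscribed l^2 ball collapses rapidly with n. A short numerical illustration, independent of any specific escape problem:

```python
from math import lgamma, log, exp

def lp_ball_volume_fraction(n, p, r=0.5):
    """Fraction of the hypercube [-r, r]^n covered by the inscribed l^p ball."""
    # log V_p(n, r) = n*log(2r) + n*log Gamma(1 + 1/p) - log Gamma(1 + n/p)
    log_ball = n * log(2 * r) + n * lgamma(1.0 + 1.0 / p) - lgamma(1.0 + n / p)
    log_cube = n * log(2 * r)
    return exp(log_ball - log_cube)

for n in (2, 5, 10, 20, 50):
    print(n, lp_ball_volume_fraction(n, p=2), lp_ball_volume_fraction(n, p=4))
```

For n = 2 and p = 2 this reproduces the familiar value pi/4; already for n = 20 the inscribed l^2 ball covers far less than one percent of the cube, while larger p values retain a substantially larger fraction.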
Bulk materials are used in many areas of technology and are often stored in silos. Examples can be found in mining, plastics technology or food technology.
In some cases, extremely loud noises caused by self-excited vibrations can occur when silos are discharged. However, the excitation mechanism for these vibrations is still not clearly understood. The noise is emitted by bending vibrations of the silo wall via the surrounding air. The energy for the bending vibrations comes from the potential energy of the bulk material, which leads to a vertical movement of the bulk material during discharging. The excitation mechanism transfers energy from the vertical movement of the bulk material via the boundary layer of the particles to the bending vibration of the silo wall.
This work therefore examines the contact interaction between the particles of the bulk material and the silo wall more closely. Interesting interactions between the silo wall and particles in normal contact and in friction are revealed. Investigations into the contact surfaces between particles and a vibrating silo wall show that a strong coupling of the silo movement and the boundary layer of the particles only occurs with a sufficiently large compression of the bulk material. This was established by observing the contact surfaces between particles and a transparent silo wall using a model test rig. Friction tests on a specially developed tribometer for bulk solids showed a dependence of the friction force on the sliding velocity. In addition, the friction shows pronounced hysteresis properties.
These more detailed investigations of the interactions between the particles of the boundary layer and the silo wall are the basis for possible future modelling of silo vibration.
Literature:
K. Popp, M. Rudolph, M. Kröger, M. Lindner: Mechanismen für die Entstehung und Vermeidung von reibungsselbsterregten Schwingungen. Konstruktion, 2004.
T. Falke, M. Kröger: Erweiterungen der tribologischer Modellierungen zur Abbildung reibungsselbsterregter Schwingungen. In: 61. Tribologie-Fachtagung, 2020, pp. 09/1-09/9.
T. Fürstner: Reibkontakteinflüsse zwischen Partikeln und Festkörpern auf die Schwingungsselbsterregung. Dissertation, TU Bergakademie Freiberg, 2021.
Dynamic Atomic Force Microscopy (dAFM) uses a forced, oscillating cantilever probe to image sample surface topography. The working principle exploits the nonlinear interaction forces between probe and sample and has two operation modes: non-contact or intermittent contact between probe tip and sample surface. One of the key characteristics of dAFM is spatial vertical resolution. The resolution, or more generally the sensitivity, is defined as the smallest detectable change of a measured value, limited by noise. Mathematically, sensitivity is often defined as the ratio between noise and responsivity. While the limiting noise is generally considered to be of thermomechanical nature, the responsivity characterizes the cantilever as a resonant sensor. It is defined as the slope of the sensor output as a function of the sensor input. In dAFM, the input is the sample topography height and the output is the resonant sensor's oscillation amplitude.
Previous works have considered parametric excitation, more precisely parametric resonance, as an approach for improving intermittent AFM. However, sensitivity was not directly part of those investigations. Additionally, the nonlinear limitation in the proposed parametrically excited system was intrinsic and was therefore neither generic nor adjustable.
In this work, we perform new investigations regarding the influence of parametric resonance on the sensitivity of non-contact AFM. The parametric excitation and the limitation of the resonance are achieved by a feedback circuit with cubic nonlinearity. The study includes different parameters of the parametric resonance, such as excitation frequency and strength of parametric excitation. Additionally, the influence of more general AFM parameters, such as probe characteristics, is considered. For this, we investigate the dynamic behavior of MEMS probes in parametric resonance, including the nonlinear probe-sample interaction force, using numerical continuation. In this manner, we calculate frequency responses at different tip-sample distances as well as amplitude/phase distance curves. Building on this, we present a more generalized formulation of sensitivity in parametrically excited dAFM systems, depending on system and process parameters, in order to provide a global overview of the potential that parametric resonances offer in terms of sensitivity. This is also compared to forced excited dAFM.
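For readers unfamiliar with the mechanism, the following toy sketch (all parameter values are assumptions for illustration, not the MEMS probe model of this study) integrates a parametrically excited oscillator whose growth at principal parametric resonance is limited by a cubic feedback nonlinearity, the same qualitative ingredient used here to bound the resonance.

```python
# Toy sketch (assumed parameter values, not the authors' AFM model): a
# parametrically excited oscillator whose response is limited by a cubic
# feedback nonlinearity, integrated in time with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

zeta, lam, kappa, Omega = 0.01, 0.08, 0.5, 2.02  # damping, pump strength, cubic gain, pump frequency

def rhs(t, y):
    x, v = y
    # x'' + 2*zeta*x' + (1 + lam*cos(Omega*t))*x + kappa*x**3 = 0
    return [v, -2*zeta*v - (1.0 + lam*np.cos(Omega*t))*x - kappa*x**3]

sol = solve_ivp(rhs, (0.0, 2000.0), [1e-3, 0.0], max_step=0.05, rtol=1e-8, atol=1e-10)
print("steady-state amplitude ~", np.max(np.abs(sol.y[0][-5000:])))
```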
CANCELLED
——————
Numerical path continuation is commonly applied to determine how limit states of complex nonlinear dynamical systems evolve with a free parameter. For high-fidelity models, the sequential nature inherent to continuation can become an insurmountable obstacle. Recently, we proposed a novel method for robustly obtaining the desired high-fidelity solution curves in a parallelized way.
Our approach begins with a low-fidelity model, which is either of lower order or omits certain nonlinear or coupling terms. The simplifications must be substantial enough to produce an approximate solution curve with little or even negligible computational effort. A subset of relevant solution points along this approximate curve is then selected, and from these points, we iteratively compute the points on the targeted solution branch of the high-fidelity model. It is crucial to understand that the steps toward the high-fidelity solution branch can be executed independently for each selected solution point, making the approach perfectly suited for parallel computation.
We demonstrate the proposed generic concept through a range of both academic and industry-sized nonlinear vibration problems. Various system models, nonlinearities, and analyses are explored, with the Harmonic Balance method applied in all cases to compute periodic limit states. Finally, it is shown how the concept can be applied in combination with the single nonlinear mode theory, where the low-fidelity model is obtained from nonlinear modal analysis of an isolated mode. The method is available, along with the presented numerical examples, as a branch PEACE (Parallelized Re-analysis Of Solution Curves) of the open source tool NLvib.
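The parallelization idea can be sketched generically as follows (the algebraic toy models and function names are placeholders, not the PEACE/NLvib implementation): each point selected on the low-fidelity curve serves as an independent predictor that is corrected to the residual of the high-fidelity model, so all corrections can run concurrently.

```python
# Hedged sketch of the parallelized re-analysis concept (illustrative toy
# models, not the actual Harmonic Balance systems): points sampled on a cheap
# low-fidelity curve are corrected independently, hence in parallel.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import fsolve

def low_fidelity_curve(mu):
    # assumed closed-form approximation of the branch x(mu)
    return np.sqrt(mu)

def high_fidelity_residual(x, mu):
    # assumed high-fidelity algebraic system R(x; mu) = 0
    return x**3 - mu*x - 0.05*np.sin(x)

def correct(mu):
    x0 = low_fidelity_curve(mu)                               # predictor from the cheap model
    x = fsolve(high_fidelity_residual, x0, args=(mu,))[0]     # independent corrector step
    return mu, x

if __name__ == "__main__":
    mus = np.linspace(0.5, 4.0, 16)      # selected points along the approximate curve
    with Pool() as pool:
        branch = pool.map(correct, mus)  # embarrassingly parallel re-analysis
    print(branch[:3])
```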
The section focuses on constitutive modelling of natural and artificial materials subject to elastic and inelastic deformation processes. The aim is to compare new constitutive models formulated on both the phenomenological and the micromechanics basis to determine their validity by comparison of simulations with experiments. A wide range of open problems will be considered in the section, like multi-scale modelling of heterogeneous materials, implementation of constitutive models in numerical applications, and the virtual testing of structural systems.
In this contribution, we present formulations for hyperelastic physics-augmented neural network (PANN) constitutive models that fulfill the polyconvexity condition. Polyconvex constitutive models are based on energy potentials which are convex functions in certain deformation measures. From a material modelling perspective, polyconvexity is desirable as it ensures a materially stable behavior of the constitutive model and potentially enhances its generalization capabilities.
We present polyconvex PANN modelling approaches for purely mechanical [1,2], parametrized [3], and multiphysical electro-elastic material behavior [4]. We apply the models to different material datasets, including synthetic data of homogenized microstructures and experimental data of 3D printing materials. For some material classes, polyconvex constitutive models show an excellent performance and improve the model’s generalization and material stability compared to unrestricted PANN approaches. In other cases, however, polyconvex PANNs fail to represent the material behavior and show poor predictive quality. This is caused by the mathematical structure of polyconvex constitutive models.
Overall, we discuss how polyconvex PANN models can be formulated, what the opportunities and benefits of such models are, and what constitutes their limits of applicability.
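A minimal sketch of how such a polyconvex potential can be parametrized is given below (illustrative only and deliberately simplified: it uses just two polyconvex inputs, random untrained weights, and omits stress normalization, cofactor terms and growth conditions, so it is not the PANN architecture of [1-4]). Convexity and monotonicity of the network in its inputs are enforced by nonnegative weights together with a convex, non-decreasing activation.

```python
# Minimal numpy sketch (illustrative, not the PANN models of [1-4]): a scalar
# potential W(F) built as a convex, non-decreasing network of convex inputs;
# nonnegative weights plus softplus keep the composition convex.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def softplus(z):               # convex and non-decreasing activation
    return np.log1p(np.exp(z))

def potential(F):
    I1 = np.trace(F.T @ F)     # convex in F
    J = np.linalg.det(F)       # polyconvex argument det F
    x = np.array([I1, J])
    h = softplus(np.abs(W1) @ x + b1)      # nonnegative weights preserve convexity
    return (np.abs(W2) @ h + b2).item()

F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
print("W(F) =", potential(F))
```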
REFERENCES
[1] D.K. Klein, M. Fernández, R.J. Martin, P. Neff, O. Weeger. “Polyconvex anisotropic hyperelasticity with neural networks”. Journal of the Mechanics and Physics of Solids 159:104703 (2022)
[2] L. Linden, D.K. Klein, K.A. Kalina, J. Brummund, O. Weeger, M. Kästner. “Neural networks meet hyperelasticity: A guide to enforcing physics”. Journal of the Mechanics and Physics of Solids 179:105363 (2023)
[3] D.K. Klein, F.J. Roth, I. Valizadeh, O. Weeger. “Parametrized polyconvex hyperelasticity with physics-augmented neural networks”. Data-Centric Engineering 4:e25 (2023)
[4] D.K. Klein, R. Ortigosa, J. Martínez-Frutos, O. Weeger. “Nonlinear electro-elastic finite element analysis with neural network constitutive models”. Computer Methods in Applied Mechanics and Engineering 425:116910 (2024)
The modeling of non-metals presents significant challenges due to the complex mechanical behavior of such materials. For instance, the constitutive parameters of polymers are often calibrated using homogeneous experimental data, which are typically oversimplified. Moreover, even when these models successfully replicate homogeneous experimental results, there is no mathematical guarantee of reliable predictions in real-world applications. In this contribution, we present a novel data-driven computational method that converts large datasets on the damage-elastoplasticity of elastomers into constitutive equations that accurately predict behavior even for new, unseen cases. To this end, a regression algorithm is developed that learns polymer mechanics from full-field measurement data. By means of an elegant model selection technique, a modest generalization error of the proposed data-driven statistical learning framework can be guaranteed. The presented approach establishes a foundation for generalization using mechanical data, a capability not present in traditional constitutive modeling techniques.
Textile reinforcements offer numerous advantages over conventional materials, including net-zero emissions, net-zero waste, durability, and design flexibility, rendering them ideally suited for aerospace, automotive, marine, and defense sectors. Nevertheless, understanding and predicting the mechanical behavior of textile reinforcements, especially in complex scenarios like when fibers slide and bend, remain challenging. In this contribution, we aim to revolutionize current data-driven computational mechanics by integrating it with statistical learning to construct a pioneering framework that can convert large datasets of generalized mechanics of textile reinforcements into robust constitutive formulations with reliability beyond existing models. To this end, the collection of comprehensive mechanical datasets will be accomplished through advanced experimental mechanics. A supervised-learning algorithm will be developed on the basis of the principle of virtual work. By balancing bias and variance of the learner, a small generalization error of the proposed data-driven statistical-learning framework will be guaranteed and quantified. This novel methodology will allow for material modeling with predictive capability beyond observed data, a critical aspect particularly relevant in the defense sector.
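A generic sketch of the model-selection step referred to in the two contributions above is shown below (toy data and a plain ridge regressor stand in for the actual virtual-work-based learner): the regularization strength is chosen by k-fold cross-validation, which is the standard way of balancing bias and variance and of estimating the generalization error.

```python
# Generic model-selection sketch (assumed toy data, not the elastomer or
# textile datasets): ridge regression with k-fold cross-validation to balance
# bias and variance via an estimated generalization error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_error(lam, k=5):
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for fold in folds:
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False
        w = ridge_fit(X[mask], y[mask], lam)
        errs.append(np.mean((X[fold] @ w - y[fold])**2))
    return np.mean(errs)

lams = [1e-4, 1e-2, 1.0, 10.0]
best = min(lams, key=cv_error)   # model selection: smallest estimated generalization error
print("selected regularization:", best)
```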
Compressible polymeric foams exhibit highly non-linear mechanical behaviour, predominantly influenced by their porosity and cellular architecture. Classical constitutive models, limited by fixed mathematical formulations, often fail to accurately describe this complexity. In contrast, data-driven constitutive modelling offers a flexible and robust alternative for capturing the intricate mechanical responses of such materials. This study extends a B-spline-based data-driven framework [1,2] to compressible materials, dynamically adjusting control point values to minimize the error between experimental data and model predictions, while adhering to thermodynamic consistency through optimization constraints. On the experimental side, inhomogeneous compression, confined compression and homogeneous tensile tests were conducted using closed-cell EPDM (ethylene propylene diene monomer) foams with three distinct densities. The confined compression and uniaxial tension tests were calibrated through data-training software [3], whereas the compression tests were validated via 3D finite element simulations. Finite element analyses of the latter confirmed the framework’s accuracy. Compared to classical constitutive models [4-6], the data-driven approach effectively captures the complex mechanical behavior of compressible polymeric foams. The results are compared to the predictions obtained from well-known compressible hyperelastic constitutive models in the literature.
References
[1] Dal, H., Denli, F. A., Açan, A. K., & Kaliske, M. (2023). Data-driven hyperelasticity, Part I: A canonical isotropic formulation for rubberlike materials. Journal of the Mechanics and Physics of Solids, 179, 105381.
[2] Tikenoğulları, O. Z., Açan, A. K., Kuhl, E., & Dal, H. (2023). Data-driven hyperelasticity, Part II: A canonical framework for anisotropic soft biological tissues. Journal of the Mechanics and Physics of Solids, 181, 105453.
[3] Durna, R., Açan, A. K., Tikenoğulları, O. Z., & Dal, H. (2024). Hyper-Data: A Matlab based optimization software for data-driven hyperelasticity. SoftwareX, 26, 101642.
[4] Paul J. Blatz, William L. Ko; Application of Finite Elastic Theory to the Deformation of Rubbery Materials. Transactions of The Society of Rheology 1 March 1962; 6 (1): 223–252.
[5] Ogden, R. W. (1972): Large deformation isotropic elasticity: On the correlation of theory and experiment for compressible rubberlike solids. Proceedings of the Royal Society of London Series A, Mathematical and Physical Sciences, 328, 567–583
[6] Hill, R. (1979). Aspects of invariance in solid mechanics. Advances in applied mechanics, 18, 1-75.
This work is concerned with the modeling of a cold-box sand, a composition of sand grains and a resin binder. To this end, experiments are performed which show the following characteristics: localization phenomena in the form of a shear band, softening behaviour in the force-displacement curve, and asymmetric behaviour in compression and tension. For this complex material a micromorphic continuum is used to model the behaviour of the sand. In addition to the degrees of freedom of a classical continuum, the micromorphic model has additional degrees of freedom, which represent the grains of sand in the analysed material. The macro-part of this model represents the binder of the cold-box sand. The subject of this work is the parameter identification of the cold-box sand for the homogeneous and the inhomogeneous deformation state. For the inhomogeneous state, the sand specimen used for the homogeneous state is extended by a radial notch in the centre area. For both cases, homogeneous and inhomogeneous, the point in time shortly before the formation of the shear band is selected. This means that only the elastic region of the micromorphic model is relevant and therefore no plastic parameters need to be identified. For the parameter identification in this work a 3D model is used.
Fibre-reinforced thermoplastic composites are increasingly employed in structural applications due to their short production cycles, high recyclability, and excellent specific strength and stiffness. The production of three-dimensional shell-like structures involves the thermoforming of fibre-reinforced thermoplastic laminates (FRTPLs). This draping of the fibres within the melted polymer matrix relies on forming mechanisms, including inter- and intralaminar shearing, bending, and stretching, which also significantly affect manufacturing defects such as wrinkling and fibre breakage. However, the stretching behavior of laminates is inherently limited by the brittle tensile characteristics of the reinforcing fibres.
To enhance the formability of FRTPLs, fibre trimming within the laminate can be applied. This approach introduces additional degrees of freedom for laminate forming, enabling improved adaptability during the forming process. However, fibre trimming also weakens the structural properties (stiffness, strength) of the final component. Therefore, the objective is to minimize the number of trimmed fibres and to determine their locations in the initial, undeformed laminate. This ensures enhanced formability only in critical regions.
To identify optimal fibre trimming positions, a thermomechanically coupled simulation of the forming process is conducted. Using a macroscopic homogenized material model, describing the interlaminar shearing, regions with a strong strain localization during forming are identified. Subsequently, a mesoscale material model, accounting for the laminate's anisotropic behaviour during forming, is employed to analyse the forming results of the tailored FRTPL. This tailored laminate features localized fibre trimming in a multilayer configuration of unidirectional FRTP tapes. The numerical model is validated through experimental forming tests, confirming the effectiveness of the proposed approach.
This section is dedicated to discussing recent advances in multiscale and homogenization techniques for static and dynamic problems. Topics of particular interest are nonlinear homogenization techniques, multiscale modelling of failure processes and localization phenomena, FE2 methods, atomistic-to-continuum coupling, contact homogenization, model reduction techniques, as well as homogenization schemes incorporating experimentally determined microstructure data.
Dislocation-mediated plastic deformation plays a crucial role in determining the mechanical behavior and microstructural evolution in contact mechanics, yet establishing a robust multiscale linkage remains a challenge. Here, we introduce a crystal plasticity model based on continuum dislocation dynamics that integrates microscale dislocation behavior with macroscale plastic deformation under contact conditions. In addition to crystallographic influences on dislocation mobility, the model is also capable of capturing subsurface dislocation transport and trace line formation under sliding contact, unveiling complex microstructural features that impact plastic deformation, surface topography, and the evolution of contact area and contact pressure, which are key features influencing plasticity in contact mechanics. Unlike traditional continuum simulations that lack microstructural resolution or discrete simulations that fail to couple microstructure-driven plasticity with evolving contact conditions, our approach bridges these limitations. By employing an implicit macro-microscale coupling mechanism, a flux vector splitting upwind scheme for positivity-preserving dislocation transport when solving the dislocation dynamics problem, and a penalty contact boundary condition accounting for plastic deformation effects on contact properties, the model achieves high numerical stability in the multiscale coupling computation. This framework provides a predictive foundation for understanding dislocation-driven deformation in contact mechanics, encompassing applications such as indentation and tribological loading.
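To illustrate the transport ingredient only (a strongly simplified 1D sketch with an assumed velocity field, not the coupled crystal-plasticity model), the snippet below advects a dislocation density with a flux-vector-splitting upwind scheme; splitting the flux into its positive and negative parts and upwinding each keeps the density nonnegative under the usual CFL restriction.

```python
# Simplified 1D sketch (assumed setup, not the full continuum dislocation
# dynamics model): flux-vector-splitting upwind transport of a dislocation
# density rho with a sign-changing velocity field; first-order upwinding keeps
# rho nonnegative under a CFL limit and conserves the total density.
import numpy as np

nx, L = 200, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
rho = np.exp(-200 * (x - 0.3)**2)            # initial density pile-up
v = 0.5 * np.sin(2 * np.pi * x)              # assumed glide velocity field
dt = 0.8 * dx / np.max(np.abs(v))            # CFL condition

vp, vm = np.maximum(v, 0.0), np.minimum(v, 0.0)   # flux splitting v = v+ + v-
for _ in range(400):
    # upwind differences: backward for v+, forward for v- (periodic domain)
    dfp = (vp * rho - np.roll(vp * rho, 1)) / dx
    dfm = (np.roll(vm * rho, -1) - vm * rho) / dx
    rho = rho - dt * (dfp + dfm)
print("min(rho) =", rho.min(), " total =", rho.sum() * dx)
```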
We present a hyper-reduction technique applied to computational homogenization in (nonlinear) magnetostatics. Within the method, a reduced set of integration points is identified by clustering and empirically corrected. The presented method is applied to a computationally homogenized (nonlinear) 3D magnetostatics model, which contains multiple phases showing different material behavior (e.g. a ferromagnetic matrix with spherical pores).
A major roadblock in the widespread adoption of multiscale modeling as a simulation technique is the complexity of seamlessly coupling existing software at different physical and computational scales. In this work, we present a software framework which couples a hierarchy of surrogate models on the micro-scale to a macro-scale model. The framework consists of two software products: preCICE and the Micro Manager. preCICE is an open-source black-box partitioned coupling library for multiphysics simulations. By design, preCICE couples two or more simulation codes on the same physical scale. The Micro Manager adaptively controls a large number of micro-scale simulations. It is itself coupled to preCICE. This software framework allows for coupling, e.g., finite element (FE) software such as CalculiX and Abaqus to existing micro-mechanics solvers, such as FANS (Fourier Accelerated Nodal Solvers). We present initial results of benchmarking the coupling of FANS on the micro-scale with the FE solver CalculiX on the macro-scale. Instead of using a single solver on the micro-scale, a hierarchy of models can be deployed. As a proof of concept we show how a hierarchy of micro-scale models can adaptively be coupled to the macro-scale model for a 3D mechanics problem.
The function of paper has expanded considerably beyond its original role as a pure information carrier. Paper is now used in a wide range of modern applications, including packaging, furniture and insulation materials. As a result, paper has established itself as a sustainable alternative to carbon-based materials in a variety of areas.
Despite the high demand for paper and paperboard across various sectors, there is still a lack of understanding regarding their macroscopic behavior. The microscopic phenomena that lead to macroscopic plasticity and damage, as well as the means of enhancing material properties for specific loading conditions, remain unresolved. It is therefore crucial to gain a comprehensive understanding of the microstructural mechanisms that influence macroscopic properties.
To gain this understanding, a synthetic fiber network was developed within the finite element framework. The fiber network was generated by a virtual compression process, which was carried out in analogy to the manufacturing process of paper. Contact surfaces were represented using a cohesive contact formulation that permitted fiber separation. The corresponding contact properties were calibrated based on experimental tests of single fiber-fiber bonds. In addition, the distribution of fiber orientation within the network was considered, as this significantly affects the anisotropy of paper at the macro level. Micro-CT scans revealed that fiber orientation is predominantly in-plane. Probability density functions were identified that effectively approximate the distribution of fiber orientations. Incorporating these PDFs into the network generation process improved the representation of the numerical microstructures. As a result, network models were created that integrated various features, including single-fiber anisotropy due to microfibrils and separating contact surfaces. Thereby, we considered essential structural characteristics including paper thickness, basis weight, and fiber orientation.
The response of the bulk material was determined by performing virtual tensile tests on the synthetic fiber networks. The synthetic networks were able to mimic the anisotropic behavior of paper and demonstrate the influence of fiber orientation on mechanical properties. In particular, it was observed that the modulus of elasticity of the paper sample increased with increasing fiber orientation.
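As a small illustration of the orientation-sampling step (the distribution family and its concentration parameter are assumptions, not the probability density functions actually calibrated from the micro-CT data), the snippet below draws in-plane fibre angles from an axial von Mises distribution and evaluates the second-order orientation tensor used to quantify the resulting anisotropy.

```python
# Illustrative sketch (distribution family and parameters assumed, not the
# calibrated micro-CT fit): sampling in-plane fibre orientations from an axial
# von Mises density and quantifying anisotropy via the 2D orientation tensor.
import numpy as np

rng = np.random.default_rng(42)
kappa = 2.0                                    # assumed concentration (0 = isotropic in-plane)
# fibre axes are unsigned, so sample 2*theta from a von Mises law and halve it
theta = rng.vonmises(mu=0.0, kappa=kappa, size=20000) / 2.0

p = np.stack([np.cos(theta), np.sin(theta)])   # unit fibre directions
A = (p @ p.T) / theta.size                     # 2nd-order orientation tensor <p (x) p>
print("orientation tensor:\n", A)              # A[0,0] > 0.5 indicates machine-direction alignment
```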
The relaxed micromorphic model (RMM) is an enriched model that uses the kinematics of the classical micromorphic theory but employs a relaxed curvature in terms of the Curl of the microdistortion field instead of the full gradient [1]. This leads to many advantages that other higher-order continua do not exhibit [2,3]. The most important feature is that the RMM operates as a two-scale linear elastic model between macroscopic and microscopic linear elastic scales. However, identifying the unknown parameters is still an open research topic.
In our talk, we present our recent findings regarding the homogenization of metamaterials into the RMM [4]. We present a novel homogenization scheme based on the least-squares minimization of the energies obtained for different sizes and many deformation modes. The results of the RMM, Cosserat and classical micromorphic models are compared.
REFERENCES
[1] P. Neff, I.D. Ghiba, A. Madeo, L. Placidi and G. Rosi. A unifying perspective: the relaxed linear micromorphic continuum. Continuum Mechanics and Thermodynamics 26, 639–681 (2014).
[2] J. Schröder, M. Sarhil, L. Scheunemann and P. Neff. Lagrange and H(curl,B) based Finite Element formulations for the relaxed micromorphic model, Computational Mechanics 70, pages 1309–1333 (2022).
[3] M. Sarhil, L. Scheunemann, J. Schröder, P. Neff. Size-effects of metamaterial beams subjected to pure bending: on boundary conditions and parameter identification in the relaxed micromorphic model. Computational Mechanics 72, 1091–1113 (2023).
[4] M. Sarhil, L. Scheunemann, J. Schröder, P. Neff. A computational approach to identify the material parameters of the relaxed micromorphic model. Computer Methods in Applied Mechanics and Engineering 425, 116944, (2024).
This section will focus on the analysis and modeling of transition from laminar to turbulent flow using DNS, LES, RANS equations, and experiments. Contributions are expected in, but not limited to, the following topics: stability of incompressible and compressible flows, fundamental study of the dynamics of transition, influence of the wall roughness on transition, transition modelling for LES and RANS equations, transition in flows with complex geometries, subcritical transition.
Mechanical seals are designed to separate two fluid phases from each other, exemplified by the separation of ambient air and hydraulic oil in hydraulic cylinder applications. These seals necessarily include a converging gap, even at a microscopic level. During operation, a rod in contact with both phases moves through this converging gap. Practically, during a rod instroke motion, a thin film of oil sticks to the rod's surface. A simulation model of this process focuses on a laminar two-phase flow through the converging gap and one moving wall with stick boundary conditions. The simulation considers a phase distribution model and parameters such as surface tension, the contact angle for rigid body contact and the interface thickness. The numerical solution indicates the existence of at least two types of solutions, dependent on the initial thickness of the fluid film. The first type involves the suction of ambient air through the converging gap, while in the second, the fluid is wiped off the rod. For a rigid seal, a sharp snapping point separates these two solution types. Contrary to intuition, it is found that a parameter set exists where a thinner oil film leads to a higher oil mass flow, while a thicker oil film results in reduced oil transport. This "blocking" effect can be understood by the influence of the outer boundary conditions, the viscosity parameters of the two phases, and the related velocity field.
Several canonical flows, particularly the square-duct flow, have laminar solutions that remain stable to small disturbances regardless of the Reynolds number. Consequently, the laminar solution remains an attractor in its finite neighborhood, and any non-laminar states are related to the emergence of unstable, invariant solutions that remain separated, in the linear sense, from the laminar solution. These invariant solutions appear in the state space through saddle-node bifurcations and attract the flow state along their stable manifold to later eject it along their unstable directions towards other such states. In this sense, such invariant sets constitute the scaffold of the turbulent attractor and hold a governing role over the turbulent dynamics.
This research focuses on the identification and characterization of several streamwise localized solutions in the turbulent square-duct flow. These solutions represent the first reported localized invariant solutions for square-duct turbulent flow, marking a significant advancement in our understanding of turbulence transition in this configuration.
To identify localized solutions, we employ a combination of advanced numerical techniques, including edge tracking by bisection and Newton-Krylov iterations. Stability analysis is performed using an Arnoldi method-based eigenvalue algorithm. The identified solutions seem to lie within the turbulent attractor, which manifests in the statistical view of the turbulent flow. At the same time, the identified solutions are local edge states when certain symmetries are invoked, with their respective stable manifolds of codimension one. The multitude of edge states means that each such state is a local rather than a global attractor within the edge subspace. Consequently, the respective stable manifolds of each identified solution only locally define the boundary between laminar and turbulent basins of attraction. One of the key implications of this is that the edge seems to be formed by a number of intersecting codimension one hypersurfaces, each only locally separating the state of the flow from the laminar solution. This leads to the conjecture that, at least for a range of Reynolds numbers, the turbulent state may not be fully enveloped by the laminar-turbulent edge, with turbulence in square-duct flow remaining inherently transient.
In summary, this work unveils the first streamwise localized invariant solutions in square-duct turbulent flow, identifies them as local edge states in the symmetric subspace of the full space, and suggests that the turbulent state is a long chaotic transient rather than a stable attractor. These findings significantly impact our understanding of turbulence and its underlying dynamics.
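The edge-tracking idea can be conveyed with a deliberately simple stand-in for the Navier-Stokes solver (a scalar toy model whose unstable fixed point plays the role of the edge state): bisect the perturbation amplitude between initial conditions that relaminarize and ones that reach the turbulent-like attractor, so that the bracketed trajectory shadows the edge.

```python
# Generic sketch of edge tracking by bisection (a scalar toy model replaces
# the square-duct solver; the "edge state" is simply the unstable fixed point
# u = 1): bisect the perturbation amplitude until trajectories neither
# relaminarize nor reach the turbulent-like attractor.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    # toy "flow": u = 0 laminar (stable), u = 1 edge state, u = 2 turbulent-like
    return u * (u - 1.0) * (2.0 - u)

def relaminarizes(amplitude, t_end=60.0):
    sol = solve_ivp(rhs, (0.0, t_end), [amplitude], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] < 0.5

lo, hi = 0.1, 1.9                        # lo relaminarizes, hi does not
for _ in range(50):                      # bisection brackets the edge trajectory
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if relaminarizes(mid) else (lo, mid)
print("edge amplitude ~", 0.5 * (lo + hi))   # converges to 1.0
```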
Diseases of the human circulatory system are the leading cause of mortality worldwide. Early diagnosis can significantly reduce the risk of developing cardiovascular diseases and their consequences. Computational Fluid Dynamics (CFD) is a computer-aided technology that can be applied in cardiology as a prognostic and diagnostic tool for specific diseases, based on analyzing blood flow patterns. One of the most challenging scenarios for computational modeling involves arteries whose shapes are altered by external factors.
An example of blood vessel shape change is the case of Myocardial Bridge (MB) [1]. MB is a congenital heart anomaly in which a section of the main coronary artery lies beneath the heart muscle and is compressed during the heart cycle. In this scenario, the flow pattern is influenced not only by pressures at the inlet and outlet of this section but also by shape changes in the blood vessel. For such cases, CFD must account for the dynamic shape changes during the cardiac cycle. However, there is a limited amount of research focused on validating CFD models in such scenarios.
In the presented research, the MB case was reproduced under laboratory conditions using an elastic conduit subjected to cyclic external stress while flow parameters were recorded. The same case was studied using the CFD technique, where selected data from the experiment were used as boundary conditions, and the remaining data were used for comparison with the model results. The study revealed significant differences between inlet and outlet flows during specific instants, attributed to the volume accumulation potential of the changing shape of the conduit. The model results aligned with the experimental data only when the dynamic shape of the tube was incorporated into the model.
This validation indicates that in cases such as MB, reconstructing the blood vessel's shape is crucial. As a next step, the shape of the blood vessel can be implemented into models using medical imaging, as demonstrated in [2].
References:
[1] Rogers IS, Tremmel JA, Schnittger I. Myocardial bridges: Overview of diagnosis and management. Congenital Heart Disease. 2017;12:619–623.
https://doi.org/10.1111/chd.12499
[2] Psiuk-Maksymowicz, K., Borys, D., et al. Methodology of generation of CFD meshes and 4D shape reconstruction of coronary arteries from patient-specific dynamic CT. Sci Rep. 14, 2201 (2024).
https://doi.org/10.1038/s41598-024-52398-5
Many technical applications involving flows profit from manipulating the boundary conditions or flow parameters in such a way as to generate a desired effect, such as reduced drag, increased mixing, attenuated or increased turbulence, or reduced sound emission. The complexity of the governing equations requires a broad range of research topics and methods to be covered, including analytical treatments, reduced-order modelling, passive manipulation of boundaries in experiments involving riblets, active manipulations using actuators, or numerical approaches involving the adjoint equations, amongst others. The speakers of our session reflect the broad application area of flow control and discuss difficulties on the application side and recent advances in the analysis, as well as experimental and numerical approaches.
Most flows of industrial and technical relevance are turbulent. Turbulent flows typically exhibit larger viscous drag compared to an equivalent laminar flow. This fact prompted numerous efforts in reducing such excess drag – the so-called turbulent drag – during the last decades, enticed by potentially huge economic and environmental benefits. The challenges range from discovering efficient means of reducing turbulent drag, through finding practical ways of implementing them, to bridging the gap between the simple fundamental flows necessarily considered in research and the complex flows occurring in real applications.
Particularly this last point has been hindering the development of turbulent drag reduction techniques. For instance, there is active debate on whether the estimates for drag reduction obtained experimentally or numerically at typically relatively low Reynolds number can be extrapolated to the larger Reynolds numbers of most applications. Similarly, most research investigations so far considered incompressible flows, and flows in which friction is the only source of drag, thereby neglecting compressibility effects, or the interaction with other possible sources of drag.
In this contribution, I address some of the open questions mentioned above, finding answers to some and highlighting where further efforts are still required. The active drag reduction strategy of spanwise wall forcing, as well as the passive drag-reducing surface corrugations known as riblets, are taken as example turbulent drag reduction techniques to analyse the behaviour of turbulent drag reduction.
The reduction of skin friction drag in turbulent flows has constituted a significant field of research over the past few decades. Spanwise forcing represents one of the most extensively studied techniques for the reduction of turbulent drag. It entails the imposition of a spanwise oscillation of the wall in a turbulent wall-bounded flow. Different studies (e.g., Quadrio & Ricco 2004) have demonstrated that, if the forcing parameters (i.e., amplitude and period of oscillation) are chosen properly, a reduction of up to 45% in skin friction may be achieved. Even though the performance of such a technique has been widely studied and verified, the underlying working principle remains unclear. The prevailing hypothesis is that the reduction in skin friction is a consequence of the interaction between the wall oscillation and the turbulent motions in the vicinity of the wall. In particular, the unsteady spanwise velocity profile induced by the wall motion, known as the Stokes layer, is believed to play a fundamental role. The issue, however, is that in the near-wall region, turbulence is significantly influenced by the presence of the wall itself, which makes it challenging to isolate the impact of wall oscillations on the turbulent structures. The present work proposes that this obstacle may be overcome by removing the wall effect entirely. To achieve this, spanwise forcing is not applied in a canonical wall-bounded flow, but in homogeneous shear turbulence instead. Homogeneous shear turbulence is an idealized turbulent flow that has been used in the literature as an intermediate stage of complication between homogeneous isotropic turbulence and inhomogeneous flows, such as wall-bounded flows. It is characterised by a uniform mean velocity gradient and therefore contains many of the physical properties of real wall-bounded flows, while retaining the simplicity of homogeneity. Spanwise forcing is included in the homogeneous shear turbulence simulation by enforcing the Stokes layer velocity profile. Numerical computations at different forcing parameters are performed. Results show similarities with the conventional wall-bounded flow performance of spanwise forcing, particularly in the variation of the skin friction coefficient with the forcing period. Moreover, the simplicity of the base flow makes it possible to clearly identify the effect of spanwise forcing on turbulence. The influence on several statistical quantities, commonly used to describe the behaviour of turbulent flows, is studied. The objective is to establish whether this unconventional numerical configuration may offer insights into the physics of spanwise forcing.
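For reference, the laminar velocity profile underlying the Stokes layer mentioned above is the classical solution of Stokes' second problem for an in-plane wall oscillation w_wall(t) = W_m cos(ωt); this is the analytical profile enforced in the homogeneous-shear simulations described here.

```latex
% Classical Stokes-layer profile (Stokes' second problem) generated by the
% in-plane wall oscillation w_wall(t) = W_m cos(omega*t); nu is the kinematic
% viscosity and delta_S the penetration depth of the oscillating layer.
w(y,t) = W_m \, e^{-y/\delta_S} \cos\!\left(\omega t - \frac{y}{\delta_S}\right),
\qquad
\delta_S = \sqrt{\frac{2\nu}{\omega}}
```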
The pulsed ejection of air into a boundary layer represents an effective method to prevent or delay its separation from the surface. Despite many successful demonstrations, this approach has not reached a breakthrough in the aviation industry, which is arguably due to an excessive energy consumption.
Recent progress in the field of Deep Learning suggests that artificial neural networks can be used to great effect for the optimization of forcing parameters. In particular, reinforcement learning (RL) appears to be suited for this task, revealing robust control strategies through a bio-inspired interaction between the algorithm and the flow under consideration.
The objective in the current study is to test the capabilities of RL subjected to experimental conditions online (i.e., during a wind tunnel experiment). In a closed-loop, low-speed wind tunnel, the flow past a fully-turbulent backward-facing ramp is manipulated by an array of five pulsed-jet actuators relying on magnetic valves to introduce momentum in a pulsatile fashion. Based on partial insight into the instantaneous flow conditions via wall shear-stress measurements, the RL agent is trained to suppress flow separation. Specifically, the action space involves the decision whether to open or close the magnetic valve, thereby either forcing the flow at this specific time instance (open valve) or saving mass flow (closed valve).
This study will promote the implementation of RL for the benefit of controlling real-world, turbulent flow conditions. Furthermore, we expect to obtain interesting insight regarding the mechanisms of flow control by analysing the RL strategy.
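Schematically, the control loop can be captured by an environment interface of the following form (all names, dimensions and the reward shaping are assumptions for illustration; the snippet uses synthetic random sensor data and is not the wind-tunnel implementation): the observation is a vector of wall shear-stress readings, the action is one open/close decision per valve, and the reward trades separation suppression against the mass flow spent.

```python
# Schematic stand-in for the experimental control loop (all names, dimensions
# and rewards are assumptions; sensor data here is synthetic noise): the agent
# observes shear-stress sensors and opens or closes each pulsed-jet valve.
import numpy as np

class RampFlowEnv:
    """Toy stand-in for the backward-facing-ramp control loop."""
    def __init__(self, n_actuators=5, n_sensors=8):
        self.n_actuators, self.n_sensors = n_actuators, n_sensors
        self.rng = np.random.default_rng(0)

    def reset(self):
        return self.rng.standard_normal(self.n_sensors)           # shear-stress readings

    def step(self, action):
        assert action.shape == (self.n_actuators,) and action.dtype == bool
        obs = self.rng.standard_normal(self.n_sensors)            # new sensor snapshot
        separation_penalty = float(np.clip(-obs.mean(), 0.0, None))
        mass_flow_penalty = 0.05 * action.sum()                   # reward saving mass flow
        reward = -separation_penalty - mass_flow_penalty
        return obs, reward, False, {}

env = RampFlowEnv()
obs = env.reset()
obs, r, done, _ = env.step(np.array([True, False, True, False, True]))
print("reward:", r)
```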
This study investigates the effects of two-dimensional streamwise wall waviness (WW) on turbulent boundary layer (TBL) separation over a NACA 4412 airfoil surface under adverse pressure gradient conditions. Both regular sinusoidal and tilted WW configurations are analyzed to determine their effectiveness in delaying flow separation. The universality of the optimal WW design is evaluated across a range of Reynolds numbers. Large Eddy Simulation (LES) is used to analyze both instantaneous and time-averaged flow characteristics.
Initial investigations focus on sinusoidal WW at a friction Reynolds number Reτ = 2500. The effective slope (ES), which defines the waviness slope (Napoli et al., 2008), is found to significantly impact separation delay and wall shear stress. Separation is maximally delayed at ES ≈ 0.17, with a reversal of this trend at higher ES values. A strong correlation is observed between the wavelength (λ) of WW and the downstream shift of the separation point, with larger λ increasing wall shear stress, while amplitude (A) shows no consistent trend. The waviness crests are shown to influence turbulent structures and enhance sweep events, similarly to the amplitude modulation mechanism identified by Mathis et al. (2009). At lower Reτ, the beneficial effects of the optimal sinusoidal configuration diminish due to weaker momentum transfer from the outer layer and reduced amplitude modulation effects.
The tilted WW design demonstrates superior separation control, outperforming the optimal sinusoidal WW by 70% in delaying separation and maintaining effectiveness over a broader range of Reynolds numbers. Unlike the sinusoidal design, which performs well only at higher Reτ, the tilted configuration is effective even at lower Reτ.
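For orientation, the effective slope of a purely sinusoidal waviness h(x) = A sin(2πx/λ) follows directly from the mean-absolute-slope definition cited above (Napoli et al., 2008); under that definition the reported optimum ES ≈ 0.17 corresponds to an amplitude-to-wavelength ratio of roughly 0.04.

```latex
% Effective slope (mean absolute streamwise slope, cf. Napoli et al. 2008) of
% a sinusoidal waviness h(x) = A sin(2*pi*x/lambda); the optimum ES ~ 0.17
% quoted above then fixes the corrugation ratio A/lambda.
\mathrm{ES} = \frac{1}{\lambda}\int_0^{\lambda}\left|\frac{\mathrm{d}h}{\mathrm{d}x}\right|\,\mathrm{d}x
            = \frac{4A}{\lambda},
\qquad
\mathrm{ES} \approx 0.17 \;\Rightarrow\; A/\lambda \approx 0.043
```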
The effect of dielectrophoretic forces on convective flows under microgravity conditions was experimentally investigated to examine the onset and behavior of the hydrodynamic instability. A dielectric fluid confined within a differentially heated, vertically aligned cylindrical annulus was subjected to an alternating electric field at a frequency of 200 Hz during a sounding rocket flight, which provides 300 seconds of microgravity, and during parabolic flights, which provide 22 seconds of microgravity each. The combination of differential heating and a high alternating electric potential induced an artificial electric gravity, triggering the instability.
Flow patterns of the dielectric fluid were studied qualitatively with the shadowgraph technique, and additional quantitative analysis was performed using Particle Image Velocimetry (PIV). During the parabolic flights, two-plane PIV was realized to obtain a three-dimensional view of the flow field.
The first experimental set is devoted to studying the influence of near-critical thermal and electrical forcing parameters on the instability onset and growth rate during the long-term microgravity of the sounding rocket flight.
The second experimental set is devoted to investigating the influence of the initial condition of the fluid on the onset and saturation of the flow instability. Different mixing times were applied during the higher-gravity phases to break the symmetry of the base flow, formed by Rayleigh-Bénard convection cells. Three different aspect ratios of the cylindrical cell were investigated under the same forcing parameters.
After saturation of the instability was achieved, the results allowed the calculation of saturation times under varying thermal and electrical conditions, providing a basis for comparison with linear stability theory. In addition, the sounding rocket flight data facilitated the determination of perturbation amplitude growth rates and near-critical experimental parameters, confirming numerical simulations.
Based on the previous results of the parabolic flight experiment, the findings from the second set allowed the conclusion that breaking the symmetry of the initial pattern helps to reach the saturation of the flow earlier, although the gap ratio influenced the onset time. With a higher aspect ratio, the instability sets in earlier, whereas the mixing time can be adjusted to find the most stable flow mode during microgravity.
Both results offer significant insights into the interaction of thermal and electrical forces in microgravity, advancing the understanding of the thermo-electrically driven instability.
Acknowledgement: The project ‘Dielektrophoretisch induzierte Konvektion (DEPIK)’ was supported by the BMWi via the space administration of the DLR under grant no. 50WM2244.
This session is devoted to the mathematical analysis of natural phenomena and engineering problems. In this area PDEs play a basic role. Therefore lectures discussing analytical aspects of PDE problems as well as problems in the Calculus of Variations are welcome.
We consider the time harmonic Maxwell equations in a complex geometry. We are interested in complex geometries that model polarization filters or Faraday cages. We study the situation that the domain of interest contains inclusions with large or infinite conductivity; the inclusions are distributed in a periodic fashion along a surface. The periodicity is h > 0 and the shape of the inclusion also depends on h, since we want to model thin structures. We are interested in the limit h → 0 and in effective equations. Depending on geometric properties of the inclusions, the effective system can imply polarisation or cancellation of the field.
Maxwell’s equations are considered in a half-waveguide Ω₊ := ℝ₊ × S where S ⊂ ℝ² is a bounded Lipschitz domain and ℝ₊ = (0,∞). The electric permittivity ε and the magnetic permeability μ are assumed to be strictly positive and periodic outside a compact set. Our Maxwell system is accompanied by a radiation condition that was introduced and investigated in [1]. We give a result on existence and uniqueness in the form of a Fredholm alternative: When there is no bound state, i.e., no non-trivial solution of the homogeneous problem on Ω₊, then there is a unique solution for every right-hand side. Our approach is based on an energy method which was developed in [2] to study Helmholtz equations.
References:
[1] A. Kirsch and B. Schweizer, Time harmonic Maxwell’s equations in periodic waveguides, (submitted), TU Dortmund 2024-01
[2] B. Schweizer, Inhomogeneous Helmholtz equations in wave guides – existence and uniqueness results with energy methods in European J. Appl. Math. Vol. 34(2), pp. 211-237
In this lecture, the authors analyse the regularity properties of the solution to time-domain transmission problems in electromagnetism with respect to the data, using the Laplace transform approach. Two model problems are considered: (i) the analysis is first conducted for the scattered and transmitted fields at a dielectric surface Γ; (ii) the slight changes required in the case of a coated perfect conductor are provided.
Reference:
G.C. Hsiao, T. Sanchez-Vizuet and W.L. Wendland: Boundary-field formulation for transient electromagnetic scattering by dielectric scatterers and coated conductors. SIAM J. for Applied Mathematics, to appear.
This work examines the solution of fractional integro-differential equations using the Adomian decomposition method, utilizing a unique kernel operator. The fractional term is specified in the Caputo framework, and the solution is derived by an approximate and analytical method. Fixed-point theorems are employed to establish the existence and uniqueness of the solution. Further, the stability of the solution is discussed by applying the definition of Ulam-Hyers (U-H) stability types. A thorough error analysis is conducted to evaluate the precision of the methodologies, with results demonstrated on test instances. The analytical method is remarkably efficient and straightforward, delivering a precise solution in a single iteration. Comparisons with alternative approaches, illustrated by error tables and graphs, indicate which method is more effective in terms of processing time and cost for both numerical and analytical approaches. The proposed method effectively solves fractional-order differential equations and is pertinent to various scientific problems.
A fixed-point solver for mappings from a simplex into itself that is gradient-free, global, and requires d function evaluations for halving the error is presented, where d is the dimension. It is based on topological arguments and uses the constructive proof of the Mazurkiewicz-Knaster-Kuratowski lemma as it appears in the proof of Brouwer's fixed-point theorem. Its explorative placing of evaluations and the low number of required function evaluations make this solver suitable for computationally expensive or even experimental problems.
In this talk, we present new insights into the regularity of elliptic systems within certain three-dimensional polyhedral domains under mixed boundary conditions.
Our approach begins by analyzing geometric model problems, focusing on a two-dimensional angle that naturally extends to edges in three dimensions.
Utilizing algebraic properties of the elliptic system, we construct a solution basis for the model problem in the absence of boundary conditions. When mixed boundary conditions are introduced, the regularity problem is reduced to the analysis of an associated matrix equation. By further employing numerical range properties and accretive operator theory, we derive regularity results for the solutions.
This framework also covers other boundary conditions and might be adaptable to further scenarios like higher-order elliptic equations or three-dimensional problems posed within conical geometries. The outcome of this theory can also be applied to establish new regularity results for problems in linear elasticity.
Optimization is the next natural step after simulation, with increasing importance in the future. The aim of this session is to provide the basis for a holistic overview of all areas of optimization. Thus abstracts from both theoretical and applied perspectives are welcome.
In reliability-based topology optimization or topology optimization under uncertainty, an accurate evaluation of the probabilistic model requires the system to be simulated for a large number of varying parameters. Traditional gradient-based optimization schemes thus face the difficulty that reasonable accuracy and numerical efficiency often seem mutually exclusive. We propose a stochastic optimization technique to tackle this problem. To be precise, we combine the well-known method of moving asymptotes (MMA) with a stochastic sample-based integration strategy. By adaptively recombining gradient information from previous steps, we obtain a noisy gradient estimator that is asymptotically correct, i.e., the approximation error vanishes over the course of iterations. As a consequence, the resulting stochastic method of moving asymptotes (sMMA) allows us to solve chance constraint topology optimization problems for a fraction of the cost compared to traditional approaches from literature.
Cardinality constraints in optimization are commonly represented via L⁰-type constraints. One way of dealing with these constraints is reformulating the L⁰-constraint using the standard L¹-norm and the largest-K-norm, replacing the resulting constraint with a penalized version and solving the corresponding sequence of DC-type subproblems. We extend the DC-reformulation approach to problems with Gradient-L⁰ constraints, problems where sparsity of the gradient is the target.
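The key identity behind this reformulation can be illustrated in a few lines (toy vectors only; in the gradient-sparsity setting x would be replaced by a discretized gradient of the optimization variable): the L¹-norm minus the largest-K norm is a difference of convex functions that is nonnegative and vanishes exactly when at most K entries are nonzero.

```python
# Sketch of the penalty underlying the DC reformulation (toy vectors; the
# gradient-sparsity case would apply it to a discrete gradient of the design):
# ||x||_1 minus the largest-K norm is nonnegative and vanishes exactly when x
# has at most K nonzero entries, so it can replace the L0 constraint.
import numpy as np

def largest_k_norm(x, K):
    return np.sort(np.abs(x))[::-1][:K].sum()       # sum of the K largest magnitudes

def l0_penalty(x, K):
    return np.abs(x).sum() - largest_k_norm(x, K)   # difference of two convex functions

x_sparse = np.array([3.0, 0.0, -1.5, 0.0, 0.0])
x_dense  = np.array([3.0, 0.2, -1.5, 0.1, 0.0])
print(l0_penalty(x_sparse, K=2), l0_penalty(x_dense, K=2))   # 0.0 vs 0.3
```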
This work is concerned with the computation of the first-order variation for one-dimensional hyperbolic partial differential equations. In the case of shock waves the main challenge is addressed by developing a numerical method to compute the evolution of the generalized tangent vector introduced by Bressan and Marson (1995). Our basic strategy is to combine the conservative numerical schemes and a novel expression of the interface conditions for the tangent vectors along the discontinuity. Based on this, we propose a simple numerical method to compute the tangent vectors for general hyperbolic systems. Numerical results are presented for Burgers' equation and a 2 x 2 hyperbolic system with two genuinely nonlinear fields.
We consider a least squares formulation of a linear parabolic equation in spaces with natural regularity. As a consequence the formulation contains the Riesz isomorphism. The discrete approach uses space-time finite elements and a suitable approximation of the Riesz isomorphism. Using finite elements that are separable with respect to space and time, the final fully discrete representation has the form of a generalized Lyapunov equation. The numerical solution of this system requires a tailored approach.
Contact problems with friction or heat generation appear in many different fields of mechanical engineering. Optimal control or optimization of elasto-plastic problems governed by nonlinear PDEs is the subject of intensive research. The paper is concerned with the analysis as well as the numerical solution of topology optimization problems for elasto-plastic rather than elastic structures in unilateral frictional contact with a rigid foundation. The small strain plasticity model with linear hardening and a von Mises effective stress is used. The displacement and stress of this structure in contact are governed by a system of coupled variational inequalities. The material density function is the design variable. The topology optimization problem consists in finding a material distribution of the domain occupied by the body in contact that minimizes the contact stress and ensures a uniform distribution of this stress. Using the regularization of the stress projection operator on the set of admissible generalized stresses as well as of the friction functional, the original structural optimization problem is replaced by a regularized one. The phase field approach is used to approximate the sharp interface problem formulation and to calculate the derivative of the cost functional. The relation between the sharp interface and phase field optimization problems is investigated. The cost functional derivative is calculated. The Lagrange multiplier technique is used to formulate the set of necessary optimality conditions. A gradient flow approach in the form of a modified Cahn-Hilliard boundary value problem is used to evaluate the optimal topology. A mixed finite element formulation of the modified Cahn-Hilliard problem is used. The evolution of the structure topology is governed by the cost functional derivative. Examples of minimal contact stress topologies are provided and discussed.
Numerical optimization has long been a cornerstone in engineering disciplines, underpinning areas such as optimal control and design optimization. The multi-objective nature of design optimization problems raises the interest in computing entire Pareto fronts of optimal compromises. By suitable scalarization techniques, this can be formulated as solving a family of parametric optimization problems. Model predictive control requires iteratively solving optimal control problems, which are also related to one another by a parametrization of the initial state. Here, to achieve real-time capability, numerical efficiency is of great interest. In this work, we introduce Optimality-Informed Neural Networks (OptINNs) as a combination of classical optimization and machine learning approaches. Drawing inspiration from Physics-Informed Neural Networks, OptINNs directly integrate domain-specific knowledge, here on the optimality of solutions, into their architecture and into the training process. Thereby, the common data dependency bottleneck of neural networks is addressed by providing an objective performance metric that does not rely solely on validation datasets. We propose Karush-Kuhn-Tucker (KKT)-type OptINNs specifically designed for parametric optimization problems. To validate our approach, we apply KKT-type OptINNs to different optimization challenges, ranging from simple linear constrained problems to complex nonlinear optimal control scenarios. Our results highlight the effectiveness of OptINNs in enhancing optimization performance while reducing data requirements.
For all fields of applications the mathematical models are primarily based on differential equations. Hence, their numerical solution plays a fundamental role in numerical mathematics. This section covers mainly the construction and the behavior of numerical methods for differential equations including those of ordinary as well as of partial differential type.
Exponential integrators are a class of time integration methods used to solve large systems of evolution equations. They integrate the linear part of the problem (almost) exactly and use an explicit scheme for the nonlinearity. This makes them highly accurate and stable, especially when the nonlinearity is weak. These integrators are widely used in stiff problems of semilinear parabolic equations and in highly oscillatory problems such as wave and Schrödinger-type equations.
Their implementation involves the computation of matrix functions (such as exponentials and trigonometric functions) on vectors, which can be done explicitly for small problems, or by iterative methods (such as Krylov subspace methods) for larger problems. When these computations are efficient, exponential integrators offer significant advantages.
In this talk, we consider two new approaches to improve the performance of exponential Runge-Kutta methods: μ-mode integrators for evolution equations in Kronecker form and accelerated methods using simplified linearization. Numerical experiments in 2D and 3D show the effectiveness of these approaches.
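As a minimal dense-matrix illustration of this class of methods (not the μ-mode or Krylov-based implementations of the talk, which avoid forming matrix functions explicitly), the sketch below performs exponential Euler steps for a stiff semilinear problem, evaluating φ₁(hA)N(u) through the standard augmented-matrix identity.

```python
# Minimal dense-matrix sketch of exponential Euler for u' = A u + N(u)
# (illustrative only): phi_1(hA) N(u) is obtained from the augmented-matrix
# identity exp([[hA, hN],[0,0]]) whose upper-right block equals h*phi_1(hA)*N.
import numpy as np
from scipy.linalg import expm

def exponential_euler_step(A, N, u, h):
    n = len(u)
    exp_hA = expm(h * A)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = h * A
    M[:n, n] = h * N(u)
    phi1_term = expm(M)[:n, n]        # = h * phi_1(hA) N(u)
    return exp_hA @ u + phi1_term

# stiff 1D heat equation with a weak nonlinearity as an assumed test problem
n = 50
A = (np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)) * (n + 1)**2
N = lambda u: u - u**3
u = np.sin(np.pi * np.linspace(0, 1, n + 2))[1:-1]
for _ in range(10):
    u = exponential_euler_step(A, N, u, h=1e-2)
print("max |u| after 10 steps:", np.abs(u).max())
```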
This research is a collaboration with Marco Caliari (Verona), Fabio Cassini (Verona), and Lukas Einkemmer (Innsbruck).
We present an approach to solve the equations of thermo-poroelasticity. This system describes the dynamics of an elastic solid which contains pores filled with a liquid. The model gives a coupled system that consists of one elliptic equation, describing the elastic deformation of the solid under physical stress, and two parabolic equations for the flow and the temperature of the liquid arising from pressure gradients. Within this talk, we concentrate on the time discretization of such a system. In particular, we are interested in semi-explicit methods which decouple the equations. By considering the pressure and the temperature as one vector-valued unknown, we regain the structure of linear poroelasticity. By applying known methods for weakly coupled poroelasticity, we derive a novel partially decoupled integration scheme for thermo-poroelasticity under certain assumptions on the coupling strength.
Delay Differential Equations (DDEs) are a class of differential equations in which the derivative of the solution at the current time depends not only on the state of the system at that time, but also on states in the past. This formulation leads to a (semi)dynamical system on an infinite-dimensional phase space, similar to what happens for Partial Differential Equations (PDEs). Thus, finding analytical solutions is almost impossible, and even numerical methods for obtaining approximate solutions can be challenging. DDEs are commonly used in modelling real-life problems where the reaction time is not instantaneous: robotics, biological systems, epidemics, and technological processes such as milling.
I will briefly discuss a method for obtaining not only numerical approximations of solutions to a rather general class of DDEs but also a validated (rigorous) algorithm for obtaining bounds on the true solution in the vicinity of the approximation [1]. The current implementation, written in C++ and employing techniques of Automatic Differentiation, is quite general and can be applied almost immediately to any system that is represented by a computer program (C++ subroutine). The core principles of the method are similar to those developed in the CAPD library of validated solvers for ODEs, differential inclusions and dissipative PDEs [2]. The implementation is available as an open-source library on GitHub [3].
I am currently using this method in mathematically rigorous computer-assisted proofs of various dynamical phenomena, such as the existence of periodic orbits in the Mackey-Glass equation [1], the existence of complicated solutions in discontinuous DDEs [4], or symbolic dynamics in an ODE perturbed by a delayed term [5]. However, the method may also be applied to other problems, such as reachability or optimal control, with explicit bounds on the error.
[1] R. Szczelina, P. Zgliczyński, High-order Lohner-type algorithm for rigorous computation of Poincaré maps in systems of Delay Differential Equations with several delays, Foundations of Computational Mathematics, Volume 24, pages 1389–1454 (2024).
[2] T. Kapela, M. Mrozek, D. Wilczak, P. Zgliczynski, CAPD::DynSys: a flexible C++ toolbox for rigorous numerical analysis of dynamical systems, Communications in Nonlinear Science and Numerical Simulation, Volume 101, October 2021, 105578.
[3] github.com/robsontpm/capdDDEs, accessed: 28-12-2024.
[4] A. Gierzkiewicz, R. Szczelina, Sharkovskii theorem for Delay Differential Equations and other infinite dimensional dynamical systems, submitted, arxiv.org/abs/2411.19190 (2024).
[5] G. Benedek, T. Krisztin, R. Szczelina, Stable periodic orbits for Mackey-Glass type equations, Journal of Dynamics and Differential Equations (2024).
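For readers unfamiliar with DDE integration, the following minimal Python sketch integrates the Mackey-Glass equation by a fixed-step explicit scheme with a stored history buffer; the parameter values are illustrative, and, unlike the validated C++ algorithm of [1,3], this toy example propagates no rigorous error bounds.

import numpy as np

# Mackey-Glass: x'(t) = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
beta, gamma, n, tau = 2.0, 1.0, 6.0, 2.0
h = 0.01
delay_steps = int(round(tau / h))        # choose h so the delay is a grid multiple

T = 100.0
steps = int(T / h)
x = np.empty(steps + delay_steps + 1)
x[:delay_steps + 1] = 0.5                # constant initial history on [-tau, 0]

for k in range(delay_steps, delay_steps + steps):
    x_del = x[k - delay_steps]           # delayed state x(t - tau) from the history buffer
    x[k + 1] = x[k] + h * (beta * x_del / (1.0 + x_del**n) - gamma * x[k])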
Dynamics and control is an interdisciplinary section which in particular addresses mathematical systems theory and control engineering. The contributions to this section are also concerned with the mathematical understanding and design of controllers appearing in actual applications.
Optimal control problems in dynamic environments include the challenge that one has to make fast decisions without risking collisions. This can be illustrated with an example in which the goal is to drive on a racetrack without any collision with other road users. For this example, in [1], we introduced a hybrid structure, which consists of a Reinforcement Learning approach, used as a trajectory planner, and a solution approximation of a simplified optimal control problem used to actually steer the dynamical system. This approach allows fast decisions to be made in a fast-changing environment. Unfortunately, the decision based on the Reinforcement Learning algorithm does not come with any guarantee that all possible collisions are avoided, which is a typical problem of data-based approaches. In contrast, in [2], a classical Reinforcement Learning approach is equipped with a funnel controller, which leads to new safety properties. Nevertheless, in this approach a safe reference trajectory has to be available, which is often - as in the racetrack example - not the case.
In this talk, we will present a combination of these approaches. As an underlying example, we focus on the “driving on the racetrack” example, where a safe controller in a fast-changing environment is needed. We use Reinforcement Learning for trajectory planning, which is now safeguarded by a funnel controller near obstacles. Since it is not possible to compute a safe universal reference trajectory in advance, we define a safety trajectory for each obstacle that leads around it.
[1] S. Gottschalk, M. Gerdts and M. Piccinini: Reinforcement Learning and Optimal Control: A Hybrid Collision Avoidance Approach. In Proceedings of the 10th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS 2024), pages 76-87, 2024.
[2] S. Gottschalk, L. Lanza, K. Worthmann and K. Lux-Gottschalk: Reinforcement Learning for Docking Maneuvers with Prescribed Performance. 26th International Symposium on Mathematical Theory of Networks and Systems MTNS 2024, pages 196-201, 2024.
Distributed model predictive control (DMPC) is a powerful and general control technique to automate the behavior of a collaborating network of systems in a distributed manner. One key advantage is that, through the intuitive definition of constraints and the encoding of the collaborative goal in the cost function, a wide variety of collaborative tasks can be considered. Moreover, conceptually, DMPC can naturally deal also with nonlinear systems. In theory, this makes DMPC uniquely attractive in collaborative robotics and a potential enabler for self-reliant, robust robot swarms that can significantly outperform automation approaches relying on individual robots. In existing works, simulations and idealized experiments confirm these conceptual advantages of DMPC in challenging robotics scenarios. However, experiments under realistic conditions, with real robot-to-robot wireless communication, presently reveal limitations, as common communication networks can be overwhelmed by the sheer amount of communicated data. This can become a problem especially in faster robotics tasks that require decently high sampling rates. In particular, DMPC schemes communicate much more data than other distributed control techniques. Each controller sends at least one message of the size of an open-loop prediction per solution iteration, and one timestep may comprise multiple iterations. In consequence, communication is currently a key inhibitor preventing the widespread application of DMPC in real-world use cases in robotics.
This work presents viable approaches to overcome this limitation and tests them in formation control scenarios with mobile robots. Although the general methodology can be used for many applications and implementations of DMPC, the example application in robotics is particularly enticing due to many timely application areas that could benefit from reliable DMPC. The study considers nonholonomic mobile robots, which warrant a fully nonlinear treatment, bringing additional challenge to the task. To improve DMPC's communication, the contribution first analyzes what precisely is or can be communicated in a typical DMPC algorithm. Then, based on insights on the inherent structure of the communicated data, several ways to reduce the amount of communication are proposed and discussed, from simple, algorithmic ways to machine-learned approaches. Results from the most promising approaches reveal how far communication can be reduced while retaining the desired functionality of the distributed controller. There, a key motive to reduce communication is also to unlock the ability to use alternative communication methods other than traditional Wi-Fi, e.g., as a fallback to remain functional also when Wi-Fi communication is struggling or disturbed.
Precise setpoint control of formations of differential-drive robots poses significant challenges. Here, formation control refers to the cooperative control of the formation's centroid and the relative postures of the individual robots. Even for a single nonholonomic robot, controller design is challenging as no continuous, static state-feedback law can asymptotically stabilize a given setpoint. This also complicates deriving stabilizing terminal ingredients for model predictive controllers tailored to nonholonomic systems. Moreover, an MPC without terminal ingredients subject to conventional quadratic costs provably fails for certain initial conditions. This arises from the geometry of nonholonomic systems such that the MPC cost must align with the so-called sub-Riemannian distance -- an actual measure of the effort required to drive the robot to its origin.
Using these insights, a stabilizing MPC controller can be designed using a mixed-exponents cost function, which, close to the setpoint, penalizes deviations in hard-to-control directions more, i.e., in those directions following from interleaving direct input vector fields. From a differential-geometric perspective, the mixed-exponent pseudo-norm bounds the sub-Riemannian distance to the origin and follows from the nonholonomic system's homogeneous approximation, a first-order approximation retaining controllability. This tailored MPC controller is provably capable of stabilizing several nonholonomic vehicles as well as formations of robots, also in real-world experiments.
However, up to this point, only formation setpoints in which all robots shall have the same goal orientation have been considered. Stabilizing setpoints with distinct robot orientations is an open question. In such cases, robots do not share mutual hard-to-control directions in a common frame of reference but still have mutual input directions. Consequently, close to the setpoint, a robot cannot correct its relative error by solely using direct input vector fields without simultaneously altering the formation's center in hard-to-control directions. As a result, the formation is not capable of approaching its desired setpoint by driving along well-controllable directions, which has been the intrinsic key characteristic of the tailored mixed-exponent cost function. Using polar coordinates offers a promising solution to retain this characteristic to also stabilize formation setpoints with distinct robot orientations. Particularly, close to the setpoint, a single robot can maneuver without altering the formation's center in hard-to-control directions. As a downside, this approach requires special attention to constraint satisfaction, as the system's inputs are no longer linear combinations of the robots' motor velocities. Beyond differential-drive robots, the approach's potential to govern generic setpoints of other nonholonomic vehicles, such as car-like robots, is investigated.
CANCELLED
——————
Servo motor controllers are vital components in industrial applications such as robotics and automated machinery, in which precision, safety, and reliability are essential. This paper presents and evaluates different strategies to leverage Reinforcement Learning (RL) for controlling servo motors while maintaining the traditional cascade control architecture, incorporating filter and feedforward control, as a means to ensure understandability and safety of the control structure. First, we implement a multi-agent RL architecture, in which each individual component of the cascaded control architecture is assigned an RL agent. Secondly, we implement a single-agent RL architecture, in which the actor network of a twin-delayed deep deterministic policy gradient (TD3) algorithm is designed to replicate the cascade controller. Initial simulations are followed by experimental validation on a servo motor in a laboratory setting, demonstrating the real-world applicability of the approach. Finally, results are discussed and compared, leading to an assessment of the applicability of the proposed methods in industrial applications.
Scientific Computing is concerned with the efficient numerical solution of mathematical models from both science and engineering. The field covers a wide range of topics: from mathematical modeling over the development, analysis and efficient implementation of numerical methods and algorithms to software and finally application for the solution of complex real-world problems on modern computing systems. This interdisciplinary field combines approaches from applied mathematics, computer science and a wide range of applications in which in-silico experiments play an increasingly important role.
This study presents an optimization model for forced convection problems, aiming to determine the optimal location and size of inlet and outlet ports in a cavity flow field. The proposed approach integrates the localized meshless method, the generalized finite difference method (GFDM) for spatial discretization, the projection method for stable and accurate simulation, and particle swarm optimization (PSO) for design optimization. Forced convection, driven by external sources such as fans, pumps, or air conditioning systems, enhances heat transfer and is governed by the Navier-Stokes equations and the energy equation. The projection method ensures stability in solving time-dependent incompressible fluid flow problems, while GFDM eliminates the need for mesh generation, numerical quadrature, and parameter determination, enabling efficient spatial discretization. The combination of these methods ensures stable, accurate, and computationally efficient numerical solutions. Integrating GFDM and the projection method with PSO, a robust metaheuristic algorithm, allows the model to effectively identify the optimal design parameters for inlet and outlet ports. Several examples demonstrate the effectiveness of the proposed simulation-optimization model, highlighting its potential to improve cooling system designs and solve complex forced convection problems.
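A minimal sketch of a PSO driver of the kind used to search such a design space is given below; the objective function is a hypothetical placeholder standing in for the GFDM/projection-method flow and heat-transfer solve, and the design parametrization is an illustrative assumption.

import numpy as np

def objective(design):
    # Hypothetical placeholder: in the actual model this would run the
    # GFDM/projection-method simulation for the given inlet/outlet location
    # and size and return, e.g., a temperature-based cost.
    return np.sum((design - 0.3)**2)

rng = np.random.default_rng(0)
n_particles, dim, iters = 30, 4, 200           # dim: e.g. (x_in, size_in, x_out, size_out)
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients

pos = rng.uniform(0.0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)         # keep designs inside the cavity bounds
    val = np.array([objective(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()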
In this study, the Method of Fundamental Solutions (MFS) and the Domain-Decomposition Method (DDM) are integrated with Particle Swarm Optimization (PSO) to develop an efficient and robust framework for solving degenerate boundary problems. The MFS, inherently free from mesh generation and numerical quadrature, is recognized as a promising meshless method. Its implementation requires only field points and source points, which are positioned outside the computational domain. The numerical solution in MFS is expressed as a linear combination of fundamental solutions with unknown coefficients, determined by solving a system of linear algebraic equations that enforce the interior and boundary conditions. To address challenges associated with degenerate boundaries, the DDM discretizes the computational domain. By dividing the domain into smaller subdomains, the DDM enhances numerical stability, facilitates efficient local analysis, and improves solution accuracy. Moreover, the optimal placement of source points, critical to the MFS's performance, is determined using the PSO algorithm. As a modern metaheuristic optimization technique, PSO efficiently explores the computational domain to identify the optimal configuration of source points without requiring additional problem-specific information. Consequently, the integration of MFS, DDM, and PSO ensures superior performance in accuracy, stability, and scalability. The proposed methodology is validated through numerical experiments, demonstrating its effectiveness in solving diverse degenerate boundary problems. Additionally, a systematic analysis of key parameters across the three methods highlights the robustness and adaptability of the combined numerical framework. This study, therefore, provides a powerful and versatile tool for addressing complex boundary problems.
This study proposes combining the method of fundamental solutions (MFS) and particle swarm optimization (PSO) to accurately and stably analyze boundary detection problems. The boundary detection problems considered in this study are governed by the two-dimensional Laplace equation. In boundary detection problems, the spatial position of part of the boundary is unknown, which makes them challenging to analyze with any numerical scheme. We utilize the MFS, one of the most promising boundary-type meshless methods, for spatial discretization of the considered boundary detection problems. The numerical solutions of the MFS are extremely accurate if the optimal locations of the sources, which should lie outside the computational domain, can be determined. Hence, in this study, we use the PSO, one of the meta-heuristic algorithms, to simultaneously determine the optimal locations of the sources in the MFS and the unknown spatial location of the boundary portion. Several numerical examples will be adopted to verify the effectiveness of the proposed numerical scheme for solving boundary detection problems. In addition, some factors of the numerical scheme will be examined, and comparisons will be provided to demonstrate the merits of the proposed scheme.
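The following minimal sketch shows the MFS building block for a 2D Laplace problem on the unit disk with a fixed source circle; in the studies above, the source locations (and, for boundary detection, the unknown boundary itself) would additionally be optimized by PSO, which is omitted in this illustration.

import numpy as np

# Method of Fundamental Solutions for the 2D Laplace equation on the unit disk,
# with Dirichlet data u = g on the boundary.
N = 60                                          # boundary collocation points
t = 2 * np.pi * np.arange(N) / N
bnd = np.column_stack([np.cos(t), np.sin(t)])   # unit circle
src = 1.5 * bnd                                 # sources on a larger circle, outside the domain

def g(p):                                       # boundary data from an exact harmonic function
    return p[:, 0]**2 - p[:, 1]**2

def fundamental(p, q):
    # G(p, q) = -ln|p - q| / (2*pi), evaluated for all point pairs
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return -np.log(d) / (2 * np.pi)

A = fundamental(bnd, src)                       # collocation matrix
coef, *_ = np.linalg.lstsq(A, g(bnd), rcond=None)

# evaluate the MFS approximation at an interior point and compare with the exact value
p = np.array([[0.3, 0.2]])
u = fundamental(p, src) @ coef
print(u, p[0, 0]**2 - p[0, 1]**2)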
When storing hydrogen in pressure vessels, there is a possibility that the stored hydrogen will be released into the atmosphere via exhaust units if the pressure rises above a maximum value. If the released hydrogen mixes with air, a potentially explosive atmosphere (Ex-zone) can result. This contribution examines the minimization of Ex-zone dimensions above vertical exhaust units on hydrogen systems.
Based on the geometry of the standard Lambda exhaust unit commonly used in the industry, optimizations and modifications of this geometry were carried out. A total of six additional exhaust units were developed. The dimensions of the geometries correspond to the nominal diameter DN125 of the Lambda exhaust unit. The various exhaust units were investigated using numerical simulations with the ANSYS CFX 2022 R2 software.
The simulated system consists of two components: the exhaust unit and the spreading area. These simulation steps were carried out sequentially, with the results of the first step serving as input parameters for the second step. In the first simulation step, the seven exhaust units were simulated with mass flow rates ranging from 0.001 kg/s to 0.01 kg/s. In the evaluation, the flow velocities and the hydrogen mass fraction were analyzed, as these represent the input parameters for the spreading area in the second simulation step.
The investigation of the Ex-zones revealed, in a parameter study, that the outlet velocity has a stronger influence on the height of the Ex-zone than the hydrogen mass fraction at the outlet of the exhaust unit. The influence of the hydrogen mass fraction increases as the outlet velocity decreases. To determine the dimensions of the Ex-zone resulting from the investigated exhaust units, the maximum values at the outlet of each exhaust unit were used for the worst-case estimation.
It was observed that the exhaust unit with the lowest maximum values of outlet velocity and hydrogen mass fraction over the entire mass flow range has the smallest Ex-zone dimensions. This exhaust unit is a straight unit that has been supplemented by a diffuser tube with a diameter extension in the lower section. This extension forms a larger mixing chamber. The Ex-zone dimensions of this exhaust unit are the only ones among all variants examined to be smaller than those of the standard Lambda exhaust unit.
Hyperelastic material models are frequently used in the engineering sciences. Application examples include biomechanics and the modeling of the finite elastoplasticity of metallic alloys. Given the rising importance of machine learning approaches, hyperelastic models have also been approximated successfully by neural networks, see [1]. In the present work, the focus is on such approximations for isotropic energies. They are often defined in terms of invariants of the right Cauchy-Green tensor. In this way, the energies are a priori isotropic and also fulfill the principle of material frame indifference. Moreover, it can be shown in a straightforward way that convex neural networks can automatically enforce the polyconvexity of such energies - a property that is important for proving the existence of solutions. However, it is also known that these neural networks rely only on a sufficient criterion for polyconvexity, but not a necessary one. Accordingly, they cannot capture all hyperelastic models. In this paper, the limitations of these networks are carefully elaborated, and improved models are proposed to address the shortcomings.
[1] L. Linden et al., Neural networks meet hyperelasticity: A guide to enforcing physics, J. Mech. Phys. Solids, 179 (2023), 105363.
Recent developments have shown that machine learning methods, such as neural networks, can significantly benefit from incorporating physical knowledge [2]. However, a correct and non-restrictive implementation remains a major challenge [1,3,4]. Classic invariants of the right Cauchy-Green tensor, for instance, are a sufficient, yet overly restrictive choice for the combination of objectivity, isotropy and polyconvexity. This work thus aims to incorporate physical and mathematical constraints into neural networks without limiting the required solution space, specifically in the context of isotropic hyperelasticity. The two key components of the approach are an input convex network architecture and a parametrization based on the signed singular values of the deformation gradient. This enables frame indifference and polyconvexity, together with other physical constraints, to be captured rigorously. A highly beneficial feature of the proposed design is its compliance with the universal approximation theorem. More precisely, the architecture can approximate any isotropic polyconvex hyperelastic energy, provided it is sufficiently large. This is achieved by employing a necessary and sufficient criterion for polyconvexity under the assumption of objectivity and isotropy. The benefits and unique aspects of the new approach will be demonstrated and discussed through the approximation of a polyconvex energy, a non-polyconvex energy, and the construction of a polyconvex hull - an outcome that cannot be achieved using regular architectures.
[1] K. Linka et al., Constitutive artificial neural networks: A fast and general approach to predictive data-driven constitutive modeling by deep learning, J. Comput. Phys., 429 (2021), 110010.
[2] B. Moseley, Physics-informed machine learning: from concepts to real-world applications. Phd Thesis, University of Oxford (2022).
[3] L. Linden et al., Neural networks meet hyperelasticity: A guide to enforcing physics, J. Mech. Phys. Solids, 179 (2023), 105363.
[4] G.-L. Geuken et al., Incorporating sufficient physical information into artificial neural networks: A guaranteed improvement via physics-based Rao-Blackwellization. Comput. Method. in Appl. M., 423 (2024), 116848.
The development of new materials and the optimization of their properties are critical challenges in the field of materials science, particularly within the framework of multiscale modeling and homogenization. Traditional data-driven methods, while effective, often fall short in accurately predicting the complex behavior of microstructures under diverse loading conditions, especially when faced with the intricate microstructures of multi-phasic materials. We propose a new approach that embeds essential physics-based constraints, such as the Voigt-Reuss bounds, directly into the network architecture, ensuring that the network not only learns from empirical data but also adheres to established physical laws, thereby enhancing the predictability and interpretability of the model. Importantly, this approach guarantees that the predictions remain within analytical bounds, providing a built-in error control mechanism that significantly enhances the reliability of the model's output. Furthermore, by infusing fundamental physical bounds, our model requires significantly less data, making it particularly advantageous in data-scarce environments where traditional data-driven approaches may falter. We employ high-resolution images of microstructures as the primary input, coupled with detailed information on material properties and loading conditions. This allows for a nuanced understanding of the microstructure response and refines the predictive capabilities of multi-scale models, enabling more accurate and efficient predictions of material responses without extensive reliance on exhaustive simulation data.
A wide variety of complex phenomena in engineering and science require the solution of expensive, high-dimensional systems of partial differential equations (PDEs). In order to overcome the computational burden and accelerate the calculations, reduced order models (ROMs) have been developed. Non-intrusive methods have been shown to be effective in scenarios involving experimental measurements or under constrained access to full-order solvers. However, these approaches often lack interpretability, physicality, and uncertainty quantification (UQ) of the predicted solutions. To resolve this compromise, we present a data-driven, non-intrusive, reduced-order modelling scheme that follows a generative process to compute new solutions while maintaining physical consistency. In particular, a variational autoencoder is used for dimensionality reduction, and a variational adaptation of sparse identification of nonlinear dynamics (SINDy) is developed to construct reduced-order models (ROMs) from a limited amount of high-dimensional noisy data. This identifies latent dynamics in an interpretable manner while inherently incorporating UQ. The proposed method is composed of three building blocks:
Variational Encoding of Noisy Inputs (VENI): a generative model utilizing variational autoencoders (VAEs) transforms high-dimensional, noisy data into a low-dimensional latent-space representation that is suitable to describe the dynamics of the system.
Variational Identification of Nonlinear Dynamics (VINDy): on the time series data expressed in the new set of latent coordinates, a probabilistic dynamical model is learned by a variational version of SINDy. In this process, the distributions of the coefficients that determine the contribution of terms from a predefined set of candidate functions are identified.
Variational Inference with Certainty Intervals (VICI): variational inference is used to construct physically consistent ROMs for new parameter instances and initial conditions in a generative manner. The probabilistic framework inherently facilitates the quantification of uncertainty in the model coefficients, while providing certainty intervals for temporal solutions.
The performance of the proposed method is validated on a diverse set of partial differential equation benchmarks including structural mechanics and fluid dynamics examples.
Reference
P. Conti, J. Kneifl, A. Manzoni, A. Frangi, J. Fehr, S. L. Brunton, and J. N. Kutz. VENI, VINDy, VICI: a variational reduced-order modeling framework with uncertainty quantification, 6 2024.
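As a rough, deterministic illustration of the identification step behind VINDy, the sketch below applies sequentially thresholded least squares to a candidate library for a one-dimensional latent variable; the actual framework operates on VAE latent coordinates and infers coefficient distributions variationally, which this toy example does not.

import numpy as np

# synthetic latent trajectory obeying z' = -z + 0.5*z^3 (assumed unknown)
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
z = np.empty_like(t)
z[0] = 0.4
for k in range(len(t) - 1):
    z[k + 1] = z[k] + dt * (-z[k] + 0.5 * z[k]**3)
dz = np.gradient(z, dt)                        # numerical time derivative

# candidate library Theta(z) = [1, z, z^2, z^3]
Theta = np.column_stack([np.ones_like(z), z, z**2, z**3])

# sequentially thresholded least squares (SINDy)
xi = np.linalg.lstsq(Theta, dz, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.05                  # sparsity threshold
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dz, rcond=None)[0]
print(xi)                                      # ideally close to [0, -1, 0, 0.5]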
Partial differential equations (PDEs) arise in the description of linear and nonlinear physical phenomena. In mechanical engineering, there is a particular need for new black-box solvers of PDEs due to ever shorter product development times. The recent success in solving various PDEs with neural networks, particularly with physics-informed NNs (PINNs), suggests that they are a natural candidate for such solvers. Even though the range of PDEs that can be approximated seemingly well by PINNs is quite impressive, rigorous a-posteriori error control is at least not straightforward. Classical PINNs are usually trained with loss functions based on the pointwise residual. The advantage of this is that the method is mesh-free, although it makes error control difficult, since an error-residual relation is only available if the problem is well-posed. Therefore, the goal of our work is to certify PINN approximations for linear and nonlinear PDEs while preserving their broad applicability and keeping the additional cost of discretizing the underlying physical domain low. Given a trained PINN approximating the solution of a PDE, we embed the possibly complicated domain into a simply shaped domain and construct an efficiently computable upper bound for the error. The advantage of this ansatz is that the discretization is straightforward. Moreover, to evaluate the error estimator we use Riesz representations, so the evaluation is efficient due to the simple shape of the domain. The method is applied to various elliptic and parabolic PDEs.
One of the challenging tasks in the sustainability transformation of the world is the qualification of engineering students, who should and can creatively change existing technology in their subsequent careers and thus shape it sustainably. Never before has classical education in mechanics, with the modeling of technical systems, the confident and at the same time critical evaluation of models, the well-founded solution of the model equations for in-depth studies, and the generation of valid findings, been of such immeasurable importance as in the current phase of ecological transformation. Only sound knowledge of the functioning of technical systems makes it possible to replace actionist trial-and-error procedures with actual improvements that can be planned and seriously predicted. Disruptive changes of technology require the highest level of competence to recognize, expand and apply fundamental correlations for future designs. In order to acquire this competence, a valuable foundation could be laid in the basic training in mechanics, which, together with further high-quality training, would give hope of successfully mastering the challenge of the technological transformation. Unfortunately, however, the theoretically sound teaching of mechanics rarely reaches the students' minds and their ability to understand, apply and transform engineering systems.
This article shows how valuable classical mechanics can be combined with practical and, consequently, inspiring analyses of engineering systems to reach the intellectual soul of the students. Furthermore, using the example of a recently planned new highway bridge construction, it is shown how the skillful application of mechanical principles by Bachelor students can significantly improve the current design or the preservation of the existing bridge through partial improvements, thereby demonstrating what can be achieved towards a sustainable transformation in engineering.
There are several fundamental mathematical concepts, including some very simple definitions and some fairly straightforward theorems, which occur in many different contexts in university-level mathematics courses, even though it is not always easy for students to recognize these mathematical concepts and to notice how they occur in different courses.
One such concept is the product rule for the derivative of a function, which is essentially the same in mathematical content (but not in form) for pointwise products of real-valued functions, products of complex-valued functions, scalar products of vector-valued functions in inner product spaces, cross products of functions in 3-space, and matrix products of matrix-valued functions. These product rules are taught in different courses at different times, but are more or less the same formula (except perhaps for syntax) and use essentially the same proof.
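One way to make the common content explicit is to view each case as the derivative of a bilinear map composed with two differentiable curves:
\[
\frac{\mathrm{d}}{\mathrm{d}t}\,B\bigl(u(t),v(t)\bigr) \;=\; B\bigl(u'(t),v(t)\bigr) + B\bigl(u(t),v'(t)\bigr),
\]
where \(B\) is any continuous bilinear map (pointwise multiplication, complex multiplication, inner product, cross product, or matrix multiplication); the familiar difference-quotient proof carries over essentially verbatim to all these cases.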
Another such concept is the notion of triangle inequality, which occurs in courses of geometry, linear algebra, analysis of one real variable, and higher-dimensional analysis, although in different forms where students may not understand the similarity.
This talk tries to give some unifying framework.
Mechanics is one of the fundamental pillars of successful engineering studies, because a sound mechanical understanding of the behavior of materials and structures is essential for the constructive and design classes that build on it. Therefore, the teaching of basic mechanics in the first semesters of engineering studies plays a decisive role.
Although engineering and the demands placed on engineers in practice are constantly changing, the content of courses in basic mechanics has often hardly changed in recent decades. The form and delivery of the courses have also remained the same in many cases, even though various technical possibilities for modern and digital teaching exist today. In addition, the content and form of examinations are often regarded as virtually unchangeable. In my experience, however, this “traditional” approach has mainly led to many students not attending lectures in the first place or at least not actively participating.
In order to counteract this and motivate students to participate more actively in courses, a variety of methods and tools are available, of which the following are highlighted in this work:
Clear definition of learning objectives
Course planning according to constructive alignment
Examples from engineering practice
Online polls and anonymous, digital question option
Video content for preparation
E-quizzes and e-tests
These aspects are presented and evaluated using the example of my Mechanics 1 and 2 lectures for civil engineers. It turned out that, in this case study, the activity of the students could be significantly increased, without this being particularly reflected in the evaluation results. However, a significant amount of time and effort had to be invested to implement the mentioned tools. Further, the reduction of the content to be presented in the lecture seemed to be key for a positive effect to be observed.
One of the core skills of a mechanical engineer is solving technical problems. This requires not only comprehensive fundamental knowledge but also, in particular, methodical and heuristic competencies. Spatial thinking and design comprehension play a key supporting role in this process. The ability to solve technical problems through design, in other words, to create innovatively, is taught and assessed through exercises, projects, or design tasks. However, the use of online teaching poses significant challenges for these forms of instruction, as numerous experience reports demonstrate. At the start of an international master’s program in engineering, the Institute for Machine Elements, Engineering Design and Manufacturing introduced a pre-course that allows students to conduct a thematically structured self-assessment. This is carried out via the ONYX test platform. The course aims to evaluate the students’ fundamental knowledge and their spatial thinking and design comprehension. Before the start of the master’s program, participants are given the opportunity to systematically address any existing knowledge gaps in specific subject areas. This process is supported by an accompanying wiki and an automatic feedback providing hints towards the correct solution. This approach enables more effective use of semester time for the actual course content. We present the structure of the self-assessment, the types of tasks used, and the approach to teaching and evaluating design comprehension in a virtual environment. Additionally, we discuss different methodologies and present results from the implementation in online teaching.
Earth Hall, PCC, Głogowska 10, 60-734 Poznań
The conference dinner will take place in the extraordinary Earth Hall, located in the Poznań Congress Centre (PCC) on the premises of the Poznań International Fair (MTP) in the heart of Poznań, next to the Poznań Główny railway station. The venue is easily accessible by public transportation, with a quick and convenient connection from the Poznań University of Technology campus.
NOTE: Only for those who have confirmed their participation in the registration form. Your voucher is included in the conference envelope.
https://maps.app.goo.gl/1pKqTjPjVH2hjsdy5
Failure of materials and structural components is an important issue as long as man-made constructions exist. The section focuses on damage mechanics and fracture mechanics for all kinds of solid materials and structures. It aims at bringing together related original research covering experimental observations, modeling approaches and numerical techniques. Moreover, material failure is a complex process, which may be considered on different length scales ranging from atomistic scale up to macro scale of engineering structures. Since failure behavior of materials strongly depends on loading situations, contributions addressing static, dynamic and multi-axial failure are welcome as well as fatigue problems.
Concrete structures subjected to impact and blast loads experience complex failure mechanisms that are challenging to simulate accurately. Local constitutive models formulated using plasticity with softening are commonly used for this purpose. The softening behavior is typically represented by a scalar damage field, which scales the yield surface to capture the degradation of material strength. However, these local models often exhibit mesh-dependent results with localization of damage into a few cells. To address this limitation, this study combines a modified version of the Johnson-Holmquist (JH2) model with a gradient-enhancement approach. The introduction of an inertia term into the additional PDE for the determination of the nonlocal equivalent plastic strain transforms it into a hyperbolic equation, enabling an efficient solution with an explicit dynamics solver.
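A common implicit-gradient form illustrates the idea (generic notation, a sketch rather than necessarily the exact formulation of the study): the nonlocal equivalent plastic strain \(\bar{\varepsilon}^p\) is obtained from
\[
\rho_m\,\ddot{\bar{\varepsilon}}^p \;+\; \bar{\varepsilon}^p \;-\; \ell^2\,\nabla^2 \bar{\varepsilon}^p \;=\; \varepsilon^p,
\]
where \(\varepsilon^p\) is the local equivalent plastic strain, \(\ell\) an internal length and \(\rho_m\) the added micro-inertia; without the inertia term the additional equation is elliptic, whereas with it the equation becomes hyperbolic and can be advanced with the same explicit dynamics solver.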
A one-dimensional benchmark simulation demonstrates the differences between the local and gradient-enhanced models. The local model shows severe damage localization and diminishing plastic energy dissipation with finer meshes. In contrast, the gradient-enhanced model distributes damage over multiple elements, though the plastic strain still localizes within a single element. Introducing strain hardening with respect to the local equivalent plastic strain resolves this issue, ensuring convergence of plastic energy and non-localizing plastic strain. These findings are extended and validated with two-dimensional simulations, showcasing the model's practical relevance.
Additionally, the impact of the added inertia term is analyzed in the context of dynamic strength enhancement, a critical characteristic of concrete under high strain rates. The proposed gradient-enhancement approach demonstrates improved numerical stability and mesh-independence compared to local models, making it a suitable tool for simulating concrete behavior under extreme loading conditions.
This study presents the development and implementation of a numerical model to analyze crack propagation in elasto-plastic materials using Griffith's criterion and cohesive zone models (CZM). The research focuses on bridging the gap between classical fracture mechanics and advanced micromechanical approaches to predict material failure under complex stress states. By integrating these methodologies within a computational framework, the study evaluates their applicability in describing crack initiation, propagation, and energy dissipation in materials with significant plastic deformation.
Griffith's criterion is applied as a benchmark for crack propagation in linear elastic fracture mechanics, where the energy release rate is directly correlated with crack extension. Its extension to elasto-plastic materials is evaluated by incorporating plasticity effects through energy correction factors. The cohesive zone model, in contrast, captures local material degradation ahead of the crack tip by defining traction-separation relationships, enabling precise simulation of nonlinear behaviors such as ductile tearing and fracture resistance curves (R-curves).
The numerical implementation employs finite element methods (FEM) with adaptive meshing to accurately resolve stress and strain fields near crack tips. The computational results are validated against theoretical predictions of fracture toughness and energy dissipation. The study also investigates the role of geometrical parameters, loading conditions, and material properties on crack trajectories and the effectiveness of each model in representing real-world fracture phenomena.
The findings highlight the limitations of Griffith's criterion in capturing nonlinear effects and demonstrate the superior accuracy of CZM in modeling complex fracture behaviors. This work contributes to the advancement of numerical tools for fracture analysis and provides insights into material optimization for engineering applications, particularly in structures subject to high stress and strain concentrations.
The failure criterion proposed by Christensen is a two-parameter failure criterion with two sub-criteria [1]. The criterion applies to all isotropic non-porous materials, ranging from ductile materials like aluminum over polymers and cast iron to brittle materials like concrete or rocks. It contains the von Mises criterion as the limit case for ideally ductile materials and the Rankine criterion for perfectly brittle materials. Additionally, the Christensen criterion allows a failure index to be defined, indicating the failure mechanism. Although the failure criterion demonstrates good alignment with experimental results [1] and a wide range of applicability, it has only been applied to a few materials [2, 3]. This may be attributed to the difficulty of evaluating its two sub-criteria as well as the lack of existing implementations in Finite Element software. In this work, we propose a method to overcome these disadvantages by deriving a single failure index for the Christensen failure criterion. The index is defined analogously to failure indices for composite laminae: if it exceeds the value of 1, it indicates the onset of material failure, characterized as either the onset of plastic deformation or brittle failure. The failure index, being defined in a linear manner, provides an indication of material utilization. This is achieved with an algorithm utilizing the principal stress space in spherical coordinates. Under the same angles, the radius of the stress state can then be related to the radius of the failure surface to compute a failure index. Additionally, a methodology for identifying the corresponding failure number by projection onto the failure surface is proposed. Both the failure index and the failure number are implemented in the Finite Element software Simulia ABAQUS. This allows for an efficient use of the failure index. In a case study, the Christensen criterion is compared to the von Mises criterion. Using a three-point bending test with a notched specimen made of the polymer PEEK, the differences in failure load and failure location are analyzed.
References:
[1] R. M. Christensen, The theory of materials failure, Oxford University Press, USA (2013)
[2] A. Krainer, et al., A semi-passive beam dilution system for the FCC-ee collider, EPJ Techniques and Instrumentation 9.1 (2022): 3
[3] F. Oikonomopoulou et al., Interlocking cast glass components: exploring a demountable dry-assembly structural glass system, Heron 63.1/2 (2018): 103-138
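A minimal sketch of the radial-scaling idea described above is given below; the function failure_surface_radius is a hypothetical placeholder (here a simple ellipsoid) standing in for the evaluation of the Christensen surface with its two sub-criteria, so the code illustrates the scaling construction rather than the published formulation.

import numpy as np

def failure_surface_radius(theta, phi, T, C):
    # Hypothetical placeholder surface (an ellipsoid with semi-axes T, T, C);
    # in the actual method this would evaluate the Christensen criterion with
    # its two sub-criteria and return the governing (smaller) radius along the
    # direction given by the spherical angles.
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return 1.0 / np.sqrt((d[0] / T)**2 + (d[1] / T)**2 + (d[2] / C)**2)

def failure_index(principal_stresses, T, C):
    s1, s2, s3 = principal_stresses
    r = np.sqrt(s1**2 + s2**2 + s3**2)          # radius of the current stress state
    if r == 0.0:
        return 0.0
    theta = np.arccos(s3 / r)                   # polar angle in principal stress space
    phi = np.arctan2(s2, s1)                    # azimuthal angle
    r_fail = failure_surface_radius(theta, phi, T, C)
    return r / r_fail                           # > 1 indicates failure, linear in the load

print(failure_index(np.array([80.0, 20.0, -10.0]), T=90.0, C=230.0))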
In this study, some results of numerical analyses performed on FRP-to-concrete bonded joints in direct-shear tests are presented. These results will support better preparation of experiments planned on real-scale elements.
Externally bonded fiber-reinforced polymer (FRP) materials have for years been widely used as an alternative to traditional techniques for strengthening concrete elements. One failure type of such elements is intermediate crack (IC) debonding of the FRP strips, initiated at the tip of flexural or flexural/shear cracks in the concrete substrate. IC debonding leads to FRP debonding failure [1,2]. Proper modelling of this complex behaviour requires the definition of a traction-separation law for the FRP-concrete interface [3,4]. In the paper, a fracture mechanics approach is used for defining this law.
In this research, the debonding of the FRP strip from the concrete surface is modelled using the eXtended Finite Element Method (XFEM) available in the Abaqus code [2,5]. The failure mechanism definition includes damage initiation as well as damage evolution, both based on the traction-separation law. For damage initiation the quadratic stress criterion was used, and for damage evolution the power-law fracture energy criterion based on the components GI (Mode I) for concrete and GII (Mode II) for the interfacial FRP-concrete joint. The interfacial fracture energy represents the total external energy required to create, propagate and fully open a crack along the FRP-concrete interface. It has been proved, among other things, that the bond strength depends strongly on the interfacial energy GII. Finding the right interfacial fracture energy of the analysed joint is still an open issue because of the large number of parameters which govern the local bond-slip behaviour as well as the bond strength of FRP-concrete joints. This will be the subject of further investigation.
References:
[1] Jankowiak I., Analysis of RC beams strengthened by CFRP strips – Experimental and FEA study, Archives of Civil and Mechanical Engineering, 2012, 12, pp.376-388
[2] Jankowiak I., XFEM analysis of intermediate crack debonding of FRP strengthened RC beams, Advances in Mechanics: Theoretical, Computational and Interdisciplinary Issues: proceedings of the 21st International Conference CMM, Poland, 2016, pp.235-239
[3] X.Z. Lu, J.G. Teng, L.P.Ye, J.J. Jiang, Bond–slip models for FRP sheets/plates bonded to concrete, Engineering Structures, 2005, 27, pp.920–937
[4] Al-Saawani, M.A. et al., Finite element modeling of debonding failures in FRP-strengthened concrete beams using cohesive zone model, Polymers 2022, 14, 1889
[5] Abaqus User's Guide, Dassault Systemes, USA, 2024
Material instability is often associated with the loss of ellipticity of the constitutive equations of the considered material. Determining when ellipticity is lost using the acoustic tensor is computationally involved: for each load increment of the updated Lagrangian formulation, the algorithm must assess the tangent modulus tensor at each point in the material. In this work, the Routh–Hurwitz stability criterion is therefore used to evaluate the loss of material stability. Applying the Routh–Hurwitz criterion requires transforming the finite element equilibrium equations into a state-space formulation for each load increment. The algorithm then converts the resulting discrete difference-equation matrices into their continuous-system form.
The main objective of this work is to analyze damage detection in selected steel lattice structures using the Discrete Wavelet Transform (DWT) [1]. The structural response signal of the considered structure may be treated as a discrete set of static or dynamic displacements, which can be processed further using the DWT. This allows for a highly accurate, high-resolution estimation of possible structural weaknesses such as cracks, micro-failures, or ageing (corrosion) stiffness reductions. The one-dimensional wavelet transform is used, and the dynamic excitation follows the spectrum of the 1986 Bucharest earthquake [2]. The Finite Element Method (FEM) allows for numerical recovery of the structural response. The location of the weakened parts of the structure may have a random character and can be identified by the wavelet function parameters. A finite set of deterministic FEM solutions enables a Least Squares Method (LSM) approximation of the structural responses in polynomial form with respect to the given uncertainty sources. Then, assuming a continuous (Gaussian) probability distribution of the occurrence of a given random feature, probabilistic moments [3,4] and relative entropies [5] of the structural deformations are determined using three alternative probabilistic methods: the semi-analytical direct integration method, Monte-Carlo simulation, and the Stochastic Perturbation Technique (SPT).
References
[1] A. Knitter-Piątkowska, M. Guminiak, M. Przychodzki. Application of Discrete Wavelet Transformation to defect detection in truss structures with rigidly connected bars. Engineering Transactions. 2016; 64(2): 157–170.
[2] Axis VM, https://gammacad.pl/axisvm.
[3] Kamiński M., Lenartowicz A., Guminiak M., Przychodzki M. Selected Problems of Random Free Vibrations of Rectangular Thin Plates with Viscoelastic Dampers. Materials. 2022, 15(19): 6811
[4] Kamiński M. On iterative scheme in determination of the probabilistic moments of the structural response in the Stochastic perturbation-based Finite Element Method. International Journal for Numerical Methods in Engineering. 2015; 104(11):1038–1060.
[5] Bhattacharyya A. On a measure of divergence between two multinomial populations. Indian J. Stat. 7:401–406, 1946, https://doi.org/10.1038/157869b0.
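A minimal sketch of the DWT-based detection step described above, using PyWavelets on a synthetic static deflection line; the actual study works with FEM-computed responses and the probabilistic post-processing described above, both of which are omitted in this illustration.

import numpy as np
import pywt

# synthetic static deflection sampled along the structure, with a local
# stiffness reduction introducing a small kink around x = 0.6
x = np.linspace(0.0, 1.0, 256)
w = x * (1.0 - x)                                 # smooth baseline deflection
w = w + 2e-3 * np.where(x > 0.6, x - 0.6, 0.0)    # damage-induced perturbation

# one-level discrete wavelet transform; the detail coefficients respond
# strongly to local irregularities of the signal
cA, cD = pywt.dwt(w, 'db4')
interior = np.abs(cD[8:-8])                       # ignore boundary-affected coefficients
k = interior.argmax() + 8
print("suspected damage near x =", k / len(cD))   # rough index-to-position mapping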
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
Mechanical structures, such as wind turbines and trusses, often exhibit complex vibratory responses to external excitation. Undesired or harmful oscillations might appear in addition to the desired system dynamics, sometimes depending on minor parameter variations. Understanding and predicting these responses is crucial to optimize the performance of such systems. However, nonlinearities due to material properties or large deformations, the intricate dynamics of joints, and parameter uncertainties impose a significant challenge to modeling approaches.
Network science, the study of interactions within complex interconnected systems, provides a novel perspective. By conceptualizing these mechanical systems in terms of component internal dynamics, coupling properties, and the connecting structure instead of the traditional description with inertia, damping, and stiffness, network-based methods reveal how the overall system dynamics arise from the interplay of components, interconnections, and structure. Prior work has shown that different response patterns can emerge for the same structure depending on the type of interaction mechanisms, i.e., different joint properties. For example, central components with many connections, so-called “hubs,” can act as accelerators (similar to well-connected individuals in social media that can quickly spread information across a large community) or decelerators (comparable to major crossroads that can be prone to traffic jams). Different classes of dynamical systems exhibit qualitatively different responses, from homogeneous and evenly distributed among the components to concentrated on a few central components.
This work answers questions such as “Which type of dynamics is my system likely to exhibit?” and “Which changes to the system will be most effective in mitigating unwanted states and achieving the desired system dynamics?” by applying network-based analyses to mechanical model structures. The research focuses on the impact of different coupling strategies on the overall system dynamics, aiming to enable engineers to design and control mechanical structures and their dynamics more effectively.
Graphs are widely recognized for their ability to reduce data dimensionality while preserving essential structural and mechanical information. Computational efficiency can then be improved compared to methods relying on raw image data. Additionally, graphs can provide more detailed information about the underlying mechanical behavior, making them potentially powerful tools for structural analysis. Continuing a recent study [1], this paper proposes a novel graph-based approach that represents the original geometry of the corrugated board profile as a truss structure during out-of-plane compression. The corrugated board is transformed from an image into a graph structure, which is then treated as a truss system. This approach allows for the calculation of shear, moment, and axial stress profiles throughout the different phases of the compression test, with the information from the load-deformation curve serving as complementary input to the model. The main findings indicate that stress analysis of the graph-truss model reveals regions of maximum bending moments and shear stresses, particularly near contact areas between the liners and fluting segments, where localized buckling frequently occurs. The results demonstrate that this graph-based truss representation not only aligns with observed physical phenomena but also offers a deeper understanding of the stress distribution and deformation mechanisms in corrugated boards. The authors believe this method provides a versatile framework for studying the mechanics of corrugated boards for further optimization [2] and holds potential for broader applications to other engineering structures under complex loading conditions.
[1] Belfekih, T., Fitas, R., Schaffrath, H.-J., & Schabel, S. (2024). Graph-Based Analysis for the Characterization of Corrugated Board Compression. Materials, 17(24), 6083. https://doi.org/10.3390/ma17246083
[2] Fitas, R., Schaffrath, H. J., & Schabel, S. (2023). A Review of Optimization for Corrugated Boards. Sustainability, 15(21), 15588. https://doi.org/10.3390/su152115588
Diatoms, which occur in thousands of forms in our oceans, have a filigree, highly porous yet strong and resistant structure. It has been shown that, in the parametric lightweight optimization of crash-relevant components, designs based on diatom structures achieve better impact values than conventional lightweight structures, despite being lighter. These properties make them suitable for endoprosthetics, where biocompatibility and the stability of anchoring in the bone are crucial. In-growing structures are preferred over cemented ones to avoid stress shielding.
The open porosity of biocompatible materials promotes bone ingrowth, increasing the stability of endoprostheses. Sustainable stability is also supported by optimal stress distribution to avoid stress shielding. This is achieved using data from the Charité hospital in Berlin for realistic stresses in the design. Implementation is done via additive manufacturing in the powder bed fusion of metal with a laser beam (PBF-LB/M) with Ti-6Al-4V powder.
This contribution aims to develop a method for AI-supported structuring of pressure swelling and crack growth samples to successfully structure endoprostheses in future projects. An extensive database of open-pore, gyroidal, and lattice structures forms the basis for the development of an AI tool that optimizes these structures with regard to their suitability for endoprostheses. Parametric models and genetic algorithms are used to evaluate the design principles. The AI tool initially serves as a decision-making instrument for the best design concepts and should later deliver reliable results on its own.
As part of the project, eight lattice structures were selected, and corresponding samples were additively manufactured from Ti-6Al-4V. The respective printed samples were tested under static compressive and bending stress to investigate the deformation behavior. The deformation fields were recorded using digital image correlation, from which the macroscopic behavior of the respective lattice structure under compression or bending was derived. Furthermore, crack propagation tests were carried out using printed CT samples to make a relative assessment of the fatigue behavior of the structures in relation to each other.
The structural optimization method is now being refined and evaluated based on the experimental results. The aim is to calculate a stress-optimized pressure threshold specimen that can withstand the required oscillating and impact loads in a femur without excessively unloading the surrounding bone. Finally, this optimized structure is to be verified by experimental investigations.
The increasing electrification of the mobility sector leads to some additional weight due to the implementation of heavy batteries and associated peripherals. Therefore, lightweight materials such as fiber-reinforced thermoplastics (FRTPs) are required to increase efficiency through weight reduction without compromising the mechanical properties of the component. Most automotive FRTP lightweight parts are produced by the thermoforming process, which has the greatest potential for mass production of flat-shelled components such as the underbody protection for the battery in battery electric vehicles (BEV). However, due to the complex draping behaviour of the material, various defects can occur during the forming process, such as wrinkling due to shear deformation, which has a negative impact on the properties of the part. Wrinkles lead to localised deflections in the flow of forces, which have a negative effect on the structural-mechanical properties. In addition, they can be the starting point for crack initiation in structurally relevant areas of the part, so appropriate part and process design is essential to avoid critical wrinkle formation. Such a design of the forming process is usually modelled as experimental testing is too expensive. However, this is computationally expensive due to the complex material behaviour. Therefore, this paper deals with the implementation of surrogate models for predicting the forming result, which are more time-efficient than numerical simulations. The surrogate model used is a convolutional autoencoder that predicts the shear angle distribution as an indicator of possible wrinkling. These convolutional neural networks (CNNs) are well suited to classification and pattern recognition tasks on data with a known grid-like topology, such as images. The input is the curvature distribution of randomly shaped geometries. A study of selected hyperparameters is carried out to obtain an appropriate shear angle prediction. The model is trained with shear angle distribution results from finite element simulations, as these can physically accurately represent the complex material behaviour. Therefore, a workflow has been implemented in Python that generates equidistant meshes of random geometries, automatically transfers them to a numerical simulation, and performs data generation that serves as training and test data sets for the convolutional autoencoder. Following hyperparameter tuning, this paper analyses whether the surrogate model trained with random curvature distributions is also able to predict the shear angle distribution of real forming geometries such as elliptical structures.
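A minimal PyTorch sketch of such an encoder-decoder CNN is given below; the layer sizes, the 64 x 64 grid and the training snippet are illustrative assumptions rather than the tuned architecture of the study, with the curvature map of the geometry as input and the predicted shear-angle field on the same grid as output.

import torch
import torch.nn as nn

class ShearAngleSurrogate(nn.Module):
    """Maps a 1 x 64 x 64 curvature map to a 1 x 64 x 64 shear-angle map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2),                # 32 -> 64
        )

    def forward(self, curvature):
        return self.decoder(self.encoder(curvature))

model = ShearAngleSurrogate()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

curvature = torch.randn(8, 1, 64, 64)       # placeholder batch (FE-generated in the study)
shear_angle = torch.randn(8, 1, 64, 64)     # placeholder targets from forming simulations
optimizer.zero_grad()
loss = loss_fn(model(curvature), shear_angle)
loss.backward()
optimizer.step()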
The continuous development of increasingly complex materials necessitates the expansion of epistemic knowledge in materials modeling to enable comprehensive optimization in terms of cost and performance already during the virtual design phase. In parallel, modern experimental techniques and data-driven methodologies have advanced substantially, with machine learning offering unprecedented opportunities for extracting meaningful insights from extensive datasets. Such insights can guide the development of robust, physically interpretable models that improve both the predictive accuracy and the fundamental understanding of material behavior.
In this context, a novel data-driven approach for discovering constitutive laws governing isotropic hyperelastic materials is presented. Rather than relying solely on black-box models such as deep neural networks, a sparse regression technique is employed to identify a parsimonious, interpretable functional form for the strain energy density. By starting with a large library of candidate functions, the methodology systematically selects only those terms that contribute to an accurate and physically meaningful representation of the material response. This approach yields models that remain transparent and interpretable, thereby facilitating a deeper appreciation of the underlying mechanics and a more direct integration into established simulation frameworks.
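To make the sparse-selection idea concrete, the following is a minimal sketch using an L1-regularized (Lasso) fit over a small library of invariant-based candidate terms; the candidate terms, synthetic data, and regularization weight are illustrative assumptions and not the actual library or algorithm of the presented work.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative candidate library for an isotropic material: strain-energy terms
# expressed through the invariants I1, I2 of the right Cauchy-Green tensor,
# evaluated at the measured deformation states.
def candidate_library(I1, I2):
    return np.column_stack([
        I1 - 3.0,                  # Neo-Hookean-type term
        I2 - 3.0,                  # Mooney-Rivlin-type term
        (I1 - 3.0) ** 2,           # higher-order Yeoh-type term
        (I1 - 3.0) * (I2 - 3.0),
        np.log(I2 / 3.0),
    ])

# Synthetic deformation states and a "measured" energy-like response
rng = np.random.default_rng(0)
I1 = 3.0 + rng.uniform(0.0, 2.0, size=200)
I2 = 3.0 + rng.uniform(0.0, 2.0, size=200)
Theta = candidate_library(I1, I2)
W_meas = 0.5 * (I1 - 3.0) + 0.05 * (I1 - 3.0) ** 2 + 0.01 * rng.normal(size=200)

# L1 regularization drives irrelevant coefficients to zero, yielding a
# parsimonious, interpretable strain-energy expression.
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Theta, W_meas)
print("selected coefficients:", model.coef_)
```

Terms whose coefficients are driven to zero are discarded, leaving a closed-form strain energy built only from the retained candidates.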
The method leverages full-field displacement data acquired from experimental tests, e.g., via digital image correlation (DIC), which provide spatially resolved information on the deformation state. Global and local regularization strategies are employed within a Bayesian inference framework to account for measurement noise, model uncertainties, and potential simplifications in the assumed constitutive structure. This probabilistic framework naturally incorporates uncertainty quantification, enabling model confidence to be assessed and model refinement to be guided more effectively.
A critical step in the procedure is the calibration of the identified constitutive model. To achieve this, the experimentally measured displacements are directly coupled with the displacements of finite element simulations of the candidate material model. By iteratively adjusting model parameters, the discrepancies between the simulated and observed displacement fields are minimized. This inverse analysis approach, driven by Bayesian inference, not only provides optimal parameter estimates but also explicitly quantifies uncertainties arising from imperfect measurements or modeling assumptions.
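A deterministic stand-in for this calibration step is sketched below: a toy forward model replaces the finite element simulation and a least-squares fit replaces the Bayesian inference, so the example only illustrates the displacement-matching structure of the inverse problem; all names and parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical forward model standing in for a finite element simulation:
# it maps material parameters to displacements sampled at DIC measurement points.
x = np.linspace(0.0, 1.0, 50)
def simulate_displacements(params):
    mu, alpha = params
    return (1.0 / mu) * x + alpha * x ** 2   # toy response; an FE solver in practice

# Synthetic "measured" DIC displacements with noise
true_params = np.array([2.0, 0.3])
u_meas = simulate_displacements(true_params) + 0.005 * np.random.default_rng(1).normal(size=x.size)

# Deterministic least-squares calibration; in the presented work this step is
# embedded in a Bayesian framework that also yields parameter uncertainties.
res = least_squares(lambda p: simulate_displacements(p) - u_meas, x0=[1.0, 0.0])
print("calibrated parameters:", res.x)
```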
The capabilities of the method are demonstrated through synthetic benchmark examples representing various isotropic hyperelastic materials.
This research concentrates on a stochastic analysis of a space-fractional truss model, a special case of the space-fractional Euler-Bernoulli beam. The analysis assumes that the applied loads, the length scale, or the order of the continuum are random (normally distributed). The main outcome is an answer to the question of the distribution of the deflections and normal forces at selected points of the truss structure. Furthermore, analyses of the coefficient of variation, skewness, flatness, and kurtosis are shown, and the safety measure is estimated.
Acknowledgement: This research was funded in whole by the National Science Centre, Poland, grant number 2023/51/B/ST8/01062.
The section will focus on advanced theoretical, numerical and experimental models for the evaluation of the behavior of structures subjected to static and dynamic loads. Innovative materials characterized by high strength, anisotropy and unconventional mechanical responses pose new challenges to the design and the performance of various structural elements such as beams, plates and shells. In particular, structural issues may appear at different scales when materials with an internal architecture are employed. Particularly welcome are linear and nonlinear models and algorithms that address static and dynamic behavior of structures at different scales.
Multi-point constraints are essential in modeling various engineering problems, for example in the context of joints undergoing large rotations or coupling of different element types in finite element analysis. The master-slave elimination is an efficient method for the numerical treatment of the constraints because it reduces the dimension of the resulting linear system which is particularly advantageous when a large number of constraints have to be considered. However, the method requires an inversion of the submatrix of the constraint Jacobian. For nonlinear constraints, this inversion has to be performed at every single iteration step of the Newton-Raphson scheme [1]. Nevertheless, the method exhibits a reduced computational complexity compared to Lagrange multipliers and the penalty method. This is also the case if the analysis of redundant constraints and the identification of slave degrees of freedom are included [2].
The aim of this talk is to present a method for drastically increasing the computational efficiency of this already efficient method. It is based on the exploitation of the specific structure of the constraint Jacobian as it appears in typical engineering applications. The analysis of this structure is based on the identification of constraint clusters and their coupling types, which can be performed in a pre-processing step without significant computational effort. All matrix operations required for this calculation are performed using the CSR technique for storing sparse matrices, which is also particularly advantageous because all the matrices required for this are already stored in this format. The corresponding algorithms are shown and their implementation is verified by elementary examples. Finally, the master-slave elimination is extended by the exploitation of the structure of the constraint Jacobian, which reduces the computational costs. The speed-up of the improved master-slave elimination over the previous master-slave elimination as well as other constraint methods is demonstrated using numerical examples.
[1] Boungard, J. and Wackerfuß, J.: Master-slave elimination scheme for arbitrary smooth nonlinear multi-point constraints. In: Computational Mechanics, 74(5):955–992, 2024
[2] Boungard, J. and Wackerfuß, J.: Identification, elimination and handling of redundant nonlinear multi-point constraints. In preparation.
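For orientation, the following is a minimal sketch of a linear(ized) master-slave elimination step in CSR format with SciPy: the constraint Jacobian is split into slave and master columns, the slave degrees of freedom are eliminated, and the reduced system is assembled. The matrices, constraint choice, and DOF partition are illustrative; the cluster-based structure exploitation described in the talk is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Constraints G u = 0 are split into slave and master columns, G = [G_s | G_m],
# and the slave DOFs are expressed as u_s = -G_s^{-1} G_m u_m.
n = 8                                          # total number of DOFs
K = sp.random(n, n, density=0.4, format="csr", random_state=0)
K = (K + K.T + 10.0 * sp.eye(n)).tocsr()       # SPD-like stiffness matrix
f = np.ones(n)

# two illustrative constraints: u0 = u2 and u1 = u3
G = sp.csr_matrix(np.array([[1.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                            [0.0, 1.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0]]))
slave = [0, 1]
master = [i for i in range(n) if i not in slave]

G_s = G[:, slave].tocsc()
G_m = G[:, master]
# transformation u = T u_m, with -G_s^{-1} G_m stacked on top of an identity block
T_top = sp.csr_matrix(-np.linalg.solve(G_s.toarray(), G_m.toarray()))
T = sp.vstack([T_top, sp.eye(len(master))]).tocsr()

perm = slave + master
K_pp = K[perm, :][:, perm]
K_red = (T.T @ K_pp @ T).tocsc()               # reduced stiffness matrix
f_red = T.T @ f[perm]
u_m = spla.spsolve(K_red, f_red)               # solve the reduced system
u = T @ u_m                                    # full DOF vector (permuted ordering)
```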
Mechanical structures are becoming lighter and thinner, saving resources and space. However, thin-walled structures are more prone to vibrations and emit more noise. Such thin-walled structures are also used for sensing in atomic force microscopes, where a cantilever serves as the sensing structure. The cantilever must remain in a static rest position for accurate measurements. Non-linear structural dynamics are utilized to achieve vibration reduction. The finite element method is applied to model the beam. A time-periodic electromagnetic actuator damps the beam. The optimization of the actuator is carried out through a parameter study. In this study, the position and the time-periodic force exerted by the actuator on the beam are varied, among other factors. This variation achieves adjustable stiffness. The applied force transforms the system into a time-periodic system, and the Floquet theorem is employed. Stability channels are identified by applying the Floquet theorem and varying the parameters. These stability channels indicate the extent of vibration reduction. By optimizing the parameter combination, the decay time can be significantly reduced. The dependence of the achievable decay time on the parameters is examined through studies of stability maps. This concept can be applied to atomic force microscopes to enable faster measurements with the same sensor sensitivity by reducing the decay time without altering the beam dynamics.
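The stability assessment can be illustrated on a single degree of freedom: the sketch below computes the monodromy matrix of an oscillator with time-periodic stiffness over one excitation period and checks the Floquet multipliers. The equation and parameter values are illustrative stand-ins for a single modal equation of the actuated beam, not the authors' finite element model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Floquet stability of x'' + 2*zeta*x' + (1 + eps*cos(Omega*t)) x = 0,
# a parametrically excited oscillator with illustrative parameters.
zeta, eps, Omega = 0.01, 0.4, 2.1
T = 2.0 * np.pi / Omega

def rhs(t, y):
    x, v = y
    return [v, -2.0 * zeta * v - (1.0 + eps * np.cos(Omega * t)) * x]

# monodromy matrix: integrate the fundamental solution over one period
Phi = np.zeros((2, 2))
for i, y0 in enumerate(np.eye(2)):
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-12)
    Phi[:, i] = sol.y[:, -1]

multipliers = np.linalg.eigvals(Phi)
print("Floquet multipliers:", multipliers)
print("stable:", np.all(np.abs(multipliers) < 1.0))
```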
Condition monitoring focuses on detecting and identifying changes in a machine's dynamics. In rotor dynamics, flexible connection elements such as bearings and joints are critical. Over time, the dynamics of these components can change, e.g., due to aging, which changes the overall dynamic behavior of the rotor system. Condition monitoring uses response data acquired during operation that is caused by operational forces. The contribution presents a practical approach to monitor the bearings' stiffness and damping, using transfer path analysis and frequency-based substructuring.
Transfer path analysis (TPA) is a method that considers the assembly of a source structure with an unknown excitation and a receiver structure. Typically, the goal of TPA is to characterize the excitation of the source and predict the responses of the receiver. However, in the context of condition monitoring, the goal is different. From an equivalent force and a measured response, the dynamics (i.e., transfer functions) of the system should be calculated. Using the calculated transfer functions, the dynamics of the bearings can be extracted by using frequency-based substructuring.
This contribution presents a workflow that uses TPA and frequency-based substructuring to identify the bearings of a rotor system in operation. In a TPA step, the bearings are considered part of the receiver system, and the rotor is considered the source system. An equivalent force at the interface that represents the operational excitation is determined. Measurements of the response on the interface are used to obtain some transfer functions of the system at the interface. These transfer functions are used in a frequency-based substructuring step to isolate the dynamics of the bearings, which are then used to detect changes. Previously, a numerical example of a beam with an operational excitation that is connected to the environment via spring-damper elements was presented. This contribution gives an experimental application of the novel procedure on an academic rotor test rig with magnetic bearings and artificial excitation.
The development of patient-specific lower limb prosthetics is highly important. Recent advances in additive manufacturing have paved the way for cost-effective, rapid, and customized prosthetic design. In order to ensure the safety and comfort of the wearer, a versatile testing method that can be used from the early stages of development is required. It is important that this method not only considers the mechanical properties of the prosthesis, but also the dynamic interaction between the prosthesis and the wearer. We believe that Real-Time Hybrid Substructuring (RTHS) is a promising approach to achieve this. In this method, a system is divided into substructures, some of which are simulated numerically and some of which are tested experimentally. Compatibility and equilibrium conditions at the interfaces of the substructures are satisfied in real-time using actuators and sensors. Thus, the dynamics of the complete coupled system can be replicated based on the hybrid set of substructures. When applying this approach to the testing of lower limb prostheses, the wearer is modeled using a numerical gait model, while a prototype of the prosthesis is built and tested experimentally. The motion and forces at the interface between the numerical gait model and the prosthesis are exchanged in real-time using a robotic actuator and a force-torque sensor. This technique offers the possibility to analyze the performance of the prosthetic device as well as the whole system of the amputee wearing the prosthesis and the interplay between the wearer and the prosthesis. It avoids the use of a test subject and numerical modeling of the prosthesis. In this talk, we will present the RTHS-based approach to prosthesis testing itself, our current state of research, and an outlook on future research with the intent of establishing a novel testing standard for prostheses. Our research has shown that alternating contact RTHS testing is particularly challenging because imperfect interface synchronization endangers system stability. To address this, we developed a control framework to ensure stable and high-fidelity contact RTHS testing, which was tested on two different hardware setups, namely a custom-built Stewart platform and a KUKA robotic arm. While our approach showed great performance on the Stewart platform, it could not yet reach its full potential on the KUKA robot. This is currently under investigation. We also present an initial proof-of-concept experiment using a life-size foot prosthesis.
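A minimal conceptual sketch of such a hybrid coupling loop is given below: the numerical substructure is integrated step by step, the interface state is "sent" to the physical substructure (here emulated by a simple spring-damper standing in for the prosthesis hardware), and the measured interface force is fed back. All parameters are illustrative, and actuator/sensor dynamics as well as delay compensation are omitted.

```python
import numpy as np

# Numerical substructure: 1-DOF stand-in for the gait/wearer model.
m_n, c_n, k_n = 80.0, 50.0, 2.0e4        # illustrative mass, damping, stiffness
# "Physical" substructure emulated in software: spring-damper prosthesis stand-in.
k_p, c_p = 5.0e4, 100.0
dt, n_steps = 1e-3, 2000

x, v = 0.0, 0.0
for i in range(n_steps):
    t = i * dt
    f_ext = 700.0 * max(np.sin(2.0 * np.pi * t), 0.0)   # gait-like loading
    # actuator command: impose interface displacement/velocity on the prosthesis;
    # sensor reading: measured interface force fed back to the numerical model
    f_interface = k_p * x + c_p * v
    a = (f_ext - f_interface - c_n * v - k_n * x) / m_n
    v += a * dt
    x += v * dt

print("final interface displacement:", x)
```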
Numerical solutions of, e.g., elasticity or heat conduction problems suffer from a deterioration of the overall accuracy when re-entrant corners, internal or boundary layers, or shock-like fronts are present [1]. A-posteriori error estimates offer a systematic way of retaining accuracy by localised mesh refinement. Following the seminal idea of Prager and Synge [2], such error estimates can be constructed based on the comparison of the discontinuous dual quantity, calculated from the primal approximation, with any H(div)-conforming function satisfying the so-called equilibrium conditions. When it comes to elasticity, the distinct symmetry of the stress tensor has to be considered (see e.g. [3]).
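For the Poisson model problem \(-\Delta u = f\) with homogeneous Dirichlet data, used here only as an illustration, the Prager-Synge identity underlying this construction reads, for any conforming approximation \(u_h\) and any equilibrated flux \(\sigma\),
\[
\|\nabla u - \nabla u_h\|_{L^2(\Omega)}^2 + \|\nabla u - \sigma\|_{L^2(\Omega)}^2
= \|\nabla u_h - \sigma\|_{L^2(\Omega)}^2,
\qquad \sigma \in H(\mathrm{div},\Omega), \ \operatorname{div}\sigma + f = 0,
\]
so that \(\|\nabla u_h - \sigma\|_{L^2(\Omega)}\) is a guaranteed upper bound for the error \(\|\nabla(u - u_h)\|_{L^2(\Omega)}\). In the elasticity case, the equilibrated stress must additionally satisfy the (weak) symmetry condition mentioned above.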
Besides the construction of adaptive solution procedures, the equilibrated dual quantity offers an increased accuracy, especially on boundaries with prescribed Dirichlet conditions. Within this contribution, we discuss an efficient implementation of flux and stress equilibration within the finite element framework FEniCSx. Based on structures with an auxetic sub-structure, the advantages of adaptive solution algorithms regarding efficiency and approximation quality are discussed.
References:
[1] Verfürth, R. (1994). A posteriori error estimation and adaptive mesh-refinement techniques. J. Comput. Appl. Math., 50, p. 67-83.
[2] Prager, W. \& Synge, J. L. (1947). Approximations in elasticity based on the concept of function space. Quart. Appl. Math., 5, p. 241–269.
[3] Bertrand, F. et al. (2021). Weakly symmetric stress equilibration and a posteriori error estimation for linear elasticity. Numer. Methods Partial Differ. Equ., 4, p. 2783-2802
The rapid advancement of industrial technologies and of data collection and handling methods has paved the way for the widespread adoption of Digital Twins (DTs) in engineering, enabling seamless integration between physical systems and their virtual counterparts. Digital Twins are dynamic, real-time digital replicas of physical assets, systems, or processes designed to optimise performance, enhance predictive capabilities, and support decision-making across diverse engineering domains.
The current work presents a comprehensive framework for building robust and scalable Digital Twins tailored for material testing applications using a universal testing machine (UTM), focusing on core challenges such as model fidelity, data integration, and computational efficiency. Our goal is to build a Digital Twin of a material subjected to cyclic loading. A simple linear elastic material model with its governing PDE is considered in this case. The model is used to simulate the material's behaviour through the finite element method. The obtained high-fidelity solution is then reduced with the help of the well-known POD-greedy method, which improves the computational efficiency. The solution is further improved by integrating data collected by sensors in real time. The Parametrized-Background Data-Weak (PBDW) method is implemented to realise this data integration.
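The model-reduction step can be illustrated with a plain POD compression of a snapshot matrix via the singular value decomposition; the snapshot data below are random placeholders for finite element solutions, and both the greedy sampling and the PBDW correction are omitted.

```python
import numpy as np

# Snapshot matrix: each column stands in for a full-order FE solution at one
# parameter/time instance (here synthetic low-rank data plus noise).
rng = np.random.default_rng(0)
n_dof, n_snap = 1000, 40
modes_true = rng.normal(size=(n_dof, 5))
S = modes_true @ rng.normal(size=(5, n_snap)) + 0.01 * rng.normal(size=(n_dof, n_snap))

# POD basis via thin SVD; retain modes capturing 99.9% of the snapshot energy
U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
V = U[:, :r]                                  # reduced basis of dimension r

# A new high-fidelity state is approximated by projection onto the basis; in the
# PBDW setting this background space is further corrected by real-time sensor data.
u_new = modes_true @ rng.normal(size=5) + 0.01 * rng.normal(size=n_dof)
u_rb = V @ (V.T @ u_new)
print("basis size:", r, "relative projection error:",
      np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new))
```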
Our approach integrates physical knowledge, which is available in terms of constitutive or material modelling, and real-time sensor data to construct and continuously update the Digital Twins. Emphasis is placed on the knowledge available through partial differential equations, which addresses the reliability and robustness of the digital twin model. This work highlights the advantages of digital twins in the predictive maintenance and health monitoring of assets or systems, which eliminates unexpected failures and downtimes in engineering applications.
The section covers all fields of vibrational problems in solid mechanics or mechatronics, including nonlinear effects. Submissions may address, for example, systems with nonlinear material behavior, nonlinearities in joints, mathematical solution methods (analytical or numerical), control or description of nonlinear behavior like bifurcations or chaos, or experimental identification of nonlinearities.
We study a weakly nonlinear boundary-value problem for a system of delay differential equations. The initial function of the delay differential system contains an unknown eigenfunction that provides the solvability of the weakly nonlinear boundary-value problem. Due to the impossibility of defining solutions of weakly nonlinear boundary-value problems in terms of elementary functions, the necessity of deriving computational iterative solution methods arises. We obtain conditions of solvability and construct a new iterative scheme for finding solutions of the weakly nonlinear boundary-value problem for a system of differential equations with delay as well as its eigenfunction. One potential application of this study of the boundary-value problem with delay is connected to the problem of simulating nonisothermal chemical reactions.
Cell biological systems are characterized by complex relationships and non-linear processes. The modelling of these processes improves the understanding and represents a significant enrichment of the experimental investigation. An example of such a system is the regulation of blood glucose concentration by pancreatic β-cells through the secretion of insulin. β-cells are electrically active, and insulin secretion is regulated by an interplay of metabolic and electrophysiological components, resulting in a membrane potential change between silent and active burst phases.
Mathematically, this behavior can be described by a set of ordinary non-linear differential equations. In the electrical system, different types of bifurcations occur as the ATP concentration varies, which links the metabolic and electrical activity. The state of the system changes from a stable equilibrium to a limit cycle and back again. The transition from the limit cycle to the equilibrium point is characterized by an increase in period duration, which is due to the type of bifurcation, the merging of a limit cycle with a saddle point.
The change in the period duration can be approximated comparatively by the eigenvalue analysis of the saddle point as well as by the harmonic balance method with respect to the limit cycle. Small changes in the bifurcation parameter, i.e., the ATP concentration, have a very strong effect on the system behavior. To gain a better understanding, it is useful to separate the metabolic and electrical activity in experiments and thus in simulations. However, the ATP concentration in an experimentally investigated cell can also vary greatly during a measurement or cannot be accurately determined by the measurement method.
In order to take this variation into account, an uncertainty-based view of the problem is recommended. The uncertainty of the amplitude and period of the limit cycle is quantified by combining the harmonic balance method with the generalized polynomial chaos expansion. This spectral stochastic method achieves a significant reduction in computational time compared to the most commonly used Monte Carlo method. The intrusive and non-intrusive approach of the combined method as well as the approximation of the period duration are compared regarding their efficiency and accuracy on the example of the β-cell.
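As a toy illustration of the non-intrusive variant, the sketch below computes a polynomial chaos expansion of a scalar quantity of interest (standing in for the limit-cycle period) with respect to a Gaussian uncertain parameter via Gauss-Hermite quadrature; the response function, distribution, and orders are illustrative, and the coupling with the harmonic balance equations is not reproduced.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Non-intrusive gPC of a scalar quantity of interest (stand-in for the limit-cycle
# period) with a Gaussian germ xi ~ N(0, 1) parametrizing the ATP concentration.
def period(atp):
    # placeholder for the harmonic-balance solve; smooth illustrative response
    return 1.0 + 0.3 * np.tanh(2.0 * (atp - 1.0))

mu_atp, sigma_atp = 1.0, 0.05       # illustrative mean and spread of the ATP level
p_order = 4                         # polynomial chaos degree
nodes, weights = hermegauss(p_order + 1)    # probabilists' Gauss-Hermite rule

# spectral projection: c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2]
coeffs = np.zeros(p_order + 1)
f_vals = period(mu_atp + sigma_atp * nodes)
for k in range(p_order + 1):
    He_k = hermeval(nodes, np.eye(p_order + 1)[k])
    coeffs[k] = np.sum(weights * f_vals * He_k) / np.sum(weights * He_k**2)

mean = coeffs[0]
variance = sum(coeffs[k]**2 * factorial(k) for k in range(1, p_order + 1))
print("gPC mean:", mean, "gPC std:", np.sqrt(variance))
```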
Certain materials exhibit different elastic behavior in tension and compression. These materials are typically modeled as bimodular, with different Young's moduli in tension and compression, as a simplified engineering approximation. Due to this material property, the neutral axis of a beam composed of bimodular material shifts away from the geometric centroid of the cross-section, with its position either above or below the geometric centroid depending on the sign of the curvature. Consequently, the equation governing flexural oscillations is formulated using a model based on effective two-layer laminates, incorporating a discontinuous neutral axis. However, the position of the neutral axis for each of the two bending configurations is influenced not only by the material's elastic properties but also by the characteristics of a cross-section, where the cross-section, in this case, varies along the beam axis. Depending on the cross-sectional properties, the dynamic response of a bimodular beam may exhibit either linear or nonlinear vibrational behavior. When a difference in the effective bending stiffness exists between upward and downward bending, additional varying internal forces arise, leading to a nonlinear dynamic response. For this purpose, the harmonic balance method is specifically applied to determine the frequency-response function to periodic excitation. A numerical investigation of a bimodular tapered beam with different cross-sectional properties, followed by a comparative analysis of the responses, reveals the substantial effect of bimodular materials, particularly concerning their cross-sectional properties.
We analyze the relationship between boundary value problems with impulsive action at fixed points in time and boundary value problems with switching at fixed and non-fixed points in time. We derive constructive conditions for the solvability of these problems and a scheme for constructing solutions to a nonlinear periodic boundary value problem with switching at non-fixed points in time. Both are based on using the Adomian decomposition method. In addition, we obtain constructive conditions for the convergence of the iterative scheme to the solution of the weakly nonlinear boundary value problem, as well as the switching points. The obtained iterative scheme is applied to find approximations to the periodic solution of the equation with switching at non-fixed moments of time, which models, e.g., a nonisothermal chemical reaction.
The section focuses on constitutive modelling of natural and artificial materials subject to elastic and inelastic deformation processes. The aim is to compare new constitutive models formulated on both the phenomenological and the micromechanics basis to determine their validity by comparison of simulations with experiments. A wide range of open problems will be considered in the section, like multi-scale modelling of heterogeneous materials, implementation of constitutive models in numerical applications, and the virtual testing of structural systems.
The layered structure of composite laminates makes them susceptible to delamination. To counteract this, translaminar reinforcements can be introduced, which can slow down or stop crack propagation by bridging. Through-thickness reinforcements such as z-pins and stitching threads displace fibers of the laminate laterally during insertion. As a result, eye-shaped resin zones without fiber reinforcement form around the reinforcements. In addition, the laminate is compacted locally, leading to a distortion of the fibers and locally increased fiber volume contents. The modelling of realistic resin zone geometries and laminate microstructure is important for accurate simulations of the in-plane behavior of through-thickness reinforced laminates. Numerical studies on the thermal eigenstresses in the pin-laminate interface due to the contrast in thermal expansion of the constituents likewise require a detailed description of the microstructure. Existing models and microstructure definitions feature different non-physical assumptions, like discontinuous fiber paths, or yield unrealistically high fiber volume contents. Furthermore, they are limited to circular and elliptical through-thickness reinforcement geometries and are often based on micrographs of specific configurations. Therefore, they cannot be easily applied parametrically to other geometric configurations. Experimental studies demonstrated an increased Mode I fracture toughness for composite laminates reinforced with rectangular z-pins compared to laminates with circular pins. Therefore, a microstructure definition also applicable to rectangular transversal reinforcements is important to investigate the underlying mechanisms of the geometry's influence on the bridging behavior. In this work, a detailed model of the microstructure around stitching threads and z-pins of different geometries is proposed that parametrically considers the reinforcements' shape, dimensions, and the possible overlap of the distorted fiber zones caused by adjacent reinforcements. The definitions of the distorted zone's width, locally increased fiber volume content, and fiber angle are independent of the chosen resin zone contour definition. The resulting microstructures for different fiber volume decline approaches and resin zone functions are presented and compared to micrographs. The influence of the modelling parameters on the effective in-plane mechanical properties is investigated in a representative unit cell finite element simulation.
The simulation of large topological changes, such as those occurring in Cone Penetration Testing (CPT) and vibratory pile driving, still poses significant challenges in computational soil mechanics. Severe mesh distortions render classical approaches like the Finite Element Method (FEM) unsuitable, emphasizing the need for alternative methodologies. The Particle Finite Element Method (PFEM) combines standard FEM with continuous remeshing techniques, offering a robust framework specifically designed for simulating large topological changes. However, the inherent remeshing process in PFEM, combined with the mapping of state variables from integration points to nodes, can cause deviations in these variables, potentially violating admissible states in constitutive models and compromising the accuracy and stability of the simulation. This study investigates static and dynamic benchmark simulations, such as a Hertzian contact problem and the wave propagation in dry and saturated soil columns, to evaluate the influence of such mapping-induced variations on the accuracy and reliability of PFEM-based solutions. The results provide insights into the application of PFEM to more complex boundary value problems, particularly those involving advanced constitutive models, such as hypoplasticity with intergranular strain or Sanisand, which are highly sensitive to changes in their state variables.
Interpenetrating phase composites (IPC) are a class of materials with two continuous phases. The combination of a soft and stiff skeletal phase results in a combination of both properties, known e.g. from fiber-reinforced composites. We investigate composites made of 3D fiber networks in a hyperelastic silicone matrix for IPC, which exhibit strong, nonlinear mechanical behavior and energy dissipation for applications in artificial cartilage.
The aim of this work is to model the mechanical properties of interpenetrating composites for numerical analysis. The process is based on the inclusion of CT imaging data and experiments for each phase of the composite. Numerical analysis and comparison of the obtained results with the experiments allow the strains and stresses inside the composite structure to be determined. This is of significant importance in describing the obtained structural and mechanical properties of isotropic fiber networks and the resulting IPCs. The composites exhibit a unique mechanical interaction between the fiber networks and the matrix, which makes it possible to adjust the anisotropy of the mechanical response. Insights from in situ nanoCT analysis and mechanical modeling are used to identify the skeletal mechanism.
[1] L. Siebert, T. Jeschek, B. Zeller-Plumhoff, R. Roszak, R. Adelung, M. Ziegenhorn: Mechanical Interactions in Interpenetrating Composites. In: 5th International Conference on Nanotechnologies and Biomedical Engineering: Proceedings of ICNBME-2021, November 3-5, 2021, Chisinau, Moldova. Springer, 2022, pp. 579-586.
Anisotropic elasto-plasticity, sensitivity to temperature and moisture, wrinkling, and the occurrence of damage are some of the phenomena that need to be analysed for a better insight into the performance of a material. Due to this complex material behavior, the usage of certain materials, like paper and paperboard, has been hindered. These materials are highly sustainable and are mostly used in the packaging industry, being a reasonable alternative to other, less environmentally friendly materials.
Paper and paperboard undergo large deformations in the elastic and plastic regime, primarily during the forming process. In a continuum-mechanical framework, they can be considered a homogeneous anisotropic material, with the anisotropy present in the machine direction, the cross direction, and the out-of-plane direction. It has already been shown that the Poisson's ratio in the out-of-plane direction is very close to zero, which allows the in-plane and out-of-plane behavior to be decoupled. The largest deformations occur in the out-of-plane direction. A unified model is required to address these complex deformations.
The conventional yield criterion by Hill does not adequately account for this complex material response, since the elastic properties are altered and anisotropic hardening evolves with the plastic deformation.
Starting from a non-unique intermediate configuration in the finite elasto-plasticity framework, where the deformation gradient is multiplicatively split into an elastic and a plastic part, the derivation of frame-indifferent, nonlinear constitutive laws for anisotropic materials requires the introduction of structural tensors as well as further additional independent variables. The strain energy function is defined in terms of representations of the right Cauchy-Green deformation tensor and the structural tensor for this intermediate configuration. The decomposition of the elastic strain energy into in-plane and out-of-plane parts accounts for their differences. It is thus possible to obtain a more general treatment of elasto-plastic interactions that captures the densification effect, i.e. the increase in elastic stiffness due to plastic deformation during out-of-plane compressive loading.
Additional changes were made to account for inconsistencies in the initial yield stresses and the plastic strain ratios. The anisotropic plastic hardening is considered through the definition of a set of coupled internal variables. This ensures the validity of the model for finite deformations in a thermodynamically consistent manner. Encompassing several important characteristics of the elasto-plastic behavior of paper and paperboard, promising numerical results confirm the model's validity, and the forming process could be optimized on this basis.
Precipitation hardening is a heat treatment well known to enhance the mechanical properties of metals by introducing precipitate particles in the microstructure. However, this technique can also be applied to ferroelectric crystals, as recent studies have shown [2,3]. As an alternative to doping, it effectively reduces domain wall mobility as well as heat dissipation, thereby enhancing the mechanical quality factor of ferroelectric materials. Determining the optimal shape and size of the precipitates is the key question of our research, with the goal of improving the efficiency of ferroelectric materials. Our work is based on a phase field approach to simulate the evolution of domains in ferroelectric material [1]. The pinning effect of precipitates on domain walls is analyzed by introducing elliptical and spline-shaped precipitates in the material. This allows favorable precipitate shapes to be determined. For the computation of the multiscale problem, we make use of the open-source finite element library of the FEniCS Project.
[1] Bohnen, M., Müller, R. (2023). Simulation of precipitate hardening in ferroelectric material. Proceedings in Applied Mathematics and Mechanics, 23, e202300215. https://doi.org/10.1002/pamm.202300215
[2] Zhao, C. et al. (2022). Coherent Precipitates with Strong Domain Wall Pinning in Alkaline Niobate Ferroelectrics. Advanced Materials 34(38), 2202379. https://doi.org/10.1002/adma.202202379
[3] Zhao, C., Benčan, A., Bohnen, M. et al. (2024). Impact of stress-induced precipitate variant selection on anisotropic electrical properties of piezoceramics. Nat Commun 15, 10327. https://doi.org/10.1038/s41467-024-54230-0
Many technical applications involving flows profit from trying to manipulate the boundary conditions or flow parameters in such a way as to generate a desired effect, like reduced drag, increased mixing, attenuated or increased turbulence or reduced sound emission, for example. The sophistication of the governing equations requires a broad range of research topics and methods to be covered, including analytical treatments, reduced-order modelling, passive manipulation of boundaries in experiments involving riblets, active manipulations using actuators, for example, or numerical approaches involving the adjoint equations, amongst others. The speakers of our session reflect the broad application area of flow control and discuss difficulties in the application side and recent advances in the analysis, as well as experimental and numerical approaches.
In aerospace transportation and propulsion systems, shock-induced flow separation leads to highly unsteady flow fields and has strong detrimental effects on the aerodynamic behavior and performance. To alleviate these effects, separation control is investigated and developed. A commonly pursued approach uses vortex generators of different design to increase the momentum transfer within the boundary layer and thus make it less prone to separation. Different designs of mechanical vortex generators, valued in aerospace engineering for their robustness and simplicity, as well as the more flexible and less drag-penalty prone air-jet vortex generators (AJVGs), have been studied. A large number of parameters influence the control effectiveness of these devices, amongst them geometrical parameters, flow parameters, and the array arrangement of multiple vortex generators. The latter aspect is particularly relevant for AJVGs, which are small enough to allow for inter-device spacings that enable interactions between the turbulent structures induced by neighboring AJVGs, and where the intensity of these interactions strongly influences their control effect. Jet/jet interactions of medium strength enhance the control effectiveness of AJVGs, whereas interactions that are too strong may even cause an increase in separation.
With this great number and variety of control parameters, it is inevitable that the vortex generators encounter off-design conditions during a flight trajectory or engine run, where the operating conditions vary and will never be as clean as in a laboratory.
An overview will be provided on the relevant control parameters and their respective influences, both for mechanical vortex generators including microramps and microvanes and for air-jet vortex generators, and we will introduce the physical mechanisms and underlying flow dynamics governing the control effect. Then, influences of off-design conditions will be discussed. For these purposes, we use results from recent experimental and numerical studies in supersonic and hypersonic turbulent boundary layers. A joint analysis of this rich experimental-numerical data set with statistical methods and modal analysis with dynamic-mode decomposition, amongst other techniques, allows an in-depth interpretation of observed flow phenomena and control mechanisms and effects. On the basis of these findings, the application potential of the investigated control devices will be assessed.
Various industries such as urban air mobility, wind energy, aerospace, and transportation have been subjected to noise emission regulations by most governments. This has led to increased research on noise emission and mitigation strategies. Specifically for the wind industry, the larger turbine rotors implemented to increase energy production also emit higher sound levels, which is a serious concern for the industry.
Additionally, rotor blades experience adverse pressure gradients leading to flow separation, reduced performance and efficiency, and stall in extreme conditions. To tackle this, a specific type of flow control device known as rod-type vortex generators (RVGs) has been investigated for wind turbine rotors and airfoils. These add-ons have been proven to enhance flow mixing in the boundary layer, thus delaying or reducing turbulent flow separation. However, studies on their impact on the emitted noise levels are limited.
In order to gain a fundamental understanding of noise generation and propagation mechanisms, a computational approach is utilized: an in-house post-processing tool based on the popular Ffowcs Williams and Hawkings (FW-H) acoustic analogy. It is based on the linear, integral solution of the FW-H analogy derived by Farassat, known as Farassat's formulations, and predicts the noise emitted by rotating bodies in subsonic motion (M < 0.8). Based on the pressure distribution on the blade surface and the rotor blade kinematics, the FW-H code predicts the rotational (harmonic) noise and its components in great detail. Additionally, acoustic measurements using a phased microphone array are conducted on a wind turbine airfoil equipped with RVGs.
With the implementation of the RVGs, the separation noise is reduced due to the reduction of the flow separation area, but the rods also increase pressure fluctuations, as they generate streamwise vortices that could potentially increase the loading noise. This research focuses on the interplay of these two competing noise mechanisms to assess the acoustic impact of the flow control device.
The numerical investigation of the acoustic impact of the RVGs on the NREL Phase VI wind turbine rotor shows that they have no significant impact on the emitted rotational noise. There is an increase in the noise levels at higher frequencies, which is not of major concern for the wind industry. The experimental investigation on the DU96-W-180 airfoil with RVGs shows that the rods decrease noise at low frequencies and slightly increase noise at higher frequencies (by about 2 dB, with an array uncertainty of 1 dB).
Installation of Ultra High Bypass Ratio (UHBR) turbofan engines on airframes brings significant reductions of fuel consumption and emissions. However, due to the limited height of the undercarriage, UHBR engines must be mounted close to the wing. This results in slat cut-outs at the juncture of the engine pylon, which significantly promote separation at high angles of attack. The flow separation in this region reduces the total lift of the wing during the most crucial phases of the flight mission, i.e. during take-off and landing. Therefore, mitigation of the risk related to flow separation in the slat-less area is an important research problem. Application of active flow control (AFC) has been envisioned as the most promising solution to control the flow separation at the juncture of the engine pylon. Among the investigated AFC approaches, pulsed jet actuators (PJAs) provide the most promising results on a real-scale wind tunnel (WT) model. Nevertheless, the momentum and energy consumption of a PJA system is still too high for implementation on an aircraft. The current work aims at lowering the energy required for separation flow control by applying advanced actuation patterns and increasing the pulse frequency while lowering the duty cycle ratios. For that purpose, a dedicated pulsed jet actuator system was developed. The design goal was to provide individual control of the pulsation frequency and duty cycle at each PJA slot in the array. The paper presents results of silent-condition testing of the individual actuator designed for large-scale wind tunnel testing. The bench test included characterisation of high-frequency pulsations of the jet by hot-wire anemometry for various frequencies, duty cycles, input pressures, and mass flow rates. The obtained results prove that the developed actuator can provide distinct flow pulsation at high flow rates, enabling high jet velocities. The results pave the way for the development of a 77-nozzle actuator for a large-scale wind tunnel model for testing power-efficient flow control over a large aerodynamic surface.
CANCELLED
——————
The performance of a wind turbine rotor depends on the wind characteristics of the site and the aerodynamic shape of the blades. The blade geometry determines the torque and the power generated by the rotor. This study investigates the enhancement of small Horizontal Axis Wind Turbine (HAWT) rotor performance by using Response Surface Methodology (RSM) through Computational Fluid Dynamics (CFD) tools. The optimization process employs a curve-fitting technique to optimally distribute the chord length and twist angle along the blade. The obtained design is analysed using the CFD tool Ansys Fluent. To account for full turbulence effects, the SST (Shear Stress Transport) turbulence model is utilized for initial simulations. To enhance the reliability and precision of the optimization results, a refinement method is implemented, incorporating the top three candidate solutions identified through the Multi-Objective Genetic Algorithm (MOGA) into the Design of Experiments (DOE) table. Three optimization loops are performed, focusing on outperforming the initial design points from MOGA against the best candidates in the DOE table. Additionally, the study expands the input variable constraints from 10% to 40%, based on a comparative analysis, to assess and demonstrate the effectiveness of the optimization strategy. This multi-step approach ensures a robust and efficient optimization process for improving HAWT rotor performance.
This session is devoted to the mathematical analysis of natural phenomena and engineering problems. In this area PDEs play a basic role. Therefore lectures discussing analytical aspects of PDE problems as well as problems in the Calculus of Variations are welcome.
In the spirit of preceding results for the quasistatic case, we study the limiting process from nonlinear to linearized viscoelastodynamics for a general frame-indifferent model of Kelvin-Voigt rheology. We prove existence of weak solutions in both cases using a recently developed variational time-delayed approach. We then prove that solutions to the nonlinear model indeed converge to the unique solution of the linearized model and highlight how the different limits of linearization and the approximation process interact.
According to the Nernst theorem or, equivalently, the third law of thermodynamics, the absolute zero temperature is not attainable. Starting with an initial positive temperature, we show that there exist solutions to a Kelvin-Voigt model for quasi-static nonlinear thermoviscoelasticity in a finite-strain setting [Mielke-Roubíček '20], obeying an exponential-in-time lower bound on the temperature. Afterwards, we focus on the case of deformations near the identity and temperatures near a critical positive temperature, and we show that weak solutions of the nonlinear system converge in a suitable sense to solutions of a system in linearized thermoviscoelasticity. Our result extends the recent linearization result in [Badal-Friedrich-Kružík '23], as it allows the critical temperature to be positive.
In this talk we study a dynamic optimal transport type problem on a compact set (bulk) coupled with a non-intersecting and sufficiently regular curve. On each of them, a Benamou-Brenier type dynamic optimal transport problem is considered, yet with an additional mechanism that allows the exchange (at a cost) of mass between bulk and curve. In the respective actions, we allow for non-linear mobilities. Our first result is a proof of existence of minimizers based on the direct method of calculus of variations.
The main interest lies in the case when the curve is also allowed to change. Then, a Tangent-Point energy is added to the action functional in order to preserve the regularity properties of the curve. Also in this case, existence of optimizers is shown.
We extend these analytical findings by numerical simulations based on a primal-dual approach that illustrate the behaviour of geodesics, for fixed and varying curves.
Various solution concepts for rate-independent systems have been developed, generally leading to different solutions. If the underlying energy functional is not convex or the dissipation potential depends not only on the rate but also on the state of the solution, a discontinuous evolution may be necessary. The concept of balanced viscosity (BV) solutions allows such temporal jumps and shows in addition a physically reasonable behavior.
In this talk, we assume the dissipation potential to depend Lipschitz-continuously on the state of the solution. We provide an approximation scheme with adaptive time-step size based on [1] and abstract conditions ensuring its convergence to BV solutions, cf. [2,3].
One possible application lies in plasticity models with non-associated flow rules, which are often used for granular media such as soils and rocks. Laborde proposed a generalization of the principle of maximum dissipation that allows a variational formulation of such models to be deduced [4]. In particular, this leads to a state-dependent dissipation potential and therefore fits into the setup of our theory.
Additionally, a cap model is often used. It modifies the yield surface to intersect the hydrostatic axis and thus bounds the elastic region. In this way, compaction of the material under high hydrostatic pressure is included. Finally, we discuss the cap model from a mathematical perspective.
[1] M. A. Efendiev and A. Mielke. On the rate-independent limit of systems with dry friction and small viscosity. Journal of Convex Analysis, vol. 13, pp. 151–167, 2006.
[2] M. Sievers. Convergence analysis of a local stationarity scheme for rate-independent systems. ESAIM: Mathematical Modelling and Numerical Analysis, vol. 56(4), pp. 1223-1253, 2022, https://doi.org/10.1051/m2an/2022034.
[3] A. Mielke, R. Rossi. Existence and uniqueness results for general rate-independent hysteresis problems. Mathematical Models and Methods in Applied Sciences, vol.17(1), pp. 81-123, 2007, https://doi.org/10.1142/S021820250700184X.
[4] J.-F. Babadjian, G. Francfort, M.G. Mora. Quasi-static Evolution in Nonassociative Plasticity: The cap Model. SIAM J. Math. Anal., vol. 44, pp. 245-292, 2012, https://doi.org/10.1137/110823511.
In this talk, I present a linearization result for quasistatic fracture evolution in nonlinear elasticity. As the stiffness of the material tends to infinity, we show that rescaled displacement fields and their associated crack sets converge to a solution of quasistatic crack growth in linear elasticity without any a priori assumptions on the geometry of the crack set. The proof relies on a careful study of unilateral global minimality, as determined by the nonlinear evolutionary problem, and its linearization together with a variant of the jump transfer lemma in GSBD. Based on joint work with Pascal Steinke and Kerrek Stinson.
Lower semicontinuity of surface energies in integral form is known to be equivalent to BV-ellipticity of the surface density. In this talk, we prove that BV-ellipticity coincides with the simpler notion of biconvexity for a class of densities that depend only on the jump height and jump normal and are positively 1-homogeneous in the first argument. The second main result is the analogous statement in the setting of bounded deformations, where we show that BD-ellipticity reduces to symmetric biconvexity. Our techniques are primarily inspired by constructions from the analysis of structured deformations and the general theory of free discontinuity problems. This is joint work with Carolin Kreisbeck and Marco Morandotti.
Optimization is the next natural step after simulation, with increasing importance in the future. The aim of this session is to provide the basis of a holistic overview of all areas of optimization. Thus, abstracts from both theoretical and applied perspectives are welcome.
The optimal power flow problem lies at the intersection of power system engineering and operations research. Its application areas comprise expansion planning, operation, market clearing, network resiliency, and unit commitment. Optimal power flow calculates optimal generator power and voltage setpoints to minimize the cost of system losses and the cost of power generation. Security-constrained optimal power flow (SC-OPF) [1] calculates an optimal power flow which will not violate any current or voltage limits in case one of the n-1 anticipated scenarios of the system state occurs due to a line fault or generator shutdown.
The security-constrained optimal power flow is modeled as a nonlinear program (NLP), which minimizes the economic dispatch under the constraints of maintaining the power balance for active and reactive power as well as voltage and current bounds across the network. Any deviation from the bounds enters the objective function as a penalty.
For the solution of the SC-OPF problem, we compare a holistic formulation with a scenario-based formulation. The scenario-based formulation allows for Benders decomposition [2,3] to decouple the base problem (also called master problem) from the scenarios (also called subproblems). Benders decomposition solves the SC-OPF problem in an iterative procedure, alternating between the base problem and the scenarios.
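The following is a minimal sketch of this alternation between master problem and subproblem on a deliberately tiny linear program (one base variable, one scenario variable), solved with SciPy; the model, cut handling, and tolerance are illustrative and unrelated to the actual SC-OPF formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy Benders decomposition for
#   min  x + 2*y   s.t.  x + y >= 3,  0 <= x <= 10,  y >= 0,
# with x in the master ("base") problem and y in the subproblem ("scenario").
cuts = []                      # dual multipliers of the accumulated optimality cuts
LB, UB = -np.inf, np.inf

for it in range(20):
    # master problem: min x + theta subject to the accumulated cuts theta >= lam*(3 - x)
    A_ub = [[-lam, -1.0] for lam in cuts]
    b_ub = [-3.0 * lam for lam in cuts]
    res_m = linprog(c=[1.0, 1.0], A_ub=A_ub or None, b_ub=b_ub or None,
                    bounds=[(0.0, 10.0), (0.0, None)], method="highs")
    x_bar, theta = res_m.x
    LB = res_m.fun

    # subproblem dual for fixed x_bar: max lam*(3 - x_bar) s.t. 0 <= lam <= 2
    res_s = linprog(c=[-(3.0 - x_bar)], bounds=[(0.0, 2.0)], method="highs")
    lam = res_s.x[0]
    sub_val = lam * (3.0 - x_bar)              # = min 2*y s.t. y >= 3 - x_bar, y >= 0
    UB = min(UB, x_bar + sub_val)

    if UB - LB < 1e-8:
        break
    cuts.append(lam)                           # add optimality cut to the master

print(f"x* = {x_bar:.3f}, objective = {UB:.3f}, iterations = {it + 1}")
```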
The performance of the holistic approach and the Benders decomposition approach is evaluated on multiple IEEE example networks [4] in terms of solution quality, speed, and robustness.
[1] J. K. Skolfield and A. R. Escobedo, “Operations research in optimal power flow: A guide to recent and emerging methodologies and applications,” European Journal of Operational Research, vol. 300, no. 2, pp. 387–404, 2022.
[2] S. Cvijic and J. Xiong, “Security constrained unit commitment and economic dispatch through benders decomposition: A comparative study,” in 2011 IEEE Power and Energy Society General Meeting, pp. 1–8, 2011.
[3] R. Rahmaniani, T. G. Crainic, M. Gendreau, and W. Rei, “The Benders decomposition algorithm: A literature review,” CIRRELT, 2016.
[4] F. Li and R. Bo, “Small test systems for power system economic studies,” in IEEE PES General Meeting, pp. 1–4, 2010
Due to the massive integration of decentralised energy resources (DER) such as photovoltaic systems, battery storage and e-charging stations, the optimal power flow (OPF) in electric distribution networks has gained significant research interest. Assuming that an active distribution network possesses a fixed topology, the equality constraints are described by the power flow equations, which must satisfy Kirchhoff's law and Ohm's law, whereas the state and control variables compose inequality constraints posed by the physical limitations of the network components. In particular, radial low-voltage networks feature a simpler topology but a larger set of variables.
Given a set of linear or quadratic objectives and constraints concerned with power network security, a steady-state OPF can be formulated as a non-linear, non-convex optimisation problem. Generally, solving a large-scale, single-stage OPF is already faced with challenges including the non-convexity of the feasible set and the size of the large, sparse admittance matrix.
From there, researchers have been employing optimal control techniques to transform the time-invariant OPF into a time-varying problem on a discrete temporal domain. The discrete-time OPF can be solved progressively at each time instant, or, if forecast data are available, as a stacked model predictive control problem over a longer discretised time window. However, classical centralised Newton-type methods, such as the interior point (IP) and sequential quadratic programming (SQP), will not be suitable for large-scale online OPF regarding the computation time requirement and data privacy concerns. Consequently, the demand for distributed OPF formulations and algorithms arises.
With this background, various distributed optimisation approaches and their variants have been investigated recently to accommodate the requirements of online, time-varying OPF. Several examples of distributed OPF approaches are the projected primal-dual gradient method, the alternating direction method of multipliers (ADMM), and the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) method. For instance, a possibility for model reduction by condensing the variable space is implicitly contained in the OPF formulation. By the implicit function theorem, the control variables can be mapped to the state variables through the power flow equations, provided the discretised time step is small enough.
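For orientation, the sketch below runs consensus ADMM on a toy quadratic splitting between two "areas" that must agree on a shared boundary variable; it only illustrates the update structure (local minimization, consensus averaging, dual ascent) and does not contain any power flow constraints.

```python
import numpy as np

# Two areas with local quadratic costs f_i(x) = 0.5*a_i*(x - b_i)^2 that must
# agree on a shared coupling variable z (illustrative stand-in for a boundary
# voltage/power variable in distributed OPF).
a = np.array([1.0, 3.0])
b = np.array([2.0, -1.0])
rho = 1.0

x = np.zeros(2)      # local copies of the coupling variable
z = 0.0              # consensus variable
u = np.zeros(2)      # scaled dual variables

for k in range(100):
    # local minimization of f_i(x_i) + rho/2*(x_i - z + u_i)^2 (closed form here)
    x = (a * b + rho * (z - u)) / (a + rho)
    # consensus update (averaging) and dual ascent
    z = np.mean(x + u)
    u = u + x - z

z_star = np.sum(a * b) / np.sum(a)   # closed-form optimum of the coupled problem
print("ADMM consensus value:", z, "exact:", z_star)
```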
In this talk, we present recent advances in the real-time OPF formulation and online algorithms regarding their convergence properties and error estimates.
Operators of district heating networks are facing numerous technical challenges in the course of the energy system transformation, which require innovations in network control. A realistic mapping of the energy transport through the distribution network is key to optimizing the use of operating resources. The underlying thermo-hydraulic PDE system and optimization problems constrained by this PDE system strongly depend on the demand resulting from the heat consumers' behavior as a boundary condition.
For predictive optimization, a realistic demand forecast model needs to be provided. To identify underlying patterns, we use a neural network, namely a transformer model for time series forecasting, and historical real monitoring data to generate temperature-dependent, time-resolved, characteristic demand profiles of heat consumers. Next, we model the inherent stochastic nature of demand by incorporating stochastic processes with time-dependent mean reversion levels, such as the Ornstein-Uhlenbeck process, into our previous results.
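A minimal simulation of such a demand model is sketched below: an Ornstein-Uhlenbeck process whose mean-reversion level follows a time-dependent (characteristic) demand profile, discretized with the Euler-Maruyama scheme; all parameters and the profile are illustrative.

```python
import numpy as np

# Ornstein-Uhlenbeck demand model with time-dependent mean-reversion level:
#   dD_t = kappa * (m(t) - D_t) dt + sigma dW_t,
# where m(t) is the characteristic (forecast) demand profile.
kappa, sigma = 2.0, 0.1
T, n_steps, n_paths = 24.0, 24 * 60, 200          # one day, 1-minute steps
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
m = 1.0 + 0.4 * np.sin(2.0 * np.pi * (t - 6.0) / 24.0)   # illustrative daily profile

rng = np.random.default_rng(0)
D = np.empty((n_paths, n_steps + 1))
D[:, 0] = m[0]
for k in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    D[:, k + 1] = D[:, k] + kappa * (m[k] - D[:, k]) * dt + sigma * dW

print("mean demand at noon:", D[:, n_steps // 2].mean(),
      "std:", D[:, n_steps // 2].std())
```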
The optimal control problem under consideration is to find the optimal input into the system such that not only the cost is minimized but in addition the stochastic demands are satisfied. We employ different optimization strategies and compare them for our application example of a district heating network.
Today, a significant portion of total final energy consumption can be attributed to heating systems in buildings. Reducing energy consumption and carbon dioxide emissions in this sector is, therefore, crucial to achieving the goals of climate action initiatives worldwide. A promising approach to addressing this objective involves the use of advanced predictive control systems, which have the potential to dynamically adapt to changing external conditions in real-time. However, its implementation is rather challenging due to the high complexity of current building energy systems. In this work, we consider a mixed-integer nonlinear model predictive control (MINMPC) strategy, which can directly tackle system nonlinearities, switching behavior and intricate restrictions. In this context a central point is the development of a proper model for the building energy system, which must carefully follow the system dynamics and at the same time be rather simple and suitable for derivative-based optimization in model predictive control. We aim to simplify this process by automating the model design and, therefore, reducing the need for expert knowledge at this step. We present a novel relax-and-round strategy for MINMPC and demonstrate its performance on a resistor-capacitor (RC) model of an office building at Hannover University of Applied Sciences and Arts, obtained within an automated framework. We show that previously achieved cost savings potential can be noticeably increased by the utilization of the building envelope as an additional energy storage.
In the course of a milling process, heat is generated because of the work done during the interaction(s) between the endmill (cutting tool) and the workpiece. This includes the work done in inducing plastic deformation within the workpiece, initiating fracture for chip production, and the work due to friction between the cutting edges of the endmill and the workpiece. The heat leads to a temperature distribution in the tool. Very high cutting temperatures are detrimental to both the tool and the workpiece. Excessive temperatures increase tool wear and lead to a diminished quality of the surface finish of the workpiece as a result of the development of built-up edges on the endmill’s cutting surfaces. Studies have shown that the cutting temperature is affected by endmill shape parameters such as the helix angle, rake angle, relief angle, and clearance angle. Thus, it is of research interest to investigate an optimization process that reduces the maximum cutting temperature of the milling process by using these endmill shape parameters as optimization variables. This work presents a novel approach for automated endmill shape design and optimisation that is completely based on Matlab coding. Through the use of smooth cubic splines and the initial values for the endmill geometric parameters, a shape profile for an endmill’s tooth is defined and copied over up to the number of cutting edges required by the initial endmill design. The profile is meshed and extruded through the helix angle using gibbonCode, a robust Matlab meshing toolbox. An Abaqus input file for a milling process simulation is scripted in Matlab using the nodes and elements from the designed solid endmill. The Abaqus finite element software is then called to run the simulation, and the required results for the endmill’s nodal cutting temperatures are extracted and processed. A Particle Swarm Optimisation algorithm is coupled to the automated endmill shape generation and milling process simulation design. It uses the maximum nodal cutting temperature as the optimization criterion to carry out the optimization process. The optimisation objective is to find the combination of the endmill’s geometric parameters that yields the lowest maximum cutting temperature. The results from this study are compared to the results of a previous study where, instead of the maximum cutting temperature, the maximum cutting force was minimized. It is expected that the automated design and optimisation process presented here will bring about design improvements that significantly delay tool wear by reducing the cutting temperature.
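To make the coupling between the swarm optimiser and the expensive simulation concrete, the following minimal particle swarm loop (written in Python here for brevity, whereas the described workflow is Matlab-based) minimises a placeholder objective that stands in for scripting and running the Abaqus milling simulation; bounds, parameter names and the surrogate are illustrative assumptions:

```python
import numpy as np

# Minimal particle swarm optimisation sketch. The objective is a placeholder
# standing in for "generate the endmill geometry, script and run the Abaqus
# milling simulation, and return the maximum nodal cutting temperature".
# Parameter names (helix, rake, relief, clearance angles) and bounds are
# illustrative assumptions.
rng = np.random.default_rng(0)
lb = np.array([20.0,  0.0,  5.0,  3.0])   # lower bounds [deg]
ub = np.array([45.0, 15.0, 15.0, 10.0])   # upper bounds [deg]

def max_cutting_temperature(x):
    # placeholder surrogate for the expensive simulation call
    return np.sum((x - np.array([35.0, 8.0, 10.0, 6.0])) ** 2)

n_particles, n_iter = 20, 50
dim = lb.size
pos = lb + rng.random((n_particles, dim)) * (ub - lb)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([max_cutting_temperature(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)      # keep particles inside the bounds
    val = np.array([max_cutting_temperature(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best geometry [deg]:", gbest, " objective:", pbest_val.min())
```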
The task in optical design is to find a set of parameters such that an optical system meets certain performance criteria and boundary conditions. Mathematically, this can be described as a least-squares optimization problem, i.e., the objective function is the norm of an objective vector. For many years, the Levenberg-Marquardt algorithm has been the optimization method of choice in the optical design community. However, the efficient evaluation of the Jacobian of the objective vector is a delicate task – also in many other disciplines of applied optimization. We present a workflow based on algorithmic differentiation (AD), where the Jacobian can be evaluated in the same order of computational complexity as the objective vector itself, i.e., linear in the number of parameters and linear in the number of objectives. Sparsity patterns in the specific optimization problem can be exploited to achieve this result in both AD forward mode and AD reverse mode.
Ray tracing as a part of geometrical optics describes the propagation of light through an optical system. Many rays are traced from the object to the image to obtain the data that are required for the evaluation of the objective vector of the optimization. While light travels through homogeneous media in straight lines, its trajectory in inhomogeneous media is curved and described by an ordinary differential equation. Applying AD to a ray tracing routine is in most cases straightforward. However, there are some challenges, e.g., the determination of the intersection of a ray with a surface, especially for inhomogeneous media. We discuss a framework in which this implicit problem can mathematically be described and differentiated.
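A standard way to phrase and differentiate the intersection problem, given here as a plausible reading of such a framework rather than its exact formulation, is via the implicit function theorem: for a ray $x(t;\theta)$ (a straight line or the solution of the ray ODE, with $\theta$ the optical design parameters) and a surface $g(x)=0$, the intersection parameter $t^{*}$ satisfies $F(t^{*},\theta) = g\bigl(x(t^{*};\theta)\bigr) = 0$, and, provided $\partial F/\partial t \neq 0$, one obtains $\mathrm{d}t^{*}/\mathrm{d}\theta = -\bigl(\partial F/\partial t\bigr)^{-1}\,\partial F/\partial\theta$, which is precisely the derivative information an AD tool must propagate through the implicit intersection step.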
In virtually all fields of application, the mathematical models are primarily based on differential equations. Hence, their numerical solution plays a fundamental role in numerical mathematics. This section mainly covers the construction and the behavior of numerical methods for differential equations, of both ordinary and partial differential type.
Geometrically unfitted finite element methods such as CutFEM, Finite Cell, XFEM or unfitted DG methods have been developed and applied successfully in the last decades to a variety of problems, ranging from scalar PDEs on stationary domains to systems of PDEs on moving domains and PDEs on level set surfaces. These approaches, combined with established tools of finite element methods, have made it possible to apply and analyse unfitted methods in many fields. In this talk, we deal with an elliptic interface problem for the time-harmonic quasi-magnetostatic Maxwell's equations. Here the material function $\mu$, the magnetic permeability, can jump at an interface. Such problems are considered in low-frequency applications. Standard unfitted Nitsche methods are not robust with respect to the parameter k, proportional to the wavenumber. For example, a standard Nitsche discretization for the curl-curl operator introduces terms that no longer vanish for gradient fields. In this talk, we will use a vectorial finite element discretization based on H(curl) conforming functions. We will tackle the problem of robustness by using a mixed formulation and a Nitsche formulation. Additionally, we apply a carefully tailored ghost penalization term.
Permanent magnets, such as neodymium-iron-boron (NdFeB), play a crucial role in enhancing the efficiency of power conversion devices, including wind turbines, sensors, and electric motors, cf. [1]. Due to their enormous potential to address current technological and societal challenges, such as reducing CO2 emissions, they are the subject of intensive research. The primary goals include improving their performance, replacing critical materials, and reducing the energy required for their production. Numerical simulations, particularly the finite element method (FEM), provide a powerful tool to accelerate the advancement of these materials. However, accurately simulating magnetic materials requires some care, especially when discretizing the magnetic field H. In this work, different formulations for the description of the magnetic field are introduced and compared with each other. The focus remains on the application of Ampère's law using a vector potential formulation. To ensure the uniqueness of the magnetic vector potential, a gauge condition is applied, i.e., the Coulomb gauge, cf. [2]. Since the magnetic field involves the function space H(curl), the tangential components must be continuous, while the normal components are generally discontinuous. This requirement cannot be met by conventional node-based Lagrange elements; instead, edge-based or Nédélec elements are typically used, cf. [3]. Both interpolations are presented and compared with each other.
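For orientation, the standard magnetostatic vector potential formulation alluded to above reads $\mathbf{B} = \nabla \times \mathbf{A}$ with $\nabla \times (\nu \, \nabla \times \mathbf{A}) = \mathbf{J}$, where $\nu = 1/\mu$ denotes the reluctivity, supplemented by the Coulomb gauge $\nabla \cdot \mathbf{A} = 0$ to render $\mathbf{A}$ unique; the precise form employed in this work may differ in its details.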
References:
[1] O. Gutfleisch, M.A. Willard, E. Brück, C.H. Chen, S.G. Sankar and J. Ping Liu. Magnetic Materials and Devices for the 21st Century: Stronger, Lighter, and More Energy Efficient. Advanced Materials, 23: 821–842, 2011.
[2] E. Creusé, P. Dular and S. Nicaise. About the gauge conditions arising in Finite Element magnetostatic problems. Computers and Mathematics with Applications, 77:1563–1582, 2019.
[3] J.C. Nédélec. A new family of mixed finite elements in R3. Numerische Mathematik, 50(1):57–81, 1986.
The boundary element method (BEM) is one of the very interesting boundary-type approximation techniques that have been successfully applied to solve numerous physical problems. It solves a given partial differential equation (PDE) using only a boundary discretization, which significantly alleviates the meshing effort, but also limits the possible PDEs to be solved to linear ones. Furthermore, the BEM in its classical form cannot accurately capture the solution in the presence of edges and corners, and this problem is addressed in this work. The BEM requires a uniquely defined normal at each point of the computational domain and, for this reason, a systematic approach is used to investigate this issue. The boundary integral equations (BIEs) for the potential problem are studied in terms of singularity orders – both singular and hypersingular BIEs are defined and regularization techniques based on [1] are considered. Galerkin and collocation methods for solving the BIEs are introduced and studied, followed by the investigation of special quadrature rules for singular integration based on the Duffy transformation [2]. By combining singular and hypersingular formulations, a special discontinuous discretization technique is introduced for both solution methods for solving Dirichlet, Neumann and mixed problems. Based on these developments, a simple potential problem is solved on a cube geometry using both Lagrange and isogeometric discretizations. The resulting combination of the proposed numerical methods allows the definition of a special patch test for the BEM, similar to that proposed for the well-known finite element method [3]. While analytical solutions of the BIEs do not, in principle, exist in closed form due to the singularities in their kernels, it is possible to evaluate such integrals with almost machine-precision accuracy, given that the BEM is only as accurate as its singular integration scheme.
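For reference, the singular boundary integral equation for the potential (Laplace) problem takes the standard form $c(\mathbf{x})\,u(\mathbf{x}) + \int_{\Gamma} q^{*}(\mathbf{x},\mathbf{y})\,u(\mathbf{y})\,\mathrm{d}\Gamma_{\mathbf{y}} = \int_{\Gamma} u^{*}(\mathbf{x},\mathbf{y})\,q(\mathbf{y})\,\mathrm{d}\Gamma_{\mathbf{y}}$, with the fundamental solution $u^{*} = 1/(4\pi r)$ in 3D and its normal derivative $q^{*}$; the hypersingular equation follows by differentiating with respect to the normal at $\mathbf{x}$. The free term $c(\mathbf{x})$ equals $1/2$ on smooth parts of the boundary but takes other values at edges and corners, which is exactly where the requirement of a uniquely defined normal becomes an issue.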
REFERENCES
[1] Liu, Y., Rudolphi, T. (1991). ”Some identities for fundamental solutions and their applications to weakly-singular boundary element formulations”, Eng. Anal. Boundary Elem., 8(6), 301–311.
[2] Duffy, M. G. (1982). ”Quadrature over a pyramid or cube of integrands with a singularity at a vertex.”, SIAM J. Numer. Anal., 19(6), 1260–1262.
[3] B. M. Irons and A. Razzaque, ”Experience with the patch test for convergence of finite elements”, in A. K. Aziz (ed.), The Mathematical Foundations of the Finite Element Method with Applications to Partial Differential Equations, Academic Press, New York, 1972.
Mesh-tying is a common computational approach for solving transport equations on two disjoint subdomains whose associated grids do not conform at the common interface. In the present contribution, we propose a finite volume adaptation of this general concept that is optimized for block-structured projection-based flow solvers. A characteristic trait of the method is that the interfacial mass fluxes and tractions are considered as unknown Lagrange multipliers whose values are determined such that the flow field remains continuous across the patched interface. With a view towards computational efficiency and implementational ease, the interfacial field of multipliers is spatially discretized by collocation at the face centers on one side of the interface. This choice is akin to the node-to-segment approach in computational contact mechanics and leads, in our formulation, to a weak satisfaction of the interfacial mass and momentum balances while ensuring continuity of the flow field at the collocation points. With the aid of an augmentation scheme, the multipliers are eliminated from the discrete system. By consequence, the system matrix's size is preserved and only a few additional non-zero entries are introduced. In a series of test cases ranging from the two-dimensional transport of a scalar in a disk-shaped domain to the Taylor-Couette flow and a rotor-driven fluid flow, we demonstrate that the proposed method maintains the optimal spatial accuracy order of the underlying flow solver on azimuthally equispaced but possibly non-conforming polar grids. The two-dimensional formulation extends immediately to three spatial dimensions if the grids are conforming but possibly non-uniform in the axial direction. In this regard, we present the application of the finite volume mesh-tying method to a three-dimensional baffled stirred tank.
Many natural and industrial phenomena exhibit nonlocal behaviour in temporal or spatial dimensions. The former is responsible for processes whose whole history influences the present state. The latter, on the other hand, indicates that faraway regions of the domain may have some impact on local points. This is useful in describing media of high heterogeneity.
Partial differential equations that are nonlocal involve one or several integral operators that encode this behaviour. For example, Riemann-Liouville or Caputo derivatives are used in the temporal direction, while the fractional Laplacian or its relatives describe spatial nonlocality. When it comes to numerical methods, the discretization of these operators requires more care than for their classical counterparts. Moreover, it is usually much more expensive, in both CPU time and memory, to conduct simulations involving nonlocal equations.
In this talk we will present several approaches to discretize nonlocal and nonlinear parabolic equations based on finite differences, the Galerkin method in space and the L1 scheme in time. We will also show rigorous and optimal bounds for these numerical methods.
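As a small illustration of the temporal ingredient (the scheme itself is classical; grid and parameters below are illustrative), the L1 approximation of the Caputo derivative of order $\alpha\in(0,1)$ on a uniform grid can be sketched as follows:

```python
import numpy as np
from math import gamma

# Sketch of the classical L1 approximation of the Caputo derivative of order
# alpha in (0,1) on a uniform grid, checked against the exact derivative of
# u(t) = t, namely D^alpha t = t**(1-alpha) / Gamma(2-alpha).
alpha, tau, N = 0.6, 1e-3, 1000
t = tau * np.arange(N + 1)
u = t.copy()                                    # piecewise-linear test function

def caputo_L1(u, tau, alpha, n):
    # D^alpha u(t_n) ~ tau^{-alpha}/Gamma(2-alpha) * sum_k b_k (u_{n-k} - u_{n-k-1}),
    # with weights b_k = (k+1)^{1-alpha} - k^{1-alpha}
    k = np.arange(n)
    b = (k + 1.0) ** (1 - alpha) - k ** (1 - alpha)
    du = u[n - k] - u[n - k - 1]
    return (b @ du) / (tau ** alpha * gamma(2 - alpha))

approx = caputo_L1(u, tau, alpha, N)
exact = t[N] ** (1 - alpha) / gamma(2 - alpha)
# for this linear test function the L1 scheme is exact up to round-off
print(approx, exact)
```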
Dynamics and control is an interdisciplinary section which in particular addresses mathematical systems theory and control engineering. The contributions to this section are also concerned with the mathematical understanding and design of controllers which appear in actual applications.
The paper is devoted to the influence of ambient temperature on the dynamic characteristics of viscoelastic layered plates, i.e., structures applied in many fields of engineering under various physical conditions. The damping properties of common viscoelastic materials depend on thermal conditions, as is frequently reported [1]; thus, the problem has significant practical importance.
The Zener fractional material model with separation of deviatoric and volumetric deformation is chosen to describe viscoelasticity of the plate layers. Plate kinematics is described using the refined zig-zag theory [2].
It is assumed that the viscoelastic material is rheologically simple; thus, the temperature influence obeys the time-temperature superposition principle. The vertical shift of the characteristic curves is neglected and the horizontal shift factor is computed. In the analyses, the results obtained using the most frequently used Williams-Landel-Ferry formula [1] will be compared and discussed against those from less frequently applied methods, including the Arrhenius, Kaelble and Drake-Soovere formulae [3,4]. The two latter approaches can also be used in thermal conditions corresponding to the glassy phase, as they remain valid on both sides of the deflection point of the characteristic curve.
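For reference, the horizontal shift factor in the Williams-Landel-Ferry form reads $\log_{10} a_T = -C_1 (T - T_{\mathrm{ref}})/(C_2 + T - T_{\mathrm{ref}})$, while the Arrhenius form is $\ln a_T = (E_a/R)\,(1/T - 1/T_{\mathrm{ref}})$; the constants $C_1$, $C_2$ and the activation energy $E_a$ are material-dependent and are among the quantities compared across the formulae discussed here.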
The principle of virtual work and the Laplace transform are used to formulate the problem of the free vibrations of the plate. The resulting nonlinear eigenvalue problem is solved iteratively using the homotopy method. The results, in the form of complex eigenvalues and eigenvectors, allow one to determine the natural frequencies, the damping ratios, and the vibration modes.
The results of several analyses of layered plates with various viscoelastic materials, with material data available in the literature [3], will be presented at the conference. In particular, the analyses will focus on the comparison of the dynamic characteristics obtained with the various methods for shift factor determination.
Acknowledgments
The research presented in this paper was funded by the university grant 0411/SBAD/0010.
REFERENCES
[1] H.F. Brinson, L.C. Brinson, Polymer Engineering Science and Viscoelasticity: An Introduction, Springer, New York (2008)
[2] A. Tessler, M. Di Sciuva, M. Gherlone, A consistent refinement of first-order shear deformation theory for laminated composite and sandwich plates using improved zigzag kinematics, J. Mech. Mater. Struct. 5(2), 341–367 (2010)
[3] A. Arikoglu, A new fractional derivative model for linearly viscoelastic materials and parameter identification via genetic algorithms, Rheol. Acta 53, 219–233 (2014)
[4] G.M. Rowe, M.J. Sharrock, Alternate shift factor relationship for describing temperature dependency of viscoelastic behaviour of asphalt materials, Transport Res. Rec. 2207, 125–135 (2011)
The research is devoted to the rheological properties of viscoelastic materials that are used to reduce excessive vibrations in passive damping systems in tall buildings loaded by wind or in buildings located in seismic areas. It is extremely important, but also difficult, to correctly describe the dynamic behavior of a viscoelastic material, since its rheological characteristics depend on the ambient temperature, on the frequency of the forcing load and on the amplitude of vibration. Many papers in the literature are devoted to the study of advanced rheological models that attempt to account for all these factors.
The behavior of viscoelastic material over a wide range of temperatures and frequencies is well described by rheological models using non-integer order derivatives, e.g., the four-parameter fractional Zener model [1]. In this work, a five-parameter model is proposed, obtained by extending the aforementioned Zener model, which takes into account the effect of the vibration amplitude on the dynamic behavior of the viscoelastic material. The extension consists of adding a nonlinearity in the part describing elasticity, so the new model can be called a nonlinear fractional Zener model.
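For orientation, the underlying four-parameter fractional Zener model can be written in its standard form $\sigma(t) + \tau^{\alpha} D^{\alpha}\sigma(t) = E_{0}\,\varepsilon(t) + E_{\infty}\,\tau^{\alpha} D^{\alpha}\varepsilon(t)$, with complex modulus $E^{*}(\mathrm{i}\omega) = \bigl(E_{0} + E_{\infty}(\mathrm{i}\omega\tau)^{\alpha}\bigr)/\bigl(1 + (\mathrm{i}\omega\tau)^{\alpha}\bigr)$; the fifth parameter of the proposed model enters through the nonlinear elastic part and is therefore not shown here.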
The use of the five-parameter model involves significant difficulties when describing an actual rheological material, since its parameters need to be identified, for example, from laboratory test results [2]. To overcome these difficulties, the harmonic balance method [3] was used, and a complex version of this method was chosen.
The effectiveness of the method was tested on an artificially generated experimental data set. The method was then used to identify model parameters to properly describe the selected viscoelastic material designated for vibration reduction in structural systems, depending on the frequency and amplitude of vibration.
[1] Pawlak, Z.M.; Denisiewicz, A. Identification of the fractional Zener model parameters for a viscoelastic material over a wide range of frequencies and temperatures. Materials (2021), 14, 7024. https://doi.org/10.3390/ma14227024
[2] Javidan, M.M., Kim, J. Experimental and Numerical Sensitivity Assessment of Viscoelasticity for Polymer Composite Materials. Scientific Reports (2020), 10, 675. https://doi.org/10.1038/s41598-020-57552-3
[3] Lewandowski, R. Nonlinear steady state vibrations of beams made of the fractional Zener material using an exponential version of the harmonic balance method. Meccanica (2022), 57, 2337–2354. https://doi.org/10.1007/s11012-022-01576-8
The repair and strengthening of reinforced concrete (RC) frames is of great importance in ensuring the structural safety and serviceability of buildings after seismic events. Beam-column joints are primary mechanisms for dissipating seismic energy within RC frame structures. During ground motions, these joints accumulate significant energy, which may result in plastic deformations [1]. However, with appropriate retrofitting techniques, many damaged structures can be restored to service. Most conventional retrofitting methods focus on improving the strength and stiffness of RC joints [2], but this approach often overlooks the importance of enhancing energy dissipation capacity, which is a critical factor for long-term structural performance under aftershock sequences.
This study introduces a retrofitting technique that incorporates viscoelastic (VE) materials stiffened with steel plates in beam-to-column joints in RC frames. Unlike traditional methods, this approach not only improves the strength and stiffness of damaged joints, but also enhances their energy dissipation capacity. This additional capacity is particularly important for addressing the risks associated with repeated loading cycles of already weakened structures [3].
The research methodology included numerical simulations conducted using finite element modeling (FEM) in Abaqus. The analysis evaluated the dynamic performance of RC beam-column joints retrofitted with VE materials and steel plates. A comparative study was carried out using the reference models (undamaged and damaged joints) in relation to the retrofitted joints proposed in this study.
It is worth noting that the proposed retrofitting technique not only addresses the strength and stiffness improvement of RC structures, but also enhances the long-term resilience of RC beam-column joints. The findings highlight that by providing greater ductility and damping properties, the proposed method ensures a more resilient seismic response. The described mechanism ensures that the retrofitted joints are better equipped to handle both primary seismic shocks and subsequent aftershocks. A distinct matter and area for further research is an effective way to bond the VE material layer to the concrete surface to ensure that the damping properties are fully utilized.
[1] Yurdakul, Ö., & Avşar, Ö. (2015). Strengthening of substandard reinforced concrete beam-column joints by external post-tension rods. Engineering Structures, 107, 9–22.
[2] Singh, V., Bansal, P.P., Kumar, M., & Kaushik, S.K. (2015). Retrofitting of reinforced concrete beam-column joints using bonded laminates.
[3] Huang, X., Xu, Z., & Xiao, H. (2023). Experimental study on seismic behavior of damaged Beam-Column joints retrofitted by viscoelastic Steel-Enveloped elements. Buildings, 13(3), 702.
Efficient mobility and sustainable transportation are imperative for driving economic growth and preserving the environment in today's globalized society. However, the transportation sector, responsible for a substantial portion of energy consumption and CO2 emissions, presents a formidable challenge for achieving sustainable and ecological transformations, especially in countries like Germany. Rail transport, a crucial mode of transportation, significantly contributes to electricity consumption.
Despite Germany's progress in enhancing energy efficiency, there remains untapped potential for reducing energy consumption and CO2 emissions. The EKSSE research project is dedicated to improving the energy efficiency of Nuremberg's and Hamburg's subway systems. This presentation aims to delineate the project's objectives and introduce our proposed approach for modeling the subway system as a port-Hamiltonian system, offering a promising avenue for optimizing energy usage and fostering sustainability in urban transit networks.
The dynamics of a system are not always fully described by a continuous evolution in terms of differential equations, but also by discrete jumps of the state. Jump times and states after the jumps may be stochastic with the probability depending on the actual state of the system. This can be described in the mathematical framework of stochastic hybrid systems. As an extension to classical stochastic hybrid systems described in [1], we introduce stochastic hybrid systems of variable dimension (SHSVD). For these systems, the dimension of the continuous state may switch at jumps.
After an introduction to the concept of a solution to an SHSVD, we will discuss how different notions of stability can be carried over to SHSVD. The focus here will be on the stability of the union of all origins of the possible dimensions. As illustrating examples of the concept, we will present an exponential decay in variable dimension and an oscillator which is held and released with a probability.
We will also present a numerical algorithm to simulate SHSVD which is a combination of a numerical method for solving differential equations and a simulation of an inhomogeneous Poisson process. We will address the situation in which no a-priori bound on the probability density function of a jump occurring is known. Furthermore, we present results of numerical simulations in which the long-term behavior of an SHSVD differs from the long-term behavior of each fixed dimension subsystem.
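A generic way to simulate such jump dynamics, sketched below with illustrative drift, jump rate and jump map and without the adaptive treatment of an unknown rate bound discussed in the talk, combines explicit Euler integration of the flow with Lewis-Shedler-type thinning of the state-dependent jump intensity:

```python
import numpy as np

# Generic sketch (not the talk's algorithm): Euler integration of the
# continuous flow combined with thinning of an inhomogeneous Poisson process
# whose intensity depends on the current state. Drift, rate and jump map are
# illustrative; the rate bound is assumed valid over each candidate interval.
rng = np.random.default_rng(1)

def drift(x):        return -x                       # continuous dynamics dx/dt
def jump_rate(x):    return 0.5 + x ** 2             # state-dependent intensity
def jump_map(x):     return 0.5 * x + rng.normal()   # post-jump state

x, t, T, dt = 1.0, 0.0, 10.0, 1e-3
traj = [(t, x)]
while t < T:
    lam_bar = 1.5 * jump_rate(x)          # local (assumed) upper bound on the rate
    t_cand = t + rng.exponential(1.0 / lam_bar)   # candidate jump time
    while t < min(t_cand, T):             # integrate the flow up to the candidate
        h = min(dt, min(t_cand, T) - t)
        x += h * drift(x)
        t += h
    if t_cand <= T and rng.random() < jump_rate(x) / lam_bar:
        x = jump_map(x)                   # accepted jump (thinning step)
    traj.append((t, x))

print("final time and state:", traj[-1])
```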
References:
[1] A. Teel, A. Subbaraman, A. Sferlazza, “Stability analysis for stochastic hybrid systems: A survey,” Automatica, vol. 50, no. 6, pp. 2435–2456, 2014. https://doi.org/10.1016/j.automatica.2014.08.006
In deep learning, one often operates in a (highly) overparametrized regime, meaning that there are significantly more trainable parameters than available training data. Nevertheless, experiments show that the generalization error after training with (stochastic) gradient descent is still small, whereas one would expect overfitting, i.e., a small training error together with a relatively large test error.
This suggests the existence of an implicit bias towards learning networks that generalize well, in settings where infinitely many networks can achieve zero training loss.
To investigate this phenomenon, we analyze the training dynamics of deep diagonal linear networks. Alternatively, this can be interpreted from the perspective of recovering sparse signals from linear measurements.
We propose a method to show convergence of gradient descent and to fully characterize its limit, using techniques inspired by Mirror Gradient Descent and a Łojasiewicz-type inequality.
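A generic numerical companion to this setting (a standard sparse-recovery setup, not necessarily the speakers' exact configuration) is gradient descent on a depth-two diagonal linear network $w = u \odot u - v \odot v$ fitted to underdetermined linear measurements; with small balanced initialization the recovered $w$ tends to be sparse:

```python
import numpy as np

# Sketch: gradient descent on a depth-2 diagonal linear network w = u*u - v*v
# fitted to underdetermined measurements y = X w_true. Small balanced
# initialization illustrates the implicit bias towards sparse interpolators.
# Setup and parameters are generic assumptions for illustration.
rng = np.random.default_rng(0)
n, d, s = 30, 100, 3                          # measurements, dimension, sparsity
X = rng.normal(size=(n, d)) / np.sqrt(n)
w_true = np.zeros(d); w_true[:s] = [1.0, -2.0, 1.5]
y = X @ w_true

alpha, lr, steps = 1e-3, 0.01, 50000          # small init drives the sparsity bias
u = alpha * np.ones(d)
v = alpha * np.ones(d)
for _ in range(steps):
    w = u * u - v * v
    g = X.T @ (X @ w - y)                     # gradient of the loss w.r.t. w
    u -= lr * 2.0 * u * g                     # chain rule through w = u*u - v*v
    v -= lr * (-2.0) * v * g

w = u * u - v * v
print("train loss:", 0.5 * np.sum((X @ w - y) ** 2))   # should be close to zero
print("leading recovered entries:", np.round(w[:5], 3))
```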
Many applications in PDE-constrained optimization and data science require minimizing the sum of smooth and nonsmooth functions. For example, training neural networks may require minimizing a mean squared error plus an l₁ regularization to induce sparsity in the weights. In this talk, we introduce a multilevel proximal trust-region method to minimize the sum of a nonconvex smooth function and a convex nonsmooth function. Exploiting ideas from the multilevel literature allows us to reduce the cost of the step computation, which is a major bottleneck in single-level procedures. Our work unifies the theory behind proximal trust-region methods and certain multilevel recursive strategies. We prove global convergence of our method in ℝⁿ and provide an efficient nonsmooth subproblem solver. We show the efficiency and robustness of our algorithm by means of numerical examples from training a neural network to solve PDEs.
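For the nonsmooth ingredient, the basic proximal machinery such methods build on can be sketched as follows; this is a plain proximal gradient step with the closed-form $\ell_1$ proximal operator (soft-thresholding), not the multilevel trust-region algorithm of the talk:

```python
import numpy as np

# Background sketch, not the talk's algorithm: proximal gradient iterations for
#   min_x  f(x) + lam * ||x||_1,   with f smooth,
# using the closed-form l1 proximal operator (soft-thresholding).
def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad_step(x, grad_f, lam, step):
    return prox_l1(x - step * grad_f(x), step * lam)

# toy smooth part f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 50)), rng.normal(size=20)
grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L the Lipschitz constant

x = np.zeros(50)
for _ in range(500):
    x = prox_grad_step(x, grad_f, lam=0.1, step=step)
print("nonzeros in the solution:", np.count_nonzero(x))
```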
Transplant registries are of paramount importance to the advancement of transplantation medicine, as they facilitate the centralised and standardised collection of medically relevant data on organ donors, recipients, and living donors. By systematically consolidating this information, registries provide a robust data foundation that enhances quality and transparency throughout the transplantation process and supports evidence-based decision-making. It is only through centralised data collection that critical insights can be gained to refine allocation criteria, improve donor organ assessment, and enhance long-term outcomes for transplant recipients. A well-structured registry is therefore crucial for continuously improving patient care and fostering new scientific approaches that ultimately increase patient safety and treatment success. The German transplant registry was established to serve this purpose, integrating data from key organisations such as the German Organ Transplantation Foundation (DSO), the Dutch Eurotransplant International Foundation (ET), and the Institute for Quality and Transparency in Healthcare (IQTIG). This comprehensive integration of data has resulted in a complex structure involving numerous variables and tables.
While this extensive data set represents a valuable resource, several opportunities have been identified to improve its usability for both clinical practice and research. These include optimising the handling of missing data to streamline the database, managing redundancies arising from overlapping donor and recipient records, and addressing the complexity of the data structure by developing user-friendly solutions. The goal of this study is to unlock the full potential of this rich data set by applying advanced in silico methods for data optimisation. Key steps will involve restructuring, standardising, and eliminating redundancies in accordance with FAIR principles to enhance data quality. By improving accessibility, the objective is to broaden the user base to include scientists and clinical staff – such as physicians, surgeons, and hospital personnel – and facilitate the routine use of transplant data. The development of a low-threshold application is envisaged, with the aim of making this invaluable information readily available to medical professionals, thereby supporting evidence-based decision-making and driving innovation in transplantation medicine.
Graph Neural Networks (GNNs) are powerful tools for addressing learning problems on graph structures, with a wide range of applications in molecular biology and social networks. Nonetheless, the theoretical principles underlying their empirical performance are not well understood. This work provides a convergence analysis of gradient dynamics in linear GNN training. We show that the gradient flow training of a linear GNN with mean squared loss converges to the global minimum at an exponential rate, with the convergence rate explicitly depending on the initial weights and the graph shift operator (graph aggregation matrix). Moreover, for balanced initialization, gradient flow training achieves the global minimum of the mean squared loss while minimizing the total weights of the network. In addition to gradient flow, we analyze the convergence of linear GNNs under gradient descent training, interpreted as a discrete approximation of gradient flow with a fixed step size. Finally, we validate our findings on synthetic datasets from well-known graph models and real-world datasets.
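A minimal numerical companion to this analysis (a generic setup, not the exact experiments of the talk) trains a two-layer linear GNN $\hat{Y} = S\,(S X W_1)\,W_2$ on a mean squared loss with plain gradient descent and analytic gradients:

```python
import numpy as np

# Sketch (assumed setup): two-layer linear GNN  Y_hat = S (S X W1) W2 trained
# by gradient descent on the squared loss. S is a normalized graph shift
# operator, X node features, Y node targets realizable by the model.
rng = np.random.default_rng(0)
n, d, h, c = 20, 5, 8, 2                        # nodes, features, hidden, outputs

A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                  # symmetric random graph
deg = A.sum(1) + 1.0
S = (A + np.eye(n)) / np.sqrt(np.outer(deg, deg))   # normalized shift operator

X = rng.normal(size=(n, d))
W_star = 0.3 * rng.normal(size=(d, c))
Y = S @ (S @ X) @ W_star                        # planted, realizable target

W1 = 0.1 * rng.normal(size=(d, h))              # small (near-balanced) init
W2 = 0.1 * rng.normal(size=(h, c))
lr, steps = 0.005, 30000
for _ in range(steps):
    H1 = S @ X @ W1
    R = S @ H1 @ W2 - Y                         # residual of the squared loss
    gW2 = (S @ H1).T @ R                        # analytic gradients
    gW1 = (S @ X).T @ (S.T @ R @ W2.T)
    W1 -= lr * gW1
    W2 -= lr * gW2

loss = 0.5 * np.sum((S @ (S @ X @ W1) @ W2 - Y) ** 2)
print("final loss (should approach zero):", loss)
```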
This paper presents theoretical and empirical results demonstrating the effectiveness of "Tokenization" in capturing the physical operator of an ergodic flow, a technique used in multiple long video machine learning architectures, in particular in NVIDIA's LongVideoGAN. Complementing the theory, we present novel experimental results on canonical problems, ranging from the wave and heat equations to the Kuramoto-Sivashinsky (KS) equation, using both LongVideoGAN and autoregressive models. Finally, we show LongVideoGAN is able to learn a computational fluid dynamics (CFD) dataset featuring a Kármán vortex street, with excellent temporal correlation and generalisation results.
In the presentation we will describe the mathematical model of a damped oscillator whose vibrations are forced by a random series of impulses. Under appropriate assumptions on the random variables in the model, the vibrations of the system become a process which, in the limit as time tends to infinity, is stationary and ergodic. For impulse values that are independent discrete random variables, and for time intervals between impulses that are independent continuous random variables whose probability density function takes the form of an exponential distribution, the probabilities are computed from a system of equations. In the presentation we indicate the errors in computing the distributions of impulses, which arise from the fact that we apply the model to the analysis of vibrations of the system over several minutes. In this case the mathematical model of an oscillator is understood, in accord with the terminology used for defining statistical models, as a formalized description of a certain theory or of causal situations that are assumed to generate the observed data. Simulation studies aimed at detecting the distributions of impulses involve an analysis of data recorded in the form of spatio-temporal matrices. Analyzing hundreds of recorded courses, we check when it is possible to apply the mathematical model and when we have to use artificial intelligence algorithms to solve an inverse problem.
The course Fluid Mechanics 1 at TU Dortmund, offered in the third semester of the Bachelor’s programs in Biochemical and Chemical Engineering, covers key topics such as hydrostatics, Bernoulli’s principle, and integral momentum theory. The Chemical Engineering students additionally have to complete Fluid Mechanics 2, covering boundary layer theory, the Navier-Stokes equations, similarity mechanics, and turbulence. Approximately 150 students participate annually in the course Fluid Mechanics 1, which is based on Zierep’s textbook and supplemented by lectures, problem-solving sessions, student-led tutorials, and laboratory exercises involving rheometers and wind-tunnel experiments. A significant challenge lies in the fact that many students are not yet accustomed to the demands of university-level learning. Fluid mechanics builds on various foundational concepts, but essential principles, such as Newton’s friction model in the integral momentum approach, are often overlooked by students. The exams are designed following the SOLO taxonomy, emphasizing deep learning by addressing relational understanding and extended abstract understanding. This includes both short conceptual questions and tasks requiring independent judgment in selecting appropriate tools for specific problems. Despite this structured approach, the exams are challenging, with low pass rates. To address these issues, a structured learning platform was introduced, aiming to promote collaborative and organized learning, provide rapid feedback, and ensure mastery of fundamental concepts. Traditional exercise scripts were replaced by tasks exclusively accessible through the platform, encouraging engagement and active participation. The effectiveness and appeal of this approach are being assessed. Effectiveness is measured by analyzing changes in average pass rates and exam grades over several years before and after implementation. The appeal is evaluated through student ratings of the exam preparation materials and qualitative feedback from course evaluations. The success of this concept has led to its extension to other courses, including Introduction to Fluid Mechanics and Heat Transport for international Process Systems Engineering students and Technical Mechanics within the Faculty of Mechanical Engineering. The structured learning platform demonstrates a promising approach to improving student outcomes and satisfaction in challenging engineering courses.
The growing discrepancy between the rapidly changing requirements for competencies in engineering and the rigid structures at universities is causing ever-increasing frustration, longer study times, high dropout rates, and, last but not least, dwindling enrollment figures. The challenges of overcoming these discrepancies seem insurmountable. In this lecture, we would like to discuss measures, methods and concepts that are taking the first essential steps towards solving this problem.
The ability to obtain knowledge or, more generally, information from external sources and to use it effectively and efficiently for one’s own work has developed rapidly in recent years. In particular, the use of AI-based tools is revolutionizing modern professional and everyday life. These developments, which will have lasting effects, bring with them completely new demands for urgently required competencies. These are also referred to as “future skills” and include, on the one hand, the classic skills such as “problem-solving abilities”, “adaptability” and “perseverance”. It should be noted in this context that in the vast majority of cases, students are unable to achieve even these classic skills in the current higher education system. On the other hand, these classic skills are extended by “basic digital skills” and “technological skills”. The “basic digital skills” include, for example, “digital literacy”, “collaboration” and “digital learning”. Together with the “classic skills”, these form the set of urgently required skills for all employees – across the board, so to speak. On top of that, there are the challenges “at the top” and the associated “technological skills” such as “complex data analysis”, “smart hardware development” and “user experience design”.
In this contribution, we would like to explain elementary concepts and good practice examples that particularly promote the design of motivational teaching – the essential basis for acquiring contemporary competencies and, among other things, reducing the average duration of studies. The results and conclusions of the GAMM-funded workshop “New Ways in Teaching: From Teaching to Learning Events”, which took place at the TU Dortmund on June 10-11, 2024, also serve as a basis for this. The in-depth discussions revealed that certain didactic methods, which we will explain in more detail in our presentation, have a very positive influence on aspects such as student motivation, student autonomy, and the skills and competencies that can be achieved.
The sandwich principle [1] was applied to a basic mechanics class in the sense of alternating individual and collective learning phases. This principle was implemented using two smartphone experiments [2] during the lecture: one experiment for static friction and one for the acceleration of tubes with different wall thicknesses. With the aim of a more learner-centered lecture, the individual learning phases started with an application of the basic mechanics principles to the experimental problem and were followed by a Think-Pair-Share period. During the subsequent collective phase, student groups carried out experiments with their own smartphones. The results were collected and written into the lecture slides, closing with a discussion of the theory and experimental features. The intended taxonomy levels have been applied and analyzed.
Measures are presented from surveys, comparing the test lecture (n = 139 students participated in the survey) with the previous year’s results (n = 234). The lectures of both years shared the same lecturer, content, slides and examination type. A slight improvement could be observed from the overall ratings, most notably with respect to the use of media, the motivation to deal with the subject of the lecture, motivation for the content, a stimulating working atmosphere, and the qualification and learning success.
From the point of view of student activation and participation, the static friction experiment performed better in terms of the number of activated students. Its quick and easy implementation (sticking vs. sliding smartphone at different desk angles) yielded high accessibility. The dynamics experiment with accelerating tubes required a longer experimental time on-site or at home between lectures. This is assumed to be a reason for the lower experimental participation, whereas the students still actively participated during the final collective discussion. Future implementations are planned with a lower threshold for experimental on-site participation.
Literature:
[1] A Bock, B Idzko-Siekermann, M Lemos, K Kniha, S C Möhlhenrich, F Peters, F Hölzle and A Modabber: “The Sandwich principle: assessing the didactic effect in lectures on ‘cleft lips and palates’”. BMC Medical Education 20:310 (2020). DOI 10.1186/s12909-020-02209-y
[2] S Staacks, D Dorsel, S Hütz, F Stallmach, T Splith, H Heinke and C Stampfer: “Collaborative smartphone experiments for large audiences with phyphox”. European Journal of Physics 43:055702 (2022). DOI 10.1088/1361-6404/ac7830
Mathematical modelling plays a role not only in mathematics itself but especially in application fields such as physics, computer science, and engineering. CAMMP (Computational And Mathematical Modelling Program, www.cammp.online) is a project which aims to show secondary students and teachers the relevance of mathematical modelling for their daily life. To this end, the project conducts different events in which students and teachers actively engage in solving real, relevant problems through mathematical modelling and the use of computers. The problems originate from everyday life, industry, or research. CAMMP's educational research projects are concerned with the development and research of teaching and learning materials that enable such active engagement. In particular, the materials aim to make (often complex) mathematical concepts required for modelling real-world problems accessible to learners and to provide them with an authentic insight into the relevance of mathematics for our society.
The contribution at the 95th Annual Meeting of the GAMM will present the range of CAMMP activities, for example, mathematical modeling weeks and days, as well as the design principles (Prediger et al., 2012) that guide the design of the activities. This includes, among other things, the selection of problems, corresponding data sets, and technological tools. In addition, we will illustrate selected materials, for example on the prediction of life expectancy based on artificial neural networks (Kindler et al., 2023) or predictive text systems based on n-gram models (Hofmann & Frank, 2022).
Literature:
Hofmann, S. & Frank, M. (2022): Teaching data science in school: Digital learning material on predictive text systems. Twelfth Congress of the European Society for Research in Mathematics Education (CERME12), Feb 2022, Bozen-Bolzano, Italy.
Kindler, S., Schönbrodt, S., & Frank, M. (2023). From school mathematics to artificial neural networks: Developing a mathematical model to predict life expectancy. In P. Drijvers, H. Palmér, C. Csapodi, K. Gosztonyi, & E. Kónya (Eds.), Proceedings of the Thirteenth Congress of the European Society for Research in Mathematics Education (CERME13). Alfréd Rényi Institute of Mathematics and ERME.
Prediger, S., Link, M., Hinz, R., Hußmann, S., & Ralle, B. (2012). Lehr-Lernprozesse initiieren und erforschen: Fachdidaktische Entwicklungsforschung im Dortmunder Modell [Initiating teaching-learning processes and research: Subject didactic development research in the Dortmund model]. MNU, 65(8), 452–457.
Large interacting particle systems (LIPS) are not only an important modeling tool in the life and social sciences, but also the data sciences. In the former case, the particles may correspond to cells or pedestrians interacting with their respective environment and each other, while in the latter, these particles may explore high dimensional data landscapes in the context of global optimisation.
In this talk I will introduce different models for specific applications of LIPS, and discuss how their respective mean-field limit equations can be used to analyse the long time behavior of the systems. These results give important insights on how microscopic dynamics and interactions lead to the formation of complex phenomena, using, for example, tools from optimal transport.
The importance of different length scales in materials science is well recognised and the subject of intense interdisciplinary research efforts. In these developments, multiscale modelling approaches take a key role, as they enable the prediction of the effective material response based on detailed microstructure representations.
Motivated by the influence of microcracks and grain boundaries on effective electrical properties, the focus of this contribution is on the systematic development of generalised computational multiscale formulations for electrical conductors. In line with classic energy-based approaches for bulk material, it is shown that effective electrical conductivity tensors can be condensed from the underlying microstructure. The usefulness of the proposed formulation is demonstrated by taking into account experimental data. Extending the detailed multiscale description of electrical conductors, grain boundaries and the associated strong discontinuities in the microscale fields are then taken into account. This enables the systematic analysis of the Andrews method - one of the key approaches in materials science to study grain boundary resistivity and its effect on the electrical properties of polycrystalline materials. A theoretical foundation of the Andrews method is provided, and its applicability and the tacit assumptions involved are discussed, along with a resolution of its core limitations. With material interfaces occurring at different length scales, focus is eventually placed on electrically conductive adhesives as a typical example of macroscale interfaces. These are key elements of electronic packages used in, e.g., automotive, communication and computing applications, with their distinct properties being rooted in the underlying intrinsic multiscale nature. By establishing energetic scale-bridging relations, these macroscale composite interphases are approximated as zero-thickness cohesive interfaces. This generalises classic phenomenological traction-separation laws by relating the apparent electro-mechanical interface response to the underlying microstructure and lower-scale processes.
Lunch will be served at the conference venue in Rooms 51 and 53 for a pre-purchased voucher.
THE MAGIC FLUTE (Die Zauberflöte) – Wolfgang Amadeus Mozart
Duration: 2:50, 1 intermission
On Friday, there will be a performance at the Grand Theater in Poznań, specifically for the participants of the GAMM 2025 conference and invited guests.
NOTE: Only for those who have confirmed their participation in the registration form. Your ticket is included in the conference envelope.
https://maps.app.goo.gl/FyZq7JtFDqfvLSfx8