Deep-learning-based numerical homogenization of heterogeneous media
In this thesis, we study a deep-learning-based approach to the compression of multi-scale partial differential operators characterized by highly heterogeneous coefficients. Such operators appear naturally in many science and engineering disciplines when modeling processes in heterogeneous media, which are marked by the interaction of effects on multiple scales. To simulate the effective behaviour of the operator on a macroscopic target scale of interest without resolving all microscopic features of the model with a computational mesh, many numerical homogenization methods have been developed over the years that reliably compress such operators into macroscopic surrogate models suitable for this task.
We propose to approximate these surrogates with a neural network in a hybrid offline-online algorithm that aims to combine the advantages of classical model-based numerical homogenization with those of the data-driven regime of deep learning. In the offline phase, a neural network is trained to approximate the coefficient-to-surrogate map from a dataset of coefficient-surrogate pairs computed with classical numerical homogenization algorithms. The advantage is that, in the subsequent online phase, previously unseen coefficients can be compressed via forward passes through the trained network, which is significantly faster than the classical homogenization algorithm used in the offline phase. This makes multi-query applications in which online efficiency is crucial, such as the simulation of evolution equations with time-dependent multi-scale coefficients, computationally feasible.
We apply this hybrid framework to a prototypical elliptic homogenization problem in connection with a representative modern numerical homogenization method, the Localized Orthogonal Decomposition. To justify our approach mathematically, we rigorously analyze it from the viewpoint of approximation theory and prove that the surrogates produced by the Localized Orthogonal Decomposition can be approximated to arbitrary accuracy by a feedforward neural network. We provide upper bounds on the depth and the number of non-zero parameters of such a network and discuss the fine-scale discretization level required to obtain optimal error bounds for a neural-network-based surrogate model.
Furthermore, we perform numerical experiments to demonstrate the feasibility of our method, using a very high-dimensional class of elliptic multi-scale coefficients as a demonstration, and we probe its limitations with diffusion coefficients based on realizations of high-contrast random fields. Finally, we numerically investigate the performance of our method in a time-dependent setting by considering heterogeneous heat and wave equations with time-dependent multi-scale coefficients.
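The offline-online split described above can be illustrated with a deliberately simplified sketch. Here a 1D harmonic-mean homogenization formula stands in for the thesis's Localized Orthogonal Decomposition surrogate, and a small NumPy feedforward network (an assumed toy architecture and training setup, not the one analyzed in the thesis) is trained offline on coefficient-surrogate pairs; online, a previously unseen coefficient is compressed by a single forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the coefficient-to-surrogate map (NOT the LOD method of
# the thesis): for a 1D elliptic operator -(a u')' with a piecewise-constant
# oscillatory coefficient a, the classical homogenized coefficient is the
# harmonic mean of a.
def classical_surrogate(a):
    return 1.0 / np.mean(1.0 / a)

# Offline phase: build a dataset of coefficient-surrogate pairs.
n_samples, n_cells = 2000, 16
A = rng.uniform(0.1, 10.0, size=(n_samples, n_cells))
y = np.array([classical_surrogate(a) for a in A])

# Minimal feedforward network trained by full-batch gradient descent;
# width, depth, and learning rate are illustrative assumptions.
W1 = rng.normal(0.0, 0.1, (n_cells, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1));       b2 = np.zeros(1)
lr = 1e-3
for _ in range(3000):
    h = np.maximum(A @ W1 + b1, 0.0)                  # ReLU hidden layer
    pred = (h @ W2 + b2).ravel()
    g_pred = 2.0 * (pred - y)[:, None] / n_samples    # d(MSE)/d(pred)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (h > 0.0)                 # backprop through ReLU
    gW1 = A.T @ g_h
    gb1 = g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Online phase: an unseen coefficient is compressed by one cheap forward
# pass instead of rerunning the classical homogenization algorithm.
a_new = rng.uniform(0.1, 10.0, size=n_cells)
nn_value = (np.maximum(a_new @ W1 + b1, 0.0) @ W2 + b2).item()
print(nn_value, classical_surrogate(a_new))
```

In a multi-query setting, the online forward pass is reused for every new coefficient, which is where the amortization of the offline training cost pays off.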
Author: | Fabian Kröpfl |
---|---|
URN: | urn:nbn:de:bvb:384-opus4-1182022 |
Frontdoor URL: | https://opus.bibliothek.uni-augsburg.de/opus4/118202 |
Advisor: | Daniel Peterseim |
Type: | Doctoral Thesis |
Language: | English |
Year of first Publication: | 2025 |
Publishing Institution: | Universität Augsburg |
Granting Institution: | Universität Augsburg, Mathematisch-Naturwissenschaftlich-Technische Fakultät |
Date of final exam: | 2024/12/09 |
Release Date: | 2025/02/06 |
Tag: | Deep Learning; Neural Networks; Numerical Homogenization; Surrogate Models; Model Order Reduction |
GND-Keyword: | Partieller Differentialoperator; Deep learning; Neuronales Netz; Homogenisierung <Mathematik>; Modellordnungsreduktion |
Pagenumber: | xi, 118 |
Institutes: | Mathematisch-Naturwissenschaftlich-Technische Fakultät |
Mathematisch-Naturwissenschaftlich-Technische Fakultät / Institut für Mathematik | |
Mathematisch-Naturwissenschaftlich-Technische Fakultät / Institut für Mathematik / Lehrstuhl für Numerische Mathematik | |
Dewey Decimal Classification: | 5 Naturwissenschaften und Mathematik / 51 Mathematik / 510 Mathematik |