Mathematical and numerical study of some models in multiscale simulation of materials
Contributions à l'étude mathématique et numérique de quelques modèles en simulation multi-échelle des matériaux
Abstract
The first part of the manuscript is concerned with various works in molecular simulation. We consider systems composed of point particles (typically representing the atoms of a molecular system) that interact through a potential energy. The degrees of freedom of the system are the position and the momentum of each particle. The very large number of degrees of freedom makes the problem challenging: many systems considered by the applied community are composed of more than 10^5 atoms, and the largest systems that can be simulated today contain a few tens of billions of atoms.
A first question is how to sample the Boltzmann-Gibbs measure associated with the potential energy of the system. Several methods have been proposed in the literature, based on Markov chains, Markov processes (typically solutions of stochastic differential equations), and ordinary differential equations. We briefly review these methods and present our results on the non-ergodicity of some deterministic methods (namely, the Nosé-Hoover method). One therefore has to be cautious when using this method to sample the Boltzmann-Gibbs measure.
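To make the deterministic approach concrete, here is a minimal sketch of the standard Nosé-Hoover dynamics for a single one-dimensional particle of unit mass; the integrator, the potential, the inverse temperature beta and the thermostat mass Q are illustrative choices, not parameters taken from the manuscript. The harmonic oscillator used below is a classical test case for which this dynamics is known not to sample the whole Boltzmann-Gibbs measure.

    import numpy as np

    def nose_hoover_trajectory(grad_V, q0, p0, beta=1.0, Q=1.0, dt=1e-3, n_steps=100_000):
        """Integrate the Nose-Hoover equations for a 1D particle of unit mass:
            dq/dt  = p
            dp/dt  = -V'(q) - xi * p
            dxi/dt = (p**2 - 1/beta) / Q
        Plain explicit Euler stepping, only meant to illustrate the dynamics;
        production codes rely on more careful integrators.
        """
        q, p, xi = q0, p0, 0.0
        traj = np.empty((n_steps, 2))
        for n in range(n_steps):
            dq = p
            dp = -grad_V(q) - xi * p
            dxi = (p**2 - 1.0 / beta) / Q
            q, p, xi = q + dt * dq, p + dt * dp, xi + dt * dxi
            traj[n] = (q, p)
        return traj

    # Harmonic oscillator, V(q) = q^2 / 2: the trajectory stays on a thin region
    # of phase space instead of exploring the whole Boltzmann-Gibbs measure.
    traj = nose_hoover_trajectory(grad_V=lambda q: q, q0=1.0, p0=0.0)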
We next consider questions related to the construction of effective dynamics. Typically, the potential energy has many local minima, separated by high barriers. However, it is not always necessary to know all the degrees of freedom of the system to know which conformation the system is in. The knowledge of a function of the degrees of freedom is often sufficient. Starting from the microscopic description of the system (where the state is described by a vector X, the dimension of which is large), we then introduce a macroscopic description based on a function of the microscopic state, which we denote here xi(X). This function (which we assume to be scalar-valued) is our quantity of interest. We propose a strategy to design a dynamics that approximates the dynamics xi(X_t) of the quantity of interest when the system, at the microscopic scale, evolves according to the overdamped Langevin equation. Under a time scale separation assumption, we show that the dynamics we propose is an accurate approximation of xi(X_t).
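For reference, one standard conditional-expectation closure of this type can be written as follows, assuming the microscopic dynamics is the overdamped Langevin equation dX_t = -grad V(X_t) dt + sqrt(2/beta) dW_t with Gibbs measure mu proportional to exp(-beta V); the precise construction and error analysis in the manuscript may differ in their details.

    \begin{equation*}
      dz_t = b(z_t)\, dt + \sqrt{2\beta^{-1}}\, \sigma(z_t)\, dB_t ,
    \end{equation*}
    \begin{equation*}
      b(z) = \mathbb{E}_{\mu}\!\left[\, -\nabla V \cdot \nabla \xi + \beta^{-1} \Delta \xi \;\middle|\; \xi(X) = z \,\right],
      \qquad
      \sigma^2(z) = \mathbb{E}_{\mu}\!\left[\, |\nabla \xi|^2 \;\middle|\; \xi(X) = z \,\right].
    \end{equation*}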
The works described above were performed in a finite temperature setting. We have also considered isolated systems, the evolution of which is given by Newton's laws, which we write as a Hamiltonian dynamical system. Simulating this dynamics is challenging, due to the gap between the fast time scales present in the system (vibrations of some chemical bonds occur with a typical period of the order of the femtosecond) and the slow time scales, which are of the order of the microsecond or the millisecond. Such simulation times are essentially out of reach with a standard integrator, which would use time steps of the order of the femtosecond. We have addressed this problem in a twofold manner. First, we have proposed numerical integrators for highly oscillatory Hamiltonian dynamics (for which the time step is not limited by the fastest characteristic time scales), following a homogenization (in time) approach. We have also proposed a variant of the parareal scheme (which uses parallel computations to simulate a dynamics) that is better adapted to the Hamiltonian framework than the original parareal algorithm.
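For context, the sketch below implements the original parareal iteration with generic coarse and fine propagators over one time slice; the function names are illustrative, the fine propagations of each iteration are simply emulated serially here, and the Hamiltonian-adapted variant mentioned above, which modifies this update, is not reproduced.

    import numpy as np

    def parareal(u0, coarse, fine, n_slices, n_iter):
        """Original parareal iteration.
        coarse(u), fine(u): propagate the state u over one time slice
        (coarse is cheap, fine is accurate; in an actual parallel run the
        fine propagations of a given iteration are performed concurrently).
        Returns U[k, n]: approximation at the end of slice n after k iterations.
        """
        U = np.empty((n_iter + 1, n_slices + 1, np.size(u0)))
        U[0, 0] = u0
        for n in range(n_slices):                        # initial coarse sweep
            U[0, n + 1] = coarse(U[0, n])
        for k in range(n_iter):
            U[k + 1, 0] = u0
            fine_vals = [fine(U[k, n]) for n in range(n_slices)]   # parallelizable
            for n in range(n_slices):                    # sequential correction sweep
                U[k + 1, n + 1] = coarse(U[k + 1, n]) + fine_vals[n] - coarse(U[k, n])
        return U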
In the second part of the manuscript, considering solids described at the atomistic scale, we derive coarse-grained models (written at the continuum scale) and couple these two models. The fine scale model is similar to the models considered in the first part of the manuscript, whereas the coarse-grained model is written at a much more macroscopic scale. During the PhD, we addressed this question in a variational setting (corresponding to a vanishing temperature). The configuration we are after is a (sometimes unique) minimizer of some energy. We then studied how to couple the atomistic model with the corresponding continuum model. More recently, we have turned to a finite temperature setting, in which macroscopic quantities of interest are averages, with respect to the Boltzmann measure, of functions depending on the microscopic state of the system (the positions of all the atoms). In this setting, we have derived coarse-grained models, in which the temperature is a parameter, following thermodynamic limit approaches.
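For reference, the finite-temperature macroscopic quantities mentioned here are canonical averages of the generic form below, where x collects the positions of all the atoms, V is the potential energy, beta is the inverse temperature, and the observable A is a generic symbol rather than notation taken from the manuscript:

    \begin{equation*}
      \langle A \rangle = \frac{1}{Z} \int A(x)\, e^{-\beta V(x)}\, dx ,
      \qquad
      Z = \int e^{-\beta V(x)}\, dx .
    \end{equation*}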
In the third part of the manuscript, we turn to materials modelled at the continuum scale by (linear) elliptic partial differential equations. We consider random materials, the properties of which oscillate at a fast scale. Random homogenization is a well-developed theory to handle such problems. However, even in such simple cases, the numerical methods available today (such as methods to compute the homogenized matrix) generally lead to very expensive computations. We address this problem from a numerical viewpoint: we consider an equation that is simple from the theoretical viewpoint and wish to design more efficient numerical strategies. We have followed two directions. The first one is concerned with variance reduction. In the setting we consider, even though the exact homogenized matrix is deterministic, it turns out that, due to the numerical procedure that is commonly used, one only has access to a random approximation of the homogenized matrix. We have proposed variance reduction techniques, within the framework of Monte Carlo methods, which typically yield an approximation with a smaller confidence interval. A second direction consists in considering weakly stochastic materials. Indeed, real materials are rarely periodic, but they are not always strongly random either. The case when the randomness comes as a small perturbation of a periodic model is thus relevant, in practice, to model a large class of materials. We show that, in this case, the homogenized matrix can be computed, up to the first orders in the perturbation, with a workload comparable to that of periodic homogenization, and hence much smaller than the one required by generic stochastic homogenization.
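As an illustration of the kind of variance reduction device that fits this Monte Carlo framework, here is a generic antithetic-variable sketch; the estimator and the test integrand are purely illustrative and are not the specific constructions applied to the truncated corrector problems in the manuscript.

    import numpy as np

    rng = np.random.default_rng(0)

    def antithetic_estimator(f, n_samples):
        """Antithetic-variable Monte Carlo estimate of E[f(U)], U uniform on (0,1).
        Each draw U is paired with 1-U; averaging f(U) and f(1-U) keeps the
        estimator unbiased and reduces the variance when f is monotone.
        Generic one-dimensional sketch, not specific to homogenization.
        """
        U = rng.random(n_samples)
        values = 0.5 * (f(U) + f(1.0 - U))
        return values.mean(), values.std(ddof=1) / np.sqrt(n_samples)

    mean, stderr = antithetic_estimator(np.exp, 10_000)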
The first part of the manuscript summarizes works in molecular simulation. We consider systems of point particles (typically representing the nuclei of the atoms of a molecular system), which interact through a potential energy. The degrees of freedom of the system are the position and the momentum of each particle. The difficulty of the problem comes from the number of degrees of freedom involved, which can reach (and exceed) several hundreds of thousands of atoms for systems of practical interest.
The questions studied concern the sampling of the Boltzmann-Gibbs measure (with results on the non-ergodicity of some dynamical systems proposed in the literature) and the construction of effective dynamics: assuming that the system follows a dynamics X_t governed by the overdamped Langevin equation, and given a macroscopic scalar variable xi(X) that is slow in a certain sense, we propose a closed one-dimensional dynamics that approximates xi(X_t), the accuracy of which is estimated using relative entropy methods.
Another part of the work consists in developing new numerical schemes for highly oscillatory Hamiltonian problems (often encountered in molecular simulation), following a homogenization-in-time approach. We have also proposed an adaptation of the parareal algorithm to the Hamiltonian context, which makes it possible to obtain the solution of an evolution problem using parallel computations.
The second part of the manuscript presents works on the derivation of continuum-scale models from discrete (atomistic-scale) models for solids, and on the coupling of these two models, discrete and continuum. A first approach consists in formulating the problem in a variational form (modelling at zero temperature). We have also considered systems at finite temperature, modelled within the framework of statistical mechanics. In some cases, we have obtained reduced, macroscopic models, in which the temperature is a parameter, following thermodynamic limit approaches.
The third part of the manuscript addresses questions of stochastic homogenization for linear elliptic partial differential equations. The materials are thus modelled at the continuum scale. The observation motivating our work is that, even in the theoretically simplest cases, the numerical methods currently available in stochastic homogenization lead to very heavy computations. We have worked in two directions. The first consists in reducing the variance of the random quantities that are actually computed, which are the only quantities accessible in practice to approximate the homogenized matrix. The second is to study weakly stochastic problems, starting from the observation that heterogeneous materials, while rarely periodic, are not systematically strongly random either. The case of a random material for which the randomness is only a small perturbation of a periodic model is therefore of interest, and can be handled at a much more affordable computational cost.