This work presents a comparative review and classification of several well-known thermodynamically consistent models of hydrogel behavior in a large-deformation setting, focusing specifically on solvent absorption/desorption and its impact on mechanical deformation and network swelling. The discussion addresses formulation aspects, the general mathematical classification of the governing equations, and numerical implementation issues based on the finite element method. The theories are presented in a unified framework demonstrating that, although this is not evident in some cases, all of them follow equivalent thermodynamic arguments. A detailed numerical analysis is carried out in which Taylor-Hood elements are employed in the spatial discretization to satisfy the inf-sup condition and to prevent spurious numerical oscillations. The resulting discrete problems are solved on the FEniCS platform through consistent variational formulations, employing both monolithic and staggered approaches. We conduct benchmark tests on various hydrogel structures, demonstrating that the major differences arise from the chosen volumetric response of the hydrogel. The significance of this choice is frequently underestimated in the state-of-the-art literature, but it is shown here to have substantial implications for the resulting hydrogel behavior.
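As a minimal sketch of the kind of Taylor-Hood-type mixed discretization mentioned above (not the authors' actual code; the legacy FEniCS/dolfin API, mesh, and field names are assumptions for illustration), a quadratic displacement space can be paired with a linear chemical-potential space as follows:

```python
# Minimal sketch (assumed legacy FEniCS/dolfin API): a Taylor-Hood-type mixed space
# pairing quadratic displacements with a linear chemical potential, as one would use
# to satisfy the inf-sup condition in a coupled swelling problem.
from dolfin import (UnitCubeMesh, VectorElement, FiniteElement, MixedElement,
                    FunctionSpace, Function, split, TestFunctions)

mesh = UnitCubeMesh(8, 8, 8)                      # placeholder geometry
P2 = VectorElement("CG", mesh.ufl_cell(), 2)      # displacement: quadratic
P1 = FiniteElement("CG", mesh.ufl_cell(), 1)      # chemical potential: linear
W = FunctionSpace(mesh, MixedElement([P2, P1]))   # Taylor-Hood-like pairing

w = Function(W)
u, mu = split(w)                                  # unknown fields
v, q = TestFunctions(W)                           # test functions
# The coupled residual F(u, mu; v, q) would be assembled from the chosen free
# energy and solved monolithically with solve(F == 0, w, bcs) or via staggering.
```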
Spinodal metamaterials, with architectures inspired by natural phase-separation processes, have emerged as a significant alternative to periodic and symmetric morphologies when designing mechanical metamaterials with extreme performance. While their elastic mechanical properties have been systematically determined, their large-deformation, nonlinear responses have been challenging to predict and design, in part due to limited data sets and the need for complex nonlinear simulations. This work presents a novel physics-enhanced machine learning (ML) and optimization framework tailored to the challenges of designing intricate spinodal metamaterials with customized mechanical properties in large-deformation scenarios where computational modeling is restrictive and experimental data are sparse. By utilizing large-deformation experimental data directly, this approach enables the inverse design of spinodal structures with precise finite-strain mechanical responses. The framework sheds light on instability-induced pattern formation in spinodal metamaterials -- observed experimentally and in selected nonlinear simulations -- by leveraging physics-based inductive biases in the form of nonconvex energetic potentials. Altogether, this combined ML, experimental, and computational effort provides a route for the efficient and accurate design of complex spinodal metamaterials in large-deformation scenarios where energy absorption and the prediction of nonlinear failure mechanisms are essential.
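As a hedged illustration of a nonconvex energetic inductive bias of the kind mentioned above (the authors' actual potentials and parameters are not reproduced here), a simple double-well energy in a scalar strain measure already encodes the competition between two stable states that underlies instability-induced pattern formation:

```python
# Illustrative only (not the authors' trained model): a nonconvex, double-well
# energy in a scalar strain measure; all parameter values are hypothetical.
import numpy as np

def double_well_energy(strain, e1=0.0, e2=0.4, scale=1.0):
    """Nonconvex potential with minima at strains e1 and e2, separated by a barrier."""
    return scale * (strain - e1) ** 2 * (strain - e2) ** 2

strain = np.linspace(-0.1, 0.5, 7)
print(double_well_energy(strain))  # two wells, hence two competing deformation states
```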
Generative diffusion models apply the concept of Langevin dynamics from physics to machine learning and have attracted considerable interest from industrial applications, but a complete picture of their inherent mechanisms is still lacking. In this paper, we provide a transparent physics analysis of diffusion models, deriving the fluctuation theorem, the entropy production, and the Franz-Parisi potential to understand the intrinsic phase transitions discovered recently. Our analysis is rooted in non-equilibrium physics and in concepts from equilibrium physics, i.e., treating both the forward and backward dynamics as Langevin dynamics, and treating the reverse diffusion generative process as statistical inference, where the time-dependent state variables serve as quenched disorder, as studied in spin glass theory. This unified principle is expected to guide machine learning practitioners in designing better algorithms and theoretical physicists in linking machine learning to non-equilibrium thermodynamics.
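A minimal sketch of the viewpoint described above, under the assumption of a variance-preserving Ornstein-Uhlenbeck forward process and a known score function (the step size and parameters are placeholders):

```python
# Minimal sketch: forward (noising) and reverse (generative) dynamics,
# both treated as Langevin dynamics, integrated with Euler-Maruyama steps.
import numpy as np

rng = np.random.default_rng(0)
beta, dt = 1.0, 1e-3

def forward_step(x):
    """One step of the noising dynamics dx = -1/2 beta x dt + sqrt(beta) dW."""
    return x - 0.5 * beta * x * dt + np.sqrt(beta * dt) * rng.standard_normal(x.shape)

def reverse_step(x, score):
    """One step of the time-reversed generative dynamics, driven by the score of the marginal."""
    return x + (0.5 * beta * x + beta * score(x)) * dt + np.sqrt(beta * dt) * rng.standard_normal(x.shape)

x = rng.standard_normal(1000)                     # toy "data": already the stationary N(0, 1)
for _ in range(1000):
    x = forward_step(x)                           # noising pass
for _ in range(1000):
    x = reverse_step(x, score=lambda z: -z)       # generation; exact score of N(0, 1) is -x
print(round(float(x.std()), 2))                   # stays close to 1 in this stationary toy case
```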
A new model is presented to predict hydrogen-assisted fatigue. The model combines a phase field description of fracture and fatigue, stress-assisted hydrogen diffusion, and a toughness degradation formulation with cyclic and hydrogen contributions. Hydrogen-assisted fatigue crack growth predictions exhibit excellent agreement with experiments across all scenarios considered, spanning multiple load ratios, H2 pressures and loading frequencies. These are obtained without any calibration with hydrogen-assisted fatigue data, taking as input only mechanical and hydrogen transport material properties, the material's fatigue characteristics (from a single test in air), and the sensitivity of fracture toughness to hydrogen content. Furthermore, the model is used to determine (i) suitable test loading frequencies for obtaining conservative data, and (ii) the degree of underestimation incurred when samples are not pre-charged. The model can handle both laboratory specimens and large-scale engineering components, enabling the Virtual Testing paradigm in infrastructure exposed to hydrogen environments and cyclic loading.
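To make the toughness-degradation ingredient concrete, one commonly used choice (stated here only as an assumed example; the paper's specific cyclic-plus-hydrogen formulation may differ) lets the critical energy release rate decay with a Langmuir-McLean hydrogen coverage:

```python
# Hedged sketch: fracture toughness degraded by hydrogen via a Langmuir-McLean
# coverage and a linear degradation law. All parameter values are hypothetical.
import numpy as np

R, T = 8.314, 300.0          # gas constant [J/(mol K)], temperature [K]
dg_b = 30.0e3                # assumed segregation/trapping energy [J/mol]
chi = 0.89                   # assumed damage coefficient

def coverage(C):
    """Hydrogen interface coverage from a dimensionless (mole-fraction-like) bulk concentration C."""
    return C / (C + np.exp(-dg_b / (R * T)))

def Gc(C, Gc0=1.0):
    """Hydrogen-degraded critical energy release rate (normalized by the hydrogen-free value Gc0)."""
    return Gc0 * (1.0 - chi * coverage(C))

print(Gc(np.array([0.0, 1e-5, 1e-3])))   # toughness drops monotonically with hydrogen content
```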
We investigate a class of parametric elliptic eigenvalue problems with homogeneous essential boundary conditions where the coefficients (and hence the solution $u$) may depend on a parameter $y$. Toward the efficient approximate evaluation of parameter sensitivities of the first eigenpairs over the entire parameter space, we establish and analyse Gevrey-class and analytic regularity of the solution with respect to the parameters. This is made possible by a novel proof technique which we introduce and demonstrate in this paper. Our regularity result has immediate implications for the convergence of various numerical schemes for parametric elliptic eigenvalue problems, in particular for elliptic eigenvalue problems with infinitely many parameters arising from elliptic differential operators with random coefficients.
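A typical model problem in this class (stated here, as an assumption, with an affinely parametrized diffusion coefficient and the first eigenpair normalized in $L^2$) reads
\[
\begin{aligned}
-\nabla\cdot\bigl(a(x,y)\,\nabla u(x,y)\bigr) &= \lambda(y)\,u(x,y), && x\in D,\\
u(x,y) &= 0, && x\in\partial D,\\
a(x,y) &= a_0(x) + \sum_{j\ge 1} y_j\,\psi_j(x), && y=(y_j)_{j\ge 1}\in[-1,1]^{\mathbb N},
\end{aligned}
\]
with $\|u(\cdot,y)\|_{L^2(D)}=1$, so that both the first eigenvalue $\lambda(y)$ and the eigenfunction $u(\cdot,y)$ inherit their smoothness in $y$ from the coefficient $a$.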
Large deformation analysis in geomechanics plays an important role in understanding the nature of post-failure flows and the hazards associated with landslides triggered by different natural calamities. In this study, an SPH framework is proposed for large deformation and failure analysis of geomaterials. An adaptive B-spline kernel function, in combination with a pressure zone approach, is proposed to counteract the numerical issues associated with tensile instability. The proposed algorithm is validated using a soil cylinder drop problem, and the results are compared with FEM. Finally, the effectiveness of the proposed algorithm in removing tensile instability and stress noise is demonstrated using the well-studied slope failure simulation of a vertical cut in cohesive soil.
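For reference, the standard cubic B-spline kernel that such SPH formulations typically start from can be coded as below (this is the classical baseline, not the adaptive kernel proposed in the work):

```python
# Standard cubic B-spline SPH kernel W(r, h); the adaptive kernel in the paper modifies this baseline.
import numpy as np

def cubic_bspline_W(r, h, dim=2):
    """Cubic B-spline kernel with compact support 2h and the usual normalization per dimension."""
    q = np.asarray(r) / h
    sigma = {1: 2.0 / (3.0 * h), 2: 10.0 / (7.0 * np.pi * h**2), 3: 1.0 / (np.pi * h**3)}[dim]
    W = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * W

print(cubic_bspline_W(np.array([0.0, 0.5, 1.5, 2.5]), h=1.0))  # kernel values at sample distances
```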
This article introduces a new neural-network stochastic model for generating a one-dimensional stochastic field with turbulent velocity statistics. Both the model architecture and the training procedure are grounded in the Kolmogorov and Obukhov statistical theories of fully developed turbulence, thereby guaranteeing descriptions of 1) the energy distribution, 2) the energy cascade and 3) intermittency across scales in agreement with experimental observations. The model is a Generative Adversarial Network with multiple multiscale optimization criteria. First, we use three physics-based criteria: the variance, skewness and flatness of the increments of the generated field, which respectively recover the turbulent energy distribution, the energy cascade and intermittency across scales. Second, the Generative Adversarial Network criterion, based on reproducing statistical distributions, is applied to segments of different lengths of the generated field. Furthermore, to mimic the multiscale decompositions frequently used in turbulence studies, the model architecture is fully convolutional, with kernel sizes varying across its layers. To train our model, we use turbulent velocity signals from grid turbulence in the Modane wind tunnel.
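A minimal sketch of the three physics-based criteria described above: the variance, skewness and flatness of field increments at several scales (the signal and the scale values are placeholders, not the Modane data):

```python
# Increment statistics of a 1-D signal: variance, skewness and flatness of
# delta_l u = u(x + l) - u(x), the quantities used as physics-based training criteria.
import numpy as np

def increment_stats(u, scales):
    out = []
    for l in scales:
        d = u[l:] - u[:-l]
        d = d - d.mean()
        var = d.var()
        skew = (d**3).mean() / var**1.5     # signature of the energy cascade
        flat = (d**4).mean() / var**2       # scale dependence reveals intermittency
        out.append((l, var, skew, flat))
    return out

u = np.cumsum(np.random.default_rng(0).standard_normal(2**16))   # placeholder signal, not real turbulence
for l, var, skew, flat in increment_stats(u, scales=[1, 4, 16, 64]):
    print(f"scale {l:3d}: var={var:.3e} skew={skew:+.3f} flatness={flat:.3f}")
```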
Data-driven, machine learning (ML) models of atomistic interactions are often based on flexible and non-physical functions that can relate nuanced aspects of atomic arrangements to predictions of energies and forces. As a result, these potentials are only as good as their training data (usually the results of so-called ab initio simulations), and we need to make sure that we have enough information for a model to become sufficiently accurate, reliable and transferable. The main challenge stems from the fact that descriptors of chemical environments are often sparse, high-dimensional objects without a well-defined continuous metric. Therefore, it is rather unlikely that any ad hoc method of choosing training examples will sample this space without bias, and it is easy to fall into the trap of confirmation bias, where the same narrow and biased sampling is used to generate both training and test sets. We demonstrate that classical concepts from statistical planning of experiments and optimal design can help mitigate such problems at a relatively low computational cost. The key feature of the methods we investigate is that they allow us to assess the informativeness of data (how much we can improve the model by adding or swapping a training example) and to verify whether training is feasible with the current set before obtaining any reference energies and forces -- a so-called off-line approach. In other words, we focus on an approach that is easy to implement and does not require sophisticated frameworks involving automated access to high-performance computing (HPC) resources.
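A hedged sketch of the off-line idea: from a descriptor matrix alone (no reference energies or forces), a greedy D-optimal criterion scores how much each candidate configuration would improve the information matrix. The descriptors, sizes and regularization constant below are placeholders, and this is one standard optimal-design recipe rather than the specific method of the paper:

```python
# Greedy D-optimal selection of training examples from descriptor rows only.
import numpy as np

def greedy_d_optimal(X, n_select, ridge=1e-8):
    """Pick rows of X that greedily maximize log det(X_S^T X_S + ridge * I)."""
    n, d = X.shape
    A = ridge * np.eye(d)
    chosen = []
    for _ in range(n_select):
        Ainv = np.linalg.inv(A)
        # Gain in log-determinant from adding row x is log(1 + x^T A^{-1} x).
        gains = np.einsum("ij,jk,ik->i", X, Ainv, X)
        gains[chosen] = -np.inf                 # do not pick the same example twice
        best = int(np.argmax(gains))
        chosen.append(best)
        A += np.outer(X[best], X[best])
    return chosen

X = np.random.default_rng(1).standard_normal((500, 20))   # placeholder descriptors
print(greedy_d_optimal(X, n_select=5))                     # informative examples, chosen before any ab initio run
```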
We present a subspace method based on neural networks for solving partial differential equations in weak form with high accuracy. The basic idea is to use neural-network-based functions as basis functions to span a subspace and then to find an approximate solution in this subspace. Training the basis functions and finding an approximate solution can be separated; that is, different methods can be used to train the basis functions and to compute the approximate solution. In this paper, the approximate solution is obtained from the weak form of the partial differential equation. Our method achieves high accuracy at a low training cost: numerical examples show that training the basis functions requires only one hundred to two thousand epochs for most tests, and the error falls below the level of $10^{-7}$ in some tests. Overall, the proposed method offers superior performance in terms of both accuracy and computational cost.
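To illustrate the subspace idea, the sketch below uses random-feature "neural" basis functions (a stand-in for trained networks, which the paper uses) and solves the weak form of $-u''=f$ on $(0,1)$ with homogeneous Dirichlet data by Galerkin projection onto that subspace; all names and sizes are placeholders:

```python
# Subspace/Galerkin sketch: neural-style basis functions spanning a subspace,
# weak form assembled by quadrature, coefficients found by a linear solve.
import numpy as np

rng = np.random.default_rng(0)
m = 40                                               # subspace dimension
w, b = rng.uniform(1, 20, m), rng.uniform(-10, 10, m)

def phi(x):      # basis functions; boundary conditions built in via x(1 - x)
    t = np.tanh(np.outer(x, w) + b)
    return (x * (1 - x))[:, None] * t

def dphi(x):     # derivatives of the basis functions
    t = np.tanh(np.outer(x, w) + b)
    return (1 - 2 * x)[:, None] * t + (x * (1 - x))[:, None] * (w * (1 - t**2))

f = lambda x: np.pi**2 * np.sin(np.pi * x)           # manufactured right-hand side
x, h = np.linspace(0, 1, 2001), 1 / 2000             # trapezoidal quadrature grid
wq = np.full_like(x, h); wq[[0, -1]] = h / 2

A = dphi(x).T @ (wq[:, None] * dphi(x))              # stiffness matrix in the subspace
rhs = phi(x).T @ (wq * f(x))                         # load vector
c = np.linalg.lstsq(A, rhs, rcond=None)[0]           # Galerkin coefficients
u = phi(x) @ c
print(np.max(np.abs(u - np.sin(np.pi * x))))         # error against the exact solution sin(pi x)
```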
We present and analyze three distinct semi-discrete schemes for solving nonlocal geometric flows incorporating perimeter terms. These schemes are based on the finite difference method, the finite element method, and the finite element method with a specific tangential motion. We provide rigorous proofs of quadratic convergence in the $H^1$-norm for the first scheme and of linear convergence in the $H^1$-norm for the latter two. All error estimates rely on the observation that the error of the nonlocal term can be controlled by the error of the local term. Furthermore, we explore the relationship between convergence in the $L^\infty$-norm and the manifold distance. Extensive numerical experiments are conducted to verify the convergence analysis and to demonstrate the accuracy of our schemes under various norms for different types of nonlocal flows.
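One representative member of this class (given here only as an assumed example; the flows treated in the paper may differ) is the area-preserving curvature flow, whose normal velocity combines the local curvature with a nonlocal average,
\[
V \;=\; \bar\kappa - \kappa, \qquad \bar\kappa \;:=\; \frac{1}{|\Gamma_t|}\int_{\Gamma_t}\kappa\,\mathrm{d}s,
\]
where $V$ is the outward normal velocity: the enclosed area is conserved while the perimeter decreases, and the averaged quantity $\bar\kappa$ is the nonlocal term whose error can be controlled by that of the local term.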
This paper presents an analysis of the properties of two hybrid discretization methods for Gaussian derivatives, based on convolution with either the normalized sampled Gaussian kernel or the integrated Gaussian kernel, followed by central differences. The motivation for studying these discretization methods is that, in situations where multiple spatial derivatives of different orders are needed at the same scale level, they can be computed significantly more efficiently than more direct derivative approximations based on explicit convolutions with either sampled Gaussian kernels or integrated Gaussian kernels. These computational benefits also hold for the genuinely discrete approach of computing discrete analogues of Gaussian derivatives, based on convolution with the discrete analogue of the Gaussian kernel followed by central differences; however, the underlying mathematical primitives for the discrete analogue of the Gaussian kernel, in terms of modified Bessel functions of integer order, may not be available in certain frameworks for image processing, such as when performing deep learning based on scale-parameterized filters in terms of Gaussian derivatives, with learning of the scale levels. In this paper, we characterize the properties of these hybrid discretization methods in terms of quantitative performance measures concerning the amount of spatial smoothing that they imply, as well as the relative consistency of scale estimates obtained from scale-invariant feature detectors with automatic scale selection. The emphasis is on the behaviour for very small values of the scale parameter, which may differ significantly from the corresponding results obtained from the fully continuous scale-space theory, as well as between different types of discretization methods.
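A minimal one-dimensional sketch of the hybrid idea discussed above (the function names and the truncation radius are placeholders): smooth once with the normalized sampled Gaussian kernel, then obtain derivative approximations of several orders at the same scale by central differences, instead of performing one explicit convolution per derivative order.

```python
# Hybrid Gaussian-derivative discretization: one smoothing convolution with the
# normalized sampled Gaussian kernel, then central differences for the derivatives.
import numpy as np

def sampled_gaussian(sigma, radius=None):
    """Normalized sampled Gaussian kernel at scale sigma (truncated at an assumed radius)."""
    radius = int(np.ceil(4 * sigma)) if radius is None else radius
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def hybrid_gaussian_derivatives(signal, sigma):
    """Smooth once, then take first- and second-order central differences
    (with respect to the sample index, i.e. unit grid spacing)."""
    smoothed = np.convolve(signal, sampled_gaussian(sigma), mode="same")
    d1 = np.zeros_like(smoothed); d2 = np.zeros_like(smoothed)
    d1[1:-1] = 0.5 * (smoothed[2:] - smoothed[:-2])                    # delta_x
    d2[1:-1] = smoothed[2:] - 2 * smoothed[1:-1] + smoothed[:-2]       # delta_xx
    return smoothed, d1, d2

x = np.linspace(0, 10, 500)
L, Lx, Lxx = hybrid_gaussian_derivatives(np.sin(x), sigma=2.0)         # placeholder test signal
```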