Randomness in the void distribution within a ductile metal complicates quantitative modeling of damage evolution during the void growth to coalescence failure process. Though the sequence of micro-mechanisms leading to ductile failure is known from unit cell models, which are often based on assumptions of a regular distribution of voids, the effect of randomness remains a challenge. In the present work, mesoscale unit cell models, each containing an ensemble of four randomly distributed voids of equal size, are used to quantify statistical effects on the yield surface of the homogenized material. A yield locus is constructed from a mean yield surface and a standard deviation of yield points obtained from 15 realizations of the four-void unit cells. It is found that the classical GTN model agrees very closely with the mean of the yield points extracted from the unit cell calculations with random void distributions, while the standard deviation $\textbf{S}$ varies with the imposed stress state. The standard deviation is nearly zero for stress triaxialities $T\leq1/3$, increases rapidly for triaxialities above $T\approx 1$, reaches a maximum of about $\textbf{S}/\sigma_0\approx0.1$ at $T \approx 4$, and decreases slightly at even higher triaxialities. The results indicate that the dependence of the standard deviation on the stress state follows from variations in the deformation mechanism, since a well-correlated variation is found for the volume fraction of the unit cell that deforms plastically at yield. Thus, the random void distribution activates complex localization mechanisms at high stress triaxialities that differ from the ligament thinning mechanism forming the basis of the classical GTN model. A method for introducing the effect of randomness into the GTN continuum model is presented, and excellent agreement with the unit cell yield locus is achieved.
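For orientation, the classical GTN yield condition referred to above has the standard form (stated here for reference, in the usual notation rather than the paper's):

$$\Phi = \left(\frac{\sigma_e}{\sigma_0}\right)^{2} + 2 q_1 f \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \sigma_0}\right) - 1 - (q_1 f)^2 = 0, \qquad T = \frac{\sigma_m}{\sigma_e},$$

where $\sigma_e$ is the von Mises effective stress, $\sigma_m$ the mean stress, $\sigma_0$ the matrix yield stress, $f$ the void volume fraction, and $q_1$, $q_2$ the Tvergaard fitting parameters; $T$ is the stress triaxiality used throughout the abstract.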
Gradient-enhanced Kriging (GE-Kriging) is a well-established surrogate modelling technique for approximating expensive computational models. However, it tends to become impractical for high-dimensional problems due to the size of the inherent correlation matrix and the associated high-dimensional hyper-parameter tuning problem. To address these issues, a new method, called sliced GE-Kriging (SGE-Kriging), is developed in this paper to reduce both the size of the correlation matrix and the number of hyper-parameters. We first split the training sample set into multiple slices and invoke Bayes' theorem to approximate the full likelihood function by a sliced likelihood function, in which multiple small correlation matrices, rather than one large one, describe the correlation of the sample set. Then, we replace the original high-dimensional hyper-parameter tuning problem with a low-dimensional counterpart by learning the relationship between the hyper-parameters and derivative-based global sensitivity indices. The performance of SGE-Kriging is finally validated by means of numerical experiments with several benchmarks and a high-dimensional aerodynamic modelling problem. The results show that the SGE-Kriging model achieves accuracy and robustness comparable to those of standard GE-Kriging at a much lower training cost. The benefits are most evident for high-dimensional problems with tens of variables.
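To illustrate the slicing idea, a minimal sketch of a sliced Gaussian log-likelihood is given below. The slice assignment, kernel choice, and all function names are our own assumptions for illustration, not the paper's implementation; the gradient information that GE-Kriging also exploits is omitted for brevity.

```python
import numpy as np

def gauss_kernel(X1, X2, theta):
    """Squared-exponential correlation with per-dimension length-scales theta."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 / theta ** 2).sum(-1)
    return np.exp(-0.5 * d2)

def sliced_neg_log_likelihood(X, y, theta, n_slices):
    """Approximate the full Gaussian negative log-likelihood by a sum over
    slices, so that only small per-slice correlation matrices are factorized."""
    nll = 0.0
    for Xs, ys in zip(np.array_split(X, n_slices), np.array_split(y, n_slices)):
        R = gauss_kernel(Xs, Xs, theta) + 1e-10 * np.eye(len(ys))  # nugget
        L = np.linalg.cholesky(R)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, ys))
        nll += 0.5 * (ys @ alpha) + np.log(np.diag(L)).sum()
    return nll
```

Each slice contributes an $n_i \times n_i$ factorization instead of one $n \times n$ factorization, which is the source of the training-cost reduction.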
We propose a finite difference scheme for the numerical solution of a two-dimensional singularly perturbed convection-diffusion partial differential equation whose solution features interacting boundary and interior layers, the latter due to discontinuities in the source term. The problem is posed on the unit square. The second derivative is multiplied by a singular perturbation parameter, $\epsilon$, while the nature of the first derivative term is such that the flow is aligned with a boundary. Together, these two facts mean that solutions tend to exhibit layers of both exponential and characteristic type. We solve the problem using a finite difference method, specially adapted to the discontinuities, applied on a piecewise-uniform (Shishkin) mesh. We prove that the computed solution converges to the true one at a rate that is independent of the perturbation parameter and is nearly first-order. We present numerical results verifying that these bounds are sharp.
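As background, the sketch below constructs a one-dimensional piecewise-uniform Shishkin mesh condensing near a single boundary layer at $x = 1$. The transition-point constant and the adaptation to the interior layers of the actual two-dimensional scheme are problem-specific and not specified by the abstract; this is only the generic construction.

```python
import numpy as np

def shishkin_mesh(N, eps, beta=1.0, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with N intervals (N even),
    condensed in a region of width tau next to a boundary layer at x = 1."""
    tau = min(0.5, sigma * eps / beta * np.log(N))    # transition point
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)  # uniform outside the layer
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)    # uniform inside the layer
    return np.concatenate([coarse, fine[1:]])
```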
The multigrid V-cycle method is a popular method for solving systems of linear equations. It computes an approximate solution by applying smoothing on the fine levels and solving a system of linear equations on the coarsest level. How the coarsest-level problem is solved depends on its size and difficulty. If the size permits, it is typical to use a direct method based on LU or Cholesky decomposition. In settings with large coarsest-level problems, approximate solvers, such as iterative Krylov subspace methods or direct methods based on low-rank approximation, are often used. The accuracy of the coarsest-level solver is typically chosen based on the users' experience with the concrete problems and methods. In this paper we present an approach to analyzing the effects of approximate coarsest-level solves on the convergence of the V-cycle method for symmetric positive definite problems. Using these results, we derive a coarsest-level stopping criterion through which we may control the difference between the approximation computed by a V-cycle method with an approximate coarsest-level solver and the approximation that would be computed if the coarsest-level problems were solved exactly. The coarsest-level stopping criterion may thus be set up such that the V-cycle method converges to a chosen finest-level accuracy in (nearly) the same number of V-cycle iterations as the V-cycle method with an exact coarsest-level solver. We also use the theoretical results to discuss how the convergence of the V-cycle method may be affected by the choice of tolerance in a coarsest-level stopping criterion based on the relative residual norm.
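To make the setting concrete, here is a schematic V-cycle in which the coarsest-level problem is solved inexactly by conjugate gradients stopped on a relative residual tolerance; the matrix hierarchy, smoother, and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def cg_to_tol(A, b, tol):
    """Plain conjugate gradients, stopped on the relative residual norm."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    rs = r @ r; nb = np.linalg.norm(b) + 1e-300
    while np.sqrt(rs) > tol * nb:
        Ap = A @ p
        a = rs / (p @ Ap)
        x = x + a * p
        r = r - a * Ap
        rs, rs_old = r @ r, rs
        p = r + (rs / rs_old) * p
    return x

def v_cycle(b, x, mats, prolongs, level=0, coarse_tol=1e-8, nu=2):
    """Schematic V-cycle: mats[l] is the level-l matrix and prolongs[l]
    maps level-(l+1) corrections to level l. The coarsest problem is
    solved only to the relative residual tolerance coarse_tol."""
    A = mats[level]
    if level == len(mats) - 1:
        return cg_to_tol(A, b, coarse_tol)        # inexact coarsest solve
    D = np.diag(A)
    for _ in range(nu):                           # pre-smoothing (damped Jacobi)
        x = x + 0.67 * (b - A @ x) / D
    P = prolongs[level]
    r_c = P.T @ (b - A @ x)                       # restrict the residual
    e_c = v_cycle(r_c, np.zeros_like(r_c), mats, prolongs,
                  level + 1, coarse_tol, nu)
    x = x + P @ e_c                               # coarse-grid correction
    for _ in range(nu):                           # post-smoothing
        x = x + 0.67 * (b - A @ x) / D
    return x
```

The tolerance coarse_tol plays the role of the stopping criterion discussed above: loosening it saves coarsest-level work, while the paper's analysis quantifies how loose it can be before extra V-cycle iterations are needed.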
We introduce a novel sampler, called the energy-based diffusion generator, for generating samples from arbitrary target distributions. The sampling model employs a structure similar to that of a variational autoencoder: a decoder transforms latent variables from a simple distribution into random variables approximating the target distribution, while the encoder is designed on the basis of a diffusion model. Leveraging the powerful capacity of diffusion models for representing complex distributions, we obtain an accurate variational estimate of the Kullback-Leibler divergence between the distribution of the generated samples and the target. Moreover, we propose a decoder based on generalized Hamiltonian dynamics to further enhance sampling performance. Through empirical evaluation, we demonstrate the effectiveness of our method on a variety of complex distributions, showcasing its superiority over existing methods.
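The generalized Hamiltonian dynamics used in the decoder are not spelled out in the abstract; as orientation only, the sketch below shows the standard leapfrog integration of Hamiltonian dynamics that such a decoder could build on, with all names and parameters being our assumptions.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps=0.05, n_steps=10):
    """Leapfrog integration of Hamiltonian dynamics for a potential U(q):
    a volume-preserving, approximately energy-conserving map that
    transports samples (q, p) toward low-energy regions of U."""
    p = p - 0.5 * eps * grad_U(q)       # initial half momentum kick
    for _ in range(n_steps - 1):
        q = q + eps * p                 # full position drift
        p = p - eps * grad_U(q)         # full momentum kick
    q = q + eps * p                     # final position drift
    p = p - 0.5 * eps * grad_U(q)       # final half momentum kick
    return q, p
```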
Network meta-analysis (NMA) combines evidence from multiple trials to compare the effectiveness of a set of interventions. In public health research, interventions are often complex, made up of multiple components or features. This makes it difficult to define a common set of interventions on which to perform the analysis. One approach to this problem is component network meta-analysis (CNMA) which uses a meta-regression framework to define each intervention as a subset of components whose individual effects combine additively. In this paper, we are motivated by a systematic review of complex interventions to prevent obesity in children. Due to considerable heterogeneity across the trials, these interventions cannot be expressed as a subset of components but instead are coded against a framework of characteristic features. To analyse these data, we develop a bespoke CNMA-inspired model that allows us to identify the most important features of interventions. We define a meta-regression model with covariates on three levels: intervention, study, and follow-up time, as well as flexible interaction terms. By specifying different regression structures for trials with and without a control arm, we relax the assumption from previous CNMA models that a control arm is the absence of intervention components. Furthermore, we derive a correlation structure that accounts for trials with multiple intervention arms and multiple follow-up times. Although our model was developed for the specifics of the obesity data set, it has wider applicability to any set of complex interventions that can be coded according to a set of shared features.
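For context, the additivity assumption of standard CNMA that this paper generalizes can be written as (standard formulation, in our notation):

$$d_k = \sum_{j=1}^{p} x_{kj}\,\beta_j,$$

where $d_k$ is the relative effect of intervention $k$ versus the reference, $x_{kj} \in \{0, 1\}$ indicates whether component $j$ is present in intervention $k$, and $\beta_j$ is the additive effect of component $j$. The model developed here instead codes interventions against shared features, with covariates on the intervention, study, and follow-up-time levels, together with interaction terms.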
A well-balanced second-order finite volume scheme is proposed and analyzed for a $2 \times 2$ system of non-linear partial differential equations that describes the dynamics of growing sandpiles created by a vertical source on a flat, bounded rectangular table in multiple dimensions. To derive a second-order scheme, we combine a MUSCL-type spatial reconstruction with a strong-stability-preserving Runge-Kutta time-stepping method. The resulting scheme is ensured to be well-balanced through a modified limiting approach that allows the scheme to reduce to a well-balanced first-order scheme near the steady state while maintaining second-order accuracy away from it. The well-balanced property of the scheme is proven analytically in one dimension and demonstrated numerically in two dimensions. Additionally, numerical experiments reveal that, compared to the existing first-order schemes in the literature, the second-order scheme reduces finite-time oscillations, requires fewer time iterations to reach the steady state, and resolves the physical structure of the sandpile more sharply.
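As a generic illustration of the MUSCL-plus-SSP-RK building blocks named above (not the paper's well-balanced modification), a minimal one-dimensional sketch with a minmod limiter reads:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, the smaller slope otherwise."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_interface_states(u):
    """Second-order MUSCL reconstruction at cell interfaces."""
    du = minmod(np.diff(u, prepend=u[:1]), np.diff(u, append=u[-1:]))
    return u + 0.5 * du, u - 0.5 * du   # right/left interface values per cell

def ssprk2_step(u, dt, residual):
    """SSP-RK2 (Heun) step; residual(u) is the spatial operator built
    from the limited reconstruction and a numerical flux."""
    u1 = u + dt * residual(u)                        # forward Euler stage
    return 0.5 * u + 0.5 * (u1 + dt * residual(u1))  # convex SSP combination
```

The paper's modified limiting switches this reconstruction off near the steady state so that the scheme degenerates to its well-balanced first-order counterpart there.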
Starting from the Kirchhoff-Huygens representation and Duhamel's principle for time-domain wave equations, we propose novel butterfly-compressed Hadamard integrators for self-adjoint wave equations in an inhomogeneous medium, in both the time and frequency domains. First, we incorporate the leading term of Hadamard's ansatz into the Kirchhoff-Huygens representation to develop a short-time-valid propagator. Second, using the Fourier transform in time, we derive the corresponding Eulerian short-time propagator in the frequency domain; on top of this propagator, we develop a time-frequency-time (TFT) method for the Cauchy problem of time-domain wave equations. Third, we propose the time-frequency-time-frequency (TFTF) method for the corresponding point-source Helmholtz equation, which provides Green's functions of the Helmholtz equation for all angular frequencies within a given frequency band. Fourth, to implement the TFT and TFTF methods efficiently, we introduce butterfly algorithms to compress oscillatory integral kernels at different frequencies. As a result, the proposed methods can construct the wave field beyond caustics implicitly and advance spatially overturning waves in time naturally, with quasi-optimal computational complexity and memory usage. Furthermore, once constructed, the Hadamard integrators can be employed to solve both time-domain wave equations with various initial conditions and frequency-domain wave equations with different point sources. Numerical examples for two-dimensional wave equations illustrate the accuracy and efficiency of the proposed methods.
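For reference, Duhamel's principle invoked above expresses the solution of $\partial_t^2 u + L u = f$ with a self-adjoint, positive operator $L$ as (standard statement, in our notation):

$$u(t) = \cos\!\big(t\sqrt{L}\big)\,u_0 + \frac{\sin\!\big(t\sqrt{L}\big)}{\sqrt{L}}\,u_1 + \int_0^t \frac{\sin\!\big((t-s)\sqrt{L}\big)}{\sqrt{L}}\,f(s)\,\mathrm{d}s,$$

with initial data $u(0) = u_0$ and $\partial_t u(0) = u_1$; the Hadamard ansatz supplies a short-time approximation of the corresponding propagator kernels.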
We consider wave scattering from a one-dimensional system of highly contrasting resonators with time-modulated material parameters. In this setting, the wave equation reduces to a system of coupled Helmholtz equations that models the scattering problem. In order to understand the energy of the system, we prove a novel higher-order discrete capacitance matrix approximation of the subwavelength resonant quasifrequencies. Further, we perform numerical experiments to support and illustrate our analytical results and to show how periodically time-dependent material parameters affect the scattered wave field.
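Schematically, the reduction to coupled Helmholtz equations follows the usual Floquet route for parameters modulated periodically at a frequency $\Omega$ (our schematic notation, not the paper's):

$$u(x,t) = \sum_{n \in \mathbb{Z}} v_n(x)\, e^{\mathrm{i}(\omega + n\Omega) t},$$

so that the time-modulated wave equation couples the harmonics $v_n$ through a system of Helmholtz equations at the shifted frequencies $\omega + n\Omega$.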
To obtain strong convergence rates of numerical schemes, an overwhelming majority of existing works impose a global monotonicity condition on the coefficients of the stochastic differential equations (SDEs) under consideration. However, most SDEs arising in applications do not have globally monotone coefficients. As a recent breakthrough, the authors of [Hutzenthaler, Jentzen, Ann. Probab., 2020] presented a perturbation theory for SDEs, which is crucial for recovering strong convergence rates of numerical schemes in a non-globally monotone setting. However, only a convergence rate of order $1/2$ was obtained there for time-stepping schemes such as the stopped increment-tamed Euler-Maruyama (SITEM) method. That work raised, as an open problem, the natural question of whether a convergence rate higher than $1/2$ can be obtained when higher-order schemes are used. The present work addresses this problem. To this end, we develop new perturbation estimates that are able to reveal order-one strong convergence of numerical methods. As a first application of the newly developed estimates, we establish the expected order-one pathwise uniformly strong convergence of the SITEM method for additive noise driven SDEs and for multiplicative noise driven second-order SDEs with non-globally monotone coefficients. As a second application, we propose and analyze a positivity-preserving explicit Milstein-type method for the Lotka-Volterra competition model driven by multi-dimensional noise, for which a pathwise uniformly strong convergence rate of order one is recovered under mild assumptions. These results are new and significantly improve on the existing theory. Numerical experiments are provided to confirm the theoretical findings.
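For orientation, one common increment-tamed Euler-Maruyama step is sketched below (a generic variant in our notation; the SITEM method additionally involves a stopping mechanism that is not sketched here).

```python
import numpy as np

def increment_tamed_em(mu, sigma, x0, T, n, rng):
    """Increment-tamed Euler-Maruyama for dX = mu(X) dt + sigma(X) dW:
    the whole increment is tamed so that superlinearly growing
    coefficients cannot blow up the scheme in a single step."""
    h = T / n
    x = np.array(x0, dtype=float)
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(h), size=x.shape)
        inc = mu(x) * h + sigma(x) * dW
        x = x + inc / np.maximum(1.0, h * np.linalg.norm(inc))
    return x
```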
When fitting continuous bounded data, the generalized beta (and several variants of this distribution) and the two-parameter Kumaraswamy (KW) distributions are the two most prominent univariate continuous models that come to mind. These two rival probability models share several common features, and selecting between them in a practical situation is of great interest. Consequently, in this paper we discuss various methods of selection between the generalized beta proposed by Libby and Novick (1982) (LNGB) and the KW distribution, including criteria based on the probability of correct selection, which improves on the likelihood ratio statistic approach, as well as criteria based on pseudo-distance measures. We obtain an approximation for the probability of correct selection under the hypotheses $H_{\mathrm{LNGB}}$ and $H_{\mathrm{KW}}$, and select the model that maximizes it. Our proposal is all the more appealing because the LNGB distribution subsumes both the classical beta and exponentiated generators (see, for details, Cordeiro et al. 2014; Libby and Novick 1982), making it a natural competitor of the two-parameter KW distribution in an appropriate scenario.
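As a baseline for the selection problem, the likelihood ratio approach that the paper improves on simply fits both candidates by maximum likelihood and picks the larger maximized log-likelihood. The sketch below does this for the KW distribution against the classical beta (the LNGB density itself is not re-implemented here); all function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

def kw_negloglik(params, x):
    """Negative log-likelihood of the Kumaraswamy(a, b) density
    f(x) = a*b*x**(a-1)*(1-x**a)**(b-1) on (0, 1)."""
    a, b = np.exp(params)  # log-parametrization enforces positivity
    return -np.sum(np.log(a) + np.log(b) + (a - 1) * np.log(x)
                   + (b - 1) * np.log1p(-x ** a))

def select_model(x):
    """Pick the candidate with the larger maximized log-likelihood."""
    kw = minimize(kw_negloglik, x0=[0.0, 0.0], args=(x,))
    a_, b_, _, _ = beta.fit(x, floc=0, fscale=1)
    ll_beta = beta.logpdf(x, a_, b_).sum()
    return "KW" if -kw.fun > ll_beta else "beta"
```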