
In this paper, we introduce an implicit staggered algorithm for the crystal plasticity finite element method (CPFEM) which makes use of dynamic relaxation at the constitutive integration level. An uncoupled version of the constitutive system consists of a multi-surface flow law complemented by an evolution law for the hardening variables. Since a saturation law is adopted for hardening, a sequence of nonlinear iterations followed by the solution of a linear system is feasible. To tie the constitutive unknowns together, the dynamic relaxation method is adopted. A Green-Naghdi plasticity model is adopted, based on the Hencky strain calculated using a [2/2] Padé approximation. For the incompressible case, the approximation error is calculated exactly. An enhanced-assumed-strain (EAS) element technology is adopted, which was found to be especially suited to localization problems such as those resulting from slip-plane localization in crystal plasticity. Analysis of the results shows a significant reduction of drift and well-defined localization without spurious modes or hourglassing.
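As an illustration of the strain measure mentioned above, a [2/2] Padé approximant of the matrix logarithm can be applied to the right Cauchy-Green tensor to approximate the Hencky strain. The sketch below uses the standard scalar [2/2] approximant of log(1+y); the paper's exact variant and its error treatment may differ, and the deformation gradient is an illustrative choice.

```python
import numpy as np

def hencky_pade22(C):
    """[2/2] Pade approximation of the Hencky strain E = 0.5*log(C),
    using the standard scalar [2/2] Pade approximant of log(1+y),
        log(1+y) ~ (6y + 3y^2) / (6 + 6y + y^2),
    evaluated at Y = C - I (illustrative; not necessarily the paper's variant)."""
    I = np.eye(C.shape[0])
    Y = C - I
    num = 6.0 * Y + 3.0 * Y @ Y
    den = 6.0 * I + 6.0 * Y + Y @ Y
    return 0.5 * num @ np.linalg.inv(den)

def hencky_exact(C):
    """Exact Hencky strain via the spectral decomposition of C."""
    w, V = np.linalg.eigh(C)
    return 0.5 * V @ np.diag(np.log(w)) @ V.T

F = np.array([[1.0, 0.1, 0.0],     # simple-shear deformation gradient
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
C = F.T @ F                        # right Cauchy-Green tensor
err = np.max(np.abs(hencky_pade22(C) - hencky_exact(C)))
print(err)                         # tiny for moderate strains
```

Because the numerator and denominator are polynomials in the same symmetric tensor Y, they commute and the approximant stays symmetric, as a Hencky strain should.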


This paper is devoted to the analysis of a numerical scheme based on the Finite Element Method for approximating the solution of Koiter's model for a linearly elastic elliptic membrane shell subject to remaining confined in a prescribed half-space. First, we show that the solution of the obstacle problem under consideration is uniquely determined and satisfies a set of variational inequalities which are governed by a fourth-order elliptic operator and posed over a non-empty, closed, and convex subset of a suitable space. Second, we show that the solution of the obstacle problem under consideration can be approximated by means of the penalty method. Third, we show that the solution of the corresponding penalised problem is more regular up to the boundary. Fourth, we write down the mixed variational formulation corresponding to the penalised problem under consideration, and we show that the solution of the mixed variational formulation is more regular up to the boundary as well. In view of this augmented regularity of the solution of the mixed penalised problem, we are able to approximate the solution of one such problem by means of a Finite Element scheme. Finally, we present numerical experiments corroborating the validity of the mathematical results we obtained.
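As a much-simplified sketch of the penalty idea described above, the snippet below applies a penalty approximation to a 1D second-order obstacle problem, a stand-in for the fourth-order Koiter problem; the domain, load, and obstacle are illustrative choices, not from the paper.

```python
import numpy as np

def obstacle_penalty(n=100, eps=1e-6, f=-10.0):
    """Penalty approximation of the obstacle problem
       -u'' = f on (0,1), u(0) = u(1) = 0, u >= psi,
    discretized with finite differences. Where the constraint is active,
    the penalty acts as a stiff spring pulling u back onto psi."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    psi = -0.1 * np.ones(n)            # flat obstacle (illustrative)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = f * np.ones(n)
    u = np.zeros(n)
    for _ in range(50):                # active-set-style fixed point
        active = u < psi
        Ap = A + np.diag(active / eps)
        bp = b + active * psi / eps
        u_new = np.linalg.solve(Ap, bp)
        if np.array_equal(u_new < psi, active):
            u = u_new
            break
        u = u_new
    return x, u, psi

x, u, psi = obstacle_penalty()
print(u.min())   # slightly below psi = -0.1, by an O(eps) penalty violation
```

The O(eps) violation of the constraint is the price of the penalty formulation; the abstract's regularity results are what justify discretizing the penalised problem in the first place.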

Maximum entropy (Maxent) models are a class of statistical models that use the maximum entropy principle to estimate probability distributions from data. Due to the size of modern data sets, Maxent models need efficient optimization algorithms to scale well for big data applications. State-of-the-art algorithms for Maxent models, however, were not originally designed to handle big data sets; these algorithms either rely on technical devices that may yield unreliable numerical results, scale poorly, or require smoothness assumptions that many practical Maxent models lack. In this paper, we present novel optimization algorithms that overcome the shortcomings of state-of-the-art algorithms for training large-scale, non-smooth Maxent models. Our proposed first-order algorithms leverage the Kullback--Leibler divergence to train large-scale and non-smooth Maxent models efficiently. For Maxent models with a discrete probability distribution over $n$ elements built from samples, each containing $m$ features, estimating the stepsize parameters and performing each iteration of our algorithms require on the order of $O(mn)$ operations and can be trivially parallelized. Moreover, the strong convexity of the Kullback--Leibler divergence with respect to the $\ell_{1}$ norm allows for larger stepsize parameters, thereby speeding up the convergence rate of our algorithms. To illustrate the efficiency of our novel algorithms, we consider the problem of estimating probabilities of fire occurrences as a function of ecological features in the Western US MTBS-Interagency wildfire data set. Our numerical results show that our algorithms outperform state-of-the-art algorithms by one order of magnitude and yield results that agree with physical models of wildfire occurrence and previous statistical analyses of wildfire drivers.
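The paper's algorithms are not reproduced here, but a minimal sketch of a generic first-order method for Maxent illustrates where the $O(mn)$ per-iteration cost comes from: gradient descent on the Maxent dual, where each step forms the model distribution over $n$ states and its $m$ expected features. The toy data and all names are illustrative.

```python
import numpy as np

def maxent_dual_gd(F, c, eta=0.5, iters=1000):
    """Fit a Maxent (Gibbs) model p_i ~ exp(f_i . lam) by gradient descent
    on the dual objective log Z(lam) - lam . c.  Each iteration costs
    O(mn): forming p and the expected features E_p[f]."""
    n, m = F.shape
    lam = np.zeros(m)
    for _ in range(iters):
        w = np.exp(F @ lam)
        p = w / w.sum()               # current model distribution, O(mn)
        grad = F.T @ p - c            # E_p[f] minus target feature means
        lam -= eta * grad
    return lam, p

F = np.array([[0.0], [1.0], [2.0]])   # three states, one feature
c = np.array([1.2])                   # prescribed feature mean
lam, p = maxent_dual_gd(F, c)
print(F.T @ p)                        # the fitted model matches the mean
```

Both the matrix-vector products above parallelize trivially over the $n$ states, which is the scaling behaviour the abstract emphasises.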

There has been a recent surge of interest in coupling methods for Markov chain Monte Carlo algorithms: they facilitate convergence quantification and unbiased estimation, while exploiting embarrassingly parallel computing capabilities. Motivated by these applications, we consider the design and analysis of couplings of the random walk Metropolis algorithm which scale well with the dimension of the target measure. Methodologically, we introduce a low-rank modification of the synchronous coupling that is provably optimally contractive in standard high-dimensional asymptotic regimes. We expose a shortcoming of the reflection coupling, the status quo at the time of writing, and we propose a modification which mitigates the issue. Our analysis bridges the gap to the optimal scaling literature and builds a framework of asymptotic optimality which may be of independent interest. We illustrate the applicability of our proposed couplings, and the potential for extending our ideas, with various numerical experiments.
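As a sketch of the baseline being improved upon (the reflection coupling, not the paper's proposed modification), the following couples two random walk Metropolis chains targeting a standard Gaussian: both chains share one Gaussian proposal increment, with the second chain's increment reflected in the hyperplane orthogonal to the difference of the states, and one common uniform drives both accept/reject steps. The target, step size, and starting points are illustrative.

```python
import numpy as np

def rwm_reflection_coupling(d=10, sigma=0.5, iters=2000, seed=1):
    """Two random walk Metropolis chains targeting N(0, I_d), coupled
    by reflecting the shared Gaussian proposal increment; a common
    uniform couples the accept/reject decisions."""
    rng = np.random.default_rng(seed)
    log_pi = lambda v: -0.5 * v @ v
    x = rng.normal(size=d) + 3.0       # start the chains well apart
    y = rng.normal(size=d) - 3.0
    d0 = np.linalg.norm(x - y)
    for _ in range(iters):
        z = rng.normal(size=d)
        diff = x - y
        nrm = np.linalg.norm(diff)
        if nrm < 1e-9:                  # numerically met: move together
            zy = z
        else:
            e = diff / nrm
            zy = z - 2.0 * (e @ z) * e  # reflected increment for chain 2
        u = np.log(rng.uniform())       # common uniform couples accepts
        if u < log_pi(x + sigma * z) - log_pi(x):
            x = x + sigma * z
        if u < log_pi(y + sigma * zy) - log_pi(y):
            y = y + sigma * zy
    return d0, np.linalg.norm(x - y)

d0, dT = rwm_reflection_coupling()
print(d0, dT)    # the coupled chains contract toward each other
```

Note that reflection-coupled continuous proposals contract the distance but do not produce exact meetings on their own; unbiased-estimation schemes combine such contractive couplings with a maximal-coupling step.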

We introduce a causal regularisation extension to anchor regression (AR) for improved out-of-distribution (OOD) generalisation. We present anchor-compatible losses, aligning with the anchor framework to ensure robustness against distribution shifts. Various multivariate analysis (MVA) algorithms, such as (Orthonormalized) PLS, RRR, and MLR, fall within the anchor framework. We observe that simple regularisation enhances robustness in OOD settings. Estimators for selected algorithms are provided, showcasing consistency and efficacy in synthetic and real-world climate science problems. The empirical validation highlights the versatility of anchor regularisation, emphasizing its compatibility with MVA approaches and its role in enhancing replicability while guarding against distribution shifts. The extended AR framework advances causal inference methodologies, addressing the need for reliable OOD generalisation.
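A minimal sketch of plain anchor regression (the framework being extended above, not the paper's MVA estimators) illustrates the anchor transform: ordinary least squares after reweighting the anchor-aligned component of the data. The data-generating process below is a hypothetical confounded example.

```python
import numpy as np

def anchor_regression(X, y, A, gamma):
    """Anchor regression: OLS after the anchor transform
    W = (I - P_A) + sqrt(gamma) * P_A, with P_A the orthogonal
    projection onto the column span of the anchors A.
    gamma = 1 recovers plain OLS; gamma -> infinity approaches IV."""
    P = A @ np.linalg.pinv(A)              # projection onto col(A)
    W = np.eye(X.shape[0]) + (np.sqrt(gamma) - 1.0) * P
    beta, *_ = np.linalg.lstsq(W @ X, W @ y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 300
A = rng.normal(size=(n, 1))                # anchor (exogenous) variable
H = rng.normal(size=n)                     # hidden confounder
X = A[:, 0] + H + rng.normal(size=n)
y = 2.0 * X + H + rng.normal(size=n)
beta_ols = anchor_regression(X[:, None], y, A, gamma=1.0)
beta_ar  = anchor_regression(X[:, None], y, A, gamma=10.0)
print(beta_ols[0], beta_ar[0])   # the anchor weighting changes the estimate
```

Larger gamma penalises the component of the residual explained by the anchors, which is the mechanism behind the distribution-shift robustness the abstract refers to.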

Data sets tend to live in low-dimensional non-linear subspaces. Ideal data analysis tools for such data sets should therefore account for such non-linear geometry. The symmetric Riemannian geometry setting can be suitable for several reasons. First, it comes with a rich mathematical structure that accounts for a wide range of non-linear geometries and that, as empirical evidence from classical non-linear embedding shows, is able to capture the geometry of data. Second, many standard data analysis tools initially developed for data in Euclidean space can be generalised efficiently to data on a symmetric Riemannian manifold. A conceptual challenge comes from the lack of guidelines for constructing a symmetric Riemannian structure on the data space itself, and from the lack of guidelines for adapting data analysis algorithms that are successful on symmetric Riemannian manifolds to this setting. This work considers these challenges in the setting of pullback Riemannian geometry through a diffeomorphism. The first part of the paper characterises diffeomorphisms that result in proper, stable, and efficient data analysis. The second part then uses these best practices to guide the construction of such diffeomorphisms through deep learning. As a proof of concept, different types of pullback geometries -- among which the proposed construction -- are tested on several data analysis tasks and on several toy data sets. The numerical experiments confirm the predictions from theory: for proper, stable, and efficient data analysis, the diffeomorphism generating the pullback geometry needs to map the data manifold into a geodesic subspace of the pulled-back Riemannian manifold while preserving local isometry around the data manifold; moreover, pulling back positive curvature can be problematic in terms of stability.
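A toy sketch of the pullback construction: a hand-picked diffeomorphism phi (not a learned one, as in the paper) induces distances and geodesics on the data space by mapping through phi and using straight lines in the latent Euclidean space. Here phi "unrolls" a parabola-shaped data manifold onto a flat subspace, an illustrative choice.

```python
import numpy as np

# Diffeomorphism phi: R^2 -> R^2 that maps the parabola y = x^2
# onto the horizontal axis (an illustrative, hand-picked choice).
phi     = lambda p: np.array([p[0], p[1] - p[0] ** 2])
phi_inv = lambda q: np.array([q[0], q[1] + q[0] ** 2])

def pullback_distance(p, q):
    """Distance in the pullback metric: Euclidean distance after phi."""
    return np.linalg.norm(phi(p) - phi(q))

def pullback_geodesic(p, q, t):
    """Pullback geodesic: straight line in latent space, mapped back."""
    return phi_inv((1 - t) * phi(p) + t * phi(q))

p, q = np.array([-1.0, 1.0]), np.array([1.0, 1.0])   # both on y = x^2
mid = pullback_geodesic(p, q, 0.5)
print(mid)   # [0., 0.] -- the geodesic midpoint stays on the parabola
```

Because phi maps the data manifold onto a linear (hence geodesic) subspace of the latent space, geodesics between data points remain on the manifold, matching the theoretical requirement stated in the abstract.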

This paper considers the finite element approximation of parabolic optimal control problems with measure data in a nonconvex polygonal domain. Such problems usually possess low regularity in the state variable due to the presence of measure data and the nonconvex nature of the domain. The low regularity of the solution causes the finite element approximations to converge at lower orders. We prove existence, uniqueness, and regularity results for the solution of the control problem satisfying the first-order optimality condition. For the error analysis, we use piecewise linear elements for the approximation of the state and co-state variables, whereas piecewise constant functions are employed to approximate the control variable. The temporal discretization is based on the implicit Euler scheme. We derive both a priori and a posteriori error bounds for the state, control, and co-state variables. Numerical experiments are performed to validate the theoretical rates of convergence.
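The temporal scheme mentioned above can be sketched on a 1D heat-equation stand-in for the state equation; the grid, data, and sizes are illustrative, and the paper of course treats the full control problem on a nonconvex polygon.

```python
import numpy as np

def heat_implicit_euler(u0, T=0.1, nt=100):
    """Implicit Euler time-marching for u_t = u_xx on (0,1) with
    homogeneous Dirichlet data, using a standard second-order spatial
    stencil: solve (I + dt*A) u^{k+1} = u^k at every step."""
    n = len(u0)
    h = 1.0 / (n + 1)
    dt = T / nt
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    M = np.eye(n) + dt * A
    u = u0.copy()
    for _ in range(nt):
        u = np.linalg.solve(M, u)
    return u

n = 100
x = np.linspace(1 / (n + 1), 1 - 1 / (n + 1), n)
u0 = np.sin(np.pi * x)                 # first Dirichlet eigenmode
uT = heat_implicit_euler(u0, T=0.1, nt=100)
print(uT.max())   # close to exp(-pi^2 * 0.1) ~ 0.373
```

Implicit Euler is unconditionally stable, which is what makes it a natural choice when the state has the low regularity described in the abstract.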

The maximum likelihood estimator for mixtures of elliptically symmetric distributions is shown to be consistent for its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In a situation where $P$ is a mixture of sufficiently well-separated but nonparametric distributions, it is shown that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis when $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.

This article is concerned with the multilevel Monte Carlo (MLMC) methods for approximating expectations of some functions of the solution to the Heston 3/2-model from mathematical finance, which takes values in $(0, \infty)$ and possesses superlinearly growing drift and diffusion coefficients. To discretize the SDE model, a new Milstein-type scheme is proposed to produce independent sample paths. The proposed scheme can be explicitly solved and is positivity-preserving unconditionally, i.e., for any time step-size $h>0$. This positivity-preserving property for large discretization time steps is particularly desirable in the MLMC setting. Furthermore, a mean-square convergence rate of order one is proved in the non-globally Lipschitz regime, which is not trivial, as the diffusion coefficient grows super-linearly. The obtained order-one convergence in turn promises the desired relevant variance of the multilevel estimator and justifies the optimal complexity $\mathcal{O}(\epsilon^{-2})$ for the MLMC approach, where $\epsilon > 0$ is the required target accuracy. Finally, numerical experiments are reported to confirm the theoretical findings.
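The MLMC telescoping structure described above can be sketched with a generic coupled fine/coarse estimator. The path simulator below is plain Euler for geometric Brownian motion, a deliberately simple stand-in: it is not the paper's positivity-preserving Milstein scheme for the 3/2-model, and all parameters are illustrative.

```python
import numpy as np

def mlmc_level(l, M, rng, S0=1.0, mu=0.05, sig=0.2, T=1.0):
    """One MLMC level: M coupled Euler paths of GBM with fine step
    T/2^l and coarse step T/2^(l-1), driven by the same Brownian
    increments; returns payoff differences P_f - P_c for payoff S_T."""
    nf = 2 ** l
    hf = T / nf
    dW = rng.normal(scale=np.sqrt(hf), size=(M, nf))
    Sf = np.full(M, S0)
    for i in range(nf):                       # fine path
        Sf = Sf * (1 + mu * hf + sig * dW[:, i])
    if l == 0:
        return Sf                             # base level: plain estimate
    Sc = np.full(M, S0)
    hc = T / (nf // 2)
    for i in range(nf // 2):                  # coarse path, summed increments
        Sc = Sc * (1 + mu * hc + sig * (dW[:, 2 * i] + dW[:, 2 * i + 1]))
    return Sf - Sc

rng = np.random.default_rng(0)
L, M = 5, 20000
est = sum(np.mean(mlmc_level(l, M, rng)) for l in range(L + 1))
print(est)    # approximates E[S_T] = S0 * exp(mu * T) ~ 1.0513
```

The point of the coupling is that the variance of the level differences decays with the step size; the paper's order-one mean-square rate is exactly what guarantees fast enough decay for the optimal $\mathcal{O}(\epsilon^{-2})$ complexity.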

In this manuscript, we combine non-intrusive reduced order models (ROMs) with space-dependent aggregation techniques to build a mixed-ROM. The prediction of the mixed formulation is given by a convex linear combination of the predictions of several previously-trained ROMs, where we assign to each model a space-dependent weight. The ROMs used to build the mixed model exploit different reduction techniques, such as Proper Orthogonal Decomposition (POD) and AutoEncoders (AE), and/or different approximation techniques, namely Radial Basis Function interpolation (RBF), Gaussian Process Regression (GPR), or a feed-forward Artificial Neural Network (ANN). The contribution of each model is retained with higher weights in the regions where the model performs best and, vice versa, with smaller weights where the model is less accurate than the other models. Finally, a regression technique, namely a Random Forest, is exploited to evaluate the weights for unseen conditions. The performance of the aggregated model is evaluated on two test cases: the 2D flow past a NACA 4412 airfoil at an angle of attack of 5 degrees, with the Reynolds number, varying between 1e5 and 1e6, as parameter; and a transonic flow over a NACA 0012 airfoil, with the angle of attack as parameter. In both cases, the mixed-ROM provides improved accuracy with respect to each individual ROM technique.
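The space-dependent aggregation step can be sketched as follows. The inverse-error softmax weighting is an illustrative choice (the paper evaluates weights for unseen conditions with a Random Forest), and the two "models" below are synthetic stand-ins for trained ROMs.

```python
import numpy as np

def aggregate_roms(preds, errors, beta=5.0):
    """Convex, space-dependent combination of ROM predictions.

    preds:  (k, n) array, k model predictions on n spatial points
    errors: (k, n) array, pointwise training errors of each model
    At every point the weights favour the locally most accurate model
    and sum to one (softmax of negative error; illustrative choice)."""
    w = np.exp(-beta * errors)
    w /= w.sum(axis=0, keepdims=True)     # convex weights per point
    return (w * preds).sum(axis=0), w

x = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x)
pred_a = truth + 0.3 * (x > 0.5)          # model A poor on the right half
pred_b = truth + 0.3 * (x < 0.5)          # model B poor on the left half
preds = np.vstack([pred_a, pred_b])
errors = np.abs(preds - truth)
mixed, w = aggregate_roms(preds, errors)
print(np.abs(mixed - truth).max())        # below either model's 0.3 error
```

Because each model dominates the weights precisely where it is locally accurate, the mixed prediction beats both individual models everywhere, which is the behaviour the abstract reports for the mixed-ROM.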

This work considers the nodal finite element approximation of peridynamics, in which the nodal displacements satisfy the peridynamics equation at each mesh node. For the nonlinear bond-based peridynamics model, it is shown that, under suitable assumptions on the exact solution, the discretized solution associated with the central-in-time and nodal finite element discretization converges to the exact solution in the $L^2$ norm at the rate $C_1 \Delta t + C_2 h^2/\epsilon^2$. Here, $\Delta t$, $h$, and $\epsilon$ are the time step size, the mesh size, and the size of the horizon (nonlocal length scale), respectively. The constants $C_1$ and $C_2$ are independent of $h$ and $\Delta t$ and depend on the norms of the exact solution. Several numerical examples involving a pre-crack, a void, and a notch are considered, and the efficacy of the proposed nodal finite element discretization is analyzed.
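A minimal sketch of a nodal discretization with a central-in-time update, for 1D linear bond-based peridynamics; the micromodulus, grid, and horizon are illustrative choices, not the paper's setup.

```python
import numpy as np

def internal_force(u, x, horizon, c):
    """Bond-based peridynamic internal force (1D, linear microelastic):
    f_i = sum over bonds of  c * (u_j - u_i) / |x_j - x_i| * dx,
    with nodal quadrature (weight = uniform node spacing dx)."""
    n, dx = len(x), x[1] - x[0]
    f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            xi = x[j] - x[i]
            if j != i and abs(xi) <= horizon + 1e-12:
                f[i] += c * (u[j] - u[i]) / abs(xi) * dx
    return f

def central_step(u_prev, u, x, horizon, c, dt, rho=1.0):
    """One central-in-time step: u^{n+1} = 2u^n - u^{n-1} + dt^2 f/rho."""
    return 2 * u - u_prev + dt**2 * internal_force(u, x, horizon, c) / rho

x = np.linspace(0.0, 1.0, 101)
u = 0.01 * x                          # homogeneous stretch
f = internal_force(u, x, horizon=0.05, c=1.0)
print(np.abs(f[10:-10]).max())        # ~0 away from the boundary
```

Under a homogeneous deformation the bond contributions from opposite sides of each interior node cancel, so the internal force vanishes there; the nonzero force near the ends is the well-known peridynamic surface effect.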
