
We present a novel image-based adaptive domain decomposition FEM framework to accelerate the solution of continuum damage mechanics problems. The key idea is to use image-processing techniques to identify the moving interface between the healthy and unhealthy subdomains as damage propagates, and then to use an iterative Schur complement approach to solve the problem efficiently. The implementation of the algorithm consists of several modular components. Following the FEM solution of a load increment, the damage detection module is activated; this step is based on several image-processing operations, including colormap manipulation and morphological convolution-based operations. The damage tracking module is then invoked to identify the crack growth direction using geometrical operations and a ray-casting algorithm. This information is passed to the domain decomposition module, where the domain is divided into a healthy subdomain, which contains only undamaged elements, and an unhealthy subdomain, which comprises both damaged and undamaged elements. Continuity between the two regions is restored using penalty constraints. The computational savings of our method stem from the Schur complement, which allows for the iterative solution of the system of equations pertaining only to the unhealthy subdomain. Through an exhaustive comparison between our approach and single-domain computations, we demonstrate the accuracy, efficiency, and robustness of the framework. We verify its compatibility with local and non-local damage laws and with structured and unstructured meshes, as well as in cases where different damage paths eventually merge. Since the key novelty lies in using image-processing tools to inform the decomposition, our framework can be readily extended beyond damage mechanics to model several classes of non-linear problems, such as plasticity and phase-field models.
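The two computational ingredients can be sketched in a few lines of Python. In the sketch below, the function names, threshold, structured-grid rasterization of the damage field, and buffer width are illustrative assumptions rather than the paper's implementation: a morphological dilation flags an unhealthy region around the damaged elements, and a Schur-complement solve treats only the unhealthy block iteratively while the healthy block reuses a cached factorization.

```python
import numpy as np
from scipy.ndimage import binary_dilation
from scipy.sparse.linalg import LinearOperator, cg

# Hypothetical damage-detection step (a sketch, not the paper's exact pipeline):
# nodal damage values rasterized onto a structured grid are thresholded, and a
# morphological dilation adds a buffer of undamaged elements around the crack,
# so the "unhealthy" subdomain can grow with the damage front.
def unhealthy_mask(damage_image, threshold=0.01, buffer_iters=2):
    return binary_dilation(damage_image > threshold, iterations=buffer_iters)

# Schur-complement solve restricted to the unhealthy subdomain. K is partitioned
# into healthy (h) and unhealthy (u) blocks; K_hh does not change while damage
# grows, so solve_hh can wrap a cached factorization reused across increments.
def schur_solve(solve_hh, K_hu, K_uh, K_uu, f_h, f_u):
    rhs = f_u - K_uh @ solve_hh(f_h)
    n_u = f_u.shape[0]
    S = LinearOperator((n_u, n_u),
                       matvec=lambda v: K_uu @ v - K_uh @ solve_hh(K_hu @ v))
    u_u, _ = cg(S, rhs)                  # iterative solve on the unhealthy block only
    u_h = solve_hh(f_h - K_hu @ u_u)     # recover the healthy-subdomain displacements
    return u_h, u_u
```

Because the healthy block is unaffected by damage growth, its factorization can be computed once and reused, which is where the savings of the condensed, unhealthy-only iterative solve come from.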

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community the opportunity to further advance the foundations of modeling, and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
December 19, 2024

Model misspecification analysis strategies, such as anomaly detection, model validation, and model comparison, are a key component of scientific model development. Over the last few years, there has been a rapid rise in the use of simulation-based inference (SBI) techniques for Bayesian parameter estimation, applied to increasingly complex forward models. To move towards fully simulation-based analysis pipelines, however, there is an urgent need for a comprehensive simulation-based framework for model misspecification analysis. In this work, we provide a solid and flexible foundation for a wide range of model discrepancy analysis tasks, using distortion-driven model misspecification tests. From a theoretical perspective, we introduce a statistical framework built around performing many hypothesis tests for distortions of the simulation model. We also make explicit analytic connections to classical techniques: anomaly detection, model validation, and goodness-of-fit residual analysis. Furthermore, we introduce an efficient self-calibrating training algorithm that is useful for practitioners. We demonstrate the performance of the framework in multiple scenarios, making the connection to classical results where they are valid. Finally, we show how to conduct such a distortion-driven model misspecification test for real gravitational-wave data, specifically the event GW150914.
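As a rough illustration of what performing many hypothesis tests for distortions of the simulation model can look like, the sketch below is generic and not the paper's specific framework: `simulate` and `statistics` are assumed user-supplied callables, one statistic per candidate distortion, and each test produces a Monte Carlo p-value under the undistorted simulator, with a Bonferroni correction across the family of tests.

```python
import numpy as np

# Generic misspecification scan (illustrative only): compare each distortion-sensitive
# statistic of the observed data with its Monte Carlo distribution under the nominal
# (undistorted) simulator, then correct for the number of tests performed.
def distortion_scan(observed, simulate, statistics, n_sims=1000, seed=0):
    rng = np.random.default_rng(seed)
    obs_stats = np.array([t(observed) for t in statistics])
    draws = [simulate(rng) for _ in range(n_sims)]             # data sets from the nominal model
    null_stats = np.array([[t(d) for t in statistics] for d in draws])
    centre = null_stats.mean(axis=0)
    # two-sided Monte Carlo p-value for each distortion-sensitive statistic
    exceed = (np.abs(null_stats - centre) >= np.abs(obs_stats - centre)).sum(axis=0)
    p_values = (1 + exceed) / (n_sims + 1)
    return np.minimum(1.0, p_values * len(statistics))         # Bonferroni-corrected
```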

We propose a method utilizing physics-informed neural networks (PINNs) to solve Poisson equations that serve as control variates in the computation of transport coefficients via fluctuation formulas, such as the Green--Kubo and generalized Einstein-like formulas. By leveraging approximate solutions to the Poisson equation constructed through neural networks, our approach significantly reduces the variance of the estimator at hand. We provide an extensive numerical analysis of the estimators and detail a methodology for training neural networks to solve these Poisson equations. The approximate solutions are then incorporated into Monte Carlo simulations as effective control variates, demonstrating the suitability of the method for moderately high-dimensional problems where fully deterministic solutions are computationally infeasible.
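The variance-reduction mechanism itself is simple to state. The sketch below is a generic control-variate estimator, independent of the PINN details: `observable` and `control` are placeholder callables, with `control` standing for the quantity built from the network's approximate Poisson solution, which is zero-mean up to approximation error; the better the approximation, the stronger the correlation and the larger the variance reduction.

```python
import numpy as np

# Minimal control-variate estimator: subtract a (near) zero-mean control variate,
# with the scalar coefficient chosen to minimize the variance of the combination.
def cv_estimate(samples, observable, control):
    f = np.array([observable(x) for x in samples])
    g = np.array([control(x) for x in samples])
    cov = np.cov(f, g)
    c = cov[0, 1] / cov[1, 1]                 # coefficient minimizing Var(f - c*g)
    reduced = f - c * g
    return reduced.mean(), reduced.var(ddof=1) / len(samples)
```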

We present a novel class of projected gradient (PG) methods for minimizing a smooth but not necessarily convex function over a convex compact set. We first provide a novel analysis of the "vanilla" PG method, achieving the best-known iteration complexity for finding an approximate stationary point of the problem. We then develop an "auto-conditioned" projected gradient (AC-PG) variant that achieves the same iteration complexity without requiring the input of the Lipschitz constant of the gradient or any line search procedure. The key idea is to estimate the Lipschitz constant using first-order information gathered from the previous iterations, and to show that the error caused by underestimating the Lipschitz constant can be properly controlled. We then generalize the PG methods to the stochastic setting, by proposing a stochastic projected gradient (SPG) method and a variance-reduced stochastic gradient (VR-SPG) method, achieving new complexity bounds in different oracle settings. We also present auto-conditioned stepsize policies for both stochastic PG methods and establish comparable convergence guarantees.
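To make the auto-conditioning idea concrete, here is a minimal sketch (illustrative only; the paper's estimator and the safeguards it uses when the Lipschitz constant is underestimated may differ): the local Lipschitz constant of the gradient is re-estimated from the two most recent iterates and gradients, and each step projects a gradient step with stepsize 1/L back onto the feasible set.

```python
import numpy as np

# Sketch of an auto-conditioned projected gradient step, assuming a user-supplied
# Euclidean projection onto the convex compact feasible set. The Lipschitz constant
# is estimated from first-order information at consecutive iterates, so no global
# Lipschitz constant or line search is needed.
def ac_pg(x0, grad, project, n_iters=100, L0=1.0):
    x = project(np.asarray(x0, dtype=float))
    g, L = grad(x), L0
    for _ in range(n_iters):
        x_new = project(x - g / L)
        g_new = grad(x_new)
        step = np.linalg.norm(x_new - x)
        if step > 0:
            # secant-type local estimate of the Lipschitz constant of the gradient
            L = max(L0, np.linalg.norm(g_new - g) / step)
        x, g = x_new, g_new
    return x

# Example usage: minimize a smooth function over the unit ball with
# project = lambda z: z / max(1.0, np.linalg.norm(z))
```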

Many real-world processes have complex tail dependence structures that cannot be characterized using classical Gaussian processes. More flexible spatial extremes models exhibit appealing extremal dependence properties but are often prohibitively expensive to fit and simulate from in high dimensions. In this paper, we aim to push the boundaries of computation and modeling of high-dimensional spatial extremes by integrating a new spatial extremes model, with flexible and non-stationary dependence properties, into the encoding-decoding structure of a variational autoencoder, called the XVAE. The XVAE can emulate spatial observations and produce outputs that have the same statistical properties as the inputs, especially in the tail. Our approach also provides a novel way of making fast inference with complex extreme-value processes. Through extensive simulation studies, we show that our XVAE is substantially more time-efficient than traditional Bayesian inference while outperforming many spatial extremes models with a stationary dependence structure. Lastly, we analyze a high-resolution satellite-derived dataset of sea surface temperature in the Red Sea, which includes 30 years of daily measurements at 16,703 grid cells. We demonstrate how to use the XVAE to identify regions susceptible to marine heatwaves under climate change and examine the spatial and temporal variability of the extremal dependence structure.
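For readers unfamiliar with the encoding-decoding structure being exploited, the skeleton below is plain PyTorch with none of the XVAE's extreme-value-specific layers, priors, or architecture; it only illustrates the encode, reparameterize, decode loop and the ELBO-style loss that let a fitted model emulate new fields with statistics similar to its training inputs.

```python
import torch
import torch.nn as nn

# Generic VAE skeleton (not the XVAE architecture): encoder maps a field observed at
# n_sites locations to a latent Gaussian, the decoder maps latent samples back to fields.
class MiniVAE(nn.Module):
    def __init__(self, n_sites, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_sites, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_sites))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar):
    # Gaussian reconstruction term plus analytic KL divergence to a standard normal prior
    rec = ((recon - x) ** 2).sum()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()
    return rec + kl
```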

A statistical network model with overlapping communities can be generated as a superposition of mutually independent random graphs of varying size. The model is parameterized by the number of nodes, the number of communities, and the joint distribution of the community size and the edge probability. This model admits sparse parameter regimes with power-law limiting degree distributions and non-vanishing clustering coefficients. This article presents large-scale approximations of clique and cycle frequencies for graph samples generated by the model, which are valid for regimes with unbounded numbers of overlapping communities. Our results reveal the growth rates of these subgraph frequencies and show that their theoretical densities can be reliably estimated from data.
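The generative mechanism is straightforward to simulate. In the sketch below, the parameter names and the example size/probability law are illustrative, not the article's specific parameterization: each of the m communities draws a (size, edge probability) pair, picks its members uniformly at random, and places edges independently; the union over communities is the sampled network with overlapping communities.

```python
import itertools
import numpy as np

# Generative sketch of the superposition model: overlay m independent community-level
# Erdos-Renyi graphs on n nodes; community sizes and edge probabilities are drawn
# jointly from a user-supplied sampler.
def sample_superposition_graph(n, m, size_prob_sampler, seed=0):
    rng = np.random.default_rng(seed)
    edges = set()
    for _ in range(m):
        size, p = size_prob_sampler(rng)
        members = rng.choice(n, size=min(size, n), replace=False)
        for u, v in itertools.combinations(sorted(int(i) for i in members), 2):
            if rng.random() < p:
                edges.add((u, v))
    return edges

# Example joint law of community size and edge probability: heavy-tailed sizes paired
# with an edge probability that decays with community size (purely for illustration).
def size_prob_sampler(rng):
    size = int(rng.zipf(2.5)) + 1
    return size, min(1.0, 5.0 / size)
```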

A recently proposed node-based uniform strain virtual element method (NVEM) is here extended to small-strain elastoplastic solids. In the proposed method, the strain is averaged at the nodes from the strains of the surrounding linearly precise virtual elements, using a generalization to virtual elements of the node-based uniform strain approach for finite elements. The averaged strain is then used to sample the weak form at the nodes of the mesh, leading to a method in which all field variables, including state and history-dependent variables, are associated with the nodes and are thus tracked only at these locations during the nonlinear computations. Through various elastoplastic benchmark problems, we demonstrate that the NVEM is locking-free while enabling linearly precise virtual elements to solve elastoplastic problems accurately.
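A bare-bones version of the node-based averaging can be written as follows (illustrative notation only, not the NVEM implementation): element-level uniform strains are distributed to the element's nodes with volume weights, and the resulting nodal strains and nodal volumes are what the weak form is sampled with at the nodes.

```python
import numpy as np

# Minimal sketch of node-based uniform strain averaging: each element's uniform strain
# is distributed to its nodes with volume weights, and every node (assumed to belong
# to at least one element) ends up with an averaged strain and a nodal volume.
def nodal_strains(element_strains, element_volumes, element_nodes, n_nodes):
    n_comp = element_strains.shape[1]
    strain_sum = np.zeros((n_nodes, n_comp))
    volume_sum = np.zeros(n_nodes)
    for strain, vol, nodes in zip(element_strains, element_volumes, element_nodes):
        share = vol / len(nodes)               # element volume split among its nodes
        for a in nodes:
            strain_sum[a] += share * strain
            volume_sum[a] += share
    return strain_sum / volume_sum[:, None], volume_sum
```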

We prove, for stably computably enumerable formal systems, direct analogues of the first and second incompleteness theorems of G\"odel. A typical stably computably enumerable set is the set of Diophantine equations with no integer solutions; in particular, such sets are generally not computably enumerable. This therefore gives the first extension of the second incompleteness theorem to non-classically-computable formal systems. Let us motivate this with a somewhat physical application. Let $\mathcal{H}$ be the suitable infinite-time limit (stabilization in the sense of the paper) of the mathematical output of humanity, specialized to first-order sentences in the language of arithmetic (for simplicity), and understood as a formal system. Suppose that all the relevant physical processes in the formation of $\mathcal{H}$ are Turing computable. Then, as defined, $\mathcal{H}$ may \emph{not} be computably enumerable, but it is stably computably enumerable. Thus, the classical G\"odel disjunction applied to $\mathcal{H}$ is meaningless, but applying our incompleteness theorems to $\mathcal{H}$ we obtain a sharper version of G\"odel's disjunction: assuming $\mathcal{H} \vdash PA$, either $\mathcal{H}$ is not stably computably enumerable, or $\mathcal{H}$ is not 1-consistent (in particular, not sound), or $\mathcal{H}$ cannot prove a certain true statement of arithmetic (and cannot disprove it if, in addition, $\mathcal{H}$ is 2-consistent).

Combining microstructural mechanical models with experimental data enhances our understanding of the mechanics of soft tissue, such as tendons. In previous work, a Bayesian framework was used to infer constitutive parameters from uniaxial stress-strain experiments on horse tendons, specifically the superficial digital flexor tendon (SDFT) and common digital extensor tendon (CDET), on a per-experiment basis. Here, we extend this analysis to investigate the natural variation of these parameters across a population of horses. Using a Bayesian mixed effects model, we infer population distributions of these parameters. Given that the chosen hyperelastic model does not account for tendon damage, careful data selection is necessary. Avoiding ad hoc methods, we introduce a hierarchical Bayesian data selection method. This two-stage approach selects data per experiment, and integrates data weightings into the Bayesian mixed effects model. Our results indicate that the CDET is stiffer than the SDFT, likely due to a higher collagen volume fraction. The modes of the parameter distributions yield estimates of the product of the collagen volume fraction and Young's modulus as 811.5 MPa for the SDFT and 1430.2 MPa for the CDET. This suggests that positional tendons have stiffer collagen fibrils and/or higher collagen volume density than energy-storing tendons.
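The hierarchical structure described above can be summarized schematically as follows; the symbols, the Gaussian population distribution, and the way the per-experiment weights $w_{ij}$ enter the likelihood are illustrative choices rather than the paper's exact formulation.

```latex
% Schematic Bayesian mixed effects model with per-experiment data weights.
\begin{align}
  \mu,\ \Sigma &\sim p(\mu)\, p(\Sigma)
      && \text{population-level priors} \\
  \theta_i \mid \mu, \Sigma &\sim \mathcal{N}(\mu, \Sigma)
      && \text{constitutive parameters of horse } i \\
  p(y_{ij} \mid \theta_i, w_{ij}) &\propto p(y_{ij} \mid \theta_i)^{\,w_{ij}}
      && \text{weighted likelihood of experiment } j \text{ on horse } i
\end{align}
```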

This paper presents an analysis of the properties of two hybrid discretization methods for Gaussian derivatives, based on convolution with either the normalized sampled Gaussian kernel or the integrated Gaussian kernel, followed by central differences. The motivation for studying these discretization methods is that, in situations where multiple spatial derivatives of different orders are needed at the same scale level, they can be computed significantly more efficiently than more direct derivative approximations based on explicit convolutions with either sampled Gaussian kernels or integrated Gaussian kernels. These computational benefits also hold for the genuinely discrete approach of computing discrete analogues of Gaussian derivatives, based on convolution with the discrete analogue of the Gaussian kernel followed by central differences. However, the underlying mathematical primitives for the discrete analogue of the Gaussian kernel, in terms of modified Bessel functions of integer order, may not be available in certain frameworks for image processing, such as when performing deep learning based on scale-parameterized filters in terms of Gaussian derivatives, with learning of the scale levels. In this paper, we characterize the properties of these hybrid discretization methods in terms of quantitative performance measures concerning the amount of spatial smoothing that they imply, as well as the relative consistency of scale estimates obtained from scale-invariant feature detectors with automatic scale selection. We place particular emphasis on the behaviour for very small values of the scale parameter, which may differ significantly from the corresponding results obtained from the fully continuous scale-space theory, as well as between the different types of discretization methods.
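A one-dimensional version of the hybrid scheme takes only a few lines; the kernel truncation radius and boundary handling below are arbitrary choices for illustration. A single convolution with the normalized sampled Gaussian kernel is performed per scale level, and derivative approximations of different orders are then obtained from the same smoothed signal by central differences, which is where the efficiency gain over separate derivative convolutions comes from.

```python
import numpy as np
from scipy.ndimage import correlate1d

# Normalized sampled Gaussian kernel, truncated at a fixed number of standard deviations.
def sampled_gaussian_kernel(sigma, radius=None):
    radius = int(np.ceil(4 * sigma)) if radius is None else radius
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    return kernel / kernel.sum()               # normalization of the sampled kernel

# Hybrid discretization: one Gaussian smoothing, then central differences per order.
def hybrid_gaussian_derivatives(signal, sigma):
    signal = np.asarray(signal, dtype=float)
    smoothed = correlate1d(signal, sampled_gaussian_kernel(sigma), mode='nearest')
    d1 = correlate1d(smoothed, np.array([-0.5, 0.0, 0.5]), mode='nearest')  # first derivative
    d2 = correlate1d(smoothed, np.array([1.0, -2.0, 1.0]), mode='nearest')  # second derivative
    return smoothed, d1, d2
```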

The use of model order reduction techniques in combination with ensemble-based methods for estimating the state of systems described by nonlinear partial differential equations has been of great interest in recent years in the data assimilation community. Methods such as the multi-fidelity ensemble Kalman filter (MF-EnKF) and the multi-level ensemble Kalman filter (ML-EnKF) are recognized as state-of-the-art techniques. However, in many cases, the construction of low-fidelity models in an offline stage, before solving the data assimilation problem, prevents them from being both accurate and computationally efficient. In our work, we investigate the use of adaptive reduced basis techniques in which the approximation space is modified online, based on information that is extracted from a limited number of full-order solutions and carried over by the past models. This allows us to simultaneously ensure good accuracy and low cost for the employed models, and thus to improve the performance of the multi-fidelity and multi-level methods.
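One common way to realize such an online adaptation, shown below as an illustrative sketch rather than the paper's specific strategy, is to append newly computed full-order snapshots to those carried over from past models and re-extract the dominant modes with a truncated SVD, so that the reduced approximation space is updated during assimilation instead of being fixed offline.

```python
import numpy as np

# Illustrative online reduced-basis update: new full-order snapshots are appended to
# those carried over from past models, and a truncated SVD re-extracts the dominant
# modes so the low-fidelity approximation space follows the assimilated dynamics.
def update_reduced_basis(old_snapshots, new_snapshots, energy_tol=1e-4):
    snapshots = np.hstack([old_snapshots, new_snapshots])      # columns are states
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1     # number of modes kept
    return U[:, :r], snapshots

def reduced_coordinates(basis, full_state):
    return basis.T @ full_state        # projection of a full-order state onto the basis
```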
