Due to their cost, experiments for inertial confinement fusion (ICF) rely heavily on numerical simulations to guide design. As simulation technology progresses, so too can the fidelity of models used to plan new experiments. However, these high-fidelity models are by themselves insufficient for optimal experimental design, because their computational cost remains too high to efficiently and effectively explore the numerous parameters required to describe a typical experiment. Traditionally, ICF design has relied on low-fidelity modeling to initially identify potentially interesting design regions, which are then explored via selected high-fidelity modeling. In this paper, we demonstrate that this two-step approach can be insufficient: even for simple design problems, a two-step optimization strategy can steer the high-fidelity search towards incorrect regions and consequently waste computational resources on parameter regimes far away from the true optimal solution. We reveal that a primary cause of this behavior in ICF design problems is the presence of low-fidelity optima in regions of the parameter space distinct from the high-fidelity optima. To address this issue, we propose an iterative multifidelity Bayesian optimization method based on Gaussian process regression that leverages both low- and high-fidelity models. We demonstrate, using both two- and eight-dimensional ICF test problems, that our algorithm can effectively utilize low-fidelity modeling for exploration, while automatically refining promising designs with high-fidelity models. This approach proves more efficient than relying solely on high-fidelity modeling for optimization.
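
For orientation, the sketch below illustrates the general idea of two-fidelity Bayesian optimization with a Gaussian process; it is not the paper's algorithm. Cheap low-fidelity evaluations drive exploration while a GP models the low-to-high-fidelity discrepancy from a few expensive runs. The functions `low_fidelity_sim` and `high_fidelity_sim` are hypothetical toy stand-ins for ICF simulators.

```python
# A minimal two-fidelity Bayesian-optimization sketch (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def low_fidelity_sim(x):          # cheap, biased toy model
    return np.sin(3.0 * x) + 0.3 * x

def high_fidelity_sim(x):         # expensive toy "truth"
    return np.sin(3.0 * x) + 0.3 * x + 0.4 * np.cos(7.0 * x)

# Model the high-minus-low discrepancy with a GP, so cheap low-fidelity sweeps
# do the exploration and a handful of high-fidelity runs correct the bias.
X_hi = rng.uniform(0.0, 2.0, size=(4, 1))
disc = high_fidelity_sim(X_hi[:, 0]) - low_fidelity_sim(X_hi[:, 0])

for it in range(10):
    gp = GaussianProcessRegressor(ConstantKernel(1.0) * RBF(0.3),
                                  normalize_y=True).fit(X_hi, disc)
    X_cand = np.linspace(0.0, 2.0, 400).reshape(-1, 1)
    mu, sd = gp.predict(X_cand, return_std=True)
    # Lower-confidence-bound acquisition on (low fidelity + predicted discrepancy).
    acq = low_fidelity_sim(X_cand[:, 0]) + mu - 2.0 * sd
    x_next = X_cand[np.argmin(acq)]
    disc_next = high_fidelity_sim(x_next[0]) - low_fidelity_sim(x_next[0])
    X_hi = np.vstack([X_hi, x_next])
    disc = np.append(disc, disc_next)

best = X_hi[np.argmin(high_fidelity_sim(X_hi[:, 0]))]
print("best design found so far:", best)
```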

Related content

We consider the estimation of the cumulative hazard function, and equivalently the distribution function, with censored data under a setup that preserves the privacy of the survival database. This is done through an $\alpha$-locally differentially private mechanism for the failure indicators and by proposing a non-parametric kernel estimator for the cumulative hazard function that remains consistent under the privatization. Under mild conditions, we also prove lower bounds for the minimax rates of convergence and show that the estimator is minimax optimal under a well-chosen bandwidth.
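
As a rough illustration of the privatization step only (not the paper's kernel estimator), the sketch below applies randomized response at privacy level $\alpha$ to the failure indicators, debiases them, and plugs them into a Nelson-Aalen-type cumulative hazard estimate; the toy data and names are assumptions for illustration.

```python
# alpha-LDP randomized response on failure indicators + debiased plug-in hazard.
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.0                                   # local differential privacy budget
p = np.exp(alpha) / (np.exp(alpha) + 1.0)     # keep-probability of randomized response

# Toy censored sample: observed times T and failure indicators delta.
n = 500
event = rng.exponential(1.0, n)
cens = rng.exponential(1.5, n)
T = np.minimum(event, cens)
delta = (event <= cens).astype(float)

# Privatize the indicators: keep with probability p, flip otherwise.
flip = rng.random(n) >= p
z = np.where(flip, 1.0 - delta, delta)

# Debias: E[z | delta] = (2p - 1) * delta + (1 - p), so delta_hat is unbiased.
delta_hat = (z - (1.0 - p)) / (2.0 * p - 1.0)

# Plug-in cumulative hazard using the at-risk count at each sorted time.
order = np.argsort(T)
d_sorted = delta_hat[order]
at_risk = n - np.arange(n)                    # number still at risk (no ties assumed)
Lambda_hat = np.cumsum(d_sorted / at_risk)    # cumulative hazard at the sorted times
print(Lambda_hat[-1])
```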

In this paper, we consider the numerical approximation of a time-fractional stochastic Cahn--Hilliard equation driven by an additive fractionally integrated Gaussian noise. The model involves a Caputo fractional derivative in time of order $\alpha\in(0,1)$ and a fractional time-integral noise of order $\gamma\in[0,1]$. The numerical scheme approximates the model by a piecewise linear finite element method in space and a convolution quadrature in time (for both time-fractional operators), along with the $L^2$-projection for the noise. We carefully investigate the spatially semidiscrete and fully discrete schemes, and obtain strong convergence rates by using clever energy arguments. The temporal H\"older continuity property of the solution plays a key role in the error analysis. Unlike the stochastic Allen--Cahn equation, the presence of the unbounded elliptic operator in front of the cubic nonlinearity in the underlying model adds complexity and challenges to the error analysis. To overcome these difficulties, several new techniques and error estimates are developed. The study concludes with numerical examples that validate the theoretical findings.
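
For orientation, one common way to write such a model (details may differ from the paper's exact formulation) is
\[
\partial_t^{\alpha} u + \Delta^2 u - \Delta f(u) = I_t^{\gamma}\,\dot{W}(t), \qquad f(u) = u^3 - u,
\]
where $\partial_t^{\alpha}$ denotes the Caputo derivative of order $\alpha\in(0,1)$, $I_t^{\gamma}$ the Riemann--Liouville time integral of order $\gamma\in[0,1]$, and $\dot{W}$ the formal derivative of a (spatially colored) Wiener process; the term $-\Delta f(u)$ is the unbounded elliptic operator acting on the cubic nonlinearity referred to above.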

We propose a new class of finite element approximations to ideal compressible magnetohydrodynamic equations in the smooth regime. Following variational approximations developed for fluid models in the last decade, our discretizations are built via a discrete variational principle mimicking the continuous Euler-Poincaré principle, and to further exploit the geometrical structure of the problem, vector fields are represented by their action as Lie derivatives on differential forms of any degree. The resulting semi-discrete approximations are shown to conserve the total mass, entropy and energy of the solutions for a wide class of finite element approximations. In addition, the divergence-free nature of the magnetic field is preserved in a pointwise sense, and a time discretization is proposed, preserving those invariants and giving a reversible scheme at the fully discrete level. Numerical simulations are conducted to verify the accuracy of our approach and its ability to preserve the invariants for several test problems.
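
For reference, the ideal compressible MHD system targeted by such discretizations can be written, in units with $\mu_0 = 1$ (the paper's precise form may differ), as
\begin{align*}
&\partial_t \rho + \nabla\cdot(\rho \mathbf{u}) = 0, \\
&\rho\left(\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + (\nabla\times\mathbf{B})\times\mathbf{B}, \\
&\partial_t s + \mathbf{u}\cdot\nabla s = 0, \qquad p = p(\rho, s), \\
&\partial_t \mathbf{B} = \nabla\times(\mathbf{u}\times\mathbf{B}), \qquad \nabla\cdot\mathbf{B} = 0,
\end{align*}
and the pointwise preservation of the divergence-free magnetic field mentioned above refers to the last constraint.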

A novel scheme, based on third-order Weighted Essentially Non-Oscillatory (WENO) reconstructions, is presented. It attains unconditionally optimal accuracy when the data is smooth enough, even in the presence of critical points, and second-order accuracy if a discontinuity crosses the data. The key to attaining these properties is the inclusion of an additional node in the data stencil, which is only used in the computation of the weights measuring the smoothness. The accuracy properties of this scheme are proven in detail and several numerical experiments are presented, which show that this scheme is more efficient in terms of error reduction versus CPU time than its traditional third-order counterparts, as well as several higher-order WENO schemes found in the literature.
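
For context, the sketch below shows the classic third-order WENO reconstruction with Jiang-Shu-style weights, i.e. the traditional counterpart the abstract compares against; it does not implement the proposed scheme with the extra smoothness-measuring node.

```python
# Classic third-order WENO reconstruction at the cell interface x_{i+1/2}.
import numpy as np

def weno3_reconstruct(u, eps=1e-6):
    """Left-biased interface values from cell averages u (interior cells only)."""
    um1, u0, up1 = u[:-2], u[1:-1], u[2:]          # u_{i-1}, u_i, u_{i+1}
    p0 = -0.5 * um1 + 1.5 * u0                     # candidate from stencil {i-1, i}
    p1 = 0.5 * u0 + 0.5 * up1                      # candidate from stencil {i, i+1}
    b0 = (u0 - um1) ** 2                           # smoothness indicators
    b1 = (up1 - u0) ** 2
    a0 = (1.0 / 3.0) / (eps + b0) ** 2             # ideal weights 1/3 and 2/3
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    w0 = a0 / (a0 + a1)
    return w0 * p0 + (1.0 - w0) * p1               # reconstructed value at x_{i+1/2}^-

x = np.linspace(0.0, 1.0, 101)
u = np.sin(2.0 * np.pi * x)                        # smooth test data
print(weno3_reconstruct(u)[:5])
```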

The joint bidiagonalization (JBD) process iteratively reduces a matrix pair $\{A,L\}$ to two bidiagonal forms simultaneously, which can be used for computing a partial generalized singular value decomposition (GSVD) of $\{A,L\}$. The process has a nested inner-outer iteration structure, where the inner iteration usually cannot be computed exactly. In this paper, we study the inaccurately computed inner iterations of JBD by first investigating the influence of the computational error of the inner iteration on the outer iteration, and then proposing a reorthogonalized JBD (rJBD) process to maintain orthogonality of part of the Lanczos vectors. An error analysis of the rJBD is carried out to build up connections with Lanczos bidiagonalizations. The results are then used to investigate the convergence and accuracy of the rJBD-based GSVD computation. It is shown that the accuracy of the computed GSVD components depends on the computing accuracy of the inner iterations and the condition number of $(A^T,L^T)^T$, while the convergence rate is not much affected. For practical JBD-based GSVD computations, our results provide a guideline for choosing a proper computing accuracy of the inner iterations in order to obtain approximate GSVD components with a desired accuracy. Numerical experiments are performed to confirm our theoretical results.
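
As background, the sketch below shows the single-matrix Golub-Kahan (Lanczos) bidiagonalization that the JBD process applies jointly to $\{A,L\}$; it is purely illustrative and performs no reorthogonalization, which is exactly the loss of orthogonality the rJBD process addresses.

```python
# Minimal Golub-Kahan lower bidiagonalization: A V_k ~= U_{k+1} B_k (no reorthogonalization).
import numpy as np

def golub_kahan(A, b, k):
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alphas, betas = np.zeros(k), np.zeros(k + 1)
    betas[0] = np.linalg.norm(b); U[:, 0] = b / betas[0]
    for i in range(k):
        r = A.T @ U[:, i] - (betas[i] * V[:, i - 1] if i > 0 else 0.0)
        alphas[i] = np.linalg.norm(r); V[:, i] = r / alphas[i]
        s = A @ V[:, i] - alphas[i] * U[:, i]
        betas[i + 1] = np.linalg.norm(s); U[:, i + 1] = s / betas[i + 1]
    return U, V, alphas, betas

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
U, V, alphas, betas = golub_kahan(A, rng.standard_normal(50), 5)
print(np.round(U.T @ U, 3)[:3, :3])   # near-identity while orthogonality still holds
```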

We consider the problem of learning support vector machines robust to uncertainty. It has been established in the literature that typical loss functions, including the hinge loss, are sensitive to data perturbations and outliers, and thus perform poorly in the setting considered. In contrast, using the 0-1 loss or a suitable non-convex approximation results in robust estimators, at the expense of large computational costs. In this paper we use mixed-integer optimization techniques to derive a new loss function that better approximates the 0-1 loss compared with existing alternatives, while preserving the convexity of the learning problem. In our computational results, we show that the proposed estimator is competitive with standard SVMs with the hinge loss in outlier-free regimes and better in the presence of outliers.
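
For illustration only, the snippet below compares the losses referenced in the abstract as functions of the classification margin: the 0-1 loss, the unbounded convex hinge loss, and the ramp (truncated hinge) loss as one common non-convex robust surrogate. The paper's new mixed-integer-derived loss is not reproduced here.

```python
# Margin-based losses: 0-1, hinge, and ramp (truncated hinge).
import numpy as np

def zero_one(margin):            # 1 if misclassified, else 0
    return (margin <= 0).astype(float)

def hinge(margin):               # convex, but grows without bound for gross outliers
    return np.maximum(0.0, 1.0 - margin)

def ramp(margin, cap=2.0):       # non-convex, bounded: insensitive to gross outliers
    return np.minimum(hinge(margin), cap)

m = np.array([-5.0, -1.0, 0.0, 0.5, 2.0])    # margins y * f(x)
print(zero_one(m), hinge(m), ramp(m), sep="\n")
```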

Partial differential equations (PDEs) have become an essential tool for modeling complex physical systems. Such equations are typically solved numerically via mesh-based methods, such as finite element methods, which yield solutions over the spatial domain. However, obtaining these solutions is often prohibitively costly, limiting the feasibility of exploring parameters in PDEs. In this paper, we propose an efficient emulator that simultaneously predicts the solutions over the spatial domain, with theoretical justification of its uncertainty quantification. The novelty of the proposed method lies in the incorporation of the mesh node coordinates into the statistical model. In particular, the proposed method segments the mesh nodes into multiple clusters via a Dirichlet process prior and fits Gaussian process models with the same hyperparameters in each of them. Most importantly, by revealing the underlying clustering structures, the proposed method can provide valuable insights into qualitative features of the resulting dynamics that can be used to guide further investigations. Real examples demonstrate that our proposed method has smaller prediction errors than its main competitors, with competitive computation time, and identifies interesting clusters of mesh nodes that possess physical significance, such as satisfying boundary conditions. An R package for the proposed methodology is provided in an open repository.
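
A rough sketch of the idea (not the authors' implementation) is given below: mesh-node coordinates are clustered with a truncated Dirichlet-process mixture, and one Gaussian process is fit per cluster to emulate the solution field. Unlike the proposed method, the GPs here are fit independently rather than with shared hyperparameters, and the mesh and response are toy assumptions.

```python
# DP-mixture clustering of mesh nodes followed by per-cluster GP emulation.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
nodes = rng.uniform(0.0, 1.0, size=(400, 2))                # toy 2-D mesh node coordinates
y = np.sin(4.0 * nodes[:, 0]) * np.cos(4.0 * nodes[:, 1])   # toy "solution" field

# Truncated Dirichlet-process mixture over node coordinates (variational approximation).
dpm = BayesianGaussianMixture(n_components=10,
                              weight_concentration_prior_type="dirichlet_process",
                              random_state=0).fit(nodes)
labels = dpm.predict(nodes)

# One GP emulator per discovered cluster.
emulators = {}
for c in np.unique(labels):
    idx = labels == c
    emulators[c] = GaussianProcessRegressor(RBF(0.2), normalize_y=True).fit(nodes[idx], y[idx])

# Predict at a new node by routing it to its cluster's GP.
x_new = np.array([[0.3, 0.7]])
c_new = dpm.predict(x_new)[0]
print(emulators[c_new].predict(x_new))
```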

Most of the existing Mendelian randomization (MR) methods are limited by the assumption of linear causality between exposure and outcome, and the development of new non-linear MR methods is highly desirable. We introduce two-stage prediction estimation and control function estimation from econometrics to MR and extend them to non-linear causality. We give conditions for parameter identification and theoretically prove the consistency and asymptotic normality of the estimates. We compare the two methods theoretically under both linear and non-linear causality. We also extend the control function estimation to a more flexible semi-parametric framework without detailed parametric specifications of causality. Extensive simulations numerically corroborate our theoretical results. Application to UK Biobank data reveals non-linear causal relationships between sleep duration and systolic/diastolic blood pressure.
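
As a bare-bones illustration of the control-function idea with a non-linear exposure-outcome relationship (the paper's estimators and identification conditions are more general), the sketch below regresses the exposure on the instrument in a first stage and then includes the first-stage residuals as a control function in a polynomial second stage. The data-generating process is a toy assumption.

```python
# Control-function estimation sketch: Z instrument, X exposure, Y outcome, U confounder.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
Z = rng.normal(size=n)                                 # instrument (e.g., genetic score)
U = rng.normal(size=n)                                 # unobserved confounder
X = 0.8 * Z + U + rng.normal(size=n)                   # exposure
Y = 0.5 * X + 0.3 * X**2 + U + rng.normal(size=n)      # non-linear causal effect

# Stage 1: regress exposure on the instrument, keep residuals as the control function.
D1 = np.column_stack([np.ones(n), Z])
g1, *_ = np.linalg.lstsq(D1, X, rcond=None)
V = X - D1 @ g1

# Stage 2: regress the outcome on a polynomial in X plus the control function V.
D2 = np.column_stack([np.ones(n), X, X**2, V])
g2, *_ = np.linalg.lstsq(D2, Y, rcond=None)
print("estimated coefficients on X and X^2:", g2[1], g2[2])   # roughly 0.5 and 0.3
```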

In this paper, we tackle a persistent numerical instability within total Lagrangian smoothed particle hydrodynamics (TLSPH) solid dynamics. Specifically, we address the hourglass modes that may grow and eventually deteriorate the reliability of the simulation, particularly in scenarios characterized by large deformations. We propose a generalized essentially non-hourglass formulation based on a volumetric-deviatoric stress decomposition, offering a general solution for elasticity, plasticity, anisotropy, and other material models. Compared with the standard SPH formulation using the original non-nested Laplacian operator applied in our previous work \cite{wu2023essentially} to handle hourglass issues in standard elasticity, we introduce a correction for the discretization of the shear stress that relies on the discrepancy produced by a tracing-back prediction of the initial inter-particle direction from the current deformation gradient. The present formulation, when applied to standard elastic materials, recovers the original Laplacian operator. Due to the dimensionless nature of the correction, this formulation handles complex material models in a very straightforward way. Furthermore, a magnitude limiter is introduced to minimize the correction in domains where the discrepancy is less pronounced. The present formulation is validated, with a single set of modeling parameters, through a series of benchmark cases, confirming good stability and accuracy across elastic, plastic, and anisotropic materials. To showcase its potential, the formulation is employed to simulate a complex problem involving a viscous-plastic Oobleck material, contacts, and very large deformation.
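
In standard notation, the volumetric-deviatoric split underlying such a formulation reads
\[
\boldsymbol{\sigma} = \tfrac{1}{3}\operatorname{tr}(\boldsymbol{\sigma})\,\mathbf{I} + \boldsymbol{\sigma}^{\mathrm{dev}}, \qquad \operatorname{tr}\big(\boldsymbol{\sigma}^{\mathrm{dev}}\big) = 0,
\]
with the hourglass correction described above acting on the discretization of the deviatoric (shear) part.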

Graph-centric artificial intelligence (graph AI) has achieved remarkable success in modeling interacting systems prevalent in nature, from dynamical systems in biology to particle physics. The increasing heterogeneity of data calls for graph neural architectures that can combine multiple inductive biases. However, combining data from various sources is challenging because the appropriate inductive bias may vary by data modality. Multimodal learning methods fuse multiple data modalities while leveraging cross-modal dependencies to address this challenge. Here, we survey 140 studies in graph-centric AI and find that diverse data types are increasingly brought together using graphs and fed into sophisticated multimodal models. These models stratify into image-, language-, and knowledge-grounded multimodal learning. Based on this categorization, we put forward an algorithmic blueprint for multimodal graph learning. The blueprint serves as a way to group state-of-the-art architectures that treat multimodal data by appropriately choosing four different components. This effort can pave the way for standardizing the design of sophisticated multimodal architectures for highly complex real-world problems.
