
The equations of Lagrangian gas dynamics fall into the larger class of overdetermined hyperbolic and thermodynamically compatible (HTC) systems of partial differential equations. They satisfy an entropy inequality (second principle of thermodynamics) and conserve total energy (first principle of thermodynamics). The aim of this work is to construct a novel thermodynamically compatible cell-centered Lagrangian finite volume scheme on unstructured meshes. Unlike existing schemes, we directly discretize the entropy inequality, obtaining total energy conservation as a consequence of the new thermodynamically compatible discretization of the other equations. First, the governing equations are written in fluctuation form. Next, the non-compatible centered numerical fluxes are corrected according to the approach recently introduced by Abgrall et al., using a scalar correction factor defined at the nodes of the grid. This fits naturally into the formalism of nodal solvers typically adopted in cell-centered Lagrangian finite volume methods. Semi-discrete entropy conservative and entropy stable Lagrangian schemes are devised and blended together via a convex combination based on either a priori or a posteriori detectors of discontinuous solutions. Nonlinear stability in the energy norm is rigorously demonstrated, and the new schemes are provably positivity preserving for density and pressure. Furthermore, they exhibit zero numerical diffusion for isentropic flows while still being nonlinearly stable. The new schemes are tested against classical benchmarks for Lagrangian hydrodynamics, assessing their convergence and robustness and comparing their numerical dissipation with that of classical Lagrangian finite volume methods.
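To make the flux-correction idea concrete, here is a minimal sketch for a 1D scalar conservation law on a periodic grid, assuming entropy $\eta(u) = u^2/2$: a single scalar factor rescales the deviation of the entropy variables from their mean so that the entropy balance of the centered residuals is restored without touching conservation. This is a toy analogue of the nodal correction described above, not the paper's Lagrangian scheme; all names are illustrative.

```python
import numpy as np

# A minimal sketch, assuming a 1D scalar conservation law u_t + f(u)_x = 0
# with entropy eta(u) = u^2/2, entropy variable v = u and entropy flux q(u),
# discretized by non-compatible central differences on a periodic grid.

def centered_residuals(u, dx, f):
    """Centered residuals Phi_i ~ (f_{i+1} - f_{i-1}) / (2 dx)."""
    fu = f(u)
    return (np.roll(fu, -1) - np.roll(fu, 1)) / (2.0 * dx)

def entropy_corrected_residuals(u, dx, f, q):
    """Scalar correction in the spirit of Abgrall et al.: shift Phi_i along
    w = v - mean(v) so that sum_i v_i Phi_i equals the discrete entropy flux
    balance; since w sums to zero, conservation of sum_i Phi_i is untouched."""
    phi = centered_residuals(u, dx, f)
    v = u                                    # entropy variables for eta = u^2/2
    qu = q(u)
    target = np.sum((np.roll(qu, -1) - np.roll(qu, 1)) / (2.0 * dx))
    defect = np.sum(v * phi) - target        # entropy production of raw scheme
    w = v - v.mean()
    denom = np.sum(w * w)
    alpha = defect / denom if denom > 1e-14 else 0.0
    return phi - alpha * w

# Burgers example: f = u^2/2, q = u^3/3.
x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = 1.5 + np.sin(2 * np.pi * x)
phi_c = entropy_corrected_residuals(u, x[1] - x[0], lambda u: 0.5 * u ** 2,
                                    lambda u: u ** 3 / 3.0)
```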

Selective inference methods are developed for group lasso estimators for use with a wide class of distributions and loss functions. The method covers exponential family distributions, as well as quasi-likelihood modeling of overdispersed count data, for example, and allows for categorical or grouped covariates in addition to continuous covariates. A randomized group-regularized optimization problem is studied. The added randomization allows us to construct a post-selection likelihood, which we show to be adequate for selective inference when conditioning on the event of the selection of the grouped covariates. This likelihood also provides a selective point estimator, accounting for the selection by the group lasso. Confidence regions for the regression parameters in the selected model take the form of Wald-type regions and are shown to have bounded volume. The selective inference method for the group lasso is illustrated on data from the National Health and Nutrition Examination Survey, while simulations showcase its behavior and favorable comparison with other methods.
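As a rough illustration of the randomized optimization step, the sketch below solves a Gaussian-randomized group lasso with squared-error loss by proximal gradient descent and reports the selected groups. The perturbation omega plays the role of the added randomization; the solver, penalty weights, and toy data are assumptions of this sketch rather than the paper's method.

```python
import numpy as np

# A minimal sketch, assuming squared-error loss and a Gaussian perturbation
# omega as the randomization; lam, tau, and the toy data are illustrative.

rng = np.random.default_rng(0)

def group_soft_threshold(b, thr):
    """Proximal operator of thr * ||b||_2 (block soft-thresholding)."""
    nrm = np.linalg.norm(b)
    return np.zeros_like(b) if nrm <= thr else (1.0 - thr / nrm) * b

def randomized_group_lasso(X, y, groups, lam, tau=1.0, n_iter=1000):
    n, p = X.shape
    omega = rng.normal(scale=tau, size=p)        # added randomization
    beta = np.zeros(p)
    step = 1.0 / np.linalg.norm(X, 2) ** 2       # 1/L for the quadratic part
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) - omega      # gradient of randomized loss
        z = beta - step * grad
        for g in np.unique(groups):              # block-wise proximal step
            idx = groups == g
            z[idx] = group_soft_threshold(z[idx], step * lam * np.sqrt(idx.sum()))
        beta = z
    selected = [g for g in np.unique(groups)
                if np.linalg.norm(beta[groups == g]) > 1e-8]
    return beta, selected

# Toy data: 3 groups of 2 covariates; only the first group is active.
X = rng.normal(size=(100, 6))
y = X[:, :2] @ np.array([2.0, -1.0]) + rng.normal(size=100)
beta_hat, E = randomized_group_lasso(X, y, np.repeat([0, 1, 2], 2), lam=20.0)
print("selected groups:", E)
```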

Partial differential equations (PDEs) with uncertain or random inputs have been considered in many studies of uncertainty quantification. In forward uncertainty quantification, one is interested in analyzing the stochastic response of the PDE subject to input uncertainty, which usually involves solving high-dimensional integrals of the PDE output over a sequence of stochastic variables. In practical computations, one typically needs to discretize the problem in several ways: approximating an infinite-dimensional input random field with a finite-dimensional random field, spatial discretization of the PDE using, e.g., finite elements, and approximating high-dimensional integrals using cubatures such as quasi-Monte Carlo methods. In this paper, we focus on the error resulting from dimension truncation of an input random field. We show how Taylor series can be used to derive theoretical dimension truncation rates for a wide class of problems and we provide a simple checklist of conditions that a parametric mathematical model needs to satisfy in order for our dimension truncation error bound to hold. Some of the novel features of our approach include that our results are applicable to non-affine parametric operator equations, dimensionally-truncated conforming finite element discretized solutions of parametric PDEs, and even compositions of PDE solutions with smooth nonlinear quantities of interest. As a specific application of our method, we derive an improved dimension truncation error bound for elliptic PDEs with lognormally parameterized diffusion coefficients. Numerical examples support our theoretical findings.
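A hedged numerical illustration of dimension truncation: the toy quantity of interest below replaces the PDE solve with a simple lognormally parameterized map, and the decay of $|\mathbb{E}[G_s] - \mathbb{E}[G]|$ in the truncation dimension s mimics the kind of rate the paper's Taylor-series analysis bounds. The coefficient decay $\psi_j \sim j^{-3}$ and the sample sizes are assumptions of this sketch.

```python
import numpy as np

# A minimal sketch, assuming a toy lognormally parameterized integrand G in
# place of a PDE solve; common random numbers isolate the truncation error.

rng = np.random.default_rng(1)
s_max = 200
psi = np.arange(1, s_max + 1) ** -3.0              # decaying input coefficients

def G(Y, s):
    """Toy 'quantity of interest' depending on the first s variables through
    a lognormal coefficient exp(sum_{j<=s} psi_j y_j)."""
    return np.exp(-Y[:, :s] @ psi[:s])

Y = rng.normal(size=(20000, s_max))                # reference dimension s_max
ref = G(Y, s_max).mean()
for s in (5, 10, 20, 40, 80):
    err = abs(G(Y, s).mean() - ref)
    print(f"s = {s:3d}   dimension truncation error ~ {err:.3e}")
```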

Computational models of neurodegeneration aim to emulate the evolving pattern of pathology in the brain during neurodegenerative diseases such as Alzheimer's disease. Previous studies have made specific choices about the mechanisms of pathology production and diffusion, or assume that all subjects lie on the same disease progression trajectory. However, the complexity and heterogeneity of neurodegenerative pathology suggest that multiple mechanisms may contribute synergistically through complex interactions, while the degree of contribution of each mechanism may vary among individuals. We thus put forward a coupled-mechanisms modelling framework that non-linearly combines network-topology-informed pathology appearance with the process of pathology spreading within a dynamic modelling system. We account for the heterogeneity of disease by fitting the model at the individual level, allowing the epicenters and rate of progression to vary among subjects. We construct a Bayesian model selection framework to account for feature importance and parameter uncertainty. This provides the combination of mechanisms that best explains the observations for each individual from the ADNI dataset. With the obtained distribution of mechanism importance for each subject, we are able to identify subgroups of patients sharing similar combinations of apparent mechanisms.
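The sketch below shows one plausible instance of such a coupled-mechanisms dynamical system: pathology spreads by diffusion along a toy connectome (via the graph Laplacian) while being produced locally by a logistic term, with a subject-specific epicenter. The connectome, rates, and epicenter are synthetic placeholders; the paper's framework additionally weighs such mechanisms within a Bayesian model selection loop.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A minimal sketch, assuming a synthetic 10-node connectome W, a diffusion
# mechanism -beta * L p, and a logistic local production alpha * p * (1 - p).

rng = np.random.default_rng(2)
n = 10
W = rng.random((n, n)); W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)                      # toy structural connectome
L = np.diag(W.sum(axis=1)) - W                # graph Laplacian

def rhs(t, p, beta, alpha):
    spread = -beta * L @ p                    # pathology spreading along edges
    produce = alpha * p * (1.0 - p)           # local pathology production
    return spread + produce

p0 = np.zeros(n); p0[3] = 0.1                 # subject-specific epicenter
sol = solve_ivp(rhs, (0.0, 20.0), p0, args=(0.2, 0.5),
                t_eval=np.linspace(0.0, 20.0, 5))
print(sol.y[:, -1])                           # regional pathology at final time
```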

Turbulent fluctuations of the atmospheric refraction index, so-called optical turbulence, can significantly distort propagating laser beams. Therefore, modeling the strength of these fluctuations ($C_n^2$) is highly relevant for the successful development and deployment of future free-space optical communication links. In this letter, we propose a physics-informed machine learning (ML) methodology, $\Pi$-ML, based on dimensional analysis and gradient boosting to estimate $C_n^2$. Through a systematic feature importance analysis, we identify the normalized variance of potential temperature as the dominant feature for predicting $C_n^2$. For statistical robustness, we train an ensemble of models, which yields high performance on out-of-sample data of $R^2=0.958\pm0.001$.
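A minimal sketch of a $\Pi$-ML-style pipeline, assuming synthetic data: dimensionless feature groups are formed (including a normalized potential-temperature variance) and a small ensemble of gradient-boosting regressors is averaged for a robust prediction of $\log_{10} C_n^2$. The features, data, and hyperparameters here are placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# A minimal sketch on synthetic data; the dimensionless feature groups and
# the target relation are assumptions, not the paper's dataset.

rng = np.random.default_rng(3)
n = 2000
theta_var = rng.lognormal(size=n)             # variance of potential temperature
theta_mean = 290.0 + rng.normal(size=n)       # mean potential temperature [K]
shear = rng.lognormal(size=n)                 # a second (made-up) feature
X = np.column_stack([theta_var / theta_mean ** 2,   # normalized variance
                     shear])
y = -14.0 + 1.5 * np.log10(X[:, 0]) + 0.1 * rng.normal(size=n)  # ~ log10(Cn^2)

# Ensemble over random seeds for statistical robustness, then average.
ensemble = [GradientBoostingRegressor(random_state=s).fit(X, y) for s in range(5)]
pred = np.mean([m.predict(X) for m in ensemble], axis=0)
```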

By combining a logarithmic transformation with a corrected Milstein-type method, the present article proposes an explicit, unconditionally boundary- and dynamics-preserving scheme for the stochastic susceptible-infected-susceptible (SIS) epidemic model, which takes values in (0,N). The scheme applied to the model is first proved to have a strong convergence rate of order one. Further, the dynamic behaviors of the numerical approximations are analyzed, and it is shown that the scheme can unconditionally preserve both the domain and the dynamics of the model. More precisely, the proposed scheme gives numerical approximations that live in the domain (0,N) and reproduce the extinction and persistence properties of the original model for any time discretization step-size h > 0, without any additional requirements on the model parameters. Numerical experiments are presented to verify our theoretical results.
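To illustrate why a logarithm-type transformation can give unconditional boundary preservation, the sketch below integrates a stochastic SIS model through the logit variable y = ln(I/(N-I)): the back-transform N/(1+e^{-y}) lies in (0,N) for every step size h. This is a plain Ito-transformed Euler step (the diffusion of y is constant, so the Milstein correction vanishes), offered as an analogue of, not a substitute for, the paper's corrected Milstein scheme; the parameter values are arbitrary.

```python
import numpy as np

# A minimal sketch, assuming the SIS SDE
#   dI = [beta I (N - I) - mu_g I] dt + sigma I (N - I) dW,   I(0) in (0, N),
# integrated in the logit variable y = ln(I / (N - I)).  By Ito's formula the
# diffusion of y is the constant sigma * N, so the Milstein correction is zero.

rng = np.random.default_rng(4)
N, beta, mu_g, sigma = 100.0, 0.02, 0.8, 0.01
h, T, I = 0.1, 50.0, 10.0

def step(I, dW):
    drift = (N * beta - mu_g * N / (N - I)
             + 0.5 * sigma ** 2 * N * (2.0 * I - N))   # Ito drift of y
    y = np.log(I / (N - I)) + drift * h + sigma * N * dW
    return N / (1.0 + np.exp(-y))           # back-transform: stays in (0, N)

for _ in range(int(T / h)):
    I = step(I, rng.normal(scale=np.sqrt(h)))
print(f"I(T) = {I:.4f}  (contained in (0, {N:g}) for any h > 0)")
```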

This work puts forth low-complexity Riemannian subspace descent algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold. Different from the existing Riemannian gradient descent variants, the proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix. The resulting updates avoid costly matrix operations such as matrix exponentiation and dense matrix multiplication, which are required by almost all other Riemannian optimization algorithms on the SPD manifold. We further identify a broad class of functions, arising in diverse applications such as kernel matrix learning, covariance estimation of Gaussian distributions, maximum likelihood parameter estimation of elliptically contoured distributions, and parameter estimation in Gaussian mixture models, over which the Riemannian gradients can be calculated efficiently. The proposed uni-directional and multi-directional Riemannian subspace descent variants incur per-iteration complexities of $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$, respectively, compared to the $\mathcal{O}(n^3)$ or higher complexity incurred by all existing Riemannian gradient descent variants. The superior runtime and low per-iteration complexity of the proposed algorithms are also demonstrated via numerical tests on large-scale covariance estimation problems.
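The sketch below illustrates the flavor of a uni-directional update on the SPD manifold: the iterate X = LL^T is modified by multiplying its Cholesky factor by a sparse unit matrix I + t e_i e_j^T, so positive definiteness is preserved by construction and no matrix exponential is needed. The objective (a Gaussian covariance MLE), the random choice of direction, and the crude line search are assumptions of this illustration; the paper's algorithms choose subspaces carefully and evaluate gradients cheaply, which this naive sketch does not attempt.

```python
import numpy as np

# A minimal sketch, assuming the objective f(X) = logdet(X) + tr(S X^{-1})
# (Gaussian covariance MLE, minimized at X = S); direction choice and line
# search are deliberately naive.  The update X+ = L(I + tE)(I + tE)^T L^T
# keeps X+ SPD as long as the new factor has a positive diagonal.

rng = np.random.default_rng(5)
n = 20
A = rng.normal(size=(n, n)); S = A @ A.T / n            # sample covariance

def f(L):
    X = L @ L.T
    return 2.0 * np.sum(np.log(np.diag(L))) + np.trace(np.linalg.solve(X, S))

L = np.eye(n)                                           # factor of X0 = I
f0 = f(L)
for _ in range(2000):
    i, j = rng.integers(n, size=2)
    if i < j:
        i, j = j, i                                     # keep L lower triangular
    E = np.zeros((n, n)); E[i, j] = 1.0
    best_t, best_f = 0.0, f(L)
    for t in (0.2, -0.2, 0.05, -0.05):                  # crude line search
        Lt = L @ (np.eye(n) + t * E)
        if np.all(np.diag(Lt) > 0) and f(Lt) < best_f:
            best_t, best_f = t, f(Lt)
    L = L @ (np.eye(n) + best_t * E)
print(f"f decreased from {f0:.3f} to {f(L):.3f}")
```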

In Gaussian graphical models, the likelihood equations must typically be solved iteratively. We investigate two algorithms: a version of iterative proportional scaling that avoids inversion of large matrices, resulting in increased speed when graphs are sparse, and an algorithm based on convex duality that operates on the covariance matrix by neighbourhood coordinate descent, essentially corresponding to the graphical lasso with zero penalty. For large, sparse graphs, this version of the iterative proportional scaling algorithm appears feasible and has simple convergence properties. The algorithm based on neighbourhood coordinate descent is extremely fast and less dependent on sparsity, but needs a positive definite starting value to converge, which may be difficult to achieve when the number of variables exceeds the number of observations.
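For reference, a textbook version of iterative proportional scaling is sketched below: cycling through the cliques of the graph, the concentration matrix K is adjusted so that the fitted covariance matches the sample covariance on each clique. Note that this plain version forms K^{-1} in every update, which is exactly the large-matrix inversion the paper's variant avoids; the 4-cycle example is illustrative.

```python
import numpy as np

# A minimal sketch of textbook IPS, assuming the cliques are given; it inverts
# only |c| x |c| blocks, but it does form Sigma = K^{-1} each update, which is
# the step the paper's large-sparse-graph variant is designed to avoid.

def ips(S, cliques, n_sweeps=50):
    p = S.shape[0]
    K = np.eye(p)
    for _ in range(n_sweeps):
        for c in cliques:
            c = np.asarray(c)
            Sigma = np.linalg.inv(K)            # current fitted covariance
            K[np.ix_(c, c)] += (np.linalg.inv(S[np.ix_(c, c)])
                                - np.linalg.inv(Sigma[np.ix_(c, c)]))
    return K

# Toy example: a 4-cycle graph, cliques = edges.
rng = np.random.default_rng(6)
X = rng.normal(size=(500, 4))
S = np.cov(X, rowvar=False)
K_hat = ips(S, [(0, 1), (1, 2), (2, 3), (0, 3)])
print(K_hat.round(3))
```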

We consider an unknown multivariate function representing a system, such as a complex numerical simulator, taking both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), occurs for instance in the context of robust (reliability-based) optimization problems, when looking for the set of solutions that satisfy the constraints with sufficiently large probability. To solve the QSI problem, we propose a Bayesian strategy based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle, to sequentially choose the points at which the function should be evaluated to efficiently approximate the set of interest. We illustrate the performance and interest of the proposed SUR strategy through several numerical experiments.
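A rough sketch of the QSI ingredients, with assumed toy choices throughout: a Gaussian process surrogate of f(x,u) is fitted on an initial design, new evaluations are chosen by a simple variance-times-proximity acquisition (a stand-in for the paper's SUR criterion), and the excursion probability alpha(x) is estimated by Monte Carlo over the uncertain input to recover the set where alpha(x) <= p.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A minimal sketch with toy choices: f, the threshold c, the level p, and the
# acquisition rule are all assumptions standing in for the paper's SUR strategy.

rng = np.random.default_rng(7)
f = lambda x, u: np.sin(3.0 * x) + 0.5 * u        # unknown system (toy)
c, p = 0.5, 0.2                                   # output threshold, prob. level

xg = np.linspace(0.0, 1.0, 50)                    # deterministic input grid
ug = rng.normal(scale=0.5, size=30)               # draws of the uncertain input
grid = np.array([(x, u) for x in xg for u in ug])

XU = np.column_stack([rng.random(10), rng.normal(scale=0.5, size=10)])
Y = f(XU[:, 0], XU[:, 1])                         # initial design

for _ in range(20):                               # sequential design loop
    gp = GaussianProcessRegressor(RBF(0.3), alpha=1e-8).fit(XU, Y)
    m, s = gp.predict(grid, return_std=True)
    # Acquire where the GP is both uncertain and close to the threshold c.
    score = s * np.exp(-0.5 * ((m - c) / (s + 1e-9)) ** 2)
    nxt = grid[np.argmax(score)]
    XU = np.vstack([XU, nxt]); Y = np.append(Y, f(*nxt))

alpha = (gp.predict(grid).reshape(50, 30) > c).mean(axis=1)   # P_U(f(x,U) > c)
Gamma = xg[alpha <= p]                            # estimated QSI set
print(f"estimated set contains {Gamma.size} of {xg.size} grid points")
```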

Invariant finite-difference schemes are constructed for the one-dimensional shallow water equations in the presence of a magnetic field, for various bottom topographies. Based on the results of the group classification recently carried out by the authors, finite-difference analogues of the conservation laws of the original differential model are obtained. Some typical problems are considered numerically, comparing the cases with and without a magnetic field (the latter being the standard shallow water model). The invariance of difference schemes in Lagrangian coordinates and the preservation of energy in the obtained numerical solutions are also discussed.
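While reproducing the invariant schemes themselves is beyond a short sketch, the kind of conservation check mentioned above can be illustrated on a minimal conservative discretization of the 1D shallow water equations (flat bottom, no magnetic field): total mass is preserved to machine precision on a periodic grid. The Lax-Friedrichs flux here is an assumption for illustration only and is unrelated to the paper's invariant schemes.

```python
import numpy as np

# A minimal sketch, assuming a flat bottom and no magnetic field, with a
# conservative Lax-Friedrichs discretization used only to demonstrate a
# discrete conservation-law check.

def lax_friedrichs_step(h, q, dx, dt, g=1.0):
    """One step for h_t + q_x = 0, q_t + (q^2/h + g h^2/2)_x = 0, periodic."""
    f1, f2 = q, q ** 2 / h + 0.5 * g * h ** 2
    avg = lambda a: 0.5 * (np.roll(a, -1) + np.roll(a, 1))
    div = lambda f: (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)
    return avg(h) - dt * div(f1), avg(q) - dt * div(f2)

x = np.linspace(0.0, 1.0, 200, endpoint=False); dx = x[1] - x[0]
h = 1.0 + 0.1 * np.exp(-200.0 * (x - 0.5) ** 2)   # initial hump
q = np.zeros_like(x)
mass0 = h.sum() * dx
for _ in range(200):
    h, q = lax_friedrichs_step(h, q, dx, 0.2 * dx)
print(f"relative mass defect: {abs(h.sum() * dx - mass0) / mass0:.2e}")
```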

The main computational cost per iteration of adaptive cubic regularization methods for solving large-scale nonconvex problems is the computation of the step $s_k$, which requires an approximate minimizer of the cubic model. We propose a new approach in which this minimizer is sought in a low-dimensional subspace that, in contrast to classical approaches, is reused for a number of iterations. A regularized Newton step to correct $s_k$ is also incorporated whenever needed. We show that our method increases efficiency while preserving the worst-case complexity of classical cubic regularization methods. We also explore the use of rational Krylov subspaces for the subspace minimization, to overcome some of the issues encountered when using polynomial Krylov subspaces. We provide several experimental results illustrating the gains of the new approach compared to classical implementations.
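A minimal sketch of the subspace idea, assuming a polynomial Krylov subspace and toy data: the cubic model is projected onto an orthonormal basis V (so that ||Vz|| = ||z|| and the cubic term restricts consistently) and minimized as a small d-dimensional problem. In the paper's approach, such a subspace would be reused across iterations and corrected by a regularized Newton step when needed.

```python
import numpy as np
from scipy.optimize import minimize

# A minimal sketch, assuming a polynomial Krylov subspace of dimension d and
# toy (dense) data; sigma is the cubic regularization weight.

rng = np.random.default_rng(8)
n, d, sigma = 200, 8, 1.0
A = rng.normal(size=(n, n)); H = (A + A.T) / 2.0     # toy Hessian
g = rng.normal(size=n)

# Orthonormal basis of span{g, Hg, ..., H^{d-1} g} by Gram-Schmidt.
V = np.zeros((n, d))
v = g / np.linalg.norm(g)
for k in range(d):
    V[:, k] = v
    w = H @ v
    w -= V[:, :k + 1] @ (V[:, :k + 1].T @ w)
    v = w / np.linalg.norm(w)

# Project the cubic model; since V is orthonormal, ||V z|| = ||z||, so the
# cubic term restricts consistently to the subspace.
gd, Hd = V.T @ g, V.T @ (H @ V)
m = lambda z: gd @ z + 0.5 * z @ Hd @ z + (sigma / 3.0) * np.linalg.norm(z) ** 3
z = minimize(m, np.zeros(d)).x                       # small d-dim problem
s = V @ z                                            # step in the full space
print(f"model decrease: {m(z):.4f}")
```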
