
In this paper, we develop a fully conservative, positivity-preserving, and entropy-bounded discontinuous Galerkin scheme for simulating the chemically reacting, compressible Euler equations with complex thermodynamics. The proposed formulation is an extension of the conservative, high-order numerical method previously developed by Johnson and Kercher [J. Comput. Phys., 423 (2020), 109826] that maintains pressure equilibrium between adjacent elements. In this first part of our two-part paper, we focus on the one-dimensional case. Our methodology is rooted in the minimum entropy principle satisfied by entropy solutions to the multicomponent, compressible Euler equations, which was proved by Gouasmi et al. [ESAIM: Math. Model. Numer. Anal., 54 (2020), 373--389] for nonreacting flows. We first show that the minimum entropy principle holds in the reacting case as well. Next, we introduce the ingredients required for the solution to have nonnegative species concentrations, positive density, positive pressure, and bounded entropy. We also discuss how to retain the aforementioned ability to preserve pressure equilibrium between elements. Operator splitting is employed to handle stiff chemical reactions. To guarantee satisfaction of the minimum entropy principle in the reaction step, we develop an entropy-stable discontinuous Galerkin method based on diagonal-norm summation-by-parts operators for solving ordinary differential equations. The developed formulation is used to compute canonical one-dimensional test cases, namely thermal-bubble advection, multicomponent shock-tube flow, and a moving hydrogen-oxygen detonation wave with detailed chemistry. We find that the enforcement of an entropy bound can considerably reduce the large-scale nonlinear instabilities that emerge when only the positivity property is enforced, to an even greater extent than in the monocomponent, calorically perfect case.
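
Although the paper's own limiting procedure is specific to its DG formulation, the following sketch illustrates the standard Zhang-Shu-type scaling idea that positivity-preserving limiters of this kind typically build on: nodal values of a conserved quantity within an element are blended toward the (assumed positive) element average until they clear a small floor. Function and variable names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumed names, not the authors' code) of a Zhang-Shu-type
# scaling limiter: blend nodal values toward the positive element average
# until the minimum clears a small floor.
import numpy as np

def scaling_limiter(nodal_values, cell_average, floor=1e-10):
    """Return nodal values blended toward cell_average so that min >= floor."""
    min_val = nodal_values.min()
    if min_val >= floor:
        return nodal_values                       # element already admissible
    # theta in [0, 1]: 1 keeps the polynomial, 0 collapses it to the average
    theta = np.clip((cell_average - floor) / (cell_average - min_val), 0.0, 1.0)
    return cell_average + theta * (nodal_values - cell_average)

# Example: density nodes in one element with a small undershoot
rho_nodes = np.array([1.2, 0.8, -0.05, 0.9])
rho_bar = rho_nodes.mean()                        # stand-in for the quadrature average
print(scaling_limiter(rho_nodes, rho_bar))
```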

Related content

This paper introduces the exponential substitution calculus (ESC), a new presentation of cut elimination for IMELL, based on proof terms and building on the idea that exponentials can be seen as explicit substitutions. The idea in itself is not new, but here it is pushed to a new level, inspired by Accattoli and Kesner's linear substitution calculus (LSC). One of the key properties of the LSC is that it naturally models the sub-term property of abstract machines, which is the key ingredient for the study of reasonable time cost models for the $\lambda$-calculus. The new ESC is then used to design a cut elimination strategy with the sub-term property, providing the first polynomial cost model for cut elimination with unconstrained exponentials. For the ESC, we also prove untyped confluence and typed strong normalization, showing that it is an alternative to proof nets for an advanced study of cut elimination.
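
The ESC's syntax is the paper's own, but the underlying idea of substitutions acting "at a distance" on single variable occurrences can be illustrated with a toy term representation in the spirit of Accattoli and Kesner's LSC. The sketch below (a Python stand-in, not the ESC) implements one linear-substitution step: an explicit substitution t[x := u] rewrites a single occurrence of x and stays in place, being garbage-collected only once no occurrence remains; occurrences inside nested substitution arguments are deliberately not handled.

```python
# Toy Python stand-in for terms with explicit substitutions (NOT the ESC syntax).
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

@dataclass
class Sub:                   # explicit substitution: body[var := arg]
    body: object
    var: str
    arg: object

def replace_one(t, x, u):
    """Replace a single free occurrence of x in t by u; return (term, found?)."""
    if isinstance(t, Var):
        return (u, True) if t.name == x else (t, False)
    if isinstance(t, Lam):
        if t.param == x:                          # x shadowed under the binder
            return t, False
        body, found = replace_one(t.body, x, u)
        return Lam(t.param, body), found
    if isinstance(t, App):
        fun, found = replace_one(t.fun, x, u)
        if found:
            return App(fun, t.arg), True
        arg, found = replace_one(t.arg, x, u)
        return App(t.fun, arg), found
    if isinstance(t, Sub):
        if t.var == x:                            # x bound by the inner substitution
            return t, False
        body, found = replace_one(t.body, x, u)
        return Sub(body, t.var, t.arg), found
    raise TypeError(t)

def linear_step(t):
    """One linear-substitution step: act on one occurrence, keep the substitution."""
    if isinstance(t, Sub):
        body, found = replace_one(t.body, t.var, t.arg)
        return Sub(body, t.var, t.arg) if found else body   # garbage-collect when unused
    return t

# (x x)[x := y]  ->  (y x)[x := y]  ->  (y y)[x := y]  ->  y y
term = Sub(App(Var("x"), Var("x")), "x", Var("y"))
print(linear_step(linear_step(linear_step(term))))
```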

A semi-Lagrangian Characteristic Mapping method for the solution of the tracer transport equations on the sphere is presented. The method solves for the solution operator of the equations by approximating the inverse of the diffeomorphism generated by a given velocity field. The evolution of any tracer and mass density can then be computed via pullback with this map. We present a spatial discretization of the manifold-valued map using a projection-based approach with spherical spline interpolation. The numerical scheme yields $C^1$ continuity for the map and global second-order accuracy for the solution of the tracer transport equations. Error estimates are provided and supported by convergence tests involving solid body rotation, moving vortices, deformational, and compressible flows. Additionally, we illustrate some features of computing the solution operator using a numerical mixing test and the transport of a fractal set in a complex flow environment.
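
As a rough illustration of the solution-operator viewpoint (in a simplified 1-D periodic setting rather than on the sphere, and without the authors' spherical-spline discretization), the sketch below approximates the backward characteristic map and then transports any tracer by pullback, i.e. by evaluating its initial profile at the mapped points.

```python
# Simplified 1-D periodic analogue (not the authors' spherical-spline scheme) of
# the pullback idea: approximate the backward characteristic map X(t, x), then
# evaluate any initial tracer at the mapped points.
import numpy as np

def backward_map(x, t, velocity, nsub=100):
    """Trace points backward through the flow of `velocity` over time t (explicit Euler)."""
    dt = t / nsub
    for _ in range(nsub):
        x = x - dt * velocity(x)
    return x % 1.0                                        # periodic domain [0, 1)

velocity = lambda x: 0.2 + 0.1 * np.sin(2 * np.pi * x)   # smooth 1-D velocity field
tracer0  = lambda x: np.exp(-100 * (x - 0.5) ** 2)        # initial tracer profile

x = np.linspace(0.0, 1.0, 200, endpoint=False)
X = backward_map(x, t=1.0, velocity=velocity)             # discrete solution operator
tracer_t = tracer0(X)                                     # pullback: tracer(t, x) = tracer0(X(t, x))
```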

Latent variable discovery is a central problem in data analysis with a broad range of applications in applied science. In this work, we consider data given as an invertible mixture of two statistically independent components, and assume that one of the components is observed while the other is hidden. Our goal is to recover the hidden component. For this purpose, we propose an autoencoder equipped with a discriminator. While the standard nonlinear ICA problem was shown to be non-identifiable, we show that in the special case of ICA considered here our approach can recover the component of interest up to an entropy-preserving transformation. We demonstrate the performance of the proposed approach on several datasets, including image synthesis, voice cloning, and fetal ECG extraction.
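
A minimal PyTorch sketch of the kind of architecture described is given below: an encoder produces a latent code intended to capture only the hidden component, a decoder reconstructs the mixture from the code together with the observed component, and a discriminator penalizes dependence between the code and the observed component. Network sizes, losses, and the adversarial weighting are assumptions for illustration, not the paper's exact model.

```python
# Minimal sketch of an autoencoder with an adversarial discriminator (assumed
# architecture and losses, not the paper's exact model).
import torch
import torch.nn as nn

x_dim, u_dim, z_dim = 16, 4, 8          # mixed signal, observed component, latent code

encoder = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim + u_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
discrim = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, u_dim))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discrim.parameters(), lr=1e-3)

def training_step(x, u, lam=1.0):
    """x: mixed observation, u: observed component; returns the autoencoder loss."""
    z = encoder(x)
    # Discriminator tries to predict the observed component from the latent code.
    d_loss = nn.functional.mse_loss(discrim(z.detach()), u)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Autoencoder reconstructs x from (z, u) while making z uninformative about u.
    recon = decoder(torch.cat([z, u], dim=-1))
    loss = nn.functional.mse_loss(recon, x) - lam * nn.functional.mse_loss(discrim(z), u)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x = torch.randn(32, x_dim)              # placeholder batch
u = torch.randn(32, u_dim)
print(training_step(x, u))
```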

The paper tackles the problem of clustering multiple networks that do not share the same set of vertices into groups of networks with similar topology. A model-based statistical approach relying on a finite mixture of stochastic block models is proposed. A clustering is obtained by maximizing the integrated classification likelihood criterion. This is done by a hierarchical agglomerative algorithm that starts from singleton clusters and successively merges clusters of networks. As such, a sequence of nested clusterings is computed that can be represented by a dendrogram, providing valuable insights into the collection of networks. Using a Bayesian framework, model selection is performed in an automated way, since the algorithm stops when the best number of clusters is attained. When carefully implemented, the algorithm is computationally efficient. The aggregation of groups of networks requires a means to overcome the label-switching problem of the stochastic block model and to match the block labels of the graphs. To address this problem, a new tool is proposed based on a comparison of the graphons of the associated stochastic block models. The clustering approach is assessed on synthetic data. An application to a collection of ecological networks illustrates the interpretability of the obtained results.
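
The greedy agglomerative loop described above can be summarized schematically as follows; here `icl` is a hypothetical placeholder for the integrated classification likelihood of a clustering of networks (the graphon-based label matching and the actual criterion are not reproduced).

```python
# Schematic of the greedy agglomerative loop; `icl` is a hypothetical placeholder
# for the integrated classification likelihood of a clustering of networks.
def agglomerative_clustering(networks, icl):
    clusters = [[g] for g in networks]            # start from singleton clusters
    history = [list(clusters)]                    # nested clusterings -> dendrogram
    while len(clusters) > 1:
        current = icl(clusters)
        best_gain, best_pair = 0.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                candidate = [c for k, c in enumerate(clusters) if k not in (i, j)]
                candidate.append(clusters[i] + clusters[j])
                gain = icl(candidate) - current
                if gain > best_gain:
                    best_gain, best_pair = gain, (i, j)
        if best_pair is None:                     # no merge improves the criterion: stop
            break
        i, j = best_pair
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        history.append(list(clusters))
    return clusters, history
```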

We present substantially generalized and improved quantum algorithms over prior work for inhomogeneous linear and nonlinear ordinary differential equations (ODEs). Specifically, we show how the norm of the matrix exponential characterizes the run time of quantum algorithms for linear ODEs, opening the door to an application to a wider class of linear and nonlinear ODEs. In Berry et al. (2017), a quantum algorithm for a certain class of linear ODEs is given, where the matrix involved needs to be diagonalizable. The quantum algorithm for linear ODEs presented here extends to many classes of non-diagonalizable matrices. The algorithm here is also exponentially faster than the bounds derived in Berry et al. (2017) for certain classes of diagonalizable matrices. Our linear ODE algorithm is then applied to nonlinear differential equations using Carleman linearization (an approach taken recently by us in Liu et al. (2021)). The improvement over that result is two-fold. First, we obtain an exponentially better dependence on error. This kind of logarithmic dependence on error has also been achieved by Xue et al. (2021), but only for homogeneous nonlinear equations. Second, the present algorithm can handle any sparse, invertible matrix (that models dissipation) if it has a negative log-norm (including non-diagonalizable matrices), whereas Liu et al. (2021) and Xue et al. (2021) additionally require normality.
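
For concreteness, the sketch below builds the (classical) Carleman linearization that the nonlinear algorithm relies on: a quadratic ODE du/dt = F1 u + F2 (u ⊗ u) is embedded into a linear system on the monomials y_k = u^{⊗k}, truncated at some order N. This is the standard construction, not the quantum algorithm itself, and the example matrices are placeholders.

```python
# Standard Carleman linearization of du/dt = F1 u + F2 (u ⊗ u), truncated at order N.
import numpy as np

def kron_chain(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def carleman_matrix(F1, F2, N):
    """Block matrix A with dy/dt = A y for y = (u, u⊗u, ..., u^{⊗N}), truncated."""
    n = F1.shape[0]
    dims = [n ** k for k in range(1, N + 1)]
    offsets = np.concatenate(([0], np.cumsum(dims)))
    A = np.zeros((offsets[-1], offsets[-1]))
    for k in range(1, N + 1):
        I = [np.eye(n)] * (k - 1)
        diag = sum(kron_chain(I[:i] + [F1] + I[i:]) for i in range(k))
        A[offsets[k-1]:offsets[k], offsets[k-1]:offsets[k]] = diag
        if k < N:                                 # coupling to the next monomial order
            upper = sum(kron_chain(I[:i] + [F2] + I[i:]) for i in range(k))
            A[offsets[k-1]:offsets[k], offsets[k]:offsets[k+1]] = upper
    return A

# Example: scalar ODE du/dt = -u + 0.1 u^2, truncated at order N = 4
F1 = np.array([[-1.0]])
F2 = np.array([[0.1]])
print(carleman_matrix(F1, F2, N=4))
```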

Near-term quantum computers provide a promising platform for finding ground states of quantum systems, which is an essential task in physics, chemistry, and materials science. Near-term approaches, however, are constrained by the effects of noise as well as the limited resources of near-term quantum hardware. We introduce "neural error mitigation," which uses neural networks to improve estimates of ground states and ground-state observables obtained using near-term quantum simulations. To demonstrate our method's broad applicability, we employ neural error mitigation to find the ground states of the H$_2$ and LiH molecular Hamiltonians, as well as the lattice Schwinger model, prepared via the variational quantum eigensolver (VQE). Our results show that neural error mitigation improves numerical and experimental VQE computations to yield low energy errors, high fidelities, and accurate estimations of more-complex observables like order parameters and entanglement entropy, without requiring additional quantum resources. Furthermore, neural error mitigation is agnostic with respect to the quantum state preparation algorithm used, the quantum hardware it is implemented on, and the particular noise channel affecting the experiment, contributing to its versatility as a tool for quantum simulation.

The modeling and identification of time series data with long memory are important in various fields. Streamflow discharge is one such example: it can reasonably be described as an aggregated stochastic process of randomized affine processes in which the probability measure governing the randomization, which we call the reversion measure, is not directly observable. Accurate identification of the reversion measure is critical because of its omnipresence in the aggregated stochastic process. However, the modeling accuracy is commonly limited by the available real-world data. One approach to this issue is to evaluate the upper and lower bounds of a statistic of interest subject to ambiguity of the reversion measure. Here, we use the Tsallis Value-at-Risk (TsVaR) as a convex risk measure to generalize the widely used entropic Value-at-Risk (EVaR) as a sharp statistical indicator. We demonstrate that the EVaR cannot be used to evaluate key statistics, such as the mean and variance, of the streamflow discharge because some exponential integrands blow up. In contrast, the TsVaR avoids this issue because it requires only the existence of a polynomial moment, not an exponential one. As a demonstration, we apply a semi-implicit gradient descent method to calculate the TsVaR and the corresponding Radon-Nikodym derivative for time series data of actual streamflow discharges in mountainous river environments.
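
To fix notation, the sketch below estimates the standard entropic Value-at-Risk from samples using its dual representation EVaR_{1-a}(X) = inf_{z>0} z^{-1} log(E[exp(zX)]/a); the paper's TsVaR replaces the exponential with a Tsallis-type q-exponential so that only polynomial moments are needed, and its exact definition and the semi-implicit solver are not reproduced here.

```python
# Sample-based estimate of the standard entropic Value-at-Risk (EVaR), shown only
# to fix notation; the paper's TsVaR generalizes the exponential and is not coded here.
import numpy as np
from scipy.optimize import minimize_scalar

def evar(samples, alpha=0.05):
    """Entropic Value-at-Risk at confidence level 1 - alpha, estimated from samples."""
    def objective(log_z):                          # optimize over z > 0 via z = exp(log_z)
        z = np.exp(log_z)
        # log E[exp(z X)] computed stably in the log domain
        log_mgf = np.logaddexp.reduce(z * samples) - np.log(len(samples))
        return (log_mgf - np.log(alpha)) / z
    res = minimize_scalar(objective, bounds=(-10.0, 10.0), method="bounded")
    return res.fun

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=0.5, size=10_000)    # e.g. a (log-)discharge statistic
print(evar(x, alpha=0.05))                         # upper-tail risk estimate
```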

This paper presents a new strategy to deal with the excessive diffusion that standard finite volume methods for the compressible Euler equations display in the limit of low Mach number. The strategy can be understood as using centered discretizations for the acoustic part of the Euler equations and stabilizing them with a leap-frog-type ("sequential explicit") time integration, a fully explicit method. This time integration takes inspiration from time-explicit staggered-grid numerical methods; in this way, advantages of staggered methods carry over to collocated methods. The paper provides a number of new collocated schemes for linear acoustic/Maxwell equations that are inspired by the Yee scheme. They are then extended to an all-speed method for the full Euler equations on Cartesian grids. Taking the opposite view, and drawing inspiration from collocated methods, the paper also suggests a new way of staggering the variables which increases the stability as compared to the traditional Yee scheme.
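
For reference, the sketch below is the textbook staggered-grid leapfrog (Yee-like) update for 1-D linear acoustics that such schemes take inspiration from: pressure lives at cell centers and integer time levels, velocity at faces and half time levels. It is only meant to illustrate the "sequential explicit" staggered structure, not the paper's collocated all-speed scheme.

```python
# Textbook 1-D staggered-grid leapfrog scheme for linear acoustics
# (p_t + c^2 u_x = 0, u_t + p_x = 0); illustrative only, not the paper's scheme.
import numpy as np

nx, c, cfl = 200, 1.0, 0.9
dx = 1.0 / nx
dt = cfl * dx / c

xc = (np.arange(nx) + 0.5) * dx                   # cell centers: pressure p
p = np.exp(-200 * (xc - 0.5) ** 2)                # initial pressure pulse
u = np.zeros(nx)                                  # faces (periodic): velocity u, staggered in time

for _ in range(400):
    # u lives at half time levels: update from the pressure gradient across each face
    u -= dt / dx * (p - np.roll(p, 1))
    # p update then uses the freshly updated (time-centered) velocities
    p -= c**2 * dt / dx * (np.roll(u, -1) - u)
```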

Consider the approximation of stochastic Allen-Cahn-type equations (i.e. $1+1$-dimensional space-time white noise-driven stochastic PDEs with polynomial nonlinearities $F$ such that $F(\pm \infty)=\mp \infty$) by a fully discrete space-time explicit finite difference scheme. The consensus in the literature, supported by rigorous lower bounds, is that a strong convergence rate of $1/2$ with respect to the parabolic grid meshsize is expected to be optimal. We show that one can reach almost sure convergence rate $1$ (and no better) when measuring the error in appropriate negative Besov norms, by temporarily `pretending' that the SPDE is singular.
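
A generic instance of the fully discrete scheme in question is sketched below: explicit Euler in time, centered second differences in space, and a scaled Gaussian increment per grid cell approximating the space-time white noise (so the noise term scales like sqrt(dt/dx)). The grid sizes and the cubic nonlinearity u - u^3 are illustrative choices; the Besov-norm error analysis is not reproduced.

```python
# Generic fully discrete, explicit finite-difference scheme for a 1+1-dimensional
# stochastic Allen-Cahn equation  du = (u_xx + u - u^3) dt + dW  with space-time
# white noise; parameters are illustrative.
import numpy as np

nx = 128
dx = 1.0 / nx
dt = 0.25 * dx**2                                  # parabolic grid: dt ~ dx^2
rng = np.random.default_rng(1)

u = np.sin(2 * np.pi * np.arange(nx) * dx)         # initial condition (periodic BCs)
for _ in range(int(1.0 / dt)):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    noise = rng.standard_normal(nx) * np.sqrt(dt / dx)   # discretized space-time white noise
    u = u + dt * (lap + u - u**3) + noise
```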

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
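
As a small numerical companion to the criticality discussion, the sketch below iterates the standard infinite-width, layer-to-layer recursion for the preactivation variance of a deep tanh network, K_{l+1} = C_b + C_W E_{z~N(0,K_l)}[tanh(z)^2]. Near the critical initialization C_W = 1, C_b = 0 the variance decays only slowly (a power law), while sub-critical choices collapse it exponentially and super-critical ones drive it to a nonzero fixed point. This is the standard recursion, not the book's finite-width effective-theory computation.

```python
# Infinite-width variance recursion for a deep tanh network, used here only to
# illustrate tuning to criticality; Monte Carlo estimates the Gaussian expectation.
import numpy as np

def variance_recursion(K0, C_W, C_b=0.0, depth=50, n_mc=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_mc)
    K = K0
    trace = [K]
    for _ in range(depth):
        K = C_b + C_W * np.mean(np.tanh(np.sqrt(K) * z) ** 2)
        trace.append(K)
    return np.array(trace)

for C_W in (0.8, 1.0, 1.5):                        # sub-critical, critical, super-critical
    print(C_W, variance_recursion(K0=1.0, C_W=C_W)[-1])
```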
