
A nonlinear multigrid solver for two-phase flow and transport in a mixed fractional-flow velocity-pressure-saturation formulation is proposed. The solver, developed within the framework of the full approximation scheme (FAS), extends our previous work on nonlinear multigrid for heterogeneous diffusion problems. The coarse spaces in the multigrid hierarchy are constructed by first aggregating degrees of freedom and then solving local flow problems. The mixed formulation and the choice of coarse spaces allow us to assemble the coarse problems without visiting finer levels during the solving phase, which is crucial for the scalability of multigrid methods. In particular, a natural generalization of the upwind flux can be evaluated directly on coarse levels using precomputed coarse flux basis vectors. The resulting solver is applicable to problems discretized on general unstructured grids. The performance of the proposed nonlinear multigrid solver, in comparison with the standard single-level Newton's method, is demonstrated through challenging numerical examples. The proposed solver is observed to be robust for highly nonlinear problems and to clearly outperform Newton's method in the case of high Courant-Friedrichs-Lewy (CFL) numbers.
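
To fix ideas, below is a minimal two-level FAS sketch in Python for a toy 1D problem $-u'' + u^3 = f$; it is our illustration of the full approximation scheme itself, not of the paper's mixed velocity-pressure-saturation formulation or its aggregation-based coarse spaces.

```python
# Minimal two-level FAS cycle for the toy 1D problem -u'' + u^3 = f on (0,1)
# with homogeneous Dirichlet boundary conditions (finite differences).
# This only illustrates the FAS mechanism, not the paper's discretization.
import numpy as np

def A(u, h):
    """Nonlinear operator: -u'' + u^3; boundary rows just return u."""
    Au = np.empty_like(u)
    Au[1:-1] = (-u[:-2] + 2*u[1:-1] - u[2:]) / h**2 + u[1:-1]**3
    Au[0], Au[-1] = u[0], u[-1]
    return Au

def smooth(u, f, h, sweeps=3):
    """Pointwise nonlinear Gauss-Seidel (one Newton update per point)."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            r = f[i] - ((-u[i-1] + 2*u[i] - u[i+1]) / h**2 + u[i]**3)
            u[i] += r / (2/h**2 + 3*u[i]**2)
    return u

def restrict(v):
    """Full weighting onto every other grid point (boundaries copied)."""
    return np.concatenate(([v[0]],
                           0.25*v[1:-2:2] + 0.5*v[2:-1:2] + 0.25*v[3::2],
                           [v[-1]]))

def prolong(v):
    """Linear interpolation back to the fine grid."""
    w = np.zeros(2*len(v) - 1)
    w[::2] = v
    w[1::2] = 0.5*(v[:-1] + v[1:])
    return w

def fas_two_level(u, f, h, coarse_sweeps=50):
    u = smooth(u, f, h)                             # pre-smoothing
    r = f - A(u, h)                                 # fine-grid residual
    uH = restrict(u)                                # restricted approximation
    fH = A(uH, 2*h) + restrict(r)                   # FAS coarse right-hand side
    vH = smooth(uH.copy(), fH, 2*h, coarse_sweeps)  # approximate coarse solve
    u += prolong(vH - uH)                           # coarse-grid correction
    return smooth(u, f, h)                          # post-smoothing

# Manufactured solution u = sin(pi x); ten cycles drive the error down to
# roughly the O(h^2) discretization level.
n = 65; h = 1.0/(n - 1); x = np.linspace(0.0, 1.0, n)
f = np.pi**2*np.sin(np.pi*x) + np.sin(np.pi*x)**3
u = np.zeros(n)
for _ in range(10):
    u = fas_two_level(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi*x)).max())
```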

Related Content

Solutions to many partial differential equations satisfy certain bounds or constraints. For example, the density and pressure are positive for the equations of fluid dynamics, and in the relativistic case the fluid velocity is bounded above by the speed of light. As is widely realized, it is crucial to develop bound-preserving numerical methods that preserve such intrinsic constraints. Exploring provably bound-preserving schemes has attracted much attention and has been actively studied in recent years. It remains, however, a challenging task for many systems, especially those involving nonlinear constraints. Based on some key insights from geometry, we systematically propose an innovative and general framework, referred to as geometric quasilinearization (GQL), which paves a new and effective way for studying bound-preserving problems with nonlinear constraints. The essential idea of GQL is to equivalently recast all nonlinear constraints as linear ones, through properly introducing some free auxiliary variables. We establish the fundamental principle and general theory of GQL via the geometric properties of convex regions, and propose three simple and effective methods for constructing GQL. We apply the GQL approach to a variety of partial differential equations, and demonstrate its effectiveness and remarkable advantages for studying bound-preserving schemes through diverse and challenging examples and applications that cannot easily be handled by direct or traditional approaches.
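
As a concrete illustration of the idea (our example, drawn from the standard positivity-preserving setting for the compressible Euler equations rather than from the abstract itself): the admissible set
$$\mathcal{G} \;=\; \left\{\, U=(\rho,\, m^{\top}\!,\, E)^{\top} \;:\; \rho > 0,\ \ E - \frac{|m|^{2}}{2\rho} > 0 \,\right\},$$
whose second constraint is nonlinear in the conserved variables $U$, admits the equivalent GQL-type representation
$$\mathcal{G} \;=\; \left\{\, U \;:\; \rho > 0,\ \ E - m\cdot v^{*} + \frac{\rho\,|v^{*}|^{2}}{2} > 0 \ \ \forall\, v^{*}\in\mathbb{R}^{d} \,\right\},$$
in which every constraint is linear in $U$ at the price of the free auxiliary variable $v^{*}$; the equivalence follows because minimizing the left-hand side over $v^{*}$ (the minimum is attained at $v^{*}=m/\rho$) recovers $E - |m|^{2}/(2\rho)$.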

Current multi-physics Finite Element Method (FEM) solvers are complex systems in terms of both their mathematical complexity and lines of code. This paper proposes a skeleton generic FEM solver, named MetaFEM, totaling about 5,000 lines of Julia code, which translates generic input Partial Differential Equation (PDE) weak forms into corresponding GPU-accelerated simulations with a grammar similar to FEniCS or FreeFEM. Two novel approaches differentiate MetaFEM from common solvers: (1) the FEM kernel is based on an original theory/algorithm which explicitly processes meta-expressions, as the name suggests, and (2) the symbolic engine is a rule-based Computer Algebra System (CAS), i.e., the equations are rewritten/derived according to a set of rewriting rules instead of going through completely fixed routines, supporting easy customization by developers. Example cases in thermal conduction, linear elasticity and incompressible flow are presented to demonstrate its utility.
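
The rule-based rewriting idea can be illustrated with a deliberately tiny sketch (written in Python purely for illustration; MetaFEM itself is in Julia, and none of the names below belong to its API): expressions are trees, and derivation amounts to repeatedly applying pattern-matching rules until a fixed point is reached.

```python
# Illustration only: a toy rule-based rewriting engine.  The point is just how
# "equations rewritten/derived according to a set of rewriting rules" can look;
# the expression format and rule names here are hypothetical.
Expr = tuple  # expression trees such as ("+", a, b), ("*", a, b), ("d", u)

def rewrite(e, rules):
    """Apply the rules bottom-up until no rule matches (a fixed point)."""
    if not isinstance(e, tuple):
        return e
    e = (e[0], *[rewrite(a, rules) for a in e[1:]])   # rewrite children first
    for rule in rules:
        out = rule(e)
        if out is not None:                           # a rule fired: recurse
            return rewrite(out, rules)
    return e

def d_sum(e):
    """Linearity of the derivative: d(u + v) -> d(u) + d(v)."""
    if e[0] == "d" and isinstance(e[1], tuple) and e[1][0] == "+":
        return ("+", ("d", e[1][1]), ("d", e[1][2]))

def d_prod(e):
    """Product rule: d(u * v) -> d(u)*v + u*d(v)."""
    if e[0] == "d" and isinstance(e[1], tuple) and e[1][0] == "*":
        u, v = e[1][1], e[1][2]
        return ("+", ("*", ("d", u), v), ("*", u, ("d", v)))

print(rewrite(("d", ("+", ("*", "u", "v"), "w")), [d_sum, d_prod]))
# -> ('+', ('+', ('*', ('d', 'u'), 'v'), ('*', 'u', ('d', 'v'))), ('d', 'w'))
```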

We study the meta-learning of numerical algorithms for scientific computing, which combines the mathematically driven, handcrafted design of general algorithm structure with a data-driven adaptation to specific classes of tasks. This represents a departure from the classical approaches in numerical analysis, which typically do not feature such learning-based adaptations. As a case study, we develop a machine learning approach that automatically learns effective solvers for initial value problems in the form of ordinary differential equations (ODEs), based on the Runge-Kutta (RK) integrator architecture. By combining neural network approximations and meta-learning, we show that we can obtain high-order integrators for targeted families of differential equations without the need for computing integrator coefficients by hand. Moreover, we demonstrate that in certain cases we can obtain superior performance to classical RK methods. This can be attributed to certain properties of the ODE families being identified and exploited by the approach. Overall, this work demonstrates an effective, learning-based approach to the design of algorithms for the numerical solution of differential equations, an approach that can be readily extended to other numerical tasks.
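
A stripped-down sketch of the idea follows (ours; the paper uses neural-network approximations and meta-learning rather than a single scalar parameter): keep the handcrafted explicit two-stage Runge-Kutta structure with its order-2 conditions built in, and "learn" the remaining free coefficient for a targeted family of ODEs.

```python
# Toy meta-learning of an RK coefficient for the ODE family y' = lam * y**2,
# whose exact solution y(t) = y0 / (1 - lam*y0*t) is known.
import numpy as np
from scipy.optimize import minimize_scalar

def rk2_step(f, y, h, alpha):
    """Explicit 2-stage RK with the order-2 conditions enforced:
    b2 = 1/(2*alpha), b1 = 1 - b2; alpha = 1/2, 2/3, 1 give the classical
    midpoint, Ralston, and Heun methods respectively."""
    k1 = f(y)
    k2 = f(y + alpha * h * k1)
    b2 = 1.0 / (2.0 * alpha)
    return y + h * ((1.0 - b2) * k1 + b2 * k2)

def family_loss(alpha, lams=np.linspace(-2.0, -0.5, 16), h=0.1, y0=1.0):
    """Mean squared one-step error over the targeted family of ODEs."""
    err = 0.0
    for lam in lams:
        y1 = rk2_step(lambda y: lam * y**2, y0, h, alpha)
        err += (y1 - y0 / (1.0 - lam * y0 * h))**2
    return err / len(lams)

res = minimize_scalar(family_loss, bounds=(0.3, 3.0), method="bounded")
print("learned alpha:", res.x)
# For this particular family the leading (third-order) error term vanishes
# near alpha = 2, away from the classical choices -- a small instance of the
# point that exploiting the structure of the target family can beat standard
# coefficient choices.
```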

This paper considers how to obtain MCMC quantitative convergence bounds which can be translated into tight complexity bounds in high-dimensional settings. We propose a modified drift-and-minorization approach, which establishes generalized drift conditions defined on subsets of the state space. The subsets, called "large sets", are chosen to rule out some "bad" states which have poor drift properties when the dimension of the state space becomes large. Using the "large sets" together with a "fitted family of drift functions", a quantitative bound can be obtained and translated into a tight complexity bound. As a demonstration, we analyze several Gibbs samplers and obtain complexity upper bounds on the mixing time. In particular, for one example of a Gibbs sampler related to the James–Stein estimator, we show that the number of iterations required for the Gibbs sampler to converge is constant under certain conditions on the observed data and the initial state. It is our hope that this modified drift-and-minorization approach can be employed in many other specific examples to obtain complexity bounds for high-dimensional Markov chains.
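
For orientation, a classical drift condition of the kind being generalized reads (our paraphrase of standard background, not the paper's precise statement)
$$\mathbb{E}\big[\,V(X_{n+1}) \mid X_n = x\,\big] \;\le\; \lambda\, V(x) + b\,\mathbf{1}_{C}(x), \qquad 0<\lambda<1,\ \ b<\infty,$$
required for all $x$ in the state space, where $V$ is a drift function and $C$ is a small set; the modified approach instead imposes generalized drift conditions of this type only for $x$ in a "large set" $R$, chosen to exclude the "bad" states whose drift behavior deteriorates as the dimension grows.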

In this paper we present a proof system that operates on graphs instead of formulas. Starting from the well-known relationship between formulas and cographs, we drop the cograph conditions and look at arbitrary (undirected) graphs. This means that we lose the tree structure of the formulas corresponding to the cographs, and we can no longer use standard proof-theoretical methods that depend on that tree structure. In order to overcome this difficulty, we use a modular decomposition of graphs and some techniques from deep inference, where inference rules do not rely on the main connective of a formula. For our proof system we show the admissibility of cut and a generalization of the splitting property. Finally, we show that our system is a conservative extension of multiplicative linear logic with mix, and we argue that our graphs form a notion of generalized connective.
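
For readers unfamiliar with the formula-cograph correspondence invoked above (standard background; the example is ours): the vertices are the atom occurrences of a formula, and two atoms are adjacent exactly when their smallest common connective is a conjunction ($\otimes$), while a disjunction ($\parr$) contributes no edge. Thus $(a \otimes b) \parr (c \otimes d)$ yields the two disjoint edges $ab$ and $cd$, whereas $a \otimes (b \parr c)$ yields the edges $ab$ and $ac$. The graphs arising this way are exactly the cographs (the $P_4$-free graphs), so a graph such as the path on four vertices corresponds to no formula; dropping the cograph condition is precisely what admits such graphs into the proof system.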

We study a generalization of relative submajorization that compares pairs of positive operators on representation spaces of some fixed group. A pair equivariantly relatively submajorizes another if there is an equivariant subnormalized channel that takes the components of the first pair to a pair satisfying positivity constraints similar to those in the definition of relative submajorization. In the context of the resource-theory approach to thermodynamics, this generalization allows one to study transformations by Gibbs-preserving maps that are, in addition, time-translation symmetric. We find a sufficient condition for the existence of catalytic transformations and a characterization of an asymptotic relaxation of the relation. For classical and certain quantum pairs the characterization is in terms of explicit monotone quantities related to the sandwiched quantum Rényi divergences. In the general quantum case the relevant quantities are given only implicitly. Nevertheless, we find a large collection of monotones that provide necessary conditions for asymptotic or catalytic transformations. When applied to time-translation symmetric maps, these give rise to second laws that constrain state transformations allowed by thermal operations even in the presence of catalysts.
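
For reference, the sandwiched quantum Rényi divergence mentioned above is the standard quantity (quoted here as background, not specific to this work): for $\alpha\in(0,1)\cup(1,\infty)$ and suitably supported states $\rho,\sigma$,
$$\widetilde{D}_{\alpha}(\rho\,\|\,\sigma) \;=\; \frac{1}{\alpha-1}\,\log \operatorname{Tr}\!\left[\left(\sigma^{\frac{1-\alpha}{2\alpha}}\,\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\!\alpha}\right].$$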

In this paper we develop a neural network for the numerical simulation of time-dependent linear transport equations with diffusive scaling and uncertainties. The goal of the network is to resolve the computational challenges posed by the curse of dimensionality and the multiple scales of the problem. We first show that standard Physics-Informed Neural Networks (PINNs) fail to capture the multiscale nature of the problem, which justifies the need for Asymptotic-Preserving Neural Networks (APNNs). We show that not all classical AP formulations are fit for the neural network approach. We construct a micro-macro decomposition based neural network, and also build a mass-conservation mechanism into the loss function, in order to capture the dynamic and multiscale nature of the solutions. Numerical examples are used to demonstrate the effectiveness of the proposed APNN.
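
For orientation, a standard micro-macro decomposition for linear transport with diffusive scaling (our paraphrase of the classical formulation on which such APNNs are built) reads as follows: writing the kinetic equation as $\varepsilon\,\partial_t f + v\,\partial_x f = \tfrac{\sigma}{\varepsilon}(\langle f\rangle - f)$, with $\langle\cdot\rangle$ the velocity average, one sets $f = \rho + \varepsilon g$ with $\rho = \langle f\rangle$ and $\langle g\rangle = 0$, which gives the coupled system
$$\partial_t \rho + \partial_x \langle v\,g\rangle = 0, \qquad \varepsilon^{2}\,\partial_t g + \varepsilon\,(I-\Pi)(v\,\partial_x g) + v\,\partial_x \rho = -\sigma\, g, \qquad \Pi f := \langle f\rangle.$$
As $\varepsilon \to 0$, formally $g \to -\tfrac{v}{\sigma}\,\partial_x\rho$ and the macro equation relaxes to the diffusion limit $\partial_t\rho = \partial_x\big(\tfrac{\langle v^2\rangle}{\sigma}\,\partial_x\rho\big)$, which is the multiscale behavior the network and its loss must respect.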

Serverless platforms allow a customer to use cloud resources effectively and pay for exactly the amount of resources used. A number of dedicated open-source and commercial cloud data management tools are available to handle massive amounts of data. However, such modern cloud data management tools are not mature enough to integrate generic cloud applications with the serverless platform, owing to the lack of mature and stable standards. One of the most popular and mature standards, TOSCA (Topology and Orchestration Specification for Cloud Applications), mainly focuses on application and service portability and on the automated management of generic cloud application components. This paper proposes TOSCAdata, an extension of the TOSCA standard that focuses on the modeling of data pipeline-based cloud applications. Keeping in view the requirements of modern data-pipeline cloud applications, TOSCAdata provides a number of TOSCA models that are independently deployable, schedulable, scalable, and reusable, while effectively handling the flow and transformation of data in a pipeline manner. We also demonstrate the applicability of the proposed TOSCAdata models by taking a web-based cloud application in the context of tourism promotion as a use-case scenario.
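
Purely as an illustration of the pipeline-style flow of data through independently reusable components (a plain-Python toy with hypothetical node names; it does not use TOSCA syntax or TOSCAdata's actual node types):

```python
# Illustrative only: a data pipeline as a small DAG of reusable nodes,
# executed in topological order.  Node names and functions are hypothetical.
from graphlib import TopologicalSorter

def fetch(_):       return [3, 1, 2]                  # e.g. pull records from a source
def transform(xs):  return sorted(x * 10 for x in xs) # transform the data in flight
def publish(xs):    print("publishing:", xs); return xs

pipeline = {                                          # node -> set of upstream nodes
    "fetch": set(),
    "transform": {"fetch"},
    "publish": {"transform"},
}
steps = {"fetch": fetch, "transform": transform, "publish": publish}

outputs = {}
for node in TopologicalSorter(pipeline).static_order():
    upstream = next(iter(pipeline[node]), None)       # single upstream in this toy DAG
    outputs[node] = steps[node](outputs.get(upstream))
```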

In order to avoid the curse of dimensionality frequently encountered in big data analysis, there has been extensive development of linear and nonlinear dimension-reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lie on a lower-dimensional manifold, so the high-dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data are often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower-dimensional manifold and suggest a nonlinear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. In this way, the preparatory step of dimension reduction, which introduces distortions into the data, can be avoided altogether.
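
A much-simplified sketch of the projection step is given below (our NumPy illustration; the actual moving least-squares procedure in the paper additionally determines the local coordinate system iteratively and carries the stated smoothness and approximation-order guarantees, which this toy version does not).

```python
# Simplified local-frame + weighted polynomial fit, as a stand-in for a
# moving least-squares projection onto an approximating d-dimensional manifold.
import numpy as np

def mmls_project(x, data, d, h, degree=2):
    """Project a point x (shape (n,)) toward the manifold approximated from
    `data` (shape (N, n)): weighted PCA gives a local tangent frame, then a
    weighted least-squares polynomial predicts the normal coordinates."""
    w = np.exp(-np.sum((data - x)**2, axis=1) / h**2)   # localizing weights
    q = (w[:, None] * data).sum(0) / w.sum()            # weighted local origin
    C = (w[:, None, None] * np.einsum('ij,ik->ijk', data - q, data - q)).sum(0)
    eigval, eigvec = np.linalg.eigh(C)
    T, Nrm = eigvec[:, -d:], eigvec[:, :-d]             # tangent / normal frames
    t = (data - q) @ T                                   # tangent coordinates
    g = (data - q) @ Nrm                                 # normal coordinates
    def design(ts):                                      # polynomial basis (no cross terms)
        cols = [np.ones(len(ts))]
        for k in range(1, degree + 1):
            cols += [ts[:, i]**k for i in range(d)]
        return np.stack(cols, axis=1)
    B = design(t) * np.sqrt(w)[:, None]                  # weighted least squares
    coef, *_ = np.linalg.lstsq(B, g * np.sqrt(w)[:, None], rcond=None)
    tx = (x - q) @ T
    gx = design(tx[None, :]) @ coef
    return q + T @ tx + Nrm @ gx[0]

# Example: noisy samples near the unit circle (a d=1 manifold in R^2).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2*np.pi, 400)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.05*rng.normal(size=(400, 2))
p = mmls_project(np.array([1.1, 0.1]), pts, d=1, h=0.3)
print(p, np.linalg.norm(p))   # the projected point should lie near radius 1
```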

In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller "aggregate" Markov decision problem, whose states relate to the features. The optimal cost function of the aggregate problem, a nonlinear function of the features, serves as an architecture for approximation in value space of the optimal cost function or the cost functions of policies of the original problem. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with reinforcement learning based on deep neural networks, which is used to obtain the needed features. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation, than by the linear function of the features provided by deep reinforcement learning, thereby potentially leading to more effective policy improvement.
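
A schematic sketch of the hard-aggregation core is given below (our illustration; in the approach described above the partition of states would come from clustering feature vectors produced by a deep network, and the paper goes on to approximate policy iteration beyond the plain value iteration shown here).

```python
# Hard aggregation for a finite-state discounted MDP: solve the small
# aggregate problem and lift its optimal cost back to the original states.
import numpy as np

def aggregate_value_iteration(P, g, member, K, gamma, iters=500):
    """member[i] is the aggregate state (0..K-1) of original state i, e.g.
    obtained by clustering feature vectors; P is a list of (S,S) transition
    matrices (one per action) and g a list of (S,) expected stage costs."""
    S = len(member)
    Phi = np.zeros((S, K)); Phi[np.arange(S), member] = 1.0   # aggregation matrix
    D = (Phi / Phi.sum(axis=0)).T                             # uniform disaggregation
    P_hat = [D @ Pa @ Phi for Pa in P]                        # aggregate dynamics
    g_hat = [D @ ga for ga in g]                              # aggregate costs
    J = np.zeros(K)
    for _ in range(iters):                                    # value iteration
        J = np.min([g_hat[a] + gamma * P_hat[a] @ J for a in range(len(P))], axis=0)
    return Phi @ J   # J_tilde(i) = J_hat(member[i]): nonlinear in the features

# Toy usage: random 2-action MDP with 20 states aggregated into 4 groups.
rng = np.random.default_rng(0)
P = [rng.dirichlet(np.ones(20), size=20) for _ in range(2)]
g = [rng.random(20) for _ in range(2)]
J_tilde = aggregate_value_iteration(P, g, member=np.arange(20) % 4, K=4, gamma=0.95)
```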
