
We develop a class of mixed virtual volume methods for elliptic problems on polygonal/polyhedral grids. Unlike the mixed virtual element methods introduced in \cite{brezzi2014basic,da2016mixed}, our methods reduce to symmetric, positive definite problems for the primary variable without using Lagrange multipliers. We start from the usual way of changing the given equation into a mixed system via Darcy's law, $\bu=-{\cal K} \nabla p$. By integrating the system of equations against judiciously chosen test spaces on each element, we define new mixed virtual volume methods of all orders. We show that these new schemes are equivalent to the nonconforming virtual element methods for the primary variable $p$. Once the primary variable is computed by solving the symmetric, positive definite system, all the degrees of freedom for the Darcy velocity are computed locally. Also, the $L^2$-projection onto the polynomial space is easy to compute. Hence our work provides an easy way to compute the Darcy velocity on polygonal/polyhedral grids. For the lowest order case, we give a formula for a Raviart-Thomas-like representation that satisfies the conservation law. An optimal error analysis is carried out, and numerical results supporting the theory are presented.
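For reference, the mixed first-order system obtained from Darcy's law presumably takes the standard form (a sketch of the usual derivation from $-\nabla\cdot({\cal K}\nabla p)=f$, not quoted from the paper):
\[
\bu = -{\cal K}\nabla p
\quad\Longrightarrow\quad
{\cal K}^{-1}\bu + \nabla p = 0, \qquad \nabla\cdot\bu = f,
\]
and the methods above are obtained by testing these two equations against judiciously chosen test spaces element by element.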


We consider the Cauchy problem for a first-order evolution equation with memory in a finite-dimensional Hilbert space, where the integral term involves the time derivative of the solution. The main difficulty in the approximate solution of such nonlocal problems is the need to work with the approximate solution at all previous time levels. We propose a transformation of the first-order integro-differential equation into a system of local evolutionary equations. We use an approach known from the theory of Volterra integral equations, in which the difference kernel is approximated by a sum of exponentials. This yields a local problem for a weakly coupled system of equations with additional ordinary differential equations. We give stability estimates, with respect to the initial data and the right-hand side, for the solution of the corresponding Cauchy problem. The primary attention is paid to constructing and investigating the stability of two-level difference schemes, which are convenient for computational implementation. We present the numerical solution of a two-dimensional model problem for a first-order evolution equation in which the dependence on the spatial variables is governed by the Laplace operator.
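A sketch of the kernel-approximation step described above, in illustrative notation (the kernel $k$, weights $a_j$, rates $b_j$, and auxiliary variables $w_j$ are not the paper's symbols): if the difference kernel is approximated by a sum of exponentials,
\[
k(t) \approx \sum_{j=1}^{m} a_j e^{-b_j t},
\qquad
w_j(t) = \int_0^t a_j e^{-b_j (t-s)}\, u'(s)\, ds,
\]
then each auxiliary variable obeys the local equation $w_j'(t) = a_j u'(t) - b_j w_j(t)$ with $w_j(0)=0$, and the memory term $\int_0^t k(t-s)\,u'(s)\,ds$ is replaced by $\sum_j w_j(t)$. The nonlocal problem thus becomes a weakly coupled system of local evolutionary equations, as stated above.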

We study dynamical Galerkin schemes for evolutionary partial differential equations (PDEs), where the projection operator changes over time. When selecting a subset of basis functions, the projection operator is non-differentiable in time and an integral formulation has to be used. We analyze the projected equations with respect to existence and uniqueness of the solution and prove that non-smooth projection operators introduce dissipation, a result which is crucial for adaptive discretizations of PDEs, e.g., adaptive wavelet methods. For the Burgers equation we illustrate numerically that thresholding the wavelet coefficients, and thus changing the projection space, will indeed introduce dissipation of energy. We discuss consequences for the so-called `pseudo-adaptive' simulations, where time evolution and dealiasing are done in Fourier space, whilst thresholding is carried out in wavelet space. Numerical examples are given for the inviscid Burgers equation in 1D and the incompressible Euler equations in 2D and 3D.
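As a minimal numerical illustration of the dissipation mechanism discussed above (a sketch assuming NumPy and PyWavelets; the signal, wavelet, and threshold are arbitrary choices, not the paper's settings):

```python
import numpy as np
import pywt

# A sawtooth-like profile standing in for a steepening Burgers solution.
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
u = (x % 0.25) - 0.125

# Decompose in an orthogonal wavelet basis and hard-threshold the coefficients.
coeffs = pywt.wavedec(u, "db4", level=6)
coeffs_thr = [pywt.threshold(c, 1e-3, mode="hard") for c in coeffs]
u_thr = pywt.waverec(coeffs_thr, "db4")[: u.size]

# With an orthogonal wavelet, discarding coefficients can only remove energy.
print("energy before thresholding:", np.sum(u**2))
print("energy after  thresholding:", np.sum(u_thr**2))
```

Changing the set of retained coefficients over time corresponds to a time-dependent, non-smooth projection operator of the kind analyzed above.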

In economic theory, an agent chooses from available alternatives -- modeled as a set. In decisions in the field or in the lab, however, agents do not have access to the set of alternatives at once. Instead, alternatives are represented by the outside world in a structured way. Online search results are lists of items, wine menus are often lists of lists (grouped by type or country), and online shopping often involves filtering items, which can be viewed as navigating a tree. Representations constrain how an agent can choose. At the same time, an agent can also leverage representations when choosing, simplifying the choice process. For instance, in the case of a list, an agent can use the order in which alternatives are represented to make a choice. In this paper, we model representations and decision procedures operating on them. We show that choice procedures are related to classical choice functions by a canonical mapping. Using this mapping, we can ask whether properties of choice functions can be lifted onto the choice procedures which induce them. We focus on the obvious benchmark: rational choice. We fully characterize choice procedures which can be rationalized by a strict preference relation for general representations including lists, lists of lists, trees, and others. Our framework can thereby be used as the basis for new tests of rational behavior. Classical choice theory operates on very limited information, typically budgets or menus and final choices. This is in stark contrast to the vast amount of data that web companies in particular collect about their users' choice processes. Our framework offers a way to integrate such data into economic choice models.
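A toy sketch of the list case discussed above (hypothetical names; the paper's formal framework is more general and also covers lists of lists and trees): a procedure that scans a list and keeps the best item so far under a strict preference induces, via the canonical mapping, a classical choice function on the underlying set of alternatives.

```python
from typing import Callable, Iterable, List, TypeVar

A = TypeVar("A")
Prefers = Callable[[A, A], bool]  # strict preference: prefers(x, y) means x is ranked above y

def scan_choice(items: List[A], prefers: Prefers) -> A:
    """Choice procedure on a list representation: keep the best-so-far item."""
    best = items[0]
    for alt in items[1:]:
        if prefers(alt, best):
            best = alt
    return best

def induced_choice(menu: Iterable[A], prefers: Prefers) -> A:
    """Induced classical choice function: run the procedure on some list ordering of the menu."""
    return scan_choice(list(menu), prefers)

# Example with a preference given by a utility ranking (the rationalizable case).
utility = {"wine_a": 3, "wine_b": 1, "wine_c": 2}
prefers = lambda x, y: utility[x] > utility[y]
print(scan_choice(["wine_b", "wine_c", "wine_a"], prefers))  # -> "wine_a"
```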

In this paper, we propose an eXtended Virtual Element Method (X-VEM) for two-dimensional linear elastic fracture. This approach, which is an extension of the standard Virtual Element Method (VEM), facilitates mesh-independent modeling of crack discontinuities and elastic crack-tip singularities on general polygonal meshes. For elastic fracture in the X-VEM, the standard virtual element space is augmented by additional basis functions that are constructed by multiplying standard virtual basis functions by suitable enrichment fields, such as asymptotic mixed-mode crack-tip solutions. The design of the X-VEM requires an extended projector that maps functions lying in the extended virtual element space onto a set spanned by linear polynomials and the enrichment fields. An efficient scheme to compute the mixed-mode stress intensity factors using the domain form of the interaction integral is described. The formulation permits integration of weakly singular functions to be performed over the boundary edges of the element. Numerical experiments are conducted on benchmark mixed-mode linear elastic fracture problems that demonstrate the sound accuracy and optimal convergence in energy of the proposed formulation.
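For concreteness, the crack-tip enrichment fields mentioned above are often taken to be the classical mixed-mode branch functions familiar from extended finite element methods (a sketch of that common choice; the paper's exact enrichment fields may differ):

```python
import numpy as np

def branch_functions(r: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Classical crack-tip branch functions in polar coordinates (r, theta) centered
    at the crack tip; they span the leading-order asymptotic displacement field."""
    sq = np.sqrt(r)
    return np.stack([
        sq * np.sin(theta / 2.0),
        sq * np.cos(theta / 2.0),
        sq * np.sin(theta / 2.0) * np.sin(theta),
        sq * np.cos(theta / 2.0) * np.sin(theta),
    ])

# Evaluate at a few sample points around the tip.
r, theta = np.array([0.1, 0.2]), np.array([0.0, np.pi / 3])
print(branch_functions(r, theta).shape)  # (4, 2)
```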

Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments. This paper derives complexity estimates of the solutions of $d$-dimensional second-order elliptic PDEs in the Barron space, that is, the set of functions admitting an integral representation of a parametric ridge function against a probability measure on the parameters. We prove, under appropriate assumptions, that if the coefficients and the source term of the elliptic PDE lie in Barron spaces, then the solution of the PDE is $\epsilon$-close with respect to the $H^1$ norm to a Barron function. Moreover, we prove dimension-explicit bounds for the Barron norm of this approximate solution, depending at most polynomially on the dimension $d$ of the PDE. As a direct consequence of the complexity estimates, the solution of the PDE can be approximated on any bounded domain by a two-layer neural network with respect to the $H^1$ norm with a dimension-explicit convergence rate.
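One common formalization of the ridge-integral representation mentioned above (illustrative; the paper's precise definition of the Barron space and its norm may differ in detail):
\[
f(x) \;=\; \int a\,\sigma\!\left(w^{\top}x + b\right) d\mu(a,w,b),
\qquad
\|f\|_{\mathcal B} \;=\; \inf_{\mu}\int |a|\left(\|w\|_{1} + |b| + 1\right) d\mu(a,w,b),
\]
where $\sigma$ is an activation function, $\mu$ ranges over probability measures on the parameters $(a,w,b)$ representing $f$, and two-layer networks correspond to measures supported on finitely many parameter tuples.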

Random fields are mathematical structures used to model the spatial interaction of random variables over time, with applications ranging from statistical physics and thermodynamics to systems biology and the simulation of complex systems. Despite being studied since the 19th century, little is known about how the dynamics of random fields are related to the geometric properties of their parametric spaces. For example, how can we quantify the similarity between two random fields operating in different regimes using an intrinsic measure? In this paper, we propose a numerical method for the computation of geodesic distances in Gaussian random field manifolds. First, we derive the metric tensor of the underlying parametric space (the $3 \times 3$ first-order Fisher information matrix), then we derive the 27 Christoffel symbols required in the definition of the system of non-linear differential equations whose solution is a geodesic curve starting at the given initial conditions. The fourth-order Runge-Kutta method is applied to numerically solve the non-linear system through an iterative approach. The obtained results show that the proposed method can estimate the geodesic distances for several different initial conditions. Moreover, the results reveal an interesting pattern: in several cases, the geodesic curve obtained by reversing the system of differential equations in time does not match the original curve, suggesting the existence of irreversible geometric deformations in the trajectory of a moving reference traveling along a geodesic curve.
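A minimal sketch of the geodesic integration described above, assuming a user-supplied routine christoffel(x) returning the $3 \times 3 \times 3$ array of symbols $\Gamma^k_{ij}$ at a point x of the parametric space (that routine, the step size, and the step count are placeholders):

```python
import numpy as np

def geodesic_rhs(state, christoffel):
    """Geodesic equation as a first-order system: x' = v,  (v^k)' = -Gamma^k_ij v^i v^j."""
    x, v = state[:3], state[3:]
    gamma = christoffel(x)                       # shape (3, 3, 3), index order [k, i, j]
    acc = -np.einsum("kij,i,j->k", gamma, v, v)
    return np.concatenate([v, acc])

def rk4_geodesic(x0, v0, christoffel, h=1e-3, steps=1000):
    """Integrate the geodesic starting at x0 with initial velocity v0 by classical RK4."""
    state = np.concatenate([x0, v0])
    path = [x0.copy()]
    for _ in range(steps):
        k1 = geodesic_rhs(state, christoffel)
        k2 = geodesic_rhs(state + 0.5 * h * k1, christoffel)
        k3 = geodesic_rhs(state + 0.5 * h * k2, christoffel)
        k4 = geodesic_rhs(state + h * k3, christoffel)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(state[:3].copy())
    return np.array(path)
```

The geodesic distance can then be approximated by accumulating the Riemannian length of the computed path using the metric tensor.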

The quadrature-based method of moments (QMOM) offers a promising class of approximation techniques for reducing kinetic equations to fluid equations that are valid beyond thermodynamic equilibrium. A major challenge with these and other closures is that whenever the flux function must be evaluated (e.g., in a numerical update), a moment-inversion problem must be solved that computes the flux from the known input moments. In this work we study a particular five-moment variant of QMOM known as HyQMOM and establish that this system is moment-invertible over a convex region in solution space. We then develop a high-order Lax-Wendroff discontinuous Galerkin scheme for solving the resulting fluid system. The scheme is based on a predictor-corrector approach, where the prediction step is a localized space-time discontinuous Galerkin scheme. The nonlinear algebraic system that arises in this prediction step is solved using a Picard iteration. The correction step is a straightforward explicit update using the predicted solution in order to evaluate space-time flux integrals. In the absence of additional limiters, the proposed high-order scheme does not in general guarantee that the numerical solution remains in the convex set over which HyQMOM is moment-invertible. To overcome this challenge, we introduce novel limiters that rigorously guarantee that the computed solution does not leave the convex set over which moment-invertibility and hyperbolicity of the fluid system are guaranteed. We develop positivity-preserving limiters in both the prediction and correction steps, as well as an oscillation limiter that damps unphysical oscillations near shocks and rarefactions. Finally, we perform convergence tests to verify the order of accuracy of the scheme, and test the scheme on Riemann data to demonstrate its shock-capturing ability and robustness.
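A generic sketch of a scaling-type limiter of the kind referenced above (a Zhang-Shu-style construction shown only to illustrate the idea; the paper's limiters are tailored to the convex set on which HyQMOM is moment-invertible):

```python
import numpy as np

def scale_toward_mean(node_values: np.ndarray, cell_mean: float, lower: float) -> np.ndarray:
    """Rescale deviations of nodal values from the cell mean so that every value stays
    above `lower` (e.g. a positivity bound), assuming the cell mean itself is admissible."""
    m = node_values.min()
    if m >= lower:
        return node_values
    theta = (cell_mean - lower) / max(cell_mean - m, 1e-14)
    theta = min(max(theta, 0.0), 1.0)
    return cell_mean + theta * (node_values - cell_mean)

# Example: values at quadrature nodes dipping below an admissible lower bound of zero.
vals = np.array([0.8, 0.05, -0.02, 0.4])
print(scale_toward_mean(vals, cell_mean=vals.mean(), lower=0.0))
```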

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is not well understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required: only with this formulation can convex combinations of valid mappings be computed, since the formulation must account for the structure of the request graphs. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have exponential runtime in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when restricting request graphs to planar ones.
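A toy sketch of the randomized-rounding step described above (hypothetical data structures; the actual algorithm operates on decomposable LP solutions respecting extraction orders and comes with approximation guarantees): sample one candidate mapping per request according to the LP weights and keep it only if the substrate still has capacity.

```python
import random

def round_requests(decompositions, capacities):
    """decompositions: {request: [(weight, {resource: load})]} from the LP decomposition.
    capacities: {resource: capacity}. Returns the accepted requests with their mappings."""
    usage = {res: 0.0 for res in capacities}
    accepted = {}
    for req, mappings in decompositions.items():
        weights = [w for w, _ in mappings]
        _, mapping = random.choices(mappings, weights=weights, k=1)[0]
        if all(usage[res] + load <= capacities[res] for res, load in mapping.items()):
            for res, load in mapping.items():
                usage[res] += load
            accepted[req] = mapping
    return accepted
```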

We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall original problem, in which one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and lower computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
