
In this article, we propose two numerical methods, the Gaussian Process (GP) method and the Fourier Features (FF) algorithm, to solve mean field games (MFGs). The GP algorithm approximates the solution of an MFG with maximum a posteriori probability estimators of GPs conditioned on the partial differential equation (PDE) system of the MFG at a finite number of sample points. The main bottleneck of the GP method is computing the inverse of a square Gram matrix, whose size is proportional to the number of sample points. To improve performance, we introduce the FF method, whose insight comes from the recent trend of approximating positive definite kernels with random Fourier features. The FF algorithm seeks approximate solutions in the space generated by sampled Fourier features. In the FF method, the size of the matrix to be inverted depends only on the number of Fourier features selected, which is much smaller than the number of sample points. Hence, the FF method reduces precomputation time, saves memory, and achieves accuracy comparable to the GP method. We give existence and convergence proofs for both algorithms. The convergence argument of the GP method does not depend on any monotonicity condition, which suggests potential applications of the GP method to MFGs with non-monotone couplings in future work. We demonstrate the efficacy of our algorithms through experiments on a stationary MFG with a non-local coupling and on a time-dependent planning problem. We believe that the FF method can also serve as an alternative algorithm for solving general PDEs.
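
As a self-contained illustration of the kernel-approximation idea behind the FF method (not the MFG solver itself), the following minimal NumPy sketch builds Rahimi–Recht random Fourier features for the Gaussian (RBF) kernel and checks that the induced low-rank Gram matrix approximates the exact one; the bandwidth `sigma` and feature count `D` are illustrative choices.

```python
import numpy as np

def rff_features(X, omega, b):
    """Random Fourier feature map z(x) = sqrt(2/D) * cos(omega @ x + b),
    so that z(x) . z(y) ~ k(x, y) for the kernel whose spectral density
    generated omega (here: the Gaussian/RBF kernel)."""
    D = omega.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ omega.T + b)

rng = np.random.default_rng(0)
d, D, sigma = 2, 2000, 1.0            # input dim, feature count, RBF bandwidth
omega = rng.normal(scale=1.0 / sigma, size=(D, d))   # spectral samples of the RBF kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

X = rng.normal(size=(5, d))
Z = rff_features(X, omega, b)
K_approx = Z @ Z.T                    # rank-D surrogate for the Gram matrix
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists / sigma ** 2)
err = np.abs(K_approx - K_exact).max()
```

Least-squares problems posed in this feature space involve only a system whose size is the feature count, which is the mechanism by which a Fourier-features approach shrinks the matrix to be inverted.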


In $d$ dimensions, approximating an arbitrary function oscillating with frequency $\lesssim k$ requires $\sim k^d$ degrees of freedom. A numerical method for solving the Helmholtz equation (with wavenumber $k$) suffers from the pollution effect if, as $k\to \infty$, the total number of degrees of freedom needed to maintain accuracy grows faster than this natural threshold. While the $h$-version of the finite element method (FEM) (where accuracy is increased by decreasing the meshwidth $h$ and keeping the polynomial degree $p$ fixed) suffers from the pollution effect, the celebrated papers [Melenk, Sauter 2010], [Melenk, Sauter 2011], [Esterhazy, Melenk 2012], and [Melenk, Parsania, Sauter 2013] showed that the $hp$-FEM (where accuracy is increased by decreasing the meshwidth $h$ and increasing the polynomial degree $p$) applied to a variety of constant-coefficient Helmholtz problems does not suffer from the pollution effect. The heart of the proofs of these results is a PDE result splitting the solution of the Helmholtz equation into "high" and "low" frequency components. The main novelty of the present paper is that we prove this splitting for the constant-coefficient Helmholtz equation in full space (i.e., in $\mathbb{R}^d$) using only integration by parts and elementary properties of the Fourier transform (in contrast to the proof for this set-up in [Melenk, Sauter 2010], which uses somewhat-involved bounds on Bessel and Hankel functions). We combine this splitting with (i) standard arguments about convergence of the FEM applied to the Helmholtz equation (the so-called "Schatz argument", which we reproduce here) and (ii) polynomial-approximation results (which we quote from the literature without proof) to give a simple proof that the $hp$-FEM does not suffer from the pollution effect for the constant-coefficient full-space Helmholtz equation.
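
The low/high-frequency splitting at the heart of the argument can be illustrated with a sharp Fourier cutoff. The following NumPy sketch (a 1-D periodic toy, not the paper's full-space construction) splits a sampled signal at a cutoff frequency `k_cut`:

```python
import numpy as np

def frequency_split(u, dx, k_cut):
    """Split a sampled 1-D periodic signal into low- and high-frequency
    parts with a sharp Fourier cutoff at |xi| <= k_cut (an indicator
    multiplier; the paper's splitting cuts at a multiple of the wavenumber k)."""
    n = len(u)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # angular frequencies
    u_hat = np.fft.fft(u)
    low = np.fft.ifft(np.where(np.abs(xi) <= k_cut, u_hat, 0)).real
    high = u - low
    return low, high

# Example: a slow and a fast oscillation, with the cutoff between them.
n, L = 1024, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
u = np.sin(3 * x) + 0.5 * np.sin(40 * x)
low, high = frequency_split(u, L / n, k_cut=10.0)
```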

Gaussian mixture models (GMM) are fundamental tools in statistical and data sciences. We study the moments of multivariate Gaussians and GMMs. The $d$-th moment of an $n$-dimensional random variable is a symmetric $d$-way tensor of size $n^d$, so working with moments naively is prohibitively expensive for $d>2$ and larger values of $n$. In this work, we develop theory and numerical methods for implicit computations with moment tensors of GMMs, reducing the computational and storage costs to $\mathcal{O}(n^2)$ and $\mathcal{O}(n^3)$, respectively, for general covariance matrices, and to $\mathcal{O}(n)$ and $\mathcal{O}(n)$, respectively, for diagonal ones. We derive concise analytic expressions for the moments in terms of symmetrized tensor products, relying on the correspondence between symmetric tensors and homogeneous polynomials, and combinatorial identities involving Bell polynomials. The primary application of this theory is to estimating GMM parameters from a set of observations, when formulated as a moment-matching optimization problem. When the covariance matrix is known and common to all components, we also show that it is possible to debias the data observations, in which case the problem of estimating the unknown means reduces to symmetric CP tensor decomposition. Numerical results validate and illustrate the efficiency of our approaches. This work potentially opens the door to the competitiveness of the method of moments as compared to expectation maximization methods for parameter estimation of GMMs.
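
The $d=2$ case of such moment formulas is easy to state and check: for a mixture with weights $w_j$, means $\mu_j$, and covariances $\Sigma_j$, the second moment is $\mathbb{E}[xx^\top]=\sum_j w_j(\Sigma_j+\mu_j\mu_j^\top)$. A hedged NumPy sketch (illustrative only, not the paper's implicit tensor machinery) verifies this against stratified Monte Carlo sampling:

```python
import numpy as np

def gmm_second_moment(weights, means, covs):
    """Closed-form second moment of a Gaussian mixture:
    E[x x^T] = sum_j w_j (Sigma_j + mu_j mu_j^T)."""
    return sum(w * (C + np.outer(m, m)) for w, m, C in zip(weights, means, covs))

rng = np.random.default_rng(1)
n = 3
weights = [0.3, 0.7]
means = [rng.normal(size=n) for _ in range(2)]
covs = []
for _ in range(2):
    A = rng.normal(size=(n, n))
    covs.append(A @ A.T + n * np.eye(n))       # symmetric positive definite

M2 = gmm_second_moment(weights, means, covs)

# Stratified Monte Carlo check: the expected number of samples per component.
N = 200_000
samples = np.vstack([rng.multivariate_normal(means[j], covs[j], size=int(w * N))
                     for j, w in enumerate(weights)])
M2_mc = samples.T @ samples / N
err = np.abs(M2 - M2_mc).max()
```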

We propose a collocation method based on multivariate polynomial splines over triangulations or tetrahedralizations for the numerical solution of partial differential equations. We start with a detailed explanation of the method for the Poisson equation and then extend the study to second-order elliptic PDEs in non-divergence form. We show that the numerical solution approximates the exact PDE solution very well, and we present extensive numerical experiments to demonstrate the performance of the method in both 2D and 3D settings. In addition, we present a comparison with the existing multivariate spline methods in \cite{ALW06} and \cite{LW17} to show that the new method produces a similar and sometimes more accurate approximation in a more efficient fashion.
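
To illustrate the collocation principle in its simplest form (a 1-D sine basis rather than the multivariate splines over triangulations used in the paper), the following sketch enforces $-u''=f$ at interior collocation points and recovers a manufactured solution:

```python
import numpy as np

def solve_collocation(f, m):
    """Collocation for -u'' = f on (0,1) with u(0) = u(1) = 0, using the
    basis phi_j(x) = sin(j pi x) so the boundary conditions hold exactly.
    The PDE is enforced at m equispaced interior collocation points."""
    xs = np.arange(1, m + 1) / (m + 1)
    j = np.arange(1, m + 1)
    A = (j * np.pi) ** 2 * np.sin(np.pi * np.outer(xs, j))   # -phi_j''(x_i)
    c = np.linalg.solve(A, f(xs))
    return lambda x: np.sin(np.pi * np.outer(np.atleast_1d(x), j)) @ c

# Manufactured solution u(x) = sin(2 pi x), so f = -u'' = 4 pi^2 sin(2 pi x).
u = solve_collocation(lambda x: 4 * np.pi ** 2 * np.sin(2 * np.pi * x), m=8)
x_test = np.linspace(0.0, 1.0, 101)
err = np.abs(u(x_test) - np.sin(2 * np.pi * x_test)).max()
```

Because the manufactured solution lies in the span of the basis, collocation recovers it to machine precision; for general solutions the error instead decays with the basis size.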

Determining whether saddle points exist or are approximable for nonconvex-nonconcave problems is usually intractable. We take a step towards understanding certain nonconvex-nonconcave minimax problems that do remain tractable. Specifically, we study minimax problems cast in geodesic metric spaces, which provide a vast generalization of the usual convex-concave saddle point problems. The first main result of the paper is a geodesic metric space version of Sion's minimax theorem; we believe our proof is novel and transparent, as it relies only on Helly's theorem. In our second main result, we specialize to geodesically complete Riemannian manifolds: we devise and analyze the complexity of first-order methods for smooth minimax problems.
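
In the Euclidean (flat) special case, a representative first-order method for smooth saddle-point problems is the extragradient method; the sketch below (an illustrative toy, not the paper's Riemannian algorithm) applies it to the bilinear problem $\min_x\max_y\, xy$, whose unique saddle point is the origin:

```python
def extragradient(grad_x, grad_y, x, y, eta=0.1, iters=2000):
    """Extragradient method for min_x max_y f(x, y): evaluate the gradients
    at an extrapolated half step, then update from the original point."""
    for _ in range(iters):
        xh = x - eta * grad_x(x, y)     # extrapolation (half) step
        yh = y + eta * grad_y(x, y)
        x = x - eta * grad_x(xh, yh)    # update using half-step gradients
        y = y + eta * grad_y(xh, yh)
    return x, y

# Bilinear toy problem f(x, y) = x * y, with unique saddle point (0, 0).
gx = lambda x, y: y                     # df/dx
gy = lambda x, y: x                     # df/dy
x_star, y_star = extragradient(gx, gy, x=1.0, y=1.0)
```

Plain simultaneous gradient descent-ascent spirals outward on this bilinear objective; the extrapolation step is what makes the iterates contract toward the saddle point.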

A new clustering accuracy measure is proposed to determine the unknown number of clusters and to assess the quality of a clustering of a data set given in any dimensional space. Our validity index applies classical nonparametric univariate kernel density estimation to the interpoint distances computed between the members of the data set. Being based on interpoint distances, it is free of the curse of dimensionality and therefore efficiently computable in high-dimensional situations where the number of study variables can be larger than the sample size. The proposed measure is compatible with any clustering algorithm and with any kind of data set for which an interpoint distance measure with a density function can be defined. A simulation study demonstrates its superiority over widely used cluster validity indices such as the average silhouette width and the Dunn index, and its applicability is shown through a high-dimensional biostatistical study of the Alon data set and a large astrostatistical application to time series of light curves of new variable stars.
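
The basic construction — reduce data of any dimension to the one-dimensional sample of interpoint distances, then apply univariate KDE — can be sketched as follows (a hedged illustration using SciPy's Gaussian KDE; the paper's specific validity index built on this density is not reproduced here):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Two well-separated clusters in 50 dimensions; n_features > n_samples is
# unproblematic because the construction only ever sees the 1-D sample of
# interpoint distances.
X = np.vstack([rng.normal(0.0, 1.0, size=(15, 50)),
               rng.normal(8.0, 1.0, size=(15, 50))])

d = pdist(X)                 # all 30*29/2 pairwise Euclidean distances
kde = gaussian_kde(d)        # univariate kernel density estimate of the distances

# Clustered data gives a multimodal distance density: a within-cluster mode
# near sqrt(2*50) ~ 10 and a between-cluster mode near 8*sqrt(50) ~ 57.
within, valley, between = kde([10.0])[0], kde([30.0])[0], kde([57.0])[0]
```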

We present symbolic and numerical methods for computing Poisson brackets on the spaces of measures with positive densities of the plane, the 2-torus, and the 2-sphere. We apply our methods to compute symplectic areas of finite regions for the case of the 2-sphere, including an explicit example for Gaussian measures with positive densities.
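
For orientation, the canonical Poisson bracket on the plane is $\{f,g\}=\partial_x f\,\partial_y g-\partial_y f\,\partial_x g$. A minimal symbolic sketch (SymPy, illustrative only — not the measure-space computations of the paper):

```python
import sympy as sp

x, y = sp.symbols('x y')

def poisson_bracket(f, g):
    """Canonical Poisson bracket on the plane: {f, g} = f_x g_y - f_y g_x."""
    return sp.simplify(sp.diff(f, x) * sp.diff(g, y)
                       - sp.diff(f, y) * sp.diff(g, x))

b1 = poisson_bracket(x, y)                       # coordinates are conjugate: {x, y} = 1
b2 = poisson_bracket(x**2 + y**2, x**2 + y**2)   # any bracket with itself vanishes
b3 = poisson_bracket((x**2 + y**2) / 2, x)       # Hamiltonian H generates rotations: {H, x} = -y
```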

We develop a novel a posteriori error estimator for the $L^2$ error committed by the finite element discretization of the solution of the fractional Laplacian. Our a posteriori error estimator takes advantage of a semi-discretization scheme using a rational approximation, which allows us to reformulate the fractional problem as a family of non-fractional parametric problems. The estimator applies the implicit Bank-Weiser error estimation strategy to each parametric non-fractional problem and reconstructs the fractional error through the same rational approximation used to compute the solution of the original fractional problem. We provide several numerical examples in both two and three dimensions demonstrating the effectivity of our estimator for varying fractional powers and its ability to drive an adaptive mesh refinement strategy.
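
The reformulation step can be illustrated on a matrix fractional power: a Balakrishnan-type quadrature writes $A^{-s}b$ as a weighted sum of shifted, non-fractional solves $(A+t_jI)^{-1}b$. The sketch below (a generic rational approximation applied to a 1-D discrete Laplacian, not the paper's scheme or estimator) compares it against an eigendecomposition reference:

```python
import numpy as np

def frac_inverse_apply(A, b, s, h=0.25, J=80):
    """Approximate A^{-s} b via the Balakrishnan integral
        A^{-s} = (sin(pi s)/pi) * int_0^inf t^{-s} (A + t I)^{-1} dt,
    discretized at t_j = exp(j h): every quadrature node is a single
    *non-fractional* shifted solve, mirroring the parametric reformulation."""
    n = len(b)
    x = np.zeros(n)
    for j in range(-J, J + 1):
        t = np.exp(j * h)
        x += t ** (1.0 - s) * np.linalg.solve(A + t * np.eye(n), b)
    return (np.sin(np.pi * s) / np.pi) * h * x

# 1-D finite-difference Laplacian with Dirichlet conditions as a test operator.
n = 50
A = ((n + 1) ** 2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
b = np.ones(n)
s = 0.5

x_rational = frac_inverse_apply(A, b, s)

# Reference: A^{-s} b through the eigendecomposition of the SPD matrix A.
lam, V = np.linalg.eigh(A)
x_exact = V @ ((V.T @ b) / lam ** s)
rel_err = np.linalg.norm(x_rational - x_exact) / np.linalg.norm(x_exact)
```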

In this study, we generalize the problem of sampling a scalar Gauss-Markov process, namely the Ornstein-Uhlenbeck (OU) process, in which samples are sent to a remote estimator that makes a causal estimate of the observed real-time signal. In recent years, this problem has been solved for stable OU processes. We present optimal sampling policies that achieve a smaller estimation error for both the stable and the unstable cases of the OU process, along with the special case in which the OU process reduces to a Wiener process. The obtained optimal sampling policy is a threshold policy in all three cases, although the thresholds differ. We then consider samples corrupted by additive noise, with the sampling decision made beforehand; the estimator uses the noisy samples to estimate the current signal value. The noise changes the mean-square error (MSE) relative to the noiseless setting, and we derive the additional term in the MSE, which provides a performance upper bound and opens the way for further investigation into optimal sampling strategies that minimize the estimation error when the observed samples are noisy. Numerical results show the performance degradation caused by the additive noise.
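
A hedged simulation sketch of the threshold idea (illustrative parameters, not the optimal thresholds derived in the paper): transmit a new sample of a stable OU process whenever the estimation error exceeds a threshold `beta`, with the estimator holding the conditional-mean prediction between samples.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 1.0, 1.0        # stable OU: dX = -theta X dt + sigma dW
dt, T = 1e-3, 50.0
beta = 0.5                     # illustrative threshold, not the optimal one

X, Xhat = 0.0, 0.0
samples, sq_err = 0, 0.0
for _ in range(int(T / dt)):
    X += -theta * X * dt + sigma * np.sqrt(dt) * rng.normal()
    Xhat *= np.exp(-theta * dt)         # conditional-mean prediction between samples
    if abs(X - Xhat) > beta:            # threshold policy: transmit a new sample
        Xhat = X
        samples += 1
    sq_err += (X - Xhat) ** 2 * dt

mse = sq_err / T                        # time-averaged estimation error
rate = samples / T                      # sampling frequency
```

By construction the held error is reset whenever it crosses `beta`, so the time-averaged MSE stays below `beta**2`; tuning `beta` trades estimation error against sampling rate.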

Backward Stochastic Differential Equations (BSDEs) have been widely employed in various areas of the social and natural sciences, such as the pricing and hedging of financial derivatives, stochastic optimal control problems, optimal stopping problems, and gene expression. Most BSDEs cannot be solved analytically, so numerical methods must be applied to approximate their solutions. A variety of numerical methods have been proposed over the past few decades, with many more currently being developed. For the most part, these methods exist in a complex and scattered manner across the literature, each with its own assumptions and conditions. The aim of the present work is thus to systematically survey numerical methods for BSDEs and, in particular, to compare and categorize them as a basis for further developments and improvements. To achieve this goal, we focus primarily on the core features of each method, drawing on an exhaustive collection of 289 references: the main assumptions, the numerical algorithm itself, key convergence properties, and advantages and disadvantages. We thereby provide full, up-to-date coverage of numerical methods for BSDEs, with an insightful summary of each and a useful comparison and categorization.
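
As a concrete taste of one widely used family of schemes — regression-based (least-squares Monte Carlo) backward induction — the sketch below solves the simplest possible BSDE, with zero driver and terminal condition $g(W_T)=W_T^2$, for which $Y_t = W_t^2 + (T-t)$ and hence $Y_0=T$ exactly. It illustrates the regression step shared by many surveyed schemes, not any single method:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 100_000, 50, 1.0          # paths, time steps, horizon
dt = T / N

# Forward Brownian paths.
dW = np.sqrt(dt) * rng.normal(size=(M, N))
W = np.hstack([np.zeros((M, 1)), np.cumsum(dW, axis=1)])

# Backward induction: with zero driver, Y_i = E[Y_{i+1} | W_i], realized as a
# least-squares regression of Y_{i+1} onto the basis {w^2, w, 1} of W_i.
Y = W[:, -1] ** 2                   # terminal condition g(W_T) = W_T^2
for i in range(N - 1, 0, -1):
    basis = np.vander(W[:, i], 3)   # columns: w^2, w, 1
    coef, *_ = np.linalg.lstsq(basis, Y, rcond=None)
    Y = basis @ coef                # projected conditional expectation
Y0 = Y.mean()                       # at t = 0, W_0 = 0, so Y_0 = E[Y_1]
```

Since the exact conditional expectation lies in the regression basis here, `Y0` matches the analytic value `T` up to Monte Carlo error; nonzero drivers add a driver term at each backward step.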

Image segmentation is the process of partitioning an image into meaningful regions that are easier to analyze. Nowadays, segmentation has become a necessity in many practical medical imaging tasks, such as locating tumors and diseased tissue. The Hidden Markov Random Field (HMRF) model is one of several techniques used in image segmentation; it provides an elegant way to model the segmentation process, leading to the minimization of an objective function. The conjugate gradient (CG) algorithm is one of the best-known optimization techniques. This paper proposes the use of the CG algorithm for image segmentation based on the Hidden Markov Random Field model. Since derivatives of this objective are not available, finite differences are used in the CG algorithm to approximate the first derivative. The approach is evaluated on a number of publicly available images for which ground truth is known, with the Dice coefficient used as an objective criterion to measure segmentation quality. The results show that the proposed CG approach compares favorably with other variants of HMRF segmentation algorithms.
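
The derivative-free CG setup can be sketched with SciPy: passing no Jacobian to `minimize(..., method='CG')` makes SciPy approximate the gradient by finite differences. The toy objective below is a hypothetical stand-in for an HMRF-style energy (data fidelity plus a smoothness coupling), not the paper's segmentation energy:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for an HMRF-style energy: data-fidelity terms pulling
# two class means toward their observations, plus a smoothness coupling.
obs = np.array([0.2, 0.1, 0.9, 1.1])

def energy(mu, lam=0.5):
    data = np.sum((obs[:2] - mu[0]) ** 2) + np.sum((obs[2:] - mu[1]) ** 2)
    smooth = lam * (mu[1] - mu[0]) ** 2
    return data + smooth

# With no jac= supplied, method='CG' approximates the gradient by finite
# differences, matching the derivative-free setting described above.
res = minimize(energy, x0=np.zeros(2), method='CG')
mu_star = res.x                     # analytic minimizer: (7/24, 103/120)
```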
