
There has recently been much interest in Gaussian processes on linear networks and, more generally, on compact metric graphs. One proposed strategy for defining such processes on a metric graph $\Gamma$ is through a covariance function that is isotropic in a metric on the graph. Another is through a fractional-order differential equation $L^\alpha (\tau u) = \mathcal{W}$ on $\Gamma$, where $L = \kappa^2 - \nabla(a\nabla)$ for (sufficiently nice) functions $\kappa, a$, and $\mathcal{W}$ is Gaussian white noise. We study Markov properties of these two types of fields. We first show that there are no Gaussian random fields on general metric graphs that are both isotropic and Markov. We then show that the second type of field, the generalized Whittle--Mat\'ern field, is Markov if and only if $\alpha\in\mathbb{N}$, in which case the field is Markov of order $\alpha$: essentially, the process in a region $S\subset\Gamma$ is conditionally independent of the process in $\Gamma\setminus S$ given the values of the process and its first $\alpha-1$ derivatives on $\partial S$. Finally, we show that the Markov property implies an explicit characterization of the process on a fixed edge $e$, which in particular shows that the conditional distribution of the process on $e$ given the values at the two vertices connected to $e$ is independent of the geometry of $\Gamma$.
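
For integer $\alpha$, the Markov structure makes these fields cheap to sample on a single edge via the standard SPDE--GMRF link. The sketch below is an assumption-laden illustration, not code from the paper: constant $\kappa$, $a \equiv 1$, a single edge with Neumann ends, and a hypothetical `whittle_matern_sample` helper.

```python
import numpy as np
from scipy import sparse

def whittle_matern_sample(n=200, length=1.0, kappa=8.0, tau=1.0, alpha=1, seed=0):
    # Discretise L = kappa^2 - d^2/dx^2 on one edge with a lumped mass
    # matrix C and stiffness matrix G (Neumann ends), so that for integer
    # alpha the precision of u is Q = tau^2 * K (C^{-1} K)^{alpha-1},
    # with K = kappa^2 C + G (the standard SPDE/GMRF link).
    h = length / (n - 1)
    c = np.full(n, h); c[0] = c[-1] = h / 2
    C = sparse.diags(c)
    main = np.full(n, 2.0); main[0] = main[-1] = 1.0
    G = sparse.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1]) / h
    K = (kappa ** 2) * C + G
    Cinv = sparse.diags(1.0 / c)
    Q = K
    for _ in range(alpha - 1):
        Q = K @ Cinv @ Q
    Q = (tau ** 2) * Q
    # draw u ~ N(0, Q^{-1}): if Q = R R^T then u = R^{-T} z, z ~ N(0, I)
    rng = np.random.default_rng(seed)
    R = np.linalg.cholesky(Q.toarray())        # dense for clarity
    u = np.linalg.solve(R.T, rng.standard_normal(n))
    return u, Q
```

For $\alpha = 1$ the precision is tridiagonal, reflecting the order-1 Markov property; each extra power of $L$ widens the bandwidth by one, matching the order-$\alpha$ Markov property described above.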

Related content

The efficient representation of random fields on geometrically complex domains is crucial for Bayesian modelling in engineering and machine learning. Today's prevalent random field representations are restricted to unbounded domains or are too restrictive in terms of possible field properties. As a result, new techniques leveraging the historically established link between stochastic PDEs (SPDEs) and random fields are especially appealing for engineering applications with complex geometries that already have a finite element discretisation for solving the physical conservation equations. Unlike the dense covariance matrix of a random field, its inverse, the precision matrix, is usually sparse and equal to the stiffness matrix of a Helmholtz-like SPDE. In this paper, we use the SPDE representation to develop a scalable framework for large-scale statistical finite element analysis (statFEM) and Gaussian process (GP) regression on geometrically complex domains. We use the SPDE formulation to obtain the relevant prior probability densities with a sparse precision matrix. The properties of the priors are governed by the parameters and possibly fractional order of the Helmholtz-like SPDE, so that we can model anisotropic, non-homogeneous random fields with arbitrary smoothness on bounded domains and manifolds. We assemble the sparse precision matrix on the same finite element mesh used for solving the physical conservation equations. The observation models for statFEM and GP regression are such that the posterior probability densities are Gaussians with a closed-form mean and precision. The expressions for the mean vector and the precision matrix can be evaluated using only sparse matrix operations. We demonstrate the versatility of the proposed framework and its convergence properties with one- and two-dimensional Poisson and thin-shell examples.
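
A minimal one-dimensional sketch of the sparse-precision workflow (hypothetical `gp_posterior` helper; finite differences stand in for the paper's finite element assembly, with unit diffusion, exponent one, and point observations):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def gp_posterior(y, obs_idx, n=100, kappa=10.0, noise_var=1e-4):
    # Prior precision Q from a 1-d Helmholtz-like operator kappa^2 - Laplacian
    # (finite differences as a stand-in for FEM assembly on the physics mesh).
    h = 1.0 / (n - 1)
    c = np.full(n, h); c[[0, -1]] = h / 2
    main = np.full(n, 2.0); main[[0, -1]] = 1.0
    G = sparse.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1]) / h
    Q = (kappa ** 2) * sparse.diags(c) + G
    # observation operator A picks out the observed nodes
    m = len(obs_idx)
    A = sparse.csr_matrix((np.ones(m), (np.arange(m), obs_idx)), shape=(m, n))
    # Gaussian posterior: closed-form precision and mean, all sparse operations
    Q_post = (Q + (A.T @ A) / noise_var).tocsc()
    mean = spsolve(Q_post, (A.T @ y) / noise_var)
    return mean, Q_post
```

Note that the posterior precision is the prior precision plus a sparse data term, so it inherits the sparsity of the stiffness matrix; only sparse solves are needed for the mean.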

We introduce and analyze a new finite-difference scheme, relying on the theta-method, for solving monotone second-order mean field games. These games are described by a coupled system of Fokker-Planck and Hamilton-Jacobi-Bellman equations. The theta-method is used for discretizing the diffusion terms: we approximate them with a convex combination of an implicit and an explicit term. In contrast, we use an explicit centered scheme for the first-order terms. Assuming that the running cost is strongly convex and regular, we first prove the monotonicity and the stability of our theta-scheme, under a CFL condition. Taking advantage of the regularity of the solution of the continuous problem, we estimate the consistency error of the theta-scheme. Our main result is a convergence rate of order $\mathcal{O}(h^r)$ for the theta-scheme, where $h$ is the step length of the space variable and $r \in (0,1)$ is related to the H\"older continuity of the solution of the continuous problem and some of its derivatives.
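
The theta-method itself is easiest to see on the pure diffusion part. Below is a sketch for the scalar heat equation $u_t = \nu u_{xx}$ with Dirichlet ends (a hypothetical `theta_step` helper, not the coupled mean field game system):

```python
import numpy as np

def theta_step(u, nu, dt, h, theta):
    # One step of the theta-scheme for u_t = nu * u_xx on the interior nodes:
    # the diffusion term is a convex combination of an implicit part
    # (weight theta) and an explicit part (weight 1 - theta).
    n = len(u)
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h ** 2
    A = np.eye(n) - theta * nu * dt * D2          # implicit part
    B = np.eye(n) + (1 - theta) * nu * dt * D2    # explicit part
    return np.linalg.solve(A, B @ u)
```

$\theta = 1/2$ gives the Crank--Nicolson scheme; for $\theta < 1/2$ the explicit part dominates and a restriction on $\Delta t / h^2$ is needed, mirroring the CFL condition in the stability analysis.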

A new multivariate density estimator for stationary sequences is obtained by Fourier inversion of the thresholded empirical characteristic function. This estimator does not depend on the choice of parameters related to the smoothness of the density; it is directly adaptive. We establish oracle inequalities valid for independent, $\alpha$-mixing and $\tau$-mixing sequences, which allow us to derive optimal convergence rates, up to a logarithmic loss. On general anisotropic Sobolev classes, the estimator adapts to the regularity of the unknown density but also achieves directional adaptivity. In particular, if the observations are drawn from $X \in \mathbb{R}^d$, $d \ge 1$, and $A$ is an invertible matrix, the estimator achieves the rate implied by the regularity of $AX$, which may be more regular than $X$. The estimator is easy to implement and numerically efficient. It depends on the calibration of a parameter for which we propose an innovative numerical selection procedure, using the Euler characteristic of the thresholded areas.
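
A one-dimensional sketch of the estimator (hypothetical `cf_density_estimator` helper; the threshold constant `c` plays the role of the calibration parameter mentioned above, chosen here by hand rather than via the Euler characteristic procedure):

```python
import numpy as np

def cf_density_estimator(X, x_grid, t_max=20.0, n_t=400, c=1.0):
    # Density estimate by Fourier inversion of the thresholded empirical
    # characteristic function: frequencies where |phi_hat| falls below
    # c * sqrt(log n / n) are zeroed out before inverting.
    n = len(X)
    t = np.linspace(-t_max, t_max, n_t)
    phi = np.exp(1j * np.outer(t, X)).mean(axis=1)        # empirical CF
    thresh = c * np.sqrt(np.log(n) / n)
    phi = np.where(np.abs(phi) >= thresh, phi, 0.0)       # hard threshold
    # inverse Fourier transform on the x grid (trapezoidal rule)
    integrand = phi[:, None] * np.exp(-1j * np.outer(t, x_grid))
    return np.trapz(integrand, t, axis=0).real / (2 * np.pi)
```

No smoothness parameter enters: the threshold depends only on the sample size, which is the sense in which the estimator is directly adaptive.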

We present a novel mechanism to improve the accuracy of the recently introduced class of graph random features (GRFs). Our method induces negative correlations between the lengths of the algorithm's random walks by imposing antithetic termination: a procedure to sample more diverse random walks which may be of independent interest. It has a trivial drop-in implementation. We derive strong theoretical guarantees on the properties of these quasi-Monte Carlo GRFs (q-GRFs), proving that they yield lower-variance estimators of the 2-regularised Laplacian kernel under mild conditions. Remarkably, our results hold for any graph topology. We demonstrate empirical accuracy improvements on a variety of tasks including a new practical application: time-efficient approximation of the graph diffusion process. To our knowledge, q-GRFs constitute the first rigorously studied quasi-Monte Carlo scheme for kernels defined on combinatorial objects, inviting new research on correlations between graph random walks.
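
The antithetic termination idea can be sketched in a few lines: couple pairs of walkers through the uniforms driving termination, so that with halting probability $p < 1/2$ the two walkers can never terminate at the same step (hypothetical `antithetic_walk_lengths` helper; the graph itself is omitted since only the walk lengths are coupled):

```python
import numpy as np

def antithetic_walk_lengths(p_halt, n_pairs, rng):
    # Sample pairs of random-walk lengths with antithetic termination:
    # at each step the two coupled walkers use u and 1 - u as termination
    # uniforms, so one halting early pushes the other to halt late.
    lengths = np.zeros((n_pairs, 2), dtype=int)
    for i in range(n_pairs):
        alive = [True, True]
        while any(alive):
            u = rng.random()
            us = (u, 1.0 - u)                # antithetic pair
            for j in range(2):
                if alive[j]:
                    lengths[i, j] += 1
                    if us[j] < p_halt:       # terminate this walker
                        alive[j] = False
    return lengths
```

The marginal length of each walker is still geometric with parameter `p_halt`, but the two lengths in a pair are negatively correlated, which is the mechanism behind the variance reduction.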

This study examines the identifiability of interaction kernels in mean-field equations of interacting particles or agents, an area of growing interest across various scientific and engineering fields. The main focus is identifying data-dependent function spaces where a quadratic loss functional possesses a unique minimizer. We consider two data-adaptive $L^2$ spaces: one weighted by a data-adaptive measure and the other using the Lebesgue measure. In each $L^2$ space, we show that the function space of identifiability is the closure of the RKHS associated with the integral operator of inversion. Alongside prior research, our study completes a full characterization of identifiability in interacting particle systems with either finite or infinite particles, highlighting critical differences between these two settings. Moreover, the identifiability analysis has important implications for computational practice. It shows that the inverse problem is ill-posed, necessitating regularization. Our numerical demonstrations show that the weighted $L^2$ space is preferable over the unweighted $L^2$ space, as it yields more accurate regularized estimators.

Markov Switching models have had increasing success in time series analysis due to their ability to capture the existence of unobserved discrete states in the dynamics of the variables under study. This is generally achieved through inference on the states via the so--called Hamilton filter. One of the open problems in this framework is the identification of the number of states, generally fixed a priori; it is in fact impossible to apply classical tests because of nuisance parameters that are present only under the alternative hypothesis. In this work we show, by Monte Carlo simulations, that fuzzy clustering is able to reproduce the parametric state inference derived from the Hamilton filter and that the typical indices used in clustering to determine the number of groups can be used to identify the number of states in this framework. The procedure is very simple to apply, considering that it is performed (in a nonparametric way) independently of the data generation process and that the indicators we use are present in most statistical packages. A final application on real data completes the analysis.
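
A minimal sketch of the nonparametric side of the procedure (hypothetical `fuzzy_cmeans` and `partition_coefficient` helpers; the partition coefficient is one example of the standard validity indices referred to above):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    # Plain fuzzy c-means: alternate centre updates and membership updates.
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # random fuzzy partition
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

def partition_coefficient(U):
    # A clustering-validity index: close to 1 for a crisp partition.
    return float((U ** 2).sum(axis=1).mean())
```

Scanning the validity index over candidate numbers of clusters is the analogue, in this sketch, of selecting the number of states.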

Markov chain Monte Carlo (MCMC) allows one to generate dependent replicates from a posterior distribution for effectively any Bayesian hierarchical model. However, MCMC can produce a significant computational burden. This motivates us to consider finding expressions of the posterior distribution from which independent replicates can be obtained directly and efficiently. We focus on a broad class of Bayesian latent Gaussian process (LGP) models that allow for spatially dependent data. First, we derive a new class of distributions we refer to as the generalized conjugate multivariate (GCM) distribution. The GCM distribution's theoretical development is similar to that of the conjugate multivariate (CM) distribution, with two main differences: (1) the GCM allows for latent Gaussian process assumptions, and (2) the GCM explicitly accounts for hyperparameters through marginalization. The development of the GCM is needed to obtain independent replicates directly from the exact posterior distribution, which has an efficient projection/regression form. Hence, we refer to our method as Exact Posterior Regression (EPR). Illustrative examples are provided, including simulation studies for weakly stationary spatial processes and spatial basis function expansions. An additional analysis of poverty incidence data from the U.S. Census Bureau's American Community Survey (ACS) using a conditional autoregressive model is presented.
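
The simplest instance of the idea, independent replicates drawn directly from an exact Gaussian posterior in regression form with no MCMC chain, is ordinary Bayesian linear regression (hypothetical `exact_posterior_replicates` helper; the GCM construction itself handles latent processes and hyperparameters beyond this sketch):

```python
import numpy as np

def exact_posterior_replicates(y, X, prior_prec, noise_prec, n_rep, seed=0):
    # Bayesian linear regression with a Gaussian prior: the posterior is
    # Gaussian with closed-form precision and mean, so independent replicates
    # come from one Cholesky factorisation instead of an MCMC chain.
    p = X.shape[1]
    Q = prior_prec * np.eye(p) + noise_prec * (X.T @ X)   # posterior precision
    mu = np.linalg.solve(Q, noise_prec * (X.T @ y))       # posterior mean
    R = np.linalg.cholesky(Q)
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((p, n_rep))
    return mu[:, None] + np.linalg.solve(R.T, Z)          # iid N(mu, Q^{-1})
```

Every column is an exact, independent posterior draw; no burn-in, thinning, or convergence diagnostics are needed.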

Directional beamforming will play a paramount role in 5G and beyond networks in order to combat the higher path losses incurred at millimeter wave bands. Appropriate modeling and analysis of the angles and distances between transmitters and receivers in these networks are thus essential to understand performance and limiting factors. Most existing literature considers either infinite and uniform networks, where nodes are drawn according to a Poisson point process, or finite networks with the reference receiver placed at the origin of a disk. Under either of these assumptions, the distance and azimuth angle between transmitter and receiver are independent, and the angle follows a uniform distribution between $0$ and $2\pi$. Here, we consider a more realistic case of finite networks where the reference node is placed at an arbitrary location. We obtain the joint distribution of the distance and azimuth angle and demonstrate that these random variables are indeed correlated, with the correlation depending on the shape of the region and the location of the reference node. To conduct the analysis, we present a general mathematical framework which is specialized to exemplify the case of a rectangular region. We then also derive the statistics for the 3D case where, considering antenna heights, the joint distribution of distance, azimuth and zenith angles is obtained. Finally, we describe some immediate applications of the present work, including the analysis of directional beamforming, the design of analog codebooks and wireless routing algorithms.
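
The joint behaviour is easy to probe by Monte Carlo (hypothetical `distance_angle_samples` helper for a rectangular region with an arbitrary reference location):

```python
import numpy as np

def distance_angle_samples(ref, n, rng, width=1.0, height=1.0):
    # Transmitters uniform in a width x height rectangle, reference receiver
    # at an arbitrary point `ref`; returns distances and azimuth angles.
    pts = rng.random((n, 2)) * np.array([width, height])
    d = pts - np.asarray(ref)
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0])
    return r, theta
```

Conditioning the distance on the azimuth direction reveals the dependence: directions facing the far side of the rectangle yield systematically larger distances, an effect that vanishes only in the symmetric disk-centre setting.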

Evidence Networks can enable Bayesian model comparison when state-of-the-art methods (e.g. nested sampling) fail and even when likelihoods or priors are intractable or unknown. Bayesian model comparison, i.e. the computation of Bayes factors or evidence ratios, can be cast as an optimization problem. Though the Bayesian interpretation of optimal classification is well-known, here we change perspective and present classes of loss functions that result in fast, amortized neural estimators that directly estimate convenient functions of the Bayes factor. This mitigates numerical inaccuracies associated with estimating individual model probabilities. We introduce the leaky parity-odd power (l-POP) transform, leading to the novel ``l-POP-Exponential'' loss function. We explore neural density estimation of the data probability under each model, showing it to be less accurate and less scalable than Evidence Networks. Multiple real-world and synthetic examples illustrate that Evidence Networks are explicitly independent of dimensionality of the parameter space and scale mildly with the complexity of the posterior probability density function. This simple yet powerful approach has broad implications for model inference tasks. As an application of Evidence Networks to real-world data, we compute the Bayes factor for two models using gravitational lensing data from the Dark Energy Survey. We briefly discuss applications of our methods to other, related problems of model comparison and evaluation in implicit inference settings.
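
The classifier-to-Bayes-factor link can be sketched with plain logistic regression and the standard cross-entropy loss (not the l-POP-Exponential loss of the paper; hypothetical `train_ratio_classifier` helper). For draws from $\mathcal{N}(0,1)$ versus $\mathcal{N}(1,1)$, the true log Bayes factor is $\log K(x) = 1/2 - x$, so the learned logit should recover slope $-1$ and intercept $1/2$:

```python
import numpy as np

def train_ratio_classifier(x1, x2, lr=1.0, n_iter=2000):
    # Logistic regression sigma(w*x + b) trained to tell model-1 draws from
    # model-2 draws; at the optimum, w*x + b estimates log K(x), the log
    # Bayes factor between the two models (equal sample sizes).
    x = np.concatenate([x1, x2])
    y = np.concatenate([np.ones(len(x1)), np.zeros(len(x2))])
    w, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        g = p - y                               # d(loss)/d(logit)
        w -= lr * (g * x).mean()
        b -= lr * g.mean()
    return w, b
```

No evidence is ever computed per model: the network (here, a one-parameter logit) estimates the ratio directly, which is what avoids the numerical inaccuracies of estimating individual model probabilities.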

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power on processing graph-structured data. Typical GCN and its variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), while ignoring the heterophily which exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. However, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism, which can automatically change the propagation and aggregation process according to homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of homophily degree between node pairs, learned from topological and attribute information, respectively. Then we incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end scheme, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
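
A toy version of homophily-adaptive propagation (hypothetical `homophily_weighted_propagation` helper; a fixed cosine similarity stands in for the learnable homophily degree of the paper):

```python
import numpy as np

def homophily_weighted_propagation(H, edges):
    # One message-passing step where each edge's weight is a feature-based
    # homophily score (cosine similarity here, standing in for the learned
    # homophily degree): dissimilar neighbours contribute little or negatively
    # instead of being averaged in as under the usual homophily assumption.
    n = H.shape[0]
    W = np.zeros((n, n))
    for i, j in edges:
        s = H[i] @ H[j] / (np.linalg.norm(H[i]) * np.linalg.norm(H[j]) + 1e-12)
        W[i, j] = W[j, i] = s
    deg = np.abs(W).sum(axis=1) + 1.0          # self-loop weight 1
    return (H + W @ H) / deg[:, None]
```

With a homophilous edge the neighbour is averaged in as usual, while a heterophilous (orthogonal-feature) edge contributes nothing, so representations of different classes are not smoothed together.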
