
Nonstationary Gaussian process models can capture complex spatially varying dependence structures in spatial datasets. However, the large number of observations in modern datasets makes fitting such models computationally intractable with conventional dense linear algebra. In addition, derivative-free or even first-order optimization methods can be slow to converge when estimating many spatially varying parameters. We present here a computational framework that couples an algebraic block-diagonal plus low-rank covariance matrix approximation with stochastic trace estimation to facilitate the efficient use of second-order solvers for maximum likelihood estimation of Gaussian process models with many parameters. We demonstrate the effectiveness of these methods by simultaneously fitting 192 parameters in the popular nonstationary model of Paciorek and Schervish using 107,600 sea surface temperature anomaly measurements.
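
As a point of reference for the stochastic trace estimation ingredient, the following minimal sketch shows a Hutchinson-type estimator of $\mathrm{tr}(\Sigma^{-1}\,\partial\Sigma/\partial\theta)$, the term that dominates the cost of the score in Gaussian process maximum likelihood. The dense `np.linalg.solve` stands in for the paper's block-diagonal plus low-rank solver, and all names are illustrative rather than taken from the authors' code.

```python
# Minimal sketch (not the paper's implementation): Hutchinson stochastic trace
# estimation of tr(Sigma^{-1} dSigma/dtheta), the expensive term in the score
# of a Gaussian log-likelihood. `Sigma` and `dSigma` are illustrative names.
import numpy as np

def hutchinson_trace_term(solve, dSigma, n, n_probes=30, rng=None):
    """Estimate tr(Sigma^{-1} dSigma) with Rademacher probe vectors.

    `solve(v)` should return Sigma^{-1} v, e.g. computed via a block-diagonal
    plus low-rank factorization and the Woodbury identity.
    """
    rng = np.random.default_rng(rng)
    est = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe
        est += z @ solve(dSigma @ z)             # z^T Sigma^{-1} dSigma z
    return est / n_probes

# Toy check on a small dense problem where the exact trace is available.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)                  # SPD covariance
dSigma = np.eye(n)                               # derivative w.r.t. a nugget
solve = lambda v: np.linalg.solve(Sigma, v)      # placeholder for a fast solver
approx = hutchinson_trace_term(solve, dSigma, n, n_probes=100, rng=1)
exact = np.trace(np.linalg.solve(Sigma, dSigma))
print(approx, exact)
```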

Related content

Multilevel linear models allow flexible statistical modelling of complex data with different levels of stratification. Identifying the most appropriate model from the large set of possible candidates is a challenging problem. In the Bayesian setting, the standard approach is to compare models using the model evidence or the Bayes factor. Explicit expressions for these quantities are available for simple linear models, but in most cases direct computation is impossible. In practice, Monte Carlo approaches such as Markov chain Monte Carlo and sequential Monte Carlo are widely used, but it is not always clear how well such techniques perform. We present a method for estimating the log model evidence by an intermediate marginalisation over the non-variance parameters. This reduces the dimensionality of the Monte Carlo sampling algorithm, which in turn yields more consistent estimates. The aim of this paper is to show how this framework fits together and works in practice, particularly on data with hierarchical structure. We illustrate the method on a popular multilevel dataset containing levels of radon in homes in the US state of Minnesota.
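
As a rough illustration of why marginalising out the non-variance parameters helps, the sketch below estimates the log evidence of a toy Bayesian linear model: the regression coefficients are integrated out analytically, so only the two variance parameters are sampled (here by plain prior sampling, a much cruder estimator than the one developed in the paper). The half-normal priors and all variable names are assumptions made for the example.

```python
# Illustrative sketch (not the paper's algorithm): log-evidence estimation for
# a Bayesian linear model after analytically marginalising the coefficients,
# so that Monte Carlo is only needed over the variance parameters.
import numpy as np
from scipy.stats import multivariate_normal, halfnorm
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n, p = 60, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.standard_normal(n)

def log_marginal(y, X, sigma, tau):
    # With beta ~ N(0, tau^2 I) integrated out:
    # y | sigma, tau ~ N(0, tau^2 X X^T + sigma^2 I)
    cov = tau**2 * (X @ X.T) + sigma**2 * np.eye(len(y))
    return multivariate_normal.logpdf(y, mean=np.zeros(len(y)), cov=cov)

S = 5000
sigma_draws = halfnorm.rvs(scale=1.0, size=S, random_state=rng)  # assumed prior
tau_draws = halfnorm.rvs(scale=1.0, size=S, random_state=rng)    # assumed prior
logs = np.array([log_marginal(y, X, s, t)
                 for s, t in zip(sigma_draws, tau_draws)])
log_evidence = logsumexp(logs) - np.log(S)   # simple prior-sampling estimator
print(log_evidence)
```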

Low-rank approximation is a popular strategy for tackling the "big n problem" associated with large-scale Gaussian process regression. The basis functions used to build low-rank structures are crucial and should be carefully specified. Predictive processes simplify the problem by inducing basis functions through a covariance function and a set of knots. The existing literature suggests certain practical implementations of knot selection and covariance estimation; however, theoretical foundations explaining the influence of these two factors on predictive processes are lacking. In this paper, we derive the asymptotic prediction performance of predictive process and full Gaussian process predictions, and study the impacts of the selected knots and the estimated covariance. We suggest using support points, which best represent the data locations, as knots. Extensive simulation studies demonstrate the superiority of support points and verify our theoretical results. Precipitation and ozone data are used as real examples, and the efficiency of our method over other widely used low-rank approximation methods is verified.
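
For concreteness, here is a minimal sketch of the predictive-process construction: a set of knots $S$ and a covariance function $k$ induce the low-rank approximation $K(X,S)K(S,S)^{-1}K(S,X)$ of the full covariance. The exponential covariance, one-dimensional locations and equally spaced knots below are placeholders; the paper's proposal is to choose the knots as support points of the data locations.

```python
# Minimal sketch of a predictive-process (low-rank) covariance approximation.
# The covariance function and knot locations are illustrative only.
import numpy as np

def exp_cov(A, B, range_=0.5, var=1.0):
    d = np.abs(A[:, None] - B[None, :])        # 1-D locations for simplicity
    return var * np.exp(-d / range_)

x = np.linspace(0, 1, 500)                     # data locations
knots = np.linspace(0, 1, 25)                  # e.g. support points in practice

K_xS = exp_cov(x, knots)
K_SS = exp_cov(knots, knots) + 1e-10 * np.eye(len(knots))  # jitter for stability
# Low-rank predictive-process approximation of the n x n covariance:
K_pp = K_xS @ np.linalg.solve(K_SS, K_xS.T)

K_full = exp_cov(x, x)
print(np.max(np.abs(K_full - K_pp)))           # worst-case approximation error
```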

Mark-point dependence plays a critical role in research problems that can be fitted into the general framework of marked point processes. In this work, we focus on adjusting for mark-point dependence when estimating the mean and covariance functions of the mark process, given independent replicates of the marked point process. We assume that the mark process is a Gaussian process and the point process is a log-Gaussian Cox process, where the mark-point dependence is generated through the dependence between two latent Gaussian processes. Under this framework, naive local linear estimators that ignore the mark-point dependence can be severely biased. We show that this bias can be corrected using a local linear estimator of the cross-covariance function, and we establish uniform convergence rates of the bias-corrected estimators. Furthermore, we propose a test statistic based on local linear estimators for mark-point independence, which is shown to converge to an asymptotic normal distribution at the parametric $\sqrt{n}$ convergence rate. Model diagnostic tools are developed for key model assumptions, and a robust functional permutation test is proposed for a more general class of marked point processes. The effectiveness of the proposed methods is demonstrated using extensive simulations and applications to two real data examples.
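
As background for the estimators discussed above, the sketch below implements a generic local linear estimator of the mark mean function from pooled (location, mark) pairs. It deliberately omits the paper's bias correction for mark-point dependence, and the simulated data and bandwidth are illustrative only.

```python
# Generic local linear estimator of the mark mean function m(t) from pooled
# (location, mark) pairs; the paper's bias correction is NOT included here.
import numpy as np

def local_linear(t0, t, y, h):
    """Local linear estimate of E[Y | T = t0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)
    X = np.column_stack([np.ones_like(t), t - t0])   # local intercept + slope
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta[0]                                   # intercept = fitted value

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 400)                                    # event locations
y = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(400)    # observed marks
grid = np.linspace(0.05, 0.95, 10)
print([round(local_linear(g, t, y, h=0.08), 3) for g in grid])
```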

Latent Gaussian models and boosting are widely used techniques in statistics and machine learning. Tree-boosting shows excellent prediction accuracy on many data sets, but potential drawbacks are that it assumes conditional independence of samples, produces discontinuous predictions for, e.g., spatial data, and can have difficulty with high-cardinality categorical variables. Latent Gaussian models, such as Gaussian process and grouped random effects models, are flexible prior models that explicitly model dependence among samples and allow for efficient learning of predictor functions and for making probabilistic predictions. However, existing latent Gaussian models usually assume either a zero or a linear prior mean function, which can be an unrealistic assumption. This article introduces a novel approach that combines boosting and latent Gaussian models to remedy the above-mentioned drawbacks and to leverage the advantages of both techniques. We obtain increased prediction accuracy compared to existing approaches in both simulated and real-world data experiments.
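
To make the combination concrete, here is a heavily simplified, backfitting-style sketch that alternates between a boosted mean function and a Gaussian-process residual model using off-the-shelf scikit-learn components. The article's approach is more principled than this naive alternation, which is shown only to illustrate the idea of pairing a boosted mean with a latent Gaussian residual model; all data and settings are made up for the example.

```python
# Simplified backfitting-style illustration: a boosted mean function plus a
# Gaussian-process residual model, fitted alternately (not the paper's method).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 2))              # column 0: feature, column 1: location
f = np.where(X[:, 0] > 0.5, 2.0, -1.0)            # nonlinear fixed effect
gp_true = np.sin(4 * np.pi * X[:, 1])             # spatially correlated effect
y = f + gp_true + 0.2 * rng.standard_normal(300)

mean_model = GradientBoostingRegressor(n_estimators=100, max_depth=2)
gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(0.1), normalize_y=True)

gp_part = np.zeros_like(y)
for _ in range(3):                                # a few backfitting sweeps
    mean_model.fit(X[:, :1], y - gp_part)         # boosting on partial residuals
    mean_part = mean_model.predict(X[:, :1])
    gp.fit(X[:, 1:], y - mean_part)               # GP on the remaining residuals
    gp_part = gp.predict(X[:, 1:])

print(np.mean((mean_part + gp_part - (f + gp_true)) ** 2))  # fit quality
```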

In many self-organising systems the ability to extract necessary resources from the external environment is essential to the system's growth and survival. Examples include the extraction of sunlight and nutrients by living plants, of monetary income by business organisations, and of resources by mobile robots in swarm intelligence applications. When operating within competitive, ever-changing environments, such systems must distribute their internal assets wisely so as to improve and adapt their ability to extract available resources. As the system size increases, the asset-distribution process often becomes organised around a multi-scale control topology. This topology may be static (fixed) or dynamic (enabling growth and structural adaptation), depending on the system's internal constraints and adaptive mechanisms. In this paper, we expand on a plant-inspired asset-distribution model and introduce a more general multi-scale model applicable across a wider range of natural and artificial system domains. We study the impact that the topology of the multi-scale control process has upon the system's ability to self-adapt asset distribution when resource availability changes within the environment. Results show how different topological characteristics and different competition levels between system branches impact overall system profitability, adaptation delays and disturbances when environmental changes occur. These findings provide a basis for system designers to select the most suitable topology and configuration for their particular application and execution environment.

In Euclidean Uniform Facility Location, the input is a set of clients in $\mathbb{R}^d$ and the goal is to place facilities to serve them, so as to minimize the total cost of opening facilities plus connecting the clients. We study the classical setting of dynamic geometric streams, where the clients are presented as a sequence of insertions and deletions of points in the grid $\{1,\ldots,\Delta\}^d$, and we focus on the \emph{high-dimensional regime}, where the algorithm's space complexity must be polynomial (and certainly not exponential) in $d\cdot\log\Delta$. We present a new algorithmic framework, based on importance sampling from the stream, for $O(1)$-approximation of the optimal cost using only $\mathrm{poly}(d\cdot\log\Delta)$ space. This framework is easy to implement in two passes, one for sampling points and the other for estimating their contribution. Over random-order streams, we can extend this to a one-pass algorithm by using the two halves of the stream separately. Our main result, for arbitrary-order streams, computes $O(d^{1.5})$-approximation in one pass by using the new framework but combining the two passes differently. This improves upon previous algorithms that either need space exponential in $d$ or only guarantee $O(d\cdot\log^2\Delta)$-approximation, and therefore our algorithms for high-dimensional streams are the first to avoid the $O(\log\Delta)$-factor in approximation that is inherent to the widely-used quadtree decomposition. Our improvement is achieved by employing a geometric hashing scheme that maps points in $\mathbb{R}^d$ into buckets of bounded diameter, with the key property that every point set of small-enough diameter is hashed into at most $\mathrm{poly}(d)$ distinct buckets. We complement our results by showing $1.085$-approximation requires space exponential in $\mathrm{poly}(d\cdot\log\Delta)$, even for insertion-only streams.

Bayesian inference for high-dimensional inverse problems is challenged by the computational cost of the forward operator and the selection of an appropriate prior distribution. Amortized variational inference addresses these challenges by training a neural network to approximate the posterior distribution over existing pairs of model and data. When fed previously unseen data and normally distributed latent samples as input, the pretrained deep neural network -- in our case a conditional normalizing flow -- provides posterior samples at virtually no cost. However, the accuracy of this approach relies on the availability of high-fidelity training data, which seldom exists in geophysical inverse problems due to the heterogeneous structure of the Earth. In addition, accurate amortized variational inference requires the observed data to be drawn from the training data distribution. We therefore propose to increase the resilience of amortized variational inference under data distribution shift via a physics-based correction to the latent distribution of the conditional normalizing flow. To accomplish this, instead of a standard Gaussian latent distribution, we parameterize the latent distribution as a Gaussian with unknown mean and diagonal covariance. These unknown quantities are then estimated by minimizing the Kullback-Leibler divergence between the corrected and true posterior distributions. While the approach is generic and applicable to other inverse problems, we show by means of a seismic imaging example that our correction step improves the robustness of amortized variational inference with respect to changes in the number of source experiments, noise variance, and shifts in the prior distribution. This approach provides a seismic image with limited artifacts and an assessment of its uncertainty at approximately the same cost as five reverse-time migrations.
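
A conceptual sketch of the correction step is given below: the latent mean and diagonal scale of a pretrained conditional normalizing flow are optimized against a reparameterized variational objective. The placeholders `flow`, `forward_op` and `d_obs` are hypothetical stand-ins for the pretrained network, the physics operator and the observed data, and the Gaussian-likelihood misfit is an assumption made to keep the example self-contained; this is not the paper's exact loss.

```python
# Conceptual sketch of fitting an unknown latent mean and diagonal covariance
# for a pretrained flow, by minimizing a reparameterized variational objective.
# `flow`, `forward_op` and `d_obs` are hypothetical placeholders.
import torch

def correct_latent(flow, forward_op, d_obs, zdim, noise_std, n_steps=200, n_mc=8):
    mu = torch.zeros(zdim, requires_grad=True)        # unknown latent mean
    log_std = torch.zeros(zdim, requires_grad=True)   # unknown diagonal log-scale
    opt = torch.optim.Adam([mu, log_std], lr=1e-2)
    for _ in range(n_steps):
        opt.zero_grad()
        eps = torch.randn(n_mc, zdim)
        z = mu + torch.exp(log_std) * eps             # reparameterized latent draws
        x = flow(z)                                   # push through pretrained flow
        data_misfit = ((forward_op(x) - d_obs) ** 2).sum(dim=-1) / (2 * noise_std**2)
        # KL( N(mu, diag(std^2)) || N(0, I) ), in closed form
        kl = 0.5 * (mu**2 + torch.exp(2 * log_std) - 2 * log_std - 1).sum()
        loss = data_misfit.mean() + kl
        loss.backward()
        opt.step()
    return mu.detach(), torch.exp(log_std).detach()
```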

We present a non-asymptotic lower bound on the eigenspectrum of the design matrix generated by any linear bandit algorithm with sub-linear regret when the action set has well-behaved curvature. Specifically, we show that the minimum eigenvalue of the expected design matrix grows as $\Omega(\sqrt{n})$ whenever the expected cumulative regret of the algorithm is $O(\sqrt{n})$, where $n$ is the learning horizon and the action space has a constant Hessian around the optimal arm. This shows that such action spaces force a polynomial lower bound, rather than the logarithmic lower bound shown by \cite{lattimore2017end} for discrete (i.e., well-separated) action spaces. Furthermore, while the previous result holds only in the asymptotic regime (as $n \to \infty$), our result for these ``locally rich'' action spaces is anytime. Additionally, under a mild technical assumption, we obtain a similar lower bound on the minimum eigenvalue holding with high probability. We apply our result to two practical scenarios -- \emph{model selection} and \emph{clustering} in linear bandits. For model selection, we show that an epoch-based linear bandit algorithm adapts to the true model complexity at a rate exponential in the number of epochs, by virtue of our novel spectral bound. For clustering, we consider a multi-agent framework in which, by leveraging the spectral result, we show that no forced exploration is necessary: the agents can run a linear bandit algorithm and estimate their underlying parameters simultaneously, and hence incur low regret.
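
In symbols (which the abstract does not fix, so the notation below is an assumption), writing $A_1,\dots,A_n$ for the actions played, $V_n = \sum_{t=1}^{n} A_t A_t^\top$ for the design matrix and $R_n$ for the cumulative regret, the spectral statement reads
\[
\lambda_{\min}\big(\mathbb{E}[V_n]\big) \;=\; \Omega(\sqrt{n}) \qquad \text{whenever} \qquad \mathbb{E}[R_n] \;=\; O(\sqrt{n}),
\]
for action sets whose boundary has a constant, non-degenerate Hessian around the optimal arm.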

Implicit Processes (IPs) provide a flexible framework that can describe a wide variety of models, from Bayesian neural networks, neural samplers and data generators to many others. IPs also allow for approximate inference in function space. This change of formulation solves intrinsic degeneracy problems of parameter-space approximate inference, which arise from the high number of parameters and their strong dependencies in large models. To this end, previous works in the literature have attempted to employ IPs both to set up the prior and to approximate the resulting posterior. However, this has proven to be a challenging task. Existing methods that can tune the prior IP result in a Gaussian predictive distribution, which fails to capture important data patterns. By contrast, methods producing flexible predictive distributions by using another IP to approximate the posterior process cannot tune the prior IP to the observed data. We propose here the first method that can accomplish both goals. To do so, we rely on an inducing-point representation of the prior IP, as is often done in the context of sparse Gaussian processes. The result is a scalable method for approximate inference with IPs that can tune the prior IP parameters to the data and that provides accurate non-Gaussian predictive distributions.

Model selection in machine learning (ML) is a crucial part of the Bayesian learning procedure. Model choice may impose strong biases on the resulting predictions, which can hinder the performance of methods such as Bayesian neural networks and neural samplers. On the other hand, newly proposed approaches for Bayesian ML exploit features of approximate inference in function space with implicit stochastic processes (a generalization of Gaussian processes). The approach of Sparse Implicit Processes (SIP) is particularly successful in this regard, since it is fully trainable and achieves flexible predictions. Here, we expand on the original experiments to show that SIP is capable of correcting model bias when the data-generating mechanism differs strongly from the one implied by the model. We use synthetic datasets to show that SIP provides predictive distributions that reflect the data better than the exact predictions of the initially assumed, but misspecified, model.
