A common approach to approximating Gaussian log-likelihoods at scale exploits the fact that precision matrices can be well approximated by sparse matrices in some circumstances. This strategy is motivated by the \emph{screening effect}, the phenomenon in which the linear prediction of a process $Z$ at a point $\mathbf{x}_0$ depends primarily on measurements nearest to $\mathbf{x}_0$. However, simple perturbations, such as i.i.d. measurement noise, can significantly reduce the degree to which this exploitable phenomenon occurs. While strategies to cope with this issue already exist and are certainly improvements over ignoring the problem, in this work we present a new strategy based on the EM algorithm that offers several advantages. Although we focus on the application to Vecchia's approximation (Vecchia, 1988), a particularly popular and powerful framework in which we can demonstrate true second-order optimization of the M step, the method can also be implemented using only matrix-vector products, making it applicable to a very wide class of precision matrix-based approximation methods.
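
The following is a minimal dense-matrix sketch of the E/M split for a Gaussian process observed with i.i.d. noise ($y = z + \varepsilon$, $z \sim \mathcal{N}(0, K)$, $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$). It is not the scalable Vecchia-based method of the abstract; the kernel, names, and the closed-form noise update are illustrative assumptions.

```python
# E-step: posterior of the latent noise-free field z given y.
# M-step (partial): closed-form update of the noise variance.
import numpy as np

def kernel(X, lengthscale, variance):
    # Squared-exponential covariance (an assumed example kernel).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def em_step(y, X, lengthscale, variance, s2):
    n = len(y)
    K = kernel(X, lengthscale, variance)
    A = K + s2 * np.eye(n)
    mean_z = K @ np.linalg.solve(A, y)          # E[z | y]
    cov_z = K - K @ np.linalg.solve(A, K)       # Cov[z | y]
    # Maximizing E[log p(y | z)] over s2 gives a closed-form noise update.
    s2_new = (np.sum((y - mean_z) ** 2) + np.trace(cov_z)) / n
    # Kernel parameters would be updated by maximizing E[log p(z)]
    # (numerically, e.g. with a second-order optimizer) -- omitted here.
    return mean_z, cov_z, s2_new
```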

Related content

We investigate the high-dimensional linear regression problem in situations where the noise is correlated with Gaussian covariates. In regression models, this correlation between noise and covariates is called endogeneity; it typically arises from unobserved variables and other sources, and has been a major problem setting in causal inference and econometrics. When the covariates are high-dimensional, it has been common to assume sparsity of the true parameters and to estimate them using regularization, even in the presence of endogeneity. However, when sparsity does not hold, it is not well understood how to control endogeneity and high dimensionality simultaneously. In this paper, we demonstrate that an estimator without regularization can achieve consistency, i.e., benign overfitting, under certain assumptions on the covariance matrix. Specifically, we show that the error of this estimator converges to zero when the covariance matrices of the correlated noise and of the instrumental variables satisfy a condition on their eigenvalues. We consider several extensions that relax these conditions and conduct experiments to support our theoretical findings. As a technical contribution, we utilize the convex Gaussian minimax theorem (CGMT) in our dual problem and extend the CGMT itself.
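
To make the overparameterized, regularization-free setting concrete, here is a hedged sketch of the minimum-norm (ridgeless) least-squares interpolator, the standard estimator in benign-overfitting analyses; the paper's exact estimator and its treatment of endogeneity and instrumental variables may differ, and all names and dimensions below are illustrative.

```python
# Minimum-norm interpolator in a p >> n Gaussian-covariate regression.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 2000                      # overparameterized: p >> n
X = rng.standard_normal((n, p))       # Gaussian covariates
beta_true = rng.standard_normal(p) / np.sqrt(p)
eps = rng.standard_normal(n)          # here uncorrelated; endogeneity would
                                      # make eps depend on X
y = X @ beta_true + 0.1 * eps

# Minimum-norm interpolating solution: beta_hat = X^+ y (Moore-Penrose).
beta_hat = np.linalg.pinv(X) @ y
print("train residual:", np.linalg.norm(X @ beta_hat - y))     # ~0 (interpolation)
print("parameter error:", np.linalg.norm(beta_hat - beta_true))
```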

Gaussian processes are widely used as priors for unknown functions in statistics and machine learning. To achieve computationally feasible inference for large datasets, a popular approach is the Vecchia approximation, which is an ordered conditional approximation of the data vector that implies a sparse Cholesky factor of the precision matrix. The ordering and sparsity pattern are typically determined based on Euclidean distance of the inputs or locations corresponding to the data points. Here, we propose instead to use a correlation-based distance metric, which implicitly applies the Vecchia approximation in a suitable transformed input space. The correlation-based algorithm can be carried out in quasilinear time in the size of the dataset, and so it can be applied even for iterative inference on unknown parameters in the correlation structure. The correlation-based approach has two advantages for complex settings: It can result in more accurate approximations, and it offers a simple, automatic strategy that can be applied to any covariance, even when Euclidean distance is not applicable. We demonstrate these advantages in several settings, including anisotropic, nonstationary, multivariate, and spatio-temporal processes. We also illustrate our method on multivariate spatio-temporal temperature fields produced by a regional climate model.
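
As an illustration of the idea (a minimal sketch, not the paper's implementation), the following brute-force routine selects Vecchia conditioning sets using a correlation-based distance such as $d(i,j) = \sqrt{1 - |\rho_{ij}|}$; a practical quasilinear-time implementation would replace the exhaustive search with an approximate nearest-neighbor structure in the transformed input space.

```python
# Build Vecchia conditioning sets by correlation distance instead of
# Euclidean distance (brute-force version for illustration only).
import numpy as np

def vecchia_neighbors_by_correlation(C, m):
    """C: n x n correlation matrix, m: neighbors per point.
    Returns, for each i (in the given ordering), up to m earlier indices
    with the largest correlation to i (smallest correlation distance)."""
    n = C.shape[0]
    dist = np.sqrt(np.maximum(1.0 - np.abs(C), 0.0))
    neighbors = []
    for i in range(n):
        prev = np.arange(i)                        # only condition on earlier points
        order = prev[np.argsort(dist[i, prev])]    # closest in correlation distance
        neighbors.append(order[:m])
    return neighbors
```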

There has recently been much interest in Gaussian processes on linear networks and more generally on compact metric graphs. One proposed strategy for defining such processes on a metric graph $\Gamma$ is through a covariance function that is isotropic in a metric on the graph. Another is through a fractional order differential equation $L^\alpha (\tau u) = \mathcal{W}$ on $\Gamma$, where $L = \kappa^2 - \nabla(a\nabla)$ for (sufficiently nice) functions $\kappa, a$, and $\mathcal{W}$ is Gaussian white noise. We study Markov properties of these two types of fields. We first show that there are no Gaussian random fields on general metric graphs that are both isotropic and Markov. We then show that the second type of fields, the generalized Whittle--Mat\'ern fields, are Markov if and only if $\alpha\in\mathbb{N}$, and in that case the field is Markov of order $\alpha$, which essentially means that the process in one region $S\subset\Gamma$ is conditionally independent of the process in $\Gamma\setminus S$ given the values of the process and its $\alpha-1$ derivatives on $\partial S$. Finally, we show that the Markov property implies an explicit characterization of the process on a fixed edge $e$, which in particular shows that the conditional distribution of the process on $e$ given the values at the two vertices connected to $e$ is independent of the geometry of $\Gamma$.
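
One way to write the order-$\alpha$ Markov property described above in symbols (a hedged restatement, not quoted from the paper): for a region $S \subset \Gamma$ with boundary $\partial S$,
$$
u\big|_{S} \;\perp\!\!\!\perp\; u\big|_{\Gamma \setminus S} \;\Big|\; \left\{\partial^{k} u(v) : v \in \partial S,\ k = 0, \dots, \alpha - 1\right\},
$$
i.e., the restrictions of the field to $S$ and to $\Gamma\setminus S$ are conditionally independent given the values and first $\alpha-1$ derivatives of $u$ on $\partial S$.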

The Gaussian process state-space model (GPSSM) has garnered considerable attention over the past decade. However, the standard GP with a preliminary kernel, such as the squared exponential or Mat\'{e}rn kernel, commonly used in GPSSM studies limits the model's representation power and substantially restricts its applicability to complex scenarios. To address this issue, we propose a new class of probabilistic state-space models called TGPSSMs, which leverage a parametric normalizing flow to enrich the GP priors in the standard GPSSM, enabling greater flexibility and expressivity. Additionally, we present a scalable variational inference algorithm that offers a flexible and optimal structure for the variational distribution of latent states. The proposed algorithm is interpretable and computationally efficient due to the sparse GP representation and the bijective nature of the normalizing flow. Moreover, we incorporate a constrained optimization framework into the algorithm to enhance the state-space representation capabilities and optimize the hyperparameters, leading to superior learning and inference performance. Experimental results on synthetic and real datasets corroborate that the proposed TGPSSM outperforms several state-of-the-art methods. The accompanying source code is available at \url{https://github.com/zhidilin/TGPSSM}.
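
A purely conceptual sketch of the transformed-GP transition (a hedged illustration, not the paper's construction): a sparse-GP predictive value for the latent transition is pushed through a parametric bijection standing in for the normalizing flow. The specific flow, the one-dimensional latent state, and all names are assumptions.

```python
# Transformed-GP state transition: x_{t+1} = G(f(x_t)) + noise, f ~ sparse GP.
import numpy as np

def gp_transition_mean(x_t, inducing_X, inducing_f, lengthscale=1.0):
    # Crude sparse-GP predictive mean at x_t from inducing points (1-D state).
    k = np.exp(-0.5 * ((inducing_X - x_t) ** 2).sum(-1) / lengthscale**2)
    K = np.exp(-0.5 * ((inducing_X[:, None, :] - inducing_X[None, :, :]) ** 2)
               .sum(-1) / lengthscale**2) + 1e-6 * np.eye(len(inducing_X))
    return k @ np.linalg.solve(K, inducing_f)

def flow_G(f, a=1.0, b=0.5):
    # Simple monotone (hence invertible) map standing in for a learned
    # normalizing flow; monotone whenever a > 0 and a + b > 0.
    return a * f + b * np.tanh(f)

def transformed_transition(x_t, inducing_X, inducing_f, noise_sd=0.05):
    f = gp_transition_mean(x_t, inducing_X, inducing_f)
    return flow_G(f) + noise_sd * np.random.randn()   # next latent state
```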

In an era where scientific experimentation is often costly, multi-fidelity emulation provides a powerful tool for predictive scientific computing. While there has been notable work on multi-fidelity modeling, existing models do not incorporate an important ``conglomerate'' property of multi-fidelity simulators, where the accuracies of different simulator components (modeling separate physics) are controlled by different fidelity parameters. Such conglomerate simulators are widely encountered in complex nuclear physics and astrophysics applications. We thus propose a new CONglomerate multi-FIdelity Gaussian process (CONFIG) model, which embeds this conglomerate structure within a novel non-stationary covariance function. We show that the proposed CONFIG model can capture prior knowledge on the numerical convergence of conglomerate simulators, which allows for cost-efficient emulation of multi-fidelity systems. We demonstrate the improved predictive performance of CONFIG over state-of-the-art models in a suite of numerical experiments and two applications, the first for emulation of cantilever beam deflection and the second for emulating the evolution of the quark-gluon plasma, which was theorized to have filled the Universe shortly after the Big Bang.
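
A purely hypothetical sketch of a "conglomerate" covariance structure, for intuition only: model the response as a limit term plus per-component error terms whose sizes shrink as that component's own fidelity parameter grows, $f(\mathbf{x}, \mathbf{t}) = f_\infty(\mathbf{x}) + \sum_j r_j^{t_j}\,\delta_j(\mathbf{x})$, which yields the valid covariance $k((\mathbf{x},\mathbf{t}),(\mathbf{x}',\mathbf{t}')) = k_x(\mathbf{x},\mathbf{x}')\big(1 + \sum_j r_j^{t_j + t_j'}\big)$. This is not the CONFIG kernel of the paper; the rates $r_j$ and the squared-exponential $k_x$ are illustrative assumptions.

```python
# Hypothetical conglomerate multi-fidelity covariance (not the CONFIG kernel).
import numpy as np

def conglomerate_kernel(x, xp, t, tp, rates=(0.5, 0.7), lengthscale=1.0):
    # Input kernel over the design variables x, x'.
    k_x = np.exp(-0.5 * np.sum((np.asarray(x) - np.asarray(xp)) ** 2) / lengthscale**2)
    # One fidelity factor per simulator component j: the error term decays
    # at rate rates[j] as that component's fidelity parameter increases.
    k_t = 1.0 + sum(r ** (ti + tj) for r, ti, tj in zip(rates, t, tp))
    return k_x * k_t
```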

In this work, we study non-asymptotic bounds on the correlation between two time realizations of stable linear systems with isotropic Gaussian noise. As a consequence, by sampling from a sub-trajectory and using \emph{Talagrand's} inequality, we show that empirical averages of the reward concentrate with high probability around the steady-state reward (the steady state to which the dynamical system mixes when the closed-loop system is stable under a linear feedback policy). Contrary to the common belief that a larger spectral radius implies stronger correlation between samples, we show that \emph{a large discrepancy between the algebraic and geometric multiplicities of the system eigenvalues leads to large invariant subspaces of the system transition matrix}. Once the system enters a large invariant subspace, it travels away from the origin for a while before returning close to a unit ball centered at the origin, where isotropic Gaussian noise can, with high probability, allow it to escape the invariant subspace it currently resides in. This leads to \emph{bottlenecks} between the different invariant subspaces that span $\mathbb{R}^{n}$: to be precise, a system initiated in a large invariant subspace will be stuck there for a long time, log-linear in the dimension of the invariant subspace and inversely proportional to the logarithm of the inverse of the eigenvalue's magnitude. In the problem of estimating the system transition matrix by ordinary least squares from a single trajectory, this phenomenon is even more evident if the spectrum associated with the large invariant subspace is explosive and the small invariant subspaces correspond to stable eigenvalues. Our analysis provides the first interpretable, geometric explanation of the intricacies of learning and concentration for random dynamical systems on continuous, high-dimensional state spaces, exposing surprises in high dimensions.
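
A minimal simulation sketch of the phenomenon being analyzed (the quadratic reward and all constants are assumptions, and this is not the paper's proof technique): for a stable linear system $x_{t+1} = A x_t + w_t$ with isotropic Gaussian noise, empirical averages of a reward along a single trajectory concentrate around the stationary value.

```python
# Empirical average of a reward along one trajectory vs. the steady-state value.
import numpy as np

rng = np.random.default_rng(1)
n, T = 10, 20000
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]   # spectral radius 0.9 < 1

x = np.zeros(n)
rewards = np.empty(T)
for t in range(T):
    x = A @ x + rng.standard_normal(n)        # isotropic Gaussian noise
    rewards[t] = x @ x                        # reward r(x) = ||x||^2

# Stationary covariance P solves P = A P A^T + I; steady-state reward = tr(P).
P = np.eye(n)
for _ in range(1000):
    P = A @ P @ A.T + np.eye(n)
print("empirical average reward:", rewards.mean())
print("steady-state reward tr(P):", np.trace(P))
```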

We propose a non-intrusive, reduced-basis, and data-driven method for approximating both eigenvalues and eigenvectors in parametric eigenvalue problems. We generate the basis of the reduced space by applying the proper orthogonal decomposition (POD) approach to a collection of pre-computed, full-order snapshots at a chosen set of parameters. Then, we use Bayesian linear regression (a.k.a. Gaussian Process Regression, GPR) in the online phase to predict both eigenvalues and eigenvectors at new parameters. Following standard practice in supervised machine learning, the data generated in the offline phase are split into training and test sets for the numerical experiments. Furthermore, we discuss the connection between Gaussian Process Regression and spline methods, and compare the performance of the GPR method against linear and cubic spline methods. We show that GPR outperforms the other methods for functions with a certain regularity. To this end, we discuss various covariance functions that influence the performance of GPR. The proposed method is shown to be accurate and efficient for the approximation of multiple 1D and 2D affine and non-affine parameter-dependent eigenvalue problems that exhibit crossing of eigenvalues.
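
A hedged sketch of the offline/online split described above: POD of the snapshot matrix via an SVD, then one GPR per reduced coefficient mapping parameters to coefficients (eigenvalues could be handled by additional GPRs in the same way). scikit-learn's GaussianProcessRegressor is an assumed stand-in for the paper's GPR implementation.

```python
# Offline: POD basis + GPR models; online: predict reduced coordinates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def offline(snapshots, params, r):
    # snapshots: (n_dof, n_samples) full-order eigenvectors at training parameters;
    # params: (n_samples, n_params) array of parameter values; r: reduced dimension.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :r]                              # POD basis of the reduced space
    coeffs = basis.T @ snapshots                  # reduced coordinates, (r, n_samples)
    gprs = [GaussianProcessRegressor(kernel=RBF()).fit(params, coeffs[i])
            for i in range(r)]
    return basis, gprs

def online(basis, gprs, new_params):
    coeffs = np.column_stack([g.predict(new_params) for g in gprs])   # (n_new, r)
    return coeffs @ basis.T                       # predicted eigenvectors, (n_new, n_dof)
```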

This paper studies the problem of Simultaneous Sparse Approximation (SSA). This problem arises in many applications that work with multiple signals maintaining some degree of dependency, such as radar and sensor networks. In this paper, we introduce a new method for the joint recovery of several independent sparse signals with the same support. We provide an analytical discussion on the convergence of our method, called the Simultaneous Iterative Method with Adaptive Thresholding (SIMAT). Additionally, we compare our method with other group-sparse reconstruction techniques, namely Simultaneous Orthogonal Matching Pursuit (SOMP) and the Block Iterative Method with Adaptive Thresholding (BIMAT), through numerical experiments. The simulation results demonstrate that SIMAT outperforms these algorithms in terms of Signal to Noise Ratio (SNR) and Success Rate (SR). Moreover, SIMAT is considerably less complicated than BIMAT, which makes it feasible for practical applications such as implementation in MIMO radar systems.
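
For intuition, here is a hedged sketch of a generic simultaneous iterative-thresholding loop for joint recovery of signals sharing a common support. It is in the spirit of SIMAT but is not necessarily the paper's exact update or threshold schedule.

```python
# Joint (row-wise) iterative hard thresholding for signals with a common support.
import numpy as np

def simultaneous_iht(A, Y, k, n_iter=100):
    """A: (m, n) sensing matrix; Y: (m, L) measurements of L signals with a
    common k-sparse support. Returns X_hat of shape (n, L)."""
    m, n = A.shape
    L = Y.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # gradient step size
    X = np.zeros((n, L))
    for _ in range(n_iter):
        X = X + step * A.T @ (Y - A @ X)            # gradient step on ||Y - AX||_F^2
        # Keep the k rows with largest energy, enforcing a shared support.
        row_energy = np.linalg.norm(X, axis=1)
        support = np.argsort(row_energy)[-k:]
        mask = np.zeros(n, dtype=bool)
        mask[support] = True
        X[~mask] = 0.0
    return X
```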

One way to reduce the time of conducting optimization studies is to evaluate designs in parallel rather than one at a time. For expensive-to-evaluate black-boxes, batch versions of Bayesian optimization have been proposed. They work by building a surrogate model of the black-box and simultaneously selecting multiple designs via an infill criterion. Still, despite the increased availability of computing resources that enable large-scale parallelism, the strategies that work for selecting a few tens of parallel designs for evaluation become limiting because of the complexity of selecting more designs. This is even more crucial when the black-box is noisy, since more evaluations and repeated experiments are needed. Here we propose a scalable strategy that can keep up with massive batching natively, built on the exploration/exploitation trade-off and a portfolio allocation. We compare the approach with related methods on noisy functions, for single- and multi-objective optimization tasks. These experiments show orders-of-magnitude speed improvements over existing methods with similar or better performance.
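
As a hedged illustration of portfolio-style batch selection from a surrogate (not the paper's actual criterion), the sketch below allocates part of a large batch to exploitation (best predicted values, for minimization) and part to exploration (largest predictive uncertainty); the surrogate interface is assumed to be scikit-learn's GaussianProcessRegressor or anything exposing the same predict signature.

```python
# Split a large batch between exploiting the mean and exploring the variance.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def select_batch(gp, candidates, batch_size, explore_frac=0.5):
    """gp: fitted surrogate exposing predict(X, return_std=True), e.g. a
    GaussianProcessRegressor; candidates: (n_cand, d) array; returns indices."""
    mu, sd = gp.predict(candidates, return_std=True)
    n_explore = int(batch_size * explore_frac)
    n_exploit = batch_size - n_explore
    exploit_idx = np.argsort(mu)[:n_exploit]                 # best predicted means
    remaining = np.setdiff1d(np.arange(len(candidates)), exploit_idx)
    explore_idx = remaining[np.argsort(sd[remaining])[::-1][:n_explore]]  # most uncertain
    return np.concatenate([exploit_idx, explore_idx])
```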

Using the equivalent inclusion method (a method strongly related to the Hashin-Shtrikman variational principle) as a surrogate model, we propose a variance reduction strategy for the numerical homogenization of random composites made of inclusions (or rather inhomogeneities) embedded in a homogeneous matrix. The efficiency of this strategy is demonstrated within the framework of two-dimensional, linear conductivity. Significant computational gains over full-field simulations are obtained even for high contrast values. We also show that our strategy makes it possible to investigate the influence of parameters of the microstructure on the macroscopic response. Our strategy readily extends to three-dimensional problems and to linear elasticity. Attention is paid to the computational cost of the surrogate model. In particular, an inexpensive approximation of the so-called influence tensors (which are used to compute the surrogate model) is proposed.
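
A hedged sketch of the surrogate-based variance reduction idea as a control-variate construction: a few expensive full-field simulations are combined with many cheap surrogate evaluations. The equivalent inclusion method plays the role of the surrogate in the abstract; `full_field` and `surrogate` below are placeholder functions, not the paper's code.

```python
# Control-variate estimate of a homogenized property from mixed-cost samples.
import numpy as np

def variance_reduced_estimate(full_field, surrogate,
                              microstructures_few, microstructures_many):
    # Expensive and cheap evaluations on a small common sample ...
    full_vals = np.array([full_field(m) for m in microstructures_few])
    surr_vals = np.array([surrogate(m) for m in microstructures_few])
    # ... plus cheap evaluations on a large sample to pin down E[surrogate].
    surr_many = np.array([surrogate(m) for m in microstructures_many])
    # Control-variate estimator: E[full] ~ mean(full - surr) + mean(surr_many).
    return (full_vals - surr_vals).mean() + surr_many.mean()
```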
