
We consider autocovariance operators of a stationary stochastic process on a Polish space that is embedded into a reproducing kernel Hilbert space. We investigate how empirical estimates of these operators converge along realizations of the process under various conditions. In particular, we examine ergodic and strongly mixing processes and obtain several asymptotic results as well as finite sample error bounds. As applications of our theory, we provide consistency results for kernel PCA with dependent data and for the conditional mean embedding of transition probabilities. Finally, we use our approach to examine the nonparametric estimation of Markov transition operators and highlight how our theory can give a consistency analysis for a large family of spectral analysis methods including kernel-based dynamic mode decomposition.
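For intuition on how such empirical estimates are computed in practice, note that the lag-$h$ autocovariance operator enters only through Gram matrices, which is also what makes kernel PCA tractable in this setting. Below is a minimal sketch, not the paper's exact estimator: the AR(1) data, the Gaussian kernel, and its bandwidth are illustrative assumptions, and centering is omitted for the lagged operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stationary process: AR(1), x_{t+1} = a x_t + noise (illustrative assumption).
n, a = 500, 0.8
x = np.empty(n)
x[0] = rng.normal()
for t in range(n - 1):
    x[t + 1] = a * x[t] + rng.normal()

def gram(u, v, sigma=1.0):
    """Gaussian-kernel Gram matrix k(u_i, v_j) for scalar samples."""
    return np.exp(-(u[:, None] - v[None, :]) ** 2 / (2 * sigma ** 2))

h = 5                      # time lag
m = n - h
Kx = gram(x[:m], x[:m])    # Gram matrix among x_t
Ky = gram(x[h:], x[h:])    # Gram matrix among x_{t+h}

# Nonzero singular values of the empirical lag-h autocovariance operator
# C_h = (1/m) * sum_t phi(x_{t+h}) (x) phi(x_t) coincide with
# sqrt(eigenvalues of Ky @ Kx) / m (uncentered, for brevity).
sv = np.sqrt(np.abs(np.linalg.eigvals(Ky @ Kx).real)) / m
print("leading singular values:", np.sort(sv)[::-1][:5])

# Lag 0 with centering recovers kernel PCA: eigenvalues of the centered Gram / n.
J = np.eye(n) - np.ones((n, n)) / n
kpca_eigs = np.linalg.eigvalsh(J @ gram(x, x) @ J / n)
print("kernel PCA eigenvalues:", kpca_eigs[::-1][:5])
```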

Related content

In this paper, we study the identifiability and the estimation of the parameters of a copula-based multivariate model when the margins are unknown and arbitrary, meaning that they can be continuous, discrete, or mixtures of continuous and discrete. When at least one margin is not continuous, the range of values determining the copula is not the entire unit square, and this situation can lead to identifiability issues that are discussed here. Next, we propose estimation methods for unknown and arbitrary margins, using a pseudo log-likelihood adapted to the case of discontinuities. With applications to large data sets in view, we also propose a pairwise composite pseudo log-likelihood. These methodologies can also be easily modified to cover the case of parametric margins. One of the main theoretical results is an extension to arbitrary distributions of known convergence results for rank-based statistics when the margins are continuous. As a by-product, under smoothness assumptions, we obtain that the asymptotic distribution of the estimation errors of our estimators is Gaussian. Finally, numerical experiments are presented to assess the finite-sample performance of the estimators, and the usefulness of the proposed methodologies is illustrated with a copula-based regression model for hydrological data.
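For intuition, here is a minimal rank-based pseudo log-likelihood fit in the continuous-margin special case, using a bivariate Gaussian copula; the discontinuous-margin corrections that are the paper's contribution are not reproduced here, and the data-generating parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, rankdata
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Illustrative data: Gaussian copula with rho = 0.6, unknown continuous margins.
n, rho_true = 1000, 0.6
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=n)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3          # arbitrary increasing transforms

# Rank-based pseudo-observations (margins left unspecified).
u = rankdata(x) / (n + 1)
v = rankdata(y) / (n + 1)

def neg_pseudo_loglik(rho):
    """Negative pseudo log-likelihood of a bivariate Gaussian copula."""
    a, b = norm.ppf(u), norm.ppf(v)
    s = 1 - rho ** 2
    return -np.sum(-0.5 * np.log(s)
                   + (2 * rho * a * b - rho ** 2 * (a ** 2 + b ** 2)) / (2 * s))

res = minimize_scalar(neg_pseudo_loglik, bounds=(-0.99, 0.99), method="bounded")
print("estimated copula parameter:", res.x)   # close to 0.6
```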

We investigate unbiased high-dimensional mean estimators in differential privacy. We consider differentially private mechanisms whose expected output equals the mean of the input dataset, for every dataset drawn from a fixed convex domain $K$ in $\mathbb{R}^d$. In the setting of concentrated differential privacy, we show that, for every input, such an unbiased mean estimator introduces approximately at least as much error as a mechanism that adds Gaussian noise with a carefully chosen covariance. This is true when the error is measured in $\ell_p$ norm for any $p \ge 2$. We extend this result to local differential privacy, and to approximate differential privacy, although for the latter the error lower bound holds either for the dataset itself or for a neighboring dataset. We also extend our results to mechanisms that take i.i.d.~samples from a distribution over $K$ and are unbiased with respect to the mean of the distribution.
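The Gaussian-noise benchmark in this result can be made concrete. Below is a minimal sketch of an unbiased mean estimator under $\rho$-zCDP when $K$ is the Euclidean ball of radius $R$: isotropic Gaussian noise calibrated to the replace-one $\ell_2$ sensitivity $2R/n$ keeps the output unbiased for every dataset in $K$. The domain, $\rho$, and the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def private_mean_zcdp(X, R, rho):
    """Unbiased mean under rho-zCDP when every row of X lies in the
    Euclidean ball of radius R (assumed, not enforced here).

    Replace-one L2 sensitivity of the mean is 2R/n; the Gaussian
    mechanism with sigma = sensitivity / sqrt(2*rho) satisfies rho-zCDP
    (Bun & Steinke, 2016) and adds zero-mean noise, so the estimator
    stays unbiased for every fixed dataset.
    """
    n, d = X.shape
    sigma = (2 * R / n) / np.sqrt(2 * rho)
    return X.mean(axis=0) + sigma * rng.normal(size=d)

# Illustrative dataset in the unit ball of R^10.
X = rng.normal(size=(2000, 10))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
print(private_mean_zcdp(X, R=1.0, rho=0.5))
```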

Stein thinning is a promising algorithm proposed by Riabiz et al. (2022) for post-processing outputs of Markov chain Monte Carlo (MCMC). The main principle is to greedily minimize the kernelized Stein discrepancy (KSD), which only requires the gradient of the log-target distribution and is thus well suited for Bayesian inference. The main advantages of Stein thinning are the automatic removal of the burn-in period, the correction of the bias introduced by recent MCMC algorithms, and asymptotic convergence guarantees towards the target distribution. Nevertheless, Stein thinning suffers from several empirical pathologies, which may result in poor approximations, as observed in the literature. In this article, we conduct a theoretical analysis of these pathologies to clearly identify the mechanisms at stake, and we suggest improved strategies. We then introduce the regularized Stein thinning algorithm to alleviate the identified pathologies. Finally, theoretical guarantees and extensive experiments demonstrate the high efficiency of the proposed algorithm.
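A minimal sketch of the greedy KSD step follows, using an (unregularized) IMQ Stein kernel and a standard Gaussian target, so the score is $\nabla \log p(x) = -x$; the regularization the paper adds is omitted, and all kernel and data choices are illustrative assumptions.

```python
import numpy as np

def imq_stein_kernel(X, scores):
    """Stein kernel matrix k_p(x_i, x_j) for the IMQ base kernel
    k(x,y) = (1 + ||x-y||^2)^(-1/2) and scores s_i = grad log p(x_i)."""
    diff = X[:, None, :] - X[None, :, :]            # r = x - y
    r2 = np.sum(diff ** 2, axis=-1)
    q = 1.0 + r2
    d = X.shape[1]
    grad_x = -diff * q[..., None] ** (-1.5)         # grad_x k
    grad_y = -grad_x                                # grad_y k = -grad_x k
    div = d * q ** (-1.5) - 3.0 * r2 * q ** (-2.5)  # grad_x . grad_y k
    return (div
            + np.einsum('id,ijd->ij', scores, grad_y)
            + np.einsum('jd,ijd->ij', scores, grad_x)
            + (scores @ scores.T) * q ** (-0.5))

def stein_thinning(X, scores, m):
    """Greedily pick m points minimizing the KSD of the selected set."""
    K = imq_stein_kernel(X, scores)
    running = np.zeros(len(X))          # sum_{i in S} k_p(x_i, x_j)
    selected = []
    for _ in range(m):
        j = int(np.argmin(np.diag(K) + 2.0 * running))
        selected.append(j)
        running += K[j]
    return selected

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2)) * 1.5     # crude stand-in for MCMC output
scores = -X                             # score of the N(0, I) target
print(stein_thinning(X, scores, 10))
```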

Self-organizing complex systems can be modeled using cellular automaton (CA) models. However, the parametrization of these models is crucial and significantly determines the resulting structural pattern. In this research, we introduce and successfully apply a sound statistical method to estimate these parameters. The method is based on constructing Gaussian likelihoods using characteristics of the structures, such as the mean particle size. We show that our approach is robust with respect to the method parameters, the domain size of the patterns, and the number of CA iterations.
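To make the idea concrete, here is a minimal sketch with a toy one-dimensional probabilistic CA: repeated simulations at a candidate parameter give the mean and variance of a summary statistic (here, mean cluster size), which define a Gaussian likelihood for the observed statistic. The CA rule and the statistic are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def simulate_ca(p, size=200, steps=50):
    """Toy probabilistic majority CA: a cell copies its local majority
    with probability p, otherwise flips at random (illustrative rule)."""
    s = rng.integers(0, 2, size)
    for _ in range(steps):
        maj = ((np.roll(s, 1) + s + np.roll(s, -1)) >= 2).astype(int)
        keep = rng.random(size) < p
        s = np.where(keep, maj, rng.integers(0, 2, size))
    return s

def mean_cluster_size(s):
    """Mean length of runs of 1s (the 'particle size' statistic)."""
    padded = np.concatenate(([0], s, [0]))
    edges = np.flatnonzero(np.diff(padded))
    starts, ends = edges[::2], edges[1::2]
    return (ends - starts).mean() if len(starts) else 0.0

def gaussian_loglik(p, observed_stat, reps=30):
    """Gaussian likelihood of the observed statistic under parameter p,
    with moments estimated from repeated simulations."""
    stats = [mean_cluster_size(simulate_ca(p)) for _ in range(reps)]
    return norm.logpdf(observed_stat, np.mean(stats), np.std(stats) + 1e-9)

obs = mean_cluster_size(simulate_ca(0.9))       # pretend-observed pattern
grid = np.linspace(0.5, 0.99, 25)
print("estimate:", grid[np.argmax([gaussian_loglik(p, obs) for p in grid])])
```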

In the fully discrete setting, stability of the discontinuous Petrov--Galerkin (DPG) method with optimal test functions requires local test spaces that ensure the existence of Fortin operators. We construct such operators for $H^1$ and $\boldsymbol{H}(\mathrm{div})$ on simplices in any space dimension and arbitrary polynomial degree. The resulting test spaces are smaller than previously analyzed cases. For parameter-dependent norms, we achieve uniform boundedness by the inclusion of exponential layers. As an example, we consider a canonical DPG setting for reaction-dominated diffusion. Our test spaces guarantee uniform stability and quasi-optimal convergence of the scheme. We present numerical experiments that illustrate the loss of stability and error control by the residual for small diffusion coefficient when using standard polynomial test spaces, whereas we observe uniform stability and error control with our construction.

Vandermonde matrices are usually exponentially ill-conditioned and often result in unstable approximations. In this paper, we introduce and analyze the \textit{multivariate Vandermonde with Arnoldi (V+A) method}, which is based on least-squares approximation together with a Stieltjes orthogonalization process, for approximating continuous, multivariate functions on $d$-dimensional irregular domains. The V+A method addresses the ill-conditioning of the Vandermonde approximation by constructing a basis of discrete orthogonal polynomials with respect to a discrete measure. The V+A method is simple and general. It relies only on the sample points from the domain and requires no prior knowledge of the domain. In this paper, we first analyze the sample complexity of the V+A approximation. In particular, we show that, for a large class of domains, the V+A method gives a well-conditioned and near-optimal $N$-dimensional least-squares approximation using $M=\mathcal{O}(N^2)$ equispaced sample points or $M=\mathcal{O}(N^2\log N)$ random sample points, independently of $d$. We also give a comprehensive analysis of the error estimates and rate of convergence of the V+A approximation. Based on the multivariate V+A approximation, we propose a new variant of the weighted V+A least-squares algorithm that uses only $M=\mathcal{O}(N\log N)$ sample points to give a near-optimal approximation. Our numerical results confirm that the (weighted) V+A method gives a more accurate approximation than the standard orthogonalization method for high-degree approximation using the Vandermonde matrix.
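The univariate version of the V+A idea (Brubeck, Nakatsukasa and Trefethen, 2021) already shows the mechanism: instead of forming the Vandermonde matrix explicitly, a Stieltjes/Arnoldi recurrence builds a discretely orthogonal basis column by column, and the same recurrence evaluates the basis at new points. A minimal sketch; the degree, sample counts, and test function are illustrative assumptions.

```python
import numpy as np

def vandermonde_arnoldi(x, n):
    """Build a discretely orthonormal polynomial basis Q (and the
    Hessenberg recurrence H) on the sample points x, up to degree n."""
    m = len(x)
    Q = np.zeros((m, n + 1)); H = np.zeros((n + 1, n))
    Q[:, 0] = 1.0
    for k in range(n):
        q = x * Q[:, k]                   # multiply by x ...
        for j in range(k + 1):            # ... then orthogonalize
            H[j, k] = Q[:, j] @ q / m
            q = q - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(q) / np.sqrt(m)
        Q[:, k + 1] = q / H[k + 1, k]
    return Q, H

def eval_basis(H, x_new):
    """Re-run the same recurrence to evaluate the basis at new points."""
    n = H.shape[1]
    W = np.zeros((len(x_new), n + 1)); W[:, 0] = 1.0
    for k in range(n):
        w = x_new * W[:, k]
        for j in range(k + 1):
            w = w - H[j, k] * W[:, j]
        W[:, k + 1] = w / H[k + 1, k]
    return W

# Fit a degree-40 least-squares approximant on equispaced points.
x = np.linspace(-1, 1, 2000)
f = lambda t: 1.0 / (1 + 25 * t ** 2)     # Runge function
Q, H = vandermonde_arnoldi(x, 40)
c, *_ = np.linalg.lstsq(Q, f(x), rcond=None)
xt = np.linspace(-1, 1, 500)
print("max error:", np.max(np.abs(eval_basis(H, xt) @ c - f(xt))))
```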

When an exposure of interest is confounded by unmeasured factors, an instrumental variable (IV) can be used to identify and estimate certain causal contrasts. Identification of the marginal average treatment effect (ATE) from IVs typically relies on strong untestable structural assumptions. When one is unwilling to assert such structural assumptions, IVs can nonetheless be used to construct bounds on the ATE. Famously, Balke and Pearl (1997) employed linear programming techniques to prove tight bounds on the ATE for a binary outcome, in a randomized trial with noncompliance and no covariate information. We demonstrate how these bounds remain useful in observational settings with baseline confounders of the IV, as well as randomized trials with measured baseline covariates. The resulting lower and upper bounds on the ATE are non-smooth functionals, and thus standard nonparametric efficiency theory is not immediately applicable. To remedy this, we propose (1) estimators of smooth approximations of these bounds, and (2) under a novel margin condition, influence function-based estimators of the ATE bounds that can attain parametric convergence rates when the nuisance functions are modeled flexibly. We propose extensions to continuous outcomes, and finally, illustrate the proposed estimators in a randomized experiment studying the effects of influenza vaccination encouragement on flu-related hospital visits.
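The linear-programming construction behind the Balke-Pearl bounds is short enough to state in code. A minimal sketch for the no-covariate binary case: enumerate the 16 compliance/response types, match the observed law of $(X, Y)$ given $Z$, and optimize the ATE over the feasible set. The input probabilities below are illustrative assumptions.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Response types: (X(z=0), X(z=1), Y(x=0), Y(x=1)), 16 in total.
types = list(itertools.product([0, 1], repeat=4))

def ate_bounds(p_obs):
    """Balke-Pearl bounds. p_obs[z][(x, y)] = P(X=x, Y=y | Z=z)."""
    A, b = [], []                    # one equality per observable cell (z,x,y)
    for z in (0, 1):
        for x in (0, 1):
            for y in (0, 1):
                row = [1.0 if (t[z] == x and t[2 + x] == y) else 0.0
                       for t in types]
                A.append(row); b.append(p_obs[z][(x, y)])
    c = np.array([t[3] - t[2] for t in types], dtype=float)  # Y(1) - Y(0)
    lo = linprog(c,  A_eq=A, b_eq=b, bounds=[(0, 1)] * 16)
    hi = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, 1)] * 16)
    return lo.fun, -hi.fun

# Illustrative observed law (a valid conditional distribution for each z).
p = {0: {(0, 0): 0.45, (0, 1): 0.25, (1, 0): 0.10, (1, 1): 0.20},
     1: {(0, 0): 0.10, (0, 1): 0.05, (1, 0): 0.30, (1, 1): 0.55}}
print("ATE bounds:", ate_bounds(p))
```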

Consider the approximation of stochastic Allen-Cahn-type equations (i.e. $1+1$-dimensional space-time white noise-driven stochastic PDEs with polynomial nonlinearities $F$ such that $F(\pm \infty)=\mp \infty$) by a fully discrete space-time explicit finite difference scheme. The consensus in the literature, supported by rigorous lower bounds, is that strong convergence rate $1/2$ with respect to the parabolic grid meshsize is expected to be optimal. We show that one can reach almost sure convergence rate $1$ (and no better) when measuring the error in appropriate negative Besov norms, by temporarily `pretending' that the SPDE is singular.
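For reference, the fully discrete explicit scheme in question looks as follows in the simplest setting: forward Euler in time, centered second differences in space, and i.i.d. Gaussian increments scaled by $\sqrt{\Delta t/\Delta x}$ approximating space-time white noise, on the parabolic grid $\Delta t \propto \Delta x^2$. Parameters, boundary conditions, and the initial condition are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stochastic Allen-Cahn on [0,1], Dirichlet b.c.: du = (u_xx + u - u^3) dt + dW.
J = 128                      # spatial intervals
dx = 1.0 / J
dt = 0.25 * dx ** 2          # parabolic grid, explicit-scheme stability
T = 0.5
x = np.linspace(0, 1, J + 1)
u = np.sin(np.pi * x)        # illustrative initial condition

for _ in range(int(T / dt)):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    # Space-time white noise increment: i.i.d. N(0, dt/dx) per grid node.
    xi = np.sqrt(dt / dx) * rng.normal(size=u.shape)
    xi[0] = xi[-1] = 0.0
    u = u + dt * (lap + u - u ** 3) + xi
    u[0] = u[-1] = 0.0

print("final-time values (every 16th node):", np.round(u[::16], 3))
```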

A data-driven framework is presented that enables the prediction of quantities, either observations or parameters, given sufficient partial data. The framework is illustrated via a computational model of the deposition of Cu in a Chemical Vapor Deposition (CVD) reactor, where the reactor pressure, the deposition temperature, and the feed mass flow rate are important process parameters that determine the outcome of the process. The sampled observations are high-dimensional vectors containing the outputs of a detailed CFD steady-state model of the process, i.e. the values of velocity, pressure, temperature, and species mass fractions at each point in the discretization. A machine learning workflow is presented, able to predict out-of-sample (a) observations (e.g. mass fraction in the reactor) given process parameters (e.g. inlet temperature); (b) process parameters given observation data; and (c) partial observations (e.g. temperature in the reactor) given other partial observations (e.g. mass fraction in the reactor). The proposed workflow relies on the manifold learning schemes Diffusion Maps and the associated Geometric Harmonics. Diffusion Maps is used for discovering a reduced representation of the available data, and Geometric Harmonics for extending functions defined on the manifold. In our work, a special use case of Geometric Harmonics, which we call Double Diffusion Maps, is formulated and implemented to map from the reduced representation back to (partial) observations and process parameters. A comparison of our manifold learning scheme to the traditional Gappy-POD approach is provided: ours can be thought of as a "Gappy DMAP" approach. The presented methodology is easily transferable to application domains beyond reactor engineering.
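A minimal sketch of the Diffusion Maps step on which the workflow rests: build a Gaussian kernel on the samples, apply the standard $\alpha = 1$ density normalization, row-normalize into a Markov matrix, and take its leading nontrivial eigenvectors as the reduced coordinates. The Geometric Harmonics / Double Diffusion Maps extension back to observations is not reproduced here; the data and bandwidth are illustrative assumptions.

```python
import numpy as np

def diffusion_maps(X, eps, n_coords=2):
    """Diffusion Maps with alpha = 1 density normalization."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                   # remove sampling-density effects
    P = K / K.sum(axis=1, keepdims=True)     # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_coords + 1]              # skip the trivial eigenvector
    return vals.real[idx] * vecs.real[:, idx]

rng = np.random.default_rng(6)
# Illustrative data: noisy one-dimensional curve embedded in R^3.
t = np.sort(rng.uniform(0, 3 * np.pi, 400))
X = np.c_[np.cos(t), np.sin(t), t] + 0.05 * rng.normal(size=(400, 3))
coords = diffusion_maps(X, eps=1.0)
print("embedding shape:", coords.shape)      # first coordinate tracks t
```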

We study the error of linear regression in the face of adversarial attacks. In this framework, an adversary changes the input to the regression model in order to maximize the prediction error. We provide bounds on the prediction error in the presence of an adversary as a function of the parameter norm and the error in the absence of such an adversary. We show how these bounds make it possible to study the adversarial error using analysis from non-adversarial setups. The obtained results shed light on the robustness of overparameterized linear models to adversarial attacks. Adding features might be either a source of additional robustness or brittleness. On the one hand, we use asymptotic results to illustrate how double-descent curves can be obtained for the adversarial error. On the other hand, we derive conditions under which the adversarial error can grow to infinity as more features are added, while at the same time, the test error goes to zero. We show this behavior is caused by the fact that the norm of the parameter vector grows with the number of features. It is also established that $\ell_\infty$ and $\ell_2$-adversarial attacks might behave fundamentally differently due to how the $\ell_1$ and $\ell_2$-norms of random projections concentrate. We also show how our reformulation allows for solving adversarial training as a convex optimization problem. This fact is then exploited to establish similarities between adversarial training and parameter-shrinking methods and to study how the training might affect the robustness of the estimated models.
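The reformulation mentioned above has a simple closed form for linear models: for an $\ell_\infty$-bounded attack of radius $\varepsilon$, the worst-case squared error at $(x, y)$ is $(|y - x^\top\beta| + \varepsilon\|\beta\|_1)^2$, since $\delta^\top\beta$ ranges over $[-\varepsilon\|\beta\|_1, \varepsilon\|\beta\|_1]$ by duality of the $\ell_\infty$ and $\ell_1$ norms. A minimal sketch comparing standard and adversarially trained estimators, with adversarial training posed as a convex problem and solved here by plain subgradient descent; the data, $\varepsilon$, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def adv_sq_error(X, y, beta, eps):
    """Exact worst-case squared error under ||delta||_inf <= eps attacks:
    max_delta (y - (x+delta)^T beta)^2 = (|y - x^T beta| + eps*||beta||_1)^2."""
    r = np.abs(y - X @ beta) + eps * np.abs(beta).sum()
    return np.mean(r ** 2)

def adv_train(X, y, eps, lr=1e-3, iters=5000):
    """Adversarial training = minimizing the convex objective above
    (here by plain subgradient descent, for illustration)."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(iters):
        r = y - X @ beta
        m = np.abs(r) + eps * np.abs(beta).sum()
        # Subgradient of mean((|r| + eps*||beta||_1)^2).
        g = (-2 * X.T @ (m * np.sign(r)) / n
             + 2 * eps * np.mean(m) * np.sign(beta))
        beta -= lr * g
    return beta

n, d = 200, 50
X = rng.normal(size=(n, d))
beta_star = rng.normal(size=d) / np.sqrt(d)
y = X @ beta_star + 0.1 * rng.normal(size=n)

ols = np.linalg.lstsq(X, y, rcond=None)[0]
adv = adv_train(X, y, eps=0.1)
for name, b in [("OLS", ols), ("adv-trained", adv)]:
    print(name, "adv error:", round(adv_sq_error(X, y, b, 0.1), 4),
          "l1 norm:", round(np.abs(b).sum(), 3))
```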
