
Parameters of the covariance kernel of a Gaussian process model often need to be estimated from the data generated by an unknown Gaussian process. We consider fixed-domain asymptotics of the maximum likelihood estimator of the scale parameter under smoothness misspecification. If the covariance kernel of the data-generating process has smoothness $\nu_0$ but that of the model has smoothness $\nu \geq \nu_0$, we prove that the expectation of the maximum likelihood estimator is of the order $N^{2(\nu-\nu_0)/d}$ if the $N$ observation points are quasi-uniform in $[0, 1]^d$. This indicates that maximum likelihood estimation of the scale parameter alone is sufficient to guarantee the correct rate of decay of the conditional variance. We also discuss a connection between the expected maximum likelihood estimator and Driscoll's theorem on sample path properties of Gaussian processes. The proofs are based on reproducing kernel Hilbert space techniques and worst-case rates for approximation in Sobolev spaces.
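For concreteness, here is a minimal sketch of the scale estimator the abstract studies: with the correlation kernel held fixed (a Matérn kernel of smoothness $\nu$ is assumed here), the ML estimate of the scale has the closed form $\hat{\sigma}^2 = y^\top K^{-1} y / N$. The function names and the jitter term are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import gamma, kv

def matern_kernel(X1, X2, nu, rho=1.0):
    """Matern correlation kernel with smoothness nu and length scale rho."""
    r = cdist(X1, X2)
    s = np.sqrt(2 * nu) * r / rho
    s = np.where(s == 0.0, 1e-12, s)  # avoid kv(nu, 0) = inf at r = 0
    K = (2 ** (1 - nu) / gamma(nu)) * s ** nu * kv(nu, s)
    return np.where(r == 0.0, 1.0, K)

def ml_scale(y, X, nu):
    """Closed-form ML estimate of the scale parameter sigma^2 when the
    correlation kernel is held fixed: sigma_hat^2 = y^T K^{-1} y / N."""
    N = len(y)
    K = matern_kernel(X, X, nu) + 1e-10 * np.eye(N)  # jitter for stability
    return float(y @ np.linalg.solve(K, y)) / N
```

In the abstract's setting, if the data were generated with smoothness $\nu_0 < \nu$, the expectation of this estimator grows at the rate $N^{2(\nu-\nu_0)/d}$.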

Related content

The reconfigurable intelligent surface (RIS) technology is a promising enabler for millimeter wave (mmWave) wireless communications, as it can potentially provide spectral efficiency comparable to the conventional massive multiple-input multiple-output (MIMO) but with significantly lower hardware complexity. In this paper, we focus on the estimation and prediction of the uplink RIS-aided massive MIMO channel, which can be time-varying. We propose to let the user equipments (UEs) transmit Zadoff-Chu (ZC) sequences and let the base station (BS) conduct maximum likelihood (ML) estimation of the uplink channel. The proposed scheme is computationally efficient: it uses ZC sequences to decouple the estimation of the frequency and time offsets, and it uses the space-alternating generalized expectation-maximization (SAGE) method to decompose the high-dimensional multipath estimation problem into multiple lower-dimensional per-path problems. Owing to the estimation of the Doppler frequency offsets, the time-varying channel state can be predicted, which can significantly lower the pilot overhead for channel estimation. Numerical simulations verify the effectiveness of the proposed scheme.
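As a small illustration of the pilot design, the sketch below generates a Zadoff-Chu sequence of odd length; the root index and length are arbitrary choices for the example, and this shows only the pilot construction, not the paper's SAGE-based estimator.

```python
import numpy as np

def zadoff_chu(root, length):
    """Zadoff-Chu sequence of odd length with root index coprime to the
    length: x_u[n] = exp(-j * pi * u * n * (n + 1) / N_ZC)."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

# ZC sequences have constant amplitude and ideal periodic autocorrelation,
# which is what makes them convenient pilots for decoupling offset estimation.
zc = zadoff_chu(root=25, length=839)
assert np.allclose(np.abs(zc), 1.0)
```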

Given a point set $P\subset \mathbb{R}^d$, the kernel density estimate of $P$ is defined as \[ \overline{\mathcal{G}}_P(x) = \frac{1}{\left|P\right|}\sum_{p\in P}e^{-\left\lVert x-p \right\rVert^2} \] for any $x\in\mathbb{R}^d$. We study how to construct a small subset $Q$ of $P$ such that the kernel density estimate of $P$ is approximated by the kernel density estimate of $Q$. This subset $Q$ is called a coreset. The main technique in this work is the construction of a $\pm 1$ coloring of the point set $P$ via discrepancy theory, leveraging Banaszczyk's theorem. When $d>1$ is a constant, our construction gives a coreset of size $O\left(\frac{1}{\varepsilon}\right)$, as opposed to the best-known previous bound of $O\left(\frac{1}{\varepsilon}\sqrt{\log\frac{1}{\varepsilon}}\right)$. This is the first result to break the $\sqrt{\log}$ barrier, even when $d=2$.
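The quantity being approximated can be stated in a few lines of code. The sketch below evaluates the kernel density estimate from the abstract and measures the deviation between $P$ and a candidate coreset $Q$; a random subset stands in for the discrepancy-based coloring, which is the paper's actual construction.

```python
import numpy as np

def kde(P, x):
    """Kernel density estimate from the abstract:
    G_P(x) = (1/|P|) * sum_p exp(-||x - p||^2)."""
    d2 = np.sum((P - x) ** 2, axis=1)
    return np.mean(np.exp(-d2))

def coreset_error(P, Q, queries):
    """Worst observed deviation |G_P(x) - G_Q(x)| over a set of queries."""
    return max(abs(kde(P, x) - kde(Q, x)) for x in queries)

# Illustration with a random subset in place of the coloring-based coreset:
rng = np.random.default_rng(0)
P = rng.normal(size=(1000, 2))
Q = P[rng.choice(len(P), size=100, replace=False)]
print(coreset_error(P, Q, rng.normal(size=(50, 2))))
```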

Motivated by applications to topological data analysis, we give an efficient algorithm for computing a (minimal) presentation of a bigraded $K[x,y]$-module $M$, where $K$ is a field. The algorithm takes as input a short chain complex of free modules $X\xrightarrow{f} Y \xrightarrow{g} Z$ such that $M\cong \ker{g}/\mathrm{im}{f}$. It runs in time $O(|X|^3+|Y|^3+|Z|^3)$ and requires $O(|X|^2+|Y|^2+|Z|^2)$ memory, where $|\cdot |$ denotes the rank. Given the presentation computed by our algorithm, the bigraded Betti numbers of $M$ are readily computed. Our approach is based on a simple matrix reduction algorithm, slight variants of which compute kernels of morphisms between free modules, minimal generating sets, and Gr\"obner bases. Our algorithm for computing minimal presentations has been implemented in RIVET, a software tool for the visualization and analysis of two-parameter persistent homology. In experiments on topological data analysis problems, our implementation outperforms the standard computational commutative algebra packages Singular and Macaulay2 by a wide margin.
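The flavor of the underlying matrix reduction can be conveyed by the standard column reduction over $\mathbb{F}_2$, sketched below with columns stored as sets of row indices; this is a generic illustration under that simplifying assumption, not RIVET's bigraded implementation.

```python
def reduce_columns(columns):
    """Standard column reduction over F_2: each column is a set of row
    indices; repeatedly cancel the pivot (the largest row index) against
    an earlier column owning the same pivot. Variants of this reduction
    compute kernels of morphisms, minimal generating sets, and Groebner
    bases."""
    pivot_of = {}  # pivot row -> index of the column owning that pivot
    for j, col in enumerate(columns):
        while col and max(col) in pivot_of:
            col ^= columns[pivot_of[max(col)]]  # in-place addition over F_2
        if col:
            pivot_of[max(col)] = j
    return columns

# Example: a boundary-like matrix whose third column is dependent.
print(reduce_columns([{0, 1}, {1, 2}, {0, 2}]))  # -> [{0, 1}, {1, 2}, set()]
```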

Let $X$ and $Y$ be two real-valued random variables. Let $(X_{1},Y_{1}),(X_{2},Y_{2}),\ldots$ be independent identically distributed copies of $(X,Y)$. Suppose there are two players A and B. Player A has access to $X_{1},X_{2},\ldots$ and player B has access to $Y_{1},Y_{2},\ldots$. Without communication, what joint probability distributions can players A and B jointly simulate? That is, if $k,m$ are fixed positive integers, what probability distributions on $\{1,\ldots,m\}^{2}$ are equal to the distribution of $(f(X_{1},\ldots,X_{k}),\,g(Y_{1},\ldots,Y_{k}))$ for some $f,g\colon\mathbb{R}^{k}\to\{1,\ldots,m\}$? When $X$ and $Y$ are standard Gaussians with fixed correlation $\rho\in(-1,1)$, we show that the set of probability distributions that can be noninteractively simulated from $k$ Gaussian samples is the same for any $k\geq m^{2}$. Previously, it was not even known whether any finite number of samples would suffice, except when $m\leq 2$. Consequently, a straightforward brute-force search deciding whether or not a probability distribution on $\{1,\ldots,m\}^{2}$ is within distance $0<\epsilon<|\rho|$ of being noninteractively simulated from $k$ correlated Gaussian samples has run time bounded by $(5/\epsilon)^{m(\log(\epsilon/2) / \log|\rho|)^{m^{2}}}$, improving a bound of Ghazi, Kamath and Raghavendra. A nonlinear central limit theorem (i.e. invariance principle) of Mossel then generalizes this result to decide whether or not a probability distribution on $\{1,\ldots,m\}^{2}$ is within distance $0<\epsilon<|\rho|$ of being noninteractively simulated from $k$ samples of a given finite discrete distribution $(X,Y)$ in run time that does not depend on $k$, with constants that again improve a bound of Ghazi, Kamath and Raghavendra.

Ensemble methods based on subsampling, such as random forests, are popular in applications due to their high predictive accuracy. Existing literature views a random forest prediction as an infinite-order incomplete U-statistic to quantify its uncertainty. However, these methods focus on a small subsampling size of each tree, which is theoretically valid but practically limited. This paper develops an unbiased variance estimator based on incomplete U-statistics, which allows the subsample size of each tree to be comparable with the overall sample size, making statistical inference possible in a broader range of real applications. Simulation results demonstrate that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational costs. We also propose a local smoothing procedure to reduce the variation of our estimator, which shows improved numerical performance when the number of trees is relatively small. Further, we investigate the ratio consistency of our proposed variance estimator under specific scenarios. In particular, we develop a new "double U-statistic" formulation to analyze the Hoeffding decomposition of the estimator's variance.

Dyadic data is often encountered when quantities of interest are associated with the edges of a network. As such it plays an important role in statistics, econometrics and many other data science disciplines. We consider the problem of uniformly estimating a dyadic Lebesgue density function, focusing on nonparametric kernel-based estimators taking the form of dyadic empirical processes. Our main contributions include the minimax-optimal uniform convergence rate of the dyadic kernel density estimator, along with strong approximation results for the associated standardized and Studentized $t$-processes. A consistent variance estimator enables the construction of valid and feasible uniform confidence bands for the unknown density function. A crucial feature of dyadic distributions is that they may be "degenerate" at certain points in the support of the data, a property making our analysis somewhat delicate. Nonetheless our methods for uniform inference remain robust to the potential presence of such points. For implementation purposes, we discuss procedures based on positive semi-definite covariance estimators, mean squared error optimal bandwidth selectors and robust bias-correction techniques. We illustrate the empirical finite-sample performance of our methods both in simulations and with real-world data. Our technical results concerning strong approximations and maximal inequalities are of potential independent interest.
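As a point of reference, a basic dyadic kernel density estimator averages a kernel over all $n(n-1)/2$ edge observations. The sketch below uses an Epanechnikov kernel; the data-generating example and all names are illustrative, and the paper's contribution concerns uniform inference for such estimators rather than the point estimate itself.

```python
import numpy as np
from itertools import combinations

def dyadic_kde(W, w_grid, h):
    """Dyadic kernel density estimator: average an Epanechnikov kernel
    over all n(n-1)/2 edge observations W[i, j], i < j, where W is a
    symmetric n x n array of dyadic outcomes."""
    n = W.shape[0]
    vals = np.array([W[i, j] for i, j in combinations(range(n), 2)])
    u = (vals[None, :] - w_grid[:, None]) / h
    k = 0.75 * np.maximum(1 - u ** 2, 0.0)  # Epanechnikov kernel
    return k.mean(axis=1) / h

# Example: edge outcomes driven by latent node effects A_i plus noise.
rng = np.random.default_rng(1)
A = rng.normal(size=50)
W = np.add.outer(A, A) + rng.normal(scale=0.5, size=(50, 50))
print(dyadic_kde(W, np.linspace(-3, 3, 5), h=0.4))
```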

The asymptotic distribution of a wide class of V- and U-statistics with estimated parameters is derived in the case when the kernel is not necessarily differentiable in the parameter. The results have applications in goodness-of-fit problems.

We consider Gaussian measures $\mu, \tilde{\mu}$ on a separable Hilbert space, with fractional-order covariance operators $A^{-2\beta}$ resp. $\tilde{A}^{-2\tilde{\beta}}$, and derive necessary and sufficient conditions on $A, \tilde{A}$ and $\beta, \tilde{\beta} > 0$ for I. equivalence of the measures $\mu$ and $\tilde{\mu}$, and II. uniform asymptotic optimality of linear predictions for $\mu$ based on the misspecified measure $\tilde{\mu}$. These results hold, e.g., for Gaussian processes on compact metric spaces. As an important special case, we consider the class of generalized Whittle-Mat\'ern Gaussian random fields, where $A$ and $\tilde{A}$ are elliptic second-order differential operators, formulated on a bounded Euclidean domain $\mathcal{D}\subset\mathbb{R}^d$ and augmented with homogeneous Dirichlet boundary conditions. Our results explain why the predictive performance of stationary and non-stationary models in spatial statistics is often comparable, and they provide a crucial first step in deriving consistency results for parameter estimation of generalized Whittle-Mat\'ern fields.

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft-Q-learning, and maximum entropy policy gradient, and is closely related to optimism-based and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
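A tabular sketch of the recursion described above: add a bonus to the reward, then solve a soft (log-sum-exp) Bellman equation whose temperature equals the risk-seeking parameter. The bonus term, the discounting, and the iteration count are simplified placeholders; the paper derives the bonus from the risk-seeking utility in the episodic finite-horizon setting.

```python
import numpy as np
from scipy.special import logsumexp

def k_learning_values(P, R, bonus, tau, gamma=0.95, iters=500):
    """Tabular sketch of the K-value Bellman recursion:
    K(s,a) = r(s,a) + bonus(s,a)
             + gamma * sum_s' P(s'|s,a) * tau * log sum_a' exp(K(s',a')/tau).
    P has shape (S, A, S); R and bonus have shape (S, A)."""
    S, A, _ = P.shape
    K = np.zeros((S, A))
    for _ in range(iters):
        V = tau * logsumexp(K / tau, axis=1)  # soft state values
        K = R + bonus + gamma * (P @ V)
    return K

def boltzmann_policy(K, tau):
    """Boltzmann exploration policy induced by the K-values, with the
    temperature equal to the risk-seeking parameter tau."""
    z = np.exp((K - K.max(axis=1, keepdims=True)) / tau)
    return z / z.sum(axis=1, keepdims=True)
```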

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
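To make the bias-variance trade concrete, the sketch below minimizes an empirical risk plus an explicit variance penalty for a logistic loss; note that this direct penalty is generally nonconvex, which is precisely why a convex distributionally robust surrogate is attractive. All names and the penalty constant are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def variance_regularized_risk(theta, X, y, rho):
    """Empirical risk plus a variance penalty,
    R_n(theta) = mean(loss) + sqrt(2 * rho * Var_n(loss) / n),
    the quantity that the chi-squared distributionally robust surrogate
    upper-bounds convexly. Logistic loss for a linear classifier."""
    n = len(y)
    margins = y * (X @ theta)
    losses = np.logaddexp(0.0, -margins)  # log(1 + exp(-margin))
    return losses.mean() + np.sqrt(2 * rho * losses.var() / n)

# Fit with a generic optimizer; this direct penalty need not be convex.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=200))
res = minimize(variance_regularized_risk, np.zeros(3), args=(X, y, 5.0))
print(res.x)
```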
