We present two linear relations between an arbitrary (real tempered second order) generalized stochastic process over $\mathbb{R}^{d}$ and White Noise processes over $\mathbb{R}^{d}$. The first is that any generalized stochastic process can be obtained as a linear transformation of a White Noise. The second indicates that, under dimensional compatibility conditions, a generalized stochastic process can be linearly transformed into a White Noise. The arguments rely on the regularity theorem for tempered distributions, which is used to obtain a mean-square continuous stochastic process that is then expressed in a Karhunen-Lo\`eve expansion with respect to a convenient Hilbert space. The first linear relation also allows us to conclude that any generalized stochastic process has an orthogonal representation as a series expansion of deterministic tempered distributions weighted by uncorrelated random variables with summable variances. This representation is then used to establish the second linear relation.
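
As a toy numerical illustration (not from the paper), the Karhunen-Lo\`eve idea can be seen by discretising the covariance of a mean-square continuous process, here Brownian motion on $[0,1]$, and expanding sample paths in its eigenfunctions with uncorrelated coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretised Karhunen-Loeve sketch for Brownian motion on [0, 1]:
# deterministic eigenfunctions weighted by uncorrelated random variables
# with summable variances (the eigenvalues).
m = 200
t = np.linspace(0, 1, m + 1)[1:]
K = np.minimum.outer(t, t)                     # Cov(B_s, B_t) = min(s, t)

lam, phi = np.linalg.eigh(K)                   # eigenpairs of the covariance
lam, phi = lam[::-1], phi[:, ::-1]             # sort descending

k = 50                                          # truncation level
xi = rng.standard_normal(k)                     # uncorrelated coefficients
path = phi[:, :k] @ (np.sqrt(np.clip(lam[:k], 0, None)) * xi)

# The truncated expansion reproduces the covariance up to a small tail error.
K_k = (phi[:, :k] * lam[:k]) @ phi[:, :k].T
tail_err = np.abs(K - K_k).max()
```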

This article studies the estimation of community memberships from non-binary pair interactions represented by an $N$-by-$N$ tensor whose values are elements of $\mathcal S$, where $N$ is the number of nodes and $\mathcal S$ is the space of pairwise interactions between the nodes. As an information-theoretic benchmark, we study data sets generated by a non-binary stochastic block model, and derive fundamental information criteria for the recovery of the community memberships as $N \to \infty$. Examples of applications include weighted networks ($\mathcal S = \mathbb R$), link-labeled networks ($\mathcal S = \{0, 1, \dots, L\}$), multiplex networks ($\mathcal S = \{0,1\}^M$) and temporal networks ($\mathcal S = \{0,1\}^T$). For temporal interactions, we show that (i) even a small increase in $T$ may have a big impact on the recovery of community memberships, and (ii) consistent recovery is possible even for very sparse data (e.g.\ bounded average degree) when $T$ is large enough. We also present several estimation algorithms, both offline and online, which fully utilise the temporal nature of the observed data. We analyse the accuracy of the proposed estimation algorithms under various assumptions on data sparsity and identifiability. Numerical experiments show that even a poor initial estimate (e.g., a blind random guess) of the community assignment leads to high accuracy of the online algorithm after a small number of iterations, remarkably even in very sparse regimes.
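
A hypothetical toy instance of the temporal case ($\mathcal S = \{0,1\}^T$) can be generated as follows; the node count, two-community structure and edge probabilities are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Temporal stochastic block model: N nodes, 2 communities, and pairwise
# interactions in S = {0,1}^T stored as an N-by-N-by-T binary array.
N, T = 60, 5
z = rng.integers(0, 2, size=N)          # community memberships
p_in, p_out = 0.8, 0.2                  # per-snapshot edge probabilities

X = np.zeros((N, N, T), dtype=int)
for i in range(N):
    for j in range(i + 1, N):
        p = p_in if z[i] == z[j] else p_out
        x = rng.binomial(1, p, size=T)  # i.i.d. snapshots for the pair
        X[i, j], X[j, i] = x, x         # symmetric interactions
```

Each pair's length-$T$ interaction pattern is its element of $\mathcal S = \{0,1\}^T$; increasing $T$ gives more observations per pair, which is the effect quantified in the abstract.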

This paper focuses on stochastic saddle point problems with decision-dependent distributions in both the static and time-varying settings. These are problems whose objective is the expected value of a stochastic payoff function, where random variables are drawn from a distribution induced by a distributional map. For general distributional maps, the problem of finding saddle points is computationally burdensome, even if the distribution is known. To enable a tractable solution approach, we introduce the notion of equilibrium points -- which are saddle points for the stationary stochastic minimax problem that they induce -- and provide conditions for their existence and uniqueness. We demonstrate that the distance between the two classes of solutions is bounded provided that the objective has a strongly-convex-strongly-concave payoff and Lipschitz continuous distributional map. We develop deterministic and stochastic primal-dual algorithms and demonstrate their convergence to the equilibrium point. In particular, by modeling errors emerging from a stochastic gradient estimator as sub-Weibull random variables, we provide error bounds in expectation and in high probability that hold for each iteration; moreover, we show convergence to a neighborhood in expectation and almost surely. Finally, we investigate a condition on the distributional map -- which we call opposing mixture dominance -- that ensures the objective is strongly-convex-strongly-concave. Under this assumption, we show that primal-dual algorithms converge to the saddle points in a similar fashion.
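
A minimal sketch of the primal-dual (gradient descent-ascent) iteration on a toy strongly-convex-strongly-concave payoff, with the distributional map omitted; the payoff and step size are illustrative:

```python
import numpy as np

# Toy payoff f(x, y) = x^2 - y^2 + 2xy, strongly convex in x and strongly
# concave in y, with unique saddle point (0, 0). Descend in x, ascend in y.
def grad_x(x, y): return 2 * x + 2 * y
def grad_y(x, y): return -2 * y + 2 * x

x, y, eta = 1.0, -1.0, 0.1
for _ in range(500):
    x, y = x - eta * grad_x(x, y), y + eta * grad_y(x, y)

print(abs(x) + abs(y))     # iterates contract toward the saddle point
```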

Stochastic majorization-minimization (SMM) is an online extension of the classical principle of majorization-minimization, which consists of sampling i.i.d. data points from a fixed data distribution and minimizing a recursively defined majorizing surrogate of an objective function. In this paper, we introduce stochastic block majorization-minimization, where the surrogates need only be block multi-convex and a single block is optimized at a time within a diminishing radius. By relaxing the standard strong convexity requirements for surrogates in SMM, our framework gains wider applicability, including online CANDECOMP/PARAFAC (CP) dictionary learning, and yields greater computational efficiency, especially when the problem dimension is large. We provide an extensive convergence analysis of the proposed algorithm, derived under possibly dependent data streams, relaxing the standard i.i.d. assumption on data samples. We show that the proposed algorithm converges almost surely to the set of stationary points of a nonconvex objective under constraints at a rate $O((\log n)^{1+\epsilon}/n^{1/2})$ for the empirical loss function and $O((\log n)^{1+\epsilon}/n^{1/4})$ for the expected loss function, where $n$ denotes the number of data samples processed. Under an additional assumption, the latter convergence rate can be improved to $O((\log n)^{1+\epsilon}/n^{1/2})$. Our results provide the first convergence rate bounds for various online matrix and tensor decomposition algorithms under a general Markovian data setting.

We prove that the second moment of the number of critical points of any sufficiently regular random field, for example with almost surely $ C^3 $ sample paths, defined over a compact Whitney stratified manifold is finite. Our results hold without the assumption of stationarity, which has traditionally been assumed in other work. Under stationarity, we demonstrate that our imposed conditions imply the generalized Geman condition of Estrade (2016).

The joint bidiagonalization (JBD) method has been used to compute some extreme generalized singular values and vectors of a large regular matrix pair $\{A,L\}$, for which three approaches have been proposed to compute approximate generalized singular values and vectors. We make a numerical analysis of the underlying JBD process and establish relationships between it and two mathematically equivalent Lanczos bidiagonalizations in finite precision. Based on the results of the numerical analysis, we investigate the convergence of the approximate generalized singular values and vectors of $\{A,L\}$. The results show that, under some mild conditions, the semiorthogonality of Lanczos type vectors suffices to deliver approximate generalized singular values with the same accuracy as full orthogonality does, meaning that it is only necessary to seek efficient semiorthogonalization strategies for the JBD process. We also establish a sharp bound for the residual norm of an approximate generalized singular value and corresponding approximate right generalized singular vectors, which can reliably estimate the residual norm without explicitly computing the approximate right generalized singular vectors before convergence occurs.
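
For orientation, a sketch of a single Golub-Kahan (Lanczos) bidiagonalization, the building block that the JBD process couples; full reorthogonalization is used here for simplicity, whereas the point above is that cheaper semiorthogonalization strategies can suffice:

```python
import numpy as np

def gk_bidiag(A, b, k):
    """k steps of Golub-Kahan bidiagonalization, A V_k = U_{k+1} B_k,
    with full reorthogonalization of the Lanczos vectors."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        r = A.T @ U[:, j]
        r -= V[:, :j] @ (V[:, :j].T @ r)            # reorthogonalize
        B[j, j] = np.linalg.norm(r); V[:, j] = r / B[j, j]
        p = A @ V[:, j]
        p -= U[:, :j + 1] @ (U[:, :j + 1].T @ p)    # reorthogonalize
        B[j + 1, j] = np.linalg.norm(p); U[:, j + 1] = p / B[j + 1, j]
    return U, B, V

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
U, B, V = gk_bidiag(A, rng.standard_normal(30), 10)
# After n steps the bidiagonal B shares its singular values with A.
```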

In this paper, we show that the diagonal of a high-dimensional sample covariance matrix stemming from $n$ independent observations of a $p$-dimensional time series with finite fourth moments can be approximated in spectral norm by the diagonal of the population covariance matrix. We assume that $n,p\to \infty$ with $p/n$ tending to a constant which might be positive or zero. As applications, we provide an approximation of the sample correlation matrix ${\mathbf R}$ and derive a variety of results for its eigenvalues. We identify the limiting spectral distribution of ${\mathbf R}$ and construct an estimator for the population correlation matrix and its eigenvalues. Finally, the almost sure limits of the extreme eigenvalues of ${\mathbf R}$ in a generalized spiked correlation model are analyzed.
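
A quick numerical check of the diagonal approximation in the simplest setting (independent Gaussian rows rather than a genuine time series; the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Diagonal of the sample covariance vs. population variances: the diagonal
# difference is uniformly small even for large p (here p/n = 1/5).
n, p = 2000, 400
sigma2 = rng.uniform(0.5, 2.0, size=p)         # population variances
X = rng.standard_normal((n, p)) * np.sqrt(sigma2)

S = X.T @ X / n                                # sample covariance
err = np.max(np.abs(np.diag(S) - sigma2))      # spectral norm of the
                                               # diagonal difference
R = np.corrcoef(X, rowvar=False)               # sample correlation matrix
```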

Kendall transformation is a conversion of an ordered feature into a vector of pairwise order relations between individual values. In this way, it preserves the ranking of observations and represents it in a categorical form. Such a transformation allows for the generalisation of methods requiring strictly categorical input, especially in the limit of a small number of observations, when discretisation becomes problematic. In particular, many approaches of information theory can be directly applied to Kendall-transformed continuous data without relying on differential entropy or any additional parameters. Moreover, by filtering the information to that contained in the ranking, Kendall transformation leads to better robustness at a reasonable cost of dropping sophisticated interactions which are anyhow unlikely to be correctly estimated. In bivariate analysis, Kendall transformation can be related to popular non-parametric methods, showing the soundness of the approach. The paper also demonstrates its efficiency in multivariate problems, as well as provides an example analysis of a real-world data set.
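
A minimal sketch of the transformation itself, mapping a numeric vector to the categorical vector of order relations over all ordered pairs $(i, j)$, $i \neq j$:

```python
import numpy as np

def kendall_transform(x):
    """Kendall transformation: one categorical symbol per ordered pair,
    recording whether x[i] is below, above, or tied with x[j]."""
    x = np.asarray(x)
    return np.array(['<' if x[i] < x[j] else '>' if x[i] > x[j] else '='
                     for i in range(len(x)) for j in range(len(x)) if i != j])

print(kendall_transform([3, 1, 2]))    # ['>' '>' '<' '<' '<' '>']
```

For two tie-free vectors, encoding the symbols numerically ($-1$ for `<`, $+1$ for `>`) makes the Pearson correlation of the transforms equal to Kendall's $\tau$, which is the connection to non-parametric methods mentioned above.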

Escaping saddle points is a central research topic in nonconvex optimization. In this paper, we propose a simple gradient-based algorithm such that for a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$, it outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log n/\epsilon^{1.75})$ iterations. Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}((\log n)^{4}/\epsilon^{2})$ or $\tilde{O}((\log n)^{6}/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$. For the stochastic setting, our algorithm outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}((\log n)^{2}/\epsilon^{4})$ iterations. Technically, our main contribution is an idea of implementing a robust Hessian power method using only gradients, which can find negative curvature near saddle points and achieve the polynomial speedup in $\log n$ compared to the perturbed gradient descent methods. Finally, we also perform numerical experiments that support our results.
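
The key primitive, a Hessian power method using only gradients, can be sketched on a toy saddle: Hessian-vector products are formed by finite differences of gradients and drive a shifted power iteration toward a negative-curvature direction (the function, shift and iteration count are illustrative, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy f(x, y) = x^2 - y^2 with a saddle at the origin; H = diag(2, -2).
def grad(z):
    return np.array([2.0 * z[0], -2.0 * z[1]])

def hess_vec(z, v, r=1e-5):
    # Hessian-vector product from two gradient evaluations only.
    return (grad(z + r * v) - grad(z - r * v)) / (2 * r)

z = np.zeros(2)                                  # the saddle point
v = rng.standard_normal(2); v /= np.linalg.norm(v)
c = 3.0                                          # shift exceeding ||H||
for _ in range(100):
    w = c * v - hess_vec(z, v)                   # power step on cI - H
    v = w / np.linalg.norm(w)

curvature = v @ hess_vec(z, v)                   # negative: escape direction
```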

Graph Neural Networks (GNN) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or equivariant (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity and either an invariant or equivariant linear operator. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Finally, unlike many previous settings that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
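
A single channel of the architecture described above (equivariant linear operator, pointwise non-linearity, invariant linear operator) can be checked numerically; the specific operators and weights are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

n, d = 6, 4
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def equivariant_linear(X):
    # Permuting the rows of X permutes the rows of the output.
    return X @ W1 + np.mean(X, axis=0, keepdims=True) @ W2

def channel(X):
    H = np.tanh(equivariant_linear(X))   # pointwise non-linearity
    return H.sum(axis=0)                 # invariant linear operator (sum-pool)

X = rng.standard_normal((n, d))
perm = rng.permutation(n)
print(np.allclose(channel(X), channel(X[perm])))  # prints True: invariance
```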

Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretisation error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle with sparse simplex spaces, i.e., when many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretisation error. To get around this, we propose the stochastic CIR process, which removes all discretisation error, and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within an SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
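
The CIR diffusion $dX = a(b - X)\,dt + \sigma\sqrt{X}\,dW$ underlying the proposal has a known noncentral chi-squared transition law, so it can be simulated exactly, without any time-discretisation error; a sketch with illustrative parameters (this is the classical exact CIR sampler, not the paper's full SGMCMC scheme):

```python
import numpy as np

rng = np.random.default_rng(6)

# Exact CIR transition: X_{t+dt} | X_t = c * ncx2(df, nc), with
#   c  = sigma^2 (1 - e^{-a dt}) / (4a),
#   df = 4ab / sigma^2,
#   nc = X_t e^{-a dt} / c.
a, b, sigma, dt = 2.0, 0.5, 1.0, 0.1
df = 4 * a * b / sigma**2
c = sigma**2 * (1 - np.exp(-a * dt)) / (4 * a)

x = 0.5
path = [x]
for _ in range(5000):
    nc = x * np.exp(-a * dt) / c
    x = c * rng.noncentral_chisquare(df, nc)
    path.append(x)

# The stationary mean of this CIR process is b = 0.5, and the sampler
# never leaves the positive half-line, mirroring simplex constraints.
print(np.mean(path))
```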
