
Low-rank matrix models have been universally useful for numerous applications, from classical system identification to more modern matrix completion in signal processing and statistics. The nuclear norm has been employed as a convex surrogate of low-rankness since it induces a low-rank solution to inverse problems. While the nuclear norm for low-rankness has a nice analogy with the $\ell_1$ norm for sparsity through the singular value decomposition, other matrix norms also induce low-rankness. In particular, when a matrix is interpreted as a linear operator between Banach spaces, various tensor product norms generalize the role of the nuclear norm. We provide a tensor-norm-constrained estimator for the recovery of approximately low-rank matrices from local measurements corrupted with noise. A tensor-norm regularizer is designed to adapt to the local structure. We derive a statistical analysis of the estimator for matrix completion and decentralized sketching by applying Maurey's empirical method to tensor products of Banach spaces. The estimator provides a near-optimal error bound in a minimax sense and admits a polynomial-time algorithm for these applications.
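
To make the classical baseline concrete, the following is a minimal sketch of matrix completion with the nuclear-norm surrogate via iterative singular-value thresholding (soft-impute). It is only the standard convex baseline that the tensor-norm-constrained estimator generalizes; the estimator itself and the decentralized-sketching measurement model are not reproduced here, and the threshold and iteration count are illustrative assumptions.

```python
# Minimal sketch: matrix completion via iterative singular-value thresholding
# (the classical nuclear-norm surrogate; not the tensor-norm estimator above).
import numpy as np

def svt_complete(Y, mask, tau=0.5, n_iters=200):
    """Soft-impute: fill missing entries, shrink singular values by tau."""
    X = np.zeros_like(Y)
    for _ in range(n_iters):
        Z = np.where(mask, Y, X)               # observed entries from Y, estimate elsewhere
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # soft-threshold -> low-rank iterate
        X = (U * s) @ Vt
    return X

# Toy usage: rank-2 ground truth, roughly half of the entries observed with noise
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(A.shape) < 0.5
Y = A + 0.01 * rng.normal(size=A.shape)
X_hat = svt_complete(Y, mask)
print("relative error:", np.linalg.norm(X_hat - A) / np.linalg.norm(A))
```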

Related content

Iterative hard thresholding (IHT) has gained popularity over the past decades in large-scale optimization. However, convergence properties of this method have only been explored recently in non-convex settings. In matrix completion, existing works often focus on the guarantee of global convergence of IHT via standard assumptions such as the incoherence property and uniform sampling. While such analysis provides a global upper bound on the linear convergence rate, it does not describe the actual performance of IHT in practice. In this paper, we provide a novel insight into the local convergence of a specific variant of IHT for matrix completion, referred to below as IHTSVD. We uncover the exact linear rate of IHT in a closed-form expression and identify the region of convergence in which the algorithm is guaranteed to converge. Furthermore, we utilize random matrix theory to study the linear rate of convergence of IHTSVD for large-scale matrix completion. We find that, asymptotically, the rate can be expressed in closed form in terms of the relative rank and the sampling rate. Finally, we present various numerical results to verify the aforementioned theoretical analysis.
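
For concreteness, here is a minimal sketch of IHT-style matrix completion with SVD-based hard thresholding; the inverse-sampling-rate step size, iteration count, and toy data are assumptions, not the exact variant analyzed in the paper.

```python
# A minimal sketch of IHT for matrix completion: gradient step on the observed
# entries followed by projection onto rank-r matrices via truncated SVD.
import numpy as np

def hard_threshold_rank(X, r):
    """Project onto the set of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def iht_matrix_completion(Y, mask, r, step, n_iters=300):
    X = np.zeros_like(Y)
    for _ in range(n_iters):
        grad = mask * (X - Y)                    # gradient of 0.5*||P_Omega(X - Y)||^2
        X = hard_threshold_rank(X - step * grad, r)
    return X

# Toy usage: rank-2 target, 40% of entries observed; step = 1/sampling rate (assumption)
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 2)) @ rng.normal(size=(2, 60))
mask = (rng.random(A.shape) < 0.4).astype(float)
X_hat = iht_matrix_completion(mask * A, mask, r=2, step=1.0 / mask.mean())
print("relative error:", np.linalg.norm(X_hat - A) / np.linalg.norm(A))
```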

Analysis and use of stochastic models represented by discrete-time Markov chains require evaluation of performance measures and characterization of the stationary distribution. Analytical solutions are often unavailable when the system states are continuous or mixed. This paper presents a new method for computing the stationary distribution and performance measures of stochastic systems represented by continuous- or mixed-state Markov chains. We show the asymptotic convergence of our method and provide deterministic, non-asymptotic error bounds under the supremum norm. Our finite approximation method is near-optimal among all discrete approximate distributions, including empirical distributions obtained from Markov chain Monte Carlo (MCMC). Numerical experiments validate the accuracy and efficiency of our method and show that it significantly outperforms MCMC-based approaches.
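
The abstract does not spell out the approximation scheme, so the sketch below shows only the generic baseline it improves upon: discretize a continuous-state chain (here an AR(1) process, an assumed example) on a grid and solve the resulting finite chain for its stationary distribution.

```python
# Illustrative baseline only: grid discretization of a continuous-state chain
# followed by solving the finite chain for its stationary distribution.
import numpy as np
from scipy.stats import norm

rho, sigma = 0.9, 1.0                          # X_{t+1} = rho * X_t + N(0, sigma^2)
grid = np.linspace(-6, 6, 201)                 # discretization grid
edges = np.concatenate(([-np.inf], (grid[:-1] + grid[1:]) / 2, [np.inf]))

# Transition matrix: P[i, j] = Pr(next state in cell j | current state = grid[i])
cdf = norm.cdf((edges[None, :] - rho * grid[:, None]) / sigma)
P = np.diff(cdf, axis=1)

# Stationary distribution: left eigenvector of P associated with eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
print("approx. stationary std:", np.sqrt((pi * grid**2).sum()))  # ~ sigma / sqrt(1 - rho^2)
```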

Stochastic gradient methods have enabled variational inference for high-dimensional models and large data sets. However, the steepest ascent direction in the parameter space of a statistical model is given not by the commonly used Euclidean gradient, but by the natural gradient, which premultiplies the Euclidean gradient by the inverse Fisher information matrix. Use of natural gradients can improve convergence significantly, but inverting the Fisher information matrix is daunting in high dimensions. In Gaussian variational approximation, natural gradient updates of the natural parameters (expressed in terms of the mean and precision matrix) of the Gaussian distribution can be derived analytically, but they do not ensure that the precision matrix remains positive definite. To tackle this issue, we consider a Cholesky decomposition of the covariance or precision matrix and derive explicit natural gradient updates of the Cholesky factor by finding the inverse of the Fisher information matrix analytically. Natural gradient updates of the Cholesky factor, as compared to updates of the natural parameters, depend only on the first rather than the second derivative of the log posterior density, which reduces computational cost. Sparsity constraints incorporating posterior independence structure can be imposed by fixing relevant entries of the Cholesky factor to zero.
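
As a hedged illustration of the parameterization only (not of the analytic natural-gradient updates derived in the paper), the snippet below builds a Gaussian variational approximation whose precision matrix is parameterized through a lower-triangular Cholesky factor, shows the reparameterized draw, and shows how sparsity constraints can be imposed by masking entries of the factor; all dimensions and the sparsity pattern are arbitrary assumptions.

```python
# Gaussian variational family q(theta) = N(mu, (C C^T)^{-1}) with a sparse
# lower-triangular Cholesky factor C of the precision matrix.
import numpy as np
from scipy.linalg import solve_triangular

d = 4
mu = np.zeros(d)
C = np.eye(d)                                  # Cholesky factor of the precision matrix
sparsity = np.tril(np.ones((d, d)))            # mask: 1 = free entry, 0 = fixed at zero
sparsity[3, 0] = 0                             # fix an entry to zero, as in the abstract,
C = C * sparsity                               # to encode assumed independence structure

def sample_q(mu, C, rng):
    """theta = mu + C^{-T} z gives theta ~ N(mu, (C C^T)^{-1})."""
    z = rng.normal(size=mu.shape)
    return mu + solve_triangular(C, z, lower=True, trans='T')

rng = np.random.default_rng(0)
draws = np.array([sample_q(mu, C, rng) for _ in range(5000)])
print("empirical covariance close to (C C^T)^{-1}:",
      np.allclose(np.cov(draws.T), np.linalg.inv(C @ C.T), atol=0.1))
```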

Sparse binary matrices are of great interest in the field of compressed sensing. This class of matrices makes it possible to perform signal recovery with lower storage costs and faster decoding algorithms. In particular, random matrices formed by i.i.d. Bernoulli($p$) random variables are of practical relevance in the context of nonnegative sparse recovery. In this work, we investigate the robust nullspace property of sparse Bernoulli($p$) matrices. Previous results in the literature establish that such matrices can accurately recover $n$-dimensional $s$-sparse vectors with $m=O\left(\frac{s}{c(p)}\log\frac{en}{s}\right)$ measurements, where $c(p) \le p$ is a constant that depends only on the parameter $p$. These results suggest that, when $p$ vanishes, the sparse Bernoulli matrix requires considerably more measurements than the minimum achieved by standard isotropic subgaussian designs. We show that this is not true. Our main result characterizes, for a wide range of sparsity levels $s$, the smallest $p$ such that sparse recovery is possible with the minimal number of measurements. We also provide matching lower bounds to establish the optimality of our results.
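
A small illustrative experiment (not the paper's proof technique): sense a nonnegative sparse vector with a sparse Bernoulli($p$) matrix and decode by nonnegative least squares, a standard decoder in this setting; the dimensions and the value of $p$ are arbitrary assumptions, and recovery is not guaranteed for every random draw.

```python
# Nonnegative sparse recovery with a sparse Bernoulli(p) design and NNLS decoding.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n, s, p = 500, 10, 0.05
m = 150                                        # number of measurements (assumption)

A = (rng.random((m, n)) < p).astype(float)     # sparse Bernoulli(p) design
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.random(s) + 0.5   # nonnegative s-sparse signal
y = A @ x

x_hat, _ = nnls(A, y)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```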

In this paper, we are interested in an inverse Cauchy problem governed by the Stokes equation, called the data completion problem. It consists in determining the unspecified fluid velocity, or one of its components, over a part of the boundary from given measurements on the remaining part. As is well known, this problem is severely ill-posed in the sense of Hadamard \cite{had}; it is therefore an interesting challenge to devise a numerical procedure for approximating its solution, especially in the case of noisy data. To solve this problem, we propose a regularizing approach based on a coupled complex boundary method, originally proposed in \cite{source} for solving an inverse source problem. We establish the existence of solutions to the regularized optimization problem and prove that a subsequence of optimal solutions of the Tikhonov regularization formulations converges to the solution of the Cauchy problem. We then describe the numerical approximation of this problem using the adjoint gradient technique and a finite element method of $P1$-bubble/$P1$ type. Finally, we provide numerical results showing the accuracy, effectiveness, and robustness of the proposed approach.

Several recent applications of optimal transport (OT) theory to machine learning have relied on regularization, notably entropic regularization and the Sinkhorn algorithm. Because matrix-vector products are pervasive in the Sinkhorn algorithm, several works have proposed to \textit{approximate} kernel matrices appearing in its iterations using low-rank factors. Another route lies instead in imposing low-rank constraints on the feasible set of couplings considered in OT problems, with no approximation of the cost or kernel matrices. This route was first explored by Forrow et al., 2018, who proposed an algorithm tailored for the squared Euclidean ground cost, using a proxy objective that can be solved through the machinery of regularized 2-Wasserstein barycenters. Building on this, we introduce in this work a generic approach that aims at solving, in full generality, the OT problem under low-rank constraints with arbitrary costs. Our algorithm relies on an explicit factorization of low-rank couplings as a product of \textit{sub-coupling} factors linked by a common marginal; similar to an NMF approach, we alternately update these factors. We prove the non-asymptotic stationary convergence of this algorithm and illustrate its efficiency on benchmark experiments.
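
To make the central object concrete, the sketch below constructs a feasible low-rank coupling from two sub-coupling factors linked by a common inner marginal, and shows why the factorization pays off when the cost matrix also factorizes; the alternating updates of the paper's algorithm are not reproduced, and the chosen factors are just the simplest feasible (independent) ones.

```python
# A rank-r coupling written as T = Q diag(1/g) R^T, with sub-couplings Q (n x r)
# and R (m x r) sharing the common inner marginal g.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 200, 300, 5
a = np.full(n, 1.0 / n)                        # source marginal
b = np.full(m, 1.0 / m)                        # target marginal
g = rng.dirichlet(np.ones(r))                  # common inner marginal

# Simplest feasible sub-couplings: row sums a (resp. b), column sums g
Q = np.outer(a, g)
R = np.outer(b, g)
T = Q @ np.diag(1.0 / g) @ R.T                 # rank <= r coupling with marginals (a, b)
print(np.allclose(T.sum(axis=1), a), np.allclose(T.sum(axis=0), b))

# If the cost also factorizes, C = C1 @ C2.T, the OT cost <C, T> can be evaluated
# in O((n + m) r) memory without ever forming the n x m matrices:
C1, C2 = rng.random((n, 4)), rng.random((m, 4))
cost = np.trace((C2.T @ R) @ np.diag(1.0 / g) @ (Q.T @ C1))
print(np.isclose(cost, np.sum((C1 @ C2.T) * T)))
```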

In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, there has been vast development of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lie on a lower-dimensional manifold, so the high dimensionality can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data are often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower-dimensional manifold and suggest a non-linear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (namely, $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. In this way, the preparatory step of dimension reduction, which introduces distortions into the data, can be avoided altogether.
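
A much-simplified sketch of the underlying idea (only the purely linear, $m = 1$ case): project a noisy query point onto a local $d$-dimensional affine subspace fitted by weighted PCA over nearby samples. The full method iterates to find a local coordinate system and adds a degree-$m$ local polynomial correction, both omitted here; the kernel width and toy data are assumptions.

```python
# Local weighted-PCA projection of a noisy point toward the underlying manifold.
import numpy as np

def local_linear_project(q, data, d, h):
    """Project query q onto a weighted local PCA plane of intrinsic dimension d."""
    w = np.exp(-np.sum((data - q) ** 2, axis=1) / h**2)   # Gaussian weights
    mu = (w[:, None] * data).sum(axis=0) / w.sum()        # weighted local mean
    X = (data - mu) * np.sqrt(w)[:, None]
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    B = Vt[:d]                                             # local tangent basis
    return mu + B.T @ (B @ (q - mu))

# Toy usage: noisy samples from a circle (d = 1) embedded in R^10
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 2000)
clean = np.zeros((2000, 10))
clean[:, 0], clean[:, 1] = np.cos(t), np.sin(t)
noisy = clean + 0.05 * rng.normal(size=clean.shape)
q_proj = local_linear_project(noisy[0], noisy, d=1, h=0.3)
print("distance of projection from unit circle:", abs(np.linalg.norm(q_proj[:2]) - 1.0))
```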

Implicit probabilistic models are models defined naturally in terms of a sampling procedure; they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, yet can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.

In this paper, we propose a listwise approach for constructing user-specific rankings in recommendation systems in a collaborative fashion. We contrast the listwise approach with previous pointwise and pairwise approaches, which treat either each rating or each pairwise comparison as an independent instance, respectively. Extending the work of Cao et al. (2007), we cast listwise collaborative ranking as maximum likelihood under a permutation model which assigns probability mass to permutations based on a low-rank latent score matrix. We present a novel algorithm called SQL-Rank, which can accommodate ties and missing data and can run in linear time. We develop a theoretical framework for analyzing listwise ranking methods based on a novel representation theory for the permutation model. Applying this framework to collaborative ranking, we derive asymptotic statistical rates as the numbers of users and items grow together. We conclude by demonstrating that our SQL-Rank method often outperforms current state-of-the-art algorithms for implicit feedback such as Weighted-MF and BPR, and achieves favorable results compared to explicit feedback algorithms such as matrix factorization and collaborative ranking.
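
As a hedged sketch of the listwise ingredient only, the snippet below evaluates a Plackett-Luce-style permutation log-likelihood for one user, with scores given by a low-rank factorization; SQL-Rank's specific permutation probability, its handling of ties and missing data, and its linear-time implementation are omitted, and the factor dimensions are assumptions.

```python
# Listwise log-likelihood of an observed ranking under a permutation model
# driven by low-rank latent scores s = U[u] @ V.T.
import numpy as np

def listwise_log_likelihood(scores, ranking):
    """log P(ranking | scores) under a Plackett-Luce permutation model."""
    s = scores[ranking]                        # scores ordered by the observed ranking
    ll = 0.0
    for j in range(len(s)):
        ll += s[j] - np.logaddexp.reduce(s[j:])   # item j is chosen over all remaining items
    return ll

rng = np.random.default_rng(0)
n_users, n_items, r = 50, 100, 5
U, V = rng.normal(size=(n_users, r)), rng.normal(size=(n_items, r))
u = 3
scores = U[u] @ V.T
ranking = np.argsort(-scores)                  # the most likely ranking for user u
print(listwise_log_likelihood(scores, ranking))
```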

This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
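
One representative reconstruction from this family, in simplified form and under assumed sketch sizes: draw two random test matrices, record the two-sided sketch of the input, and rebuild a low-rank approximation from the sketch alone; fixed-rank truncation and the positive-semidefinite-preserving variants are omitted.

```python
# Low-rank approximation from a random linear sketch (Y, W) of A.
import numpy as np

rng = np.random.default_rng(0)
m, n, true_rank = 400, 300, 10
A = rng.normal(size=(m, true_rank)) @ rng.normal(size=(true_rank, n))

k, l = 25, 50                                  # sketch sizes, l > k (assumptions)
Omega = rng.normal(size=(n, k))                # right test matrix
Psi = rng.normal(size=(l, m))                  # left test matrix
Y, W = A @ Omega, Psi @ A                      # the sketch: the only access to A

Q, _ = np.linalg.qr(Y)                         # orthonormal estimate of the range of A
X = np.linalg.lstsq(Psi @ Q, W, rcond=None)[0] # least-squares fit in the sketched space
A_hat = Q @ X                                  # rank <= k approximation of A
print("relative error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```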
