
We propose a theory for matrix completion that goes beyond the low-rank structure commonly considered in the literature and applies to general matrices of low description complexity, including sparse matrices, matrices with sparse factorizations (such as sparse R-factors in their QR decomposition), and algebraic combinations of matrices of low description complexity. The mathematical concept underlying this theory is that of rectifiability, a basic notion in geometric measure theory. The complexity of the sets of matrices encompassed by the theory is measured in terms of Hausdorff and Minkowski dimensions. Our goal is the characterization of the number of linear measurements, with an emphasis on rank-$1$ measurements, needed for the existence of an algorithm that yields reconstruction, either perfect, with probability 1, or with arbitrarily small probability of error, depending on the setup. Specifically, we show that matrices taken from a set $\mathcal{U}$ such that $\mathcal{U}-\mathcal{U}$ has Hausdorff dimension $s$ (or is countably $s$-rectifiable) can be recovered from $k>s$ measurements, and random matrices supported on a set $\mathcal{U}$ of Hausdorff dimension $s$ (or a countably $s$-rectifiable set) can be recovered with probability 1 from $k>s$ measurements. What is more, we establish the existence of $\beta$-H\"older continuous decoders recovering matrices taken from a set of upper Minkowski dimension $s$ from $k>2s/(1-\beta)$ measurements and, with arbitrarily small probability of error, random matrices supported on a set of upper Minkowski dimension $s$ from $k>s/(1-\beta)$ measurements.
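The decoders in this abstract are existence-theoretic, so no algorithm can be coded from it; as a purely illustrative sketch of the measurement model itself, the snippet below takes $k$ rank-$1$ linear measurements $y_i = \boldsymbol{u}_i^\top X \boldsymbol{v}_i = \langle \boldsymbol{u}_i \boldsymbol{v}_i^\top, X\rangle$ of a sparse matrix (one example of low description complexity). All sizes and the sparsity pattern are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 5, 5, 12                    # matrix size and number of measurements

# A sparse matrix: one example of "low description complexity"
X = np.zeros((m, n))
X[rng.integers(0, m, 3), rng.integers(0, n, 3)] = rng.standard_normal(3)

# k rank-1 linear measurements y_i = u_i^T X v_i = <u_i v_i^T, X>
U = rng.standard_normal((k, m))
V = rng.standard_normal((k, n))
y = np.einsum('ki,ij,kj->k', U, X, V)

# Each measurement is the inner product of X with the rank-1 matrix u_i v_i^T
assert np.allclose(y, [U[i] @ X @ V[i] for i in range(k)])
```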

Related content

An equivalent definition of hypermatrices is introduced. The matrix expression of hypermatrices is proposed. Using permutation matrices, the conversion of different matrix expressions is revealed. The various contracted products of hypermatrices are realized by semi-tensor products (STP) of matrices via matrix expressions of hypermatrices.
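A common definition of the (left) semi-tensor product used for such contracted products is $A \ltimes B = (A \otimes I_{t/n})(B \otimes I_{t/p})$ with $t = \mathrm{lcm}(n, p)$ for $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{p \times q}$; the sketch below implements this definition (the paper's specific matrix expressions of hypermatrices are not reproduced here).

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product A |x B = (A kron I_{t/n}) (B kron I_{t/p}),
    where A is m x n, B is p x q, and t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When the inner dimensions match, the STP is the ordinary matrix product.
A = np.arange(4.0).reshape(2, 2)
B = np.arange(4.0, 8.0).reshape(2, 2)
assert np.allclose(stp(A, B), A @ B)

# With mismatched dimensions it is still defined: (2x2) |x (4x1) has t = 4.
C = np.arange(4.0).reshape(4, 1)
assert stp(A, C).shape == (4, 1)          # m*t/n rows, q*t/p columns
```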

We study the approximation of integrals $\int_D f(\boldsymbol{x}^\top A) \mathrm{d} \mu(\boldsymbol{x})$, where $A$ is a matrix, by quasi-Monte Carlo (QMC) rules $N^{-1} \sum_{k=0}^{N-1} f(\boldsymbol{x}_k^\top A)$. We are interested in cases where the main cost arises from calculating the products $\boldsymbol{x}_k^\top A$. We design QMC rules for which the computation of $\boldsymbol{x}_k^\top A$, $k = 0, 1, \ldots, N-1$, can be done fast, and for which the error of the QMC rule is similar to the standard QMC error. We do not require that $A$ has any particular structure. For instance, this approach can be used when approximating the expected value of a function with a multivariate normal random variable with a given covariance matrix, or when approximating the expected value of the solution of a PDE with random coefficients. The speed-up of the computation time is sometimes better and sometimes worse than the fast QMC matrix-vector product from [Dick, Kuo, Le Gia, and Schwab, Fast QMC Matrix-Vector Multiplication, SIAM J. Sci. Comput. 37 (2015)]. As in that paper, our approach applies to (polynomial) lattice point sets, but also to digital nets (we are currently not aware of any approach which allows one to apply the fast method from the aforementioned paper of Dick, Kuo, Le Gia, and Schwab to digital nets). Our method does not use FFT; instead, we use repeated values in the quadrature points to derive a reduction in the computation time. This arises from the reduced CBC construction of lattice rules and polynomial lattice rules. The reduced CBC construction has been shown to reduce the computation time for the CBC construction. Here we show that it can also be used to reduce the computation time of the QMC rule.
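For concreteness, a plain (unaccelerated) rank-1 lattice rule approximating $N^{-1}\sum_k f(\boldsymbol{x}_k^\top A)$ looks as follows; the generating vector $z$ and the integrand are illustrative assumptions, and this sketch does not implement the paper's fast product via repeated values.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, N = 4, 3, 2**10
A = rng.standard_normal((d, m))            # no structure assumed on A

# Rank-1 lattice rule: points x_k = frac(k z / N); the generating vector z
# below is illustrative, not an optimized (reduced) CBC construction.
z = np.array([1, 182667, 469891, 498753])
k = np.arange(N)[:, None]
X = (k * z % N) / N                        # N x d array of QMC points

f = lambda y: np.cos(y).prod(axis=-1)      # smooth test integrand
qmc_estimate = f(X @ A).mean()             # main cost: the products x_k^T A
mc_estimate = f(rng.random((N, d)) @ A).mean()
assert abs(qmc_estimate - mc_estimate) < 0.5   # both estimate the same integral
```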

Matrix decomposition is a very important mathematical tool in numerical linear algebra for data processing. In this paper, we introduce a new randomized matrix decomposition algorithm, called randomized approximate SVD based on QR decomposition (RCSVD-QR). Our method utilizes random sampling and the QR decomposition to address a serious bottleneck associated with the classical SVD. RCSVD-QR achieves satisfactory convergence speed as well as accuracy compared to state-of-the-art algorithms. In addition, we provide an estimate for the expected approximation error in the Frobenius norm. Numerical experiments verify these claims.
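The abstract does not spell out RCSVD-QR itself; as a hedged sketch of the general idea it builds on (random sampling followed by a QR step), here is a standard Halko-et-al.-style randomized SVD, with all parameters illustrative.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    """Generic randomized SVD (Halko et al. style): random sampling followed
    by a QR decomposition of the sampled range. A sketch of the general idea
    only, not the exact RCSVD-QR algorithm of the paper."""
    if rng is None:
        rng = np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)             # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 150))  # rank 30
U, s, Vt = randomized_svd(A, rank=30, rng=rng)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)  # rel. Frobenius error
assert err < 1e-8       # exact-rank input is recovered to machine precision
```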

We study the 2D Navier-Stokes equation with transport noise subject to periodic boundary conditions. Our main result is an error estimate for the time-discretisation showing a convergence rate of order (up to) 1/2. It holds with respect to mean square error convergence, whereas previously such a rate for the stochastic Navier-Stokes equations was only known with respect to convergence in probability. Our result is based on uniform-in-probability estimates for the continuous as well as the time-discrete solution exploiting the particular structure of the noise.
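The result above concerns the full stochastic Navier-Stokes system; as a much simpler, purely illustrative analogue, the sketch below measures the mean-square time-discretisation error of an Euler scheme for a scalar SDE with multiplicative noise. The SDE, its parameters, and the fine-grid reference are all assumptions, not the paper's setting.

```python
import numpy as np

# Toy illustration on a scalar SDE dX = -X dt + sigma * X dW (NOT the
# Navier-Stokes system): mean-square error of the Euler-Maruyama scheme
# against a fine-grid reference, computed on shared Brownian paths.
rng = np.random.default_rng(0)
sigma, T_end, n_paths, n_fine = 0.5, 1.0, 2000, 2**10
dW = rng.standard_normal((n_paths, n_fine)) * np.sqrt(T_end / n_fine)

def euler(n_steps):
    dt = T_end / n_steps
    stride = n_fine // n_steps
    X = np.ones(n_paths)
    for k in range(n_steps):
        dWk = dW[:, k * stride:(k + 1) * stride].sum(axis=1)  # coarse increment
        X = X + (-X) * dt + sigma * X * dWk
    return X

ref = euler(n_fine)                      # fine-grid reference solution
errs = [np.sqrt(np.mean((euler(n) - ref) ** 2)) for n in (2**4, 2**5, 2**6)]
assert errs[0] > errs[1] > errs[2]       # mean-square error shrinks with dt
```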

Matrix recovery from sparse observations is an extensively studied topic emerging in various applications, such as recommender systems and signal processing, which includes the matrix completion and compressed sensing models as special cases. In this work we propose a general framework for dynamic matrix recovery of low-rank matrices that evolve smoothly over time. We start from the setting that the observations are independent across time, then extend to the setting that both the design matrix and noise possess certain temporal correlation via modified concentration inequalities. By pooling neighboring observations, we obtain sharp estimation error bounds for both settings, showing the influence of the underlying smoothness, the temporal dependence, and the effective sample size. We propose a dynamic fast iterative shrinkage thresholding algorithm that is computationally efficient, and characterize the interplay between algorithmic and statistical convergence. Simulated and real data examples are provided to support these findings.
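The dynamic algorithm pools observations across time; its static core is fast iterative shrinkage thresholding with a singular-value-thresholding proximal step. Below is a minimal static sketch under assumed parameters (unit step size, a hand-picked regularizer), not the paper's dynamic variant.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fista_completion(Y, mask, lam=0.1, steps=300):
    """Static FISTA sketch for nuclear-norm matrix completion; the paper's
    dynamic variant additionally pools neighboring observations in time."""
    X = Xold = np.zeros_like(Y)
    t = 1.0
    for _ in range(steps):
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Z = X + ((t - 1) / t_new) * (X - Xold)   # momentum extrapolation
        G = Z - mask * (Z - Y)                   # gradient step (step size 1)
        Xold, X, t = X, svt(G, lam), t_new
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))  # rank 5
mask = rng.random(M.shape) < 0.6                 # ~60% entries observed
X = fista_completion(mask * M, mask)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```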

Boolean matrix factorization (BMF) approximates a given binary input matrix as the product of two smaller binary factors. As opposed to binary matrix factorization which uses standard arithmetic, BMF uses the Boolean OR and Boolean AND operations to perform matrix products, which leads to lower reconstruction errors. BMF is an NP-hard problem. In this paper, we first propose an alternating optimization (AO) strategy that solves the subproblem in one factor matrix in BMF using an integer program (IP). We also provide two ways to initialize the factors within AO. Then, we show how several solutions of BMF can be combined optimally using another IP. This allows us to come up with a new algorithm: it generates several solutions using AO and then combines them in an optimal way. Experiments show that our algorithms (available on gitlab) outperform the state of the art on medium-scale problems.
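The Boolean product that distinguishes BMF from binary matrix factorization is easy to state in code; the tiny example below (factors chosen by hand for illustration) shows a case where OR/AND reconstructs the input exactly while ordinary arithmetic overshoots on overlapping factors.

```python
import numpy as np

def boolean_product(W, H):
    """Boolean matrix product: (W o H)_ij = OR_k (W_ik AND H_kj),
    computed via an integer product followed by thresholding."""
    return ((W.astype(int) @ H.astype(int)) > 0).astype(int)

# Hand-picked binary factors whose supports overlap in the middle row.
W = np.array([[1, 0], [1, 1], [0, 1]])
H = np.array([[1, 1, 0], [0, 1, 1]])

X = boolean_product(W, H)
assert X.max() == 1                 # Boolean OR caps overlapping contributions
assert (W @ H).max() == 2           # ordinary arithmetic produces a 2 instead
```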

This paper is concerned with a class of DC composite optimization problems which, as an extension of convex composite optimization problems and DC programs with nonsmooth components, often arises in robust factorization models of low-rank matrix recovery. For this class of nonconvex and nonsmooth problems, we propose an inexact linearized proximal algorithm (iLPA) by computing in each step an inexact minimizer of a strongly convex majorization constructed with a partial linearization of their objective functions, and establish the global convergence of the generated iterate sequence under the Kurdyka-\L\"ojasiewicz (KL) property of a potential function. In particular, by leveraging the composite structure, we provide a verifiable condition for the potential function to have the KL property of exponent $1/2$ at the limit point, and hence for the iterate sequence to have a local R-linear convergence rate, and clarify its relationship with the regularity used in the convergence analysis of algorithms for convex composite optimization. Finally, our iLPA is applied to a robust factorization model for matrix completion with outliers, and numerical comparison with the Polyak subgradient method confirms its superiority in computing time and quality of solutions.

We study the problem of hyperparameter tuning in sparse matrix factorization under a Bayesian framework. In prior work, an analytical solution of sparse matrix factorization with a Laplace prior was obtained by the variational Bayes method under several approximations. Based on this solution, we propose a novel numerical method of hyperparameter tuning by evaluating the zero point of the normalization factor in the sparse matrix prior. We also verify that our method shows excellent performance for ground-truth sparse matrix reconstruction by comparing it with the widely used sparse principal component analysis algorithm.

In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments on matrix decomposition that favored a (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices. And now, matrix decomposition has become a core technology in machine learning, largely due to the development of the backpropagation algorithm in fitting a neural network. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning matrix decomposition, given the limited scope of this discussion; for example, we omit the separate analysis of the Euclidean space, Hermitian space, Hilbert space, and results in the complex domain. We refer the reader to literature in the field of linear algebra for a more detailed introduction to the related fields.
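The LU decomposition favored in Householder's treatment can be sketched in a few lines; the Doolittle variant below (no pivoting, so it assumes nonzero pivots) produces a unit lower triangular $L$ and upper triangular $U$ with $A = LU$.

```python
import numpy as np

def lu_doolittle(A):
    """LU factorization without pivoting (Doolittle): A = L U with L unit
    lower triangular and U upper triangular. Assumes nonzero pivots; a
    production code would use partial pivoting (P A = L U)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        L[k+1:, k] = U[k+1:, k] / U[k, k]               # elimination multipliers
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])   # eliminate column k
    return L, U

A = np.array([[4.0, 3.0, 2.0], [6.0, 3.0, 1.0], [8.0, 7.0, 9.0]])
L, U = lu_doolittle(A)
assert np.allclose(L @ U, A)
assert np.allclose(np.tril(L), L) and np.allclose(np.triu(U), U)
```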

Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries such as (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx (Trouillon et al., 2016) that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.
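To make the order-4 decomposition concrete, the sketch below scores temporal triples with a ComplEx-style multilinear form extended by a timestamp embedding. The embedding shapes, the purely multiplicative time factor, and all names are illustrative assumptions, not the paper's exact model or regularizers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, n_time, rank = 100, 10, 20, 16

# Complex-valued embeddings for entities, relations, and timestamps
E = rng.standard_normal((n_ent, rank)) + 1j * rng.standard_normal((n_ent, rank))
R = rng.standard_normal((n_rel, rank)) + 1j * rng.standard_normal((n_rel, rank))
T = rng.standard_normal((n_time, rank)) + 1j * rng.standard_normal((n_time, rank))

def score(s, r, o, t):
    """Temporal ComplEx-style score: real part of a rank-R, order-4
    CP-like multilinear form over (subject, relation, object, time)."""
    return np.real(np.sum(E[s] * R[r] * np.conj(E[o]) * T[t]))

# Rank all candidate objects for a query (s, r, ?, t), as in link prediction
# under temporal constraints.
s, r, t = 3, 2, 7
scores = np.real((E[s] * R[r] * T[t]) @ np.conj(E).T)
assert np.isclose(scores[5], score(s, r, 5, t))   # batched == per-triple score
```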
