For a general third-order tensor $\mathcal{A}\in\mathbb{R}^{n\times n\times n}$, the paper studies two closely related problems: the SVD-like tensor decomposition and the (approximate) tensor diagonalization. We develop an alternating least squares Jacobi-type algorithm that maximizes the sum of squares of the diagonal entries of $\mathcal{A}$. The algorithm works on $2\times2\times2$ subtensors: in each iteration the sum of the squares of two diagonal entries is maximized. We show how the rotation angles are calculated and prove convergence of the algorithm. Different initializations of the algorithm are discussed, as well as the special cases of symmetric and antisymmetric tensors. The algorithm generalizes to higher-order tensors.
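As a concrete illustration of one sweep of such a Jacobi-type scheme, the sketch below applies, for every index pair $(p,q)$ and every mode, the plane rotation maximizing the sum of squares of the two affected diagonal entries. The optimal angle is found by grid search purely for illustration; the paper derives the maximizing angles analytically.

```python
import numpy as np

def mode_rotation(n, p, q, theta):
    """Givens rotation in the (p, q) plane of R^n."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[p, p], R[p, q], R[q, p], R[q, q] = c, -s, s, c
    return R

def jacobi_sweep(A, thetas=np.linspace(-np.pi / 4, np.pi / 4, 181)):
    """One sweep: for each pair (p, q) and each mode, pick the rotation
    angle maximizing a_ppp^2 + a_qqq^2 (grid search stands in for the
    paper's closed-form angle)."""
    n = A.shape[0]
    for p in range(n):
        for q in range(p + 1, n):
            for mode in range(3):
                best, best_A = -np.inf, None
                for t in thetas:
                    R = mode_rotation(n, p, q, t)
                    B = np.moveaxis(
                        np.tensordot(R, np.moveaxis(A, mode, 0), axes=1),
                        0, mode)
                    score = B[p, p, p] ** 2 + B[q, q, q] ** 2
                    if score > best:
                        best, best_A = score, B
                A = best_A
    return A

A = np.random.randn(4, 4, 4)
for _ in range(10):
    A = jacobi_sweep(A)
print("diagonal energy:", sum(A[i, i, i] ** 2 for i in range(4)))
```

Since a rotation in the $(p,q)$ plane of one mode leaves the remaining diagonal entries untouched, and the angle grid contains $\theta=0$, the diagonal energy is non-decreasing over sweeps.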
In this work, we consider the optimization formulation for symmetric tensor decomposition recently introduced in the Subspace Power Method (SPM) of Kileel and Pereira. Unlike popular alternative functionals for tensor decomposition, the SPM objective function has the desirable properties that its maximal value is known in advance, and its global optima are exactly the rank-1 components of the tensor when the input is sufficiently low-rank. We analyze the non-convex optimization landscape associated with the SPM objective. Our analysis also accounts for noisy input tensors. We derive quantitative bounds such that any second-order critical point with SPM objective value exceeding the bound must equal a tensor component in the noiseless case, and must approximate a tensor component in the noisy case. For decomposing tensors of size $D^{\times m}$, we obtain a near-global guarantee up to rank $\widetilde{o}(D^{\lfloor m/2 \rfloor})$ under a random tensor model, and a global guarantee up to rank $\mathcal{O}(D)$ assuming deterministic frame conditions. This implies that SPM with suitable initialization is a provable, efficient, robust algorithm for low-rank symmetric tensor decomposition. We conclude with numerics showing that, in practice, the SPM functional is preferable to a more established counterpart.
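A minimal sketch of our reading of the SPM-style projected power iteration, for an order-4 tensor ($m=4$): extract the span $S$ of the flattened rank-1 components from the $D^2\times D^2$ flattening, then iterate $x \leftarrow \mathrm{mat}(P_S\,\mathrm{vec}(xx^T))\,x$ with normalization. The rank, dimension, and iteration count below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, R = 8, 5                             # T = sum of R rank-1 terms a_i^{(x)4}
A = rng.standard_normal((D, R))
A /= np.linalg.norm(A, axis=0)

# Flattening of T to a D^2 x D^2 matrix; its top-R eigenspace is
# S = span{vec(a_i a_i^T)}.
M = sum(np.outer(np.outer(A[:, i], A[:, i]).ravel(),
                 np.outer(A[:, i], A[:, i]).ravel()) for i in range(R))
_, V = np.linalg.eigh(M)
U = V[:, -R:]                           # orthonormal basis of S

def spm_step(x):
    """One projected power step for f(x) = ||P_S vec(x x^T)||^2."""
    y = (U @ (U.T @ np.outer(x, x).ravel())).reshape(D, D)
    x = y @ x
    return x / np.linalg.norm(x)

x = rng.standard_normal(D)
x /= np.linalg.norm(x)
for _ in range(200):
    x = spm_step(x)
print("max overlap with a component:", np.abs(A.T @ x).max())  # typically ~1
```

Each component $a_i$ is a fixed point of this map, since $\mathrm{vec}(a_i a_i^T)$ lies in $S$; from a random start the iteration typically locks onto one component, consistent with the landscape guarantees described above.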
In this paper we present a finite element analysis for a Dirichlet boundary control problem governed by the Stokes equation. The Dirichlet control is sought in a closed convex subset of the energy space $\mathbf{H}^1(\Omega)$. Most previous works on the Stokes Dirichlet boundary control problem deal either with tangential control or with the case where the flux of the control is zero. These choices of control are quite particular, and the associated formulations lead to controls with limited regularity. To overcome this difficulty, we introduce the Stokes problem with an outflow condition in which the control acts on the Dirichlet boundary only; our control is therefore more general, having both tangential and normal components. We prove well-posedness and discuss the regularity of the control problem. The first-order optimality condition for the control leads to a Signorini problem. We develop a two-level finite element discretization using $\mathbf{P}_1$ elements (on the fine mesh) for the velocity and the control variable and $P_0$ elements (on the coarse mesh) for the pressure variable. The standard energy error analysis gives convergence of order $\frac{1}{2}+\frac{\delta}{2}$ when the control lies in $\mathbf{H}^{\frac{3}{2}+\delta}(\Omega)$. We improve this to order $\frac{1}{2}+\delta$, which is optimal. When the control is less regular, we derive convergence of optimal order up to the available regularity. The theoretical results are corroborated by a variety of numerical tests.
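As a schematic illustration only (not the paper's exact functional or its outflow formulation), a Dirichlet boundary control problem of this class reads
\[
\min_{\mathbf{g}\in K\subset \mathbf{H}^1(\Omega)}\; \tfrac{1}{2}\|\mathbf{u}_{\mathbf{g}}-\mathbf{u}_d\|_{L^2(\Omega)}^2+\tfrac{\lambda}{2}\|\mathbf{g}\|_{\mathbf{H}^1(\Omega)}^2
\quad\text{subject to}\quad
-\nu\Delta \mathbf{u}_{\mathbf{g}}+\nabla p=\mathbf{f},\quad \nabla\cdot \mathbf{u}_{\mathbf{g}}=0 \text{ in } \Omega,\quad \mathbf{u}_{\mathbf{g}}=\mathbf{g} \text{ on } \Gamma,
\]
where $K$ is the closed convex admissible set; it is precisely the constraint $\mathbf{g}\in K$ that turns the first-order optimality condition into a variational inequality of Signorini type.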
In this paper we present results on asymptotic characteristics of multivariate function classes in the uniform norm. Our main interest is the approximation of functions with mixed smoothness parameter not larger than $1/2$. Our focus will be on the behavior of the best $m$-term trigonometric approximation as well as the decay of Kolmogorov and entropy numbers in the uniform norm. It turns out that these quantities share a few fundamental abstract properties, such as their behavior under real interpolation, which allows them to be treated simultaneously. We begin by proving estimates on finite rank convolution operators with range in a step hyperbolic cross. These results imply bounds for the corresponding function space embeddings by a well-known decomposition technique. The decay of Kolmogorov numbers has direct implications for the problem of sampling recovery in $L_2$ in situations where recent results in the literature are not applicable since the corresponding approximation numbers are not square summable.
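For reference, the asymptotic characteristics in question are, for a compact class $F$ in a Banach space $X$,
\[
d_m(F,X) := \inf_{\dim L \le m}\, \sup_{f \in F}\, \inf_{g \in L} \|f-g\|_X,
\qquad
e_k(F,X) := \inf\bigl\{\varepsilon>0 : F \text{ can be covered by } 2^{k-1} \text{ balls of radius } \varepsilon \text{ in } X\bigr\},
\]
the Kolmogorov and entropy numbers, respectively; best $m$-term trigonometric approximation replaces the linear subspace $L$ by the nonlinear set of trigonometric polynomials supported on at most $m$ frequencies.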
We consider the problem of approximating the arboricity of a graph $G=(V,E)$, denoted $\mathsf{arb}(G)$, in sublinear time, where the arboricity of a graph is the minimal number of forests required to cover its edges. An algorithm for this problem may perform degree and neighbor queries, and is allowed a small error probability. We design an algorithm that outputs an estimate $\hat{\alpha}$ such that, with probability $1-1/\mathrm{poly}(n)$, $\mathsf{arb}(G)/(c\log^2 n) \leq \hat{\alpha} \leq \mathsf{arb}(G)$, where $n=|V|$ and $c$ is a constant. The expected query complexity and running time of the algorithm are $O(n/\mathsf{arb}(G))\cdot \mathrm{poly}(\log n)$, and this upper bound also holds with high probability. This bound is optimal for such an approximation up to a $\mathrm{poly}(\log n)$ factor.
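For context, arboricity admits the classical Nash-Williams characterization as a maximum local edge density,
\[
\mathsf{arb}(G) \;=\; \max_{H \subseteq G,\; |V(H)| \ge 2} \left\lceil \frac{|E(H)|}{|V(H)|-1} \right\rceil,
\]
so a sublinear-time algorithm must, in effect, estimate this density from degree and neighbor queries alone, without ever seeing the whole graph.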
Minimizing a sum of simple submodular functions of limited support is a special case of general submodular function minimization that has seen numerous applications in machine learning. We develop fast techniques for instances where components in the sum are cardinality-based, meaning they depend only on the size of the input set. This variant is one of the most widely applied in practice, encompassing, e.g., common energy functions arising in image segmentation and recent generalized hypergraph cut functions. We develop the first approximation algorithms for this problem, where the approximations can be quickly computed via reduction to a sparse graph cut problem, with graph sparsity controlled by the desired approximation factor. Our method relies on a new connection between sparse graph reduction techniques and piecewise linear approximations to concave functions. Our sparse reduction technique leads to significant improvements in theoretical runtimes, as well as substantial practical gains in problems ranging from benchmark image segmentation tasks to hypergraph clustering problems.
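The concave-to-piecewise-linear step can be illustrated in isolation. The hedged sketch below greedily selects breakpoints so that linear interpolation of a concave cardinality function stays within a multiplicative $(1+\varepsilon)$ factor; in the reduction, each linear piece then contributes one small gadget to the sparse cut graph. The greedy rule and constants are illustrative, not the paper's exact construction.

```python
import numpy as np

def pl_breakpoints(w, eps):
    """Greedy breakpoints so that the chord interpolation of the concave
    values w[0..k] stays within a (1+eps) multiplicative factor.
    Fewer pieces -> a sparser reduction graph."""
    k = len(w) - 1
    bps, i = [0], 0
    while i < k:
        j = i + 1
        while j < k:
            jn = j + 1
            t = np.arange(i, jn + 1)
            lin = w[i] + (w[jn] - w[i]) * (t - i) / (jn - i)
            # concavity => the chord under-estimates; check the ratio
            if np.all(w[i:jn + 1] <= (1 + eps) * lin):
                j = jn
            else:
                break
        bps.append(j)
        i = j
    return bps

w = np.sqrt(np.arange(0, 101))          # a concave cardinality function
print(pl_breakpoints(w, 0.01))          # few pieces suffice
```

The segment lengths grow roughly geometrically, which is the mechanism behind the logarithmic-size gadgets in sparse reductions of this kind.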
The Johnson-Lindenstrauss lemma states that random projections can be used as a topology-preserving embedding technique for fixed vectors. In this paper, we try to understand how random projections affect the probabilistic properties of random vectors. In particular, we prove that the distribution of the inner product of two independent random vectors $X, Z \in \mathbb{R}^n$ is preserved by a random projection $S:\mathbb{R}^n \to \mathbb{R}^m$. More precisely, \[ \sup_t \left| \mathrm{P}\left(\frac{1}{C_{m,n}} X^T S^T S Z < t\right) - \mathrm{P}\left(\frac{1}{\sqrt{n}} X^T Z < t\right) \right| \le O\left(\frac{1}{\sqrt{n}}+ \frac{1}{\sqrt{m}} \right). \] As a by-product, we obtain a product central limit theorem (product-CLT) for $\sum_{k=1}^{n} X_k Y_k$, where $\{X_k\}$ is a martingale difference sequence and $\{Y_k\}$ has dependence within the sequence. We also obtain the rate of convergence in the spirit of the Berry-Esseen theorem.
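The statement invites a quick Monte Carlo check. In the sketch below we take $S$ with i.i.d. $N(0, 1/m)$ entries, so that $\mathbb{E}[S^T S] = I_n$, and guess the normalization $C_{m,n} = \sqrt{n}$ for this scaling; both choices are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n, m, trials = 400, 50, 2000

lhs, rhs = [], []
for _ in range(trials):
    X = rng.standard_normal(n)
    Z = rng.standard_normal(n)
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # E[S^T S] = I_n
    lhs.append(X @ S.T @ (S @ Z) / np.sqrt(n))     # guessed C_{m,n} = sqrt(n)
    rhs.append(X @ Z / np.sqrt(n))

# two-sample KS statistic, an empirical proxy for the sup_t distance above
print(ks_2samp(lhs, rhs))
```

Rerunning with larger $n$ and $m$ should shrink the KS distance at a rate consistent with the $1/\sqrt{n} + 1/\sqrt{m}$ bound.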
The Restricted Additive Schwarz method with impedance transmission conditions, also known as the Optimised Restricted Additive Schwarz (ORAS) method, is a simple overlapping one-level parallel domain decomposition method, which has been successfully used as an iterative solver and as a preconditioner for discretized Helmholtz boundary-value problems. In this paper, we give, for the first time, a convergence analysis for ORAS as an iterative solver -- and also as a preconditioner -- for nodal finite element Helmholtz systems of any polynomial order. The analysis starts by showing (for general domain decompositions) that ORAS can be viewed as an unconventional finite element approximation of a classical parallel iterative Schwarz method, formulated at the PDE (non-discrete) level. This non-discrete Schwarz method was recently analysed in [Gong, Gander, Graham, Lafontaine, Spence, arXiv 2106.05218], and the present paper gives a corresponding discrete version of this analysis. In particular, for domain decompositions in strips in 2-d, we show that, when the mesh size is small enough, ORAS inherits the convergence properties of the Schwarz method, independent of polynomial order. The proof relies on characterising the ORAS iteration in terms of discrete `impedance-to-impedance maps', which we prove (via a novel weighted finite-element error analysis) converge as $h\rightarrow 0$ in the operator norm to their non-discrete counterparts.
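For orientation, the one-level ORAS preconditioner for a decomposition into overlapping subdomains $\{\Omega_j\}_{j=1}^N$ has the standard form
\[
M_{\mathrm{ORAS}}^{-1} \;=\; \sum_{j=1}^{N} R_j^{\top} D_j A_j^{-1} R_j,
\]
where $R_j$ restricts to the degrees of freedom on $\Omega_j$, the matrices $D_j$ form a discrete partition of unity, and $A_j$ is the local Helmholtz matrix assembled with the impedance condition $(\partial_n - \mathrm{i}k)u = 0$ on the interior subdomain boundaries; the impedance condition is what distinguishes ORAS from plain RAS.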
Random projections (RP) have recently emerged as popular techniques in the machine learning community for their ability to reduce the dimension of very high-dimensional tensors. Following the work in [29], we consider a tensorized random projection relying on the Tensor Train (TT) decomposition, where each element of the core tensors is drawn from a Rademacher distribution. Our theoretical results reveal that the Gaussian low-rank tensor, represented in compressed form in the TT format in [29], can be replaced by a TT tensor with core elements drawn from a Rademacher distribution with the same embedding size. Experiments on synthetic data demonstrate that the tensorized Rademacher RP can outperform the tensorized Gaussian RP studied in [29]. In addition, we show both theoretically and experimentally that the tensorized RP in the Matrix Product Operator (MPO) format proposed in [5] for performing SVD on large matrices is not a Johnson-Lindenstrauss transform (JLT) and is therefore not a well-suited random projection map.
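A hedged sketch of such a TT Rademacher projection: each of the $k$ output coordinates is the inner product of the input tensor with an independent TT tensor whose cores have i.i.d. $\pm 1$ entries. The $1/\sqrt{k\,R^{N-1}}$ scaling below follows a direct variance computation and is our reading of the normalization in [29]; the shapes and ranks are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
dims, R, k = (4, 4, 4, 4), 3, 25        # input in R^256, embedding in R^k

def tt_rademacher(dims, R):
    """TT cores with i.i.d. Rademacher (+-1) entries."""
    ranks = (1,) + (R,) * (len(dims) - 1) + (1,)
    return [rng.choice([-1.0, 1.0], size=(ranks[i], d, ranks[i + 1]))
            for i, d in enumerate(dims)]

def tt_inner(cores, x):
    """<T, x> by contracting the TT cores against x mode by mode."""
    w = x.reshape(dims)
    w = np.einsum('pdr,d...->r...', cores[0], w)
    for G in cores[1:]:
        w = np.einsum('rds,rd...->s...', G, w)
    return w.item()                      # trailing TT rank is 1

maps = [tt_rademacher(dims, R) for _ in range(k)]

def embed(x):
    scale = np.sqrt(k * R ** (len(dims) - 1))   # makes E||embed(x)||^2 = ||x||^2
    return np.array([tt_inner(c, x) for c in maps]) / scale

x = rng.standard_normal(int(np.prod(dims)))
print(np.linalg.norm(x), np.linalg.norm(embed(x)))   # comparable norms
```

The isometry holds only in expectation; the concentration quality is exactly what separates a genuine JLT from the MPO-format map discussed above.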
In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, there has been vast development in the field of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lies on a lower-dimensional manifold; thus the high dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower-dimensional manifold and suggest a nonlinear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. In this way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
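A simplified one-step sketch of an MLS-style projection (not the paper's exact procedure): Gaussian-weighted PCA supplies a local $d$-dimensional coordinate frame, and a weighted quadratic fit over those coordinates gives the projected point. The bandwidth $h$, the fixed degree 2, and the test geometry are illustrative assumptions.

```python
import numpy as np

def mls_project(p, P, h, d):
    """Project p toward the manifold approximated from the noisy cloud
    P (N x n): weighted local PCA gives a d-dim reference frame, then a
    weighted degree-2 polynomial fit over the local coordinates."""
    w = np.exp(-np.sum((P - p) ** 2, axis=1) / h ** 2)       # Gaussian weights
    c = (w[:, None] * P).sum(0) / w.sum()                    # weighted center
    C = (w[:, None, None] * np.einsum('ni,nj->nij', P - c, P - c)).sum(0)
    U = np.linalg.eigh(C)[1][:, -d:]                         # local frame
    t = (P - c) @ U                                          # local coordinates
    feats = [np.ones(len(P))] + [t[:, i] for i in range(d)] \
          + [t[:, i] * t[:, j] for i in range(d) for j in range(i, d)]
    F = np.stack(feats, axis=1)
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(W * F, W * (P - c), rcond=None)
    t0 = (p - c) @ U
    f0 = [1.0] + list(t0) + [t0[i] * t0[j] for i in range(d) for j in range(i, d)]
    return c + np.array(f0) @ coef                           # point on local fit

# noisy samples near the unit circle embedded in R^10 (first two coordinates)
rng = np.random.default_rng(3)
ang = rng.uniform(0, 2 * np.pi, 500)
P = np.zeros((500, 10))
P[:, 0], P[:, 1] = np.cos(ang), np.sin(ang)
P += 0.02 * rng.standard_normal(P.shape)
q = mls_project(P[0], P, h=0.4, d=1)
print(np.abs(np.linalg.norm(q[:2]) - 1))   # distance to the circle ~ noise level
```

Note that every step costs $O(n)$ in the ambient dimension, matching the linear-in-$n$ complexity claimed above.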
We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first such algorithm which does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model, in which query access comes in the form of $\langle X_i, A\rangle := \mathrm{tr}(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, which includes the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
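For contrast with the new query pattern, the classical tester that this work improves upon simply reads a random submatrix, costing $\Theta(d^2/\epsilon^2)$ entry queries. A hedged sketch of that baseline (the sampling constant is illustrative, and this is not the paper's algorithm):

```python
import numpy as np

def submatrix_rank_tester(query, n, d, eps, rng):
    """Baseline 'read a submatrix' tester: sample O(d/eps) rows and
    columns and accept iff the sampled submatrix has rank <= d.
    query(i, j) returns A[i, j]; total cost is O(d^2/eps^2) queries."""
    k = int(np.ceil(4 * d / eps))              # illustrative constant
    rows = rng.choice(n, size=min(k, n), replace=False)
    cols = rng.choice(n, size=min(k, n), replace=False)
    sub = np.array([[query(i, j) for j in cols] for i in rows])
    return np.linalg.matrix_rank(sub) <= d

rng = np.random.default_rng(4)
n, d = 200, 3
A = rng.standard_normal((n, d)) @ rng.standard_normal((d, n))   # rank d
print(submatrix_rank_tester(lambda i, j: A[i, j], n, d, 0.1, rng))  # True
```

The $\widetilde{O}(d^2/\epsilon)$ algorithm described above avoids this quadratic $\epsilon$-dependence precisely by not confining its queries to a single submatrix.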