The present paper constructs, for a circulant matrix, a set of submatrices that is smaller than the complete set of square submatrices needed in the general case to verify that the matrix is MDS (maximum distance separable). The general MDS verification method tests submatrices for singularity: if at least one square submatrix is singular, the matrix is not MDS. However, the complexity of this general method grows dramatically with the matrix dimension. We develop an algorithm that exploits the simple structure of circulant matrices to construct a smaller testing set of submatrices. The algorithm proposed in the paper reduces the size of the testing set by approximately two matrix orders.
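For concreteness, the general verification method that the paper improves upon can be sketched as a brute-force test of every square submatrix. The helper names below are illustrative, and exact integer arithmetic over the rationals stands in for the finite-field arithmetic used by MDS matrices in practice; the submatrix test itself is identical.

```python
from itertools import combinations

def det(M):
    """Exact determinant by Laplace expansion along the first row
    (fine for the tiny matrices used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def circulant(first_row):
    """Circulant matrix: each row is a cyclic right shift of the previous one."""
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def is_mds(M):
    """General MDS test: M is MDS iff every square submatrix is nonsingular."""
    n = len(M)
    return all(det([[M[i][j] for j in cols] for i in rows]) != 0
               for k in range(1, n + 1)
               for rows in combinations(range(n), k)
               for cols in combinations(range(n), k))
```

The number of submatrices tested grows exponentially with the matrix order, which is exactly the cost the paper's smaller testing set reduces.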
This paper presents algorithms for local inversion of maps and shows how several important computational problems, such as cryptanalysis of symmetric encryption algorithms, the RSA algorithm, and the elliptic curve discrete log problem (ECDLP), can be addressed as local inversion problems. The methodology is termed the \emph{Local Inversion Attack}. It utilizes the \emph{Linear Complexity} (LC) of a recurrence sequence generated by the map defined by the cryptanalysis problem and the given data. It is shown that when the LC of the recurrence is bounded by a polynomial in the bit length of the input to the map, the local inversion can be accomplished in polynomial time. Hence an incomplete local inversion algorithm, which searches for a solution within a specified computational bound, can estimate the density of weak cases of cryptanalysis defined by data causing low LC. Such cases can arise accidentally and cannot be avoided in practice; they are fatal insecurity flaws of cryptographic primitives that are wrongly assumed to be secure on the basis of exponential average-case complexity. An incomplete algorithm is proposed for solving, as local inversion problems, key recovery for symmetric encryption algorithms, decryption of RSA ciphertext without factoring the modulus, decryption of any RSA ciphertext given one plaintext-ciphertext pair created with the same private key in a chosen-ciphertext attack, and the discrete logarithm problem on elliptic curves over finite fields (ECDLP). It is shown that when the LCs of the respective recurrences for the given data are small, these problems can be solved within practically feasible time and memory resources.
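The LC of a recurrence sequence can be computed by the classical Berlekamp--Massey algorithm. The sketch below works over GF(2), a simplification of the general setting; it illustrates the quantity the attack relies on, not the paper's inversion procedure itself.

```python
def berlekamp_massey_gf2(s):
    """Linear complexity of the binary sequence s: the length of the
    shortest LFSR (over GF(2)) that generates it."""
    n = len(s)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between s[i] and the current LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t, p = c[:], i - m
            for j in range(n - p):
                c[p + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

A sequence generated by a short recurrence (low LC) is exactly the "weak case" the incomplete algorithm can exploit, since the recurrence is recovered from only about twice as many terms as its length.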
We consider the problem of designing secure and private codes for distributed matrix-matrix multiplication. A master server owns two private matrices and hires worker nodes to help compute their product. The matrices should remain information-theoretically private from the workers. Some of the workers are malicious and return corrupted results to the master. We design a framework for security against malicious workers in private matrix-matrix multiplication. The main idea is a careful use of Freivalds' algorithm to detect erroneous matrix multiplications. Our main goal is to apply this security framework to schemes with adaptive rates. Adaptive schemes divide the workers into clusters and thus provide flexibility in trading decoding complexity for efficiency. Our new scheme, SRPM3, provides a computationally efficient security check per cluster that detects the presence of one or more malicious workers with high probability. An additional per-worker check is used to identify the malicious nodes. SRPM3 can tolerate the presence of an arbitrary number of malicious workers. We provide theoretical guarantees on the complexity of the security checks and simulation results on both the missed-detection rate and the time needed for the integrity check.
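Freivalds' algorithm, the primitive at the core of the security check, verifies a claimed product $C = AB$ using only matrix-vector products. A minimal sketch over square integer matrices follows; the actual scheme works over finite fields with additional privacy machinery.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistic check that A @ B == C.  Each trial costs three
    matrix-vector products, i.e. O(n^2) instead of O(n^3); a wrong C
    survives all trials with probability at most 2**(-trials)."""
    n = len(C)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # A (B r) == C r holds for all r when C = A B; a corrupted C
        # fails this identity for at least half of the 0/1 vectors r
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False
    return True
```

Because each trial is quadratic rather than cubic, the check is far cheaper than recomputing the product, which is what makes a per-cluster and per-worker deployment affordable.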
Low-rank matrix approximation is a popular topic in machine learning. In this paper, we propose a new algorithm for this problem by minimizing the least-squares estimation over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical gradient descent within the framework of optimization on manifolds. In particular, we reformulate an unconstrained optimization problem on a low-rank manifold as a differential dynamical system. We develop a splitting numerical integration method by applying a splitting integration scheme to the dynamical system. We conduct a convergence analysis of our splitting numerical integration algorithm and guarantee that the error between the recovered matrix and the true result decreases monotonically in the Frobenius norm. Moreover, our splitting numerical integration can be adapted to matrix completion scenarios. Experimental results show that our approach scales well to large-scale problems with satisfactory accuracy.
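As a toy stand-in for optimization over the fixed-rank manifold (not the paper's splitting integrator), projected gradient descent alternates a Euclidean gradient step with an SVD retraction back onto the manifold; the function names and step size below are illustrative.

```python
import numpy as np

def truncated_svd(X, r):
    """Retraction: project X back onto the manifold of rank-r matrices."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def lowrank_descent(M, r, eta=0.7, iters=50, seed=0):
    """Minimize ||X - M||_F^2 over rank-r X: Euclidean gradient step,
    then SVD retraction.  Returns the iterate and per-step errors."""
    rng = np.random.default_rng(seed)
    X = truncated_svd(M + rng.normal(size=M.shape), r)  # perturbed start
    errs = []
    for _ in range(iters):
        X = truncated_svd(X - eta * (X - M), r)
        errs.append(np.linalg.norm(X - M))  # Frobenius error
    return X, errs
```

When the target matrix is itself rank $r$, each step contracts the Frobenius error, mirroring the monotone-decrease guarantee the paper establishes for its integrator.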
Sparse binary matrices are of great interest in the field of compressed sensing. This class of matrices makes it possible to perform signal recovery with lower storage costs and faster decoding algorithms. In particular, random matrices formed by i.i.d. Bernoulli $p$ random variables are of practical relevance in the context of nonnegative sparse recovery. In this work, we investigate the robust nullspace property of sparse Bernoulli $p$ matrices. Previous results in the literature establish that such matrices can accurately recover $n$-dimensional $s$-sparse vectors with $m=O\left (\frac{s}{c(p)}\log\frac{en}{s}\right )$ measurements, where $c(p) \le p$ is a constant that only depends on the parameter $p$. These results suggest that, when $p$ vanishes, the sparse Bernoulli matrix requires considerably more measurements than the minimum achieved by standard isotropic subgaussian designs. We show that this is not true. Our main result characterizes, for a wide range of sparsity levels $s$, the smallest $p$ for which sparse recovery with the minimal number of measurements is possible. We also provide matching lower bounds to establish the optimality of our results.
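For background intuition (this is the classical spark condition, not the paper's robust nullspace analysis): unique recovery of all $s$-sparse vectors requires every $2s$ columns of the measurement matrix to be linearly independent. The helper names below are illustrative, and the brute-force check is only viable for tiny instances.

```python
import itertools
import numpy as np

def bernoulli_matrix(m, n, p, rng):
    """m x n measurement matrix with i.i.d. Bernoulli(p) entries in {0, 1}."""
    return (rng.random((m, n)) < p).astype(float)

def spark_exceeds(A, k):
    """True iff every k columns of A are linearly independent
    (spark(A) > k), which suffices for unique recovery of all
    (k // 2)-sparse vectors.  Exponential in n; illustration only."""
    return all(np.linalg.matrix_rank(A[:, list(cols)]) == k
               for cols in itertools.combinations(range(A.shape[1]), k))
```

As $p$ shrinks, columns become sparser and collisions between small column subsets become more likely, which is the tension the paper's measurement bounds quantify.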
The theory of reinforcement learning currently suffers from a mismatch between its empirical performance and the theoretical characterization of its performance, with consequences for, e.g., the understanding of sample efficiency, safety, and robustness. The linear quadratic regulator with unknown dynamics is a fundamental reinforcement learning setting with significant structure in its dynamics and cost function, yet even in this setting there is a gap between the best known regret lower bound of $\Omega_p(\sqrt{T})$ and the best known upper bound of $O_p(\sqrt{T}\,\text{polylog}(T))$. The contribution of this paper is to close that gap by establishing a novel regret upper bound of $O_p(\sqrt{T})$. Our proof is constructive in that it analyzes the regret of a concrete algorithm, and simultaneously establishes an estimation error bound on the dynamics of $O_p(T^{-1/4})$, which is also the first to match the rate of a known lower bound. The two keys to our improved proof technique are (1) a more precise upper and lower bound on the system Gram matrix and (2) a self-bounding argument for the expected estimation error of the optimal controller.
Physical systems are usually modeled by differential equations, but solving these differential equations analytically is often intractable. Instead, the differential equations can be solved numerically by discretization in a finite computational domain. The discretized equation reduces to a large linear system, whose solution is typically found using an iterative solver. Starting from an initial guess $x_0$, the algorithm is iterated to obtain a sequence of solution vectors $x_m$. The iterative algorithm is said to converge to the solution $x$ if and only if $x_m$ converges to $x$. Accuracy of the numerical solutions is important, especially in the design of safety-critical systems such as airplanes, cars, or nuclear power plants. It is therefore important to formally guarantee that the iterative solvers converge to the "true" solution of the original differential equation. In this paper, we first formalize the necessary and sufficient conditions for iterative convergence in the Coq proof assistant. We then extend this result to two classical iterative methods: Gauss--Seidel iteration and Jacobi iteration. We formalize conditions for the convergence of the Gauss--Seidel method, based on the positive definiteness of the iteration matrix. We then formally state conditions for the convergence of Jacobi iteration and instantiate them with an example to demonstrate convergence of the iterative solutions to the direct solution of the linear system. We leverage recent developments in the Coq linear algebra and MathComp libraries for our formalization.
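For concreteness, Jacobi iteration (one of the two methods formalized) computes every coordinate of the next iterate from the previous one; a minimal sketch, with convergence guaranteed e.g. when $A$ is strictly diagonally dominant:

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration for A x = b:
    x_i <- (b_i - sum_{j != i} A[i][j] * x_j) / A[i][i],
    using only the previous iterate.  (Gauss--Seidel would instead
    reuse freshly updated coordinates within each sweep.)"""
    n = len(b)
    x = [0.0] * n  # initial guess x_0
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

The formal development proves exactly this kind of statement: that the sequence $x_m$ produced by such an update converges to the direct solution of the linear system.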
An intensive line of research on fixed parameter tractability of integer programming is focused on exploiting the relation between the sparsity of a constraint matrix $A$ and the norm of the elements of its Graver basis. In particular, integer programming is fixed parameter tractable when parameterized by the primal tree-depth and the entry complexity of $A$, and when parameterized by the dual tree-depth and the entry complexity of $A$; both of these parameterizations imply that $A$ is sparse; in particular, the number of its non-zero entries is linear in the number of columns or rows, respectively. We study preconditioners transforming a given matrix into an equivalent sparse matrix when one exists, and provide structural results characterizing the existence of a sparse equivalent matrix in terms of the structural properties of the associated column matroid. In particular, our results imply that the $\ell_1$-norm of the Graver basis is bounded by a function of the maximum $\ell_1$-norm of a circuit of $A$. We use our results to design a parameterized algorithm that constructs a matrix equivalent to an input matrix $A$ that has small primal/dual tree-depth and entry complexity if such an equivalent matrix exists. Our results yield parameterized algorithms for integer programming when parameterized by the $\ell_1$-norm of the Graver basis of the constraint matrix, when parameterized by the $\ell_1$-norm of the circuits of the constraint matrix, when parameterized by the smallest primal tree-depth and entry complexity of a matrix equivalent to the constraint matrix, and when parameterized by the smallest dual tree-depth and entry complexity of a matrix equivalent to the constraint matrix.
We show that the Identity Problem is decidable for finitely generated sub-semigroups of the group $\operatorname{UT}(4, \mathbb{Z})$ of $4 \times 4$ unitriangular integer matrices. As a byproduct of our proof, we also show the decidability of several subset reachability problems in $\operatorname{UT}(4, \mathbb{Z})$.
The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has seen significant progress in the past couple of decades. However, most work has been devoted to point sets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN, based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves which generalizes both the discrete Fr\'echet and Dynamic Time Warping distances, the two most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with an arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Query time is comparable to, or significantly improved over, existing methods; our algorithm is especially efficient when the length of the curves is bounded.
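Of the two distances generalized here, Dynamic Time Warping is the simpler to state: it charges the ground distance at every aligned pair of points and minimizes over monotone alignments. A short sketch with Euclidean ground distance and sum ($\ell_1$-product) aggregation:

```python
import math

def dtw(P, Q):
    """Dynamic Time Warping between point sequences P, Q in R^d:
    the standard O(|P||Q|) dynamic program over monotone alignments."""
    n, m = len(P), len(Q)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(P[i - 1], Q[j - 1])  # Euclidean ground distance
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Replacing the sum with a maximum over the alignment yields the discrete Fr\'echet distance, which is how a single $\ell_p$-product framework can capture both.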
We show that for the problem of testing whether a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of its entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works over any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first that does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query-complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model, in which query access comes in the form of $\langle X_i, A\rangle := \operatorname{tr}(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, including the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded-entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
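The sensing-model access can be made concrete: each query supplies a matrix $X_i$ and receives the entrywise inner product with $A$. A minimal sketch (entry access is the special case where $X_i$ is an indicator matrix):

```python
import numpy as np

def sensing_query(X, A):
    """One sensing-model measurement: <X, A> = tr(X^T A), which equals
    the entrywise inner product of the query matrix X with A."""
    return float(np.trace(X.T @ A))
```

Choosing $X_i$ with a single nonzero entry reads one entry of $A$, so the sensing model strictly generalizes the query model used in the rank tester.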