This paper describes an algorithm that computes the characteristic polynomial of a matrix over a field within the same asymptotic complexity, up to constant factors, as the multiplication of two square matrices. Previously, this was achieved only by resorting to genericity assumptions or randomization techniques, while the best known complexity bound for a general deterministic algorithm was obtained by Keller-Gehrig in 1985 and involves logarithmic factors. More generally, our algorithm computes the determinant of a univariate polynomial matrix in reduced form, and it relies on new subroutines for transforming shifted reduced matrices into shifted weak Popov matrices, and shifted weak Popov matrices into shifted Popov matrices.
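For concreteness, the object being computed is the determinant of the univariate polynomial matrix $xI - A$. A minimal sketch via sympy (this illustrates the definition only, not the paper's fast algorithm):

```python
# Characteristic polynomial as the determinant of the polynomial matrix xI - A.
# Illustrates the object computed, not the paper's matrix-multiplication-time method.
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[2, 1, 0],
               [0, 3, 1],
               [1, 0, 1]])
char_poly = (x * sp.eye(3) - A).det()  # determinant of a univariate polynomial matrix
print(sp.expand(char_poly))            # x**3 - 6*x**2 + 11*x - 7
```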
We introduce and investigate matrix approximation by decomposition into a sum of radial basis function (RBF) components. An RBF component is a generalization of the outer product between a pair of vectors, where an RBF function replaces the scalar multiplication between individual vector elements. Even though the RBF functions are positive definite, the summation across components is not restricted to convex combinations, which allows us to compute the decomposition for any real matrix, not necessarily symmetric or positive definite. We formulate the problem of seeking such a decomposition as an optimization problem with a nonlinear, non-convex loss function. Several modern versions of the gradient descent method, including their scalable stochastic counterparts, are used to solve this problem. We provide extensive empirical evidence of the effectiveness of both the RBF decomposition and the gradient-based fitting algorithm. While conceptually motivated by the singular value decomposition (SVD), our proposed nonlinear counterpart outperforms SVD by drastically reducing the memory required to approximate a data matrix with the same $L_2$-error, for a wide range of matrix types. For example, it yields memory savings by a factor of 2 to 10 for Gaussian noise, graph adjacency matrices, and kernel matrices. Moreover, this proximity-based decomposition can offer additional interpretability in applications that involve, e.g., capturing the inner low-dimensional structure of the data, retaining graph connectivity structure, and preserving the acutance of images.
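For concreteness, a minimal sketch of one plausible instantiation, assuming Gaussian RBF components and a plain Adam fit in PyTorch (the shapes, names, and hyperparameters here are our illustration, not the paper's exact parametrization):

```python
# Minimal sketch: approximate M by a sum of K Gaussian RBF components,
# M_hat[i,j] = sum_k w[k] * exp(-(U[i,k] - V[j,k])**2), fitted by gradient descent.
import torch

n, m, K = 50, 40, 5                          # matrix size and number of components
M = torch.randn(n, m)                        # target matrix to approximate

U = torch.randn(n, K, requires_grad=True)    # "row" vectors, one per component
V = torch.randn(m, K, requires_grad=True)    # "column" vectors
w = torch.randn(K, requires_grad=True)       # signed weights: no convexity required

opt = torch.optim.Adam([U, V, w], lr=0.05)
for step in range(2000):
    opt.zero_grad()
    # The RBF exp(-(u - v)^2) replaces the product u * v of the outer product.
    diff = U.unsqueeze(1) - V.unsqueeze(0)            # shape (n, m, K)
    M_hat = (w * torch.exp(-diff ** 2)).sum(dim=-1)   # weighted sum of components
    loss = ((M - M_hat) ** 2).mean()                  # L2 fitting error
    loss.backward()
    opt.step()
print(loss.item())
```

Note the storage is $(n + m + 1)K$ numbers, against $(n + m + 1)r$ for a rank-$r$ truncated SVD, which is the basis of the memory comparison.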
We investigate the complexity of computing the Zariski closure of a finitely generated group of matrices. The Zariski closure was previously shown to be computable by Derksen, Jeandel and Koiran, but the termination argument for their algorithm appears not to yield any complexity bound. In this paper we follow a different approach and obtain a bound on the degree of the polynomials that define the closure. Our bound shows that the closure can be computed in elementary time. We describe several applications of this result, for example to quantum automata and quantum universal gates. We also obtain an upper bound on the length of a strictly increasing chain of linear algebraic groups, all of which are generated over a fixed number field.
This paper studies \emph{linear} and \emph{affine} error-correcting codes for correcting synchronization errors such as insertions and deletions. We call such codes linear/affine insdel codes. Linear codes that can correct even a single deletion are limited to an information rate of at most $1/2$ (achieved by the trivial 2-fold repetition code). Previously, it was (erroneously) reported that, more generally, no non-trivial linear codes correcting $k$ deletions exist, i.e., that the $(k+1)$-fold repetition code and its rate of $1/(k+1)$ are essentially optimal for any $k$. We disprove this and show the existence of binary linear codes of length $n$ and rate just below $1/2$ capable of correcting $\Omega(n)$ insertions and deletions. This identifies rate $1/2$ as a sharp threshold for recovery from deletions for linear codes, and reopens the quest for a better understanding of the capabilities of linear codes for correcting insertions/deletions. We prove novel outer bounds and existential inner bounds for the rate vs. (edit) distance trade-off of linear insdel codes. We complement our existential results with an efficient synchronization-string-based transformation that converts any asymptotically good linear code for Hamming errors into an asymptotically good linear code for insdel errors. Lastly, we show that the $\frac{1}{2}$-rate limitation does not hold for affine codes by giving an explicit affine code of rate $1-\epsilon$ which can efficiently correct a constant fraction of insdel errors.
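The single-deletion claim can be checked directly on the 2-fold repetition code: one-deletion correction is equivalent to the codewords' deletion balls being pairwise disjoint. A small brute-force verification (our illustration, not from the paper):

```python
# Verify exhaustively (for short messages) that the 2-fold repetition code
# corrects one deletion: single-deletion balls are pairwise disjoint.
from itertools import product

def encode(msg):                       # 2-fold repetition, rate 1/2
    return ''.join(b * 2 for b in msg)

def deletion_ball(word):               # all strings one deletion away from word
    return {word[:i] + word[i + 1:] for i in range(len(word))}

k = 4                                  # message length (small enough to enumerate)
balls = {m: deletion_ball(encode(''.join(m))) for m in product('01', repeat=k)}
msgs = list(balls)
ok = all(balls[a].isdisjoint(balls[b])
         for i, a in enumerate(msgs) for b in msgs[i + 1:])
print(ok)  # True: every single deletion is uniquely decodable
```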
In this note, we study the expander decomposition problem in a more general setting where the input graph has positively weighted edges and nonnegative demands on its vertices. We show how to extend the techniques of Chuzhoy et al. (FOCS 2020) to this wider setting, obtaining a deterministic algorithm for the problem in almost-linear time.
Probing is a general technique used to reduce the variance of the Hutchinson stochastic estimator for the trace of the inverse of a large, sparse matrix $A$. The variance of the estimator is the sum of the squares of the off-diagonal elements of $A^{-1}$. Therefore, this technique computes probing vectors that, when used in the estimator, annihilate the largest off-diagonal elements. For matrices that display decay of $|A^{-1}_{ij}|$ with the graph distance between nodes $i$ and $j$, this is achieved through graph coloring of increasing powers $A^p$. Equivalently, when a matrix stems from a lattice discretization, it is computationally beneficial to find a distance-$p$ coloring of the lattice. In this work, we study probing for the more general problem of computing the trace of a permutation of $A^{-1}$, say $PA^{-1}$, motivated by Lattice QCD, where we need to construct "disconnected diagrams" to extract flavor-separated Generalized Parton Distribution functions. In Lattice QCD, where the matrix has a 4D toroidal lattice structure, these non-local operators correspond to a $PA^{-1}$ where $P$ is the permutation corresponding to a displacement in one or more dimensions. We focus on displacements in a single dimension ($k$), but our methods are general. We show that probing on $A^p$ or $(PA)^p$ does not annihilate the largest-magnitude elements. To resolve this issue, our displacement-based probing works on $PA^p$ using a new coloring scheme that works directly on appropriately displaced neighborhoods of the lattice. We prove lower bounds on the number of colors needed, and study the effect of this scheme on variance reduction, both theoretically and experimentally on a real-world Lattice QCD calculation. We achieve orders-of-magnitude speedups over the unprobed or naively probed methods.
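As background for the probing idea, a toy sketch on a 1D periodic lattice (the paper's setting is a 4D torus; the matrix, coloring, and sample counts below are our simplifications):

```python
# Hutchinson estimator for tr(A^{-1}) with and without classical distance-p
# probing on a 1D periodic lattice of n sites.
import numpy as np

n, p = 60, 2
rng = np.random.default_rng(0)
A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # |A^{-1}_{ij}| decays with |i-j|
A[0, -1] = A[-1, 0] = -1.0                               # periodic boundary
Ainv = np.linalg.inv(A)                                  # explicit inverse, toy scale only

# Plain Hutchinson: average z^T A^{-1} z over random sign vectors z.
zs = rng.choice([-1.0, 1.0], size=(100, n))
est_plain = np.mean([z @ Ainv @ z for z in zs])

# Distance-p probing: same-color sites are more than p apart (n divisible by p+1),
# and each probing vector is a sign vector restricted to one color class, so the
# largest (near-diagonal) off-diagonal contributions drop out of the variance.
colors = np.arange(n) % (p + 1)
est_probe = 0.0
for c in range(p + 1):
    z = np.where(colors == c, rng.choice([-1.0, 1.0], size=n), 0.0)
    est_probe += z @ Ainv @ z
print(np.trace(Ainv), est_plain, est_probe)
```

The paper's point is that for $\mathrm{tr}(PA^{-1})$ the analogous coloring must be built on displaced neighborhoods, since coloring for $A^p$ or $(PA)^p$ targets the wrong entries.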
In this paper, we focus on variable selection techniques for a class of semiparametric spatial regression models that allow one to study the effects of explanatory variables in the presence of spatial information. The spatial smoothing problem in the nonparametric part is tackled by means of bivariate splines over a triangulation, which can efficiently handle data distributed over irregularly shaped regions. In addition, we develop a unified procedure for variable selection that identifies significant covariates under a double penalization framework, and we show that the penalized estimators enjoy the "oracle" property. The proposed method can simultaneously identify non-zero spatially distributed covariates and solve the problem of "leakage" across complex domains of the functional spatial component. To estimate the standard deviations of the proposed estimators of the coefficients, a sandwich formula is developed as well. Finally, Monte Carlo simulation examples and a real-data example are provided to illustrate the proposed methodology. All technical proofs are given in the supplementary materials.
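As a schematic of the double penalization idea, here is a deliberately simplified fit: a lasso-type penalty drives covariate selection while a quadratic roughness penalty controls the nonparametric part (the paper's actual estimator uses bivariate splines over a triangulation and its own penalties; the basis B and penalty matrix D below are stand-ins):

```python
# Schematic double-penalized least squares: y = X*beta + B*theta + noise,
# with a soft-threshold (lasso) step for beta and a roughness penalty on theta.
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 200, 10, 15
X = rng.standard_normal((n, p))          # covariates; only the first two are active
B = rng.standard_normal((n, q))          # stand-in for spline basis evaluations
D = np.diff(np.eye(q), n=2, axis=0)      # stand-in second-difference penalty matrix
y = X[:, 0] - 2 * X[:, 1] + 0.3 * (B @ rng.standard_normal(q)) \
    + 0.1 * rng.standard_normal(n)

lam1, lam2, lr = 0.05, 1.0, 1e-3
beta, theta = np.zeros(p), np.zeros(q)
for _ in range(5000):
    r = y - X @ beta - B @ theta
    beta = beta - lr * (-X.T @ r / n)
    beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam1, 0.0)  # proximal step
    theta = theta - lr * (-B.T @ r / n + lam2 * D.T @ D @ theta)      # smooth penalty
print(np.round(beta, 2))  # near-zero entries correspond to unselected covariates
```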
Parameter-space and function-space provide two different duality frames in which to study neural networks. We demonstrate that symmetries of network densities may be determined via dual computations of network correlation functions, even when the density is unknown and the network is not equivariant. Symmetry-via-duality relies on invariance properties of the correlation functions, which stem from the choice of network parameter distributions. Input and output symmetries of neural network densities are determined, recovering known Gaussian process results in the infinite-width limit. The mechanism may also be utilized to determine symmetries during training, when parameters are correlated, as well as symmetries of the Neural Tangent Kernel. We demonstrate that the amount of symmetry in the initialization density affects the accuracy of networks trained on Fashion-MNIST, and that symmetry breaking helps only when it is in the direction of ground truth.
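A toy version of the dual computation, for a single-hidden-layer network with i.i.d. Gaussian parameters (our setup): the two-point correlation function is estimated by sampling networks, and its invariance under an input rotation reflects a symmetry of the (unknown) network density.

```python
# Estimate the two-point correlation function G(x, x') = E_theta[f(x) f(x')]
# by sampling networks, and check invariance under an SO(2) input rotation.
import numpy as np

rng = np.random.default_rng(0)
d_in, width, samples = 2, 256, 4000

def G2(x1, x2):
    vals = []
    for _ in range(samples):
        W = rng.standard_normal((width, d_in)) / np.sqrt(d_in)  # input weights
        v = rng.standard_normal(width) / np.sqrt(width)         # output weights
        vals.append((v @ np.tanh(W @ x1)) * (v @ np.tanh(W @ x2)))
    return np.mean(vals)

t = 0.7
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])          # input rotation
x1, x2 = np.array([1.0, 0.0]), np.array([0.3, 0.8])
print(G2(x1, x2), G2(R @ x1, R @ x2))            # approximately equal
```

Equality holds here because the Gaussian distribution of W is invariant under right-multiplication by R, which is the kind of parameter-distribution invariance the abstract refers to.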
Fingerprints feature a ridge pattern with moderately varying ridge frequency (RF), following an orientation field (OF), which usually features some singularities. Additionally, at some points, called minutiae, ridge lines end or fork, and this point pattern is commonly used for fingerprint identification and authentication. Whenever the OF features divergent ridge lines (e.g., near singularities), a nearly constant RF necessitates the generation of more ridge lines, originating at minutiae. We call these the necessary minutiae. It turns out that fingerprints feature additional minutiae which occur at rather arbitrary locations. We call these the random minutiae or, since they may convey fingerprint individuality beyond the OF, the characteristic minutiae. In consequence, the minutiae point pattern is assumed to be a realization of the superposition of two stochastic point processes: a Strauss point process (whose activity function is given by the divergence field) with an additional hard core, and a homogeneous Poisson point process, modelling the necessary and the characteristic minutiae, respectively. We perform Bayesian inference using an MCMC-based minutiae separating algorithm (MiSeal). In simulations, it provides good mixing and good estimation of the underlying parameters. In application to fingerprints, we can separate the two minutiae patterns and verify, using the example of two different prints with similar OFs, that characteristic minutiae convey fingerprint individuality.
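A toy simulation of the model's two ingredients, to fix ideas (our simplification: a sequential-inhibition hard-core sampler stands in for the Strauss-with-hard-core process, which would normally be sampled by birth-death MCMC):

```python
# Superposition of a hard-core "necessary" pattern and a homogeneous Poisson
# "characteristic" pattern on the unit square.
import numpy as np

rng = np.random.default_rng(3)
hard_core, rate_nec, rate_char = 0.05, 150, 40

# Necessary minutiae: Poisson proposals accepted only if they keep the hard core.
necessary = []
for pt in rng.random((rng.poisson(rate_nec), 2)):
    if all(np.linalg.norm(pt - q) >= hard_core for q in necessary):
        necessary.append(pt)
necessary = np.array(necessary)

# Characteristic minutiae: homogeneous Poisson process, no interaction.
characteristic = rng.random((rng.poisson(rate_char), 2))

minutiae = np.vstack([necessary, characteristic])  # the observed superposition
print(len(necessary), len(characteristic), len(minutiae))
```

Inference then runs in the opposite direction: given only the superposed pattern, the MiSeal algorithm assigns each point to one of the two processes.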
In this study, we develop an asymptotic theory of nonparametric regression for locally stationary functional time series. First, we introduce the notion of a locally stationary functional time series (LSFTS) that takes values in a semi-metric space. Then, we propose a nonparametric model for LSFTS with a regression function that changes smoothly over time. We establish uniform convergence rates for a class of kernel estimators, including the Nadaraya-Watson (NW) estimator of the regression function, and prove a central limit theorem for the NW estimator.
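For orientation, a minimal NW-type estimator for functional regressors with a local-in-time kernel (the semi-metric, kernels, and bandwidths below are our illustrative choices):

```python
# Nadaraya-Watson estimator with a product kernel: one factor localizes in
# rescaled time, the other in a semi-metric distance between curves.
import numpy as np

rng = np.random.default_rng(2)
T, grid = 300, 50
X = rng.standard_normal((T, grid)).cumsum(axis=1) / np.sqrt(grid)  # functional covariates
Y = X.mean(axis=1) + 0.1 * rng.standard_normal(T)                  # scalar responses

def nw(x_new, t_new, h_t=0.1, h_d=0.5):
    u = np.arange(T) / T                               # rescaled time in [0, 1]
    d = np.sqrt(np.mean((X - x_new) ** 2, axis=1))     # L2 semi-metric to each curve
    K = np.exp(-((u - t_new) / h_t) ** 2) * np.exp(-(d / h_d) ** 2)
    return np.sum(K * Y) / np.sum(K)                   # locally weighted average

print(nw(X[100], 100 / T))  # fitted value near observation 100
```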
We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14), which holds if the algorithm is required to read a submatrix. Our algorithm is the first that does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query-complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model, in which query access comes in the form of $\langle X_i, A\rangle := \mathrm{tr}(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, including the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
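To make the two access models concrete, a small sketch (the constant in the submatrix size is ours; the submatrix tester shown is the baseline approach whose $\Omega(d^2/\epsilon^2)$ cost the paper's non-adaptive tester circumvents):

```python
# First: one sensing-model query <X, A> = tr(X^T A). Then: a naive baseline
# tester that reads a random O(d/eps) x O(d/eps) submatrix and checks its rank.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 200, 3, 0.1
A = rng.standard_normal((n, d)) @ rng.standard_normal((d, n))  # a rank-d matrix

X = rng.standard_normal((n, n))
print(np.trace(X.T @ A))                     # a single linear measurement of A

k = int(2 * d / eps)                         # submatrix side length (constant is ours)
rows = rng.choice(n, size=k, replace=False)
cols = rng.choice(n, size=k, replace=False)
sub = A[np.ix_(rows, cols)]                  # reads k^2 = O(d^2 / eps^2) entries
print(np.linalg.matrix_rank(sub) <= d)       # accept: consistent with rank <= d
```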