We study the mathematical structure of the solution set (and its tangent space) of the matrix equation $G^*JG=J$ for a given square matrix $J$. In the language of pure mathematics, this solution set is a Lie group: the isometry group of a bilinear (or sesquilinear) form. Such groups are generally described as intersections of a few special groups. The tangent space to $\{G: G^*JG=J \}$ consists of solutions of the linear matrix equation $X^*J+JX=0$. For the complex case, the solution set of this linear equation was computed by De Ter{\'a}n and Dopico. We find that on its own, the equation $X^*J+JX=0$ is hard to solve. Adding the complementary linear equation $X^*J-JX=0$ to the mix reduces, rather than increases, the complexity: not only can we then solve the original problem, but we can also approach the broader algebraic and geometric structure. One implication is that the two equations form an $\mathfrak{h}$ and $\mathfrak{m}$ pair familiar from the study of pseudo-Riemannian symmetric spaces. We explicitly demonstrate the computation of the solutions of the equation $X^*J\pm JX=0$ for real and complex matrices; however, any real, complex, or quaternionic case with an arbitrary involution (e.g., transpose, conjugate transpose, or one of the various quaternion transposes) can be effectively solved with the same strategy. We provide numerical examples and visualizations.
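As a quick numerical illustration (a minimal sketch, not the structured solutions derived in the paper), one can compute a basis of the solution space of $X^TJ \pm JX = 0$ for a real $J$ with the transpose involution by building the linear map on the standard basis matrices and extracting its null space:

```python
# Minimal sketch, assuming real J and the transpose involution.
import numpy as np
from scipy.linalg import null_space

def solution_basis(J, sign=+1):
    """Basis (as vectorized columns) of {X : X^T J + sign * J X = 0}."""
    n = J.shape[0]
    A = np.zeros((n * n, n * n))
    for k in range(n * n):               # image of each standard basis matrix
        E = np.zeros((n, n))
        E.flat[k] = 1.0
        A[:, k] = (E.T @ J + sign * (J @ E)).ravel()
    return null_space(A)                 # columns span the solution space

# Standard symplectic form on R^4: the "+" equation recovers sp(4, R),
# of dimension 10; sign=-1 gives the complementary "m" space.
J = np.block([[np.zeros((2, 2)), np.eye(2)], [-np.eye(2), np.zeros((2, 2))]])
print(solution_basis(J, +1).shape[1])    # expected: 10
```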
What is the "right" topological invariant of a large point cloud X? Prior research has focused on estimating the full persistence diagram of X, a quantity that is very expensive to compute, unstable to outliers, and far from a sufficient statistic. We therefore propose that the correct invariant is not the persistence diagram of X, but rather the collection of persistence diagrams of many small subsets. This invariant, which we call "distributed persistence," is perfectly parallelizable, more stable to outliers, and has a rich inverse theory. The map from the space of point clouds (with the quasi-isometry metric) to the space of distributed persistence invariants (with the Hausdorff-Bottleneck distance) is a global quasi-isometry. This is a much stronger property than simply being injective, as it implies that the preimage of a small neighborhood is a small neighborhood, and is to our knowledge the only result of its kind in the TDA literature. Moreover, the quasi-isometry bounds depend on the size of the subsets taken, so that as the size of these subsets goes from small to large, the invariant interpolates between a purely geometric one and a topological one. Lastly, we note that our inverse results do not actually require considering all subsets of a fixed size (an enormous collection), but only a relatively small collection satisfying certain covering properties that arise with high probability when randomly sampling subsets. These theoretical results are complemented by two synthetic experiments demonstrating the use of distributed persistence in practice.
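For concreteness, here is a minimal sketch of computing the distributed persistence invariant by subsampling; it assumes the third-party `ripser` package for diagram computation, but any persistence routine would do:

```python
# Minimal sketch; assumes the `ripser` package (any persistence routine works).
import numpy as np
from ripser import ripser

def distributed_persistence(X, subset_size, n_subsets, rng=None):
    """Persistence diagrams of many small random subsets of the point cloud X."""
    rng = np.random.default_rng(rng)
    diagrams = []
    for _ in range(n_subsets):           # embarrassingly parallel in practice
        idx = rng.choice(len(X), size=subset_size, replace=False)
        diagrams.append(ripser(X[idx])["dgms"])
    return diagrams

# Example: a noisy circle; small subsets probe geometry, larger ones topology.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(500, 2))
dgms = distributed_persistence(X, subset_size=40, n_subsets=100, rng=1)
```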
For high-dimensional data, in which P features for N objects (P >> N) are represented in an NxP matrix X, we describe a clustering algorithm based on the normalized left Gram matrix, G = XX'/P. Under certain regularity conditions, the rows of G that correspond to objects in the same cluster converge to the same mean vector. By clustering on the row means of G, the algorithm requires neither preprocessing by dimension-reduction or feature-selection techniques nor specification of tuning or hyperparameter values. Because it is based on the NxN matrix G, it has a lower computational cost than many methods that cluster the feature matrix X directly. When compared to 14 other clustering algorithms applied to 32 benchmark microarray datasets, the proposed algorithm provided the most accurate estimate of the underlying cluster configuration more than twice as often as its closest competitors.
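A minimal sketch of one reading of this procedure (not the authors' exact algorithm, which also avoids fixing the number of clusters): form G and cluster its rows, here with k-means as a simplified stand-in for the final step:

```python
# Minimal sketch; k-means on the rows of G is a stand-in for the paper's
# final clustering step, and here the number of clusters is fixed by hand.
import numpy as np
from sklearn.cluster import KMeans

def gram_cluster(X, n_clusters):
    """X is N x P with P >> N; returns one cluster label per object (row)."""
    N, P = X.shape
    G = X @ X.T / P                      # normalized N x N left Gram matrix
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(G)

# Two groups of 30 objects with slightly different means in P = 2000 features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(30, 2000)) for m in (0.0, 0.3)])
print(gram_cluster(X, 2))
```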
This paper studies the problem of recovering the hidden vertex correspondence between two edge-correlated random graphs. We focus on the Gaussian model, where the two graphs are complete graphs with correlated Gaussian weights, and the Erd\H{o}s-R\'enyi model, where the two graphs are subsampled from a common parent Erd\H{o}s-R\'enyi graph $\mathcal{G}(n,p)$. For dense graphs with $p=n^{-o(1)}$, we prove that there exists a sharp threshold, above which one can correctly match all but a vanishing fraction of vertices and below which correctly matching any positive fraction is impossible, a phenomenon known as the "all-or-nothing" phase transition. Even more strikingly, in the Gaussian setting, above the threshold all vertices can be exactly matched with high probability. In contrast, for sparse Erd\H{o}s-R\'enyi graphs with $p=n^{-\Theta(1)}$, we show that the all-or-nothing phenomenon no longer holds, and we determine the thresholds up to a constant factor. Along the way, we also derive the sharp threshold for exact recovery, sharpening existing results for Erd\H{o}s-R\'enyi graphs. The proof of the negative results builds upon a tight characterization of the mutual information based on a truncated second-moment computation and an "area theorem" that relates the mutual information to the integral of the reconstruction error. The positive results follow from a tight analysis of the maximum likelihood estimator that takes into account the cycle structure of the induced permutation on the edges.
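For concreteness, here is a minimal sketch of the subsampled Erd\H{o}s-R\'enyi model described above; the subsampling probability s and the uniform random hidden permutation are modeling choices of this illustration, not quantities named in the abstract:

```python
# Minimal sketch of two graphs subsampled from a common parent G(n, p),
# with the second graph relabeled by a hidden permutation.
import numpy as np

def correlated_er_pair(n, p, s, rng=None):
    rng = np.random.default_rng(rng)
    parent = np.triu(rng.random((n, n)) < p, k=1)    # parent G(n, p)
    A = parent & (rng.random((n, n)) < s)            # keep each edge w.p. s
    B = parent & (rng.random((n, n)) < s)            # independent subsample
    A, B = A | A.T, B | B.T                          # symmetrize
    pi = rng.permutation(n)                          # hidden correspondence
    return A, B[np.ix_(pi, pi)], pi

A, B, pi = correlated_er_pair(n=200, p=0.1, s=0.8, rng=0)
```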
A Bohemian matrix family is a set of matrices all of whose entries are drawn from a fixed, usually discrete and hence bounded, subset of a field of characteristic zero. Originally these entries were integers -- hence the name, from the acronym BOunded HEight Matrix of Integers (BOHEMI) -- but other kinds of entries are also interesting. Some kinds of questions about Bohemian matrices can be answered by numerical computation, but sometimes exact computation is better. In this paper we explore some Bohemian families (symmetric, upper Hessenberg, or Toeplitz) computationally, and answer some open questions that have been posed about the distributions of their eigenvalues.
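As an illustration of the numerical side, the following sketch samples upper Hessenberg matrices with entries in {-1, 0, 1} and accumulates their eigenvalues, whose empirical density in the complex plane can then be histogrammed; the family and parameters are illustrative choices, not those of the paper:

```python
# Minimal sketch: eigenvalue samples from a Bohemian upper Hessenberg family.
import numpy as np

def random_bohemian_hessenberg(n, entries=(-1, 0, 1), rng=None):
    rng = np.random.default_rng(rng)
    M = rng.choice(entries, size=(n, n)).astype(float)
    return np.triu(M, k=-1)          # zero out entries below the subdiagonal

rng = np.random.default_rng(0)
eigs = np.concatenate(
    [np.linalg.eigvals(random_bohemian_hessenberg(8, rng=rng)) for _ in range(5000)]
)
# eigs.real and eigs.imag can now be binned into a 2-D density histogram.
```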
We show $\textsf{EOPL}=\textsf{PLS}\cap\textsf{PPAD}$. Here the class $\textsf{EOPL}$ consists of all total search problems that reduce to the End-of-Potential-Line problem, which was introduced in the works by Hubacek and Yogev (SICOMP 2020) and Fearnley et al. (JCSS 2020). In particular, our result yields a new simpler proof of the breakthrough collapse $\textsf{CLS}=\textsf{PLS}\cap\textsf{PPAD}$ by Fearnley et al. (STOC 2021). We also prove a companion result $\textsf{SOPL}=\textsf{PLS}\cap\textsf{PPADS}$, where $\textsf{SOPL}$ is the class associated with the Sink-of-Potential-Line problem.
$\newcommand{\NP}{\mathsf{NP}}\newcommand{\GapSVP}{\textrm{GapSVP}}$We give a simple proof that the (approximate, decisional) Shortest Vector Problem is $\NP$-hard under a randomized reduction. Specifically, we show that for any $p \geq 1$ and any constant $\gamma < 2^{1/p}$, the $\gamma$-approximate problem in the $\ell_p$ norm ($\gamma$-$\GapSVP_p$) is not in $\mathsf{RP}$ unless $\NP \subseteq \mathsf{RP}$. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of $\gamma$-$\GapSVP_p$ using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices. As in all known $\NP$-hardness results for $\GapSVP_p$ with $p < \infty$, our reduction uses randomness. Indeed, it is a notorious open problem to prove $\NP$-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Trans. Inf. Theory 2006). As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding $n$-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an $O(\sqrt{\log n})$ factor of Minkowski's bound. This asymptotically matches the best known distance for decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf. Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
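To make "Construction A" concrete, the following sketch builds a basis of the lattice $\{x \in \mathbb{Z}^n : x \bmod q \in C\}$ for a small Reed-Solomon code $C$; the parameters are illustrative only, not those used in the hardness proof:

```python
# Minimal sketch; given a systematic generator matrix [I_k | P^T] over F_q,
# the integer matrix [[I_k, P^T], [0, q I_{n-k}]] is a basis of the
# Construction A lattice. Exact modular arithmetic via sympy.
import numpy as np
from sympy import Matrix

q, n, k = 11, 8, 3                      # prime field size, length, dimension
alphas = list(range(n))                 # n distinct evaluation points (n <= q)
V = Matrix(n, k, lambda i, j: pow(alphas[i], j, q))       # RS evaluation matrix
G = (V * V[:k, :].inv_mod(q)).applyfunc(lambda e: e % q)  # systematic generator
P = np.array(G[k:, :].tolist(), dtype=int)                # (n-k) x k parity block

# Basis rows: lifted systematic codewords together with q Z^{n-k}.
B = np.block([
    [np.eye(k, dtype=int), P.T],
    [np.zeros((n - k, k), dtype=int), q * np.eye(n - k, dtype=int)],
])
print(B)
```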
Neural networks are increasingly being used to solve partial differential equations (PDEs), replacing slower numerical solvers. However, a critical issue is that neural PDE solvers require high-quality ground-truth data, which usually must come from the very solvers they are designed to replace; we are thus presented with a proverbial chicken-and-egg problem. In this paper, we present a method that can partially alleviate this problem by improving neural PDE solver sample complexity: Lie point symmetry data augmentation (LPSDA). In the context of PDEs, we are able to quantitatively derive an exhaustive list of data transformations, based on the Lie point symmetry group of the PDEs in question, something not possible in other application areas. We present this framework and demonstrate how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
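As a minimal illustration of the idea (one symmetry, not the exhaustive list derived in the paper): spatial translation $u(x,t) \mapsto u(x-a,t)$ is a Lie point symmetry of many PDEs with periodic boundaries (e.g., Burgers' or the heat equation), so a discretized solution can be rolled along the space axis to produce new valid training data for free:

```python
# Minimal sketch of one symmetry-based augmentation: periodic spatial shift.
import numpy as np

def translate_augment(u, rng=None):
    """u: array of shape (n_t, n_x), a solution snapshot on a periodic grid."""
    rng = np.random.default_rng(rng)
    shift = rng.integers(u.shape[1])      # random periodic spatial shift
    return np.roll(u, shift, axis=1)      # still solves the same PDE

# Each trajectory yields many shifted copies, multiplying the effective
# dataset size without running the numerical solver again.
```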
In distributional reinforcement learning, not only the expected return but the complete return distribution of a policy is taken into account. The return distribution for a fixed policy is given as the solution of an associated distributional Bellman equation. In this note we consider general distributional Bellman equations and study the existence and uniqueness of their solutions as well as tail properties of return distributions. We give necessary and sufficient conditions for the existence and uniqueness of return distributions and identify cases of regular variation. We link distributional Bellman equations to multivariate affine distributional equations and show that any solution of a distributional Bellman equation can be obtained as the vector of marginal laws of a solution to a multivariate affine distributional equation. This makes the general theory of such equations applicable to the distributional reinforcement learning setting.
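For concreteness, here is a minimal sample-based sketch of solving a distributional Bellman equation $G(s) \stackrel{d}{=} R(s,S') + \gamma G(S')$ for a fixed policy by fixed-point iteration on empirical sample sets; this is a simplification for illustration, not the machinery of the note:

```python
# Minimal sketch; empirical sample sets per state, synchronous sweeps.
import numpy as np

def sample_bellman_fixed_point(P, R, gamma, n_samples=1000, n_iters=200, rng=0):
    """P: (S, S) policy-induced transition matrix; R: (S, S) transition rewards.
    Returns n_samples draws approximating each state's return distribution."""
    rng = np.random.default_rng(rng)
    S = P.shape[0]
    G = np.zeros((S, n_samples))                       # samples of each G(s)
    for _ in range(n_iters):
        G_new = np.empty_like(G)
        for s in range(S):
            nxt = rng.choice(S, size=n_samples, p=P[s])    # draw S' ~ P(s, .)
            idx = rng.integers(n_samples, size=n_samples)  # resample G(S')
            G_new[s] = R[s, nxt] + gamma * G[nxt, idx]
        G = G_new
    return G

P = np.array([[0.9, 0.1], [0.2, 0.8]])
R = np.array([[1.0, 0.0], [0.0, -1.0]])
G = sample_bellman_fixed_point(P, R, gamma=0.9)
print(G.mean(axis=1))    # approximates the classical expected returns
```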
We investigate the Membership Problem for hypergeometric sequences: given a hypergeometric sequence $\langle u_n \rangle_{n=0}^\infty$ of rational numbers and a target $t \in \mathbb{Q}$, decide whether $t$ occurs in the sequence. We show decidability of this problem under the assumption that, in the defining recurrence $p(n)u_{n+1}=q(n)u_n$, the roots of the polynomials $p(x)$ and $q(x)$ are all rational numbers. Our proof relies on bounds on the density of primes in arithmetic progressions. We also observe a relationship between the decidability of the Membership Problem (and its variants) and the Rohrlich-Lang conjecture in transcendence theory.
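To fix notation, the following sketch computes terms of a hypergeometric sequence in exact rational arithmetic and performs a naive bounded search for the target; this only illustrates the objects involved and is not the paper's decision procedure, which rests on prime-density bounds:

```python
# Minimal sketch in exact rational arithmetic; the bounded search below is a
# naive check, not a decision procedure.
from fractions import Fraction

def terms(u0, p, q, n_max):
    """Yield u_0, ..., u_{n_max} for the recurrence p(n) u_{n+1} = q(n) u_n
    (assumes p(n) != 0 on this range)."""
    u = Fraction(u0)
    for n in range(n_max + 1):
        yield u
        u *= Fraction(q(n), p(n))

# Example: p(n) = n + 1 and q(n) = 2(2n + 1) give u_n = binomial(2n, n);
# both polynomials have only rational roots, the paper's decidable regime.
p = lambda n: n + 1
q = lambda n: 2 * (2 * n + 1)
t = Fraction(70)                                   # binomial(8, 4) = 70
print(any(u == t for u in terms(1, p, q, 50)))     # True
```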
This paper presents a safety-aware learning framework that employs an adaptive model-learning method together with barrier certificates for systems with possibly nonstationary agent dynamics. To extract the dynamic structure of the model, we use a sparse optimization technique; the resulting model is then used in combination with control barrier certificates, which constrain feedback controllers only when safety is about to be violated. Under some mild assumptions, solutions to the constrained feedback-controller optimization are guaranteed to be globally optimal, and the monotonic improvement of a feedback controller is thus ensured. In addition, we reformulate the (action-)value function approximation to make any kernel-based nonlinear function estimation method applicable, and we employ a state-of-the-art kernel adaptive filtering technique for this approximation. The resulting framework is verified experimentally on a brushbot, whose dynamics are unknown and highly complex.
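As a minimal sketch of the barrier-certificate mechanism (with an illustrative single-integrator system, not the brushbot or the paper's learned model), a single affine control-barrier constraint admits a closed-form safety filter that leaves the nominal controller untouched whenever the constraint is inactive:

```python
# Minimal sketch of a single-constraint control-barrier safety filter.
import numpy as np

def safety_filter(u_nom, a, b):
    """Solve min ||u - u_nom||^2 s.t. a.u + b >= 0 (one affine CBF constraint)."""
    slack = a @ u_nom + b
    if slack >= 0:
        return u_nom                         # constraint inactive: no change
    return u_nom - (slack / (a @ a)) * a     # project onto the constraint

# Example: single integrator x' = u, safe set h(x) = 1 - ||x||^2 >= 0.
# CBF condition: dh/dt + alpha * h >= 0, i.e. (-2x).u + alpha*(1 - x.x) >= 0.
x, u_nom, alpha = np.array([0.9, 0.0]), np.array([1.0, 0.0]), 1.0
u = safety_filter(u_nom, a=-2 * x, b=alpha * (1 - x @ x))
print(u)   # nominal input scaled back just enough to stay safe
```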