We prove that the sum of $t$ boolean-valued random variables sampled by a random walk on a regular expander converges in total variation distance to a discrete normal distribution at a rate of $O(\lambda/t^{1/2-o(1)})$, where $\lambda$ is the second largest eigenvalue of the random walk matrix in absolute value. To the best of our knowledge, among known Berry-Esseen bounds for Markov chains, our result is the first to show convergence in total variation distance, and is also the first to incorporate a linear dependence on expansion $\lambda$. In contrast, prior Markov chain Berry-Esseen bounds showed a convergence rate of $O(1/\sqrt{t})$ in weaker metrics such as Kolmogorov distance. Our result also improves upon prior work in the pseudorandomness literature, which showed that the total variation distance is $O(\lambda)$ when the approximating distribution is taken to be a binomial distribution. We achieve the faster $O(\lambda/t^{1/2-o(1)})$ convergence rate by generalizing the binomial distribution to discrete normals of arbitrary variance. We specifically construct discrete normals using a random walk on an appropriate 2-state Markov chain. Our bound can therefore be viewed as a regularity lemma that reduces the study of arbitrary expanders to a small class of particularly simple expanders.
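A minimal simulation sketch of the setting (my own illustration, not part of the paper's argument): it compares the empirical distribution of the label sum along a random walk on a small 4-regular expander-like graph with the analogous sum along a symmetric 2-state chain of the kind used above to construct discrete normals. The graph, the labelling, the parameter `p_stay`, and all sizes are illustrative assumptions.

```python
# Illustrative simulation (not from the paper): label sum along a random walk on a small
# 4-regular graph vs. the analogous sum along a symmetric 2-state chain.
import numpy as np

rng = np.random.default_rng(0)

def walk_sum_graph(neighbors, labels, t):
    """Sum of vertex labels along a t-step random walk, started at a uniform vertex."""
    v = rng.integers(len(neighbors))
    s = 0
    for _ in range(t):
        v = neighbors[v][rng.integers(len(neighbors[v]))]
        s += labels[v]
    return int(s)

def walk_sum_two_state(p_stay, t):
    """Sum of states along the symmetric 2-state chain with transition matrix
    [[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]; its second eigenvalue is 2*p_stay - 1."""
    x, s = rng.integers(2), 0
    for _ in range(t):
        if rng.random() > p_stay:
            x = 1 - x
        s += x
    return int(s)

# Assumed graph: a cycle plus two random perfect matchings (a cheap stand-in for an expander).
n = 200
neighbors = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
for _ in range(2):
    perm = rng.permutation(n)
    for a, b in zip(perm[::2], perm[1::2]):
        neighbors[a].append(b)
        neighbors[b].append(a)

labels = (np.arange(n) < n // 2).astype(int)       # half the vertices carry label 1
t, trials = 200, 5000
sums_graph = [walk_sum_graph(neighbors, labels, t) for _ in range(trials)]
sums_chain = [walk_sum_two_state(0.7, t) for _ in range(trials)]

# Empirical total variation distance between the two distributions of the sum.
hist_g = np.bincount(sums_graph, minlength=t + 1) / trials
hist_c = np.bincount(sums_chain, minlength=t + 1) / trials
print("empirical TV distance between the two sums:", 0.5 * np.abs(hist_g - hist_c).sum())
```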
A sequence of random variables is called exchangeable if its joint distribution is invariant under permutations. The original formulation of de Finetti's theorem says that any exchangeable sequence of $\{0,1\}$-valued random variables can be thought of as a mixture of independent and identically distributed sequences in a certain precise mathematical sense. Interpreting this statement from a convex analytic perspective, Hewitt and Savage obtained the same conclusion for more general state spaces under some topological conditions. The main contribution of this paper is a new framework that explains the theorem purely as a consequence of the underlying distribution of the random variables, with no topological conditions on the state space (beyond Hausdorffness) being necessary when the distribution is Radon. We also show that it is consistent with the axioms of ZFC that de Finetti's theorem holds for all sequences of exchangeable random variables taking values in any complete metric space. The framework we use is based on nonstandard analysis. We provide a self-contained introduction to nonstandard analysis as an appendix, thus making measure-theoretic probability and point-set topology the only prerequisites for this paper. Our introduction aims to develop some new perspectives that might be of interest to mathematicians, philosophers, and mathematics educators alike. Our technical tools come from nonstandard topological measure theory, a highlight of which is a new generalization of Prokhorov's theorem. Modulo such technical tools, our proof relies on properties of the empirical measures induced by hyperfinitely many identically distributed random variables -- a feature that allows us to establish de Finetti's theorem in the generality that we seek while still retaining the combinatorial intuition of proofs of simpler versions of de Finetti's theorem.
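For orientation, the classical $\{0,1\}$-valued statement referred to above can be written out explicitly; this is the standard textbook formulation, not the paper's general Radon/nonstandard version.

```latex
% Classical de Finetti theorem for a $\{0,1\}$-valued exchangeable sequence $(X_1, X_2, \ldots)$:
% there is a probability measure $\mu$ on $[0,1]$ such that, for all $n$ and all $x_i \in \{0,1\}$,
\[
  \Pr\left(X_1 = x_1, \ldots, X_n = x_n\right)
  \;=\; \int_0^1 p^{\sum_{i} x_i} (1-p)^{\,n - \sum_{i} x_i} \, \mathrm{d}\mu(p),
\]
% i.e., conditionally on a latent parameter $p \sim \mu$, the sequence is i.i.d.\ Bernoulli$(p)$.
```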
We consider the optimization of a smooth and strongly convex objective using constant step-size stochastic gradient descent (SGD) and study its properties through the prism of Markov chains. We show that, for unbiased gradient estimates with mildly controlled variance, the iterates converge to an invariant distribution in total variation distance. We also establish this convergence in Wasserstein-2 distance under a relaxed assumption on the gradient noise distribution compared to previous work. Thanks to the invariance property of the limit distribution, our analysis shows that this distribution inherits sub-Gaussian or sub-exponential concentration properties whenever these hold for the gradient. This allows the derivation of high-confidence bounds for the final estimate. Finally, under such conditions in the linear case, we obtain a dimension-free deviation bound for the Polyak-Ruppert average of a tail sequence. All our results are non-asymptotic, and their consequences are discussed through a few applications.
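A toy numerical illustration of the constant step-size regime described above (an assumed quadratic objective with Gaussian gradient noise, not the paper's setting): the iterates fluctuate in a stationary regime around the minimizer, while the Polyak-Ruppert average of a tail of the trajectory is markedly more accurate.

```python
# Toy illustration: constant step-size SGD on a strongly convex quadratic with unbiased
# gradient noise. The iterates settle into a stationary distribution around the minimizer;
# averaging a tail of the trajectory (Polyak-Ruppert) gives a far more accurate estimate.
import numpy as np

rng = np.random.default_rng(1)
d, gamma, T = 10, 0.05, 20000          # dimension, constant step size, iterations (assumptions)
A = np.diag(np.linspace(1.0, 5.0, d))  # Hessian of f(x) = 0.5 * x^T A x, minimizer x* = 0

x = rng.normal(size=d)
iterates = np.empty((T, d))
for t in range(T):
    grad = A @ x + rng.normal(scale=0.5, size=d)   # unbiased stochastic gradient
    x = x - gamma * grad
    iterates[t] = x

tail = iterates[T // 2:]               # discard a burn-in, average the tail sequence
x_bar = tail.mean(axis=0)
print("||last iterate||  =", np.linalg.norm(iterates[-1]))   # fluctuates at scale ~ sqrt(gamma)
print("||tail average||  =", np.linalg.norm(x_bar))          # much closer to the minimizer 0
```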
The Pearson correlation coefficient is generally not invariant under common marginal transforms, but such an invariance property may hold for specific models such as independence. A bivariate random vector is said to have an invariant correlation if its Pearson correlation coefficient remains unchanged under any common marginal transforms. We characterize all models of such a random vector via a certain combination of independence and the strongest form of positive dependence, called comonotonicity. In particular, we show that the class of exchangeable copulas with invariant correlation is precisely described by what we call positive Fr\'echet copulas. We then extend the concept of invariant correlation to multi-dimensional models, and characterize the set of all invariant correlation matrices via the clique partition polytope. We also propose a positive regression dependent model which admits any prescribed invariant correlation matrix. Finally, all our characterization results for invariant correlation, except in one special case, remain the same if the common marginal transforms are confined to the set of increasing ones.
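The exchangeable case admits a quick numerical check (the sample size and the transforms below are my own illustrative choices): for a copula that mixes comonotonicity and independence with weight $\alpha$, i.e. a positive Fr\'echet copula, the Pearson correlation of the transformed pair stays at $\alpha$ for every common marginal transform with finite nonzero variance.

```python
# Numerical check: sample (U, V) from the mixture  alpha * comonotonicity + (1 - alpha) * independence
# with uniform marginals, then verify that corr(f(U), f(V)) is approximately alpha for several f.
import numpy as np

rng = np.random.default_rng(2)
alpha, n = 0.3, 1_000_000            # mixture weight and sample size (assumptions)

u = rng.random(n)
v_indep = rng.random(n)
comono = rng.random(n) < alpha       # with probability alpha use the comonotone pair (U, U)
v = np.where(comono, u, v_indep)

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

transforms = {"identity":  lambda x: x,
              "square":    lambda x: x**2,
              "exp":       lambda x: np.exp(3 * x),
              "indicator": lambda x: (x > 0.8).astype(float)}

for name, f in transforms.items():
    print(f"{name:>9s}: corr = {pearson(f(u), f(v)):.3f}  (target alpha = {alpha})")
```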
Two simple undirected graphs are cospectral if their respective adjacency matrices have the same multiset of eigenvalues. Cospectrality yields an equivalence relation on the family of graphs which is provably weaker than isomorphism. In this paper, we study cospectrality in relation to another well-studied relaxation of isomorphism, namely $k$-dimensional Weisfeiler-Leman ($k$-WL) indistinguishability. Cospectrality with respect to standard graph matrices such as the adjacency or the Laplacian matrix yields a strictly finer equivalence relation than $2$-WL indistinguishability. We show that individualising one vertex and then running $1$-WL already subsumes cospectrality with respect to all such graph matrices. Building on this result, we resolve an open problem of F\"urer (2010) about spectral invariants. Looking beyond $2$-WL, we devise a hierarchy of graph matrices generalising the adjacency matrix such that $k$-WL indistinguishability after a fixed number of iterations can be captured as a spectral condition on these matrices. More precisely, we provide a spectral characterisation of $k$-WL indistinguishability after $d$ iterations, for $k,d \in \mathbb{N}$. Our results can be viewed as characterisations of homomorphism indistinguishability over certain graph classes in terms of matrix equations. The study of homomorphism indistinguishability is an emerging field, to which we contribute by extending the algebraic framework of Man\v{c}inska and Roberson (2020) and Grohe et al. (2022).
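A small helper sketch (assumed demo code, not the paper's constructions) contrasting the two relaxations of isomorphism on the classic cospectral pair $C_4 \cup K_1$ versus the star $K_{1,4}$: it checks adjacency cospectrality numerically and runs 1-WL colour refinement on the disjoint union of the two graphs, optionally individualising one vertex of each graph; a full individualisation-refinement test would try every choice of vertex.

```python
# Demo helpers: adjacency cospectrality and 1-WL colour refinement on the disjoint union.
import numpy as np
from collections import Counter

def cospectral(A, B, tol=1e-8):
    """Same multiset of adjacency eigenvalues, up to numerical tolerance."""
    return np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(np.linalg.eigvalsh(B)), atol=tol)

def wl1_distinguishes(A, B, individualise=False):
    """Run 1-WL colour refinement on the disjoint union of the two graphs and compare the
    colour histograms of the two sides; optionally individualise vertex 0 of each graph."""
    n, m = len(A), len(B)
    adj = [[u for u in range(n) if A[v][u]] for v in range(n)] + \
          [[n + u for u in range(m) if B[v][u]] for v in range(m)]
    colours = [0] * (n + m)
    if individualise:
        colours[0] = colours[n] = 1
    for _ in range(n + m):
        sigs = [(colours[v], tuple(sorted(colours[u] for u in adj[v]))) for v in range(n + m)]
        relabel = {s: i for i, s in enumerate(sorted(set(sigs)))}
        new = [relabel[s] for s in sigs]
        if new == colours:
            break
        colours = new
    return Counter(colours[:n]) != Counter(colours[n:])

# Classic cospectral, non-isomorphic pair: C4 plus an isolated vertex vs. the star K_{1,4}.
A = np.array([[0,1,0,1,0],[1,0,1,0,0],[0,1,0,1,0],[1,0,1,0,0],[0,0,0,0,0]])
B = np.array([[0,1,1,1,1],[1,0,0,0,0],[1,0,0,0,0],[1,0,0,0,0],[1,0,0,0,0]])
print(cospectral(A, B))           # True  -- identical adjacency spectra
print(wl1_distinguishes(A, B))    # True  -- already separated by plain 1-WL (degree sequences differ)
```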
A coupled hybridizable discontinuous Galerkin (HDG) and boundary integral (BI) method is proposed to efficiently analyze electromagnetic scattering from inhomogeneous/composite objects. The coupling between the HDG and the BI equations is realized using the numerical flux operating on the equivalent current and the global unknown of the HDG. This approach yields sparse coupling matrices upon discretization. Inclusion of the BI equation ensures that the only error in enforcing the radiation conditions is the discretization error. However, the discretization of this equation yields a dense matrix, which prohibits the use of a direct matrix solver on the overall coupled system, as is often done with traditional HDG schemes. To overcome this bottleneck, a "hybrid" method is developed. This method uses an iterative scheme to solve the overall coupled system, but within the matrix-vector multiplication subroutine of the iterations, the inverse of the HDG matrix is applied efficiently using a sparse direct matrix solver. The same subroutine also uses the multilevel fast multipole algorithm to accelerate the multiplication of the guess vector with the dense BI matrix. Numerical results demonstrate the accuracy, the efficiency, and the applicability of the proposed HDG-BI solver.
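A schematic linear-algebra sketch of the "hybrid" solution strategy (with random stand-in matrices; the actual solver works on HDG/BI discretizations and uses MLFMA rather than a dense matrix product): the sparse HDG block is factorized once with a sparse direct solver, and a Krylov method is applied to the reduced system, with the HDG inverse and the dense BI block applied inside each matrix-vector product.

```python
# Schematic sketch with stand-in matrices: solve the coupled block system
#   [K  C] [u]   [f]
#   [D  Z] [j] = [g]
# where K (sparse, "HDG") is factorized once and Z (dense, "BI") is only ever applied,
# by running GMRES on the Schur complement S = Z - D K^{-1} C.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, LinearOperator, gmres

rng = np.random.default_rng(3)
n_hdg, n_bi = 2000, 300                               # illustrative sizes
K = (sp.eye(n_hdg, format="csc") * 4
     + sp.random(n_hdg, n_hdg, density=1e-3, random_state=3, format="csc")).tocsc()
C = sp.random(n_hdg, n_bi, density=1e-2, random_state=4, format="csc")
D = sp.random(n_bi, n_hdg, density=1e-2, random_state=5, format="csc")
Z = np.eye(n_bi) + 0.01 * rng.standard_normal((n_bi, n_bi))   # dense BI block (stand-in)
f = rng.standard_normal(n_hdg)
g = rng.standard_normal(n_bi)

lu = splu(K)                                          # sparse direct factorization, done once

def schur_matvec(j):
    # Dense BI apply (a plain matmul here, where MLFMA would be used) plus two sparse solves.
    return Z @ j - D @ lu.solve(C @ j)

S = LinearOperator((n_bi, n_bi), matvec=schur_matvec)
rhs = g - D @ lu.solve(f)
j, info = gmres(S, rhs)
u = lu.solve(f - C @ j)                               # recover the "HDG" unknowns
print("GMRES converged:", info == 0)
```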
We examine the problem of variance components testing in general mixed-effects models using the likelihood ratio test. We account for the presence of nuisance parameters, i.e., the fact that some untested variances might also be equal to zero. Two main issues arise in this context, leading to a non-regular setting. First, under the null hypothesis the true parameter value lies on the boundary of the parameter space. Moreover, due to the presence of nuisance parameters, the exact location of these boundary points is not known, which prevents the use of classical asymptotic theory for maximum likelihood estimation. Second, in the specific context of nonlinear mixed-effects models, the Fisher information matrix is singular at the true parameter value. We address these two points by proposing a shrunken parametric bootstrap procedure, which is straightforward to apply even for nonlinear models. We show that the procedure is consistent, solving both the boundary and the singularity issues, and we provide a verifiable criterion for the applicability of our theoretical results. We show through a simulation study that, compared to the asymptotic approach, our procedure has better small-sample performance and is more robust to the presence of nuisance parameters. A real data application is also provided.
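A minimal parametric-bootstrap sketch of the boundary issue, in a deliberately simplified setting of my own choosing (a balanced one-way random-effects model with no nuisance variance components, so the paper's shrinkage step is not exercised): the likelihood ratio statistic for testing a zero variance component is calibrated against data simulated under the fitted null model.

```python
# Parametric-bootstrap LRT for H0: sigma_b^2 = 0 in  y_ij = mu + b_i + e_ij (simplified sketch).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
m, k = 30, 5                                    # groups, replicates per group (assumptions)
y = 1.0 + 0.4 * rng.standard_normal((m, 1)) + rng.standard_normal((m, k))  # true sigma_b = 0.4

def neg_loglik(params, y):
    mu, log_s2, log_s2b = params                # log-parameterization keeps variances positive
    s2, s2b = np.exp(log_s2), np.exp(log_s2b)
    cov = s2 * np.eye(k) + s2b * np.ones((k, k))
    return -multivariate_normal(mean=np.full(k, mu), cov=cov).logpdf(y).sum()

def lrt_stat(y):
    full = minimize(neg_loglik, x0=[y.mean(), 0.0, -2.0], args=(y,), method="Nelder-Mead")
    null = -np.sum(multivariate_normal(mean=np.full(k, y.mean()),
                                       cov=y.var() * np.eye(k)).logpdf(y))
    return max(0.0, 2.0 * (null - full.fun))

obs = lrt_stat(y)

# Parametric bootstrap: simulate under H0 with the null-model estimates plugged in and refit.
B = 200
mu0, s20 = y.mean(), y.var()
boot = np.array([lrt_stat(mu0 + np.sqrt(s20) * rng.standard_normal((m, k))) for _ in range(B)])
print("LRT statistic:", round(obs, 2), " bootstrap p-value:", np.mean(boot >= obs))
```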
Non-overlapping codes have been studied for almost 60 years. In such a code, no proper, non-empty prefix of any codeword is a suffix of any codeword. In this paper, we study codes in which overlaps of certain specified sizes are forbidden. We prove some general bounds and we give several constructions in the case of binary codes. Our techniques also allow us to provide an alternative, elementary proof of a lower bound on non-overlapping codes due to Levenshtein in 1964.
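A small checker sketch (my own helper, not taken from the paper) makes the two notions concrete: with `sizes=None` it tests the classical non-overlapping condition, and with an explicit set of sizes it tests the relaxed condition in which only overlaps of the specified sizes are forbidden.

```python
def has_forbidden_overlap(code, sizes=None):
    """True if some proper, non-empty prefix of a codeword, of a forbidden size, equals a
    suffix of a (possibly identical) codeword; sizes=None forbids every size, i.e. the
    classical non-overlapping condition."""
    code = list(code)
    for w in code:
        for x in code:
            for s in range(1, min(len(w), len(x))):
                if (sizes is None or s in sizes) and w[:s] == x[-s:]:
                    return True
    return False

print(has_forbidden_overlap({"0111"}))                      # False: a (trivial) non-overlapping code
print(has_forbidden_overlap({"0111", "0011"}))              # True: "011" is a prefix of 0111 and a suffix of 0011
print(has_forbidden_overlap({"0111", "0011"}, sizes={2}))   # False: only overlaps of size 2 are forbidden here
```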
We consider multiple testing in a general regression framework aimed at studying the relationship between a univariate response and a $p$-dimensional predictor. To test the hypothesis that each individual predictor has no effect, we construct an Angular Balanced Statistic (ABS) based on the sliced inverse regression estimator, without assuming a model for the conditional distribution of the response. Using the limiting distribution results developed in this paper, we show that ABS is asymptotically symmetric about zero under the null hypothesis. We then propose a Model-free multiple Testing procedure using Angular balanced statistics (MTA) and show theoretically that the false discovery rate of this method is asymptotically bounded by a designated level. Numerical evidence shows that the MTA method is considerably more powerful than its alternatives while controlling the false discovery rate.
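A generic illustration of how symmetry about zero under the null can be turned into finite-sample FDR control (this is the standard mirror/symmetry thresholding idea, not the paper's ABS construction or its exact MTA procedure):

```python
# When a statistic W_j is symmetric about zero under the null and tends to be large and
# positive under the alternative, the false discovery proportion of {j : W_j >= t} can be
# estimated by #{j : W_j <= -t} / #{j : W_j >= t}; pick the smallest t keeping this below q.
import numpy as np

def symmetry_based_selection(W, q=0.1):
    """Return indices selected at estimated FDR level q from symmetric-null statistics W."""
    W = np.asarray(W, dtype=float)
    for t in np.sort(np.abs(W[W != 0])):                  # smallest threshold that works
        fdp_hat = (np.sum(W <= -t) + 1) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)

# Toy data: 900 nulls (symmetric noise) and 100 signals shifted to the right.
rng = np.random.default_rng(5)
W = np.concatenate([rng.standard_normal(900), 3.0 + rng.standard_normal(100)])
selected = symmetry_based_selection(W, q=0.1)
false = np.sum(selected < 900)
print(f"selected {selected.size} predictors, {false} of them true nulls")
```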
In privacy under continual observation, we study how to release differentially private estimates based on a dataset that evolves over time. The problem of releasing private prefix sums of $x_1,x_2,x_3,\dots \in\{0,1\}$ (where the value of each $x_i$ is to be kept private) is particularly well studied, and a generalized form is used in state-of-the-art methods for private stochastic gradient descent (SGD). The seminal binary mechanism privately releases the first $t$ prefix sums with noise of variance polylogarithmic in $t$. Recently, Henzinger et al. and Denisov et al. showed that it is possible to improve on the binary mechanism in two ways: the variance of the noise can be reduced by a (large) constant factor, and it can also be made more even across time steps. However, their algorithms for generating the noise distribution are not as efficient as one would like in terms of computation time and (in particular) space. We address the efficiency problem by presenting a simple alternative to the binary mechanism in which 1) generating the noise takes constant average time per value, 2) the variance is reduced by a factor of about 4 compared to the binary mechanism, and 3) the noise distribution at each step is identical. Empirically, a simple Python implementation of our approach outperforms, in running time, the approach of Henzinger et al., as well as an attempt to improve their algorithm using high-performance algorithms for multiplication with Toeplitz matrices.
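For reference, a compact sketch of the classic binary mechanism mentioned above (the standard streaming algorithm; the proposed constant-time alternative with identical per-step noise is not reproduced here):

```python
# Binary mechanism sketch: each dyadic block of inputs receives one Laplace noise, and the
# released prefix sum at time t adds the noisy block sums corresponding to the 1-bits of t.
import numpy as np

def binary_mechanism(x, eps):
    """Differentially private running prefix sums of a 0/1 stream x (simplified sketch)."""
    rng = np.random.default_rng()
    T = len(x)
    levels = int(np.ceil(np.log2(T + 1))) + 1
    scale = levels / eps                       # each x_t touches at most `levels` block sums
    alpha = np.zeros(levels)                   # exact block sums, one per level
    alpha_noisy = np.zeros(levels)             # their noisy counterparts
    out = []
    for t in range(1, T + 1):
        i = (t & -t).bit_length() - 1          # lowest set bit: level of the block completed at time t
        alpha[i] = alpha[:i].sum() + x[t - 1]  # merge lower-level blocks into the new level-i block
        alpha[:i] = 0
        alpha_noisy[i] = alpha[i] + rng.laplace(scale=scale)
        alpha_noisy[:i] = 0
        # prefix sum estimate = sum of noisy blocks at the levels given by the 1-bits of t
        out.append(sum(alpha_noisy[j] for j in range(levels) if (t >> j) & 1))
    return out

stream = np.random.default_rng(6).integers(0, 2, size=100)
private_sums = binary_mechanism(stream, eps=1.0)
print("true prefix sum:", stream.sum(), " private estimate:", round(private_sums[-1], 1))
```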
Graph Neural Networks (GNNs) have been successfully used in many problems involving graph-structured data, achieving state-of-the-art performance. GNNs typically employ a message-passing scheme, in which every node aggregates information from its neighbors using a permutation-invariant aggregation function. Standard, well-examined choices such as the mean or sum aggregation functions have limited capabilities, as they are not able to capture interactions among neighbors. In this work, we formalize these interactions using an information-theoretic framework that notably includes synergistic information. Driven by this definition, we introduce the Graph Ordering Attention (GOAT) layer, a novel GNN component that captures interactions between nodes in a neighborhood. This is achieved by learning local node orderings via an attention mechanism and processing the ordered representations using a recurrent neural network aggregator. This design allows us to make use of a permutation-sensitive aggregator while maintaining the permutation equivariance of the proposed GOAT layer. The GOAT model demonstrates improved performance in modeling graph metrics that capture complex information, such as the betweenness centrality and the effective size of a node. In practical use cases, this superior modeling capability is confirmed by its success on several real-world node classification benchmarks.
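A schematic numpy sketch of the ordering-plus-recurrence idea (a simplification with a plain tanh RNN and dot-product scoring; not the authors' GOAT implementation): neighbours are ordered by a content-based attention score and consumed sequentially, so the aggregator itself is permutation sensitive while the resulting message does not depend on how the neighbour list was presented.

```python
# Simplified ordering-attention aggregation: score neighbours against the central node,
# sort by score, then run a plain RNN over the ordered neighbour features.
import numpy as np

rng = np.random.default_rng(7)
d_in, d_hid = 8, 16
W_q, W_k = rng.standard_normal((d_in, d_in)), rng.standard_normal((d_in, d_in))
W_xh, W_hh = 0.1 * rng.standard_normal((d_in, d_hid)), 0.1 * rng.standard_normal((d_hid, d_hid))

def goat_like_aggregate(h_center, h_neighbors):
    """Order neighbours by attention score, then aggregate with a simple tanh RNN."""
    scores = (h_neighbors @ W_k) @ (W_q @ h_center)        # one score per neighbour
    order = np.argsort(-scores)                            # highest-scoring neighbour first
    h = np.zeros(d_hid)
    for x in h_neighbors[order]:                           # permutation-sensitive recurrence
        h = np.tanh(x @ W_xh + h @ W_hh)
    return h                                               # aggregated message for the central node

h_center = rng.standard_normal(d_in)
h_neighbors = rng.standard_normal((5, d_in))
msg = goat_like_aggregate(h_center, h_neighbors)
msg_perm = goat_like_aggregate(h_center, h_neighbors[rng.permutation(5)])
print("invariant to neighbour input order:", np.allclose(msg, msg_perm))
```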