We investigate the statistical task of closeness (or equivalence) testing for multidimensional distributions. Specifically, given sample access to two unknown distributions $\mathbf p, \mathbf q$ on $\mathbb R^d$, we want to distinguish between the cases $\mathbf p=\mathbf q$ and $\|\mathbf p-\mathbf q\|_{A_k} > \epsilon$, where $\|\mathbf p-\mathbf q\|_{A_k}$ denotes the generalized $A_k$ distance between $\mathbf p$ and $\mathbf q$ -- measuring the maximum discrepancy between the distributions over any collection of $k$ disjoint, axis-aligned rectangles. Our main result is the first closeness tester for this problem with {\em sub-learning} sample complexity in any fixed dimension, together with a nearly-matching sample complexity lower bound. In more detail, we provide a computationally efficient closeness tester with sample complexity $O\left((k^{6/7}/\mathrm{poly}_d(\epsilon)) \log^d(k)\right)$. On the lower bound side, we establish a qualitatively matching sample complexity lower bound of $\Omega(k^{6/7}/\mathrm{poly}(\epsilon))$, even for $d=2$. These sample complexity bounds are surprising because the sample complexity of the problem in the univariate setting is $\Theta(k^{4/5}/\mathrm{poly}(\epsilon))$. This has the interesting consequence that the jump from one to two dimensions leads to a substantial increase in sample complexity, while increases beyond that do not. As a corollary of our general $A_k$ tester, we obtain $d_{\mathrm{TV}}$-closeness testers for pairs of $k$-histograms on $\mathbb R^d$ over a common unknown partition, and for pairs of uniform distributions supported on the union of $k$ unknown disjoint axis-aligned rectangles. Both our algorithm and our lower bound make essential use of tools from Ramsey theory.
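To make the $A_k$ distance concrete, one natural formalization consistent with the verbal description above (the paper's precise definition may differ in details) is
\[
\|\mathbf p-\mathbf q\|_{A_k} \;=\; \sup_{R_1,\dots,R_k}\,\Bigl|\,\mathbf p\Bigl(\textstyle\bigcup_{i=1}^{k} R_i\Bigr)-\mathbf q\Bigl(\textstyle\bigcup_{i=1}^{k} R_i\Bigr)\Bigr|,
\]
where the supremum ranges over all collections of $k$ pairwise disjoint, axis-aligned rectangles $R_1,\dots,R_k \subseteq \mathbb R^d$. For $d=1$ the rectangles are intervals, recovering the classical $A_k$ distance whose testing sample complexity $\Theta(k^{4/5}/\mathrm{poly}(\epsilon))$ is quoted above.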
We study the deterministic matrix completion problem, i.e., recovering a low-rank matrix from a few observed entries where the sampling set is chosen as the edge set of a Ramanujan graph. We first investigate projected gradient descent (PGD) applied to a Burer-Monteiro least-squares formulation and show that, under a benign initialization and sufficiently many samples, it converges linearly to the incoherent ground truth at a rate depending on the condition number $\kappa$ of the ground truth. We next apply a scaled variant of PGD to handle the ill-conditioned case in which $\kappa$ is large, and show that, under similar conditions, the algorithm converges at a linear rate independent of $\kappa$. Finally, we provide numerical experiments to corroborate our results.
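As a rough illustration of the algorithmic setup (not the paper's exact algorithm: the step size, projection radius, initialization, and input format below are placeholder choices), a PGD iteration on a Burer-Monteiro factorization with a row-norm projection might look like:
\begin{verbatim}
import numpy as np

def pgd_burer_monteiro(M_obs, mask, r, step=0.01, radius=1.0, iters=500):
    """Sketch of projected gradient descent on a Burer-Monteiro factorization.

    M_obs : observed matrix values (zeros off the sampling set)
    mask  : boolean array, True on observed entries (e.g. edges of a Ramanujan graph)
    r     : target rank
    Step size, projection radius, and random initialization are illustrative
    placeholders, not the paper's prescribed choices.
    """
    m, n = M_obs.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r)) / np.sqrt(r)
    V = rng.standard_normal((n, r)) / np.sqrt(r)
    for _ in range(iters):
        R = (U @ V.T - M_obs) * mask        # residual on observed entries only
        U_new = U - step * (R @ V)          # gradient step in U
        V_new = V - step * (R.T @ U)        # gradient step in V
        # project each row onto a Euclidean ball to enforce incoherence
        for X in (U_new, V_new):
            norms = np.linalg.norm(X, axis=1, keepdims=True)
            np.clip(norms / radius, 1.0, None, out=norms)
            X /= norms
        U, V = U_new, V_new
    return U, V
\end{verbatim}
The scaled variant mentioned above would rescale the gradient steps by suitable Gram-matrix factors to remove the dependence on $\kappa$; that preconditioning is omitted from this sketch.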
A recurring challenge in the application of redistricting simulation algorithms lies in extracting useful summaries and comparisons from a large ensemble of districting plans. Researchers often compute summary statistics for each district in a plan, and then study their distribution across the plans in the ensemble. This approach discards rich geographic information that is inherent in districting plans. We introduce the projective average, an operation that projects a district-level summary statistic back to the underlying geography and then averages this statistic across plans in the ensemble. Compared to traditional district-level summaries, projective averages are a powerful tool for geographically granular, sub-district analysis of districting plans along a variety of dimensions. However, care must be taken to account for variation within redistricting ensembles, to avoid misleading conclusions. We propose and validate a multiple-testing procedure to control the probability of incorrectly identifying outlier plans or regions when using projective averages.
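A minimal sketch of the projective-average computation described above (the array-based input format and variable names are hypothetical; real redistricting pipelines operate on precinct shapefiles or graphs):
\begin{verbatim}
import numpy as np

def projective_average(assignments, district_stats):
    """Project district-level statistics back to geographic units, averaging over plans.

    assignments    : (n_plans, n_units) int array; assignments[p, u] is the district
                     label of geographic unit u under plan p (hypothetical format).
    district_stats : (n_plans, n_districts) float array; district_stats[p, d] is the
                     district-level summary statistic of district d under plan p.
    Returns an (n_units,) array: for each unit, the statistic of its containing
    district, averaged across all plans in the ensemble.
    """
    # For each plan, pull the statistic of the district containing each unit ...
    per_plan = np.take_along_axis(district_stats, assignments, axis=1)
    # ... then average across the plans of the ensemble.
    return per_plan.mean(axis=0)
\end{verbatim}
The multiple-testing procedure proposed in the paper would then be applied on top of such unit-level averages to flag outlier plans or regions.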
We address the problem of checking the satisfiability of a set of constrained Horn clauses (CHCs) that may include more than one query. We propose a transformation technique that takes as input a set of CHCs, including a set of queries, and returns as output a new set of CHCs that is satisfiable if and only if the original set is. In each new query, the transformed CHCs incorporate suitable information coming from the other queries, so that the CHC satisfiability algorithm can exploit the relationships among all queries. We show that our technique is effective on a non-trivial benchmark of CHC sets encoding many verification problems for programs that manipulate algebraic data types such as lists and trees.
Comparing spatial data sets is a ubiquitous task in data analysis; however, the presence of spatial autocorrelation means that standard estimates of variance will be wrong and will tend to over-estimate the statistical significance of correlations and other observations. While there are a number of existing approaches to this problem, none is ideal: they require either detailed analytical calculations, which are hard to generalise, or detailed knowledge of the data-generating process, which may not be available. In this work we propose a resampling approach based on Tobler's Law. By resampling the data with fixed spatial autocorrelation, as measured by Moran's I, we generate a more realistic null model. Testing on real and synthetic data, we find that, as long as the spatial autocorrelation is not too strong, this approach works just as well as if we knew the data-generating process.
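For reference, Moran's I, the autocorrelation measure used above, can be computed as follows (a generic implementation with a user-supplied spatial weights matrix; the paper's resampling scheme that holds I approximately fixed is not reproduced here):
\begin{verbatim}
import numpy as np

def morans_i(x, W):
    """Moran's I spatial autocorrelation of values x under spatial weights W.

    x : (n,) array of observations at n spatial units
    W : (n, n) array of spatial weights (e.g. contiguity or inverse-distance weights)
    """
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                      # center the observations
    s0 = W.sum()                          # total weight
    return (len(x) / s0) * (z @ W @ z) / (z @ z)
\end{verbatim}
A null distribution with the same Moran's I as the observed data can then be used in place of the naive independent-permutation null, which is what inflates significance when autocorrelation is present.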
In this work, simulation-based equations for calculating the propagation constant in uniform or periodic structures (the FPPS) are deduced and verified through simulations of various types of structures. The modeling of these structures is essentially based on field distributions from a driven-mode solver, and these field distributions are used as the input to the FPPS. The FPPS separates the forward and backward waves from the total wave inside a uniform or periodic structure, and thus can be used to calculate propagation constants in both uniform and periodic structures even in the presence of strong reflections. To test its performance, the FPPS has been applied to a variety of typical structures, including uniform waveguides, lossless closed structures, lossy closed structures, and open radiating structures, and its results have been compared with those of eigenmode solvers, equivalent network methods, and spectral-domain integral equation methods. The comparison shows the easy-to-use and adaptable nature of the FPPS. The FPPS could also be applied to open radiating structures, and even to multi-dimensional periodic/uniform structures.
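The wave-separation step can be illustrated with the standard two-wave model (a generic textbook formulation; the FPPS equations derived in the paper may differ). Writing the total field along the propagation axis as
\[
E(z) \;=\; A\,e^{-\gamma z} + B\,e^{+\gamma z},
\]
field samples at three equally spaced planes $z_0-\Delta$, $z_0$, $z_0+\Delta$ satisfy
\[
\cosh(\gamma\Delta) \;=\; \frac{E(z_0-\Delta)+E(z_0+\Delta)}{2\,E(z_0)},
\]
which yields the propagation constant $\gamma$ even when the backward-wave amplitude $B$ is not negligible (for a periodic structure, $\Delta$ is taken as one period and $\gamma$ is the Bloch propagation constant).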
Generative models, an important family of statistical models, aim to learn the observed data distribution so as to generate new instances. With the rise of neural networks, deep generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs) have made tremendous progress in 2D image synthesis. Recently, researchers have shifted their attention from the 2D space to the 3D space, since 3D data aligns better with our physical world and hence holds great practical potential. However, unlike a 2D image, which has a natural and efficient representation (i.e., the pixel grid), representing 3D data poses far greater challenges. Concretely, an ideal 3D representation should be expressive enough to model shapes and appearances in detail, and highly efficient, so that high-resolution data can be modeled quickly and with low memory cost. However, existing 3D representations, such as point clouds, meshes, and the more recent neural fields, usually fail to meet these requirements simultaneously. In this survey, we provide a thorough review of the development of 3D generation, covering 3D shape generation and 3D-aware image synthesis, from the perspective of both algorithms and, more importantly, representations. We hope that our discussion helps the community track the evolution of this field and sparks innovative ideas to advance this challenging task.
We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
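A minimal sketch of a Monte Carlo Shapley-value estimate for scoring one candidate subgraph (the function names, masking scheme, and sampling budget are illustrative assumptions, not the SubgraphX implementation; SubgraphX pairs a score of this kind with Monte Carlo tree search over subgraphs):
\begin{verbatim}
import numpy as np

def shapley_subgraph_score(model_predict, all_nodes, subgraph_nodes,
                           n_samples=100, seed=0):
    """Monte Carlo estimate of the Shapley value of a candidate explanation subgraph.

    model_predict(kept_nodes) -> float : the GNN's output (e.g. predicted class
        probability) when only `kept_nodes` are retained and the rest are masked.
        How masking is done (zeroed features, node removal, ...) is a design choice.
    """
    rng = np.random.default_rng(seed)
    sub = list(subgraph_nodes)
    others = [v for v in all_nodes if v not in set(sub)]
    contributions = []
    for _ in range(n_samples):
        order = list(others)
        rng.shuffle(order)                    # random order of the other "players"
        cut = rng.integers(0, len(order) + 1) # uniform insertion point of the subgraph
        coalition = order[:cut]
        # marginal contribution of the whole subgraph to this coalition
        marginal = model_predict(coalition + sub) - model_predict(coalition)
        contributions.append(marginal)
    return float(np.mean(contributions))
\end{verbatim}
Sampling a random ordering and a uniform insertion point reproduces the Shapley weighting over coalition sizes, which is why the simple average of marginal contributions is an unbiased estimate.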
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of the network weights to a smooth function of the layer index as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
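One simple diagnostic of the kind alluded to above, offered here only as an illustration and not as the paper's experimental protocol, is to track how the layer-to-layer increments of trained residual-block weights behave as the depth $L$ grows:
\begin{verbatim}
import numpy as np

def weight_increment_scaling(weights_by_depth):
    """Mean norm of successive-layer weight differences, per trained depth L.

    weights_by_depth : dict mapping depth L to a list [W_1, ..., W_L] of trained
        residual-block weight arrays (how these are obtained is up to the user;
        this routine is an illustrative diagnostic only).
    If the weights behave as a smooth function of the layer index (the neural-ODE
    picture), the returned values should shrink as L increases.
    """
    out = {}
    for L, ws in weights_by_depth.items():
        diffs = [np.linalg.norm(ws[k + 1] - ws[k]) for k in range(len(ws) - 1)]
        out[L] = float(np.mean(diffs))
    return out
\end{verbatim}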
We describe the new field of mathematical analysis of deep learning. This field emerged around a list of research questions that were not answered within the classical framework of learning theory. These questions concern the outstanding generalization power of overparametrized neural networks, the role of depth in deep architectures, the apparent absence of the curse of dimensionality, the surprisingly successful optimization performance despite the non-convexity of the problem, the question of what features are learned, why deep architectures perform exceptionally well in physical problems, and which fine aspects of an architecture affect the behavior of a learning task and in what way. We present an overview of modern approaches that yield partial answers to these questions. For selected approaches, we describe the main ideas in more detail.
Image segmentation is an important component of many image understanding systems. It aims to group pixels in a spatially and perceptually coherent manner. Typically, segmentation algorithms have a collection of parameters that control the degree of over-segmentation produced, and properly selecting these parameters to match human-like perceptual grouping remains a challenge. In this work, we exploit the diversity of segments produced by different choices of parameters. We scan the segmentation parameter space and generate a collection of image segmentation hypotheses (from highly over-segmented to under-segmented). These are fed into a cost minimization framework that produces the final segmentation by selecting segments that: (1) better describe the natural contours of the image, and (2) are more stable and persistent among all the segmentation hypotheses. We compare our algorithm's performance against state-of-the-art algorithms and show improved results. We also show that our framework is robust to the choice of segmentation kernel that produces the initial set of hypotheses.
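As a toy illustration of the "stability and persistence" ingredient of the selection cost described above (a heavily simplified stand-in; the contour-agreement term and the covering constraint of the full cost-minimization framework are omitted, and the scoring is a placeholder):
\begin{verbatim}
import numpy as np

def segment_stability(hypotheses):
    """Score each segment of each hypothesis by its persistence across hypotheses.

    hypotheses : list of (H, W) integer label maps produced with different
        parameter settings. Each segment is scored by its best IoU against the
        segments of every other hypothesis, averaged; persistent segments score
        close to 1. Returns {(hypothesis_index, label): mean best-IoU}.
    """
    scores = {}
    for i, seg in enumerate(hypotheses):
        for lab in np.unique(seg):
            mask = seg == lab
            ious = []
            for j, other in enumerate(hypotheses):
                if j == i:
                    continue
                best = 0.0
                for lab2 in np.unique(other[mask]):   # only overlapping segments matter
                    m2 = other == lab2
                    inter = np.logical_and(mask, m2).sum()
                    union = np.logical_or(mask, m2).sum()
                    best = max(best, inter / union)
                ious.append(best)
            scores[(i, int(lab))] = float(np.mean(ious)) if ious else 1.0
    return scores
\end{verbatim}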