A seminal work of [Ahn-Guha-McGregor, PODS'12] showed that one can compute a cut sparsifier of an unweighted undirected graph by taking a near-linear number of linear measurements on the graph. Subsequent works also studied computing other graph sparsifiers using linear sketching, and obtained near-linear upper bounds for spectral sparsifiers [Kapralov-Lee-Musco-Musco-Sidford, FOCS'14] and the first non-trivial upper bounds for spanners [Filtser-Kapralov-Nouri, SODA'21]. All these linear sketching algorithms, however, only work on unweighted graphs. In this paper, we initiate the study of weighted graph sparsification by linear sketching by investigating a natural class of linear sketches that we call incidence sketches, in which each measurement is a linear combination of the weights of edges incident to a single vertex. Our results are: 1. Weighted cut sparsification: We give an algorithm that computes a $(1 + \epsilon)$-cut sparsifier using $\tilde{O}(n \epsilon^{-3})$ linear measurements, which is nearly optimal. 2. Weighted spectral sparsification: We give an algorithm that computes a $(1 + \epsilon)$-spectral sparsifier using $\tilde{O}(n^{6/5} \epsilon^{-4})$ linear measurements. Complementing our algorithm, we then prove a superlinear lower bound of $\Omega(n^{21/20-o(1)})$ measurements for computing some $O(1)$-spectral sparsifier using incidence sketches. 3. Weighted spanner computation: We focus on graphs whose largest/smallest edge weights differ by an $O(1)$ factor, and prove that, for incidence sketches, the upper bounds obtained by~[Filtser-Kapralov-Nouri, SODA'21] are optimal up to an $n^{o(1)}$ factor.
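To make the measurement model concrete, here is a minimal sketch of what a single incidence-sketch measurement looks like on a toy weighted graph; the edge list and coefficient choice are our illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weighted graph: edge list (u, v, weight).
edges = [(0, 1, 2.0), (1, 2, 5.0), (0, 2, 1.5), (2, 3, 4.0)]
n = 4

def incidence_measurement(vertex, coeffs, edges):
    """One incidence-sketch measurement: a linear combination of the
    weights of the edges incident to a single vertex."""
    out = 0.0
    for i, (u, v, w) in enumerate(edges):
        if vertex in (u, v):
            out += coeffs[i] * w
    return out

# A full incidence sketch takes several such measurements per vertex,
# with coefficients chosen independently of the (unknown) edge weights.
coeffs = rng.standard_normal(len(edges))
m = incidence_measurement(2, coeffs, edges)
```

With all-ones coefficients, the measurement at a vertex is simply its weighted degree, which illustrates why $\tilde{O}(n)$ such measurements are a natural budget.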
The long-range and low-energy-consumption requirements of Internet of Things (IoT) applications have led to a new class of wireless communication technologies known as Low Power Wide Area Networks (LPWANs). In recent years, the Long Range (LoRa) protocol has gained considerable attention as one of the most promising LPWAN technologies. Choosing the right combination of transmission parameters is a major challenge in LoRa networks. In LoRa, an Adaptive Data Rate (ADR) mechanism configures each End Device's (ED) transmission parameters with the aim of improving performance metrics. In this paper, we propose a link-based ADR approach that configures the transmission parameters of EDs without relying on the history of the last received packets, resulting in an approach with relatively low space complexity. We present four scenarios for assessing performance, including one with mobile EDs. Our simulation results show that in a mobile scenario with high channel noise, the Packet Delivery Ratio (PDR) of our proposed algorithm is 2.8 times that of the original ADR and 1.35 times that of other relevant algorithms.
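To illustrate the flavor of a history-free decision, here is a hypothetical single-packet ADR rule. The per-SF SNR thresholds are the standard LoRa demodulation floors, but the decision logic below is our simplified reading of a "link-based" rule, not the paper's algorithm: unlike the standard network-side ADR, which stores the SNRs of the last 20 uplinks, it decides from the latest SNR alone, so it needs only O(1) state per ED.

```python
# LoRa demodulation-floor SNR (dB) per spreading factor (standard values).
REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def link_based_adr(snr_db, sf, tx_power_dbm, device_margin_db=10.0):
    """Hypothetical history-free ADR: decide SF/power from one SNR sample."""
    margin = snr_db - REQUIRED_SNR[sf] - device_margin_db
    nstep = int(margin // 3)  # one "step" per 3 dB of spare margin
    while nstep > 0 and sf > 7:            # spare margin: raise data rate first
        sf -= 1; nstep -= 1
    while nstep > 0 and tx_power_dbm > 2:  # then save energy
        tx_power_dbm -= 3; nstep -= 1
    while nstep < 0 and tx_power_dbm < 14:  # deficit: raise transmit power
        tx_power_dbm += 3; nstep += 1
    return sf, tx_power_dbm
```

For example, an ED at SF12 that reports 0 dB SNR has 10 dB of spare margin and is moved to SF9, while a noisy link is pushed back toward maximum power.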
We consider optimization problems in which the goal is to find a $k$-dimensional subspace of $\mathbb{R}^n$, $k \ll n$, which minimizes a convex and smooth loss. Such problems generalize the fundamental task of principal component analysis (PCA) to include robust and sparse counterparts, logistic PCA for binary data, and others. This problem can be approached either via nonconvex gradient methods, which have highly efficient iterations but for which fast convergence to a global minimizer is difficult to argue, or via a convex relaxation, for which convergence to a global minimizer is straightforward to argue but whose methods are often inefficient in high dimensions. In this work we bridge these two approaches under a strict complementarity assumption, which in particular implies that the optimal solution to the convex relaxation is unique and is also the optimal solution to the original nonconvex problem. Our main result is a proof that a natural nonconvex gradient method, which is \textit{SVD-free} and requires only a single QR-factorization of an $n\times k$ matrix per iteration, converges locally with a linear rate. We also establish linear convergence results for the nonconvex projected gradient method, and for the Frank-Wolfe method when applied to the convex relaxation.
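As a minimal sketch of the SVD-free scheme, consider plain PCA as the smooth loss: a Euclidean gradient step followed by a single QR factorization of an $n\times k$ matrix per iteration to restore orthonormality. The step size, iteration count, and loss are our illustrative choices, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3
M = rng.standard_normal((n, n))
A = M @ M.T  # PSD data matrix; PCA seeks its top-k eigenspace

# PCA as a smooth loss over n x k orthonormal U: minimize f(U) = -tr(U' A U).
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
eta = 1.0 / np.linalg.norm(A)  # conservative step size (Frobenius norm)
for _ in range(500):
    G = -2 * A @ U                    # Euclidean gradient of f at U
    U, _ = np.linalg.qr(U - eta * G)  # single QR retraction, no SVD

# Fraction of the top-k spectral mass captured by the iterate.
captured = np.trace(U.T @ A @ U) / np.sum(np.linalg.eigvalsh(A)[-k:])
```

Each iteration costs one matrix product and one thin QR, which is the efficiency the abstract contrasts against SVD-based convex solvers.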
Multi-view anchor graph clustering selects representative anchors to avoid computing full pairwise similarities and thereby reduces the complexity of graph methods. Although widely applied in large-scale settings, existing approaches do not pay sufficient attention to establishing correct correspondences between the anchor sets across views. Specifically, anchor graphs obtained from different views are not aligned column-wise. Such an \textbf{A}nchor-\textbf{U}naligned \textbf{P}roblem (AUP) causes inaccurate graph fusion and degrades clustering performance. In multi-view scenarios, generating correct correspondences is especially difficult since the anchors are not consistent in feature dimensions. To solve this challenging issue, we propose the first generalized and flexible anchor graph fusion framework, termed \textbf{F}ast \textbf{M}ulti-\textbf{V}iew \textbf{A}nchor-\textbf{C}orrespondence \textbf{C}lustering (FMVACC). Specifically, we show how to find anchor correspondences using both feature and structure information, after which anchor graph fusion is performed column-wise. Moreover, we theoretically show the connection between FMVACC and existing multi-view late fusion \cite{liu2018late} and partial view-aligned clustering \cite{huang2020partially}, which further demonstrates our generality. Extensive experiments on seven benchmark datasets demonstrate the effectiveness and efficiency of our proposed method. Moreover, the proposed alignment module also yields significant performance improvements when applied to existing multi-view anchor graph competitors, indicating the importance of anchor alignment. Our code is available at \url{//github.com/wangsiwei2010/NeurIPS22-FMVACC}.
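The AUP can be seen in a toy example: if one view's anchor graph has its columns permuted, naive column-wise fusion averages unrelated anchors. The greedy cosine matching below is a simple stand-in for a correspondence step (FMVACC's actual correspondence uses both feature and structure information); the data and noise level are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 5  # samples, anchors

# View 1 anchor graph: rows are soft assignments of samples to anchors.
Z1 = rng.random((n, m))
Z1 /= Z1.sum(1, keepdims=True)

# View 2: same structure, but anchors come in a different (unknown) order.
perm = rng.permutation(m)
Z2 = Z1[:, perm] + 0.01 * rng.random((n, m))

# Greedy column alignment by cosine similarity between anchor columns.
S = (Z1 / np.linalg.norm(Z1, axis=0)).T @ (Z2 / np.linalg.norm(Z2, axis=0))
match = np.full(m, -1)
for _ in range(m):
    i, j = np.unravel_index(np.argmax(S), S.shape)  # best remaining pair
    match[j] = i
    S[i, :] = -np.inf
    S[:, j] = -np.inf

# Reorder view 2's columns so fusion is column-wise consistent with view 1.
Z2_aligned = Z2[:, np.argsort(match)]
```

After alignment, column-wise fusion (e.g., averaging `Z1` and `Z2_aligned`) combines assignments to the *same* anchor, which is exactly what the AUP breaks.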
In this paper, we derive the limit of experiments for one-parameter Ising models on dense regular graphs. In particular, we show that the limiting experiment is Gaussian in the low temperature regime, non-Gaussian in the critical regime, and an infinite collection of Gaussians in the high temperature regime. We also derive the limiting distributions of the maximum likelihood and maximum pseudo-likelihood estimators, and study the limiting power of tests of hypotheses against contiguous alternatives (whose scaling changes across the regimes). To the best of our knowledge, this is the first attempt at establishing the classical limits of experiments for Ising models (and, more generally, Markov random fields).
The goal of Bayesian deep learning is to provide uncertainty quantification via the posterior distribution. However, exact inference over the weight space is computationally intractable due to the ultra-high dimensionality of neural networks. Variational inference (VI) is a promising approach, but its naive application on the weight space does not scale well and often underperforms in predictive accuracy. In this paper, we propose a new adaptive variational Bayesian algorithm to train neural networks on the weight space that achieves high predictive accuracy. By showing an equivalence to Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) with a preconditioning matrix, we then propose an MCMC-within-EM algorithm, which incorporates a spike-and-slab prior to capture the sparsity of the neural network. The EM-MCMC algorithm allows us to perform optimization and model pruning in one shot. We evaluate our methods on the CIFAR-10, CIFAR-100 and ImageNet datasets, and demonstrate that our dense model can reach state-of-the-art performance while our sparse model performs very well compared to previously proposed pruning schemes.
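The SGHMC side of the equivalence can be illustrated with a minimal sampler. This is the standard SGHMC update (Chen et al., 2014) with the preconditioning matrix taken to be the identity; the step size, friction, and the toy quadratic potential are our illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def sghmc_step(theta, v, grad_U, eta=1e-2, alpha=0.1):
    """One SGHMC update: momentum with friction alpha, step size eta,
    and injected noise N(0, 2*alpha*eta) balancing the friction."""
    noise = np.sqrt(2 * alpha * eta) * rng.standard_normal(theta.shape)
    v = (1 - alpha) * v - eta * grad_U(theta) + noise
    return theta + v, v

# Sample from a standard Gaussian: U(theta) = theta^2 / 2, grad U = theta.
theta, v = np.zeros(1), np.zeros(1)
samples = []
for t in range(20000):
    theta, v = sghmc_step(theta, v, lambda th: th)
    if t > 2000:  # discard burn-in
        samples.append(theta[0])
mean_est, var_est = np.mean(samples), np.var(samples)
```

In the weight-space setting, `grad_U` would be a stochastic minibatch gradient of the (negative log) posterior, and the preconditioner would rescale both the gradient and the noise.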
For cloud service providers, fine-grained packet loss detection across data centers is crucial for improving service levels and increasing business income. However, it is difficult to obtain sufficient measurements, owing to the fundamental limitation that the wide-area network (WAN) links responsible for inter-data-center communication are not under the providers' management. Moreover, millisecond-level delay jitter and clock synchronization errors in the WAN render many tools that perform well in data center networks ineffective for this problem. There is therefore an urgent need for a new tool or method. In this work, we propose SketchDecomp, a novel loss detection method built on a mathematical perspective that has not been considered before. Its key idea is to decompose the upstream and downstream sketches into several sub-sketches and to build a low-rank matrix optimization model to solve for them. Extensive experiments on a testbed demonstrate its superiority.
We consider the algorithmic decision problem that takes as input an $n$-vertex $k$-uniform hypergraph $H$ with minimum codegree at least $m-c$ and decides whether it has a matching of size $m$. We show that this decision problem is fixed-parameter tractable with respect to $c$. Furthermore, our algorithm not only decides the problem, but in fact finds either a matching of size $m$ or a certificate that no such matching exists. In particular, when $m=n/k$ and $c=O(\log n)$, this gives a polynomial-time algorithm that, given any $n$-vertex $k$-uniform hypergraph $H$ with minimum codegree at least $n/k-c$, finds either a perfect matching in $H$ or a certificate that no perfect matching exists.
In group testing, the goal is to identify a subset of defective items within a larger set of items based on tests whose outcomes indicate whether at least one defective item is present. This problem is relevant in areas such as medical testing, DNA sequencing, communication protocols, and many more. In this paper, we study (i) a sparsity-constrained version of the problem, in which the testing procedure is subject to one of the following two constraints: items are finitely divisible and may thus participate in at most $\gamma$ tests; or tests are size-constrained to pool no more than $\rho$ items per test; and (ii) a noisy version of the problem, where each test outcome is independently flipped with some constant probability. Under each of these settings, considering the for-each recovery guarantee with asymptotically vanishing error probability, we introduce a fast splitting algorithm and establish its near-optimality not only in terms of the number of tests, but also in terms of the decoding time. While the most basic formulations of our algorithms require $\Omega(n)$ storage, we also provide low-storage variants based on hashing, with similar recovery guarantees.
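The test model is easy to simulate. The sketch below uses noiseless OR-type outcomes with size-constrained pools (at most $\rho$ items per test) and decodes with COMP, a classical baseline that discards every item appearing in a negative test; it is not the paper's splitting algorithm, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, rho, T = 200, 5, 20, 150  # items, defectives, max pool size, tests

defective = set(rng.choice(n, size=d, replace=False).tolist())
pools = [rng.choice(n, size=rho, replace=False) for _ in range(T)]

# Noiseless OR-type outcomes; each test pools at most rho items.
outcomes = [any(i in defective for i in pool) for pool in pools]

# COMP decoding: any item in a negative test is certainly non-defective.
candidates = set(range(n))
for pool, out in zip(pools, outcomes):
    if not out:
        candidates -= set(pool.tolist())
```

COMP never discards a true defective, so `candidates` always contains the defective set; with enough random tests the surviving set matches it with high probability. In the noisy variant of the problem, each entry of `outcomes` would additionally be flipped independently with constant probability.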
Spectral clustering (SC) is a popular clustering technique to find strongly connected communities on a graph. SC can be used in Graph Neural Networks (GNNs) to implement pooling operations that aggregate nodes belonging to the same cluster. However, the eigendecomposition of the Laplacian is expensive and, since clustering results are graph-specific, pooling methods based on SC must perform a new optimization for each new sample. In this paper, we propose a graph clustering approach that addresses these limitations of SC. We formulate a continuous relaxation of the normalized minCUT problem and train a GNN to compute cluster assignments that minimize this objective. Our GNN-based implementation is differentiable, does not require computing the spectral decomposition, and learns a clustering function that can be quickly evaluated on out-of-sample graphs. From the proposed clustering method, we design a graph pooling operator that overcomes some important limitations of state-of-the-art graph pooling techniques and achieves the best performance in several supervised and unsupervised tasks.
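One concrete instantiation of such a relaxed normalized-minCUT objective is sketched below: a cut term rewarding within-cluster edges plus an orthogonality term discouraging degenerate assignments, evaluated on soft cluster assignments $S$. The toy graph and the exact penalty form are our illustrative assumptions.

```python
import numpy as np

def mincut_losses(S, A):
    """Relaxed normalized minCUT objective for soft assignments S (n x k):
    cut term -tr(S'AS)/tr(S'DS) plus an orthogonality penalty on S'S."""
    D = np.diag(A.sum(1))
    k = S.shape[1]
    cut = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)
    SS = S.T @ S
    ortho = np.linalg.norm(SS / np.linalg.norm(SS) - np.eye(k) / np.sqrt(k))
    return cut + ortho

# Toy graph: two 6-node cliques joined by a single edge.
A = np.zeros((12, 12))
A[:6, :6] = 1
A[6:, 6:] = 1
np.fill_diagonal(A, 0)
A[5, 6] = A[6, 5] = 1

# Ground-truth partition vs. a random soft assignment.
S_true = np.zeros((12, 2)); S_true[:6, 0] = 1; S_true[6:, 1] = 1
rng = np.random.default_rng(0)
S_rand = rng.random((12, 2)); S_rand /= S_rand.sum(1, keepdims=True)
```

In a GNN pooling layer, `S` would be produced by a differentiable assignment network and this loss minimized by gradient descent, with no eigendecomposition anywhere.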
Attributed graph clustering is challenging as it requires joint modelling of graph structures and node attributes. Recent progress on graph convolutional networks has proved that graph convolution is effective in combining structural and content information, and several recent methods based on it have achieved promising clustering performance on some real attributed networks. However, there is limited understanding of how graph convolution affects clustering performance and how to properly use it to optimize performance for different graphs. Existing methods essentially use graph convolution of a fixed and low order that only takes into account neighbours within a few hops of each node, which underutilizes node relations and ignores the diversity of graphs. In this paper, we propose an adaptive graph convolution method for attributed graph clustering that exploits high-order graph convolution to capture global cluster structure and adaptively selects the appropriate order for different graphs. We establish the validity of our method by theoretical analysis and extensive experiments on benchmark datasets. Empirical results show that our method compares favourably with state-of-the-art methods.
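The effect of convolution order can be seen with a $k$-order low-pass graph filter $(I - \tfrac{1}{2}L_{\mathrm{sym}})^k$ applied to node attributes: higher orders aggregate neighbours from more hops and produce smoother features over the graph. The toy graph is our illustrative assumption, and the adaptive order selection is omitted here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy graph: two 6-node cliques joined by a single edge.
A = np.zeros((12, 12))
A[:6, :6] = 1
A[6:, 6:] = 1
np.fill_diagonal(A, 0)
A[5, 6] = A[6, 5] = 1

d = A.sum(1)
L_sym = np.eye(12) - A / np.sqrt(np.outer(d, d))  # normalized Laplacian
G = np.eye(12) - 0.5 * L_sym                       # low-pass filter I - L/2

X = rng.standard_normal((12, 4))                   # node attributes

def smoothness(X, L):
    """tr(X' L X): smaller means features vary less across edges."""
    return np.trace(X.T @ L @ X)

X1 = G @ X                               # first-order graph convolution
X4 = np.linalg.matrix_power(G, 4) @ X    # higher order: wider receptive field
```

Since the filter's eigenvalues lie in $[0,1]$, each extra order attenuates high-frequency components further; clustering the filtered features then reflects global cluster structure, with the appropriate order depending on the graph.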