We study robust community detection in the context of the node-corrupted stochastic block model, where an adversary can arbitrarily modify all the edges incident to a fraction of the $n$ vertices. We present the first polynomial-time algorithm that achieves weak recovery at the Kesten-Stigum threshold even in the presence of a small constant fraction of corrupted nodes. Prior to this work, even state-of-the-art robust algorithms were known to break down under such node-corruption adversaries when close to the Kesten-Stigum threshold. We further extend our techniques to the $Z_2$ synchronization problem, where our algorithm reaches the optimal recovery threshold in the presence of similarly strong adversarial perturbations. The key ingredient of our algorithm is a novel identifiability proof that leverages the push-out effect of the Grothendieck norm of principal submatrices.
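To make the corruption model concrete, the following is a minimal sketch (not the paper's algorithm) of generating a two-community stochastic block model instance and letting an adversary rewire every edge incident to an $\epsilon$-fraction of nodes. The degree parameters and the final line's condition $(a-b)^2 > 2(a+b)$ are the standard two-community Kesten-Stigum formulation; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 2000, 8.0, 2.0    # expected degrees: a within, b across communities
eps = 0.01                  # fraction of corrupted nodes (illustrative)

labels = rng.choice([-1, 1], size=n)                  # two hidden communities
probs = np.where(labels[:, None] == labels[None, :], a / n, b / n)
A = (rng.random((n, n)) < probs).astype(int)
A = np.triu(A, 1); A = A + A.T                        # simple undirected graph

# node corruption: the adversary arbitrarily replaces every edge incident
# to eps*n chosen nodes (here with fresh random rows, but anything goes)
m = int(eps * n)
corrupt = rng.choice(n, size=m, replace=False)
R = (rng.random((m, n)) < a / n).astype(int)
A[corrupt, :] = R
A[:, corrupt] = R.T
A = np.triu(A, 1); A = A + A.T                        # re-symmetrize

# weak recovery is possible iff (a - b)^2 > 2(a + b) (Kesten-Stigum)
print((a - b) ** 2, ">", 2 * (a + b))
```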
We develop a flexible stochastic approximation framework for analyzing the long-run behavior of learning in games (both continuous and finite). The proposed analysis template incorporates a wide array of popular learning algorithms, including gradient-based methods, the exponential/multiplicative weights algorithm for learning in finite games, and optimistic and bandit variants of the above. In addition to providing an integrated view of these algorithms, our framework allows us to obtain several new convergence results, both asymptotic and in finite time, in both continuous and finite games. Specifically, we provide a range of criteria for identifying classes of Nash equilibria and sets of action profiles that are attracting with high probability, and we also introduce the notion of coherence, a game-theoretic property that includes strict and sharp equilibria and which leads to convergence in finite time. Importantly, our analysis applies to both oracle-based methods and bandit, payoff-based methods, that is, methods in which players only observe their realized payoffs.
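As one concrete member of the framework, here is a minimal sketch of the exponential/multiplicative weights dynamics in a finite two-player game with a strict Nash equilibrium (the kind of attracting profile the coherence results cover). The specific game, step size, and full-payoff-vector (oracle) feedback are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

# symmetric 2x2 game (a prisoner's dilemma); Defect strictly dominates,
# so (Defect, Defect) is a strict Nash equilibrium
A = np.array([[3.0, 0.0],
              [4.0, 1.0]])   # payoff to a player choosing row i vs column j

def exp_weights(steps=2000, eta=0.1):
    x = np.ones(2) / 2       # player 1's mixed strategy
    y = np.ones(2) / 2       # player 2's mixed strategy
    for _ in range(steps):
        gx, gy = A @ y, A @ x            # expected payoff vectors (oracle feedback)
        x = x * np.exp(eta * gx); x /= x.sum()
        y = y * np.exp(eta * gy); y /= y.sum()
    return x, y

print(exp_weights())   # both strategies concentrate on Defect (the strict NE)
```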
The stochastic Lanczos quadrature method has garnered significant attention recently. Upon examination of the error analyses given by Ubaru, Chen and Saad and by Cortinovis and Kressner, certain notable inconsistencies arise. It turns out that the former's results are valid for cases with symmetric quadrature nodes and may not be adequate for many practical cases, such as estimating the log determinant of a matrix. This paper analyzes the probabilistic error bound of the stochastic Lanczos quadrature method for cases with asymmetric quadrature nodes. In addition, an optimized error allocation technique is employed to minimize the overall number of matrix-vector multiplications required by the method.
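For readers unfamiliar with the method under analysis, the following is a minimal sketch of stochastic Lanczos quadrature for estimating $\log\det(A)$ of a symmetric positive definite matrix: Rademacher probe vectors combined with the Gauss quadrature rule obtained from the Lanczos tridiagonal matrix. The test matrix, number of probes, and Lanczos depth are illustrative, and reorthogonalization is omitted for brevity.

```python
import numpy as np

def lanczos(A, v, m):
    """Run m Lanczos steps on symmetric A from starting vector v.
    Returns the tridiagonal coefficients (alphas, betas)."""
    n = len(v)
    alphas, betas = [], []
    q_prev = np.zeros(n)
    q = v / np.linalg.norm(v)
    beta = 0.0
    for _ in range(m):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:
            break
        q_prev, q = q, w / beta
    return np.array(alphas), np.array(betas[:-1])

def slq_logdet(A, num_probes=30, m=20, rng=np.random.default_rng(0)):
    """Estimate log det(A) for SPD A via stochastic Lanczos quadrature."""
    n = A.shape[0]
    est = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe, ||v||^2 = n
        a, b = lanczos(A, v, m)
        T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
        theta, U = np.linalg.eigh(T)          # Ritz values = quadrature nodes
        weights = U[0, :] ** 2                # quadrature weights
        est += n * np.sum(weights * np.log(theta))
    return est / num_probes

# quick sanity check on a random SPD matrix
rng = np.random.default_rng(1)
B = rng.standard_normal((200, 200))
A = B @ B.T + 200 * np.eye(200)
print(slq_logdet(A), np.linalg.slogdet(A)[1])
```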
This paper is a collection of results on combinatorial properties of codes for the Z-channel. A Z-channel with error fraction $\tau$ takes as input a length-$n$ binary codeword and injects, in an adversarial manner, up to $n\tau$ asymmetric errors, i.e., errors that only zero out bits but do not flip $0$'s to $1$'s. It is known that the largest $(L-1)$-list-decodable code for the Z-channel with error fraction $\tau$ has exponential size (in $n$) if $\tau$ is less than a critical value that we call the $(L-1)$-list-decoding Plotkin point, and has constant size if $\tau$ exceeds this threshold. The $(L-1)$-list-decoding Plotkin point is known to be $L^{-\frac{1}{L-1}} - L^{-\frac{L}{L-1}}$, which equals $1/4$ for unique decoding with $L-1=1$. In this paper, we derive various results on the size of the largest codes above and below the list-decoding Plotkin point. In particular, we show that the largest $(L-1)$-list-decodable code $\epsilon$-above the Plotkin point, for any sufficiently small positive constant $\epsilon>0$, has size $\Theta_L(\epsilon^{-3/2})$ for any $L-1\ge1$. We also devise upper and lower bounds on the exponential size of codes below the list-decoding Plotkin point.
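The closed form above is easy to evaluate numerically; the snippet below computes the $(L-1)$-list-decoding Plotkin point for a few list sizes and confirms the value $1/4$ at $L-1=1$.

```python
# Evaluate the (L-1)-list-decoding Plotkin point: L^{-1/(L-1)} - L^{-L/(L-1)}
def plotkin_point(L):
    return L ** (-1 / (L - 1)) - L ** (-L / (L - 1))

for L in [2, 3, 4, 10]:
    print(L - 1, plotkin_point(L))
# L - 1 = 1 (unique decoding) recovers 1/4, as stated above.
```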
Learning the community structure of a large-scale graph is a fundamental problem in machine learning, computer science, and statistics. We study the problem of exactly recovering the communities in a graph generated from the Stochastic Block Model (SBM) in the Massively Parallel Computation (MPC) model. Specifically, given $kn$ vertices that are partitioned into $k$ equal-sized clusters (i.e., each of size $n$), a graph on these $kn$ vertices is randomly generated such that each pair of vertices is connected with probability~$p$ if they are in the same cluster and with probability $q$ if not, where $p > q > 0$. We give MPC algorithms for the SBM in the (very general) \emph{$s$-space MPC model}, where each machine has memory $s=\Omega(\log n)$. Under the condition that $\frac{p-q}{\sqrt{p}}\geq \tilde{\Omega}(k^{\frac12}n^{-\frac12+\frac{1}{2(r-1)}})$ for any integer $r\in [3,O(\log n)]$, our first algorithm exactly recovers all $k$ clusters in $O(kr\log_s n)$ rounds using $\tilde{O}(m)$ total space, or in $O(r\log_s n)$ rounds using $\tilde{O}(km)$ total space. If $\frac{p-q}{\sqrt{p}}\geq \tilde{\Omega}(k^{\frac34}n^{-\frac14})$, our second algorithm achieves $O(\log_s n)$ rounds and $\tilde{O}(m)$ total space. Both algorithms significantly improve upon a recent result of Cohen-Addad et al. [PODC'22], who gave algorithms that only work in the \emph{sublinear space MPC model}, where each machine has local memory $s=O(n^{\delta})$ for some constant $\delta>0$, under a much stronger condition on $p$, $q$, and $k$. Our algorithms are based on collecting the $r$-step neighborhood of each vertex and comparing, for each pair of vertices, statistics computed from their local neighborhoods. To implement the clustering algorithms in parallel, we present efficient approaches for implementing some basic graph operations in the $s$-space MPC model.
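A single-machine sketch of the model and the flavor of the comparison step (here with $r=1$, i.e., comparing adjacency rows directly) may help fix ideas; the actual algorithms collect $r$-step neighborhoods and run in the $s$-space MPC model, and all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, p, q = 3, 100, 0.5, 0.1        # k clusters of size n each (illustrative)
N = k * n
labels = np.repeat(np.arange(k), n)

# sample the SBM adjacency matrix: edge prob p within clusters, q across
probs = np.where(labels[:, None] == labels[None, :], p, q)
A = (rng.random((N, N)) < probs).astype(int)
A = np.triu(A, 1); A = A + A.T       # symmetric, no self-loops

# compare 1-step neighborhood statistics: same-cluster pairs have a
# smaller expected difference between their adjacency rows
def diff(u, v):
    return np.sum(A[u] != A[v])

same = diff(0, 1)                    # vertices 0 and 1 share a cluster
cross = diff(0, n)                   # vertex n lies in a different cluster
print(same, cross)                   # cross is larger in expectation
```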
This paper proposes a novel signed $\beta$-model for directed signed networks, which are frequently encountered in application domains but largely neglected in the literature. The proposed signed $\beta$-model decomposes a directed signed network as the difference of two unsigned networks and embeds each node with two latent factors for in-status and out-status. The presence of negative edges leads to a non-concave log-likelihood, and a one-step estimation algorithm is developed to facilitate parameter estimation, which is efficient both theoretically and computationally. We also develop an inferential procedure for pairwise and multiple node comparisons under the signed $\beta$-model, which fills the gap in uncertainty quantification for node ranking. Theoretical results are established for the coverage probability of the confidence intervals, as well as for false discovery rate (FDR) control in multiple node comparisons. The finite-sample performance of the signed $\beta$-model is also examined through extensive numerical experiments on both synthetic and real-life networks.
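As a rough illustration of the decomposition, the sketch below generates a directed signed network as the difference of two unsigned $\beta$-model layers, each endowing node $i$ with an out-status parameter $\alpha_i$ and an in-status parameter $\beta_j$. The logistic link and the rule that positive proposals take precedence over negative ones are assumptions of this toy generator, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# latent out-status / in-status parameters, one pair per layer (assumed)
alpha_p, beta_p = rng.normal(0.0, 0.5, n), rng.normal(0.0, 0.5, n)
alpha_m, beta_m = rng.normal(-1.0, 0.5, n), rng.normal(-1.0, 0.5, n)

def beta_model_layer(alpha, beta):
    """Directed beta-model layer: P(i -> j) = sigmoid(alpha_i + beta_j)."""
    logits = alpha[:, None] + beta[None, :]
    P = 1.0 / (1.0 + np.exp(-logits))
    E = (rng.random((n, n)) < P).astype(int)
    np.fill_diagonal(E, 0)                    # no self-loops
    return E

A_plus = beta_model_layer(alpha_p, beta_p)    # unsigned positive layer
A_minus = beta_model_layer(alpha_m, beta_m)   # unsigned negative layer
A_minus *= 1 - A_plus     # toy tie-breaking: positive edges take precedence
A_signed = A_plus - A_minus                   # entries in {-1, 0, +1}
print(np.unique(A_signed, return_counts=True))
```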
The chain graph model admits both undirected and directed edges in one graph, where symmetric conditional dependencies are encoded via undirected edges and asymmetric causal relations are encoded via directed edges. Though frequently encountered in practice, the chain graph model has been largely under-investigated in the literature, possibly due to the lack of identifiability conditions between undirected and directed edges. In this paper, we first establish a set of novel identifiability conditions for the Gaussian chain graph model, exploiting a low-rank plus sparse decomposition of the precision matrix. Further, an efficient learning algorithm is built upon these identifiability conditions to fully recover the chain graph structure. Theoretical analysis of the proposed method is conducted, establishing its asymptotic consistency in recovering the exact chain graph structure. The advantages of the proposed method are also supported by numerical experiments on both simulated examples and a real application to the Standard & Poor's 500 index data.
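To fix ideas, the sketch below constructs a precision matrix with exactly the low-rank plus sparse structure that the identifiability conditions must disentangle; the sizes, sparsity pattern, and rank are hypothetical, and no recovery is attempted.

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = 20, 2                          # p observed variables, rank-r component

# sparse symmetric part: a few off-diagonal conditional dependencies
S = np.eye(p)
for (i, j) in [(0, 1), (2, 3), (4, 7)]:
    S[i, j] = S[j, i] = 0.3

# low-rank part (hypothetical stand-in for the directed component)
L = rng.normal(0.0, 0.3, (p, r))
Omega = S + L @ L.T                   # precision matrix: sparse + low rank

print(np.all(np.linalg.eigvalsh(Omega) > 0))   # valid (positive definite)
Sigma = np.linalg.inv(Omega)                   # implied covariance
```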
We study sampling problems associated with potentials that lack smoothness; the potentials can be either convex or non-convex. Departing from the standard smooth setting, the potentials are only assumed to be weakly smooth or non-smooth, or a sum of several such functions. For this challenging sampling task, we develop a sampling algorithm that resembles proximal algorithms in optimization. Our algorithm is based on a special case of Gibbs sampling known as the alternating sampling framework (ASF). The key contribution of this work is a practical realization of the ASF based on rejection sampling, for both non-convex and convex potentials that are not necessarily smooth. In almost all cases considered in this work, our proximal sampling algorithm achieves better complexity than all existing methods.
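A minimal one-dimensional sketch of the alternating sampling framework with a rejection-sampling inner step may be useful: the target here is the non-smooth Laplace potential $f(x)=|x|$, the ASF alternates $y\mid x \sim \mathcal{N}(x,\eta)$ with $x\mid y \propto \exp(-f(x)-(x-y)^2/(2\eta))$, and the acceptance rule $\exp(-f(x))$ is valid because $f\ge 0$ here. The step size and chain length are illustrative choices, not the paper's tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.abs                    # non-smooth potential: target density ∝ exp(-|x|)
eta = 0.25                    # step size of the ASF iteration (illustrative)

def sample_x_given_y(y):
    """Rejection sampler for pi(x) ∝ exp(-f(x) - (x-y)^2/(2*eta)).
    Proposal N(y, eta); accept w.p. exp(-f(x)), valid since f >= 0 here."""
    while True:
        x = y + np.sqrt(eta) * rng.standard_normal()
        if rng.random() < np.exp(-f(x)):
            return x

x, chain = 0.0, []
for _ in range(20000):
    y = x + np.sqrt(eta) * rng.standard_normal()   # y | x ~ N(x, eta)
    x = sample_x_given_y(y)                        # x | y via rejection
    chain.append(x)

print(np.mean(np.abs(chain)))   # ≈ 1 (mean of |X| under the Laplace target)
```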
Theoretical studies on transfer learning or domain adaptation have so far focused on situations with a known hypothesis class or model; however, in practice, some amount of model selection is usually involved, often under the umbrella term of hyperparameter tuning: for example, one may think of tuning for the right neural network architecture towards a target task while leveraging data from a related source task. In addition to the usual tradeoffs between approximation and estimation errors in model selection, this problem brings in a new complexity term, namely the transfer distance between source and target distributions, which is known to vary with the choice of hypothesis class. We present a first study of this problem, focusing on classification; in particular, the analysis reveals a remarkable phenomenon: adaptive rates, i.e., those achievable with no distributional information, can be arbitrarily slower than oracle rates, i.e., those achievable with knowledge of such distances.
We propose a dynamical low-rank algorithm for a gyrokinetic model that is used to describe strongly magnetized plasmas. The low-rank approximation is based on a decomposition into variables parallel and perpendicular to the magnetic field, as suggested by the physics of the underlying problem. We show that the resulting scheme exactly recovers the dispersion relation even with rank 1. We then perform a simulation of kinetic shear Alfv\'en waves and show that the proposed dynamical low-rank algorithm achieves a drastic reduction (multiple orders of magnitude) in both computational time and memory consumption. We also compare the performance of robust first- and second-order projector-splitting, BUG (also called unconventional), and augmented BUG integrators, as well as FFT-based spectral and Lax--Wendroff discretizations.
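To illustrate the kind of integrator being compared, here is a minimal sketch of a first-order projector-splitting (KSL) step for a generic matrix ODE $\dot A = F(A)$, applied to a two-dimensional heat equation rather than the gyrokinetic model itself; the explicit Euler substeps, grid size, rank, and step size are illustrative choices.

```python
import numpy as np

n, r, h = 32, 8, 1e-4
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx ** 2
Dx = Dy = lap

def F(A):
    """RHS of the matrix ODE dA/dt = Dx A + A Dy^T (2-D heat equation)."""
    return Dx @ A + A @ Dy.T

def ksl_step(U, S, V, h):
    """One first-order projector-splitting (KSL) step with Euler substeps."""
    K = U @ S                                  # K step: K = U S
    K = K + h * F(K @ V.T) @ V
    U1, S1 = np.linalg.qr(K)
    S1 = S1 - h * U1.T @ F(U1 @ S1 @ V.T) @ V  # S step (integrated backwards)
    L = V @ S1.T                               # L step: L = V S^T
    L = L + h * F(U1 @ L.T).T @ U1
    V1, St = np.linalg.qr(L)
    return U1, St.T, V1

rng = np.random.default_rng(0)
A0 = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # rank-1 initial data
A0 += 1e-8 * rng.standard_normal((n, n))              # avoid a singular S
Uf, s, Vt = np.linalg.svd(A0)
U, S, V = Uf[:, :r], np.diag(s[:r]), Vt[:r].T

A_ref = A0.copy()
for _ in range(100):
    U, S, V = ksl_step(U, S, V, h)
    A_ref = A_ref + h * F(A_ref)               # full-rank Euler reference
print(np.linalg.norm(U @ S @ V.T - A_ref))     # small: rank 8 suffices here
```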
Image segmentation remains an open problem, especially when the intensities of the objects of interest overlap due to the presence of intensity inhomogeneity (also known as a bias field). To segment images with intensity inhomogeneities, a bias-correction embedded level set model is proposed, in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase settings to segment colour images and images with multiple objects, respectively. It has been extensively tested on both synthetic and real images that are widely used in the literature, as well as on the public BrainWeb and IBSR datasets. Experimental results and comparisons with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
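A toy version of the bias-estimation idea, independent of the level set machinery: represent the bias field as a linear combination of orthogonal (here tensor-product Legendre) primary functions and fit the coefficients by least squares against a synthetic inhomogeneous image with known region labels. This is a simplified stand-in for the model's actual energy minimization, and all sizes and degrees are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(0)
n = 64
u = np.linspace(-1, 1, n)

# 2-D orthogonal basis from tensor products of Legendre polynomials
deg = 3
basis = []
for i in range(deg + 1):
    for j in range(deg + 1):
        Pi, Pj = Legendre.basis(i)(u), Legendre.basis(j)(u)
        basis.append(np.outer(Pi, Pj))
B = np.stack([b.ravel() for b in basis], axis=1)   # (n*n, num_basis)

# synthetic smoothly varying bias field over a piecewise-constant image
bias = 1.0 + 0.4 * np.outer(u, u) + 0.2 * u[:, None]
img = np.where(np.add.outer(u, u) > 0, 2.0, 1.0)   # two "tissue" regions
observed = bias * img + 0.01 * rng.standard_normal((n, n))

# with (here: known) region labels, estimate the bias coefficients by
# least squares on observed / region intensity, mimicking one model update
ratio = observed / np.where(img > 1.5, 2.0, 1.0)
coef, *_ = np.linalg.lstsq(B, ratio.ravel(), rcond=None)
bias_hat = (B @ coef).reshape(n, n)
print(np.abs(bias_hat - bias).max())               # small recovery error
```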