In 2019, Letzter confirmed a conjecture of Balogh, Bar\'at, Gerbner, Gy\'arf\'as and S\'ark\"ozy, proving that every sufficiently large $2$-edge-coloured graph $G$ on $n$ vertices with minimum degree at least $3n/4$ can be partitioned into two monochromatic cycles of different colours. Here, we propose a weaker condition on the degree sequence of $G$ that also guarantees such a partition, and we prove an approximate version. Continuing work by Allen, B\"ottcher, Lang, Skokan and Stein, we also show that if $\operatorname{deg}(u) + \operatorname{deg}(v) \geq 4n/3 + o(n)$ holds for all non-adjacent vertices $u,v \in V(G)$, then all but $o(n)$ vertices of $G$ can be partitioned into three monochromatic cycles.
We study the following independence testing problem: given access to samples from a distribution $P$ over $\{0,1\}^n$, decide whether $P$ is a product distribution or whether it is $\varepsilon$-far in total variation distance from every product distribution. For arbitrary distributions, this problem requires $\exp(n)$ samples. In this work, we show that if $P$ has a sparse structure, then in fact only linearly many samples are required. Specifically, if $P$ is Markov with respect to a Bayesian network whose underlying DAG has in-degree bounded by $d$, then $\tilde{\Theta}(2^{d/2}\cdot n/\varepsilon^2)$ samples are necessary and sufficient for independence testing.
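As a point of reference for the problem statement, the following is a minimal brute-force sketch of the naive approach that the sparsity assumption is meant to beat: it compares the empirical joint distribution with the product of its empirical marginals in total variation distance, which requires enumerating all $2^n$ outcomes. The acceptance threshold and sample size are illustrative choices, not the paper's tester.

```python
# Naive product-distribution test for tiny n: estimate the TV distance
# between the empirical joint and the product of the empirical marginals.
# This enumerates all 2^n outcomes, illustrating the exp(n) baseline cost;
# it is NOT the sample-efficient tester from the abstract.
from itertools import product
import numpy as np

def naive_product_test(samples, eps):
    """samples: (m, n) array over {0,1}; accept iff empirical TV <= eps / 2."""
    m, n = samples.shape
    marg = samples.mean(axis=0)                 # estimates of P[X_i = 1]
    joint = {}
    for row in samples:
        key = tuple(int(b) for b in row)
        joint[key] = joint.get(key, 0.0) + 1.0 / m
    tv = 0.0
    for x in product((0, 1), repeat=n):         # 2^n terms: tiny n only
        q = np.prod([marg[i] if b else 1.0 - marg[i] for i, b in enumerate(x)])
        tv += abs(joint.get(x, 0.0) - q)
    return tv / 2.0 <= eps / 2.0                # accept = "looks like a product"

rng = np.random.default_rng(0)
print(naive_product_test(rng.integers(0, 2, size=(20000, 3)), eps=0.1))  # True
```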
Covariance estimation for matrix-valued data has received increasing interest in applications. Unlike previous works that rely heavily on the matrix normal distribution assumption and require a fixed matrix size, we propose a class of distribution-free regularized covariance estimation methods for high-dimensional matrix data under a separability condition and a bandable covariance structure. Under these conditions, the original covariance matrix decomposes into a Kronecker product of two small bandable covariance matrices that capture the variability along the row and column directions, respectively. We formulate a unified framework for estimating bandable covariances and introduce an efficient algorithm based on rank-one unconstrained Kronecker product approximation. We establish the convergence rates of the proposed estimators, and the derived minimax lower bound shows that our proposed estimator is rate-optimal in certain divergence regimes of the matrix size. We further introduce a class of robust covariance estimators with theoretical guarantees for heavy-tailed data. We demonstrate the superior finite-sample performance of our methods in simulations and in real applications to a gridded temperature anomalies dataset and an S&P 500 stock data analysis.
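To make the rank-one step concrete, here is a minimal NumPy sketch under the assumption that the core computation reduces to the classical Van Loan--Pitsianis rearrangement (a rank-one SVD of a rearranged sample covariance) followed by banding; the bandwidths `kA`, `kB` and the synthetic data are illustrative, and the paper's full regularized and robust estimators are not reproduced.

```python
# Rank-one Kronecker product approximation of a sample covariance S of
# vectorized p x q matrix data, followed by banding. Under the Van
# Loan--Pitsianis rearrangement, S ~ A kron B becomes the rank-one
# problem R = vec(A) vec(B)^T. A sketch, not the paper's estimator.
import numpy as np

def band(M, k):
    """Zero out all entries more than k positions off the diagonal."""
    i, j = np.indices(M.shape)
    return np.where(np.abs(i - j) <= k, M, 0.0)

def kron_band_estimate(S, p, q, kA, kB):
    """S: (p*q, p*q) covariance; returns banded factors A (p x p), B (q x q)."""
    R = np.empty((p * p, q * q))
    for i in range(p):
        for j in range(p):                  # row (i, j) holds vec of block (i, j)
            R[i * p + j] = S[i * q:(i + 1) * q, j * q:(j + 1) * q].ravel()
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(p, p)   # best rank-one Kronecker factors
    B = np.sqrt(s[0]) * Vt[0].reshape(q, q)
    return band(A, kA), band(B, kB)

# Usage on synthetic separable data with banded factors.
rng = np.random.default_rng(1)
p, q = 4, 5
A0 = band(np.eye(p) + 0.4 * rng.standard_normal((p, p)), 1); A0 = A0 @ A0.T
B0 = band(np.eye(q) + 0.4 * rng.standard_normal((q, q)), 1); B0 = B0 @ B0.T
X = rng.multivariate_normal(np.zeros(p * q), np.kron(A0, B0), size=2000)
A, B = kron_band_estimate(np.cov(X, rowvar=False), p, q, kA=2, kB=2)
print(np.linalg.norm(np.kron(A, B) - np.kron(A0, B0)) / np.linalg.norm(np.kron(A0, B0)))
```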
We study the problem of testing whether a function $f: \mathbb{R}^n \to \mathbb{R}$ is a polynomial of degree at most $d$ in the \emph{distribution-free} testing model. Here, the distance between functions is measured with respect to an unknown distribution $\mathcal{D}$ over $\mathbb{R}^n$ from which we can draw samples. In contrast to previous work, we do not assume that $\mathcal{D}$ has finite support. We design a tester that given query access to $f$, and sample access to $\mathcal{D}$, makes $(d/\varepsilon)^{O(1)}$ many queries to $f$, accepts with probability $1$ if $f$ is a polynomial of degree at most $d$, and rejects with probability at least $2/3$ if every degree-$d$ polynomial $P$ disagrees with $f$ on a set of mass at least $\varepsilon$ with respect to $\mathcal{D}$. Our result also holds under mild assumptions when we receive only a polynomial number of bits of precision for each query to $f$, or when $f$ can only be queried on rational points representable using a logarithmic number of bits. Along the way, we prove a new stability theorem for multivariate polynomials that may be of independent interest.
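For intuition about the testing model (though not the paper's algorithm), here is a toy one-sided tester for the univariate case, assuming exact real arithmetic: it interpolates $f$ on $d+1$ sampled points and spot-checks $O(1/\varepsilon)$ fresh samples from $\mathcal{D}$. Like the abstract's tester, it always accepts genuine degree-$\le d$ polynomials; the tolerance and sample counts are illustrative.

```python
# Toy distribution-free test for "f is a univariate polynomial of degree <= d":
# interpolate f on d+1 points sampled from D, then check fresh samples.
# One-sided (true polynomials always pass), but only a naive sketch; the
# paper's tester is multivariate and robust to finite precision.
import numpy as np

def naive_poly_test(f, sample_D, d, eps, tol=1e-8):
    xs = np.array([sample_D() for _ in range(d + 1)])
    P = np.polynomial.polynomial.Polynomial.fit(xs, [f(x) for x in xs], deg=d)
    for _ in range(int(np.ceil(3.0 / eps))):    # O(1/eps) fresh spot checks
        x = sample_D()
        if abs(f(x) - P(x)) > tol:
            return False                        # witnessed disagreement: reject
    return True                                 # consistent with a degree-<=d poly

rng = np.random.default_rng(2)
sample_D = lambda: rng.uniform(-1.0, 1.0)       # stand-in for the unknown D
print(naive_poly_test(lambda x: 3 * x**2 - x + 1, sample_D, d=2, eps=0.1))  # True
print(naive_poly_test(lambda x: np.sin(5 * x), sample_D, d=2, eps=0.1))     # False (w.h.p.)
```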
A natural way of increasing our understanding of NP-complete graph problems is to restrict the input to a special graph class. Classes of $H$-free graphs, that is, graphs that do not contain some graph $H$ as an induced subgraph, have proven to be an ideal testbed for such a complexity study. However, if the forbidden graph $H$ contains a cycle or a claw, then these problems often remain NP-complete. A recent complexity study of the $k$-Colouring problem shows that we may still obtain tractable results if we also bound the diameter of the $H$-free input graph. We continue this line of research by initiating a complexity study of the impact of bounding the diameter for a variety of classical vertex partitioning problems restricted to $H$-free graphs. We prove that bounding the diameter does not help for Independent Set, but leads to new tractable cases for problems closely related to 3-Colouring. Specifically, we show that Near-Bipartiteness, Independent Feedback Vertex Set, Independent Odd Cycle Transversal, Acyclic 3-Colouring and Star 3-Colouring are all polynomial-time solvable for chair-free graphs of bounded diameter. To obtain these results we exploit a new structural property of 3-colourable chair-free graphs.
The reconfiguration graph $\mathcal{C}_k(G)$ for the $k$-colourings of a graph $G$ has a vertex for each proper $k$-colouring of $G$, and two vertices of $\mathcal{C}_k(G)$ are adjacent precisely when those $k$-colourings differ on a single vertex of $G$. Much work has focused on bounding the maximum value of ${\rm{diam}}~\mathcal{C}_k(G)$ over all $n$-vertex graphs $G$. We consider the analogous problems for list colourings and for correspondence colourings. We conjecture that if $L$ is a list-assignment for a graph $G$ with $|L(v)|\ge d(v)+2$ for all $v\in V(G)$, then ${\rm{diam}}~\mathcal{C}_L(G)\le n(G)+\mu(G)$. We also conjecture that if $(L,H)$ is a correspondence cover for a graph $G$ with $|L(v)|\ge d(v)+2$ for all $v\in V(G)$, then ${\rm{diam}}~\mathcal{C}_{(L,H)}(G)\le n(G)+\tau(G)$. (Here $\mu(G)$ and $\tau(G)$ denote the matching number and vertex cover number of $G$.) For every graph $G$, we give constructions showing that both conjectures are best possible. Our first main result proves the upper bounds (for the list and correspondence versions, respectively) ${\rm{diam}}~\mathcal{C}_L(G)\le n(G)+2\mu(G)$ and ${\rm{diam}}~\mathcal{C}_{(L,H)}(G)\le n(G)+2\tau(G)$. Our second main result proves that both conjectured bounds hold, whenever all $v$ satisfy $|L(v)|\ge 2d(v)+1$. We also prove more precise results when $G$ is a tree. We conclude by proving one or both conjectures for various classes of graphs such as complete bipartite graphs, subcubic graphs, cactuses, and graphs with bounded maximum average degree.
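For readers who want to experiment, the following brute-force sketch constructs $\mathcal{C}_k(G)$ for a small graph and computes its diameter (or reports disconnectedness); it enumerates all proper $k$-colourings, so it is exponential in $n(G)$ and purely illustrative.

```python
# Build the reconfiguration graph C_k(G) of proper k-colourings by brute
# force and compute its diameter via BFS from every colouring. Exponential
# in n(G); for small examples only.
from itertools import product
from collections import deque

def proper_colourings(n, edges, k):
    return [c for c in product(range(k), repeat=n)
            if all(c[u] != c[v] for u, v in edges)]

def diameter_Ck(n, edges, k):
    V = proper_colourings(n, edges, k)
    adj = {c: [d for d in V if sum(a != b for a, b in zip(c, d)) == 1]
           for c in V}                      # adjacent iff they differ on one vertex
    diam = 0
    for s in V:
        dist, Q = {s: 0}, deque([s])
        while Q:
            u = Q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    Q.append(w)
        if len(dist) < len(V):
            return None                     # C_k(G) is disconnected
        diam = max(diam, max(dist.values()))
    return diam

# The 4-cycle with k = 3 colours:
print(diameter_Ck(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 3))
```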
High-dimensional and sparse (HiDS) matrices are frequently encountered in big-data applications such as e-commerce systems and social network services. Performing highly accurate representation learning on such matrices is of great significance for extracting the latent knowledge and patterns they contain. Latent factor analysis (LFA), which represents an HiDS matrix by learning low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models perform such embeddings on an HiDS matrix directly, without exploiting its hidden graph structures, thereby incurring a loss of accuracy. To address this issue, this paper proposes a graph-incorporated latent factor analysis (GLFA) model. It adopts two ideas: 1) a graph is constructed to identify the hidden high-order interaction (HOI) among the nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed to incorporate HOI, thereby improving the representation learning ability of the resulting model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which clearly supports its strong representation learning ability for HiDS data.
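To ground the terminology, here is a minimal sketch of plain LFA, the baseline that GLFA extends: regularized SGD on the squared error over observed entries only. The rank `r`, learning rate and epoch count are illustrative choices, and GLFA's graph construction and recurrent HOI machinery are not reproduced here.

```python
# Plain latent factor analysis (LFA) on the observed entries of a sparse
# matrix: learn low-rank row/column embeddings P, Q so that P[i] @ Q[j]
# approximates each observed entry (i, j). A baseline sketch only.
import numpy as np

def lfa(observed, shape, r=8, lr=0.01, reg=0.05, epochs=50, seed=0):
    """observed: iterable of (i, j, value); returns embeddings P, Q."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((shape[0], r))
    Q = 0.1 * rng.standard_normal((shape[1], r))
    for _ in range(epochs):
        for i, j, v in observed:
            err = v - P[i] @ Q[j]           # residual on one observed entry
            pi = P[i].copy()                # keep the pre-update row factor
            P[i] += lr * (err * Q[j] - reg * P[i])
            Q[j] += lr * (err * pi - reg * Q[j])
    return P, Q

# Usage: recover a rank-2 matrix from ~30% of its entries.
rng = np.random.default_rng(3)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
obs = [(i, j, M[i, j]) for i in range(40) for j in range(30) if rng.random() < 0.3]
P, Q = lfa(obs, M.shape)
print(np.mean([(M[i, j] - P[i] @ Q[j]) ** 2 for i in range(40) for j in range(30)]))
```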
Vector Perturbation Precoding (VPP) can speed up downlink data transmission in large and massive multi-user MIMO systems but is known to be NP-hard. While the literature contains several algorithms for VPP under a total power constraint, they do not apply to VPP under a per-antenna power constraint. This paper proposes a novel parallel tree search algorithm for VPP under a per-antenna power constraint, called \emph{\textbf{TreeStep}}, which finds good-quality solutions to the VPP problem with practical computational complexity. We show that our method provides a large performance gain over simple linear precoding such as regularised zero forcing. We evaluate TreeStep on several large MIMO~($16\times16$ and $24\times24$) and massive MIMO~($16\times32$ and $24\times 48$) configurations and demonstrate that TreeStep outperforms the popular polynomial-time VPP algorithm, the Fixed Complexity Sphere Encoder, achieving an extremely low BER of $10^{-6}$ at a much lower SNR.
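The sketch below contrasts plain regularised zero forcing with naive exhaustive vector perturbation over a tiny Gaussian-integer lattice, scoring candidates by peak per-antenna power, to make the objective concrete. TreeStep replaces this brute force with a parallel tree search; the perturbation scale `tau` and search radius are illustrative choices, not values from the paper.

```python
# Compare a plain ZF/RZF-precoded signal with the best vector-perturbed
# signal under a peak per-antenna power metric, by exhaustive search over
# a small Gaussian-integer perturbation lattice. Illustration only.
import numpy as np
from itertools import product

def rzf(H, alpha):
    """RZF precoder W = H^H (H H^H + alpha I)^{-1}."""
    K = H.shape[0]
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))

def naive_vpp(H, s, tau=4.0, radius=1):
    W = rzf(H, alpha=0.0)                       # plain ZF inverse here
    ints = range(-radius, radius + 1)
    best_peak, best_x = np.inf, None
    for re in product(ints, repeat=len(s)):
        for im in product(ints, repeat=len(s)):
            l = np.array(re) + 1j * np.array(im)
            x = W @ (s + tau * l)               # perturbed precoded signal
            peak = np.max(np.abs(x) ** 2)       # per-antenna power metric
            if peak < best_peak:
                best_peak, best_x = peak, x
    return best_x, best_peak

rng = np.random.default_rng(4)
K = 4
H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=K) / np.sqrt(2)
x0 = rzf(H, 0.0) @ s
x1, peak = naive_vpp(H, s)
print(np.max(np.abs(x0) ** 2), peak)            # perturbation can only lower the peak
```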
A palindromic substring $T[i.. j]$ of a string $T$ is said to be a shortest unique palindromic substring (SUPS) in $T$ for an interval $[p, q]$ if $T[i.. j]$ is a shortest palindromic substring such that $T[i.. j]$ occurs only once in $T$ and $[i, j]$ contains $[p, q]$. The SUPS problem is, given a string $T$ of length $n$, to construct a data structure that can compute all the SUPSs for any given query interval. It is known that any SUPS query can be answered in $O(\alpha)$ time after $O(n)$-time preprocessing, where $\alpha$ is the number of SUPSs to output [Inoue et al., 2018]. In this paper, we first show that $\alpha$ is at most $4$, and that this upper bound is tight. We then present an algorithm that solves the SUPS problem for a sliding window, answering any query in $O(\log\log W)$ time and updating the data structures in amortized $O(\log\sigma)$ time, where $W$ is the size of the window and $\sigma$ is the alphabet size. Furthermore, we consider the SUPS problem in the after-edit model and present an efficient algorithm: one that uses $O(n)$ time for preprocessing and answers any $k$ SUPS queries in $O(\log n\log\log n + k\log\log n)$ time after a single-character substitution. As a by-product, we propose a fully-dynamic data structure for range minimum queries (RmQs) under the constraint that the width of each query range is polylogarithmic. This constrained RmQ data structure answers such a query in constant time and supports a single-element edit operation in amortized constant time.
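A brute-force reference makes the definition concrete: among palindromic substrings $T[i..j]$ that occur exactly once in $T$ and whose interval contains the query interval, return all of minimum length (the abstract shows there are at most $4$). The sketch below uses 0-based, inclusive intervals; the paper's data structures replace this polynomial-time scan.

```python
# Brute-force SUPS queries on a short string: enumerate all substrings,
# keep those that are palindromes, occur exactly once in T (counting
# overlapping occurrences), and whose interval [i, j] contains [p, q];
# return the shortest ones. Reference implementation only.

def sups(T, p, q):
    n = len(T)
    best, answers = None, []
    for i in range(n):
        for j in range(i, n):
            sub = T[i:j + 1]
            if sub != sub[::-1]:
                continue                     # not a palindrome
            occ = sum(1 for k in range(n - len(sub) + 1)
                      if T[k:k + len(sub)] == sub)
            if occ != 1 or not (i <= p and q <= j):
                continue                     # must be unique and cover [p, q]
            L = j - i + 1
            if best is None or L < best:
                best, answers = L, [(i, j)]
            elif L == best:
                answers.append((i, j))
    return answers

print(sups("abaaabab", 3, 3))                # all SUPSs for the interval [3, 3]
```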
We recall some of the history of the information-theoretic approach to deriving core results in probability theory, and we indicate the recent resurgence of interest in this area, with current progress along several interesting directions. We then give a new information-theoretic proof of a finite version of de Finetti's classical representation theorem for finite-valued random variables. We derive an upper bound on the relative entropy between the distribution of the first $k$ random variables in an exchangeable sequence of length $n$ and an appropriate mixture of product distributions. The mixing measure is characterised as the law of the empirical measure of the original sequence, and de Finetti's result is recovered as a corollary. The proof is nicely motivated by the Gibbs conditioning principle from statistical mechanics, and it proceeds along an appealing sequence of steps. The technical estimates required for these steps are obtained using a collection of combinatorial tools known within information theory as `the method of types.'
Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or may not be available at all. In this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows GCNs to be applied not only in scenarios where the given graph is incomplete or corrupted, but also in those where no graph is available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
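A minimal NumPy sketch of the forward computation at the heart of this setup follows, assuming a standard GCN layer with self-loops and symmetric normalisation applied to graphs sampled from Bernoulli edge probabilities; the bilevel/hypergradient optimisation of the edge distribution is not reproduced, and all tensors are random placeholders.

```python
# Forward pass of the core object in this approach: a GCN layer applied to
# graphs sampled from learned Bernoulli edge probabilities theta, with the
# expected node embeddings estimated by Monte Carlo. Illustration only.
import numpy as np

rng = np.random.default_rng(5)
n, f, h = 6, 4, 3
theta = np.triu(rng.uniform(0.1, 0.9, size=(n, n)), 1)
theta = theta + theta.T                          # symmetric edge probabilities

def sample_graph(theta, rng):
    U = np.triu(rng.binomial(1, theta), 1)       # sample each edge once
    return (U + U.T).astype(float)               # undirected adjacency matrix

def gcn_layer(A, X, W):
    """One graph convolution with self-loops and symmetric normalisation."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)  # ReLU

X = rng.standard_normal((n, f))                  # node features
W = rng.standard_normal((f, h))                  # layer weights
# Monte Carlo estimate of the expected embeddings under the edge distribution:
H = np.mean([gcn_layer(sample_graph(theta, rng), X, W) for _ in range(100)], axis=0)
print(H.shape)
```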