(I) We revisit the algorithmic problem of finding all triangles in a graph $G=(V,E)$ with $n$ vertices and $m$ edges. According to a result of Chiba and Nishizeki (1985), this task can be achieved by a combinatorial algorithm running in $O(m \alpha) = O(m^{3/2})$ time, where $\alpha= \alpha(G)$ is the graph arboricity. We provide a new, very simple combinatorial algorithm for finding all triangles in a graph and show that it is amenable to the same running-time analysis. We derive these worst-case bounds from first principles and with very simple proofs that do not rely on classic results due to Nash-Williams from the 1960s. (II) We extend our arguments to the problem of finding all small complete subgraphs of a given fixed size. We show that the dependency on $m$ and $\alpha$ in the running time $O(\alpha^{\ell-2} \cdot m)$ of the algorithm of Chiba and Nishizeki for listing all copies of $K_\ell$, where $\ell \geq 3$, is asymptotically tight. (III) We give improved arboricity-sensitive running times for counting and/or detection of copies of $K_\ell$, for small $\ell \geq 4$. A key ingredient in our algorithms is, once again, the algorithm of Chiba and Nishizeki. Our new algorithms are faster than all previous algorithms in certain high-range arboricity intervals for every $\ell \geq 7$.
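For concreteness, here is a minimal Python sketch of the classic degree-ordering technique behind such $O(m^{3/2})$ bounds (the standard textbook approach, not necessarily the paper's new algorithm): orient every edge toward its higher-ranked endpoint and intersect out-neighborhoods.

```python
def list_triangles(n, edges):
    """List all triangles of a simple undirected graph in O(m^{3/2}) time:
    rank vertices by degree (ties broken by id), orient every edge toward
    its higher-ranked endpoint, and intersect out-neighborhoods. Each
    out-neighborhood has size O(sqrt(m)), which yields the bound."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order = sorted(range(n), key=lambda v: (len(adj[v]), v))
    pos = {v: i for i, v in enumerate(order)}
    out = [{w for w in adj[v] if pos[w] > pos[v]} for v in range(n)]
    return [(u, v, w) for u in range(n) for v in out[u] for w in out[u] & out[v]]

print(list_triangles(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # K4 has 4 triangles
```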
We review the cumulant decomposition (a way of decomposing the expectation of a product of random variables, e.g. $\mathbb{E}[XYZ]$, into a sum of terms corresponding to partitions of these variables) and the Wick decomposition (a way of decomposing a product of (not necessarily random) variables into a sum of terms corresponding to subsets of the variables). Then we generalize each one to a new decomposition in which the product is replaced by an arbitrary function.
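As a concrete instance of the first decomposition, the standard moment-cumulant formula expands $\mathbb{E}[XYZ]$ over the five partitions of $\{X,Y,Z\}$, with one joint cumulant $\kappa$ per block (so $\kappa(X)=\mathbb{E}[X]$ and $\kappa(X,Y)=\mathrm{Cov}(X,Y)$):

```latex
% Moment-cumulant formula: a sum over the five partitions of {X, Y, Z},
% with one joint cumulant per block of each partition.
\begin{align*}
\mathbb{E}[XYZ] &= \kappa(X,Y,Z)                       && \{XYZ\} \\
  &\quad + \kappa(X)\,\kappa(Y,Z) + \kappa(Y)\,\kappa(X,Z)
          + \kappa(Z)\,\kappa(X,Y)                     && \{X\}\{YZ\},\ \{Y\}\{XZ\},\ \{Z\}\{XY\} \\
  &\quad + \kappa(X)\,\kappa(Y)\,\kappa(Z)             && \{X\}\{Y\}\{Z\}
\end{align*}
```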
We propose a novel random walk-based algorithm for unbiased estimation of arbitrary functions of a weighted adjacency matrix, coined universal graph random features (u-GRFs). This includes many of the most popular examples of kernels defined on the nodes of a graph. Our algorithm enjoys subquadratic time complexity with respect to the number of nodes, overcoming the notoriously prohibitive cubic scaling of exact graph kernel evaluation. It can also be trivially distributed across machines, permitting learning on much larger networks. At the heart of the algorithm is a modulation function that upweights or downweights the contribution from different random walks depending on their length. We show that by parameterising it with a neural network we can obtain u-GRFs that give higher-quality kernel estimates or perform efficient, scalable kernel learning. We provide robust theoretical analysis and support our findings with experiments including pointwise estimation of fixed graph kernels, solving non-homogeneous graph ordinary differential equations, node clustering, and kernel regression on triangular meshes.
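A minimal sketch of the random-walk estimator family that u-GRFs belong to: it is unbiased for one row of the matrix power series $\sum_k c_k A^k$ truncated at `len(f_coeffs)` terms, with a fixed hand-chosen modulation `f_coeffs` standing in for the learned neural one (all names here are ours).

```python
import numpy as np

def grf_row_estimate(A, i, f_coeffs, n_walks=2000, p_halt=0.3, seed=0):
    """Monte-Carlo estimate of row i of sum_k f_coeffs[k] * A^k using
    geometrically terminated random walks. Each walk carries an
    importance weight ('load') that unbiases the uniform neighbor
    sampling; f_coeffs plays the role of the length-dependent
    modulation function (fixed here, not a learned network)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    nbrs = [np.flatnonzero(A[v]) for v in range(n)]
    est = np.zeros(n)
    for _ in range(n_walks):
        v, load = i, 1.0
        est[v] += f_coeffs[0]                     # length-0 term
        for k in range(1, len(f_coeffs)):
            if rng.random() < p_halt or len(nbrs[v]) == 0:
                break                             # geometric termination
            u = rng.choice(nbrs[v])               # uniform neighbor step
            # unbias: divide by step prob (1/deg) and survival prob (1-p)
            load *= A[v, u] * len(nbrs[v]) / (1 - p_halt)
            v = u
            est[v] += f_coeffs[k] * load
    return est / n_walks
```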
The angular synchronization problem aims to accurately estimate (up to a constant additive phase) a set of unknown angles $\theta_1, \dots, \theta_n\in[0, 2\pi)$ from $m$ noisy measurements of their offsets $\theta_i-\theta_j \;\mbox{mod} \; 2\pi.$ Applications include, for example, sensor network localization, phase retrieval, and distributed clock synchronization. An extension of the problem to the heterogeneous setting (dubbed $k$-synchronization) is to estimate $k$ groups of angles simultaneously, given noisy observations (with unknown group assignment) from each group. Existing methods for angular synchronization usually perform poorly in high-noise regimes, which are common in applications. In this paper, we leverage neural networks for the angular synchronization problem, and its heterogeneous extension, by proposing GNNSync, a theoretically-grounded end-to-end trainable framework using directed graph neural networks. In addition, new loss functions are devised to encode synchronization objectives. Experimental results on extensive data sets demonstrate that GNNSync attains competitive, and often superior, performance against a comprehensive set of baselines for the angular synchronization problem and its extension, validating the robustness of GNNSync even at high noise levels.
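For orientation, the classical spectral baseline that methods like GNNSync are compared against (not GNNSync itself) reads the angles off the top eigenvector of a Hermitian matrix built from the offset measurements:

```python
import numpy as np

def spectral_sync(n, offsets):
    """Spectral baseline for angular synchronization (Singer, 2011):
    fill a Hermitian matrix with e^{i * measured offset} and read the
    angles off the top eigenvector, recovered up to one global shift."""
    H = np.zeros((n, n), dtype=complex)
    for i, j, delta in offsets:                 # delta ~ theta_i - theta_j mod 2*pi
        H[i, j] = np.exp(1j * delta)
        H[j, i] = np.exp(-1j * delta)
    _, vecs = np.linalg.eigh(H)                 # eigenvalues in ascending order
    return np.angle(vecs[:, -1]) % (2 * np.pi)  # top eigenvector -> angle estimates
```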
We initiate the study of the parameterized complexity of the {\sc Collective Graph Exploration} ({\sc CGE}) problem. In {\sc CGE}, the input consists of an undirected connected graph $G$ and a collection of $k$ robots, initially placed at the same vertex $r$ of $G$, and each one of them has an energy budget of $B$. The objective is to decide whether $G$ can be \emph{explored} by the $k$ robots in $B$ time steps, i.e., there exist $k$ closed walks in $G$, one corresponding to each robot, such that every edge is covered by at least one walk, every walk starts and ends at the vertex $r$, and the maximum length of any walk is at most $B$. Unfortunately, this problem is \textsf{NP}-hard even on trees [Fraigniaud {\em et~al.}, 2006]. Further, we prove that the problem remains \textsf{W[1]}-hard parameterized by $k$ even for trees of treedepth $3$. Due to the \textsf{para-NP}-hardness of the problem parameterized by treedepth, and motivated by real-world scenarios, we study the parameterized complexity of the problem parameterized by the vertex cover number ($\mathsf{vc}$) of the graph, and prove that the problem is fixed-parameter tractable (\textsf{FPT}) parameterized by $\mathsf{vc}$. Additionally, we study the optimization version of {\sc CGE}, where we want to optimize $B$, and design an approximation algorithm with an additive approximation factor of $O(\mathsf{vc})$.
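While deciding CGE is hard, verifying a candidate solution is easy; the following sketch (our names; adjacency-set representation, simple graphs assumed) checks the feasibility conditions stated above:

```python
def valid_exploration(G, r, walks, B):
    """Verify the CGE feasibility conditions from the abstract: each walk
    is a closed walk from r of length at most B along edges of G, and the
    walks jointly cover every edge. G maps each vertex to its neighbor set."""
    covered = set()
    for walk in walks:
        if walk[0] != r or walk[-1] != r or len(walk) - 1 > B:
            return False
        for u, v in zip(walk, walk[1:]):
            if v not in G[u]:
                return False                       # step along a non-edge
            covered.add(frozenset((u, v)))
    return covered == {frozenset((u, v)) for u in G for v in G[u]}

# star with 3 leaves, k = 2 robots, budget B = 4
G = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(valid_exploration(G, 0, [[0, 1, 0, 2, 0], [0, 3, 0]], 4))  # True
```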
Quantum multiplication is a fundamental operation in quantum computing. Most existing quantum multipliers require $O(n)$ qubits to multiply two $n$-bit integer numbers, limiting their applicability to multiplying large integer numbers using near-term quantum computers. In this paper, we propose the Quantum Multiplier Based on Exponent Adder (QMbead), a new approach that addresses this limitation by requiring just $\log_2(n)$ qubits to multiply two $n$-bit integer numbers. QMbead uses a so-called exponent encoding to represent the two multiplicands as two superposition states, then employs a quantum adder to obtain the sum of these two superposition states, and finally measures the outputs of the quantum adder to calculate the product of the multiplicands. This paper presents two types of quantum adders based on the quantum Fourier transform (QFT) for use in QMbead. The circuit depth of QMbead is determined by the chosen quantum adder, being $O(\log^2 n)$ when using the two QFT-based adders. If leveraging a logarithmic-depth quantum adder, the time complexity of QMbead is $O(n \log n)$, identical to that of the fastest classical multiplication algorithm, the Harvey–van der Hoeven algorithm. Interestingly, QMbead maintains an advantage over the Harvey–van der Hoeven algorithm, given that the latter is only practical for extremely large numbers, whereas QMbead is valid for both small and large numbers. The multiplicands can be either integers or decimal numbers. QMbead has been successfully implemented on quantum simulators to compute products with a bit length of up to 273 bits using only 17 qubits. This establishes QMbead as an efficient solution for multiplying large integer or decimal numbers with many bits.
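The exponent encoding can be illustrated classically: writing $x=\sum_i x_i 2^i$ and $y=\sum_j y_j 2^j$, the product is $\sum_{i,j} x_i y_j\, 2^{i+j}$, so multiplication reduces to adding bit positions, which is what the quantum adder performs in superposition. A toy Python rendering of this bookkeeping (function name is ours; a real QMbead run recovers the multiplicities from the adder's measurement statistics):

```python
def exponent_multiply(x, y):
    """Classical illustration of the exponent encoding: write x and y as
    sums of powers of two, add exponents pairwise (the quantum adder does
    this in superposition), and recombine counts into the product."""
    xi = [i for i in range(x.bit_length()) if (x >> i) & 1]
    yj = [j for j in range(y.bit_length()) if (y >> j) & 1]
    counts = {}
    for i in xi:
        for j in yj:
            counts[i + j] = counts.get(i + j, 0) + 1   # multiplicity of 2^(i+j)
    return sum(c * (1 << s) for s, c in counts.items())

assert exponent_multiply(27, 14) == 27 * 14
```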
This paper studies the prediction of a target $\mathbf{z}$ from a pair of random variables $(\mathbf{x},\mathbf{y})$, where the ground-truth predictor is additive $\mathbb{E}[\mathbf{z} \mid \mathbf{x},\mathbf{y}] = f_\star(\mathbf{x}) +g_{\star}(\mathbf{y})$. We study the performance of empirical risk minimization (ERM) over functions $f+g$, $f \in F$ and $g \in G$, fit on a given training distribution, but evaluated on a test distribution which exhibits covariate shift. We show that, when the class $F$ is "simpler" than $G$ (measured, e.g., in terms of its metric entropy), our predictor is more resilient to $\textbf{heterogeneous covariate shifts}$ in which the shift in $\mathbf{x}$ is much greater than that in $\mathbf{y}$. Our analysis proceeds by demonstrating that ERM behaves $\textbf{qualitatively similarly to orthogonal machine learning}$: the rate at which ERM recovers the $f$-component of the predictor has only a lower-order dependence on the complexity of the class $G$, adjusted for the partial non-identifiability introduced by the additive structure. These results rely on a novel H\"older-style inequality for the Dudley integral, which may be of independent interest. Moreover, we corroborate our theoretical findings with experiments demonstrating improved resilience to shifts in "simpler" features across numerous domains.
Nonlinear Boolean equation systems play an important role in a wide range of applications. Grover's algorithm is one of the best-known quantum search algorithms for solving nonlinear Boolean equation systems on quantum computers. In this paper, we propose three novel techniques to improve efficiency within the framework of Grover's algorithm. First, a W-cycle circuit construction uses recursion to increase the number of Boolean equations that can be solved with a fixed number of qubits. Second, a greedy compression technique reduces the depth of the oracle circuit. Finally, a randomized Grover's algorithm chooses a random subset of equations to form a random oracle at every iteration, which further reduces the circuit depth and the number of ancilla qubits. Numerical results on Boolean quadratic equations demonstrate the efficiency of the proposed techniques.
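For reference, a toy state-vector simulation of the plain Grover framework these techniques build on (not the W-cycle, compression, or randomized-oracle variants), applied to a small Boolean system with one solution:

```python
import numpy as np

def grover_solve(n_vars, equations, n_solutions=1):
    """Simulate Grover search for assignments satisfying a system of
    Boolean equations, each given as a predicate over the assignment
    bits packed into an integer b."""
    N = 2 ** n_vars
    marked = np.array([all(eq(b) for eq in equations) for b in range(N)], dtype=bool)
    state = np.full(N, 1 / np.sqrt(N))
    iters = int(np.floor(np.pi / 4 * np.sqrt(N / n_solutions)))
    for _ in range(iters):
        state[marked] *= -1                  # oracle: flip marked amplitudes
        state = 2 * state.mean() - state     # diffusion: inversion about the mean
    return int(np.argmax(state ** 2))        # most probable assignment

# x0 AND x1 = 1, x0 XOR x2 = 1  ->  unique solution x0=1, x1=1, x2=0 (b = 0b011)
eqs = [lambda b: (b & 1) and (b >> 1) & 1,
       lambda b: ((b & 1) ^ ((b >> 2) & 1)) == 1]
print(bin(grover_solve(3, eqs)))             # 0b11
```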
A group of $n$ agents with numerical preferences for each other are to be assigned to the $n$ seats of a dining table. We study two natural topologies:~circular (cycle) tables and panel (path) tables. For a given seating arrangement, an agent's utility is the sum of their preference values towards their (at most two) direct neighbors. An arrangement is envy-free if no agent strictly prefers someone else's seat, and it is stable if no two agents strictly prefer each other's seats. Recently, it was shown that for both paths and cycles it is NP-hard to decide whether an envy-free arrangement exists, even for symmetric binary preferences. In contrast, we show that, if agents come from a bounded number of classes, the problem is solvable in polynomial time for arbitrarily-valued, possibly asymmetric preferences, including outputting an arrangement if one exists. We also give simpler proofs of the previous hardness results when preferences are allowed to be asymmetric. For stability, it is known that deciding the existence of stable arrangements is NP-hard for both topologies, but only if sufficiently many numerical values are allowed. As it turns out, even constructing unstable instances can be challenging in certain cases, e.g., for binary values. We completely characterize the existence of stable arrangements based on the number of distinct values in the preference matrix and the number of agent classes. We also ask the same question for non-negative values and give an almost-complete characterization, the most interesting outstanding case being that of paths with two-valued non-negative preferences, for which we experimentally find that stable arrangements always exist, and we prove this under the additional constraint that agents can only swap seats when sitting at most two positions apart. We moreover give a polynomial-time algorithm for determining a stable arrangement assuming a bounded number of classes.
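A small checker for the exchange-stability notion above (assuming swaps between arbitrary pairs of agents; the distance-two restriction mentioned at the end would simply filter which pairs are tested):

```python
from itertools import combinations

def utility(P, seats, a, cycle=True):
    """Agent a's utility: sum of its preference values toward the agents
    seated in its (at most two) neighboring seats."""
    n, i = len(seats), seats.index(a)
    nbrs = {(i - 1) % n, (i + 1) % n} if cycle else \
           {j for j in (i - 1, i + 1) if 0 <= j < n}
    return sum(P[a][seats[j]] for j in nbrs if j != i)

def is_stable(P, seats, cycle=True):
    """True iff no two agents both strictly gain by swapping seats."""
    for a, b in combinations(seats, 2):
        swapped = [b if s == a else a if s == b else s for s in seats]
        if (utility(P, swapped, a, cycle) > utility(P, seats, a, cycle) and
                utility(P, swapped, b, cycle) > utility(P, seats, b, cycle)):
            return False
    return True

# 3 agents on a path; P[a][b] = a's value for sitting next to b
P = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(is_stable(P, [0, 1, 2], cycle=False))   # True: no profitable swap exists
```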
In computed tomography (CT), the forward model consists of a linear Radon transform followed by an exponential nonlinearity based on the attenuation of light according to the Beer-Lambert Law. Conventional reconstruction often involves inverting this nonlinearity as a preprocessing step and then solving a convex inverse problem. However, this nonlinear measurement preprocessing required to use the Radon transform is poorly conditioned in the vicinity of high-density materials, such as metal. This preprocessing makes CT reconstruction methods numerically sensitive and susceptible to artifacts near high-density regions. In this paper, we study a technique in which the signal is directly reconstructed from raw measurements through the nonlinear forward model. Though this optimization is nonconvex, we show that gradient descent provably converges to the global optimum at a geometric rate, perfectly reconstructing the underlying signal with a near-minimal number of random measurements. We also prove similar results in the under-determined setting where the number of measurements is significantly smaller than the dimension of the signal. This is achieved by enforcing prior structural information about the signal through constraints on the optimization variables. We illustrate the benefits of direct nonlinear CT reconstruction with cone-beam CT experiments on synthetic and real 3D volumes. We show that this approach reduces metal artifacts compared to a commercial reconstruction of a human skull with metal dental crowns.
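A toy dense-matrix sketch of this direct approach (our names; Gaussian measurements standing in for the Radon transform): run gradient descent on $\|e^{-Ax}-y\|_2^2$ with no log preprocessing of $y$.

```python
import numpy as np

def direct_nonlinear_ct(A, y, steps=3000, lr=0.05):
    """Gradient descent directly through the Beer-Lambert forward model
    y ~ exp(-A x), i.e. on the loss ||exp(-A x) - y||^2."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        t = np.exp(-A @ x)                     # simulated transmission
        x -= lr * (-2 * A.T @ ((t - y) * t))   # exact gradient of the loss
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50)) / np.sqrt(200)  # random measurement matrix
x_true = 0.3 * np.abs(rng.normal(size=50))     # nonnegative attenuation values
x_hat = direct_nonlinear_ct(A, np.exp(-A @ x_true))
print(np.linalg.norm(x_hat - x_true))          # ~0: recovers the signal
```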
We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
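A toy illustration of the lattice construction itself (just the lexicon matching that produces the extra word paths alongside the character chain, not the gated recurrent cells; the sentence and lexicon below are our own example):

```python
def lattice_words(sentence, lexicon):
    """Enumerate all lexicon words spanning contiguous character
    subsequences, i.e. the extra word cells a lattice LSTM attaches
    to the character-level chain."""
    return [(i, j, sentence[i:j])
            for i in range(len(sentence))
            for j in range(i + 1, len(sentence) + 1)
            if sentence[i:j] in lexicon]

# an ambiguous segmentation: "Nanjing City / Yangtze River Bridge"
print(lattice_words("南京市长江大桥",
                    {"南京", "南京市", "市长", "长江", "大桥", "长江大桥"}))
```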