We consider the problem of entanglement-assisted one-shot classical communication. In the zero-error regime, entanglement can increase the one-shot zero-error capacity of a family of classical channels following the strategy of Cubitt et al., Phys. Rev. Lett. 104, 230503 (2010). This strategy relies on the Kochen-Specker theorem, which applies only to projective measurements; consequently, in the regime of noisy states and/or measurements, it cannot increase the capacity. To accommodate generically noisy situations, we examine the one-shot success probability of sending a fixed number of classical messages. We show that preparation contextuality powers the quantum advantage in this task, increasing the one-shot success probability beyond its classical maximum. Our treatment extends beyond Cubitt et al. and includes, for example, the experimentally implemented protocol of Prevedel et al., Phys. Rev. Lett. 106, 110505 (2011). We then show a mapping between this communication task and a corresponding nonlocal game; this mapping generalizes the connection with pseudotelepathy games previously noted in the zero-error case. Finally, after motivating a constraint we term context-independent guessing, we show that contextuality witnessed by the noise-robust noncontextuality inequalities of R. Kunjwal, Quantum 4, 219 (2020) is sufficient for enhancing the one-shot success probability. This provides an operational meaning to these inequalities and to the associated hypergraph invariant, the weighted max-predictability, introduced in R. Kunjwal, Quantum 3, 184 (2019). Our results show that entanglement-assisted one-shot classical communication provides a fertile ground for studying the interplay of the Kochen-Specker theorem, Spekkens contextuality, and Bell nonlocality.
One of the main reasons for the query model's prominence in quantum complexity is the availability of concrete lower-bound techniques: the polynomial method and the adversary method. There have been considerable efforts not just to prove lower bounds using these methods but also to compare and relate them. We explore the value of these bounds on quantum query complexity for the class of symmetric functions, arguably one of the most natural and basic classes of Boolean functions. We show (using the recently introduced spectral sensitivity) that both these bounds (the positive adversary and the approximate degree) give the same value for every total symmetric Boolean function. We also look at the quantum query complexity of Gap Majority, a partial symmetric function, which has recently gained importance in understanding the composition of randomized query complexity. We characterize the quantum query complexity of Gap Majority and show a lower bound on noisy randomized query complexity (Ben-David and Blais, FOCS 2020) in terms of quantum query complexity. In addition, we study how large certificate complexity and block sensitivity can be in comparison to sensitivity (even up to constant factors) for symmetric functions. We show tight separations, i.e., we give upper bounds on the possible separations and construct functions achieving them.
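For concreteness, Gap Majority is the promise version of Majority with a gap around the balanced point; one common convention (the exact gap size varies across works, so the $\sqrt{n}$ below is an illustrative assumption) is
$$\mathrm{GapMaj}_n(x) = \begin{cases} 1 & \text{if } |x| \ge n/2 + \sqrt{n}, \\ 0 & \text{if } |x| \le n/2 - \sqrt{n}, \end{cases}$$
where $|x|$ denotes the Hamming weight of $x$ and inputs outside the promise are left undefined.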
To gain a better theoretical understanding of how evolutionary algorithms (EAs) cope with plateaus of constant fitness, we propose the $n$-dimensional Plateau$_k$ function as a natural benchmark and analyze how different variants of the $(1 + 1)$ EA optimize it. The Plateau$_k$ function has a plateau of second-best fitness in a ball of radius $k$ around the optimum. As the evolutionary algorithm, we consider the $(1 + 1)$ EA with an arbitrary unbiased mutation operator. Denoting by $\alpha$ the random number of bits flipped in an application of this operator and assuming that $\Pr[\alpha = 1]$ is at least some small sub-constant value, we show the surprising result that, for all constant $k \ge 2$, the runtime $T$ follows a distribution close to the geometric one with success probability equal to the probability of flipping between $1$ and $k$ bits divided by the size of the plateau. Consequently, the expected runtime is the inverse of this quantity and thus depends only on the probability of flipping between $1$ and $k$ bits, not on other characteristics of the mutation operator. Our result also implies that the optimal mutation rate for standard bit mutation here is approximately $k/(en)$. Our main analysis tool is a combined analysis of the Markov chains on the search-point space and on the Hamming-level space, an approach that promises to be useful for other plateau problems as well.
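To make the setup concrete, here is a minimal Python sketch of the $(1+1)$ EA with standard bit mutation on one natural formalization of Plateau$_k$ (the exact fitness values on and off the plateau are our assumption; only the plateau structure matters for the result above):

```python
import random

def plateau_k(x, k):
    """One formalization of Plateau_k: OneMax up to the plateau, a constant
    second-best value within Hamming distance k of the optimum, best at 1^n."""
    n, ones = len(x), sum(x)
    if ones == n:         # the all-ones optimum
        return n + 1
    if ones >= n - k:     # plateau: ball of radius k around the optimum
        return n - k
    return ones           # OneMax slope leading up to the plateau

def one_plus_one_ea(n, k, rate):
    """(1+1) EA with standard bit mutation; offspring accepted if not worse."""
    x = [random.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        y = [b ^ (random.random() < rate) for b in x]  # flip each bit w.p. rate
        if plateau_k(y, k) >= plateau_k(x, k):
            x = y
        steps += 1
    return steps

# Per the abstract, rate = k/(e*n) should empirically beat the standard 1/n.
```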
Understanding spatial correlation is vital in many fields including epidemiology and social science. Lee, Meeks and Pettersson (Stat. Comput. 2021) recently demonstrated that improved inference for areal unit count data can be achieved by carrying out modifications to a graph representing spatial correlations; specifically, they delete edges of the planar graph derived from border-sharing between geographic regions in order to maximise a specific objective function. In this paper we address the computational complexity of the associated graph optimisation problem. We demonstrate that this problem cannot be solved in polynomial time unless P = NP; we further show intractability for two simpler variants of the problem. We follow these results with two parameterised algorithms that exactly solve the problem in polynomial time in restricted settings. The first of these utilises dynamic programming on a tree decomposition, and runs in polynomial time if both the treewidth and maximum degree are bounded. The second algorithm is restricted to problem instances with maximum degree three, as may arise from triangulations of planar surfaces, but is an FPT algorithm when the maximum number of edges that can be removed is taken as the parameter.
Our work focuses on extra-gradient learning algorithms for finding Nash equilibria in bilinear zero-sum games. The proposed method, which can formally be considered a variant of Optimistic Mirror Descent \cite{DBLP:conf/iclr/MertikopoulosLZ19}, uses a large learning rate for the intermediate gradient step, which essentially amounts to computing (approximate) best-response strategies against the profile of the previous iteration. Although counter-intuitive at first sight, given the unusually large intermediate learning step for an iterative algorithm, we prove that the method guarantees last-iterate convergence to an equilibrium. In particular, we show that for a sufficiently small learning rate $\eta$, the algorithm first reaches an $\eta^{1/\rho}$-approximate Nash equilibrium, with $\rho > 1$, by decreasing the Kullback-Leibler divergence of each iterate by at least $\Omega(\eta^{1+\frac{1}{\rho}})$; once there, the method becomes a contracting map and converges to the exact equilibrium. Furthermore, we perform experimental comparisons with the optimistic variant of the multiplicative weights update method of \cite{Daskalakis2019LastIterateCZ} and show that our algorithm has significant practical potential, as it offers substantial gains in terms of accelerated convergence.
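To make the update rule concrete, below is a minimal numpy sketch (our illustrative implementation, not the authors' code) of the extra-gradient step in the entropic (multiplicative-weights) geometry for the bilinear game $\min_x \max_y x^\top A y$ over the simplices, with a large intermediate rate `gamma` and a small main rate `eta`:

```python
import numpy as np

def eg_mwu(A, eta, gamma, T):
    """Extra-gradient / optimistic mirror descent sketch for min_x max_y x^T A y.
    gamma >> eta is the large intermediate step, so the midpoint approximates
    a best response to the previous iterate."""
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n   # uniform mixed strategies
    for _ in range(T):
        # intermediate (extrapolation) step with the large rate gamma
        x_mid = x * np.exp(-gamma * (A @ y))
        x_mid /= x_mid.sum()
        y_mid = y * np.exp(gamma * (A.T @ x))
        y_mid /= y_mid.sum()
        # main update with the small rate eta, using gradients at the midpoint
        x = x * np.exp(-eta * (A @ y_mid))
        x /= x.sum()
        y = y * np.exp(eta * (A.T @ x_mid))
        y /= y.sum()
    return x, y
```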
Given access to a single long trajectory generated by an unknown irreducible Markov chain $M$, we simulate an $\alpha$-lazy version of $M$, which is ergodic. This enables us to generalize recent results on estimation and identity testing, originally stated for ergodic Markov chains, in a way that allows fully empirical inference. In particular, our approach shows that the pseudo-spectral gap introduced by Paulin [2015] and defined for ergodic Markov chains can already be given a meaning in the case of irreducible but possibly periodic Markov chains.
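A minimal sketch of this simulation step (the interface is our assumption): given one observed trajectory of $M$, a trajectory of the $\alpha$-lazy chain $M' = \alpha I + (1-\alpha) M$ is obtained by holding each state for an independent geometric number of extra ticks before the next real transition:

```python
import random

def lazify(trajectory, alpha):
    """Map a trajectory of an irreducible chain M to one of the alpha-lazy
    chain alpha*I + (1 - alpha)*M: at every tick the lazy chain holds its
    current state with probability alpha, else takes the next real step."""
    lazy = []
    for state in trajectory:
        lazy.append(state)
        while random.random() < alpha:  # hold with probability alpha per tick
            lazy.append(state)
    return lazy
```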
We study random graphs with latent geometric structure, where the probability of each edge depends on the underlying random positions corresponding to its two endpoints. We focus on the setting where this conditional probability is a general monotone increasing function of the inner product of two vectors; such a function can naturally be viewed as the cumulative distribution function of some independent random variable. We consider a one-parameter family of random graphs, characterized by the variance of this random variable, that smoothly interpolates between a random dot product graph and an Erd\H{o}s--R\'enyi random graph. We prove phase transitions for detecting geometry in these graphs, in terms of the dimension of the underlying geometric space and the variance parameter of the conditional probability. When the dimension is high or the variance is large, the graph is similar to an Erd\H{o}s--R\'enyi graph with the same edge density, which does not possess geometry; in other parameter regimes, there is a computationally efficient signed triangle statistic that distinguishes them. The proofs make use of information-theoretic inequalities and concentration-of-measure phenomena.
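For illustration, the signed triangle statistic admits a simple matrix-power implementation; a minimal numpy sketch (centering the adjacency entries at the edge density $p$, which we take as known or estimated):

```python
import numpy as np

def signed_triangle_stat(A, p):
    """Sum over unordered triples {i,j,k} of (A_ij - p)(A_jk - p)(A_ki - p).
    A: symmetric 0/1 adjacency matrix. With a zero diagonal, trace(B^3)
    counts each unordered triple exactly 6 times."""
    B = A.astype(float) - p
    np.fill_diagonal(B, 0.0)
    return np.trace(B @ B @ B) / 6.0
```

Under an Erd\H{o}s--R\'enyi graph with edge density $p$ this statistic has mean zero, while latent geometry inflates it, which is what makes it a natural test.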
The partition function and free energy of a quantum many-body system determine its physical properties in thermal equilibrium. Here we study the computational complexity of approximating these quantities for $n$-qubit local Hamiltonians. First, we report a classical algorithm with $\mathrm{poly}(n)$ runtime which approximates the free energy of a given $2$-local Hamiltonian provided that it satisfies a certain denseness condition. Our algorithm combines the variational characterization of the free energy and convex relaxation methods. It contributes to a body of work on efficient approximation algorithms for dense instances of optimization problems which are hard in the general case, and can be viewed as simultaneously extending existing algorithms for (a) the ground energy of dense $2$-local Hamiltonians and (b) the free energy of dense classical Ising models. Second, we establish polynomial-time equivalence between the problem of approximating the free energy of local Hamiltonians and three other natural quantum approximate counting problems, including the problem of approximating the number of witness states accepted by a QMA verifier. These results suggest that simulation of quantum many-body systems in thermal equilibrium may precisely capture the complexity of a broad family of computational problems that has yet to be defined or characterized in terms of known complexity classes. Finally, we summarize state-of-the-art classical and quantum algorithms for approximating the free energy and show how to improve their runtime and memory footprint.
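For reference, the quantities involved and the variational characterization exploited by the algorithm are the standard ones:
$$Z(\beta) = \operatorname{Tr}\, e^{-\beta H}, \qquad F(\beta) = -\beta^{-1} \log Z(\beta) = \min_{\rho \ge 0,\ \operatorname{Tr}\rho = 1} \big[\operatorname{Tr}(H\rho) - \beta^{-1} S(\rho)\big],$$
where $S(\rho) = -\operatorname{Tr}(\rho \log \rho)$ is the von Neumann entropy and the minimum is attained by the Gibbs state $\rho_\beta = e^{-\beta H}/Z(\beta)$.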
We consider the problem of approximating the arboricity of a graph $G= (V,E)$, which we denote by $\mathsf{arb}(G)$, in sublinear time, where the arboricity of a graph is the minimal number of forests required to cover its edges. An algorithm for this problem may perform degree and neighbor queries, and is allowed a small error probability. We design an algorithm that outputs an estimate $\hat{\alpha}$, such that with probability $1-1/\textrm{poly}(n)$, $\mathsf{arb}(G)/c\log^2 n \leq \hat{\alpha} \leq \mathsf{arb}(G)$, where $n=|V|$ and $c$ is a constant. The expected query complexity and running time of the algorithm are $O(n/\mathsf{arb}(G))\cdot \textrm{poly}(\log n)$, and this upper bound also holds with high probability. This bound is optimal for such an approximation up to a $\textrm{poly}(\log n)$ factor.
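For context, arboricity is characterized by the Nash-Williams formula, which makes the covering-by-forests definition concrete:
$$\mathsf{arb}(G) = \max_{H \subseteq G,\ |V(H)| \ge 2} \left\lceil \frac{|E(H)|}{|V(H)| - 1} \right\rceil,$$
where the maximum ranges over subgraphs $H$ of $G$.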
We consider the problem of deterministically enumerating all minimum $k$-cut-sets in a given hypergraph for any fixed $k$. The input here is a hypergraph $G = (V, E)$ with non-negative hyperedge costs. A subset $F$ of hyperedges is a $k$-cut-set if the number of connected components in $G - F$ is at least $k$, and it is a minimum $k$-cut-set if it has the least cost among all $k$-cut-sets. For fixed $k$, we call the problem of finding a minimum $k$-cut-set Hypergraph-$k$-Cut and the problem of enumerating all minimum $k$-cut-sets Enum-Hypergraph-$k$-Cut. The special cases of Hypergraph-$k$-Cut and Enum-Hypergraph-$k$-Cut restricted to graph inputs are well-known to be solvable in (randomized as well as deterministic) polynomial time. In contrast, it is only recently that polynomial-time algorithms for Hypergraph-$k$-Cut were developed. The randomized polynomial-time algorithm for Hypergraph-$k$-Cut that was designed in 2018 (Chandrasekaran, Xu, and Yu, SODA 2018) showed that the number of minimum $k$-cut-sets in a hypergraph is $O(n^{2k-2})$, where $n$ is the number of vertices in the input hypergraph, and that they can all be enumerated in randomized polynomial time, thus resolving Enum-Hypergraph-$k$-Cut in randomized polynomial time. A deterministic polynomial-time algorithm for Hypergraph-$k$-Cut was subsequently designed in 2020 (Chandrasekaran and Chekuri, FOCS 2020), but it is not guaranteed to enumerate all minimum $k$-cut-sets. In this work, we give the first deterministic polynomial-time algorithm to solve Enum-Hypergraph-$k$-Cut (this is non-trivial even for $k = 2$). Our algorithms are based on new structural results that allow for efficient recovery of all minimum $k$-cut-sets by solving minimum $(S,T)$-terminal cut problems. Our techniques give new structural insights even for enumerating all minimum cut-sets (i.e., minimum 2-cut-sets) in a given hypergraph.
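To fix the definitions, a small Python sketch that checks whether a set of hyperedges is a $k$-cut-set (representing hyperedges as frozensets of vertices is our choice):

```python
def is_k_cut_set(n, hyperedges, F, k):
    """Check whether F is a k-cut-set: G - F has at least k connected
    components. Vertices are 0..n-1; hyperedges and F contain frozensets."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for e in hyperedges:
        if e in F:
            continue                        # hyperedge removed by F
        vs = sorted(e)
        for u, v in zip(vs, vs[1:]):        # a hyperedge connects all its vertices
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    return len({find(v) for v in range(n)}) >= k
```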
A new paradigm for data science has emerged, with quantum data, quantum models, and quantum computational devices. This field, called Quantum Machine Learning (QML), aims to achieve a speedup over traditional machine learning for data analysis. However, its success usually hinges on efficiently training the parameters in quantum neural networks, and the field of QML still lacks theoretical scaling results for their trainability. Some trainability results have been proven for a closely related field called Variational Quantum Algorithms (VQAs). While both fields involve training a parametrized quantum circuit, there are crucial differences that make the results for one setting not readily applicable to the other. In this work we bridge the two frameworks and show that gradient scaling results for VQAs can also be applied to study the gradient scaling of QML models. Our results indicate that features deemed detrimental for VQA trainability can also lead to issues such as barren plateaus in QML. Consequently, our work has implications for several QML proposals in the literature. In addition, we provide theoretical and numerical evidence that QML models exhibit further trainability issues not present in VQAs, arising from the use of a training dataset. We refer to these as dataset-induced barren plateaus. These results are most relevant when dealing with classical data, as here the choice of embedding scheme (i.e., the map between classical data and quantum states) can greatly affect the gradient scaling.
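For context, the standard formalization of a barren plateau is that the partial derivatives of the cost $C(\boldsymbol{\theta})$ concentrate exponentially in the number of qubits $n$:
$$\operatorname{Var}_{\boldsymbol{\theta}}\!\left[\partial_{\mu} C(\boldsymbol{\theta})\right] \in O\!\left(b^{-n}\right) \quad \text{for some } b > 1,$$
so that, by Chebyshev's inequality, the probability of observing a gradient component of non-negligible magnitude vanishes exponentially in $n$.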