In this paper we prove that the $\ell_0$ isoperimetric coefficient of any axis-aligned cube, $\psi_{\mathcal{C}}$, is $\Theta(n^{-1/2})$, and that the $\ell_0$ isoperimetric coefficient of any measurable body $K$, $\psi_K$, is $O(n^{-1/2})$. As a corollary we deduce that axis-aligned cubes essentially "maximize" the $\ell_0$ isoperimetric coefficient: there exists a constant $q > 0$ such that $\psi_K \leq q \cdot \psi_{\mathcal{C}}$ whenever $\mathcal{C}$ is an axis-aligned cube and $K$ is any measurable set. Lastly, we give immediate applications of our results to the mixing time of Coordinate Hit-and-Run for sampling points uniformly from convex bodies.
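The last sentence references the Coordinate Hit-and-Run sampler; as a point of reference, here is a minimal sketch of one such sampler for a convex body given by a membership oracle. The bisection-based chord search and the `step_bound` bracket are our illustrative assumptions, not constructs from the paper.

```python
import random

def coordinate_hit_and_run(x, membership, step_bound, n_iters=1000):
    """Minimal Coordinate Hit-and-Run sketch: repeatedly pick a random axis
    and move to a uniform point on the chord of the body along that axis.

    x          -- starting point (list of floats), assumed to lie in the body
    membership -- oracle: membership(p) is True iff p lies in the body K
    step_bound -- assumed upper bound on the chord half-length (bisection bracket)
    """
    x = list(x)
    d = len(x)
    for _ in range(n_iters):
        i = random.randrange(d)

        def boundary_dist(direction, tol=1e-9):
            # Bisection for the distance to the boundary along axis i.
            lo, hi = 0.0, step_bound
            while hi - lo > tol:
                mid = (lo + hi) / 2
                y = x[:]
                y[i] += direction * mid
                if membership(y):
                    lo = mid
                else:
                    hi = mid
            return lo

        # Uniform point on the chord through x along coordinate i.
        x[i] += random.uniform(-boundary_dist(-1), boundary_dist(+1))
    return x
```

For instance, `membership = lambda p: all(0 <= c <= 1 for c in p)` with `step_bound = 1.0` runs the chain on the unit cube.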
We prove the first hardness results against efficient proof search by quantum algorithms. We show that under Learning with Errors (LWE), the standard lattice-based cryptographic assumption, no quantum algorithm can weakly automate $\mathbf{TC}^0$-Frege. This extends the line of results of Kraj\'{\i}\v{c}ek and Pudl\'ak (Information and Computation, 1998), Bonet, Pitassi, and Raz (FOCS, 1997), and Bonet, Domingo, Gavald\`a, Maciel, and Pitassi (Computational Complexity, 2004), who showed that Extended Frege, $\mathbf{TC}^0$-Frege, and $\mathbf{AC}^0$-Frege, respectively, cannot be weakly automated by classical algorithms if either the RSA cryptosystem or the Diffie-Hellman key exchange protocol is secure. To the best of our knowledge, this is the first interaction between quantum computation and propositional proof search.
Boundary integral equation formulations of elliptic partial differential equations lead to system matrices that are dense when discretized, yet data-sparse. Using the $\mathcal{H}$-matrix format, this data-sparsity can be exploited to achieve $\mathcal{O}(N\log N)$ complexity for storage and matrix-vector multiplication. The format is purely algebraic, based on low-rank approximations of subblocks, and hence applicable to a wider range of problems. The $\mathcal{H}^2$-matrix format improves the complexity to $\mathcal{O}(N)$ by imposing a recursive structure on subblocks across multiple levels. In practice, however, this comes with a large proportionality constant, making the $\mathcal{H}^2$-matrix format advantageous mostly for large problems. In this paper we investigate the usefulness of a matrix format that lies in between these two: uniform $\mathcal{H}$-matrices. We introduce an algebraic compression algorithm that transforms a regular $\mathcal{H}$-matrix into a uniform $\mathcal{H}$-matrix while maintaining the asymptotic complexity.
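As a concrete illustration of the purely algebraic viewpoint, the following sketch compresses a single admissible subblock by a truncated SVD with an adaptive rank cutoff; the tolerance rule is our assumption and stands in for whatever truncation criterion a given $\mathcal{H}$-matrix code uses.

```python
import numpy as np

def compress_block(B, tol=1e-6):
    """Low-rank approximation of a subblock: B ≈ U @ V with adaptive rank.

    The rank k is chosen as the number of singular values above tol * s[0],
    an illustrative truncation criterion.
    """
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k], Vt[:k, :]  # factors of shapes (m, k) and (k, n)
```

A uniform $\mathcal{H}$-matrix additionally shares the bases among the low-rank blocks belonging to the same cluster, which is what reduces storage relative to a regular $\mathcal{H}$-matrix.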
Vizing's theorem states that any $n$-vertex $m$-edge graph of maximum degree $\Delta$ can be {\em edge colored} using at most $\Delta + 1$ different colors [Diskret.~Analiz, '64]. Vizing's original proof is algorithmic and shows that such an edge coloring can be found in $\tilde{O}(mn)$ time. This was subsequently improved to $\tilde O(m\sqrt{n})$, independently by Arjomandi [1982] and by Gabow et al.~[1985]. In this paper we present an algorithm that computes such an edge coloring in $\tilde O(mn^{1/3})$ time, giving the first polynomial improvement for this fundamental problem in over 40 years.
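As a warm-up for the problem itself: a greedy pass already edge-colors any simple graph with at most $2\Delta - 1$ colors, since an edge's two endpoints block at most $2(\Delta - 1)$ colors. The sketch below shows only this easy bound; Vizing's $\Delta + 1$ bound, and the faster algorithms above, require fan and alternating-path recolorings that we do not reproduce here.

```python
def greedy_edge_coloring(edges):
    """Greedy edge coloring of a simple graph: give each edge the smallest
    color unused at both endpoints. Uses at most 2*Delta - 1 colors; this
    is a warm-up, not Vizing's Delta + 1 algorithm.
    """
    used = {}       # vertex -> set of colors on its incident edges
    coloring = {}   # edge (u, v) -> color
    for u, v in edges:
        taken = used.setdefault(u, set()) | used.setdefault(v, set())
        # len(taken) + 1 candidate colors guarantee a free one by pigeonhole.
        color = next(c for c in range(len(taken) + 1) if c not in taken)
        coloring[(u, v)] = color
        used[u].add(color)
        used[v].add(color)
    return coloring
```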
We study the possibility of designing $N^{o(1)}$-round protocols for problems of substantially super-linear polynomial-time complexity on the congested clique with about $N^{1/2}$ nodes, where $N$ is the input size. We show that in any such protocol, the exponent of the polynomial (if any) bounding the average time complexity of the local computations performed at a node has to be larger than the exponent of the polynomial bounding the time complexity of the given problem.
An orientable sequence of order $n$ is a cyclic binary sequence such that each length-$n$ substring appears at most once \emph{in either direction}. Maximal-length orientable sequences are known only for $n\leq 7$, and a trivial upper bound on their length is $2^{n-1} - 2^{\lfloor(n-1)/2\rfloor}$. This paper presents the first efficient algorithm to construct orientable sequences with asymptotically optimal length; more specifically, our algorithm constructs orientable sequences via cycle-joining and a successor-rule approach, requiring $O(n)$ time per bit and $O(n)$ space. This answers a longstanding open question from Dai, Martin, Robshaw, and Wild [Cryptography and Coding III (1993)]. Applying a recent concatenation-tree framework, the same sequences can be generated in $O(1)$-amortized time per bit using $O(n^2)$ space. Finally, we apply our sequences to find new longest-known (aperiodic) orientable sequences for $n\leq 20$.
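The defining property translates directly into a predicate; the checker below is our own illustration, not the paper's construction. It verifies that over the cycle no length-$n$ window repeats, no window equals the reversal of another, and no window is a palindrome (a palindromic window would be read identically in both directions).

```python
def is_orientable(s, n):
    """Check whether the cyclic binary sequence s is orientable of order n:
    every length-n window occurs at most once in either reading direction."""
    L = len(s)
    windows = [tuple(s[(i + j) % L] for j in range(n)) for i in range(L)]
    seen = set()
    for w in windows:
        if w == w[::-1] or w in seen or w[::-1] in seen:
            return False
        seen.add(w)
    return True
```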
We investigate the maximum cardinality and the mathematical structure of error-correcting codes endowed with the Kendall-$\tau$ metric. We establish an averaging bound for the cardinality of a code with prescribed minimum distance, discuss its sharpness, and characterize the codes attaining it. This leads us to introduce the family of $t$-balanced codes in the Kendall-$\tau$ metric. The results are based on novel arguments that shed new light on the structure of the Kendall-$\tau$ metric space.
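For concreteness, the Kendall-$\tau$ distance between two permutations counts their discordant pairs, equivalently the minimum number of adjacent transpositions needed to turn one into the other. A short sketch (our illustration, quadratic time for clarity):

```python
def kendall_tau(p, q):
    """Kendall tau distance between permutations p and q of the same items:
    the number of pairs ordered differently by p and q."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]  # p rewritten in q's coordinates
    # Discordant pairs are exactly the inversions of r.
    return sum(1 for i in range(len(r))
                 for j in range(i + 1, len(r)) if r[i] > r[j])
```

For example, `kendall_tau([0, 1, 2], [1, 0, 2])` returns 1: a single adjacent transposition separates the two permutations.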
In this paper, we propose Wasserstein proximals of $\alpha$-divergences as suitable objective functionals for learning heavy-tailed distributions in a stable manner. First, we provide sufficient, and in some cases necessary, relations among the data dimension, $\alpha$, and the decay rate of the data distribution for the Wasserstein-proximal-regularized divergence to be finite. We then provide finite-sample convergence rates for estimation with the Wasserstein-1 proximal divergences under certain tail conditions. Numerical experiments demonstrate stable learning of heavy-tailed distributions -- even those without a first or second moment -- without any explicit knowledge of the tail behavior, using suitable generative models such as GANs and flow-based models related to our proposed Wasserstein-proximal-regularized $\alpha$-divergences. Heuristically, the $\alpha$-divergences handle the heavy tails, while the Wasserstein proximals allow non-absolute continuity between distributions and control the velocities of flow-based algorithms as they learn the target distribution deep into the tails.
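For orientation, a Wasserstein-1 proximal of a divergence is an infimal convolution of the following shape (our notation; the paper's precise definition and normalization may differ):
\[
  D_{\alpha}^{\lambda}(P \,\|\, Q) \;=\; \inf_{R}\,\Big\{\, D_{\alpha}(R \,\|\, Q) \;+\; \tfrac{1}{\lambda}\, W_{1}(P, R) \,\Big\},
\]
where $D_{\alpha}$ is the $\alpha$-divergence, $W_{1}$ is the Wasserstein-1 distance, and $\lambda > 0$ is the proximal parameter. The intermediate distribution $R$ is what relaxes the absolute-continuity requirement between $P$ and $Q$.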
Objective: Magnetic particle imaging (MPI) is an emerging medical imaging modality that has gained increasing interest in recent years. Among the benefits of MPI are its high temporal resolution and the fact that it does not expose the specimen to any ionizing radiation. The technique is based on the non-linear response of magnetic nanoparticles to an applied magnetic field, and the particle concentration has to be reconstructed from the electric signal measured in receive coils. Due to the ill-posedness of the reconstruction problem, various regularization methods have been proposed, ranging from early stopping, through classical Tikhonov regularization and iterative methods, to modern machine learning approaches. In this work, we contribute to the latter class: we propose a plug-and-play approach based on a generic zero-shot denoiser with an $\ell^1$-prior. Approach: We validate the reconstruction parameters of the method on a hybrid dataset and compare it with the baseline Tikhonov regularization, DIP, and the previous PP-MPI, a plug-and-play method whose denoiser is trained on MPI-friendly data. Main results: We offer a quantitative and qualitative evaluation of the zero-shot plug-and-play approach on the 3D Open MPI dataset. Moreover, we show the quality of the approach under different levels of data preprocessing. Significance: The proposed method employs a zero-shot denoiser that has not been trained for the MPI task, and therefore saves the cost of training. Moreover, it can potentially be applied in future MPI contexts.
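To sketch what a plug-and-play scheme with an $\ell^1$-prior means operationally, here is a generic proximal-gradient loop for a linear forward model $Ax \approx y$. The particular splitting, the `denoise` interface, and the parameter names are illustrative assumptions on our part, not the paper's exact algorithm.

```python
import numpy as np

def pnp_reconstruct(A, y, denoise, step, l1_weight, n_iters=100):
    """Illustrative plug-and-play proximal gradient loop for A x ≈ y.

    Alternates a gradient step on the data fidelity ||A x - y||^2 with a
    generic (zero-shot) denoiser and a soft-threshold for the l1 prior.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)      # gradient of the least-squares data term
        z = x - step * grad           # gradient descent step
        z = denoise(z)                # plug-and-play: any off-the-shelf denoiser
        # Soft-thresholding = proximal map of the l1 penalty.
        x = np.sign(z) * np.maximum(np.abs(z) - step * l1_weight, 0.0)
    return x
```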
The classical work of Arora et al. (1999) provides a scheme that gives, for any $\epsilon>0$, a polynomial-time $(1-\epsilon)$-approximation algorithm for dense instances of a family of $\mathcal{NP}$-hard problems, such as Max-CUT and Max-$k$-SAT. In this paper we extend and speed up this scheme using a logarithmic number of one-bit predictions. We propose a learning-augmented framework that aims at finding fast algorithms whose approximation guarantees are consistent, smooth, and robust with respect to the prediction error. We provide such algorithms, which moreover use predictions parsimoniously, for dense instances of various optimization problems.
$D^2$-sampling is a fundamental component of sampling-based clustering algorithms such as $k$-means++. Given a dataset $V \subset \mathbb{R}^d$ with $N$ points and a center set $C \subset \mathbb{R}^d$, $D^2$-sampling refers to picking a point from $V$ with probability proportional to its squared distance from the nearest center in $C$. Starting with an empty $C$ and iteratively $D^2$-sampling and updating $C$ over $k$ rounds is precisely the $k$-means++ seeding procedure, which runs in $O(Nkd)$ time and gives an $O(\log{k})$-approximation in expectation for the $k$-means problem. We give a quantum algorithm for (approximate) $D^2$-sampling in the QRAM model that results in a quantum implementation of $k$-means++ running in time $\tilde{O}(\zeta^2 k^2)$. Here $\zeta$ is the aspect ratio (i.e., the ratio of the largest to the smallest interpoint distance), and $\tilde{O}$ hides polylogarithmic factors in $N, d, k$. A robust approximation analysis of $k$-means++ shows that the quantum version preserves the $O(\log{k})$ approximation guarantee. Further, we show that our quantum algorithm for $D^2$-sampling can be 'dequantized' using the sample-query access model of Tang (PhD thesis, University of Washington, 2023). This results in a fast quantum-inspired classical implementation of $k$-means++, which we call QI-$k$-means++, with a running time of $O(Nd) + \tilde{O}(\zeta^2k^2d)$, where the $O(Nd)$ term accounts for setting up the sample-query access data structure. Experimental investigations show promising results for QI-$k$-means++ on large datasets with bounded aspect ratio. Finally, we combine our quantum $D^2$-sampling with the known $D^2$-sampling-based classical approximation scheme (i.e., a $(1+\varepsilon)$-approximation for any given $\varepsilon>0$) to obtain the first quantum approximation scheme for the $k$-means problem with polylogarithmic running time dependence on $N$.
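For reference, the classical seeding loop that the quantum and quantum-inspired algorithms accelerate is only a few lines (a standard sketch of $k$-means++ seeding, $O(Nkd)$ time):

```python
import numpy as np

def kmeanspp_seeding(V, k, rng=None):
    """k-means++ seeding via D^2-sampling: each new center is drawn with
    probability proportional to the squared distance to the nearest
    center chosen so far."""
    rng = np.random.default_rng() if rng is None else rng
    N = V.shape[0]
    centers = [V[rng.integers(N)]]                  # first center: uniform
    d2 = np.sum((V - centers[0]) ** 2, axis=1)      # squared dists to nearest center
    for _ in range(k - 1):
        idx = rng.choice(N, p=d2 / d2.sum())        # D^2-sample
        centers.append(V[idx])
        d2 = np.minimum(d2, np.sum((V - centers[-1]) ** 2, axis=1))
    return np.array(centers)
```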