We consider the problems of testing and learning an $n$-qubit $k$-local Hamiltonian from queries to its evolution operator, with respect to the 2-norm of the Pauli spectrum or, equivalently, the normalized Frobenius norm. For testing whether a Hamiltonian is $\epsilon_1$-close to $k$-local or $\epsilon_2$-far from $k$-local, we show that $O(1/(\epsilon_2-\epsilon_1)^{8})$ queries suffice. This solves two questions posed in a recent work by Bluhm, Caro and Oufkir. For learning up to error $\epsilon$, we show that $\exp(O(k^2+k\log(1/\epsilon)))$ queries suffice. Our proofs are simple, concise and based on Pauli-analytic techniques.
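For orientation, the underlying norm is the standard one from Pauli analysis (textbook definitions, not results of the paper): writing $H=\sum_{x\in\{I,X,Y,Z\}^n}\widehat{H}(x)\,\sigma_x$ with $\widehat{H}(x)=2^{-n}\operatorname{Tr}[\sigma_x H]$, one has
\[
\|H\|_2^2 \;=\; \sum_{x} |\widehat{H}(x)|^2 \;=\; \frac{1}{2^n}\operatorname{Tr}\!\left[H^\dagger H\right],
\]
and $H$ is $k$-local in this Pauli sense when $\widehat{H}(x)=0$ for every Pauli string $x$ acting non-trivially on more than $k$ qubits.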
This article introduces a general mesh intersection algorithm that exactly computes the so-called Weiler model and uses it to implement boolean operations with arbitrary multi-operand expressions, CSG (constructive solid geometry) and some mesh repair operations. From an input polygon soup, the algorithm first computes the co-refinement, with an exact representation of the intersection points. Then the decomposition of 3D space into volumetric regions (the Weiler model) is constructed by sorting the facets around the non-manifold intersection edges (radial sort), using specialized exact predicates. Finally, based on the input boolean expression, the triangular facets that belong to the boundary of the result are classified. This is, to our knowledge, the first algorithm that computes an exact Weiler model. To implement all the involved predicates and constructions, two geometric kernels are proposed, tested and discussed (arithmetic expansions and multi-precision floating point). As a guiding principle, the combinatorial information shared between the steps is kept as simple as possible; this is made possible by treating all the particular cases in the kernel. In particular, triangles with intersections are remeshed using the (uniquely defined) Constrained Delaunay Triangulation, with symbolic perturbations to disambiguate configurations with co-cyclic points. This makes it easy to discard the duplicated triangles that appear when remeshing overlapping facets. The method is tested and compared with previous work on the existing "thingi10K" dataset (to test co-refinement and mesh repair) and on a new "thingiCSG" dataset, made publicly available (to test the full CSG pipeline), on a variety of interesting examples featuring different types of "pathologies".
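To illustrate what an exact geometric predicate looks like, here is a minimal sketch of the classical orient3d test using Python's exact rational arithmetic. The paper's kernels (arithmetic expansions and multi-precision floating point) compute the same kind of signs far more efficiently, and the radial-sort predicates it introduces are more involved; this sketch only conveys the idea of exactness.

```python
from fractions import Fraction

def orient3d(a, b, c, d):
    """Sign of the determinant deciding on which side of the plane through
    a, b, c the point d lies (+1 / -1 / 0).  Exact for rational inputs;
    illustration only, not the kernels used in the paper."""
    ax, ay, az = (Fraction(x) for x in a)
    bx, by, bz = (Fraction(x) for x in b)
    cx, cy, cz = (Fraction(x) for x in c)
    dx, dy, dz = (Fraction(x) for x in d)
    # 3x3 determinant of the rows a-d, b-d, c-d
    m = [
        (ax - dx, ay - dy, az - dz),
        (bx - dx, by - dy, bz - dz),
        (cx - dx, cy - dy, cz - dz),
    ]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return (det > 0) - (det < 0)

print(orient3d((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))      # -1
print(orient3d((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, -1)))     # +1
print(orient3d((0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 0)))  # 0 (coplanar)
```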
Based on Bellman's dynamic-programming principle, Lange (2024) presents an approximate method for filtering, smoothing and parameter estimation for possibly non-linear and/or non-Gaussian state-space models. While the approach applies more generally, this pedagogical note highlights the main results in the case where (i) the state transition remains linear and Gaussian while (ii) the observation density is log-concave and sufficiently smooth in the state variable. I demonstrate how Kalman's (1960) filter and Rauch et al.'s (1965) smoother can be obtained as special cases within the proposed framework. The main aim is to present non-experts (and my own students) with an accessible introduction, enabling them to implement the proposed methods.
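For reference, the linear-Gaussian special case that the note recovers is the textbook Kalman filter. A minimal sketch (the standard algorithm, not the general Bellman-filter recursion of Lange, 2024):

```python
import numpy as np

def kalman_filter(y, A, Q, H, R, m0, P0):
    """Kalman filter for x_t = A x_{t-1} + noise(Q), y_t = H x_t + noise(R).
    This is the classical linear-Gaussian special case discussed in the note,
    not the general approximate filter for non-Gaussian observations."""
    m, P = m0, P0
    means, covs = [], []
    for yt in y:
        # prediction step
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        # update step (linear-Gaussian observation density)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        m = m_pred + K @ (yt - H @ m_pred)
        P = (np.eye(len(m)) - K @ H) @ P_pred
        means.append(m)
        covs.append(P)
    return means, covs
```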
The complexity of the promise constraint satisfaction problem $\operatorname{PCSP}(\mathbf{A},\mathbf{B})$ is largely unknown, even for symmetric $\mathbf{A}$ and $\mathbf{B}$, except for the case when $\mathbf{A}$ and $\mathbf{B}$ are Boolean. First, we establish a dichotomy for $\operatorname{PCSP}(\mathbf{A},\mathbf{B})$ where $\mathbf{A}, \mathbf{B}$ are symmetric, $\mathbf{B}$ is functional (i.e. any $r-1$ elements of an $r$-ary tuple uniquely determine the last one), and $(\mathbf{A},\mathbf{B})$ satisfies technical conditions we introduce called dependency and additivity. This result implies a dichotomy for $\operatorname{PCSP}(\mathbf{A},\mathbf{B})$ with $\mathbf{A},\mathbf{B}$ symmetric and $\mathbf{B}$ functional if (i) $\mathbf{A}$ is Boolean, or (ii) $\mathbf{A}$ is a hypergraph of small uniformity, or (iii) $\mathbf{A}$ has a relation $R^{\mathbf{A}}$ of arity at least 3 such that the hypergraph diameter of $(A, R^{\mathbf{A}})$ is at most 1. Second, we show that for $\operatorname{PCSP}(\mathbf{A},\mathbf{B})$, where $\mathbf{A}$ and $\mathbf{B}$ contain a single relation, $\mathbf{A}$ satisfies a technical condition called balancedness, and $\mathbf{B}$ is arbitrary, the combined basic linear programming relaxation (BLP) and the affine integer programming relaxation (AIP) is no more powerful than the (in general strictly weaker) AIP relaxation. Balanced structures $\mathbf{A}$ include symmetric $\mathbf{A}$ and, more generally, $\mathbf{A}$ preserved by a transitive permutation group.
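As an illustration of the terminology (our example, not taken from the paper): the ternary relation
\[
R^{\mathbf{B}}=\{(x,y,z)\in\{0,1\}^3 : x+y+z\equiv 0 \pmod 2\}
\]
is symmetric and functional, since any two coordinates of a tuple uniquely determine the third.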
We consider the problem of learning low-degree quantum objects up to $\varepsilon$-error in $\ell_2$-distance. We show the following results: $(i)$ unknown $n$-qubit degree-$d$ (in the Pauli basis) quantum channels and unitaries can be learned using $O(1/\varepsilon^d)$ queries (independent of $n$), $(ii)$ polynomials $p:\{-1,1\}^n\rightarrow [-1,1]$ arising from $d$-query quantum algorithms can be classically learned from $O((1/\varepsilon)^d\cdot \log n)$ many random examples $(x,p(x))$ (which implies learnability even for $d=O(\log n)$), and $(iii)$ degree-$d$ polynomials $p:\{-1,1\}^n\to [-1,1]$ can be learned through $O(1/\varepsilon^d)$ queries to a quantum unitary $U_p$ that block-encodes $p$. Our main technical contributions are new Bohnenblust-Hille inequalities for quantum channels and completely bounded polynomials.
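For orientation, the classical Bohnenblust-Hille inequality on the Boolean cube (the known commutative statement, not the new quantum inequalities of the paper) says that every degree-$d$ polynomial $p:\{-1,1\}^n\to[-1,1]$ with Fourier expansion $p(x)=\sum_{|S|\le d}\widehat{p}(S)\prod_{i\in S}x_i$ satisfies
\[
\Big(\sum_{|S|\le d}\big|\widehat{p}(S)\big|^{\frac{2d}{d+1}}\Big)^{\frac{d+1}{2d}} \;\le\; C(d)\,\|p\|_\infty ,
\]
with a constant $C(d)$ independent of $n$; it is this dimension-free feature that makes query complexities independent of $n$ possible.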
One of the main theoretical challenges in learning dynamical systems from data is providing upper bounds on the generalization error, that is, the difference between the expected prediction error and the empirical prediction error measured on some finite sample. In machine learning, a popular class of such bounds are the so-called Probably Approximately Correct (PAC) bounds. In this paper, we derive a PAC bound for stable continuous-time linear parameter-varying (LPV) systems. Our bound depends on the $H_2$ norm of the chosen class of LPV systems, but does not depend on the time interval over which the signals are considered.
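To fix notation (a common affine parametrization; the paper's precise model class may differ in details), a continuous-time LPV system takes the form
\[
\dot{x}(t) = A(p(t))\,x(t) + B(p(t))\,u(t), \qquad y(t) = C\,x(t), \qquad A(p)=A_0+\sum_{i=1}^{m} p_i A_i,
\]
where $p(t)$ is the scheduling signal, and stability and the $H_2$ norm are understood uniformly over the admissible scheduling signals.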
It is known that when the diffuse interface thickness $\epsilon$ vanishes, the sharp interface limit of the stochastic reaction-diffusion equation is formally a stochastic geometric flow. To capture and simulate such a geometric flow, it is crucial to develop numerical approximations whose error bounds depend on $\frac{1}{\epsilon}$ only polynomially. However, due to the loss of the spectral estimate for the linearized stochastic reaction-diffusion equation, obtaining such error bounds for numerical approximations has been an open problem. In this paper, we solve this weak error bound problem for stochastic reaction-diffusion equations near the sharp interface limit. We first introduce a regularized problem which enjoys exponential ergodicity. Then we present a regularity analysis of the regularized Kolmogorov and Poisson equations whose bounds depend on $\frac{1}{\epsilon}$ only polynomially. Building on this, we establish the desired weak error bound. This phenomenon can be viewed as a regularization effect of the noise on the numerical approximation of stochastic partial differential equations (SPDEs). As a by-product, a central limit theorem for the weak approximation is shown near the sharp interface limit. Our method of proof extends to a number of other spatial and temporal numerical approximations for semilinear SPDEs.
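A prototypical instance to fix ideas (our example of a stochastic reaction-diffusion equation with a sharp interface limit; the paper's setting is stated in general terms) is the stochastic Allen-Cahn equation
\[
\mathrm{d}u_\epsilon = \Big(\Delta u_\epsilon + \frac{1}{\epsilon^2}\big(u_\epsilon - u_\epsilon^3\big)\Big)\,\mathrm{d}t + \mathrm{d}W(t),
\]
whose sharp interface limit as $\epsilon\to 0$ is formally a stochastically perturbed mean curvature flow.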
Out-of-distribution (OOD) detection is critical when deploying machine learning models in the real world. Outlier exposure methods, which incorporate auxiliary outlier data in the training process, can drastically improve OOD detection performance compared to approaches without advanced training strategies. We introduce Hopfield Boosting, a boosting approach that leverages the modern Hopfield energy (MHE) to sharpen the decision boundary between in-distribution and OOD data. Hopfield Boosting encourages the model to concentrate on hard-to-distinguish auxiliary outlier examples that lie close to the decision boundary between in-distribution and auxiliary outlier data. Our method achieves a new state-of-the-art in OOD detection with outlier exposure, improving the FPR95 metric from 2.28 to 0.92 on CIFAR-10 and from 11.76 to 7.94 on CIFAR-100.
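As a building block, the modern Hopfield energy of a query pattern with respect to a set of stored patterns can be computed as below (the standard MHE of Ramsauer et al.; how Hopfield Boosting weights samples and turns such energies into its OOD score and boosting scheme is specific to the paper):

```python
import numpy as np

def modern_hopfield_energy(xi, X, beta=1.0):
    """Modern Hopfield energy of query xi (shape (d,)) w.r.t. stored patterns X
    (shape (d, N), one pattern per column).  Lower energy means the query is
    closer to the stored patterns.  Basic building block only; not the full
    Hopfield Boosting objective."""
    N = X.shape[1]
    M = np.max(np.linalg.norm(X, axis=0))          # largest pattern norm
    lse = np.log(np.sum(np.exp(beta * (X.T @ xi)))) / beta   # log-sum-exp term
    return -lse + 0.5 * xi @ xi + np.log(N) / beta + 0.5 * M**2
```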
The correlated Erd\H{o}s-R\'enyi random graph ensemble is a probability law on pairs of graphs with $n$ vertices, parametrized by their average degree $\lambda$ and their correlation coefficient $s$. It can be used as a benchmark for the graph alignment problem, in which the labels of the vertices of one of the graphs are reshuffled by an unknown permutation; the goal is to infer this permutation and thus properly match the pairs of vertices in both graphs. A series of recent works has unveiled the role of Otter's constant $\alpha$ (which controls the exponential rate of growth of the number of unlabeled rooted trees as a function of their size) in this problem: for $s>\sqrt{\alpha}$ and $\lambda$ large enough it is possible to recover, in time polynomial in $n$, a positive fraction of the hidden permutation. The exponent of this polynomial growth is however quite large and depends on the other parameters, which limits the range of applications of the algorithm. In this work we present a family of faster algorithms for this task, show through numerical simulations that their accuracy is only slightly reduced with respect to the original one, and conjecture that they undergo, in the large $\lambda$ limit, phase transitions at modified Otter's thresholds $\sqrt{\widehat{\alpha}}>\sqrt{\alpha}$, with $\widehat{\alpha}$ related to the enumeration of a restricted family of trees.
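One standard way to sample from this ensemble is to subsample a parent graph twice; a minimal sketch (this construction gives average degree $\lambda$ and edge correlation approximately $s$ in the sparse regime; the paper's exact convention may differ in minor details):

```python
import numpy as np

def correlated_erdos_renyi(n, lam, s, seed=None):
    """Sample a correlated pair of Erdos-Renyi graphs: draw a parent graph
    G(n, lam/(s*n)) and keep each of its edges independently with probability s
    in each of the two children.  (One standard construction, illustration only.)"""
    rng = np.random.default_rng(seed)
    p_parent = lam / (s * n)
    parent = np.triu(rng.random((n, n)) < p_parent, k=1)   # parent edges (upper triangle)
    keep1 = np.triu(rng.random((n, n)) < s, k=1)
    keep2 = np.triu(rng.random((n, n)) < s, k=1)
    A = parent & keep1
    B = parent & keep2
    return A | A.T, B | B.T   # symmetric boolean adjacency matrices
```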
We solve the PBW-like problem of normal ordering for enveloping algebras of direct sums.
This paper studies the numerical approximation of evolution equations by nonlinear parametrizations $u(t)=\Phi(q(t))$ with time-dependent parameters $q(t)$, which are to be determined in the computation. The motivation comes from approximations in quantum dynamics by multiple Gaussians and approximations of various dynamical problems by tensor networks and neural networks. In all these cases, the parametrization is typically irregular: the derivative $\Phi'(q)$ can have arbitrarily small singular values and may have varying rank. We derive approximation results for a regularized approach in the time-continuous case as well as in time-discretized cases. With a suitable choice of the regularization parameter and the time stepsize, the approach can be successfully applied in irregular situations, even though this runs counter to the basic principle in numerical analysis of avoiding ill-posed subproblems when aiming for a stable algorithm. Numerical experiments with sums of Gaussians for approximating quantum dynamics and with neural networks for approximating the flow map of a system of ordinary differential equations illustrate and complement the theoretical results.
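A minimal sketch of the regularized idea, assuming a Tikhonov-type regularization of the least-squares problem for the parameter velocity and a simple explicit Euler step (an illustration only; the paper analyses more careful time discretizations and their approximation errors):

```python
import numpy as np

def regularized_step(q, Phi, dPhi, F, dt, reg):
    """One explicit Euler step for the parameters q of an approximation
    u = Phi(q) to the evolution equation u' = F(u).  The parameter velocity v
    solves the Tikhonov-regularized least-squares problem
        min_v || dPhi(q) v - F(Phi(q)) ||^2 + reg^2 ||v||^2,
    which remains well-posed even when dPhi(q) has arbitrarily small singular
    values or varying rank.  (Sketch of the regularization idea only.)"""
    J = dPhi(q)                      # Jacobian Phi'(q), shape (dim_u, dim_q)
    r = F(Phi(q))                    # right-hand side evaluated at u = Phi(q)
    A = J.T @ J + reg**2 * np.eye(J.shape[1])
    v = np.linalg.solve(A, J.T @ r)  # regularized normal equations
    return q + dt * v
```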