We give the first tester-learner for halfspaces that succeeds universally over a wide class of structured distributions. Our universal tester-learner runs in fully polynomial time and has the following guarantee: the learner achieves error $O(\mathrm{opt}) + \epsilon$ on any labeled distribution that the tester accepts, and moreover, the tester accepts whenever the marginal is any distribution that satisfies a Poincar\'e inequality. In contrast to prior work on testable learning, our tester is not tailored to any single target distribution but rather succeeds for an entire target class of distributions. The class of Poincar\'e distributions includes all strongly log-concave distributions, and, assuming the Kannan--Lov\'{a}sz--Simonovits (KLS) conjecture, includes all log-concave distributions. In the special case where the label noise is known to be Massart, our tester-learner achieves error $\mathrm{opt} + \epsilon$ while accepting all log-concave distributions unconditionally (without assuming KLS). Our tests rely on checking hypercontractivity of the unknown distribution using a sum-of-squares (SOS) program, and crucially make use of the fact that Poincar\'e distributions are certifiably hypercontractive in the SOS framework.
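For reference, the Poincar\'e assumption takes the following standard form (the constant $C_P$ below is our notation, not taken from the abstract): a distribution $D$ on $\mathbb{R}^d$ satisfies a Poincar\'e inequality with constant $C_P$ if, for every differentiable $f : \mathbb{R}^d \to \mathbb{R}$,
\[
\operatorname{Var}_{x \sim D}[f(x)] \;\leq\; C_P \cdot \mathbb{E}_{x \sim D}\big[\|\nabla f(x)\|_2^2\big].
\]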
The problem of comparing probability distributions is at the heart of many tasks in statistics and machine learning, and the most classical comparison methods assume that the distributions occur in spaces of the same dimension. Recently, a new geometric solution has been proposed to address this problem when the measures live in Euclidean spaces of differing dimensions. Here, we study the same problem of comparing probability distributions of different dimensions in the tropical geometric setting, which is becoming increasingly relevant in computations and applications involving complex, geometric data structures. Specifically, we construct a Wasserstein distance between measures on different tropical projective tori --- the focal metric spaces in both theory and applications of tropical geometry --- via tropical mappings between probability measures. We prove that the two directions of mapping --- from the lower-dimensional space to the higher-dimensional one, or vice versa --- yield equivalent distances. As an important practical implication, our work provides a framework for comparing probability distributions on the spaces of phylogenetic trees with different leaf sets.
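For context, the tropical projective torus $\mathbb{R}^n / \mathbb{R}\mathbf{1}$ is commonly equipped with the tropical metric (also known as the generalized Hilbert projective metric); we recall its standard form, which is the notion of distance implicitly at play here:
\[
d_{\mathrm{tr}}(x, y) \;=\; \max_{1 \leq i \leq n} (x_i - y_i) \;-\; \min_{1 \leq i \leq n} (x_i - y_i).
\]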
We study the recovery of functions in the uniform norm based on function evaluations. We obtain worst case error bounds for general classes of functions in terms of the best $L_2$-approximation from a given nested sequence of subspaces, combined with bounds on the Christoffel function of these subspaces. Besides an explicit bound, we obtain that linear algorithms using $n$ samples are optimal up to a factor $\sqrt{n}$ among all algorithms using arbitrary linear information. Moreover, our results imply that linear sampling algorithms are optimal up to a constant factor for many reproducing kernel Hilbert spaces. We also discuss results for approximation in more general seminorms, including $L_p$-approximation.
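For reference, the Christoffel function admits the following standard description (in one common normalization; conventions vary): if $b_1, \dots, b_m$ is an orthonormal basis of the subspace $V_m \subset L_2$, then
\[
\lambda(V_m, x) \;=\; \Big( \sum_{k=1}^{m} |b_k(x)|^2 \Big)^{-1},
\]
a quantity that is independent of the choice of orthonormal basis.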
We study universal rates for multiclass classification, establishing the optimal rates (up to log factors) for all hypothesis classes. This generalizes previous results on binary classification (Bousquet, Hanneke, Moran, van Handel, and Yehudayoff, 2021), and resolves an open question studied by Kalavasis, Velegkas, and Karbasi (2022), who handled the multiclass setting with a bounded number of class labels. In contrast, our result applies to any countable label space. Even for finite label spaces, our proofs provide more precise bounds on the learning curves, as they do not depend on the number of labels. Specifically, we show that any class admits exponential rates if and only if it has no infinite Littlestone tree, and admits (near-)linear rates if and only if it has no infinite Daniely--Shalev-Shwartz--Littlestone (DSL) tree, and otherwise requires arbitrarily slow rates. DSL trees are a new structure we define in this work, in which each node of the tree is given by a pseudo-cube of possible classifications of a given set of points. Pseudo-cubes are a structure rooted in the work of Daniely and Shalev-Shwartz (2014), and recently shown by Brukhim, Carmon, Dinur, Moran, and Yehudayoff (2022) to characterize PAC learnability (i.e., uniform rates) for multiclass classification. We also resolve an open question of Kalavasis, Velegkas, and Karbasi (2022) regarding the equivalence of classes having infinite Graph-Littlestone (GL) trees versus infinite Natarajan-Littlestone (NL) trees, showing that they are indeed equivalent.
Mutual coherence is a measure of similarity between two opinions. Although the notion comes from philosophy, it is essential for a wide range of technologies, e.g., the Wahl-O-Mat system. In Germany, this system helps voters to find candidates that are the closest to their political preferences. The exact computation of mutual coherence is highly time-consuming due to the iteration over all subsets of an opinion. Moreover, for every subset, an instance of the SAT model counting problem has to be solved, a problem known to be computationally hard (#P-complete). This work is the first study to accelerate this computation. We model the distribution of the so-called confirmation values as a mixture of three Gaussians and present efficient heuristics to estimate its model parameters. The mutual coherence is then approximated with the expected value of the distribution. Some of the presented algorithms run in fully polynomial time; others only require solving a small number of instances of the SAT model counting problem. The average squared error of our best algorithm lies below 0.0035, which is negligible given the efficiency gained. Furthermore, the approximation is accurate enough to be used in Wahl-O-Mat-like systems.
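As a rough illustration of the mixture-based approximation (not the authors' heuristics: here the model parameters are fitted with off-the-shelf EM from scikit-learn, and the function and variable names are hypothetical):

import numpy as np
from sklearn.mixture import GaussianMixture

def approximate_mutual_coherence(confirmation_values):
    """Fit a 3-component Gaussian mixture to sampled confirmation
    values and return the mixture's expected value as the estimate
    of mutual coherence. (Sketch: the paper fits the parameters with
    dedicated heuristics rather than plain EM.)"""
    X = np.asarray(confirmation_values, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
    # Expected value of a mixture = weights dotted with component means.
    return float(gmm.weights_ @ gmm.means_.ravel())

# Hypothetical usage; real values would come from #SAT computations.
rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(m, 0.1, 300) for m in (-0.5, 0.0, 0.6)])
print(approximate_mutual_coherence(vals))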
Quality diversity~(QD) is a branch of evolutionary computation that has gained increasing interest in recent years. The Map-Elites QD approach defines a feature space, i.e., a partition of the search space, and stores the best solution for each cell of this space. We study a simple QD algorithm in the context of pseudo-Boolean optimisation on the ``number of ones'' feature space, where the $i$th cell stores the best solution amongst those with a number of ones in $[(i-1)k, ik-1]$. Here $k$ is a granularity parameter with $1 \leq k \leq n+1$. We give a tight bound on the expected time until all cells are covered, for arbitrary fitness functions and all $k$, and analyse the expected optimisation time of QD on \textsc{OneMax} and other problems whose structure aligns favourably with the feature space. On combinatorial problems, we show that QD efficiently finds a $(1-1/e)$-approximation when maximising any monotone submodular function under a single uniform cardinality constraint. Defining the feature space as the number of connected components of a connected graph, we show that QD finds a minimum spanning tree in expected polynomial time.
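An algorithm of this flavour can be sketched in a few lines; the following minimal Python version (hypothetical names, standard bit mutation assumed as the variation operator) shows a Map-Elites-style archive over the ``number of ones'' feature space:

import random

def qd_onemax(n, k, budget=20_000):
    """Minimal Map-Elites-style QD for pseudo-Boolean maximisation,
    sketched for OneMax. The feature space partitions bitstrings by
    their number of ones into cells of width k; each cell keeps the
    fittest solution seen so far."""
    fitness = lambda x: sum(x)            # OneMax: count the ones
    cell = lambda x: sum(x) // k          # index of the solution's cell
    x = [random.randint(0, 1) for _ in range(n)]
    archive = {cell(x): x}
    for _ in range(budget):
        parent = random.choice(list(archive.values()))
        # Standard bit mutation: flip each bit independently w.p. 1/n.
        child = [b ^ (random.random() < 1 / n) for b in parent]
        c = cell(child)
        if c not in archive or fitness(child) > fitness(archive[c]):
            archive[c] = child
    return archive

archive = qd_onemax(n=20, k=2)
print(sorted(archive))                        # indices of covered cells
print(max(sum(x) for x in archive.values()))  # best OneMax value found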
Elevator systems are a kind of Cyber-Physical System (CPS), and as such, their test cases are usually complex and long-running. This is mainly because realistic test scenarios are employed (e.g., for testing elevator dispatching algorithms, typically a full day of passengers traveling through a system of elevators is used). However, in such a context, when a failure needs to be reproduced, it is highly beneficial to provide a minimal test input to the software developers. This way, analyzing and localizing the root cause of the failure is easier and faster. Delta debugging has been found to be an efficient technique for reducing failure-inducing test inputs. In this paper, we enhance this technique by first monitoring the environment in which the CPS operates as well as its physical states. With the monitored information, we search for stable states of the CPS during the execution of the simulation. In a second step, we use the identified stable states to help the delta debugging algorithm isolate the failure-inducing test inputs more efficiently. We report our experience of applying our approach to an industrial elevator dispatching algorithm. An empirical evaluation carried out with real operational data from a real installation of elevators suggests that the proposed environment-wise delta debugging algorithm is between 1.3 and 1.8 times faster than traditional delta debugging, while producing a larger reduction in the failure-inducing test inputs. The results provided by the different implemented versions of the delta debugging algorithm are qualitatively assessed with domain experts. This assessment provides new insights and lessons learned, such as potential applications of the delta debugging algorithm beyond debugging.
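For reference, a compact sketch of the classic ddmin algorithm (Zeller and Hildebrandt) that serves as the baseline being enhanced here; `fails` is a hypothetical oracle that replays a candidate test input and reports whether the failure reappears, and this is not the authors' implementation:

def ddmin(test_input, fails):
    """Classic ddmin: shrink a failure-inducing input to a 1-minimal
    one, i.e., removing any single remaining element makes the
    failure disappear."""
    n = 2
    while len(test_input) >= 2:
        chunk = len(test_input) // n
        subsets = [test_input[i:i + chunk]
                   for i in range(0, len(test_input), chunk)]
        reduced = False
        for i, subset in enumerate(subsets):
            complement = [x for j, s in enumerate(subsets) if j != i
                          for x in s]
            if fails(subset):             # keep only this chunk
                test_input, n, reduced = subset, 2, True
                break
            if fails(complement):         # drop this chunk
                test_input, n = complement, max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(test_input):      # finest granularity reached
                break
            n = min(n * 2, len(test_input))
    return test_input

print(ddmin(list(range(8)), lambda t: 3 in t and 7 in t))  # -> [3, 7]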
In this paper, we devise a scheme for kernelizing, in sublinear space and polynomial time, various problems on planar graphs. The scheme exploits planarity to ensure that the resulting algorithms run in polynomial time and use $O((\sqrt{n} + k) \log n)$ bits of space, where $n$ is the number of vertices in the input instance and $k$ is the intended solution size. As examples, we apply the scheme to Dominating Set and Vertex Cover. For Dominating Set, we also show that a well-known kernelization algorithm due to Alber et al. (JACM 2004) can be carried out in polynomial time and space $O(k \log n)$. Along the way, we devise restricted-memory procedures for computing region decompositions and approximating the aforementioned problems, which might be of independent interest.
Let $G$ be an undirected graph, and $s,t$ distinguished vertices of $G$. A minimal $s,t$-separator is an inclusion-wise minimal vertex set whose removal places $s$ and $t$ in distinct connected components. We present an algorithm for listing the minimal $s,t$-separators of a graph in non-decreasing order of cardinality, with polynomial delay. This problem finds applications in various algorithms parameterized by treewidth, including query evaluation in relational databases, probabilistic inference, and many more. In the process, we prove several results that are of independent interest. We establish a new island of tractability for the intensively studied 2-disjoint connected subgraphs problem, which is NP-complete even for restricted graph classes that include planar graphs, and we prove new characterizations of minimal $s,t$-separators. Ours is the first ranked enumeration algorithm for minimal separators whose delay is polynomial in the size of the input graph.
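The following sketch (assuming networkx, a connected graph, and non-adjacent $s,t$; it is not the paper's ranked enumeration algorithm) illustrates a standard lemma in this area: any $s,t$-separator can be pruned to a minimal one by taking the neighbourhood of a terminal's component, since a separator $S$ is minimal exactly when every vertex of $S$ has neighbours in both the $s$-side and $t$-side components of $G - S$:

import networkx as nx

def prune(G, S, v):
    """Neighbourhood of the component of v in G - S: a separator
    contained in S, 'pulled tight' towards v."""
    comp = nx.node_connected_component(G.subgraph(set(G) - S), v)
    return {u for c in comp for u in G[c]} - comp

def one_minimal_st_separator(G, s, t):
    """Return one minimal s,t-separator of G."""
    S = set(G[s])       # N(s) separates s from t; each vertex sees s
    return prune(G, S, t)  # now each vertex also sees the t-side

G = nx.cycle_graph(6)                       # cycle 0-1-2-3-4-5-0
print(one_minimal_st_separator(G, 0, 3))    # -> {1, 5}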
The volume function $V(t)$ of a compact set $S \subset \mathbb{R}^d$ is the Lebesgue measure of the set of points within distance at most $t$ of $S$. According to some classical results in geometric measure theory, the volume function turns out to be a polynomial, at least on a finite interval, under a quite intuitive, easy-to-interpret sufficient condition (called ``positive reach''), which can be seen as an extension of the notion of convexity. However, many other simple sets, not fulfilling the positive reach condition, also have a polynomial volume function. To our knowledge, there is no general, simple geometric description of such sets. Still, the polynomial character of $V(t)$ has some relevant consequences, since the polynomial coefficients carry useful geometric information. In particular, the constant term is the volume of $S$ and the first-order coefficient is the boundary measure (in Minkowski's sense). This paper is focused on sets whose volume function is polynomial on some interval starting at zero, whose length (which we call the ``polynomial reach'') might be unknown. Our main goal is to approximate this polynomial reach by statistical means, using only a large enough random sample of points inside $S$. The practical motivation is simple: when the value of the polynomial reach, or rather a lower bound for it, is approximately known, the polynomial coefficients can be estimated from the sample points using standard methods in polynomial approximation. As a result, we get a quite general method to estimate the volume and boundary measure of the set, relying only on an inner sample of points and not requiring the use of any smoothing parameter. This paper explores the theoretical and practical aspects of this idea.
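One simple, hypothetical way to instantiate this pipeline numerically (Monte Carlo estimation of $V(t)$ via the union of balls around the sample, followed by a least-squares polynomial fit; the paper's estimators and guarantees may differ):

import numpy as np
from scipy.spatial import cKDTree

def volume_polynomial(sample, r, degree=3, n_mc=200_000, seed=0):
    """Estimate t -> V(t) on [0, r] from an inner sample of S and fit
    a polynomial to it. V(t) is approximated by the volume of the
    union of balls of radius t centred at the sample points (a
    Devroye-Wise-type surrogate), computed by Monte Carlo over a
    bounding box. Returns coefficients c[0] + c[1]*t + ...; c[0]
    approximates the volume of S and c[1] its boundary measure."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    lo, hi = sample.min(axis=0) - r, sample.max(axis=0) + r
    box_volume = float(np.prod(hi - lo))
    pts = rng.uniform(lo, hi, size=(n_mc, sample.shape[1]))
    dists, _ = cKDTree(sample).query(pts)  # distance to nearest sample point
    ts = np.linspace(0.0, r, 30)
    V = np.array([box_volume * np.mean(dists <= t) for t in ts])
    return np.polynomial.polynomial.polyfit(ts, V, degree)

# Hypothetical usage with a uniform sample from the unit disc, where
# V(t) = pi*(1+t)^2, so c[0] ~ pi (area) and c[1] ~ 2*pi (perimeter).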
Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for precise execution of exactly defined tasks such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts, leading to more robust, better-performing, more interpretable, more computationally efficient, and more data-efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable, so that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
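As a tiny example of such a relaxation (in the spirit of differentiable sorting networks, though not taken verbatim from the thesis), the conditional swap $(\min(a,b), \max(a,b))$ at the heart of a sorting network can be smoothed with a logistic sigmoid so that gradients flow through it:

import numpy as np

def soft_cswap(a, b, tau=1.0):
    """Differentiable relaxation of a sorting-network conditional
    swap. The hard operation (min(a,b), max(a,b)) is recovered as
    tau -> 0; for tau > 0 the outputs are smooth in a and b, so the
    surrounding network can be trained end-to-end."""
    s = 1.0 / (1.0 + np.exp(-(b - a) / tau))  # soft indicator of a <= b
    soft_min = s * a + (1 - s) * b
    soft_max = (1 - s) * a + s * b
    return soft_min, soft_max

print(soft_cswap(3.0, 1.0, tau=0.1))  # ~ (1.0, 3.0)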