We consider the problem of computing the (two-sided) Hausdorff distance between the unit $\ell_{p_{1}}$ and $\ell_{p_{2}}$ norm balls in finite-dimensional Euclidean space for $1 \leq p_1 < p_2 \leq \infty$, and derive a closed-form formula for this distance. We also derive a closed-form formula for the Hausdorff distance between the $k_1$ and $k_2$ unit $D$-norm balls, which are certain polyhedral norm balls in $d$ dimensions, for $1 \leq k_1 < k_2 \leq d$. When two different $\ell_p$ norm balls are transformed by a common linear map, we obtain several estimates for the Hausdorff distance between the resulting convex sets. These estimates upper bound the Hausdorff distance or its expectation, depending on whether the linear map is arbitrary or random. We then generalize these results to the Hausdorff distance between two set-valued integrals obtained by applying a parametric family of linear maps to different $\ell_p$ unit norm balls and then taking Minkowski sums of the resulting sets in a limiting sense. To illustrate an application, we show that the problem of computing the Hausdorff distance between the reach sets of a linear dynamical system with different unit norm ball-valued input uncertainties reduces to this set-valued integral setting.
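For intuition, the sketch below numerically approximates this distance in the plane by discretizing the two boundaries and taking the symmetric discrete Hausdorff distance between the point clouds. This is only a Monte Carlo approximation, not the closed form derived in the paper, and the helper names (`lp_sphere_points`, `hausdorff_lp_estimate`) are ours.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def lp_sphere_points(p, d=2, n=4000, seed=0):
    """Points on the unit l_p sphere in R^d, obtained by normalizing Gaussian samples."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, ord=p, axis=1, keepdims=True)

def hausdorff_lp_estimate(p1, p2, d=2, n=4000):
    """Approximate two-sided Hausdorff distance between the unit l_{p1} and l_{p2}
    balls, computed from discretized boundaries; for nested convex bodies this
    equals the Hausdorff distance between the solid balls (up to discretization)."""
    U = lp_sphere_points(p1, d, n, seed=0)
    V = lp_sphere_points(p2, d, n, seed=1)
    return max(directed_hausdorff(U, V)[0], directed_hausdorff(V, U)[0])

# Sanity check in d = 2: for p1 = 1, p2 = inf the distance is attained at a corner
# of the l_inf ball, whose Euclidean distance to the l_1 ball is 1/sqrt(2) ~ 0.707.
print(hausdorff_lp_estimate(1, np.inf, d=2))
```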
This paper considers the problem of computing the product of two massive matrices $\mathbf{A}$ and $\mathbf{B}$ in a distributed manner. We provide a modulo technique that can be applied to coded distributed matrix multiplication problems to reduce the recovery threshold. This technique exploits the special structure of the interpolation points and can be applied to many existing coded matrix designs. The recently studied discrete Fourier transform based code achieves a smaller recovery threshold than the optimal MatDot code, at the expense of not being able to tolerate stragglers. We also propose a distributed matrix multiplication scheme based on the idea of locally repairable codes, which reduces the recovery threshold of the MatDot code while providing resilience to stragglers. We also apply our constructions to a class of matrix computation problems that includes generalized linear models as a special case.
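As background for the recovery-threshold discussion, here is a minimal sketch of the standard MatDot code (recovery threshold $2p-1$ when each factor is split into $p$ blocks); it does not implement the paper's modulo technique or the locally repairable construction, and the function names are ours.

```python
import numpy as np

def matdot_encode(A, B, p, xs):
    """Encode A (split into p column blocks) and B (split into p row blocks)
    at evaluation points xs; worker t receives (A_enc[t], B_enc[t])."""
    assert A.shape[1] % p == 0
    Ab = np.split(A, p, axis=1)                 # A = [A_1 ... A_p]
    Bb = np.split(B, p, axis=0)                 # B = [B_1; ...; B_p]
    A_enc = [sum(Ab[i] * x**i for i in range(p)) for x in xs]
    B_enc = [sum(Bb[j] * x**(p - 1 - j) for j in range(p)) for x in xs]
    return A_enc, B_enc

def matdot_decode(results, xs, p):
    """Recover AB from any 2p-1 worker products: the product polynomial has
    degree 2p-2, and AB is its coefficient of x^{p-1}."""
    k = 2 * p - 1                                # recovery threshold of MatDot
    xs, results = xs[:k], results[:k]
    V = np.vander(np.array(xs), k, increasing=True)          # interpolation matrix
    coeffs = np.linalg.solve(V, np.stack([R.ravel() for R in results]))
    return coeffs[p - 1].reshape(results[0].shape)

# Toy usage: 6 workers, p = 2 blocks, so any 2p-1 = 3 fastest workers suffice.
A = np.random.randn(4, 4); B = np.random.randn(4, 4)
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
A_enc, B_enc = matdot_encode(A, B, 2, xs)
results = [A_enc[t] @ B_enc[t] for t in range(6)]            # each worker's small product
print(np.allclose(matdot_decode(results, xs, 2), A @ B))
```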
The key assumption underlying linear Markov Decision Processes (MDPs) is that the learner has access to a known feature map $\phi(x, a)$ that maps state-action pairs to $d$-dimensional vectors, and that the rewards and transitions are linear functions in this representation. But where do these features come from? In the absence of expert domain knowledge, a tempting strategy is to use the ``kitchen sink'' approach and hope that the true features are included in a much larger set of potential features. In this paper we revisit linear MDPs from the perspective of feature selection. In a $k$-sparse linear MDP, there is an unknown subset $S \subset [d]$ of size $k$ containing all the relevant features, and the goal is to learn a near-optimal policy in only poly$(k,\log d)$ interactions with the environment. Our main result is the first polynomial-time algorithm for this problem. In contrast, earlier works either made prohibitively strong assumptions that obviated the need for exploration, or required solving computationally intractable optimization problems. Along the way we introduce the notion of an emulator: a succinct approximate representation of the transitions that suffices for computing certain Bellman backups. Since linear MDPs are a non-parametric model, it is not even obvious whether polynomial-sized emulators exist. We show that they do exist and can be computed efficiently via convex programming. As a corollary of our main result, we give an algorithm for learning a near-optimal policy in block MDPs whose decoding function is a low-depth decision tree; the algorithm runs in quasi-polynomial time and takes a polynomial number of samples. This can be seen as a reinforcement learning analogue of classic results in computational learning theory. Furthermore, it gives a natural model where improving the sample complexity via representation learning is computationally feasible.
For estimating the proportion of false null hypotheses in multiple testing, a family of estimators by Storey (2002) is widely used in the applied and statistical literature, with many methods suggested for selecting the parameter $\lambda$. Inspired by change-point concepts, our new approach to the latter problem first approximates the $p$-value plot by a piecewise linear function with a single change-point and then selects the $p$-value at the change-point location as $\lambda$. Simulations show that our method yields an RMSE that is among the smallest across a variety of settings, and we extend it to handle estimation with superuniform $p$-values. We provide asymptotic theory for our estimator, relying on the theory of quantile processes. Additionally, we propose an application in the change-point literature and illustrate it using high-dimensional CNV data.
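A minimal sketch of this idea, under our simplified reading: two unconstrained least-squares line segments stand in for the piecewise linear fit, and the plain Storey estimator is used; `changepoint_lambda` and `storey_pi0` are illustrative names, not the authors' code.

```python
import numpy as np

def storey_pi0(pvals, lam):
    """Storey (2002) estimator of the null proportion pi_0 for a given lambda."""
    return np.sum(pvals > lam) / (len(pvals) * (1.0 - lam))

def changepoint_lambda(pvals):
    """Pick lambda as the sorted p-value at the best single change-point of a
    two-piece linear least-squares fit to the p-value plot (i, p_(i))."""
    p = np.sort(pvals)
    idx = np.arange(len(p))
    best_tau, best_sse = None, np.inf
    for tau in range(2, len(p) - 2):             # candidate change-point locations
        sse = 0.0
        for lo, hi in [(0, tau), (tau, len(p))]:
            x, y = idx[lo:hi], p[lo:hi]
            coef = np.polyfit(x, y, 1)           # straight-line fit on each segment
            sse += np.sum((np.polyval(coef, x) - y) ** 2)
        if sse < best_sse:
            best_sse, best_tau = sse, tau
    return p[best_tau]

# Toy usage: 80% nulls (uniform p-values), 20% alternatives (small p-values).
rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(size=800), rng.beta(0.2, 5.0, size=200)])
lam = changepoint_lambda(pvals)
print(lam, storey_pi0(pvals, lam))               # estimate of pi_0; true value is 0.8
```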
Majority voting is a simple mathematical function that returns the value appearing most often in a set. As a popular decision fusion technique, the majority voting function (MVF) finds applications in resolving conflicts where a number of independent voters report their opinions on a classification problem. Despite its importance and its various applications in ensemble learning, data crowd-sourcing, remote sensing, and data oracles for blockchains, the accuracy of the MVF for the general multi-class classification problem has remained unknown. In this paper, we derive a new upper bound on the accuracy of the MVF for the multi-class classification problem. More specifically, we show that under certain conditions, the error rate of the MVF decays exponentially toward zero as the number of independent voters increases; conversely, if these conditions are not met, the error rate grows exponentially with the number of independent voters. We first explore the problem for independent and identically distributed voters, assuming that every voter follows the same conditional probability distribution over the classes given the true classification of the data point. Next, we extend our results to the case where the voters are independent but non-identically distributed. Using the derived results, we then discuss the accuracy of truth discovery algorithms. We show that, in the best-case scenario, truth discovery algorithms operate as an amplified MVF: they achieve a small error rate only when the MVF does, and a large error rate whenever the MVF does. In the worst-case scenario, truth discovery algorithms may achieve an even higher error rate than the MVF. Finally, we confirm our theoretical results with numerical simulations.
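The simulation below illustrates this dichotomy for i.i.d. voters; the sufficient condition used here, namely that the true class is the unique mode of each voter's vote distribution, is our illustrative reading rather than the paper's exact condition, and `mvf_error_rate` is a name we introduce.

```python
import numpy as np

def mvf_error_rate(conf_row, n_voters, n_trials=20000, seed=0):
    """Monte Carlo error rate of the majority voting function (MVF) with i.i.d.
    voters.  conf_row[c] = P(a voter votes for class c | the true class is 0)."""
    rng = np.random.default_rng(seed)
    n_classes = len(conf_row)
    errors = 0
    for _ in range(n_trials):
        votes = rng.choice(n_classes, size=n_voters, p=conf_row)
        counts = np.bincount(votes, minlength=n_classes)
        winner = rng.choice(np.flatnonzero(counts == counts.max()))  # ties broken at random
        errors += (winner != 0)                                      # truth is class 0
    return errors / n_trials

# When the true class is the unique mode of the voters' distribution (0.40 here),
# the MVF error rate should decay as the number of voters grows; if another class
# had the largest probability, the error rate would instead grow toward one.
conf = [0.40, 0.35, 0.25]
for n in [1, 5, 15, 45, 135]:
    print(n, mvf_error_rate(conf, n))
```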
The problem of recovering a signal $\boldsymbol{x} \in \mathbb{R}^n$ from a quadratic system $\{y_i=\boldsymbol{x}^\top\boldsymbol{A}_i\boldsymbol{x},\ i=1,\ldots,m\}$ with full-rank matrices $\boldsymbol{A}_i$ frequently arises in applications such as unassigned distance geometry and sub-wavelength imaging. With i.i.d. standard Gaussian matrices $\boldsymbol{A}_i$, this paper addresses the high-dimensional case where $m\ll n$ by incorporating prior knowledge of $\boldsymbol{x}$. First, we consider a $k$-sparse $\boldsymbol{x}$ and introduce the thresholded Wirtinger flow (TWF) algorithm that does not require the sparsity level $k$. TWF comprises two steps: the spectral initialization that identifies a point sufficiently close to $\boldsymbol{x}$ (up to a sign flip) when $m=O(k^2\log n)$, and the thresholded gradient descent (with a good initialization) that produces a sequence linearly converging to $\boldsymbol{x}$ with $m=O(k\log n)$ measurements. Second, we explore the generative prior, assuming that $\boldsymbol{x}$ lies in the range of an $L$-Lipschitz continuous generative model with $k$-dimensional inputs in an $\ell_2$-ball of radius $r$. We develop the projected gradient descent (PGD) algorithm that also comprises two steps: the projected power method that provides an initial vector with $O\big(\sqrt{\frac{k \log L}{m}}\big)$ $\ell_2$-error given $m=O(k\log(Lnr))$ measurements, and the projected gradient descent that refines the $\ell_2$-error to $O(\delta)$ at a geometric rate when $m=O(k\log\frac{Lrn}{\delta^2})$. Experimental results corroborate our theoretical findings and show that: (i) our approach for the sparse case notably outperforms the existing provable algorithm, sparse power factorization; (ii) leveraging the generative prior allows for precise image recovery on the MNIST dataset from a small number of quadratic measurements.
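For the sparse case, the following is a schematic of the thresholded gradient refinement on a noiseless toy instance; the spectral initialization is replaced by a small perturbation of the ground truth, a hard-thresholding step stands in for the paper's thresholding rule, and all constants are hand-tuned assumptions rather than the paper's choices.

```python
import numpy as np

def hard_threshold(z, tau):
    """Zero out entries with magnitude below tau (a simple sparsity-promoting step)."""
    return np.where(np.abs(z) > tau, z, 0.0)

def thresholded_gd(As, y, z0, step=0.1, tau=0.1, n_iter=200):
    """Gradient descent on the least-squares loss of the system y_i = z^T A_i z,
    followed by a thresholding step after each update.  Schematic only: the
    spectral initialization and the data-driven threshold of TWF are omitted."""
    m = len(As)
    z = np.array(z0, dtype=float)
    for _ in range(n_iter):
        grad = np.zeros_like(z)
        for A, yi in zip(As, y):
            grad += (z @ A @ z - yi) * (A + A.T) @ z
        z = hard_threshold(z - step * grad / m, tau)
    return z

# Toy instance in the m < n regime: n = 100, sparsity k = 3, m = 80.
rng = np.random.default_rng(0)
n, m = 100, 80
x = np.zeros(n); x[[3, 11, 27]] = [1.0, -0.8, 0.6]
As = [rng.standard_normal((n, n)) for _ in range(m)]
y = np.array([x @ A @ x for A in As])
z0 = x + 0.02 * rng.standard_normal(n)           # stand-in for the spectral initialization
z = thresholded_gd(As, y, z0)
print(min(np.linalg.norm(z - x), np.linalg.norm(z + x)))   # recovery up to a global sign
```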
Lawvere showed that generalised metric spaces are categories enriched over $[0, \infty]$, the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards a logic for quantitative metric reasoning, we investigate three $[0,\infty]$-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, namely finite conjunctions and disjunctions, tensor (addition in the Lawvere quantale) and linear implication (here a truncated subtraction); to these we add, in turn, the constant $1$ to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Quantitative equational logic can be interpreted in the third logic if we allow inference systems instead of axiomatic systems. For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities. Strong completeness does not hold in general, even (as is known) for theories over finitely many propositional variables; indeed, even an approximate form of strong completeness in the sense of Pavelka or Ben Yaacov -- provability up to arbitrary precision -- does not hold. However, we can show it for theories axiomatized by a (not necessarily finite) set of judgements in normal form over a finite set of propositional variables, when we restrict to models that do not map variables to $\infty$; the proof uses Hurwicz's general form of Farkas' Lemma.
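For concreteness, one standard reading of these connectives in the Lawvere quantale (our gloss, writing $|\phi| \in [0,\infty]$ for the value of $\phi$, with the order reversed so that $0$ plays the role of truth) is
\[
  |\phi \otimes \psi| = |\phi| + |\psi|, \qquad |\phi \multimap \psi| = \max\bigl(|\psi| - |\phi|,\, 0\bigr),
\]
\[
  |\phi \wedge \psi| = \max\bigl(|\phi|, |\psi|\bigr), \qquad |\phi \vee \psi| = \min\bigl(|\phi|, |\psi|\bigr).
\]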
In Linear Logic ($\mathsf{LL}$), the exponential modality $!$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic that includes additional rules for $!$ encoding differentiation and the ability to linearize proofs. On the other hand, Graded Linear Logic ($\mathsf{GLL}$) is a variant of Linear Logic in which $!$ is indexed over a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, with the linear proofs being those of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ and developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$, and introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and provide other examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$.
We propose a new algorithm for variance reduction when estimating $f(X_T)$, where $X$ is the solution to some stochastic differential equation and $f$ is a test function. The new estimator is $(f(X^1_T) + f(X^2_T))/2$, where $X^1$ and $X^2$ have the same marginal law as $X$ but are pathwise correlated so as to reduce the variance. The optimal correlation function $\rho$ is approximated by a deep neural network and is calibrated along the trajectories of $(X^1, X^2)$ by policy gradient and reinforcement learning techniques. Finding an optimal coupling given the marginal laws has links with maximum optimal transport.
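A minimal sketch of this estimator with a constant correlation $\rho$ standing in for the learned, state-dependent correlation function; the SDE, the payoff, and the name `antithetic_pair_estimator` are illustrative choices, not the paper's setup.

```python
import numpy as np

def antithetic_pair_estimator(mu, sigma, f, x0, T, n_steps, rho, n_paths, seed=0):
    """Simulate two Euler-Maruyama paths X^1, X^2 of dX = mu(X)dt + sigma(X)dW
    whose Brownian increments have correlation rho, and return samples of the
    estimator (f(X^1_T) + f(X^2_T))/2.  Here rho is a constant; in the paper it
    is a state-dependent function learned by a neural network."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x1 = np.full(n_paths, x0, dtype=float)
    x2 = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        x1 += mu(x1) * dt + sigma(x1) * np.sqrt(dt) * z1
        x2 += mu(x2) * dt + sigma(x2) * np.sqrt(dt) * z2
    return 0.5 * (f(x1) + f(x2))

# Toy usage: geometric Brownian motion with f(x) = max(x - 1, 0) (a call payoff).
mu_f = lambda x: 0.05 * x
sig_f = lambda x: 0.3 * x
payoff = lambda x: np.maximum(x - 1.0, 0.0)
for rho in [0.0, -1.0]:                    # rho = -1 is the antithetic coupling
    est = antithetic_pair_estimator(mu_f, sig_f, payoff, 1.0, 1.0, 100, rho, 100_000)
    print(rho, est.mean(), est.var())      # same mean, smaller variance for rho < 0
```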
For a fixed finite set of finite tournaments ${\mathcal F}$, the ${\mathcal F}$-free orientation problem asks whether a given finite undirected graph $G$ has an $\mathcal F$-free orientation, i.e., whether the edges of $G$ can be oriented so that the resulting digraph does not embed any of the tournaments from ${\mathcal F}$. We prove that for every ${\mathcal F}$, this problem is in P or NP-complete. Our proof reduces the classification task to a complete complexity classification of the orientation completion problem for ${\mathcal F}$, which is the variant of the problem above where the input is a directed graph instead of an undirected graph, introduced by Bang-Jensen, Huang, and Zhu (2017). Our proof uses results from the theory of constraint satisfaction, and a result of Agarwal and Kompatscher (2018) about infinite permutation groups and transformation monoids.
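To make the decision problem concrete, here is a brute-force sketch (exponential in the number of edges, so only for tiny instances; the paper's contribution is determining for which ${\mathcal F}$ a polynomial-time algorithm exists). The helper names are ours.

```python
import itertools

def embeds(tournament, orientation, vertices):
    """Does the oriented graph contain a copy of `tournament`?
    tournament: arcs on vertices 0..t-1; orientation: set of arcs (u, v)."""
    t = 1 + max(max(a, b) for a, b in tournament)
    for combo in itertools.permutations(vertices, t):
        if all((combo[a], combo[b]) in orientation for a, b in tournament):
            return True
    return False

def has_F_free_orientation(edges, vertices, F):
    """Try every orientation of the undirected edges and test F-freeness."""
    for bits in itertools.product([0, 1], repeat=len(edges)):
        orient = {(u, v) if b == 0 else (v, u) for (u, v), b in zip(edges, bits)}
        if not any(embeds(T, orient, vertices) for T in F):
            return True
    return False

# Toy usage: K_3 has an orientation with no directed triangle (the cyclic tournament C3),
# namely the transitive (acyclic) orientation, so this prints True.
C3 = [(0, 1), (1, 2), (2, 0)]
print(has_F_free_orientation([(0, 1), (1, 2), (0, 2)], [0, 1, 2], [C3]))
```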
Let $A$ and $B$ be sets of vertices in a graph $G$. Menger's theorem states that for every positive integer $k$, either there exists a collection of $k$ vertex-disjoint paths between $A$ and $B$, or $A$ can be separated from $B$ by a set of at most $k-1$ vertices. Let $\Delta$ be the maximum degree of $G$. We show that there exists a function $f(\Delta) = (\Delta+1)^{\Delta^2+1}$ such that for every positive integer $k$, either there exists a collection of $k$ vertex-disjoint and pairwise anticomplete paths between $A$ and $B$, or $A$ can be separated from $B$ by a set of at most $k \cdot f(\Delta)$ vertices. We also show that the result can be generalized from bounded-degree graphs to graphs excluding a topological minor. On the negative side, we show that no such relation holds on graphs that have degeneracy 2 and arbitrarily large girth, even when $k = 2$. Similar results were obtained independently and concurrently by Hendrey, Norin, Steiner, and Turcotte [arXiv:2309.07905].