In this paper, we identify that the classic Gaussian mechanism and its variants for differential privacy all suffer from \textbf{the curse of full-rank covariance matrices}, and hence the expected accuracy losses of these mechanisms applied to high dimensional query results, e.g., in $\mathbb{R}^M$, all increase linearly with $M$. To lift this curse, we design a Rank-1 Singular Multivariate Gaussian Mechanism (R1SMG). It achieves $(\epsilon,\delta)$-DP on query results in $\mathbb{R}^M$ by perturbing the results with noise following a singular multivariate Gaussian distribution, whose covariance matrix is a \textbf{randomly} generated rank-1 positive semi-definite matrix. In contrast, the classic Gaussian mechanism and its variants all consider \textbf{deterministic} full-rank covariance matrices. Our idea is motivated by a clue from Dwork et al.'s work on the Gaussian mechanism that has been overlooked in the literature: when multivariate Gaussian noise with a full-rank covariance matrix is projected onto an orthonormal basis of $\mathbb{R}^M$, only the coefficient of a single basis vector contributes to the privacy guarantee. This paper makes the following technical contributions. (i) R1SMG achieves the $(\epsilon,\delta)$-DP guarantee on query results in $\mathbb{R}^M$, while the magnitude of the additive noise decreases with $M$. Therefore, \textbf{less is more}: less noise suffices to sanitize higher dimensional query results. When $M\rightarrow \infty$, the expected accuracy loss converges to ${2(\Delta_2 f)^2}/{\epsilon}$, where $\Delta_2 f$ is the $l_2$ sensitivity of the query function $f$. (ii) Compared with other mechanisms, R1SMG is less likely to generate noise with large magnitude that overwhelms the query results, because the kurtosis and skewness of the nondeterministic accuracy loss introduced by R1SMG are larger than those introduced by other mechanisms.
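To make the construction concrete, here is a minimal numpy sketch of the rank-1 noise generation described above: a direction $u$ drawn uniformly from the unit sphere yields the random rank-1 covariance $\sigma^2 uu^\top$. The calibration of $\sigma$ to $(\epsilon,\delta)$ and $\Delta_2 f$ follows the paper and is not reproduced here; `sigma` below is a placeholder.

```python
import numpy as np

def r1smg_noise(M, sigma, rng=np.random.default_rng()):
    """Rank-1 singular multivariate Gaussian noise in R^M: the covariance
    sigma^2 * u u^T is a randomly generated rank-1 PSD matrix, so the
    noise is supported on a random line through the origin."""
    u = rng.standard_normal(M)
    u /= np.linalg.norm(u)            # direction uniform on the unit sphere
    a = rng.normal(scale=sigma)       # scalar Gaussian coefficient
    return a * u

# perturb a query result f(D) in R^M; sigma is NOT calibrated here
q = np.zeros(1000)
q_private = q + r1smg_noise(q.size, sigma=1.0)
```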
The Gromov--Hausdorff distance measures the difference in shape between compact metric spaces and poses a notoriously difficult problem in combinatorial optimization. We introduce a quadratic relaxation of it over a convex polytope whose solutions provably deliver the Gromov--Hausdorff distance. The optimality guarantee is enabled by the fact that the search space of our approach is not constrained to a generalization of bijections, unlike in other relaxations such as the Gromov--Wasserstein distance. We suggest the Frank--Wolfe algorithm, with $O(n^3)$-time iterations, for solving the relaxation and numerically demonstrate its performance on metric spaces of hundreds of points. In particular, we obtain a new upper bound on the Gromov--Hausdorff distance between the unit circle and the unit hemisphere equipped with the Euclidean metric. Our approach is implemented as a Python package, dGH.
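As an illustration of the optimization machinery (a generic sketch, not the paper's exact polytope or objective), here is a Frank--Wolfe loop for a quadratic over the probability simplex, where the linear minimization oracle reduces to a coordinate argmin:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Frank-Wolfe for a smooth objective over the probability simplex;
    each iteration calls a linear minimization oracle (here: pick the
    coordinate with the smallest partial derivative) and takes a convex
    combination step."""
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # vertex minimizing <g, s>
        x += 2.0 / (k + 2.0) * (s - x)   # standard O(1/k) step size
    return x

# toy quadratic x^T Q x over the 50-dimensional simplex
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50)); Q = Q @ Q.T
x_opt = frank_wolfe_simplex(lambda x: 2.0 * Q @ x, np.full(50, 0.02))
```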
The big data era of science and technology motivates statistical modeling of matrix-valued data using a low-rank representation that simultaneously summarizes key characteristics of the data and enables dimension reduction for data compression and storage. Low-rank representations such as singular value decomposition factor the original data into the product of orthonormal basis functions and weights, where each basis function represents an independent feature of the data. However, the basis functions in these factorizations are typically computed using algorithmic methods that cannot quantify uncertainty or account for explicit structure beyond what is implicitly specified via data correlation. We propose a flexible prior distribution for orthonormal matrices that can explicitly model structure in the basis functions. The prior is used within a general probabilistic model for singular value decomposition to conduct posterior inference on the basis functions while accounting for measurement error and fixed effects. To contextualize the proposed prior and model, we discuss how the prior specification can be used for various scenarios and relate the model to its deterministic counterpart. We demonstrate favorable model properties through synthetic data examples and apply our method to sea surface temperature data from the northern Pacific, enhancing our understanding of the ocean's internal variability.
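For context, the deterministic counterpart referenced above is the plain truncated SVD; a short numpy sketch (synthetic data, assumed rank) shows the factorization into orthonormal basis functions and weights that the proposed prior makes probabilistic:

```python
import numpy as np

# synthetic matrix-valued data: 100 locations x 60 time points, true rank 5
rng = np.random.default_rng(1)
Y = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 60))
Y += 0.1 * rng.standard_normal(Y.shape)     # additive measurement error

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
r = 5
Y_r = U[:, :r] * s[:r] @ Vt[:r]             # rank-r reconstruction
print(np.linalg.norm(Y - Y_r) / np.linalg.norm(Y))  # relative error
```

The model in the paper instead places a structured prior on the orthonormal columns of $U$, yielding posterior uncertainty that this algorithmic decomposition cannot provide.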
In this paper we propose a method to approximate the Gaussian function on ${\mathbb R}$ by a short cosine sum. We extend the differential approximation method proposed in [4,39] to approximate $\mathrm{e}^{-t^{2}/2\sigma}$ in the weighted space $L_2({\mathbb R}, \mathrm{e}^{-t^{2}/2\rho})$, where $\sigma, \rho >0$. We prove that the optimal frequency parameters $\lambda_1, \ldots, \lambda_{N}$ for this method in the approximation problem $ \min\limits_{\lambda_{1},\ldots, \lambda_{N}, \gamma_{1}, \ldots, \gamma_{N}}\big\|\mathrm{e}^{-t^{2}/2\sigma} - \sum\limits_{j=1}^{N} \gamma_{j} \, {\mathrm e}^{\lambda_{j} t}\big\|_{L_{2}({\mathbb R}, \mathrm{e}^{-t^{2}/2\rho})}$ are zeros of a scaled Hermite polynomial. This observation leads us to a numerically stable approximation method with a low computational cost of $O(N^{3})$ operations. Furthermore, we derive a direct algorithm to solve this approximation problem based on a matrix pencil method for a specially structured matrix whose entries are determined by hypergeometric functions. For the weighted $L_{2}$-norm, we prove that the approximation error decays exponentially with respect to the length $N$ of the sum. An exponentially decaying error in the (unweighted) $L_{2}$-norm is achieved using a truncated cosine sum.
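Since the optimal frequencies are zeros of a (scaled) Hermite polynomial, they can be computed stably with standard special-function routines; the sketch below uses scipy, with the problem-dependent scaling left as an assumed placeholder:

```python
import numpy as np
from scipy.special import roots_hermite

# Zeros of the physicists' Hermite polynomial H_N; per the result above,
# the optimal frequencies are zeros of a *scaled* Hermite polynomial,
# so a (sigma, rho)-dependent scale factor (placeholder here) is applied.
N = 12
nodes, _ = roots_hermite(N)   # the N zeros of H_N
scale = 1.0                   # placeholder: depends on sigma and rho
frequencies = scale * nodes
print(frequencies)
```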
Counterfactual Explanations (CEs) are an important tool in Algorithmic Recourse for addressing two questions: 1. What are the crucial factors that led to an automated prediction/decision? 2. How can these factors be changed to achieve a more favorable outcome from a user's perspective? Thus, guiding the user's interaction with AI systems by proposing easy-to-understand explanations and easy-to-attain feasible changes is essential for the trustworthy adoption and long-term acceptance of AI systems. In the literature, various methods have been proposed to generate CEs, and different quality measures have been suggested to evaluate these methods. However, the generation of CEs is usually computationally expensive, and the resulting suggestions are often unrealistic and thus non-actionable. In this paper, we introduce a new method to generate CEs for a pre-trained binary classifier by first shaping the latent space of an autoencoder to be a mixture of Gaussian distributions. CEs are then generated in the latent space by linear interpolation between the query sample and the centroid of the target class. We show that our method maintains the characteristics of the input sample during the counterfactual search. In various experiments on image and tabular datasets, we show that the proposed method is competitive under different quality measures and efficiently returns results that are closer to the original data manifold than those of three state-of-the-art methods, which is essential for realistic high-dimensional machine learning applications.
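A minimal sketch of the latent-space search, with `encode`, `decode`, `classify`, and the target-class centroid as assumed stand-ins for the trained autoencoder and pre-trained classifier:

```python
import numpy as np

def latent_counterfactual(encode, decode, classify, x, target_centroid,
                          target_label, steps=50):
    """Linearly interpolate the latent code of x toward the centroid of
    the target class and return the first decoded point that flips the
    pre-trained binary classifier's prediction."""
    z = encode(x)
    for t in np.linspace(0.0, 1.0, steps):
        z_t = (1.0 - t) * z + t * target_centroid   # linear interpolation
        x_t = decode(z_t)
        if classify(x_t) == target_label:
            return x_t       # earliest flip = counterfactual closest to x
    return None              # no counterfactual found along the segment
```

Because the autoencoder's latent space is shaped into a Gaussian mixture, the segment from $z$ to the target centroid stays close to the data manifold, which is what keeps the decoded counterfactuals realistic.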
Multistate current status (CS) data present a severe form of censoring because each study participant, transitioning through a sequence of well-defined disease states, is observed only once at a random inspection time. Moreover, these data may be clustered within specified groups, and the cluster sizes may be informative because of a latent relationship between the transition outcomes and the cluster sizes. Failure to adjust for this informativeness may lead to biased inference. Motivated by a clinical study of periodontal disease (PD), we propose an extension of the pseudo-value approach to estimate covariate effects on the state occupation probabilities (SOP) for such clustered multistate CS data with informative cluster or intra-cluster group sizes. In our approach, the proposed pseudo-value technique first computes marginal estimators of the SOP using nonparametric regression. Next, the estimating equations based on the corresponding pseudo-values are reweighted by functions of the cluster sizes to adjust for the informativeness. We perform a variety of simulation studies to assess the properties of our pseudo-value regression based on the nonparametric marginal estimators under different scenarios of informativeness. For illustration, the method is applied to the motivating PD dataset, which encapsulates the complex data-generation mechanism.
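A simplified sketch of the reweighting idea (identity link and plain weighted least squares, rather than the paper's full estimating equations): each pseudo-value is weighted by the inverse of its cluster size so that every cluster contributes equally.

```python
import numpy as np

def reweighted_pseudo_value_fit(pseudo, X, cluster_ids):
    """Weighted least-squares fit of pseudo-values on covariates with
    inverse-cluster-size weights, a common adjustment when cluster sizes
    are informative. (Illustrative only; the paper's method reweights
    estimating equations for the SOP.)"""
    sizes = np.bincount(cluster_ids)
    w = 1.0 / sizes[cluster_ids]          # inverse cluster-size weights
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ pseudo)
```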
We consider covariance estimation of any subgaussian distribution from finitely many i.i.d. samples that are quantized to one bit of information per entry. Recent work has shown that a reliable estimator can be constructed if uniformly distributed dithers on $[-\lambda,\lambda]$ are used in the one-bit quantizer. This estimator enjoys near-minimax optimal, non-asymptotic error estimates in the operator and Frobenius norms if $\lambda$ is chosen proportional to the largest variance of the distribution. However, this quantity is not known a priori, and in practice $\lambda$ needs to be carefully tuned to achieve good performance. In this work we resolve this problem by introducing a tuning-free variant of this estimator, which replaces $\lambda$ by a data-driven quantity. We prove that this estimator satisfies the same non-asymptotic error estimates, up to small (logarithmic) losses and a slightly worse probability estimate. Our proof relies on a new version of the Burkholder--Rosenthal inequalities for matrix martingales, which is expected to be of independent interest.
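For intuition, a numpy sketch of the dithered one-bit estimator the abstract builds on (following the symmetrized form from the prior work it cites; details such as diagonal handling may differ, and the tuning-free variant replaces `lam` by a data-driven quantity):

```python
import numpy as np

def one_bit_covariance(X, lam, rng=np.random.default_rng()):
    """Each sample X_i is observed only through sign(X_i + tau) for two
    independent uniform dithers tau on [-lam, lam]^d; the symmetrized,
    lam^2-rescaled average of the one-bit outer products estimates the
    covariance matrix."""
    n, d = X.shape
    tau1 = rng.uniform(-lam, lam, size=(n, d))
    tau2 = rng.uniform(-lam, lam, size=(n, d))
    S1, S2 = np.sign(X + tau1), np.sign(X + tau2)
    T = (lam**2 / n) * (S1.T @ S2)
    return (T + T.T) / 2.0               # symmetrize
```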
In this paper, we study the Maximum Vertex-weighted $b$-Matching (MVbM) problem on bipartite graphs in a new game-theoretical environment. In contrast to other game-theoretical settings, we consider the case in which the value of the tasks is public and common to every agent, so that the private information of every agent consists of the edges connecting them to the set of tasks. In this framework, we study three mechanisms. Two of these mechanisms, namely $\mathcal{M}_B$ and $\mathcal{M}_D$, are optimal but not truthful, while the third one, $\mathcal{M}_G$, is truthful but sub-optimal. Although these mechanisms are induced by known algorithms, we show that $\mathcal{M}_B$ and $\mathcal{M}_D$ are the best possible mechanisms in terms of Price of Anarchy and Price of Stability, while $\mathcal{M}_G$ is the best truthful mechanism in terms of approximation ratio. Furthermore, we characterize the Nash Equilibria of $\mathcal{M}_B$ and $\mathcal{M}_D$ and identify sets of conditions under which $\mathcal{M}_B$ acts as a truthful mechanism, which highlights the differences between $\mathcal{M}_B$ and $\mathcal{M}_D$. Finally, we extend our results to the case in which agents' capacity is part of their private information.
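As a rough illustration of a truthful-but-suboptimal mechanism in this vein, here is a generic greedy for MVbM; whether this matches the paper's $\mathcal{M}_G$ is an assumption, not a claim:

```python
def greedy_b_matching(edges, task_value, capacity):
    """Generic greedy for maximum vertex-weighted b-matching on a
    bipartite graph: scan edges by decreasing task value and add an edge
    whenever both endpoints have residual capacity."""
    load, matching = {}, []
    for agent, task in sorted(edges, key=lambda e: -task_value[e[1]]):
        if (load.get(agent, 0) < capacity[agent]
                and load.get(task, 0) < capacity[task]):
            matching.append((agent, task))
            load[agent] = load.get(agent, 0) + 1
            load[task] = load.get(task, 0) + 1
    return matching
```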
The Separating Hyperplane theorem is a fundamental result in Convex Geometry with myriad applications. Our first result, the Random Separating Hyperplane Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if the distance between $a$ and a polytope $K$ with $k$ vertices and unit diameter in $\mathbb{R}^d$ is at least $\delta$, where $\delta$ is a fixed constant in $(0,1)$, then a randomly chosen hyperplane separates $a$ and $K$ with probability at least $1/\mathrm{poly}(k)$ and margin at least $\Omega \left(\delta/\sqrt{d} \right)$. An immediate consequence of our result is the first near-optimal bound on the error increase in the reduction from a Separation oracle to an Optimization oracle over a polytope. RSH has algorithmic applications in learning polytopes. We consider a fundamental problem, denoted the ``Hausdorff problem'', of learning a unit-diameter polytope $K$ within Hausdorff distance $\delta$, given an optimization oracle for $K$. Using RSH, we show that with polynomially many random queries to the optimization oracle, $K$ can be approximated within error $O(\delta)$. To our knowledge, this is the first provable algorithm for the Hausdorff problem. Building on this result, we show that if the vertices of $K$ are well-separated, then an optimization oracle can be used to generate a list of points, each within Hausdorff distance $O(\delta)$ of $K$, with the property that the list contains a point close to each vertex of $K$. Further, we show how to prune this list to generate a (unique) approximation to each vertex of the polytope. We prove that in many latent variable settings, e.g., topic modeling and LDA, optimization oracles do exist provided we project to a suitable SVD subspace. Thus, our work yields the first efficient algorithm for finding approximations to the vertices of the latent polytope under the well-separatedness assumption.
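A quick Monte Carlo check of the RSH statement (uniform random unit normal, polytope given by its vertex matrix) can be sketched as follows:

```python
import numpy as np

def random_separation(a, vertices, trials=10_000,
                      rng=np.random.default_rng()):
    """Estimate how often a uniformly random hyperplane separates point a
    from conv(vertices), and the average margin when it does. A direction
    g separates iff <g, a> exceeds max_v <g, v>."""
    hits, margins = 0, []
    for _ in range(trials):
        g = rng.standard_normal(a.shape)
        g /= np.linalg.norm(g)                 # uniform random unit normal
        margin = g @ a - np.max(vertices @ g)  # separation along g
        if margin > 0:
            hits += 1
            margins.append(margin)
    return hits / trials, (np.mean(margins) if margins else 0.0)
```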
The ability of aerial robots to operate in the presence of failures is crucial in various applications that demand continuous operations, such as surveillance, monitoring, and inspection. In this paper, we propose a fault-tolerant control strategy for quadrotors that can adapt to single and dual complete rotor failures. Our approach augments a classic geometric tracking controller on $SO(3)\times\mathbb{R}^3$ to accommodate the effects of rotor failures. We provide an in-depth analysis of several attitude error metrics to identify the most appropriate design choice for fault-tolerant control strategies, and we assess the effectiveness of these metrics by evaluating trajectory tracking accuracy. Simulation results demonstrate the performance of the proposed approach.
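One concrete example of such a metric is the standard attitude error from geometric tracking control on $SO(3)$; whether this is the metric ultimately selected in the paper is not assumed here:

```python
import numpy as np

def attitude_error(R, Rd):
    """Classic geometric-control attitude error on SO(3):
    e_R = 0.5 * vee(Rd^T R - R^T Rd), where vee maps a skew-symmetric
    matrix to a vector in R^3."""
    E = 0.5 * (Rd.T @ R - R.T @ Rd)               # skew-symmetric part
    return np.array([E[2, 1], E[0, 2], E[1, 0]])  # vee map
```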
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin: traditional parameterised differential equations are a special case, and many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will appeal to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
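The residual-network observation can be made concrete in a few lines: an explicit-Euler solve of a neural vector field is exactly a residual network whose blocks share a step size. A minimal numpy sketch (one-hidden-layer tanh vector field, illustrative only):

```python
import numpy as np

def neural_ode_euler(y0, params, t0=0.0, t1=1.0, steps=100):
    """Explicit-Euler discretisation of the neural ODE dy/dt = f_theta(t, y).
    Each Euler step y += h * f(t, y) is a residual block, exhibiting
    residual networks as discretised neural ODEs."""
    W1, b1, W2, b2 = params
    f = lambda t, y: W2 @ np.tanh(W1 @ y + b1) + b2
    h = (t1 - t0) / steps
    y, t = y0, t0
    for _ in range(steps):
        y = y + h * f(t, y)       # Euler step == residual connection
        t += h
    return y

# toy instantiation with a 4-dimensional state
rng = np.random.default_rng(0)
params = (rng.standard_normal((16, 4)), np.zeros(16),
          rng.standard_normal((4, 16)), np.zeros(4))
print(neural_ode_euler(np.ones(4), params))
```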