We study the problem of maximizing the probability that (i) an electrical component or financial institution $X$ does not default before another component or institution $Y$ and (ii) $X$ and $Y$ default jointly, within the class of all random variables $X,Y$ with given univariate continuous distribution functions $F$ and $G$, respectively. We show that these maximization problems correspond to finding copulas that maximize the mass of the endograph $\Gamma^\leq(T)$ and of the graph $\Gamma(T)$ of $T=G \circ F^-$, respectively. After providing simple, copula-based proofs for the existence of copulas attaining the two maxima $\overline{m}_T$ and $\overline{w}_T$, we generalize the obtained results to general (not necessarily monotonic) transformations $T:[0,1] \rightarrow [0,1]$ and derive simple, easily calculable formulas for $\overline{m}_T$ and $\overline{w}_T$ involving the distribution function $F_T$ of $T$ (interpreted as a random variable on $[0,1]$). These formulas are then used to characterize all non-decreasing transformations $T:[0,1] \rightarrow [0,1]$ for which $\overline{m}_T$ and $\overline{w}_T$ coincide. A strongly consistent estimator for the maximum probability that $X$ does not default before $Y$ is derived and proven to be asymptotically normal under very mild regularity conditions. Several examples and graphics illustrate the main results and falsify some seemingly natural conjectures.
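As a quick numerical illustration of the endograph mass (not the paper's construction), the sketch below assumes hypothetical exponential default-time margins, for which $T = G \circ F^-$ has a closed form, and Monte Carlo estimates $\mu_C(\Gamma^\leq(T)) = P(V \leq T(U))$ under the independence and comonotonicity copulas; maximizing this mass over all copulas yields $\overline{m}_T$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical exponential margins with rates lam_F, lam_G, for which
# T(u) = G(F^-(u)) = 1 - (1 - u)**(lam_G / lam_F) in closed form.
lam_F, lam_G = 1.0, 2.0
def T(u):
    return 1.0 - (1.0 - u) ** (lam_G / lam_F)

n = 1_000_000
u = rng.random(n)

# Endograph mass mu_C(Gamma^<=(T)) = P(V <= T(U)) under two simple copulas.
print("independence Pi:  ", np.mean(rng.random(n) <= T(u)))   # approx 2/3
print("comonotonicity M: ", np.mean(u <= T(u)))               # 1.0 here, since T(u) >= u
```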
In statistical learning and analysis from shared data, increasingly adopted in platforms such as federated learning and meta-learning, there are two major concerns: privacy and robustness. Each participating individual should be able to contribute without fear of leaking their sensitive information, and at the same time the system should be robust in the presence of malicious participants inserting corrupted data. Recent algorithmic advances in learning from shared data focus on only one of these threats, leaving the system vulnerable to the other. We bridge this gap for the canonical problem of estimating the mean from i.i.d. samples. We introduce PRIME, the first efficient algorithm that achieves both privacy and robustness for a wide range of distributions. We further complement this result with a novel exponential-time algorithm that improves the sample complexity of PRIME, achieving a near-optimal guarantee and matching a known lower bound for (non-robust) private mean estimation. This proves that there is no extra statistical cost to simultaneously guaranteeing privacy and robustness.
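For context, here is a minimal sketch of the standard non-robust baseline that such results are measured against: $(\epsilon,\delta)$-differentially private mean estimation via clipping and the Gaussian mechanism. This is not the PRIME algorithm; the clipping radius and calibration are the usual textbook choices.

```python
import numpy as np

def private_mean(x, eps, delta, clip=1.0, rng=None):
    """(eps, delta)-DP mean via the Gaussian mechanism: clip each sample to
    [-clip, clip], so the mean has L2 sensitivity 2*clip/n, then add noise
    with sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps (the standard
    calibration, valid for eps <= 1). A non-robust baseline, not PRIME."""
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(x, dtype=float), -clip, clip)
    sensitivity = 2.0 * clip / len(x)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return x.mean() + rng.normal(0.0, sigma)

rng = np.random.default_rng(1)
data = rng.normal(0.3, 1.0, size=10_000)
print(private_mean(data, eps=1.0, delta=1e-6, clip=3.0, rng=rng))
```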
This paper considers a novel multi-agent linear stochastic approximation algorithm driven by Markovian noise and general consensus-type interaction, in which each agent evolves according to its local stochastic approximation process, which in turn depends on information from its neighbors. The interconnection structure among the agents is described by a time-varying directed graph. While the convergence of consensus-based stochastic approximation algorithms has been studied when the interconnection among the agents is described by doubly stochastic matrices (at least in expectation), less is known about the case when the interconnection matrix is merely stochastic. For any uniformly strongly connected graph sequences whose associated interaction matrices are stochastic, the paper derives finite-time bounds on the mean-square error, defined as the deviation of the output of the algorithm from the unique equilibrium point of the associated ordinary differential equation. When the interconnection matrices are merely stochastic, the equilibrium point can be any unspecified convex combination of the local equilibria of all the agents in the absence of communication. Both constant and time-varying step-sizes are considered. When the convex combination is required to be a straight average and interaction between any pair of neighboring agents may be uni-directional, doubly stochastic matrices cannot be implemented in a distributed manner. For this case, the paper proposes a push-sum-type distributed stochastic approximation algorithm and provides its finite-time bound for time-varying step-sizes by leveraging the analysis for the consensus-type algorithm with stochastic matrices and developing novel properties of the push-sum algorithm.
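A minimal, noise-free sketch of the push-sum (ratio-consensus) building block over a uni-directional ring, where doubly stochastic weights cannot be implemented; the actual algorithm couples such consensus steps with local stochastic approximation updates, which are omitted here.

```python
import numpy as np

def push_sum(values, out_neighbors, n_iter=100):
    """Push-sum averaging over a directed graph using only column-stochastic
    weights, implemented locally by each node splitting its mass equally
    among its out-neighbors (plus a self-loop). The ratio x_i / w_i at every
    node converges to the average of the initial values."""
    n = len(values)
    x = np.asarray(values, dtype=float)   # value mass
    w = np.ones(n)                        # weight mass
    for _ in range(n_iter):
        x_new, w_new = np.zeros(n), np.zeros(n)
        for i in range(n):
            targets = out_neighbors[i] + [i]
            for j in targets:
                x_new[j] += x[i] / len(targets)
                w_new[j] += w[i] / len(targets)
        x, w = x_new, w_new
    return x / w

# Directed ring on 4 nodes: links are uni-directional, yet every node
# recovers the average (1 + 2 + 3 + 10) / 4 = 4.0.
print(push_sum([1.0, 2.0, 3.0, 10.0], {0: [1], 1: [2], 2: [3], 3: [0]}))
```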
For integers $d \geq 2$ and $k \geq d+1$, a $k$-hole in a set $S$ of points in general position in $\mathbb{R}^d$ is a $k$-tuple of points from $S$ in convex position such that the interior of their convex hull does not contain any point from $S$. For a convex body $K \subseteq \mathbb{R}^d$ of unit $d$-dimensional volume, we study the expected number $EH^K_{d,k}(n)$ of $k$-holes in a set of $n$ points drawn uniformly and independently at random from $K$. We prove an asymptotically tight lower bound on $EH^K_{d,k}(n)$ by showing that, for all fixed integers $d \geq 2$ and $k \geq d+1$, the number $EH^K_{d,k}(n)$ is in $\Omega(n^d)$. For some small holes, we even determine the leading constant $\lim_{n \to \infty} n^{-d} EH^K_{d,k}(n)$ exactly. We improve the currently best known lower bound on $\lim_{n \to \infty} n^{-d} EH^K_{d,d+1}(n)$ by Reitzner and Temesvári (2019). In the plane, we show that the constant $\lim_{n \to \infty} n^{-2} EH^K_{2,k}(n)$ is independent of $K$ for every fixed $k \geq 3$, and we compute it exactly for $k=4$, improving earlier estimates by Fabila-Monroy, Huemer, and Mitsche (2015) and by the authors (2020).
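For intuition about the quantity $EH^K_{2,3}(n)$, the brute-force sketch below counts empty triangles (3-holes) among $n$ uniform points in the unit square and averages $n^{-2}$ times the count over a few trials; it is a crude Monte Carlo check, not part of the proofs.

```python
import numpy as np
from itertools import combinations

def count_3_holes(pts):
    """Count empty triangles (3-holes) in a planar point set in general
    position by brute force; O(n^4), fine for small n."""
    def sign(a, b, c):
        return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    holes = 0
    for a, b, c in combinations(range(len(pts)), 3):
        tri = pts[[a, b, c]]
        empty = True
        for k in range(len(pts)):
            if k in (a, b, c):
                continue
            p = pts[k]
            # p is strictly inside the triangle iff all three signs agree
            if sign(tri[0], tri[1], p) == sign(tri[1], tri[2], p) == sign(tri[2], tri[0], p):
                empty = False
                break
        holes += empty
    return holes

rng = np.random.default_rng(2)
n = 30
trials = [count_3_holes(rng.random((n, 2))) for _ in range(20)]
print(np.mean(trials) / n**2)   # rough finite-n estimate of n^{-2} EH_{2,3}(n)
```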
A well-studied challenge in the structure learning problem of causal directed acyclic graphs (DAGs) is that, using observational data, one can only learn the graph up to a "Markov equivalence class" (MEC). The remaining undirected edges have to be oriented using interventions, which can be very expensive to perform in applications. Thus, the problem of minimizing the number of interventions needed to fully orient the MEC has received considerable recent attention and is also the focus of this work. We prove two main results. The first is a new universal lower bound on the number of atomic interventions that any algorithm (whether active or passive) needs to perform in order to orient a given MEC. Our second result shows that this bound is, in fact, within a factor of two of the size of the smallest set of atomic interventions that can orient the MEC. Our lower bound is provably better than previously known lower bounds. Its proof is based on the new notion of clique-block shared-parents (CBSP) orderings, which are topological orderings of DAGs without v-structures that satisfy certain special properties. Further, through simulations on synthetic graphs and examples of special graph families, we show that our bound is often significantly better.
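A toy sketch of why atomic interventions orient a MEC: intervening on a vertex reveals the direction of all incident edges, and Meek rule 1 (if $a \to b$, $b - c$, and $a, c$ are non-adjacent, orient $b \to c$) then propagates orientations. The helper below is a hypothetical illustration on a chain, not the paper's CBSP machinery.

```python
def orient(skeleton, true_dag, interventions):
    """Orient a MEC skeleton given atomic interventions: each intervention on
    v reveals the direction of every edge incident to v, then Meek rule 1 is
    applied to a fixpoint. `skeleton` is a set of frozenset edges, `true_dag`
    the (unknown to the learner) set of directed arcs."""
    oriented = set()
    for v in interventions:                      # orient edges touching v
        for e in skeleton:
            if v in e:
                u, w = sorted(e)
                oriented.add((u, w) if (u, w) in true_dag else (w, u))
    changed = True
    while changed:                               # Meek rule 1 to fixpoint
        changed = False
        for a, b in list(oriented):
            for e in skeleton:
                if b not in e or a in e:
                    continue
                (c,) = e - {b}
                adjacent = frozenset((a, c)) in skeleton
                undirected = (b, c) not in oriented and (c, b) not in oriented
                if not adjacent and undirected:
                    oriented.add((b, c))
                    changed = True
    return oriented

# Chain DAG 0 -> 1 -> 2 -> 3: a single intervention on node 0 orients
# everything, since Meek rule 1 propagates down the chain.
skel = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
dag = {(0, 1), (1, 2), (2, 3)}
print(orient(skel, dag, interventions=[0]))      # {(0,1), (1,2), (2,3)}
```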
Joint modeling of a large number of variables often requires dimension-reduction strategies that lead to structural assumptions on the underlying correlation matrix, such as equal pairwise correlations within subsets of variables. The underlying correlation matrix is thus of interest for both model specification and model validation. In this paper, we develop tests of the hypothesis that the entries of the Kendall rank correlation matrix are linear combinations of a smaller number of parameters. The asymptotic behavior of the proposed test statistics is investigated both when the dimension is fixed and when it grows with the sample size. We pay special attention to the restricted hypothesis of partial exchangeability, which contains full exchangeability as a special case. We show that under partial exchangeability, the test statistics and their large-sample distributions simplify, which leads to computational advantages and better performance of the tests. We propose various scalable numerical strategies for implementing the proposed procedures, investigate their behavior through simulations and power calculations under local alternatives, and demonstrate their use on a real dataset of mean sea levels at various geographical locations.
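As a small illustration (not the proposed test statistic), the snippet below computes the empirical Kendall rank correlation matrix with SciPy and measures its distance from the closest fully exchangeable matrix, i.e., the one with all off-diagonal entries equal to their average.

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_matrix(X):
    """Matrix of pairwise Kendall rank correlations of the columns of X."""
    d = X.shape[1]
    K = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            K[i, j] = K[j, i] = kendalltau(X[:, i], X[:, j])[0]
    return K

# Equicorrelated Gaussian data, so full exchangeability holds and the
# illustrative distance below should be close to zero.
rng = np.random.default_rng(3)
X = rng.multivariate_normal(np.zeros(4), 0.5 * np.eye(4) + 0.5, size=500)
K = kendall_matrix(X)
off = K[~np.eye(4, dtype=bool)]
print("distance to exchangeability:", np.sum((off - off.mean()) ** 2))
```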
In this paper, we provide an optimal additive noise mechanism for database queries with discrete answers on a finite support. The noise attains the minimum error rate for a given $(\epsilon,\delta)$ pair. Popular schemes apply random additive noise with infinite support and then clamp the resulting query response to the desired range; clamping, unfortunately, compromises the privacy guarantees. Using modulo addition rather than clamping, we show that, for any $(\epsilon,\delta)$ pair, the optimal additive noise distribution can be obtained by solving a mixed integer linear program (MILP). Having introduced our optimal noise design formulation, we derive closed-form solutions for the optimal noise probability mass function (PMF) and the probability of error for two special cases. In our performance studies, we show that the proposed optimal mechanism outperforms the state of the art for a given probability of error and for any budget $\epsilon > 0$.
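To make the modulo-addition mechanism concrete, here is a hedged sketch: a brute-force verifier that checks whether $M(x) = (x + N) \bmod K$ with a given noise PMF satisfies $(\epsilon,\delta)$-DP for sensitivity-1 queries, together with its probability of error $P(N \neq 0)$. The PMF in the example is an illustrative geometric-style choice, not the MILP-optimal one.

```python
import numpy as np

def is_dp(pmf, eps, delta, sensitivity=1):
    """Check whether M(x) = (x + N) mod K, with N ~ pmf on {0, ..., K-1},
    is (eps, delta)-DP when neighboring query answers differ by at most
    `sensitivity` (a brute-force verifier, not the MILP design itself)."""
    for d in range(-sensitivity, sensitivity + 1):
        if d == 0:
            continue
        shifted = np.roll(pmf, d)       # output PMF seen from the neighbor
        # delta must absorb all mass exceeding e^eps times the neighbor's
        if np.sum(np.maximum(pmf - np.exp(eps) * shifted, 0.0)) > delta:
            return False
    return True

# Example: symmetric geometric-style PMF on the ring Z_K, peaked at 0.
K, eps = 11, 1.0
raw = np.exp(-eps * np.minimum(np.arange(K), K - np.arange(K)))
pmf = raw / raw.sum()
print("(eps, delta=1e-3)-DP:", is_dp(pmf, eps, 1e-3))
print("P(error) = P(N != 0) =", 1.0 - pmf[0])
```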
Intersection over Union (IoU) is the most popular evaluation metric used in object detection benchmarks. However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, $IoU$ can be directly used as a regression loss. However, $IoU$ has a plateau, making it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address the weaknesses of $IoU$ by introducing a generalized version as both a new loss and a new metric. By incorporating this generalized $IoU$ ($GIoU$) as a loss into state-of-the-art object detection frameworks, we show a consistent performance improvement, using both the standard $IoU$-based and the new $GIoU$-based performance measures, on popular object detection benchmarks such as PASCAL VOC and MS COCO.
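A compact reference implementation of $GIoU$ for axis-aligned boxes, following the definition $GIoU = IoU - |C \setminus (A \cup B)|/|C|$, where $C$ is the smallest enclosing box; boxes are assumed non-degenerate. The example shows that, unlike $IoU$, $GIoU$ keeps decreasing as non-overlapping boxes move apart, so $1 - GIoU$ provides a usable regression loss even without overlap.

```python
def giou(box_a, box_b):
    r"""Generalized IoU for axis-aligned boxes (x1, y1, x2, y2):
    GIoU = IoU - |C \ (A u B)| / |C|, with C the smallest enclosing box."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Smallest enclosing box C
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (c_area - union) / c_area

# Non-overlapping boxes: IoU = 0 regardless of distance, but GIoU still
# distinguishes them (it decreases as the boxes move apart).
print(giou((0, 0, 1, 1), (2, 0, 3, 1)))   # -0.333...
print(giou((0, 0, 1, 1), (4, 0, 5, 1)))   # -0.6
```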
Implicit probabilistic models are defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, yet can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
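Since the abstract does not spell out the estimator, the following is only a generic sketch of likelihood-free estimation for a model available solely through a sampler: match each observation to its nearest simulated sample and reduce the average matching distance by finite-difference descent with common random numbers. All names and tuning constants here are hypothetical.

```python
import numpy as np

def fit_implicit(data, simulate, theta0, n_sim=200, iters=200, lr=0.1, h=1e-2):
    """Hypothetical likelihood-free estimator for an implicit model: match
    each observation to its nearest simulated sample and shrink the average
    matching distance by finite-difference descent, reusing the same
    simulator seed for both evaluations (common random numbers). A generic
    sketch only, not the method proposed in the paper."""
    rng = np.random.default_rng(0)
    theta = float(theta0)

    def loss(th, seed):
        sims = simulate(th, n_sim, np.random.default_rng(seed))
        return np.abs(data[:, None] - sims[None, :]).min(axis=1).mean()

    for _ in range(iters):
        seed = int(rng.integers(1 << 31))
        theta -= lr * (loss(theta + h, seed) - loss(theta - h, seed)) / (2 * h)
    return theta

# The model is "implicit": we can only sample from it (a location family).
simulate = lambda th, n, rng: th + rng.standard_normal(n)
data = simulate(2.5, 500, np.random.default_rng(42))
print(fit_implicit(data, simulate, theta0=0.0))   # should approach 2.5
```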
Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling, and prediction, among other tasks. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalizing to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
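A finite-dimensional sketch of the latent nested construction described above: each group's random probability measure is obtained by normalizing the sum of a common and a group-specific gamma random measure, both crudely truncated to finitely many atoms (an approximation for illustration only; the truncation level and base measure are assumptions).

```python
import numpy as np

def latent_nested_sample(n_groups=2, n_atoms=50, c_common=1.0, c_group=1.0,
                         seed=0):
    """Finite truncation of a latent nested construction: each group's random
    probability is the normalization of a *common* gamma random measure plus
    a *group-specific* one, inducing dependence across groups without forcing
    full exchangeability. Atom locations come from a uniform base measure."""
    rng = np.random.default_rng(seed)
    # Shared component (identical atoms and weights for every group)
    common_locs = rng.random(n_atoms)
    common_wts = rng.gamma(c_common / n_atoms, 1.0, n_atoms)
    measures = []
    for _ in range(n_groups):
        locs = rng.random(n_atoms)
        wts = rng.gamma(c_group / n_atoms, 1.0, n_atoms)
        all_locs = np.concatenate([common_locs, locs])
        all_wts = np.concatenate([common_wts, wts])
        measures.append((all_locs, all_wts / all_wts.sum()))
    return measures

(p1_locs, p1_wts), (p2_locs, p2_wts) = latent_nested_sample()
shared = np.isin(p1_locs, p2_locs).mean()   # fraction of group-1 atoms shared
print(f"fraction of group-1 atoms shared with group 2: {shared:.2f}")
```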
In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and achieves the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvements of the condition numbers.
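As a simplified baseline (not the accelerated dual method analyzed in the paper), the snippet below runs plain decentralized gradient descent with a doubly stochastic gossip matrix on a ring, for quadratic local objectives $f_i(x) = \tfrac{1}{2}(x - a_i)^2$ whose global minimizer is the average of the local targets.

```python
import numpy as np

m = 5
a = np.array([1.0, 2.0, 3.0, 4.0, 10.0])       # local targets; optimum is mean(a) = 4
# Metropolis weights for a ring graph (doubly stochastic mixing matrix)
W = np.zeros((m, m))
for i in range(m):
    for j in ((i - 1) % m, (i + 1) % m):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros(m)                                 # one iterate per agent
lr = 0.1
for _ in range(500):
    x = W @ x - lr * (x - a)                    # gossip step + local gradient step
print(x)   # every agent ends near mean(a) = 4.0, up to an O(lr) consensus error
```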