The $\alpha$-divergences include the well-known Kullback-Leibler divergence, the Hellinger distance, and the $\chi^2$-divergence. In this paper, we derive differential and integral relations between the $\alpha$-divergences that generalize the relation between the Kullback-Leibler divergence and the $\chi^2$-divergence. We also show tight lower bounds for the $\alpha$-divergences under given means and variances. In particular, we give a necessary and sufficient condition under which the binary divergences, that is, divergences between probability measures on the same $2$-point set, always attain the lower bounds. The Kullback-Leibler divergence, the Hellinger distance, and the $\chi^2$-divergence satisfy this condition.
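For concreteness, one common parametrization of this family is the following (conventions and normalizations vary, and the paper's may differ): \begin{align*} D_\alpha(P\,\|\,Q) := \frac{1}{\alpha(1-\alpha)}\left(1-\int p^{\alpha}q^{1-\alpha}\,d\mu\right), \qquad \alpha \in \mathbb{R}\setminus\{0,1\}, \end{align*} which recovers the Kullback-Leibler divergence $\mathrm{KL}(P\|Q)$ in the limit $\alpha \to 1$ and $\mathrm{KL}(Q\|P)$ as $\alpha \to 0$, equals $4\bigl(1-\int\sqrt{pq}\,d\mu\bigr) = 2\int(\sqrt{p}-\sqrt{q})^2\,d\mu$ (a multiple of the squared Hellinger distance) at $\alpha = 1/2$, and equals $\tfrac{1}{2}\chi^2(P\|Q)$ at $\alpha = 2$.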
Secure codes are widely studied combinatorial structures which were introduced for traitor tracing in broadcast encryption. Determining the maximum size of such structures is the main research objective. In this paper, we investigate lower bounds for secure codes and their related structures. First, we give improved lower bounds for the rates of $2$-frameproof codes and $\overline{2}$-separable codes for relatively large alphabet sizes. Then we improve the lower bounds for the rates of some related structures, namely strongly $2$-separable matrices and $2$-cancellative set families. Finally, we give a general method to derive new lower bounds for strongly $t$-separable matrices and $t$-cancellative set families for $t \ge 3$.
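As an illustrative aside (not taken from the paper), the following toy Python sketch checks the $2$-frameproof property for a small code under the standard definition: a code is $2$-frameproof if the descendant set of any two codewords (the words agreeing with one of the two in every position) contains no other codeword.
\begin{verbatim}
# Toy brute-force check of the 2-frameproof property; illustrative only,
# not related to the rate bounds of the paper.
from itertools import combinations

def is_descendant(pair, c):
    """Does codeword c lie in the descendant set of the pair?"""
    return all(c[i] in (pair[0][i], pair[1][i]) for i in range(len(c)))

def is_2_frameproof(code):
    code = [tuple(c) for c in code]
    for c1, c2 in combinations(code, 2):
        for c in code:
            if c not in (c1, c2) and is_descendant((c1, c2), c):
                return False
    return True

print(is_2_frameproof([(0, 0, 0), (1, 1, 1), (2, 2, 2)]))  # True
print(is_2_frameproof([(0, 0, 0), (1, 1, 1), (0, 1, 0)]))  # False: (0,1,0) is framable
\end{verbatim}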
In seminal work, Lov\'asz, Spencer, and Vesztergombi [European J. Combin., 1986] proved a lower bound for the hereditary discrepancy of a matrix $A \in \mathbb{R}^{m \times n}$ in terms of the maximum $|\det(B)|^{1/k}$ over all $k \times k$ submatrices $B$ of $A$. We show algorithmically that this determinant lower bound can be off by at most a factor of $O(\sqrt{\log (m) \cdot \log (n)})$, improving over the previous bound of $O(\log(mn) \cdot \sqrt{\log (n)})$ given by Matou\v{s}ek [Proc. of the AMS, 2013]. Our result immediately implies $\mathrm{herdisc}(\mathcal{F}_1 \cup \mathcal{F}_2) \leq O(\sqrt{\log (m) \cdot \log (n)}) \cdot \max(\mathrm{herdisc}(\mathcal{F}_1), \mathrm{herdisc}(\mathcal{F}_2))$, for any two set systems $\mathcal{F}_1, \mathcal{F}_2$ over $[n]$ satisfying $|\mathcal{F}_1 \cup \mathcal{F}_2| = m$. Our bounds are tight up to constants when $m = O(\mathrm{poly}(n))$ due to a construction of P\'alv\"olgyi [Discrete Comput. Geom., 2010] or the counterexample to Beck's three permutation conjecture by Newman, Neiman and Nikolov [FOCS, 2012].
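For illustration (brute force over all submatrices, so suitable only for tiny instances), the determinant lower bound $\max_k \max_B |\det(B)|^{1/k}$ can be evaluated directly as follows.
\begin{verbatim}
# Brute-force evaluation of the Lovasz-Spencer-Vesztergombi determinant
# lower bound detlb(A) = max_k max_B |det(B)|^{1/k} over all k x k
# submatrices B of A.  Exponential time; meant only to make the quantity
# in the bound concrete.
import numpy as np
from itertools import combinations

def detlb(A):
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    best = 0.0
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = abs(np.linalg.det(A[np.ix_(rows, cols)]))
                best = max(best, d ** (1.0 / k))
    return best

# Example: incidence matrix of a small set system over [3].
print(detlb([[1, 1, 0],
             [0, 1, 1],
             [1, 0, 1]]))   # 2^(1/3), attained by the full 3x3 matrix
\end{verbatim}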
In distributed differential privacy, the parties perform analysis over their joint data while preserving the privacy of both datasets. Interestingly, for a few fundamental two-party functions such as inner product and Hamming distance, the accuracy of the distributed solution lags far behind what is achievable in the client-server setting. McGregor, Mironov, Pitassi, Reingold, Talwar, and Vadhan [FOCS '10] proved that this gap is inherent, showing upper bounds on the accuracy of (any) distributed solution for these functions. These limitations can be bypassed when settling for computational differential privacy, where the data is differentially private only in the eyes of a computationally bounded observer, using public-key cryptography primitives. We prove that the use of public-key cryptography is necessary for bypassing the limitation of McGregor et al., showing that a non-trivial solution for the inner product or the Hamming distance implies the existence of a key-agreement protocol. Our bound implies a combinatorial proof of the fact that the non-Boolean inner product of independent (strong) Santha-Vazirani sources is a good condenser. We obtain our main result by showing that the inner product of a (single, strong) SV source with a uniformly random seed is a good condenser, even when the seed and source are dependent.
The Chebyshev or $\ell_{\infty}$ estimator is an unconventional alternative to ordinary least squares for solving linear regressions. It is defined as the minimizer of the $\ell_{\infty}$ objective function \begin{align*} \hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}} \|\boldsymbol{Y} - \mathbf{X}\boldsymbol{\beta}\|_{\infty}. \end{align*} The asymptotic distribution of the Chebyshev estimator with a fixed number of covariates was recently studied (Knight, 2020), yet finite sample guarantees and generalizations to high-dimensional settings remain open. In this paper, we develop non-asymptotic upper bounds on the estimation error $\|\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}^*\|_2$ for a Chebyshev estimator $\hat{\boldsymbol{\beta}}$, in a regression setting with uniformly distributed noise $\varepsilon_i\sim U([-a,a])$ where $a$ is either known or unknown. With relatively mild assumptions on the (random) design matrix $\mathbf{X}$, we can bound the error rate by $\frac{C_p}{n}$ with high probability, for some constant $C_p$ depending on the dimension $p$ and the law of the design. Furthermore, we illustrate that there exist designs for which the Chebyshev estimator is (nearly) minimax optimal. In addition, we show that "Chebyshev's LASSO" has advantages over the regular LASSO in high-dimensional situations, provided that the noise is uniform. Specifically, we argue that it achieves a much faster rate of estimation under certain assumptions on the growth rate of the sparsity level and the ambient dimension with respect to the sample size.
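The $\ell_\infty$ minimization above is a linear program; as a minimal sketch (illustrative, not the paper's code), it can be solved with scipy's linprog as follows.
\begin{verbatim}
# Chebyshev (l_infinity) regression as the LP
#     min_{beta, t} t   s.t.  -t <= y_i - x_i' beta <= t  for all i.
import numpy as np
from scipy.optimize import linprog

def chebyshev_regression(X, y):
    n, p = X.shape
    c = np.zeros(p + 1); c[-1] = 1.0          # variables: (beta, t)
    ones = np.ones((n, 1))
    A_ub = np.block([[ X, -ones],
                     [-X, -ones]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)]  # beta free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(0)
n, p, a = 500, 3, 1.0
X = rng.standard_normal((n, p))
beta_star = np.array([1.0, -2.0, 0.5])
y = X @ beta_star + rng.uniform(-a, a, size=n)  # uniform noise, as in the abstract
print(chebyshev_regression(X, y))               # close to beta_star
\end{verbatim}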
We study $k$-clustering problems with lower bounds, including $k$-median and $k$-means clustering with lower bounds. In addition to the point set $P$ and the number of centers $k$, a $k$-clustering problem with (uniform) lower bounds receives a number $B$ as part of the input. The solution space is restricted to clusterings where every cluster has at least $B$ points. We demonstrate how to approximate $k$-median with lower bounds via a reduction to facility location with lower bounds, for which $O(1)$-approximation algorithms are known. Then we propose a new constrained clustering problem with lower bounds where we allow points to be assigned multiple times (to different centers). This means that for every point, the clustering specifies a set of centers to which it is assigned. We call this clustering with weak lower bounds. We give a $(6.5+\epsilon)$-approximation for $k$-median clustering with weak lower bounds and an $O(1)$-approximation for $k$-means with weak lower bounds. We conclude by showing that, at the cost of a constant increase in the approximation factor, we can restrict the number of assignments of every point to $2$ (or, if we allow fractional assignments, to $1+\epsilon$). This also leads to the first bicriteria approximation algorithm for $k$-means with (standard) lower bounds, where bicriteria means that the lower bounds may be violated by a constant factor. All algorithms in this paper run in time that is polynomial in $n$ and $k$ (and $d$ for the Euclidean variants considered).
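To make the constraint concrete, here is a toy brute force for $k$-median with uniform lower bounds (exponential time, tiny instances only; it is not the paper's approximation algorithm).
\begin{verbatim}
# Pick k centers from P and assign every point to one chosen center so that
# each cluster gets at least B points, minimizing total assignment distance.
from itertools import combinations, product

def kmedian_with_lower_bounds(P, k, B, dist):
    best = (float("inf"), None, None)
    for centers in combinations(range(len(P)), k):
        for assign in product(range(k), repeat=len(P)):
            if any(assign.count(j) < B for j in range(k)):
                continue                      # violates the lower bound
            cost = sum(dist(P[i], P[centers[assign[i]]]) for i in range(len(P)))
            best = min(best, (cost, centers, assign))
    return best

P = [(0, 0), (0, 1), (1, 0), (5, 5), (6, 5), (0, 2)]
l1 = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
# With B = 3 one of the points near the origin must join the far cluster.
print(kmedian_with_lower_bounds(P, k=2, B=3, dist=l1))
\end{verbatim}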
Let $G$ be a graph with $n$ vertices and $m$ edges. One of several hierarchies towards the stability number of $G$ is the exact subgraph hierarchy (ESH). On the first level it computes the Lov\'{a}sz theta function $\vartheta(G)$ as a semidefinite program (SDP) with a matrix variable of order $n+1$ and $n+m+1$ constraints. On the $k$-th level it adds all exact subgraph constraints (ESC) for subgraphs of order $k$ to the SDP. An ESC ensures that the submatrix of the matrix variable corresponding to the subgraph is in the correct polytope. By including only some of the ESCs in the SDP, the ESH can be exploited computationally. In this paper we introduce a variant of the ESH that computes $\vartheta(G)$ through an SDP with a matrix variable of order $n$ and $m+1$ constraints. We show that it makes sense to include the ESCs in this SDP and introduce the compressed ESH (CESH) analogously to the ESH. Computationally the CESH seems favorable as the SDP is smaller. However, we prove that the bounds based on the ESH are always at least as good as those of the CESH. In computational experiments they are sometimes significantly better. We also introduce scaled ESCs (SESCs), which are a more natural way to include exactness constraints in the smaller SDP, and we prove that including an SESC is equivalent to including an ESC for every subgraph.
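For reference, one standard SDP formulation of $\vartheta(G)$ of the smaller size mentioned above (a matrix variable of order $n$ and $m+1$ constraints) is $\max\{\langle J, X\rangle : \mathrm{tr}(X)=1,\ X_{ij}=0 \text{ for } ij \in E(G),\ X \succeq 0\}$; we present it only as one of the standard equivalent formulations, not necessarily the one used in the paper. A minimal sketch with cvxpy (requires an SDP-capable solver such as SCS), without any exact subgraph constraints:
\begin{verbatim}
import cvxpy as cp

def lovasz_theta(n, edges):
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]           # PSD, trace one
    constraints += [X[i, j] == 0 for (i, j) in edges]  # zero on edges
    prob = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    prob.solve()
    return prob.value

# 5-cycle: theta(C5) = sqrt(5) ~ 2.236
print(lovasz_theta(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
\end{verbatim}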
In this paper, we study the weight spectrum of linear codes with \emph{super-linear} field size and use the probabilistic method to show that for nearly all such codes, the corresponding weight spectrum is very close to that of a maximum distance separable (MDS) code.
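For context (a standard fact stated here for reference, not a result of the paper), the benchmark weight spectrum is that of an $[n,k]$ MDS code over $\mathbb{F}_q$ with minimum distance $d = n-k+1$, whose weight distribution is \begin{align*} A_0 = 1, \qquad A_w = \binom{n}{w}\sum_{j=0}^{w-d}(-1)^j\binom{w}{j}\left(q^{\,w-d+1-j}-1\right), \qquad d \le w \le n, \end{align*} with $A_w = 0$ for $0 < w < d$.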
The communication cost of distributed optimization algorithms is a major bottleneck in their scalability. This work considers a parameter-server setting in which the worker is constrained to communicate information to the server using only $R$ bits per dimension. We show that $\mathbf{democratic}$ $\mathbf{embeddings}$ from random matrix theory are a useful tool for designing efficient and optimal vector quantizers that respect this bit budget. The resulting polynomial-complexity source coding schemes are used to design distributed optimization algorithms with convergence rates matching the minimax optimal lower bounds for (i) Smooth and Strongly-Convex objectives with access to an Exact Gradient oracle, as well as (ii) General Convex and Non-Smooth objectives with access to a Noisy Subgradient oracle. We further propose a relaxation of this coding scheme which is nearly minimax optimal. Numerical simulations validate our theoretical claims.
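The following is a minimal sketch of the generic rotate-then-scalar-quantize pipeline that such bit-budgeted quantizers are built around: a random orthogonal transform spreads the vector's energy roughly evenly across coordinates, after which each coordinate is quantized with a uniform $R$-bit scalar quantizer. It is a toy illustration of the idea only, not the democratic-embedding construction or the coding scheme analyzed in the paper.
\begin{verbatim}
import numpy as np

def quantize(x, R, rng):
    d = x.size
    # Random orthogonal matrix via QR of a Gaussian matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    z = Q @ x
    scale = np.max(np.abs(z))          # (a real scheme must also transmit this)
    levels = 2 ** R
    # Uniform quantizer on [-scale, scale] with 2^R levels per coordinate.
    idx = np.clip(np.floor((z + scale) / (2 * scale) * levels), 0, levels - 1)
    z_hat = (idx + 0.5) * (2 * scale) / levels - scale
    return Q.T @ z_hat                 # dequantize and rotate back

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
for R in (2, 4, 8):
    err = np.linalg.norm(quantize(x, R, rng) - x) / np.linalg.norm(x)
    print(R, round(err, 4))            # relative error shrinks roughly like 2^{-R}
\end{verbatim}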
Approximate Message Passing (AMP) algorithms have seen widespread use across a variety of applications. However, the precise forms for their Onsager corrections and state evolutions depend on properties of the underlying random matrix ensemble, limiting the extent to which AMP algorithms derived for white noise may be applicable to data matrices that arise in practice. In this work, we study more general AMP algorithms for random matrices $W$ that satisfy orthogonal rotational invariance in law, where $W$ may have a spectral distribution that is different from the semicircle and Mar\v{c}enko-Pastur laws characteristic of white noise. The Onsager corrections and state evolutions in these algorithms are defined by the free cumulants or rectangular free cumulants of the spectral distribution of $W$. Their forms were derived previously by Opper, \c{C}akmak, and Winther using non-rigorous dynamic functional theory techniques, and we provide rigorous proofs. Our motivating application is a Bayes-AMP algorithm for Principal Components Analysis, when there is prior structure for the principal components (PCs) and possibly non-white noise. For sufficiently large signal strengths and any non-Gaussian prior distributions for the PCs, we show that this algorithm provably achieves higher estimation accuracy than the sample PCs.
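As a reference point, here is a sketch of the classical AMP iteration for a symmetric white-noise (Wigner-type) matrix, where the Onsager correction is the single coefficient $b_t = \frac{1}{n}\sum_i \eta'(x_i^t)$. For the rotationally invariant ensembles studied in this line of work, this single coefficient is replaced by a memory term built from the free cumulants of the spectrum of $W$; that generalization is not implemented here.
\begin{verbatim}
import numpy as np

def amp_symmetric(W, x0, eta, eta_prime, iters=15):
    """Classical AMP for symmetric white-noise W (single Onsager term)."""
    n = W.shape[0]
    x, u_prev = x0.copy(), np.zeros(n)
    for _ in range(iters):
        b = np.mean(eta_prime(x))     # white-noise Onsager coefficient
        u = eta(x)
        x = W @ u - b * u_prev        # Onsager-corrected iteration
        u_prev = u
    return x

# Toy run: Wigner noise plus a rank-one spike lam * v v^T, tanh denoiser,
# informative initialization for simplicity.
rng = np.random.default_rng(0)
n, lam = 2000, 2.0
v = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
A = rng.standard_normal((n, n)) / np.sqrt(n)
W = (A + A.T) / np.sqrt(2) + lam * np.outer(v, v)
x = amp_symmetric(W, W @ v, np.tanh, lambda t: 1.0 - np.tanh(t) ** 2)
u = np.tanh(x)
print(abs(u @ v) / np.linalg.norm(u))  # overlap with the planted direction
\end{verbatim}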
Distributionally robust optimization (DRO) is a worst-case framework for stochastic optimization under uncertainty that has drawn fast-growing attention in recent years. When the underlying probability distribution is unknown and only observed through data, DRO suggests computing the worst-case distribution within a so-called uncertainty set that captures the involved statistical uncertainty. In particular, DRO with an uncertainty set constructed as a statistical divergence neighborhood ball has been shown to provide a tool for constructing valid confidence intervals for nonparametric functionals, and bears a duality with the empirical likelihood (EL). In this paper, we show how adjusting the ball size of such a DRO formulation can reduce higher-order coverage errors, in a manner similar to the Bartlett correction. Our correction, which applies to general von Mises differentiable functionals, is more general than the existing EL literature, which focuses only on smooth function models or $M$-estimation. Moreover, we demonstrate a higher-order "self-normalizing" property of DRO regardless of the choice of divergence. Our approach builds on a higher-order expansion of DRO, which is obtained through an asymptotic analysis of a fixed-point equation arising from the Karush-Kuhn-Tucker conditions.
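A toy numerical illustration of divergence-ball DRO for the simplest functional (the mean): the upper confidence limit is the worst-case reweighted mean over a Kullback-Leibler ball around the empirical distribution, solved here by direct constrained optimization. The sketch only evaluates the DRO value for a given ball size $\rho$; the choice and higher-order adjustment of $\rho$ is exactly what the paper studies, and is not reproduced here.
\begin{verbatim}
# Worst-case mean over a KL ball:  max_w  sum_i w_i x_i
#   s.t.  w in the simplex,  KL(w || uniform) <= rho.
import numpy as np
from scipy.optimize import minimize

def dro_mean_upper(x, rho):
    n = len(x)
    w0 = np.full(n, 1.0 / n)
    kl = lambda w: np.sum(w * np.log(np.clip(w, 1e-12, None) * n))
    cons = [{"type": "eq",   "fun": lambda w: np.sum(w) - 1.0},
            {"type": "ineq", "fun": lambda w: rho - kl(w)}]
    res = minimize(lambda w: -np.dot(w, x), w0, bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return -res.fun

rng = np.random.default_rng(0)
x = rng.exponential(size=100)
for rho in (0.005, 0.01, 0.02):   # larger balls give more conservative limits
    print(rho, round(dro_mean_upper(x, rho), 4))
\end{verbatim}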