The {\em discrepancy} of a matrix $M \in \mathbb{R}^{d \times n}$ is given by $\mathrm{DISC}(M) := \min_{\boldsymbol{x} \in \{-1,1\}^n} \|M\boldsymbol{x}\|_\infty$. An outstanding conjecture, attributed to Koml\'os, stipulates that $\mathrm{DISC}(M) = O(1)$ whenever $M$ is a Koml\'os matrix, that is, whenever every column of $M$ lies in the closed unit ball. Our main result asserts that $\mathrm{DISC}(M + R/\sqrt{d}) = O(d^{-1/2})$ holds asymptotically almost surely whenever $M \in \mathbb{R}^{d \times n}$ is Koml\'os, $R \in \mathbb{R}^{d \times n}$ is a Rademacher random matrix, $d = \omega(1)$, and $n = \tilde \omega(d^{5/4})$. We conjecture that $n = \omega(d \log d)$ suffices for the same assertion to hold. The factor $d^{-1/2}$ normalising $R$ is essentially best possible.
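For small instances the quantities above can be checked directly from the definition. Below is a minimal brute-force sketch (exponential in $n$, so purely illustrative); the toy instance, the column normalisation, and all names are our own, not taken from the paper.

```python
import itertools
import numpy as np

def disc(M: np.ndarray) -> float:
    """Brute-force DISC(M) = min over x in {-1,1}^n of ||M x||_inf."""
    _, n = M.shape
    return min(np.abs(M @ np.array(x)).max()
               for x in itertools.product((-1.0, 1.0), repeat=n))

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 12))
M /= np.maximum(np.linalg.norm(M, axis=0), 1.0)  # Komlos: columns in the unit ball
R = rng.choice([-1.0, 1.0], size=M.shape)        # Rademacher perturbation
print(disc(M), disc(M + R / np.sqrt(M.shape[0])))
```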
Uniform sampling from the set $\mathcal{G}(\mathbf{d})$ of graphs with a given degree-sequence $\mathbf{d} = (d_1, \dots, d_n) \in \mathbb N^n$ is a classical problem in the study of random graphs. We consider an analogue for temporal graphs, in which the edges are labeled with integer timestamps. The input to this generation problem is a tuple $\mathbf{D} = (\mathbf{d}, T) \in \mathbb N^n \times \mathbb N_{>0}$, and the task is to output a uniform random sample from the set $\mathcal{G}(\mathbf{D})$ of temporal graphs with degree-sequence $\mathbf{d}$ and timestamps in the interval $[1, T]$. Because repeated edges with distinct timestamps are allowed, $\mathcal{G}(\mathbf{D})$ can be non-empty even if $\mathcal{G}(\mathbf{d})$ is empty, and as a consequence, existing algorithms are difficult to apply. We describe an algorithm for this generation problem which runs in expected time $O(M)$ if $\Delta^{2+\epsilon} = O(M)$ for some constant $\epsilon > 0$ and $T - \Delta = \Omega(T)$, where $M = \sum_i d_i$ and $\Delta = \max_i d_i$. Our algorithm applies the switching method of McKay and Wormald $[1]$ to temporal graphs: we first generate a random temporal multigraph and then remove self-loops and duplicate edges with switching operations which rewire the edges in a degree-preserving manner.
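The first phase of this pipeline is a configuration-model-style pairing with random timestamps. A minimal sketch of that phase only (the switching phase that removes self-loops and duplicates, which is the technical heart of the algorithm, is not shown; names and structure are our own illustration):

```python
import random

def random_temporal_multigraph(degrees, T, seed=None):
    """Pair up degree stubs uniformly at random and give every
    resulting edge a uniform timestamp in [1, T].  The output may
    contain self-loops and repeated (edge, timestamp) pairs."""
    rng = random.Random(seed)
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return [(u, v, rng.randint(1, T))
            for u, v in zip(stubs[::2], stubs[1::2])]

print(random_temporal_multigraph([2, 2, 1, 1], T=5, seed=42))
```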
In this paper we study the problem of maximizing the distance to a given point $C_0$ over a polytope $\mathcal{P}$. Assuming that the polytope is circumscribed by a known ball, we construct an intersection of balls which preserves the vertices of the polytope lying on the boundary of this ball, and we show that the intersection of balls approximates the polytope arbitrarily well. Then, we use known results on maximizing the distance to a given point over an intersection of balls to create a new polytope which preserves the maximizers of the original problem. A new intersection of balls is then obtained in the same fashion, and we conjecture that, after a finite number of such iterations, one ends up with an intersection of balls over which the distance to the given point can be maximized. The distance so obtained is shown to be a non-trivial upper bound on the original distance. We test the approach by maximizing the distance to a random point over the unit hypercube in dimensions up to $n = 100$, and we also present several detailed 2-d examples.
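For the hypercube benchmark the exact optimum is available in closed form, because over a box the maximization decouples across coordinates: each $x_i$ independently moves to the endpoint farther from the $i$-th coordinate of $C_0$. A sketch of this ground-truth computation (our own illustration; the paper's method targets general polytopes, for which no such closed form exists):

```python
import numpy as np

def farthest_point_in_unit_cube(c: np.ndarray) -> np.ndarray:
    """Maximize ||x - c||_2 over [0, 1]^n: per coordinate, pick the
    endpoint farther from c_i, i.e. 0 if c_i >= 1/2 and 1 otherwise."""
    return (c < 0.5).astype(float)

rng = np.random.default_rng(1)
c = rng.uniform(0, 1, size=100)       # random point, n = 100 as in the tests
x = farthest_point_in_unit_cube(c)
print(np.linalg.norm(x - c))          # exact maximum distance for the cube
```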
Given a set of arms $\mathcal{Z}\subset \mathbb{R}^d$ and an unknown parameter vector $\theta_\ast\in\mathbb{R}^d$, the pure exploration linear bandit problem aims to return $\arg\max_{z\in \mathcal{Z}} z^{\top}\theta_{\ast}$ with high probability, using noisy measurements of $x^{\top}\theta_{\ast}$ for $x\in \mathcal{X}\subset \mathbb{R}^d$. Existing (asymptotically) optimal methods require either a) potentially costly projections for each arm $z\in \mathcal{Z}$ or b) explicitly maintaining a subset of $\mathcal{Z}$ under consideration at each time. This complexity is at odds with the popular and simple Thompson Sampling algorithm for regret minimization, which requires only access to posterior sampling and argmax oracles and does not need to enumerate $\mathcal{Z}$ at any point. Unfortunately, Thompson Sampling is known to be sub-optimal for pure exploration. In this work, we pose a natural question: is there an algorithm that can explore optimally and only needs the same computational primitives as Thompson Sampling? We answer the question in the affirmative. We provide an algorithm that leverages only sampling and argmax oracles and achieves an exponential convergence rate, with the exponent being asymptotically optimal among all possible allocations. In addition, we show that our algorithm can be easily implemented and performs empirically as well as existing asymptotically optimal methods.
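To make the two primitives concrete, here is a toy Bayesian linear-bandit loop that touches $\mathcal{Z}$ only through a sampling oracle and an argmax oracle. Note this is plain Thompson Sampling, which the abstract points out is sub-optimal for pure exploration; the paper's algorithm uses the same oracles but a different allocation rule. All names and the toy instance are our own:

```python
import numpy as np

def sample_posterior(mu, Sigma, rng):
    """Sampling oracle: one draw from the Gaussian posterior over theta."""
    return rng.multivariate_normal(mu, Sigma)

def argmax_oracle(theta, Z):
    """Argmax oracle: index of arg max_z z^T theta over the arm set."""
    return int(np.argmax(Z @ theta))

rng = np.random.default_rng(0)
Z = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # arms (= measurement set here)
theta_star, sigma = np.array([1.0, 0.8]), 0.5
V, b = np.eye(2), np.zeros(2)                       # N(0, I) prior: precision, moment
for t in range(2000):
    theta = sample_posterior(np.linalg.solve(V, b), np.linalg.inv(V), rng)
    x = Z[argmax_oracle(theta, Z)]                  # play the sampled best arm
    y = x @ theta_star + sigma * rng.standard_normal()
    V += np.outer(x, x) / sigma**2                  # conjugate posterior update
    b += x * y / sigma**2
print(argmax_oracle(np.linalg.solve(V, b), Z))      # recommend posterior-mean best
```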
In generative compressed sensing (GCS), we want to recover a signal $\mathbf{x}^* \in \mathbb{R}^n$ from $m$ measurements ($m\ll n$) using a generative prior $\mathbf{x}^*\in G(\mathbb{B}_2^k(r))$, where $G$ is typically an $L$-Lipschitz continuous generative model and $\mathbb{B}_2^k(r)$ is the radius-$r$ $\ell_2$-ball in $\mathbb{R}^k$. Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously. In this paper, we build a unified framework to derive uniform recovery guarantees for nonlinear GCS, where the observation model may be discontinuous or unknown. Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples. Specifically, using a single realization of the sensing ensemble and generalized Lasso, {\em all} $\mathbf{x}^*\in G(\mathbb{B}_2^k(r))$ can be recovered up to an $\ell_2$-error of at most $\epsilon$ using roughly $\tilde{O}({k}/{\epsilon^2})$ samples, with the omitted logarithmic factors typically dominated by $\log L$. Notably, this almost coincides with existing non-uniform guarantees up to logarithmic factors, so the uniformity costs very little. As part of our technical contributions, we introduce a Lipschitz approximation to handle discontinuous observation models. We also develop a concentration inequality that yields tighter bounds for product processes whose index sets have low metric entropy. Experimental results are presented to corroborate our theory.
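A toy instance of the 1-bit observation model and the generalized Lasso makes the setup concrete. In this sketch (our own illustration) the generative model is a stand-in linear decoder so the Lasso step has a closed form; for a neural $G$ one would instead run (projected) gradient descent in the latent variable:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 120, 10
B = rng.standard_normal((n, k)) / np.sqrt(n)
G = lambda z: B @ z                      # stand-in generative model (linear decoder)

x_star = G(rng.standard_normal(k))
A = rng.standard_normal((m, n))          # Gaussian sensing matrix
y = np.sign(A @ x_star)                  # 1-bit (sign) observations

# Generalized Lasso over range(G): min_z ||y - A G(z)||_2, here plain
# least squares because G is linear.
z_hat, *_ = np.linalg.lstsq(A @ B, y, rcond=None)
x_hat = G(z_hat)
# 1-bit measurements lose the norm, so compare directions only:
print(x_hat @ x_star / (np.linalg.norm(x_hat) * np.linalg.norm(x_star)))
```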
Consider $\ell_{\alpha}$-regularized linear regression, also termed Bridge regression. For $\alpha\in (0,1)$, Bridge regression enjoys several statistical properties of interest, such as sparsity and near-unbiasedness of the estimates (Fan and Li, 2001). However, the main difficulty lies in the non-convex nature of the penalty for these values of $\alpha$, which makes optimization challenging: usually only a local optimum can be found. To address this issue, Polson et al. (2013) took a sampling-based fully Bayesian approach to this problem, using the correspondence between the Bridge penalty and a power exponential prior on the regression coefficients. However, their sampling procedure relies on Markov chain Monte Carlo (MCMC) techniques, which are inherently sequential and do not scale to large problem dimensions; cross-validation approaches are similarly computation-intensive. Our contribution is a novel \emph{non-iterative} method to fit a Bridge regression model. Its core is an explicit formula for Stein's unbiased risk estimate of the out-of-sample prediction risk of Bridge regression, which can then be optimized to select the desired tuning parameters, allowing us to completely bypass MCMC as well as computation-intensive cross-validation. Our procedure yields results in a fraction of the computation time of iterative schemes, without any appreciable loss in statistical performance. An R implementation is publicly available at https://github.com/loriaJ/Sure-tuned_BridgeRegression .
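To fix notation, the Bridge estimate minimizes the least-squares loss plus an $\ell_\alpha$ penalty. The sketch below (our own illustration) evaluates that objective and hands it to a derivative-free local optimizer, which is exactly the local-optimum-only situation described above; it does not implement the paper's SURE formula:

```python
import numpy as np
from scipy.optimize import minimize

def bridge_objective(beta, X, y, lam, alpha):
    """0.5 * ||y - X beta||^2 + lam * sum_j |beta_j|^alpha;
    non-convex (and non-smooth at 0) for alpha in (0, 1)."""
    resid = y - X @ beta
    return 0.5 * resid @ resid + lam * np.sum(np.abs(beta) ** alpha)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(50)

res = minimize(bridge_objective, x0=np.zeros(5),
               args=(X, y, 1.0, 0.5), method="Nelder-Mead")
print(res.x)   # a local optimum only -- nothing certifies global optimality
```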
We prove that any semi-streaming algorithm for $(1-\epsilon)$-approximation of maximum bipartite matching requires \[ \Omega\left(\frac{\log(1/\epsilon)}{\log(1/\beta)}\right) \] passes, where $\beta \in (0,1)$ is the largest parameter so that an $n$-vertex graph with $n^{\beta}$ edge-disjoint induced matchings of size $\Theta(n)$ exists (such graphs are referred to as RS graphs). Currently, it is known that \[ \Omega\left(\frac{1}{\log\log{n}}\right) \leq \beta \leq 1-\Theta\left(\frac{\log^*{n}}{\log{n}}\right) \] and closing this huge gap between the upper and lower bounds has remained a notoriously difficult problem in combinatorics. Under the plausible hypothesis that $\beta = \Omega(1)$, our lower bound result provides the first pass-approximation lower bound for (small) constant-factor approximation of matchings in the semi-streaming model, a longstanding open question in the graph streaming literature. Our techniques are based on analyzing communication protocols for compressing (hidden) permutations. Prior work in this context relied on reducing such problems to the Boolean domain and analyzing them via tools like XOR lemmas and Fourier analysis on the Boolean hypercube. In contrast, our main technical contribution is a hardness amplification result for permutations through concatenation, in place of prior XOR lemmas. This result is proven by analyzing permutations directly via simple tools from group representation theory combined with detailed information-theoretic arguments, and may be of independent interest.
This paper studies the prediction of a target $\mathbf{z}$ from a pair of random variables $(\mathbf{x},\mathbf{y})$, where the ground-truth predictor is additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x},\mathbf{y}] = f_\star(\mathbf{x}) + g_{\star}(\mathbf{y})$. We study the performance of empirical risk minimization (ERM) over functions $f+g$, $f \in F$ and $g \in G$, fit on a given training distribution but evaluated on a test distribution which exhibits covariate shift. We show that, when the class $F$ is "simpler" than $G$ (measured, e.g., in terms of its metric entropy), our predictor is more resilient to $\textbf{heterogeneous covariate shifts}$ in which the shift in $\mathbf{x}$ is much greater than that in $\mathbf{y}$. Our analysis proceeds by demonstrating that ERM behaves $\textbf{qualitatively similarly to orthogonal machine learning}$: the rate at which ERM recovers the $f$-component of the predictor has only a lower-order dependence on the complexity of the class $G$, adjusted for the partial non-identifiability introduced by the additive structure. These results rely on a novel H\"older-style inequality for the Dudley integral which may be of independent interest. Moreover, we corroborate our theoretical findings with experiments demonstrating improved resilience to shifts in "simpler" features across numerous domains.
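As a toy illustration of ERM over an additive class (our own construction, not the paper's experiments), take $F$ to be linear functions of $\mathbf{x}$ (the "simple" class) and $G$ a small polynomial class in $\mathbf{y}$, and fit $f+g$ jointly by least squares; note the constant can be shifted freely between $f$ and $g$, which is the partial non-identifiability mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-1, 1, n)
y = rng.uniform(-1, 1, n)
z = 2.0 * x + np.sin(3.0 * y) + 0.1 * rng.standard_normal(n)  # f*(x) + g*(y) + noise

# ERM over f + g: F = {a * x}, G = {cubics in y} (plus a shared intercept).
Phi = np.column_stack([x, np.ones(n), y, y**2, y**3])
coef, *_ = np.linalg.lstsq(Phi, z, rcond=None)
print(coef[0])   # recovered f-slope, close to the true value 2.0
```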
We make an experimental comparison of methods for computing the numerical radius of an $n\times n$ complex matrix, based on two well-known characterizations: the first a nonconvex optimization problem in one real variable, the second a convex optimization problem in $n^{2}+1$ real variables. We compare with respect to both accuracy and computation time, using publicly available software.
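The one-variable characterization referred to is $r(A) = \max_{\theta} \lambda_{\max}\big(\tfrac{1}{2}(e^{i\theta}A + e^{-i\theta}A^*)\big)$. A minimal grid-search sketch of it (our own illustration; a grid gives a lower bound on $r(A)$, and since the objective is nonconvex in $\theta$ a local optimizer alone can get stuck, which is what makes the comparison interesting):

```python
import numpy as np

def numerical_radius(A: np.ndarray, num_angles: int = 3600) -> float:
    """max over theta of lambda_max of the Hermitian part of e^{i theta} A."""
    best = -np.inf
    for theta in np.linspace(0.0, 2.0 * np.pi, num_angles, endpoint=False):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2.0
        best = max(best, np.linalg.eigvalsh(H)[-1])  # eigvalsh sorts ascending
    return best

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
print(numerical_radius(A))
```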
Consider $k\le n$ agents in a simple, connected, undirected graph $G=(V,E)$ with $n$ nodes and $m$ edges. The goal of the dispersion problem is to move these $k$ agents to distinct nodes. Agents can communicate only when they are at the same node, and no other means of communication, such as whiteboards, are available. We assume that the agents operate synchronously. We consider two scenarios: all agents initially located at a single node (the rooted setting) and agents initially distributed over one or more nodes (the general setting). Kshemkalyani and Sharma presented a dispersion algorithm for the general setting which uses $O(m_k)$ time and $O(\log(k+\delta))$ bits of memory per agent [OPODIS 2021]. Here, $m_k$ is the maximum number of edges in any induced subgraph of $G$ with $k$ nodes, and $\delta$ is the maximum degree of $G$. This algorithm is the fastest in the literature, as no $o(m_k)$-time algorithm is known even for the rooted setting. In this paper, we present faster algorithms for both the rooted and general settings. First, we present an algorithm for the rooted setting that solves the dispersion problem in $O(k\log \min(k,\delta))=O(k\log k)$ time using $O(\log \delta)$ bits of memory per agent. Next, we propose an algorithm for the general setting that achieves dispersion in $O(k (\log k)\cdot (\log \min(k,\delta)))=O(k \log^2 k)$ time using $O(\log (k+\delta))$ bits.
A kernelization for a parameterized decision problem $\mathcal{Q}$ is a polynomial-time preprocessing algorithm that reduces any parameterized instance $(x,k)$ into an instance $(x',k')$ whose size is bounded by a function of $k$ alone and which has the same yes/no answer for $\mathcal{Q}$. Such preprocessing algorithms cannot exist in the context of counting problems, when the answer to be preserved is the number of solutions, since this number can be arbitrarily large compared to $k$. However, we show that for counting minimum feedback vertex sets of size at most $k$, and for counting minimum dominating sets of size at most $k$ in a planar graph, there is a polynomial-time algorithm that either outputs the answer or reduces the input to an instance $(G',k')$ of size polynomial in $k$ with the same number of minimum solutions. This shows that a meaningful theory of kernelization for counting problems is possible and opens the door for future developments. Our algorithms exploit the fact that if the number of solutions exceeds $2^{\mathsf{poly}(k)}$, then the size of the input is exponential in $k$, so that the running time of a parameterized counting algorithm can be bounded by $\mathsf{poly}(n)$. Otherwise, we can use gadgets that slightly increase $k$ to represent choices among $2^{O(k)}$ options by only $\mathsf{poly}(k)$ vertices.