This paper analyzes the correlation matrix between the a priori error and measurement noise vectors for the affine projection algorithm (APA). This correlation stems from the dependence between the filter tap estimates and the noise samples, and has a strong influence on the mean-square behavior of the algorithm. We show that the correlation matrix is upper triangular, and we compute the diagonal elements in closed form, showing that they are independent of the input process statistics. For white inputs, we show that the matrix is fully diagonal. These results are valid in the transient and steady states of the algorithm, allowing for a possibly variable step size. Our only assumption is that the filter order is large compared to the projection order of APA; beyond stationarity, we make no assumptions on the input signal. Using these results, we perform a steady-state analysis of the algorithm for small step sizes and provide a new, simple closed-form expression for the mean-square error, whose accuracy is comparable to or better than that of many preexisting expressions while being much simpler to compute. Finally, we also obtain expressions for the steady-state energy of the other components of the error vector.
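As a point of reference for the quantities discussed above, the following is a minimal sketch of the standard APA recursion; the a priori error vector `e` is the object whose correlation with the noise is analyzed. The step size, regularization, and variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One affine projection algorithm (APA) iteration.

    w     : (M,)   current filter tap estimates (M = filter order)
    X     : (M, K) last K input vectors (K = projection order, K << M)
    d     : (K,)   corresponding reference samples (signal + measurement noise)
    mu    : step size (possibly time-varying, as in the paper's analysis)
    delta : regularization keeping the K x K inversion well conditioned
    """
    e = d - X.T @ w                      # a priori error vector
    w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w, e
```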
We develop machinery to design efficiently computable and consistent estimators, achieving estimation error approaching zero as the number of observations grows, when facing an oblivious adversary that may corrupt responses in all but an $\alpha$ fraction of the samples. As concrete examples, we investigate two problems: sparse regression and principal component analysis (PCA). For sparse regression, we achieve consistency for optimal sample size $n\gtrsim (k\log d)/\alpha^2$ and optimal error rate $O(\sqrt{(k\log d)/(n\cdot \alpha^2)})$, where $n$ is the number of observations, $d$ is the number of dimensions, and $k$ is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples. Prior to this work, no estimator was known to be consistent when the fraction of inliers $\alpha$ is $o(1/\log \log n)$, even for (non-spherical) Gaussian design matrices. Results holding under weak design assumptions and in the presence of such general noise have only been shown in the dense setting (i.e., general linear regression) very recently by d'Orsi et al. [dNS21]. In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix (usually used in matrix completion). Previous works could obtain non-trivial guarantees only under the assumption that the measurement noise corresponding to the inliers is polynomially small in $n$ (e.g., Gaussian with variance $1/n^2$). To devise our estimators, we equip the Huber loss with non-smooth regularizers such as the $\ell_1$ norm or the nuclear norm, and extend d'Orsi et al.'s approach [dNS21] in a novel way to analyze the loss function. Our machinery appears to be easily applicable to a wide range of estimation problems.
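As a toy illustration of the estimator family described above, here is a minimal sketch of the Huber loss equipped with an $\ell_1$ regularizer, minimized by proximal gradient descent; the Huber threshold, penalty level, step size, and iteration count below are arbitrary illustrative choices, not the paper's.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # derivative of the Huber loss, applied entrywise to residuals r
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def huber_lasso(X, y, lam=0.1, delta=1.0, step=None, iters=500):
    n, d = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # safe step for the smooth part
    beta = np.zeros(d)
    for _ in range(iters):
        g = -X.T @ huber_grad(y - X @ beta, delta) / n   # gradient of Huber term
        beta = soft_threshold(beta - step * g, step * lam)
    return beta
```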
A Dyck sequence is a sequence of opening and closing parentheses (of various types) that is balanced. The Dyck edit distance of a given sequence of parentheses $S$ is the smallest number of edit operations (insertions, deletions, and substitutions) needed to transform $S$ into a Dyck sequence. We consider the threshold Dyck edit distance problem, where the input is a sequence of parentheses $S$ and a positive integer $k$, and the goal is to compute the Dyck edit distance of $S$ only if the distance is at most $k$, and otherwise report that the distance is larger than $k$. Backurs and Onak [PODS'16] showed that the threshold Dyck edit distance problem can be solved in $O(n+k^{16})$ time. In this work, we design new algorithms for the threshold Dyck edit distance problem that run in $O(n+k^{4.782036})$ time with high probability or in $O(n+k^{4.853059})$ time deterministically. Our algorithms combine several new structural properties of the Dyck edit distance problem, a refined algorithm for fast $(\min,+)$ matrix product, and a careful modification of ideas used in Valiant's parsing algorithm.
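To make the problem concrete, here is a simple $O(n^3)$-time interval dynamic program for the (unthresholded) Dyck edit distance over three parenthesis types; it is only a baseline and is unrelated to the faster algorithms above. It uses the standard observation that insertions can be replaced by deletions without increasing the cost.

```python
OPEN, CLOSE = "([{", ")]}"

def dyck_edit_distance(s):
    n = len(s)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            if i == j:
                dp[i][j] = 1                      # single symbol: delete it
                continue
            # option 1: delete s[i] or s[j]
            best = min(dp[i + 1][j], dp[i][j - 1]) + 1
            # option 2: pair s[i] with s[j], substituting as needed
            cost = (s[i] not in OPEN) + (s[j] not in CLOSE)
            if cost == 0 and OPEN.index(s[i]) != CLOSE.index(s[j]):
                cost = 1                          # right orientation, wrong type
            inner = dp[i + 1][j - 1] if length > 2 else 0
            best = min(best, inner + cost)
            # option 3: split into two independently balanced parts
            for m in range(i, j):
                best = min(best, dp[i][m] + dp[m + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(dyck_edit_distance("([)]{"))   # distance 2
```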
The Gaussian noise stability of a function $f:\mathbb{R}^n \to \{-1, 1\}$ is the expected value of $f(\boldsymbol{x}) \cdot f(\boldsymbol{y})$ over $\rho$-correlated Gaussian random variables $\boldsymbol{x}$ and $\boldsymbol{y}$. Borell's inequality states that for $-1 \leq \rho \leq 0$, this is minimized by the halfspace $f(x) = \mathrm{sign}(x_1)$. In this work, we generalize this result to hold for functions $f:\mathbb{R}^n \to S^{k-1}$ which output $k$-dimensional unit vectors. Our main result shows that the expected value of $\langle f(\boldsymbol{x}), f(\boldsymbol{y})\rangle$ over $\rho$-correlated Gaussians $\boldsymbol{x}$ and $\boldsymbol{y}$ is minimized by the function $f(x) = x_{\leq k} / \Vert x_{\leq k} \Vert$, where $x_{\leq k} = (x_1, \ldots, x_k)$. As an application, we show several hardness of approximation results for Quantum Max-Cut, a special case of the local Hamiltonian problem related to the anti-ferromagnetic Heisenberg model. Quantum Max-Cut is a natural quantum analogue of classical Max-Cut and has become a testbed for designing quantum approximation algorithms. We show the following: (1) The integrality gap of the basic SDP is $0.498$, matching an existing rounding algorithm. Combined with existing approximation results for Quantum Max-Cut, this shows that the basic SDP does not achieve the optimal approximation ratio. (2) It is Unique Games-hard (UG-hard) to compute a $(0.956+\varepsilon)$-approximation to the value of the best product state, matching an existing approximation algorithm. This result may be viewed as applying to a generalization of Max-Cut where one seeks to assign $3$-dimensional unit vectors to each vertex; we also give tight hardness results for the analogous $k$-dimensional generalization of Max-Cut. (3) It is UG-hard to compute a $(0.956+\varepsilon)$-approximation to the value of the best (possibly entangled) state.
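A quick Monte Carlo sanity check of the central quantity above: the expectation of $\langle f(\boldsymbol{x}), f(\boldsymbol{y})\rangle$ for the minimizing function $f(x) = x_{\leq k} / \Vert x_{\leq k} \Vert$. The dimension, $k$, and sample count are arbitrary illustrative choices.

```python
import numpy as np

def noise_stability(rho, n=50, k=3, samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((samples, n))
    z = rng.standard_normal((samples, n))
    y = rho * x + np.sqrt(1 - rho ** 2) * z      # y is rho-correlated with x
    fx = x[:, :k] / np.linalg.norm(x[:, :k], axis=1, keepdims=True)
    fy = y[:, :k] / np.linalg.norm(y[:, :k], axis=1, keepdims=True)
    return np.mean(np.sum(fx * fy, axis=1))      # estimate of E<f(x), f(y)>

print(noise_stability(rho=-0.5))   # negative rho, the regime of Borell's inequality
```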
Given a $k$-vertex-connected graph $G$ and a set $S$ of extra edges (links), the goal of the $k$-vertex-connectivity augmentation problem is to find a minimum-size set $S' \subseteq S$ such that adding $S'$ to $G$ makes it $(k+1)$-vertex-connected. Unlike for the edge-connectivity augmentation problem, research on the vertex-connectivity version has been sparse. In this work we present the first polynomial-time approximation algorithm that improves on the known ratio of 2 for $2$-vertex-connectivity augmentation, for the case in which $G$ is a cycle. This is a first step toward the more general problem of augmenting an arbitrary $2$-connected graph. Our algorithm is based on local search and attains an approximation ratio of $1.8704$. To derive it, we prove novel results on the structure of minimal solutions.
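For intuition on the problem statement (not the $1.8704$-approximation itself), the following brute-force checker, assuming the networkx library, searches all link subsets for the smallest one that makes a cycle 3-vertex-connected; it is exponential in $|S|$ and only usable on toy instances.

```python
from itertools import combinations
import networkx as nx

def min_augmentation(n, links):
    G = nx.cycle_graph(n)                    # a cycle is 2-vertex-connected
    for size in range(len(links) + 1):       # smallest feasible subset wins
        for subset in combinations(links, size):
            H = G.copy()
            H.add_edges_from(subset)
            if nx.node_connectivity(H) >= 3:
                return list(subset)
    return None                              # no feasible augmentation within S

print(min_augmentation(6, [(0, 3), (1, 4), (2, 5), (0, 2)]))
```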
We propose an approach to the estimation of infinite sets of random vectors. The problem addressed is as follows: given two infinite sets of random vectors, find a single estimator that estimates vectors from both sets with a controlled associated error. We develop a new theory for the existence and implementation of such an estimator. In particular, we show that the proposed estimator is asymptotically optimal. Moreover, the estimator is determined in terms of pseudo-inverse matrices and, therefore, always exists.
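As a finite-dimensional toy version of the pseudo-inverse construction mentioned above, here is the textbook best linear estimator written with `pinv` so that it exists even when the observation covariance is singular; this is standard linear MMSE machinery, not the paper's estimator for infinite sets, and all data below are made up.

```python
import numpy as np

def linear_mmse(X, Y):
    """Fit F minimizing E||x - F y||^2 from paired samples (rows of X and Y)."""
    Rxy = X.T @ Y / len(X)                  # cross-covariance estimate
    Ryy = Y.T @ Y / len(Y)                  # observation covariance estimate
    return Rxy @ np.linalg.pinv(Ryy)        # pinv: defined even if Ryy is singular

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))
Y = X @ rng.standard_normal((3, 5)) + 0.1 * rng.standard_normal((1000, 5))
F = linear_mmse(X, Y)
print(np.mean((X - Y @ F.T) ** 2))          # small residual error
```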
Large health care data repositories such as electronic health records (EHR) open new opportunities to derive individualized treatment strategies that improve disease outcomes. We study the problem of estimating sequential treatment rules tailored to patients' individual characteristics, often referred to as dynamic treatment regimes (DTRs). We seek the optimal DTR, which maximizes the discontinuous value function, through direct maximization of a Fisher consistent surrogate loss function. We show that a large class of concave surrogates fails to be Fisher consistent, in contrast to the classic setting of binary classification. We further characterize a non-concave family of Fisher consistent smooth surrogate functions, which can be optimized with gradient descent using off-the-shelf machine learning algorithms. Compared to the existing direct-search approach under the support vector machine framework (Zhao et al., 2015), our proposed DTR estimation via surrogate loss optimization (DTRESLO) method is more computationally scalable to large sample sizes and allows a broader functional class for the predictor effects. We establish theoretical properties for our proposed DTR estimator and obtain a sharp upper bound on the regret of our DTRESLO method. Finite-sample performance of our proposed estimator is evaluated through extensive simulations and through an application to deriving an optimal DTR for treating sepsis, using EHR data from patients admitted to intensive care units.
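A single-stage toy version of the surrogate idea: replace the discontinuous indicator in the (inverse-propensity-weighted) value function with a smooth, non-concave sigmoid and run gradient ascent on a linear rule. The sigmoid and all tuning constants here are illustrative stand-ins; the paper characterizes which smooth surrogates are actually Fisher consistent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_rule(X, A, Y, prop=0.5, lr=0.1, iters=2000):
    """X: covariates; A in {-1, +1}: treatments; Y >= 0: outcomes."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        s = sigmoid(A * (X @ w))
        # ascend the surrogate value  mean( Y/prop * sigmoid(A * f(X)) )
        grad = X.T @ (Y / prop * s * (1 - s) * A) / len(Y)
        w += lr * grad
    return w                    # decision rule: treat when x @ w > 0
```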
Recently there has been increased interest in using machine learning techniques to improve classical algorithms. In this paper we study when it is possible to construct compact, composable sketches for weighted sampling and for estimating statistics given by functions of data frequencies. Such structures are now central components of large-scale data analytics and machine learning pipelines. However, many common functions, such as thresholds and $p$-th frequency moments with $p > 2$, are known to require polynomial-size sketches in the worst case. We explore performance beyond the worst case under two different types of assumptions. The first is access to noisy advice on item frequencies. This continues the line of work of Hsu et al. (ICLR 2019), who assume predictions are provided by a machine learning model. The second is guaranteed performance on a restricted class of input frequency distributions that are better aligned with what is observed in practice. This extends the work on heavy hitters under Zipfian distributions in a seminal paper of Charikar et al. (ICALP 2002). Surprisingly, we show analytically and empirically that, "in practice", small polylogarithmic-size sketches achieve high accuracy for these "hard" functions.
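For context, a minimal, mergeable CountSketch shows what "compact, composable" means here: two sketches built with shared seeds add coordinatewise, and item frequencies are recovered approximately. The class below is an illustrative sketch written for this note, not code from the paper; the abstract's point concerns which functions of frequencies admit such small sketches beyond the worst case.

```python
import numpy as np

class CountSketch:
    def __init__(self, rows=5, cols=256, seed=0):
        rng = np.random.default_rng(seed)            # shared seed => mergeable
        self.table = np.zeros((rows, cols))
        self.salts = rng.integers(0, 2**31, size=rows)
        self.cols = cols

    def _idx(self, item, r):
        # bucket and sign from a salted hash (stable within one process)
        v = hash((item, int(self.salts[r])))
        return abs(v) % self.cols, 1 if v % 2 == 0 else -1

    def add(self, item, weight=1.0):
        for r in range(len(self.salts)):
            c, sign = self._idx(item, r)
            self.table[r, c] += sign * weight

    def estimate(self, item):
        ests = []
        for r in range(len(self.salts)):
            c, sign = self._idx(item, r)
            ests.append(sign * self.table[r, c])
        return float(np.median(ests))

    def merge(self, other):
        self.table += other.table                    # composability

sk1, sk2 = CountSketch(seed=1), CountSketch(seed=1)
sk1.add("a", 3.0); sk2.add("a", 2.0)
sk1.merge(sk2)
print(sk1.estimate("a"))   # approximately 5.0
```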
This article studies a novel distributed precoding design, coined team minimum mean-square error (TMMSE) precoding, which rigorously generalizes classical centralized MMSE precoding to distributed operations based on transmitter-specific channel state information (CSIT). Building on the theory of teams, we derive a set of necessary and sufficient conditions for optimal TMMSE precoding, in the form of an infinite-dimensional linear system of equations. These optimality conditions are further specialized to cell-free massive MIMO networks and solved explicitly for two important examples: the classical case of local CSIT and the case of unidirectional CSIT sharing along a serial fronthaul. The latter case is relevant, e.g., for the recently proposed radio stripe concept and related advances on sequential processing exploiting serial connections. In both cases, our optimal design outperforms heuristic methods known from the previous literature. Duality arguments and numerical simulations validate the effectiveness of the proposed team-theoretical approach in terms of ergodic achievable rates under a sum-power constraint.
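For reference, a minimal sketch of the centralized MMSE precoder that TMMSE precoding generalizes, with a sum-power normalization; the array shapes, noise level, and MMSE-style regularization below are illustrative assumptions, not the paper's distributed design.

```python
import numpy as np

def mmse_precoder(H, noise_var, P):
    """H: (K users, M antennas) channel matrix; returns an (M, K) precoder."""
    K, M = H.shape
    # regularized channel inversion with the classical MMSE regularization
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + K * noise_var / P * np.eye(K))
    return W * np.sqrt(P / np.trace(W @ W.conj().T).real)   # sum-power constraint

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
W = mmse_precoder(H, noise_var=0.1, P=1.0)
```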
We are interested in learning generative models for complex geometries described via manifolds, such as spheres, tori, and other implicit surfaces. Current extensions of existing (Euclidean) generative models are restricted to specific geometries and typically suffer from high computational costs. We introduce Moser Flow (MF), a new class of generative models within the family of continuous normalizing flows (CNF). MF also produces a CNF via a solution to the change-of-variable formula; however, unlike other CNF methods, its model (learned) density is parameterized as the source (prior) density minus the divergence of a neural network (NN). The divergence is a local, linear differential operator that is easy to approximate and calculate on manifolds. Therefore, unlike other CNFs, MF does not require invoking or backpropagating through an ODE solver during training. Furthermore, representing the model density explicitly via the divergence of an NN, rather than as a solution of an ODE, facilitates learning high-fidelity densities. Theoretically, we prove that MF constitutes a universal density approximator under suitable assumptions. Empirically, we demonstrate for the first time the use of flow models for sampling from general curved surfaces and achieve significant improvements in density estimation, sample quality, and training complexity over existing CNFs on challenging synthetic geometries and real-world benchmarks from the earth and climate sciences.
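A hedged Euclidean sketch of the Moser Flow parameterization: the model density is the prior density minus the divergence of a vector field, approximated here by central finite differences. The hand-written vector field stands in for the neural network, and nothing here handles the manifold case.

```python
import numpy as np

def divergence(v, x, h=1e-4):
    """Central-difference estimate of div v at a single point x of shape (d,)."""
    d = x.shape[0]
    div = 0.0
    for i in range(d):
        e = np.zeros(d); e[i] = h
        div += (v(x + e)[i] - v(x - e)[i]) / (2 * h)
    return div

def model_density(prior_pdf, v, x):
    # mu(x) = nu(x) - div v(x): the Moser Flow density parameterization
    return prior_pdf(x) - divergence(v, x)

# toy example: standard normal prior, a small hand-written vector field
prior = lambda x: np.exp(-x @ x / 2) / (2 * np.pi) ** (len(x) / 2)
v = lambda x: 0.1 * np.tanh(x)
print(model_density(prior, v, np.array([0.3, -0.7])))
```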
In this paper, we study optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
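A toy instance of the dual approach: consensus over a ring written as the affine constraint $Ax = 0$ (with $A$ the edge incidence matrix) and Nesterov's accelerated gradient ascent on the dual. Each dual gradient $Ax$ only needs neighbor-to-neighbor communication, and quadratic local costs $f_i(x) = (x - b_i)^2/2$ keep the inner minimization in closed form. All problem data below are made up for illustration.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]           # a 4-node ring network
m, b = 4, np.array([1.0, 2.0, 3.0, 4.0])           # local data; optimum = mean(b)
A = np.zeros((len(edges), m))                      # edge incidence matrix
for k, (i, j) in enumerate(edges):
    A[k, i], A[k, j] = 1.0, -1.0

step = 1.0 / np.linalg.eigvalsh(A.T @ A).max()     # 1 / lambda_max(Laplacian)
lam = y = np.zeros(len(edges))
t = 1.0
for _ in range(200):
    x = b - A.T @ y                                # local primal minimizers
    lam_next = y + step * (A @ x)                  # dual gradient ascent step
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = lam_next + (t - 1) / t_next * (lam_next - lam)   # Nesterov momentum
    lam, t = lam_next, t_next

print(b - A.T @ lam)                               # all entries approach mean(b) = 2.5
```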