Frequency estimation is one of the most fundamental problems in streaming algorithms. Given a stream $S$ of elements from some universe $U=\{1 \ldots n\}$, the goal is to compute, in a single pass, a short sketch of $S$ so that for any element $i \in U$, one can estimate the number $x_i$ of times $i$ occurs in $S$ based on the sketch alone. Two state-of-the-art solutions to this problem are the Count-Min and Count-Sketch algorithms. The frequency estimator $\tilde{x}$ produced by Count-Min, using $O(1/\varepsilon \cdot \log n)$ dimensions, guarantees that $\|\tilde{x}-x\|_{\infty} \le \varepsilon \|x\|_1$ with high probability, and $\tilde{x} \ge x$ holds deterministically. Count-Min also requires the assumption that $x \ge 0$. Count-Sketch, on the other hand, using $O(1/\varepsilon^2 \cdot \log n)$ dimensions, guarantees that $\|\tilde{x}-x\|_{\infty} \le \varepsilon \|x\|_2$ with high probability. A natural question is whether it is possible to design a best-of-both-worlds sketching method: one with error guarantees depending on the $\ell_2$ norm and space comparable to Count-Sketch, but which (like Count-Min) also has the no-underestimation property. Our main set of results shows that the answer to this question is negative. We show this in two incomparable computational models: linear sketching and streaming algorithms. We also study the complementary problem, where the sketch is required not to overestimate, i.e., $\tilde{x} \le x$ should always hold.
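To make the contrast concrete, here is a minimal Count-Min sketch in Python illustrating the no-underestimation property discussed above: for non-negative updates, each counter upper-bounds the true frequency, so the row-wise minimum never underestimates. The width, depth, and hash scheme below are illustrative choices, not those analyzed in the paper.

```python
import random

class CountMin:
    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]
        self.salts = [rng.getrandbits(64) for _ in range(depth)]  # one hash seed per row

    def update(self, item, count=1):
        for row, salt in zip(self.table, self.salts):
            row[hash((salt, item)) % self.width] += count

    def query(self, item):
        # Each cell upper-bounds the true count for non-negative updates,
        # so the row-wise minimum never underestimates.
        return min(row[hash((salt, item)) % self.width]
                   for row, salt in zip(self.table, self.salts))

stream = ["a"] * 50 + ["b"] * 30 + ["c"] * 20
cm = CountMin(width=64, depth=4)
for x in stream:
    cm.update(x)
assert cm.query("a") >= 50   # tilde{x} >= x holds deterministically
print(cm.query("a"), cm.query("b"), cm.query("c"))
```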
Recent quasi-optimal error estimates for the finite element approximation of total-variation regularized minimization problems using the Crouzeix--Raviart finite element require the existence of a Lipschitz continuous dual solution, which is not guaranteed in general. We provide analytic proofs showing that the Lipschitz continuity of a dual solution is, in general, not necessary. Using the Lipschitz truncation technique, we additionally derive error estimates that depend directly on the Sobolev regularity of a given dual solution.
In the $(1+\varepsilon,r)$-approximate near-neighbor problem for curves (ANNC) under some distance measure $\delta$, the goal is to construct a data structure for a given set $\mathcal{C}$ of curves that supports approximate near-neighbor queries: Given a query curve $Q$, if there exists a curve $C\in\mathcal{C}$ such that $\delta(Q,C)\le r$, then return a curve $C'\in\mathcal{C}$ with $\delta(Q,C')\le(1+\varepsilon)r$. There exists an efficient reduction from the $(1+\varepsilon)$-approximate nearest-neighbor problem to ANNC, where in the former problem the answer to a query is a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)\cdot\delta(Q,C^*)$, where $C^*$ is the curve of $\mathcal{C}$ closest to $Q$. Given a set $\mathcal{C}$ of $n$ curves, each consisting of $m$ points in $d$ dimensions, we construct a data structure for ANNC that uses $n\cdot O(\frac{1}{\varepsilon})^{md}$ storage space and has $O(md)$ query time (for a query curve of length $m$), where the similarity between two curves is their discrete Fr\'echet or dynamic time warping distance. Our method is simple to implement, deterministic, and results in an exponential improvement in both query time and storage space compared to all previous bounds. We also consider the asymmetric version of ANNC, where the length of the query curves is $k \ll m$, and obtain essentially the same storage and query bounds as above, except that $m$ is replaced by $k$. Finally, we apply our method to a version of approximate range counting for curves and achieve similar bounds.
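For reference, the following is a minimal dynamic-programming computation of the dynamic time warping distance between two curves given as point sequences. It is a sketch of one of the two distance measures named above (discrete Fréchet is obtained by replacing the sum with a maximum), not of the data structure itself.

```python
import math

def dtw(P, Q):
    n, m = len(P), len(Q)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(P[i - 1], Q[j - 1])      # Euclidean point distance
            D[i][j] = cost + min(D[i - 1][j],         # advance on P only
                                 D[i][j - 1],         # advance on Q only
                                 D[i - 1][j - 1])     # advance on both
    return D[n][m]

print(dtw([(0, 0), (1, 0), (2, 0)], [(0, 1), (2, 1)]))
```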
In this paper, we discuss two-stage encoding algorithms capable of correcting a fraction of asymmetric errors. Suppose that the encoder transmits $n$ binary symbols $(x_1,\ldots,x_n)$ one by one over the Z-channel, in which a 1 is received only if a 1 is transmitted. At some designated moment, say $n_1$, the encoder uses noiseless feedback and adjusts its subsequent encoding strategy based on the partial channel output $(y_1,\ldots,y_{n_1})$. The goal is to transmit as much information as possible without error, under the assumption that the total number of errors inflicted by the Z-channel is at most $\tau n$, $0<\tau<1$. We propose an encoding strategy that uses a list-decodable code at the first stage and a high-error low-rate code at the second stage. Together with our converse result, this strategy yields a sharp transition at $\tau=\max\limits_{0<w<1}\frac{w + w^3}{1+4w^3}\approx 0.44$ from positive rate to zero rate for two-stage encoding strategies. As side results, we derive bounds on the size of list-decodable codes for the Z-channel and prove that, for a fraction $1/4+\epsilon$ of asymmetric errors, an error-correcting code contains at most $O(\epsilon^{-3/2})$ codewords.
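The threshold stated above can be reproduced numerically; the following sketch simply grid-searches $\frac{w+w^3}{1+4w^3}$ over $0<w<1$ (the grid resolution is an arbitrary choice).

```python
# f(w) = (w + w^3) / (1 + 4 w^3); grid-search its maximum over (0, 1).
def f(w):
    return (w + w**3) / (1 + 4 * w**3)

tau = max(f(i / 10**5) for i in range(1, 10**5))
print(round(tau, 4))   # ~0.4407, consistent with the stated threshold of ~0.44
```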
Given two weighted automata, we consider the problem of whether one is big-O of the other, i.e., whether the weight of every finite word in the first is at most some constant multiple of its weight in the second. We show that the problem is undecidable, even for the instantiation of weighted automata as labelled Markov chains. Moreover, even when it is known that one weighted automaton is big-O of another, the problem of finding or approximating the associated constant is also undecidable. Our positive results show that the big-O problem is polynomial-time solvable for unambiguous automata, coNP-complete for unlabelled weighted automata (i.e., when the alphabet is a single character), and decidable, subject to Schanuel's conjecture, when the language is bounded (i.e., a subset of $w_1^*\dots w_m^*$ for some finite words $w_1,\dots,w_m$) or when the automaton has finite ambiguity. On labelled Markov chains, the problem can be restated in terms of a ratio variant of the total variation distance which, instead of the maximum difference between the probabilities of any two events, measures the maximum ratio between the probabilities of any two events. The problem is related to $\epsilon$-differential privacy, for which the optimal big-O constant is exactly $\exp(\epsilon)$.
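For readers unfamiliar with the model, the following sketch shows how word weights are computed in a weighted automaton, as products of transition matrices bracketed by initial and final weight vectors; the two-state automaton is an invented example, not one from the paper.

```python
import numpy as np

# Two-state weighted automaton: weight(w) = alpha @ M[w_1] @ ... @ M[w_k] @ eta.
alpha = np.array([1.0, 0.0])                   # initial weight vector
eta = np.array([0.0, 1.0])                     # final weight vector
M = {"a": np.array([[0.5, 0.5], [0.0, 1.0]]),  # one transition matrix per letter
     "b": np.array([[1.0, 0.0], [0.0, 0.5]])}

def weight(word):
    v = alpha
    for c in word:
        v = v @ M[c]
    return float(v @ eta)

# The big-O problem asks whether weight_A(w) <= C * weight_B(w) for every word w
# and some fixed constant C, given two such automata A and B.
print(weight("ab"), weight("aab"))
```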
In this paper we apply methods originating in complexity theory to some problems of approximation. We observe that the construction of Alman and Williams that disproves the rigidity of Walsh-Hadamard matrices provides good $\ell_p$-approximation for $p<2$. It follows that the first $n$ functions of the Walsh system can be approximated with an error $n^{-\delta}$ by a linear space of dimension $n^{1-\delta}$: $$ d_{n^{1-\delta}}(\{w_1,\ldots,w_n\}, L_p[0,1]) \le n^{-\delta},\quad p\in[1,2),\;\delta=\delta(p)>0. $$ We do not know whether this is possible for the trigonometric system. We show that the algebraic method of Alon--Frankl--R\"odl for bounding the number of low-signum-rank matrices works for tensors: almost all signum-tensors have large signum-rank and cannot be $\ell_1$-approximated by low-rank tensors. This implies lower bounds for $\Theta_m$, the error of $m$-term approximation of multivariate functions by sums of tensor products $u^1(x_1)\cdots u^d(x_d)$. In particular, for the set of trigonometric polynomials with spectrum in $\prod_{j=1}^d[-n_j,n_j]$ and of norm $\|t\|_\infty\le 1$ we have $$ \Theta_m(\mathcal T(n_1,\ldots,n_d)_\infty,L_1[-\pi,\pi]^d) \ge c_1(d)>0,\quad m\le c_2(d)\frac{\prod n_j}{\max\{n_j\}}. $$ Sharp bounds follow for classes of dominated mixed smoothness: $$ \Theta_m(W^{(r,r,\ldots,r)}_p,L_q[0,1]^d)\asymp m^{-\frac{rd}{d-1}},\quad 2\le p\le\infty,\; 1\le q\le 2. $$
This paper focuses on stochastic saddle point problems with decision-dependent distributions, in both the static and time-varying settings. These are problems whose objective is the expected value of a stochastic payoff function, where the random variables are drawn from a distribution induced by a distributional map. For general distributional maps, finding saddle points is computationally burdensome, even if the distribution is known. To enable a tractable solution approach, we introduce the notion of equilibrium points -- which are saddle points of the stationary stochastic minimax problem that they induce -- and provide conditions for their existence and uniqueness. We demonstrate that the distance between the two classes of solutions is bounded provided that the payoff is strongly convex-strongly concave and the distributional map is Lipschitz continuous. We develop deterministic and stochastic primal-dual algorithms and demonstrate their convergence to the equilibrium point. In particular, by modeling the errors emerging from a stochastic gradient estimator as sub-Weibull random variables, we provide error bounds in expectation and in high probability that hold for each iteration; moreover, we show convergence to a neighborhood in expectation and almost surely. Finally, we investigate a condition on the distributional map -- which we call opposing mixture dominance -- that ensures the objective is strongly convex-strongly concave. Under this assumption, we show that primal-dual algorithms converge to the saddle points in a similar fashion.
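As a minimal illustration of the primal-dual scheme, the following sketch runs stochastic gradient descent-ascent on a toy strongly convex-strongly concave payoff with additive gradient noise; the objective and noise model are stand-ins, not the decision-dependent setting analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grads(x, y):
    # Payoff f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 (strongly convex in x,
    # strongly concave in y), with additive stochastic gradient noise.
    gx = x + y + 0.1 * rng.standard_normal()
    gy = x - y + 0.1 * rng.standard_normal()
    return gx, gy

x, y, step = 5.0, -3.0, 0.05
for _ in range(2000):
    gx, gy = noisy_grads(x, y)
    x -= step * gx   # primal (descent) step
    y += step * gy   # dual (ascent) step
print(x, y)          # ends in a noise-dominated neighborhood of the saddle (0, 0)
```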
Computing linear minimum mean square error (LMMSE) filters is often ill conditioned, suggesting that unconstrained minimization of the mean square error is an inadequate principle for filter design. To address this, we first develop a unifying framework for studying constrained LMMSE estimation problems. Using this framework, we expose an important structural property of all constrained LMMSE filters and show that they all involve an inherent preconditioning step, which parameterizes all such filters solely by their preconditioners. Moreover, each filter is invariant to invertible linear transformations of its preconditioner. We then clarify that merely constraining the rank of the filter, leading to the well-known low-rank Wiener filter, does not suitably address the problem of ill conditioning. Instead, we use a constraint that explicitly requires solutions to be well conditioned in a specific sense. We introduce two well-conditioned estimators and evaluate their mean square error performance. We show that these two estimators converge to the standard LMMSE filter as their truncated-power ratio converges to zero, but more slowly than the low-rank Wiener filter in terms of scaling law; this exposes the price of being well conditioned. We also present quantitative results on historical VIX data to illustrate the performance of our two well-conditioned estimators.
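For concreteness, here is a small numpy sketch of the unconstrained LMMSE filter $W = R_{xy} R_{yy}^{-1}$, whose computation becomes ill conditioned when $R_{yy}$ has a large condition number; the data model is synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 4
x = rng.standard_normal((n, d))                # signal samples
A = rng.standard_normal((d, d))                # mixing matrix
y = x @ A + 0.1 * rng.standard_normal((n, d))  # noisy observations

R_yy = y.T @ y / n                             # observation covariance
R_xy = x.T @ y / n                             # cross-covariance
W = R_xy @ np.linalg.inv(R_yy)                 # LMMSE filter: x_hat = W y
print(np.linalg.cond(R_yy))                    # ill conditioning shows up here
x_hat = y @ W.T
print(np.mean((x - x_hat) ** 2))               # empirical mean square error
```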
We study the Fisher information matrix (FIM) of one-hidden-layer networks with the ReLU activation function. Let $W$ denote the $d \times p$ weight matrix from the $d$-dimensional input to the hidden layer consisting of $p$ neurons, and $v$ the $p$-dimensional weight vector from the hidden layer to the scalar output. We focus on the FIM of $v$, which we denote by $I$. When $p$ is large, under certain conditions, the following hold approximately. 1) There are three major clusters in the eigenvalue distribution. 2) Since $I$ is entrywise non-negative owing to the ReLU, its largest eigenvalue is the Perron-Frobenius eigenvalue. 3) For the cluster of the next-largest eigenvalues, the eigenspace is spanned by the row vectors of $W$. 4) The direct sum of the eigenspace of the largest eigenvalue and that of the third cluster is spanned by the set of all vectors obtained as the Hadamard product of any pair of row vectors of $W$. We confirm by numerical simulation that the above holds approximately when the number of hidden nodes is about 10000.
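A scaled-down version of such a simulation is sketched below: it estimates the FIM of $v$, $I = \mathbb{E}[\phi(W^\top x)\phi(W^\top x)^\top]$ with $\phi=\mathrm{ReLU}$, and inspects the spectrum. The sizes are far smaller than the $p \approx 10000$ used in the paper, and the Gaussian input distribution is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n_samples = 20, 500, 20000
W = rng.standard_normal((d, p)) / np.sqrt(d)   # d x p input-to-hidden weights

X = rng.standard_normal((n_samples, d))        # assumed Gaussian inputs
H = np.maximum(X @ W, 0.0)                     # hidden activations phi(W^T x)
I = H.T @ H / n_samples                        # empirical FIM of v; entrywise >= 0

eigs = np.linalg.eigvalsh(I)[::-1]             # eigenvalues, descending order
print(eigs[:5])                                # one dominant eigenvalue, then clusters
```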
The low-rank canonical polyadic tensor decomposition is useful in data analysis and can be computed by solving a sequence of overdetermined least squares subproblems. Motivated by the case of sparse tensors, we propose sketching each subproblem using leverage scores to select a subset of the rows, with probabilistic guarantees on the solution accuracy. We randomly sample rows with probability proportional to leverage score upper bounds that can be computed efficiently using the special Khatri-Rao subproblem structure inherent in tensor decomposition. Crucially, for a $(d+1)$-way tensor, the number of rows in the sketched system is $O(r^d/\epsilon)$ for a decomposition of rank $r$ and $\epsilon$-accuracy in the least squares solve, independent of both the size of and the number of nonzeros in the tensor. Along the way, we provide a practical solution to a generic problem in matrix sketching, the oversampling of high-leverage-score rows: we propose to include such rows deterministically and to combine repeated samples in the sketched system, and we conjecture that this can lead to improved theoretical bounds. Numerical results on real-world large-scale tensors show that the method is significantly faster than deterministic methods at nearly the same level of accuracy.
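The following is a generic leverage-score row-sampling sketch for a single overdetermined least squares problem; it uses exact leverage scores computed from a QR factorization in place of the efficiently computable Khatri-Rao upper bounds described above, and the problem sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 2000, 10, 200                        # tall system, sketch of s rows
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

Q, _ = np.linalg.qr(A)
lev = np.sum(Q**2, axis=1)                     # exact leverage scores (sum to n)
probs = lev / lev.sum()
idx = rng.choice(m, size=s, replace=True, p=probs)
scale = 1.0 / np.sqrt(s * probs[idx])          # rescaling keeps the estimate unbiased
x_sketch, *_ = np.linalg.lstsq(A[idx] * scale[:, None], b[idx] * scale, rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_full))       # small for modest s
```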
Many problems in signal processing reduce to nonparametric function estimation. We propose a new methodology, piecewise convex fitting (PCF), and give a two-stage adaptive estimate. In the first stage, the number and locations of the change points are estimated using strong smoothing. In the second stage, a constrained smoothing spline fit is performed, with the smoothing level chosen to minimize the MSE. The imposed constraint is that a single change point occurs in a region about each empirical change point of the first-stage estimate. This constraint is equivalent to requiring that the third derivative of the second-stage estimate have a single sign in a small neighborhood about each first-stage change point. We sketch how PCF may be applied to signal recovery, instantaneous frequency estimation, surface reconstruction, image segmentation, spectral estimation, and multivariate adaptive regression.
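A rough sketch of the first stage, under our reading of the abstract: smooth the data heavily with a spline, then locate candidate change points as sign changes of the third derivative of the fit. The test signal, spline order, and smoothing level are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
signal = np.where(t < 0.5, t**2, t - 0.25)     # convex piece joined C^1 to a line
y = signal + 0.01 * rng.standard_normal(t.size)

spline = UnivariateSpline(t, y, k=5, s=0.05)   # strong smoothing, quintic spline
d3 = spline.derivative(3)(t)                   # third derivative of the fit
idx = np.flatnonzero(np.diff(np.sign(d3)))     # sign changes flag change points
print(t[idx])                                  # candidates should cluster near 0.5
```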