In this paper we present a new perspective on error analysis of Legendre approximations for differentiable functions. We start by introducing a sequence of Legendre-Gauss-Lobatto polynomials and proving some of their theoretical properties, such as an explicit and optimal upper bound. We then apply these properties to derive a new and explicit bound for the Legendre coefficients of differentiable functions and to establish explicit and optimal error bounds for Legendre projections in the $L^2$ and $L^{\infty}$ norms. Illustrative examples are provided to demonstrate the sharpness of our new results.
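For concreteness, the quantities discussed above are easy to compute numerically. The sketch below is only an illustration (not the paper's analysis; all function and variable names are ours): it evaluates the Legendre coefficients $a_k = \frac{2k+1}{2}\int_{-1}^{1} f(x)P_k(x)\,dx$ by Gauss-Legendre quadrature and reports the $L^2$ and $L^{\infty}$ errors of the Legendre projection for a function of limited smoothness.

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_projection_error(f, n, quad_pts=400, grid_pts=2001):
    # Legendre coefficients a_k = (2k+1)/2 * int_{-1}^{1} f(x) P_k(x) dx,
    # approximated by Gauss-Legendre quadrature.
    x, w = L.leggauss(quad_pts)
    V = L.legvander(x, n)                       # columns P_0(x), ..., P_n(x)
    k = np.arange(n + 1)
    a = (2 * k + 1) / 2 * (V * (w * f(x))[:, None]).sum(axis=0)
    # errors of the projection S_n f = sum_k a_k P_k
    resid = lambda t: f(t) - L.legval(t, a)
    l2 = np.sqrt(np.sum(w * resid(x) ** 2))     # L^2 error via the same quadrature
    linf = np.abs(resid(np.linspace(-1, 1, grid_pts))).max()
    return l2, linf

f = lambda x: np.abs(x) ** 3                    # limited smoothness: algebraic decay in n
for n in (8, 16, 32, 64):
    print(n, legendre_projection_error(f, n))
```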
In the $(1+\varepsilon,r)$-approximate near-neighbor problem for curves (ANNC) under some distance measure $\delta$, the goal is to construct a data structure for a given set $\mathcal{C}$ of curves that supports approximate near-neighbor queries: Given a query curve $Q$, if there exists a curve $C\in\mathcal{C}$ such that $\delta(Q,C)\le r$, then return a curve $C'\in\mathcal{C}$ with $\delta(Q,C')\le(1+\varepsilon)r$. There exists an efficient reduction from the $(1+\varepsilon)$-approximate nearest-neighbor problem to ANNC, where in the former problem the answer to a query is a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)\cdot\delta(Q,C^*)$, where $C^*$ is the curve of $\mathcal{C}$ closest to $Q$. Given a set $\mathcal{C}$ of $n$ curves, each consisting of $m$ points in $d$ dimensions, we construct a data structure for ANNC that uses $n\cdot O(\frac{1}{\varepsilon})^{md}$ storage space and has $O(md)$ query time (for a query curve of length $m$), where the distance between two curves is measured by the discrete Fr\'echet or dynamic time warping distance. Our method is simple to implement, deterministic, and results in an exponential improvement in both query time and storage space compared to all previous bounds. We also consider the asymmetric version of ANNC, where the length of the query curves is $k \ll m$, and obtain essentially the same storage and query bounds as above, except that $m$ is replaced by $k$. Finally, we apply our method to a version of approximate range counting for curves and achieve similar bounds.
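For reference, the underlying distance measure itself is straightforward to compute. The sketch below (an illustration only, not the data structure described above; names are ours) evaluates the discrete Fréchet distance between two polygonal curves with the standard dynamic program.

```python
# Discrete Frechet distance between curves P and Q via the standard
# O(|P||Q|) dynamic program over the coupling lattice.
import numpy as np

def discrete_frechet(P, Q):
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise point distances
    dp = np.full((n, m), np.inf)
    dp[0, 0] = d[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                dp[i - 1, j] if i > 0 else np.inf,
                dp[i, j - 1] if j > 0 else np.inf,
                dp[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
            )
            dp[i, j] = max(d[i, j], best_prev)
    return dp[-1, -1]

P = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
Q = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(discrete_frechet(P, Q))  # 1.0
```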
In this paper we apply methods that originated in complexity theory to some problems of approximation theory. We notice that the construction of Alman and Williams that disproves the rigidity of Walsh-Hadamard matrices provides good $\ell_p$-approximation for $p<2$. It follows that the first $n$ functions of the Walsh system can be approximated with an error $n^{-\delta}$ by a linear space of dimension $n^{1-\delta}$: $$ d_{n^{1-\delta}}(\{w_1,\ldots,w_n\}, L_p[0,1]) \le n^{-\delta},\quad p\in[1,2),\;\delta=\delta(p)>0. $$ We do not know whether this is possible for the trigonometric system. We show that the algebraic method of Alon--Frankl--R\"odl for bounding the number of low-signum-rank matrices works for tensors: almost all signum-tensors have large signum-rank and cannot be $\ell_1$-approximated by low-rank tensors. This implies lower bounds for $\Theta_m$, the error of $m$-term approximation of multivariate functions by sums of tensor products $u^1(x_1)\cdots u^d(x_d)$. In particular, for the set of trigonometric polynomials with spectrum in $\prod_{j=1}^d[-n_j,n_j]$ and of norm $\|t\|_\infty\le 1$ we have $$ \Theta_m(\mathcal T(n_1,\ldots,n_d)_\infty,L_1[-\pi,\pi]^d) \ge c_1(d)>0,\quad m\le c_2(d)\frac{\prod n_j}{\max\{n_j\}}. $$ Sharp bounds follow for classes of dominated mixed smoothness: $$ \Theta_m(W^{(r,r,\ldots,r)}_p,L_q[0,1]^d)\asymp m^{-\frac{rd}{d-1}},\quad 2\le p\le\infty,\; 1\le q\le 2. $$
We contribute to a better understanding of the class of functions that are exactly representable by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems, which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases when more layers are added (with no restrictions on size). This question has potential impact on algorithmic and statistical aspects because of the insight it provides into the class of functions represented by neural hypothesis classes; to the best of our knowledge, however, it has not been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes.
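As a small illustration of exact representability (our example, not taken from the paper), the sketch below verifies that $\max(x_1,x_2)$ is represented exactly by a one-hidden-layer ReLU network with three hidden units, via $\max(a,b)=\mathrm{relu}(a)-\mathrm{relu}(-a)+\mathrm{relu}(b-a)$; questions of this type, for deeper architectures and more general piecewise linear functions, are what the paper studies.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# hidden layer: three affine functions of (x1, x2) followed by ReLU
W1 = np.array([[1.0, 0.0],    # a  = x1
               [-1.0, 0.0],   # -a = -x1
               [-1.0, 1.0]])  # b - a = x2 - x1
# output layer: linear combination of the hidden units
w2 = np.array([1.0, -1.0, 1.0])

def net(x):
    return w2 @ relu(W1 @ x)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
assert np.allclose([net(x) for x in X], np.max(X, axis=1))  # exact, not approximate
```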
We first consider the problem of approximating a few eigenvalues of a rational matrix-valued function closest to a prescribed target. It is assumed that the proper rational part of the function is expressed in the transfer-function form $H(s) = C (sI - A)^{-1} B$, where the middle factor is large, whereas the number of rows of $C$ and the number of columns of $B$ are equal and small. We propose a subspace framework that performs two-sided or one-sided projections on the state-space representation of $H(\cdot)$, as commonly employed in model reduction, giving rise to a reduced transfer function. At every iteration, the projection subspaces are expanded to attain Hermite interpolation conditions at the eigenvalues of the reduced transfer function closest to the target, which in turn leads to a new reduced transfer function. We prove that, when a sequence of eigenvalues of the reduced transfer functions converges to an eigenvalue of the full problem, it converges at least at a quadratic rate. In the second part, we extend the proposed framework to locate the eigenvalues of a general square large-scale nonlinear meromorphic matrix-valued function $T(\cdot)$, where we exploit a representation $\mathcal{R}(s) = C(s) A(s)^{-1} B(s) - D(s)$ defined in terms of the block components of $T(\cdot)$. The numerical experiments illustrate that the proposed framework is reliable in locating a few eigenvalues closest to the target point and that, with respect to runtime, it is competitive with established methods for nonlinear eigenvalue problems.
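The sketch below illustrates the projection machinery referred to above using standard model-reduction facts only (it is not the paper's full framework, and the dimensions, interpolation point $\sigma$, and variable names are ours): when $(\sigma I - A)^{-1}B$ lies in the range of $V$ and $(\sigma I - A)^{-*}C^*$ in the range of $W$, the two-sided reduced transfer function matches $H$ at $\sigma$ (and, by the standard Hermite interpolation result, also its derivative).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 2                       # large state dimension, few inputs/outputs
A = rng.normal(size=(n, n)) / np.sqrt(n)
B = rng.normal(size=(n, p))
C = rng.normal(size=(p, n))
sigma = 0.3 + 0.7j                  # interpolation point (e.g., near the target)

def H(s):
    return C @ np.linalg.solve(s * np.eye(n) - A, B)

# Projection subspaces built from the shifted solves at sigma.
V, _ = np.linalg.qr(np.linalg.solve(sigma * np.eye(n) - A, B))
W, _ = np.linalg.qr(np.linalg.solve((sigma * np.eye(n) - A).conj().T, C.conj().T))

E_r = W.conj().T @ V                # reduced pencil (W^* V, W^* A V)
A_r = W.conj().T @ A @ V
B_r = W.conj().T @ B
C_r = C @ V

def H_r(s):                         # reduced transfer function
    return C_r @ np.linalg.solve(s * E_r - A_r, B_r)

print(np.allclose(H(sigma), H_r(sigma)))   # True: interpolation at sigma
```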
Computing linear minimum mean square error (LMMSE) filters is often ill conditioned, suggesting that unconstrained minimization of the mean square error is an inadequate principle for filter design. To address this, we first develop a unifying framework for studying constrained LMMSE estimation problems. Using this framework, we expose an important structural property of all constrained LMMSE filters and show that they all involve an inherent preconditioning step. This parameterizes all such filters only by their preconditioners. Moreover, each filter is invariant to invertible linear transformations of its preconditioner. We then clarify that merely constraining the rank of the filter, which leads to the well-known low-rank Wiener filter, does not suitably address the problem of ill conditioning. Instead, we use a constraint that explicitly requires solutions to be well conditioned in a certain specific sense. We introduce two well-conditioned estimators and evaluate their mean square error performance. We show that these two estimators converge to the standard LMMSE filter as their truncated-power ratio converges to zero, but more slowly than the low-rank Wiener filter in terms of scaling law; this exposes the price of being well conditioned. We also present quantitative results with historical VIX data to illustrate the performance of our two well-conditioned estimators.
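As background for the ill-conditioning issue above, the following sketch (illustrative only: it shows a generic spectral-truncation remedy, not the paper's two well-conditioned estimators) contrasts the unconstrained LMMSE filter $W = R_{xy}R_{yy}^{-1}$ with a variant that inverts $R_{yy}$ only on its dominant eigenspace.

```python
import numpy as np

def lmmse_filter(R_xy, R_yy):
    return R_xy @ np.linalg.inv(R_yy)

def truncated_lmmse_filter(R_xy, R_yy, rank):
    vals, vecs = np.linalg.eigh(R_yy)           # symmetric eigendecomposition
    idx = np.argsort(vals)[::-1][:rank]         # keep the 'rank' largest eigenvalues
    V, lam = vecs[:, idx], vals[idx]
    return R_xy @ V @ np.diag(1.0 / lam) @ V.T  # invert only on the kept subspace

# Toy example with a nearly singular observation covariance.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 4))
R_yy = M @ M.T + 1e-10 * np.eye(5)              # rank-deficient plus tiny noise floor
R_xy = rng.normal(size=(3, 5))
print(np.linalg.cond(R_yy))                                        # huge: plain inversion is ill conditioned
print(np.linalg.norm(lmmse_filter(R_xy, R_yy)))                    # blows up along the near-null direction
print(np.linalg.norm(truncated_lmmse_filter(R_xy, R_yy, rank=4)))  # stays moderate
```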
Measure-preserving neural networks are well-developed invertible models; however, their approximation capabilities remain unexplored. This paper rigorously analyses the approximation capabilities of existing measure-preserving neural networks, including NICE and RevNets. It is shown that for a compact set $U \subset \mathbb{R}^D$ with $D\geq 2$, measure-preserving neural networks can approximate, in the $L^p$-norm, any bounded and injective measure-preserving map $\psi: U\to \mathbb{R}^D$. In particular, any continuously differentiable injective map whose Jacobian determinant equals $\pm 1$ is measure-preserving and can therefore be approximated.
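To make the class of maps concrete, here is a minimal sketch of a NICE-style additive coupling layer (a standard construction, included only for illustration; the small coupling network and its sizes are arbitrary): its Jacobian is unit triangular, so the determinant is $1$, the layer preserves Lebesgue measure, and inversion is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, h = 2, 2, 16
W1, b1 = rng.normal(size=(h, d1)), rng.normal(size=h)
W2, b2 = rng.normal(size=(d2, h)), rng.normal(size=d2)

def m(x1):                                   # arbitrary coupling network
    return W2 @ np.tanh(W1 @ x1 + b1) + b2

def forward(x):                              # y1 = x1, y2 = x2 + m(x1)
    x1, x2 = x[:d1], x[d1:]
    return np.concatenate([x1, x2 + m(x1)])

def inverse(y):                              # x2 = y2 - m(y1)
    y1, y2 = y[:d1], y[d1:]
    return np.concatenate([y1, y2 - m(y1)])

x = rng.normal(size=d1 + d2)
assert np.allclose(inverse(forward(x)), x)   # exact invertibility, unit Jacobian determinant
```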
Optimization under uncertainty and risk is indispensable in many practical situations. Our paper addresses the stability of optimization problems that use composite risk functionals subjected to measure perturbations. Our main focus is the asymptotic behavior of data-driven formulations with empirical or smoothing estimators, such as kernels or wavelets, applied to some or to all functions in the compositions. We analyze the properties of the new estimators and establish a strong law of large numbers, consistency, and bias-reduction potential under fairly general assumptions. Our results are germane to risk-averse optimization and to data science in general.
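As a concrete, hedged illustration (a common composite risk functional chosen by us, not the paper's general setting), the sketch below estimates the mean-upper-semideviation risk $\rho(X) = \mathbb{E}[X] + \kappa\,\mathbb{E}[(X-\mathbb{E}[X])_+]$ both by the empirical plug-in and under a Gaussian-kernel-smoothed empirical measure, realised by resampling the data with added $N(0,h^2)$ noise.

```python
import numpy as np

def semideviation_risk(sample, kappa=0.5):
    # plug-in estimate of E[X] + kappa * E[(X - E[X])_+] under the sample's empirical measure
    m = sample.mean()
    return m + kappa * np.maximum(sample - m, 0.0).mean()

def smoothed_estimate(data, h=0.1, n_mc=100_000, kappa=0.5, rng=None):
    # same functional, but under the kernel-smoothed empirical measure (a Gaussian KDE),
    # evaluated by Monte Carlo draws: resample the data and add N(0, h^2) noise
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.integers(0, len(data), size=n_mc)
    smoothed = data[idx] + h * rng.normal(size=n_mc)
    return semideviation_risk(smoothed, kappa)

rng = np.random.default_rng(1)
data = rng.normal(size=500)
print(semideviation_risk(data), smoothed_estimate(data, rng=rng))
```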
Although Deep Neural Networks (DNNs) have shown remarkable performance in perceptive and control tasks, several trustworthiness issues remain open. One of the most discussed topics is the existence of adversarial perturbations, which has opened an interesting research line on provable techniques capable of quantifying the robustness of a given input. In this regard, the Euclidean distance of the input from the classification boundary provides a well-established robustness measure, as it corresponds to the minimal adversarial perturbation. Unfortunately, computing this distance is highly complex due to the non-convex nature of DNNs. Although several methods have been proposed to address this issue, to the best of our knowledge no provable results have been presented that estimate and bound the committed error. This paper addresses the issue by proposing two lightweight strategies to find the minimal adversarial perturbation. In contrast to the state of the art, the proposed approach allows formulating an error-estimation theory of the approximate distance with respect to the theoretical one. Finally, a substantial set of experiments is reported to evaluate the performance of the algorithms and support the theoretical findings. The results show that the proposed strategies approximate the theoretical distance for samples close to the classification boundary, leading to provable robustness guarantees against any adversarial attack.
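For intuition only (a generic construction of ours, not one of the two strategies proposed in the paper), the sketch below upper-bounds the Euclidean distance from an input to the classification boundary by bisecting along a segment that ends at an adversarial point with a different label.

```python
import numpy as np

def boundary_distance_upper_bound(predict, x, x_adv, iters=50):
    """predict maps an input vector to a class label; x_adv must be classified differently from x."""
    label = predict(x)
    lo, hi = 0.0, 1.0                      # parametrise the segment x + t * (x_adv - x)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if predict(x + mid * (x_adv - x)) == label:
            lo = mid                       # still on x's side: move outward
        else:
            hi = mid                       # already across the boundary: move inward
    return hi * np.linalg.norm(x_adv - x)

# Toy example: a linear "classifier" with boundary x_1 + x_2 = 1.
predict = lambda v: int(v[0] + v[1] > 1.0)
x = np.array([0.0, 0.0])
x_adv = np.array([2.0, 2.0])
print(boundary_distance_upper_bound(predict, x, x_adv))   # ~0.707 = 1/sqrt(2), the exact distance here
```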
We study the class of first-order locally-balanced Metropolis--Hastings algorithms introduced in Livingstone & Zanella (2021). To choose a specific algorithm within the class, the user must select a balancing function $g:\mathbb{R} \to \mathbb{R}$ satisfying $g(t) = tg(1/t)$, and a noise distribution for the proposal increment. Popular choices within the class are the Metropolis-adjusted Langevin algorithm and the recently introduced Barker proposal. We first establish a universal limiting optimal acceptance rate of 57% and a scaling of $n^{-1/3}$, as the dimension $n$ tends to infinity, among all members of the class, under mild smoothness assumptions on $g$ and when the target distribution of the algorithm is of product form. In particular, we obtain an explicit expression for the asymptotic efficiency of an arbitrary algorithm in the class, as measured by expected squared jumping distance. We then consider how to optimise this expression under various constraints. We derive the optimal choice of noise distribution for the Barker proposal, the optimal choice of balancing function under a Gaussian noise distribution, and the optimal choice of first-order locally-balanced algorithm among the entire class, which turns out to depend on the specific target distribution. Numerical simulations confirm our theoretical findings and in particular show that a bimodal choice of noise distribution in the Barker proposal gives rise to a practical algorithm that is consistently more efficient than the original Gaussian version.
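For concreteness, here is a minimal sketch of a single Barker-proposal Metropolis--Hastings step with Gaussian noise (an illustration of the construction in Livingstone & Zanella, not the optimised variants derived in the paper; the step size and target are ours). Per coordinate, an increment $z_i$ is kept or sign-flipped with probability $\sigma(z_i\,\partial_i\log\pi(x))$, which corresponds to the balancing function $g(t)=t/(1+t)$; the acceptance ratio below follows from the resulting proposal density, with the symmetric Gaussian factors cancelling.

```python
import numpy as np

def softplus(t):
    return np.logaddexp(0.0, t)             # log(1 + exp(t)), numerically stable

def barker_step(x, log_pi, grad_log_pi, step, rng):
    z = step * rng.normal(size=x.shape)
    p_keep = 1.0 / (1.0 + np.exp(-z * grad_log_pi(x)))     # P(b_i = +1)
    b = np.where(rng.random(x.shape) < p_keep, 1.0, -1.0)
    y = x + b * z
    xi = y - x
    # Metropolis-Hastings log-ratio for the Barker proposal.
    log_ratio = (log_pi(y) - log_pi(x)
                 + np.sum(softplus(-xi * grad_log_pi(x)))
                 - np.sum(softplus(xi * grad_log_pi(y))))
    return y if np.log(rng.random()) < log_ratio else x

# Example: standard Gaussian target in 10 dimensions.
log_pi = lambda x: -0.5 * np.sum(x**2)
grad_log_pi = lambda x: -x
rng = np.random.default_rng(0)
x, samples = np.zeros(10), []
for _ in range(5000):
    x = barker_step(x, log_pi, grad_log_pi, step=1.0, rng=rng)
    samples.append(x.copy())
print(np.mean(samples), np.var(samples))    # roughly 0 and 1
```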
Statistical divergences (SDs), which quantify the dissimilarity between probability distributions, are a basic constituent of statistical inference and machine learning. A modern method for estimating such divergences relies on parametrizing an empirical variational form by a neural network (NN) and optimizing over the parameter space. Such neural estimators are abundantly used in practice, but the corresponding performance guarantees are partial and call for further exploration. In particular, there is a fundamental tradeoff between the two sources of error involved: approximation and empirical estimation. While the former needs the NN class to be rich and expressive, the latter relies on controlling complexity. We explore this tradeoff for an estimator based on a shallow NN by means of non-asymptotic error bounds, focusing on four popular $\mathsf{f}$-divergences -- Kullback-Leibler, chi-squared, squared Hellinger, and total variation. Our analysis relies on non-asymptotic function approximation theorems and tools from empirical process theory. The bounds reveal the tension between the NN size and the number of samples, and enable us to characterize scaling rates for both that ensure consistency. For compactly supported distributions, we further show that neural estimators of the first three divergences above, with an appropriate NN growth rate, are near minimax rate-optimal, achieving the parametric rate up to logarithmic factors.
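The sketch below (illustrative, not the paper's exact estimator or analysis; network width, sample size, and optimiser settings are ours) trains a shallow ReLU network on the Donsker--Varadhan variational form of the KL divergence, $D_{\mathrm{KL}}(P\|Q)=\sup_f \mathbb{E}_P[f]-\log\mathbb{E}_Q[e^{f}]$, which is the kind of neural estimator whose approximation/estimation tradeoff is analysed above.

```python
import math
import torch

torch.manual_seed(0)
d, width, n = 2, 64, 4000
mu = torch.tensor([1.0, 0.0])
xP = mu + torch.randn(n, d)          # samples from P = N(mu, I)
xQ = torch.randn(n, d)               # samples from Q = N(0, I); KL(P||Q) = |mu|^2/2 = 0.5

f = torch.nn.Sequential(torch.nn.Linear(d, width), torch.nn.ReLU(),
                        torch.nn.Linear(width, 1))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

for _ in range(3000):
    # Donsker-Varadhan objective: E_P[f] - log E_Q[exp f], a lower bound on KL(P||Q)
    dv = f(xP).mean() - (torch.logsumexp(f(xQ).squeeze(-1), dim=0) - math.log(n))
    loss = -dv                       # maximise the lower bound
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(dv))                     # close to 0.5, up to approximation and estimation error
```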