We show that the Complex Step approximation to the Fr\'echet derivative of real matrix functions is applicable to the matrix sign, square root, and polar mapping computed via iterative schemes. While this property was already established for the matrix sign function under Newton's method, we extend the analysis to the family of Pad\'e iterations, which allows us to introduce iterative schemes that compute function and derivative values while approximately preserving automorphism group structure.
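A minimal sketch of the technique, using plain NumPy and the basic Newton iteration for the matrix sign rather than the Pad\'e family studied here; the test matrix, step size $h$, and iteration count are illustrative choices, not the paper's:

```python
import numpy as np

def sign_newton(X, iters=60):
    # Newton iteration for the matrix sign function: X <- (X + X^{-1}) / 2.
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

def frechet_sign_cs(A, E, h=1e-20):
    # Complex-step approximation L_sign(A, E) ~ Im(sign(A + i*h*E)) / h for
    # real A, E.  No subtractive cancellation occurs, so h can be tiny.
    return sign_newton(A.astype(complex) + 1j * h * E).imag / h

rng = np.random.default_rng(1)
n = 6
Q = rng.standard_normal((n, n))
D = np.diag(np.concatenate([1 + rng.random(3), -(1 + rng.random(3))]))
A = Q @ D @ np.linalg.inv(Q)       # real matrix, eigenvalues off the imaginary axis
E = rng.standard_normal((n, n))

L_cs = frechet_sign_cs(A, E)
t = 1e-6                           # central finite difference for comparison
L_fd = (sign_newton(A + t * E) - sign_newton(A - t * E)) / (2 * t)
print(np.linalg.norm(L_cs - L_fd) / np.linalg.norm(L_fd))  # relative agreement
```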
We study the expressibility and learnability of convex optimization solution functions and their multi-layer architectural extension. The main results are: \emph{(1)} the class of solution functions of linear programming (LP) and quadratic programming (QP) is a universal approximant for the $C^k$ smooth model class or some restricted Sobolev space, and we characterize the rate-distortion; \emph{(2)} the approximation power is investigated from the viewpoint of regression error, where information about the target function is provided in terms of data observations; \emph{(3)} compositionality, in the form of a deep architecture with optimization as a layer, is shown to reconstruct some basic functions used in numerical analysis without error, which implies that \emph{(4)} a substantial reduction in rate-distortion can be achieved with a universal network architecture; and \emph{(5)} we discuss statistical bounds on empirical covering numbers for LP/QP, as well as for a generic (possibly nonconvex) optimization problem, by exploiting tame geometry. Our results provide the \emph{first rigorous analysis of the approximation and learning-theoretic properties of solution functions}, with implications for algorithmic design and performance guarantees.
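As a toy instance of point \emph{(3)}, the scalar ReLU can be written exactly as an LP solution function; the following sketch (using scipy.optimize.linprog as an illustrative solver choice, not the paper's construction) checks this on a few inputs:

```python
from scipy.optimize import linprog

def relu_as_lp(x):
    # ReLU(x) = argmin_y { y : y >= x, y >= 0 }: a one-variable LP whose
    # solution function reproduces max(x, 0) exactly.
    res = linprog(c=[1.0], A_ub=[[-1.0]], b_ub=[-x], bounds=[(0, None)])
    return res.x[0]

for x in [-2.0, -0.5, 0.0, 1.5, 3.0]:
    print(x, relu_as_lp(x), max(x, 0.0))   # the two outputs agree
```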
Consider the semigroup of the random walk on a complete graph, which we call the Potts semigroup. Diaconis and Saloff-Coste computed the maximum of the ratio of the relative entropy to the Dirichlet form, obtaining the constant $\alpha_2$ in the $2$-log-Sobolev inequality ($2$-LSI). In this paper, we obtain the best possible non-linear inequality relating entropy and the Dirichlet form (i.e., the $p$-NLSI, $p\ge1$). As an example, we show $\alpha_1 = 1+\frac{1+o(1)}{\log k}$. By integrating the $1$-NLSI we obtain a new strong data processing inequality (SDPI), which in turn allows us to improve results of Mossel and Peres on reconstruction thresholds for Potts models on trees. A special case is the problem of reconstructing the color of the root of a $k$-colored tree given the colors of all the leaves. We show that for a non-trivial reconstruction probability the branching number of the tree should be at least $$\frac{\log k}{\log k - \log(k-1)} = (1-o(1))k\log k.$$ This recovers previous results (of Sly and of Bhatnagar et al.) in (slightly) more generality, but, more importantly, avoids the need for any coloring-specialized arguments. Similarly, we improve the state of the art on the weak recovery threshold for the stochastic block model with $k$ balanced groups, for all $k\ge 3$. To further show the power of our method, we prove optimal non-reconstruction results for a broadcasting-on-trees model with Gaussian kernels, closing a gap left open by Eldan et al. These improvements advocate information-theoretic methods as a useful complement to conventional techniques originating from statistical physics.
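To make the asymptotics of the threshold concrete, a few lines of Python (an illustration, not from the paper) compare the exact expression with $k\log k$:

```python
import math

# Compare the exact reconstruction threshold log k / (log k - log(k-1))
# with its asymptotic form (1 - o(1)) * k * log k; the ratio tends to 1.
for k in [3, 5, 10, 100, 10000]:
    exact = math.log(k) / (math.log(k) - math.log(k - 1))
    print(k, round(exact, 2), round(k * math.log(k), 2),
          round(exact / (k * math.log(k)), 4))
```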
It is known that the exact form of the Burrows-Wheeler Transform (BWT) of a string collection depends, in most implementations, on the input order of the strings in the collection. Reordering the strings of an input collection affects the number of equal-letter runs $r$, arguably the most important parameter of BWT-based data structures such as the FM-index or the $r$-index. Bentley, Gibney, and Thankachan [ESA 2020] introduced a linear-time algorithm for computing the permutation of the input collection that yields the minimum number of runs in the resulting BWT. In this paper, we present the first tool that guarantees a Burrows-Wheeler Transform with a minimum number of runs (optBWT), obtained by combining i) an algorithm that builds the BWT from a string collection (either SAIS-based [Cenzato et al., SPIRE 2021] or BCR [Bauer et al., CPM 2011]); ii) the SAP array data structure introduced in [Cox et al., Bioinformatics, 2012]; and iii) the algorithm by Bentley et al. We present results on both real-life and simulated data, showing that the improvement achieved in terms of $r$ with respect to the input order is significant and the overhead created by the computation of the optimal BWT is negligible, making our tool competitive with other tools for BWT computation in terms of running time and space usage. In particular, on real data the optBWT obtains up to 31 times fewer runs with only a $1.39\times$ slowdown. Source code is available at https://github.com/davidecenzato/optimalBWT.git.
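As a self-contained illustration of the order dependence (a naive quadratic-time construction, not the SAIS- or BCR-based algorithms the tool combines), the following sketch builds a multi-string BWT with input-order-ranked separators and counts runs for two orderings of the same collection:

```python
def bwt_runs(strings):
    # Naive multi-string BWT: append a distinct separator, ranked by input
    # order, to each string; sort all rotations of the concatenation; read
    # off the last column.  chr(1), chr(2), ... sort below all letters.
    text = "".join(s + chr(i + 1) for i, s in enumerate(strings))
    bwt = "".join(rot[-1] for rot in
                  sorted(text[j:] + text[:j] for j in range(len(text))))
    runs = 1 + sum(bwt[j] != bwt[j - 1] for j in range(1, len(bwt)))
    return bwt, runs

# The same two strings in different input orders yield different run counts.
for order in (["ab", "ba"], ["ba", "ab"]):
    bwt, r = bwt_runs(order)
    print(order, repr(bwt), r)   # 6 runs vs. 5 runs
```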
This paper investigates the approximation properties of deep neural networks with piecewise-polynomial activation functions. We derive the depth, width, and sparsity required for a deep neural network to approximate any H\"{o}lder smooth function up to a given error in H\"{o}lder norms, in such a way that all weights of the network are bounded by $1$. The latter feature is essential for controlling generalization errors in many statistical and machine learning applications.
Many applications, such as system identification, classification of time series, direct and inverse problems in partial differential equations, and uncertainty quantification lead to the question of approximation of a non-linear operator between metric spaces $\mathfrak{X}$ and $\mathfrak{Y}$. We study the problem of determining the degree of approximation of such operators on a compact subset $K_\mathfrak{X}\subset \mathfrak{X}$ using a finite amount of information. If $\mathcal{F}: K_\mathfrak{X}\to K_\mathfrak{Y}$, a well-established strategy to approximate $\mathcal{F}(F)$ for some $F\in K_\mathfrak{X}$ is to encode $F$ (respectively, $\mathcal{F}(F)$) in terms of a finite number $d$ (respectively $m$) of real numbers. Together with appropriate reconstruction algorithms (decoders), the problem reduces to the approximation of $m$ functions on a compact subset of a high dimensional Euclidean space $\mathbb{R}^d$, equivalently, the unit sphere $\mathbb{S}^d$ embedded in $\mathbb{R}^{d+1}$. The problem is challenging because $d$, $m$, as well as the complexity of the approximation on $\mathbb{S}^d$ are all large, and it is necessary to estimate the accuracy keeping track of the inter-dependence of all the approximations involved. In this paper, we establish constructive methods to do this efficiently; i.e., with the constants involved in the estimates on the approximation on $\mathbb{S}^d$ being $\mathcal{O}(d^{1/6})$. We study different smoothness classes for the operators, and also propose a method for approximation of $\mathcal{F}(F)$ using only information in a small neighborhood of $F$, resulting in an effective reduction in the number of parameters involved.
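A toy instance of this encode/approximate/decode pipeline, with every ingredient (the antiderivative as the operator, point-sample encoders, and ridge regression as the approximator) chosen for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, N = 20, 15, 400                       # encoding sizes and training set size
xs_in, xs_out = np.linspace(0, 1, d), np.linspace(0, 1, m)

def random_function():
    # Inputs F drawn from a small random family on [0, 1].
    a = rng.standard_normal(3)
    return lambda x: a[0] + a[1] * np.sin(2 * np.pi * x) + a[2] * x**2

def op_samples(f):
    # The operator: F -> its antiderivative, encoded by m point samples
    # (computed with a fine trapezoid rule).
    grid = np.linspace(0, 1, 2001)
    vals = np.concatenate([[0.0],
        np.cumsum((f(grid[1:]) + f(grid[:-1])) / 2 * np.diff(grid))])
    return np.interp(xs_out, grid, vals)

funcs = [random_function() for _ in range(N)]
X = np.array([f(xs_in) for f in funcs])        # encoder: d samples of F
Y = np.array([op_samples(f) for f in funcs])   # encoder: m samples of F's image
W = np.linalg.solve(X.T @ X + 1e-8 * np.eye(d), X.T @ Y)  # m ridge fits at once

f_test = random_function()
print(np.max(np.abs(f_test(xs_in) @ W - op_samples(f_test))))  # small test error
```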
Clustering with outliers is one of the most fundamental problems in computer science. Given a set $X$ of $n$ points and two integers $k$ and $m$, clustering with outliers aims to exclude $m$ points from $X$ and partition the remaining points into $k$ clusters so as to minimize a certain cost function. In this paper, we give a general approach for solving clustering with outliers, which results in a fixed-parameter tractable (FPT) algorithm in $k$ and $m$ whose approximation ratio almost matches that of the outlier-free counterpart. As a corollary, we obtain FPT approximation algorithms with optimal approximation ratios for $k$-Median and $k$-Means with outliers in general metrics. We also exhibit more applications of our approach to other variants of the problem that impose additional constraints on the clustering, such as fairness or matroid constraints.
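The FPT algorithms themselves are beyond a short snippet, but the problem is easy to state in code; the following sketch implements the classical alternating heuristic in the spirit of $k$-means-- (assign, discard the $m$ farthest points, recompute centers), which is emphatically not the paper's algorithm:

```python
import numpy as np

def kmeans_with_outliers(X, k, m, iters=50, seed=0):
    # Lloyd-style heuristic for k-means with m outliers: (1) assign points to
    # the nearest center, (2) mark the m points with the largest assignment
    # distance as outliers, (3) recompute centers from the inliers.
    # Sensitive to initialization, as usual for such heuristics.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        nearest, dist = d2.argmin(1), d2.min(1)
        outliers = np.argsort(dist)[-m:]        # exclude the m farthest points
        inlier = np.ones(len(X), dtype=bool)
        inlier[outliers] = False
        for j in range(k):
            pts = X[inlier & (nearest == j)]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, outliers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2)),
               rng.uniform(20, 40, (5, 2))])    # two clusters plus 5 planted outliers
centers, outliers = kmeans_with_outliers(X, k=2, m=5)
print(np.round(centers, 1), sorted(outliers.tolist()))  # typically recovers the planted outliers
```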
We introduce an adaptation of integral approximation operators to set-valued functions (SVFs, multifunctions) mapping a compact interval $[a,b]$ into the space of compact non-empty subsets of ${\mathbb R}^d$. All operators are adapted by replacing the Riemann integral for real-valued functions with the weighted metric integral for SVFs of bounded variation with compact graphs. For such a set-valued function $F$, we obtain pointwise error estimates for sequences of integral operators at points of continuity, which imply convergence to $F$ at such points. At points of discontinuity of $F$, we derive estimates that yield convergence to a set first described in our previous work on the metric Fourier operator. Our analysis uses recently defined one-sided local quasi-moduli at points of discontinuity and several notions of local Lipschitz property at points of continuity. We also provide a global approach to error bounds: a multifunction $F$ is represented by the set of all its metric selections, while its approximation (its image under the operator) is represented by the set of images of these metric selections under the operator. A bound on the Hausdorff distance between these two sets of single-valued functions in $L^1$ provides our global estimates. The theory is illustrated by two concrete operators: the Bernstein-Durrmeyer operator and the Kantorovich operator.
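For orientation, here is the classical single-valued Kantorovich operator on $[0,1]$, the template that the paper adapts to SVFs by swapping the Riemann integrals for weighted metric integrals (a NumPy/SciPy sketch with illustrative test choices):

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def kantorovich(f, n, x):
    # K_n f(x) = sum_{k=0}^n C(n,k) x^k (1-x)^{n-k}
    #            * (n+1) * integral of f over [k/(n+1), (k+1)/(n+1)].
    total = 0.0
    for k in range(n + 1):
        mean_val = (n + 1) * quad(f, k / (n + 1), (k + 1) / (n + 1))[0]
        total += comb(n, k) * x**k * (1 - x)**(n - k) * mean_val
    return total

f = lambda t: abs(t - 0.5)              # continuous test function with a kink
for n in [4, 16, 64, 256]:
    err = max(abs(kantorovich(f, n, x) - f(x)) for x in np.linspace(0, 1, 11))
    print(n, round(err, 4))             # uniform error decreases with n
```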
We introduce an integral representation of the Monge-Amp\`ere equation, which leads to a new finite difference method based upon numerical quadrature. The resulting scheme is monotone and fits immediately into existing convergence proofs for the Monge-Amp\`ere equation with either Dirichlet or optimal transport boundary conditions. The use of higher-order quadrature schemes allows for a substantial reduction in the component of the error that depends on the angular resolution of the finite difference stencil. This, in turn, allows for significant improvements in both stencil width and formal truncation error. The resulting schemes can achieve a formal accuracy that is arbitrarily close to $\mathcal{O}(h^2)$, which is the optimal consistency order for monotone approximations of second order operators. We present three different implementations of this method. The first two exploit the spectral accuracy of the trapezoid rule on uniform angular discretizations to allow for computation on a nearest-neighbors finite difference stencil over a large range of grid refinements. The third uses higher-order quadrature to produce superlinear convergence while simultaneously utilizing narrower stencils than other monotone methods. Computational results are presented in two dimensions for problems of various regularity.
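One classical identity of this flavor (stated here for a $2\times 2$ SPD matrix standing in for the Hessian $D^2u$, and not necessarily the paper's exact representation) is $\int_0^{2\pi} (v_\theta^\top M v_\theta)^{-1}\,d\theta = 2\pi/\sqrt{\det M}$ with $v_\theta=(\cos\theta,\sin\theta)$; the sketch below checks it with the trapezoid rule on uniform angular grids, whose spectral accuracy for smooth periodic integrands is what the first two implementations exploit:

```python
import numpy as np

M = np.array([[2.0, 0.7], [0.7, 1.0]])            # any symmetric positive definite matrix
exact = 2 * np.pi / np.sqrt(np.linalg.det(M))
for n in [4, 8, 16, 32]:
    theta = np.arange(n) * 2 * np.pi / n          # uniform angles; trapezoid = periodic sum
    v = np.stack([np.cos(theta), np.sin(theta)])
    approx = (2 * np.pi / n) * np.sum(1.0 / np.einsum('in,ij,jn->n', v, M, v))
    print(n, abs(approx - exact))                 # error decays spectrally in n
```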
In sparse estimation, where the sum of a loss function and a regularization term is minimized, methods such as the proximal gradient method and the proximal Newton method are commonly applied. The former converges slowly, while the latter converges quickly but is inefficient for problems such as group lasso. In this paper, we examine how to find a solution efficiently by identifying the point to which the proximal gradient method converges. Previously, a Newton method had been proposed only for the case in which the Lipschitz constant of the derivative of the loss function is known, and the case in which it is unknown had not been studied theoretically. We show that the Newton method also converges when the Lipschitz constant is unknown, extending the theory. Furthermore, we propose a new quasi-Newton method that avoids Hessian computations to improve efficiency, and we prove that it converges quickly, providing a theoretical guarantee. Finally, numerical experiments show that the proposed method significantly improves efficiency.
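As a concrete baseline for the unknown-Lipschitz-constant setting, here is the proximal gradient method (ISTA) for the lasso with backtracking on the step size; this is the classical scheme whose convergence destination the paper targets, not the proposed quasi-Newton method:

```python
import numpy as np

def ista_backtracking(A, b, lam, iters=200):
    # Proximal gradient (ISTA) for  min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    # with backtracking so no Lipschitz constant of the gradient is needed
    # in advance (the situation the abstract addresses).
    soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    x, L = np.zeros(A.shape[1]), 1.0
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        while True:                       # backtrack until the quadratic model majorizes f
            x_new = soft(x - g / L, lam / L)
            d = x_new - x
            if f(x_new) <= f(x) + g @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0
        x = x_new
    return x

rng = np.random.default_rng(0)
A, x_true = rng.standard_normal((40, 100)), np.zeros(100)
x_true[:5] = 3.0
b = A @ x_true + 0.1 * rng.standard_normal(40)
x_hat = ista_backtracking(A, b, lam=2.0)
print(np.nonzero(np.round(x_hat, 2))[0])  # support concentrates on the first 5 coordinates
```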
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
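The re-weighting scheme is straightforward to reproduce from the formula; a minimal NumPy sketch, where normalizing the weights to sum to the number of classes is a common convention we assume here:

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    # Effective number of samples per class: E_n = (1 - beta^n) / (1 - beta).
    # Per-class weights are inversely proportional to E_n and normalized to
    # sum to the number of classes (an assumed convention); such weights
    # would multiply a per-class loss such as softmax cross-entropy.
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()

# A long-tailed toy distribution: the head class has 10,000 samples, the tail 10.
print(class_balanced_weights([10000, 2000, 400, 80, 10], beta=0.999).round(3))
```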