We present convergence estimates for two types of greedy algorithms in terms of the metric entropy of underlying compact sets. In the first part, we measure the error of a standard greedy reduced basis method for parametric PDEs by the metric entropy of the solution manifold in Banach spaces. This contrasts with the classical analysis based on the Kolmogorov n-widths and enables us to obtain direct comparisons between the greedy algorithm error and the entropy numbers, with explicit and simple multiplicative constants. The entropy-based convergence estimate is sharp and improves upon the classical width-based analysis of reduced basis methods for elliptic model problems. In the second part, we derive a novel and simple convergence analysis of the classical orthogonal greedy algorithm for nonlinear dictionary approximation using the metric entropy of the symmetric convex hull of the dictionary. This also improves upon existing results by giving a direct comparison between the algorithm error and the metric entropy.
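For concreteness, the orthogonal greedy algorithm analyzed in the second part selects, at every step, the dictionary element most correlated with the current residual and then re-projects the target onto the span of all selected elements. The following NumPy sketch illustrates this classical procedure on a finite dictionary; the random dictionary, dimensions, and number of steps are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

def orthogonal_greedy(target, dictionary, n_steps):
    """Classical orthogonal greedy algorithm (orthogonal matching pursuit).

    target:     vector in R^d to approximate
    dictionary: array of shape (d, N) whose columns are normalized dictionary elements
    n_steps:    number of greedy iterations
    """
    residual = target.copy()
    selected = []
    approx = np.zeros_like(target)
    for _ in range(n_steps):
        # Greedy selection: element most correlated with the current residual.
        k = int(np.argmax(np.abs(dictionary.T @ residual)))
        selected.append(k)
        # Orthogonal projection of the target onto the span of the selected elements.
        D = dictionary[:, selected]
        coeffs, *_ = np.linalg.lstsq(D, target, rcond=None)
        approx = D @ coeffs
        residual = target - approx
    return selected, approx

# Illustrative use with a random dictionary (columns normalized).
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))
D /= np.linalg.norm(D, axis=0)
f = rng.standard_normal(50)
idx, f_n = orthogonal_greedy(f, D, n_steps=10)
print(np.linalg.norm(f - f_n))
```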
This paper introduces a numerical approach for solving singularly perturbed convection-diffusion boundary value problems for second-order ordinary differential equations that feature a small positive parameter $\epsilon$ multiplying the highest derivative. We specifically examine Dirichlet boundary conditions. To solve this differential equation, we propose an upwind finite difference method on a Shishkin mesh, which captures the solution near the boundary layers. Our solver is direct and highly accurate, with computation time that scales linearly with the number of grid points. MATLAB code for the numerical recipe is made publicly available. We present numerical results, reported in tables and graphs, that validate the theoretical results and confirm that the proposed method provides a highly accurate approximation of the exact solution.
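As a rough illustration of this type of scheme (not the authors' MATLAB code), the following Python sketch assembles a first-order upwind discretization of $-\epsilon u'' + a(x)u' + b(x)u = f(x)$ with homogeneous Dirichlet data on a piecewise-uniform Shishkin mesh and solves the resulting tridiagonal system with a direct banded solve; the transition-point constant, the problem data, and the mesh size are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import solve_banded

def shishkin_upwind(eps, a, b, f, N, alpha):
    """Upwind finite differences on a Shishkin mesh for
       -eps*u'' + a(x)*u' + b(x)*u = f(x) on (0,1), u(0) = u(1) = 0,
    assuming a(x) >= alpha > 0, so the boundary layer sits at x = 1."""
    # Piecewise-uniform Shishkin mesh with transition point tau (N even).
    tau = min(0.5, 2.0 * eps * np.log(N) / alpha)
    x = np.concatenate([np.linspace(0.0, 1.0 - tau, N // 2 + 1),
                        np.linspace(1.0 - tau, 1.0, N // 2 + 1)[1:]])
    h = np.diff(x)
    xi = x[1:-1]                          # interior nodes
    hl, hr = h[:-1], h[1:]                # left/right mesh steps at each interior node
    hbar = 0.5 * (hl + hr)

    # -eps*D+D- u + a*D- u + b*u = f, with the backward difference upwinding a > 0.
    lower = -eps / (hl * hbar) - a(xi) / hl           # multiplies u_{i-1}
    diag = eps / hbar * (1.0 / hl + 1.0 / hr) + a(xi) / hl + b(xi)
    upper = -eps / (hr * hbar)                        # multiplies u_{i+1}

    # Direct banded (Thomas-type) solve: O(N) work, matching the linear scaling.
    n = len(xi)
    ab = np.zeros((3, n))
    ab[0, 1:] = upper[:-1]
    ab[1, :] = diag
    ab[2, :-1] = lower[1:]
    u = np.zeros_like(x)
    u[1:-1] = solve_banded((1, 1), ab, f(xi))
    return x, u

# Illustrative run: -eps*u'' + u' = 1 with a boundary layer at x = 1.
x, u = shishkin_upwind(1e-6, a=lambda t: np.ones_like(t), b=lambda t: np.zeros_like(t),
                       f=lambda t: np.ones_like(t), N=128, alpha=1.0)
```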
We conduct a non-asymptotic study of the cross-validation (CV) estimate of the generalization risk for learning algorithms dedicated to extreme regions of the covariate space. In this Extreme Value Analysis context, the risk function measures the algorithm's error given that the norm of the input exceeds a high quantile. The main challenge within this framework is the negligible size of the extreme training sample with respect to the full sample size and the necessity to rescale the risk function by a probability tending to zero. We open the road to a finite-sample understanding of CV for extreme values by establishing two new results: an exponential probability bound on the K-fold CV error and a polynomial probability bound on the leave-$p$-out CV error. Our bounds are sharp in the sense that they match state-of-the-art guarantees for standard CV estimates while extending them to encompass a conditioning event of small probability. We illustrate the significance of our results for high-dimensional classification in extreme regions via a Lasso-type logistic regression algorithm. The tightness of our bounds is investigated in numerical experiments.
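The conditioning event described above can be mimicked in code by restricting both training and validation folds to observations whose norm exceeds a high empirical quantile. The sketch below assembles such a K-fold CV error estimate around a Lasso-type (L1-penalized) logistic regression; the angular normalization, the choice of threshold, and all tuning parameters are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def extreme_kfold_cv(X, y, k_extreme, n_splits=5, seed=0):
    """K-fold CV estimate of the classification risk restricted to the extreme
    region {x : ||x|| > threshold}, with the threshold chosen as the empirical
    quantile keeping the k_extreme largest norms.  Illustrative sketch only."""
    norms = np.linalg.norm(X, axis=1)
    threshold = np.sort(norms)[-k_extreme]          # high empirical quantile
    errors = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Keep only the extreme observations in each part of the split.
        tr = train_idx[norms[train_idx] >= threshold]
        te = test_idx[norms[test_idx] >= threshold]
        if len(te) == 0 or len(np.unique(y[tr])) < 2:
            continue
        # Lasso-type logistic regression on the angular (self-normalized) covariates.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
        clf.fit(X[tr] / norms[tr, None], y[tr])
        # Averaging over extreme test points approximates the conditional (rescaled) risk.
        errors.append(np.mean(clf.predict(X[te] / norms[te, None]) != y[te]))
    return float(np.mean(errors))
```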
In this paper, we study the problems of detection and recovery of hidden submatrices with elevated means inside a large Gaussian random matrix. We consider two different structures for the planted submatrices. In the first model, the planted matrices are disjoint, and their row and column indices can be arbitrary. Inspired by scientific applications, the second model restricts the row and column indices to be consecutive. In the detection problem, under the null hypothesis, the observed matrix is a realization of independent and identically distributed standard normal entries. Under the alternative, there exists a set of hidden submatrices with elevated means inside the same standard normal matrix. Recovery refers to the task of locating the hidden submatrices. For both problems, and for both models, we characterize the statistical and computational barriers by deriving information-theoretic lower bounds, designing and analyzing algorithms matching those bounds, and proving computational lower bounds based on the low-degree polynomials conjecture. In particular, we show that the space of the model parameters (i.e., number of planted submatrices, their dimensions, and elevated mean) can be partitioned into three regions: the impossible regime, where all algorithms fail; the hard regime, where detection or recovery is statistically possible but we give evidence that polynomial-time algorithms do not exist; and finally the easy regime, where polynomial-time algorithms exist.
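To make the detection task concrete, one natural test statistic for the consecutive model (not necessarily one of the algorithms analyzed in the paper) is the scan statistic: the maximum standardized sum over all contiguous k-by-k blocks, computed efficiently with 2D cumulative sums. The block size, matrix size, and elevated mean below are illustrative assumptions.

```python
import numpy as np

def consecutive_scan_statistic(M, k):
    """Maximum standardized block sum over all contiguous k-by-k blocks of M;
    under the null each block sum is N(0, k^2), so we divide by k."""
    S = np.cumsum(np.cumsum(M, axis=0), axis=1)
    S = np.pad(S, ((1, 0), (1, 0)))       # S[i, j] = sum of M[:i, :j]
    block = S[k:, k:] - S[:-k, k:] - S[k:, :-k] + S[:-k, :-k]
    return block.max() / k

# Illustrative comparison of the statistic with and without a planted block.
rng = np.random.default_rng(1)
n, k, lam = 200, 10, 1.0
M0 = rng.standard_normal((n, n))
M1 = M0.copy()
M1[:k, :k] += lam                         # planted consecutive submatrix
print(consecutive_scan_statistic(M0, k), consecutive_scan_statistic(M1, k))
```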
In this paper, we develop a posteriori error estimates for numerical approximations of scalar hyperbolic conservation laws in one space dimension. We develop novel quantitative, partially $L^2$-type estimates by using the theory of shifts and, in particular, the framework for proving stability first developed in [Krupa-Vasseur. On uniqueness of solutions to conservation laws verifying a single entropy condition. J. Hyperbolic Differ. Equ., 2019]. We overcome two of the major obstacles to using the theory of shifts for quantitative estimates, namely the change-of-variables problem and the loss of control on small shocks. Our methods have no inherent small-data limitations. Thus, our hope is to apply our techniques to the systems case to understand the numerical stability of large data. There is hope for our results to generalize to systems: the stability framework [Krupa-Vasseur, 2019] itself has been generalized to systems [Chen-Krupa-Vasseur. Uniqueness and weak-BV stability for $2\times 2$ conservation laws. Arch. Ration. Mech. Anal., 246(1):299--332, 2022]. Moreover, we are careful not to appeal to the Kruzhkov theory for scalar conservation laws. Instead, we work entirely within the context of the theory of shifts and $a$-contraction -- and these theories apply equally to systems. We present a MATLAB numerical implementation and numerical experiments. We also provide a brief introduction to the theory of shifts and $a$-contraction.
Large and complex datasets are often collected from several, possibly heterogeneous sources. Collaborative learning methods improve efficiency by leveraging commonalities across datasets while accounting for possible differences among them. Here we study collaborative linear regression and contextual bandits, where each instance's associated parameters are equal to a global parameter plus a sparse instance-specific term. We propose a novel two-stage estimator called MOLAR that leverages this structure by first constructing an entry-wise median of the instances' linear regression estimates, and then shrinking the instance-specific estimates towards the median. MOLAR improves the dependence of the estimation error on the data dimension, compared to independent least squares estimates. We then apply MOLAR to develop methods for sparsely heterogeneous collaborative contextual bandits, which lead to improved regret guarantees compared to independent bandit methods. We further show that our methods are minimax optimal by providing a number of lower bounds. Finally, we support the efficiency of our methods by performing experiments on both synthetic data and the PISA dataset on student educational outcomes from heterogeneous countries.
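The two-stage structure described above can be illustrated in a few lines: independent least-squares fits per instance, an entry-wise median across instances, and a sparsity-inducing shrinkage of each instance estimate towards that median. In the sketch below the shrinkage step is plain soft thresholding with a single level `lam`; both that choice and the parameter name are stand-ins, and the paper's actual shrinkage step and its tuning may differ.

```python
import numpy as np

def molar_sketch(Xs, ys, lam):
    """Two-stage collaborative estimator in the spirit of MOLAR (sketch).

    Xs, ys: lists of per-instance design matrices and response vectors.
    lam:    shrinkage level for the sparse instance-specific corrections.
    """
    # Stage 1: independent least-squares estimates and their entry-wise median.
    betas = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in zip(Xs, ys)]
    beta_med = np.median(np.stack(betas), axis=0)

    # Stage 2: shrink each instance estimate towards the median; soft thresholding
    # keeps only the (sparse) instance-specific deviations exceeding lam.
    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    return beta_med, [beta_med + soft(b - beta_med, lam) for b in betas]
```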
The multigrid V-cycle method is a popular method for solving systems of linear equations. It computes an approximate solution by using smoothing on the fine levels and solving a system of linear equations on the coarsest level. The treatment of the coarsest level depends on the size and difficulty of the problem. If the size permits, it is typical to use a direct method based on LU or Cholesky decomposition. In settings with large coarsest-level problems, approximate solvers, such as iterative Krylov subspace methods or direct methods based on low-rank approximation, are often used. The accuracy of the coarsest-level solver is typically determined based on the experience of the users with the concrete problems and methods. In this paper, we present an approach to analyzing the effects of approximate coarsest-level solves on the convergence of the V-cycle method for symmetric positive definite problems. Using this approach, we discuss how the convergence of the V-cycle method may be affected by (1) the choice of the tolerance in a stopping criterion based on the relative residual norm for an iterative coarsest-level solver, or (2) the choices of the low-rank threshold parameter and finite precision arithmetic for a block low-rank direct coarsest-level solver. Furthermore, we present new coarsest-level stopping criteria tailored to the multigrid method and suggest a heuristic strategy for their effective use in practical computations.
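To make the setting concrete, the following sketch implements a basic geometric V-cycle for a 1D Poisson problem in which the coarsest level is handled by a conjugate gradient solver stopped at a prescribed relative residual tolerance, so the effect of loosening that tolerance can be experimented with directly. The smoother, transfer operators, grid hierarchy, and tolerance values are generic textbook choices, not the paper's specific setup.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def poisson_1d(n):
    """1D Dirichlet Poisson matrix on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2

def v_cycle(A_levels, b, x, level=0, nu=2, omega=2.0 / 3.0, coarse_tol=1e-8):
    """One V-cycle with weighted-Jacobi smoothing and an approximate
    coarsest-level solve: CG stopped by a relative residual tolerance."""
    A = A_levels[level]
    if level == len(A_levels) - 1:
        # Approximate coarsest-level solve; coarse_tol controls its accuracy
        # (the keyword is `tol` in older SciPy releases).
        y, _ = cg(A, b, rtol=coarse_tol)
        return y
    D = A.diagonal()
    for _ in range(nu):                      # pre-smoothing
        x = x + omega * (b - A @ x) / D
    r = b - A @ x
    rc = 0.25 * r[:-2:2] + 0.5 * r[1::2] + 0.25 * r[2::2]   # full-weighting restriction
    ec = v_cycle(A_levels, rc, np.zeros_like(rc), level + 1, nu, omega, coarse_tol)
    e = np.zeros_like(x)                     # linear-interpolation prolongation
    e[1::2] = ec
    e[:-1:2] += 0.5 * ec
    e[2::2] += 0.5 * ec
    x = x + e
    for _ in range(nu):                      # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

# Illustrative use: a few V-cycles with a deliberately loose coarsest-level tolerance.
A_levels = [poisson_1d(2**k - 1) for k in range(9, 3, -1)]   # 511 -> 15 interior points
b = np.ones(A_levels[0].shape[0])
x = np.zeros_like(b)
for _ in range(10):
    x = v_cycle(A_levels, b, x, coarse_tol=1e-2)
```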
When repeated evaluations of a high-fidelity physical model are required for varying parameter configurations, surrogate modeling techniques based on model order reduction are desirable. In the absence of the governing equations describing the dynamics, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion. In this setting, the usual residual-based error estimate for optimal parameter sampling associated with the reduced basis method is not directly available. Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots, thereby enabling us to effectively construct a parametric surrogate model. We consider separate parameter-specific proper orthogonal decomposition (POD) subspaces and propose an active-learning-driven surrogate model using kernel-based shallow neural networks, abbreviated as the ActLearn-POD-KSNN surrogate model. To demonstrate the validity of the proposed ideas, we present numerical experiments using two physical models, namely Burgers' equation and the shallow water equations. Both models have mixed -- convective and diffusive -- effects within their respective parameter domains, with each of them dominating in certain regions. The proposed ActLearn-POD-KSNN surrogate model efficiently predicts the solution at new parameter locations, even in a setting with multiple interacting shock profiles.
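At its core, a POD-plus-kernel surrogate of this kind compresses snapshots into a low-dimensional basis and then learns a map from parameters to reduced coefficients. The sketch below uses an SVD-based POD basis and SciPy's RBFInterpolator as a stand-in for the kernel-based shallow neural network; the active-learning criterion for selecting new parameter samples is omitted, and all names, shapes, and the scalar parameter setting are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def pod_basis(snapshots, rank):
    """POD basis of a snapshot matrix whose columns are solution snapshots."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank]

def fit_coefficient_map(params, snapshots, basis):
    """Map a 1D array of scalar parameters to POD coefficients with an
    RBF interpolant (a stand-in for the kernel-based shallow network)."""
    coeffs = basis.T @ snapshots                       # (rank, n_params)
    return RBFInterpolator(np.atleast_2d(params).T, coeffs.T)

def predict(interp, basis, new_params):
    """Reconstruct full-order solutions at new parameter values."""
    return basis @ interp(np.atleast_2d(new_params).T).T

# Illustrative use with made-up snapshots over a scalar parameter.
mu = np.linspace(0.1, 1.0, 20)                                            # training parameters
S = np.array([np.sin(np.linspace(0, 2 * np.pi, 200) / m) for m in mu]).T  # snapshot matrix
V = pod_basis(S, rank=5)
interp = fit_coefficient_map(mu, S, V)
u_new = predict(interp, V, np.array([0.37]))
```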
We consider the problem of solving linear least squares problems in a framework where only evaluations of the linear map are possible. We derive randomized methods that require no matrix operations other than forward evaluations; in particular, no evaluation of the adjoint map is needed. Our method is motivated by the simple observation that one can obtain an unbiased estimate of the application of the adjoint. We show convergence of the method and then derive a more efficient method that uses an exact linesearch. This method, called random descent, resembles known methods in other contexts and has the randomized coordinate descent method as a special case. We provide a convergence analysis of the random descent method, emphasizing the dependence on the underlying distribution of the random vectors. Furthermore, we investigate the applicability of the method in the context of ill-posed inverse problems and show that the method can have beneficial properties when the unknown solution is rough. We illustrate the theoretical findings in numerical examples. One particular result is that the random descent method actually outperforms established transpose-free methods (TFQMR and CGS) in our examples.
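A minimal version of the exact-linesearch variant is easy to state: draw a random direction u, apply the forward map once to u, and take the step length that exactly minimizes the residual along u. The sketch below assumes Gaussian random directions and a matrix available only through a forward-evaluation callback; the distribution of the directions and all problem sizes are illustrative, and details may differ from the method analyzed in the paper.

```python
import numpy as np

def random_descent(apply_A, b, n, n_iter, rng=None):
    """Random descent for least squares using only forward evaluations of A.

    At each step a random direction u is drawn and the exact linesearch
    minimizer of ||A(x + t*u) - b||^2 over t is used; the step only requires
    one forward application A(u), never the adjoint."""
    rng = np.random.default_rng(rng)
    x = np.zeros(n)
    r = apply_A(x) - b                    # current residual A x - b
    for _ in range(n_iter):
        u = rng.standard_normal(n)        # Gaussian directions; coordinate directions
        Au = apply_A(u)                   # recover randomized coordinate descent
        denom = Au @ Au
        if denom == 0.0:
            continue
        t = -(Au @ r) / denom             # exact linesearch along u
        x += t * u
        r += t * Au                       # update the residual without re-applying A
    return x

# Illustrative use with a matrix that is only available as a forward map.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 100))
b = A @ rng.standard_normal(100)
x_hat = random_descent(lambda v: A @ v, b, n=100, n_iter=5000)
```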
In this paper, we propose a model averaging approach for addressing model uncertainty in the context of partial linear functional additive models. These models describe the relation between a response and mixed types of predictors by incorporating both the parametric effect of scalar variables and the additive effect of a functional variable. The proposed model averaging scheme assigns weights to candidate models by minimizing a multi-fold cross-validation criterion. Furthermore, we establish the asymptotic optimality of the resulting estimator in the sense that it achieves the lowest possible squared prediction error loss under model misspecification. Extensive simulation studies and an application to a near-infrared spectra dataset are presented to support and illustrate our method.
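The weight-selection step admits a compact illustration: given cross-validated predictions from each candidate model, choose weights on the simplex that minimize the multi-fold CV squared error. The sketch below treats the candidate models' CV predictions as given and uses a generic constrained optimizer; the construction of the candidate partial linear functional additive models is not shown, and the criterion here is a simplified stand-in for the one in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def cv_model_averaging_weights(cv_preds, y):
    """Weights minimizing the multi-fold CV squared error over the simplex.

    cv_preds: array of shape (n, M) whose columns hold the cross-validated
              predictions of the M candidate models.
    y:        response vector of length n.
    """
    n, M = cv_preds.shape

    def cv_criterion(w):
        return np.sum((y - cv_preds @ w) ** 2)

    w0 = np.full(M, 1.0 / M)                          # start from equal weights
    res = minimize(cv_criterion, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * M,
                   constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
    return res.x
```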
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
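The effective-number formula from the abstract translates directly into a per-class weighting rule. The sketch below computes weights proportional to the inverse effective number of samples $(1-\beta^{n})/(1-\beta)$ and normalizes them to sum to the number of classes; the normalization convention and the example class counts are assumptions for illustration.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights from the effective number of samples (1 - beta^n) / (1 - beta)."""
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    # Normalize so the weights sum to the number of classes.
    return weights * len(n) / weights.sum()

# Example: a long-tailed class distribution with 4 classes.
print(class_balanced_weights([5000, 500, 50, 5], beta=0.999))
```

These weights can then multiply the per-sample loss terms (e.g., cross-entropy) according to each sample's class, which is the re-balancing scheme described above.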