Plenty of applications and analyses of time-independent elliptic partial differential equations in the literature hint at the benefits of overtesting, that is, using more collocation conditions than basis functions. Overtesting not only reduces the problem size but is also known to be necessary for the stability and convergence of widely used unsymmetric Kansa-type strong-form collocation methods. We consider kernel-based meshfree methods, namely methods of lines with spatial collocation and overtesting, for solving parabolic partial differential equations on surfaces without parametrization. In this paper, we extend the time-independent convergence theories for overtesting techniques to parabolic equations on smooth, closed surfaces.
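As an illustration of the overtesting idea (though not of the surface setting above), the following sketch solves a 1D Poisson problem by strong-form Gaussian-kernel collocation with more collocation points than trial centers and a least-squares solve; the shape parameter, point counts and model problem are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: Gaussian-kernel strong-form collocation with overtesting for
# -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0.  Illustrative only; the paper treats
# parabolic problems on surfaces, not this 1D model problem.
import numpy as np

eps = 5.0                                    # shape parameter (assumed)
centers = np.linspace(0.0, 1.0, 20)          # n trial centers
colloc = np.linspace(0.0, 1.0, 60)[1:-1]     # m > n interior collocation points
boundary = np.array([0.0, 1.0])

def kernel(x, c):
    return np.exp(-(eps * (x - c)) ** 2)

def kernel_dxx(x, c):
    r = x - c
    return (4.0 * eps**4 * r**2 - 2.0 * eps**2) * np.exp(-(eps * r) ** 2)

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # manufactured solution u(x) = sin(pi x)

# Overdetermined system: PDE rows at interior points, Dirichlet rows at the boundary.
A = np.vstack([-kernel_dxx(colloc[:, None], centers[None, :]),
               kernel(boundary[:, None], centers[None, :])])
b = np.concatenate([f(colloc), np.zeros(2)])

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares (overtested) solve
u_approx = kernel(colloc[:, None], centers[None, :]) @ coeffs
print(np.max(np.abs(u_approx - np.sin(np.pi * colloc))))
```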
A major challenge in current optimization research for deep learning is to automatically find optimal step sizes for each update step. The optimal step size is closely related to the shape of the loss in the update step direction. However, this shape has not yet been examined in detail. This work shows empirically that the batch loss along lines in the negative gradient direction is mostly convex locally and well suited for one-dimensional parabolic approximations. By exploiting this parabolic property, we introduce a simple and robust line search approach that performs loss-shape-dependent update steps. Our approach combines well-known techniques such as parabolic approximation, line search, and conjugate gradient to perform efficiently. It surpasses other step size estimation methods and competes with common optimization methods on a large variety of experiments without the need for hand-designed step size schedules. Thus, it is of interest for objectives where step-size schedules are unknown or do not perform well. Our extensive evaluation includes multiple comprehensive hyperparameter grid searches on several datasets and architectures. Finally, we provide a general investigation of exact line searches in the context of batch losses and exact losses, including their relation to our line search approach.
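To make the parabolic-approximation idea concrete, here is a minimal sketch that fits a parabola to a one-dimensional batch-loss slice and jumps toward its vertex; the helper `loss_at`, the measuring distance `mu` and the fallback rule are illustrative assumptions, and this is not the authors' released implementation.

```python
# Minimal sketch of a parabolic-approximation line search on a one-dimensional
# batch-loss slice.  `loss_at(t)` is assumed to evaluate the batch loss at
# parameters theta + t * d for a fixed direction d (e.g. the negative gradient).
import numpy as np

def parabolic_step(loss_at, mu=0.1, max_step=1.0):
    """Fit l(t) ~ a t^2 + b t + c through the value and slope at t = 0 and the
    value at t = mu, then step to the parabola's vertex (clipped)."""
    l0 = loss_at(0.0)
    slope0 = (loss_at(1e-4) - l0) / 1e-4       # crude directional derivative at 0
    l_mu = loss_at(mu)
    a = (l_mu - l0 - slope0 * mu) / mu**2
    if a <= 0.0:                               # slice not convex here: fall back
        return mu
    t_star = -slope0 / (2.0 * a)               # vertex of the fitted parabola
    return float(np.clip(t_star, 0.0, max_step))

# Toy usage: a quadratic slice whose exact minimizer along the line is t = 3.
theta, d = 3.0, -1.0
loss_at = lambda t: 0.5 * (theta + t * d) ** 2
print(parabolic_step(loss_at, max_step=5.0))   # close to 3.0
```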
This paper is devoted to condition numbers of the total least squares problem with linear equality constraint (TLSE). Using novel limit techniques, we derive closed formulae for the normwise, mixed and componentwise condition numbers of the TLSE problem. Computable expressions and upper bounds for these condition numbers are also given to avoid costly Kronecker product-based operations. The results unify those for the TLS problem. For TLSE problems with equilibratory input data, numerical experiments illustrate that the normwise condition number-based estimate is sharp for evaluating the forward error of the solution, whereas for sparse and badly scaled matrices, the mixed and componentwise condition number-based estimates are much tighter.
We consider flux-corrected finite element discretizations of 3D convection-dominated transport problems and assess the computational efficiency of algorithms based on such approximations. The methods under investigation include flux-corrected transport schemes and monolithic limiters. We discretize in space using a continuous Galerkin method and $\mathbb{P}_1$ or $\mathbb{Q}_1$ finite elements. Time integration is performed using the Crank-Nicolson method or an explicit strong stability preserving Runge-Kutta method. Nonlinear systems are solved using a fixed-point iteration method, which requires the solution of large linear systems at each iteration or time step. The great variety of options in the choice of discretization methods and solver components calls for a dedicated comparative study of existing approaches. To perform such a study, we define new 3D test problems for time-dependent and stationary convection-diffusion-reaction equations. The results of our numerical experiments illustrate how the limiting technique, the time discretization and the solver impact the overall performance.
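For readers unfamiliar with the solver structure described above, the following sketch shows one Crank-Nicolson step resolved by a fixed-point (Picard) iteration for a generic semi-discrete system $M\,du/dt = F(u)$; the matrices, time step and tolerances are illustrative stand-ins, not the paper's flux-corrected operators.

```python
# Minimal sketch of one Crank-Nicolson step resolved by a fixed-point (Picard)
# iteration for a semi-discrete system M du/dt = F(u); F becomes nonlinear once
# a flux limiter is applied.  M, F, dt and the tolerances are illustrative.
import numpy as np

def crank_nicolson_step(M, F, u_old, dt, tol=1e-10, max_iter=50):
    """Solve M (u - u_old) / dt = 0.5 * (F(u) + F(u_old)) by fixed-point iteration."""
    rhs_old = M @ u_old + 0.5 * dt * F(u_old)
    u = u_old.copy()
    for _ in range(max_iter):
        u_new = np.linalg.solve(M, rhs_old + 0.5 * dt * F(u))  # one linear solve per sweep
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new
        u = u_new
    return u

# Toy usage: linear "discrete diffusion" F(u) = -K u with an identity mass matrix.
n = 5
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print(crank_nicolson_step(np.eye(n), lambda u: -K @ u, np.ones(n), dt=0.1))
```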
In this paper we present results on asymptotic characteristics of multivariate function classes in the uniform norm. Our main interest is the approximation of functions with mixed smoothness parameter not larger than $1/2$. Our focus is on the behavior of the best $m$-term trigonometric approximation as well as the decay of Kolmogorov and entropy numbers in the uniform norm. It turns out that these quantities share a few fundamental abstract properties, such as their behavior under real interpolation, so that they can be treated simultaneously. We start by proving estimates for finite-rank convolution operators whose range lies in a step hyperbolic cross. These results imply bounds for the corresponding function space embeddings by a well-known decomposition technique. The decay of the Kolmogorov numbers has direct implications for the problem of sampling recovery in $L_2$ in situations where recent results in the literature are not applicable because the corresponding approximation numbers are not square summable.
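For orientation, the sparse frequency sets that the finite-rank convolution operators above range over can be enumerated explicitly; the short sketch below lists the closely related (non-step) hyperbolic cross in two dimensions, with the threshold and dimension chosen purely for illustration (the step variant is assembled from dyadic blocks of comparable size).

```python
# Purely illustrative: enumerate a hyperbolic-cross frequency set in d = 2,
# i.e. all k with prod_j max(1, |k_j|) <= 2^n.
import itertools
import math

def hyperbolic_cross(n, d=2):
    radius = 2 ** n
    grid = range(-radius, radius + 1)
    return [k for k in itertools.product(grid, repeat=d)
            if math.prod(max(1, abs(kj)) for kj in k) <= radius]

cross = hyperbolic_cross(4)
print(len(cross))   # grows like 2^n * n, far fewer points than the full (2^(n+1)+1)^2 grid
```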
We propose a new computationally efficient test for conditional independence based on the $L^{p}$ distance between two kernel-based representatives of well-suited distributions. By evaluating the difference of these two representatives at a finite set of locations, we derive a finite-dimensional approximation of the $L^{p}$ metric, obtain its asymptotic distribution under the null hypothesis of conditional independence, and design a simple statistical test from it. The resulting test is consistent and computationally efficient. We conduct a series of experiments showing that our new test outperforms state-of-the-art methods in terms of both statistical power and type-I error, even in the high-dimensional setting.
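As a schematic illustration of the underlying idea, the sketch below compares two kernel-smoothed representatives at a finite set of locations and aggregates the differences with an $L^{p}$ norm; for simplicity it shows the unconditional-independence analogue (joint kernel density estimate versus product of marginal estimates), not the paper's conditional statistic, and the bandwidth and locations are assumptions.

```python
# Schematic sketch: L^p distance between two kernel-based representatives evaluated
# at a finite set of locations.  Unconditional-independence analogue only.
import numpy as np

def gauss(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def lp_independence_stat(x, y, locations, h=0.3, p=2):
    """L^p distance between the joint KDE of (x, y) and the product of marginal
    KDEs, evaluated at a finite set of (x0, y0) locations."""
    diffs = []
    for x0, y0 in locations:
        joint = np.mean(gauss(x - x0, h) * gauss(y - y0, h))
        product = np.mean(gauss(x - x0, h)) * np.mean(gauss(y - y0, h))
        diffs.append(abs(joint - product) ** p)
    return np.mean(diffs) ** (1.0 / p)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y_indep = rng.normal(size=500)
y_dep = x + 0.3 * rng.normal(size=500)
locs = [(a, b) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)]
print(lp_independence_stat(x, y_indep, locs))  # small under independence
print(lp_independence_stat(x, y_dep, locs))    # larger under dependence
```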
Uncertainty in data is certainly one of the main problems in epidemiology, as shown by the recent COVID-19 pandemic. Efficient methods capable of quantifying uncertainty in the mathematical model are essential for producing realistic scenarios of the spread of infection. In this paper, we introduce a bi-fidelity approach to quantify uncertainty in spatially dependent epidemic models. The approach is based on evaluating a high-fidelity model on a small number of samples properly selected from a large number of evaluations of a low-fidelity model. In particular, we will consider the class of multiscale transport models recently introduced in Bertaglia, Boscheri, Dimarco & Pareschi, Math. Biosci. Eng. (2021) and Boscheri, Dimarco & Pareschi, Math. Models Methods Appl. Sci. (2021) as the high-fidelity reference and use simple two-velocity discrete models for low-fidelity evaluations. Both models share the same diffusive behavior and are solved with ad-hoc asymptotic-preserving numerical discretizations. A series of numerical experiments confirm the validity of the approach.
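As a rough illustration of the bi-fidelity workflow (many cheap low-fidelity runs, a handful of expensive high-fidelity runs at selected samples, and reuse of the low-fidelity interpolation coefficients), the sketch below uses two toy scalar-field "models" and a pivoted-QR selection rule; all models, sample sizes and the high-fidelity budget are illustrative assumptions, not the transport models of the paper.

```python
# Minimal sketch of a bi-fidelity construction: select a few parameter samples from
# many low-fidelity evaluations, run the expensive high-fidelity model only there,
# and reuse the low-fidelity interpolation coefficients for the high-fidelity basis.
import numpy as np
from scipy.linalg import qr, lstsq

x = np.linspace(0, 1, 50)
def low_fid(z):   return np.sin(np.pi * x * z)                 # cheap surrogate
def high_fid(z):  return np.sin(np.pi * x * z) + 0.1 * np.sin(3 * np.pi * x * z)

rng = np.random.default_rng(1)
Z = rng.uniform(0.5, 2.0, size=200)                            # stochastic parameters
UL = np.column_stack([low_fid(z) for z in Z])                  # cheap: all samples

r = 5                                                          # high-fidelity budget
_, _, piv = qr(UL, pivoting=True)                              # pick "important" columns
sel = piv[:r]
UH_sel = np.column_stack([high_fid(Z[i]) for i in sel])        # expensive: only r runs

# Low-fidelity interpolation coefficients, reused with the high-fidelity snapshots.
C, *_ = lstsq(UL[:, sel], UL)
UH_bifi = UH_sel @ C                                           # bi-fidelity estimates

mean_estimate = UH_bifi.mean(axis=1)                           # e.g. mean field
true_mean = np.column_stack([high_fid(z) for z in Z]).mean(axis=1)
print(np.max(np.abs(mean_estimate - true_mean)))
```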
Recently, a new variant of the BiCGStab method, known as pipelined BiCGStab, has been proposed. This method can achieve higher scalability and speed-up rates through a mechanism in which the communication phase for the inner product computations is overlapped with the matrix-vector product computation. On the other hand, there exist several generalized iterative methods with better convergence behavior than BiCGStab, such as ssBiCGSafe, BiCGSafe and GPBi-CG. Of these methods, ssBiCGSafe, which requires a single phase of inner product computations per iteration, is best suited for high-performance computing systems. In this paper, inspired by the success of the pipelined BiCGStab method, we propose variants of the ssBiCGSafe method in which only one phase of inner product computations per iteration is required, and this phase can be overlapped with the matrix-vector computation. Through numerical experiments, we show that the proposed methods improve convergence behavior and execution time compared to the pipelined BiCGStab and ssBiCGSafe methods.
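For context, the sketch below is a standard (non-pipelined) BiCGStab iteration with comments marking the inner products that force global reductions on a distributed system; pipelined variants and ssBiCGSafe reorganize the recurrences so those reductions can be overlapped with the matrix-vector products, a reorganization not reproduced here.

```python
# Reference sketch of standard BiCGStab without preconditioning, annotated with
# where inner products (global reductions) and matrix-vector products occur.
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    r0 = r.copy()
    rho_old = alpha = omega = 1.0
    v = p = np.zeros_like(b)
    for _ in range(max_iter):
        rho = r0 @ r                         # inner product -> global reduction
        beta = (rho / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p                            # matrix-vector product
        alpha = rho / (r0 @ v)               # inner product -> global reduction
        s = r - alpha * v
        t = A @ s                            # matrix-vector product
        omega = (t @ s) / (t @ t)            # two more inner products
        x += alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_old = rho
    return x

# Toy usage on a small nonsymmetric, diagonally dominant system.
n = 100
A = 4.0 * np.eye(n) + np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = bicgstab(A, b)
print(np.linalg.norm(A @ x - b))
```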
We apply the recently developed least squares stabilized symmetric Nitsche method for the enforcement of Dirichlet boundary conditions to the finite cell method. The least squares stabilized Nitsche method in combination with finite cell stabilization leads to a symmetric positive definite stiffness matrix and relies only on elementwise stabilization, which does not lead to additional fill-in. We prove a priori error estimates and bounds on the condition numbers.
This work addresses testing the independence of two continuous, finite-dimensional random variables via the design of a data-driven partition. The empirical log-likelihood statistic is adopted to approximate the sufficient statistics of an oracle test against independence (one that knows the two hypotheses). It is shown that approximating the sufficient statistics of the oracle test offers a learning criterion for designing a data-driven partition that connects with the problem of mutual information estimation. Applying these ideas in the context of a data-dependent tree-structured partition (TSP), we derive conditions on the TSP's parameters to achieve a strongly consistent, distribution-free test of independence over the family of probabilities equipped with a density. Complementing this result, we present finite-length results that show our TSP scheme's capacity to detect the scenario of independence structurally with the data-driven partition, as well as new sampling complexity bounds for this detection. Finally, some experimental analyses provide evidence of our scheme's advantage for testing independence compared with strategies that do not use data-driven representations.
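As a simplified illustration of the statistic being approximated, the sketch below computes the empirical log-likelihood (plug-in mutual information) over a partition; a fixed product grid stands in for the paper's data-driven tree-structured partition, and the bin count and sample sizes are assumptions.

```python
# Minimal sketch of the empirical log-likelihood (plug-in mutual information)
# statistic over a partition: ~0 under independence, bounded away from 0 otherwise.
# A fixed histogram grid replaces the data-driven tree-structured partition (TSP).
import numpy as np

def partition_mi(x, y, bins=5):
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    n = counts.sum()
    pxy = counts / n                          # empirical cell probabilities
    px = pxy.sum(axis=1, keepdims=True)       # empirical marginals on the grid
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
print(partition_mi(x, rng.normal(size=2000)))            # near 0: independent
print(partition_mi(x, x + 0.5 * rng.normal(size=2000)))  # clearly positive: dependent
```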
We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that, in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
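As a back-of-the-envelope illustration of the bias-variance balancing described above, the sketch below evaluates a variance-regularized surrogate objective (empirical mean plus a standard-deviation penalty, which the chi-square-ball robust formulation approximates); the loss samples and the constant `rho` are illustrative stand-ins, not the paper's estimator.

```python
# Illustrative sketch of a variance-regularized surrogate: empirical mean of the
# losses plus a penalty proportional to the empirical standard deviation.
import numpy as np

def variance_regularized_objective(losses, rho=1.0):
    n = losses.shape[0]
    return losses.mean() + np.sqrt(2.0 * rho * losses.var(ddof=1) / n)

rng = np.random.default_rng(0)
losses_a = rng.gamma(shape=36.0, scale=0.30 / 36.0, size=1000)   # mean ~0.30, low variance
losses_b = rng.gamma(shape=0.25, scale=0.28 / 0.25, size=1000)   # mean ~0.28, high variance
for name, l in [("a", losses_a), ("b", losses_b)]:
    print(name, l.mean(), variance_regularized_objective(l, rho=5.0))
# With these samples the penalized objective typically prefers the low-variance
# losses even though their raw mean is slightly worse.
```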