The total variation diminishing (TVD) property is an important tool for ensuring nonlinear stability and convergence of numerical solutions of one-dimensional scalar conservation laws. However, extending this approach to two-dimensional problems has proved challenging. Using the anisotropic definition of discrete total variation (TV), it was shown in \cite{Goodman} that TVD solutions of two-dimensional hyperbolic equations are at most first order accurate. We propose an alternative definition resulting from a full discretization of the semi-discrete Raviart-Thomas TV. We demonstrate numerically, using the second order discontinuous Galerkin method, that limited solutions of two-dimensional hyperbolic equations are TVD in the means when the total variation is computed using the new definition.
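For reference, the anisotropic discrete TV underlying the first-order barrier cited above is straightforward to compute. The following minimal sketch assumes cell averages `u[i, j]` on a uniform grid with spacings `dx`, `dy`; the Raviart-Thomas-based definition proposed here is not shown.

```python
import numpy as np

def anisotropic_tv(u, dx, dy):
    """Anisotropic discrete TV of cell averages u[i, j]:
    sum |u[i+1,j] - u[i,j]| dy + sum |u[i,j+1] - u[i,j]| dx."""
    return (np.abs(np.diff(u, axis=0)).sum() * dy
            + np.abs(np.diff(u, axis=1)).sum() * dx)

# Example: TV of a discretized indicator function on [0, 1]^2.
n = 64
x = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(x, x, indexing="ij")
u = (X + Y < 1.0).astype(float)
print(anisotropic_tv(u, 1.0 / n, 1.0 / n))  # l1-perimeter of the jump set, ~2
```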
We consider Broyden's method and some accelerated schemes for nonlinear equations having a strongly regular singularity of first order with a one-dimensional nullspace. Our two main results are as follows. First, we show that the use of a preceding Newton-like step ensures convergence for starting points in a starlike domain with density 1. This significantly extends the domain of convergence of these methods. Second, we establish that the matrix updates of Broyden's method converge q-linearly with the same asymptotic factor as the iterates. This contributes to the long-standing question of whether the Broyden matrices converge by showing that this is indeed the case in the setting at hand. Furthermore, we prove that the Broyden directions violate uniform linear independence, which implies that existing results on convergence of the Broyden matrices cannot be applied. High-precision numerical experiments confirm the enlarged domain of convergence, the q-linear convergence of the matrix updates, and the lack of uniform linear independence. In addition, they suggest that these results extend to singularities of higher order and that Broyden's method can converge r-linearly without converging q-linearly. The underlying code is freely available.
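For concreteness, a minimal sketch of Broyden's "good" method follows. The toy system, the starting point, and the use of the exact Jacobian at the starting point as the initial matrix (in the spirit of a preceding Newton-like step) are illustrative assumptions; the toy root at the origin has a singular Jacobian with a one-dimensional nullspace.

```python
import numpy as np

def broyden(F, x, B, tol=1e-12, max_iter=100):
    """Broyden's 'good' method for F(x) = 0 with rank-one secant updates."""
    for _ in range(max_iter):
        s = np.linalg.solve(B, -F(x))              # quasi-Newton step
        x_new = x + s
        y = F(x_new) - F(x)
        B = B + np.outer(y - B @ s, s) / (s @ s)   # enforce B_new @ s = y
        x = x_new
        if np.linalg.norm(F(x)) < tol:
            break
    return x, B

# Toy system with a first-order singularity at the root (0, 0).
F = lambda x: np.array([x[0]**2 + x[1], x[1]])
x0 = np.array([0.5, 0.3])
B0 = np.array([[2 * x0[0], 1.0], [0.0, 1.0]])  # exact Jacobian at x0
x_star, B_star = broyden(F, x0, B0)
print(x_star, B_star)
```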
In this work we present a novel bulk-surface virtual element method (BSVEM) for the numerical approximation of elliptic bulk-surface partial differential equations (BSPDEs) in three space dimensions. The BSVEM is based on the discretisation of the bulk domain into polyhedral elements with arbitrarily many faces. The polyhedral approximation of the bulk induces a polygonal approximation of the surface. First, we present a geometric error analysis of bulk-surface polyhedral meshes that is independent of the numerical method. Then we show that the BSVEM has optimal second-order convergence in space, provided the exact solution is $H^{2+3/4}$ in the bulk and $H^2$ on the surface, where the additional $\frac{3}{4}$ is due to the combined effect of surface curvature and polyhedral elements close to the boundary. We show that general polyhedra can be exploited to reduce the computational time of the matrix assembly. To support our convergence results, we present a numerical example that demonstrates optimal convergence for an elliptic BSPDE in three space dimensions.
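The quadratic geometric error of piecewise-flat approximations of curved boundaries, which drives the analysis above, can be checked numerically in a lower-dimensional analogue (an assumption for illustration, not the paper's setting): the inscribed $n$-gon of the unit circle has Hausdorff distance approaching $h^2/8$ in the edge length $h$.

```python
import numpy as np

# Geometric error of the inscribed n-gon of the unit circle: the maximal
# distance from a chord to the arc is 1 - cos(pi/n), while the edge
# length is h = 2 sin(pi/n); the ratio dist / h^2 tends to 1/8.
for n in [16, 32, 64, 128]:
    h = 2.0 * np.sin(np.pi / n)
    dist = 1.0 - np.cos(np.pi / n)
    print(n, dist / h**2)   # -> 0.125, the quadratic geometric error
```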
We propose a topology optimisation method for acoustic devices that operate within a certain bandwidth. To achieve this, we define the objective function as the frequency-averaged sound intensity at given observation points, represented by a frequency integral over a given frequency band. It is, however, prohibitively expensive to evaluate such an integral naively by quadrature. We thus estimate the frequency response by Pad\'{e} approximation and integrate the approximated function to obtain the objective function. The corresponding topological derivative is derived with the help of the adjoint variable method and the chain rule. We show that the objective and its sensitivity can be evaluated semi-analytically. We present efficient numerical procedures to compute them and incorporate them into a topology optimisation based on the level-set method. We confirm the validity and effectiveness of the present method through several numerical examples.
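A minimal sketch of the Padé-based band integration follows; the model response $f(\omega) = e^{\omega}$ and the band $[0, 1]$ are stand-ins for the Taylor coefficients that, in the actual method, would come from the acoustic solver at an expansion frequency.

```python
import math
import numpy as np
from scipy.interpolate import pade
from scipy.integrate import trapezoid

# Taylor coefficients about w = 0 of the model response f(w) = exp(w).
coeffs = [1.0 / math.factorial(k) for k in range(6)]
p, q = pade(coeffs, 2)                   # [3/2] Pade approximant (poly1d pair)

# Frequency-averaged objective over the band [0, 1]: quadrature of the
# cheap rational surrogate replaces repeated full solves.
w = np.linspace(0.0, 1.0, 201)
band_integral = trapezoid(p(w) / q(w), w)
print(band_integral, np.exp(1.0) - 1.0)  # surrogate vs. exact band integral
```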
The Bregman proximal point algorithm (BPPA), as one of the centerpieces of the optimization toolbox, has been witnessing emerging applications. With a simple and easy-to-implement update rule, the algorithm's empirical success is supported by several compelling intuitions, yet rigorous justifications remain largely unexplored. We study the computational properties of BPPA through classification tasks with separable data, and demonstrate provable algorithmic regularization effects associated with BPPA. We show that BPPA attains a non-trivial margin, which closely depends on the condition number of the distance-generating function inducing the Bregman divergence. We further demonstrate that the dependence on the condition number is tight for a class of problems, thus showing the importance of the divergence in affecting the quality of the obtained solutions. In addition, we extend our findings to mirror descent (MD), for which we establish similar connections between the margin and the Bregman divergence. We demonstrate through a concrete example that BPPA/MD converges in direction to the maximal-margin solution with respect to the Mahalanobis distance. Our theoretical findings are among the first to demonstrate the benign learning properties of BPPA/MD, and also provide corroboration for a careful choice of divergence in algorithmic design.
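The directional convergence to a Mahalanobis maximal-margin solution can be illustrated with a small mirror descent experiment; the quadratic distance-generating function $\psi(w) = \frac{1}{2} w^\top M w$, the exponential loss, and the toy data below are assumptions for illustration only.

```python
import numpy as np

# Separable toy data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
y = np.concatenate([np.ones(20), -np.ones(20)])

M = np.diag([1.0, 4.0])        # ill-conditioned Mahalanobis metric
Minv = np.linalg.inv(M)
w, eta = np.zeros(2), 0.1
for _ in range(5000):
    # Gradient of the average exponential loss exp(-y * <w, x>).
    grad = -(y[:, None] * X * np.exp(-y * (X @ w))[:, None]).mean(axis=0)
    w = w - eta * (Minv @ grad)  # mirror descent step under psi

# Normalized margin with respect to ||w||_M = sqrt(w^T M w).
print((y * (X @ w)).min() / np.sqrt(w @ (M @ w)))
```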
Many problems in fluid dynamics are effectively modeled as Stokes flows: slow, viscous flows where the Reynolds number is small. Boundary integral equations are often used to solve these problems, where the fundamental solutions for the fluid velocity are the Stokeslet and the stresslet. One of the main challenges in evaluating the boundary integrals is that the kernels become singular on the surface. A regularization method that eliminates the singularities and reduces the numerical error through correction terms for both the Stokeslet and stresslet integrals was developed in Tlupova and Beale, JCP (2019). In this work we build on that method to introduce a new stresslet regularization that is simpler and yields higher accuracy when evaluated on the surface. Our regularization replaces a seventh-degree polynomial, obtained by solving for two conditions in two unknowns, with a fifth-degree polynomial, obtained by solving for one condition in one unknown. Numerical experiments demonstrate that the new regularization retains the same order of convergence as the regularization of Tlupova and Beale but shows a decreased magnitude of the error.
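The evaluation pattern of a regularized Stokeslet can be sketched as follows; the classical blob of Cortez (2001) is used here as a generic stand-in, since the fifth-degree polynomial regularization introduced in this work is specific to the paper.

```python
import numpy as np

def regularized_stokeslet(x, x0, f, delta, mu=1.0):
    """Velocity at x due to a point force f at x0 with the classical
    regularized Stokeslet blob; the singular kernel is smoothed by the
    regularization parameter delta so it stays finite as x -> x0."""
    r = x - x0
    r2 = r @ r
    d = np.sqrt(r2 + delta**2)
    coef = 1.0 / (8.0 * np.pi * mu)
    return coef * (((r2 + 2.0 * delta**2) / d**3) * f + ((f @ r) / d**3) * r)

# The velocity remains bounded near the force location.
f = np.array([1.0, 0.0, 0.0])
print(regularized_stokeslet(np.array([0.01, 0.0, 0.0]), np.zeros(3), f, 0.05))
```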
Owing to their asymptotic oracle property, non-convex penalties such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) have attracted much attention in high-dimensional data analysis and have been widely used in signal processing, image restoration, matrix estimation, etc. However, their non-convex and non-smooth characteristics make them computationally challenging. Almost all existing algorithms converge only locally, so the proper selection of initial values is crucial. In practice, they are therefore often combined with a warm-starting technique to meet the rigid requirement that the initial value be sufficiently close to the optimal solution of the corresponding problem. In this paper, based on the DC (difference of convex functions) property of the MCP and SCAD penalties, we design a globally convergent two-stage algorithm for high-dimensional least squares linear regression problems. A key idea for making the proposed algorithm efficient is to use the primal dual active set with continuation (PDASC) method, which is equivalent to the semi-smooth Newton (SSN) method, to solve the corresponding sub-problems. Theoretically, we not only prove the global convergence of the proposed algorithm, but also verify that the generated iterative sequence converges to a d-stationary point. In terms of computational performance, extensive experiments on simulated and real data show that our algorithm is superior to the latest SSN method and the classic coordinate descent (CD) algorithm for solving non-convex penalized high-dimensional linear regression problems.
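The DC structure that a two-stage algorithm can exploit is easy to exhibit for MCP; the decomposition below is a standard splitting (not necessarily the exact one used in the paper), writing the penalty as the difference of the convex function $\lambda |t|$ and a convex function $h$.

```python
import numpy as np

def mcp(t, lam, gamma):
    """Minimax concave penalty (MCP)."""
    a = np.abs(t)
    return np.where(a <= gamma * lam, lam * a - t**2 / (2 * gamma),
                    gamma * lam**2 / 2)

def mcp_dc_parts(t, lam, gamma):
    """Standard DC split mcp = g - h with g(t) = lam*|t| and h convex;
    a DC algorithm linearizes h at each outer iteration."""
    a = np.abs(t)
    g = lam * a
    h = np.where(a <= gamma * lam, t**2 / (2 * gamma),
                 lam * a - gamma * lam**2 / 2)
    return g, h

t = np.linspace(-3, 3, 7)
g, h = mcp_dc_parts(t, lam=1.0, gamma=2.0)
print(np.allclose(mcp(t, 1.0, 2.0), g - h))  # True: valid DC decomposition
```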
A space-time Trefftz discontinuous Galerkin method for the Schr\"odinger equation with piecewise-constant potential is proposed and analyzed. Following the spirit of Trefftz methods, trial and test spaces are spanned by non-polynomial complex wave functions that satisfy the Schr\"odinger equation locally on each element of the space-time mesh. This allows for a significant reduction in the number of degrees of freedom in comparison with full polynomial spaces. We prove well-posedness and stability of the method and, for the one- and two-dimensional cases, optimal, high-order $h$-convergence error estimates in a skeleton norm. Numerical experiments validate the theoretical results.
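Membership of a plane wave in a local Trefftz space is a one-line symbolic check. The sketch below assumes the convention $i u_t + u_{xx} - V u = 0$ with constant potential $V$ on an element (the paper's scaling may differ), so that $u = e^{i(kx - \omega t)}$ with $\omega = k^2 + V$ solves the equation exactly.

```python
import sympy as sp

x, t = sp.symbols("x t", real=True)
k, V = sp.symbols("k V", real=True)

# Plane-wave Trefftz function with dispersion relation omega = k^2 + V.
omega = k**2 + V
u = sp.exp(sp.I * (k * x - omega * t))
residual = sp.I * sp.diff(u, t) + sp.diff(u, x, 2) - V * u
print(sp.simplify(residual))  # 0: u satisfies the equation on the element
```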
We propose a new fast randomized algorithm for the interpolative decomposition of matrices that utilizes CountSketch. We then extend this approach to the tensor interpolative decomposition problem introduced by Biagioni et al. (J. Comput. Phys. 281, pp. 116-134, 2015). Theoretical performance guarantees are provided for both the matrix and tensor settings. Numerical experiments on both synthetic and real data demonstrate that our algorithms maintain the accuracy of competing methods while running in less time, achieving at least an order of magnitude speed-up on large matrices and tensors.
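The core sketching step can be written in a few lines. In the sketch below, the sketch size, the rank, and the follow-up column-pivoted QR used to pick skeleton columns are illustrative choices, not necessarily those of the proposed algorithm; the CountSketch itself is applied in $O(\mathrm{nnz}(A))$ time.

```python
import numpy as np
from scipy.linalg import qr

def countsketch(A, m, rng):
    """Apply a CountSketch S (m x n) to A's rows: each row of A gets a
    random sign and is added to one random row of the sketch, forming
    S @ A in O(nnz(A)) time."""
    n = A.shape[0]
    buckets = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, buckets, signs[:, None] * A)
    return SA

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 50)) @ rng.standard_normal((50, 200))  # rank 50
SA = countsketch(A, 120, rng)              # small sketch of the rows
_, _, piv = qr(SA, pivoting=True)          # column-pivoted QR on the sketch
skeleton = A[:, piv[:50]]                  # candidate ID skeleton columns
print(skeleton.shape)
```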
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, achieves a better sample complexity for recovering the causal order (or topological order) than separate estimation. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis also establishes consistent recovery of the union support of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer coincides with the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
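The coupling term of the joint estimator can be illustrated as follows; the sketch assumes the $l_1/l_2$ penalty acts edgewise across the $K$ weighted adjacency matrices (a minimal reading of the regularizer, not the full estimator).

```python
import numpy as np

def group_penalty(Bs, lam):
    """l1/l2 penalty coupling K weighted adjacency matrices:
    lam * sum_{i != j} ||(B1_ij, ..., BK_ij)||_2, encouraging a shared
    union support across the K DAGs."""
    stacked = np.stack(Bs)                   # shape (K, p, p)
    norms = np.linalg.norm(stacked, axis=0)  # edgewise l2 across tasks
    np.fill_diagonal(norms, 0.0)             # no self-loops
    return lam * norms.sum()

K, p = 3, 5
rng = np.random.default_rng(1)
Bs = [np.triu(rng.standard_normal((p, p)), k=1) for _ in range(K)]
print(group_penalty(Bs, lam=0.1))
```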
This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial once the causal effects are correctly accounted for.
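Since the abstract does not specify the proposed estimators, a standard confounding-adjusted estimator (inverse-propensity weighting) serves here only to illustrate how large the naive bias can be; the data-generating process below is purely synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                               # confounder (credit quality)
treat = (rng.normal(size=n) + u > 0).astype(float)   # lender's decision
repay = 1.0 + 0.5 * treat + u + rng.normal(scale=0.1, size=n)  # true effect 0.5

# Naive difference in means is confounded by u.
naive = repay[treat == 1].mean() - repay[treat == 0].mean()

# Inverse-propensity weighting with an estimated propensity score.
e = LogisticRegression().fit(u[:, None], treat).predict_proba(u[:, None])[:, 1]
ipw = np.mean(treat * repay / e - (1 - treat) * repay / (1 - e))
print(naive, ipw)  # naive overstates the 0.5 effect; IPW roughly recovers it
```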