A posteriori error control methods now form an important part of numerical analysis. Their purpose is to obtain computable error estimates in various norms, together with error indicators that show the distribution of global and local errors of a particular numerical solution. In this paper, we focus on a particular class of domain decomposition methods (DDM), which are among the most efficient numerical methods for solving PDEs. We adapt functional-type a posteriori error estimates and construct a special form of the error majorant that allows efficient error control of approximations computed via these DDM using only subdomain-wise computations. The resulting guaranteed error bounds use an extended set of admissible fluxes that arise naturally in DDM.
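For orientation, the prototypical functional majorant of this type, stated here for the Poisson model problem $-\Delta u = f$ with $u = 0$ on $\partial\Omega$ (this is the classical estimate such constructions build on; the DDM-adapted majorant works with an enlarged set of admissible fluxes $y$), reads
$$\|\nabla(u - v)\| \;\le\; \|y - \nabla v\| + C_\Omega \,\|f + \operatorname{div} y\| \qquad \text{for all } y \in H(\operatorname{div},\Omega),$$
where $v \in H^1_0(\Omega)$ is any conforming approximation and $C_\Omega$ is the Friedrichs constant of the domain; the bound is guaranteed for every admissible flux $y$, and minimizing over $y$ sharpens it.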
In Trefftz discontinuous Galerkin methods a partial differential equation is discretized using discontinuous shape functions that are chosen to be elementwise in the kernel of the corresponding differential operator. We propose a new variant, the embedded Trefftz discontinuous Galerkin method, which is the Galerkin projection of an underlying discontinuous Galerkin method onto a subspace of Trefftz type. The subspace can be described in a very general way, and no Trefftz functions have to be calculated explicitly to obtain it; instead, the corresponding embedding operator is constructed. In the simplest cases the method recovers established Trefftz discontinuous Galerkin methods, but the approach extends conveniently to general cases, including inhomogeneous sources and differential operators with non-constant coefficients. We introduce the method, discuss implementation aspects, and explore its potential on a set of standard PDE problems. Compared to standard discontinuous Galerkin methods we observe a substantial reduction in the number of globally coupled unknowns in all considered cases, which reduces the corresponding computing time significantly. Moreover, for the Helmholtz problem we even observe improved accuracy, similar to Trefftz discontinuous Galerkin methods based on plane waves.
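As a minimal illustration of the embedding idea (a sketch under the assumption that one has assembled an elementwise matrix `W` representing the action of the differential operator on the local DG basis; not the authors' implementation):

```python
import numpy as np

def trefftz_embedding(W, tol=1e-12):
    """Columns of the returned matrix T span the discrete Trefftz space:
    the (numerical) nullspace of the local operator matrix W, computed
    via an SVD, so that W @ T is numerically zero."""
    U, s, Vt = np.linalg.svd(W)
    rank = int(np.sum(s > tol * s[0]))
    return Vt[rank:].T          # shape (n_trial, n_trial - rank)

# The globally coupled system then uses the reduced element matrices
# T.T @ A_K @ T instead of A_K, shrinking the number of coupled unknowns.
```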
We develop a family of cut finite element methods (CutFEM) of different orders based on the discontinuous Galerkin framework, for hyperbolic conservation laws with stationary interfaces in both one and two space dimensions, and for moving interfaces in one space dimension. Interface conditions are imposed weakly, in such a way that both conservation and stability are ensured. A CutFEM with discontinuous elements in space is developed and coupled to standard explicit time-stepping schemes for linear advection problems and the acoustic wave problem with stationary interfaces. In the case of moving interfaces, we propose a space-time CutFEM based on discontinuous elements in both space and time for linear advection problems. We show that the proposed CutFEM are conservative and energy stable. For the stationary interface case, an a priori error estimate is proven. Numerical computations in both one and two space dimensions support the analysis and, in addition, demonstrate that the proposed methods have the expected accuracy.
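To see the mechanism behind weak interface coupling in its simplest form (a standard one-dimensional illustration, not the paper's CutFEM formulation): for $u_t + a u_x = 0$ with $a > 0$, coupling two elements across an interface through the upwind numerical flux $\hat f(u^-, u^+) = a\,u^-$ makes the interface fluxes telescope, so the total mass $\int u\,dx$ changes only through the domain boundary (conservation), while the interface contribution to the energy balance is $-\tfrac{a}{2}[u]^2 \le 0$, where $[u] = u^+ - u^-$ denotes the jump (energy stability).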
A numerical algorithm is presented to solve a benchmark problem proposed by Hemker. The algorithm incorporates asymptotic information into the design of appropriate piecewise-uniform Shishkin meshes. Moreover, different coordinate systems are used to handle the different geometries and associated layer structures involved in this problem. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm.
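For reference, a one-dimensional piecewise-uniform Shishkin mesh of the kind used as a building block here can be constructed as follows (a minimal sketch for a single boundary layer at $x = 1$; the benchmark itself requires tensor-product and polar variants, and `sigma`, `beta` are illustrative parameters):

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0, beta=1.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with N intervals (N even),
    resolving a boundary layer at x = 1: half of the mesh intervals are
    placed inside the layer region [1 - tau, 1]."""
    tau = min(0.5, sigma * eps / beta * np.log(N))   # transition parameter
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])
```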
We investigate the problem of approximating the matrix function $f(A)$ by $r(A)$, with $f$ a Markov function, $r$ a rational interpolant of $f$, and $A$ a symmetric Toeplitz matrix. As a first step, we obtain a new upper bound for the relative interpolation error $1-r/f$ on the spectral interval of $A$. By minimizing this upper bound over all interpolation points, we obtain a new, simple and sharp a priori bound for the relative interpolation error. We then consider three different approaches to representing and computing the rational interpolant $r$. Theoretical and numerical evidence is given that each of these methods allows high precision to be achieved for a scalar argument, even in finite precision arithmetic. We finally investigate the problem of efficiently evaluating $r(A)$, where it turns out that the relative error for a matrix argument is only small if we use a partial fraction decomposition for $r$ following Antoulas and Mayo. An important role is played by a new stopping criterion which ensures that the degree of $r$ leading to a small error is found automatically, even in the presence of finite precision arithmetic.
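The partial-fraction evaluation takes the following generic form (a sketch assuming the poles $p_j$ and weights $w_j$ of $r$ are already available; for Toeplitz $A$ the dense solves below would typically be replaced by fast Toeplitz solvers):

```python
import numpy as np

def eval_rational_pfd(A, b, c0, weights, poles):
    """Evaluate r(A) b for r(z) = c0 + sum_j w_j / (z - p_j):
    one shifted linear solve per pole."""
    n = A.shape[0]
    y = c0 * b
    for w, p in zip(weights, poles):
        y = y + w * np.linalg.solve(A - p * np.eye(n), b)
    return y
```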
A priori error bounds have been derived for various balancing-related model reduction methods. The most classical result is a bound for balanced truncation and singular perturbation approximation that applies to asymptotically stable linear time-invariant systems with homogeneous initial conditions. Recently, there have been a few attempts to generalize the balancing-related reduction methods to the case of inhomogeneous initial conditions, but the existing error bounds for these generalizations are quite restrictive. In particular, the initial conditions must be restricted to a low-dimensional subspace that has to be chosen before the reduced model is constructed. In this paper, we propose an estimator that circumvents this hard constraint completely. Our estimator is applicable to a large class of reduction methods, whereas the former results were derived only for certain specific methods. Moreover, our approach yields significantly more effective error estimation, as will also be demonstrated numerically.
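The classical result referred to is the balanced truncation error bound for an asymptotically stable system with $x(0) = 0$: if $\sigma_1 \ge \dots \ge \sigma_n$ are the Hankel singular values and $G_r$ is the reduced model of order $r$, then
$$\|G - G_r\|_{\mathcal{H}_\infty} \;\le\; 2 \sum_{i=r+1}^{n} \sigma_i,$$
a bound which, as discussed above, does not cover inhomogeneous initial conditions.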
Weak lensing mass-mapping is a useful tool for accessing the full distribution of dark matter on the sky, but because of intrinsic galaxy ellipticities and finite fields/missing data, the recovery of dark matter maps constitutes a challenging ill-posed inverse problem. We introduce a novel methodology that allows for efficient sampling of the high-dimensional Bayesian posterior of the weak lensing mass-mapping problem and relies on simulations to define a fully non-Gaussian prior. We demonstrate the accuracy of the method on simulations and then apply it to the mass reconstruction of the HST/ACS COSMOS field. The proposed methodology combines elements of Bayesian statistics, analytic theory, and a recent class of Deep Generative Models based on Neural Score Matching. This approach allows us to do the following: 1) make full use of analytic cosmological theory to constrain the 2pt statistics of the solution; 2) learn from cosmological simulations any differences between this analytic prior and full simulations; 3) obtain samples from the full Bayesian posterior of the problem for robust Uncertainty Quantification. We demonstrate the method on the $\kappa$TNG simulations and find that the posterior mean significantly outperforms previous methods (Kaiser-Squires, Wiener filter, sparsity priors) both in root-mean-square error and in Pearson correlation. We further illustrate the interpretability of the recovered posterior by establishing a close correlation between posterior convergence values and the SNR of clusters artificially introduced into a field. Finally, we apply the method to the reconstruction of the HST/ACS COSMOS field, producing the highest-quality convergence map of this field to date.
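Schematically, posterior sampling with a score-based prior reduces to Langevin-type dynamics of the following kind (a generic sketch, not the authors' exact sampler; `grad_log_lik` and `score_prior` are assumed callables for the analytic data-fidelity gradient and the learned prior score):

```python
import numpy as np

def langevin_posterior_sample(x0, grad_log_lik, score_prior, step, n_steps, rng):
    """Unadjusted Langevin dynamics targeting a posterior whose score is
    the sum of an analytic likelihood gradient and a learned prior score."""
    x = x0.copy()
    for _ in range(n_steps):
        score = grad_log_lik(x) + score_prior(x)
        x = x + step * score + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x
```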
Dyadic data is often encountered when quantities of interest are associated with the edges of a network. As such, it plays an important role in statistics, econometrics and many other data science disciplines. We consider the problem of uniformly estimating a dyadic Lebesgue density function, focusing on nonparametric kernel-based estimators which take the form of U-process-like dyadic empirical processes. We provide uniform point estimation and distributional results for the dyadic kernel density estimator, giving valid and feasible procedures for robust uniform inference. Our main contributions include the minimax-optimal uniform convergence rate of the dyadic kernel density estimator, along with strong approximation results for the associated standardized $t$-process. A consistent variance estimator is introduced in order to obtain analogous results for the Studentized $t$-process, enabling the construction of provably valid and feasible uniform confidence bands for the unknown density function. A crucial feature of U-process-like dyadic empirical processes is that they may be "degenerate" at some or possibly all points in the support of the data, a property that makes our uniform analysis somewhat delicate. Nonetheless, we show formally that our proposed methods for uniform inference remain robust to the potential presence of such unknown degenerate points. For implementation, we discuss uniform inference procedures based on positive semi-definite covariance estimators, mean-squared-error-optimal bandwidth selectors and robust bias-correction methods. We illustrate the finite-sample performance of our robust inference methods in a simulation study. Our technical results concerning strong approximations and maximal inequalities are of potential independent interest.
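The basic estimator under study has the following form (a minimal sketch with an Epanechnikov kernel; `W` is a hypothetical symmetric array of edge-level observations, with the diagonal ignored):

```python
import numpy as np

def dyadic_kde(W, grid, h):
    """Dyadic kernel density estimate at the points in `grid`:
    average of kernels centered at the n(n-1)/2 dyadic observations W[i, j], i < j."""
    n = W.shape[0]
    obs = W[np.triu_indices(n, k=1)]            # the n(n-1)/2 dyadic points
    u = (grid[:, None] - obs[None, :]) / h
    K = 0.75 * np.maximum(1.0 - u**2, 0.0)      # Epanechnikov kernel
    return K.mean(axis=1) / h
```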
This paper deals with robust inference for parametric copula models. Estimation using Canonical Maximum Likelihood can be unstable, especially in the presence of outliers. We propose to use a procedure based on the Maximum Mean Discrepancy (MMD) principle. We derive non-asymptotic oracle inequalities, consistency and asymptotic normality of this new estimator. In particular, the oracle inequality holds without any assumption on the copula family, and can be applied in the presence of outliers or under misspecification. Moreover, in our MMD framework, statistical inference becomes feasible for copula models that have no density with respect to the Lebesgue measure on $[0,1]^d$, such as the Marshall-Olkin copula. A simulation study shows the robustness of our new procedures, especially compared to pseudo-maximum likelihood estimation. An R package implementing the MMD estimator for copula models is available.
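To make the MMD principle concrete, one can compare the pseudo-observations with samples simulated from a candidate copula and minimize the discrepancy over the parameter (a schematic sketch, here with a Gaussian kernel and the Clayton family as a stand-in example; the paper's estimator and its tuning are more elaborate):

```python
import numpy as np

def mmd2(X, Y, bw):
    """Squared MMD (biased V-statistic) between samples X, Y, Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bw**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def sample_clayton(theta, n, d, rng):
    """Clayton copula sampler (theta > 0) via the gamma frailty construction."""
    V = rng.gamma(1.0 / theta, 1.0, size=(n, 1))
    E = rng.exponential(1.0, size=(n, d))
    return (1.0 + E / V) ** (-1.0 / theta)

# Hypothetical use: minimize
#   theta -> mmd2(pseudo_obs, sample_clayton(theta, m, d, rng), bw)
# over a grid of theta values as a simple stand-in for MMD estimation.
```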
We construct a space-time parallel method for solving parabolic partial differential equations by coupling the Parareal algorithm in time with overlapping domain decomposition in space. The goal is to obtain a discretization consisting of "local" problems that can be solved efficiently on parallel computers. However, this introduces significant sources of error that must be evaluated. Reformulating the original Parareal algorithm as a variational method and implementing a finite element discretization in space enables an adjoint-based a posteriori error analysis to be performed. Through an appropriate choice of adjoint problems and residuals, the error analysis distinguishes between errors arising from the temporal and spatial discretizations, as well as between errors arising from incomplete Parareal iterations and incomplete iterations of the domain decomposition solver. We first develop an error analysis for the Parareal method applied to parabolic partial differential equations, and then refine this analysis to the case where the associated spatial problems are solved using overlapping domain decomposition. These constitute our Time Parallel Algorithm (TPA) and Space-Time Parallel Algorithm (STPA), respectively. Numerical experiments demonstrate the accuracy of the estimator for both algorithms and reveal the interactions between distinct components of the error.
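For readers unfamiliar with the temporal component, the plain Parareal iteration $U_{n+1}^{k+1} = \mathcal{G}(U_n^{k+1}) + \mathcal{F}(U_n^{k}) - \mathcal{G}(U_n^{k})$ can be sketched as follows (schematic only; `coarse(u, t0, t1)` and `fine(u, t0, t1)` are assumed propagators over one time slice):

```python
import numpy as np

def parareal(u0, T, N, coarse, fine, n_iter):
    """Plain Parareal: a cheap coarse propagator corrected by accurate
    fine propagations that can run in parallel across the N time slices."""
    t = np.linspace(0.0, T, N + 1)
    U = [u0]
    for n in range(N):                       # initial serial coarse sweep
        U.append(coarse(U[-1], t[n], t[n + 1]))
    for _ in range(n_iter):
        F = [fine(U[n], t[n], t[n + 1]) for n in range(N)]       # parallel
        G_old = [coarse(U[n], t[n], t[n + 1]) for n in range(N)]
        U_new = [u0]
        for n in range(N):                   # serial correction sweep
            U_new.append(coarse(U_new[-1], t[n], t[n + 1]) + F[n] - G_old[n])
        U = U_new
    return U
```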
We consider the exploration-exploitation trade-off in reinforcement learning, and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space; it is similar to other well-known methods in the literature, including Q-learning, soft Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
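The Bellman equation in question is of log-sum-exp ("soft") type; a tabular sketch is given below (schematic, with the exploration bonus assumed to be already folded into `R`; the bonus computation and the exact K-learning recursion are in the paper):

```python
import numpy as np

def soft_value_iteration(R, P, tau, gamma=0.95, n_iter=500):
    """Soft Bellman iteration of the kind underlying K-learning:
    R[s, a] rewards (bonus included), P[s, a, s'] transition
    probabilities, tau the risk-seeking / temperature parameter."""
    K = np.zeros_like(R)
    for _ in range(n_iter):
        m = K.max(axis=1)                    # stabilized log-sum-exp
        V = m + tau * np.log(np.exp((K - m[:, None]) / tau).sum(axis=1))
        K = R + gamma * P @ V                # (S, A, S') @ (S,) -> (S, A)
    logits = (K - K.max(axis=1, keepdims=True)) / tau
    policy = np.exp(logits)
    policy /= policy.sum(axis=1, keepdims=True)   # Boltzmann policy
    return K, policy
```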