An a posteriori error estimator based on an equilibrated flux reconstruction is proposed for defeaturing problems in the context of finite element discretizations. Defeaturing consists in simplifying a geometry by removing features that are considered irrelevant for the approximation of the solution of a given PDE. In this work, the focus is on the Poisson equation with Neumann boundary conditions on the feature boundary. The estimator accounts for both the so-called defeaturing error and the numerical error committed by approximating the solution on the defeatured domain. Unlike other estimators previously proposed for defeaturing problems, the use of the equilibrated flux reconstruction yields a sharp bound for the numerical component of the error. Furthermore, it does not require the evaluation of the normal trace of the numerical flux on the feature boundary: this makes the estimator well suited to finite element discretizations, in which the normal trace of the numerical flux is typically discontinuous across elements. The reliability of the estimator is proven and verified on several numerical examples. Its capability to identify the most relevant features is also demonstrated, in anticipation of a future application within an adaptive strategy.
This paper considers the problem of manifold functional multiple regression with functional response, time-varying scalar regressors, and a functional error term displaying Long Range Dependence (LRD) in time. Specifically, the error term is given by a manifold multifractionally integrated functional time series (see, e.g., Ovalle-Mu\~noz \& Ruiz-Medina, 2024). The manifold is defined by a connected and compact two-point homogeneous space. The functional regression parameters have support in the manifold. The Generalized Least-Squares (GLS) estimator of the vector functional regression parameter is computed, and its asymptotic properties are analyzed under both correctly specified and misspecified model scenarios. A multiscale residual correlation analysis in the simulation study undertaken illustrates the empirical distributional properties of the errors at different spherical resolution levels.
We investigate pointwise estimation of the function-valued velocity field of a second-order linear SPDE. Based on multiple spatially localised measurements, we construct a weighted augmented MLE and study its convergence properties as the spatial resolution of the observations tends to zero and the number of measurements increases. By imposing H\"older smoothness conditions, we recover the pointwise convergence rate known to be minimax-optimal in the linear regression framework. The optimality of the rate in the current setting is verified by adapting the lower bound ansatz based on the RKHS of local measurements to the nonparametric situation.
We present a study on asymptotically compatible Galerkin discretizations for a class of parametrized nonlinear variational problems. The abstract analytical framework is based on variational convergence, or Gamma-convergence. We demonstrate the broad applicability of the theoretical framework by developing asymptotically compatible finite element discretizations of some representative nonlinear nonlocal variational problems on a bounded domain. These include nonlocal nonlinear problems with classically defined local boundary constraints through heterogeneous localization at the boundary, as well as nonlocal problems posed on parameter-dependent domains.
Spectral deferred corrections (SDC) are a class of iterative methods for the numerical solution of ordinary differential equations. SDC can be interpreted as a Picard iteration to solve a fully implicit collocation problem, preconditioned with a low-order method. It has been widely studied for first-order problems, using explicit, implicit, or implicit-explicit Euler and other low-order methods as preconditioners. For first-order problems, SDC achieves arbitrary order of accuracy and possesses good stability properties. While numerical results for SDC applied to the second-order Lorentz equations exist, no theoretical results are available for SDC applied to second-order problems. We present an analysis of the convergence and stability properties of SDC using velocity-Verlet as the base method for general second-order initial value problems. Our analysis proves that the order of convergence depends on whether the force in the system depends on the velocity. We also demonstrate that the SDC iteration is stable under certain conditions. Finally, we show that SDC can be computationally more efficient than a simple Picard iteration or a fourth-order Runge-Kutta-Nystr\"om method.
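To make the "Picard iteration on the collocation problem" viewpoint concrete, here is a minimal sketch (ours, not the paper's velocity-Verlet SDC): a plain Picard iteration on the integral form of the second-order problem x'' = -x, x(0)=1, x'(0)=0, with the integrals discretized by composite trapezoid quadrature on a uniform grid. SDC would additionally sweep with a low-order method to accelerate this fixed-point iteration.

```python
# Toy Picard iteration for x'' = -x on [0, T]; exact solution is cos(t).
# Illustrative only: uniform nodes and trapezoid quadrature, no SDC sweeps.
import math

T, N = 0.5, 51                       # time horizon and number of nodes
h = T / (N - 1)
ts = [i * h for i in range(N)]

def cumtrap(y):
    """Cumulative trapezoid integral of samples y on the uniform grid."""
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * h * (y[i - 1] + y[i]))
    return out

x = [1.0] * N                        # initial guess: constant initial data
v = [0.0] * N
for _ in range(30):                  # Picard sweeps on the integral equations
    a = [-xi for xi in x]            # force evaluation for x'' = -x
    Iv = cumtrap(a)                  # v(t) = v0 + int_0^t a(x(s)) ds
    Ix = cumtrap(v)                  # x(t) = x0 + int_0^t v(s) ds
    v = [0.0 + Ivi for Ivi in Iv]
    x = [1.0 + Ixi for Ixi in Ix]

# The converged iterate approximates cos(t); accuracy is set by the quadrature.
err = max(abs(xi - math.cos(t)) for xi, t in zip(x, ts))
print(err)
```

The iteration converges geometrically on this short interval; the remaining error is the O(h^2) quadrature error of the trapezoid rule.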
The ultimate goal of any numerical scheme for partial differential equations (PDEs) is to compute an approximation of user-prescribed accuracy in quasi-minimal computational time. To this end, the standard adaptive finite element method (AFEM) algorithmically integrates an inexact solver and nested iterations with discerning stopping criteria balancing the different error components. The analysis ensuring optimal convergence order of AFEM with respect to the overall computational cost critically hinges on the concept of R-linear convergence of a suitable quasi-error quantity. This work tackles several shortcomings of previous approaches by introducing a new proof strategy. First, previous algorithms required several fine-tuned parameters to make the underlying analysis work; a redesign of the standard line of reasoning and the introduction of a summability criterion for R-linear convergence allow us to remove restrictions on those parameters. Second, the usual assumption of a (quasi-)Pythagorean identity is replaced by the generalized notion of quasi-orthogonality from [Feischl, Math. Comp., 91 (2022)]. Importantly, this paves the way towards extending the analysis to general inf-sup stable problems beyond the energy minimization setting. Numerical experiments investigate the choice of the adaptivity parameters.
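The adaptive loop underlying AFEM follows the classical SOLVE - ESTIMATE - MARK - REFINE cycle. The following schematic (ours, far simpler than the paper's setting) runs that cycle on a toy 1D problem: piecewise-linear interpolation of a kinked function, a local midpoint-deviation error indicator, and Doerfler (bulk-chasing) marking with parameter theta.

```python
# Toy adaptive loop: refine a 1D mesh where the local indicator is largest.
# Illustrative only; real AFEM couples this with an (inexact) PDE solver.
import math

f = lambda x: math.sqrt(abs(x))      # kinked function: needs local refinement
nodes = [-1.0, 0.5, 1.0]             # intentionally asymmetric initial mesh
theta = 0.5                          # Doerfler bulk parameter

for _ in range(20):
    # ESTIMATE: midpoint deviation from the linear interpolant on each cell
    eta = [abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))
           for a, b in zip(nodes, nodes[1:])]
    total = sum(eta)
    # MARK: smallest set of cells carrying a theta-fraction of the estimator
    order = sorted(range(len(eta)), key=lambda i: -eta[i])
    marked, acc = set(), 0.0
    for i in order:
        marked.add(i)
        acc += eta[i]
        if acc >= theta * total:
            break
    # REFINE: bisect marked cells
    new_nodes = []
    for i, (a, b) in enumerate(zip(nodes, nodes[1:])):
        new_nodes.append(a)
        if i in marked:
            new_nodes.append(0.5 * (a + b))
    new_nodes.append(nodes[-1])
    nodes = new_nodes

final_eta = [abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))
             for a, b in zip(nodes, nodes[1:])]
print(len(nodes), sum(final_eta))
```

The refinement concentrates nodes near the kink at x = 0, which is the graded-mesh behavior adaptivity is designed to produce.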
In 2012, Chen and Singer introduced the notion of discrete residues for rational functions as a complete obstruction to rational summability. More explicitly, for a given rational function f(x), there exists a rational function g(x) such that f(x) = g(x+1) - g(x) if and only if every discrete residue of f(x) is zero. Discrete residues have many important further applications beyond summability: to creative telescoping problems, thence to the determination of (differential-)algebraic relations among hypergeometric sequences, and subsequently to the computation of (differential) Galois groups of difference equations. However, the discrete residues of a rational function are defined in terms of its complete partial fraction decomposition, which makes their direct computation impractical due to the high complexity of completely factoring arbitrary denominator polynomials into linear factors. We develop a factorization-free algorithm to compute discrete residues of rational functions, relying only on gcd computations and linear algebra.
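For a concrete instance of the summability criterion (ours, not the paper's algorithm): f(x) = 1/(x(x+1)) is rationally summable with certificate g(x) = -1/x, so all of its discrete residues vanish, and the telescoping identity evaluates definite sums in closed form. Exact rational arithmetic makes the check rigorous.

```python
# Verify f(x) = g(x+1) - g(x) for f = 1/(x(x+1)), g = -1/x, using exact
# rational arithmetic, then use telescoping to evaluate a definite sum.
from fractions import Fraction

def f(x):
    # f(x) = 1/(x*(x+1)); rationally summable, all discrete residues are zero
    return Fraction(1, x * (x + 1))

def g(x):
    # summability certificate: g(x) = -1/x
    return Fraction(-1, x)

# Check the telescoping identity pointwise ...
for x in range(1, 50):
    assert f(x) == g(x + 1) - g(x)

# ... and use it to evaluate a definite sum in closed form:
N = 1000
assert sum(f(x) for x in range(1, N + 1)) == g(N + 1) - g(1)
print(g(N + 1) - g(1))  # → 1000/1001
```

By contrast, f(x) = 1/x admits no such g: its discrete residue is nonzero, which is exactly the obstruction the paper's algorithm computes without factoring denominators.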
The marginal structural quantile model (MSQM) provides a unique lens to understand the causal effect of a time-varying treatment on the full distribution of potential outcomes. Under the semiparametric framework, we derive the efficient influence function for the MSQM, from which a new doubly robust estimator is proposed for point estimation and inference. We show that the doubly robust estimator is consistent if either the model for treatment assignment or the model for the potential outcome distributions is correctly specified, and is semiparametric efficient if both models are correct. To implement the doubly robust MSQM estimator, we propose to solve a smoothed estimating equation to facilitate efficient computation of the point and variance estimates. In addition, we develop a confounding function approach to investigate the sensitivity of several MSQM estimators when the sequential ignorability assumption is violated. Extensive simulations are conducted to examine the finite-sample performance of the proposed methods. We apply the proposed methods to the Yale New Haven Health System Electronic Health Record data to study the effect of antihypertensive medications on patients with severe hypertension and assess the robustness of findings to unmeasured baseline and time-varying confounding.
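The smoothing idea can be illustrated on the simplest possible case (a toy of ours, not the MSQM machinery): the non-smooth indicator 1{y <= q} in a quantile estimating equation is replaced by a normal-CDF-smoothed version, which makes the equation differentiable and easy to solve numerically, here by bisection.

```python
# Smoothed estimating equation for a single marginal quantile (toy example).
# The indicator 1{y <= q} is smoothed to Phi((q - y)/h), h a small bandwidth.
import math

def smooth_indicator(z):
    """Smoothed version of 1{z >= 0}: the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def smoothed_quantile(ys, tau, h=0.01):
    """Solve (1/n) * sum_i Phi((q - y_i)/h) = tau for q by bisection."""
    def ee(q):  # smoothed estimating equation, monotone increasing in q
        return sum(smooth_indicator((q - y) / h) for y in ys) / len(ys) - tau
    lo, hi = min(ys) - 1.0, max(ys) + 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ee(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ys = [i / 100 for i in range(1, 101)]   # data: 0.01, 0.02, ..., 1.00
q = smoothed_quantile(ys, 0.5)
print(q)                                # close to the sample median, ~0.505
```

Because the smoothed equation is differentiable in q, Newton-type solvers and standard sandwich variance formulas apply; this is the computational payoff of smoothing.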
In this work, an efficient blackbox-type multigrid method is proposed for solving multipoint flux approximations of the Darcy problem on logically rectangular grids. The approach is based on a cell-centered multigrid algorithm, which combines a piecewise constant interpolation and the restriction operator by Wesseling/Khalil with a line-wise relaxation procedure. A local Fourier analysis is performed for the case of a Cartesian uniform grid. The method shows robust convergence for different full-tensor coefficient problems and on several rough quadrilateral grids.
We address the problem of constructing approximations based on orthogonal polynomials that preserve an arbitrary set of moments of a given function without losing the spectral convergence property. To this end, we compute the constrained polynomial of best approximation for a generic basis of orthogonal polynomials. The construction is entirely general and allows us to derive structure-preserving numerical methods for partial differential equations that require the conservation of some moments of the solution, typically representing relevant physical quantities of the problem. These properties are essential to capture with high accuracy the long-time behavior of the solution. We illustrate, with the aid of several numerical applications to Fokker-Planck equations, the generality and performance of the present approach.
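A discrete analogue of moment-constrained best approximation (a toy of ours, not the paper's construction) can be computed with a Lagrange multiplier: fit a degree-2 polynomial to samples of a function in the least-squares sense while exactly preserving one moment that the polynomial space does not preserve automatically (here, the discrete third moment).

```python
# Least-squares degree-2 fit of exp(x) on a grid, constrained to preserve
# the discrete moment sum_j x_j^3 * p(x_j) exactly (Lagrange multiplier).
import math

xs = [-1 + 2 * j / 100 for j in range(101)]   # uniform grid on [-1, 1]
f = [math.exp(x) for x in xs]

basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]
w = lambda x: x ** 3                          # moment weight outside the span

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n + 1):
                A[r][c] -= m * A[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (A[k][n] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

# KKT system: minimize sum_j (p(x_j)-f_j)^2  s.t.  sum_j w(x_j) p(x_j) = target
G = [[sum(bi(x) * bj(x) for x in xs) for bj in basis] for bi in basis]
rhs = [sum(bi(x) * fx for x, fx in zip(xs, f)) for bi in basis]
c = [sum(bi(x) * w(x) for x in xs) for bi in basis]
m_target = sum(w(x) * fx for x, fx in zip(xs, f))

KKT = [G[i] + [0.5 * c[i]] for i in range(3)] + [c + [0.0]]
coef = solve(KKT, rhs + [m_target])[:3]

p = lambda x: sum(a * b(x) for a, b in zip(coef, basis))
moment = sum(w(x) * p(x) for x in xs)
print(abs(moment - m_target))                 # constraint met to round-off
```

Without the constraint, the plain least-squares fit would not reproduce this moment, since x^3 lies outside the degree-2 space; the multiplier enforces it at the cost of a slightly larger residual, mirroring the trade-off the paper controls while retaining spectral accuracy.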
A posteriori error estimates are an important tool to bound discretization errors in terms of computable quantities avoiding regularity conditions that are often difficult to establish. For non-linear and non-differentiable problems, problems involving jumping coefficients, and finite element methods using anisotropic triangulations, such estimates often involve large factors, leading to sub-optimal error estimates. By making use of convex duality arguments, exact and explicit error representations are derived that avoid such effects.