In general, high-order splitting methods suffer from an order reduction phenomenon when applied to the time integration of partial differential equations with non-periodic boundary conditions. In the last decade, several modifications have been introduced to protect the second-order Strang splitting method from this phenomenon. In this article, inspired by these recent corrector techniques, we introduce a third-order splitting method for a class of semilinear parabolic problems that avoids order reduction in the context of non-periodic boundary conditions. We prove third-order convergence of the method in a simplified linear setting and confirm the result by numerical experiments. Moreover, we show numerically that the high-order convergence persists for a fourth-order variant of the splitting method, as well as for a nonlinear source term.
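For context, the classical second-order Strang splitting on which such correctors build alternates subflows of the linear and nonlinear parts. Below is a minimal sketch for a semidiscrete system $u' = Au + g(u)$ with a small dense matrix $A$; the explicit Euler nonlinear substep is a placeholder that would need to be replaced by an exact or sufficiently accurate subflow to retain full order.

```python
from scipy.linalg import expm

def strang_step(u, A, g, dt):
    """One Strang splitting step for u' = A u + g(u):
    nonlinear half-step, full linear step, nonlinear half-step."""
    u = u + 0.5 * dt * g(u)   # nonlinear half-step (placeholder explicit Euler)
    u = expm(dt * A) @ u      # exact linear flow via the matrix exponential
    u = u + 0.5 * dt * g(u)   # nonlinear half-step
    return u
```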
We present a new high-order accurate spectral element solution to the two-dimensional scalar Poisson equation subject to a general Robin boundary condition. The solution is based on a simplified version of the shifted boundary method employing a continuous arbitrary-order $hp$-Galerkin spectral element method as the numerical discretization procedure. The simplification relies on a polynomial correction to avoid explicitly evaluating the high-order partial derivatives from the Taylor series expansion that have traditionally been used within the shifted boundary method. In this setting, we apply an extrapolation and novel interpolation approach to project the basis functions from the true domain onto the approximate surrogate domain. The resulting method naturally incorporates curved geometrical features of the domain, overcomes complex and cumbersome mesh generation, and avoids small-cut-cell problems. Dirichlet, Neumann, and general Robin boundary conditions are enforced weakly through (i) a generalized Nitsche's method and (ii) a generalized Aubin's method. For this, a consistent asymptotic-preserving formulation of the embedded Robin formulations is presented. We present several numerical experiments and an analysis of the algorithmic properties of the different weak formulations. These include convergence studies under polynomial increase ($p$) of the basis functions and mesh refinement ($h$), together with matrix conditioning, to highlight the spectral and algebraic convergence features, respectively. This is done to assess the influence of errors across variational formulations, polynomial order, mesh size, and mappings between the true and surrogate boundaries.
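To fix ideas, a standard fitted-boundary Nitsche formulation for the Dirichlet problem $-\Delta u = f$ in $\Omega$, $u = g$ on $\partial\Omega$ (the basic ingredient that the shifted boundary method transfers to a surrogate boundary and that the paper generalizes to Robin data) reads: find $u_h \in V_h$ such that, for all $v_h \in V_h$,
$$ (\nabla u_h, \nabla v_h)_{\Omega} - \langle \partial_n u_h, v_h \rangle_{\partial\Omega} - \langle u_h, \partial_n v_h \rangle_{\partial\Omega} + \frac{\gamma}{h} \langle u_h, v_h \rangle_{\partial\Omega} = (f, v_h)_{\Omega} - \langle g, \partial_n v_h \rangle_{\partial\Omega} + \frac{\gamma}{h} \langle g, v_h \rangle_{\partial\Omega}, $$
with a sufficiently large penalty parameter $\gamma > 0$; the shifted variants replace $\partial\Omega$ by the surrogate boundary and correct the boundary data accordingly.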
For problems of time-harmonic scattering by rational polygonal obstacles, embedding formulae express the far-field pattern induced by any incident plane wave in terms of the far-field patterns for a relatively small (frequency-independent) set of canonical incident angles. Although these remarkable formulae are exact in theory, here we demonstrate that (i) they are highly sensitive to numerical errors in practice, and (ii) direct calculation of the coefficients in these formulae may be impossible for particular sets of canonical incident angles, even in exact arithmetic. Only by overcoming these practical issues can embedding formulae provide a highly efficient approach to computing the far-field pattern induced by a large number of incident angles. Here we propose solutions for problems (i) and (ii), backed up by theory and numerical experiments. Problem (i) is solved using techniques from computational complex analysis: we reformulate the embedding formula as a complex contour integral and prove that this is much less sensitive to numerical errors. In practice, this contour integral can be efficiently evaluated by residue calculus. Problem (ii) is addressed using techniques from numerical linear algebra: we oversample, considering more canonical incident angles than are necessary, thus expanding the space of valid coefficient vectors. The coefficient vectors can then be selected using either a least squares approach or column subset selection.
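As an illustration of the second remedy, column subset selection is commonly performed with a column-pivoted QR factorization. The sketch below is a generic heuristic of this kind, not the paper's specific procedure; the matrix `A` stands in for the oversampled system relating the canonical far-field patterns to the coefficients.

```python
import numpy as np
from scipy.linalg import qr

def select_columns(A, k):
    """Pick k well-conditioned columns of A via column-pivoted QR,
    a standard column subset selection heuristic."""
    _, _, piv = qr(A, mode='economic', pivoting=True)
    return piv[:k]

def min_norm_coefficients(A, b):
    """Alternative: keep every oversampled column and return the
    minimum-norm least-squares coefficient vector."""
    return np.linalg.lstsq(A, b, rcond=None)[0]
```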
The $n$-vehicle exploration problem (NVEP) is a combinatorial optimization problem that seeks an optimal permutation of a fleet so as to maximize the distance traveled by the last vehicle. The objective function of NVEP has a fractional form, and the computational complexity of the general case has remained open. We show that Hamiltonian Path $\leq_P$ NVEP, thereby proving that NVEP is NP-complete.
This paper presents two methods for approximating a proper subset of the entries of a Hessian using only function evaluations. These approximations are obtained with techniques called the \emph{generalized simplex Hessian} and the \emph{generalized centered simplex Hessian}. We show how to choose the matrices of directions involved in these two techniques depending on which entries of the Hessian are of interest. We discuss the number of function evaluations required in each case and develop a general formula to approximate all order-$P$ partial derivatives. Since the methods discussed in this paper require only function evaluations, they are well suited for use in derivative-free optimization methods.
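For intuition, a single Hessian entry can already be approximated from four function evaluations with a textbook centered-difference stencil; this is not the generalized (centered) simplex Hessian itself, which chooses its matrices of directions far more flexibly, but it shows the flavor of derivative-free second-order information.

```python
import numpy as np

def hessian_entry(f, x, i, j, h=1e-4):
    """Centered-difference approximation of entry (i, j) of the Hessian
    of f at x, using only four function evaluations."""
    ei = np.zeros_like(x); ei[i] = h
    ej = np.zeros_like(x); ej[j] = h
    return (f(x + ei + ej) - f(x + ei - ej)
            - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
```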
Infinite-dimensional, holomorphic functions have been studied in detail over the last several decades, due to their relevance to parametric differential equations and computational uncertainty quantification. The approximation of such functions from finitely many samples is of particular interest, due to the practical importance of constructing surrogate models for complex mathematical models of physical processes. In a previous work [5], we studied the approximation of so-called Banach-valued, $(\boldsymbol{b},\varepsilon)$-holomorphic functions on the infinite-dimensional hypercube $[-1,1]^{\mathbb{N}}$ from $m$ (potentially adaptive) samples. In particular, we derived lower bounds for the adaptive $m$-widths for classes of such functions, which showed that certain algebraic rates of the form $m^{1/2-1/p}$ are the best possible regardless of the sampling-recovery pair. In this work, we continue this investigation by focusing on the practical case where the samples are pointwise evaluations drawn independently and identically from a probability measure. Specifically, for Hilbert-valued $(\boldsymbol{b},\varepsilon)$-holomorphic functions, we show that the same rates can be achieved (up to a small polylogarithmic or algebraic factor) for essentially arbitrary tensor-product Jacobi (ultraspherical) measures. Our reconstruction maps are based on least squares and compressed sensing procedures using the corresponding orthonormal Jacobi polynomials. In doing so, we strengthen and generalize past work that derived weaker nonuniform guarantees for the uniform and Chebyshev measures (and corresponding polynomials) only. We also extend various best $s$-term polynomial approximation error bounds to arbitrary Jacobi polynomial expansions. Overall, we demonstrate that i.i.d.\ pointwise samples are near-optimal for the recovery of infinite-dimensional, holomorphic functions.
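A minimal one-dimensional analogue of the least-squares procedure, assuming the Chebyshev (arcsine) measure and a placeholder analytic target; the paper's setting is infinite-dimensional and Hilbert-valued, which this sketch does not capture.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander

rng = np.random.default_rng(0)
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # placeholder analytic target

m, n = 200, 20                      # m i.i.d. samples, n polynomial modes
x = np.cos(np.pi * rng.random(m))   # i.i.d. draws from the Chebyshev measure
A = chebvander(x, n - 1)            # Vandermonde matrix in Chebyshev polynomials
coef, *_ = np.linalg.lstsq(A, f(x), rcond=None)   # least-squares coefficients
```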
The prevailing statistical approach to analyzing persistence diagrams is concerned with filtering out topological noise. In this paper, we adopt a different viewpoint and aim at estimating the actual distribution of a random persistence diagram, which captures both topological signal and noise. To that end, Chazal and Divol (2019) proved that, under general conditions, the expected value of a random persistence diagram is a measure admitting a Lebesgue density, called the persistence intensity function. In this paper, we are concerned with estimating the persistence intensity function and a novel, normalized version of it -- called the persistence density function. We present a class of kernel-based estimators based on an i.i.d. sample of persistence diagrams and derive estimation rates in the supremum norm. As a direct corollary, we obtain uniform consistency rates for estimating linear representations of persistence diagrams, including Betti numbers and persistence surfaces. Interestingly, the persistence density function delivers stronger statistical guarantees.
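A minimal sketch of a kernel estimator of this kind, averaging an isotropic Gaussian kernel over the points of $n$ i.i.d. persistence diagrams; this is a generic kernel density construction for illustration, not the paper's exact estimator or bandwidth choice.

```python
import numpy as np

def intensity_estimate(diagrams, grid, bandwidth=0.1):
    """Kernel estimate of the persistence intensity function.

    diagrams : list of (k_i, 2) arrays of (birth, death) points
    grid     : (G, 2) array of evaluation locations
    """
    total = np.zeros(len(grid))
    for D in diagrams:
        sq_dists = ((grid[:, None, :] - D[None, :, :]) ** 2).sum(-1)
        total += np.exp(-sq_dists / (2 * bandwidth**2)).sum(1)
    # normalize each Gaussian kernel and average over the n diagrams
    return total / (2 * np.pi * bandwidth**2 * len(diagrams))
```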
In contingency table analysis, one is interested in testing whether a model of interest (e.g., the independence or symmetry model) holds using goodness-of-fit tests. When the null hypothesis that the model holds is rejected, interest turns to the degree to which the probability structure of the contingency table deviates from the model. Many indexes have been studied to measure the degree of departure, such as the Yule and Cram\'er coefficients for the independence model and Tomizawa's symmetry index for the symmetry model. Inference for these indexes is performed using sample proportions, which are estimates of the cell probabilities, but it is well known that their bias and mean square error (MSE) become large without a sufficient number of samples. To address this problem, this study proposes a new estimator for such indexes using Bayesian estimators of the cell probabilities. Assuming a Dirichlet prior for the cell probabilities, we asymptotically evaluate the MSE obtained when plugging the posterior means of the cell probabilities into the index, and we propose an estimator of the index using the Dirichlet hyperparameter that minimizes this value. Numerical experiments show that when the number of samples per cell is small, the proposed method has smaller bias and MSE than other correction methods. We also show that the bias and MSE are smaller than those obtained using the uniform and Jeffreys priors.
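A minimal sketch of the plug-in construction under a symmetric Dirichlet prior: the raw sample proportions are replaced by posterior means before evaluating the index. Here `alpha` is a fixed illustrative hyperparameter, whereas the paper selects it to minimize the asymptotic MSE.

```python
import numpy as np

def plugin_index(counts, index_fn, alpha=0.5):
    """Evaluate a contingency-table index at the Dirichlet posterior
    means (counts + alpha, normalized) instead of raw proportions."""
    counts = np.asarray(counts, dtype=float)
    post_mean = (counts + alpha) / (counts.sum() + alpha * counts.size)
    return index_fn(post_mean)
```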
The proximal Galerkin finite element method is a high-order, low-iteration-complexity, nonlinear numerical method that preserves the geometric and algebraic structure of pointwise bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free boundary problems, enforce discrete maximum principles, and develop a scalable, mesh-independent algorithm for optimal design problems with pointwise bound constraints. This paper also provides a derivation of the latent variable proximal point (LVPP) algorithm, an unconditionally stable alternative to the interior point method. LVPP is an infinite-dimensional optimization algorithm that may be viewed as having an adaptive barrier function that is updated with a new informative prior at each (outer-loop) optimization iteration. One of its main benefits is witnessed when analyzing the classical obstacle problem. Therein, we find that the original variational inequality can be replaced by a sequence of partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout this work, we arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and certain infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization. The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis.
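To make the proximal structure concrete: for the obstacle problem of minimizing $J(v) = \tfrac{1}{2}\|\nabla v\|_{L^2(\Omega)}^2 - (f,v)$ over $K = \{v \in H_0^1(\Omega) : v \geq \phi\}$, the abstract proximal point iteration reads
$$ u^{k+1} = \operatorname{arg\,min}_{v \in K} \; J(v) + \frac{1}{2\alpha_k}\|v - u^k\|^2, $$
and LVPP removes the explicit constraint through a bound-preserving latent-variable substitution (e.g., $u = \phi + e^{\psi}$), so each iterate is obtained by solving a semilinear PDE in $(u, \psi)$ rather than a variational inequality; the precise formulation and proximal metric are developed in the paper.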
Opportunistic pharmacokinetic (PK) studies have sparse and imbalanced clinical measurement data, and the impact of sample time errors is an important concern when seeking accurate estimates of treatment response. We evaluated an approximate Bayesian model for individualized pharmacokinetics in the presence of time recording errors (TREs), considering both short and long infusion dosing patterns. We found that the long infusion schedule generally yielded lower bias in estimates of the pharmacodynamic (PD) endpoint than the short infusion schedule. We investigated three design strategies for their ability to mitigate the impact of TREs: (i) shifting blood draws taken during an active infusion to the post-infusion period, (ii) identifying the best next sample time by minimizing bias in the presence of TREs, and (iii) collecting additional information on a subset of patients based on estimate uncertainty or quadrature-estimated variance in the presence of TREs. Generally, the proposed strategies decreased the bias of the PD estimate for the short infusion schedule but had a negligible impact for the long infusion schedule. Dosing regimens with periods of high non-linearity may benefit from design modifications, while more stable concentration-time profiles are generally robust to TREs without design modifications.
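To see why draws during an active infusion are the most vulnerable (the motivation for strategy (i)), consider a textbook one-compartment IV-infusion model; this is purely illustrative and not the paper's model or parameter values. The concentration-time profile is steepest during the infusion, so a small time recording error there translates into a large concentration discrepancy.

```python
import numpy as np

def conc(t, R=100.0, CL=5.0, ke=0.2, t_inf=1.0):
    """One-compartment IV-infusion concentration: rise during the
    infusion of rate R, exponential decay afterwards."""
    t = np.asarray(t, dtype=float)
    rise = (R / CL) * (1.0 - np.exp(-ke * np.minimum(t, t_inf)))
    decay = np.exp(-ke * np.maximum(t - t_inf, 0.0))
    return rise * decay

t_nominal, tre = 0.5, 0.1   # draw mid-infusion; a 6-minute recording error
print(conc(t_nominal), conc(t_nominal + tre))   # steep slope -> visible bias
```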
The monotonicity of the discrete Laplacian implies the discrete maximum principle, which in general does not hold for high-order schemes. The $Q^2$ spectral element method has been proven monotone on a uniform rectangular mesh. In this paper we prove the monotonicity of the $Q^2$ spectral element method on quasi-uniform rectangular meshes under certain mesh constraints. In particular, we propose a relaxed Lorenz condition for proving monotonicity.
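For reference, monotonicity of a discrete operator $A$ means entrywise nonnegativity of $A^{-1}$; below is a small sketch checking this for the classical second-order finite-difference Laplacian (an M-matrix, hence monotone), using a dense inverse since the illustrative matrix is tiny.

```python
import numpy as np

def is_monotone(A, tol=1e-12):
    """A is monotone iff every entry of A^{-1} is nonnegative."""
    return bool(np.all(np.linalg.inv(A) >= -tol))

n, h = 10, 1.0 / 11
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # 1-D Dirichlet Laplacian
print(is_monotone(A))   # True
```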