We propose a novel spectral method for the Allen--Cahn equation on spheres that does not rely on quadrature exactness assumptions. Instead of requiring certain exactness degrees, we employ a restricted isometry relation, based on Marcinkiewicz--Zygmund systems of quadrature rules, to quantify the quadrature error for polynomial integrands. The new method imposes conditions only on the polynomial degree of the numerical solutions to derive the maximum principle and energy stability, and thus differs substantially from existing methods in the literature, which often rely on strict conditions on the time-stepping size, the Lipschitz continuity of the nonlinear term, or the $L^{\infty}$ boundedness of numerical solutions. Moreover, the new method is suitable for long-time simulations because the time-stepping size is independent of the diffusion coefficient in the equation. Inspired by the effective maximum principle recently proposed by Li (Ann. Appl. Math., 37(2): 131--290, 2021), we develop an almost sharp maximum principle that allows a controllable deviation of numerical solutions from the sharp bound. Further, we show that the new method is energy stable and equivalent to the Galerkin method if the quadrature rule exhibits sufficient exactness degrees. In addition, we propose an energy-stable mixed-quadrature scheme that works well even with randomly sampled initial-condition data. We validate the theoretical results on energy stability and the almost sharp maximum principle through numerical experiments on the 2-sphere $\mathbb{S}^2$.
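For reference, a common form of the Allen--Cahn equation on $\mathbb{S}^2$ (the precise scaling used in the paper may differ) is
\[
\partial_t u = \varepsilon^2 \Delta_{\mathbb{S}^2} u - f(u), \qquad f(u) = u^3 - u,
\]
with diffusion coefficient $\varepsilon^2$ and energy $E(u) = \int_{\mathbb{S}^2} \bigl( \tfrac{\varepsilon^2}{2} |\nabla u|^2 + \tfrac{1}{4}(u^2-1)^2 \bigr)\,\mathrm{d}\sigma$; in this scaling the sharp maximum-principle bound is $\|u\|_{L^\infty} \le 1$.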
A Low-rank Spectral Optimization Problem (LSOP) minimizes a linear objective subject to multiple two-sided linear matrix inequalities intersected with a low-rank and spectral constrained domain set. Although solving LSOP is, in general, NP-hard, its partial convexification (i.e., replacing the domain set by its convex hull), termed "LSOP-R," is often tractable and yields a high-quality solution. This motivates us to study the strength of LSOP-R. Specifically, we derive rank bounds for every extreme point of the feasible set of LSOP-R and prove their tightness for domain sets in various matrix spaces. The proposed rank bounds recover two well-known results in the literature from a fresh angle and also allow us to derive sufficient conditions under which the relaxation LSOP-R is equivalent to the original LSOP. To solve LSOP-R effectively, we develop a column generation algorithm with a vector-based convex pricing oracle, coupled with a rank-reduction algorithm, which ensures that the output solution satisfies the theoretical rank bound. Finally, we numerically verify the strength of LSOP-R and the efficacy of the proposed algorithms.
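In schematic form (our notation, inferred from the description above), LSOP reads
\[
\min_{X} \; \langle C, X \rangle \quad \text{s.t.} \quad l_i \le \langle A_i, X \rangle \le u_i, \; i = 1, \dots, m, \qquad X \in \mathcal{D},
\]
where $\mathcal{D}$ collects the low-rank and spectral constraints (e.g., $\operatorname{rank}(X) \le k$ together with eigenvalue bounds), and LSOP-R replaces $\mathcal{D}$ by its convex hull $\operatorname{conv}(\mathcal{D})$.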
We investigate linear dynamical systems of second order. Uncertainty quantification is applied, where physical parameters are replaced by random variables. A stochastic Galerkin method yields a linear dynamical system of second order with high dimensionality. A structure-preserving model order reduction (MOR) in turn produces a low-dimensional linear dynamical system of second order. We construct an associated port-Hamiltonian (pH) formulation of first order for the second-order systems. Each pH system implies a Hamiltonian function describing an internal energy. We examine the properties of the Hamiltonian function for the stochastic Galerkin systems. We present numerical results for a test example, where both the stochastic Galerkin method and the structure-preserving MOR are applied.
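As a template (standard notation, not necessarily that of the paper), a second-order linear dynamical system reads
\[
M \ddot{x}(t) + D \dot{x}(t) + K x(t) = B u(t),
\]
and, when $M$ and $K$ are symmetric positive definite, an associated first-order pH formulation carries the Hamiltonian
\[
H(x, \dot{x}) = \tfrac{1}{2} \dot{x}^\top M \dot{x} + \tfrac{1}{2} x^\top K x,
\]
i.e., the sum of kinetic and potential energy.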
Inferring the means in the multivariate normal model $X \sim N_n(\theta, I)$ with unknown mean vector $\theta=(\theta_1,\ldots,\theta_n)' \in \mathbb{R}^n$ and observed data $X=(X_1,\ldots,X_n)' \in \mathbb{R}^n$ is a challenging task, known as the problem of many normal means (MNMs). This paper tackles two fundamental kinds of MNMs within the framework of Inferential Models (IMs). The first kind, referred to as the {\it classic} kind, takes the model as given, without additional assumptions on $\theta$. The second kind, referred to as the {\it empirical Bayes} kind, assumes that the individual means $\theta_i$'s are drawn independently {\it a priori} from an unknown distribution $G(\cdot)$. The IM formulation for the empirical Bayes kind utilizes numerical deconvolution, enabling prior-free probabilistic inference with over-parameterization for $G(\cdot)$. The IM formulation for the classic kind, on the other hand, utilizes a latent random permutation, providing a novel approach for reasoning with uncertainty and a deeper understanding. For uncertainty quantification within the familiar frequentist framework, the IM method of maximum plausibility is used for point estimation. Conservative interval estimation is obtained based on plausibility, using a Monte Carlo-based adaptive adjustment approach to construct shorter confidence intervals with targeted coverage. These methods are demonstrated through simulation studies and a real-data example. The numerical results show that the proposed methods for point estimation outperform traditional James--Stein and Efron's $g$-modeling in terms of mean squared error, and that the adaptive intervals are satisfactory in both coverage and efficiency. The paper concludes with suggestions for future developments and extensions of the proposed methods.
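For context, the empirical Bayes kind is the standard deconvolution setup
\[
X_i \mid \theta_i \sim N(\theta_i, 1), \qquad \theta_i \overset{\text{iid}}{\sim} G, \qquad i = 1, \ldots, n,
\]
so the $X_i$'s are marginally drawn from the convolution $\int \phi(x - \theta)\, \mathrm{d}G(\theta)$, and deconvolution recovers $G$ from the observed $X_i$'s. The James--Stein benchmark mentioned above is the shrinkage estimator $\hat{\theta}^{\mathrm{JS}} = \bigl(1 - (n-2)/\|X\|^2\bigr) X$.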
We examine the problem of variance components testing in general mixed-effects models using the likelihood ratio test. We account for the presence of nuisance parameters, i.e., the fact that some untested variances might also be equal to zero. Two main issues arise in this context, leading to a non-regular setting. First, under the null hypothesis, the true parameter value lies on the boundary of the parameter space; moreover, due to the presence of nuisance parameters, the exact location of these boundary points is not known, which prevents the use of the classical asymptotic theory of maximum likelihood estimation. Second, in the specific context of nonlinear mixed-effects models, the Fisher information matrix is singular at the true parameter value. We address these two points by proposing a shrunken parametric bootstrap procedure, which is straightforward to apply even for nonlinear models. We show that the procedure is consistent, solving both the boundary and the singularity issues, and we provide a verifiable criterion for the applicability of our theoretical results. We show through a simulation study that, compared to the asymptotic approach, our procedure has better small-sample performance and is more robust to the presence of nuisance parameters. A real-data application is also provided.
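As a reminder of why the boundary matters: in the simplest case with a single tested variance component and no nuisance parameters, it is a classical result that the likelihood ratio statistic for $H_0: \sigma^2 = 0$ is asymptotically not $\chi^2_1$ but the mixture
\[
\tfrac{1}{2}\chi^2_0 + \tfrac{1}{2}\chi^2_1,
\]
i.e., a point mass at zero with probability $1/2$; the unknown boundary location and the singular Fisher information matrix described above make even such closed-form mixture representations unavailable.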
In epidemiological studies, the capture-recapture (CRC) method is a powerful tool that can be used to estimate the number of diseased cases, or potentially disease prevalence, based on data from overlapping surveillance systems. Estimators derived from log-linear models are widely applied by epidemiologists when analyzing CRC data. The popularity of the log-linear model framework is largely due to its accessibility and the fact that interaction terms can allow for certain types of dependency among data streams. In this work, we shed new light on significant pitfalls associated with the log-linear model framework in the context of CRC, using real-data examples and simulation studies. First, we demonstrate that the log-linear model paradigm is highly exclusionary: that is, it can exclude, by design, many possible estimates that are potentially consistent with the observed data. Second, we clarify the ways in which commonly used model selection metrics (e.g., information criteria) are fundamentally misleading when used to select a best model in this setting. By focusing attention on these important cautionary points, and on the fundamentally untestable dependency assumption made when fitting a log-linear model to CRC data, we hope to improve the quality and transparency of subsequent surveillance-based CRC estimates of case counts.
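As a minimal illustration (standard notation, not specific to the paper): with two surveillance streams, let $n_{11}$, $n_{10}$, and $n_{01}$ denote the observed counts of cases captured by both streams, by the first only, and by the second only; the count $n_{00}$ of cases missed by both is unobservable and must be inferred. Under the log-linear model with no interaction term (i.e., assuming independent streams), the fitted $\hat{n}_{00} = n_{10} n_{01} / n_{11}$ yields the classical Lincoln--Petersen estimator
\[
\hat{N} = \frac{(n_{11} + n_{10})(n_{11} + n_{01})}{n_{11}},
\]
and the dependency assumption baked into this choice cannot be tested from the observed cells alone.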
This paper explores variants of the subspace iteration algorithm for computing approximate invariant subspaces. The standard subspace iteration approach is revisited, and new variants that exploit gradient-type techniques combined with a Grassmann manifold viewpoint are developed. A gradient method as well as a conjugate gradient technique are described. Convergence of the gradient-based algorithm is analyzed, and a few numerical experiments are reported, indicating that the proposed algorithms are sometimes superior to a standard Chebyshev-based subspace iteration when compared in terms of the number of matrix-vector products, while not requiring the estimation of optimal parameters. An important ingredient in achieving this performance is an accurate and efficient implementation of an exact line search. In addition, new convergence proofs are presented for the non-accelerated gradient method, including locally exponential convergence when started in a $\mathcal{O}(\sqrt{\delta})$ neighbourhood of the dominant subspace with spectral gap $\delta$.
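For orientation, here is a minimal NumPy sketch of the standard subspace iteration that the paper takes as its baseline; the Grassmannian gradient and conjugate gradient variants, as well as the exact line search, are the paper's contributions and are not reproduced here.

```python
import numpy as np

def subspace_iteration(A, p, tol=1e-8, max_iter=500, seed=0):
    """Classical orthogonal (subspace) iteration: approximates the dominant
    p-dimensional invariant subspace of a symmetric matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))  # random orthonormal start
    for _ in range(max_iter):
        Z = A @ Q                    # p matrix-vector products per sweep
        Q_new, _ = np.linalg.qr(Z)   # re-orthonormalize the block
        # sine of the largest principal angle between successive subspaces
        if np.linalg.norm(Q_new - Q @ (Q.T @ Q_new), 2) < tol:
            return Q_new
        Q = Q_new
    return Q

# Example: dominant 3-dimensional invariant subspace of a random SPD matrix
B = np.random.default_rng(1).standard_normal((50, 50))
Q = subspace_iteration(B @ B.T, p=3)
```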
In this article, we present a time-space Chebyshev pseudospectral method (TS-CPsM) to approximate the solution of the generalised Burgers-Fisher (gBF) equation. The Chebyshev-Gauss-Lobatto (CGL) points serve as the foundation for the proposed method, which uses collocation in both the time and space directions. Further, by means of a mapping, the non-homogeneous initial-boundary value problem is transformed into a homogeneous problem, and a system of algebraic equations is obtained. The resulting nonlinear system is solved by the Newton-Raphson method. A stability analysis of the proposed method is also carried out. Test problems considered by various researchers are explored to illustrate the robustness and practicality of the presented approach. The approximate solutions obtained with the proposed method are highly accurate and significantly better than existing results.
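For reference, the gBF equation is commonly written (up to the paper's choice of parameters) as
\[
u_t + \alpha u^{\delta} u_x - u_{xx} = \beta u \left(1 - u^{\delta}\right),
\]
which reduces to the Burgers-Fisher equation for $\delta = 1$; the CGL collocation points on $[-1, 1]$ are $x_j = \cos(j\pi/N)$, $j = 0, \ldots, N$.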
In this study, we examine numerical approximations for second-order linear and nonlinear differential equations with diverse boundary conditions, followed by residual corrections of the first approximations. We first obtain numerical results using the Galerkin weighted residual approach with Bernstein polynomials. Residuals arise because the first approximation is computed numerically. To minimize these residuals, we solve the error differential equations, subject to the error boundary conditions, using a compact finite difference scheme of fourth-order convergence. We also introduce a formulation of the fourth-order compact finite difference method for nonlinear BVPs. The improved approximations are produced by adding the error values, derived from the approximations of the error differential equation, to the weighted residual values. Numerical results are compared with the exact solutions and with solutions available in the published literature to validate the proposed scheme, and high accuracy is achieved in all cases.
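Two standard ingredients referenced above, given here for context (the paper's exact discretization may differ): the Bernstein basis polynomials of degree $n$ on $[0,1]$,
\[
B_{k,n}(x) = \binom{n}{k} x^{k} (1-x)^{n-k}, \qquad k = 0, \ldots, n,
\]
and the classical fourth-order compact finite difference approximation of the second derivative on a uniform grid with spacing $h$,
\[
\frac{1}{12}\left( f_{i-1} + 10 f_{i} + f_{i+1} \right) = \frac{u_{i+1} - 2 u_{i} + u_{i-1}}{h^{2}}, \qquad f_{i} \approx u''(x_{i}).
\]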
We propose a novel collocated projection method for solving the incompressible Navier-Stokes equations with arbitrary boundaries. Our approach employs non-graded octree grids, where all variables are stored at the nodes. To discretize the viscosity and projection steps, we utilize supra-convergent finite difference approximations with sharp boundary treatments. We demonstrate the stability of our projection on uniform grids, identify a sufficient stability condition on adaptive grids, and validate these findings numerically. We further demonstrate the accuracy and capabilities of our solver with several canonical two- and three-dimensional simulations of incompressible fluid flows. Overall, our method is second-order accurate, allows for dynamic grid adaptivity with arbitrary geometries, and reduces the overhead in code development through data collocation.
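For orientation, a generic projection (pressure-correction) template of the kind referenced above (the paper's collocated, sharp-boundary octree discretization differs in its details) advances the velocity in two steps:
\[
\frac{u^{*} - u^{n}}{\Delta t} = -\left( u^{n} \cdot \nabla \right) u^{n} + \nu \Delta u^{*}, \qquad
\Delta p = \frac{\nabla \cdot u^{*}}{\Delta t}, \qquad
u^{n+1} = u^{*} - \Delta t \, \nabla p,
\]
so that the corrected velocity $u^{n+1}$ is discretely divergence-free.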
Despite the growing interest in parallel-in-time methods as an approach to accelerating numerical simulations in atmospheric modelling, improving their stability and convergence remains a substantial challenge for their application to operational models. In this work, we study the temporal parallelization of the shallow water equations on the rotating sphere combined with time-stepping schemes commonly used in atmospheric modelling due to their stability properties, namely an Eulerian implicit-explicit (IMEX) method and a semi-Lagrangian semi-implicit method (SL-SI-SETTLS). The main goal is to investigate the performance of parallel-in-time methods, namely Parareal and Multigrid Reduction in Time (MGRIT), when these well-established schemes are used on the coarse discretization levels, and to provide insights into how they can be improved for better performance. We begin with an analytical stability study of Parareal and MGRIT applied to a linearized ordinary differential equation, as a function of the choice of coarse scheme. Next, we perform numerical simulations of two standard tests to evaluate the stability, convergence, and speedup provided by the parallel-in-time methods relative to a fine reference solution computed serially. We also conduct a detailed investigation of the influence of artificial viscosity and hyperviscosity approaches, applied on the coarse discretization levels, on the performance of the temporal parallelization. Both the analytical stability study and the numerical simulations indicate poorer stability behaviour when SL-SI-SETTLS is used on the coarse levels compared to the IMEX scheme. With the IMEX scheme, a better trade-off between convergence, stability, and speedup relative to serial simulations can be obtained under proper parameter and artificial viscosity choices, opening perspectives on the potential competitiveness of parallel-in-time methods for realistic models.
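For reference, Parareal alternates a cheap coarse propagator $\mathcal{G}$ and an expensive fine propagator $\mathcal{F}$ (applied in parallel across time slices) via the standard predictor-corrector update
\[
U_{n+1}^{k+1} = \mathcal{G}\left( U_{n}^{k+1} \right) + \mathcal{F}\left( U_{n}^{k} \right) - \mathcal{G}\left( U_{n}^{k} \right),
\]
so the stability of the coarse scheme (IMEX or SL-SI-SETTLS above) directly enters the iteration.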