A numerical scheme is presented for solving the Helmholtz equation with Dirichlet or Neumann boundary conditions on piecewise smooth open curves, where the curves may have corners and multiple junctions. Existing integral equation methods for smooth open curves rely on analyzing the exact singularities of the density at endpoints for associated integral operators, explicitly extracting these singularities from the densities in the formulation, and using global quadrature to discretize the boundary integral equation. Extending these methods to handle curves with corners and multiple junctions is challenging because the singularity analysis becomes much more complex, and constructing high-order quadrature for discretizing layer potentials with singular and hypersingular kernels and singular densities is nontrivial. The proposed scheme is built upon the following two observations. First, the single-layer potential operator and the normal derivative of the double-layer potential operator serve as effective preconditioners for each other locally. Second, the recursively compressed inverse preconditioning (RCIP) method can be extended to address "implicit" second-kind integral equations. The scheme is high-order, adaptive, and capable of handling corners and multiple junctions without prior knowledge of the density singularity. It is also compatible with fast algorithms, such as the fast multipole method. The performance of the scheme is illustrated with several numerical examples.
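As background, the global-quadrature Nyström approach mentioned above can be sketched in a few lines for a smooth closed curve, where none of the paper's difficulties (corners, junctions, endpoint singularities) arise. The following toy example, a hypothetical illustration using Laplace rather than Helmholtz, solves an interior Dirichlet problem on the unit disk with a double-layer representation, exploiting the fact that the double-layer kernel on the unit circle is the constant $-1/(4\pi)$:

```python
import numpy as np

# Nystrom discretization with global (trapezoidal) quadrature for a
# second-kind boundary integral equation on a smooth closed curve:
# interior Laplace Dirichlet problem on the unit disk via a double-layer
# potential, (D - I/2) sigma = f. A toy smooth-curve analogue only, not
# the Helmholtz open-arc setting of the paper.

N = 64
theta = 2 * np.pi * np.arange(N) / N
y = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # nodes on the circle
w = 2 * np.pi / N                                      # trapezoid weights

# on the unit circle the double-layer Laplace kernel is the constant -1/(4*pi)
K = np.full((N, N), -1.0 / (4 * np.pi))
A = -0.5 * np.eye(N) + w * K

f = y[:, 0] ** 2 - y[:, 1] ** 2        # boundary data of the harmonic u = x^2 - y^2
sigma = np.linalg.solve(A, f)

# evaluate the double-layer potential at an interior point
x = np.array([0.3, 0.2])
d = x - y                                              # shape (N, 2)
kern = np.einsum('ij,ij->i', d, y) / (2 * np.pi * np.einsum('ij,ij->i', d, d))
u = w * kern @ sigma                   # should approximate x^2 - y^2 = 0.05
```

For smooth closed curves the periodic trapezoid rule is spectrally accurate, which is why such global quadratures work so well there; the abstract's point is that corners and junctions destroy this picture.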
We propose BayesPIM, a Bayesian prevalence-incidence mixture model for estimating time- and covariate-dependent disease incidence from screening and surveillance data. The method is particularly suited to settings where some individuals may have the disease at baseline, baseline tests may be missing or incomplete, and the screening test has imperfect sensitivity. Building on the existing PIMixture framework, which assumes perfect sensitivity, BayesPIM accommodates uncertain test accuracy by incorporating informative priors. By including covariates, the model can quantify heterogeneity in disease risk, thereby informing personalized screening strategies. We motivate the model using data from high-risk familial colorectal cancer (CRC) surveillance through colonoscopy, where adenomas (precursors of CRC) may already be present at baseline and remain undetected due to imperfect test sensitivity. We show that conditioning incidence and prevalence estimates on covariates explains substantial heterogeneity in adenoma risk. Using a Metropolis-within-Gibbs sampler and data augmentation, BayesPIM robustly recovers incidence times while handling latent prevalence. Informative priors on the test sensitivity stabilize estimation and mitigate non-convergence issues. Model fit can be assessed using information criteria and validated against a non-parametric estimator. In this way, BayesPIM enhances estimation accuracy and supports the development of more effective, patient-centered screening policies.
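The sampler class used above, Metropolis-within-Gibbs, can be illustrated on a deliberately simple toy model (a Gaussian likelihood with a conjugate variance update and a random-walk step for the mean; this is a hypothetical stand-in, not the BayesPIM likelihood):

```python
import numpy as np

# Illustrative Metropolis-within-Gibbs sampler for a toy Gaussian model
# y_i ~ N(mu, sigma^2), flat prior on mu, inverse-gamma(a0, b0) prior on sigma^2.
# (Toy model for exposition only, not the BayesPIM likelihood.)

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=500)
n = y.size
a0, b0 = 2.0, 2.0                # inverse-gamma hyperparameters (assumed)

mu, sigma2 = 0.0, 1.0
mu_draws = []
for it in range(4000):
    # Gibbs step: sigma^2 | mu, y is inverse-gamma (conjugate update)
    a_post = a0 + n / 2.0
    b_post = b0 + 0.5 * np.sum((y - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(a_post, 1.0 / b_post)

    # Metropolis step: random-walk proposal for mu | sigma^2, y
    mu_prop = mu + rng.normal(scale=0.2)
    log_ratio = (-0.5 / sigma2) * (np.sum((y - mu_prop) ** 2)
                                   - np.sum((y - mu) ** 2))
    if np.log(rng.uniform()) < log_ratio:
        mu = mu_prop
    if it >= 1000:               # discard burn-in
        mu_draws.append(mu)

post_mean = np.mean(mu_draws)
```

In BayesPIM the Gibbs steps additionally impute latent quantities (prevalence status, incidence times) via data augmentation, but the alternation between conditional updates follows the same pattern.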
We consider estimators obtained by iterates of the conjugate gradient (CG) algorithm applied to the normal equation of prototypical statistical inverse problems. Stopping the CG algorithm early induces regularisation, and optimal convergence rates of prediction and reconstruction error are established in wide generality for an ideal oracle stopping time. Based on this insight, a fully data-driven early stopping rule $\tau$ is constructed, which also attains optimal rates, provided the error in estimating the noise level is not dominant. The error analysis of CG under statistical noise is subtle due to its nonlinear dependence on the observations. We provide an explicit error decomposition and identify two terms in the prediction error, which share important properties of classical bias and variance terms. Together with a continuous interpolation between CG iterates, this paves the way for a comprehensive error analysis of early stopping. In particular, a general oracle-type inequality is proved for the prediction error at $\tau$. For bounding the reconstruction error, a more refined probabilistic analysis, based on concentration of self-normalised Gaussian processes, is developed. The methodology also provides some new insights into early stopping for CG in deterministic inverse problems. A numerical study for standard examples shows good results in practice for early stopping at $\tau$.
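The mechanics of early stopping for CG applied to the normal equation can be sketched in a few lines. The following is a minimal CGLS-style illustration on a diagonal toy problem, using a discrepancy-type rule (stop once the data residual falls below the noise level) as a crude stand-in for the data-driven rule $\tau$ of the paper:

```python
import numpy as np

def cg_normal_early_stop(A, y, noise_level, kappa=1.0, max_iter=200):
    """CG on the normal equation A^T A x = A^T y, stopped once the residual
    ||y - A x_k|| drops below kappa * noise_level (a discrepancy-type rule;
    the paper's tau is a more refined data-driven construction)."""
    m, n = A.shape
    x = np.zeros(n)
    r = A.T @ y                       # residual of the normal equation
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        if np.linalg.norm(y - A @ x) <= kappa * noise_level:
            return x, k
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

# mildly ill-posed toy problem with known truth (hypothetical setup)
m = n = 100
A = np.diag(1.0 / np.arange(1, n + 1))          # decaying singular values
x_true = 1.0 / np.arange(1, n + 1)
rng = np.random.default_rng(0)
delta = 1e-3
y = A @ x_true + delta * rng.standard_normal(m)

x_hat, k = cg_normal_early_stop(A, y, noise_level=delta * np.sqrt(m))
x_full, _ = cg_normal_early_stop(A, y, noise_level=0.0, max_iter=200)
err_early = np.linalg.norm(x_hat - x_true)
err_full = np.linalg.norm(x_full - x_true)      # iterating to the end overfits
```

Stopping early acts as regularisation: the fully iterated solution amplifies the noise through the small singular values, while the early-stopped iterate balances approximation and noise error.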
The sum of quantum computing errors is the key element both for the estimation and control of errors in quantum computing and for its statistical study. In this article we analyze the sum of two independent quantum computing errors, $X_1$ and $X_2$, and we obtain a formula for the variance of their sum: $$ V(X_1+X_2)=V(X_1)+V(X_2)-\frac{V(X_1)V(X_2)}{2}. $$ We conjecture that this result holds for general quantum computing errors, and we prove the formula for independent isotropic quantum computing errors.
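Taken at face value, the displayed composition rule is symmetric and has $v = 2$ as a fixed point, so variances in $[0, 2]$ never compose to more than $2$. A quick numerical check of these two elementary consequences (illustrative only):

```python
import itertools

# Properties implied by the formula V(X1+X2) = V(X1) + V(X2) - V(X1)V(X2)/2,
# taken at face value.

def var_sum(v1, v2):
    return v1 + v2 - v1 * v2 / 2.0

# v = 2 is a fixed point: composing with a variance-2 error leaves it at 2
assert abs(var_sum(2.0, 2.0) - 2.0) < 1e-12

# the rule is symmetric and maps [0, 2] x [0, 2] into [0, 2]
grid = [i * 0.1 for i in range(21)]
for v1, v2 in itertools.product(grid, grid):
    assert abs(var_sum(v1, v2) - var_sum(v2, v1)) < 1e-12
    assert -1e-12 <= var_sum(v1, v2) <= 2.0 + 1e-12
```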
This work presents a numerical analysis of a Discontinuous Galerkin (DG) method for a transformed master equation modeling an open quantum system: a quantum sub-system interacting with a noisy environment. It is shown that, for the general case of non-harmonic potentials, the transformed master equation solved via DG schemes has a lower computational cost than a Wigner-Fokker-Planck model of the same system. We present the specifics of a DG numerical scheme adequate for the system of convection-diffusion equations obtained from our Lindblad master equation in the position basis, which allows us to solve computationally the transformed system modeling our open quantum system problem. The benchmark case of a harmonic potential is presented first, for which the numerical results are compared against the analytical steady-state solution of this problem. Two non-harmonic cases, the linear and quartic potentials, are then modeled via our DG framework, and the corresponding numerical results are presented.
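The basic ingredients of a DG scheme (local mass and stiffness matrices, numerical fluxes at element interfaces, explicit time stepping) are easiest to see in one dimension. The following is a minimal nodal $P^1$ DG sketch for scalar linear advection with upwind flux, a toy scalar analogue rather than the convection-diffusion Lindblad system of the paper:

```python
import numpy as np

# Minimal nodal P1 discontinuous Galerkin scheme for 1D linear advection
# u_t + a u_x = 0 on [0, 1], periodic boundary conditions, upwind flux.
# (Toy scalar illustration, not the paper's convection-diffusion system.)

a, N, T_final = 1.0, 32, 1.0
h = 1.0 / N
# element-local operators for the linear nodal basis on each cell
Minv = (2.0 / h) * np.array([[2.0, -1.0], [-1.0, 2.0]])   # inverse mass matrix
K = np.array([[-0.5, -0.5], [0.5, 0.5]])                  # K_ij = integral of phi_i' phi_j

x_left = np.arange(N) * h
u = np.stack([np.sin(2 * np.pi * x_left),
              np.sin(2 * np.pi * (x_left + h))], axis=1)  # nodal values per cell

def rhs(u):
    flux_in = a * np.roll(u[:, 1], 1)     # upwind: trace from the left neighbour
    flux_out = a * u[:, 1]
    r = a * u @ K.T                       # volume term
    r[:, 0] += flux_in                    # surface terms at cell boundaries
    r[:, 1] -= flux_out
    return r @ Minv.T

dt = 0.05 * h
for _ in range(int(round(T_final / dt))):  # two-stage Runge-Kutta (Heun)
    k1 = rhs(u)
    k2 = rhs(u + dt * k1)
    u = u + 0.5 * dt * (k1 + k2)

# after one full period the exact solution returns to the initial condition
err = float(np.max(np.abs(u[:, 0] - np.sin(2 * np.pi * x_left))))
```

The same building blocks (per-element operators plus interface fluxes) carry over to systems of convection-diffusion equations, where diffusion terms additionally require a DG treatment of second derivatives.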
Saturated sets and their reduced case, the sets of generic points, constitute two significant types of fractal-like sets in the multifractal analysis of dynamical systems. In the context of infinite-entropy systems, this paper gives some qualitative aspects of saturated sets and sets of generic points from both topological and measure-theoretic perspectives. For systems with the specification property, we establish variational principles for saturated sets in terms of Bowen and packing metric mean dimensions, and show that saturated sets have full upper capacity metric mean dimension. These results are useful for understanding the topological structure of dynamical systems with infinite topological entropy. As applications, we further exhibit some qualitative aspects of the metric mean dimensions of level sets and of the set of mean Li-Yorke pairs in infinite-entropy systems.
We establish a general convergence theory of the Rayleigh--Ritz method and the refined Rayleigh--Ritz method for computing some simple eigenpair $(\lambda_{*},x_{*})$ of a given analytic regular nonlinear eigenvalue problem (NEP). In terms of the deviation $\varepsilon$ of $x_{*}$ from a given subspace $\mathcal{W}$, we establish a priori convergence results on the Ritz value, the Ritz vector and the refined Ritz vector. The results show that, as $\varepsilon\rightarrow 0$, there exists a Ritz value that unconditionally converges to $\lambda_*$, and the corresponding refined Ritz vector does so too, whereas the Ritz vector converges only conditionally: it may fail to converge and may not even be unique. We also present an error bound for the approximate eigenvector in terms of the computable residual norm of a given approximate eigenpair, and give lower and upper bounds for the error of the refined Ritz vector and the Ritz vector as well as for that of the corresponding residual norms. These results nontrivially extend some convergence results on these two methods for the linear eigenvalue problem to the NEP. Examples are constructed to illustrate the main results.
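For readers more familiar with the linear case that the paper generalizes, the two extraction procedures can be sketched with NumPy: Rayleigh-Ritz takes eigenpairs of the projected matrix $W^{T}AW$, while the refined variant replaces the Ritz vector by the minimiser of $\|(A-\theta I)Wz\|$ over unit vectors $z$, computed from an SVD. The setup below is a hypothetical diagonal example:

```python
import numpy as np

# Rayleigh-Ritz and refined Rayleigh-Ritz for a linear eigenproblem A x = lambda x
# (the linear case; the paper extends such results to nonlinear eigenvalue problems).

rng = np.random.default_rng(1)
n = 200
A = np.diag(np.arange(1.0, n + 1))           # known eigenpairs: lambda_k = k, x_k = e_k
target = np.zeros(n); target[0] = 1.0        # eigenvector for lambda = 1

# subspace W: a slightly perturbed copy of the target plus random directions,
# so the deviation epsilon of x_* from W is small
W = np.column_stack([target + 1e-6 * rng.standard_normal(n),
                     rng.standard_normal((n, 9))])
W, _ = np.linalg.qr(W)                       # orthonormal basis of W

# Rayleigh-Ritz: eigenpairs of the projected matrix W^T A W
H = W.T @ A @ W
theta, Y = np.linalg.eigh(H)
i = np.argmin(np.abs(theta - 1.0))           # Ritz value closest to lambda = 1
ritz_val, ritz_vec = theta[i], W @ Y[:, i]

# refined Ritz vector: minimiser of ||(A - theta I) W z|| over unit z,
# i.e. the right singular vector for the smallest singular value
_, _, Vt = np.linalg.svd((A - ritz_val * np.eye(n)) @ W)
refined_vec = W @ Vt[-1]
```

In this well-separated example both extracted vectors are accurate; the paper's analysis concerns precisely the regimes where the Ritz vector may fail while the refined Ritz vector still converges.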
We propose a method utilizing physics-informed neural networks (PINNs) to solve Poisson equations that serve as control variates in the computation of transport coefficients via fluctuation formulas, such as the Green--Kubo and generalized Einstein-like formulas. By leveraging approximate solutions to the Poisson equation constructed through neural networks, our approach significantly reduces the variance of the estimator at hand. We provide an extensive numerical analysis of the estimators and detail a methodology for training neural networks to solve these Poisson equations. The approximate solutions are then incorporated into Monte Carlo simulations as effective control variates, demonstrating the suitability of the method for moderately high-dimensional problems where fully deterministic solutions are computationally infeasible.
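The variance-reduction mechanism at work, independent of how the control variate is constructed, is the classical one: subtract a correlated quantity with known mean. A minimal generic sketch (an elementary textbook control variate, not the PINN/Poisson construction of the paper):

```python
import numpy as np

# Generic control-variate illustration: estimate E[exp(Z)] for Z ~ N(0, 1)
# using the Taylor polynomial g(Z) = 1 + Z + Z^2/2 as control variate,
# whose exact mean is 1 + 0 + 1/2 = 1.5.

rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)
f = np.exp(z)
g = 1.0 + z + 0.5 * z * z
g_mean = 1.5                                 # known analytically

beta = np.cov(f, g)[0, 1] / np.var(g)        # estimated optimal coefficient
plain = f.mean()                             # naive Monte Carlo estimate
cv = (f - beta * (g - g_mean)).mean()        # control-variate estimate

var_plain = f.var()
var_cv = (f - beta * (g - g_mean)).var()     # substantially smaller variance
```

In the paper's setting the role of $g$ is played by fluctuation functionals built from approximate Poisson-equation solutions, whose quality (here, how well $g$ correlates with $f$) directly determines the variance reduction.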
The study of interpolation nodes and their associated Lebesgue constants is central to numerical analysis, impacting the stability and accuracy of polynomial approximations. In this paper, we explore the Morrow-Patterson points, a set of interpolation nodes introduced to construct cubature formulas with a minimal number of points in the square for a fixed degree $n$. We prove that their Lebesgue constant growth is ${\cal O}(n^2)$, as was conjectured based on numerical evidence about twenty years ago in the paper by Caliari, M., De Marchi, S., Vianello, M., {\it Bivariate polynomial interpolation on the square at new nodal sets}, Appl. Math. Comput. 165(2) (2005), 261--274.
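The Lebesgue constant is the maximum over the domain of the Lebesgue function, the sum of absolute values of the Lagrange basis polynomials. A one-dimensional sketch of its computation (an illustration of the concept only; the Morrow-Patterson points of the paper live on the square):

```python
import numpy as np

# Lebesgue constant of a 1D node set on [-1, 1], obtained by maximising the
# Lebesgue function on a fine evaluation grid.

def lebesgue_constant(nodes, n_eval=5000):
    x = np.linspace(-1.0, 1.0, n_eval)
    L = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        others = np.delete(nodes, i)
        # |ell_i(x)| for the i-th Lagrange basis polynomial
        ell = np.prod((x[:, None] - others) / (xi - others), axis=1)
        L += np.abs(ell)
    return L.max()

n = 10
equi = np.linspace(-1.0, 1.0, n + 1)                        # equispaced nodes
cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))  # Chebyshev

lc_equi = lebesgue_constant(equi)   # grows exponentially in n
lc_cheb = lebesgue_constant(cheb)   # grows only logarithmically in n
```

The contrast between the two node families shows why growth rates such as the ${\cal O}(n^2)$ bound proved in the paper matter: they control how much interpolation can amplify data errors.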
We present a novel class of projected gradient (PG) methods for minimizing a smooth but not necessarily convex function over a convex compact set. We first provide a novel analysis of the "vanilla" PG method, achieving the best-known iteration complexity for finding an approximate stationary point of the problem. We then develop an "auto-conditioned" projected gradient (AC-PG) variant that achieves the same iteration complexity without requiring the input of the Lipschitz constant of the gradient or any line search procedure. The key idea is to estimate the Lipschitz constant using first-order information gathered from the previous iterations, and to show that the error caused by underestimating the Lipschitz constant can be properly controlled. We then generalize the PG methods to the stochastic setting, by proposing a stochastic projected gradient (SPG) method and a variance-reduced stochastic gradient (VR-SPG) method, achieving new complexity bounds in different oracle settings. We also present auto-conditioned stepsize policies for both stochastic PG methods and establish comparable convergence guarantees.
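The "vanilla" PG iteration analyzed first is simply a gradient step followed by projection onto the feasible set. A minimal sketch on a smooth problem over a Euclidean ball, with a fixed $1/L$ stepsize (AC-PG instead estimates $L$ on the fly from first-order information, which is not reproduced here):

```python
import numpy as np

# Vanilla projected gradient: minimise f(x) = 0.5 ||A x - b||^2 over the
# Euclidean ball of radius r (a smooth convex toy instance; the paper's
# analysis also covers smooth nonconvex objectives).

rng = np.random.default_rng(0)
m, n, r = 30, 10, 1.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient

def project_ball(x, r):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)
    x = project_ball(x - grad / L, r)    # gradient step, then projection

# stationarity is measured through the projected-gradient mapping
pg_residual = np.linalg.norm(x - project_ball(x - (A.T @ (A @ x - b)) / L, r))
```

The projected-gradient mapping residual computed at the end is the standard stationarity measure for constrained problems; the complexity bounds in the paper are stated in terms of such quantities.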
We propose a novel partitioned scheme based on Eikonal equations to model the coupled propagation of the electrical signal in the His-Purkinje system and in the myocardium for cardiac electrophysiology. This scheme makes it possible, for the first time in Eikonal-based modeling, to capture all possible signal reentries between the Purkinje network and the cardiac muscle that may occur under pathological conditions. As part of the proposed scheme, we introduce a new pseudo-time method for the Eikonal-diffusion problem in the myocardium, to correctly enforce electrical stimuli coming from the Purkinje network. We test our approach by performing numerical simulations of cardiac electrophysiology in a real biventricular geometry, under both pathological and therapeutic conditions, to demonstrate its flexibility, robustness, and accuracy.
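The basic building block of Eikonal-based activation modeling is a solver for $|\nabla T| = 1/c$, where $T$ is the activation time and $c$ the conduction velocity. A simple fast-sweeping sketch on a uniform 2D grid with a single stimulus (an elementary building-block illustration, not the paper's coupled Purkinje-myocardium scheme or its pseudo-time method):

```python
import numpy as np

# Basic fast-sweeping solver for the isotropic Eikonal equation |grad T| = 1/c
# on a uniform 2D grid, first-order Godunov upwind update.

def fast_sweep(T, h, c, sweeps=8):
    ny, nx = T.shape
    f = h / c                          # local slowness times grid spacing
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for s in range(sweeps):            # alternate the four sweep directions
        ys, xs = orders[s % 4]
        for i in ys:
            for j in xs:
                a = min(T[i - 1, j] if i > 0 else np.inf,
                        T[i + 1, j] if i < ny - 1 else np.inf)
                b = min(T[i, j - 1] if j > 0 else np.inf,
                        T[i, j + 1] if j < nx - 1 else np.inf)
                lo, hi = min(a, b), max(a, b)
                if np.isinf(lo):
                    continue           # no finite neighbour yet
                if hi - lo >= f:
                    t = lo + f         # one-sided (causal) update
                else:
                    t = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                if t < T[i, j]:
                    T[i, j] = t
    return T

n, h = 101, 0.01                       # unit square, 101 x 101 grid
T = np.full((n, n), np.inf)
T[0, 0] = 0.0                          # stimulus at one corner
T = fast_sweep(T, h, c=1.0)            # T approximates the distance from the corner
```

With unit conduction velocity the computed activation map approximates the Euclidean distance from the stimulus; the paper couples such Eikonal solves in the myocardium with signal propagation in the Purkinje network, including reentries between the two.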