This manuscript examines nonlinear stochastic fractional neutral integro-differential equations with weakly singular kernels. Our focus is on obtaining precise estimates covering all possible cases of Abel-type singular kernels. We first establish the existence, uniqueness, and continuous dependence on the initial value of the true solution under a local Lipschitz condition and a linear growth condition. We then develop the Euler-Maruyama method for the numerical solution of the equation and prove its strong convergence under the same conditions as for well-posedness. Moreover, we determine the exact convergence rate of this method under global Lipschitz and linear growth conditions. We also prove a generalized Gronwall inequality with multiple weak singularities.
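For illustration only, the sketch below applies an Euler-Maruyama step to a simplified scalar stochastic Volterra equation with an Abel-type kernel $(t-s)^{-\beta}$, a stand-in for the neutral integro-differential setting treated in the paper; the drift $f$, diffusion $g$, exponent $\beta$, and step count are assumptions, and the weakly singular kernel is integrated exactly over each past subinterval so that the singularity at $s=t$ causes no difficulty.

```python
import numpy as np

def em_volterra_abel(f, g, beta, x0, T, N, seed=0):
    """Euler-Maruyama sketch for the stochastic Volterra equation
        X(t) = x0 + int_0^t (t-s)^{-beta} f(X(s)) ds + int_0^t g(X(s)) dW(s),
    with 0 < beta < 1.  The Abel kernel is integrated exactly over each past
    subinterval, while f and g are frozen at the left endpoint.
    """
    rng = np.random.default_rng(seed)
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    X = np.empty(N + 1)
    X[0] = x0
    dW = rng.normal(0.0, np.sqrt(h), N)
    for n in range(N):                        # build X at t_{n+1}
        a = t[n + 1] - t[: n + 1]             # t_{n+1} - t_k,     k = 0..n
        b = t[n + 1] - t[1: n + 2]            # t_{n+1} - t_{k+1}
        w = (a ** (1 - beta) - b ** (1 - beta)) / (1 - beta)   # exact kernel weights
        X[n + 1] = (x0
                    + np.dot(w, f(X[: n + 1]))
                    + np.dot(g(X[: n + 1]), dW[: n + 1]))
    return t, X

# toy run with f(x) = -x, g(x) = 0.1, beta = 0.5 (all illustrative)
t, X = em_volterra_abel(lambda x: -x, lambda x: 0.1 * np.ones_like(x), 0.5, 1.0, 1.0, 200)
```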
The maximum likelihood estimator for mixtures of elliptically symmetric distributions is shown to be consistent for estimating its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In a situation where $P$ is a mixture of sufficiently well-separated but nonparametric distributions, it is shown that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis in case $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
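As a purely illustrative special case (Gaussian components are elliptically symmetric), the following sketch fits a two-component mixture by maximum likelihood to data drawn from two well-separated non-Gaussian subpopulations and uses it for clustering; the data-generating choices are assumptions and the sketch does not reproduce the paper's population-level analysis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# two well-separated but non-Gaussian (uniform) subpopulations, mimicking a P
# that lies outside the fitted mixture class
P1 = rng.uniform(-1, 1, size=(300, 2))
P2 = rng.uniform(-1, 1, size=(300, 2)) + np.array([8.0, 0.0])
X = np.vstack([P1, P2])

# maximum likelihood fit of a 2-component (here Gaussian) elliptical mixture
gm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
labels = gm.predict(X)     # clusters recover the two separated subpopulations
print(gm.means_)           # component means sit near the subpopulation centers
```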
Mendelian randomization is an instrumental variable method that utilizes genetic information to investigate the causal effect of a modifiable exposure on an outcome. In most cases, the exposure changes over time. Understanding the time-varying causal effect of the exposure can yield detailed insights into mechanistic effects and the potential impact of public health interventions. Recently, a growing number of Mendelian randomization studies have attempted to explore time-varying causal effects. However, the proposed approaches oversimplify temporal information and rely on overly restrictive structural assumptions, limiting their reliability in addressing time-varying causal problems. This paper proposes a novel approach to estimating time-varying effects through continuous-time modelling, combining functional principal component analysis and weak-instrument-robust techniques. Our method effectively utilizes available data without making strong structural assumptions and can be applied in general settings where the exposure measurements occur at different timepoints for different individuals. We demonstrate through simulations that our proposed method performs well in estimating time-varying effects and provides reliable inference results when the time-varying effect form is correctly specified. The method could theoretically be used to estimate arbitrarily complex time-varying effects. However, there is a trade-off between model complexity and instrument strength: estimating complex time-varying effects requires unrealistically strong instruments. We illustrate the application of this method in a case study examining the time-varying effects of systolic blood pressure on urea levels.
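The following toy sketch illustrates the overall idea under strong simplifying assumptions: exposure trajectories on a common grid are reduced by functional principal component analysis, and the coefficient of the single retained component is estimated by plain two-stage least squares with the SNPs as instruments (a stand-in for the weak-instrument-robust estimation in the paper). The generative model and all constants are illustrative, and only the part of the time-varying effect spanned by the retained components is identified, echoing the complexity/instrument-strength trade-off noted above.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, K = 2000, 30, 1                    # individuals, SNPs, retained FPCs (all assumed)
tgrid = np.linspace(0.0, 1.0, 50)

# toy generative model: a genetic score and a confounder U shift the exposure trajectory
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)
gscore = G @ rng.normal(0.0, 0.2, p)
U = rng.normal(size=n)
Xtraj = np.outer(gscore + U, np.sin(np.pi * tgrid)) + rng.normal(0.0, 0.3, (n, tgrid.size))

# outcome accumulates the exposure weighted by a time-varying effect beta(t),
# plus confounding through U
beta_t = 1.0 - tgrid
Y = Xtraj @ beta_t + U + rng.normal(0.0, 0.5, n)

# functional PCA via the empirical covariance of the centred trajectories
Xc = Xtraj - Xtraj.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
phi = Vt[:K].T                           # leading eigenfunction(s), grid x K
scores = Xc @ phi                        # FPC scores

# plain two-stage least squares with the SNPs as instruments
Gc = G - G.mean(axis=0)
S_hat = Gc @ np.linalg.lstsq(Gc, scores, rcond=None)[0]       # first stage
theta = np.linalg.lstsq(S_hat, Y - Y.mean(), rcond=None)[0]   # second stage
beta_hat = phi @ theta                   # only the component of beta(t) spanned by
                                         # the retained FPCs is identified
```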
This work proposes a discretization of the acoustic wave equation with possibly oscillatory coefficients, based on a superposition of discrete solutions to spatially localized subproblems computed with an implicit time discretization. Based on the exponentially decaying entries of the global system matrices and an appropriate partition of unity, it is proved that the superposition of localized solutions is appropriately close to the solution of the (global) implicit scheme. This justifies the localized (and, in particular, parallel) computation on multiple overlapping subdomains. Moreover, a restart is introduced after a certain number of time steps to maintain a moderate overlap of the subdomains. Overall, the approach may be understood as a domain decomposition strategy in space on successive short time intervals that completely avoids inner iterations. Numerical examples are presented.
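A minimal sketch of the localization idea, not the paper's scheme: one implicit step of a 1D semidiscrete wave equation is solved globally and then re-assembled as a superposition of localized solves on overlapping subdomains, with the data split by a (here piecewise constant) partition of unity; grid size, time step, and overlap width are assumptions.

```python
import numpy as np

# one implicit time step of the semidiscrete wave equation, S u = b with
# S = I + (c*dt)^2 * A, approximated by a superposition of localized solves
n, h, dt, c = 400, 1.0 / 401, 1.0e-3, 1.0
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # 1D Laplacian, Dirichlet BCs
S = np.eye(n) + (c * dt) ** 2 * A

rng = np.random.default_rng(0)
b = rng.normal(size=n)                       # data of the implicit step
u_global = np.linalg.solve(S, b)             # reference: global implicit solve

# non-overlapping cores carrying a partition of unity of the data, each extended
# by an oversampling margin m to form overlapping subdomains
cores = [np.arange(k, min(n, k + 100)) for k in range(0, n, 100)]
m = 40                                       # overlap width (assumed)
u_local = np.zeros(n)
for core in cores:
    idx = np.arange(max(0, core[0] - m), min(n, core[-1] + 1 + m))
    chi_b = np.zeros(n)
    chi_b[core] = b[core]                    # partition of unity applied to the data
    Si = S[np.ix_(idx, idx)]                 # localized implicit system
    u_local[idx] += np.linalg.solve(Si, chi_b[idx])

# exponential decay of S^{-1} makes the superposition close to the global solve
print(np.linalg.norm(u_local - u_global) / np.linalg.norm(u_global))
```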
We present a divergence-free semi-implicit finite volume scheme for the simulation of the ideal magnetohydrodynamics (MHD) equations that is stable for large time steps controlled by the local transport speed at all Mach and Alfv\'en numbers. An operator splitting technique allows the convective terms to be treated explicitly while the hydrodynamic pressure and the magnetic field contributions are integrated implicitly, yielding two decoupled linear implicit systems. The linearity of the implicit part is achieved by means of a semi-implicit time linearization. This structure is favorable, as second-order accuracy in time can be achieved by relying on the class of semi-implicit IMplicit-EXplicit Runge-Kutta (IMEX-RK) methods. In space, implicit cell-centered finite difference operators are designed to discretely preserve the divergence-free property of the magnetic field on three-dimensional Cartesian meshes. The new scheme is also particularly well suited for low Mach number flows and for the incompressible limit of the MHD equations, since no explicit numerical dissipation is added to the implicit contribution and the time step is scale independent. Likewise, highly magnetized flows can benefit from the implicit treatment of the magnetic fluxes, hence improving the computational efficiency of the novel method. The convective terms undergo a shock-capturing second-order finite volume discretization to guarantee the effectiveness of the proposed method even for high Mach number flows. The new scheme is benchmarked against a series of test cases for the ideal MHD equations addressing different acoustic and Alfv\'en Mach number regimes, where the performance and the stability of the new scheme are assessed.
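The divergence-free idea can be illustrated independently of the full scheme: if the discrete divergence is built from the same difference operators as the discrete curl, then div(curl A) vanishes to round-off. The sketch below uses central differences with periodic boundaries, not the paper's cell-centered operators, and the vector potential is an arbitrary smooth choice.

```python
import numpy as np

def dx(f, h): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
def dy(f, h): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)
def dz(f, h): return (np.roll(f, -1, 2) - np.roll(f, 1, 2)) / (2 * h)

n, h = 32, 1.0 / 32
x = np.arange(n) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# magnetic field defined as the discrete curl of a vector potential A
Ax = np.sin(2 * np.pi * Y) * np.cos(2 * np.pi * Z)
Ay = np.sin(2 * np.pi * Z) * np.cos(2 * np.pi * X)
Az = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
Bx = dy(Az, h) - dz(Ay, h)
By = dz(Ax, h) - dx(Az, h)
Bz = dx(Ay, h) - dy(Ax, h)

# the matching discrete divergence of the discrete curl vanishes identically,
# because the central difference operators in different directions commute
divB = dx(Bx, h) + dy(By, h) + dz(Bz, h)
print(np.abs(divB).max())   # zero up to round-off
```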
We propose and analyze a novel approach to construct structure-preserving approximations for the Poisson-Nernst-Planck equations, focusing on the positivity-preserving and mass conservation properties. The strategy consists of a standard time marching step combined with a projection (or correction) step to satisfy the desired physical constraints (positivity and mass conservation). Based on the $L^2$ projection, we construct a second-order Crank-Nicolson type finite difference scheme, which is linear (excluding the very efficient $L^2$ projection part), positivity preserving, and mass conserving. Rigorous error estimates in the $L^2$ norm are established, which are second-order accurate in both space and time. Another choice of projection, e.g., the $H^1$ projection, is also discussed. Numerical examples are presented to verify the theoretical results and demonstrate the efficiency of the proposed method.
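The projection step can be sketched as follows: the $L^2$ projection of a grid function onto the constraint set $\{x \ge 0,\ h\sum_i x_i = \text{mass}\}$ has the closed form $x_i = \max(c_i + \lambda, 0)$ for a scalar multiplier $\lambda$, which the code below locates by bisection; the bisection solver and the toy data are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def project_positive_mass(c, mass, h, iters=100):
    """L2 projection of a grid function c onto {x_i >= 0, h*sum(x_i) = mass}.

    The KKT conditions give x_i = max(c_i + lam, 0) for a scalar multiplier lam,
    located by bisection on the (monotone) mass defect.
    """
    phi = lambda lam: h * np.maximum(c + lam, 0.0).sum() - mass
    lo, hi = -c.max(), mass / (h * c.size) - c.min()   # phi(lo) <= 0 <= phi(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) < 0 else (lo, mid)
    return np.maximum(c + 0.5 * (lo + hi), 0.0)

# usage: correct a time-marching update c_star that has small negative values
h = 1.0 / 64
c_star = 0.1 * np.sin(np.linspace(0, 2 * np.pi, 64)) + 0.05   # dips below zero
mass0 = h * c_star.sum()            # mass carried over from the previous step
c_new = project_positive_mass(c_star, mass0, h)
print(c_new.min(), h * c_new.sum() - mass0)   # nonnegative, zero mass defect
```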
Methods for upwinding the potential vorticity in a compatible finite element discretisation of the rotating shallow water equations are studied. These include the well-known anticipated potential vorticity method (APVM), the streamline upwind Petrov-Galerkin (SUPG) method, and a recent approach where the trial functions are evaluated downstream within the reference element. In all cases the upwinding scheme conserves both potential vorticity and energy, since the antisymmetric structure of the equations is preserved. The APVM leads to a symmetric definite correction to the potential enstrophy that is dissipative and inconsistent, resulting in a turbulent state where the potential enstrophy is more strongly damped than for the other schemes. While the SUPG scheme is widely known to be consistent, since it modifies the test functions only, the downwinded trial function formulation results in the advection of downwind corrections. Results of the SUPG and downwinded trial function schemes are very similar in terms of both potential enstrophy conservation and turbulent spectra. The main difference between these schemes is in the energy conservation and residual errors. If just two nonlinear iterations are applied, then the energy conservation errors are improved for the downwinded trial function formulation, reflecting a smaller residual error than for the SUPG scheme. We also present formulations by which potential enstrophy is exactly integrated at each time level. Results using these formulations are observed to be stable in the absence of dissipation, despite the uncontrolled aliasing of grid scale turbulence. Using such a formulation and the APVM with a coefficient $\mathcal{O}(100)$ times smaller than its regular value leads to turbulent spectra that are greatly improved at the grid scale over the SUPG and downwinded trial function formulations with unstable potential enstrophy errors.
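For orientation, the APVM replaces the potential vorticity in the flux term by an upwinded value of the schematic form
\[ q^{*} = q - \tau\,\mathbf{u}\cdot\nabla q, \]
where $\tau \sim \Delta t/2$ is the commonly used coefficient; the experiments above also consider rescaling this coefficient.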
We propose a numerical method to solve parameter-dependent hyperbolic partial differential equations (PDEs) with a moment approach, based on previous work by Marx et al. (2020). This approach relies on a very weak notion of solution of nonlinear equations, namely parametric entropy measure-valued (MV) solutions, which satisfy linear equations in the space of Borel measures. The infinite-dimensional linear problem is approximated by a hierarchy of convex, finite-dimensional semidefinite programming problems, called Lasserre's hierarchy. This yields a sequence of approximations of the moments of the occupation measure associated with the parametric entropy MV solution, which is proved to converge. Several post-processing steps can then be performed from this approximate moment sequence. In particular, the graph of the solution can be reconstructed by optimizing the Christoffel-Darboux kernel associated with the approximate measure, a powerful approximation tool able to capture a large class of irregular functions. Also, for uncertainty quantification problems, several quantities of interest can be estimated, sometimes directly, such as the expectation of smooth functionals of the solutions. The performance of our approach is evaluated through numerical experiments on the inviscid Burgers equation with parametrised initial conditions or parametrised flux function.
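A small self-contained sketch of the Christoffel-Darboux reconstruction step: the graph of a function is recovered by maximizing the Christoffel function of its occupation measure over the second variable. The moments below are computed directly from a known discontinuous function rather than obtained from the Lasserre hierarchy, and the basis degree and regularization are assumptions.

```python
import numpy as np

d = 8                                            # basis degree (assumed)
pairs = [(i, j) for i in range(d + 1) for j in range(d + 1) if i + j <= d]
v = lambda x, y: np.array([x**i * y**j for i, j in pairs])

# stand-in for the approximate moments delivered by the Lasserre hierarchy:
# moments of the occupation measure of a discontinuous "solution" u on [0, 1]
u = lambda x: 1.0 if x < 0.5 else -1.0           # shock-like profile (illustrative)
xq = np.linspace(0.0, 1.0, 4001)
V = np.array([v(x, u(x)) for x in xq])
M = V.T @ V / xq.size                            # moment matrix of monomials up to degree d

Minv = np.linalg.inv(M + 1e-10 * np.eye(len(pairs)))   # small regularization
ygrid = np.linspace(-1.5, 1.5, 301)

def recover(x):
    """Reconstruct u(x) by maximizing the Christoffel function in y."""
    vals = [v(x, y) @ Minv @ v(x, y) for y in ygrid]
    return ygrid[int(np.argmin(vals))]

print([recover(x) for x in (0.2, 0.4, 0.6, 0.8)])   # should track u(x) = +1 or -1
```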
Multiphysics simulations frequently require transferring solution fields between subproblems with non-matching spatial discretizations, typically using interpolation techniques. Standard methods are usually based on measuring the closeness between points by means of the Euclidean distance, which does not account for curvature, cuts, cavities, or other non-trivial geometrical or topological features of the domain. This may lead to spurious oscillations in the interpolant in proximity to these features. To overcome this issue, we propose a modification of rescaled localized radial basis function (RL-RBF) interpolation that accounts for the geometry of the interpolation domain, thereby ensuring conformity and fidelity to geometrical and topological features. The proposed method, referred to as RL-RBF-G, relies on measuring the geodesic distance between data points. RL-RBF-G removes the spurious oscillations appearing in the RL-RBF interpolant, resulting in increased accuracy in domains with complex geometries. We demonstrate the effectiveness of RL-RBF-G interpolation through a convergence study in an idealized setting. Furthermore, we discuss the algorithmic aspects and the implementation of RL-RBF-G interpolation in a distributed-memory parallel framework, and present the results of a strong scalability test yielding nearly ideal results. Finally, we show the effectiveness of RL-RBF-G interpolation in multiphysics simulations by considering an application to a whole-heart cardiac electromechanics model.
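The geodesic-distance ingredient can be sketched as follows (plain global RBF interpolation with graph-approximated geodesic distances, not the full RL-RBF-G formulation with rescaling and localization; curve, kernel, and parameters are illustrative): shortest paths on a k-nearest-neighbour graph replace the Euclidean distance, so points facing each other across the gap of a C-shaped domain are treated as far apart.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# source and target points on a "C"-shaped curve: the Euclidean distance jumps
# across the gap of the C, the geodesic (along-the-curve) distance does not
th_src = np.sort(rng.uniform(0.15 * np.pi, 1.85 * np.pi, 200))
th_tgt = np.linspace(0.2 * np.pi, 1.8 * np.pi, 101)
th_all = np.concatenate([th_src, th_tgt])
pts = np.column_stack([np.cos(th_all), np.sin(th_all)])

# approximate geodesic distances by shortest paths on a k-nearest-neighbour graph
k = 6
dists, nbrs = cKDTree(pts).query(pts, k=k + 1)          # first neighbour = the point itself
rows = np.repeat(np.arange(len(pts)), k)
G = coo_matrix((dists[:, 1:].ravel(), (rows, nbrs[:, 1:].ravel())), shape=(len(pts),) * 2)
D = dijkstra(G, directed=False)                         # all-pairs geodesic distances

# RBF interpolation with an exponential kernel of the geodesic distance; a small
# ridge is added since positive definiteness is not guaranteed for graph distances
ns, ell = th_src.size, 0.5
f_src = np.sin(3.0 * th_src)                            # field to transfer (illustrative)
Phi = np.exp(-D[:ns, :ns] / ell)
w = np.linalg.solve(Phi + 1e-10 * np.eye(ns), f_src)
f_tgt = np.exp(-D[ns:, :ns] / ell) @ w                  # interpolate at the targets
print(np.abs(f_tgt - np.sin(3.0 * th_tgt)).max())       # small, no cross-gap pollution
```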
We propose an efficient algorithm for matching two correlated Erd\H{o}s--R\'enyi graphs with $n$ vertices whose edges are correlated through a latent vertex correspondence. When the edge density $q = n^{-\alpha+o(1)}$ for a constant $\alpha \in [0,1)$, we show that our algorithm has polynomial running time and succeeds in recovering the latent matching as long as the edge correlation is non-vanishing. This is closely related to our previous work on a polynomial-time algorithm that matches two Gaussian Wigner matrices with non-vanishing correlation, and provides the first polynomial-time random graph matching algorithm (regardless of the regime of $q$) when the edge correlation is below the square root of Otter's constant (which is $\approx 0.338$).
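For concreteness, here is a toy illustration of one standard construction of the correlated Erd\H{o}s--R\'enyi model only, not of the matching algorithm itself: two graphs are obtained by independently subsampling a common parent graph and relabelling one of them by a latent permutation; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, q, s = 500, 0.05, 0.9             # vertices, edge density, subsampling prob. (assumed)
upper = np.triu(np.ones((n, n), dtype=bool), 1)

def sym(E):                          # symmetrize an upper-triangular edge indicator
    return E | E.T

# parent graph ~ G(n, q/s); each child below is then marginally G(n, q)
parent = sym((rng.random((n, n)) < q / s) & upper)
A  = parent & sym((rng.random((n, n)) < s) & upper)     # first observed graph
B0 = parent & sym((rng.random((n, n)) < s) & upper)     # second graph, pre-relabelling

pi = rng.permutation(n)              # latent vertex correspondence
B = B0[np.ix_(pi, pi)]               # what the matching algorithm actually observes

# after undoing pi, the edge correlation is (s - q)/(1 - q), roughly s for small q
inv = np.argsort(pi)
aligned = B[np.ix_(inv, inv)]
iu = np.triu_indices(n, 1)
print(np.corrcoef(A[iu], aligned[iu])[0, 1])
```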
Classical-quantum hybrid algorithms, which combine quantum and classical computing protocols to obtain readout from quantum circuits of interest, have recently garnered significant attention. Recent progress by Lubasch et al. in a 2019 paper provides readout for solutions to the Schr\"odinger and inviscid Burgers equations by making use of a new variational quantum algorithm (VQA) which determines the ground state of a cost function expressed with a superposition of expectation values and variational parameters. In the following, we analyze additional computational prospects in which the VQA can reliably produce solutions to other PDEs that are comparable to solutions previously obtained classically, characterized with noiseless quantum simulations. To determine the range of nonlinearities that the algorithm can process for other IVPs, we study several PDEs, beginning with the Navier-Stokes equations and progressing to other equations underlying physical phenomena ranging from electromagnetism and gravitation to wave propagation, through simulations of the Einstein, Boussinesq-type, Lin-Tsien, Camassa-Holm, Drinfeld-Sokolov-Wilson (DSW), and Hunter-Saxton equations. To formulate the optimization routines that the VQA undergoes for numerical approximations of solutions obtained as readout from quantum circuits, cost functions corresponding to each PDE are provided in the supplementary section, after which simulation results from hundreds of ZGR-QFT ans\"atze are generated.
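As a schematic, library-free illustration of the variational loop only (a VQE-style toy: the two-qubit ansatz, Hamiltonian, and optimizer are stand-ins, not the ZGR-QFT ansatz or the PDE cost functions of the paper), the sketch below builds a parameterized state from rotations and an entangling gate and classically minimizes an expectation-value cost.

```python
import numpy as np
from scipy.optimize import minimize

# two-qubit toy problem: minimize <psi(theta)| H |psi(theta)>
I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
H = np.kron(Z, Z) + 0.5 * np.kron(X, I2)          # illustrative Hamiltonian
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def ry(t):                                         # single-qubit Y rotation
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def state(theta):                                  # hardware-efficient-style ansatz
    psi = np.zeros(4); psi[0] = 1.0
    psi = CNOT @ np.kron(ry(theta[0]), ry(theta[1])) @ psi
    psi = np.kron(ry(theta[2]), ry(theta[3])) @ psi
    return psi

def cost(theta):                                   # expectation value read out
    psi = state(theta)                             # (noiselessly) from the circuit
    return float(psi @ H @ psi)

res = minimize(cost, x0=np.zeros(4), method="COBYLA")
print(res.fun, np.linalg.eigvalsh(H)[0])           # variational vs exact ground energy
```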