In this paper, a high-order and fast numerical method is investigated for the time-fractional Black-Scholes equation. To deal with the typical weak initial singularity of the solution, we construct a finite difference scheme with variable time steps, in which the fractional derivative is approximated by the nonuniform Alikhanov formula combined with the sum-of-exponentials (SOE) technique. In the spatial direction, an average approximation with fourth-order accuracy is employed. The stability and the convergence of the proposed scheme, with second-order accuracy in time and fourth-order accuracy in space, are rigorously derived by the energy method. Numerical examples are given to demonstrate the theoretical results.
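As a rough illustration of why variable time steps help with the weak initial singularity (a minimal sketch, not the paper's Alikhanov/SOE scheme, which is second-order and fast): the simpler first-order L1 formula for the Caputo derivative on a graded mesh, applied to the test function u(t) = t^alpha whose derivative blows up at t = 0.

```python
import numpy as np
from math import gamma

alpha, T, N = 0.5, 1.0, 200
r = (2 - alpha) / alpha                      # grading exponent tuned to the singularity
t = T * (np.arange(N + 1) / N) ** r          # variable-step mesh, clustered near t = 0
u = t ** alpha                               # test function with the typical weak initial singularity

def caputo_l1(u, t, alpha, n):
    """Nonuniform L1 approximation of the Caputo derivative D_t^alpha u(t_n)."""
    acc = 0.0
    for j in range(1, n + 1):
        w = ((t[n] - t[j - 1]) ** (1 - alpha) - (t[n] - t[j]) ** (1 - alpha)) / (t[j] - t[j - 1])
        acc += w * (u[j] - u[j - 1])
    return acc / gamma(2 - alpha)

exact = gamma(alpha + 1)                     # Caputo derivative of t^alpha is Gamma(alpha + 1)
print(abs(caputo_l1(u, t, alpha, N) - exact))   # error shrinks as N grows despite u'(0+) = infinity
```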
The Restricted Additive Schwarz method with impedance transmission conditions, also known as the Optimised Restricted Additive Schwarz (ORAS) method, is a simple overlapping one-level parallel domain decomposition method, which has been successfully used as an iterative solver and as a preconditioner for discretized Helmholtz boundary-value problems. In this paper, we give, for the first time, a convergence analysis for ORAS as an iterative solver -- and also as a preconditioner -- for nodal finite element Helmholtz systems of any polynomial order. The analysis starts by showing (for general domain decompositions) that ORAS is an unconventional finite element approximation of a classical parallel iterative Schwarz method, formulated at the PDE (non-discrete) level. This non-discrete Schwarz method was recently analysed in [Gong, Gander, Graham, Lafontaine, Spence, arXiv 2106.05218], and the present paper gives a corresponding discrete version of this analysis. In particular, for domain decompositions in strips in 2-d, we show that, when the mesh size is small enough, ORAS inherits the convergence properties of the Schwarz method, independent of polynomial order. The proof relies on characterising the ORAS iteration in terms of discrete `impedance-to-impedance maps', which we prove (via a novel weighted finite-element error analysis) converge as $h\rightarrow 0$ in the operator norm to their non-discrete counterparts.
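For readers unfamiliar with the algebraic structure, here is a minimal sketch of a one-level RAS fixed-point iteration on a 1-D model problem; in ORAS the local matrices would additionally carry impedance (Robin) boundary blocks, which this sketch omits, and the model problem here is a Laplacian rather than a Helmholtz operator.

```python
import numpy as np

n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1-D Laplacian model problem
b = np.ones(n)

dofs = [np.arange(0, 24), np.arange(16, 40)]            # two overlapping subdomains
R = [np.eye(n)[d] for d in dofs]                        # Boolean restriction matrices
owner = np.repeat([0, 1], n // 2)                       # each dof assigned to one subdomain
D = [np.diag((owner[d] == i).astype(float)) for i, d in enumerate(dofs)]  # partition of unity

u = np.zeros(n)
for it in range(80):
    res = b - A @ u
    # local solves; in ORAS the local matrices carry impedance (Robin) boundary blocks instead
    u = u + sum(R[i].T @ (D[i] @ np.linalg.solve(R[i] @ A @ R[i].T, R[i] @ res))
                for i in range(2))
print(np.linalg.norm(b - A @ u))                        # the residual decays geometrically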
In this paper, we present a quadratic auxiliary variable approach to develop a new class of energy-preserving Runge-Kutta methods for the Korteweg-de Vries equation. The quadratic auxiliary variable approach is first used to reformulate the original model into an equivalent system, which transforms the energy conservation law of the Korteweg-de Vries equation into two quadratic invariants of the reformulated system. Then symplectic Runge-Kutta methods are applied directly to the reformulated model to obtain a new class of time semi-discrete schemes for the original problem. Under the consistent initial condition, the proposed methods are rigorously proved to maintain the original energy conservation law of the Korteweg-de Vries equation. In addition, the Fourier pseudo-spectral method is used for spatial discretization, resulting in fully discrete energy-preserving schemes. To implement the proposed methods efficiently, we present an iterative technique which not only greatly reduces the computational cost but also preserves the structure in practice. Extensive numerical results confirm the expected order of accuracy, the conservation properties, and the efficiency of the proposed algorithms.
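A minimal sketch of the two building blocks, applied directly to KdV rather than to the paper's quadratic-auxiliary-variable reformulation: a Fourier pseudo-spectral semi-discretization of u_t = -u u_x - u_xxx advanced by the implicit midpoint rule (the one-stage symplectic Gauss Runge-Kutta method), with the stage equation solved by a plain fixed-point iteration (the paper's iterative technique is more efficient). The step size is chosen small enough that the fixed-point iteration contracts.

```python
import numpy as np

N, tau, steps = 32, 1e-4, 200
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers on the 2*pi-periodic domain

def f(u):                                  # KdV right-hand side u_t = -u u_x - u_xxx
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))
    uxxx = np.real(np.fft.ifft((1j * k) ** 3 * u_hat))
    return -u * ux - uxxx

u = np.cos(x)
p0 = np.sum(u ** 2)                        # quadratic invariant (discrete momentum)
for n in range(steps):
    v = u
    for _ in range(60):                    # fixed-point iteration for the midpoint stage
        v = u + tau * f(0.5 * (u + v))
    u = v
print(abs(np.sum(u ** 2) - p0))            # midpoint preserves quadratic invariants up to aliasing/solver error
```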
An implicit Euler finite-volume scheme for a parabolic reaction-diffusion system modeling biofilm growth is analyzed and implemented. The system consists of a degenerate-singular diffusion equation for the biomass fraction, which is coupled to a diffusion equation for the nutrient concentration, and it is solved in a bounded domain with Dirichlet boundary conditions. By transforming the biomass fraction to an entropy-type variable, it is shown that the numerical scheme preserves the lower and upper bounds of the biomass fraction. The existence and uniqueness of a discrete solution and the convergence of the scheme are proved. Numerical experiments in one and two space dimensions illustrate, respectively, the rate of convergence in space of our scheme and the temporal evolution of the biomass fraction and the nutrient concentration.
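A minimal 1-D sketch of the bound-preserving mechanism only, not the paper's coupled biomass-nutrient scheme: the unknown in the implicit Euler step is an entropy-type variable w, and the biomass fraction recovered as b = 1/(1+e^{-w}) lies in (0,1) by construction. The nonlinearity F and the Dirichlet boundary value below are assumed stand-ins, and the nonlinear system is solved with a black-box root finder rather than the paper's approach.

```python
import numpy as np
from scipy.optimize import fsolve

M, dx, dt = 50, 1.0 / 50, 1e-4
b = lambda w: 1.0 / (1.0 + np.exp(-w))       # entropy-type change of variable: b(w) lies in (0, 1)
F = lambda s: s ** 2                         # assumed stand-in nonlinearity, degenerate at b = 0
b_bc = 0.1                                   # assumed Dirichlet boundary value

def residual(w, b_old):                      # one implicit Euler / finite-volume step for d_t b = d_xx F(b)
    Fb = F(b(w))
    Fb_ext = np.concatenate([[F(b_bc)], Fb, [F(b_bc)]])   # ghost values from the boundary data
    lap = (Fb_ext[2:] - 2 * Fb_ext[1:-1] + Fb_ext[:-2]) / dx ** 2
    return (b(w) - b_old) / dt - lap

x = np.linspace(0, 1, M)
b_old = 0.1 + 0.8 * np.exp(-100 * (x - 0.5) ** 2)         # initial biomass bump
w = fsolve(residual, np.log(b_old / (1 - b_old)), args=(b_old,))
print(b(w).min(), b(w).max())                # the recovered biomass fraction stays strictly inside (0, 1)
```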
In this paper we propose a modified Lie-type spectral splitting approximation for nonlinear Schrödinger equations with an external potential of quadratic type. We prove that the solution can be approximated by solving the linear problem and treating the nonlinear term separately, with a rigorous estimate of the remainder term. Furthermore, we show by means of numerical experiments that this modified approximation is more efficient than the standard one.
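For context, a sketch of the standard (unmodified) first-order Lie split-step Fourier method for a cubic Schrödinger equation with a quadratic potential, i u_t = -(1/2) u_xx + (x^2/2) u + |u|^2 u; this is the baseline that a modified splitting improves on, and the particular equation and initial datum are assumed for illustration.

```python
import numpy as np

N, L, tau = 256, 20.0, 1e-3
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
u = np.exp(-x ** 2) + 0j                     # assumed Gaussian initial state
mass0 = np.sum(np.abs(u) ** 2)

for n in range(1000):
    u = np.fft.ifft(np.exp(-0.5j * tau * k ** 2) * np.fft.fft(u))   # kinetic substep, exact in Fourier space
    u = np.exp(-1j * tau * (0.5 * x ** 2 + np.abs(u) ** 2)) * u     # potential + nonlinear substep, exact pointwise

print(abs(np.sum(np.abs(u) ** 2) - mass0))   # both substeps are unitary, so the discrete mass is conserved
```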
In this paper, we first establish well-posedness results for one-dimensional McKean-Vlasov stochastic differential equations (SDEs) and related particle systems with a measure-dependent drift coefficient that is discontinuous in the spatial component, and a diffusion coefficient which is a Lipschitz function of the state only. We require only a fairly mild condition on the diffusion coefficient, namely that it be non-zero at points of discontinuity of the drift, while we need to impose certain structural assumptions on the measure-dependence of the drift. Second, we study fully implementable Euler-Maruyama type schemes for the particle system to approximate the solution of the one-dimensional McKean-Vlasov SDE. Here, we prove strong convergence results in terms of the number of time-steps and the number of particles. Due to the discontinuity of the drift, the convergence analysis is non-standard, and the usual strong convergence order $1/2$ known for the Lipschitz case cannot be recovered for all schemes.
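A sketch of the kind of fully implementable Euler-Maruyama particle scheme in question, with an assumed example drift that is discontinuous at x = 0 and depends on the measure through the empirical mean; the diffusion coefficient is constant, hence non-zero at the discontinuity.

```python
import numpy as np

rng = np.random.default_rng(0)
Np, Nt, T = 2000, 500, 1.0
dt = T / Nt
X = rng.normal(size=Np)                      # i.i.d. initial particles

def drift(x, particles):
    # assumed example: discontinuous at x = 0, measure-dependent through the empirical mean
    return -np.sign(x) + 0.5 * particles.mean()

for n in range(Nt):
    dW = rng.normal(scale=np.sqrt(dt), size=Np)
    X = X + drift(X, X) * dt + 1.0 * dW      # constant (Lipschitz) diffusion, non-zero at the discontinuity

print(X.mean(), X.std())                     # moments of the empirical law approximating the MV solution
```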
We propose, analyze, and test a novel continuous data assimilation two-phase flow algorithm for reservoir simulation. We show that the solutions of the algorithm, constructed using coarse mesh observations, converge at an exponential rate in time to the corresponding exact reference solution of the two-phase model. More precisely, we obtain a stability estimate which illustrates an exponential decay of the residual error between the reference and approximate solution, until the error hits a threshold depending on the order of data resolution. Numerical computations are included to demonstrate the effectiveness of this approach, as well as variants with data on sub-domains. In particular, we demonstrate numerically that synchronization is achieved for data collected from a small fraction of the domain.
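The nudging mechanism behind continuous data assimilation, sketched here on a 1-D heat equation rather than the paper's two-phase flow model: coarse-mesh observations I_h(u) of the hidden reference solution steer the assimilated solution v, and the error decays in time. The observation operator, nudging parameter mu, and equation below are assumptions for illustration.

```python
import numpy as np

N, nu, mu, dt = 200, 0.05, 50.0, 1e-4
x = np.linspace(0, 1, N, endpoint=False)

def lap(w):                                  # periodic second-difference Laplacian
    return (np.roll(w, 1) - 2 * w + np.roll(w, -1)) * N ** 2

def I_h(w, m=10):                            # coarse-mesh observation: piecewise-constant local averages
    return np.repeat(w.reshape(m, -1).mean(axis=1), N // m)

u = np.sin(2 * np.pi * x) + 0.3 * np.sin(8 * np.pi * x)   # hidden reference solution
v = np.zeros(N)                                            # assimilated solution, wrong initial data
for n in range(5000):
    u = u + dt * nu * lap(u)                               # reference dynamics
    v = v + dt * (nu * lap(v) - mu * (I_h(v) - I_h(u)))    # same dynamics plus the nudging term
    if n % 1000 == 0:
        print(np.abs(u - v).max())           # error decays until it reaches the data-resolution threshold
```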
The Weakly-Compressible Smoothed Particle Hydrodynamics (WCSPH) method is a Lagrangian method that is typically used for the simulation of incompressible fluids. While developing an SPH-based scheme or solver, researchers often verify their code against exact solutions, solutions from other numerical techniques, or experimental data. This typically requires a significant amount of computational effort, does not test the full capabilities of the solver, and often yields little insight into its convergence. In this paper we introduce the method of manufactured solutions (MMS) to comprehensively test a WCSPH-based solver in a robust and efficient manner. The MMS is well established in the context of mesh-based numerical solvers. We show how the method can be applied in the context of Lagrangian WCSPH solvers to test the convergence and accuracy of the solver in two and three dimensions, systematically identify any problems with the solver, and test the boundary conditions in an efficient way. We demonstrate this for both a traditional WCSPH scheme and some recently proposed second-order convergent WCSPH schemes. Our code is open source and the results of the manuscript are reproducible.
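The core MMS workflow, sketched for a 1-D viscous Burgers operator as an assumed stand-in for the WCSPH momentum equation: choose a solution, push it through the continuous operator symbolically, and feed the resulting source term to the solver, whose output can then be compared against the manufactured solution at any resolution.

```python
import sympy as sp

x, t, nu = sp.symbols('x t nu')
u_m = sp.sin(sp.pi * x) * sp.exp(-t)                  # manufactured velocity field (assumed)

# the residual of the continuous operator applied to u_m becomes the source term
residual = sp.diff(u_m, t) + u_m * sp.diff(u_m, x) - nu * sp.diff(u_m, x, 2)
source = sp.simplify(residual)
print(source)

# lambdify to evaluate the source at particle positions inside an SPH time loop
src = sp.lambdify((x, t, nu), source, 'numpy')
print(src(0.25, 0.0, 0.1))
```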
We present a new discretization for advection-diffusion problems with Robin boundary conditions on complex time-dependent domains. The method is based on second order cut cell finite volume methods introduced by Bochkov et al. to discretize the Laplace operator and Robin boundary condition. To overcome the small cell problem, we use a splitting scheme that uses a semi-Lagrangian method to treat advection. We demonstrate second order accuracy in the $L^1$, $L^2$, and $L^\infty$ norms for both analytic test problems and numerical convergence studies. We also demonstrate the ability of the scheme to handle conversion of one concentration field to another across a moving boundary.
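A 1-D sketch of the semi-Lagrangian advection substep (with an assumed constant velocity): each grid point is traced back along the characteristics and the old field is interpolated at the departure point, so the time step is not constrained by arbitrarily small cut cells.

```python
import numpy as np

N, a, dt = 200, 1.0, 0.02
x = np.linspace(0, 1, N, endpoint=False)
c = np.exp(-200 * (x - 0.3) ** 2)           # initial concentration profile

for n in range(25):
    x_dep = (x - a * dt) % 1.0              # departure points traced back along the characteristics
    c = np.interp(x_dep, x, c, period=1.0)  # evaluate the old field at the departure points

print(c.max())                              # the profile is transported; linear interpolation adds some smoothing
```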
Tumor detection in biomedical imaging is a time-consuming process for medical professionals and is not without errors. Thus in recent decades, researchers have developed algorithmic techniques for image processing using a wide variety of mathematical methods, such as statistical modeling, variational techniques, and machine learning. In this paper, we propose a semi-automatic method based on graph cuts for liver segmentation of 2D CT scans into three labels denoting healthy, vessel, or tumor tissue. First, we create a novel feature vector for each pixel, consisting of the 59 intensity values in the time-series data, and we propose a simplified perimeter cost term in the energy functional. We normalize the data and perimeter terms in the functional to expedite the graph cut without having to optimize the scaling parameter $\lambda$. In place of a training process, predetermined tissue means are computed based on sample regions identified by expert radiologists. The proposed method also has the advantage of being relatively simple to implement computationally. It was evaluated against the ground truth on a clinical CT dataset of 10 tumors and yielded segmentations with a mean Dice similarity coefficient (DSC) of 0.77 and a mean volume overlap error (VOE) of 36.7%. The average processing time was 1.25 minutes per slice.
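A toy binary version of the graph-cut step (the actual method uses three labels and 59-dimensional time-series features per pixel): data terms built from predetermined tissue means plus a perimeter term on neighbouring pixels, minimized exactly by a min-cut. The toy image, means, and weight below are assumptions for illustration.

```python
import networkx as nx
import numpy as np

img = np.array([[10, 12, 11, 60], [11, 13, 61, 62],
                [12, 60, 63, 61], [61, 62, 60, 63]], float)
m_bg, m_fg, lam = 11.0, 61.0, 5.0                      # assumed tissue means and perimeter weight

G = nx.DiGraph()
for (i, j), v in np.ndenumerate(img):
    G.add_edge('s', (i, j), capacity=(v - m_bg) ** 2)  # paid if the pixel ends up background
    G.add_edge((i, j), 't', capacity=(v - m_fg) ** 2)  # paid if the pixel ends up foreground
    for di, dj in ((0, 1), (1, 0)):                    # 4-neighbour perimeter term
        if i + di < 4 and j + dj < 4:
            G.add_edge((i, j), (i + di, j + dj), capacity=lam)
            G.add_edge((i + di, j + dj), (i, j), capacity=lam)

cut_value, (S, T) = nx.minimum_cut(G, 's', 't')
labels = np.zeros(img.shape, int)
for (i, j) in set(S) - {'s'}:
    labels[i, j] = 1                                   # source-side pixels get the foreground label
print(labels)
```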
We propose a new method of estimation in topic models that is not a variation on the existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of A, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p), and number of topics (K); both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics K, while the competing methods are given the correct value in our simulations.