A novel scheme based on third-order Weighted Essentially Non-Oscillatory (WENO) reconstructions is presented. It attains unconditionally optimal accuracy when the data are smooth enough, even in the presence of critical points, and second-order accuracy if a discontinuity crosses the data. The key to these properties is the inclusion of an additional node in the data stencil, which is used only in the computation of the weights measuring the smoothness. The accuracy properties of the scheme are proven in detail, and several numerical experiments are presented, showing that it is more efficient in terms of error reduction versus CPU time than its traditional third-order counterparts, as well as several higher-order WENO schemes from the literature.
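For reference, the following is a minimal sketch of the classical third-order WENO reconstruction (standard Jiang--Shu weights) that such schemes build on; the abstract's extra-node weight design is not reproduced here, and the data values are illustrative.

```python
import numpy as np

def weno3_reconstruct(vm, v0, vp, eps=1e-6):
    """Classical third-order WENO value at the right interface x_{i+1/2},
    from the three point values v_{i-1}, v_i, v_{i+1}."""
    # Candidate second-order reconstructions on the two substencils
    p0 = -0.5 * vm + 1.5 * v0      # stencil {i-1, i}
    p1 = 0.5 * v0 + 0.5 * vp       # stencil {i, i+1}
    # Smoothness indicators and classical nonlinear weights
    b0 = (v0 - vm) ** 2
    b1 = (vp - v0) ** 2
    a0 = (1.0 / 3.0) / (eps + b0) ** 2
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
    return w0 * p0 + w1 * p1

# Smooth (linear) data: both substencil reconstructions, and hence the
# WENO value, are exact at the interface x = h/2.
h = 0.1
x = np.array([-h, 0.0, h])
v = 2.0 * x + 1.0
val = weno3_reconstruct(v[0], v[1], v[2])   # exact value is 2*(h/2)+1 = 1.1
```

Near a discontinuity the smoothness indicators drive the weight of the crossed substencil toward zero, which is what keeps the reconstruction non-oscillatory.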
This paper proposes a systematic and novel component-level co-rotational (CR) framework for upgrading existing 3D continuum finite elements to flexible multibody analysis. Without resorting to any model reduction technique, high efficiency is achieved through careful choices in both the modeling and the numerical implementation phases. In the modeling phase, as in conventional 3D nonlinear finite element analysis, the nodal absolute coordinates are used as the system generalized coordinates, so that simple formulations of the inertia force terms can be obtained. For the elastic force terms, inspired by the existing floating frame of reference formulation (FFRF) and the conventional element-level CR formulation, a component-level CR modeling strategy is developed. By combining the Schur complement theory with a full exploitation of the structure of the component-level CR model, an extremely efficient procedure is developed that transforms the linear equations arising in each Newton-Raphson iteration into linear systems with a constant coefficient matrix. The coefficient matrix can therefore be pre-computed and decomposed only once; at all subsequent time steps only back substitutions are needed, which avoids frequently updating the Jacobian matrix and directly solving the large-scale linearized equations in each iteration. Multiple examples are presented to demonstrate the performance of the proposed framework.
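The factor-once/back-substitute-many pattern described above can be sketched as follows; here `K` is a hypothetical constant SPD iteration matrix standing in for the one produced by the CR framework, and the triangular solves are written out explicitly to emphasize that only cheap back substitutions remain after the single factorization.

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L."""
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U."""
    x = np.zeros_like(y)
    for i in range(len(y) - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Hypothetical constant SPD coefficient matrix (illustrative data)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
K = A @ A.T + 50 * np.eye(50)

Lf = np.linalg.cholesky(K)   # factor ONCE, before time stepping

# Every Newton iteration at every subsequent time step reuses the factor:
# two triangular solves instead of a fresh factorization.
residual = rng.standard_normal(50)
dx = back_sub(Lf.T, forward_sub(Lf, residual))
```

Factorization costs $O(n^3)$ while each back substitution costs only $O(n^2)$, which is the source of the claimed savings over refactorizing in every iteration.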
This paper focuses on numerical schemes for multiple-delay stochastic differential equations with partially H\"older continuous drifts and locally H\"older continuous diffusion coefficients. To handle the superlinear terms in the coefficients, the truncated Euler-Maruyama scheme is employed. Under the given conditions, convergence rates at time $T$ in both the $\mathcal{L}^{1}$ and $\mathcal{L}^{2}$ senses are obtained by virtue of the Yamada-Watanabe approximation technique. Moreover, convergence rates over the finite time interval $[0,T]$ are also obtained. Notably, the convergence rates are not affected by the number of delay variables. Finally, we perform numerical experiments on a stochastic volatility model to verify the reliability of the theoretical results.
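A minimal sketch of the truncated Euler-Maruyama idea on a toy scalar SDE follows. The delay structure is omitted, the coefficients are illustrative, and the truncation level is held fixed here, whereas in the actual scheme it is a function of the step size that grows as the step shrinks.

```python
import numpy as np

def truncated_em(x0, b, sigma, T, n, bound):
    """Truncated Euler-Maruyama: the state is projected onto the ball of
    radius `bound` before each step, taming superlinear coefficients."""
    dt = T / n
    rng = np.random.default_rng(42)
    x = x0
    for _ in range(n):
        # truncation map: pi(x) = min(|x|, bound) * x / |x|
        xt = x if abs(x) <= bound else bound * np.sign(x)
        x = xt + b(xt) * dt + sigma(xt) * np.sqrt(dt) * rng.standard_normal()
    return x

# Toy SDE with superlinear drift and Holder-type diffusion (illustrative):
# dX_t = (X_t - X_t^3) dt + |X_t|^{3/4} dW_t
xT = truncated_em(1.0, lambda x: x - x**3, lambda x: abs(x)**0.75,
                  T=1.0, n=1000, bound=10.0)
```

Without the truncation, the explicit Euler-Maruyama iterates of such superlinear SDEs are known to blow up in moments; the projection step is what restores moment bounds and hence strong convergence.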
We propose Texture Edge detection using Patch consensus (TEP), a training-free method to detect texture boundaries. We propose a simple new way to identify the location of a texture edge using the consensus of segmented local patch information. On the boundary, the distinction between textures is typically not clear from local patch information alone, but neighborhood consensus gives a clear indication of the boundary. We utilize local patches, and their responses against neighboring regions, to emphasize the similarities and the differences across textures. Segmenting the response further emphasizes the edge location, and the neighborhood voting provides consensus and stabilizes the edge detection. We analyze texture as a stationary process to give insight into the trade-off between the patch width parameter and the quality of edge detection. We derive a necessary condition for textures to be distinguished and analyze the patch width with respect to the scale of the textures. Various experiments are presented to validate the proposed model.
Randomized Controlled Trials (RCTs) may suffer from limited scope. In particular, samples may be unrepresentative: some RCTs over- or under-sample individuals with certain characteristics compared to the target population, about which one wants conclusions on treatment effectiveness. Re-weighting trial individuals to match the target population can improve the estimation of the treatment effect. In this work, we establish exact expressions for the bias and variance of such reweighting procedures -- also called Inverse Propensity of Sampling Weighting (IPSW) -- in the presence of categorical covariates, for any sample size. These results allow us to compare the theoretical performance of different versions of the IPSW estimate. Moreover, our results show how the performance (bias, variance, and quadratic risk) of the IPSW estimate depends on the two sample sizes (RCT and target population). A by-product of our work is a proof of consistency of the IPSW estimate. The results also reveal that IPSW performance is improved when the trial probability of being treated is estimated (rather than using its oracle counterpart). In addition, we study the choice of variables: how including covariates that are not necessary for identifiability of the causal effect may impact the asymptotic variance. Including covariates that are shifted between the two samples but are not treatment effect modifiers increases the variance, while covariates that are treatment effect modifiers but not shifted do not. We illustrate all the takeaways in a didactic example and in a semi-synthetic simulation inspired by critical care medicine.
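A minimal numerical sketch of IPSW with a single categorical covariate is given below; both the sampling weights and the trial propensity are estimated empirically from the trial data, as the abstract suggests is beneficial. The data, strata, and effect sizes are illustrative.

```python
import numpy as np

def ipsw_estimate(x, t, y, p_target):
    """IPSW estimate of the target-population ATE with one categorical
    covariate x (dict p_target maps each category to its target frequency).
    Sampling weights and trial propensity are both estimated empirically."""
    n = len(y)
    est = 0.0
    for v, p_tar in p_target.items():
        idx = (x == v)
        w = p_tar / idx.mean()   # target / trial frequency ratio
        e = t[idx].mean()        # estimated within-stratum trial propensity
        contrib = t[idx] * y[idx] / e - (1 - t[idx]) * y[idx] / (1 - e)
        est += w * contrib.sum() / n
    return est

# Stratum-dependent treatment effect: tau = 1 for x=0, tau = 3 for x=1,
# with deterministic outcomes y = 1 + tau(x) * t.
x = np.array([0, 0, 0, 0, 1, 1])
t = np.array([1, 1, 0, 0, 1, 0])
y = np.array([2.0, 2.0, 1.0, 1.0, 4.0, 1.0])
# Target population is 50/50 over x, while the trial over-samples x=0,
# so the unweighted trial would under-weight the large effect at x=1.
ate = ipsw_estimate(x, t, y, {0: 0.5, 1: 0.5})   # 0.5*1 + 0.5*3 = 2.0
```

With estimated weights and propensities, this reduces algebraically to a stratified estimator reweighted to the target frequencies, which is why it recovers the target ATE exactly on deterministic data.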
The well-conditioned multi-product formula (MPF), proposed by [Low, Kliuchnikov, and Wiebe, 2019], is a simple high-order time-independent Hamiltonian simulation algorithm that implements a linear combination of standard product formulas of low order. While the MPF aims to simultaneously exploit commutator scaling among Hamiltonians and achieve near-optimal time and precision dependence, its lack of a rigorous error bound on the nested commutators renders its practical advantage ambiguous. In this work, we conduct a rigorous complexity analysis of the well-conditioned MPF, demonstrating explicit commutator scaling and near-optimal time and precision dependence at the same time. Using our improved complexity analysis, we present several applications of practical interest where the MPF based on a second-order product formula can achieve a polynomial speedup in both system size and evolution time, as well as an exponential speedup in precision, compared to second-order and even higher-order product formulas. Compared to post-Trotter methods, the MPF based on a second-order product formula can achieve polynomially better scaling in system size, with only poly-logarithmic overhead in evolution time and precision.
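The linear-combination idea can be illustrated with the smallest nontrivial MPF: two second-order (Strang) product formulas combined with coefficients $(-1/3, 4/3)$, which cancel the leading $t^3$ error because the Strang error expansion contains only odd powers of $t$. The toy $2\times 2$ Hamiltonian below is illustrative, not from the paper.

```python
import numpy as np

def herm_expm(H, t):
    """e^{-i H t} for Hermitian H via eigendecomposition."""
    lam, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * lam * t)) @ V.conj().T

# Toy Hamiltonian H = A + B with non-commuting parts (Pauli X and Z)
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[1, 0], [0, -1]], dtype=complex)
H = A + B

def S2(t):
    """Second-order (Strang) product formula for exp(-i(A+B)t)."""
    return herm_expm(A, t / 2) @ herm_expm(B, t) @ herm_expm(A, t / 2)

t = 0.1
U = herm_expm(H, t)   # exact propagator
# Two-term MPF: c1 + c2 = 1 and c1 + c2/4 = 0 give (c1, c2) = (-1/3, 4/3),
# eliminating the t^3 term and leaving an O(t^5) error.
U_mpf = (-1.0 / 3.0) * S2(t) + (4.0 / 3.0) * (S2(t / 2) @ S2(t / 2))

err_s2 = np.linalg.norm(S2(t) - U)
err_mpf = np.linalg.norm(U_mpf - U)
```

Note that the combination is not unitary, which is why in the quantum-algorithm setting it is implemented via a linear combination of unitaries rather than as a literal matrix sum.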
We propose an implicit Discontinuous Galerkin (DG) discretization for incompressible two-phase flows based on an artificial compressibility formulation. The conservative level set (CLS) method is employed, in combination with a reinitialization procedure, to capture the moving interface. A projection method based on the L-stable TR-BDF2 scheme is adopted for the time discretization of both the Navier-Stokes equations and the level set equation. Adaptive Mesh Refinement (AMR) is employed to enhance the resolution near the interface between the two fluids. The effectiveness of the proposed approach is shown in a number of classical benchmarks. A specific analysis of the influence of different choices of the mixture viscosity is also carried out.
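The TR-BDF2 scheme mentioned above combines a trapezoidal half-step with a BDF2-type step. A scalar sketch on the linear test problem $u' = \lambda u$ (standard coefficients with $\gamma = 2-\sqrt{2}$, each implicit stage solved in closed form) is shown below; this is only an illustration of the time integrator, not of the DG two-phase solver.

```python
import numpy as np

def trbdf2_linear(lam, u0, T, n):
    """TR-BDF2 for u' = lam * u with closed-form implicit stages."""
    g = 2.0 - np.sqrt(2.0)   # standard choice: L-stable, second order
    dt = T / n
    u = u0
    for _ in range(n):
        # Trapezoidal stage to t_n + g*dt
        ug = u * (1 + g * dt * lam / 2) / (1 - g * dt * lam / 2)
        # BDF2-type stage to t_{n+1}
        u = (ug / (g * (2 - g)) - (1 - g) ** 2 / (g * (2 - g)) * u) \
            / (1 - (1 - g) / (2 - g) * dt * lam)
    return u

u_num = trbdf2_linear(-1.0, 1.0, T=1.0, n=100)
err = abs(u_num - np.exp(-1.0))   # second-order accurate
```

L-stability of this scheme is what makes it attractive for the stiff systems produced by implicit DG discretizations.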
Backtracking linesearch is the de facto approach for minimizing continuously differentiable functions with locally Lipschitz gradient. In recent years, it has been shown that in the convex setting it is possible to avoid linesearch altogether and to allow the stepsize to adapt based on a local smoothness estimate, without any backtracks or evaluations of the function value. In this work we propose an adaptive proximal gradient method, adaPG, that uses novel estimates of the local smoothness modulus, which lead to less conservative stepsize updates, and that can additionally cope with nonsmooth terms. This idea is extended to the primal-dual setting, where an adaptive three-term primal-dual algorithm, adaPD, is proposed that can be viewed as an extension of the PDHG method. Moreover, in this setting the "essentially" fully adaptive variant adaPD$^+$ is proposed, which avoids evaluating the linear operator norm by invoking a backtracking procedure that, remarkably, does not require extra gradient evaluations. Numerical simulations demonstrate the effectiveness of the proposed algorithms compared to the state of the art.
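The flavor of a linesearch-free adaptive proximal gradient method can be sketched as follows. The stepsize rule here is a simplified one in the spirit of earlier adaptive gradient methods (growth capped by $\sqrt{1+\theta_k}$, and a local inverse-smoothness estimate from successive gradients); it is not the paper's exact adaPG update, and the lasso test problem is illustrative.

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def adaptive_prox_grad(grad_f, prox_g, x0, gamma0=1e-3, iters=1000):
    """Proximal gradient with an adaptive, linesearch-free stepsize
    (simplified rule, for illustration only)."""
    x_prev, grad_prev, gamma_prev, theta = x0, grad_f(x0), gamma0, 0.0
    x = prox_g(x_prev - gamma_prev * grad_prev, gamma_prev)
    for _ in range(iters):
        grad = grad_f(x)
        num = np.linalg.norm(x - x_prev)
        den = np.linalg.norm(grad - grad_prev)
        # local estimate of 1/(2L) from successive iterates/gradients
        local = num / (2 * den) if den > 0 else np.inf
        gamma = min(np.sqrt(1 + theta) * gamma_prev, local)
        x_next = prox_g(x - gamma * grad, gamma)
        theta = gamma / gamma_prev
        x_prev, grad_prev, gamma_prev, x = x, grad, gamma, x_next
    return x

# Tiny lasso problem: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 1.0])
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda z, g: soft_threshold(z, g * lam)
x_star = adaptive_prox_grad(grad_f, prox_g, np.zeros(2))
# separable closed form: x_i = soft_threshold(a_i*b_i, lam)/a_i^2
```

No function values and no backtracking loop appear anywhere: the stepsize is driven entirely by gradient differences, which is the point of this line of work.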
The random batch method (RBM), proposed in [Jin et al., J. Comput. Phys., 400 (2020), 108877] for large interacting particle systems, is an efficient and highly scalable algorithm, with linear complexity in the number of particles, for $N$-particle interacting systems and their mean-field limits when $N$ is large. We consider in this work the quantitative error estimate of RBM toward its mean-field limit, the Fokker-Planck equation. Under mild assumptions, we obtain a uniform-in-time $O(\tau^2 + 1/N)$ bound on the scaled relative entropy between the joint law of the random batch particles and the tensorized law at the mean-field limit, where $\tau$ is the time step size and $N$ is the number of particles. Therefore, we improve the existing rate in the discretization step size from $O(\sqrt{\tau})$ to $O(\tau)$ in terms of the Wasserstein distance.
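The mechanics of one RBM step can be sketched directly: shuffle the particles, partition them into small batches, and let each particle interact only within its batch, with the batch average replacing the full mean-field sum. The interaction kernel and parameters below are illustrative.

```python
import numpy as np

def rbm_step(x, dt, batch_size, kernel, sigma, rng):
    """One random batch step for N particles on the line: O(N * p) kernel
    evaluations per step instead of the O(N^2) of the full interaction."""
    N = len(x)
    perm = rng.permutation(N)
    drift = np.zeros(N)
    for start in range(0, N, batch_size):
        idx = perm[start:start + batch_size]
        p = len(idx)
        if p < 2:
            continue   # a singleton batch has no interactions
        for i in idx:
            # batch average with 1/(p-1), mimicking the 1/(N-1) mean field
            drift[i] = sum(kernel(x[i], x[j]) for j in idx if j != i) / (p - 1)
    return x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
# attractive interaction K(x, y) = y - x (illustrative choice)
for _ in range(50):
    x = rbm_step(x, dt=0.01, batch_size=2, kernel=lambda a, b: b - a,
                 sigma=0.1, rng=rng)
```

Re-randomizing the batches at every step is essential: it is the averaging over random partitions that makes the cheap batch interaction consistent with the full mean-field dynamics.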
We propose an operator learning approach to accelerate geometric Markov chain Monte Carlo (MCMC) for solving infinite-dimensional nonlinear Bayesian inverse problems. While geometric MCMC employs high-quality proposals that adapt to posterior local geometry, it requires computing local gradient and Hessian information of the log-likelihood, incurring a high cost when the parameter-to-observable (PtO) map is defined through expensive model simulations. We consider a delayed-acceptance geometric MCMC method driven by a neural operator surrogate of the PtO map, where the proposal is designed to exploit fast surrogate approximations of the log-likelihood and, simultaneously, its gradient and Hessian. To achieve a substantial speedup, the surrogate needs to be accurate in predicting both the observable and its parametric derivative (the derivative of the observable with respect to the parameter). Training such a surrogate via conventional operator learning using input--output samples often demands a prohibitively large number of model simulations. In this work, we present an extension of derivative-informed operator learning [O'Leary-Roseberry et al., J. Comput. Phys., 496 (2024)] using input--output--derivative training samples. Such a learning method leads to derivative-informed neural operator (DINO) surrogates that accurately predict the observable and its parametric derivative at a significantly lower training cost than the conventional method. Cost and error analysis for reduced basis DINO surrogates are provided. Numerical studies on PDE-constrained Bayesian inversion demonstrate that DINO-driven MCMC generates effective posterior samples 3--9 times faster than geometric MCMC and 60--97 times faster than prior geometry-based MCMC. Furthermore, the training cost of DINO surrogates breaks even after collecting merely 10--25 effective posterior samples compared to geometric MCMC.
Irksome is a library, based on the Unified Form Language (UFL), that enables automated generation of Runge--Kutta methods for time-stepping finite element spatial discretizations of partial differential equations (PDEs). It allows users to express semidiscrete forms of a PDE and generates UFL representations of the stage-coupled variational problems to be solved at each time step. The Firedrake package then generates efficient code for evaluating these variational problems and gives users access to a wide range of options for deploying efficient algebraic solvers in PETSc. In this paper, we describe several recent advances in Irksome. These include alternate formulations of the Runge--Kutta time-stepping methods and optimized support for diagonally implicit Runge--Kutta (DIRK) methods. Additionally, we present new and improved tools for building preconditioners for the resulting linear and linearized systems, demonstrating that these can lead to efficient approaches for solving fully implicit Runge--Kutta discretizations. The new features are illustrated through a sequence of computational examples that demonstrate the high-level interface and the solver performance obtained.
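To illustrate why DIRK methods are attractive targets for optimized support: their lower-triangular Butcher matrix lets the stages be solved one at a time, each requiring a single linear solve, instead of one large stage-coupled system. The generic sketch below (plain NumPy on a linear semidiscrete system, using the one-stage implicit midpoint tableau) is not Irksome's or Firedrake's API.

```python
import numpy as np

def dirk_solve_linear(A_tab, b_tab, L, u0, T, n):
    """Advance u' = L u with a DIRK tableau (A_tab, b_tab): stages are
    solved sequentially, one linear solve per stage."""
    s = len(b_tab)
    dt = T / n
    u = u0.copy()
    I = np.eye(len(u0))
    for _ in range(n):
        K = np.zeros((s, len(u0)))
        for i in range(s):
            # (I - dt*a_ii*L) k_i = L*(u + dt*sum_{j<i} a_ij k_j)
            rhs = u + dt * sum(A_tab[i][j] * K[j] for j in range(i))
            K[i] = np.linalg.solve(I - dt * A_tab[i][i] * L, L @ rhs)
        u = u + dt * sum(b_tab[i] * K[i] for i in range(s))
    return u

# Implicit midpoint = one-stage DIRK of order 2; toy 2x2 "stiffness" matrix
L = np.array([[-2.0, 1.0], [1.0, -2.0]])
u0 = np.array([1.0, 0.0])
u_num = dirk_solve_linear([[0.5]], [1.0], L, u0, T=1.0, n=200)
# exact solution via the eigen-decomposition of L
u_ex = 0.5 * np.exp(-1.0) * np.array([1.0, 1.0]) \
     + 0.5 * np.exp(-3.0) * np.array([1.0, -1.0])
err = np.linalg.norm(u_num - u_ex)
```

In the fully implicit (non-DIRK) case all $s$ stages couple into one system of size $s$ times the spatial dimension, which is precisely where the preconditioning tools described in the abstract come into play.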