We develop several statistical tests of the determinant of the diffusion coefficient of a stochastic differential equation, based on discrete observations on a time interval $[0,T]$ sampled with a time step $\Delta$. Our main contribution is to control the Type I and Type II errors of the tests in a non-asymptotic setting, i.e. when the number of observations and the time step are fixed. The test statistics are calculated from the process increments. In dimension 1, the density of the test statistic is explicit. In dimension 2, the test statistic has no explicit density, but upper and lower bounds are proved. We also propose a multiple testing procedure in dimension greater than 2. Every test is proved to be of a given non-asymptotic level, and separability conditions to control its power are also provided. A numerical study illustrates the properties of the tests for stochastic processes with known or estimated drifts.
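For intuition, a minimal sketch of a one-dimensional increment-based test follows (our own illustration, not the paper's exact statistic; it assumes constant diffusion and negligible drift over each sampling step, and the names `increment_test` and `sigma0` are hypothetical). Under these assumptions the normalized sum of squared increments is exactly $\chi^2_n$-distributed, which yields a non-asymptotic rejection region.

```python
# Minimal sketch (not the paper's exact statistic): chi-square test of
# H0: sigma^2 = sigma0^2 in dimension 1, assuming constant diffusion and
# negligible drift over the sampling interval.
import numpy as np
from scipy.stats import chi2

def increment_test(X, delta, sigma0, alpha=0.05):
    """X: array of n+1 discrete observations sampled with time step delta."""
    dX = np.diff(X)                          # process increments
    n = dX.size
    S = np.sum(dX**2) / (sigma0**2 * delta)  # ~ chi2(n) under H0 (no drift)
    # two-sided rejection region at non-asymptotic level alpha
    reject = (S < chi2.ppf(alpha / 2, n)) or (S > chi2.ppf(1 - alpha / 2, n))
    return S, reject

# toy usage: Brownian motion with true sigma = 1.2, tested against sigma0 = 1.0
rng = np.random.default_rng(0)
delta, n = 0.01, 500
increments = rng.normal(0.0, 1.2 * np.sqrt(delta), n)
X = np.concatenate(([0.0], np.cumsum(increments)))
print(increment_test(X, delta, sigma0=1.0))
```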
The spectral clustering algorithm is often used as a binary clustering method for unclassified data by applying principal component analysis. To study theoretical properties of the algorithm, the assumption of homoscedasticity is often imposed in existing studies. However, this assumption is restrictive and often unrealistic in practice. Therefore, in this paper, we consider the allometric extension model, in which the directions of the first eigenvectors of the two covariance matrices and the direction of the difference of the two mean vectors coincide, and we provide a non-asymptotic bound on the error probability of the spectral clustering algorithm for the allometric extension model. As a byproduct of this result, we obtain the consistency of the clustering method in high-dimensional settings.
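For reference, a minimal sketch of the PCA-based binary clustering step is given below (our own illustration of the type of procedure the error bound concerns; it is not claimed to reproduce the paper's exact algorithm).

```python
# Minimal sketch of PCA-based binary clustering: labels are assigned by the
# sign of the projection onto the leading eigenvector of the pooled sample
# covariance matrix.
import numpy as np

def pca_binary_cluster(X):
    """X: (n, d) data matrix; returns labels in {0, 1}."""
    Xc = X - X.mean(axis=0)                    # center the pooled sample
    cov = Xc.T @ Xc / (X.shape[0] - 1)         # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    v1 = eigvecs[:, -1]                        # first principal direction
    return (Xc @ v1 > 0).astype(int)           # split by the sign of the score
```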
In this paper, we are interested in constructing a scheme for solving the compressible Navier--Stokes equations with desired properties including high order spatial accuracy, conservation, and positivity preservation of density and internal energy under a standard hyperbolic-type CFL constraint on the time step size, e.g., $\Delta t=\mathcal O(\Delta x)$. Strang splitting is used to approximate the convection and diffusion operators separately. For the convection part, i.e., the compressible Euler equations, the high order accurate positivity-preserving Runge--Kutta discontinuous Galerkin method can be used. For the diffusion part, the equation for the internal energy instead of the total energy is considered, and a first order semi-implicit time discretization is used for ease of achieving positivity. A suitable interior penalty discontinuous Galerkin method for the stress tensor ensures the conservation of momentum and total energy for any high order polynomial basis. In particular, positivity can be proven with $\Delta t=\mathcal{O}(\Delta x)$ if the Laplacian operator of the internal energy is approximated by the $\mathbb{Q}^k$ spectral element method with $k=1,2,3$. Thus the full scheme with $\mathbb{Q}^k$ ($k=1,2,3$) basis is conservative and positivity-preserving with $\Delta t=\mathcal{O}(\Delta x)$, which makes it robust for demanding problems such as solutions with low density and low pressure induced by high-speed shock diffraction. Even though the full scheme is only first order accurate in time, numerical tests indicate that a higher order polynomial basis produces much better numerical solutions, e.g., better resolution for capturing the roll-ups during shock reflection.
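Schematically, and in our own notation (the paper's ordering of the sub-steps may differ), one Strang splitting step reads
$$U^{n+1} \;=\; \mathcal S_{\mathrm{diff}}\!\Big(\tfrac{\Delta t}{2}\Big)\,\mathcal S_{\mathrm{conv}}(\Delta t)\,\mathcal S_{\mathrm{diff}}\!\Big(\tfrac{\Delta t}{2}\Big)\,U^{n},$$
where $\mathcal S_{\mathrm{conv}}$ denotes the positivity-preserving Runge--Kutta discontinuous Galerkin solver for the compressible Euler part and $\mathcal S_{\mathrm{diff}}$ the semi-implicit interior penalty discontinuous Galerkin solver for the diffusion part, both applied under the hyperbolic CFL constraint $\Delta t=\mathcal O(\Delta x)$.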
This article introduces an innovative mathematical framework designed to tackle non-linear convex variational problems in reflexive Banach spaces. Our approach employs a versatile technique that can handle a broad range of variational problems, including standard ones. To carry out the process effectively, we utilize specialized sets known as radial dictionaries, which encompass diverse data types such as tensors in Tucker format with bounded rank and neural networks with fixed architecture and bounded parameters. The core of our method lies in a greedy algorithm driven by dictionary optimization defined through a multivalued map. Significantly, our analysis shows that our approach achieves a convergence rate comparable to that of the Method of Steepest Descent implemented in a reflexive Banach space, namely of order $O(m^{-1})$.
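As a generic template (our notation only; the paper's update through the multivalued dictionary map may be more general), the $m$-th greedy step for minimizing a convex functional $J$ over a radial dictionary $\mathcal D$ can be written as
$$u_m = u_{m-1} + d_m, \qquad d_m \in \operatorname*{arg\,min}_{d\in\mathcal D}\, J(u_{m-1}+d),$$
where the arg min is in general set-valued, which is why the dictionary optimization is described by a multivalued map.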
This paper presents a novel approach to construct regularizing operators for severely ill-posed Fredholm integral equations of the first kind by introducing parametrized discretization. The optimal values of discretization and regularization parameters are computed simultaneously by solving a minimization problem formulated based on a regularization parameter search criterion. The effectiveness of the proposed approach is demonstrated through examples of noisy Laplace transform inversions and the deconvolution of nuclear magnetic resonance relaxation data.
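A hedged sketch of the joint parameter selection idea follows (our own illustration using a GCV-type criterion as a stand-in; the paper's parametrized discretization and search criterion are not reproduced here, and `build_A` is a hypothetical helper returning the discretized operator for a given discretization parameter).

```python
# Hedged sketch: Tikhonov regularization of a discretized first-kind
# Fredholm equation, with the discretization size n and the regularization
# parameter lam chosen jointly by minimizing a GCV-type criterion on a grid.
import numpy as np

def tikhonov_gcv(A, b, lam):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))      # filtered SVD solution
    dof = A.shape[0] - np.sum(s**2 / (s**2 + lam**2))   # trace(I - A A_lam^+)
    gcv = A.shape[0] * np.sum((A @ x - b)**2) / dof**2  # GCV score
    return x, gcv

def select_parameters(build_A, b, sizes, lams):
    """build_A(n): hypothetical helper returning the m-by-n discretized operator."""
    scored = [(tikhonov_gcv(build_A(n), b, lam)[1], n, lam)
              for n in sizes for lam in lams]
    _, n_opt, lam_opt = min(scored)                     # smallest GCV score wins
    return n_opt, lam_opt
```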
We develop a hybrid scheme, based on a finite difference scheme and a rescaling technique, to approximate the solution of a nonlinear wave equation. In order to numerically reproduce the blow-up phenomenon, we propose a rule of scaling transformation, which is a variant of the one successfully used in the case of nonlinear parabolic equations. A careful study of the convergence of the proposed scheme is carried out, and several numerical examples are presented as illustrations.
In this paper, we present a discontinuity and cusp capturing physics-informed neural network (PINN) to solve Stokes equations with a piecewise-constant viscosity and a singular force along an interface. We first reformulate the governing equations in each fluid domain separately and replace the singular force effect with the traction balance equation between the solutions on the two sides of the interface. Since the pressure is discontinuous and the velocity has discontinuous derivatives across the interface, we use a network consisting of two fully-connected sub-networks that approximate the pressure and velocity, respectively. The two sub-networks share the same primary coordinate input arguments but have different augmented feature inputs. These two augmented inputs provide the interface information, so we assume that a level set function is given and that its zero level set indicates the position of the interface. The pressure sub-network uses an indicator function as an augmented input to capture the function discontinuity, while the velocity sub-network uses a cusp-enforced level set function to capture the derivative discontinuities via the traction balance equation. We perform a series of numerical experiments to solve two- and three-dimensional Stokes interface problems and compare the accuracy with that of the augmented immersed interface methods in the literature. Our results indicate that even a shallow network with a moderate number of neurons and sufficient training data points can achieve prediction accuracy comparable to that of immersed interface methods.
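A minimal sketch of the augmented-input idea is given below (our own illustration; we assume, as the abstract suggests but does not specify, that the pressure network receives an indicator of the level set sign and the velocity network receives a cusp-type function of the level set, here its absolute value).

```python
# Hedged sketch of the augmented network inputs; phi is the given level set
# function whose zero level set marks the interface.
import numpy as np

def pressure_input(x, phi):
    """x: (n, d) coordinates; appends an indicator feature to capture the pressure jump."""
    ind = (phi(x) > 0).astype(float)
    return np.concatenate([x, ind[:, None]], axis=1)

def velocity_input(x, phi):
    """Appends a cusp-type feature (here |phi|) to capture kinks in the velocity derivatives."""
    return np.concatenate([x, np.abs(phi(x))[:, None]], axis=1)
```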
Very recently, Qi and Cui extended the Perron-Frobenius theory to dual number matrices with primitive and irreducible nonnegative standard parts and proved that they have a Perron eigenpair and a Perron-Frobenius eigenpair. The Collatz method was also extended to find the Perron eigenpair. Qi and Cui proposed two conjectures. One states that the $k$-th power of a dual number matrix tends to zero as $k\to\infty$ if and only if the spectral radius of its standard part is less than one; the other asserts the linear convergence of the Collatz method. In this paper, we confirm these conjectures and provide theoretical proofs. The main contribution is to show that the Collatz method converges R-linearly with an explicit rate.
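For reference, a minimal sketch of the classical Collatz iteration applied to the standard part (a primitive nonnegative matrix) is shown below; the dual-part update analyzed in the paper is omitted, and the R-linear rate established there concerns the full dual-number iteration.

```python
# Classical Collatz (Collatz-Wielandt) iteration for the Perron eigenvalue of
# a primitive nonnegative matrix A, i.e., the standard part of the dual
# number matrix; the dual-part update is not included in this sketch.
import numpy as np

def collatz(A, tol=1e-12, max_iter=10_000):
    x = np.ones(A.shape[0])                     # positive starting vector
    for _ in range(max_iter):
        y = A @ x
        lo, hi = np.min(y / x), np.max(y / x)   # Collatz-Wielandt bounds
        x = y / np.linalg.norm(y)               # normalize the iterate
        if hi - lo < tol:                       # bounds squeeze the Perron root
            break
    return 0.5 * (lo + hi), x                   # Perron eigenvalue and eigenvector

# toy usage on a primitive nonnegative matrix
print(collatz(np.array([[0.0, 1.0], [1.0, 1.0]]))[0])  # golden ratio ~ 1.618
```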
We present a multidimensional deep learning implementation of a stochastic branching algorithm for the numerical solution of fully nonlinear PDEs. This approach is designed to tackle functional nonlinearities involving gradient terms of any order, by combining the use of neural networks with a Monte Carlo branching algorithm. In comparison with other deep learning PDE solvers, it also allows us to check the consistency of the learned neural network function. The numerical experiments presented show that this algorithm can outperform deep learning approaches based on backward stochastic differential equations or the Galerkin method, and that it provides solution estimates that are not obtained by those methods in fully nonlinear examples.
Quadratization of polynomial and non-polynomial systems of ordinary differential equations is advantageous in a variety of disciplines, such as systems theory, fluid mechanics, chemical reaction modeling, and mathematical analysis. A quadratization reveals new variables and structures of a model, which may be easier to analyze, simulate, and control, and it provides a convenient parametrization for learning. This paper presents novel theory, algorithms, and software capabilities for the quadratization of non-autonomous ODEs. We provide existence results, depending on the regularity of the input function, for cases when a quadratic-bilinear system can be obtained through quadratization. We further develop existence results and an algorithm that generalize the process of quadratization to systems of arbitrary dimension that retain their nonlinear structure when the dimension grows. For such systems, we provide a dimension-agnostic quadratization. An example is semi-discretized PDEs, where the nonlinear terms remain symbolically identical when the discretization size increases. As an important aspect for the practical adoption of this research, we extend the capabilities of the QBee software to both non-autonomous systems of ODEs and ODEs of arbitrary dimension. We present several examples of ODEs that were previously reported in the literature and for which our new algorithms find quadratized ODE systems of lower dimension than the previously reported lifting transformations. We further highlight an important application area of quadratization: reduced-order model learning. This area can benefit significantly from working in the optimal lifting variables, where quadratic models provide a direct parametrization of the model and also avoid additional hyperreduction for the nonlinear terms. A solar wind example highlights these advantages.
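As a textbook illustration of the idea (not one of the new examples in the paper), the scalar cubic ODE $\dot x = x^{3}$ is quadratized by introducing the lifting variable $y = x^{2}$:
$$\dot x = x\,y, \qquad \dot y = 2x\dot x = 2x^{4} = 2y^{2},$$
so the lifted two-dimensional system is quadratic in the variables $(x,y)$.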
For a singular integral equation on an interval of the real line, we study the behavior of the error of a delta-delta discretization. We show that the convergence is non-uniform: it is of order $O(h^{2})$ in the interior of the interval, while in a boundary layer near the endpoints the consistency error does not tend to zero.