Fully coupled McKean-Vlasov forward-backward stochastic differential equations (MV-FBSDEs) arise naturally from large population optimization problems. Assessing the quality of a given numerical solution to an MV-FBSDE is typically difficult, since computing such solutions usually requires Picard iterations and approximations of nested conditional expectations. This paper proposes an a posteriori error estimator to quantify the $L^2$-approximation error of an arbitrarily generated approximation on a time grid. We establish that the error estimator is equivalent to the global approximation error between the given numerical solution and the solution of a forward Euler discretized MV-FBSDE. A crucial and challenging step in the analysis is the proof of stability of this Euler approximation to the MV-FBSDE, which is of independent interest. We further demonstrate that, for sufficiently fine time grids, the accuracy of numerical solutions of the continuous MV-FBSDE can also be measured by the error estimator. These error estimates justify the use of residual-based algorithms for solving MV-FBSDEs. Numerical experiments for MV-FBSDEs arising from mean field control and games confirm the effectiveness and practical applicability of the error estimator.
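For intuition only, the sketch below shows a generic Monte Carlo residual estimator for a discrete FBSDE on a time grid: it accumulates the mean-square mismatch of a candidate solution with the terminal condition and with an Euler-type backward recursion. The driver `f`, terminal function `g`, and the use of the empirical sample cloud as a stand-in for the McKean-Vlasov measure argument are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def residual_estimator(X, Y, Z, dW, f, g, dt):
    """Mean-square residual of a candidate solution (X, Y, Z) against an
    Euler-type backward recursion. X, Y: (M, N+1); Z, dW: (M, N)."""
    M, N = dW.shape
    est = np.mean((Y[:, -1] - g(X[:, -1])) ** 2)   # terminal mismatch
    for i in range(N):
        # the empirical cloud X[:, i] stands in for the measure argument
        drift = f(i * dt, X[:, i], Y[:, i], Z[:, i], X[:, i])
        res = Y[:, i + 1] - Y[:, i] + drift * dt - Z[:, i] * dW[:, i]
        est += np.mean(res ** 2)
    return est

# sanity check: for dX = dW, Y = X, Z = 1, f = 0, g = id the residual is 0
M, N, dt = 1000, 50, 0.02
dW = np.random.default_rng(0).normal(scale=np.sqrt(dt), size=(M, N))
X = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)
print(residual_estimator(X, X, np.ones((M, N)), dW,
                         lambda t, x, y, z, mu: 0.0, lambda x: x, dt))
```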
In this paper, we present an error estimate for a second-order linearized finite element (FE) method for the 2D Navier-Stokes equations with variable density. To derive the error estimates, we first introduce an equivalent form of the original system. We then propose a general BDF2-FE method for solving this equivalent form, in which the Taylor-Hood FE space is used to discretize the Navier-Stokes equations and a conforming FE space is used to discretize the density equation. We show that our scheme ensures discrete energy dissipation. Under the assumption of sufficiently smooth strong solutions, an error estimate is established for our numerical scheme for variable-density incompressible flow in two dimensions. Finally, some numerical examples are provided to confirm our theoretical results.
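To make the time discretization concrete, here is a minimal sketch of the BDF2 formula on the scalar model problem $u' = \lambda u$, started with one implicit Euler step; this illustrates only the second-order time stepping, not the paper's Taylor-Hood/conforming FE scheme.

```python
import numpy as np

lam, dt, T = -1.0, 0.01, 1.0
n = int(T / dt)
u = np.empty(n + 1)
u[0] = 1.0
u[1] = u[0] / (1.0 - lam * dt)           # implicit Euler startup step
for k in range(1, n):
    # BDF2: (3 u^{k+1} - 4 u^k + u^{k-1}) / (2 dt) = lam * u^{k+1}
    u[k + 1] = (4.0 * u[k] - u[k - 1]) / (3.0 - 2.0 * dt * lam)
print(abs(u[-1] - np.exp(lam * T)))      # second-order accurate in dt
```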
In this paper, we propose a method for estimating model parameters from Small-Angle Scattering (SAS) data based on Bayesian inference. Conventional SAS data analyses involve manual parameter adjustment by analysts or optimization using gradient methods. These analysis processes tend to rely on heuristics and may end in local solutions. Furthermore, it is difficult to evaluate the reliability of the results obtained by conventional analysis methods. Our method addresses these problems by estimating model parameters as probability distributions from SAS data within the framework of Bayesian inference. We evaluate the performance of our method through numerical experiments using artificial data for representative measurement target models. The results show that our method provides not only high accuracy and reliability of estimation, but also perspectives on the transition point of estimability with respect to the measurement time and the lower bound of the angular domain of the measured data.
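As a hedged illustration of this Bayesian workflow, the sketch below fits the radius of a homogeneous sphere (a standard SAS form factor) to synthetic intensity data with a random-walk Metropolis sampler, returning a posterior distribution rather than a point estimate. The model, prior, noise level, and sampler settings are all illustrative choices, not the paper's.

```python
import numpy as np

def sphere_intensity(q, R):
    # form factor intensity of a homogeneous sphere of radius R
    x = q * R
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3) ** 2

rng = np.random.default_rng(0)
q = np.linspace(0.01, 0.5, 200)
R_true, sigma = 40.0, 1e-4
data = sphere_intensity(q, R_true) + sigma * rng.normal(size=q.size)

def log_post(R):
    if not 1.0 < R < 100.0:              # uniform prior on (1, 100)
        return -np.inf
    r = data - sphere_intensity(q, R)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

R, lp, samples = 30.0, log_post(30.0), []
for _ in range(20000):                   # random-walk Metropolis
    R_new = R + rng.normal(scale=0.5)
    lp_new = log_post(R_new)
    if np.log(rng.random()) < lp_new - lp:
        R, lp = R_new, lp_new
    samples.append(R)
post = np.array(samples[5000:])          # discard burn-in
print(post.mean(), post.std())           # posterior mean and spread
```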
We are interested in creating statistical methods that provide informative summaries of random fields through the geometry of their excursion sets. To this end, we introduce an estimator for the length of the perimeter of excursion sets of random fields on $\mathbb{R}^2$ observed over regular square tilings. The proposed estimator acts on the empirically accessible binary digital images of the excursion regions and computes the length of a piecewise linear approximation of the excursion boundary. The estimator is shown to be consistent as the pixel size decreases, without the need for any normalization constant and without imposing Gaussianity or isotropy on the underlying random field. In this general framework, even when the domain grows to cover $\mathbb{R}^2$, the estimation error is shown to be of smaller order than the side length of the domain. For affine, strongly mixing random fields, this translates into a multivariate Central Limit Theorem for our estimator when multiple levels are considered simultaneously. Finally, we conduct several numerical studies to investigate the statistical properties of the proposed estimator in the finite-sample setting.
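A rough numerical analogue of such a perimeter estimator can be sketched with marching squares, which produces a piecewise linear approximation of the excursion boundary. Note that `find_contours` interpolates using the field values, whereas the paper's estimator acts only on the binary digital image, so this is a related but not identical construction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

rng = np.random.default_rng(1)
# toy stationary random field on a 256 x 256 pixel grid (illustrative)
field = gaussian_filter(rng.normal(size=(256, 256)), sigma=8)

level = 0.0
# marching squares returns a piecewise linear approximation of the
# excursion boundary {field = level}
contours = measure.find_contours(field, level)
pixel = 1.0 / 256  # pixel side length if the domain is the unit square
perimeter = pixel * sum(
    np.sum(np.linalg.norm(np.diff(c, axis=0), axis=1)) for c in contours)
print(perimeter)
```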
This paper presents a fast high-order method for the solution of two-dimensional problems of scattering by penetrable inhomogeneous media, with application to high-frequency configurations containing (possibly) discontinuous refractivities. The method relies on a hybrid direct/iterative combination of (1) a differential volumetric formulation, based on the use of appropriate Chebyshev differentiation matrices enacting the Laplace operator, and (2) a second-kind boundary integral formulation. The approach enjoys low dispersion and high-order accuracy for smooth refractivities, as well as second-order accuracy (while maintaining low dispersion) in the discontinuous-refractivity case. The solution approach proceeds by application of Impedance-to-Impedance (ItI) maps to couple the volumetric and boundary discretizations. The volumetric linear-algebra solutions are obtained by means of a multifrontal solver, and the coupling with the boundary integral formulation is achieved via the iterative linear-algebra solver GMRES. In particular, the existence and uniqueness theory presented in this paper provides an affirmative answer to an open question concerning the existence of a uniquely solvable second-kind ItI-based formulation for the overall scattering problem under consideration. The algorithm relies on a modestly demanding scatterer-dependent precomputation stage (requiring in practice a computing cost of the order of $O(N^{\alpha})$ operations, with $\alpha \approx 1.07$, for an $N$-point discretization), together with fast ($O(N)$-cost) single-core runs for each incident field considered. It can thus effectively solve scattering problems for large and complex objects, possibly containing strong refractivity contrasts and discontinuities.
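The hybrid direct/iterative structure can be caricatured in a few lines: a scatterer-dependent factorization is precomputed once, and each GMRES matrix-vector product applies one volumetric solve inside a second-kind (identity plus compact-like) operator. The matrices below are random stand-ins, not ItI maps or Chebyshev discretizations.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.random(n, n, density=0.01, random_state=0) + 10.0 * sp.eye(n)
lu = spla.splu(A.tocsc())                # scatterer-dependent precomputation

B = np.random.default_rng(0).normal(size=(n, n)) / n  # toy coupling

def matvec(psi):
    # second-kind structure: identity minus a "compact" coupled solve
    return psi - B @ lu.solve(psi)

op = spla.LinearOperator((n, n), matvec=matvec)
rhs = np.ones(n)                # stand-in for the incident field data
psi, info = spla.gmres(op, rhs)
print(info, np.linalg.norm(matvec(psi) - rhs))
```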
A class of averaging block nonlinear Kaczmarz methods is developed for solving nonlinear systems of equations. The convergence theory of the proposed method is established under suitable assumptions, and upper bounds on the convergence rate are derived for both constant and adaptive stepsizes. Numerical experiments verify the efficiency of the proposed method, which outperforms existing nonlinear Kaczmarz methods in terms of both the number of iteration steps and the computational cost.
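A minimal sketch of one common form of averaging block nonlinear Kaczmarz update is given below: at each iteration a random block of equations is chosen, and the single-row Kaczmarz projections for those equations are averaged and scaled by a stepsize. The block size, stepsize, and toy test system are illustrative assumptions; the paper's exact update and adaptive stepsize rule may differ.

```python
import numpy as np

def avg_block_kaczmarz(F, J, x0, block_size=5, stepsize=1.0,
                       n_iters=500, seed=0):
    """Averaging block nonlinear Kaczmarz sketch for F(x) = 0."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    m = F(x).size
    for _ in range(n_iters):
        idx = rng.choice(m, size=block_size, replace=False)
        r, Jx = F(x)[idx], J(x)[idx]
        # average the single-row Kaczmarz projections over the block
        norms = np.sum(Jx ** 2, axis=1)
        x -= stepsize * np.mean((r / norms)[:, None] * Jx, axis=0)
    return x

# toy diagonal test system with componentwise root x = 1
def F(x): return x ** 2 + x - 2.0
def J(x): return np.diag(2.0 * x + 1.0)

x = avg_block_kaczmarz(F, J, np.full(20, 3.0))
print(np.linalg.norm(F(x)))
```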
Gait, the manner of walking, has been proven to be a reliable biometric with uses in surveillance, marketing, and security. A promising new direction for the field is training gait recognition systems without explicit human annotations, through self-supervised learning approaches. Such methods rely heavily on strong augmentations of the same walking sequence to induce more data variability and to simulate additional walking variations. Current data augmentation schemes are heuristic and cannot provide the necessary data variation, as they produce only simple temporal and spatial distortions. In this work, we propose GaitMorph, a novel method for modifying the walking variation of an input gait sequence. Our method entails training a high-compression model for gait skeleton sequences that leverages unlabelled data to construct a discrete and interpretable latent space which preserves identity-related features. Furthermore, we propose a method based on optimal transport theory to learn latent transport maps on the discrete codebook that morph gait sequences between variations. We perform extensive experiments and show that our method is suitable for synthesizing additional views for an input sequence.
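To illustrate the optimal-transport ingredient under stated assumptions, the sketch below computes an entropy-regularized transport plan between the token histograms of two hypothetical walking variations over a discrete codebook, then derives a barycentric map that sends each code to its nearest transported codebook entry. The codebook, histograms, and Sinkhorn solver are generic stand-ins, not GaitMorph's learned components.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=500):
    """Entropy-regularized optimal transport plan between two histograms."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 8))   # hypothetical 64 gait tokens, dim 8
a = rng.dirichlet(np.ones(64))        # token histogram, variation A
b = rng.dirichlet(np.ones(64))        # token histogram, variation B
C = np.linalg.norm(codebook[:, None] - codebook[None, :], axis=-1)
P = sinkhorn(a, b, C / C.max())
# barycentric transport map: send each token to the P-weighted average
# of target tokens, then snap back to the nearest codebook entry
mapped = (P / P.sum(axis=1, keepdims=True)) @ codebook
nearest = np.argmin(
    np.linalg.norm(mapped[:, None] - codebook[None, :], axis=-1), axis=1)
print(nearest[:10])
```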
Recent tropical cyclones, e.g., Hurricane Harvey (2017), have led to significant rainfall and resulting runoff with accompanying flooding. When the runoff interacts with storm surge, the resulting floods can be greatly amplified and lead to effects that cannot be modeled by simple superposition of their distinct sources. In an effort to develop accurate numerical simulations of runoff, surge, and compound floods, we develop a local discontinuous Galerkin method for modified shallow water equations. In this modification, nonzero sources in the continuity equation incorporate rainfall into the model, using parametric rainfall models from the literature as well as hindcast data. The discontinuous Galerkin spatial discretization is paired with a strong stability preserving explicit Runge-Kutta time integrator. Temporal stability is thus ensured through the CFL condition, and we exploit the embarrassingly parallel nature of the developed method using MPI parallelization. We demonstrate the capabilities of the developed method through a sequence of physically relevant numerical tests, including small-scale test cases based on laboratory measurements and large-scale experiments with Hurricane Harvey in the Gulf of Mexico. The results highlight the conservation properties and robustness of the developed method and show the potential of compound flood modeling using our approach.
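For reference, the strong stability preserving Runge-Kutta integrator used in such schemes can be written compactly; the sketch below shows the classical three-stage Shu-Osher SSP-RK3 step applied to a toy semi-discrete operator with a constant rainfall source. The spatial operator here is a placeholder, not the paper's discontinuous Galerkin discretization.

```python
import numpy as np

def ssp_rk3_step(u, rhs, dt):
    # classical three-stage Shu-Osher SSP-RK3 for du/dt = rhs(u)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

def rhs(u):
    # placeholder spatial operator plus a constant rainfall source
    return -np.gradient(u) + 0.01

u = np.exp(-np.linspace(-3.0, 3.0, 200) ** 2)   # initial water depth
for _ in range(100):
    u = ssp_rk3_step(u, rhs, dt=0.01)   # dt limited by a CFL condition
print(u.min(), u.max())
```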
We present a novel multilevel Monte Carlo (MLMC) approach for estimating quantities of interest for stochastic partial differential equations (SPDEs). Drawing inspiration from [Giles and Szpruch: Antithetic multilevel Monte Carlo estimation for multi-dimensional SDEs without L\'evy area simulation, Annals of Appl. Prob., 2014], we extend the antithetic Milstein scheme for finite-dimensional stochastic differential equations to Hilbert space-valued SPDEs. Our method has the advantages of both Euler and Milstein discretizations: it is easy to implement and does not involve intractable L\'evy area terms. Moreover, the antithetic correction in our method leads to the same variance decay in an MLMC algorithm as the standard Milstein method, resulting in significantly lower computational complexity than a corresponding MLMC Euler scheme. Our approach is applicable to a broader range of non-linear diffusion coefficients and does not require any commutativity properties. The key component of our MLMC algorithm is a truncated Milstein-type time stepping scheme for SPDEs, which accelerates the rate of variance decay in the MLMC method when combined with an antithetic coupling on the fine scales. We combine the truncated Milstein scheme with appropriate spatial discretizations and noise approximations on all scales to obtain a fully discrete scheme, and show that the antithetic coupling does not introduce an additional bias.
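The antithetic coupling can be sketched on a scalar toy SDE: the fine-level value entering the MLMC correction is the average of two fine paths driven by the same Brownian increments taken in swapped order within each coarse step. Plain Euler steps are used below for brevity; the paper's variance improvement relies on a truncated Milstein scheme, so this shows only the estimator's structure.

```python
import numpy as np

rng = np.random.default_rng(0)
a = lambda x: -x                 # drift (illustrative)
b = lambda x: 0.5 * x            # diffusion (illustrative)

def antithetic_correction(level, n, T=1.0, phi=lambda x: x ** 2):
    """One MLMC level correction E[phi(fine) - phi(coarse)], where 'fine'
    is the average of two antithetic fine paths whose Brownian increments
    are taken in swapped order within each coarse step."""
    nc = 2 ** level
    dtc = T / nc
    dW = rng.normal(scale=np.sqrt(dtc / 2), size=(n, nc, 2))
    def step(x, dt, d):          # Euler step for brevity; the paper's
        return x + a(x) * dt + b(x) * d   # scheme is truncated Milstein
    xc = np.ones(n); xf = np.ones(n); xa = np.ones(n)
    for k in range(nc):
        d1, d2 = dW[:, k, 0], dW[:, k, 1]
        xf = step(step(xf, dtc / 2, d1), dtc / 2, d2)
        xa = step(step(xa, dtc / 2, d2), dtc / 2, d1)   # swapped order
        xc = step(xc, dtc, d1 + d2)
    return np.mean(phi(0.5 * (xf + xa)) - phi(xc))

# a full MLMC estimate adds a coarsest-level base term to these corrections
print([antithetic_correction(l, 20000) for l in range(1, 6)])
```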
Machine learning models often need to be robust to noisy input data. The effect of real-world noise (which is often random) on model predictions is captured by a model's local robustness, i.e., the consistency of model predictions in a local region around an input. However, the na\"ive approach to computing local robustness based on Monte-Carlo sampling is statistically inefficient, leading to prohibitive computational costs for large-scale applications. In this work, we develop the first analytical estimators to efficiently compute local robustness of multi-class discriminative models using local linear function approximation and the multivariate Normal CDF. Through the derivation of these estimators, we show how local robustness is connected to concepts such as randomized smoothing and softmax probability. We also confirm empirically that these estimators accurately and efficiently compute the local robustness of standard deep learning models. In addition, we demonstrate these estimators' usefulness for various tasks involving local robustness, such as measuring robustness bias and identifying examples that are vulnerable to noise perturbation in a dataset. By developing these analytical estimators, this work not only advances conceptual understanding of local robustness, but also makes its computation practical, enabling the use of local robustness in critical downstream applications.
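For a linear model (or a local linearization) under Gaussian input noise, the probability that the predicted class is unchanged is exactly a multivariate normal orthant probability, which conveys the idea behind such analytical estimators. The weights and noise scale below are illustrative, and the paper's estimators for deep networks involve additional steps.

```python
import numpy as np
from scipy.stats import multivariate_normal

def local_robustness(x, W, bvec, sigma):
    """P(argmax unchanged) for the linear model f(x) = W x + b under
    Gaussian input noise N(0, sigma^2 I): an orthant probability."""
    logits = W @ x + bvec
    c = int(np.argmax(logits))
    others = [j for j in range(W.shape[0]) if j != c]
    D = W[c] - W[others]                  # margin gradients, (K-1, d)
    mu = logits[c] - logits[others]       # margin means, (K-1,)
    cov = sigma ** 2 * (D @ D.T)          # margin covariance
    return multivariate_normal.cdf(mu, mean=np.zeros(len(mu)), cov=cov)

rng = np.random.default_rng(0)
W, bvec = rng.normal(size=(4, 10)), rng.normal(size=4)
x = rng.normal(size=10)
print(local_robustness(x, W, bvec, sigma=0.1))
```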
Physics-informed neural networks (PINNs) offer a novel and efficient approach to solving partial differential equations (PDEs). Their success lies in the physics-informed loss, which trains a neural network to satisfy a given PDE at specific points and thereby approximate the solution. However, the solutions to PDEs are inherently infinite-dimensional, and the distance between the output and the solution is defined by an integral over the domain. The physics-informed loss therefore provides only a finite approximation of this distance, and selecting appropriate collocation points becomes crucial for suppressing the resulting discretization error, although this aspect has often been overlooked. In this paper, we propose a new technique for PINNs called good lattice training (GLT), inspired by number-theoretic methods for numerical analysis. GLT provides sets of collocation points that are effective even when few points are used, including in multi-dimensional spaces. Our experiments demonstrate that GLT requires 2--20 times fewer collocation points (resulting in lower computational cost) than uniformly random sampling or Latin hypercube sampling, while achieving competitive performance.
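Good lattice point sets are rank-1 lattices $x_i = \{i\,\mathbf{z}/n\}$ for a well-chosen generating vector $\mathbf{z}$; a minimal sketch follows, using the classical 2D Fibonacci lattice ($n = 1597$, $\mathbf{z} = (1, 987)$) as a collocation set on the unit square. Mapping these points to the PDE domain and plugging them into the PINN loss is left implicit.

```python
import numpy as np

def rank1_lattice(n, z):
    """Rank-1 lattice points x_i = frac(i * z / n), i = 0, ..., n-1."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n) % 1.0

# 2D Fibonacci lattice: n = 1597 and z = (1, 987) use consecutive
# Fibonacci numbers, a classical choice of good lattice points
pts = rank1_lattice(1597, (1, 987))
print(pts.shape)                 # (1597, 2) collocation points in [0, 1)^2
```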