We present new Dirichlet-Neumann and Neumann-Dirichlet algorithms with a time domain decomposition applied to unconstrained parabolic optimal control problems. After a spatial semi-discretization, we use the Lagrange multiplier approach to derive a coupled forward-backward optimality system, which can then be solved by decomposing the time domain. Due to the forward-backward structure of the optimality system, three variants of the Dirichlet-Neumann and Neumann-Dirichlet algorithms arise. We analyze their convergence behavior and determine the optimal relaxation parameter for each algorithm. Our analysis reveals that the most natural algorithms are only good smoothers, and that there are better choices, which lead to efficient solvers. We illustrate our analysis with numerical experiments.
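For orientation, consider the standard linear-quadratic tracking problem (the cost functional in the paper may differ in its details): minimize $\frac{1}{2}\int_0^T \|y-\hat y\|^2\,dt + \frac{\nu}{2}\int_0^T \|u\|^2\,dt$ subject to the semi-discretized state equation $y' = Ay + u$, $y(0) = y_0$. Eliminating the control via $u = -\lambda/\nu$, the Lagrange multiplier approach yields the coupled forward-backward optimality system
\[
y' = Ay - \tfrac{1}{\nu}\,\lambda, \quad y(0) = y_0, \qquad
\lambda' = -A^{\top}\lambda - (y - \hat y), \quad \lambda(T) = 0,
\]
in which $y$ carries an initial condition and $\lambda$ a terminal one; it is this two-point structure in time that produces the three algorithmic variants.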
This manuscript studies the numerical solution of the time-fractional Burgers-Huxley equation in a reproducing kernel Hilbert space. The analytical solution of the equation is obtained as a convergent series with easily computable components, and the approximate solution is shown to converge uniformly to the exact solution. The convergence of the proposed method is also investigated. Numerical examples demonstrate the validity and applicability of the presented method; the numerical results indicate that it is powerful and effective at low computational cost.
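For reference, a commonly studied form of the equation (a sketch with illustrative notation; the exact parametrization in the manuscript may differ) replaces the integer-order time derivative in the Burgers-Huxley equation with a Caputo derivative of order $\gamma \in (0,1]$:
\[
{}^{C}\!D_t^{\gamma} u + \alpha\, u\, u_x - u_{xx} = \beta\, u\,(1-u)(u-\theta),
\]
so that the equation combines the quadratic convection of Burgers' equation with the cubic reaction term of the Huxley equation.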
One of the main challenges in interpreting black-box models is to decompose square-integrable functions of non-mutually independent random inputs uniquely into a sum of functions of every possible subset of variables; dealing with dependencies among inputs makes this difficult. We propose a novel framework to study this problem, linking three domains of mathematics: probability theory, functional analysis, and combinatorics. We show that, under two reasonable assumptions on the inputs (non-perfect functional dependence and non-degenerate stochastic dependence), such a function can always be decomposed uniquely. This ``canonical decomposition'' is relatively intuitive and unveils the linear nature of non-linear functions of non-linearly dependent inputs. In this framework, we generalize the well-known Hoeffding decomposition, which is recovered as a particular case. Oblique projections of the black-box model yield novel interpretability indices for evaluation and variance decomposition. Beyond their intuitive nature, the properties of these novel indices are studied and discussed. This result offers a path toward more precise uncertainty quantification, which can benefit sensitivity analyses and interpretability studies whenever the inputs are dependent. The decomposition is illustrated analytically, and the challenges of adopting these results in practice are discussed.
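Concretely, for inputs $X = (X_1,\dots,X_d)$ and a square-integrable $f$, the decomposition in question reads
\[
f(X) = \sum_{A \subseteq \{1,\dots,d\}} f_A(X_A),
\]
where each summand $f_A$ depends only on the subset of inputs $X_A = (X_i)_{i \in A}$. Under mutual independence and the usual orthogonality constraints, this is the classical Hoeffding (functional ANOVA) decomposition; the contribution of the paper is to single out a unique such decomposition, via oblique projections, when the inputs are dependent.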
We propose MNPCA, a novel non-linear generalization of (2D)$^2${PCA}, a classical linear method for the simultaneous dimension reduction of both rows and columns of a set of matrix-valued data. MNPCA is based on optimizing over separate non-linear mappings on the left and right singular spaces of the observations, essentially decoupling the two sides of the matrices. We develop a comprehensive theoretical framework for MNPCA by viewing it as an eigenproblem in reproducing kernel Hilbert spaces. We study the resulting estimators on both the population and sample levels, deriving their convergence rates and formulating a coordinate representation that allows the method to be used in practice. Simulations and a real data example demonstrate MNPCA's favorable performance relative to its competitors.
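To recall the linear method being generalized: (2D)$^2${PCA} compresses each matrix observation $X_i \in \mathbb{R}^{p \times q}$ bilinearly,
\[
Y_i = U^{\top} X_i V, \qquad U \in \mathbb{R}^{p \times k}, \quad V \in \mathbb{R}^{q \times r},
\]
with $U$ and $V$ chosen to maximize the total scatter of the projected matrices $Y_i$. MNPCA replaces the two linear maps acting on the left and right sides with separate non-linear mappings.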
Projection-based testing for mean trajectory differences in two groups of irregularly and sparsely observed functional data has garnered significant attention in the literature because it accommodates a wide spectrum of group differences and (non-stationary) covariance structures. This article derives the theoretical power function and introduces a comprehensive power and sample size (PASS) calculation toolkit tailored to the projection-based testing method developed by Wang (2021). Our approach accommodates a wide spectrum of group difference scenarios and a broad class of covariance structures governing the underlying processes. Through extensive numerical simulation, we demonstrate the robustness of this testing method by showing that its statistical power remains nearly unaffected even when a substantial fraction of observations is missing, rendering it 'missing-immune'. Furthermore, we illustrate the practical utility of the test through the analysis of two randomized controlled trials for Parkinson's disease. To facilitate implementation, we provide a user-friendly R package, fPASS, complete with a detailed vignette to guide users through its practical application. We anticipate that this article will significantly enhance the usability of this potent statistical tool across a range of biostatistical applications, with a particular focus on its relevance to the design of clinical trials.
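As a minimal illustration of how a PASS calculation of this kind operates (a generic sketch only: the distributional form, the linear-in-$n$ noncentrality, and all parameter names are assumptions for illustration, not the power function derived in the article or the fPASS API), suppose the test statistic is asymptotically chi-square under the null and noncentral chi-square under the alternative:
\begin{verbatim}
from scipy.stats import chi2, ncx2

def power(n, effect, df=3, alpha=0.05):
    """Power of a chi-square-type test whose noncentrality
    grows linearly with the per-group sample size n."""
    lam = n * effect                  # hypothetical noncentrality scaling
    crit = chi2.ppf(1 - alpha, df)    # rejection threshold under H0
    return ncx2.sf(crit, df, lam)     # P(reject | H1)

def sample_size(target, effect, df=3, alpha=0.05):
    """Smallest n achieving the target power, by direct search."""
    n = 2
    while power(n, effect, df, alpha) < target:
        n += 1
    return n

print(sample_size(0.90, effect=0.15))  # n needed for 90% power
\end{verbatim}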
A finite-element-based computational scheme is developed and employed to assess a duality-based variational approach to the solution of the linear heat and transport PDEs in one space dimension and time, and of Euler's nonlinear system of ODEs for the rotation of a rigid body about a fixed point. The formulation turns initial-(boundary) value problems into degenerate elliptic boundary value problems in (space)-time domains representing the Euler-Lagrange equations of suitably designed dual functionals for each of the above problems. We demonstrate reasonable success in approximating solutions of this range of parabolic, hyperbolic, and ODE primal problems, encompassing energy dissipation as well as conservation, by a unified dual strategy that lends itself to a variational formulation. The scheme naturally associates a family of dual solutions with a unique primal solution; this `gauge invariance' is demonstrated in our computed solutions of the heat and transport equations, including the case of a transient dual solution corresponding to a steady primal solution of the heat equation. Primal evolution problems with causality are shown to be correctly approximated by non-causal dual problems.
We couple the L1 discretization of the Caputo fractional derivative in time with a Galerkin scheme in space to devise a linear numerical method for the semilinear subdiffusion equation. Two important features of our setting are nonsmooth initial data and a time-dependent diffusion coefficient. We prove the stability and convergence of the method under weak assumptions on the regularity of the diffusivity, and we obtain optimal error estimates, pointwise in space and global in time, which are verified with several numerical experiments.
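For reference, on a uniform temporal grid $t_n = n\tau$ (graded meshes are a common alternative in the nonsmooth-data setting) the classical L1 approximation of the Caputo derivative of order $\alpha \in (0,1)$ reads
\[
\partial_t^{\alpha} u(t_n) \;\approx\; \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{j=1}^{n} b_{n-j}\,\bigl(u^{j} - u^{j-1}\bigr), \qquad b_k = (k+1)^{1-\alpha} - k^{1-\alpha},
\]
obtained by replacing $u$ with its piecewise linear interpolant inside the fractional integral.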
This paper addresses the problem of designing {\it continuous-discrete} unscented Kalman filter (UKF) implementation methods. More precisely, the aim is to propose MATLAB-based UKF algorithms for {\it accurate} and {\it robust} state estimation of stochastic dynamic systems. The accuracy of {\it continuous-discrete} nonlinear filters heavily depends on how the implementation method manages the discretization error arising at the filter prediction step. We suggest an elegant and accurate implementation framework for tracking the hidden states that utilizes the MATLAB built-in numerical integration schemes developed for solving ordinary differential equations (ODEs). The accuracy is boosted by the discretization error control built into all MATLAB ODE solvers, which automatically keeps the discretization error below a user-supplied tolerance. Meanwhile, the robustness of the UKF filtering methods is examined in terms of their stability with respect to roundoff errors. In contrast to the pseudo-square-root UKF implementations established in the engineering literature, which are based on one-rank Cholesky updates, we derive stable square-root methods that utilize $J$-orthogonal transformations for calculating the Cholesky square-root factors.
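The prediction strategy can be summarized in a few lines. One common variant of continuous-discrete UKF prediction propagates the sigma points themselves through the system ODE; the sketch below is a Python analogue using SciPy's solve_ivp in place of the MATLAB ODE suite (the drift f and the function name are illustrative, not the authors' code), and the key point is that the solver's adaptive step-size control keeps the local discretization error below user-supplied tolerances:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def predict_sigma_points(f, sigma_points, t0, t1,
                         rtol=1e-8, atol=1e-10):
    """Propagate UKF sigma points through the drift ODE x' = f(t, x)
    between measurements; adaptive stepping controls the local
    discretization error automatically."""
    out = []
    for x0 in sigma_points:
        sol = solve_ivp(f, (t0, t1), x0, rtol=rtol, atol=atol)
        out.append(sol.y[:, -1])   # state at measurement time t1
    return np.array(out)

# Illustrative drift: Van der Pol-type oscillator
f = lambda t, x: np.array([x[1], (1 - x[0]**2) * x[1] - x[0]])
pts = predict_sigma_points(
    f, [np.array([1.0, 0.0]), np.array([1.1, 0.0])], 0.0, 0.5)
\end{verbatim}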
We propose a finite element discretization for the steady, generalized Navier-Stokes equations for fluids with shear-dependent viscosity, completed with inhomogeneous Dirichlet boundary conditions and an inhomogeneous divergence constraint. We establish (weak) convergence of discrete solutions as well as a priori error estimates for the velocity vector field and the scalar kinematic pressure. Numerical experiments complement the theoretical findings.
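A typical model case for this setting (a sketch with illustrative notation; the paper's precise structural assumptions on the extra stress may differ) is the power-law fluid: find a velocity $u$ and kinematic pressure $\pi$ such that
\[
-\operatorname{div}\bigl(\nu(|\mathbf{D}u|)\,\mathbf{D}u\bigr) + \operatorname{div}(u \otimes u) + \nabla \pi = f, \qquad \operatorname{div} u = g \quad \text{in } \Omega, \qquad u = u_D \quad \text{on } \partial\Omega,
\]
where $\mathbf{D}u = \tfrac12(\nabla u + \nabla u^{\top})$ is the symmetric velocity gradient and, e.g., $\nu(s) = \nu_0\,(\kappa + s)^{p-2}$ with $p > 1$; the data $g \neq 0$ and $u_D$ encode the inhomogeneous divergence constraint and Dirichlet condition, respectively.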
We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that at each iteration sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, possibly with a local or a boundary control, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits excellent performance and robustness with respect to the parameters of interest.
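For orientation, in the linear-quadratic case the discretized optimality system has the familiar saddle-point block structure (a generic sketch with illustrative notation, not the paper's exact formulation):
\[
\begin{pmatrix}
\mathcal{M} & 0 & \mathcal{A}^{\top} \\
0 & \nu G & -\mathcal{B}^{\top} \\
\mathcal{A} & -\mathcal{B} & 0
\end{pmatrix}
\begin{pmatrix} \mathbf{y} \\ u \\ \mathbf{p} \end{pmatrix}
=
\begin{pmatrix} \mathcal{M}\,\hat{\mathbf{y}} \\ 0 \\ \mathbf{f} \end{pmatrix},
\]
where $\mathbf{y}$ and $\mathbf{p}$ stack the $N$ sample-wise states and adjoints and $u$ is the control shared across samples. A collective smoother then visits one mesh node at a time and solves the small saddle-point system coupling the roughly $2N+1$ unknowns living at that node, which is where the $O(N)$ cost per node enters.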
Markov chain Monte Carlo (MCMC) algorithms are based on the construction of a Markov chain with transition probabilities $P_\mu(x,\cdot)$, where $\mu$ denotes an invariant distribution of interest. In this work, we regard these transition probabilities as functions of their invariant distributions, and we develop a notion of derivative in the invariant distribution of an MCMC kernel. Around this concept we build a set of tools that we refer to as Markov Chain Monte Carlo Calculus. This allows us to compare Markov chains with different invariant distributions within a suitable class via what we refer to as mean value inequalities. We explain how MCMC Calculus provides a natural framework to study algorithms that use an approximation of an invariant distribution, and we illustrate how it suggests practical guidelines for the efficiency of MCMC algorithms. We conclude by showing how the tools developed can be applied to prove convergence of interacting and sequential MCMC algorithms, which arise in the context of particle filtering.
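To make the dependence of the kernel on its invariant distribution concrete, consider a random-walk Metropolis-Hastings step, where $\mu$ enters $P_\mu(x,\cdot)$ only through the acceptance ratio (a minimal sketch; the names are illustrative):
\begin{verbatim}
import numpy as np

def mh_step(x, log_mu, step=0.5, rng=None):
    """One random-walk Metropolis step: the kernel P_mu(x, .)
    depends on the target mu only via the acceptance ratio
    mu(y)/mu(x); the symmetric proposal density cancels."""
    rng = rng or np.random.default_rng()
    y = x + step * rng.standard_normal()
    if np.log(rng.uniform()) < log_mu(y) - log_mu(x):
        return y       # accept the proposal
    return x           # reject: remain at x

# Kernels with different invariant distributions differ
# only through log_mu:
log_mu1 = lambda z: -0.5 * z**2           # N(0, 1) target
log_mu2 = lambda z: -0.5 * (z - 1.0)**2   # N(1, 1) target
\end{verbatim}
Perturbing the target density perturbs the kernel, which is the viewpoint that the proposed calculus makes precise.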