In this paper, we present algorithms and implementations for the end-to-end GPU acceleration of matrix-free low-order-refined preconditioning of high-order finite element problems. The methods described here allow for the construction of effective preconditioners for high-order problems with optimal memory usage and computational complexity. The preconditioners are based on the construction of a spectrally equivalent low-order discretization on a refined mesh, which is then amenable to, for example, algebraic multigrid preconditioning. The constants of equivalence are independent of mesh size and polynomial degree. For vector finite element problems in $H({\rm curl})$ and $H({\rm div})$ (e.g. for electromagnetic or radiation diffusion problems), a specially constructed interpolation-histopolation basis is used to ensure fast convergence. Detailed performance studies are carried out to analyze the efficiency of the GPU algorithms. The kernel throughput of each of the main algorithmic components is measured, and the strong and weak parallel scalability of the methods is demonstrated. The different relative weighting and significance of the algorithmic components on GPUs and CPUs are discussed. Results on problems involving adaptively refined nonconforming meshes are shown, and the use of the preconditioners on a large-scale magnetic diffusion problem using all spaces of the finite element de Rham complex is illustrated.
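The core relation behind this construction can be stated compactly. The display below is a generic statement of low-order-refined (LOR) spectral equivalence; the symbols $A_p$ and $A_{\mathrm{LOR}}$ are illustrative notation, not necessarily the paper's own.

```latex
% Spectral equivalence underlying LOR preconditioning (illustrative notation):
% A_p   : high-order stiffness matrix of polynomial degree p
% A_LOR : low-order (p = 1) matrix on the refined mesh, sharing the same DOFs
\[
  c_1 \, (A_{\mathrm{LOR}} u, u) \;\le\; (A_p u, u) \;\le\; c_2 \, (A_{\mathrm{LOR}} u, u)
  \qquad \text{for all } u,
\]
% with c_1, c_2 > 0 independent of the mesh size h and the degree p, so any
% uniform preconditioner for A_LOR (e.g. AMG) is a uniform preconditioner for A_p.
```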
Simulating the propagation of acoustic waves by solving a system of three coupled first-order linear differential equations with a k-space pseudo-spectral method is popular in biomedical applications, first because an open-source toolbox implementing this numerical approach is available, and second because of its efficiency. The k-space pseudo-spectral method is efficient because it admits coarser computational grids and larger time steps than finite difference and finite element methods at the same accuracy. The goal of this study is to compare this numerical wave solver with an analytical solution of the wave equation, based on the Green's function, for computing the propagation of acoustic waves in homogeneous media. The comparison is carried out in the frequency domain. Using the k-Wave solver, a match to the Green's function is obtained after modifying how the mass source is included in the linearised equation of continuity (conservation of mass) in the associated system of wave equations.
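For reference, the analytical benchmark in a homogeneous medium is the standard free-space Green's function of the Helmholtz equation; the following display is this textbook form, not the paper's specific derivation.

```latex
% Frequency-domain reference solution in a homogeneous medium: the 3D
% free-space Green's function of (\nabla^2 + k^2) G = -\delta(\mathbf{r}),
\[
  G(\mathbf{r}, \omega) \;=\;
  \frac{e^{\, i k \lVert \mathbf{r} \rVert}}{4 \pi \lVert \mathbf{r} \rVert},
  \qquad k = \frac{\omega}{c_0},
\]
% where c_0 is the sound speed; the field of a point source follows by scaling
% G by the source spectrum, giving a per-frequency target for the k-Wave output.
```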
Neural networks have recently allowed solving many ill-posed inverse problems with unprecedented performance. Physics-informed approaches are already progressively replacing carefully hand-crafted reconstruction algorithms in real applications. However, these networks suffer from a major defect: when trained on a given forward operator, they do not generalize well to a different one. The aim of this paper is twofold. First, we show through various applications that training the network with a family of forward operators solves the adaptivity problem without significantly compromising reconstruction quality. Second, we illustrate that this training procedure makes it possible to tackle challenging blind inverse problems. Our experiments include partial Fourier sampling problems arising in magnetic resonance imaging (MRI), computed tomography (CT), and image deblurring.
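A minimal sketch of such an operator-randomized training step, assuming a PyTorch-style setup; `sample_operator`, `apply_A`, and the operator-conditioned call `net(y, theta)` are hypothetical stand-ins, not the paper's API.

```python
import torch

def sample_operator(batch_size):
    """Draw random parameters identifying a forward operator from a family
    (here: Gaussian blur widths); a hypothetical stand-in for the paper's
    operator families."""
    return torch.empty(batch_size).uniform_(0.5, 3.0)

def training_step(net, x, apply_A, optimizer):
    """One step of operator-randomized training: each batch sees a different
    operator, so the network learns an inverse map for the whole family."""
    theta = sample_operator(x.shape[0])
    y = apply_A(x, theta)       # simulate measurements y = A_theta(x)
    x_hat = net(y, theta)       # reconstructor may be conditioned on theta
    loss = torch.nn.functional.mse_loss(x_hat, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```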
Accelerated MRI aims to find a pair of sampler and reconstructor that reduces acquisition time while maintaining reconstruction quality. Most existing works focus on finding either sparse samplers with a fixed reconstructor or reconstructors with a fixed sampler. Recently, researchers have begun to consider learning samplers and reconstructors jointly. In this paper, we propose an alternating training framework for finding a good sampler-reconstructor pair via deep reinforcement learning (RL). In particular, we propose a novel sparse-reward Partially Observed Markov Decision Process (POMDP) to formulate the MRI sampling trajectory. Compared to existing works that utilize dense-reward POMDPs, the proposed sparse-reward POMDP is more computationally efficient and has a provable advantage over its dense-reward counterpart. We evaluate our method on fastMRI, a public benchmark MRI dataset, and it achieves state-of-the-art reconstruction performance.
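The computational-efficiency argument can be illustrated with a schematic contrast between the two reward schedules; these reward definitions are illustrative, not the paper's exact formulation.

```python
def dense_rewards(recon_errors):
    """Dense-reward schedule: reward after every acquired k-space line is the
    incremental drop in reconstruction error, requiring a reconstruction per
    step and hence a costly training loop."""
    return [recon_errors[t] - recon_errors[t + 1]
            for t in range(len(recon_errors) - 1)]

def sparse_rewards(recon_errors):
    """Sparse-reward schedule: zero reward until the episode ends, then one
    terminal reward for the total error reduction, so a single reconstruction
    per episode suffices."""
    T = len(recon_errors) - 1
    return [0.0] * (T - 1) + [recon_errors[0] - recon_errors[-1]]
```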
Mixtures of regression are a powerful class of models for regression learning with respect to a highly uncertain and heterogeneous response variable of interest. In addition to being a rich predictive model for the response given some covariates, the parameters in this model class provide useful information about the heterogeneity in the data population, which is represented by the conditional distributions for the response given the covariates associated with a number of distinct but latent subpopulations. In this paper, we investigate conditions of strong identifiability, rates of convergence for conditional density and parameter estimation, and the Bayesian posterior contraction behavior arising in finite mixture of regression models, under exact-fitted and over-fitted settings and when the number of components is unknown. This theory is applicable to common choices of link functions and families of conditional distributions employed by practitioners. We provide simulation studies and data illustrations, which shed some light on the parameter learning behavior found in several popular regression mixture models reported in the literature.
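For concreteness, the generic finite mixture-of-regressions density studied in this setting can be written as follows; the notation is generic rather than the paper's.

```latex
% A finite mixture of regressions with K components: given covariates x, the
% response y is drawn from one of K latent subpopulations,
\[
  p(y \mid x) \;=\; \sum_{k=1}^{K} \pi_k \,
  f\!\bigl(y \mid g(x, \theta_k)\bigr),
  \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1,
\]
% where g is the link/regression function and f the conditional density family.
% Identifiability asks when (\pi_k, \theta_k) can be recovered from p(y | x);
% the over-fitted regime fits more components K than the truth requires.
```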
In this paper we propose a novel second-order accurate well-balanced scheme for the shallow water equations in general covariant coordinates over manifolds. In our approach, once the gravitational field is defined for the specific case, an equipotential surface is detected and parametrized by a frame of general covariant coordinates. This surface is the manifold whose covariant parametrization induces a metric tensor. The model is then rewritten in hyperbolic form with a tuple of conserved variables composed of both the evolving physical quantities and the metric coefficients. This formulation allows the numerical scheme to compute the curvature of the manifold automatically as the physical variables are evolved.
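The induced metric referenced above has the standard form below, assuming a parametrization $\mathbf{r}(\xi^1, \xi^2)$ of the equipotential surface; the symbols are standard differential-geometry notation rather than the paper's.

```latex
% Metric tensor induced by a covariant parametrization r(\xi^1, \xi^2) of the
% equipotential surface:
\[
  g_{ij} \;=\; \frac{\partial \mathbf{r}}{\partial \xi^i} \cdot
               \frac{\partial \mathbf{r}}{\partial \xi^j},
  \qquad i, j \in \{1, 2\}.
\]
% Carrying the coefficients g_{ij} in the tuple of evolved variables makes the
% curvature information available to the scheme without prescribing it analytically.
```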
This work studies the problem of transfer learning under the functional linear regression model framework, which aims to improve the estimation and prediction of the target model by leveraging information from related source models. We measure the relatedness between target and source models using the Reproducing Kernel Hilbert Space (RKHS) norm, allowing the type of information being transferred to be interpreted via the structural properties of the spaces. Two transfer learning algorithms are proposed: one transfers information from source tasks when we know which sources to use, while the other aggregates multiple transfer learning results from the first algorithm to achieve robust transfer learning without prior information about the sources. Furthermore, we establish the optimal convergence rates for the prediction risk in the target model, making the statistical gain from transfer learning mathematically provable. The theoretical analysis of the prediction risk also provides insight into which factors affect the transfer learning effect, i.e. what makes source tasks useful to the target task. We demonstrate the effectiveness of the proposed transfer learning algorithms on extensive synthetic data as well as on a real financial data application.
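One natural reading of the setup, in standard functional-linear-regression notation; the symbols $\beta_0$, $\beta_m$, and $\delta_m$ are illustrative, not the paper's own.

```latex
% Functional linear regression for task m (target m = 0, or a source m >= 1):
\[
  Y_m \;=\; \alpha_m + \int_0^1 X_m(t)\, \beta_m(t)\, \mathrm{d}t + \varepsilon_m,
\]
% with relatedness between the target slope \beta_0 and a source slope \beta_m
% measured in the RKHS norm,
\[
  \delta_m \;=\; \lVert \beta_0 - \beta_m \rVert_{\mathcal{H}},
\]
% a small \delta_m meaning the source carries structure transferable to the target.
```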
We develop a new second-order unstaggered path-conservative central-upwind (PCCU) scheme for ideal and shallow water magnetohydrodynamics (MHD) equations. The new scheme possesses several important properties: it locally preserves the divergence-free constraint, it does not rely on any (approximate) Riemann problem solver, and it robustly produces high-resolution and non-oscillatory results. The derivation of the scheme is based on the Godunov-Powell nonconservative modifications of the studied MHD systems. The local divergence-free property is enforced by augmenting the modified systems with the evolution equations for the corresponding derivatives of the magnetic field components. These derivatives are then used to design a special piecewise linear reconstruction of the magnetic field, which guarantees a non-oscillatory nature of the resulting scheme. In addition, the proposed PCCU discretization accounts for the jump of the nonconservative product terms across cell interfaces, thereby ensuring stability. We test the proposed PCCU scheme on several benchmarks for both ideal and shallow water MHD systems. The obtained numerical results illustrate the performance of the new scheme, its robustness, and its ability not only to achieve high resolution, but also to preserve the positivity of computed quantities such as density, pressure, and water depth.
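Schematically, the Godunov-Powell modification has the well-known form below; the vector $\mathbf{s}(\mathbf{q})$ is shown only in outline, as its exact components depend on the MHD system at hand.

```latex
% The divergence-free constraint \nabla \cdot \mathbf{B} = 0 is preserved by the
% exact equations; the Godunov-Powell modification adds nonconservative source
% terms proportional to the divergence,
\[
  \partial_t \mathbf{q} + \nabla \cdot \mathbf{F}(\mathbf{q})
  \;=\; -\,(\nabla \cdot \mathbf{B})\, \mathbf{s}(\mathbf{q}),
\]
% which vanish for exact solutions but render the system symmetrizable and
% Galilean invariant, a suitable starting point for path-conservative schemes.
```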
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
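A minimal neural ODE in PyTorch, using the `torchdiffeq` package; the architecture and dimensions are illustrative, not from the thesis.

```python
import torch
from torchdiffeq import odeint  # pip install torchdiffeq

class ODEFunc(torch.nn.Module):
    """Learned vector field f_theta(t, y): a neural ODE is dy/dt = f_theta(t, y)."""
    def __init__(self, dim, width=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, width), torch.nn.Tanh(),
            torch.nn.Linear(width, dim))

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc(dim=2)
y0 = torch.randn(16, 2)                  # batch of initial conditions
t = torch.linspace(0.0, 1.0, steps=10)   # evaluation times
ys = odeint(func, y0, t)                 # shape (10, 16, 2): the solution path
# ys is differentiable with respect to the parameters of func, so the vector
# field is trained by backpropagating a loss on ys through the ODE solve.
```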
The Q-learning algorithm is known to be affected by the maximization bias, i.e. the systematic overestimation of action values, an important issue that has recently received renewed attention. Double Q-learning has been proposed as an efficient algorithm to mitigate this bias. However, this comes at the price of an underestimation of action values, in addition to increased memory requirements and a slower convergence. In this paper, we introduce a new way to address the maximization bias in the form of a "self-correcting algorithm" for approximating the maximum of an expected value. Our method balances the overestimation of the single estimator used in conventional Q-learning and the underestimation of the double estimator used in Double Q-learning. Applying this strategy to Q-learning results in Self-correcting Q-learning. We show theoretically that this new algorithm enjoys the same convergence guarantees as Q-learning while being more accurate. Empirically, it performs better than Double Q-learning in domains with rewards of high variance, and it even attains faster convergence than Q-learning in domains with rewards of zero or low variance. These advantages transfer to a Deep Q Network implementation that we call Self-correcting DQN and which outperforms regular DQN and Double DQN on several tasks in the Atari 2600 domain.
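For context, the two standard targets whose opposite biases the self-correcting estimator balances are shown below; the self-correcting update itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def q_learning_target(Q, r, s_next, gamma):
    """Single-estimator target r + gamma * max_a Q(s', a): overestimates the
    true maximum when value estimates are noisy (maximization bias)."""
    return r + gamma * np.max(Q[s_next])

def double_q_target(Q1, Q2, r, s_next, gamma):
    """Double estimator: select the action with Q1, evaluate it with Q2;
    removes the overestimation but tends to underestimate instead."""
    a_star = np.argmax(Q1[s_next])
    return r + gamma * Q2[s_next, a_star]
```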
Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This makes it possible to relax both the optimal value and the optimal solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
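The negative-entropy instance of the smoothed max is particularly concrete: it is a scaled log-sum-exp, and its gradient is the softmax, which is what makes the DP recursion differentiable. A sketch of just this operator under that choice of regularizer, not of the full framework:

```python
import numpy as np
from scipy.special import logsumexp, softmax

def smoothed_max(x, gamma=1.0):
    """Entropy-regularized max: gamma * logsumexp(x / gamma).
    Recovers max(x) as gamma -> 0; smooth and convex for gamma > 0."""
    return gamma * logsumexp(np.asarray(x) / gamma)

def smoothed_argmax(x, gamma=1.0):
    """Gradient of smoothed_max: the softmax, a differentiable relaxation of
    the one-hot argmax that appears in the hard DP recursion."""
    return softmax(np.asarray(x) / gamma)
```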