This paper introduces non-linear dimension reduction in factor-augmented vector autoregressions (FAVARs) to analyze the effects of different economic shocks. I argue that controlling for non-linearities between a large-dimensional dataset and the latent factors is particularly useful during turbulent periods of the business cycle. In simulations, I show that non-linear dimension reduction techniques yield good forecasting performance, especially when the data are highly volatile. In an empirical application, I identify a monetary policy shock as well as an uncertainty shock, both excluding and including observations from the COVID-19 pandemic. These two applications suggest that the non-linear FAVAR approaches are capable of dealing with the large outliers caused by the COVID-19 pandemic and yield reliable results in both scenarios.
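For intuition, a two-step FAVAR-style pipeline might look as follows, with kernel PCA standing in as one possible non-linear reducer; the paper's specific techniques, data, and identification scheme are not reproduced here, and all names and dimensions below are illustrative.

```python
# Illustrative two-step FAVAR pipeline: extract latent factors from a large
# panel with a non-linear reducer, then fit a VAR on factors plus observables.
import numpy as np
from sklearn.decomposition import KernelPCA
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T, N, k = 200, 120, 3            # time periods, panel width, number of factors
X = rng.standard_normal((T, N))  # stand-in for a large macro panel (T x N)
y = rng.standard_normal((T, 1))  # stand-in observable, e.g. a policy rate

# Non-linear dimension reduction (kernel PCA here; the paper's methods differ).
factors = KernelPCA(n_components=k, kernel="rbf").fit_transform(X)

# Factor-augmented VAR: stack the estimated factors with the observable.
Z = np.hstack([factors, y])
var_res = VAR(Z).fit(maxlags=2)
print(var_res.summary())
```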
We present a new high-order accurate spectral element solution to the two-dimensional scalar Poisson equation subject to a general Robin boundary condition. The solution is based on a simplified version of the shifted boundary method, employing a continuous arbitrary-order $hp$-Galerkin spectral element method as the numerical discretization procedure. The simplification relies on a polynomial correction that avoids explicitly evaluating the high-order partial derivatives from the Taylor series expansion which have traditionally been used within the shifted boundary method. In this setting, we apply an extrapolation and novel interpolation approach to project the basis functions from the true domain onto the approximate surrogate domain. The resulting method naturally incorporates curved geometrical features of the domain, overcomes complex and cumbersome mesh generation, and avoids problems with small cut cells. Dirichlet, Neumann, and general Robin boundary conditions are enforced weakly through (i) a generalized Nitsche method and (ii) a generalized Aubin method. For this, a consistent asymptotic-preserving formulation of the embedded Robin conditions is presented. We present several numerical experiments and an analysis of the algorithmic properties of the different weak formulations, including convergence studies under polynomial-order ($p$) enrichment and mesh ($h$) refinement to highlight the spectral and algebraic convergence features, respectively, together with matrix-conditioning studies. This assesses the influence of errors across variational formulations, polynomial order, mesh size, and mappings between the true and surrogate boundaries.
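For reference, the classical (non-shifted) natural weak enforcement of a Robin condition on the true boundary takes the following generic form; the shifted boundary variants instead pose these boundary integrals on a surrogate boundary with polynomially corrected data. This is a textbook sketch, not the paper's exact formulation.

```latex
% Generic natural (Aubin-type) weak enforcement of the Robin condition
% \alpha u + \partial_n u = g for -\Delta u = f on the true domain \Omega:
% find u \in V such that, for all v \in V,
\int_{\Omega} \nabla u \cdot \nabla v \, dx
  + \int_{\partial\Omega} \alpha \, u \, v \, ds
  = \int_{\Omega} f \, v \, dx
  + \int_{\partial\Omega} g \, v \, ds .
```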
Bayesian optimal design of experiments is a well-established approach to planning experiments. Briefly, a probability distribution for the responses, known as a statistical model, is assumed, which depends on a vector of unknown parameters. A utility function is then specified, giving the gain in information for estimating the true value of the parameters using the Bayesian posterior distribution. A Bayesian optimal design is found by maximising the expectation of the utility with respect to the joint distribution given by the statistical model and the prior distribution for the true parameter values. The approach takes account of the experimental aim via the specification of the utility, and of all assumed sources of uncertainty via the expected utility. However, it is predicated on the specification of the statistical model. Recently, a new type of statistical inference, known as Gibbs (or General Bayesian) inference, has been advanced. This is Bayesian-like, in that uncertainty about unknown quantities is represented by a posterior distribution, but it does not necessarily rely on the specification of a statistical model. The resulting inference should thus be less sensitive to misspecification of the statistical model. The purpose of this paper is to propose Gibbs optimal design: a framework for optimal design of experiments for Gibbs inference. The concept behind the framework is introduced, along with a computational approach for finding Gibbs optimal designs in practice. The framework is demonstrated on exemplars including linear models and experiments with count and time-to-event responses.
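A minimal Monte Carlo sketch of expected-utility design search follows, using a toy one-parameter linear model and a placeholder negative-squared-error utility rather than a Gibbs utility; the prior, noise level, and design grid are all illustrative assumptions.

```python
# Monte Carlo sketch of expected-utility design search: draw (theta, y) pairs
# from the prior and model, evaluate a utility, and average over draws.
import numpy as np

rng = np.random.default_rng(1)

def expected_utility(d, n_draws=2000):
    """Approximate E[u(theta, y, d)] for a toy one-point design d in (0, 1]."""
    theta = rng.normal(0.0, 1.0, n_draws)          # prior draws
    y = theta * d + rng.normal(0.0, 0.5, n_draws)  # simulated responses
    theta_hat = y * d / (d**2 + 0.25)              # conjugate posterior mean
    return -np.mean((theta_hat - theta) ** 2)      # placeholder utility

designs = np.linspace(0.05, 1.0, 20)
best = max(designs, key=expected_utility)
print(f"approximately optimal design: d = {best:.2f}")
```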
We address speech enhancement based on variational autoencoders, which involves learning a speech prior distribution in the time-frequency (TF) domain. A zero-mean complex-valued Gaussian distribution is usually assumed for the generative model, where the speech information is encoded in the variance as a function of a latent variable. In contrast to this commonly used approach, we propose a weighted-variance generative model, in which the contribution of each spectrogram time frame to parameter learning is weighted. We impose a Gamma prior distribution on the weights, which effectively leads to a Student's t-distribution, instead of a Gaussian, for speech generative modeling. We develop efficient training and speech enhancement algorithms based on the proposed generative model. Our experimental results on spectrogram auto-encoding and speech enhancement demonstrate the effectiveness and robustness of the proposed approach compared to the standard unweighted-variance model.
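A small numerical sketch of the underlying hierarchy: a Gamma-distributed weight on a Gaussian variance marginalizes to a Student's t-distribution. The shape parameter and base variance below are illustrative choices, not the paper's settings.

```python
# A Gamma prior on a per-frame precision weight of a zero-mean Gaussian
# marginalizes to a (heavier-tailed) Student's t-distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, sigma2 = 3.0, 1.0                     # Gamma shape, base variance

# Hierarchical sampling: w ~ Gamma(alpha, rate=alpha), x | w ~ N(0, sigma2/w).
w = rng.gamma(alpha, 1.0 / alpha, 100_000)
x = rng.normal(0.0, np.sqrt(sigma2 / w))

# Marginally, x follows a Student's t with 2*alpha degrees of freedom and
# scale sqrt(sigma2); the excess kurtosis (3 for df = 6) shows the heavy tails.
print("sample excess kurtosis:", stats.kurtosis(x))
print("t fit (df, loc, scale):", stats.t.fit(x, f0=2 * alpha))
```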
We propose an augmented Lagrangian-based preconditioner to accelerate the convergence of Krylov subspace methods applied to linear systems of equations with a block three-by-three structure, such as those arising from mixed finite element discretizations of the coupled Stokes-Darcy flow problem. We analyze the spectrum of the preconditioned matrix and show how the new preconditioner can be applied efficiently. Numerical experiments illustrate the effectiveness of the preconditioner, in conjunction with flexible GMRES, for solving linear systems of equations arising from a 3D test problem.
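To illustrate the mechanics (though not the augmented Lagrangian construction itself), the sketch below applies a block-diagonal preconditioner to a synthetic block three-by-three system through a Krylov solver; all blocks and sizes are made up, and SciPy's standard GMRES stands in for the flexible variant.

```python
# Toy block three-by-three system solved by GMRES with a block-diagonal
# preconditioner applied through a LinearOperator.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 60
# Diagonally dominant blocks keep the toy system well behaved.
A = sp.eye(n) * 4.0 + sp.random(n, n, density=0.05, random_state=0)
B = sp.random(n, n, density=0.05, random_state=1)
C = sp.eye(n) * 4.0 + sp.random(n, n, density=0.05, random_state=2)
D = sp.eye(n) * 4.0 + sp.random(n, n, density=0.05, random_state=3)
K = sp.bmat([[A, B, None], [B.T, C, None], [None, None, D]], format="csc")

# Block-diagonal preconditioner: exact solves with the three diagonal blocks.
luA, luC, luD = (spla.splu(blk.tocsc()) for blk in (A, C, D))

def apply_prec(r):
    return np.concatenate([luA.solve(r[:n]),
                           luC.solve(r[n:2 * n]),
                           luD.solve(r[2 * n:])])

M = spla.LinearOperator(K.shape, matvec=apply_prec)
b = np.ones(3 * n)
x, info = spla.gmres(K, b, M=M)
print("GMRES info:", info)   # 0 indicates convergence
```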
For solving the two-dimensional incompressible flow equations in vorticity form by a fourth-order compact finite difference scheme with explicit strong stability preserving (SSP) temporal discretizations, we show that the simple bound-preserving limiter in [Li H., Xie S., Zhang X., SIAM J. Numer. Anal., 56 (2018)] can enforce strict bounds on the vorticity, provided the velocity field satisfies a discrete divergence-free constraint. To reduce oscillations, a modified TVB limiter adapted from [Cockburn B., Shu C.-W., SIAM J. Numer. Anal., 31 (1994)] is constructed without affecting the bound-preserving property. This bound-preserving finite difference method can be used for any passive convection equation with a divergence-free velocity field.
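The referenced limiter is of the Zhang-Shu rescaling type; a minimal sketch of that rescaling is given below, assuming the cell averages already lie within the bounds and omitting the divergence-free and accuracy considerations treated in the paper.

```python
# Simple bound-preserving limiter: scale point values toward the cell average
# so they stay inside prescribed global bounds [m, M].
import numpy as np

def bound_preserving_limit(u, m, M):
    """Limit point values u (one cell per row) toward each cell average."""
    u = np.asarray(u, dtype=float)
    ubar = u.mean(axis=1, keepdims=True)   # cell averages, assumed in [m, M]
    umin = u.min(axis=1, keepdims=True)
    umax = u.max(axis=1, keepdims=True)
    eps = 1e-14
    theta = np.minimum(1.0, np.minimum(
        (M - ubar) / np.maximum(umax - ubar, eps),
        (ubar - m) / np.maximum(ubar - umin, eps)))
    return ubar + theta * (u - ubar)       # convex rescaling keeps the bounds

cells = np.array([[0.1, 0.9, 1.3], [-0.2, 0.4, 0.5]])  # values exceeding [0, 1]
print(bound_preserving_limit(cells, 0.0, 1.0))
```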
The proximal Galerkin finite element method is a high-order, low iteration complexity, nonlinear numerical method that preserves the geometric and algebraic structure of pointwise bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free boundary problems, enforce discrete maximum principles, and develop a scalable, mesh-independent algorithm for optimal design problems with pointwise bound constraints. This paper also provides a derivation of the latent variable proximal point (LVPP) algorithm, an unconditionally stable alternative to the interior point method. LVPP is an infinite-dimensional optimization algorithm that may be viewed as having an adaptive barrier function that is updated with a new informative prior at each (outer loop) optimization iteration. One of its main benefits is witnessed when analyzing the classical obstacle problem. Therein, we find that the original variational inequality can be replaced by a sequence of partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout this work, we arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and certain infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization. The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis.
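As a sketch of the LVPP iteration in arguably its simplest setting, consider the obstacle problem with lower obstacle $\phi$; one common form of the outer-loop subproblem (not necessarily the paper's exact statement) reads:

```latex
% Sketch of one LVPP outer iteration for the obstacle problem u >= \phi:
% given \psi_{k-1} and a step size \alpha_k > 0, find (u_k, \psi_k) with
\alpha_k \left( -\Delta u_k - f \right) + \psi_k - \psi_{k-1} = 0,
\qquad
u_k = \phi + e^{\psi_k}.
% Substituting the second equation into the first yields one semilinear PDE
% per iteration, and every iterate satisfies u_k > \phi by construction.
```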
Feature attribution is a fundamental task in both machine learning and data analysis that involves determining the contribution of individual features or variables to a model's output. This process helps identify the most important features for predicting an outcome. The history of feature attribution methods can be traced back to Generalized Additive Models (GAMs), which extend linear regression models by incorporating non-linear relationships between the dependent and independent variables. In recent years, gradient-based methods and surrogate models have been applied to unravel complex Artificial Intelligence (AI) systems, but these methods have limitations: GAMs tend to achieve lower accuracy, gradient-based methods can be difficult to interpret, and surrogate models often suffer from stability and fidelity issues. Furthermore, most existing methods do not consider users' contexts, which can significantly influence their preferences. To address these limitations and advance the current state of the art, we define a novel feature attribution framework called Context-Aware Feature Attribution Through Argumentation (CA-FATA). Our framework harnesses the power of argumentation by treating each feature as an argument that can either support, attack, or neutralize a prediction. Additionally, CA-FATA formulates feature attribution as an argumentation procedure in which each computation has explicit semantics, making it inherently interpretable. CA-FATA also easily integrates side information, such as users' contexts, resulting in more accurate predictions.
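A toy sketch of the argument-role idea follows, with hand-picked weights and a simple signed-contribution rule standing in for CA-FATA's actual argumentation semantics; all names and values are invented for illustration.

```python
# Illustrative only: treat each feature as an argument whose signed
# contribution supports, attacks, or is neutral to a candidate prediction,
# then aggregate the contributions into a score.
from typing import Dict, Tuple

def attribute(features: Dict[str, float], weights: Dict[str, float],
              tol: float = 1e-6) -> Tuple[float, Dict[str, str]]:
    roles, score = {}, 0.0
    for name, value in features.items():
        contribution = weights.get(name, 0.0) * value
        score += contribution
        if contribution > tol:
            roles[name] = "supports"
        elif contribution < -tol:
            roles[name] = "attacks"
        else:
            roles[name] = "neutral"
    return score, roles

score, roles = attribute({"price": -0.8, "rating": 0.9, "distance": 0.0},
                         {"price": 0.5, "rating": 0.7, "distance": 0.4})
print(score, roles)
```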
The multispecies Landau collision operator describes the two-particle, small-scattering-angle (grazing) collisions in a plasma made up of different species of particles, such as electrons and ions. Recently, a structure-preserving deterministic particle method (arXiv:1910.03080) was developed for the single-species spatially homogeneous Landau equation. This method relies on a regularization of the Landau collision operator so that an approximate solution, which is a linear combination of Dirac delta distributions, is well-defined. Based on a weak form of the regularized Landau equation, the time-dependent locations of the Dirac delta functions satisfy a system of ordinary differential equations. In this work, we extend this particle method to the multispecies case and examine its conservation of mass, momentum, and energy, as well as its entropy-decay properties. We show that the equilibrium distribution of the regularized multispecies Landau equation is a Maxwellian distribution, and we state a critical condition on the regularization parameters that guarantees a species-independent equilibrium temperature. A convergence study comparing an exact multispecies BKW solution to the particle solution shows approximately second-order accuracy. Important physical properties of the particle method, such as conservation, entropy decay, and the equilibrium distribution, are demonstrated with several numerical examples.
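For concreteness, the conserved quantities monitored for a particle ensemble of one species are simple weighted moments; a minimal sketch with synthetic velocities and uniform weights (the paper's weights and regularized entropy are defined differently):

```python
# Conserved quantities of a velocity-space particle ensemble: mass,
# momentum, and kinetic energy for one species.
import numpy as np

rng = np.random.default_rng(3)
v = rng.normal(0.0, 1.2, size=(5000, 3))   # particle velocities, one species
w = np.full(len(v), 1.0 / len(v))          # particle weights
m = 1.0                                    # species mass

mass = m * w.sum()
momentum = m * (w[:, None] * v).sum(axis=0)
energy = 0.5 * m * (w * (v**2).sum(axis=1)).sum()
print(mass, momentum, energy)              # tracked across time steps
```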
Boundary value problems involving elliptic PDEs such as the Laplace and the Helmholtz equations are ubiquitous in physics and engineering. Many such problems have alternative formulations as integral equations that are mathematically more tractable than their PDE counterparts. However, the integral equation formulation poses a challenge in solving the dense linear systems that arise upon discretization. In cases where iterative methods converge rapidly, existing methods that draw on fast summation schemes such as the Fast Multipole Method are highly efficient and well established. More recently, linear complexity direct solvers that sidestep convergence issues by directly computing an invertible factorization have been developed. However, storage and compute costs are high, which limits their ability to solve large-scale problems in practice. In this work, we introduce a distributed-memory parallel algorithm based on an existing direct solver named ``strong recursive skeletonization factorization.'' The analysis of its parallel scalability applies generally to a class of existing methods that exploit the so-called strong admissibility. Specifically, we apply low-rank compression to certain off-diagonal matrix blocks in a way that minimizes data movement. Given a compression tolerance, our method constructs an approximate factorization of a discretized integral operator (dense matrix), which can be used to solve linear systems efficiently in parallel. Compared to iterative algorithms, our method is particularly suitable for problems involving ill-conditioned matrices or multiple right-hand sides. Large-scale numerical experiments are presented to demonstrate the performance of our implementation using the Julia language.
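The computational core such solvers rely on is low-rank compression of off-diagonal blocks coupling well-separated clusters; a small sketch using a truncated SVD on a 2D Laplace-kernel block (interpolative decompositions are used in practice, and the kernel, geometry, and tolerance here are illustrative):

```python
# Low-rank compression of a smooth off-diagonal interaction block between
# two well-separated point clusters, truncated to a prescribed tolerance.
import numpy as np

rng = np.random.default_rng(4)
src = rng.uniform(0.0, 1.0, (300, 2))          # source cluster
trg = rng.uniform(4.0, 5.0, (300, 2))          # well-separated target cluster
d = np.linalg.norm(trg[:, None, :] - src[None, :, :], axis=-1)
A = np.log(d)                                  # 2D Laplace kernel block

tol = 1e-8
U, s, Vt = np.linalg.svd(A, full_matrices=False)
rank = int(np.sum(s > tol * s[0]))             # numerical rank at tolerance tol
A_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
print(rank, np.linalg.norm(A - A_lr) / np.linalg.norm(A))
```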
We propose a method for computing the Lyapunov exponents of renewal equations (delay equations of Volterra type) and of coupled systems of renewal and delay differential equations. The method consists in reformulating the delay equation as an abstract differential equation, reducing the latter to a system of ordinary differential equations via pseudospectral collocation, and applying the standard discrete QR method. The effectiveness of the method is shown experimentally, and a MATLAB implementation is provided.
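The final ingredient, the standard discrete QR method, is classical; a self-contained sketch on a constant-coefficient linear ODE (where the Lyapunov exponents equal the real parts of the eigenvalues) follows. The test matrix and step sizes are arbitrary choices for illustration.

```python
# Discrete QR method on a linear ODE system x' = A x: propagate an
# orthonormal frame, re-orthonormalize by QR, and accumulate log diag(R)
# to estimate the Lyapunov exponents.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -0.5]])      # eigenvalues -0.25 +/- 0.97i

def rhs(t, Y):
    return (A @ Y.reshape(2, 2)).ravel()

Q = np.eye(2)
dt, n_steps = 0.1, 2000
log_r = np.zeros(2)
for _ in range(n_steps):
    sol = solve_ivp(rhs, (0.0, dt), Q.ravel(), rtol=1e-10, atol=1e-12)
    Q, R = np.linalg.qr(sol.y[:, -1].reshape(2, 2))
    # Fix signs so diag(R) > 0 and the factorization stays consistent.
    signs = np.sign(np.diag(R))
    Q *= signs
    log_r += np.log(np.abs(np.diag(R)))
print(log_r / (dt * n_steps))                 # both should approach -0.25
```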