The proximal Galerkin finite element method is a high-order, nonlinear numerical method that preserves the geometric and algebraic structure of bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free-boundary problems, enforce discrete maximum principles, and develop scalable, mesh-independent algorithms for optimal design. The paper begins with a derivation of the latent variable proximal point (LVPP) method: an unconditionally stable alternative to the interior point method. LVPP is an infinite-dimensional optimization algorithm that may be viewed as having an adaptive (Bayesian) barrier function that is updated with a new informative prior at each (outer loop) optimization iteration. One of the main benefits of this algorithm is witnessed when analyzing the classical obstacle problem. Therein, we find that the original variational inequality can be replaced by a sequence of semilinear partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout this work, we arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization. The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis.
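To make the proximal-point structure concrete, the following is a minimal finite-dimensional caricature of the entropic (Bregman) proximal point iteration underlying the latent-variable viewpoint, applied to a nonnegativity constraint $x \ge 0$. It is not the authors' Galerkin/PDE formulation: the quadratic objective, fixed step size $\alpha$, and Newton inner solver are illustrative assumptions.

\begin{verbatim}
# Finite-dimensional caricature of a latent-variable entropic proximal point
# iteration for min f(x) subject to x >= 0, with f(x) = 0.5*||x - c||^2.
# (Illustrative assumptions: quadratic f, fixed step alpha, Newton inner solver.)
import numpy as np

c = np.array([1.5, -0.7, 0.3])   # unconstrained minimizer; some entries negative
alpha = 1.0                      # proximal step size
psi = np.zeros_like(c)           # latent variable, x = exp(psi) > 0 by construction

for k in range(50):
    # Subproblem: alpha * grad f(exp(psi_new)) + psi_new = psi, solved by Newton.
    psi_new = psi.copy()
    for _ in range(30):
        x = np.exp(psi_new)
        g = alpha * (x - c) + psi_new - psi   # residual of optimality condition
        J = alpha * x + 1.0                   # (diagonal) Jacobian, always positive
        psi_new -= g / J
    psi = psi_new

x = np.exp(psi)
print(x)   # approaches max(c, 0), the bound-constrained minimizer
\end{verbatim}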
Testing cross-sectional independence in panel data models is of fundamental importance in econometric analysis with high-dimensional panels. Recently, econometricians have begun to turn their attention to the problem in the presence of serial dependence. The existing procedure for testing cross-sectional independence under serial correlation is based on the sum of the sample cross-sectional correlations; it generally performs well when the alternative has dense cross-sectional correlations, but suffers from low power against sparse alternatives. To deal with sparse alternatives, we propose a test based on the maximum of the squared sample cross-sectional correlations. Furthermore, we propose a combined test that aggregates the p-values of the max-based and sum-based tests and performs well under both dense and sparse alternatives. The combined test relies on the asymptotic independence of the max-based and sum-based test statistics, which we establish rigorously. We show that the proposed max-based and combined tests have attractive theoretical properties and demonstrate their superior performance via extensive simulations. We apply the two new tests to the weekly returns on the securities in the S\&P 500 index under the Fama-French three-factor model, and confirm the usefulness of the proposed combined test in detecting cross-sectional dependence.
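As a rough illustration of the two ingredients, the sketch below computes sum-of-squares and maximum-of-squares statistics from the sample cross-sectional correlations of an $N \times T$ panel and combines two p-values with Fisher's rule. The normal approximation for the sum, the Bonferroni bound for the maximum, and the Fisher combination are illustrative stand-ins, not the paper's asymptotic calibrations.

\begin{verbatim}
# Illustrative sketch (not the paper's exact statistics or calibration):
# sum-based and max-based statistics built from sample cross-sectional
# correlations of an N x T panel, with a simple p-value combination.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N, T = 50, 200
X = rng.standard_normal((N, T))      # stand-in panel (e.g., regression residuals)

R = np.corrcoef(X)                   # N x N sample cross-sectional correlations
iu = np.triu_indices(N, k=1)
r2 = R[iu] ** 2                      # squared correlations, i < j
npairs = len(r2)

sum_stat = T * r2.sum()              # sum-based statistic (dense alternatives)
max_stat = T * r2.max()              # max-based statistic (sparse alternatives)

# Illustrative calibrations: normal approximation for the sum of ~chi2(1)
# terms, and a conservative Bonferroni bound for the maximum.
p_sum = 1.0 - stats.norm.cdf((sum_stat - npairs) / np.sqrt(2.0 * npairs))
p_max = min(1.0, npairs * (1.0 - stats.chi2.cdf(max_stat, df=1)))

# Fisher's rule: one simple way to combine asymptotically independent p-values.
fisher = -2.0 * (np.log(p_sum) + np.log(p_max))
p_comb = 1.0 - stats.chi2.cdf(fisher, df=4)
print(p_sum, p_max, p_comb)
\end{verbatim}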
A general a posteriori error analysis applies to five lowest-order finite element methods for two fourth-order semi-linear problems with trilinear non-linearity and a general source. A quasi-optimal smoother extends the source term to the discrete trial space, and more importantly, modifies the trilinear term in the stream-function vorticity formulation of the incompressible 2D Navier-Stokes and the von K\'{a}rm\'{a}n equations. This enables the first efficient and reliable a posteriori error estimates for the 2D Navier-Stokes equations in the stream-function vorticity formulation for Morley, two discontinuous Galerkin, $C^0$ interior penalty, and WOPSIP discretizations with piecewise quadratic polynomials.
The Johnson--Lindenstrauss (JL) lemma is a powerful tool for dimensionality reduction in modern algorithm design. The lemma states that any set of high-dimensional points in a Euclidean space can be flattened to lower dimensions while approximately preserving pairwise Euclidean distances. Random matrices satisfying this lemma are called JL transforms (JLTs). Inspired by existing $s$-hashing JLTs with exactly $s$ nonzero elements on each column, the present work introduces an ensemble of sparse matrices encompassing so-called $s$-hashing-like matrices whose expected number of nonzero elements on each column is~$s$. The independence of the sub-Gaussian entries of these matrices and the knowledge of their exact distribution play an important role in their analyses. Using properties of independent sub-Gaussian random variables, these matrices are demonstrated to be JLTs, and their smallest and largest singular values are estimated non-asymptotically using a technique from geometric functional analysis. As the dimensions of the matrix grow to infinity, these singular values are proved to converge almost surely to fixed quantities (by using the universal Bai--Yin law), and in distribution to the Gaussian orthogonal ensemble (GOE) Tracy--Widom law after proper rescalings. Understanding the behaviors of extreme singular values is important in general because they are often used to define a measure of stability of matrix algorithms. For example, JLTs were recently used in derivative-free optimization algorithmic frameworks to select random subspaces in which random models or poll directions are constructed to achieve scalability; estimating their smallest singular value in particular therefore helps determine the dimension of these subspaces.
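The sketch below draws one sparse matrix in the spirit described above: each entry of an $m \times n$ matrix is independently nonzero with probability $s/m$ and takes the value $\pm 1/\sqrt{s}$, so each column has $s$ nonzeros in expectation and squared norms are preserved on average. The exact entry distribution used in the paper may differ.

\begin{verbatim}
# Sketch of an "s-hashing-like" sparse embedding: each entry of the m x n
# matrix is independently nonzero with probability s/m and takes the value
# +/- 1/sqrt(s), so every column has s nonzero entries in expectation and
# E||Sx||^2 = ||x||^2.  (The entry distribution here is an illustrative choice.)
import numpy as np

def s_hashing_like(m, n, s, rng):
    mask = rng.random((m, n)) < s / m              # Bernoulli(s/m) sparsity pattern
    signs = rng.choice([-1.0, 1.0], size=(m, n))   # independent Rademacher signs
    return mask * signs / np.sqrt(s)

rng = np.random.default_rng(1)
n, m, s = 2000, 200, 4
S = s_hashing_like(m, n, s, rng)

x = rng.standard_normal(n)
print(np.count_nonzero(S) / n)                     # ~ s nonzeros per column
print(np.linalg.norm(S @ x) / np.linalg.norm(x))   # ~ 1: norms roughly preserved
\end{verbatim}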
Solving multiphysics-based inverse problems for geological carbon storage monitoring can be challenging when multimodal time-lapse data are expensive to collect and costly to simulate numerically. We overcome these challenges by combining computationally cheap learned surrogates with learned constraints. Not only does this combination lead to vastly improved inversions for the important fluid-flow property of permeability, but it also provides a natural platform for inverting multimodal data including well measurements and active-source time-lapse seismic data. By adding a learned constraint, we arrive at a computationally feasible inversion approach that remains accurate. This is accomplished by including a trained deep neural network, known as a normalizing flow, which forces the model iterates to remain in-distribution, thereby safeguarding the accuracy of trained Fourier neural operators that act as surrogates for the computationally expensive multiphase flow simulations involving partial differential equation solves. By means of carefully selected experiments, centered around the problem of geological carbon storage, we demonstrate the efficacy of the proposed constrained optimization method on two different data modalities, namely time-lapse well and time-lapse seismic data. While permeability inversions from each of these two modalities have their pluses and minuses, their joint inversion benefits from both, yielding superior permeability inversions and CO$_2$ plume predictions both near to and far away from the monitoring wells.
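A schematic of the constrained-optimization idea follows: the unknown (e.g., permeability) is reparameterized through a trained normalizing flow, and the data misfit, evaluated through a learned surrogate, is minimized over the flow's latent variable so that iterates stay in-distribution. The flow, surrogate, and shapes below are untrained, hypothetical stand-ins, not the networks used in the paper.

\begin{verbatim}
# Schematic of inversion over the latent variable of a normalizing flow.
# `Flow` and `Surrogate` are untrained stand-ins for the trained networks
# described in the text; all names, shapes, and weights are hypothetical.
import torch

class Flow(torch.nn.Module):            # invertible map z -> model (placeholder)
    def __init__(self, dim):
        super().__init__()
        self.scale = torch.nn.Parameter(torch.ones(dim))
        self.shift = torch.nn.Parameter(torch.zeros(dim))
    def forward(self, z):
        return z * self.scale + self.shift

class Surrogate(torch.nn.Module):       # cheap stand-in for the learned simulator
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(dim_in, 64),
                                       torch.nn.ReLU(),
                                       torch.nn.Linear(64, dim_out))
    def forward(self, k):
        return self.net(k)

dim_model, dim_data = 128, 32
flow, surrogate = Flow(dim_model), Surrogate(dim_model, dim_data)
d_obs = torch.randn(dim_data)           # synthetic "observed" data

z = torch.zeros(dim_model, requires_grad=True)   # latent iterate (prior mode)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    k = flow(z)                                  # model stays in the flow's range
    misfit = 0.5 * torch.sum((surrogate(k) - d_obs) ** 2)
    penalty = 0.5 * torch.sum(z ** 2)            # keeps z near the latent prior
    (misfit + 1e-2 * penalty).backward()
    opt.step()
permeability_estimate = flow(z).detach()
\end{verbatim}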
Transition amplitudes and transition probabilities are relevant to many areas of physics simulation, including the calculation of response properties and correlation functions. These quantities can also be related to solving linear systems of equations. Here we present three related algorithms for calculating transition probabilities. First, we extend a previously published short-depth algorithm, allowing for the two input states to be non-orthogonal. Building on this first procedure, we then derive a higher-depth algorithm based on Trotterization and Richardson extrapolation that requires fewer circuit evaluations. Third, we introduce a tunable algorithm that allows for trading off circuit depth and measurement complexity, yielding an algorithm that can be tailored to specific hardware characteristics. Finally, we implement proof-of-principle numerics for models in physics and chemistry and for a subroutine in variational quantum linear solving (VQLS). The primary benefits of our approaches are that (a) arbitrary non-orthogonal states may now be used with small increases in quantum resources, (b) we (like another recently proposed method) entirely avoid subroutines such as the Hadamard test that may require three-qubit gates to be decomposed, and (c) in some cases fewer quantum circuit evaluations are required as compared to the previous state-of-the-art in NISQ algorithms for transition probabilities.
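A classical toy emulation of the Trotter-plus-Richardson idea is sketched below on a small Hamiltonian: first-order Trotterization gives a transition probability with an $O(1/n)$ leading error in the number of Trotter steps $n$, and the combination $2p_{2n} - p_n$ cancels that term. The Hamiltonian split and the states are arbitrary stand-ins, and no quantum circuits are involved.

\begin{verbatim}
# Classical toy emulation of Richardson extrapolation over Trotter step number.
# H = A + B on two qubits; states and splitting are arbitrary stand-ins.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
def rand_herm(d):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (M + M.conj().T) / 2

d, t = 4, 1.0
A, B = rand_herm(d), rand_herm(d)
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d); psi /= np.linalg.norm(psi)
phi = rng.standard_normal(d) + 1j * rng.standard_normal(d); phi /= np.linalg.norm(phi)

def trotter_prob(n):
    U_step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)  # first-order Trotter step
    U = np.linalg.matrix_power(U_step, n)
    return abs(phi.conj() @ U @ psi) ** 2

p_exact = abs(phi.conj() @ expm(-1j * (A + B) * t) @ psi) ** 2
p_n, p_2n = trotter_prob(8), trotter_prob(16)
p_rich = 2.0 * p_2n - p_n                                   # cancels the O(1/n) term
print(abs(p_n - p_exact), abs(p_2n - p_exact), abs(p_rich - p_exact))
\end{verbatim}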
Iterative refinement (IR) is a popular scheme for solving a linear system of equations based on gradually improving the accuracy of an initial approximation. Originally developed to improve upon the accuracy of Gaussian elimination, interest in IR has been revived because of its suitability for execution on fast low-precision hardware such as analog devices and graphics processing units. IR generally converges when the error associated with the solution method is small, but is known to diverge when this error is large. We propose and analyze a novel enhancement to the IR algorithm by adding a line search optimization step that guarantees the algorithm will not diverge. Numerical experiments verify our theoretical results and illustrate the effectiveness of our proposed scheme.
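The following is a minimal dense-matrix sketch of the idea: classical iterative refinement in which the correction comes from a deliberately perturbed (low-accuracy) solver, and each update is scaled by the step length that exactly minimizes the new residual norm. The perturbed solver, problem size, and iteration count are illustrative choices, not the analyzed algorithmic setting.

\begin{verbatim}
# Sketch of iterative refinement with an exact residual-norm line search.
# The "low-accuracy" inner solver is emulated by perturbing A; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true

A_low = A * (1.0 + 1e-2 * rng.standard_normal((n, n)))  # emulated inexact solver data

x = np.zeros(n)
for k in range(20):
    r = b - A @ x                     # residual in working precision
    d = np.linalg.solve(A_low, r)     # inexact correction from the cheap solver
    Ad = A @ d
    t = (r @ Ad) / (Ad @ Ad)          # step length minimizing ||b - A(x + t d)||_2
    x = x + t * d
    print(k, np.linalg.norm(r) / np.linalg.norm(b))
\end{verbatim}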
In this paper we introduce a multilevel Picard approximation algorithm for semilinear parabolic partial integro-differential equations (PIDEs). We prove that the numerical approximation scheme converges to the unique viscosity solution of the PIDE under consideration. To that end, we derive a Feynman-Kac representation for the unique viscosity solution of the semilinear PIDE, extending the classical Feynman-Kac representation for linear PIDEs. Furthermore, we show that the algorithm does not suffer from the curse of dimensionality, i.e., the computational complexity of the algorithm is bounded polynomially in the dimension $d$ and the reciprocal of the prescribed accuracy $\varepsilon$. We also provide a numerical example in up to 10,000 dimensions to demonstrate its applicability.
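The linear Feynman--Kac building block underlying such schemes can be illustrated in a few lines: for the heat equation $u_t + \tfrac12 \Delta u = 0$ with terminal condition $u(T,x) = g(x)$, the solution is $u(t,x) = \mathbb{E}[g(x + W_{T-t})]$, which a plain Monte Carlo average reproduces. The jump part of the PIDE and the nested multilevel Picard recursion of the actual algorithm are omitted in this toy.

\begin{verbatim}
# Monte Carlo check of the linear Feynman-Kac formula u(t,x) = E[g(x + W_{T-t})]
# for u_t + 0.5*Laplacian(u) = 0 with g(x) = |x|^2 (closed form: |x|^2 + d*(T-t)).
# The jump part and the multilevel Picard recursion are omitted in this toy.
import numpy as np

rng = np.random.default_rng(4)
d, T, t = 100, 1.0, 0.25
x = rng.standard_normal(d)

M = 100_000                                         # Monte Carlo samples
W = np.sqrt(T - t) * rng.standard_normal((M, d))    # Brownian increments W_{T-t}
u_mc = np.mean(np.sum((x + W) ** 2, axis=1))

u_exact = np.sum(x ** 2) + d * (T - t)
print(u_mc, u_exact)
\end{verbatim}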
Auditory spatial attention detection (ASAD) aims to decode the attended spatial location from EEG in a multiple-speaker setting. ASAD methods are inspired by the brain lateralization of cortical neural responses during the processing of auditory spatial attention, and show promising performance for the task of auditory attention decoding (AAD) with neural recordings. Previous ASAD methods, however, do not fully exploit the spatial distribution of the EEG electrodes, which may limit their performance. In the present work, the original EEG channels are mapped onto a two-dimensional (2D) spatial topological map, turning the EEG data into a three-dimensional (3D) arrangement that contains spatial-temporal information. A 3D deep convolutional neural network (DenseNet-3D) is then used to extract temporal and spatial features of the neural representation for the attended locations. The results show that the proposed method achieves higher decoding accuracy than the state-of-the-art (SOTA) method (94.4% compared to XANet's 90.6%) with a 1-second decision window on the widely used KULeuven (KUL) dataset, and the code to implement our work is available on GitHub: //github.com/xuxiran/ASAD_DenseNet
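A schematic of the channel-to-grid reshaping plus a small 3D convolutional classifier is sketched below in PyTorch. The electrode-to-grid assignment, grid size, and network are hypothetical stand-ins, not the DenseNet-3D architecture or the KUL preprocessing.

\begin{verbatim}
# Schematic: place EEG channels on a 2D scalp grid, stack time as a third
# dimension, and classify left/right attention with a small 3D CNN.
# The channel-to-grid map, grid size, and network are hypothetical stand-ins.
import torch
import torch.nn as nn

n_channels, n_times, grid_h, grid_w = 64, 128, 9, 9
# Hypothetical electrode -> (row, col) positions on the scalp grid.
channel_pos = {c: (c // grid_w % grid_h, c % grid_w) for c in range(n_channels)}

def to_topomap(eeg):                       # eeg: (batch, channels, time)
    batch = eeg.shape[0]
    grid = torch.zeros(batch, 1, n_times, grid_h, grid_w)
    for c, (r, col) in channel_pos.items():
        grid[:, 0, :, r, col] = eeg[:, c, :]
    return grid                            # (batch, 1, time, height, width)

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(16, 2),                      # two attended locations (left / right)
)

eeg = torch.randn(4, n_channels, n_times)  # dummy batch of 1-second decision windows
logits = model(to_topomap(eeg))
print(logits.shape)                        # torch.Size([4, 2])
\end{verbatim}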
We introduce a novel ridge detection algorithm for time-frequency (TF) analysis, particularly tailored for intricate nonstationary time series encompassing multiple non-sinusoidal oscillatory components. The algorithm is rooted in the distinctive geometric patterns that emerge in the TF domain due to such non-sinusoidal oscillations. We term this method \textit{shape-adaptive mode decomposition-based multiple harmonic ridge detection} (\textsf{SAMD-MHRD}). A swift implementation is available when supplementary information is at hand. We demonstrate the practical utility of \textsf{SAMD-MHRD} through its application to a real-world challenge. We employ it to devise a cutting-edge walking activity detection algorithm, leveraging accelerometer signals from an inertial measurement unit across diverse body locations of a moving subject.
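For orientation, the following is a generic single-ridge extraction sketch on a short-time Fourier transform, using a dynamic-programming penalty on frequency jumps. It is a baseline illustration only, not SAMD-MHRD, which additionally adapts to non-sinusoidal wave shapes and multiple harmonics; the test signal and penalty weight are arbitrary.

\begin{verbatim}
# Generic single-ridge extraction on an STFT via dynamic programming
# (a baseline illustration, not the SAMD-MHRD algorithm itself).
import numpy as np
from scipy.signal import stft

fs = 200.0
t = np.arange(0, 20, 1 / fs)
x = np.cos(2 * np.pi * (1.5 * t + 0.3 * np.sin(0.2 * t)))  # slowly varying frequency

f, tt, Z = stft(x, fs=fs, nperseg=256)
S = np.abs(Z)                                  # magnitude spectrogram, (freqs, frames)
nF, nT = S.shape
lam = 2.0                                      # penalty per bin of frequency jump

cost = -S.copy()
back = np.zeros((nF, nT), dtype=int)
for j in range(1, nT):
    for k in range(nF):
        prev = cost[:, j - 1] + lam * np.abs(np.arange(nF) - k)
        back[k, j] = np.argmin(prev)
        cost[k, j] = -S[k, j] + prev[back[k, j]]

ridge = np.zeros(nT, dtype=int)
ridge[-1] = np.argmin(cost[:, -1])
for j in range(nT - 2, -1, -1):
    ridge[j] = back[ridge[j + 1], j + 1]
print(f[ridge][:10])                           # estimated instantaneous frequency (Hz)
\end{verbatim}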
When modelling discontinuities (interfaces) using the finite element method, the standard approach is to use a conforming finite-element mesh in which the mesh matches the interfaces. However, this approach can prove cumbersome if the geometry is complex, in particular in 3D. In this work, we develop an efficient technique for a non-conforming finite-element treatment of weak discontinuities by using laminated microstructures. The approach is inspired by the so-called composite voxel technique that has been developed for FFT-based spectral solvers in computational homogenization. The idea behind the method is rather simple. Each finite element that is cut by an interface is treated as a simple laminate, with the volume fraction of the phases and the lamination orientation determined in terms of the actual geometrical arrangement of the interface within the element. The approach is illustrated by several computational examples relevant to the micromechanics of heterogeneous materials. Elastic and elastic-plastic materials at small and finite strain are considered in the examples. The performance of the proposed method is compared to two simple alternative methods, showing that the new approach is in most cases superior to them while maintaining comparable simplicity.
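A small sketch of the geometric ingredients for a single cut element follows: given a level-set description of the interface, the phase volume fraction is estimated by sampling points inside the element, and the lamination direction is taken as the normalized level-set gradient at the element center. The quadrature-by-sampling and the particular level set are illustrative choices, not the implementation used in the examples.

\begin{verbatim}
# Geometric ingredients of the laminate treatment for one cut element:
# phase volume fraction by point sampling and lamination direction from the
# level-set gradient.  Level set and sampling density are illustrative choices.
import numpy as np

def level_set(p):                       # phi < 0: phase 1, phi > 0: phase 2
    return p[..., 0] + 0.5 * p[..., 1] - 0.6   # a plane cutting the unit square

# A single quadrilateral element, here the unit square [0,1] x [0,1].
n = 50
xv, yv = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
pts = np.stack([xv.ravel(), yv.ravel()], axis=-1)

vol_frac_phase1 = np.mean(level_set(pts) < 0.0)   # volume fraction of phase 1

# Lamination direction: unit normal to the interface, i.e. normalized grad(phi)
# at the element center (finite differences; the exact gradient works too here).
center, h = np.array([0.5, 0.5]), 1e-6
grad = np.array([
    (level_set(center + [h, 0]) - level_set(center - [h, 0])) / (2 * h),
    (level_set(center + [0, h]) - level_set(center - [0, h])) / (2 * h),
])
normal = grad / np.linalg.norm(grad)
print(vol_frac_phase1, normal)
\end{verbatim}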