Due to its highly oscillatory solution, the Helmholtz equation is numerically challenging to solve. To obtain a reasonable solution, a mesh size that is much smaller than the reciprocal of the wavenumber is typically required (a phenomenon known as the pollution effect). High-order schemes are desirable because they are better at mitigating the pollution effect. In this paper, we present a high-order compact finite difference method for 2D Helmholtz equations with singular sources, which can also handle any possible combination of boundary conditions (Dirichlet, Neumann, and impedance) on a rectangular domain. Our method achieves sixth-order consistency for a constant wavenumber and fifth-order consistency for a piecewise constant wavenumber. To reduce the pollution effect, we propose a new pollution minimization strategy based on the average truncation error of plane waves. Our numerical experiments demonstrate the superiority of the proposed finite difference scheme with reduced pollution effect over several state-of-the-art finite difference schemes, particularly in the critical pre-asymptotic regime where $\textsf{k} h$ is near $1$, with $\textsf{k}$ being the wavenumber and $h$ the mesh size.
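The role of $\textsf{k}h$ can be seen already in the discrete dispersion relation of the standard second-order stencil in 1D. The sketch below is a minimal illustration (it is not the compact scheme of the paper): it computes the numerical wavenumber of the centred stencil and its relative phase error, whose leading term is $(\textsf{k}h)^2/24$.

```python
import numpy as np

def numerical_wavenumber(k, h):
    # Insert the discrete plane wave u_j = exp(i*k_h*j*h) into the
    # centred stencil (u_{j-1} - 2u_j + u_{j+1})/h^2 + k^2 u_j = 0:
    # (2 - 2*cos(k_h*h))/h^2 = k^2  =>  k_h = arccos(1 - (k*h)^2/2)/h
    return np.arccos(1.0 - (k * h) ** 2 / 2.0) / h

k = 50.0
for h in (0.02, 0.01, 0.005):  # kh = 1.0, 0.5, 0.25
    rel_err = abs(numerical_wavenumber(k, h) - k) / k
    print(f"kh = {k * h:4.2f}: relative phase error {rel_err:.3e} "
          f"(leading term (kh)^2/24 = {(k * h) ** 2 / 24:.3e})")
```

Halving $h$ at fixed $\textsf{k}$ reduces the phase error by roughly a factor of four, consistent with the second-order leading term.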
Physics-informed neural networks (PINNs) have proven to be a suitable mathematical scaffold for solving inverse problems for ordinary differential equations (ODEs) and partial differential equations (PDEs). Typical inverse PINNs are formulated as soft-constrained multi-objective optimization problems with several hyperparameters. In this work, we demonstrate that inverse PINNs can be framed in terms of maximum-likelihood estimators (MLE), which allows explicit error propagation from the interpolation to the physical model space through Taylor expansion, without the need for hyperparameter tuning. We explore its application to high-dimensional coupled ODEs constrained by differential algebraic equations, which are common in transient chemical and biological kinetics. Furthermore, we show that a singular-value decomposition (SVD) of the ODE coupling matrices (reaction stoichiometry matrix) provides reduced, uncorrelated subspaces in which PINN solutions can be represented and onto which residuals can be projected. Finally, the SVD bases serve as preconditioners for the inversion of covariance matrices in this hyperparameter-free, robust application of MLE to ``kinetics-informed neural networks''.
The first globally convergent numerical method for a Coefficient Inverse Problem (CIP) for the Riemannian Radiative Transfer Equation (RRTE) is constructed. This is a version of the so-called ``convexification'' method, which has been pursued by this research group for a number of years for several other CIPs for PDEs. Those PDEs are significantly different from the RRTE. The presence of the Carleman Weight Function (CWF) in the numerical scheme is the key element that ensures the global convergence. Convergence analysis is presented along with the results of numerical experiments, which confirm the theory. The RRTE governs the propagation of photons in a diffuse medium in the case when they propagate along geodesic lines between their collisions. The geodesic lines are generated by the spatially variable dielectric constant of the medium.
Given a graph $G=(V,E)$ and a vertex set $S\subseteq V$, let $N(S)$ denote the set of vertices in $V$ that have a neighbor in $S$. In this paper, we prove the following Hall-type statement. Let $k \ge 2$ be an integer, and let $X$ be a vertex set in the undirected graph $G$ such that $|N(S)|\ge \frac{|S|}{k}$ holds for each subset $S$ of $X$. Then $G$ has a matching of size at least $\frac{|X|}{k+1}$. Using this statement, we derive tight bounds for estimators of the matching size in planar graphs. These estimators are used in designing sublinear-space algorithms for approximating the matching size in the data stream model of computation. In particular, we show that the number of locally superior vertices, introduced in \cite{Jowhari23}, is a factor-$3$ approximation of the matching size in planar graphs; the previous analysis proved an approximation factor of $3.5$. As another consequence, we show that a simple setting of an estimator by Esfandiari et al.\ \cite{EHLMO15} achieves a factor-$3$ approximation of the matching size in planar graphs. Namely, let $s$ be the number of edges with both endpoints having degree at most $2$ and let $h$ be the number of vertices with degree at least $3$. We show that when the graph is planar, the matching size is at least $\frac{s+h}{3}$. This result generalizes the known fact that every planar graph on $n$ vertices with minimum degree $3$ has a matching of size at least $\frac{n}{3}$.
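As a quick sanity check of the bound $\frac{s+h}{3}$, the snippet below (an illustrative example, not from the paper) counts $s$, $h$, and a brute-force maximum matching on a small planar graph: $K_4$ with a pendant path attached.

```python
from itertools import combinations

def max_matching_size(edges):
    # Brute force: the largest set of pairwise vertex-disjoint edges.
    # Fine for tiny graphs; exponential in general.
    for r in range(len(edges), 0, -1):
        for sub in combinations(edges, r):
            verts = [v for e in sub for v in e]
            if len(verts) == len(set(verts)):
                return r
    return 0

# Planar example: K_4 on {0,1,2,3} plus the pendant path 3-4-5.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

s = sum(1 for u, v in edges if deg[u] <= 2 and deg[v] <= 2)  # here: only (4,5)
h = sum(1 for v in deg if deg[v] >= 3)                       # here: 0,1,2,3
m = max_matching_size(edges)
print(s, h, m)          # the bound asserts 3*m >= s + h
assert 3 * m >= s + h
```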
In this paper, we present a comprehensive convergence analysis of Laguerre spectral approximations for analytic functions. By exploiting contour integral techniques from complex analysis, we prove rigorously that Laguerre projection and interpolation methods of degree $n$ converge at the root-exponential rate $O(\exp(-2\rho\sqrt{n}))$ with $\rho>0$ when the underlying function is analytic inside and on a parabola with focus at the origin and vertex at $z=-\rho^2$. Extensions to several important applications are also discussed, including Laguerre spectral differentiation, Gauss-Laguerre quadrature rules, and the Weeks method for the inversion of the Laplace transform, and some sharp convergence rate estimates are derived. Numerical experiments are presented to verify the theoretical results.
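Root-exponential convergence of Gauss-Laguerre quadrature is easy to observe numerically. In this minimal check (an illustration under the stated analyticity condition, not the paper's experiments), the integrand $1/(1+x)$ is analytic inside any parabola avoiding its pole at $x=-1$, and the error for $\int_0^\infty e^{-x}/(1+x)\,dx$ (the Gompertz constant, $\approx 0.5963473623$) decays rapidly but subgeometrically in the number of nodes:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

exact = 0.5963473623231940  # Gompertz constant: int_0^inf e^{-x}/(1+x) dx

errors = []
for n in (4, 16, 64):
    x, w = laggauss(n)              # nodes/weights for weight e^{-x} on [0, inf)
    approx = np.sum(w / (1.0 + x))
    errors.append(abs(approx - exact))
    print(f"n = {n:3d}: error = {errors[-1]:.3e}")
# The errors shrink roughly like exp(-c*sqrt(n)), not like a fixed power of n.
```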
Recovering the whole-body mesh by inferring abstract pose and shape parameters from visual content yields 3D bodies with realistic structures. However, the inference process is highly non-linear and suffers from image-mesh misalignment, resulting in inaccurate reconstruction. In contrast, 3D keypoint estimation methods use a volumetric representation to achieve pixel-level accuracy but may predict unrealistic body structures. To address these issues, this paper presents a novel hybrid inverse kinematics solution, HybrIK, that integrates the merits of 3D keypoint estimation and body mesh recovery in a unified framework. HybrIK directly transforms accurate 3D joints into body-part rotations via twist-and-swing decomposition. The swing rotations are solved analytically from the 3D joints, while the twist rotations are derived from visual cues through neural networks. To capture comprehensive whole-body details, we further develop a holistic framework, HybrIK-X, which enhances HybrIK with articulated hands and an expressive face. HybrIK-X is fast and accurate, solving the whole-body pose with a one-stage model. Experiments demonstrate that HybrIK and HybrIK-X preserve both the accuracy of 3D joints and the realistic structure of the parametric human model, leading to pixel-aligned whole-body mesh recovery. The proposed method significantly surpasses state-of-the-art methods on various benchmarks for body-only, hand-only, and whole-body scenarios. Code and results can be found at //jeffli.site/HybrIK-X/
The high-order gas-kinetic scheme (HGKS) features good robustness, high efficiency, and satisfactory accuracy, and its performance can be further improved by combining it with the WENO-AO (WENO with adaptive order) scheme for reconstruction. To reduce the computational cost of the reconstruction procedure, this paper proposes to combine HGKS with a hybrid WENO-AO scheme. The hybrid WENO-AO scheme reconstructs the target variables directly with an upwind linear approximation if all extreme points of the reconstruction polynomials for these variables lie outside the large stencil; otherwise, the WENO-AO scheme is used. Unlike combining the hybrid WENO scheme with traditional Riemann solvers, the troubled-cell indicator of the hybrid WENO-AO method is fully utilized in the spatial reconstruction process of HGKS. During the normal and tangential reconstruction, the gas-kinetic flux requires reconstructing not only the conservative variables on the left and right sides of the interface but also their derivative terms. By reducing the number of times the WENO-AO scheme is invoked, the computational cost is reduced. The high-order gas-kinetic scheme with the hybrid WENO-AO method retains the original robustness and accuracy of the WENO5-AO GKS while exhibiting higher computational efficiency.
In $d$ dimensions, approximating an arbitrary function oscillating with frequency $\lesssim k$ requires $\sim k^d$ degrees of freedom. A numerical method for solving the Helmholtz equation (with wavenumber $k$) suffers from the pollution effect if, as $k\to \infty$, the total number of degrees of freedom needed to maintain accuracy grows faster than this natural threshold. While the $h$-version of the finite element method (FEM) (where accuracy is increased by decreasing the meshwidth $h$ and keeping the polynomial degree $p$ fixed) suffers from the pollution effect, the celebrated papers [Melenk, Sauter 2010], [Melenk, Sauter 2011], [Esterhazy, Melenk 2012], and [Melenk, Parsania, Sauter 2013] showed that the $hp$-FEM (where accuracy is increased by decreasing the meshwidth $h$ and increasing the polynomial degree $p$) applied to a variety of constant-coefficient Helmholtz problems does not suffer from the pollution effect. The heart of the proofs of these results is a PDE result splitting the solution of the Helmholtz equation into ``high'' and ``low'' frequency components. In this expository paper we prove this splitting for the constant-coefficient Helmholtz equation in full space (i.e., in $\mathbb{R}^d$) using only integration by parts and elementary properties of the Fourier transform; this is in contrast to the proof for this set-up in [Melenk, Sauter 2010], which uses somewhat-involved bounds on Bessel and Hankel functions. The proof in this paper is motivated by the recent proof in [Lafontaine, Spence, Wunsch 2022] of this splitting for the variable-coefficient Helmholtz equation in full space; indeed, the proof in [Lafontaine, Spence, Wunsch 2022] uses more sophisticated tools that reduce to the elementary ones above for constant coefficients.
We develop novel theory and algorithms for computing an approximate solution to $Ax=b$, or to $A^TAx=A^Tb$, where $A$ is an $m \times n$ real matrix of arbitrary rank. First, we describe the {\it Triangle Algorithm} (TA): given an ellipsoid $E_{A,\rho}=\{Ax: \Vert x \Vert \leq \rho\}$, in each iteration it either computes a successively improving approximation $b_k=Ax_k \in E_{A,\rho}$ or proves $b \not \in E_{A, \rho}$. We then extend TA to compute an approximate solution or a minimum-norm solution. Next, we develop a dynamic version of TA, the {\it Centering Triangle Algorithm} (CTA), generating residuals $r_k=b - Ax_k$ via iterations of the simple formula $F_1(r)=r-(r^THr/r^TH^2r)Hr$, where $H=A$ when $A$ is symmetric PSD and $H=AA^T$ otherwise; $H$ need not be computed explicitly. More generally, CTA extends to a family of iteration functions $F_t(r)$, $t=1, \dots, m$, satisfying the following. On the one hand, given $t \leq m$ and $r_0=b-Ax_0$, where $x_0=A^Tw_0$ with $w_0 \in \mathbb{R}^m$ arbitrary, for all $k \geq 1$ we have $r_k=F_t(r_{k-1})=b-Ax_k$, and $A^Tr_k$ converges to zero. Algorithmically, if $H$ is invertible with condition number $\kappa$, then in $k=O((\kappa/t) \ln \varepsilon^{-1})$ iterations $\Vert r_k \Vert \leq \varepsilon$. If $H$ is singular with $\kappa^+$ the ratio of its largest to smallest positive eigenvalues, then in $k =O(\kappa^+/(t\varepsilon))$ iterations either $\Vert r_k \Vert \leq \varepsilon$ or $\Vert A^T r_k\Vert= O(\sqrt{\varepsilon})$. If $N$ is the number of nonzero entries of $A$, each iteration takes $O(Nt+t^3)$ operations. On the other hand, given $r_0=b-Ax_0$, suppose its minimal polynomial with respect to $H$ has degree $s$. Then $Ax=b$ is solvable if and only if $F_{s}(r_0)=0$. Moreover, $A^TAx=A^Tb$ is solvable while $Ax=b$ is not, if and only if $F_{s}(r_0) \not= 0$ but $A^T F_s(r_0)=0$. Additionally, $\{F_t(r_0)\}_{t=1}^s$ is computable in $O(Ns+s^3)$ operations.
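For a consistent system, the $t=1$ iteration can be sketched in a few lines of NumPy. This is a minimal illustration of the stated formula, not the authors' implementation; the function name and test matrix are arbitrary. $H=AA^T$ is applied through two matrix-vector products and never formed.

```python
import numpy as np

def cta_t1(A, b, x0=None, tol=1e-10, max_iter=5000):
    """Residual iteration F_1(r) = r - (r^T H r / r^T H^2 r) H r, H = A A^T."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x
    for _ in range(max_iter):
        Hr = A @ (A.T @ r)            # H r without forming H = A A^T
        denom = Hr @ Hr               # r^T H^2 r
        if denom == 0.0:
            break
        alpha = (r @ Hr) / denom      # r^T H r / r^T H^2 r
        x = x + alpha * (A.T @ r)     # keeps r = b - A x in sync
        r = r - alpha * Hr            # r <- F_1(r)
        if np.linalg.norm(A.T @ r) <= tol:
            break
    return x, r

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
b = A @ rng.standard_normal(5)        # consistent right-hand side
x, r = cta_t1(A, b)
print(np.linalg.norm(A @ x - b))      # near zero: the residual has converged
```

Note that $\alpha$ minimizes $\Vert r - \alpha Hr \Vert$ over $\alpha$, so $\Vert r_k \Vert$ is monotonically non-increasing.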
PDE-constrained inverse problems are some of the most challenging and computationally demanding problems in computational science today. The fine meshes required to accurately compute the PDE solution introduce an enormous number of parameters, and solving such systems in a reasonable time requires large-scale computing resources such as more processors and more memory. For inverse problems constrained by time-dependent PDEs, the adjoint method that is often employed to efficiently compute gradients and higher-order derivatives requires solving a time-reversed, so-called adjoint PDE that depends on the forward PDE solution at each timestep. This necessitates storing a high-dimensional forward solution vector at every timestep, a procedure that quickly exhausts the available memory resources. Several approaches that trade additional computation for a reduced memory footprint have been proposed to mitigate the memory bottleneck, including checkpointing and compression strategies. In this work, we propose a close-to-ideal scalable compression approach using autoencoders to eliminate the need for checkpointing and substantial memory storage, thereby reducing both the time-to-solution and the memory requirements. We compare our approach with checkpointing and an off-the-shelf compression approach on an earth-scale, ill-posed seismic inverse problem. The results verify the expected close-to-ideal speedup for both the gradient and the Hessian-vector product using the proposed autoencoder compression approach. To highlight the usefulness of the proposed approach, we combine the autoencoder compression with the data-informed active subspace (DIAS) prior, showing how the DIAS method can be affordably extended to large-scale problems without the need for checkpointing and large memory.
In this paper, we first study projections onto the set of unit dual quaternions and onto the set of dual quaternion vectors with unit norms. We then propose a power method for computing the dominant eigenvalue of a dual quaternion Hermitian matrix and show its convergence and convergence rate under mild conditions. Building on these results, we reformulate the simultaneous localization and mapping (SLAM) problem as a rank-one dual quaternion completion problem. A two-block coordinate descent method is proposed to solve this problem. One block subproblem can be reduced to computing the best rank-one approximation of a dual quaternion Hermitian matrix, which can be done by the power method. The other block has a closed-form solution. Numerical experiments are presented to show the efficiency of the proposed power method.
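Dual quaternion arithmetic is beyond a short snippet, but the underlying power iteration is the familiar one. As an analogy only, here is a minimal sketch for an ordinary complex Hermitian matrix (the matrix and function name are illustrative, not from the paper):

```python
import numpy as np

def power_method(M, iters=200, seed=0):
    # Dominant eigenvalue of a Hermitian matrix via power iteration;
    # the Rayleigh quotient v* M v is real for Hermitian M.
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = M @ v                     # one matrix-vector product per step
        v = w / np.linalg.norm(w)     # renormalize to stay on the unit sphere
    return (v.conj() @ M @ v).real, v

M = np.array([[2.0, 1j], [-1j, 3.0]])  # Hermitian: eigenvalues (5 +- sqrt(5))/2
lam, v = power_method(M)
print(lam)  # ~3.618, the dominant eigenvalue
```

The convergence rate is governed by the ratio of the two largest eigenvalue magnitudes, mirroring the rate result stated for the dual quaternion case.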