In this paper, we consider numerical approximations for solving the inductionless magnetohydrodynamic (MHD) equations. By utilizing the scalar auxiliary variable (SAV) approach to treat the convective and coupling terms, we propose first- and second-order schemes for this system. These schemes are linear, decoupled, unconditionally energy stable, and require only the solution of a sequence of differential equations with constant coefficients at each time step. We further provide a rigorous error analysis for the first-order scheme, establishing optimal convergence rates for the velocity, pressure, current density, and electric potential in the two-dimensional case. Numerical examples are presented to verify the theoretical findings and demonstrate the performance of the schemes.
We present a semi-sparsity model for 3D triangular mesh denoising, motivated by the success of semi-sparsity regularization in image processing applications. We demonstrate that such a regularization model can also be applied to geometry processing, where it yields similar simultaneous-fitting results: sharp features are preserved while piecewise-smooth surfaces are recovered. Specifically, we first describe the piecewise constant function spaces associated with the differential operators on triangular meshes and then show how to extend the semi-sparsity model to mesh denoising. To verify its effectiveness, we present an efficient iterative algorithm based on the alternating direction method of multipliers (ADMM) and report experimental results on synthetic and real scanning data that compare favorably, both visually and quantitatively, with state-of-the-art methods.
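The ADMM strategy invoked above can be illustrated on a much simpler model problem. The sketch below is our own toy, not the paper's mesh formulation (which involves a semi-sparsity prior on higher-order differential operators over triangle meshes); it applies the standard ADMM splitting to an $\ell_1$-regularized least-squares problem, alternating a linear solve, a soft-thresholding step, and a dual update:

```python
import numpy as np

def admm_l1(A, b, lam=0.1, rho=1.0, iters=200):
    """Generic ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # factor of the quadratic subproblem
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))               # quadratic step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam/rho, 0)  # soft-threshold
        u = u + x - z                                                # dual update
    return x
```

The mesh model replaces the $\ell_1$ term with a semi-sparsity regularizer, but the alternating structure of the iteration is the same.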
We propose a novel spectral method for the Allen--Cahn equation on spheres that does not necessarily require quadrature exactness assumptions. Instead of certain exactness degrees, we employ a restricted isometry relation based on the Marcinkiewicz--Zygmund system of quadrature rules to quantify the quadrature error of polynomial integrands. The new method imposes only mild conditions on the polynomial degree of numerical solutions to derive the maximum principle and energy stability; it thus differs substantially from existing methods in the literature, which often rely on strict conditions on the time-step size, the Lipschitz property of the nonlinear term, or the $L^{\infty}$ boundedness of numerical solutions. Moreover, the new method is suitable for long-time simulations because the time-step size is independent of the diffusion coefficient in the equation. Inspired by the effective maximum principle recently proposed by Li (Ann. Appl. Math., 37(2): 131--290, 2021), we develop an almost sharp maximum principle that allows a controllable deviation of numerical solutions from the sharp bound. Further, we show that the new method is energy stable and equivalent to the Galerkin method if the quadrature rule exhibits sufficient exactness degrees. In addition, we propose an energy-stable mixed-quadrature scheme that works well even with randomly sampled initial condition data. We validate the theoretical results on energy stability and the almost sharp maximum principle by numerical experiments on the 2-sphere $\mathbb{S}^2$.
The ParaOpt algorithm was recently introduced as a time-parallel solver for optimal-control problems with a terminal-cost objective, and convergence results have been presented for the linear diffusive case with implicit-Euler time integrators. We reformulate ParaOpt for tracking problems and provide generalized convergence analyses for both objectives. We focus on linear diffusive equations and prove convergence bounds that are generic in the time integrators used. For large problem dimensions, ParaOpt's performance depends crucially on having a good preconditioner to solve the arising linear systems. For the case where ParaOpt's cheap, coarse-grained propagator is linear, we introduce diagonalization-based preconditioners inspired by recent advances in the ParaDiag family of methods. These preconditioners not only lead to a weakly-scalable ParaOpt version, but are themselves invertible in parallel, making maximal use of available concurrency. They have proven convergence properties in the linear diffusive case that are generic in the time discretization used, similarly to our ParaOpt results. Numerical results confirm that the iteration count of the iterative solvers used for ParaOpt's linear systems becomes constant in the limit of an increasing processor count. The paper is accompanied by a sequential MATLAB implementation.
In Lipschitz two-dimensional domains, we study a Brinkman-Darcy-Forchheimer problem on the weighted spaces $\mathbf{H}_0^1(\omega,\Omega) \times L^2(\omega,\Omega)/\mathbb{R}$, where $\omega$ belongs to the Muckenhoupt class $A_2$. Under a suitable smallness assumption, we establish the existence and uniqueness of a solution. We propose a finite element scheme and obtain a quasi-best approximation result in energy norm \`a la C\'ea under the assumption that $\Omega$ is convex. We also devise an a posteriori error estimator and investigate its reliability and efficiency properties. Finally, we design a simple adaptive strategy that yields optimal experimental rates of convergence for the numerical examples that we perform.
Motivated by applications in distributed storage, distributed computing, and homomorphic secret sharing, we study communication-efficient schemes for computing linear combinations of coded symbols. Specifically, we design low-bandwidth schemes that evaluate the weighted sum of $\ell$ coded symbols in a codeword $\pmb{c}\in\mathbb{F}^n$, when we are given access to $d$ of the remaining components in $\pmb{c}$. Formally, suppose that $\mathbb{F}$ is a field extension of $\mathbb{B}$ of degree $t$. Let $\pmb{c}$ be a codeword in a Reed-Solomon code of dimension $k$, and suppose that our task is to compute the weighted sum of $\ell$ coded symbols. In this paper, for some $s<t$, we provide an explicit scheme that performs this task by downloading $d(t-s)$ sub-symbols in $\mathbb{B}$ from $d$ available nodes, whenever $d\geq \ell|\mathbb{B}|^s-\ell+k$. In many cases, our scheme outperforms previous schemes in the literature. Furthermore, we provide a characterization of evaluation schemes for general linear codes. Then, in the special case of Reed-Solomon codes, we use this characterization to derive a lower bound on the evaluation bandwidth.
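As a point of reference for the download cost, the naive baseline for this task is easy to sketch: download $d \geq k$ full symbols ($dt$ sub-symbols), interpolate the message polynomial, and evaluate the weighted sum directly; the paper's scheme improves on this by downloading only $d(t-s)$ sub-symbols. The snippet below, a toy over a small prime field with all names our own, implements only this baseline:

```python
p = 97  # toy prime field F_p

def interp_eval(xs, ys, x):
    """Evaluate at x the Lagrange interpolant of the points (xs, ys) over F_p."""
    total = 0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        num = den = 1
        for m, xm in enumerate(xs):
            if m != j:
                num = num * (x - xm) % p
                den = den * (xj - xm) % p
        total = (total + yj * num * pow(den, -1, p)) % p  # modular inverse of den
    return total

def weighted_sum(helper_pts, helper_syms, target_pts, weights):
    """Naive baseline: interpolate f from d >= k downloaded symbols,
    then return sum_j w_j * f(alpha_j) over F_p."""
    return sum(w * interp_eval(helper_pts, helper_syms, a)
               for w, a in zip(weights, target_pts)) % p
```

Here a codeword symbol is $c_i = f(\alpha_i)$ for a message polynomial $f$ of degree less than $k$; any $d \geq k$ helper symbols determine $f$ and hence any weighted sum of the remaining symbols.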
In this paper we consider the generalized inverse iteration for computing ground states of the Gross-Pitaevskii eigenvector problem (GPE). To this end, we prove explicit linear convergence rates that depend on the maximum eigenvalue in magnitude of a weighted linear eigenvalue problem. Furthermore, we show that this eigenvalue can be bounded by the first spectral gap of a linearized Gross-Pitaevskii operator, recovering the same rates as for linear eigenvector problems. With this, we establish the first local convergence result for the basic inverse iteration for the GPE without damping. We also show how our findings directly generalize to extended inverse iterations, such as the Gradient Flow Discrete Normalized (GFDN) iteration proposed in [W. Bao, Q. Du, SIAM J. Sci. Comput., 25 (2004)] or the damped inverse iteration suggested in [P. Henning, D. Peterseim, SIAM J. Numer. Anal., 53 (2020)]. Our analysis also reveals why the inverse iteration for the GPE does not react favourably to spectral shifts. This empirical observation can now be explained by the blow-up of a weighting function that crucially contributes to the convergence rates. Our findings are illustrated by numerical experiments.
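For a toy illustration of the basic (undamped) inverse iteration analyzed here, consider a 1D Gross-Pitaevskii model discretized by finite differences. The sketch below is a minimal setup of our own, not the paper's discretization: each step solves the system linearized at the current iterate and renormalizes in $L^2$:

```python
import numpy as np

def gpe_ground_state(n=200, beta=1.0, iters=100):
    """Inverse iteration for a toy 1D GPE:
    -u'' + beta*|u|^2 u = lam*u on (0,1), u = 0 on the boundary."""
    h = 1.0 / (n + 1)
    # 1D Dirichlet Laplacian (second-order finite differences)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    x = np.linspace(h, 1 - h, n)
    u = np.sin(np.pi * x)
    u /= np.sqrt(h) * np.linalg.norm(u)  # L2-normalize
    for _ in range(iters):
        u = np.linalg.solve(A + beta * np.diag(u**2), u)  # inverse-iteration step
        u /= np.sqrt(h) * np.linalg.norm(u)               # renormalize
    lam = h * u @ (A @ u + beta * u**3)  # Rayleigh-type eigenvalue
    return u, lam
```

At a fixed point, $(A + \beta\,\mathrm{diag}(u^2))u = \mu u$, which is exactly the discrete GPE; for $\beta = 0$ the iteration reduces to the classical inverse iteration for the Laplacian.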
This paper presents a scalable multigrid preconditioner targeting large-scale systems arising from discontinuous Petrov-Galerkin (DPG) discretizations of high-frequency wave operators. This work builds on the multigrid preconditioning techniques previously developed by Petrides and Demkowicz (Comput. Math. Appl. 87 (2021) pp. 12-26) and extends the convergence results from $\mathcal{O}(10^7)$ degrees of freedom (DOFs) to $\mathcal{O}(10^9)$ DOFs using a new scalable parallel MPI/OpenMP implementation. Novel contributions of this paper include an alternative definition of coarse-grid systems based on the restriction of fine-grid operators, yielding superior convergence results. In the uniform refinement setting, a detailed convergence study is provided, demonstrating $h$- and $p$-robust convergence and linear dependence on the wave frequency. The paper concludes with numerical results on $hp$-adaptive simulations, including a large-scale seismic modeling benchmark problem with high material contrast.
The $\Sigma$-QMAC problem is introduced, involving $S$ servers, $K$ classical ($\mathbb{F}_d$) data streams, and $T$ independent quantum systems. Data stream ${\sf W}_k, k\in[K]$ is replicated at a subset of servers $\mathcal{W}(k)\subset[S]$, and quantum system $\mathcal{Q}_t, t\in[T]$ is distributed among a subset of servers $\mathcal{E}(t)\subset[S]$ such that Server $s\in\mathcal{E}(t)$ receives subsystem $\mathcal{Q}_{t,s}$ of $\mathcal{Q}_t=(\mathcal{Q}_{t,s})_{s\in\mathcal{E}(t)}$. Servers manipulate their quantum subsystems according to their data and send the subsystems to a receiver. The total download cost is $\sum_{t\in[T]}\sum_{s\in\mathcal{E}(t)}\log_d|\mathcal{Q}_{t,s}|$ qudits, where $|\mathcal{Q}|$ is the dimension of $\mathcal{Q}$. The states and measurements of $(\mathcal{Q}_t)_{t\in[T]}$ are required to be separable across $t\in[T]$ throughout, but for each $t\in[T]$, the subsystems of $\mathcal{Q}_{t}$ can be prepared initially in an arbitrary (data-independent) entangled state, manipulated arbitrarily by the respective servers, and measured jointly by the receiver. From the measurements, the receiver must recover the sum of all data streams. The rate is defined as the number of dits ($\mathbb{F}_d$ symbols) of the desired sum computed per qudit of download. The capacity of the $\Sigma$-QMAC, i.e., the supremum of achievable rates, is characterized for arbitrary data and entanglement distributions $\mathcal{W}, \mathcal{E}$. Coding based on the $N$-sum box abstraction is optimal in every case.
We study the computational scalability of a Gaussian process (GP) framework for solving general nonlinear partial differential equations (PDEs). This framework transforms the solution of a PDE into a quadratic optimization problem with nonlinear constraints. Its complexity bottleneck lies in computing with dense kernel matrices obtained from pointwise evaluations of the covariance kernel of the GP and its partial derivatives at collocation points. We present a sparse Cholesky factorization algorithm for such kernel matrices based on the near-sparsity of the Cholesky factor under a new ordering of the Dirac and derivative measurements. We rigorously identify the sparsity pattern and quantify the exponentially convergent accuracy of the corresponding Vecchia approximation of the GP, which is optimal in the Kullback-Leibler divergence. This enables us to compute $\epsilon$-approximate inverse Cholesky factors of the kernel matrices with complexity $O(N\log^d(N/\epsilon))$ in space and $O(N\log^{2d}(N/\epsilon))$ in time. With the sparse factors, gradient-based optimization methods become scalable. Furthermore, we can use the often more efficient Gauss-Newton method, for which we apply the conjugate gradient algorithm with the sparse factor of a reduced kernel matrix as a preconditioner to solve the linear system. We numerically illustrate our algorithm's near-linear space/time complexity for a broad class of nonlinear PDEs such as the nonlinear elliptic, Burgers, and Monge-Amp\`ere equations. In summary, we provide a fast, scalable, and accurate method for solving general PDEs with GPs.
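A KL-optimal sparse inverse Cholesky factor of the kind described above admits an explicit columnwise formula: each column solves a small dense system restricted to its sparsity set. The sketch below, a dense toy with names of our own choosing (the paper's algorithm additionally supplies the measurement ordering and the sparsity pattern), illustrates the formula; with the full lower-triangular pattern it reproduces the exact inverse Cholesky factor:

```python
import numpy as np

def kl_optimal_inverse_cholesky(theta, pattern):
    """Sparse approximate inverse Cholesky factor U (upper triangular),
    KL-optimal subject to a given sparsity pattern: pattern[i] is a sorted
    list of row indices (a subset of 0..i containing i).
    Returns U with theta^{-1} ~ U @ U.T (exact for the full pattern)."""
    n = theta.shape[0]
    U = np.zeros((n, n))
    for i in range(n):
        s = pattern[i]
        e = np.zeros(len(s)); e[-1] = 1.0               # indicator of column i in s
        col = np.linalg.solve(theta[np.ix_(s, s)], e)    # small local solve
        U[s, i] = col / np.sqrt(col[-1])                 # KL-optimal column formula
    return U
```

With a sparsity pattern of $O(\log^d(N/\epsilon))$ entries per column, each local solve touches only a small submatrix, which is the source of the near-linear complexity claimed above.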
The aim of this paper is to study the shape optimization method for solving the Bernoulli free boundary problem, a well-known ill-posed problem that seeks the unknown free boundary through Cauchy data. Different formulations have been proposed in the literature, differing in the choice of the objective functional. Specifically, it was shown in [14] and [16], respectively, that tracking Neumann data is well-posed but tracking Dirichlet data is not. In this paper we propose a new well-posed objective functional that tracks Dirichlet data at the free boundary. By calculating the Euler derivative and the shape Hessian of the objective functional, we show that the new formulation is well-posed, i.e., the shape Hessian is coercive at the minimizers. The coercivity of the shape Hessian may ensure the existence of optimal solutions for the nonlinear Ritz-Galerkin approximation method and its convergence, and is thus crucial to the formulation. In summary, we conclude that tracking Dirichlet or Neumann data in its energy norm is not sufficient, whereas tracking it in a norm half an order higher is well-posed. To support our theoretical results we carry out extensive numerical experiments.