We generalize quantum-classical PCPs, first introduced by Weggemans, Folkertsma and Cade (TQC 2024), to allow for $q$ quantum queries to a polynomially-sized classical proof ($\mathsf{QCPCP}_{Q,c,s}[q]$). Exploiting a connection with the polynomial method, we prove that for any constant $q$, promise gap $c-s = \Omega(1/\text{poly}(n))$ and $\delta>0$, we have $\mathsf{QCPCP}_{Q,c,s}[q] \subseteq \mathsf{QCPCP}_{1-\delta,1/2+\delta}[3] \subseteq \mathsf{BQ} \cdot \mathsf{NP}$, where $\mathsf{BQ} \cdot \mathsf{NP}$ is the class of promise problems with quantum reductions to an $\mathsf{NP}$-complete problem. Surprisingly, this shows that we can amplify the promise gap from inverse polynomial to constant for constant-query quantum-classical PCPs, and that any quantum-classical PCP making any constant number of quantum queries can be simulated by one that makes only three classical queries. Nevertheless, even though we can achieve promise gap amplification, our result also gives strong evidence that there exists no constant-query quantum-classical PCP for $\mathsf{QCMA}$, as it is unlikely that $\mathsf{QCMA} \subseteq \mathsf{BQ} \cdot \mathsf{NP}$, a claim we support with oracular evidence. In the (poly-)logarithmic query regime, we show that for any positive integer $c$, there exists an oracle relative to which $\mathsf{QCPCP}[\mathcal{O}(\log^c n)] \subsetneq \mathsf{QCPCP}_Q[\mathcal{O}(\log^c n)]$, contrasting with the constant-query case, where the equivalence of both query models holds relative to any oracle. Finally, we connect our results to more general quantum-classical interactive proof systems.
Motivated by the need for a rigorous analysis of the numerical stability of variational least-squares kernel-based methods for solving second-order elliptic partial differential equations, we provide previously lacking stability inequalities. This fills a significant theoretical gap in the previous work [Comput. Math. Appl. 103 (2021) 1-11], which provided error estimates based on a conjecture on the stability. With the stability estimate now rigorously proven, we complete the theoretical foundations and compare the observed convergence behavior to the proven rates. Furthermore, we establish another stability inequality involving weighted-discrete norms, and provide a theoretical proof that exact quadrature weights are not necessary for the weighted least-squares kernel-based collocation method to converge. Our novel theoretical insights are validated by numerical examples, which showcase the relative efficiency and accuracy of these methods on data sets with large mesh ratios. The results confirm our theoretical predictions regarding the performance of the variational least-squares kernel-based method, the least-squares kernel-based collocation method, and our new weighted least-squares kernel-based collocation method. Most importantly, our results demonstrate that all three methods converge at the same rate, validating our convergence theory for weighted least-squares kernel-based collocation.
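To illustrate the basic mechanism of (weighted) least-squares kernel-based collocation, the following minimal numpy sketch solves a 1D Poisson model problem by oversampled collocation with Gaussian kernels; the kernel, shape parameter, point counts, and row weights are illustrative choices, not the ones analyzed in the paper.

```python
import numpy as np

# Gaussian kernel and its second derivative in x (center z, shape parameter eps)
eps = 10.0
phi   = lambda x, z: np.exp(-eps**2 * (x - z)**2)
phixx = lambda x, z: (4*eps**4*(x - z)**2 - 2*eps**2) * np.exp(-eps**2*(x - z)**2)

# Model problem: -u'' = f on (0,1), u(0) = u(1) = 0, exact u(x) = sin(pi x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

m, n = 80, 25                        # collocation points (rows) > trial centers (cols)
x = np.linspace(0, 1, m)             # oversampled collocation points
z = np.linspace(0, 1, n)             # kernel centers spanning the trial space

# Least-squares collocation matrix: PDE rows at interior points, BC rows at the ends
interior = (x > 0) & (x < 1)
A = np.where(interior[:, None], -phixx(x[:, None], z[None, :]),
                                  phi(x[:, None], z[None, :]))
b = np.where(interior, f(x), 0.0)

# Illustrative weighting: quadrature-type weights on the PDE rows, unit BC weights
w = np.where(interior, np.sqrt(1.0 / m), 1.0)
c, *_ = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)

u = phi(x[:, None], z[None, :]) @ c
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```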
High-dimensional parabolic partial differential equations (PDEs) often involve large-scale Hessian matrices, which are computationally expensive for deep learning methods that rely on automatic differentiation to compute derivatives. This work aims to address this issue. In the proposed method, the PDE is reformulated into a martingale formulation, which allows the computation of loss functions to be derivative-free and parallelized in the time-space domain. The martingale formulation is then enforced using a Galerkin method via adversarial learning techniques, which eliminates the need to compute conditional expectations in the martingale property. The method is further extended to solve Hamilton-Jacobi-Bellman (HJB) equations and the associated stochastic optimal control problems, enabling the simultaneous solution of the value function and the optimal feedback control in a derivative-free manner. Numerical results demonstrate the effectiveness and efficiency of the proposed method, which solves HJB equations accurately in dimensions up to 10,000.
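The key point, that the martingale property yields a derivative-free residual, can be checked in a few lines. The following numpy sketch is a toy illustration (not the paper's adversarial Galerkin solver): for a 1D heat equation it verifies that a Galerkin-type martingale loss, built from function evaluations along simulated paths only, vanishes for the exact solution `u_exact` and is clearly nonzero for a wrong candidate `u_wrong`; the test function is an arbitrary illustrative choice.

```python
import numpy as np

# Heat equation u_t + 0.5 u_xx = 0 with terminal data g(x) = x^2;
# exact solution u(t, x) = x^2 + (T - t), so M_t = u(t, W_t) is a martingale.
rng = np.random.default_rng(0)
T, N, M = 1.0, 50, 100_000            # horizon, time steps, Monte Carlo paths
dt = T / N
t  = np.linspace(0.0, T, N + 1)

u_exact = lambda t, x: x**2 + (T - t)
u_wrong = lambda t, x: x**2 + 2.0 * (T - t)    # violates the PDE
test    = lambda t, x: np.exp(-x**2)           # an illustrative Galerkin test function

# Simulate Brownian paths; no derivatives of u are ever needed
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
W  = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)

def martingale_loss(u):
    """Derivative-free residual |E[(u(t_{k+1}, X_{k+1}) - u(t_k, X_k)) test(t_k, X_k)]|."""
    inc = u(t[1:], W[:, 1:]) - u(t[:-1], W[:, :-1])
    return np.abs((inc * test(t[:-1], W[:, :-1])).mean(axis=0)).sum()

print("loss, exact solution :", martingale_loss(u_exact))   # ~ 0 (Monte Carlo noise)
print("loss, wrong candidate:", martingale_loss(u_wrong))   # clearly nonzero
```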
A new variant of the GMRES method is presented for solving linear systems with the same matrix and multiple right-hand sides that become available one after another. The new method retains the following properties of the classical GMRES algorithm. The bases of both the search space and its image are kept orthonormal, which increases the robustness of the method. Moreover, there is no need to store both bases, since they are effectively represented within a common basis. In addition, our method is theoretically equivalent to the GCR method extended to the case of multiple right-hand sides, but it is numerically more robust and requires less memory. The main result of the paper is a mechanism for adding an arbitrary direction vector to the search space, which can easily be adapted for flexible GMRES or GMRES with deflated restarting.
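For orientation, here is a minimal numpy sketch of the baseline the paper compares against: GCR with an orthonormal image basis whose direction vectors are recycled across subsequent right-hand sides. The recycling strategy and test matrix are illustrative; the paper's method differs in how the two bases are represented and stored.

```python
import numpy as np

def gcr_recycled(A, b, P=None, AP=None, tol=1e-10, maxit=200):
    """GCR with an orthonormal image basis AP = A @ P, recycled across right-hand sides."""
    n = len(b)
    P  = np.zeros((n, 0)) if P is None else P
    AP = np.zeros((n, 0)) if AP is None else AP
    x = P @ (AP.T @ b)                 # first exploit directions kept from earlier solves
    r = b - A @ x
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        p, q = r.copy(), A @ r         # new direction and its image
        for j in range(AP.shape[1]):   # modified Gram-Schmidt against the image basis
            hj = AP[:, j] @ q
            q -= hj * AP[:, j]
            p -= hj * P[:, j]
        nq = np.linalg.norm(q)
        p, q = p / nq, q / nq
        alpha = q @ r                  # minimal-residual step along p
        x += alpha * p
        r -= alpha * q
        P, AP = np.c_[P, p], np.c_[AP, q]
    return x, P, AP

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100)) + 10 * np.eye(100)
b1, b2 = rng.standard_normal(100), rng.standard_normal(100)
x1, P, AP = gcr_recycled(A, b1)
x2, P, AP = gcr_recycled(A, b2, P, AP)          # second solve reuses the search space
print(np.linalg.norm(A @ x1 - b1), np.linalg.norm(A @ x2 - b2))
```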
We propose and justify a matrix reduction method for calculating the optimal approximation of an observed matrix $A \in {\mathbb C}^{m \times n}$ by a sum $\sum_{i=1}^p \sum_{j=1}^q B_iX_{ij}C_j$ of matrix products, where each $B_i \in {\mathbb C}^{m \times g_i}$ and $C_j \in {\mathbb C}^{h_j \times n}$ is known and where the unknown matrix kernels $X_{ij}$ are determined by minimizing the Frobenius norm of the error. The sum can be represented as a bounded linear mapping $BXC$ with unknown kernel $X$ from a prescribed subspace ${\mathcal T} \subseteq {\mathbb C}^n$ onto a prescribed subspace ${\mathcal S} \subseteq {\mathbb C}^m$, defined respectively by the collective domains and ranges of the given matrices $C_1,\ldots,C_q$ and $B_1,\ldots,B_p$. We show that the optimal kernel is $X = B^{\dag}AC^{\dag}$ and that the optimal approximation $BB^{\dag}AC^{\dag}C$ is the projection of the observed mapping $A$ onto a mapping from ${\mathcal T}$ to ${\mathcal S}$. If $A$ is large, then $B$ and $C$ may also be large, and direct calculation of $B^{\dag}$ and $C^{\dag}$ becomes unwieldy and inefficient. The proposed method avoids this difficulty by reducing the solution process to finding the pseudo-inverses of a collection of much smaller matrices. This significantly reduces the computational burden.
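The optimality of $X = B^{\dag}AC^{\dag}$ is easy to verify numerically. The following numpy sketch stacks illustrative random factors $B_i$ and $C_j$ into $B$ and $C$, forms the optimal approximation directly via full pseudo-inverses (the expensive route that the paper's reduction avoids), and checks that perturbing the kernel only increases the Frobenius error; the block sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 50
# Stack the known factors: B = [B_1 ... B_p], C = [C_1; ...; C_q]
B = np.hstack([rng.standard_normal((m, g)) for g in (3, 4, 5)])   # m x 12
C = np.vstack([rng.standard_normal((h, n)) for h in (2, 3)])      # 5 x n
A = rng.standard_normal((m, n))

Bp, Cp = np.linalg.pinv(B), np.linalg.pinv(C)
X = Bp @ A @ Cp                 # optimal kernel X = B^+ A C^+
A_hat = B @ X @ C               # optimal approximation B B^+ A C^+ C
# The individual kernels X_ij are the subblocks of X in the (g_i, h_j) partition.

# A_hat is a Frobenius-orthogonal projection of A: perturbing X cannot help
E = rng.standard_normal(X.shape)
for s in (1e-3, 1e-2, 1e-1):
    assert np.linalg.norm(A - B @ (X + s * E) @ C) >= np.linalg.norm(A - A_hat)
print("optimal error:", np.linalg.norm(A - A_hat))
```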
We study the finite element approximation of problems involving the weighted $p$-Laplacian for $p \in (1,\infty)$ and weights belonging to the Muckenhoupt class $A_1$. In particular, we consider an equation and an obstacle problem for the weighted $p$-Laplacian and derive error estimates in both cases. The analysis is based on the language of weighted Orlicz and Orlicz--Sobolev spaces.
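As a toy illustration of finite elements for a weighted degenerate problem (in 1D, far simpler than the setting analyzed above), the numpy sketch below solves a weighted $p$-Laplacian equation with P1 elements and a Picard (Kacanov) iteration, which is known to converge for $1 < p \le 2$; the weight, exponent, right-hand side, and regularization are all illustrative choices.

```python
import numpy as np

# P1 finite elements with Picard (Kacanov) iteration for the weighted p-Laplacian
#   -( w(x) |u'|^{p-2} u' )' = 1  on (0,1),  u(0) = u(1) = 0,
# with the Muckenhoupt A_1 weight w(x) = |x - 1/2|^{-1/2} (illustrative choices).
p, N, delta = 1.5, 64, 1e-6            # exponent, elements, gradient regularization
h = 1.0 / N
mid = (np.arange(N) + 0.5) * h         # element midpoints (N even avoids x = 1/2)
w = np.abs(mid - 0.5) ** (-0.5)        # weight sampled at element midpoints

u = np.zeros(N + 1)                    # nodal values, homogeneous Dirichlet data
for it in range(1, 201):
    du = np.diff(u) / h                                  # elementwise gradient
    coef = w * np.maximum(np.abs(du), delta) ** (p - 2)  # frozen nonlinear coefficient
    K = np.zeros((N + 1, N + 1))                         # stiffness of -(coef u')'
    for e in range(N):
        K[e:e+2, e:e+2] += coef[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    F = np.full(N + 1, h)                                # P1 load vector for f = 1
    K[0, :], K[-1, :] = 0.0, 0.0                         # impose u(0) = u(1) = 0
    K[0, 0] = K[-1, -1] = 1.0
    F[0] = F[-1] = 0.0
    u_new = np.linalg.solve(K, F)
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new
print(f"Picard stopped after {it} iterations, max(u) = {u.max():.4f}")
```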
We consider a fully discretized numerical scheme for parabolic stochastic partial differential equations with multiplicative noise. Our abstract framework can be applied to formulate a non-iterative domain decomposition approach. Such methods can help to parallelize the code and therefore lead to a more efficient implementation. The domain decomposition is integrated through the Douglas-Rachford splitting scheme, where each split operator acts on one part of the domain. For an efficient space discretization of the underlying equation, we choose the discontinuous Galerkin method, as it suits the parallelization strategy well. For this fully discretized scheme, we provide a strong space-time convergence result. We conclude the manuscript with numerical experiments validating our theoretical findings.
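As a toy illustration of the non-iterative idea, the numpy sketch below applies the classical Douglas-Rachford splitting in time to a 1D stochastic heat equation with multiplicative noise, where the discrete Laplacian is split by subdomain indicator functions so that each implicit solve essentially acts on one subdomain. A finite difference discretization and a single driving Brownian motion stand in for the paper's discontinuous Galerkin setting; all parameters are illustrative.

```python
import numpy as np

# Non-iterative domain decomposition via Douglas-Rachford splitting for
#   du = u_xx dt + sigma * u dW(t)  on (0,1), u(0) = u(1) = 0.
rng = np.random.default_rng(2)
N, T, steps, sigma = 100, 0.1, 200, 0.5
h, dt = 1.0 / N, T / 200
x = np.linspace(h, 1 - h, N - 1)               # interior grid points

# Discrete Laplacian L, split by subdomain indicators: A1 + A2 = L
L = (np.diag(np.full(N - 2, 1.0), -1) - 2 * np.eye(N - 1)
     + np.diag(np.full(N - 2, 1.0), 1)) / h**2
chi1 = np.diag((x <= 0.5).astype(float))       # indicator of the left subdomain
A1, A2 = chi1 @ L, (np.eye(N - 1) - chi1) @ L

I = np.eye(N - 1)
u = np.sin(np.pi * x)                          # initial condition
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))          # one Brownian increment (simplification)
    rhs = u + dt * (A2 @ u) + sigma * u * dW   # explicit noise, implicit splitting
    v = np.linalg.solve(I - dt * A1, rhs)      # implicit solve tied to subdomain 1
    u = np.linalg.solve(I - dt * A2, v - dt * (A2 @ u))  # implicit solve, subdomain 2
print("max of u at time T:", u.max())
```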
Density deconvolution deals with the estimation of the probability density function $f$ of a random signal from $n\geq1$ data observed with independent and known additive random noise. This is a classical problem in statistics, for which frequentist and Bayesian nonparametric approaches are available to estimate $f$ in static or batch domains. In this paper, we consider the problem of density deconvolution in a streaming or online domain, and develop a principled sequential approach to estimate $f$. By relying on a quasi-Bayesian sequential (learning) model for the data, often referred to as Newton's algorithm, we obtain a sequential deconvolution estimate $f_{n}$ of $f$ that is easy to evaluate and computationally efficient, with a constant computational cost as data increase, which is desirable for streaming data. In particular, local and uniform Gaussian central limit theorems for $f_{n}$ are established, leading to asymptotic credible intervals and bands for $f$, respectively. We provide large sample asymptotic guarantees for the sequential deconvolution estimate $f_{n}$ both under the quasi-Bayesian sequential model for the data, proving a merging with respect to the direct density estimation problem, and under a ``true'' frequentist model for the data, proving consistency. An empirical validation of our methods is presented on synthetic and real data, including comparisons with a kernel approach and a Bayesian nonparametric approach based on a Dirichlet process mixture prior.
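The core recursion is simple to state and implement. The following numpy sketch runs Newton's quasi-Bayesian algorithm for deconvolution on a grid, with Gaussian noise of known standard deviation; the grid, prior guess, learning-rate sequence, and simulated mixture are illustrative choices. Each update costs a fixed number of grid operations, matching the constant per-observation cost highlighted above.

```python
import numpy as np

# Newton's quasi-Bayesian recursion for density deconvolution: observe
# Y_i = X_i + eps_i with known noise density g; sequentially update the
# estimate of f, the density of the signal X.
rng = np.random.default_rng(3)
s = 0.3                                        # known noise standard deviation
g = lambda r: np.exp(-r**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

theta = np.linspace(-4, 4, 400)                # grid supporting the signal density
dth = theta[1] - theta[0]
f = np.ones_like(theta) / 8.0                  # flat prior guess, integrates to 1

# Simulate streaming data: X from a two-component Gaussian mixture (illustrative)
n = 5000
X = np.where(rng.random(n) < 0.4, rng.normal(-1, 0.4, n), rng.normal(1.5, 0.5, n))
Y = X + rng.normal(0, s, n)

for i, y in enumerate(Y, start=1):
    alpha = (i + 1) ** (-0.7)                  # learning rate, exponent in (1/2, 1]
    like = g(y - theta)                        # likelihood of y given X = theta
    post = like * f / np.sum(like * f * dth)   # one-step posterior on the grid
    f = (1 - alpha) * f + alpha * post         # constant-cost sequential update

npdf = lambda t, mu, sd: np.exp(-(t - mu)**2 / (2 * sd**2)) / np.sqrt(2 * np.pi * sd**2)
true_f = 0.4 * npdf(theta, -1, 0.4) + 0.6 * npdf(theta, 1.5, 0.5)
print("L1 error of f_n:", np.sum(np.abs(f - true_f)) * dth)
```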
Finite element discretization of Stokes problems can result in singular, inconsistent saddle point linear algebraic systems. This inconsistency can cause many iterative methods to fail to converge. In this work, we consider the lowest-order weak Galerkin finite element method for discretizing Stokes flow problems and study a consistency enforcement that modifies the right-hand side of the resulting linear system. It is shown that this modification does not affect the optimal-order convergence of the numerical solution. Moreover, inexact block diagonal and block triangular Schur complement preconditioners, combined with the minimal residual method (MINRES) and the generalized minimal residual method (GMRES), are studied for the iterative solution of the modified scheme. Bounds for the eigenvalues and for the residual of MINRES/GMRES are established. These bounds show that the convergence of MINRES and GMRES is independent of the viscosity parameter and the mesh size. The convergence of the modified scheme and the effectiveness of the preconditioners are verified using numerical examples in two and three dimensions.
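To illustrate the preconditioning strategy, the scipy sketch below runs MINRES with a block-diagonal preconditioner $\mathrm{diag}(A^{-1}, S^{-1})$ on a small model saddle point system; the matrices are illustrative stand-ins for a weak Galerkin discretization. With exact blocks, the preconditioned matrix has only three distinct eigenvalues (Murphy, Golub and Wathen), so MINRES converges in a handful of iterations independently of the system size.

```python
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(4)
n, m = 120, 30
# SPD "velocity" block (1D Laplacian) and a full-rank "divergence" block (illustrative)
A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
B = rng.standard_normal((m, n)) / np.sqrt(n)
K = np.block([[A, B.T], [B, np.zeros((m, m))]])        # symmetric saddle point matrix
rhs = np.concatenate([rng.standard_normal(n), np.zeros(m)])

S = B @ np.linalg.solve(A, B.T)                        # exact Schur complement
Ainv, Sinv = np.linalg.inv(A), np.linalg.inv(S)        # small demo: dense inverses
M = spla.LinearOperator((n + m, n + m),
                        matvec=lambda r: np.concatenate([Ainv @ r[:n], Sinv @ r[n:]]))

sol, info = spla.minres(K, rhs, M=M, maxiter=500)      # block-diagonal preconditioning
print("info:", info, " residual:", np.linalg.norm(K @ sol - rhs))
```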
We propose a $C^0$ interior penalty method for the fourth-order stream function formulation of the surface Stokes problem. The scheme utilizes continuous, piecewise polynomial spaces defined on an approximate surface. We show that the resulting discretization is positive definite and derive error estimates in various norms in terms of the polynomial degree of the finite element space as well as the polynomial degree used to define the geometry approximation. A notable feature of the scheme is that it does not explicitly depend on the Gauss curvature of the surface. This is achieved via a novel integration-by-parts formula for the surface biharmonic operator.
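To convey the flavor of $C^0$ interior penalty discretizations (in a much simpler setting than the surface Stokes problem above), the numpy sketch below solves the 1D biharmonic problem with clamped boundary conditions using continuous piecewise quadratics: $C^1$ continuity is imposed weakly by penalizing jumps of $u'$ across mesh nodes, together with the usual symmetric consistency terms. The mesh size, penalty parameter, and manufactured solution are illustrative.

```python
import numpy as np

# 1D C^0 interior penalty sketch for u'''' = f on (0,1) with clamped BCs
# u(0)=u(1)=0, u'(0)=u'(1)=0, using continuous piecewise quadratics (P2).
# On each element u'' is constant; C^1 continuity is enforced weakly.
N, sigma = 32, 50.0                     # elements, penalty parameter (assumed ample)
h = 1.0 / N
ndof = 2 * N + 1                        # vertices at even indices, midpoints at odd
K, F = np.zeros((ndof, ndof)), np.zeros(ndof)
f = lambda x: -8 * np.pi**4 * np.cos(2 * np.pi * x)   # so that u = sin^2(pi x)
c2 = np.array([4.0, -8.0, 4.0]) / h**2  # local map (u0, um, u1) -> u'' on an element
dL = np.array([-3.0, 4.0, -1.0]) / h    # local map -> u' at the left endpoint
dR = np.array([1.0, -4.0, 3.0]) / h     # local map -> u' at the right endpoint

for e in range(N):
    dofs = [2 * e, 2 * e + 1, 2 * e + 2]
    K[np.ix_(dofs, dofs)] += h * np.outer(c2, c2)             # integral of u'' v''
    for xi in 0.5 + np.array([-1.0, 1.0]) / (2 * np.sqrt(3)): # 2-pt Gauss load
        shape = np.array([(1 - xi)*(1 - 2*xi), 4*xi*(1 - xi), xi*(2*xi - 1)])
        F[dofs] += 0.5 * h * f((e + xi) * h) * shape

for j in range(N + 1):                  # jump/average couplings at every node
    jmp, avg = np.zeros(ndof), np.zeros(ndof)
    if j > 0:                           # traces from the element on the left
        d = [2 * j - 2, 2 * j - 1, 2 * j]
        jmp[d] += dR; avg[d] += 0.5 * c2
    if j < N:                           # traces from the element on the right
        d = [2 * j, 2 * j + 1, 2 * j + 2]
        jmp[d] -= dL; avg[d] += 0.5 * c2
    if j in (0, N):
        avg *= 2.0                      # one-sided average at the boundary (Nitsche)
    K += -np.outer(avg, jmp) - np.outer(jmp, avg) + (sigma / h) * np.outer(jmp, jmp)

for d in (0, ndof - 1):                 # impose u = 0 strongly at both endpoints
    K[d, :], K[:, d], K[d, d], F[d] = 0.0, 0.0, 1.0, 0.0
u = np.linalg.solve(K, F)
x = np.arange(ndof) * (h / 2)           # nodal coordinates (vertices and midpoints)
print("max nodal error:", np.abs(u - np.sin(np.pi * x)**2).max())
```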
Runge-Kutta methods hold an irreplaceable position among numerical methods for solving ordinary differential equations. In particular, implicit methods are well suited to approximating solutions of stiff initial value problems. We propose a new way of deriving the coefficients of implicit Runge-Kutta methods. This approach, based on repeated integrals, yields both new and well-known Butcher tableaux. We discuss the properties of the newly derived methods and compare them with standard collocation implicit Runge-Kutta methods in a series of numerical experiments, including the Prothero-Robinson problem.
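For reference, the numpy sketch below applies a standard two-stage Gauss collocation implicit Runge-Kutta method (one of the baselines mentioned above, not a method derived in the paper) to the Prothero-Robinson problem; because the problem is linear in $y$, the stage equations reduce to a single linear solve per step. The stiffness parameter and step size are illustrative.

```python
import numpy as np

# Two-stage Gauss (collocation) implicit Runge-Kutta method, order 4, applied to
# the stiff Prothero-Robinson problem  y' = lam*(y - phi(t)) + phi'(t),  y = phi.
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3/6], [0.25 + s3/6, 0.25]])   # Butcher matrix
b = np.array([0.5, 0.5])
c = np.array([0.5 - s3/6, 0.5 + s3/6])

lam = -1e6
phi, dphi = np.sin, np.cos
g = lambda t: -lam * phi(t) + dphi(t)          # f(t, y) = lam*y + g(t), linear in y

T, n = 10.0, 100
h = T / n
y, t = phi(0.0), 0.0
for _ in range(n):
    # Stage equations (I - h*lam*A) k = lam*y*1 + g(t + c*h); linear, solve directly
    k = np.linalg.solve(np.eye(2) - h * lam * A, lam * y + g(t + c * h))
    y += h * (b @ k)
    t += h
print("error at T:", abs(y - phi(T)))          # stays small despite h*|lam| = 1e5
```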