In this paper, we introduce a framework for the discretization of a class of constrained Hamilton-Jacobi equations, a system coupling a Hamilton-Jacobi equation with a Lagrange multiplier determined by the constraint. The equation is non-local, and the constraint has bounded variation. We show that, under a set of general hypotheses, the approximation obtained with a monotone finite-difference scheme converges towards the viscosity solution of the constrained Hamilton-Jacobi equation. Constrained Hamilton-Jacobi equations often arise as the long-time, small-mutation asymptotics of population models in quantitative genetics. As an example, we detail the construction of a scheme for the limit of an integral Lotka-Volterra equation. We also construct and analyze an Asymptotic-Preserving (AP) scheme for the model outside of the asymptotic regime, and prove that it is stable along the transition towards the asymptotics. The theoretical analysis of the schemes is illustrated and discussed with numerical simulations. The AP scheme is also used to conjecture the asymptotic behavior of the integral Lotka-Volterra equation when the environment varies in time.
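For orientation, the following is a minimal sketch, not the paper's scheme, of a monotone Lax-Friedrichs finite-difference step for a local one-dimensional Hamilton-Jacobi equation $u_t + H(u_x) = 0$ with periodic boundary conditions; the constrained, non-local setting above couples such a discretization with a Lagrange multiplier.

```python
import numpy as np

def lax_friedrichs_step(u, dx, dt, H, alpha):
    """One step of a monotone Lax-Friedrichs scheme for u_t + H(u_x) = 0."""
    up = (np.roll(u, -1) - u) / dx   # forward difference
    um = (u - np.roll(u, 1)) / dx    # backward difference
    # Monotone numerical Hamiltonian with dissipation alpha >= max |H'|
    return u - dt * (H(0.5 * (up + um)) - 0.5 * alpha * (up - um))

# Example: H(p) = |p|, periodic initial data
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2.0 * np.pi * x)
dx = x[1] - x[0]
alpha = 1.0
dt = 0.5 * dx / alpha            # CFL restriction ensuring monotonicity
for _ in range(100):
    u = lax_friedrichs_step(u, dx, dt, np.abs, alpha)
```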
We propose a high-precision algorithm for solving the Gelfand-Levitan-Marchenko equation. The algorithm is based on a block version of the Toeplitz Inner-Bordering algorithm of Levinson type. To approximate the integrals, we use high-precision one-sided and two-sided Gregory quadrature formulas. We also use the Woodbury formula to construct the computational algorithm, which makes it possible to exploit the almost-Toeplitz structure of the matrices for fast computations.
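For reference, the Woodbury formula mentioned above is the low-rank update identity
\[
(A + UCV)^{-1} = A^{-1} - A^{-1} U \left( C^{-1} + V A^{-1} U \right)^{-1} V A^{-1},
\]
which allows the inverse of a perturbed matrix to be obtained from the inverse of its structured part (here, the almost-Toeplitz part, invertible by fast Levinson-type recursions) at the cost of a small correction.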
In this chapter, we address the challenge of exploring the posterior distributions of Bayesian inverse problems with computationally intensive forward models. We consider various multivariate proposal distributions and compare them with single-site Metropolis updates. We show how fast, approximate models can be leveraged to improve MCMC sampling efficiency.
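One standard way to exploit a fast approximate model in this context is two-stage (delayed-acceptance) Metropolis, where the surrogate screens proposals and the expensive forward model is evaluated only for candidates that survive the first stage. Below is a minimal sketch assuming a symmetric proposal; `log_post_cheap` and `log_post_exact` are hypothetical log-posterior evaluators, and the chapter's own strategies may differ.

```python
import numpy as np

def delayed_acceptance_step(x, log_post_cheap, log_post_exact, propose, rng):
    """One two-stage Metropolis step using a cheap surrogate as a first filter."""
    y = propose(x, rng)
    # Stage 1: screen the proposal with the cheap model
    log_a1 = log_post_cheap(y) - log_post_cheap(x)
    if np.log(rng.uniform()) >= log_a1:
        return x                     # rejected by the surrogate; exact model not run
    # Stage 2: correct with the exact model so the chain targets the true posterior
    log_a2 = (log_post_exact(y) - log_post_exact(x)) - log_a1
    return y if np.log(rng.uniform()) < log_a2 else x
```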
When applying classical multistep schemes to solve differential equations, one often faces the dilemma that higher-order schemes require smaller time steps, making it impractical to use high-order schemes for stiff problems. In this paper we construct a new class of BDF and implicit-explicit (IMEX) schemes for parabolic-type equations based on Taylor expansions at time $t^{n+\beta}$, where $\beta > 1$ is a tunable parameter. With a suitable $\beta$, these new schemes allow larger time steps at higher order for stiff problems than is allowed by the usual higher-order schemes. For parabolic-type equations, we identify an explicit uniform multiplier for the new second- through fourth-order schemes, and carry out rigorous stability and error analysis using the energy argument. We also present ample numerical examples to validate our findings.
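The coefficients of such schemes can be obtained mechanically by matching Taylor expansions at $t^{n+\beta}$. As an illustration of the derivation device (not the paper's schemes themselves), the sketch below computes weights $a_j$ with $\sum_{j=0}^{k} a_j\, u(t^{n+1-j}) \approx \Delta t\, u'(t^{n+\beta})$ to order $k$ by solving the moment conditions.

```python
import numpy as np

def shifted_bdf_weights(k, beta):
    """Weights a_j with sum_j a_j u(t^{n+1-j}) ~ dt * u'(t^{n+beta}),
    derived by matching Taylor expansions at t^{n+beta} (order k)."""
    c = np.array([1.0 - beta - j for j in range(k + 1)])  # node offsets from t^{n+beta}
    V = np.vander(c, increasing=True).T                   # row m holds c_j**m, m = 0..k
    rhs = np.zeros(k + 1)
    rhs[1] = 1.0                                          # match the u' term only
    return np.linalg.solve(V, rhs)

# beta = 1 recovers the classical BDF-2 weights:
print(shifted_bdf_weights(2, 1.0))   # expect [1.5, -2.0, 0.5]
```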
This paper studies the $p$-biharmonic equation on graphs, which arises in point cloud processing and can be interpreted as a natural extension of the graph $p$-Laplacian from the perspective of hypergraphs. We investigate the asymptotic behavior of the solution on random geometric graphs as the number of data points goes to infinity, and show that the continuum limit is an appropriately weighted $p$-biharmonic equation with homogeneous Neumann boundary conditions. The result relies on uniform $L^p$ estimates for solutions and gradients of nonlocal and graph Poisson equations. $L^\infty$ estimates of solutions are also obtained as a byproduct.
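For concreteness, one common discrete form (an assumption for illustration; the paper's precise weighting may differ) applies the graph Laplacian, a pointwise $(p-2)$-power nonlinearity, and the Laplacian again, mirroring the continuum operator $\Delta(|\Delta u|^{p-2}\Delta u)$.

```python
import numpy as np

def random_geometric_graph_laplacian(X, eps):
    """Unnormalized graph Laplacian of a random geometric graph:
    points are connected when their distance is below eps."""
    W = (np.linalg.norm(X[:, None] - X[None, :], axis=-1) < eps).astype(float)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def graph_p_biharmonic(L, u, p):
    """Apply a graph p-biharmonic operator: L(|L u|^{p-2} (L u))."""
    Lu = L @ u
    return L @ (np.abs(Lu) ** (p - 2) * Lu)

# Example on random points in the unit square
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
L = random_geometric_graph_laplacian(X, eps=0.15)
u = np.sin(2.0 * np.pi * X[:, 0])
r = graph_p_biharmonic(L, u, p=3.0)
```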
Motivated by the widespread use of approximate derivatives in machine learning and optimization, we study inexact subgradient methods with non-vanishing additive errors and step sizes. In the nonconvex semialgebraic setting, under boundedness assumptions, we prove that the method provides points that eventually fluctuate close to the critical set at a distance proportional to $\epsilon^\rho$ where $\epsilon$ is the error in subgradient evaluation and $\rho$ relates to the geometry of the problem. In the convex setting, we provide complexity results for the averaged values. We also obtain byproducts of independent interest, such as descent-like lemmas for nonsmooth nonconvex problems and some results on the limit of affine interpolants of differential inclusions.
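A minimal sketch of the setting (not the paper's analysis): a subgradient step with a constant step size and an additive error of fixed norm $\epsilon$, under which the iterates are expected to fluctuate near the critical set rather than converge. Here `subgrad` stands for any subgradient oracle.

```python
import numpy as np

def inexact_subgradient(x0, subgrad, step, eps, n_iter, rng):
    """Subgradient method with a non-vanishing additive error of norm eps
    and a constant step size."""
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for _ in range(n_iter):
        e = rng.normal(size=x.shape)
        e *= eps / max(np.linalg.norm(e), 1e-12)   # error of fixed magnitude eps
        x = x - step * (subgrad(x) + e)
        iterates.append(x.copy())
    return iterates

# Example: f(x) = ||x||_1, with subgradient sign(x)
rng = np.random.default_rng(1)
traj = inexact_subgradient(np.ones(5), np.sign, step=0.01, eps=0.1,
                           n_iter=2000, rng=rng)
```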
In this paper, to address optimization problems on compact matrix manifolds, we introduce a novel algorithmic framework, the Transformed Gradient Projection (TGP) algorithm, built on the projection onto the compact matrix manifold. Compared with existing algorithms, the key innovation of our approach lies in the use of a new class of search directions, combined with various stepsizes (including the Armijo, nonmonotone Armijo, and fixed stepsizes) to select the next iterate. Our framework is flexible: it encompasses the classical gradient projection algorithms as special cases and intersects with the class of retraction-based line-search algorithms. Focusing on the Stiefel and Grassmann manifolds, we show that many existing algorithms in the literature can be seen as specific instances of our framework, which also induces several new special cases. We then conduct a thorough study of the convergence properties of these algorithms under the various search directions and stepsizes. To this end, we extensively analyze the geometric properties of the projection onto compact matrix manifolds, which allows us to extend classical inequalities for retractions from the literature. Building on these insights, we establish the weak convergence, convergence rate, and global convergence of the TGP algorithms under the three stepsizes. When the compact matrix manifold is the Stiefel or Grassmann manifold, our convergence results either encompass or surpass those found in the literature. Finally, through a series of numerical experiments, we observe that the TGP algorithms, owing to their greater flexibility in the choice of search directions, outperform the classical gradient projection and retraction-based line-search algorithms in several scenarios.
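As a concrete special case of the framework, the classical gradient projection step on the Stiefel manifold $\{X \in \mathbb{R}^{n \times p} : X^\top X = I\}$ uses the polar factor of the SVD as the nearest-point projection. The sketch below shows only this baseline; the TGP framework additionally transforms the search direction, which is not reproduced here.

```python
import numpy as np

def proj_stiefel(X):
    """Nearest-point projection onto the Stiefel manifold {X : X^T X = I},
    given by the polar factor U V^T of the SVD X = U S V^T (X full rank)."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

def gradient_projection_step(X, grad_f, t):
    """Classical gradient projection step: move along the negative Euclidean
    gradient with stepsize t, then project back onto the manifold."""
    return proj_stiefel(X - t * grad_f(X))
```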
We analyze a bilinear optimal control problem for the Stokes--Brinkman equations: the control variable enters the state equations as a coefficient. In two- and three-dimensional Lipschitz domains, we perform a complete continuous analysis that includes the existence of solutions and first- and second-order optimality conditions. We also develop two finite element methods that differ fundamentally in whether the admissible control set is discretized or not. For each of the proposed methods, we perform a convergence analysis and derive a priori error estimates; the latter under the assumption that the domain is convex. Finally, assuming that the domain is Lipschitz, we develop an a posteriori error estimator for each discretization scheme and obtain a global reliability bound.
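For orientation, in a typical bilinear-control formulation of this type (written here generically; the paper's precise coefficients and functional setting may differ), the control $q \ge 0$ enters the momentum equation as a reaction (Brinkman) coefficient:
\[
-\nu \Delta \mathbf{u} + q\,\mathbf{u} + \nabla p = \mathbf{f} \quad \text{in } \Omega,
\qquad
\nabla \cdot \mathbf{u} = 0 \quad \text{in } \Omega,
\]
so the control-to-state map $q \mapsto (\mathbf{u}, p)$ is nonlinear even though the state equations are linear for each fixed $q$, which is what makes the second-order optimality analysis nontrivial.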
In this paper, we provide a theoretical analysis of an operator learning method that does not rely on solution data and is based on classical finite element approximation, called the finite element operator network (FEONet). We first establish the convergence of the method for general second-order linear elliptic PDEs with respect to the parameters of the neural network approximation; in this regard, we address the role of the condition number of the finite element matrix in the convergence of the method. Secondly, we derive an explicit error estimate in the self-adjoint case. To this end, we investigate regularity properties of the solution in certain function classes relevant to neural network approximation, verifying a sufficient condition for the solution to have the desired regularity. Finally, we conduct numerical experiments that support the theoretical findings and confirm the role of the condition number of the finite element matrix in the overall convergence.
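Schematically (a sketch in our own notation, not the paper's implementation), the data-free character of such a method comes from training against the finite element residual itself: a network produces FE coefficients, and the loss is the residual of the assembled system $A u = b$, so the conditioning of $A$ directly governs how well the loss controls the actual error.

```python
import numpy as np

def feonet_style_loss(predict_coeffs, params, A, b):
    """Residual-type training loss: no labeled solutions are needed, only the
    assembled finite element matrix A and load vector b."""
    u = predict_coeffs(params)      # network output: finite element coefficients
    r = A @ u - b                   # finite element residual
    return 0.5 * float(r @ r)
```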
Quantum hypothesis testing (QHT) has been traditionally studied from the information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of samples of an unknown state. In this paper, we study the sample complexity of QHT, wherein the goal is to determine the minimum number of samples needed to reach a desired error probability. By making use of the wealth of knowledge that already exists in the literature on QHT, we characterize the sample complexity of binary QHT in the symmetric and asymmetric settings, and we provide bounds on the sample complexity of multiple QHT. In more detail, we prove that the sample complexity of symmetric binary QHT depends logarithmically on the inverse error probability and inversely on the negative logarithm of the fidelity. As a counterpart of the quantum Stein's lemma, we also find that the sample complexity of asymmetric binary QHT depends logarithmically on the inverse type II error probability and inversely on the quantum relative entropy. We then provide lower and upper bounds on the sample complexity of multiple QHT, with it remaining an intriguing open question to improve these bounds. The final part of our paper outlines and reviews how sample complexity of QHT is relevant to a broad swathe of research areas and can enhance understanding of many fundamental concepts, including quantum algorithms for simulation and search, quantum learning and classification, and foundations of quantum mechanics. As such, we view our paper as an invitation to researchers coming from different communities to study and contribute to the problem of sample complexity of QHT, and we outline a number of open directions for future research.
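In symbols, the characterizations stated above read, up to universal constants (see the paper for the precise statements): for symmetric binary QHT with error probability $\varepsilon$ and states $\rho, \sigma$ of fidelity $F(\rho,\sigma)$,
\[
n = \Theta\!\left( \frac{\log(1/\varepsilon)}{-\log F(\rho, \sigma)} \right),
\]
while for asymmetric binary QHT with type II error probability $\varepsilon$,
\[
n = \Theta\!\left( \frac{\log(1/\varepsilon)}{D(\rho \,\|\, \sigma)} \right),
\]
where $D(\rho\|\sigma)$ is the quantum relative entropy.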
In this paper, we propose a novel multiscale model reduction strategy tailored to the Poisson equation in heterogeneous perforated domains. The numerical simulation of this problem is impeded by its multiscale character: an exceptionally fine mesh is needed to capture all relevant details. To overcome the challenges posed by the multiscale nature of the perforations, we introduce a coarse space constructed with the Constraint Energy Minimizing Generalized Multiscale Finite Element Method (CEM-GMsFEM). Basis functions are constructed through a sequence of local energy minimization problems over eigenspaces that contain localized information about the heterogeneities. Our analysis shows that the required oversampling layers depend on the local eigenvalues, and hence on the local geometry. We also provide numerical examples to illustrate the efficacy of the proposed scheme.
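Schematically, in a standard CEM-GMsFEM construction (generic notation, not taken verbatim from the paper), one first solves a local spectral problem on each coarse element $K_i$,
\[
a_i\!\left(\phi_j^{(i)}, v\right) = \lambda_j^{(i)}\, s_i\!\left(\phi_j^{(i)}, v\right) \quad \text{for all test functions } v,
\]
where $a_i$ is the local energy bilinear form and $s_i$ a weighted $L^2$ form, and then obtains each multiscale basis function by minimizing the energy over an oversampled neighborhood of $K_i$ subject to constraints involving the selected low-energy eigenfunctions. The statement above says that the number of oversampling layers needed for localization is tied to the local eigenvalues $\lambda_j^{(i)}$, and hence to the local geometry of the perforations.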