Error estimates for kernel interpolation in Reproducing Kernel Hilbert Spaces (RKHS) usually assume quite restrictive properties on the shape of the domain, especially in the case of infinitely smooth kernels like the popular Gaussian kernel. In this paper we leverage an analysis of greedy kernel algorithms to prove that convergence results (in the number of interpolation points) for kernel interpolation can be obtained on arbitrary domains $\Omega \subset \mathbb{R}^d$, thus allowing for non-Lipschitz domains including, e.g., domains with cusps and irregular boundaries. In particular, we show that, when passing to a smaller domain $\tilde{\Omega} \subset \Omega \subset \mathbb{R}^d$, the convergence rate does not deteriorate, i.e., the convergence rates are stable under restriction to a subset. The impact of this result is illustrated for kernels of finite as well as infinite smoothness, such as the Gaussian kernel. A comparison to approximation in Sobolev spaces is drawn, where the shape of the domain $\Omega$ does affect the approximation properties. Numerical experiments illustrate and confirm the theoretical results.
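As a concrete illustration of the greedy kernel algorithms underlying this analysis, the following minimal numpy sketch performs $P$-greedy point selection with the Gaussian kernel, i.e., it repeatedly picks the candidate point maximizing the power function; the function names and the shape parameter eps are illustrative choices, not code from the paper.

    import numpy as np

    def gaussian_kernel(X, Y, eps=1.0):
        # k(x, y) = exp(-eps^2 * ||x - y||^2)
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps ** 2 * d2)

    def p_greedy(candidates, n_points, eps=1.0):
        # Greedily maximize the power function
        # P_n(x)^2 = k(x, x) - k(x, X_n) K_n^{-1} k(X_n, x);
        # for the Gaussian kernel k(x, x) = 1.
        idx = [0]
        for _ in range(n_points - 1):
            X = candidates[idx]
            K = gaussian_kernel(X, X, eps) + 1e-12 * np.eye(len(idx))
            kx = gaussian_kernel(candidates, X, eps)
            p2 = 1.0 - np.einsum('ij,ji->i', kx, np.linalg.solve(K, kx.T))
            idx.append(int(np.argmax(p2)))
        return np.array(idx)

Run on a candidate set filling an irregular $\Omega$, the selected points keep the power function, and hence the pointwise interpolation error bound, small over the whole domain.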
We provide a new sequent calculus that enjoys syntactic cut-elimination and strongly terminating backward proof search for the intuitionistic Strong L\"ob logic $\mathsf{iSL}$, an intuitionistic modal logic with a provability interpretation. A novel measure on sequents is used to prove both the termination of the naive backward proof search strategy and the admissibility of cut in a syntactic and direct way, leading to a straightforward cut-elimination procedure. All proofs have been formalised in the interactive theorem prover Coq.
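For orientation: up to the precise formulation chosen, $\mathsf{iSL}$ is commonly axiomatized as intuitionistic $\mathsf{K4}$ extended with the strong L\"ob axiom
\[ (\Box A \to A) \to A, \]
which is consistent over the intuitionistic base even though its classical counterpart is not.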
The proximal Galerkin finite element method is a high-order, low iteration complexity, nonlinear numerical method that preserves the geometric and algebraic structure of bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free boundary problems, enforce discrete maximum principles, and develop scalable, mesh-independent algorithms for optimal design. The paper leads to a derivation of the latent variable proximal point (LVPP) algorithm: an unconditionally stable alternative to the interior point method. LVPP is an infinite-dimensional optimization algorithm that may be viewed as having an adaptive barrier function that is updated with a new informative prior at each (outer loop) optimization iteration. One of the main benefits of this algorithm is witnessed when analyzing the classical obstacle problem. Therein, we find that the original variational inequality can be replaced by a sequence of semilinear partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout this work, we arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and certain infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization. The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis.
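To make the LVPP idea concrete on the obstacle problem with constraint $u \ge \phi$: writing $u = \phi + e^{\psi}$ with a latent variable $\psi$, each outer iteration $k$ solves, schematically (the paper's precise formulation may differ in details), the semilinear system
\[ \alpha_k (\nabla u_k, \nabla v) + (\psi_k, v) = \alpha_k (f, v) + (\psi_{k-1}, v), \qquad (u_k, w) = (\phi + e^{\psi_k}, w) \]
for all test functions $v$ and $w$, where $\alpha_k > 0$ is the proximal step size; the exponential enforces $u_k > \phi$ automatically, so no inequality constraint appears in the discretized problem.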
We introduce new control-volume finite-element discretization schemes suitable for solving the Stokes problem. Within a common framework, we present different approaches for constructing such schemes. The first and most established strategy employs a non-overlapping partitioning into control volumes. The second is a new approach that splits the control volumes into two sets, the first yielding a partition of the domain and the second containing the remaining overlapping control volumes required for stability. The third is a hybrid approach in which finite volumes are combined with finite elements based on a hierarchical splitting of the ansatz space. All approaches are based on typical finite-element function spaces but yield locally mass- and momentum-conservative discretization schemes that can be interpreted as finite volume schemes. We apply all strategies to the inf-sup stable MINI finite-element pair. Various test cases, including convergence tests and numerical evidence that the number of preconditioned Krylov solver iterations remains bounded, as well as more complex scenarios of flow around obstacles or through a three-dimensional vessel bifurcation, demonstrate the stability and robustness of the schemes.
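For reference, the Stokes problem reads
\[ -\Delta u + \nabla p = f, \qquad \nabla \cdot u = 0 \quad \text{in } \Omega, \]
and local mass conservation over a control volume $V$ means that the discrete velocity satisfies $\int_{\partial V} u_h \cdot n \, ds = 0$, a property the schemes above retain while still working with standard finite-element function spaces.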
Blind quantum computation (BQC) protocols enable quantum algorithms to be executed on third-party quantum agents while keeping the data and algorithm confidential. Previous proposals for measurement-based BQC require preparing a highly entangled cluster state. In this paper, we show that such a requirement is not necessary. Our protocol only requires pre-shared Bell pairs between delegated quantum agents, and no classical or quantum information exchange between agents is required during the execution. Our proposal requires fewer quantum resources than previous proposals by eliminating the need for a universal cluster state.
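For background, the primitive that pre-shared Bell pairs provide is quantum teleportation, the standard building block of delegated and measurement-based computation. The following self-contained numpy simulation sketches textbook teleportation; it is illustrative only and is not the paper's protocol, which notably avoids the mid-execution exchange of the correction bits.

    import numpy as np

    rng = np.random.default_rng(1)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    def apply_1q(state, U, q, n):
        # apply single-qubit gate U to qubit q (qubit 0 = most significant bit)
        s = np.moveaxis(state.reshape([2] * n), q, 0)
        s = np.tensordot(U, s, axes=(1, 0))
        return np.moveaxis(s, 0, q).reshape(-1)

    def apply_cnot(state, c, t, n):
        # flip the target bit on the control = 1 slice
        s = state.reshape([2] * n).copy()
        idx = [slice(None)] * n
        idx[c] = 1
        s[tuple(idx)] = np.flip(s[tuple(idx)], axis=t - 1 if t > c else t)
        return s.reshape(-1)

    def measure(state, q, n):
        # projective measurement of qubit q; returns outcome, collapsed state
        s = np.moveaxis(state.reshape([2] * n), q, 0)
        p1 = float((np.abs(s[1]) ** 2).sum())
        out = int(rng.random() < p1)
        return out, (s[out] / np.sqrt(p1 if out else 1 - p1)).reshape(-1)

    # qubit 0: Alice's data; qubits 1 (Alice) and 2 (Bob): pre-shared Bell pair
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    bell = np.zeros(4); bell[[0, 3]] = 1 / np.sqrt(2)
    state = np.kron(psi, bell)

    state = apply_cnot(state, 0, 1, n=3)   # Alice's Bell measurement ...
    state = apply_1q(state, H, 0, n=3)
    m0, state = measure(state, 0, n=3)
    m1, state = measure(state, 0, n=2)     # ... on her two qubits
    if m1: state = X @ state               # Pauli corrections on Bob's side
    if m0: state = Z @ state
    print(np.isclose(abs(np.vdot(psi, state)), 1.0))  # True: Bob holds |psi>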
The aim of this paper is to combine several Ivlev-like modal systems characterized by 4-valued non-deterministic matrices (Nmatrices) with IDM4, a 4-valued expansion of Belnap-Dunn's logic FDE with an implication introduced by Pynko in 1999. In order to do this, we introduce a new methodology for combining logics which are characterized by means of swap structures, based on what we call superposition of snapshots. In particular, the combination of IDM4 with Tm, Ivlev's 4-valued version of KT, is analyzed in more detail. From the semantical perspective, the idea is to combine the 4-valued swap structures (Nmatrices) for Tm (and several of its extensions) with the 4-valued twist structure (logical matrix) for IDM4. This superposition produces a universe of 6 snapshots, 3 of which are designated. The multioperators over the new universe are defined by combining the specifications of the given swap and twist structures. This gives rise to 6 different paradefinite Ivlev-like modal logics, each of them characterized by a 6-valued Nmatrix and conservatively extending the original modal logic and IDM4. This important feature allows us to consider the proposed construction as a genuine technique for combining logics. In addition, it is possible to define in the combined logics a classicality operator in the sense of the logics of evidence and truth (LETs). A sound and complete Hilbert-style axiomatization is also presented for the 6 combined systems, as well as a very simple Prolog program which implements the swap structure semantics for the 6 systems, yielding a decision procedure for satisfiability, refutability and validity of formulas in these logics.
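The paper's decision procedure is a Prolog program over the concrete 6-valued multioperators, which are not reproduced here. The generic shape of such a brute-force validity check for a finite Nmatrix can be sketched as follows (in Python for uniformity, with a trivial two-valued deterministic stand-in for the paper's tables): since the connectives return non-empty sets of values, the checker must branch over every choice, and a formula is valid iff all such legal valuations designate it.

    from itertools import product

    # A finite Nmatrix: a set of truth values, a designated subset and, for
    # each connective, a multioperator returning a non-empty SET of values.
    VALUES = (0, 1)
    DESIGNATED = {1}
    OPS = {
        'neg': lambda a: {1 - a},
        'and': lambda a, b: {min(a, b)},
        'or':  lambda a, b: {max(a, b)},
    }

    def value_set(f, env):
        # all values formula f can take under env, branching over the
        # non-deterministic choices made at each connective
        if isinstance(f, str):          # atom
            return {env[f]}
        op, *args = f                   # e.g. ('or', 'p', ('neg', 'p'))
        out = set()
        for choice in product(*(value_set(a, env) for a in args)):
            out |= OPS[op](*choice)
        return out

    def valid(f, atoms):
        # valid iff every legal valuation designates f
        return all(v in DESIGNATED
                   for assign in product(VALUES, repeat=len(atoms))
                   for v in value_set(f, dict(zip(atoms, assign))))

    print(valid(('or', 'p', ('neg', 'p')), ['p']))  # True in this stand-in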
We present a framework for approximate Bayesian inference when only a limited number of noisy log-likelihood evaluations can be obtained due to computational constraints, which is becoming increasingly common for applications of complex models. We model the log-likelihood function using a Gaussian process (GP), and the main methodological innovation is to apply this model to emulate the progression that an exact Metropolis-Hastings (MH) sampler would take if it were applicable. Informative log-likelihood evaluation locations are selected using a sequential experimental design strategy until the MH accept/reject decision can be made accurately enough according to the GP model. The resulting approximate sampler is conceptually simple and sample-efficient. It is also more robust to violations of GP modelling assumptions than earlier, related "Bayesian optimisation-like" methods tailored for Bayesian inference. We discuss some theoretical aspects and various interpretations of the resulting approximate MH sampler, and demonstrate its benefits in the context of Bayesian and generalised Bayesian likelihood-free inference for simulator-based statistical models.
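The core decision rule can be sketched as follows: under the GP, the log-acceptance ratio of a proposed MH move is Gaussian, so the probability that the accept/reject decision would flip is available in closed form and can drive the acquisition of new evaluations. The snippet below is a schematic Python rendering under assumed interfaces (gp.predict_diff, acquire and the symmetric proposal are placeholders, and the paper's actual design criterion is more refined):

    import numpy as np
    from scipy.stats import norm

    def mh_step_with_gp(theta, gp, log_prior, propose, acquire,
                        tol=0.1, rng=None):
        # One MH step where the log-likelihood difference is replaced by a GP
        # emulator; evaluations are acquired until the accept/reject decision
        # is sufficiently certain (a sketch of the idea, not the paper's
        # exact criterion).
        rng = rng or np.random.default_rng()
        prop = propose(theta, rng)
        log_u = np.log(rng.random())
        while True:
            # GP posterior for delta = loglik(prop) - loglik(theta)
            m, s = gp.predict_diff(prop, theta)      # assumed emulator API
            m += log_prior(prop) - log_prior(theta)  # symmetric proposal
            # probability that "accept iff delta > log_u" is decided wrongly
            p_err = norm.cdf(-abs(m - log_u) / max(s, 1e-12))
            if p_err < tol:
                break
            acquire(gp, prop, theta)  # evaluate noisy loglik, update the GP
        return prop if m > log_u else theta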
We describe an algorithm that allows one to find dense packing configurations of a number of congruent disks in arbitrary domains in two or more dimensions. We have applied it to a large class of two-dimensional domains such as rectangles, ellipses, crosses, multiply connected domains and even the cardioid. For many of the cases that we have studied, no previous results were available. The fundamental idea in our approach is the introduction of "image" disks, which allows one to work with a fixed container, thus lifting the limitations of the packing algorithms of \cite{Nurmela97,Amore21,Amore23}. We believe that the extension of our algorithm to three- (or higher-) dimensional containers (not considered here) can be done straightforwardly.
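To illustrate the role of the image disks, here is a deliberately crude Python toy: disks of radius $r < 1/2$ in the unit square are pushed apart by overlap penalties, and each wall constraint is imposed through the disk's mirror image, which overlaps the original exactly when the disk crosses the wall. This is a toy version of the idea, not the paper's algorithm:

    import numpy as np

    def pack(n, r, steps=20000, lr=1e-3, seed=0):
        # crude packing of n disks of radius r < 1/2 in the unit square
        rng = np.random.default_rng(seed)
        x = rng.uniform(r, 1 - r, size=(n, 2))
        for _ in range(steps):
            g = np.zeros_like(x)
            # disk-disk overlaps push pairs apart
            d = x[:, None, :] - x[None, :, :]
            dist = np.linalg.norm(d, axis=-1) + np.eye(n)
            overlap = np.maximum(2 * r - dist, 0)
            g += ((overlap / dist)[:, :, None] * d).sum(axis=1)
            # wall images: the image sits at twice the wall distance, so an
            # overlap with the image means the disk crosses the wall
            for axis in range(2):
                for wall in (0.0, 1.0):
                    gap = np.abs(x[:, axis] - wall)
                    pen = np.maximum(2 * r - 2 * gap, 0)
                    g[:, axis] += np.sign(x[:, axis] - wall) * pen
            x += lr * g
            x = np.clip(x, r, 1 - r)
        return x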
We combine the unbiased estimators of Rhee and Glynn (Operations Research 63(5), 1026-1043, 2015) with the Heston model with stochastic interest rates. Specifically, we first develop a semi-exact log-Euler scheme for the Heston model with stochastic interest rates. Then, under mild assumptions, we show that the convergence rate in the $L^2$ norm is $O(h)$, where $h$ is the step size. The result applies to a large class of models, such as the Heston-Hull-White model, the Heston-CIR model and the Heston-Black-Karasinski model. Numerical experiments support our theoretical convergence rate.
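For concreteness, a plain (not semi-exact) log-Euler discretization of a Heston-Hull-White-type model looks as follows; the parameter values and the constant mean-reversion level b are illustrative, and the rate noise is taken independent of the equity/variance noise for simplicity:

    import numpy as np

    def heston_hw_log_euler(S0, v0, r0, T, n_steps, n_paths,
                            kappa=2.0, theta=0.04, sigma=0.3, rho=-0.7,
                            a=0.5, b=0.03, eta=0.01, seed=0):
        # log-Euler for dS = r S dt + sqrt(v) S dW1,
        # dv = kappa (theta - v) dt + sigma sqrt(v) dW2 with corr rho,
        # dr = a (b - r) dt + eta dW3 (independent); generic sketch only
        rng = np.random.default_rng(seed)
        h = T / n_steps
        x = np.full(n_paths, np.log(S0))
        v = np.full(n_paths, v0)
        r = np.full(n_paths, r0)
        for _ in range(n_steps):
            z1, z2, z3 = rng.standard_normal((3, n_paths))
            w1 = z1
            w2 = rho * z1 + np.sqrt(1 - rho ** 2) * z2
            vp = np.maximum(v, 0.0)                    # full truncation
            x += (r - 0.5 * vp) * h + np.sqrt(vp * h) * w1
            v += kappa * (theta - vp) * h + sigma * np.sqrt(vp * h) * w2
            r += a * (b - r) * h + eta * np.sqrt(h) * z3
        return np.exp(x), v, r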
We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that at each iteration sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
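Schematically (a representative linear-quadratic instance, not the paper's most general setting), the systems in question stem from sample-based optimal control problems of the form
\[ \min_{u} \; \frac{1}{2N} \sum_{i=1}^{N} \| y_i - y_d \|_{L^2}^2 + \frac{\nu}{2} \| u \|_{L^2}^2 \quad \text{s.t.} \quad A_i y_i = f_i + B u, \quad i = 1, \dots, N, \]
whose first-order optimality conditions couple all state/adjoint pairs $(y_i, p_i)$ to the single control $u$ in one large saddle-point system; the collective smoother addresses precisely this coupling node by node.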
We introduce an extended discontinuous Galerkin discretization of hyperbolic-parabolic problems on multidimensional semi-infinite domains. Building on previous work on the one-dimensional case, we split the strip-shaped computational domain into a bounded region, discretized by means of discontinuous finite elements using Legendre basis functions, and an unbounded subdomain, where scaled Laguerre functions are used as a basis. Numerical fluxes at the interface allow for a seamless coupling of the two regions. The resulting coupling strategy is shown to produce accurate numerical solutions in tests on both linear and non-linear scalar and vectorial model problems. In addition, an efficient absorbing layer can be simulated in the semi-infinite part of the domain in order to damp outgoing signals with negligible spurious reflections at the interface. By tuning the scaling parameter of the Laguerre basis functions, the extended DG scheme simulates transient dynamics over large spatial scales with a substantial reduction in computational cost at a given accuracy level compared to standard single-domain discontinuous finite element techniques.
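The Laguerre basis in the unbounded subdomain consists of scaled Laguerre functions, which are orthonormal on $(0, \infty)$ and whose decay is tuned by the scaling parameter $\beta$. The following short scipy check illustrates the commonly adopted form $\mathcal{L}_n^{\beta}(x) = \sqrt{\beta}\, L_n(\beta x)\, e^{-\beta x / 2}$ (our notation, assumed to match the paper's up to normalization):

    import numpy as np
    from scipy.special import eval_laguerre
    from scipy.integrate import quad

    def scaled_laguerre(n, beta, x):
        # sqrt(beta) * L_n(beta x) * exp(-beta x / 2): orthonormal on (0, inf)
        return np.sqrt(beta) * eval_laguerre(n, beta * x) * np.exp(-beta * x / 2)

    prod = lambda x, n, m, b: scaled_laguerre(n, b, x) * scaled_laguerre(m, b, x)
    print(quad(prod, 0, np.inf, args=(2, 3, 0.5))[0])  # ~0 (orthogonality)
    print(quad(prod, 0, np.inf, args=(2, 2, 0.5))[0])  # ~1 (normalization)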