In general, providing an axiomatization for an arbitrary logic is a task that may require some ingenuity. In the case of logics defined by a finite logical matrix (three-valued logics being a particularly simple example), the generation of suitable finite axiomatizations can be fully automated, essentially by expressing the matrix tables via inference rules. In this chapter we illustrate how two formalisms, the 3-labelled calculi of Baaz, Ferm\"uller and Zach and the multiple-conclusion (or Set-Set) Hilbert-style calculi of Shoesmith and Smiley, may be uniformly employed to axiomatize logics defined by a three-valued logical matrix. The generating procedure common to both formalisms can be described as follows: first (i) convert the matrix semantics into rule form (we refer to this step as the generating subprocedure) and then (ii) simplify the set of rules thus obtained, relying essentially on the defining properties of any Tarskian consequence relation (we refer to this step as the streamlining subprocedure). We illustrate with examples that, if a minimal expressiveness assumption is met (namely, if the matrix defining the logic is monadic), then it is straightforward to define effective translations guaranteeing the equivalence of the 3-labelled and the Set-Set approaches.
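To make step (i) concrete, here is a minimal Python sketch of the generating subprocedure for a single hypothetical connective, a strong Kleene conjunction over the values {0, i, 1}; the connective, the value names, and the rule notation are illustrative choices, not the calculi studied in the chapter.

```python
# Toy illustration of the generating subprocedure: turning the truth
# table of a three-valued connective into labelled inference rules,
# one rule per row of the matrix table. The connective (strong Kleene
# conjunction) and the rule format are assumptions for this example.

from itertools import product

VALUES = ["0", "i", "1"]  # false, intermediate, true

def kleene_and(a: str, b: str) -> str:
    order = {"0": 0, "i": 1, "1": 2}
    return min(a, b, key=order.get)

def generate_rules(name, table):
    """One labelled rule per row: the premises fix the labels of the
    arguments, the conclusion labels the compound formula."""
    rules = []
    for a, b in product(VALUES, repeat=2):
        rules.append(f"x:{a}, y:{b}  |-  {name}(x,y):{table(a, b)}")
    return rules

for rule in generate_rules("and", kleene_and):
    print(rule)
```

The streamlining subprocedure (ii) would then discard or merge rules derivable from the others via the closure properties of a Tarskian consequence relation.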
In the context of interactive theorem provers based on a dependent type theory, automation tactics (dedicated decision procedures, calls to automated solvers, etc.) are often limited to goals that lie exactly within some expected logical fragment. This often prevents users from applying these tactics in other, even similar, contexts. This paper discusses the design and the implementation of pre-processing operations for automating formal proofs in the Coq proof assistant. It presents the implementation of a wide variety of predictable, atomic goal transformations, which can be composed in various ways to target different backends. A gallery of examples illustrates how this significantly expands the power of automation engines.
We extend generalized functional linear models under independence to a situation in which a functional covariate is related to a scalar response variable that exhibits spatial dependence. For estimation, we apply basis expansion and truncation for dimension reduction of the covariate process, followed by a composite likelihood estimating equation to handle the spatial dependency. We develop asymptotic results for the proposed model under a repeating lattice asymptotic context, allowing us to construct a confidence interval for the spatial dependence parameter and a confidence band for the parameter function. A binary conditionals model is presented as a concrete illustration and is used in simulation studies to verify the applicability of the asymptotic inferential results.
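As a rough illustration of the dimension-reduction step, the following sketch expands a densely observed functional covariate in a Fourier basis and truncates it to the leading K coefficients; the basis, the grid, and K are assumptions for the example, and the composite likelihood estimating equation would then operate on these truncated scores.

```python
# A minimal sketch, assuming a Fourier basis on [0, 1] and a uniform
# observation grid: project the covariate process onto the first K basis
# functions to obtain a low-dimensional representation.

import numpy as np

def fourier_basis(t: np.ndarray, K: int) -> np.ndarray:
    """First K Fourier basis functions on [0, 1], evaluated on the grid t."""
    cols = [np.ones_like(t)]
    k = 1
    while len(cols) < K:
        cols.append(np.sqrt(2) * np.sin(2 * np.pi * k * t))
        if len(cols) < K:
            cols.append(np.sqrt(2) * np.cos(2 * np.pi * k * t))
        k += 1
    return np.column_stack(cols)

def truncated_scores(x: np.ndarray, t: np.ndarray, K: int) -> np.ndarray:
    """Approximate <X, phi_k> by a Riemann sum on the uniform grid t."""
    B = fourier_basis(t, K)
    dt = t[1] - t[0]
    return B.T @ x * dt

t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)
print(truncated_scores(x, t, K=5))  # low-dimensional covariate representation
```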
Diffusion probabilistic models (DPMs) have rapidly evolved into one of the predominant generative models for the simulation of synthetic data, for instance, in computer vision, audio, natural language processing, or biomolecule generation. Here, we propose using DPMs for the generation of synthetic individual location trajectories (ILTs), which are sequences of variables representing physical locations visited by individuals. ILTs are of major importance in mobility research for understanding the mobility behavior of populations and, ultimately, for informing political decision-making. We represent ILTs as multi-dimensional categorical random variables and propose to model their joint distribution using a continuous DPM, first applying the diffusion process in a continuous unconstrained space and then mapping the continuous variables into a discrete space. We demonstrate that our model can synthesize realistic ILTs by comparing conditionally and unconditionally generated sequences to real-world ILTs from a GNSS tracking data set, which suggests the potential use of our model for synthetic data generation, for example, for benchmarking models used in mobility research.
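The continuous-relaxation idea can be sketched in a few lines: embed the categorical location tokens as one-hot vectors, corrupt them with the Gaussian forward process, and map continuous states back to the vocabulary by argmax. The vocabulary size, sequence length, and noise level below are placeholders, and the learned reverse (denoising) process is omitted.

```python
# A minimal sketch, assuming one-hot embeddings and a variance-preserving
# forward process; the reverse process learned by the DPM is not shown.

import numpy as np

rng = np.random.default_rng(0)
NUM_LOCATIONS = 50   # size of the discrete location vocabulary (assumed)
SEQ_LEN = 24         # one token per time slot in a trajectory (assumed)

def one_hot(tokens: np.ndarray, k: int) -> np.ndarray:
    return np.eye(k)[tokens]

def forward_diffuse(x0: np.ndarray, alpha_bar: float) -> np.ndarray:
    """q(x_t | x_0): variance-preserving Gaussian corruption."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

def to_discrete(x: np.ndarray) -> np.ndarray:
    """Map a continuous state back to tokens (nearest one-hot)."""
    return x.argmax(axis=-1)

trajectory = rng.integers(0, NUM_LOCATIONS, size=SEQ_LEN)  # a fake ILT
x0 = one_hot(trajectory, NUM_LOCATIONS)
xt = forward_diffuse(x0, alpha_bar=0.5)   # halfway through the schedule
print((to_discrete(xt) == trajectory).mean())  # fraction still decodable
```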
The main reason for the query model's prominence in complexity theory and quantum computing is the availability of concrete lower-bound techniques: the polynomial and the adversary method. There have been considerable efforts to prove lower bounds using these methods and to compare and relate them with other decision-tree-based measures. We explore the value of these lower bounds on quantum query complexity, and their relation with other decision-tree-based complexity measures, for the class of symmetric functions, arguably one of the most natural and basic classes of Boolean functions. We give an explicit construction for the dual of the positive adversary method and also for the square root of the private-coin certificate game complexity of any total symmetric function. This shows that the two values cannot be distinguished for any symmetric function. Additionally, we show that the recently introduced measure of spectral sensitivity gives the same value as both the positive adversary method and the approximate degree for every total symmetric Boolean function. Further, we look at the quantum query complexity of Gap Majority, a partial symmetric function, which has recently gained importance in understanding the composition of randomized query complexity. We characterize the quantum query complexity of Gap Majority and show a lower bound on noisy randomized query complexity (Ben-David and Blais, FOCS 2020) in terms of quantum query complexity. Finally, we study how large certificate complexity and block sensitivity can be relative to sensitivity for symmetric functions (even up to constant factors). We show tight separations, i.e., we give upper bounds on the possible separations and construct functions achieving them.
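For a flavor of one of the measures involved, the following sketch computes spectral sensitivity, the largest eigenvalue of the sensitivity graph (two inputs adjacent when they differ in one bit and the function values disagree), here for 5-bit majority; the choice of function is only an example.

```python
# A small sketch computing spectral sensitivity by brute force over all
# 2^n inputs; feasible only for tiny n, which suffices for illustration.

import numpy as np
from itertools import product

def spectral_sensitivity(f, n: int) -> float:
    inputs = list(product([0, 1], repeat=n))
    index = {x: i for i, x in enumerate(inputs)}
    A = np.zeros((len(inputs), len(inputs)))
    for x in inputs:
        for i in range(n):
            y = tuple(b ^ (j == i) for j, b in enumerate(x))  # flip bit i
            if f(x) != f(y):
                A[index[x], index[y]] = 1.0
    return float(np.linalg.eigvalsh(A)[-1])  # largest eigenvalue

majority = lambda x: int(sum(x) > len(x) // 2)
# Prints 3.0: the sensitivity graph of 5-bit majority is 3-regular on
# its sensitive inputs (weights 2 and 3).
print(spectral_sensitivity(majority, n=5))
```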
Error estimates for the cubic interpolated pseudo-particle (CIP) scheme for the one-dimensional advection equation with periodic boundary conditions are presented. The CIP scheme is a semi-Lagrangian method involving piecewise cubic Hermite interpolation. Although it is numerically known that the space-time accuracy of the scheme is third order, a rigorous proof has remained an open problem. In this paper, denoting the spatial and temporal mesh sizes by $ h $ and $ \Delta t $ respectively, we prove an error estimate $ O(\Delta t^3 + \frac{h^4}{\Delta t}) $ in the $ L^2 $ norm, which justifies the above-mentioned prediction if $ h = O(\Delta t) $. The proof is based on properties of the interpolation operator; the most important one is that it behaves as the $ L^2 $ projection for second-order derivatives. We remark that the same strategy works equally well to establish an error estimate for the semi-Lagrangian method with cubic spline interpolation.
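The balancing step behind the third-order claim can be made explicit: under the coupling $ h = c\,\Delta t $,
\[ \Delta t^{3} + \frac{h^{4}}{\Delta t} = (1 + c^{4})\,\Delta t^{3} = O(\Delta t^{3}), \]
so both error contributions match and the estimate recovers the numerically observed third-order space-time accuracy.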
Estimating parameters from data is a fundamental problem in physics, customarily done by minimizing a loss function between a model and observed statistics. In scattering-based analysis, researchers often employ their domain expertise to select a specific range of wavevectors for analysis, a choice that can vary from case to case. We introduce an alternative paradigm that defines a probabilistic generative model from the beginning of data processing and propagates the uncertainty through to parameter estimation, termed ab initio uncertainty quantification (AIUQ). As an illustrative example, we demonstrate this approach with differential dynamic microscopy (DDM), which extracts dynamical information through Fourier analysis at a selected range of wavevectors. We first show that DDM is equivalent to fitting a temporal variogram in reciprocal space using a latent factor model as the generative model. We then derive the maximum marginal likelihood estimator, which optimally weighs the information at all wavevectors and therefore eliminates the need to select a wavevector range. Furthermore, we substantially reduce the computational cost by utilizing the generalized Schur algorithm for Toeplitz covariances, without approximation. Simulation studies validate that AIUQ significantly improves estimation accuracy and enables model selection with automated analysis. The utility of AIUQ is also demonstrated in three distinct sets of experiments: first in an isotropic Newtonian fluid, pushing the limits of optically dense systems compared to multiple particle tracking; next in a system undergoing a sol-gel transition, automating the determination of the gelation point and critical exponent; and lastly in discerning the anisotropic diffusive behavior of colloids in a liquid crystal. These outcomes collectively underscore AIUQ's versatility in capturing system dynamics in an efficient and automated manner.
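The quantity that DDM analyzes is easy to sketch: the image structure function, the mean squared modulus of the difference of Fourier-transformed frames at a given lag, radially averaged over wavevectors of equal magnitude. The function names, array shapes, and binning below are illustrative; AIUQ's contribution is to replace the downstream "pick a q-range and fit" step with a marginal-likelihood fit over all wavevectors.

```python
# A minimal sketch of the DDM image structure function
# D(q, dt) = <|FFT(I(t+dt)) - FFT(I(t))|^2>, with radial averaging.
# Random frames stand in for real microscopy data.

import numpy as np

def structure_function(frames: np.ndarray, lag: int) -> np.ndarray:
    """frames: (T, H, W) image stack; returns D on the 2D FFT grid."""
    diffs = np.fft.fft2(frames[lag:]) - np.fft.fft2(frames[:-lag])
    return (np.abs(diffs) ** 2).mean(axis=0)

def radial_average(d2: np.ndarray, nbins: int = 30) -> np.ndarray:
    """Average the 2D structure function over rings of equal |q|."""
    h, w = d2.shape
    qy, qx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    q = np.hypot(qx, qy)
    bins = np.linspace(0, q.max(), nbins)
    idx = np.digitize(q.ravel(), bins)
    vals = d2.ravel()
    return np.array([vals[idx == i].mean() if np.any(idx == i) else 0.0
                     for i in range(1, nbins)])

rng = np.random.default_rng(0)
frames = rng.standard_normal((20, 64, 64))  # stand-in for an image stack
print(radial_average(structure_function(frames, lag=1))[:5])
```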
Boundary value problems involving elliptic PDEs such as the Laplace and the Helmholtz equations are ubiquitous in mathematical physics and engineering. Many such problems can be alternatively formulated as integral equations that are mathematically more tractable. However, an integral-equation formulation poses a significant computational challenge: solving the large dense linear systems that arise upon discretization. In cases where iterative methods converge rapidly, existing methods that draw on fast summation schemes such as the Fast Multipole Method are highly efficient and well-established. More recently, linear complexity direct solvers that sidestep convergence issues by directly computing an invertible factorization have been developed. However, their storage and computation costs are high, which limits their ability to solve large-scale problems in practice. In this work, we introduce a distributed-memory parallel algorithm based on an existing direct solver named ``strong recursive skeletonization factorization.'' Specifically, we apply low-rank compression to certain off-diagonal matrix blocks in a way that minimizes computation and data movement. Compared to iterative algorithms, our method is particularly suitable for problems involving ill-conditioned matrices or multiple right-hand sides. Large-scale numerical experiments are presented to show the performance of our Julia implementation.
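While the paper's implementation is in Julia, the low-rank structure that skeletonization-type solvers exploit is easy to exhibit in a few NumPy lines: the interaction block between two well-separated point clusters under a Laplace-type kernel admits an accurate truncated factorization. The kernel, cluster geometry, and tolerance below are illustrative, and the actual solver uses interpolative decompositions within a recursive, distributed factorization rather than a dense SVD.

```python
# A toy illustration, assuming a 1/r kernel and two well-separated
# clusters: the off-diagonal interaction block is numerically low rank.

import numpy as np

rng = np.random.default_rng(0)
sources = rng.uniform(0.0, 1.0, size=(500, 3))    # one cluster
targets = rng.uniform(10.0, 11.0, size=(500, 3))  # a well-separated cluster

# Off-diagonal block K[i, j] = 1 / |t_i - s_j| (Laplace kernel).
K = 1.0 / np.linalg.norm(targets[:, None, :] - sources[None, :, :], axis=-1)

U, s, Vt = np.linalg.svd(K, full_matrices=False)
rank = int(np.searchsorted(-s, -1e-10 * s[0]))  # relative tolerance 1e-10
K_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # truncated factorization

print(rank, np.linalg.norm(K - K_lr) / np.linalg.norm(K))
```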
The uniform interpolation property in a given logic can be understood as the definability of propositional quantifiers. We mechanise the computation of these quantifiers and prove its correctness in the Coq proof assistant for three modal logics, namely: (1) the modal logic K, for which a pen-and-paper proof exists; (2) G\"odel-L\"ob logic GL, for which our formalisation clarifies an important point in an existing, but incomplete, sequent-style proof; and (3) intuitionistic strong L\"ob logic iSL, for which ours is the first proof-theoretic construction of uniform interpolants. Our work also yields verified programs that allow one to compute the propositional quantifiers for any formula in these logics.
This paper studies the complexity of classical modal logics and of their extensions with fixed-point operators, using translations to transfer results across logics. In particular, we show several complexity results for multi-agent logics via translations to and from the $\mu$-calculus and modal logic, which allow us to transfer known upper and lower bounds. We also use these translations to introduce a terminating tableau system for the logics we study, based on Kozen's tableau for the $\mu$-calculus and that of Fitting and Massacci for modal logic. Finally, we show how to encode the tableaux we introduced into $\mu$-calculus formulas. This encoding provides upper bounds for satisfiability checking for the few logics for which we previously had no algorithms.
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.