We define game semantics for the constructive $\mu$-calculus and prove its equivalence to bi-relational semantics. As an application, we use the game semantics to prove that the $\mu$-calculus collapses to modal logic over the modal logic $\mathsf{IS5}$. We then show the completeness of $\mathsf{IS5}$ extended with fixed-point operators.
We introduce a simple and stable computational method for ill-posed partial differential equation (PDE) problems. The method is based on Schr\"odingerization, introduced in [S. Jin, N. Liu and Y. Yu, arXiv:2212.13969; S. Jin, N. Liu and Y. Yu, Phys. Rev. A, 108 (2023), 032603], which maps all linear PDEs into Schr\"odinger-type equations in one higher dimension, for quantum simulation of these PDEs. Although the original problem is ill-posed, the Schr\"odingerized equations are Hamiltonian systems and time-reversible, allowing stable computation both forward and backward in time. The original variable can be recovered from data on a suitably chosen domain in the extended dimension. We use the backward heat equation and the linear convection equation with imaginary wave speed as examples. Error analysis of these algorithms is conducted and verified numerically. The methods are applicable to both classical and quantum computers, and we also lay out quantum algorithms for them. Moreover, we introduce a smooth initialization for the Schr\"odingerized equation which leads to essentially spectral accuracy for the approximation in the extended space when a spectral method is used. Consequently, the number of extra qubits needed for the extra dimension in a qubit-based quantum algorithm, for both well-posed and ill-posed problems, becomes almost $\log\log (1/\varepsilon)$, where $\varepsilon$ is the desired precision. This optimizes the complexity of the Schr\"odingerization-based quantum algorithms for any non-unitary dynamical system introduced in [S. Jin, N. Liu and Y. Yu, arXiv:2212.13969; S. Jin, N. Liu and Y. Yu, Phys. Rev. A, 108 (2023), 032603].
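For orientation, the warped phase transformation at the heart of Schr\"odingerization can be sketched as follows; the notation is ours, and the cited references give the precise construction. Writing the linear problem as $\frac{du}{dt} = Au$ and splitting $A$ into Hermitian parts,
\[
A = H_1 + i H_2, \qquad H_1 = \frac{A + A^\dagger}{2}, \qquad H_2 = \frac{A - A^\dagger}{2i},
\]
one introduces $w(t,p) = e^{-p}\,u(t)$ for $p > 0$ (extended to $p < 0$, e.g. by the smooth initialization mentioned above), so that
\[
\partial_t w = -H_1\,\partial_p w + i H_2\, w
\;\xrightarrow{\ \text{Fourier in } p\ }\;
\partial_t \hat w = i\left(-\xi H_1 + H_2\right)\hat w,
\]
which is a Hamiltonian system since $H_1$ and $H_2$ are Hermitian; $u$ is then recovered from $w$ on a suitable $p$-domain.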
We consider a class of stochastic gradient optimization schemes. Assuming that the objective function is strongly convex, we prove weak error estimates, uniform in time, for the error between the solution of the numerical scheme and the solutions of continuous-time modified (or high-resolution) differential equations, at first and second order with respect to the time-step size. At first order the modified equation is deterministic, whereas at second order it is stochastic and depends on a modified objective function. We go beyond existing results, in which the error estimates were considered only on finite time intervals and were not uniform in time. This then allows us to provide a rigorous complexity analysis of the method in the regime of large times and small time-step sizes.
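Schematically, and following the stochastic-modified-equations literature (the paper's precise correction terms may differ), for a scheme of the form $\theta_{k+1} = \theta_k - h\,\nabla f(\theta_k) + h\,\xi_{k+1}$ the two levels of approximation read
\[
\text{first order:}\quad \frac{d\bar\theta}{dt} = -\nabla f(\bar\theta),
\qquad
\text{second order:}\quad d\tilde\theta_t = -\nabla f_h(\tilde\theta_t)\,dt + \sqrt{h}\,\sigma(\tilde\theta_t)\,dW_t,
\]
with a modified objective such as $f_h = f + \tfrac{h}{4}\lVert\nabla f\rVert^2$, and uniform-in-time weak error bounds of the type $\sup_{k\ge 0}\bigl|\mathbb{E}[\varphi(\theta_k)] - \mathbb{E}[\varphi(\bar\theta(kh))]\bigr| \le C h$ (respectively $C h^2$ for the second-order stochastic modified equation).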
In this paper, we examine a finite element approximation of the steady $p(\cdot)$-Navier-Stokes equations, where the power-law index $p(\cdot)$ depends on the spatial variable, and prove orders of convergence under natural fractional regularity assumptions on the velocity vector field and the kinematic pressure. Compared to previous results, we treat the convective term and employ a more practicable discretization of the power-law index $p(\cdot)$. Numerical experiments confirm the quasi-optimality of the \textit{a priori} error estimates (for the velocity) with respect to the fractional regularity assumptions on the velocity vector field and the kinematic pressure.
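For concreteness, a common model form of the steady $p(\cdot)$-Navier-Stokes system (the paper's precise structural assumptions on the extra-stress tensor may differ) is
\[
-\operatorname{div} \mathbf{S}(\mathbf{D}\mathbf{v}) + \operatorname{div}(\mathbf{v}\otimes\mathbf{v}) + \nabla q = \mathbf{f}, \qquad \operatorname{div}\mathbf{v} = 0,
\qquad
\mathbf{S}(\mathbf{D}\mathbf{v}) = (\delta + \lvert \mathbf{D}\mathbf{v}\rvert)^{p(\cdot)-2}\,\mathbf{D}\mathbf{v},
\]
where $\mathbf{v}$ is the velocity, $q$ the kinematic pressure, $\mathbf{D}\mathbf{v}$ the symmetric velocity gradient, and $\delta \ge 0$.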
QAC$^0$ is the class of constant-depth quantum circuits with polynomially many ancillary qubits, in which Toffoli gates on arbitrarily many qubits are allowed. In this work, we show that the parity function cannot be computed in QAC$^0$, resolving an open problem in quantum circuit complexity that has stood for more than twenty years. As a consequence, this proves ${\rm QAC}^0 \subsetneqq {\rm QAC}_{\rm wf}^0$. We also show that any QAC circuit of depth $d$ that approximately computes parity on $n$ bits requires $2^{\widetilde{\Omega}(n^{1/d})}$ ancillary qubits, which is close to tight. This implies a similar lower bound on approximately preparing cat states using QAC circuits. Finally, we prove a quantum analog of the Linial-Mansour-Nisan theorem for QAC$^0$. This implies that, for any depth-$d$ QAC$^0$ circuit $U$ with $a={\rm poly}(n)$ ancillary qubits and for any $x\in\{0,1\}^n$, the correlation between $Q(x)$ and the parity function is bounded by $1/2 + 2^{-\widetilde{\Omega}(n^{1/d})}$, where $Q(x)$ denotes the output of measuring the output qubit of $U|x,0^a\rangle$. All of the above consequences rely on the following technical result: if $U$ is a QAC$^0$ circuit with $a={\rm poly}(n)$ ancillary qubits, then there is a distribution $\mathcal{D}$ over bounded polynomials of degree ${\rm polylog}(n)$ such that, with high probability, a random polynomial from $\mathcal{D}$ approximates the function $\langle x,0^a| U^\dag Z_{n+1} U |x,0^a\rangle$ on a large fraction of $x\in \{0,1\}^n$. This result is analogous to the Razborov-Smolensky result on the approximation of AC$^0$ circuits by random low-degree polynomials.
In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
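To fix ideas, the classical notion (relative to the pointwise order) calls a functional $\varphi$ maxitive if
\[
\varphi(f \vee g) = \varphi(f) \vee \varphi(g) \qquad \text{for all } f, g,
\]
where $\vee$ denotes the pointwise maximum of functions on the left and the maximum of reals on the right; the article replaces the pointwise order by a general preorder.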
Our goal is to compute excited states for the nonlinear Schr\"odinger equation in the radial setting. We introduce a new technique based on the Nehari manifold approach and compare it with the classical shooting method. We observe that the Nehari method allows one to accurately compute excited states on large domains, but is relatively slow compared to the shooting method.
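To illustrate the baseline being compared against, here is a minimal sketch of the classical shooting method for the radial profile equation $u'' + \frac{d-1}{r}u' - u + u^3 = 0$; the cubic nonlinearity, dimension $d$, domain size $R$, and bracketing amplitudes are illustrative choices, not taken from the paper.

```python
# Minimal shooting-method sketch for radial bound states of the focusing
# cubic NLS profile equation  u'' + (d-1)/r u' - u + u^3 = 0,
# u'(0) = 0, u(r) -> 0 as r -> infinity.  An illustrative toy, not the
# paper's Nehari-manifold algorithm; d, R and the brackets are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

d, R = 3, 20.0                       # dimension, truncated radial domain

def rhs(r, y):
    u, v = y                         # y = (u, u')
    return [v, -(d - 1) / r * v + u - u**3]

def endpoint_sign(a, r0=1e-6):
    """Integrate from r0 with u = a, u' = 0 and return sign of u(R):
    sub-critical amplitudes diverge to +inf, super-critical ones cross
    zero and diverge to -inf, so the sign flips at a bound state."""
    sol = solve_ivp(rhs, (r0, R), [a, 0.0], rtol=1e-10, atol=1e-12)
    return np.sign(sol.y[0, -1])

def shoot(lo, hi, tol=1e-12):
    """Bisect on the initial amplitude u(0)."""
    s_lo = endpoint_sign(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if endpoint_sign(mid) == s_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Bracket found by scanning; for d = 3 this captures the ground state.
print("ground state amplitude u(0) =", shoot(4.0, 5.0))
```

Excited states with $k$ interior zeros are obtained the same way, by bisecting between amplitudes whose trajectories change sign $k$ versus $k+1$ times before diverging.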
Interpreting data with mathematical models is an important aspect of real-world applied mathematical modeling. Very often we are interested in understanding the extent to which a particular data set informs and constrains model parameters. This question is closely related to the concept of parameter identifiability, and in this article we present a series of computational exercises that introduce tools for assessing parameter identifiability, estimating parameters, and generating model predictions. Taking a likelihood-based approach, we show that very similar ideas and algorithms can be used to deal with a range of different mathematical modeling frameworks. The exercises and results presented in this article are supported by a suite of open access codes that can be accessed on GitHub.
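As a taste of the likelihood-based workflow, the sketch below profiles the log-likelihood of one parameter of an assumed logistic growth model fitted to synthetic data; it is a minimal illustration, not part of the article's GitHub suite.

```python
# Profile-likelihood sketch: logistic growth C(t) = K / (1 + (K/c0 - 1) e^{-r t})
# with additive Gaussian noise of known standard deviation.  Model, data
# and grids are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 15)

def logistic(t, r, K, c0=5.0):
    return K / (1 + (K / c0 - 1) * np.exp(-r * t))

data = logistic(t, 0.8, 100.0) + rng.normal(0, 5.0, t.size)

def nll(params):                     # negative log-likelihood, sigma = 5 known
    r, K = params
    return 0.5 * np.sum((data - logistic(t, r, K))**2) / 5.0**2

mle = minimize(nll, x0=[1.0, 90.0], method="Nelder-Mead")

# Profile over r: re-optimize K for each fixed r.  A sharply peaked
# profile indicates r is practically identifiable from this data set.
r_grid = np.linspace(0.4, 1.4, 41)
profile = [-minimize(lambda K: nll((r, K[0])), x0=[mle.x[1]],
                     method="Nelder-Mead").fun + mle.fun for r in r_grid]
```

The resulting curve is the normalized profile log-likelihood for $r$; comparing it against the asymptotic 95% threshold of roughly $-1.92$ (chi-squared, one degree of freedom) indicates whether the data constrain $r$ to a narrow interval.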
The problem of identifying the satisfiability threshold of random $3$-SAT formulas has received a lot of attention in recent decades and has inspired the study of other threshold phenomena in random combinatorial structures. The classical assumption in this line of research is that, for a given set of $n$ Boolean variables, each clause is drawn uniformly at random among all sets of three literals from these variables, independently of the other clauses. Here, we keep the uniform distribution of each clause, but deviate significantly from the independence assumption and consider richer families of probability distributions. For integer parameters $n$, $m$, and $k$, we denote by $\mathcal{F}_k(n,m)$ the family of probability distributions that produce formulas with $m$ clauses, each selected uniformly at random from all sets of three literals from the $n$ variables, so that the clauses are $k$-wise independent; see the definition below. Our aim is to make general statements about the satisfiability or unsatisfiability of formulas produced by distributions in $\mathcal{F}_k(n,m)$ for different values of the parameters $n$, $m$, and $k$.
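For concreteness (our phrasing), clauses $C_1, \ldots, C_m$ are $k$-wise independent if every $k$ of them are mutually independent:
\[
\Pr\left[C_{i_1} = c_1, \ldots, C_{i_k} = c_k\right] \;=\; \prod_{j=1}^{k} \Pr\left[C_{i_j} = c_j\right]
\]
for all distinct indices $i_1 < \cdots < i_k$ and all clauses $c_1, \ldots, c_k$; full independence corresponds to $k = m$.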
We propose a novel, highly efficient, second-order accurate, long-time unconditionally stable numerical scheme for a class of finite-dimensional nonlinear models that are of importance in geophysical fluid dynamics. The scheme is highly efficient in the sense that only a (fixed) symmetric positive definite linear problem (with varying right-hand sides) needs to be solved at each time step. The solutions of the scheme are uniformly bounded for all time. We show that the scheme captures the long-time dynamics of the underlying geophysical model, with the global attractors and the invariant measures of the scheme converging to those of the original model as the step size approaches zero. In our numerical experiments, we take an indirect approach, using long-term statistics to approximate the invariant measures. Our results suggest that the convergence rate of the long-term statistics, as a function of terminal time, is approximately first order in the Jensen-Shannon metric and half order in the $L^1$ metric. This implies that very long simulations are needed to capture even a few significant digits of the long-time statistics (climate). Nevertheless, the second-order scheme remains superior to the first-order one, requiring significantly less time to reach a small neighborhood of statistical equilibrium for a given step size.
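To illustrate how such long-term statistics can be compared, the following schematic forms empirical histograms of a scalar observable from two long trajectories and evaluates the two metrics; the Gaussian samples stand in for actual trajectory data, and the bin choices are arbitrary.

```python
# Compare empirical long-term statistics of two trajectories using the
# Jensen-Shannon and discrete L1 metrics.  Placeholder Gaussian data.
import numpy as np
from scipy.spatial.distance import jensenshannon

def empirical_density(samples, bins):
    p, _ = np.histogram(samples, bins=bins)
    return p / p.sum()

x_ref = np.random.default_rng(0).normal(size=200_000)          # "reference"
x_num = np.random.default_rng(1).normal(0.01, 1.02, 200_000)   # "scheme"
bins = np.linspace(-5, 5, 101)

p, q = empirical_density(x_ref, bins), empirical_density(x_num, bins)
js = jensenshannon(p, q, base=2)     # JS metric (square root of JS divergence)
l1 = np.abs(p - q).sum()             # discrete L1 distance
print(f"JS metric: {js:.3e}, L1 distance: {l1:.3e}")
```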
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that the validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of an explainee's inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
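In schematic form (notation ours), Shepard's law posits that generalization decays exponentially with distance in a psychological similarity space,
\[
s(x, y) = e^{-\lambda\, d(x, y)},
\]
where $d(x,y)$ is the distance between the AI's explanation and the explanation the participant would themselves give, and $\lambda$ is a sensitivity parameter; the theory then uses this similarity to predict the conclusions the explainee draws.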