Driven by exploring the power of quantum computation with a limited number of qubits, we present a novel complete characterization of space-bounded quantum computation, encompassing settings with one-sided error (unitary coRQL) and two-sided error (BQL), approached from a quantum state testing perspective:
- the first family of natural complete problems for unitary coRQL, namely space-bounded quantum state certification for trace distance and Hilbert-Schmidt distance;
- a new family of natural complete problems for BQL, namely space-bounded quantum state testing for trace distance, Hilbert-Schmidt distance, and quantum entropy difference.

In the space-bounded quantum state testing problem, we consider two quantum circuits (devices) $Q_0$ and $Q_1$ acting on logarithmically many qubits, which prepare quantum states $\rho_0$ and $\rho_1$, respectively, with access to their ``source code''. Our goal is to decide whether $\rho_0$ is $\epsilon_1$-close to or $\epsilon_2$-far from $\rho_1$ with respect to a specified distance-like measure. Interestingly, unlike their time-bounded counterparts, the space-bounded state testing problems all correspond to the same class. Moreover, our algorithms for trace distance inspire an algorithmic Holevo-Helstrom measurement, implying that QSZK is in QIP(2) with a quantum linear-space honest prover. Our results primarily build upon a space-efficient variant of the quantum singular value transformation (QSVT) introduced by Gily\'en, Su, Low, and Wiebe (STOC 2019), which is of independent interest. Our technique provides a unified approach for designing space-bounded quantum algorithms. Specifically, we show that implementing QSVT for any bounded polynomial that approximates a piecewise-smooth function incurs only a constant overhead in terms of the space required for special forms of the projected unitary encoding.
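For concreteness, the two distance-like measures can be stated directly. The following minimal NumPy sketch (ours, purely for illustration) computes the trace distance $\frac{1}{2}\|\rho_0-\rho_1\|_1$ and the Hilbert-Schmidt distance $\sqrt{\mathrm{Tr}[(\rho_0-\rho_1)^2]}$ for explicitly represented density matrices; the paper's algorithms instead estimate these quantities space-efficiently given only circuit access to $Q_0$ and $Q_1$, and nothing below reflects that construction.

```python
import numpy as np

def random_density_matrix(n, rng):
    """Sample an n x n density matrix (PSD, unit trace) from a Ginibre matrix."""
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def trace_distance(rho0, rho1):
    """T(rho0, rho1) = (1/2) ||rho0 - rho1||_1, via eigenvalues of the
    Hermitian difference."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho0 - rho1)))

def hilbert_schmidt_distance(rho0, rho1):
    """HS(rho0, rho1) = sqrt(Tr[(rho0 - rho1)^2])."""
    d = rho0 - rho1
    return np.sqrt(np.trace(d @ d).real)

rng = np.random.default_rng(0)
rho0 = random_density_matrix(8, rng)   # 8 = 2**3, i.e. a 3-qubit state
rho1 = random_density_matrix(8, rng)
print(trace_distance(rho0, rho1), hilbert_schmidt_distance(rho0, rho1))
```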
This paper develops and discusses a residual-based a posteriori error estimate and a space--time adaptive algorithm for solving parabolic surface partial differential equations on closed stationary surfaces. The full discretization uses the surface finite element method in space and the backward Euler method in time. The proposed error indicator bounds the error quantities globally in space from above and below, and globally in time from above and locally from below. A space--time adaptive algorithm is proposed using the derived error indicator. Numerical experiments illustrate and complement the theory.
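As a point of reference, an adaptive algorithm driven by such an error indicator typically follows the classical solve-estimate-mark-refine cycle. The sketch below is our own simplification, not the paper's algorithm: it runs the cycle for a stationary 1D Poisson model problem with a crude residual indicator $\eta_K = h_K\|f\|_{L^2(K)}$ and Dörfler marking, whereas the paper's method uses surface finite elements in space, backward Euler in time, and an indicator with additional terms.

```python
import numpy as np

def solve_poisson_p1(x, f):
    """P1 FEM for -u'' = f on (x[0], x[-1]), homogeneous Dirichlet BCs."""
    n, h = len(x), np.diff(x)
    A, b = np.zeros((n, n)), np.zeros(n)
    for k in range(n - 1):                        # element-by-element assembly
        A[k:k+2, k:k+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h[k]
        b[k:k+2] += 0.5 * h[k] * f(0.5 * (x[k] + x[k+1]))  # midpoint quadrature
    A[0, :] = A[-1, :] = 0.0
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

def estimate(x, f):
    """Crude residual indicator eta_K = h_K * ||f||_{L2(K)} (midpoint rule)."""
    h = np.diff(x)
    return h * np.sqrt(h) * np.abs(f(0.5 * (x[:-1] + x[1:])))

def adapt(f, theta=0.5, tol=1e-3):
    """SOLVE -> ESTIMATE -> MARK (Doerfler) -> REFINE (midpoint bisection)."""
    x = np.linspace(0.0, 1.0, 5)
    while True:
        u = solve_poisson_p1(x, f)
        eta = estimate(x, f)
        if np.sqrt(np.sum(eta ** 2)) < tol:
            return x, u
        order = np.argsort(eta ** 2)[::-1]        # largest indicators first
        cum = np.cumsum(eta[order] ** 2)          # mark a set carrying a
        marked = order[: np.searchsorted(cum, theta * cum[-1]) + 1]  # theta-fraction
        mids = 0.5 * (x[marked] + x[marked + 1])
        x = np.sort(np.concatenate([x, mids]))

x, u = adapt(lambda t: np.exp(-200.0 * (t - 0.5) ** 2))   # peaked source term
print(len(x), "mesh points after adaptation")
```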
We present a space-time continuous-Galerkin finite element method for solving the incompressible Navier-Stokes equations. To ensure stability of the discrete variational problem, we apply ideas from the variational multi-scale method. The finite element problem is posed on the ``full'' space-time domain, treating time as another dimension. We provide a rigorous analysis of the stability and convergence of the stabilized formulation. Finally, we apply the method to two benchmark problems in computational fluid dynamics, namely, lid-driven cavity flow and flow past a circular cylinder. We validate the current method against existing results from the literature and show that very large space-time blocks can be solved using our approach.
The paper considers standard iterative methods for solving the generalized Stokes problem arising from the time and space approximation of the time-dependent incompressible Navier-Stokes equations. Various preconditioning techniques are considered (Cahouet & Chabard and augmented Lagrangian), and we investigate whether these methods can compete with traditional pressure-correction and velocity-correction methods in terms of CPU time per degree of freedom and per time step. Numerical tests on fine unstructured meshes (68 million degrees of freedom) demonstrate convergence rates that are independent of the mesh size and improve with the Reynolds number. Three conclusions are drawn from the paper: (1) Although very good parallel scalability is observed for the augmented Lagrangian method, thorough tests on large problems reveal that the overall CPU time per degree of freedom and per time step is best for the standard Cahouet & Chabard preconditioner. (2) Solving the pressure Schur complement problem or solving the fully coupled system at once makes no significant difference in terms of CPU time per degree of freedom and per time step. (3) All the methods tested in the paper, whether matrix-free or not, are on average 30 times slower than traditional pressure-correction and velocity-correction methods. Hence, although all these methods are very efficient for solving steady-state problems, they are not yet competitive for solving time-dependent problems.
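For background, the Cahouet & Chabard preconditioner approximates the inverse of the pressure Schur complement of the generalized Stokes problem $\alpha u - \nu\Delta u + \nabla p = f$, $\nabla\cdot u = 0$ (with $\alpha = 1/\Delta t$) by $S^{-1} \approx \nu M_p^{-1} + \alpha L_p^{-1}$, where $M_p$ is a pressure mass matrix and $L_p$ a pressure Laplacian. The sketch below (ours, with toy 1D matrices) only illustrates the action of this operator; the paper's matrix-free, massively parallel implementation is not reproduced.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cahouet_chabard(Mp, Lp, nu, alpha):
    """LinearOperator applying S^{-1} ~ nu * Mp^{-1} + alpha * Lp^{-1},
    the Cahouet-Chabard approximation of the inverse pressure Schur
    complement for the generalized Stokes problem (alpha = 1/dt)."""
    solve_M = spla.factorized(sp.csc_matrix(Mp))
    solve_L = spla.factorized(sp.csc_matrix(Lp))
    n = Mp.shape[0]
    return spla.LinearOperator(
        (n, n), matvec=lambda r: nu * solve_M(r) + alpha * solve_L(r))

# Toy 1D pressure-space matrices: lumped mass, and a Laplacian pinned at
# one node to remove the constant null space of the pure-Neumann operator.
n, h = 64, 1.0 / 64
Mp = sp.identity(n) * h
Lp = sp.lil_matrix(sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h)
Lp[0, :] = 0.0
Lp[0, 0] = 1.0

P = cahouet_chabard(Mp, Lp, nu=1e-2, alpha=100.0)   # alpha = 1/dt, dt = 0.01
r = np.random.default_rng(1).standard_normal(n)
z = P.matvec(r)   # in practice: pass P as preconditioner M to CG/MINRES
print(z[:4])
```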
In shape-constrained nonparametric inference, it is often necessary to perform preliminary tests to verify whether a probability mass function (p.m.f.) satisfies qualitative constraints such as monotonicity, convexity or in general $k$-monotonicity. In this paper, we are interested in testing $k$-monotonicity of a compactly supported p.m.f. and we put our main focus on monotonicity and convexity; i.e., $k \in \{1,2\}$. We consider new testing procedures that are directly derived from the definition of $k$-monotonicity and rely exclusively on the empirical measure, as well as tests that are based on the projection of the empirical measure on the class of $k$-monotone p.m.f.s. The asymptotic behaviour of the introduced test statistics is derived and a simulation study is performed to assess the finite sample performance of all the proposed tests. Applications to real datasets are presented to illustrate the theory.
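To make the definition-based approach concrete, here is one natural violation statistic derived directly from the definition of $k$-monotonicity, computed from the empirical measure alone. This is a sketch under our own naming; the paper's actual test statistics and their calibration (e.g., via asymptotic critical values) may differ.

```python
import numpy as np

def empirical_pmf(sample, m):
    """Empirical p.m.f. of an integer sample on the support {0, ..., m-1}."""
    return np.bincount(sample, minlength=m) / len(sample)

def kmono_violation(p, k):
    """Largest violation of k-monotonicity of a p.m.f. p on {0, ..., m-1}.

    p is k-monotone when (-1)^k * (k-th finite difference of p) >= 0
    everywhere: k=1 is monotonicity (nonincreasing), k=2 is convexity.
    Returns 0 when the constraint holds for the given p.
    """
    d = ((-1) ** k) * np.diff(p, n=k)
    return max(0.0, -float(d.min()))

rng = np.random.default_rng(0)
m = 10
q = np.linspace(1.0, 0.1, m); q /= q.sum()   # a decreasing (1-monotone) p.m.f.
sample = rng.choice(m, size=2000, p=q)
p_hat = empirical_pmf(sample, m)
print(kmono_violation(p_hat, k=1), kmono_violation(p_hat, k=2))
```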
In this contribution we study the formal ability of a multiple-relaxation-times lattice Boltzmann scheme to approximate the isothermal and thermal compressible Navier-Stokes equations with a single particle distribution. More precisely, we consider a total of 12 classical square lattice Boltzmann schemes with prescribed sets of conserved and nonconserved moments. The question is to determine the algebraic expressions of the equilibrium functions for the nonconserved moments and the relaxation parameters associated with each scheme. We compare the fluid equations with the result of the Taylor expansion method at second-order accuracy for two-dimensional examples with a maximum of 17 velocities and three-dimensional schemes with at most 33 velocities. In some cases, it is not possible to fit the physical model exactly. For several examples, we adjust the Navier-Stokes equations and propose nontrivial expressions for the equilibria.
With the goal of obtaining strong relaxations for binary polynomial optimization problems, we introduce the pseudo-Boolean polytope defined as the convex hull of the set of binary points satisfying a collection of equations containing pseudo-Boolean functions. By representing the pseudo-Boolean polytope via a signed hypergraph, we obtain sufficient conditions under which this polytope has a polynomial-size extended formulation. Our new framework unifies and extends all prior results on the existence of polynomial-size extended formulations for the convex hull of the feasible region of binary polynomial optimization problems of degree at least three.
This paper studies the complexity of classical modal logics and of their extension with fixed-point operators, using translations to transfer results across logics. In particular, we show several complexity results for multi-agent logics via translations to and from the $\mu$-calculus and modal logic, which allow us to transfer known upper and lower bounds. We also use these translations to introduce terminating and non-terminating tableau systems for the logics we study, based on Kozen's tableau for the $\mu$-calculus and that of Fitting and Massacci for modal logic. Finally, we describe these tableaux with $\mu$-calculus formulas, thus reducing the satisfiability of each of these logics to the satisfiability of the $\mu$-calculus and resulting in a general 2EXP upper bound for satisfiability testing.
A key characteristic of the anomalous sub-diffusion equation is that its solution exhibits an algebraic decay rate over long time intervals, often referred to as Mittag-Leffler-type stability. For a class of power-nonlinear sub-diffusion models with variable coefficients, we prove that their solutions are Mittag-Leffler stable when the source functions satisfy natural decay assumptions. That is, the solutions have the decay rate $\|u(t)\|_{L^{s}(\Omega)}=O\left( t^{-(\alpha+\beta)/\gamma} \right)$ as $t\rightarrow\infty$, where $\alpha$, $\gamma$ are positive constants, $\beta\in(-\alpha,\infty)$ and $s\in (1,\infty)$. We then develop structure-preserving algorithms for this type of model. For the complete-monotonicity-preserving ($\mathcal{CM}$-preserving) schemes developed by Li and Wang (Commun. Math. Sci., 19(5):1301-1336, 2021), we prove that they satisfy the discrete comparison principle for time-fractional differential equations with variable coefficients. Then, by carefully constructing fine discrete supersolutions and subsolutions, we obtain the optimal long-time decay rate of the numerical solution, $\|u_{n}\|_{L^{s}(\Omega)}=O\left( t_n^{-(\alpha+\beta)/\gamma} \right)$ as $t_{n}\rightarrow\infty$, which fully agrees with that of the exact solution. Finally, we validate the analysis through numerical experiments.
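As a purely illustrative check of this kind of decay estimate, the sketch below integrates the scalar test problem $D_t^{\alpha}u = -u$ (so $\gamma = 1$, $\beta = 0$, with predicted decay $O(t^{-\alpha})$) using the classical L1 scheme and fits the tail exponent. We use the L1 scheme only for illustration and do not assert that it coincides with the $\mathcal{CM}$-preserving class analyzed in the paper.

```python
import numpy as np
from math import gamma

def l1_scheme_decay(alpha=0.6, tau=0.05, n_steps=4000, u0=1.0):
    """Implicit L1 scheme for the scalar test problem D_t^alpha u = -u,
    whose solution E_alpha(-t^alpha) decays like t^{-alpha} for large t.

    L1 weights: b_j = (j + 1)^(1 - alpha) - j^(1 - alpha).
    """
    c = tau ** alpha * gamma(2.0 - alpha)
    j = np.arange(n_steps + 1.0)
    b = (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)
    u = np.empty(n_steps + 1)
    u[0] = u0
    for n in range(1, n_steps + 1):
        # history term: sum_{j=1}^{n-1} b_j * (u_{n-j} - u_{n-j-1})
        hist = b[1:n] @ (u[n-1:0:-1] - u[n-2::-1]) if n > 1 else 0.0
        u[n] = (b[0] * u[n-1] - hist) / (b[0] + c)   # implicit linear step
    return u

alpha, tau = 0.6, 0.05
u = l1_scheme_decay(alpha, tau)
t = tau * np.arange(len(u))
slope = np.polyfit(np.log(t[2000:]), np.log(u[2000:]), 1)[0]
print(f"fitted tail exponent {slope:.3f}, predicted {-alpha}")
```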
We investigate the proof complexity of systems based on positive branching programs, i.e. non-deterministic branching programs (NBPs) where, for any 0-transition between two nodes, there is also a 1-transition. Positive NBPs compute monotone Boolean functions, just like negation-free circuits or formulas, but constitute a positive version of (non-uniform) NL, rather than P or NC1, respectively. The proof complexity of NBPs was investigated in previous work by Buss, Das and Knop, using extension variables to represent the dag-structure, over a language of (non-deterministic) decision trees, yielding the system eLNDT. Our system eLNDT+ is obtained by restricting their system to a positive syntax, similarly to how the 'monotone sequent calculus' MLK is obtained from the usual sequent calculus LK by restricting to negation-free formulas. Our main result is that eLNDT+ polynomially simulates eLNDT over positive sequents. Our proof method is inspired by a similar result for MLK by Atserias, Galesi and Pudl\'ak, that was recently improved to a bona fide polynomial simulation via works of Je\v{r}\'abek and Buss, Kabanets, Kolokolova and Kouck\'y. Along the way we formalise several properties of counting functions within eLNDT+ by polynomial-size proofs and, as a case study, give explicit polynomial-size proofs of the propositional pigeonhole principle.
The solution approximation for partial differential equations (PDEs) can be substantially improved using smooth basis functions. The recently introduced mollified basis functions are constructed through mollification, or convolution, of cell-wise defined piecewise polynomials with a smooth mollifier of certain characteristics. The properties of the mollified basis functions are governed by the order of the piecewise functions and the smoothness of the mollifier. In this work, we exploit the high-order and high-smoothness properties of the mollified basis functions for solving PDEs through the point collocation method. The basis functions are evaluated at a set of collocation points in the domain. In addition, boundary conditions are imposed at a set of boundary collocation points distributed over the domain boundaries. To ensure the stability of the resulting linear system of equations, the number of collocation points is set larger than the total number of basis functions. The resulting linear system is overdetermined and is solved using the least-squares technique. The presented numerical examples confirm the convergence of the proposed approximation scheme for Poisson, linear elasticity, and biharmonic problems. We study in particular the influence of the mollifier and the spatial distribution of the collocation points.
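The overdetermined collocation structure can be sketched compactly. The example below is ours and uses Gaussian radial functions as a stand-in for the mollified basis, since only the high smoothness of the basis matters for the collocation mechanics it illustrates: more collocation points than basis functions, PDE residual rows plus boundary rows, and a least-squares solve.

```python
import numpy as np

def gaussian_basis(x, centers, s):
    """Values and second derivatives of phi_i(x) = exp(-(x - c_i)^2 / s^2)."""
    r = x[:, None] - centers[None, :]
    phi = np.exp(-(r / s) ** 2)
    d2phi = (4.0 * r ** 2 / s ** 4 - 2.0 / s ** 2) * phi
    return phi, d2phi

# Model problem: -u'' = pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x).
n_basis, n_col = 20, 60                      # more collocation points than basis
centers = np.linspace(0.0, 1.0, n_basis)
s = 2.0 / n_basis                            # width of the smooth basis
x_int = np.linspace(0.0, 1.0, n_col)[1:-1]   # interior collocation points
x_bnd = np.array([0.0, 1.0])                 # boundary collocation points

phi_i, d2_i = gaussian_basis(x_int, centers, s)
phi_b, _ = gaussian_basis(x_bnd, centers, s)
A = np.vstack([-d2_i, phi_b])                # PDE residual rows + boundary rows
rhs = np.concatenate([np.pi ** 2 * np.sin(np.pi * x_int), np.zeros(2)])

coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # overdetermined least squares
x_test = np.linspace(0.0, 1.0, 200)
u_h = gaussian_basis(x_test, centers, s)[0] @ coef
print("max error:", np.abs(u_h - np.sin(np.pi * x_test)).max())
```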