We introduce two models of space-bounded quantum interactive proof systems, ${\sf QIPL}$ and ${\sf QIP_{\rm U}L}$. The ${\sf QIP_{\rm U}L}$ model, a space-bounded variant of quantum interactive proofs (${\sf QIP}$) introduced by Watrous (CC 2003) and Kitaev and Watrous (STOC 2000), restricts verifier actions to unitary circuits. In contrast, ${\sf QIPL}$ allows logarithmically many intermediate measurements per verifier action (with a high-concentration condition on yes instances), making it the weakest model that encompasses the classical model of Condon and Ladner (JCSS 1995). We characterize the computational power of ${\sf QIPL}$ and ${\sf QIP_{\rm U}L}$. When the message number $m$ is polynomially bounded, ${\sf QIP_{\rm U}L} \subsetneq {\sf QIPL}$ unless ${\sf P} = {\sf NP}$:
- ${\sf QIPL}$ exactly characterizes ${\sf NP}$.
- ${\sf QIP_{\rm U}L}$ is contained in ${\sf P}$ and contains ${\sf SAC}^1 \cup {\sf BQL}$, where ${\sf SAC}^1$ denotes problems solvable by classical logarithmic-depth, semi-unbounded fan-in circuits.
However, this distinction vanishes when $m$ is constant. Our results further indicate that intermediate measurements uniquely impact space-bounded quantum interactive proofs, unlike in space-bounded quantum computation, where ${\sf BQL}={\sf BQ_{\rm U}L}$. We also introduce space-bounded unitary quantum statistical zero-knowledge (${\sf QSZK_{\rm U}L}$), a specific form of ${\sf QIP_{\rm U}L}$ proof systems with statistical zero-knowledge against any verifier. This class is a space-bounded variant of quantum statistical zero-knowledge (${\sf QSZK}$) defined by Watrous (SICOMP 2009). We prove that ${\sf QSZK_{\rm U}L} = {\sf BQL}$, implying that the statistical zero-knowledge property negates the computational advantage typically gained from interaction.
We study the finite element approximation of problems involving the weighted $p$-Laplacian for $p \in (1,\infty)$ and weights belonging to the Muckenhoupt class $A_1$. In particular, we consider an equation and an obstacle problem for the weighted $p$-Laplacian and derive error estimates in both cases. The analysis is based on the language of weighted Orlicz and Orlicz--Sobolev spaces.
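For orientation, a minimal sketch of the setting in generic notation (ours, not necessarily the paper's): the model equation is the weighted $p$-Laplacian problem
\[
-\operatorname{div}\big(\omega(x)\,|\nabla u|^{p-2}\nabla u\big) = f \quad \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,
\]
where the weight $\omega \in A_1$ satisfies $M\omega(x) \le C\,\omega(x)$ for a.e. $x$, with $M$ the Hardy--Littlewood maximal operator; the obstacle problem is the corresponding variational inequality posed over a constraint set of the form $\{v : v \ge \psi\}$.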
The phenomenon of finite-time blow-up in hydrodynamic partial differential equations is central to analysis and mathematical physics. While numerical studies have guided theoretical breakthroughs, it is challenging to determine whether the observed computational results are genuine or mere numerical artifacts. Here we identify numerical signatures of blow-up. Our study is based on the complexified Euler equations in two dimensions, where instantaneous blow-up is expected. Via a geometrically consistent spatiotemporal discretization, we perform several numerical experiments and verify their computational stability. We then identify a signature of blow-up based on the growth rates of the supremum norm of the vorticity with increasing spatial resolution. The study aims to serve as a guide for cross-checking the validity of future numerical experiments on suspected blow-up in equations where the analysis is not yet resolved.
In reinsurance, Poisson and negative binomial distributions are commonly employed to model claim frequency. However, the incompleteness of the data on reported incurred claims above a priority level makes estimation challenging. This paper focuses on frequency estimation using Schnieper's framework for claim counts. We demonstrate that Schnieper's model is consistent with a Poisson distribution for the total number of claims above the priority at each development year, providing a robust basis for parameter estimation. Additionally, we explain how to build an alternative assumption based on a negative binomial distribution, which yields similar results. The study includes a bootstrap procedure to manage uncertainty in parameter estimation and a case study comparing the two assumptions and evaluating the impact of the bootstrap approach.
Density deconvolution deals with the estimation of the probability density function $f$ of a random signal from $n\geq1$ data observed with independent and known additive random noise. This is a classical problem in statistics, for which frequentist and Bayesian nonparametric approaches are available to estimate $f$ in the static or batch setting. In this paper, we consider the problem of density deconvolution in a streaming or online setting, and develop a principled sequential approach to estimate $f$. Relying on a quasi-Bayesian sequential (learning) model for the data, often referred to as Newton's algorithm, we obtain a sequential deconvolution estimate $f_{n}$ of $f$ that is easy to evaluate, computationally efficient, and has constant computational cost as data increase, which is desirable for streaming data. In particular, local and uniform Gaussian central limit theorems for $f_{n}$ are established, leading to asymptotic credible intervals and bands for $f$, respectively. We provide the sequential deconvolution estimate $f_{n}$ with large-sample asymptotic guarantees both under the quasi-Bayesian sequential model for the data, proving merging with respect to the direct density estimation problem, and under a ``true'' frequentist model for the data, proving consistency. An empirical validation of our methods is presented on synthetic and real data, including comparisons with a kernel approach and with a Bayesian nonparametric approach based on a Dirichlet process mixture prior.
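As a rough illustration of the quasi-Bayesian recursion referred to above (a sketch of the classical Newton's algorithm for mixture models, in generic notation; the estimator studied in the paper may differ in details), given observations $y_1, y_2, \dots$ and a kernel $k(y \mid x)$, the estimate is updated as
\[
f_n(x) = (1-\alpha_n)\, f_{n-1}(x) + \alpha_n\, \frac{k(y_n \mid x)\, f_{n-1}(x)}{\int k(y_n \mid u)\, f_{n-1}(u)\, \mathrm{d}u},
\]
with an initial guess $f_0$ and weights $\alpha_n \in (0,1)$ satisfying $\sum_n \alpha_n = \infty$ and $\sum_n \alpha_n^2 < \infty$ (e.g. $\alpha_n \propto n^{-1}$). In the deconvolution setting one takes $k(y \mid x) = g(y - x)$ with $g$ the known noise density; each update touches only the new observation, which is why the cost per step stays constant as data accrue.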
In this contribution we study the formal ability of a multiple-relaxation-times lattice Boltzmann scheme to approximate the isothermal and thermal compressible Navier--Stokes equations with a single particle distribution. More precisely, we consider a total of 12 classical square-lattice Boltzmann schemes with prescribed sets of conserved and nonconserved moments. The question is to determine the algebraic expressions of the equilibrium functions for the nonconserved moments and the relaxation parameters associated with each scheme. We compare the fluid equations with the result of the Taylor expansion method at second-order accuracy for two-dimensional examples with a maximum of 17 velocities and three-dimensional schemes with at most 33 velocities. In some cases, it is not possible to fit the physical model exactly. For several examples, we adjust the Navier--Stokes equations and propose nontrivial expressions for the equilibria.
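For readers less familiar with the framework, the generic structure involved is the standard moment-relaxation step (written here schematically, not as the specific schemes analyzed above): each nonconserved moment $m_k$ is relaxed toward a prescribed equilibrium at its own rate,
\[
m_k^{*} = m_k + s_k\big(m_k^{\mathrm{eq}} - m_k\big), \qquad 0 < s_k < 2,
\]
where the equilibria $m_k^{\mathrm{eq}}$ are functions of the conserved moments (density, momentum and, in the thermal case, energy). Determining these equilibria and the relaxation parameters $s_k$ so that the second-order Taylor expansion matches the target Navier--Stokes equations is precisely the question addressed above.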
In this paper, we study the differential properties of $x^d$ over $\mathbb{F}_{p^n}$ with $d=p^{2l}-p^{l}+1$. By studying the differential equation of $x^d$ and the number of rational points on certain curves over finite fields, we completely determine the differential spectrum of $x^{d}$. We then investigate the $c$-differential uniformity of $x^{d}$ and calculate the value distribution of a class of exponential sums related to $x^d$. In addition, we obtain a class of six-weight constacyclic codes whose weight distribution is explicitly determined. Part of our results complements the works in [\ref{H1}, \ref{H2}], which mainly focus on cross-correlations.
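For reference, the quantities being computed are standard; in one common convention for power maps $F(x)=x^d$ on $\mathbb{F}_{p^n}$ (generic notation, not taken from the paper), one sets
\[
\delta(1,b) = \#\{x \in \mathbb{F}_{p^n} : (x+1)^d - x^d = b\}, \qquad \omega_i = \#\{b \in \mathbb{F}_{p^n} : \delta(1,b) = i\},
\]
and the differential spectrum is the list $[\omega_0, \omega_1, \ldots, \omega_{\Delta}]$, where $\Delta = \max_i\{i : \omega_i > 0\}$ is the differential uniformity; restricting to $a=1$ suffices for power maps since $\delta(a,b)=\delta(1, b a^{-d})$. The $c$-differential uniformity is defined analogously from the equation $F(x+a) - c\,F(x) = b$.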
Let $\mathcal A$ be the affine algebra given by the ring $\mathbb{F}_q[X_1,X_2,\ldots,X_\ell]/ I$, where $I$ is the ideal $\langle t_1(X_1), t_2(X_2), \ldots, t_\ell(X_\ell) \rangle$ and each $t_i(X_i)$, $1\leq i\leq \ell$, is a square-free polynomial over $\mathbb{F}_q$. This paper studies the $k$-Galois hulls of $\lambda$-constacyclic codes over $\mathcal A$ in terms of their idempotent generators. To this end, we first define the $k$-Galois inner product over $\mathcal A$ and determine the form of the generators of the $k$-Galois dual and the $k$-Galois hull of a $\lambda$-constacyclic code over $\mathcal A$. We then derive a formula for the $k$-Galois hull dimension of a $\lambda$-constacyclic code. Further, we provide a condition for a $\lambda$-constacyclic code to be $k$-Galois LCD. Finally, we give some examples of the use of these codes in constructing entanglement-assisted quantum error-correcting codes.
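As a point of reference, over the field $\mathbb{F}_q$ with $q=p^e$ and $0 \le k < e$, the $k$-Galois inner product is usually defined (in standard notation; the paper extends this to the algebra $\mathcal A$) by
\[
\langle \mathbf{x}, \mathbf{y} \rangle_k = \sum_{i=1}^{n} x_i\, y_i^{\,p^k}, \qquad \mathbf{x}, \mathbf{y} \in \mathbb{F}_q^{\,n},
\]
which recovers the Euclidean inner product for $k=0$ and the Hermitian inner product for $k=e/2$ when $e$ is even; the $k$-Galois hull of a code $C$ is then $\operatorname{Hull}_k(C) = C \cap C^{\perp_k}$, with $C^{\perp_k}$ the $k$-Galois dual.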
We propose a generalized free energy potential for active systems, including both stochastic master equations and deterministic nonlinear chemical reaction networks. Our generalized free energy is defined variationally as the "most irreversible" state observable. This variational principle is motivated from several perspectives, including large deviations theory, thermodynamic uncertainty relations, Onsager theory, and information-theoretic optimal transport. In passive systems, the most irreversible observable is the usual free energy potential and its irreversibility is the entropy production rate (EPR). In active systems, the most irreversible observable is the generalized free energy and its irreversibility gives the excess EPR, the nonstationary contribution to dissipation. The remaining "housekeeping" EPR is a genuine nonequilibrium contribution that quantifies the nonconservative nature of the forces. We derive far-from-equilibrium thermodynamic speed limits for excess EPR, applicable to both linear and nonlinear systems. Our approach overcomes several limitations of the steady-state potential and the Hatano-Sasa (adiabatic/nonadiabatic) decomposition, as we demonstrate in several examples.
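Schematically, and in generic notation rather than the paper's, the splitting referred to above decomposes the total entropy production rate as
\[
\dot{\Sigma} = \dot{\Sigma}_{\mathrm{ex}} + \dot{\Sigma}_{\mathrm{hk}}, \qquad \dot{\Sigma}_{\mathrm{ex}} \ge 0, \quad \dot{\Sigma}_{\mathrm{hk}} \ge 0,
\]
where the excess (nonstationary) part vanishes once the system reaches its steady state, while the housekeeping part persists there and quantifies the nonconservative driving. The contribution summarized above is to define the excess part through the variationally chosen "most irreversible" observable rather than through the steady-state potential used in the Hatano--Sasa decomposition.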
We consider the discretization of a class of nonlinear parabolic equations by discontinuous Galerkin time-stepping methods and establish a priori as well as conditional a posteriori error estimates. Our approach is motivated by the error analysis in [9] for Runge-Kutta methods for nonlinear parabolic equations; in analogy to [9], the proofs are based on maximal regularity properties of discontinuous Galerkin methods for non-autonomous linear parabolic equations.
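For context, a minimal sketch of the time discretization in question, in generic notation (the concrete nonlinear setting of the paper differs in the choice of spaces and forms): for an evolution equation $u' + A(u) = f$, the dG($q$) approximation $U$ is piecewise polynomial of degree at most $q$ in time and satisfies, on each interval $I_n=(t_{n-1},t_n]$,
\[
\int_{I_n} \big( \langle U', v\rangle + \langle A(U), v\rangle \big)\,\mathrm{d}t + \big\langle U(t_{n-1}^{+}) - U(t_{n-1}^{-}),\, v(t_{n-1}^{+})\big\rangle = \int_{I_n} \langle f, v\rangle\,\mathrm{d}t
\]
for all test functions $v$ that are polynomials of degree at most $q$ on $I_n$, the jump term weakly enforcing continuity across time nodes.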
We consider the vorticity formulation of the Euler equations describing the flow of a two-dimensional incompressible ideal fluid on the sphere. Zeitlin's model provides a finite-dimensional approximation of the vorticity formulation that preserves the underlying geometric structure: it consists of an isospectral Lie--Poisson flow on the Lie algebra of skew-Hermitian matrices. We propose an approximation of Zeitlin's model based on a time-dependent low-rank factorization of the vorticity matrix and evolve a basis of eigenvectors according to the Euler equations. In particular, we show that the approximate flow remains isospectral and Lie--Poisson, and that the error in the solution, in the Hamiltonian, and in the Casimir functions depends only on the approximation of the vorticity matrix at the initial time. The computational complexity of solving the approximate model is shown to scale quadratically with the order of the vorticity matrix, and linearly if a further approximation of the stream function is introduced.
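For orientation, Zeitlin's model replaces the vorticity function by a skew-Hermitian matrix $W \in \mathfrak{u}(N)$ and the Poisson bracket by a matrix commutator; up to normalization conventions it reads
\[
\dot W = [P, W], \qquad \Delta_N P = W,
\]
where $\Delta_N$ is a discrete Laplacian on $\mathfrak{u}(N)$ and $P$ plays the role of the stream matrix. The isospectral, Lie--Poisson character referred to above is the statement that this flow preserves the spectrum of $W$ and hence the matrix analogues of the Casimirs $\int \omega^k$; the low-rank approximation proposed in the paper evolves a factorization of $W$ while retaining exactly this structure.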