In this work, we use the theory of quantum states over time to define an entropy $S(\rho,\mathcal{E})$ associated with quantum processes $(\rho,\mathcal{E})$, where $\rho$ is a state and $\mathcal{E}$ is a quantum channel responsible for the dynamical evolution of $\rho$. The entropy $S(\rho,\mathcal{E})$ generalizes the von Neumann entropy in the sense that $S(\rho,\mathrm{id})=S(\rho)$ (where $\mathrm{id}$ denotes the identity channel), and is a dynamical analogue of the quantum joint entropy for bipartite states. This entropy is then used to define dynamical formulations of the quantum conditional entropy and the quantum mutual information, and we show that these information measures satisfy many desirable properties, including a quantum entropic Bayes' rule. We also use our entropy function to quantify the information loss/gain associated with the dynamical evolution of quantum systems, which enables us to formulate a precise notion of information conservation for quantum processes.
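By analogy with the bipartite identities $S(B|A)=S(AB)-S(A)$ and $I(A:B)=S(A)+S(B)-S(AB)$, the dynamical quantities referred to above would take the following shape (a sketch of the expected form, with notation assumed here rather than taken from the paper):
\[
S(\mathcal{E}\,|\,\rho) \;=\; S(\rho,\mathcal{E}) - S(\rho), \qquad
I(\rho,\mathcal{E}) \;=\; S(\rho) + S(\mathcal{E}(\rho)) - S(\rho,\mathcal{E}),
\]
with the identity channel recovering the static case via $S(\rho,\mathrm{id})=S(\rho)$.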
In this paper, we develop a domain-decomposition method for the generalized Poisson-Boltzmann equation based on a solvent-excluded surface, which is widely used in computational chemistry. The solver requires solving a generalized screened Poisson (GSP) equation defined in $\mathbb{R}^3$ with a space-dependent dielectric permittivity and an ion-exclusion function that accounts for steric effects. Potential theory arguments transform the GSP equation into two coupled equations defined in a bounded domain. The Schwarz decomposition method is then used to formulate local problems by decomposing the cavity into overlapping balls and solving only a set of coupled sub-equations in each ball, in which spherical harmonics and Legendre polynomials are used as basis functions in the angular and radial directions, respectively. A series of numerical experiments is presented to test the method.
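For orientation, a linearized GSP equation consistent with the description above reads as follows (the precise coefficients and notation are our assumptions, not the paper's):
\[
-\nabla \cdot \bigl(\varepsilon(x)\,\nabla\psi(x)\bigr) \;+\; \lambda(x)\,\kappa^{2}\,\psi(x) \;=\; 4\pi\,\rho_{\mathrm{sol}}(x), \qquad x \in \mathbb{R}^{3},
\]
where $\varepsilon(x)$ is the space-dependent permittivity, $\lambda(x)$ the ion-exclusion function, $\kappa$ the screening constant, and $\rho_{\mathrm{sol}}$ the solute charge density.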
G\'acs' coarse-grained algorithmic entropy leverages universal computation to quantify the information content of any given physical state. Unlike the Boltzmann and Shannon-Gibbs entropies, it requires no prior commitment to macrovariables or probabilistic ensembles. Whereas earlier work made only loose connections between the entropies of thermodynamic and information-processing systems, the algorithmic entropy formally unifies the two. After adapting G\'acs' definition to Markov processes, we prove a very general second law of thermodynamics and discuss its advantages over previous formulations. Finally, taking inspiration from Maxwell's demon, we model an information engine powered by compressible data.
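As a rough guide to the last point: in Bennett- and Zurek-style analyses, an engine operating at temperature $T$ on an $n$-bit record $x$ can extract work bounded by the record's compressibility, and one expects a bound of the schematic form (our notation, not the paper's):
\[
W \;\lesssim\; k_{B} T \ln 2 \,\bigl(n - K(x)\bigr),
\]
where $K(x)$ denotes the algorithmic complexity of $x$.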
The spectral clustering algorithm is often used as a binary clustering method for unclassified data by applying principal component analysis. To study theoretical properties of the algorithm, existing studies often assume homoscedasticity. However, this assumption is restrictive and often unrealistic in practice. Therefore, in this paper, we consider the allometric extension model, in which the first eigenvectors of the two covariance matrices and the difference of the two mean vectors share a common direction, and we provide a non-asymptotic bound on the error probability of the spectral clustering algorithm under this model. As a byproduct, we obtain the consistency of the clustering method in high-dimensional settings.
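A minimal sketch of the PCA-based binary clustering rule in question (the centering and sign-thresholding details are our assumptions):
\begin{verbatim}
import numpy as np

def pca_binary_clustering(X):
    """Cluster rows of X into two groups by the sign of the first
    principal component score (a common spectral/PCA formulation
    of binary clustering)."""
    Xc = X - X.mean(axis=0)               # center the data
    cov = Xc.T @ Xc / (len(X) - 1)        # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    v1 = eigvecs[:, -1]                   # first principal direction
    scores = Xc @ v1
    return (scores > 0).astype(int)       # binary labels

# toy example: two Gaussian clusters separated along one direction
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)),
               rng.normal(3, 1, (100, 5))])
labels = pca_binary_clustering(X)
\end{verbatim}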
This work considers the low-rank approximation of a matrix $A(t)$ depending on a parameter $t$ in a compact set $D \subset \mathbb{R}^d$. Application areas that give rise to such problems include computational statistics and dynamical systems. Randomized algorithms are an increasingly popular approach for performing low-rank approximation and they usually proceed by multiplying the matrix with random dimension reduction matrices (DRMs). Applying such algorithms directly to $A(t)$ would involve different, independent DRMs for every $t$, which is not only expensive but also leads to inherently non-smooth approximations. In this work, we propose to use constant DRMs, that is, $A(t)$ is multiplied with the same DRM for every $t$. The resulting parameter-dependent extensions of two popular randomized algorithms, the randomized singular value decomposition and the generalized Nystr\"{o}m method, are computationally attractive, especially when $A(t)$ admits an affine linear decomposition with respect to $t$. We perform a probabilistic analysis for both algorithms, deriving bounds on the expected value as well as failure probabilities for the approximation error when using Gaussian random DRMs. Both the theoretical results and the numerical experiments show that the use of constant DRMs does not impair their effectiveness; our methods reliably return quasi-best low-rank approximations.
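A minimal sketch of the constant-DRM idea for the randomized SVD (the function names and the rank/oversampling choices are ours):
\begin{verbatim}
import numpy as np

def rsvd_constant_drm(A_of_t, ts, n, rank, oversample=5, seed=0):
    """Randomized SVD of A(t) for each t in ts, reusing a single
    Gaussian dimension reduction matrix Omega for all t."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, rank + oversample))  # constant DRM
    factors = []
    for t in ts:
        A = A_of_t(t)
        Q, _ = np.linalg.qr(A @ Omega)        # range sketch
        B = Q.T @ A                           # small projected matrix
        U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
        factors.append((Q @ U_b[:, :rank], s[:rank], Vt[:rank]))
    return factors

# usage: A(t) = A0 + t*A1 (affine dependence keeps sketches cheap)
n = 200
A0, A1 = np.random.randn(n, n), np.random.randn(n, n)
facs = rsvd_constant_drm(lambda t: A0 + t * A1,
                         np.linspace(0, 1, 5), n, rank=10)
\end{verbatim}
Because the same $\Omega$ is used for every $t$, the resulting approximation inherits the smoothness of $t \mapsto A(t)$, which independent sketches would destroy.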
Given a Boolean formula $\Phi(X, Y, Z)$, the Max\#SAT problem asks for a partial model over the set of variables $X$ maximizing the number of its projected models over the set of variables $Y$. We investigate a strict generalization of Max\#SAT that allows dependencies for variables in $X$, effectively turning it into a synthesis problem. We show that this new problem, called DQMax\#SAT, subsumes both the DQBF and DSSAT problems. We provide a general resolution method based on a reduction to Max\#SAT, together with two improvements for dealing with its inherent complexity. We further discuss a concrete application of DQMax\#SAT to the symbolic synthesis of adaptive attackers in the field of program security. Finally, we report preliminary results obtained on the resolution of benchmark problems using a prototype DQMax\#SAT solver implementation.
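For concreteness, a brute-force reference semantics for Max\#SAT (exponential and purely illustrative; the formula encoding is our assumption):
\begin{verbatim}
from itertools import product

def max_count_sat(phi, X, Y, Z):
    """Brute-force Max#SAT: find an assignment to X maximizing the
    number of Y-assignments extendable by some Z-assignment to a
    model of phi. phi maps a dict var->bool to a bool."""
    best = (None, -1)
    for xs in product([False, True], repeat=len(X)):
        ax = dict(zip(X, xs))
        count = 0
        for ys in product([False, True], repeat=len(Y)):
            ay = dict(zip(Y, ys))
            if any(phi({**ax, **ay, **dict(zip(Z, zs))})
                   for zs in product([False, True], repeat=len(Z))):
                count += 1          # projected model over Y
        if count > best[1]:
            best = (ax, count)
    return best

# toy formula: (x1 or y1) and (not z1 or y2)
phi = lambda a: (a['x1'] or a['y1']) and ((not a['z1']) or a['y2'])
print(max_count_sat(phi, ['x1'], ['y1', 'y2'], ['z1']))
\end{verbatim}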
We explore a link between complexity and physics for circuits of given functionality. Taking advantage of the connection between circuit counting problems and the derivation of ensembles in statistical mechanics, we tie the entropy of circuits of a given functionality and fixed number of gates to circuit complexity. We use thermodynamic relations to connect the quantity analogous to the equilibrium temperature to the exponent describing the exponential growth of the number of distinct functionalities as a function of complexity. This connection is intimately related to the finite compressibility of typical circuits. Finally, we use the thermodynamic approach to formulate a framework for the obfuscation of programs of arbitrary length -- an important problem in cryptography -- as thermalization through recursive mixing of neighboring sections of a circuit, which can be viewed as the mixing of two containers with ``gases of gates''. This recursive process equilibrates the average complexity and leads to the saturation of the circuit entropy, while preserving functionality of the overall circuit. The thermodynamic arguments hinge on ergodicity in the space of circuits, which we conjecture is limited to disconnected ergodic sectors due to fragmentation. The notion of fragmentation has important implications for the problem of circuit obfuscation, as it implies that there are circuits with the same size and functionality that cannot be connected via local moves. Furthermore, we argue that fragmentation is unavoidable unless the complexity classes NP and coNP coincide, a statement that implies the collapse of the polynomial hierarchy of complexity theory to its first level.
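As a toy illustration of counting distinct functionalities as a function of circuit size (a minimal model of our own devising, not the construction used in the paper), one can sample random NAND circuits on three inputs and tally the distinct truth tables they realize:
\begin{verbatim}
import random

def random_wiring(n_inputs, n_gates, rng):
    """Each NAND gate reads two (possibly equal) earlier wires."""
    return [(rng.randrange(n_inputs + g), rng.randrange(n_inputs + g))
            for g in range(n_gates)]

def truth_table(wiring, n_inputs):
    """Evaluate the last gate's output on all 2**n_inputs inputs."""
    rows = []
    for bits in range(2 ** n_inputs):
        wires = [(bits >> i) & 1 for i in range(n_inputs)]
        for a, b in wiring:
            wires.append(1 - (wires[a] & wires[b]))  # NAND
        rows.append(wires[-1])
    return tuple(rows)

rng = random.Random(0)
for g in (2, 4, 8, 16):
    seen = {truth_table(random_wiring(3, g, rng), 3)
            for _ in range(2000)}
    print(g, "gates:", len(seen), "distinct functionalities sampled")
\end{verbatim}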
We consider {\it local} balances of momentum and angular momentum for the incompressible Navier-Stokes equations. First, we formulate new weak forms of the physical balances (conservation laws) of these quantities, and prove that they are equivalent to the usual conservation law formulations. We then show that continuous Galerkin discretizations of the Navier-Stokes equations using the EMAC form of the nonlinearity preserve discrete analogues of the weak form conservation laws, both in the Eulerian formulation and the Lagrangian formulation (which are not equivalent after discretization). Numerical tests illustrate the new theory.
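For reference, the EMAC (energy, momentum and angular momentum conserving) form of the nonlinearity replaces the convective term with (our transcription of the standard definition):
\[
N_{\mathrm{EMAC}}(u) \;=\; 2D(u)\,u + (\nabla\cdot u)\,u, \qquad D(u) = \tfrac{1}{2}\bigl(\nabla u + (\nabla u)^{T}\bigr),
\]
which agrees with the usual convective form up to the gradient term $\nabla(|u|^{2}/2)$, absorbable into the pressure.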
We develop a hybrid scheme based on a finite difference scheme and a rescaling technique to approximate the solution of a nonlinear wave equation. In order to numerically reproduce the blow-up phenomenon, we propose a rule of scaling transformation, which is a variant of what was successfully used in the case of nonlinear parabolic equations. A careful study of the convergence of the proposed scheme is carried out, and several numerical examples are presented as illustration.
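As an example of the scaling structure such rescaling techniques exploit (the model equation below is illustrative; the paper's precise setting may differ), consider $\partial_t^2 u = \Delta u + u^{p}$: if $u$ is a solution, then so is
\[
u_{\lambda}(x,t) = \lambda^{2/(p-1)}\,u(\lambda x, \lambda t), \qquad \lambda > 0,
\]
so a numerical solution approaching blow-up can be repeatedly rescaled back to moderate amplitude while refining the effective space-time resolution.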
We study a variant of quantum hypothesis testing wherein an additional 'inconclusive' measurement outcome is added, allowing one to abstain from attempting to discriminate the hypotheses. The error probabilities are then conditioned on a successful attempt, with inconclusive trials disregarded. We completely characterise this task in both the single-shot and asymptotic regimes, providing exact formulas for the optimal error probabilities. In particular, we prove that the asymptotic error exponent of discriminating any two quantum states $\rho$ and $\sigma$ is given by the Hilbert projective metric $D_{\max}(\rho\|\sigma) + D_{\max}(\sigma \| \rho)$ in asymmetric hypothesis testing, and by the Thompson metric $\max \{ D_{\max}(\rho\|\sigma), D_{\max}(\sigma \| \rho) \}$ in symmetric hypothesis testing. This endows these two quantities with fundamental operational interpretations in quantum state discrimination. Our findings extend to composite hypothesis testing, where we show that the asymmetric error exponent with respect to any convex set of density matrices is given by a regularisation of the Hilbert projective metric. We also apply our results to quantum channels, showing that no advantage is gained by employing adaptive or even more general discrimination schemes over parallel ones, in both the asymmetric and symmetric settings. Our state discrimination results make use of no properties specific to quantum mechanics and are also valid in general probabilistic theories.
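A minimal numerical sketch of the two metrics for full-rank density matrices (the helper names are ours; it uses the standard identity $D_{\max}(\rho\|\sigma)=\log\lambda_{\max}(\sigma^{-1/2}\rho\,\sigma^{-1/2})$):
\begin{verbatim}
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def d_max(rho, sigma):
    """D_max(rho||sigma) = log of the largest eigenvalue of
    sigma^{-1/2} rho sigma^{-1/2} (sigma assumed full rank)."""
    s_inv_half = mpow(sigma, -0.5)
    w = np.linalg.eigvalsh(s_inv_half @ rho @ s_inv_half)
    return np.log(max(w))

def hilbert_metric(rho, sigma):    # asymmetric error exponent
    return d_max(rho, sigma) + d_max(sigma, rho)

def thompson_metric(rho, sigma):   # symmetric error exponent
    return max(d_max(rho, sigma), d_max(sigma, rho))

# example: two full-rank qubit states
rho = np.array([[0.7, 0.1], [0.1, 0.3]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])
print(hilbert_metric(rho, sigma), thompson_metric(rho, sigma))
\end{verbatim}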
We introduce and study a scale of operator classes on the annulus that is motivated by the $\mathcal{C}_{\rho}$ classes of $\rho$-contractions of Nagy and Foia\c{s}. In particular, our classes are defined in terms of the contractivity of the double-layer potential integral operator over the annulus. We prove that if, in addition, complete contractivity is assumed, then one obtains a complete characterization involving certain variants of the $\mathcal{C}_{\rho}$ classes. Recent work of Crouzeix-Greenbaum and Schwenninger-de Vries allows us to also obtain relevant K-spectral estimates, generalizing existing results from the literature on the annulus. Finally, we exhibit a special case where these estimates can be significantly strengthened.
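For context, recall the standard Nagy-Foia\c{s} definition, restated here: an operator $T$ on a Hilbert space $H$ belongs to $\mathcal{C}_{\rho}$ if it admits a unitary $\rho$-dilation, that is,
\[
T^{n} = \rho\, P_{H} U^{n}\big|_{H}, \qquad n = 1, 2, \ldots,
\]
for some unitary $U$ on a larger space containing $H$, where $P_{H}$ is the orthogonal projection onto $H$; in particular, $\mathcal{C}_{1}$ consists of the contractions and $\mathcal{C}_{2}$ of the operators with numerical radius at most one.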