We propose a quantum soft-covering problem for a given general quantum channel and one of its output states, which consists in finding the minimum rank of an input state needed to approximate the given channel output. We then prove a one-shot quantum covering lemma in terms of smooth min-entropies by leveraging decoupling techniques from quantum Shannon theory. This covering result is shown to be equivalent to a coding theorem for rate distortion under a posterior (reverse) channel distortion criterion [Atif, Sohail, Pradhan, arXiv:2302.00625]. Both one-shot results directly yield corollaries about the i.i.d. asymptotics, in terms of the coherent information of the channel. The power of our quantum covering lemma is demonstrated by two additional applications: first, we formulate a quantum channel resolvability problem and provide one-shot as well as asymptotic upper and lower bounds; second, we provide new upper bounds on the unrestricted and simultaneous identification capacities of quantum channels, in particular separating for the first time the simultaneous identification capacity from the unrestricted one, proving a long-standing conjecture of the last author.
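As a rough sketch of the problem statement (the notation and the choice of trace-distance approximation below are our assumptions; the paper works with smoothed one-shot quantities), the soft-covering quantity for a channel $\mathcal{N}_{A\to B}$ and a target output $\rho_B$ can be pictured as
$$ r_\varepsilon(\mathcal{N},\rho_B) \;=\; \min\Big\{ \operatorname{rank}(\sigma_A) \;:\; \tfrac{1}{2}\big\|\mathcal{N}(\sigma_A)-\rho_B\big\|_1 \le \varepsilon \Big\}, $$
so that the one-shot covering lemma controls $\log r_\varepsilon$ in terms of smooth min-entropies.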
A general a posteriori error analysis applies to five lowest-order finite element methods for two fourth-order semi-linear problems with trilinear non-linearity and a general source. A quasi-optimal smoother extends the source term to the discrete trial space, and more importantly, modifies the trilinear term in the stream-function vorticity formulation of the incompressible 2D Navier-Stokes and the von K\'{a}rm\'{a}n equations. This enables the first efficient and reliable a posteriori error estimates for the 2D Navier-Stokes equations in the stream-function vorticity formulation for Morley, two discontinuous Galerkin, $C^0$ interior penalty, and WOPSIP discretizations with piecewise quadratic polynomials.
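In schematic form (the norm, constants, and oscillation term below are generic placeholders, not the paper's precise statement), reliability and efficiency of an estimator $\eta$ mean two-sided control of the error:
$$ \|u-u_h\| \;\le\; C_{\mathrm{rel}}\,\eta(u_h) \qquad\text{and}\qquad \eta(u_h) \;\le\; C_{\mathrm{eff}}\,\big(\|u-u_h\| + \operatorname{osc}(\text{data})\big), $$
with constants independent of the mesh size.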
Solving multiphysics-based inverse problems for geological carbon storage monitoring can be challenging when multimodal time-lapse data are expensive to collect and costly to simulate numerically. We overcome these challenges by combining computationally cheap learned surrogates with learned constraints. Not only does this combination lead to vastly improved inversions for the important fluid-flow property, permeability, but it also provides a natural platform for inverting multimodal data including well measurements and active-source time-lapse seismic data. By adding a learned constraint, we arrive at a computationally feasible inversion approach that remains accurate. This is accomplished by including a trained deep neural network, known as a normalizing flow, which forces the model iterates to remain in-distribution, thereby safeguarding the accuracy of trained Fourier neural operators that act as surrogates for the computationally expensive multiphase flow simulations involving partial differential equation solves. By means of carefully selected experiments, centered around the problem of geological carbon storage, we demonstrate the efficacy of the proposed constrained optimization method on two different data modalities, namely time-lapse well and time-lapse seismic data. While permeability inversions from both modalities have their pluses and minuses, their joint inversion benefits from the strengths of each, yielding superior permeability inversions and CO2 plume predictions near, and far away from, the monitoring wells.
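The following minimal PyTorch sketch conveys the constrained-inversion idea: optimize in the latent space of a pretrained normalizing flow so that permeability iterates stay in-distribution for a frozen learned surrogate. All networks, sizes, and data here are toy stand-ins (assumptions), not the authors' trained models; a real normalizing flow would in addition be invertible.

```python
# Latent-space inversion: data misfit through a frozen surrogate, with the
# normalizing flow keeping permeability iterates in-distribution.
import torch

torch.manual_seed(0)
n = 64  # length of the flattened permeability field (assumption)

# Stand-in for a pretrained normalizing flow G: latent z -> permeability K.
flow = torch.nn.Sequential(torch.nn.Linear(n, n), torch.nn.Tanh(), torch.nn.Linear(n, n))
# Stand-in for a pretrained Fourier-neural-operator surrogate F: K -> data.
surrogate = torch.nn.Sequential(torch.nn.Linear(n, n), torch.nn.Tanh(), torch.nn.Linear(n, n))
for p in list(flow.parameters()) + list(surrogate.parameters()):
    p.requires_grad_(False)  # both networks stay frozen during inversion

d_obs = torch.randn(n)                   # observed time-lapse data (synthetic)
z = torch.zeros(n, requires_grad=True)   # latent variable being inverted
opt = torch.optim.Adam([z], lr=1e-2)

for it in range(200):
    opt.zero_grad()
    K = flow(z)                                      # in-distribution iterate
    loss = 0.5 * torch.sum((surrogate(K) - d_obs) ** 2) \
         + 0.5 * torch.sum(z ** 2)                   # Gaussian latent prior
    loss.backward()
    opt.step()
```

Joint inversion of the two modalities amounts to summing a well-data misfit and a seismic-data misfit (each through its own surrogate) inside the same loop.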
Transition amplitudes and transition probabilities are relevant to many areas of physics simulation, including the calculation of response properties and correlation functions. These quantities can also be related to solving linear systems of equations. Here we present three related algorithms for calculating transition probabilities. First, we extend a previously published short-depth algorithm, allowing for the two input states to be non-orthogonal. Building on this first procedure, we then derive a higher-depth algorithm based on Trotterization and Richardson extrapolation that requires fewer circuit evaluations. Third, we introduce a tunable algorithm that allows for trading off circuit depth and measurement complexity, yielding an algorithm that can be tailored to specific hardware characteristics. Finally, we implement proof-of-principle numerics for models in physics and chemistry and for a subroutine in variational quantum linear solving (VQLS). The primary benefits of our approaches are that (a) arbitrary non-orthogonal states may now be used with small increases in quantum resources, (b) we (like another recently proposed method) entirely avoid subroutines such as the Hadamard test that may require three-qubit gates to be decomposed, and (c) in some cases fewer quantum circuit evaluations are required as compared to the previous state-of-the-art in NISQ algorithms for transition probabilities.
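For reference, the quantity estimated by all three algorithms is the transition probability $|\langle\chi|U|\psi\rangle|^2$ between possibly non-orthogonal states. The NumPy snippet below computes it exactly for a random two-qubit instance (the random states and unitary are our own illustration; on hardware this value is estimated from circuit samples):

```python
# Exact statevector computation of a transition probability for possibly
# non-orthogonal states; a random 2-qubit instance serves as the example.
import numpy as np

rng = np.random.default_rng(1)
dim = 4  # two qubits

def random_state(rng, dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def random_unitary(rng, dim):
    # QR of a complex Gaussian matrix with phase fix gives a Haar-random unitary
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

psi, chi = random_state(rng, dim), random_state(rng, dim)  # non-orthogonal in general
U = random_unitary(rng, dim)
print(abs(np.vdot(chi, U @ psi)) ** 2)  # transition probability |<chi|U|psi>|^2
```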
The convergence analysis for least-squares finite element methods led to various adaptive mesh-refinement strategies: collective marking algorithms driven by the built-in a posteriori error estimator or an alternative explicit residual-based error estimator, as well as a separate marking strategy based on the alternative error estimator and an optimal data approximation algorithm. This paper reviews and discusses available convergence results. In addition, all three strategies are investigated empirically for a set of benchmark examples of second-order elliptic partial differential equations in two spatial dimensions. Particular attention is paid to the choice of the marking and refinement parameters and to the approximation of the given data. The numerical experiments are reproducible using the author's software package octAFEM, available on the platform Code Ocean.
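As one concrete ingredient of the collective marking strategies mentioned above, the sketch below implements Dörfler (bulk) marking from local error indicators; the indicator values and the marking parameter $\theta$ are illustrative assumptions, with the indicators in practice supplied by the built-in or residual-based estimator.

```python
# Collective (Doerfler/bulk) marking step inside the adaptive loop
# "solve -> estimate -> mark -> refine".
import numpy as np

def doerfler_marking(eta, theta=0.5):
    """Return indices of a minimal set M with sum(eta[M]^2) >= theta * sum(eta^2)."""
    order = np.argsort(eta**2)[::-1]          # largest indicators first
    csum = np.cumsum(eta[order]**2)
    k = int(np.searchsorted(csum, theta * csum[-1])) + 1
    return order[:k]

eta = np.array([0.9, 0.1, 0.4, 0.05, 0.3])    # toy local error indicators
print(doerfler_marking(eta, theta=0.5))       # elements marked for refinement
```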
In this paper we introduce a multilevel Picard approximation algorithm for semilinear parabolic partial integro-differential equations (PIDEs). We prove that the numerical approximation scheme converges to the unique viscosity solution of the PIDE under consideration. To that end, we derive a Feynman-Kac representation for the unique viscosity solution of the semilinear PIDE, extending the classical Feynman-Kac representation for linear PIDEs. Furthermore, we show that the algorithm does not suffer from the curse of dimensionality, i.e., the computational complexity of the algorithm is bounded polynomially in the dimension $d$ and the reciprocal of the prescribed accuracy $\varepsilon$. We also provide a numerical example in up to 10,000 dimensions to demonstrate its applicability.
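A schematic form of the nonlinear Feynman-Kac representation referred to above (precise assumptions on the coefficients, the jump measure, and integrability are in the paper) reads
$$ u(t,x) \;=\; \mathbb{E}\!\left[ g\big(X_T^{t,x}\big) + \int_t^T f\big(s,\,X_s^{t,x},\,u\big(s,X_s^{t,x}\big)\big)\,\mathrm{d}s \right], $$
where $(X_s^{t,x})_{s\in[t,T]}$ is the jump-diffusion generated by the linear part of the PIDE, $f$ is the semilinear nonlinearity, and $g$ is the terminal condition; multilevel Picard approximates the fixed point of this equation by Monte Carlo estimates telescoped across accuracy levels.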
Quantum computing is a growing field where information is processed by two-level quantum states known as qubits. Current physical realizations of qubits require careful calibration, composed of different experiments, due to noise and decoherence phenomena. Among the different characterization experiments, a crucial step is to develop a model to classify the measured state by discriminating the ground state from the excited state. In these proceedings we benchmark multiple classification techniques applied to real quantum devices.
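A minimal sketch of this classification step, with synthetic Gaussian $(I,Q)$ readout clouds standing in for real calibration shots (the data model and the linear-discriminant classifier below are our assumptions, not the full benchmarked suite):

```python
# Classify single-qubit readout shots in the IQ plane: ground (0) vs excited (1).
# The two Gaussian blobs are synthetic stand-ins for measured calibration data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
shots = 2000
iq_ground = rng.normal(loc=(0.0, 0.0), scale=0.4, size=(shots, 2))
iq_excited = rng.normal(loc=(1.0, 1.0), scale=0.4, size=(shots, 2))
X = np.vstack([iq_ground, iq_excited])
y = np.array([0] * shots + [1] * shots)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print(f"assignment fidelity ~ {clf.score(X_te, y_te):.3f}")
```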
Unveiling the underlying governing equations of nonlinear dynamic systems remains a significant challenge, especially when observations are noisy and no prior knowledge is available. This study proposes R-DISCOVER, a framework designed to robustly uncover open-form partial differential equations (PDEs) from limited and noisy data. The framework operates through two alternating update processes: discovering and embedding. The discovering phase employs symbolic representation and a reinforcement learning (RL)-guided hybrid PDE generator to efficiently produce diverse open-form PDEs with tree structures. A neural network-based predictive model fits the system response and serves as the reward evaluator for the generated PDEs. PDEs with superior fits are used to iteratively optimize the generator via the RL method, and the best-performing PDE is selected by a parameter-free stability metric. The embedding phase integrates the initially identified PDE from the discovering process as a physical constraint into the predictive model for robust training. The traversal of PDE trees automates the construction of the computational graph and the embedding process without human intervention. Numerical experiments demonstrate our framework's capability to uncover governing equations from nonlinear dynamic systems with limited and highly noisy data and to outperform other physics-informed neural network-based discovery methods. This work opens new potential for exploring real-world systems with limited understanding.
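A highly simplified sketch of the reward-evaluation idea: score one candidate PDE structure by how well its terms explain the observed dynamics $u_t$ in a least-squares sense (the synthetic field, the fixed two-term candidate, and the $R^2$-style reward below are our assumptions; the actual framework generates open-form PDE trees with an RL-guided generator and a neural predictive model):

```python
# Score a candidate PDE u_t = a*u_xx + b*u*u_x against synthetic data that
# actually solves the heat equation u_t = u_xx, so we expect a ~ 1, b ~ 0.
import numpy as np

x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 1, 100)
dx, dt = x[1] - x[0], t[1] - t[0]
u = np.sin(x)[None, :] * np.exp(-t)[:, None]   # synthetic field u(t, x)

u_t = np.gradient(u, dt, axis=0)
u_x = np.gradient(u, dx, axis=1)
u_xx = np.gradient(u_x, dx, axis=1)

library = np.stack([u_xx.ravel(), (u * u_x).ravel()], axis=1)
coef, *_ = np.linalg.lstsq(library, u_t.ravel(), rcond=None)
residual = u_t.ravel() - library @ coef
reward = 1.0 - np.var(residual) / np.var(u_t)  # fitness score for the generator
print(coef, reward)
```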
We study a finite volume scheme approximating a parabolic-elliptic Keller-Segel system with power law diffusion with exponent $\gamma \in [1,3]$ and periodic boundary conditions. We derive conditional a posteriori bounds for the error measured in the $L^\infty(0,T;H^1(\Omega))$ norm for the chemoattractant and by a quasi-norm-like quantity for the density. These results are based on stability estimates and suitable conforming reconstructions of the numerical solution. We perform numerical experiments showing that our error bounds are linear in mesh width and elucidating the behaviour of the error estimator under changes of $\gamma$.
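A schematic form of the system under study (the normalization of the elliptic equation is our assumption) is
$$ \partial_t \rho \;=\; \Delta \rho^{\gamma} - \nabla\cdot\big(\rho\,\nabla c\big), \qquad -\Delta c \;=\; \rho - \frac{1}{|\Omega|}\int_\Omega \rho\,\mathrm{d}x, $$
posed on a periodic domain $\Omega$ with $\gamma\in[1,3]$, where $\rho$ denotes the cell density and $c$ the chemoattractant concentration.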
We introduce a novel ridge detection algorithm for time-frequency (TF) analysis, particularly tailored for intricate nonstationary time series encompassing multiple non-sinusoidal oscillatory components. The algorithm is rooted in the distinctive geometric patterns that emerge in the TF domain due to such non-sinusoidal oscillations. We term this method \textit{shape-adaptive mode decomposition-based multiple harmonic ridge detection} (\textsf{SAMD-MHRD}). A swift implementation is available when supplementary information is at hand. We demonstrate the practical utility of \textsf{SAMD-MHRD} through its application to a real-world challenge. We employ it to devise a cutting-edge walking activity detection algorithm, leveraging accelerometer signals from an inertial measurement unit across diverse body locations of a moving subject.
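To make the ridge-detection setting concrete, the toy snippet below extracts a single spectrogram ridge of a linear chirp by penalized dynamic programming (the test signal, the penalty weight, and the single-component simplification are our assumptions; \textsf{SAMD-MHRD} itself targets multiple non-sinusoidal components):

```python
# Extract the dominant time-frequency ridge of a chirp from its spectrogram
# via dynamic programming with a quadratic frequency-jump penalty.
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 200.0
t = np.arange(0, 10, 1 / fs)
sig = chirp(t, f0=5, t1=10, f1=20)               # mono-component test signal
f, tt, S = spectrogram(sig, fs=fs, nperseg=256, noverlap=224)

lam = 2.0                                        # jump penalty (assumption)
n_f, n_t = S.shape
cost = -np.log(S + 1e-12)                        # low cost on high energy
pen = lam * (f[:, None] - f[None, :]) ** 2       # penalize frequency jumps
dp, parent = cost.copy(), np.zeros((n_f, n_t), dtype=int)
for j in range(1, n_t):                          # forward pass over time
    total = dp[:, j - 1][None, :] + pen
    parent[:, j] = np.argmin(total, axis=1)
    dp[:, j] = cost[:, j] + np.min(total, axis=1)
ridge = np.empty(n_t, dtype=int)
ridge[-1] = int(np.argmin(dp[:, -1]))
for j in range(n_t - 1, 0, -1):                  # backtrack the optimal path
    ridge[j - 1] = parent[ridge[j], j]
print(f[ridge][0], f[ridge][-1])                 # ~5 Hz rising toward ~20 Hz
```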
Vector addition systems with states (VASS) are a popular model for concurrent systems. However, many decision problems have prohibitively high complexity. Therefore, it is sometimes useful to consider overapproximating semantics in which these problems can be decided more efficiently. We study an overapproximation, called monus semantics, that slightly relaxes the semantics of decrements: a key property of vector addition systems is that in order to decrement a counter, this counter must have a positive value. In contrast, our semantics allows decrements of zero-valued counters: if such a transition is executed, the counter just remains zero. It turns out that if only a subset of transitions is used with monus semantics (and the others with classical semantics), then reachability is undecidable. However, we show that if monus semantics is used throughout, reachability remains decidable. In particular, we show that reachability for VASS with monus semantics is as hard as for classical VASS (i.e., Ackermann-hard), while zero-reachability and coverability are easier (EXPSPACE-complete and NP-complete, respectively). We provide a comprehensive account of the complexity of the general reachability problem, reachability of zero configurations, and coverability under monus semantics. We study these problems in general VASS, two-dimensional VASS, and one-dimensional VASS, with unary and binary counter updates.
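The relaxation is easy to state operationally; the toy step functions below contrast the two semantics on a counter vector (our own illustration):

```python
# Classical VASS steps block when a counter would go negative; monus steps
# truncate at zero instead, so decrementing a zero-valued counter is a no-op.
from typing import Optional, Tuple

Vec = Tuple[int, ...]

def classical_step(v: Vec, delta: Vec) -> Optional[Vec]:
    w = tuple(a + b for a, b in zip(v, delta))
    return w if all(c >= 0 for c in w) else None  # None: transition disabled

def monus_step(v: Vec, delta: Vec) -> Vec:
    return tuple(max(a + b, 0) for a, b in zip(v, delta))

print(classical_step((0, 2), (-1, 1)))  # None -- counter 0 cannot be decremented
print(monus_step((0, 2), (-1, 1)))      # (0, 3) -- the decrement is truncated
```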