The max-relative entropy, together with its smoothed version, is a basic tool in quantum information theory. In this paper, we derive the exact exponent for the asymptotic decay of the amount by which the quantum state is modified when smoothing the max-relative entropy based on purified distance. We then apply this result to the problem of privacy amplification against quantum side information, and we obtain an upper bound for the exponent governing the asymptotic decrease of the insecurity, measured using either purified distance or relative entropy. Our upper bound complements the earlier lower bound established by Hayashi, and the two bounds match when the rate of randomness extraction is above a critical value. Thus, in the high-rate case, we have determined the exact security exponent. We then give examples showing that in the low-rate case, neither the upper bound nor the lower bound is tight in general. This exhibits a picture similar to that of the error exponent in channel coding. Lastly, we investigate the asymptotics of equivocation and its exponent under the security measure based on the sandwiched R\'enyi divergence of order $s\in (1,2]$, which has not been addressed previously in the quantum setting.
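For context, one common convention for the max-relative entropy and its purified-distance smoothing (details such as whether the smoothing ball admits sub-normalized states vary across the literature) reads
$$D_{\max}(\rho\|\sigma)=\log\min\{\lambda>0:\rho\leq\lambda\sigma\},\qquad D_{\max}^{\epsilon}(\rho\|\sigma)=\min_{\tilde\rho:\,P(\tilde\rho,\rho)\leq\epsilon}D_{\max}(\tilde\rho\|\sigma),$$
where $P(\tilde\rho,\rho)=\sqrt{1-F(\tilde\rho,\rho)^{2}}$ denotes the purified distance and $F$ the fidelity.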
In the literature, the reliability analysis of one-shot devices is typically carried out under accelerated life testing in the presence of various stress factors. The application of one-shot devices can be extended to the biomedical field, where the survival time of patients afflicted with a certain disease is subject to different stress factors such as environmental stress, co-morbidity, and the severity of the disease. This work is concerned with one-shot device data analysis and applies it to SEER gallbladder cancer data. The two-parameter logistic exponential distribution is adopted as the lifetime distribution. For robust parameter estimation, weighted minimum density power divergence estimators (WMDPDE) are obtained along with the conventional maximum likelihood estimators (MLE). The asymptotic behaviour of the WMDPDE and the robust test statistic based on the density power divergence measure are also studied. The performances of the estimators are evaluated through extensive simulation experiments. These developments are then applied to the SEER gallbladder cancer data. Given the importance of knowing exactly when to inspect the one-shot devices put to the test, a search for optimum inspection times is performed. This optimization is designed to minimize a defined cost function that trades off the precision of estimation against experimental cost. The search is accomplished through a population-based heuristic optimization method, the genetic algorithm.
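As an illustration of the estimation principle only (not the paper's method), the following is a minimal sketch of the unweighted minimum density power divergence objective of Basu et al. for a stand-in one-parameter exponential lifetime density; the group weighting used by the WMDPDE and the logistic exponential lifetime model are omitted here, and the tuning parameter alpha is chosen arbitrarily.

import numpy as np
from scipy.optimize import minimize_scalar

def dpd_objective(rate, data, alpha=0.5):
    """Density power divergence objective for an exponential(rate) density.

    For f(x) = rate * exp(-rate * x) on [0, inf), the integral of f**(1 + alpha)
    has the closed form rate**alpha / (1 + alpha).
    """
    f_data = rate * np.exp(-rate * data)           # density evaluated at the observations
    integral_term = rate**alpha / (1.0 + alpha)    # integral of f**(1 + alpha)
    empirical_term = (1.0 + 1.0 / alpha) * np.mean(f_data**alpha)
    return integral_term - empirical_term

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=500)        # true rate = 0.5
data[:10] = 50.0                                   # inject a few gross outliers

mdpde = minimize_scalar(lambda r: dpd_objective(r, data),
                        bounds=(1e-3, 10.0), method="bounded")
mle_rate = 1.0 / np.mean(data)                     # MLE, sensitive to the outliers
print("MDPDE rate:", mdpde.x, "MLE rate:", mle_rate)

Downweighting the contribution of low-density observations is what gives the density power divergence estimator its robustness relative to the MLE.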
In this work, we use the theory of quantum states over time to define an entropy $S(\rho,\mathcal{E})$ associated with quantum processes $(\rho,\mathcal{E})$, where $\rho$ is a state and $\mathcal{E}$ is a quantum channel responsible for the dynamical evolution of $\rho$. The entropy $S(\rho,\mathcal{E})$ is a generalization of the von Neumann entropy in the sense that $S(\rho,\mathrm{id})=S(\rho)$ (where $\mathrm{id}$ denotes the identity channel), and it is a dynamical analogue of the quantum joint entropy for bipartite states. This entropy is then used to define dynamical formulations of the quantum conditional entropy and quantum mutual information, and we show that these information measures satisfy many desirable properties, such as a quantum entropic Bayes' rule. We also use our entropy function to quantify the information loss/gain associated with the dynamical evolution of quantum systems, which enables us to formulate a precise notion of information conservation for quantum processes.
Classical branching programs are studied to understand the space complexity of computational problems. Prior to this work, Nakanishi and Ablayev had separately defined two different quantum versions of branching programs that we refer to as NQBP and AQBP. However, neither of them, to our satisfaction, captures the intuitive idea of being able to query different variables in superposition in one step of a branching program traversal. Here we propose a quantum branching program model, referred to as GQBP, with that ability. To motivate our definition, we explicitly give GQBPs of optimal length for n-bit Deutsch-Jozsa, n-bit Parity, and 3-bit Majority. We then show several equivalences, namely between GQBP and AQBP, between GQBP and NQBP, and between GQBP and query complexity (using either oracle gates or a QRAM to query input bits). In this way we unify the different results known for the two earlier branching program models and also connect them to query complexity. We hope that GQBP can be used to prove space and space-time lower bounds for quantum solutions to combinatorial problems.
In this paper, we investigate the realization of covert communication in a general radar-communication cooperation system, which includes integrated sensing and communications as a special case. We explore the possibility of utilizing the sensing ability of radar to track and jam an aerial adversary target attempting to detect the transmission. Based on the echoes from the target, the extended Kalman filtering technique is employed to predict its trajectory as well as the corresponding channels. Depending on the maneuvering altitude of the adversary target, two channel models are considered, with the aim of maximizing the covert transmission rate by jointly designing the radar waveform and communication transmit beamforming vector based on the constructed channels. For the free-space propagation model, by decoupling the joint design, we propose an efficient algorithm to guarantee that the target cannot detect the transmission. For the Rician fading model, since the multi-path components cannot be estimated, a robust joint transmission scheme is proposed based on properties of the Kullback-Leibler divergence. The convergence behaviour, tracking MSE, false alarm and missed detection probabilities, and covert transmission rate are evaluated. Simulation results show that the proposed algorithms achieve accurate tracking. For both channel models, the proposed sensing-assisted covert transmission design is able to guarantee covertness, and it significantly outperforms the conventional schemes.
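To make the tracking step concrete, below is a minimal sketch of a discrete-time extended Kalman filter for a constant-velocity target observed through range-bearing echoes; the state model, noise covariances, and measurement geometry are illustrative assumptions and not the system model of the paper.

import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],          # constant-velocity dynamics for [px, py, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = 0.01 * np.eye(4)                  # process noise covariance (assumed)
R = np.diag([1.0, 0.01])              # range / bearing measurement noise (assumed)

def h(x):
    """Nonlinear measurement: range and bearing of the target."""
    px, py = x[0], x[1]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])

def H_jac(x):
    """Jacobian of h evaluated at the predicted state."""
    px, py = x[0], x[1]
    r2 = px**2 + py**2
    r = np.sqrt(r2)
    return np.array([[px / r, py / r, 0, 0],
                     [-py / r2, px / r2, 0, 0]])

def ekf_step(x, P, z):
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    H = H_jac(x_pred)
    innov = z - h(x_pred)
    innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap the bearing residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innov
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

In the paper's setting, the predicted trajectory is further mapped to predicted channels, which then feed the joint radar waveform and transmit beamforming design.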
Amortized variational inference (A-VI) is a method for approximating the intractable posterior distributions that arise in probabilistic models. The defining feature of A-VI is that it learns a global inference function that maps each observation to its local latent variable's approximate posterior. This stands in contrast to the more classical factorized (or mean-field) variational inference (F-VI), which directly learns the parameters of the approximating distribution for each latent variable. In deep generative models, A-VI is used as a computational trick to speed up inference for local latent variables. In this paper, we study A-VI as a general alternative to F-VI for approximate posterior inference. A-VI cannot produce an approximation with a lower Kullback-Leibler divergence than F-VI's optimal solution, because the amortized family is a subset of the factorized family. Thus a central theoretical problem is to characterize when A-VI still attains F-VI's optimal solution. We derive conditions on both the model and the inference function under which A-VI can theoretically achieve F-VI's optimum. We show that for a broad class of hierarchical models, including deep generative models, it is possible to close the gap between A-VI and F-VI. Further, for an even broader class of models, we establish when and how to expand the domain of the inference function to make amortization a feasible strategy. Finally, we prove that for certain models -- including hidden Markov models and Gaussian processes -- A-VI cannot match F-VI's solution, no matter how expressive the inference function is. We also study A-VI empirically [...]
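As a toy illustration of the amortization gap (a hypothetical example, not taken from the paper), consider the conjugate model $z_i \sim N(0,1)$, $x_i \mid z_i \sim N(z_i,\sigma^2)$: the exact per-observation posterior mean is linear in $x_i$, so a simple linear amortized inference function can already recover F-VI's optimal (here exact) solution.

import numpy as np

rng = np.random.default_rng(1)
sigma2 = 0.5
n = 1000
z = rng.normal(size=n)
x = z + rng.normal(scale=np.sqrt(sigma2), size=n)

# F-VI optimum: one Gaussian per latent variable, which here equals the exact
# posterior; the posterior variance is sigma2 / (1 + sigma2) for every i.
fvi_means = x / (1.0 + sigma2)

# A-VI: a shared inference function q(z_i | x_i) = N(a * x_i + b, s2),
# fitted by least squares to stand in for learning the amortized map.
A = np.column_stack([x, np.ones(n)])
a, b = np.linalg.lstsq(A, fvi_means, rcond=None)[0]
avi_means = a * x + b

print("max gap between A-VI and F-VI means:", np.max(np.abs(avi_means - fvi_means)))

The reported gap is at machine precision: for this model the amortized family contains F-VI's optimum, in line with the result that the gap between A-VI and F-VI can be closed for suitable hierarchical models.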
This paper proposes a hierarchy of numerical fluxes for the compressible flow equations that are kinetic-energy and pressure-equilibrium preserving and asymptotically entropy conservative, i.e., they are able to reduce arbitrarily the numerical error on entropy production due to the spatial discretization. The fluxes are based on the use of the harmonic mean for the internal energy and involve only algebraic operations, making them less computationally expensive than the entropy-conserving fluxes based on the logarithmic mean. The use of the geometric mean is also explored and found to be well suited to reducing errors in the entropy evolution. Results of numerical tests confirm the theoretical predictions, and the entropy-conserving capabilities of a selection of schemes are compared.
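For reference, the two-point averages involved are
$$\mathrm{L}(a,b)=\frac{a-b}{\ln a-\ln b},\qquad \mathrm{H}(a,b)=\frac{2ab}{a+b},\qquad \mathrm{G}(a,b)=\sqrt{ab};$$
the harmonic and geometric means require only algebraic operations, whereas the logarithmic mean used by exactly entropy-conserving fluxes involves a logarithm evaluation and needs special treatment in the limit $a\to b$.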
This paper is devoted to the statistical and numerical properties of the geometric median and its applications to the problem of robust mean estimation via the median-of-means principle. Our main theoretical results include (a) an upper bound for the distance between the mean and the median for general absolutely continuous distributions in $\mathbb{R}^d$, together with examples of specific classes of distributions for which these bounds do not depend on the ambient dimension $d$; and (b) exponential deviation inequalities for the distance between the sample and population versions of the geometric median, which again depend only on trace-type quantities and not on the ambient dimension. As a corollary, we deduce improved bounds for the (geometric) median-of-means estimator that hold for large classes of heavy-tailed distributions. Finally, we address the error of numerical approximation, which is an important practical aspect of any statistical estimation procedure. We demonstrate that the objective function minimized by the geometric median satisfies a "local quadratic growth" condition that allows one to translate suboptimality bounds for the objective function into corresponding bounds for the numerical approximation of the median itself, and we propose a simple stopping rule, applicable to any optimization method, which yields explicit error guarantees. We conclude with numerical experiments, including an application to estimating mean log-returns for S&P 500 data.
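The following is a minimal sketch of the (geometric) median-of-means estimator: the data are split into blocks, block means are formed, and their geometric median is computed with Weiszfeld-type iterations. The simple step-size stopping criterion used here is an illustrative stand-in for the explicit stopping rule proposed in the paper.

import numpy as np

def geometric_median(points, tol=1e-8, max_iter=500):
    """Weiszfeld iterations for the geometric median of the rows of `points`."""
    y = points.mean(axis=0)                     # starting point
    for _ in range(max_iter):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, 1e-12)                # avoid division by zero
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:     # simple step-size stopping rule
            return y_new
        y = y_new
    return y

def median_of_means(data, n_blocks):
    """Geometric median of the block means of the rows of `data`."""
    blocks = np.array_split(data, n_blocks)
    block_means = np.vstack([b.mean(axis=0) for b in blocks])
    return geometric_median(block_means)

rng = np.random.default_rng(2)
sample = rng.standard_t(df=2.5, size=(10_000, 5))   # heavy-tailed, true mean 0
print("MoM estimate:", median_of_means(sample, n_blocks=50))
print("plain mean:  ", sample.mean(axis=0))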
We investigate covert communication over general memoryless classical-quantum channels with fixed finite-size input alphabets. We show that the square root law (SRL) governs covert communication in this setting when a product of $n$ input states is used: $L_{\rm SRL}\sqrt{n}+o(\sqrt{n})$ covert bits (but no more) can be reliably transmitted in $n$ uses of the classical-quantum channel, where $L_{\rm SRL}>0$ is a channel-dependent constant that we call the covert capacity. We also show that ensuring covertness requires $J_{\rm SRL}\sqrt{n}+o(\sqrt{n})$ bits of secret shared by the communicating parties prior to transmission, where $J_{\rm SRL}\geq0$ is a channel-dependent constant. We assume a quantum-powerful adversary that can perform an arbitrary joint (entangling) measurement on all $n$ channel uses. We determine single-letter expressions for $L_{\rm SRL}$ and $J_{\rm SRL}$, and we establish conditions under which $J_{\rm SRL}=0$ (i.e., no pre-shared secret is needed). Finally, we evaluate scenarios in which covert communication is not governed by the SRL.
We extend three related results from the analysis of influences of Boolean functions to the quantum setting, namely the KKL Theorem, Friedgut's Junta Theorem, and Talagrand's variance inequality for geometric influences. Our results are derived by a joint use of recently studied hypercontractivity and gradient estimates. These generic tools also allow us to derive generalizations of these results in a general von Neumann algebraic setting beyond the case of the quantum hypercube, including examples in infinite dimensions relevant to quantum information theory, such as continuous-variable quantum systems. Finally, we comment on the implications of our results regarding noncommutative extensions of isoperimetric-type inequalities, quantum circuit complexity lower bounds, and the learnability of quantum observables.
We study the problem of fairly allocating $m$ indivisible items among $n$ agents. Envy-free allocations, in which each agent weakly prefers her bundle to the bundle of every other agent, need not exist in the worst case. However, when agents have additive preferences and the value $v_{i,j}$ of agent $i$ for item $j$ is drawn independently from a distribution $D_i$, envy-free allocations exist with high probability when $m \in \Omega( n \log n / \log \log n )$. In this paper, we study the existence of envy-free allocations under stochastic valuations far beyond the additive setting. We introduce a new stochastic model in which each agent's valuation is sampled by first fixing a worst-case function, and then drawing a uniformly random renaming of the items, independently for each agent. This strictly generalizes known settings; for example, $v_{i,j} \sim D_i$ may be seen as picking a random (instead of a worst-case) additive function before renaming. We prove that random renaming is sufficient to ensure that envy-free allocations exist with high probability in very general settings. When valuations are non-negative and ``order-consistent,'' a valuation class that generalizes additive, budget-additive, unit-demand, and single-minded agents, SD-envy-free allocations (a stronger notion of fairness than envy-freeness) exist for $m \in \omega(n^2)$ when $n$ divides $m$, and SD-EFX allocations exist for all $m \in \omega(n^2)$. The dependence on $n$ is tight; that is, for $m \in O(n^2)$, envy-free allocations fail to exist with constant probability. For the case of arbitrary valuations (allowing non-monotone, negative, or mixed-manna valuations) and $n=2$ agents, we prove that envy-free allocations exist with probability $1 - \Theta(1/m)$ (and this is tight).