As an essential ingredient of quantum networks, quantum conference key agreement (QCKA) provides unconditional secret keys among multiple parties, enabling only legitimate users to decrypt the encrypted message. Recently, several QCKA protocols employing the twin-field scheme have been proposed to extend the transmission distance. These protocols, however, suffer from relatively low conference key rates and short transmission distances over asymmetric channels, which demands a prompt solution in practice. Here, we consider a tripartite QCKA protocol utilizing the idea of the sending-or-not-sending twin-field scheme and propose a high-efficiency QCKA over asymmetric channels by removing the symmetric-parameter constraint. Besides, we provide a composable finite-key analysis with a rigorous security proof against general attacks by exploiting the entropic uncertainty relation for multiparty systems. Our protocol greatly improves the feasibility of establishing conference keys over asymmetric channels.
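As a side note on why twin-field-type schemes help at long distance, the following minimal Python sketch only contrasts the linear-in-transmittance rate scaling of point-to-point schemes with the square-root scaling characteristic of twin-field protocols; the 0.2 dB/km fiber loss and ideal devices are assumptions of this illustration, and it is not the key-rate formula of the proposed protocol.

```python
# Illustrative scaling comparison only, not the SNS-TF QCKA key-rate formula:
# point-to-point schemes scale linearly with the channel transmittance eta,
# while twin-field-type schemes scale with sqrt(eta). The 0.2 dB/km fiber loss
# and ideal devices are assumptions made for this sketch.
import math

def transmittance(distance_km, loss_db_per_km=0.2):
    return 10 ** (-loss_db_per_km * distance_km / 10)

for d in (100, 200, 300, 400):
    eta = transmittance(d)
    print(f"{d:3d} km: eta = {eta:.2e}, "
          f"O(eta) ~ {eta:.2e}, O(sqrt(eta)) ~ {math.sqrt(eta):.2e}")
```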
Commitment is an important cryptographic primitive. It is well known that noisy channels are a promising resource for realizing commitment in an information-theoretically secure manner. Oftentimes, however, the channel behaviour may be poorly characterized, thereby limiting the commitment throughput and/or degrading the security guarantees; particularly problematic is the case where a dishonest party, unbeknownst to the honest one, can maliciously alter the channel characteristics. Reverse elastic channels (RECs) are an interesting class of such unreliable channels, where only a dishonest committer, say Alice, can maliciously alter the channel. RECs have attracted recent interest in the study of several cryptographic primitives. Our principal contribution is the characterization of the REC commitment capacity, which proves a recent related conjecture. A key result is our tight converse, which analyses a specific cheating strategy by Alice. RECs are closely related to the classic unfair noisy channels (UNCs); elastic channels (ECs), where only a dishonest receiver, say Bob, can alter the channel, are similarly related. In stark contrast to UNCs, both RECs and ECs always exhibit positive commitment throughput for all non-trivial parameters. Interestingly, our results show that channels with exclusive one-sided elasticity for dishonest parties exhibit a fundamental asymmetry: a committer with one-sided elasticity has a more debilitating effect on the commitment throughput than a receiver.
Directional beams are key to enabling wireless communications at high-frequency bands such as millimeter-wave and terahertz. Beam alignment (BA) methods allow the transceivers to adjust the directions of these beams efficiently by exploiting the channel sparsity at high frequencies. This paper considers an uplink scenario comprising a user equipment (UE) and a base station (BS), where the channel between the UE and the BS consists of multiple paths. The BS wishes to localize the angle of arrival of each of these paths with a given resolution. At each time slot of the BA, the UE transmits a BA packet and the BS uses hybrid beamforming to scan its angular region. In order to minimize the expected BA duration, a group testing framework is devised, and the associated novel analog and hybrid BA strategies are described. Simulation studies suggest that the proposed schemes outperform state-of-the-art multi-path BA methods.
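As an illustration of the group-testing idea underlying such BA strategies (and not the paper's actual analog/hybrid designs), the following Python sketch pools angular bins into random multi-finger beams and decodes the active bins with the standard COMP rule; the bin count, number of paths, and number of tests are assumptions chosen for the example.

```python
# Minimal non-adaptive group-testing sketch for angle-of-arrival localization:
# each "test" sounds a random subset of angular bins (a multi-finger beam) and
# returns 1 if any active path falls inside it. The COMP decoder keeps only
# the bins consistent with every outcome. This illustrates the group-testing
# idea, not the hybrid beamforming design proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_paths, n_tests = 64, 2, 24

true_bins = rng.choice(n_bins, size=n_paths, replace=False)
# Random pooling matrix: row t lists which bins test t covers.
pools = rng.random((n_tests, n_bins)) < 0.5
outcomes = pools[:, true_bins].any(axis=1)          # 1 if any path is covered

# COMP decoding: a bin is ruled out if it appears in any negative test.
candidate = np.ones(n_bins, dtype=bool)
for t in range(n_tests):
    if not outcomes[t]:
        candidate[pools[t]] = False

print("true bins     :", sorted(true_bins))
print("candidate bins:", np.flatnonzero(candidate))
```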
A locally testable code is an error-correcting code that admits very efficient probabilistic tests of membership. Tensor codes provide a simple family of combinatorial constructions of locally testable codes that generalize the family of Reed-Muller codes. The natural test for tensor codes, the axis-parallel line vs. point test, plays an essential role in constructions of probabilistically checkable proofs. We analyze the axis-parallel line vs. point test as a two-prover game and show that the test is sound against quantum provers sharing entanglement. Our result implies the quantum-soundness of the low individual degree test, which is an essential component of the MIP* = RE theorem. Our proof also generalizes to the infinite-dimensional commuting-operator model of quantum provers.
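For concreteness, the following Python sketch implements the classical (unentangled) axis-parallel line vs. point test on a bivariate polynomial of low individual degree over a prime field; the field size and degree are illustrative assumptions, and the quantum soundness analysis is of course beyond such a sketch.

```python
# Classical sketch of the axis-parallel line vs. point test for a bivariate
# function of low individual degree over a prime field F_p. The "lines prover"
# returns the restriction of f to a random axis-parallel line; the "points
# prover" returns f at a random point on that line; the verifier checks
# consistency. Entangled provers and soundness analysis are beyond this sketch.
import random

P, D = 101, 3                                      # field size, individual degree
coeffs = [[random.randrange(P) for _ in range(D + 1)] for _ in range(D + 1)]

def f(x, y):                                       # honest low-degree table
    return sum(coeffs[i][j] * pow(x, i, P) * pow(y, j, P)
               for i in range(D + 1) for j in range(D + 1)) % P

def line_restriction(axis, c):
    """Univariate restriction of f to the line {x = c} or {y = c}."""
    if axis == "x":                                # vary y, fix x = c
        return [f(c, t) for t in range(P)]
    return [f(t, c) for t in range(P)]

def one_test():
    axis, c = random.choice("xy"), random.randrange(P)
    t = random.randrange(P)
    point = (c, t) if axis == "x" else (t, c)
    return line_restriction(axis, c)[t] == f(*point)

print("acceptance over 200 honest runs:",
      sum(one_test() for _ in range(200)) / 200)
```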
We consider the symmetric binary perceptron model, a simple model of neural networks that has gathered significant attention in the statistical physics, information theory, and probability theory communities, with recent connections made to the performance of learning algorithms in Baldassi et al. '15. We establish that the partition function of this model, normalized by its expected value, converges to a lognormal distribution. As a consequence, we are able to establish several conjectures for this model: (i) the contiguity conjecture of Aubin et al. '19 between the planted and unplanted models in the satisfiable regime; (ii) the sharp threshold conjecture; (iii) the frozen 1-RSB conjecture in the symmetric case, first conjectured by Krauth-M\'ezard '89 in the asymmetric case. In a recent work of Perkins-Xu '21, the last two conjectures were also established by proving that the partition function concentrates on an exponential scale, under an analytical assumption on a real-valued function. This left open the contiguity conjecture and the lognormal limit characterization, which are established here unconditionally, with the analytical assumption verified. In particular, our proof technique relies on a dense counterpart of the small graph conditioning method, which was developed for sparse models in the celebrated work of Robinson and Wormald.
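To make the normalized partition function concrete, the following brute-force Python sketch draws Gaussian disorder, counts satisfying sign vectors for a small instance of the symmetric binary perceptron, and compares the count with its expectation; the instance size and constraint density are assumptions chosen so the enumeration stays tractable.

```python
# Brute-force Monte Carlo sketch of the symmetric binary perceptron partition
# function Z = #{sigma in {-1,+1}^n : |<g_a, sigma>| <= kappa*sqrt(n) for all a},
# compared with E[Z] = 2^n (2*Phi(kappa) - 1)^m. Small n only; this illustrates
# the ratio Z / E[Z] whose lognormal limit the paper establishes.
import itertools
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, kappa = 14, 1.0
m = int(0.4 * n)                                    # constraint density alpha = 0.4

sigmas = np.array(list(itertools.product((-1, 1), repeat=n)))   # 2^n x n
expected_Z = 2 ** n * (2 * norm.cdf(kappa) - 1) ** m

for trial in range(3):
    G = rng.standard_normal((m, n))
    ok = (np.abs(sigmas @ G.T) <= kappa * np.sqrt(n)).all(axis=1)
    Z = int(ok.sum())
    print(f"trial {trial}: Z = {Z}, Z / E[Z] = {Z / expected_Z:.3f}")
```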
Optimization of parameterized quantum circuits is indispensable for applying near-term quantum devices to computational tasks with variational quantum algorithms (VQAs). However, the existing optimization algorithms for VQAs require an excessive number of quantum-measurement shots for estimating expectation values of observables and iterating updates of circuit parameters, and this cost has been a crucial obstacle to practical use. To address this problem, we develop an efficient framework, \textit{stochastic gradient line Bayesian optimization} (SGLBO), for circuit optimization with fewer measurement shots. The SGLBO reduces the measurement-shot cost by estimating an appropriate direction for updating the parameters based on stochastic gradient descent (SGD) and further by utilizing Bayesian optimization (BO) to estimate the optimal step size in each iteration of the SGD. We formulate an adaptive measurement-shot strategy to make the optimization feasible without relying on precise expectation-value estimation and many iterations; moreover, we show that a technique of suffix averaging can significantly reduce the effect of statistical and hardware noise in the optimization for VQAs. Our numerical simulations demonstrate that the SGLBO augmented with these techniques can drastically reduce the required number of measurement shots, improve the optimization accuracy, and enhance the robustness against noise compared to other state-of-the-art optimizers in representative tasks for VQAs. These results establish a framework of quantum-circuit optimizers that integrates two different optimization approaches, SGD and BO, to significantly reduce the cost of measurement shots.
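The following Python sketch illustrates the SGLBO idea on a toy cost standing in for a measured expectation value: a stochastic gradient fixes the update direction, and a small one-dimensional Bayesian optimization (Gaussian-process surrogate with expected improvement) selects the step size along that direction. The toy cost, kernel, candidate grid, and shot counts are illustrative assumptions, not the paper's actual algorithmic choices.

```python
# Sketch of the SGLBO idea: SGD gives the direction, a tiny 1-D Bayesian
# optimization picks the step size. The cost function, GP kernel, and shot
# counts below are toy assumptions for illustration only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def noisy_cost(theta, shots=100):
    """Toy stand-in for a measured expectation value with shot noise."""
    return np.sum(np.sin(theta) ** 2) + rng.normal(0.0, 1.0 / np.sqrt(shots))

def stochastic_grad(theta, shots=1000, h=0.1):
    """Central finite-difference gradient estimate from noisy cost evaluations."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = h
        g[i] = (noisy_cost(theta + e, shots) - noisy_cost(theta - e, shots)) / (2 * h)
    return g

def gp_posterior(x_tr, y_tr, x_te, ell=0.3, sig=1.0, noise=0.1):
    """GP regression with an RBF kernel over candidate step sizes."""
    k = lambda a, b: sig * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks, Kss = k(x_tr, x_te), k(x_te, x_te)
    mu = Ks.T @ np.linalg.solve(K, y_tr)
    var = np.clip(np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks)), 1e-9, None)
    return mu, np.sqrt(var)

theta = rng.uniform(-1, 1, size=4)
for it in range(20):
    d = -stochastic_grad(theta)                       # SGD direction
    alphas = np.array([0.0, 0.5])                     # initial BO samples of step size
    ys = np.array([noisy_cost(theta + a * d) for a in alphas])
    grid = np.linspace(0.0, 1.0, 50)
    for _ in range(4):                                # BO iterations on the step size
        mu, sd = gp_posterior(alphas, ys, grid)
        best = ys.min()
        z = (best - mu) / sd
        ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
        a_next = grid[np.argmax(ei)]
        alphas = np.append(alphas, a_next)
        ys = np.append(ys, noisy_cost(theta + a_next * d))
    theta = theta + alphas[np.argmin(ys)] * d         # take the best observed step
print("final (low-noise) cost:", noisy_cost(theta, shots=10000))
```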
Since cyberspace was consolidated as the fifth warfare domain, the different actors of the defense sector have engaged in an arms race toward achieving cyber superiority, to which research, academic, and industrial stakeholders contribute from a dual-use perspective, largely building on a broad and heterogeneous heritage of civilian cybersecurity developments and capabilities. In this context, increasing awareness of the operational context and warfare environment, and of the risks and impacts of cyber threats on kinetic actions, has become a critical game-changer that military decision-makers are considering. A major challenge in acquiring mission-centric Cyber Situational Awareness (CSA) is the dynamic inference and assessment of how situations occurring in the mission-supporting Information and Communications Technologies (ICT) propagate vertically up to their relevance at the military tactical, operational, and strategic views. In order to contribute to acquiring CSA, this paper addresses a major gap in the cyber defence state of the art: the dynamic identification of Key Cyber Terrains (KCT) in a mission-centric context. Accordingly, the proposed KCT identification approach explores the dependency degrees among tasks and assets defined by commanders as part of the assessment criteria. These are correlated with the discoveries on the operational network and the asset vulnerabilities identified throughout the development of the supported mission. The proposal is presented as a reference model that reveals key aspects for mission-centric KCT analysis and supports its application and further extension by including an illustrative application case.
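As a purely illustrative reading of the dependency-based criterion (not the paper's reference model), the following Python sketch combines commander-assigned task-to-asset dependency degrees with vulnerability severities discovered on the operational network and ranks the assets accordingly; all names, weights, and the scoring rule are hypothetical.

```python
# Toy sketch of mission-centric Key Cyber Terrain (KCT) scoring: commander-
# assigned task-to-asset dependency degrees are combined with vulnerability
# severities discovered on the operational network, and assets are ranked by
# the resulting score. Asset names, weights, and the scoring rule are
# illustrative assumptions, not the paper's reference model.
task_priority = {"fire_support": 0.9, "logistics": 0.6}

# dependency_degree[task][asset] in [0, 1], as set by the commander.
dependency_degree = {
    "fire_support": {"c2_server": 0.8, "radio_gw": 0.9},
    "logistics":    {"c2_server": 0.4, "erp_node": 0.7},
}

# Normalized severity of vulnerabilities found on each asset (e.g., CVSS / 10).
vulnerability = {"c2_server": 0.75, "radio_gw": 0.50, "erp_node": 0.98}

scores = {}
for task, deps in dependency_degree.items():
    for asset, degree in deps.items():
        scores[asset] = scores.get(asset, 0.0) + (
            task_priority[task] * degree * vulnerability[asset]
        )

for asset, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{asset:10s} KCT score = {score:.2f}")
```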
In this paper, we investigate the impact of correlated fading on the performance of wireless multiple access channels (MAC) in the presence and absence of side information (SI) at the transmitters, where the fading coefficients are modeled according to the Fisher-Snedecor F distribution. Specifically, we consider two scenarios: (i) the clear MAC (i.e., without SI at the transmitters) and (ii) the doubly dirty MAC (i.e., with non-causally known SI at the transmitters). For both system models, we derive closed-form expressions for the outage probability under independent fading conditions. Besides, exploiting copula theory, we obtain exact analytical expressions for the outage probability under positively dependent fading conditions in both considered models. Finally, the validity of the analytical results is illustrated numerically.
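As a numerical companion to the copula-based analysis, the following Python sketch estimates a two-user sum-rate outage probability by Monte Carlo, inducing positive dependence between the two F-distributed SNRs through a Gaussian copula; the mapping of the fading parameters onto scipy.stats.f, the average SNR, and the outage definition are assumptions of the sketch rather than the paper's closed-form results.

```python
# Monte Carlo sketch: outage probability of a two-user MAC sum rate with
# Fisher-Snedecor F distributed SNRs, where positive dependence between the
# two links is induced with a Gaussian copula. The mapping of the fading
# parameters (m, m_s) onto scipy.stats.f and the outage definition used here
# are illustrative assumptions, not the paper's closed-form results.
import numpy as np
from scipy.stats import f as fisher_f, norm

rng = np.random.default_rng(2)
N = 100_000
m, ms, mean_snr = 2.0, 3.0, 5.0             # fading parameters and average SNR
rate_threshold = 2.0                        # bits/s/Hz on the sum rate

snr_dist = fisher_f(dfn=2 * m, dfd=2 * ms, scale=mean_snr)

def outage(rho):
    """Gaussian copula with correlation rho between the two users' SNRs."""
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=N)
    u = norm.cdf(z)                         # dependent uniform marginals
    snr1, snr2 = snr_dist.ppf(u[:, 0]), snr_dist.ppf(u[:, 1])
    sum_rate = np.log2(1 + snr1 + snr2)
    return np.mean(sum_rate < rate_threshold)

for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho:.1f}: outage ~ {outage(rho):.4f}")
```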
Queue length violation probability, i.e., the tail distribution of the queue length, is a widely used statistical quality-of-service (QoS) metric in wireless communications. Many previous works analyzed the tail distribution under control policies with the assumption that the conditions of large deviations theory (LDT) are satisfied. LDT indicates that the tail distribution of the queue length has a linear-decay-rate exponent. However, many control policies do not meet that assumption, and the optimal control policy may be among them. In this paper, we focus on the analysis of the tail distribution of the queue length from the perspective of cross-layer design in wireless link transmission. Specifically, we divide wireless link transmission systems into three scenarios according to the decay rate of the queue-length tail distribution under finite average power consumption. A heuristic policy is conceived to prove that tail distributions with arbitrary decay rates can be achieved with finite average power consumption in Rayleigh fading channels. Based on this heuristic policy, we generalize the analysis to Nakagami-m fading channels. Numerical results with approximations validate our analysis.
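The kind of empirical tail estimation the analysis targets can be sketched as follows: a discrete-time queue is fed with constant arrivals and served over a Rayleigh block-fading link at the instantaneous channel capacity under a fixed-power policy, and the complementary distribution of the queue length is measured. The arrival model and the fixed-power policy are assumptions for illustration, not the heuristic policy constructed in the paper.

```python
# Discrete-time sketch of queue-length tail estimation over a Rayleigh block-
# fading link: constant-rate arrivals, fixed transmit power, and service equal
# to the instantaneous channel capacity. The arrival model and fixed-power
# policy are illustrative assumptions, not the heuristic policy analyzed in
# the paper; the sketch only shows how P(Q > q) can be estimated empirically.
import numpy as np

rng = np.random.default_rng(3)
T, arrival, snr_avg = 1_000_000, 1.0, 2.0      # slots, bits/slot, average SNR

h = rng.exponential(1.0, size=T)               # Rayleigh fading power gains
service = np.log2(1 + snr_avg * h)             # bits served per slot

q = 0.0
queue = np.empty(T)
for t in range(T):
    q = max(q + arrival - service[t], 0.0)     # Lindley recursion
    queue[t] = q

for level in (1, 2, 4, 8):
    print(f"P(Q > {level:2d}) ~ {np.mean(queue > level):.2e}")
```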
As data are increasingly being stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses for robustness; 3) inference attacks and defenses for privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, as well as fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
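As one concrete instance of the defense families such a survey covers, the following Python sketch contrasts plain FedAvg (mean) aggregation with a coordinate-wise median when a single client submits a scaled, poisoned update; the client updates and the attack are fabricated toy data for illustration only.

```python
# Toy sketch of one defense family against poisoning: robust aggregation.
# A coordinate-wise median is compared with plain FedAvg (mean) when one of
# the clients submits a maliciously scaled update. The client updates and the
# attack are fabricated toy data for illustration only.
import numpy as np

rng = np.random.default_rng(4)
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 5))   # 9 honest client updates
malicious = np.full((1, 5), -50.0)                     # 1 poisoned update
updates = np.vstack([honest, malicious])

fedavg = updates.mean(axis=0)                          # vulnerable to poisoning
robust = np.median(updates, axis=0)                    # coordinate-wise median

print("FedAvg aggregate :", np.round(fedavg, 2))
print("Median aggregate :", np.round(robust, 2))
```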
In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is strongly convex and smooth, either strongly convex or smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors) with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions to the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
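A minimal instance of the dual approach is sketched below: accelerated gradient ascent is run on the dual of a consensus-constrained quadratic problem over a path graph, where every dual-gradient evaluation reduces to a multiplication by the graph Laplacian, i.e. neighbor-to-neighbor communication. The graph, the local quadratics, and the step size are toy assumptions, not the general setting analyzed in the paper.

```python
# Accelerated gradient ascent on the dual of a consensus-constrained problem:
# minimize sum_i 0.5*a_i*(x_i - b_i)^2 subject to L x = 0, with L the Laplacian
# of a path graph. Each dual-gradient evaluation only requires multiplication
# by L, i.e. neighbor-to-neighbor communication. Graph, costs, and step size
# are toy assumptions for illustration.
import numpy as np

m = 10                                           # number of nodes
rng = np.random.default_rng(5)
a = rng.uniform(1.0, 2.0, size=m)                # local strong-convexity moduli
b = rng.normal(size=m)                           # local targets

# Laplacian of a path graph (only neighbors exchange information).
L = np.zeros((m, m))
for i in range(m - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1

def x_star(lam):
    """Local minimizers of the Lagrangian for a given dual variable."""
    return b - (L @ lam) / a

lip = np.linalg.eigvalsh(L)[-1] ** 2 / a.min()   # Lipschitz constant of the dual gradient
lam = y = np.zeros(m)
t = 1.0
for _ in range(3000):                            # Nesterov-accelerated dual ascent
    grad = L @ x_star(y)                         # dual gradient, computed via L
    lam_next = y + grad / lip
    t_next = 0.5 * (1 + np.sqrt(1 + 4 * t ** 2))
    y = lam_next + (t - 1) / t_next * (lam_next - lam)
    lam, t = lam_next, t_next

x = x_star(lam)
print("distributed solution  :", np.round(x, 4))
print("centralized optimum c*:", round(float(np.sum(a * b) / np.sum(a)), 4))
```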