Communication security can be enhanced at the physical layer, but at the cost of complex algorithms and redundant hardware, which renders traditional physical layer security (PLS) techniques unsuitable for resource-constrained communication systems. This work investigates a waveform-defined security (WDS) framework that differs fundamentally from the traditional PLS techniques used in today's systems. The framework does not depend on channel conditions such as a signal power advantage or channel state information (CSI), and is therefore more reliable than channel-dependent beamforming and artificial noise (AN) techniques. Moreover, the framework does more than merely increase the cost of eavesdropping: by intentionally tuning waveform patterns to weaken signal feature diversity and enhance feature similarity, it prevents eavesdroppers from correctly identifying signal formats. Misclassifying the signal format leads to subsequent detection errors even when an eavesdropper employs brute-force detection techniques. To obtain a robust WDS framework, three impact factors are investigated, namely the training data features, the oversampling factor, and the bandwidth compression factor (BCF) offset; an optimal WDS waveform pattern is then obtained from a joint study of the three factors. To ensure a valid eavesdropping model, artificial intelligence (AI)-based signal classifiers are designed, followed by signal detectors that achieve optimal performance. To demonstrate compatibility with existing communication systems, the WDS framework is successfully integrated into IEEE 802.11a with almost no additional computational complexity. Finally, a low-cost software-defined radio (SDR) experiment is designed to verify the feasibility of the WDS framework in resource-constrained communications.
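The abstract does not specify how the waveform patterns are generated. As a minimal sketch of the underlying idea, a multicarrier waveform can be spectrally compressed by spacing subcarriers at a bandwidth compression factor below one, which is one way to blur the spectral features an eavesdropper's classifier relies on. The function name `compressed_multicarrier_symbol` and its parameters (`bcf`, `oversampling`) are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def compressed_multicarrier_symbol(symbols, bcf, oversampling=4):
    """Illustrative non-orthogonal multicarrier modulator (not the paper's design).

    Subcarriers are spaced at bcf / T instead of the orthogonal 1 / T, so
    bcf < 1 compresses bandwidth and blurs the spectral features an
    eavesdropper's classifier could otherwise exploit.
    """
    n = len(symbols)                       # number of subcarriers
    q = oversampling                       # samples per symbol period per subcarrier
    t = np.arange(n * q) / (n * q)         # normalized time within one symbol
    carriers = np.exp(2j * np.pi * bcf * np.outer(np.arange(n), t))
    return symbols @ carriers / np.sqrt(n)

# Example: QPSK symbols on 16 subcarriers with two different BCF settings.
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 16) + 1j * rng.choice([-1, 1], 16)) / np.sqrt(2)
orthogonal = compressed_multicarrier_symbol(qpsk, bcf=1.0)   # OFDM-like reference
compressed = compressed_multicarrier_symbol(qpsk, bcf=0.8)   # spectrally compressed
```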
This paper investigates reconfigurable intelligent surface (RIS)-assisted secure multiuser communication systems subject to hardware impairments (HIs). We jointly optimize the beamforming vectors at the base station (BS) and the phase shifts of the reflecting elements at the RIS so as to maximize the weighted minimum secrecy rate (WMSR), subject to both transmission power constraints at the BS and unit-modulus constraints at the RIS. To address the formulated optimization problem, we first decouple it into two tractable subproblems and then use the block coordinate descent (BCD) method to alternately optimize the subproblems. Two different methods are proposed to solve the two obtained subproblems. The first method transforms each subproblem into a second-order cone programming (SOCP) problem, which can be directly solved using CVX. The second method leverages the Minorization-Maximization (MM) algorithm. Specifically, we first derive a concave approximation function, which is a lower bound of the original objective function, and then the two subproblems are transformed into two simple surrogate problems with closed-form solutions. Simulation results verify the performance gains of the proposed robust transmission method over existing non-robust designs. In addition, the MM algorithm is shown to have much lower complexity than the SOCP-based algorithm.
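The WMSR problem itself is involved; the toy sketch below only illustrates the BCD pattern the abstract describes, where each block of variables is updated in closed form while the other is held fixed. The rank-one least-squares objective and the function `bcd_rank1` are assumptions for illustration and are unrelated to the actual secrecy-rate subproblems.

```python
import numpy as np

def bcd_rank1(M, iters=50, seed=0):
    """Generic block coordinate descent skeleton: alternately fix one block of
    variables and solve for the other in closed form. The toy objective here is
    ||M - x y^T||_F^2 (NOT the paper's WMSR problem).
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    x, y = rng.standard_normal(m), rng.standard_normal(n)
    for _ in range(iters):
        x = M @ y / (y @ y)        # closed-form update of block x (y fixed)
        y = M.T @ x / (x @ x)      # closed-form update of block y (x fixed)
    return x, y

M = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
x, y = bcd_rank1(M)
print(np.linalg.norm(M - np.outer(x, y)))   # residual after alternating updates
```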
With the advent of the Internet of Things (IoT), establishing a secure channel between smart devices becomes crucial. Recent research proposes zero-interaction pairing (ZIP), which enables pairing without user assistance by utilizing devices' physical context (e.g., ambient audio) to obtain a shared secret key. The state-of-the-art ZIP schemes suffer from three limitations: (1) prolonged pairing time (i.e., minutes or hours), (2) vulnerability to brute-force offline attacks on a shared key, and (3) susceptibility to attacks caused by predictable context (e.g., replay attack) because they rely on limited entropy of physical context to protect a shared key. We address these limitations, proposing FastZIP, a novel ZIP scheme that significantly reduces pairing time while preventing offline and predictable context attacks. In particular, we adapt a recently introduced Fuzzy Password-Authenticated Key Exchange (fPAKE) protocol and utilize sensor fusion, maximizing their advantages. We instantiate FastZIP for intra-car device pairing to demonstrate its feasibility and show how the design of FastZIP can be adapted to other ZIP use cases. We implement FastZIP and evaluate it by driving four cars for a total of 800 km. We achieve up to three times shorter pairing time compared to the state-of-the-art ZIP schemes while assuring robust security with adversarial error rates below 0.5%.
This paper studies the problem of distributed spectrum/channel access for cognitive radio-enabled unmanned aerial vehicles (CUAVs) that overlay upon primary channels. Under the framework of cooperative spectrum sensing and opportunistic transmission, a one-shot optimization problem for channel allocation, aiming to maximize the expected cumulative weighted reward of multiple CUAVs, is formulated. To handle the uncertainty due to the lack of prior knowledge about primary user activities and the absence of a channel-access coordinator, the original problem is cast into a competition-and-cooperation hybrid multi-agent reinforcement learning (CCH-MARL) problem in the framework of a Markov game (MG). Then, a value-iteration-based RL algorithm featuring upper confidence bound-Hoeffding (UCB-H) strategy search is proposed, treating each CUAV as an independent learner (IL). To address the curse of dimensionality, the UCB-H strategy is further extended with a double deep Q-network (DDQN). Numerical simulations show that the proposed algorithms efficiently converge to stable strategies and significantly improve network performance compared with benchmark algorithms such as vanilla Q-learning and DDQN.
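As a rough illustration of the UCB-Hoeffding flavor of exploration mentioned above, the sketch below adds a Hoeffding-style confidence bonus to a tabular, independent-learner Q-update. The class name `UCBHQLearner`, the exact bonus form, and all constants are assumptions rather than the paper's algorithm (which further replaces the table with a DDQN).

```python
import numpy as np

class UCBHQLearner:
    """Independent-learner tabular Q-learning with a UCB-Hoeffding-style
    exploration bonus (illustrative sketch; constants and bonus form assumed)."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95, c=1.0):
        self.Q = np.zeros((n_states, n_actions))
        self.N = np.ones((n_states, n_actions))   # visit counts (start at 1)
        self.t = 1
        self.alpha, self.gamma, self.c = alpha, gamma, c

    def act(self, s):
        # Pick the action maximizing Q plus a Hoeffding-style confidence bonus.
        bonus = self.c * np.sqrt(np.log(self.t) / self.N[s])
        return int(np.argmax(self.Q[s] + bonus))

    def update(self, s, a, r, s_next):
        # Standard Q-learning target computed independently by each agent.
        target = r + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])
        self.N[s, a] += 1
        self.t += 1
```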
This work considers mitigation of information leakage between communication and sensing operations in joint communication and sensing systems. Specifically, a discrete memoryless state-dependent broadcast channel model is studied in which (i) the presence of feedback enables a transmitter to simultaneously achieve reliable communication and channel state estimation; (ii) one of the receivers is treated as an eavesdropper whose state should be estimated but which should remain oblivious to a part of the transmitted information. The model abstracts the challenges behind security for joint communication and sensing if one views the channel state as a characteristic of the receiver, e.g., its location. For independent identically distributed (i.i.d.) states, perfect output feedback, and when part of the transmitted message should be kept secret, a partial characterization of the secrecy-distortion region is developed. The characterization is exact when the broadcast channel is either physically-degraded or reversely-physically-degraded. The characterization is also extended to the situation in which the entire transmitted message should be kept secret. The benefits of a joint approach compared to separation-based secure communication and state-sensing methods are illustrated with a binary joint communication and sensing model.
Designing encoding and decoding circuits to reliably send messages over many uses of a noisy channel is a central problem in communication theory. When studying the optimal transmission rates achievable with asymptotically vanishing error, it is usually assumed that these circuits can be implemented using noise-free gates. While this assumption is satisfied for classical machines in many scenarios, it is not expected to be satisfied in the near term for quantum machines, where decoherence leads to faults in the quantum gates. As a result, fundamental questions regarding the practical relevance of quantum channel coding remain open. By combining techniques from fault-tolerant quantum computation with techniques from quantum communication, we initiate the study of these questions. We introduce fault-tolerant versions of quantum capacities quantifying the optimal communication rates achievable with asymptotically vanishing total error when the encoding and decoding circuits are affected by gate errors with small probability. Our main results are threshold theorems for the classical and quantum capacity: for every quantum channel $T$ and every $\epsilon>0$ there exists a threshold $p(\epsilon,T)$ on the gate error probability below which rates larger than $C-\epsilon$ are fault-tolerantly achievable with vanishing overall communication error, where $C$ denotes the usual capacity. Our results are relevant not only for communication over large distances, but also on-chip, where distant parts of a quantum computer might need to communicate under higher levels of noise than that affecting the local gates.
We introduce the problem of determining whether the mode of the output distribution of a quantum circuit (given as a black box) is larger than a given threshold, named HighDist, and a similar problem based on the absolute values of the amplitudes, named HighAmp. We design quantum algorithms for promised versions of these problems whose space complexities are logarithmic in the size of the domain of the distribution, while the query complexities are independent of it. Using these, we further design algorithms to estimate the largest probability and the largest amplitude in the output distribution of a quantum black box. These results allow us to improve the query complexity of several recently studied problems, namely $k$-distinctness and its gapped version, estimating the largest frequency in an array, estimating the min-entropy of a distribution, and the non-linearity of a Boolean function, in the $\tilde{O}(1)$-qubit scenario. The time complexities of almost all of our algorithms have a small overhead over their query complexities, making them efficiently implementable on currently available quantum backends.
Computing the noisy sum of real-valued vectors is an important primitive in differentially private learning and statistics. In private federated learning applications, these vectors are held by client devices, leading to a distributed summation problem. Standard Secure Multiparty Computation (SMC) protocols for this problem are susceptible to poisoning attacks, where a client may exert a large influence on the sum without being detected. In this work, we propose a poisoning-robust private summation protocol in the multiple-server setting, recently studied in PRIO. We present a protocol for vector summation that verifies that the Euclidean norm of each contribution is approximately bounded. We show that by relaxing the security constraint in SMC to a differential-privacy-like guarantee, one can improve over PRIO in terms of communication requirements as well as client-side computation. Unlike SMC algorithms that inevitably cast integers to elements of a large finite field, our algorithms work over the integers/reals, which may allow for additional efficiencies.
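A full multi-server protocol with norm verification is beyond a short example; the sketch below only shows the centralized primitive being protected: each contribution's Euclidean norm is bounded before summation, so a poisoned client has bounded influence, and Gaussian noise calibrated to that bound is added. The function `private_bounded_sum` and its parameters are illustrative assumptions; the secret sharing, verification, and server interaction of the actual protocol are omitted.

```python
import numpy as np

def private_bounded_sum(client_vectors, norm_bound, noise_multiplier, seed=0):
    """Centralized stand-in for norm-bounded noisy vector summation.

    Each contribution is forced to have Euclidean norm at most `norm_bound`
    (limiting the influence of any single, possibly poisoned, client), then
    Gaussian noise calibrated to that bound is added to the total.
    """
    rng = np.random.default_rng(seed)
    total = np.zeros_like(client_vectors[0], dtype=float)
    for v in client_vectors:
        norm = np.linalg.norm(v)
        total += v if norm <= norm_bound else v * (norm_bound / norm)  # clip
    noise = rng.normal(0.0, noise_multiplier * norm_bound, size=total.shape)
    return total + noise

vectors = [np.array([0.1, 0.2, 0.3]), np.array([100.0, 0.0, 0.0])]  # second is "poisoned"
print(private_bounded_sum(vectors, norm_bound=1.0, noise_multiplier=0.5))
```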
Deep neural networks (DNNs) are increasingly being used in a variety of traditional radio frequency (RF) problems. Previous work has shown that while DNN classifiers are typically more accurate than traditional signal processing algorithms, they are vulnerable to intentionally crafted adversarial perturbations that can deceive the classifiers and significantly reduce their accuracy. Such intentional adversarial perturbations can be used by RF communications systems to evade reactive jammers and interception systems that rely on DNN classifiers to identify the target modulation scheme. While previous research on RF adversarial perturbations has established the theoretical feasibility of such attacks through simulation studies, critical questions concerning real-world implementation and viability remain unanswered. This work attempts to bridge this gap by defining class-specific and sample-independent adversarial perturbations that are shown to be effective, computationally feasible in real time, and time-invariant. We demonstrate the effectiveness of these attacks over the air across a physical channel using software-defined radios (SDRs). Finally, we show that these adversarial perturbations can be emitted from a source other than the communications device, making the attacks practical for devices that cannot manipulate their transmitted signals at the physical layer.
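The paper's exact construction is not given in the abstract. As a generic sketch of a class-specific, sample-independent perturbation, one can average signed input gradients over many examples of a class and scale the result to a power budget, FGSM-style. The helper `class_universal_perturbation`, the PyTorch dependency, and the `epsilon` budget are assumptions for illustration only, not the paper's method.

```python
import torch

def class_universal_perturbation(model, examples, target_class, epsilon):
    """Illustrative class-specific, sample-independent perturbation.

    Averages signed input gradients over many examples of one class, then
    scales to an epsilon budget; `model` maps an IQ-sample tensor batch to
    class logits. The single perturbation is then reused for every signal
    of that class, so it need not be recomputed per transmission.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    grad_sum = torch.zeros_like(examples[0])
    for x in examples:
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x.unsqueeze(0)), torch.tensor([target_class]))
        loss.backward()
        grad_sum += x.grad.detach()
    return epsilon * torch.sign(grad_sum)   # one perturbation for the whole class
```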
In the past few decades, artificial intelligence (AI) technology has experienced swift development, changing everyone's daily life and profoundly altering the course of human society. AI is intended to benefit humans by reducing human labor, bringing everyday convenience to human lives, and promoting social good. However, recent research and AI applications show that AI can cause unintentional harm to humans, such as making unreliable decisions in safety-critical scenarios or undermining fairness by inadvertently discriminating against one group. Trustworthy AI, which requires careful consideration to avoid the adverse effects that AI may bring to humans so that humans can fully trust and live in harmony with AI technologies, has therefore attracted immense attention recently. Recent years have witnessed a tremendous amount of research on trustworthy AI. In this work, we present a comprehensive survey of trustworthy AI from a computational perspective to help readers understand the latest technologies for achieving it. Trustworthy AI is a large and complex area involving multiple dimensions. We focus on six of the most crucial dimensions: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being. For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. We also discuss the harmonious and conflicting interactions among the different dimensions and outline potential directions for future research on trustworthy AI.
BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to their success. In this work, we focus on the interpretation of self-attention, one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features of interest, we propose a methodology and carry out a qualitative and quantitative analysis of the information encoded by BERT's individual heads. Our findings suggest that there is a limited set of attention patterns repeated across different heads, indicating that the overall model is overparametrized. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models.
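Head disabling of this kind can be reproduced with a standard per-head mask. Assuming the Hugging Face transformers library (not necessarily the authors' own code), a minimal sketch looks like this; the choice of which head to disable is purely illustrative.

```python
import torch
from transformers import BertModel, BertTokenizer

# Load a pre-trained BERT and build a head mask: 1 keeps a head, 0 disables it.
model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

num_layers = model.config.num_hidden_layers
num_heads = model.config.num_attention_heads
head_mask = torch.ones(num_layers, num_heads)
head_mask[0, 3] = 0.0        # e.g., disable head 3 in layer 0 (illustrative choice)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
outputs = model(**inputs, head_mask=head_mask)   # forward pass with the head disabled
print(outputs.last_hidden_state.shape)
```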