How to provide information security while fulfilling ultra-reliability and low-latency requirements is one of the major concerns for enabling next-generation ultra-reliable and low-latency communication (xURLLC) services, especially in machine-type communications. In this work, we investigate the reliability-security tradeoff by defining the leakage-failure probability, a metric that jointly characterizes the reliability and security performance of short-packet transmissions. We discover that, counter-intuitively, the system performance can be enhanced by allocating fewer resources to transmissions with finite blocklength (FBL) codes. To solve the corresponding joint resource allocation problem, we propose an optimization framework that leverages lower-bounded approximations of the decoding error probability in the FBL regime. We characterize the convexity of the reformulated problem and establish an efficient iterative search method with guaranteed convergence. To show the extensibility of the framework, we further discuss blocklength allocation schemes under practical reliability-security requirements, as well as transmissions with statistical channel state information (CSI). Numerical results verify the accuracy of the proposed approach and demonstrate the reliability-security tradeoff under various setups.
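As a concrete reference point, the sketch below evaluates the standard normal approximation for the FBL decoding error probability over an AWGN channel. It only illustrates the kind of expression such a framework manipulates; the paper's exact lower-bounded approximation may differ, and all parameter values are illustrative.

```python
# A minimal sketch of the normal approximation for the decoding error probability
# in the FBL regime over a complex AWGN channel; the lower-bounded approximation
# used in the paper may differ, and the SNR/blocklength/payload values are illustrative.
from math import e, erfc, log2, sqrt

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2))

def fbl_error_prob(snr, n, k):
    """Approximate decoding error probability for k information bits over n channel uses."""
    capacity = log2(1 + snr)                            # Shannon capacity in bits/use
    dispersion = (1 - (1 + snr) ** -2) * log2(e) ** 2   # channel dispersion in bits^2
    return q_func((n * capacity - k + 0.5 * log2(n)) / sqrt(n * dispersion))

# Sweeping the blocklength illustrates the reliability/resource-allocation tradeoff.
for n in (100, 200, 400):
    print(n, fbl_error_prob(snr=5.0, n=n, k=256))
```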
Centralized training with decentralized execution (CTDE) is a widely used learning paradigm that has achieved significant success in complex tasks. However, partial observability and the absence of effectively shared signals between agents often limit its effectiveness in fostering cooperation. While communication can address this challenge, it simultaneously reduces the algorithm's practicality. Drawing inspiration from cooperative learning in human teams, we propose a novel paradigm that facilitates a gradual shift from explicit communication to tacit cooperation. In the initial training stage, we promote cooperation by sharing relevant information among agents and concurrently reconstructing this information from each agent's local trajectory. We then combine the explicitly communicated information with the reconstructed information to obtain mixed information. Throughout training, we progressively reduce the proportion of explicitly communicated information, enabling a seamless transition to fully decentralized execution without communication. Experimental results in various scenarios demonstrate that the performance of our method without communication approaches or even surpasses that of QMIX and communication-based methods.
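The sketch below illustrates the core mixing idea under simple assumptions: explicitly communicated features and locally reconstructed features are blended, and the communication share is annealed toward zero over training. The linear schedule, array shapes, and names are illustrative, not the paper's implementation.

```python
# A minimal sketch of annealing from explicit communication to tacit cooperation:
# mix communicated information with locally reconstructed information, and decay
# the communication weight over training. Schedule and shapes are assumptions.
import numpy as np

def mix_info(communicated, reconstructed, step, total_steps):
    """Blend explicitly communicated features with features reconstructed from
    each agent's local trajectory; alpha -> 0 means fully decentralized execution."""
    alpha = max(0.0, 1.0 - step / total_steps)   # linear decay of the communication share
    return alpha * communicated + (1.0 - alpha) * reconstructed

# Toy usage: per-agent feature vectors at an early and a late training step.
comm = np.random.randn(4, 32)     # information shared by the other agents
recon = np.random.randn(4, 32)    # reconstruction from each agent's local trajectory
early = mix_info(comm, recon, step=1_000, total_steps=100_000)   # mostly communicated
late = mix_info(comm, recon, step=99_000, total_steps=100_000)   # mostly reconstructed
```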
Security and energy consumption are two of the biggest challenges in wireless sensor networks. Sensing deployments may contain large numbers of malicious nodes, and researchers have proposed several methods to detect such rogue nodes. To protect these networks and their data transmissions against attacks, the data must be secured. Data aggregation helps reduce the number of messages transmitted within the network, which in turn lowers total network energy consumption. Additionally, when decrypting the aggregated data, the base station can distinguish the encrypted and aggregated contributions on the basis of the cryptographic keys. This research examines the effectiveness of such data aggregation. To address these problems, the proposed system selects an efficient cluster agent based on its location relative to the access point and its available energy. Selecting an effective cluster agent reduces the sensor network's energy consumption and extends the network's lifespan. The cluster agent is in charge of compiling the data from each member node; it validates the data, discards erroneous readings before aggregation, and aggregates only verified data. To provide end-to-end confidentiality, elliptic curve ElGamal (ECE) encryption is used to secure the sensor data, and the encrypted information is forwarded to the cluster agent; only the base station (BS) can decrypt the data. Furthermore, an ID-based signature scheme is used to provide authenticity. This research also presents a technique for recovering lost data: the access point employs a cache-based backup system to search for lost data.
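The sketch below illustrates the additively homomorphic flavour of elliptic curve ElGamal that such aggregation schemes typically rely on: member nodes encrypt readings, the cluster agent adds ciphertexts without decrypting, and only the BS holding the private key recovers the aggregate. It uses a textbook toy curve and brute-force decoding of the small aggregate, so it is illustrative only and not the paper's construction.

```python
# A toy sketch of additively homomorphic EC-ElGamal aggregation (illustrative only):
# nodes encrypt readings, the cluster agent adds ciphertexts, and only the BS decrypts.
# Curve: y^2 = x^3 + 2x + 2 over F_17 with generator G = (5, 1) (a textbook example);
# a real deployment would use a standard curve such as secp256r1.
import random

P, A = 17, 2                 # field prime and curve coefficient a (b is not needed in the formulas)
G, ORDER = (5, 1), 19        # generator and its group order
INF = None                   # point at infinity

def ec_add(p1, p2):
    """Add two curve points."""
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow((2 * y1) % P, -1, P) % P
    else:
        lam = (y2 - y1) * pow((x2 - x1) % P, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Scalar multiplication by double-and-add."""
    acc = INF
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def ec_neg(pt):
    return INF if pt is INF else (pt[0], (-pt[1]) % P)

def encrypt(m, Q):
    """Encrypt a small integer m as (kG, mG + kQ); Q = dG is the BS public key."""
    k = random.randrange(1, ORDER)
    return ec_mul(k, G), ec_add(ec_mul(m, G), ec_mul(k, Q))

def add_ct(c1, c2):
    """Homomorphic addition: the cluster agent aggregates without decrypting."""
    return ec_add(c1[0], c2[0]), ec_add(c1[1], c2[1])

def decrypt_sum(ct, d):
    """BS computes C2 - d*C1 = (sum m)G and brute-forces the small discrete log."""
    M, acc = ec_add(ct[1], ec_mul(d, ec_neg(ct[0]))), INF
    for m in range(ORDER):
        if acc == M:
            return m
        acc = ec_add(acc, G)
    raise ValueError("aggregate too large for toy parameters")

d = random.randrange(1, ORDER)                 # BS private key
Q = ec_mul(d, G)                               # BS public key
cts = [encrypt(m, Q) for m in (3, 5, 2)]       # readings from three member nodes
agg = cts[0]
for c in cts[1:]:
    agg = add_ct(agg, c)
print(decrypt_sum(agg, d))                     # -> 10
```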
The evil twin attack on Wi-Fi networks has been a challenging security problem, and several solutions have been proposed for it. In general, an evil twin attack aims to exfiltrate data, such as Wi-Fi and service credentials, from client devices and is considered a serious threat at the MAC layer. IoT devices and their companion apps provide different pairing methods for provisioning. The "SmartConfig mode" proposed by Texas Instruments (TI) and the "access point pairing mode" (AP mode) are the most common pairing modes provided by the application developers and vendors of IoT devices. In particular, AP mode uses Wi-Fi connectivity to set up an IoT device: the device activates an access point to which the mobile device running the corresponding mobile application is required to connect. In this paper, we use the evil twin attack as a weapon to test the security posture of IoT devices that use a Wi-Fi network for setup. We have designed, implemented, and applied a system, called iTieProbe, that can be used in ethical hacking to discover certain vulnerabilities during such setup. AP mode completes successfully when the mobile device is able to communicate with the IoT device via a home router over a Wi-Fi network. Our proposed system, iTieProbe, is capable of discovering several serious vulnerabilities in commercial IoT devices that use AP mode or a similar approach. We evaluated iTieProbe's efficacy on 9 IoT devices, including IoT cameras, smart plugs, Echo Dot, and smart bulbs, and discovered that several of these devices face serious threats during setup, such as leaking the home router's Wi-Fi credentials or allowing a fake IoT device to be created.
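As a flavour of the kind of Wi-Fi probing involved (and not iTieProbe itself), the sketch below passively watches 802.11 beacons and flags an SSID that is advertised by more than one BSSID, a classic evil twin symptom. It assumes Scapy and a wireless interface already in monitor mode; the interface name is a placeholder.

```python
# A minimal sketch (not iTieProbe) of passive 802.11 beacon monitoring that flags
# an SSID advertised by more than one BSSID -- a classic evil-twin symptom.
# Assumes a wireless interface in monitor mode; "wlan0mon" is a placeholder name.
from collections import defaultdict
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

ssid_to_bssids = defaultdict(set)

def handle_beacon(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    ssid = pkt[Dot11Elt].info.decode(errors="ignore")   # first element of a beacon is the SSID
    bssid = pkt[Dot11].addr2                             # transmitter address (the AP's BSSID)
    ssid_to_bssids[ssid].add(bssid)
    if len(ssid_to_bssids[ssid]) > 1:
        print(f"[!] SSID {ssid!r} seen from multiple BSSIDs: {ssid_to_bssids[ssid]}")

sniff(iface="wlan0mon", prn=handle_beacon, store=False)
```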
Private information retrieval (PIR) is a privacy setting that allows a user to download a required message from a set of messages stored in a system of databases without revealing the index of the required message to the databases. PIR was introduced under computational privacy guarantees and has recently been re-formulated to provide information-theoretic guarantees, resulting in \emph{information-theoretic privacy}. Subsequently, many important variants of the basic PIR problem have been studied, focusing on fundamental performance limits as well as achievable schemes. More recently, a variety of conceptual extensions of PIR have been introduced, such as private set intersection (PSI), private set union (PSU), and private read-update-write (PRUW). Some of these extensions are mainly intended to solve the privacy issues that arise in distributed learning applications due to the extensive dependence of machine learning on users' private data. In this article, we first provide an introduction to basic PIR with examples, followed by a brief description of its immediate variants. We then provide a detailed discussion of the conceptual extensions of PIR, along with potential research directions.
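To make the basic setting concrete, the sketch below walks through the classic two-database, information-theoretically private scheme: the user sends a uniformly random index set to one (non-colluding) database and the same set with the desired index toggled to the other, so each database sees a uniform query while the XOR of the two answers yields the requested message. This illustrates only basic PIR, not the extensions discussed here.

```python
# A small worked example of the classic 2-database information-theoretic PIR scheme.
# Each database stores all K messages and never learns which one the user wants.
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def db_answer(messages, index_set):
    """A database XORs together the messages whose indices are in the query set."""
    answer = bytes(len(messages[0]))
    for i in index_set:
        answer = xor_bytes(answer, messages[i])
    return answer

def retrieve(messages, i):
    K = len(messages)
    set1 = {j for j in range(K) if secrets.randbits(1)}   # uniformly random subset
    set2 = set1 ^ {i}                                      # same subset with index i toggled
    a1 = db_answer(messages, set1)                         # answer from database 1
    a2 = db_answer(messages, set2)                         # answer from database 2
    return xor_bytes(a1, a2)                               # equals messages[i]

msgs = [b"msg0", b"msg1", b"msg2", b"msg3"]
assert retrieve(msgs, 2) == b"msg2"
```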
In this work, we carry out the first in-depth privacy analysis of Decentralized Learning -- a collaborative machine learning framework aimed at addressing the main limitations of federated learning. We introduce a suite of novel attacks for both passive and active decentralized adversaries. We demonstrate that, contrary to what is claimed by its proponents, decentralized learning does not offer any security advantage over federated learning. Rather, it increases the attack surface, enabling any user in the system to perform privacy attacks such as gradient inversion, and even to gain full control over honest users' local models. We also show that, given the state of the art in protections, privacy-preserving configurations of decentralized learning require fully connected networks, losing any practical advantage over the federated setup and therefore completely defeating the objective of the decentralized approach.
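To make the enlarged attack surface concrete, the sketch below shows one round of simple gossip averaging under assumed names and topology: every node hands its full local model directly to its neighbours, so a curious neighbour obtains it in the clear and can mount the attacks described above without any server involvement.

```python
# A minimal sketch of one decentralized (gossip) averaging round, included only to
# make the attack surface concrete: each node sends its full local model directly
# to its neighbours. Topology, shapes, and names are illustrative.
import numpy as np

def gossip_round(models, neighbors):
    """models: {node: parameter vector}; neighbors: {node: list of neighbour nodes}."""
    received = {v: {u: models[u] for u in neighbors[v]} for v in models}
    new_models = {v: np.mean([models[v]] + list(received[v].values()), axis=0)
                  for v in models}
    return new_models, received      # `received` is exactly what a curious node observes

models = {v: np.random.randn(8) for v in "ABC"}
topology = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
new_models, received = gossip_round(models, topology)
# received["B"] holds A's and C's raw local parameters -- a direct starting point
# for privacy attacks such as gradient inversion.
```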
Unmanned aerial vehicle (UAV)-assisted communication is becoming a mainstream technology for providing improved coverage to internet-of-things (IoT) devices. Rapid deployment, portability, and flexibility are some of the fundamental characteristics of UAVs, which make them ideal for effectively managing emergency-oriented IoT applications. This paper studies a UAV-assisted wireless IoT network relying on non-orthogonal multiple access (NOMA) to facilitate uplink connectivity for devices spread over a disaster region. The UAV relays the information to the cellular base station (BS) using the decode-and-forward relaying protocol. By jointly utilizing unsupervised machine learning (ML) and solving the resulting non-convex problem, we maximize the total energy efficiency (EE) of the IoT devices spread over the disaster region. Our proposed approach uses a combination of k-medoids clustering and silhouette analysis to perform resource allocation, whereas power optimization is performed using iterative methods. In comparison to the exhaustive search method, our proposed scheme solves the EE maximization problem with much lower complexity while also reducing the overall energy consumption of the IoT devices. Moreover, in comparison to a modified version of the greedy algorithm, our proposed approach improves the total EE of the system by 19% for a fixed target of 50k bits.
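The sketch below illustrates only the clustering part of such a pipeline under assumed parameters: a simple k-medoids routine over device positions, with the number of clusters selected by silhouette analysis. The NOMA power allocation and the iterative EE optimization are not shown, and all values are illustrative.

```python
# A minimal sketch of the clustering step: simple k-medoids over IoT device
# positions, with the number of clusters picked by silhouette analysis.
import numpy as np
from sklearn.metrics import pairwise_distances, silhouette_score

def k_medoids(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    D = pairwise_distances(X)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)            # assign each point to its nearest medoid
        new_medoids = np.array([
            np.where(labels == c)[0][
                np.argmin(D[np.ix_(labels == c, labels == c)].sum(axis=1))
            ]
            for c in range(k)                                  # most central member becomes the medoid
        ])
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

devices = np.random.default_rng(1).uniform(0, 1000, size=(60, 2))  # device positions in metres
best_k = max(range(2, 8), key=lambda k: silhouette_score(devices, k_medoids(devices, k)[0]))
labels, medoids = k_medoids(devices, best_k)
print("clusters chosen by silhouette analysis:", best_k)
```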
Operators from various industries have been pushing the adoption of wireless sensing nodes for industrial monitoring, and such efforts have produced sizeable condition-monitoring datasets that can be used to build diagnosis algorithms capable of warning maintenance engineers of impending failure or identifying current system health conditions. However, a single operator may not have a sufficiently large fleet of systems or component units to collect enough data to develop data-driven algorithms. Collecting a satisfactory quantity of fault patterns for safety-critical systems is particularly difficult due to the rarity of faults. Federated learning (FL) has emerged as a promising solution for leveraging datasets from multiple operators to train a decentralized asset fault diagnosis model while maintaining data confidentiality. However, considerable obstacles remain in optimizing the federation strategy without leaking sensitive data and in addressing client dataset heterogeneity, which is particularly prevalent in fault diagnosis applications due to the high diversity of operating conditions and system configurations. To address these two challenges, we propose a novel clustering-based FL algorithm in which clients are clustered for federation based on dataset similarity. To quantify dataset similarity between clients without explicitly sharing data, each client sets aside a local test dataset and evaluates the other clients' model prediction accuracy and uncertainty on it. Clients are then clustered for FL based on relative prediction accuracy and uncertainty.
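The sketch below illustrates the clustering step under simple assumptions: a cross-accuracy matrix (client i scoring client j's model on i's held-out test data) is symmetrized into a distance matrix and fed to average-linkage hierarchical clustering. How accuracy and uncertainty are combined, and the distance threshold, are assumptions rather than the paper's exact procedure.

```python
# A minimal sketch of clustering clients for federation without sharing data:
# each client scores every other client's model on its own held-out test set, and
# the resulting cross-accuracy matrix drives hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_clients(cross_accuracy, threshold=0.2):
    """cross_accuracy[i, j]: accuracy of client j's model on client i's local test set."""
    similarity = (cross_accuracy + cross_accuracy.T) / 2       # symmetrize the cross-evaluation
    distance = 1.0 - similarity
    np.fill_diagonal(distance, 0.0)
    condensed = squareform(distance, checks=False)
    return fcluster(linkage(condensed, method="average"), t=threshold, criterion="distance")

# Toy example: clients 0-2 behave alike, clients 3-4 behave alike.
acc = np.array([
    [0.95, 0.90, 0.92, 0.55, 0.50],
    [0.91, 0.94, 0.90, 0.52, 0.54],
    [0.89, 0.88, 0.93, 0.50, 0.51],
    [0.53, 0.50, 0.52, 0.96, 0.92],
    [0.51, 0.54, 0.50, 0.91, 0.95],
])
print(cluster_clients(acc))   # e.g. [1 1 1 2 2]
```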
We propose a security verification framework for cryptographic protocols using machine learning. In recent years, as cryptographic protocols have become more complex, research has increasingly focused on automatic verification techniques, the main one being formal verification. However, formal verification has two problems: it requires a large amount of computation time, and decidability is not guaranteed. We propose a method that enables security verification with computation time that is linear in the size of the protocol by using machine learning. When training machine learning models for security verification of cryptographic protocols, a sufficient amount of data, i.e., a set of protocols with security labels, is difficult to collect from academic papers and other sources. To overcome this issue, we propose a way to create arbitrarily large datasets by automatically generating random protocols and assigning security labels to them using formal verification tools. Furthermore, to exploit the structural features of protocols, we construct a neural network that processes a protocol along its series and tree structures. We evaluate the proposed method by applying it to the verification of practical cryptographic protocols.
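The sketch below shows the shape of the dataset-generation loop: write out a randomly generated protocol, run an off-the-shelf formal verifier on it, and turn the verdict into a training label. The ProVerif-style command, output parsing, and file layout are assumptions and not the paper's tooling; the protocol generator is left abstract.

```python
# A minimal sketch of building a labelled dataset by running a formal verification
# tool on automatically generated protocols. The generator, the ProVerif-style
# invocation, and the output parsing below are assumptions, not the paper's tooling.
import subprocess
from pathlib import Path

def label_protocol(spec_path: Path) -> int:
    """Return 1 if the verifier reports the security query holds, else 0."""
    result = subprocess.run(
        ["proverif", str(spec_path)],            # assumed CLI; adjust to your verifier
        capture_output=True, text=True, timeout=300,
    )
    # ProVerif-style output typically contains lines such as
    # "RESULT ... is true." / "RESULT ... is false."  (assumed format)
    return int("is true" in result.stdout and "is false" not in result.stdout)

def build_dataset(generator, n_samples, out_dir=Path("protocols")):
    out_dir.mkdir(exist_ok=True)
    dataset = []
    for i in range(n_samples):
        spec = generator()                       # randomly generated protocol text
        path = out_dir / f"protocol_{i}.pv"
        path.write_text(spec)
        dataset.append((spec, label_protocol(path)))
    return dataset
```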
Buhrman, Cleve and Wigderson (STOC'98) showed that for every Boolean function f : {-1,1}^n to {-1,1} and G in {AND_2, XOR_2}, the bounded-error quantum communication complexity of the composed function f o G is O(Q(f) log n), where Q(f) denotes the bounded-error quantum query complexity of f. This is achieved by Alice running the optimal quantum query algorithm for f, using a round of O(log n) qubits of communication to implement each query. This is in contrast with the classical setting, where it is easy to show that R^{cc}(f o G) is at most 2R(f), where R^{cc} and R denote bounded-error communication and query complexity, respectively. We show that the O(log n) overhead is required for some functions in the quantum setting, and thus the BCW simulation is tight. We note that prior to our work, the possibility of Q^{cc}(f o G) = O(Q(f)), for all f and all G in {AND_2, XOR_2}, had not been ruled out. More specifically, we show the following.
- We show that the log n overhead is *not* required when f is symmetric, generalizing a result of Aaronson and Ambainis for the Set-Disjointness function (Theory of Computing'05).
- In order to prove the above, we design an efficient distributed version of noisy amplitude amplification that allows us to prove the result when f is the OR function.
- In view of our first result above, one may ask whether the log n overhead in the BCW simulation can be avoided even when f is transitive, which is a weaker notion of symmetry. We give a strong negative answer by showing that the log n overhead is still necessary for some transitive functions, even when we allow the quantum communication protocol an error probability that can be arbitrarily close to 1/2.
- We also give, among other things, a general recipe to construct functions for which the log n overhead is required in the BCW simulation in the bounded-error communication model.
Games and simulators can be valuable platforms for executing complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting the development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of AI algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with AI algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.