The Internet of Things (IoT) comprises a heterogeneous mix of smart devices that vary widely in size, usage, energy capacity, computational power, and so on. IoT devices are typically connected to the Cloud via Fog nodes for fast processing and response times. In the rush to deploy devices quickly into the real world and to maximize market share, manufacturers often treat security as an afterthought. Well-known security concerns in the IoT include data confidentiality, device authentication, location privacy, and device integrity. We believe that the majority of security schemes proposed to date are too heavyweight to be of practical value for the IoT. In this paper we propose a lightweight encryption scheme loosely based on the classic one-time pad, and make use of hash functions for the generation and management of keys. Our scheme imposes minimal computational and storage requirements on the network nodes, which makes it a viable candidate for encrypting data transmitted by IoT devices in the Fog.
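As a rough illustration of the general idea described in this abstract (a one-time-pad-style XOR with a hash-derived keystream), the following minimal Python sketch iterates SHA-256 to expand a shared seed into a keystream. It is an assumption-laden sketch, not the authors' exact construction; the function names, the seed, and the per-message refresh policy noted in the comments are all illustrative.

```python
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    """Derive a keystream by iterating a hash function (illustrative construction only)."""
    out = bytearray()
    block = seed
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:length])

def xor_encrypt(plaintext: bytes, seed: bytes) -> bytes:
    """OTP-style encryption: XOR plaintext with the hash-derived keystream.
    Decryption is the same operation with the same seed."""
    ks = keystream(seed, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# Usage: node and Fog gateway hold the shared seed; to retain OTP-like properties,
# the seed (and hence the keystream) must never be reused across messages.
ct = xor_encrypt(b"sensor reading: 23.5C", b"shared-secret-seed")
pt = xor_encrypt(ct, b"shared-secret-seed")
assert pt == b"sensor reading: 23.5C"
```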
Hyperledger Fabric (HLF), one of the most popular private blockchains, has recently received attention for blockchain-enabled Internet of Things (IoT). However, for IoT applications handling time-sensitive data, the processing latency of HLF has emerged as a new challenge. In this article, we therefore establish a practical latency model for HLF-enabled IoT. We first discuss the structure and transaction flow of HLF-enabled IoT. After deploying a real HLF network, we capture the latencies that each transaction experiences and show that the total latency of HLF can be modeled as a Gamma distribution, which we validate with a goodness-of-fit test (the Kolmogorov-Smirnov (KS) test). We also provide the parameter values of the modeled latency distribution for various HLF environments. Furthermore, we explore the impact of three important HLF parameters, namely the transaction generation rate, the block size, and the block-generation timeout, on HLF latency. As a result, this article provides design insights for minimizing the latency of HLF-enabled IoT.
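The modeling step described here (fit a Gamma distribution to measured transaction latencies and validate it with a KS test) can be reproduced with standard SciPy calls, as in the sketch below. The synthetic `latencies` array is a placeholder assumption; in the article's setting these values come from measurements on a real HLF deployment.

```python
import numpy as np
from scipy import stats

# Placeholder for measured end-to-end transaction latencies (seconds) from an HLF network.
rng = np.random.default_rng(0)
latencies = rng.gamma(shape=2.0, scale=0.5, size=1000)

# Fit a Gamma distribution by maximum likelihood (location fixed at 0).
shape, loc, scale = stats.gamma.fit(latencies, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted Gamma distribution.
ks_stat, p_value = stats.kstest(latencies, 'gamma', args=(shape, loc, scale))
print(f"shape={shape:.3f}, scale={scale:.3f}, KS statistic={ks_stat:.4f}, p-value={p_value:.4f}")
```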
Federated learning has recently emerged as a paradigm promising the benefits of harnessing rich data from diverse sources to train high-quality models, with the salient feature that training datasets never leave local devices; only model updates are computed locally and shared for aggregation to produce a global model. While federated learning greatly alleviates privacy concerns compared with learning over centralized data, sharing model updates still poses privacy risks. In this paper, we present a system design that offers efficient protection of individual model updates throughout the learning procedure, allowing clients to provide only obscured model updates while a cloud server can still perform the aggregation. Our federated learning system departs from prior work by supporting lightweight encryption and aggregation, as well as resilience against dropped-out clients with no impact on their participation in future rounds. Meanwhile, prior work largely overlooks bandwidth-efficiency optimization in the ciphertext domain and security against an actively adversarial cloud server; we fully explore both in this paper and provide effective and efficient mechanisms. Extensive experiments over several benchmark datasets (MNIST, CIFAR-10, and CelebA) show that our system achieves accuracy comparable to the plaintext baseline, with practical performance.
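To make the "obscured updates that still aggregate correctly" idea concrete, here is a generic pairwise-masking sketch in the spirit of secure-aggregation protocols. It is not this paper's specific lightweight construction: `seed_fn` stands in for a key agreed between each pair of clients (e.g., via a key exchange), and dropout handling is omitted.

```python
import numpy as np

def pairwise_mask(client_id, peer_ids, dim, seed_fn):
    """Sum of pairwise masks: +PRG(i,j) if i < j, -PRG(i,j) if i > j,
    so the masks cancel when all clients' masked updates are summed."""
    mask = np.zeros(dim)
    for peer in peer_ids:
        if peer == client_id:
            continue
        rng = np.random.default_rng(seed_fn(client_id, peer))
        m = rng.standard_normal(dim)
        mask += m if client_id < peer else -m
    return mask

def seed_fn(i, j):
    # Stand-in for a pairwise-agreed secret between clients i and j.
    return hash((min(i, j), max(i, j))) % (2**32)

dim, clients = 8, [0, 1, 2]
updates = {c: np.full(dim, float(c + 1)) for c in clients}  # plaintext model updates
masked = {c: updates[c] + pairwise_mask(c, clients, dim, seed_fn) for c in clients}

# The server only sees masked updates, yet their sum equals the true aggregate.
agg = sum(masked.values())
assert np.allclose(agg, sum(updates.values()))
```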
In recent years, Fully Homomorphic Encryption (FHE) has undergone several breakthroughs and advancements, leading to a leap in performance. Today, performance is no longer a major barrier to adoption. Instead, it is the complexity of developing an efficient FHE application that currently limits deploying FHE in practice and at scale. Several FHE compilers have emerged recently to ease FHE development. However, none of them addresses how to automatically transform imperative programs into secure and efficient FHE implementations. This is a fundamental issue that needs to be addressed before we can realistically expect broader use of FHE. Automating these transformations is challenging because the restrictive set of operations in FHE and their non-intuitive performance characteristics require programs to be drastically restructured to achieve efficiency. In addition, existing tools are monolithic and focus on individual optimizations, so they fail to fully address the needs of end-to-end FHE development. In this paper, we present HECO, a new end-to-end design for FHE compilers that takes high-level imperative programs and emits efficient and secure FHE implementations. In our design, we take a broader view of FHE development, extending the scope of optimizations beyond the cryptographic challenges that existing tools focus on.
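As a plaintext analogy for the kind of restructuring this abstract refers to: batched FHE schemes expose packed "SIMD slots" that cannot be indexed individually, so an imperative element-wise loop must typically be rewritten into whole-vector operations plus rotations. The NumPy toy below illustrates this with a rotate-and-add summation; it is not HECO's API or output, just an assumed analogy where `np.roll` stands in for a ciphertext rotation.

```python
import numpy as np

def rotate(v, k):
    """Stand-in for a ciphertext rotation: cyclically shift the packed slots by k."""
    return np.roll(v, -k)

# Imperative style: indexes individual elements, which packed ciphertexts do not allow.
def sum_imperative(v):
    total = 0.0
    for i in range(len(v)):
        total += v[i]
    return total

# FHE-friendly style: log2(n) rotate-and-add steps over the whole packed vector,
# the sort of transformation an FHE compiler must perform automatically.
def sum_rotate_and_add(v):
    n = len(v)  # assume n is a power of two
    acc = v.copy()
    k = 1
    while k < n:
        acc = acc + rotate(acc, k)
        k *= 2
    return acc[0]  # every slot now holds the total

v = np.arange(8, dtype=float)
assert sum_imperative(v) == sum_rotate_and_add(v) == 28.0
```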
Augmented reality (AR) has drawn great attention in recent years. However, current AR devices have drawbacks such as limited computational capability and high power consumption. To address these limitations, mobile edge computing (MEC) can be introduced as a key technology to offload data and computation from AR devices to MEC servers over 5th Generation Mobile Communication Technology (5G) networks. To this end, a context-based MEC platform for AR services in 5G networks is proposed in this paper. On this platform, MEC serves as the data processing center while AR devices are simplified into universal input/output devices, which overcomes their limitations and achieves a better user experience. Moreover, a proof-of-concept (PoC) hardware prototype of the platform and two typical use cases, providing AR navigation and face-recognition services respectively, are implemented to demonstrate the feasibility and effectiveness of the platform. Finally, the performance of the platform is evaluated numerically; the results validate the system design and agree well with the design expectations.
Even though recent years have seen many attacks exposing severe vulnerabilities in federated learning (FL), a holistic understanding of what enables these attacks and how they can be mitigated effectively is still lacking. In this work we demystify the inner workings of existing targeted attacks. We provide new insights into why these attacks are possible and why a definitive solution to FL robustness is challenging. We show that the need for ML algorithms to memorize tail data has significant implications for FL integrity. This phenomenon has largely been studied in the context of privacy; our analysis sheds light on its implications for ML integrity. In addition, we show how constraints on client updates can effectively improve robustness. To incorporate these constraints into secure FL protocols, we design and develop RoFL, a new secure FL system that enables constraints to be expressed and enforced on high-dimensional encrypted model updates. In essence, RoFL augments existing secure FL aggregation protocols with zero-knowledge proofs. Due to the scale of FL, realizing these checks efficiently presents a paramount challenge. We introduce several optimizations at the ML layer that allow us to reduce the number of cryptographic checks needed while preserving the effectiveness of our defenses. We show that RoFL scales to the sizes of models used in real-world FL deployments.
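To illustrate why constraining client updates helps robustness, the sketch below enforces an L2-norm bound on submitted updates before aggregation, which filters out the scaled or boosted updates typical of targeted attacks. This is a plaintext illustration under assumed names and parameters; in a RoFL-style system the corresponding check is enforced on encrypted updates via zero-knowledge proofs, and the actual constraints and proof system are more involved.

```python
import numpy as np

def check_l2_bound(update: np.ndarray, bound: float) -> bool:
    """Accept a client update only if its L2 norm is within the bound.
    Shown in plaintext purely to illustrate the constraint being enforced."""
    return float(np.linalg.norm(update)) <= bound

def robust_aggregate(updates, bound):
    accepted = [u for u in updates if check_l2_bound(u, bound)]
    return np.mean(accepted, axis=0) if accepted else None

rng = np.random.default_rng(1)
honest = [rng.normal(0, 0.01, size=100) for _ in range(9)]
malicious = [rng.normal(0, 10.0, size=100)]  # a boosted/scaled poisoned update
model_delta = robust_aggregate(honest + malicious, bound=0.5)  # malicious update is rejected
```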
The industrial Internet of Things (IIoT) plays an integral role in Industry 4.0 and produces massive amounts of data for industrial intelligence. These data reside on decentralized devices in modern factories. To protect the confidentiality of industrial data, federated learning (FL) was introduced to collaboratively train shared machine learning models. However, the local data collected by different devices are skewed in class distribution, which degrades industrial FL performance. This challenge has been widely studied at the mobile edge, but existing solutions ignore the rapidly changing streaming data and the clustered nature of factory devices, and, more seriously, they may threaten data security. In this paper, we propose FedGS, a hierarchical cloud-edge-end FL framework for 5G-empowered industries, to improve industrial FL performance on non-i.i.d. data. Taking advantage of naturally clustered factory devices, FedGS uses a gradient-based binary permutation algorithm (GBP-CS) to select a subset of devices within each factory and build homogeneous super nodes that participate in FL training. Then, we propose a compound-step synchronization protocol to coordinate the training process within and among these super nodes, which shows great robustness against data heterogeneity. The proposed methods are time-efficient, can adapt to dynamic environments, and do not expose confidential industrial data to risky manipulation. We prove that FedGS has better convergence performance than FedAvg and give a relaxed condition under which FedGS is more communication-efficient. Extensive experiments show that FedGS improves accuracy by 3.5% and reduces training rounds by 59% on average, confirming its superior effectiveness and efficiency on non-i.i.d. data.
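To convey what "building homogeneous super nodes" means in practice, the sketch below greedily picks a device subset within one factory so that the pooled label histogram is as close to uniform as possible. This is only an illustrative heuristic under assumed inputs (per-device label histograms); FedGS's actual GBP-CS is a gradient-based binary permutation search, not this greedy procedure.

```python
import numpy as np

def greedy_select(label_hists, k):
    """Greedily pick k devices whose combined label histogram is closest to uniform.
    Illustrative stand-in for class-balanced super-node construction."""
    n_classes = len(label_hists[0])
    uniform = np.ones(n_classes) / n_classes
    selected, combined = [], np.zeros(n_classes)
    remaining = list(range(len(label_hists)))
    for _ in range(k):
        def score(i):
            h = combined + label_hists[i]
            return np.linalg.norm(h / h.sum() - uniform)  # distance to a uniform class mix
        best = min(remaining, key=score)
        selected.append(best)
        combined += label_hists[best]
        remaining.remove(best)
    return selected

# Toy example: 5 devices in one factory, 3 classes, skewed local label counts.
hists = [np.array(h, dtype=float) for h in
         [[90, 5, 5], [5, 90, 5], [5, 5, 90], [80, 15, 5], [10, 10, 80]]]
print(greedy_select(hists, k=3))  # a subset whose pooled data is roughly class-balanced
```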
Fueled by advances in distributed deep learning (DDL), recent years have witnessed a rapidly growing demand for resource-intensive distributed/parallel computing to process DDL jobs. To resolve network communication bottlenecks and load-balancing issues in distributed computing, the so-called ``ring-all-reduce'' decentralized architecture has been increasingly adopted to remove the need for dedicated parameter servers. To date, however, there remains a lack of theoretical understanding of how to design resource optimization algorithms for efficiently scheduling ring-all-reduce DDL jobs in computing clusters. This motivates us to fill the gap by proposing a series of new resource scheduling designs for ring-all-reduce DDL jobs. Our contributions in this paper are threefold: i) we propose a new resource scheduling analytical model for ring-all-reduce deep learning, which covers a wide range of objectives in DDL performance optimization (e.g., excessive-training avoidance, energy efficiency, fairness); ii) based on the proposed analytical model, we develop an efficient resource scheduling algorithm called GADGET (greedy ring-all-reduce distributed graph embedding technique), which enjoys a provably strong performance guarantee; iii) we conduct extensive trace-driven experiments to demonstrate the effectiveness of the GADGET approach and its superiority over the state of the art.
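For context on why ring-all-reduce removes the parameter-server bottleneck mentioned above: in one all-reduce, each worker transfers roughly 2(N-1)/N times the model size, which stays nearly constant as N grows, whereas a central parameter server's traffic grows linearly with N. The small sketch below computes both quantities; it reflects the standard ring-all-reduce communication model, not this paper's scheduling model.

```python
def ring_allreduce_traffic(model_bytes: float, n_workers: int) -> float:
    """Per-worker bytes transferred in one ring-all-reduce:
    2 * (N - 1) / N * model size (reduce-scatter plus all-gather)."""
    return 2 * (n_workers - 1) / n_workers * model_bytes

def parameter_server_traffic(model_bytes: float, n_workers: int) -> float:
    """Bytes handled by a single parameter server: it receives and sends a full
    model copy per worker, so its load grows linearly with N."""
    return 2 * n_workers * model_bytes

M = 100e6  # 100 MB model
for n in (4, 16, 64):
    print(n, ring_allreduce_traffic(M, n) / 1e6, parameter_server_traffic(M, n) / 1e6)
```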
Contextual bandit algorithms have recently been studied under the federated learning setting to satisfy the demand for keeping data decentralized and pushing the learning of bandit models to the client side. However, constrained by the required communication efficiency, existing solutions are restricted to linear models in order to exploit their closed-form solutions for parameter estimation. Such a restricted model choice greatly hampers these algorithms' practical utility. In this paper, we take the first step toward addressing this challenge by studying generalized linear bandit models under a federated learning setting. We propose a communication-efficient solution framework that employs online regression for local updates and offline regression for global updates. We rigorously prove that, although the setting is more general and challenging, our algorithm attains a sub-linear rate in both regret and communication cost, which is also validated by our extensive empirical evaluations.
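The online-local/offline-global split mentioned above can be pictured as follows: between communications, each client takes cheap online gradient steps on its generalized linear (here, logistic) model, and the server periodically recomputes the global estimate by a batch regression over the data or statistics communicated so far. The sketch below is a hedged illustration under assumed names and a simple logistic link; it does not reproduce the paper's actual estimators, synchronization rule, or exploration strategy.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_online_update(theta, x, r, lr=0.1):
    """Client-side online regression: one SGD step on the logistic (GLM) loss
    for a newly observed context x and reward r; per-round cost is O(d)."""
    grad = (sigmoid(x @ theta) - r) * x
    return theta - lr * grad

def global_offline_update(X, R, theta0, lr=0.5, iters=200):
    """Server-side offline regression: batch gradient descent toward the
    regularized logistic MLE over all data communicated so far."""
    theta = theta0.copy()
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ theta) - R) / len(R) + 1e-3 * theta
        theta -= lr * grad
    return theta

# Toy round: a client takes online steps on fresh observations, then the server
# recomputes the global estimate offline at a communication event.
d = 5
rng = np.random.default_rng(2)
theta_local = np.zeros(d)
X = rng.normal(size=(50, d))
R = (sigmoid(X @ np.ones(d)) > rng.random(50)).astype(float)
for x, r in zip(X, R):
    theta_local = local_online_update(theta_local, x, r)
theta_global = global_offline_update(X, R, theta_local)
```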
A memoryless state-dependent multiple-access channel (MAC) is considered, where two transmitters wish to convey their messages to a single receiver while simultaneously sensing (estimating) their respective states via generalized feedback. For this channel, an improved inner bound is provided on the \emph{fundamental rate-distortion tradeoff}, which characterizes the communication rates the transmitters can achieve while simultaneously ensuring that their state estimates satisfy desired distortion criteria. The new inner bound is based on a scheme where each transmitter codes over the generalized feedback so as to improve the state estimation at the other transmitter. This contrasts with the schemes proposed for point-to-point and broadcast channels, where coding is used only for the transmission of messages and the optimal estimators operate on a symbol-by-symbol basis on the sequences of channel inputs and feedback outputs.
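For concreteness, the distortion criteria referred to above are typically of the following block-average form, where $d_i$ is a per-letter distortion measure, $S_{i,t}$ and $\hat{S}_{i,t}$ are the state and its estimate at transmitter $i$ at time $t$, and $D_i$ is the target distortion; the notation is an assumption made here for illustration, not taken from the abstract itself.

```latex
\limsup_{n\to\infty}\; \mathbb{E}\!\left[\frac{1}{n}\sum_{t=1}^{n} d_i\bigl(S_{i,t},\,\hat{S}_{i,t}\bigr)\right] \le D_i, \qquad i \in \{1,2\}.
```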
In recent years, mobile devices have developed rapidly, gaining stronger computational capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and uploads only the computation results, instead of the original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.