We present a low-overhead mechanism for self-sovereign identification and communication of IoT agents in constrained networks. Our main contribution is to enable the native use of Decentralized Identifiers (DIDs) and DID-based secure communication on constrained networks, whereas previous works either did not consider the issue or relied on proxy-based architectures. We propose a new extension to DIDs along with a more concise serialization method for DID metadata. Moreover, to reduce the security overhead on transmitted messages, we adopted a binary message envelope. We implemented these proposals within the context of Swarm Computing, an approach for decentralized IoT. Results show that our proposal reduces the size of identity metadata by almost a factor of four and the security overhead by up to a factor of five. We observed that both techniques are required to enable operation on constrained networks.
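As a rough illustration of the kind of savings a more concise, binary serialization can yield for DID metadata, the sketch below compares JSON and CBOR encodings of a toy DID document. The paper's actual DID extension and serialization format are not reproduced here; CBOR, the cbor2 library, and the sample document are assumptions used only to make the size argument concrete.

```python
# Hypothetical illustration: shrinking DID metadata with a binary encoding.
# CBOR and the sample document below are assumptions, not the paper's format.
import json
import cbor2  # pip install cbor2

did_document = {
    "id": "did:example:iot-node-42",
    "verificationMethod": [{
        "id": "did:example:iot-node-42#key-1",
        "type": "Ed25519VerificationKey2020",
        "publicKeyMultibase": "z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
    }],
    "service": [{"id": "#msg", "type": "MessagingService",
                 "serviceEndpoint": "coap://[2001:db8::1]/msg"}],
}

json_bytes = json.dumps(did_document, separators=(",", ":")).encode()
cbor_bytes = cbor2.dumps(did_document)
print(f"JSON: {len(json_bytes)} B, CBOR: {len(cbor_bytes)} B "
      f"({len(json_bytes) / len(cbor_bytes):.1f}x smaller)")
```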
New-generation wireless networks are designed to support a wide range of services with diverse key performance indicator (KPI) requirements. A fundamental component of such networks, and a pivotal factor in fulfilling the target KPIs, is the virtual radio access network (vRAN), which allows high flexibility in the control of the radio link. However, to fully exploit the potential of vRANs, an efficient mapping of the rapidly varying context to radio control decisions is not only essential but also challenging, owing to the interdependence of user traffic demand, channel conditions, and resource allocation. Here, we propose CAREM, a reinforcement learning framework for dynamic radio resource allocation in heterogeneous vRANs, which selects the best available link and transmission parameters for packet transfer so as to meet the KPI requirements. To show its effectiveness, we develop a proof-of-concept testbed. Experimental results demonstrate that CAREM enables efficient radio resource allocation under different settings and traffic demands. Moreover, compared to the closest existing scheme based on a neural network and to standard LTE, CAREM improves packet loss and latency by one order of magnitude, while it provides a 65% latency improvement relative to the contextual bandit approach.
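The abstract does not detail CAREM's learning machinery, so the following is only a minimal sketch of how a reinforcement-learning agent might pick a (link, transmission-parameter) pair per context; the state discretization, action set, hyperparameters, and reward shaping are all assumptions.

```python
# Minimal sketch (not CAREM itself): tabular Q-learning that picks a
# (link, transmission-parameter) pair for the current context.
import random
from collections import defaultdict

ACTIONS = [(link, mcs) for link in ("sub6", "mmwave") for mcs in (2, 4, 6)]
Q = defaultdict(float)              # Q[(state, action)]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def choose_action(state):
    if random.random() < EPS:                           # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# A reward shaped around the KPIs could, for example, penalize packet loss
# and latency in excess of the target deadline:
#   reward = -(w_loss * loss_rate + w_lat * max(0.0, latency - deadline))
```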
This paper studies a stochastic variant of the vehicle routing problem (VRP) in which both customer locations and demands are uncertain. In particular, potential customers are not restricted to a predefined customer set but are continuously spatially distributed over a given service area. The objective is to maximize the served demand while respecting vehicle capacities and time restrictions. We call this problem the VRP with stochastic customers and demands (VRPSCD). For this problem, we first propose a Markov Decision Process (MDP) formulation representing the classical centralized decision-making perspective, where one decision-maker establishes the routes of all vehicles. While the resulting formulation turns out to be intractable, it provides the foundation for a new MDP formulation of the VRPSCD representing a decentralized decision-making framework, where vehicles autonomously establish their own routes. This new formulation allows us to develop several strategies to reduce the dimension of the state and action spaces, resulting in a considerably more tractable problem. We solve the decentralized problem via Reinforcement Learning; in particular, we develop a Q-learning algorithm featuring state-of-the-art acceleration techniques such as Replay Memory and a Double Q-Network. Computational results show that our method considerably outperforms two commonly adopted benchmark policies (random and heuristic). Moreover, when compared with the existing literature, our approach can compete with specialized methods developed for the particular case of the VRPSCD where customer locations and expected demands are known in advance. Finally, we show that the value functions and policies obtained by our algorithm can be easily embedded in Rollout algorithms, further improving their performance.
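A minimal sketch of the two acceleration techniques named above, a Replay Memory and a Double-Q-Network-style target, is given below (PyTorch, with placeholder shapes and hyperparameters); it is not the paper's exact agent or state/action encoding.

```python
# Sketch of Replay Memory and the Double-DQN target computation.
import random
from collections import deque
import torch

class ReplayMemory:
    def __init__(self, capacity=50_000):
        self.buffer = deque(maxlen=capacity)
    def push(self, transition):                 # (s, a, r, s_next, done)
        self.buffer.append(transition)
    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

def double_q_target(online_net, target_net, r, s_next, done, gamma=0.99):
    """Online network selects the next action; target network evaluates it."""
    with torch.no_grad():
        a_star = online_net(s_next).argmax(dim=1, keepdim=True)
        q_next = target_net(s_next).gather(1, a_star).squeeze(1)
    return r + gamma * (1.0 - done) * q_next
```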
Bitcoin's success as a cryptocurrency has enabled it to penetrate many daily-life transactions. Its problems with transaction fees and long validation times are addressed by an innovative concept called the Lightning Network (LN), which works on top of Bitcoin by leveraging off-chain transactions. This has made Bitcoin an attractive micro-payment solution that can also be used within certain IoT applications (e.g., toll payments), since it eliminates the need for traditional centralized payment systems. Nevertheless, it is not possible to run LN and Bitcoin on resource-constrained IoT devices due to their storage, memory, and processing requirements. Therefore, in this paper, we propose an efficient and secure protocol that enables an IoT device to use LN's functions through a gateway LN node, even if the gateway is not trusted. The idea is to involve the IoT device only in signing operations, which is made possible by replacing LN's original 2-of-2 multisignature channels with 3-of-3 multisignature channels. Once the gateway is delegated to open a channel for the IoT device in a secure manner, our protocol forces the gateway to request the IoT device's cryptographic signature for all further operations on the channel, such as sending payments or closing the channel. LN's Bitcoin transactions are revised to incorporate the 3-of-3 multisignature channels. In addition, we propose further changes to protect the IoT device's funds from being stolen through possible revoked-state broadcast attempts. We evaluated the proposed protocol using a Raspberry Pi in a toll payment scenario. Our results show that payments can be sent in a timely manner and that the computational and communication delays associated with the protocol are negligible.
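For concreteness, the snippet below assembles the shape of a standard 3-of-3 OP_CHECKMULTISIG script of the kind that would replace LN's usual 2-of-2 funding script, so that three channel keys must all sign (exactly which parties hold which keys is the protocol's design choice and is abstracted here). The keys are placeholders, and a real channel additionally needs commitment, revocation, and HTLC logic that is out of scope for this sketch.

```python
# Illustrative only: a 3-of-3 OP_CHECKMULTISIG script, built byte by byte.
OP_3, OP_CHECKMULTISIG = b"\x53", b"\xae"

def push(data: bytes) -> bytes:
    assert len(data) < 0x4c          # a direct push suffices for 33-byte keys
    return bytes([len(data)]) + data

def multisig_3of3(pubkeys):
    assert len(pubkeys) == 3 and all(len(pk) == 33 for pk in pubkeys)
    # Keys sorted lexicographically so all parties derive the same script.
    return OP_3 + b"".join(push(pk) for pk in sorted(pubkeys)) + OP_3 + OP_CHECKMULTISIG

dummy_keys = [bytes([0x02]) + bytes([i]) * 32 for i in (1, 2, 3)]  # placeholder pubkeys
print(multisig_3of3(dummy_keys).hex())
```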
Due to the surge of cloud-assisted AI services, the problem of designing resilient prediction serving systems that can effectively cope with stragglers and failures while minimizing response delays has attracted much interest. The common approach to this problem is replication, which assigns the same prediction task to multiple workers. This approach, however, is inefficient and incurs significant resource overhead. Hence, a learning-based approach known as the parity model (ParM) was recently proposed, which learns models that generate parities for a group of predictions in order to reconstruct the predictions of slow or failed workers. While this learning-based approach is more resource-efficient than replication, it is tailored to the specific model hosted by the cloud and is particularly suitable for a small number of queries (typically fewer than four) and for tolerating very few (mostly one) stragglers. Moreover, ParM does not handle Byzantine adversarial workers. We propose a different approach, named Approximate Coded Inference (ApproxIFER), that does not require training any parity models; hence it is agnostic to the model hosted by the cloud and can be readily applied to different data domains and model architectures. Compared with earlier works, ApproxIFER can handle a general number of stragglers and scales significantly better with the number of queries. Furthermore, ApproxIFER is robust against Byzantine workers. Our extensive experiments on a large number of datasets and model architectures also show accuracy improvements of up to 58% over the parity model approaches.
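To make the coded-inference idea concrete, here is a toy numerical sketch in which k queries are linearly encoded into n coded queries and the k predictions are recovered from any k worker responses. The Vandermonde encoding and the linear stand-in model are assumptions chosen so the recovery is exact; ApproxIFER's actual scheme is approximate, model-agnostic, and additionally tolerates Byzantine responses.

```python
# Toy sketch of straggler-resilient coded inference (not ApproxIFER's code).
import numpy as np

k, n = 4, 6                                   # 4 queries, tolerate n - k = 2 stragglers
rng = np.random.default_rng(0)
queries = rng.normal(size=(k, 8))             # 4 queries, 8 features each
W = rng.normal(size=(8, 3))                   # stand-in linear "model"
model = lambda x: x @ W

G = np.vander(np.linspace(-1, 1, n), k, increasing=True)   # n x k encoding matrix
responses = model(G @ queries)                # outputs returned by the n workers
alive = [0, 2, 3, 5]                          # workers 1 and 4 straggled
decoded = np.linalg.solve(G[alive], responses[alive])      # recover the k predictions
print(np.allclose(decoded, model(queries)))   # exact here only because the model is linear
```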
With the recent advances of the IoT (Internet of Things), new and more robust security frameworks are needed to detect and mitigate new forms of cyber-attacks, which exploit the complexity and heterogeneity of IoT networks as well as the many vulnerabilities of IoT devices. With the rise of blockchain technologies, service providers are paying considerable attention to understanding and adopting blockchain in order to build more secure and trusted systems for their organisations and customers. This paper introduces a high-level guide for senior officials, decision makers, and technology managers in organisations on a blockchain security framework built on a by-design principle for trust and adoption in IoT environments. The paper discusses the blockchain technology developed in the Cyber-Trust project as a representative case study for the proposed security framework. A security- and privacy-by-design approach is introduced as an important consideration in setting up the framework.
High penetration of renewable energy generation introduces significant uncertainty into power systems. It requires grid operators to solve alternating current optimal power flow (AC-OPF) problems more frequently for economical and reliable operation in both transmission and distribution grids. In this paper, we develop a Deep Neural Network (DNN) approach, called DeepOPF, for solving AC-OPF problems in a fraction of the time used by conventional solvers. A key difficulty in applying machine learning techniques to AC-OPF problems lies in ensuring that the obtained solutions respect the equality and inequality physical and operational constraints. Generalizing the two-stage procedure in [1], [2], DeepOPF first trains a DNN model to predict a set of independent operating variables and then directly computes the remaining dependent ones by solving the power flow equations. Such an approach not only preserves the power-flow balance equality constraints but also reduces the number of variables the DNN must predict, cutting down the number of neurons and the amount of training data needed. DeepOPF then employs a penalty approach with a zero-order gradient estimation technique in the training process to preserve the remaining inequality constraints. As another contribution, we derive a condition for tuning the size of the DNN according to the desired approximation accuracy, which measures the DNN's generalization capability. It provides a theoretical justification for using DNNs to solve the AC-OPF problem. Simulation results on IEEE 30/118/300-bus systems and a synthetic 2000-bus test case show that DeepOPF speeds up the computing time by up to two orders of magnitude compared to a state-of-the-art solver, at the expense of a $<$0.1% cost difference.
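The zero-order gradient estimation mentioned above can be illustrated with a generic two-point estimator that differentiates a black-box penalty, for example the sum of squared inequality-constraint violations obtained after a power-flow solve, without an analytic gradient. The estimator form, smoothing radius, and number of sampled directions below are assumptions for illustration, not the paper's exact training procedure.

```python
# Generic two-point zero-order gradient estimate for a black-box penalty.
import numpy as np

def zero_order_grad(penalty, x, mu=1e-4, num_dirs=64, rng=np.random.default_rng(1)):
    """E[((f(x + mu*u) - f(x)) / mu) * u] ~= grad f(x) for small mu, u ~ N(0, I)."""
    grad = np.zeros_like(x)
    f_x = penalty(x)
    for _ in range(num_dirs):
        u = rng.normal(size=x.shape)
        grad += (penalty(x + mu * u) - f_x) / mu * u
    return grad / num_dirs

# Sanity check with a known answer: penalty(x) = ||x||^2 has gradient 2x.
x = np.array([1.0, -2.0, 0.5])
print(zero_order_grad(lambda v: np.sum(v ** 2), x))   # roughly [2, -4, 1]
```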
IP addresses and port numbers (hereafter, network-based identifiers) in packets are the two major identifiers that network devices use to identify the systems and roles of hosts sending and receiving packets, for purposes such as access control lists and priority control. However, in modern cloud system designs such as the microservices architecture, network-based identifiers are inefficient for network devices to identify the systems and roles of hosts. This is because, due to autoscaling and automatic deployment of new software, the many VMs and containers constituting a system (hereafter, a workload) are frequently created and deleted on whichever servers have available resources, while network-based identifiers are assigned based on the servers where the containers and VMs are running. In this paper, we propose a new system, Acila, that classifies packets at network devices based on the identity of the workload, by marking packets with the necessary information extracted from the identity, which is usually stored in orchestrators or controllers. We then implement Acila and show that packet filtering and priority control can be realized with Acila, that the entries they require with Acila are more efficient than those of the conventional network-based-identifier approach, and that the performance overhead is small.
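A purely conceptual sketch of the marking-and-classification flow is shown below: a controller-side table maps workload identities to marks, and the "network device" filters and prioritizes on the mark rather than on IP addresses or ports. The mark layout, field names, and policies are invented for illustration and are not Acila's actual data-plane format.

```python
# Conceptual sketch of identity-based packet handling in the spirit of Acila.
WORKLOAD_MARKS = {            # controller/orchestrator view: identity -> mark
    "frontend": 0x10,
    "payments": 0x21,
}

POLICY = {                    # network-device view: mark -> (allowed, priority)
    0x10: {"allow": True, "priority": "best-effort"},
    0x21: {"allow": True, "priority": "high"},
}

def mark_packet(packet: dict, workload: str) -> dict:
    packet["mark"] = WORKLOAD_MARKS[workload]     # set where the workload's identity is known
    return packet

def classify(packet: dict):
    rule = POLICY.get(packet.get("mark"), {"allow": False, "priority": None})
    return rule["allow"], rule["priority"]

pkt = mark_packet({"src": "10.0.3.7", "dst": "10.0.8.2", "dport": 443}, "payments")
print(classify(pkt))          # (True, 'high') regardless of the ephemeral IP address
```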
Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required. Only this novel formulation enables the computation of convex combinations of valid mappings, as it accounts for the structure of the request graphs. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width and show that our algorithms have runtime exponential in the request graphs' maximal width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show (i) that computing extraction orders of minimal width is NP-hard and (ii) that computing decomposable LP solutions is in general NP-hard, even when restricting request graphs to planar ones.
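The randomized-rounding step itself is simple once a decomposable LP solution is available; the sketch below samples, for each request, one valid mapping according to the LP's convex-combination weights. The decomposition itself, which the extraction-order machinery above is about, is assumed to be given, and the resource-violation checks of the full algorithm are omitted.

```python
# Sketch of randomized rounding over a decomposed LP solution.
import random

def round_solution(decompositions):
    """decompositions: {request: [(prob, mapping), ...]} with probs summing to <= 1."""
    selection = {}
    for request, options in decompositions.items():
        r, acc = random.random(), 0.0
        for prob, mapping in options:
            acc += prob
            if r < acc:
                selection[request] = mapping   # embed this request using `mapping`
                break                          # otherwise the request is not embedded
    return selection
```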
Network Virtualization is one of the most promising technologies for future networking and is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and different components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist simultaneously on the same shared physical infrastructure. A crucial problem in Network Virtualization is Virtual Network Embedding, which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related resource allocation issues for an Internet Provider (InP) that must efficiently embed the virtual networks requested by Virtual Network Operators (VNOs) sharing the infrastructure provided by the InP. To achieve that goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding by enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests, compared to prior algorithms.
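The sketch below is a generic greedy VNE baseline rather than the paper's heuristic (which coordinates node and link mapping in a single stage): virtual nodes are placed on the substrate nodes with the most residual CPU, and virtual links are routed over shortest substrate paths with sufficient residual bandwidth. The attribute names ("cpu", "bw") and the networkx representation are assumptions.

```python
# Generic greedy Virtual Network Embedding sketch.
import networkx as nx

def embed(substrate: nx.Graph, request: nx.Graph):
    # Node mapping: most demanding virtual nodes first, onto nodes with most residual CPU.
    node_map = {}
    for v in sorted(request.nodes, key=lambda n: request.nodes[n]["cpu"], reverse=True):
        candidates = [s for s in substrate.nodes if s not in node_map.values()
                      and substrate.nodes[s]["cpu"] >= request.nodes[v]["cpu"]]
        if not candidates:
            return None                                   # reject the request
        s = max(candidates, key=lambda n: substrate.nodes[n]["cpu"])
        node_map[v] = s
        substrate.nodes[s]["cpu"] -= request.nodes[v]["cpu"]
    # Link mapping: route each virtual link over a shortest substrate path.
    link_map = {}
    for u, v, demand in request.edges(data="bw"):
        path = nx.shortest_path(substrate, node_map[u], node_map[v])
        if any(substrate[a][b]["bw"] < demand for a, b in zip(path, path[1:])):
            return None                                   # a real heuristic would roll back
        for a, b in zip(path, path[1:]):
            substrate[a][b]["bw"] -= demand
        link_map[(u, v)] = path
    return node_map, link_map
```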
Neural machine translation (NMT) suffers a performance deficiency when a limited vocabulary fails to cover the source or target side adequately, which happens frequently when dealing with morphologically rich languages. To address this problem, previous work has focused on adjusting the translation granularity or expanding the vocabulary size. However, morphological information is relatively under-exploited in NMT architectures, even though it could further improve translation quality. We propose a novel method that not only reduces data sparsity but also models morphology through a simple yet effective mechanism. By predicting the stem and suffix separately during decoding, our system achieves an improvement of up to 1.98 BLEU over previous work on English-to-Russian translation. Our method is orthogonal to different NMT architectures and consistently yields improvements across various domains.
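A minimal sketch of the "predict stem and suffix separately" idea follows: a decoder state is projected through two output heads, one over a stem vocabulary and one over a much smaller suffix vocabulary, and the two log-probabilities are combined per target word. The vocabulary sizes, hidden dimension, and the simple additive factorization are placeholders, not the paper's exact parameterization.

```python
# Sketch of a dual-head (stem + suffix) output layer for an NMT decoder.
import torch
import torch.nn as nn

class StemSuffixHead(nn.Module):
    def __init__(self, hidden=512, stem_vocab=30_000, suffix_vocab=600):
        super().__init__()
        self.stem_proj = nn.Linear(hidden, stem_vocab)
        self.suffix_proj = nn.Linear(hidden, suffix_vocab)

    def forward(self, decoder_state):
        # Two factored distributions instead of one over full surface forms.
        return (torch.log_softmax(self.stem_proj(decoder_state), dim=-1),
                torch.log_softmax(self.suffix_proj(decoder_state), dim=-1))

head = StemSuffixHead()
stem_lp, suffix_lp = head(torch.randn(1, 512))
word_logprob = stem_lp[0, 1234] + suffix_lp[0, 17]   # log P(stem) + log P(suffix)
```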