In future IoT environments, the personal devices of mobile users in the physical areas where IoT devices are deployed are expected to play an increasingly important role. In particular, due to the push towards decentralisation of services towards the edge, it is likely that a significant share of the data generated by IoT devices will be needed by other (mobile) nodes nearby, while global Internet access will be limited to only a small fraction of the data. In this context, opportunistic networking schemes can be adopted to build efficient content-centric protocols, through which data generated by IoT devices (or by the mobile nodes themselves) can be accessed by other nodes nearby. In this paper, we propose MobCCN, an ICN-compliant protocol for this heterogeneous environment. MobCCN is designed to implement the routing and forwarding mechanisms of the main ICN realisations, such as CCN. The original aspect of MobCCN is its efficient opportunistic networking routing scheme for populating the Forwarding Information Base (FIB) tables of the nodes, in order to guide the propagation of Interest packets towards nodes that store the required data. Specifically, MobCCN defines the utility of each node as a forwarder of Interest packets for a certain type of content, such that Interest packets are propagated along a positive utility gradient until they reach a node storing the data. We evaluate MobCCN against protocols representing the two endpoints of the design spectrum, in terms of minimising data delivery delay and resource consumption, respectively. Performance results show that MobCCN is very effective and efficient, as it guarantees very high delivery rates and low delays, while keeping the total generated traffic at a reasonable level and saving local resources.
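The utility-gradient forwarding rule described above can be sketched as follows. This is a minimal illustrative sketch, not MobCCN's actual implementation: node names, FIB contents and utility values are made up, and the FIB is simplified to a map from content type to utility.

```python
# Hedged sketch of utility-gradient Interest forwarding: an Interest is
# handed over only to an encountered node with strictly higher utility for
# the requested content type, so it climbs a positive utility gradient
# towards a node that stores the data.

def forward_interest(current_utility, encounters, content_type):
    """Pick the encountered neighbour with the highest utility for
    content_type, provided it improves on the current carrier."""
    best_node, best_util = None, current_utility
    for node, fib in encounters:
        util = fib.get(content_type, 0.0)  # simplified FIB: content type -> utility
        if util > best_util:
            best_node, best_util = node, util
    return best_node  # None means: keep carrying the Interest

encounters = [
    ("A", {"temperature": 0.2}),
    ("B", {"temperature": 0.7, "humidity": 0.1}),
    ("C", {"temperature": 0.5}),
]
print(forward_interest(0.3, encounters, "temperature"))  # -> B
```

A carrier whose own utility already exceeds every neighbour's keeps the Interest, which is what prevents forwarding down the gradient.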
A few potential IoT communication protocols at the application layer have been proposed, including MQTT, CoAP and REST HTTP, with the latter being the protocol of choice for software developers due to its compatibility with existing systems. We present a theoretical model of the expected occupancy of the REST HTTP client buffer in IoT devices under lossy wireless conditions, and validate the study experimentally. The results show that increasing the buffer size in IoT devices does not always improve performance in lossy environments, demonstrating the importance of benchmarking the buffer size in IoT systems deploying REST HTTP.
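To make the intuition concrete, here is a toy queueing-style approximation, not the paper's model: if each packet is lost with probability p and retransmitted, the effective load on the client buffer is inflated by 1/(1-p), so losses can push a buffer that is stable under clean conditions into unbounded growth.

```python
# Illustrative M/M/1-style approximation (an assumption, not the paper's
# model) of client-buffer occupancy under loss: retransmissions inflate
# the effective utilisation rho_eff = rho / (1 - p).

def expected_occupancy(arrival_rate, service_rate, loss_prob):
    rho_eff = (arrival_rate / service_rate) / (1.0 - loss_prob)
    if rho_eff >= 1.0:
        return float("inf")  # backlog grows without bound; no finite buffer helps
    return rho_eff / (1.0 - rho_eff)  # mean number of packets queued

print(expected_occupancy(50, 100, 0.0))  # rho = 0.5 -> 1.0 packet on average
print(expected_occupancy(50, 100, 0.3))  # losses push the occupancy up
```

The unstable regime is the interesting one: once rho_eff reaches 1, enlarging the buffer only stores a longer backlog, which is consistent with the finding that a bigger buffer does not always improve performance.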
The demand for large-scale deep learning is increasing, and distributed training is the current mainstream solution. Ring AllReduce is widely used as a decentralized data-parallel algorithm. However, in a heterogeneous environment each worker processes the same amount of data, so considerable time is lost as workers wait for one another, which prevents the algorithm from adapting well to heterogeneous clusters and leaves resources underused. In this paper, we design and implement a static allocation algorithm: the dataset is partitioned across workers in fixed proportions, and samples are drawn proportionally for training, thereby speeding up network training in a heterogeneous environment. We verify the convergence of the network model and the algorithm's effect on training speed in both single-machine multi-GPU and multi-machine multi-GPU settings. Building on this feasibility, we propose a self-adaptive allocation algorithm that allows each machine to determine the share of data suited to the current environment. The self-adaptive allocation algorithm can reduce training time by roughly one-third to one-half compared with equal-proportion allocation. To better show the applicability of the algorithm in heterogeneous clusters, we replace a poorly performing worker with a well-performing one, or add a poorly performing worker to the heterogeneous cluster. Experimental results show that training time decreases as overall performance improves, indicating that resources are fully utilized. Furthermore, the algorithm is suitable not only for straggler problems but also for most heterogeneous situations, and it can be used as a plug-in for AllReduce and its variant algorithms.
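The core of the static allocation idea can be sketched in a few lines: split one epoch's samples across workers in proportion to their measured throughput, so faster workers receive more data and per-step times roughly equalize. This is an illustrative sketch under assumed throughput numbers, not the paper's implementation.

```python
# Sketch of proportional static allocation for a heterogeneous cluster:
# each worker's share of the dataset is proportional to its measured
# throughput (samples/s). Throughput values below are hypothetical.

def allocate(total_samples, throughputs):
    total = sum(throughputs)
    shares = [total_samples * t // total for t in throughputs]
    shares[0] += total_samples - sum(shares)  # give the rounding remainder to worker 0
    return shares

print(allocate(10000, [300, 150, 50]))  # fast GPU, mid GPU, straggler -> [6000, 3000, 1000]
```

A self-adaptive variant would re-measure throughputs periodically and recompute the shares, which is how the cluster can absorb a replaced or newly added worker.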
Modern cloud computing systems contain hundreds to thousands of computing and storage servers. Such a scale, combined with ever-growing system complexity, is causing a key challenge to failure and resource management for dependable cloud computing. Autonomic failure detection is a crucial technique for understanding emergent, cloud-wide phenomena and self-managing cloud resources for system-level dependability assurance. To detect failures, we need to monitor the cloud execution and collect runtime performance data. These data are usually unlabeled, and thus a prior failure history is not always available in production clouds. In this paper, we present a \emph{self-evolving anomaly detection} (SEAD) framework for cloud dependability assurance. Our framework self-evolves by recursively exploring newly verified anomaly records and continuously updating the anomaly detector online. As a distinct advantage of our framework, cloud system administrators only need to check a small number of detected anomalies, and their decisions are leveraged to update the detector. Thus, the detector evolves following the upgrade of system hardware, update of the software stack, and change of user workloads. Moreover, we design two types of detectors, one for general anomaly detection and the other for type-specific anomaly detection. With the help of self-evolving techniques, our detectors can achieve 88.94\% in sensitivity and 94.60\% in specificity on average, which makes them suitable for real-world deployment.
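The self-evolving feedback loop described above can be illustrated with a minimal sketch: the detector flags candidate anomalies, an administrator verifies a small subset, and the verified labels are folded back to update the detector online. The threshold-based detector here is a hypothetical stand-in for the paper's models, not the SEAD implementation.

```python
# Minimal sketch of a self-evolving anomaly detector: admin verdicts on a
# few detections nudge the decision boundary online, so the detector tracks
# hardware upgrades, software updates and workload changes over time.

class SelfEvolvingDetector:
    def __init__(self, threshold):
        self.threshold = threshold
        self.verified = []  # (score, is_anomaly) pairs confirmed by the admin

    def detect(self, score):
        return score > self.threshold

    def feedback(self, score, is_anomaly):
        """Fold one admin-verified record back into the detector."""
        self.verified.append((score, is_anomaly))
        if is_anomaly:                # confirmed anomaly near the boundary -> lower the bar
            self.threshold = min(self.threshold, 0.9 * score)
        else:                         # false positive -> raise the bar
            self.threshold = max(self.threshold, score)

d = SelfEvolvingDetector(threshold=0.8)
print(d.detect(0.85))    # flagged at first
d.feedback(0.85, False)  # admin says: normal -> threshold rises to 0.85
print(d.detect(0.85))    # no longer flagged
```

The key property mirrored here is that only the small set of admin-checked records drives the update, rather than a fully labelled failure history.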
We analyze the response of real-time data flows in access network scenarios in which these flows converge on an outgoing link, competing to achieve a certain level of quality of service. The concurrence of such flows can generate bursts of packets, which in certain circumstances can exceed the capacity of the buffers to absorb packets during congestion periods. In addition, we present an analysis of the characteristics of buffers in access devices, especially their size and packet loss. In particular, we describe how these characteristics can affect the quality of multimedia applications when bursty traffic is generated, and we discuss possible effects on the traffic of other applications sharing a common link.
For IoT to reach its full potential, the sharing and reuse of information in different applications and across verticals is of paramount importance. However, there are a plethora of IoT platforms using different representations, protocols and interaction patterns. To address this issue, the Fed4IoT project has developed an IoT virtualization platform that, on the one hand, integrates information from many different source platforms and, on the other hand, makes the information required by the respective users available in the target platform of choice. To enable this, information is translated into a common, neutral exchange format. The format of choice is NGSI-LD, which is being standardized by the ETSI Industry Specification Group on Context Information Management (ETSI ISG CIM). ThingVisors are the components that translate the source information to NGSI-LD, which is then delivered to the target platform and translated into the target format. ThingVisors can be implemented by hand, but this requires significant human effort, especially considering the heterogeneity of low-level information produced by a multitude of sensors. Thus, supporting the human developer and, ideally, fully automating the process of extracting and enriching data and translating it to NGSI-LD is a crucial step. Machine learning is a promising approach for this, but it typically requires large amounts of hand-labelled data for training, an effort that makes it unrealistic in many IoT scenarios. We use a programmatic labelling approach called knowledge infusion, which encodes expert knowledge, to match a schema or ontology extracted from the data with a target schema or ontology, providing the basis for annotating the data and facilitating the translation to NGSI-LD.
The evolution of connected and automated vehicle (CAV) technology is boosting the development of innovative solutions for the sixth generation (6G) of Vehicle-to-Everything (V2X) networks. Lower-frequency networks provide control for millimeter-wave (mmW) or sub-THz beam-based 6G communications. For CAVs, the mmW/sub-THz bands guarantee a huge amount of bandwidth (> 1 GHz) and a high data rate (> 10 Gbit/s), enhancing the safety of CAV applications. However, high-frequency propagation is impaired by severe path loss, and line-of-sight (LoS) propagation can be easily blocked. Static and dynamic blockage (e.g., by non-connected vehicles) heavily affects V2X links; thus, in a multi-vehicular setting, knowledge of the LoS (or visibility) map is mandatory for stable connections and proactive beam pointing, which may involve relays whenever necessary. In this paper, we design a criterion for dynamic LoS-map estimation, and we propose a novel framework for relay-of-opportunity selection to enable high-quality and stable V2X links. Relay selection is based on cooperative sensing to cope with LoS blockage conditions. The LoS map is dynamically estimated on top of the static map of the environment by merging the perceptive sensors' data to achieve cooperative awareness of the surrounding scenario. Multiple relay-selection architectures, based on centralized and decentralized strategies, are considered. A 3GPP standard-compliant simulation framework is adopted to reproduce real-world urban vehicular environments and vehicles' mobility patterns.
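The relay-of-opportunity idea can be sketched in a few lines: when the LoS map says the direct link is blocked, pick a relay that has LoS to both endpoints. This is a hedged toy sketch; the visibility relation, node names and single-hop restriction are illustrative assumptions, not the paper's architecture.

```python
# Toy relay selection on top of a LoS map: `los` is a symmetric set of
# node pairs with line-of-sight visibility; node IDs are made up.

def select_relay(src, dst, candidates, los):
    if (src, dst) in los:
        return None  # direct mmW link available, no relay needed
    for r in candidates:
        if (src, r) in los and (r, dst) in los:
            return r
    return None  # blocked, and no single-hop relay of opportunity found

def sym(pairs):
    """Make the visibility relation symmetric."""
    return set(pairs) | {(b, a) for a, b in pairs}

los = sym({("car1", "bus"), ("bus", "car2")})  # car1-car2 blocked, e.g. by a truck
print(select_relay("car1", "car2", ["bus", "van"], los))  # -> bus
```

In the paper's setting the `los` relation would be the dynamically estimated LoS map, refreshed from cooperative sensor data as vehicles move.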
Dynamic replication is a widespread multi-copy routing approach for efficiently coping with intermittent connectivity in mobile opportunistic networks. According to it, a node forwards a message replica to an encountered node based on a utility value that captures the latter's fitness for delivering the message to the destination. The popularity of the approach stems from its flexibility to operate effectively in networks with diverse characteristics without requiring special customization. Nonetheless, its drawback is the tendency to produce a high number of replicas that consume limited resources such as energy and storage. To tackle this problem, we observe that network nodes can be grouped, based on their utility values, into clusters that portray different delivery capabilities. We exploit this finding to transform the basic forwarding strategy, which is to move a packet through nodes of increasing utility, so that a packet is instead forwarded through clusters of increasing delivery capability. The new strategy works in synergy with the basic dynamic replication algorithms and is fully configurable, in the sense that it can be used with virtually any utility function. We also extend our approach to work with two utility functions at the same time, a feature that is especially efficient in mobile networks that exhibit social characteristics. By conducting experiments on a wide set of real-life networks, we empirically show that our method is robust in reducing the overall number of replicas in networks with diverse connectivity characteristics without hindering delivery efficiency.
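The cluster-based replication rule can be sketched as follows: nodes are bucketed by utility into clusters, and a replica is handed over only when the encountered node sits in a strictly higher cluster, not merely at a marginally higher utility. The cluster boundaries below are illustrative assumptions; the paper's method works with virtually any utility function.

```python
# Sketch of cluster-based forwarding: small utility gains inside one
# cluster no longer trigger a replica, cutting the replica count.

def cluster_of(utility, boundaries=(0.33, 0.66)):
    """Map a utility in [0, 1] to cluster 0 (low), 1 (mid) or 2 (high)."""
    return sum(utility >= b for b in boundaries)

def should_replicate(carrier_util, encountered_util):
    return cluster_of(encountered_util) > cluster_of(carrier_util)

print(should_replicate(0.35, 0.40))  # -> False: same cluster, replica suppressed
print(should_replicate(0.35, 0.70))  # -> True: jump to a higher-capability cluster
```

Under the basic strategy the first encounter (0.35 to 0.40) would already spawn a replica; the cluster rule suppresses it, which is where the resource savings come from.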
We present a flexible public transit network design model which optimizes a social access objective while guaranteeing that the system's costs and transit times remain within a preset margin of their current levels. The purpose of the model is to find a set of minor, immediate modifications to an existing bus network that can give more communities access to the chosen services while having a minimal impact on the current network's operator costs and user costs. Design decisions consist of reallocation of existing resources in order to adjust line frequencies and capacities. We present a hybrid tabu search/simulated annealing algorithm for the solution of this optimization-based model. As a case study we apply the model to the problem of improving equity of access to primary health care facilities in the Chicago metropolitan area. The results of the model suggest that it is possible to achieve better primary care access equity through reassignment of existing buses and implementation of express runs, while leaving overall service levels relatively unaffected.
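The hybrid metaheuristic can be sketched as a single move loop: simulated annealing's temperature governs acceptance of worsening moves, while a tabu list blocks recently visited reallocations. Everything below, including the toy bus-reallocation objective and neighbourhood, is an illustrative assumption, not the paper's model.

```python
# Skeleton of a hybrid tabu search / simulated annealing loop on a toy
# bus-reallocation problem: move one bus between lines per step.
import math, random

def hybrid_search(x0, objective, neighbours, iters=200, temp=1.0,
                  cooling=0.95, tabu_len=5, seed=0):
    rng = random.Random(seed)
    x, best = x0, x0
    tabu = []  # recently visited allocations, temporarily forbidden
    for _ in range(iters):
        moves = [m for m in neighbours(x) if m not in tabu]
        if not moves:
            break
        cand = rng.choice(moves)
        delta = objective(cand) - objective(x)
        # SA acceptance: always take improvements, sometimes take worsenings
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            x = cand
            tabu.append(cand)
            tabu[:] = tabu[-tabu_len:]
        if objective(x) < objective(best):
            best = x
        temp *= cooling
    return best

# Toy instance: reallocate 10 buses over 3 lines towards target (5, 3, 2).
target = (5, 3, 2)
obj = lambda x: sum((a - t) ** 2 for a, t in zip(x, target))
def neighbours(x):
    out = []
    for i in range(3):
        for j in range(3):
            if i != j and x[i] > 0:
                y = list(x); y[i] -= 1; y[j] += 1
                out.append(tuple(y))
    return out

print(hybrid_search((4, 4, 2), obj, neighbours))
```

In the paper's setting the decision vector would encode line frequencies and capacities, and the objective would combine the social access measure with the cost and transit-time margins.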
In view of the security issues of the Internet of Things (IoT), and considering that edge computing and blockchain are well suited to be combined with the IoT, we integrate attribute-based encryption (ABE) and attribute-based access control (ABAC) models, taking attributes as the entry point, and propose an attribute-based encryption and access control scheme (ABE-ACS). The scheme targets Edge-IoT, a heterogeneous network composed mostly of resource-limited IoT devices together with some nodes of higher computing power. To address the high resource consumption and difficult deployment of existing blockchain platforms, we design a lightweight blockchain (LBC) with an improved proof-of-work consensus. Access control policies are converted and assigned using threshold trees and LSSS, and are stored in the blockchain to protect policy privacy. For devices and data, six smart contracts are designed to realize ABAC and a penalty mechanism, with which ABE is outsourced to edge nodes while preserving privacy and integrity. Our scheme thus achieves Edge-IoT privacy protection and controlled access to data and devices. The security analysis shows that the proposed scheme is secure, and the experimental results show that our LBC achieves higher throughput and lower resource consumption, while the encryption and decryption costs of our scheme are acceptable.
Recommender systems rely on large datasets of historical data and entail serious privacy risks. A server offering recommendations as a service to a client might leak more information than necessary regarding its recommendation model and training dataset. At the same time, the disclosure of the client's preferences to the server is also a matter of concern. Providing recommendations while preserving privacy in both senses is a difficult task, which often comes into conflict with the utility of the system in terms of its recommendation accuracy and efficiency. General-purpose cryptographic primitives such as secure multi-party computation and homomorphic encryption offer strong security guarantees, but in conjunction with state-of-the-art recommender systems yield far-from-practical solutions. We precisely define the above notion of security and propose CryptoRec, a novel recommendations-as-a-service protocol, which encompasses a crypto-friendly recommender system. This model possesses two interesting properties: (1) It models user-item interactions in a user-free latent feature space in which it captures personalized user features by an aggregation of item features. This means that a server with a pre-trained model can provide recommendations for a client without having to re-train the model with the client's preferences. Nevertheless, re-training the model still improves accuracy. (2) It only uses addition and multiplication operations, making the model straightforwardly compatible with homomorphic encryption schemes.
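The two properties above can be illustrated with a minimal plaintext sketch: a user profile is an aggregation (here a plain mean, an assumed choice) of the latent features of the items the user rated, and a prediction is a dot product with the target item's features, so the whole computation uses only additions and multiplications. Item names and feature values are made up for illustration; this is not CryptoRec's trained model.

```python
# Sketch of a user-free latent-feature prediction: no per-user parameters,
# only pre-trained item features Q, combined with + and * alone -- the
# structure that makes the model friendly to homomorphic encryption.

def predict(rated_item_ids, item_features, target_item):
    k = len(item_features[target_item])
    n = len(rated_item_ids)
    # user profile = mean of rated items' feature vectors (sums and a scaling)
    user = [sum(item_features[i][d] for i in rated_item_ids) * (1.0 / n)
            for d in range(k)]
    # score = <user profile, target item features>, again only + and *
    return sum(u * q for u, q in zip(user, item_features[target_item]))

Q = {"film_a": [1.0, 0.0], "film_b": [0.8, 0.2], "film_c": [0.0, 1.0]}
print(predict(["film_a", "film_b"], Q, "film_c"))
```

Because the server's Q is pre-trained and no user-specific parameters appear, the client could, in principle, submit its ratings under homomorphic encryption and receive encrypted scores without the server re-training anything.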