Smart farming is a recent innovation in the agriculture sector that can improve agricultural yield through smarter, automated, and data-driven farm processes that interact with IoT devices deployed on farms. A cloud-fog infrastructure provides an effective platform for executing IoT applications. While fog computing satisfies the real-time processing needs of delay-sensitive IoT services by bringing virtualized services closer to the IoT devices, cloud computing allows the execution of applications with higher computational requirements. The deployment of IoT applications is a critical challenge, as cloud and fog nodes vary in their resource availability and use different cost models. Moreover, diversity in the resource, quality of service (QoS), and security requirements of IoT applications makes the problem even more complex. In this paper, we model IoT application placement as an optimization problem that minimizes cost while satisfying QoS and security constraints. The problem is formulated using Integer Linear Programming (ILP). The ILP model is evaluated for a small-scale scenario. The evaluation shows the impact of QoS and security requirements on cost. We also study the impact of relaxing the security constraint on the placement decision.
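The flavour of placement problem described above can be illustrated with a minimal sketch. The node properties, module demands, and cost model below are all hypothetical, and exhaustive search stands in for an ILP solver on this toy instance; the paper's actual formulation is not reproduced here.

```python
from itertools import product

# Hypothetical nodes: (name, cost per CPU unit, latency in ms, security level)
nodes = [
    ("fog-1",   2.0, 10, 1),
    ("fog-2",   2.5,  8, 2),
    ("cloud-1", 1.0, 60, 3),
]

# Hypothetical application modules: (CPU demand, max latency ms, min security level)
modules = [
    (4, 20, 1),   # delay-sensitive: only a fog node meets the latency bound
    (8, 100, 2),  # latency-tolerant, but needs security level >= 2
]

def placement_cost(assignment):
    """Total cost of placing each module on its assigned node,
    or None if any QoS (latency) or security constraint is violated."""
    total = 0.0
    for (cpu, max_lat, min_sec), node_idx in zip(modules, assignment):
        _, cost, lat, sec = nodes[node_idx]
        if lat > max_lat or sec < min_sec:
            return None
        total += cpu * cost
    return total

# Brute force over all assignments stands in for the ILP solver here.
feasible = [(a, placement_cost(a))
            for a in product(range(len(nodes)), repeat=len(modules))]
feasible = [(a, c) for a, c in feasible if c is not None]
best_assignment, best_cost = min(feasible, key=lambda x: x[1])
```

Even in this toy instance the trade-off the abstract describes appears: the delay-sensitive module is forced onto a (costlier) fog node, while the security-constrained module lands on the cheaper cloud node.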
Millions of battery-powered sensors deployed for monitoring purposes in a multitude of scenarios, e.g., agriculture, smart cities, and industry, require energy-efficient solutions to prolong their lifetime. When these sensors observe a phenomenon distributed in space and evolving in time, the collected observations are expected to be correlated in time and space. In this paper, we propose a Deep Reinforcement Learning (DRL) based scheduling mechanism capable of taking advantage of correlated information. We design our solution using the Deep Deterministic Policy Gradient (DDPG) algorithm. The proposed mechanism determines the frequency with which sensors should transmit their updates to ensure accurate collection of observations, while simultaneously considering the energy available. To evaluate our scheduling mechanism, we use multiple datasets containing environmental observations obtained in several real deployments. The real observations enable us to model the environment with which the mechanism interacts as realistically as possible. We show that our solution can significantly extend the sensors' lifetime. We compare our mechanism to an idealized, all-knowing scheduler to demonstrate that its performance is near-optimal. Additionally, we highlight the unique feature of our design, energy-awareness, by displaying the impact of sensors' energy levels on the frequency of updates.
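The accuracy-versus-energy trade-off driving such a scheduler can be sketched without the full DDPG machinery. The environment, signal, reward weights, and transmission cost below are all assumptions for illustration; the paper trains a DDPG agent against real sensor traces rather than this synthetic sinusoid.

```python
import math

def reward(prediction_error, energy_spent, alpha=1.0, beta=0.5):
    """Reward trades off observation accuracy against energy use:
    penalise both the error from stale updates and the transmission
    energy. alpha and beta are hypothetical weights."""
    return -(alpha * prediction_error + beta * energy_spent)

class SensorEnv:
    """Toy environment: a battery-powered sensor observing a slowly
    varying signal. The action is the update interval (in steps)."""
    TX_COST = 0.01  # battery fraction per transmission (assumed)

    def __init__(self, battery=1.0):
        self.battery = battery
        self.t = 0

    def signal(self, t):
        return math.sin(0.01 * t)  # stand-in for a temporally correlated phenomenon

    def step(self, interval):
        # The sink's estimate is the last reported value, which becomes
        # `interval` steps stale by the time of the next transmission.
        last_reported = self.signal(self.t)
        self.t += interval
        error = abs(self.signal(self.t) - last_reported)
        self.battery -= self.TX_COST
        state = (self.battery, error)
        return state, reward(error, self.TX_COST)

env = SensorEnv()
state, r_short = env.step(1)    # frequent updates: small staleness error
state, r_long = env.step(100)   # rare updates: large staleness error
```

A learned policy would pick the interval maximizing this reward over time, stretching intervals when the signal is predictable or the battery is low.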
The growth of the Internet and its associated technologies, including digital services, has tremendously impacted our society. However, scholars have noted trends in data flow and collection and have alleged mass surveillance and digital supremacy. To this end, nations such as Russia, China, Germany, Canada, France, and Brazil, among others, have taken steps toward changing the narrative. The question now is: should African nations join these giants in this school of thought on digital sovereignty, or fold their hands and remain on the other side of the divide? This question, among others, is the main motivation behind this paper, with a view to demystifying the strategies for reconfiguring data infrastructure in the context of Africa. The paper also highlights the benefits of digital technologies and their propensity to foster all-round development across the continent, in terms of economic uplift, employment creation, and national security, among others. There is therefore a need for African nations to design appropriate blueprints to secure their digital infrastructure and the flow of data within their cyberspace. In addition, a roadmap covering the immediate, short, and long term, in accordance with the framework of African developmental goals, should be put in place to guide the implementation.
This work presents a technique for localizing a smart infrastructure node, consisting of a fisheye camera, in a prior map. Such cameras can detect objects that are outside the line of sight of autonomous vehicles (AVs) and send that information to AVs using V2X technology. However, for this information to be of any use to an AV, the detected objects must be provided in the reference frame of the prior map that the AV uses for its own navigation. It is therefore important to know the accurate pose of the infrastructure camera with respect to the prior map. We propose to solve this localization problem in two steps: \textit{(i)} we perform feature matching between a perspective projection of the fisheye image and bird's eye view (BEV) satellite imagery from the prior map to estimate an initial camera pose; \textit{(ii)} we refine this initialization by maximizing the Mutual Information (MI) between the intensity of the fisheye image pixels and the reflectivity of 3D LiDAR points in the map data. We validate our method on simulated data and also present results on real-world data.
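The MI-maximization step in this kind of registration can be illustrated in one dimension. The sinusoidal "camera" and "LiDAR" signals, the binning scheme, and the shift search below are all assumptions standing in for image intensities, point reflectivities, and the pose search; the idea is only that the candidate alignment maximizing MI recovers the true offset.

```python
from collections import Counter
import math

def mutual_information(xs, ys, bins=8):
    """Estimate MI (in nats) between two equally long sequences by
    discretising each into `bins` bins and using plug-in entropies."""
    def binned(vs):
        lo, hi = min(vs), max(vs)
        w = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / w), bins - 1) for v in vs]
    bx, by = binned(xs), binned(ys)
    n = len(xs)
    pxy = Counter(zip(bx, by))
    px, py = Counter(bx), Counter(by)
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# 1-D stand-in for pose refinement: slide the camera samples along the
# LiDAR samples and keep the shift that maximizes MI.
lidar = [math.sin(0.1 * i) for i in range(220)]
camera_obs = [math.sin(0.1 * (i + 7)) for i in range(200)]  # true offset: 7

def mi_at(shift):
    return mutual_information(camera_obs, lidar[shift:shift + 200])

best_shift = max(range(15), key=mi_at)
```

In the paper's setting the search is over a 6-DoF camera pose rather than a scalar shift, but the objective has the same shape: perfectly aligned signals are maximally informative about each other.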
Smart cities will be characterized by a variety of intelligent and networked services, each with specific requirements for the underlying network infrastructure. While smart city architectures and services have been studied extensively, little attention has been paid to the underlying network technology. The KIGLIS research project, carried out by a consortium of companies, universities, and research institutions, focuses on artificial intelligence for optimizing the fiber-optic networks of a smart city, with a special focus on future mobility applications such as automated driving. In this paper, we present early results from our process of collecting smart city requirements for communication networks, which will lead toward reference infrastructure and architecture solutions. Finally, we suggest directions in which artificial intelligence will improve smart city networks.
Hardware Security Modules (HSMs) are trusted machines that perform sensitive operations in critical ecosystems. They are usually required by law in financial and government digital services. The most important feature of an HSM is its ability to store sensitive credentials and cryptographic keys inside tamper-resistant hardware, so that every operation is done internally through a suitable API and such sensitive data are never exposed outside the device. HSMs are now conveniently provided in the cloud, meaning that the physical machines are remotely hosted by a provider and customers access them through a standard API. The property of keeping sensitive data inside the device is even more important in this setting, as a vulnerable application might expose the full API to an attacker. Unfortunately, over the last 20+ years a multitude of practical API-level attacks have been found and proven feasible on real devices. The latest version of PKCS#11, the most popular standard API for HSMs, does not address these issues, leaving all of these flaws possible. In this paper, we propose the first secure HSM configuration that does not require any restriction or modification of the PKCS#11 API and is suitable for cloud HSM solutions, where compliance with the standard API is of paramount importance. The configuration relies on a careful separation of roles among the different HSM users, so that known API flaws are not exploitable by an attacker taking control of the application. We prove the correctness of the configuration by providing a formal model in the state-of-the-art Tamarin prover, and we show how to implement the configuration in a real cloud HSM solution.
Anomaly detection is increasingly important for handling the amount of sensor data in Edge and Fog environments, Smart Cities, and Industry 4.0. To ensure good results, the ML models used need to be updated periodically to adapt to seasonal changes and concept drifts in the sensor data. Although the increasing resource availability at the edge can allow for in-situ execution of model training directly on the devices, training is still often offloaded to fog devices or the cloud. In this paper, we propose Local-Optimistic Scheduling (LOS), a method for executing periodic ML model training jobs in close proximity to the data sources without overloading lightweight edge devices. Training jobs are offloaded to nearby neighbor nodes as necessary, and resource consumption is optimized to meet the training period while still ensuring enough resources for further training executions. This scheduling is accomplished in a decentralized, collaborative, and opportunistic manner, without full knowledge of the infrastructure and workload. We evaluated our method in an edge computing testbed on real-world datasets. The experimental results show that LOS places the training executions close to the input sensor streams, decreases the deviation between training time and training period by up to 40%, and increases the number of successfully scheduled training jobs compared to in-situ execution.
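A local, knowledge-limited offloading decision of the kind LOS makes could look roughly as follows. The resource fields, the headroom reservation, the training period, and the "dict order as proximity" shortcut are all hypothetical simplifications, not the paper's actual scheduling algorithm.

```python
# LOS-style local decision sketch: train in-situ if the device can meet
# the training period with headroom, otherwise offload to a nearby
# neighbour, using only locally known information.
PERIOD = 600       # required training period in seconds (assumed)
HEADROOM = 0.2     # CPU share to keep free for future executions (assumed)

def pick_executor(local, neighbors):
    """local / neighbors: dicts with 'free_cpu' (share in [0,1]) and
    'train_time' (estimated seconds). Returns the chosen node name."""
    candidates = {"local": local, **neighbors}
    feasible = {
        name: r for name, r in candidates.items()
        if r["train_time"] <= PERIOD and r["free_cpu"] >= HEADROOM
    }
    if not feasible:
        return None  # defer: no node can run the job without overload
    # Prefer the data source itself; otherwise the nearest feasible
    # neighbour (dict insertion order stands in for proximity here).
    if "local" in feasible:
        return "local"
    return next(iter(feasible))

choice = pick_executor(
    local={"free_cpu": 0.1, "train_time": 900},          # overloaded edge device
    neighbors={"fog-a": {"free_cpu": 0.5, "train_time": 300},
               "fog-b": {"free_cpu": 0.9, "train_time": 200}},
)
```

The key property mirrored here is that the decision is optimistic and local: it needs only the device's own state and what its immediate neighbours advertise, not a global view of the infrastructure.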
Cancer segmentation in whole-slide images is a fundamental step for viable tumour burden estimation, which is of great value for cancer assessment. However, factors such as vague boundaries and small regions dissociated from viable tumour areas make it a challenging task. Considering the usefulness of multi-scale features in various vision-related tasks, we present a structure-aware scale-adaptive feature selection method for efficient and accurate cancer segmentation. Based on a segmentation network with a popular encoder-decoder architecture, a scale-adaptive module is proposed for selecting more robust features to represent the vague, non-rigid boundaries. Furthermore, a structural similarity metric is proposed for better tissue structure awareness to deal with small-region segmentation. In addition, advanced designs, including several attention mechanisms and selective-kernel convolutions, are applied to the baseline network for comparative study. Extensive experimental results show that the proposed structure-aware scale-adaptive networks achieve outstanding performance on liver cancer segmentation compared to the top ten submitted results in the PAIP 2019 challenge. Further evaluation on colorectal cancer segmentation shows that the scale-adaptive module improves the baseline network and outperforms the other attention-mechanism designs when considering the trade-off between efficiency and accuracy.
Terrestrial and satellite communication networks often rely on two-hop wireless architectures with an access channel followed by backhaul links. Examples include Cloud Radio Access Networks (C-RAN) and Low-Earth Orbit (LEO) satellite systems. Furthermore, communication services characterized by the coexistence of heterogeneous requirements are emerging as key use cases. This paper studies the performance of critical service (CS) and non-critical service (NCS) for Internet of Things (IoT) systems sharing a grant-free channel consisting of radio access and backhaul segments. On the radio access segment, IoT devices send packets to a set of non-cooperative access points (APs) using slotted ALOHA (SA). The APs then forward correctly received messages to a base station over a shared wireless backhaul segment, also adopting SA. We first study a simplified erasure channel model, which is well suited to satellite applications. Then, to account for terrestrial scenarios, the impact of fading is considered. Among the main conclusions, we show that orthogonal inter-service resource allocation is generally preferred for NCS devices, while non-orthogonal protocols can improve the throughput and packet success rate of CS devices in both terrestrial and satellite scenarios.
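The slotted ALOHA access mechanism underlying both segments is easy to simulate. The sketch below uses the classical collision model (a slot succeeds only if exactly one device transmits, no capture) with made-up parameters; the paper's erasure and fading channel models are more elaborate.

```python
import random

def sa_throughput(n_devices, p_tx, n_slots=20000, seed=0):
    """Monte Carlo estimate of slotted ALOHA throughput in successful
    packets per slot: a slot succeeds iff exactly one device transmits
    (pure collision model, no capture effect)."""
    rng = random.Random(seed)
    successes = sum(
        1 for _ in range(n_slots)
        if sum(rng.random() < p_tx for _ in range(n_devices)) == 1
    )
    return successes / n_slots

n = 50
estimate = sa_throughput(n, 1 / n)                 # transmit prob. at its optimum 1/n
analytic = n * (1 / n) * (1 - 1 / n) ** (n - 1)    # N*p*(1-p)^(N-1), ~1/e for large N
```

The simulated value should match the analytic binomial expression, which peaks near 1/e ≈ 0.37 packets per slot; layering two such segments (access and backhaul) compounds this loss, which is what makes the inter-service resource allocation question interesting.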
In future IoT environments, it is expected that the role of the personal devices of mobile users in the physical area where IoT devices are deployed will become more and more important. In particular, due to the push towards decentralisation of services to the edge, it is likely that a significant share of the data generated by IoT devices will be needed by other (mobile) nodes nearby, while global Internet access will be needed for only a small fraction of the data. In this context, opportunistic networking schemes can be adopted to build efficient content-centric protocols, through which data generated by IoT devices (or by mobile nodes themselves) can be accessed by other nodes nearby. In this paper, we propose MobCCN, an ICN-compliant protocol for this heterogeneous environment. MobCCN implements the routing and forwarding mechanisms of the main ICN realisations, such as CCN. The original aspect of MobCCN is an efficient opportunistic networking routing scheme for populating the Forwarding Interest Base (FIB) tables of the nodes, which guides the propagation of Interest packets towards nodes that store the required data. Specifically, MobCCN defines the utility of each node as a forwarder of Interest packets for a certain type of content, such that Interest packets are propagated along a positive utility gradient until reaching a node storing the data. We evaluate MobCCN against protocols representing the two endpoints of the spectrum: one minimising data delivery delay and one minimising resource consumption. Performance results show that MobCCN is very effective and efficient, as it guarantees very high delivery rates and low delays while keeping the total generated traffic at a reasonable level and saving local resources.
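The gradient-based Interest forwarding idea can be sketched on a toy topology. The utility values, topology, and greedy single-copy forwarding rule below are illustrative assumptions; MobCCN's actual utility estimation (from opportunistic contact history) and forwarding policy are richer than this.

```python
# Hypothetical utilities: each node's estimated usefulness as a forwarder
# of Interests for one content type (e.g. learned from contact history).
utility = {"A": 0.1, "B": 0.4, "C": 0.7, "D": 0.9}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
has_data = {"D"}  # D caches the requested content

def forward_interest(src, max_hops=10):
    """Greedy gradient forwarding: hand the Interest to the neighbour
    with the highest utility, but only if that utility exceeds the
    current node's, so the packet strictly climbs a positive utility
    gradient toward a node storing the data."""
    path = [src]
    node = src
    for _ in range(max_hops):
        if node in has_data:
            return path
        better = [n for n in neighbors[node] if utility[n] > utility[node]]
        if not better:
            return None  # no positive gradient: wait for a better contact
        node = max(better, key=utility.get)
        path.append(node)
    return None

route = forward_interest("A")
```

Data packets would then retrace this path in reverse, CCN-style, using the per-hop state left by the Interest.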
In recent years, with the rise of Cloud Computing (CC), many companies providing services in the cloud have added a new series of services to their catalogs, such as data mining (DM) and data processing, taking advantage of the vast computing resources available to them. Different service definition proposals have been put forward to address the problem of describing CC services in a comprehensive way. Bearing in mind that each provider has its own definition of the logic of its services, and specifically of its DM services, the possibility of describing services in a flexible way across providers is fundamental to maintaining the usability and portability of this type of CC service. The use of semantic technologies based on the Linked Data (LD) proposal for the definition of services allows the design and modelling of DM services with a high degree of interoperability. In this article, a schema for the definition of DM services on CC is presented, which covers all the key aspects of a CC service, such as prices, interfaces, Service Level Agreements, instances, and experimentation workflows, among others. The proposal is based on LD, so it reuses other schemata to obtain a better definition of the service. To validate the schema, a series of DM services have been created in which some of the best-known algorithms, such as \textit{Random Forest} and \textit{KMeans}, are modelled as services.
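A Linked Data style service description of this kind can be sketched as a JSON-LD document. The vocabulary mappings, field names, and completeness check below are illustrative assumptions (reusing schema.org terms, as LD encourages), not the schema actually proposed in the article.

```python
import json

# Hypothetical JSON-LD description of a data-mining service. The
# @context maps local terms to an existing vocabulary (schema.org),
# which is what makes the description interoperable across providers.
service = {
    "@context": {
        "schema": "http://schema.org/",
        "name": "schema:name",
        "provider": "schema:provider",
        "price": "schema:price",
        "algorithm": "schema:algorithm",   # assumed custom term
    },
    "@type": "schema:Service",
    "name": "RandomForestClassification",
    "provider": "ExampleCloudProvider",
    "price": {"amount": 0.05, "currency": "USD", "unit": "per-hour"},
    "algorithm": "RandomForest",
    "interface": {"input": "text/csv", "output": "application/json"},
    "sla": {"availability": 0.999},
}

# Key aspects a CC service description should cover (assumed subset).
REQUIRED = {"name", "provider", "price", "algorithm"}

def is_complete(desc):
    """Check the description covers the required key aspects."""
    return REQUIRED <= desc.keys()

doc = json.dumps(service)  # portable serialisation for exchange between providers
```

Because the terms resolve to a shared vocabulary, a second provider's tooling can interpret `doc` without agreeing in advance on a private service-description format.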