With developments in information and communications technology (ICT) and drone technologies, such as the Internet of Things (IoT), edge computing, image processing, and autonomous drones, solutions supporting search and rescue (SAR) missions can be developed with more intelligent capabilities. In most drone and unmanned aerial vehicle (UAV) based systems supporting SAR missions, several drones deployed in different areas acquire images and videos that are sent to a ground control station (GCS) for processing and for detecting a missing person. Although this offers many advantages, such as easy management and deployment, the approach still has many limitations. For example, when the connection between a drone and the GCS is disrupted, the quality of service cannot be maintained. Moreover, many drone and UAV-based systems do not support flexibility, transparency, security, and traceability. In this paper, we propose a novel Internet-of-Drones (IoD) architecture using blockchain technology. We implement the proposed architecture with different drones, edge servers, and a Hyperledger blockchain network. The proof-of-concept design demonstrates that the proposed architecture can offer high-level services such as prolonging the operating time of a drone, improving the accuracy of human detection, and providing a high level of transparency, traceability, and security.
In the 21st century, the industry of drones, also known as unmanned aerial vehicles (UAVs), has witnessed rapid growth and a large number of airspace users. The tremendous benefits of this technology in civilian applications such as hostage rescue and parcel delivery will be integrated into the smart cities of the future. Nowadays, the affordability of commercial drones expands their usage on a large scale. However, the development of drone technology is associated with vulnerabilities and threats due to the lack of efficient security implementations. Moreover, the complexity of UAVs in software and hardware triggers potential security and privacy issues, posing significant challenges for industry, academia, and governments. In this paper, we extensively survey the security and privacy issues of UAVs by providing a systematic classification at four levels: Hardware-level, Software-level, Communication-level, and Sensor-level. In particular, for each level, we thoroughly investigate (1) common vulnerabilities that expose UAVs to potential attacks from malicious actors, (2) existing threats that jeopardize the civilian application of UAVs, (3) active and passive attacks performed by adversaries to compromise the security and privacy of UAVs, and (4) possible countermeasures and mitigation techniques to protect UAVs from such malicious activities. In addition, we summarize takeaways that highlight lessons learned about UAVs' security and privacy issues. Finally, we conclude our survey by presenting critical pitfalls and suggesting promising future research directions for the security and privacy of UAVs.
Disruptive changes in vehicles and transportation have been triggered by automated, connected, electrified, and shared mobility. Autonomous vehicles, like Internet data packets, are transported from one address to another through the road network. The Internet has become a general network transmission paradigm, and the Energy Internet is a successful application of this paradigm to the field of energy. By introducing the Internet paradigm to the field of transportation, this paper is the first to propose the Transportation Internet. Based on the concept of the Transportation Internet, fundamental models, such as the switching, routing, and hierarchical models, are established to form basic theories; new architectures, such as transportation routers and software-defined transportation, are proposed to make transportation interconnected and open; and system verifications, such as prototyping and simulation, are carried out to demonstrate feasibility and advancement. The Transportation Internet, which is of far-reaching significance in science and industry, has brought systematic breakthroughs in theory, architecture, and technology, explored innovative research directions, and provided an Internet-like solution for the new generation of transportation.
Designing an incentive-compatible auction that maximizes expected revenue is a central problem in auction design. Theoretical approaches to the problem have hit limits in the past decades, and analytical solutions are known for only a few simple settings. Computational approaches to the problem through the use of linear programs (LPs) have their own set of limitations. Building on the success of deep learning, a new approach was recently proposed by Duetting et al. (2019), in which the auction is modeled by a feed-forward neural network and the design problem is framed as a learning problem. The neural architectures used in that work are general purpose and do not take advantage of any symmetries the problem may present, such as permutation equivariance. In this work, we consider auction design problems that have permutation-equivariant symmetry and construct a neural architecture that is capable of perfectly recovering the permutation-equivariant optimal mechanism, which we show is not possible with the previous architecture. We demonstrate that permutation-equivariant architectures are not only capable of recovering previous results but also have better generalization properties.
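As a minimal, hypothetical sketch of the permutation-equivariance property such architectures build on (a Deep Sets style layer with illustrative names, not the paper's exact exchangeable layers): permuting the bidders' input rows permutes the layer's output rows in exactly the same way.

import numpy as np

rng = np.random.default_rng(0)

def equivariant_layer(X, W_self, W_pool, b):
    """A simple permutation-equivariant layer over a set of rows:
    each row is transformed by W_self, plus a shared pooled term through W_pool.
    Permuting the input rows permutes the output rows identically."""
    pooled = X.mean(axis=0, keepdims=True)          # permutation-invariant summary of the set
    return X @ W_self + pooled @ W_pool + b

n_bidders, d_in, d_out = 4, 3, 5
X = rng.normal(size=(n_bidders, d_in))              # one row of features per bidder (illustrative)
W_self, W_pool = rng.normal(size=(d_in, d_out)), rng.normal(size=(d_in, d_out))
b = rng.normal(size=(d_out,))

perm = rng.permutation(n_bidders)
out_then_perm = equivariant_layer(X, W_self, W_pool, b)[perm]
perm_then_out = equivariant_layer(X[perm], W_self, W_pool, b)
print(np.allclose(out_then_perm, perm_then_out))    # True: the layer is permutation-equivariant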
Wireless Body Area Networks (WBANs) comprise a network of sensors subcutaneously implanted or placed near the body surface and facilitate continuous monitoring of the health parameters of a patient. Research endeavours involving WBANs are directed towards effective transmission of detected parameters to a Local Processing Unit (LPU, usually a mobile device) and analysis of the parameters at the LPU or a back-end cloud. An important concern in WBANs is the lightweight nature of WBAN nodes and the need to conserve their energy. This is especially true for subcutaneously implanted nodes that cannot be recharged or regularly replaced. Work in energy conservation is mostly aimed at optimising the routing of signals to minimise the energy expended. In this paper, a simple yet innovative approach to energy conservation and detection of alarming health status is proposed. Energy conservation is ensured through a two-tier approach wherein the first tier eliminates `uninteresting' health parameter readings at the site of a sensing node and prevents these from being transmitted across the WBAN to the LPU. A reading is categorised as uninteresting if it deviates very slightly from its immediately preceding reading and does not provide new insight into the patient's well-being. In addition, readings that are faulty and emanate from possible sensor malfunctions are also eliminated. These eliminations are done at the site of the sensor using algorithms that are light enough to function effectively in the extremely resource-constrained environments of the sensor nodes. We observe, through experiments, that this eliminates around 90% of the readings that would otherwise need to be transmitted to the LPU, leading to significant energy savings. Furthermore, the proper functioning of these algorithms in such constrained environments is confirmed and validated over a hardware simulation setup. The second tier of assessment includes a proposed anomaly detection model at the LPU that is capable of identifying, from streaming health parameter readings, anomalies that indicate an adverse medical condition. In addition to being able to handle streaming data, the model works within the resource-constrained environment of an LPU and eliminates the need to transmit the data to a back-end cloud, ensuring further energy savings. The anomaly detection capability of the model is validated using data available from the critical care units of hospitals and is shown to be superior to other anomaly detection techniques.
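A minimal sketch of how the first-tier, sensor-side elimination could look, assuming an illustrative relative-deviation threshold and plausibility range (the function name and thresholds are hypothetical, not the authors' exact algorithm):

# Illustrative sketch of tier-1 filtering at the sensor node (hypothetical thresholds).
# A reading is transmitted only if it is physiologically plausible and deviates enough
# from the previously transmitted reading; otherwise it is dropped at the node.

def should_transmit(reading, last_sent, rel_threshold=0.02, valid_range=(30.0, 220.0)):
    """Return True if `reading` is 'interesting' and plausible enough to send to the LPU."""
    lo, hi = valid_range
    if not (lo <= reading <= hi):        # likely sensor fault: discard
        return False
    if last_sent is None:                # nothing sent yet: always transmit
        return True
    # 'Uninteresting' if it barely deviates from the previously transmitted value
    return abs(reading - last_sent) / max(abs(last_sent), 1e-9) > rel_threshold

# Example: a heart-rate stream where most consecutive readings change very little
stream = [72, 72, 73, 72, 71, 110, 111, 250, 72]
last_sent = None
for r in stream:
    if should_transmit(r, last_sent):
        last_sent = r
        print("transmit", r)            # only a small fraction of readings is sent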
Computer architecture and systems have long been optimized to enable efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This has a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or some other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, with a scope covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
Detection and recognition of text in natural images are two main problems in the field of computer vision with a wide variety of applications, such as the analysis of sports videos, autonomous driving, and industrial automation, to name a few. They share common challenges stemming from how text is represented and how it is affected by environmental conditions. The current state-of-the-art scene text detection and/or recognition methods have exploited the advancement in deep learning architectures and reported superior accuracy on benchmark datasets when tackling multi-resolution and multi-oriented text. However, several remaining challenges affecting text in wild images still cause existing methods to underperform, because their models are unable to generalize to unseen data and because of insufficient labeled data. Thus, unlike previous surveys in this field, the objectives of this survey are as follows: first, offering the reader not only a review of the recent advancements in scene text detection and recognition, but also the results of extensive experiments using a unified evaluation framework that assesses pre-trained models of the selected methods on challenging cases and applies the same evaluation criteria to these techniques. Second, identifying several existing challenges for detecting or recognizing text in wild images, namely in-plane rotation, multi-oriented and multi-resolution text, perspective distortion, illumination reflection, partial occlusion, complex fonts, and special characters. Finally, the paper also presents insights into potential research directions in this field to address some of the mentioned challenges that scene text detection and recognition techniques still encounter.
We survey research on self-driving cars published in the literature, focusing on autonomous cars developed since the DARPA challenges that are equipped with an autonomy system categorized as SAE level 3 or higher. The architecture of the autonomy system of self-driving cars is typically organized into the perception system and the decision-making system. The perception system is generally divided into many subsystems responsible for tasks such as self-driving-car localization, static obstacle mapping, moving obstacle detection and tracking, road mapping, and traffic signalization detection and recognition, among others. The decision-making system is commonly partitioned as well into many subsystems responsible for tasks such as route planning, path planning, behavior selection, motion planning, and control. In this survey, we present the typical architecture of the autonomy system of self-driving cars. We also review research on relevant methods for perception and decision making. Furthermore, we present a detailed description of the architecture of the autonomy system of UFES's car, IARA. Finally, we list prominent autonomous research cars developed by technology companies and reported in the media.
Deep learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is the development of novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it according to three dimensions: search space, search strategy, and performance estimation strategy.
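As a toy illustration of those three dimensions (a purely hypothetical search space, a random-sampling search strategy, and a stand-in performance estimator, not any surveyed method):

import random

SEARCH_SPACE = {                      # 1) search space: which architectures are expressible (illustrative)
    "depth": [4, 8, 12],
    "width": [64, 128, 256],
    "op":    ["conv3x3", "conv5x5", "sep_conv3x3"],
}

def estimate_performance(arch):       # 3) performance estimation: a cheap stand-in proxy, not real training
    return -abs(arch["depth"] - 8) - abs(arch["width"] - 128) / 64

def random_search(n_trials=20, seed=0):   # 2) search strategy: plain random sampling
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = estimate_performance(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

print(random_search())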
The ever-growing interest witnessed in the acquisition and development of unmanned aerial vehicles (UAVs), commonly known as drones, in the past few years has led to the emergence of a very promising and effective technology. Because of their small size and fast deployment, UAVs have shown their effectiveness in collecting data over unreachable areas and restricted coverage zones. Moreover, their flexibly defined capacity enables them to collect information with a very high level of detail, leading to high-resolution images. UAVs mainly served in military scenarios; however, in the last decade they have been broadly adopted in civilian applications as well. The task of aerial surveillance and situation awareness is usually completed by integrating intelligence, surveillance, observation, and navigation systems, all interacting in the same operational framework. To build this capability, UAVs are well-suited tools that can be equipped with a wide variety of sensors, such as cameras or radars. Deep learning has been widely recognized as a prominent approach in different computer vision applications. Specifically, one-stage and two-stage object detectors are regarded as the two most important groups of convolutional neural network based object detection methods. One-stage object detectors usually outperform two-stage object detectors in speed; however, they normally trail two-stage detectors in detection accuracy. In this study, the focal-loss-based RetinaNet, a one-stage object detector, is utilized to match the speed of regular one-stage detectors while surpassing two-stage detectors in accuracy for UAV-based object detection. State-of-the-art performance is demonstrated on a UAV-captured image dataset, the Stanford Drone Dataset (SDD).
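For reference, the focal loss at the heart of RetinaNet is FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t); a minimal NumPy sketch (variable names illustrative) shows how it down-weights easy background examples:

import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-12):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted foreground probabilities in [0, 1]
    y: binary ground-truth labels (1 = object, 0 = background)
    The (1 - p_t)**gamma factor down-weights easy, well-classified examples,
    which is what lets a dense one-stage detector cope with the extreme
    foreground/background class imbalance.
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# An easy background example (p ~ 0.01) contributes almost nothing, while a hard
# misclassified foreground example (p ~ 0.1) dominates the loss.
print(focal_loss(np.array([0.01, 0.1]), np.array([0, 1])))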
This paper identifies the factors that have an impact on mobile recommender systems. Recommender systems have become a widely used technology in online applications where there is an information overload problem. Numerous applications, such as e-commerce, video platforms, and social networks, provide personalized recommendations to their users, which has improved the user experience and vendor revenues. The development of recommender systems has focused mostly on proposing new algorithms that provide more accurate recommendations. However, the use of mobile devices and the rapid growth of the Internet and networking infrastructure have created the need for mobile recommender systems. The links between web and mobile recommender systems are described, along with how recommendations in mobile environments can be improved. This work focuses on identifying those links and on providing solid future directions that aim to lead to a more integrated mobile recommendation domain.