Mobile and Internet of Things (IoT) devices are generating enormous amounts of multi-modal data as a result of their exponential growth and accessibility. Consequently, these data sources must be analyzed in real time directly at the network edge rather than relying on the cloud. Significant processing power at the network edge makes it possible to gather data and make decisions before the data are sent to the cloud. At the same time, security problems have grown sharply with the rapid expansion of mobile devices, IoT devices, and various network points, and it is harder than ever to guarantee the privacy of sensitive data, including customer information. This systematic literature review shows that new technologies are a powerful weapon in the fight against attacks and threats to edge computing security.
This paper presents an approach to provide strong assurance of the secure execution of distributed event-driven applications on shared infrastructures, while relying on a small Trusted Computing Base. We build upon and extend security primitives provided by Trusted Execution Environments (TEEs) to guarantee authenticity and integrity properties of applications, and to secure control of input and output devices. More specifically, we guarantee that if an output is produced by the application, it was allowed to be produced by the application's source code based on an authentic trace of inputs. We present an integrated open-source framework to develop, deploy, and use such applications across heterogeneous TEEs. Beyond authenticity and integrity, our framework optionally provides confidentiality and a notion of availability, and facilitates software development at a high level of abstraction over the platform-specific TEE layer. We support event-driven programming to develop distributed enclave applications in Rust and C for heterogeneous TEEs, including Intel SGX, ARM TrustZone, and Sancus. In this article we discuss the workings of our approach, the extensions we made to the Sancus processor, and the integration of our development model with commercial TEEs. Our evaluation of security and performance aspects shows that TEEs, together with our programming model, form a basis for powerful security architectures for dependable systems in domains such as Industrial Control Systems and the Internet of Things, illustrating our framework's unique suitability for a broad range of use cases that combine cloud processing, mobile and edge devices, and lightweight sensing and actuation.
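The authenticity guarantee sketched above, that any output must be justified by an authentic trace of inputs, can be illustrated loosely as follows; this is a minimal sketch of the general idea using a chained MAC over the event trace, and the key handling, message format, and verification logic are assumptions for illustration, not the framework's actual API.

```python
import hmac, hashlib

KEY = b"shared-module-key"   # assumption: in a real TEE this key never leaves the enclave

def mac(prev_tag, event):
    """Chain a MAC over the event trace: each tag authenticates the full history."""
    return hmac.new(KEY, prev_tag + event, hashlib.sha256).digest()

def emit_trace(events):
    tag, trace = b"\x00" * 32, []
    for e in events:
        tag = mac(tag, e)
        trace.append((e, tag))
    return trace

def verify(trace):
    tag = b"\x00" * 32
    for e, t in trace:
        tag = mac(tag, e)
        if not hmac.compare_digest(tag, t):
            return False
    return True

# An output event is accepted only if the whole input trace authenticates.
trace = emit_trace([b"input:sensor=42", b"output:actuate=open"])
assert verify(trace)
```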
The development of smart city transport systems, including self-driving cars, leads to an increased threat of hostile interference in vehicle control processes. Such interference may disrupt the normal functioning of the transport system, and, if it is performed covertly, the system can be negatively affected for a long period of time. This paper develops a stochastic cellular-automaton simulation model of traffic on a circular two-lane road based on the Sakai-Nishinari-Fukui-Schadschneider (S-NFS) rules. In the presented model, in addition to ordinary vehicles, there are covertly counteracting vehicles whose task is to reduce quantitative indicators of the transport system (such as the traffic flux) using special rules of behavior. Three such rules are considered and compared: two lane-changing rules and one slow-down rule. It is shown that such counteracting vehicles can affect the traffic flow, mainly in the region of the maximum of the fundamental diagram, that is, at average values of the vehicle density. In free-flowing traffic or in a traffic jam, the influence of the counteracting vehicle is negligible regardless of its rules of behavior.
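To make the modelling setup concrete, the following is a minimal single-lane sketch of a stochastic cellular-automaton traffic update in the spirit of the NaSch/S-NFS family, with one covertly counteracting vehicle applying a slow-down rule; the two-lane geometry, the lane-changing rules, and the exact S-NFS parameters of the paper are simplified away, and all constants are illustrative assumptions.

```python
import random

L, N, VMAX, P_SLOW = 100, 20, 5, 0.1  # road cells, cars, max speed, slowdown prob. (assumptions)
COVERT = 0                            # index of the covertly counteracting vehicle

pos = sorted(random.sample(range(L), N))
vel = [random.randint(0, VMAX) for _ in range(N)]

def step(pos, vel):
    """One parallel update on a circular single-lane road (simplified NaSch-style rules)."""
    new_pos, new_vel = [], []
    for i in range(N):
        gap = (pos[(i + 1) % N] - pos[i] - 1) % L  # free cells to the car ahead
        v = min(vel[i] + 1, VMAX)                  # accelerate
        v = min(v, gap)                            # avoid collisions
        if random.random() < P_SLOW:               # random slowdown
            v = max(v - 1, 0)
        if i == COVERT and v > 1:                  # covert rule: brake harder than needed
            v -= 1
        new_vel.append(v)
        new_pos.append((pos[i] + v) % L)
    return new_pos, new_vel

for _ in range(1000):
    pos, vel = step(pos, vel)
print("flux ~", sum(vel) / L)                      # crude flow estimate after relaxation
```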
Although extensive research on emergency collision avoidance has been carried out for straight or curved roads in highway scenarios, a general method that can be applied to all road environments has not been thoroughly explored. Moreover, most current algorithms do not consider collision mitigation in an emergency. This functionality is essential, since the problem may have no feasible solution. We propose a safe controller using model predictive control and an artificial potential function to address these problems. A new artificial potential function inspired by line charges is proposed as the cost function for our model predictive controller. The vehicle dynamics and actuator limitations are set as constraints. The new artificial potential function considers the shape of all objects. In particular, the proposed artificial potential function has the flexibility to fit the shape of road structures, such as intersections. We can also realize collision mitigation for a specific part of the vehicle by increasing the charge quantity at the corresponding place. We have tested our method in 192 cases from 8 different scenarios in simulation with two different models. The simulation results show that the success rate of the proposed safe controller is 20% higher than that of HJ-reachability with system decomposition when using a unicycle model. It also reduces collisions occurring at the pre-assigned part of the vehicle by 43%. The method is further validated with a dynamic bicycle model.
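As a rough illustration of the line-charge idea, the sketch below sums Coulomb-style repulsive terms from point charges sampled uniformly along an obstacle boundary segment, so that a larger charge on some portion raises the cost of approaching it; the actual cost function, constraints, and MPC formulation of the paper are not reproduced here, and the 1/r charge law, the gains, and the geometry are assumptions.

```python
import numpy as np

def segment_potential(p, a, b, q=1.0, n=20, eps=0.5):
    """Repulsive potential at point p from a line charge on segment a-b,
    approximated by n point charges with a Coulomb-style 1/r law."""
    ts = np.linspace(0.0, 1.0, n)
    charges = a[None, :] + ts[:, None] * (b - a)[None, :]  # sample points on the segment
    r = np.linalg.norm(p[None, :] - charges, axis=1)
    return (q / n) * np.sum(1.0 / (r + eps))               # eps avoids the singularity

# Hypothetical example: potential felt by the ego vehicle near a road edge.
road_edge = (np.array([0.0, 3.5]), np.array([50.0, 3.5]))
ego = np.array([10.0, 2.0])
print(segment_potential(ego, *road_edge))
```

In an MPC setting, such terms would be summed over all obstacle and road-boundary segments and added to the stage cost, which is the general pattern the abstract describes.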
Given a computable sequence of natural numbers, it is a natural task to find a G\"odel number of a program that generates this sequence. It is easy to see that this problem is neither continuous nor computable. In algorithmic learning theory this problem is well studied from several perspectives and one question studied there is for which sequences this problem is at least learnable in the limit. Here we study the problem on all computable sequences and we classify the Weihrauch complexity of it. For this purpose we can, among other methods, utilize the amalgamation technique known from learning theory. As a benchmark for the classification we use closed and compact choice problems and their jumps on natural numbers, and we argue that these problems correspond to induction and boundedness principles, as they are known from the Kirby-Paris hierarchy in reverse mathematics. We provide a topological as well as a computability-theoretic classification, which reveal some significant differences.
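For readers unfamiliar with the benchmark, the standard formulation of closed choice on the natural numbers, one of the problems used for the classification, is the following; this is the textbook definition rather than a statement of the paper's results.

```latex
% Closed choice on the naturals: given negative information about a nonempty
% set A (an enumeration of its complement), produce some element of A.
\[
  \mathrm{C}_{\mathbb{N}} :\subseteq \mathcal{A}(\mathbb{N}) \rightrightarrows \mathbb{N},
  \qquad
  \mathrm{C}_{\mathbb{N}}(A) = A \quad \text{for closed } A \neq \emptyset .
\]
```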
Block-based programming languages like Scratch are increasingly popular for programming education and end-user programming. Recent program analyses build on the insight that source code can be modelled using techniques from natural language processing. Many of the regularities of source code that support this approach are due to the syntactic overhead imposed by textual programming languages. This syntactic overhead, however, is precisely what block-based languages remove in order to simplify programming. Consequently, it is unclear how well this modelling approach performs on block-based programming languages. In this paper, we investigate the applicability of language models to the popular block-based programming language Scratch. We model Scratch programs using n-gram models, the most fundamental type of language model, and transformers, a popular deep learning model. Evaluation on the example tasks of code completion and bug finding confirms that blocks inhibit predictability, but the use of language models is nevertheless feasible. Our findings serve as a foundation for improving tooling and analyses for block-based languages.
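For concreteness, the following is a minimal sketch of how an n-gram model (here a bigram model) can rank next-block candidates for code completion over tokenized Scratch scripts; the toy corpus, the opcode-style token names, and the unsmoothed maximum-likelihood estimate are illustrative assumptions, not the paper's setup.

```python
from collections import Counter, defaultdict

# Toy corpus of tokenized Scratch scripts (token names are hypothetical).
corpus = [
    ["whenflagclicked", "forever", "movesteps", "ifonedgebounce"],
    ["whenflagclicked", "forever", "nextcostume", "waitseconds"],
    ["whenflagclicked", "movesteps", "turndegrees", "movesteps"],
]

def train_bigrams(scripts):
    counts = defaultdict(Counter)
    for script in scripts:
        for prev, nxt in zip(script, script[1:]):
            counts[prev][nxt] += 1
    return counts

def complete(counts, prev, k=3):
    """Return the k most likely next blocks after `prev` (raw MLE, no smoothing)."""
    total = sum(counts[prev].values())
    if total == 0:
        return []
    return [(tok, c / total) for tok, c in counts[prev].most_common(k)]

model = train_bigrams(corpus)
print(complete(model, "forever"))  # e.g. [('movesteps', 0.5), ('nextcostume', 0.5)]
```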
By interacting, synchronizing, and cooperating with its physical counterpart in real time, the digital twin promises to enable an intelligent, predictive, and optimized modern city. By interconnecting massive physical entities and their virtual twins through inter-twin and intra-twin communications, the Internet of digital twins (IoDT) enables free data exchange, dynamic mission cooperation, and efficient information aggregation for composite insights across vast physical/virtual entities. However, as IoDT incorporates various cutting-edge technologies to spawn this new ecosystem, severe known and unknown security flaws and privacy invasions hinder its wide deployment. Besides, intrinsic characteristics of IoDT such as \emph{decentralized structure}, \emph{information-centric routing}, and \emph{semantic communications} entail critical challenges for security service provisioning in IoDT. To this end, this paper presents an in-depth review of the IoDT with respect to system architecture, enabling technologies, and security/privacy issues. Specifically, we first explore a novel distributed IoDT architecture with cyber-physical interactions and discuss its key characteristics and communication modes. Afterward, we investigate the taxonomy of security and privacy threats in IoDT, discuss the key research challenges, and review the state-of-the-art defense approaches. Finally, we point out new trends and open research directions related to IoDT.
With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for the next generation of beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most existing surveys on B5G security focus on the performance and accuracy of AI/ML models while often overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the security domain of B5G is to make the decision-making processes of system security transparent and comprehensible to stakeholders, thereby making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, as well as the use cases that general users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and outline future research directions building on currently conducted projects involving XAI.
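As a small illustration of what XAI means operationally in a security context, the sketch below computes permutation feature importance for a black-box intrusion detector, revealing which traffic features drive its alerts; the detector, the features, and the data are synthetic assumptions, not drawn from the surveyed B5G systems.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "traffic" features: [pkt_rate, avg_pkt_size, conn_duration] (assumptions).
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0.5).astype(int)            # attacks driven by packet rate, by construction

def detector(X):
    """Stand-in black-box model: flags high packet rates as attacks."""
    return (X[:, 0] > 0.5).astype(int)

def permutation_importance(model, X, y, n_repeats=10):
    base = np.mean(model(X) == y)
    scores = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # destroy feature j's relation to the labels
            drops.append(base - np.mean(model(Xp) == y))
        scores.append(float(np.mean(drops)))
    return scores

print(permutation_importance(detector, X, y))  # feature 0 should dominate
```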
Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm that has drawn tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varied hardware specifications and dynamic states of the participating devices. Heterogeneity can, in theory, exert a huge influence on the FL training process, e.g., rendering a device unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol but takes heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, a 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two key factors contributing to the degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers should account for heterogeneity during evaluation. On the other hand, our findings urge system providers to design specific mechanisms to mitigate its impacts.
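A minimal sketch of the kind of heterogeneity-aware simulation this study motivates: FedAvg over synthetic clients in which some devices fail to upload in a round (device failure) and reliable devices are sampled more often (participant bias); the linear model, failure rates, and sampling scheme are assumptions for illustration, not the paper's platform.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, DIM, ROUNDS = 50, 10, 30
true_w = rng.normal(size=DIM)
clients = []
for _ in range(N_CLIENTS):
    X = rng.normal(size=(20, DIM))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=20)))

fail_prob = rng.uniform(0.0, 0.5, N_CLIENTS)   # per-device failure rate (assumption)
sample_weight = 1.0 - fail_prob                # participant bias toward reliable devices

w = np.zeros(DIM)
for _ in range(ROUNDS):
    picked = rng.choice(N_CLIENTS, size=10, replace=False,
                        p=sample_weight / sample_weight.sum())
    updates = []
    for c in picked:
        if rng.random() < fail_prob[c]:        # device fails mid-round: update lost
            continue
        X, y = clients[c]
        grad = X.T @ (X @ w - y) / len(y)      # one local gradient step
        updates.append(w - 0.1 * grad)
    if updates:
        w = np.mean(updates, axis=0)           # FedAvg aggregation
print("parameter error:", np.linalg.norm(w - true_w))
```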
Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis close to where the data are captured, based on artificial intelligence. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, around 2011, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then systematically classify the state-of-the-art solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.
Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have now received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey paper provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.