Autonomous truck and trailer configurations face challenges when operating in reverse due to the lack of sensing on the trailer. It is anticipated that sensor packages will be installed on existing trailers to extend autonomous operation in reverse in uncontrolled environments, such as a customer's loading dock. Power Line Communication (PLC) between the trailer and the tractor cannot support high-bandwidth, low-latency communication. This paper explores the impact of using Ethernet or a wireless medium for commercial tractor-trailer communication on the lifecycle and operation of trailer electronic control units (ECUs) from a Systems Engineering perspective, addressing system requirements, integration, and security. Additionally, content-based and host-based networking approaches for in-vehicle communication, such as Named Data Networking (NDN) and IP-based networking, are compared. Implementation, testing, and evaluation of prototype trailer ECU communication with the tractor ECUs over Ethernet are demonstrated by transmitting different data types simultaneously. The implementation is tested with two networking approaches, Named Data Networking and Data Distribution Service (DDS), and the tests indicate that NDN over TCP is an efficient approach capable of meeting automotive communication requirements. Using Ethernet or a wireless harness with NDN for the commercial trailer Anti-Lock Braking System (ABS) ECU provides adequate resources for the operation of autonomous trucks and the expansion of their capabilities, while significantly reducing complexity compared to adding new features to legacy communication systems. Using a wireless medium for tractor-trailer communication will introduce new cybersecurity challenges and requirements, which in turn require new development and lifecycle considerations.
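To make the "different data types simultaneously" experiment concrete, the following minimal sketch shows one way heterogeneous trailer messages (e.g., ABS wheel speeds and sensor status) could be length-prefix framed over a single TCP connection. The topic names, fields, and framing are illustrative assumptions for exposition only; they are not the paper's NDN or DDS wire formats.

```python
import json
import socket
import struct

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes (TCP may deliver partial reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def send_msg(sock: socket.socket, topic: str, payload: dict) -> None:
    """Frame a message as a 4-byte big-endian length prefix plus a JSON body."""
    body = json.dumps({"topic": topic, "data": payload}).encode()
    sock.sendall(struct.pack("!I", len(body)) + body)

def recv_msg(sock: socket.socket) -> dict:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length))

# Example: interleave two data types on one connection (topic names are
# hypothetical).
# send_msg(sock, "abs/wheel_speed", {"rpm": [412, 409, 415, 411]})
# send_msg(sock, "sensor/status", {"camera_ok": True, "lidar_ok": True})
```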
In this work, we study the problem of real-time tracking and reconstruction of an information source with the purpose of actuation. A device monitors an $N$-state Markov process and transmits status updates to a receiver over a wireless erasure channel. We consider a set of joint sampling and transmission policies, including a semantics-aware one, and we study their performance with respect to relevant metrics. Specifically, we investigate the real-time reconstruction error and its variance, the consecutive error, the cost of memory error, and the cost of actuation error. Furthermore, we propose a randomized stationary sampling and transmission policy and derive closed-form expressions for all aforementioned metrics. We then formulate an optimization problem for minimizing the real-time reconstruction error subject to a sampling cost constraint. Our results show that in the scenario of constrained sampling generation, the optimal randomized stationary policy outperforms all other sampling policies when the source is rapidly evolving. Otherwise, the semantics-aware policy performs the best.
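As a rough illustration of the randomized stationary policy, the short Monte-Carlo sketch below tracks an N-state Markov source over an erasure channel and estimates the real-time reconstruction error; the transition matrix, sampling probability, and erasure probability are illustrative assumptions, not values from the paper.

```python
import random

# Illustrative parameters (assumptions): N states, a row-stochastic
# transition matrix P, sampling probability p_s, erasure probability p_e.
N = 3
P = [[0.90, 0.05, 0.05],
     [0.05, 0.90, 0.05],
     [0.05, 0.05, 0.90]]
p_s, p_e, T = 0.3, 0.2, 100_000

def step(s: int) -> int:
    """Advance the Markov source by one slot."""
    return random.choices(range(N), weights=P[s])[0]

src, est, errors, samples = 0, 0, 0, 0
for _ in range(T):
    src = step(src)                      # source evolves
    if random.random() < p_s:            # randomized stationary sampling
        samples += 1
        if random.random() > p_e:        # update survives the erasure channel
            est = src                    # receiver refreshes its estimate
    errors += (est != src)               # real-time reconstruction error

print(f"error rate {errors / T:.3f}, sampling rate {samples / T:.3f}")
```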
IEEE 802.1 Time-Sensitive Networking (TSN) standards are envisioned to replace legacy network protocols in critical domains to ensure reliable and deterministic communication over off-the-shelf Ethernet equipment. However, they lack security countermeasures and may even introduce new attack vectors that can lead to hazardous consequences. This paper presents the first open-source security monitoring and intrusion detection mechanism, TSNZeek, for IEEE 802.1 TSN protocols. We extend an existing monitoring tool, Zeek, with a new packet parsing grammar to process TSN data traffic and a rule-based attack detection engine for TSN-specific threats. We also discuss various security-related configuration and design aspects of IEEE 802.1 TSN monitoring. Our experiments show that TSNZeek causes only approximately 5% CPU overhead on top of Zeek and successfully detects various threats in a real TSN testbed.
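To convey the flavor of rule-based detection over parsed TSN frames, here is a hypothetical Python sketch; the frame fields and the sequence-number rule are illustrative and do not reproduce TSNZeek's actual parsing grammar or rule syntax (0x22F0 is the IEEE 1722 AVTP EtherType).

```python
from dataclasses import dataclass

@dataclass
class Frame:
    eth_type: int      # 0x22F0 corresponds to IEEE 1722 (AVTP)
    stream_id: int
    seq_num: int

last_seq: dict[int, int] = {}

def check(frame: Frame) -> list[str]:
    """Apply two toy rules and return any alerts they raise."""
    alerts = []
    if frame.eth_type != 0x22F0:
        alerts.append("unexpected EtherType on TSN port")
    prev = last_seq.get(frame.stream_id)
    if prev is not None and frame.seq_num != (prev + 1) % 256:
        alerts.append("sequence-number jump: possible injection/replay")
    last_seq[frame.stream_id] = frame.seq_num
    return alerts

print(check(Frame(0x22F0, stream_id=1, seq_num=5)))   # first frame: no alert
print(check(Frame(0x22F0, stream_id=1, seq_num=9)))   # gap -> alert
```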
This work considers the problem of mitigating information leakage between communication and sensing in systems jointly performing both operations. Specifically, a discrete memoryless state-dependent broadcast channel model is studied in which (i) the presence of feedback enables a transmitter to convey information, while simultaneously performing channel state estimation; (ii) one of the receivers is treated as an eavesdropper whose state should be estimated but which should remain oblivious to part of the transmitted information. The model abstracts the challenges behind security for joint communication and sensing if one views the channel state as a key attribute, e.g., location. For independent and identically distributed states, perfect output feedback, and when part of the transmitted message should be kept secret, a partial characterization of the secrecy-distortion region is developed. The characterization is exact when the broadcast channel is either physically-degraded or reversely-physically-degraded. The partial characterization is also extended to the situation in which the entire transmitted message should be kept secret. The benefits of a joint approach compared to separation-based secure communication and state-sensing methods are illustrated with binary joint communication and sensing models.
We investigate the age of information (AoI) of a relay-assisted cooperative communication system, where a source node sends status update packets to the destination node as timely as possible with the aid of a relay node. For time-slotted systems without relaying, prior works have shown that the source should generate and send a new packet to the destination every time slot to minimize the average AoI, regardless of whether the destination has successfully decoded the packet in the previous slot. However, when a dedicated relay is involved, whether the relay can improve the AoI performance requires an in-depth study. In particular, the packet generation and transmission strategy of the source should be carefully designed to cooperate with the relay. Depending on whether the source and the relay are allowed to transmit simultaneously, two relay-assisted schemes are investigated: time division multiple access (TDMA) and non-orthogonal multiple access (NOMA) schemes. A key challenge in deriving their theoretical average AoI is that the destination has different probabilities of successfully receiving an update packet in different time slots. We model each scheme using a Markov chain to derive the corresponding closed-form average AoI. Interestingly, our theoretical analysis indicates that the relay-assisted schemes can only outperform the non-relay scheme in average AoI when the signal-to-noise ratio of the source-destination link is below -2 dB. Furthermore, comparing the merits of relay-assisted schemes, simulation results show that the TDMA scheme has a lower energy consumption, while the NOMA counterpart typically achieves a lower average AoI.
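The slot-level Monte-Carlo sketch below illustrates how the average AoI of a direct link and a TDMA relay scheme can be compared numerically; the link success probabilities and the alternating slot structure are illustrative assumptions, not the paper's exact model.

```python
import random

def aoi_direct(p_sd: float, T: int = 200_000) -> float:
    """Source sends a fresh update every slot straight to the destination."""
    aoi, total = 0, 0
    for _ in range(T):
        if random.random() < p_sd:
            aoi = 0                      # fresh packet delivered this slot
        aoi += 1                         # everything ages by one slot
        total += aoi
    return total / T

def aoi_relay_tdma(p_sr: float, p_rd: float, T: int = 200_000) -> float:
    """Even slots: source -> relay (fresh packet); odd slots: relay -> dest."""
    aoi, relay_age, total = 0, None, 0
    for t in range(T):
        if t % 2 == 0:
            if random.random() < p_sr:
                relay_age = 0            # relay now holds a fresh packet
        elif relay_age is not None and random.random() < p_rd:
            aoi = relay_age              # destination adopts the packet's age
        aoi += 1
        if relay_age is not None:
            relay_age += 1
        total += aoi
    return total / T

print(aoi_direct(0.5))                   # weak source-destination link
print(aoi_relay_tdma(0.9, 0.9))          # strong two-hop links via the relay
```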
Designing effective routing strategies for mobile wireless networks is challenging due to the need to seamlessly adapt routing behavior to spatially diverse and temporally changing network conditions. In this work, we use deep reinforcement learning (DeepRL) to learn a scalable and generalizable single-copy routing strategy for such networks. We make the following contributions: i) we design a reward function that enables the DeepRL agent to explicitly trade off competing network goals, such as minimizing delay vs. the number of transmissions per packet; ii) we propose a novel set of relational neighborhood, path, and context features to characterize mobile wireless networks and model device mobility independently of a specific network topology; and iii) we use a flexible training approach that allows us to combine data from all packets and devices into a single offline centralized training set to train a single DeepRL agent. To evaluate generalizability and scalability, we train our DeepRL agent on one mobile network scenario and then test it on other mobile scenarios, varying the number of devices and transmission ranges. Our results show that our learned single-copy routing strategy outperforms all other strategies in terms of delay except the optimal strategy, even on scenarios on which the DeepRL agent was not trained.
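A minimal sketch of contribution i): a scalar reward that lets the agent trade off delivery delay against transmission count. The weights and the delivery bonus below are illustrative assumptions, not the paper's exact formulation.

```python
def routing_reward(delay_slots: float, num_transmissions: int,
                   delivered: bool, alpha: float = 1.0,
                   beta: float = 0.1) -> float:
    """Higher is better: reward delivery, penalize delay and radio usage.

    alpha and beta set the delay-vs-transmissions trade-off; the +10
    delivery bonus is a hypothetical choice.
    """
    bonus = 10.0 if delivered else 0.0
    return bonus - alpha * delay_slots - beta * num_transmissions

# Tilting the trade-off: a larger beta favors fewer transmissions even at
# the cost of longer delays.
print(routing_reward(delay_slots=12, num_transmissions=3, delivered=True))
```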
Connected and Automated Vehicles (CAVs) are an emerging technology in the automotive domain with the potential to alleviate accidents, traffic congestion, and pollutant emissions, leading to a safe, efficient, and sustainable transportation system. Machine learning-based methods are widely used in CAVs for crucial tasks like perception, motion planning, and motion control. However, these models are typically trained solely on local vehicle data, and their performance is uncertain when exposed to new environments or unseen conditions. Federated learning (FL) is an effective solution for CAVs that enables collaborative model development across multiple vehicles in a distributed learning framework. FL enables CAVs to learn from a wide range of driving environments and improve their overall performance while ensuring the privacy and security of local vehicle data. In this paper, we review the progress accomplished by researchers in applying FL to CAVs. A broad view of the various data modalities and algorithms that have been implemented on CAVs is provided. Specific applications of FL are reviewed in detail, and an analysis of the challenges and future scope of research is presented.
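At the core of most FL pipelines for CAVs is an aggregation step such as FedAvg (McMahan et al., 2017), sketched below: the server averages client model weights in proportion to local dataset sizes. Shapes and client counts here are illustrative.

```python
import numpy as np

def fed_avg(client_weights: list[list[np.ndarray]],
            client_sizes: list[int]) -> list[np.ndarray]:
    """Weighted average of per-client weight lists, layer by layer."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_layers)
    ]

# Example: three vehicles with different amounts of local driving data.
clients = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
global_model = fed_avg(clients, client_sizes=[100, 250, 650])
print([w.shape for w in global_model])
```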
Intelligent vehicles (IVs) have attracted wide attention thanks to their augmented convenience, safety advantages, and potential commercial value. Although a few autonomous driving unicorns assert that IVs will be commercially deployable by 2025, their deployment is still restricted to small-scale validation due to various issues, among which safety, reliability, and generalization of planning methods are prominent concerns. Precise computation of control commands or trajectories by planning methods remains a prerequisite for IVs, since perceptual imperfections in complex environments pose an obstacle to the successful commercialization of IVs. This paper aims to review state-of-the-art planning methods, including pipeline planning and end-to-end planning methods. For pipeline methods, a survey of algorithm selection is provided, along with a discussion of the expansion and optimization mechanisms; for end-to-end methods, the training approaches and verification scenarios of driving tasks are the points of concern. Experimental platforms are reviewed to help readers select suitable training and validation methods. Finally, the current challenges and future directions are discussed. The side-by-side comparison presented in this survey not only offers insights into the strengths and limitations of the reviewed methods but also assists with system-level design choices.
This survey paper is an expanded version of an invited keynote at the ThEdu'22 workshop, August 2022, in Haifa (Israel). After a short introduction to the developments of CAS, DGS, and other useful technologies, we show their implications for Mathematics Education, and in the broader frame of STEAM Education. In particular, we discuss the transformation of Mathematics Education into an exploration-discovery-conjecture-proof scheme, avoiding usage as a black box. This scheme fits well into the so-called 4 C's of 21st Century Education. Communication and Collaboration are emphasized not only between humans, but also between machines, and between man and machine. Specific characteristics of the outputs heighten the need for Critical Thinking. The usage of automated commands for exploration and discovery is discussed, with mention of their limitations where they exist. We illustrate the topic with examples from parametric integrals (describing a "cognitive neighborhood" of a mathematical notion), plane geometry, and the study of plane curves (envelopes, isoptic curves). Some of the examples are fully worked out; others are explained, and references are given.
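For instance, the envelope computations mentioned above are easy to automate in a CAS: eliminate the parameter t from F(x, y, t) = 0 and dF/dt = 0. The sketch below, written in SymPy rather than the specific systems discussed in the paper, recovers the envelope y = ±1 of a family of unit circles centred on the x-axis.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

# Family of unit circles centred at (t, 0): F(x, y, t) = 0.
F = (x - t)**2 + y**2 - 1

# Envelope condition: F = 0 together with dF/dt = 0, eliminating t.
dFdt = sp.diff(F, t)
sols = sp.solve([F, dFdt], [y, t], dict=True)
for s in sols:
    print(sp.simplify(s[y]))   # prints -1 and 1: the envelope lines y = ±1
```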
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their application in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that can lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand for enabling numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems, and applications. We begin by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video, and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
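As a concrete taste of two of the compression primitives mentioned above, the toy sketch below applies magnitude pruning and symmetric int8 post-training quantization to a weight matrix; real systems apply these per layer with fine-tuning, so treat this purely as illustration.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def quantize_int8(w: np.ndarray):
    """Symmetric linear quantization of float weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale                       # reconstruct with q * scale

w = np.random.randn(8, 8).astype(np.float32)
print((magnitude_prune(w, 0.5) == 0).mean())   # ~0.5 sparsity
q, s = quantize_int8(w)
print(np.abs(w - q * s).max())                 # small quantization error
```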