As outlined by the Intergovernmental Panel on Climate Change, electric vehicles (EVs) offer the greatest decarbonisation potential for land transport, in addition to other benefits, including reduced fuel and maintenance costs, improved air quality, reduced noise pollution, and improved national fuel security. Owing to these benefits, governments worldwide are planning and rolling out EV-favourable policies, and major car manufacturers are committing to fully electrifying their offerings over the coming decades. With the number of EVs on the roads expected to increase, it is imperative to understand the effect of EVs on transport and energy systems. While unmanaged charging of EVs could potentially add stress to the electricity grid, managed charging could benefit the grid through improved demand-supply management and improved integration of renewable energy sources, as well as offer other ancillary services. To assess the impact of EVs on the electricity grid and their potential use as batteries-on-wheels through smart charging capabilities, decision-makers need to understand how current EV owners drive and charge their vehicles. As such, an emerging area of research focuses on understanding these behaviours. Some studies have used stated preference surveys of non-EV owners or data collected from EV trials to estimate EV driving and charging patterns. Other studies have tried to infer EV owners' behaviour from data collected in national surveys or self-reported by EV owners; however, large-scale real-world data from EV owners remain scarce. This study aims to fill this gap in the literature by collecting data on the real-world driving and charging patterns of 239 EVs across Australia. To this end, data collection from current EV owners via an application programming interface platform began in November 2021 and is currently live.
Rate-splitting multiple access (RSMA) has emerged as a novel, general, and powerful framework for the design and optimization of non-orthogonal transmission, multiple access (MA), and interference management strategies for future wireless networks. Through information- and communication-theoretic analysis, RSMA has been shown to be optimal (from a Degrees-of-Freedom region perspective) in several transmission scenarios. Compared to the conventional MA strategies used in 5G, RSMA enables spectral efficiency (SE), energy efficiency (EE), coverage, user fairness, reliability, and quality of service (QoS) enhancements for a wide range of network loads (including both underloaded and overloaded regimes) and user channel conditions. Furthermore, it enjoys higher robustness against imperfect channel state information at the transmitter (CSIT) and entails lower feedback overhead and complexity. Despite its great potential to fundamentally change the physical (PHY) layer and medium access control (MAC) layer of wireless communication networks, RSMA is still confronted with many challenges on the road towards standardization. In this paper, we present the first comprehensive overview of RSMA by providing a survey of the pertinent state-of-the-art research, detailing its architecture, taxonomy, and various appealing applications, as well as comparing it with existing MA schemes in terms of their overall frameworks, performance, and complexity. An in-depth discussion of future RSMA research challenges is also provided to inspire future research on RSMA-aided wireless communication for beyond-5G systems.
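To illustrate the decoding structure that rate splitting builds on, the sketch below computes achievable rates for a heavily simplified two-user downlink with one common and two private streams: each user decodes the common stream first (treating both private streams as noise), cancels it, then decodes its own private stream. The scalar-channel model, power split, and all numerical values are illustrative assumptions, not results from the paper.

```python
import math

def rsma_two_user_rates(h1, h2, p_c, p_1, p_2, noise=1.0):
    """Illustrative 1-layer rate-splitting sketch for two users with
    scalar channel gains h1, h2, common-stream power p_c, and private
    powers p_1, p_2 (all illustrative assumptions).
    Returns (common rate, private rate of user 1, private rate of user 2)
    in bits per channel use."""
    g1, g2 = abs(h1) ** 2, abs(h2) ** 2
    # The common stream must be decodable by both users, so its rate is
    # limited by the weaker decoder (private streams treated as noise).
    r_c = min(
        math.log2(1 + g1 * p_c / (g1 * (p_1 + p_2) + noise)),
        math.log2(1 + g2 * p_c / (g2 * (p_1 + p_2) + noise)),
    )
    # After successive interference cancellation of the common stream,
    # each user decodes its private stream, treating the other's as noise.
    r_1 = math.log2(1 + g1 * p_1 / (g1 * p_2 + noise))
    r_2 = math.log2(1 + g2 * p_2 / (g2 * p_1 + noise))
    return r_c, r_1, r_2
```

Splitting user messages into a jointly decoded common part and individually decoded private parts is what lets RSMA bridge between treating interference as noise (all power on private streams) and fully decoding interference (all power on the common stream).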
Manufacturing companies face challenges when it comes to quickly adapting their production control to fluctuating demands or changing requirements. Control approaches that encapsulate production functions as services have proven promising for increasing the flexibility of Cyber-Physical Production Systems. However, an open challenge of such approaches is finding a production plan based on provided functionalities for a demanded product, especially when there is no direct (i.e., syntactic) match between demanded and provided functions. While there is a variety of approaches to production planning, flexible production poses specific requirements that are not covered by existing research. In this contribution, we first capture these requirements for flexible production environments. Afterwards, we give an overview of current Artificial Intelligence approaches that can be utilized to overcome the aforementioned challenges. For this purpose, we focus on planning algorithms, but also consider models of production systems that can serve as inputs to these algorithms. Approaches from symbolic AI planning as well as approaches based on Machine Learning are discussed and eventually compared against the requirements. Based on this comparison, a research agenda is derived.
Falls are highly common in the constantly growing global aging population and can have a variety of negative effects on health, well-being, and quality of life, including restricting the ability to conduct Activities of Daily Living (ADLs), which are crucial for one's sustenance. Timely assistance during falls is essential and requires tracking the indoor location of the elderly throughout the diverse navigational patterns associated with ADLs, so that the precise location of a fall can be detected. With the caregiver population decreasing on a global scale, it is important that future intelligent living environments can detect falls during ADLs while tracking the indoor location of the elderly in the real world. To address these challenges, this work proposes a cost-effective and simple design paradigm for an Ambient Assisted Living system that captures the multimodal components of user behaviour during ADLs necessary for performing fall detection and indoor localization simultaneously in the real world. Proof-of-concept results from real-world experiments demonstrate the effective operation of the system. Findings from two comparison studies with prior works in this field further support the novelty of this work. The first comparison study shows that the proposed system outperforms prior works on indoor localization and fall detection in terms of the effectiveness of its software and hardware design. The second shows that the development cost of this system is the lowest among prior works in these fields that involved real-world development of the underlying systems, underscoring its cost-effective nature.
This paper presents a multi-layer motion planning and control architecture for autonomous racing, capable of avoiding static obstacles, performing active overtakes, and reaching velocities above 75 $m/s$. Both the offline global trajectory generation and the online model predictive controller rely heavily on optimization and on dynamic models of the vehicle, in which tire and camber effects are represented by an extended version of the basic Pacejka Magic Formula. The proposed single-track model is identified and validated using multi-body motorsport libraries that simulate the vehicle dynamics accurately, which is especially useful when real experimental data are missing. The fundamental regularization terms and constraints of the controller are tuned to reduce the rate of change of the inputs while ensuring acceptable velocity and path tracking. The motion planning strategy consists of a Frenet-frame-based planner that considers a forecast of the opponent produced by a Kalman filter. The planner chooses the collision-free path and velocity profile to be tracked over a 3-second horizon to realize different goals, such as following and overtaking. The proposed solution has been applied to a Dallara AV-21 racecar and tested on oval race tracks, achieving lateral accelerations up to 25 $m/s^{2}$.
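The basic Pacejka Magic Formula referenced above maps tire slip angle to lateral force through four shape coefficients. A minimal sketch of that basic form follows; the coefficient values B, C, D, E here are illustrative placeholders, not the identified parameters of the AV-21 model or the paper's camber-extended variant.

```python
import math

def pacejka_lateral_force(slip_angle, B=10.0, C=1.3, D=3500.0, E=0.97):
    """Basic Pacejka Magic Formula: lateral tire force F_y (N) as a
    function of slip angle (rad).
    B: stiffness factor, C: shape factor, D: peak force (N),
    E: curvature factor -- illustrative values, not identified ones."""
    Ba = B * slip_angle
    return D * math.sin(C * math.atan(Ba - E * (Ba - math.atan(Ba))))
```

The formula is approximately linear for small slip angles (slope roughly B*C*D), peaks near the maximum force D, then falls off as the tire saturates, which is the behaviour a racing controller must respect when operating at the grip limit.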
Fifth Generation (5G) technology is an emerging and rapidly adopted technology that is being utilized in most novel applications requiring highly reliable, low-latency communications. It has the capability to provide greater coverage and better access, and it is best suited for high-density networks. Given these benefits, 5G is clearly a candidate for satisfying the requirements of autonomous vehicles. Automated driving vehicles and systems are developed with the promise of comfortable, safe, and efficient driving that reduces the risk to human life. Recently, however, there have been fatalities involving autonomous vehicles and systems, owing to the lack of a robust state of the art, which must be improved further. With the advent of 5G technology and the rise of autonomous vehicles (AVs), road safety is set to become more secure, with fewer human errors. However, the integration of 5G and AVs is still in its infancy, with several research challenges that need to be addressed. This survey first discusses the current advancements in AVs, automation levels, enabling technologies, and 5G requirements. Then, we focus on the emerging techniques required for integrating 5G technology with AVs, the impact of 5G and B5G technologies on AVs, and security concerns in AVs. The paper also provides a comprehensive survey of recent developments in standardisation activities on 5G autonomous vehicle technology and current projects. The article concludes with lessons learnt, future research directions, and challenges.
Encouraged by decision makers' appetite for future information on topics ranging from elections to pandemics, and enabled by the explosion of data and computational methods, model-based forecasts have garnered increasing influence on a breadth of decisions in modern society. Using several classic examples from fisheries management, I demonstrate that selecting the model or models that produce the most accurate and precise forecast (as measured by statistical scores) can sometimes lead to worse outcomes (as measured by real-world objectives). This can create a forecast trap, in which outcomes such as fish biomass or economic yield decline while the manager becomes increasingly convinced that their actions are consistent with the best models and data available. The forecast trap is not unique to this example but is a fundamental consequence of the non-uniqueness of models. Existing practices that promote a broader set of models are the best way to avoid the trap.
High-end vehicles are equipped with a number of electronic control units (ECUs) that provide advanced functions to enhance the driving experience. The controller area network (CAN) is a well-known protocol that connects these ECUs, owing to its simplicity and efficiency. However, the CAN bus is vulnerable to various types of attacks. Although intrusion detection systems (IDSs) have been proposed to address the security problems of the CAN bus, most previous studies only raise alerts when attacks occur, without identifying the specific type of attack. Moreover, an IDS is typically designed for a specific car model because of the diversity of car manufacturers. In this study, we propose a novel deep learning model called supervised contrastive (SupCon) ResNet, which can identify multiple types of attack on the CAN bus. Furthermore, the model can improve performance on a limited-size dataset through transfer learning. The capability of the proposed model is evaluated on two real car datasets. When tested on the car hacking dataset, the experimental results show that the SupCon ResNet model reduces the overall false-negative rate across four types of attack by a factor of four on average, compared to other models. In addition, the model achieves the highest F1 score, 0.9994, on the survival dataset by utilizing transfer learning. Finally, the model can adapt to hardware constraints in terms of memory size and running time.
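The supervised contrastive objective behind SupCon ResNet pulls embeddings of same-class samples together and pushes different classes apart. A minimal NumPy sketch of the standard SupCon loss follows (a generic restatement of the published loss, not the paper's implementation; no max-shift stabilization is applied, for brevity):

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive (SupCon) loss.
    features: (N, d) embeddings, labels: (N,) integer class labels.
    Each anchor's loss is the negative mean log-probability of its
    positives (same-class samples) under a softmax over all others."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature            # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)         # exclude self-contrast
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    # average log-probability over each anchor's positives, then negate
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(
        pos.sum(axis=1), 1)
    return -per_anchor.mean()
```

Intuitively, the loss is near zero when same-class embeddings are far more similar to each other than to other classes, and grows as the classes mix, which is what makes the learned representation useful for multi-attack identification.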
The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, remains a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches and, eventually, to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
Autonomous driving has achieved significant milestones in research and development over the last decade. There is increasing interest in the field, as the deployment of self-operating vehicles on roads promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate more reliably without human intervention. However, intelligent decision-making in autonomous cars is not generally understandable by humans in the current state of the art, and this deficiency hinders the technology from being socially acceptable. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how these decisions are constructed in order to be regulatory compliant across many jurisdictions. Our study sheds comprehensive light on developing explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the present gaps with respect to explanations in the state-of-the-art autonomous vehicle industry. Second, we present a taxonomy of explanations and explanation receivers in this field. Third, we propose a framework for an architecture of end-to-end autonomous driving systems and justify the role of XAI in both debugging and regulating such systems. Finally, as future research directions, we provide a field guide on XAI approaches for autonomous driving that can improve operational safety and transparency towards achieving public approval by regulators, manufacturers, and all engaged stakeholders.
The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Like their biological counterparts, sparse networks generalize just as well as, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial on sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation and the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparing different sparse networks. We close by speculating on how sparsity can improve future workloads and outlining major open problems in the field.
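The simplest sparsification baseline surveyed in this literature is unstructured magnitude pruning: remove the weights with the smallest absolute value. A minimal sketch (a generic illustration of the technique, not any specific method from the survey):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero out the `sparsity` fraction
    of weights with the smallest absolute values.
    Returns (pruned weights, boolean mask of surviving weights)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

In practice the mask is what enables the memory and acceleration gains discussed above: it can be stored in a compressed sparse format and used to skip multiplications on hardware that supports sparsity.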