Autonomous systems in the road transportation network require intelligent mechanisms that cope with uncertainty when forecasting the future. In this paper, we propose a multi-stage probabilistic approach for trajectory forecasting: transformation of trajectories into displacement space, clustering of the displacement time series, generation of trajectory proposals, and ranking of the proposals. We introduce a new deep feature clustering method, based on a self-conditioned GAN, which copes better with distribution shifts than traditional methods. Additionally, we propose a novel distance-based ranking scheme that assigns probabilities to the generated trajectories and is more efficient than, yet as accurate as, an auxiliary neural network. The overall system surpasses context-free deep generative models on human and road-agent trajectory data while performing on par with point estimators when only the most probable trajectory is compared.
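As a concrete illustration of the first and last stages above, the sketch below converts an absolute-coordinate trajectory into displacement space and assigns probabilities to candidate trajectories via a distance-based score; the particular distance (continuity between the last observation and each proposal's first point) and the softmax temperature are illustrative assumptions, not the definitions used in the paper.

import numpy as np

def to_displacement_space(trajectory):
    # trajectory: (T, 2) array of absolute positions -> (T-1, 2) per-step displacements
    return np.diff(trajectory, axis=0)

def rank_proposals(proposals, observed_history, temperature=1.0):
    """Assign probabilities to trajectory proposals from a distance-based score.

    proposals        : (K, T, 2) candidate future trajectories
    observed_history : (H, 2) observed past positions
    The distance used here (endpoint-to-history continuity) is an illustrative choice.
    """
    last_obs = observed_history[-1]
    # A smaller distance between a proposal's first point and the last observation
    # means a smoother continuation, so that proposal receives a higher score.
    distances = np.linalg.norm(proposals[:, 0, :] - last_obs, axis=1)
    scores = -distances / temperature
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()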
Recent advances in sensing and communication have paved the way for collective perception in traffic management, with real-time data sharing among multiple entities. While vehicle-based collective perception has gained traction, infrastructure-based approaches, which entail the real-time sharing and merging of sensing data from different roadside sensors for object detection, grapple with challenges in placement strategy and high ex-post evaluation costs. Despite anecdotal evidence of their effectiveness, many current deployments rely on engineering heuristics and face budget constraints that limit post-deployment adjustments. This paper introduces polynomial-time heuristic algorithms and a simulation tool for the ex-ante evaluation of infrastructure sensor deployment. By modeling sensor deployment as an integer programming problem, we guide decisions on sensor locations, heights, and configurations so as to balance cost, installation constraints, and coverage. Our simulation engine, integrated with open-source urban driving simulators, enables us to evaluate the effectiveness of each sensor deployment solution through the lens of object detection. A case study with infrastructure LiDARs revealed that the incremental benefit derived from integrating additional low-resolution LiDARs could surpass that of incorporating more high-resolution ones. The results reinforce the necessity of investigating the cost-performance tradeoff prior to deployment. The code for our simulation experiments can be found at //github.com/dajiangsuo/SEIP.
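To make the deployment formulation concrete, a minimal budgeted-coverage integer program in the spirit described above can be written with PuLP as follows; the candidate sites, costs, coverage sets, and budget are toy values, and this simplified set-cover style model is only a sketch of the kind of problem solved, not the paper's exact formulation.

import pulp

# Toy data: candidate sensor sites, their costs, and the road cells each site covers.
sites = {"s1": 3, "s2": 2, "s3": 4}                                   # site -> installation cost
covers = {"s1": {"a", "b"}, "s2": {"b", "c"}, "s3": {"a", "c", "d"}}  # site -> covered cells
cells = {"a", "b", "c", "d"}
budget = 6

prob = pulp.LpProblem("sensor_placement", pulp.LpMaximize)
x = {s: pulp.LpVariable(f"x_{s}", cat="Binary") for s in sites}       # install site s?
y = {c: pulp.LpVariable(f"y_{c}", cat="Binary") for c in cells}       # cell c covered?

prob += pulp.lpSum(y.values())                                        # maximize covered cells
prob += pulp.lpSum(sites[s] * x[s] for s in sites) <= budget          # stay within budget
for c in cells:
    # A cell counts as covered only if at least one selected site covers it.
    prob += y[c] <= pulp.lpSum(x[s] for s in sites if c in covers[s])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [s for s in sites if x[s].value() > 0.5]
print("install:", chosen)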
We investigate a new paradigm that uses differentiable SLAM architectures in a self-supervised manner to train end-to-end deep learning models in various LiDAR-based applications. To the best of our knowledge, no existing work leverages SLAM as a training signal for deep learning based models. We explore new ways to improve the efficiency, robustness, and adaptability of LiDAR systems with deep learning techniques. We focus on the potential benefits of differentiable SLAM architectures for improving the performance of deep learning tasks such as classification and regression, as well as SLAM itself. Our experimental results demonstrate a non-trivial increase in the performance of two deep learning applications, Ground Level Estimation and Dynamic to Static LiDAR Translation, when they are used with differentiable SLAM architectures. Overall, our findings provide important insights for enhancing the performance of LiDAR-based navigation systems. We demonstrate that this new paradigm of using a SLAM loss signal while training LiDAR-based models can be easily adopted by the community.
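A schematic PyTorch-style training step shows how a differentiable SLAM module could supply such a self-supervised signal; the model, the diff_slam module, the task loss, and the 0.1 weighting are hypothetical placeholders for illustration rather than the actual training setup.

def training_step(model, diff_slam, scan_t, scan_t1, target_t, task_loss_fn, optimizer, slam_weight=0.1):
    """One training step in which SLAM consistency acts as an extra, self-supervised signal.

    model        : network being trained (e.g. a dynamic-to-static translation net) -- hypothetical
    diff_slam    : differentiable module that registers two scans and returns a scalar residual
    task_loss_fn : primary task loss (e.g. a ground-level regression loss) -- hypothetical
    """
    optimizer.zero_grad()
    pred_t = model(scan_t)
    pred_t1 = model(scan_t1)

    loss_task = task_loss_fn(pred_t, target_t)     # supervised task objective
    # SLAM loss: the registration residual between consecutive predicted scans is
    # differentiable, so its gradient flows back into the model being trained.
    loss_slam = diff_slam(pred_t, pred_t1)

    loss = loss_task + slam_weight * loss_slam
    loss.backward()
    optimizer.step()
    return loss.item()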
This expository manuscript presents generalized expressions for the low-frequency voltage gain and terminal impedances of each of the three fundamental bipolar-amplifier topologies (i.e., common emitter, common base, and common collector). Unlike the formulas that students typically learn and designers typically use, the equations presented in this tutorial assume the most general set of conditions: finite output resistance and base-collector current gain, a load resistor at each non-input terminal of the transistor, and a "feedback" resistor between the base and collector terminals. Although the expressions may appear algebraically complex at first glance, emphasis is placed on mathematical elegance and ease of use -- they are formulated in terms of sub-terms that capture important aspects of the circuit's behavior. Similarities in the mathematical structure of the results reveal a deeper conceptual connection between the different amplifier topologies and, ultimately, a reciprocity relationship between the base and emitter terminals. Familiar approximate expressions are subsumed as special cases. Tables consolidating the expressions in an organized fashion are provided. Companion results for metal-oxide-semiconductor (MOS) single-transistor amplifiers are also included.
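For orientation, one of the familiar approximate expressions that the generalized formulas reduce to is the textbook common-emitter voltage gain with no base-collector feedback resistor and no emitter degeneration, written in standard notation (these special cases are shown only as a reference point, not as the generalized results derived in the manuscript):

A_v \;\approx\; -\,g_m\,\left(R_C \parallel r_o\right)

where g_m is the transconductance, R_C the collector load, and r_o the transistor output resistance; with an emitter degeneration resistor R_E and r_o \to \infty, this becomes A_v \approx -\dfrac{g_m R_C}{1 + g_m R_E}.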
High assurance of information-flow security (IFS) for concurrent systems is challenging. A promising way to formally verify concurrent systems is the rely-guarantee method. However, existing compositional reasoning approaches for IFS concentrate on language-based IFS and are often not applicable to system-level security, such as multicore operating system kernels, in which the secrecy of actions must also be considered. On the other hand, existing studies of the rely-guarantee method are largely built on concurrent programming languages, with which the semantics of concurrent systems cannot be captured completely in a straightforward way. In order to formally verify state-action based IFS for concurrent systems, we propose a rely-guarantee-based compositional reasoning approach for IFS in this paper. We first design a language by incorporating ``Event'' into concurrent languages and give the IFS semantics of the language. As a primitive element, events offer an extremely neat framework for modeling systems and are not necessarily atomic in our language. For compositional reasoning about IFS, we use rely-guarantee specifications to define new forms of unwinding conditions (UCs) on events, i.e., event UCs. Via a rely-guarantee proof system for the language and the soundness of the event UCs, we show that the event UCs imply IFS of concurrent systems. In this way, we relax the atomicity constraint on actions in traditional UCs and provide a compositional reasoning approach in which the security proof of a system can be discharged by independent security proofs of individual events. Finally, we mechanize the approach in Isabelle/HOL and, as a case study, develop a formal specification and its IFS proof for multicore separation kernels according to an industrial standard -- ARINC 653.
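For orientation, the classical action-level unwinding conditions for state-action based noninterference have the following shape (the notation here is the standard one from the noninterference literature, not the paper's own):

\text{step consistency:}\quad s \sim_d t \;\Longrightarrow\; \mathrm{step}(s,a) \sim_d \mathrm{step}(t,a)
\qquad
\text{local respect:}\quad \mathrm{dom}(a) \not\leadsto d \;\Longrightarrow\; s \sim_d \mathrm{step}(s,a)

The event UCs defined in the paper generalize this shape by replacing the atomic action a with a possibly non-atomic event whose interleaved environment steps are constrained by its rely condition.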
Optimization-based methods are commonly applied in autonomous driving trajectory planners, which transform the continuous-time trajectory planning problem into a finite nonlinear program with constraints imposed at a finite set of collocation points. However, constraint violations can still occur between adjacent collocation points. To address this issue thoroughly, we propose a safety-guaranteed collision-avoidance model that mitigates collision risks within optimization-based trajectory planners. The model introduces an embodied footprint, an enlarged representation of the vehicle's nominal footprint. If the embodied footprints do not collide with obstacles at the finite collocation points, then the ego vehicle's nominal footprint is guaranteed to be collision-free at every one of the infinitely many moments between adjacent collocation points. Based on our theoretical analysis, we define the geometric size of the embodied footprint as a simple function of vehicle velocity and curvature. In particular, we propose a trajectory optimizer built on embodied footprints that can determine an appropriate number of collocation points before the optimization process begins. We conduct this research to strengthen the foundation of optimization-based planners in robotics. Comparative simulations and field tests validate the completeness, solution speed, and solution quality of the proposed method.
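A small sketch of how such a footprint inflation could be computed from velocity and curvature is given below; the specific bound (half the collocation interval times the maximum speed of any body point) is one conservative choice for illustration and is not claimed to be the exact function derived in the paper.

def embodied_inflation(v, kappa, dt, r_max):
    """Conservative footprint inflation radius between adjacent collocation points.

    v      : planned speed at the collocation point [m/s]
    kappa  : path curvature at the collocation point [1/m]
    dt     : time gap between adjacent collocation points [s]
    r_max  : distance from the vehicle reference point to the farthest corner
             of the nominal footprint [m]

    Any point of the rigid body moves at most (|v| + |v*kappa|*r_max) m/s, and any
    instant between two collocation points lies within dt/2 of the nearer one, so
    dilating the nominal footprint by this radius keeps the true footprint inside
    the embodied footprint over the whole interval. This is an illustrative bound,
    not the paper's formula.
    """
    omega = abs(v * kappa)                       # yaw-rate magnitude
    max_point_speed = abs(v) + omega * r_max     # fastest-moving point on the body
    return 0.5 * dt * max_point_speed

# Example: 10 m/s, curvature 0.1 1/m, 0.1 s spacing, 2.8 m half-diagonal footprint.
print(embodied_inflation(10.0, 0.1, 0.1, 2.8))   # ~0.64 m of inflation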
As the landscape of devices that interact with the electrical grid expands, the complexity of the scenarios arising from these interactions also increases. Validation methods and tools are typically domain-specific and are designed mainly for component-level testing. For such applications, software- and hardware-in-the-loop simulations, as well as lab experiments, are all tools that allow testing with different degrees of accuracy at various stages of the development life cycle. The situation is very different, however, when analysing the tools and methodologies available for system-level validation. To date, there are no well-defined approaches for testing complex use cases involving components from different domains. Smart grid applications typically include a relatively large number of physical devices, software components, and communication technologies, all working hand in hand. This paper explores the testing possibilities opened up by integrating a real-time simulator into co-simulation environments. Three practical implementations of such systems, together with performance metrics, are discussed. Two control-related examples are selected to show the capabilities of the proposed approach.
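A generic time-stepped master loop of the kind used to couple a real-time simulator with software simulators is sketched below; the read_outputs/write_inputs/step interfaces are hypothetical placeholders and no specific co-simulation framework from the paper is assumed.

import time

def cosimulation_loop(real_time_sim, offline_sims, step_size, t_end):
    """Illustrative time-stepped co-simulation master.

    real_time_sim : interface to the real-time simulator (e.g. the power network)
    offline_sims  : list of coupled software simulators (e.g. controllers, comms models)
    step_size     : synchronization interval in seconds
    """
    t = 0.0
    while t < t_end:
        wall_start = time.monotonic()
        outputs = real_time_sim.read_outputs()       # measurements from the RT simulator
        for sim in offline_sims:
            sim.step(t, step_size, outputs)          # advance each coupled simulator
        setpoints = {k: v for sim in offline_sims for k, v in sim.outputs().items()}
        real_time_sim.write_inputs(setpoints)        # feed control signals back
        t += step_size
        # The real-time simulator advances with the wall clock, so the master must
        # keep up: sleep only for whatever remains of the synchronization interval.
        elapsed = time.monotonic() - wall_start
        if elapsed < step_size:
            time.sleep(step_size - elapsed)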
Human affect recognition has been a significant topic in psychophysics and computer vision. However, currently published datasets have many limitations. For example, most datasets contain frames that convey information about facial expressions only. Because of these limitations, it is very hard either to understand the mechanisms of human affect recognition or to obtain computer vision models that generalize well to common cases when trained on such datasets. In this work, we introduce a brand new large dataset, the Video-based Emotion and Affect Tracking in Context Dataset (VEATIC), that overcomes the limitations of previous datasets. VEATIC comprises 124 video clips from Hollywood movies, documentaries, and home videos, with continuous valence and arousal ratings for each frame obtained via real-time annotation. Along with the dataset, we propose a new computer vision task: inferring the affect of a selected character from both context and character information in each video frame. Additionally, we propose a simple model to benchmark this new task. We also compare the performance of the model pretrained on our dataset with that of models trained on other, similar datasets. Experiments show that the model pretrained on VEATIC achieves competitive results, indicating the generalizability of VEATIC. Our dataset is available at //veatic.github.io.
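A minimal two-stream baseline of the kind the proposed task calls for, combining a full-frame context stream with a character-crop stream to regress valence and arousal, might look as follows in PyTorch; this generic architecture is an illustration only and is not the benchmark model described in the paper.

import torch
import torch.nn as nn

class ContextCharacterAffectNet(nn.Module):
    """Toy two-stream regressor: full-frame context + character crop -> (valence, arousal)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        def backbone():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim), nn.ReLU(),
            )
        self.context_stream = backbone()        # encodes the whole frame (scene context)
        self.character_stream = backbone()      # encodes the selected character's crop
        self.head = nn.Linear(2 * feat_dim, 2)  # continuous valence and arousal

    def forward(self, frame, character_crop):
        ctx = self.context_stream(frame)
        per = self.character_stream(character_crop)
        return self.head(torch.cat([ctx, per], dim=1))

# Usage: frame and crop are batches of RGB images.
model = ContextCharacterAffectNet()
frame = torch.randn(4, 3, 128, 128)
crop = torch.randn(4, 3, 128, 128)
va = model(frame, crop)                         # shape (4, 2)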
Despite the general consensus in the transport research community that model calibration and validation are necessary to enhance model predictive performance, there exist significant inconsistencies in the literature. This is primarily due to a lack of consistent definitions and of a unified, statistically sound framework. In this paper, we provide a general and rigorous formulation of the model calibration and validation problem and highlight its relation to statistical inference. We also conduct a comprehensive review of the steps and challenges involved, point out inconsistencies, and then offer suggestions on improving current practices. This paper is intended to help practitioners better understand the nature of model calibration and validation and to promote statistically rigorous and correct practices. Although the examples are drawn from a transport research background, which is our target audience, the content of this paper is equally applicable to other modelling contexts.
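A toy end-to-end example of the calibrate-then-validate workflow viewed as statistical inference is sketched below; the link travel-time model, its parameter, and the synthetic data are invented for illustration and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy "model": predicted link travel time as a function of flow, with one unknown parameter theta.
def model(flow, theta):
    return 5.0 + theta * flow**2

# Synthetic observations, split into calibration and validation sets.
flow = rng.uniform(0, 1, 200)
observed = model(flow, theta=3.0) + rng.normal(0, 0.5, 200)
cal_x, cal_y = flow[:150], observed[:150]
val_x, val_y = flow[150:], observed[150:]

# Calibration as statistical estimation: least squares here is the maximum-likelihood
# estimator under the Gaussian error assumption.
thetas = np.linspace(0, 6, 601)
sse = [np.sum((cal_y - model(cal_x, t))**2) for t in thetas]
theta_hat = thetas[int(np.argmin(sse))]

# Validation: assess predictive performance on data not used for calibration.
rmse_val = np.sqrt(np.mean((val_y - model(val_x, theta_hat))**2))
print(f"theta_hat={theta_hat:.2f}, validation RMSE={rmse_val:.2f}")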
Signalized intersections on arterial roads result in persistent vehicle idling and excess accelerations, contributing to fuel consumption and CO2 emissions. There has thus been a line of work studying eco-driving control strategies to reduce fuel consumption and emission levels at intersections. However, methods to devise effective control strategies across a variety of traffic settings remain elusive. In this paper, we propose a reinforcement learning (RL) approach to learn effective eco-driving control strategies. We analyze the potential impact of a learned strategy on fuel consumption, CO2 emissions, and travel time, and compare it with naturalistic driving and model-based baselines. We further demonstrate the generalizability of the learned policies under mixed-traffic scenarios. Simulation results indicate that scenarios with 100% penetration of connected autonomous vehicles (CAVs) may yield reductions of as much as 18% in fuel consumption and 25% in CO2 emissions while also improving travel speed by 20%. Furthermore, the results indicate that even 25% CAV penetration can deliver at least 50% of the total fuel and emission reduction benefits.
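One common way to set up such an RL problem is to shape a per-step reward that trades off fuel, emissions, and travel time; the sketch below is an illustrative reward of that form, with weights and terms that are assumptions rather than the reward actually used in the paper.

def eco_driving_reward(fuel_rate, co2_rate, speed, target_speed,
                       w_fuel=1.0, w_co2=0.5, w_speed=0.2):
    """Illustrative per-step reward for eco-driving RL.

    fuel_rate    : instantaneous fuel consumption (e.g. mL/s)
    co2_rate     : instantaneous CO2 emission (e.g. g/s)
    speed        : current speed of the controlled (CAV) vehicle [m/s]
    target_speed : free-flow or desired speed [m/s]
    """
    # Penalize fuel use and emissions, and penalize being slower than the target
    # speed so the learned policy does not simply stop to save fuel.
    return -(w_fuel * fuel_rate
             + w_co2 * co2_rate
             + w_speed * max(0.0, target_speed - speed))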
Graph Convolutional Networks (GCNs) have been widely applied in transportation demand prediction owing to their ability to capture the non-Euclidean spatial dependence among station-level or regional transportation demands. However, in most existing research, the graph convolution is implemented on a heuristically generated adjacency matrix, which can neither reflect the real spatial relationships between stations accurately nor capture the multi-level spatial dependence of demands adaptively. To cope with these problems, this paper proposes a novel graph convolutional network for transportation demand prediction. Firstly, a novel graph convolution architecture is proposed, in which each layer has its own adjacency matrix and all the adjacency matrices are learned during training. Secondly, a layer-wise coupling mechanism is introduced that ties each upper-level adjacency matrix to the one in the layer below, which also reduces the number of parameters in the model. Lastly, a unitary network produces the final prediction by integrating the hidden spatial states with a gated recurrent unit, thereby capturing the multi-level spatial dependence and the temporal dynamics simultaneously. Experiments have been conducted on two real-world datasets, NYC Citi Bike and NYC Taxi, and the results demonstrate the superiority of our model over state-of-the-art baselines.
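A compact PyTorch sketch of the two core ideas, layer-specific self-learned adjacency matrices coupled across layers and a GRU over the resulting spatial states, is given below; the layer sizes, coupling rule, and overall architecture are generic illustrations rather than the paper's exact model.

import torch
import torch.nn as nn

class CoupledAdaptiveGCNLayer(nn.Module):
    """GCN layer whose self-learned adjacency matrix is coupled to the layer below."""
    def __init__(self, num_nodes, in_dim, out_dim):
        super().__init__()
        self.adj_delta = nn.Parameter(torch.zeros(num_nodes, num_nodes))  # learned refinement
        self.coupling = nn.Parameter(torch.tensor(0.5))                   # ties this layer to the one below
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, lower_adj):
        # x: (batch, num_nodes, in_dim); lower_adj: adjacency passed up from the layer below.
        adj = torch.softmax(self.coupling * lower_adj + self.adj_delta, dim=-1)
        return torch.relu(self.linear(adj @ x)), adj

class DemandPredictor(nn.Module):
    """Stack of coupled adaptive GCN layers followed by a GRU over time."""
    def __init__(self, num_nodes, in_dim=2, hidden=32):
        super().__init__()
        self.base_adj = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.gcn1 = CoupledAdaptiveGCNLayer(num_nodes, in_dim, hidden)
        self.gcn2 = CoupledAdaptiveGCNLayer(num_nodes, hidden, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x_seq):
        # x_seq: (batch, time, num_nodes, in_dim) of historical demand features.
        b, t, n, _ = x_seq.shape
        states = []
        for step in range(t):
            h, adj1 = self.gcn1(x_seq[:, step], self.base_adj)
            h, _ = self.gcn2(h, adj1)                      # upper adjacency coupled to the lower one
            states.append(h)
        h_seq = torch.stack(states, dim=1)                 # (batch, time, nodes, hidden)
        # Run the GRU per node over the temporal dimension of the spatial states.
        h_seq = h_seq.permute(0, 2, 1, 3).reshape(b * n, t, -1)
        out, _ = self.gru(h_seq)
        return self.out(out[:, -1]).reshape(b, n)          # next-step demand per node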