Traffic accidents are a serious threat to human lives, and pedestrians in particular suffer premature deaths. Therefore, it is necessary to devise systems that prevent accidents in advance and respond proactively, using potentially risky situations as a surrogate safety measure. This study introduces a new concept of a pedestrian safety system that combines field and centralized processes. The system can warn of imminent risks immediately in the field and improve the safety of risk-prone areas by assessing road safety levels without requiring actual collisions. In particular, this study focuses on the latter by introducing a new analytical framework for crosswalk safety assessment based on vehicle/pedestrian behaviors and environmental features. We obtain these behavioral features from actual traffic video footage in the city through fully automatic processing. The proposed framework analyzes these behaviors from multidimensional perspectives by constructing a data cube structure that combines an LSTM-based predictive collision risk (PCR) estimation model with online analytical processing (OLAP) operations. From the PCR estimation model, we categorize risk severity into four levels and apply the proposed framework to assess crosswalk safety using the behavioral features. Our analytic experiments are based on two scenarios, and the descriptive results reveal the movement patterns of vehicles and pedestrians by road environment and the relationships between risk levels and vehicle speeds. Thus, the proposed framework can support decision makers by providing valuable information to improve pedestrian safety and prevent future accidents, and it can help us better understand vehicle and pedestrian behaviors near crosswalks. To confirm the feasibility and applicability of the proposed framework, we implement and apply it to actually operating CCTVs in Osan City, Korea.
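As a sketch of how such a risk data cube could be queried, the snippet below rolls up per-event records by road environment and risk level; the column names, thresholds, and values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation): an OLAP-style roll-up over a
# risk "data cube" built from per-event records, assuming hypothetical columns
# such as 'crosswalk_id', 'road_env', 'speed_kmh', and a PCR score in [0, 1].
import pandas as pd

def to_risk_level(pcr_score: float) -> str:
    """Map a predictive collision risk (PCR) score to one of four severity levels.
    The thresholds here are illustrative, not those used in the paper."""
    if pcr_score < 0.25:
        return "low"
    elif pcr_score < 0.5:
        return "moderate"
    elif pcr_score < 0.75:
        return "high"
    return "critical"

# Example event records as they might come out of automatic video processing.
events = pd.DataFrame({
    "crosswalk_id": [1, 1, 2, 2, 3],
    "road_env":     ["signalized", "signalized", "unsignalized", "unsignalized", "school_zone"],
    "speed_kmh":    [32.0, 51.0, 44.0, 28.0, 61.0],
    "pcr_score":    [0.12, 0.64, 0.81, 0.33, 0.92],
})
events["risk_level"] = events["pcr_score"].apply(to_risk_level)

# Roll-up: count of events per (road environment, risk level) cell of the cube.
cube = events.pivot_table(index="road_env", columns="risk_level",
                          values="pcr_score", aggfunc="count", fill_value=0)
print(cube)

# Slice: mean vehicle speed for high/critical risk events only.
print(events[events["risk_level"].isin(["high", "critical"])]
      .groupby("road_env")["speed_kmh"].mean())
```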
We propose a Stochastic MPC (SMPC) formulation for autonomous driving at traffic intersections that incorporates multi-modal predictions of surrounding vehicles into the collision avoidance constraints. The multi-modal predictions are obtained with Gaussian Mixture Models (GMM), and the collision avoidance constraints are formulated as chance constraints. Our main theoretical contribution is an SMPC formulation that optimizes over a novel feedback policy class designed to exploit additional structure in the GMM predictions and that is amenable to convex programming. The use of feedback policies is motivated by the need to reduce conservatism in handling the multi-modal predictions of surrounding vehicles, which is especially relevant in traffic intersection scenarios. We evaluate our algorithm along the axes of mobility, comfort, conservatism, and computational efficiency at a simulated intersection in CARLA. Our simulations use a kinematic bicycle model and multi-modal predictions trained on a subset of the Lyft Level 5 prediction dataset. To demonstrate the impact of optimizing over feedback policies, we compare our algorithm with two SMPC baselines that handle multi-modal collision avoidance chance constraints by optimizing over open-loop sequences.
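As an illustration of how a mode-wise chance constraint over a GMM prediction can be reformulated deterministically, the sketch below checks a linear collision-avoidance constraint per Gaussian mode; the values and the per-mode enforcement strategy are assumptions, not the paper's policy-class formulation.

```python
# Minimal sketch (assumptions, not the paper's exact formulation): evaluating a
# mode-wise linear collision-avoidance chance constraint when the other vehicle's
# position is predicted by a Gaussian Mixture Model. For a half-space constraint
# a^T x <= b with x ~ N(mu, Sigma), P(a^T x <= b) >= 1 - eps holds iff
#   a^T mu + Phi^{-1}(1 - eps) * sqrt(a^T Sigma a) <= b.
import numpy as np
from scipy.stats import norm

def mode_chance_constraint_ok(a, b, mu, Sigma, eps):
    """Deterministic reformulation of one Gaussian chance constraint."""
    tightening = norm.ppf(1.0 - eps) * np.sqrt(a @ Sigma @ a)
    return a @ mu + tightening <= b

# GMM prediction of the other vehicle's position at one future time step
# (weights, means, covariances are illustrative).
weights = [0.6, 0.4]                      # e.g., "go straight" vs "turn left" modes
means = [np.array([10.0, 0.0]), np.array([8.0, 3.0])]
covs = [np.diag([0.5, 0.2]), np.diag([0.8, 0.8])]

# Separating half-space a^T p <= b keeping the obstacle away from the ego plan.
a, b_rhs, eps = np.array([1.0, 0.0]), 12.0, 0.05

# One simple option: enforce the constraint for every mode with non-negligible weight.
feasible = all(mode_chance_constraint_ok(a, b_rhs, mu, S, eps)
               for w, mu, S in zip(weights, means, covs) if w > 1e-2)
print("candidate ego plan satisfies all mode-wise chance constraints:", feasible)
```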
Although researchers increasingly adopt machine learning to model travel behavior, they predominantly focus on prediction accuracy, ignoring the ethical challenges embedded in machine learning algorithms. This study introduces an important missing dimension - computational fairness - to travel behavior analysis. We first operationalize computational fairness via equality of opportunity, then differentiate between the bias inherent in the data and the bias introduced by modeling. We then demonstrate prediction disparities in travel behavior modeling using the 2017 National Household Travel Survey (NHTS) and the 2018-2019 My Daily Travel Survey in Chicago. Empirically, deep neural network (DNN) and discrete choice models (DCM) reveal consistent prediction disparities across multiple social groups: both exhibit disproportionately high false negative rates when predicting frequent driving for ethnic minorities, low-income, and disabled populations, and both falsely predict a higher travel burden for socially disadvantaged groups and rural populations than is observed. Comparing DNN with DCM, we find that DNN can outperform DCM in terms of prediction disparities because of its smaller misspecification error. To mitigate prediction disparities, this study introduces an absolute correlation regularization method, which is evaluated with synthetic and real-world data. The results demonstrate the prevalence of prediction disparities in travel behavior modeling, and these disparities persist across a variety of model specifics such as the number of DNN layers, batch size, and weight initialization. Since these prediction disparities can exacerbate social inequity if prediction results without fairness adjustment are used for transportation policy making, we advocate for careful consideration of the fairness problem in travel behavior modeling and the use of bias-mitigation algorithms for fair transport decisions.
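A minimal sketch of an absolute correlation regularizer, assuming a binary protected attribute and a binary travel-behavior outcome; this is an illustration of the idea, not the authors' code, and the network, data, and weighting are hypothetical.

```python
# Minimal sketch (an assumption of how such a regularizer could look, not the
# authors' exact code): add the absolute Pearson correlation between a protected
# attribute and the model's prediction error to the training loss.
import torch
import torch.nn.functional as F

def abs_correlation(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """|Pearson correlation| between two 1-D tensors, differentiable."""
    xc = x - x.mean()
    yc = y - y.mean()
    denom = xc.norm() * yc.norm() + 1e-8
    return torch.abs((xc * yc).sum() / denom)

def fair_loss(logits, labels, protected, lam=1.0):
    """Binary cross-entropy plus an absolute-correlation fairness penalty.
    `protected` is a 0/1 indicator of group membership (illustrative)."""
    bce = F.binary_cross_entropy_with_logits(logits, labels)
    error = torch.sigmoid(logits) - labels          # per-sample prediction error
    penalty = abs_correlation(protected.float(), error)
    return bce + lam * penalty

# Usage with a toy model predicting "frequent driver" (1) vs not (0).
model = torch.nn.Linear(8, 1)
x = torch.randn(64, 8)
y = torch.randint(0, 2, (64,)).float()
g = torch.randint(0, 2, (64,)).float()              # protected-group indicator
loss = fair_loss(model(x).squeeze(-1), y, g, lam=0.5)
loss.backward()
```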
The automation of internal logistics and inventory-related tasks is one of the main challenges of modern-day manufacturing corporations, since it allows a more effective use of their human resources. Nowadays, Autonomous Mobile Robots (AMRs) are the state-of-the-art technology for such applications owing to their great adaptability in dynamic environments, replacing more traditional solutions such as Automated Guided Vehicles (AGVs), which are limited in flexibility and require expensive facility updates for installation. The application of Artificial Intelligence (AI) to extend AMRs' capabilities has contributed to the development of more sophisticated and efficient robots. Nevertheless, multi-robot coordination and cooperation for solving complex tasks remains an active research topic of increasing interest. This work proposes a Multi-Agent System for coordinating multiple TIAGo robots in tasks related to the manufacturing ecosystem, such as the transportation and dispatching of raw materials, finished products, and tools. Furthermore, the system is showcased in a realistic simulation using both Gazebo and the Robot Operating System (ROS).
Current research efforts in aeroelasticity aim at including higher-fidelity aerodynamic results in simulation frameworks. In the present effort, the Python-based Fluid-Structure Interaction framework of the well-known SU2 code has been updated and extended to allow for efficient and fully open-source simulations of detailed aeroelastic phenomena. The interface has been standardised for easier inclusion of other external solvers, and the communication scheme between processors has been revisited. A native solver has been introduced to solve the structural equations coming from a Nastran-like Finite Element Model. The use of high-level programming allows simulations to be performed with ease and minimal human effort. On the other hand, the Computational Fluid Dynamics code of choice has efficient lower-level functions that provide quick turnaround times. Further, the aerodynamic code is actively developed and exhibits features attractive for computational aeroelasticity, including an effective means of deforming the mesh. The developed software has been assessed against three test cases of increasing complexity. The first test compares against analytical results for a pitching-and-plunging airfoil. The second tackles a three-dimensional transonic wing, comparing against experimental results. Finally, an entire wind tunnel test with a flexible half-plane model has been simulated. In all these tests the code performed well, increasing confidence that it will be useful for a wide range of applications, including industrial settings. The final goal of the research is to provide an excellent and free alternative for aeroelastic simulations that will promote the use of high-fidelity methods in common practice.
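A minimal sketch of a partitioned fluid-structure interaction loop with under-relaxed sub-iterations; the toy solvers and function names below are illustrative assumptions and do not correspond to SU2's actual Python wrapper.

```python
# Minimal sketch (illustrative only; names do not correspond to SU2's actual
# Python wrapper API): one way to organize a partitioned fluid-structure
# interaction loop with load/displacement transfer across an interface.
import numpy as np

class ToyFluidSolver:
    def solve(self, interface_displacements):
        # A real CFD solver would deform the mesh and return aerodynamic loads.
        return -0.1 * interface_displacements + 1.0   # placeholder loads

class ToyStructuralSolver:
    def __init__(self, stiffness):
        self.k = stiffness
    def solve(self, interface_loads):
        # A real FE solver would assemble K u = f; here simply u = f / k per node.
        return interface_loads / self.k

def coupled_step(fluid, structure, u0, tol=1e-8, max_iters=50, relax=0.5):
    """Fixed-point (block Gauss-Seidel) sub-iteration with under-relaxation."""
    u = u0.copy()
    for _ in range(max_iters):
        loads = fluid.solve(u)                 # fluid solve on deformed interface
        u_new = structure.solve(loads)         # structural response to the loads
        u_next = (1 - relax) * u + relax * u_new
        if np.linalg.norm(u_next - u) < tol:   # interface displacements converged
            return u_next
        u = u_next
    return u

u = coupled_step(ToyFluidSolver(), ToyStructuralSolver(stiffness=10.0),
                 u0=np.zeros(4))
print("converged interface displacements:", u)
```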
This paper presents a novel control approach for autonomous systems operating under uncertainty. We combine Model Predictive Path Integral (MPPI) control with Covariance Steering (CS) theory to obtain a robust controller for general nonlinear systems. The proposed Covariance-Controlled Model Predictive Path Integral (CC-MPPI) controller addresses the performance degradation observed in some MPPI implementations owing to unexpected disturbances and uncertainties. Namely, in cases where the environment changes too fast or the simulated dynamics during the MPPI rollouts do not capture the noise and uncertainty in the actual dynamics, the baseline MPPI implementation may lead to divergence. The proposed CC-MPPI controller avoids divergence by controlling the dispersion of the rollout trajectories at the end of the prediction horizon. Furthermore, CC-MPPI offers adjustable trajectory sampling distributions that can be adapted to the environment to achieve efficient sampling. Numerical examples using a ground vehicle navigating in challenging environments demonstrate the proposed approach.
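For reference, a minimal generic MPPI step is sketched below: it illustrates the baseline sampling-and-weighting scheme that CC-MPPI builds on, not the covariance-steering controller itself; the dynamics, cost, and hyperparameters are toy assumptions.

```python
# Minimal MPPI sketch (a generic illustration under simple assumptions, not the
# CC-MPPI controller from the paper): sample noisy control sequences, roll out a
# model, and form an exponentially weighted average of the rollouts.
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, n_samples=256, sigma=0.5, lam=1.0):
    """One MPPI update for a nominal control sequence u_nom of shape (T, m)."""
    T, m = u_nom.shape
    noise = sigma * np.random.randn(n_samples, T, m)
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0.copy()
        for t in range(T):
            u = u_nom[t] + noise[k, t]
            x = dynamics(x, u)
            costs[k] += cost(x, u)
    # Exponentiated cost weights (information-theoretic MPPI weighting).
    beta = costs.min()
    w = np.exp(-(costs - beta) / lam)
    w /= w.sum()
    return u_nom + np.einsum("k,ktm->tm", w, noise)

# Toy example: single integrator driven toward the origin.
dyn = lambda x, u: x + 0.1 * u
cst = lambda x, u: float(x @ x + 0.01 * u @ u)
u_plan = mppi_step(np.array([2.0, -1.0]), np.zeros((10, 2)), dyn, cst)
print(u_plan[0])   # first control to apply, receding-horizon style
```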
Autonomous navigation of mobile robots is essential in use cases such as delivery, assistance, or logistics. Although traditional planning methods are well integrated into existing navigation systems, they struggle in highly dynamic environments. On the other hand, Deep-Reinforcement-Learning-based methods show superior performance in dynamic obstacle avoidance but are not suitable for long-range navigation and struggle with local minima. In this paper, we propose a Deep-Reinforcement-Learning-based control switch that can select between different planning paradigms based solely on sensor observations. To this end, we develop an interface to efficiently operate multiple model-based as well as learning-based local planners, and we integrate a variety of state-of-the-art planners for the control switch to select from. We evaluate our approach against each planner individually and find improvements in navigation performance, especially in highly dynamic scenarios. The control switch learned to prefer learning-based approaches in situations with many obstacles while relying on traditional model-based planners in long corridors or open spaces.
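A minimal sketch of such a control switch, assuming a laser-scan observation and a hypothetical planner registry; the network and planner names are illustrative, not the paper's architecture.

```python
# Minimal sketch (an illustrative assumption, not the paper's architecture): a
# control-switch policy that maps a laser-scan observation to a discrete choice
# among registered local planners.
import torch
import torch.nn as nn

PLANNERS = ["teb", "mpc", "drl_local_planner"]   # hypothetical planner registry

class ControlSwitch(nn.Module):
    def __init__(self, scan_dim=360, n_planners=len(PLANNERS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(scan_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_planners),            # logits over planners
        )

    def select(self, scan: torch.Tensor) -> str:
        with torch.no_grad():
            idx = self.net(scan).argmax(dim=-1).item()
        return PLANNERS[idx]

# At every switching interval, the chosen planner computes the velocity command.
switch = ControlSwitch()
scan = torch.rand(360)                            # normalized range readings
active_planner = switch.select(scan)
print("delegating local planning to:", active_planner)
```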
To this day, academic credentials are primarily paper-based, and the process of verifying the authenticity of such documents is costly, time-consuming, and prone to human error and fraud. Digitally signed documents facilitate a cost-effective verification process. However, vulnerability to fraud remains due to reliance on centralized authorities that lack full transparency. In this paper, we present the mechanisms we designed to create secure and machine-verifiable academic credentials. Our protocol models a diploma as an evolving set of immutable credentials. The credentials are built as a tree-based data structure with linked time-stamping, where portions of the credentials are distributed over a set of smart contracts. Our design prevents diploma fraud and eases the detection of degree mills, while increasing transparency and trust in the issuer's procedures. Our evaluation shows that our solution offers a certification system with strong cryptographic security and enforces a high level of transparency in the certification process. We achieve these benefits with acceptable costs compared to existing solutions that lack such transparency.
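A minimal sketch of the underlying idea of hashing credential sets and chaining issuances; it is illustrative only, and the paper's on-chain data structure and smart-contract layout are more involved than this.

```python
# Minimal sketch (illustrative only): hashing a set of credentials into a Merkle
# root and chaining each issuance to the previous one, a simple form of linked
# time-stamping.
import hashlib
import json
import time

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaves):
    """Merkle root of credential hashes (duplicating the last leaf on odd levels)."""
    level = [h(json.dumps(c, sort_keys=True).encode()) for c in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h((level[i] + level[i + 1]).encode()) for i in range(0, len(level), 2)]
    return level[0]

def issue(credentials, prev_link):
    """One issuance record; on a blockchain such records could live in smart contracts."""
    root = merkle_root(credentials)
    record = {"root": root, "prev": prev_link, "timestamp": int(time.time())}
    record["link"] = h(json.dumps(record, sort_keys=True).encode())
    return record

# A diploma as an evolving set of immutable credentials: new courses extend the set.
sem1 = issue([{"course": "Algorithms", "grade": "A"}], prev_link="GENESIS")
sem2 = issue([{"course": "Algorithms", "grade": "A"},
              {"course": "Databases", "grade": "B"}], prev_link=sem1["link"])
print(sem2)
```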
Solving robotic navigation tasks via reinforcement learning (RL) is challenging due to their sparse rewards and long decision horizons. However, in many navigation tasks, high-level (HL) task representations, like a rough floor plan, are available. Previous work has demonstrated efficient learning by hierarchical approaches that plan paths in the HL representation and use sub-goals derived from the plan to guide the RL policy in the source task. However, these approaches usually neglect the complex dynamics and sub-optimal sub-goal-reaching capabilities of the robot during planning. This work overcomes these limitations by proposing a novel hierarchical framework that utilizes a trainable planning policy for the HL representation. Thereby, robot capabilities and environment conditions can be learned from collected rollout data. We specifically introduce a planning policy based on value iteration with a learned transition model (VI-RL). In simulated robotic navigation tasks, VI-RL yields consistent, strong improvements over vanilla RL; it is on par with vanilla hierarchical RL on single layouts but more broadly applicable to multiple layouts, and it is on par with trainable HL path-planning baselines except on a parking task with difficult non-holonomic dynamics, where it shows marked improvements.
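A minimal sketch of value iteration over a high-level representation with a count-based learned transition model; this is a simplified illustration under toy assumptions, not the VI-RL implementation.

```python
# Minimal sketch (a generic illustration under simplifying assumptions, not the
# VI-RL implementation): value iteration on a high-level grid representation
# using transition probabilities estimated from collected rollout data.
import numpy as np

def value_iteration(P, R, gamma=0.95, n_iters=100):
    """P[s, a, s'] are (learned) transition probabilities, R[s] is the HL reward.
    Returns state values and a greedy HL planning policy."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(n_iters):
        Q = (P * (R[None, None, :] + gamma * V[None, None, :])).sum(axis=2)
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)

def estimate_transitions(rollouts, n_states, n_actions):
    """Count-based estimate of P from (s, a, s') tuples collected by the low-level policy."""
    counts = np.ones((n_states, n_actions, n_states)) * 1e-3   # smoothing
    for s, a, s_next in rollouts:
        counts[s, a, s_next] += 1.0
    return counts / counts.sum(axis=2, keepdims=True)

# Toy 4-state corridor, 2 HL actions (left/right); goal state 3 gives reward 1.
rollouts = [(0, 1, 1), (1, 1, 2), (2, 1, 3), (2, 1, 2), (1, 0, 0)]
P = estimate_transitions(rollouts, n_states=4, n_actions=2)
R = np.array([0.0, 0.0, 0.0, 1.0])
V, policy = value_iteration(P, R)
print("HL values:", V.round(2), "greedy HL sub-goal policy:", policy)
```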
Autonomous driving is regarded as one of the most promising remedies to shield human beings from severe crashes. To this end, 3D object detection serves as a core component of such perception systems, especially for path planning, motion prediction, and collision avoidance. Generally, stereo or monocular images together with corresponding 3D point clouds are the standard inputs for 3D object detection, among which point clouds are increasingly prevalent because they provide accurate depth information. Despite existing efforts, 3D object detection on point clouds is still in its infancy due to the inherent sparseness and irregularity of point clouds, the misalignment between the camera view and the LiDAR bird's-eye view that hinders modality synergy, and occlusions and scale variations at long distances. Recently, profound progress has been made in 3D object detection, with a large body of literature investigating this vision task. We therefore present a comprehensive review of the latest progress in this field, covering the main topics including sensors, fundamentals, and recent state-of-the-art detection methods with their pros and cons. Furthermore, we introduce metrics and provide quantitative comparisons on popular public datasets. Avenues for future work are identified through an in-depth analysis of the surveyed works. Finally, we conclude the paper.
We develop a novel human trajectory prediction system that incorporates scene information (Scene-LSTM) as well as individual pedestrian movement (Pedestrian-LSTM), trained simultaneously on static crowded scenes. We superimpose a two-level grid structure (grid cells and subgrids) on the scene to encode spatial granularity and common human movements. The Scene-LSTM captures commonly traveled paths, which can significantly improve the accuracy of human trajectory prediction within local areas (i.e., grid cells). We further design scene data filters, consisting of a hard filter and a soft filter, to select the relevant scene information in a local region when necessary and combine it with Pedestrian-LSTM to forecast a pedestrian's future locations. Experimental results on several publicly available datasets demonstrate that our method outperforms related work and produces more accurate predicted trajectories across different scene contexts.
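A minimal sketch of the two-level grid indexing, with assumed grid sizes and feature dimensions; the lookup of a per-cell scene memory is an illustration of the idea, not the paper's exact parameters or architecture.

```python
# Minimal sketch (illustrative assumptions about the grid layout, not the paper's
# exact parameters): mapping a pedestrian position to its grid cell and subgrid,
# the two-level structure used to look up scene-specific movement information.
import numpy as np

def two_level_index(pos, scene_size=(640, 480), n_cells=(8, 6), n_sub=(4, 4)):
    """Return (cell_ix, cell_iy, sub_ix, sub_iy) for a position in pixel coordinates."""
    cell_w = scene_size[0] / n_cells[0]
    cell_h = scene_size[1] / n_cells[1]
    cx, cy = int(pos[0] // cell_w), int(pos[1] // cell_h)
    # Position relative to the cell origin, indexed on the finer subgrid.
    sx = int((pos[0] - cx * cell_w) // (cell_w / n_sub[0]))
    sy = int((pos[1] - cy * cell_h) // (cell_h / n_sub[1]))
    return cx, cy, sx, sy

# Each grid cell could hold its own scene memory (e.g., a hidden state of the
# Scene-LSTM) that the filters select and feed to the Pedestrian-LSTM.
scene_memory = np.zeros((8, 6, 128))           # hypothetical 128-d cell embeddings
cx, cy, sx, sy = two_level_index((321.5, 203.0))
local_scene_features = scene_memory[cx, cy]
print("cell:", (cx, cy), "subgrid:", (sx, sy), "feature dim:", local_scene_features.shape)
```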