We consider the Gathering problem for n autonomous mobile robots equipped with persistent memory, called lights, under an asynchronous scheduler (ASYNC). It is well known that Gathering is impossible for robots without lights in basic common models, even if the system is semi-synchronous (SSYNC) or centralized (only one robot is active at each time). It is known that Gathering can be solved by robots with 10 colors of lights in ASYNC. This result is obtained by combining two results: (1) ASYNC robots with 5k colors can simulate SSYNC robots with k colors, and (2) Gathering can be solved by SSYNC robots with 2 colors. In this paper, we improve on this result by reducing the number of colors and show that Gathering can be solved by ASYNC robots with 3 colors of lights. We also show that we can construct a simulation of any unfair SSYNC algorithm using k colors by ASYNC robots with 3k colors, where an unfair scheduler does not guarantee that every robot is activated infinitely often. Combining this simulation with the Gathering algorithm for SSYNC robots with 2 colors yields a Gathering algorithm for ASYNC robots with 6 colors. Our main result is then obtained by reducing the number of colors from 6 to 3.
Edge intelligence, a new paradigm that accelerates artificial intelligence (AI) applications by leveraging computing resources on the network edge, can be used to improve intelligent transportation systems (ITS). However, due to physical limitations and energy-supply constraints, the computing power of edge equipment is usually limited. High altitude platform station (HAPS) computing can be considered a promising extension of edge computing. A HAPS is deployed in the stratosphere to provide wide coverage and strong computational capabilities, which makes it suitable for coordinating terrestrial resources and storing the fundamental data associated with ITS-based applications. In this work, three computing layers, i.e., vehicles, terrestrial network edges, and HAPS, are integrated to build a computation framework for ITS, where the HAPS data library stores the fundamental data needed by the applications. In addition, a caching technique is introduced so that network edges can store some of the fundamental data from the HAPS, thereby reducing large propagation delays. We aim to minimize the delay of the system by optimizing computation offloading and caching decisions as well as bandwidth and computing resource allocations. The simulation results highlight the benefits of HAPS computing for mitigating delays and the significance of caching at network edges.
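The offloading decision can be illustrated with a toy model: for each task, compare the delay of local execution, edge execution (where a cache hit removes the HAPS data-fetch delay), and HAPS execution. All function names, parameter values, and the delay model itself are illustrative assumptions, not the paper's formulation:

```python
# Toy delay comparison across the three computing layers. All rates, CPU
# frequencies, and the HAPS propagation delay are assumed values; a cache hit
# at the edge removes the HAPS data-fetch delay.

def task_delay(cycles, data_bits, layer, cached_at_edge,
               f_vehicle=1e9, f_edge=5e9, f_haps=2e10,
               r_edge=5e7, r_haps=2e6, t_prop_haps=0.02):
    """End-to-end delay (seconds) of executing one task on a given layer."""
    if layer == "vehicle":
        return cycles / f_vehicle                      # local computation only
    if layer == "edge":
        fetch = 0.0 if cached_at_edge else data_bits / r_haps + t_prop_haps
        return data_bits / r_edge + fetch + cycles / f_edge
    if layer == "haps":
        return data_bits / r_haps + t_prop_haps + cycles / f_haps
    raise ValueError("unknown layer: " + layer)

def best_layer(cycles, data_bits, cached_at_edge):
    """Pick the layer with the smallest delay for this task."""
    return min(("vehicle", "edge", "haps"),
               key=lambda l: task_delay(cycles, data_bits, l, cached_at_edge))
```

With these assumed numbers, a compute-heavy task is best offloaded to the edge when its data are cached there, but to the HAPS when a cache miss would force the edge to fetch the data first.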
In this paper, a human-like driving framework is designed for autonomous vehicles (AVs), which aims to help AVs integrate better into the transportation ecology of human driving and to reduce human drivers' misunderstanding of, and incompatibility with, autonomous driving. Based on an analysis of the real-world INTERACTION dataset, a driving-aggressiveness estimation model is established with a fuzzy inference approach. Then, a human-like driving model, which integrates the brain emotional learning circuit model (BELCM) with the two-point preview model, is designed. In the human-like lane-change decision-making algorithm, the cost function is designed to comprehensively consider driving safety and travel efficiency. Based on the cost function and multiple constraints, a dynamic game algorithm is applied to model the interaction and decision making between the AV and the human driver. Additionally, to guarantee the lane-change safety of AVs, an artificial potential field model is built for collision risk assessment. Finally, the proposed algorithm is evaluated through human-in-the-loop experiments on a driving simulator, and the results demonstrate the feasibility and effectiveness of the proposed method.
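As a rough illustration of fuzzy-inference-based aggressiveness estimation, the following sketch maps two assumed features (a speed ratio and a time headway) to a score in [0, 1] through triangular memberships and Sugeno-style rules. The features, membership ranges, and rule base are hypothetical, not those fitted to the INTERACTION dataset:

```python
# Hypothetical fuzzy aggressiveness estimator: features, membership ranges,
# and rules are illustrative assumptions for demonstration only.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def aggressiveness(speed_ratio, headway_s):
    """Map (speed relative to traffic, time headway in s) to a score in [0, 1]."""
    fast  = tri(speed_ratio, 0.8, 1.3, 1.8)
    slow  = tri(speed_ratio, 0.3, 0.8, 1.3)
    short = tri(headway_s, 0.0, 0.5, 2.0)
    long_ = tri(headway_s, 0.5, 2.0, 4.0)
    # Sugeno-style rules: antecedent strength (min) -> crisp consequent.
    rules = [(min(fast, short), 1.0),   # fast and tailgating -> aggressive
             (min(fast, long_), 0.5),
             (min(slow, short), 0.5),
             (min(slow, long_), 0.0)]   # slow with a large gap -> calm
    strength = sum(w for w, _ in rules)
    if strength == 0.0:
        return 0.5                      # no rule fires: neutral default
    return sum(w * z for w, z in rules) / strength
```

The weighted average of rule consequents gives a continuous aggressiveness score that downstream decision making could consume.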
We introduce a new numerical method for solving time-harmonic acoustic scattering problems. The main focus is on plane waves scattered by smoothly varying material inhomogeneities. The proposed method works for any frequency $\omega$, but is especially efficient for high-frequency problems. It is based on a time-domain approach and consists of three steps: \emph{i)} computation of a suitable incoming plane wavelet with compact support in the propagation direction; \emph{ii)} solving a scattering problem in the time domain for the incoming plane wavelet; \emph{iii)} reconstruction of the time-harmonic solution from the time-domain solution via a Fourier transform in time. An essential ingredient of the new method is a front-tracking mesh adaptation algorithm for solving the problem in \emph{ii)}. By exploiting the limited support of the wave front, this algorithm makes the number of degrees of freedom required to reach a given accuracy significantly less dependent on the frequency $\omega$. We also present a new algorithm for computing the Fourier transform in \emph{iii)} that exploits the reduced number of degrees of freedom corresponding to the adapted meshes. Numerical examples demonstrate the advantages of the proposed method and the fact that the method can also be applied with external source terms such as point sources and sound-soft scatterers. The gained efficiency, however, is limited in the presence of trapping modes.
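Step \emph{iii)} can be illustrated in isolation: if the time-domain solution at a point behaves as $u(t) = \operatorname{Re}(\hat{u}\,e^{-i\omega t})$, its complex amplitude is recovered by a Fourier transform in time over whole periods. The sketch below uses a plain trapezoidal rule on a uniform grid and ignores the paper's adapted-mesh quadrature:

```python
import numpy as np

def time_harmonic_amplitude(u, t, omega):
    """Recover the complex amplitude u_hat of a signal u(t) = Re(u_hat e^{-i w t})
    via (2/T) * integral of u(t) e^{i w t} dt, with T spanning whole periods,
    using the trapezoidal rule on the uniform grid t."""
    f = u * np.exp(1j * omega * t)
    dt = t[1] - t[0]
    integral = dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    T = t[-1] - t[0]
    return 2.0 / T * integral
```

Over an integer number of periods the oscillatory cross term integrates to zero, so the quadrature returns the amplitude up to rounding.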
This paper introduces our Cyber-Physical Mobility Lab (CPM Lab), an open-source development environment for networked and autonomous vehicles with a focus on networked decision-making, trajectory planning, and control. The CPM Lab hosts 20 physical model-scale vehicles ({\mu}Cars), which we can seamlessly extend with an unlimited number of simulated vehicles. The code and construction plans are publicly available to enable rebuilding the CPM Lab. Our four-layered architecture enables the seamless use of the same software in simulations and in experiments without any further adaptations. A Data Distribution Service (DDS) based middleware allows adapting the number of vehicles during experiments in a seamless manner. The middleware is also responsible for synchronizing all entities following a logical execution time approach to achieve determinism and reproducibility of experiments. This approach makes the CPM Lab a unique platform for rapid functional prototyping of networked decision-making algorithms. The CPM Lab allows researchers as well as students from different disciplines to see their ideas develop into reality. We demonstrate its capabilities using two example experiments. We are currently working on remote access to the CPM Lab via a web interface.
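The logical execution time idea can be sketched in a few lines: within each period, every task reads a snapshot of the values published at the period's start, and all outputs become visible together at the period's end, so the result is independent of when each task actually finishes. This toy round-based model is our illustration, not the CPM Lab middleware API:

```python
# Toy model of the logical execution time (LET) principle: tasks read a frozen
# snapshot at the start of each period and publish results only at its end,
# so the outcome is deterministic regardless of execution order or duration.

def let_round(tasks, published):
    """Run one logical period.

    tasks:     mapping task name -> function of the published snapshot
    published: values visible at the period start
    Returns the values visible at the period end.
    """
    snapshot = dict(published)  # inputs frozen at the period start
    return {name: f(snapshot) for name, f in tasks.items()}
```

Because every task sees the same frozen snapshot, two mutually dependent controllers produce identical results no matter which one the scheduler runs first.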
We devise a cooperative planning framework to generate optimal trajectories for a tethered robot duo tasked with gathering objects scattered over a large area using a flexible net. Specifically, the proposed planning framework first produces a set of dense waypoints for each robot, serving as the initialization for optimization. Next, we formulate an iterative optimization scheme to generate smooth and collision-free trajectories while ensuring cooperation within the robot duo to efficiently gather objects and properly avoid obstacles. We validate the generated trajectories in simulation and implement them on physical robots using a Model Reference Adaptive Controller (MRAC) to handle the unknown dynamics of carried payloads. In a series of studies, we find that: (i) a U-shaped cost function is effective in planning for a cooperative robot duo, and (ii) the task efficiency is not always proportional to the tethered net's length. Given an environment configuration, our framework can gauge the optimal net length. To the best of our knowledge, ours is the first framework to provide such an estimate for a tethered robot duo.
In situations where humans and robots move in the same space while performing their own tasks, predictable paths taken by mobile robots not only make the environment feel safer; humans can also help with navigation in the space by avoiding path conflicts or not blocking the way. Predictable paths therefore become vital. The cognitive effort required for a human to predict the robots' paths becomes untenable as the number of robots increases. As the number of humans increases, it likewise becomes harder for the robots to move while considering the motion of multiple humans. Additionally, people newly entering the space -- as in restaurants, banks, and hospitals -- have less familiarity with the trajectories typically taken by the robots; this further increases the need for predictable robot motion along paths. With this in mind, we propose to minimize the navigation graph of the robot for position-based predictability, i.e., predictability from just the current position of the robot. This is important since a human cannot be expected to keep track of the goals and prior actions of the robot in addition to doing their own tasks. In this paper, we define measures for position-based predictability, then present and evaluate a hill-climbing algorithm to minimize the navigation graph (a directed graph) of robot motion. This is followed by the results of our human-subject experiments, which support the proposed methodology.
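A minimal sketch of hill climbing on a navigation graph might look as follows, using reduced branching as a stand-in predictability cost (the actual position-based predictability measures are defined in the paper) and preserving pairwise reachability as the constraint:

```python
# Hypothetical hill-climbing sketch: greedily remove edges of a directed
# navigation graph while preserving pairwise reachability, using reduced
# branching as a stand-in predictability cost.

def reachable_pairs(nodes, edges):
    """All ordered pairs (u, v) such that v is reachable from u."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    pairs = set()
    for s in nodes:
        stack, seen = [s], {s}
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        pairs |= {(s, v) for v in seen}
    return pairs

def branching_cost(nodes, edges):
    """Total number of 'extra' outgoing choices over all nodes."""
    out = {u: 0 for u in nodes}
    for u, _ in edges:
        out[u] += 1
    return sum(max(0, d - 1) for d in out.values())

def minimize_graph(nodes, edges):
    """Hill climbing: accept any edge removal that keeps reachability intact
    and strictly lowers the branching cost, until no such move exists."""
    edges = set(edges)
    required = reachable_pairs(nodes, edges)
    improved = True
    while improved:
        improved = False
        for e in sorted(edges):
            trial = edges - {e}
            if (reachable_pairs(nodes, trial) == required
                    and branching_cost(nodes, trial) < branching_cost(nodes, edges)):
                edges, improved = trial, True
                break
    return edges
```

For example, a shortcut edge whose endpoints remain connected through other edges is pruned, leaving each node with fewer outgoing choices and hence a more predictable next move.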
Self-driving vehicles are expected to bring many benefits, among which are enhanced traffic efficiency and reliability and reduced fuel consumption, which would have a great economic and environmental impact. The success of this technology heavily relies on full situational awareness of the surrounding entities. This is achievable only when everything is networked, including vehicles, users, and infrastructure, and the sensed data are exchanged among nearby objects to increase their awareness. Nevertheless, human intervention is still needed in the loop to deal with unseen situations or to compensate for inaccurate or improper vehicle decisions. For such cases, a video feed, in addition to other data such as LIDAR, is considered essential to provide humans with a real picture of what is happening so that they can eventually take the right decision. However, if the video is not delivered in a timely fashion, it becomes useless or is likely to produce catastrophic outcomes. Additionally, any disruption in the streamed video, for instance during a handover operation while crossing international borders, is very annoying to the user and can possibly cause damage as well. In this article, we start by describing two important use cases, namely Remote Driving and Platooning, where the timely delivery of video is of extreme importance [1]. Thereafter, we detail our implemented solution to accommodate the aforementioned use cases for self-driving vehicles. Through extensive experiments in local and LTE networks, we show that our solution ensures very low end-to-end latency. We also show that our solution keeps the video outage as low as possible during handover operations.
Breakthroughs in machine learning in the last decade have led to `digital intelligence', i.e. machine learning models capable of learning from vast amounts of labeled data to perform several digital tasks such as speech recognition, face recognition, machine translation and so on. The goal of this thesis is to make progress towards designing algorithms capable of `physical intelligence', i.e. building intelligent autonomous navigation agents capable of learning to perform complex navigation tasks in the physical world involving visual perception, natural language understanding, reasoning, planning, and sequential decision making. Despite several advances in classical navigation methods in the last few decades, current navigation agents struggle at long-term semantic navigation tasks. In the first part of the thesis, we discuss our work on short-term navigation using end-to-end reinforcement learning to tackle challenges such as obstacle avoidance, semantic perception, language grounding, and reasoning. In the second part, we present a new class of navigation methods based on modular learning and structured explicit map representations, which leverage the strengths of both classical and end-to-end learning methods, to tackle long-term navigation tasks. We show that these methods are able to effectively tackle challenges such as localization, mapping, long-term planning, exploration and learning semantic priors. These modular learning methods are capable of long-term spatial and semantic understanding and achieve state-of-the-art results on various navigation tasks.
Safety and the reduction of road traffic accidents remain important issues for autonomous driving. Statistics show that unintended lane departure is a leading cause of motor vehicle collisions worldwide, making lane detection one of the most promising and challenging tasks for self-driving. Today, numerous groups are combining deep learning techniques with computer vision to solve self-driving problems. In this paper, a Global Convolution Network (GCN) model is used to address both the classification and localization issues of semantic lane segmentation. A color-based segmentation method is presented, and the usability of the model is evaluated. Residual-based boundary refinement and Adam optimization are also used to achieve state-of-the-art performance. Since ordinary cars cannot afford on-board GPUs, and the training session for a particular road can be shared by several cars, we propose a framework to make this work in the real world: we build a real-time video transfer system to obtain video from the car, train the model on an edge server (which is equipped with GPUs), and send the trained model back to the car.
This paper presents a safety-aware learning framework that employs an adaptive model learning method together with barrier certificates for systems with possibly nonstationary agent dynamics. To extract the dynamic structure of the model, we use a sparse optimization technique; the resulting model is then used in combination with control barrier certificates, which constrain feedback controllers only when safety is about to be violated. Under some mild assumptions, solutions to the constrained feedback-controller optimization are guaranteed to be globally optimal, and the monotonic improvement of a feedback controller is thus ensured. In addition, we reformulate the (action-)value function approximation to make any kernel-based nonlinear function estimation method applicable. We then employ a state-of-the-art kernel adaptive filtering technique for the (action-)value function approximation. The resulting framework is verified experimentally on a brushbot, whose dynamics are unknown and highly complex.
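The "constrain only when safety is about to be violated" behavior of a control barrier certificate can be sketched for a toy single-integrator system kept inside a disk; the one-constraint quadratic program has a closed-form projection. The dynamics, barrier function, and gains here are illustrative assumptions, not the paper's brushbot model:

```python
# Toy control barrier certificate filter for a single integrator x' = u kept
# inside a disk of radius r via h(x) = r^2 - ||x||^2. The one-constraint QP
#   min ||u - u_nom||^2  s.t.  dh/dt >= -alpha * h(x)
# has the closed-form projection below.

import numpy as np

def cbf_filter(x, u_nom, r=1.0, alpha=5.0):
    """Return the input closest to u_nom that satisfies the barrier constraint."""
    x = np.asarray(x, dtype=float)
    u_nom = np.asarray(u_nom, dtype=float)
    h = r**2 - x @ x
    a = -2.0 * x                  # dh/dt = a . u for the dynamics x' = u
    b = alpha * h
    slack = a @ u_nom + b
    if slack >= 0.0:              # nominal input already safe: leave it alone
        return u_nom
    return u_nom - (slack / (a @ a)) * a   # minimal-norm correction
```

Near the boundary an outward-pointing nominal input is scaled back just enough to keep the barrier condition active, while inputs that already satisfy it pass through unchanged.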