Rearrangement puzzles are variations of rearrangement problems in which the elements of a problem are potentially logically linked together. To efficiently solve such puzzles, we develop a motion planning approach based on a new state space that is logically factored, integrating the capabilities of the robot through factors of simultaneously manipulable joints of an object. Based on this factored state space, we propose less-actions RRT (LA-RRT), a planner which optimizes for a low number of actions to solve a puzzle. At the core of our approach lies a new path defragmentation method, which rearranges and optimizes consecutive edges to minimize action cost. We solve six rearrangement scenarios with a Fetch robot, involving planar table puzzles and an escape room scenario. LA-RRT significantly outperforms the next best asymptotically-optimal planner, reducing final action cost by a factor of 4.01 to 6.58.
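As a minimal illustration of the defragmentation idea, the sketch below merges consecutive path edges that actuate the same factor (e.g., the same puzzle piece or joint) into a single action. The `Edge` structure and the merging rule are assumptions made for illustration; the actual LA-RRT optimizer rearranges and optimizes edges more generally.

```python
# Illustrative sketch (not the authors' implementation) of path "defragmentation":
# consecutive edges that actuate the same factor are merged into one action,
# reducing the action count of a solution path.
from dataclasses import dataclass

@dataclass
class Edge:
    factor: str      # which object/joint this motion actuates (hypothetical field)
    delta: float     # signed displacement along that factor

def defragment(path):
    """Merge runs of consecutive edges that actuate the same factor."""
    merged = []
    for edge in path:
        if merged and merged[-1].factor == edge.factor:
            merged[-1] = Edge(edge.factor, merged[-1].delta + edge.delta)
        else:
            merged.append(Edge(edge.factor, edge.delta))
    return merged

path = [Edge("block_A", 0.2), Edge("block_A", 0.1),
        Edge("block_B", -0.3), Edge("block_B", 0.05), Edge("block_A", 0.1)]
print(len(defragment(path)))  # 3 actions instead of 5
```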
Deep reinforcement learning (DRL) has seen remarkable success in the control of single robots. However, applying DRL to robot swarms presents significant challenges. A critical challenge is non-stationarity, which occurs when two or more robots update individual or shared policies concurrently, thereby engaging in an interdependent training process with no guarantees of convergence. Circumventing non-stationarity typically involves training the robots with global information about other agents' states and/or actions. In contrast, in this paper we explore how to remove the need for global information. We pose our problem as a Partially Observable Markov Decision Process, due to the absence of global knowledge about other agents. Using collective transport as a testbed scenario, we study two approaches to multi-agent training. In the first, the robots exchange no messages, and are trained to rely on implicit communication through push-and-pull on the object to transport. In the second approach, we introduce Global State Prediction (GSP), a network trained to form a belief over the swarm as a whole and predict its future states. We provide a comprehensive study over four well-known deep reinforcement learning algorithms in environments with obstacles, measuring performance as the successful transport of the object to the goal within a desired time-frame. Through an ablation study, we show that including GSP boosts performance and increases robustness when compared with methods that use global knowledge.
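A minimal sketch of what a GSP-style module might look like is given below, assuming (hypothetically) that each robot's local observation is a fixed-size vector and the "global state" is the concatenated future poses of all robots; the layer sizes and the supervised loss are placeholders, not the paper's architecture.

```python
# Minimal sketch of a Global State Prediction (GSP) module: from a robot's local
# observation, predict the future state of the whole swarm. Dimensions are assumed.
import torch
import torch.nn as nn

class GSP(nn.Module):
    def __init__(self, obs_dim=8, n_robots=4, state_dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_robots * state_dim),  # predicted future swarm state
        )

    def forward(self, local_obs):
        return self.net(local_obs)

gsp = GSP()
local_obs = torch.randn(32, 8)               # batch of local observations
pred_future = gsp(local_obs)                 # (32, 12) predicted swarm states
loss = nn.functional.mse_loss(pred_future, torch.randn(32, 12))  # dummy target
loss.backward()
```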
In this paper, we address the problem of autonomous search and vessel detection in an unknown GNSS-denied maritime environment with fixed-wing UAVs. The main challenge in such environments, where localization, communication range, and the number of UAVs and sensors are limited, is to implement an appropriate search strategy so that a target vessel can be detected as soon as possible. Thus, we present informed and non-informed methods for searching the environment. The informed method relies on an obtained probabilistic map, while the non-informed method navigates the UAVs along predefined paths computed with respect to the environment. The vessel detection method is trained on synthetic data collected in the simulator with data annotation tools. Comparative experiments in simulation show that our combination of sensors, search methods, and vessel detection algorithm leads to a successful search for the target vessel in such challenging environments.
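The following toy sketch contrasts the two strategy families described above: an informed policy that flies toward the highest-probability unvisited cell of a probability map, and a non-informed coverage sweep over predefined waypoints. The grid map and the row-by-row sweep are illustrative assumptions, not the paper's actual map construction or path parameterization.

```python
# Illustrative sketch of informed vs. non-informed search waypoint selection.
import numpy as np

def informed_next_waypoint(prob_map, visited):
    """Fly toward the highest-probability cell that has not been searched yet."""
    masked = np.where(visited, -np.inf, prob_map)
    return np.unravel_index(np.argmax(masked), prob_map.shape)

def lawnmower_waypoints(rows, cols):
    """Non-informed coverage: sweep the area row by row (one possible predefined path)."""
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            yield (r, c)

prob_map = np.random.rand(10, 10)
visited = np.zeros((10, 10), dtype=bool)
print(informed_next_waypoint(prob_map, visited))
print(list(lawnmower_waypoints(3, 4))[:6])
```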
Reinforcement Learning (RL) algorithms have shown tremendous success in simulation environments, but their application to real-world problems faces significant challenges, with safety being a major concern. In particular, enforcing state-wise constraints is essential for many challenging tasks such as autonomous driving and robot manipulation. However, existing safe RL algorithms under the framework of Constrained Markov Decision Process (CMDP) do not consider state-wise constraints. To address this gap, we propose State-wise Constrained Policy Optimization (SCPO), the first general-purpose policy search algorithm for state-wise constrained reinforcement learning. SCPO provides guarantees for state-wise constraint satisfaction in expectation. In particular, we introduce the framework of Maximum Markov Decision Process, and prove that the worst-case safety violation is bounded under SCPO. We demonstrate the effectiveness of our approach on training neural network policies for extensive robot locomotion tasks, where the agent must satisfy a variety of state-wise safety constraints. Our results show that SCPO significantly outperforms existing methods and can handle state-wise constraints in high-dimensional robotics tasks.
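To make the distinction between cumulative (CMDP-style) and state-wise constraints concrete, the sketch below tracks the running maximum of per-state costs along a trajectory, which is the quantity a state-wise constraint bounds. This is only an illustrative reduction in the spirit of the Maximum Markov Decision Process idea, not the SCPO policy-update rule.

```python
# A CMDP constraint bounds the *sum* of costs over a trajectory; a state-wise
# constraint bounds the *maximum* per-state cost. This toy function tracks the latter.
def statewise_max_cost(costs):
    """Return the running maximum of state-wise costs along one trajectory."""
    running_max, maxima = float("-inf"), []
    for c in costs:
        running_max = max(running_max, c)
        maxima.append(running_max)
    return maxima

costs = [0.0, 0.1, 0.05, 0.4, 0.2]
print(sum(costs), statewise_max_cost(costs)[-1])   # cumulative 0.75 vs worst-state 0.4
```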
We propose a new model, independent linear Markov game, for multi-agent reinforcement learning with a large state space and a large number of agents. This is a class of Markov games with independent linear function approximation, where each agent has its own function approximation for the state-action value functions that are marginalized by other players' policies. We design new algorithms for learning the Markov coarse correlated equilibria (CCE) and Markov correlated equilibria (CE) with sample complexity bounds that only scale polynomially with each agent's own function class complexity, thus breaking the curse of multiagents. In contrast, existing works for Markov games with function approximation have sample complexity bounds that scale with the size of the \emph{joint action space} when specialized to the canonical tabular Markov game setting, which is exponentially large in the number of agents. Our algorithms rely on two key technical innovations: (1) utilizing policy replay to tackle non-stationarity incurred by multiple agents and the use of function approximation; (2) separating learning Markov equilibria and exploration in the Markov games, which allows us to use the full-information no-regret learning oracle instead of the stronger bandit-feedback no-regret learning oracle used in the tabular setting. Furthermore, we propose an iterative-best-response type algorithm that can learn pure Markov Nash equilibria in independent linear Markov potential games. In the tabular case, by adapting the policy replay mechanism for independent linear Markov games, we propose an algorithm with $\widetilde{O}(\epsilon^{-2})$ sample complexity to learn Markov CCE, which improves upon the state-of-the-art $\widetilde{O}(\epsilon^{-3})$ result of Daskalakis et al. (2022), where $\epsilon$ is the desired accuracy, and also significantly improves the dependence on other problem parameters.
We present a novel approach based on sparse Gaussian processes (SGPs) to address the sensor placement problem for monitoring spatially (or spatiotemporally) correlated phenomena such as temperature and precipitation. Existing Gaussian process (GP) based sensor placement approaches use GPs with known kernel function parameters to model a phenomenon and subsequently optimize the sensor locations in a discretized representation of the environment. In our approach, we fit an SGP with known kernel function parameters to randomly sampled unlabeled locations in the environment and show that the learned inducing points of the SGP inherently solve the sensor placement problem in continuous spaces. Using SGPs avoids discretizing the environment and reduces the computation cost from cubic to linear complexity. When restricted to a candidate set of sensor placement locations, we can use greedy sequential selection algorithms on the SGP's optimization bound to find good solutions. We also present an approach to efficiently map our continuous space solutions to discrete solution spaces using the assignment problem, which gives us discrete sensor placements optimized in unison. Moreover, we generalize our approach to model sensors with non-point field-of-view and integrated observations by leveraging the inherent properties of GPs and SGPs. Our experimental results on three real-world datasets show that our approaches generate sensor placements that result in reconstruction quality consistently on par with or better than the prior state-of-the-art approach, while being significantly faster. Our computationally efficient approaches will enable both large-scale sensor placement and fast sensor placement for informative path planning problems.
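For intuition, the sketch below performs greedy sequential sensor selection on a discrete candidate set, scoring each candidate by the reduction in GP posterior variance it yields. The paper's main contribution instead optimizes continuous inducing-point locations of a sparse GP; this discrete greedy variant, with an arbitrary RBF kernel lengthscale and noise level, is only meant to make the selection idea concrete.

```python
# Illustrative greedy sensor placement on a candidate set using posterior-variance
# reduction as the criterion (kernel and noise values are assumed, not the paper's).
import numpy as np

def rbf(X, Y, ls=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def greedy_placement(candidates, k, noise=1e-2):
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(candidates)):
            if i in chosen:
                continue
            S = candidates[chosen + [i]]
            K = rbf(S, S) + noise * np.eye(len(S))
            cross = rbf(candidates, S)
            # total posterior variance over all candidates given sensors placed at S
            var = 1.0 - np.sum(cross @ np.linalg.inv(K) * cross, axis=1)
            score = -var.sum()
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return candidates[chosen]

candidates = np.random.rand(50, 2)      # candidate locations in a unit square
print(greedy_placement(candidates, k=5))
```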
State abstraction is an effective technique for planning in robotics environments with continuous states and actions, long task horizons, and sparse feedback. In object-oriented environments, predicates are a particularly useful form of state abstraction because of their compatibility with symbolic planners and their capacity for relational generalization. However, to plan with predicates, the agent must be able to interpret them in continuous environment states (i.e., ground the symbols). Manually programming predicate interpretations can be difficult, so we would instead like to learn them from data. We propose an embodied active learning paradigm where the agent learns predicate interpretations through online interaction with an expert. For example, after taking actions in a block stacking environment, the agent may ask the expert: "Is On(block1, block2) true?" From this experience, the agent learns to plan: it learns neural predicate interpretations, symbolic planning operators, and neural samplers that can be used for bilevel planning. During exploration, the agent plans to learn: it uses its current models to select actions towards generating informative expert queries. We learn predicate interpretations as ensembles of neural networks and use their entropy to measure the informativeness of potential queries. We evaluate this approach in three robotic environments and find that it consistently outperforms six baselines while exhibiting sample efficiency in two key metrics: number of environment interactions, and number of queries to the expert. Code: //tinyurl.com/active-predicates
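The sketch below illustrates the query-selection criterion described above: each ensemble member acts as a binary classifier grounding a predicate in a continuous state, and the candidate query whose ensemble-averaged prediction has the highest entropy is chosen. The predicate names and probabilities are hypothetical examples, not outputs of the paper's system.

```python
# Sketch of entropy-based query selection over an ensemble of predicate classifiers.
import numpy as np

def predicate_entropy(ensemble_probs):
    """Binary entropy of the ensemble-averaged probability that a predicate holds."""
    p = np.clip(np.mean(ensemble_probs), 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def most_informative_query(candidate_queries):
    """candidate_queries: dict mapping a query string to per-member probabilities."""
    return max(candidate_queries, key=lambda q: predicate_entropy(candidate_queries[q]))

queries = {
    "On(block1, block2)": [0.9, 0.85, 0.95, 0.9],   # ensemble agrees -> low entropy
    "Clear(block3)":      [0.2, 0.8, 0.5, 0.6],     # ensemble disagrees -> high entropy
}
print(most_informative_query(queries))  # Clear(block3)
```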
For real-world navigation, it is important to endow robots with the capabilities to navigate safely and efficiently in a complex environment with both dynamic and non-convex static obstacles. However, achieving map-free path-finding in non-convex complex environments, as well as enabling multiple robots to follow social rules for obstacle avoidance, remain challenging problems. In this letter, we propose a socially aware robot mapless navigation algorithm, namely Safe Reinforcement Learning-Optimal Reciprocal Collision Avoidance (SRL-ORCA). This is a multi-agent safe reinforcement learning algorithm that uses ORCA as external knowledge to provide a safety guarantee. It further introduces traffic norms of human society to improve social comfort and achieve cooperative avoidance by following human social customs. Experimental results show that SRL-ORCA learns strategies to obey specific traffic rules. Compared to DRL, SRL-ORCA shows a significant improvement in navigation success rate across different complex mixed scenarios while applying the same trained network. SRL-ORCA is able to cope with non-convex obstacle environments without falling into local minima and achieves a 14.1\% improvement in path quality (i.e., the average time to target) compared to ORCA. Videos are available at //youtu.be/huhXfCDkGws.
Change-point analysis has been successfully applied to detect changes in multivariate data streams over time. In many applications, when data are observed over a graph/network, a change does not occur at all coordinates simultaneously but instead spreads from an initial source coordinate to the neighbouring coordinates over time. We propose a new method, SpreadDetect, that estimates both the source coordinate and the initial timepoint of change in such a setting. We prove that, under appropriate conditions, the SpreadDetect algorithm consistently estimates both the source coordinate and the timepoint of change, and that the minimal signal size detectable by the algorithm is minimax optimal. The practical utility of the algorithm is demonstrated through numerical experiments and a real COVID-19 dataset.
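As a toy illustration of the estimation target, the sketch below scans each coordinate with a CUSUM-style mean-shift statistic and reports the coordinate whose detected change occurs earliest as the source. The threshold and the per-coordinate scan are crude assumptions; the actual SpreadDetect procedure aggregates evidence across coordinates according to the spreading pattern and carries the stated minimax guarantees, which this toy does not.

```python
# Simplified (non-SpreadDetect) scan for a change source and time in a panel of series.
import numpy as np

def cusum_stat(x, t):
    """Normalized mean-difference statistic for splitting series x at time t."""
    n = len(x)
    return np.sqrt(t * (n - t) / n) * abs(x[:t].mean() - x[t:].mean())

def estimate_source_and_time(X, threshold=4.0):
    """X: (coordinates, time) data matrix; returns (source index, change time)."""
    best = []
    for i, series in enumerate(X):
        stats = [cusum_stat(series, t) for t in range(1, len(series))]
        t_hat = int(np.argmax(stats)) + 1
        best.append((i, t_hat, max(stats)))
    # keep coordinates with strong evidence, then take the earliest change among them
    strong = [b for b in best if b[2] > threshold]
    src, t, _ = min(strong or best, key=lambda b: b[1])
    return src, t

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 200))
X[2, 80:] += 1.5      # change starts at coordinate 2 at time 80, before spreading
X[1, 120:] += 1.5
print(estimate_source_and_time(X))   # expected: roughly (2, 80)
```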
The autonomous control of flippers plays an important role in enhancing the intelligent operation of tracked robots within complex environments. While existing methods mainly rely on hand-crafted control models, in this paper we introduce a novel approach that leverages deep reinforcement learning (DRL) techniques for autonomous flipper control in complex terrains. Specifically, we propose a new DRL network named AT-D3QN, which ensures safe and smooth flipper control for tracked robots. It comprises two modules: a feature extraction and fusion module that extracts and integrates robot and environment state features, and a deep Q-Learning control generation module that incorporates expert knowledge to obtain a smooth and efficient control strategy. To train the network, a novel reward function is proposed, considering both learning efficiency and passing smoothness. A simulation environment is constructed using the Pymunk physics engine for training. We then directly apply the trained model to a more realistic Gazebo simulation for quantitative analysis. The consistently high performance of the proposed approach validates its superiority over manual teleoperation.
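A hypothetical reward of this flavor is sketched below, trading off forward progress against abrupt pitch and flipper changes; the terms and weights are illustrative assumptions, not the AT-D3QN reward proposed in the paper.

```python
# Hypothetical progress-vs-smoothness reward for flipper control (weights assumed).
def flipper_reward(progress, pitch_change, flipper_change, collided,
                   w_prog=1.0, w_pitch=0.5, w_flip=0.1, w_col=10.0):
    """progress: forward distance gained this step; *_change: absolute deltas."""
    reward = w_prog * progress               # encourage traversing the terrain quickly
    reward -= w_pitch * abs(pitch_change)    # penalize abrupt body pitch (smoothness)
    reward -= w_flip * abs(flipper_change)   # penalize jerky flipper motion
    if collided:
        reward -= w_col                      # strongly penalize unsafe contact
    return reward

print(flipper_reward(progress=0.05, pitch_change=0.02, flipper_change=0.1, collided=False))
```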
Semi-autonomous telerobotic systems allow both humans and robots to exploit their strengths, while enabling personalized execution of a task. However, for new soft robots with degrees of freedom dissimilar to those of human operators, it is unknown how the control of a task should be divided between the human and robot. This work presents a set of interaction paradigms between a human and a soft growing robot manipulator, and demonstrates them in both real and simulated scenarios. The robot can grow and retract by eversion and inversion of its tubular body, a property we exploit to implement interaction paradigms. We implemented and tested six different paradigms of human-robot interaction, beginning with full teleoperation and gradually adding automation to various aspects of the task execution. All paradigms were demonstrated by two expert and two naive operators. Results show that humans and the soft robot manipulator can split control along degrees of freedom while acting simultaneously. In the simple pick-and-place task studied in this work, performance improves as the control is gradually given to the robot, because the robot can correct certain human errors. However, human engagement and enjoyment may be maximized when the task is at least partially shared. Finally, when the human operator is assisted by haptic feedback based on soft robot position errors, we observed that the improvement in performance is highly dependent on the expertise of the human operator.