With smart and autonomous earthmoving in mind, we explore high-performance wheel loading in a simulated environment. This paper introduces a wheel loader simulator that combines contacting 3D multibody dynamics with a hybrid continuum-particle terrain model, supporting realistic digging forces and soil displacements in real time. A total of 270,000 simulations are run with different loading actions, pile slopes, and soil properties to analyze how they affect loading performance. The results suggest that the preferred digging actions should preserve and exploit a steep pile slope. A high digging speed favors high productivity, while energy-efficient loading requires a lower digging speed.
In this paper, we propose a game between an exogenous adversary and a network of agents connected via a multigraph. The multigraph is composed of (1) a global graph structure, capturing the virtual interactions among the agents, and (2) a local graph structure, capturing physical/local interactions among the agents. The aim of each agent is to achieve consensus with the other agents in a decentralized manner by minimizing a local cost associated with its local graph and a global cost associated with the global graph. The exogenous adversary, on the other hand, aims to maximize the average cost incurred by all agents in the multigraph. We derive Nash equilibrium policies for the agents and the adversary in the Mean-Field Game setting, when the agent population in the global graph is arbitrarily large and the ``homogeneous mixing'' hypothesis holds on local graphs. This equilibrium is shown to be unique, and each agent's equilibrium Markov policy depends on its local state as well as the influences exerted on it by the local and global mean fields.
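As an illustration of the cost structure (the quadratic form and the weights $q_\ell$, $q_g$, $r$ below are our assumptions, not taken from the abstract), each agent $i$ might minimize

\[
J_i(u_i) \;=\; \mathbb{E}\int_0^T \Big[\, q_\ell\,\big(x_i - \bar{x}_i^{\,\ell}\big)^2 \;+\; q_g\,\big(x_i - \bar{x}^{\,g}\big)^2 \;+\; r\,u_i^2 \,\Big]\, dt,
\]

where $\bar{x}_i^{\,\ell}$ is the mean field of agent $i$'s local-graph neighbors and $\bar{x}^{\,g}$ is the global mean field, while the adversary chooses its input to maximize the population-average cost $\frac{1}{N}\sum_i J_i$.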
This paper provides a machine learning framework for simulating two-phase flow in porous media. The proposed algorithm is based on physics-informed neural networks (PINNs). A novel residual-based adaptive PINN is developed and compared with a traditional PINN that uses fixed collocation points. We provide a numerical example of two-phase flow in porous media to show the effectiveness of the new algorithm, and find that adaptivity is essential for capturing moving flow fronts. The results obtained through this approach are more accurate than those of a classical PINN at comparable computational cost, and the algorithm should also be applicable in other fields where adaptivity is needed.
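As a rough sketch of residual-based adaptivity (the function names and the simple top-k selection rule below are our assumptions, not necessarily the paper's exact scheme), collocation points can be periodically resampled where the PDE residual is largest:

    import numpy as np

    def adapt_collocation(residual_fn, x_lo, x_hi, n_keep, n_candidates=10_000, rng=None):
        """Keep the candidate collocation points with the largest PDE residual."""
        rng = rng or np.random.default_rng(0)
        # Draw uniform candidate points over the domain [x_lo, x_hi].
        cand = rng.uniform(x_lo, x_hi, size=(n_candidates, np.size(x_lo)))
        # Rank candidates by residual magnitude and keep the worst-fit ones.
        r = np.abs(residual_fn(cand))
        return cand[np.argsort(r)[-n_keep:]]

    # Stand-in residual; a real PINN would evaluate the network's PDE
    # residual here, typically via automatic differentiation.
    pts = adapt_collocation(lambda x: np.sin(5 * x).ravel(), 0.0, 1.0, n_keep=256)

In a training loop, the retained points would be merged with (or replace) the current collocation set every few epochs, concentrating the PDE loss near moving flow fronts.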
Autonomy in robot-assisted surgery is essential to reduce surgeons' cognitive load and, ultimately, to improve the overall surgical outcome. A key requirement for autonomy in a safety-critical scenario such as surgery lies in the generation of interpretable plans that rely on expert knowledge. Moreover, the Autonomous Robotic Surgical System (ARSS) must be able to reason about the dynamic and unpredictable anatomical environment and quickly adapt the surgical plan in case of unexpected situations. In this paper, we present a modular Framework for Robot-Assisted Surgery (FRAS) in deformable anatomical environments. Our framework integrates a logic module for task-level interpretable reasoning, a biomechanical simulation that complements data from real sensors, and a situation awareness module for context interpretation. The framework's performance is evaluated on simulated soft tissue retraction, a common surgical task to remove the tissue hiding a region of interest. Results show that the framework has the adaptability required to successfully accomplish the task, handling dynamic environmental conditions and possible failures while guaranteeing the computational efficiency required in a real surgical scenario. The framework is made publicly available.
We consider the problem of optimizing a portfolio of financial assets, where the number of assets can be much larger than the number of observations. The optimal portfolio weights require estimating the inverse covariance matrix of excess asset returns, and classical estimators of it behave poorly in high-dimensional settings. We propose a regression-based joint shrinkage method for estimating the partial correlations among the assets. Extensive simulation studies illustrate the superior performance of the proposed method with respect to variance, weight, and risk estimation errors compared with competing methods, for both global minimum variance portfolios and Markowitz mean-variance portfolios. We also demonstrate the excellent empirical performance of our method on daily and monthly returns of the components of the S&P 500 index.
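For concreteness, once any estimate of the precision (inverse covariance) matrix is available, the global minimum variance weights are w = Θ1 / (1ᵀΘ1). The sketch below uses a simple ridge-shrunk sample covariance as a stand-in for the paper's regression-based joint shrinkage estimator, which differs:

    import numpy as np

    def gmv_weights(precision):
        """Global minimum variance weights: w = Theta @ 1 / (1' Theta 1)."""
        ones = np.ones(precision.shape[0])
        w = precision @ ones
        return w / (ones @ w)

    rng = np.random.default_rng(0)
    R = rng.normal(size=(60, 200))                # 60 observations, 200 assets (p >> n)
    S = np.cov(R, rowvar=False)                   # singular sample covariance
    theta = np.linalg.inv(S + 0.1 * np.eye(200))  # crude shrinkage to restore invertibility
    w = gmv_weights(theta)                        # weights sum to one by construction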
The difficulty in specifying rewards for many real-world problems has led to an increased focus on learning rewards from human feedback, such as demonstrations. However, there are often many different reward functions that explain the human feedback, leaving agents uncertain about the true reward function. While most policy optimization approaches handle this uncertainty by optimizing for expected performance, many applications demand risk-averse behavior. We derive a novel policy-gradient-style robust optimization approach, PG-BROIL, that optimizes a soft-robust objective balancing expected performance and risk. To the best of our knowledge, PG-BROIL is the first policy optimization algorithm that is robust to a distribution of reward hypotheses and can scale to continuous MDPs. Results suggest that PG-BROIL can produce a family of behaviors ranging from risk-neutral to risk-averse and outperforms state-of-the-art imitation learning algorithms when learning from ambiguous demonstrations, by hedging against uncertainty rather than seeking to uniquely identify the demonstrator's reward function.
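A minimal sketch of a soft-robust objective of this kind (assuming a mean/CVaR combination with trade-off weight lam; the paper's exact functional form may differ):

    import numpy as np

    def soft_robust_value(returns, lam=0.5, alpha=0.05):
        """lam * E[V] + (1 - lam) * CVaR_alpha[V] over reward hypotheses."""
        v = np.sort(returns)                      # policy value under each hypothesis
        k = max(1, int(np.ceil(alpha * v.size)))  # worst alpha-fraction of hypotheses
        return lam * v.mean() + (1.0 - lam) * v[:k].mean()

    # Policy returns evaluated under 1,000 sampled reward hypotheses.
    vals = np.random.default_rng(0).normal(1.0, 0.5, size=1000)
    print(soft_robust_value(vals, lam=0.3))       # lam -> 1 recovers the risk-neutral objective

Sweeping lam between 0 and 1 traces out the family of behaviors from fully risk-averse to risk-neutral mentioned above.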
We address the issue of tuning hyperparameters (HPs) for imitation learning algorithms in the context of continuous control, when the underlying reward function of the demonstrating expert cannot be observed at any time. The vast imitation learning literature mostly considers this reward function to be available for HP selection, but this is not a realistic setting. Indeed, were this reward function available, it could be used directly for policy training, and imitation would not be necessary. To tackle this mostly ignored problem, we propose a number of possible proxies for the external reward. We evaluate them in an extensive empirical study (more than 10,000 agents across 9 environments) and make practical recommendations for selecting HPs. Our results show that while imitation learning algorithms are sensitive to HP choices, it is often possible to select good-enough HPs through a proxy to the reward function.
Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively large number of training samples for learning complex tasks. Many recent works on speeding up Deep RL have focused on distributed training and simulation. While distributed training is often done on the GPU, simulation is not. In this work, we propose using GPU-accelerated RL simulations as an alternative to CPU-based ones. Using NVIDIA Flex, a GPU-based physics engine, we show promising speed-ups in learning various continuous-control locomotion tasks. With a single GPU and CPU core, we are able to train the Humanoid running task in less than 20 minutes, using 10-1000x fewer CPU cores than previous works. We also demonstrate the scalability of our simulator to multi-GPU settings for training more challenging locomotion tasks.
Autonomous urban driving navigation with complex multi-agent dynamics is under-explored due to the difficulty of learning an optimal driving policy. The traditional modular pipeline relies heavily on hand-designed rules and a pre-processing perception system, while supervised learning-based models are limited by the availability of extensive human driving experience. We present a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach that enables the driving agent to achieve higher success rates based only on vision inputs in a high-fidelity car simulator. To alleviate the low exploration efficiency in large continuous action spaces, which often prohibits the use of classical RL on challenging real tasks, CIRL explores over a reasonably constrained action space guided by encoded experiences that imitate human demonstrations, building upon the Deep Deterministic Policy Gradient (DDPG). Moreover, we specialize adaptive policies and steering-angle reward designs for different control signals (i.e., follow, straight, turn right, turn left) based on shared representations to improve the model's ability to handle diverse cases. Extensive experiments on the CARLA driving benchmark demonstrate that CIRL substantially outperforms all previous methods in terms of the percentage of successfully completed episodes on a variety of goal-directed driving tasks. We also show its superior generalization capability in unseen environments. To our knowledge, this is the first successful case of a driving policy learned through reinforcement learning in a high-fidelity simulator that performs better than supervised imitation learning.
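One plausible reading of the constrained exploration (our illustration; the clipping radius and mixing rule are assumptions, not the authors' exact mechanism) keeps the DDPG action inside a bounded region around the action suggested by the imitation prior:

    import numpy as np

    def constrained_action(a_ddpg, a_imitation, radius=0.2):
        """Project the RL action onto a box of half-width `radius`
        around the imitation policy's suggestion."""
        delta = np.clip(a_ddpg - a_imitation, -radius, radius)
        return a_imitation + delta

    # Example: a (steering, throttle) proposal pulled toward the demonstration prior.
    print(constrained_action(np.array([0.9, 0.1]), np.array([0.4, 0.3])))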
We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that, in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization on a number of classification problems.
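Schematically (with a regularization constant $C$; the precise constants and the convex reformulation are given in the paper), the procedure targets the variance-corrected objective

\[
\min_{\theta}\; \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; x_i) \;+\; C\,\sqrt{\frac{\widehat{\operatorname{Var}}_n\!\big[\ell(\theta; X)\big]}{n}},
\]

whose variance term is non-convex in $\theta$ in general; the distributionally robust objective over an empirical-likelihood uncertainty set is convex and matches this expression up to higher-order terms.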
Cloud robotics is an emerging area of robotics. It has attracted considerable attention due to its direct practical implications for robotics. In cloud robotics, the concept of cloud computing is used to offload computationally intensive jobs from the robots to the cloud. Beyond this, additional functionalities can also be offered to robots on demand at runtime. Simultaneous Localization and Mapping (SLAM) is one of the computationally intensive algorithms in robotics, used by robots for navigation and map building in unknown environments. Several cloud-based frameworks have been proposed specifically to address the SLAM problem; DAvinCi, Rapyuta, and C2TAM are among them. In this paper, we present a detailed review of these frameworks' implementations for the SLAM problem.