Growing robots based on the eversion principle are known for their ability to extend rapidly, from within, along their longitudinal axis and, in doing so, reach deep into hitherto inaccessible, remote spaces. Despite many advantages, eversion robots also present significant challenges, one of which is maintaining a sensory payload at the tip without restricting the eversion process. A variety of tip mechanisms has been proposed by the robotics community, among them rounded caps of relatively complex construction that are not always compatible with functional hardware, such as sensors or navigation pouches, integrated with the main eversion structure. Moreover, many tip designs incorporate rigid materials, reducing the robot's flexibility and, consequently, its ability to navigate through narrow openings. Here, we address these shortcomings with a new design: a soft, entirely fabric-based, cylindrical cap that can easily be slipped onto the tip of eversion robots. Having created a series of caps of different sizes and materials, we conducted an experimental study to evaluate the new design in terms of four key aspects: compatibility with eversion robots made from multiple layers of everting material, compatibility with solid objects protruding from the eversion robot, squeezability, and navigability. In all scenarios, we show that our soft, flexible cap robustly maintains its position and is capable of transporting payloads such as a camera across long distances.
Vertebral fractures are a consequence of osteoporosis, with significant health implications for affected patients. Unfortunately, grading their severity using CT exams is hard and subjective, motivating automated grading methods. However, current approaches are hindered by imbalanced and scarce data and by a lack of interpretability. To address these challenges, this paper proposes a novel approach that leverages unlabelled data to train a generative Diffusion Autoencoder (DAE) model as an unsupervised feature extractor. We model fracture grading as a continuous regression problem, which better reflects the smooth progression of fractures. Specifically, we use a binary, supervised fracture classifier to construct a hyperplane in the DAE's latent space. We then regress the severity of the fracture as a function of the distance to this hyperplane, calibrating the results to the Genant scale. Importantly, the generative nature of our method allows us to visualize different grades of a given vertebra, providing interpretability and insight into the features that contribute to automated grading.
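As a rough illustration of the grading step, the sketch below (Python; the function names and the calibration constants are hypothetical, not taken from the paper) scores a latent code by its signed distance to the classifier's separating hyperplane and maps that distance to a continuous Genant-style grade.

```python
import numpy as np

def signed_distance(z, w, b):
    """Signed distance from latent code z to the hyperplane w.z + b = 0
    defined by a linear fracture/no-fracture classifier."""
    return (np.dot(w, z) + b) / np.linalg.norm(w)

def genant_grade(z, w, b, scale=1.0, offset=0.0):
    """Map the distance to a continuous severity in [0, 3] (Genant 0-3).
    'scale' and 'offset' stand in for calibration constants fitted on
    labelled data; the defaults here are placeholders."""
    d = signed_distance(z, w, b)
    return float(np.clip(scale * d + offset, 0.0, 3.0))

# Illustrative usage with a random latent code and classifier weights
rng = np.random.default_rng(0)
z, w, b = rng.normal(size=128), rng.normal(size=128), 0.1
print(genant_grade(z, w, b))
```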
Autonomous navigation of mobile robots is a well-studied problem in robotics. However, the navigation task becomes challenging when multi-robot systems have to cooperatively navigate dynamic environments with deadlock-prone layouts. We present a Distributed Timed Elastic Band (DTEB) planner that combines prioritized planning with the online TEB trajectory planner, in order to extend the capabilities of the latter to multi-robot systems. The proposed planner is able to reactively avoid imminent collisions as well as predictively resolve potential deadlocks among a team of robots while navigating a complex environment. Our simulation results demonstrate the reliable performance and versatility of the planner in different environment settings. The code and tests for our approach are available online.
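A minimal sketch of the prioritized-planning idea that DTEB builds on (Python; the interfaces and the toy planner are our illustration, not the authors' API): robots plan in priority order, each treating the already-planned trajectories as moving obstacles.

```python
def prioritized_plan(robots, plan_single):
    """Plan trajectories for robots in priority order (highest first).
    plan_single(robot, higher_priority_trajs) is a single-robot solve, e.g.
    a TEB optimization that treats earlier trajectories as moving obstacles."""
    trajectories = {}
    for robot in robots:
        trajectories[robot] = plan_single(robot, list(trajectories.values()))
    return trajectories

# Toy usage: 1-D start/goal pairs; the "planner" assigns each robot its own
# lane index as a crude stand-in for deconfliction against earlier robots.
def toy_planner(robot, others):
    start, goal = robot
    lane = len(others)
    return [(t, start + t * (goal - start), lane) for t in (0.0, 0.5, 1.0)]

print(prioritized_plan([(0.0, 1.0), (1.0, 0.0)], toy_planner))
```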
Sionna is a GPU-accelerated open-source library for link-level simulations based on TensorFlow. It enables the rapid prototyping of complex communication system architectures and provides native support for the integration of neural networks. Sionna implements a broad range of carefully tested state-of-the-art algorithms that can be used for benchmarking and end-to-end performance evaluation. This allows researchers to focus on their research, making it more impactful and reproducible, while saving the time otherwise spent implementing components outside their area of expertise. This white paper provides a brief introduction to Sionna, explains its design principles and features, as well as future extensions, such as integrated ray tracing and custom CUDA kernels. We believe that Sionna is a valuable tool for research on next-generation communication systems, such as 6G, and we welcome contributions from our community.
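To give a flavor of the API, the snippet below sketches an uncoded QPSK transmission over an AWGN channel. It assumes the Sionna 0.x module layout (sionna.mapping, sionna.channel, sionna.utils); module paths may differ in later releases.

```python
import tensorflow as tf
from sionna.utils import BinarySource, ebnodb2no
from sionna.mapping import Mapper, Demapper
from sionna.channel import AWGN

batch_size, num_bits, bits_per_symbol = 64, 1024, 2  # QPSK

source = BinarySource()
mapper = Mapper("qam", bits_per_symbol)
demapper = Demapper("app", "qam", bits_per_symbol)
channel = AWGN()

bits = source([batch_size, num_bits])
x = mapper(bits)
no = ebnodb2no(ebno_db=5.0, num_bits_per_symbol=bits_per_symbol, coderate=1.0)
y = channel([x, no])
llr = demapper([y, no])

# Sionna's LLRs follow the logit convention log p(b=1)/p(b=0),
# so a hard decision is simply llr > 0.
bits_hat = tf.cast(llr > 0, bits.dtype)
ber = tf.reduce_mean(tf.cast(tf.not_equal(bits, bits_hat), tf.float32))
print("BER:", float(ber))
```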
In this work, we present a novel target-based lidar-camera extrinsic calibration methodology that can be used for sensors with non-overlapping fields of view (FOV). Contrary to previous work, our methodology overcomes the non-overlapping FOV challenge using a motion capture system (MCS) instead of traditional simultaneous localization and mapping approaches. Due to the high relative precision of the MCS, our methodology can achieve both the high accuracy and the repeatability of traditional target-based methods, regardless of the amount of overlap between the sensors' fields of view. Using simulation, we show that we can accurately recover extrinsic calibrations for a range of perturbations to the true calibration that would be expected in real circumstances. We also validate that high-accuracy calibrations can be achieved on experimental data. Furthermore, we implement the described approach in an extensible way that allows any camera model, target shape, or feature extraction methodology to be used within our framework. We validate this implementation on two target shapes: an easy-to-construct cylinder target and a diamond target with a checkerboard. The cylinder target results show that our methodology can be used for degenerate target shapes where target poses cannot be fully constrained from a single observation and where distinct repeatable features need not be detected on the target.
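To illustrate the core pose arithmetic in MCS-based extrinsic calibration (a sketch of the general idea, not the authors' exact pipeline; all frame names are hypothetical), the snippet below composes MCS-reported rigid-body poses with per-sensor body-to-sensor transforms to express the camera frame in the lidar frame.

```python
import numpy as np

def inv(T):
    """Invert a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def lidar_T_camera(mcs_T_lidar_body, mcs_T_camera_body,
                   lidar_body_T_lidar, camera_body_T_camera):
    """Extrinsic of the camera in the lidar frame, composed from:
    - MCS poses of the rigid bodies attached to each sensor, and
    - per-sensor body-to-sensor transforms estimated from target observations."""
    mcs_T_lidar = mcs_T_lidar_body @ lidar_body_T_lidar
    mcs_T_camera = mcs_T_camera_body @ camera_body_T_camera
    return inv(mcs_T_lidar) @ mcs_T_camera

# Sanity check: co-located frames yield the identity extrinsic
I = np.eye(4)
print(lidar_T_camera(I, I, I, I))
```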
Decoding brain signals can not only reveal Metaverse users' expectations but also enable early detection of error-related behaviors such as stress, drowsiness, and motion sickness. To that end, this article proposes a pioneering framework that uses a wireless/over-the-air Brain-Computer Interface (BCI) to assist the creation of virtual avatars as human representations in the Metaverse. Specifically, to eliminate the computational burden on Metaverse users' devices, we leverage Wireless Edge Servers (WES), which are common in 5G architectures with their URLLC and enhanced mobile broadband features, to obtain and process brain activity, i.e., electroencephalography (EEG) signals, via uplink wireless channels. As a result, the WES can learn human behaviors, adapt system configurations, and allocate radio resources to create individualized settings and enhance user experiences. Despite the potential of BCI, the inherently noisy/fading wireless channels and the uncertainty in Metaverse users' demands and behaviors make the related resource allocation and learning/classification problems particularly challenging. We formulate the joint learning and resource allocation problem as a Quality-of-Experience (QoE) maximization problem that takes into account the latency, the brain-signal classification accuracy, and the resources of the system. To tackle this mixed-integer programming problem, we propose two novel algorithms: (i) a hybrid learning algorithm to maximize the user QoE and (ii) a meta-learning algorithm to exploit the neurodiversity of the brain signals among multiple Metaverse users. Extensive experimental results with different BCI datasets show that our proposed algorithms not only provide low delay for virtual reality (VR) applications but also achieve high classification accuracy on the collected brain signals.
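A schematic form of such a QoE objective (our illustrative notation, not the paper's exact formulation) weighs classification accuracy against latency under a radio resource budget:

```latex
\max_{\mathbf{r}} \;\; \sum_{u=1}^{U} \Big( \alpha\, A_u(\mathbf{r}) - \beta\, L_u(\mathbf{r}) \Big)
\quad \text{s.t.} \quad \sum_{u=1}^{U} r_u \le R_{\max}, \;\; r_u \in \mathbb{Z}_{\ge 0},
```

where A_u and L_u denote the brain-signal classification accuracy and end-to-end latency of user u under allocation r, the integer constraint on r_u reflects the mixed-integer nature of the problem, and alpha, beta trade off the two terms.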
We present an algorithm for safe robot navigation in complex dynamic environments using a variant of model predictive equilibrium point control. We use an optimization formulation to navigate robots gracefully in dynamic environments, optimizing a trajectory cost function at each timestep. We present a novel trajectory cost formulation that significantly reduces overly conservative behavior and deadlocks and generates smooth trajectories. In particular, we propose a new collision probability function that effectively captures both the risk associated with a given configuration and the time available to avoid collisions based on the velocity direction. Moreover, we propose a terminal state cost based on the expected time-to-goal and time-to-collision values, which helps avoid trajectories that could result in deadlock. We evaluate our cost formulation in multiple simulated and real-world scenarios, including narrow corridors with dynamic obstacles, and observe significantly improved navigation behavior and fewer deadlocks compared to prior methods.
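The sketch below gives one illustrative shape for such a trajectory cost (Python; the weights, the risk model, and the terminal terms are hypothetical stand-ins for the paper's formulations): per-step collision risk grows with proximity and closing speed, and the terminal term penalizes states with a long time-to-goal relative to their time-to-collision.

```python
import numpy as np

def collision_probability(dist, closing_speed, sigma=0.5, tau=1.0):
    """Illustrative risk model: risk grows as the obstacle gets closer and as
    the relative velocity points toward it (closing_speed > 0)."""
    proximity = np.exp(-dist**2 / (2 * sigma**2))
    urgency = 1.0 / (1.0 + np.exp(-closing_speed / tau))
    return proximity * urgency

def trajectory_cost(obstacles, time_to_goal, time_to_collision,
                    w_risk=1.0, w_terminal=0.5):
    """Summed collision risk plus a terminal cost that penalizes states with
    a long expected time-to-goal or a short time-to-collision."""
    risk = sum(collision_probability(d, v) for d, v in obstacles)
    terminal = time_to_goal / max(time_to_collision, 1e-3)
    return w_risk * risk + w_terminal * terminal

# Toy usage: two obstacles given as (distance, closing_speed) pairs
print(trajectory_cost([(1.0, 0.2), (3.0, -0.5)],
                      time_to_goal=5.0, time_to_collision=2.0))
```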
Across various species and different scales, certain organisms use their appendages to grasp objects not through clamping but through wrapping. This pattern of movement is found in octopus tentacles, elephant trunks, and chameleon prehensile tails, demonstrating great versatility in grasping a wide range of objects of various sizes and weights as well as dynamically manipulating them in 3D space. We observed that the structures of these appendages follow a common pattern, a logarithmic spiral, which is especially challenging for existing robot designs to reproduce. This paper reports the design, fabrication, and operation of a class of cable-driven soft robots that morphologically replicate spiral-shaped wrapping. This amounts to substantial curling along the robot's length while actively controlling the curling direction, enabled by two principles: (a) a parametric design based on the logarithmic spiral, which makes it possible to tightly pack around, and thereby grasp, objects that vary in size by more than two orders of magnitude and weigh up to 260 times the robot's own weight, and (b) asymmetric cable forces, which allow swift control of the curling direction for object manipulation. We demonstrate the ability to dynamically manipulate objects at a sub-second timescale by exploiting passive compliance. We believe that our study constitutes a step towards engineered systems that wrap to grasp and manipulate, and it further offers insight into the efficacy of biological spiral-shaped appendages.
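For reference, the logarithmic spiral has the standard polar form (the paper's specific parameter values are not reproduced here):

```latex
r(\theta) = a\, e^{b\theta},
```

where a > 0 sets the overall scale and b controls how tightly the spiral winds. Its self-similarity, in that scaling the curve is equivalent to rotating it, is what allows a single morphology to wrap snugly around objects of widely different sizes.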
The light and soft characteristics of Buoyancy Assisted Lightweight Legged Unit (BALLU) robots give them great potential to provide intrinsically safe interactions in environments involving humans, unlike many heavy and rigid robots. However, their unique and sensitive dynamics make it challenging to obtain robust control policies in the real world. In this work, we demonstrate robust sim-to-real transfer of control policies on BALLU robots via system identification and our novel residual physics learning method, Environment Mimic (EnvMimic). First, we model the nonlinear dynamics of the actuators by collecting hardware data and optimizing the simulation parameters. Then, rather than relying on standard supervised learning formulations, we use deep reinforcement learning to train an external force policy to match real-world trajectories, which enables us to model residual physics with greater fidelity. We analyze the improved simulation fidelity by comparing simulated trajectories against real-world ones. Finally, we demonstrate that the improved simulator allows us to learn better walking and turning policies that can be successfully deployed on the BALLU hardware.
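To make the residual-physics idea concrete, the sketch below (Python; the interfaces are hypothetical, not the authors' code) shows the kind of objective an RL-trained external-force policy might optimize: the closer the force-augmented simulation tracks a recorded real-world trajectory, the higher the reward.

```python
import numpy as np

def rollout_with_residual_force(sim_step, force_policy, x0, horizon):
    """Roll out a simulator whose nominal dynamics are augmented by a learned
    external (residual) force at every step."""
    x, traj = x0, [x0]
    for _ in range(horizon):
        x = sim_step(x, force_policy(x))
        traj.append(x)
    return np.stack(traj)

def tracking_reward(sim_traj, real_traj):
    """Reward the force policy for matching a recorded real trajectory."""
    return -float(np.mean(np.linalg.norm(sim_traj - real_traj, axis=-1)))

# Toy usage: a 1-D point that should advance 0.1 per step; the residual force
# is zero here, so the simulated and "real" trajectories match exactly.
sim_step = lambda x, f: x + 0.1 + f
zero_policy = lambda x: np.zeros_like(x)
real_traj = np.linspace(0.0, 1.0, 11).reshape(-1, 1)
sim_traj = rollout_with_residual_force(sim_step, zero_policy, np.array([0.0]), 10)
print(tracking_reward(sim_traj, real_traj))  # ~0.0
```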
Recently, graph neural networks (GNNs) have been gaining considerable attention for simulating dynamical systems, owing to their inductive nature, which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing body of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulations, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and the decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training systems, thus providing a promising route to simulating large-scale realistic systems.
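As a minimal illustration of the Hamiltonian inductive bias these models share (a toy pendulum energy in place of a learned graph network; this sketch is ours, not the benchmarked code), positions and momenta are evolved through the gradients of a scalar energy function:

```python
import torch

def hamiltonian(q, p, m=1.0, g=9.81, l=1.0):
    """Toy pendulum energy; a Hamiltonian GNN would replace this with a
    learned, graph-structured H(q, p)."""
    return p**2 / (2 * m * l**2) + m * g * l * (1 - torch.cos(q))

def step(q, p, dt=1e-3):
    """One explicit-Euler step of Hamilton's equations:
    dq/dt = dH/dp,  dp/dt = -dH/dq."""
    q = q.detach().requires_grad_(True)
    p = p.detach().requires_grad_(True)
    dHdq, dHdp = torch.autograd.grad(hamiltonian(q, p), (q, p))
    return (q + dt * dHdp).detach(), (p - dt * dHdq).detach()

q, p = torch.tensor(0.5), torch.tensor(0.0)
for _ in range(1000):
    q, p = step(q, p)
# With a small step size, the energy should drift only slightly
print(float(q), float(p), float(hamiltonian(q, p)))
```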
Behaviors of synthetic characters in current military simulations are limited, since they are generally generated by rule-based, reactive computational models with minimal intelligence. Such models cannot adapt to reflect the characters' experience, resulting in brittle intelligence even for the most effective behavior models, which are devised via costly and labor-intensive processes. Observation-based behavior model adaptation, which leverages machine learning and the experience of synthetic entities in combination with appropriate prior knowledge, can address these issues and create a better training experience in military training simulations. In this paper, we introduce a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior while being aware of human trainees and their needs within a training simulation. This framework brings together three mutually complementary components. The first is a Unity-based simulation environment, the Rapid Integration and Development Environment (RIDE), which supports One World Terrain (OWT) models and is capable of running and supporting machine learning experiments. The second is Shiva, a novel multi-agent reinforcement and imitation learning framework that can interface with a variety of simulation environments and can utilize a variety of learning algorithms. The final component is the Sigma Cognitive Architecture, which will augment the behavior models with symbolic and probabilistic reasoning capabilities. We have successfully created proof-of-concept behavior models leveraging this framework on realistic terrain, an essential step towards bringing machine learning into military simulations.