Small-scale robots offer access to spaces that are inaccessible to larger ones, which is crucial in applications such as drug delivery, environmental sensing, and collection of small samples. However, some tasks cannot be performed by a single robot, including assembly and manufacturing at small scales, manipulation of micro- and nano-objects, and robot-based structuring of small-scale materials. The solution is to deploy a group of robots operating as one system; we therefore focus on tasks that can be achieved with a group of small-scale robots. Such robots are typically actuated externally because of their size, which raises the challenge of controlling a group of robots with a single global input. We propose a control algorithm to place individual members of a swarm at predefined positions: a single control input is applied to the system and moves all robots in the same direction, and an additional control modality is introduced by using robots of different lengths. An electromagnetic coil system applies the external force and steers the millirobots, which can move in several modes of motion, such as pivot walking and tumbling. We propose two new millirobot designs. In the first, the magnets are placed at the center of the body to reduce the magnetic attraction force. In the second, the millirobots have identical lengths, with two extra legs acting as the pivot points; varying the pivot separation in this design exploits the variable speed of pivot walking while keeping the tumbling speed constant. This paper presents a general algorithm for the positional control of n millirobots of different lengths, moving them from given initial positions to desired final ones. The method is based on choosing a leader that is fully controllable. Simulations and hardware experiments validate these results.
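To make the single-global-input idea concrete, here is a minimal sketch (not the paper's algorithm) of how heterogeneous step lengths give positional control: one shared heading is broadcast each cycle, every robot advances by its own pivot-walking step length, and a greedy loop drives a chosen leader to its goal. All names, the greedy loop, and the stopping rule are illustrative assumptions.

```python
import numpy as np

def control_to_goals(positions, goals, step_len, leader=0, tol=1e-2, max_iters=5000):
    """positions, goals: (n, 2) arrays; step_len: (n,) per-cycle step lengths."""
    positions = positions.astype(float).copy()
    for _ in range(max_iters):
        err = goals[leader] - positions[leader]
        if np.linalg.norm(err) < tol:
            break
        heading = err / np.linalg.norm(err)        # one global heading for all robots
        positions += np.outer(step_len, heading)   # each robot steps by its own length
    return positions
```

This sketch only drives the leader to its goal; the paper's full algorithm sequences such shared moves so that all n robots reach their targets.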
Teleoperation platforms often require the user to remain at a fixed location to both visualize and control the robot's movement, affording the operator little mobility. One example is existing robotic surgery solutions, which require surgeons to stay away from the patient, attached to consoles where their heads must be kept fixed and their arms can move only within a limited space. This creates a barrier between physician and patient that does not exist in conventional surgery. To address this issue, we propose a mobile telesurgery solution in which surgeons are no longer mechanically tethered to control consoles and can teleoperate the robots from the patient's bedside, using their arms equipped with wireless sensors while viewing the endoscope video through optical see-through head-mounted displays (HMDs). We evaluate the feasibility and efficiency of our interaction method against a standard surgical robotic manipulator on two tasks with different levels of required dexterity. The results indicate that, with sufficient training, our platform can attain similar efficiency while providing added mobility for the operator.
Position sensitive detectors (PSDs) offer the possibility of tracking the two- (or three-) degree-of-freedom (DoF) position of a single active marker with high accuracy, fast response, high update frequency, and low latency, all with a very simple signal processing circuit. However, they are not well suited to 6-DoF object pose tracking because they lack orientation measurement, have a limited tracking range, and are sensitive to environmental variation. We propose a novel 6-DoF pose tracking system for rigid objects that requires only a single active marker. The proposed system combines a stereo-based PSD pair with multiple inertial measurement units (IMUs), building on a practical approach for identifying and controlling the power of infrared light-emitting diode (IR-LED) active markers in order to increase the tracking work space and reduce power consumption. The tracking system is validated over three work space sizes, and its static and dynamic positional accuracy is assessed using a robotic arm manipulator executing three dynamic motion patterns. The results show a static position root-mean-square (RMS) error of 0.6 mm, a dynamic position RMS error of 0.7-0.9 mm, and an orientation RMS error between 0.04 and 0.9 degrees across the dynamic motions. Overall, the proposed system tracks a rigid object's pose with sub-millimeter accuracy in the middle of the work space and sub-degree accuracy throughout the work space under a lab setting.
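As a rough sketch of how a stereo PSD pair and IMUs can be combined, the code below triangulates the marker position from two PSD viewing rays (midpoint of the common perpendicular) and pairs it with an IMU-derived orientation. The interfaces and names are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two PSD viewing rays.
    c1, c2: ray origins; d1, d2: unit ray directions (all 3-vectors)."""
    A = np.stack([d1, -d2], axis=1)                    # 3x2 system in ray parameters
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)    # least-squares closest approach
    p1, p2 = c1 + t[0] * d1, c2 + t[1] * d2
    return 0.5 * (p1 + p2)

def fuse_pose(c1, d1, c2, d2, R_imu):
    """6-DoF pose: position from the stereo PSD pair, orientation from the IMUs."""
    return R_imu, triangulate(c1, d1, c2, d2)
```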
Multirotors now play important roles in a wide range of missions, during which entering confined and narrow tunnels that are barely accessible to humans is desirable yet extremely challenging. The restricted space and significant ego-airflow disturbances induce control issues at both fast and slow flight speeds, while also causing problems in state estimation and perception; a smooth trajectory at a proper speed is therefore necessary for safe tunnel flight. To address these challenges, this letter presents a complete autonomous aerial system that can fly smoothly through tunnels as narrow as 0.6 m. The system contains a motion planner that generates smooth minimum-jerk trajectories along the tunnel centerlines, which are extracted according to the map and a Euclidean Distance Field (EDF), and its practical speed range is obtained through computational fluid dynamics (CFD) and flight-data analyses. Extensive flight experiments on the quadrotor inside multiple narrow tunnels validate the planning framework as well as the robustness of the whole system.
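For reference, a rest-to-rest minimum-jerk segment between two centerline waypoints has the closed-form time scaling 10s^3 - 15s^4 + 6s^5. The sketch below is a generic illustration of that profile, not the planner's actual formulation.

```python
import numpy as np

def min_jerk(p0, p1, T, t):
    """Rest-to-rest minimum-jerk interpolation from waypoint p0 to p1 over duration T."""
    s = np.clip(t / T, 0.0, 1.0)
    blend = 10*s**3 - 15*s**4 + 6*s**5   # zero velocity and acceleration at both ends
    return p0 + (p1 - p0) * blend
```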
Wearable electronic devices are constantly evolving and are deepening the integration of humans with technology. Available in various forms, these flexible and bendable devices can sense and measure physiological and muscular changes in the human body and use those signals for machine control. The MYO gesture band, one such device, captures electromyography (EMG) data from myoelectric signals and translates them into input signals through a set of predefined gestures. Using this device in a multi-modal environment not only broadens the range of tasks that can be accomplished with it, but also helps improve the accuracy of those tasks. This paper addresses the fusion of input modalities, speech and myoelectric signals captured through a microphone and a MYO band, respectively, to control a robotic arm. Experimental results and their accuracies are presented for performance analysis.
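As a toy illustration of the modality fusion described above, here is a rule-based sketch in which a speech command selects an action and a MYO gesture confirms or parameterizes it; every command, gesture, and action name is made up for the example and is not the paper's scheme.

```python
def fuse(speech_cmd, myo_gesture):
    """Return an (action, params) pair only when both modalities agree."""
    table = {
        ("move left",  "fist"):           ("arm.move", {"dx": -0.05}),
        ("move right", "fist"):           ("arm.move", {"dx": +0.05}),
        ("grip",       "fingers_spread"): ("gripper.close", {}),
    }
    return table.get((speech_cmd, myo_gesture))  # None if the modalities conflict
```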
Dynamic motions are a key feature of robotic arms, enabling them to perform tasks quickly and efficiently. Soft continuum manipulators, however, do not currently consider dynamic parameters when operating in task space. This shortcoming makes existing soft robots slow and limits their ability to deal with external forces, especially during object manipulation. We address this issue with dynamic operational space control. Our control approach accounts for the dynamic parameters of the 3D continuum arm and introduces new models that enable multi-segment soft manipulators to operate smoothly in task space. Advanced control methods previously afforded only to rigid robots, such as potential-field obstacle avoidance, are thereby extended to soft robots. Using our approach, a soft manipulator can now achieve a variety of tasks that were previously not possible: we evaluate its performance in closed-loop controlled experiments such as pick-and-place, obstacle avoidance, throwing objects with an attached soft gripper, and deliberately applying forces to a surface by drawing with a grasped piece of chalk. Beyond these newly enabled skills, our approach improves tracking accuracy by 59% and increases speed by a factor of 19.3 compared with the state of the art in task space control. With these newfound abilities, soft robots can begin to challenge rigid robots in the field of manipulation. Our inherently safe and compliant soft robot moves the future of robotic manipulation toward a cageless setup in which humans and robots work in parallel.
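For context, below is a minimal sketch of a Khatib-style operational space control law of the kind the abstract adapts to soft arms. The `model` interface (inertia M, Coriolis matrix C, gravity vector g, task Jacobian J and its time derivative) is an assumed placeholder, not the paper's soft-arm model.

```python
import numpy as np

def osc_torque(q, dq, x, dx, x_des, dx_des, ddx_des, model, Kp, Kd):
    """Task-space PD with full dynamic decoupling (classic operational space control)."""
    M, C, g = model.M(q), model.C(q, dq), model.g(q)   # joint-space dynamics
    J, dJ = model.J(q), model.dJ(q, dq)                # task Jacobian and its rate
    Minv = np.linalg.inv(M)
    Lam = np.linalg.inv(J @ Minv @ J.T)                # task-space inertia
    a = ddx_des + Kd @ (dx_des - dx) + Kp @ (x_des - x)
    F = Lam @ (a - dJ @ dq + J @ Minv @ (C @ dq + g))  # decoupling + compensation
    return J.T @ F                                     # joint torques
```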
Cyber-Physical Systems (CPS) consist of intertwined computational (cyber) and physical components that interact through sensors and/or actuators. Computational elements are networked at every scale and can communicate with each other and with humans; nodes can join and leave the network at any time or move to different spatial locations. In this scenario, monitoring spatial and temporal properties plays a key role in understanding how complex behaviors emerge from local and dynamic interactions. We revisit here the Spatio-Temporal Reach and Escape Logic (STREL), a logic-based formal language designed to express and monitor spatio-temporal requirements over the execution of mobile and spatially distributed CPS. STREL models the physical space in which CPS entities (the nodes of the graph) are arranged as a weighted graph representing their dynamic topological configuration; both nodes and edges carry attributes modeling physical and logical quantities that can evolve over time. STREL combines Signal Temporal Logic with two spatial modalities, reach and escape, that operate over the weighted graph, and from these basic operators other important spatial modalities such as everywhere, somewhere, and surround can be derived. We propose both qualitative and quantitative semantics based on a constraint semiring algebraic structure. We provide an offline monitoring algorithm for STREL and show the feasibility of our approach on two case studies: monitoring spatio-temporal requirements over a simulated mobile ad-hoc sensor network, and a simulated epidemic spreading model for COVID-19.
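As a concrete illustration of the derived operators, here is a minimal Boolean-semantics sketch of somewhere and its dual everywhere over a weighted graph (the paper also defines a quantitative semiring semantics). The graph encoding and function names are assumptions for the example, not STREL's implementation.

```python
import heapq

def somewhere(graph, sat, d):
    """Nodes from which some node in `sat` lies within weighted distance d.
    graph: {node: [(neighbor, weight), ...]}; sat: set of satisfying nodes."""
    out = set()
    for src in graph:
        dist, pq = {src: 0.0}, [(0.0, src)]
        while pq:                                # Dijkstra truncated at distance d
            w, u = heapq.heappop(pq)
            if w > dist[u]:
                continue
            if u in sat:
                out.add(src)
                break
            for v, wt in graph[u]:
                nw = w + wt
                if nw <= d and nw < dist.get(v, float("inf")):
                    dist[v] = nw
                    heapq.heappush(pq, (nw, v))
    return out

def everywhere(graph, sat, d):
    """everywhere phi == not somewhere (not phi), by duality."""
    return set(graph) - somewhere(graph, set(graph) - sat, d)
```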
Recent years have witnessed a trend toward secure processor design in both academia and industry. Secure processors with hardware-enforced isolation can be a solid foundation for future cloud computation. However, in the face of recent side-channel attacks, commercial secure processors have failed to deliver the promise of a secure isolated execution environment: sensitive information inside the secure execution environment is leaked via side channels. This work considers the most powerful software-based side-channel attacker, namely an All Digital State Observing (ADSO) adversary who can observe all digital states, including those in secure enclaves. Traditional signature schemes are not secure under the ADSO adversarial model. We introduce a new cryptographic primitive, One-Time Signature with Secret Key Exposure (OTS-SKE), which ensures that no one can forge a valid signature on a new message or nonce even if all secret session keys are leaked. OTS-SKE thus enables us to sign attestation reports securely under an ADSO adversary. We also minimize the trusted computing base by introducing a secure co-processor into the system, with a unidirectional interaction between the secure co-processor and the attestation processor: the co-processor takes no inputs from the processor and only generates secret keys for the processor to fetch. Our experimental results show that OTS-SKE signing is faster than the Elliptic Curve Digital Signature Algorithm (ECDSA) signing used in Intel SGX.
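For intuition about the one-time-signature building block, here is a textbook Lamport OTS sketch. Note that this is not OTS-SKE: plain Lamport reveals half the secret key with each signature and is not secure once the whole key leaks, which is precisely the gap OTS-SKE closes under the ADSO model.

```python
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen(n=256):
    # Secret key: n pairs of random preimages; public key: their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(n)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg, n=256):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(n)]

def sign(sk, msg):
    # Reveal one preimage per message-digest bit; the key is strictly one-time.
    return [pair[b] for pair, b in zip(sk, bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pair[b] for s, pair, b in zip(sig, pk, bits(msg)))
```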
The manufacturing industry is currently witnessing a paradigm shift with the unprecedented adoption of industrial robots, and machine vision is a key perception technology that enables these robots to perform precise operations in unstructured environments. However, the sensitivity of conventional vision sensors to lighting conditions and high-speed motion limits the reliability and work rate of production lines. Neuromorphic vision is a recent technology with the potential to address these challenges thanks to its high temporal resolution, low latency, and wide dynamic range. In this paper, and for the first time, we propose a novel neuromorphic vision based controller for faster and more reliable machining operations, and present a complete robotic system capable of performing drilling tasks with sub-millimeter accuracy. Our system localizes the target workpiece in 3D using two perception stages developed specifically for the asynchronous output of neuromorphic cameras: the first performs multi-view reconstruction for an initial estimate of the workpiece's pose, and the second refines this estimate for a local region of the workpiece using circular hole detection. The robot then precisely positions the drilling end-effector and drills the target holes on the workpiece using a combined position-based and image-based visual servoing approach. The proposed solution is validated experimentally by drilling nutplate holes on workpieces placed arbitrarily in an unstructured environment with uncontrolled lighting. Experimental results show average positional errors of less than 0.1 mm and demonstrate that neuromorphic vision overcomes the lighting and speed limitations of conventional cameras.
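To illustrate the image-based half of the combined servoing scheme, here is a minimal classic IBVS sketch (v = -lambda * pinv(L) @ e with the standard point-feature interaction matrix). It is a generic textbook law assuming normalized image coordinates and known feature depths, not the paper's specific controller.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 image Jacobian for a normalized image point at depth Z."""
    return np.array([
        [-1/Z,  0.0,  x/Z,  x*y,       -(1 + x**2),  y],
        [ 0.0, -1/Z,  y/Z,  1 + y**2,  -x*y,        -x],
    ])

def ibvs_velocity(feats, feats_des, depths, lam=0.5):
    """Camera velocity screw driving the image features toward their targets."""
    feats, feats_des = np.asarray(feats), np.asarray(feats_des)
    e = (feats - feats_des).reshape(-1)                       # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(feats, depths)])
    return -lam * np.linalg.pinv(L) @ e
```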
We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method builds on recent neural scene representation and rendering work that learns representations of geometry and appearance from 2D images alone. While existing works have demonstrated compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with neural implicit methods, in particular under user-controlled novel poses, remains difficult. To address this problem, we use a coarse body model as a proxy to unwarp the surrounding 3D space into a canonical pose. A neural radiance field then learns pose-dependent geometric deformations and pose- and view-dependent appearance effects in the canonical space from multi-view video input. To synthesize novel views of high-fidelity dynamic geometry and appearance, we leverage 2D texture maps defined on the body model as latent variables for predicting residual deformations and dynamic appearance. Experiments demonstrate that our method achieves better quality than the state of the art for both playback and novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses. Furthermore, our method supports body-shape control of the synthesized results.
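As a rough illustration of the unwarping step, the sketch below maps a query point in posed space into canonical space via the inverse skinning transform of its nearest body-model vertex. This nearest-vertex approximation and all names are assumptions for the example, not the paper's exact warping.

```python
import numpy as np

def unwarp_point(x_posed, verts_posed, vert_transforms):
    """Map a posed-space query point into the canonical pose.
    verts_posed: (V, 3) posed body-model vertices.
    vert_transforms: (V, 4, 4) per-vertex skinning transforms (canonical -> posed)."""
    i = np.argmin(np.linalg.norm(verts_posed - x_posed, axis=1))  # nearest proxy vertex
    T_inv = np.linalg.inv(vert_transforms[i])                     # invert its transform
    return (T_inv @ np.append(x_posed, 1.0))[:3]
```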
In this paper, a control algorithm for guiding a two-wheeled mobile robot with unknown inertia to a desired point and orientation using an Adaptive Model Predictive Control (AMPC) framework is presented. The robot is modeled as a knife edge, or skate, with nonholonomic kinematic constraints, and the dynamical equations are derived using the Lagrangian approach. The inputs at every time instant are obtained from Model Predictive Control (MPC) with a set of nominal parameters, which are updated using a recursive least squares algorithm. The efficacy of the algorithm is demonstrated through numerical simulations.
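For illustration, a minimal sketch of a recursive least squares (RLS) update of the kind used to refine the nominal parameters between MPC solves; the regressor and parameter names are placeholders, not the paper's exact formulation.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step with forgetting factor lam.
    theta: (n,) parameter estimate; P: (n, n) covariance;
    phi: (n,) regressor; y: scalar measurement."""
    phi = phi.reshape(-1, 1)
    denom = float(lam + phi.T @ P @ phi)
    K = (P @ phi) / denom                              # gain vector
    innov = y - float(phi.T @ theta.reshape(-1, 1))    # prediction error
    theta = theta + (K * innov).ravel()                # parameter correction
    P = (P - K @ phi.T @ P) / lam                      # covariance update
    return theta, P
```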