Tactile information plays a critical role in human dexterity. It reveals useful contact information that may not be inferred directly from vision. In fact, humans can even perform in-hand dexterous manipulation without using vision. Can we enable the same ability for a multi-fingered robot hand? In this paper, we present Touch Dexterity, a new system that can perform in-hand object rotation using only touch, without seeing the object. Instead of relying on precise tactile sensing in a small region, we introduce a new system design using dense binary force sensors (touch or no touch) overlaying one side of the whole robot hand (palm, finger links, fingertips). This design is low-cost, gives larger coverage of the object, and minimizes the Sim2Real gap at the same time. We train an in-hand rotation policy using Reinforcement Learning on diverse objects in simulation. Relying on touch-only sensing, we can directly deploy the policy on a real robot hand and rotate novel objects that were not present during training. Extensive ablations are performed on how tactile information helps in-hand manipulation. Our project is available at //touchdexterity.github.io.
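To make the sensing design concrete, here is a minimal Python sketch of how such binary tactile readings could feed a policy; the sensor count, force threshold, and observation layout are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

NUM_SENSORS = 96         # assumed count of binary pads over palm, finger links, fingertips
CONTACT_THRESHOLD = 0.1  # assumed force threshold separating touch / no touch

def tactile_observation(raw_forces: np.ndarray, joint_positions: np.ndarray) -> np.ndarray:
    """Binarize raw force readings (touch or no touch) and stack them with proprioception."""
    contacts = (raw_forces > CONTACT_THRESHOLD).astype(np.float32)
    return np.concatenate([contacts, joint_positions.astype(np.float32)])

# Example: 96 simulated force readings and 16 joint angles -> one policy observation
obs = tactile_observation(np.random.rand(NUM_SENSORS), np.zeros(16))
```

Because the policy only ever sees the thresholded bits, the same observation can be produced identically in simulation and on hardware, which is what keeps the Sim2Real gap small.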
With its ability to render millions of particles, massively detailed scenes, and artificial illumination that is hard to distinguish from reality, version 5 of Unreal Engine promises unprecedented industrial applications. The paradigms and aims of Unreal Engine contrast with those of the industrial simulators typically used by the scientific community. The visual quality and performance of its rendering engine open up opportunities, especially for industries and the simulation business, where interoperability and scalability are required. This paper offers a point of view on the following question: "Which architecture should we implement to integrate real-world data into an Unreal Engine 5 simulator and a mixed-reality environment?" The topic is reexamined in an innovative and conceptual way, touching on the generalization of mixed-reality technologies, the Internet of Things, digital twins, and Big Data, while providing a solution for simple, real use cases. This paper gives a detailed analysis of the question at both the theoretical and operational levels. It then goes deep into Unreal Engine's operation in order to extract its vanilla capabilities. Next, the C++ plugin system is reviewed in detail, as is third-party library integration: pitfalls to be avoided are shown. Finally, the last chapter proposes a generic architecture, useful in large-scale industrial 3D applications such as collaborative work or hyper-connected simulators. This document might be of interest to an Unreal Engine expert who would like to learn about server architectures. Conversely, it could be relevant for an expert in backend servers who wants to learn about Unreal Engine's capabilities. This research concludes that Unreal Engine's modularity enables integration with almost any protocol. The features for integrating external real-world data are numerous but depend on the use case. Distributed systems for Big Data require a scalable architecture, possibly without the Unreal Engine dedicated server. Environments that require sub-second latency need direct connections, bypassing any intermediate servers.
Several NP-hard problems are solved exactly using exponential-time branching strategies, whether branch-and-bound algorithms or bounded search trees in fixed-parameter algorithms. The space of instances that sequential algorithms can handle is usually small, whereas massive parallelization has been shown to significantly increase the space of instances that can be solved exactly. However, previous centralized approaches require too much communication to be efficient, whereas decentralized approaches are more efficient but have difficulty keeping track of the global state of the exploration. In this work, we revisit the centralized paradigm while avoiding previous bottlenecks. In our strategy, the center has lightweight responsibilities and requires only a few bits per communication, yet it can still track the progress of every worker. In particular, the center never holds any task but is able to guarantee that a process with no work always receives the globally highest-priority task. Our strategy is implemented in a generic C++ library called GemPBA, which allows a programmer to convert a sequential branching algorithm into a parallel version by changing only a few lines of code. An experimental case study on the vertex cover problem demonstrates that some of the toughest instances from the DIMACS challenge graphs, which would take months to solve sequentially, can be handled within two hours with our approach.
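To illustrate the kind of sequential branching algorithm such a library targets, here is a minimal Python sketch of bounded-search branching for minimum vertex cover; it is a generic illustration only and does not use GemPBA's actual API (the library itself is C++):

```python
def vertex_cover_branch(graph, cover, best):
    """Branch on an uncovered edge (u, v): any vertex cover must contain u or v.

    graph: dict mapping each vertex to its set of neighbours;
    cover: set of vertices chosen so far;
    best:  one-element list holding the best cover size found so far.
    """
    if len(cover) >= best[0]:       # prune: this branch cannot improve on the best cover
        return
    edge = next(((u, v) for u, nbrs in graph.items() for v in nbrs
                 if u not in cover and v not in cover), None)
    if edge is None:                # every edge is covered
        best[0] = len(cover)
        return
    u, v = edge
    vertex_cover_branch(graph, cover | {u}, best)   # branch 1: take u
    vertex_cover_branch(graph, cover | {v}, best)   # branch 2: take v

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
best = [len(triangle)]              # trivial upper bound: take every vertex
vertex_cover_branch(triangle, set(), best)
print(best[0])                      # -> 2
```

The two recursive calls are exactly the branches a parallel framework can hand to different workers, which is why converting such code is claimed to require changing only a few lines.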
Cooperative adaptive cruise control presents an opportunity to improve road transportation through increased road capacity and reduced energy use and accidents. Careful design of control algorithms and communication systems is required to ensure that the vehicle platoon is stable and meets desired safety requirements. In this paper, we propose a centralized model predictive controller for a heterogeneous platoon of vehicles that reaches a desired platoon velocity and individual inter-vehicle distances with driver-selected headway times. As a novel concept, we allow interruption by a human driver in the platoon who temporarily takes control of their vehicle, under the assumption that the driver will, at minimum, obey legal velocity limits and the physical performance constraints of their vehicle. The finite-horizon cost function of our proposed platoon controller is inspired by the infinite-horizon design. To the best of our knowledge, this is the first platoon controller that integrates human-driven vehicles. We illustrate the performance of our proposed design with a numerical study, demonstrating that the safety-distance, velocity, and actuation constraints are obeyed. Additionally, in simulation we illustrate a key property of string stability, where the impact of a disturbance is attenuated along the platoon.
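As an illustrative sketch only (the notation and weights below are assumptions, not the paper's exact formulation), a finite-horizon platoon cost of this kind penalizes spacing and velocity errors plus control effort over the prediction horizon:

```latex
J = \sum_{k=0}^{N-1} \sum_{i=1}^{M}
      \Big( q_d \big(d_i(k) - h_i\, v_i(k) - d_{\min}\big)^2
          + q_v \big(v_i(k) - v_{\mathrm{des}}\big)^2
          + r\, u_i(k)^2 \Big)
```

where $d_i$ is vehicle $i$'s spacing to its predecessor, $h_i$ the driver-selected headway time, $d_{\min}$ a minimum safety distance, $v_{\mathrm{des}}$ the desired platoon velocity, $u_i$ the actuation input, and $q_d$, $q_v$, $r$ positive weights; the minimization runs subject to the velocity, actuation, and safety-distance constraints described above.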
Asynchronous action coordination presents a pervasive challenge in Multi-Agent Systems (MAS), one that can be modeled as a Stackelberg game (SG). However, the scalability of existing Multi-Agent Reinforcement Learning (MARL) methods based on SG is severely constrained by network structures or environmental limitations. To address this issue, we propose the Stackelberg Decision Transformer (STEER), a heuristic approach that resolves the difficulties of hierarchical coordination among agents. STEER efficiently manages decision-making in both spatial and temporal contexts by combining the hierarchical decision structure of SG, the modeling capability of autoregressive sequence models, and the exploratory learning methodology of MARL. Our research contributes an effective and adaptable asynchronous action coordination method that can be widely applied to various task types and environmental configurations in MAS. Experimental results demonstrate that our method converges to Stackelberg equilibrium solutions and outperforms existing methods in complex scenarios.
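As background for readers unfamiliar with SG, the following Python sketch computes a pure-strategy Stackelberg equilibrium of a small bimatrix game by enumeration; the payoff matrices are invented for illustration and say nothing about STEER's architecture:

```python
import numpy as np

# Toy 2x3 bimatrix game: rows = leader actions, columns = follower actions.
leader_payoff   = np.array([[3.0, 1.0, 0.0],
                            [2.0, 4.0, 1.0]])
follower_payoff = np.array([[1.0, 2.0, 0.0],
                            [0.0, 3.0, 2.0]])

def stackelberg_equilibrium(L, F):
    """The leader commits first; the follower best-responds; the leader picks
    the action whose best response maximizes the leader's own payoff."""
    leader_action = max(range(L.shape[0]),
                        key=lambda a: L[a, int(np.argmax(F[a]))])
    return leader_action, int(np.argmax(F[leader_action]))

print(stackelberg_equilibrium(leader_payoff, follower_payoff))  # -> (1, 1)
```

The asymmetry between leader and follower is what produces the hierarchical, asynchronous decision structure that SG-based coordination methods exploit.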
Evaluating human-AI decision-making systems is an emerging challenge, as new ways of combining multiple AI models towards a specific goal are proposed every day. As humans interact with AI in decision-making systems, multiple factors may be present in a task, including trust, interpretability, and explainability, amongst others. In this context, this work proposes a retrospective method to support a more holistic understanding of how people interact with and connect multiple AI models and combine multiple outputs in human-AI decision-making systems. The method employs a retrospective end-user walkthrough, with the objective of supporting HCI practitioners in understanding the higher-order cognitive processes in place and the role that AI model outputs play in human-AI decision-making. The method was qualitatively assessed with 29 participants (four in a pilot phase; 25 in the main user study) interacting with a human-AI decision-making system in the context of financial decision-making. The system combines visual analytics, three AI models for revenue prediction, AI-supported analogues analysis, and hypothesis testing using external news and natural language processing to provide multiple means for comparing companies. Beyond results on tasks and usability problems, the outcomes suggest that the method is promising in highlighting why AI models are ignored, used, or trusted, and how future interactions are planned. We suggest that HCI practitioners researching human-AI interaction can benefit from adding this step to user studies as a debriefing stage, applied like a retrospective Think-Aloud protocol but with emphasis on revisiting tasks and understanding why participants ignored or connected predictions while performing a task.
The ability to generalize to a wide range of recording devices is a crucial performance factor for audio classification models. The characteristics of different types of microphones introduce distributional shifts in the digitized audio signals due to their varying frequency responses. If this domain shift is not taken into account during training, the model's performance could degrade severely when it is applied to signals recorded by unseen devices. In particular, training a model on audio signals recorded with a small number of different microphones can make generalization to unseen devices difficult. To tackle this problem, we convolve audio signals in the training set with pre-recorded device impulse responses (DIRs) to artificially increase the diversity of recording devices. We systematically study the effect of DIR augmentation on the task of Acoustic Scene Classification using CNNs and Audio Spectrogram Transformers. The results show that DIR augmentation in isolation performs similarly to the state-of-the-art method Freq-MixStyle. However, we also show that DIR augmentation and Freq-MixStyle are complementary, achieving a new state-of-the-art performance on signals recorded by devices unseen during training.
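A minimal Python sketch of the augmentation step (the application probability, loudness rescaling, and DIR loading are our assumptions; the paper's exact pipeline may differ):

```python
import numpy as np
from scipy.signal import fftconvolve

def dir_augment(waveform, dirs, p=0.6, rng=None):
    """With probability p, convolve the waveform with a randomly chosen
    device impulse response (DIR) to simulate an unseen recording device."""
    rng = rng or np.random.default_rng()
    if rng.random() >= p:
        return waveform
    ir = dirs[rng.integers(len(dirs))]
    out = fftconvolve(waveform, ir, mode="full")[: len(waveform)]
    # Rescale so the augmented signal stays near the dry signal's peak level.
    return out * (np.max(np.abs(waveform)) / (np.max(np.abs(out)) + 1e-8))
```

Because convolution with an impulse response imposes that device's frequency response on the signal, each pre-recorded DIR effectively adds one more "microphone" to the training set.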
tSNE and UMAP are popular dimensionality reduction algorithms due to their speed and interpretable low-dimensional embeddings. Despite their popularity, however, little work has been done to study the full span of their differences. We theoretically and experimentally evaluate the space of parameters in both tSNE and UMAP and observe that a single one -- the normalization -- is responsible for switching between them. This, in turn, implies that the majority of the algorithmic differences can be toggled without affecting the embeddings. We discuss the implications this has for several theoretical claims behind UMAP, as well as how to reconcile them with existing tSNE interpretations. Based on our analysis, we provide a method that combines previously incompatible techniques from tSNE and UMAP and can replicate the results of either algorithm. This allows our method to incorporate further improvements, such as an acceleration that obtains either method's outputs faster than UMAP. We release improved versions of tSNE, UMAP, and our method, fully plug-and-play with the traditional libraries, at //github.com/Andrew-Draganov/GiDR-DUN
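The normalization in question can be stated in a few lines of Python; this is a simplified sketch (a single global bandwidth instead of tSNE's per-point perplexity calibration or UMAP's local connectivity), meant only to show where the two algorithms diverge:

```python
import numpy as np

def pairwise_similarities(X, sigma=1.0):
    """Gaussian similarities between all rows of X, diagonal zeroed."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

X = np.random.default_rng(0).normal(size=(100, 5))
W = pairwise_similarities(X)

P_tsne = W / W.sum()  # tSNE-style: one probability distribution over ALL pairs
P_umap = W            # UMAP-style: unnormalized per-edge weights
```

Everything downstream (gradient magnitudes, repulsion sampling) then inherits this one choice, which is the sense in which a single parameter toggles between the two embeddings.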
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is itself difficult to scale. In this paper we present four algorithms to solve these problems. Their combination enables each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those subtasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it was tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
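One plausible reading of the exploration mechanism (a hypothetical sketch; the abstract does not specify the four algorithms, so the linear schedule and names below are our assumptions) is an epsilon-greedy choice whose exploration rate shrinks as the agent grows more confident in its current strategy:

```python
import random

def choose_agent(q_values, confidence):
    """Pick an agent to receive a subtask: explore the system less as
    confidence (in [0, 1]) in the current allocation strategy grows."""
    epsilon = 1.0 - confidence
    if random.random() < epsilon:
        return random.choice(list(q_values))    # explore: try another agent
    return max(q_values, key=q_values.get)      # exploit: best-known agent

# Example: allocate a subtask among three known agents
print(choose_agent({"agent-a": 0.4, "agent-b": 0.9, "agent-c": 0.1}, confidence=0.8))
```

Retaining the learned values across connectivity disruptions is presumably what enables the recovery advantage over no-knowledge-retention approaches reported above.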
When is heterogeneity in the composition of an autonomous robotic team beneficial, and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches, and our results are extensively validated in simulation. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters, the optimal ratio of the defenders' speeds remains nearly constant.
A Command, Control, Communication, and Intelligence (C3I) system is a system-of-systems that integrates computing machines, sensors, and communication networks. C3I systems are increasingly used in critical civil and military operations for achieving information superiority, assurance, and operational efficacy. Like traditional systems, C3I systems face widespread cyber-threats. However, the sensitive nature of their application domain (e.g., military operations) makes their security a critical concern. For instance, a cyber-attack on military installations can have detrimental impacts on national security. Therefore, in this paper, we review the state of the art on the security of C3I systems. In particular, this paper aims to identify the security vulnerabilities, attack vectors, and countermeasures for C3I systems. We used the well-known systematic literature review method to select and review 77 studies on the security of C3I systems. Our review enabled us to identify 27 vulnerabilities, 22 attack vectors, and 62 countermeasures for C3I systems. This review has also revealed several areas for future research and identified key lessons with regard to the security of C3I systems.