This paper presents a static-equilibrium-oriented interaction force modeling and control approach for aerial manipulation, employing uni-directional thrust (UDT) multirotors interacting with variously defined environments. First, a simplified system model for a quadrotor-based aerial manipulator is introduced, considering parameterized work surfaces under stated assumptions, and a range of meaningful manipulation tasks is then used to explore the system properties in a quasi-static equilibrium state. An explicit interaction force model, relating the aerial manipulator's pose configuration and the environment parameters, is derived from the static equilibrium analysis, and a singularity condition is identified from this model. A hybrid attitude/force interaction control strategy, combining high-gain attitude control with feedforward-plus-feedback force control, is then presented to verify the proposed interaction force model. This paper presents preliminary results: we study the properties of UDT-based aerial manipulators through specific tasks and propose a novel framework for interaction force modeling and control, aiming to maximize the commercial value of UDT platforms for aerial manipulation.
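As an illustrative aside, the sketch below shows one way such a quasi-static force model and a feedforward-plus-feedback force loop could look in code; the vertical work surface, the scalar contact force, and the PI gains are assumptions for illustration, not the controller derived in the paper.

```python
import numpy as np

def interaction_force_model(theta, m, g=9.81):
    """Quasi-static model for a UDT multirotor pushing on a vertical surface
    (an assumed geometry): the thrust tilts by the pitch angle theta, its
    horizontal component becomes the contact force."""
    thrust = m * g / np.cos(theta)   # vertical force balance
    force = m * g * np.tan(theta)    # horizontal (contact) force balance
    return thrust, force             # note the singularity as theta -> pi/2

def thrust_command(f_des, f_meas, theta, m, g=9.81, kp=2.0, ki=0.5,
                   int_err=0.0, dt=0.01):
    """Feedforward-plus-feedback force loop layered on the model above;
    the PI structure and gains are illustrative assumptions."""
    t_ff, _ = interaction_force_model(theta, m, g)   # model-based feedforward
    err = f_des - f_meas                             # force tracking error
    int_err += err * dt
    return t_ff + kp * err + ki * int_err, int_err   # PI feedback correction
```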
We address the problem of efficient 3-D exploration in indoor environments for micro aerial vehicles with limited sensing capabilities and payload/power constraints. We develop an indoor exploration framework that uses learning to predict the occupancy of unseen areas, extracts semantic features, samples viewpoints to predict information gains for different exploration goals, and plans informative trajectories to enable safe and smart exploration. Extensive experiments in simulated and real-world environments show that the proposed approach outperforms a state-of-the-art exploration framework by 24% in total path length in a structured indoor environment and achieves a higher success rate during exploration.
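For illustration, the sketch below shows a generic information-gain-based viewpoint selection of the kind such a framework relies on; the grid-counting gain, the candidate set, and the cost weight lam are simplifying assumptions standing in for the learned occupancy and gain predictors.

```python
import numpy as np

def information_gain(occupancy, viewpoint, radius):
    """Count unknown cells (value -1) within a sensing radius of the viewpoint
    on a 2-D grid; a stand-in for a learned information-gain predictor."""
    h, w = occupancy.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (ys - viewpoint[0]) ** 2 + (xs - viewpoint[1]) ** 2 <= radius ** 2
    return np.count_nonzero(occupancy[mask] == -1)

def select_goal(occupancy, candidates, current, radius=10, lam=0.5):
    """Pick the candidate viewpoint maximizing gain minus travel cost."""
    def utility(v):
        gain = information_gain(occupancy, v, radius)
        cost = np.hypot(v[0] - current[0], v[1] - current[1])
        return gain - lam * cost
    return max(candidates, key=utility)
```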
This paper presents a modular approach to motion planning with provable stability guarantees for robots that move through changing environments via periodic locomotion behaviors. We focus on dynamic walkers as a paradigm for such systems, although the tools developed in this paper can be used to support general compositional approaches to robot motion planning with Dynamic Movement Primitives (DMPs). Our approach ensures a priori that the suggested plan can be stably executed. This is achieved by formulating the planning process as a Switching System with Multiple Equilibria (SSME) and proving that the system's evolution remains within explicitly characterized trapping regions in the state space under suitable constraints on the frequency of switching among the DMPs. These conditions effectively encapsulate the low-level stability limitations in a form that can be easily communicated to the planner, guaranteeing that the suggested plan is compatible with the robot's dynamics. Furthermore, we show how the available primitives can be safely composed online in a receding horizon manner to enable the robot to react to moving obstacles. The proposed framework is applied to 3D bipedal walking models under common modeling assumptions, and offers a modular approach towards stably integrating readily available low-level locomotion control and high-level planning methods.
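As a point of reference, the sketch below rolls out a standard discrete Dynamic Movement Primitive; with zero forcing the state converges exponentially to the goal, which is the primitive-level stability property that switching analyses of this kind build on. The scalar state and the gain values are illustrative assumptions, not the paper's walking models.

```python
import numpy as np

def rollout_dmp(x0, g, duration=1.0, dt=0.001, alpha=25.0, beta=6.25,
                alpha_s=4.0, forcing=lambda s: 0.0):
    """Minimal discrete DMP rollout (transformation + canonical system).
    With forcing == 0 the trajectory converges exponentially to the goal g."""
    tau = duration
    x, v, s = x0, 0.0, 1.0
    traj = []
    for _ in range(int(duration / dt)):
        f = forcing(s) * s * (g - x0)          # scaled nonlinear forcing term
        v += dt / tau * (alpha * (beta * (g - x) - v) + f)
        x += dt / tau * v
        s += dt / tau * (-alpha_s * s)         # canonical phase decay
        traj.append(x)
    return np.array(traj)
```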
This paper explores the connections between optimal transport and variational inference, with a focus on forward and reverse time stochastic differential equations and Girsanov transformations. We present a principled and systematic framework for sampling and generative modelling centred around divergences on path space. Our work culminates in the development of a novel score-based annealed flow technique (with connections to Jarzynski and Crooks identities from statistical physics) and a regularised iterative proportional fitting (IPF)-type objective, departing from the sequential nature of standard IPF. Through a series of generative modelling examples and a double-well-based rare event task, we showcase the potential of the proposed methods.
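For context, the sketch below shows a plain Euler-Maruyama discretization of a reverse-time SDE driven by a user-supplied score function; the Ornstein-Uhlenbeck forward process and the step sizes are assumptions for illustration, and the annealed flow and regularised IPF objectives of the paper are not reproduced here.

```python
import numpy as np

def reverse_sde_sample(score, dim, n_steps=1000, T=1.0, rng=None):
    """Euler-Maruyama integration of the reverse-time SDE associated with an
    assumed forward process dx = -x dt + sqrt(2) dW, given a user-supplied
    score(x, t) approximating grad log p_t(x). Illustrative only."""
    rng = rng or np.random.default_rng(0)
    dt = T / n_steps
    x = rng.standard_normal(dim)           # start from the reference Gaussian
    for i in range(n_steps):
        t = T - i * dt                     # integrate backwards in time
        drift = -x - 2.0 * score(x, t)     # reverse drift: f(x) - g^2 * score
        x = x - drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(dim)
    return x
```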
Edge-device co-inference refers to deploying well-trained artificial intelligence (AI) models at the network edge under the cooperation of devices and edge servers for providing ambient intelligent services. To enhance the utilization of limited network resources in edge-device co-inference tasks from a systematic view, we propose a task-oriented scheme of integrated sensing, computation, and communication (ISCC) in this work. In this system, all devices sense a target from the same wide view to obtain homogeneous noise-corrupted sensory data, from which the local feature vectors are extracted. All local feature vectors are aggregated at the server using over-the-air computation (AirComp) in a broadband channel with the orthogonal-frequency-division-multiplexing technique for suppressing the sensing and channel noise. The aggregated denoised global feature vector is further input to a server-side AI model for completing the downstream inference task. A novel task-oriented design criterion, called maximum minimum pair-wise discriminant gain, is adopted for classification tasks. It enlarges the distance between the closest pair of classes in the feature space, leading to a balanced and enhanced inference accuracy. Under this criterion, a problem of joint sensing power assignment, transmit precoding, and receive beamforming is formulated. The challenge lies in three aspects: the coupling between sensing and AirComp, the joint optimization of all feature dimensions' AirComp aggregation over a broadband channel, and the complicated form of the maximum minimum pair-wise discriminant gain. To solve this problem, a task-oriented ISCC scheme with AirComp is proposed. Experiments based on a human motion recognition task are conducted to verify the advantages of the proposed scheme over an existing scheme and a baseline.
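To make the criterion concrete, the sketch below computes a minimum pair-wise discriminant gain under the simplifying assumption that the gain of a class pair is the squared distance between class centroids normalized by the post-aggregation noise variance; the actual expression optimized in the paper is more involved.

```python
import numpy as np
from itertools import combinations

def min_pairwise_discriminant_gain(centroids, noise_var):
    """Minimum pair-wise discriminant gain over all class pairs, taken here
    (as an illustrative assumption) as the squared centroid distance divided
    by the per-dimension noise variance after aggregation."""
    gains = []
    for i, j in combinations(range(len(centroids)), 2):
        diff = centroids[i] - centroids[j]
        gains.append(np.sum(diff ** 2 / noise_var))  # sum over feature dims
    return min(gains)                                # objective to maximize
```

A resource-allocation routine would then search over the sensing power, transmit precoders, and receive beamformers to maximize this minimum gain.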
Current research in zero-shot translation is plagued by several issues, such as high compute requirements, increased training time, and off-target translations. Proposed remedies often come at the cost of additional data or compute requirements. Pivot-based neural machine translation is preferred over a single-encoder model in most settings despite the increased training and evaluation time. In this work, we overcome the shortcomings of zero-shot translation by taking advantage of transliteration and linguistic similarity. We build a single encoder-decoder neural machine translation system for Dravidian-Dravidian multilingual translation and perform zero-shot translation. We compare the data vs. zero-shot accuracy tradeoff and evaluate the performance of our vanilla method against the current state-of-the-art pivot-based method. We also test the theory that morphologically rich languages require large vocabularies by restricting the vocabulary using an optimal-transport-based technique. Our model achieves scores within 3 BLEU of large-scale pivot-based models when trained on 50% of the language directions.
Automaton-based representations of task knowledge play an important role in control and planning for sequential decision-making problems. However, obtaining the high-level task knowledge required to build such automata is often difficult. Meanwhile, large-scale generative language models (GLMs) can automatically generate relevant task knowledge. However, the textual outputs from GLMs cannot be formally verified or used for sequential decision-making. We propose a novel algorithm named GLM2FSA, which constructs a finite state automaton (FSA) encoding high-level task knowledge from a brief natural-language description of the task goal. GLM2FSA first sends queries to a GLM to extract task knowledge in textual form, and then it builds an FSA to represent this text-based knowledge. The proposed algorithm thus fills the gap between natural-language task descriptions and automaton-based representations, and the constructed FSA can be formally verified against user-defined specifications. We accordingly propose a method to iteratively refine the queries to the GLM based on the outcomes, e.g., counter-examples, from verification. We demonstrate GLM2FSA's ability to build and refine automaton-based representations of everyday tasks (e.g., crossing a road), and also of tasks that require highly-specialized knowledge (e.g., executing secure multi-party computation).
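As a toy illustration of the gap GLM2FSA fills, the sketch below turns an ordered list of textual steps (such as those a GLM might return) into a small automaton; the state and transition conventions are hypothetical and much simpler than the algorithm's actual construction and verification pipeline.

```python
def steps_to_fsa(steps):
    """Turn an ordered list of step descriptions into a toy FSA: one state per
    step plus a final state, with a progress transition to the next state and
    a 'not yet' self-loop. All conventions here are illustrative assumptions."""
    fsa = {"initial": "q0", "accepting": {f"q{len(steps)}"}, "transitions": {}}
    for i, step in enumerate(steps):
        fsa["transitions"][(f"q{i}", step)] = f"q{i + 1}"         # progress
        fsa["transitions"][(f"q{i}", "not " + step)] = f"q{i}"    # self-loop
    return fsa

# Example with hypothetical GLM-style steps for crossing a road.
fsa = steps_to_fsa(["look both ways", "wait for no cars", "cross the road"])
```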
Parameter design is critical to ensuring a satisfactory holistic performance of power converters. Generally, circuit parameter design for power converters consists of two processes: an analysis-and-deduction process and an optimization process. Existing approaches to parameter design fall into two types: the traditional approach and the computer-aided optimization (CAO) approach. The traditional approaches require heavy human dependence. Even though the emerging CAO approaches automate the optimization process, they still require a manual analysis-and-deduction process. To mitigate human dependence while retaining high accuracy and easy implementation, an artificial-intelligence-based design (AI-D) approach is proposed in this article for the parameter design of power converters. In the proposed AI-D approach, to automate the analysis-and-deduction process, simulation tools and a batch-normalization neural network (BN-NN) are adopted to build data-driven models of the optimization objectives and design constraints. Besides, to automate the optimization process, a genetic algorithm is used to search for optimal design results. The proposed AI-D approach is validated in the circuit parameter design of the synchronous buck converter in the 48-to-12-V accessory-load power supply system of an electric vehicle. A design case of an efficiency-optimal synchronous buck converter with constraints on volume, voltage ripple, and current ripple is provided. At the end of this article, the feasibility and accuracy of the proposed AI-D approach are validated by hardware experiments.
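For illustration, the sketch below shows a minimal genetic-algorithm search over converter parameters driven by a surrogate model; the surrogate and constraint callables stand in for the trained BN-NN models, and the population size, selection, and mutation settings are assumptions rather than the article's configuration.

```python
import numpy as np

def ga_design(surrogate, bounds, constraints, pop=40, gens=60, rng=None):
    """Tiny genetic algorithm over converter parameters (e.g. L, C, f_sw).
    'surrogate' stands in for a data-driven efficiency model; 'constraints'
    returns True when ripple/volume limits are satisfied."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        fit = np.array([surrogate(p) if constraints(p) else -np.inf for p in x])
        parents = x[np.argsort(fit)[-pop // 2:]]               # keep the best half
        kids = parents[rng.integers(len(parents), size=pop // 2)].copy()
        kids += rng.normal(0.0, 0.05 * (hi - lo), kids.shape)  # Gaussian mutation
        x = np.clip(np.vstack([parents, kids]), lo, hi)
    fit = np.array([surrogate(p) if constraints(p) else -np.inf for p in x])
    return x[np.argmax(fit)]
```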
Conventional methods for object detection typically require a substantial amount of training data, and preparing such high-quality training data is very labor-intensive. In this paper, we propose a novel few-shot object detection network that aims to detect objects of unseen categories with only a few annotated examples. Central to our method are our Attention-RPN, Multi-Relation Detector, and Contrastive Training strategy, which exploit the similarity between the few-shot support set and query set to detect novel objects while suppressing false detections in the background. To train our network, we contribute a new dataset that contains 1000 categories of various objects with high-quality annotations. To the best of our knowledge, this is one of the first datasets specifically designed for few-shot object detection. Once our few-shot network is trained, it can detect objects of unseen categories without further training or fine-tuning. Our method is general and has a wide range of potential applications. We achieve new state-of-the-art performance on different datasets in the few-shot setting. The dataset link is //github.com/fanq15/Few-Shot-Object-Detection-Dataset.
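As a rough illustration of the attention mechanism, the sketch below computes a depth-wise correlation between a pooled support feature and the query feature map; the 1x1 pooling and the tensor shapes are simplifying assumptions, not the published Attention-RPN implementation.

```python
import torch
import torch.nn.functional as F

def attention_rpn_map(query_feat, support_feat):
    """Depth-wise correlation between a pooled support feature and the query
    feature map, producing an attention map that highlights query regions
    similar to the support class (an illustrative sketch of the idea).

    query_feat   : (1, C, H, W) backbone feature of the query image
    support_feat : (1, C, h, w) backbone feature of a support image
    """
    C = query_feat.shape[1]
    # Pool the support feature to a 1x1 kernel per channel.
    kernel = F.adaptive_avg_pool2d(support_feat, 1).view(C, 1, 1, 1)
    # Depth-wise (grouped) correlation with the query feature map.
    attn = F.conv2d(query_feat, kernel, groups=C)   # (1, C, H, W)
    return attn.relu()
```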
We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
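For illustration, the sketch below builds a simple shared span representation by concatenating span endpoint vectors; SciIE's actual span encoding is richer than this endpoint-only stand-in.

```python
import torch

def span_representations(token_reprs, spans):
    """Build shared span representations by concatenating the start- and
    end-token vectors of each span (a simplified stand-in for a shared span
    encoder reused by entity, relation, and coreference heads).

    token_reprs : (T, d) contextualized token vectors for one sentence
    spans       : list of (start, end) token indices, end inclusive
    """
    reps = [torch.cat([token_reprs[s], token_reprs[e]]) for s, e in spans]
    return torch.stack(reps)   # (num_spans, 2 * d), shared across all three tasks
```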
In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such network architectures typically requires pre-defined label sequences. For multi-label classification, it is desirable to have a robust inference process, so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by adapting the technique of beam search, our proposed network model can efficiently predict multiple labels.
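To illustrate the inference step, the sketch below runs a small beam search over label sequences using a caller-supplied per-step probability function; the step function, the optional stop label, and the beam settings are assumptions standing in for the attention + LSTM decoder described above.

```python
import numpy as np

def beam_search_labels(step_prob, num_labels, beam=3, max_len=5, stop=None):
    """Beam search over label sequences for multi-label prediction.
    'step_prob(seq)' returns a probability vector over the label set given the
    labels predicted so far; 'stop' is an optional end-of-prediction label
    index assumed to lie within the label set."""
    beams = [([], 0.0)]                                    # (label sequence, log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if stop is not None and seq and seq[-1] == stop:
                candidates.append((seq, score))            # keep finished hypotheses
                continue
            probs = step_prob(seq)
            for lbl in range(num_labels):
                if lbl in seq:
                    continue                               # labels are not repeated
                candidates.append((seq + [lbl], score + np.log(probs[lbl] + 1e-12)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam]
    return set(beams[0][0]) - ({stop} if stop is not None else set())
```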