
This paper addresses the problem of task assignment and trajectory generation for installing bird diverters using a fleet of multi-rotors. The proposed solution extends our previous motion planner to compute feasible and constrained trajectories, considering payload capacity limitations and recharging constraints. Signal Temporal Logic (STL) specifications are employed to encode the mission objectives and temporal requirements. Additionally, an event-based replanning strategy is introduced to handle unforeseen failures. An energy minimization term is also employed to implicitly save multi-rotor flight time during installation operations. The effectiveness and validity of the approach are demonstrated through simulations in MATLAB and Gazebo, as well as field experiments carried out in a mock-up scenario.
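The abstract mentions Signal Temporal Logic without detail; as a hedged illustration, the sketch below evaluates the standard STL quantitative (robustness) semantics for an "eventually reach" requirement of the kind such a mission might encode. The function names, toy trajectory, and thresholds are hypothetical, not taken from the paper.

```python
import numpy as np

def pred_robustness(traj, goal, eps):
    """Robustness of the predicate ||p_t - goal|| < eps at each time step."""
    return eps - np.linalg.norm(traj - goal, axis=1)

def eventually(rho, a, b):
    """Robustness of F_[a,b] phi: the max of phi's robustness over the window."""
    return np.max(rho[a:b + 1])

def always(rho, a, b):
    """Robustness of G_[a,b] phi: the min of phi's robustness over the window."""
    return np.min(rho[a:b + 1])

# Toy mission: the multi-rotor must come within 0.5 m of a diverter
# location at some step in [0, 20]. Positive robustness => satisfied.
traj = np.linspace([0.0, 0.0], [5.0, 5.0], 21)
rho = pred_robustness(traj, goal=np.array([5.0, 5.0]), eps=0.5)
print("F_[0,20] robustness:", eventually(rho, 0, 20))
```

A planner of the kind described typically maximizes such robustness values subject to dynamics and resource constraints, so that a positive optimum certifies the mission specification.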

Related Content

MATLAB, short for Matrix Laboratory, is commercial mathematics software produced by the US company MathWorks. It is a high-level technical computing language and interactive environment suited to algorithm development, data visualization, data analysis, and numerical computation, and it consists of two main components: MATLAB and Simulink.

Local-remote systems allow robots to execute complex tasks in hazardous environments such as space and nuclear power stations. However, establishing accurate positional mapping between local and remote devices can be difficult due to time delays that compromise system performance and stability. Enhancing the synchronicity and stability of local-remote systems is vital for enabling robots to interact with environments at greater distances and under highly challenging network conditions, including time delays. We introduce an adaptive control method employing reinforcement learning to tackle the time-delayed control problem. By adjusting controller parameters in real time, this adaptive controller compensates for stochastic delays and improves synchronicity between local and remote robotic manipulators. To improve the adaptive PD controller's performance, we devise a model-based reinforcement learning approach that effectively incorporates multi-step delays into the learning framework. Using this technique, the local-remote system's performance is stabilized for stochastic communication time delays of up to 290 ms. Our results demonstrate that the proposed model-based reinforcement learning method surpasses the Soft Actor-Critic and augmented-state Soft Actor-Critic techniques. Access the code at: //github.com/CAV-Research-Lab/Predictive-Model-Delay-Correction
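As a rough sketch of the adaptive PD idea described above, the class below exposes its gains for online retuning by an external policy; the interface, names, and default values are assumptions for illustration, not the authors' implementation.

```python
class AdaptivePD:
    """PD controller whose gains can be retuned online, e.g., by an RL agent
    compensating for the current communication delay (illustrative sketch)."""

    def __init__(self, kp=1.0, kd=0.1):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def set_gains(self, kp, kd):
        # In the setting the abstract describes, an RL policy would output
        # these each step as the stochastic delay varies.
        self.kp, self.kd = kp, kd

    def control(self, target, measured, dt):
        error = target - measured
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * d_error
```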

Anticipating the motion of other road users is crucial for automated driving systems (ADS), as it enables safe and informed downstream decision-making and motion planning. Unfortunately, contemporary learning-based approaches to motion prediction exhibit significant performance degradation as the prediction horizon increases or the observation window decreases. This paper proposes a novel technique for trajectory prediction that combines a data-driven learning-based method with a velocity vector field (VVF) generated from a nature-inspired concept, namely fluid flow dynamics. The vector field is incorporated as an additional input to a convolutional-recurrent deep neural network to help predict the most likely future trajectories given a sequence of bird's-eye-view scene representations. The performance of the proposed model is compared with state-of-the-art methods on the HighD dataset, demonstrating that the VVF inclusion improves prediction accuracy for both short- and long-term (5 s) horizons. The accuracy also remains consistent as the observation window shrinks, alleviating the need for a long history of past observations to predict trajectories accurately. Source code is available at: //github.com/Amir-Samadi/VVF-TP.
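To make the input arrangement concrete, here is a hedged PyTorch sketch of a convolutional-recurrent predictor that consumes the VVF as two extra channels stacked onto the BEV rasters; channel counts, layer sizes, class name, and horizon are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class VVFTrajectoryNet(nn.Module):
    """BEV + velocity-vector-field trajectory predictor (illustrative)."""

    def __init__(self, bev_channels=3, hidden=128, horizon=25):
        super().__init__()
        self.horizon = horizon
        # +2 input channels for the (vx, vy) components of the vector field.
        self.encoder = nn.Sequential(
            nn.Conv2d(bev_channels + 2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon * 2)

    def forward(self, bev_seq, vvf_seq):
        # bev_seq: (B, T, C, H, W); vvf_seq: (B, T, 2, H, W)
        b, t = bev_seq.shape[:2]
        x = torch.cat([bev_seq, vvf_seq], dim=2).flatten(0, 1)
        feats = self.encoder(x).view(b, t, -1)   # per-frame features
        _, h = self.rnn(feats)                   # h: (1, B, hidden)
        return self.head(h[-1]).view(b, self.horizon, 2)  # future (x, y)
```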


Multilingual intelligent assistants, such as ChatGPT, have recently gained popularity. To further expand the applications of multilingual artificial intelligence assistants and facilitate international communication, it is essential to enhance the performance of multilingual speech recognition, a crucial component of speech interaction. In this paper, we propose two simple, parameter-efficient methods, language prompt tuning and a frame-level language adapter, to enhance language-configurable and language-agnostic multilingual speech recognition, respectively. Additionally, we explore the feasibility of integrating these two approaches using parameter-efficient fine-tuning methods. Our experiments demonstrate significant performance improvements across seven languages using the proposed methods.
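Of the two methods, the frame-level adapter is the easier to sketch: a small residual bottleneck applied to per-frame encoder outputs, trained per language while the backbone stays frozen. The module below is a generic adapter sketch with assumed sizes, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FrameLevelLanguageAdapter(nn.Module):
    """Residual bottleneck adapter applied to per-frame encoder outputs;
    one such module would be trained per language (illustrative sketch)."""

    def __init__(self, d_model=512, bottleneck=64):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, frames):  # frames: (B, T, d_model)
        # The residual keeps the pretrained path intact; only the small
        # adapter weights are updated, hence parameter efficiency.
        return frames + self.up(torch.relu(self.down(self.norm(frames))))
```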

Robotic systems for manipulation at the millimeter scale often use a camera with high magnification for visual feedback of the target region. However, the limited field of view (FoV) of the microscopic camera necessitates camera motion to capture a broader workspace environment. In this work, we propose an autonomous robotic control method that constrains a robot-held camera to a designated FoV. Furthermore, we model the camera extrinsics as part of the kinematic model and use camera measurements coupled with U-Net-based tool tracking to adapt the complete robotic model during task execution. As a proof-of-concept demonstration, the proposed framework was evaluated in a bi-manual setup in which the microscopic camera was controlled to view a tool moving along a pre-defined trajectory. The proposed method kept the tool within the real FoV 99.5% of the time, compared to 48.1% without the proposed adaptive control.
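A hedged sketch of the FoV-keeping behavior: command a corrective camera velocity only when the tracked tool drifts toward the image border. The names, dead-zone margin, and gain are hypothetical; the paper's controller additionally folds the camera extrinsics into the robot's kinematic model.

```python
import numpy as np

def fov_keeping_velocity(tool_px, image_size, margin=0.2, gain=0.5):
    """Image-plane velocity command that re-centers the tool once it nears
    the border of the field of view (illustrative sketch)."""
    center = np.asarray(image_size) / 2.0
    offset = (np.asarray(tool_px) - center) / center  # normalized to [-1, 1]
    # Dead zone: stay still while the tool is comfortably inside the FoV.
    return np.where(np.abs(offset) > 1.0 - margin, gain * offset, 0.0)

# Tool near the right edge: only the horizontal component reacts.
print(fov_keeping_velocity((620, 240), image_size=(640, 480)))
```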

The Metaverse is a new paradigm that aims to create a virtual environment consisting of numerous worlds, each of which will offer a different set of services. To deal with such a dynamic and complex scenario, given the stringent quality-of-service requirements targeted by sixth-generation (6G) communication systems, one potential approach is to adopt self-sustaining strategies. These can be realized by employing Adaptive Artificial Intelligence (Adaptive AI), where models are continually retrained with new data and conditions. One aspect of self-sustainability is the management of multiple access to the frequency spectrum. Although several innovative methods have been proposed to address this challenge, mostly using Deep Reinforcement Learning (DRL), the problem of adapting agents to a non-stationary environment has not yet been precisely addressed. This paper fills this gap in the literature by investigating multiple access in multi-channel environments with the goal of maximizing the throughput of an intelligent agent when the number of active User Equipments (UEs) may fluctuate over time. To solve the problem, a Double Deep Q-Learning (DDQL) technique empowered by Continual Learning (CL) is proposed to cope with the non-stationary, unknown environment. Numerical simulations demonstrate that, compared to other well-known methods, the CL-DDQL algorithm achieves significantly higher throughput with considerably shorter convergence time in highly dynamic scenarios.
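The DDQL core the abstract relies on is standard: the online network selects the next action while the target network evaluates it, curbing Q-value overestimation. Below is a hedged sketch of that target computation; the function and argument names are assumptions, not code from the paper.

```python
import torch

def ddql_target(reward, next_obs, done, online_net, target_net, gamma=0.99):
    """Double DQN bootstrap target (illustrative).

    done: float tensor, 1.0 where the episode terminated, else 0.0.
    """
    with torch.no_grad():
        best_action = online_net(next_obs).argmax(dim=1, keepdim=True)
        next_q = target_net(next_obs).gather(1, best_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

In the continual-learning variant described above, the agent would keep updating against such targets as the number of active UEs drifts, rather than training once on a fixed environment.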

Recommender systems (RS) have achieved significant success by leveraging explicit identification (ID) features. However, the full potential of content features, especially pure image pixel features, remains relatively unexplored. The limited availability of large, diverse, content-driven image recommendation datasets has hindered the use of raw images as item representations. In this regard, we present PixelRec, a massive image-centric recommendation dataset that includes approximately 200 million user-image interactions, 30 million users, and 400,000 high-quality cover images. By providing direct access to raw image pixels, PixelRec enables recommendation models to learn item representations directly from them. To demonstrate its utility, we begin by presenting the results of several classical pure ID-based baseline models, termed IDNet, trained on PixelRec. Then, to show the effectiveness of the dataset's image features, we replace the itemID embeddings (from IDNet) with a powerful vision encoder that represents items using their raw image pixels. This new model is dubbed PixelNet. Our findings indicate that even in standard, non-cold-start recommendation settings where IDNet is recognized as highly effective, PixelNet performs equally well or even better than IDNet. Moreover, PixelNet has several other notable advantages over IDNet, such as being more effective in cold-start and cross-domain recommendation scenarios. These results underscore the importance of visual features in PixelRec. We believe that PixelRec can serve as a critical resource and testing ground for research on recommendation models that emphasize image pixel content. The dataset, code, and leaderboard will be available at //github.com/westlake-repl/PixelRec.
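The substitution the abstract describes, swapping the itemID embedding table for a vision encoder over raw cover images, can be sketched as follows; the backbone choice and output dimension are assumptions for illustration, not the released PixelNet code.

```python
import torch.nn as nn
from torchvision.models import resnet18

class PixelItemEncoder(nn.Module):
    """Replaces an itemID embedding lookup with a vision backbone that
    embeds items from their raw cover-image pixels (illustrative sketch)."""

    def __init__(self, dim=64):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, cover_images):  # (B, 3, H, W) raw pixels
        return self.backbone(cover_images)  # (B, dim) item representations
```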

This study investigates the outage performance of an underlaying wireless-powered secondary system that reuses the primary users' (PUs') spectrum in a multiple-input multiple-output (MIMO) cognitive radio (CR) network. Each secondary user (SU) harvests energy and receives information simultaneously by applying the power splitting (PS) protocol. The communication between SUs is aided by a two-way (TW) decode-and-forward (DF) relay. We formulate a problem to design the PS ratios at the SUs, the power control factor at the secondary relay, and the beamforming matrices at all nodes to minimize the secondary network's outage probability. To address this problem, we propose a two-step solution. The first step establishes closed-form expressions for the PS ratio at each SU and the secondary relay's power control factor. In the second step, interference alignment (IA) is used to design proper precoding and decoding matrices that manage the interference between the secondary and primary networks. We choose the IA matrices based on the minimum mean square error (MMSE) iterative algorithm. The simulation results demonstrate a significant decrease in the outage probability of the proposed scheme compared to the benchmark schemes, with an average reduction of more than two orders of magnitude.
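For context, the standard power-splitting relations the abstract builds on can be sketched as follows, with assumed notation (splitting ratio $\rho$, energy-conversion efficiency $\eta$, transmit power $P$, channel gain $h$, block duration $T$, antenna and conversion noise powers $\sigma_a^2$, $\sigma_c^2$); these are the textbook PS equations, not expressions derived in the paper.

```latex
% A fraction \rho of the received power is harvested; the rest is decoded.
\begin{align}
  E_h    &= \eta\, \rho\, P\, |h|^2\, T, \\
  \gamma &= \frac{(1-\rho)\, P\, |h|^2}{(1-\rho)\,\sigma_a^2 + \sigma_c^2}
\end{align}
```

The design problem in the abstract then amounts to choosing $\rho$ at each SU, together with the relay's power control factor and the IA matrices, so that the resulting SNRs minimize the outage probability.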

Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state-of-the-art RotatE. However, N-1, 1-N and N-N predictions still remain challenging. In this work, we propose a novel translational distance-based approach for knowledge graph link prediction. The proposed method is two-fold: first, we extend RotatE from the 2D complex domain to a high-dimensional space using orthogonal transforms to model relations, for greater modeling capacity. Second, the graph context is explicitly modeled via two directed context representations. These context representations are used as part of the distance scoring function to measure the plausibility of triples during training and inference. The proposed approach effectively improves prediction accuracy on the difficult N-1, 1-N and N-N cases of the knowledge graph link prediction task. The experimental results show that it outperforms the RotatE baseline on two benchmark data sets, especially on FB15k-237, which has many high in-degree nodes.
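For reference, RotatE's scoring function and the orthogonal generalization the abstract hints at can be written as follows; the notation for the extension is assumed for illustration, not taken from the paper.

```latex
% RotatE: rotate h elementwise in the complex plane, each relation
% coordinate constrained to the unit circle.
\begin{equation}
  f_r(h, t) = -\lVert h \circ r - t \rVert, \qquad |r_i| = 1 .
\end{equation}
% Sketch of the high-dimensional extension: a relation-specific
% orthogonal transform R_r replaces the elementwise rotation.
\begin{equation}
  f_r(h, t) = -\lVert R_r\, h - t \rVert, \qquad R_r^{\top} R_r = I .
\end{equation}
```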

Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to much-studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes than conventional KBs (18x more nodes in ATOMIC than in Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures, a major challenge for existing KB completion methods that assume densely connected graphs over a relatively small set of nodes. In this paper, we present novel KB completion models that address these challenges by exploiting the structural and semantic context of nodes. Specifically, we investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification, and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe how our method incorporates information from both sources in a joint model and provide the first empirical results for KB completion on ATOMIC and an evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency. Further analysis of model predictions sheds light on the types of commonsense knowledge that language models capture well.
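The local-graph-structure idea can be sketched with a single graph-convolution layer in which each node mixes its neighbors' features through a normalized adjacency (here over the densified graph); this is a generic GCN sketch, not the paper's exact model.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One GCN layer: aggregate neighbor features via a row-normalized
    adjacency, then transform (illustrative sketch)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # adj: (N, N) adjacency including self-loops.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear((adj / deg) @ node_feats))
```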
