Mobile robots rely on SLAM (Simultaneous Localization and Mapping) for autonomous navigation and task execution in complex, unknown environments. However, it is difficult to develop a dedicated algorithm for mobile robots under dynamic and challenging conditions, such as poor lighting and motion blur. To tackle this issue, we propose a tightly-coupled LiDAR-visual SLAM system based on geometric features, which comprises two subsystems (LiDAR and monocular visual SLAM) and a fusion framework. The fusion framework associates the depth and semantics of the multi-modal geometric features to complement the visual line landmarks and adds direction optimization to Bundle Adjustment (BA), further constraining the visual odometry. Conversely, the entire line segments detected by the visual subsystem overcome the limitation of the LiDAR subsystem, which can only compute geometric features locally: they adjust the direction of linear feature points and filter out outliers, yielding a more accurate odometry system. Finally, we employ a module that monitors the subsystems' operation and provides the LiDAR subsystem's output as a complementary trajectory when the visual subsystem's tracking fails. Evaluation results on the public M2DGR dataset, gathered by ground robots across various indoor and outdoor scenarios, show that our system achieves more accurate and robust pose estimation than current state-of-the-art multi-modal methods.
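To make the depth-association step concrete, here is a minimal sketch (not the paper's implementation; the function names, the 3-pixel gate, and the SVD line fit are illustrative assumptions) of how LiDAR points projected into the image could lend a 3D direction to a visual 2D line segment, which a direction term in BA could then compare against a LiDAR line landmark:

```python
import numpy as np

def associate_depth_to_line(seg_2d, lidar_pts_cam, K, px_thresh=3.0):
    """seg_2d: (2,2) endpoints in pixels; lidar_pts_cam: (N,3) points in the
    camera frame; K: (3,3) intrinsics. Returns a unit 3D direction or None."""
    # Project LiDAR points into the image plane.
    uv = (K @ lidar_pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    # Distance of each projection to the infinite line through the segment.
    p0, p1 = seg_2d
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    dist = np.abs((uv - p0) @ n)
    near = lidar_pts_cam[dist < px_thresh]
    if len(near) < 2:
        return None  # not enough depth support; keep the 2D line only
    # Fit a 3D direction to the supporting points (principal component).
    centered = near - near.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0] / np.linalg.norm(vt[0])

def direction_residual(dir_visual, dir_lidar):
    # Angle between the visual line's 3D direction and the LiDAR landmark
    # direction; an extra cost term of this form could enter BA.
    c = np.clip(abs(dir_visual @ dir_lidar), 0.0, 1.0)
    return np.arccos(c)
```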
The design of Wireless Networked Control Systems (WNCSs) requires addressing critical interactions between the control and communication systems with minimal complexity and communication overhead while providing ultra-high reliability. This paper introduces a novel optimization-theory-based deep reinforcement learning (DRL) framework for the joint design of the controller and communication systems. The framework targets minimum power consumption while satisfying the schedulability and rate constraints of the communication system in the finite blocklength regime and the stability constraint of the control system. The decision variables are the sampling period in the control system, and the blocklength and packet error probability in the communication system. The proposed framework consists of two stages: optimization theory and DRL. In the optimization theory stage, after the joint optimization problem is formulated, optimality conditions are derived to establish mathematical relations between the optimal values of the decision variables. These relations allow the problem to be decomposed into multiple building blocks. In the DRL stage, the blocks that are simplified but remain intractable are replaced by DRL. Extensive simulations demonstrate that the proposed optimization-theory-based DRL approach outperforms pure optimization theory and pure DRL approaches, achieving close-to-optimal performance with much lower complexity.
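As a toy illustration of the decomposition idea (under assumptions of our own: the normal approximation of the finite-blocklength rate and arbitrary example numbers), a derived relation can pin down the minimum blocklength for a given error probability, so the learning agent only searches over the remaining variables:

```python
import math
from statistics import NormalDist

def min_blocklength(snr, payload_bits, eps):
    """Smallest m satisfying payload_bits <= m*C - sqrt(m*V)*Q^{-1}(eps),
    the normal approximation of the finite-blocklength rate (bits).
    A closed-form relation like this removes m from the DRL search space."""
    C = math.log2(1.0 + snr)                                    # capacity, bits/use
    V = (1.0 - 1.0 / (1.0 + snr) ** 2) * math.log2(math.e) ** 2  # channel dispersion
    qinv = NormalDist().inv_cdf(1.0 - eps)                      # Q^{-1}(eps)
    for m in range(1, 20000):
        if m * C - math.sqrt(m * V) * qinv >= payload_bits:
            return m
    return None

# The remaining, non-tractable blocks (e.g., jointly choosing eps and the
# sampling period) are where the DRL agent would act.
print(min_blocklength(snr=4.0, payload_bits=256, eps=1e-5))
```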
Reconfigurable Intelligent Surface (RIS) is envisioned to be an enabling technique in 6G wireless communications. By configuring the reflection beamforming codebook, RIS focuses signals on target receivers to enhance signal strength. In this paper, we investigate the codebook configuration for RIS-aided communication systems. We formulate an implicit relationship between the user's coordinate information and the codebook from the perspective of signal radiation mechanisms, and introduce a novel learning-based method, implicit neural representations (INRs), to solve this implicit coordinates-to-codebook mapping problem. Our approach requires only the user's coordinates, avoiding reliance on channel models. Additionally, given the significant practical applications of the 1-bit RIS, we formulate the 1-bit codebook configuration as a multi-label classification problem, and propose an encoding strategy for 1-bit RIS that reduces the codebook dimension, thereby improving learning efficiency. Experimental results on both simulations and measured data demonstrate significant advantages of our method.
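A minimal sketch of what such a coordinates-to-codebook INR could look like (the Fourier-feature encoding, layer sizes, and 64-element RIS are assumptions for illustration, not the paper's architecture): the network maps a user's 3D position to per-element logits of the 1-bit codebook and is trained as multi-label classification.

```python
import torch
import torch.nn as nn

class CodebookINR(nn.Module):
    def __init__(self, n_elements=64, n_freqs=8):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs) * torch.pi)
        in_dim = 3 * 2 * n_freqs  # sin/cos per frequency per coordinate
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_elements),  # one logit per RIS element
        )

    def forward(self, xyz):                       # xyz: (B, 3) user positions
        ang = xyz.unsqueeze(-1) * self.freqs      # (B, 3, n_freqs)
        enc = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(1)
        return self.net(enc)                      # sigmoid > 0.5 -> bit 1

model = CodebookINR()
coords = torch.rand(32, 3)                        # dummy coordinates
target = torch.randint(0, 2, (32, 64)).float()    # dummy 1-bit codewords
loss = nn.BCEWithLogitsLoss()(model(coords), target)
loss.backward()
```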
To support the running of human-centric metaverse applications on mobile devices, Unmanned Aerial Vehicle (UAV)-assisted Wireless Powered Mobile Edge Computing (WPMEC) is a promising way to compensate for the limited computational capabilities and energy supplies of mobile devices. The high-speed computational processing demands and significant energy consumption of metaverse applications require joint resource scheduling of multiple devices and UAVs, but existing WPMEC solutions address either device or UAV scheduling due to the complexity of the combinatorial optimization. To solve this challenge, we propose a two-stage alternating optimization algorithm based on multi-task Deep Reinforcement Learning (DRL) to jointly allocate charging time, schedule computation tasks, and optimize UAV trajectories for both UAVs and mobile devices in a wireless-powered metaverse scenario. First, considering the energy constraints of both UAVs and mobile devices, we formulate an optimization problem to maximize the computation efficiency of the system. Second, we propose a heuristic algorithm to efficiently perform time allocation and charging scheduling for mobile devices. Following this, we design a multi-task DRL scheme to make charging scheduling and trajectory design decisions for UAVs. Finally, theoretical analysis and performance results demonstrate that our algorithm exhibits significant advantages over representative methods in terms of convergence speed and average computation efficiency.
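To give a flavor of the heuristic stage, the following sketch (the frame structure, the half-frame charging cap, and all parameter names are our own assumptions, not the paper's exact rules) splits one frame between downlink charging, proportional to each device's energy deficit, and TDMA offloading in shortest-task-first order:

```python
import numpy as np

def allocate_frame(T, harvest_rate, e_need, e_have, offload_rate, bits):
    """T: frame length (s); harvest_rate[i]: W harvested while charging;
    e_need/e_have: J per device; offload_rate: bits/s; bits: task sizes."""
    deficit = np.maximum(e_need - e_have, 0.0)
    t_charge_each = deficit / np.maximum(harvest_rate, 1e-9)
    t_charge = min(t_charge_each.sum(), 0.5 * T)   # cap charging at half frame
    # Scale individual charging slots to fit within the cap.
    t_charge_each *= t_charge / max(t_charge_each.sum(), 1e-9)
    # Remaining time for TDMA offloading, shortest tasks first.
    t_left = T - t_charge
    order = np.argsort(bits / offload_rate)
    t_off = np.zeros(len(bits))
    for i in order:
        grant = min(bits[i] / offload_rate[i], t_left)
        t_off[i], t_left = grant, t_left - grant
    return t_charge_each, t_off
```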
Future wireless networks, in particular 5G and beyond, are anticipated to deploy dense Low Earth Orbit (LEO) satellites to provide global coverage and broadband connectivity. However, the limited frequency band and the coexistence of multiple constellations bring new challenges for interference management. In this paper, we propose a robust multilayer interference management scheme for spectrum sharing in heterogeneous satellite networks with statistical channel state information (CSI) at the transmitter (CSIT) and receivers (CSIR). In the proposed scheme, Rate-Splitting Multiple Access (RSMA), a general and powerful framework for interference management and multiple access strategies, is implemented in a distributed manner at Geostationary Earth Orbit (GEO) and LEO satellites, coined Distributed-RSMA (D-RSMA). By doing so, D-RSMA aims to mitigate the interference and boost the user fairness of the overall multilayer satellite system. Specifically, we study the problem of jointly optimizing the GEO/LEO precoders and message splits to maximize the minimum rate among User Terminals (UTs) subject to a transmit power constraint at all satellites. A robust algorithm is proposed to solve the original non-convex optimization problem. Numerical results demonstrate the effectiveness of our proposed D-RSMA scheme and its robustness to network load and CSI uncertainty. Benefiting from its interference management capability, D-RSMA provides significant max-min fairness gains compared to several benchmark schemes.
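For readers unfamiliar with the RSMA rate model underlying the max-min problem, this numerical sketch (random placeholder channels and precoders; the actual scheme optimizes them, and the GEO/LEO layering is omitted) computes the common rate, which must be decodable by all users, and the private rates after successive interference cancellation:

```python
import numpy as np

rng = np.random.default_rng(0)
K, Nt, sigma2 = 3, 4, 1.0                      # users, tx antennas, noise power
H = (rng.normal(size=(K, Nt)) + 1j * rng.normal(size=(K, Nt))) / np.sqrt(2)
p_c = rng.normal(size=Nt) + 1j * rng.normal(size=Nt)          # common precoder
P = rng.normal(size=(Nt, K)) + 1j * rng.normal(size=(Nt, K))  # private precoders

def rsma_rates(H, p_c, P, sigma2):
    g_c = np.abs(H @ p_c) ** 2                 # common-stream power at each user
    g_p = np.abs(H @ P) ** 2                   # (K, K): user k vs stream j
    interf = g_p.sum(axis=1)                   # all private streams as noise
    R_c = np.log2(1 + g_c / (interf + sigma2)) # common rate each user supports
    own = np.diag(g_p)
    R_k = np.log2(1 + own / (interf - own + sigma2))  # private rates after SIC
    return R_c.min(), R_k                      # common rate: decodable by all

R_common, R_private = rsma_rates(H, p_c, P, sigma2)
# With common-rate shares c_k (sum <= R_common), the max-min objective is
# min_k (c_k + R_private[k]); splits and precoders are jointly optimized.
```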
This study focuses on a novel task in text-to-image (T2I) generation, namely action customization. The objective of this task is to learn the co-existing action from limited data and generalize it to unseen humans or even animals. Experimental results show that existing subject-driven customization methods fail to learn the representative characteristics of actions and struggle to decouple actions from context features, including appearance. To overcome the preference for low-level features and the entanglement of high-level features, we propose an inversion-based method, Action-Disentangled Identifier (ADI), to learn action-specific identifiers from the exemplar images. ADI first expands the semantic conditioning space by introducing layer-wise identifier tokens, thereby increasing the representational richness while distributing the inversion across different features. Then, to block the inversion of action-agnostic features, ADI extracts the gradient invariance from the constructed sample triples and masks the updates of irrelevant channels. To comprehensively evaluate the task, we present ActionBench, a benchmark that includes a variety of actions, each accompanied by meticulously selected samples. Both quantitative and qualitative results show that our ADI outperforms existing baselines in action-customized T2I generation.
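As one possible reading of the gradient-masking step (our interpretation for illustration only; the quantile threshold and channel-agreement criterion are assumptions, not ADI's exact rule), channels whose gradients stay invariant across samples sharing the same action but differing in context would be kept, and the rest masked before the update:

```python
import torch

def mask_irrelevant_channels(g_same_action, g_diff_context, keep_ratio=0.5):
    """Both args: gradients of the identifier embedding, same shape.
    Channels where the two gradients agree are taken as action-relevant."""
    agreement = -(g_same_action - g_diff_context).abs()  # higher = more invariant
    k = int(keep_ratio * agreement.numel())
    # Threshold at the k-th largest agreement value.
    thresh = agreement.flatten().kthvalue(agreement.numel() - k + 1).values
    mask = (agreement >= thresh).float()
    return g_same_action * mask                  # masked update direction

g1 = torch.randn(768)   # grad from a sample with (action A, context X)
g2 = torch.randn(768)   # grad from a sample with (action A, context Y)
masked_grad = mask_irrelevant_channels(g1, g2)
```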
Autonomous mobile robots (AMRs) play a crucial role in transportation and service tasks at hospitals, contributing to enhanced efficiency and meeting medical demands. This paper investigates the optimization of scheduling strategies for AMRs at smart hospitals, where the service and travel times of AMRs are stochastic. A stochastic mixed-integer programming model is formulated to minimize the total cost of the hospital by reducing the number of AMRs and the travel distance while satisfying constraints such as AMR battery state of charge, AMR capacity, and time windows for medical requests. To address this objective, several properties of solutions under time window constraints are identified, and the variable neighborhood search (VNS) algorithm is adapted to the AMR scheduling problem by incorporating these properties. Experimental results demonstrate that VNS generates high-quality solutions. By intelligently arranging the driving routes of AMRs for both charging and service requests, the approach enhances efficiency and meets medical demands, resulting in substantial cost reductions for hospitals and better utilization of medical resources.
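The adapted VNS follows the standard shake/local-search/move skeleton; the sketch below shows that generic loop (the cost function, neighborhood moves, and feasibility checks for battery, capacity, and time windows are problem-specific stubs left abstract here):

```python
def vns(init_routes, cost, neighborhoods, local_search, max_iter=1000):
    """neighborhoods: list of functions, each returning a random perturbation
    of a solution within the k-th neighborhood (a 'shake')."""
    best, best_cost, it = init_routes, cost(init_routes), 0
    while it < max_iter:
        k = 0
        while k < len(neighborhoods) and it < max_iter:
            candidate = neighborhoods[k](best)       # shake in neighborhood k
            candidate = local_search(candidate)      # descend to local optimum
            c = cost(candidate)
            if c < best_cost:                        # improvement: restart at k=0
                best, best_cost, k = candidate, c, 0
            else:
                k += 1                               # try a larger neighborhood
            it += 1
    return best, best_cost
```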
We present a novel Graph-based debiasing Algorithm for Underreported Data (GRAUD), aimed at efficient joint estimation of event counts and discovery probabilities across spatial or graphical structures. This method provides a solution to problems seen in fields such as policing data and COVID-19 data analysis. Our approach avoids the need for the strong priors typically associated with Bayesian frameworks. By leveraging the graph structures on the unknown variables $n$ and $p$, our method debiases the underreported data and estimates the discovery probability at the same time. We validate the effectiveness of our method through simulation experiments and illustrate its practicality in one real-world application: police 911 calls-for-service data.
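Conceptually, if observed counts are modeled as $y_i \sim \mathrm{Binomial}(n_i, p_i)$ and graph smoothness of $n$ and $p$ substitutes for strong priors, a joint estimate can be obtained by penalized fitting. The toy below (a moment-matching loss and plain gradient descent on continuous relaxations, a simplification of whatever objective GRAUD actually optimizes) illustrates the structure:

```python
import numpy as np

def fit(y, L, lam_n=1.0, lam_p=1.0, steps=2000, lr=1e-2):
    """y: observed counts per node; L: graph Laplacian. Minimizes
    0.5*(n*p - y)^2 + Laplacian smoothness penalties on log n and logit p."""
    log_n = np.log(np.maximum(y, 1.0))          # init: n at least the observed y
    logit_p = np.zeros(len(y))
    for _ in range(steps):
        n = np.exp(log_n)
        p = 1.0 / (1.0 + np.exp(-logit_p))
        resid = n * p - y                       # moment-matching residual
        # Chain rule through the reparameterizations, plus smoothness terms.
        g_log_n = resid * n * p + lam_n * (L @ log_n)
        g_logit_p = resid * n * p * (1 - p) + lam_p * (L @ logit_p)
        log_n -= lr * g_log_n
        logit_p -= lr * g_logit_p
    return np.exp(log_n), 1.0 / (1.0 + np.exp(-logit_p))
```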
The emergent abilities of Large Language Models (LLMs), which power tools like ChatGPT and Bard, have produced both excitement and worry about how AI will impact academic writing. In response to rising concerns about AI use, authors of academic publications may decide to voluntarily disclose any AI tools they use to revise their manuscripts, and journals and conferences could begin mandating disclosure and/or turn to detection services, as many teachers have done with student writing in class settings. Given these looming possibilities, we investigate whether academics view it as necessary to report AI use in manuscript preparation, and how detectors react to the use of AI in academic writing.
The vast amounts of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscore the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, owing to the need for edge computation and to licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling temporal dynamics modeling on devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring modest communication cost.
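Structurally, the device/server split can be pictured as follows (a simplified sketch with assumed sizes and plain mean-neighbor aggregation, not CNFGNN's exact architecture): each node encodes its own series locally and uploads only a hidden embedding, which the server mixes across the graph.

```python
import torch
import torch.nn as nn

class NodeTemporalEncoder(nn.Module):           # runs on each device
    def __init__(self, d_in=1, d_h=64):
        super().__init__()
        self.gru = nn.GRU(d_in, d_h, batch_first=True)
    def forward(self, x):                       # x: (1, T, d_in) local series
        _, h = self.gru(x)
        return h[-1]                            # (1, d_h) embedding to upload

class ServerGraphLayer(nn.Module):              # runs on the server
    def __init__(self, d_h=64):
        super().__init__()
        self.lin = nn.Linear(2 * d_h, d_h)
    def forward(self, H, adj):                  # H: (N, d_h), adj: (N, N)
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        neigh = adj @ H / deg                   # mean over graph neighbors
        return torch.relu(self.lin(torch.cat([H, neigh], dim=-1)))

# One communication round: devices upload embeddings, server mixes them
# across the graph, and the result is sent back for local decoding.
N, T = 5, 12
encoders = [NodeTemporalEncoder() for _ in range(N)]
series = [torch.randn(1, T, 1) for _ in range(N)]
H = torch.cat([enc(x) for enc, x in zip(encoders, series)], dim=0)  # (N, 64)
adj = (torch.rand(N, N) > 0.5).float()
H_spatial = ServerGraphLayer()(H, adj)
```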
Few-shot Knowledge Graph (KG) completion is a focus of current research, where each task aims at querying unseen facts of a relation given its few-shot reference entity pairs. Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., that entities may exhibit diverse roles within task relations and references may make different contributions to queries. This work proposes an adaptive attentional network for few-shot KG completion that learns adaptive entity and reference representations. Specifically, entities are modeled by an adaptive neighbor encoder to discern their task-oriented roles, while references are modeled by an adaptive query-aware aggregator to differentiate their contributions. Through the attention mechanism, both entities and references capture their fine-grained semantic meanings and thus render more expressive representations, which is more predictive for knowledge acquisition in the few-shot scenario. Evaluation on link prediction over two public datasets shows that our approach achieves new state-of-the-art results across different few-shot sizes.
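The query-aware aggregation can be sketched as a small attention module (the dimensions, the bilinear score, and all variable names are placeholders of ours): each reference pair is weighted by its learned similarity to the query, so references contribute unequally to the relation representation.

```python
import torch
import torch.nn as nn

class QueryAwareAggregator(nn.Module):
    def __init__(self, d=100):
        super().__init__()
        self.score = nn.Bilinear(d, d, 1)        # similarity(query, reference)
    def forward(self, refs, query):
        # refs: (K, d) few-shot reference-pair embeddings; query: (d,)
        q = query.expand(refs.size(0), -1)       # repeat query for each reference
        att = torch.softmax(self.score(q, refs).squeeze(-1), dim=0)  # (K,)
        return (att.unsqueeze(-1) * refs).sum(0) # adaptive relation embedding

agg = QueryAwareAggregator()
refs = torch.randn(3, 100)                       # K = 3 reference pairs
query = torch.randn(100)
rel = agg(refs, query)                           # used to score the query fact
```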