Six-dimensional movable antenna (6DMA) is an innovative technology for improving wireless network capacity by adjusting the 3D positions and 3D rotations of antenna surfaces based on the channel spatial distribution. However, existing works on 6DMA have assumed a central processing unit (CPU) that jointly processes the signals of all 6DMA surfaces to execute various tasks, which inevitably incurs prohibitively high processing cost for channel estimation. Therefore, we propose a distributed 6DMA processing architecture that reduces the processing complexity of the CPU by equipping each 6DMA surface with a local processing unit (LPU). In particular, we unveil for the first time a new \textbf{\textit{directional sparsity}} property of 6DMA channels, whereby each user has significant channel gains only for a (small) subset of 6DMA position-rotation pairs, namely those that can receive direct/reflected signals from the user. In addition, we propose a practical three-stage protocol for the 6DMA-equipped base station (BS) to conduct statistical channel state information (CSI) acquisition for all 6DMA candidate positions/rotations, 6DMA position/rotation optimization, and instantaneous channel estimation for user data transmission with the optimized 6DMA positions/rotations. Specifically, the directional sparsity is leveraged to develop distributed algorithms for joint sparsity detection and channel power estimation, as well as for directional sparsity-aided instantaneous channel estimation. Using the estimated channel power, we develop a channel power-based optimization algorithm that maximizes the ergodic sum rate of the users by optimizing the antenna positions/rotations. Simulation results show that our channel estimation algorithms achieve higher accuracy than benchmark schemes while requiring lower pilot overhead, and that our optimization outperforms fluid/movable antennas optimized only in two dimensions (2D), even when the latter have perfect instantaneous CSI.
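To make the directional sparsity idea concrete, the following minimal Python sketch illustrates how a per-user channel power estimate over candidate 6DMA position-rotation pairs could be thresholded to detect the small active subset. All function names, the thresholding rule, and the synthetic data are illustrative assumptions, not the distributed algorithm proposed in the paper.

```python
import numpy as np

def detect_directional_sparsity(pilot_obs, noise_power, threshold_factor=3.0):
    """Toy illustration of directional sparsity detection (hypothetical, not the
    paper's algorithm): for each (user, 6DMA position-rotation pair), estimate
    the average channel power from pilot observations and keep only the pairs
    whose power clearly exceeds the noise floor.

    pilot_obs: array of shape (n_users, n_pairs, n_snapshots) holding received
               pilot samples per candidate position-rotation pair.
    """
    # Empirical channel power per user and candidate pair, noise-corrected.
    power = np.mean(np.abs(pilot_obs) ** 2, axis=-1) - noise_power
    power = np.clip(power, 0.0, None)
    # A pair is "active" for a user if its power stands out from the noise.
    support = power > threshold_factor * noise_power
    return power, support

# Example with synthetic data: 4 users, 16 candidate pairs, 32 snapshots;
# each user only illuminates a few pairs (the directional-sparsity pattern).
rng = np.random.default_rng(0)
n_users, n_pairs, n_snap = 4, 16, 32
gains = np.zeros((n_users, n_pairs))
for u in range(n_users):
    gains[u, rng.choice(n_pairs, size=3, replace=False)] = rng.uniform(1.0, 4.0, 3)
noise = 0.05
obs = (np.sqrt(gains)[..., None] * rng.standard_normal((n_users, n_pairs, n_snap))
       + np.sqrt(noise) * rng.standard_normal((n_users, n_pairs, n_snap)))
power, support = detect_directional_sparsity(obs, noise)
print(support.sum(axis=1))  # roughly 3 active pairs detected per user
```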
Given a network with an ongoing epidemic, the network immunization problem seeks to identify a fixed number of nodes to immunize so as to maximize the number of infections prevented. One of the fundamental computational challenges in network immunization is that the objective function is generally neither submodular nor supermodular. As a result, no efficient algorithm is known to consistently find a solution with a constant approximation guarantee. Traditionally, this problem is addressed through proxy objectives, which offer better approximation properties, but converting to these indirect optimizations often sacrifices effectiveness. In this paper, we overcome these fundamental barriers by exploiting the underlying stochastic structure of the diffusion process. As with the traditional influence objective, the immunization objective is an expectation that can be expressed as a sum of objectives over deterministic instances; unlike the former, however, some of these terms are not submodular. The key step is proving that this sum has a bounded deviation from submodularity, which enables the greedy algorithm to achieve a constant-factor approximation. We show that this approximation guarantee continues to hold across a variety of immunization settings and spread models.
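For intuition, here is a minimal Python sketch of the kind of greedy immunization procedure the analysis supports: sample deterministic instances of the spread process, then repeatedly immunize the node whose removal most reduces the average number of infections across those samples. The independent-cascade-style sampling, the helper names, and the parameters are illustrative assumptions rather than the paper's exact algorithm or guarantees.

```python
import random
import networkx as nx

def sample_live_edge_graphs(G, p, n_samples, rng):
    """Sample deterministic instances of an independent-cascade-style process:
    each edge survives independently with probability p (illustrative model)."""
    samples = []
    for _ in range(n_samples):
        H = nx.Graph()
        H.add_nodes_from(G)
        H.add_edges_from(e for e in G.edges if rng.random() < p)
        samples.append(H)
    return samples

def infections(H, seeds, removed):
    """Number of infected nodes in instance H when `removed` are immunized."""
    H = H.copy()
    H.remove_nodes_from(removed)
    reached = set()
    for s in (s for s in seeds if s in H):
        reached |= nx.node_connected_component(H, s)
    return len(reached)

def greedy_immunize(G, seeds, budget, p=0.1, n_samples=50, seed=0):
    """Greedy immunization (hypothetical sketch): repeatedly immunize the node
    whose removal most reduces the average number of infections across the
    sampled deterministic instances."""
    rng = random.Random(seed)
    samples = sample_live_edge_graphs(G, p, n_samples, rng)
    chosen = set()
    for _ in range(budget):
        base = sum(infections(H, seeds, chosen) for H in samples)
        best, best_gain = None, -1
        for v in G.nodes:
            if v in chosen or v in seeds:
                continue
            new = sum(infections(H, seeds, chosen | {v}) for H in samples)
            if base - new > best_gain:
                best, best_gain = v, base - new
        chosen.add(best)
    return chosen

# Usage: immunize 3 nodes against an epidemic seeded at node 0.
G = nx.barabasi_albert_graph(200, 2, seed=1)
print(greedy_immunize(G, seeds=[0], budget=3))
```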
A common limitation of autonomous tissue manipulation in robotic minimally invasive surgery (MIS) is the absence of force sensing and control at the tool level. Recently, our team developed miniature force-sensing forceps that can simultaneously measure the grasping and pulling forces during tissue manipulation. Building on this design, we here present a method to automate tissue traction, which comprises grasping and pulling stages. During this process, the grasping and pulling forces can be controlled either separately or simultaneously through force decoupling. The force controller is built upon a static model of tissue manipulation that accounts for the interaction between the force-sensing forceps and soft tissue. The efficacy of this force control approach is validated through a series of experiments comparing targeted, estimated, and actual reference forces. To verify the feasibility of the proposed method in surgical applications, various tissue resections are conducted on ex vivo tissues using a dual-arm robotic setup. Finally, we discuss the benefits of multi-force control in tissue traction, evidenced by comparative analyses of ex vivo tissue resections performed with and without the proposed method, as well as its potential generalization to traction on different tissues. The results affirm the feasibility of implementing automatic tissue traction using miniature forceps with multi-force control, suggesting its potential to promote autonomous MIS. A video demonstrating the experiments can be found at //youtu.be/f5gXuXe67Ak.
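As a rough illustration of decoupled multi-force control, the sketch below runs two independent PI loops, one for the grasping force and one for the pulling force, against a toy spring-like tissue model. The gains, the tissue model, and the interfaces are hypothetical and are not derived from the static manipulation model used in the paper.

```python
import numpy as np

class DecoupledForceController:
    """Toy sketch of decoupled grasp/pull force control (illustrative only).
    Two independent PI loops track the grasping and pulling force targets
    measured by the force-sensing forceps."""

    def __init__(self, kp=(0.8, 0.8), ki=(2.0, 2.0), dt=0.01):
        self.kp = np.asarray(kp, dtype=float)   # proportional gains [grasp, pull]
        self.ki = np.asarray(ki, dtype=float)   # integral gains    [grasp, pull]
        self.dt = dt
        self.integral = np.zeros(2)

    def step(self, target, measured):
        """target, measured: arrays [grasping force, pulling force] in newtons.
        Returns commanded actuator rates (jaw closing, tool retraction)."""
        error = np.asarray(target) - np.asarray(measured)
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Simple simulated run: the tissue behaves like a stiff spring in both channels.
ctrl = DecoupledForceController()
stiffness = np.array([5.0, 3.0])          # hypothetical N per actuation unit
position = np.zeros(2)
target = np.array([1.5, 0.8])             # desired grasping/pulling forces [N]
for _ in range(500):
    measured = stiffness * position
    position += ctrl.step(target, measured) * ctrl.dt
print(np.round(stiffness * position, 3))  # converges toward [1.5, 0.8]
```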
Millimeter-wave (mmWave) communication is a vital component of future generations of mobile networks, offering not only high data rates but also precise beams, making it ideal for indoor navigation in complex environments. However, the challenges of multipath propagation and noisy signal measurements in indoor spaces complicate the use of mmWave signals for navigation tasks. Traditional physics-based methods, such as following the angle of arrival (AoA), often fall short in complex scenarios, highlighting the need for more sophisticated approaches. Digital twins, as virtual replicas of physical environments, offer a powerful tool for simulating and optimizing mmWave signal propagation in such settings. By creating detailed, physics-based models of real-world spaces, digital twins enable the training of machine learning algorithms in virtual environments, reducing the costs and limitations of physical testing. Despite their advantages, current machine learning models trained in digital twins often overfit specific virtual environments and require costly retraining when applied to new scenarios. In this paper, we propose a Physics-Informed Reinforcement Learning (PIRL) approach that leverages the physical insights provided by digital twins to shape the reinforcement learning (RL) reward function. By integrating physics-based metrics such as signal strength, AoA, and path reflections into the learning process, PIRL enables efficient learning and improved generalization to new environments without retraining. Our experiments demonstrate that the proposed PIRL, supported by digital twin simulations, outperforms traditional heuristics and standard RL models, achieving zero-shot generalization in unseen environments and offering a cost-effective, scalable solution for wireless indoor navigation.
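To illustrate how physics-based metrics might shape the RL reward, the following Python sketch combines received signal strength, heading alignment with the AoA, and a penalty on reflected paths into a single scalar reward. The weights, the specific terms, and the function signature are assumptions for illustration, not the paper's reward design.

```python
import numpy as np

def physics_informed_reward(rss_dbm, aoa_deg, heading_deg, n_reflections,
                            goal_reached, w=(0.01, 1.0, 0.2), goal_bonus=10.0):
    """Illustrative sketch of a PIRL-style shaped reward (weights and terms are
    hypothetical). The agent is rewarded for stronger mmWave signal and for
    heading along the angle of arrival, and is penalized for relying on heavily
    reflected (multipath) components."""
    w_rss, w_aoa, w_ref = w
    # Signal-strength term: stronger received power is better.
    r_rss = w_rss * rss_dbm
    # AoA-alignment term: cosine of the angle between heading and AoA.
    r_aoa = w_aoa * np.cos(np.deg2rad(aoa_deg - heading_deg))
    # Multipath penalty: each extra reflection degrades the geometric cue.
    r_ref = -w_ref * n_reflections
    return r_rss + r_aoa + r_ref + (goal_bonus if goal_reached else 0.0)

# Example: a line-of-sight step toward the access point vs. a reflected detour.
print(physics_informed_reward(-65.0, aoa_deg=10.0, heading_deg=12.0,
                              n_reflections=0, goal_reached=False))
print(physics_informed_reward(-80.0, aoa_deg=150.0, heading_deg=12.0,
                              n_reflections=2, goal_reached=False))
```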
A neural network-based machine learning potential energy surface (PES) expressed in a matrix product operator (NN-MPO) is proposed. The MPO form enables efficient evaluation of high-dimensional integrals that arise in solving the time-dependent and time-independent Schr\"odinger equation and effectively overcomes the so-called curse of dimensionality. This starkly contrasts with other neural network-based machine learning PES methods, such as multi-layer perceptrons (MLPs), where evaluating high-dimensional integrals is not straightforward due to the fully connected topology in their backbone architecture. Nevertheless, the NN-MPO retains the high representational capacity of neural networks. NN-MPO can achieve spectroscopic accuracy with a test mean absolute error (MAE) of 3.03 cm$^{-1}$ for a fully coupled six-dimensional ab initio PES, using only 625 training points distributed across a 0 to 17,000 cm$^{-1}$ energy range. Our Python implementation is available at //github.com/KenHino/Pompon.
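To show why the matrix-product form makes high-dimensional integrals tractable, the generic tensor-train sketch below evaluates a multi-dimensional function as a chain of small matrix contractions and reduces its full-dimensional integral to one 1D quadrature per coordinate. The random cores, the monomial basis, and all names are illustrative assumptions; this is not the NN-MPO model or the Pompon implementation.

```python
import numpy as np

# Generic tensor-train (matrix-product) sketch, not the NN-MPO or Pompon code.
rng = np.random.default_rng(0)
ndim, nbasis, rank = 6, 8, 4                 # coordinates, 1D basis size, bond dimension
bond = [1] + [rank] * (ndim - 1) + [1]
cores = [rng.standard_normal((bond[k], nbasis, bond[k + 1])) * 0.3
         for k in range(ndim)]               # random TT cores W_k[r, b, r']

def basis(q):
    """Hypothetical 1D basis: monomials q^0, ..., q^(nbasis-1)."""
    return q ** np.arange(nbasis)

def evaluate(qs):
    """V(q_1,...,q_f) as a chain of small matrix products (cost linear in ndim)."""
    vec = np.ones(1)
    for core, q in zip(cores, qs):
        vec = vec @ np.einsum('rbs,b->rs', core, basis(q))
    return float(vec[0])

def integrate(nodes_1d, weights_1d):
    """Full-dimensional integral of V via one 1D quadrature per coordinate:
    each core is contracted with the basis moments, so no exponentially large
    grid is ever constructed."""
    phi = np.array([basis(x) for x in nodes_1d])   # (n_nodes, nbasis)
    moments = weights_1d @ phi                     # moments[b] = sum_i w_i phi_b(x_i)
    vec = np.ones(1)
    for core in cores:
        vec = vec @ np.einsum('rbs,b->rs', core, moments)
    return float(vec[0])

# Point evaluation and the integral over [-1, 1]^6 with Gauss-Legendre quadrature.
nodes, weights = np.polynomial.legendre.leggauss(16)
print(evaluate(np.full(ndim, 0.3)))
print(integrate(nodes, weights))
```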
Deep learning has proven to be a powerful tool for addressing the most significant issues in cognitive radio networks, such as spectrum sensing, spectrum sharing, resource allocation, and security attacks. The utilization of deep learning techniques in cognitive radio networks can significantly enhance the network's capability to adapt to changing environments and improve the overall system's efficiency and reliability. As the demand for higher data rates and connectivity increases, B5G/6G wireless networks are expected to enable a wide range of new services and applications. The importance of deep learning in addressing cognitive radio network challenges therefore cannot be overstated. This review article provides valuable insights into potential solutions that can serve as a foundation for the development of future B5G/6G services. By leveraging the power of deep learning, cognitive radio networks can pave the way for the next generation of wireless networks, capable of meeting the ever-increasing demands for higher data rates, improved reliability, and security.
Graph neural networks (GNNs) are effective machine learning models for many graph-related applications. Despite their empirical success, many research efforts focus on the theoretical limitations of GNNs, i.e., their expressive power. Early works in this domain mainly study the graph isomorphism recognition ability of GNNs, while recent works leverage properties such as subgraph counting and connectivity learning to characterize the expressive power of GNNs, which are more practical and closer to real-world applications. However, no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill the gap, we conduct the first survey of models that enhance GNN expressive power under its different forms of definition. Concretely, the models are reviewed in three categories: graph feature enhancement, graph topology enhancement, and GNN architecture enhancement.
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions within this expanding field.
Face recognition technology has advanced significantly in recent years due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications. These applications require lower resolution, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.
With the extremely rapid advances in remote sensing (RS) technology, a great quantity of Earth observation (EO) data featuring considerable and complicated heterogeneity is now readily available, offering researchers an opportunity to tackle current geoscience applications in a fresh way. Through the joint utilization of EO data, research on multimodal RS data fusion has made tremendous progress in recent years, yet these traditional algorithms inevitably hit a performance bottleneck because they lack the ability to comprehensively analyse and interpret such strongly heterogeneous data. This limitation has created an intense demand for an alternative tool with more powerful processing capability. Deep learning (DL), as a cutting-edge technology, has achieved remarkable breakthroughs in numerous computer vision tasks owing to its impressive ability in data representation and reconstruction. Naturally, it has been successfully applied to the field of multimodal RS data fusion, yielding great improvements over traditional methods. This survey aims to present a systematic overview of DL-based multimodal RS data fusion. More specifically, some essential knowledge about this topic is first given. Subsequently, a literature survey is conducted to analyse the trends of this field. Some prevalent sub-fields in multimodal RS data fusion are then reviewed in terms of the to-be-fused data modalities, i.e., spatiospectral, spatiotemporal, light detection and ranging-optical, synthetic aperture radar-optical, and RS-Geospatial Big Data fusion. Furthermore, we collect and summarize some valuable resources to support further development in multimodal RS data fusion. Finally, the remaining challenges and potential future directions are highlighted.
Traffic forecasting is an important factor for the success of intelligent transportation systems. Deep learning models, including convolutional neural networks and recurrent neural networks, have been applied in traffic forecasting problems to model the spatial and temporal dependencies. In recent years, to model the graph structures in transportation systems as well as the contextual information, graph neural networks (GNNs) have been introduced as new tools and have achieved state-of-the-art performance in a series of traffic forecasting problems. In this survey, we review the rapidly growing body of recent research using different GNNs, e.g., graph convolutional and graph attention networks, in various traffic forecasting problems, e.g., road traffic flow and speed forecasting, passenger flow forecasting in urban rail transit systems, demand forecasting in ride-hailing platforms, etc. We also present a collection of open data and source code resources for each problem, as well as future research directions. To the best of our knowledge, this paper is the first comprehensive survey exploring the application of graph neural networks to traffic forecasting problems. We have also created a public GitHub repository to update the latest papers, open data, and source resources.