The era of ubiquitous, affordable wireless connectivity has opened the door to countless practical applications. In this context, ambient backscatter communication (AmBC) stands out: passive tags communicate with readers by reflecting ambient radio frequency (RF) signals. However, conventional data detectors are limited by their inadequate knowledge of channel and RF-source parameters. To address this challenge, we propose a deep neural network (DNN) approach for channel state information (CSI) estimation and signal detection in AmBC systems. Unlike traditional methods that separate CSI estimation from data detection, our approach uses a single DNN to implicitly estimate CSI and detect data simultaneously. The DNN model, trained offline on simulated data derived from channel statistics, recovers data effectively online, ensuring robust performance in practical scenarios. Comprehensive evaluations validate the superiority of the proposed DNN method over traditional detectors in terms of bit error rate (BER). Under high signal-to-noise ratio (SNR) conditions, our method achieves approximately a 20% BER improvement over the maximum likelihood (ML) approach. These results underscore the effectiveness of the proposed approach for AmBC channel estimation and signal detection, bolstering the reliability and efficiency of AmBC systems even in challenging channel conditions.
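To make the pipeline concrete, here is a minimal sketch of a DNN detector trained offline on channel-statistics-based simulated data, assuming on/off tag modulation over a flat-fading channel; the architecture, layer sizes, and training loop are illustrative stand-ins, not the paper's exact model.

```python
# Sketch of a joint implicit-CSI / data-detection DNN for AmBC (illustrative).
import torch
import torch.nn as nn

class AmBCDetector(nn.Module):
    def __init__(self, n_samples=64):
        super().__init__()
        # Input: magnitudes of n_samples received samples within one tag symbol.
        self.net = nn.Sequential(
            nn.Linear(n_samples, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for the tag bit (0: absorb, 1: reflect)
        )

    def forward(self, y):
        return self.net(y)

def simulate_batch(batch=256, n=64, snr_db=10.0):
    """Generate received-signal magnitudes from assumed channel statistics."""
    bits = torch.randint(0, 2, (batch, 1)).float()
    h_direct = torch.randn(batch, 1, dtype=torch.cfloat)             # source-to-reader link
    h_backscatter = 0.3 * torch.randn(batch, 1, dtype=torch.cfloat)  # weaker via-tag link
    s = torch.randn(batch, n, dtype=torch.cfloat)                    # ambient RF source
    noise = (10 ** (-snr_db / 20)) * torch.randn(batch, n, dtype=torch.cfloat)
    y = (h_direct + bits * h_backscatter) * s + noise
    return y.abs(), bits

model = AmBCDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(2000):  # offline training; online use is a single forward pass
    y, bits = simulate_batch()
    loss = loss_fn(model(y), bits)
    opt.zero_grad(); loss.backward(); opt.step()
```

Note that no explicit channel estimate ever appears: the network maps received-sample statistics straight to bit decisions, which is the sense in which CSI is estimated implicitly.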
Rollback recovery strategies are well known in concurrent and distributed systems. In this context, recovering from unexpected failures is all the more relevant given the non-deterministic nature of execution, which makes it practically impossible to foresee all possible process interactions. In this work, we consider a message-passing concurrent programming language in which processes interact through message sending and receiving, with no shared memory. In this setting, we design a checkpoint-based rollback recovery strategy that does not require central coordination. For this purpose, we extend the language with three new operators: check, commit, and rollback. Furthermore, our approach is purely asynchronous, an essential ingredient for developing a source-to-source program instrumentation that implements the rollback recovery strategy.
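The following sketch conveys one plausible local reading of the three operators on a single process: `check` snapshots the state, `commit` discards the snapshot, and `rollback` restores it. This is our own simplified illustration; the actual strategy is distributed, asynchronous, and must also handle causally dependent processes.

```python
# Toy single-process illustration of check/commit/rollback semantics (assumed).
import copy

class Process:
    def __init__(self, state):
        self.state = state
        self.mailbox = []        # messages received since the last checkpoint
        self.checkpoint = None

    def check(self):
        # Take a checkpoint: snapshot the state and start logging messages.
        self.checkpoint = copy.deepcopy(self.state)
        self.mailbox = []

    def receive(self, msg):
        self.mailbox.append(msg)  # logged so effects can be undone on rollback
        self.state.append(msg)

    def commit(self):
        self.checkpoint = None    # checkpoint no longer needed

    def rollback(self):
        # Restore the checkpointed state; in the full strategy this would also
        # propagate asynchronously to processes causally affected by us.
        self.state = self.checkpoint
        self.mailbox = []

p = Process(state=[])
p.check()
p.receive("m1")
p.rollback()
assert p.state == []
```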
A longitudinal network consists of a sequence of temporal edges among multiple nodes, where the temporal edges are observed in real time. Such networks have become ubiquitous with the rise of online social platforms and e-commerce, but remain largely under-investigated in the literature. In this paper, we propose an efficient estimation framework for longitudinal networks, leveraging the strengths of adaptive network merging, tensor decomposition, and point processes. It merges neighboring sparse networks to enlarge the number of observed edges and reduce estimation variance, while the estimation bias introduced by network merging is controlled by exploiting local temporal structures to adaptively choose the network neighborhood. A projected gradient descent algorithm is proposed to facilitate estimation, and an upper bound on the estimation error at each iteration is established. A thorough analysis quantifies the asymptotic behavior of the proposed method, showing that it can significantly reduce the estimation error, and provides guidelines for network merging under various scenarios. We further demonstrate the advantages of the proposed method through extensive numerical experiments on synthetic datasets and a militarized interstate dispute dataset.
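A minimal sketch of the merge-then-project idea, simplified from the tensor setting to a matrix one: neighboring sparse snapshots are averaged to reduce variance, and projected gradient descent with a truncated-SVD projection enforces low rank. The loss, rank, and step size are illustrative choices, not the paper's algorithm.

```python
# Projected gradient descent on a merged network, low-rank matrix version (sketch).
import numpy as np

def project_rank(M, r):
    """Project onto rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def pgd_low_rank(A, r=3, lr=0.5, iters=200):
    Theta = project_rank(A, r)           # spectral initialization
    for _ in range(iters):
        grad = Theta - A                 # gradient of 0.5 * ||Theta - A||_F^2
        Theta = project_rank(Theta - lr * grad, r)
    return Theta

rng = np.random.default_rng(0)
# Merge 5 neighboring sparse snapshots of a 50-node network to reduce variance;
# a wider merge lowers variance but raises bias if the network drifts in time.
snapshots = rng.binomial(1, 0.05, size=(5, 50, 50))
A_merged = snapshots.mean(axis=0)
Theta_hat = pgd_low_rank(A_merged)
```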
Image composition refers to inserting a foreground object into a background image to obtain a composite image. In this work, we focus on generating plausible shadows for the inserted foreground object to make the composite image more realistic. To supplement the existing small-scale dataset, we create a large-scale dataset called RdSOBA using rendering techniques. Moreover, we design a two-stage network named DMASNet with decomposed mask prediction and attentive shadow filling. Specifically, in the first stage, we decompose shadow mask prediction into box prediction and shape prediction. In the second stage, we attend to reference background shadow pixels to fill the foreground shadow. Extensive experiments demonstrate that DMASNet achieves better visual effects and generalizes well to real composite images.
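The schematic below mirrors the two-stage decomposition described above, with hypothetical module shapes: a box head and a shape head in stage one, and cross-attention over background shadow features in stage two. These are not DMASNet's actual layers, only an illustration of the structure.

```python
# Schematic two-stage shadow generation: decomposed mask prediction, then
# attentive shadow filling (module sizes are hypothetical).
import torch
import torch.nn as nn

class StageOneMask(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(4, 16, 3, 2, 1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.box_head = nn.Linear(32, 4)      # (x, y, w, h) of the shadow region
        self.shape_head = nn.Sequential(nn.Linear(32, 28 * 28), nn.Sigmoid())

    def forward(self, comp_and_fg_mask):      # composite image + foreground mask
        f = self.encoder(comp_and_fg_mask)
        box = self.box_head(f)
        shape = self.shape_head(f).view(-1, 1, 28, 28)  # mask inside the box
        return box, shape

class StageTwoFill(nn.Module):
    """Attend to background shadow pixels to color the predicted foreground shadow."""
    def __init__(self, dim=32):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, fg_feats, bg_shadow_feats):
        filled, _ = self.attn(fg_feats, bg_shadow_feats, bg_shadow_feats)
        return filled

box, shape = StageOneMask()(torch.randn(2, 4, 128, 128))
filled = StageTwoFill()(torch.randn(2, 64, 32), torch.randn(2, 100, 32))
```

The design choice worth noting is the decomposition itself: predicting where the shadow goes (box) separately from what it looks like (shape) simplifies each sub-task relative to predicting a full-resolution mask directly.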
Accurate trajectory tracking control for quadrotors is essential for safe navigation in cluttered environments but is challenging in agile flight due to nonlinear dynamics, complex aerodynamic effects, and actuation constraints. In this article, we empirically compare two state-of-the-art control frameworks, the nonlinear model predictive controller (NMPC) and the differential-flatness-based controller (DFBC), by tracking a wide variety of agile trajectories at speeds of up to 20 m/s (i.e., 72 km/h). The comparisons are performed in both simulation and real-world environments to systematically evaluate both methods in terms of tracking accuracy, robustness, and computational efficiency. We show the superiority of NMPC in tracking dynamically infeasible trajectories, at the cost of higher computation time and a risk of numerical convergence issues. For both methods, we also quantitatively study the effect of adding an inner-loop controller based on incremental nonlinear dynamic inversion (INDI) and the effect of adding an aerodynamic drag model. Our real-world experiments, performed in one of the world's largest motion capture systems, demonstrate a tracking error reduction of more than 78% for both NMPC and DFBC, indicating the necessity of using an inner-loop controller and an aerodynamic drag model for agile trajectory tracking.
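A worked sketch of the differential-flatness idea underlying DFBC: for a quadrotor, the reference collective thrust and body z-axis follow algebraically from the flat output (position and its derivatives). Drag compensation, attitude resolution with yaw, and the INDI inner loop are omitted; the mass value is an assumed example.

```python
# Flatness-based feedforward: reference acceleration -> thrust and body z-axis.
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity (m/s^2), world frame, z up
MASS = 0.8                        # quadrotor mass in kg (assumed for illustration)

def flatness_feedforward(acc_ref):
    """Map a reference acceleration from a flat trajectory to a collective
    thrust magnitude and the desired body z-axis direction."""
    f_des = MASS * (acc_ref - G)       # Newton: m*a = f_thrust + m*g
    thrust = np.linalg.norm(f_des)     # collective thrust command (N)
    body_z = f_des / thrust            # desired body z-axis (unit vector)
    return thrust, body_z

# Hover check: zero acceleration gives thrust = m*9.81 N and body_z = (0, 0, 1).
thrust, body_z = flatness_feedforward(np.array([2.0, 0.0, 0.0]))
```

Because this map is purely algebraic, DFBC is cheap to evaluate; NMPC instead solves a constrained optimization each cycle, which explains the accuracy-versus-computation trade-off reported above.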
Large antenna arrays can steer narrow beams toward a target area, improving both the communications capacity of wireless channels and the fidelity of radio sensing. Hardware capable of continuously variable phase shifts is expensive, presenting scaling challenges. PIN diodes that apply only discrete phase shifts are promising and cost-effective; however, unlike with continuous phase shifters, finding the best phase configuration across elements is an NP-hard optimization problem, so the complexity of optimization becomes a new bottleneck for large antenna arrays. To address this challenge, this paper proposes a procedure for converting the optimization objective from a ratio of quadratic functions into a sequence of more easily solvable quadratic unconstrained binary optimization (QUBO) sub-problems. This conversion is an exact equivalence, and the resulting QUBO forms are standard input formats for various physics-inspired optimization methods. We demonstrate that a simulated annealing approach is very effective for solving these sub-problems, and we report performance metrics for several large array types optimized by this technique. Through numerical experiments, we report 3D beamforming performance for extra-large arrays with up to 10,000 elements.
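For concreteness, here is a generic simulated annealing routine for a QUBO of the standard form min over x in {0,1}^n of x^T Q x. The random Q and the cooling schedule are illustrative; in the paper, each sub-problem's Q would come from the converted beamforming objective.

```python
# Simulated annealing on a QUBO with single-bit-flip moves and geometric cooling.
import numpy as np

def anneal_qubo(Q, n_steps=20000, t0=2.0, t1=1e-3, rng=np.random.default_rng(0)):
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    energy = x @ Q @ x
    for step in range(n_steps):
        T = t0 * (t1 / t0) ** (step / n_steps)   # geometric cooling schedule
        i = rng.integers(n)
        x_new = x.copy(); x_new[i] ^= 1          # propose a single-bit flip
        e_new = x_new @ Q @ x_new
        # Accept improvements always; accept uphill moves with Boltzmann probability.
        if e_new < energy or rng.random() < np.exp((energy - e_new) / T):
            x, energy = x_new, e_new
    return x, energy

rng = np.random.default_rng(1)
Q = rng.normal(size=(64, 64)); Q = (Q + Q.T) / 2  # symmetric toy QUBO matrix
x_best, e_best = anneal_qubo(Q)
```

For arrays with thousands of elements, incremental energy updates (recomputing only the terms touched by the flipped bit) replace the full x^T Q x evaluation; the full recomputation above is kept for readability.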
Generative artificial intelligence (GenAI) offers various services to users through content creation, which is expected to be one of the most important components of future networks. However, training and deploying big artificial intelligence models (BAIMs) introduces substantial computational and communication overhead. This poses a critical challenge to centralized approaches, due to the need for high-performance computing infrastructure and the reliability, secrecy, and timeliness issues of long-distance access to cloud services. There is therefore an urgent need to decentralize these services, partly moving them from the cloud to the edge and establishing native GenAI services that enable private, timely, and personalized experiences. In this paper, we propose a bottom-up BAIM architecture that synergizes a big cloud model with small edge models, and we design a distributed training framework and a task-oriented deployment scheme for the efficient provision of native GenAI services. The proposed framework facilitates collaborative intelligence, enhances adaptability, gathers edge knowledge, and alleviates the edge-cloud burden. Its effectiveness is demonstrated through an image generation use case. Finally, we outline fundamental research directions to fully exploit the collaborative potential of edge and cloud for native GenAI and BAIM applications.
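As a hypothetical illustration of bottom-up edge-cloud synergy, the sketch below shows one federated-averaging-style round on a toy least-squares task: edge models train locally and the cloud absorbs their averaged update. This is a generic stand-in for the idea of gathering edge knowledge, not the paper's actual training framework.

```python
# Toy bottom-up aggregation round: edge models train locally, cloud averages them.
import numpy as np

def local_update(weights, data, lr=0.01):
    """One local gradient step on an edge model (least-squares toy task)."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
dim = 8
cloud_weights = np.zeros(dim)
# Each edge node holds its own private data; raw data never leaves the edge.
edge_datasets = [(rng.normal(size=(32, dim)), rng.normal(size=32)) for _ in range(4)]

for round_ in range(10):
    # Edge: each small model starts from the cloud weights and trains locally.
    edge_weights = [local_update(cloud_weights.copy(), d) for d in edge_datasets]
    # Cloud: gather edge knowledge by averaging the local models (bottom-up).
    cloud_weights = np.mean(edge_weights, axis=0)
```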
During the last decade, wireless data services have had an incredible impact on people's lives in ways we could never have imagined. The number of mobile devices has increased exponentially, and data traffic has almost doubled every year. Undoubtedly, growth will continue to be rapid with the explosive increase in demands for data rates, latency, massive connectivity, network reliability, and energy efficiency. To manage this growth and meet these requirements, the fifth-generation (5G) mobile communications network is envisioned as a revolutionary advancement combining improvements over previous mobile generations with new technologies, including millimeter wavebands (mm-wave), massive multiple-input multiple-output (mMIMO) multi-beam antennas, network densification, dynamic Time Division Duplex (TDD) transmission, and new waveforms with mixed numerologies. New features such as terahertz (THz) communications and the integration of Non-Terrestrial Networks (NTN) can further improve performance and signal quality for future 6G networks. However, despite the benefits of these key technologies, the heterogeneous and ultra-flexible structure of 5G and beyond networks introduces non-orthogonality into the system and generates significant interference that must be handled carefully. It is therefore essential to design effective interference management schemes to mitigate severe, and sometimes unpredictable, interference in mobile networks. In this paper, we provide a comprehensive review of interference management in 5G and beyond networks and discuss its future evolution. We start with a unified classification and a detailed explanation of the different types of interference, and we continue by presenting our taxonomy of existing interference management approaches. Then, after explaining interference measurement reports and signaling, we provide, for each type of interference identified, an in-depth literature review and a technical discussion of appropriate management schemes. We finish by discussing the main interference challenges that will be encountered in future 6G networks and by presenting insights into suggested new interference management approaches, including useful guidelines for AI-based solutions. This review provides a first-hand guide for industry in determining the most relevant technology for interference management, and it also allows consideration of future challenges and research directions.
While LISA effectively bridges the gap between segmentation and large language models to enable reasoning segmentation, it has certain limitations: it cannot distinguish different instances of the target region, and it is constrained by pre-defined textual response formats. In this work, we introduce LISA++, an update to the existing LISA model that improves core functionalities while keeping the base architecture intact. The main enhancements in LISA++ are: \textbf{1) Enhanced Segmentation}: instance segmentation ability has been added, providing more detailed scene analysis alongside the existing multi-region semantic segmentation. \textbf{2) More Natural Conversation}: improved capability for multi-turn dialogue, with the ability to incorporate segmentation results directly into text responses, i.e., Segmentation in Dialogue (SiD). These improvements are achieved by curating existing samples from generic segmentation datasets, specifically to enhance segmentation and conversational skills without structural changes or additional data sources. Comparative analysis with the original LISA model shows significant advances in these areas, positioning LISA++ as a notable upgrade in visual understanding and interaction. The adaptability and improved features of LISA++ highlight the versatility of the mask-as-embedding paradigm proposed by LISA and its potential as a foundational model for diverse applications.
Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models, e.g., GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs, categorizing them by the explainable methods they use. We further present common performance metrics for GNN explanations and point out several future research directions.
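To illustrate the core idea behind perturbation-based explainers such as GNNExplainer, the sketch below learns a soft edge mask that keeps the model's prediction while staying sparse. The `model(x, edge_weight)` interface and the `ToyGNN` stand-in are hypothetical; real libraries ship ready-made explainer modules.

```python
# Edge-mask explanation sketch: optimize a mask to be faithful yet sparse.
import torch

def explain_edges(model, x, num_edges, target, steps=200, sparsity=0.05):
    mask_logits = torch.zeros(num_edges, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=0.05)
    for _ in range(steps):
        edge_weight = torch.sigmoid(mask_logits)        # soft edge mask in [0, 1]
        log_probs = model(x, edge_weight)
        # Faithfulness term (keep the target prediction) plus a sparsity penalty.
        loss = -log_probs[target] + sparsity * edge_weight.sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask_logits).detach()          # importance score per edge

class ToyGNN(torch.nn.Module):
    """Toy stand-in: class log-probabilities depend on the masked edge weights."""
    def __init__(self, num_edges, num_classes=3):
        super().__init__()
        self.w = torch.nn.Linear(num_edges, num_classes)
    def forward(self, x, edge_weight):
        return torch.log_softmax(self.w(edge_weight), dim=-1)

mask = explain_edges(ToyGNN(10), x=None, num_edges=10, target=1)
```

Edges with mask values near 1 form the explanatory subgraph; thresholding the mask yields the discrete explanation typically reported by such methods.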
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing memory requirements, energy consumption, and the number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems of the techniques in each category, and we also discuss new evaluation metrics as a guideline for future research.
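As a toy illustration of category (1), the sketch below applies global magnitude pruning and symmetric uniform 8-bit quantization to a raw weight matrix. Production frameworks (e.g., PyTorch's pruning and quantization utilities) implement far more refined versions with calibration and fine-tuning.

```python
# Category (1) toy sketch: magnitude pruning, then int8 weight quantization.
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    """Zero out the smallest-magnitude weights (here, 90% of them)."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_int8(w):
    """Symmetric uniform quantization of weights to 8-bit integers."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale  # dequantize with q * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
w_sparse = magnitude_prune(w)       # 90% of entries become exact zeros
q, scale = quantize_int8(w_sparse)  # 4x smaller storage than float32
```

Pruning and quantization compose naturally, as here: sparsity cuts the number of operations while quantization cuts the cost and memory of each remaining one, which is why surveys group them in a single category.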