Time-Sensitive Networking (TSN) is an emerging real-time Ethernet technology that provides deterministic communication for time-critical traffic. At its core, TSN relies on the Time-Aware Shaper (TAS) to pre-allocate frames to specific time intervals and on Per-Stream Filtering and Policing (PSFP) to mitigate the disruptive effects of unavoidable frame drift. However, as first identified in this work, PSFP incurs heavy memory consumption during policing, hindering normal switching functionality. This work proposes a lightweight policing design called FooDog, which achieves sub-microsecond jitter with ultra-low memory consumption. FooDog employs a period-wise and stream-wise structure to realize memory-efficient PSFP without loss of determinism. Results on commercial FPGAs in typical aerospace scenarios show that FooDog keeps end-to-end jitter of time-sensitive traffic below 150 nanoseconds in the presence of abnormal traffic, comparable to typical TSN performance without anomalies, while consuming merely hundreds of kilobits of memory and reducing on-chip memory overhead by more than 90% compared with an unoptimized PSFP design.
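As a hedged illustration of the period-wise, stream-wise idea, the following Python sketch models PSFP-style policing as a fold of each frame's arrival time into the current cycle followed by a per-stream window check; the cycle length, stream IDs, and window bounds are hypothetical and do not reflect FooDog's actual hardware design.

```python
# Minimal sketch (not the authors' RTL) of per-stream, period-wise policing.
# All constants below are hypothetical illustration values.

CYCLE_NS = 1_000_000  # hypothetical 1 ms TAS cycle

# One (open_ns, close_ns) window per stream within each cycle; the period-wise
# view stores only the offset within the cycle, not absolute timestamps, which
# is what keeps the memory footprint small.
stream_windows = {
    1: (0, 150_000),
    2: (200_000, 350_000),
}

def police(stream_id: int, arrival_ns: int) -> bool:
    """Accept a frame only if it arrives inside its stream's allocated window."""
    window = stream_windows.get(stream_id)
    if window is None:                  # unknown stream: treat as abnormal traffic
        return False
    offset = arrival_ns % CYCLE_NS      # period-wise: fold into the current cycle
    open_ns, close_ns = window
    return open_ns <= offset < close_ns

assert police(1, 2 * CYCLE_NS + 100_000)      # inside stream 1's window
assert not police(2, 2 * CYCLE_NS + 100_000)  # outside stream 2's window
```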
Cross-Domain Sequential Recommendation (CDSR) methods aim to tackle the data sparsity and cold-start problems present in Single-Domain Sequential Recommendation (SDSR). Existing CDSR works design elaborate structures that rely on overlapping users to propagate cross-domain information. However, current CDSR methods make closed-world assumptions: they assume fully overlapping users across multiple domains and an unchanged data distribution from the training environment to the test environment. As a result, these methods typically suffer degraded performance on real-world online platforms due to data distribution shifts. To address these challenges under open-world assumptions, we design an \textbf{A}daptive \textbf{M}ulti-\textbf{I}nterest \textbf{D}ebiasing framework for cross-domain sequential recommendation (\textbf{AMID}), which consists of a multi-interest information module (\textbf{MIM}) and a doubly robust estimator (\textbf{DRE}). Our framework is adaptive to open-world environments and can improve most off-the-shelf single-domain sequential backbone models for CDSR. MIM establishes interest groups that consider both overlapping and non-overlapping users, allowing us to effectively explore user intent and explicit interest. To alleviate biases across multiple domains, we develop the DRE for CDSR methods. We also provide a theoretical analysis demonstrating that our proposed estimator is superior to the IPS estimator used in previous work in terms of bias and tail bound.
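To make the estimator comparison concrete, the sketch below contrasts a generic inverse propensity scoring (IPS) estimator with a generic doubly robust one on synthetic data; the propensities, losses, and imputation model are placeholders, and the exact DRE formulation of AMID is not reproduced here.

```python
# Illustrative sketch of a generic doubly robust (DR) estimator of the kind
# the abstract contrasts with IPS; all quantities are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
observed = rng.random(n) < 0.3                      # o_i: feedback observed?
propensity = np.full(n, 0.3)                        # p_i: observation probability
true_loss = rng.random(n)                           # e_i: per-sample loss
imputed_loss = true_loss + rng.normal(0, 0.1, n)    # \hat{e}_i from an error model

# IPS: reweight the observed losses by inverse propensities.
ips = np.mean(observed * true_loss / propensity)

# DR: use the imputation everywhere, then correct it on the observed samples;
# it stays unbiased if either the propensities or the imputations are accurate.
dr = np.mean(imputed_loss + observed * (true_loss - imputed_loss) / propensity)

print(f"IPS: {ips:.4f}  DR: {dr:.4f}  truth: {true_loss.mean():.4f}")
```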
A near-field wideband communication system is investigated in which a base station (BS) employs an extra-large scale antenna array (ELAA) to serve multiple users in its near-field region. To facilitate near-field multi-user beamforming and mitigate the spatial wideband effect, the BS employs a hybrid beamforming architecture based on true-time delayers (TTDs). In addition to the conventional fully-connected TTD-based hybrid beamforming architecture, a new sub-connected architecture is proposed to improve energy efficiency and reduce hardware requirements. Two wideband beamforming optimization approaches are proposed to maximize spectral efficiency for both architectures. 1) The fully-digital approximation (FDA) approach, based on full channel state information (CSI): the TTD-based hybrid beamformer is optimized via block-coordinate descent and a penalty method to approximate the optimal digital beamformer, with guaranteed convergence to a stationary point of the spectral efficiency maximization problem. 2) The heuristic two-stage (HTS) approach, based on partial CSI: a piecewise-near-field approximation of the near-field channels is first proposed to facilitate the design of TTD-based analog beamformers from the outcomes of near-field beam training. Subsequently, the low-dimensional digital beamformer is optimized using knowledge of the low-dimensional equivalent channels, reducing both computational complexity and channel estimation overhead. Our numerical results show that 1) the proposed approaches effectively eliminate the spatial wideband effect, and 2) the proposed sub-connected architecture is more energy efficient and imposes looser hardware requirements on the TTDs and the system bandwidth than the fully-connected architecture.
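The toy sketch below illustrates, under a far-field uniform-linear-array simplification with hypothetical carrier and bandwidth values, why TTD-based analog beamforming mitigates beam squint across the band while a plain phase shifter does not; it is not the paper's optimized design.

```python
# Sketch of why TTDs mitigate the spatial wideband effect: a true-time delay
# applies a frequency-proportional phase 2*pi*f*t, keeping the beam aligned
# across the band, whereas a phase shifter applies one fixed phase tuned to
# the carrier. Geometry and frequencies are hypothetical illustration values.
import numpy as np

c = 3e8
fc, bw = 100e9, 10e9                     # hypothetical carrier and bandwidth
freqs = fc + np.linspace(-bw / 2, bw / 2, 5)
n_ant, d = 64, c / fc / 2                # ULA with half-wavelength spacing
theta = np.deg2rad(30)                   # target direction (far-field for brevity)

delays = np.arange(n_ant) * d * np.sin(theta) / c   # per-antenna true-time delays

for f in freqs:
    steering = np.exp(-2j * np.pi * f * np.arange(n_ant) * d * np.sin(theta) / c)
    ttd_bf = np.exp(2j * np.pi * f * delays) / np.sqrt(n_ant)   # TTD weights
    ps_bf = np.exp(2j * np.pi * fc * delays) / np.sqrt(n_ant)   # phase shifters only
    print(f"f={f / 1e9:6.1f} GHz  TTD gain={abs(steering @ ttd_bf):.2f}  "
          f"PS gain={abs(steering @ ps_bf):.2f}")
```

Running this shows the TTD array gain staying flat over the band while the phase-shifter gain collapses away from the carrier, which is the squint effect the proposed architectures are designed to remove.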
Reconfigurable Intelligent Surfaces (RIS) have emerged as a disruptive technology with the potential to revolutionize wireless communication systems. In this paper, we present RIShield, a novel application of RIS technology specifically designed for radiation-sensitive environments. The aim of RIShield is to enable electromagnetic blackouts, preventing radiation leakage from target areas. We propose a comprehensive framework for RIShield deployment that accounts for the unique challenges and requirements of radiation-sensitive environments. By strategically positioning RIS panels, we create an intelligent shielding mechanism that selectively absorbs and reflects electromagnetic waves, effectively blocking radiation transmission. To achieve optimal performance, we model the corresponding channel and design a dynamic control scheme that adjusts the RIS configuration based on real-time radiation monitoring. By leveraging the principles of reconfigurability and intelligent control, RIShield ensures adaptive and efficient protection while minimizing signal degradation. Through full-wave and ray-tracing simulations, we demonstrate the effectiveness of RIShield in achieving significant electromagnetic attenuation. Our results highlight the potential of RIS technology to address critical concerns in radiation-sensitive environments, paving the way for safer and more secure operations in industries such as healthcare, nuclear facilities, and defense.
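As a hedged sketch of such closed-loop control, the snippet below greedily selects each element's discrete phase to minimize the field magnitude observed at a monitor point; the channel coefficients are random placeholders rather than a full-wave model, and the paper's actual control design is not reproduced.

```python
# Toy sketch of closed-loop RIS control for leakage suppression: greedily pick
# each element's phase to minimize the field measured at a monitor point.
# Channels are random placeholders, not a full-wave simulation.
import numpy as np

rng = np.random.default_rng(1)
n_elem = 64
phase_set = np.exp(1j * np.linspace(0, 2 * np.pi, 4, endpoint=False))  # 2-bit phases

h_in = rng.normal(size=n_elem) + 1j * rng.normal(size=n_elem)   # source -> RIS
h_out = rng.normal(size=n_elem) + 1j * rng.normal(size=n_elem)  # RIS -> monitor
direct = 1.0 + 0.5j                                             # direct leakage path

def leakage(theta):
    """Field magnitude at the monitor point for a given RIS configuration."""
    return abs(direct + (h_in * theta) @ h_out)

theta = np.ones(n_elem, dtype=complex)       # start with all-zero phases
for i in range(n_elem):                      # one greedy coordinate sweep
    trial = theta.copy()
    best_p, best_val = theta[i], leakage(theta)
    for p in phase_set:
        trial[i] = p
        val = leakage(trial)                 # "real-time monitoring" feedback
        if val < best_val:
            best_p, best_val = p, val
    theta[i] = best_p

print(f"leakage before: {leakage(np.ones(n_elem, dtype=complex)):.3f}  "
      f"after: {leakage(theta):.3f}")
```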
We consider a Multi-Agent Path Finding (MAPF) setting where agents have been assigned a plan, but some agents are delayed during its execution. Instead of replanning from scratch when such a delay occurs, we propose delay introduction, whereby we delay some additional agents so that the remainder of the plan can be executed safely. We show that finding the minimum number of additional delays is APX-hard, i.e., it is NP-hard to find a $(1+\varepsilon)$-approximation for some $\varepsilon>0$. However, in practice we can find optimal delay introductions using Conflict-Based Search for very large numbers of agents, and both the planning time and the resulting plan length are comparable to, and sometimes better than, those of state-of-the-art replanning heuristics.
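The minimal sketch below, with toy plans on a grid, illustrates the mechanics of delay introduction: shifting one agent's remaining plan and re-checking vertex conflicts. The paper's optimal method uses Conflict-Based Search rather than this naive check.

```python
# Minimal sketch of delay introduction: hold a delayed agent in place for k
# steps and re-check which agents now conflict. Plans are toy data; vertex
# conflicts only (edge conflicts omitted for brevity).

def delay_plan(plan, t, k):
    """Hold the agent at its position at time t for k extra steps."""
    return plan[:t + 1] + [plan[t]] * k + plan[t + 1:]

def vertex_conflicts(plans):
    """Return (time, agent_i, agent_j) triples where two agents share a cell."""
    horizon = max(len(p) for p in plans)
    conflicts = []
    for t in range(horizon):
        pos = {}
        for i, p in enumerate(plans):
            cell = p[min(t, len(p) - 1)]   # agents wait at their goals
            if cell in pos:
                conflicts.append((t, pos[cell], i))
            pos[cell] = i
    return conflicts

plans = [[(0, 0), (0, 1), (0, 2)],         # agent 0
         [(1, 1), (0, 1), (1, 1)]]         # agent 1 passes (0, 1) at t=1
print(vertex_conflicts(plans))             # [(1, 0, 1)]: collision at t=1
plans[0] = delay_plan(plans[0], 0, 1)      # introduce a one-step delay for agent 0
print(vertex_conflicts(plans))             # []: the delay resolves the conflict
```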
In the era of large language models, techniques such as Retrieval-Augmented Generation can better address open-domain question-answering problems. Due to constraints such as model size and computing resources, the context length is often limited, and it is challenging to enable the model to cover overlong contexts while answering open-domain questions. This paper proposes a general and convenient method for covering longer contexts in open-domain question-answering tasks. It leverages a small encoder language model to effectively encode contexts, and the encoded representations are fused with the original inputs via cross-attention. With our method, the original language model can cover contexts several times longer while keeping computing requirements close to the baseline. Our experiments demonstrate that, after fine-tuning, performance improves across two held-in datasets, four held-out datasets, and two in-context learning settings.
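A hedged sketch of the fusion step is shown below: a small Transformer encoder compresses an overlong context, and the main model's hidden states cross-attend to the result. Dimensions and module choices are illustrative, not the paper's exact architecture.

```python
# Sketch of encoder + cross-attention fusion under assumed shapes: the long
# context is encoded once by a small model, and the main model's hidden states
# attend to that memory. All sizes are illustrative placeholders.
import torch
import torch.nn as nn

d_model, n_heads = 256, 4
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

long_context = torch.randn(1, 4096, d_model)  # far beyond the main model's window
query_states = torch.randn(1, 128, d_model)   # hidden states of the main model

memory = encoder(long_context)                # encode the overlong context once
fused, _ = cross_attn(query_states, memory, memory)  # queries from the input side
print(fused.shape)                            # torch.Size([1, 128, 256])
```

Because the expensive self-attention over the long context happens only in the small encoder, the main model's per-token cost stays close to the baseline, which matches the efficiency claim above.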
Real-time routing for mega-constellation satellite communication is challenged by the large number of network nodes, especially on computation-limited devices such as onboard embedded systems. In this paper, a fast routing method is proposed for mega-constellation backbone networks. First, inspired by the regularity and sparsity of mega-constellations, a 4-degree percolation theory is proposed to describe the node search process. Then, dynamic minimum-search and mapping methods are used to narrow the traversal range. The proposed method performs as well as the heap-optimized Dijkstra algorithm while requiring less memory and fewer dynamic accesses. Experimental results show that the proposed method significantly reduces routing computation time, especially on onboard, edge-computing, or other computation-limited devices.
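The sketch below shows how the 4-neighbor regularity can be exploited: each satellite's neighbors are computed on the fly from its (plane, slot) indices instead of being stored in adjacency lists, with heap-based Dijkstra on top. Constellation size and link costs are placeholders, and the percolation-based pruning itself is not reproduced.

```python
# Sketch of shortest-path search on a Walker-style grid topology. Each node has
# exactly 4 inter-satellite links (fore/aft in its orbit, east/west across
# planes), so neighbors are derived arithmetically rather than stored.
import heapq

P, S = 24, 66                       # hypothetical planes x slots per plane

def neighbors(node):
    p, s = node
    yield (p, (s + 1) % S)          # intra-plane, forward
    yield (p, (s - 1) % S)          # intra-plane, backward
    yield ((p + 1) % P, s)          # inter-plane, east
    yield ((p - 1) % P, s)          # inter-plane, west

def dijkstra(src, dst):
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist[u]:
            continue                # stale heap entry
        for v in neighbors(u):
            nd = d + 1.0            # placeholder link cost (e.g., propagation delay)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))

print(dijkstra((0, 0), (12, 33)))   # hop count between two satellites
```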
Reconfigurable intelligent surface (RIS) is a promising technique to improve the performance of future wireless communication systems at low energy consumption. To reap the potential benefits of RIS-aided beamforming, it is vital to enhance the accuracy of channel estimation. In this paper, we consider an RIS-aided multiuser system with non-ideal reflecting elements, each of which has a phase-dependent reflecting amplitude, and we aim to minimize the mean-squared error (MSE) of the channel estimation by jointly optimizing the training signals at the user equipments (UEs) and the reflection pattern at the RIS. The least squares (LS) and linear minimum MSE (LMMSE) estimators are considered as examples. The considered problems do not admit simple solutions, mainly due to the complicated constraints pertaining to the non-ideal RIS reflecting elements. For the LS criterion, we tackle this difficulty by first proving the optimality of orthogonal training symbols and then proposing a majorization-minimization (MM)-based iterative method to design the reflection pattern, where a semi-closed-form solution is obtained in each iteration. For the LMMSE criterion, we address the joint training and reflection pattern optimization problem with an MM-based alternating algorithm, where a closed-form solution for the training symbols and a semi-closed-form solution for the RIS reflecting coefficients are derived. Furthermore, an acceleration scheme is proposed to improve the convergence rate of the proposed MM algorithms. Finally, simulation results demonstrate the performance advantages of our proposed joint training and reflection pattern designs.
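As a hedged illustration of why orthogonal patterns matter for the LS criterion, the sketch below estimates a cascaded channel with a DFT-based (orthogonal, unit-modulus) reflection pattern and compares the empirical MSE against the theoretical value $\sigma^2 \operatorname{tr}((\Phi^H \Phi)^{-1})/N$ per element. It assumes ideal reflecting elements, which is precisely the assumption that the paper's phase-dependent amplitude model and MM designs relax.

```python
# Sketch of LS cascaded-channel estimation with an orthogonal DFT reflection
# pattern. The LS error scales with trace((Phi^H Phi)^{-1}), which orthogonal
# unit-modulus columns minimize. Sizes and noise level are placeholders.
import numpy as np

rng = np.random.default_rng(2)
N, T, sigma = 16, 16, 0.1                  # RIS elements, pilot slots, noise std
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # cascaded channel

Phi = np.fft.fft(np.eye(T))[:, :N]         # unit-modulus, orthogonal pattern
noise = sigma * (rng.normal(size=T) + 1j * rng.normal(size=T)) / np.sqrt(2)
y = Phi @ h + noise                        # received pilots

h_ls = np.linalg.solve(Phi.conj().T @ Phi, Phi.conj().T @ y)   # LS estimate
theory = sigma**2 * np.trace(np.linalg.inv(Phi.conj().T @ Phi)).real / N
print(f"empirical MSE: {np.mean(np.abs(h_ls - h) ** 2):.5f}  theory: {theory:.5f}")
```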
The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, a need driven by edge computation and licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without direct data sharing and exchange, effectively modeling complex spatio-temporal dependencies to improve forecasting remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning: data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce communication cost and facilitate computation on edge devices. Experiments on traffic flow forecasting show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring modest communication cost.
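A conceptual sketch of the device/server split follows: each node runs a local GRU on its private series, only the hidden states travel to the server, and a plain normalized-adjacency matmul stands in for the server-side GNN. All sizes are illustrative, and the actual CNFGNN training loop is not reproduced.

```python
# Conceptual sketch of cross-node federated spatio-temporal modeling: temporal
# dynamics on each node, spatial mixing on the server. Raw series never leave
# the nodes; only fixed-size hidden states are communicated.
import torch
import torch.nn as nn

n_nodes, seq_len, d_in, d_h = 5, 12, 1, 32
local_models = [nn.GRU(d_in, d_h, batch_first=True) for _ in range(n_nodes)]
server_mix = nn.Linear(d_h, d_h)                 # stand-in for the server GNN

adj = torch.eye(n_nodes) + torch.rand(n_nodes, n_nodes).round()  # toy graph
adj = adj / adj.sum(dim=1, keepdim=True)                         # row-normalize

# On-device step: each node encodes its own private time series.
hidden = []
for i in range(n_nodes):
    x_i = torch.randn(1, seq_len, d_in)          # node i's private time series
    _, h_i = local_models[i](x_i)
    hidden.append(h_i.squeeze())
H = torch.stack(hidden)                          # (n_nodes, d_h), sent to server

# Server step: one round of spatial message passing over the hidden states,
# whose output would be returned to the nodes for local decoding.
H_spatial = torch.relu(server_mix(adj @ H))
print(H_spatial.shape)                           # torch.Size([5, 32])
```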
The military is investigating methods to improve communication and agility in its multi-domain operations (MDO). The Internet of Things (IoT) has gained nascent popularity in public and government domains, and its usage in MDO may revolutionize future battlefields and enable strategic advantage. While this technology offers leverage to military capabilities, it comes with challenges, one of which is uncertainty and its associated risk. A key question is how these uncertainties can be addressed. Recently published studies have proposed information camouflage to transform information from one data domain to another. As this is a comparatively new approach, we investigate the challenges of such transformations and how the associated uncertainties, specifically unknown-unknowns, can be detected and addressed to improve decision-making.
Multivariate time series forecasting has been extensively studied over the years, with ubiquitous applications in areas such as finance, traffic, and the environment. Still, concerns have been raised that traditional methods are incapable of modeling the complex patterns and dependencies in real-world data. To address such concerns, various deep learning models, mainly Recurrent Neural Network (RNN)-based methods, have been proposed. Nevertheless, capturing extremely long-term patterns while effectively incorporating information from other variables remains a challenge for time series forecasting. Furthermore, lack of explainability remains a serious drawback of deep neural network models. Inspired by the Memory Network proposed for the question-answering task, we propose a deep learning based model named Memory Time-series Network (MTNet) for time series forecasting. MTNet consists of a large memory component, three separate encoders, and an autoregressive component that are trained jointly. Additionally, the designed attention mechanism makes MTNet highly interpretable: we can easily tell which part of the historical data is referenced the most.
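A minimal sketch of the interpretability claim follows: scaled dot-product attention over encoded memory blocks yields one weight per historical block, directly exposing which segment the forecast references most. The encodings here are random placeholders rather than MTNet's actual encoder outputs, and the block count and dimension are illustrative.

```python
# Minimal sketch of interpretable attention over long-term memory blocks: the
# softmax weights form a distribution over historical segments, so the block
# with the largest weight is the one the forecast references most.
import numpy as np

rng = np.random.default_rng(3)
n_blocks, d = 7, 16                        # 7 historical blocks in long-term memory
memory = rng.normal(size=(n_blocks, d))    # placeholder encoded memory blocks
query = rng.normal(size=d)                 # encoding of the most recent window

scores = memory @ query / np.sqrt(d)       # scaled dot-product attention scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                   # softmax: one weight per block

context = weights @ memory                 # attended historical context vector
print("most-referenced block:", int(weights.argmax()), weights.round(3))
```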