As a new candidate waveform for next-generation wireless communications, orthogonal chirp division multiplexing (OCDM) has attracted growing attention for its ability to achieve full diversity in uncoded transmission and its robustness to narrow-band interference and impulsive noise. Under high-mobility channels with multiple lags and multiple Doppler shifts (MLMD), the signal suffers doubly selective (DS) fading in the time and frequency domains, and the data symbols modulated on orthogonal chirps interfere with each other. To address symbol detection for OCDM over MLMD channels, under the assumption that the path attenuation factors, delays, and Doppler shifts of the channel are available, we first derive the closed-form channel matrix in the Fresnel domain and then propose a low-complexity method to approximate it as a sparse matrix. Based on the approximated Fresnel-domain channel, we propose a message passing (MP) based detector that estimates the transmitted symbols iteratively. Finally, under two MLMD channels (an underspread channel for terrestrial vehicular communication and an overspread channel for narrow-band underwater acoustic communication), Monte Carlo simulation results and analysis validate the advantages of the proposed scheme as a promising detector for OCDM.
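For illustration, the sketch below shows how a Fresnel-domain channel matrix could be formed with NumPy, assuming an even number of chirps N, the usual DFnT convention for OCDM, and a toy MLMD channel with known path gains, integer cyclic delays, and normalized Doppler shifts; all names and values are illustrative, not the paper's implementation.

```python
import numpy as np

def dfnt_matrix(N):
    """Discrete Fresnel transform (DFnT) matrix for even N (common OCDM convention)."""
    m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(-1j * np.pi / 4) / np.sqrt(N) * np.exp(1j * np.pi / N * (m - n) ** 2)

def mlmd_time_channel(N, gains, delays, dopplers):
    """Toy time-domain channel: sum of paths with complex gain, integer cyclic delay
    (samples), and normalized Doppler shift (cycles per block)."""
    H = np.zeros((N, N), dtype=complex)
    t = np.arange(N)
    for h, l, nu in zip(gains, delays, dopplers):
        D = np.diag(np.exp(1j * 2 * np.pi * nu * t / N))   # Doppler phase ramp
        P = np.roll(np.eye(N), l, axis=0)                   # cyclic delay by l samples
        H += h * D @ P
    return H

N = 64
Phi = dfnt_matrix(N)
H_t = mlmd_time_channel(N, gains=[1.0, 0.5j], delays=[0, 3], dopplers=[0.0, 1.7])
H_f = Phi @ H_t @ Phi.conj().T        # Fresnel-domain channel coupling the chirps
print(np.count_nonzero(np.abs(H_f) > 1e-3 * np.abs(H_f).max()), "significant entries")
```

Counting the significant entries of H_f hints at why a sparse approximation, followed by a message-passing detector, is attractive.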
A novel semantic communication (SC)-assisted secrecy transmission framework is proposed. In particular, the legitimate transmitter (Tx) sends a superposition of a semantic stream and a bit stream to the legitimate receiver (Rx), and this transmission may be overheard by a malicious eavesdropper (EVE). Since the EVE only has a conventional bit-oriented communication structure, the semantic signal acts as a form of beneficial information-bearing artificial noise (AN): it remains strictly confidential to the EVE while simultaneously interfering with the EVE's reception. The ergodic (equivalent) secrecy rate over fading wiretap channels is maximized by jointly optimizing the transmit power, the semantic-bit power splitting ratio, and the successive interference cancellation decoding order at the Tx, subject to both an instantaneous peak and a long-term average power constraint. To address this non-convex problem, optimal and suboptimal algorithms are developed by employing the Lagrangian dual method and the successive convex approximation method, respectively. Numerical results show that the proposed SC-assisted secrecy transmission scheme significantly enhances physical layer security compared to baselines using conventional bit-oriented communication and non-information-bearing AN, and that the proposed suboptimal algorithm achieves near-optimal performance.
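As a rough illustration of the power-splitting trade-off (not the paper's exact formulation), one can sketch an instantaneous secrecy rate under the assumption that the semantic stream is decodable only by the legitimate Rx and is treated as noise at the EVE; the splitting ratio alpha, the channel gains, and the rate expressions below are all illustrative assumptions.

```python
import numpy as np

def secrecy_rate(P, alpha, g_rx, g_eve, sigma2=1.0):
    """Illustrative secrecy rate for a semantic/bit power split (assumed model).
    alpha: fraction of power P on the bit stream; (1 - alpha) on the semantic stream.
    Assumed SIC order at the Rx: decode the semantic part first, then the bit part.
    At the EVE, the semantic signal acts as information-bearing AN."""
    r_bit_rx = np.log2(1 + alpha * P * g_rx / sigma2)                            # after SIC
    r_bit_eve = np.log2(1 + alpha * P * g_eve / ((1 - alpha) * P * g_eve + sigma2))
    return max(r_bit_rx - r_bit_eve, 0.0)

P, g_rx, g_eve = 10.0, 1.0, 0.6
alphas = np.linspace(0.01, 0.99, 99)
best = max(alphas, key=lambda a: secrecy_rate(P, a, g_rx, g_eve))
print(f"best split alpha ~ {best:.2f}, "
      f"secrecy rate ~ {secrecy_rate(P, best, g_rx, g_eve):.2f} bit/s/Hz")
```

A one-dimensional sweep like this only illustrates the role of the splitting ratio; the paper jointly optimizes it with the transmit power and decoding order under the stated power constraints.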
Convolutional neural networks (CNNs) with large kernels, drawing inspiration from the key operations of vision transformers (ViTs), have demonstrated impressive performance in various vision-based applications. To address the degradation in computational efficiency that existing designs suffer when supporting large-kernel convolutions, an FPGA-based inference accelerator is proposed for the efficient deployment of CNNs with arbitrary kernel sizes. First, a Z-flow method is presented that optimizes the computing data flow by maximizing data reuse opportunities. In addition, the proposed design incorporates a kernel-segmentation (Kseg) scheme that extends support to large-kernel convolutions while significantly reducing the storage requirements for overlapped data. Moreover, based on an analysis of typical block structures in emerging CNNs, vertical-fused (VF) and horizontal-fused (HF) methods are developed to optimize CNN deployments from both the computation and transmission perspectives. The proposed hardware accelerator, evaluated on an Intel Arria 10 FPGA, achieves up to 3.91 times better DSP efficiency than prior art on the same network. In particular, it demonstrates efficient support for large-kernel CNNs, achieving throughputs of 169.68 GOPS and 244.55 GOPS for RepLKNet-31 and PyConvResNet-50, respectively, both of which are implemented on hardware for the first time.
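The kernel-segmentation idea can be sketched in a few lines: a large K x K kernel is split into sub-kernels that fit the compute array, partial outputs are computed on shifted input windows, and the partials are accumulated. The tile size and the plain-loop reference convolution below are illustrative, not the accelerator's actual data path.

```python
import numpy as np

def conv2d_valid(x, w):
    """Plain 'valid' 2-D cross-correlation used as a reference."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def conv2d_kseg(x, w, seg=8):
    """Kernel segmentation: split a large kernel into seg x seg tiles and accumulate."""
    K = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - K + 1, W - K + 1))
    for p0 in range(0, K, seg):
        for q0 in range(0, K, seg):
            sub = w[p0:p0 + seg, q0:q0 + seg]
            # partial result from the shifted input window,
            # cropped to the output size of the full kernel
            part = conv2d_valid(x[p0:, q0:], sub)
            out += part[:out.shape[0], :out.shape[1]]
    return out

x = np.random.randn(40, 40)
w = np.random.randn(31, 31)            # large kernel, e.g. RepLKNet-style 31x31
print(np.allclose(conv2d_valid(x, w), conv2d_kseg(x, w, seg=8)))   # True
```

The equivalence check confirms that segmenting the kernel changes only the schedule, not the result, which is what lets the accelerator keep its compute array small while serving arbitrary kernel sizes.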
The demand for unprecedented performance in the upcoming 6G wireless networks is fueling research on THz communications empowered by Reconfigurable Intelligent Surfaces (RISs). A wide range of use cases have been proposed, most of them assuming high-level RIS models that overlook some of the hardware impairments this technology faces. The expectation is that emerging reconfigurable THz technologies will eventually overcome their current limitations. This disconnect from the hardware, however, may mask nonphysical assumptions, which are then perceived as hardware limitations. In this paper, a top-down approach bounded by physical constraints is presented, distilling hardware requirements and upper bounds on the RIS-aided system performance from system-level specifications. We consider D-band indoor and outdoor scenarios in which a more realistic assessment of state-of-the-art solutions can be made. The goal is to highlight the intricacies of the design procedure based on sound assumptions about the RIS performance. For a given signal range and angular coverage, we quantify the required RIS size, the number of switching elements, and the maximum achievable bandwidth and capacity.
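A back-of-the-envelope version of this sizing exercise, using only textbook approximations (half-wavelength element spacing, beamwidth of roughly lambda/D, Shannon capacity) rather than the paper's detailed D-band models, might look as follows; every number here is an illustrative assumption.

```python
import numpy as np

c = 3e8
f = 140e9                     # D-band carrier frequency (illustrative)
lam = c / f

theta_bw_deg = 1.0            # desired beamwidth toward the user (illustrative)
D = lam / np.radians(theta_bw_deg)        # aperture size from beamwidth ~ lambda/D
N = int(np.ceil((2 * D / lam) ** 2))      # elements at lambda/2 spacing

B = 2e9                       # usable bandwidth (illustrative)
snr_db = 10.0
capacity = B * np.log2(1 + 10 ** (snr_db / 10))

print(f"aperture ~ {D * 100:.1f} cm, elements ~ {N}, capacity ~ {capacity / 1e9:.1f} Gbit/s")
```

Even this crude estimate already shows the coupling the paper formalizes: narrower beams demand larger apertures and many more switching elements, which in turn constrain the achievable bandwidth and capacity.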
Today, we rely heavily on the constant availability of wireless communication systems. As a result, wireless jamming remains an imminent threat: attackers can create deliberate radio interference to overshadow desired signals, leading to denial of service. Although the broadcast nature of radio signal propagation makes such an attack possible in the first place, it likewise poses a challenge for the attacker, preventing precise targeting of single devices. In particular, the jamming signal will likely reach not only the victim receiver but also other neighboring devices. In this work, we introduce spatial control of wireless jamming signals, granting a new degree of freedom to leverage for jamming attacks. Our novel strategy employs an environment-adaptive reconfigurable intelligent surface (RIS), exploiting multipath signal propagation to spatially focus jamming signals on particular victim devices. We investigate this effect through extensive experimentation and show that our approach can disable the wireless communication of a victim device while leaving neighboring devices unaffected. In particular, we demonstrate complete denial of service of a Wi-Fi device while a second device located as close as 5 mm away remains unaffected, sustaining wireless communication at a data rate of 60 Mbit/s. We also show that the attacker can change the attack target on the fly, dynamically selecting the device to be jammed.
We describe ACE0, a lightweight platform for evaluating the suitability and viability of AI methods for behaviour discovery in multi-agent simulations. Specifically, ACE0 was designed to explore AI methods for multi-agent simulations used in operations research studies related to new technologies such as autonomous aircraft. Simulation environments used in production are often high-fidelity and complex, require significant domain knowledge, and as a result have high R&D costs. Minimal and lightweight simulation environments can help researchers and engineers evaluate the viability of new AI technologies for behaviour discovery in a more agile and potentially more cost-effective manner. In this paper we describe the motivation for the development of ACE0, provide a technical overview of the system architecture, describe a case study of behaviour discovery in the aerospace domain, and provide a qualitative evaluation of the system. The evaluation includes a brief description of collaborative research projects with academic partners exploring different AI behaviour discovery methods.
The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of data that must remain decentralized due to edge-computation requirements and licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without direct data sharing and exchange, effectively modeling complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on a traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring a modest communication cost.
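The device/server split can be sketched in a few lines: each node encodes its local series and ships only the embedding to the server, the server aggregates embeddings over the traffic graph, and the nodes combine the returned message with their local state to forecast. The linear encoders and plain mean aggregation below are stand-ins for CNFGNN's actual GRU encoders and GNN, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, T, hidden = 5, 24, 8
adj = (rng.random((num_nodes, num_nodes)) < 0.4).astype(float)
np.fill_diagonal(adj, 0)

# --- on each node (raw data never leaves the device): a local temporal encoder ---
W_enc = rng.standard_normal((T, hidden)) * 0.1            # stand-in for a GRU encoder
local_series = [rng.standard_normal(T) for _ in range(num_nodes)]
node_embs = np.stack([np.tanh(x @ W_enc) for x in local_series])   # only embeddings are sent

# --- on the server: spatial aggregation over the traffic graph (stand-in for the GNN) ---
deg = adj.sum(1, keepdims=True) + 1e-9
server_embs = adj @ node_embs / deg                        # mean over neighbors

# --- back on each node: combine local and server messages to produce the forecast ---
W_dec = rng.standard_normal((2 * hidden, 1)) * 0.1
forecasts = np.concatenate([node_embs, server_embs], axis=1) @ W_dec
print(forecasts.ravel())
```

In the full method the node-side and server-side parameters are updated with alternating optimization, which is what keeps the per-round communication limited to embeddings rather than raw series or gradients of the whole model.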
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing their memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
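Of these categories, parameter quantization and pruning are the simplest to sketch. The snippet below shows magnitude pruning and uniform 8-bit quantization of a weight matrix; it is a generic illustration of the two ideas, not tied to any specific method covered in the survey.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.8):
    """Zero out the smallest-magnitude weights until the target sparsity is reached."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thresh, w, 0.0)

def quantize_uint8(w):
    """Uniform affine quantization to 8 bits, returning the dequantized approximation."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q.astype(np.float32) * scale + lo

w = np.random.randn(256, 256).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.8)
w_quant = quantize_uint8(w)
print(f"sparsity: {np.mean(w_pruned == 0):.2f}, "
      f"quantization MSE: {np.mean((w - w_quant) ** 2):.2e}")
```

Both transformations trade a small, measurable approximation error for large reductions in storage and arithmetic cost, which is the central trade-off the surveyed techniques navigate.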
Graph neural networks (GNNs) have emerged as a powerful paradigm for embedding-based entity alignment due to their capability of identifying isomorphic subgraphs. However, in real knowledge graphs (KGs), counterpart entities usually have non-isomorphic neighborhood structures, which easily causes GNNs to yield different representations for them. To tackle this problem, we propose a new KG alignment network, namely AliNet, aiming at mitigating the non-isomorphism of neighborhood structures in an end-to-end manner. As the direct neighbors of counterpart entities are usually dissimilar due to schema heterogeneity, AliNet introduces distant neighbors to expand the overlap between their neighborhood structures. It employs an attention mechanism to highlight helpful distant neighbors and reduce noise. It then controls the aggregation of both direct and distant neighborhood information using a gating mechanism. We further propose a relation loss to refine entity representations. We perform thorough experiments, with detailed ablation studies and analyses, on five entity alignment datasets, demonstrating the effectiveness of AliNet.
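A minimal sketch of the gated combination of one-hop and two-hop aggregation, with attention over the distant (two-hop) neighbors, is shown below; the random graph, the dot-product attention, and the sigmoid gate are illustrative stand-ins rather than AliNet's exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
A1 = (rng.random((n, n)) < 0.4).astype(float)        # direct (one-hop) adjacency
A2 = ((A1 @ A1) > 0).astype(float)                    # distant (two-hop) reachability
X = rng.standard_normal((n, d))                       # entity representations

def softmax_rows(M):
    M = np.where(M == -np.inf, -1e9, M)
    e = np.exp(M - M.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# one-hop: plain mean aggregation over direct neighbors
h1 = (A1 @ X) / (A1.sum(1, keepdims=True) + 1e-9)

# two-hop: attention highlights helpful distant neighbors and suppresses noise
scores = np.where(A2 > 0, X @ X.T, -np.inf)
h2 = softmax_rows(scores) @ X

# gate controls how much distant information flows into the final representation
W_g = rng.standard_normal((d, d)) * 0.1
g = 1 / (1 + np.exp(-(h1 @ W_g)))                     # sigmoid gate
H = g * h1 + (1 - g) * h2
print(H.shape)
```

The gate lets each entity decide, per dimension, how much of the expanded (two-hop) neighborhood to mix in, which is what compensates for non-isomorphic one-hop neighborhoods.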
Multivariate time series forecasting has been extensively studied over the years, with ubiquitous applications in areas such as finance, traffic, and the environment. Still, concerns have been raised that traditional methods are incapable of modeling the complex patterns and dependencies found in real-world data. To address these concerns, various deep learning models, mainly based on Recurrent Neural Networks (RNNs), have been proposed. Nevertheless, capturing extremely long-term patterns while effectively incorporating information from other variables remains a challenge for time series forecasting. Furthermore, a lack of explainability remains a serious drawback of deep neural network models. Inspired by the Memory Network proposed for the question-answering task, we propose a deep learning based model named Memory Time-series Network (MTNet) for time series forecasting. MTNet consists of a large memory component, three separate encoders, and an autoregressive component that are trained jointly. Additionally, the designed attention mechanism makes MTNet highly interpretable: we can easily tell which part of the historical data is referenced the most.
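The memory-plus-attention idea can be sketched roughly: the long history is chunked into memory blocks, each block and the most recent window are encoded, and an attention distribution over the blocks both feeds the forecast and provides the interpretability signal (which past block was referenced most). The shapes, linear encoders, and single-step forecast below are illustrative stand-ins, not MTNet's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.standard_normal(7 * 24)                 # a week of hourly history
block_len, d = 24, 8
blocks = series.reshape(-1, block_len)               # memory component: one block per day

W_mem = rng.standard_normal((block_len, d)) * 0.1    # stand-in memory encoder
W_in = rng.standard_normal((block_len, d)) * 0.1     # stand-in encoder for the recent window

mem = np.tanh(blocks @ W_mem)                        # (num_blocks, d)
query = np.tanh(series[-block_len:] @ W_in)          # encoding of the most recent window

att = np.exp(mem @ query)
att /= att.sum()                                     # attention over memory blocks
context = att @ mem

W_out = rng.standard_normal((2 * d,)) * 0.1
forecast = np.concatenate([context, query]) @ W_out  # the full model adds an AR term
print("attention over past days:", np.round(att, 3))
print("one-step forecast:", forecast)
```

Inspecting the attention vector directly answers "which part of the history was referenced the most", which is the interpretability claim made in the abstract.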
The potential of graph convolutional neural networks for the task of zero-shot learning has been demonstrated recently. These models are highly sample-efficient, as related concepts in the graph structure share statistical strength, allowing generalization to new classes in the face of a lack of data. However, knowledge from distant nodes can become diluted when propagating through intermediate nodes, because current approaches to zero-shot learning use graph propagation schemes that perform Laplacian smoothing at each layer. We show that extensive smoothing does not help the task of regressing classifier weights in zero-shot learning. In order to still incorporate information from distant nodes and utilize the graph structure, we propose an Attentive Dense Graph Propagation Module (ADGPM). ADGPM allows us to exploit the hierarchical graph structure of the knowledge graph through additional connections. These connections are added based on a node's relationship to its ancestors and descendants, and an attention scheme is used to weight their contribution depending on the distance to the node. Finally, we show that fine-tuning the feature representation after training the ADGPM leads to considerable improvements. Our method achieves competitive results, outperforming previous zero-shot learning approaches.
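The dense, distance-weighted propagation can be sketched as follows: each node connects directly to all of its ancestors, with a weight that decays with hop distance, so distant knowledge arrives in one step instead of being smoothed through intermediates. The toy hierarchy, the fixed 1/(1+dist) decay (in place of learned attention), and the omitted descendant term are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy class hierarchy: parent[i] is the parent of node i (node 0 is the root)
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
n, d = 6, 4
X = rng.standard_normal((n, d))              # per-class features (e.g. word embeddings)

def ancestors_with_dist(i):
    """Return {ancestor: hop distance} for node i."""
    dist, out = 1, {}
    while i in parent:
        i = parent[i]
        out[i] = dist
        dist += 1
    return out

# dense propagation: each node attends directly to ALL of its ancestors,
# with a per-distance weight (here a simple 1/(1+dist) decay standing in for attention)
H = X.copy()
for i in range(n):
    anc = ancestors_with_dist(i)
    if not anc:
        continue
    w = np.array([1.0 / (1 + dist) for dist in anc.values()])
    w /= w.sum()
    H[i] = 0.5 * X[i] + 0.5 * (w @ X[list(anc.keys())])   # descendant term omitted for brevity
print(H.round(2))
```

Because every ancestor contributes through its own edge, no intermediate Laplacian smoothing dilutes its signal, which is the property the module is designed to preserve.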