Next-generation networks such as fifth-generation (5G) and sixth-generation (6G) networks require high security, low latency, high reliability, and high capacity. Reconfigurable wireless network slicing is considered one of the key enablers of 5G and 6G networks. Reconfigurable slicing allows operators to run multiple network instances on a single infrastructure with better quality of service (QoS). This QoS can be achieved by reconfiguring and optimizing these networks using artificial intelligence and machine learning algorithms. Machine-learning-enabled reconfigurable wireless network solutions are therefore required to develop smart decision-making mechanisms for network management and to limit network slice failures. In this paper, we propose a hybrid deep learning model that consists of a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The CNN performs resource allocation, network reconfiguration, and slice selection, while the LSTM tracks statistical information (load balancing, error rate, etc.) about the network slices. The applicability of the proposed model is validated under multiple unknown devices, slice failures, and overloading conditions. The proposed model achieves an overall accuracy of 95.17%, which reflects its applicability.
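To make the hybrid architecture concrete, the following is a minimal sketch of a CNN-LSTM slice classifier in PyTorch; the layer sizes, input features, and the three-slice output are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a hybrid CNN-LSTM slice classifier (assumed architecture;
# the paper's exact layer sizes, features, and training setup are not given here).
import torch
import torch.nn as nn

class HybridSliceModel(nn.Module):
    def __init__(self, num_features=16, num_slices=3, hidden=64):
        super().__init__()
        # CNN branch: extracts per-time-step features from device/network KPIs
        self.cnn = nn.Sequential(
            nn.Conv1d(num_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM branch: models slice statistics (load, error rate) over time
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        # Classifier head: predicts the slice to select for the incoming request
        self.head = nn.Linear(hidden, num_slices)

    def forward(self, x):            # x: (batch, time, num_features)
        z = self.cnn(x.transpose(1, 2)).transpose(1, 2)   # (batch, time, 32)
        _, (h, _) = self.lstm(z)                          # h: (1, batch, hidden)
        return self.head(h[-1])                           # slice logits

logits = HybridSliceModel()(torch.randn(8, 20, 16))       # 8 requests, 20 time steps
print(logits.shape)                                        # torch.Size([8, 3])
```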
Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters. To address key challenges of enabling FL over a wireless fog-cloud system (e.g., non-i.i.d. data and user heterogeneity), we first propose an efficient FL algorithm based on Federated Averaging (called FedFog) that performs local aggregation of gradient parameters at the fog servers and the global training update at the cloud. Next, we employ FedFog in wireless fog-cloud systems by investigating a novel network-aware FL optimization problem that strikes a balance between the global loss and the completion time. An iterative algorithm is then developed to obtain a precise measurement of the system performance, which helps design an efficient stopping criterion that outputs an appropriate number of global rounds. To mitigate the straggler effect, we propose a flexible user aggregation strategy that first trains fast users to reach a certain level of accuracy before allowing slow users to join the global training updates. Extensive numerical results on several real-world FL tasks are provided to verify the theoretical convergence of FedFog. We also show that the proposed co-design of FL and communication is essential to substantially improve resource utilization while achieving comparable accuracy of the learning model.
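As a rough illustration of the two-level aggregation described above, the following sketch shows FedAvg-style weighted averaging first at a fog server and then at the cloud; the sample counts and parameter shapes are placeholder values, not FedFog's actual interface.

```python
# Minimal sketch of FedAvg-style aggregation as used conceptually by FedFog
# (assumed interface: each fog server averages its users' updates, the cloud
# averages the fog-level results; data sizes and model shapes are illustrative).
import numpy as np

def weighted_average(params, weights):
    """Average a list of parameter vectors with per-user weights (e.g. sample counts)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, params))

# Local updates from three users attached to one fog server
user_params = [np.random.randn(10) for _ in range(3)]
user_samples = [120, 80, 200]
fog_update = weighted_average(user_params, user_samples)      # fog-level aggregation

# Cloud aggregates the fog-level updates from two fog servers
cloud_model = weighted_average([fog_update, np.random.randn(10)], [400, 350])
print(cloud_model.shape)   # (10,)
```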
We consider a wireless uplink network consisting of multiple end devices and an access point (AP). Each device monitors a physical process with stochastic arrivals of status updates and sends these updates to the AP over a shared channel. The AP aims to schedule the transmissions of these devices to optimize the network-wide information freshness, quantified by the Age of Information (AoI) metric. Due to the stochastic arrival of status updates at the devices, the AP has only partial observations of the system times of the latest status updates when making scheduling decisions. We formulate such a decision-making problem as a belief Markov Decision Process (belief-MDP). The belief-MDP in its original form is difficult to solve, as the dimension of its states can go to infinity and its belief space is uncountable. By leveraging the properties of the (Bernoulli) status update arrival processes, we manage to simplify the feasible states of the belief-MDP to two-dimensional vectors. Building on this, we devise a low-complexity scheduling policy. We derive upper bounds on the AoI performance of the low-complexity policy and analyze its performance guarantee by comparing it with a universal lower bound. Numerical results validate our analyses.
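For intuition, the sketch below simulates AoI evolution with Bernoulli status-update arrivals under a simple greedy scheduling rule; it is illustrative only and does not reproduce the paper's belief-MDP policy or its two-dimensional belief states.

```python
# Minimal sketch of AoI evolution under a simple scheduling rule (illustrative
# only; the belief-MDP-based policy from the paper is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
N, T, p = 4, 1000, 0.3          # devices, horizon, Bernoulli arrival probability
age_ap = np.ones(N)             # AoI of each device's process at the AP
age_dev = np.zeros(N)           # system time of the freshest update at each device

total_age = 0.0
for t in range(T):
    arrivals = rng.random(N) < p
    age_dev[arrivals] = 0                      # a new status update arrives
    k = int(np.argmax(age_ap - age_dev))       # schedule the device with most to gain
    age_ap[k] = age_dev[k] + 1                 # successful delivery over the shared channel
    mask = np.ones(N, dtype=bool); mask[k] = False
    age_ap[mask] += 1                          # other devices' AoI grows by one slot
    age_dev += 1
    total_age += age_ap.mean()

print("time-average AoI:", total_age / T)
```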
This paper proposes a deep learning approach to a class of active sensing problems in wireless communications, in which an agent sequentially interacts with an environment over a predetermined number of time frames to gather information in order to perform a sensing or actuation task that maximizes some utility function. In such an active learning setting, the agent needs to design an adaptive sensing strategy sequentially, based on the observations made so far. To tackle this challenging problem, in which the dimension of the historical observations grows over time, we propose to use a long short-term memory (LSTM) network to exploit the temporal correlations in the sequence of observations and to map each observation to a fixed-size state information vector. We then use a deep neural network (DNN) to map the LSTM state at each time frame to the design of the next measurement step. Finally, we employ another DNN to map the final LSTM state to the desired solution. We investigate the performance of the proposed framework on adaptive channel sensing problems in wireless communications. In particular, we consider the adaptive beamforming problem for mmWave beam alignment and the adaptive reconfigurable intelligent surface sensing problem for reflection alignment. Numerical results demonstrate that the proposed deep active sensing strategy outperforms existing adaptive and nonadaptive sensing schemes.
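The sequential interaction described above can be sketched as follows; the observation, state, and design dimensions, as well as the placeholder `measure` environment, are assumptions for illustration rather than the paper's actual measurement model.

```python
# Minimal sketch of the LSTM-based active sensing loop (assumed shapes; the
# actual measurement model, utility, and training procedure are task-specific).
import torch
import torch.nn as nn

obs_dim, state_dim, design_dim, sol_dim, T = 4, 64, 8, 8, 10

lstm = nn.LSTMCell(obs_dim, state_dim)          # observation -> fixed-size state
design_dnn = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                           nn.Linear(128, design_dim))   # state -> next measurement
solution_dnn = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                             nn.Linear(128, sol_dim))    # final state -> solution

def measure(design):
    # Placeholder environment: returns a noisy observation for a given design
    return torch.randn(design.shape[0], obs_dim)

batch = 16
h = torch.zeros(batch, state_dim); c = torch.zeros(batch, state_dim)
design = torch.zeros(batch, design_dim)
for t in range(T):
    obs = measure(design)                        # interact with the environment
    h, c = lstm(obs, (h, c))                     # summarize the history in the LSTM state
    design = design_dnn(h)                       # adaptively pick the next measurement

solution = solution_dnn(h)                       # e.g. final beamforming vector
print(solution.shape)                            # torch.Size([16, 8])
```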
Optical wireless communication (OWC) has the potential to provide the high communication speeds needed to support the massive Internet usage expected in the near future. In OWC, optical access points (APs) are deployed on the ceiling to serve multiple users. In this context, efficient multiple access schemes are required to share the resources among the users and manage multi-user interference. Recently, non-orthogonal multiple access (NOMA) has been studied to serve multiple users simultaneously using the same resources, while a different power level is allocated to each user. Despite the acceptable performance of NOMA, users might experience high packet loss due to high noise resulting from the use of successive interference cancellation (SIC). In this work, random linear network coding (RLNC) is proposed to enhance the performance of NOMA in an optical wireless network where users are divided into multicast groups, each containing users whose channel gains differ only slightly. Moreover, a fixed power allocation (FPA) strategy is applied across these groups to limit complexity. The performance of the proposed scheme is evaluated in terms of the total packet success probability. The results show that the proposed scheme is better suited to the network considered than benchmark schemes such as traditional NOMA and orthogonal transmission. Moreover, the total packet success probability is strongly affected by the power level allocated to each group in all scenarios.
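To illustrate the network-coding component, the following sketch encodes source packets as random XOR combinations over GF(2) and counts how many coded packets a receiver needs before the coefficient matrix reaches full rank; the field size, packet length, and the NOMA power allocation are assumptions and are not modeled here.

```python
# Minimal sketch of random linear network coding (RLNC) over GF(2): each coded
# packet is a random XOR combination of the K source packets; a multicast group
# can decode once it collects K linearly independent combinations.
import numpy as np

rng = np.random.default_rng(1)
K, pkt_len = 4, 16
source = rng.integers(0, 2, size=(K, pkt_len), dtype=np.uint8)   # source packets (bits)

def encode():
    coeffs = rng.integers(0, 2, size=K, dtype=np.uint8)          # random GF(2) coefficients
    payload = (coeffs @ source) % 2                               # XOR combination
    return coeffs, payload

def rank_gf2(rows):
    m = np.array(rows, dtype=np.uint8) % 2
    r = 0
    for col in range(m.shape[1]):
        pivot = next((i for i in range(r, m.shape[0]) if m[i, col]), None)
        if pivot is None:
            continue
        m[[r, pivot]] = m[[pivot, r]]
        for i in range(m.shape[0]):
            if i != r and m[i, col]:
                m[i] ^= m[r]
        r += 1
    return r

received = []
while rank_gf2([c for c, _ in received] or [np.zeros(K, np.uint8)]) < K:
    received.append(encode())                                     # collect coded packets
print("coded packets needed to decode:", len(received))
```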
The implementation of integrated sensing and communication (ISAC) depends heavily on effective beamforming design that exploits accurate instantaneous channel state information (ICSI). However, channel tracking in ISAC requires a large amount of training overhead and prohibitively high computational complexity. To address this problem, in this paper we focus on ISAC-assisted vehicular networks and exploit a deep learning approach to implicitly learn the features of historical channels and directly predict the beamforming matrix for the next time slot to maximize the average achievable sum-rate of the system, thus bypassing the need for explicit channel tracking and reducing the system signaling overhead. To this end, a general sum-rate maximization problem with Cramér-Rao lower bound-based sensing constraints is first formulated for the considered ISAC system. Then, a convolutional long short-term memory network operating on historical channels is designed for predictive beamforming, exploiting the spatial and temporal dependencies of the communication channels to further improve the learning performance. Finally, simulation results show that the proposed method satisfies the sensing performance requirement, while its achievable sum-rate approaches the upper bound obtained by a genie-aided scheme with perfect ICSI.
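A minimal sketch of a historical-channel-based CNN+LSTM predictive beamformer is given below; the antenna and user dimensions are assumed, and the paper's ConvLSTM cells, sensing constraints, and training loss are not reproduced.

```python
# Minimal sketch of a historical-channel-based CNN+LSTM predictive beamformer
# (assumed dimensions; illustrative of the idea, not the paper's exact network).
import torch
import torch.nn as nn

class PredictiveBeamformer(nn.Module):
    def __init__(self, n_tx=8, n_users=4, hidden=128):
        super().__init__()
        self.feat = nn.Sequential(                      # features of one historical channel
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),  # 2 channels: real / imag parts
            nn.Flatten())
        self.lstm = nn.LSTM(16 * n_tx * n_users, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * n_tx * n_users)
        self.n_tx, self.n_users = n_tx, n_users

    def forward(self, h_hist):                          # (batch, T, 2, n_users, n_tx)
        b, t = h_hist.shape[:2]
        f = self.feat(h_hist.flatten(0, 1)).view(b, t, -1)
        _, (s, _) = self.lstm(f)
        w = self.head(s[-1]).view(b, 2, self.n_users, self.n_tx)
        w = torch.complex(w[:, 0], w[:, 1])             # complex beamforming matrix
        return w / w.abs().pow(2).sum((1, 2), keepdim=True).sqrt()  # unit total power

W = PredictiveBeamformer()(torch.randn(2, 5, 2, 4, 8))  # 2 samples, 5 historical slots
print(W.shape)                                          # torch.Size([2, 4, 8])
```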
With the deployment of 6G technology, it is envisioned that the competitive edge of wireless networks will be sustained and the next decade's communication requirements will be satisfied. 6G will also aim to aid the development of a ubiquitous and mobile human society, while simultaneously providing solutions to key challenges such as coverage and capacity. In addition, 6G will focus on providing intelligent use cases and applications using higher data rates over millimeter-wave and terahertz frequencies. However, at higher frequencies, undesired phenomena such as atmospheric absorption and blocking occur, creating a bottleneck owing to resource (spectrum and energy) scarcity. Hence, continuing the current approach of reproducing at the receiver the exact information sent by the transmitter will result in a never-ending need for more bandwidth. A possible solution to this challenge lies in semantic communications, which focuses on the meaning (context) of the received data rather than merely reproducing the transmitted data correctly. This, in turn, requires less bandwidth and reduces the bottleneck caused by the various undesired phenomena. In this respect, the current article presents a detailed survey of recent technological trends in semantic communications for intelligent wireless networks. We focus on the semantic communications architecture, including the model and the source and channel coding. Next, we detail cross-layer interaction and various goal-oriented communication applications. We also present overall semantic communications trends in detail and identify challenges that need timely solutions before the practical implementation of semantic communications within 6G wireless technology. Our survey article is an attempt to contribute significantly towards initiating future research directions in the area of semantic communications for intelligent 6G wireless networks.
Temporal modeling still remains challenging for action recognition in videos. To mitigate this issue, this paper presents a new video architecture, termed the Temporal Difference Network (TDN), with a focus on capturing multi-scale temporal information for efficient action recognition. The core of our TDN is to devise an efficient temporal difference module (TDM) by explicitly leveraging a temporal difference operator, and to systematically assess its effect on short-term and long-term motion modeling. To fully capture temporal information over the entire video, our TDN is established with a two-level difference modeling paradigm. Specifically, for local motion modeling, temporal differences over consecutive frames are used to supply 2D CNNs with finer motion patterns, while for global motion modeling, temporal differences across segments are incorporated to capture long-range structure for motion feature excitation. TDN provides a simple and principled temporal modeling framework and can be instantiated with existing CNNs at a small extra computational cost. Our TDN presents a new state of the art on the Something-Something V1 and V2 datasets and is on par with the best performance on the Kinetics-400 dataset. In addition, we conduct in-depth ablation studies and present visualization results of our TDN, hopefully providing insightful analysis of the temporal difference operation. We release the code at //github.com/MCG-NJU/TDN.
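The short-term difference operation can be sketched as follows; this is only illustrative of the temporal difference operator and omits TDN's multi-scale design, downsampling, and fusion with RGB features.

```python
# Minimal sketch of a temporal difference operator in the spirit of TDN's
# short-term module (illustrative; the full TDM is more involved).
import torch
import torch.nn as nn

def short_term_difference(frames):
    """frames: (batch, T, C, H, W) -> channel-stacked differences of consecutive frames."""
    diff = frames[:, 1:] - frames[:, :-1]              # (batch, T-1, C, H, W)
    return diff.flatten(1, 2)                          # stack along channels for a 2D CNN

frames = torch.randn(2, 5, 3, 56, 56)                  # 2 clips, 5 frames each
motion = short_term_difference(frames)                 # (2, 12, 56, 56)
features = nn.Conv2d(motion.shape[1], 64, 3, padding=1)(motion)
print(features.shape)                                  # torch.Size([2, 64, 56, 56])
```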
Deep learning is applied to energy markets to predict extreme loads observed in energy grids. Forecasting energy loads and prices is challenging due to sharp peaks and troughs that arise from supply and demand fluctuations under intraday system constraints. We propose deep spatio-temporal models combined with extreme value theory (EVT) to capture these effects and, in particular, the tail behavior of load spikes. Deep LSTM architectures with ReLU and $\tanh$ activation functions model trends and temporal dependencies, while EVT captures highly volatile load spikes above a pre-specified threshold. To illustrate our methodology, we use hourly price and demand data from 4719 nodes of the PJM interconnection to construct a deep predictor. We show that DL-EVT outperforms traditional Fourier time series methods, both in- and out-of-sample, by capturing the observed nonlinearities in prices. Finally, we conclude with directions for future research.
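The EVT component can be illustrated with a peaks-over-threshold fit of a generalized Pareto distribution; the synthetic load data and the 95% threshold below are placeholders, and the deep LSTM trend model and the real PJM data are not included.

```python
# Minimal sketch of the peaks-over-threshold step used to model load spikes
# (illustrative; the deep spatio-temporal predictor is not shown).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
load = rng.gamma(shape=5.0, scale=100.0, size=10_000)   # synthetic hourly loads

u = np.quantile(load, 0.95)                             # pre-specified high threshold
exceedances = load[load > u] - u                        # tail data above the threshold

# Fit a generalized Pareto distribution to the exceedances (EVT tail model)
xi, loc, sigma = genpareto.fit(exceedances, floc=0.0)
p_spike = 1 - genpareto.cdf(200.0, xi, loc=loc, scale=sigma)
print(f"threshold={u:.1f}, shape={xi:.3f}, scale={sigma:.1f}, "
      f"P(exceedance > 200 | above threshold)={p_spike:.3f}")
```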
Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict, and control. In this paper, we develop a novel experience-driven approach that can learn to control a communication network well from its own experience rather than from an accurate mathematical model, just as a human learns a new skill (such as driving or swimming). Specifically, we propose, for the first time, to leverage emerging Deep Reinforcement Learning (DRL) to enable model-free control in communication networks, and we present a novel and highly effective DRL-based control framework, DRL-TE, for a fundamental networking problem: Traffic Engineering (TE). The proposed framework maximizes a widely used utility function by jointly learning the network environment and its dynamics, and making decisions under the guidance of powerful Deep Neural Networks (DNNs). We propose two new techniques, TE-aware exploration and actor-critic-based prioritized experience replay, to optimize the general DRL framework particularly for TE. To validate and evaluate the proposed framework, we implemented it in ns-3 and tested it comprehensively with both representative and randomly generated network topologies. Extensive packet-level simulation results show that 1) compared to several widely used baseline methods, DRL-TE significantly reduces end-to-end delay and consistently improves the network utility, while offering better or comparable throughput; 2) DRL-TE is robust to network changes; and 3) DRL-TE consistently outperforms a state-of-the-art DRL method for continuous control, Deep Deterministic Policy Gradient (DDPG), which does not offer satisfactory performance.
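As an illustration of one of the two techniques named above, the following is a minimal proportional prioritized experience replay buffer; DRL-TE's actor-critic networks and TE-aware exploration are not shown, and the hyperparameters are assumptions.

```python
# Minimal sketch of a proportional prioritized experience replay buffer
# (illustrative; not DRL-TE's actual implementation).
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prio = [], []

    def add(self, transition, td_error):
        p = (abs(td_error) + 1e-6) ** self.alpha        # priority from the TD error
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.prio.pop(0)
        self.data.append(transition); self.prio.append(p)

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.prio) / sum(self.prio)
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-beta)   # importance-sampling weights
        return [self.data[i] for i in idx], weights / weights.max(), idx

buf = PrioritizedReplay(capacity=1000)
for t in range(64):
    buf.add((f"s{t}", "a", 0.0, f"s{t+1}"), td_error=np.random.randn())
batch, w, idx = buf.sample(8)
print(len(batch), w.round(2))
```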
When deploying resource-intensive signal processing applications in wireless sensor or mesh networks, distributing processing blocks over multiple nodes becomes promising. Such distributed applications need to solve the placement problem (which block to run on which node), the routing problem (which link between blocks to map on which path between nodes), and the scheduling problem (which transmission is active when). We investigate a variant where the application graph may contain feedback loops and we exploit wireless networks' inherent multicast advantage. Thus, we propose Multicast-Aware Routing for Virtual network Embedding with Loops in Overlays (MARVELO) to find efficient solutions for scheduling and routing under a detailed interference model. We cast this as a mixed integer quadratically constrained optimisation problem and provide an efficient heuristic. Simulations show that our approach handles complex scenarios quickly.
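A greedy flavor of the placement subproblem can be sketched as follows; the hop costs, CPU capacities, and application graph are toy values, and MARVELO's joint routing/scheduling formulation and multicast-aware interference model are not captured here.

```python
# Minimal sketch of a greedy placement heuristic for mapping processing blocks
# onto wireless nodes (illustrative only; not MARVELO's MIQCP-based solution).

# Hop-cost matrix between three wireless nodes and their CPU capacities
hop_cost = {(i, j): c for i, j, c in
            [(0, 0, 0), (0, 1, 1), (0, 2, 2), (1, 0, 1), (1, 1, 0),
             (1, 2, 1), (2, 0, 2), (2, 1, 1), (2, 2, 0)]}
capacity = {0: 2, 1: 2, 2: 1}

# Application graph: block -> blocks it sends data to (includes a feedback edge)
app_edges = {"src": ["filter"], "filter": ["fuse"], "fuse": ["src"]}
cpu_demand = {"src": 1, "filter": 1, "fuse": 1}

placement, used = {}, {n: 0 for n in capacity}
for block in ["src", "filter", "fuse"]:                  # place blocks in a fixed order
    def comm_cost(node):
        # cost of links to already-placed neighbours in the application graph
        neighbours = set(app_edges.get(block, [])) | {b for b, ts in app_edges.items()
                                                      if block in ts}
        return sum(hop_cost[(node, placement[b])] for b in neighbours if b in placement)
    feasible = [n for n in capacity if used[n] + cpu_demand[block] <= capacity[n]]
    best = min(feasible, key=comm_cost)
    placement[block] = best
    used[best] += cpu_demand[block]

print(placement)                                         # e.g. {'src': 0, 'filter': 0, 'fuse': 1}
```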