In this paper, we investigate the resource allocation design for integrated sensing and communication (ISAC) in distributed antenna networks (DANs). In particular, coordinated by a central processor (CP), a set of remote radio heads (RRHs) provide communication services to multiple users and sense several target locations within an ISAC frame. To avoid severe interference between the information transmission and the radar echo, we propose to divide the ISAC frame into a communication phase and a sensing phase. During the communication phase, the data signal is generated at the CP and then conveyed to the RRHs via fronthaul links. In the sensing phase, based on pre-determined RRH-target pairings, each RRH senses a dedicated target location with a synthesized highly directional beam and then transfers the samples of the received echo to the CP via its fronthaul link for further processing of the sensing information. Taking into account the limited fronthaul capacity and the quality-of-service requirements of both communication and sensing, we jointly optimize the durations of the two phases, the information beamforming, and the covariance matrix of the sensing signal to minimize the total energy consumption over a given finite time horizon. To solve the formulated non-convex design problem, we develop a low-complexity alternating optimization algorithm that converges to a suboptimal solution. Simulation results show that the proposed scheme achieves significant energy savings compared to two baseline schemes. Moreover, our results reveal that, for efficient ISAC in wireless networks, energy-focused short-duration pulses are favorable for sensing, while low-power long-duration signals are preferable for communication.
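As a rough, hypothetical illustration of the alternating-optimization structure described above, the Python sketch below alternates between updating the sensing and communication durations of a single frame; the paper's actual objective, beamforming and covariance variables, fronthaul constraints, and per-block solvers are not reproduced, and all constants are placeholders. It also hints at why an energy-focused short sensing pulse and a low-power long communication signal emerge.

import numpy as np

T_frame = 1.0    # total time horizon [s] (placeholder)
g = 2.0          # effective communication channel gain (placeholder)
R_min = 3.0      # required data volume per frame [bit/Hz] (placeholder)
S_min = 0.5      # required accumulated sensing energy [J] (placeholder)
P_max = 5.0      # peak sensing power [W] (placeholder)

def comm_energy(t_c):
    # Minimum communication energy that satisfies t_c * log2(1 + g * p) >= R_min.
    p = (2.0 ** (R_min / t_c) - 1.0) / g
    return p * t_c

t_c, t_s = 0.4 * T_frame, 0.6 * T_frame
for _ in range(20):
    # Block 1: with a peak-power limit, the shortest pulse that accumulates
    # S_min of sensing energy minimizes the sensing duration.
    t_s = min(T_frame - t_c, max(S_min / P_max, 1e-3))
    # Block 2: given t_s, search the communication duration over the remaining
    # time; stretching the signal lowers the power needed for the same rate.
    grid = np.linspace(0.05, T_frame - t_s, 400)
    t_c = grid[np.argmin([comm_energy(tc) for tc in grid])]

print(f"t_comm={t_c:.3f}s, t_sense={t_s:.3f}s, total energy={comm_energy(t_c) + S_min:.3f}J")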
New technologies for sensing and communication act as enablers for cooperative driving applications. Sensors are able to detect objects in the surrounding environment, and information such as their current location is exchanged among vehicles. In order to cope with the vehicles' mobility, such information is required to be as fresh as possible for proper operation of cooperative driving applications. The age of information (AoI) has been proposed as a metric for evaluating the freshness of information, recently also within the context of intelligent transportation systems (ITS). We investigate mechanisms to reduce the AoI of data transported in the form of beacon messages while controlling their emission rate. We aim to balance packet collision probability and beacon frequency using the average peak age of information (PAoI) as a metric. This metric, however, only accounts for the generation time of the data and not for application-specific aspects, such as the location of the transmitting vehicle. We thus propose a new way of interpreting the AoI by considering information context, thereby incorporating vehicles' locations. As an example, we characterize such importance using the orientation and the distance of the involved vehicles. In particular, we introduce a weighting coefficient used in combination with the PAoI to evaluate the information freshness, thus emphasizing information from more important neighbors. We further design the beaconing approach so as to meet a given AoI requirement, thus saving resources on the wireless channel while keeping the AoI minimal. We illustrate the effectiveness of our approach in Manhattan-like urban scenarios, reaching pre-specified targets for the AoI of beacon messages.
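The following minimal Python sketch illustrates the general idea of combining a context-dependent weighting coefficient with the peak AoI; the specific weighting function, parameters, and beacon timings below are hypothetical and not taken from the paper.

import math

def context_weight(distance_m, rel_heading_rad, d_max=200.0):
    # Closer vehicles and vehicles heading toward us receive a larger weight
    # (placeholder weighting; the paper's exact function may differ).
    w_dist = max(0.0, 1.0 - distance_m / d_max)
    w_head = 0.5 * (1.0 + math.cos(rel_heading_rad))   # 1 if approaching head-on
    return w_dist * w_head

def weighted_peak_aoi(reception_times, generation_times, weight):
    # Peak AoI: age right before each new update arrives, scaled by the
    # importance of the transmitting neighbor.
    peaks = [r - g for r, g in zip(reception_times[1:], generation_times[:-1])]
    return weight * (sum(peaks) / len(peaks))

# Example: beacons generated every 100 ms and received 20 ms later,
# from a neighbor 50 m away driving toward us.
gen = [0.0, 0.1, 0.2, 0.3]
rec = [t + 0.02 for t in gen]
w = context_weight(distance_m=50.0, rel_heading_rad=0.0)
print(f"weight={w:.2f}, weighted PAoI={weighted_peak_aoi(rec, gen, w) * 1000:.1f} ms")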
Unmanned aerial vehicles (UAVs) are rapidly gaining traction in various human activities and have become an integral component of the satellite-air-ground-sea (SAGS) integrated network. As high-speed moving objects, UAVs not only have extremely strict requirements on communication delay but also must not be maliciously hijacked and used as weapons by attackers. Therefore, an efficient and secure communication method designed for UAV networks is necessary. We propose ESCM, an efficient and secure communication mechanism for UAV networks. For high efficiency, ESCM provides a routing protocol based on the artificial bee colony (ABC) algorithm to accelerate communications between UAVs. Meanwhile, we use blockchain to guarantee the security of UAV networks. However, in high-mobility networks, blockchain suffers from unstable links, resulting in low consensus efficiency and high communication overhead. Consequently, ESCM introduces a digital twin (DT), which transforms the UAV network into a static network by mapping UAVs from the physical world into cyberspace. This virtual UAV network is called CyberUAV. Then, in CyberUAV, we design a blockchain consensus mechanism based on network coding, named Proof of Network Coding (PoNC). Analysis and simulation show that these modules of ESCM have advantages over existing schemes. Through ablation studies, we further demonstrate that they are indispensable for efficient and secure communication in UAV networks.
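As a hedged illustration of the ABC metaheuristic underlying the routing protocol, the Python sketch below shows the generic employed/onlooker/scout loop on a toy cost function standing in for a candidate-route cost; the population size, abandonment limit, and cost function are placeholders and do not reproduce the paper's routing formulation.

import random

def cost(x):
    return sum(v * v for v in x)   # toy stand-in for a candidate-route cost

DIM, SOURCES, LIMIT, ITERS = 2, 10, 20, 200
LOW, HIGH = -5.0, 5.0

def random_source():
    return [random.uniform(LOW, HIGH) for _ in range(DIM)]

sources = [random_source() for _ in range(SOURCES)]
trials = [0] * SOURCES

def try_neighbor(i):
    # Employed/onlooker move: perturb one coordinate toward a random partner.
    k, d = random.randrange(SOURCES), random.randrange(DIM)
    cand = sources[i][:]
    cand[d] += random.uniform(-1.0, 1.0) * (sources[i][d] - sources[k][d])
    if cost(cand) < cost(sources[i]):
        sources[i], trials[i] = cand, 0
    else:
        trials[i] += 1

for _ in range(ITERS):
    for i in range(SOURCES):                       # employed bees
        try_neighbor(i)
    fits = [1.0 / (1.0 + cost(s)) for s in sources]
    total = sum(fits)
    for _ in range(SOURCES):                       # onlookers pick sources by fitness
        r, acc, pick = random.uniform(0.0, total), 0.0, SOURCES - 1
        for j, f in enumerate(fits):
            acc += f
            if acc >= r:
                pick = j
                break
        try_neighbor(pick)
    for i in range(SOURCES):                       # scouts abandon exhausted sources
        if trials[i] > LIMIT:
            sources[i], trials[i] = random_source(), 0

best = min(sources, key=cost)
print("best candidate:", [round(v, 3) for v in best], "cost:", round(cost(best), 5))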
The important phenomenon of "stickiness" of chaotic orbits in low-dimensional dynamical systems has been investigated for several decades, in view of its applications to various areas of physics, such as classical and statistical mechanics, celestial mechanics, and accelerator dynamics. Most of the work to date has focused on two-degree-of-freedom Hamiltonian models, often represented by two-dimensional (2D) area-preserving maps. In this paper, we extend earlier results using a 4-dimensional extension of the 2D McMillan map, and show that a symplectic model of two coupled McMillan maps also exhibits stickiness phenomena in limited regions of phase space. To this end, we employ probability distributions in the sense of the Central Limit Theorem to demonstrate that, as in the 2D case, sticky regions near the origin are also characterized by "weak" chaos and Tsallis entropy, in sharp contrast to the "strong" chaos that extends over much wider domains and is described by Boltzmann-Gibbs statistics. Remarkably, similar stickiness phenomena have been observed in higher-dimensional Hamiltonian systems around unstable simple periodic orbits at various values of the total energy of the system.
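For readers unfamiliar with the map, the Python sketch below iterates two 2D McMillan maps of the standard form x' = y, y' = -x + 2 mu y / (1 + y^2), joined by a simple symmetric coupling term; the coupling form, parameter values, and initial condition are illustrative assumptions and may differ from the 4D model studied in the paper.

import math

def coupled_mcmillan(state, mu1=1.6, mu2=1.6, eps=0.02):
    # Two McMillan maps with a weak symmetric coupling; with this form the
    # y-updates derive from a single potential, so the map stays symplectic.
    x1, y1, x2, y2 = state
    x1n, x2n = y1, y2
    y1n = -x1 + 2.0 * mu1 * y1 / (1.0 + y1 * y1) + eps * y2
    y2n = -x2 + 2.0 * mu2 * y2 / (1.0 + y2 * y2) + eps * y1
    return (x1n, y1n, x2n, y2n)

# Follow one orbit started near the origin (the "sticky" region discussed above).
state = (0.1, 0.05, -0.08, 0.02)
radii = []
for _ in range(100_000):
    state = coupled_mcmillan(state)
    radii.append(math.sqrt(sum(c * c for c in state)))

print(f"mean distance from origin: {sum(radii) / len(radii):.4f}, "
      f"max excursion: {max(radii):.4f}")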
Very recently, Heng et al. studied a family of extended primitive cyclic codes. It was shown that the supports of all codewords with any fixed nonzero Hamming weight in this code form a 2-design. In this paper, we study this family of extended primitive cyclic codes in more detail. The weight distribution is determined, and the parameters of the related $2$-designs are also given. Moreover, we prove that the codewords with minimum Hamming weight support 3-designs, which gives an affirmative answer to Heng's conjecture.
Low Earth Orbit (LEO) satellites present a compelling opportunity for the establishment of a global quantum information network. However, satellite-based entanglement distribution from a networking perspective has not been fully investigated. Existing works often do not account for satellite movement over time when distributing entanglement and/or do not permit entanglement distribution along inter-satellite links, two shortcomings we address in this paper. We first define a system model that considers both satellite movement over time and inter-satellite links. We next formulate the optimal entanglement distribution (OED) problem under this system model and show how to convert the OED problem in the dynamic physical network into an equivalent problem on a static logical graph, whose solution can be mapped back to the dynamic physical network. We then propose a polynomial-time greedy algorithm for computing satellite-assisted multi-hop entanglement paths. We also design an integer linear programming (ILP)-based algorithm that computes optimal solutions, serving as a baseline against which to study the performance of our greedy algorithm. We present evaluation results to demonstrate the advantage of our model and algorithms.
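A minimal sketch of the greedy idea in Python, assuming a tiny hand-made logical graph with per-edge entanglement capacities; the actual construction of the logical graph from the time-varying constellation and the paper's objective are not reproduced here.

from collections import deque

capacity = {                       # entangled pairs available on each logical link
    ("G1", "S1"): 2, ("S1", "S2"): 1, ("S2", "G2"): 2,
    ("G1", "S3"): 1, ("S3", "G2"): 1,
}
capacity.update({(b, a): c for (a, b), c in list(capacity.items())})  # undirected

def shortest_path(src, dst):
    # Breadth-first search over links that still have spare capacity (min-hop greedy).
    prev, frontier = {src: None}, deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            path = [u]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path[::-1]
        for (a, b), c in capacity.items():
            if a == u and c > 0 and b not in prev:
                prev[b] = u
                frontier.append(b)
    return None

requests = [("G1", "G2"), ("G1", "G2"), ("G1", "G2")]
served = 0
for src, dst in requests:                          # serve requests greedily, one by one
    path = shortest_path(src, dst)
    if path is None:
        continue
    for a, b in zip(path, path[1:]):               # reserve one pair on every hop
        capacity[(a, b)] -= 1
        capacity[(b, a)] -= 1
    served += 1
    print("served", src, "->", dst, "via", " -> ".join(path))
print(f"{served}/{len(requests)} requests served")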
To accelerate distributed training, many gradient compression methods have been proposed to alleviate the communication bottleneck in synchronous stochastic gradient descent (S-SGD), but their efficacy in real-world applications remains unclear. In this work, we first evaluate the efficiency of three representative compression methods (quantization with Sign-SGD, sparsification with Top-k SGD, and low-rank compression with Power-SGD) on a 32-GPU cluster. The results show that these methods do not always outperform well-optimized S-SGD, and can even perform worse, due to their incompatibility with three key system optimization techniques (all-reduce, pipelining, and tensor fusion) in S-SGD. To address this, we propose a novel gradient compression method, called alternate compressed Power-SGD (ACP-SGD), which alternately compresses and communicates low-rank matrices. ACP-SGD not only significantly reduces the communication volume, but also enjoys the three system optimizations like S-SGD. Compared with Power-SGD, the optimized ACP-SGD can largely reduce the compression and communication overheads while achieving similar model accuracy. In our experiments, ACP-SGD achieves average speedups of 4.06x and 1.43x over S-SGD and Power-SGD, respectively, and it consistently outperforms other baselines across different setups (from 8 GPUs to 64 GPUs and from 1Gb/s Ethernet to 100Gb/s InfiniBand).
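The following Python sketch conveys the alternating low-rank compression idea in the spirit of Power-SGD/ACP-SGD: in alternate iterations only one of the two low-rank factors is recomputed and (conceptually) communicated. The shapes, rank, synthetic gradients, and single-process simulation are placeholders rather than the paper's implementation, which integrates all-reduce, pipelining, and tensor fusion.

import numpy as np

rng = np.random.default_rng(0)
n, m, rank = 256, 128, 4
base_U = rng.standard_normal((n, rank))            # shared low-rank structure

P = np.zeros((n, rank))
Q = rng.standard_normal((m, rank))                 # persistent right factor
for step in range(6):
    # Synthetic gradient that (approximately) lives in a fixed low-rank subspace.
    grad = base_U @ rng.standard_normal((rank, m)) + 0.01 * rng.standard_normal((n, m))
    if step % 2 == 0:
        # Even step: compute and (conceptually) communicate the left factor P.
        P, _ = np.linalg.qr(grad @ Q)              # orthonormal n x rank
        payload = P
    else:
        # Odd step: compute and (conceptually) communicate the right factor Q.
        Q = grad.T @ P                             # m x rank
        payload = Q
    approx = P @ Q.T                               # low-rank gradient estimate
    rel_err = np.linalg.norm(grad - approx) / np.linalg.norm(grad)
    print(f"step {step}: sent {payload.size} values vs {grad.size} uncompressed, "
          f"rel. error {rel_err:.3f}")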
This paper addresses two fundamental problems in diffusive molecular communication: characterizing the first arrival position (FAP) density and determining the information transmission capacity of FAP channels. Previous studies of FAP channel models, which mostly characterize the channel through the density function of the noise, have been limited to specific spatial dimensions, drift directions, and receiver geometries. In response, we propose a unified solution for identifying the FAP density in molecular communication systems with fully-absorbing receivers. Leveraging stochastic analysis tools, we derive a concise expression with universal applicability, covering any spatial dimension, drift direction, and receiver shape. We demonstrate that several existing FAP density formulas are special cases of this new expression. Concurrently, we establish explicit upper and lower bounds on the capacity of three-dimensional, vertically-drifted FAP channels, drawing inspiration from vector Gaussian interference channels. In the course of deriving these bounds, we obtain an explicit analytical expression for the characteristic function of vertically-drifted FAP noise distributions, providing a more compact characterization than the density function. Notably, this expression sheds light on previously undiscovered weak stability properties intrinsic to vertically-drifted FAP noise distributions.
The aim of this work is to develop a fully-distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents so that it matches the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
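A toy single-process simulation of the distributed training idea, illustrative only: each "agent" aggregates its own nodes' neighborhoods and contributes a local gradient to an averaged descent step on the shared GCN weights. The graph, partition, feature sizes, and loss below are hypothetical placeholders, and the actual inter-agent communication and topology design of the paper are not reproduced.

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
A_hat = A + np.eye(4)                              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt           # symmetric normalization

X = rng.standard_normal((4, 3))                    # node features
y = rng.standard_normal((4, 2))                    # regression targets
W = rng.standard_normal((3, 2)) * 0.1              # shared GCN weights
parts = [np.array([0, 1]), np.array([2, 3])]       # nodes owned by each agent

for step in range(200):
    grads = []
    for nodes in parts:
        # Local forward pass: each agent aggregates its own nodes' neighborhoods
        # (in a real deployment, the rows of X needed from other agents would be
        # exchanged over the sparse inter-agent network).
        agg = A_norm[nodes] @ X                    # local aggregation
        pred = agg @ W                             # linear GCN layer (no nonlinearity)
        err = pred - y[nodes]
        grads.append(agg.T @ err / len(nodes))     # local gradient wrt W
    W -= 0.1 * np.mean(grads, axis=0)              # averaged descent step

loss = np.mean((A_norm @ X @ W - y) ** 2)
print(f"final training loss: {loss:.4f}")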
The demand for artificial intelligence has grown significantly over the last decade, and this growth has been fueled by advances in machine learning techniques and the ability to leverage hardware acceleration. However, in order to increase the quality of predictions and render machine learning solutions feasible for more complex applications, a substantial amount of training data is required. Although small machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in the computation power of computing machinery, there is a need to distribute the machine learning workload across multiple machines, turning centralized training into a distributed system. These distributed systems present new challenges, first and foremost the efficient parallelization of the training process and the creation of a coherent model. This article provides an extensive overview of the current state of the art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the systems that are available.
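As a minimal sketch of the data-parallel pattern discussed in this overview (simulated in one process; the synthetic dataset, model, and learning rate are placeholders), workers compute gradients on their own data shards and a synchronized update averages them, as a parameter server or an all-reduce collective would.

import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])
X = rng.standard_normal((1200, 3))
y = X @ w_true + 0.01 * rng.standard_normal(1200)

workers = np.array_split(np.arange(len(X)), 4)     # shard the data over 4 workers
w = np.zeros(3)
for step in range(100):
    grads = []
    for idx in workers:
        Xi, yi = X[idx], y[idx]
        grads.append(2.0 * Xi.T @ (Xi @ w - yi) / len(idx))   # local gradient
    w -= 0.1 * np.mean(grads, axis=0)              # averaged (synchronous) update

print("learned weights:", np.round(w, 3))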
In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now be run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and only uploads computation results, rather than the original data, to contribute to the optimization of the global model. This architecture can not only relieve the computation and storage burden on servers, but also protect users' sensitive information. Another benefit is the reduction in bandwidth, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely-used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.
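A minimal sketch, under assumed data and hyperparameters, of the pattern described above: devices train locally on data that never leaves them and upload only model updates, which a server aggregates into the global model. The number of devices, local steps, and learning rate are placeholders, not values from any surveyed method.

import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([1.5, -0.5])
devices = []
for _ in range(5):                                  # each device keeps its data locally
    Xd = rng.standard_normal((200, 2))
    devices.append((Xd, Xd @ w_true + 0.05 * rng.standard_normal(200)))

w_global = np.zeros(2)
for rnd in range(20):                               # communication rounds
    updates = []
    for Xd, yd in devices:
        w = w_global.copy()
        for _ in range(5):                          # several local SGD steps on-device
            w -= 0.05 * 2.0 * Xd.T @ (Xd @ w - yd) / len(yd)
        updates.append(w - w_global)                # upload only the model delta
    w_global += np.mean(updates, axis=0)            # server aggregates the deltas

print("global model:", np.round(w_global, 3))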