In this paper, we investigate the resource allocation design for integrated sensing and communication (ISAC) in distributed antenna networks (DANs). In particular, coordinated by a central processor (CP), a set of remote radio heads (RRHs) provide communication services to multiple users and sense several target locations within an ISAC frame. To avoid severe interference between the information transmission and the radar echoes, we propose to divide the ISAC frame into a communication phase and a sensing phase. During the communication phase, the data signal is generated at the CP and then conveyed to the RRHs via fronthaul links. During the sensing phase, based on pre-determined RRH-target pairings, each RRH senses a dedicated target location with a synthesized highly directional beam and then transfers the samples of the received echo to the CP via its fronthaul link for further processing of the sensing information. Taking into account the limited fronthaul capacity and the quality-of-service requirements of both communication and sensing, we jointly optimize the durations of the two phases, the information beamforming, and the covariance matrix of the sensing signal to minimize the total energy consumption over a given finite time horizon. To solve the formulated non-convex design problem, we develop a low-complexity alternating optimization algorithm that converges to a suboptimal solution. Simulation results show that the proposed scheme achieves significant energy savings compared to two baseline schemes. Moreover, our results reveal that for efficient ISAC in wireless networks, energy-focused short-duration pulses are favorable for sensing, while low-power long-duration signals are preferable for communication.
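As a minimal numerical illustration of the last observation (our own sketch, not taken from the paper), the snippet below compares the energy needed to deliver a fixed number of bits over an AWGN link as the transmission duration grows; the bandwidth, noise level, and bit target are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper)
W = 1e6          # bandwidth [Hz]
N0 = 1e-17       # noise power spectral density [W/Hz]
bits = 1e6       # payload to deliver within the horizon [bits]

# Energy to deliver `bits` in time T over an AWGN channel:
# rate = W*log2(1 + P/(N0*W))  =>  P = N0*W*(2^(bits/(W*T)) - 1)
T = np.linspace(0.1, 10.0, 200)            # candidate durations [s]
P = N0 * W * (2 ** (bits / (W * T)) - 1)   # required transmit power [W]
E_comm = P * T                             # total communication energy [J]

print(f"E_comm at T={T[0]:.1f}s : {E_comm[0]:.3e} J")
print(f"E_comm at T={T[-1]:.1f}s: {E_comm[-1]:.3e} J")
# E_comm decreases monotonically in T: low-power, long-duration signals are
# energy-efficient for communication. Sensing, in contrast, requires a fixed
# echo energy E_s = P_s * T_s, so a shorter, higher-power pulse costs the same
# energy while freeing more of the frame for communication.
```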
Graph Convolutional Networks (GCNs) are extensively utilized for deep learning on graphs. The large data sizes of graphs and their vertex features make scalable training algorithms and distributed memory systems necessary. Since the convolution operation on graphs induces irregular memory access patterns, designing a memory- and communication-efficient parallel algorithm for GCN training poses unique challenges. We propose a highly parallel training algorithm that scales to large processor counts. In our solution, the large adjacency and vertex-feature matrices are partitioned among processors. We exploit the vertex partitioning of the graph to use non-blocking point-to-point communication operations between processors for better scalability. To further minimize the parallelization overheads, we introduce a sparse matrix partitioning scheme based on a hypergraph partitioning model for full-batch training. We also propose a novel stochastic hypergraph model to encode the expected communication volume in mini-batch training. We show the merits of the hypergraph model, previously unexplored for GCN training, over the standard graph partitioning model, which does not accurately encode the communication costs. Experiments performed on real-world graph datasets demonstrate that the proposed algorithms achieve considerable speedups over alternative solutions. The reductions in communication cost become even more pronounced at large scale with many processors. The performance benefits are preserved in deeper GCNs with more layers as well as on billion-scale graphs.
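For readers unfamiliar with how vertex-partitioned feature exchange maps onto non-blocking point-to-point communication, here is a minimal mpi4py sketch; the data layout, message pattern, and variable names are illustrative assumptions, not the paper's implementation (in practice the send/receive sets come from the (hyper)graph partition).

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local, f = 4, 8                                        # toy sizes
H_local = np.full((n_local, f), rank, dtype=np.float64)  # this rank's feature rows

# Assumed metadata: every rank needs one boundary feature row from every other rank.
send_reqs, recv_reqs = [], []
recv_buf = {p: np.empty(f, dtype=np.float64) for p in range(size) if p != rank}
for p in range(size):
    if p == rank:
        continue
    send_reqs.append(comm.Isend(H_local[0], dest=p, tag=0))     # post non-blocking sends
    recv_reqs.append(comm.Irecv(recv_buf[p], source=p, tag=0))  # post non-blocking receives

# Overlap opportunity: the purely local part of the sparse aggregation A @ H
# could be computed here while messages are in flight.
MPI.Request.Waitall(send_reqs + recv_reqs)

# Remote rows are now available for the boundary part of the aggregation.
print(rank, {p: buf[0] for p, buf in recv_buf.items()})
```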
We derive minimax testing errors in a distributed framework where the data is split over multiple machines and their communication to a central machine is limited to $b$ bits. We investigate both the $d$- and infinite-dimensional signal detection problem under Gaussian white noise. We also derive distributed testing algorithms that attain the theoretical lower bounds. Our results show that distributed testing is subject to fundamentally different phenomena that are not observed in distributed estimation. Among our findings, we show that testing protocols that have access to shared randomness can perform strictly better in some regimes than those that do not. We also observe that consistent nonparametric distributed testing is always possible, even with as little as $1$ bit of communication, and that the corresponding test outperforms the best local test using only the information available at a single local machine. Furthermore, we derive adaptive nonparametric distributed testing strategies and the corresponding theoretical lower bounds.
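To make the 1-bit setting concrete, the following toy sketch (our own illustration, not the paper's optimal protocol) has each machine send a single bit indicating whether its local statistic exceeds a threshold, and the central machine aggregates the bits with a simple counting test; the sample sizes, thresholds, and null firing rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 50, 100, 20          # machines, samples per machine, dimension (toy)
signal = 0.15                   # set to 0.0 to observe behaviour under the null

def local_bit(X):
    """1 bit: is the local chi-square-type statistic unusually large?"""
    stat = n * np.sum(np.mean(X, axis=0) ** 2)      # ~ chi^2_d under the null
    return int(stat > d + 2.0 * np.sqrt(2 * d))     # roughly a 2-sigma cut

bits = []
for _ in range(m):
    X = rng.normal(loc=signal, scale=1.0, size=(n, d))
    bits.append(local_bit(X))

# Central machine: reject if the number of 1-bits is improbably large under the
# null, where each bit is roughly Bernoulli with a small success probability.
p0 = 0.05                                   # approximate null firing rate (assumption)
threshold = m * p0 + 2.0 * np.sqrt(m * p0 * (1 - p0))
print("reject H0:", sum(bits) > threshold, "| ones:", sum(bits))
```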
Virtual reality (VR) over wireless is expected to be one of the killer applications in next-generation communication networks. Nevertheless, the huge data volume along with stringent requirements on latency and reliability under limited bandwidth resources makes untethered wireless VR delivery increasingly challenging. Such bottlenecks, therefore, motivate this work to seek the potential of using semantic communication, a new paradigm that promises to significantly ease the resource pressure, for efficient VR delivery. To this end, we propose a novel framework, namely WIreless SEmantic deliveRy for VR (WiserVR), for delivering consecutive 360{\deg} video frames to VR users. Specifically, multiple deep learning-based modules are devised for the WiserVR transceiver to realize high-performance feature extraction and semantic recovery. Among them, we develop the concept of a semantic location graph and leverage a joint semantic-channel coding method with knowledge sharing to not only substantially reduce communication latency but also guarantee adequate transmission reliability and resilience under various channel states. Moreover, an implementation of WiserVR is presented, followed by initial simulations that evaluate its performance against benchmarks. Finally, we discuss several open issues and offer feasible solutions to unlock the full potential of WiserVR.
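As a rough sketch of what a joint semantic-channel coding module looks like in code, consider the generic autoencoder over an AWGN channel below; this is not the actual WiserVR architecture, and the layer sizes, latent dimension, and noise level are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class JSCC(nn.Module):
    """Toy joint source-channel coder: encode -> power-normalize -> AWGN -> decode."""
    def __init__(self, in_dim=1024, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x, snr_db=10.0):
        z = self.encoder(x)
        z = z / (z.norm(dim=-1, keepdim=True) + 1e-8)     # unit transmit power
        noise_std = 10 ** (-snr_db / 20) / (z.shape[-1] ** 0.5)
        y = z + noise_std * torch.randn_like(z)           # AWGN channel
        return self.decoder(y)

model = JSCC()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 1024)                  # stand-in for extracted frame features
for _ in range(5):                         # a few illustrative training steps
    loss = nn.functional.mse_loss(model(x, snr_db=5.0), x)
    opt.zero_grad(); loss.backward(); opt.step()
print("reconstruction MSE:", loss.item())
```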
Graph Neural Networks (GNNs) are a family of neural models tailored to graph-structured data and have shown superior performance in learning representations of such data. However, training GNNs on large graphs remains challenging, and a promising direction is distributed GNN training, which partitions the input graph and distributes the workload across multiple machines. The key bottleneck of existing distributed GNN training frameworks is the cross-machine communication induced by the graph-data dependencies of the GNN aggregation operator. In this paper, we study the communication complexity of distributed GNN training and propose a simple lossless communication reduction method, termed the Aggregation before Communication (ABC) method. The ABC method exploits the permutation-invariance of GNN layers and leads to a paradigm in which vertex-cut partitioning is proved to achieve better communication performance than the currently popular edge-cut paradigm. In addition, we show that the new partitioning paradigm is particularly well suited to dynamic graphs, where it is infeasible to control edge placement due to the unknown stochastic nature of the graph-changing process.
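A small numerical sketch (our illustration of the idea, using a made-up two-machine partition and a sum aggregator) shows why aggregating before communicating is lossless for a permutation-invariant aggregator and shrinks the transferred payload from one message per cut edge to one partial sum per boundary vertex:

```python
import numpy as np

rng = np.random.default_rng(1)
f = 16                                   # feature width
# Vertex v lives on machine B; it has 5 in-neighbors hosted on machine A.
neighbor_feats_on_A = rng.normal(size=(5, f))

# Naive (communicate, then aggregate): ship all 5 feature rows to machine B.
sent_naive = neighbor_feats_on_A                    # 5 * f floats on the wire
agg_naive = sent_naive.sum(axis=0)

# ABC (aggregate before communication): machine A pre-sums its contribution
# and ships a single partial aggregate for destination vertex v.
sent_abc = neighbor_feats_on_A.sum(axis=0)          # f floats on the wire
agg_abc = sent_abc

assert np.allclose(agg_naive, agg_abc)              # lossless: identical aggregate
print("payload naive:", sent_naive.size, "floats | payload ABC:", sent_abc.size)
```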
Vehicle-to-everything (V2X) communication is expected to support many promising applications in next-generation wireless networks. The recent development of integrated sensing and communications (ISAC) technology offers new opportunities to meet the stringent sensing and communication (S&C) requirements in V2X networks. However, considering the relatively small radar cross section (RCS) of the vehicles and the limited transmit power of the roadside units (RSUs), the power of echoes may be too weak to achieve effective target detection and tracking. To handle this issue, we propose a novel sensing-assisted communication scheme by employing an intelligent omni-surface (IOS) on the surface of the vehicle. First, a two-phase ISAC protocol, comprising an S&C phase and a communication-only phase, is presented to maximize the throughput by jointly optimizing the IOS phase shifts and the sensing duration. Then, we derive a closed-form expression of the achievable rate that provides a good approximation. Furthermore, a necessary and sufficient condition for the existence of the S&C phase is derived to provide useful insights for practical system design. Simulation results demonstrate the effectiveness of the proposed sensing-assisted communication scheme in achieving high throughput with low transmit power requirements.
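The tradeoff behind optimizing the sensing duration can be illustrated with a toy throughput model (entirely our assumption, not the paper's closed-form rate): a longer sensing phase improves beam alignment and thus the rate of the remaining communication-only phase, but leaves less time to communicate.

```python
import numpy as np

T = 10e-3                 # frame length [s] (assumed)
snr0 = 10.0               # post-alignment SNR, linear scale (assumed)
tc = 1e-3                 # time constant of the alignment gain (assumed)
W = 20e6                  # bandwidth [Hz] (assumed)

tau = np.linspace(0, T, 1001)                 # candidate sensing durations
gain = 1.0 - np.exp(-tau / tc)                # toy "alignment quality" in [0, 1)
rate = W * np.log2(1.0 + gain * snr0)         # rate after sensing [bit/s]
throughput = (T - tau) * rate                 # bits delivered in the rest of the frame

best = np.argmax(throughput)
print(f"best sensing duration ~ {tau[best]*1e3:.2f} ms, "
      f"throughput ~ {throughput[best]/1e3:.1f} kbit")
```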
The recent development of integrated sensing and communications (ISAC) technology offers new opportunities to meet high-throughput and low-latency communication as well as high-resolution localization requirements in vehicular networks. However, considering the limited transmit power of the roadside units (RSUs) and the relatively small radar cross section (RCS) of vehicles with random reflection coefficients, the power of echo signals may be too weak to be utilized for effective target detection and tracking. Moreover, high-frequency signals usually suffer from large fading loss when penetrating vehicles, which seriously degrades the quality of communication services inside the vehicles. To handle this issue, we propose a novel sensing-assisted communication mechanism by employing an intelligent omni-surface (IOS) on the surface of vehicles to enhance both sensing and communication (S&C) performance. To this end, we first propose a two-stage ISAC protocol, comprising a joint S&C stage and a communication-only stage, to reap communication performance gains from sensing more efficiently. The achievable communication rate maximization problem is formulated by jointly optimizing the transmit beamforming, the IOS phase shifts, and the duration of the joint S&C stage. However, solving this ISAC optimization problem is highly non-trivial since inaccurate estimation and measurement information leaves the achievable rate without a closed-form expression. To handle this issue, we first derive a closed-form expression of the achievable rate under uncertain location information, and then unveil a necessary and sufficient condition for the existence of the joint S&C stage to offer useful insights for practical system design. Moreover, two typical scenarios, the interference-limited and noise-limited cases, are analyzed.
Integrated information theory (IIT) is a theoretical framework that provides a quantitative measure to estimate when a physical system is conscious, its degree of consciousness, and the complexity of the qualia space that the system is experiencing. Formally, IIT rests on the assumption that if a surrogate physical system can fully embed the phenomenological properties of consciousness, then the system properties must be constrained by the properties of the qualia being experienced. Following this assumption, IIT represents the physical system as a network of interconnected elements that can be thought of as a probabilistic causal graph, $\mathcal{G}$, where each node has an input-output function and the whole graph is encoded in a transition probability matrix. Consequently, IIT's quantitative measure of consciousness, $\Phi$, is computed with respect to the transition probability matrix and the present state of the graph. In this paper, we provide a random search algorithm that optimizes $\Phi$ in order to investigate, as the number of nodes increases, the structure of graphs with higher $\Phi$. We also provide arguments that show the difficulties of applying more complex black-box search algorithms, such as Bayesian optimization or metaheuristics, to this particular problem. Additionally, we suggest specific research lines for these techniques to enhance search algorithms that guarantee maximal $\Phi$.
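The random search component can be sketched as follows; the $\Phi$ evaluation is a placeholder callable (in practice one would plug in an IIT implementation such as PyPhi), and the node count, sampling distribution, and iteration budget are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_nodes = 4
n_iters = 200

def compute_phi(tpm, state):
    """Placeholder for an IIT Phi evaluation; a dummy score so the sketch runs."""
    return float(tpm.std())

best_phi, best_tpm = -np.inf, None
state = tuple(rng.integers(0, 2, size=n_nodes))          # present state of the graph
for _ in range(n_iters):
    # Sample a random state-by-node transition probability matrix.
    tpm = rng.random((2 ** n_nodes, n_nodes))
    phi = compute_phi(tpm, state)
    if phi > best_phi:
        best_phi, best_tpm = phi, tpm

print(f"best (placeholder) Phi after {n_iters} samples: {best_phi:.4f}")
```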
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training remain only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques used therein. First, distributed GNN training approaches are classified into several categories according to their workflows; in addition, their computational and communication patterns, as well as the optimization techniques proposed in recent work, are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.
The aim of this work is to develop a fully distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents so that it matches the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
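A bare-bones sketch of a distributed gradient descent step over a sparse agent topology is shown below on a toy least-squares problem with a made-up ring network and uniform mixing weights; this is not the paper's GCN model, only an illustration of the consensus-plus-local-gradient pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, d = 6, 5
w_true = rng.normal(size=d)

# Each agent holds a local dataset (A_i, b_i) and a local parameter copy.
A = [rng.normal(size=(20, d)) for _ in range(n_agents)]
b = [A_i @ w_true + 0.01 * rng.normal(size=20) for A_i in A]
w = [np.zeros(d) for _ in range(n_agents)]

# Ring topology with uniform mixing weights (doubly stochastic matrix).
M = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    M[i, i] = 1 / 3
    M[i, (i - 1) % n_agents] = M[i, (i + 1) % n_agents] = 1 / 3

lr = 1e-2
for _ in range(300):
    grads = [A[i].T @ (A[i] @ w[i] - b[i]) / len(b[i]) for i in range(n_agents)]
    # Consensus step (mix with neighbors) followed by a local gradient step.
    w = [sum(M[i, j] * w[j] for j in range(n_agents)) - lr * grads[i]
         for i in range(n_agents)]

print("max distance to w_true:", max(np.linalg.norm(w_i - w_true) for w_i in w))
```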
Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics on HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. The review then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility to new user-designed modules. The paper concludes with open problems in applying HPO to deep learning, a comparison of optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
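For concreteness, a minimal random-search HPO loop can look like the sketch below; the validation objective is a synthetic stand-in for an actual training run, and the search ranges are arbitrary assumptions.

```python
import math
import random

random.seed(0)

def validation_score(lr, hidden_size, dropout):
    """Stand-in for training a model and returning a validation score."""
    return (1.0 - abs(math.log10(lr) + 3) / 4          # prefers lr near 1e-3
            - abs(hidden_size - 128) / 512              # prefers ~128 units
            - abs(dropout - 0.3))                       # prefers ~0.3 dropout

best = None
for _ in range(50):
    cfg = {
        "lr": 10 ** random.uniform(-5, -1),             # log-uniform learning rate
        "hidden_size": random.choice([32, 64, 128, 256, 512]),
        "dropout": random.uniform(0.0, 0.6),
    }
    score = validation_score(**cfg)
    if best is None or score > best[0]:
        best = (score, cfg)

print("best score:", round(best[0], 3), "| config:", best[1])
```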