Distributed computing enables Internet of Vehicles (IoV) services by collaboratively utilizing the computing resources of the network edge and the vehicles. However, two problems hinder the provision of IoV services: computing interruptions caused by frequent edge network handoffs, and a severe shortage of computing resources. High altitude platform station (HAPS) computing can be a promising addition to existing distributed computing frameworks because of its wide coverage and strong computational capabilities. In this regard, this paper proposes an adaptive scheme in a new distributed computing framework that involves HAPS computing to deal with these two problems of the IoV. Based on the diverse demands of vehicles, network dynamics, and the time sensitivity of handoffs, the proposed scheme flexibly divides each task into three parts and assigns them to the vehicle, roadside units (RSUs), and a HAPS for synchronous computing. The scheme also constrains the computing of tasks at RSUs so that they are completed before handoffs, which avoids the risk of computing interruptions. On this basis, we formulate a delay minimization problem that considers the task-splitting ratio, transmit power, bandwidth allocation, and computing resource allocation. To solve the problem, a method based on variable replacement and successive convex approximation is proposed. The simulation results show that the proposed scheme not only avoids the negative effects of handoffs in a flexible manner but also accounts for delay performance and maintains delay stability.
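As a rough illustration of the kind of delay model such a formulation builds on (not the paper's actual optimization, which also allocates power, bandwidth, and computing resources via successive convex approximation), the following Python sketch assumes each task part incurs a transmission delay plus a computing delay, the three parts run in parallel, and the RSU part must finish before the next handoff; all names and numbers are illustrative.

import numpy as np

def task_delay(split, data_bits, cpu_cycles, rate_rsu, rate_haps,
               f_vehicle, f_rsu, f_haps, t_handoff):
    # split = (a_v, a_r, a_h): fractions of the task kept locally,
    # sent to the RSU, and sent to the HAPS (they sum to 1)
    a_v, a_r, a_h = split
    t_vehicle = a_v * cpu_cycles / f_vehicle                       # local computing only
    t_rsu = a_r * data_bits / rate_rsu + a_r * cpu_cycles / f_rsu  # upload + compute
    t_haps = a_h * data_bits / rate_haps + a_h * cpu_cycles / f_haps
    if t_rsu > t_handoff:          # RSU part must finish before the handoff
        return np.inf              # infeasible split under the handoff constraint
    return max(t_vehicle, t_rsu, t_haps)   # the three parts are computed in parallel

# coarse grid search over splits (a stand-in for the convex optimization)
splits = [(a, b, 1.0 - a - b) for a in np.linspace(0, 1, 21)
          for b in np.linspace(0, 1 - a, 21)]
best = min(splits, key=lambda s: task_delay(s, data_bits=2e6, cpu_cycles=1e9,
                                            rate_rsu=20e6, rate_haps=10e6,
                                            f_vehicle=1e9, f_rsu=5e9,
                                            f_haps=2e10, t_handoff=0.3))

The grid search here only serves to show how the handoff constraint and the max-delay objective interact; any split whose RSU portion cannot finish before the handoff is simply ruled out.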
This paper introduces general methodologies for constructing closed-form solutions to several important partial differential equations (PDEs) with polynomial right-hand sides in two and three spatial dimensions. The covered equations include the isotropic and anisotropic Poisson, Helmholtz, Stokes, and elastostatic equations, as well as the time-harmonic linear elastodynamic and Maxwell equations. Polynomial solutions have recently regained significance in the development of numerical techniques for evaluating volume integral operators and have potential applications in certain kinds of Trefftz finite element methods. Our approach to all of these PDEs relates the particular solution to polynomial particular solutions of the Poisson and Helmholtz equations, which can in turn be obtained, respectively, from expansions in homogeneous polynomials and from the Neumann series expansion of the operator $(k^2+\Delta)^{-1}$. No matrix inversion is required to compute the solution. The method naturally incorporates divergence constraints on the solution, such as in the case of the Maxwell and Stokes flow equations. This work is accompanied by a freely available Julia library, \texttt{PolynomialSolutions.jl}, which implements the proposed methodology non-symbolically and provides efficient construction and rapid evaluation of the desired solution.
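The accompanying library is written in Julia, but the terminating Neumann series for the Helmholtz operator is easy to illustrate in a few lines. The following Python/SymPy sketch (the function names are ours, not the library's API) builds a particular polynomial solution of $(\Delta + k^2)u = p$ in 2D by summing $\sum_j (-1)^j \Delta^j p / k^{2(j+1)}$, a series that terminates because the Laplacian eventually annihilates any polynomial.

import sympy as sp

x, y, k = sp.symbols("x y k", positive=True)

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

def helmholtz_particular(p):
    # Terminating Neumann series: u = sum_j (-1)^j Laplacian^j(p) / k^(2(j+1)),
    # which satisfies (Laplacian + k^2) u = p for any polynomial p.
    u, term, j = sp.Integer(0), sp.expand(p), 0
    while term != 0:
        u += (-1)**j * term / k**(2 * (j + 1))
        term = sp.expand(laplacian(term))
        j += 1
    return sp.simplify(u)

p = x**2 * y                       # example polynomial right-hand side
u = helmholtz_particular(p)
assert sp.simplify(laplacian(u) + k**2 * u - p) == 0

No linear system is solved at any point: each term of the series is obtained by repeatedly applying the Laplacian to the polynomial right-hand side.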
Reinforcement learning often needs to deal with the exponential growth of states and actions when exploring optimal control in high-dimensional spaces (often known as the curse of dimensionality). In this work, we address this issue by learning the inherent structure of action-wise similar MDPs to appropriately balance performance degradation against sample/computational complexity. In particular, we partition the action space into multiple groups based on similarity in transition distributions and reward functions, and build a linear decomposition model to capture the intra-group differences in the transition kernels and rewards. Both our theoretical analysis and experiments reveal a \emph{surprising and counter-intuitive result}: while a more refined grouping strategy can reduce the approximation error caused by treating actions in the same group as identical, it also leads to increased estimation error when the sample size or the computational resources are limited. This finding highlights the grouping strategy as a new degree of freedom that can be optimized to minimize the overall performance loss. To address this trade-off, we formulate a general optimization problem for determining the optimal grouping strategy, which strikes a balance between performance loss and sample/computational complexity. We further propose a computationally efficient method for selecting a nearly-optimal grouping strategy, whose computational complexity is independent of the size of the action space.
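To make the trade-off concrete, the sketch below groups actions by a one-dimensional similarity feature and scores each candidate number of groups with an assumed total loss: a within-group approximation term plus an estimation term that grows as samples are spread over more groups. Both error models are illustrative stand-ins for the paper's bounds, not the actual analysis.

import numpy as np

def total_loss(num_groups, action_feature, num_samples, c_est=1.0):
    # Group actions by sorting a 1-D similarity feature and splitting evenly.
    order = np.argsort(action_feature)
    groups = np.array_split(order, num_groups)
    # Approximation error: average within-group spread of the feature
    # (treating all actions in a group as identical ignores this spread).
    approx = np.mean([np.ptp(action_feature[g]) for g in groups])
    # Estimation error: assumed to grow as samples are spread over more groups.
    estim = c_est * np.sqrt(num_groups / num_samples)
    return approx + estim

rng = np.random.default_rng(0)
feature = rng.normal(size=500)                 # one similarity feature per action
candidates = [1, 2, 5, 10, 25, 50, 100, 250]
best_m = min(candidates, key=lambda m: total_loss(m, feature, num_samples=2000))

With a small sample budget the estimation term dominates and a coarse grouping wins; with abundant samples the approximation term dominates and finer groupings become preferable, which is exactly the degree of freedom the abstract highlights.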
The high altitude platform station (HAPS) concept has recently received notable attention from both industry and academia to support future wireless networks. A HAPS can be equipped with 5th generation (5G) and beyond technologies such as massive multiple-input multiple-output (MIMO) and reconfigurable intelligent surfaces (RIS). Hence, it is expected that HAPS will support numerous applications in both rural and urban areas. However, this comes at the expense of high energy consumption and thus shorter loitering time. To tackle this issue, we envision the use of a multi-mode HAPS that can adaptively switch between different modes so as to reduce energy consumption and extend the HAPS loitering time. These modes comprise a HAPS super macro base station (HAPS-SMBS) mode for enhanced computing, caching, and communication services, a HAPS relay station (HAPS-RS) mode for active communication, and a HAPS-RIS mode for passive communication. The multi-mode HAPS ensures that operations rely mostly on the passive communication payload, switching to an energy-greedy active mode only when necessary. In this article, we begin with a brief review of HAPS features compared to other non-terrestrial systems, followed by an exposition of the proposed HAPS modes. Subsequently, we illustrate the envisioned multi-mode HAPS and discuss its benefits and challenges. Finally, we validate the efficiency of the multi-mode operation through a case study.
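A minimal sketch of the adaptive switching logic described above, under illustrative per-mode energy costs and capacities (none of these numbers come from the article): the controller stays in the passive RIS mode and escalates to the active RS or SMBS modes only when traffic demand or computing requests cannot be served passively and the energy budget permits.

from enum import Enum

class HapsMode(Enum):
    RIS = "passive reflection"              # lowest energy consumption
    RS = "active relaying"
    SMBS = "super macro base station"       # computing, caching, communication

def select_mode(demand_mbps, compute_requested, energy_budget_wh,
                ris_capacity_mbps=100, rs_capacity_mbps=500,
                rs_cost_wh=50, smbs_cost_wh=200):
    # Prefer the passive RIS payload; escalate to an active mode only when the
    # demand cannot be served passively and the energy budget allows it.
    if compute_requested and energy_budget_wh >= smbs_cost_wh:
        return HapsMode.SMBS
    if demand_mbps > ris_capacity_mbps:
        if demand_mbps <= rs_capacity_mbps and energy_budget_wh >= rs_cost_wh:
            return HapsMode.RS
        if energy_budget_wh >= smbs_cost_wh:
            return HapsMode.SMBS
    return HapsMode.RIS

print(select_mode(demand_mbps=250, compute_requested=False, energy_budget_wh=300))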
This paper presents the vision of multi-band communication networks (MBN) in 6G, where optical and terahertz (THz) transmissions will coexist with the conventional radio frequency (RF) spectrum. The paper first pinpoints the fundamental challenges in MBN architectures at the PHYsical (PHY) and Medium Access Control (MAC) layers, such as unique channel propagation and estimation issues, user offloading and resource allocation, multi-band transceiver design and antenna systems, mobility and handoff management, and backhauling. We then perform a quantitative performance assessment of the two fundamental MBN architectures, i.e., stand-alone MBN and integrated MBN, considering critical factors such as achievable rate and capital/operational deployment costs. Our results show that stand-alone deployment is prone to higher capital and operational expenses for a predefined data rate requirement. Stand-alone deployment, however, offers flexibility and enables controlling the number of access points in different transmission bands. In addition, we propose a molecular absorption-aware user offloading metric for MBNs and demonstrate its performance gains over conventional user offloading schemes. Finally, open research directions are presented.
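The sketch below illustrates the idea behind an absorption-aware offloading rule, assuming a simplified THz link budget with inverse-square spreading and Beer-Lambert molecular absorption; the function names and parameter values are illustrative and do not reproduce the paper's exact metric.

import numpy as np

def thz_effective_rate(bandwidth_hz, tx_power_w, distance_m, k_abs_per_m,
                       noise_w, antenna_gain=1e4):
    # Simplified THz link budget: inverse-square spreading loss multiplied by
    # Beer-Lambert molecular absorption exp(-k_abs * d).
    path_gain = antenna_gain * np.exp(-k_abs_per_m * distance_m) / distance_m**2
    snr = tx_power_w * path_gain / noise_w
    return bandwidth_hz * np.log2(1.0 + snr)

def offload_user(distance_m, rf_rate_bps, k_abs_per_m):
    # Attach the user to whichever band offers the higher absorption-aware rate.
    thz_rate = thz_effective_rate(bandwidth_hz=10e9, tx_power_w=0.1,
                                  distance_m=distance_m, k_abs_per_m=k_abs_per_m,
                                  noise_w=1e-10)
    return "THz" if thz_rate > rf_rate_bps else "RF"

print(offload_user(distance_m=20.0, rf_rate_bps=200e6, k_abs_per_m=0.05))

Because the absorption coefficient enters the rate exponentially in distance, users far from a THz access point are steered back to RF even when the nominal THz bandwidth is much larger, which is the intuition a conventional distance- or SNR-only offloading rule misses.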
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields owing to their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it employs. First, distributed GNN training is classified into several categories according to their workflows; their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work, are then introduced. Second, the software frameworks and hardware platforms for distributed GNN training are introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.
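As a toy illustration of the communication pattern that distinguishes distributed GNN training from ordinary data-parallel training, the following self-contained Python snippet splits a small ring graph across two notional workers and shows the halo exchange of boundary-node features needed before each aggregation layer; all names and shapes are illustrative and no real distributed framework is used.

import numpy as np

rng = np.random.default_rng(0)
num_nodes, feat_dim = 8, 4
features = rng.normal(size=(num_nodes, feat_dim))
neighbors = {v: [(v - 1) % num_nodes, (v + 1) % num_nodes] for v in range(num_nodes)}
owned = {0: list(range(0, 4)), 1: list(range(4, 8))}      # node ownership per worker

def local_layer(worker, weights):
    own = set(owned[worker])
    halo = {n for v in owned[worker] for n in neighbors[v]} - own
    # In a real system the halo features arrive over the network; here the
    # "communication" is just indexing into the global feature array.
    feats = {n: features[n] for n in own | halo}
    agg = np.array([np.mean([feats[n] for n in neighbors[v]], axis=0)
                    for v in owned[worker]])
    return np.maximum(agg @ weights, 0.0)                  # mean-aggregate + ReLU

W = rng.normal(size=(feat_dim, feat_dim))
hidden = np.vstack([local_layer(0, W), local_layer(1, W)])  # per-worker outputs

Unlike plain data-parallel DNN training, where workers only synchronize gradients, each GNN layer here additionally requires feature traffic across the partition boundary, which is why partitioning quality and communication scheduling dominate the optimization techniques surveyed.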
Edge computing facilitates low-latency services at the network's edge by distributing computation, communication, and storage resources in the geographic proximity of mobile and Internet-of-Things (IoT) devices. Recent advances in Unmanned Aerial Vehicle (UAV) technologies have opened new opportunities for edge computing in military operations, disaster response, and remote areas where traditional terrestrial networks are limited or unavailable. In such environments, UAVs can be deployed as aerial edge servers or relays to facilitate edge computing services. This form of computing is also known as UAV-enabled Edge Computing (UEC), and it offers several unique benefits such as mobility, line-of-sight, flexibility, computational capability, and cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices are typically very limited in the context of UEC. Efficient resource management is, therefore, a critical research challenge in UEC. In this article, we present a survey of existing research on UEC from the resource management perspective. We identify a conceptual architecture, different types of collaboration, wireless communication models, research directions, key techniques, and performance indicators for resource management in UEC. We also present a taxonomy of resource management in UEC. Finally, we identify and discuss several open research challenges that can stimulate future research directions for resource management in UEC.
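A common performance indicator in this setting is the latency and energy comparison between local execution on the device and offloading to a UAV-mounted edge server. The sketch below uses a standard toy model (upload delay plus remote computing delay, and the widely used effective-capacitance CPU energy model for local execution) with purely illustrative parameters.

def offload_decision(task_bits, task_cycles, f_device_hz, f_uav_hz,
                     uplink_bps, tx_power_w, kappa=1e-27):
    # Local execution: latency = cycles / frequency, energy = kappa * C * f^2
    # (effective-capacitance CPU energy model).
    t_local = task_cycles / f_device_hz
    e_local = kappa * task_cycles * f_device_hz**2
    # Offloading: upload the input data, then compute on the UAV edge server;
    # the device only spends energy on transmission.
    t_offload = task_bits / uplink_bps + task_cycles / f_uav_hz
    e_offload = tx_power_w * task_bits / uplink_bps
    return ("offload", t_offload, e_offload) if t_offload < t_local \
        else ("local", t_local, e_local)

print(offload_decision(task_bits=5e6, task_cycles=2e9, f_device_hz=1e9,
                       f_uav_hz=10e9, uplink_bps=50e6, tx_power_w=0.5))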
Games and simulators can be a valuable platform for executing complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting the development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of AI algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with AI algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.
Behaviors of the synthetic characters in current military simulations are limited since they are generally generated by rule-based and reactive computational models with minimal intelligence. Such computational models cannot adapt to reflect the experience of the characters, resulting in brittle intelligence for even the most effective behavior models devised via costly and labor-intensive processes. Observation-based behavior model adaptation that leverages machine learning and the experience of synthetic entities in combination with appropriate prior knowledge can address the issues in the existing computational behavior models to create a better training experience in military training simulations. In this paper, we introduce a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior while being aware of human trainees and their needs within a training simulation. This framework brings together three mutually complementary components. The first component is a Unity-based simulation environment - Rapid Integration and Development Environment (RIDE) - supporting One World Terrain (OWT) models and capable of running and supporting machine learning experiments. The second is Shiva, a novel multi-agent reinforcement and imitation learning framework that can interface with a variety of simulation environments, and that can additionally utilize a variety of learning algorithms. The final component is the Sigma Cognitive Architecture that will augment the behavior models with symbolic and probabilistic reasoning capabilities. We have successfully created proof-of-concept behavior models leveraging this framework on realistic terrain as an essential step towards bringing machine learning into military simulations.
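For readers unfamiliar with how a learning framework interfaces with a simulation environment, the following generic, self-contained sketch shows the kind of reset/step loop involved in such experiments; it is purely illustrative and does not represent the RIDE, Shiva, or Sigma APIs.

import random

class ToyScenarioEnv:
    # Stand-in for a training scenario with a reset/step interface: each agent
    # occupies a position on a line and is rewarded for moving toward the centre.
    def __init__(self, num_agents=2, size=10):
        self.num_agents, self.size = num_agents, size
    def reset(self):
        self.pos = [random.randint(0, self.size) for _ in range(self.num_agents)]
        return list(self.pos)
    def step(self, actions):                     # one action in {-1, 0, +1} per agent
        self.pos = [max(0, min(self.size, p + a)) for p, a in zip(self.pos, actions)]
        rewards = [-abs(p - self.size // 2) for p in self.pos]
        done = all(r == 0 for r in rewards)
        return list(self.pos), rewards, done

env = ToyScenarioEnv()
obs = env.reset()
for _ in range(50):                              # random policy as a placeholder learner
    actions = [random.choice((-1, 0, 1)) for _ in obs]
    obs, rewards, done = env.step(actions)
    if done:
        break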
Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper, we first propose Greedy NTPs (GNTPs), an extension to NTPs that addresses their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at https://github.com/uclnlp/gntp.
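The greedy pruning idea can be illustrated with a small NumPy sketch: instead of unifying a goal with every fact in the KB, only the k facts whose embeddings are most similar to the goal embedding are retained for proof expansion. The cosine-similarity scoring below is a simplification of the kernel actually used by NTPs, and all data is synthetic.

import numpy as np

def top_k_facts(goal_emb, fact_embs, k=5):
    # Score every KB fact against the goal with cosine similarity and keep only
    # the k best candidates for proof expansion (the rest are pruned).
    goal = goal_emb / np.linalg.norm(goal_emb)
    facts = fact_embs / np.linalg.norm(fact_embs, axis=1, keepdims=True)
    scores = facts @ goal
    top = np.argsort(-scores)[:k]
    return top, scores[top]

rng = np.random.default_rng(0)
kb_embeddings = rng.normal(size=(100_000, 64))   # embeddings of 100k KB facts
goal_embedding = rng.normal(size=64)             # embedding of the goal to prove
indices, similarities = top_k_facts(goal_embedding, kb_embeddings, k=5)

Restricting the computation graph to these few candidates per goal is what turns the exhaustive unification of the original NTPs into something that scales to large KBs.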
The concept of the smart grid has been introduced as a new vision of the conventional power grid to find an efficient way of integrating green and renewable energy technologies. Along these lines, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to ensure that energy is available anywhere at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large and growing number of connections can be a challenging issue for the traditional centralized grid system. Consequently, the smart grid is undergoing a transformation from its centralized form to a decentralized topology. Meanwhile, blockchain has some excellent features that make it a promising application for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey on the application of blockchain in the smart grid. To this end, we identify the significant security challenges of smart grid scenarios that can be addressed by blockchain, and then present a number of recent blockchain-based research works in the literature that address security issues in the area of the smart grid. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions for applying blockchain to smart grid security issues.