Effective allocation of user transmit power is essential to realizing the advances promised by emerging 6G wireless networks. Recent studies have employed Deep Neural Networks (DNNs) to address this challenge. However, these methods frequently suffer from unfair and computationally inefficient decision-making, rendering them unsuitable for future dynamic services that depend heavily on the participation of each individual user. To address this gap, this paper focuses on transmit power allocation in wireless networks, optimizing $\alpha$-fairness to balance network utilization and user equity. We introduce a novel approach utilizing Kolmogorov-Arnold Networks (KANs), a class of machine learning models that offer lower inference costs than traditional DNNs together with superior explainability. The study provides a comprehensive problem formulation and establishes the NP-hardness of the power allocation problem. Two algorithms are then proposed for dataset generation and decentralized KAN training, offering a flexible framework for achieving various fairness objectives in dynamic 6G environments. Extensive numerical simulations demonstrate the effectiveness of our approach in terms of fairness and inference cost. The results underscore the potential of KANs to overcome the limitations of existing DNN-based methods, particularly in scenarios that demand rapid adaptation and fairness.
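For orientation, the following is a minimal sketch of the standard $\alpha$-fairness utility (Mo and Walrand) that the abstract refers to; the paper's exact formulation and constraints are not given here, so the function signature and example rates are illustrative assumptions only.

```python
import numpy as np

def alpha_fair_utility(rates, alpha):
    """Standard alpha-fairness utility:
    U(x) = x^(1-alpha) / (1-alpha) for alpha != 1, and log(x) for alpha = 1.
    alpha = 0 maximizes raw throughput; alpha -> infinity approaches
    max-min fairness; alpha = 1 gives proportional fairness."""
    rates = np.asarray(rates, dtype=float)
    if np.isclose(alpha, 1.0):
        return float(np.sum(np.log(rates)))
    return float(np.sum(rates ** (1.0 - alpha) / (1.0 - alpha)))

# Toy example: utilities of three users' rates under one power allocation
print(alpha_fair_utility([1.0, 2.0, 4.0], alpha=1.0))  # proportional fairness
print(alpha_fair_utility([1.0, 2.0, 4.0], alpha=2.0))  # harmonic-mean fairness
```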
Recent advances in multi-agent systems have created a rising demand for multi-UAV systems in urban areas, which are invariably subject to static and dynamic obstacles. The presented research introduces a nature-inspired, collision-free control scheme for a multi-UAV system that accounts for obstacle avoidance maneuvers. Inspired by the collective behavior of tilapia fish and pigeons, the framework uses a centralized controller for optimal formation control and recovery, defined by a probabilistic Lloyd's algorithm, and a distributed controller for inter-vehicle collision and obstacle avoidance. The framework is further extended to 3D space with 3D maneuvers. Finally, it is applied to a multi-UAV system in 2D and 3D scenarios, and the obtained results demonstrate the validity of the presented method in the presence of buildings and different types of obstacles.
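Since the formation controller is described as a probabilistic Lloyd's algorithm, a generic Monte-Carlo Lloyd iteration is sketched below for reference; the authors' actual controller, cost weighting, and 3D extension are not reproduced, and the uniform region sampling is an assumption.

```python
import numpy as np

def lloyd_step(agents, samples):
    """One Lloyd iteration: assign sampled points of the coverage region to
    the nearest agent (a Monte-Carlo Voronoi partition), then move each
    agent to the centroid of its cell."""
    d = np.linalg.norm(samples[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)                  # nearest agent per sample
    new_agents = agents.copy()
    for i in range(len(agents)):
        cell = samples[owner == i]
        if len(cell):
            new_agents[i] = cell.mean(axis=0)
    return new_agents

rng = np.random.default_rng(0)
agents = rng.uniform(0, 10, size=(5, 2))      # 5 UAVs in a 10 x 10 region
samples = rng.uniform(0, 10, size=(2000, 2))  # Monte-Carlo region samples
for _ in range(20):
    agents = lloyd_step(agents, samples)      # converges toward a centroidal
                                              # Voronoi configuration
```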
Flocking is a behavior in which multiple agents in a system attempt to stay close to each other while avoiding collision and maintaining a desired formation. It is observed in the natural world and has applications in robotics, including natural disaster search and rescue, wild animal tracking, and perimeter surveillance and patrol. Recently, large language models (LLMs) have displayed an impressive ability to solve various collaboration tasks as individual decision-makers. Solving multi-agent flocking with LLMs would demonstrate their usefulness in situations requiring spatial and decentralized decision-making. Yet, when LLM-powered agents are tasked with implementing multi-agent flocking, they fall short of the desired behavior. After extensive testing, we find that agents with LLMs as individual decision-makers typically converge on the average of their initial positions or diverge from each other. After breaking the problem down, we discover that LLMs cannot meaningfully reason about maintaining a shape or keeping a distance. Solving multi-agent flocking with LLMs would enhance their capacity for collaborative spatial reasoning and lay a foundation for addressing more complex multi-agent tasks. This paper discusses the challenges LLMs face in multi-agent flocking and suggests areas for future improvement and research.
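As a reference for the behavior the LLM agents are expected to reproduce, here is a minimal spring-style flocking update; this is a textbook baseline, not the paper's method, and the desired spacing `d_des` and time step are assumed values.

```python
import numpy as np

def flocking_step(pos, vel, d_des=1.0, dt=0.1):
    """Pairwise spring dynamics: agents repel when closer than d_des and
    attract when farther, jointly yielding cohesion without collision,
    i.e. the 'keep a distance' behavior that LLM agents struggle with."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            diff = pos[j] - pos[i]
            dist = np.linalg.norm(diff)
            if dist > 1e-9:
                acc[i] += (dist - d_des) * diff / dist  # spring toward d_des
    vel = vel + dt * acc
    return pos + dt * vel, vel
```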
The rapid growth in the number of parameters in large language models (LLMs) has significantly increased the cost of fine-tuning and retraining, both of which are necessary to keep models up to date and improve accuracy. Retrieval-Augmented Generation (RAG) offers a promising approach to improving the capabilities and accuracy of LLMs without retraining. Although RAG eliminates the need for continuous retraining to update model knowledge, it incurs a trade-off in the form of slower inference. Consequently, using RAG to enhance the accuracy and capabilities of LLMs involves diverse performance implications and trade-offs that depend on its design. To begin mitigating the performance penalties associated with RAG from a systems perspective, this paper introduces a detailed taxonomy and characterization of the elements of the RAG ecosystem for LLMs, exploring trade-offs in latency, throughput, and memory. Our study reveals underlying inefficiencies in RAG systems deployment that can double time-to-first-token (TTFT) latency and produce unoptimized datastores consuming terabytes of storage.
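To make the TTFT penalty concrete, here is a schematic of a naive serial RAG pipeline; `retriever` and `llm.prefill` are hypothetical placeholder interfaces for illustration, not a real library API.

```python
import time

def rag_answer(query, retriever, llm, k=5):
    """Naive serial RAG: retrieval sits on the critical path, so its
    latency adds directly to time-to-first-token (TTFT), and the retrieved
    context lengthens the prompt, inflating prefill time as well."""
    t0 = time.perf_counter()
    docs = retriever(query, k)               # datastore lookup (e.g., ANN search)
    prompt = "\n".join(docs) + "\n" + query  # longer prompt -> longer prefill
    first_token = llm.prefill(prompt)        # hypothetical prefill interface
    ttft = time.perf_counter() - t0          # retrieval + prefill both counted
    return first_token, ttft
```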
Recurrent neural networks (RNNs) are valued for their computational efficiency and reduced memory requirements on tasks involving long sequence lengths, but they require high memory-processor bandwidth to train. Checkpointing techniques can reduce memory requirements by storing only a subset of intermediate states, the checkpoints, yet they remain rarely used due to the computational overhead of the additional recomputation phase. This work addresses these challenges by introducing memory-efficient gradient checkpointing strategies tailored to the general class of sparse RNNs and Spiking Neural Networks (SNNs). SNNs are energy-efficient alternatives to RNNs thanks to their local, event-driven operation and potential for neuromorphic implementation. We use the Intelligence Processing Unit (IPU) as an exemplary platform for architectures with distributed local memory, exploiting its suitability for sparse and irregular workloads to scale SNN training to long sequence lengths. We find that Double Checkpointing emerges as the most effective method, optimizing the use of local memory resources while minimizing recomputation overhead. This approach reduces dependency on slower large-scale memory access, enabling training on sequences over 10 times longer, or on networks 4 times larger, than previously feasible, with only marginal time overhead. The presented techniques demonstrate significant potential to enhance the scalability and efficiency of training sparse and recurrent networks across diverse hardware platforms, and they highlight the benefits of sparse activations for scalable recurrent neural network training.
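A minimal sketch of segment-wise gradient checkpointing for a recurrent model is shown below using PyTorch; it illustrates the general recomputation trade-off, not the paper's Double Checkpointing scheme or its IPU-specific memory placement, and `cell` and `segment_len` are assumed.

```python
import torch
from torch.utils.checkpoint import checkpoint

def run_rnn_checkpointed(cell, x_seq, h0, segment_len=64):
    """Store only one hidden state per segment; the states inside each
    segment are recomputed during the backward pass. Activation memory
    drops from O(T) to roughly O(T / segment_len + segment_len) at the
    cost of one extra forward pass per segment."""
    h = h0
    for start in range(0, x_seq.size(0), segment_len):  # time-major input
        seg = x_seq[start:start + segment_len]

        def run_segment(h_in, seg=seg):
            for t in range(seg.size(0)):
                h_in = cell(seg[t], h_in)
            return h_in

        h = checkpoint(run_segment, h, use_reentrant=False)
    return h
```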
Vehicular fog computing (VFC) is an important alternative for addressing the existing challenges in intelligent transportation systems (ITS). The main purpose of VFC is to perform computational tasks using various vehicles, whose increasingly powerful computing resources bring computation nearer to the requesting devices. This paper presents a new meta-heuristic algorithm for the task scheduling problem in VFC. Task scheduling in VFC is formulated as a multi-objective optimization problem that aims to reduce both makespan and monetary cost. The proposed method utilizes grey wolf optimization (GWO) and assigns different priorities to static and dynamic fog nodes: dynamic fog nodes represent parked or moving vehicles, while static fog nodes are stationary servers. The tasks requiring the most processing resources are then selected and allocated to fog nodes. The GWO-based method is evaluated extensively, and the effect of the various parameters of the GWO algorithm is analyzed. We also assess the proposed algorithm on a real application and on random data. The outcomes of our experiments confirm that, in comparison to previous works, our algorithm offers the lowest monetary cost.
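The position-update rule of the textbook grey wolf optimizer is sketched below; the paper's multi-objective fitness (makespan plus monetary cost) and its priority scheme for static versus dynamic fog nodes are not shown, so the toy objective here is purely an assumption.

```python
import numpy as np

def gwo_minimize(fitness, dim, n_wolves=20, iters=100, lb=0.0, ub=1.0, seed=0):
    """Textbook GWO: each wolf moves toward the average of the three best
    solutions (alpha, beta, delta); the coefficient a decays linearly from
    2 to 0, shifting the swarm from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        scores = np.apply_along_axis(fitness, 1, X)
        leaders = X[np.argsort(scores)[:3]]          # alpha, beta, delta wolves
        a = 2.0 * (1.0 - t / iters)
        for i in range(n_wolves):
            moves = []
            for L in leaders:
                A = 2 * a * rng.random(dim) - a      # exploration coefficient
                C = 2 * rng.random(dim)
                moves.append(L - A * np.abs(C * L - X[i]))
            X[i] = np.clip(np.mean(moves, axis=0), lb, ub)
    return X[np.argmin(np.apply_along_axis(fitness, 1, X))]

# Toy stand-in for a weighted makespan/cost objective over normalized loads
best = gwo_minimize(lambda x: 0.6 * x.max() + 0.4 * x.sum(), dim=5)
```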
The implementation of 5G and the future deployment of 6G necessitate optical networks with substantial capacity and minimal latency. The dynamic arrival and departure of connection requests in optical networks causes certain central links to experience more traffic and congestion than non-central links. Congested links lead to service blocking despite the availability of resources elsewhere in the network, restricting efficient utilization of network resources. The algorithms available in the literature for balancing load among network links trade blocking performance against algorithmic complexity, thereby increasing service provisioning time. This work proposes a congestion-aware, dynamic routing-based routing, modulation, core, and spectrum assignment (RMCSA) algorithm for space division multiplexing elastic optical networks (SDM-EONs). The algorithm finds alternative candidate paths based on real-time link occupancy metrics to minimize blocking due to link congestion under dynamic traffic scenarios. As a result, it reduces the formation of congestion hotspots that arise from high link-betweenness centrality. We have performed extensive simulations on two realistic network topologies to compare the proposed algorithm with relevant RMCSA algorithms from the literature. The simulation results verify the superior performance of our proposed algorithm over the benchmark Yen's K-shortest paths and K-disjoint shortest paths RMCSA algorithms in terms of connection blocking ratio and spectrum utilization efficiency. To expedite the route-finding process, we also present a novel caching strategy that allows the proposed algorithm to achieve a much shorter service delay than the recently developed adaptive link-weight-based load-balancing RMCSA algorithm.
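The core selection idea, choosing among candidate paths by real-time link occupancy, can be sketched as below; the `occupancy` edge attribute is an assumed stand-in for the paper's occupancy metrics, and the modulation, core, and spectrum assignment stages are omitted entirely.

```python
from itertools import islice
import networkx as nx

def least_congested_path(G, src, dst, k=3):
    """Among the k shortest candidate paths, pick the one whose most-loaded
    link has the lowest occupancy, steering traffic away from hotspot links
    with high betweenness centrality."""
    candidates = list(islice(nx.shortest_simple_paths(G, src, dst), k))
    def bottleneck(path):
        return max(G[u][v]["occupancy"] for u, v in zip(path, path[1:]))
    return min(candidates, key=bottleneck)

G = nx.Graph()
G.add_edge("A", "B", occupancy=0.9); G.add_edge("B", "C", occupancy=0.2)
G.add_edge("A", "D", occupancy=0.3); G.add_edge("D", "C", occupancy=0.4)
print(least_congested_path(G, "A", "C"))  # prefers A-D-C over congested A-B-C
```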
Geo-obfuscation serves as a location privacy protection mechanism (LPPM), enabling mobile users to share obfuscated locations with servers rather than their exact locations. Because the obfuscation process is irreversible, this method protects users' location privacy even when data breaches occur on the server side. To reduce the utility loss caused by data obfuscation, linear programming (LP) is widely employed; however, LP can suffer from a polynomial explosion of decision variables, rendering it impractical in large-scale geo-obfuscation applications. In this paper, we propose a new LPPM, called Locally Relevant Geo-obfuscation (LR-Geo), to optimize geo-obfuscation using LP in a time-efficient manner. This is achieved by confining each user's geo-obfuscation calculation to the locations that are locally relevant (LR) to the user's actual location. Given the potential risk that LR locations disclose a user's actual whereabouts, we let users compute the LP coefficients locally and upload only these coefficients to the server, rather than the LR locations themselves. The server then solves the LP problem based on the received coefficients. Furthermore, we refine the LP framework by incorporating an exponential obfuscation mechanism to guarantee the indistinguishability of the obfuscation distribution across multiple users. Exploiting the constraint structure of the LP formulation, we apply Benders' decomposition to further enhance computational efficiency. Our theoretical analysis confirms that, although the geo-obfuscation is calculated independently for each user, it still satisfies geo-indistinguishability constraints across multiple users with high probability. Finally, experimental results on a real-world dataset demonstrate that LR-Geo outperforms existing geo-obfuscation methods in computational time, data utility, and privacy preservation.
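For context, the baseline geo-indistinguishability LP whose decision-variable explosion LR-Geo addresses can be written as below (in the style of Andrés et al.); the distance matrix, prior, and epsilon are toy assumptions, and LR-Geo's locally relevant restriction and Benders' decomposition are not shown.

```python
import numpy as np
from scipy.optimize import linprog

def geo_obfuscation_lp(dist, prior, eps):
    """Choose Z[x, y] = P(report y | actual x) minimizing expected distortion
    subject to geo-indistinguishability:
        Z[x1, y] <= exp(eps * d(x1, x2)) * Z[x2, y]  for all x1, x2, y.
    With n locations there are n^2 variables and O(n^3) constraints -- the
    scalability bottleneck that motivates locally relevant obfuscation."""
    n = dist.shape[0]
    c = (prior[:, None] * dist).ravel()               # expected utility loss
    A_ub, b_ub = [], []
    for x1 in range(n):
        for x2 in range(n):
            if x1 == x2:
                continue
            for y in range(n):
                row = np.zeros(n * n)
                row[x1 * n + y] = 1.0
                row[x2 * n + y] = -np.exp(eps * dist[x1, x2])
                A_ub.append(row); b_ub.append(0.0)
    A_eq = np.kron(np.eye(n), np.ones(n))             # each row of Z sums to 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=np.ones(n), bounds=(0, 1))
    return res.x.reshape(n, n)

xs = np.arange(4, dtype=float)[:, None]
Z = geo_obfuscation_lp(np.abs(xs - xs.T), np.full(4, 0.25), eps=0.5)
```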
Road traffic forecasting is crucial in real-world intelligent transportation scenarios such as traffic dispatching and path planning in city management and personal traveling. Spatio-temporal graph neural networks (STGNNs) stand out as the mainstream solution for this task; nevertheless, the quadratic complexity of STGNNs based on dynamic spatial modeling has become the bottleneck on large-scale traffic data. From a spatial data management perspective, we present a novel Transformer framework called PatchSTG that efficiently and dynamically models spatial dependencies for large-scale traffic forecasting with interpretability and fidelity. Specifically, we design a novel irregular spatial patching scheme to reduce the number of points involved in the Transformer's dynamic computation. Irregular spatial patching first uses a leaf K-dimensional tree (KDTree) to recursively partition irregularly distributed traffic points into leaf nodes of small capacity, and then merges leaf nodes belonging to the same subtree into equal-occupancy, non-overlapping patches through padding and backtracking. Based on the patched data, depth and breadth attention are used interchangeably in the encoder to dynamically learn local and global spatial knowledge, from points within a patch and from points sharing the same index across patches, respectively. Experimental results on four real-world large-scale traffic datasets show that PatchSTG achieves improvements in training speed and memory utilization of up to $10\times$ and $4\times$, respectively, with state-of-the-art performance.
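The leaf-KDTree partitioning step can be illustrated with the simplified recursion below; the paper's occupancy equalization via padding and backtracking is omitted, and the point coordinates and leaf capacity are assumptions.

```python
import numpy as np

def kd_patches(points, idx=None, capacity=8, depth=0):
    """Recursively split traffic points along alternating coordinate axes
    until each leaf holds at most `capacity` points; the leaves then act as
    spatial patches, so attention runs over patches instead of all points."""
    if idx is None:
        idx = np.arange(len(points))
    if len(idx) <= capacity:
        return [idx]
    axis = depth % points.shape[1]
    order = idx[np.argsort(points[idx, axis])]  # median split on this axis
    mid = len(order) // 2
    return (kd_patches(points, order[:mid], capacity, depth + 1) +
            kd_patches(points, order[mid:], capacity, depth + 1))

pts = np.random.default_rng(0).uniform(size=(100, 2))  # toy sensor coordinates
patches = kd_patches(pts, capacity=8)                  # list of index arrays
```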
The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, given the requirements of edge computation and licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without direct data sharing and exchange, effectively modeling complex spatio-temporal dependencies to improve forecasting remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model, Cross-Node Federated Graph Neural Network (CNFGNN), which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning: data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce communication cost and facilitate computation on edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings, with no extra computation cost on edge devices and modest communication cost.
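The device/server split can be sketched schematically as below; this is a simplified PyTorch illustration of the idea (a local temporal encoder per node, one message-passing round on the server), not CNFGNN's actual encoder-decoder architecture, FedAvg procedure, or alternating optimization loop.

```python
import torch
import torch.nn as nn

class NodeEncoder(nn.Module):
    """Runs locally on each device: encodes only that node's own history,
    so raw data never leaves the node; only the embedding is uploaded."""
    def __init__(self, d_in, d_h):
        super().__init__()
        self.gru = nn.GRU(d_in, d_h, batch_first=True)
    def forward(self, x):                # x: (1, T, d_in) local time series
        _, h = self.gru(x)
        return h[-1]                     # (1, d_h) embedding sent to server

class ServerGNN(nn.Module):
    """Runs on the server: mixes node embeddings along the sensor graph."""
    def __init__(self, d_h):
        super().__init__()
        self.lin = nn.Linear(2 * d_h, d_h)
    def forward(self, H, A):             # H: (N, d_h), A: (N, N) adjacency
        msg = A @ H / A.sum(1, keepdim=True).clamp(min=1)  # mean aggregation
        return torch.relu(self.lin(torch.cat([H, msg], dim=1)))
```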
Path-based relational reasoning over knowledge graphs has become increasingly popular due to a variety of downstream applications, such as question answering in dialogue systems, fact prediction, and recommender systems. In recent years, reinforcement learning (RL) has provided solutions that are more interpretable and explainable than other deep learning models. However, these solutions still face several challenges, including the large action space of the RL agent and the accurate representation of entity neighborhood structure. We address these problems by introducing a type-enhanced RL agent that uses local neighborhood information for efficient path-based reasoning over knowledge graphs. Our solution uses a graph neural network (GNN) to encode the neighborhood information and utilizes entity types to prune the action space. Experiments on real-world datasets show that our method outperforms state-of-the-art RL methods and discovers more novel paths during training.
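The type-based action pruning can be illustrated with the toy sketch below; the knowledge graph triples, type table, and admissible-type set are hypothetical, and the GNN neighborhood encoder is not shown.

```python
def prune_actions(actions, target_types, entity_type):
    """Keep only outgoing edges whose tail entity has a type plausible for
    the query, shrinking the RL agent's action space before path selection."""
    return [(rel, tail) for rel, tail in actions
            if entity_type[tail] in target_types]

# Hypothetical reasoning step: the query expects a City-typed answer
actions = [("born_in", "Paris"), ("award", "Nobel_Prize"), ("spouse", "Marie")]
entity_type = {"Paris": "City", "Nobel_Prize": "Award", "Marie": "Person"}
print(prune_actions(actions, {"City"}, entity_type))  # [('born_in', 'Paris')]
```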