Future wireless communication networks are poised to move beyond data-centric, device-oriented connectivity and offer intelligent, immersive experiences built on task-oriented connections, especially given the rapid development of pre-trained foundation models (PFMs) and the evolving vision of 6G native artificial intelligence (AI). Redefining the modes of collaboration between devices and servers and constructing native intelligence libraries therefore become critically important in 6G. In this paper, we analyze the challenges of achieving 6G native AI from the perspectives of data, intelligence, and networks. We then propose a 6G native AI framework based on foundation models, provide a customization approach for intent-aware PFMs, present the construction of a task-oriented AI toolkit, and outline a novel cloud-edge-end collaboration paradigm. As a practical use case, we apply this framework to orchestration for maximizing the sum rate in a wireless communication system and present preliminary evaluation results. Finally, we outline research directions for achieving native AI in 6G.
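As a rough illustration of the orchestration objective (the abstract does not specify the system model, so the symbols below are assumptions), maximizing the sum rate over $K$ links with transmit powers $p_k$, channel gains $h_k$, noise power $\sigma^2$, interference $I_k$, and a total power budget $P_{\max}$ can be written as
\[
\max_{\{p_k \ge 0\}} \; \sum_{k=1}^{K} \log_2\!\left(1 + \frac{p_k |h_k|^2}{\sigma^2 + I_k}\right)
\quad \text{s.t.} \quad \sum_{k=1}^{K} p_k \le P_{\max},
\]
which is the kind of objective the PFM-driven orchestration would be asked to optimize.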
Recent advances in generative models, and in particular generative adversarial networks (GANs), have led to substantial progress in controlled image editing, especially compared with the pre-deep-learning era. Despite their powerful ability to apply realistic modifications to an image, these methods often lack properties such as disentanglement (the capacity to edit attributes independently). In this paper, we propose an auto-encoder that re-organizes the latent space of StyleGAN so that each attribute we wish to edit corresponds to an axis of the new latent space, and furthermore so that the latent axes are decorrelated, encouraging disentanglement. We work in a compressed version of the latent space, obtained via Principal Component Analysis, which reduces the parameter complexity of our autoencoder and leads to short training times ($\sim$ 45 mins). Qualitative and quantitative results demonstrate the editing capabilities of our approach, with greater disentanglement than competing methods, while maintaining fidelity to the original image with respect to identity. Our autoencoder architecture is simple and straightforward, facilitating implementation.
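A minimal sketch of the idea, assuming StyleGAN latents are first compressed with PCA and then re-encoded so that the first few axes of the new space are tied to attribute labels while off-diagonal covariance is penalized (dimensions, layer sizes, and the decorrelation penalty below are illustrative assumptions, not the authors' exact architecture):

\begin{verbatim}
import torch
import torch.nn as nn

# Hypothetical dimensions: PCA-compressed StyleGAN latents of size 64,
# re-encoded into a 32-d space whose first NUM_ATTRS axes follow attributes.
PCA_DIM, NEW_DIM, NUM_ATTRS = 64, 32, 8

class LatentReorganizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(PCA_DIM, 128), nn.ReLU(),
                                     nn.Linear(128, NEW_DIM))
        self.decoder = nn.Sequential(nn.Linear(NEW_DIM, 128), nn.ReLU(),
                                     nn.Linear(128, PCA_DIM))

    def forward(self, z_pca):
        y = self.encoder(z_pca)        # new latent: axis i <-> attribute i
        return y, self.decoder(y)      # reconstruction back to PCA space

def training_loss(model, z_pca, attrs):
    """attrs: (batch, NUM_ATTRS) attribute labels supervising the first axes."""
    y, z_rec = model(z_pca)
    rec = ((z_rec - z_pca) ** 2).mean()                   # reconstruction term
    attr = ((y[:, :NUM_ATTRS] - attrs) ** 2).mean()       # axis-attribute alignment
    yc = y - y.mean(0, keepdim=True)
    cov = (yc.T @ yc) / max(len(y) - 1, 1)
    decor = (cov - torch.diag(torch.diag(cov))).pow(2).mean()  # decorrelation penalty
    return rec + attr + decor
\end{verbatim}

The decorrelation term drives the covariance of the new latent axes toward a diagonal matrix, which is what encourages the one-axis-per-attribute editing described above.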
Finding a minimum vertex cover in a network is a fundamental NP-complete graph problem. One way to deal with its computational hardness is to trade the qualitative performance of an algorithm (allowing non-optimal outputs) for an improved running time. For the vertex cover problem, there is a gap between theory and practice when it comes to understanding this tradeoff. On the one hand, it is known to be NP-hard to approximate a minimum vertex cover within a factor of $\sqrt{2}$. On the other hand, a simple greedy algorithm yields close-to-optimal approximations in practice. A promising approach towards understanding this discrepancy is to recognize the differences between theoretical worst-case instances and real-world networks. Following this direction, we close the gap between theory and practice by providing an algorithm that efficiently computes nearly optimal vertex cover approximations on hyperbolic random graphs, a network model that closely resembles real-world networks in terms of degree distribution, clustering, and the small-world property. More precisely, our algorithm computes a $(1 + o(1))$-approximation, asymptotically almost surely, and has a running time of $\mathcal{O}(m \log(n))$. The proposed algorithm is an adaptation of the successful greedy approach, enhanced with a procedure that improves on parts of the graph where greedy is not optimal. This makes it possible to introduce a parameter that can be used to tune the tradeoff between approximation performance and running time. Our empirical evaluation on real-world networks shows that this allows for improving over the near-optimal results of the greedy approach.
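For concreteness, here is a minimal sketch of the degree-greedy baseline the abstract builds on, together with a simple redundancy-pruning pass; the paper's actual improvement procedure for hyperbolic random graphs is more involved, and the random-graph generator used below is only a stand-in:

\begin{verbatim}
import networkx as nx

def greedy_vertex_cover(G):
    """Repeatedly pick a vertex of maximum remaining degree."""
    H = G.copy()
    cover = set()
    while H.number_of_edges() > 0:
        v = max(H.degree, key=lambda nd: nd[1])[0]
        cover.add(v)
        H.remove_node(v)
    return cover

def prune_redundant(G, cover):
    """Drop cover vertices all of whose neighbours are still in the cover."""
    cover = set(cover)
    for v in sorted(cover, key=G.degree):
        if all(u in cover for u in G.neighbors(v)):
            cover.remove(v)
    return cover

# Usage (Erdos-Renyi is only a placeholder for a hyperbolic random graph):
G = nx.erdos_renyi_graph(1000, 0.01, seed=0)
C = prune_redundant(G, greedy_vertex_cover(G))
assert all(u in C or v in C for u, v in G.edges())
\end{verbatim}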
An adaptive standardized protocol is essential for addressing inter-slice resource contention and conflict in network slicing. Traditional protocol standardization is a cumbersome task that yields hardcoded predefined protocols, resulting in increased costs and delayed rollout. Going beyond these limitations, this paper proposes a novel multi-agent deep reinforcement learning (MADRL) communication framework called standalone explainable protocol (STEP) for future sixth-generation (6G) open radio access network (O-RAN) slicing. As new conditions arise and affect network operation, resource orchestration agents adapt their communication messages to promote the emergence of a protocol on-the-fly, which enables the mitigation of conflict and resource contention between network slices. STEP weaves together the notion of information bottleneck (IB) theory with deep Q-network (DQN) learning concepts. By incorporating a stochastic bottleneck layer -- inspired by variational autoencoders (VAEs) -- STEP imposes an information-theoretic constraint for emergent inter-agent communication. This ensures that agents exchange concise and meaningful information, preventing resource waste and enhancing the overall system performance. The learned protocols enhance interpretability, laying a robust foundation for standardizing next-generation 6G networks. By considering an O-RAN compliant network slicing resource allocation problem, a conflict resolution protocol is developed. In particular, the results demonstrate that, on average, STEP reduces inter-slice conflicts by up to 6.06x compared to a predefined protocol method. Furthermore, in comparison with an MADRL baseline, STEP achieves 1.4x and 3.5x lower resource underutilization and latency, respectively.
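A minimal sketch of the VAE-inspired bottleneck idea, assuming each agent encodes its local observation into a low-dimensional Gaussian message and pays a KL penalty that acts as the information-bottleneck constraint (layer sizes, dimensions, and the loss weighting are illustrative assumptions, not the exact STEP design):

\begin{verbatim}
import torch
import torch.nn as nn

class StochasticMessageEncoder(nn.Module):
    """VAE-style bottleneck for emergent inter-agent messages (illustrative only)."""
    def __init__(self, obs_dim=16, msg_dim=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, msg_dim)
        self.log_var = nn.Linear(64, msg_dim)

    def forward(self, obs):
        h = self.backbone(obs)
        mu, log_var = self.mu(h), self.log_var(h)
        msg = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterization
        # KL(q(m|o) || N(0, I)) acts as the information-bottleneck penalty on the message
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1).mean()
        return msg, kl

# Training sketch: total_loss = td_error + beta * kl, where td_error is the DQN
# temporal-difference loss and beta (hypothetical) trades task reward for message brevity.
\end{verbatim}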
Next-generation wireless networks, such as edge intelligence and wireless distributed learning, face two critical challenges: communication efficiency and privacy protection. In this work, we focus on addressing these issues in a distributed learning framework. We consider a new approach that simultaneously achieves communication efficiency and privacy protection by exploiting the privacy advantage offered by quantization. Specifically, we use a quantization scheme called \textbf{Gau}ssian \textbf{L}ayered \textbf{R}andomized \textbf{Q}uantization (Gau-LRQ) that compresses the raw model gradients using a layered multishift coupler. By adjusting the parameters of Gau-LRQ, we shape the quantization error to follow the desired Gaussian distribution, thus ensuring client-level differential privacy (CLDP). We demonstrate the effectiveness of the proposed Gau-LRQ in the distributed stochastic gradient descent (SGD) framework and theoretically quantify the trade-offs between communication, privacy, and convergence performance. We further improve convergence by enabling dynamic privacy-budget and quantization-bit allocation, which we achieve through an optimization formulation that minimizes the convergence error subject to the privacy budget constraint. We evaluate our approach on multiple datasets, including MNIST, CIFAR-10, and CIFAR-100, and show that the proposed method outperforms the baselines in terms of learning performance under various privacy constraints. Moreover, we observe that dynamic privacy allocation yields additional accuracy improvements over the fixed scheme.
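The Gau-LRQ coupler itself is not described in enough detail here to reproduce, so the sketch below swaps in a generic clip, add-Gaussian-noise, then-quantize baseline purely to illustrate how a Gaussian-shaped perturbation of the transmitted gradient connects compression to differential privacy (all parameter names and values are assumptions, not the paper's scheme):

\begin{verbatim}
import numpy as np

def private_quantized_gradient(grad, clip_norm=1.0, noise_mult=1.1, bits=8):
    """Generic clip + Gaussian noise + uniform quantization baseline.
    NOT the paper's Gau-LRQ coupler; only illustrates the Gaussian-mechanism idea."""
    g = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))      # bound sensitivity
    g = g + np.random.normal(0.0, noise_mult * clip_norm, size=g.shape)  # Gaussian mechanism
    lo, hi = g.min(), g.max()
    levels = 2 ** bits - 1
    q = np.round((g - lo) / (hi - lo + 1e-12) * levels)                  # uniform quantizer
    return q.astype(np.uint8), lo, hi                                    # transmit q + 2 scalars

def dequantize(q, lo, hi, bits=8):
    return lo + q.astype(np.float64) / (2 ** bits - 1) * (hi - lo)
\end{verbatim}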
Scale-free networks are one of the most famous examples of emergent behavior and are ubiquitous in social systems, especially online social media in which users can follow each other. By analyzing the interactions of multiple generative agents using GPT3.5-turbo as a language model, we demonstrate their ability to not only mimic individual human linguistic behavior but also exhibit collective phenomena intrinsic to human societies, in particular the emergence of scale-free networks. We discovered that this process is disrupted by a skewed token prior distribution of GPT3.5-turbo, which can lead to networks with extreme centralization as a kind of alignment. We show how renaming agents removes these token priors and allows the model to generate a range of networks from random networks to more realistic scale-free networks.
The traditional role of the network layer is the transfer of packet replicas from source to destination through intermediate network nodes. We present a generative network layer that uses Generative AI (GenAI) at intermediate or edge network nodes and analyze its impact on the required data rates in the network. We conduct a case study where the GenAI-aided nodes generate images from prompts that consist of substantially compressed latent representations. The results from network flow analyses under image quality constraints show that the generative network layer can achieve an improvement of more than 100% in terms of the required data rate.
Financial networks pose a significant computational challenge in identifying insolvent firms and evaluating their exposure to systemic risk. This task, known as the clearing problem, is computationally tractable when dealing with simple debt contracts. However, in the presence of certain derivatives called credit default swaps (CDSes), the clearing problem is $\textsf{FIXP}$-complete. Existing techniques only show $\textsf{PPAD}$-hardness of finding an $\epsilon$-solution to the clearing problem with CDSes within an unspecified small range for $\epsilon$. We present significant progress on both facets of the clearing problem: (i) intractability of approximate solutions; and (ii) algorithms and heuristics for computable solutions. Leveraging $\textsf{Pure-Circuit}$ (FOCS'22), we provide the first explicit inapproximability bound for the clearing problem involving CDSes. Our primary contribution is a reduction from $\textsf{Pure-Circuit}$ which establishes that finding approximate solutions is $\textsf{PPAD}$-hard within a range of roughly 5%. To alleviate the complexity of the clearing problem, we identify two meaningful restrictions of the class of financial networks, motivated by regulations: (i) the presence of a central clearing authority; and (ii) the restriction to covered CDSes. We provide the following results: (i) the $\textsf{PPAD}$-hardness of approximation persists when central clearing authorities are introduced; (ii) an optimisation-based method for solving the clearing problem with central clearing authorities; and (iii) a polynomial-time algorithm when the two restrictions hold simultaneously.
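For background (not stated in the abstract), the tractable simple-debt case is commonly formalised via the Eisenberg--Noe fixed point: with external assets $e_i$, total nominal liabilities $\bar p_i$, and relative liabilities $\Pi_{ij}$ denoting the fraction of $i$'s liabilities owed to $j$, a clearing payment vector $p$ satisfies
\[
p_i \;=\; \min\!\Big(\bar p_i,\; e_i + \sum_{j} \Pi_{ji}\, p_j\Big), \qquad i = 1,\dots,n.
\]
CDS payouts make the right-hand side depend non-monotonically on the recovery rates of reference entities, which is, roughly, what pushes the problem from this tractable setting to the $\textsf{FIXP}$-complete one discussed above.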
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications. However, data heterogeneity in wireless signals and data privacy regulations for distributed sensing are considered major challenges that hinder the wide application of wireless sensing in large-area networking systems. Motivated by the observation that signals recorded by wireless receivers are closely related to a set of physical-layer semantic features, in this paper we propose a novel zero-shot wireless sensing solution that allows models constructed at one or a limited number of locations to be directly transferred to other locations without any labeled data. We develop a novel physical-layer semantic-aware network (pSAN) framework to characterize the correlation between physical-layer semantic features and the sensing data distributions across different receivers. We then propose a pSAN-based zero-shot learning solution in which each receiver can obtain a location-specific gesture recognition model by directly aggregating the already constructed models of other receivers. We theoretically prove that models obtained by our proposed solution can approach the optimal model without requiring any local model training. Experimental results verify that the accuracy of models derived by our solution matches that of models trained on real labeled data with a supervised learning approach.
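A minimal sketch of the zero-shot aggregation step, assuming each peer receiver publishes a trained model plus a physical-layer semantic feature vector, and the target receiver combines peer models with softmax weights over cosine similarities (the weighting rule is a hypothetical stand-in, not necessarily the pSAN rule):

\begin{verbatim}
import numpy as np

def aggregate_model(local_semantics, peer_semantics, peer_weights_list):
    """Build a model for a new receiver by similarity-weighted averaging of peer models.
    local_semantics:   (d,) semantic feature vector of the target receiver
    peer_semantics:    list of (d,) vectors, one per peer receiver
    peer_weights_list: list of dicts {param_name: np.ndarray} with peer model weights
    """
    sims = np.array([
        float(local_semantics @ s) /
        (np.linalg.norm(local_semantics) * np.linalg.norm(s) + 1e-12)
        for s in peer_semantics
    ])
    alphas = np.exp(sims) / np.exp(sims).sum()   # softmax over semantic similarities
    agg = {}
    for name in peer_weights_list[0]:
        agg[name] = sum(a * w[name] for a, w in zip(alphas, peer_weights_list))
    return agg
\end{verbatim}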
The growing trend toward the modernization of power distribution systems has facilitated the installation of advanced measurement units and the adoption of cyber communication systems. However, these infrastructures remain prone to stealth cyber attacks. Existing data-driven anomaly detection methods suffer from a lack of knowledge about the system's physics, a lack of interpretability, and scalability issues, hindering their practical application in real-world scenarios. To address these concerns, physics-informed neural networks (PINNs) were introduced. This paper proposes a multivariate physics-informed convolutional autoencoder (PIConvAE) to detect stealthy cyber attacks in power distribution grids. The proposed model integrates physical principles into the loss function of the neural network by applying Kirchhoff's law. Simulations are performed on the modified IEEE 13-bus and 123-bus systems using OpenDSS software to validate the efficacy of the proposed model against stealth attacks. The numerical results demonstrate the superior performance of the proposed PIConvAE in three respects: a) it provides more accurate results than the data-driven ConvAE model; b) it requires less training time to converge; and c) it effectively detects a wide range of attack magnitudes, making it powerful for detecting stealth attacks.
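A minimal sketch of how a Kirchhoff-current-law term might enter the training loss, assuming branch currents implied by the reconstruction, nodal injections, and a bus-branch incidence matrix are available (this assumed form is illustrative; the exact physics residual used in PIConvAE may differ):

\begin{verbatim}
import torch

def physics_informed_loss(x, x_hat, branch_currents, injections, incidence, lam=1.0):
    """Illustrative composite loss (assumed form, not the paper's exact term).
    x, x_hat        : (batch, channels, time) measurements and their reconstruction
    branch_currents : (batch, n_branches) branch currents implied by the reconstruction
    injections      : (batch, n_buses) nodal current injections
    incidence       : (n_buses, n_branches) bus-branch incidence matrix
    """
    rec = torch.mean((x_hat - x) ** 2)                          # data-driven term
    kcl_residual = branch_currents @ incidence.T - injections   # Kirchhoff's current law
    return rec + lam * torch.mean(kcl_residual ** 2)            # physics-informed penalty
\end{verbatim}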
Leveraging available datasets to learn a model that generalizes well to unseen domains is important for computer vision, especially when the unseen domain's annotated data are unavailable. We study a novel and practical problem of Open Domain Generalization (OpenDG), which learns from different source domains to achieve high performance on an unknown target domain, where the distributions and label sets of the individual source domains and the target domain can differ. The problem setting accommodates diverse source domains and is widely applicable to real-world scenarios. We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations. We augment domains at both the feature level, via a new Dirichlet mixup, and the label level, via distilled soft-labeling, which complements each domain with missing classes and knowledge from other domains. We conduct meta-learning over domains by designing new meta-learning tasks and losses that preserve domain-specific knowledge and generalize knowledge across domains simultaneously. Experimental results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
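A minimal sketch of feature-level Dirichlet mixup across source domains, assuming per-domain feature batches and distilled soft labels are already aligned by index (the concentration parameter and exact mixing scheme are illustrative assumptions, not necessarily DAML's):

\begin{verbatim}
import numpy as np

def dirichlet_mixup(features, soft_labels, alpha=0.5, rng=None):
    """Mix samples across D source domains with Dirichlet-distributed weights.
    features:    list of D arrays, one per domain, each (batch, feat_dim)
    soft_labels: list of D arrays, each (batch, num_classes) distilled soft labels
    """
    rng = rng or np.random.default_rng()
    D = len(features)
    batch = features[0].shape[0]
    lam = rng.dirichlet([alpha] * D, size=batch)              # (batch, D) mixing weights
    mixed_x = sum(lam[:, d:d + 1] * features[d] for d in range(D))
    mixed_y = sum(lam[:, d:d + 1] * soft_labels[d] for d in range(D))
    return mixed_x, mixed_y
\end{verbatim}

Because the mixing weights span all source domains at once, each mixed sample carries classes and styles from every domain, which is one way to complement a domain with classes it is missing.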