We consider the fundamental task of network exploration. A network is modeled as a simple connected undirected n-node graph with unlabeled nodes, and the ports at any node of degree d are arbitrarily numbered 0, ..., d-1. Each of two identical mobile agents, initially situated at distinct nodes, has to visit all nodes and stop. Agents execute the same deterministic algorithm and move in synchronous rounds: in each round, an agent can either remain at the same node or move to an adjacent node. Exploration must be collision-free: in every round, at most one agent can be at any node. We assume that agents have vision of radius 2: an awake agent situated at a node v can see the subgraph induced by all nodes at distance at most 2 from v, sees all port numbers in this subgraph, and sees the agents located at these nodes. Agents do not know the entire graph, but they know an upper bound n on its size. The time of an exploration is the number of rounds from the wakeup of the later agent to the termination by both agents. We show a collision-free exploration algorithm working in time polynomial in n for arbitrary graphs of size larger than 2. Moreover, we show that if agents have only vision of radius 1, then collision-free exploration is impossible, e.g., in any tree of diameter 2.
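As a minimal illustration of the model (not the paper's exploration algorithm), the following sketch builds a port-labelled graph and computes the radius-2 view available to an agent; the example graph and all names are illustrative.

```python
# Minimal sketch of the model (not the paper's algorithm): a port-labelled
# graph and the radius-2 view available to an agent at node v.
def make_port_labels(adj):
    """Assign arbitrary port numbers 0..deg(v)-1 to the edges incident to each node."""
    return {(v, p): u for v, nbrs in adj.items() for p, u in enumerate(nbrs)}

def radius2_view(adj, ports, v, agent_positions):
    """Return the nodes within distance 2 of v, the port labels on edges among
    those nodes, and which of those nodes currently host an agent."""
    ball = {v} | set(adj[v]) | {w for u in adj[v] for w in adj[u]}
    visible_ports = {(u, p): w for (u, p), w in ports.items() if u in ball and w in ball}
    visible_agents = ball & agent_positions
    return ball, visible_ports, visible_agents

# Example: a path a-b-c-d with two agents at its endpoints.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
ports = make_port_labels(adj)
print(radius2_view(adj, ports, "a", {"a", "d"}))
```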
While convolutional neural networks (CNNs) have achieved success in computer vision tasks, they are vulnerable to backdoor attacks. Such attacks can mislead the victim model into making attacker-chosen predictions when a specific trigger pattern is present. Until now, the trigger injection of existing attacks has mainly been limited to the spatial domain. Recent works take advantage of the perceptual properties of planting specific patterns in the frequency domain, which manifest only as imperceptible pixel-wise perturbations in the spatial domain. However, in the black-box setup, the inaccessibility of the training process often makes more complex trigger designs impractical. Existing frequency attacks simply handcraft the magnitude of the spectrum, introducing anomalous frequency disparities between clean and poisoned data and risking removal by image processing operations (such as lossy compression and filtering). In this paper, we propose a robust low-frequency black-box backdoor attack (LFBA), which minimally perturbs the low-frequency components of the frequency spectrum while simultaneously maintaining perceptual similarity in the spatial domain. The key insight of our attack is to restrict the search for the optimal trigger to the low-frequency region, which achieves high attack effectiveness, robustness against image transformation defenses, and stealthiness in both domains. We utilize simulated annealing (SA), a gradient-free optimization heuristic, to optimize the properties of the frequency trigger, including the number of manipulated frequency bands and the perturbation of each frequency component, without relying on knowledge of the victim classifier. Extensive experiments on real-world datasets verify the effectiveness and robustness of LFBA against image processing operations and state-of-the-art backdoor defenses, as well as its inherent stealthiness in both the spatial and frequency domains, making it resilient against frequency inspection.
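As a toy illustration of the kind of trigger LFBA plants, the sketch below adds a small real perturbation to a few low-frequency components of an image's spectrum; the band coordinates and perturbation sizes are placeholders, whereas LFBA searches them with simulated annealing.

```python
# Toy sketch of a low-frequency spectral trigger (illustrative only; LFBA
# optimizes the band indices and per-band perturbations with simulated annealing).
import numpy as np

def plant_low_freq_trigger(img, bands, deltas):
    """img: HxW float array in [0,1]; bands: (u, v) low-frequency coordinates;
    deltas: real perturbation added to each selected spectral component."""
    spec = np.fft.fft2(img)
    H, W = img.shape
    for (u, v), d in zip(bands, deltas):
        spec[u, v] += d                      # perturb a low-frequency component
        spec[-u % H, -v % W] += d            # mirror it to keep the spectrum Hermitian
    poisoned = np.real(np.fft.ifft2(spec))
    return np.clip(poisoned, 0.0, 1.0)

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
poisoned = plant_low_freq_trigger(clean, bands=[(1, 2), (2, 1)], deltas=[3.0, 3.0])
print(np.abs(poisoned - clean).max())        # the pixel-wise perturbation stays small
```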
Graph neural networks (GNNs) have become increasingly popular for modeling graph-structured data due to their ability to learn node representations by aggregating local structural information. However, it is widely acknowledged that the test graph structure may differ from the training graph structure, resulting in a structure shift. In this paper, we experimentally find that the performance of GNNs drops significantly when such a structure shift happens, suggesting that the learned models may be biased towards specific structure patterns. To address this challenge, we propose the Cluster Information Transfer (CIT) mechanism (code available at //github.com/BUPT-GAMMA/CITGNN), which can learn invariant representations for GNNs, thereby improving their generalization to diverse and unseen test graphs with structure shift. The CIT mechanism achieves this by combining different cluster information with the nodes while preserving their cluster-independent information. By generating nodes across different clusters, the mechanism significantly enhances the diversity of the nodes and helps GNNs learn invariant representations. We provide a theoretical analysis of the CIT mechanism, showing that the impact of changing clusters during structure shift can be mitigated after transfer. Additionally, the proposed mechanism is a plug-in that can be easily used to improve existing GNNs. We comprehensively evaluate our proposed method on three typical structure shift scenarios, demonstrating its effectiveness in enhancing GNNs' performance.
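A rough sketch of the underlying idea, under the assumption that a node representation can be decomposed into a cluster mean plus a cluster-independent residual (the paper's exact transfer formulation differs):

```python
# Rough sketch in the spirit of CIT: keep a node's within-cluster residual and
# swap in another cluster's statistics (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))                 # node representations from a GNN layer
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Z)
centers = km.cluster_centers_

def transfer(z, src_cluster, dst_cluster):
    residual = z - centers[src_cluster]        # cluster-independent part (assumption)
    return centers[dst_cluster] + residual     # combine it with another cluster

z0 = Z[0]
z0_new = transfer(z0, km.labels_[0], (km.labels_[0] + 1) % 4)
print(np.linalg.norm(z0_new - z0))
```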
Thanks to technologies such as virtual network functions, the Fifth Generation (5G) of mobile networks can dynamically allocate resources to different types of users in an on-demand fashion. Virtualization extends up to the 5G core, where software-defined networks and network slicing implement a customizable environment. These technologies can be controlled via application programming interfaces and web technologies, hence inheriting their security risks and settings. An attacker exploiting vulnerable implementations of the 5G core may gain privileged control of the network assets and disrupt its availability. However, there is currently no assessment of the web security of the 5G core network. In this paper, we present the first security assessment of the 5G core from a web security perspective. We use the STRIDE threat modeling approach to define a complete list of possible threat vectors and associated attacks. Using a suite of security testing tools, we cover all of these threats and test the security of the 5G core. In particular, we test the three most relevant open-source 5G core implementations, i.e., Open5GS, Free5Gc, and OpenAirInterface. Our analysis shows that all these cores are vulnerable to at least two of our identified attack vectors, demanding increased security measures in the development of future 5G core networks.
The computing in the network (COIN) paradigm is a promising solution that leverages unused network resources to perform tasks for computation-demanding applications, such as the metaverse. In this vein, we consider the partial computation offloading problem in the metaverse for multiple subtasks in a COIN environment, aiming to minimize energy consumption and delay while dynamically adjusting the offloading policy based on the changing status of computational resources. The problem is NP-hard, and we transform it into two subproblems: the task-splitting problem (TSP) on the user side and the task-offloading problem (TOP) on the COIN side. We model the TSP as an ordinal potential game and propose a decentralized algorithm to obtain its Nash equilibrium (NE). Then, we model the TOP as a Markov decision process and propose a double deep Q-network (DDQN) to solve for the optimal offloading policy. Unlike the conventional DDQN algorithm, where agents sample offloading decisions randomly with a certain probability, the COIN agent explores using the NE of the TSP and the deep neural network. Finally, the simulation results reveal that the proposed approach allows the COIN agent to update its policies and make more informed decisions, leading to improved performance over time compared to the traditional baseline.
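For reference, a generic double-DQN target update looks like the sketch below; the paper's contribution replaces purely random exploration with exploration guided by the NE of the task-splitting game, and the state/action dimensions and network sizes here are placeholders.

```python
# Generic double-DQN update step (illustrative; not the paper's COIN-specific agent).
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 8, 4, 0.99
online = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

def ddqn_step(s, a, r, s_next, done):
    # the online net selects the next action, the target net evaluates it
    with torch.no_grad():
        a_next = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_next).squeeze(1)
        y = r + gamma * (1.0 - done) * q_next
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy batch of transitions
B = 32
s, s_next = torch.randn(B, state_dim), torch.randn(B, state_dim)
a, r, done = torch.randint(0, n_actions, (B,)), torch.randn(B), torch.zeros(B)
print(ddqn_step(s, a, r, s_next, done))
```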
A wireless communication system is studied that operates in the presence of multiple reconfigurable intelligent surfaces (RISs). In particular, a multi-operator environment is considered where each operator utilizes an RIS to enhance its communication quality. Although out-of-band interference does not exist (since each operator uses isolated spectrum resources), RISs controlled by different operators do affect one another's system performance due to the inherently rapid phase shift adjustments that occur on an independent basis. The system performance of such a communication scenario is analytically studied for the practical case where only discrete phase shifts are available at the RISs. The proposed framework is quite general, since it is valid under arbitrary channel fading conditions as well as the presence (or absence) of a direct transceiver link. Finally, the derived analytical results are verified via numerical and simulation trials, and some novel and useful engineering insights are manifested.
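A simple Monte-Carlo sketch of the discrete-phase-shift effect being studied is shown below for a single RIS link under Rayleigh fading; the paper instead derives analytical results valid for arbitrary fading and the multi-operator setting, and the element count, bit resolution, and fading model here are placeholders.

```python
# Toy Monte-Carlo sketch: received SNR gain of an RIS link when only b-bit
# (discrete) phase shifts are available (illustrative; not the paper's analysis).
import numpy as np

rng = np.random.default_rng(1)
N, b, trials = 64, 2, 5000                     # RIS elements, phase-shift bits, realizations
levels = 2 * np.pi * np.arange(2 ** b) / (2 ** b)

def quantize(phase):
    """Map each ideal phase to the nearest available discrete level."""
    diff = np.angle(np.exp(1j * (phase[:, None] - levels[None, :])))
    return levels[np.argmin(np.abs(diff), axis=1)]

snr_gain = []
for _ in range(trials):
    h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # BS -> RIS channel
    g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # RIS -> user channel
    ideal = -np.angle(h * g)                   # phases that would align all reflected paths
    theta = quantize(ideal)                    # discrete-only phase shifts
    snr_gain.append(np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2)
print(np.mean(snr_gain))
```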
A feedforward neural network using rectified linear units constructs a mapping from inputs to outputs by partitioning its input space into a set of convex regions where points within a region share a single affine transformation. In order to understand how neural networks work, when and why they fail, and how they compare to biological intelligence, we need to understand the organization and formation of these regions. Step one is to design and implement algorithms for exact region enumeration in networks beyond toy examples. In this work, we present parallel algorithms for exact enumeration in deep (and shallow) neural networks. Our work has three main contributions: (1) we present a novel algorithm framework and parallel algorithms for region enumeration; (2) we implement one of our algorithms on a variety of network architectures and experimentally show how the number of regions dictates runtime; and (3) we show, using our algorithm's output, how the dimension of a region's affine transformation impacts further partitioning of the region by deeper layers. To our knowledge, we run our implemented algorithm on networks larger than all of the networks used in the existing region enumeration literature. Further, we experimentally demonstrate the importance of parallelism for region enumeration of any reasonably sized network.
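For a single hidden layer, the brute-force baseline is easy to state: every activation pattern defines a polyhedron, and a small LP decides whether that polyhedron has an interior point. The sketch below implements this baseline; it is not the paper's parallel algorithm, which avoids enumerating all 2^h patterns.

```python
# Brute-force region enumeration for one hidden ReLU layer: check every
# activation pattern for a feasible interior point inside a bounding box.
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, h = 2, 6                                    # input dimension, hidden units
W, b = rng.normal(size=(h, d)), rng.normal(size=h)

def has_interior(pattern, box=10.0):
    signs = np.where(np.array(pattern) == 1, 1.0, -1.0)
    # maximize t  s.t.  signs_i * (W_i x + b_i) >= t,  |x_j| <= box,  t <= 1
    # rewritten for linprog (variables (x, t), minimize -t):
    A_ub = np.hstack([-(signs[:, None] * W), np.ones((h, 1))])
    b_ub = signs * b
    res = linprog(c=np.r_[np.zeros(d), -1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-box, box)] * d + [(None, 1.0)])
    return res.success and -res.fun > 1e-9      # positive slack => full-dimensional region

regions = [p for p in itertools.product([0, 1], repeat=h) if has_interior(p)]
print(len(regions), "linear regions inside the box")
```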
Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data. Conventional pre-training methods may not be effective enough for knowledge transfer since they do not make any adaptation to downstream tasks. To solve such problems, we propose a new transfer learning paradigm for GNNs that can effectively leverage self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model to learn the weights of auxiliary tasks by quantifying the consistency between auxiliary tasks and the target task. In addition, we learn the weighting model through meta-learning. Our method can be applied to various transfer learning approaches; it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method can effectively combine auxiliary tasks with the target task and significantly improve performance compared to state-of-the-art methods.
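A much-simplified stand-in for the weighting model is to weight an auxiliary loss by the agreement of its gradient with the target-task gradient; the paper instead learns the weights with a meta-learned weighting model, and the model and losses below are placeholders.

```python
# Simplified proxy for adaptive auxiliary loss weighting: scale the auxiliary
# loss by the cosine similarity of its gradient with the target-task gradient.
import torch
import torch.nn as nn

model = nn.Linear(16, 8)
x = torch.randn(64, 16)
y_target, y_aux = torch.randn(64, 8), torch.randn(64, 8)

def flat_grad(loss):
    g = torch.autograd.grad(loss, model.parameters(), retain_graph=True)
    return torch.cat([t.reshape(-1) for t in g])

target_loss = nn.functional.mse_loss(model(x), y_target)
aux_loss = nn.functional.mse_loss(model(x), y_aux)

g_t, g_a = flat_grad(target_loss), flat_grad(aux_loss)
w = torch.clamp(nn.functional.cosine_similarity(g_t, g_a, dim=0), min=0.0)
total = target_loss + w.detach() * aux_loss    # down-weight conflicting auxiliary tasks
total.backward()
print(float(w))
```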
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
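The basic diagnostic behind such experiments is the size of the weight increments between consecutive residual blocks as depth grows; a minimal sketch of this measurement is shown below, using random weights as a placeholder for the SGD-trained weights examined in the paper.

```python
# Minimal sketch of the depth-scaling diagnostic: the mean norm of weight
# increments between consecutive residual blocks, as a function of depth L.
# (Random weights serve only as a placeholder for trained ones.)
import numpy as np

rng = np.random.default_rng(0)

def mean_increment_norm(L, width=32):
    W = rng.normal(size=(L, width, width)) / np.sqrt(width)   # one matrix per block
    diffs = np.linalg.norm(W[1:] - W[:-1], axis=(1, 2))       # ||W_{k+1} - W_k||
    return diffs.mean()

for L in [8, 32, 128, 512]:
    print(L, mean_increment_norm(L))   # a smooth-limit regime would show this shrinking with L
```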
A large number of real-world graphs or networks are inherently heterogeneous, involving a diversity of node types and relation types. Heterogeneous graph embedding aims to embed the rich structural and semantic information of a heterogeneous graph into low-dimensional node representations. Existing models usually define multiple metapaths in a heterogeneous graph to capture the composite relations and guide neighbor selection. However, these models either omit node content features, discard intermediate nodes along the metapath, or consider only one metapath. To address these three limitations, we propose a new model named Metapath Aggregated Graph Neural Network (MAGNN) to boost the final performance. Specifically, MAGNN employs three major components, i.e., node content transformation to encapsulate input node attributes, intra-metapath aggregation to incorporate intermediate semantic nodes, and inter-metapath aggregation to combine messages from multiple metapaths. Extensive experiments on three real-world heterogeneous graph datasets for node classification, node clustering, and link prediction show that MAGNN achieves more accurate prediction results than state-of-the-art baselines.
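A simplified sketch of the inter-metapath aggregation step, where attention combines a node's metapath-specific embeddings, is given below; the intra-metapath encoder and the exact attention form follow the paper, and the dimensions here are placeholders.

```python
# Simplified inter-metapath aggregation: attention over metapath-specific node
# embeddings (illustrative; MAGNN's full architecture follows the paper).
import torch
import torch.nn as nn

class InterMetapathAttention(nn.Module):
    def __init__(self, dim, hidden=16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1, bias=False))

    def forward(self, z):                           # z: [num_metapaths, num_nodes, dim]
        scores = self.score(z).mean(dim=1)          # one score per metapath
        beta = torch.softmax(scores, dim=0)         # attention weight per metapath
        return (beta.unsqueeze(1) * z).sum(dim=0)   # fused embeddings: [num_nodes, dim]

z = torch.randn(3, 100, 64)                         # 3 metapaths, 100 nodes, 64-dim embeddings
print(InterMetapathAttention(64)(z).shape)
```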
Learning latent representations of nodes in graphs is an important and ubiquitous task with widespread applications such as link prediction, node classification, and graph visualization. Previous methods on graph representation learning mainly focus on static graphs; however, many real-world graphs are dynamic and evolve over time. In this paper, we present Dynamic Self-Attention Network (DySAT), a novel neural architecture that operates on dynamic graphs and learns node representations that capture both structural properties and temporal evolutionary patterns. Specifically, DySAT computes node representations by jointly employing self-attention layers along two dimensions: structural neighborhood and temporal dynamics. We conduct link prediction experiments on two classes of graphs: communication networks and bipartite rating networks. Our experimental results show that DySAT has a significant performance gain over several different state-of-the-art graph embedding baselines.
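A compact sketch of the two attention dimensions is shown below: attention within each snapshot, followed by causal self-attention over each node's history. The layer choices are illustrative only, and for brevity the structural step attends over all nodes rather than graph neighbors as DySAT does.

```python
# Compact sketch of the two attention dimensions (structural, then temporal)
# over a sequence of graph snapshots; shapes and layers are illustrative.
import torch
import torch.nn as nn

T, N, D = 5, 50, 32                                  # snapshots, nodes, embedding dim
x = torch.randn(T, N, D)                             # per-snapshot node features

structural = nn.MultiheadAttention(D, num_heads=4, batch_first=True)
temporal = nn.MultiheadAttention(D, num_heads=4, batch_first=True)

# structural attention: attend across nodes within each snapshot (batch = snapshots)
h, _ = structural(x, x, x)                           # [T, N, D]

# temporal attention: each node attends over its own history (batch = nodes)
h_t = h.permute(1, 0, 2)                             # [N, T, D]
mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)   # block future snapshots
out, _ = temporal(h_t, h_t, h_t, attn_mask=mask)     # [N, T, D]
print(out.shape)
```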