This paper introduces a quantitative generalization of the ``more capable'' comparison of broadcast channels, termed ``more capable with advantage''. Some basic properties are demonstrated (including tensorization on product channels), and a characterization is given for the Binary Symmetric Channel (BSC) and the Binary Erasure Channel (BEC). The notion is then applied to two problems. First, a list-decoding bound on the BSC is given that applies to transitive codes achieving capacity on the BEC. Second, new lower bounds on the entropy rates of binary hidden Markov processes are derived.
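For context, the classical ``more capable'' ordering says a channel from X to Y is more capable than one from X to Z if I(X;Y) ≥ I(X;Z) for every input distribution. The sketch below records this standard definition together with one plausible additive quantification; the paper's exact normalization of the ``advantage'' may differ.

```latex
% Classical "more capable" ordering (standard definition):
\[
  I(X;Y) \;\ge\; I(X;Z) \qquad \text{for every input law } P_X .
\]
% One plausible additive quantification with advantage $\alpha \ge 0$
% (assumed form; the paper may normalize the advantage differently):
\[
  I(X;Y) \;\ge\; I(X;Z) + \alpha \qquad \text{for every input law } P_X .
\]
% Under such an additive form, tensorization on n-fold product channels
% would naturally scale the advantage with n.
```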
Providing emotional support through dialogue systems is becoming increasingly important, as it can benefit both mental health and social interactions in many conversation scenarios. Previous works have shown that using a persona is effective for generating empathetic and supportive responses, but they have often relied on a pre-provided persona rather than inferring one during the conversation. However, it is not always possible to obtain a user persona before a conversation begins. To address this challenge, we propose PESS (Persona Extraction through Semantic Similarity), a novel framework that can automatically infer informative and consistent personas from dialogues. We devise a completeness loss and a consistency loss based on semantic similarity scores. The completeness loss encourages the model to generate missing persona information, and the consistency loss guides the model to distinguish between consistent and inconsistent personas. Our experimental results demonstrate that the high-quality persona information inferred by PESS is effective in generating emotionally supportive responses.
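As a rough illustration of losses built on semantic similarity scores, the sketch below pairs a placeholder sentence encoder (`embed`, a hypothetical stub returning unit-norm vectors) with a completeness term that penalizes reference persona sentences left unmatched by the generated ones, and a margin-style consistency term. This is a minimal sketch of the general idea; PESS's actual formulation may differ.

```python
# Illustrative similarity-based persona losses (not the paper's implementation).
import numpy as np

def embed(sentences):
    """Hypothetical sentence-encoder stub; any encoder returning
    unit-norm vectors could be substituted here."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=(len(sentences), 8))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def completeness_loss(generated, reference):
    """Penalize reference persona sentences with no close match among
    the generated ones, i.e., 'missing' persona information."""
    sim = embed(reference) @ embed(generated).T   # cosine similarities
    best = sim.max(axis=1)                        # best match per reference sentence
    return float(np.mean(1.0 - best))

def consistency_loss(anchor, consistent, inconsistent, margin=0.3):
    """Margin loss pushing a consistent persona pair above an
    inconsistent one by at least `margin`."""
    a = embed([anchor])
    s_pos = (a @ embed([consistent]).T).item()
    s_neg = (a @ embed([inconsistent]).T).item()
    return max(0.0, margin - (s_pos - s_neg))
```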
This paper focuses on automating the security testing of RESTful APIs. The testing stage of this kind of component is often performed manually and is still considered a long and difficult activity. This paper proposes an automated approach to help developers generate test cases for testing each service in isolation. The approach is based on the notion of test case mutation, which automatically generates new test cases from an original test case set. Test case mutation operators perform slight modifications of test cases to mimic possible failures or to exercise the component under test with new interactions. In this paper, we examine test case mutation operators for RESTful APIs and define 17 operators specialised in security testing. We then present our test case mutation algorithm and evaluate its effectiveness and performance on four web service compositions.
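To make the notion of a security-oriented mutation operator concrete, here are three generic examples over a test case modeled as a request dictionary. They are illustrative stand-ins, not the paper's 17 operators; names and the request layout are assumptions.

```python
# Illustrative security-oriented mutation operators for a REST test case.
import copy

def inject_sqli(request, param):
    """Replace a parameter value with a classic SQL-injection payload."""
    mutant = copy.deepcopy(request)
    mutant["params"][param] = "' OR '1'='1"
    return mutant

def strip_auth(request):
    """Remove the Authorization header to probe access control."""
    mutant = copy.deepcopy(request)
    mutant["headers"].pop("Authorization", None)
    return mutant

def swap_method(request, method="DELETE"):
    """Replay the request with an unexpected HTTP verb."""
    mutant = copy.deepcopy(request)
    mutant["method"] = method
    return mutant

original = {"method": "GET", "url": "https://api.example.com/users",
            "headers": {"Authorization": "Bearer t0k3n"},
            "params": {"name": "alice"}}
mutants = [inject_sqli(original, "name"), strip_auth(original), swap_method(original)]
```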
In the rapidly evolving landscape of 5G and beyond 5G (B5G) mobile cellular communications, efficient data compression and reconstruction strategies become paramount, especially in massive multiple-input multiple-output (MIMO) systems. A critical challenge in these systems is the capacity-limited fronthaul, particularly in the context of the Ethernet-based common public radio interface (eCPRI) connecting baseband units (BBUs) and remote radio units (RRUs). This capacity limitation hinders the effective handling of increased traffic and data flows. We propose a novel two-stage compression approach to address this bottleneck. The first stage employs sparse Tucker decomposition, targeting the low-rank components of the weight tensor for compression. The second stage further compresses these components using complex Givens decomposition and run-length encoding, substantially improving the compression ratio. Our approach specifically targets the Zero-Forcing (ZF) beamforming weights in BBUs. By reconstructing these weights in RRUs, we significantly alleviate the burden on eCPRI traffic, enabling a higher number of concurrent streams in the radio access network (RAN). Through comprehensive evaluations, we demonstrate the superior effectiveness of our method in Channel State Information (CSI) compression, paving the way for more efficient 5G/B5G fronthaul links.
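A minimal sketch of the two-stage idea follows, assuming the `tensorly` library for the Tucker step. The Givens decomposition stage is omitted for brevity, with a crude quantize-then-run-length-encode step standing in for the second stage; shapes and ranks are illustrative, not the paper's settings, and a real-valued tensor is used (complex weights could be handled by stacking real and imaginary parts).

```python
# Sketch: (1) low-rank Tucker compression of a beamforming-weight tensor,
# (2) simplified second-stage coding via quantization + run-length encoding.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

W = np.random.randn(64, 8, 100)                       # antennas x streams x subcarriers
core, factors = tucker(tl.tensor(W), rank=[16, 8, 20])  # stage 1: low-rank Tucker

def run_length_encode(symbols):
    """Stage 2 (simplified): run-length encode a 1-D symbol stream."""
    runs, prev, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = s, 1
    runs.append((prev, count))
    return runs

# Crude 3-bit magnitude quantization of the core, then RLE.
q = np.round(np.abs(core).ravel() / np.abs(core).max() * 7).astype(int)
encoded = run_length_encode(q.tolist())
```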
Graph neural networks (GNNs) have become increasingly popular in modeling graph-structured data due to their ability to learn node representations by aggregating local structure information. However, it is widely acknowledged that the test graph structure may differ from the training graph structure, resulting in a structure shift. In this paper, we experimentally find that the performance of GNNs drops significantly when the structure shift happens, suggesting that the learned models may be biased towards specific structure patterns. To address this challenge, we propose the Cluster Information Transfer (CIT) mechanism (code available at https://github.com/BUPT-GAMMA/CITGNN), which can learn invariant representations for GNNs, thereby improving their generalization ability to diverse, unseen test graphs with structure shift. The CIT mechanism achieves this by combining different cluster information with the nodes while preserving their cluster-independent information. By generating nodes across different clusters, the mechanism significantly enhances the diversity of the nodes and helps GNNs learn the invariant representations. We provide a theoretical analysis of the CIT mechanism, showing that the impact of changing clusters during structure shift can be mitigated after transfer. Additionally, the proposed mechanism is a plug-in that can be easily used to improve existing GNNs. We comprehensively evaluate our proposed method on three typical structure shift scenarios, demonstrating its effectiveness in enhancing GNNs' performance.
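The sketch below illustrates one way such cluster-statistic transfer could look: node embeddings are normalized by their own cluster's mean and standard deviation and re-scaled with another cluster's, preserving the cluster-independent residual. This is an assumption-laden simplification; the authors' implementation is at https://github.com/BUPT-GAMMA/CITGNN.

```python
# Minimal sketch of cluster-statistic transfer in the spirit of CIT
# (illustrative only, not the authors' exact mechanism).
import torch

def transfer_clusters(z, cluster_id, src, dst, eps=1e-6):
    """Move nodes of cluster `src` to the statistics of cluster `dst`."""
    mu_s, std_s = z[cluster_id == src].mean(0), z[cluster_id == src].std(0)
    mu_d, std_d = z[cluster_id == dst].mean(0), z[cluster_id == dst].std(0)
    out = z.clone()
    mask = cluster_id == src
    # Whiten with the source cluster's stats, re-color with the target's.
    out[mask] = (z[mask] - mu_s) / (std_s + eps) * std_d + mu_d
    return out

z = torch.randn(100, 16)                         # GNN node embeddings
cid = torch.randint(0, 4, (100,))                # cluster assignments
z_aug = transfer_clusters(z, cid, src=0, dst=2)  # augmented training view
```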
Large language models (LLMs) show remarkable capabilities across a variety of tasks. Although the models see only text during training, several recent studies suggest that LLM representations implicitly capture aspects of the underlying grounded concepts. Here, we explore LLM representations of a particularly salient kind of grounded knowledge -- spatial relationships. We design natural-language navigation tasks and evaluate the ability of LLMs, in particular GPT-3.5-turbo, GPT-4, and Llama2 series models, to represent and reason about spatial structures. These tasks reveal substantial variability in LLM performance across different spatial structures, including square, hexagonal, and triangular grids, rings, and trees. Through extensive error analysis, we find that LLMs' mistakes reflect both spatial and non-spatial factors. These findings suggest that LLMs capture certain aspects of spatial structure implicitly, but room for improvement remains.
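As an illustration of the kind of navigation task described (not the paper's exact phrasing), the snippet below generates a ring-navigation prompt together with its ground-truth answer, which can be compared against an LLM's response.

```python
# Illustrative generator for a ring-navigation task and its ground truth.
def ring_navigation_prompt(n_nodes, moves):
    steps = ", then ".join(f"move one step {d}" for d in moves)
    return (f"You are on a ring of {n_nodes} rooms, standing in room 0. "
            f"Moving clockwise increases the room number (mod {n_nodes}). "
            f"You {steps}. Which room are you in now?")

def ring_answer(n_nodes, moves):
    pos = 0
    for d in moves:
        pos = (pos + (1 if d == "clockwise" else -1)) % n_nodes
    return pos

moves = ["clockwise", "clockwise", "counterclockwise"]
print(ring_navigation_prompt(6, moves))  # prompt given to the LLM
print(ring_answer(6, moves))             # ground-truth answer: 1
```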
Thanks to technologies such as virtual network functions, the Fifth Generation (5G) of mobile networks dynamically allocates resources to different types of users in an on-demand fashion. Virtualization extends up to the 5G core, where software-defined networking and network slicing implement a customizable environment. These technologies can be controlled via application programming interfaces and web technologies, and hence inherit their security risks and settings. An attacker exploiting a vulnerable implementation of the 5G core may gain privileged control of the network assets and disrupt its availability. However, there is currently no assessment of the web security of the 5G core network. In this paper, we present the first security assessment of the 5G core from a web security perspective. We use the STRIDE threat modeling approach to define a complete list of possible threat vectors and associated attacks. Using a suite of security testing tools, we cover all of these threats and test the security of the 5G core. In particular, we test the three most relevant open-source 5G core implementations, i.e., Open5GS, free5GC, and OpenAirInterface. Our analysis shows that all these cores are vulnerable to at least two of our identified attack vectors, demanding increased security measures in the development of future 5G core networks.
With the escalating threats posed by cyberattacks on Industrial Control Systems (ICSs), the development of customized Industrial Intrusion Detection Systems (IIDSs) has received significant attention in research. While existing literature proposes effective IIDS solutions evaluated in controlled environments, their deployment in real-world industrial settings poses several challenges. This paper highlights two critical yet often overlooked aspects that significantly impact their practical deployment, i.e., the need for sufficient amounts of data to train the IIDS models and the challenge of finding suitable hyperparameters, especially for IIDSs trained only on genuine ICS data. Through empirical experiments conducted on multiple state-of-the-art IIDSs and diverse datasets, we establish the criticality of these issues in deploying IIDSs. Our findings show the necessity of extensive malicious training data for supervised IIDSs, which can be impractical considering the complexity of recording and labeling attacks in actual industrial environments. Furthermore, while other IIDSs circumvent this issue by requiring only benign training data, they can suffer from the difficulty of setting appropriate hyperparameters, which can likewise diminish their performance. By shedding light on these challenges, we aim to improve the understanding of the limitations and considerations necessary for deploying effective cybersecurity solutions in ICSs; these challenges might be one reason why IIDSs see few real-world deployments.
This paper offers a comprehensive review of the research on Natural Language Generation (NLG) over the past two decades, especially in relation to deep learning methods for data-to-text and text-to-text generation, as well as new applications of NLG technology. This survey aims to (a) give the latest synthesis of deep learning research on the core NLG tasks and the architectures adopted in the field; (b) detail meticulously and comprehensively the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; and (c) highlight future directions and relatively recent research issues arising from the increasing synergy between NLG and other artificial intelligence areas, such as computer vision, text, and computational creativity.
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
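For reference, the link to neural ODEs rests on the following standard scaling argument; the alternative regimes the abstract mentions arise when the weights do not converge to a smooth function of depth (for instance, rougher weight scalings lead toward stochastic differential equations instead).

```latex
% Residual update with depth L and layer-k weights \theta_k:
\[
  x_{k+1} \;=\; x_k + \tfrac{1}{L}\, f(x_k;\theta_k),
  \qquad k = 0,\dots,L-1 .
\]
% If \theta_k \approx \theta(k/L) for a smooth limit function \theta(\cdot),
% the forward pass converges, as L \to \infty, to the neural ODE
\[
  \frac{\mathrm{d}x(t)}{\mathrm{d}t} \;=\; f\bigl(x(t);\theta(t)\bigr),
  \qquad t \in [0,1].
\]
```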
This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify knowledge representations in high conv-layers of CNNs. In an interpretable CNN, each filter in a high conv-layer represents a certain object part. We do not need any annotations of object parts or textures to supervise the learning process. Instead, the interpretable CNN automatically assigns each filter in a high conv-layer to an object part during the learning process. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e., the patterns on which the CNN bases its decisions. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs.
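A heavily simplified stand-in for the idea of pushing each high-layer filter toward a single, part-like activation peak is sketched below; the paper's actual filter loss is more involved, so this is an illustration of the general objective, not the authors' method. Minimizing the spatial entropy of each filter's activation map encourages localized firing.

```python
# Simplified illustration: encourage each conv filter to fire at one
# localized spatial position (not the paper's exact filter loss).
import torch
import torch.nn.functional as F

def spatial_concentration_loss(feat):
    """feat: (batch, filters, H, W) conv activations.
    Low spatial entropy => each filter fires at one localized spot."""
    b, c, h, w = feat.shape
    p = F.softmax(feat.reshape(b, c, h * w), dim=-1)  # activation map as a spatial distribution
    entropy = -(p * (p + 1e-12).log()).sum(-1)        # per-filter spatial entropy
    return entropy.mean()

feat = torch.randn(4, 32, 7, 7, requires_grad=True)
loss = spatial_concentration_loss(feat)  # added to the task loss during training
loss.backward()
```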