This article compares the service quality of the 5G networks of two leading mobile network operators in Thailand's telecom market. Three factors frequently used as indicators of Internet network quality were examined: download speed, upload speed, and latency. The researchers determined an average grade of service by comparing newly collected measurements against data collected in mid-May 2021 using the same methodology and application. The average upload speed dropped from 62.6 Mbps in 2021 to 52.0 Mbps in 2023, while the average latency increased from 14.9 to 23.3 milliseconds. The results nonetheless indicated considerably improved quality values, even though the test region comprised only BTS stations and therefore covered just a small percentage of the total population.
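As a worked check of the reported averages, the relative changes can be computed directly; a minimal sketch using only the figures quoted above:

```python
# Relative change in the reported network-quality averages (mid-2021 vs. 2023).
upload_2021, upload_2023 = 62.6, 52.0    # Mbps, as reported in the abstract
latency_2021, latency_2023 = 14.9, 23.3  # milliseconds, as reported in the abstract

upload_change = (upload_2023 - upload_2021) / upload_2021 * 100
latency_change = (latency_2023 - latency_2021) / latency_2021 * 100

print(f"Upload speed change: {upload_change:+.1f}%")  # roughly -16.9%
print(f"Latency change:      {latency_change:+.1f}%")  # roughly +56.4%
```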
In the current landscape of online abuses and harms, effective content moderation is necessary to cultivate safe and inclusive online spaces. Yet, the effectiveness of many moderation interventions is still unclear. Here, we assess the effectiveness of The Great Ban, a massive deplatforming operation that affected nearly 2,000 communities on Reddit. By analyzing 16M comments posted by 17K users during 14 months, we provide nuanced results on the effects, both desired and otherwise, of the ban. Among our main findings is that 15.6% of the affected users left Reddit and that those who remained reduced their toxicity by 6.6% on average. The ban also caused 5% of users to increase their toxicity by more than 70% of their pre-ban level. However, these resentful users likely had limited impact on Reddit due to low activity and little support by peers. Overall, our multifaceted results provide new insights into the efficacy of deplatforming. Our findings can inform the development of future moderation interventions and the policing of online platforms.
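To illustrate how such per-user effects can be measured, the sketch below computes each user's mean toxicity before and after an intervention date and flags users whose toxicity rose by more than 70% of their pre-ban level; the column names, toy data, and threshold are hypothetical simplifications, not the paper's actual pipeline.

```python
import pandas as pd

# Hypothetical comment log: one row per comment, with a toxicity score in [0, 1].
comments = pd.DataFrame({
    "user":     ["a", "a", "a", "b", "b", "b"],
    "ts":       pd.to_datetime(["2020-05-01", "2020-08-01", "2020-09-01",
                                "2020-05-10", "2020-07-15", "2020-10-01"]),
    "toxicity": [0.20, 0.10, 0.12, 0.30, 0.55, 0.60],
})
BAN_DATE = pd.Timestamp("2020-06-29")  # date of The Great Ban

pre  = comments[comments.ts <  BAN_DATE].groupby("user")["toxicity"].mean()
post = comments[comments.ts >= BAN_DATE].groupby("user")["toxicity"].mean()

change = ((post - pre) / pre).dropna()    # relative per-user change
print(change)                             # negative values = reduced toxicity
resentful = change[change > 0.70]         # rose by >70% of pre-ban level
print("resentful users:", list(resentful.index))
```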
In the evolving landscape of online communication, moderating hate speech (HS) presents an intricate challenge, compounded by the multimodal nature of digital content. This comprehensive survey delves into the recent strides in HS moderation, spotlighting the burgeoning role of large language models (LLMs) and large multimodal models (LMMs). Our exploration begins with a thorough analysis of current literature, revealing the nuanced interplay between textual, visual, and auditory elements in propagating HS. We uncover a notable trend towards integrating these modalities, primarily due to the complexity and subtlety with which HS is disseminated. A significant emphasis is placed on the advances facilitated by LLMs and LMMs, which have begun to redefine the boundaries of detection and moderation capabilities. We identify existing gaps in research, particularly in the context of underrepresented languages and cultures, and the need for solutions to handle low-resource settings. The survey concludes with a forward-looking perspective, outlining potential avenues for future research, including the exploration of novel AI methodologies, the ethical governance of AI in moderation, and the development of more nuanced, context-aware systems. This comprehensive overview aims to catalyze further research and foster a collaborative effort towards more sophisticated, responsible, and human-centric approaches to HS moderation in the digital era.\footnote{\textcolor{red}{WARNING: This paper contains offensive examples.}}
Integrated sensing and communication (ISAC) has attracted growing interest for enabling future 6G wireless networks, due to its capability of sharing spectrum and hardware resources between communication and sensing systems. However, existing works on ISAC usually need to modify the communication protocol to cater to the new sensing performance requirements, which may be difficult to implement in practice. In this paper, we study a new intelligent reflecting surface (IRS) aided millimeter-wave (mmWave) ISAC system by exploiting the distinct beam scanning operation in mmWave communications to achieve efficient sensing at the same time. First, we propose a two-phase ISAC protocol aided by a semi-passive IRS, consisting of beam scanning and data transmission. Specifically, in the beam scanning phase, the IRS finds the optimal beam for reflecting signals from the base station to a communication user via its passive elements. Meanwhile, the IRS directly estimates the angle of a nearby target based on echo signals from the target using its equipped active sensing element. Then, in the data transmission phase, the sensing accuracy is further improved by leveraging the data signals via possible IRS beam splitting. Next, we derive the achievable rate of the communication user as well as the Cram\'er-Rao bound and the approximate mean square error of the target angle estimation. Finally, extensive simulation results are provided to verify our analysis as well as the effectiveness of the proposed scheme.
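For context, a standard single-source Cram\'er-Rao bound for angle estimation with an $M$-element uniform linear array (element spacing $d$, wavelength $\lambda$, $N$ snapshots) takes the form below; this is the textbook baseline, not the paper's IRS-specific derivation:
\[
\mathrm{CRB}(\theta) \;=\; \frac{6}{N \cdot \mathrm{SNR} \cdot M\,(M^{2}-1)} \left( \frac{\lambda}{2\pi d \cos\theta} \right)^{2},
\]
which exhibits the familiar $M^{-3}$ scaling in the number of array elements and the loss of accuracy as the target approaches endfire ($\cos\theta \to 0$).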
The aspiration of the next generation's autonomous driving (AD) technology relies on the dedicated integration and interaction among intelligent perception, prediction, planning, and low-level control. There has been a huge bottleneck regarding the upper bound of autonomous driving algorithm performance, a consensus from academia and industry believes that the key to surmount the bottleneck lies in data-centric autonomous driving technology. Recent advancement in AD simulation, closed-loop model training, and AD big data engine have gained some valuable experience. However, there is a lack of systematic knowledge and deep understanding regarding how to build efficient data-centric AD technology for AD algorithm self-evolution and better AD big data accumulation. To fill in the identified research gaps, this article will closely focus on reviewing the state-of-the-art data-driven autonomous driving technologies, with an emphasis on the comprehensive taxonomy of autonomous driving datasets characterized by milestone generations, key features, data acquisition settings, etc. Furthermore, we provide a systematic review of the existing benchmark closed-loop AD big data pipelines from the industrial frontier, including the procedure of closed-loop frameworks, key technologies, and empirical studies. Finally, the future directions, potential applications, limitations and concerns are discussed to arouse efforts from both academia and industry for promoting the further development of autonomous driving. The project repository is available at: //github.com/LincanLi98/Awesome-Data-Centric-Autonomous-Driving.
Next generation mobile networks are poised to transition from monolithic structures owned and operated by single mobile network operators into multi-stakeholder networks where various parties contribute with infrastructure, resources, and services. However, a federation of networks and services brings along a crucial challenge: Guaranteeing secure and trustworthy access control among network entities of different administrative domains. This paper introduces a novel technical concept and a prototype, outlining and implementing a 5G Service-Based Architecture that utilizes Decentralized Identifiers and Verifiable Credentials instead of traditional X.509 certificates and OAuth2.0 access tokens to authenticate and authorize network functions among each other across administrative domains. This decentralized approach to identity and permission management for network functions reduces the risk of single points of failure associated with centralized public key infrastructures. It unifies access control mechanisms and lays the groundwork for less complex and more trustworthy cross-domain key management for highly collaborative network functions in a multi-party Service-Based Architecture of 6G.
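To make the credential-based flow concrete, here is a minimal, illustrative sketch of issuing and verifying a signed credential with Ed25519 using Python's `cryptography` library; the DID strings, claim fields, and naive canonicalization are hypothetical simplifications of the W3C Verifiable Credentials model, not the prototype's actual implementation.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer (e.g., an operator's trust anchor) holds the signing key.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

# Hypothetical credential asserting that a network function may consume a service.
credential = {
    "issuer":  "did:example:operator-a",   # illustrative DID, not resolvable
    "subject": "did:example:nf-smf-17",
    "claim":   {"allowed_service": "nudm-sdm", "role": "consumer"},
}
payload = json.dumps(credential, sort_keys=True).encode()  # naive canonical form
signature = issuer_key.sign(payload)

# Verifier (the producer NF in another domain) checks the proof offline,
# with no central OAuth2.0 authorization server in the loop.
try:
    issuer_pub.verify(signature, payload)
    print("credential accepted")
except InvalidSignature:
    print("credential rejected")
```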
Most of the existing research on degrees-of-freedom (DoF) with imperfect channel state information at the transmitter (CSIT) assumes the messages are private, which may not reflect reality, as the two receivers can request the same content. To overcome this limitation, we consider hybrid unicast and multicast messages. In particular, we characterize the optimal DoF region for the two-user multiple-input multiple-output (MIMO) broadcast channel (BC) with imperfect CSIT and hybrid messages. For the converse, we establish a three-step procedure to exploit the utmost possible relaxation. For the achievability, since the DoF region has a specific three-dimensional structure with respect to antenna configurations and CSIT qualities, we verify the existence or non-existence of corner point candidates via a categorization of antenna configurations and CSIT qualities, and provide a hybrid message-aware rate-splitting scheme. Besides, we show that to achieve the strictly positive corner points, it is unnecessary to split the unicast messages into private and common parts, implying that adding a multicast message may mitigate the rate-splitting complexity.
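As a point of reference, for the symmetric two-user MISO BC with private messages only and CSIT quality $\alpha \in [0,1]$ (i.e., CSIT error power scaling as $P^{-\alpha}$), the optimal DoF region is known to be
\[
d_1 \le 1, \qquad d_2 \le 1, \qquad d_1 + d_2 \le 1 + \alpha,
\]
achievable by rate splitting; the region characterized in this paper generalizes such results to hybrid unicast and multicast messages and general MIMO antenna configurations.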
Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through methods such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs is unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them by the explanation methods they use. We further provide the common performance metrics for GNN explanations and point out several future research directions.
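As one concrete example of the instance-level methods surveyed, the sketch below runs GNNExplainer through PyTorch Geometric's `Explainer` interface (assuming PyG 2.3+); the two-layer GCN and random graph are illustrative placeholders, and in practice a trained model would be explained.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

class GCN(torch.nn.Module):
    def __init__(self, in_dim=16, hid=32, n_classes=4):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid)
        self.conv2 = GCNConv(hid, n_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

# Toy graph: 100 nodes with random features and edges.
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 400))

explainer = Explainer(
    model=GCN(),
    algorithm=GNNExplainer(epochs=100),
    explanation_type="model",          # explain the model's own prediction
    node_mask_type="attributes",
    edge_mask_type="object",
    model_config=dict(mode="multiclass_classification",
                      task_level="node",
                      return_type="raw"),
)
explanation = explainer(x, edge_index, index=0)  # explain node 0
print(explanation.edge_mask.shape, explanation.node_mask.shape)
```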
With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for the next generation beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most of the existing surveys on B5G security focus on the performance of AI/ML models and their accuracy, but they often overlook the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the security domain of B5G is to make the decision-making processes of system security transparent and comprehensible to stakeholders, making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, together with the use cases that general users would ultimately enjoy. Furthermore, we present the lessons learned from recent efforts and future research directions on top of currently conducted projects involving XAI.
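To illustrate the kind of post-hoc transparency XAI can bring to, say, an ML-based intrusion detector, the sketch below attributes a tree model's decisions to input features with SHAP; the synthetic features and labels stand in for real traffic data and are purely illustrative.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for network-flow features (e.g., duration, bytes, packet rate).
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # toy "attack / benign" label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer yields per-feature attributions for each prediction, making
# the detector's automated decisions inspectable by stakeholders.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(np.shape(shap_values))  # attributions per sample and feature (and class)
```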
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
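As a minimal illustration of category (1), the sketch below applies PyTorch's post-training dynamic quantization to convert Linear-layer weights to 8-bit integers; the toy MLP is a placeholder for a larger model.

```python
import io
import torch

# Toy model standing in for a larger DNN.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)

# Post-training dynamic quantization: Linear weights stored as int8,
# activations quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def serialized_size(m):
    """Size of the model's serialized state_dict, in bytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"fp32 model: {serialized_size(model) / 1e6:.2f} MB")
print(f"int8 model: {serialized_size(quantized) / 1e6:.2f} MB")  # ~4x smaller weights
```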
Small data challenges have emerged in many learning problems, since the success of deep neural networks often relies on the availability of a huge amount of labeled data that is expensive to collect. To address these challenges, many efforts have been made to train complex models with small data in unsupervised and semi-supervised fashions. In this paper, we review the recent progress on these two major categories of methods. A wide spectrum of small data models will be categorized in a big picture, where we show how they interplay with each other to motivate the exploration of new ideas. We review the criteria for learning transformation-equivariant, disentangled, self-supervised, and semi-supervised representations, which underpin the foundations of recent developments. Many instantiations of unsupervised and semi-supervised generative models have been developed on the basis of these criteria, greatly expanding the territory of existing autoencoders, generative adversarial nets (GANs), and other deep networks by exploring the distribution of unlabeled data for more powerful representations. While we focus on unsupervised and semi-supervised methods, we also provide a broader review of other emerging topics, from unsupervised and semi-supervised domain adaptation to the fundamental roles of transformation equivariance and invariance in training a wide spectrum of deep networks. It is impossible for us to write an exhaustive encyclopedia covering all related works. Instead, we aim to explore the main ideas, principles, and methods in this area to reveal where we are heading on the journey towards addressing the small data challenges in this big data era.
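As a minimal instance of the semi-supervised methods discussed, the sketch below implements confidence-thresholded pseudo-labeling on synthetic tensors; the architecture, threshold, and data are illustrative choices, not a specific method from the surveyed literature.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_lab, y_lab = torch.randn(32, 20), torch.randint(0, 3, (32,))  # scarce labels
x_unl = torch.randn(256, 20)                                    # abundant unlabeled data
THRESHOLD = 0.9  # only trust highly confident pseudo-labels

for step in range(100):
    # Supervised loss on the small labeled set.
    loss = F.cross_entropy(model(x_lab), y_lab)

    # Pseudo-labels: the model's own confident predictions on unlabeled data.
    with torch.no_grad():
        probs = F.softmax(model(x_unl), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf > THRESHOLD
    if mask.any():
        loss = loss + F.cross_entropy(model(x_unl[mask]), pseudo[mask])

    opt.zero_grad()
    loss.backward()
    opt.step()
```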