Mission critical communication (MCC) involves the exchange of information and data among emergency services, including the police, fire brigade, and other first responders, particularly during emergencies, disasters, or critical incidents. The widely adopted TETRA (Terrestrial Trunked Radio)-based communication for mission critical services faces challenges including limited data capacity, coverage limitations, spectrum congestion, and security concerns. Therefore, mission critical communication over cellular networks (4G and 5G) has emerged as an alternative. While cellular-based MCC enables features such as real-time video streaming and high-speed data transmission, the involvement of network operators and application service providers in the MCC architecture raises privacy concerns for mission critical users and services. For instance, disclosing a police officer's location to the network operator is privacy-sensitive. To the best of our knowledge, no existing work considers the privacy issues in mission critical systems with respect to 5G and upcoming technologies. Therefore, in this paper, we analyse the 3GPP standardised MCC architecture within the context of 5G core network concepts and assess the privacy implications for MC users, network entities, and MC servers. The privacy analysis follows the deployment strategies defined in the MCC standard. Additionally, we explore emerging 6G technologies, such as off-network communications, joint communication and sensing, and non-3GPP communications, to identify privacy challenges in the MCC architecture. Finally, we propose privacy controls to establish a next-generation privacy-preserving MCC architecture.
The Lucas critique emphasizes the importance of considering the impact of policy changes on the expectations of micro-level agents in macroeconomic policymaking. However, the inherently self-interested nature of large-scale micro-agents, who pursue long-term benefits, complicates the formulation of optimal macroeconomic policies. This paper proposes a novel general framework named Dynamic Stackelberg Mean Field Games (Dynamic SMFG) to model such policymaking within sequential decision-making processes, with the government as the leader and households as dynamic followers. Dynamic SMFGs capture the dynamic interactions among large-scale households and their responses to macroeconomic policy changes. To solve dynamic SMFGs, we propose the Stackelberg Mean Field Reinforcement Learning (SMFRL) algorithm, which leverages the population distribution of followers to represent high-dimensional joint state and action spaces. In experiments, our method outperforms real-world macroeconomic policies as well as existing AI-based and economic methods: it allows the leader to approach the social optimum with the highest performance, while large-scale followers converge toward their best response to the leader's policy. Moreover, we demonstrate that our approach remains effective even when some households do not adopt the SMFG policy. In summary, this paper contributes to the field of AI for economics by offering an effective tool for modeling and solving macroeconomic policymaking issues.
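To make the leader-follower structure concrete, the following is a minimal numerical sketch of a Stackelberg mean-field loop. It is not the SMFRL algorithm: the household utility, the public-good term, and the finite-difference leader update are illustrative stand-ins for the learned components, and the follower population is reduced to a one-dimensional effort choice.

```python
# A toy Stackelberg mean-field loop, NOT the SMFRL algorithm itself.
# Utilities, the public-good term, and the finite-difference leader
# update are illustrative stand-ins for the learned components.
import numpy as np

rng = np.random.default_rng(0)
n_households = 1_000

def follower_step(effort, tax, lr=0.1):
    """One gradient step of each household toward its best response.
    Private utility: (1 - tax) * effort - 0.5 * effort**2; transfers
    do not depend on a household's own effort in the mean-field limit."""
    grad = (1.0 - tax) - effort
    return np.clip(effort + lr * grad, 0.0, 1.0)

def solve_followers(tax, steps=200):
    """Let the household population converge to its best response."""
    effort = rng.uniform(0.0, 1.0, n_households)
    for _ in range(steps):
        effort = follower_step(effort, tax)
    return effort

def leader_objective(tax, effort):
    """Illustrative social objective: private welfare plus a public
    good funded by tax revenue."""
    private = np.mean((1.0 - tax) * effort - 0.5 * effort ** 2)
    public = 2.0 * np.sqrt(max(tax * effort.mean(), 0.0))
    return private + public

tax, eps, lr_leader = 0.3, 1e-3, 0.05
for _ in range(100):
    # Finite-difference ascent on the leader's objective; followers
    # re-converge to their best response at each probed tax rate.
    up = leader_objective(tax + eps, solve_followers(tax + eps))
    dn = leader_objective(tax - eps, solve_followers(tax - eps))
    tax = float(np.clip(tax + lr_leader * (up - dn) / (2 * eps), 0.0, 0.95))

print(f"leader tax: {tax:.3f}, mean effort: {solve_followers(tax).mean():.3f}")
```

Even in this simplification, the defining structure survives: the leader optimizes while anticipating that the follower population re-converges to its best response, and each household treats the population mean as exogenous.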
Quantum communication represents a revolutionary advance over classical information theory, leveraging unique quantum mechanical properties such as entanglement to achieve unprecedented capabilities in secure and efficient information transmission. Unlike the bits used in classical communication, quantum communication utilizes qubits in superposition states, allowing for novel forms of information storage and processing. Entanglement, a key quantum phenomenon, enables advanced protocols with enhanced security and processing power. This paper provides a comprehensive overview of quantum communication, emphasizing the role of entanglement in theoretical foundations, practical protocols, experimental progress, and security implications. It contrasts quantum communication's potential applications with classical networks, identifying areas where entanglement offers significant advantages. The paper explores the fundamentals of quantum mechanics in communication, the physical realization of quantum information, and the formation of secure quantum networks through entanglement-based strategies such as Quantum Key Distribution (QKD) and teleportation. It addresses the challenges of long-distance quantum communication, the role of quantum repeaters in scaling networks, and the conceptualization of interconnected quantum networks. Additionally, it discusses strides towards the Quantum Internet, quantum error-correcting codes, and quantum cryptography's role in ensuring secure communication. By highlighting the role of entanglement, this paper aims to inspire further research and innovation in secure and efficient information exchange within quantum networks.
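As one concrete example of a QKD protocol's structure, the following is a purely classical toy simulation of the sifting step of BB84 (a prepare-and-measure protocol; its entanglement-based counterpart E91 yields the same sifting structure). No eavesdropper or channel noise is modeled, and the quantum measurement statistics are emulated with random numbers.

```python
# A classical toy simulation of BB84 sifting; illustrative only, not a
# model of any specific experimental system.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)   # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, n)

# When the bases match, Bob recovers Alice's bit exactly; otherwise his
# outcome is uniformly random (the quantum behaviour, emulated here).
random_outcomes = rng.integers(0, 2, n)
bob_bits = np.where(alice_bases == bob_bases, alice_bits, random_outcomes)

# Sifting: keep only positions where the bases agree (~50% on average).
keep = alice_bases == bob_bases
key_alice, key_bob = alice_bits[keep], bob_bits[keep]
assert np.array_equal(key_alice, key_bob)   # no eavesdropper, no noise
print(f"sifted key length: {keep.sum()} of {n}")
```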
Open Government Data (OGD) initiatives aim to enhance public participation and collaboration by making government data accessible to diverse stakeholders, fostering social, environmental, and economic benefits through public value generation. However, challenges such as declining popularity, poor OGD portal usability, and private interests overshadowing public accessibility persist. This study proposes an integrated usability framework for evaluating OGD portals, focusing on inclusivity, user collaboration, and data exploration. Employing Design Science Research (DSR), the framework is developed and applied to 33 OGD portals from the European Union (EU) and Gulf Cooperation Council (GCC) countries. The quantitative analysis is complemented by qualitative analysis and clustering, enabling assessment of portal performance and identification of best practices and common weaknesses. This results in 19 high-level recommendations for improving the open data ecosystem. Key findings highlight the competitive nature of EU portals and the innovative features of GCC portals, emphasizing the need for multilingual support, better communication mechanisms, and improved dataset usability. The study also notes trends towards exposing data quality indicators and incorporating advanced functionalities such as AI systems. This framework serves as a baseline for OGD portal requirements elicitation, offering practical implications for developing sustainable, collaborative, and robust OGD portals, ultimately contributing to a more transparent and equitable world.
Effective communication requires the ability to refer to specific parts of an observation in relation to others. While the emergent communication literature shows success in developing various language properties, no prior research has shown the emergence of such positional references. This paper demonstrates how agents can communicate about spatial relationships within their observations. The results indicate that agents can develop a language capable of expressing the relationships between parts of their observation, achieving over 90% accuracy when trained in a referential game that requires such communication. Using a collocation measure, we demonstrate how the agents create such references. This analysis suggests that agents use a mixture of non-compositional and compositional messages to convey spatial relationships. We also show that the emergent language is interpretable by humans. We test the translation accuracy by communicating with the receiver agent: using parts of this lexicon, the receiver achieves over 78% accuracy, confirming that the interpretation of the emergent language was successful.
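For readers unfamiliar with referential games, the following toy Lewis-style game over four spatial relations illustrates the kind of training signal involved. It is far simpler than the setting studied here (no pixel observations, single-symbol messages, plain REINFORCE updates), and all sizes and hyperparameters are illustrative.

```python
# A toy Lewis-style referential game over spatial relations; a sketch
# of the training signal only, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
RELATIONS = ["left-of", "right-of", "above", "below"]
n_rel, n_sym = len(RELATIONS), 6

sender_logits = np.zeros((n_rel, n_sym))    # sender: relation -> symbol
receiver_logits = np.zeros((n_sym, n_rel))  # receiver: symbol -> relation

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.5
for _ in range(5_000):
    rel = rng.integers(n_rel)                 # sample a target relation
    p_s = softmax(sender_logits[rel])
    sym = rng.choice(n_sym, p=p_s)            # sender emits a symbol
    p_r = softmax(receiver_logits[sym])
    guess = rng.choice(n_rel, p=p_r)          # receiver decodes it
    reward = 1.0 if guess == rel else 0.0
    # REINFORCE updates for both agents with a shared scalar reward.
    grad_s = -p_s; grad_s[sym] += 1.0
    sender_logits[rel] += lr * reward * grad_s
    grad_r = -p_r; grad_r[guess] += 1.0
    receiver_logits[sym] += lr * reward * grad_r

# Greedy communication accuracy after training.
acc = np.mean([softmax(receiver_logits[sender_logits[r].argmax()]).argmax() == r
               for r in range(n_rel)])
print(f"greedy accuracy: {acc:.2f}")
```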
In light of recent advancements in AI capabilities and the increasingly widespread integration of AI systems into society, governments worldwide are actively seeking to mitigate the potential harms and risks associated with these technologies through regulation and other governance tools. However, significant gaps exist between governance aspirations and the current state of the technical tooling necessary for their realisation. In this position paper, we survey policy documents published by public-sector institutions in the EU, US, and China to highlight specific areas of disconnect between the technical requirements needed to enact proposed policy actions and the current technical state of the art. Our analysis motivates a call for tighter integration of the AI/ML research community within AI governance in order to i) catalyse technical research aimed at bridging the gap between the current state of the art and the technical underpinnings presupposed by regulatory action, as well as ii) increase the level of technical expertise within governing institutions so as to inform and guide effective governance of AI.
We study the generalization of two-layer ReLU neural networks in a univariate nonparametric regression problem with noisy labels. This is a problem where kernels (\emph{e.g.}, NTK) are provably sub-optimal and benign overfitting does not happen, thus disqualifying existing theory for interpolating (0-loss, globally optimal) solutions. We present a new theory of generalization for local minima that gradient descent with a constant learning rate can \emph{stably} converge to. We show that gradient descent with a fixed learning rate $\eta$ can only find local minima that represent smooth functions with a certain weighted \emph{first-order total variation} bounded by $1/\eta - 1/2 + \widetilde{O}(\sigma + \sqrt{\mathrm{MSE}})$, where $\sigma$ is the label noise level, $\mathrm{MSE}$ is short for mean squared error against the ground truth, and $\widetilde{O}(\cdot)$ hides a logarithmic factor. Under mild assumptions, we also prove a nearly-optimal MSE bound of $\widetilde{O}(n^{-4/5})$ within the strict interior of the support of the $n$ data points. Our theoretical results are validated by extensive simulations demonstrating that training with a large learning rate induces sparse linear spline fits. To the best of our knowledge, we are the first to obtain a generalization bound via minima stability in the non-interpolation case and the first to show that ReLU NNs without regularization can achieve near-optimal rates in nonparametric regression.
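The following minimal simulation reproduces the flavor of this setup under illustrative assumptions (the network width, initialization scales, step count, and learning rate $\eta$ are not those of our experiments, and too large an $\eta$ may diverge): train a two-layer ReLU network on noisy 1-D data with a constant learning rate, then measure the first-order total variation, i.e. the total variation of the fitted function's derivative.

```python
# Gradient descent with a constant learning rate on a two-layer ReLU
# network for noisy 1-D regression; hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, width, eta, steps = 100, 200, 0.1, 5_000
x = np.sort(rng.uniform(-1.0, 1.0, n))
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(n)   # noisy labels

W = 0.5 * rng.standard_normal(width)   # input weights
b = 0.5 * rng.standard_normal(width)   # biases
a = 0.1 * rng.standard_normal(width)   # output weights

def forward(inputs):
    h = np.maximum(np.outer(inputs, W) + b, 0.0)     # (len(inputs), width)
    return h, h @ a

for _ in range(steps):
    h, pred = forward(x)
    r = pred - y                                     # residuals
    mask = (h > 0.0).astype(float)
    ga = h.T @ r / n                                 # grad of 0.5*MSE wrt a
    gW = (mask * (r * x)[:, None]).sum(axis=0) * a / n   # wrt W
    gb = (mask * r[:, None]).sum(axis=0) * a / n         # wrt b
    a -= eta * ga
    W -= eta * gW
    b -= eta * gb

# First-order total variation of the fit on a fine grid: TV of f'.
xs = np.linspace(-1.0, 1.0, 2_000)
f = forward(xs)[1]
slopes = np.diff(f) / np.diff(xs)
print(f"train MSE: {np.mean((forward(x)[1] - y) ** 2):.4f}, "
      f"TV of f': {np.abs(np.diff(slopes)).sum():.2f}")
```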
The scarcity of high-quality Earth Observation (EO) imagery poses a significant challenge, despite its critical role in enabling precise analysis and informed decision-making across various sectors. This scarcity is primarily due to atmospheric conditions, seasonal variations, and limited geographical coverage, which complicates the application of Artificial Intelligence (AI) in EO. Data augmentation, a widely used AI technique that generates additional data mainly through parameterized image transformations, has been employed to increase the volume and diversity of data. However, this method often falls short of generating sufficient diversity across key semantic axes, adversely affecting the accuracy of EO applications. To address this issue, we propose a novel four-stage approach that improves the diversity of augmented data by integrating diffusion models: our approach employs meta-prompts for instruction generation, harnesses general-purpose vision-language models to generate rich captions, fine-tunes an Earth Observation diffusion model, and iteratively augments the data. We conducted extensive experiments using four different data augmentation techniques; our approach consistently outperformed the established augmentation methods, demonstrating its effectiveness in generating semantically rich and diverse EO images.
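A structural dry-run of the four stages is sketched below. Every helper (generate_instructions, caption_image, finetune_diffusion, synthesize) is a hypothetical placeholder that only mimics the data flow between stages; no real vision-language or diffusion model is invoked.

```python
# A dry-run skeleton of the four-stage augmentation loop; all helpers
# are hypothetical placeholders, not real model APIs.
from typing import Callable, List, Tuple

def generate_instructions(meta_prompt: str, n: int) -> List[str]:
    # Stage 1: expand a meta-prompt into concrete captioning instructions.
    return [f"{meta_prompt} (variant {i})" for i in range(n)]

def caption_image(image_path: str, instruction: str) -> str:
    # Stage 2: a general-purpose vision-language model would go here.
    return f"caption of {image_path} under '{instruction}'"

def finetune_diffusion(pairs: List[Tuple[str, str]]) -> Callable[[str], str]:
    # Stage 3: fine-tune an EO diffusion model on (caption, image) pairs.
    def synthesize(caption: str) -> str:
        return f"synthetic image for: {caption}"
    return synthesize

def augment(images: List[str], meta_prompt: str, rounds: int = 2) -> List[str]:
    dataset = list(images)
    for _ in range(rounds):                     # Stage 4: iterate
        pairs = [(caption_image(img, ins), img)
                 for img in dataset
                 for ins in generate_instructions(meta_prompt, 3)]
        synthesize = finetune_diffusion(pairs)
        dataset += [synthesize(caption) for caption, _ in pairs]
    return dataset

print(len(augment(["scene_001.tif"], "describe land cover and season")))
```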
Recurrent negative thoughts can significantly disrupt daily life and contribute to negative emotional states. Noticing, facing, and confronting such thoughts without support can be challenging. To provide a playful setting and leverage the technical maturation of Virtual Reality (VR), our VR experience, Mind Mansion, places the user in an initially cluttered virtual apartment. Here, we utilize established concepts from traditional therapy and metaphors identified in prior work to let users engage metaphorically with representations of thoughts, gradually tidying the space, fostering awareness of thoughts, and supporting mental self-care. The results of our user study (n = 30) reveal that Mind Mansion encourages the exploration of alternative perspectives, fosters acceptance, and potentially offers new coping mechanisms. Our findings suggest that this VR intervention can reduce negative affect and improve overall emotional awareness.
With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for the next generation of beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not only immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most existing surveys on B5G security focus on the performance and accuracy of AI/ML models while overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the B5G security domain is to make the decision-making processes of security systems transparent and comprehensible to stakeholders, holding the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including technologies such as the RAN, zero-touch network management, and E2E slicing, along with the use cases that general users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and future research directions, building on currently ongoing projects involving XAI.
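As a minimal illustration of the kind of model-agnostic explanation technique discussed in this survey, the following example applies permutation feature importance (via scikit-learn) to a synthetic traffic-anomaly classifier. The feature names, data, and model are entirely illustrative and are not drawn from any B5G system.

```python
# Permutation feature importance on a toy "traffic anomaly" classifier;
# a generic model-agnostic XAI example, evaluated on the training data
# for brevity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2_000
X = np.column_stack([
    rng.exponential(1.0, n),     # packet_rate
    rng.normal(500, 150, n),     # mean_packet_size
    rng.integers(0, 2, n),       # encrypted flag
])
# Anomalies are driven mostly by packet_rate in this toy data.
y = (X[:, 0] + 0.1 * rng.standard_normal(n) > 2.0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["packet_rate", "mean_packet_size", "encrypted"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

An explanation of this form tells an operator which input features the model's automated decision actually relied on, which is the accountability gap the survey highlights.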
A fundamental goal of scientific research is to learn about causal relationships. However, despite its critical role in the life and social sciences, causality has not had the same importance in Natural Language Processing (NLP), which has traditionally placed more emphasis on predictive tasks. This distinction is beginning to fade, with an emerging area of interdisciplinary research at the convergence of causal inference and language processing. Still, research on causality in NLP remains scattered across domains without unified definitions, benchmark datasets, and clear articulations of the remaining challenges. In this survey, we consolidate research across academic areas and situate it in the broader NLP landscape. We introduce the statistical challenge of estimating causal effects, encompassing settings where text is used as an outcome, treatment, or as a means to address confounding. In addition, we explore potential uses of causal inference to improve the performance, robustness, fairness, and interpretability of NLP models. We thus provide a unified overview of causal inference for the computational linguistics community.
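As a minimal illustration of the "text as a means to address confounding" setting, the following synthetic example shows how a document-level covariate (reduced here to a single numeric text feature, e.g. a sentiment score) biases a naive treatment-effect estimate, and how simple regression adjustment recovers the true effect. All quantities are simulated.

```python
# Synthetic text-as-confounder example: regression adjustment vs. a
# naive difference in means; entirely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
sentiment = rng.normal(0, 1, n)   # stand-in for a text-derived feature
# Treatment assignment depends on the text feature (confounding).
treat = (sentiment + rng.normal(0, 1, n) > 0).astype(float)
# Outcome depends on both; the true average treatment effect is 1.0.
outcome = 1.0 * treat + 2.0 * sentiment + rng.normal(0, 1, n)

naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# OLS adjustment: regress outcome on [1, treat, sentiment].
Xd = np.column_stack([np.ones(n), treat, sentiment])
beta = np.linalg.lstsq(Xd, outcome, rcond=None)[0]
print(f"naive: {naive:.2f}, adjusted ATE: {beta[1]:.2f} (truth: 1.00)")
```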