Semantic communication, as a potential key technology for sixth-generation (6G) communications, has attracted research interest from both academia and industry. However, semantic communication is still in its infancy and faces many challenges, such as the definition of semantic information and the measurement of semantic communication. To address these challenges, we investigate unified semantic information measures and a semantic channel coding theorem. Specifically, to address the shortcoming that existing semantic entropy definitions apply only to specific tasks, we propose a universal semantic entropy, defined as the uncertainty in the semantic interpretation of random variable symbols in the context of knowledge bases. The proposed universal semantic entropy depends not only on the probability distribution but also on the specific value of the symbol and the background knowledge base. Under certain conditions, it degenerates into the existing semantic entropy and Shannon entropy definitions. Moreover, since accurate transmission of semantic symbols in a semantic communication system can tolerate a non-zero bit error rate, we conjecture that the bit rate of semantic communication may exceed the Shannon channel capacity. Furthermore, we propose a semantic channel coding theorem and prove its achievability and converse. Since the well-known Fano's inequality cannot be directly applied to semantic communications, we derive and prove a semantic Fano's inequality and use it to prove the converse. To the best of our knowledge, this is the first theoretical proof that the transmission rate of semantic communication can exceed the Shannon channel capacity, which provides a theoretical basis for semantic communication research.
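As a schematic illustration of such a knowledge-base-dependent measure (our notation, not the paper's exact formalism), one can write

$$H_s(X \mid K) = -\sum_{x \in \mathcal{X}} p(x) \log p\big(\sigma_K(x)\big),$$

where $\sigma_K(x)$ denotes the semantic interpretation of the symbol $x$ under the knowledge base $K$ and $p(\sigma_K(x))$ the induced probability of that interpretation. The quantity depends on the distribution $p$, on the particular symbol values through $\sigma_K$, and on $K$ itself; when $\sigma_K$ is the identity map (an uninformative knowledge base), it reduces to the Shannon entropy $H(X) = -\sum_x p(x)\log p(x)$, consistent with the degeneration property claimed above.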
Binary rewriting is a widely adopted technique in software analysis. WebAssembly (Wasm), as an emerging bytecode format, has attracted great attention from our community. Unfortunately, there is no general-purpose binary rewriting framework for Wasm, and existing efforts at Wasm binary modification are error-prone and tedious. In this paper, we present BREWasm, the first general-purpose static binary rewriting framework for Wasm, which addresses the inherent challenges of Wasm rewriting, including a highly complicated binary structure, strict static syntax verification, and coupling among sections. We perform an extensive evaluation on diverse Wasm applications to show the efficiency, correctness, and effectiveness of BREWasm. We further show that a diverse set of binary rewriting tasks can be implemented on top of BREWasm in an effortless and user-friendly manner.
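BREWasm's own API is not reproduced here; as a minimal illustration of the section-based binary layout that makes Wasm rewriting delicate, the following standard-library-only sketch enumerates the top-level sections of a .wasm file:

```python
import struct

def list_wasm_sections(path):
    """Enumerate the top-level sections of a WebAssembly binary.

    Illustrates the layout that makes Wasm rewriting tricky: every
    section carries its own LEB128-encoded size, so editing one
    section invalidates offsets unless sizes are re-encoded.
    """
    with open(path, "rb") as f:
        data = f.read()
    assert data[:4] == b"\x00asm", "not a Wasm binary"
    version = struct.unpack("<I", data[4:8])[0]
    print(f"Wasm version: {version}")
    pos = 8
    while pos < len(data):
        section_id = data[pos]; pos += 1
        # Decode the LEB128-encoded unsigned section size.
        size, shift = 0, 0
        while True:
            byte = data[pos]; pos += 1
            size |= (byte & 0x7F) << shift
            if not byte & 0x80:
                break
            shift += 7
        print(f"section id={section_id:2d} size={size} bytes")
        pos += size  # skip the section payload
```

Because later sections reference indices defined in earlier ones (functions, types, globals), a single edit typically forces size and index fix-ups elsewhere; this inter-section coupling is exactly what a framework like BREWasm abstracts away.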
In this paper, we investigate the resource allocation design for integrated sensing and communication (ISAC) in distributed antenna networks (DANs). In particular, coordinated by a central processor (CP), a set of remote radio heads (RRHs) provide communication services to multiple users and sense several target locations within an ISAC frame. To avoid severe interference between the information transmission and the radar echoes, we propose to divide the ISAC frame into a communication phase and a sensing phase. During the communication phase, the data signal is generated at the CP and then conveyed to the RRHs via fronthaul links. As for the sensing phase, based on pre-determined RRH-target pairings, each RRH senses a dedicated target location with a synthesized, highly directional beam and then transfers the samples of the received echo to the CP via its fronthaul link for further processing of the sensing information. Taking into account the limited fronthaul capacity and the quality-of-service requirements of both communication and sensing, we jointly optimize the durations of the two phases, the information beamforming, and the covariance matrix of the sensing signal to minimize the total energy consumption over a given finite time horizon. To solve the formulated non-convex design problem, we develop a low-complexity alternating optimization algorithm which converges to a suboptimal solution. Simulation results show that the proposed scheme achieves significant energy savings compared to two baseline schemes. Moreover, our results reveal that for efficient ISAC in wireless networks, energy-focused short-duration pulses are favorable for sensing while low-power long-duration signals are preferable for communication.
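In schematic form (our notation, with the detailed constraint set omitted), the joint design problem reads

$$\min_{\tau_{\mathrm{c}},\,\tau_{\mathrm{s}},\,\{\mathbf{w}_k\},\,\mathbf{R}}\; \tau_{\mathrm{c}}\,P_{\mathrm{c}}\big(\{\mathbf{w}_k\}\big) + \tau_{\mathrm{s}}\,P_{\mathrm{s}}(\mathbf{R}) \quad \text{s.t.}\quad \mathrm{SINR}_k \ge \gamma_k\ \forall k,\quad \text{sensing-quality and fronthaul-capacity constraints},\quad \tau_{\mathrm{c}} + \tau_{\mathrm{s}} \le T,$$

where $\tau_{\mathrm{c}}$ and $\tau_{\mathrm{s}}$ are the communication and sensing phase durations, $\mathbf{w}_k$ the information beamformers, $\mathbf{R}$ the sensing-signal covariance matrix, and $T$ the time horizon. The multiplicative coupling between the phase durations and the power terms is one source of the non-convexity that motivates the alternating optimization.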
It is anticipated that integrated sensing and communications (ISAC) will be one of the key enablers of next-generation wireless networks (such as beyond-5G (B5G) and 6G), supporting a variety of emerging applications. In this paper, we provide a comprehensive review of recent advances in ISAC systems, with a particular focus on their foundations, system design, networking aspects, and applications, and we discuss the open questions that arise in each of these areas. We commence with the information theory of sensing and communications (S$\&$C), followed by the information-theoretic limits of ISAC systems, shedding light on the fundamental performance metrics. Next, we discuss clock synchronization and phase offset problems, the associated Pareto-optimal signaling strategies, as well as super-resolution ISAC system design. Moreover, we envision that ISAC will usher in a paradigm shift for future cellular networks relying on network sensing, transforming the classic cellular architecture, cross-layer resource management methods, and transmission protocols. On the applications side, we further highlight the security and privacy issues of wireless sensing. Finally, we close by reviewing recent advances in a representative ISAC use case, namely the multi-object multi-task (MOMT) recognition problem using wireless signals.
Assessing causal effects in the presence of unmeasured confounding is a challenging problem. Although auxiliary variables, such as instrumental variables, are commonly used to identify causal effects, they are often unavailable in practice due to stringent and untestable conditions. To address this issue, previous studies have utilized linear structural equation models to show that the causal effect is identifiable when the noise variables of both the treatment and the outcome are non-Gaussian. In this paper, we investigate the problem of identifying the causal effect using auxiliary covariates and non-Gaussianity of the treatment. Our key idea is to characterize the impact of the unmeasured confounders using an observed covariate, assuming they are all Gaussian. The auxiliary covariate can be an invalid instrument or an invalid proxy variable. We demonstrate that the causal effect can be identified using this measured covariate, even when the only source of non-Gaussianity comes from the treatment. We then extend the identification results to the multi-treatment setting and provide sufficient conditions for identification. Based on our identification results, we propose a simple and efficient procedure for calculating causal effects and show the $\sqrt{n}$-consistency of the proposed estimator. Finally, we evaluate the performance of our estimator through simulation studies and an application.
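One way to instantiate the setting described above (our notation, not necessarily the paper's exact model) is the linear structural equation model

$$T = \boldsymbol{\alpha}^{\top}\mathbf{U} + \varepsilon_T, \qquad Y = \beta\, T + \boldsymbol{\eta}^{\top}\mathbf{U} + \varepsilon_Y, \qquad W = \boldsymbol{\lambda}^{\top}\mathbf{U} + \varepsilon_W,$$

where $\mathbf{U}$ collects the unmeasured confounders, $W$ is the observed auxiliary covariate (an invalid instrument or invalid proxy), $\mathbf{U}$, $\varepsilon_Y$, and $\varepsilon_W$ are Gaussian, and $\varepsilon_T$ is non-Gaussian. The target is the causal effect $\beta$; in this setup the non-Gaussianity of the treatment noise $\varepsilon_T$ is the only departure from Gaussianity, and it is what makes $\beta$ identifiable even though $W$ would not qualify as a valid instrument.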
Semantic communication represents a promising roadmap toward achieving end-to-end communication with reduced communication overhead and an enhanced user experience. The integration of semantic concepts with wireless communications presents novel challenges. This paper proposes flexible simulation software that automatically transmits semantic segmentation map images over a communication channel. An additive white Gaussian noise (AWGN) channel with binary phase-shift keying (BPSK) modulation is considered as the channel setup. The well-known polar codes are chosen as the channel coding scheme. The popular COCO-Stuff dataset is used as an example to generate semantic map images corresponding to different signal-to-noise ratios (SNRs). To evaluate the proposed software, we generated four small datasets, each containing a thousand semantic map samples, accompanied by comprehensive information for each image, including the polar code specifications, detailed image attributes, the bit error rate (BER), and the frame error rate (FER). The capacity to generate an unlimited number of semantic maps with the desired channel coding parameters and preferred SNR, in conjunction with the flexibility of using alternative datasets, renders our simulation software highly adaptable and transferable to a broad range of use cases.
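The polar encoding and decoding used by the software are beyond a short sketch, but the underlying channel setup (BPSK over AWGN) can be reproduced in a few lines; function and variable names below are illustrative:

```python
import numpy as np

def bpsk_awgn_ber(snr_db, n_bits=1_000_000, seed=0):
    """Monte-Carlo bit error rate of uncoded BPSK over an AWGN channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                # map bit 0 -> +1, bit 1 -> -1
    snr = 10.0 ** (snr_db / 10.0)             # Eb/N0 (uncoded, Es = Eb = 1)
    noise = np.sqrt(1.0 / (2.0 * snr)) * rng.standard_normal(n_bits)
    decided = ((symbols + noise) < 0).astype(int)  # hard decision at 0
    return np.mean(decided != bits)

for snr_db in (0, 2, 4, 6, 8):
    print(f"Eb/N0 = {snr_db} dB -> BER = {bpsk_awgn_ber(snr_db):.2e}")
```

Adding a polar encoder before the BPSK mapper and a decoder after the hard (or soft) decision turns this into the coded pipeline the software simulates, with BER/FER measured on the decoded information bits.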
We present algorithms based on satisfiability (SAT) solving, as well as answer set programming (ASP), for determining inconsistency degrees in propositional knowledge bases. We consider six different inconsistency measures whose respective decision problems lie on the first level of the polynomial hierarchy. Namely, these are the contension inconsistency measure, the forgetting-based inconsistency measure, the hitting set inconsistency measure, the max-distance inconsistency measure, the sum-distance inconsistency measure, and the hit-distance inconsistency measure. In an extensive experimental analysis, we compare the SAT-based and ASP-based approaches with each other, as well as with a set of naive baseline algorithms. Our results demonstrate that overall, both the SAT-based and the ASP-based approaches clearly outperform the naive baseline methods in terms of runtime. The results further show that the proposed ASP-based approaches outperform the SAT-based ones with regard to all six inconsistency measures considered in this work. Moreover, we conduct additional experiments to explain the aforementioned results in greater detail.
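As a point of reference for what such baselines can look like (this exponential search is our illustration, not the paper's SAT encoding), the contension measure of a CNF knowledge base, i.e., the minimal number of atoms that must take the paraconsistent value "both" in Priest's three-valued logic, can be computed with a SAT solver (here via the python-sat package) handling the classical remainder:

```python
from itertools import combinations
from pysat.solvers import Glucose3

def contension(clauses, atoms):
    """Naive computation of the contension inconsistency measure.

    clauses: CNF as lists of signed ints (DIMACS style), e.g. [[1], [-1, 2]].
    atoms:   set of atom ids occurring in the clauses.
    """
    for k in range(len(atoms) + 1):
        for both in combinations(atoms, k):
            # A clause mentioning a 'both' atom is satisfied outright;
            # the remaining clauses must be classically satisfiable.
            rest = [c for c in clauses if not any(abs(l) in both for l in c)]
            with Glucose3(bootstrap_with=rest) as solver:
                if solver.solve():
                    return k

print(contension([[1], [-1]], {1}))  # KB {a, not a} has contension value 1
```

The dedicated SAT/ASP encodings studied in the paper avoid this outer enumeration over atom subsets, which is what produces the reported runtime gap.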
Since communication signals are publicly exposed as they travel across space, secure communication is especially crucial in Mobile Ad Hoc Networks (MANETs). Unfortunately, these systems are open to intrusions ranging from passive eavesdropping to active spying. This research examines private group communication in ad hoc environments and proposes a Hybrid Team-centric Re-Key Control Framework (HT-RCF). Each group's members choose a Group Manager (GM) to oversee the group, and the proposed HT-RCF uses the Improved Hybrid Power-Aware Decentralized (I-HPAD) mechanism for this selection. The Key Distribution Center (KDC) generates keys and distributes them to the group managers (GMs) using the Rivest-Shamir-Adleman (RSA) algorithm. A key agreement technique is investigated for secure user-to-user communication. Threats that aim to exploit a node are recognized and stopped using regular transmissions. The rekeying procedure is initiated every time a node joins or leaves the network. The research findings demonstrate that the proposed approach outperforms the currently used cluster-based group key management in terms of power use, privacy level, storage use, and processing time.
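A minimal sketch of the KDC-to-group-manager key hand-off follows; RSA-OAEP from the `cryptography` package stands in for the paper's RSA usage, and all names are illustrative assumptions:

```python
# Sketch only: KDC wraps a fresh symmetric group key for a group
# manager (GM) under the GM's RSA public key. Names are illustrative.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Each group manager holds an RSA key pair.
gm_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
gm_public = gm_private.public_key()

# The KDC generates a fresh symmetric group key and wraps it
# with the GM's public key for distribution.
group_key = os.urandom(32)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = gm_public.encrypt(group_key, oaep)

# On receipt, the GM unwraps the group key; rekeying on member
# join/leave repeats this exchange with a fresh group_key.
assert gm_private.decrypt(wrapped, oaep) == group_key
```

Rekeying on every join/leave, as described above, amounts to repeating this wrap-and-distribute step so departed members cannot read future traffic and new members cannot read past traffic.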
Since cyberspace became consolidated as the fifth warfare domain, the different actors of the defense sector have engaged in an arms race toward cyber superiority, to which research, academic, and industrial stakeholders contribute from a dual-use perspective, mostly building on a large and heterogeneous heritage of developed and adopted civilian cybersecurity capabilities. In this context, heightened awareness of the operational context and warfare environment, as well as of the risks and impacts of cyber threats on kinetic actuations, has become a critical game-changer that military decision-makers must consider. A major challenge in acquiring mission-centric Cyber Situational Awareness (CSA) is the dynamic inference and assessment of the vertical propagation of situations that occur at the mission-supportive Information and Communications Technologies (ICT) up to their relevance at the military tactical, operational, and strategic levels. To contribute to acquiring CSA, this paper addresses a major gap in the cyber defence state of the art: the dynamic identification of Key Cyber Terrains (KCT) in a mission-centric context. Accordingly, the proposed KCT identification approach explores the dependency degrees among tasks and assets, defined by commanders as part of the assessment criteria, and correlates them with discoveries on the operational network and the asset vulnerabilities identified throughout the development of the supported mission. The proposal is presented as a reference model that reveals the key aspects of mission-centric KCT analysis and supports its adoption and further extension through an illustrative application case.
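A hypothetical sketch of such dependency-degree scoring is shown below; all entities, weights, and the max-based aggregation rule are illustrative assumptions, not the reference model's actual rules:

```python
# Hypothetical sketch: propagate asset vulnerability scores upward
# through commander-defined dependency degrees to rank key cyber
# terrain candidates. Names and weights are illustrative only.
asset_vuln = {"router": 0.9, "db": 0.6, "radio": 0.3}    # e.g., from vulnerability scans
task_deps = {                                            # task -> {asset: dependency degree}
    "comms_relay": {"router": 0.8, "radio": 0.7},
    "intel_fusion": {"db": 0.9, "router": 0.4},
}
task_weight = {"comms_relay": 0.6, "intel_fusion": 0.9}  # relevance to the mission

def mission_impact(asset):
    """Largest mission-weighted path through any task depending on the asset."""
    return max(task_weight[t] * deps.get(asset, 0.0) * asset_vuln[asset]
               for t, deps in task_deps.items())

# Assets ranked by mission impact are the KCT candidates.
print("KCT candidates:", sorted(asset_vuln, key=mission_impact, reverse=True))
```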
Relation prediction for knowledge graphs aims at predicting missing relationships between entities. Despite the importance of inductive relation prediction, most previous works are limited to a transductive setting and cannot process previously unseen entities. Recently proposed subgraph-based relation reasoning models provide alternatives that predict links inductively from the subgraph structure surrounding a candidate triplet. However, we observe that these methods often neglect the directed nature of the extracted subgraph and weaken the role of relation information in subgraph modeling. As a result, they fail to effectively handle asymmetric/anti-symmetric triplets and produce insufficient embeddings for the target triplets. To this end, we introduce a \textbf{C}\textbf{o}mmunicative \textbf{M}essage \textbf{P}assing neural network for \textbf{I}nductive re\textbf{L}ation r\textbf{E}asoning, \textbf{CoMPILE}, that reasons over local directed subgraph structures and has a strong inductive bias for processing entity-independent semantic relations. In contrast to existing models, CoMPILE strengthens the message interactions between edges and entities through a communicative kernel and enables a sufficient flow of relation information. Moreover, we demonstrate that CoMPILE can naturally handle asymmetric/anti-symmetric relations by extracting directed enclosing subgraphs, without an explosive increase in the number of model parameters. Extensive experiments show substantial performance gains over state-of-the-art methods on commonly used benchmark datasets with various inductive settings.
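A schematic round of edge-and-node message passing on a directed subgraph is sketched below in plain NumPy; the update equations are a simplified stand-in for CoMPILE's communicative kernel, and all dimensions and weights are illustrative:

```python
import numpy as np

# Toy directed subgraph: 3 nodes, 3 directed edges (src, dst).
rng = np.random.default_rng(0)
d = 8
edges = [(0, 1), (1, 2), (2, 0)]
node_h = rng.standard_normal((3, d))            # node embeddings
edge_h = rng.standard_normal((len(edges), d))   # edge (relation) embeddings
W_ne, W_en = rng.standard_normal((2, d, d)) * 0.1

for _ in range(2):  # two message-passing rounds
    # Edge update: each directed edge reads from its source node...
    edge_h = np.tanh(edge_h + node_h[[s for s, _ in edges]] @ W_ne)
    # ...and each node aggregates messages only from its incoming
    # edges, so edge direction matters (asymmetric relations).
    agg = np.zeros_like(node_h)
    for (s, t), e in zip(edges, edge_h):
        agg[t] += e
    node_h = np.tanh(node_h + agg @ W_en)
```

Because messages flow along edge directions and edges carry their own states, reversing an edge changes the result, which is the property that lets directed enclosing subgraphs distinguish asymmetric from symmetric triplets.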
Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all labels, which completely ignores the complexity of, and dependencies among, different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are used to train the classifier with the cross-entropy loss function, and the prediction policies are applied at inference time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method obtains more accurate multi-label classification results.
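To make the fixed-threshold issue concrete, the following NumPy sketch contrasts a global 0.5 cut-off with per-label thresholds tuned on micro-F1, a crude stand-in for the learned prediction policies; the data and names are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
probs = rng.uniform(size=(5, 3))                 # sigmoid outputs: 5 samples x 3 labels
y_true = (rng.uniform(size=(5, 3)) < 0.4).astype(int)

def micro_f1(y, yhat):
    tp = np.sum(y * yhat)
    fp = np.sum((1 - y) * yhat)
    fn = np.sum(y * (1 - yhat))
    return 2 * tp / max(2 * tp + fp + fn, 1)

# Grid-search a separate decision threshold for each label.
thresholds = [max(np.linspace(0.1, 0.9, 9),
                  key=lambda t: micro_f1(y_true[:, j], (probs[:, j] >= t).astype(int)))
              for j in range(3)]
fixed = (probs >= 0.5).astype(int)
tuned = (probs >= np.array(thresholds)).astype(int)
print("fixed-0.5 F1:", micro_f1(y_true, fixed), "per-label F1:", micro_f1(y_true, tuned))
```

The meta-learned policies described above go further than this per-label grid search by learning both the training and the prediction policies jointly, so that label dependencies, not just per-label calibration, are captured.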