This paper investigates secure resource allocation for a downlink integrated sensing and communication system with multiple legitimate users and potential eavesdroppers. In the considered model, the base station (BS) simultaneously transmits sensing and communication signals through beamforming design, where the sensing signals can be viewed as artificial noise that enhances the security of the communication signals. To further enhance security at the semantic level, semantic information is extracted from the original information before transmission. Users can successfully recover the received information only with the help of a knowledge base that is shared with the BS and stored in advance. Our aim is to maximize the sum semantic secrecy rate of all users while satisfying each user's minimum quality-of-service requirement and guaranteeing the overall sensing performance. To solve this sum semantic secrecy rate maximization problem, we propose an iterative algorithm based on alternating optimization. Simulation results demonstrate the superiority of the proposed algorithm in terms of secure semantic communication and reliable detection.
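To make the secrecy objective concrete, here is a minimal NumPy sketch of how a semantic secrecy rate could be computed for one user under an artificial-noise beamforming design. All quantities (antenna count, channels, and the semantic compression factor `kappa`) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                                # BS antennas (assumed)
h_u = rng.normal(size=N) + 1j * rng.normal(size=N)   # legitimate user channel
h_e = rng.normal(size=N) + 1j * rng.normal(size=N)   # eavesdropper channel

w_c = h_u / np.linalg.norm(h_u)   # communication beam: MRT toward the user
# Sensing beam doubles as artificial noise: project it into the user's null
# space so it degrades only the eavesdropper's reception.
w_s = h_e - (h_u.conj() @ h_e) / (np.linalg.norm(h_u) ** 2) * h_u
w_s /= np.linalg.norm(w_s)

sigma2 = 1.0                      # receiver noise power

def sinr(h, w_sig, w_an):
    return abs(h.conj() @ w_sig) ** 2 / (abs(h.conj() @ w_an) ** 2 + sigma2)

kappa = 0.6   # hypothetical mapping from channel rate to semantic rate
rate_u = np.log2(1 + sinr(h_u, w_c, w_s))
rate_e = np.log2(1 + sinr(h_e, w_c, w_s))
semantic_secrecy_rate = max(0.0, (rate_u - rate_e) / kappa)
print(f"semantic secrecy rate ~ {semantic_secrecy_rate:.3f}")
```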
Synthetic data from generative models is emerging as a privacy-preserving data-sharing solution: such a synthetic data set should resemble the original data without revealing identifiable private information. The backbone technology of tabular synthesizers is rooted in image generative models, ranging from Generative Adversarial Networks (GANs) to recent diffusion models. Prior work sheds light on the utility-privacy tradeoff on tabular data, revealing and quantifying the privacy risks of synthetic data. We first conduct an exhaustive empirical analysis of the utility-privacy tradeoff of five state-of-the-art tabular synthesizers against eight privacy attacks, with a special focus on membership inference attacks. Motivated by the observation of high data quality but also high privacy risk in tabular diffusion, we propose DP-TLDM, a Differentially Private Tabular Latent Diffusion Model, which is composed of an autoencoder network to encode the tabular data and a latent diffusion model to synthesize the latent tables. Following the emerging f-DP framework, we apply DP-SGD with batch clipping to train the autoencoder and use the separation value as the privacy metric to better capture the privacy gain from DP algorithms. Our empirical evaluation demonstrates that DP-TLDM achieves a meaningful theoretical privacy guarantee while also significantly enhancing the utility of the synthetic data. Specifically, compared to other DP-protected tabular generative models, DP-TLDM improves synthetic quality by an average of 35% in data resemblance, 15% in utility for downstream tasks, and 50% in data discriminability, all while preserving a comparable level of privacy risk.
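As an illustration of the batch-clipping idea, the following is a minimal PyTorch sketch of one differentially private training step for a toy autoencoder: the gradient of the whole batch is clipped to norm C, and Gaussian noise calibrated to C is added. The architecture, clipping norm, and noise multiplier are illustrative assumptions; the paper's actual DP-SGD setup and f-DP accounting are more involved.

```python
import torch
from torch import nn

# Toy autoencoder on 16-dimensional "tabular" rows (all sizes hypothetical).
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 16))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
C, sigma = 1.0, 1.2        # clipping norm and noise multiplier (assumed)

def dp_step(batch):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(batch), batch)
    loss.backward()
    # Batch clipping: bound the norm of the entire batch gradient, then add
    # Gaussian noise whose scale is calibrated to the clipping norm C.
    total = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
    scale = torch.clamp(C / (total + 1e-12), max=1.0)
    for p in model.parameters():
        p.grad.mul_(scale).add_(sigma * C * torch.randn_like(p.grad))
    opt.step()

dp_step(torch.randn(32, 16))   # one noisy, clipped update
```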
We propose a fast and simple explainable AI (XAI) method for point cloud data. It computes pointwise importance with respect to a trained network's downstream task. This allows a better understanding of the network's properties, which is imperative for safety-critical applications. In addition to debugging and visualization, the low computational complexity of our method facilitates online feedback to the network at inference time, which can be used to reduce uncertainty and increase robustness. In this work, we introduce \emph{Feature Based Interpretability} (FBI), in which we compute the per-point norm of the features before the bottleneck. We analyze the use of gradients as well as post- and pre-bottleneck strategies, and show that the pre-bottleneck strategy is preferred in terms of smoothness and ranking. Our method attains a speedup of at least three orders of magnitude compared to current XAI methods and is thus scalable to large point clouds and large-scale architectures. Our approach achieves SOTA results in terms of classification explainability. We demonstrate how the proposed measure helps to analyze and characterize various aspects of 3D learning, such as rotation invariance, robustness to out-of-distribution (OOD) outliers or domain shift, and dataset bias.
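Since FBI is defined simply as the per-point feature norm before the bottleneck, a minimal sketch is easy to give. The toy encoder below is a stand-in (an untrained PointNet-style MLP whose max-pool would be the bottleneck); in practice the scores come from a trained network.

```python
import torch
from torch import nn

# Toy PointNet-style per-point encoder; the bottleneck would be the
# max-pool over points that follows these shared layers.
mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))

@torch.no_grad()                    # single forward pass, no gradients needed
def fbi_scores(points):             # points: (N, 3)
    feats = mlp(points)             # per-point features before the bottleneck
    return feats.norm(dim=1)        # pointwise importance = feature norm

pts = torch.randn(1024, 3)
scores = fbi_scores(pts)
top = scores.topk(10).indices       # e.g., the 10 most influential points
```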
This paper addresses point-to-point packet routing in undirected networks, the most important communication primitive in most networks. The main result proves the existence of routing tables that guarantee a polylog-competitive completion time $\textbf{deterministically}$: in any undirected network, it is possible to give each node simple, stateless, deterministic local forwarding rules such that any adversarially chosen set of packets is delivered as fast as possible, up to polylog factors. All previous routing strategies crucially required randomization for both route selection and packet scheduling. The core technical contribution of this paper is a new local packet scheduling result of independent interest. This scheduling strategy integrates well with recent sparse semi-oblivious path selection strategies. Such strategies deterministically select not one but several candidate paths for each packet and require a global coordinator to select a single good path from those candidates for each packet. A further challenge is that, even when a single path is selected for each packet, no strategy for scheduling packets along low-congestion paths that is both local and deterministic was previously known. Our novel scheduling strategy exploits the fact that every semi-oblivious routing strategy uses only a small (polynomial) subset of candidate routes. It overcomes the need for global coordination by being provably robust to adversarial noise: congestion caused by ineffective candidate paths can be treated as noise, which avoids having to choose a single path per packet. Our results imply the first deterministic universally-optimal algorithms in the distributed supported-CONGEST model for many important global distributed tasks, including computing minimum spanning trees, approximate shortest paths, and part-wise aggregates.
This paper presents a new approach to detecting fake videos, based on the analysis of style latent vectors and their abnormal behavior in the temporal changes of generated videos. We discovered that generated facial videos exhibit distinctive temporal patterns in the changes of their style latent vectors, which are inevitable when generating temporally stable videos with various facial expressions and geometric transformations. Our framework utilizes a StyleGRU module, trained by contrastive learning, to represent the dynamic properties of style latent vectors. Additionally, we introduce a style attention module that integrates StyleGRU-generated features with content-based features, enabling the detection of visual and temporal artifacts. We demonstrate our approach on various benchmark scenarios in deepfake detection, showing its superiority in cross-dataset and cross-manipulation settings. Through further analysis, we also validate the importance of using the temporal changes of style latent vectors to improve the generality of deepfake video detection.
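A minimal sketch of the two-branch idea as we read it: a GRU encodes frame-to-frame differences of style latent vectors, and an attention layer lets content features attend to the style dynamics. All dimensions, the attention configuration, and the stand-in content features are illustrative assumptions, not the paper's architecture.

```python
import torch
from torch import nn

D = 512                                  # StyleGAN latent dimension (assumed)
gru = nn.GRU(input_size=D, hidden_size=256, batch_first=True)
attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)

def style_branch(style_seq):             # style_seq: (B, T, D) per-frame latents
    deltas = style_seq[:, 1:] - style_seq[:, :-1]   # temporal changes
    feats, _ = gru(deltas)               # dynamics of the style trajectory
    return feats                         # (B, T-1, 256)

content = torch.randn(2, 15, 256)        # stand-in for content-based features
style = style_branch(torch.randn(2, 16, D))
fused, _ = attn(content, style, style)   # style attention: content queries style
score = fused.mean(dim=(1, 2))           # toy real/fake score per video
```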
In millimeter-wave (mmWave) integrated sensing and communication networks, users may be within the coverage of multiple access points (APs), which typically employ large-scale antenna arrays to mitigate obstacle occlusion and path loss. However, large-scale arrays generate pencil-shaped beams, which require a larger number of training beams to cover the desired space. Furthermore, as the antenna aperture increases, users are more likely to be situated in the near-field region of the AP antenna array. This motivates our investigation of the near-field beam training problem for effective positioning services. To address the high complexity and low identification accuracy of existing beam training techniques, we propose an efficient hashing multi-arm beam (HMB) training scheme for the near-field scenario. Specifically, we first construct a near-field single-beam training codebook for uniform planar arrays. Then, hash functions are chosen independently to construct the multi-arm beam training codebooks for each AP. All APs traverse the predefined multi-arm beam training codewords simultaneously, and the superimposed multi-AP signals at the user are recorded. Finally, soft decision and voting methods are applied to obtain the correctly aligned beams based only on the signal powers. In addition, we prove that the traversal complexity is logarithmic. Simulation results show that the proposed near-field HMB training method achieves 96.4% of the identification accuracy of exhaustive beam training while greatly reducing the training overhead. Furthermore, we verify its applicability in the far-field scenario as well.
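To illustrate the soft-decision-and-voting step, here is a toy NumPy sketch: in each round a random hash partitions the beam codebook into multi-arm codewords, the codeword with the highest received power is selected, and each of its constituent beams receives a vote; across rounds the votes concentrate on the true beam. The codebook size, number of rounds, and power model are illustrative assumptions, and the per-AP hashing of the actual scheme is simplified into independent rounds.

```python
import numpy as np

rng = np.random.default_rng(1)
N_BEAMS, N_ROUNDS, N_ARMS = 64, 5, 8     # codebook, rounds, arms (assumed)
true_beam = 23                           # ground-truth aligned beam

votes = np.zeros(N_BEAMS)
for _ in range(N_ROUNDS):
    # Each hash partitions the 64 beams into codewords of 8 arms each.
    buckets = rng.permutation(N_BEAMS).reshape(-1, N_ARMS)
    # Received power is high only for the codeword containing the true beam.
    powers = np.array([10.0 if true_beam in b else 1.0 for b in buckets])
    powers += rng.normal(scale=0.5, size=len(buckets))
    best = buckets[np.argmax(powers)]    # soft decision on signal power
    votes[best] += 1                     # every beam in that codeword votes

print("identified beam:", np.argmax(votes))   # votes concentrate on beam 23
```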
This paper addresses the challenge of identifying vessel paths among operating routes that are repetitive, partially repetitive, or new. We propose a spatial clustering approach for labeling vessel paths using only position information. We develop a path clustering framework employing two methods: distance-based path modeling and likelihood estimation. The former enhances the accuracy of path clustering through the integration of unsupervised machine learning techniques, while the latter focuses on likelihood-based path modeling and introduces segmentation for a more detailed analysis. The results highlight the performance and efficiency of the developed approach, as both methods achieve a perfect F1-score when clustering vessel paths into five classes. The approach aims to offer valuable insights for route planning, ultimately contributing to improved safety and efficiency in maritime transportation.
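For intuition on distance-based path clustering from position data alone, here is a small sketch (our illustration, not the paper's exact method): each track is resampled to a fixed number of points so that paths of different lengths become comparable vectors, which are then clustered with k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def resample(path, m=50):
    """Resample a (T, 2) position track to m equally spaced points so that
    tracks of different lengths become fixed-size feature vectors."""
    t = np.linspace(0, 1, len(path))
    tt = np.linspace(0, 1, m)
    return np.column_stack([np.interp(tt, t, path[:, k]) for k in (0, 1)]).ravel()

# Toy tracks: two straight routes with jitter (stand-ins for AIS positions).
rng = np.random.default_rng(0)
tracks = [np.column_stack([np.linspace(0, 1, n), c + 0.01 * rng.normal(size=n)])
          for c, n in [(0.0, 80), (0.0, 120), (1.0, 90), (1.0, 60)]]
X = np.stack([resample(p) for p in tracks])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(labels)      # the two routes separate into two clusters
```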
This paper proposes a new approach to the problem of unmeasured confounding in spatial designs. Spatial confounding occurs when some confounding variables are unobserved and not included in the model, leading to distorted inferential results about the effect of an exposure on an outcome. We show the relationship between the confounding bias of a non-spatial model and that of a semi-parametric model that includes a basis matrix to represent the unmeasured confounder conditional on the exposure. This relationship holds for any basis expansion; however, the semi-parametric approach is guaranteed to reduce the confounding bias only under certain circumstances, which depend on the spatial structures of the exposure and the unmeasured confounder, the type of basis expansion utilized, and the regularization mechanism. To adjust for spatial confounding, and thereby recover the effect of interest, we propose a Bayesian semi-parametric regression model in which an expansion matrix of principal spline basis functions approximates the unobserved factor and spike-and-slab priors are imposed on the expansion coefficients to select the most important bases. An extensive simulation study shows that our proposal reduces the confounding bias relative to the non-spatial model and also appears more robust to bias amplification than competing approaches.
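One way to write the proposed model schematically (the notation is ours, not necessarily the paper's):

$$
y_i = \beta\, x_i + \sum_{j=1}^{J} \gamma_j\, \phi_j(s_i) + \varepsilon_i,
\qquad \varepsilon_i \sim \mathcal{N}(0,\sigma^2),
$$
$$
\gamma_j \mid \delta_j \sim \delta_j\, \mathcal{N}(0,\tau^2) + (1-\delta_j)\, \delta_0,
\qquad \delta_j \sim \mathrm{Bernoulli}(\pi),
$$

where the $\phi_j$ are principal spline basis functions approximating the unmeasured spatial confounder at location $s_i$, and the spike-and-slab indicators $\delta_j$ select the most important bases by shrinking the remaining coefficients exactly to zero.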
This paper develops a new filtering approach for state estimation in polynomial systems corrupted by arbitrary noise, which commonly arise in robotics. We first consider a batch setup in which we perform state estimation using all data collected from the initial to the current time. We formulate the batch state estimation problem as a Polynomial Optimization Problem (POP) and relax the assumption of Gaussian noise by specifying only a finite number of moments of the noise. We solve the resulting POP using a moment relaxation and prove that, under suitable conditions on the rank of the relaxation, (i) we can extract a provably optimal estimate from the moment relaxation, and (ii) we can obtain a belief representation from the dual (sum-of-squares) relaxation. We then turn our attention to the filtering setup and apply similar insights to develop a Generalized Moment Kalman Filter (GMKF) for recursive state estimation in polynomial systems with arbitrary noise. The GMKF formulates the prediction and update steps as POPs and solves them using moment relaxations, carrying over a possibly non-Gaussian belief. In the linear-Gaussian case, the GMKF reduces to the standard Kalman Filter. We demonstrate that the GMKF performs well under highly non-Gaussian noise and outperforms common alternatives, including the Extended and Unscented Kalman Filters and their variants on matrix Lie groups.
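The moment-relaxation machinery does not fit in a short snippet, but the sanity check stated above, that the GMKF reduces to the standard Kalman Filter in the linear-Gaussian case, pins down the recursion it generalizes. A minimal sketch of that base case:

```python
import numpy as np

def kf_step(x, P, u, z, A, B, Q, H, R):
    """One predict/update cycle of the standard Kalman Filter; the GMKF
    reduces to this when the dynamics are linear and the noise is Gaussian."""
    # Predict: propagate the belief through the (linear) dynamics.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update: fuse the measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 2-state system with a 1-D position measurement (sizes illustrative).
x, P = kf_step(np.zeros(2), np.eye(2), np.zeros(1), np.array([1.0]),
               A=np.eye(2), B=np.zeros((2, 1)), Q=0.01 * np.eye(2),
               H=np.array([[1.0, 0.0]]), R=np.array([[0.1]]))
```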
The prevalence of 3D printing poses a significant risk to public safety, as any individual with internet access and a commodity printer can produce untraceable firearms, keys, counterfeit products, etc. To aid government authorities in combating these new security threats, several approaches have been taken to tag 3D prints with identifying information. Known as fingerprints, this information is written into the object using various bit-embedding techniques; examples include varying the height of the molten thermoplastic layers and depositing metallic powder with different magnetic properties. Yet the practicality of these techniques in real-world forensic settings is hindered by the adversarial nature of the problem: the 3D-printing process is out of reach of any law enforcement agency; it is the adversary who controls all aspects of printing and possesses the printed object. To combat these threats, law enforcement agencies can regulate the manufacturing of 3D printers, on which they may enforce a fingerprinting scheme, and collect adversarially tampered remains (e.g., fragments of a broken 3D-printed firearm) during forensic investigation. It is therefore important to devise fingerprinting techniques such that the fingerprint can be extracted even when printing is carried out by the adversary. To this end, we present SIDE (Secure Information Embedding and Extraction), a fingerprinting framework that tackles the adversarial nature of forensic fingerprinting in 3D prints by offering both secure information embedding and secure information extraction.
Knowledge graphs capture interlinked information between entities, and they represent an attractive source of structured information that can be harnessed for recommender systems. However, existing recommender engines that use knowledge graphs rely on manually designed features, do not allow for end-to-end training, or scale poorly. Here we propose Knowledge Graph Convolutional Networks (KGCN), an end-to-end trainable framework that harnesses item relationships captured by the knowledge graph to provide better recommendations. Conceptually, KGCN computes user-specific item embeddings by first applying a trainable function that identifies important knowledge graph relations for a given user, thereby transforming the knowledge graph into a user-specific weighted graph. KGCN then applies a graph convolutional neural network that computes the embedding of an item node by propagating and aggregating neighborhood information in the knowledge graph. Moreover, to provide a better inductive bias, KGCN uses label smoothness (LS) regularization over edge weights, which we prove is equivalent to a label propagation scheme on the graph. Finally, we unify KGCN and LS regularization and present a scalable minibatch implementation of the KGCN-LS model. Experiments show that KGCN-LS outperforms strong baselines on four datasets. KGCN-LS also performs well in sparse scenarios and is highly scalable with respect to the knowledge graph size.
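A minimal sketch of one user-specific aggregation layer in the spirit described above: relation scores are personalized to the user, softmaxed into edge weights, and used to average sampled knowledge graph neighbors. The sizes, the neighbor sampling, and the single-layer setup are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
from torch import nn

D, R = 16, 4                              # embedding dim, #relations (toy sizes)
ent = nn.Embedding(100, D)                # entity/item embeddings
rel = nn.Embedding(R, D)                  # relation embeddings
W = nn.Linear(2 * D, D)

def kgcn_layer(user, items, nbrs, nbr_rel):
    """One user-specific aggregation: score each neighbor's relation by its
    affinity to the user, softmax into edge weights, then average."""
    e_i = ent(items)                                 # (B, D) item embeddings
    e_n = ent(nbrs)                                  # (B, K, D) sampled neighbors
    pi = (rel(nbr_rel) * user[:, None]).sum(-1)      # (B, K) user-relation scores
    w = torch.softmax(pi, dim=1)                     # user-specific edge weights
    agg = (w[:, :, None] * e_n).sum(1)               # weighted neighborhood
    return torch.relu(W(torch.cat([e_i, agg], -1)))  # fuse item + neighborhood

u = torch.randn(8, D)                     # a batch of user embeddings
out = kgcn_layer(u, torch.randint(0, 100, (8,)),
                 torch.randint(0, 100, (8, 5)), torch.randint(0, R, (8, 5)))
```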