A non-interactive ZK (NIZK) proof enables verification of NP statements without revealing secrets about them. However, an adversary that obtains a NIZK proof may be able to clone this proof and distribute arbitrarily many copies of it to various entities: this is inevitable for any proof that takes the form of a classical string. In this paper, we ask whether it is possible to rely on quantum information in order to build NIZK proof systems that are impossible to clone. We define and construct unclonable non-interactive zero-knowledge proofs (of knowledge) for NP. Besides satisfying the zero-knowledge and proof of knowledge properties, these proofs additionally satisfy unclonability. Very roughly, this ensures that no adversary can split an honestly generated proof of membership of an instance $x$ in an NP language $\mathcal{L}$ and distribute copies to multiple entities that all obtain accepting proofs of membership of $x$ in $\mathcal{L}$. Our result has applications to unclonable signatures of knowledge, which we define and construct in this work; these non-interactively prevent replay attacks.
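To make the unclonability guarantee slightly more concrete, here is a schematic and deliberately informal rendering of the cloning experiment; the notation ($\mathsf{crs}$, $\mathsf{Prove}$, $\mathsf{Verify}$) is ours and glosses over the quantum nature of the proof state:

```latex
% Schematic unclonability experiment (informal; our notation).
% The adversary receives one honest proof for x and must output two
% states that both pass verification for the same instance.
\[
  \Pr\!\left[
    \begin{array}{l}
      \pi \leftarrow \mathsf{Prove}(\mathsf{crs}, x, w),\;
      (\pi_1, \pi_2) \leftarrow \mathcal{A}(\mathsf{crs}, x, \pi) : \\
      \mathsf{Verify}(\mathsf{crs}, x, \pi_1) = 1 \,\wedge\,
      \mathsf{Verify}(\mathsf{crs}, x, \pi_2) = 1
    \end{array}
  \right] \leq \mathsf{negl}(\lambda).
\]
```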
A pivotal feature of IPv6 is its plug-and-play capability, which enables hosts to integrate seamlessly into networks. In the absence of a trusted authority or security infrastructure, the challenge for hosts is to generate their own addresses and verify the address ownership of others. Cryptographically Generated Addresses (CGA) solve this problem by binding IPv6 addresses to hosts' public keys to prove address ownership. CGA generation involves solving a cryptographic puzzle, similar to Bitcoin's Proof-of-Work (PoW), to deter address spoofing. Unfortunately, solving the puzzle often causes undesirable address generation delays, which has hindered the adoption of CGA. In this paper, we present Bitcoin-Certified Addresses (BCA), a new technique for binding IPv6 addresses to hosts' public keys. BCA reduces the computational cost of generating addresses by reusing the PoW computed by Bitcoin nodes to secure the binding. Compared to CGA, BCA provides better protection against spoofing attacks and improves the privacy of hosts. Like CGA, BCA avoids reliance on a trusted authority, owing to the decentralized nature of the Bitcoin network. BCA shows how the PoW computed by Bitcoin nodes can be reused, which saves costs for hosts and makes Bitcoin mining more efficient.
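For intuition, the following minimal sketch illustrates the hash-puzzle mechanism that makes CGA-style generation costly; SHA-256, the field layout, and the function names are our simplifications (RFC 3972 CGA uses SHA-1 and a more elaborate encoding), and BCA's point is precisely to outsource this search to Bitcoin's existing PoW:

```python
import hashlib
import os

def solve_cga_like_puzzle(public_key: bytes, sec: int = 1) -> bytes:
    """Illustrative sketch of a CGA-style generation puzzle (not RFC 3972
    itself): search for a modifier such that the hash of (modifier || key)
    has 16*sec leading zero bits, analogous to Bitcoin's PoW."""
    target_zero_bits = 16 * sec
    while True:
        modifier = os.urandom(16)
        digest = hashlib.sha256(modifier + public_key).digest()
        # Count the leading zero bits of the 256-bit digest.
        leading_zeros = 256 - int.from_bytes(digest, "big").bit_length()
        if leading_zeros >= target_zero_bits:
            return modifier  # binds the address to the public key

modifier = solve_cga_like_puzzle(b"example-public-key", sec=1)
print(modifier.hex())
```

The exponential dependence of the search time on `sec` is exactly the address generation delay the paper identifies as a barrier to CGA adoption.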
We present an unsupervised, data-driven approach for non-rigid shape matching. Shape matching identifies correspondences between two shapes and is a fundamental step in many computer vision and graphics applications. Our approach is designed to be particularly robust when matching shapes digitized with 3D scanners, which contain fine geometric detail and suffer from different types of noise, including topological noise caused by the coalescence of spatially close surface regions. We build on two strategies. First, using a hierarchical patch-based shape representation, we match shapes consistently in a coarse-to-fine manner, allowing for robustness to noise. This multi-scale representation drastically reduces the dimensionality of the problem when matching at the coarsest scale, rendering unsupervised learning feasible. Second, we constrain this hierarchical matching to be consistent in 3D by fitting a patch-wise near-rigid deformation model. Using this constraint, we leverage spatial continuity at different scales to capture global shape properties, resulting in matchings that generalize well to data with different deformations and noise characteristics. Experiments demonstrate that our approach obtains significantly better results on raw 3D scans than state-of-the-art methods, while performing on par in standard test scenarios.
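As one concrete building block, a patch-wise near-rigid constraint can be grounded in a standard least-squares rigid fit; the sketch below (our illustration, not the paper's exact model) uses the Kabsch algorithm to fit a rotation and translation to a patch's candidate correspondences and measure their residual:

```python
import numpy as np

def fit_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid (rotation + translation) alignment of matched
    3D point sets via the Kabsch algorithm, the kind of building block a
    patch-wise near-rigid deformation model can rest on."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Check on a synthetic patch: rotate about z and translate, then recover.
rng = np.random.default_rng(0)
src = rng.random((50, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.1, 0.0, -0.2])
R, t = fit_rigid_transform(src, dst)
print("mean residual:", np.linalg.norm(src @ R.T + t - dst, axis=1).mean())
```

A correspondence whose residual under the fitted transform is large can then be down-weighted or rejected at the next finer scale.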
We present MISC, a novel framework that bootstraps Motion forecasting with Self-consistent Constraints. The motion forecasting task aims to predict future trajectories of vehicles by incorporating spatial and temporal information from the past. A key design of MISC is the proposed Dual Consistency Constraints, which regularize the predicted trajectories under spatial and temporal perturbations during training. In addition, to model the multi-modality of motion forecasting, we design a novel self-ensembling scheme that obtains accurate teacher targets to enforce the self-constraints with multi-modality supervision. With explicit constraints from multiple teacher targets, we observe a clear improvement in prediction performance. Extensive experiments on the Argoverse motion forecasting benchmark and the Waymo Open Motion dataset show that MISC significantly outperforms state-of-the-art methods. Since the proposed strategies are general and can be easily incorporated into other motion forecasting approaches, we also demonstrate that our scheme consistently improves the prediction performance of several existing methods.
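Schematically, a consistency constraint of this kind pulls the prediction on a perturbed input toward the prediction on the original input. In the sketch below (our reading of the idea, with `model`, `rotate_fn`, and `shift_fn` as assumed callables), a spatial transform and a temporal shift of the observed history play the roles of the two perturbations:

```python
import torch
import torch.nn.functional as F

def dual_consistency_loss(model, scene, rotate_fn, shift_fn):
    """Schematic dual-consistency regularizer (our sketch, not MISC's
    exact loss): predictions should be equivariant to a spatial transform
    of the scene and stable under a shift of the history window."""
    pred = model(scene)                                   # (K, T, 2) trajectories
    # Spatial consistency: rotate the input, un-rotate the output, compare.
    pred_spatial = rotate_fn(model(rotate_fn(scene)), inverse=True)
    loss_spatial = F.mse_loss(pred_spatial, pred.detach())
    # Temporal consistency: shift the observed history, compare predictions.
    pred_temporal = model(shift_fn(scene))
    loss_temporal = F.mse_loss(pred_temporal, pred.detach())
    return loss_spatial + loss_temporal
```

In MISC the targets additionally come from self-ensembled teachers rather than the detached student prediction used here.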
The Discrete Cosine Transform (DCT) can be used instead of the conventional Discrete Fourier Transform (DFT) to construct Orthogonal Frequency Division Multiplexing (OFDM) systems, which offers several advantages. In this paper, Multiple-Input-Multiple-Output (MIMO) DCT-OFDM is enhanced using a proposed Cosine Domain Equalizer (CDE) in place of a Frequency Domain Equalizer (FDE). The scheme is evaluated over a Rayleigh fading channel with Co-Carrier Frequency Offset (Co-CFO) for different MIMO configurations. The average bit error probability and the simulation time of the proposed scheme and the conventional one are compared, demonstrating the advantage of the proposed scheme. In addition, a closed-form expression for the number of arithmetic operations of the proposed equalizer is derived. The proposed equalizer reduces simulation time by about 81.21% and 83.74% relative to the conventional LZF-FDE and LMMSE-FDE, respectively, for the 4x4 configuration.
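The toy sketch below shows the DCT-based multicarrier round trip and a one-tap zero-forcing equalizer applied directly in the cosine domain; the diagonal channel is an idealization we assume for illustration (a real Rayleigh channel with Co-CFO couples the coefficients, which is what the proposed CDE must handle):

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
N = 64
symbols = rng.choice([-1.0, 1.0], size=N)           # BPSK on N subcarriers

# DCT-based multicarrier round trip: inverse DCT at the transmitter,
# forward DCT at the receiver (orthonormal type-II/III pair).
tx = idct(symbols, type=2, norm="ortho")
rx_coeffs = dct(tx, type=2, norm="ortho")           # recovers `symbols`

# Toy diagonal channel acting per cosine-domain coefficient.
h = 0.5 + rng.random(N)                             # per-coefficient gains
noise = 0.01 * rng.standard_normal(N)
received = h * rx_coeffs + noise

equalized = received / h                            # one-tap zero-forcing
ber = np.mean(np.sign(equalized) != symbols)
print(f"BER over toy channel: {ber:.3f}")
```

The paper's complexity and runtime comparisons concern the full matrix equalizers (LZF/LMMSE) needed when Co-CFO destroys this diagonal structure.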
The dramatic increase in connectivity demand has resulted in an excessive number of Internet of Things (IoT) sensors. To meet the management needs of these large-scale networks, such as accurate monitoring and learning capabilities, the Digital Twin (DT) is the key enabler. However, current DT implementations remain insufficient due to the perpetual connectivity requirements of IoT networks. Furthermore, sensor data streaming in IoT networks causes longer processing times than traditional methods. In addition, current intelligent mechanisms cannot perform well due to the spatiotemporal changes in the deployed IoT network scenario. To handle these challenges, we propose a DT-native AI-driven service architecture in support of IoT networks. Within the proposed DT-native architecture, we implement a TCP-based data flow pipeline and a Reinforcement Learning (RL)-based learner model. We apply the proposed architecture to a prominent instance of IoT networks, the Internet of Vehicles (IoV). We measure the efficiency of our proposed architecture and observe ~30% savings in processing time thanks to the TCP-based data flow pipeline. Moreover, we test the performance of the learner model by applying several learning rate combinations for the actor and critic networks and highlight the most successful model.
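The learning-rate sweep can be pictured as follows; the network shapes, rate grid, and optimizer choice are our assumptions for illustration:

```python
import itertools
import torch
import torch.nn as nn

# Minimal actor-critic setup with independent learning rates, the knob
# the evaluation sweeps over (sizes and grid values are our assumptions).
actor = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
critic = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

actor_lrs, critic_lrs = [1e-3, 1e-4], [1e-3, 1e-4]
for a_lr, c_lr in itertools.product(actor_lrs, critic_lrs):
    actor_opt = torch.optim.Adam(actor.parameters(), lr=a_lr)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=c_lr)
    # ... train the RL learner and record performance per (a_lr, c_lr) pair
```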
With the advancement of generative models, AI-generated content (AIGC) is becoming more realistic and is flooding the Internet. A recent study suggests that this phenomenon has elevated the issue of source bias in text retrieval for web search: neural retrieval models tend to rank generated texts higher than human-written texts. In this paper, we extend the study of this bias to cross-modal retrieval. First, we construct a suitable benchmark to explore whether the bias exists. Extensive experiments on this benchmark reveal that AI-generated images introduce an invisible relevance bias into text-image retrieval models: these models tend to rank AI-generated images higher than real images, even though the AI-generated images do not exhibit features that are more visually relevant to the query. This invisible relevance bias is prevalent across retrieval models with varying training data and architectures. Furthermore, we find that including AI-generated images in the training data of retrieval models exacerbates the bias, triggering a vicious cycle in which the invisible relevance bias grows increasingly severe. To elucidate the potential causes of invisible relevance and address these issues, we introduce an effective training method aimed at alleviating the bias. We then apply the proposed debiasing method to retroactively identify the causes of invisible relevance, revealing that AI-generated images induce the image encoder to embed additional information into their representations. This information is consistent across generated images with different semantics and leads the retriever to estimate higher relevance scores.
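A sketch of how one might quantify such a bias given precomputed embeddings; the metric, variable names, and synthetic inputs are ours, not the paper's benchmark protocol:

```python
import numpy as np

def relative_bias(query_emb, real_embs, ai_embs):
    """For each query, compare the retrieval score of matched real images
    against their AI-generated counterparts (illustrative metric: a
    positive value means AI-generated images are scored higher)."""
    def score(q, imgs):                     # cosine similarity per image
        q = q / np.linalg.norm(q)
        imgs = imgs / np.linalg.norm(imgs, axis=1, keepdims=True)
        return imgs @ q
    s_real = score(query_emb, real_embs)
    s_ai = score(query_emb, ai_embs)
    return float(np.mean(s_ai - s_real))

# Synthetic usage only, to show the shapes: one query, 100 image pairs.
rng = np.random.default_rng(0)
q, real, ai = rng.random(512), rng.random((100, 512)), rng.random((100, 512))
print(relative_bias(q, real, ai))
```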
The task of Grammatical Error Correction (GEC) aims to automatically correct grammatical errors in natural texts. Almost all previous work treats annotated training data equally, neglecting inherent discrepancies in the data. In this paper, we characterize these discrepancies along two dimensions: the accuracy of data annotation and the diversity of potential annotations. We propose MainGEC, which designs token-level and sentence-level training weights based on the inherent discrepancies in annotation accuracy and potential annotation diversity, respectively, and then conducts mixed-grained weighted training to improve training for GEC. Empirical evaluation shows that, in both the Seq2Seq and Seq2Edit manners, MainGEC achieves consistent and significant performance improvements on two benchmark datasets, demonstrating the effectiveness and superiority of mixed-grained weighted training. Further ablation experiments verify the effectiveness of the designed weights at both granularities.
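A minimal sketch of what mixed-grained weighting could look like as a loss (our reading; MainGEC's actual weight definitions derive from annotation accuracy and diversity estimates, which we leave abstract here):

```python
import torch
import torch.nn.functional as F

def mixed_grained_loss(logits, targets, token_w, sent_w):
    """Weighted GEC training loss in the spirit of MainGEC (our sketch):
    per-token weights reflect annotation accuracy, and a per-sentence
    weight reflects the diversity of potential annotations.
    Shapes: logits (B, T, V), targets (B, T), token_w (B, T), sent_w (B,)."""
    ce = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none")     # (B, T)
    per_sent = (token_w * ce).sum(dim=1) / token_w.sum(dim=1).clamp(min=1e-8)
    return (sent_w * per_sent).mean()
```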
Performance bounds for parameter estimation play a crucial role in statistical signal processing theory and applications. Two widely recognized bounds are the Cram\'{e}r-Rao bound (CRB) in the non-Bayesian framework and the Bayesian CRB (BCRB) in the Bayesian framework. However, unlike the CRB, the BCRB is asymptotically unattainable in general, and its equality condition is restrictive. This paper introduces an extension of the Bobrovsky--Mayer-Wolf--Zakai class of bounds, also known as the weighted BCRB (WBCRB). In the scalar case, we optimize the WBCRB by tuning the weighting function. Based on this result, we propose an asymptotically tight version of the bound, called the AT-BCRB. We prove that the AT-BCRB is asymptotically attained by the maximum {\it a-posteriori} probability (MAP) estimator. Furthermore, we extend the WBCRB and the AT-BCRB to the case of vector parameters. The proposed bounds are evaluated in several fundamental signal processing examples, such as variance estimation of a white Gaussian process, direction-of-arrival estimation, and mean estimation of a Gaussian process with unknown variance and prior statistical information. It is shown that, unlike the BCRB, the proposed bounds are asymptotically attainable and coincide with the expected CRB (ECRB). The ECRB, which imposes uniform unbiasedness, cannot serve as a valid lower bound in the Bayesian framework, while the proposed bounds are valid for any estimator.
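For orientation, here is a schematic rendering (ours, not the paper's exact statement) of the scalar BCRB and its weighted extension, with joint density $f(x,\theta)$ and weighting function $q(\theta)$, under standard regularity conditions:

```latex
% Scalar Bayesian CRB and its weighted (BMZ-class) extension; schematic.
\[
  \mathrm{MSE}(\hat{\theta}) \;\ge\;
  \frac{1}{\mathbb{E}\!\left[\left(\tfrac{\partial \log f(x,\theta)}{\partial \theta}\right)^{2}\right]}
  \quad\text{(BCRB)},
  \qquad
  \mathrm{MSE}(\hat{\theta}) \;\ge\;
  \frac{\left(\mathbb{E}\!\left[q(\theta)\right]\right)^{2}}
       {\mathbb{E}\!\left[q^{2}(\theta)
        \left(\tfrac{\partial \log\left(f(x,\theta)\,q(\theta)\right)}{\partial \theta}\right)^{2}\right]}
  \quad\text{(WBCRB)},
\]
% The constant weight q \equiv 1 recovers the BCRB; tuning q over a
% suitable class tightens the bound.
```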
We present a novel Graph-based debiasing Algorithm for Underreported Data (GRAUD) that jointly and efficiently estimates event counts and discovery probabilities across spatial or graphical structures. The method addresses problems seen in fields such as policing and COVID-$19$ data analysis, and it avoids the strong priors typically associated with Bayesian frameworks. By leveraging the graph structures on the unknown variables $n$ and $p$, our method debiases the under-reported data and estimates the discovery probabilities at the same time. We validate the effectiveness of our method through simulation experiments and illustrate its practicality in one real-world application: police 911 calls-to-service data.
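A toy sketch of the kind of joint likelihood problem this targets (our illustrative formulation, not GRAUD itself): observed counts follow $y_i \sim \mathrm{Binomial}(n_i, p_i)$, and a graph-Laplacian penalty encourages the discovery probabilities to vary smoothly over the graph:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def fit_underreported(y, L, lam=1.0):
    """Toy joint estimate of true counts n and discovery probabilities p
    from under-reported counts y (illustrative, not the paper's algorithm)."""
    y = np.asarray(y, dtype=float)
    m = len(y)

    def neg_objective(z):
        s, logit_p = z[:m], z[m:]
        n = y + np.exp(s)                       # enforce n_i >= y_i
        p = 1.0 / (1.0 + np.exp(-logit_p))
        # Continuous relaxation of the binomial log-likelihood.
        ll = (gammaln(n + 1.0) - gammaln(y + 1.0) - gammaln(n - y + 1.0)
              + y * np.log(p) + (n - y) * np.log1p(-p))
        return -ll.sum() + lam * p @ L @ p      # graph smoothness on p

    bounds = [(-20.0, 20.0)] * m + [(-10.0, 10.0)] * m
    res = minimize(neg_objective, np.zeros(2 * m),
                   method="L-BFGS-B", bounds=bounds)
    n_hat = y + np.exp(res.x[:m])
    p_hat = 1.0 / (1.0 + np.exp(-res.x[m:]))
    return n_hat, p_hat
```

Without the graph penalty the per-node problem is unidentifiable (any $n_i \ge y_i$ with matching $p_i$ fits equally well), which is why sharing information across the graph is essential.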
Knowledge graphs (KGs) serve as useful resources for various natural language processing applications. Previous KG completion approaches require a large number of training instances (i.e., head-tail entity pairs) for every relation. In reality, however, very few entity pairs are available for most relations. Few-shot KG completion has not yet been well studied: existing work on one-shot learning limits method generalizability to few-shot scenarios and does not fully use the supervisory information. In this work, we propose a novel few-shot relation learning model (FSRL) that aims to discover facts of new relations from few-shot references. FSRL can effectively capture knowledge from heterogeneous graph structure, aggregate representations of few-shot references, and match similar entity pairs against the reference set for every relation. Extensive experiments on two public datasets demonstrate that FSRL outperforms the state-of-the-art.
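An illustrative skeleton of few-shot relation matching (our sketch): embed each (head, tail) reference pair, aggregate the few-shot references into a relation prototype, and score a query pair against it. FSRL additionally learns the pair encoder and an attention-based aggregator over a heterogeneous graph, which this sketch replaces with concatenation and a mean:

```python
import numpy as np

def few_shot_relation_score(query_pair, reference_pairs):
    """Score a query (head, tail) pair against the aggregated few-shot
    references of a relation via cosine similarity (illustrative only)."""
    def embed(pair):                    # (head_emb, tail_emb) -> one vector
        return np.concatenate(pair)
    prototype = np.mean([embed(p) for p in reference_pairs], axis=0)
    q = embed(query_pair)
    return float(q @ prototype /
                 (np.linalg.norm(q) * np.linalg.norm(prototype)))

# Usage with synthetic entity embeddings: a 3-shot reference set.
rng = np.random.default_rng(0)
refs = [(rng.random(64), rng.random(64)) for _ in range(3)]
query = (rng.random(64), rng.random(64))
print(few_shot_relation_score(query, refs))
```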