
Continuous authentication has been proposed as a complementary security mechanism to password-based authentication for computer devices that are handled directly by humans, such as smartphones. Continuous authentication has privacy issues, as certain user features and actions are revealed to the authentication server, which is not assumed to be trusted. Wei et al. proposed in 2021 a privacy-preserving protocol for behavioral authentication that utilizes homomorphic encryption. The encryption prevents the server from obtaining the sampled user features. In this paper, we show that the Wei et al. scheme is insecure against both an honest-but-curious server and an active eavesdropper. We present two attacks: the first enables the authentication server to recover the secret user key, the plaintext behavior template, and the plaintext authentication behavior data from the encrypted data; the second enables an active eavesdropper to restore the plaintext authentication behavior data from the transmitted encrypted data.
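
For concreteness, here is a minimal toy sketch of the kind of additively homomorphic encryption involved (Paillier-style; this is not the Wei et al. construction, and the key parameters and feature values below are illustrative only):

```python
import random
from math import gcd

# Toy Paillier cryptosystem: additively homomorphic.
# WARNING: tiny parameters, illustration only -- not secure.
p, q = 1000003, 1000033                  # toy primes (real keys need >1024-bit primes)
n = p * q
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
n2, g = n * n, n + 1                     # standard choice g = n + 1

def encrypt(m):
    r = random.randrange(2, n)           # blinding factor; gcd(r, n) = 1 w.h.p.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python >= 3.8)
    return (L(pow(c, lam, n2)) * mu) % n

# The server can add encrypted behavioral features without seeing them:
template_feature, fresh_feature = 42, 17
c_sum = (encrypt(template_feature) * encrypt(fresh_feature)) % n2
assert decrypt(c_sum) == template_feature + fresh_feature
print("homomorphic sum:", decrypt(c_sum))
```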

Related content


A bidirectional integrated sensing and communication (ISAC) system is proposed, in which a pair of transceivers carry out two-way communication and mutual sensing. Both full-duplex and half-duplex operations in narrowband and wideband systems are conceived for the bidirectional ISAC. 1) For the narrowband system, the conventional full-duplex and half-duplex operations are redesigned to take sensing echo signals into account. Then, the transmit beamforming design of both transceivers is proposed for addressing the sensing and communication (S&C) tradeoff. A one-layer iterative algorithm relying on successive convex approximation (SCA) is proposed to obtain Karush-Kuhn-Tucker (KKT) optimal solutions. 2) For the wideband system, new full-duplex and half-duplex operations are proposed for the bidirectional ISAC. In particular, the frequency-selective fading channel is tackled by delay pre-compensation and path-based beamforming. By redesigning the proposed SCA-based algorithm, KKT optimal solutions for path-based beamforming that characterize the S&C tradeoff are obtained. Finally, the numerical results show that: i) for both bandwidth scenarios, the interference introduced by sensing means that full-duplex may not always outperform half-duplex, especially in the sensing-prior regime or when the communication channel is line-of-sight-dominated; and ii) for both duplex operations, reusing communication signals for sensing is sufficient in the narrowband system, while an additional dedicated sensing signal is required in the wideband system.
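
As a toy numpy illustration of the S&C beamforming tradeoff (not the paper's SCA algorithm; the array size, channel realization, and target angle are assumed), the sketch below mixes a maximum-ratio communication beamformer with a sensing beamformer steered at a hypothetical target and sweeps the tradeoff weight:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # ULA antennas (assumed)
theta_t = np.deg2rad(30.0)               # hypothetical sensing target angle

def steering(theta, n=N):
    # Half-wavelength ULA steering vector, unit norm.
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # comm channel
w_com = h / np.linalg.norm(h)            # MRT: best for communication
w_sen = steering(theta_t)                # best beampattern gain toward the target

for rho in (0.0, 0.5, 1.0):              # tradeoff weight between S and C
    w = rho * w_sen + (1 - rho) * w_com
    w = w / np.linalg.norm(w)            # unit transmit power
    gain_com = abs(np.vdot(h, w)) ** 2                   # communication channel gain
    gain_sen = abs(np.vdot(steering(theta_t), w)) ** 2   # beampattern gain at target
    print(f"rho={rho:.1f}  comm gain={gain_com:.3f}  sensing gain={gain_sen:.3f}")
```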

With the rapid growth of machine learning, deep neural networks (DNNs) are now being used in numerous domains. Unfortunately, DNNs are "black-boxes", and cannot be interpreted by humans, which is a substantial concern in safety-critical systems. To mitigate this issue, researchers have begun working on explainable AI (XAI) methods, which can identify a subset of input features that are the cause of a DNN's decision for a given input. Most existing techniques are heuristic, and cannot guarantee the correctness of the explanation provided. In contrast, recent and exciting attempts have shown that formal methods can be used to generate provably correct explanations. Although these methods are sound, the computational complexity of the underlying verification problem limits their scalability; and the explanations they produce might sometimes be overly complex. Here, we propose a novel approach to tackle these limitations. We (1) suggest an efficient, verification-based method for finding minimal explanations, which constitute a provable approximation of the global, minimum explanation; (2) show how DNN verification can assist in calculating lower and upper bounds on the optimal explanation; (3) propose heuristics that significantly improve the scalability of the verification process; and (4) suggest the use of bundles, which allows us to arrive at more succinct and interpretable explanations. Our evaluation shows that our approach significantly outperforms state-of-the-art techniques, and produces explanations that are more useful to humans. We thus regard this work as a step toward leveraging verification technology in producing DNNs that are more reliable and comprehensible.
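
A minimal sketch of the deletion-based search for a subset-minimal explanation, with a linear classifier standing in for the DNN so that the "verifier" reduces to exact interval analysis (the weights, input, and bounds are made up):

```python
import numpy as np

# Toy "DNN": a linear scorer f(x) = w.x + b, class = sign(f).
w = np.array([2.0, -1.0, 0.5, 3.0])
b = -1.0
x = np.array([1.0, 0.0, 1.0, 1.0])       # input to explain; f(x) = 4.5 > 0
lo, hi = np.zeros(4), np.ones(4)         # input domain [0, 1]^4

def verified_robust(fixed):
    """Sound check: does the class stay positive for every completion of the
    free features? For a linear model, interval analysis is exact."""
    worst = b
    for i in range(len(w)):
        if i in fixed:
            worst += w[i] * x[i]
        else:                             # adversary picks the worst value
            worst += min(w[i] * lo[i], w[i] * hi[i])
    return worst > 0

# Deletion-based search for a subset-minimal (not necessarily minimum) explanation.
explanation = set(range(len(w)))
for i in sorted(explanation):
    if verified_robust(explanation - {i}):
        explanation.remove(i)             # feature i is not needed
print("minimal explanation (feature indices):", sorted(explanation))
```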

Modern recommender systems are trained to predict users' potential future interactions from their historical behavior data. During the interaction process, besides the data coming from the user side, recommender systems also generate exposure data to provide users with personalized recommendation slates. Compared with the sparse user behavior data, the system exposure data is much larger in volume, since only very few exposed items are clicked by the user. Moreover, users' historical behavior data is privacy-sensitive and is commonly protected with careful access authorization, whereas the large volume of recommender exposure data usually receives less attention and can be accessed by a relatively larger scope of information seekers. In this paper, we investigate the problem of user behavior leakage in recommender systems. We show that privacy-sensitive user past behavior data can be inferred through the modeling of system exposure: one can infer which items the user has clicked just from observing the current system exposure for this user. Given that system exposure data can be widely accessed, we believe that user past behavior privacy has a high risk of leakage in recommender systems. More precisely, we construct an attack model whose input is the current recommended item slate (i.e., the system exposure) for the user and whose output is the user's historical behavior. Experimental results on two real-world datasets indicate a serious danger of user behavior leakage. To address the risk, we propose a two-stage privacy-protection mechanism which first selects a subset of items from the exposure slate and then replaces the selected items with uniform or popularity-based exposure. Experimental evaluation reveals a trade-off between recommendation accuracy and privacy disclosure risk.
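
A minimal sketch of such a two-stage mechanism (the uniform position selection and the Zipf-like popularity model are assumptions, not necessarily the paper's exact design):

```python
import random

def protect_slate(slate, catalog, popularity, k=3, mode="popularity", seed=0):
    """Stage 1: pick k positions from the exposure slate (uniformly here).
    Stage 2: replace them with uniform or popularity-based exposure."""
    rng = random.Random(seed)
    protected = list(slate)
    for pos in rng.sample(range(len(slate)), k):
        if mode == "uniform":
            protected[pos] = rng.choice(catalog)
        else:  # popularity-based sampling over the whole catalog
            protected[pos] = rng.choices(catalog, weights=popularity, k=1)[0]
    return protected

catalog = list(range(100))                            # toy item catalog
popularity = [1.0 / (rank + 1) for rank in catalog]   # Zipf-like popularity (assumed)
slate = [5, 17, 42, 63, 88, 91]                       # exposure produced by recommender
print(protect_slate(slate, catalog, popularity, k=3))
```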

Traffic analysis of instant messaging (IM) applications continues to pose an important privacy challenge. In particular, transport-level data can leak unintentional information about IM, such as who communicates with whom. Existing tools for metadata privacy have adoption obstacles, including the risk of being scrutinized for having a particular app installed and performance overheads incompatible with mobile devices. We posit that resilience to traffic analysis must be directly supported by major IM services themselves, and must be provided in a low-latency manner without breaking existing features. As a first step in this direction, we propose a hybrid model that combines regular network traffic and deniable messages. We present a novel protocol for deniable instant messaging, called DenIM, which is a variant of the Signal protocol. DenIM is built on the principle that deniable messages can be incorporated as part of the padding in regular traffic. By padding traffic, DenIM achieves bandwidth overhead that scales with the volume of regular traffic, as opposed to scaling with time or the number of users. To show the effectiveness of DenIM, we construct a formal model and prove that DenIM's deniability guarantees hold against strong adversaries such as internet service providers, and we implement and empirically evaluate a proof-of-concept version of DenIM.
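
A minimal sketch of the padding principle (the framing format and sizes are assumptions; the real protocol encrypts the whole frame, so padding and any hidden payload are indistinguishable on the wire):

```python
import os
import struct

WIRE_SIZE = 256  # every message on the wire has this length (assumed constant)

def pack(regular: bytes, deniable: bytes = b"") -> bytes:
    """[2-byte regular len][regular][2-byte deniable len][deniable][random pad].
    In the real protocol the whole frame is then encrypted."""
    body = struct.pack(">H", len(regular)) + regular \
         + struct.pack(">H", len(deniable)) + deniable
    assert len(body) <= WIRE_SIZE, "payloads exceed the fixed wire size"
    return body + os.urandom(WIRE_SIZE - len(body))   # random filler

def unpack(wire: bytes):
    rlen = struct.unpack_from(">H", wire, 0)[0]
    regular = wire[2:2 + rlen]
    dlen = struct.unpack_from(">H", wire, 2 + rlen)[0]
    deniable = wire[4 + rlen:4 + rlen + dlen]
    return regular, deniable

w1 = pack(b"see you at noon")                        # no deniable payload
w2 = pack(b"see you at noon", b"meet at the cafe")   # deniable payload in the padding
assert len(w1) == len(w2) == WIRE_SIZE               # identical size on the wire
print(unpack(w2))
```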

We address two major obstacles to the practical use of supervised classifiers on distributed private data. Whether a classifier was trained by a federation of cooperating clients or trained centrally out of distribution, (1) its output scores must be calibrated, and (2) its performance metrics must be evaluated, all without assembling the labels in one place. In particular, we show how to perform calibration and compute precision, recall, accuracy, and ROC-AUC in the federated setting under three privacy models: (i) secure aggregation, (ii) distributed differential privacy, and (iii) local differential privacy. Our theorems and experiments clarify the tradeoffs between privacy, accuracy, and data efficiency. They also help decide whether a given application has sufficient data to support federated calibration and evaluation.
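
A minimal simulation of the secure-aggregation privacy model (pairwise additive masks that cancel in the sum; the per-client confusion counts are toy data), from which precision, recall, and accuracy follow:

```python
import random

MOD = 2**32  # masks live in a finite group so they cancel exactly

# Toy per-client confusion counts: (TP, FP, FN, TN) -- assumed data.
clients = [(40, 5, 8, 47), (22, 3, 6, 69), (31, 9, 4, 56)]
n = len(clients)

# Pairwise masks: client i adds mask[i][j], client j subtracts it; the sum cancels.
mask = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        mask[i][j] = random.randrange(MOD)

def masked_report(i):
    offset = sum(mask[i][j] for j in range(i + 1, n)) \
           - sum(mask[j][i] for j in range(i))
    return [(c + offset) % MOD for c in clients[i]]   # server sees only this

totals = [sum(rep[k] for rep in map(masked_report, range(n))) % MOD
          for k in range(4)]
tp, fp, fn, tn = totals
print("precision:", tp / (tp + fp))
print("recall:   ", tp / (tp + fn))
print("accuracy: ", (tp + tn) / (tp + fp + fn + tn))
```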

Polynomial Networks (PNs) have recently demonstrated promising performance on face and image recognition. However, the robustness of PNs is unclear, and obtaining certificates is therefore imperative for enabling their adoption in real-world applications. Existing verification algorithms for ReLU neural networks (NNs) based on classical branch and bound (BaB) techniques cannot be trivially applied to PN verification. In this work, we devise a new bounding method, equipped with BaB for global convergence guarantees, called Verification of Polynomial Networks, or VPN for short. One key insight is that we obtain much tighter bounds than the interval bound propagation (IBP) and DeepT-Fast [Bonaert et al., 2021] baselines. This enables sound and complete PN verification with empirical validation on the MNIST, CIFAR10, and STL10 datasets. We believe our method is of independent interest for NN verification. The source code is publicly available at https://github.com/megaelius/PNVerification.
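
To see why IBP is loose on PNs, here is a minimal interval-bound-propagation sketch through one degree-2 (Hadamard-product) polynomial layer with made-up weights; the product step ignores that both factors depend on the same input, which is exactly the looseness that tighter bounding methods attack:

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Exact interval bounds of W @ x + b for x in [lo, hi]."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    mid = W @ center + b
    rad = np.abs(W) @ radius
    return mid - rad, mid + rad

def product_bounds(lo1, hi1, lo2, hi2):
    """Interval bounds of the elementwise product z1 * z2 (ignores correlation,
    hence can be very loose)."""
    cands = np.stack([lo1 * lo2, lo1 * hi2, hi1 * lo2, hi1 * hi2])
    return cands.min(axis=0), cands.max(axis=0)

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))
b1, b2 = rng.standard_normal(3), rng.standard_normal(3)
x, eps = np.array([0.5, -0.2]), 0.1       # input and perturbation radius

lo1, hi1 = affine_bounds(W1, b1, x - eps, x + eps)
lo2, hi2 = affine_bounds(W2, b2, x - eps, x + eps)
lo, hi = product_bounds(lo1, hi1, lo2, hi2)   # degree-2 PN layer: (W1 x) * (W2 x)
print("output bounds per neuron:")
for l, h in zip(lo, hi):
    print(f"  [{l:+.3f}, {h:+.3f}]")
```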

Federated Learning (FL) enables collaborative model building among a large number of participants without explicit data sharing. However, this approach is vulnerable to privacy inference attacks. In particular, in the event of a gradient leakage attack, which has a high success rate in retrieving sensitive data from model gradients, FL models are at elevated risk due to the communication inherent in their architecture. The most alarming aspect of the gradient leakage attack is that it can be performed covertly: it does not hamper training performance while the attacker backtracks from the gradients to recover information about the raw data. The two most common countermeasures proposed for this issue are homomorphic encryption and noise injection with differential privacy parameters. Each suffers from a major drawback: the key generation process becomes tedious as the number of clients grows, and noise-based differential privacy causes a significant drop in global model accuracy. As a countermeasure, we propose a mixed-precision quantized FL scheme, and we empirically show that both of the above issues can be resolved. In addition, our approach provides more robustness, as different layers of the deep model are quantized with different precisions and quantization modes. We empirically validate our method on three benchmark datasets and find a minimal accuracy drop in the global model after applying quantization.
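
A minimal numpy sketch of the mixed-precision idea (the per-layer bit plan and the uniform quantizer are assumptions, not the paper's exact scheme): each layer's gradient is quantized at its own bit width before being shared, so coarser layers reveal less while well-chosen precisions limit the error:

```python
import numpy as np

def quantize(grad, bits):
    """Uniform symmetric quantization of a gradient tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(grad)) / levels
    if scale == 0.0:
        scale = 1.0                       # all-zero gradient edge case
    q = np.round(grad / scale)            # integers in [-levels, levels]
    return q * scale                      # dequantized value that gets shared

rng = np.random.default_rng(1)
gradients = {                             # toy per-layer gradients (assumed)
    "conv1": rng.standard_normal(1000) * 0.05,
    "conv2": rng.standard_normal(1000) * 0.02,
    "fc":    rng.standard_normal(1000) * 0.10,
}
bit_plan = {"conv1": 8, "conv2": 4, "fc": 2}   # mixed precision per layer (assumed)

for name, g in gradients.items():
    gq = quantize(g, bit_plan[name])
    err = np.linalg.norm(g - gq) / np.linalg.norm(g)
    print(f"{name}: {bit_plan[name]}-bit, relative error {err:.3f}")
```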

Black-box machine learning models are being used in more and more high-stakes domains, which creates a growing need for Explainable AI (XAI). Unfortunately, the use of XAI in machine learning introduces new privacy risks, which currently remain largely unnoticed. We introduce the explanation linkage attack, which can occur when instance-based strategies are deployed to find counterfactual explanations. To counter such an attack, we propose k-anonymous counterfactual explanations and introduce pureness as a new metric to evaluate the validity of these k-anonymous counterfactual explanations. Our results show that making the explanations, rather than the whole dataset, k-anonymous is beneficial for the quality of the explanations.
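
A minimal sketch of the k-anonymity requirement applied to counterfactuals (toy tabular data; the choice of quasi-identifiers and the release check are illustrative assumptions, not the paper's method):

```python
# Toy records: (age_bucket, zip_prefix, outcome) -- assumed quasi-identifiers.
dataset = [
    ("30-40", "101", 1), ("30-40", "101", 0), ("30-40", "101", 1),
    ("40-50", "102", 0), ("40-50", "102", 1), ("20-30", "103", 0),
]

def is_k_anonymous(counterfactual, data, k):
    """Release the explanation only if >= k records share its quasi-identifiers,
    so the counterfactual cannot be linked to a single individual."""
    qid = counterfactual[:2]
    return sum(1 for row in data if row[:2] == qid) >= k

cf_ok  = ("30-40", "101", 1)   # matches 3 records -> safe for k <= 3
cf_bad = ("20-30", "103", 1)   # matches 1 record  -> linkage risk
print(is_k_anonymous(cf_ok, dataset, k=3))    # True
print(is_k_anonymous(cf_bad, dataset, k=3))   # False
```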

The recent worldwide introduction of RemoteID (RID) regulations forces all Unmanned Aircraft (UAs), a.k.a. drones, to broadcast their identity and real-time location in plaintext on the wireless channel, for accounting and monitoring purposes. Although the RID rule improves drone monitoring and situational awareness, it also generates significant privacy concerns for UA operators, who are threatened by the ease with which UAs can be tracked and by the confidentiality issues connected with broadcasting plaintext identity information. In this paper, we propose $A^2RID$, a protocol suite for anonymous direct authentication and remote identification of heterogeneous commercial UAs. $A^2RID$ integrates and adapts protocols for anonymous message signing to the UA domain, coping with the constraints of commercial drones and the tight real-time requirements imposed by the RID regulation. Overall, the protocols in the $A^2RID$ suite allow a UA manufacturer to pick the configuration that best suits the capabilities and constraints of the drone, i.e., either a processing-intensive but memory-lightweight solution (namely, $CS-A^2RID$) or a computationally friendly but memory-hungry approach (namely, $DS-A^2RID$). Besides formally defining the protocols and proving their security in our setting, we also implement and test them on real heterogeneous hardware platforms, i.e., the Holybro X-500 and the ESPcopter, and release the produced code as open source. For all the protocols, we experimentally demonstrate the capability of generating anonymous RemoteID messages well below the $1$-second time bound required by RID, while having a limited impact on the drone's energy budget.
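
As a rough sanity check of the 1-second bound, here is a timing sketch that signs a RemoteID-style message with plain Ed25519 from the `cryptography` package; this is a stand-in, not the anonymous $CS-A^2RID$/$DS-A^2RID$ signing, which is substantially more expensive on drone-class hardware:

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in signer: plain (non-anonymous) Ed25519, for timing intuition only.
sk = Ed25519PrivateKey.generate()

message = json.dumps({               # RemoteID-style broadcast fields (assumed)
    "uas_id": "TOY-UA-0001",
    "lat": 44.4949, "lon": 11.3426, "alt_m": 120.0,
    "timestamp": time.time(),
}).encode()

t0 = time.perf_counter()
signature = sk.sign(message)
elapsed = time.perf_counter() - t0
sk.public_key().verify(signature, message)   # raises InvalidSignature on failure
print(f"signed {len(message)} bytes in {elapsed * 1000:.3f} ms (RID bound: 1000 ms)")
```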

Users today expect more security from services that handle their data. In addition to traditional data privacy and integrity requirements, they expect transparency, i.e., that the service's processing of the data is verifiable by users and trusted auditors. Our goal is to build a multi-user system that provides data privacy, integrity, and transparency for a large number of operations, while achieving practical performance. To this end, we first identify the limitations of existing approaches that use authenticated data structures. We find that they fall into two categories: 1) those that hide each user's data from other users, but have a limited range of verifiable operations (e.g., CONIKS, Merkle$^2$, and Proofs of Liabilities), and 2) those that support a wide range of verifiable operations, but make all data publicly visible (e.g., IntegriDB and FalconDB). We then present TAP to address the above limitations. The key component of TAP is a novel tree data structure that supports efficient result verification, and relies on independent audits that use zero-knowledge range proofs to show that the tree is constructed correctly without revealing user data. TAP supports a broad range of verifiable operations, including quantiles and sample standard deviations. We conduct a comprehensive evaluation of TAP, and compare it against two state-of-the-art baselines, namely IntegriDB and Merkle$^2$, showing that the system is practical at scale.
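
A minimal sketch in the spirit of TAP's tree (the hashing layout is an assumption; the real system adds zero-knowledge range proofs and supports far richer queries): a Merkle sum tree whose internal nodes commit to both a hash and a running sum, so an aggregate result can be checked against the published root:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build(values):
    """Levels of a Merkle sum tree; each node is (hash, subtree_sum).
    len(values) is assumed to be a power of two for simplicity."""
    level = [(H(str(v).encode()), v) for v in values]
    levels = [level]
    while len(level) > 1:
        level = [(H(level[i][0] + level[i + 1][0]
                    + str(level[i][1] + level[i + 1][1]).encode()),
                  level[i][1] + level[i + 1][1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

values = [12, 7, 30, 5]                  # toy user-contributed values
levels = build(values)
root_hash, total = levels[-1][0]
print("root commits to sum:", total)     # a verifier recomputes hashes up the path
assert total == sum(values)              # and checks them against root_hash
```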
