
The well-known benefits of cloud computing have spurred the popularity of database service outsourcing, where one can resort to the cloud to conveniently store and query databases. Coming with this popular trend, however, is the threat to data privacy, as the cloud gains access to the databases and queries, which may contain sensitive information such as medical or financial data. A large body of work has been presented for querying encrypted databases, mostly focused on secure keyword search. In this paper, we instead focus on support for secure skyline query processing over encrypted outsourced databases, where little work has been done. The skyline query is an advanced kind of database query that is important for multi-criteria decision-making systems and applications. We propose SecSkyline, a new system framework building on lightweight cryptography for fast privacy-preserving skyline queries. SecSkyline provides strong protection not only for the content confidentiality of the outsourced database, the query, and the result, but also for data patterns that may incur indirect data leakage, such as dominance relationships among data points and search access patterns. Extensive experiments demonstrate that SecSkyline is substantially superior to the state-of-the-art in query latency, with up to 813$\times$ improvement.
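
SecSkyline's cryptographic protocol is not reproduced here; as a point of reference for what a skyline query computes, the following minimal plaintext sketch returns the points that no other point dominates. The convention that smaller values are better in every dimension is an illustrative assumption; these dominance relationships are exactly the data patterns SecSkyline additionally hides.

```python
# Minimal plaintext skyline computation (illustration only; SecSkyline itself
# evaluates this over encrypted data, which is not reproduced here).
# Convention assumed here: smaller values are better in every dimension.

def dominates(p, q):
    """p dominates q if p is <= q in all dimensions and < q in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Return the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Example: (price, distance) of hotels; the skyline is the set of best trade-offs.
hotels = [(50, 8.0), (70, 2.0), (60, 5.0), (90, 1.5), (55, 9.0)]
print(skyline(hotels))  # (55, 9.0) is dropped: it is dominated by (50, 8.0)
```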

Related Content

We study the problem of simultaneously addressing both ballot stuffing and participation privacy for pollsite voting systems. Ballot stuffing is the attack where fake ballots (not cast by any eligible voter) are inserted into the system. Participation privacy is about hiding which eligible voters have actually cast their vote. So far, the combination of ballot stuffing and participation privacy has been studied mostly for internet voting, where voters are assumed to own trusted computing devices. Such approaches are inapplicable to pollsite voting, where voters typically vote bare-handed. We present an eligibility audit protocol to detect ballot stuffing in pollsite voting protocols, while protecting participation privacy from a remote observer, i.e., one who does not physically observe voters during voting. Our protocol can be instantiated as an additional layer on top of most existing pollsite E2E-V voting protocols. To achieve our guarantees, we develop an efficient zero-knowledge proof (ZKP) that, given a value $v$ and a set $\Phi$ of commitments, proves that $v$ is committed by some commitment in $\Phi$ without revealing which one. We call this a ZKP of reverse set membership because of its relationship to the popular ZKPs of set membership. This ZKP may be of independent interest.
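
The paper's ZKP construction is not reproduced here; the toy Python sketch below only illustrates the statement being proven, namely that a value $v$ is committed by some Pedersen commitment in a set $\Phi$. It opens the commitments directly and is therefore not zero-knowledge, and the group parameters and generators are illustrative assumptions only.

```python
# Toy illustration of the *statement* behind a ZKP of reverse set membership:
# given a value v and a set Phi of Pedersen commitments, v is committed by some
# commitment in Phi. This sketch simply opens the commitments and is NOT
# zero-knowledge; the paper's protocol proves the statement without revealing
# which commitment matches. Group parameters below are purely illustrative.
import random

p = 2**127 - 1            # toy prime modulus (not a production-grade choice)
g, h = 3, 7               # assumed independent generators (illustrative only)

def commit(v, r):
    """Pedersen commitment C = g^v * h^r mod p."""
    return (pow(g, v, p) * pow(h, r, p)) % p

# A set Phi of commitments, one of which commits to v = 42.
openings = [(10, random.randrange(p)), (42, random.randrange(p)), (99, random.randrange(p))]
Phi = [commit(v, r) for v, r in openings]

def reverse_set_membership(v, Phi, openings):
    """Check (with full openings, hence no privacy) that v is committed in Phi."""
    return any(c == commit(v, r) for c, (_, r) in zip(Phi, openings))

print(reverse_set_membership(42, Phi, openings))  # True
```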

The monitoring of individuals and objects has become increasingly common in recent years due to the integration of cameras into many devices. Because these devices capture important moments and activities of people, they have become a valuable target for attackers, who exploit weaknesses in the devices. Several studies have proposed na\"ive or selective encryption of the captured visual data for protection, but despite the encryption, an attacker can still access or manipulate such data. This paper proposes a novel threat model, DEMIS, which helps analyse the threats against such encrypted videos. The paper also examines the attack vectors that can be used to realize these threats and the mitigations that reduce or prevent the attacks. For the experiments, a dataset is first generated by applying selective encryption to the regions of interest (ROIs) of the tested videos, using an image segmentation technique and the ChaCha20 cipher. Different types of attacks, such as inverse, lowercase, uppercase, random insertion, and malleability attacks, are then simulated to show their effects, the risk matrix, and their severity. Our dataset with the original, selectively encrypted, and attacked videos is available in a git repository (//github.com/Ifeoluwapoo/video-datasets) for future researchers.
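
The paper's dataset-generation pipeline is not reproduced here; a minimal sketch of the core step it describes, selectively encrypting a region of interest of a frame with ChaCha20, might look as follows. The ROI coordinates, key handling, and use of numpy with the `cryptography` package are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of selective (ROI-only) encryption of a video frame with
# ChaCha20. The ROI coordinates, key, and nonce handling are illustrative
# assumptions, not the paper's pipeline.
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

def encrypt_roi(frame: np.ndarray, roi, key: bytes, nonce: bytes) -> np.ndarray:
    """Encrypt only the region of interest (y0:y1, x0:x1) of a uint8 frame."""
    y0, y1, x0, x1 = roi
    region = frame[y0:y1, x0:x1].copy()
    cipher = Cipher(algorithms.ChaCha20(key, nonce), mode=None)
    ct = cipher.encryptor().update(region.tobytes())
    out = frame.copy()
    out[y0:y1, x0:x1] = np.frombuffer(ct, dtype=np.uint8).reshape(region.shape)
    return out

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # fake frame
key, nonce = os.urandom(32), os.urandom(16)   # ChaCha20: 256-bit key, 128-bit nonce
protected = encrypt_roi(frame, roi=(100, 300, 200, 400), key=key, nonce=nonce)
```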

Federated learning has recently gained significant attention and popularity due to its effectiveness in training machine learning models on distributed data privately. However, as in the single-node supervised learning setup, models trained with federated learning are vulnerable to imperceptible input transformations known as adversarial attacks, questioning their deployment in security-related applications. In this work, we study the interplay between federated training, personalization, and certified robustness. In particular, we deploy randomized smoothing, a widely used and scalable certification method, to certify deep networks trained in a federated setup against input perturbations and transformations. We find that the simple federated averaging technique is effective in building not only more accurate but also more certifiably robust models, compared to training solely on local data. We further analyze the effect on robustness of personalization, a popular technique in federated training that increases the model's bias towards local data. We show several advantages of personalization over both alternatives (training only on local data and plain federated training) in building more robust models with faster training. Finally, we explore the robustness of mixtures of global and local (i.e., personalized) models, and find that the robustness of local models degrades as they diverge from the global model.
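
The certification procedure builds on standard randomized smoothing; as a reference point, a minimal numpy sketch of the smoothed prediction step (a majority vote over Gaussian-perturbed copies of the input) is shown below. The base classifier, noise level, and sample count are placeholders, and the certification radius and abstention rule are omitted.

```python
# Minimal sketch of the prediction step of randomized smoothing: the smoothed
# classifier returns the class most frequently predicted by the base classifier
# on Gaussian-perturbed copies of the input. The base classifier and sigma are
# placeholders; certification (radius computation, abstention) is omitted.
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    noisy = x[None, :] + sigma * rng.standard_normal((n_samples, x.size))
    preds = np.array([base_classifier(z) for z in noisy])
    classes, counts = np.unique(preds, return_counts=True)
    return classes[np.argmax(counts)]

# Toy base classifier on 2-D inputs: predicts 1 if the coordinate sum is positive.
toy_classifier = lambda z: int(z.sum() > 0)
print(smoothed_predict(toy_classifier, np.array([0.3, -0.1])))  # 1
```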

A new approach to calculating the finite Fourier transform is suggested in this study. The conceptual goal of the method, which was designed specifically for this study, is that the series is updated with an appropriate modification and purification, and this updated series serves as the basis for the investigation. For the approach to succeed, the starting point must be the assumption that the series has been suitably purified and organized. The attributes of the series were determined by selecting a suitable application of the Fourier series, applying it, and analyzing it in relation to the finite Fourier transform. The results of this study provide a better understanding of the characteristics of this series.
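
The abstract does not spell out the series construction it modifies; for reference, the standard finite (discrete) Fourier transform of a length-$N$ sequence $x_0,\dots,x_{N-1}$ is $X_k=\sum_{n=0}^{N-1}x_n e^{-2\pi i kn/N}$, which the short numpy check below computes directly and compares against `np.fft.fft`.

```python
# Reference implementation of the standard finite (discrete) Fourier transform,
#   X_k = sum_{n=0}^{N-1} x_n * exp(-2*pi*i*k*n/N),
# checked against numpy's FFT. This is the textbook definition, not the
# modified/purified series construction proposed in the abstract above.
import numpy as np

def finite_fourier_transform(x):
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.exp(-2j * np.pi * k * n / N) @ x

x = np.random.default_rng(0).standard_normal(8)
assert np.allclose(finite_fourier_transform(x), np.fft.fft(x))
```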

Trusted execution environments (TEEs) are quickly rising in popularity as they enable running workloads in the cloud without having to trust cloud service providers, by offering additional hardware-assisted security guarantees. One key mechanism for server-grade TEEs is main memory encryption, as it not only prevents system-level attackers from reading the TEE's content, but also provides protection against physical, off-chip attackers. The recent Cipherleaks attacks show that the memory encryption of AMD SEV-SNP, and potentially of other TEEs, is vulnerable to a new kind of attack, dubbed the ciphertext side-channel. The ciphertext side-channel allows an attacker to leak secret data from TEE-protected implementations by analyzing ciphertext patterns exhibited due to deterministic memory encryption. It cannot be mitigated by current best practices like data-oblivious constant-time code. As these ciphertext leakages are inherent to deterministic memory encryption, a hardware fix on existing systems is unlikely. Thus, in this paper, we present a software-based, drop-in solution that can harden existing binaries such that they can be safely executed under TEEs vulnerable to ciphertext side-channels. We combine taint tracking with both static and dynamic binary instrumentation to find sensitive memory locations and prevent the leakage by masking secret data before it gets written to memory. This way, although the memory encryption remains deterministic, we destroy any secret-dependent patterns in encrypted memory. We show that our proof-of-concept implementation can protect constant-time EdDSA and ECDSA implementations against ciphertext side-channels.
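
The paper's mitigation operates on binaries via instrumentation; the toy Python sketch below only illustrates the underlying observation, with AES-ECB standing in for deterministic memory encryption (an assumption made purely for illustration). Writing the same secret twice yields identical ciphertexts, while XOR-masking the value with fresh randomness before each write destroys the secret-dependent pattern.

```python
# Toy illustration of the ciphertext side-channel idea, with AES-ECB standing in
# for deterministic memory encryption (an illustrative assumption; real TEEs use
# tweaked block ciphers). Writing the same 16-byte secret twice produces
# identical ciphertexts; masking the secret with fresh randomness before each
# "write" removes the secret-dependent pattern, in the spirit of the mitigation.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

mem_key = os.urandom(32)
encrypt_block = lambda pt: Cipher(algorithms.AES(mem_key), modes.ECB()).encryptor().update(pt)

secret = b"sixteen byte key"                             # 16-byte secret value
print(encrypt_block(secret) == encrypt_block(secret))    # True: deterministic pattern leaks

def masked_write(value: bytes) -> tuple[bytes, bytes]:
    """Store (mask, value XOR mask) instead of value; the mask is fresh per write."""
    mask = os.urandom(len(value))
    return encrypt_block(mask), encrypt_block(bytes(a ^ b for a, b in zip(value, mask)))

print(masked_write(secret) == masked_write(secret))      # False: pattern destroyed
```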

In Federated Learning (FL), multiple clients collaborate to learn a shared model through a central server while keeping data decentralized. Personalized Federated Learning (PFL) further extends FL by learning a personalized model per client. In both FL and PFL, all clients participate in the training process and their labeled data are used for training. In reality, however, novel clients may wish to join a prediction service after it has been deployed, obtaining predictions for their own \textbf{unlabeled} data. Here, we introduce a new learning setup, On-Demand Unlabeled PFL (OD-PFL), in which a system trained on a set of clients needs to be applied later to novel unlabeled clients at inference time. We propose a novel approach to this problem, ODPFL-HN, which learns to produce a new model for the late-to-the-party client. Specifically, we train an encoder network that learns a representation for a client given its unlabeled data. That client representation is fed to a hypernetwork that generates a personalized model for that client. Evaluated on five benchmark datasets, we find that ODPFL-HN generalizes better than current FL and PFL methods, especially when the novel client has a large shift from the training clients. We also analyze the generalization error for novel clients and show analytically and experimentally how novel clients can apply differential privacy.
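
The exact ODPFL-HN architecture is not specified here; a minimal PyTorch sketch of the general idea, an encoder that maps a client's unlabeled data to an embedding and a hypernetwork that maps the embedding to the weights of a small personalized classifier, might look as follows. Layer sizes and the mean-pooling encoder are illustrative assumptions.

```python
# Minimal sketch of the encoder + hypernetwork idea: the encoder summarizes a
# client's unlabeled data into an embedding, and the hypernetwork maps that
# embedding to the weights of a small personalized linear classifier. All layer
# sizes and the mean-pooling encoder are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

D_IN, D_EMB, N_CLASSES = 32, 16, 10

class ClientEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(D_IN, 64), nn.ReLU(), nn.Linear(64, D_EMB))
    def forward(self, unlabeled_x):               # (n_samples, D_IN)
        return self.net(unlabeled_x).mean(dim=0)  # permutation-invariant client embedding

class HyperNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = nn.Linear(D_EMB, N_CLASSES * D_IN)   # generates classifier weights
        self.b = nn.Linear(D_EMB, N_CLASSES)          # generates classifier bias
    def forward(self, client_emb, x):
        W = self.w(client_emb).view(N_CLASSES, D_IN)
        return F.linear(x, W, self.b(client_emb))     # personalized prediction

encoder, hyper = ClientEncoder(), HyperNetwork()
unlabeled = torch.randn(100, D_IN)                    # novel client's unlabeled data
logits = hyper(encoder(unlabeled), torch.randn(5, D_IN))
print(logits.shape)                                   # torch.Size([5, 10])
```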

Pufferfish privacy (PP) is a generalization of differential privacy (DP) that offers flexibility in specifying sensitive information and integrates domain knowledge into the privacy definition. Inspired by the illuminating equivalent formulation of DP in terms of mutual information due to Cuff and Yu, this work explores PP through the lens of information theory. We provide an information-theoretic formulation of PP, termed mutual information PP (MI-PP), in terms of the conditional mutual information between the mechanism and the secret, given the public information. We show that MI-PP is implied by regular PP and characterize conditions under which the reverse implication also holds, recovering the DP information-theoretic equivalence result as a special case. We establish convexity, composability, and post-processing properties for MI-PP mechanisms and derive noise levels for the Gaussian and Laplace mechanisms. The obtained mechanisms are applicable under relaxed assumptions and provide improved noise levels in some regimes, compared to classic, sensitivity-based approaches. Lastly, we explore applications of MI-PP to auditing privacy frameworks, statistical inference tasks, and algorithm stability.
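
The paper's noise calibration is derived under MI-PP and is not reproduced here; for contrast, the classic sensitivity-based Laplace mechanism it is compared against can be sketched in a few lines, with noise scale equal to the L1-sensitivity divided by epsilon. The counting query and parameter values below are illustrative.

```python
# Classical sensitivity-based Laplace mechanism, included for contrast with the
# MI-PP calibration discussed above (which is not reproduced here). The query
# (a counting query with L1-sensitivity 1) and epsilon are illustrative.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

count = 137                                   # e.g., number of records satisfying a predicate
print(laplace_mechanism(count, sensitivity=1.0, epsilon=0.5))
```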

Automatically understanding the contents of an image is a highly relevant problem in practice. In e-commerce and social media settings, for example, a common problem is to automatically categorize user-provided pictures. Nowadays, a standard approach is to fine-tune pre-trained image models with application-specific data. Besides images, however, organizations often also collect collaborative signals in the context of their application, in particular how users interacted with the provided online content, e.g., in the form of viewing, rating, or tagging. Such signals are commonly used for item recommendation, typically by deriving latent user and item representations from the data. In this work, we show that such collaborative information can be leveraged to improve the classification of new images. Specifically, we propose a multitask learning framework, where the auxiliary task is to reconstruct collaborative latent item representations. A series of experiments on datasets from e-commerce and social media demonstrates that considering collaborative signals significantly improves the performance of the main image classification task, by up to 9.1%.
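
The paper's architecture details are not given in the abstract; a minimal PyTorch sketch of the multitask setup it describes, a shared backbone feeding a classification head and an auxiliary head that reconstructs the item's collaborative latent representation, could look as follows. The backbone, dimensions, and loss weighting are illustrative assumptions.

```python
# Minimal sketch of the multitask setup: a shared image backbone feeds both a
# classification head (main task) and an auxiliary head that reconstructs the
# item's collaborative latent representation. Backbone, dimensions, and the
# loss weight are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class CollaborativeMultitaskNet(nn.Module):
    def __init__(self, feat_dim=512, n_classes=20, cf_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())  # stand-in for a CNN
        self.cls_head = nn.Linear(256, n_classes)       # main task: image category
        self.cf_head = nn.Linear(256, cf_dim)           # auxiliary: reconstruct item embedding
    def forward(self, img_feat):
        h = self.backbone(img_feat)
        return self.cls_head(h), self.cf_head(h)

model = CollaborativeMultitaskNet()
img_feat = torch.randn(8, 512)                  # pre-extracted image features (batch of 8)
labels = torch.randint(0, 20, (8,))
cf_targets = torch.randn(8, 64)                 # latent item vectors from a recommender model
logits, cf_pred = model(img_feat)
loss = nn.functional.cross_entropy(logits, labels) + 0.5 * nn.functional.mse_loss(cf_pred, cf_targets)
loss.backward()
```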

With their powerful capability to deal with the graph data widely found in practical applications, graph neural networks (GNNs) have received significant research attention. However, as societies become increasingly concerned with data privacy, GNNs face the need to adapt to this new normal. This has led to the rapid development of federated graph neural network (FedGNN) research in recent years. Although promising, this interdisciplinary field is highly challenging for interested researchers to enter. The lack of an insightful survey on this topic only exacerbates the problem. In this paper, we bridge this gap by offering a comprehensive survey of this emerging field. We propose a unique 3-tiered taxonomy of the FedGNN literature to provide a clear view of how GNNs work in the context of Federated Learning (FL). It puts existing works into perspective by analyzing how graph data manifest themselves in FL settings, how GNN training is performed under different FL system architectures and degrees of graph data overlap across data silos, and how GNN aggregation is performed under various FL settings. Through discussions of the advantages and limitations of existing works, we envision future research directions that can help build more robust, dynamic, efficient, and interpretable FedGNNs.

As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks against robustness and the corresponding defenses; 3) inference attacks against privacy and the corresponding defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
