Users today expect more security from services that handle their data. In addition to traditional data privacy and integrity requirements, they expect transparency, i.e., that the service's processing of the data is verifiable by users and trusted auditors. Our goal is to build a multi-user system that provides data privacy, integrity, and transparency for a large number of operations, while achieving practical performance. To this end, we first identify the limitations of existing approaches that use authenticated data structures. We find that they fall into two categories: 1) those that hide each user's data from other users, but have a limited range of verifiable operations (e.g., CONIKS, Merkle2, and Proofs of Liabilities), and 2) those that support a wide range of verifiable operations, but make all data publicly visible (e.g., IntegriDB and FalconDB). We then present TAP to address the above limitations. The key component of TAP is a novel tree data structure that supports efficient result verification, and relies on independent audits that use zero-knowledge range proofs to show that the tree is constructed correctly without revealing user data. TAP supports a broad range of verifiable operations, including quantiles and sample standard deviations. We conduct a comprehensive evaluation of TAP, and compare it against two state-of-the-art baselines, namely IntegriDB and Merkle2, showing that the system is practical at scale.
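The tree at the core of TAP pairs Merkle-style digests with aggregate values, so a client can check a result such as a sum against a published root. The sketch below is a generic Merkle sum tree for illustration only, not TAP's exact structure, and it omits the zero-knowledge range proofs used in TAP's audits:

```python
import hashlib

def h(*parts):
    """Hash a tuple of byte strings into a hex digest."""
    return hashlib.sha256(b"|".join(parts)).hexdigest()

class Node:
    def __init__(self, digest, total, left=None, right=None):
        self.digest, self.total = digest, total
        self.left, self.right = left, right

def build(values):
    """Build a Merkle sum tree over integer values; each internal node
    commits to both its children's digests and their running total."""
    nodes = [Node(h(str(v).encode()), v) for v in values]
    while len(nodes) > 1:
        if len(nodes) % 2:                      # pad odd levels
            nodes.append(Node(h(b"pad"), 0))
        nodes = [Node(h(a.digest.encode(), b.digest.encode(),
                        str(a.total + b.total).encode()),
                      a.total + b.total, a, b)
                 for a, b in zip(nodes[::2], nodes[1::2])]
    return nodes[0]

root = build([3, 1, 4, 1, 5])
print(root.total)  # 14; an auditor recomputing the root verifies the sum
```

Because the total is folded into every digest, tampering with any leaf value changes the root, which is what makes the aggregate verifiable.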

Authentication, authorization, and trust verification are central parts of an access control system. The conditions for granting access in such a system are collected in access policies. Since access conditions are often complex, dedicated languages -- policy languages -- are used to define policies. However, current policy languages cannot express such conditions with users' privacy in mind. With privacy-preserving technologies, users can prove information to the access system without revealing it. In this work, we present a generic design for supporting privacy-preserving technologies in policy languages. Our design prevents unnecessary disclosure of sensitive information while still allowing the formulation of expressive access control rules. To this end, we make use of non-interactive zero-knowledge proofs (NIZKs). We demonstrate our design by applying it to the TPL policy language, using SNARKs, and we evaluate the resulting ZK-TPL language and its associated toolchain. Our evaluation shows that for regular-sized credentials, the communication and verification overhead is negligible.

With advancements in technology, the threats to the privacy of sensitive data (e.g. location data) are surging. A standard method to mitigate the privacy risks for location data is by adding noise to the true values to achieve geo-indistinguishability. However, we argue that geo-indistinguishability alone is insufficient to cover all privacy concerns. In particular, isolated locations are not protected by the state-of-the-art Laplace mechanism (LAP) for geo-indistinguishability. We focus on a mechanism that is generated by the Blahut-Arimoto algorithm (BA) from rate-distortion theory. We show that BA, in addition to providing geo-indistinguishability, enforces an elastic metric that ameliorates the issue of isolation. We then study the utility of BA in terms of the statistical precision that can be derived from the reported data, focusing on the inference of the original distribution. To this end, we apply the iterative Bayesian update (IBU), an instance of the well-known expectation-maximization method from statistics, which produces the most likely distribution for any obfuscation mechanism. We show that BA offers better statistical utility than LAP in the high-privacy regime and becomes comparable as privacy decreases. Remarkably, we point out that BA and IBU, two seemingly unrelated methods that were developed for completely different purposes, are dual to each other. Exploiting this duality and the privacy-preserving properties of BA, we propose an iterative method, PRIVIC, for a privacy-friendly incremental collection of location data from users by service providers. In addition to extending the privacy guarantees of geo-indistinguishability and retaining better statistical utility than LAP, PRIVIC also provides an optimal trade-off between information leakage and quality of service. We illustrate the soundness and functionality of our method both analytically and with experiments.
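The iterative Bayesian update follows directly from its expectation-maximization formulation: repeatedly redistribute the observed (obfuscated) mass back over the true values in proportion to the current prior and the channel probabilities. The two-location channel below is an invented toy example, not from the paper:

```python
def ibu(channel, observed, iters=200):
    """Iterative Bayesian update: estimate the distribution over true
    values x given an obfuscation channel channel[x][y] = P(y | x) and
    the empirical distribution `observed` over reported values y."""
    n = len(channel)
    prior = [1.0 / n] * n                      # start from uniform
    for _ in range(iters):
        new = [0.0] * n
        for y, q_y in enumerate(observed):
            denom = sum(prior[x] * channel[x][y] for x in range(n))
            if denom == 0:
                continue
            for x in range(n):                 # Bayes rule, reweighted by q_y
                new[x] += q_y * prior[x] * channel[x][y] / denom
        prior = new
    return prior

# Toy 2-location channel with 75% truthful reporting; reports generated
# from a true prior of (0.8, 0.2) yield observed frequencies (0.65, 0.35).
C = [[0.75, 0.25], [0.25, 0.75]]
est = ibu(C, [0.65, 0.35])
print([round(p, 2) for p in est])  # [0.8, 0.2]
```

Each iteration preserves total probability mass, and the true prior is a fixed point whenever the observed distribution is exactly the channel's image of it.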

The success of deep learning is partly attributed to the availability of massive data downloaded freely from the Internet. However, it also means that users' private data may be collected by commercial organizations without consent and used to train their models. Therefore, it is important to develop a method or tool to prevent unauthorized data exploitation. In this paper, we propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners. Specifically, the noise produced by the generator for each image has the confounder property. It builds spurious correlations between images and labels, so that the model cannot learn the correct mapping from images to labels in this noise-added dataset. Meanwhile, the discriminator ensures that the generated noise is small and imperceptible, thereby preserving the normal utility of the encrypted image for humans. The experiments are conducted on six image classification datasets, consisting of three natural object datasets and three medical datasets. The results demonstrate that our method not only outperforms state-of-the-art methods in standard settings, but can also be applied to fast encryption scenarios. Moreover, we present a series of transferability and stability experiments to further illustrate the effectiveness and superiority of our method.

Scheduled batch jobs are widely used on asynchronous computing platforms to execute various enterprise applications, including scheduled notifications and candidate pre-computation for modern recommender systems. It is important to deliver or update information to users at the right time to maintain the user experience and the execution impact. However, it is challenging to provide a versatile execution-time optimization solution for per-user scheduled jobs that satisfies various product scenarios while maintaining reasonable infrastructure resource consumption. In this paper, we describe how we apply a learning-to-rank approach together with a "best time policy" to best-time selection. In addition, we propose an ensemble learner that minimizes the ranking loss by efficiently leveraging multiple streams of user activity signals in our scheduling decisions. In particular, we observe cannibalization across use cases competing for a user's peak time slot and introduce a coordination system to mitigate the problem. Our optimization approach has been successfully tested with production traffic serving billions of users per day, with statistically significant improvements in various product metrics, including notifications and content candidate generation. To the best of our knowledge, our study represents the first ML-based multi-tenant solution to the execution-time optimization problem for scheduled jobs at large industrial scale across different product domains.

Fueled by its successful commercialization, the recommender system (RS) has gained widespread attention. However, as the training data fed into RS models are often highly sensitive, severe privacy concerns arise, especially when data are shared among different platforms. In this paper, we follow the line of existing works and investigate the problem of secure sparse matrix multiplication for cross-platform RSs. Two fundamental yet critical issues are addressed: preserving the training data privacy and breaking the data silo problem. Specifically, we propose two concrete constructions with significantly boosted efficiency, designed for the location-insensitive and location-sensitive sparse cases, respectively. State-of-the-art cryptographic building blocks, including homomorphic encryption (HE) and private information retrieval (PIR), are fused into our protocols with non-trivial optimizations. As a result, our schemes can enjoy HE acceleration techniques without privacy trade-offs. We give formal security proofs for the proposed schemes and conduct extensive experiments on both real and large-scale simulated datasets. Compared with state-of-the-art works, our two schemes reduce the running time by roughly 10x and 2.8x, and attain up to 15x and 2.3x communication reduction without accuracy loss.
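The efficiency of such protocols hinges on touching only the nonzero entries of the matrices rather than their full dense shape. As a plain, non-private baseline that illustrates the sparsity the secure constructions are designed to preserve under encryption (the dict-of-keys representation is our own choice for the sketch):

```python
def sparse_matmul(A, B):
    """Multiply two sparse matrices given as {(i, j): value} dicts,
    doing work proportional to the number of matching nonzeros."""
    # Index B by row so each nonzero A[i, j] joins only row j of B.
    b_rows = {}
    for (j, k), v in B.items():
        b_rows.setdefault(j, []).append((k, v))
    C = {}
    for (i, j), a in A.items():
        for k, b in b_rows.get(j, []):
            C[(i, k)] = C.get((i, k), 0) + a * b
    return C

A = {(0, 1): 2, (1, 0): 3}          # two nonzeros in a 2x2 matrix
B = {(0, 0): 4, (1, 0): 5}
print(sparse_matmul(A, B))          # {(0, 0): 10, (1, 0): 12}
```

A secure variant must additionally hide which coordinates are nonzero (the "location sensitive" case in the paper), which is where PIR and HE come in.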

Derived from spiking neuron models via the diffusion approximation, the moment activation (MA) faithfully captures the nonlinear coupling of correlated neural variability. However, numerical evaluation of the MA faces significant challenges due to a number of ill-conditioned Dawson-like functions. By deriving asymptotic expansions of these functions, we develop an efficient numerical algorithm for evaluating the MA and its derivatives, ensuring reliability, speed, and accuracy. We also provide exact analytical expressions for the MA in the weak fluctuation limit. Powered by this efficient algorithm, the MA may serve as an effective tool for investigating the dynamics of correlated neural variability in large-scale spiking neural circuits.
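The classical Dawson function itself illustrates the series-switching idea: a Taylor series that suffers cancellation at large arguments is paired with an asymptotic expansion that takes over there. The sketch below is a generic illustration with an arbitrarily chosen crossover, not the paper's algorithm; production code would use a tighter approximation in the midrange:

```python
def dawson(x):
    """Dawson function F(x) = exp(-x^2) * integral_0^x exp(t^2) dt,
    via a Taylor series for small |x| and an asymptotic expansion for
    large |x|."""
    if x < 0:
        return -dawson(-x)                     # F is odd
    if x < 4.0:
        # Taylor series: F(x) = sum_n (-2)^n x^(2n+1) / (2n+1)!!
        term = total = x
        for n in range(1, 100):
            term *= -2.0 * x * x / (2 * n + 1)
            total += term
        return total
    # Asymptotic series F(x) ~ sum_k (2k-1)!! / (2^(k+1) x^(2k+1));
    # it diverges eventually, so truncate at its smallest term.
    total, term, k = 0.0, 1.0 / (2.0 * x), 0
    while True:
        total += term
        nxt = term * (2 * k + 1) / (2.0 * x * x)
        if nxt >= term or nxt < 1e-18:
            break
        term, k = nxt, k + 1
    return total

print(round(dawson(1.0), 6))  # 0.53808
```

Stopping the asymptotic series at its smallest term is the standard rule for divergent expansions and is what keeps the large-argument branch well conditioned.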

This paper introduces SPOT, a Secure and Privacy-preserving prOximity based protocol for e-healthcare systems. It relies on a distributed proxy-based approach to preserve users' privacy and a semi-trusted computing server to ensure data consistency and integrity. The proposed protocol ensures a balance between security, privacy, and scalability. To the best of our knowledge, SPOT is the first protocol to prevent malicious users from colluding to generate false positives. In terms of privacy, SPOT supports both anonymity for users in proximity to infected people and unlinkability of contact information issued by the same user. A concrete construction based on structure-preserving signatures and NIWI proofs is proposed, and a detailed security and privacy analysis proves that SPOT is secure under standard assumptions. In terms of scalability, SPOT's procedures and algorithms are implemented to show its efficiency and practical usability, with acceptable computation and communication overhead.

IoT is a fast-growing technology with a wide range of applications in various domains. IoT devices generate data from a real-world environment every second and transfer it to the cloud because storage at the edge is limited. Outsourcing storage to the cloud addresses this problem, but storing data in the cloud can expose users' privacy. Therefore, we propose a Private Data Storage model that stores IoT data in the outsourced cloud with privacy preservation. Fog nodes at the edge partition and encrypt the data. The partitioned and encrypted data is then aggregated on the outsourced cloud with the help of homomorphic encryption. The model also supports secure query processing and data access on the outsourced cloud.
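Additively homomorphic encryption is what lets the cloud aggregate encrypted partitions without seeing any reading. A minimal sketch using the Paillier cryptosystem, where multiplying ciphertexts adds the underlying plaintexts; the key sizes are toy values for illustration, and a real deployment would use 2048-bit primes and a vetted library:

```python
import math, random

# Toy Paillier keypair (illustrative sizes only).
p, q = 101, 113
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # valid because we fix g = n + 1

def encrypt(m):
    """Encrypt m < n with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Fog nodes encrypt partitioned readings; the cloud multiplies the
# ciphertexts, which adds the plaintexts without ever decrypting.
readings = [17, 25, 8]
aggregate = 1
for m in readings:
    aggregate = (aggregate * encrypt(m)) % n2
print(decrypt(aggregate))  # 50
```

Only the holder of the private key (lam, mu) can recover the total, so the cloud performs the aggregation blindly.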

Differentially private (DP) query and response mechanisms have been widely adopted in Internet of Things (IoT) applications to leverage a variety of benefits through data analysis. Sensitive information is protected by adding noise to the query response, which hides individual records in a dataset. However, the noise addition negatively impacts accuracy, giving rise to a privacy-utility trade-off. Moreover, the DP budget or cost $\epsilon$ is often fixed, and it accumulates under sequential composition, which limits the number of queries. Therefore, in this paper, we propose an optimized privacy-utility trade-off framework for data sharing in IoT (OPU-TF-IoT). Firstly, OPU-TF-IoT uses an adaptive approach to spend the DP budget $\epsilon$ by considering the population or dataset size along with the query. Secondly, our heuristic search algorithm reduces the DP budget accordingly while satisfying both the data owner and the data user. Thirdly, to make the utilization of the DP budget transparent to data owners, a blockchain-based verification mechanism is also proposed. Finally, the proposed framework is evaluated using real-world datasets and compared with the traditional DP model and other related state-of-the-art works. The results confirm that our framework not only utilizes the DP budget $\epsilon$ efficiently but also optimizes the number of queries. Furthermore, data owners can verify through the blockchain-based mechanism that their data is shared as agreed, which encourages them to contribute their data to the IoT system.
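The budget accumulation the paper targets comes from sequential composition: each Laplace-noised answer spends part of a fixed total $\epsilon$. A minimal sketch of that bookkeeping (the class and its interface are our own invention, not the paper's adaptive OPU-TF-IoT allocation):

```python
import random

class BudgetedLaplace:
    """Laplace mechanism with a sequential-composition accountant:
    each query spends part of a fixed total budget, and queries are
    refused once the budget would be exceeded."""
    def __init__(self, eps_total):
        self.remaining = eps_total

    def query(self, true_value, sensitivity, eps):
        if eps > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= eps                 # sequential composition
        scale = sensitivity / eps
        # Laplace(0, scale) noise as a difference of two exponentials.
        noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
        return true_value + noise

acc = BudgetedLaplace(eps_total=1.0)
noisy = acc.query(100.0, sensitivity=1.0, eps=0.5)   # spends half the budget
```

Refusing queries once the budget runs out is what caps the total privacy loss; the paper's contribution is spending that budget adaptively rather than in fixed slices.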

The integration of permissioned blockchains such as Hyperledger Fabric (HF) and the Industrial Internet of Things (IIoT) has opened new opportunities for interdependent supply chain partners to improve their performance through data sharing and coordination. HF's multichannel mechanism, private data collections, and querying mechanism enable private data sharing, transparency, traceability, and verification across the supply chain. However, HF's existing querying mechanism needs further improvement for statistical data sharing because queries are evaluated on the original data recorded on the ledger. This gives rise to privacy issues such as leakage of business secrets, tracking of resources and assets, and disclosure of personal information. Therefore, we solve this problem by proposing a differentially private enhanced permissioned blockchain for private data sharing in the context of the supply chain in IIoT, known as EDH-IIoT. We propose an algorithm to efficiently utilize $\epsilon$ by reusing the privacy budget for repeated queries. Furthermore, the reuse and tracking of $\epsilon$ enable the data owner to ensure that $\epsilon$ does not exceed the threshold, i.e., the maximum privacy budget ($\epsilon_{t}$). Finally, we model two privacy attacks, namely the linking attack and the composition attack, to evaluate and compare privacy preservation and the efficiency of reusing $\epsilon$ against the default chaincode of HF and the traditional differential privacy model, respectively. The results confirm that EDH-IIoT obtains an accuracy of 97% in the shared data for $\epsilon_{t} = 1$, and a reduction of 35.96% in the spending of $\epsilon$.
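The core of budget reuse is that a repeated query can be answered from a cached noisy response, spending no additional $\epsilon$ under sequential composition. A minimal sketch of that idea (the class and interface below are hypothetical, not the EDH-IIoT chaincode):

```python
class ReuseAccountant:
    """Answer repeated queries from a cache so the privacy budget is
    spent only once per distinct query, and cap total spending at a
    maximum budget eps_max."""
    def __init__(self, eps_max):
        self.eps_max, self.spent, self.cache = eps_max, 0.0, {}

    def answer(self, query_key, mechanism, eps):
        if query_key in self.cache:          # reuse: no extra budget spent
            return self.cache[query_key]
        if self.spent + eps > self.eps_max:
            raise RuntimeError("would exceed the maximum privacy budget")
        self.spent += eps
        self.cache[query_key] = mechanism()  # mechanism adds the DP noise
        return self.cache[query_key]

acc = ReuseAccountant(eps_max=1.0)
a = acc.answer("avg_temp", lambda: 21.7, eps=0.4)   # spends 0.4
b = acc.answer("avg_temp", lambda: 21.7, eps=0.4)   # cached, spends nothing
print(acc.spent)  # 0.4
```

Returning the identical cached response is also what blunts averaging attacks on repeated queries, since the adversary never sees fresh noise for the same question.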
