Computational differential privacy (CDP) is a natural relaxation of the standard notion of (statistical) differential privacy (SDP) proposed by Beimel, Nissim, and Omri (CRYPTO 2008) and Mironov, Pandey, Reingold, and Vadhan (CRYPTO 2009). In contrast to SDP, CDP only requires privacy guarantees to hold against computationally-bounded adversaries rather than computationally-unbounded statistical adversaries. Despite the question being raised explicitly in several works (e.g., Bun, Chen, and Vadhan, TCC 2016), it has remained tantalizingly open whether there is any task achievable with the CDP notion but not the SDP notion; even a candidate for such a task has been unknown. Indeed, it is even unclear what the truth could be! In this work, we give the first construction of a task achievable with the CDP notion but not the SDP notion. More specifically, under strong but plausible cryptographic assumptions, we construct a task for which there exists an $\varepsilon$-CDP mechanism with $\varepsilon = O(1)$ achieving $1-o(1)$ utility, but any $(\varepsilon, \delta)$-SDP mechanism, including computationally unbounded ones, that achieves a constant utility must use either a super-constant $\varepsilon$ or a non-negligible $\delta$. To prove this, we introduce a new approach for showing that a mechanism satisfies CDP: first we show that a mechanism is "private" against a certain class of decision tree adversaries, and then we use cryptographic constructions to "lift" this into privacy against computational adversaries. We believe this approach could be useful to devise further tasks separating CDP from SDP.
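
For reference, the two notions being separated admit the following standard formulations (the notation below, with mechanism $\mathcal{M}$ and security parameter $\kappa$, is ours and is only a sketch of the usual definitions, not necessarily the exact formulation used in the paper). A mechanism $\mathcal{M}$ is $(\varepsilon, \delta)$-SDP if for all neighboring datasets $D \sim D'$ and all output sets $S$,
$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[\mathcal{M}(D') \in S] + \delta,$$
whereas $\varepsilon$-CDP (in the indistinguishability style of Mironov, Pandey, Reingold, and Vadhan) only requires that for all neighboring $D \sim D'$ and all probabilistic polynomial-time distinguishers $A$,
$$\Pr[A(\mathcal{M}(D)) = 1] \;\le\; e^{\varepsilon}\, \Pr[A(\mathcal{M}(D')) = 1] + \mathrm{negl}(\kappa).$$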

Related Content

Federated Learning (FL) is a setting for training machine learning models in distributed environments where the clients do not share their raw data but instead send model updates to a server. However, model updates can be subject to attacks and leak private information. Differential Privacy (DP) is a leading mitigation strategy which involves adding noise to clipped model updates, trading off performance for strong theoretical privacy guarantees. Previous work has shown that the threat model of DP is conservative and that the obtained guarantees may be vacuous or may overestimate information leakage in practice. In this paper, we aim to achieve a tighter measurement of the model exposure by considering a realistic threat model. We propose a novel method, CANIFE, that uses canaries, carefully crafted samples designed by a strong adversary, to evaluate the empirical privacy of a training round. We apply this attack to vision models trained on CIFAR-10 and CelebA and to language models trained on Sent140 and Shakespeare. In particular, in realistic FL scenarios, we demonstrate that the empirical per-round epsilon obtained with CANIFE is 4-5x lower than the theoretical bound.
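
To make the audited mechanism concrete, the per-round defense in this setting is typically the clip-and-noise Gaussian mechanism applied to each client's model update. The sketch below shows that standard mechanism, whose empirical per-round epsilon a canary attack then estimates; the function and parameter names are ours, and this is not the CANIFE attack itself.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's flattened model update and add isotropic Gaussian noise.

    This is the standard per-round clip-and-noise mechanism whose empirical
    privacy canary-based audits try to measure; names and defaults here are
    illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Toy usage on a single client's flattened update vector.
update = np.random.default_rng(0).standard_normal(1000)
noisy_update = privatize_update(update, clip_norm=1.0, noise_multiplier=1.1)
```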

Language support for differentially-private programming is both crucial and delicate. While elaborate program logics can be very expressive, type-system based approaches using linear types tend to be more lightweight and amenable to automatic checking and inference, in particular in the presence of higher-order programming. Since the seminal design of Fuzz, which is restricted to $\epsilon$-differential privacy in its original design, significant progress has been made to support more advanced variants of differential privacy, like $(\epsilon, \delta)$-differential privacy. However, supporting these advanced privacy variants while also supporting higher-order programming in full has proven to be challenging. We present Jazz, a language and type system which uses linear types and latent contextual effects to support both advanced variants of differential privacy and higher-order programming. Latent contextual effects allow delaying the payment of effects for connectives such as products, sums and functions, yielding advantages in terms of precision of the analysis and annotation burden upon elimination, as well as modularity. We formalize the core of Jazz, prove it sound for privacy via a logical relation for metric preservation, and illustrate its expressive power through a number of case studies drawn from the recent differential privacy literature.

Optimization algorithms such as projected Newton's method, FISTA, mirror descent, and their variants enjoy near-optimal regret bounds and convergence rates, but suffer from a computational bottleneck of computing ``projections'' in potentially each iteration (e.g., $O(T^{1/2})$ regret of online mirror descent). On the other hand, conditional gradient variants solve a linear optimization in each iteration, but result in suboptimal rates (e.g., $O(T^{3/4})$ regret of online Frank-Wolfe). Motivated by this trade-off between runtime and convergence rate, we consider iterative projections of close-by points over widely prevalent submodular base polytopes $B(f)$. We first give necessary and sufficient conditions for when two close points project to the same face of a polytope, and then show that points far away from the polytope project onto its vertices with high probability. We next use this theory and develop a toolkit to speed up the computation of iterative projections over submodular polytopes using both discrete and continuous perspectives. We subsequently adapt the away-step Frank-Wolfe algorithm to use this information and enable early termination. For the special case of cardinality-based submodular polytopes, we improve the runtime of computing certain Bregman projections by a factor of $\Omega(n/\log(n))$. Our theoretical results show orders of magnitude reduction in runtime in preliminary computational experiments.
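
In the standard notation (assuming a submodular function $f : 2^V \to \mathbb{R}$ with $f(\emptyset) = 0$ on a ground set $V$ with $|V| = n$), the base polytope and the Bregman projection onto it are
$$B(f) \;=\; \bigl\{\, x \in \mathbb{R}^n : x(S) \le f(S)\ \ \forall S \subseteq V,\ \ x(V) = f(V) \,\bigr\}, \qquad x(S) := \textstyle\sum_{i \in S} x_i,$$
$$\Pi_{\phi}(y) \;=\; \arg\min_{x \in B(f)} D_{\phi}(x, y), \qquad D_{\phi}(x, y) \;=\; \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle,$$
where $\phi$ is the mirror map; these are the iterative projections whose computation is sped up above.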

Computation of a tensor singular value decomposition (t-SVD) with a few passes over the underlying data tensor is crucial on modern computer architectures, where the main concern is communication cost. The current randomized subspace algorithms for computing the t-SVD need $2q + 2$ passes over the data tensor, where $q$ is a non-negative integer (the power iteration parameter). In this paper, we propose an efficient and flexible randomized algorithm that works for any number of passes $q$, not necessarily an even number. The flexibility of the proposed algorithm in using fewer passes naturally leads to lower computational and communication costs. This benefit makes it applicable especially when the data tensors are large or when multiple tensor decompositions are required in a given task. The proposed algorithm generalizes methods developed for matrices to tensors. The expected/average error bound of the proposed algorithm is derived. Several numerical experiments on random and real-world datasets are conducted, and the proposed algorithm is compared with baseline algorithms. The results confirm that the proposed algorithm is efficient, applicable, and can provide better performance than existing algorithms. We also use the proposed method to develop a fast algorithm for the tensor completion problem.
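
For intuition, the matrix analogue that the tensor method generalizes is the classic randomized subspace approach, in which each power iteration adds two passes over the data, giving the $2q + 2$-pass structure mentioned above. The sketch below shows that matrix version; names and defaults are illustrative and this is not the paper's tensor algorithm.

```python
import numpy as np

def randomized_svd(A, rank, q=1, oversample=10, rng=None):
    """Classic 2q+2-pass randomized SVD of a matrix (sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega                      # 1 pass over A
    for _ in range(q):                 # each power iteration costs 2 passes
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A                        # 1 final pass over A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank, :]

# Toy usage on a low-rank matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))
U, s, Vt = randomized_svd(A, rank=20, q=1)
```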

We adopt an information-theoretic framework to analyze the generalization behavior of the class of iterative, noisy learning algorithms. This class is particularly suitable for study under information-theoretic metrics as the algorithms are inherently randomized, and it includes commonly used algorithms such as Stochastic Gradient Langevin Dynamics (SGLD). Herein, we use the maximal leakage (equivalently, the Sibson mutual information of order infinity) metric, as it is simple to analyze, and it implies both bounds on the probability of having a large generalization error and on its expected value. We show that, if the update function (e.g., gradient) is bounded in $L_2$-norm, then adding isotropic Gaussian noise leads to optimal generalization bounds: indeed, the input and output of the learning algorithm in this case are asymptotically statistically independent. Furthermore, we demonstrate how the assumptions on the update function affect the optimal (in the sense of minimizing the induced maximal leakage) choice of the noise. Finally, we compute explicit tight upper bounds on the induced maximal leakage for several scenarios of interest.
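
In the usual notation (our symbols, not necessarily those of the paper), the maximal leakage from the training data $X$ to the algorithm output $Y$, and the generic noisy iterative update considered here, can be written as
$$\mathcal{L}(X \to Y) \;=\; \log \sum_{y} \max_{x :\, P_X(x) > 0} P_{Y \mid X}(y \mid x),$$
$$W_{t+1} \;=\; W_t - \eta_t\, F(W_t, Z) + \xi_t, \qquad \xi_t \sim \mathcal{N}(0, \sigma_t^2 I), \qquad \sup \|F\|_2 \le L,$$
where $Z$ is the training sample and the bound on $F$ is the $L_2$-norm assumption referred to above.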

This paper considers distributed optimization (DO) where multiple agents cooperate to minimize a global objective function, expressed as a sum of local objectives, subject to some constraints. In DO, each agent iteratively solves a local optimization model constructed from its own data and communicates some information (e.g., a local solution) with its neighbors until a global solution is obtained. Even though locally stored data are not shared with other agents, it is still possible to reconstruct the data from the information communicated among agents, which could limit the practical usage of DO in applications with sensitive data. To address this issue, we propose a privacy-preserving DO algorithm for constrained convex optimization models, which provides a statistical guarantee of data privacy, known as differential privacy, and a sequence of iterates that converges to an optimal solution in expectation. The proposed algorithm generalizes a linearized alternating direction method of multipliers by introducing a multiple local updates technique to reduce communication costs and incorporating an objective perturbation method in the local optimization models to compute and communicate randomized feasible local solutions that cannot be utilized to reconstruct the local data, thus preserving data privacy. In the presence of convex constraints, we show that, while both approaches provide the same level of data privacy, the objective perturbation used in the proposed algorithm can provide better solutions than the widely adopted output perturbation method, which randomizes the local solutions by adding noise. We present the details of privacy and convergence analyses and numerically demonstrate the effectiveness of the proposed algorithm by applying it in two different applications, namely, distributed control of power flow and federated learning, where data privacy is of concern.
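
As a toy illustration of the difference between the two perturbation styles discussed above (this is not the paper's linearized ADMM algorithm; the problem data, names, and noise scale below are ours), consider a box-constrained local least-squares subproblem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local subproblem: minimize 0.5*||A x - b||^2 over the box [-1, 1]^n.
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)

def solve_local(linear_perturbation):
    """Projected gradient descent on the (possibly perturbed) local objective."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(500):
        grad = A.T @ (A @ x - b) + linear_perturbation
        x = np.clip(x - step * grad, -1.0, 1.0)   # stay feasible
    return x

noise_scale = 0.5  # would be calibrated to the desired privacy level

# Output perturbation: solve exactly, then randomize the solution
# (the communicated point may leave the feasible region).
x_out = solve_local(np.zeros(5)) + rng.normal(0.0, noise_scale, 5)

# Objective perturbation: add a random linear term to the objective and
# solve; the communicated point is feasible by construction.
x_obj = solve_local(rng.normal(0.0, noise_scale, 5))
```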

The power prior is a popular class of informative priors for incorporating information from historical data. It involves raising the likelihood for the historical data to a power, which acts as a discounting parameter. When the discounting parameter is modelled as random, the normalized power prior is recommended. In this work, we prove that the marginal posterior for the discounting parameter for generalized linear models converges to a point mass at zero if there is any discrepancy between the historical and current data, and that it does not converge to a point mass at one when they are fully compatible. In addition, we explore the construction of optimal priors for the discounting parameter in a normalized power prior. In particular, we are interested in achieving the dual objectives of encouraging borrowing when the historical and current data are compatible and limiting borrowing when they are in conflict. We propose intuitive procedures for eliciting the shape parameters of a beta prior for the discounting parameter based on two minimization criteria, the Kullback-Leibler divergence and the mean squared error. Based on the proposed criteria, the optimal priors derived are often quite different from commonly used priors such as the uniform prior.
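
In the standard notation (historical data $D_0$, likelihood $L$, initial prior $\pi_0$, and a prior $\pi(a_0)$ on the discounting parameter, e.g. a $\mathrm{Beta}(\alpha, \beta)$ density), the power prior and its normalized version are
$$\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\, \pi_0(\theta), \qquad a_0 \in [0, 1],$$
$$\pi(\theta, a_0 \mid D_0) \;=\; \frac{L(\theta \mid D_0)^{a_0}\, \pi_0(\theta)}{\int L(\vartheta \mid D_0)^{a_0}\, \pi_0(\vartheta)\, d\vartheta}\; \pi(a_0),$$
so that $a_0 = 0$ discards the historical data and $a_0 = 1$ pools it fully with the current data.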

The kidney exchange problem (KEP) seeks to find possible exchanges among pairs of patients and their incompatible kidney donors while meeting specific optimization criteria such as maximizing the overall number of possible transplants. In practice, patient-donor pairs register with so-called kidney exchange platforms which determine exchange cycles in a centralized fashion. Such a centralized approach raises numerous security concerns. Thus, several privacy-preserving protocols for solving the KEP have been proposed recently. However, the protocols known to date lack scalability in practice since the KEP is an NP-complete problem. We address this issue by proposing a novel privacy-preserving protocol which computes an approximate solution to the KEP that scales well for the large numbers of patient-donor pairs encountered in practice. In contrast to the only other existing protocol that computes an approximate solution to the KEP, our protocol is entirely data oblivious and it exhibits a far superior run time performance without suffering a loss in the quality of the approximation. As a second contribution, we simulate the application of our novel protocol as part of a kidney exchange platform, where patient-donor pairs register and de-register over time and exchanges are determined on a regular basis. In this dynamic setting, the application of our novel privacy-preserving approximation protocol yields a larger number of transplants over time than using the best known privacy-preserving protocol for solving the KEP. Our simulation further shows that the difference between the number of transplants found when using our novel protocol in this dynamic setting compared to the non-privacy-preserving state-of-the-art approach is negligible in practice.
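
For readers unfamiliar with the underlying optimization problem, the sketch below is a deliberately simple, non-private greedy baseline that picks short exchange cycles in the patient-donor compatibility graph. It only illustrates what an approximate KEP solution looks like and makes no attempt at the secure-computation machinery of the protocol described above; all names are ours.

```python
import itertools

def greedy_cycle_cover(compatible, max_cycle_len=3):
    """Greedily select vertex-disjoint exchange cycles of length <= max_cycle_len.

    `compatible[i]` is the set of pairs whose patient can receive a kidney
    from pair i's donor. Non-private toy baseline, exponential in the worst
    case; real platforms use integer programming or approximation algorithms.
    """
    remaining = set(compatible)
    chosen_cycles = []
    for length in range(2, max_cycle_len + 1):
        for cycle in itertools.permutations(sorted(remaining), length):
            if not all(v in remaining for v in cycle):
                continue  # a pair was already matched in an earlier cycle
            if all(cycle[(k + 1) % length] in compatible[cycle[k]]
                   for k in range(length)):
                chosen_cycles.append(cycle)
                remaining -= set(cycle)
    return chosen_cycles

# Toy instance: pairs 0 and 1 form a 2-cycle, pairs 2, 3, 4 form a 3-cycle.
compat = {0: {1}, 1: {0}, 2: {3}, 3: {4}, 4: {2}}
print(greedy_cycle_cover(compat))   # [(0, 1), (2, 3, 4)]
```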

Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection through a coordinating server. In this paper, we propose a new active membership inference (AMI) attack carried out by a dishonest server in FL. In AMI attacks, the server crafts and embeds malicious parameters into global models to effectively infer whether a target data sample is included in a client's private training data or not. By exploiting the correlation among data features through a non-linear decision boundary, AMI attacks with a certified guarantee of success can achieve severely high success rates under rigorous local differential privacy (LDP) protection; thereby exposing clients' training data to significant privacy risk. Theoretical and experimental results on several benchmark datasets show that adding sufficient privacy-preserving noise to prevent our attack would significantly damage FL's model utility.

A novel definition for data privacy in quantum computing based on quantum hypothesis testing is presented in this paper. The parameters in this privacy notion possess an operational interpretation based on the success or failure of an omnipotent adversary in distinguishing the private categories to which the data belongs using arbitrary measurements on quantum states. Important properties of post-processing and composition are then proved for the new notion of privacy. The relationship between privacy against hypothesis-testing adversaries, defined in this paper, and quantum differential privacy is then examined. It is shown that these definitions are intertwined in some parameter regimes. This enables us to provide an interpretation for the privacy budget in quantum differential privacy based on its relationship with privacy against hypothesis-testing adversaries.
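
For context, the classical counterpart of this relationship is the well-known hypothesis-testing characterization of $(\varepsilon, \delta)$-differential privacy, stated here only for orientation (the paper develops the quantum analogue): for any test that tries to decide whether the input was a dataset $D$ or a neighboring $D'$, the type I and type II errors $\alpha, \beta$ must satisfy
$$\alpha + e^{\varepsilon} \beta \;\ge\; 1 - \delta \qquad \text{and} \qquad e^{\varepsilon} \alpha + \beta \;\ge\; 1 - \delta,$$
so a small privacy budget forces any distinguishing test to have nearly trivial power.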
