
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries, but such adversaries are rarely encountered in practice. Under the hypothesis testing/membership inference interpretation of DP, we examine the Gaussian mechanism and relax the usual assumption of a Neyman-Pearson-Optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT) adversary. This mild relaxation leads to improved privacy guarantees, which we express in the spirit of Gaussian DP and $(\varepsilon, \delta)$-DP, including composition and sub-sampling results. We evaluate our results numerically and find that they match the theoretical upper bounds.
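
For concreteness, here is a minimal NumPy sketch of the Gaussian mechanism in the hypothesis-testing view; the function name, parameters, and the example calibration are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_mechanism(query_value, l2_sensitivity, noise_multiplier, rng=None):
    """Release query_value plus Gaussian noise with std = noise_multiplier * l2_sensitivity.

    Distinguishing two neighbouring inputs from the output is then a Gaussian
    mean-shift hypothesis test; for a Neyman-Pearson-optimal adversary this is
    mu-GDP with mu = 1 / noise_multiplier, and the GLRT relaxation discussed
    above weakens the adversary (hence tightens the guarantee).
    """
    rng = np.random.default_rng() if rng is None else rng
    std = noise_multiplier * l2_sensitivity
    noise = rng.normal(loc=0.0, scale=std, size=np.shape(query_value))
    return np.asarray(query_value, dtype=float) + noise

# Example: privately release a sum of records clipped to L2 norm <= 1,
# so adding or removing one record changes the sum by at most 1 in L2.
records = np.random.randn(1000, 5)
records /= np.maximum(1.0, np.linalg.norm(records, axis=1, keepdims=True))
noisy_sum = gaussian_mechanism(records.sum(axis=0), l2_sensitivity=1.0, noise_multiplier=1.0)
```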

Related Content

This note complements the upcoming paper "One-Way Ticket to Las Vegas and the Quantum Adversary" by Belovs and Yolcu, to be presented at QIP 2023. I develop the ideas behind the adversary bound / universal algorithm duality therein in a different form, using the same perspective as Barnum-Saks-Szegedy, in which query algorithms are defined as sequences of feasible reduced density matrices rather than sequences of unitaries. This form may be easier to follow for a general quantum information audience: it avoids defining the "unidirectional relative $\gamma_{2}$-bound" and relating it to query algorithms explicitly. The proof is also more general, because the lower bound (and the universal query algorithm) applies to a class of optimal control problems rather than just query problems. That is in addition to the advantages to be discussed in Belovs-Yolcu, namely a more elementary algorithm and correctness proof that avoids phase estimation and spectral analysis, allows for a limited treatment of noise, and removes another $\Theta(\log(1/\epsilon))$ factor from the runtime compared to the previous discrete-time algorithm.

Deep neural networks have strong capabilities for memorizing the underlying training data, which can be a serious privacy concern. An effective solution to this problem is to train models with differential privacy (DP), which provides rigorous privacy guarantees by injecting random noise into the gradients. This paper focuses on the scenario where sensitive data are distributed among multiple participants, who jointly train a model through federated learning (FL), using both secure multiparty computation (MPC) to ensure the confidentiality of each gradient update, and differential privacy to avoid data leakage in the resulting model. A major challenge in this setting is that common mechanisms for enforcing DP in deep learning, which inject real-valued noise, are fundamentally incompatible with MPC, which exchanges finite-field integers among the participants. Consequently, most existing DP mechanisms require rather high noise levels, leading to poor model utility. Motivated by this, we propose the Skellam mixture mechanism (SMM), an approach for enforcing DP on models built via FL. Compared to existing methods, SMM eliminates the assumption that the input gradients must be integer-valued and thus reduces the amount of noise injected to preserve DP. Further, SMM allows tight privacy accounting due to the nice composition and sub-sampling properties of the Skellam distribution, which are key to accurate deep learning with DP. The theoretical analysis of SMM is highly non-trivial, especially considering (i) the complicated mathematics of differentially private deep learning in general and (ii) the fact that the mixture of two Skellam distributions is rather complex and, to our knowledge, has not been studied in the DP literature. Extensive experiments in various practical settings demonstrate that SMM consistently and significantly outperforms existing solutions in terms of the utility of the resulting model.
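
As a rough illustration of the ingredients involved (not the exact SMM pipeline), the sketch below samples Skellam noise as the difference of two Poisson draws and applies it to a clipped, quantized gradient before reduction into the MPC field; all names and parameters are illustrative.

```python
import numpy as np

def skellam_noise(mu, size, rng=None):
    """Skellam(mu, mu) noise: the difference of two independent Poisson(mu) draws.

    The noise is integer-valued with mean 0 and variance 2 * mu, which is what
    makes it compatible with finite-field secure aggregation.
    """
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(mu, size) - rng.poisson(mu, size)

def noisy_quantized_update(gradient, clip_norm, scale, mu, field_size, rng=None):
    """Clip, scale and round a local gradient, add Skellam noise, and reduce
    modulo the MPC field (an illustrative pipeline, not the exact SMM)."""
    g = np.asarray(gradient, dtype=float)
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # L2 clipping
    q = np.rint(scale * g).astype(np.int64)                    # quantization
    return (q + skellam_noise(mu, q.shape, rng)) % field_size
```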

We construct a universally Bayes consistent learning rule that satisfies differential privacy (DP). We first handle the setting of binary classification and then extend our rule to the more general setting of density estimation (with respect to the total variation metric). The existence of a universally consistent DP learner reveals a stark difference with the distribution-free PAC model. Indeed, in the latter DP learning is extremely limited: even one-dimensional linear classifiers are not privately learnable in this stringent model. Our result thus demonstrates that by allowing the learning rate to depend on the target distribution, one can circumvent the above-mentioned impossibility result and in fact, learn \emph{arbitrary} distributions by a single DP algorithm. As an application, we prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal \emph{labeled} sample complexity of $\tilde{O}(d/\varepsilon)$ labeled examples (and with an unlabeled sample complexity that can depend on the target distribution).

This paper describes an algorithm (thus far referred to as the "Dragonfly Algorithm") by which the subset-sum problem can be solved in $O(n^{11}\log(n))$ time. The paper first details the generalized "product-derivative" method (together with the more efficient variant of this method that is used in the algorithm), by which a pair of monic polynomials can be used to generate a system of unique monic polynomials, each of which shares with every other a set of roots equal to the intersection of the roots of the original pair. This method is then applied to a pair of polynomials: one, $\phi(x)$, has known roots determined by the instance of the subset-sum problem, while the other, $t(x)$, contains unknown placeholder coefficients and represents an unknown subset of the linear factors of $\phi(x)$.
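
The abstract does not spell out the construction, but the relationship it relies on, that a monic divisor $t(x)$ built from a subset of the linear factors of $\phi(x)$ encodes that subset, can be illustrated on a toy instance. The brute-force enumeration below is purely illustrative (and exponential); it is not the proposed algorithm, and the choice of roots is an assumption made for the example.

```python
from itertools import combinations
import sympy as sp

x = sp.symbols('x')

def poly_from_roots(roots):
    """Monic polynomial whose roots are exactly `roots` (with multiplicity)."""
    p = sp.Integer(1)
    for s in roots:
        p *= (x - s)
    return sp.expand(p)

# Toy subset-sum instance (illustrative only): S = {3, 5, 8, 12}, target T = 16.
S, T = [3, 5, 8, 12], 16
phi = poly_from_roots(S)   # phi(x) has known roots determined by the instance

# By Vieta's formulas, a monic divisor t(x) of degree d built from a subset of
# the linear factors of phi(x) has -(coefficient of x^(d-1)) equal to the sum
# of the roots in that subset.
for r in range(1, len(S) + 1):
    for subset in combinations(S, r):
        t = poly_from_roots(subset)
        if -t.coeff(x, r - 1) == T:
            print("subset", subset, "sums to", T, "with t(x) =", t)
```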

In this paper, we propose a new approach to the time-discretization of the incompressible stochastic Stokes equations with multiplicative noise. Our strategy is based on the classical Milstein method from stochastic differential equations. We use the energy method for its error analysis and show a strong convergence order of at most $1$ for both the velocity and pressure approximations. The proof is based on a new H\"older continuity estimate of the velocity solution. While the errors of the velocity approximation are estimated in the standard $L^2$- and $H^1$-norms, the pressure errors are carefully analyzed in a special norm because of the low regularity of the pressure solution. In addition, we introduce a new interpretation of the pressure solution, which is very useful in computation. Numerical experiments are provided to validate the error estimates and their sharpness.
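
For readers unfamiliar with the starting point, the classical Milstein scheme that the new time-discretization builds on can be sketched for a scalar SDE; this is only the textbook scheme, not the paper's discretization of the stochastic Stokes system, and all names are illustrative.

```python
import numpy as np

def milstein_path(a, b, db, x0, t_final, n_steps, rng=None):
    """Classical Milstein scheme for the scalar SDE dX = a(X) dt + b(X) dW:

        X_{n+1} = X_n + a(X_n) dt + b(X_n) dW
                  + 0.5 * b(X_n) * b'(X_n) * (dW**2 - dt),

    strongly convergent of order 1 (versus order 1/2 for Euler-Maruyama).
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = t_final / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[n + 1] = (x[n] + a(x[n]) * dt + b(x[n]) * dw
                    + 0.5 * b(x[n]) * db(x[n]) * (dw ** 2 - dt))
    return x

# Example: geometric Brownian motion dX = 0.05 X dt + 0.2 X dW on [0, 1].
path = milstein_path(a=lambda x: 0.05 * x, b=lambda x: 0.2 * x,
                     db=lambda x: 0.2, x0=1.0, t_final=1.0, n_steps=1000)
```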

Firms and statistical agencies must protect the privacy of the individuals whose data they collect, analyze, and publish. Increasingly, these organizations do so by using publication mechanisms that satisfy differential privacy. We consider the problem of choosing such a mechanism so as to maximize the value of its output to end users. We show that this is a constrained information design problem, and characterize its solution. When the underlying database is drawn from a symmetric distribution -- for instance, if individuals' data are i.i.d. -- we show that the problem's dimensionality can be reduced, and that its solution belongs to a simpler class of mechanisms. When, in addition, data users have supermodular payoffs, we show that the simple geometric mechanism is always optimal by using a novel comparative static that ranks information structures according to their usefulness in supermodular decision problems.
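
The geometric mechanism mentioned at the end is the standard one for integer-valued queries; a minimal sketch (with illustrative names, not code from the paper) is:

```python
import numpy as np

def geometric_mechanism(count, epsilon, sensitivity=1, rng=None):
    """epsilon-differentially-private release of an integer count.

    Adds two-sided geometric noise with P(Z = k) proportional to alpha**|k|,
    where alpha = exp(-epsilon / sensitivity); the noise is sampled as the
    difference of two i.i.d. geometric random variables.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = 1.0 - np.exp(-epsilon / sensitivity)  # success probability of each geometric
    noise = int(rng.geometric(p)) - int(rng.geometric(p))
    return count + noise

# Example: release a count of 42 with epsilon = 0.5.
noisy_count = geometric_mechanism(42, epsilon=0.5)
```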

It is shown in this note that approximating the number of independent sets in a $k$-uniform linear hypergraph with maximum degree at most $\Delta$ is NP-hard if $\Delta\geq 5\cdot 2^{k-1}+1$. This confirms that for the relevant sampling and approximate counting problems, the regimes on the maximum degree where the state-of-the-art algorithms work are tight, up to some small factors. These algorithms include: the approximate sampler and randomised approximation scheme by Hermon, Sly and Zhang (2019), the perfect sampler by Qiu, Wang and Zhang (2022), and the deterministic approximation scheme by Feng, Guo, Wang, Wang and Yin (2022).

We address the problem of integrating data from multiple observational and interventional studies to eventually compute counterfactuals in structural causal models. We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm from the case of a single study to that of multiple ones. The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources. On this basis, it delivers interval approximations to counterfactual results, which collapse to points in the identifiable case. The algorithm is very general: it works on semi-Markovian models with discrete variables and can compute any counterfactual. Moreover, it automatically determines whether a problem is feasible (i.e., whether the parameter region is nonempty), a check that is necessary to avoid returning incorrect results. Systematic numerical experiments show the effectiveness and accuracy of the algorithm, while hinting at the benefits of integrating heterogeneous data to obtain informative bounds in the case of unidentifiability.

In many modern applications of deep learning, the neural network has many more parameters than the data points used for its training. Motivated by this practice, a large body of recent theoretical research has been devoted to studying overparameterized models. One of the central phenomena in this regime is the ability of the model to interpolate noisy data yet still have test error lower than the amount of noise in that data. arXiv:1906.11300 characterized the data covariance structures for which such a phenomenon can happen in linear regression when one considers the interpolating solution with minimum $\ell_2$-norm and the data has independent components: they gave a sharp bound on the variance term and showed that it can be small if and only if the data covariance has high effective rank in a subspace of small co-dimension. We strengthen and complete their results by eliminating the independence assumption and providing sharp bounds for the bias term. Thus, our results apply in a much more general setting than those of arXiv:1906.11300, e.g., kernel regression, and not only characterize how the noise is damped but also which part of the true signal is learned. Moreover, we extend the result to the setting of ridge regression, which allows us to explain another interesting phenomenon: we give general sufficient conditions under which the optimal regularization is negative.
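
For reference, the two estimators under discussion, the minimum-$\ell_2$-norm interpolator and ridge regression, can be written in a few lines of NumPy; this is a sketch assuming a generic design matrix X of shape (n, p) with p > n, not code from either paper.

```python
import numpy as np

def min_norm_interpolator(X, y):
    """Minimum-l2-norm solution of X @ theta = y (the Moore-Penrose solution).

    In the overparameterized regime (p features > n samples, X of full row
    rank) this solution interpolates the training data exactly."""
    return np.linalg.pinv(X) @ y

def ridge_estimator(X, y, lam):
    """Ridge regression theta = (X^T X + lam I)^{-1} X^T y.

    As lam -> 0+ this recovers the minimum-norm interpolator; the abstract's
    point is that under suitable covariance structure the test-optimal lam can
    even be negative (the linear system stays solvable as long as -lam is not
    an eigenvalue of X^T X)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Example: n = 50 noisy samples of a linear model with p = 200 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))
theta_true = rng.normal(size=200) / np.sqrt(200)
y = X @ theta_true + 0.1 * rng.normal(size=50)
theta_interp = min_norm_interpolator(X, y)   # fits y exactly
theta_ridge = ridge_estimator(X, y, lam=1.0)
```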

Recent advances in maximizing mutual information (MI) between the source and target have demonstrated its effectiveness in text generation. However, previous works have paid little attention to modeling the backward network of MI (i.e., the dependency from the target to the source), which is crucial to the tightness of the variational information maximization lower bound. In this paper, we propose Adversarial Mutual Information (AMI): a text generation framework formulated as a novel saddle-point (min-max) optimization that aims to identify joint interactions between the source and target. Within this framework, the forward and backward networks iteratively promote or demote each other's generated instances by comparing the real and synthetic data distributions. We also develop a latent noise sampling strategy that leverages random variations in the high-level semantic space to enhance long-term dependency in the generation process. Extensive experiments on different text generation tasks demonstrate that the proposed AMI framework significantly outperforms several strong baselines, and we also show that AMI has the potential to lead to a tighter lower bound of maximum mutual information for the variational information maximization problem.
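
For context, the variational information maximization lower bound referred to above is the standard one: for any variational (backward) model $q_{\phi}(x \mid y)$ of the source $x$ given the target $y$,
$$ I(X;Y) \;=\; H(X) - H(X \mid Y) \;\geq\; H(X) + \mathbb{E}_{p(x,y)}\!\left[\log q_{\phi}(x \mid y)\right], $$
with equality exactly when $q_{\phi}(x \mid y) = p(x \mid y)$. This is why the quality of the backward network governs the tightness of the bound that AMI aims to improve.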
