
Inspired by a recent study by Christensen and Popovski on secure $2$-user product computation for finite fields of prime order over a quantum multiple access channel (QMAC), the generalization to $K$ users and arbitrary finite fields is explored. Combining ideas of batch processing, the quantum $2$-sum protocol, a secure computation scheme of Feige, Kilian and Naor (FKN), a field-group isomorphism, and additive secret sharing, asymptotically optimal (capacity-achieving for large alphabets) schemes are proposed for secure $K$-user (any $K$) product computation over any finite field. As a byproduct of the analysis, the capacity of modulo-$d$ ($d\geq 2$) secure $K$-sum computation over the QMAC is found to be $2/K$ computations/qudit.
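One classical ingredient of such schemes is additive secret sharing over $\mathbb{Z}_d$. The sketch below (plain Python, no quantum channel) is meant only to illustrate that primitive for a modulo-$d$ $K$-sum: each user splits its input into $K$ additive shares, and the published per-user share sums reveal nothing beyond the total sum. The function names and trust model here are illustrative assumptions, not the paper's QMAC protocol.

```python
import secrets

def share(x, k, d):
    """Split x into k additive shares modulo d (k-1 random, one correcting)."""
    shares = [secrets.randbelow(d) for _ in range(k - 1)]
    shares.append((x - sum(shares)) % d)
    return shares

def secure_mod_d_sum(inputs, d):
    """Each of the K users splits its input into K additive shares mod d and
    hands one share to every user; each user then publishes only the sum of
    the shares it received, which reveals nothing beyond the total mod d."""
    k = len(inputs)
    # share_matrix[i][j] = share of user i's input handed to user j
    share_matrix = [share(x, k, d) for x in inputs]
    published = [sum(share_matrix[i][j] for i in range(k)) % d for j in range(k)]
    return sum(published) % d

if __name__ == "__main__":
    d, inputs = 5, [3, 4, 2, 1]
    assert secure_mod_d_sum(inputs, d) == sum(inputs) % d
    print(secure_mod_d_sum(inputs, d))  # 0  (= 10 mod 5)
```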

Related Content

In this note, we observe that quantum logspace computations are verifiable by classical logspace algorithms, with unconditional security. More precisely, every language in BQL has an (information-theoretically secure) streaming proof with a quantum logspace prover and a classical logspace verifier. The prover provides a polynomial-length proof that is streamed to the verifier. The verifier has a read-once one-way access to that proof and is able to verify that the computation was performed correctly. That is, if the input is in the language and the prover is honest, the verifier accepts with high probability, and, if the input is not in the language, the verifier rejects with high probability even if the prover is adversarial. Moreover, the verifier uses only $O(\log n)$ random bits.

Mixtures of classifiers (a.k.a. randomized ensembles) have been proposed as a way to improve robustness against adversarial attacks. However, it has been shown that existing attacks are not well suited to this kind of classifier. In this paper, we discuss the problem of attacking a mixture in a principled way and introduce two desirable properties of attacks, effectiveness and maximality, based on a geometrical analysis of the problem. We then show that existing attacks do not meet both of these properties. Finally, we introduce a new attack, called the lattice climber attack, with theoretical guarantees in the binary linear setting, and demonstrate its performance through experiments on synthetic and real datasets.
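For concreteness, the sketch below implements a simple expectation-based baseline: projected gradient descent on the expected margin of a mixture of binary linear classifiers (in the spirit of expectation-over-transformation attacks). It is not the lattice climber attack; it is the kind of baseline whose limitations such an analysis targets. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def expected_margin_grad(x, y, weights, probs):
    """Gradient w.r.t. x of the expected signed margin y * w^T x
    under the mixture distribution over linear classifiers."""
    return y * sum(p * w for p, w in zip(probs, weights))

def attack_mixture(x, y, weights, probs, eps=0.5, steps=50, lr=0.05):
    """Gradient descent on the expected margin, projected back onto an
    L2 ball of radius eps around the clean point x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv -= lr * expected_margin_grad(x_adv, y, weights, probs)
        delta = x_adv - x
        norm = np.linalg.norm(delta)
        if norm > eps:                      # project onto the L2 ball
            x_adv = x + delta * (eps / norm)
    return x_adv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=2) for _ in range(3)]   # 3 linear classifiers
    probs = [0.5, 0.3, 0.2]                            # mixture weights
    x, y = np.array([1.0, -0.5]), 1
    x_adv = attack_mixture(x, y, weights, probs)
    err = sum(p for p, w in zip(probs, weights) if y * (w @ x_adv) <= 0)
    print("expected error of the mixture after attack:", err)
```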

Federated learning (FL) has emerged as a highly effective paradigm for privacy-preserving collaborative training among different parties. Unlike traditional centralized learning, which requires collecting data from each party, FL allows clients to share privacy-preserving information without exposing private datasets. This approach not only guarantees enhanced privacy protection but also facilitates more efficient and secure collaboration among multiple participants. Therefore, FL has gained considerable attention from researchers, prompting numerous surveys that summarize the related work. However, the majority of these surveys concentrate on methods that share model parameters during the training process, while overlooking the potential of sharing other forms of local information. In this paper, we present a systematic survey from a new perspective, i.e., what to share in FL, with an emphasis on model utility, privacy leakage, and communication efficiency. This survey differs from previous ones through four distinct contributions. First, we present a new taxonomy of FL methods in terms of the sharing methods, covering three categories of shared information: model sharing, synthetic data sharing, and knowledge sharing. Second, we analyze the vulnerability of different sharing methods to privacy attacks and review the defense mechanisms that provide certain privacy guarantees. Third, we conduct extensive experiments to compare the performance and communication overhead of various sharing methods in FL. In addition, we assess the potential privacy leakage through model inversion and membership inference attacks, while comparing the effectiveness of various defense approaches. Finally, we discuss potential deficiencies in current methods and outline future directions for improvement.
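As a minimal illustration of the first category (model sharing), the sketch below runs FedAvg-style rounds in which clients share only locally updated parameters and the server averages them. It is a generic example under simplifying assumptions (linear-regression clients, size-weighted averaging), not a method from any particular surveyed work.

```python
import numpy as np

def local_update(theta, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear regression."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ theta - y) / len(y)
        theta = theta - lr * grad
    return theta

def fedavg_round(theta, clients):
    """Each client trains locally and shares only its updated parameters;
    the server returns their size-weighted average (no raw data is shared)."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(theta.copy(), X, y))
        sizes.append(len(y))
    weights = np.array(sizes) / sum(sizes)
    return sum(w * u for w, u in zip(weights, updates))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_theta = np.array([2.0, -1.0])
    clients = []
    for _ in range(4):
        X = rng.normal(size=(50, 2))
        clients.append((X, X @ true_theta + 0.1 * rng.normal(size=50)))
    theta = np.zeros(2)
    for _ in range(20):
        theta = fedavg_round(theta, clients)
    print(theta)  # approaches [2, -1] while clients share only parameters
```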

We consider the problem of learning from data corrupted by underrepresentation bias, where positive examples are filtered from the data at different, unknown rates for a fixed number of sensitive groups. We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out parameters, even in settings where intersectional group membership makes learning each intersectional rate computationally infeasible. Using this estimate for the group-wise drop-out rate, we construct a re-weighting scheme that allows us to approximate the loss of any hypothesis on the true distribution, even if we only observe the empirical error on a biased sample. Finally, we present an algorithm encapsulating this learning and re-weighting process, and we provide strong PAC-style guarantees that, with high probability, our estimate of the risk of the hypothesis over the true distribution will be arbitrarily close to the true risk.
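The sketch below conveys the high-level idea under simplifying assumptions: negatives are never filtered, so comparing positive/negative odds in the biased sample with those in a small unbiased sample estimates each group's retention rate, and surviving positives are then up-weighted by its inverse. This is an illustrative simplification, not the paper's estimator or its PAC analysis.

```python
import numpy as np

def estimate_retention(groups_b, labels_b, groups_u, labels_u, num_groups):
    """beta[g] ~ probability that a positive example of group g survives filtering."""
    beta = np.ones(num_groups)
    for g in range(num_groups):
        b_pos = np.sum((groups_b == g) & (labels_b == 1))
        b_neg = np.sum((groups_b == g) & (labels_b == 0))
        u_pos = np.sum((groups_u == g) & (labels_u == 1))
        u_neg = np.sum((groups_u == g) & (labels_u == 0))
        # negatives are not filtered, so the shrinkage of the biased pos/neg
        # odds relative to the unbiased odds estimates beta_g
        beta[g] = (b_pos / b_neg) / (u_pos / u_neg)
    return np.clip(beta, 1e-6, 1.0)

def reweighted_loss(losses, groups_b, labels_b, beta):
    """Approximate the population risk: each surviving positive of group g
    stands in for 1/beta_g positives of the true distribution."""
    w = np.where(labels_b == 1, 1.0 / beta[groups_b], 1.0)
    return np.sum(w * losses) / np.sum(w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, true_beta = 20000, np.array([0.3, 0.8])
    g = rng.integers(0, 2, size=n)                      # two sensitive groups
    y = rng.binomial(1, 0.5, size=n)                    # half positive in truth
    keep = (y == 0) | (rng.random(n) < true_beta[g])    # filter positives
    small = rng.random(n) < 0.02                        # small unbiased sample
    print(estimate_retention(g[keep], y[keep], g[small], y[small], 2))  # ~[0.3, 0.8]
```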

We investigate covert communication over general memoryless classical-quantum channels with fixed finite-size input alphabets. We show that the square root law (SRL) governs covert communication in this setting when a product of $n$ input states is used: $L_{\rm SRL}\sqrt{n}+o(\sqrt{n})$ covert bits (but no more) can be reliably transmitted in $n$ uses of the classical-quantum channel, where $L_{\rm SRL}>0$ is a channel-dependent constant that we call the covert capacity. We also show that ensuring covertness requires $J_{\rm SRL}\sqrt{n}+o(\sqrt{n})$ secret bits to be shared by the communicating parties prior to transmission, where $J_{\rm SRL}\geq0$ is a channel-dependent constant. We assume a quantum-powerful adversary that can perform an arbitrary joint (entangling) measurement on all $n$ channel uses. We determine single-letter expressions for $L_{\rm SRL}$ and $J_{\rm SRL}$, and establish conditions under which $J_{\rm SRL}=0$ (i.e., no pre-shared secret is needed). Finally, we evaluate scenarios in which covert communication is not governed by the SRL.

Secure aggregation protocols ensure the privacy of users' data in federated learning settings by preventing the disclosure of users' local gradients. Despite their merits, existing aggregation protocols often incur high communication and computation overheads on the participants and might not be optimized to handle the large update vectors of machine learning models efficiently. This paper presents e-SeaFL, an efficient, verifiable secure aggregation protocol that requires only one communication round for aggregation. e-SeaFL allows the aggregation server to generate a proof of honest aggregation for the participants. Our core idea is to employ a set of assisting nodes to help the aggregation server, under trust assumptions similar to those that existing works place on the participating users. For verifiability, e-SeaFL uses authenticated homomorphic vector commitments. Our experiments show that users enjoy five orders of magnitude higher efficiency than the state of the art (PPML 2022) for gradient vectors of dimension up to $100,000$.
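To make the secure-aggregation goal concrete, the sketch below uses classic pairwise additive masking: the server sees only masked vectors, yet the masks cancel in the sum. This is explicitly not e-SeaFL (it has no assisting nodes, no single-round structure, and no authenticated homomorphic vector commitments); it is a hedged baseline for intuition only, with illustrative parameter choices.

```python
import numpy as np

MOD = 2**32  # work over a finite ring so masks perfectly hide the inputs

def mask_vector(x, user_id, all_ids, pairwise_seeds, dim):
    """Add a PRG-derived mask for every other user; the lower-id user of a
    pair adds the shared mask and the higher-id user subtracts it, so all
    masks cancel when the server sums the masked vectors."""
    masked = x.astype(np.uint64) % MOD
    for other in all_ids:
        if other == user_id:
            continue
        seed = pairwise_seeds[frozenset((user_id, other))]
        rng = np.random.default_rng(seed)
        mask = rng.integers(0, MOD, size=dim, dtype=np.uint64)
        masked = (masked + mask) % MOD if user_id < other else (masked - mask) % MOD
    return masked

if __name__ == "__main__":
    dim, ids = 5, [0, 1, 2]
    seeds = {frozenset(p): hash(p) % (2**31) for p in [(0, 1), (0, 2), (1, 2)]}
    vectors = [np.arange(dim) + 10 * i for i in ids]
    masked = [mask_vector(v, i, ids, seeds, dim) for i, v in zip(ids, vectors)]
    aggregate = sum(masked) % MOD        # server only ever sees masked vectors
    print(aggregate)                      # equals the plain sum of the inputs
```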

For time-dependent PDEs, numerical schemes can be rendered bound-preserving without losing conservation and accuracy by a post-processing procedure that solves a constrained minimization in each time step. Such a constrained optimization can be formulated as a nonsmooth convex minimization, which can be solved efficiently by first-order optimization methods, provided optimal algorithm parameters are used. By analyzing the asymptotic linear convergence rate of the generalized Douglas-Rachford splitting method, optimal algorithm parameters can be approximately expressed as a simple function of the number of out-of-bounds cells. We demonstrate the efficiency of this simple choice of algorithm parameters by applying such a limiter to cell averages of a discontinuous Galerkin scheme solving phase field equations for demanding 3D problems. Numerical tests on a sophisticated 3D Cahn-Hilliard-Navier-Stokes system indicate that the limiter is high-order accurate, very efficient, and well-suited for large-scale simulations. In each time step, the Douglas-Rachford splitting takes at most $20$ iterations to enforce bounds and conservation up to round-off error, at a computational cost of at most $80N$, where $N$ is the total number of cells.
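A minimal sketch of such a post-processing limiter, assuming a plain $\ell^2$ objective, a scalar bound interval, and generic (not the paper's optimized) Douglas-Rachford parameters: given possibly out-of-bounds cell averages, it finds the closest vector that satisfies the bounds and conserves the total.

```python
import numpy as np

# Douglas-Rachford splitting for
#   min 0.5*||u - u0||^2  s.t.  lo <= u <= hi  and  sum(u) = sum(u0)
# The box term and quadratic go into prox_f; the conservation hyperplane into prox_g.

def dr_limiter(u0, lo, hi, gamma=1.0, lam=1.0, iters=100, tol=1e-12):
    s, n = u0.sum(), u0.size

    def prox_f(v):
        # prox of gamma*(0.5*||u-u0||^2 + box indicator): shrink toward u0, then clip
        return np.clip((v + gamma * u0) / (1.0 + gamma), lo, hi)

    def prox_g(v):
        # projection onto the conservation hyperplane sum(u) = sum(u0)
        return v - (v.sum() - s) / n

    z = u0.copy()
    for _ in range(iters):
        x = prox_f(z)
        y = prox_g(2.0 * x - z)
        z_new = z + lam * (y - x)
        if np.linalg.norm(z_new - z) < tol:
            z = z_new
            break
        z = z_new
    return prox_f(z)

if __name__ == "__main__":
    u0 = np.array([1.05, -0.02, 0.5, 0.3, 0.97])   # two cells violate [0, 1]
    u = dr_limiter(u0, 0.0, 1.0)
    print(u, u.sum(), u0.sum())                     # bounds enforced, mass preserved
```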

We combine dependent types with linear type systems that soundly and completely capture polynomial time computation. We explore two systems for capturing polynomial time: one system that disallows construction of iterable data, and one, based on the LFPL system of Martin Hofmann, that controls construction via a payment method. Both of these are extended to full dependent types via Quantitative Type Theory, allowing for arbitrary computation in types alongside guaranteed polynomial time computation in terms. We prove the soundness of the systems using a realisability technique due to Dal Lago and Hofmann. Our long-term goal is to combine the extensional reasoning of type theory with intensional reasoning about the resources intrinsically consumed by programs. This paper is a step along this path, which we hope will lead both to practical systems for reasoning about programs' resource usage, and to theoretical use as a form of synthetic computational complexity theory.

We generalize signature Gr\"obner bases, previously studied in the free algebra over a field or polynomial rings over a ring, to ideals in the mixed algebra $R[x_1,...,x_k]\langle y_1,\dots,y_n \rangle$ where $R$ is a principal ideal domain. We give an algorithm for computing them, combining elements from the theory of commutative and noncommutative (signature) Gr\"obner bases, and prove its correctness. Applications include extensions of the free algebra with commutative variables, e.g., for homogenization purposes or for performing ideal theoretic operations such as intersections, and computations over $\mathbb{Z}$ as universal proofs over fields of arbitrary characteristic. By extending the signature cover criterion to our setting, our algorithm also lifts some technical restrictions from previous noncommutative signature-based algorithms, now allowing, e.g., elimination orderings. We provide a prototype implementation for the case when $R$ is a field, and show that our algorithm for the mixed algebra is more efficient than classical approaches using existing algorithms.

This paper is concerned with designing, analyzing, and implementing linear and nonlinear discretization schemes for the distributed optimal control problem (OCP) constrained by the Cahn-Hilliard (CH) equation. We propose three difference schemes to approximate the solution of the OCP for the CH equation and investigate its behaviour. We present a convergence analysis of the proposed discretizations and verify our findings through numerical experiments.
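For background, a generic explicit finite-difference step for the 1D Cahn-Hilliard equation $u_t = (u^3 - u - \varepsilon^2 u_{xx})_{xx}$ on a periodic grid is sketched below. It is not one of the paper's three schemes and ignores the control problem entirely; the step size is an assumed, conservatively small value, since explicit CH steps must scale like $h^4/\varepsilon^2$.

```python
import numpy as np

def laplacian(u, h):
    """Second-order centered difference Laplacian on a periodic 1D grid."""
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2

def ch_step(u, dt, h, eps):
    """One explicit step: u_t = Laplacian of the chemical potential."""
    mu = u**3 - u - eps**2 * laplacian(u, h)
    return u + dt * laplacian(mu, h)

if __name__ == "__main__":
    n, eps = 128, 0.05
    h = 1.0 / n
    dt = 1e-8          # assumed conservative step; explicit CH needs dt = O(h**4 / eps**2)
    rng = np.random.default_rng(0)
    u = 0.05 * rng.standard_normal(n)
    for _ in range(1000):
        u = ch_step(u, dt, h, eps)
    print(u.mean())    # the discrete mean (mass) is conserved by the scheme
```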
