
(Sender-)Deniable encryption provides a very strong privacy guarantee: a sender who is coerced by an attacker into "opening" their ciphertext after the fact is able to generate "fake" local random choices that are consistent with any plaintext of their choice. The only known fully efficient constructions of public-key deniable encryption rely on indistinguishability obfuscation (iO) (which currently can only be based on sub-exponential hardness assumptions). In this work, we study (sender-)deniable encryption in a setting where the encryption procedure is a quantum algorithm, but the ciphertext is classical. First, we propose a quantum analog of the classical definition in this setting. We give a fully efficient construction satisfying this definition, assuming the quantum hardness of the Learning with Errors (LWE) problem. Second, we show that quantum computation unlocks a fundamentally stronger form of deniable encryption, which we call perfect unexplainability. The primitive at the heart of unexplainability is a quantum computation for which there is provably no efficient way, such as exhibiting the "history of the computation", to establish that the output was indeed the result of the computation. We give a construction which is secure in the random oracle model, assuming the quantum hardness of LWE. Crucially, this notion implies a form of protection against coercion "before the fact", a property that is impossible to achieve classically.
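
To fix the interface in mind, the following is a minimal sketch (with hypothetical names, not the paper's construction) of the consistency check that sender-deniability must survive: the coercer re-runs encryption on the claimed plaintext with the revealed coins and compares against the intercepted ciphertext.

```python
# Minimal sketch of the sender-deniable interface; scheme.enc/fake are
# hypothetical names, and enc may internally be a quantum algorithm while
# coins and ciphertexts stay classical, matching the paper's setting.

def coercion_check(scheme, pk, real_msg, fake_msg, coins):
    """What a coercer can do: re-encrypt the claimed message with the
    revealed coins and compare against the intercepted ciphertext."""
    ct = scheme.enc(pk, real_msg, coins)              # what was actually sent
    fake_coins = scheme.fake(pk, real_msg, coins, fake_msg)
    # Deniability requires the faked opening to survive re-encryption, and
    # (fake_msg, fake_coins) to be indistinguishable from an honest opening.
    return scheme.enc(pk, fake_msg, fake_coins) == ct
```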

Related content

Cybersecurity threats affect all aspects of society; critical infrastructures (such as networks, corporate systems, water supply systems, and intelligent transportation systems) are especially prone to attacks, and attacks on them can have tangible negative consequences for society. However, these critical cyber systems are generally governed by multiple jurisdictions: for instance, the Metro in the Washington, D.C. area is managed by the states of Virginia and Maryland, as well as the District of Columbia (DC), through the Washington Metropolitan Area Transit Authority (WMATA). Similarly, the water treatment infrastructure managed by DC Water takes waste water input from Fairfax and Arlington counties as well as the district (i.e., DC). Moreover, cyber attacks usually launch from unknown sources, pass through unknown switches and servers, and arrive at their destination with little knowledge available about their source or path. Certain infrastructures are shared among multiple countries, another idiosyncrasy that exacerbates the issue of governance. This law paper, however, is not concerned with the general governance of these infrastructures, but rather with the ambiguity in the relevant laws or doctrines about which authority would prevail in the context of a cyber threat or a cyber-attack, with a focus on federal vs. state issues, international law involvement, federal preemption, technical aspects that could affect lawmaking, and conflicting responsibilities in cases of cyber crime. A legal analysis of previous cases is presented, as well as an extended discussion addressing different sides of the argument.

Security and safety are intertwined concepts in the world of computing. In recent years, the terms "sustainable security" and "sustainable safety" have come into fashion and are used to refer to a variety of system properties ranging from efficiency to profitability, sometimes meaning that a product or service is good for people and planet. This leads to confusing perceptions of products: customers might expect a sustainable product to be developed without child labour, while the producer uses the term to signify that the new product uses marginally less power than the previous generation of that product. Even in research on sustainably safe and secure ICT, these differing notions of the terminology are prevalent. As researchers we often work towards optimising our subject of study for one specific sustainability metric, say energy consumption, while being blissfully unaware of, e.g., social impacts, life-cycle impacts, or rebound effects of such optimisations. In this paper I dissect the idea of sustainable safety and security, starting from the questions of what we want to sustain, and for whom we want to sustain it. I believe that a general "people and planet" answer is inadequate here, because this form of sustainability cannot be the property of a single industry sector but must be addressed by society as a whole. However, with sufficient understanding of life-cycle impacts we may very well be able to devise research and development efforts, and inform decision-making processes, towards the use of integrated safety and security solutions that help us address societal challenges in the context of the climate and ecological crises, and that are aligned with concepts such as intersectionality and climate justice. Of course, these solutions can only be effective if they are embedded in societal and economic change towards more frugal uses of data and ICT.

We introduce Qunity, a new quantum programming language designed to treat quantum computing as a natural generalization of classical computing. Qunity presents a unified syntax where familiar programming constructs can have both quantum and classical effects. For example, one can use sum types to implement the direct sum of linear operators, exception handling syntax to implement projective measurements, and aliasing to induce entanglement. Further, Qunity takes advantage of the overlooked BQP subroutine theorem, allowing one to construct reversible subroutines from irreversible quantum algorithms through the uncomputation of "garbage" outputs. Unlike existing languages, which enable quantum features through separate add-ons (a classical language with quantum gates bolted on), Qunity's unified syntax comes with a novel denotational semantics that guarantees that programs are quantum-mechanically valid. We present Qunity's syntax, type system, and denotational semantics, showing how it can cleanly express several quantum algorithms. We also detail how Qunity can be compiled to a low-level qubit circuit language like OpenQASM, proving the realizability of our design.
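
As a hedged illustration of the compute-copy-uncompute pattern behind the BQP subroutine theorem (plain numpy, not Qunity syntax): run a toy algorithm U on (input, work) qubits, copy the answer to a fresh qubit with a CNOT, then run U's inverse to erase the garbage, yielding a clean reversible subroutine.

```python
# Hedged illustration (plain numpy, not Qunity syntax) of compute-copy-
# uncompute: U acts on (input, work); the answer is CNOT-copied to a fresh
# qubit; U^dagger then erases the "garbage", giving a clean subroutine.
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

# Toy 2-qubit "algorithm": copy the input bit into the work qubit, then
# scramble the input qubit (the garbage) with a Hadamard.
U = np.kron(H, I2) @ CNOT

step_compute = np.kron(U, I2)              # U on qubits (0, 1)
step_copy = np.kron(I2, CNOT)              # copy work qubit 1 into qubit 2
step_uncompute = np.kron(U.conj().T, I2)   # undo U, erasing the garbage
subroutine = step_uncompute @ step_copy @ step_compute

# On |x,0,0> the subroutine returns |x,0,x>: answer copied, garbage gone.
for x in (0, 1):
    state = np.zeros(8); state[4 * x] = 1.0
    expected = np.zeros(8); expected[4 * x + x] = 1.0
    assert np.allclose(subroutine @ state, expected)
```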

In recent years, a new class of symmetric-key primitives over GF(p) has emerged that is essential to protocols based on multi-party computation and zero-knowledge proofs. A number of these constructions show that following design strategies alternative to the classical SPN and Feistel networks leads to more efficient cipher and hash function designs. In view of these efforts, in this work we build an \emph{algebraic framework} that allows the systematic exploration of viable and efficient design strategies for constructing symmetric-key (iterative) permutations over GF(p). We propose a generalized triangular polynomial dynamical system (GTDS), and based on the GTDS we provide a generic definition of an iterative (keyed) permutation. Our GTDS-based generic definition is able to describe the three most well-known design strategies, namely SPNs, Feistel networks and Lai--Massey, as well as instantiations of them. Most notably, the definition also admits novel and efficient (keyed) permutations. As examples, we show that partial-SPN-based permutations and the recently proposed Griffin design, which follows neither the Feistel nor the SPN approach explicitly, can be described using the generic GTDS-based definition. We show that GTDS-based permutations are able to achieve exponential degree growth, a property desirable for constructing efficient cryptographic permutations. In addition, we prove a general upper bound on the differential uniformity of the GTDS. Provided this upper bound is small enough, we also prove that a GTDS-based two-round block cipher is $\epsilon$-close to being pairwise independent. Finally, we provide a discrepancy analysis, a technique used to measure the (pseudo-)randomness of a sequence, for the sequences generated by the generic permutation described by the GTDS.
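
The paper defines the GTDS precisely; as a hedged illustration of the triangular idea only, here is a toy iterative permutation over GF(p) in which the last coordinate passes through a power permutation and each earlier coordinate is shifted by a polynomial in the later ones, so the map inverts coordinate by coordinate from the bottom up (all parameters are illustrative, not the paper's construction).

```python
# Toy triangular permutation over GF(p): the last coordinate goes through
# the power permutation x -> x^d with gcd(d, p - 1) = 1, and each earlier
# coordinate is shifted by a polynomial in the later coordinates.
import math

p = 2**31 - 1            # an illustrative prime
d = 5
assert math.gcd(d, p - 1) == 1   # ensures x -> x^d permutes GF(p)

def f(later):            # any fixed polynomial in the later coordinates
    return (sum(x * x + 3 * x for x in later) + 1) % p

def forward(xs):
    n = len(xs)
    ys = list(xs)
    ys[n - 1] = pow(xs[n - 1], d, p)
    for i in range(n - 2, -1, -1):
        ys[i] = (xs[i] + f(xs[i + 1:])) % p
    return ys

def inverse(ys):
    n = len(ys)
    xs = list(ys)
    xs[n - 1] = pow(ys[n - 1], pow(d, -1, p - 1), p)
    for i in range(n - 2, -1, -1):         # later coordinates already known
        xs[i] = (ys[i] - f(xs[i + 1:])) % p
    return xs

assert inverse(forward([1, 2, 3])) == [1, 2, 3]
```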

In a sound environmental protection system, groundwater cannot be excluded. In addition to the problem of over-exploitation, which is in total disagreement with the concept of sustainable development, another non-negligible issue is groundwater contamination, mainly due to intensive agricultural activities and industrialized areas. In the literature, several papers have dealt with the transport problem, especially inverse problems in which the release history or the source location is identified. The innovative aim of this paper is to develop a data-driven model that is able to analyze multiple scenarios, even strongly non-linear ones, in order to solve forward and inverse transport problems while preserving the reliability of the results and reducing the uncertainty. Furthermore, this tool provides extremely fast responses, which is essential for identifying remediation strategies immediately. The advantages of the model are compared with studies from the literature. The data-driven model is a feedforward artificial neural network (ANN) trained to handle several cases: first, identifying the concentration of the pollutant at specific observation points in the study area (the forward problem); second, solving inverse problems by identifying the release history at a known source location; and third, in the case of a single contaminant source, identifying the release history and, at the same time, the location of the source within a specific sub-domain of the investigated area. Finally, the observation error is investigated and estimated. The results are satisfactory, highlighting the capability of the ANN to deal with multiple scenarios by approximating nonlinear functions without a physical description of the phenomenon, while providing reliable results with very low computational burden and uncertainty.
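
As a hedged sketch of the data-driven idea (the toy convolutional "transport model", the shapes, and the network sizes are all assumptions for illustration, not the paper's setup): train a feedforward network to map concentrations observed at a few monitoring points back to a discretized release history, i.e., the inverse problem.

```python
# Hedged sketch: a feedforward network for the inverse transport problem.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_steps, obs_times = 20, [5, 10, 15, 19]   # release steps, observation times

def toy_transport(release):
    """Stand-in forward model: convolve the release history with a decaying
    kernel and sample the breakthrough curve at a few observation times."""
    kernel = np.exp(-0.3 * np.arange(n_steps))
    breakthrough = np.convolve(release, kernel)[:n_steps]
    return breakthrough[obs_times]

# Training data: random release histories and their simulated observations.
releases = rng.random((2000, n_steps))
observed = np.array([toy_transport(r) for r in releases])

# Inverse problem: learn the map from observations back to release history.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(observed, releases)

true = rng.random(n_steps)
est = net.predict(toy_transport(true).reshape(1, -1))[0]
print("relative error:", np.linalg.norm(est - true) / np.linalg.norm(true))
```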

Near-term quantum systems tend to be noisy. Crosstalk has been recognized as one of the major noise sources in superconducting Noisy Intermediate-Scale Quantum (NISQ) devices. It arises from the concurrent execution of two-qubit gates, such as \texttt{CX}, on nearby qubits, and can significantly raise gate error rates compared to running the gates individually. Crosstalk can be mitigated through scheduling or hardware tuning. Prior studies, however, manage crosstalk at a late stage of the compilation process, usually after hardware mapping is done, and thereby miss opportunities to optimize algorithm logic, routing, and crosstalk simultaneously. In this paper, we push the envelope by considering all these factors simultaneously, at a very early compilation stage. We propose a crosstalk-aware quantum program compilation framework called CQC that enhances crosstalk mitigation while achieving satisfactory circuit depth. Moreover, we identify opportunities for application-specific crosstalk mitigation during the translation from intermediate representation to circuit, for instance the \texttt{CX} ladder construction in variational quantum eigensolvers (VQE). Evaluations through simulation and on real IBM-Q devices show that our framework can reduce the error rate by up to 6$\times$, with only $\sim$60\% of the circuit depth of state-of-the-art gate scheduling approaches. In particular, for VQE, we demonstrate a 49\% circuit depth reduction with a 9.6\% fidelity improvement over prior art on the H4 molecule using IBMQ Guadalupe. Our CQC framework will be released on GitHub.
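
CQC's actual algorithm is not reproduced here; the following is a minimal sketch, under assumed data structures, of the kind of crosstalk-aware layering such a compiler performs: a two-qubit gate is delayed to a later layer whenever it would run concurrently with a gate on the same or physically neighboring qubits, trading some circuit depth for reduced crosstalk.

```python
# Hedged sketch (not the CQC algorithm): greedy crosstalk-aware layering.
def schedule(gates, neighbors):
    """gates: (q0, q1) pairs in program order; neighbors: qubit -> set."""
    layers, ready = [], {}          # ready[q] = first layer where q is free
    for q0, q1 in gates:
        k = max(ready.get(q0, 0), ready.get(q1, 0))   # respect data order
        while k < len(layers):
            busy = {q for g in layers[k] for q in g}
            near = busy | {n for q in busy for n in neighbors.get(q, set())}
            if q0 not in near and q1 not in near:     # no crosstalk conflict
                break
            k += 1
        if k == len(layers):
            layers.append([])
        layers[k].append((q0, q1))
        ready[q0] = ready[q1] = k + 1
    return layers

line = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}         # 4 qubits in a line
print(schedule([(0, 1), (2, 3)], line))               # [[(0, 1)], [(2, 3)]]
```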

We propose the first near-optimal quantum algorithm for estimating, in Euclidean norm, the mean of a vector-valued random variable with finite mean and covariance. Our result extends the theory of multivariate sub-Gaussian estimators to the quantum setting. Unlike in the classical setting, where any univariate estimator can be turned into a multivariate estimator with at most a logarithmic overhead in the dimension, no similar result can be proved in the quantum setting. Indeed, Heinrich ruled out the existence of a quantum advantage for the mean estimation problem when the sample complexity is smaller than the dimension. Our main result is to show that, outside this low-precision regime, there is a quantum estimator that outperforms any classical estimator. Our approach is substantially more involved than in the univariate setting, where most quantum estimators rely only on phase estimation. We exploit a variety of additional algorithmic techniques such as amplitude amplification, the Bernstein-Vazirani algorithm, and quantum singular value transformation. Our analysis also uses concentration inequalities for multivariate truncated statistics. We develop our quantum estimators in two different input models that have appeared in the literature before. The first provides coherent access to the binary representation of the random variable and encompasses the classical setting. In the second, the random variable is directly encoded into the phases of quantum registers. This model arises naturally in many quantum algorithms but is often incomparable to having classical samples. We adapt our techniques to these two settings and show that the second model is strictly weaker for solving the mean estimation problem. Finally, we describe several applications of our algorithms, notably in measuring the expectation values of commuting observables and in the field of machine learning.
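
For contrast with the quantum estimator, here is a hedged sketch of a standard classical multivariate estimator of the sub-Gaussian style the abstract refers to: median-of-means, taking the geometric median (via Weiszfeld's iteration) of bucket means. This is an illustrative baseline, not the paper's algorithm.

```python
# Hedged classical baseline (not the paper's quantum algorithm): multivariate
# median-of-means, i.e., the geometric median of bucket means.
import numpy as np

def geometric_median(points, iters=100):
    """Weiszfeld's fixed-point iteration for the geometric median."""
    z = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - z, axis=1), 1e-12)
        z = (points / d[:, None]).sum(axis=0) / (1.0 / d).sum()
    return z

def median_of_means(samples, n_buckets=11):
    """Split the samples into buckets and take the geometric median of the
    bucket means; robust to heavy tails, unlike the plain empirical mean."""
    means = np.stack([b.mean(axis=0)
                      for b in np.array_split(samples, n_buckets)])
    return geometric_median(means)

samples = np.random.default_rng(0).standard_t(2.5, size=(10_000, 2))
print(median_of_means(samples))   # close to the true mean (0, 0)
```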

A fundamental quantity of interest in Shannon theory, classical or quantum, is the optimal error exponent of a given channel W and rate R: the constant E(W,R) which governs the exponential decay of decoding error when using ever larger codes of fixed rate R to communicate over ever more (memoryless) instances of a given channel W. Here I show that a bound by Hayashi [CMP 333, 335 (2015)] for an analogous quantity in privacy amplification implies a lower bound on the error exponent of communication over symmetric classical-quantum channels. The resulting bound matches Dalai's [IEEE TIT 59, 8027 (2013)] sphere-packing upper bound for rates above a critical value, and reproduces the well-known classical result for symmetric channels. The argument proceeds by first relating the error exponent of privacy amplification to that of compression of classical information with quantum side information, which gives a lower bound that matches the sphere-packing upper bound of Cheng et al. [IEEE TIT 67, 902 (2021)]. In turn, the polynomial prefactors to the sphere-packing bound found by Cheng et al. may be translated to the privacy amplification problem, sharpening a recent result by Li, Yao, and Hayashi [arXiv:2111.01075 [quant-ph]], at least for linear randomness extractors.
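
In symbols, writing $P^{*}_{\mathrm{err}}(W,n,R)$ for the smallest decoding error probability achievable by length-$n$ codes of rate $R$ over memoryless uses of $W$, the quantity in question is (with the caveat that, in general, the limit may have to be taken as a lim inf or lim sup):

\[
  E(W,R) \;=\; \lim_{n \to \infty} -\frac{1}{n} \log P^{*}_{\mathrm{err}}(W,n,R).
\]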

As data are increasingly stored in separate silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks against robustness and the corresponding defenses; and 3) inference attacks against privacy and the corresponding defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
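
As a reference point for where these attack surfaces sit, here is a minimal sketch of one round of the canonical FedAvg protocol (an illustrative baseline, not a scheme from the survey): clients run local SGD and the server computes a weighted average of their updates. Poisoning attacks tamper with the updates clients send; inference attacks exploit what the server observes.

```python
# Hedged sketch of one FedAvg round on a linear model with squared loss.
import numpy as np

def fedavg_round(global_w, client_datasets, lr=0.1, local_steps=5):
    """client_datasets: list of (X, y) pairs, one per client."""
    updates, sizes = [], []
    for X, y in client_datasets:
        w = global_w.copy()
        for _ in range(local_steps):             # local SGD on squared loss
            w -= lr * X.T @ (X @ w - y) / len(y)
        updates.append(w)                        # sent to the server
        sizes.append(len(y))
    # Server: dataset-size-weighted average of the client models.
    return np.average(np.stack(updates), axis=0,
                      weights=np.array(sizes, float))
```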

Recommender systems (RSs) have been the most important technology for driving business growth at Taobao, the largest online consumer-to-consumer (C2C) platform in China. The billion-scale data in Taobao creates three major challenges for Taobao's RS: scalability, sparsity, and cold start. In this paper, we present our technical solutions to these three challenges. The methods are based on a graph embedding framework. We first construct an item graph from users' behavior history. Each item is then represented as a vector using graph embedding. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold-start problems, side information is incorporated into the embedding framework. We propose two aggregation methods to integrate the embeddings of items and the corresponding side information. Offline experimental results show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow for processing the billion-scale data in Taobao. Using online A/B tests, we show that online click-through rates (CTRs) improve compared to the previous recommendation methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment.
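
As a hedged sketch of the pipeline described above (function names and the simple-average aggregation are illustrative; the paper's own aggregation schemes may differ in detail): build a weighted item graph from consecutive clicks in user sessions, learn item embeddings on it (e.g., random walks plus skip-gram, not shown), and average an item's embedding with its side-information embeddings to help sparse and cold-start items.

```python
# Hedged sketch: item graph construction and side-information aggregation.
import numpy as np
from collections import defaultdict

def build_item_graph(sessions):
    """Edge weight (a, b) = number of times item b follows item a."""
    weights = defaultdict(float)
    for s in sessions:
        for a, b in zip(s, s[1:]):
            weights[(a, b)] += 1.0
    return weights

def aggregate(item_emb, side_embs):
    """Plain average of an item's embedding with its side-information
    embeddings (category, brand, ...); a learned per-item weighting is
    the natural refinement the abstract alludes to."""
    return np.mean(np.stack([item_emb, *side_embs]), axis=0)

graph = build_item_graph([["shoe", "sock", "shoe"], ["sock", "belt"]])
```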
