
Classical forgery attacks against Offset Two-Round (OTR) structures require harsh conditions, such as knowledge of certain plaintext-ciphertext pairs, and their success probability is not high. To address these problems, a quantum forgery attack on the OTR structure using Simon's algorithm is proposed. The attacker intercepts the ciphertext-tag pair $(C,T)$ between the sender and receiver, and Simon's algorithm is used to find the period of the tag-generation function in OTR; a new ciphertext $C'$ ($C'\ne C$) can then be successfully forged for the intercepted tag $T$. For a variant of the OTR structure (the Pr\o{}st-OTR-Even-Mansour structure), a universal forgery attack is proposed, in which it is easy to generate the correct tag of any given message if the attacker is allowed to change a single block in it. The attack first obtains the secret parameter $L$ using Simon's algorithm, and then uses $L$ to recover the keys $k_1$ and $k_2$, so that the attacker can forge tags for modified messages. Only a few plaintext blocks are needed to obtain the keys and forge arbitrary messages. Performance analysis shows that the query complexity of our attack is $O(n)$, and its success probability is very close to 1.
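The structural property Simon's algorithm exploits can be illustrated classically: the attacked function is promised to satisfy $f(x) = f(x \oplus s)$ for a hidden period $s$, which a quantum attacker recovers with only $O(n)$ queries. A minimal classical sketch of this XOR-periodicity promise (the toy function below is illustrative, not the paper's tag-generation construction):

```python
def has_xor_period(f, s: int, nbits: int) -> bool:
    # Exhaustively verify Simon's promise f(x) = f(x XOR s) over all
    # nbits-bit inputs. A quantum attacker finds an unknown s with
    # O(nbits) queries; classically this check is exponential.
    return all(f(x) == f(x ^ s) for x in range(1 << nbits))

# A toy periodic function: x and x ^ s collapse to the same value.
s = 0b101
f = lambda x: min(x, x ^ s)
```

Here `has_xor_period(f, s, 3)` holds, while any candidate period outside $\{0, s\}$ fails the check.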


We offer an alternative proof, using the Stein-Chen method, of Bollob\'{a}s' theorem concerning the distribution of the extreme degrees of a random graph. Our proof also provides a rate of convergence of the extreme degree to its asymptotic distribution. The same method applies in a more general setting where the probability that a pair of vertices is connected by an edge depends on the number of vertices.

Integer data is typically made differentially private by adding noise from a Discrete Laplace (or Discrete Gaussian) distribution. We study the setting where differential privacy of a counting query is achieved using bit-wise randomized response, i.e., independent, random bit flips on the encoding of the query answer. Binary error-correcting codes transmitted through noisy channels with independent bit flips are well-studied in information theory. However, such codes are unsuitable for differential privacy since they have (by design) high sensitivity, i.e., neighbouring integers have encodings with a large Hamming distance. Gray codes show that it is possible to create an efficient sensitivity 1 encoding, but are also not suitable for differential privacy due to lack of noise-robustness. Our main result is that it is possible, with a constant rate code, to simultaneously achieve the sensitivity of Gray codes and the noise-robustness of error-correcting codes (down to the noise level required for differential privacy). An application of this new encoding of the integers is an asymptotically faster, space-optimal differentially private data structure for histograms.
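The sensitivity-1 property of Gray codes mentioned above is easy to see concretely: consecutive integers always map to codewords at Hamming distance exactly 1. A minimal sketch using the standard reflected binary Gray code (not the paper's final encoding):

```python
def gray_encode(n: int) -> int:
    # Reflected binary Gray code: the encodings of n and n + 1
    # differ in exactly one bit, i.e. the encoding has sensitivity 1.
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    # Invert the encoding by XOR-folding the shifted suffixes.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

What this plain Gray code lacks is noise-robustness: a single flipped bit can decode to a far-away integer, which is why the paper's construction combines the sensitivity-1 property with error correction.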

As Large Language Models (LLMs) become prevalent in various fields, there is an urgent need for improved NLP benchmarks that encompass all the necessary knowledge of individual disciplines. Many contemporary benchmarks for foundation models emphasize a broad range of subjects but often fall short in covering all the critical subjects and the professional knowledge they require. This shortfall leads to skewed results, given that LLMs exhibit varying performance across different subjects and knowledge areas. To address this issue, we present psybench, the first comprehensive Chinese evaluation suite that covers all the necessary knowledge required for graduate entrance exams. psybench offers a deep evaluation of a model's strengths and weaknesses in psychology through multiple-choice questions. Our findings show significant performance differences across different sections of a subject, highlighting the risk of skewed results when the knowledge in test sets is unbalanced. Notably, only the ChatGPT model reaches an average accuracy above $70\%$, indicating that there is still plenty of room for improvement. We expect psybench to support thorough evaluations of base models' strengths and weaknesses and to assist practical application in the field of psychology.

The versatility of Large Language Models (LLMs) on natural language understanding tasks has made them popular in social science research. In particular, to properly understand the properties and innate personas of LLMs, researchers have performed studies using prompts in the form of questions that ask LLMs for their opinions. In this study, we take a cautionary step back and examine whether the current format of prompting enables LLMs to respond in a consistent and robust manner. We first construct a dataset of 693 questions encompassing 39 different instruments of persona measurement on 115 persona axes. Additionally, we design a set of prompts containing minor variations to examine LLMs' ability to generate accurate answers, as well as consistency variations to examine their robustness to simple perturbations such as switching the option order. Our experiments on 15 different open-source LLMs reveal that even simple perturbations are sufficient to significantly degrade a model's question-answering ability, and that most LLMs have low negation consistency. Our results suggest that the currently widespread practice of prompting is insufficient to accurately capture model perceptions, and we discuss potential alternatives to address these issues.
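One of the simple perturbations studied, switching the option order, can be generated mechanically. A hypothetical sketch (illustrative, not the paper's exact prompt template):

```python
from itertools import permutations

def option_order_variants(question: str, options: list) -> list:
    # Render the same multiple-choice question under every ordering
    # of its options; a consistent model should select the same
    # underlying answer in each variant.
    variants = []
    for perm in permutations(options):
        body = "\n".join(f"{chr(ord('A') + i)}. {opt}"
                         for i, opt in enumerate(perm))
        variants.append(f"{question}\n{body}")
    return variants
```

Comparing a model's chosen option content (rather than its letter) across such variants gives a direct measure of order consistency.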

Orthogonal meta-learners, such as the DR-learner, R-learner and IF-learner, are increasingly used to estimate conditional average treatment effects. They improve convergence rates relative to na\"{\i}ve meta-learners (e.g., the T-, S- and X-learner) through de-biasing procedures that involve applying standard learners to specifically transformed outcome data. This leads them to disregard the possibly constrained outcome space, which can be particularly problematic for dichotomous outcomes: these typically get transformed to values that are no longer constrained to the unit interval, making it difficult for standard learners to guarantee predictions within the unit interval. To address this, we construct orthogonal meta-learners for the prediction of counterfactual outcomes which respect the outcome space. As such, the obtained i-learner or imputation-learner is more generally expected to outperform existing learners, even when the outcome is unconstrained, as we confirm empirically in simulation studies and an analysis of critical care data. Our development also sheds broader light on the construction of orthogonal learners for other estimands.
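The unit-interval problem can be seen directly in the DR-learner's pseudo-outcome (the AIPW transformation): even when the outcome $Y$ is binary, the transformed value can lie far outside $[0,1]$. A sketch with generic symbols ($a$ is the treatment indicator, $e$ the propensity score, $\mu_0, \mu_1$ the outcome regressions):

```python
def dr_pseudo_outcome(y: float, a: int, e: float,
                      mu0: float, mu1: float) -> float:
    # Doubly robust (AIPW) pseudo-outcome used by the DR-learner:
    # its conditional mean equals the CATE, but individual values
    # are not constrained to the outcome space.
    return (mu1 - mu0
            + a * (y - mu1) / e
            - (1 - a) * (y - mu0) / (1 - e))
```

For instance, with $y=1$, $a=1$, $e=0.1$ and $\mu_0=\mu_1=0.5$, the pseudo-outcome is $5.0$, well outside the unit interval; avoiding this inverse-propensity blow-up is exactly what imputation-style learners are designed for.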

Deep biasing for the Transducer can improve the recognition performance of rare words or contextual entities, which is essential in practical applications, especially for streaming Automatic Speech Recognition (ASR). However, deep biasing with large-scale rare words remains challenging, as performance drops significantly when more distractors exist and the bias list contains words with similar grapheme sequences. In this paper, we combine the phoneme and textual information of rare words in Transducers to distinguish words with similar pronunciation or spelling. Moreover, introducing training with text-only data containing more rare words benefits large-scale deep biasing. Experiments on the LibriSpeech corpus demonstrate that the proposed method achieves state-of-the-art rare word error rates across different scales and levels of bias lists.

De Finetti theorems tell us that if we expect the likelihood of outcomes to be independent of their order, then these sequences of outcomes could be equivalently generated by drawing an experiment at random from a distribution, and repeating it over and over. In particular, the quantum de Finetti theorem says that exchangeable sequences of quantum states are always represented by distributions over a single state produced over and over. The main result of this paper is that this quantum de Finetti construction has a universal property as a categorical limit. This allows us to pass canonically between categorical treatments of finite-dimensional quantum theory and its infinite-dimensional counterpart. The treatment here proceeds by understanding properties of (co)limits with respect to the contravariant functor which takes a C*-algebra describing a physical system to its convex, compact space of states, and through discussion of the Radon probability monad. We also show that the same categorical analysis justifies a continuous de Finetti theorem for classical probability.

Tracking the fundamental frequency (f0) of a monophonic instrumental performance is effectively a solved problem with several solutions achieving 99% accuracy. However, the related task of automatic music transcription requires a further processing step to segment an f0 contour into discrete notes. This sub-task of note segmentation is necessary to enable a range of applications including musicological analysis and symbolic music generation. Building on CREPE, a state-of-the-art monophonic pitch tracking solution based on a simple neural network, we propose a simple and effective method for post-processing CREPE's output to achieve monophonic note segmentation. The proposed method demonstrates state-of-the-art results on two challenging datasets of monophonic instrumental music. Our approach also gives a 97% reduction in the total number of parameters used when compared with other deep learning based methods.
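As an illustration of the note-segmentation sub-task, a naive threshold baseline (not the paper's CREPE-based method) groups consecutive f0 frames into a note until the pitch drifts more than a tolerance in cents from the note's start:

```python
import math

def segment_notes(f0, tol_cents: float = 50.0, min_len: int = 3):
    # Naive segmentation of a monophonic f0 contour (Hz per frame):
    # open a new note whenever the pitch moves more than tol_cents
    # from the current note's first frame; drop very short segments.
    notes, start = [], 0
    for i in range(1, len(f0)):
        cents = abs(1200.0 * math.log2(f0[i] / f0[start]))
        if cents > tol_cents:
            if i - start >= min_len:
                notes.append((start, i))
            start = i
    if len(f0) - start >= min_len:
        notes.append((start, len(f0)))
    return notes
```

For example, five frames at 440 Hz (A4) followed by five at 494 Hz (B4) segment into two notes; real contours need the vibrato- and noise-aware post-processing the paper proposes.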

By establishing an interesting connection between ordinary Bell polynomials and rational convolution powers, some composition and inverse relations of Bell polynomials, as well as explicit expressions for convolution roots of sequences, are obtained. Based on these results, a new method, based on prime factorization, is proposed for computing partial Bell polynomials. It is shown that this method is more efficient than the conventional recurrence procedure for computing Bell polynomials in most cases, requiring far fewer arithmetic operations. A detailed analysis of the computational complexity is provided, followed by some numerical evaluations.
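For reference, the conventional recurrence that the proposed prime-factorization method is compared against computes the partial Bell polynomials via $B_{n,k} = \sum_{i=1}^{n-k+1} \binom{n-1}{i-1}\, x_i\, B_{n-i,k-1}$. A direct sketch of that baseline:

```python
from math import comb
from functools import cache

@cache
def partial_bell(n: int, k: int, x: tuple) -> int:
    # Conventional recurrence for the partial Bell polynomial B_{n,k}
    # evaluated at x = (x_1, ..., x_n):
    #   B_{0,0} = 1,  B_{n,0} = B_{0,k} = 0  for n, k >= 1,
    #   B_{n,k} = sum_{i=1}^{n-k+1} C(n-1, i-1) x_i B_{n-i, k-1}.
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return sum(comb(n - 1, i - 1) * x[i - 1] * partial_bell(n - i, k - 1, x)
               for i in range(1, n - k + 2))
```

Setting every $x_i = 1$ recovers the Stirling numbers of the second kind, e.g. $B_{4,2}(1,1,1,1) = S(4,2) = 7$, which gives a quick correctness check.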

Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs as users without it.
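A CheckList-style invariance test can be sketched in a few lines (the model and perturbation pairs below are hypothetical toys; the actual tool provides templating and many test types):

```python
def invariance_test(model, pairs):
    # CheckList-style invariance (INV) test: the prediction should
    # be unchanged under a label-preserving perturbation of the input.
    return [(orig, pert) for orig, pert in pairs
            if model(orig) != model(pert)]

# Hypothetical toy sentiment model and perturbation pairs.
toy_model = lambda s: "pos" if "good" in s else "neg"
pairs = [("a good movie", "a truly good movie"),  # paraphrase
         ("good acting", "gud acting")]           # typo perturbation
```

Running `invariance_test(toy_model, pairs)` flags the typo pair, since the toy model's prediction flips on the perturbed input.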
