
An automaton is unambiguous if for every input it has at most one accepting computation. An automaton is $k$-ambiguous (for $k > 0$) if for every input it has at most $k$ accepting computations. An automaton is boundedly ambiguous if it is $k$-ambiguous for some $k \in \mathbb{N}$. An automaton is finitely (respectively, countably) ambiguous if for every input it has at most finitely (respectively, countably) many accepting computations. The degree of ambiguity of a regular language is defined in a natural way: a language is $k$-ambiguous (respectively, boundedly, finitely, countably ambiguous) if it is accepted by a $k$-ambiguous (respectively, boundedly, finitely, countably ambiguous) automaton. Over finite words every regular language is accepted by a deterministic automaton. Over finite trees every regular language is accepted by an unambiguous automaton. Over $\omega$-words every regular language is accepted by an unambiguous B\"uchi automaton and by a deterministic parity automaton. Over infinite trees, Carayol et al. showed that there are regular languages not accepted by any unambiguous automaton. We show that over infinite trees there is a hierarchy of degrees of ambiguity: for every $k > 1$ there are $k$-ambiguous languages that are not $(k-1)$-ambiguous, and there are finitely (respectively, countably, uncountably) ambiguous languages that are not boundedly (respectively, finitely, countably) ambiguous.
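To make the notion of degree of ambiguity concrete, the minimal sketch below (illustrative names, finite-word NFAs only, not the tree automata studied in the paper) counts the accepting computations of a nondeterministic automaton on a given word; an automaton is unambiguous precisely when this count never exceeds 1, and $k$-ambiguous when it never exceeds $k$.

```python
# Minimal illustration (finite-word NFAs, not the tree automata of the paper):
# count the accepting computations of an NFA on a word.
def count_accepting_runs(delta, initial, accepting, word):
    """delta maps (state, letter) to a set of successor states."""
    def runs_from(state, suffix):
        if not suffix:
            return 1 if state in accepting else 0
        return sum(runs_from(q, suffix[1:])
                   for q in delta.get((state, suffix[0]), ()))
    return sum(runs_from(q0, word) for q0 in initial)

# An NFA with two accepting runs on the word "a": from q0, the letter "a"
# can lead to either of the accepting states q1 or q2 (so it is 2-ambiguous).
delta = {("q0", "a"): {"q1", "q2"}}
print(count_accepting_runs(delta, {"q0"}, {"q1", "q2"}, "a"))  # -> 2
```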

Related Content

We consider statistical inference in the density estimation model using a tree-based Bayesian approach, with Optional P\'olya trees as prior distribution. We derive near-optimal convergence rates for corresponding posterior distributions with respect to the supremum norm. For broad classes of H\"older-smooth densities, we show that the method automatically adapts to the unknown H\"older regularity parameter. We consider the question of uncertainty quantification by providing mathematical guarantees for credible sets from the obtained posterior distributions, leading to near-optimal uncertainty quantification for the density function, as well as related functionals such as the cumulative distribution function. The results are illustrated through a brief simulation study.
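As a generic illustration of the kind of uncertainty quantification discussed here (not the Optional P\'olya tree construction of the paper; all names and the draw-generating step are assumptions), the sketch below forms a sup-norm credible band for a density from posterior draws evaluated on a common grid.

```python
import numpy as np

def supnorm_credible_band(posterior_draws, level=0.95):
    """Sup-norm credible band from posterior density draws on a common grid.

    posterior_draws: array of shape (n_draws, n_grid), each row a density
    evaluated on the grid. Returns (center, radius) so that the band
    center +/- radius contains a `level` fraction of the draws uniformly.
    """
    center = posterior_draws.mean(axis=0)                        # posterior mean curve
    sup_dev = np.max(np.abs(posterior_draws - center), axis=1)   # sup-norm distances
    radius = np.quantile(sup_dev, level)                         # radius covering `level`
    return center, radius

# Hypothetical usage with draws from some posterior sampler:
# center, radius = supnorm_credible_band(draws)
# band = (center - radius, center + radius)
```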

We investigate the connection between the complexity of nonlocal games and the arithmetical hierarchy, a classification of languages according to the complexity of arithmetical formulas defining them. It was recently shown by Ji, Natarajan, Vidick, Wright and Yuen that deciding whether the (finite-dimensional) quantum value of a nonlocal game is $1$ or at most $\frac{1}{2}$ is complete for the class $\Sigma_1$ (i.e., $\mathsf{RE}$). A result of Slofstra implies that deciding whether the commuting operator value of a nonlocal game is equal to $1$ is complete for the class $\Pi_1$ (i.e., $\mathsf{coRE}$). We prove that deciding whether the quantum value of a two-player nonlocal game is exactly equal to $1$ is complete for $\Pi_2$; this class is in the second level of the arithmetical hierarchy and corresponds to formulas of the form ``$\forall x \, \exists y \, \phi(x,y)$''. This shows that exactly computing the quantum value is strictly harder than approximating it, and also strictly harder than computing the commuting operator value (either exactly or approximately). We explain how results about the complexity of nonlocal games all follow in a unified manner from a technique known as compression. At the core of our $\Pi_2$-completeness result is a new ``gapless'' compression theorem that holds for both quantum and commuting operator strategies. Our compression theorem yields as a byproduct an alternative proof of Slofstra's result that the set of quantum correlations is not closed. We also show how a ``gap-preserving'' compression theorem for commuting operator strategies would imply that approximating the commuting operator value is complete for $\Pi_1$.
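For readers less familiar with the quantities being compared, the (tensor-product) quantum value of a two-player nonlocal game $G$ with question distribution $\mu$ and winning predicate $V$ is, in standard notation (not taken from the paper),
$$\omega^*(G) \;=\; \sup_{|\psi\rangle,\,\{A^x_a\},\,\{B^y_b\}} \;\sum_{x,y}\mu(x,y)\sum_{a,b} V(a,b\,|\,x,y)\,\langle\psi|\,A^x_a\otimes B^y_b\,|\psi\rangle,$$
where the supremum is over finite-dimensional shared states and local measurements; the commuting operator value is defined analogously with commuting measurements acting on a single, possibly infinite-dimensional, Hilbert space. The results above concern deciding $\omega^*(G)=1$ exactly versus distinguishing $\omega^*(G)=1$ from $\omega^*(G)\le\frac{1}{2}$.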

Persistence modules have a natural home in the setting of stratified spaces and constructible cosheaves. In this article, we first give explicit constructible cosheaves for common data-motivated persistence modules, namely, for modules that arise from zig-zag filtrations (including monotone filtrations), and for augmented persistence modules (which encode the data of instantaneous events). We then identify an equivalence of categories between a particular notion of zig-zag modules and the combinatorial entrance path category on stratified $\mathbb{R}$. Finally, we compute the algebraic $K$-theory of generalized zig-zag modules and describe connections to both Euler curves and $K_0$ of the monoid of persistence diagrams as described by Bubenik and Elchesen.
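As a point of reference (a generic example, not a construction from the article), a zig-zag module is a finite diagram of vector spaces and linear maps whose directions alternate,
$$V_1 \longrightarrow V_2 \longleftarrow V_3 \longrightarrow V_4 \longleftarrow V_5,$$
with monotone filtrations recovered as the special case in which all arrows point the same way.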

The classical (parallel) black pebbling game is a useful abstraction which allows us to analyze the resources (space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$. Of particular interest in the field of cryptography are data-independent memory-hard functions $f_{G,H}$ which are defined by a directed acyclic graph (DAG) $G$ and a cryptographic hash function $H$. The pebbling complexity of the graph $G$ characterizes the amortized cost of evaluating $f_{G,H}$ multiple times, as well as the total cost to run a brute-force preimage attack over a fixed domain $\mathcal{X}$, i.e., given $y \in \{0,1\}^*$, find $x \in \mathcal{X}$ such that $f_{G,H}(x)=y$. While a classical attacker needs to evaluate the function $f_{G,H}$ at least $m=|\mathcal{X}|$ times, a quantum attacker running Grover's algorithm only requires $\mathcal{O}(\sqrt{m})$ blackbox calls to a quantum circuit $C_{G,H}$ evaluating the function $f_{G,H}$. Thus, to analyze the cost of a quantum attack it is crucial to understand the space-time cost (equivalently width times depth) of the quantum circuit $C_{G,H}$. We first observe that a legal black pebbling strategy for the graph $G$ does not necessarily imply the existence of a quantum circuit with comparable complexity -- in contrast to the classical setting where any efficient pebbling strategy for $G$ corresponds to an algorithm with comparable complexity evaluating $f_{G,H}$. Motivated by this observation we introduce a new (parallel) quantum pebbling game which captures additional restrictions imposed by the No-Deletion Theorem in Quantum Computing. We apply our new quantum pebbling game to analyze the quantum space-time complexity of several important graphs: the line graph, Argon2i-A, Argon2i-B, and DRSample. (See the paper for the full abstract.)
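As background for the pebbling terminology, here is a sketch of the classical black pebbling game only, with illustrative names; the paper's quantum pebbling game additionally restricts when pebbles may be removed, reflecting the No-Deletion Theorem.

```python
# Sketch: legality and space-time cost of a classical (sequential) black
# pebbling of a DAG. A pebble may be placed on v only if all parents of v
# are pebbled; pebbles may be removed at any time (a freedom the quantum
# pebbling game restricts).
def pebbling_cost(parents, moves, target):
    """parents: dict node -> set of parents; moves: list of (op, node),
    op in {"place", "remove"}. Returns (max_space, time)."""
    pebbled, max_space = set(), 0
    for op, v in moves:
        if op == "place":
            assert parents[v] <= pebbled, f"illegal placement on {v}"
            pebbled.add(v)
        else:
            pebbled.discard(v)
        max_space = max(max_space, len(pebbled))
    assert target in pebbled, "target node not pebbled at the end"
    return max_space, len(moves)

# Line graph 0 -> 1 -> 2 -> 3: keep two pebbles and slide them forward.
parents = {0: set(), 1: {0}, 2: {1}, 3: {2}}
moves = [("place", 0), ("place", 1), ("remove", 0),
         ("place", 2), ("remove", 1), ("place", 3)]
print(pebbling_cost(parents, moves, target=3))  # (2, 6): space 2, time 6
```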

Research on quantum technology spans multiple disciplines: physics, computer science, engineering, and mathematics. The objective of this manuscript is to provide an accessible introduction to this emerging field for economists that is centered around quantum computing and quantum money. We proceed in three steps. First, we discuss basic concepts in quantum computing and quantum communication, assuming knowledge of linear algebra and statistics, but not of computer science or physics. This covers fundamental topics, such as qubits, superposition, entanglement, quantum circuits, oracles, and the no-cloning theorem. Second, we provide an overview of quantum money, an early invention of the quantum communication literature that has recently been partially implemented in an experimental setting. One form of quantum money offers the privacy and anonymity of physical cash, the option to transact without the involvement of a third party, and the efficiency and convenience of a debit card payment. Such features cannot be achieved in combination with any other form of money. Finally, we review all existing quantum speedups that have been identified for algorithms used to solve and estimate economic models. This includes function approximation, linear systems analysis, Monte Carlo simulation, matrix inversion, principal component analysis, linear regression, interpolation, numerical differentiation, and true random number generation. We also discuss the difficulty of achieving quantum speedups and comment on common misconceptions about what is achievable with quantum computing.
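To give a flavor of the linear-algebra viewpoint described here (a generic illustration, not code from the manuscript): a qubit is a unit vector in $\mathbb{C}^2$, and measurement probabilities are squared amplitudes.

```python
import numpy as np

# A qubit is a unit vector in C^2; |0> = (1,0), |1> = (0,1).
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# Equal superposition (|0> + |1>)/sqrt(2), produced by the Hadamard gate.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Born rule: measuring in the computational basis yields outcome b with
# probability |<b|psi>|^2 -- here 1/2 for each outcome.
print(np.abs(psi) ** 2)   # [0.5 0.5]

# A two-qubit entangled (Bell) state: (|00> + |11>)/sqrt(2).
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.abs(bell) ** 2)  # [0.5 0.  0.  0.5]
```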

This study investigates whether phonological features can be used as input to text-to-speech systems to generate native and non-native speech. In this paper, we present a mapping from ARPABET/pinyin to SAMPA/SAMPA-SC to phonological features, and test whether native, non-native, and code-switched speech can be successfully generated using this mapping. We ran two experiments, one with a small dataset and one with a larger dataset. The results show that phonological features are a feasible input representation, although further investigation is needed to improve model performance. The accented output generated by the TTS models also helps with understanding human second language acquisition processes.
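A toy sketch of what such a two-stage mapping could look like; the symbols and feature values below are illustrative guesses, not the mapping used in the study.

```python
# Illustrative two-stage lookup: phone label -> SAMPA symbol -> bundle of
# phonological features. All values here are examples only.
ARPABET_TO_SAMPA = {
    "AA": "A",   # as in "father"
    "IY": "i",   # as in "fleece"
    "P":  "p",
    "N":  "n",
}

SAMPA_TO_FEATURES = {
    "A": {"syllabic": 1, "high": 0, "low": 1, "back": 1, "voice": 1},
    "i": {"syllabic": 1, "high": 1, "low": 0, "back": 0, "voice": 1},
    "p": {"syllabic": 0, "consonantal": 1, "labial": 1, "voice": 0},
    "n": {"syllabic": 0, "consonantal": 1, "nasal": 1, "voice": 1},
}

def phones_to_features(arpabet_phones):
    """Map a sequence of ARPABET labels to feature bundles via SAMPA."""
    return [SAMPA_TO_FEATURES[ARPABET_TO_SAMPA[p]] for p in arpabet_phones]

print(phones_to_features(["P", "AA", "N"]))  # feature bundles for /p A n/
```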

Bayesian networks are popular probabilistic models that capture the conditional dependencies among a set of variables. Inference in Bayesian networks is a fundamental task for answering probabilistic queries over a subset of variables in the data. However, exact inference in Bayesian networks is NP-hard, which has prompted the development of many practical inference methods. In this paper, we focus on improving the performance of the junction-tree algorithm, a well-known method for exact inference in Bayesian networks. In particular, we seek to leverage information in the workload of probabilistic queries to obtain an optimal workload-aware materialization of junction trees, with the aim of accelerating the processing of inference queries. We devise an optimal pseudo-polynomial algorithm to tackle this problem and discuss approximation schemes. Compared to state-of-the-art approaches for efficient processing of inference queries via junction trees, our methods are the first to exploit the information provided in query workloads. Our experiments on several real-world Bayesian networks confirm the effectiveness of our techniques in speeding up query processing.
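One plausible way to read "optimal pseudo-polynomial algorithm" is a knapsack-style dynamic program that chooses which clique marginals to materialize under a space budget so as to maximize benefit over the query workload; the formulation below is our own illustrative guess, not the algorithm from the paper.

```python
# Illustrative knapsack-style DP (assumed formulation): pick a subset of
# candidate materializations, each with an integer space cost and a benefit
# (e.g., workload time saved), maximizing total benefit within a budget.
def materialize(candidates, budget):
    """candidates: list of (cost, benefit); budget: integer space budget.
    Returns the maximum achievable benefit (pseudo-polynomial in budget)."""
    best = [0] * (budget + 1)
    for cost, benefit in candidates:
        for b in range(budget, cost - 1, -1):   # iterate downwards: 0/1 choice
            best[b] = max(best[b], best[b - cost] + benefit)
    return best[budget]

# Hypothetical clique marginals given as (space cost, workload benefit):
print(materialize([(4, 10), (3, 7), (2, 4), (5, 11)], budget=7))  # -> 17
```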

We study the problem of the embedding degree of an abelian variety over a finite field which is vital in pairing-based cryptography. In particular, we show that for a prescribed CM field $L$ of degree $\geq 4$, prescribed integers $m$, $n$ and any prime $\ell\equiv 1 \mod{mn}$ that splits completely in $L$, there exists an ordinary abelian variety over a prime finite field with endomorphism algebra $L$, embedding degree $n$ with respect to $\ell$ and the field extension generated by the $\ell$-torsion points of degree $mn$ over the field of definition. We also study a class of absolutely simple higher dimensional abelian varieties whose endomorphism algebras are central over imaginary quadratic fields.
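For context (a standard definition, not the constructions of the paper): the embedding degree with respect to a prime $\ell$ of an abelian variety over $\mathbb{F}_q$ is the smallest $n \geq 1$ with $\ell \mid q^n - 1$, i.e., the multiplicative order of $q$ modulo $\ell$. A direct computation of this order might look as follows (illustrative code).

```python
def embedding_degree(q, ell):
    """Smallest n >= 1 with ell dividing q**n - 1, i.e., the multiplicative
    order of q modulo ell (assumes gcd(q, ell) == 1)."""
    n, power = 1, q % ell
    while power != 1:
        power = (power * q) % ell
        n += 1
    return n

# Example: 101 = -1 (mod 17), so the embedding degree with respect to 17 is 2.
print(embedding_degree(101, 17))  # -> 2
```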

This paper studies task-adaptive pre-trained model selection, an \emph{underexplored} problem of assessing pre-trained models so that models suitable for the task can be selected from the model zoo without fine-tuning. A pilot work~\cite{nguyen_leep:_2020} addressed the problem of transferring supervised pre-trained models to classification tasks, but it cannot handle emerging unsupervised pre-trained models or regression tasks. In pursuit of a practical assessment method, we propose to estimate the maximum evidence (marginalized likelihood) of labels given features extracted by pre-trained models. The maximum evidence is \emph{less prone to over-fitting} than the likelihood, and its \emph{expensive computation can be dramatically reduced} by our carefully designed algorithm. The Logarithm of Maximum Evidence (LogME) can be used to assess pre-trained models for transfer learning: a pre-trained model with high LogME is likely to have good transfer performance. LogME is fast, accurate, and general, making it \emph{the first practical assessment method for transfer learning}. Compared to brute-force fine-tuning, LogME brings over $3000\times$ speedup in wall-clock time. It outperforms prior methods by a large margin in their setting and is applicable to new settings that prior methods cannot handle. It is general enough to cover diverse pre-trained models (supervised and unsupervised), downstream tasks (classification and regression), and modalities (vision and language). Code is at \url{//github.com/thuml/LogME}.
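A heavily simplified sketch of what "maximum evidence of labels given features" means: the code below evaluates the log marginal likelihood of a Bayesian linear model with the prior and noise precisions $\alpha, \beta$ held fixed, whereas LogME maximizes the evidence over these hyperparameters with a dedicated algorithm; this is an illustration of the evidence formula, not the paper's method.

```python
import numpy as np

def log_evidence(F, y, alpha=1.0, beta=1.0):
    """Log marginal likelihood of labels y under Bayesian linear regression
    on features F (n x d), with prior precision alpha and noise precision
    beta held fixed (LogME instead maximizes over alpha and beta)."""
    n, d = F.shape
    A = alpha * np.eye(d) + beta * F.T @ F       # posterior precision of weights
    m = beta * np.linalg.solve(A, F.T @ y)       # posterior mean of weights
    residual = y - F @ m
    return (0.5 * (d * np.log(alpha) + n * np.log(beta) - n * np.log(2 * np.pi))
            - 0.5 * beta * residual @ residual
            - 0.5 * alpha * m @ m
            - 0.5 * np.linalg.slogdet(A)[1])

# Hypothetical usage: a higher log-evidence of the labels given a model's
# extracted features suggests better transferability of that model.
```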

We consider the task of inferring is-a relationships from large text corpora. For this purpose, we propose a new method combining hyperbolic embeddings and Hearst patterns. This approach allows us to set appropriate constraints for inferring concept hierarchies from distributional contexts while also being able to predict missing is-a relationships and to correct wrong extractions. Moreover -- and in contrast with other methods -- the hierarchical nature of hyperbolic space allows us to learn highly efficient representations and to improve the taxonomic consistency of the inferred hierarchies. Experimentally, we show that our approach achieves state-of-the-art performance on several commonly-used benchmarks.
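As a reminder of the geometry involved (a standard formula, not code from the paper): in the Poincar\'e ball model used by hyperbolic embeddings, distances grow rapidly near the boundary, which is what lets tree-like hierarchies embed with low distortion.

```python
import numpy as np

def poincare_distance(u, v):
    """Distance between two points of norm < 1 in the Poincare ball model."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

# Points near the origin behave almost Euclidean; points near the boundary
# are far apart, mirroring the branching structure of a hierarchy.
print(poincare_distance(np.array([0.0, 0.0]), np.array([0.5, 0.0])))
print(poincare_distance(np.array([0.9, 0.0]), np.array([0.0, 0.9])))
```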
