Any class of languages $\mathbf{L}$ accepted in time $\mathbf{T}$ has a counterpart $\mathbf{NL}$ accepted in nondeterministic time $\mathbf{NT}$. It follows from the definition of nondeterministic languages that $\mathbf{L} \subseteq \mathbf{NL}$. This work shows that every sufficiently powerful language in $\mathbf{L}$ contains a string corresponding to G\"{o}del's undecidable proposition, but this string is not contained in its nondeterministic counterpart. This inconsistency in the definition of nondeterministic languages shows that certain questions regarding nondeterministic time complexity equivalences are irrevocably ill-posed.
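
The containment claim is the standard simulation argument (not specific to this work): a deterministic machine is just a nondeterministic machine whose transition relation happens to be single-valued, so any language decided deterministically in time $\mathbf{T}$ is also decided nondeterministically in time $\mathbf{T}$, and hence $\mathbf{L} \subseteq \mathbf{NL}$.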

Related Content

In this article, we deal with the efficient computation of the Wright function in the cases of interest for expressing solutions of some fractional differential equations. The proposed algorithm is based on the inversion of the Laplace transform of a particular expression of the Wright function, for which we discuss the error analysis in detail. We also present a code package that implements the proposed algorithm in different programming languages. The analysis and implementation are accompanied by an extensive set of numerical experiments that validate both the theoretical error estimates and the applicability of the proposed method for representing the solutions of fractional differential equations.
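
The paper's algorithm inverts a Laplace-transform representation of the Wright function; as a minimal baseline for checking such an implementation (my own sketch, not the paper's code package), the defining series $W_{\lambda,\mu}(z)=\sum_{k\ge 0} z^k/(k!\,\Gamma(\lambda k+\mu))$ can be summed directly for moderate $|z|$:

```python
import numpy as np
from scipy.special import gammaln

def wright_series(z, lam, mu, kmax=200):
    """Direct summation of W_{lam,mu}(z) = sum_k z^k / (k! * Gamma(lam*k + mu)).

    Assumes lam >= 0 and mu > 0 so Gamma's argument stays positive; practical
    only for moderate |z|, which is exactly where series summation is viable
    and transform-inversion methods are not yet needed.
    Log-gamma keeps the individual terms from overflowing.
    """
    k = np.arange(kmax)
    log_terms = k * np.log(np.abs(z) + 1e-300) - gammaln(k + 1) - gammaln(lam * k + mu)
    signs = np.sign(z) ** k          # alternating signs when z < 0
    return float(np.sum(signs * np.exp(log_terms)))

# Sanity check: W_{0,1}(z) = e^z, so this should print ~2.718281828
print(wright_series(1.0, 0.0, 1.0))
```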

We present a Large Eddy Simulation (LES) approach based on a nonlinear differential low-pass filter for the simulation of two-dimensional barotropic flows on under-refined meshes. To implement this model, we choose a segregated three-step algorithm combined with a computationally efficient Finite Volume method. We assess the performance of our approach on the classical double-gyre wind forcing benchmark. The numerical experiments we present demonstrate that our nonlinear filter improves on a linear filter: it is able to recover the four-gyre pattern of the time-averaged stream function even with extremely coarse meshes. In addition, our LES approach provides an average kinetic energy that compares well with the one computed with a Direct Numerical Simulation.
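
A minimal 1D sketch of a nonlinear differential low-pass filter of this flavor, assuming a Helmholtz-like form $\bar{u} - \partial_x\left(a(u)\,\delta^2\,\partial_x \bar{u}\right) = u$ with a gradient-based indicator $a(u) \in [0,1]$ (the paper's indicator, 2D setting, and Finite Volume discretization all differ; everything here is illustrative):

```python
import numpy as np

def nonlinear_filter_1d(u, dx, delta, eps=1e-12):
    """Solve  u_bar - d/dx( a(u) * delta^2 * d(u_bar)/dx ) = u  on a
    periodic 1D grid. The indicator a(u) is ~1 where |du/dx| is large
    (filtering on) and ~0 in smooth regions (filtering off), which is
    what lets a nonlinear filter avoid over-smoothing large scales."""
    n = u.size
    grad = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    a = np.abs(grad) / (np.abs(grad).max() + eps)
    A = np.eye(n)
    coef = delta**2 / dx**2
    for i in range(n):
        am = coef * 0.5 * (a[i] + a[i - 1])         # face value at i-1/2
        ap = coef * 0.5 * (a[i] + a[(i + 1) % n])   # face value at i+1/2
        A[i, i] += am + ap
        A[i, i - 1] -= am
        A[i, (i + 1) % n] -= ap
    return np.linalg.solve(A, u)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.random.randn(x.size)       # large scale + noise
u_bar = nonlinear_filter_1d(u, x[1] - x[0], delta=0.1)
```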

We introduce a universal class of geometric deep learning models, called metric hypertransformers (MHTs), capable of approximating any adapted map $F:\mathscr{X}^{\mathbb{Z}}\rightarrow \mathscr{Y}^{\mathbb{Z}}$ with approximable complexity, where $\mathscr{X}\subseteq \mathbb{R}^d$ and $\mathscr{Y}$ is any suitable metric space, and $\mathscr{X}^{\mathbb{Z}}$ (resp. $\mathscr{Y}^{\mathbb{Z}}$) capture all discrete-time paths on $\mathscr{X}$ (resp. $\mathscr{Y}$). Suitable spaces $\mathscr{Y}$ include various (adapted) Wasserstein spaces, all Fr\'{e}chet spaces admitting a Schauder basis, and a variety of Riemannian manifolds arising from information geometry. Even in the static case, where $f:\mathscr{X}\rightarrow \mathscr{Y}$ is a H\"{o}lder map, our results provide the first (quantitative) universal approximation theorem compatible with any such $\mathscr{X}$ and $\mathscr{Y}$. Our universal approximation theorems are quantitative, and they depend on the regularity of $F$, the choice of activation function, the metric entropy and diameter of $\mathscr{X}$, and on the regularity of the compact set of paths whereon the approximation is performed. Our guiding examples originate from mathematical finance. Notably, the MHT models introduced here are able to approximate a broad range of stochastic processes' kernels, including solutions to SDEs, many processes with arbitrarily long memory, and functions mapping sequential data to sequences of forward rate curves.

Most popular dimension reduction (DR) methods, like t-SNE and UMAP, are based on minimizing a cost between input and latent pairwise similarities. Though widely used, these approaches lack clear probabilistic foundations that would enable a full understanding of their properties and limitations. To this end, we introduce a unifying statistical framework based on the coupling of hidden graphs using cross entropy. These graphs induce a Markov random field dependency structure among the observations in both the input and latent spaces. We show that existing pairwise-similarity DR methods can be recovered from our framework with particular choices of priors for the graphs. Moreover, this reveals that these methods suffer from a statistical deficiency that explains their poor performance in preserving coarse-grained dependencies. Our model is leveraged and extended to address this issue, while new links are drawn with Laplacian eigenmaps and PCA.
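
The common core that such methods minimize can be sketched as a cross entropy between input and latent similarity matrices; the toy below uses Gaussian kernels on both sides (roughly symmetric SNE, not the paper's graph-coupling model, and all names are mine):

```python
import numpy as np

def affinities(X, sigma=1.0):
    """Gaussian pairwise similarities, normalized over all pairs."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(P, 0.0)
    return P / P.sum()

def cross_entropy_step(X, Z, lr=0.5):
    """One gradient step on sum_ij -P_ij log Q_ij; the gradient below is
    the symmetric-SNE form, up to constants absorbed into lr."""
    P, Q = affinities(X), affinities(Z)
    grad = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        grad[i] = 4 * np.sum((P[i] - Q[i])[:, None] * (Z[i] - Z), axis=0)
    return Z - lr * grad

X = np.random.randn(50, 10)            # input data
Z = 1e-2 * np.random.randn(50, 2)      # latent initialization
for _ in range(100):
    Z = cross_entropy_step(X, Z)
```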

We study a syntax for specifying quantitative "assertions" - functions mapping program states to numbers - for probabilistic program verification. We prove that our syntax is expressive in the following sense: given any probabilistic program $C$, if a function $f$ is expressible in our syntax, then the function mapping each initial state $\sigma$ to the expected value of $f$ evaluated in the final states reached after termination of $C$ on $\sigma$ (also called the weakest preexpectation $\textit{wp}[C](f)$) is also expressible in our syntax. As a consequence, we obtain a verification system for reasoning about expected values and probabilities that is relatively complete in the sense of Cook: given $f$, $g$, and $C$, checking whether $g \preceq \textit{wp}[C](f)$ reduces to proving a single inequality between two functions given by syntactic expressions in our language.
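
To make the expected-value reading of $\textit{wp}$ concrete, here is a toy evaluator for a four-construct probabilistic language (the language, encoding, and names are mine and far smaller than the paper's syntax):

```python
from fractions import Fraction

# Commands: ("skip",), ("assign", var, expr), ("seq", c1, c2),
# ("choice", p, c1, c2) for the probabilistic choice  c1 [p] c2.
# wp(C, f) returns the function mapping an initial state to the
# expected value of f over the final states of C.

def wp(C, f):
    tag = C[0]
    if tag == "skip":
        return f
    if tag == "assign":
        _, var, expr = C
        return lambda s: f({**s, var: expr(s)})
    if tag == "seq":                     # wp[C1;C2](f) = wp[C1](wp[C2](f))
        _, c1, c2 = C
        return wp(c1, wp(c2, f))
    if tag == "choice":                  # p*wp[C1](f) + (1-p)*wp[C2](f)
        _, p, c1, c2 = C
        g1, g2 = wp(c1, f), wp(c2, f)
        return lambda s: p * g1(s) + (1 - p) * g2(s)
    raise ValueError(f"unknown command {tag}")

# Expected value of x after  x := x+1 [1/3] x := 0,  started at x = 4:
prog = ("choice", Fraction(1, 3),
        ("assign", "x", lambda s: s["x"] + 1),
        ("assign", "x", lambda s: 0))
print(wp(prog, lambda s: s["x"])({"x": 4}))   # 1/3 * 5 + 2/3 * 0 = 5/3
```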

Recent research has examined algorithms to minimize robots' resource footprints. The class of combinatorial filters (discrete variants of widely-used probabilistic estimators) has been studied, and methods for reducing their space requirements have been introduced. This paper extends existing combinatorial filters by introducing a natural generalization that we dub cover combinatorial filters. In addressing the new -- but still NP-complete -- problem of minimizing cover filters, this paper shows that several properties previously conjectured, claimed, or assumed to hold for combinatorial filters are in fact false. For instance, minimization does not induce an equivalence relation. We give an exact algorithm for the cover filter minimization problem. Unlike prior work (based on graph coloring), we consider a type of clique-cover problem, involving a new conditional constraint, from which we can find more general relations. In addition to solving the more general problem, the algorithm also corrects flaws present in all prior filter reduction methods. By employing SAT, the algorithm provides a promising basis for future practical development.
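
To illustrate the clique-cover view of state merging (without the paper's conditional constraints or SAT encoding; a brute-force toy under my own naming):

```python
from itertools import combinations

def is_clique(block, adj):
    return all(v in adj[u] for u, v in combinations(block, 2))

def min_clique_cover(vertices, adj):
    """Smallest partition of `vertices` into cliques of the compatibility
    graph `adj`. Exponential brute force over set partitions -- toy only.
    Merging mutually compatible filter states corresponds to covering
    them with one clique; the paper adds conditional constraints on top."""
    def partitions(seq):
        if not seq:
            yield []
            return
        head, rest = seq[0], seq[1:]
        for part in partitions(rest):
            for i in range(len(part)):
                yield part[:i] + [[head] + part[i]] + part[i + 1:]
            yield [[head]] + part
    best = None
    for part in partitions(list(vertices)):
        if all(is_clique(b, adj) for b in part):
            if best is None or len(part) < len(best):
                best = part
    return best

# Compatibility graph: triangle {1,2,3} plus vertex 4 compatible only with 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(min_clique_cover([1, 2, 3, 4], adj))    # two cliques, e.g. [[1,2,3],[4]]
```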

We prove two theorems related to the Central Limit Theorem (CLT) for Martin-L\"of Random (MLR) sequences. Martin-L\"of randomness attempts to capture what it means for a sequence of bits to be "truly random". By contrast, CLTs do not make assertions about the behavior of a single random sequence, but only about the distributional behavior of a sequence of random variables. Semantically, we usually interpret CLTs as assertions about the collective behavior of infinitely many sequences. Yet our intuition is that if a sequence of bits is "truly random", then it should provide a "source of randomness" for which CLT-type results hold. We tackle this difficulty by using a sampling scheme that generates an infinite number of samples from a single binary sequence. We show that when we apply this scheme to a Martin-L\"of random sequence, the empirical moments and cumulative distribution functions (CDFs) of these samples tend to their counterparts for the normal distribution. We also prove the well-known almost sure central limit theorem (ASCLT), which provides an alternative, albeit less intuitive, answer to this question. Both results are also generalized to Schnorr random sequences.
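
A numerical illustration of the block-sampling idea (with a PRNG stream standing in for the Martin-L\"of random sequence, since MLR sequences are necessarily non-computable; the block length and names are mine):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=1_000_000)   # stand-in for one "random" sequence

n = 1000                                    # block length
blocks = bits[: (bits.size // n) * n].reshape(-1, n)
# Standardized block sums: (S_n - n/2) / sqrt(n/4) should be ~ N(0, 1).
z = (blocks.sum(axis=1) - n / 2) / np.sqrt(n / 4)

# Empirical CDF of the samples vs. the normal CDF at a few points.
for t in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"t={t:+.1f}  empirical={np.mean(z <= t):.4f}  Phi(t)={norm.cdf(t):.4f}")
```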

Personalized recommender systems are playing an increasingly important role as more content and services become available and users struggle to identify what might interest them. Although matrix factorization and deep learning based methods have proved effective in user preference modeling, they violate the triangle inequality and fail to capture fine-grained preference information. To tackle this, we develop a distance-based recommendation model with several novel aspects: (i) each user and item is parameterized by a Gaussian distribution to capture the learning uncertainty; (ii) an adaptive margin generation scheme is proposed to generate margins for different training triplets; (iii) explicit user-user/item-item similarity modeling is incorporated in the objective function. The Wasserstein distance is employed to determine preferences because it obeys the triangle inequality and can measure the distance between probability distributions. In comparisons with state-of-the-art methods on five real-world datasets, the proposed model outperforms the best existing models by 4-22% in terms of recall@K on Top-K recommendation.
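
The closed form of the 2-Wasserstein distance between Gaussians is what makes this combination tractable; a sketch with diagonal covariances and a fixed margin (the paper's margin is adaptive and its objective includes similarity terms; all names are mine):

```python
import numpy as np

def w2_sq_diag_gaussians(mu1, var1, mu2, var2):
    """Squared 2-Wasserstein distance between diagonal Gaussians:
       W2^2 = ||mu1 - mu2||^2 + ||sqrt(var1) - sqrt(var2)||^2.
    Being a true metric, it respects the triangle inequality, unlike
    dot-product scores."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2)

def triplet_loss(user, pos_item, neg_item, margin=1.0):
    """Hinge loss pushing the positive item closer to the user than the
    negative item by `margin` (fixed here, per-triplet in the paper)."""
    d_pos = w2_sq_diag_gaussians(*user, *pos_item)
    d_neg = w2_sq_diag_gaussians(*user, *neg_item)
    return max(0.0, d_pos - d_neg + margin)

d = 8
user     = (np.zeros(d), np.ones(d))          # (mean, variance) pairs
pos_item = (0.1 * np.ones(d), np.ones(d))
neg_item = (2.0 * np.ones(d), np.ones(d))
print(triplet_loss(user, pos_item, neg_item))  # 0.0: triplet already satisfied
```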

Neural waveform models such as WaveNet are used in many recent text-to-speech systems, but the original WaveNet is quite slow in waveform generation because of its autoregressive (AR) structure. Although faster non-AR models have been reported, they may be prohibitively complicated due to the use of a distilling training method and a blend of disparate training criteria. This study proposes a non-AR neural source-filter waveform model that can be trained directly using spectrum-based training criteria and stochastic gradient descent. Given the input acoustic features, the proposed model first uses a source module to generate a sine-based excitation signal and then uses a filter module to transform the excitation signal into the output speech waveform. Our experiments demonstrated that the proposed model generates waveforms at least 100 times faster than the AR WaveNet, with synthetic speech quality close to that of the AR WaveNet. Ablation tests showed that both the sine-wave excitation signal and the spectrum-based training criteria are essential to the performance of the proposed model.
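
A minimal sketch of the source module's excitation generation, assuming a per-sample F0 contour (the filter module, a trained neural network in the paper, is omitted; constants and names are mine):

```python
import numpy as np

def sine_excitation(f0, fs=16000, amp=0.1, noise_std=0.003):
    """e[n] = amp * sin(2*pi * sum_{k<=n} f0[k] / fs) + noise  where f0 > 0
    (voiced); pure noise where f0 == 0 (unvoiced). `f0` is an upsampled,
    per-sample F0 contour in Hz; integrating f0/fs gives the phase."""
    phase = 2 * np.pi * np.cumsum(f0) / fs
    voiced = f0 > 0
    e = np.where(voiced, amp * np.sin(phase), 0.0)
    return e + noise_std * np.random.randn(f0.size)

# Half a second at 220 Hz followed by a quarter second unvoiced.
f0 = np.concatenate([np.full(8000, 220.0), np.zeros(4000)])
excitation = sine_excitation(f0)
```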

Singleton arc consistency is an important type of local consistency which has recently been shown to solve all constraint satisfaction problems (CSPs) over constraint languages of bounded width. We aim to characterise all classes of CSPs defined by a forbidden pattern that are solved by singleton arc consistency and closed under removing constraints. We identify five new patterns whose absence ensures solvability by singleton arc consistency, four of which are provably maximal and three of which generalise 2-SAT. Combined with simple counter-examples for other patterns, these results constitute significant progress towards a complete classification.
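
For reference, the textbook SAC-1 procedure behind the abstract's notion can be sketched as follows (a naive implementation over binary constraints, not the paper's contribution):

```python
from collections import deque
import copy

def ac3(domains, constraints):
    """AC-3 over directed arcs. `constraints[(x, y)]` is the set of allowed
    value pairs (a, b); both arc directions must be present. Returns False
    on a domain wipe-out."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pruned = {a for a in domains[x]
                  if not any((a, b) in constraints[(x, y)] for b in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

def sac1(domains, constraints):
    """Singleton arc consistency: delete value a from x whenever restricting
    x's domain to {a} makes AC-3 wipe out some domain."""
    changed = True
    while changed:
        changed = False
        for x in list(domains):
            for a in list(domains[x]):
                trial = copy.deepcopy(domains)
                trial[x] = {a}
                if not ac3(trial, constraints):
                    domains[x].discard(a)
                    changed = True
    return all(domains.values())   # True iff no domain emptied

# x != y over {0, 1}: SAC keeps both domains intact.
doms = {"x": {0, 1}, "y": {0, 1}}
neq = {(0, 1), (1, 0)}
print(sac1(doms, {("x", "y"): neq, ("y", "x"): neq}), doms)
```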
