
Symbolic automata are finite state automata that support potentially infinite alphabets, such as the set of rational numbers, and are typically applied to regular expressions and languages over finite words. In symbolic automata (or automata modulo theories), an alphabet is represented by an effective Boolean algebra, supported by a decision procedure for satisfiability. Regular languages over infinite words (so-called $\omega$-regular languages) have a rich history paralleling that of regular languages over finite words, with well-known applications to model checking via B\"uchi automata and temporal logics. We generalize symbolic automata to support $\omega$-regular languages via symbolic transition terms and symbolic derivatives, bringing together a variety of classic automata and logics in a unified framework that provides all the necessary ingredients to support symbolic model checking modulo $A$. In particular, we define: (1) alternating B\"uchi automata modulo $A$, $ABW_A$, as well as (non-alternating) non-deterministic B\"uchi automata modulo $A$, $NBW_A$; (2) an alternation elimination algorithm that incrementally constructs an $NBW_A$ from an $ABW_A$, and can also be used to construct the product of two $NBW_A$'s; (3) a definition of linear temporal logic (LTL) modulo $A$ that generalizes Vardi's construction of alternating B\"uchi automata from LTL, using (2) to go from LTL modulo $A$ to $NBW_A$ via $ABW_A$. Finally, we present a combination of LTL modulo $A$ with extended regular expressions modulo $A$ that generalizes the Property Specification Language (PSL). Our combination allows regex complement, which is not supported in PSL but can be supported naturally by using symbolic transition terms.
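
To make the "alphabet as an effective Boolean algebra" idea concrete, here is a minimal, self-contained sketch (not the paper's construction or API) of symbolic transitions whose guards are predicates over rational-valued letters, with Z3 (the z3-solver package) serving as the satisfiability decision procedure; the state names and guards below are invented for illustration.

```python
# A minimal sketch of the "effective Boolean algebra" interface behind
# symbolic automata, using Z3 over the rationals as the decision procedure.
# The transition structure below is illustrative, not the paper's API.
from z3 import Real, And, Or, Not, Solver, sat

x = Real("x")          # the "current input letter", a rational number

def is_satisfiable(pred) -> bool:
    """Decision procedure for the algebra: can some letter satisfy pred?"""
    s = Solver()
    s.add(pred)
    return s.check() == sat

# A symbolic transition is a (guard, target-state) pair; a state's outgoing
# guards cover the (infinite) alphabet by predicates rather than by letters.
transitions = {
    "q0": [(x < 0, "q_neg"), (x >= 0, "q_pos")],
    "q_neg": [(And(x >= -1, x <= 1), "q0"), (Or(x < -1, x > 1), "q_neg")],
    "q_pos": [(Not(x == 0), "q_pos"), (x == 0, "q0")],
}

def step(state: str, letter) -> str:
    """Fire an enabled transition for a concrete rational letter."""
    for guard, target in transitions[state]:
        s = Solver()
        s.add(guard, x == letter)
        if s.check() == sat:
            return target
    raise ValueError("no enabled transition (alphabet not covered)")

assert is_satisfiable(And(x > 0, x < 1))      # the algebra is "effective"
assert step("q0", -2) == "q_neg"
```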

Related content

The (Perfect) Matching Cut problem is to decide if a graph $G$ has a (perfect) matching cut, i.e., a (perfect) matching that is also an edge cut of $G$. Both Matching Cut and Perfect Matching Cut are known to be NP-complete. A perfect matching cut is also a matching cut with the maximum number of edges. To increase our understanding of the relationship between the two problems, we introduce the Maximum Matching Cut problem, which is to determine a largest matching cut in a graph. We generalize and unify known polynomial-time algorithms for Matching Cut and Perfect Matching Cut restricted to graphs of diameter at most $2$ and to $(P_6+sP_2)$-free graphs. We also show that the complexity of Maximum Matching Cut differs from the complexities of Matching Cut and Perfect Matching Cut by proving NP-hardness of Maximum Matching Cut for $2P_3$-free quadrangulated graphs of diameter $3$ and radius $2$ and for subcubic line graphs of triangle-free graphs. In this way, we obtain full dichotomies of Maximum Matching Cut for graphs of bounded diameter, graphs of bounded radius, and $H$-free graphs. Finally, we apply our techniques to obtain a dichotomy for the Maximum Disconnected Perfect Matching problem for $H$-free graphs. A disconnected perfect matching of a graph $G$ is a perfect matching that contains a matching cut of $G$. The Maximum Disconnected Perfect Matching problem asks to determine, for a connected graph $G$, a disconnected perfect matching whose matching cut is largest over all disconnected perfect matchings of $G$. Our dichotomy result implies that the original decision problem, Disconnected Perfect Matching, is polynomial-time solvable for $(P_6+sP_2)$-free graphs for every $s\geq 0$, which resolves an open problem of Bouquet and Picouleau (arXiv, 2020).
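
As a small aid to the definitions above, the following sketch (using networkx; not part of the paper) simply verifies whether a given edge set is a matching cut of a graph, which is the certificate that the decision and maximisation problems discussed here ask for.

```python
import networkx as nx

def is_matching_cut(G: nx.Graph, M) -> bool:
    """Check whether edge set M is a matching cut of G: M is a matching,
    and removing M disconnects G (i.e. M is an edge cut).  This only
    verifies the definition; the problems in the abstract are about
    finding such sets."""
    M = set(frozenset(e) for e in M)
    # matching: no two edges of M share an endpoint
    endpoints = [v for e in M for v in e]
    if len(endpoints) != len(set(endpoints)):
        return False
    # every edge of M must belong to G
    if not all(G.has_edge(*tuple(e)) for e in M):
        return False
    # edge cut: G minus M is disconnected
    H = G.copy()
    H.remove_edges_from(tuple(e) for e in M)
    return not nx.is_connected(H)

# Example: in the 4-cycle a-b-c-d-a, the two "parallel" edges form a
# matching cut that is also a perfect matching, i.e. a perfect matching cut.
C4 = nx.cycle_graph(["a", "b", "c", "d"])
assert is_matching_cut(C4, [("a", "b"), ("c", "d")])
```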

We apply the FLAME methodology to derive algorithms hand in hand with their proofs of correctness for the computation of the $ L T L^T $ decomposition (with and without pivoting) of a skew-symmetric matrix. The approach yields known as well as new algorithms, presented using the FLAME notation. A number of BLAS-like primitives are exposed at the core of blocked algorithms that can attain high performance. The insights can be easily extended to yield algorithms for computing the $ L T L^T $ decomposition of a symmetric matrix.
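
For readers unfamiliar with the factorization itself, the following is a minimal unblocked, unpivoted Parlett-Reid-style sketch of the skew-symmetric $LTL^T$ decomposition in NumPy; it only illustrates what the decomposition computes and is not one of the blocked FLAME algorithms derived in the paper.

```python
import numpy as np

def ltlt_skew(A, tol=1e-12):
    """Unblocked Parlett-Reid style reduction of a skew-symmetric matrix A
    to A = L T L^T with L unit lower triangular and T skew-symmetric
    tridiagonal.  No pivoting (assumes nonzero subdiagonal pivots); the
    blocked, pivoted FLAME variants in the paper are more involved."""
    A = np.array(A, dtype=float)
    assert np.allclose(A, -A.T), "input must be skew-symmetric"
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 2):
        pivot = A[k + 1, k]
        if abs(pivot) < tol:
            raise ValueError("zero pivot encountered; pivoting required")
        l = A[k + 2:, k] / pivot              # Gauss multipliers
        L[k + 2:, k + 1] = l
        # congruence update A <- M A M^T with M = I - l * e_{k+1}^T
        A[k + 2:, :] -= np.outer(l, A[k + 1, :])
        A[:, k + 2:] -= np.outer(A[:, k + 1], l)
    T = np.triu(np.tril(A, 1), -1)            # keep the tridiagonal part
    return L, T

# quick check on a random skew-symmetric matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
S = B - B.T
L, T = ltlt_skew(S)
assert np.allclose(L @ T @ L.T, S)
```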

Nonparametric learning is a fundamental concept in machine learning that aims to capture complex patterns and relationships in data without making strong assumptions about the underlying data distribution. Owing to its simplicity and familiarity, one of the most well-known algorithms under this paradigm is the $k$-nearest neighbors ($k$-NN) algorithm. Driven by the use of machine learning in safety-critical applications, in this work we shed new light on the traditional nearest neighbors algorithm from the perspective of information theory and propose a robust and interpretable framework for tasks such as classification, regression, and anomaly detection using a single model. Instead of using a traditional distance measure, which needs to be scaled and contextualized, we use a novel formulation of \textit{surprisal} (the amount of information required to explain the difference between the observed and expected result). Finally, we demonstrate this architecture's capability to perform on par with or above the state of the art on classification, regression, and anomaly detection tasks using a single model, with enhanced interpretability provided by novel concepts for characterizing data and predictions.
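
The abstract does not spell out the surprisal formulation, so the following is only a plausible toy sketch of the general idea: a $k$-NN model that, besides predicting a label, reports how surprising (in bits, as $-\log_2$ of an estimated probability) the query's neighborhood distance is relative to the training data; all design choices here are assumptions for illustration.

```python
import numpy as np
from collections import Counter

class SurprisalKNN:
    """Toy illustration of pairing k-NN with a surprisal score.  Distances
    are calibrated against the training set's own k-NN distances; the
    actual formulation in the paper may differ substantially."""

    def __init__(self, k=5):
        self.k = k

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        # distance of every training point to its k-th nearest neighbour
        d = np.linalg.norm(self.X[:, None, :] - self.X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        self.calib = np.sort(np.sort(d, axis=1)[:, self.k - 1])
        return self

    def predict(self, x):
        d = np.linalg.norm(self.X - np.asarray(x, float), axis=1)
        idx = np.argsort(d)[: self.k]
        label = Counter(self.y[idx]).most_common(1)[0][0]
        # empirical P(training k-NN distance >= query's k-NN distance)
        p = np.mean(self.calib >= d[idx[-1]])
        surprisal = -np.log2(max(p, 1.0 / (len(self.calib) + 1)))
        return label, surprisal   # high surprisal ~ likely anomaly

# usage: two tight clusters, one far-away query
X = [[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]]
y = [0, 0, 0, 1, 1, 1]
model = SurprisalKNN(k=2).fit(X, y)
print(model.predict([0.05, 0.05]))   # label 0, low surprisal
print(model.predict([20, 20]))       # label 1, high surprisal
```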

Online misinformation is often multimodal in nature, i.e., it is caused by misleading associations between texts and accompanying images. To support the fact-checking process, researchers have recently been developing automatic multimodal methods that gather and analyze external information, or evidence, related to the image-text pairs under examination. However, prior works assumed all collected evidence to be relevant. In this study, we introduce a "Relevant Evidence Detection" (RED) module to discern whether each piece of evidence is relevant to supporting or refuting the claim. Specifically, we develop the "Relevant Evidence Detection Directed Transformer" (RED-DOT) and explore multiple architectural variants (e.g., single or dual-stage) and mechanisms (e.g., "guided attention"). Extensive ablation and comparative experiments demonstrate that RED-DOT achieves significant improvements over the state of the art on the VERITE benchmark, by up to 28.5%. Furthermore, our evidence re-ranking and element-wise modality fusion enabled RED-DOT to achieve competitive and even improved performance on NewsCLIPings+, without the need for numerous evidence items or multiple backbone encoders. Finally, our qualitative analysis demonstrates that the proposed "guided attention" module has the potential to enhance the architecture's interpretability. We release our code at: //github.com/stevejpapad/relevant-evidence-detection
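
As a rough, invented illustration (not the authors' RED-DOT architecture), the sketch below shows what a single-stage variant with per-evidence relevance scoring and a pooled veracity head could look like in PyTorch.

```python
import torch
import torch.nn as nn

class RelevanceVeracityNet(nn.Module):
    """A much-simplified, single-stage sketch in the spirit of RED-DOT
    (not the authors' architecture): claim and evidence embeddings are
    fed to a Transformer encoder; one head scores each evidence item's
    relevance, another predicts the claim's veracity from a pooled token."""

    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.relevance_head = nn.Linear(dim, 1)   # per evidence token
        self.veracity_head = nn.Linear(dim, 2)    # truthful vs misleading

    def forward(self, claim_tokens, evidence_tokens):
        # claim_tokens:    (B, Tc, dim)  e.g. image + text embeddings
        # evidence_tokens: (B, Te, dim)  retrieved external evidence
        B = claim_tokens.size(0)
        x = torch.cat([self.cls.expand(B, -1, -1),
                       claim_tokens, evidence_tokens], dim=1)
        h = self.encoder(x)
        Te = evidence_tokens.size(1)
        relevance = self.relevance_head(h[:, -Te:, :]).squeeze(-1)  # (B, Te)
        veracity = self.veracity_head(h[:, 0, :])                   # (B, 2)
        return relevance, veracity

# shape check with random embeddings
model = RelevanceVeracityNet()
rel, ver = model(torch.randn(2, 3, 512), torch.randn(2, 4, 512))
print(rel.shape, ver.shape)   # torch.Size([2, 4]) torch.Size([2, 2])
```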

Multi-valued logics have a long tradition in the literature on system verification, including run-time verification. However, comparatively fewer model-checking tools have been developed for multi-valued specification languages. We present 3vLTL, a tool to generate B\"uchi automata from formulas in Linear-time Temporal Logic (LTL) interpreted under a three-valued semantics. Given an LTL formula, a set of atomic propositions as the alphabet for the automaton, and a truth value, our procedure generates a B\"uchi automaton that accepts all the words that assign the chosen truth value to the LTL formula. Owing to the format of the tool's output, it can also be processed seamlessly by third-party libraries; that is, the B\"uchi automaton can then be used in the context of formal verification to check whether an LTL formula is true, false, or undefined on a given model.
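
The tool's internals are not described here, but the three-valued (Kleene) propositional connectives underlying such a semantics are easy to state; the following minimal sketch is only meant to clarify how the third truth value behaves.

```python
from enum import Enum

class V3(Enum):
    """Truth values of a three-valued (Kleene) semantics: the 'undefined'
    value corresponds to 3vLTL's third outcome.  This sketches only the
    propositional connectives, not the tool's automaton construction."""
    FALSE = 0
    UNDEF = 1
    TRUE = 2

def v3_not(a: V3) -> V3:
    return V3(2 - a.value)

def v3_and(a: V3, b: V3) -> V3:
    return V3(min(a.value, b.value))

def v3_or(a: V3, b: V3) -> V3:
    return V3(max(a.value, b.value))

# e.g. "p and not q" when p is TRUE and q is UNDEF evaluates to UNDEF,
# so a three-valued model checker would report neither true nor false.
assert v3_and(V3.TRUE, v3_not(V3.UNDEF)) is V3.UNDEF
```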

Approaches to recommendation are typically evaluated in one of two ways: (1) via a (simulated) online experiment, often seen as the gold standard, or (2) via some offline evaluation procedure, where the goal is to approximate the outcome of an online experiment. Several offline evaluation metrics have been adopted in the literature, inspired by ranking metrics prevalent in the field of Information Retrieval. (Normalised) Discounted Cumulative Gain (nDCG) is one such metric that has seen widespread adoption in empirical studies, and higher (n)DCG values have been used to present new methods as the state-of-the-art in top-$n$ recommendation for many years. Our work takes a critical look at this approach, and investigates when we can expect such metrics to approximate the gold standard outcome of an online experiment. We formally present the assumptions that are necessary to consider DCG an unbiased estimator of online reward and provide a derivation for this metric from first principles, highlighting where we deviate from its traditional uses in IR. Importantly, we show that normalising the metric renders it inconsistent, in that even when DCG is unbiased, ranking competing methods by their normalised DCG can invert their relative order. Through a correlation analysis between offline and online experiments conducted on a large-scale recommendation platform, we show that our unbiased DCG estimates strongly correlate with online reward, even when some of the metric's inherent assumptions are violated. This statement no longer holds for its normalised variant, suggesting that nDCG's practical utility may be limited.
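
To illustrate the inconsistency claim with invented numbers, the sketch below computes DCG and nDCG from first principles for two hypothetical methods over two users and shows that the method ranking can flip once the metric is normalised.

```python
import numpy as np

def dcg(rels):
    """Discounted cumulative gain of a ranked list of relevance labels."""
    rels = np.asarray(rels, float)
    return float(np.sum(rels / np.log2(np.arange(2, len(rels) + 2))))

def ndcg(rels, ideal_rels):
    return dcg(rels) / dcg(ideal_rels)

# Invented toy example (two users, binary relevance): the relevance labels
# of the items each method ranks, per user, plus the ideal ordering.
ideal    = {"u1": [1, 0, 0], "u2": [1, 1, 1]}
method_A = {"u1": [1, 0, 0], "u2": [0, 0, 1]}
method_B = {"u1": [0, 1, 0], "u2": [1, 0, 0]}

for name, runs in [("A", method_A), ("B", method_B)]:
    mean_dcg = np.mean([dcg(runs[u]) for u in ideal])
    mean_ndcg = np.mean([ndcg(runs[u], ideal[u]) for u in ideal])
    print(name, round(mean_dcg, 3), round(mean_ndcg, 3))
# B wins on mean DCG (0.815 vs 0.75) but A wins on mean nDCG (0.617 vs 0.55):
# normalisation inverts the relative order of the two methods.
```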

The numerical solution of eigenvalue problems is essential in various application areas of scientific and engineering domains. In many problem classes, only a small subset of the eigenvalues is of practical interest, so it is unnecessary to compute all of them. Notable examples are electronic structure problems, where the $k$-th smallest eigenvalue is closely related to the electronic properties of materials. In this paper, we consider $k$-th eigenvalue problems for symmetric dense matrices with low-rank off-diagonal blocks. We present a linear-time generalized LDL decomposition of $\mathcal{H}^2$ matrices and combine it with the bisection eigenvalue algorithm to compute the $k$-th eigenvalue with controllable accuracy. In addition, if more than one eigenvalue is required, some of the previous computations can be reused to compute the other eigenvalues in parallel. Numerical experiments show that our method is more efficient than the state-of-the-art dense eigenvalue solvers in LAPACK/ScaLAPACK and ELPA. Furthermore, tests on electronic state calculations of carbon nanomaterials demonstrate that our method outperforms the existing HSS-based bisection eigenvalue algorithm on 3D problems.
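
The following dense NumPy/SciPy sketch illustrates the underlying bisection idea, counting eigenvalues below a shift via the inertia of an LDL^T factorisation; it is not the paper's linear-time $\mathcal{H}^2$ algorithm, and the inertia count here is computed naively for clarity.

```python
import numpy as np
from scipy.linalg import ldl

def kth_eigenvalue(A, k, tol=1e-10):
    """Bisection for the k-th smallest eigenvalue (k is 1-based) of a dense
    symmetric matrix, counting eigenvalues below a shift via the inertia of
    an LDL^T factorisation.  Only a dense sketch of the idea; the paper's
    contribution is a linear-time factorisation for H^2 matrices."""
    A = np.asarray(A, float)
    n = A.shape[0]
    # Gershgorin bounds enclose the whole spectrum
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    lo, hi = float(np.min(np.diag(A) - radii)), float(np.max(np.diag(A) + radii))

    def count_below(sigma):
        # #{eigenvalues of A < sigma} = #{negative eigenvalues of D}
        # in the LDL^T factorisation of A - sigma*I (Sylvester's law)
        _, d, _ = ldl(A - sigma * np.eye(n))
        return int(np.sum(np.linalg.eigvalsh(d) < 0))

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = (M + M.T) / 2
print(kth_eigenvalue(A, 3), np.linalg.eigvalsh(A)[2])   # should agree
```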

Transformers have the potential to learn longer-term dependencies, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture, Transformer-XL, that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both TensorFlow and PyTorch.
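
A stripped-down PyTorch sketch of the segment-level recurrence idea (not the released Transformer-XL code: no relative positional encoding, no causal mask, a single layer) is given below; cached hidden states of the previous segment are detached and prepended as extra keys and values.

```python
import torch
import torch.nn as nn

class RecurrentSegmentLayer(nn.Module):
    """Simplified illustration of segment-level recurrence: hidden states of
    the previous segment are cached, detached from the graph (stop-gradient),
    and prepended as additional keys/values for the current segment."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                                nn.Linear(4 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x, memory=None):
        # x: (B, T, dim) current segment; memory: (B, M, dim) cached states
        kv = x if memory is None else torch.cat([memory, x], dim=1)
        h, _ = self.attn(x, kv, kv, need_weights=False)
        x = self.norm1(x + h)
        x = self.norm2(x + self.ff(x))
        new_memory = x.detach()      # reused as context, not backpropagated
        return x, new_memory

# processing a long sequence segment by segment, carrying the memory along
layer, memory = RecurrentSegmentLayer(), None
for segment in torch.randn(3, 2, 8, 64):        # 3 segments of length 8
    out, memory = layer(segment, memory)
print(out.shape, memory.shape)                  # (2, 8, 64) twice
```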

When labeled training data is scarce, a promising data augmentation approach is to generate visual features of unknown classes using their attributes. To learn the class-conditional distribution of CNN features, these models rely on pairs of image features and class attributes; hence, they cannot make use of the abundance of unlabeled data samples. In this paper, we tackle any-shot learning problems, i.e., zero-shot and few-shot, in a unified feature-generating framework that operates in both inductive and transductive learning settings. We develop a conditional generative model that combines the strengths of VAEs and GANs and, in addition, via an unconditional discriminator, learns the marginal feature distribution of unlabeled images. We empirically show that our model learns highly discriminative CNN features for five datasets, i.e., CUB, SUN, AWA and ImageNet, and establishes a new state of the art in any-shot learning, i.e., inductive and transductive (generalized) zero- and few-shot learning settings. We also demonstrate that our learned features are interpretable: we visualize them by inverting them back to the pixel space, and we explain them by generating textual arguments for why they are associated with a certain label.
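
As a bare-bones, assumed illustration (not the paper's implementation, and omitting the VAE encoder and all training losses), the sketch below shows the three PyTorch modules the description suggests: an attribute-conditioned feature generator, a conditional discriminator for labelled features, and an unconditional discriminator that can also be trained on unlabelled features.

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Generates synthetic CNN features conditioned on class attributes."""
    def __init__(self, attr_dim, noise_dim, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(attr_dim + noise_dim, 1024),
                                 nn.LeakyReLU(0.2), nn.Linear(1024, feat_dim))

    def forward(self, attr, noise):
        return self.net(torch.cat([attr, noise], dim=1))

class CondDiscriminator(nn.Module):
    """Scores (feature, attribute) pairs; needs labelled data."""
    def __init__(self, attr_dim, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + attr_dim, 1024),
                                 nn.LeakyReLU(0.2), nn.Linear(1024, 1))

    def forward(self, feat, attr):
        return self.net(torch.cat([feat, attr], dim=1))

class UncondDiscriminator(nn.Module):
    """Scores features alone, so it can be trained on unlabelled
    (transductive) samples to capture the marginal feature distribution."""
    def __init__(self, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 1024),
                                 nn.LeakyReLU(0.2), nn.Linear(1024, 1))

    def forward(self, feat):
        return self.net(feat)

G = FeatureGenerator(attr_dim=85, noise_dim=128, feat_dim=2048)
fake = G(torch.randn(4, 85), torch.randn(4, 128))
print(fake.shape)   # (4, 2048): synthetic CNN features for unseen classes
```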

Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers while disregarding the explanations. We argue that the explanation for an answer is as important as, or even more important than, the answer itself, since it makes the question-answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the computational models are required to generate an explanation along with the predicted answer. We first construct a new dataset and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of the explanations synthesized by our method. We quantitatively show that the additional supervision from explanations not only produces insightful textual sentences that justify the answers, but also improves the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.
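
A minimal multi-task sketch of this setup (invented for illustration, not the paper's model) is given below: one fused question-image representation feeds both an answer classifier and an explanation decoder, and the two cross-entropy losses are summed.

```python
import torch
import torch.nn as nn

class VQAEHead(nn.Module):
    """Toy multi-task head: a fused question-image vector drives both
    answer classification and explanation generation (teacher forcing)."""

    def __init__(self, fused_dim=1024, n_answers=3000, vocab=10000, emb=300):
        super().__init__()
        self.answer_head = nn.Linear(fused_dim, n_answers)
        self.embed = nn.Embedding(vocab, emb)
        self.decoder = nn.GRU(emb, fused_dim, batch_first=True)
        self.word_head = nn.Linear(fused_dim, vocab)

    def forward(self, fused, expl_tokens):
        # fused: (B, fused_dim); expl_tokens: (B, T) ground-truth explanation
        answer_logits = self.answer_head(fused)                 # (B, n_answers)
        h0 = fused.unsqueeze(0)                                 # init decoder state
        out, _ = self.decoder(self.embed(expl_tokens[:, :-1]), h0)
        word_logits = self.word_head(out)                       # (B, T-1, vocab)
        return answer_logits, word_logits

def multitask_loss(answer_logits, word_logits, answer_gt, expl_tokens, alpha=1.0):
    ce = nn.CrossEntropyLoss()
    ans_loss = ce(answer_logits, answer_gt)
    expl_loss = ce(word_logits.reshape(-1, word_logits.size(-1)),
                   expl_tokens[:, 1:].reshape(-1))
    return ans_loss + alpha * expl_loss     # joint supervision from explanations

model = VQAEHead()
expl = torch.randint(0, 10000, (2, 12))     # toy explanation token ids
answer_logits, word_logits = model(torch.randn(2, 1024), expl)
loss = multitask_loss(answer_logits, word_logits, torch.tensor([5, 7]), expl)
print(loss.item())
```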
