
When reasoning about formal objects whose structures involve binding, it is often necessary to analyze expressions relative to a context that associates types, values, and other related attributes with the variables that appear free in them. We refer to such associations as binding contexts. Reasoning tasks also require that properties of binding contexts, such as the shape and uniqueness of their associations, be made explicit. The Abella proof assistant, which supports a higher-order treatment of syntactic constructs, provides a simple and elegant way to describe such contexts and to extract their properties. This mechanism is based at the outset on viewing binding contexts as ordered sequences of associations. However, when dealing with object systems that embody notions of linearity, it becomes necessary to treat binding contexts more generally as partitionable multisets. We show how to adapt the original Abella encoding to encompass such a generalization. The key idea in this adaptation is to base the definition of a binding context on a mapping to an underlying ordered sequence of associations. We further show that properties that hold under the ordered-sequence view can be lifted to the generalized definition of binding contexts, and that this lifting can, in fact, be automated. These ideas find use in the extension, currently under development, of Abella's two-level logic approach to a setting where linear logic is used as the specification logic.
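As a rough illustration of the key idea (a minimal Python sketch, not Abella code; all function names here are ours), a multiset-style binding context can be defined by a mapping onto an underlying ordered sequence of associations, with multiset partitioning supplying the split operation that linearity demands:

```python
from collections import Counter

def as_multiset(assocs):
    """Forget the order of a list of (variable, attribute) associations."""
    return Counter(assocs)

def is_context(G, L):
    """G is a binding context iff it maps onto some underlying ordered
    sequence L of associations, i.e. G is a permutation of L; properties
    proved for L (shape, uniqueness of variables, ...) then lift to G."""
    return as_multiset(L) == G

def split(G, G1):
    """Partition a multiset context G into G1 + G2, as linear-logic
    specifications require; returns G2, or None if G1 is not in G."""
    G2 = G - G1
    return G2 if G1 + G2 == G else None

# Two orderings of the same associations give one multiset context.
L1 = [("x", "nat"), ("y", "bool")]
L2 = [("y", "bool"), ("x", "nat")]
G = as_multiset(L1)
assert is_context(G, L2)                         # order is irrelevant
assert split(G, Counter(L1[:1])) == Counter(L2[:1])
```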

Related Content

In mathematics, a multiset is a modification of the concept of a set that, unlike a set, allows multiple instances of each of its elements. The positive integer number of instances given for an element is called the multiplicity of that element in the multiset. Consequently, there exist infinitely many multisets containing only the elements a and b, differing only in the multiplicities of those elements: (1) the set {a, b} contains only the elements a and b, each with multiplicity 1 when {a, b} is viewed as a multiset; (2) in the multiset {a, a, b}, the element a has multiplicity 2 and b has multiplicity 1; (3) in the multiset {a, a, a, b, b, b}, a and b both have multiplicity 3.
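The three examples can be checked directly with Python's `collections.Counter`, which represents a multiset as a map from elements to their multiplicities:

```python
from collections import Counter

# Multiplicity of each element in the three multisets from the text.
m1 = Counter(["a", "b"])                      # {a, b} as a multiset
m2 = Counter(["a", "a", "b"])                 # {a, a, b}
m3 = Counter(["a", "a", "a", "b", "b", "b"])  # {a, a, a, b, b, b}

assert m1["a"] == 1 and m1["b"] == 1          # every multiplicity is 1
assert m2["a"] == 2 and m2["b"] == 1          # a has multiplicity 2
assert m3["a"] == 3 and m3["b"] == 3          # both have multiplicity 3
```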

The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.

Interpretability methods are developed to understand the working mechanisms of black-box models, which is crucial to their responsible deployment. Fulfilling this goal requires both that the explanations generated by these methods are correct and that people can easily and reliably understand them. While the former has been addressed in prior work, the latter is often overlooked, resulting in informal model understanding derived from a handful of local explanations. In this paper, we introduce explanation summary (ExSum), a mathematical framework for quantifying model understanding, and propose metrics for its quality assessment. On two domains, ExSum highlights various limitations in the current practice, helps develop accurate model understanding, and reveals easily overlooked properties of the model. We also connect understandability to other properties of explanations such as human alignment, robustness, and counterfactual minimality and plausibility.

This manuscript portrays optimization as a process. In many practical applications the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and use classical algorithmic theory and mathematical optimization. It is necessary as well as beneficial to take a robust approach, by applying an optimization method that learns as one goes along, learning from experience as more aspects of the problem are observed. This view of optimization as a process has become prominent in varied fields and has led to some spectacular success in modeling and systems that are now part of our daily lives.

With the capability of modeling bidirectional contexts, denoising-autoencoding-based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependencies between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.
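As a rough illustration of objective (1), the sketch below (plain Python pseudologic, not the XLNet implementation; `log_prob` is a hypothetical stand-in for the model, which XLNet realizes with two-stream attention) estimates the expected log-likelihood by sampling factorization orders and predicting each token from the tokens that precede it in the sampled order:

```python
import random

def permutation_lm_loss(tokens, log_prob, num_samples=4):
    """Monte-Carlo estimate of the permutation LM objective:
    E_{z ~ perms}[ sum_t log p(x_{z_t} | x_{z_<t}) ].
    log_prob(target_pos, visible_positions, tokens) is a stand-in
    for the model's conditional log-probability."""
    total = 0.0
    for _ in range(num_samples):
        order = list(range(len(tokens)))
        random.shuffle(order)              # sample a factorization order z
        for t, pos in enumerate(order):
            visible = order[:t]            # positions before pos in z
            total += log_prob(pos, visible, tokens)
    return -total / num_samples            # negative log-likelihood
```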

Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application in embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keeping accuracy. Second, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the currently predicted salient regions from side-output features, the network eventually explores the missing object parts and details, which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in simplicity, efficiency (45 FPS), and model size (81 MB).
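The erasing step can be pictured in a few lines of PyTorch-style Python. This is a minimal sketch of the reverse-attention idea as described above, with all names ours (`residual_branch` is a hypothetical small conv block predicting a one-channel residual):

```python
import torch
import torch.nn.functional as F

def reverse_attention_refine(side_feat, coarse_pred, residual_branch):
    """side_feat:       (B, C, H, W) side-output features
    coarse_pred:        (B, 1, h, w) saliency logits from a deeper layer
    residual_branch:    conv block mapping (B, C, H, W) -> (B, 1, H, W)
    """
    up = F.interpolate(coarse_pred, size=side_feat.shape[2:],
                       mode="bilinear", align_corners=False)
    rev = 1.0 - torch.sigmoid(up)          # erase current salient regions
    attended = side_feat * rev             # focus on missing parts/details
    return up + residual_branch(attended)  # residual refinement of saliency
```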

It is a common paradigm in object detection frameworks to treat all samples equally and to aim at maximizing performance on average. In this work, we revisit this paradigm through a careful study of how different samples contribute to the overall performance measured in terms of mAP. Our study suggests that the samples in each mini-batch are neither independent nor equally important, and therefore a better classifier on average does not necessarily mean higher mAP. Motivated by this study, we propose the notion of Prime Samples, those that play a key role in driving the detection performance. We further develop a simple yet effective sampling and learning strategy called PrIme Sample Attention (PISA) that directs the focus of the training process towards such samples. Our experiments demonstrate that it is often more effective to focus on prime samples than on hard samples when training a detector. In particular, on the MSCOCO dataset, PISA outperforms the random sampling baseline and hard mining schemes, e.g., OHEM and Focal Loss, consistently by more than 1% on both single-stage and two-stage detectors with a strong ResNeXt-101 backbone.
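As we read it, the core of the strategy is to up-weight the loss contribution of high-ranked (prime) samples rather than hard ones. The toy re-weighting below is a hypothetical simplification in that spirit (the actual PISA method derives weights from IoU-based hierarchical local ranks, which this sketch does not reproduce):

```python
import numpy as np

def prime_sample_weights(ious, gamma=2.0):
    """Toy re-weighting in the spirit of prime sample attention:
    positives with higher IoU to their ground-truth box get larger
    weight, so training focuses on the samples that dominate mAP."""
    order = np.argsort(-ious)              # rank positives by IoU (desc.)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(ious))
    weights = (1.0 - ranks / len(ious)) ** gamma
    return weights / weights.mean()        # keep the total loss scale

# Example: the highest-IoU sample receives the largest weight.
w = prime_sample_weights(np.array([0.9, 0.55, 0.7]))
assert w.argmax() == 0
```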

It has long been believed that modeling relationships between objects helps in representing and eventually describing an image. Nevertheless, there has been little evidence supporting this idea for image description generation. In this paper, we introduce a new design that explores the connections between objects for image captioning under the umbrella of the attention-based encoder-decoder framework. Specifically, we present a Graph Convolutional Networks plus Long Short-Term Memory (GCN-LSTM) architecture that integrates both semantic and spatial object relationships into the image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of the regions proposed for the objects are then refined by leveraging the graph structure through GCN. With the learnt region-level features, GCN-LSTM capitalizes on an LSTM-based captioning framework with an attention mechanism for sentence generation. Extensive experiments are conducted on the COCO image captioning dataset, and superior results are reported compared to state-of-the-art approaches. Most remarkably, GCN-LSTM increases CIDEr-D performance from 120.1% to 128.7% on the COCO testing set.
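Conceptually, the refinement step builds an adjacency matrix over the detected regions from their spatial/semantic relations and passes the region features through a graph-convolution layer. The NumPy sketch below shows a generic GCN layer for this purpose; the actual GCN-LSTM encoder uses relation-specific, directed transformations rather than this plain form:

```python
import numpy as np

def gcn_refine(H, A, W):
    """One GCN layer refining region features.
    H: (N, d) features of N detected regions
    A: (N, N) adjacency from spatial/semantic relations (1 = edge)
    W: (d, d) learned weight matrix
    """
    A_hat = A + np.eye(len(A))                     # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))       # degree normalization
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)  # ReLU(D^-1 A H W)

# The refined features then feed the attention-based LSTM decoder.
```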

We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy for scenarios where objects of varied sizes appear in high-resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher-resolution regions identified as likely to improve detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain from analyzing a region at higher resolution, and another model (Q-net) that sequentially selects regions to zoom in on. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high-resolution test set collected from the YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.
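Schematically, the pipeline alternates between the two models until a zoom budget is exhausted. The sketch below is our own rough rendering of that loop, with `detect`, `r_net`, and `q_net` as hypothetical callables standing in for the trained models:

```python
def coarse_to_fine_detect(image, detect, r_net, q_net, scale=4, budget=5):
    """image: (H, W, 3) array; detect/r_net/q_net are model callables."""
    detections = list(detect(image[::scale, ::scale]))  # cheap low-res pass
    for _ in range(budget):
        gains = r_net(image, detections)       # predicted gain per region
        region = q_net(gains)                  # (y0, y1, x0, x1) or None
        if region is None:                     # no region worth the cost
            break
        y0, y1, x0, x1 = region
        detections += detect(image[y0:y1, x0:x1])  # high-res pass on crop
    return detections
```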

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
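The central device is easy to state: replacing max with a smoothed max under a strongly convex regularizer makes its gradient well-defined and dense. With the negative-entropy regularizer, the smoothed max is log-sum-exp and its gradient is the softmax, as the generic illustration below shows (our sketch, not the paper's code):

```python
import numpy as np

def smoothed_max(x, gamma=1.0):
    """max_Omega(x) = gamma * log(sum exp(x / gamma)) for the
    negative-entropy regularizer Omega; recovers max as gamma -> 0."""
    m = x.max()                                   # stabilize exponentials
    return m + gamma * np.log(np.exp((x - m) / gamma).sum())

def smoothed_argmax(x, gamma=1.0):
    """Gradient of smoothed_max: a softmax, i.e. a dense, differentiable
    relaxation of the one-hot argmax used inside DP recursions."""
    e = np.exp((x - x.max()) / gamma)
    return e / e.sum()

x = np.array([1.0, 3.0, 2.0])
print(smoothed_max(x, gamma=0.1))     # ~3.0, close to the true max
print(smoothed_argmax(x, gamma=0.1))  # nearly one-hot on the argmax
```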

While existing machine learning models have achieved great success in sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence). Factorization Machines provide a possible approach to learning element-wise interactions for recommender systems, but they are not directly applicable to our task because they cannot model contexts and word sequences. In this work, we develop two Position-aware Factorization Machines that consider word interaction, context, and position information. This information is jointly encoded in a set of sentiment-oriented word interaction (SWI) vectors. Compared to traditional word embeddings, SWI vectors explicitly capture sentiment-oriented word interaction and simplify parameter learning. Experimental results show that while they achieve performance comparable to state-of-the-art methods for document-level classification, they benefit snippet/sentence-level sentiment analysis.
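For reference, the vanilla factorization-machine score that the position-aware variants build on models every pairwise interaction through low-rank embeddings. A minimal sketch of that generic second-order FM (not the paper's exact model) follows:

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order factorization machine:
    y = w0 + sum_i w_i x_i + sum_{i<j} <V_i, V_j> x_i x_j,
    computed in O(n k) with the standard reformulation.
    x: (n,) features, w0: scalar bias, w: (n,) weights, V: (n, k) factors.
    """
    linear = w0 + w @ x
    pair = 0.5 * (((V.T @ x) ** 2).sum() - ((V ** 2).T @ (x ** 2)).sum())
    return linear + pair
```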
