
Existing active strategies for training surrogate models yield accurate structural reliability estimates by targeting design-space regions in the vicinity of a specified limit state function. In many practical engineering applications, several damage conditions, e.g. repair and failure, must be probabilistically characterized, which demands the estimation of multiple performance functions. In this work, we investigate the capability of active learning approaches to efficiently select training samples under a limited computational budget while still preserving the accuracy associated with multiple surrogate limit states. Specifically, PC-Kriging-based surrogate models are actively trained using a variance correction derived from leave-one-out cross-validation error information, while the sequential learning scheme relies on U-function-derived metrics. The proposed active learning approaches are tested in a highly nonlinear structural reliability setting and in a more practical application in which failure and repair events are stochastically predicted in the aftermath of a ship collision against an offshore wind substructure. The results show that a balanced administration of the computational budget can be effectively achieved by successively targeting the specified multiple limit state functions within a unified active learning scheme.
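
For orientation, the classic U learning function scores a candidate point by U(x) = |mu(x)| / sigma(x), so small values flag points that lie close to a limit state (mean near zero) while still being uncertain. Below is a minimal sketch of how such a metric could drive sample selection across several surrogate limit states; the `predict` interface returning mean and standard deviation arrays is an illustrative assumption, not the paper's API.

```python
import numpy as np

def u_function(mu, sigma):
    """Classic U learning function: small values flag candidates that lie
    close to a limit state (mean near zero) while still uncertain."""
    return np.abs(mu) / np.maximum(sigma, 1e-12)

def next_sample(candidates, surrogates):
    """Pick the candidate with the smallest U value across all surrogate
    limit states, so the budget is shared between performance functions.
    `surrogates` is a list of callables returning (mean, std) arrays --
    an illustrative interface, not the paper's API."""
    u_min = np.full(len(candidates), np.inf)
    for predict in surrogates:
        mu, sigma = predict(candidates)
        u_min = np.minimum(u_min, u_function(mu, sigma))
    return candidates[np.argmin(u_min)]
```

Taking the minimum U over all limit states means the next evaluation always goes to whichever performance function is currently least resolved, which is one simple way to balance the budget.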

Related Content

Active learning is a subfield of machine learning (and, more broadly, of artificial intelligence); in statistics it is also called query learning or optimal experimental design. The "learning module" and the "selection strategy" are the two basic and essential components of an active learning algorithm. In education, active learning is "a method of learning in which students are actively or experientially involved in the learning process, and where there are different levels of active learning depending on student involvement" (Bonwell & Eison, 1991). Bonwell and Eison (1991) state that students engage in activities beyond passively listening to lectures. In a report of the Association for the Study of Higher Education (ASHE), the authors discuss a variety of methods for promoting active learning. They cite literature indicating that students must do more than just listen in order to learn: they must read, write, discuss, and engage in problem solving. This process involves the three learning domains of knowledge, skills, and attitudes (KSA), and this taxonomy of learning behaviours can be thought of as "the goals of the learning process". In particular, students must engage in higher-order thinking tasks such as analysis, synthesis, and evaluation.

Learning on big data has brought success to artificial intelligence (AI), but annotation and training costs are expensive. In the future, learning on small data will be one of the ultimate goals of AI, requiring machines to recognize objectives and scenarios from small data as humans do. A series of machine learning models follows this path, such as active learning, few-shot learning, and deep clustering. However, there are few theoretical guarantees on their generalization performance. Moreover, most of their settings are passive, that is, the label distribution is explicitly controlled by one specified sampling scenario. This survey follows agnostic active sampling under a PAC (Probably Approximately Correct) framework to analyze the generalization error and label complexity of learning on small data in both supervised and unsupervised fashions. With these theoretical analyses, we categorize small data learning models from two geometric perspectives: Euclidean and non-Euclidean (hyperbolic) mean representations, and we present and discuss their optimization solutions. We then summarize potential learning scenarios that may benefit from small data learning and analyze them. Finally, we survey challenging applications, such as computer vision and natural language processing, that may benefit from learning on small data.
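
For context, the textbook agnostic PAC sample-complexity bound that such analyses refine has the following form; this is the standard result stated for orientation, not the survey's own bound:

```latex
% To find h with err(h) <= min_{h' in H} err(h') + \epsilon,
% with probability at least 1 - \delta, it suffices to draw
% m i.i.d. labeled samples, where d is the VC dimension of H:
m \;=\; O\!\left(\frac{d + \ln(1/\delta)}{\epsilon^{2}}\right)
```

Agnostic active sampling aims to reduce this label complexity by querying labels only for informative samples rather than labeling the whole draw.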

Inspired by the human cognitive system, attention is a mechanism that imitates human cognitive awareness of specific information, amplifying critical details so that more focus is put on the essential aspects of data. Deep learning has employed attention to boost performance for many applications. Interestingly, the same attention design can suit processing different data modalities and can easily be incorporated into large networks. Furthermore, multiple complementary attention mechanisms can be incorporated into one network. Hence, attention techniques have become extremely attractive. However, the literature lacks a comprehensive survey specific to attention techniques to guide researchers in employing attention in their deep models. Note that, besides being demanding in terms of training data and computational resources, transformers cover only a single category of self-attention out of the many categories available. We fill this gap and provide an in-depth survey of 50 attention techniques, categorizing them by their most prominent features. We initiate our discussion by introducing the fundamental concepts behind the success of the attention mechanism. Next, we furnish some essentials such as the strengths and limitations of each attention category, describe their fundamental building blocks and basic formulations with primary usage, and review their applications specifically for computer vision. We also discuss the challenges and open questions related to attention mechanisms in general. Finally, we recommend possible future research directions for deep attention.
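
As a concrete reference point, the scaled dot-product formulation underlying self-attention can be written in a few lines. This is a minimal NumPy sketch of the standard mechanism (Vaswani et al., 2017), not any specific technique from the survey:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weights each value by the softmax-normalized similarity of its key
    to the query; the sqrt(d_k) scaling stabilizes the softmax."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)        # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V                                    # attended output

# Toy usage: 4 queries/keys/values of dimension 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)               # shape (4, 8)
```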

Causality can be described in terms of a structural causal model (SCM) that carries information on the variables of interest and their mechanistic relations. For most processes of interest the underlying SCM will only be partially observable, so causal inference tries to leverage any exposed information. Graph neural networks (GNN), as universal approximators on structured input, are a viable candidate for causal learning, suggesting a tighter integration with SCM. To this end we present a theoretical analysis from first principles that establishes a novel connection between GNN and SCM while providing an extended view on general neural-causal models. We then establish a new model class for GNN-based causal inference that is necessary and sufficient for causal effect identification. Our empirical illustration on simulations and standard benchmarks validates our theoretical proofs.
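
To make the GNN-SCM analogy concrete: a message-passing layer computes each node's next state from its neighbours, much as a structural equation computes a variable from its parents. Below is a minimal sketch of one such layer (mean aggregation and a ReLU, chosen for illustration; the paper's model class is more general):

```python
import numpy as np

def gnn_layer(H, A, W):
    """One message-passing layer: each node aggregates its neighbours'
    features (mean over its adjacency row) and applies a shared linear
    map plus ReLU -- a local mechanism, analogous to how an SCM computes
    a variable from its parents.
    H: (n, d) node features, A: (n, n) adjacency, W: (d, d') weights."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    messages = (A @ H) / deg                      # mean-aggregate neighbours
    return np.maximum((H + messages) @ W, 0.0)    # combine and transform
```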

Generative models are now capable of producing highly realistic images that look nearly indistinguishable from the data on which they are trained. This raises the question: if we have good enough generative models, do we still need datasets? We investigate this question in the setting of learning general-purpose visual representations from a black-box generative model rather than directly from data. Given an off-the-shelf image generator without any access to its training data, we train representations from the samples output by this generator. We compare several representation learning methods that can be applied to this setting, using the latent space of the generator to generate multiple "views" of the same semantic content. We show that for contrastive methods, this multiview data can naturally be used to identify positive pairs (nearby in latent space) and negative pairs (far apart in latent space). We find that the resulting representations rival those learned directly from real data, but that good performance requires care in the choice of sampling strategy and training method. Generative models can be viewed as a compressed and organized copy of a dataset, and we envision a future where more and more "model zoos" proliferate while datasets become increasingly unwieldy, missing, or private. This paper suggests several techniques for dealing with visual representation learning in such a future. Code is released on our project page: //ali-design.github.io/GenRep/
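
A minimal sketch of the two ingredients follows: creating positive pairs by decoding nearby latents, and an InfoNCE-style contrastive loss. The `generator` callable and the Gaussian latent perturbation are illustrative assumptions, not the paper's exact sampling strategy:

```python
import numpy as np

def latent_views(generator, z, sigma=0.1, rng=None):
    """Two 'views' of the same semantic content: decode a latent z and a
    small Gaussian perturbation of it. Nearby latents => positive pair."""
    rng = rng or np.random.default_rng()
    z_pos = z + sigma * rng.standard_normal(z.shape)
    return generator(z), generator(z_pos)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor: pull the positive view close, push
    latent-distant negatives away (cosine similarity, temperature tau)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                        # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```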

Knowledge is a formal way of understanding the world, providing human-level cognition and intelligence for the next-generation artificial intelligence (AI). One representation of knowledge is the structural relations between entities. Relation Extraction (RE), a sub-task of information extraction, is an effective way to acquire this important knowledge automatically and plays a vital role in Natural Language Processing (NLP). Its purpose is to identify semantic relations between entities in natural language text. Several previous studies on RE have documented that techniques based on Deep Neural Networks (DNNs) have become the prevailing approach in this research area. In particular, supervised and distant supervision methods based on DNNs are the most popular and reliable solutions for RE. This article 1) introduces some general concepts, and further 2) gives a comprehensive overview of DNNs in RE from two points of view: supervised RE, which attempts to improve the standard RE systems, and distant supervision RE, which adopts DNNs to design the sentence encoder and the de-noising method. We further 3) cover some novel methods, describe some recent trends, and discuss possible future research directions for this task.
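
For the distant-supervision side, a common de-noising building block is selective attention over a bag of sentences that share an entity pair: sentences whose encodings better match a relation query are weighted up, suppressing wrongly labeled ones. The sketch below illustrates the idea; the dot-product scoring is an assumption, not any specific paper's exact formulation:

```python
import numpy as np

def bag_attention(sentence_reprs, relation_query):
    """Selective attention over a bag of sentences (distant supervision):
    sentences that better match the relation query get higher weight,
    down-weighting noisy labels.
    sentence_reprs: (m, d) encoded sentences, relation_query: (d,)."""
    scores = sentence_reprs @ relation_query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over the bag
    return weights @ sentence_reprs               # bag-level representation
```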

Most object recognition approaches predominantly focus on learning discriminative visual patterns while overlooking the holistic object structure. Though important, structure modeling usually requires significant manual annotations and is therefore labor-intensive. In this paper, we propose to "look into object" (explicitly yet intrinsically model the object structure) by incorporating self-supervisions into the traditional framework. We show that the recognition backbone can be substantially enhanced for more robust representation learning, without any extra cost in annotation or inference speed. Specifically, we first propose an object-extent learning module for localizing the object according to the visual patterns shared among the instances of the same category. We then design a spatial context learning module for modeling the internal structure of the object by predicting the relative positions within the extent. These two modules can easily be plugged into any backbone network during training and detached at inference time. Extensive experiments show that our look-into-object approach (LIO) achieves large performance gains on a number of benchmarks, including generic object recognition (ImageNet) and fine-grained object recognition tasks (CUB, Cars, Aircraft). We also show that this learning paradigm is highly generalizable to other tasks such as object detection and segmentation (MS COCO). Project page: //github.com/JDAI-CV/LIO.
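
To illustrate the kind of free supervision a spatial context module can use, one can derive target signals from the feature-grid geometry itself, e.g. the offset of each cell from a reference cell, and train the network to predict them. This is a hypothetical sketch of such a target, not LIO's exact formulation:

```python
import numpy as np

def relative_position_targets(h, w):
    """Self-supervised targets: for an h x w feature grid, the (dy, dx)
    offset of every cell relative to a reference cell (here the grid
    centre, an illustrative choice). Predicting these offsets pushes the
    backbone to encode internal object structure."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ref_y, ref_x = h // 2, w // 2
    return np.stack([ys - ref_y, xs - ref_x], axis=-1)   # (h, w, 2)
```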

Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much-studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes than conventional KBs (18x more nodes in ATOMIC than in Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures, a major challenge for existing KB completion methods that assume densely connected graphs over a relatively smaller set of nodes. In this paper, we present novel KB completion models that address these challenges by exploiting the structural and semantic context of nodes. Specifically, we investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification, and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe our method for incorporating information from both these sources in a joint model and provide the first empirical results for KB completion on ATOMIC and evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency. Further analysis of model predictions sheds light on the types of commonsense knowledge that language models capture well.
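
A minimal sketch of the joint idea: fuse a language-model embedding of a node's free-form text with a graph embedding of its local neighbourhood, then score a candidate edge. The concatenation fusion and the DistMult-style scorer are illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

def score_triple(head_text_emb, head_graph_emb, rel_emb,
                 tail_text_emb, tail_graph_emb):
    """Fuse LM text embeddings with GCN graph embeddings by concatenation,
    then score the (head, relation, tail) triple DistMult-style.
    rel_emb must have the same dimension as the concatenated node vectors."""
    head = np.concatenate([head_text_emb, head_graph_emb])
    tail = np.concatenate([tail_text_emb, tail_graph_emb])
    return np.sum(head * rel_emb * tail)          # higher => more plausible
```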

The potential of graph convolutional neural networks for the task of zero-shot learning has been demonstrated recently. These models are highly sample efficient, as related concepts in the graph structure share statistical strength, allowing generalization to new classes when faced with a lack of data. However, knowledge from distant nodes can get diluted when propagating through intermediate nodes, because current approaches to zero-shot learning use graph propagation schemes that perform Laplacian smoothing at each layer. We show that extensive smoothing does not help the task of regressing classifier weights in zero-shot learning. In order to still incorporate information from distant nodes and utilize the graph structure, we propose an Attentive Dense Graph Propagation Module (ADGPM). ADGPM allows us to exploit the hierarchical structure of the knowledge graph through additional connections. These connections are added based on a node's relationship to its ancestors and descendants, and an attention scheme is further used to weight their contribution depending on their distance to the node. Finally, we illustrate that fine-tuning the feature representation after training ADGPM leads to considerable improvements. Our method achieves competitive results, outperforming previous zero-shot learning approaches.
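
The distance-aware weighting can be sketched in a few lines: each ancestor or descendant contributes according to a learnable per-distance score, normalized over the neighbourhood. This is a sketch of the idea under assumed shapes, not ADGPM's exact parameterization:

```python
import numpy as np

def distance_attention(distances, alpha):
    """Weight ancestor/descendant contributions by hop distance: one
    learnable scalar alpha[d] per distance d, softmax-normalized.
    distances: (k,) int hop distances; alpha: (max_dist + 1,) scores."""
    logits = alpha[distances]                     # per-distance scores
    weights = np.exp(logits - logits.max())
    return weights / weights.sum()                # attention weights, sum to 1
```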

Image segmentation remains an open problem, especially when the intensities of the objects of interest overlap due to intensity inhomogeneity (also known as bias field). To segment images with intensity inhomogeneities, we propose a bias correction embedded level set model in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase patterns to segment colour images and images with multiple objects, respectively. It has been extensively tested on synthetic and real images that are widely used in the literature, as well as on the public BrainWeb and IBSR datasets. Experimental results and comparison with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
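
For reference, the underlying image model can be written as a multiplicative bias field expanded in an orthogonal basis; the coefficients are optimized jointly with the level set energy, and the estimated bias is then used for correction:

```latex
% Multiplicative bias-field image model and its basis expansion:
% I = observed image, J = true image, n = noise, g_k = orthogonal
% primary functions, w_k = coefficients to estimate.
I(\mathbf{x}) \;=\; b(\mathbf{x})\, J(\mathbf{x}) + n(\mathbf{x}),
\qquad
b(\mathbf{x}) \;=\; \sum_{k=1}^{K} w_k\, g_k(\mathbf{x})
```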
