
Requirements Engineering and Software Testing are mature areas and have seen a lot of research. Nevertheless, their interactions have been sparsely explored beyond the concept of traceability. To fill this gap, we propose a definition of requirements engineering and software test (REST) alignment, a taxonomy that characterizes the methods linking the respective areas, and a process to assess alignment. The taxonomy can support researchers in identifying new opportunities for investigation, and practitioners in comparing alignment methods and evaluating alignment, or the lack thereof. We constructed the REST taxonomy by analyzing alignment methods published in the literature, iteratively validating the emerging dimensions. The resulting concept of an information dyad characterizes the exchange of information required for any alignment to take place. We demonstrate the use of the taxonomy by applying it to five in-depth cases and illustrate angles of analysis on a set of thirteen alignment methods. In addition, we developed an assessment framework (REST-bench), applied it in an industrial assessment, and showed that, with low effort, it can identify opportunities to improve REST alignment. Although we expect that the taxonomy can be further refined, we believe that the information dyad is a valid and useful construct for understanding alignment.
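
The information dyad is the taxonomy's central construct. As a rough illustration only (not the authors' formal definition), the sketch below models a dyad as a pair of information entities, one owned by requirements engineering and one by testing, connected by a linking mechanism; the class names, fields, and example values are assumptions made for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Owner(Enum):
    REQUIREMENTS_ENGINEERING = auto()
    SOFTWARE_TESTING = auto()

@dataclass(frozen=True)
class InformationEntity:
    """A piece of information produced on either side (e.g., a requirement, a test case)."""
    name: str
    owner: Owner

@dataclass(frozen=True)
class InformationDyad:
    """Hypothetical encoding of an information dyad: two information entities whose
    exchange of information (via some mechanism) enables REST alignment."""
    source: InformationEntity
    target: InformationEntity
    mechanism: str  # e.g., a trace link, a shared model, a review meeting

# Example: a requirement linked to an acceptance test via a trace link.
dyad = InformationDyad(
    source=InformationEntity("user story US-12", Owner.REQUIREMENTS_ENGINEERING),
    target=InformationEntity("acceptance test AT-7", Owner.SOFTWARE_TESTING),
    mechanism="trace link",
)
print(dyad)
```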

Related Content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and a complete taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of a computational lexicon like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "Car" might appear with both "Vehicle" and "Steel structure" as parents, although for some this merely means that "Car" is part of several different taxonomies. A taxonomy might also simply be the organization of things into groups, or an alphabetical list; here, however, the term vocabulary is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a larger variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus progresses from the general to the more specific.
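
Since a hierarchical taxonomy is, mathematically, just a rooted tree of classifications, a minimal sketch of that structure follows; the node labels and example taxonomy are illustrative only, and a multi-parent (network) scheme would need a DAG rather than this tree.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """A classification; its children are more specific classifications of a subset of objects."""
    label: str
    children: list["TaxonomyNode"] = field(default_factory=list)

    def add(self, label: str) -> "TaxonomyNode":
        child = TaxonomyNode(label)
        self.children.append(child)
        return child

# The root applies to all objects; reasoning proceeds from the general to the specific.
root = TaxonomyNode("Thing")
vehicle = root.add("Vehicle")
vehicle.add("Car")
vehicle.add("Bicycle")

def print_tree(node: TaxonomyNode, depth: int = 0) -> None:
    print("  " * depth + node.label)
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(root)
```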

Recommender systems (RS) have achieved significant success by leveraging explicit identification (ID) features. However, the full potential of content features, especially pure image pixel features, remains relatively unexplored. The limited availability of large, diverse, and content-driven image recommendation datasets has hindered the use of raw images as item representations. In this regard, we present PixelRec, a massive image-centric recommendation dataset that includes approximately 200 million user-image interactions, 30 million users, and 400,000 high-quality cover images. By providing direct access to raw image pixels, PixelRec enables recommendation models to learn item representations directly from them. To demonstrate its utility, we begin by presenting the results of several classical pure ID-based baseline models, termed IDNet, trained on PixelRec. Then, to show the effectiveness of the dataset's image features, we substitute the itemID embeddings (from IDNet) with a powerful vision encoder that represents items using their raw image pixels. This new model is dubbed PixelNet. Our findings indicate that even in standard, non-cold-start recommendation settings where IDNet is recognized as highly effective, PixelNet can already perform on par with or even better than IDNet. Moreover, PixelNet has several other notable advantages over IDNet, such as being more effective in cold-start and cross-domain recommendation scenarios. These results underscore the importance of visual features in PixelRec. We believe that PixelRec can serve as a critical resource and testing ground for research on recommendation models that emphasize image pixel content. The dataset, code, and leaderboard will be available at //github.com/website-pixelrec/PixelRec.
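
To make the IDNet-to-PixelNet substitution concrete, here is a hedged PyTorch-style sketch: IDNet looks up a trainable itemID embedding, while PixelNet replaces that lookup with a vision encoder applied to the raw cover image. The encoder choice (a torchvision ResNet-18) and the dimensions are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class IDNetItemTower(nn.Module):
    """Classical ID-based item representation: an embedding lookup per itemID."""
    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_items, dim)

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        return self.embed(item_ids)              # (batch, dim)

class PixelNetItemTower(nn.Module):
    """PixelNet-style item representation: encode the raw cover image instead of the ID."""
    def __init__(self, dim: int = 64):
        super().__init__()
        backbone = resnet18()                    # assumed vision encoder; any image backbone works
        backbone.fc = nn.Identity()              # drop the classification head
        self.encoder = backbone
        self.proj = nn.Linear(512, dim)          # project to the recommender's item dimension

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 3, H, W) raw pixels -> (batch, dim) item representation
        return self.proj(self.encoder(images))
```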

The Harrisonburg Department of Public Transportation (HDPT) aims to leverage their data to improve the efficiency and effectiveness of their operations. We construct two supply and demand models that help the department identify gaps in their service. The models take many variables into account, including the way that the HDPT reports to the federal government and the areas with the most vulnerable populations in Harrisonburg City. We employ data analysis and machine learning techniques to make our predictions.

Many computational linguistic methods have been proposed to study the information content of languages. We consider two interesting research questions: 1) how information is distributed over long documents, and 2) how content reduction, such as token selection and text summarization, affects the information density in long documents. We present four criteria for information density estimation in long documents: surprisal, entropy, uniform information density, and lexical density. Among these criteria, the first three adopt measures from information theory. We propose an attention-based word selection method for clinical notes and study machine summarization for multiple-domain documents. Our findings reveal systematic differences in the information density of long text across domains. Empirical results on automated medical coding from long clinical notes show the effectiveness of the attention-based word selection method.
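
As a hedged illustration of the first three criteria, the sketch below computes per-token surprisal, its mean (an estimate of cross-entropy in bits per token), a uniform-information-density score operationalized as the variance of surprisal, and lexical density as the fraction of content words. The probability source and the content-word test are placeholders, not the paper's exact estimators.

```python
import math

def surprisal(probs: list[float]) -> list[float]:
    """Per-token surprisal -log2 p(w_i | context), given conditional probabilities from some LM."""
    return [-math.log2(p) for p in probs]

def mean_surprisal(probs: list[float]) -> float:
    """Average surprisal: an estimate of the document's cross-entropy in bits per token."""
    s = surprisal(probs)
    return sum(s) / len(s)

def uid_score(probs: list[float]) -> float:
    """Uniform information density, here taken as the variance of surprisal:
    lower variance means information is spread more evenly over the document."""
    s = surprisal(probs)
    mu = sum(s) / len(s)
    return sum((x - mu) ** 2 for x in s) / len(s)

def lexical_density(tokens: list[str], is_content_word) -> float:
    """Share of content words (nouns, verbs, adjectives, adverbs) among all tokens."""
    return sum(is_content_word(t) for t in tokens) / len(tokens)

# Toy usage with made-up conditional probabilities:
p = [0.5, 0.1, 0.25, 0.05]
print(mean_surprisal(p), uid_score(p))
```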

Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP) and have recently gained significant attention in the domain of Recommendation Systems (RS). These models, trained on massive amounts of data using self-supervised learning, have demonstrated remarkable success in learning universal representations and have the potential to enhance various aspects of recommendation systems through effective transfer techniques such as fine-tuning and prompt tuning. The crucial aspect of harnessing the power of language models for recommendation quality is the utilization of their high-quality representations of textual features and their extensive coverage of external knowledge to establish correlations between items and users. To provide a comprehensive understanding of existing LLM-based recommendation systems, this survey presents a taxonomy that categorizes these models into two major paradigms, namely Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec), with the latter surveyed systematically for the first time. Furthermore, we systematically review and analyze existing LLM-based recommendation systems within each paradigm, providing insights into their methodologies, techniques, and performance. Additionally, we identify key challenges and several valuable findings to provide researchers and practitioners with inspiration.
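
One common discriminative pattern covered by such surveys is to score items by the similarity between LLM-encoded text of the user profile and of the item description. The sketch below is a generic illustration of that idea, not a method from the survey; the sentence-transformers model name is an assumption.

```python
# Embed item texts and a user profile with a pretrained text encoder,
# then rank items by cosine similarity.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed encoder choice

items = [
    "Waterproof hiking boots with ankle support",
    "Noise-cancelling over-ear headphones",
    "Lightweight trail-running backpack",
]
user_profile = "Enjoys weekend mountain hikes and camping trips"

item_emb = encoder.encode(items, convert_to_tensor=True)
user_emb = encoder.encode(user_profile, convert_to_tensor=True)

scores = util.cos_sim(user_emb, item_emb)[0]        # similarity of the user to each item
ranking = scores.argsort(descending=True).tolist()
print([items[i] for i in ranking])
```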

Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static, but rather evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge as the graph grows. Motivated by this, we delve into an expanding field of KG embedding in this paper, i.e., lifelong KG embedding. We consider knowledge transfer and retention when learning on growing snapshots of a KG, without having to relearn embeddings from scratch. The proposed model includes a masked KG autoencoder for embedding learning and update, an embedding transfer strategy to inject the learned knowledge into the new entity and relation embeddings, and an embedding regularization method to avoid catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms state-of-the-art inductive and lifelong embedding baselines.
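
The anti-forgetting idea can be made concrete with a hedged sketch: besides the task loss on the new snapshot, penalize drift of previously learned embeddings, and initialize unseen entities from known neighbors as a simple embedding-transfer stand-in. This is a generic L2 regularization sketch, not the paper's exact loss; names and the weighting are assumptions.

```python
import torch

def lifelong_embedding_loss(
    task_loss: torch.Tensor,
    old_entity_emb: torch.Tensor,   # frozen embeddings from the previous snapshot
    new_entity_emb: torch.Tensor,   # current embeddings of those same (old) entities
    reg_weight: float = 0.1,
) -> torch.Tensor:
    """Task loss on the new snapshot plus a drift penalty on old-entity embeddings,
    a generic way to trade plasticity against catastrophic forgetting."""
    drift = ((new_entity_emb - old_entity_emb.detach()) ** 2).sum(dim=-1).mean()
    return task_loss + reg_weight * drift

def init_new_entity(neighbor_embs: torch.Tensor) -> torch.Tensor:
    """Simple embedding-transfer stand-in: initialize a new entity's embedding
    as the mean of the embeddings of its already-known neighbors."""
    return neighbor_embs.mean(dim=0)
```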

Recently, graph neural networks (GNNs) have been widely used for document classification. However, most existing methods are based on static word co-occurrence graphs without sentence-level information, which poses three challenges: (1) word ambiguity, (2) word synonymity, and (3) dynamic contextual dependency. To address these challenges, we propose a novel GNN-based sparse structure learning model for inductive document classification. Specifically, a document-level graph is initially generated by a disjoint union of sentence-level word co-occurrence graphs. Our model collects a set of trainable edges connecting disjoint words between sentences and employs structure learning to sparsely select edges with dynamic contextual dependencies. Graphs with sparse structures can jointly exploit local and global contextual information in documents through GNNs. For inductive learning, the refined document graph is further fed into a general readout function for graph-level classification and optimization in an end-to-end manner. Extensive experiments on several real-world datasets demonstrate that the proposed model outperforms most state-of-the-art methods and reveal the necessity of learning sparse structures for each document.
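
A hedged sketch of the "trainable inter-sentence edges plus sparse selection" idea: score candidate edges between the word nodes of two different sentences with a small MLP and keep only the top-k per source word. This is a generic formulation, not the paper's exact mechanism; the scorer architecture and k are assumptions.

```python
import torch
import torch.nn as nn

class SparseEdgeSelector(nn.Module):
    """Scores candidate word-word edges across two sentences and keeps the top-k
    per source word, a generic stand-in for learned sparse structure selection."""
    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.k = k

    def forward(self, src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
        # src: (N, dim) word nodes of one sentence; dst: (M, dim) word nodes of another.
        pairs = torch.cat(
            [src.unsqueeze(1).expand(-1, dst.size(0), -1),
             dst.unsqueeze(0).expand(src.size(0), -1, -1)],
            dim=-1)                                                    # (N, M, 2*dim)
        scores = self.scorer(pairs).squeeze(-1)                        # (N, M) edge scores
        topk = scores.topk(min(self.k, dst.size(0)), dim=-1).indices   # keep k edges per source word
        mask = torch.zeros_like(scores, dtype=torch.bool)
        mask.scatter_(1, topk, torch.ones_like(topk, dtype=torch.bool))
        return mask                                                    # boolean adjacency between the sentences
```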

Recommender systems have been widely applied in different real-life scenarios to help us find useful information. Recently, Reinforcement Learning (RL) based recommender systems have become an emerging research topic. They often surpass traditional recommendation models, and even most deep-learning-based methods, owing to their interactive nature and autonomous learning ability. Nevertheless, applying RL in recommender systems raises various challenges. To this end, we first provide a thorough overview, comparison, and summarization of RL approaches for five typical recommendation scenarios, following the three main categories of RL: value-function, policy search, and Actor-Critic. Then, we systematically analyze the challenges and relevant solutions on the basis of existing literature. Finally, in discussing open issues and the limitations of RL for recommendation, we highlight some potential research directions in this field.
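
As a toy illustration of the value-function category only (not a method from this survey), the sketch below runs tabular Q-learning in a recommendation loop where the state is the user's last-clicked item, the action is the next item to recommend, and the reward is a click; every design choice here is an assumption.

```python
import random
from collections import defaultdict

ITEMS = ["a", "b", "c", "d"]
Q = defaultdict(float)                  # Q[(state, action)] table
alpha, gamma, eps = 0.1, 0.9, 0.2       # learning rate, discount, exploration rate

def recommend(state: str) -> str:
    """Epsilon-greedy recommendation: explore occasionally, otherwise pick the best-valued item."""
    if random.random() < eps:
        return random.choice(ITEMS)
    return max(ITEMS, key=lambda a: Q[(state, a)])

def update(state: str, action: str, reward: float, next_state: str) -> None:
    """Standard Q-learning update after observing the user's feedback."""
    best_next = max(Q[(next_state, a)] for a in ITEMS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```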

Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g., evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs are able to significantly outperform previous approaches while being more computationally efficient at the same time. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of different components of our framework and devise the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
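
The core of TGN is a per-node memory refreshed on each timed interaction event. A rough sketch under assumed shapes: build a message from the two endpoints' memories and the event features, then update each endpoint's memory with a GRU cell. Batching, message aggregation, and the gradient plumbing of the published framework are deliberately omitted.

```python
import torch
import torch.nn as nn

class TinyTGNMemory(nn.Module):
    """Simplified TGN-style memory: one state vector per node, updated by a GRU cell
    whenever the node takes part in a timed interaction event."""
    def __init__(self, num_nodes: int, mem_dim: int, event_dim: int):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, mem_dim))
        self.updater = nn.GRUCell(input_size=2 * mem_dim + event_dim, hidden_size=mem_dim)

    @torch.no_grad()
    def update(self, src: int, dst: int, event_feat: torch.Tensor) -> None:
        # Message = both endpoints' current memories plus the event features.
        msg = torch.cat([self.memory[src], self.memory[dst], event_feat]).unsqueeze(0)
        self.memory[src] = self.updater(msg, self.memory[src].clone().unsqueeze(0)).squeeze(0)
        self.memory[dst] = self.updater(msg, self.memory[dst].clone().unsqueeze(0)).squeeze(0)
```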

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and the optimal solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators and relate them to inference in graphical models. We derive two particular instantiations of our framework: a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
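
With a negative-entropy regularizer, the smoothed max becomes log-sum-exp and its gradient the softmax, which is what makes the DP recursion differentiable. A hedged sketch of a smoothed Viterbi forward recursion built on that operator (the potential layout and names are assumptions):

```python
import numpy as np
from scipy.special import logsumexp

def smoothed_viterbi_value(log_potentials: np.ndarray) -> float:
    """Smoothed Viterbi score of a linear chain.

    log_potentials has shape (T, K, K); entry [t, i, j] scores moving from state i
    at step t-1 to state j at step t (row 0 holds the start scores). Replacing the
    hard max with logsumexp (the entropy-regularized max) makes the value a smooth
    function of the potentials; its gradient is a soft alignment.
    """
    T, K, _ = log_potentials.shape
    alpha = log_potentials[0, 0, :]                         # start scores per state
    for t in range(1, T):
        alpha = logsumexp(alpha[:, None] + log_potentials[t], axis=0)   # smoothed max over previous states
    return float(logsumexp(alpha))

# Toy usage: 4 time steps, 3 states, random potentials.
rng = np.random.default_rng(0)
print(smoothed_viterbi_value(rng.normal(size=(4, 3, 3))))
```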

Recommender System (RS) is a hot area where artificial intelligence (AI) techniques can be effectively applied to improve performance. Since the well-known Netflix Challenge, collaborative filtering (CF) has become the most popular and effective recommendation method. Despite their success in CF, various AI techniques still have to face the data sparsity and cold-start problems. Previous works tried to solve these two problems by utilizing auxiliary information, such as social connections among users and meta-data of items. However, they process different types of information separately, leading to information loss. In this work, we propose to utilize the Heterogeneous Information Network (HIN), which is a natural and general representation of different types of data, to enhance CF-based recommendation methods. HIN-based recommender systems face two problems: how to represent high-level semantics for recommendation and how to fuse heterogeneous information for recommendation. To address these problems, we propose applying meta-graphs to HIN-based RS and solve the information fusion problem with a "matrix factorization (MF) + factorization machine (FM)" framework. For the "MF" part, we obtain user-item similarity matrices from each meta-graph and adopt low-rank matrix approximation to get latent features for both users and items. For the "FM" part, we propose to apply FM with Group lasso (FMG) on the obtained features to simultaneously predict missing ratings and select useful meta-graphs. Experimental results on two large real-world datasets, i.e., Amazon and Yelp, show that our proposed approach outperforms state-of-the-art FM and other HIN-based recommendation methods.
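
To make the "MF + FM" pipeline concrete: each meta-graph yields latent user and item factors via low-rank approximation of its similarity matrix, the factor groups are concatenated into one feature vector, and an FM predicts the rating while a group-lasso penalty (one group per meta-graph) can switch uninformative meta-graphs off entirely. The sketch below shows only the FM prediction and a group-lasso term on the first-order weights; dimensions and the solver are assumptions.

```python
import numpy as np

def fm_predict(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    """Second-order factorization machine: w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j,
    computed with the usual O(n k) reformulation."""
    linear = w0 + w @ x
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return float(linear + interactions)

def group_lasso_penalty(w: np.ndarray, groups: list[np.ndarray], lam: float) -> float:
    """Sum of L2 norms over feature groups, one group per meta-graph, so the penalty
    can zero out all features derived from a meta-graph that does not help."""
    return lam * sum(np.linalg.norm(w[g]) for g in groups)

# Toy usage: features from 2 meta-graphs, 10 latent factors each (user and item side concatenated).
rng = np.random.default_rng(0)
x, w, V = rng.random(20), rng.random(20), rng.random((20, 4))
groups = [np.arange(0, 10), np.arange(10, 20)]
print(fm_predict(x, 0.1, w, V), group_lasso_penalty(w, groups, lam=0.05))
```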
