
Background: Establishing traceability from requirements documents to downstream artifacts early can be beneficial, as it allows engineers to reason about requirements quality (e.g., completeness, consistency, redundancy). However, creating such early traces is difficult if downstream artifacts do not exist yet. Objective: We propose to use domain-specific taxonomies to establish early traceability, raising the value and perceived benefits of trace links so that they are also available at later development phases, e.g., in design, testing, or maintenance. Method: We developed a recommender system that suggests trace links from requirements to a domain-specific taxonomy based on a series of heuristics. We designed a controlled experiment to compare industry practitioners' efficiency, accuracy, consistency, and confidence with and without support from the recommender. Results: We piloted the experimental material with seven practitioners. The analysis of self-reported confidence suggests that the trace task itself is very challenging, as both the control and the treatment group report low confidence in correctness and completeness. Conclusions: As a pilot, the experiment was successful: it provided initial feedback on the performance of the recommender, gave insight into the experimental material, and showed that the collected data can be meaningfully analysed.
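
As a concrete illustration, a minimal heuristic recommender might score requirement/taxonomy-node pairs by lexical overlap. The sketch below is our own assumption of what one such heuristic could look like; the function and data names are hypothetical, and the paper's actual heuristics are not specified in the abstract.

```python
# Minimal sketch of a heuristic trace-link recommender: rank taxonomy nodes
# by token overlap with the requirement text. Names are hypothetical.
import re

def tokenize(text):
    """Lowercase and split on non-word characters."""
    return {t for t in re.split(r"\W+", text.lower()) if t}

def suggest_links(requirement, taxonomy, k=3):
    """Rank taxonomy nodes by Jaccard overlap with the requirement text."""
    req_tokens = tokenize(requirement)
    scores = []
    for node_id, label in taxonomy.items():
        node_tokens = tokenize(label)
        union = req_tokens | node_tokens
        jaccard = len(req_tokens & node_tokens) / len(union) if union else 0.0
        scores.append((jaccard, node_id))
    return sorted(scores, reverse=True)[:k]

taxonomy = {"T1": "brake control unit", "T2": "door control", "T3": "traction motor"}
print(suggest_links("The brake unit shall engage within 200 ms", taxonomy))
```

A real recommender would likely combine several such heuristics (e.g., stemming, synonym expansion, term weighting) rather than raw overlap alone.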

Related content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one kind of taxonomy, and a complete taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of a computational lexicon like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "car" might appear under both "vehicles" and "steel structures", though for some this merely means that "car" is part of several different taxonomies. A taxonomy might also simply organize things into groups or be an alphabetical list, although in that case the term vocabulary is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification that applies to all objects: the root node. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus progresses from the general to the more specific.
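
A hierarchical taxonomy maps naturally onto a rooted tree. The following minimal Python sketch, with illustrative node names, shows the root-node/subset structure and the general-to-specific path described above.

```python
# Minimal sketch of a hierarchical taxonomy as a rooted tree.
# Node names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    label: str
    children: list = field(default_factory=list)

    def add(self, label):
        child = TaxonomyNode(label)
        self.children.append(child)
        return child

    def path_to(self, label, path=()):
        """Return the root-to-node path: reasoning from general to specific."""
        path = path + (self.label,)
        if self.label == label:
            return path
        for child in self.children:
            found = child.path_to(label, path)
            if found:
                return found
        return None

root = TaxonomyNode("Thing")       # root node: applies to all objects
vehicle = root.add("Vehicle")      # more specific classification
vehicle.add("Car")
print(root.path_to("Car"))         # ('Thing', 'Vehicle', 'Car')
```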


Visible-infrared person re-identification (VIReID) primarily deals with matching identities across person images from different modalities. Due to the modality gap between visible and infrared images, cross-modality identity matching poses significant challenges. Recognizing that high-level semantics of pedestrian appearance, such as gender, shape, and clothing style, remain consistent across modalities, this paper intends to bridge the modality gap by infusing visual features with high-level semantics. Given the capability of CLIP to sense high-level semantic information corresponding to visual representations, we explore the application of CLIP within the domain of VIReID. Consequently, we propose a CLIP-Driven Semantic Discovery Network (CSDN) that consists of a Modality-specific Prompt Learner, Semantic Information Integration (SII), and High-level Semantic Embedding (HSE). Specifically, considering the diversity stemming from modality discrepancies in language descriptions, we devise bimodal learnable text tokens to capture modality-private semantic information for visible and infrared images, respectively. Additionally, acknowledging the complementary nature of semantic details across different modalities, we integrate text features from the bimodal language descriptions to achieve comprehensive semantics. Finally, we establish a connection between the integrated text features and the visual features across modalities. This process embeds rich high-level semantic information into visual representations, thereby promoting the modality invariance of visual representations. The effectiveness and superiority of our proposed CSDN over existing methods have been substantiated through experimental evaluations on multiple widely used benchmarks. The code will be released at \url{//github.com/nengdong96/CSDN}.
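
To make the prompt-learning idea concrete, here is a hedged sketch of modality-specific learnable text tokens. The class and parameter names, the dimensions, and the stand-in for a CLIP-style class-name embedding are our assumptions, not the paper's verified implementation.

```python
# Hedged sketch of modality-specific learnable prompt tokens, in the spirit
# of CSDN's Modality-specific Prompt Learner. Dimensions are illustrative.
import torch
import torch.nn as nn

class ModalityPromptLearner(nn.Module):
    def __init__(self, n_ctx=4, dim=512):
        super().__init__()
        # One learnable token sequence per modality (visible / infrared).
        self.visible_ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        self.infrared_ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)

    def forward(self, class_embed, modality):
        ctx = self.visible_ctx if modality == "visible" else self.infrared_ctx
        # Prepend the modality-private context to the class-name embedding,
        # yielding a prompt sequence for a CLIP-style text encoder.
        return torch.cat([ctx, class_embed], dim=0)

learner = ModalityPromptLearner()
class_embed = torch.randn(8, 512)          # stand-in for an embedded class name
prompt = learner(class_embed, "infrared")  # (4 + 8, 512) prompt sequence
print(prompt.shape)
```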

Denoising Diffusion Models (DDMs) have become the leading generative technique for synthesizing high-quality images but are often constrained by their UNet-based architectures, which impose certain limitations. In particular, the considerable size of often hundreds of millions of parameters makes them impractical when hardware resources are limited. However, even with powerful hardware, processing images in the gigapixel range is difficult. This is especially true in fields such as microscopy or satellite imaging, where such challenges arise from the limitation to a predefined generative size and the inefficient scaling to larger images. We present two variations of Neural Cellular Automata (NCA)-based DDM methods to address these challenges and jumpstart NCA-based DDMs: Diff-NCA and FourierDiff-NCA. Diff-NCA performs diffusion by using only local features of the underlying distribution, making it suitable for applications where local features are critical. To communicate global knowledge in image space, naive NCA setups require timesteps that increase with the image scale. We solve this bottleneck of current NCA architectures by introducing FourierDiff-NCA, which advances Diff-NCA by adding a Fourier-based diffusion process and combines the frequency-organized Fourier space with the image space. By initiating diffusion in the Fourier domain and finalizing it in the image space, FourierDiff-NCA accelerates global communication. We validate our techniques by using Diff-NCA (208k parameters) to generate high-resolution digital pathology scans at 576x576 resolution and FourierDiff-NCA (887k parameters) to synthesize CelebA images at 64x64, outperforming VNCA and UNet-based DDMs five times their size. In addition, we demonstrate FourierDiff-NCA's capabilities in super-resolution, OOD image synthesis, and inpainting without additional training.
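
The locality bottleneck mentioned above follows from how an NCA updates: each cell perceives only its immediate neighbourhood, so information crosses the image one step per timestep. Below is a minimal sketch of one NCA update step, with an assumed channel count and update network rather than Diff-NCA's exact architecture.

```python
# Minimal sketch of one Neural Cellular Automata update step: each cell sees
# only its 3x3 neighbourhood, which is why naive NCAs need many timesteps to
# propagate global information across large images.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NCAStep(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # Fixed local perception: identity + Sobel-like filters per channel.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        ident = torch.zeros(3, 3); ident[1, 1] = 1.0
        kernels = torch.stack([ident, sobel_x, sobel_x.t()])   # (3, 3, 3)
        self.register_buffer("perceive",
                             kernels.repeat(channels, 1, 1).unsqueeze(1))
        self.update = nn.Sequential(
            nn.Conv2d(3 * channels, 128, 1), nn.ReLU(),
            nn.Conv2d(128, channels, 1),
        )
        self.channels = channels

    def forward(self, state):
        # Depthwise conv: each channel filtered by its 3 perception kernels.
        y = F.conv2d(state, self.perceive, padding=1, groups=self.channels)
        return state + self.update(y)  # residual, strictly local update

state = torch.randn(1, 16, 32, 32)
print(NCAStep()(state).shape)  # torch.Size([1, 16, 32, 32])
```

FourierDiff-NCA's contribution, as described above, is to sidestep this step-by-step propagation by starting diffusion in the Fourier domain, where low frequencies summarize global structure.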

Neural Radiance Fields (NeRF) and their extensions have shown great success in representing 3D scenes and synthesizing novel-view images. However, most NeRF methods take in low-dynamic-range (LDR) images, which may lose details, especially with nonuniform illumination. Some previous NeRF methods attempt to introduce high-dynamic-range (HDR) techniques but mainly target static scenes. To extend HDR NeRF methods to wider applications, we propose a dynamic HDR NeRF framework, named HDR-HexPlane, which can learn 3D scenes from dynamic 2D images captured with various exposures. A learnable exposure mapping function is constructed to obtain adaptive exposure values for each image. Based on the monotonically increasing prior, a camera response function is designed for stable learning. With the proposed model, high-quality novel-view images at any time point can be rendered with any desired exposure. We further construct a dataset containing multiple dynamic scenes captured with diverse exposures for evaluation. All the datasets and code are available at \url{//guanjunwu.github.io/HDR-HexPlane/}.
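
One common way to encode a monotonically increasing prior is to parameterize the response curve through positive increments. The sketch below is an assumption in that spirit (a piecewise-linear curve built from softplus increments), not HDR-HexPlane's verified design.

```python
# Hedged sketch of a camera response function with a monotonically increasing
# prior: softplus increments guarantee a strictly increasing curve in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicCRF(nn.Module):
    def __init__(self, n_bins=64):
        super().__init__()
        self.raw = nn.Parameter(torch.zeros(n_bins))  # unconstrained increments

    def forward(self, radiance):
        # Positive increments -> strictly increasing cumulative curve in [0, 1].
        inc = F.softplus(self.raw)
        curve = torch.cumsum(inc, dim=0)
        curve = curve / curve[-1]
        # Map normalized radiance onto the curve by linear interpolation.
        x = radiance.clamp(0, 1) * (len(curve) - 1)
        lo = x.floor().long().clamp(max=len(curve) - 2)
        w = x - lo.float()
        return (1 - w) * curve[lo] + w * curve[lo + 1]

crf = MonotonicCRF()
print(crf(torch.rand(5)))  # pixel intensities, monotone in the input
```

Because the curve is increasing by construction, gradient-based learning cannot collapse the exposure ordering, which is presumably what the abstract means by "stable learning".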

Overlapping sound events are ubiquitous in real-world environments, but existing end-to-end sound event detection (SED) methods still struggle to detect them effectively. A critical reason is that these methods represent overlapping events using shared and entangled frame-wise features, which degrades the feature discrimination. To solve this problem, we propose a disentangled feature learning framework to learn a category-specific representation. Specifically, we employ different projectors to learn the frame-wise features for each category. To ensure that these features do not contain information about other categories, we maximize the common information between frame-wise features within the same category and propose a frame-wise contrastive loss. In addition, considering that the labeled data used by the proposed method is limited, we propose a semi-supervised frame-wise contrastive loss that can leverage large amounts of unlabeled data to achieve feature disentanglement. The experimental results demonstrate the effectiveness of our method.
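
A generic frame-wise contrastive loss of the kind described above can be written as an InfoNCE objective over frames, where frames sharing a category are positives. This is a hedged sketch under our own formulation, not the paper's exact loss.

```python
# Minimal sketch of a frame-wise contrastive loss: frames of the same category
# are pulled together, frames of other categories pushed apart.
import torch
import torch.nn.functional as F

def frame_contrastive_loss(feats, labels, temperature=0.1):
    """feats: (N, D) frame features; labels: (N,) category ids."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature                 # (N, N) similarities
    mask_pos = labels.unsqueeze(0) == labels.unsqueeze(1)
    mask_pos.fill_diagonal_(False)                        # exclude self-pairs
    self_mask = torch.eye(len(feats), dtype=torch.bool)
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, -1e9), dim=1, keepdim=True)
    pos_counts = mask_pos.sum(1).clamp(min=1)
    return -(log_prob * mask_pos).sum(1).div(pos_counts).mean()

feats = torch.randn(8, 32)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(frame_contrastive_loss(feats, labels))
```

The semi-supervised variant in the abstract would presumably replace the hard labels with pseudo-labels or agreement signals on unlabeled frames; that extension is not shown here.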

Part-of-speech tagging in zero-resource settings can be an effective approach for low-resource languages when no labeled training data is available. Existing systems use two main techniques for POS tagging: pretrained multilingual large language models (LLMs), or projecting the source-language labels into the zero-resource target language and training a sequence labeling model on them. We explore the latter approach using an off-the-shelf alignment module and train a hidden Markov model (HMM) to predict the POS tags. We evaluate a transfer learning setup with English as the source language and French, German, and Spanish as target languages for part-of-speech tagging. Our conclusion is that projected alignment data in a zero-resource language can be beneficial for predicting POS tags.
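
The projection step itself is simple: each aligned target token inherits the POS tag of its source token. A minimal sketch, assuming the alignment module outputs (source index, target index) pairs:

```python
# Hedged sketch of label projection through word alignments. The alignment
# format is an assumption about the off-the-shelf module's output.
def project_pos_tags(src_tags, alignments, tgt_len, unk="X"):
    """src_tags: list of POS tags; alignments: iterable of (src_i, tgt_j)."""
    tgt_tags = [unk] * tgt_len  # unaligned tokens keep a placeholder tag
    for src_i, tgt_j in alignments:
        if 0 <= src_i < len(src_tags) and 0 <= tgt_j < tgt_len:
            tgt_tags[tgt_j] = src_tags[src_i]
    return tgt_tags

# "The cat sleeps" -> "Le chat dort"
src_tags = ["DET", "NOUN", "VERB"]
alignments = [(0, 0), (1, 1), (2, 2)]
print(project_pos_tags(src_tags, alignments, tgt_len=3))
# ['DET', 'NOUN', 'VERB']
```

The projected tags then serve as (noisy) supervision for training the target-language HMM.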

Few-shot Knowledge Graph (KG) completion is a focus of current research, where each task aims at querying unseen facts of a relation given its few-shot reference entity pairs. Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., entities may exhibit diverse roles within task relations, and references may make different contributions to queries. This work proposes an adaptive attentional network for few-shot KG completion by learning adaptive entity and reference representations. Specifically, entities are modeled by an adaptive neighbor encoder to discern their task-oriented roles, while references are modeled by an adaptive query-aware aggregator to differentiate their contributions. Through the attention mechanism, both entities and references can capture their fine-grained semantic meanings, and thus render more expressive representations that are more predictive for knowledge acquisition in the few-shot scenario. Evaluation in link prediction on two public datasets shows that our approach achieves new state-of-the-art results with different few-shot sizes.
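
As an illustration of query-aware reference aggregation, the following sketch weights each few-shot reference by its attention score against the query. Dimensions and module names are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a query-aware attention aggregator: each few-shot
# reference pair is weighted by its relevance to the query, so different
# queries receive different aggregated references.
import torch
import torch.nn as nn

class QueryAwareAggregator(nn.Module):
    def __init__(self, dim=100):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, refs, query):
        """refs: (K, D) reference-pair embeddings; query: (D,) embedding."""
        scores = self.proj(refs) @ query          # (K,) query-conditioned scores
        weights = torch.softmax(scores, dim=0)    # contributions differ per query
        return weights @ refs                     # (D,) aggregated reference

agg = QueryAwareAggregator()
refs, query = torch.randn(5, 100), torch.randn(100)
print(agg(refs, query).shape)  # torch.Size([100])
```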

Convolutional neural networks (CNNs) have shown dramatic improvements in single image super-resolution (SISR) by using large-scale external samples. Despite their remarkable performance based on the external dataset, they cannot exploit internal information within a specific image. Another problem is that they are applicable only to the specific condition of data on which they are supervised. For instance, the low-resolution (LR) image should be a "bicubic" downsampled noise-free image from a high-resolution (HR) one. To address both issues, zero-shot super-resolution (ZSSR) has been proposed for flexible internal learning. However, ZSSR requires thousands of gradient updates, i.e., a long inference time. In this paper, we present Meta-Transfer Learning for Zero-Shot Super-Resolution (MZSR), which leverages ZSSR. Precisely, it is based on finding a generic initial parameter that is suitable for internal learning. Thus, we can exploit both external and internal information, where a single gradient update can yield quite considerable results (see Figure 1). With our method, the network can quickly adapt to a given image condition. In this respect, our method can be applied to a large spectrum of image conditions within a fast adaptation process.
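
The key mechanic is test-time internal learning from a meta-learned initialization: build a training pair from the test image itself and take a single gradient step. A toy sketch with a stand-in network (not MZSR's actual model or meta-training loop):

```python
# Hedged sketch of one-step test-time adaptation for internal learning.
# The tiny conv net is a stand-in for a meta-learned SR network.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Conv2d(3, 3, 3, padding=1)
# Pretend these weights came from meta-transfer learning on external data.

lr_img = torch.rand(1, 3, 32, 32)                    # given test image (LR)
lr_son = F.interpolate(lr_img, scale_factor=0.5)     # "LR-son": downscaled LR
inp = F.interpolate(lr_son, scale_factor=2.0)        # re-upscaled LR-son

opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss = F.l1_loss(model(inp), lr_img)                 # internal-learning loss
opt.zero_grad(); loss.backward(); opt.step()         # one single update

sr = model(F.interpolate(lr_img, scale_factor=2.0))  # adapted super-resolution
print(sr.shape)  # torch.Size([1, 3, 64, 64])
```

The point of the meta-learned initialization is that this single step already lands near a good image-specific optimum, where plain ZSSR would need thousands of such updates from scratch.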

Medical image segmentation requires consensus ground truth segmentations to be derived from multiple expert annotations. A novel approach is proposed that obtains consensus segmentations from experts using graph cuts (GC) and semi-supervised learning (SSL). Popular approaches use iterative Expectation Maximization (EM) to estimate the final annotation and quantify each annotator's performance. Such techniques pose the risk of getting trapped in local minima. We propose a self-consistency (SC) score to quantify annotator consistency using low-level image features. SSL is used to predict missing annotations by considering global features and local image consistency. The SC score also serves as the penalty cost in a second-order Markov random field (MRF) cost function optimized using graph cuts to derive the final consensus label. Graph cuts obtain a global optimum without an iterative procedure. Experimental results on synthetic images, real data of Crohn's disease patients, and retinal images show our final segmentation to be accurate and more consistent than competing methods.
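
To make the role of the SC score concrete, the sketch below writes out a second-order MRF energy in which disagreeing with a more self-consistent annotator costs more, plus a pairwise smoothness term. The weighting scheme is our own assumption, and the graph-cut optimization itself is omitted.

```python
# Minimal sketch of a second-order MRF energy for consensus segmentation:
# unary term weighted by annotator self-consistency (SC) scores, pairwise
# smoothness over 4-connected neighbours.
import numpy as np

def mrf_energy(labels, annotations, sc_scores, smoothness=1.0):
    """labels: (H, W) binary consensus; annotations: (A, H, W); sc: (A,)."""
    # Unary: disagreeing with a highly self-consistent annotator costs more.
    unary = sum(sc * np.abs(ann - labels).sum()
                for sc, ann in zip(sc_scores, annotations))
    # Pairwise: penalize label changes between 4-connected neighbours.
    pairwise = (np.abs(np.diff(labels, axis=0)).sum()
                + np.abs(np.diff(labels, axis=1)).sum())
    return unary + smoothness * pairwise

annotations = np.random.randint(0, 2, (3, 8, 8))     # three expert masks
sc_scores = np.array([0.9, 0.6, 0.8])                # self-consistency scores
consensus = (annotations.mean(0) > 0.5).astype(int)  # majority-vote initial guess
print(mrf_energy(consensus, annotations, sc_scores))
```

For a binary energy of this submodular form, a graph cut (e.g., max-flow) finds the global optimum in one shot, which is what lets the method avoid EM's local-minima risk.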

We propose a novel single shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector, by a semantic segmentation branch and a global activation module. The segmentation branch is supervised by weak segmentation ground-truth, i.e., no extra annotation is required. In conjunction with that, we employ a global activation module which learns the relationship between channels and object classes in a self-supervised manner. Comprehensive experimental results on both PASCAL VOC and MS COCO detection datasets demonstrate the effectiveness of the proposed method. In particular, with a VGG16-based DES, we achieve an mAP of 81.7 on VOC2007 test and an mAP of 32.8 on COCO test-dev with an inference speed of 31.5 milliseconds per image on a Titan Xp GPU. With a lower resolution version, we achieve an mAP of 79.7 on VOC2007 with an inference speed of 13.0 milliseconds per image.
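
Reading "learns the relationship between channels and object classes" as channel-wise recalibration, a squeeze-and-excitation-style sketch of a global activation module might look as follows; this is our interpretation, not DES's verified implementation.

```python
# Hedged sketch of a global activation module as channel-wise recalibration:
# global pooling, then a gating network that reweights the channels.
import torch
import torch.nn as nn

class GlobalActivation(nn.Module):
    def __init__(self, channels=512, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        # Squeeze: global average pool -> (B, C); excite: per-channel weights.
        w = self.gate(x.mean(dim=(2, 3)))
        return x * w.unsqueeze(-1).unsqueeze(-1)

feat = torch.randn(2, 512, 38, 38)  # a conv feature map from the detector
print(GlobalActivation()(feat).shape)  # torch.Size([2, 512, 38, 38])
```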

We investigate the problem of automatically determining what type of shoe left an impression found at a crime scene. This recognition problem is made difficult by the variability in types of crime scene evidence (ranging from traces of dust or oil on hard surfaces to impressions made in soil) and the lack of comprehensive databases of shoe outsole tread patterns. We find that mid-level features extracted by pre-trained convolutional neural nets are surprisingly effective descriptors for this specialized domain. However, the choice of similarity measure for matching exemplars to a query image is essential to good performance. For matching multi-channel deep features, we propose the use of multi-channel normalized cross-correlation and analyze its effectiveness. Our proposed metric significantly improves performance in matching crime scene shoeprints to laboratory test impressions. We also show its effectiveness in other cross-domain image retrieval problems: matching facade images to segmentation labels and aerial photos to map images. Finally, we introduce a discriminatively trained variant and fine-tune our system through our proposed metric, obtaining state-of-the-art performance.
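
A simplified version of multi-channel normalized cross-correlation, restricted to same-size feature maps: normalize each channel to zero mean and unit variance, correlate per channel, and average across channels. The paper's exact formulation (e.g., sliding-window correlation, channel weighting) may differ.

```python
# Minimal sketch of multi-channel normalized cross-correlation (MCNCC)
# between two same-size deep feature maps.
import torch

def mcncc(a, b, eps=1e-6):
    """a, b: (C, H, W) deep feature maps; returns a scalar similarity."""
    def normalize(x):
        mu = x.mean(dim=(1, 2), keepdim=True)
        sigma = x.std(dim=(1, 2), keepdim=True)
        return (x - mu) / (sigma + eps)
    a, b = normalize(a), normalize(b)
    # Per-channel correlation, then a uniform average over channels.
    return (a * b).mean(dim=(1, 2)).mean()

query = torch.randn(64, 32, 32)     # deep features of a crime-scene print
exemplar = torch.randn(64, 32, 32)  # deep features of a lab test impression
print(mcncc(query, exemplar))
```

Per-channel normalization is what distinguishes this from a plain dot product: each channel contributes on an equal footing, regardless of its activation scale.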
