
The success of automated medical image analysis depends on large-scale, expert-annotated training sets. Unsupervised domain adaptation (UDA) has emerged as a promising approach to alleviate the burden of labeled data collection. However, UDA methods generally operate under a closed-set setting, assuming an identical label set between the source and target domains; this is over-restrictive in clinical practice, where new classes commonly arise across datasets due to taxonomic inconsistency. While several methods have been presented to tackle both domain shifts and incoherent label sets, none of them accounts for the common characteristics of the two issues or considers the learning dynamics along network training. In this work, we propose optimization trajectory distillation, a unified approach that addresses the two technical challenges from a new perspective. It exploits the low-rank nature of gradient space and devises a dual-stream distillation algorithm that regularizes the learning dynamics of insufficiently annotated domains and classes with external guidance obtained from reliable sources. Our approach resolves the issue of inadequate navigation along network optimization, which is the major obstacle in the taxonomy-adaptive cross-domain adaptation scenario. We evaluate the proposed method extensively on several tasks with clinical and open-world significance. The results demonstrate its effectiveness and its improvements over previous methods.
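The abstract does not give implementation details, but the core idea of regularizing learning dynamics via a low-rank gradient subspace can be sketched as follows; `low_rank_basis`, `distill_gradient`, and all constants are hypothetical illustrations, not the paper's API.

```python
# A minimal sketch: estimate a low-rank basis from gradients logged on a
# reliable source, then pull under-supervised target gradients toward it.
import numpy as np

def low_rank_basis(grad_buffer: np.ndarray, top_k: int) -> np.ndarray:
    """Estimate the dominant gradient subspace via SVD.
    grad_buffer: (num_steps, num_params) matrix of flattened source gradients."""
    _, _, vt = np.linalg.svd(grad_buffer, full_matrices=False)
    return vt[:top_k]  # (top_k, num_params) orthonormal rows

def distill_gradient(target_grad, basis, strength=0.5):
    """Blend the raw target gradient with its projection onto the basis."""
    projected = basis.T @ (basis @ target_grad)
    return (1 - strength) * target_grad + strength * projected

rng = np.random.default_rng(0)
source_grads = rng.normal(size=(64, 256))   # gradients logged on the source
g_target = rng.normal(size=256)             # noisy target-domain gradient
basis = low_rank_basis(source_grads, top_k=8)
g_regularized = distill_gradient(g_target, basis)
print(np.linalg.norm(g_target - g_regularized))
```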

Related content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and the full taxonomy of Wikipedia categories can be extracted by automated means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of a computational lexicon like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a child with multiple parents; for example, "car" might appear under both parents "vehicle" and "steel structure". For some, however, this merely means that "car" is part of several different taxonomies. A taxonomy might also simply organize things into groups, or be an alphabetically ordered list; in that case, though, the term vocabulary is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus proceeds from the general to the more specific.
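As a toy illustration of the structures described above, here is a minimal sketch of a taxonomy that is a tree except for one multi-parent node ("car" under both "vehicle" and "steel structure"); all names and helpers are invented for illustration.

```python
# Store the taxonomy as child -> parents, which covers both the strict tree
# case (one parent each) and the network/DAG case (multiple parents).
from collections import defaultdict

parents = defaultdict(set)
def add_edge(parent, child):
    parents[child].add(parent)

add_edge("thing", "vehicle")           # tree part, rooted at "thing"
add_edge("vehicle", "car")
add_edge("steel structure", "car")     # network part: a second parent

def ancestors(node):
    """All classifications that apply to `node`, general and specific."""
    seen, stack = set(), [node]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p); stack.append(p)
    return seen

print(ancestors("car"))   # {'vehicle', 'thing', 'steel structure'}
```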


NSFW (Not Safe for Work) content, in the context of a dialogue, can have severe side effects on users in open-domain dialogue systems. However, research on detecting NSFW language, especially sexually explicit content, within a dialogue context has significantly lagged behind. To address this issue, we introduce CensorChat, a dialogue monitoring dataset aimed at NSFW dialogue detection. Leveraging knowledge distillation techniques involving GPT-4 and ChatGPT, this dataset offers a cost-effective means of constructing NSFW content detectors. The process entails collecting real-life human-machine interaction data and breaking it down into single utterances and single-turn dialogues, with the chatbot delivering the final utterance. ChatGPT is employed to annotate unlabeled data, serving as a training set. Rationale validation and test sets are constructed using ChatGPT and GPT-4 as annotators, with a self-criticism strategy for resolving discrepancies in labeling. A BERT model is fine-tuned as a text classifier on pseudo-labeled data, and its performance is assessed. The study emphasizes the importance of AI systems prioritizing user safety and well-being in digital conversations while respecting freedom of expression. The proposed approach not only advances NSFW content detection but also aligns with evolving user protection needs in AI-driven dialogues.
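A minimal sketch of the final fine-tuning step, assuming the pseudo-labels have already been produced by ChatGPT; the two example utterances and the training hyperparameters are placeholders, and the Hugging Face `transformers` calls are standard usage rather than taken from the paper.

```python
# Fine-tune BERT as a binary NSFW classifier on pseudo-labeled utterances.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

texts = ["hi there", "explicit message ..."]   # pseudo-labeled utterances
labels = [0, 1]                                # 0 = safe, 1 = NSFW (ChatGPT labels)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
y = torch.tensor(labels)
model.train()
for _ in range(3):                             # a few passes over the toy batch
    out = model(**enc, labels=y)
    out.loss.backward()
    opt.step()
    opt.zero_grad()
```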

The convergence of SGD-based distributed training algorithms is tied to the data distribution across workers. Standard partitioning techniques try to achieve equal-sized partitions with per-class populations in proportion to the total dataset. Partitions having the same overall population size, or even the same number of samples per class, may still be non-IID in the feature space. In heterogeneous computing environments, where devices have different computing capabilities, even-sized partitions across devices can lead to the straggler problem in distributed SGD. We develop a framework for distributed SGD in heterogeneous environments based on a novel data partitioning algorithm involving submodular optimization. Our data partitioning algorithm explicitly accounts for resource heterogeneity across workers while achieving similar class-level feature distributions and maintaining class balance. Based on this algorithm, we develop a distributed SGD framework that can accelerate existing SOTA distributed training algorithms by up to 32%.
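The exact submodular objective is not given in the abstract; the sketch below greedily assigns samples under capacities proportional to worker speed, using a facility-location-style marginal gain to spread feature coverage, and omits the class-balance term for brevity. All names and constants are illustrative.

```python
# Toy greedy partitioner: capacity ~ worker speed, gain favors the worker
# whose partition covers the new sample least (facility-location flavor).
import numpy as np

def greedy_partition(X, speeds, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    # capacities proportional to speeds, guaranteed to sum to n
    caps = np.diff(np.round(np.cumsum(speeds) / np.sum(speeds) * n),
                   prepend=0).astype(int)
    parts = [[] for _ in speeds]
    for i in rng.permutation(n):
        gains = []
        for w, part in enumerate(parts):
            if len(part) >= caps[w]:
                gains.append(-np.inf)          # worker is full
            elif not part:
                gains.append(1.0)              # empty partition covers nothing
            else:
                sims = X[part] @ X[i] / (
                    np.linalg.norm(X[part], axis=1) * np.linalg.norm(X[i]) + 1e-9)
                gains.append(1.0 - sims.max()) # prefer low existing coverage
        parts[int(np.argmax(gains))].append(i)
    return parts

X = np.random.default_rng(1).normal(size=(100, 16))
parts = greedy_partition(X, speeds=[1.0, 2.0, 1.0])
print([len(p) for p in parts])   # sizes roughly proportional to speeds
```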

3D scene reconstruction from 2D images has been a long-standing task. Instead of estimating per-frame depth maps and fusing them in 3D, recent research leverages the neural implicit surface as a unified representation for 3D reconstruction. Equipped with data-driven pre-trained geometric cues, these methods have demonstrated promising performance. However, inaccurate prior estimation, which is usually inevitable, can lead to suboptimal reconstruction quality, particularly in some geometrically complex regions. In this paper, we propose a two-stage training process, decouple view-dependent and view-independent colors, and leverage two novel consistency constraints to enhance detail reconstruction performance without requiring extra priors. Additionally, we introduce an essential mask scheme to adaptively influence the selection of supervision constraints, thereby improving performance in a self-supervised paradigm. Experiments on synthetic and real-world datasets show the capability of reducing the interference from prior estimation errors and achieving high-quality scene reconstruction with rich geometric details.
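A minimal sketch of the color decoupling idea, assuming color is modeled as a view-independent base plus a view-dependent residual; the module name, layer sizes, and feature dimensions are illustrative assumptions, not the paper's architecture.

```python
# Two heads on shared geometry features: one ignores the viewing direction
# (view-independent color), the other consumes it (view-dependent residual).
import torch
import torch.nn as nn

class DecoupledColorHead(nn.Module):
    def __init__(self, feat_dim=64, dir_dim=3):
        super().__init__()
        self.base = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))
        self.residual = nn.Sequential(
            nn.Linear(feat_dim + dir_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, feat, view_dir):
        c_vi = self.base(feat)                                   # view-independent
        c_vd = self.residual(torch.cat([feat, view_dir], -1))    # view-dependent
        return torch.sigmoid(c_vi + c_vd), torch.sigmoid(c_vi)

head = DecoupledColorHead()
feat = torch.randn(1024, 64)   # per-sample geometry features
dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
color_full, color_base = head(feat, dirs)
# a consistency constraint could, e.g., tie the base color of the same
# surface point across views, since it should not depend on direction
```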

Probabilistic diffusion models enjoy increasing popularity in the deep learning community. They generate convincing samples from a learned distribution of input images with a wide field of practical applications. Originally, these approaches were motivated from drift-diffusion processes, but these origins find less attention in recent, practice-oriented publications. We investigate probabilistic diffusion models from the viewpoint of scale-space research and show that they fulfil generalised scale-space properties on evolving probability distributions. Moreover, we discuss similarities and differences between interpretations of the physical core concept of drift-diffusion in the deep learning and model-based world. To this end, we examine relations of probabilistic diffusion to osmosis filters.
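A small worked example of one generalized scale-space property on evolving probability distributions: convolving a density with heat kernels of increasing scale (the diffusion limit of the forward process) never increases its maximum. The bimodal density and grid below are arbitrary illustrations, not taken from the paper.

```python
# Check monotone decay of the density maximum under increasing diffusion scale.
import numpy as np

x = np.linspace(-4, 4, 801)
dx = x[1] - x[0]
p = np.exp(-((x + 1.5) ** 2) * 8) + 0.6 * np.exp(-((x - 1.0) ** 2) * 12)
p /= p.sum() * dx                            # bimodal initial density

def diffuse(p, t):
    """Convolve the density with a heat kernel of variance 2t."""
    k = np.exp(-x ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    q = np.convolve(p, k * dx, mode="same")
    return q / (q.sum() * dx)                # renormalize against edge loss

for t in [0.01, 0.1, 0.5, 2.0]:
    print(t, round(float(diffuse(p, t).max()), 4))  # maxima shrink with scale
```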

Many studies focus on improving pretraining or developing new backbones for text-video retrieval. However, existing methods may suffer from learning and inference bias, as recent research on other text-video tasks suggests: for instance, spatial appearance features in action recognition or temporal object co-occurrences in video scene graph generation can induce spurious correlations. In this work, we present a unique and systematic study of a temporal bias caused by the frame-length discrepancy between the training and test sets of trimmed video clips, which, to the best of our knowledge, is the first such attempt for a text-video retrieval task. We first hypothesise the bias and verify how it affects the model through a baseline study. We then propose a causal debiasing approach and perform extensive experiments and ablation studies on the Epic-Kitchens-100, YouCook2, and MSR-VTT datasets. Our model outperforms the baseline and the SOTA on nDCG, a semantic-relevancy-focused evaluation metric, showing that the bias is mitigated, as well as on other conventional metrics.
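The paper's causal intervention is not spelled out in the abstract; as a stand-in, the toy below removes the component of a retrieval score that is linearly predictable from clip frame length, illustrating the kind of length bias at issue. All quantities are synthetic.

```python
# Simulate a length-biased retrieval score, then regress out the length term.
import numpy as np

rng = np.random.default_rng(0)
length = rng.uniform(16, 256, size=500)   # frames per clip
relevance = rng.normal(size=500)          # latent true relevance
score = relevance + 0.01 * length         # biased similarity score

# fit and subtract the length-predictable part (simple linear adjustment)
A = np.stack([length, np.ones_like(length)], axis=1)
coef, *_ = np.linalg.lstsq(A, score, rcond=None)
debiased = score - A @ coef + score.mean()

print(np.corrcoef(length, score)[0, 1])     # strong spurious correlation
print(np.corrcoef(length, debiased)[0, 1])  # ~0 after the adjustment
```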

Deep neural networks have shown impressive performance for image-based disease detection. Performance is commonly evaluated through clinical validation on independent test sets to demonstrate clinically acceptable accuracy. Reporting good performance metrics on test sets, however, is not always a sufficient indication of the generalizability and robustness of an algorithm. In particular, when the test data is drawn from the same distribution as the training data, the iid test set performance can be an unreliable estimate of the accuracy on new data. In this paper, we employ stress testing to assess model robustness and subgroup performance disparities in disease detection models. We design progressive stress testing using five different bidirectional and unidirectional image perturbations with six different severity levels. As a use case, we apply stress tests to measure the robustness of disease detection models for chest X-ray and skin lesion images, and demonstrate the importance of studying class and domain-specific model behaviour. Our experiments indicate that some models may yield more robust and equitable performance than others. We also find that pretraining characteristics play an important role in downstream robustness. We conclude that progressive stress testing is a viable and important tool and should become standard practice in the clinical validation of image-based disease detection models.
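A minimal sketch of progressive stress testing, assuming one bidirectional perturbation (a gamma shift) applied at six severities in each direction; `model`, `images`, and the thresholds are toy stand-ins for a real detector and test set, and the perturbation choice is illustrative.

```python
# Sweep a perturbation over severities and record the accuracy curve.
import numpy as np

def gamma_perturb(img, severity, direction=+1):
    """Bidirectional gamma shift; severity 1..6, direction +/-1."""
    gamma = 1.0 + direction * 0.15 * severity
    return np.clip(img, 0, 1) ** gamma

def stress_curve(model, images, labels):
    accs = {}
    for direction in (+1, -1):
        for severity in range(1, 7):
            preds = model(gamma_perturb(images, severity, direction))
            accs[(direction, severity)] = float((preds == labels).mean())
    return accs

# toy stand-ins: a threshold "detector" on mean image intensity
images = np.random.default_rng(0).uniform(0, 1, size=(128, 32, 32))
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)
model = lambda x: (x.mean(axis=(1, 2)) > 0.5).astype(int)
print(stress_curve(model, images, labels))  # accuracy degrades with severity
```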

This paper presents an innovative methodology for improving the robustness and computational efficiency of Spiking Neural Networks (SNNs), a critical component in neuromorphic computing. The proposed approach integrates astrocytes, a type of glial cell prevalent in the human brain, into SNNs, creating astrocyte-augmented networks. To achieve this, we designed and implemented an astrocyte model on two distinct platforms: CPU/GPU and FPGA. Our FPGA implementation notably utilizes Dynamic Function Exchange (DFX) technology, enabling real-time hardware reconfiguration and adaptive model creation based on current operating conditions. Leveraging astrocytes significantly improves the fault tolerance of SNNs, thereby enhancing their robustness. Notably, our astrocyte-augmented SNN displays near-zero latency and theoretically infinite throughput, implying exceptional computational efficiency. Through comprehensive comparative analysis with prior work, we establish that our model surpasses others in neuron and synapse count while maintaining an efficient power consumption profile. These results underscore the potential of our methodology to shape the future of neuromorphic computing by providing robust and energy-efficient systems.
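A toy simulation of the astrocyte idea, assuming the astrocyte acts as a slow homeostatic unit that tracks population firing rate and adjusts synaptic gain to compensate for failed inputs; the LIF dynamics and every constant below are illustrative, not the paper's model.

```python
# Leaky integrate-and-fire layer with an astrocyte-like gain controller.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, T = 32, 8, 200
W = rng.uniform(0.1, 0.5, size=(n_out, n_in))
v = np.zeros(n_out)
gain = 1.0
fault = np.ones(n_in); fault[:8] = 0.0    # simulate 25% dead input neurons
target_rate, tau_astro = 0.15, 50.0
rate_est = target_rate

for t in range(T):
    spikes_in = (rng.random(n_in) < 0.2) * fault   # Bernoulli input spikes
    v += gain * (W @ spikes_in) - 0.1 * v          # leaky integration
    spikes_out = v > 1.0                           # threshold crossing
    v[spikes_out] = 0.0                            # reset fired neurons
    # astrocyte: slow estimate of output rate; gain pushes it toward target
    rate_est += (spikes_out.mean() - rate_est) / tau_astro
    gain += 0.05 * (target_rate - rate_est)

print(round(rate_est, 3), round(gain, 3))  # gain adapts despite dead inputs
```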

Recently, graph neural networks (GNNs) have been widely used for document classification. However, most existing methods are based on static word co-occurrence graphs without sentence-level information, which poses three challenges: (1) word ambiguity, (2) word synonymity, and (3) dynamic contextual dependency. To address these challenges, we propose a novel GNN-based sparse structure learning model for inductive document classification. Specifically, a document-level graph is initially generated by a disjoint union of sentence-level word co-occurrence graphs. Our model collects a set of trainable edges connecting disjoint words between sentences and employs structure learning to sparsely select edges with dynamic contextual dependencies. Graphs with sparse structures can jointly exploit local and global contextual information in documents through GNNs. For inductive learning, the refined document graph is further fed into a general readout function for graph-level classification and optimization in an end-to-end manner. Extensive experiments on several real-world datasets demonstrate that the proposed model outperforms most state-of-the-art results and reveal the necessity of learning sparse structures for each document.
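A minimal sketch of the graph-construction step: per-sentence sliding-window co-occurrence graphs are joined as a disjoint union, and candidate inter-sentence edges connect occurrences of the same word (the edge set the model then learns to sparsify). The two-sentence document and window size are toy choices.

```python
# Build intra-sentence co-occurrence edges plus inter-sentence candidates.
from itertools import combinations

doc = [["graph", "neural", "networks"], ["networks", "learn", "structure"]]
window = 2
nodes, intra, candidates = [], set(), set()

for s, sent in enumerate(doc):
    ids = [len(nodes) + i for i in range(len(sent))]
    nodes.extend((s, w) for w in sent)           # node = (sentence, word)
    for i in range(len(sent)):                   # sliding-window co-occurrence
        for j in range(i + 1, min(i + window, len(sent))):
            intra.add((ids[i], ids[j]))

for u, v in combinations(range(len(nodes)), 2):  # same word across sentences
    if nodes[u][0] != nodes[v][0] and nodes[u][1] == nodes[v][1]:
        candidates.add((u, v))

print(nodes, intra, candidates, sep="\n")
# "networks" in both sentences yields the candidate edge between its two nodes
```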

Data transmission between two or more digital devices in industry and government demands secure and agile technology. Digital information distribution often requires the deployment of Internet of Things (IoT) devices and data fusion techniques, which have gained popularity in both civilian and military environments, as seen in the emergence of Smart Cities and the Internet of Battlefield Things (IoBT). This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically result in a complex Big Data problem. Due to the potentially sensitive nature of IoT datasets, Blockchain technology is used to facilitate their secure sharing, allowing digital information to be distributed but not copied. However, blockchain has several limitations related to complexity, scalability, and excessive energy consumption. We propose an approach that hides information (a sensor signal) by transforming it into an image or an audio signal. As part of recent military modernization efforts, we investigate a sensor fusion approach, examine the challenges of enabling intelligent identification and detection operations, and demonstrate the feasibility of the proposed Deep Learning and Anomaly Detection models, which can support future applications such as a hand-gesture alert system based on wearable devices.
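A toy sketch of the signal-to-image transform mentioned above: a 1-D sensor stream is normalized, quantized to 8-bit pixels, and reshaped into a grayscale image, then inverted back; the width, padding scheme, and helper names are assumptions made for illustration.

```python
# Round-trip a sensor signal through a grayscale-image representation.
import numpy as np

def signal_to_image(sig, width=32):
    lo, hi = sig.min(), sig.max()
    norm = (sig - lo) / (hi - lo + 1e-12)          # scale to [0, 1]
    pixels = np.round(norm * 255).astype(np.uint8)  # quantize to 8 bits
    pad = (-len(pixels)) % width                    # pad to a full last row
    pixels = np.pad(pixels, (0, pad))
    return pixels.reshape(-1, width), (lo, hi, len(sig))

def image_to_signal(img, meta):
    lo, hi, n = meta
    flat = img.reshape(-1)[:n].astype(np.float64) / 255.0
    return flat * (hi - lo) + lo

t = np.linspace(0, 4 * np.pi, 1000)
sig = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
img, meta = signal_to_image(sig)
rec = image_to_signal(img, meta)
print(img.shape, float(np.abs(sig - rec).max()))  # small quantization error
```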

The potential of graph convolutional neural networks for the task of zero-shot learning has been demonstrated recently. These models are highly sample efficient as related concepts in the graph structure share statistical strength allowing generalization to new classes when faced with a lack of data. However, knowledge from distant nodes can get diluted when propagating through intermediate nodes, because current approaches to zero-shot learning use graph propagation schemes that perform Laplacian smoothing at each layer. We show that extensive smoothing does not help the task of regressing classifier weights in zero-shot learning. In order to still incorporate information from distant nodes and utilize the graph structure, we propose an Attentive Dense Graph Propagation Module (ADGPM). ADGPM allows us to exploit the hierarchical graph structure of the knowledge graph through additional connections. These connections are added based on a node's relationship to its ancestors and descendants and an attention scheme is further used to weigh their contribution depending on the distance to the node. Finally, we illustrate that finetuning of the feature representation after training the ADGPM leads to considerable improvements. Our method achieves competitive results, outperforming previous zero-shot learning approaches.
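A toy sketch of distance-aware propagation in the spirit of ADGPM, assuming each node aggregates ancestor features with attention weights obtained from learnable per-distance scores; the graph, features, and score values below are invented for illustration.

```python
# Aggregate ancestor features with softmax attention over hop distances.
import numpy as np

feat = {"cat": np.array([1.0, 0.0]), "feline": np.array([0.5, 0.5]),
        "mammal": np.array([0.2, 0.8]), "animal": np.array([0.0, 1.0])}
ancestors = {"cat": [("feline", 1), ("mammal", 2), ("animal", 3)]}
dist_score = np.array([0.0, 1.5, 0.8, 0.2])   # learnable, indexed by distance

def propagate(node):
    hops = ancestors[node]
    scores = np.array([dist_score[d] for _, d in hops])
    attn = np.exp(scores) / np.exp(scores).sum()   # softmax over distances
    agg = sum(a * feat[name] for a, (name, _) in zip(attn, hops))
    return 0.5 * feat[node] + 0.5 * agg            # mix self and ancestors

print(propagate("cat"))  # nearer ancestors get larger weights here
```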
