
Anomaly detection has a wide range of applications and is especially important in industrial quality inspection. Currently, many top-performing anomaly-detection models rely on feature-embedding methods. However, these methods do not perform well on datasets with large variations in object locations. Reconstruction-based methods use reconstruction errors to detect anomalies without considering positional differences between samples. In this study, a reconstruction-based method using the noise-to-norm paradigm is proposed, which avoids the invariant reconstruction of anomalous regions. Our reconstruction network is based on M-net and incorporates multiscale fusion and residual attention modules to enable end-to-end anomaly detection and localization. Experiments demonstrate that the method is effective in reconstructing anomalous regions into normal patterns and achieving accurate anomaly detection and localization. On the MPDD and VisA datasets, our proposed method achieved more competitive results than recent methods and established a new state of the art on the MPDD dataset.
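
As a hedged illustration of the general noise-to-norm idea (not the authors' M-net architecture), the following PyTorch sketch trains a small convolutional autoencoder to map noise-corrupted normal images back to clean ones and scores anomalies by per-pixel reconstruction error; the toy network and noise level are assumptions.

```python
# Minimal sketch of a noise-to-norm reconstruction detector (assumed toy
# autoencoder, not the M-net model from the paper).
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(normal_batch, noise_std=0.2):
    """Corrupt normal images with noise and learn to reconstruct the clean ones."""
    noisy = (normal_batch + noise_std * torch.randn_like(normal_batch)).clamp(0, 1)
    recon = model(noisy)
    loss = nn.functional.mse_loss(recon, normal_batch)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def anomaly_map(image):
    """Per-pixel anomaly score: reconstruction error averaged over channels."""
    recon = model(image)
    return (image - recon).pow(2).mean(dim=1)  # (B, H, W)

batch = torch.rand(8, 3, 64, 64)   # stand-in for normal training images
train_step(batch)
scores = anomaly_map(batch)
```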

Related content

In data mining, anomaly detection is the identification of items, events, or observations that do not conform to an expected pattern or to the other items in a dataset. Anomalous items typically translate into problems such as bank fraud, structural defects, medical issues, or errors in text. Anomalies are also referred to as outliers, novelties, noise, deviations, and exceptions. In particular, when detecting abuse and network intrusions, the interesting objects are often not rare objects but unexpected bursts of activity. This pattern does not follow the common statistical definition of an outlier as a rare object, and many anomaly detection methods (in particular unsupervised ones) will fail on such data unless it has been aggregated appropriately. In contrast, cluster analysis algorithms may be able to detect the micro-clusters formed by these patterns. There are three broad categories of anomaly detection methods.[1] Unsupervised anomaly detection methods detect anomalies in unlabeled test data, under the assumption that the majority of the instances in the dataset are normal, by looking for instances that fit the rest of the data least well. Supervised anomaly detection methods require a dataset labeled as "normal" and "abnormal" and involve training a classifier (the key difference from many other statistical classification problems is the inherent class imbalance of anomaly detection). Semi-supervised anomaly detection methods build a model of normal behavior from a given normal training dataset and then test the likelihood that a test instance was generated by the learned model.
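
As a concrete, hedged example of the unsupervised category described above, the following sketch uses scikit-learn's IsolationForest to flag the instances that fit the rest of the data least well; the synthetic data and contamination rate are illustrative assumptions.

```python
# Unsupervised anomaly detection sketch: most points are assumed normal and
# the model flags those that fit the remainder of the data least well.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))    # bulk of the data
outliers = rng.uniform(low=-6.0, high=6.0, size=(10, 2))  # rare, unexpected points
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = detector.predict(X)          # +1 = normal, -1 = anomaly
scores = detector.score_samples(X)    # lower score = more anomalous
print("flagged as anomalous:", int((labels == -1).sum()))
```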

Self-adaptation is a crucial feature of autonomous systems that must cope with uncertainties in, e.g., their environment and their internal state. Self-adaptive systems are often modelled as two-layered systems with a managed subsystem handling the domain concerns and a managing subsystem implementing the adaptation logic. We consider a case study of a self-adaptive robotic system; more concretely, an autonomous underwater vehicle (AUV) used for pipeline inspection. In this paper, we model and analyse it with the feature-aware probabilistic model checker ProFeat. The functionalities of the AUV are modelled in a feature model, capturing the AUV's variability. This allows us to model the managed subsystem of the AUV as a family of systems, where each family member corresponds to a valid feature configuration of the AUV. The managing subsystem of the AUV is modelled as a control layer capable of dynamically switching between such valid feature configurations, depending both on environmental and internal conditions. We use this model to analyse probabilistic reward and safety properties for the AUV.
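
ProFeat models are written in ProFeat's own guarded-command language, so the snippet below is only a loose Python analogue of the two-layer idea, with invented feature names, thresholds, and configurations: a managed subsystem runs whatever feature configuration is active, while a managing layer switches between valid configurations based on internal and environmental conditions.

```python
# Hypothetical two-layer self-adaptation sketch (feature names, thresholds, and
# configurations are invented; a real ProFeat model uses its own modelling language).
VALID_CONFIGS = {
    "high_accuracy": {"camera_inspection": True,  "sonar_inspection": False},
    "low_power":     {"camera_inspection": False, "sonar_inspection": True},
}

def managed_step(config, state):
    """Managed subsystem: advance along the pipeline using the active features."""
    speed = 1.0 if config["camera_inspection"] else 1.5   # sonar-only runs faster
    state["progress"] += speed
    state["battery"] -= 2.0 if config["camera_inspection"] else 1.0
    return state

def managing_layer(state):
    """Managing subsystem: pick a valid configuration from the current conditions."""
    if state["battery"] < 30.0 or state["turbidity"] > 0.7:
        return "low_power"
    return "high_accuracy"

state = {"progress": 0.0, "battery": 100.0, "turbidity": 0.2}
for step in range(40):
    active = managing_layer(state)
    state = managed_step(VALID_CONFIGS[active], state)
    state["turbidity"] = min(1.0, state["turbidity"] + 0.02)  # environment drifts
print(state, "last config:", active)
```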

Multiple intent detection and slot filling are two fundamental and crucial tasks in spoken language understanding. Motivated by the fact that the two tasks are closely related, joint models that can detect intents and extract slots simultaneously are preferred to individual models that perform each task independently. The accuracy of a joint model depends heavily on the ability of the model to transfer information between the two tasks so that the result of one task can correct the result of the other. In addition, since a joint model has multiple outputs, how to train the model effectively is also challenging. In this paper, we present a method for multiple intent detection and slot filling by addressing these challenges. First, we propose a bidirectional joint model that explicitly employs intent information to recognize slots and slot features to detect intents. Second, we introduce a novel method for training the proposed joint model using supervised contrastive learning and self-distillation. Experimental results on two benchmark datasets, MixATIS and MixSNIPS, show that our method outperforms state-of-the-art models in both tasks. The results also demonstrate that both the bidirectional design and the training method contribute to the accuracy improvement. Our source code is available at //github.com/anhtunguyen98/BiSLU
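
To make the joint setup concrete, here is a minimal PyTorch sketch (not the paper's BiSLU model) with a shared encoder and two heads: an utterance-level multi-intent head and a token-level slot head. The vocabulary sizes, label counts, and simple feature sharing are assumptions.

```python
# Minimal joint intent-detection / slot-filling sketch (toy architecture, not BiSLU).
import torch
import torch.nn as nn

class JointNLU(nn.Module):
    def __init__(self, vocab=5000, n_intents=18, n_slots=120, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim, padding_idx=0)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.slot_head = nn.Linear(2 * dim, n_slots)       # per-token slot labels
        self.intent_head = nn.Linear(2 * dim, n_intents)   # multi-label intents

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))            # (B, T, 2*dim)
        slot_logits = self.slot_head(h)                    # (B, T, n_slots)
        intent_logits = self.intent_head(h.mean(dim=1))    # (B, n_intents)
        return intent_logits, slot_logits

model = JointNLU()
tokens = torch.randint(1, 5000, (4, 16))
intent_gold = torch.zeros(4, 18); intent_gold[:, 3] = 1.0  # multiple intents allowed
slot_gold = torch.randint(0, 120, (4, 16))

intent_logits, slot_logits = model(tokens)
loss = (nn.functional.binary_cross_entropy_with_logits(intent_logits, intent_gold)
        + nn.functional.cross_entropy(slot_logits.reshape(-1, 120), slot_gold.reshape(-1)))
loss.backward()
```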

Social media have great potential for enabling public discourse on important societal issues. However, adverse effects, such as polarization and echo chambers, greatly impact the benefits of social media and call for algorithms that mitigate these effects. In this paper, we propose a novel problem formulation aimed at slightly nudging users' social feeds in order to strike a balance between relevance and diversity, thus mitigating the emergence of polarization, without lowering the quality of the feed. Our approach is based on re-weighting the relative importance of the accounts that a user follows, so as to calibrate the frequency with which the content produced by various accounts is shown to the user. We analyze the convexity properties of the problem, demonstrating the non-matrix convexity of the objective function and the convexity of the feasible set. To efficiently address the problem, we develop a scalable algorithm based on projected gradient descent. We also prove that our problem statement is a proper generalization of the undirected-case problem so that our method can also be adopted for undirected social networks. As a baseline for comparison in the undirected case, we develop a semidefinite programming approach, which provides the optimal solution. Through extensive experiments on synthetic and real-world datasets, we validate the effectiveness of our approach, which outperforms non-trivial baselines, underscoring its ability to foster healthier and more cohesive online communities.
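
As a hedged sketch of the projected-gradient idea (not the paper's exact objective), the NumPy code below re-weights a user's followed accounts to trade off relevance against a diversity penalty and projects the weights back onto the probability simplex after every gradient step; the objective and data are illustrative stand-ins.

```python
# Projected-gradient sketch: re-weight followed accounts under a simplex constraint.
# The relevance/diversity objective below is an illustrative stand-in, not the
# paper's formulation.
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

rng = np.random.default_rng(1)
n = 50                                   # accounts the user follows
relevance = rng.uniform(size=n)          # how relevant each account is to the user
similarity = rng.uniform(size=(n, n)); similarity = (similarity + similarity.T) / 2
lam, lr = 0.5, 0.05

w = np.full(n, 1.0 / n)                  # start from the uniform feed
for _ in range(200):
    # grad of f(w) = -relevance.w + lam * w' S w  (penalises concentrated, similar content)
    grad = -relevance + 2.0 * lam * similarity @ w
    w = project_to_simplex(w - lr * grad)

print("weight mass on top-5 accounts:", np.sort(w)[-5:].sum())
```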

Motion planning and control are crucial components of robotics applications like automated driving. Here, spatio-temporal hard constraints like system dynamics and safety boundaries (e.g., obstacles) restrict the robot's motions. Direct methods from optimal control solve a constrained optimization problem. However, in many applications finding a proper cost function is inherently difficult because of the weighting of partially conflicting objectives. On the other hand, Imitation Learning (IL) methods such as Behavior Cloning (BC) provide an intuitive framework for learning decision-making from offline demonstrations and constitute a promising avenue for planning and control in complex robot applications. Prior work primarily relied on soft constraint approaches, which use additional auxiliary loss terms describing the constraints. However, catastrophic safety-critical failures might occur in out-of-distribution (OOD) scenarios. This work integrates the flexibility of IL with hard constraint handling in optimal control. Our approach constitutes a general framework for constrained robotic motion planning and control, as well as traffic agent simulation, while we focus here on mobile robot and automated driving applications. Hard constraints are integrated into the learning problem in a differentiable manner, via explicit completion and gradient-based correction. Simulated experiments of mobile robot navigation and automated driving provide evidence for the performance of the proposed method.
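
To illustrate the gradient-based correction step in a hedged way (the differentiable completion and correction in the paper are more involved), the PyTorch sketch below takes a network-predicted trajectory and iteratively nudges it toward satisfying a hard bound on per-step displacement by descending a constraint-violation penalty; the constraint, penalty, and step sizes are assumptions.

```python
# Hedged sketch of gradient-based constraint correction for a predicted trajectory
# (the bound, penalty, and step sizes are illustrative assumptions).
import torch

def violation(traj, max_step=0.5):
    """Sum of how much each consecutive displacement exceeds the hard bound."""
    step_len = (traj[1:] - traj[:-1]).norm(dim=-1)
    return torch.relu(step_len - max_step).sum()

def correct(traj, max_step=0.5, lr=0.05, iters=200):
    """Iteratively move the trajectory toward the feasible set."""
    traj = traj.clone().requires_grad_(True)
    for _ in range(iters):
        v = violation(traj, max_step)
        if v.item() == 0.0:
            break
        (grad,) = torch.autograd.grad(v, traj)
        with torch.no_grad():
            traj -= lr * grad
    return traj.detach()

raw = torch.cumsum(torch.randn(20, 2), dim=0)     # stand-in for an IL policy output
fixed = correct(raw)
print("violation before:", violation(raw).item(), "after:", violation(fixed).item())
```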

Sensor fusion has become a popular topic in robotics. However, conventional fusion methods encounter many difficulties, such as data representation differences, sensor variations, and extrinsic calibration. For example, the calibration methods used for LiDAR-camera fusion often require manual operation and auxiliary calibration targets. Implicit neural representations (INRs) have been developed for 3D scenes, and the volume density distribution involved in an INR unifies the scene information obtained by different types of sensors. Therefore, we propose implicit neural fusion (INF) for LiDAR and camera. INF first trains a neural density field of the target scene using LiDAR frames. Then, a separate neural color field is trained using camera images and the trained neural density field. Along with the training process, INF both estimates LiDAR poses and optimizes extrinsic parameters. Our experiments demonstrate the high accuracy and stable performance of the proposed method.
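
The sketch below shows only the generic building block such a method rests on: a small MLP density field with positional encoding and the standard volume-rendering weights along a ray. It is a toy stand-in, not the INF training pipeline, and no LiDAR poses or extrinsic parameters are estimated here.

```python
# Toy neural density field + volume-rendering weights (generic building block,
# not the INF pipeline; no pose or extrinsic optimisation here).
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Map 3D points to sin/cos features at increasing frequencies."""
    feats = [x]
    for k in range(n_freqs):
        feats += [torch.sin((2.0 ** k) * x), torch.cos((2.0 ** k) * x)]
    return torch.cat(feats, dim=-1)

class DensityField(nn.Module):
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 * (1 + 2 * n_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),   # non-negative density
        )

    def forward(self, points):
        return self.mlp(positional_encoding(points)).squeeze(-1)

field = DensityField()
origin = torch.zeros(3)
direction = torch.tensor([0.0, 0.0, 1.0])
t = torch.linspace(0.1, 5.0, steps=64)            # sample depths along one ray
points = origin + t.unsqueeze(-1) * direction     # (64, 3)

sigma = field(points)                             # densities along the ray
delta = torch.diff(t, append=t[-1:] + (t[1] - t[0]))
alpha = 1.0 - torch.exp(-sigma * delta)
trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
weights = trans * alpha                           # per-sample rendering weights
expected_depth = (weights * t).sum()              # e.g., compare against a LiDAR range
```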

Knowledge distillation (KD) has shown potential for learning compact models in dense object detection. However, the commonly used softmax-based distillation ignores the absolute classification scores for individual categories. Thus, the optimum of the distillation loss does not necessarily lead to the optimal student classification scores for dense object detectors. This cross-task protocol inconsistency is critical, especially for dense object detectors, since the foreground categories are extremely imbalanced. To address the issue of protocol differences between distillation and classification, we propose a novel distillation method with cross-task consistent protocols, tailored for dense object detection. For classification distillation, we address the cross-task protocol inconsistency problem by formulating the classification logit maps in both teacher and student models as multiple binary-classification maps and applying a binary-classification distillation loss to each map. For localization distillation, we design an IoU-based Localization Distillation Loss that is independent of specific network structures and comparable with existing localization distillation losses. Our proposed method is simple but effective, and experimental results demonstrate its superiority over existing methods. Code is available at //github.com/TinyTigerPan/BCKD.
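
The core classification-distillation idea described above, treating a C-way logit map as C binary maps and distilling each with a binary loss, can be sketched in a few lines of PyTorch; the tensor shapes and temperature are assumptions, and the localization part is omitted.

```python
# Sketch of per-category binary-classification distillation for dense predictions
# (shapes and temperature are illustrative; localization distillation omitted).
import torch
import torch.nn.functional as F

def binary_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Treat each of the C category maps as an independent binary-classification
    map and distill the teacher's per-category probabilities into the student."""
    teacher_prob = torch.sigmoid(teacher_logits / temperature)  # soft binary targets
    return F.binary_cross_entropy_with_logits(
        student_logits / temperature, teacher_prob, reduction="mean")

# Dense classification heads: (batch, categories, anchors/locations)
student_logits = torch.randn(2, 80, 1000, requires_grad=True)
teacher_logits = torch.randn(2, 80, 1000)
loss = binary_distillation_loss(student_logits, teacher_logits)
loss.backward()
```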

Recently, there has been a growing interest in learning and explaining causal effects within Neural Network (NN) models. By virtue of NN architectures, previous approaches consider only direct and total causal effects assuming independence among input variables. We view an NN as a structural causal model (SCM) and extend our focus to include indirect causal effects by introducing feedforward connections among input neurons. We propose an ante-hoc method that captures and maintains direct, indirect, and total causal effects during NN model training. We also propose an algorithm for quantifying learned causal effects in an NN model and efficient approximation strategies for quantifying causal effects in high-dimensional data. Extensive experiments conducted on synthetic and real-world datasets demonstrate that the causal effects learned by our ante-hoc method better approximate the ground truth effects compared to existing methods.
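
As a hedged illustration of treating a trained network as a structural causal model, the snippet below estimates an average causal effect of one input on the output by intervening (a do-operation) on that input while averaging over the remaining inputs; the toy model and intervention values are assumptions, and the paper's ante-hoc training and indirect-effect machinery are not reproduced.

```python
# Interventional sketch: average causal effect of input feature i on a trained NN's
# output (toy model; ante-hoc training and indirect effects are not reproduced).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(5, 32), nn.Tanh(), nn.Linear(32, 1))
background = torch.randn(1024, 5)          # samples standing in for the data distribution

@torch.no_grad()
def interventional_mean(model, background, feature, value):
    """E[y | do(x_feature = value)], averaging the other inputs over the data."""
    x = background.clone()
    x[:, feature] = value
    return model(x).mean().item()

@torch.no_grad()
def average_causal_effect(model, background, feature, low=-1.0, high=1.0):
    """ACE of moving `feature` from `low` to `high` under the do-operator."""
    return (interventional_mean(model, background, feature, high)
            - interventional_mean(model, background, feature, low))

for i in range(5):
    print(f"ACE of x{i}: {average_causal_effect(model, background, i):+.4f}")
```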

Event detection (ED), a sub-task of event extraction, involves identifying triggers and categorizing event mentions. Existing methods primarily rely upon supervised learning and require large-scale labeled event datasets, which are unfortunately not readily available in many real-life applications. In this paper, we consider and reformulate the ED task with limited labeled data as a Few-Shot Learning problem. We propose a Dynamic-Memory-Based Prototypical Network (DMB-PN), which exploits a Dynamic Memory Network (DMN) to not only learn better prototypes for event types, but also produce more robust sentence encodings for event mentions. Unlike vanilla prototypical networks, which compute event prototypes simply by averaging and thus consume each event mention only once, our model is more robust and can distill contextual information from event mentions multiple times thanks to the multi-hop mechanism of DMNs. The experiments show that DMB-PN not only deals with sample scarcity better than a series of baseline models but also performs more robustly when the variety of event types is relatively large and the instance quantity is extremely small.
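
For context, here is a minimal episode of a vanilla prototypical network in PyTorch: prototypes are the averaged support embeddings that DMB-PN replaces with memory-refined ones, and queries are classified by distance to the prototypes. The encoder and episode sizes are toy assumptions.

```python
# Vanilla prototypical-network episode (the averaging baseline that DMB-PN improves
# on with dynamic memory; encoder and episode sizes are toy assumptions).
import torch
import torch.nn as nn

n_way, k_shot, n_query, dim = 5, 5, 15, 64
encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, dim))

support = torch.randn(n_way, k_shot, 300)   # k_shot sentence vectors per event type
query = torch.randn(n_way * n_query, 300)
query_labels = torch.arange(n_way).repeat_interleave(n_query)

# Prototype of each event type = mean of its encoded support mentions.
prototypes = encoder(support).mean(dim=1)                 # (n_way, dim)
q = encoder(query)                                        # (n_way*n_query, dim)

# Classify each query by (negative squared) distance to every prototype.
dists = torch.cdist(q, prototypes).pow(2)                 # (n_way*n_query, n_way)
logits = -dists
loss = nn.functional.cross_entropy(logits, query_labels)
loss.backward()
accuracy = (logits.argmax(dim=1) == query_labels).float().mean()
```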

Many tasks in natural language processing can be viewed as multi-label classification problems. However, most of the existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all the labels, which completely ignores the complexity and dependencies among different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are then used to train the classifier with the cross-entropy loss function, and the prediction policies are applied at inference time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method can obtain more accurate multi-label classification results.
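
To make the idea of a "prediction policy" concrete in a hedged way, the snippet below replaces the fixed 0.5 cut-off with a per-label threshold tuned on held-out data; it is a simple stand-in for the meta-learned policies in the paper, and the data and search grid are illustrative.

```python
# Per-label thresholds instead of a fixed 0.5 cut-off (a simple stand-in for the
# meta-learned prediction policies; data and grid are illustrative).
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_val, n_labels = 400, 10
probs = rng.uniform(size=(n_val, n_labels))          # stand-in classifier outputs
gold = (rng.uniform(size=(n_val, n_labels)) < 0.3).astype(int)

thresholds = np.full(n_labels, 0.5)
for j in range(n_labels):                            # tune each label independently
    grid = np.linspace(0.05, 0.95, 19)
    scores = [f1_score(gold[:, j], (probs[:, j] >= t).astype(int), zero_division=0)
              for t in grid]
    thresholds[j] = grid[int(np.argmax(scores))]

predictions = (probs >= thresholds).astype(int)      # label-specific prediction policy
print("macro-F1 with tuned thresholds:",
      f1_score(gold, predictions, average="macro", zero_division=0))
```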

Inspired by recent developments in artificial satellites, remote sensing images have attracted extensive attention. Recently, noticeable progress has been made in scene classification and target detection. However, it is still not clear how to describe remote sensing image content with accurate and concise sentences. In this paper, we investigate how to describe remote sensing images with accurate and flexible sentences. First, annotation instructions are presented to better describe remote sensing images, taking their special characteristics into account. Second, in order to exhaustively exploit the content of remote sensing images, a large-scale aerial image dataset is constructed for remote sensing image captioning. Finally, a comprehensive review is presented on the proposed dataset to fully advance the task of remote sensing image captioning. Extensive experiments on the proposed dataset demonstrate that the content of remote sensing images can be completely described by generated language descriptions. The dataset is available at //github.com/2051/RSICD_optimal
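
For readers unfamiliar with image captioning, a minimal encoder-decoder sketch is given below: a toy CNN encoder seeds a GRU decoder trained with teacher forcing. It is not one of the models benchmarked on the proposed dataset, and the vocabulary size, caption length, and architecture are assumptions.

```python
# Toy encoder-decoder captioning sketch (not the benchmarked RSICD models;
# vocabulary, caption length, and architecture are assumptions).
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        h0 = self.encoder(images).unsqueeze(0)     # image feature seeds the decoder
        emb = self.embed(captions[:, :-1])         # teacher forcing: shifted input
        dec, _ = self.gru(emb, h0)
        return self.out(dec)                       # (B, T-1, vocab)

model = TinyCaptioner()
images = torch.rand(4, 3, 224, 224)                # stand-in aerial images
captions = torch.randint(0, 1000, (4, 12))         # stand-in token ids
logits = model(images, captions)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 1000), captions[:, 1:].reshape(-1))
loss.backward()
```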
