
Anomaly detection in imbalanced datasets is a frequent and crucial problem, especially in the medical domain where retrieving and labeling irregularities is often expensive. By combining the generative stability of a $\beta$-variational autoencoder (VAE) with the discriminative strengths of generative adversarial networks (GANs), we propose a novel model, $\beta$-VAEGAN. We investigate methods for composing anomaly scores based on the discriminative and reconstructive capabilities of our model. Existing work focuses on linear combinations of these components to determine whether data is anomalous. We advance existing work by training a kernelized support vector machine (SVM) on the respective error components to also consider nonlinear relationships. This improves anomaly detection performance, while allowing faster optimization. Lastly, we use the deviations from the Gaussian prior of $\beta$-VAEGAN to form a novel anomaly score component. In comparison to state-of-the-art work, we improve the $F_1$ score during anomaly detection from 0.85 to 0.92 on the widely used MIT-BIH Arrhythmia Database.
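As a minimal sketch of the nonlinear scoring idea, the snippet below feeds three per-sample error components to a kernelized SVM instead of a fixed linear combination; the component names and the toy data are placeholders, whereas in the paper they would come from a trained $\beta$-VAEGAN.

```python
# Sketch: combine per-sample error components with a kernelized SVM
# instead of a fixed linear weighting. Feature columns are illustrative;
# the real components would come from a trained beta-VAEGAN.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Placeholder components for normal (0) and anomalous (1) samples.
X = rng.normal(size=(1000, 3))          # columns: [recon_err, disc_err, kl_dev]
y = (X.sum(axis=1) > 1.5).astype(int)   # toy labels standing in for anomaly ground truth

# An RBF kernel captures nonlinear relationships between the error components.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)
anomaly_flags = clf.predict(X)
```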

Related content

In data mining, anomaly detection is the identification of items, events, or observations that do not conform to an expected pattern or to other items in a dataset. Anomalous items typically translate into problems such as bank fraud, structural defects, medical problems, or errors in text. Anomalies are also referred to as outliers, novelties, noise, deviations, and exceptions. In particular, when detecting abuse and network intrusion, the interesting objects are often not rare objects but unexpected bursts of activity. Such patterns do not follow the common statistical definition of an anomaly as a rare object, so many anomaly detection methods (in particular unsupervised ones) fail on such data unless it has been aggregated appropriately. By contrast, cluster analysis algorithms may be able to detect the micro-clusters formed by these patterns. There are three broad categories of anomaly detection methods.[1] Unsupervised anomaly detection methods detect anomalies in unlabeled test data under the assumption that the majority of the instances in the dataset are normal, by looking for the instances that fit the rest of the data least well. Supervised anomaly detection methods require a dataset that has been labeled as "normal" and "abnormal" and involve training a classifier (the key difference from many other statistical classification problems being the inherent class imbalance of anomaly detection). Semi-supervised anomaly detection methods build a model of normal behavior from a given normal training dataset and then test how likely a test instance is to have been generated by the learned model.
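As an illustration of the unsupervised category described above, a minimal scikit-learn sketch that flags the instances fitting the rest of the data least well; the Isolation Forest model and the toy data are assumptions made for illustration, not part of the passage.

```python
# Unsupervised anomaly detection sketch: Isolation Forest flags the samples
# that are easiest to isolate, i.e. least similar to the bulk of the data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # majority: normal behaviour
outliers = rng.uniform(low=-6, high=6, size=(15, 2))     # a few unexpected points
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.03, random_state=0).fit(X)
labels = detector.predict(X)   # +1 for inliers, -1 for detected anomalies
print("flagged anomalies:", int((labels == -1).sum()))
```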

Over-the-Air (OtA) computation is a newly emerged concept for computing functions of data from distributed nodes by taking advantage of the wave superposition property of wireless channels. Despite its advantage in communication efficiency, OtA computation is associated with significant security and privacy concerns that have so far not been thoroughly investigated, especially in the case of active attacks. In this paper, we propose and evaluate a detection scheme against active attacks in OtA computation systems. More explicitly, we consider an active attacker, an external node that sends random or misleading data to alter the aggregated data received by the server. To detect the presence of the attacker, in every communication period, legitimate users send some dummy samples in addition to the real data. We propose a detector design that relies on a shared secret known only to the legitimate users and the server, which is used to hide the transmitted signal in a secret subspace. After the server projects the received vector back to the original subspace, the dummy samples can be used to detect active attacks. We show that this design achieves good detection performance for a small cost in terms of channel resources.
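A minimal sketch of the hide-in-a-secret-subspace idea under simplifying assumptions: a single already-aggregated transmission, a random orthogonal matrix standing in for the shared secret, an additive Gaussian attack, and an arbitrary detection threshold, none of which are taken from the paper.

```python
# Sketch: users rotate (real + dummy) samples with a secret orthogonal matrix
# known only to them and the server; the server rotates back and checks the
# dummy positions to detect an injected signal.
import numpy as np

rng = np.random.default_rng(1)
d_real, d_dummy = 8, 4
d = d_real + d_dummy

# Shared secret: a random orthogonal matrix (e.g. derived from a shared key).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

real = rng.normal(size=d_real)             # aggregated real data (after the OtA sum)
dummy = np.ones(d_dummy)                   # dummy samples whose values the server knows
x = np.concatenate([real, dummy])
tx = Q @ x                                 # transmitted in the secret subspace

attack = rng.normal(scale=2.0, size=d)     # active attacker injects random data
rx = tx + attack                           # received superposition

decoded = Q.T @ rx                         # server projects back to the original subspace
residual = np.linalg.norm(decoded[d_real:] - dummy)
print("attack detected:", residual > 0.5)  # threshold chosen arbitrarily for the sketch
```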

Interpretability is essential in medical imaging to ensure that clinicians can comprehend and trust artificial intelligence models. In this paper, we propose a novel interpretable approach that applies attribute regularization to the latent space within the framework of an adversarially trained variational autoencoder. Comparative experiments on a cardiac MRI dataset demonstrate the ability of the proposed method to address the blurry reconstruction issues of variational autoencoder methods and improve latent space interpretability. Additionally, our analysis of a downstream task reveals that the classification of cardiac disease using the regularized latent space relies heavily on the attribute-regularized dimensions, demonstrating strong interpretability by connecting the attributes used for prediction with clinical observations.
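A hedged sketch of one possible attribute-regularization term in this spirit: the pairwise-ordering formulation, the `factor` parameter, and the loss weighting shown in the trailing comment are assumptions, not necessarily the formulation used in the paper.

```python
# Sketch: tie one latent dimension to one attribute by pushing the ordering of
# samples along that dimension to match the ordering of the attribute values.
import torch

def attribute_regularization(z: torch.Tensor, attr: torch.Tensor, dim: int,
                             factor: float = 1.0) -> torch.Tensor:
    """z: (batch, latent_dim) latent codes; attr: (batch,) attribute values."""
    z_dim = z[:, dim]
    # Pairwise differences along the regularized latent dimension and the attribute.
    dz = z_dim.unsqueeze(0) - z_dim.unsqueeze(1)
    da = attr.unsqueeze(0) - attr.unsqueeze(1)
    # Penalize disagreement between the (soft) sign of the latent ordering
    # and the sign of the attribute ordering.
    return torch.nn.functional.l1_loss(torch.tanh(factor * dz), torch.sign(da))

# Hypothetical total objective:
# total_loss = recon_loss + beta * kl_loss + adv_loss \
#              + gamma * attribute_regularization(z, attr, dim=0)
```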

Although current data augmentation methods help alleviate data insufficiency, conventional augmentation is primarily intra-domain, and the images generated by advanced generative adversarial networks (GANs) remain uncertain, particularly for small-scale datasets. In this paper, we propose a parameterized GAN (ParaGAN) that effectively controls the changes of synthetic samples among domains and highlights the attention regions for downstream classification. Specifically, ParaGAN incorporates projection distance parameters in cyclic projection and projects the source images to the decision boundary to obtain the class-difference maps. Our experiments show that ParaGAN consistently outperforms existing augmentation methods with explainable classification on two small-scale medical datasets.
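The abstract does not spell out the exact operations, so the following is only an assumed illustration of "projecting an image toward a classifier's decision boundary by a parameterized distance and taking a class-difference map"; the function name, the gradient-based projection, and the `alpha` parameter are all hypothetical, not ParaGAN itself.

```python
# Assumed illustration (not ParaGAN): move an input a parameterized distance
# toward a classifier's decision boundary and take the pixel-wise difference
# as a crude "class-difference map".
import torch

def class_difference_map(x, classifier, target_class, alpha=0.5, steps=10, lr=0.05):
    """x: (1, C, H, W) image; alpha controls how far toward the boundary we move."""
    x_proj = x.clone().requires_grad_(True)
    opt = torch.optim.SGD([x_proj], lr=lr)
    for _ in range(steps):
        logits = classifier(x_proj)
        # Push the projection toward the target class, i.e. toward the boundary.
        loss = torch.nn.functional.cross_entropy(logits, torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Interpolate by the projection-distance parameter and compare with the source.
    x_alpha = (1 - alpha) * x + alpha * x_proj.detach()
    return (x_alpha - x).abs()  # highlights regions that change between classes
```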

Discrimination can occur when the underlying unbiased labels are overwritten by an agent with potential bias, resulting in biased datasets that unfairly harm specific groups and cause classifiers to inherit these biases. In this paper, we demonstrate that, despite only having access to the biased labels, it is possible to eliminate bias by filtering the fairest instances within the framework of confident learning. In confident learning, low self-confidence usually indicates potential label errors; however, this is not always the case. Instances, particularly those from underrepresented groups, might exhibit low confidence scores for reasons other than labeling errors. To address this limitation, our approach truncates the confidence score and extends the confidence interval of the probabilistic threshold. Additionally, we incorporate the co-teaching paradigm to provide a more robust and reliable selection of fair instances and to effectively mitigate the adverse effects of biased labels. Through extensive experimentation and evaluation on various datasets, we demonstrate the efficacy of our approach in promoting fairness and reducing the impact of label bias in machine learning models.
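A hedged sketch of the truncation-and-widened-threshold idea on top of confident-learning-style per-class thresholds; the `trunc` and `margin` values, and the assumption that every class appears among the labels, are illustrative choices rather than the paper's exact procedure.

```python
# Sketch: keep instances whose truncated self-confidence stays within a widened
# interval around the per-class confident-learning threshold, instead of
# discarding every low-confidence example outright.
import numpy as np

def select_fair_instances(probs, labels, trunc=0.1, margin=0.05):
    """probs: (n, k) predicted probabilities; labels: (n,) observed (possibly biased) labels."""
    self_conf = probs[np.arange(len(labels)), labels]
    # Truncate very low confidences so underrepresented groups are not over-filtered.
    self_conf = np.clip(self_conf, trunc, 1.0)
    # Per-class thresholds as in confident learning: mean self-confidence per observed class
    # (assumes every class appears at least once among the labels).
    thresholds = np.array([self_conf[labels == c].mean() for c in range(probs.shape[1])])
    # Widen the acceptance interval by a margin instead of using a hard cutoff.
    keep = self_conf >= (thresholds[labels] - margin)
    return keep
```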

Conformal prediction is a statistical tool for producing prediction regions of machine learning models that are valid with high probability. However, applying conformal prediction to time series data leads to conservative prediction regions. In fact, to obtain prediction regions over $T$ time steps with confidence $1-\delta$, {previous works require that each individual prediction region is valid} with confidence $1-\delta/T$. We propose an optimization-based method for reducing this conservatism to enable long-horizon planning and verification when using learning-enabled time series predictors. Instead of considering prediction errors individually at each time step, we consider a parameterized prediction error over multiple time steps. By optimizing the parameters over an additional dataset, we find prediction regions that are not conservative. We show that this problem can be cast as a mixed integer linear complementarity program (MILCP), which we then relax into a linear complementarity program (LCP). Additionally, we prove that the relaxed LCP has the same optimal cost as the original MILCP. Finally, we demonstrate the efficacy of our method on case studies using pedestrian trajectory predictors and F16 fighter jet altitude predictors.
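A minimal numerical sketch contrasting the two calibrations; the parameterized score $R_i = \max_t \alpha_t e_{i,t}$, the function names, and the finite-sample quantile correction are assumptions based on the description above, and the optimization of the parameters $\alpha_t$ is omitted.

```python
# Sketch: instead of calibrating every step at level 1 - delta/T (union bound),
# calibrate one parameterized score over the whole horizon and take a single
# (1 - delta)-quantile of it.
import numpy as np

def horizon_quantile(errors, alphas, delta):
    """errors: (n_cal, T) calibration prediction errors; alphas: (T,) parameters."""
    scores = np.max(alphas[None, :] * errors, axis=1)        # one score per trajectory
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - delta)) / n)
    q = np.quantile(scores, level, method="higher")
    return q  # region at step t: |y_t - y_hat_t| <= q / alphas[t]

def union_bound_quantiles(errors, delta):
    """Per-step quantiles at level 1 - delta/T, for comparison with the baseline."""
    n, T = errors.shape
    level = min(1.0, np.ceil((n + 1) * (1 - delta / T)) / n)
    return np.quantile(errors, level, axis=0, method="higher")
```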

Increasing and massive volumes of trajectory data are being accumulated that may serve a variety of applications, such as mining popular routes or identifying ridesharing candidates. As storing and querying massive trajectory data is costly, trajectory simplification techniques have been introduced that aim to reduce the sizes of trajectories, thus reducing storage and speeding up querying, while preserving as much information as possible. Existing techniques rely mainly on hand-crafted error measures when deciding which point to drop when simplifying a trajectory. While the hope may be that such simplification affects the subsequent usability of the data only minimally, the usability of the simplified data remains largely unexplored. Instead of using error measures that may only indirectly yield simplified trajectories with high usability, we adopt a direct approach to simplification and present the first study of query-accuracy-driven trajectory simplification, where the direct objective is to achieve a simplified trajectory database that preserves the query accuracy of the original database as much as possible. Specifically, we propose a multi-agent reinforcement learning based solution with two agents working cooperatively to collectively simplify trajectories in a database while optimizing query usability. Extensive experiments on four real-world trajectory datasets show that the solution consistently outperforms baseline solutions over various query types and dynamics.
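As a hedged illustration of the direct objective only (not the multi-agent RL solution itself), the sketch below scores a simplified database by how well it preserves the results of range queries issued against the original database; the query type, the F1 aggregation, and the data layout are assumptions.

```python
# Sketch: a query-accuracy-driven reward for trajectory simplification, using a
# range query as the probe. The RL agents themselves are omitted.
import numpy as np

def range_query(db, lo, hi):
    """Return ids of trajectories with at least one point inside the box [lo, hi]."""
    hits = set()
    for tid, traj in db.items():                      # db: dict of id -> (n_points, 2) array
        inside = np.all((traj >= lo) & (traj <= hi), axis=1)
        if inside.any():
            hits.add(tid)
    return hits

def query_accuracy_reward(original_db, simplified_db, queries):
    """Mean F1 between query results on the original and the simplified database."""
    f1s = []
    for lo, hi in queries:
        truth = range_query(original_db, lo, hi)
        pred = range_query(simplified_db, lo, hi)
        tp = len(truth & pred)
        prec = tp / len(pred) if pred else 1.0
        rec = tp / len(truth) if truth else 1.0
        f1s.append(0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec))
    return float(np.mean(f1s))
```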

Management of network resources in advanced IoT applications is a challenging topic due to their distributed nature from the Edge to the Cloud and the heavy demand for real-time data from many sources in order to take action in the deployment. FANETs (Flying Ad-hoc Networks) are a clear example of heterogeneous multi-modal use cases, which require strict quality in the network communications, as well as coordination of the computing capabilities, for the final service to operate correctly. In this paper, we present a Virtual Network Embedding (VNE) framework designed for the allocation of dataflow applications, composed of nano-services that produce or consume data, in a wireless infrastructure such as an airborne network. To address the problem, we propose an anypath-based heuristic algorithm that considers the quality demand of the communication between nano-services, coined Quality-Revenue Paired Anypath Dataflow VNE (QRPAD-VNE). We also provide a simulation environment for the evaluation of its performance according to the virtual network (VN) request load in the system. Finally, we show the suitability of a multi-parameter framework in conjunction with anypath routing for obtaining better performance results that guarantee minimum quality in the wireless communications.

Over the past decade, domain adaptation has become a widely studied branch of transfer learning that aims to improve performance on target domains by leveraging knowledge from the source domain. Conventional domain adaptation methods often assume access to both source and target domain data simultaneously, which may not be feasible in real-world scenarios due to privacy and confidentiality concerns. As a result, research on Source-Free Domain Adaptation (SFDA), which only utilizes the source-trained model and unlabeled target data to adapt to the target domain, has drawn growing attention in recent years. Despite the rapid growth of SFDA work, there has been no timely and comprehensive survey of the field. To fill this gap, we provide a comprehensive survey of recent advances in SFDA and organize them into a unified categorization scheme based on the framework of transfer learning. Instead of presenting each approach independently, we modularize several components of each method to more clearly illustrate their relationships and mechanics in light of the composite properties of each method. Furthermore, we compare the results of more than 30 representative SFDA methods on three popular classification benchmarks, namely Office-31, Office-Home, and VisDA, to explore the effectiveness of various technical routes and the combination effects among them. Additionally, we briefly introduce the applications of SFDA and related fields. Drawing from our analysis of the challenges facing SFDA, we offer some insights into future research directions and potential settings.

Face recognition technology has advanced significantly in recent years, due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications. These applications require lower resolution, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
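A hedged sketch of the adversarial domain-classifier component (a gradient reversal layer plus a small classifier) and of a consistency term between image-level and instance-level domain predictions; the network sizes and the squared-difference consistency form are assumptions, not the paper's exact architecture.

```python
# Sketch: a gradient reversal layer trains the backbone to fool a domain
# classifier, and a consistency term aligns image-level and instance-level
# domain predictions.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lamb * grad_output, None

class DomainClassifier(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, feat, lamb=1.0):
        return self.net(GradientReversal.apply(feat, lamb))

def consistency_loss(img_logit, ins_logits):
    """img_logit: (1, 1) image-level prediction; ins_logits: (num_rois, 1) instance-level."""
    img_prob = torch.sigmoid(img_logit).mean()
    ins_prob = torch.sigmoid(ins_logits)
    return ((ins_prob - img_prob) ** 2).mean()
```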
