
This paper introduces an active learning (AL) framework for anomalous sound detection (ASD) in machine condition monitoring systems. Typically, ASD models are trained solely on normal samples due to the scarcity of anomalous data, leading to decreased accuracy on unseen samples during inference. AL is a promising solution to this problem, as it enables the model to learn new concepts more effectively from fewer labeled examples, thus reducing manual annotation effort. However, its effectiveness in ASD remains unexplored. To minimize update costs and time, our proposed method focuses on updating the scoring backend of the ASD system without retraining the neural network model. Experimental results on the DCASE 2023 Challenge Task 2 dataset confirm that our AL framework significantly improves ASD performance even with low labeling budgets. Moreover, our proposed sampling strategy outperforms other baselines in terms of the partial area under the receiver operating characteristic curve.
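The backend-only update idea can be illustrated with a toy sketch: a k-nearest-neighbor scoring backend over fixed embeddings, with uncertainty sampling near the decision threshold as the query strategy. The kNN backend, the threshold-based sampling rule, and all sizes here are illustrative assumptions, not the paper's exact components.

```python
import numpy as np

def knn_anomaly_score(x, memory, k=2):
    """Anomaly score = mean distance to the k nearest stored normal embeddings."""
    d = np.linalg.norm(memory - x, axis=1)
    return np.sort(d)[:k].mean()

def select_for_labeling(pool, memory, budget, threshold):
    """Uncertainty sampling: pick unlabeled items whose score lies
    closest to the decision threshold."""
    scores = np.array([knn_anomaly_score(x, memory) for x in pool])
    return np.argsort(np.abs(scores - threshold))[:budget]

rng = np.random.default_rng(0)
memory = rng.normal(0.0, 1.0, size=(50, 8))   # embeddings of known normals
pool = rng.normal(0.0, 1.5, size=(20, 8))     # unlabeled embeddings
picked = select_for_labeling(pool, memory, budget=5, threshold=1.5)
# Items the annotator labels as normal are appended to the memory bank;
# the embedding network itself is never retrained.
memory = np.vstack([memory, pool[picked]])
```

Because only the memory bank changes, the cost of an update is a few distance computations rather than a training run.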

Related content

Active learning is a subfield of machine learning (and, more broadly, of artificial intelligence); in statistics it is also known as query learning or optimal experimental design. The "learning module" and the "selection strategy" are the two basic and essential components of an active learning algorithm. In education, active learning is "a method of learning in which students are actively or experientially involved in the learning process, with different levels of active learning depending on student involvement" (Bonwell & Eison, 1991). Bonwell and Eison (1991) note that students should "do things besides passively listening to lectures." In a report for the Association for the Study of Higher Education (ASHE), the authors discuss various methods for promoting active learning, citing literature showing that students must do more than just listen in order to learn: they must read, write, discuss, and engage in problem solving. This process involves three learning domains, namely knowledge, skills, and attitudes (KSA); this taxonomy of learning behaviors can be regarded as "the goals of the learning process." In particular, students must engage in higher-order thinking tasks such as analysis, synthesis, and evaluation.

Augmenting language models with image inputs may enable more effective jailbreak attacks through continuous optimization, unlike text inputs that require discrete optimization. However, new multimodal fusion models tokenize all input modalities using non-differentiable functions, which hinders straightforward attacks. In this work, we introduce the notion of a tokenizer shortcut that approximates tokenization with a continuous function and enables continuous optimization. We use tokenizer shortcuts to create the first end-to-end gradient image attacks against multimodal fusion models. We evaluate our attacks on Chameleon models and obtain jailbreak images that elicit harmful information for 72.5% of prompts. Jailbreak images outperform text jailbreaks optimized with the same objective and require 3x lower compute budget to optimize 50x more input tokens. Finally, we find that representation engineering defenses, like Circuit Breakers, trained only on text attacks can effectively transfer to adversarial image inputs.
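The tokenizer-shortcut idea can be sketched as replacing the hard `argmax` lookup with a differentiable soft mixture over the embedding table. The temperature-softmax form below is an illustrative assumption, not necessarily the shortcut the paper constructs:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def shortcut_embed(logits, emb_table, tau=0.05):
    """Continuous relaxation of `embed(argmax(logits))`: a
    temperature-controlled soft mixture of token embeddings.
    As tau -> 0 it approaches the hard, non-differentiable lookup,
    while remaining differentiable w.r.t. the logits."""
    w = softmax(logits / tau)          # soft one-hot over the vocab
    return w @ emb_table

emb = np.arange(16.0).reshape(4, 4)    # toy embedding table, vocab of 4
logits = np.array([0.1, 2.0, -0.5, 0.3])
soft = shortcut_embed(logits, emb)
hard = emb[logits.argmax()]            # what a real tokenizer would emit
```

With a small temperature the relaxed output is numerically close to the hard lookup, so gradients taken through the shortcut remain meaningful for the real model.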

This study introduces an online target sound extraction (TSE) process using the similarity-and-independence-aware beamformer (SIBF) derived from an iterative batch algorithm. The study aimed to reduce latency while maintaining extraction accuracy. The SIBF, which is a linear method, provides more accurate estimates of the target than an approximate magnitude spectrogram reference. The transition to an online algorithm reduces latency but presents challenges. First, contrary to the conventional assumption, deriving the online algorithm may degrade accuracy as compared to the batch algorithm using a sliding window. Second, conventional post-processing methods intended for scaling the estimated target may widen the accuracy gap between the two algorithms. This study adopts an approach that addresses these challenges and minimizes the accuracy gap during post-processing. It proposes a novel scaling method based on the single-channel Wiener filter (SWF-based scaling). To further improve accuracy, the study introduces a modified version of the time-frequency-varying variance generalized Gaussian distribution as a source model to represent the joint probability between the target and reference. Experimental results using the CHiME-3 dataset demonstrate several key findings: 1) SWF-based scaling effectively eliminates the gap between the two algorithms and improves accuracy. 2) The new source model achieves optimal accuracy, corresponding to the Laplacian model. 3) Our online SIBF outperforms conventional linear TSE methods, including independent vector extraction and minimum mean square error beamforming. These findings can contribute to the fields of beamforming and blind source separation.
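As a rough illustration of Wiener-style scaling, the toy function below rescales a time-frequency estimate with a gain built from the estimate and mixture powers. It is a hypothetical simplification for intuition only, not the paper's SWF-based scaling:

```python
import numpy as np

def swf_scale(estimate, mixture, eps=1e-12):
    """Rescale a TF-domain target estimate with a Wiener-style gain:
    the ratio of estimated target power to observed mixture power,
    applied bin-by-bin to the mixture (hypothetical simplification)."""
    p_est = np.abs(estimate) ** 2
    p_mix = np.abs(mixture) ** 2
    gain = p_est / np.maximum(p_mix, eps)   # in [0, 1] when est <= mix
    return gain * mixture

mixture = np.array([1.0 + 1.0j, 2.0 + 0.0j, 0.5 - 0.5j])
estimate = np.array([0.9 + 0.9j, 1.0 + 0.0j, 0.1 - 0.1j])
scaled = swf_scale(estimate, mixture)
```

The gain inherits the mixture's phase and adjusts only the magnitude, which is the sense in which such scaling fixes the arbitrary scale of a linear extraction filter.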

Large language models (LLMs) can generate fluent summaries across domains using prompting techniques, reducing the need to train models for summarization applications. However, crafting effective prompts that guide LLMs to generate summaries with the appropriate level of detail and writing style remains a challenge. In this paper, we explore the use of salient information extracted from the source document to enhance summarization prompts. We show that adding keyphrases in prompts can improve ROUGE F1 and recall, making the generated summaries more similar to the reference and more complete. The number of keyphrases can control the precision-recall trade-off. Furthermore, our analysis reveals that incorporating phrase-level salient information is superior to word- or sentence-level. However, the impact on hallucination is not universally positive across LLMs. To conduct this analysis, we introduce Keyphrase Signal Extractor (SigExt), a lightweight model that can be finetuned to extract salient keyphrases. By using SigExt, we achieve consistent ROUGE improvements across datasets and open-weight and proprietary LLMs without any LLM customization. Our findings provide insights into leveraging salient information in building prompt-based summarization systems.
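The prompting idea can be sketched as follows; the instruction wording is an invented placeholder, and the keyphrases would come from an extractor such as SigExt rather than being supplied by hand:

```python
def build_summarization_prompt(document, keyphrases):
    """Prepend extracted keyphrases to a summarization instruction.
    The instruction template here is an invented placeholder."""
    hint = ", ".join(keyphrases)
    return (
        "Summarize the article below. "
        f"Make sure the summary covers: {hint}.\n\n"
        f"Article:\n{document}"
    )

doc = "The central bank raised rates by 50 basis points on Tuesday."
prompt = build_summarization_prompt(doc, ["central bank", "rate hike"])
```

Varying the number of keyphrases passed in is the knob the paper uses to trade precision against recall.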

This work develops a distributed graph neural network (GNN) methodology for mesh-based modeling applications using a consistent neural message passing layer. As the name implies, the focus is on enabling scalable operations that satisfy physical consistency via halo nodes at sub-graph boundaries. Here, consistency refers to the fact that a GNN trained and evaluated on one rank (one large graph) is arithmetically equivalent to evaluations on multiple ranks (a partitioned graph). This concept is demonstrated by interfacing GNNs with NekRS, a GPU-capable exascale CFD solver developed at Argonne National Laboratory. It is shown how the NekRS mesh partitioning can be linked to the distributed GNN training and inference routines, resulting in a scalable mesh-based data-driven modeling workflow. We study the impact of consistency on the scalability of mesh-based GNNs, demonstrating efficient scaling in consistent GNNs for up to O(1B) graph nodes on the Frontier exascale supercomputer.
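The consistency property can be checked on a toy graph: one message-passing (neighbor-sum) step evaluated on a partition with halo nodes reproduces the single-rank result on the owned nodes. The path graph and sum aggregation are illustrative choices, not NekRS specifics.

```python
import numpy as np

def aggregate(features, edges):
    """One neighbor-sum message-passing step over directed edges."""
    out = np.zeros_like(features)
    for src, dst in edges:
        out[dst] += features[src]
    return out

# Path graph 0-1-2-3, each undirected edge stored as two directed edges.
feat = np.array([[1.0], [2.0], [3.0], [4.0]])
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
full = aggregate(feat, edges)          # single-rank reference result

# Rank 0 owns nodes 0 and 1 and stores node 2 as a read-only halo copy,
# so every edge pointing into an owned node is locally available.
r0_feat = feat[[0, 1, 2]]              # local ids: 0, 1 owned; 2 = halo
r0_edges = [(0, 1), (1, 0), (2, 1)]    # only edges into owned nodes
r0_out = aggregate(r0_feat, r0_edges)[:2]
```

In a real distributed run the halo values would be refreshed by communication before each layer; here they are simply copied, which is enough to show the arithmetic equivalence.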

This letter puts forth a new hybrid horizontal-vertical federated learning (HoVeFL) for mobile edge computing-enabled Internet of Things (EdgeIoT). In this framework, certain EdgeIoT devices train local models using the same data samples but analyze disparate data features, while the others focus on the same features using non-independent and identically distributed (non-IID) data samples. Thus, even though the data features are consistent, the data samples vary across devices. The proposed HoVeFL formulates the training of local and global models to minimize the global loss function. Performance evaluations on CIFAR-10 and SVHN datasets reveal that the testing loss of HoVeFL with 12 horizontal FL devices and six vertical FL devices is 5.5% and 25.2% higher, respectively, compared to a setup with six horizontal FL devices and 12 vertical FL devices.
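The horizontal/vertical distinction reduces to how a shared data matrix is sliced across devices, as this toy split shows (all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 6))      # 8 samples, 6 features

# Horizontal FL: devices hold different samples but the same features.
h_dev1, h_dev2 = X[:4, :], X[4:, :]

# Vertical FL: devices hold the same samples but different features.
v_dev1, v_dev2 = X[:, :3], X[:, 3:]
```

HoVeFL mixes both device types in one training round, which is why its aggregation must reconcile partial-feature and partial-sample updates.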

This paper presents a control variate-based Markov chain Monte Carlo algorithm for efficient sampling from the probability simplex, with a focus on applications in large-scale Bayesian models such as latent Dirichlet allocation. Standard Markov chain Monte Carlo methods, particularly those based on Langevin diffusions, suffer from significant discretization errors near the boundaries of the simplex, which are exacerbated in sparse data settings. To address this issue, we propose an improved approach based on the stochastic Cox--Ingersoll--Ross process, which eliminates discretization errors and enables exact transition densities. Our key contribution is the integration of control variates, which significantly reduces the variance of the stochastic gradient estimator in the Cox--Ingersoll--Ross process, thereby enhancing the accuracy and computational efficiency of the algorithm. We provide a theoretical analysis showing the variance reduction achieved by the control variates approach and demonstrate the practical advantages of our method in data subsampling settings. Empirical results on large datasets show that the proposed method outperforms existing approaches in both accuracy and scalability.
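The control-variate construction can be sketched for a toy quadratic loss: subtract the minibatch gradient evaluated at a fixed anchor point and add back its exact full-data value. For a quadratic loss the correction happens to be exact, so the estimator variance drops to zero; this is the extreme case of the reduction the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=10_000)

def grad(theta, batch):
    """Gradient of the loss 0.5 * (theta - x)**2 averaged over a batch."""
    return np.mean(theta - batch)

theta, anchor = 1.5, 1.4
full_at_anchor = grad(anchor, data)   # one full-data pass, then reused

naive, cv = [], []
for _ in range(500):
    batch = rng.choice(data, size=10)
    naive.append(grad(theta, batch))
    # Control variate: remove the batch gradient at the anchor and add
    # back its exact full-data value; the noisy batch terms cancel.
    cv.append(grad(theta, batch) - grad(anchor, batch) + full_at_anchor)
```

Both estimators remain unbiased; only their variances differ, which is what makes the trick safe to combine with exact CIR transitions.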

The advent of large language models (LLMs) has spurred considerable interest in advancing autonomous LLM-based agents, particularly for intriguing applications within smartphone graphical user interfaces (GUIs). When presented with a task goal, these agents typically emulate human actions within a GUI environment until the task is completed. However, a key challenge lies in devising effective plans to guide action prediction in GUI tasks, even though planning has been widely recognized as effective for decomposing complex tasks into a series of steps. Specifically, given the dynamic nature of GUI environments following action execution, it is crucial to adapt plans based on environmental feedback and action history. We show that the widely used ReAct approach fails due to excessively long historical dialogues. To address this challenge, we propose a novel approach called Dynamic Planning of Thoughts (D-PoT) for LLM-based GUI agents. D-PoT dynamically adjusts planning based on environmental feedback and execution history. Experimental results reveal that the proposed D-PoT significantly surpasses the strong GPT-4V baseline by +12.7% (34.66% → 47.36%) in accuracy. The analysis highlights the generality of dynamic planning across different backbone LLMs, as well as its benefits in mitigating hallucinations and adapting to unseen tasks. Code is available at //github.com/sqzhang-lazy/D-PoT.
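The dynamic re-planning loop can be sketched generically; the environment interface and both LLM calls here are hypothetical stand-ins, not the paper's implementation:

```python
def run_agent(goal, env, llm_plan, llm_act, max_steps=10):
    """Dynamic re-planning loop: the plan is regenerated after every
    action from the goal, the latest screen, and the execution history,
    instead of being fixed once up front."""
    history = []
    for _ in range(max_steps):
        screen = env.observe()
        plan = llm_plan(goal, screen, history)   # re-plan at every step
        action = llm_act(plan, screen)
        history.append(action)
        if env.step(action):                     # True once the goal is met
            break
    return history
```

Because only compact history and the current screen feed each re-plan, the context stays bounded, in contrast to the ever-growing dialogues that hurt ReAct.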

This paper introduces a method for detecting vulnerabilities in smart contracts using static analysis and a multi-objective optimization algorithm. We focus on four types of vulnerabilities: reentrancy, call stack overflow, integer overflow, and timestamp dependencies. Initially, smart contracts are compiled into an abstract syntax tree to analyze relationships between contracts and functions, including calls, inheritance, and data flow. These analyses are transformed into static evaluations and intermediate representations that reveal internal relations. Based on these representations, we examine each contract's functions, variables, and data dependencies to detect the specified vulnerabilities. To enhance detection accuracy and coverage, we apply a multi-objective optimization algorithm to the static analysis process. This involves assigning initial numeric values to input data and monitoring changes in statement coverage and detection accuracy. Using coverage and accuracy as fitness values, we calculate Pareto-front and crowding-distance values to select the best individuals for the new parent population, iterating until the optimization criteria are met. We validate our approach using an open-source dataset collected from Etherscan, containing 6,693 smart contracts. Experimental results show that our method outperforms state-of-the-art tools in terms of coverage, accuracy, efficiency, and effectiveness in detecting the targeted vulnerabilities.
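The Pareto-based selection step can be illustrated with a minimal non-dominated filter over (coverage, accuracy) fitness pairs; crowding-distance tie-breaking is omitted for brevity:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives are maximized here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (coverage, accuracy) fitness pairs for candidate inputs
pop = [(0.9, 0.6), (0.7, 0.8), (0.5, 0.5), (0.9, 0.8)]
front = pareto_front(pop)
```

In the full algorithm the front members seed the next parent population, and crowding distance breaks ties to preserve diversity along the front.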

The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a great deal of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
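Attention-weighted neighborhood aggregation can be sketched as follows; concatenating neighbor and relation features and scoring them with a single attention vector are simplifying assumptions, not the paper's exact parameterization:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def neighborhood_embedding(neighbor_feats, relation_feats, attn_vec):
    """Attention-weighted aggregate of concatenated (neighbor, relation)
    features in one entity's neighborhood: score each triple, normalize
    the scores, and take the weighted sum."""
    triples = np.concatenate([neighbor_feats, relation_feats], axis=1)
    alpha = softmax(triples @ attn_vec)   # one weight per neighboring triple
    return alpha @ triples

rng = np.random.default_rng(0)
nbrs = rng.normal(size=(3, 4))    # 3 neighboring entities, feature dim 4
rels = rng.normal(size=(3, 2))    # matching relation features, dim 2
attn = rng.normal(size=6)
emb = neighborhood_embedding(nbrs, rels, attn)
```

The learned weights let informative neighbors dominate the entity's representation instead of averaging all triples uniformly.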

In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it is desirable to have a robust inference process so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, our proposed network model efficiently predicts multiple labels.
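Beam search over label sequences can be sketched generically; the toy scoring function below hard-codes a co-occurrence pattern ("dog" makes "grass" likely) purely for illustration:

```python
import math

def beam_search(step_logprobs, beam_width=2):
    """Generic beam search over label sequences. `step_logprobs(prefix)`
    returns a {label: logprob} dict for the next step; a key of None
    marks end-of-sequence. Keeps the top `beam_width` partial sequences."""
    beams = [([], 0.0)]
    finished = []
    while beams:
        candidates = []
        for prefix, score in beams:
            for label, lp in step_logprobs(prefix).items():
                if label is None:
                    finished.append((prefix, score + lp))
                else:
                    candidates.append((prefix + [label], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    finished.sort(key=lambda c: c[1], reverse=True)
    return finished[0][0]

def toy_model(prefix):
    """Hypothetical per-step label distribution with co-occurrence:
    'dog' in the prefix makes 'grass' likely; sequences end at length 2."""
    if len(prefix) >= 2:
        return {None: 0.0}
    if "dog" in prefix:
        return {"grass": math.log(0.7), "cat": math.log(0.3)}
    return {"dog": math.log(0.6), "cat": math.log(0.4)}

labels = beam_search(toy_model, beam_width=2)
```

Because the beam carries whole label sequences, a locally greedy but globally poor first label can still be overtaken, which is the robustness the abstract alludes to.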
