
The prediction of the remaining useful life (RUL) of rolling bearings is a pivotal issue in industrial production. A crucial approach to tackling this issue is transforming vibration signals into health indicators (HIs) to aid model training. This paper presents an end-to-end HI construction method, the vector quantised variational autoencoder (VQ-VAE), which addresses the need for dimensionality reduction of latent variables found in traditional unsupervised learning methods such as the autoencoder. Moreover, since traditional statistical metrics fail to reflect curve fluctuations accurately, two novel statistical metrics, mean absolute distance (MAD) and mean variance (MV), are introduced. These metrics accurately depict the fluctuation patterns in the curves, thereby indicating how accurately the model discerns similar features. On the PHM2012 dataset, methods employing VQ-VAE for label construction achieved lower MAD and MV values. Furthermore, the ASTCN prediction model trained with VQ-VAE labels demonstrated commendable performance, attaining the lowest MAD and MV values.
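
For concreteness, the following is a minimal sketch of one plausible reading of the two fluctuation metrics. The abstract does not give the exact formulas, so both definitions (HI-versus-trend distance for MAD, sliding-window variance for MV) and the window size are illustrative assumptions, not the paper's precise specification.

```python
import numpy as np

def mad_mv(hi, window=10):
    """Illustrative fluctuation metrics for a health-indicator (HI) curve.
    Assumed definitions (the paper's exact formulas may differ):
    MAD: mean absolute distance between the HI and its moving-average trend;
    MV:  mean of the HI variance over sliding windows."""
    hi = np.asarray(hi, dtype=float)
    trend = np.convolve(hi, np.ones(window) / window, mode="same")
    mad = np.mean(np.abs(hi - trend))
    windows = np.lib.stride_tricks.sliding_window_view(hi, window)
    mv = windows.var(axis=1).mean()
    return mad, mv
```

Under these assumptions, a smooth monotone HI curve yields small MAD and MV, while a noisy curve with spurious oscillations yields large values of both.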

Related Content

This paper considers the problem of evaluating an autonomous system's competency in performing a task, particularly when working in dynamic and uncertain environments. The inherent opacity of machine learning models from the perspective of the user, often described as a `black box', poses a challenge. To overcome this, we propose using a measure called the surprise index, which leverages available measurement data to quantify whether the dynamic system performs as expected. We show that the surprise index can be computed in closed form for dynamic systems when the joint distribution of the observed evidence in a probabilistic model follows a multivariate Gaussian marginal distribution. We then apply it to a nonlinear spacecraft maneuver problem, where actions are chosen by a reinforcement learning agent, and show that it can indicate how well the trajectory follows the required orbit.
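
The closed form can be illustrated with Weaver's classical surprise index, E[p(X)] / p(x). Whether the paper uses exactly this definition is an assumption, but for multivariate Gaussian evidence the index reduces to a simple Mahalanobis-distance expression, as the sketch below shows.

```python
import numpy as np

def gaussian_surprise_index(x, mu, cov):
    """Weaver-style surprise index E[p(X)] / p(x) for X ~ N(mu, cov).
    In the Gaussian case this has the closed form 2^(-d/2) * exp(m / 2),
    where m is the squared Mahalanobis distance of the evidence x; values
    far above 1 flag evidence much less likely than a typical draw."""
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    m = diff @ np.linalg.solve(np.asarray(cov, dtype=float), diff)  # Mahalanobis^2
    d = diff.size
    return 2.0 ** (-d / 2) * np.exp(m / 2)

# Example: evidence two standard deviations out in each of two dimensions
print(gaussian_surprise_index([2.0, 2.0], [0.0, 0.0], np.eye(2)))  # ~ e^4 / 2
```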

Recent studies have tried to use hyperspectral imaging (HSI) to detect foreign matter in products, because it can visualize otherwise invisible wavelengths, including ultraviolet and infrared. Given the enormous number of image channels in HSI, dimension reduction methods such as PCA or UMAP can be considered, but they cannot ease two fundamental limitations: (1) the latency of HSI capturing, and (2) the limited ability to explain which channels are important. In this paper, to circumvent these limitations, we propose feature selection, one approach to channel reduction, for anomaly detection on HSI. Unlike feature extraction methods (e.g., PCA or UMAP), feature selection can sort features by impact and offers better explainability, so we might redesign a task-optimized and cost-effective spectroscopic camera. Extensive experiments on a synthesized MVTec AD dataset confirm that the feature selection method is 6.90x faster in the inference phase than feature extraction-based approaches while preserving anomaly detection performance. Ultimately, we conclude that feature selection is effective yet fast.
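
As a sketch of the feature-selection idea, the snippet below ranks HSI channels by a simple importance score and keeps only the top-k, so a camera would need to capture just those bands. The per-channel variance score and the name select_channels are illustrative stand-ins for the paper's actual selection criterion.

```python
import numpy as np

def select_channels(cube, k):
    """Toy channel selection for an HSI cube of shape (H, W, C):
    score each channel (here, by its variance as a placeholder criterion),
    keep the top-k, and report which channels were kept."""
    scores = cube.reshape(-1, cube.shape[-1]).var(axis=0)  # one score per channel
    keep = np.sort(np.argsort(scores)[::-1][:k])           # indices of top-k channels
    return cube[..., keep], keep
```

Unlike PCA or UMAP, the output here is a subset of the original channels, which is what makes the result directly interpretable and actionable for camera design.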

This article introduces a novel, low-cost technique for hiding data in commercially available resistive RAM (ReRAM) chips. The data is kept hidden in ReRAM cells by manipulating their analog physical properties through switching ($\textit{set/reset}$) operations. The hidden data is later retrieved by sensing the changes in the cells' physical properties (i.e., the $\textit{set/reset}$ time of the memory cells). The proposed system-level hiding technique does not affect normal memory operations and does not require any hardware modifications. Furthermore, the proposed hiding approach is robust against temperature variations and against device aging through normal read/write operations. Silicon results show that our proposed data hiding technique is acceptably fast, with an encoding rate of ${\sim}0.4$ bit/min and a retrieval rate of ${\sim}15.625$ bits/s, and that the hidden message is unrecoverable without knowledge of the secret key, which is used to enhance the security of the hidden information.
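
The protocol can be pictured with a toy software simulation: the secret key selects which cells carry the message, encoding stresses the cells holding a '1' so their set time drifts, and retrieval thresholds re-measured set times. Everything here (array size, shift magnitude, threshold, the very idea of modeling set time as a scalar) is a hypothetical stand-in for the analog behavior described above, not the chip interface.

```python
import numpy as np

rng = np.random.default_rng(7)
N_CELLS, SHIFT = 1024, 0.3          # hypothetical array size and set-time drift

def hide(set_times, bits, key):
    """Encode: stress the key-selected cells that hold a '1' so their set
    time rises by SHIFT (a stand-in for the real analog drift)."""
    for cell, bit in zip(key, bits):
        if bit:
            set_times[cell] += SHIFT
    return set_times

def retrieve(set_times, key, threshold):
    """Decode: re-measure set times of the key-selected cells and threshold."""
    return [int(set_times[cell] > threshold) for cell in key]

# usage: without the key (the cell addresses), the message stays hidden
times = rng.normal(1.0, 0.05, N_CELLS)       # nominal set times (arbitrary units)
key = rng.permutation(N_CELLS)[:8]           # secret cell addresses
msg = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = retrieve(hide(times, msg, key), key, threshold=1.15)
print(recovered, msg)                        # with high probability these match
```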

Text-to-SQL, the task of translating natural language questions into SQL queries, is part of various business processes. Its automation, an emerging challenge, will empower software practitioners to interact seamlessly with relational databases using natural language, thereby bridging the gap between business needs and software capabilities. In this paper, we consider Large Language Models (LLMs), which have achieved the state of the art on various NLP tasks. Specifically, we benchmark Text-to-SQL performance, evaluation methodologies, and input optimization (e.g., prompting). In light of our empirical observations, we propose two novel metrics designed to adequately measure the similarity between SQL queries. Overall, we share various findings with the community, notably on how to select the right LLM for Text-to-SQL tasks. We further demonstrate that a tree-based edit distance constitutes a reliable metric for assessing the similarity between generated SQL queries and the oracle when benchmarking Text2SQL approaches. This metric is important because it relieves researchers from computationally expensive experiments, such as executing the generated queries as done in prior work. Our work covers financial-domain use cases and therefore contributes to the advancement of Text2SQL systems and their practical adoption in this domain.
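
To make the tree-based edit distance concrete, here is a small sketch using the Zhang-Shasha algorithm from the Python zss package on two hand-built toy SQL ASTs. The node-labeling scheme is an illustrative assumption (a real pipeline would derive the trees from a SQL parser such as sqlglot), and this is not necessarily the paper's exact metric.

```python
from zss import Node, simple_distance   # Zhang-Shasha tree edit distance

# Toy ASTs for:  SELECT name FROM users WHERE age > 30
#          and:  SELECT name FROM users WHERE age > 18
q1 = (Node("select")
        .addkid(Node("col:name"))
        .addkid(Node("from").addkid(Node("table:users")))
        .addkid(Node("where").addkid(
            Node(">").addkid(Node("col:age")).addkid(Node("lit:30")))))
q2 = (Node("select")
        .addkid(Node("col:name"))
        .addkid(Node("from").addkid(Node("table:users")))
        .addkid(Node("where").addkid(
            Node(">").addkid(Node("col:age")).addkid(Node("lit:18")))))

print(simple_distance(q1, q2))   # 1: a single relabel (lit:30 -> lit:18)
```

Because the distance counts node insertions, deletions, and relabels, two queries that differ only in a literal score as nearly identical, while structurally different queries score far apart, without ever executing either query.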

With the recent emergence of mixed precision hardware, there has been renewed interest in its use for solving numerical linear algebra problems quickly and accurately. The solution of least squares (LS) problems $\min_x\|b-Ax\|_2$, where $A \in \mathbb{R}^{m\times n}$, arises in numerous application areas. Overdetermined standard least squares problems can be solved using mixed precision within the iterative refinement method of Bj\"{o}rck, which transforms the least squares problem into an $(m+n)\times(m+n)$ ``augmented'' system. It has recently been shown that mixed precision GMRES-based iterative refinement can also be used, in an approach termed GMRES-LSIR. In practice, we often encounter least squares problems beyond standard least squares, including weighted least squares (WLS), $\min_x\|D^{1/2}(b-Ax)\|_2$, where $D$ is a diagonal matrix of weights. In this paper, we discuss a mixed precision FGMRES-WLSIR algorithm for solving WLS problems using two different preconditioners.
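
A minimal sketch of the augmented-system idea follows, assuming single precision for the QR factorization and double precision for residual accumulation. This simplified loop is in the spirit of Bj\"{o}rck's refinement of the system $r + Ax = b$, $A^Tr = 0$, not the paper's FGMRES-WLSIR algorithm.

```python
import numpy as np
from scipy.linalg import solve_triangular

def ls_iterative_refinement(A, b, iters=4):
    """Mixed precision iterative refinement for min ||b - Ax||_2 via the
    augmented system  r + Ax = b,  A^T r = 0.  The QR factorization is
    computed in float32; residuals and updates accumulate in float64."""
    Q, R = np.linalg.qr(A.astype(np.float32))                 # low-precision factors
    x = solve_triangular(R, Q.T @ b.astype(np.float32)).astype(np.float64)
    r = b - A @ x
    Rd = R.astype(np.float64)
    for _ in range(iters):
        f = b - r - A @ x          # residual of the first block equation
        g = -A.T @ r               # residual of the second block equation
        # corrections satisfy R^T R dx = A^T f - g, then dr = f - A dx
        y = solve_triangular(Rd, A.T @ f - g, trans='T')
        dx = solve_triangular(Rd, y)
        dr = f - A @ dx
        x, r = x + dx, r + dr
    return x

# usage: x = ls_iterative_refinement(np.random.randn(100, 10), np.random.randn(100))
```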

Conventional distributed approaches to coverage control may suffer from lack of convergence and poor performance, because agents have limited information, especially in non-convex discrete environments. To address this issue, we extend the approach of [Marden 2016], which demonstrates how a limited degree of inter-agent communication can be exploited to overcome such pitfalls in one-dimensional discrete environments. The focus of this paper is on extending these results to settings of general dimension. We show that the extension is convergent and preserves the approximation ratio of 2, meaning that any stable solution is guaranteed to achieve at least 50% of the optimal performance. We also show that both the computational complexity and the communication complexity are polynomial in the size of the problem. Experimental results show that our algorithm outperforms several state-of-the-art algorithms and that its runtime scales as the theory predicts.

We propose a novel set of Poisson Cluster Process (PCP) models to detect Ultra-Diffuse Galaxies (UDGs), a class of extremely faint, enigmatic galaxies of substantial interest in modern astrophysics. We model the unobserved UDG locations as parent points in a PCP, and infer their positions from the observed spatial point patterns of their old star cluster systems. Many UDGs host anywhere from a few to hundreds of these old star clusters, which we treat as offspring points in our models. We also present a new framework for constructing a marked PCP model using the marks of the star clusters. The marked PCP model may enhance the detection of UDGs and offers broad applicability to problems in other disciplines. To assess overall model performance, we design an innovative assessment tool for spatial prediction problems where only point-referenced ground truth is available, overcoming the limitation of standard ROC analyses, which require spatial Boolean reference maps. We construct a bespoke blocked Gibbs adaptive spatial birth-death-move MCMC algorithm to infer the locations of UDGs using real data from a \textit{Hubble Space Telescope} imaging survey. Based on our performance assessment tool, our novel models significantly outperform existing approaches using the Log-Gaussian Cox Process. We also obtain preliminary evidence that the marked PCP model improves UDG detection performance compared to the model without marks. Furthermore, we find evidence of a potential new ``dark galaxy'' that was not detected by previous methods.
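
As background on the parent/offspring structure, the snippet below simulates a Thomas process, a standard Poisson cluster process: unobserved parents (UDG centres) follow a homogeneous Poisson process, and each parent spawns a Poisson number of Gaussian-displaced offspring (star clusters). The parameter values and the Gaussian offspring kernel are illustrative, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_thomas(kappa, mu, sigma, window=(0.0, 1.0)):
    """Simulate a Thomas process on a square window.
    kappa: intensity of the parent Poisson process (unobserved centres)
    mu:    mean number of offspring per parent (observed points)
    sigma: std. dev. of the Gaussian offspring displacement"""
    lo, hi = window
    n_parents = rng.poisson(kappa * (hi - lo) ** 2)
    parents = rng.uniform(lo, hi, size=(n_parents, 2))
    offspring = [p + rng.normal(scale=sigma, size=(rng.poisson(mu), 2))
                 for p in parents]
    points = np.vstack(offspring) if offspring else np.empty((0, 2))
    return parents, points

parents, clusters = simulate_thomas(kappa=20, mu=15, sigma=0.02)
```

Detection then runs this generative story in reverse: given only the offspring pattern, the MCMC sampler infers where the parents must have been.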

Recent advances in Large Language Models (LLMs) have exhibited remarkable proficiency across various tasks. Given the potent applications of LLMs in numerous fields, there has been a surge in LLM development. In developing LLMs, a common practice is to continue pre-training previously fine-tuned models. However, this can lead to catastrophic forgetting. In this work, we investigate the forgetting that occurs during continual pre-training of an existing fine-tuned LLM. We evaluate the impact of continual pre-training on the fine-tuned LLM across several dimensions, including output format, knowledge, and reliability. Experimental results highlight the non-trivial challenge of addressing catastrophic forgetting during continual pre-training, especially the repetition issue.

In recent years, there has been growing interest in video-based action quality assessment (AQA). Most existing methods solve the AQA problem by considering the entire video, overlooking the inherent stage-level characteristics of actions. To address this issue, we design a novel Multi-stage Contrastive Regression (MCoRe) framework for the AQA task. This approach efficiently extracts spatial-temporal information while reducing computational cost by segmenting the input video into multiple stages or procedures. Inspired by graph contrastive learning, we propose a new stage-wise contrastive learning loss function to enhance performance. As a result, MCoRe achieves state-of-the-art results on the widely adopted fine-grained AQA dataset.
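
The stage-wise contrastive idea can be sketched as an InfoNCE-style loss over per-stage embeddings, where matching stages of two videos of the same action are positives and all other stage pairs are negatives. This form, and the temperature value, are assumptions for illustration, not necessarily the exact MCoRe loss.

```python
import torch
import torch.nn.functional as F

def stage_contrastive_loss(z1, z2, tau=0.1):
    """Illustrative InfoNCE-style loss over per-stage features.
    z1, z2: (num_stages, dim) stage embeddings of two videos of the same
    action; stage i of z1 should match stage i of z2 and no other stage."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                 # (S, S) stage-to-stage similarities
    targets = torch.arange(z1.size(0))       # the diagonal holds the positives
    return F.cross_entropy(logits, targets)

# usage: loss = stage_contrastive_loss(torch.randn(4, 128), torch.randn(4, 128))
```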

The Rayleigh-product channel model is used to characterize the rank deficiency caused by keyhole effects. However, a finite blocklength (FBL) analysis for Rayleigh-product channels is not available in the literature. In this paper, we characterize the mutual information density (MID) and perform an FBL analysis to reveal the impact of rank deficiency in Rayleigh-product channels. To this end, we first establish a central limit theorem (CLT) for the MID over Rayleigh-product MIMO channels in the asymptotic regime where the number of scatterers, the number of antennas, and the blocklength go to infinity at the same pace. We then use the CLT to obtain upper and lower bounds on the packet error probability, and derive their approximations in the high and low signal-to-noise ratio (SNR) regimes to illustrate the impact of rank deficiency. One interesting observation is that rank deficiency degrades the performance of MIMO systems in the FBL regime, and that the fundamental limits of Rayleigh-product channels degenerate to those of the Rayleigh case when the number of scatterers approaches infinity.
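
The rank-deficiency effect is easy to see in a Monte Carlo sketch of the Rayleigh-product model: with $S$ scatterers, $H = H_2H_1/\sqrt{S}$ has rank at most $S$. The snippet below estimates the mean and variance of the per-use mutual information, the quantities the CLT concerns; the antenna counts and SNR are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_product_mi(n_tx, n_rx, n_scatter, snr, trials=5000):
    """Monte Carlo estimate of the mutual information of a Rayleigh-product
    MIMO channel H = H2 @ H1 / sqrt(S); rank(H) <= S, so a small number of
    scatterers S (the keyhole effect) makes the channel rank deficient."""
    samples = np.empty(trials)
    for t in range(trials):
        H1 = (rng.standard_normal((n_scatter, n_tx))
              + 1j * rng.standard_normal((n_scatter, n_tx))) / np.sqrt(2)
        H2 = (rng.standard_normal((n_rx, n_scatter))
              + 1j * rng.standard_normal((n_rx, n_scatter))) / np.sqrt(2)
        H = H2 @ H1 / np.sqrt(n_scatter)
        _, logdet = np.linalg.slogdet(np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T)
        samples[t] = logdet / np.log(2)      # bits per channel use
    return samples.mean(), samples.var()

# few scatterers vs. many: the mean recovers toward the plain Rayleigh case
print(rayleigh_product_mi(4, 4, 2, snr=10.0))
print(rayleigh_product_mi(4, 4, 64, snr=10.0))
```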
