
Emissions of nitric oxide and nitrogen dioxide, collectively referred to as NOx, are a major environmental and health concern. In response to the climate crisis, the South Korean government has strengthened NOx emission regulations. An accurate NOx prediction model can help companies meet their NOx emission quotas and achieve cost savings. This study focuses on developing a model that forecasts the amount of NOx emissions in Pohang, a heavy-industrial city in South Korea with serious air pollution problems. In this study, long short-term memory (LSTM) modeling is applied to predict the amount of NOx emissions, with missing data imputed using stochastic regression. Two parameters needed to run the LSTM model (i.e., the time window and the learning rate) are tested and selected using the Adam optimizer, one of the most popular optimization methods for LSTM. I found that the resulting model achieves acceptable prediction performance, since its Mean Absolute Scaled Error (MASE), the most important evaluation criterion, is less than 1. This means that the model I developed will predict future NOx emissions better than a naive forecast, i.e., a model that simply repeats the last observed data point.
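For context, here is a minimal sketch of how the MASE criterion mentioned above can be computed (standard definition; the function and the toy series are illustrative, not taken from the study): the forecast's mean absolute error is scaled by the mean absolute error of a naive one-step forecast on the training data, so a value below 1 means the model beats the naive baseline.

```python
import numpy as np

def mase(y_true, y_pred, y_train):
    """Mean Absolute Scaled Error: forecast MAE divided by the MAE of a
    naive one-step-ahead forecast computed on the training series."""
    mae_forecast = np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    naive_errors = np.abs(np.diff(np.asarray(y_train)))  # |y_t - y_{t-1}| on training data
    return mae_forecast / np.mean(naive_errors)

# Toy example (made-up values): MASE < 1 means the forecast beats the naive baseline.
train = [10.0, 12.0, 11.0, 13.0, 12.5]
actual, forecast = [13.0, 12.0], [12.8, 12.3]
print(round(mase(actual, forecast, train), 3))  # 0.182
```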

Related content

Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. It can process not only single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is well suited to tasks such as unsegmented, connected handwriting recognition, speech recognition, and anomaly detection in network traffic or IDSs (intrusion detection systems).
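As a concrete illustration of the above (a generic sketch in PyTorch with arbitrary layer sizes; it is not tied to any particular paper), the following model consumes an entire input sequence while carrying a recurrent hidden state across time steps and predicts a single value from the final state:

```python
import torch
import torch.nn as nn

class SequenceRegressor(nn.Module):
    """Minimal LSTM regressor: the hidden state acts as the feedback connection
    that lets the network accumulate information over the whole sequence."""
    def __init__(self, n_features=1, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)         # h_n: final hidden state of each layer
        return self.head(h_n[-1])          # predict from the last layer's final state

model = SequenceRegressor()
batch = torch.randn(4, 24, 1)              # e.g. 4 sequences of 24 time steps each
print(model(batch).shape)                  # torch.Size([4, 1])
```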

In medical image diagnosis, fairness has become increasingly crucial. Without bias mitigation, deploying unfair AI would harm the interests of underprivileged populations and potentially tear society apart. Recent research addresses prediction biases in deep learning models concerning demographic groups (e.g., gender, age, and race) by utilizing demographic (sensitive attribute) information during training. However, many sensitive attributes naturally exist in dermatological disease images. If the trained model only targets fairness for a specific attribute, it remains unfair with respect to other attributes. Moreover, training a model that can accommodate multiple sensitive attributes is impractical due to privacy concerns. To overcome this, we propose a method enabling fair predictions for sensitive attributes during the testing phase without using such information during training. Inspired by prior work highlighting the impact of feature entanglement on fairness, we enhance the model's features by capturing the features related to the sensitive and target attributes and regularizing the feature entanglement between corresponding classes. This ensures that the model classifies based only on features related to the target attribute, without relying on features associated with sensitive attributes, thereby improving both fairness and accuracy. Additionally, we use disease masks from the Segment Anything Model (SAM) to enhance the quality of the learned features. Experimental results demonstrate that the proposed method improves fairness in classification compared to state-of-the-art methods on two dermatological disease datasets.
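One plausible way to regularize feature entanglement, shown purely as a hedged sketch (the cosine-similarity penalty, function name, and weighting below are my assumptions, not the paper's exact loss), is to penalize correlation between the features used for the target attribute and those capturing the sensitive attribute:

```python
import torch
import torch.nn.functional as F

def entanglement_penalty(target_feats, sensitive_feats):
    """Illustrative regularizer: penalize the absolute cosine similarity between
    target-attribute and sensitive-attribute feature vectors (assumed to have the
    same dimensionality), pushing the two feature sets toward being uncorrelated."""
    t = F.normalize(target_feats, dim=1)
    s = F.normalize(sensitive_feats, dim=1)
    return (t * s).sum(dim=1).abs().mean()

# Hypothetical usage inside a training step (names are placeholders):
# loss = ce_loss(logits, disease_labels) + lambda_ent * entanglement_penalty(f_target, f_sensitive)
```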

Bearing measurements, as the most common modality in nature, have recently gained traction in multi-robot systems to enhance mutual localization and swarm collaboration. Despite their advantages, challenges such as sensory noise, obstacle occlusion, and uncoordinated swarm motion persist in real-world scenarios, potentially leading to erroneous state estimation and undermining the system's flexibility, practicality, and robustness. In response to these challenges, in this paper we address theoretical and practical problems related to both mutual localization and swarm planning. Firstly, we propose a certifiable mutual localization algorithm. It features a concise problem formulation coupled with a lossless convex relaxation, enabling independence from initial values and globally optimal relative pose recovery. Then, to explore how detection noise and swarm motion influence estimation optimality, we conduct a comprehensive analysis of the interplay between the robots' mutual spatial relationship and mutual localization. We develop a differentiable metric correlated with swarm trajectories to explicitly evaluate the noise resistance of the optimal estimation. By establishing a finite and pre-computable threshold for this metric and generating swarm trajectories accordingly, estimation optimality can be strictly guaranteed under arbitrary noise. Based on these findings, an optimization-based swarm planner is proposed to generate safe and smooth trajectories, with consideration of both inter-robot visibility and estimation optimality. Through numerical simulations, we evaluate the optimality and certifiability of our estimator, and underscore the significance of our planner in enhancing estimation performance. The results demonstrate the considerable potential of our methods to pave the way for advanced closed-loop intelligence in swarm systems.
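For readers less familiar with bearing-based mutual localization, the following is a minimal sketch of the kind of residual such estimators drive to zero (illustrative notation only; it is not the paper's certifiable formulation): given a candidate relative position, the predicted direction to the other robot is compared with the measured unit bearing.

```python
import numpy as np

def bearing_residual(t_ij, bearing_meas_in_i):
    """Illustrative bearing residual: t_ij is robot j's position expressed in
    robot i's frame under a candidate relative pose; the predicted bearing is
    t_ij normalized, compared against the measured unit bearing."""
    predicted = t_ij / np.linalg.norm(t_ij)
    measured = bearing_meas_in_i / np.linalg.norm(bearing_meas_in_i)
    return measured - predicted  # zero when the candidate pose explains the measurement

# Toy check: robot j two metres ahead along +x and measured along +x -> zero residual.
print(bearing_residual(np.array([2.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
```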

Policy gradient methods equipped with deep neural networks have achieved great success in solving high-dimensional reinforcement learning (RL) problems. However, current analyses cannot explain why they are resistant to the curse of dimensionality. In this work, we study the sample complexity of the neural policy mirror descent (NPMD) algorithm with deep convolutional neural networks (CNN). Motivated by the empirical observation that many high-dimensional environments have state spaces possessing low-dimensional structures, such as those taking images as states, we consider the state space to be a $d$-dimensional manifold embedded in the $D$-dimensional Euclidean space with intrinsic dimension $d\ll D$. We show that in each iteration of NPMD, both the value function and the policy can be well approximated by CNNs. The approximation errors are controlled by the size of the networks, and the smoothness of the previous networks can be inherited. As a result, by properly choosing the network size and hyperparameters, NPMD can find an $\epsilon$-optimal policy with $\widetilde{O}(\epsilon^{-\frac{d}{\alpha}-2})$ samples in expectation, where $\alpha\in(0,1]$ indicates the smoothness of the environment. Compared to previous work, our result shows that NPMD can leverage the low-dimensional structure of the state space to escape from the curse of dimensionality, explaining the efficacy of deep policy gradient algorithms.
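To make the role of the dimensions concrete (illustrative numbers, not taken from the paper), instantiating the stated bound with intrinsic dimension $d=4$ and smoothness $\alpha=1$ gives
$$\widetilde{O}\!\left(\epsilon^{-\frac{d}{\alpha}-2}\right)=\widetilde{O}\!\left(\epsilon^{-6}\right),$$
which does not depend on the ambient dimension $D$; a bound whose exponent scaled with $D$ would deteriorate rapidly for image-like states, which is precisely the curse of dimensionality the result avoids.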

This study considers tests for coefficient randomness in predictive regressions. Our focus is on how tests for coefficient randomness are influenced by the persistence of the random coefficient. We find that when the random coefficient is stationary, or I(0), Nyblom's (1989) LM test loses its optimality (in terms of power), which is established against the alternative of an integrated, or I(1), random coefficient. We demonstrate this by constructing tests that are more powerful than the LM test when the random coefficient is stationary, although these tests are dominated in terms of power by the LM test when the random coefficient is integrated. This implies that the best test for coefficient randomness differs from context to context, and practitioners should take into account the persistence of the potentially random coefficient and choose from several tests accordingly. We apply tests for coefficient constancy to real data. The results mostly reverse the conclusion of an earlier empirical study.

Energy theft, characterized by manipulating energy consumption readings to reduce payments, poses a dual threat: it causes financial losses for grid operators and undermines the performance of smart grids. Effective Energy Theft Detection (ETD) methods are crucial in mitigating these risks by identifying such fraudulent activities in their early stages. However, the majority of current ETD methods rely on supervised learning, which is hindered by the difficulty of labelling data and the risk of overfitting to known attacks. To address these challenges, several unsupervised ETD methods have been proposed, focusing on learning normal patterns from honest users, specifically by reconstructing the input. However, our investigation reveals a limitation of current unsupervised ETD methods: they can only detect anomalous behaviours in users exhibiting regular patterns, and users with high-variance behaviours pose a challenge to these methods. In response, this paper introduces a Denoising Diffusion Probabilistic Model (DDPM)-based ETD approach. This approach demonstrates impressive ETD performance on high-variance smart grid data by incorporating additional attributes correlated with energy consumption. The proposed methods improve the average ETD performance on high-variance smart grid data from below 0.5 to over 0.9 w.r.t. AUC. On the other hand, our experimental findings indicate that while state-of-the-art ETD methods based on reconstruction error can identify ETD attacks for the majority of users, they prove ineffective in detecting attacks for certain users. To address this, we propose a novel ensemble approach that considers both reconstruction error and forecasting error, enhancing the robustness of the ETD methodology. The proposed ensemble method improves the average ETD performance on the stealthiest attacks from nearly 0 to 0.5 w.r.t. 5%-TPR.
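A hedged sketch of how reconstruction and forecasting errors might be combined per user (the z-score-then-maximum rule below is an assumption chosen for illustration, not necessarily the paper's exact ensemble):

```python
import numpy as np

def ensemble_anomaly_score(recon_err, forecast_err, eps=1e-8):
    """Illustrative ensemble: standardize each detector's per-user error, then
    take the element-wise maximum, so a user is flagged when EITHER the
    reconstruction-based or the forecasting-based detector fires."""
    recon_err = np.asarray(recon_err, dtype=float)
    forecast_err = np.asarray(forecast_err, dtype=float)
    z_recon = (recon_err - recon_err.mean()) / (recon_err.std() + eps)
    z_forecast = (forecast_err - forecast_err.mean()) / (forecast_err.std() + eps)
    return np.maximum(z_recon, z_forecast)

# Hypothetical usage: scores above a threshold calibrated on honest users are flagged.
print(ensemble_anomaly_score([0.2, 0.3, 0.25, 1.4], [0.1, 0.9, 0.12, 0.15]).round(2))
```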

Solutions to vision tasks in gastrointestinal endoscopy (GIE) conventionally use image encoders pretrained in a supervised manner with ImageNet-1k as backbones. However, the use of modern self-supervised pretraining algorithms and a recent dataset of 100k unlabelled GIE images (Hyperkvasir-unlabelled) may allow for improvements. In this work, we study the fine-tuned performance of models with ResNet50 and ViT-B backbones pretrained in self-supervised and supervised manners with ImageNet-1k and Hyperkvasir-unlabelled (self-supervised only) on a range of GIE vision tasks. In addition to identifying the most suitable pretraining pipeline and backbone architecture for each task, out of those considered, our results suggest: that self-supervised pretraining generally produces more suitable backbones for GIE vision tasks than supervised pretraining; that self-supervised pretraining with ImageNet-1k is typically more suitable than pretraining with Hyperkvasir-unlabelled, with the notable exception of monocular depth estimation in colonoscopy; and that ViT-Bs are more suitable for polyp segmentation and monocular depth estimation in colonoscopy, ResNet50s are more suitable for polyp detection, and both architectures perform similarly in anatomical landmark recognition and pathological finding characterisation. We hope this work draws attention to the complexity of pretraining for GIE vision tasks, informs the development of more suitable approaches than the convention, and inspires further research on this topic. Code available: github.com/ESandML/SSL4GIE

Purpose: Lymph nodes (LNs) in the chest have a tendency to enlarge due to various pathologies, such as lung cancer or pneumonia. Clinicians routinely measure nodal size to monitor disease progression, confirm metastatic cancer, and assess treatment response. However, variations in their shapes and appearances make it cumbersome to identify LNs, which reside outside of most organs. Methods: We propose to segment LNs in the mediastinum by leveraging the anatomical priors of 28 different structures (e.g., lung, trachea, etc.) generated by the public TotalSegmentator tool. The CT volumes from 89 patients available in the public NIH CT Lymph Node dataset were used to train three 3D nnUNet models to segment LNs. The public St. Olavs dataset containing 15 patients (out of the training distribution) was used to evaluate the segmentation performance. Results: For the 15 test patients, the 3D cascade nnUNet model obtained the highest Dice scores of 72.2 ± 22.3 for mediastinal LNs with short-axis diameter $\geq$ 8 mm and 54.8 ± 23.8 for all LNs, respectively. These results represent an improvement of 10 points over a current approach evaluated on the same test dataset. Conclusion: To our knowledge, we are the first to harness 28 distinct anatomical priors to segment mediastinal LNs, and our work can be extended to other nodal zones in the body. The proposed method has immense potential for improving patient outcomes through the identification of enlarged nodes in initial staging CT scans.

Heuristics and cognitive biases are an integral part of human decision-making. Automatically detecting a particular cognitive bias could enable intelligent tools to provide better decision support. Detecting the presence of a cognitive bias currently requires a hand-crafted experiment and human interpretation. Our research aims to explore conversational agents as an effective tool for measuring various cognitive biases in different domains. Our proposed conversational agent incorporates a bias measurement mechanism informed by existing experimental designs and various experimental tasks identified in the literature. Our initial experiments on measuring framing and loss-aversion biases indicate that conversational agents can be effectively used to measure these biases.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
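As a concrete instance of category (1), a minimal sketch of unstructured magnitude pruning (a generic technique with parameters chosen for illustration, not code from any surveyed paper):

```python
import torch

def magnitude_prune(weight, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest absolute
    value; sparse kernels or compressed storage can then skip the zeroed entries."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))

w = torch.randn(4, 4)
pruned = magnitude_prune(w, sparsity=0.5)
print((pruned == 0).float().mean().item())  # roughly the requested sparsity
```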

Clinical Named Entity Recognition (CNER) aims to identify and classify clinical terms such as diseases, symptoms, treatments, exams, and body parts in electronic health records, which is a fundamental and crucial task for clinical and translational research. In recent years, deep neural networks have achieved significant success in named entity recognition and many other Natural Language Processing (NLP) tasks. Most of these algorithms are trained end to end and can automatically learn features from large-scale labeled datasets. However, these data-driven methods typically lack the capability to process rare or unseen entities. Previous statistical methods and feature engineering practice have demonstrated that human knowledge can provide valuable information for handling rare and unseen cases. In this paper, we address the problem by incorporating dictionaries into deep neural networks for the Chinese CNER task. Two different architectures that extend the Bi-directional Long Short-Term Memory (Bi-LSTM) neural network and five different feature representation schemes are proposed to handle the task. Computational results on the CCKS-2017 Task 2 benchmark dataset show that the proposed method achieves highly competitive performance compared with state-of-the-art deep learning methods.
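To illustrate how dictionary knowledge can be injected into a neural tagger, here is a hedged sketch of one possible dictionary-match feature (the BIES tagging used below is an illustrative assumption and not necessarily one of the paper's five schemes); the resulting tag sequence would be embedded and concatenated with character embeddings before the Bi-LSTM:

```python
def dictionary_match_features(chars, dictionary):
    """Mark each character with a BIES-style tag when it falls inside a
    dictionary entry, producing an extra feature sequence aligned with the
    character sequence ('O' = no match)."""
    tags = ["O"] * len(chars)
    text = "".join(chars)
    for entry in dictionary:
        start = text.find(entry)
        while start != -1:
            end = start + len(entry)
            if len(entry) == 1:
                tags[start] = "S"
            else:
                tags[start], tags[end - 1] = "B", "E"
                for i in range(start + 1, end - 1):
                    tags[i] = "I"
            start = text.find(entry, end)
    return tags

# Hypothetical example with a tiny dictionary of clinical terms.
print(dictionary_match_features(list("患者头痛三天"), ["头痛"]))  # ['O', 'O', 'B', 'E', 'O', 'O']
```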
