
Background: Alzheimer's disease is a progressive neurodegenerative disorder and the main cause of dementia in aging. The hippocampus is prone to changes in the early stages of Alzheimer's disease, and detecting these changes with magnetic resonance imaging (MRI) before the onset of the disease enables faster preventive and therapeutic measures. Objective: The aim of this study was to segment the hippocampus in magnetic resonance (MR) images of Alzheimer's patients using a deep learning method. Methods: A U-Net convolutional neural network architecture was proposed to segment the hippocampus in real MRI data. MR images of 100 and 35 patients from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset were used to train and test the model, respectively. The performance of the proposed method was compared with manual segmentation by measuring similarity metrics. Results: The desired segmentation was achieved after 10 iterations. A Dice similarity coefficient (DSC) of 92.3%, sensitivity of 96.5%, positive predictive value (PPV) of 90.4%, and Intersection over Union (IoU) values of 92.94 on the training set and 92.93 on the test set were obtained, which are acceptable. Conclusion: The proposed approach is promising and can be extended to the prognosis of Alzheimer's disease by predicting hippocampal volume changes in the early stage of the disease.
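As a rough illustration of the similarity metrics reported above, the following NumPy sketch computes DSC, sensitivity, PPV, and IoU from a pair of binary masks; the function name and the assumption of same-shaped boolean arrays are ours, not the paper's.

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Compute DSC, sensitivity, PPV, and IoU for two binary masks.

    Assumes pred and truth are same-shaped arrays; non-empty masks
    are expected (the ratios are undefined otherwise).
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn)         # Dice similarity coefficient
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)                      # positive predictive value
    iou = tp / (tp + fp + fn)                 # Intersection over Union
    return dsc, sensitivity, ppv, iou
```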

Related content

Machine Learning is an international forum for research on computational approaches to learning. The journal publishes articles reporting substantive results from a wide range of learning methods applied to a variety of learning problems. Its featured papers describe research problems and methods, applied research, and issues of research methodology. Papers about learning problems or methods provide solid support through empirical studies, theoretical analysis, or comparison with psychological phenomena. Application papers show how learning methods can be applied to solve important application problems. Research-methodology papers improve the methods of machine learning research. All papers describe their supporting evidence in a way that other researchers can verify or replicate, detail the components of learning, and discuss assumptions about knowledge representation and performance tasks. Official website:

The current state-of-the-art deep neural networks (DNNs) for Alzheimer's Disease diagnosis use different biomarker combinations to classify patients, but do not allow extracting knowledge about the interactions of biomarkers. However, to improve our understanding of the disease, it is paramount to extract such knowledge from the learned model. In this paper, we propose a Deep Factorization Machine model that combines the ability of DNNs to learn complex relationships and the ease of interpretability of a linear model. The proposed model has three parts: (i) an embedding layer to deal with sparse categorical data, (ii) a Factorization Machine to efficiently learn pairwise interactions, and (iii) a DNN to implicitly model higher-order interactions. In our experiments on data from the Alzheimer's Disease Neuroimaging Initiative, we demonstrate that our proposed model classifies cognitively normal, mildly cognitively impaired, and demented patients more accurately than competing models. In addition, we show that valuable knowledge about the interactions among biomarkers can be obtained.
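A minimal PyTorch sketch of the three-part structure described above (embedding layer, FM pairwise term, DNN for higher-order interactions); the embedding dimension, hidden width, and three-class output are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DeepFM(nn.Module):
    """Sketch: embeddings + FM pairwise interactions + DNN.

    field_dims gives the cardinality of each categorical field;
    embed_dim, the hidden width, and n_classes are illustrative.
    """
    def __init__(self, field_dims, embed_dim=8, n_classes=3):
        super().__init__()
        self.embeddings = nn.ModuleList(
            [nn.Embedding(d, embed_dim) for d in field_dims])
        self.linear = nn.ModuleList(
            [nn.Embedding(d, 1) for d in field_dims])  # first-order term
        self.dnn = nn.Sequential(
            nn.Linear(len(field_dims) * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, n_classes))

    def forward(self, x):  # x: (batch, n_fields) integer category ids
        e = torch.stack([emb(x[:, i]) for i, emb in
                         enumerate(self.embeddings)], dim=1)  # (B, F, k)
        lin = sum(emb(x[:, i]) for i, emb in enumerate(self.linear))
        # FM trick: 0.5 * ((sum of embeddings)^2 - sum of squares)
        square_of_sum = e.sum(dim=1) ** 2
        sum_of_square = (e ** 2).sum(dim=1)
        fm = 0.5 * (square_of_sum - sum_of_square).sum(dim=1, keepdim=True)
        deep = self.dnn(e.flatten(1))       # implicit higher-order term
        return deep + lin + fm              # scalar terms broadcast
```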

Thoracic diseases are very serious health problems that plague a large number of people. Chest X-ray is currently one of the most popular methods to diagnose thoracic diseases, playing an important role in the healthcare workflow. However, reading the chest X-ray images and giving an accurate diagnosis remain challenging tasks for expert radiologists. With the success of deep learning in computer vision, a growing number of deep neural network architectures were applied to chest X-ray image classification. However, most of the previous deep neural network classifiers were based on deterministic architectures, which are usually very noise-sensitive and are likely to aggravate the overfitting issue. In this paper, to make a deep architecture more robust to noise and to reduce overfitting, we propose using deep generative classifiers to automatically diagnose thorax diseases from the chest X-ray images. Unlike the traditional deterministic classifier, a deep generative classifier has a distribution middle layer in the deep neural network. A sampling layer then draws a random sample from the distribution layer and feeds it into the following layer for classification. The classifier is generative because the class label is generated from samples of a related distribution. Through training the model with a certain amount of randomness, the deep generative classifiers are expected to be robust to noise, reduce overfitting, and thus achieve good performance. We implemented our deep generative classifiers based on a number of well-known deterministic neural network architectures, and tested our models on the ChestX-ray14 dataset. The results demonstrated the superiority of deep generative classifiers compared with the corresponding deep deterministic classifiers.
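The distribution middle layer plus sampling layer can be sketched with the standard reparameterization trick; this PyTorch module is a hypothetical stand-in for the paper's architecture, with the latent dimension and layer shapes chosen for illustration.

```python
import torch
import torch.nn as nn

class GenerativeHead(nn.Module):
    """Sketch of a distribution middle layer plus sampling layer.

    A backbone feature vector is mapped to (mu, log_var); a sample is
    drawn via the reparameterization trick and classified.
    """
    def __init__(self, in_dim, latent_dim, n_classes):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.log_var = nn.Linear(in_dim, latent_dim)
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, features):
        mu, log_var = self.mu(features), self.log_var(features)
        eps = torch.randn_like(mu)                # injected randomness
        z = mu + torch.exp(0.5 * log_var) * eps   # random sample
        return self.classifier(z), mu, log_var   # logits + dist. params
```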

Tumor detection in biomedical imaging is a time-consuming process for medical professionals and is not without errors. Thus, in recent decades, researchers have developed algorithmic techniques for image processing using a wide variety of mathematical methods, such as statistical modeling, variational techniques, and machine learning. In this paper, we propose a semi-automatic method for liver segmentation of 2D CT scans into three labels denoting healthy, vessel, or tumor tissue based on graph cuts. First, we create a feature vector for each pixel in a novel way that consists of the 59 intensity values in the time series data and propose a simplified perimeter cost term in the energy functional. We normalize the data and perimeter terms in the functional to expedite the graph cut without having to optimize the scaling parameter $\lambda$. In place of a training process, predetermined tissue means are computed based on sample regions identified by expert radiologists. The proposed method also has the advantage of being relatively simple to implement computationally. It was evaluated against the ground truth on a clinical CT dataset of 10 tumors and yielded segmentations with a mean Dice similarity coefficient (DSC) of 0.77 and mean volume overlap error (VOE) of 36.7%. The average processing time was 1.25 minutes per slice.
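A small sketch of the normalized data term under the stated setup: each pixel carries a time-series feature vector, and its cost for a label is its distance to that label's predetermined tissue mean. The function name and the Euclidean distance choice are ours; a graph-cut solver would then consume these costs together with the perimeter term.

```python
import numpy as np

def unary_costs(features, tissue_means):
    """Sketch of the normalized data term for a graph cut.

    features:     (H, W, T) per-pixel time-series intensities
    tissue_means: (L, T) predetermined mean vectors, one per label
    Returns (H, W, L) costs scaled to [0, 1], so no lambda tuning
    is needed against a similarly normalized perimeter term.
    """
    diff = features[:, :, None, :] - tissue_means[None, None, :, :]
    cost = np.linalg.norm(diff, axis=-1)   # Euclidean distance per label
    return cost / cost.max()               # normalize the data term
```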

Automatic detection of defects in metal castings is a challenging task, owing to the rare occurrence and variation in appearance of defects. However, automatic defect detection systems can lead to significant increases in final product quality. Convolutional neural networks (CNNs) have shown outstanding performance in both image classification and localization tasks. In this work, a system is proposed for the identification of casting defects in X-ray images, based on the mask region-based CNN architecture. The proposed defect detection system simultaneously performs defect detection and segmentation on input images, making it suitable for a range of defect detection tasks. It is shown that training the network to simultaneously perform defect detection and defect instance segmentation results in a higher defect detection accuracy than training on defect detection alone. Transfer learning is leveraged to reduce the training data demands and increase the prediction accuracy of the trained model. More specifically, the model is first trained with two large openly available image datasets before fine-tuning on a relatively small metal casting X-ray dataset. The accuracy of the trained model exceeds state-of-the-art performance on the GDXray Castings dataset and is fast enough to be used in a production setting. The system also performs well on the GDXray Welds dataset. A number of in-depth studies are conducted to explore how transfer learning, multi-task learning, and multi-class learning influence the performance of the trained system.
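The transfer-learning recipe (pre-train on large open datasets, then swap and fine-tune the task heads) can be sketched with torchvision's Mask R-CNN; the two-class setup below (background plus one defect class) is an assumption for illustration, not the paper's configuration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Start from weights pre-trained on a large open dataset (COCO here),
# then replace the box and mask heads for the casting-defect classes.
num_classes = 2  # background + defect (illustrative)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
# Fine-tune on the (relatively small) casting X-ray dataset as usual.
```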

This paper introduces a deep-learning based efficient classifier for common dermatological conditions, aimed at people without easy access to skin specialists. We report approximately 80% accuracy, in a situation where primary care doctors have attained a 57% success rate, according to recent literature. The rationale of its design is centered on deploying and updating it on handheld devices in the near future. Dermatological diseases are common in every population and have a wide spectrum in severity. With a shortage of dermatological expertise being observed in several countries, machine learning solutions can augment medical services and advise on the existence of common diseases. The paper implements supervised classification of nine distinct conditions which have high occurrence in East Asian countries. Our current attempt establishes that deep learning based techniques are viable avenues for preliminary information to aid patients.
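A minimal sketch of the kind of lightweight nine-class classifier the handheld-deployment goal suggests; the MobileNetV2 backbone is our illustrative choice and is not stated in the paper.

```python
import torch.nn as nn
import torchvision

# Adapt a small ImageNet backbone to nine dermatological conditions,
# keeping the model compact enough for eventual on-device use.
model = torchvision.models.mobilenet_v2(weights="DEFAULT")
model.classifier[1] = nn.Linear(model.last_channel, 9)  # nine classes
```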

The piecewise constant Mumford-Shah (PCMS) model and the Rudin-Osher-Fatemi (ROF) model are two of the most famous variational models in image segmentation and image restoration, respectively. They have ubiquitous applications in image processing. In this paper, we explore the linkage between these two important models. We prove that for the two-phase segmentation problem the optimal solution of the PCMS model can be obtained by thresholding the minimizer of the ROF model. This linkage is still valid for multiphase segmentation under mild assumptions. Thus it opens a new segmentation paradigm: image segmentation can be done via image restoration plus thresholding. This new paradigm, which circumvents the innate non-convex property of the PCMS model, therefore improves the segmentation performance in both efficiency (much faster than state-of-the-art methods based on the PCMS model, particularly when the phase number is high) and effectiveness (producing segmentation results with better quality) due to the flexibility of the ROF model in tackling degraded images, such as noisy images, blurry images or images with information loss. As a by-product of the new paradigm, we derive a novel segmentation method, coined the thresholded-ROF (T-ROF) method, to illustrate the virtue of manipulating image segmentation through image restoration techniques. The convergence of the T-ROF method under certain conditions is proved, and elaborate experimental results and comparisons are presented.
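The restoration-plus-thresholding idea is easy to sketch for the two-phase case; the TV solver and the midpoint threshold below are our stand-ins for the paper's ROF minimizer and its principled threshold selection.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def two_phase_via_restoration(image, weight=0.1, tau=None):
    """Sketch of segmentation via restoration plus thresholding.

    Minimize an ROF/TV-type model, then threshold the smooth
    minimizer into two phases. The solver and default threshold
    are illustrative choices, not the T-ROF method itself.
    """
    u = denoise_tv_chambolle(image, weight=weight)  # ROF-type minimizer
    if tau is None:
        tau = 0.5 * (u.min() + u.max())             # simple midpoint choice
    return u >= tau                                 # binary segmentation
```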

Over the past decades, state-of-the-art medical image segmentation has heavily rested on signal processing paradigms, most notably registration-based label propagation and pair-wise patch comparison, which are generally slow despite a high segmentation accuracy. In recent years, deep learning has revolutionized computer vision with many practices outperforming prior art, in particular the convolutional neural network (CNN) studies on image classification. Deep CNN has also started being applied to medical image segmentation lately, but generally involves long training and demanding memory requirements, achieving limited success. We propose a patch-based deep learning framework based on a revisit to the classic neural network model with substantial modernization, including Rectified Linear Unit (ReLU) activations, dropout layers, and a 2.5D tri-planar multi-pathway patch setting. In a test application to hippocampus segmentation using 100 brain MR images from the ADNI database, our approach significantly outperformed prior art in terms of both segmentation accuracy and speed: scoring a median Dice score of up to 90.98% with near real-time performance (<1 s).
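A sketch of the 2.5D tri-planar sampling: three orthogonal 2D patches centred on a voxel, one per network pathway. The patch size and border handling are illustrative assumptions.

```python
import numpy as np

def triplanar_patches(volume, x, y, z, half=16):
    """Sketch of 2.5D tri-planar patch extraction.

    Returns the axial, coronal, and sagittal patches centred on
    voxel (x, y, z), one per pathway of a multi-pathway network.
    Assumes the centre lies at least `half` voxels from every border.
    """
    axial    = volume[x - half:x + half, y - half:y + half, z]
    coronal  = volume[x - half:x + half, y, z - half:z + half]
    sagittal = volume[x, y - half:y + half, z - half:z + half]
    return np.stack([axial, coronal, sagittal])  # (3, 2*half, 2*half)
```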

We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available.
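A minimal additive attention gate in PyTorch, following the general scheme the abstract describes (project, add, squash to a spatial map, rescale the skip features); the channel counts and the assumption that g and x share spatial size are ours.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Sketch of an additive attention gate for a U-Net skip path.

    A gating signal g and a skip feature map x are projected to a
    common channel width, combined, and turned into a per-pixel
    attention map that rescales the skip connection.
    """
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, kernel_size=1)
        self.wx = nn.Conv2d(x_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, g, x):  # assumes g and x share spatial size
        a = torch.relu(self.wg(g) + self.wx(x))
        alpha = torch.sigmoid(self.psi(a))  # attention map in [0, 1]
        return x * alpha                    # suppress irrelevant regions
```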

In this work, we present a deep learning framework for multi-class breast cancer image classification as our submission to the International Conference on Image Analysis and Recognition (ICIAR) 2018 Grand Challenge on BreAst Cancer Histology images (BACH). As these histology images are too large to fit into GPU memory, we first propose using Inception V3 to perform patch level classification. The patch level predictions are then passed through an ensemble fusion framework involving majority voting, gradient boosting machine (GBM), and logistic regression to obtain the image level prediction. We improve the sensitivity of the Normal and Benign predicted classes by designing a Dual Path Network (DPN) to be used as a feature extractor where these extracted features are further sent to a second layer of ensemble prediction fusion using GBM, logistic regression, and support vector machine (SVM) to refine predictions. Experimental results demonstrate that our framework achieves a 12.5% improvement over the state-of-the-art model.
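The patch-to-image fusion step can be sketched as follows: pool each image's patch-level class probabilities into one feature vector and feed it to a second-stage learner. Mean pooling plus logistic regression is only one of the fusion choices the paper combines; the helper below is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def image_features(patch_probs, image_ids):
    """Average patch-level class probabilities per image.

    patch_probs: (n_patches, n_classes) softmax outputs
    image_ids:   (n_patches,) id of the image each patch came from
    Returns the image ids and one fused feature vector per image.
    """
    ids = np.unique(image_ids)
    feats = np.stack([patch_probs[image_ids == i].mean(axis=0)
                      for i in ids])
    return ids, feats

# Second-stage learner on the fused features (illustrative usage):
# _, X_train = image_features(train_patch_probs, train_patch_ids)
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```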

Image segmentation is considered to be one of the critical tasks in hyperspectral remote sensing image processing. Recently, the convolutional neural network (CNN) has established itself as a powerful model in segmentation and classification by demonstrating excellent performance. The use of a graphical model such as a conditional random field (CRF) contributes further in capturing contextual information and thus improving the segmentation performance. In this paper, we propose a method to segment hyperspectral images by considering both spectral and spatial information via a combined framework consisting of CNN and CRF. We use multiple spectral cubes to learn deep features using CNN, and then formulate a deep CRF with CNN-based unary and pairwise potential functions to effectively extract the semantic correlations between patches consisting of three-dimensional data cubes. Effective piecewise training is applied in order to avoid the computationally expensive iterative CRF inference. Furthermore, we introduce a deep deconvolution network that improves the segmentation masks. We also introduce a new dataset and evaluate our proposed method on it, along with several widely adopted benchmark datasets, to assess the effectiveness of our method. By comparing our results with those from several state-of-the-art models, we show the promising potential of our method.
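As a naive stand-in for the CNN-plus-CRF coupling, the following sketch combines CNN unary scores with a simple agreement-encouraging pairwise term via mean-field-style updates; it is far simpler than the paper's learned CNN-based pairwise potentials and piecewise training.

```python
import numpy as np

def meanfield_smooth(unary, n_iters=5, w_pair=1.0):
    """Sketch: couple per-pixel CNN scores with a pairwise term.

    unary: (H, W, L) class scores from the CNN. A mean-field-style
    update rewards each label by the beliefs of the 4-neighbourhood,
    standing in for learned pairwise potentials.
    """
    q = np.exp(unary - unary.max(axis=-1, keepdims=True))
    q /= q.sum(axis=-1, keepdims=True)
    for _ in range(n_iters):
        msg = np.zeros_like(q)                 # sum of neighbour beliefs
        msg[1:] += q[:-1];  msg[:-1] += q[1:]
        msg[:, 1:] += q[:, :-1];  msg[:, :-1] += q[:, 1:]
        s = unary + w_pair * msg               # smoothed scores
        q = np.exp(s - s.max(axis=-1, keepdims=True))
        q /= q.sum(axis=-1, keepdims=True)
    return q.argmax(axis=-1)                   # per-pixel label map
```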
