
Liver cancer is one of the most common malignant diseases in the world. Segmentation and labeling of liver tumors and blood vessels in CT images can assist doctors in liver tumor diagnosis and surgical intervention. In the past decades, automatic CT segmentation methods based on deep learning have received widespread attention in the medical field, and many state-of-the-art segmentation algorithms have appeared during this period. Yet most existing segmentation methods attend only to the local feature context and poorly capture the global relevance of medical images, which significantly limits the segmentation of liver tumors and vessels. We introduce a multi-scale feature context fusion network, TransFusionNet, based on Transformer and SEBottleNet. The network can accurately detect and identify the details of the liver vessel region of interest, while also improving the recognition of the morphologic margins of liver tumors by exploiting the global information of CT images. Experiments show that TransFusionNet outperforms state-of-the-art methods on the public LITS and 3Dircadb datasets as well as on our clinical dataset. Finally, we propose an automatic 3D reconstruction algorithm based on the trained model, which completes the reconstruction quickly and accurately, within 1 second.
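
The abstract does not spell out the layers of SEBottleNet; as a rough illustration only, a standard squeeze-and-excitation (SE) block, the kind of channel re-weighting such a bottleneck typically builds on, can be sketched in PyTorch (the channel count and reduction ratio below are placeholders, not values from the paper):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation block: global pooling followed by a
    two-layer gating MLP that re-weights feature channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # excitation: rescale channels
```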

Related content

In computer vision, 3D reconstruction refers to the process of recovering three-dimensional information from single-view or multi-view images. Because the information in a single view is incomplete, single-view reconstruction must rely on prior knowledge. Multi-view 3D reconstruction (analogous to human binocular vision) is comparatively easier: the camera is first calibrated, i.e., the relationship between the camera's image coordinate system and the world coordinate system is computed, and the 3D information is then reconstructed from the information in multiple 2D images. 3D object reconstruction is a common scientific problem and core technology in computer-aided geometric design (CAGD), computer graphics (CG), computer animation, computer vision, medical image processing, scientific computing, virtual reality, digital media creation, and related fields. There are two main ways to generate a 3D representation of an object inside a computer. One is to build a human-controlled 3D geometric model interactively with geometric modeling software; the other is to acquire the geometric shape of a real object by some measurement process. The former is technologically mature and supported by a number of software packages, such as 3DMAX, Maya, AutoCAD, and UG, which generally represent geometric shapes with mathematically defined curves and surfaces. The latter is generally called 3D reconstruction: the mathematical process and computational technology of recovering an object's 3D information (shape, etc.) from 2D projections, involving steps such as data acquisition, preprocessing, point cloud registration, and feature analysis.
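
As a minimal numerical sketch of the calibration relationship described above (the intrinsic matrix, rotation, and translation are invented example values, not from any real calibration), projecting a 3D world point into pixel coordinates with a pinhole model looks like this:

```python
import numpy as np

# Hypothetical intrinsics (focal lengths and principal point in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Hypothetical extrinsics: identity rotation, camera shifted 2 m along the optical axis.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

def project(point_world: np.ndarray) -> np.ndarray:
    """Map a 3-D world point to 2-D pixel coordinates via the pinhole model."""
    p_cam = R @ point_world + t            # world -> camera coordinates
    uvw = K @ p_cam                        # camera -> homogeneous image coordinates
    return uvw[:2] / uvw[2]                # perspective divide

print(project(np.array([0.1, -0.05, 1.0])))  # approx. [346.67, 226.67]
```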

Emotion recognition using EEG has been widely studied to address the challenges associated with affective computing. Manual feature extraction on EEG signals results in sub-optimal performance of the learning models. With the advancement of deep learning as a tool for automated feature engineering, this work proposes a hybrid of manual and automatic feature extraction. The asymmetry between different brain regions is captured in a 2-D vector, termed AsMap, computed from the differential entropy (DE) features of EEG signals. These AsMaps are then used to extract features automatically with a Convolutional Neural Network (CNN). The proposed feature extraction method is compared with DE and other DE-based feature extraction methods such as RASM, DASM and DCAU. Experiments are conducted on the DEAP and SEED datasets for classification problems with different numbers of classes. The results indicate that the proposed feature extraction method yields higher classification accuracy, outperforming the DE-based feature extraction methods. The highest classification accuracy of 97.10% is achieved on the 3-class classification problem using the SEED dataset. Furthermore, the impact of window size on classification accuracy is also assessed in this work.
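
The AsMap construction itself is not detailed in the abstract; the differential entropy feature it builds on, under the Gaussian assumption commonly used in the EEG literature, can be computed as in the following sketch (channel count and window length are arbitrary placeholders):

```python
import numpy as np

def differential_entropy(signal: np.ndarray) -> float:
    """Differential entropy of a band-passed EEG segment under the common
    Gaussian assumption: DE = 0.5 * ln(2 * pi * e * variance)."""
    variance = np.var(signal)
    return 0.5 * np.log(2 * np.pi * np.e * variance)

# Example: DE for each channel of a (channels x samples) EEG window.
eeg_window = np.random.randn(32, 128)           # placeholder data, 32 channels
de_features = np.array([differential_entropy(ch) for ch in eeg_window])
print(de_features.shape)                        # (32,)
```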

U-Net has been providing state-of-the-art performance in many medical image segmentation problems. Many modifications have been proposed for U-Net, such as attention U-Net, recurrent residual convolutional U-Net (R2-UNet), and U-Net with residual blocks or blocks with dense connections. However, all these modifications have an encoder-decoder structure with skip connections, and the number of paths for information flow is limited. We propose LadderNet in this paper, which can be viewed as a chain of multiple U-Nets. Instead of only one pair of encoder and decoder branches as in U-Net, a LadderNet has multiple pairs of encoder-decoder branches, with skip connections between every pair of adjacent encoder and decoder branches at each level. Inspired by the success of ResNet and R2-UNet, we use modified residual blocks in which the two convolutional layers of a block share the same weights. A LadderNet has more paths for information flow because of its skip connections and residual blocks, and can be viewed as an ensemble of fully convolutional networks (FCNs). The equivalence to an ensemble of FCNs improves segmentation accuracy, while the shared weights within each residual block reduce the number of parameters. Semantic segmentation is essential for retinal disease detection. We tested LadderNet on two benchmark datasets for blood vessel segmentation in retinal images and achieved superior performance over methods in the literature. The implementation is provided at \url{//github.com/juntang-zhuang/LadderNet}.
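
A minimal sketch of the weight-sharing idea described above, with batch normalization and dropout omitted for brevity (so this is not the exact LadderNet block):

```python
import torch
import torch.nn as nn

class SharedWeightResBlock(nn.Module):
    """Residual block in which the same convolution is applied twice,
    so the two conv layers share one set of weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.conv(x))     # first pass through the shared conv
        out = self.conv(out)              # second pass reuses the same weights
        return self.relu(out + x)         # residual connection
```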

Deep learning (DL) approaches are state-of-the-art for many medical image segmentation tasks. They offer a number of advantages: they can be trained for specific tasks, computations are fast at test time, and segmentation quality is typically high. In contrast, previously popular multi-atlas segmentation (MAS) methods are relatively slow (as they rely on costly registrations), and even though sophisticated label fusion strategies have been proposed, DL approaches generally outperform MAS. In this work, we propose a DL-based label fusion strategy (VoteNet) that locally selects a set of reliable atlases whose labels are then fused via plurality voting. Experiments on 3D brain MRI data show that, by selecting a good initial atlas set, MAS with VoteNet significantly outperforms a number of other label fusion strategies as well as a direct DL segmentation approach. We also provide an experimental analysis of the upper performance bound achievable by our method. While unlikely to be reached in practice, this bound suggests room for further performance improvements. Lastly, to address the runtime disadvantage of standard MAS, all our results make use of a fast DL registration approach.
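
The atlas-selection network itself is not shown here; the plurality-voting fusion step it feeds into can be sketched as follows (array shapes are illustrative):

```python
import numpy as np

def plurality_vote(atlas_labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Fuse warped atlas segmentations by taking, at every voxel, the label
    proposed by the largest number of atlases.

    atlas_labels: integer array of shape (n_atlases, D, H, W).
    """
    # Count votes per class, then pick the class with the most votes per voxel.
    votes = np.stack([(atlas_labels == c).sum(axis=0) for c in range(num_classes)])
    return votes.argmax(axis=0)

# Example: 5 selected atlases voting on a tiny 2x2x2 volume with 3 labels.
labels = np.random.randint(0, 3, size=(5, 2, 2, 2))
fused = plurality_vote(labels, num_classes=3)
print(fused.shape)   # (2, 2, 2)
```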

Deep neural network models used for medical image segmentation are large because they are trained with high-resolution three-dimensional (3D) images. Graphics processing units (GPUs) are widely used to accelerate training. However, GPU memory is often not large enough to train such models. A popular approach to tackling this problem is the patch-based method, which divides a large image into small patches and trains the model on these patches. However, this method degrades segmentation quality when a target object spans multiple patches. In this paper, we propose a novel approach for 3D medical image segmentation that uses data swapping, which moves intermediate data from GPU memory to CPU memory to enlarge the effective GPU memory size, so that high-resolution 3D medical images can be trained on without patching. We carefully tuned the parameters of the data-swapping method to obtain the best training performance for 3D U-Net, a widely used deep neural network model for medical image segmentation. We applied our tuning to train 3D U-Net with full-size images of 192 x 192 x 192 voxels from a brain tumor dataset. As a result, the communication overhead, the most important issue, was reduced by 17.1%. Compared with the patch-based method using patches of 128 x 128 x 128 voxels, our training on full-size images improved the mean Dice score by 4.48% and 5.32% for the whole-tumor and tumor-core sub-regions, respectively. The total training time was reduced from 164 hours to 47 hours, a 3.53-fold speedup.
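
The paper describes its own tuned data-swapping runtime; as a hedged illustration of the general idea only, recent PyTorch versions expose a stock context manager that keeps saved activations in CPU memory during the forward pass (this is not the authors' implementation, and the tiny model and volume size below are placeholders):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(                      # tiny stand-in for a 3D segmentation net
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, 3, padding=1),
).to(device)
volume = torch.randn(1, 1, 96, 96, 96, device=device, requires_grad=True)

# Keep intermediate activations in pinned CPU memory during the forward pass;
# autograd copies them back to the GPU only when backward actually needs them.
with torch.autograd.graph.save_on_cpu(pin_memory=True):
    loss = model(volume).mean()
loss.backward()
```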

Tumor detection in biomedical imaging is a time-consuming process for medical professionals and is not without errors. Thus, in recent decades researchers have developed algorithmic techniques for image processing using a wide variety of mathematical methods, such as statistical modeling, variational techniques, and machine learning. In this paper, we propose a semi-automatic, graph-cut-based method that segments the liver in 2D CT scans into three labels denoting healthy, vessel, or tumor tissue. First, we create a feature vector for each pixel in a novel way, consisting of the 59 intensity values in the time-series data, and propose a simplified perimeter cost term in the energy functional. We normalize the data and perimeter terms in the functional to expedite the graph cut without having to optimize the scaling parameter $\lambda$. In place of a training process, predetermined tissue means are computed from sample regions identified by expert radiologists. The proposed method also has the advantage of being relatively simple to implement computationally. It was evaluated against the ground truth on a clinical CT dataset of 10 tumors and yielded segmentations with a mean Dice similarity coefficient (DSC) of 0.77 and a mean volume overlap error (VOE) of 36.7%. The average processing time was 1.25 minutes per slice.
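
The abstract does not reproduce the energy functional itself; a generic two-term graph-cut energy of the kind it refers to, with a data term and a $\lambda$-weighted perimeter (regularity) term, has the form

$$E(L) = \sum_{p \in \Omega} D_p(L_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V_{p,q}(L_p, L_q),$$

where $D_p$ measures how well the 59-dimensional feature vector of pixel $p$ matches the predetermined mean of label $L_p$, and $V_{p,q}$ penalizes label changes between neighboring pixels. The normalization of both terms that lets the method avoid tuning $\lambda$ is specific to the paper and is not captured by this generic form.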

In this paper, we adopt 3D convolutional neural networks to segment volumetric medical images. Although deep neural networks have proven very effective on many 2D vision tasks, it is still challenging to apply them to 3D tasks due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework to tackle these challenges effectively and efficiently. The proposed 3D-based framework outperforms its 2D counterpart by a large margin because it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets, which include healthy and pathological pancreases respectively, and achieve the current state-of-the-art in terms of the Dice-S{\o}rensen coefficient (DSC). On the NIH pancreas segmentation dataset, we outperform the previous best result by an average of over 2%, and the worst case is improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications.
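
A minimal sketch of the coarse-to-fine hand-off, assuming the coarse stage outputs a binary mask from which a padded bounding box is cropped for the fine stage (the margin and shapes are illustrative, not the paper's settings):

```python
import numpy as np

def crop_to_candidate(volume: np.ndarray, coarse_mask: np.ndarray, margin: int = 8):
    """Crop the volume to the bounding box of the coarse-stage prediction
    (plus a safety margin) before passing the region to the fine-stage network."""
    zs, ys, xs = np.nonzero(coarse_mask)
    lo = np.maximum(np.array([zs.min(), ys.min(), xs.min()]) - margin, 0)
    hi = np.minimum(np.array([zs.max(), ys.max(), xs.max()]) + margin + 1, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]], (lo, hi)

# Example: a synthetic coarse mask marking a small region inside a 64^3 volume.
vol = np.random.randn(64, 64, 64)
mask = np.zeros(vol.shape, dtype=bool)
mask[20:30, 25:35, 30:40] = True
roi, (lo, hi) = crop_to_candidate(vol, mask)
print(roi.shape)   # (26, 26, 26) with the default 8-voxel margin
```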

In this paper, we focus on three problems in deep learning based medical image segmentation. First, U-Net, a popular model for medical image segmentation, is difficult to train as the number of convolutional layers increases, even though a deeper network usually generalizes better because of its larger number of learnable parameters. Second, the exponential linear unit (ELU), an alternative to ReLU, behaves little differently from ReLU once the network of interest gets deep. Third, the Dice loss, one of the pervasive loss functions for medical image segmentation, is not effective when the prediction is close to the ground truth and causes oscillation during training. To address these three problems, we propose and validate a deeper network that can fit medical image datasets, which are usually small in sample size. Meanwhile, we propose a new loss function to accelerate the learning process and a combination of different activation functions to improve network performance. Our experimental results suggest that our network is comparable or superior to state-of-the-art methods.
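
For reference, the standard soft Dice loss whose near-convergence behavior is criticized above can be sketched as follows (the authors' replacement loss is not reproduced here):

```python
import torch

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Standard soft Dice loss for a binary segmentation map.

    pred: probabilities in [0, 1], shape (B, 1, H, W); target: same shape, values in {0, 1}.
    """
    intersection = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * intersection + eps) / (union + eps)
    return 1 - dice.mean()

# Example: the loss barely changes once predictions are already close to the target,
# which is the near-convergence flatness the paragraph above points out.
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(soft_dice_loss(0.9 * target + 0.05, target).item())
```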

One of the most common tasks in medical imaging is semantic segmentation. Achieving this segmentation automatically has been an active area of research, but the task has proven very challenging due to the large variation of anatomy across different patients. However, recent advances in deep learning have made it possible to significantly improve the performance of image recognition and semantic segmentation methods in the field of computer vision. Thanks to the data-driven, hierarchical feature learning of deep learning frameworks, these advances can be translated to medical images without much difficulty. Several variations of deep convolutional neural networks have been successfully applied to medical images. Fully convolutional architectures in particular have proven efficient for the segmentation of 3D medical images. In this article, we describe how to build a 3D fully convolutional network (FCN) that can process 3D images in order to produce automatic semantic segmentations. The model is trained and evaluated on a clinical computed tomography (CT) dataset and shows state-of-the-art performance in multi-organ segmentation.
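
The article's actual architecture is not reproduced here; a minimal, shape-correct 3D FCN of the general kind described, with two pooling stages and a per-voxel classification head, might look like this:

```python
import torch
import torch.nn as nn

class Tiny3DFCN(nn.Module):
    """Minimal 3D fully convolutional network: two downsampling stages,
    two upsampling stages, and a per-voxel classification head."""
    def __init__(self, in_ch: int = 1, num_classes: int = 4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv3d(16, num_classes, 1),       # per-voxel class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(x))

logits = Tiny3DFCN()(torch.randn(1, 1, 32, 64, 64))
print(logits.shape)   # torch.Size([1, 4, 32, 64, 64])
```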

Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that will first use a 3D FCN to roughly define a candidate region, which will then be used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5 to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.

Deep learning (DL) based semantic segmentation methods have been providing state-of-the-art performance over the last few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One deep learning technique, U-Net, has become one of the most popular for these applications. In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net, named RU-Net and R2U-Net respectively. The proposed models utilize the power of U-Net, residual networks, and RCNNs. These architectures offer several advantages for segmentation tasks. First, a residual unit helps when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, it allows us to design a better U-Net architecture with the same number of network parameters but better performance for medical image segmentation. The proposed models are tested on three benchmark datasets: blood vessel segmentation in retina images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models, including U-Net and residual U-Net (ResU-Net).
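
A simplified sketch of a recurrent convolutional unit of the kind such models build on, with batch normalization omitted and the outer residual connection of the full recurrent residual block not shown:

```python
import torch
import torch.nn as nn

class RecurrentConvUnit(nn.Module):
    """Recurrent convolutional unit: the same convolution is unrolled t times,
    each step re-injecting the block input, so features accumulate over steps."""
    def __init__(self, channels: int, t: int = 2):
        super().__init__()
        self.t = t
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.relu(self.conv(x))
        for _ in range(self.t):
            h = self.relu(self.conv(x + h))   # recurrent step with the shared conv
        return h
```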
