
Advances in largely automated microscopy methods such as MERFISH for imaging cellular structures in mouse brains are providing spatial detection of gene expression at micron resolution. While tremendous progress has been made in the field of Computational Anatomy (CA) on diffeomorphic mapping technologies at the tissue scale for advanced neuroinformatic studies in common coordinates, the integration of molecular- and cellular-scale populations through statistical averaging via common coordinates remains unattained. This paper describes the first set of algorithms for calculating geodesics in the space of diffeomorphisms, which we term Image-Varifold LDDMM, extending the family of large deformation diffeomorphic metric mapping (LDDMM) algorithms to accommodate the "copy and paste" varifold action of particles, which extends consistently to the tissue scales. We represent the brain data as geometric measures, termed image varifolds, supported by a large number of unstructured points (i.e., not aligned on a 2D or 3D grid), each point representing a small volume in space (which may be incompletely described) and carrying a list of densities of features, elements of a high-dimensional feature space. The shape of image-varifold brain spaces is measured by transforming them by diffeomorphisms. The metric between image varifolds is obtained by embedding these objects in a linear space equipped with a norm, yielding a so-called "chordal metric."
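The chordal metric can be made concrete with a kernel construction. Below is a minimal Python sketch, not the paper's implementation: it assumes a Gaussian kernel on the spatial supports and a linear kernel on the feature densities, embeds each discrete measure in the induced linear space, and takes the norm of the difference.

import numpy as np

def varifold_inner(x1, w1, f1, x2, w2, f2, sigma=1.0):
    # <mu, nu> = sum_ij w1_i w2_j k(x1_i, x2_j) <f1_i, f2_j>
    d2 = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))   # Gaussian kernel on point supports
    F = f1 @ f2.T                        # linear kernel on feature densities
    return float((w1[:, None] * w2[None, :] * K * F).sum())

def chordal_distance(x1, w1, f1, x2, w2, f2, sigma=1.0):
    # ||mu - nu|| in the kernel-induced linear space
    d2 = (varifold_inner(x1, w1, f1, x1, w1, f1, sigma)
          - 2.0 * varifold_inner(x1, w1, f1, x2, w2, f2, sigma)
          + varifold_inner(x2, w2, f2, x2, w2, f2, sigma))
    return float(np.sqrt(max(d2, 0.0)))

Here the x arrays are (n, 3) point supports, the w arrays are (n,) volume weights, and the f arrays are (n, d) feature-density lists; the kernel choices and the sigma parameter are illustrative assumptions.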

Related content

Cell tracking is an essential tool in live-cell imaging for determining single-cell features, such as division patterns or elongation rates. Unlike in common multiple-object tracking, in microbial live-cell experiments cells grow, move, and divide over time, forming cell colonies that are densely packed in mono-layer structures. With increasing cell numbers, correctly following the precise cell-cell associations over many generations becomes increasingly challenging, due to the massively growing number of possible associations. To tackle this challenge, we propose a fast, parameter-free cell tracking approach that consists of activity-prioritized nearest-neighbor assignment of growing cells and a combinatorial solver that assigns splitting mother cells to their daughters. As input for the tracking, Omnipose is utilized for instance segmentation. Unlike conventional nearest-neighbor-based tracking approaches, the assignment steps of our proposed method are based on a Gaussian activity-based metric that predicts the cell-specific migration probability, thereby limiting the number of erroneous assignments. In addition to being a building block for cell tracking, the proposed activity map is a standalone, tracking-free metric for indicating cell activity. Finally, we perform a quantitative analysis of the tracking accuracy for different frame rates, to inform life scientists of a frame-rate choice that is suitable (in terms of tracking performance) for their cultivation experiments when cell tracks are the desired key outcome.
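As an illustration of the assignment step, here is a minimal Python sketch, not the authors' code: it scores each previous-to-current pairing by the negative log-likelihood of an isotropic Gaussian migration model whose per-cell scale plays the role of the activity, and it substitutes SciPy's Hungarian solver for the paper's activity-prioritized nearest-neighbor scheme.

import numpy as np
from scipy.optimize import linear_sum_assignment

def migration_cost(prev_xy, curr_xy, sigma):
    # sigma: per-cell activity (expected migration scale), shape (n_prev,)
    d2 = ((prev_xy[:, None, :] - curr_xy[None, :, :]) ** 2).sum(-1)
    # negative log of an isotropic 2D Gaussian migration likelihood
    return d2 / (2.0 * sigma[:, None] ** 2) + 2.0 * np.log(sigma[:, None])

def assign_cells(prev_xy, curr_xy, sigma):
    cost = migration_cost(prev_xy, curr_xy, sigma)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return list(zip(rows.tolist(), cols.tolist()))

Division handling (one mother assigned to two daughters) would require the combinatorial solver mentioned above and is omitted here.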

Temporal action localization (TAL) is an important and challenging problem in video understanding. However, most existing TAL benchmarks are built upon a coarse granularity of action classes, which exhibits two major limitations for this task. First, coarse-level actions can make localization models overfit to high-level context information and ignore the atomic action details in the video. Second, coarse action classes often lead to ambiguous annotations of temporal boundaries, which are inappropriate for temporal action localization. To tackle these problems, we develop a novel large-scale and fine-grained video dataset, coined FineAction, for temporal action localization. In total, FineAction contains 103K temporal instances of 106 action categories, annotated in 17K untrimmed videos. Compared to existing TAL datasets, FineAction exhibits the distinct characteristics of fine action classes with rich diversity, dense annotations of multiple instances, and co-occurring actions of different classes, which introduce new opportunities and challenges for temporal action localization. To benchmark FineAction, we systematically investigate the performance of several popular temporal localization methods on it and deeply analyze the influence of fine-grained instances on temporal action localization. As a minor contribution, we present a simple baseline approach for fine-grained action detection, which achieves an mAP of 13.17% on FineAction. We believe that FineAction can advance research on temporal action localization and beyond.
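The reported mAP is, as is standard in TAL evaluation, computed over temporal intersection-over-union (tIoU) thresholds. A minimal Python sketch of the tIoU between a predicted and a ground-truth segment:

def tiou(pred, gt):
    # pred, gt: (start, end) in seconds; temporal intersection over union
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

Ambiguous boundaries of coarse action classes directly distort this quantity, which is why fine-grained boundary annotation matters for the benchmark.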

Face reconstruction and tracking are building blocks of numerous applications in AR/VR, human-machine interaction, and medical settings. Most of these applications rely on a metrically correct prediction of the shape, especially when the reconstructed subject is put into a metrical context (i.e., when there is a reference object of known size). A metrical reconstruction is also needed for any application that measures distances and dimensions of the subject (e.g., to virtually fit a glasses frame). State-of-the-art methods for face reconstruction from a single image are trained on large 2D image datasets in a self-supervised fashion. However, due to the nature of perspective projection, they are not able to reconstruct the actual face dimensions; even predicting the average human face outperforms some of these methods in a metrical sense. To learn the actual shape of a face, we argue for a supervised training scheme. Since there exists no large-scale 3D dataset for this task, we annotated and unified small- and medium-scale databases. The resulting unified dataset is still only medium-scale, with more than 2k identities, and training purely on it would lead to overfitting. To address this, we take advantage of a face recognition network pretrained on a large-scale 2D image dataset, which provides distinct features for different faces and is robust to expression, illumination, and camera changes. Using these features, we train our face shape estimator in a supervised fashion, inheriting the robustness and generalization of the face recognition network. Our method, which we call MICA (MetrIC fAce), outperforms state-of-the-art reconstruction methods by a large margin, both on current non-metric benchmarks and on our metric benchmarks (15% and 24% lower average error on NoW, respectively).
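A hedged sketch of the supervised setup described above: a frozen face recognition backbone supplies an identity embedding, and a small trainable head regresses metric shape parameters from it. The module below is hypothetical; names, dimensions, and architecture are illustrative, not MICA's actual head.

import torch
import torch.nn as nn

class ShapeHead(nn.Module):
    # maps a frozen recognition embedding to 3DMM shape coefficients
    def __init__(self, embed_dim=512, n_coeffs=300):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 512),
            nn.ReLU(),
            nn.Linear(512, n_coeffs),
        )

    def forward(self, embedding):          # embedding: (B, embed_dim)
        return self.mlp(embedding)         # (B, n_coeffs) shape parameters

Training would supervise the predicted coefficients against the unified 3D dataset while the recognition backbone stays fixed, so the head inherits the backbone's robustness to expression, illumination, and camera changes.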

Many studies have used the Shannon entropy of transcriptome data to determine cell dedifferentiation and differentiation, and the accumulated evidence has strengthened the case that the transcriptome's Shannon entropy can quantify these cellular states. Since quantifying this cellular status is justified, we propose the term liberality for the quantitative value of cellular dedifferentiation and differentiation. In previous studies, calculating liberality required converting raw transcriptome data into quantitative transcriptome data through mapping, tag counting, assembly, and further bioinformatic processing. Removing this conversion step from the estimation of liberality would save computing resources and time and remove technical difficulties in operating the software. In this study, we propose a method for calculating cellular liberality without these transcriptome data conversion processes: liberality can be calculated by measuring the compression rate of the raw transcriptome data. Because this technique is independent of reference genome data, it increases the generality of cellular liberality.
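A minimal Python sketch of the compression-rate idea. The abstract does not name a compressor, so gzip is an assumption here; the point is only that less-compressible raw reads indicate higher sequence diversity, i.e., higher entropy.

import gzip

def compression_rate(raw_reads: bytes) -> float:
    # ratio of compressed to raw size; closer to 1 means less redundancy,
    # i.e., a more entropic (more dedifferentiated) transcriptome
    return len(gzip.compress(raw_reads)) / len(raw_reads)

with open("sample.fastq", "rb") as fh:      # hypothetical input file
    print(compression_rate(fh.read()))

Because the rate is computed directly on the raw bytes, no reference genome, mapping, or assembly step is required.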

Therapeutic intervention in neurological disorders still relies heavily on pharmacological solutions, while the treatment of patients with drug resistance remains an open challenge. This is particularly true for patients with epilepsy, 30% of whom are refractory to medication. Implantable devices for chronic recording and electrical modulation of brain activity have proved a viable alternative in such cases. To operate, the device should detect the relevant electrographic biomarkers from Local Field Potentials (LFPs) and determine the right time for stimulation. To enable timely interventions, the ideal device should attain biomarker detection with low latency while operating under low power consumption to prolong battery life. Neuromorphic networks have progressively gained a reputation as low-latency, low-power computing systems, which makes them promising candidates as the processing core of next-generation implantable neural interfaces. Here we introduce a fully analog neuromorphic device implemented in CMOS technology for analyzing LFP signals in an in vitro model of acute ictogenesis. We show that the system can detect ictal and interictal events with millisecond latency and high precision, consuming on average 3.50 nW during the task. Our work paves the way for a new generation of brain-implantable devices for personalized closed-loop stimulation in epilepsy treatment.
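For intuition about how a spiking network can flag ictal activity with millisecond latency, here is a toy Python model of a single leaky integrate-and-fire unit driven by LFP power. This is an illustration of the neuromorphic principle only, not a model of the paper's CMOS circuit; all constants are arbitrary.

def lif_detect(lfp, dt=1e-3, tau=0.02, gain=5.0, threshold=1.0):
    # leaky integrator of signal power; a spike marks a detected event
    v, spike_times = 0.0, []
    for t, x in enumerate(lfp):
        v += dt * (-v / tau + gain * x * x)   # leak plus power input
        if v >= threshold:
            spike_times.append(t * dt)
            v = 0.0                           # reset after spike
    return spike_times

In the device itself, the corresponding dynamics are realized in analog CMOS rather than software, which is what enables the nanowatt-scale power consumption reported above.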

Graphs drawn in the plane are ubiquitous, arising from data sets through a variety of methods ranging from GIS analysis to image classification to shape analysis. A fundamental problem with this type of data is comparison: given a set of such graphs, can we rank how similar they are, in such a way that we capture their geometric "shape" in the plane? In this paper we explore a method to compare two such embedded graphs via a simplified combinatorial representation, called a tail-less merge tree, which encodes their structure relative to a fixed direction. First, we examine the properties of a distance designed to compare merge trees, called the branching distance, and show that the distance as defined in previous work fails to satisfy some of the requirements of a metric. We nevertheless incorporate it into a new distance function, called the average branching distance, which compares graphs by looking at the branching distance between merge trees defined over many directions. Despite the theoretical issues, we show that the definition is still quite useful in practice by using our open-source code to cluster data sets of embedded graphs.
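A minimal Python sketch of the averaging construction, assuming a hypothetical branching_distance(g1, g2, direction) routine that builds the tail-less merge trees of both embedded graphs in the given direction and compares them:

import numpy as np

def average_branching_distance(g1, g2, branching_distance, n_dirs=32):
    # average a direction-dependent merge-tree distance over the circle
    thetas = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dists = [branching_distance(g1, g2, (np.cos(t), np.sin(t)))
             for t in thetas]
    return float(np.mean(dists))

Averaging over many directions is what removes the dependence on the single fixed direction used by an individual merge tree.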

This work addresses the novel and challenging problem of estimating the full 3D hand shape and pose from a single RGB image. Most current methods for 3D hand analysis from monocular RGB images focus only on estimating the 3D locations of hand keypoints, which cannot fully express the 3D shape of the hand. In contrast, we propose a Graph Convolutional Neural Network (Graph CNN) based method to reconstruct a full 3D mesh of the hand surface, which contains richer information about both 3D hand shape and pose. To train the networks with full supervision, we create a large-scale synthetic dataset containing both ground-truth 3D meshes and 3D poses. When fine-tuning the networks on real-world datasets without 3D ground truth, we propose a weakly-supervised approach that leverages the depth map as weak supervision during training. Through extensive evaluations on our proposed new datasets and two public datasets, we show that our method can produce accurate and reasonable 3D hand meshes, and can achieve superior 3D hand pose estimation accuracy compared with state-of-the-art methods.
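As a sketch of the mesh-regression idea, here is a single simplified graph-convolution layer in Python/PyTorch. It is a first-order layer over a fixed, pre-normalized mesh adjacency; the paper's exact graph-convolution operator may differ.

import torch
import torch.nn as nn

class MeshGraphConv(nn.Module):
    # one graph convolution over mesh vertices: aggregate neighbors, then mix
    def __init__(self, in_dim, out_dim, adj_norm):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.register_buffer("adj_norm", adj_norm)  # (V, V) normalized adjacency

    def forward(self, x):                 # x: (B, V, in_dim) vertex features
        return torch.relu(self.adj_norm @ self.lin(x))

Stacking such layers and regressing a (V, 3) output yields per-vertex 3D coordinates, i.e., the full hand mesh rather than a sparse set of keypoints.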

In this paper, we adopt 3D Convolutional Neural Networks to segment volumetric medical images. Although deep neural networks have proven very effective on many 2D vision tasks, it is still challenging to apply them to 3D tasks due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework to tackle these challenges effectively and efficiently. The proposed 3D-based framework outperforms its 2D counterpart by a large margin, since it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets which include healthy and pathological pancreases respectively, and achieve the current state-of-the-art in terms of Dice-Sørensen Coefficient (DSC). On the NIH pancreas segmentation dataset, we outperform the previous best by an average of over 2%, and the worst case is improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications.
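Since results are reported as Dice-Sørensen Coefficients, here is the standard definition as a short Python function (a generic utility, not the paper's evaluation code):

import numpy as np

def dice_coefficient(pred, gt):
    # DSC = 2|P ∩ G| / (|P| + |G|) for two binary masks
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

A worst-case DSC near 70% therefore means that even the least successful segmentation still overlaps the ground truth substantially.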

In this paper, we focus on three problems in deep learning based medical image segmentation. Firstly, U-Net, a popular model for medical image segmentation, is difficult to train as the number of convolutional layers increases, even though a deeper network usually has better generalization ability owing to its larger number of learnable parameters. Secondly, the exponential linear unit (ELU), as an alternative to ReLU, behaves little differently from ReLU when the network of interest gets deep. Thirdly, the Dice loss, one of the pervasive loss functions for medical image segmentation, is not effective when the prediction is close to the ground truth and causes oscillation during training. To address these three problems, we propose and validate a deeper network that can fit medical image datasets, which are usually small in sample size. Meanwhile, we propose a new loss function to accelerate the learning process and a combination of different activation functions to improve network performance. Our experimental results suggest that our network is comparable or superior to state-of-the-art methods.
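To make the Dice-loss criticism concrete: a common soft-Dice formulation adds a smoothing constant so the loss and its gradient stay well-behaved when the prediction is close to the ground truth. The sketch below shows that standard smoothed form; it is not the authors' proposed new loss.

import torch

def soft_dice_loss(pred, target, eps=1.0):
    # pred: probabilities in [0, 1]; eps damps the gradient near perfect overlap
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

Without the eps term, the denominator-driven gradient can swing sharply near convergence, which is the kind of oscillation the abstract refers to.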

Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on a more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital, comprising 150 CT scans and targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: https://github.com/holgerroth/3Dunet_abdomen_cascade.
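A schematic of the two-stage cascade in Python, with coarse_fcn and fine_fcn as placeholders for the trained 3D FCNs (the function names and the margin value are illustrative, not the released code):

import numpy as np

def bounding_box(mask, margin):
    # tight box around a binary mask, expanded by a safety margin
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, mask.shape)
    return lo, hi

def coarse_to_fine(volume, coarse_fcn, fine_fcn, margin=16):
    coarse = coarse_fcn(volume) > 0.5          # stage 1: candidate region
    lo, hi = bounding_box(coarse, margin)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = np.zeros_like(coarse)               # stage 2: refine ~10% of voxels
    fine[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_fcn(crop) > 0.5
    return fine

Restricting the second network to the cropped region is what lets it spend its capacity on thin vessels and small organs such as the pancreas.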
