
Clinical decision making from magnetic resonance imaging (MRI) combines complementary information from multiple MRI sequences (defined as 'modalities'). MRI image registration aims to geometrically 'pair' diagnoses from different modalities, time points and slices. Both intra- and inter-modality MRI registration are essential components in clinical MRI settings. Further, an MRI image processing pipeline that can address both affine and non-rigid registration is critical, as both types of deformations may be occurring in real MRI data scenarios. Unlike image classification, explainability is not commonly addressed in image registration deep learning (DL) methods, as it is challenging to interpret model-data behaviours against transformation fields. To properly address this, we incorporate Grad-CAM-based explainability frameworks in each major component of our unsupervised multi-modal and multi-organ image registration DL methodology. We previously demonstrated that we were able to reach superior performance (against the current-standard SyN method). In this work, we show that our DL model becomes fully explainable, setting the framework to generalise our approach on further medical imaging data.
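
As a rough illustration of how a Grad-CAM-style explanation can be attached to a convolutional registration network, the sketch below (PyTorch) hooks a convolutional layer, backpropagates a scalar surrogate objective derived from the predicted deformation field, and forms a gradient-weighted activation map. The network, target layer, and surrogate objective are hypothetical placeholders, not the authors' implementation.

```python
# Minimal Grad-CAM-style sketch for a convolutional registration network (PyTorch).
# All names (RegNet, target layer, surrogate score) are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    """Toy encoder that maps a (moving, fixed) pair to a 2D displacement field."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.flow = nn.Conv2d(16, 2, 3, padding=1)  # 2-channel displacement field

    def forward(self, moving, fixed):
        feats = self.features(torch.cat([moving, fixed], dim=1))
        return self.flow(feats), feats

def grad_cam(model, moving, fixed):
    acts, grads = {}, {}
    layer = model.features[-2]                      # last conv layer as target
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

    flow, _ = model(moving, fixed)
    # Scalar surrogate objective: magnitude of the predicted deformation
    # (a real pipeline would instead backpropagate a similarity metric).
    score = flow.pow(2).mean()
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    weights = grads["g"].mean(dim=(2, 3), keepdim=True)    # global-average-pool gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1))          # weighted sum of activations
    return cam / (cam.max() + 1e-8)                         # normalized saliency map

model = RegNet()
moving, fixed = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
heatmap = grad_cam(model, moving, fixed)   # shape (1, 64, 64)
```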

Related content

Image registration is a classic problem and a persistent technical challenge in image processing research. Its goal is to compare or fuse images of the same object acquired under different conditions, for example images taken by different acquisition devices, at different times, or from different viewing angles; registration between images of different objects is sometimes also required. Concretely, given two images from an image dataset, registration seeks a spatial transformation that maps one image onto the other so that points corresponding to the same spatial location in the two images are brought into one-to-one correspondence, thereby achieving information fusion. The technique is widely applied in computer vision, medical image processing, and materials mechanics. Depending on the application, some uses focus on fusing the two images through the resulting transformation, while others focus on studying the transformation itself in order to obtain mechanical properties of the imaged object.
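
To make the definition above concrete, here is a toy intensity-based registration sketch (NumPy/SciPy) that searches for the translation aligning a 'moving' image to a 'fixed' image by maximizing normalized cross-correlation. It is purely illustrative; real pipelines use richer transformation models and similarity metrics.

```python
# Toy intensity-based 2D registration sketch (NumPy/SciPy), illustrative only:
# find the translation that best aligns a moving image to a fixed image.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two images (higher is better)."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def cost(params, fixed, moving):
    dy, dx = params
    warped = nd_shift(moving, (dy, dx), order=1, mode="nearest")
    return -ncc(fixed, warped)   # minimize the negative similarity

# Synthetic example: the moving image is a Gaussian blob shifted by (3, -5) pixels.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / (2.0 * 8.0 ** 2))
moving = nd_shift(fixed, (3.0, -5.0), order=1, mode="nearest")

res = minimize(cost, x0=[0.0, 0.0], args=(fixed, moving), method="Powell")
print("recovered shift:", res.x)   # approximately (-3, 5), the inverse of the applied shift
```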

The numerical solution of continuum damage mechanics (CDM) problems suffers from convergence-related challenges during the material softening stage, and consequently existing iterative solvers are subject to a trade-off between computational expense and solution accuracy. In this work, we present a novel unified arc-length (UAL) method, and we derive the formulation of the analytical tangent matrix and governing system of equations for both local and non-local gradient damage problems. Unlike existing versions of arc-length solvers that monolithically scale the external force vector, the proposed method treats the external force vector as an independent variable and determines the position of the system on the equilibrium path based on all the nodal variations of the external force vector. This approach renders the proposed solver substantially more efficient and robust than existing solvers used in CDM problems. We demonstrate the considerable advantages of the proposed algorithm through several benchmark 1D problems with sharp snap-backs and 2D examples under various boundary conditions and loading scenarios. The proposed UAL approach exhibits a superior ability to overcome critical increments along the equilibrium path. Moreover, the proposed UAL method is 1-2 orders of magnitude faster than force-controlled arc-length and monolithic Newton-Raphson solvers.
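
For readers unfamiliar with arc-length continuation, the following is a minimal textbook-style spherical arc-length sketch (NumPy) for a single-degree-of-freedom softening response. It illustrates the general idea of augmenting equilibrium with a path-length constraint, not the UAL formulation proposed in the work above; the toy internal force law and step size are assumptions.

```python
# Generic single-DOF spherical arc-length continuation sketch (NumPy).
# Textbook-style illustration of path following through softening; NOT the UAL method.
import numpy as np

def f_int(u):            # toy internal force with softening: peak at u = 1
    return u * np.exp(-u)

def k_t(u):              # tangent stiffness dF_int/du
    return (1.0 - u) * np.exp(-u)

def arc_length_path(dl=0.05, psi=1.0, n_inc=120, tol=1e-10):
    u, lam, direction = 0.0, 0.0, 1.0
    path = [(u, lam)]
    for _ in range(n_inc):
        # Tangent predictor satisfying the constraint du^2 + (psi*dlam)^2 = dl^2.
        kt = k_t(u)
        du = direction * dl / np.sqrt(1.0 + (psi * kt) ** 2)
        dlam = kt * du
        # Newton corrections on the augmented (equilibrium + constraint) system.
        for _ in range(50):
            r1 = (lam + dlam) - f_int(u + du)            # equilibrium residual
            r2 = du ** 2 + (psi * dlam) ** 2 - dl ** 2   # arc-length constraint
            if max(abs(r1), abs(r2)) < tol:
                break
            J = np.array([[-k_t(u + du), 1.0],
                          [2.0 * du, 2.0 * psi ** 2 * dlam]])
            step = np.linalg.solve(J, -np.array([r1, r2]))
            du, dlam = du + step[0], dlam + step[1]
        direction = np.sign(du) if du != 0 else direction
        u, lam = u + du, lam + dlam
        path.append((u, lam))
    return np.array(path)

path = arc_length_path()
print(path[-1])   # traces the load-displacement curve past the peak at u = 1
```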

Quasiperiodic systems, related to irrational numbers, are space-filling structures without decay or translation invariance. Accurately recovering these systems, especially in non-smooth cases, presents a significant challenge in numerical computation. In this paper, we propose a new algorithm, the finite points recovery (FPR) method, which is applicable to both smooth and non-smooth cases, to address this challenge. The FPR method first establishes a homomorphism between the lower-dimensional definition domain of the quasiperiodic function and a higher-dimensional torus, then recovers the global quasiperiodic system by employing an interpolation technique with finite points in the definition domain, without dimensional lifting. Furthermore, we develop accurate and efficient strategies for selecting the finite points according to the arithmetic properties of irrational numbers. The corresponding mathematical theory, convergence analysis, and computational complexity analysis of the point selection are presented. Numerical experiments demonstrate the effectiveness and superiority of the FPR approach in recovering both smooth quasiperiodic functions and piecewise constant Fibonacci quasicrystals, whereas existing spectral methods encounter difficulties in accurately recovering non-smooth quasiperiodic functions.
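
The torus-lift idea underlying such methods can be illustrated with a toy sketch (NumPy/SciPy): a 1D quasiperiodic function is viewed as the trace of a periodic parent function on the 2-torus and recovered by interpolating the parent from a finite set of samples. The parent function, grid, and irrational frequency below are illustrative assumptions and do not reflect the FPR point-selection strategy.

```python
# Toy sketch (NumPy/SciPy): evaluate a 1D quasiperiodic function by lifting it to
# its periodic parent function on the 2-torus and interpolating from a finite grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

alpha = np.sqrt(2.0)                       # irrational frequency ratio

def parent(theta1, theta2):                # periodic parent function on the 2-torus
    return np.cos(2 * np.pi * theta1) + np.cos(2 * np.pi * theta2)

# Finite sample grid on [0, 1]^2 (endpoints included so wrapped coordinates stay in range).
n = 65
grid = np.linspace(0.0, 1.0, n)
T1, T2 = np.meshgrid(grid, grid, indexing="ij")
interp = RegularGridInterpolator((grid, grid), parent(T1, T2))

def f_recovered(x):
    """Quasiperiodic f(x) = parent(x mod 1, alpha*x mod 1), recovered by interpolation."""
    pts = np.column_stack([np.mod(x, 1.0), np.mod(alpha * x, 1.0)])
    return interp(pts)

x = np.linspace(0.0, 20.0, 7)
print(np.max(np.abs(f_recovered(x) - parent(x, alpha * x))))  # small interpolation error
```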

Much of the Earth and many surfaces of extraterrestrial bodies are composed of non-cohesive granular matter. Locomoting on granular terrain is challenging for common robotic devices, whether wheeled or legged. In this work, we discover a robust alternative locomotion mechanism on granular media -- generating movement via self-vibration. To demonstrate the effectiveness of this locomotion mechanism, we develop a cube-shaped robot with an embedded vibratory motor and conduct systematic experiments on diverse granular terrains with various particle properties. We investigate how locomotion changes as a function of vibration frequency and intensity on granular terrains. Compared to hard surfaces, we find that such a vibratory locomotion mechanism enables the robot to move faster and more stably on granular surfaces, facilitated by the interaction between the body and the surrounding granules. The simplicity of this robotic system's structural design and controls indicates that vibratory locomotion can be a valuable alternative way to produce robust locomotion on granular terrains. We further demonstrate that such cube-shaped robots can be used as modular units for morphologically structured vibratory robots capable of maneuverable forward and turning motions, showing potential practical scenarios for robotic systems.

Accurate calibration is crucial for using multiple cameras to triangulate the position of objects precisely. However, it is also a time-consuming process that needs to be repeated for every displacement of the cameras. The standard approach is to use a printed pattern with known geometry to estimate the intrinsic and extrinsic parameters of the cameras. The same idea can be applied to event-based cameras, though it requires extra work. By using frame reconstruction from events, a printed pattern can be detected. Alternatively, a blinking pattern can be displayed on a screen, from which the pattern can be detected directly in the event stream. Such calibration methods can provide accurate intrinsic calibration for both frame- and event-based cameras. However, using 2D patterns has several limitations for multi-camera extrinsic calibration when cameras have highly different points of view and a wide baseline. The 2D pattern can only be detected from one direction and needs to be of significant size to compensate for its distance to the camera. This makes extrinsic calibration time-consuming and cumbersome. To overcome these limitations, we propose eWand, a new method that uses blinking LEDs inside opaque spheres instead of a printed or displayed pattern. Our method provides a faster, easier-to-use extrinsic calibration approach that maintains high accuracy for both event- and frame-based cameras.
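
A core ingredient of marker-based calibration with event cameras is identifying which events belong to a blinking LED. The toy sketch below (NumPy) estimates the blinking frequency from event timestamps by binning them into an event-rate signal and taking the dominant FFT peak; the synthetic event model is an assumption, and this is not the eWand detection pipeline.

```python
# Toy sketch (NumPy): estimate the blinking frequency of an LED observed by an
# event-camera pixel from the event timestamps alone. Illustrative only.
import numpy as np

def dominant_frequency(timestamps, t_max, bin_s=1e-3):
    """Dominant frequency (Hz) of the event-rate signal built by binning timestamps."""
    n_bins = int(np.ceil(t_max / bin_s))
    rate, _ = np.histogram(timestamps, bins=n_bins, range=(0.0, t_max))
    spectrum = np.abs(np.fft.rfft(rate - rate.mean()))
    freqs = np.fft.rfftfreq(n_bins, d=bin_s)
    return freqs[np.argmax(spectrum)]

# Synthetic events whose rate is modulated by an LED blinking at 200 Hz.
rng = np.random.default_rng(1)
t_max, f_led, n_candidates = 0.5, 200.0, 20000
t = rng.uniform(0.0, t_max, size=n_candidates)
keep = rng.uniform(size=n_candidates) < 0.5 * (1.0 + np.cos(2 * np.pi * f_led * t))
events = np.sort(t[keep])

print(dominant_frequency(events, t_max))   # ~200.0 Hz
```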

Surface defect inspection is of great importance for industrial manufacturing and production. Although defect inspection methods based on deep learning have made significant progress, these methods still face challenges such as weak defects that are hard to distinguish and defect-like interference in the background. To address these issues, we propose a transformer network with multi-stage CNN (Convolutional Neural Network) feature injection for surface defect segmentation, a UNet-like structure named CINFormer. CINFormer presents a simple yet effective feature integration mechanism that injects multi-level CNN features of the input image into different stages of the transformer encoder. This preserves the merit of the CNN in capturing detailed features and that of the transformer in suppressing background noise, which facilitates accurate defect detection. In addition, CINFormer presents a Top-K self-attention module that focuses on the tokens carrying the most defect-relevant information, so as to further reduce the impact of the redundant background. Extensive experiments conducted on the surface defect datasets DAGM 2007, Magnetic tile, and NEU show that the proposed CINFormer achieves state-of-the-art performance in defect detection.
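
The Top-K idea of retaining only the largest attention scores per query can be sketched generically as follows (PyTorch); the single-head layout, K value, and projection below are illustrative assumptions rather than CINFormer's actual module.

```python
# Minimal Top-K self-attention sketch (PyTorch): keep only the K largest attention
# scores per query token and mask the rest before the softmax.
import torch

def topk_self_attention(x, w_qkv, k):
    """x: (B, N, C) tokens; w_qkv: (C, 3C) projection; keep top-k scores per query."""
    B, N, C = x.shape
    q, key, v = (x @ w_qkv).chunk(3, dim=-1)             # each (B, N, C)
    scores = q @ key.transpose(-2, -1) / C ** 0.5         # (B, N, N)

    # Mask all but the k largest scores in each row before the softmax.
    topk_vals, _ = scores.topk(k, dim=-1)
    threshold = topk_vals[..., -1:]                        # k-th largest per query
    masked = scores.masked_fill(scores < threshold, float("-inf"))

    attn = masked.softmax(dim=-1)                          # (B, N, N), sparse rows
    return attn @ v                                        # (B, N, C)

x = torch.randn(2, 16, 32)
w_qkv = torch.randn(32, 96) / 32 ** 0.5
out = topk_self_attention(x, w_qkv, k=4)
print(out.shape)   # torch.Size([2, 16, 32])
```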

Including information from additional spectral bands (e.g., near-infrared) can improve deep learning model performance for many vision-oriented tasks. There are many possible ways to incorporate this additional information into a deep learning model, but the optimal fusion strategy has not yet been determined and can vary between applications. At one extreme, known as "early fusion," additional bands are stacked as extra channels to obtain an input image with more than three channels. At the other extreme, known as "late fusion," RGB and non-RGB bands are passed through separate branches of a deep learning model and merged immediately before a final classification or segmentation layer. In this work, we characterize the performance of a suite of multispectral deep learning models with different fusion approaches, quantify their relative reliance on different input bands, and evaluate their robustness to naturalistic image corruptions affecting one or more input channels.
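
The two fusion extremes can be sketched as follows (PyTorch) for an RGB plus near-infrared input; the tiny backbones and classification head are illustrative assumptions, not the models evaluated in this work.

```python
# Hedged sketch (PyTorch) of "early fusion" vs. "late fusion" for RGB + NIR input.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Stack all bands as extra input channels of a single backbone."""
    def __init__(self, n_bands=4, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, rgb, nir):                  # rgb: (B,3,H,W), nir: (B,1,H,W)
        return self.backbone(torch.cat([rgb, nir], dim=1))

class LateFusion(nn.Module):
    """Separate branches per band group, merged just before the classifier."""
    def __init__(self, n_classes=10):
        super().__init__()
        def branch(c_in):
            return nn.Sequential(
                nn.Conv2d(c_in, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_branch, self.nir_branch = branch(3), branch(1)
        self.head = nn.Linear(64, n_classes)

    def forward(self, rgb, nir):
        return self.head(torch.cat([self.rgb_branch(rgb), self.nir_branch(nir)], dim=1))

rgb, nir = torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64)
print(EarlyFusion()(rgb, nir).shape, LateFusion()(rgb, nir).shape)   # both (2, 10)
```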

The unified gas-kinetic wave-particle method (UGKWP) has been developed in recent years for multiscale gas, plasma, and multiphase flow transport processes. In this work, we propose an implicit unified gas-kinetic wave-particle (IUGKWP) method to remove the CFL time step constraint. Based on the local integral solution of the radiative transfer equation (RTE), the particle transport processes are categorized into a long-$\lambda$ streaming process and a short-$\lambda$ streaming process relative to a local physical characteristic time $t_p$. In the construction of the IUGKWP method, the long-$\lambda$ streaming process is tracked by the implicit Monte Carlo (IMC) method; the short-$\lambda$ streaming process is evolved by solving the implicit moment equations; and the photon distribution is closed by a local integral solution of the RTE. In the IUGKWP method, the multiscale flux of radiation energy and the multiscale closure of the photon distribution are constructed based on this local integral solution. The IUGKWP method preserves the second-order asymptotic expansion of the RTE in the optically thick regime and adapts its computational complexity to the flow regime. The numerical dissipation is well controlled, and the teleportation error is significantly reduced in the optically thick regime. The computational complexity of the IUGKWP method decreases exponentially as the Knudsen number approaches zero, and the computational efficiency is remarkably improved in the optically thick regime. The IUGKWP method is formulated on a generalized unstructured mesh, and multidimensional 2D and 3D algorithms are developed. Numerical tests are presented to validate the capability of the IUGKWP method in capturing the multiscale photon transport process. The algorithm and code will be applied in engineering applications of inertial confinement fusion (ICF).

This paper proposes a deep sound-field denoiser, a deep neural network (DNN)-based method for denoising optically measured sound-field images. Sound-field imaging using optical methods has gained considerable attention due to its ability to achieve high-spatial-resolution imaging of acoustic phenomena that conventional acoustic sensors cannot accomplish. However, optically measured sound-field images are often heavily contaminated by noise because of the low sensitivity of optical interferometric measurements to airborne sound. Here, we propose a DNN-based sound-field denoising method. Time-varying sound-field image sequences are decomposed into harmonic complex-amplitude images by using a time-directional Fourier transform. The complex images are converted into two-channel images consisting of real and imaginary parts and denoised by a nonlinear-activation-free network. The network is trained on a sound-field dataset obtained from numerical acoustic simulations with randomized parameters. We compared the proposed method with conventional approaches, such as image filters, a spatiotemporal filter, and other DNN architectures, on numerical and experimental data. The experimental data were measured by parallel phase-shifting interferometry and holographic speckle interferometry. The proposed deep sound-field denoiser significantly outperformed the conventional methods on both the numerical and experimental data. Code is available on GitHub: //github.com/nttcslab/deep-sound-field-denoiser.
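
The preprocessing step described above (time-directional Fourier transform followed by splitting into real and imaginary channels) can be sketched as follows (NumPy); the frame rate, driving frequency, and synthetic plane wave are illustrative assumptions.

```python
# Hedged sketch (NumPy): decompose a time-varying sound-field image sequence into a
# harmonic complex-amplitude image via a time-directional FFT, then split it into
# real/imaginary channels as a 2-channel network input.
import numpy as np

def harmonic_complex_amplitude(frames, fs, f0):
    """frames: (T, H, W) image sequence; returns (2, H, W) real/imag channels at f0."""
    T = frames.shape[0]
    spectrum = np.fft.fft(frames, axis=0)                  # time-directional FFT per pixel
    k = int(round(f0 * T / fs))                            # FFT bin closest to f0
    complex_amp = 2.0 * spectrum[k] / T                    # factor 2 for a real-valued signal
    return np.stack([complex_amp.real, complex_amp.imag])  # 2-channel network input

# Synthetic example: a 4 kHz plane wave sampled at 40 kHz for 200 frames.
fs, f0, T, H, W = 40_000.0, 4_000.0, 200, 32, 32
t = np.arange(T) / fs
x = np.linspace(0.0, 1.0, W)
wave = np.sin(2 * np.pi * (f0 * t[:, None, None] - 5.0 * x[None, None, :]))  # (T, 1, W)
frames = np.broadcast_to(wave, (T, H, W))

channels = harmonic_complex_amplitude(frames, fs, f0)
print(channels.shape)   # (2, 32, 32)
```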

Monitoring propeller failures is vital to maintaining the safe and reliable operation of quadrotor UAVs. Simulation-to-reality UAV fault diagnosis techniques offer a secure and economical approach to identifying propeller faults. However, classifiers trained with simulated data perform poorly in real flights due to wind disturbances in outdoor scenarios. In this work, we propose an uncertainty-based fault classifier (UFC) to address the challenge of sim-to-real UAV fault diagnosis in windy scenarios. It uses an ensemble of difference-based deep convolutional neural networks (EDDCNN) to reduce model variance and bias. Moreover, it employs an uncertainty-based decision framework to filter out uncertain predictions. Experimental results demonstrate that the UFC can achieve 100% fault-diagnosis accuracy with a data usage rate of 33.6% in the windy outdoor scenario.
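
A generic version of ensemble-based uncertainty filtering can be sketched as follows (NumPy): average the members' class probabilities and withhold predictions whose predictive entropy exceeds a threshold. The entropy criterion, threshold, and member count are illustrative assumptions, not necessarily the UFC's exact decision rule.

```python
# Hedged sketch (NumPy) of ensemble-based uncertainty filtering for fault diagnosis.
import numpy as np

def uncertainty_filtered_predictions(member_probs, entropy_threshold=0.5):
    """member_probs: (M, N, C) softmax outputs of M ensemble members for N samples."""
    mean_probs = member_probs.mean(axis=0)                           # (N, C)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1) # predictive entropy
    confident = entropy < entropy_threshold                          # boolean mask (N,)
    predictions = mean_probs.argmax(axis=1)
    return predictions[confident], confident                         # filtered labels + mask

# Toy example: 5 ensemble members, 4 samples, 3 fault classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=[2.0, 0.3, 0.3], size=(5, 4))
labels, mask = uncertainty_filtered_predictions(probs)
print(labels, mask, f"data usage rate: {mask.mean():.0%}")
```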

Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
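
The coarse-to-fine cascade can be sketched generically as follows (NumPy): a first-stage model proposes a candidate region whose bounding box (plus a margin) is cropped and handed to a second-stage model. The stand-in models, margin, and threshold below are illustrative assumptions, not the paper's trained 3D FCNs.

```python
# Hedged sketch (NumPy) of a two-stage, coarse-to-fine volumetric segmentation cascade.
import numpy as np

def candidate_bbox(coarse_mask, margin=8):
    """Bounding box (with margin) of the foreground voxels in a binary 3D mask."""
    coords = np.argwhere(coarse_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, coarse_mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def cascade_segment(volume, coarse_model, fine_model, threshold=0.5):
    """Stage 1 restricts the region of interest; stage 2 segments only that crop."""
    coarse_prob = coarse_model(volume)                     # (D, H, W) probabilities
    bbox = candidate_bbox(coarse_prob > threshold)
    fine_seg = np.zeros_like(volume, dtype=np.uint8)
    fine_seg[bbox] = fine_model(volume[bbox])              # classify only the cropped voxels
    return fine_seg

# Toy stand-ins: "coarse" marks a bright blob, "fine" thresholds intensities in the crop.
rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))
volume[20:40, 20:40, 20:40] += 1.0
coarse = lambda v: (v > 1.0).astype(float)
fine = lambda crop: (crop > 1.2).astype(np.uint8)
print(cascade_segment(volume, coarse, fine).sum())
```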
