
Accurate segmentation of the heart is essential for personalized blood flow simulations and surgical intervention planning. Segmentations need to be accurate in every spatial dimension, which is not ensured by segmenting data slice by slice. Two cardiac computed tomography (CT) datasets, consisting of 760 volumes across the whole cardiac cycle from 39 patients and of 60 volumes from 60 patients respectively, were used to train networks to simultaneously segment multiple regions representing the whole heart in 3D. The segmented regions included the left and right atrium and ventricle, left ventricular myocardium, ascending aorta, pulmonary arteries, pulmonary veins, and left atrial appendage. The widely used 3D U-Net and the UNETR architecture were compared to our proposed method optimized for large volumetric inputs. The proposed network architecture, termed Transformer Residual U-Net (TRUNet), maintains the cascade downsampling encoder, cascade upsampling decoder and skip connections from U-Net, while incorporating a Vision Transformer (ViT) block in the encoder alongside a modified ResNet50 block. TRUNet reached higher segmentation performance for all structures within approximately half the training time needed for 3D U-Net and UNETR. The proposed method achieved more precise vessel boundary segmentations and better captured the heart's overall anatomical structure compared to the other methods. The fast training time and accurate delineation of adjacent structures make TRUNet a promising candidate for medical image segmentation tasks. The code for TRUNet is available at github.com/ljollans/TRUNet.
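
The following is a minimal PyTorch sketch of the kind of hybrid encoder-decoder the abstract describes: a 3D U-Net skeleton with a transformer block added to the coarsest encoder stage. The channel counts, the plain convolutional stages (in place of a modified ResNet50 block), and the placement of the transformer are assumptions made for illustration; the actual TRUNet implementation is in the linked repository.

```python
# Simplified TRUNet-style hybrid (assumed layout, not the official code;
# see github.com/ljollans/TRUNet for the released implementation).
import torch
import torch.nn as nn

class ConvBlock3D(nn.Module):
    """Two 3D convolutions with batch norm, as in a standard 3D U-Net stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class ViTBlock(nn.Module):
    """Flattens the coarsest feature map into tokens and applies transformer encoder layers."""
    def __init__(self, channels, num_layers=4, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
    def forward(self, x):
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, D*H*W, C)
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).view(b, c, d, h, w)

class TRUNetSketch(nn.Module):
    def __init__(self, in_ch=1, n_classes=10, base=16):
        super().__init__()
        self.enc1, self.enc2 = ConvBlock3D(in_ch, base), ConvBlock3D(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = nn.Sequential(ConvBlock3D(base * 2, base * 4), ViTBlock(base * 4))
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ConvBlock3D(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = ConvBlock3D(base * 2, base)
        self.head = nn.Conv3d(base, n_classes, 1)
    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        b = self.bottleneck(self.pool(s2))
        d2 = self.dec2(torch.cat([self.up2(b), s2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), s1], dim=1))
        return self.head(d1)          # per-voxel logits for the cardiac structures
```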

Related content

This paper presents a novel stochastic optimisation methodology to perform empirical Bayesian inference in semi-blind image deconvolution problems. Given a blurred image and a parametric class of possible operators, the proposed optimisation approach automatically calibrates the parameters of the blur model by maximum marginal likelihood estimation, followed by (non-blind) image deconvolution by maximum-a-posteriori estimation conditionally to the estimated model parameters. In addition to the blur model, the proposed approach also automatically calibrates the noise variance as well as any regularisation parameters. The marginal likelihood of the blur, noise variance, and regularisation parameters is generally computationally intractable, as it requires calculating several integrals over the entire solution space. Our approach addresses this difficulty by using a stochastic approximation proximal gradient optimisation scheme, which iteratively approximates such integrals by using a Moreau-Yosida regularised unadjusted Langevin Markov chain Monte Carlo algorithm. This optimisation strategy can be easily and efficiently applied to any log-concave model, using the same gradient and proximal operators that are required to compute the maximum-a-posteriori solution by convex optimisation. We provide convergence guarantees for the proposed optimisation scheme under realistic and easily verifiable conditions and subsequently demonstrate the effectiveness of the approach with a series of deconvolution experiments and comparisons with alternative strategies from the state of the art.
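
To make the mechanism concrete, here is a hedged sketch of the stochastic approximation proximal gradient loop with a Moreau-Yosida regularised unadjusted Langevin (MYULA) sampler, specialised to the simple case of calibrating a single regularisation parameter theta of an L1 prior with a known blur operator. The step sizes, the prior choice, and the function names are illustrative assumptions; the paper's scheme additionally updates the blur and noise parameters in the same stochastic-approximation fashion.

```python
# Sketch: SAPG with a MYULA inner sampler for p(x | theta) ∝ exp(-theta * ||x||_1)
# and a Gaussian likelihood y = A x + noise (A, At are user-supplied operator handles).
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau*||.||_1, used by the Moreau-Yosida envelope of the prior."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sapg_l1(y, A, At, sigma2, n_iter=5000, gamma=1e-4, lam=1e-2, delta=1e-3, theta0=1.0):
    x = At(y)                              # warm start from the adjoint of the blur
    theta = theta0
    d = x.size
    for k in range(n_iter):
        # MYULA step: gradient of the data-fit term plus the gradient of the
        # Moreau-Yosida envelope of theta*||.||_1, plus injected Gaussian noise.
        grad_f = At(A(x) - y) / sigma2
        grad_my = (x - soft_threshold(x, lam * theta)) / lam
        x = x - gamma * (grad_f + grad_my) + np.sqrt(2 * gamma) * np.random.randn(*x.shape)
        # Stochastic-approximation update of theta; for a 1-homogeneous prior the
        # prior expectation of ||x||_1 equals d / theta, giving this gradient estimate.
        theta = max(theta + delta / (k + 1) ** 0.8 * (d / theta - np.abs(x).sum()), 1e-6)
    return x, theta
```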

We show that the known list-decoding algorithms for univariate multiplicity and folded Reed-Solomon (FRS) codes can be made to run in nearly-linear time. This yields, to our knowledge, the first known family of codes that can be decoded in nearly linear time, even as they approach the list decoding capacity. Univariate multiplicity codes and FRS codes are natural variants of Reed-Solomon codes that were discovered and studied for their applications to list-decoding. It is known that for every $\epsilon >0$, and rate $R \in (0,1)$, there exist explicit families of these codes that have rate $R$ and can be list-decoded from a $(1-R-\epsilon)$ fraction of errors with constant list size in polynomial time (Guruswami & Wang (IEEE Trans. Inform. Theory, 2013) and Kopparty, Ron-Zewi, Saraf & Wootters (SIAM J. Comput. 2023)). In this work, we present randomized algorithms that perform the above tasks in nearly linear time. Our algorithms have two main components. The first builds upon the lattice-based approach of Alekhnovich (IEEE Trans. Inf. Theory 2005), who designed a nearly linear time list-decoding algorithm for Reed-Solomon codes approaching the Johnson radius. As part of the second component, we design nearly-linear time algorithms for two natural algebraic problems. The first algorithm solves linear differential equations of the form $Q\left(x, f(x), \frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$ where $Q$ has the form $Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i = 0}^m Q_i(x)\cdot y_i$. The second solves functional equations of the form $Q\left(x, f(x), f(\gamma x), \dots,f(\gamma^m x)\right) \equiv 0$ where $\gamma$ is a high-order field element. These algorithms can be viewed as generalizations of classical algorithms of Sieveking (Computing 1972) and Kung (Numer. Math. 1974) for computing the modular inverse of a power series, and might be of independent interest.
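
Since the abstract frames its solvers as generalisations of the Sieveking-Kung algorithm, the following small sketch shows that classical Newton iteration for inverting a power series modulo $x^n$ over a prime field. Naive convolution is used for readability, so this version runs in quadratic time; replacing it with fast multiplication gives the nearly-linear running time the abstract builds on.

```python
# Sieveking-Kung Newton iteration: double the precision of the inverse each round.
def poly_mul_mod(a, b, n, p):
    """Product of two coefficient lists, truncated to degree < n, over F_p."""
    c = [0] * n
    for i, ai in enumerate(a[:n]):
        if ai:
            for j, bj in enumerate(b[: n - i]):
                c[i + j] = (c[i + j] + ai * bj) % p
    return c

def series_inverse(f, n, p):
    """Return g with f*g ≡ 1 (mod x^n, mod p); requires f[0] != 0."""
    g = [pow(f[0], -1, p)]                    # inverse of the constant term
    k = 1
    while k < n:
        k = min(2 * k, n)
        fg = poly_mul_mod(f[:k], g, k, p)
        # Newton step: g <- g * (2 - f*g) mod x^k
        correction = [(-c) % p for c in fg]
        correction[0] = (2 - fg[0]) % p
        g = poly_mul_mod(g, correction, k, p)
    return g

# Example: the inverse of 1 + x modulo x^4 over F_7 is 1 - x + x^2 - x^3.
assert series_inverse([1, 1], 4, 7) == [1, 6, 1, 6]
```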

We propose a new Bayesian strategy for adaptation to smoothness in nonparametric models based on heavy tailed series priors. We illustrate it in a variety of settings, showing in particular that the corresponding Bayesian posterior distributions achieve adaptive rates of contraction in the minimax sense (up to logarithmic factors) without the need to sample hyperparameters. Unlike many existing procedures, where a form of direct model (or estimator) selection is performed, the method can be seen as performing a soft selection through the prior tail. In Gaussian regression, such heavy tailed priors are shown to lead to (near-)optimal simultaneous adaptation both in the $L^2$- and $L^\infty$-sense. Results are also derived for linear inverse problems, for anisotropic Besov classes, and for certain losses in more general models through the use of tempered posterior distributions. We present numerical simulations corroborating the theory.
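
The following short sketch illustrates, under assumed choices, what a draw from such a heavy-tailed series prior can look like: coefficients with deterministic polynomial decay multiplied by Student-t variables on a cosine basis. The basis, decay exponent, and degrees of freedom are illustrative assumptions; the heavy tail is what lets a few coefficients remain large, producing the soft selection described in the abstract.

```python
# Illustrative sample path from a heavy-tailed series prior (assumed basis and scales).
import numpy as np

def sample_prior_path(x, n_terms=200, decay=1.5, df=3, rng=None):
    rng = np.random.default_rng(rng)
    j = np.arange(1, n_terms + 1)
    sigma = j ** (-0.5 - decay)                            # deterministic polynomial decay
    xi = rng.standard_t(df, size=n_terms)                  # heavy-tailed (Student-t) coefficients
    basis = np.sqrt(2) * np.cos(np.pi * np.outer(j, x))    # cosine basis on [0, 1]
    return (sigma * xi) @ basis                            # f evaluated on the grid x

x = np.linspace(0.0, 1.0, 500)
f = sample_prior_path(x, rng=0)
```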

Necessary and sufficient conditions for uniform consistency are explored. The null hypothesis is simple. The nonparametric sets of alternatives are bounded convex sets in $\mathbb{L}_p$, $p > 1$, with "small" balls deleted. The "small" balls are centered at the point of the hypothesis, and their radii tend to zero as the sample size increases. For the problem of hypothesis testing on a density, we show that uniformly consistent tests exist for these sets of alternatives, for some sequence of ball radii, if and only if the convex set is relatively compact. The results are established for the problems of hypothesis testing on a density, signal detection in Gaussian white noise, linear ill-posed problems with random Gaussian noise, and others.

Deployable polyhedrons can transform between Platonic and Archimedean polyhedrons to meet the demands of various engineering applications. However, the existing design solutions often have multiple degrees of freedom and complicated mechanism links and joints, which greatly limits their potential in practice. Combining the fundamentals of solid geometry and mechanism kinematics, this paper proposes a family of kirigami Archimedean polyhedrons based on the N-fold-symmetric loops of spatial 7R linkage, which perform one-DOF radial transformation following tetrahedral, octahedral, or icosahedral symmetry. Moreover, in each symmetric polyhedral group, three different transforming paths can be achieved from one identical deployed configuration. We also demonstrate that such a design strategy can be readily applied to polyhedral tessellation. This work provides a rich family of solutions for deployable polyhedrons to facilitate their applications in aerospace exploration, architecture, metamaterials and beyond.

This work explores the dimension reduction problem for Bayesian nonparametric regression and density estimation. More precisely, we are interested in estimating a functional parameter $f$ over the unit ball in $\mathbb{R}^d$, which depends only on a $d_0$-dimensional subspace of $\mathbb{R}^d$, with $d_0 < d$. It is well known that rescaled Gaussian process priors over the function space achieve smoothness adaptation and posterior contraction with near minimax-optimal rates. Moreover, hierarchical extensions of this approach, equipped with subspace projection, can also adapt to the intrinsic dimension $d_0$ (\cite{Tokdar2011DimensionAdapt}). When the ambient dimension $d$ does not vary with $n$, the minimax rate remains of the order $n^{-\beta/(2\beta + d_0)}$. However, this is up to multiplicative constants that can become prohibitively large when $d$ grows. The dependence between the contraction rate and the ambient dimension has not been fully explored yet, and this work provides a first insight: we let the dimension $d$ grow with $n$ and, by combining the arguments of \cite{Tokdar2011DimensionAdapt} and \cite{Jiang2021VariableSelection}, we derive a growth rate for $d$ that still leads to posterior consistency with the minimax rate. The optimality of this growth rate is then discussed. Additionally, we provide a set of assumptions under which consistent estimation of $f$ leads to a correct estimation of the subspace projection, assuming that $d_0$ is known.
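
As a rough illustration of the prior construction involved, the sketch below draws from a Gaussian process whose kernel only sees a random $d_0$-dimensional linear projection of the input, with a random rescaling of the inverse length scale. The kernel, the rescaling distribution, and the use of a random orthonormal projection are assumptions chosen for illustration rather than the paper's exact hierarchical prior.

```python
# Draw f(X) from a squared-exponential GP applied to a random d0-dimensional projection.
import numpy as np

def sample_projected_gp(X, d0, rng=None):
    rng = np.random.default_rng(rng)
    n, d = X.shape
    P, _ = np.linalg.qr(rng.standard_normal((d, d0)))   # random orthonormal projection basis
    a = rng.gamma(shape=2.0, scale=1.0)                  # random rescaling (inverse length scale)
    Z = X @ P                                            # project inputs to the active subspace
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.exp(-(a ** 2) * sq) + 1e-8 * np.eye(n)        # kernel depends on the projection only
    return rng.multivariate_normal(np.zeros(n), K)

f_vals = sample_projected_gp(np.random.default_rng(0).uniform(-1, 1, size=(50, 10)), d0=2, rng=1)
```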

It is well known that Newton's method, especially when applied to large problems such as the discretization of nonlinear partial differential equations (PDEs), can have trouble converging if the initial guess is too far from the solution. This work focuses on accelerating this convergence, in the context of the discretization of nonlinear elliptic PDEs. We first provide a quick review of existing methods, and justify our choice of learning an initial guess with a Fourier neural operator (FNO). This choice was motivated by the mesh-independence of such operators, whose training and evaluation can be performed on grids with different resolutions. The FNO is trained over generated data by minimizing loss functions based on the PDE discretization. Numerical results, in one and two dimensions, show that the proposed initial guess accelerates the convergence of Newton's method by a large margin compared to a naive initial guess, especially for highly nonlinear or anisotropic problems.
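
The sketch below shows where a learned initial guess enters a Newton solver, using a 1D nonlinear Poisson model problem $-u'' + u^3 = f$ with homogeneous Dirichlet data. The `initial_guess` argument is a hypothetical placeholder standing in for the trained FNO; passing the zero vector reproduces the naive starting point the abstract compares against. The specific problem and parameters are illustrative assumptions, not the paper's test cases.

```python
# Newton's method for a discretised nonlinear elliptic problem with a pluggable initial guess.
import numpy as np

def newton_solve(f, initial_guess, tol=1e-10, max_iter=50):
    n = f.size
    h = 1.0 / (n + 1)
    # Second-difference matrix for -u'' with homogeneous Dirichlet boundary conditions.
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2
    u = initial_guess(f)                         # learned (FNO) or naive starting point
    for _ in range(max_iter):
        residual = A @ u + u**3 - f
        if np.linalg.norm(residual) < tol:
            break
        jacobian = A + np.diag(3.0 * u**2)       # derivative of the cubic nonlinearity
        u = u - np.linalg.solve(jacobian, residual)
    return u

# Naive baseline: start from zero. A trained operator would instead map f to an approximate solution.
u = newton_solve(np.ones(100), initial_guess=lambda f: np.zeros_like(f))
```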

The brains of all bilaterally symmetric animals on Earth are divided into left and right hemispheres. The anatomy and functionality of the hemispheres have a large degree of overlap, but there are asymmetries and they specialize to possess different attributes. Several computational models mimic hemispheric asymmetries with a focus on reproducing human data on semantic and visual processing tasks. In this study, we aimed to understand how dual hemispheres in a bilateral architecture interact to perform well in a given task. We propose a bilateral artificial neural network that imitates lateralization observed in nature: the left hemisphere specializes in specifics and the right in generalities. We used different training objectives to achieve the desired specialization and tested it on an image classification task with two different CNN backbones -- ResNet and VGG. Our analysis found that the hemispheres represent complementary features that are exploited by a network head which implements a type of weighted attention. The bilateral architecture outperformed a range of baselines of similar representational capacity that do not exploit differential specialization, with the exception of a conventional ensemble of unilateral networks trained on a dual training objective for specifics and generalities. The results demonstrate the efficacy of bilateralism, contribute to the discussion of bilateralism in biological brains, and suggest that the principle may serve as an inductive bias for new AI systems.
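
A minimal PyTorch sketch of a bilateral classifier in the spirit of the abstract is shown below: two backbones, one supervised with fine-grained (specific) labels and one with coarse (general) labels, fused by a learned weighting over the two feature streams. The backbone choice, feature dimensions, auxiliary heads, and the exact fusion rule are assumptions for illustration rather than the paper's architecture.

```python
# Two-hemisphere classifier with a weighted-attention fusion head (illustrative layout).
import torch
import torch.nn as nn
from torchvision import models

class BilateralNet(nn.Module):
    def __init__(self, n_fine, n_coarse):
        super().__init__()
        self.left = models.resnet18(weights=None)     # "left hemisphere": trained on specifics
        self.right = models.resnet18(weights=None)    # "right hemisphere": trained on generalities
        feat = self.left.fc.in_features
        self.left.fc, self.right.fc = nn.Identity(), nn.Identity()
        self.left_head = nn.Linear(feat, n_fine)       # auxiliary head for fine-grained labels
        self.right_head = nn.Linear(feat, n_coarse)    # auxiliary head for coarse labels
        self.gate = nn.Sequential(nn.Linear(2 * feat, 2), nn.Softmax(dim=-1))  # weights over the two streams
        self.classifier = nn.Linear(feat, n_fine)      # final prediction from the fused features

    def forward(self, x):
        zl, zr = self.left(x), self.right(x)
        w = self.gate(torch.cat([zl, zr], dim=-1))      # per-sample weighting of each hemisphere
        fused = w[:, :1] * zl + w[:, 1:] * zr
        return self.classifier(fused), self.left_head(zl), self.right_head(zr)
```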

This study proposes a method for qualitatively evaluating and designing human-like driver models for autonomous vehicles. While most existing research on human-likeness has focused on quantitative evaluation, it is crucial to consider qualitative measures to accurately capture human perception. To this end, we conducted surveys utilizing both a video-based study and a human experience-based study. The findings of this research can significantly contribute to the development of naturalistic and human-like driver models for autonomous vehicles, enabling them to safely and efficiently coexist with human-driven vehicles in diverse driving scenarios.

Human-in-the-loop learning aims to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing work on human-in-the-loop from a data perspective and classify it into three categories with a progressive relationship: (1) improving model performance through data processing, (2) improving model performance through interventional model training, and (3) the design of independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and provide a brief classification and discussion of applications in natural language processing, computer vision, and other areas. We also point out open challenges and opportunities. This survey is intended to provide a high-level summary of human-in-the-loop learning and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
