Since the Radon transform (RT) is a line-integral functional, several modeling assumptions are made about the Computed Tomography (CT) system, making analytical image reconstruction methods, such as Filtered Backprojection (FBP), sensitive to artifacts and noise. On the other hand, a new integral transform, called the Scale Space Radon Transform (SSRT), has recently been introduced, of which the RT is a particular case. Thanks to its interesting properties, such as good scale-space behavior, the SSRT has found a number of new applications. In this paper, with the aim of improving the quality of images reconstructed by these methods, we propose to model the X-ray beam with the SSRT, whose assumptions about the physical dimensions of the CT system elements better reflect reality. After presenting the basic properties and the inversion of the SSRT, the FBP algorithm is used to reconstruct the image from the SSRT sinogram, where the RT spectrum used in FBP is replaced by the SSRT and Gaussian kernel spectra, expressed in the frequency domain. PSNR and SSIM are used as quality measures to compare RT- and SSRT-based image reconstruction on the Shepp-Logan head and anthropomorphic abdominal phantoms. The first findings show that the SSRT-based method outperforms the RT-based methods, especially when the number of projections is reduced, making it more appropriate for applications requiring low-dose radiation, such as medical X-ray CT. While SSRT-FBP and RT-FBP have almost the same runtime, the experiments show that SSRT-FBP is more robust to the Poisson-Gaussian noise corrupting CT data.
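
As a rough illustration of the pipeline described above, the sketch below emulates an SSRT-style sinogram by blurring the Radon sinogram of the Shepp-Logan phantom with a Gaussian along the detector axis, then divides by the Gaussian spectrum before standard ramp-filtered backprojection. The scale sigma, the projection count, and the regularization floor are illustrative choices, not values from the paper.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon
from scipy.ndimage import gaussian_filter1d

# Illustrative sketch: the SSRT sinogram is emulated by blurring the Radon
# sinogram with a Gaussian along the detector axis (sigma is a free scale
# parameter chosen for the demo, not taken from the paper).
image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 60, endpoint=False)  # reduced projections
sigma = 2.0

rt_sino = radon(image, theta=theta)
ssrt_sino = gaussian_filter1d(rt_sino, sigma=sigma, axis=0)

# Deconvolve the Gaussian in the detector-frequency domain before FBP,
# i.e. divide by the Gaussian spectrum (regularized to avoid blow-up).
n = ssrt_sino.shape[0]
freqs = np.fft.fftfreq(n)
gauss_spectrum = np.exp(-2.0 * (np.pi * freqs * sigma) ** 2)
gauss_spectrum = np.maximum(gauss_spectrum, 1e-3)  # regularization floor

deconv = np.real(np.fft.ifft(np.fft.fft(ssrt_sino, axis=0)
                             / gauss_spectrum[:, None], axis=0))
recon = iradon(deconv, theta=theta, filter_name='ramp')
```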

Related content

The moments of the coefficients of elliptic curve L-functions are related to numerous arithmetic problems. Rosen and Silverman proved a conjecture of Nagao relating the first moment of one-parameter families satisfying Tate's conjecture to the rank of the corresponding elliptic surface over Q(T); one can also construct families of moderate rank by finding families with large first moments. Michel proved that if j(T) is non-constant, then the second moment of the family is of size p^2 + O(p^(3/2)); these two moments show that, for suitably small support, the behavior of zeros near the central point agrees with that of eigenvalues from random matrix ensembles, with the higher moments affecting the rate of convergence. In his thesis, Miller noticed a negative bias in the second moment of every one-parameter family of elliptic curves over the rationals whose second moment had a calculable closed-form expression; specifically, the first lower-order term that does not average to zero is, on average, negative. This Bias Conjecture has been confirmed for many families; however, these are highly non-generic families whose resulting Legendre sums can be determined. Inspired by the recent successes of Yang-Hui He, Kyu-Hwan Lee, Thomas Oliver, Alexey Pozdnyakov, and others in investigating murmurations of elliptic curve coefficients with machine learning techniques, we pose a similar problem for understanding the Bias Conjecture. As a start to this program, we numerically investigate the Bias Conjecture for a family whose bias is positive for half the primes. Since the numerics do not offer conclusive evidence that the negative bias for the other half is enough to overwhelm the positive bias, the Bias Conjecture cannot yet be verified for this family.
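
To make the objects concrete: for a one-parameter family $E_t$ over $\mathbb{F}_p$, the second moment is $A_{2,\mathcal{F}}(p)=\sum_{t \bmod p} a_t(p)^2$, where $a_t(p)$ is computable as a Legendre-symbol sum. The sketch below runs this computation for the illustrative family $y^2 = x^3 + tx + 1$ (not the family studied in the paper), skipping singular fibers, and prints the deviation of the second moment from Michel's main term $p^2$.

```python
from sympy import primerange, legendre_symbol

def a_t(p, t):
    """Trace of Frobenius for E_t: y^2 = x^3 + t*x + 1 over F_p,
    via a_t(p) = -sum_x legendre((x^3 + t*x + 1) / p)."""
    return -sum(legendre_symbol((x**3 + t*x + 1) % p, p) for x in range(p))

# Michel: second moment = p^2 + O(p^{3/2}) for non-constant j(T); the Bias
# Conjecture concerns the sign of the lower-order terms printed below.
for p in primerange(5, 60):
    m2 = sum(a_t(p, t)**2 for t in range(p)
             if (4 * t**3 + 27) % p != 0)   # skip singular fibers
    print(p, m2, m2 - p**2)
```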

It is common to model a deterministic response function, such as the output of a computer experiment, as a Gaussian process with a Mat\'ern covariance kernel. The smoothness parameter of a Mat\'ern kernel determines many important properties of the model in the large data limit, including the rate of convergence of the conditional mean to the response function. We prove that the maximum likelihood estimate of the smoothness parameter cannot asymptotically undersmooth the truth when the data are obtained on a fixed bounded subset of $\mathbb{R}^d$. That is, if the data-generating response function has Sobolev smoothness $\nu_0 > d/2$, then the smoothness parameter estimate cannot be asymptotically less than $\nu_0$. The lower bound is sharp. Additionally, we show that maximum likelihood estimation recovers the true smoothness for a class of compactly supported self-similar functions. For cross-validation we prove an asymptotic lower bound $\nu_0 - d/2$, which however is unlikely to be sharp. The results are based on approximation theory in Sobolev spaces and some general theorems that restrict the set of values that the parameter estimators can take.
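
A minimal numerical illustration of the estimator analyzed above: fit a zero-mean Gaussian process with a Matérn kernel to samples of a rough response and maximize the log marginal likelihood over the smoothness $\nu$. The response function, length-scale, jitter, and search bounds are arbitrary choices for the sketch.

```python
import numpy as np
from scipy.special import gamma, kv
from scipy.optimize import minimize_scalar

def matern(r, nu, ell=0.2):
    """Matern covariance with smoothness nu and length-scale ell."""
    r = np.where(r == 0, 1e-12, r)  # avoid 0 * inf at r = 0
    z = np.sqrt(2 * nu) * r / ell
    return (2 ** (1 - nu) / gamma(nu)) * z ** nu * kv(nu, z)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 40))
f = np.abs(x - 0.5) ** 1.5          # a response of finite Sobolev smoothness

def neg_log_ml(nu):
    """Negative log marginal likelihood of the noiseless GP model."""
    K = matern(np.abs(x[:, None] - x[None, :]), nu) + 1e-8 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
    return 0.5 * f @ alpha + np.log(np.diag(L)).sum()

res = minimize_scalar(neg_log_ml, bounds=(0.55, 6.0), method='bounded')
print("estimated smoothness:", res.x)
```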

Bayesian P-splines and basis determination through Bayesian model selection are both commonly employed strategies for nonparametric regression using spline basis expansions within the Bayesian framework. Despite their widespread use, each method has particular limitations that may introduce potential estimation bias depending on the nature of the target function. To overcome the limitations associated with each method while capitalizing on their respective strengths, we propose a new prior distribution that integrates the essentials of both approaches. The proposed prior distribution assesses the complexity of the spline model based on a penalty term formed by a convex combination of the penalties from both methods. The proposed method exhibits adaptability to the unknown level of smoothness, while achieving the minimax-optimal posterior contraction rate up to a logarithmic factor. We provide an efficient Markov chain Monte Carlo algorithm for implementing the proposed approach. Our extensive simulation study reveals that the proposed method outperforms other competitors in terms of performance metrics or model complexity.
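
The sketch below gives a frequentist stand-in for the idea of mixing penalties: a B-spline fit whose penalty is a convex combination (weight $\gamma$) of a P-spline second-order difference penalty and a ridge-type complexity penalty. The paper's actual construction is a Bayesian prior fitted by MCMC; the mixing weight, basis size, and ridge surrogate here are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x**2) + rng.normal(0, 0.2, x.size)

def bspline_design(x, K, degree=3):
    """Design matrix for K clamped cubic B-spline basis functions on [0, 1]."""
    inner = np.linspace(0, 1, K - degree + 1)
    t = np.r_[[0.0] * degree, inner, [1.0] * degree]
    return BSpline.design_matrix(x, t, degree).toarray()

def fit(K, lam, gamma):
    """Penalized least squares with a convex mix (weight gamma) of a
    second-difference P-spline penalty and a ridge penalty; the mix is a
    stand-in for the paper's combined prior (illustrative only)."""
    B = bspline_design(x, K)
    D = np.diff(np.eye(K), n=2, axis=0)          # second-order differences
    P = gamma * (D.T @ D) + (1 - gamma) * np.eye(K)
    beta = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
    return B @ beta

yhat = fit(K=20, lam=1.0, gamma=0.7)
```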

The performance of Hamiltonian Monte Carlo simulations crucially depends on both the integration timestep and the number of integration steps. We present an adaptive general-purpose framework to automatically tune these parameters, based on a local loss function that promotes fast exploration of phase space. We show that a good correspondence between the loss and the autocorrelation time can be established, allowing for gradient-based optimization in a fully differentiable setup. The loss is constructed so that it also allows for gradient-driven learning of a distribution over the number of integration steps. Our approach is demonstrated on the one-dimensional harmonic oscillator and on alanine dipeptide, a small biomolecule commonly used as a test case for simulation methods. Through the application to the harmonic oscillator, we highlight the importance of not using a fixed timestep, which would otherwise produce a rugged loss surface with many local minima that traps the optimization. In the case of alanine dipeptide, by tuning the only free parameter of our loss definition, we find a good correspondence between it and the autocorrelation times, resulting in a $>100$-fold speedup in the optimization of simulation parameters compared to a grid search. For this system, we also extend the integrator to allow for atom-dependent timesteps, providing a further $25\%$ reduction in autocorrelation times.
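
As a toy version of the tuning problem, the sketch below runs HMC on the one-dimensional harmonic oscillator and scores a (timestep, number-of-steps) pair by the expected squared jump distance, a common exploration proxy; the paper's differentiable loss is related in spirit but not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def leapfrog(q, p, eps, n_steps, grad_U):
    """Standard leapfrog integrator used inside HMC."""
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

def hmc_esjd(eps, n_steps, n_iter=2000):
    """Expected squared jump distance of HMC on U(q) = q^2 / 2
    (rejected proposals contribute a jump of zero)."""
    U = lambda q: 0.5 * q**2
    grad_U = lambda q: q
    q, jumps = 0.0, []
    for _ in range(n_iter):
        p0 = rng.normal()
        q_new, p_new = leapfrog(q, p0, eps, n_steps, grad_U)
        dH = (U(q_new) + 0.5 * p_new**2) - (U(q) + 0.5 * p0**2)
        if np.log(rng.uniform()) < -dH:          # Metropolis accept
            jumps.append((q_new - q) ** 2)
            q = q_new
        else:
            jumps.append(0.0)
    return np.mean(jumps)

for eps in (0.1, 0.5, 1.0):
    print(eps, hmc_esjd(eps, n_steps=10))
```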

The self-random number generation (SRNG) problem is considered in a general setting. In the literature, the optimum SRNG rate with respect to the variational distance has been discussed. In this paper, we first attempt to characterize the optimum SRNG rate with respect to a subclass of $f$-divergences. The subclass considered in this paper includes typical distance measures such as the variational distance, the KL divergence, and the Hellinger distance, so our result can be considered a generalization of the previous result for the variational distance. Next, we examine the obtained optimum SRNG rate from several viewpoints. The $\varepsilon$-source coding problem is closely related to the SRNG problem, and our results reveal how the SRNG problem with $f$-divergences relates to the $\varepsilon$-fixed-length source coding problem. We also apply our results to the rate-distortion-perception (RDP) function; as a result, we establish a lower bound for the RDP function with respect to $f$-divergences using our findings. Finally, we discuss the representation of the optimum SRNG rate using the smooth R\'enyi entropy.
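
For reference, the $f$-divergence of distributions $P$ and $Q$ is
$$ D_f(P\|Q) = \sum_{x} Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right), $$
where $f$ is convex with $f(1)=0$; the choices $f(t)=\tfrac{1}{2}|t-1|$, $f(t)=t\log t$, and $f(t)=(\sqrt{t}-1)^2$ recover the variational distance, the KL divergence, and the squared Hellinger distance (up to normalization convention), respectively.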

Current methods based on Neural Radiance Fields (NeRF) significantly lack the capacity to quantify uncertainty in their predictions, particularly in unseen space, including occluded and out-of-scene content. This limitation hinders their broad application in robotics, where the reliability of model predictions must be considered for tasks such as robotic exploration and planning in unknown environments. To address this, we propose a novel approach to estimate a 3D Uncertainty Field based on the learned incomplete scene geometry, which explicitly identifies these unseen regions. By considering the accumulated transmittance along each camera ray, our Uncertainty Field infers 2D pixel-wise uncertainty, exhibiting high values for rays cast directly toward occluded regions or content outside the scene. To quantify the uncertainty on the learned surface, we model a stochastic radiance field. Our experiments demonstrate that, compared with recent methods, ours is the only one that can explicitly reason about high uncertainty both in unseen 3D regions and on the corresponding 2D rendered pixels. Furthermore, we illustrate that our designed Uncertainty Field is ideally suited for real-world robotics tasks, such as next-best-view selection.
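
The accumulated transmittance at the core of the method is standard volume rendering; the sketch below computes it for one ray and uses the residual transmittance leaving the ray as a toy uncertainty signal. The paper's actual Uncertainty Field construction differs; this only illustrates the underlying quantity.

```python
import numpy as np

def ray_uncertainty(densities, deltas):
    """Accumulated transmittance along one camera ray plus a simple
    residual-transmittance signal (toy sketch, not the paper's model).

    densities: per-sample volume densities sigma_i along the ray
    deltas:    distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)           # per-sample opacity
    trans = np.cumprod(np.r_[1.0, 1.0 - alphas])[:-1]    # T_i = prod_{j<i}(1 - a_j)
    residual = trans[-1] * (1.0 - alphas[-1])            # light leaving the ray
    return trans, residual   # high residual => ray exits into unseen space

sigma = np.array([0.0, 0.1, 2.0, 5.0, 0.2])
delta = np.full(5, 0.1)
T, u = ray_uncertainty(sigma, delta)
print(T, u)
```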

The Support Vector Machine (SVM) algorithm requires a high computational cost (in both memory and time) to solve a complex quadratic programming (QP) optimization problem during the training process, so SVM necessitates capable computing hardware. The central processing unit (CPU) clock frequency cannot be increased further due to physical limitations of the miniaturization process. However, parallel multi-architecture computing, available in both multi-core CPUs and highly scalable GPUs, emerges as a promising way to enhance algorithm performance. There is therefore an opportunity to reduce the high computational time SVM requires to solve the QP optimization problem. This paper presents a comparative study that implements the SVM algorithm on different parallel architecture frameworks. The experimental results show that the SVM MPI-CUDA implementation achieves a speedup over the SVM TensorFlow implementation on different datasets. Moreover, the SVM TensorFlow implementation provides a cross-platform solution that can be migrated to alternative hardware components, which reduces development time.
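
The QP cost that motivates the study can be seen with any off-the-shelf solver; the sketch below times single-core scikit-learn training as the dataset grows (the MPI-CUDA and TensorFlow implementations benchmarked in the paper are not reproduced here).

```python
import time
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Illustration of the superlinear cost of the SVM QP solve as the training
# set grows; dataset shape and hyperparameters are arbitrary demo choices.
for n in (1000, 2000, 4000, 8000):
    X, y = make_classification(n_samples=n, n_features=20, random_state=0)
    t0 = time.perf_counter()
    SVC(kernel='rbf', C=1.0).fit(X, y)
    print(n, round(time.perf_counter() - t0, 3), "s")
```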

We investigate resource allocation for quantum entanglement distribution over an optical network. We characterize and model a network architecture that employs a single quasi-deterministic time-frequency heralded EPR-pair source, and we develop a routing scheme for distributing entangled photon pairs over such a network. We focus on fairness in entanglement distribution and compare various spectrum allocation schemes in terms of both their performance and their Jain fairness index.
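
The Jain fairness index used for the comparison is $J(x_1,\dots,x_n) = \left(\sum_i x_i\right)^2 / \left(n \sum_i x_i^2\right)$, equal to 1 for a perfectly fair allocation and approaching $1/n$ in the most unfair case; a minimal implementation:

```python
import numpy as np

def jain_index(rates):
    """Jain fairness index of per-user rates:
    J = (sum x)^2 / (n * sum x^2); J = 1 means perfectly fair."""
    x = np.asarray(rates, dtype=float)
    return x.sum() ** 2 / (x.size * (x ** 2).sum())

print(jain_index([1.0, 1.0, 1.0]))   # 1.0 (perfectly fair)
print(jain_index([3.0, 1.0, 0.0]))   # ~0.533 (unfair)
```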

Mutation validation (MV) is a recently proposed approach for model selection, garnering significant interest due to its unique characteristics and potential benefits compared to the widely used cross-validation (CV) method. In this study, we empirically compared MV and $k$-fold CV using benchmark and real-world datasets. Employing Bayesian tests, we compared generalization estimates, yielding three posterior probabilities: practical equivalence, CV superiority, and MV superiority. We also evaluated differences in the capacity of the selected models and in computational efficiency. We found that both MV and CV select models with practically equivalent generalization performance across various machine learning algorithms and the majority of benchmark datasets. MV exhibited advantages in terms of selecting simpler models and lower computational costs. However, in some cases MV selected overly simplistic models, leading to underfitting, and showed instability in hyperparameter selection. These limitations of MV became more evident in the evaluation of a real-world neuroscientific task of predicting sex at birth using brain functional connectivity.
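
For context, the CV side of the comparison looks like the following sketch: $k$-fold generalization estimates for models of increasing capacity (the MV estimates and the Bayesian tests used in the study are not reproduced here).

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# 10-fold CV accuracy for decision trees of increasing depth; the selected
# model is the capacity level with the best CV estimate.
X, y = load_breast_cancer(return_X_y=True)
for depth in (1, 3, 5, None):
    score = cross_val_score(DecisionTreeClassifier(max_depth=depth,
                                                   random_state=0),
                            X, y, cv=10).mean()
    print(depth, round(score, 3))
```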

Bayesian model comparison (BMC) offers a principled approach for assessing the relative merits of competing computational models and propagating uncertainty into model selection decisions. However, BMC is often intractable for the popular class of hierarchical models due to their high-dimensional nested parameter structure. To address this intractability, we propose a deep learning method for performing BMC on any set of hierarchical models which can be instantiated as probabilistic programs. Since our method enables amortized inference, it allows efficient re-estimation of posterior model probabilities and fast performance validation prior to any real-data application. In a series of extensive validation studies, we benchmark the performance of our method against the state-of-the-art bridge sampling method and demonstrate excellent amortized inference across all BMC settings. We then showcase our method by comparing four hierarchical evidence accumulation models that have previously been deemed intractable for BMC due to partly implicit likelihoods. Additionally, we demonstrate how transfer learning can be leveraged to enhance training efficiency. We provide reproducible code for all analyses and an open-source implementation of our method.
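
A heavily simplified sketch of the amortization idea: train a classifier on datasets simulated from each candidate model so that, afterwards, posterior model probabilities for new data cost a single forward pass. The two toy models, the summary statistics, and the small MLP below are stand-ins for the hierarchical probabilistic programs and deep networks of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def simulate(model, n_obs=50):
    """Draw a dataset from one of two toy models and summarize it
    (mean, std). Stand-ins for the paper's probabilistic programs."""
    mu = rng.normal(0, 1)
    scale = 1.0 if model == 0 else np.abs(rng.normal(0, 2)) + 0.5
    x = rng.normal(mu, scale, n_obs)
    return [x.mean(), x.std()]

# Amortization: train once on simulations; model probabilities for any new
# dataset are then a single forward pass through the classifier.
labels = rng.integers(0, 2, 4000)
feats = np.array([simulate(m) for m in labels])
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(feats, labels)

x_obs = rng.normal(0.3, 1.0, 50)
print(clf.predict_proba([[x_obs.mean(), x_obs.std()]]))  # approx. model probs
```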
