
Dual-energy X-ray tomography is considered in a context where the target under imaging consists of two distinct materials. The materials are assumed to be possibly intertwined in space, but at any given location there is only one material present. Further, two X-ray energies are chosen so that there is a clear difference in the spectral dependence of the attenuation coefficients of the two materials. A novel regularizer is presented for the inverse problem of reconstructing separate tomographic images for the two materials. The combination of (a) a non-negativity constraint and (b) a penalty term containing the inner product of the two material images promotes the presence of at most one material in any given pixel. A preconditioned interior point method is derived for the minimization of the regularization functional. Numerical tests with digital phantoms suggest that the new algorithm outperforms the baseline method, Joint Total Variation regularization, in terms of the number of correctly material-characterized pixels. While the method is tested only in a two-dimensional setting with two materials and two energies, the approach readily generalizes to three dimensions and more materials; the number of materials just needs to match the number of energies used in imaging.
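To make the mechanism concrete, the variational problem can be sketched in the following general form, where $A$ is the X-ray projection operator, $m_1, m_2$ are the measured data at the two energies, $c_{ji}$ are the attenuation coefficients of material $i$ at energy $j$, and $\alpha > 0$ weights the coupling term; the exact data fidelity and any additional smoothness penalties used in the paper are not given in the abstract, so this is only an illustrative sketch:
$$
\min_{f_1 \ge 0,\; f_2 \ge 0}\;\; \sum_{j=1}^{2} \bigl\| c_{j1} A f_1 + c_{j2} A f_2 - m_j \bigr\|_2^2 \;+\; \alpha \,\langle f_1, f_2 \rangle .
$$
Since $\langle f_1, f_2 \rangle = \sum_k f_1[k]\, f_2[k]$ and both images are constrained to be non-negative, the coupling term vanishes exactly when no pixel carries both materials, which is how the combination of (a) and (b) promotes one material per pixel.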

Related Content

Chen and Zhou (2021) consider an inference problem for an Ornstein-Uhlenbeck process driven by a general one-dimensional centered Gaussian process $(G_t)_{t\ge 0}$. The second order mixed partial derivative of the covariance function $R(t,\, s)=\mathbb{E}[G_t G_s]$ can be decomposed into two parts, one of which coincides with that of fractional Brownian motion, while the other is bounded by $(ts)^{H-1}$ with $H\in (\frac12,\,1)$, up to a constant factor. In this paper, we investigate the same problem but under the assumption $H\in (0,\,\frac12)$. It is well known that there is a significant difference between the Hilbert space associated with fractional Gaussian processes in the case $H\in (\frac12, 1)$ and in the case $H\in (0, \frac12)$. The starting point of this paper is a new relationship between the inner product of the Hilbert space $\mathfrak{H}$ associated with the Gaussian process $(G_t)_{t\ge 0}$ and that of the Hilbert space $\mathfrak{H}_1$ associated with the fractional Brownian motion $(B^{H}_t)_{t\ge 0}$. We then prove the strong consistency for $H\in (0, \frac12)$, and the asymptotic normality and Berry-Ess\'{e}en bounds for $H\in (0,\frac38)$, for both the least squares estimator and the moment estimator of the drift parameter constructed from continuous observations. A good many inequality estimates are involved, and we also make use of the estimation of the inner product based on the results for $\mathfrak{H}_1$ in Hu, Nualart and Zhou (2019).
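For orientation, in this line of work the drift estimators built from continuous observations of the Ornstein-Uhlenbeck process $dX_t = -\theta X_t\,dt + dG_t$ typically take the following forms; the exact normalizations depend on the conventions adopted in the paper, so these are indicative only:
$$
\hat{\theta}_T \;=\; -\,\frac{\int_0^T X_t \, dX_t}{\int_0^T X_t^2 \, dt},
\qquad
\tilde{\theta}_T \;=\; \left( \frac{1}{H\,\Gamma(2H)\,T} \int_0^T X_t^2 \, dt \right)^{-\frac{1}{2H}},
$$
where $\hat{\theta}_T$ is the least squares estimator and $\tilde{\theta}_T$ is the moment estimator obtained from the ergodic limit of $\frac{1}{T}\int_0^T X_t^2\,dt$.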

Estimating free energy differences, an important problem in computational drug discovery and in a wide range of other application areas, commonly involves a computationally intensive process of sampling a family of high-dimensional probability distributions and a procedure for computing estimates based on those samples. The variance of the free energy estimate of interest typically depends strongly on how the total computational resources available for sampling are divided among the distributions, but determining an efficient allocation is difficult without sampling the distributions. Here we introduce the Times Square sampling algorithm, a novel on-the-fly estimation method that dynamically allocates resources in such a way as to significantly accelerate the estimation of free energies and other observables, while providing rigorous convergence guarantees for the estimators. We also show that it is possible, surprisingly, for on-the-fly free energy estimation to achieve lower asymptotic variance than the maximum-likelihood estimator MBAR, raising the prospect that on-the-fly estimation could reduce variance in a variety of other statistical applications.
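The abstract does not spell out the Times Square allocation rule, but the underlying problem can be illustrated with a generic on-the-fly scheme that repeatedly re-splits the sampling budget according to current variance estimates. The Neyman-style rule below is only a placeholder for the actual update, and the function name is hypothetical:

```python
import numpy as np

def reallocate(var_contrib, remaining_budget):
    """Placeholder allocation rule (not the paper's): split the remaining
    sampling budget across the intermediate distributions in proportion to
    the square root of each distribution's current estimated variance
    contribution (a Neyman-style allocation)."""
    w = np.sqrt(np.asarray(var_contrib, dtype=float))
    w = w / w.sum()
    return np.maximum(1, np.floor(w * remaining_budget).astype(int))
```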

Binary density ratio estimation (DRE), the problem of estimating the ratio $p_1/p_2$ of two densities from their empirical samples, provides the foundation for many state-of-the-art machine learning algorithms such as contrastive representation learning and covariate shift adaptation. In this work, we consider a generalized setting where, given samples from multiple distributions $p_1, \ldots, p_k$ (for $k > 2$), we aim to efficiently estimate the density ratios between all pairs of distributions. Such a generalization leads to important new applications such as estimating statistical discrepancy among multiple random variables, e.g., multi-distribution $f$-divergence, and bias correction via multiple importance sampling. We then develop a general framework from the perspective of Bregman divergence minimization, where each strictly convex multivariate function induces a proper loss for multi-distribution DRE. Moreover, we rederive the theoretical connection between multi-distribution density ratio estimation and class probability estimation, justifying the use of any strictly proper scoring rule composite with a link function for multi-distribution DRE. We show that our framework leads to methods that strictly generalize their counterparts in binary DRE, as well as new methods that show comparable or superior performance on various downstream tasks.
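The class-probability-estimation route mentioned above can be sketched as follows; logistic regression is used here only as a stand-in probabilistic classifier, and the conversion relies on the standard Bayes-rule identity $p_i(x)/p_j(x) = (n_j/n_i)\, P(y=i\mid x)/P(y=j\mid x)$ when class $i$ contributes $n_i$ samples:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pairwise_ratio_estimator(samples):
    """Fit one multi-class probabilistic classifier on samples from p_1,...,p_k
    and convert its class probabilities into pairwise density-ratio estimates
    via p_i(x)/p_j(x) = (n_j/n_i) * P(y=i|x) / P(y=j|x)."""
    X = np.vstack(samples)
    y = np.concatenate([np.full(len(s), i) for i, s in enumerate(samples)])
    n = np.array([len(s) for s in samples], dtype=float)
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    def ratio(x, i, j):
        proba = clf.predict_proba(np.atleast_2d(x))
        return (n[j] / n[i]) * proba[:, i] / proba[:, j]

    return ratio
```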

The notion of generalized rank invariant in the context of multiparameter persistence has become an important ingredient for defining interesting homological structures such as generalized persistence diagrams. Naturally, computing these rank invariants efficiently is a prelude to computing any of these derived structures efficiently. We show that the generalized rank invariant over a finite interval $I$ of a $\mathbb{Z}^2$-indexed persistence module $M$ is equal to the generalized rank invariant of the zigzag module induced on the boundary of $I$. Hence, we can compute the generalized rank over $I$ by computing the barcode of the zigzag module obtained by restricting the bifiltration inducing $M$ to the boundary of $I$. If $I$ has $t$ points, this computation takes $O(t^\omega)$ time, where $\omega\in[2,2.373)$ is the exponent of matrix multiplication. Among other applications, we apply this result to obtain an improved algorithm for the following problem: given a bifiltration inducing a module $M$, determine whether $M$ is interval decomposable and, if so, compute all intervals supporting its summands. Our algorithm runs in $O(t^{2\omega})$ time, vastly improving upon existing algorithms for the problem.
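The boundary-zigzag route can be sketched as below. The helpers are hypothetical black boxes, and the final step assumes the usual reading in this line of work that the generalized rank of the boundary zigzag equals the number of full-length bars in its barcode:

```python
def generalized_rank(bifiltration, interval_I, boundary_path, restrict, zigzag_barcode):
    """Sketch of the boundary-zigzag computation. The three callables are
    assumed black boxes: `boundary_path` enumerates the t points on the
    boundary of I, `restrict` restricts the bifiltration to that zigzag path,
    and `zigzag_barcode` returns (birth, death) index pairs, e.g. from an
    O(t^omega) zigzag persistence solver."""
    path = boundary_path(interval_I)
    bars = zigzag_barcode(restrict(bifiltration, path))
    t = len(path)
    # Read off the generalized rank over I as the number of full-length bars,
    # i.e. bars spanning the entire boundary zigzag.
    return sum(1 for (birth, death) in bars if birth == 0 and death == t - 1)
```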

Tweedie distributions are a special case of exponential dispersion models, which are often used in classical statistics as distributions for generalized linear models. Here, we reveal that Tweedie distributions also play key roles in the modern deep learning era, leading to a distribution-independent self-supervised image denoising formula that requires no clean reference images. Specifically, by combining the recent Noise2Score self-supervised image denoising approach with the saddle point approximation of Tweedie distributions, we provide a general closed-form denoising formula that can be used for large classes of noise distributions without ever knowing the underlying noise distribution. Similar to the original Noise2Score, the new approach is composed of two successive steps: score matching using perturbed noisy images, followed by a closed-form image denoising step via the distribution-independent Tweedie's formula. This also suggests a systematic algorithm for estimating the noise model and noise parameters of a given noisy image data set. Through extensive experiments, we demonstrate that the proposed method can accurately estimate noise models and parameters, and provides state-of-the-art self-supervised image denoising performance on benchmark and real-world datasets.
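As the simplest instance of this two-step recipe, for additive Gaussian noise of variance $\sigma^2$ Tweedie's formula gives the posterior-mean denoiser
$$
\hat{x}(y) \;=\; y + \sigma^2 \, \nabla_y \log p(y),
$$
where the score $\nabla_y \log p(y)$ is exactly the quantity learned in the score-matching step from noisy images alone; the saddle point approximation of Tweedie distributions is what supplies the analogous closed-form expression for much broader families of noise.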

Recent neural rendering methods have demonstrated accurate view interpolation by predicting volumetric density and color with a neural network. Although such volumetric representations can be supervised on static and dynamic scenes, existing methods implicitly bake the complete scene light transport into a single neural network for a given scene, including surface modeling, bidirectional scattering distribution functions (BSDFs), and indirect lighting effects. In contrast to traditional rendering pipelines, this prohibits changing surface reflectance, changing illumination, or composing other objects into the scene. In this work, we explicitly model the light transport between scene surfaces and rely on traditional integration schemes and the rendering equation to reconstruct a scene. The proposed method allows BSDF recovery under unknown lighting conditions and supports classic light transport algorithms such as path tracing. By learning decomposed transport with surface representations established in conventional rendering methods, the method naturally facilitates editing shape, reflectance, lighting and scene composition. The method outperforms NeRV for relighting under known lighting conditions, and produces realistic reconstructions for relit and edited scenes. We validate the proposed approach for scene editing, relighting and reflectance estimation, learned from synthetic and captured views, on a subset of NeRV's datasets.
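The rendering equation underlying the decomposed light transport is the standard one,
$$
L_o(\mathbf{x}, \omega_o) \;=\; L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i ,
$$
and the editing capabilities follow from representing the BSDF $f_r$, the surface geometry (with normals $\mathbf{n}$), and the incident lighting $L_i$ as separate components rather than baking the full transport into a single network; the exact parameterization of each component is the paper's and is not shown here.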

Machine learning models are known to be susceptible to adversarial attacks, which can cause misclassification by introducing small but well designed perturbations. In this paper, we consider a classical hypothesis testing problem in order to develop fundamental insight into defending against such adversarial perturbations. We interpret an adversarial perturbation as a nuisance parameter, and propose a defense based on applying the generalized likelihood ratio test (GLRT) to the resulting composite hypothesis testing problem, jointly estimating the class of interest and the adversarial perturbation. While the GLRT approach is applicable to general multi-class hypothesis testing, we first evaluate it for binary hypothesis testing in white Gaussian noise under $\ell_{\infty}$ norm-bounded adversarial perturbations, for which a known minimax defense optimizing for the worst-case attack provides a benchmark. We derive the worst-case attack for the GLRT defense, and show that its asymptotic performance (as the dimension of the data increases) approaches that of the minimax defense. For non-asymptotic regimes, we show via simulations that the GLRT defense is competitive with the minimax approach under the worst-case attack, while yielding a better robustness-accuracy tradeoff under weaker attacks. We also illustrate the GLRT approach for a multi-class hypothesis testing problem for which a minimax strategy is not known, evaluating its performance under both noise-agnostic and noise-aware adversarial settings; to this end, we provide a method to find optimal noise-aware attacks, and heuristics to find noise-agnostic attacks that are close to optimal in the high SNR regime.
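In the white-Gaussian-noise setting described above, with class-$k$ signal $\mathbf{s}_k$ and the $\ell_\infty$-bounded perturbation treated as a nuisance parameter, the GLRT defense reduces to the schematic decision rule
$$
\hat{k}(\mathbf{y}) \;=\; \arg\min_{k}\; \min_{\|\mathbf{e}\|_\infty \le \epsilon} \; \bigl\| \mathbf{y} - \mathbf{s}_k - \mathbf{e} \bigr\|_2^2 ,
$$
i.e. each class hypothesis is scored after jointly fitting the most favorable admissible perturbation; the precise signal model and the multi-class extension follow the paper's setup rather than this sketch.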

A thermal simulation methodology, enabled by a data-driven learning algorithm, is developed for interconnects, accounting for variations of material properties, heat sources and boundary conditions (BCs). The methodology is based on the concepts of model order reduction and domain decomposition to construct a multi-block approach. A generic block model is built to represent a group of interconnect blocks that are used to wire standard cells in integrated circuits (ICs). The blocks in this group possess identical geometry but various metal/via routings. The data-driven model reduction method is thus applied to learn the material property variations induced by the different metal/via routings in the blocks, in addition to the variations of heat sources and BCs. The approach is investigated in two very different settings. It is first applied to thermal simulation of a single interconnect block with BCs similar to those used in training the generic block. It is then implemented in multi-block thermal simulation of a FinFET IC, where the interconnect structure is partitioned into several blocks, each modeled by the generic block model. Accuracy of the generic block model is examined in terms of the metal/via routings, the BCs and the thermal discontinuities at the block interfaces.
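The abstract does not specify how the generic block model is learned, but a common data-driven model-order-reduction step in this kind of thermal simulation is a proper-orthogonal-decomposition (POD) basis built from solution snapshots; the sketch below is only that generic step, not the paper's specific training procedure:

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Generic POD step: build a reduced basis from a matrix whose columns are
    temperature snapshots, keeping enough modes to capture the requested
    fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    captured = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(captured, energy)) + 1
    return U[:, :r]  # temperature field approximated as U_r @ a(t)
```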

We consider a least-squares variational kernel-based method for the numerical solution of second order elliptic partial differential equations on a multi-dimensional domain. In this setting it is not assumed that the differential operator is self-adjoint or positive definite, as is required in the Rayleigh-Ritz setting. Nevertheless, the new scheme leads to a symmetric and positive definite algebraic system of equations. Moreover, the resulting method does not rely on trial subspaces that satisfy the boundary conditions. The trial space for discretization is provided by standard kernels that reproduce Sobolev spaces as their native spaces. An error analysis of the method is given, although it partly relies on an inverse inequality on the boundary which is still an open problem. The condition number of the final linear system is estimated in terms of the smoothness of the kernel and the discretization quality. Finally, the results of some computational experiments support the theoretical error bounds.
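The general shape of such a least-squares variational formulation is
$$
\min_{u_h \in V_h} \; \bigl\| \mathcal{L} u_h - f \bigr\|^2_{L^2(\Omega)} \;+\; \tau \, \bigl\| \mathcal{B} u_h - g \bigr\|^2_{L^2(\partial\Omega)} ,
$$
where $\mathcal{L}$ is the second order elliptic operator, $\mathcal{B}$ the boundary operator, $V_h$ the kernel-based trial space and $\tau > 0$ a boundary weight; the normal equations of this quadratic problem are symmetric and positive definite regardless of whether $\mathcal{L}$ itself is self-adjoint, which is the point emphasized above. The precise norms and weighting used in the paper may differ from this sketch.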

Most conditional generation tasks expect diverse outputs given a single conditional context. However, conditional generative adversarial networks (cGANs) often focus on the prior conditional information and ignore the input noise vectors, which contribute to the output variations. Recent attempts to resolve the mode collapse issue for cGANs are usually task-specific and computationally expensive. In this work, we propose a simple yet effective regularization term to address the mode collapse issue for cGANs. The proposed method explicitly maximizes the ratio of the distance between generated images to the distance between their corresponding latent codes, thus encouraging the generator to explore more of the minor modes during training. This mode seeking regularization term is readily applicable to various conditional generation tasks without imposing training overhead or modifying the original network structures. We validate the proposed algorithm on three conditional image synthesis tasks, including categorical generation, image-to-image translation, and text-to-image synthesis, with different baseline models. Both qualitative and quantitative results demonstrate the effectiveness of the proposed regularization method in improving diversity without loss of quality.
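A minimal PyTorch-style sketch consistent with this description is given below; the use of an $\ell_1$ mean as the distance and the weight with which the term is added to the generator loss are placeholders rather than the paper's exact choices:

```python
import torch

def mode_seeking_loss(img1, img2, z1, z2, eps=1e-5):
    """Mode-seeking regularization: encourage the generator to map distant
    latent codes to distant images by maximizing the image-distance /
    latent-distance ratio (returned negated so it can simply be added to the
    usual generator loss and minimized)."""
    d_img = torch.mean(torch.abs(img1 - img2))
    d_z = torch.mean(torch.abs(z1 - z2))
    return -d_img / (d_z + eps)
```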
