
Classical generative diffusion models learn an isotropic Gaussian denoising process, treating all spatial regions uniformly, thus neglecting potentially valuable structural information in the data. Inspired by the long-established work on anisotropic diffusion in image processing, we present a novel edge-preserving diffusion model that is a generalization of denoising diffusion probabilistic models (DDPM). In particular, we introduce an edge-aware noise scheduler that varies between edge-preserving and isotropic Gaussian noise. We show that our model's generative process converges faster to results that more closely match the target distribution. We demonstrate its capability to better learn the low-to-mid frequencies within the dataset, which play a crucial role in representing shapes and structural information. Our edge-preserving diffusion process consistently outperforms state-of-the-art baselines in unconditional image generation. It is also more robust for generative tasks guided by a shape-based prior, such as stroke-to-image generation. We present qualitative and quantitative results showing consistent improvements of up to 30% in FID score for both tasks.
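To make the edge-aware scheduler idea concrete, here is a minimal sketch, assuming a Perona-Malik-style conductance and a linear blend toward isotropic noise over the diffusion time; `edge_aware_noise`, the contrast parameter `lam`, and the blending schedule are illustrative assumptions, not the paper's actual scheduler.

```python
import numpy as np

def edge_aware_noise(x, t, T, lam=0.1, rng=None):
    """Hypothetical edge-aware noise: edge-preserving early in the forward
    process, blending toward isotropic Gaussian noise as t -> T."""
    rng = rng or np.random.default_rng()
    # Finite-difference gradients of the clean image along both axes.
    gy, gx = np.gradient(x)
    grad_mag = np.sqrt(gx**2 + gy**2)
    # Perona-Malik-style conductance: ~1 in flat regions, ~0 across strong
    # edges, so edges receive less noise and survive longer.
    g = 1.0 / (1.0 + (grad_mag / lam) ** 2)
    # Linearly interpolate from edge-aware to isotropic over diffusion time.
    alpha = t / T
    scale = (1.0 - alpha) * g + alpha
    return scale * rng.standard_normal(x.shape)
```

With such a scheduler, early steps perturb smooth regions more than edges, which is one plausible way to bias learning toward the low-to-mid frequencies that carry shape information.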

Related Content


Diffusion models exhibit robust generative properties by approximating the underlying distribution of a dataset and synthesizing data by sampling from the approximated distribution. In this work, we explore how the generative performance may be modulated if noise sources with temporal correlations -- akin to those used in the field of active matter -- are used for the destruction of the data in the forward process. Our numerical and analytical experiments suggest that the corresponding reverse process may exhibit improved generative properties.
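To make 'noise sources with temporal correlations' concrete, the sketch below generates Ornstein-Uhlenbeck (persistent) noise, a standard choice in active-matter models; the function name and parameters are illustrative assumptions rather than the authors' exact setup.

```python
import numpy as np

def ou_noise_trajectory(shape, n_steps, tau=5.0, dt=1.0, rng=None):
    """Ornstein-Uhlenbeck noise with persistence time tau, for use in place
    of i.i.d. Gaussian noise in a forward diffusion process. As tau -> 0
    the steps decorrelate and the usual white-noise limit is recovered."""
    rng = rng or np.random.default_rng()
    eta = np.zeros((n_steps,) + tuple(shape))
    eta[0] = rng.standard_normal(shape)
    a = np.exp(-dt / tau)            # correlation between consecutive steps
    s = np.sqrt(1.0 - a**2)          # keeps the stationary variance at 1
    for k in range(1, n_steps):
        eta[k] = a * eta[k - 1] + s * rng.standard_normal(shape)
    return eta
```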

We propose nodal auxiliary space preconditioners for facet and edge virtual elements of lowest order, derived by establishing discrete regular decompositions on polytopal grids and generalizing the Hiptmair-Xu preconditioner to the virtual element framework. The preconditioner consists of solving a sequence of elliptic problems on the nodal virtual element space, combined with appropriate smoother steps. Under assumed regularity of the mesh, the preconditioned system is proven to have a spectral condition number bounded independently of the mesh size; this is verified by numerical experiments on a sequence of polygonal meshes. Moreover, we observe numerically that the preconditioner is robust on meshes containing elements with high aspect ratios.
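Schematically, a Hiptmair-Xu-type auxiliary space preconditioner combines a cheap smoother with corrections solved on nodal auxiliary spaces. The sketch below shows only this generic additive structure; `Pi`, `grad`, and the auxiliary operators are placeholders, and the VEM-specific transfer operators and smoothers of the paper are not reproduced here.

```python
import scipy.sparse.linalg as spla

def hx_apply(r, A_diag, Pi, A_node, grad, A_pot):
    """One application of an additive auxiliary space preconditioner:
    Jacobi smoothing plus two auxiliary-space corrections.

    A_diag : diagonal of the edge/facet stiffness matrix (Jacobi smoother)
    Pi     : interpolation from the vector-valued nodal space to edge dofs
    A_node : elliptic (vector Laplacian-type) operator on the nodal space
    grad   : discrete gradient from scalar potentials to edge dofs
    A_pot  : elliptic operator on the scalar potential space
    """
    v = r / A_diag                                          # smoother step
    v = v + Pi @ spla.spsolve(A_node.tocsc(), Pi.T @ r)     # nodal correction
    v = v + grad @ spla.spsolve(A_pot.tocsc(), grad.T @ r)  # gradient correction
    return v
```

In practice the inner elliptic solves would themselves be replaced by a robust preconditioner (e.g. algebraic multigrid), which is the usual way to keep the overall cost optimal.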

A data analyst might worry about generalization if dropping a very small fraction of data points from a study could change its substantive conclusions. Finding the worst-case data subset to drop poses a combinatorial optimization problem. To overcome this intractability, recent works propose using additive approximations, which treat the contribution of a collection of data points as the sum of their individual contributions, and greedy approximations, which iteratively select the point with the highest impact to drop and re-run the data analysis without that point [Broderick et al., 2020, Kuschnig et al., 2021]. We identify that, even in a setting as simple as OLS linear regression, many of these approximations can break down in realistic data arrangements. Several of our examples reflect masking, where one outlier may hide or conceal the effect of another outlier. Based on the failures we identify, we provide recommendations for users and suggest directions for future improvements.
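The greedy approximation described above is easy to state in code. Below is a minimal sketch for OLS (our own illustration with exact refits; the cited works differ in details, e.g. using influence-based approximations of each point's effect):

```python
import numpy as np

def fit_coef(X, y, keep, coord):
    """OLS coefficient `coord`, fit on the rows where keep is True."""
    beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta[coord]

def greedy_drop(X, y, k, coord=1):
    """Greedily drop k points: at each round, remove the single remaining
    point whose deletion moves the chosen coefficient the most."""
    keep = np.ones(len(y), dtype=bool)
    for _ in range(k):
        base = fit_coef(X, y, keep, coord)
        idx = np.flatnonzero(keep)
        effects = []
        for i in idx:
            keep[i] = False
            effects.append(abs(fit_coef(X, y, keep, coord) - base))
            keep[i] = True
        keep[idx[int(np.argmax(effects))]] = False
    return keep
```

Masking is exactly what defeats this loop: if two outliers offset each other, deleting either one alone barely moves the coefficient, so neither ever gets the top greedy score, even though dropping the pair would change the conclusion.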

We present a novel variational derivation of the Maxwell-GLM system. This system, originally introduced by Munz et al. for purely numerical purposes, augments the vacuum Maxwell equations with two supplementary acoustic subsystems via a generalized Lagrangian multiplier (GLM) approach, so that the divergence constraints on the magnetic and electric fields can be treated within general-purpose, non-structure-preserving numerical schemes for hyperbolic PDE. Among the many mathematically interesting features of the model are: i) its symmetric hyperbolicity; ii) the extra conservation law for the total energy density; and, most importantly, iii) the very peculiar combination of the basic differential operators, since both curl-curl and div-grad combinations are mixed within this kind of system. A similar mixture of Maxwell-type and acoustic-type subsystems has recently also been put forward by Buchman et al. in the context of a reformulation of the Einstein field equations of general relativity in terms of tetrads. This motivates our interest in this class of PDE: the system is in itself very interesting from a mathematical point of view and can therefore serve as a useful prototype for the development of new structure-preserving numerical methods. To the best of our knowledge, there exists neither a rigorous variational derivation of this class of hyperbolic PDE systems, nor exactly energy-conserving and asymptotic-preserving schemes for them. The objectives of this paper are to derive the Maxwell-GLM system from an underlying variational principle, show its consistency with Hamiltonian mechanics and special relativity, extend it to the general nonlinear case, and develop new exactly energy-conserving and asymptotic-preserving finite volume schemes for its discretization.
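For concreteness, one common way of writing the vacuum Maxwell-GLM system is shown below (a sketch only: sign conventions and the cleaning speeds $\chi$, $\gamma$ vary across references, and this is not necessarily the normalization used in the paper):

```latex
\begin{aligned}
\partial_t \mathbf{E} - c^2\,\nabla\times\mathbf{B} + \chi c^2\,\nabla\varphi &= \mathbf{0},
&\qquad
\partial_t \varphi + \chi\,\nabla\cdot\mathbf{E} &= 0,\\
\partial_t \mathbf{B} + \nabla\times\mathbf{E} + \gamma\,\nabla\psi &= \mathbf{0},
&\qquad
\partial_t \psi + \gamma c^2\,\nabla\cdot\mathbf{B} &= 0.
\end{aligned}
```

The $(\mathbf{E},\varphi)$ and $(\mathbf{B},\psi)$ pairs form the two acoustic (div-grad) subsystems that propagate divergence errors, while the curl-curl part is the original Maxwell system; setting $\chi=\gamma=0$ formally recovers the vacuum Maxwell equations.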

In this paper, we investigate score function-based tests for checking the significance of an ultrahigh-dimensional sub-vector of the model coefficients when the nuisance parameter vector is also ultrahigh-dimensional in linear models. We first reanalyze and extend a recently proposed score function-based test to derive, under weaker conditions, its limiting distributions under the null and local alternative hypotheses. As this test may fail to work when the correlation between the testing covariates and the nuisance covariates is high, we propose an orthogonalized score function-based test with two merits: debiasing makes the non-degenerate error term degenerate, and the reduced asymptotic variance enhances power. Simulations evaluate the finite-sample performance of the proposed tests, and a real data analysis illustrates their application.
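Schematically, the orthogonalization can be pictured as follows (notation assumed for illustration; the paper's exact statistic, standardization, and regularity conditions are not reproduced here). Split the design as $X=(X_1,X_2)$ with $X_1$ the nuisance covariates and $X_2$ the tested covariates, and test $H_0:\beta_2=0$:

```latex
U_n = X_2^{\top}\bigl(y - X_1\hat\beta_1\bigr),
\qquad
\tilde U_n = \bigl(X_2 - X_1\hat W\bigr)^{\top}\bigl(y - X_1\hat\beta_1\bigr),
```

where $\hat W$ estimates the regression of $X_2$ on $X_1$. Residualizing $X_2$ against $X_1$ is what debiases the statistic and reduces its asymptotic variance when the two blocks of covariates are highly correlated.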

We propose a new neural network-based large eddy simulation framework for the incompressible Navier-Stokes equations, based on the paradigm "discretize first, filter and close next". This leads to full model-data consistency and allows for employing neural closure models in the same environment in which they were trained. Since the LES discretization error is included in the learning process, the closure models can learn to account for the discretization. Furthermore, we employ a divergence-consistent discrete filter defined through face-averaging, and provide novel theoretical and numerical analysis of this filter. By construction, it preserves the discrete divergence-free constraint, unlike general discrete filters such as volume-averaging filters. We show that using a divergence-consistent LES formulation coupled with a convolutional neural closure model produces stable and accurate results for both a-priori and a-posteriori training, while a general (divergence-inconsistent) LES model requires a-posteriori training or other stability-enforcing measures.
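Why face-averaging preserves the discrete divergence-free constraint can be checked in a few lines: coarse-face fluxes are sums of fine-face fluxes, so the coarse divergence of a cell is (up to scaling) the sum of the fine divergences inside it. The sketch below is our own minimal 2D staggered-grid illustration, not the paper's code:

```python
import numpy as np

def divergence(u, v, h):
    """Discrete divergence per cell on a staggered (MAC) grid."""
    return (u[1:, :] - u[:-1, :] + v[:, 1:] - v[:, :-1]) / h

def face_average(u, v, r):
    """Face-averaging filter: each coarse face value is the mean of the
    r fine-face values tiling it. u: (n+1, n) x-velocities on vertical
    faces; v: (n, n+1) y-velocities on horizontal faces."""
    n = u.shape[1]
    N = n // r
    U = u[::r, :].reshape(N + 1, N, r).mean(axis=2)
    V = v[:, ::r].reshape(N, r, N + 1).mean(axis=1)
    return U, V

rng = np.random.default_rng(0)
n, r, h = 32, 4, 1.0 / 32
# Build a discretely divergence-free field from a random streamfunction.
psi = rng.standard_normal((n + 1, n + 1))
u = (psi[:, 1:] - psi[:, :-1]) / h
v = -(psi[1:, :] - psi[:-1, :]) / h

U, V = face_average(u, v, r)
print(np.abs(divergence(u, v, h)).max())      # round-off: fine field solenoidal
print(np.abs(divergence(U, V, r * h)).max())  # round-off: filtered field too
```

A volume-averaging filter, by contrast, mixes fluxes from interior faces into the coarse faces, and the telescoping argument above no longer applies.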

We present a novel image-based adaptive domain decomposition FEM framework to accelerate the solution of continuum damage mechanics problems. The key idea is to use image-processing techniques to identify the moving interface between the healthy and unhealthy subdomains as damage propagates, and then to use an iterative Schur complement approach to solve the problem efficiently. The implementation of the algorithm consists of several modular components. Following the FEM solution of a load increment, the damage detection module is activated, a step based on several image-processing operations including colormap manipulation and morphological convolution-based operations. Then the damage tracking module is invoked to identify the crack growth direction, using geometrical operations and a ray-casting algorithm. This information is passed to the domain decomposition module, where the domain is divided into the healthy subdomain, which contains only undamaged elements, and the unhealthy subdomain, which comprises both damaged and undamaged elements. Continuity between the two regions is restored using penalty constraints. The computational savings of our method stem from the Schur complement, which allows for the iterative solution of the system of equations pertaining only to the unhealthy subdomain. Through an exhaustive comparison between our approach and single-domain computations, we demonstrate the accuracy, efficiency, and robustness of the framework. We verify its compatibility with local and non-local damage laws, with structured and unstructured meshes, and with cases where different damage paths eventually merge. Since the key novelty lies in using image-processing tools to inform the decomposition, our framework can be readily extended beyond damage mechanics to model several classes of non-linear problems, such as plasticity and phase-field methods.
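A minimal sketch of the two computational ingredients, under assumed interfaces (the damage field is treated as a per-element image, and `unhealthy_mask` / `schur_solve` are hypothetical names, not the framework's modules):

```python
import numpy as np
from scipy import ndimage
from scipy.sparse.linalg import LinearOperator, cg, splu

def unhealthy_mask(damage, threshold=0.0, pad=2):
    """Damage detection as image processing: threshold the per-element
    damage image, then morphologically dilate so the unhealthy subdomain
    keeps a buffer of intact neighbouring elements."""
    return ndimage.binary_dilation(damage > threshold, iterations=pad)

def schur_solve(K, f, u_mask):
    """Solve K x = f, iterating only on unhealthy-subdomain unknowns via
    the Schur complement S = K_uu - K_uh K_hh^{-1} K_hu."""
    K = K.tocsr()
    ui, hi = np.flatnonzero(u_mask), np.flatnonzero(~u_mask)
    Khh_lu = splu(K[hi][:, hi].tocsc())   # healthy block factorized once
    Khu, Kuh, Kuu = K[hi][:, ui], K[ui][:, hi], K[ui][:, ui]
    S = LinearOperator((ui.size, ui.size),
                       matvec=lambda x: Kuu @ x - Kuh @ Khh_lu.solve(Khu @ x))
    xu, _ = cg(S, f[ui] - Kuh @ Khh_lu.solve(f[hi]))
    xh = Khh_lu.solve(f[hi] - Khu @ xu)
    x = np.empty_like(f)
    x[ui], x[hi] = xu, xh
    return x
```

The saving comes from reusing the healthy-block factorization across Schur-complement iterations, while only the (small) unhealthy block is iterated on.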

In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.

We consider linear models with scalar responses and covariates from a separable Hilbert space. The aim is to detect change points in the error distribution, based on sequential residual empirical distribution functions. Expansions for these estimated functions are more challenging in models with infinite-dimensional covariates than in regression models with scalar or vector-valued covariates, due to the slower rate of convergence of the parameter estimators. Nevertheless, the suggested change point test is asymptotically distribution-free and consistent against one-change-point alternatives. In the latter case, we also show consistency of a change point estimator.
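A standard CUSUM-type form of such a test makes the construction concrete (schematic only; the paper's exact weighting and treatment of the Hilbert-space parameter estimate may differ). With residuals $\hat\varepsilon_i = Y_i - \langle \hat\beta, X_i\rangle$:

```latex
\hat F_k(t) = \frac{1}{k}\sum_{i=1}^{k} \mathbf{1}\{\hat\varepsilon_i \le t\},
\qquad
T_n = \max_{1\le k<n} \frac{k(n-k)}{n^{3/2}}
      \,\sup_{t\in\mathbb{R}} \bigl|\hat F_k(t) - \hat F_{k+1:n}(t)\bigr|,
```

where $\hat F_{k+1:n}$ is the residual empirical distribution function of the remaining observations; large values of $T_n$ indicate a change in the error distribution.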

We establish a refined version of a graph container lemma due to Galvin and discuss several applications related to the hard-core model on bipartite expander graphs. Given a graph $G$ and $\lambda>0$, the hard-core model on $G$ at activity $\lambda$ is the probability distribution $\mu_{G,\lambda}$ on independent sets in $G$ given by $\mu_{G,\lambda}(I)\propto \lambda^{|I|}$. As one of our main applications, we show that the hard-core model at activity $\lambda$ on the hypercube $Q_d$ exhibits a `structured phase' for $\lambda= \Omega( \log^2 d/d^{1/2})$ in the following sense: in a typical sample from $\mu_{Q_d,\lambda}$, most vertices are contained in one side of the bipartition of $Q_d$. This improves upon a result of Galvin which establishes the same for $\lambda=\Omega(\log d/ d^{1/3})$. As another application, we establish a fully polynomial-time approximation scheme (FPTAS) for the hard-core model on a $d$-regular bipartite $\alpha$-expander, with $\alpha>0$ fixed, when $\lambda= \Omega( \log^2 d/d^{1/2})$. This improves upon the bound $\lambda=\Omega(\log d/ d^{1/4})$ due to the first author, Perkins and Potukuchi. We discuss similar improvements to results of Galvin-Tetali, Balogh-Garcia-Li and Kronenberg-Spinka.
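For readers less familiar with the model, a standard Glauber-dynamics sampler makes $\mu_{G,\lambda}$ concrete (textbook material, not part of the paper's contribution):

```python
import numpy as np

def hardcore_glauber(adj, lam, n_steps, rng=None):
    """Glauber dynamics for the hard-core model at activity lam.
    adj: list of neighbour lists. Returns a 0/1 occupancy vector whose
    stationary distribution is mu_{G,lam}(I) proportional to lam^|I|."""
    rng = rng or np.random.default_rng()
    occ = np.zeros(len(adj), dtype=bool)
    p = lam / (1.0 + lam)
    for _ in range(n_steps):
        v = int(rng.integers(len(adj)))
        if any(occ[u] for u in adj[v]):
            occ[v] = False             # a neighbour is occupied: v stays empty
        else:
            occ[v] = rng.random() < p  # otherwise occupy with prob lam/(1+lam)
    return occ
```

On $Q_d$, `adj` would list the $d$ neighbours obtained by flipping one coordinate; the `structured phase' then shows up as most occupied vertices lying in one parity class of the bipartition.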
