
Various natural phenomena exhibit spatial extremal dependence at short distances only, while it usually vanishes as the distance between sites increases. However, models proposed in the literature for spatial extremes, whether based on max-stable or Pareto processes or on comparatively less computationally demanding ``sub-asymptotic'' Gaussian location and/or scale mixtures, generally assume that spatial extremal dependence persists across the entire spatial domain. This is a clear limitation when modeling extremes over large geographical domains, but it has, surprisingly, been mostly overlooked in the literature. In this paper, we develop a more realistic Bayesian framework based on a novel Gaussian scale mixture model, where the Gaussian process component is defined by a stochastic partial differential equation that yields a sparse precision matrix, and the random scale component is modeled as a low-rank Pareto-tailed or Weibull-tailed spatial process determined by compactly supported basis functions. We show that our proposed model is approximately tail-stationary despite its non-stationary construction in terms of basis functions, and we demonstrate that it can capture a wide range of extremal dependence structures as a function of distance. Furthermore, the inherently sparse structure of our spatial model allows fast Bayesian computations, even in high spatial dimensions, based on a customized Markov chain Monte Carlo algorithm that prioritizes calibration in the tail. In our application, we fit our model to heavy monsoon rainfall data in Bangladesh. Our study indicates that the proposed model outperforms some natural alternatives and fits the precipitation extremes satisfactorily. Finally, we use the fitted model to draw inferences on long-term return levels for marginal precipitation at each site and for spatial aggregates.
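As a rough illustration of this model class (a sketch under simplified assumptions, not the paper's SPDE-based construction), the following Python snippet simulates a Gaussian scale mixture $X(s) = R(s)\,W(s)$, where $W$ is a Gaussian process and $R$ is a low-rank, Pareto-tailed scale field built from compactly supported basis functions; all kernels, knot placements, and parameter values are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
sites = rng.uniform(0, 10, size=(400, 2))            # spatial locations
knots = np.mgrid[1:10:4j, 1:10:4j].reshape(2, -1).T  # 16 basis-function knots

# Gaussian process component with an exponential covariance (a dense stand-in
# for the SPDE-based field with sparse precision used in the paper).
cov = np.exp(-cdist(sites, sites) / 2.0)
W = np.linalg.cholesky(cov + 1e-8 * np.eye(len(sites))) @ rng.standard_normal(len(sites))

# Compactly supported basis functions on the coarse knot grid (radius 4).
h = cdist(sites, knots) / 4.0
basis = np.clip(1 - h, 0, None) ** 2
basis /= basis.sum(axis=1, keepdims=True)

# Low-rank Pareto-tailed scale process: one heavy-tailed variable per knot.
alpha = 4.0                                          # Pareto tail index
R_knots = rng.pareto(alpha, size=len(knots)) + 1.0
R = basis @ R_knots                                  # smooth heavy-tailed scale field

X = R * W                                            # Gaussian scale mixture
```

Because each scale variable only influences sites within its basis function's support, extremal dependence is strong locally but decays with distance, which is the qualitative behavior the abstract describes.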

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling, and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
February 23, 2022

Conditionally specified models are often used to describe complex multivariate data. Such models assume implicit structures on the extremes, but so far no methodology exists for calculating extremal characteristics of conditional models, since the copula and the marginals are not expressed in closed form. We consider bivariate conditional models that specify the distribution of $X$ and the distribution of $Y$ conditional on $X$. We provide tools to quantify implicit assumptions on the extremes of this class of models. In particular, these tools allow us to approximate the distribution of the tail of $Y$ and the coefficient of asymptotic independence $\eta$ in closed form. We apply these methods to a widely used conditional model for wave height and wave period. Moreover, we introduce a new condition on the parameter space for the conditional extremes model of Heffernan and Tawn (2004), and prove that the conditional extremes model does not capture $\eta$ when $\eta < 1$.
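To make these quantities concrete, here is a minimal Monte Carlo sketch for an assumed toy conditional model (standard exponential $X$ and a Gaussian conditional for $Y \mid X$, standing in for the wave height/period model); the paper instead derives such characteristics in closed form.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2)
n = 10**6
X = rng.exponential(size=n)                       # marginal model for X
Y = 0.8 * X + np.sqrt(1 + 0.2 * X) * rng.standard_normal(n)  # Y | X = x

# Empirical approximation of the tail of Y at a fixed high level.
print("P(Y > 10):", (Y > 10).mean())

# Crude estimate of eta from P(U > q, V > q) ~ (1 - q)^{1/eta} on uniform margins.
U, V = rankdata(X) / (n + 1), rankdata(Y) / (n + 1)
q = 0.999
joint = ((U > q) & (V > q)).mean()
print("eta estimate:", np.log(1 - q) / np.log(joint))
```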

Parameter reconstructions are indispensable in metrology. Here, one wants to explain $K$ experimental measurements by fitting to them a parameterized model of the measurement process. The model parameters are typically determined by least-squares methods, i.e., by minimizing the sum of the squared residuals between the $K$ model predictions and the $K$ experimental observations, $\chi^2$. The model functions often involve computationally demanding numerical simulations. Bayesian optimization methods are specifically suited for minimizing expensive model functions. However, in contrast to least-squares methods such as the Levenberg-Marquardt algorithm, they only take the value of $\chi^2$ into account and neglect the $K$ individual model outputs. We introduce a Bayesian target-vector optimization scheme that considers all $K$ contributions of the model function and that is specifically suited for parameter reconstruction problems, which are often based on hundreds of observations. Its performance is compared to established methods on an optical metrology reconstruction problem and two synthetic least-squares problems. The proposed method outperforms established optimization methods. It also enables accurate uncertainty estimates to be determined with very few observations of the actual model function by using Markov chain Monte Carlo sampling on a trained surrogate model.
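A minimal sketch of the idea, assuming a toy forward model and illustrative data: surrogate each of the $K$ model outputs separately (here with off-the-shelf Gaussian process regressors) and minimize the resulting surrogate $\chi^2$, rather than building a single surrogate of the scalar $\chi^2$.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from scipy.optimize import minimize

def model(p):                        # expensive forward model (toy stand-in)
    t = np.linspace(0, 1, 50)        # K = 50 observation points
    return p[0] * np.exp(-p[1] * t)

y_obs = model([2.0, 3.0]) + 0.01 * np.random.default_rng(3).standard_normal(50)

# A handful of expensive evaluations at space-filling parameter samples.
P = np.random.default_rng(4).uniform([0, 0], [4, 6], size=(20, 2))
Y = np.array([model(p) for p in P])

# One surrogate per model output, instead of one surrogate for chi^2.
gps = [GaussianProcessRegressor().fit(P, Y[:, k]) for k in range(Y.shape[1])]

def chi2_surrogate(p):
    pred = np.array([gp.predict(p[None, :])[0] for gp in gps])
    return np.sum((pred - y_obs) ** 2)

res = minimize(chi2_surrogate, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print(res.x)                         # should land near the true parameters (2, 3)
```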

We consider the clustering of extremes for stationary regularly varying random fields over arbitrary growing index sets. We study sufficient conditions on the index set under which the limit of the point random fields of exceedances above a high threshold exists. Under the so-called anti-clustering condition, the extremal dependence is only local, so the index set may take a more general form than in the previous literature [3, 21]. However, we cannot describe the clustering of extreme values in terms of the usual spectral tail measure [23], except for hyperrectangles or for index sets in the lattice case. Using the recent extension of the spectral measure to star-shaped spaces [18], the $\upsilon$-spectral tail measure provides a natural generalization that describes the clustering effect in full generality.

The study of Markov processes and broadcasting on trees has deep connections to a variety of areas including statistical physics, graphical models, phylogenetic reconstruction, MCMC algorithms, and community detection in random graphs. Notably, the celebrated Belief Propagation (BP) algorithm achieves optimal performance for the reconstruction problem of predicting the value of the Markov process at the root of the tree from its values at the leaves. Recently, the analysis of low-degree polynomials has emerged as a valuable tool for predicting computational-to-statistical gaps. In this work, we investigate the performance of low-degree polynomials for the reconstruction problem. Perhaps surprisingly, we show that there are simple tree models of fixed arity $d$ and growing depth $\ell$ (so $N = 2^{\ell \log_2(d)}$ leaves) where (1) nontrivial reconstruction of the root value is possible with a simple polynomial time algorithm and with robustness to noise, but not with any polynomial of degree $2^{c \ell} = N^{c/\log_2(d)}$ for $c > 0$ a constant, and (2) when the tree is unknown and given multiple samples with correlated root assignments, nontrivial reconstruction of the root value is possible with a simple, noise-robust, and computationally efficient SQ algorithm but not with any polynomial of degree $2^{c \ell}$. These results clarify limitations of low-degree polynomials vs. polynomial time algorithms for Bayesian estimation problems. They also complement recent work of Moitra, Mossel, and Sandon who studied the circuit complexity of Belief Propagation. As a consequence of our main result, we show that for some $c' > 0$, $\exp(2^{c'\ell}) = \exp(N^{c'/\log_2(d)})$ many samples are needed for RBF kernel regression to obtain nontrivial correlation with the true regression function (BP). We pose related open questions about low-degree polynomials and the Kesten-Stigum threshold.
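For concreteness, the following is a minimal sketch of Belief Propagation for root reconstruction on a complete $d$-ary tree where each edge acts as a binary symmetric channel with flip probability $\delta$; the model and parameter values are illustrative.

```python
import numpy as np

def bp_root_posterior(leaves, d, delta):
    # Upward BP messages: likelihood of a node's subtree leaves given its value.
    msgs = np.zeros((len(leaves), 2))
    msgs[np.arange(len(leaves)), leaves] = 1.0    # leaves: indicator likelihoods
    while len(msgs) > 1:
        child = msgs.reshape(-1, d, 2)
        # Pass each child message through the flip channel:
        # m'(x) = (1 - delta) m(x) + delta m(1 - x)
        to_parent = (1 - delta) * child + delta * child[..., ::-1]
        msgs = to_parent.prod(axis=1)             # combine the d children
        msgs /= msgs.sum(axis=1, keepdims=True)   # normalize for stability
    post = 0.5 * msgs[0]                          # uniform root prior
    return post[1] / post.sum()

# Broadcast a root bit down a depth-8 binary tree, then reconstruct it.
rng = np.random.default_rng(0)
d, depth, delta = 2, 8, 0.1
bits = np.array([0])
for _ in range(depth):
    bits = np.repeat(bits, d) ^ (rng.random(len(bits) * d) < delta).astype(int)
print("P(root = 1 | leaves):", bp_root_posterior(bits, d, delta))
```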

Employing a forward Markov diffusion chain to gradually map the data to a noise distribution, diffusion probabilistic models learn how to generate the data by inferring a reverse Markov diffusion chain that inverts the forward process. To achieve competitive generation performance, they demand a long diffusion chain, which makes them computationally intensive not only in training but also in generation. To significantly improve computational efficiency, we propose to truncate the forward diffusion chain by abolishing the requirement of diffusing the data all the way to random noise. Consequently, we start the reverse diffusion chain from an implicit generative distribution, rather than from random noise, and learn its parameters by matching it to the distribution of the data corrupted by the truncated forward diffusion chain. Experimental results show that our truncated diffusion probabilistic models provide consistent improvements over the non-truncated ones in terms of both generation performance and the number of required reverse diffusion steps.
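A minimal numpy sketch of the truncation idea, under toy assumptions (2-D data, a constant noise schedule, and a simple Gaussian standing in for the learned implicit generative distribution):

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.standard_normal((10_000, 2)) * [1.0, 0.2] + [3.0, 0.0]  # toy dataset

T_trunc, beta = 20, 0.02                 # truncated chain length, noise level
alpha_bar = (1 - beta) ** T_trunc        # cumulative signal retention

# Truncated forward process: q(x_T | x_0) = N(sqrt(alpha_bar) x_0, (1 - alpha_bar) I).
x_T = np.sqrt(alpha_bar) * data + np.sqrt(1 - alpha_bar) * rng.standard_normal(data.shape)

# Implicit prior matched to the *partially* noised marginal (not pure noise):
# the reverse chain starts from samples of this distribution.
mu, cov = x_T.mean(0), np.cov(x_T.T)
start = rng.multivariate_normal(mu, cov, size=10_000)
print("prior mean ~", mu, " vs a pure-noise start at 0")
```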

Despite the increasing accessibility of functional data, effective methods for flexibly estimating the underlying trend are still scarce. We therefore develop a functional version of trend filtering for estimating the trend of functional data indexed by time or located on a general graph, extending conventional trend filtering, a powerful nonparametric technique for estimating trends in scalar data. We formulate the new trend filtering by introducing penalty terms based on the $L_2$-norm of the differences of adjacent trend functions. We develop an efficient iterative algorithm for optimizing the objective function obtained by orthonormal basis expansion. Furthermore, we introduce additional penalty terms to eliminate redundant basis functions, which leads to automatic adaptation of the number of basis functions. The tuning parameters in the proposed method are selected via cross-validation. We demonstrate the proposed method through simulation studies and applications to real-world datasets.
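As a sketch of the penalty structure (with a first-difference penalty and synthetic coefficients; the paper's iterative algorithm and basis-selection penalties are not reproduced here), the objective separates across basis dimensions after orthonormal expansion and admits a ridge-type solution:

```python
import numpy as np

rng = np.random.default_rng(6)
T, J = 50, 8                                   # time points, basis functions
C_obs = np.cumsum(rng.standard_normal((T, J)) * 0.1, axis=0)  # noisy coefficients

lam = 5.0
# First-difference operator D of shape (T-1) x T: (D C)_t = c_{t+1} - c_t.
D = np.eye(T - 1, T, k=1) - np.eye(T - 1, T)
# Minimize ||C - C_obs||_F^2 + lam * ||D C||_F^2, separable across basis dims.
A = np.eye(T) + lam * D.T @ D
C_hat = np.linalg.solve(A, C_obs)              # closed-form ridge-type solution
```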

Domain generalization aims to learn a generalizable model from a known source domain for various unknown target domains. It has been widely studied via domain randomization, which transfers source images into different styles in the spatial domain in order to learn domain-agnostic features. However, most existing randomization methods use GANs, which often lack control and may even undesirably alter the semantic structure of images. Inspired by the idea of JPEG, which converts spatial images into multiple frequency components (FCs), we propose Frequency Space Domain Randomization (FSDR), which randomizes images in frequency space by keeping domain-invariant FCs (DIFs) and randomizing only domain-variant FCs (DVFs). FSDR has two unique features: 1) it decomposes images into DIFs and DVFs, which allows explicit access to and manipulation of them and more controllable randomization; 2) it has minimal effect on the semantic structures of images and on domain-invariant features. We examined the domain variance and invariance properties of FCs statistically and designed a network that can identify and fuse DIFs and DVFs dynamically through iterative learning. Extensive experiments over multiple domain-generalizable segmentation tasks show that FSDR achieves superior segmentation performance, even on par with domain adaptation methods that access target data during training.
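A minimal sketch of the frequency-space idea, using a DCT as in JPEG; the fixed low-frequency band kept below is a heuristic stand-in for FSDR's learned identification of DIFs versus DVFs:

```python
import numpy as np
from scipy.fft import dctn, idctn

def randomize_frequency(img, keep=8, strength=0.5, rng=None):
    # Decompose into frequency components; the low-frequency block is treated
    # as domain-invariant and kept, the rest is randomly rescaled.
    if rng is None:
        rng = np.random.default_rng()
    F = dctn(img, norm="ortho")
    mask = np.zeros_like(F, dtype=bool)
    mask[:keep, :keep] = True                 # heuristic DIF band
    noise = 1 + strength * rng.standard_normal(F.shape)
    F = np.where(mask, F, F * noise)          # randomize domain-variant FCs only
    return idctn(F, norm="ortho")

img = np.random.default_rng(7).random((64, 64))   # stand-in for a source image
aug = randomize_frequency(img)                    # style-randomized variant
```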

An important problem in geostatistics is to build models of the subsurface of the Earth given physical measurements at sparse spatial locations. Typically, this is done using spatial interpolation methods or by reproducing patterns from a reference image. However, these algorithms fail to produce realistic patterns and do not exhibit the wide range of uncertainty inherent in the prediction of geology. In this paper, we show how semantic inpainting with Generative Adversarial Networks can be used to generate varied realizations of geology which honor physical measurements while matching the expected geological patterns. In contrast to other algorithms, our method scales well with the number of data points and mimics a distribution of patterns as opposed to a single pattern or image. The generated conditional samples are state of the art.
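A minimal sketch of the conditioning step in this style of semantic inpainting: search the latent space of a pre-trained generator for realizations that honor sparse measurements. The toy generator, measurement locations, and loss weights below are illustrative assumptions, not the paper's trained GAN:

```python
import numpy as np
from scipy.optimize import minimize

def G(z, grid=np.linspace(0, 1, 64)):
    # Toy "generator": a smooth random field parameterized by latent z,
    # standing in for a GAN trained on geological patterns.
    k = np.arange(1, len(z) + 1)
    return np.sin(np.outer(grid, k) * np.pi) @ z

obs_idx = np.array([5, 20, 40, 60])            # sparse measurement locations
obs_val = np.array([0.3, -0.1, 0.5, 0.0])      # measured values at those sites

def loss(z):
    field = G(z)
    context = np.sum((field[obs_idx] - obs_val) ** 2)   # honor measurements
    prior = 0.01 * np.sum(z ** 2)                       # stay near the prior
    return context + prior

# Different initializations yield varied realizations consistent with the data,
# giving a crude picture of the uncertainty away from the measurements.
samples = [minimize(loss, np.random.default_rng(s).standard_normal(8)).x
           for s in range(3)]
```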

Dynamic topic models (DTMs) model the evolution of prevalent themes in literature, online media, and other forms of text over time. DTMs assume that word co-occurrence statistics change continuously and therefore impose continuous stochastic process priors on their model parameters. These dynamical priors make inference much harder than in regular topic models, and also limit scalability. In this paper, we present several new results around DTMs. First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs). This allows us to explore topics that develop smoothly over time, that have long-term memory, or that are temporally concentrated (for event detection). Second, we show how to perform scalable approximate inference in these models based on ideas around stochastic variational inference and sparse Gaussian processes. This way we can train a rich family of DTMs on massive data. Our experiments on several large-scale datasets show that our generalized model allows us to find interesting patterns that were not accessible by previous approaches.
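To illustrate the extended prior class, the sketch below draws topic-parameter trajectories from three GP kernels: a Wiener kernel (the classic DTM prior), a wide RBF kernel (smooth, long-memory topics), and a narrow RBF kernel (temporally concentrated, event-like topics); the lengthscales are illustrative.

```python
import numpy as np

t = np.linspace(0, 1, 100)[:, None]            # time grid

def sample(kernel, rng=np.random.default_rng(8)):
    # Draw one GP sample path for a given covariance kernel.
    K = kernel(t, t.T) + 1e-6 * np.eye(len(t))
    return np.linalg.cholesky(K) @ rng.standard_normal(len(t))

wiener = sample(lambda a, b: np.minimum(a, b))              # classic DTM prior
smooth = sample(lambda a, b: np.exp(-(a - b) ** 2 / 0.1))   # long memory
event  = sample(lambda a, b: np.exp(-(a - b) ** 2 / 0.001)) # concentrated
```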

Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
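A minimal finite-dimensional sketch of the latent nested construction, with gamma weights on a shared set of atoms standing in for completely random measures; the concentrations and atom counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
n_atoms = 1000
atoms = rng.standard_normal(n_atoms)            # shared atom locations

common = rng.gamma(0.05, size=n_atoms)          # common CRM weights
p_groups = []
for g in range(2):
    specific = rng.gamma(0.05, size=n_atoms)    # group-specific CRM weights
    w = common + specific                       # add common + specific measures
    p_groups.append(w / w.sum())                # normalize to a probability measure

# Groups share the common component, so the resulting random probability
# measures are dependent but not identical.
x1 = rng.choice(atoms, size=5, p=p_groups[0])
x2 = rng.choice(atoms, size=5, p=p_groups[1])
```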
