
In this chapter, we show how to efficiently model high-dimensional extreme peaks-over-threshold events over space in complex non-stationary settings, using extended latent Gaussian models (LGMs), and how to exploit the fitted model in practice to compute long-term return levels. The extended LGM framework assumes that the data follow a specific parametric distribution whose unknown parameters are transformed via a multivariate link function and are then further modeled at the latent level in terms of fixed and random effects that have a joint Gaussian distribution. In the extremal context, we assume here that the data-level distribution is described by a Poisson point process likelihood, motivated by asymptotic extreme-value theory, which conveniently exploits information from all threshold exceedances. This contrasts with the more common, data-wasteful approach based on block maxima, which are typically modeled with the generalized extreme-value (GEV) distribution. When conditional independence can be assumed at the data level and the latent random effects have a sparse probabilistic structure, fast approximate Bayesian inference becomes possible in very high dimensions, and we present the recently proposed inference approach called "Max-and-Smooth", which provides an exceptional speed-up compared to alternative methods. The proposed methodology is illustrated by application to satellite-derived precipitation data over Saudi Arabia, obtained from the Tropical Rainfall Measuring Mission, with 2738 grid cells and about 20 million spatio-temporal observations in total. Our fitted model captures the spatial variability of extreme precipitation satisfactorily, and our results show that the most intense precipitation events are expected near the south-western part of Saudi Arabia, along the Red Sea coastline.
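To make the data-level ingredient concrete, here is a minimal Python sketch of the Poisson point-process log-likelihood for threshold exceedances at a single site. It is a toy with synthetic data and illustrative parameter values, not the chapter's full spatial model; the function name and all constants are assumptions made for this example.

```python
import numpy as np

def pp_loglik(params, exceedances, u, n_periods):
    """Poisson point-process log-likelihood for exceedances of threshold u.

    params    : (mu, sigma, xi), location/scale/shape on the scale of the
                per-period (e.g. annual) maximum
    n_periods : number of observation periods the data span
    """
    mu, sigma, xi = params
    if sigma <= 0:
        return -np.inf
    z_u = 1.0 + xi * (u - mu) / sigma
    z_y = 1.0 + xi * (exceedances - mu) / sigma
    if z_u <= 0 or np.any(z_y <= 0):
        return -np.inf
    rate = n_periods * z_u ** (-1.0 / xi)          # expected exceedance count
    log_intensity = -np.log(sigma) - (1.0 / xi + 1.0) * np.log(z_y)
    return -rate + np.sum(log_intensity)

# Synthetic "daily" data: 20 periods of 365 Gumbel draws, 98% threshold.
rng = np.random.default_rng(0)
daily = rng.gumbel(loc=10.0, scale=5.0, size=20 * 365)
u = np.quantile(daily, 0.98)
print(pp_loglik((40.0, 5.0, 0.1), daily[daily > u], u, n_periods=20))
```

In the extended LGM, (mu, sigma, xi) would vary over space and be linked to Gaussian latent fields; maximizing this likelihood per site gives the "Max" step that Max-and-Smooth then smooths at the latent level.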

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community opportunities to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
November 30, 2021

Event cameras are bio-inspired sensors providing significant advantages over standard cameras, such as low latency, high temporal resolution, and high dynamic range. We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing. Our setup consists of an event camera and a laser-point projector that uniformly illuminates the scene in a raster scanning pattern over 16 ms. Previous methods match events independently of each other, and so they deliver noisy depth estimates at high scanning speeds in the presence of signal latency and jitter. In contrast, we optimize an energy function designed to exploit event correlations, called spatio-temporal consistency. The resulting method is robust to event jitter and therefore performs better at higher scanning speeds. Experiments demonstrate that our method can deal with high-speed motion and outperform state-of-the-art 3D reconstruction methods based on event cameras, reducing the RMSE by 83% on average for the same acquisition time.
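As a purely illustrative toy of why correlating events helps (not the paper's actual energy function, calibration, or data), the sketch below converts per-pixel event timestamps from one raster sweep into depth, then compares independent per-event estimates with estimates obtained after minimizing a simple quadratic smoothness energy over the timestamps. All constants and the toy geometry are assumptions.

```python
import numpy as np

# Toy sketch: per-pixel event timestamps (seconds) within one 16 ms sweep.
T_SWEEP, WIDTH = 0.016, 640          # sweep duration, projector columns
FOCAL, BASELINE = 500.0, 0.1         # hypothetical calibration (px, metres)

rng = np.random.default_rng(1)
true_cols = np.tile(np.linspace(100, 300, 64), (64, 1))       # smooth scene
t_events = true_cols / WIDTH * T_SWEEP + rng.normal(0, 5e-5, (64, 64))

def depth_from_times(t):
    """Map event times to projector columns, then triangulate."""
    proj_col = t / T_SWEEP * WIDTH
    cam_col = np.arange(t.shape[1])[None, :] + 200    # toy rectified geometry
    disparity = np.maximum(cam_col - proj_col, 1e-3)
    return FOCAL * BASELINE / disparity

def smooth_times(t_obs, lam=4.0, iters=200):
    """Jacobi iterations for the quadratic energy
    E(t) = sum (t - t_obs)^2 + lam * sum_{4-neighbours} (t_i - t_j)^2,
    a stand-in for the paper's spatio-temporal consistency term."""
    t = t_obs.copy()
    for _ in range(iters):
        nb = (np.roll(t, 1, 0) + np.roll(t, -1, 0)
              + np.roll(t, 1, 1) + np.roll(t, -1, 1))
        t = (t_obs + lam * nb) / (1.0 + 4.0 * lam)
    return t

true_depth = depth_from_times(true_cols / WIDTH * T_SWEEP)
for name, d in [("independent", depth_from_times(t_events)),
                ("consistent", depth_from_times(smooth_times(t_events)))]:
    print(name, "RMSE:", np.sqrt(np.mean((d - true_depth) ** 2)))
```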

The modeling and simulation of dynamical systems is a necessary step for many control approaches. Using classical, parameter-based techniques to model modern systems, e.g., soft robots or human-robot interaction, is often challenging or even infeasible due to the complexity of the system dynamics. In contrast, data-driven approaches need only a minimum of prior knowledge and scale with the complexity of the system. In particular, Gaussian process dynamical models (GPDMs) provide very promising results for the modeling of complex dynamics. However, the control-theoretic properties of these GP models have been only sparsely researched, which leads to a "black-box" treatment in modeling and control scenarios. In addition, sampling from GPDMs for prediction purposes, respecting their non-parametric nature, results in non-Markovian dynamics, which makes theoretical analysis challenging. In this article, we present approximated GPDMs that are Markovian and analyze their control-theoretic properties. Among other results, the approximation error is analyzed and conditions for the boundedness of the trajectories are provided. The outcomes are illustrated with numerical examples that show the power of the approximated models while the computational time is significantly reduced.
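A minimal sketch of the idea behind a Markovian GPDM approximation, assuming a mean-based variant in which the GP transition is summarized by its posterior mean so that each prediction depends only on the current state (the article's actual approximations and error analysis are more general; the kernel, data, and names below are illustrative):

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    """Squared-exponential kernel between row-stacked inputs."""
    d = A[:, None, :] - B[None, :, :]
    return sf**2 * np.exp(-0.5 * np.sum(d**2, axis=-1) / ell**2)

# Toy 1-D dynamics x_{t+1} = f(x_t) + noise, learned from one trajectory.
rng = np.random.default_rng(2)
f = lambda x: 0.9 * x + 0.5 * np.sin(x)
xs = [1.5]
for _ in range(100):
    xs.append(f(xs[-1]) + 0.02 * rng.normal())
xs = np.array(xs)
X, Y = xs[:-1, None], xs[1:, None]

# Exact GPDM sampling is non-Markovian (each draw conditions on the whole
# sampled path); the approximation below propagates only the GP posterior
# mean, which makes the resulting dynamics Markovian.
K = rbf(X, X) + 0.02**2 * np.eye(len(X))
alpha = np.linalg.solve(K, Y)

def step_mean(x):
    return (rbf(np.atleast_2d(x), X) @ alpha).item()

x = 1.5
for t in range(5):
    x = step_mean(x)
    print(f"t={t+1}: x={x:.4f}")
```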

This paper develops a Bayesian computational platform at the interface between posterior sampling and optimization in models whose marginal likelihoods are difficult to evaluate. Inspired by adversarial optimization, namely Generative Adversarial Networks (GANs), we reframe the likelihood-function estimation problem as a classification problem. Pitting a Generator, which simulates fake data, against a Classifier, which tries to distinguish them from the real data, one obtains likelihood (ratio) estimators that can be plugged into the Metropolis-Hastings algorithm. The resulting Markov chains generate, at a steady state, samples from an approximate posterior whose asymptotic properties we characterize. Drawing upon connections with empirical Bayes and Bayesian mis-specification, we quantify the convergence rate in terms of the contraction speed of the actual posterior and the convergence rate of the Classifier. Asymptotic normality results are also provided, which justify the inferential potential of our approach. We illustrate the usefulness of our approach on examples that have posed a challenge for existing Bayesian likelihood-free approaches.
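The core mechanism can be sketched in a few lines: a classifier trained to distinguish data simulated at a proposed parameter from data simulated at a reference parameter yields a log-likelihood-ratio estimate (its log-odds), which is then plugged into Metropolis-Hastings. The toy below uses a Gaussian simulator, logistic regression, and an N(0, 4) prior purely for illustration; the paper's Generator/Classifier pair and theory are far richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
x_obs = rng.normal(1.0, 1.0, size=200)           # "real" data, true theta = 1

def simulate(theta, n=2000):
    return rng.normal(theta, 1.0, size=n)

def log_lik_ratio_est(theta, theta_ref=0.0):
    """Classifier-based estimate of log p(x_obs|theta) - log p(x_obs|theta_ref):
    with balanced classes, the logistic log-odds estimate the density ratio."""
    X = np.r_[simulate(theta), simulate(theta_ref)][:, None]
    y = np.r_[np.ones(2000), np.zeros(2000)]
    clf = LogisticRegression().fit(X, y)
    return clf.decision_function(x_obs[:, None]).sum()

# Metropolis-Hastings with the estimated log-likelihood ratio; the estimate
# is re-trained at every proposal, so the chain is only approximate.
theta, ll = 0.0, log_lik_ratio_est(0.0)
samples = []
for _ in range(500):
    prop = theta + 0.2 * rng.normal()
    ll_prop = log_lik_ratio_est(prop)
    log_a = ll_prop - ll + (theta**2 - prop**2) / 8.0   # N(0, 4) prior ratio
    if np.log(rng.random()) < log_a:
        theta, ll = prop, ll_prop
    samples.append(theta)
print("posterior mean ≈", np.mean(samples[100:]))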

In this paper we address the problem of handling inconsistencies in tables with missing values (also called nulls) and functional dependencies. Although the traditional view is that table instances must respect all functional dependencies imposed on them, it is nevertheless relevant to develop theories about how to handle instances that violate some dependencies. Regarding missing values, we make no assumptions on their existence: a missing value exists only if it is inferred from the functional dependencies of the table. We propose a formal framework in which each tuple of a table is associated with a truth value among true, false, inconsistent, and unknown; and we show that our framework can be used to study important problems such as consistent query answering, table merging, and data quality measures, to mention just a few. In this paper, however, we focus mainly on consistent query answering, a problem that has received considerable attention over the last decades. The main contributions of the paper are the following: (a) we introduce a new approach to handle inconsistencies in a table with nulls and functional dependencies, (b) we give algorithms for computing all true, inconsistent and false tuples, (c) we investigate the relationship between our approach and four-valued logic in the context of data merging, and (d) we give a novel solution to the consistent query answering problem and compare our solution to that of table repairs.
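The following toy is a deliberate simplification of the paper's four-valued semantics, with a hypothetical relation and a single functional dependency Emp → Dept; it illustrates how tuples can be labeled true, inconsistent, or unknown, and how a null is resolved only when the dependency infers it (negative information, and hence the label false, is omitted here).

```python
from collections import defaultdict

# Toy table with nulls (None) and one functional dependency: Emp -> Dept.
rows = [
    ("alice", "sales"),
    ("alice", "hr"),       # conflicts with the tuple above under Emp -> Dept
    ("bob",   "it"),
    ("bob",   None),       # null, but inferable from ("bob", "it") via the FD
    ("carol", None),       # null that nothing in the table can resolve
]

def label_tuples(rows):
    """Simplified labeling: tuples whose left-hand side maps to conflicting
    non-null right-hand sides are 'inconsistent'; unresolvable nulls give
    'unknown'; the rest (including FD-inferred nulls) are 'true'."""
    by_lhs = defaultdict(set)
    for emp, dept in rows:
        if dept is not None:
            by_lhs[emp].add(dept)
    labels = []
    for emp, dept in rows:
        if len(by_lhs[emp]) > 1:
            labels.append((emp, dept, "inconsistent"))
        elif dept is None and not by_lhs[emp]:
            labels.append((emp, dept, "unknown"))
        elif dept is None:                       # null inferred from the FD
            labels.append((emp, next(iter(by_lhs[emp])), "true"))
        else:
            labels.append((emp, dept, "true"))
    return labels

for t in label_tuples(rows):
    print(t)
```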

Cognitive Diagnosis Models (CDMs) are a special family of discrete latent variable models widely used in the educational, psychological and social sciences. In many applications of CDMs, certain hierarchical structures among the latent attributes are assumed by researchers to characterize their dependence structure. Specifically, a directed acyclic graph is used to specify hierarchical constraints on the allowable configurations of the discrete latent attributes. In this paper, we consider the important yet unaddressed problem of testing the existence of latent hierarchical structures in CDMs. We first introduce the concept of testability of hierarchical structures in CDMs and present sufficient conditions. Then we study the asymptotic behavior of the likelihood ratio test (LRT) statistic, which is widely used for testing nested models. Due to the irregularity of the problem, the asymptotic distribution of the LRT becomes nonstandard and tends to provide unsatisfactory finite-sample performance under practical conditions. We provide statistical insights into such failures, and propose to use the parametric bootstrap to perform the testing. We also demonstrate the effectiveness and superiority of the parametric bootstrap for testing the latent hierarchies over the non-parametric bootstrap and the naïve chi-squared test through comprehensive simulations and an educational assessment dataset.
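A compact sketch of the parametric-bootstrap mechanics, using a deliberately simple regular model (a Gaussian mean test) as a stand-in for a CDM, whose hierarchy test is irregular: simulate from the fitted null, recompute the LRT statistic on each replicate, and compare the bootstrap p-value with the naïve chi-squared reference. All model choices here are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def lrt_stat(x):
    """LRT of H0: mu = 0 vs H1: mu free, for N(mu, sigma^2) data."""
    n = len(x)
    s0 = np.mean(x**2)                 # MLE of sigma^2 under H0 (mu = 0)
    s1 = np.var(x)                     # MLE of sigma^2 under H1
    return n * (np.log(s0) - np.log(s1))

x = rng.normal(0.15, 1.0, size=50)
t_obs = lrt_stat(x)

# Parametric bootstrap: simulate from the *fitted null* and recompute the LRT.
sigma0_hat = np.sqrt(np.mean(x**2))
t_boot = np.array([lrt_stat(rng.normal(0.0, sigma0_hat, size=50))
                   for _ in range(2000)])
p_boot = np.mean(t_boot >= t_obs)
p_chi2 = 1.0 - stats.chi2.cdf(t_obs, df=1)   # naive chi-squared reference
print(f"bootstrap p = {p_boot:.3f}, chi2 p = {p_chi2:.3f}")
```

In this regular toy the two p-values roughly agree; the paper's point is that for irregular hierarchy tests the chi-squared reference breaks down while the bootstrap distribution remains valid.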

We present the near-Maximal Algorithm for Poisson-disk Sampling (nMAPS) to generate point distributions for variable-resolution Delaunay triangular and tetrahedral meshes in two and three dimensions, respectively. nMAPS consists of two principal stages. In the first stage, an initial point distribution is produced using a cell-based rejection algorithm. In the second stage, holes in the sample are detected using an efficient background grid and filled in to obtain a near-maximal covering. Extensive testing shows that nMAPS generates a variable-resolution mesh in run time linear in the number of accepted points. We demonstrate the capabilities of nMAPS by meshing three-dimensional discrete fracture networks (DFNs) and the surrounding volume. The discretized boundaries of the fractures, which are represented as planar polygons, are used as the seed of 2D-nMAPS to produce a conforming Delaunay triangulation. The combined mesh of the DFN is used as the seed for 3D-nMAPS, which produces conforming Delaunay tetrahedra surrounding the network. Under a set of conditions that arise naturally in maximal Poisson-disk samples and are satisfied by nMAPS, the two-dimensional Delaunay triangulations are guaranteed to have only well-behaved triangular faces. While nMAPS does not provide triangulation quality bounds in more than two dimensions, we found that low-quality tetrahedra in 3D are infrequent, can be readily detected and removed, and that a high-quality, balanced mesh is produced.
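A minimal sketch of the first, cell-based rejection stage in 2D (the hole-detection and filling stage of nMAPS is omitted, and all parameter values are illustrative): a background grid with cell size r/√2 stores at most one point per cell, so each candidate needs to check only a 5×5 neighbourhood of cells.

```python
import numpy as np

def poisson_disk_2d(r, extent=1.0, k=3000, rng=None):
    """Cell-based rejection sampling for a constant-radius Poisson-disk set.
    Simplified stand-in for nMAPS stage one; no hole-filling pass here."""
    rng = rng or np.random.default_rng(5)
    cell = r / np.sqrt(2.0)            # at most one accepted point per cell
    n = int(np.ceil(extent / cell))
    grid = -np.ones((n, n), dtype=int) # -1 marks an empty cell
    pts = []
    for _ in range(k):
        p = rng.uniform(0.0, extent, size=2)
        ci, cj = (p // cell).astype(int)
        ok = True
        for di in range(-2, 3):        # 5x5 neighbourhood suffices
            for dj in range(-2, 3):
                i, j = ci + di, cj + dj
                if 0 <= i < n and 0 <= j < n and grid[i, j] >= 0:
                    if np.linalg.norm(pts[grid[i, j]] - p) < r:
                        ok = False
                        break
            if not ok:
                break
        if ok:
            grid[ci, cj] = len(pts)
            pts.append(p)
    return np.array(pts)

pts = poisson_disk_2d(r=0.05)
print(len(pts), "accepted points")
```

Because the grid lookup is constant-time per candidate, the accepted-point count grows linearly with run time, mirroring the linear scaling reported for nMAPS.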

This work deals with the analysis of longitudinal ordinal responses. The novelty of the proposed approach lies in simultaneously modeling the temporal dynamics of a latent trait of interest, measured via the observed ordinal responses, and the answering behaviors influenced by response styles, through hidden Markov models (HMMs) with two latent components. This approach enables the modeling of (i) the substantive latent trait, controlling for response styles, and (ii) the change over time of the latent trait and the answering behavior, also allowing for dependence on individual characteristics. For the proposed HMMs, estimation procedures, methods for standard-error calculation, measures of goodness of fit and classification, and full-conditional residuals are discussed. The proposed model is fitted to ordinal longitudinal data from the Survey on Household Income and Wealth (Bank of Italy) to give insights into the evolution of Italian households' financial capability.
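To illustrate the two-component construction, the toy below crosses a binary latent trait with a binary response style into a composite HMM state space and evaluates the likelihood of an ordinal sequence with the scaled forward algorithm. The transition and emission values are hypothetical, not estimates from the Bank of Italy data.

```python
import numpy as np

# Composite state = (trait: low/high) x (style: none/extreme).
traits, styles = 2, 2
S = traits * styles
A = np.full((S, S), 1.0 / S)            # toy transition matrix
pi = np.full(S, 1.0 / S)                # toy initial distribution

# Emission over 3 ordinal categories: the extreme style inflates endpoints.
B = np.array([
    [0.6, 0.3, 0.1],   # low trait,  no style
    [0.1, 0.3, 0.6],   # high trait, no style
    [0.8, 0.1, 0.1],   # low trait,  extreme style
    [0.1, 0.1, 0.8],   # high trait, extreme style
])

def forward_loglik(obs):
    """Scaled forward algorithm: log-likelihood of an ordinal sequence."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

print(forward_loglik([0, 2, 2, 1, 0]))
```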

Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal's intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks do not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem through the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the 'persistent homology dimension' (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD at the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network's intrinsic dimension in a variety of settings, which is predictive of the generalization error.
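For zeroth-order persistent homology, the α-weighted sum of persistence intervals of a point cloud equals the sum of its Euclidean minimum-spanning-tree edge lengths raised to α, so estimating the PH0 dimension reduces to a log-log slope fit across sample sizes. The sketch below applies this to a synthetic point cloud rather than to training trajectories; the sample sizes, α, and test manifold are illustrative choices.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def e0_alpha(pts, alpha=1.0):
    """alpha-weighted total PH0 persistence of a point cloud, which equals
    the sum of its minimum-spanning-tree edge lengths raised to alpha."""
    mst = minimum_spanning_tree(squareform(pdist(pts)))
    return (mst.data ** alpha).sum()

def ph0_dimension(sampler, sizes=(200, 400, 800, 1600), alpha=1.0, seed=6):
    """Fit the growth rate E(n) ~ n^((d - alpha)/d), then d = alpha/(1 - slope)."""
    rng = np.random.default_rng(seed)
    log_n = np.log(sizes)
    log_e = np.log([e0_alpha(sampler(n, rng), alpha) for n in sizes])
    slope = np.polyfit(log_n, log_e, 1)[0]
    return alpha / (1.0 - slope)

# A 2-D plane embedded in R^5: the estimate should be close to 2,
# regardless of the ambient dimension.
basis = np.random.default_rng(7).normal(size=(2, 5))
sampler = lambda n, rng: rng.uniform(size=(n, 2)) @ basis
print("estimated intrinsic dimension:", ph0_dimension(sampler))
```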

This study introduces a new spline dimensional decomposition (SDD) for the uncertainty quantification of high-dimensional functions, including those exhibiting high nonlinearity and nonsmoothness. The decomposition creates a hierarchical expansion of an output random variable of interest with respect to measure-consistent orthonormalized basis splines (B-splines) in independent input random variables. A dimensionwise decomposition of a spline space into orthogonal subspaces, each spanned by a reduced set of such orthonormal splines, results in SDD. Exploiting the modulus of smoothness, the SDD approximation is shown to converge in mean-square to the correct limit. The computational complexity of the SDD method is polynomial, as opposed to exponential, thus alleviating the curse of dimensionality to the extent possible. Analytical formulae are proposed to calculate the second-moment properties of a truncated SDD approximation of a general output random variable in terms of the expansion coefficients involved. Numerical results indicate that a low-order SDD approximation of nonsmooth functions calculates the probabilistic characteristics of an output variable with an accuracy matching or surpassing that obtained by high-order approximations from several existing methods. Finally, a 34-dimensional random eigenvalue analysis demonstrates the utility of SDD for solving practical problems.
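A one-dimensional sketch of two of the ingredients, assuming the uniform measure on [0, 1]: B-splines are orthonormalized against the input measure by whitening with a Cholesky factor of their Gram matrix, and the second-moment properties of the projection then follow directly from the expansion coefficients. The basis size, degree, and nonsmooth test function are illustrative; the full SDD additionally decomposes the spline space dimensionwise.

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline basis on [0, 1].
degree, n_basis = 3, 8
knots = np.r_[[0.0] * degree, np.linspace(0, 1, n_basis - degree + 1),
              [1.0] * degree]
xq, wq = np.polynomial.legendre.leggauss(64)     # Gauss-Legendre on [-1, 1]
xq, wq = 0.5 * (xq + 1.0), 0.5 * wq              # map to [0, 1]

raw = np.array([BSpline.basis_element(knots[k:k + degree + 2],
                                      extrapolate=False)(xq)
                for k in range(n_basis)])
raw = np.nan_to_num(raw)                         # outside-support NaNs -> 0
G = (raw * wq) @ raw.T                           # Gram matrix under the measure
ortho = np.linalg.solve(np.linalg.cholesky(G), raw)   # orthonormal rows

f = lambda x: np.abs(x - 0.4) + x**2             # nonsmooth test function
coeffs = (ortho * wq) @ f(xq)                    # projection coefficients

# Second-moment properties directly from the coefficients (Parseval).
mean_sdd = coeffs @ ((ortho * wq) @ np.ones_like(xq))
var_sdd = coeffs @ coeffs - mean_sdd**2
x_mc = np.random.default_rng(8).uniform(size=200_000)
print(f"mean {mean_sdd:.4f} (MC {f(x_mc).mean():.4f}), "
      f"var {var_sdd:.4f} (MC {f(x_mc).var():.4f})")
```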

Both generative adversarial network (GAN) models and variational autoencoders (VAEs) have been widely used to approximate the probability distributions of datasets. Although both use parametrized distributions to approximate the underlying data distribution, whose exact inference is intractable, their behaviors are very different. In this report, we summarize our experimental results comparing these two categories of models in terms of fidelity and mode collapse. We provide a hypothesis to explain their different behaviors, propose a new model based on this hypothesis, and further test the proposed model on the MNIST and CelebA datasets.
