
Inferring parameter distributions of complex industrial systems from noisy time series data requires methods that can deal with the uncertainty of the underlying data and of the simulation model used. Bayesian inference is well suited for such uncertain inverse problems. The standard methods for identifying uncertain parameters are Markov Chain Monte Carlo (MCMC) methods with explicit evaluation of a likelihood function. However, if the likelihood is very complex, so that its evaluation is computationally expensive, or is even unknown in explicit form, Approximate Bayesian Computation (ABC) methods provide a promising alternative. In this work both methods are applied first to artificially generated data and then to a real-world problem, using data from an electric motor test bench. We show that both methods are able to infer the distribution of varying parameters with a Bayesian hierarchical approach, but that the proposed ABC method is computationally much more efficient at achieving results of similar accuracy. We suggest using summary statistics to reduce the dimension of the data, which significantly increases the efficiency of the algorithm. Furthermore, the simulation model is replaced by a Polynomial Chaos Expansion (PCE) surrogate to speed up model evaluations. We prove consistency of the proposed surrogate-based ABC method with summary statistics under mild conditions on the (approximated) forward model.
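
As a toy illustration of the ABC idea described above, the sketch below runs rejection ABC with low-dimensional summary statistics; the sinusoidal forward model, prior range, and tolerance are placeholder assumptions, not the paper's motor model or PCE surrogate.

```python
# Minimal ABC rejection sampler with summary statistics (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, n=200):
    # Placeholder noisy time series; stands in for the (PCE-approximated) simulator.
    t = np.linspace(0.0, 1.0, n)
    return np.sin(2.0 * np.pi * theta * t) + 0.1 * rng.standard_normal(n)

def summary(y):
    # Low-dimensional summary statistics reduce the dimension of the data.
    return np.array([y.mean(), y.std(), np.abs(np.fft.rfft(y)).argmax()])

y_obs = forward_model(2.5)
s_obs = summary(y_obs)

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 5.0)            # prior draw
    s_sim = summary(forward_model(theta))    # simulate and summarise
    if np.linalg.norm(s_sim - s_obs) < 0.5:  # tolerance epsilon
        accepted.append(theta)

print(f"posterior mean ~ {np.mean(accepted):.3f} from {len(accepted)} accepted draws")
```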

Related Content

Time-to-event endpoints are increasingly popular in phase II cancer trials. The standard statistical tool for such endpoints in one-armed trials is the one-sample log-rank test. It is widely known that the asymptotics underlying the correctness of this test do not take full effect for small sample sizes. There have already been several attempts to solve this problem. While some do not allow easy power and sample size calculations, others lack a clear theoretical motivation and require further considerations. The problem itself can partly be attributed to the dependence between the compensated counting process and its variance estimator. We provide a framework in which the variance estimator can be flexibly adapted to the situation at hand while maintaining its asymptotic properties. As an example, we suggest a variance estimator that is uncorrelated with the compensated counting process. Furthermore, we provide sample size and power calculations for any approach fitting into our framework. Finally, we compare several methods via simulation studies and in the hypothetical setup of a phase II trial based on real-world data.
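
For reference, here is a minimal sketch of the standard one-sample log-rank statistic that the above framework generalizes; the exponential null hazard, the sample size, and the censoring scheme are illustrative assumptions.

```python
# Sketch of the standard one-sample log-rank test (the baseline improved on above).
import numpy as np

def one_sample_logrank(times, events, cum_hazard_0):
    O = events.sum()               # observed events
    E = cum_hazard_0(times).sum()  # expected events under the null
    z = (O - E) / np.sqrt(E)       # E doubles as the variance estimator here
    return z

rng = np.random.default_rng(1)
t = rng.exponential(scale=4.0, size=30)  # small phase II sample
c = rng.uniform(1.0, 8.0, size=30)       # censoring times
times, events = np.minimum(t, c), (t <= c).astype(float)
z = one_sample_logrank(times, events, lambda s: 0.2 * s)  # null hazard rate 0.2
print(f"z = {z:.3f}")
```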

In magnetic confinement fusion devices, the equilibrium configuration of a plasma is determined by the balance between the hydrostatic pressure in the fluid and the magnetic forces generated by an array of external coils and the plasma itself. The location of the plasma is not known a priori and must be obtained as the solution to a free boundary problem. The partial differential equation that determines the behavior of the combined magnetic field depends on a set of physical parameters (location of the coils, intensity of the electric currents going through them, magnetic permeability, etc.) that are subject to uncertainty and variability. The confinement region is in turn a function of these stochastic parameters as well. In this work, we consider variations in the current intensities running through the external coils as the dominant source of uncertainty. This leads to a parameter space of dimension equal to the number of coils in the reactor. With the aid of a surrogate function built on a sparse grid in parameter space, a Monte Carlo strategy is used to explore the effect that stochasticity in the parameters has on important features of the plasma boundary, such as the location of the x-point, the strike points, and shaping attributes such as triangularity and elongation. The use of the surrogate function reduces the time required for the Monte Carlo simulations by factors ranging from 7 to over 30.
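
A minimal sketch of the surrogate-based Monte Carlo loop described above; the quadratic response surface, the number of coils, and the 2% current perturbation are placeholder assumptions standing in for the sparse-grid surrogate and the reactor configuration.

```python
# Illustrative Monte Carlo over a cheap surrogate of a plasma-boundary feature.
import numpy as np

n_coils = 12
rng = np.random.default_rng(2)
w = rng.normal(size=n_coils)  # hypothetical surrogate coefficients

def surrogate_triangularity(currents):
    # Placeholder: quadratic response surface in the coil currents.
    return currents @ w + 0.05 * (currents ** 2).sum(axis=-1)

nominal = np.ones(n_coils)
samples = nominal + 0.02 * rng.standard_normal((100000, n_coils))  # 2% current noise
delta = surrogate_triangularity(samples)  # each evaluation is cheap, unlike the PDE solve
print(f"triangularity: mean {delta.mean():.4f}, std {delta.std():.4f}")
```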

We provide a complete taxonomic characterization of robust hierarchical clustering methods for directed networks following an axiomatic approach. We begin by introducing three practical properties associated with the notion of robustness in hierarchical clustering: linear scale preservation, stability, and excisiveness. Linear scale preservation enforces imperviousness to change in units of measure whereas stability ensures that a bounded perturbation in the input network entails a bounded perturbation in the clustering output. Excisiveness refers to the local consistency of the clustering outcome. Algorithmically, excisiveness implies that we can reduce computational complexity by only clustering a subset of our data while theoretically guaranteeing that the same hierarchical outcome would be observed when clustering the whole dataset. In parallel to these three properties, we introduce the concept of representability, a generative model for describing clustering methods through the specification of their action on a collection of networks. Our main result leverages this generative model to give a precise characterization of all robust -- i.e., excisive, linear scale preserving, and stable -- hierarchical clustering methods for directed networks. We also address the implementation of our methods and describe an application to real data.
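
As a concrete illustration, the sketch below implements one well-known hierarchical clustering method for directed networks (reciprocal clustering, a min-max closure of the max-symmetrized dissimilarity) and numerically checks linear scale preservation; choosing this particular method is an assumption for illustration, not the paper's full characterization.

```python
# Reciprocal clustering ultrametric for a directed dissimilarity network,
# plus a numerical check of linear scale preservation.
import numpy as np

def reciprocal_ultrametric(A):
    # Symmetrize by the max, then take the min-max (minimax path) closure.
    U = np.maximum(A, A.T).astype(float)
    n = U.shape[0]
    for k in range(n):
        U = np.minimum(U, np.maximum.outer(U[:, k], U[k, :]))
    np.fill_diagonal(U, 0.0)
    return U

rng = np.random.default_rng(3)
A = rng.uniform(1.0, 10.0, size=(6, 6))
np.fill_diagonal(A, 0.0)
U = reciprocal_ultrametric(A)
# Linear scale preservation: rescaling the input rescales the dendrogram.
assert np.allclose(reciprocal_ultrametric(3.0 * A), 3.0 * U)
```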

This paper studies the asymptotic properties of, and improved inference methods for, kernel density estimation (KDE) for dyadic data. We first establish novel uniform convergence rates for dyadic KDE under general assumptions. As the existing analytic variance estimator is known to behave unreliably in finite samples, we propose a modified jackknife empirical likelihood procedure for inference. The proposed test statistic is self-normalised and no variance estimator is required. In addition, it is asymptotically pivotal regardless of the presence of dyadic clustering. The results are extended to cover the practically relevant case of incomplete dyadic network data. Simulations show that this jackknife empirical likelihood-based inference procedure delivers precise coverage probabilities even under modest sample sizes and with incomplete dyadic data. Finally, we illustrate the method by studying airport congestion.
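
A minimal sketch of the dyadic KDE itself (not the jackknife empirical likelihood inference step); the Gaussian kernel, the bandwidth, and the node-effect data-generating process are illustrative assumptions.

```python
# Dyadic kernel density estimate: average a kernel over all ordered
# pairs (i, j), i != j, of dyadic observations.
import numpy as np

def dyadic_kde(W, grid, h):
    # W: n x n matrix of dyadic observations W_ij (diagonal ignored).
    n = W.shape[0]
    mask = ~np.eye(n, dtype=bool)
    w = W[mask][:, None]  # all n(n-1) dyads
    K = np.exp(-0.5 * ((grid[None, :] - w) / h) ** 2) / np.sqrt(2 * np.pi)
    return K.mean(axis=0) / h

rng = np.random.default_rng(4)
a = rng.normal(size=50)
# Dyads sharing a node are dependent: this is the dyadic clustering
# that makes naive variance estimation unreliable.
W = a[:, None] + a[None, :] + rng.normal(size=(50, 50))
grid = np.linspace(-4, 4, 81)
f_hat = dyadic_kde(W, grid, h=0.5)
```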

We recover the gradient of a given function defined on the interior of a submanifold with boundary of Euclidean space from a (normally distributed) random sample of function evaluations at points in the manifold. The approach builds on estimates of the Laplace-Beltrami operator proposed in the theory of Diffusion Maps. Analytical convergence results for the resulting expansion are proved, and an efficient algorithm is proposed to deal with non-convex optimization problems defined on Euclidean submanifolds. We test and validate our methodology as a post-processing tool in cryogenic electron microscopy (Cryo-EM). We also apply the method to the classical sphere packing problem.
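
The sketch below shows a generic kernel-based (diffusion-maps style) gradient estimate on the unit circle, accurate only up to a kernel-dependent constant; it is a stand-in assumption for the paper's Laplace-Beltrami-based expansion, not its actual estimator.

```python
# Kernel estimate of the tangential gradient: a locally weighted average of
# (x_j - x)(f(x_j) - f(x)) approximates grad f(x) up to a constant factor.
import numpy as np

def kernel_gradient(x, X, fX, fx, eps):
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * eps))
    g = (w[:, None] * (X - x) * (fX - fx)[:, None]).sum(axis=0)
    return g / (eps * w.sum())  # normalisation up to a kernel-dependent constant

rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 5000)
X = np.column_stack([np.cos(theta), np.sin(theta)])  # samples on the circle S^1
f = lambda p: p[..., 0]                              # f(x, y) = x
x0 = np.array([0.0, 1.0])
g = kernel_gradient(x0, X, f(X), f(x0), eps=0.01)
print(g)  # points roughly along the tangent direction (1, 0) at (0, 1)
```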

Random graph models are used to describe the complex structure of real-world networks in diverse fields of knowledge. Studying their behavior and fitting their properties remain critical challenges that, in general, require model-specific techniques. An important line of research is to develop generic methods able to fit and select the best model among a collection. Approaches based on the spectral density (i.e., the distribution of the eigenvalues of the graph adjacency matrix) are appealing for that purpose: they apply to different random graph models and can benefit from the theoretical background of random matrix theory. This work investigates the convergence properties of model fitting procedures based on the graph spectral density and the corresponding cumulative distribution function. We also review results on the convergence of the spectral density for the most widely used random graph models. Moreover, we explore through simulations the limits of these graph spectral density convergence results, particularly in the case of the block model, where only partial results have been established.
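
A minimal sketch of spectral-density-based model fitting for the simplest case: selecting an Erdős-Rényi edge probability by the Kolmogorov-Smirnov distance between empirical spectral CDFs. The candidate grid and the single-replicate model spectra are simplifying assumptions.

```python
# Fit a random graph model by matching spectral distributions.
import numpy as np

def spectrum(A):
    return np.sort(np.linalg.eigvalsh(A))

def ks_distance(s1, s2):
    # KS distance between two empirical spectral CDFs.
    grid = np.union1d(s1, s2)
    F1 = np.searchsorted(s1, grid, side="right") / len(s1)
    F2 = np.searchsorted(s2, grid, side="right") / len(s2)
    return np.abs(F1 - F2).max()

rng = np.random.default_rng(6)

def er_adjacency(n, p):
    B = (rng.uniform(size=(n, n)) < p).astype(float)
    B = np.triu(B, 1)
    return B + B.T

n, p_true = 200, 0.1
s_obs = spectrum(er_adjacency(n, p_true))  # "observed" graph spectrum
candidates = np.linspace(0.02, 0.3, 15)
best = min(candidates, key=lambda p: ks_distance(s_obs, spectrum(er_adjacency(n, p))))
print(f"selected p = {best:.3f} (true {p_true})")
```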

In this work, we study a class of stochastic processes that generalizes the Ornstein-Uhlenbeck processes, hereafter called \emph{Generalized Ornstein-Uhlenbeck Type Processes} and denoted GOU type processes. We consider them driven by various noise processes such as Brownian motion, the symmetric $\alpha$-stable L\'evy process, a general L\'evy process, and even a Poisson process. We give necessary and sufficient conditions on the memory kernel function for time-stationarity and the Markov property of these processes. When the GOU type process is driven by a L\'evy noise we prove that it is infinitely divisible and exhibit its generating triplet. Several examples derived from the GOU type process are illustrated, showing some of their basic properties as well as some time series realizations. These examples also present their theoretical and empirical autocorrelation or normalized codifference functions, depending on whether the process has a finite or infinite second moment. We also present maximum likelihood and Bayesian estimation procedures for the so-called \emph{Cosine process}, a particular process in the class of GOU type processes. For the Bayesian estimation method, we consider the power series representation of Fox's H-function to better approximate the density function of an $\alpha$-stable distributed random variable. We consider four goodness-of-fit tests to help decide which \emph{Cosine process} (driven by Gaussian or $\alpha$-stable noise) best fits real data sets. Two applications of the GOU type model are presented: one based on Apple stock market price data and the other on cardiovascular mortality data for Los Angeles County.
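
For intuition, here is a minimal Euler-Maruyama sketch of the classical OU process under two of the driving noises mentioned above (Brownian motion and a compound Poisson process); the constant memory kernel and all parameter values are illustrative assumptions, not the general GOU construction.

```python
# Euler-Maruyama simulation of dX = -lambda X dt + dL for two Levy drivers.
import numpy as np

def ou_path(increments, lam, dt, x0=0.0):
    x = np.empty(len(increments) + 1)
    x[0] = x0
    for k, dL in enumerate(increments):
        x[k + 1] = x[k] - lam * x[k] * dt + dL
    return x

rng = np.random.default_rng(7)
n, dt, lam = 5000, 1e-3, 2.0
dW = np.sqrt(dt) * rng.standard_normal(n)                  # Brownian driver
dN = rng.normal(0, 1, n) * (rng.uniform(size=n) < 5 * dt)  # compound Poisson driver, rate 5
x_gauss, x_jump = ou_path(dW, lam, dt), ou_path(dN, lam, dt)
```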

The most realistic information about a transparent sample, such as a live cell, can be obtained only by bright-field light microscopy. Under high-intensity pulsed LED illumination, we captured the primary 12-bit-per-channel (bpc) response of an observed sample using a bright-field microscope equipped with a high-resolution (4872x3248) image sensor. In order to suppress data distortions originating from light interactions with elements in the optical path and from poor sensor reproduction (geometrical defects of the camera sensor and peculiarities of sensor sensitivity), we propose a spectroscopic approach for correcting these uncompressed 12-bpc data through simultaneous calibration of all parts of the experimental arrangement. The final intensities of the corrected images are proportional to the photon fluxes detected by the camera sensor, and the result can be visualized at 8-bpc intensity depth after Least Information Loss compression.
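
In the same spirit, the sketch below applies a generic per-pixel dark-frame and flat-field calibration that keeps intensities proportional to detected photon flux; the frame shapes and noise levels are placeholder assumptions, not the paper's spectroscopic calibration procedure.

```python
# Generic per-pixel sensor calibration: subtract the dark response and divide
# by the flat-field to equalise pixel sensitivity.
import numpy as np

def calibrate(raw, dark, flat):
    raw, dark, flat = (a.astype(np.float64) for a in (raw, dark, flat))
    gain = flat - dark
    corrected = (raw - dark) / np.clip(gain, 1e-6, None)
    return corrected * gain.mean()  # rescale to photon-flux-proportional units

rng = np.random.default_rng(8)
shape = (304, 487)                            # downscaled stand-in for 3248x4872
dark = rng.uniform(40, 60, shape)             # fixed-pattern sensor offset
flat = dark + rng.uniform(3500, 4000, shape)  # pixel-wise sensitivity
raw = np.clip(dark + 0.5 * (flat - dark), 0, 4095)  # 12-bpc sample frame
img = calibrate(raw, dark, flat)
```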

Approximate Message Passing (AMP) algorithms have seen widespread use across a variety of applications. However, the precise forms for their Onsager corrections and state evolutions depend on properties of the underlying random matrix ensemble, limiting the extent to which AMP algorithms derived for white noise may be applicable to data matrices that arise in practice. In this work, we study more general AMP algorithms for random matrices $W$ that satisfy orthogonal rotational invariance in law, where $W$ may have a spectral distribution that is different from the semicircle and Marcenko-Pastur laws characteristic of white noise. The Onsager corrections and state evolutions in these algorithms are defined by the free cumulants or rectangular free cumulants of the spectral distribution of $W$. Their forms were derived previously by Opper, \c{C}akmak, and Winther using non-rigorous dynamic functional theory techniques, and we provide rigorous proofs. Our motivating application is a Bayes-AMP algorithm for Principal Components Analysis, when there is prior structure for the principal components (PCs) and possibly non-white noise. For sufficiently large signal strengths and any non-Gaussian prior distributions for the PCs, we show that this algorithm provably achieves higher estimation accuracy than the sample PCs.
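
As a reference point, the sketch below runs symmetric AMP in the special white-noise (GOE/semicircle) case, where only the second free cumulant is nonzero and the Onsager correction reduces to the familiar mean-derivative term; the rank-one spike model, tanh denoiser, and informative initialization are illustrative assumptions, not the paper's general rotationally invariant setting.

```python
# Symmetric AMP for a rank-one spiked GOE matrix Y = lam * v v^T + W.
import numpy as np

rng = np.random.default_rng(9)
n, lam = 2000, 3.0
G = rng.standard_normal((n, n)) / np.sqrt(2 * n)
W = G + G.T                                       # GOE: semicircle law on [-2, 2]
v = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)  # Rademacher prior for the PC
Y = lam * np.outer(v, v) + W

f = np.tanh                                       # denoiser for the +-1 prior
fprime = lambda u: 1.0 - np.tanh(u) ** 2
x = np.sqrt(n) * v + rng.standard_normal(n)       # informative init (illustration only)
x_old = np.zeros(n)
for _ in range(15):
    b = fprime(x).mean()              # Onsager coefficient (2nd free cumulant = 1)
    x, x_old = Y @ f(x) - b * f(x_old), x
est = f(x) / np.linalg.norm(f(x))
print(f"overlap with truth: {abs(est @ v):.3f}")  # exceeds the sample-PC overlap
```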

Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretisation error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle on sparse simplex spaces, i.e., when many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretisation error. To get around this, we propose the stochastic CIR process, which removes all discretisation error, and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within a SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
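
A minimal sketch of the key ingredient: exact (discretisation-free) transition sampling for a CIR process via its noncentral chi-squared law, which is what lets a CIR-based step avoid boundary bias near zero. The parameter values are illustrative, and this is not the paper's full SGMCMC algorithm.

```python
# Exact CIR transition sampling: no Euler-type discretisation error.
import numpy as np

def cir_exact_step(theta, a, b, sigma, dt, rng):
    # d theta = a (b - theta) dt + sigma sqrt(theta) dW, sampled exactly.
    c = sigma ** 2 * (1.0 - np.exp(-a * dt)) / (4.0 * a)
    df = 4.0 * a * b / sigma ** 2
    nc = theta * np.exp(-a * dt) / c
    return c * rng.noncentral_chisquare(df, nc)

rng = np.random.default_rng(10)
theta = np.full(5, 1e-4)  # sparse components near the simplex boundary
for _ in range(1000):
    theta = cir_exact_step(theta, a=1.0, b=0.01, sigma=0.3, dt=0.01, rng=rng)
print(theta)              # stays nonnegative, with no discretisation bias
```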
