
We present a novel approach to adaptive optimal design of groundwater surveys - a methodology for choosing the location of the next monitoring well. Our dual-weighted approach borrows ideas from Bayesian Optimisation and goal-oriented error estimation to propose the next monitoring well, given that some data are already available from existing wells. Our method is distinct from other optimal design strategies in that it does not rely on Fisher information; instead, it directly exploits the posterior uncertainty and the expected solution to a dual (or adjoint) problem to construct an acquisition function that optimally reduces the uncertainty in the model as a whole and in some engineering quantity of interest in particular. We demonstrate our approach on a 2D groundwater flow example and show that employing the expectation of the dual solution as a weighting function improves the posterior estimate of the quantity of interest on average by a factor of 3 compared to the baseline approach, in which only the posterior uncertainty is considered.
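
As a rough illustration of the dual-weighted idea, the sketch below scores candidate well locations by the product of the posterior standard deviation and the magnitude of the expected dual solution, and compares the resulting choice with an uncertainty-only baseline. The fields posterior_std and expected_dual are random stand-ins for quantities the paper would obtain from a Bayesian posterior and an adjoint solve; the acquisition form is a plausible reading of the abstract, not the authors' exact formula.

```python
import numpy as np

# Dual-weighted acquisition on a 2D grid. `posterior_std` and `expected_dual`
# are random stand-ins for the posterior uncertainty field and the expected
# adjoint (dual) solution; in practice they would come from a Bayesian
# posterior and an adjoint solve for the quantity of interest.
nx, ny = 50, 50
rng = np.random.default_rng(0)
posterior_std = rng.random((nx, ny))
expected_dual = rng.random((nx, ny))

# Weight uncertainty by the sensitivity of the quantity of interest to it.
acquisition = posterior_std * np.abs(expected_dual)

ix, iy = np.unravel_index(np.argmax(acquisition), acquisition.shape)
jx, jy = np.unravel_index(np.argmax(posterior_std), posterior_std.shape)
print(f"dual-weighted choice: cell ({ix}, {iy})")
print(f"uncertainty-only baseline choice: cell ({jx}, {jy})")
```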

Related Content

Double hierarchical generalized linear models (DHGLM) are a family of models that are flexible enough to model the mean and scale parameters hierarchically. In a Bayesian framework, fitting highly parameterized hierarchical models with typical Markov chain Monte Carlo (MCMC) methods is challenging due to the potentially high correlation between different parameters and effects in the model. The integrated nested Laplace approximation (INLA) could be considered instead to avoid these problems. However, DHGLM do not fit within the class of latent Gaussian Markov random field (GMRF) models that INLA can fit. In this paper we show how to fit DHGLM with INLA by combining INLA and importance sampling (IS) algorithms. In particular, we illustrate how to split a DHGLM into submodels that can be fitted with INLA, so that the remaining parameters are fitted using adaptive multiple IS (AMIS) with the aid of the graphical representation of the hierarchical model. This is illustrated using a simulation study on three different types of models and two real data examples.
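
The following is a minimal sketch of an adaptive importance sampling loop on a toy one-dimensional target; in the paper's setting the target would involve conditional quantities supplied by INLA for each submodel, which is not reproduced here. For brevity the weights are computed against the current proposal only, whereas full AMIS reweights all past draws against a deterministic mixture of all proposals used so far.

```python
import numpy as np
from scipy import stats

# Toy adaptive importance sampling loop with a Gaussian proposal whose
# moments are re-estimated from the weighted draws at each iteration.
# `log_target` is a stand-in for the (unnormalized) posterior of the
# parameters kept outside the INLA submodels.
def log_target(theta):
    return stats.norm.logpdf(theta, loc=2.0, scale=0.5)

rng = np.random.default_rng(1)
mu, sigma = 0.0, 3.0                       # initial proposal moments
all_draws, all_logw = [], []
for _ in range(5):
    draws = rng.normal(mu, sigma, size=200)
    # Full AMIS would weight every draw against a deterministic mixture of
    # all proposals used so far; we weight against the current proposal only.
    logw = log_target(draws) - stats.norm.logpdf(draws, mu, sigma)
    all_draws.append(draws)
    all_logw.append(logw)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mu = np.sum(w * draws)                 # adapt the proposal
    sigma = np.sqrt(np.sum(w * (draws - mu) ** 2))

draws = np.concatenate(all_draws)
logw = np.concatenate(all_logw)
w = np.exp(logw - logw.max())
w /= w.sum()
print("weighted posterior mean estimate:", np.sum(w * draws))
```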

We show that the solution of the Hermite-Pad\'{e} type I approximation problem leads in a natural way to a subclass of solutions of the Hirota (discrete Kadomtsev-Petviashvili) system and of its adjoint linear problem. Our result explains the appearance of various ingredients of integrable systems theory in applications to multiple orthogonal polynomials, numerical algorithms, random matrices, and other branches of mathematical physics and applied mathematics where the Hermite-Pad\'{e} approximation problem is relevant. We also present a geometric algorithm, based on the notion of Desargues maps, for constructing solutions of the problem in the projective space over the field of rational functions. As a byproduct we obtain the corresponding generalization of the Wynn recurrence. We isolate the boundary data of the Hirota system that provide solutions to the Hermite-Pad\'{e} problem, showing that the corresponding reduction lowers the dimensionality of the system. In particular, we obtain certain equations which, in addition to the known ones given by Paszkowski, can be considered as direct analogs of the Frobenius identities. We study the place of the reduced system within integrability theory, which results in finding a multidimensional (in the sense of the number of variables) extension of the discrete-time Toda chain equations.
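
For reference, one common normalization of the type I Hermite-Pad\'{e} problem the abstract refers to is the following (stated here from the standard literature, not from this paper):

```latex
% Type I Hermite-Pade approximation (one common normalization): given formal
% power series f_1, ..., f_m at z = 0 and a multi-index n = (n_1, ..., n_m),
% find polynomials A_1, ..., A_m, not all zero, such that
\deg A_j \le n_j - 1, \qquad
\sum_{j=1}^{m} A_j(z)\, f_j(z) = O\!\left( z^{|\vec{n}| - 1} \right),
\qquad |\vec{n}| = n_1 + \cdots + n_m .
```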

Supervised classification techniques use training samples to learn a classification rule with small expected 0-1-loss (error probability). Conventional methods enable tractable learning and provide out-of-sample generalization by using surrogate losses instead of the 0-1-loss and by considering specific families of rules (hypothesis classes). This paper presents minimax risk classifiers (MRCs) that minimize the worst-case 0-1-loss over general classification rules and provide tight performance guarantees at learning. We show that MRCs are strongly universally consistent when using feature mappings given by characteristic kernels. The paper also proposes efficient optimization techniques for MRC learning and shows that the methods presented can provide accurate classification together with tight performance guarantees.
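
Schematically, and following the usual presentation of minimax risk classification (our paraphrase, not necessarily the paper's exact notation), an MRC solves a minimax problem over an uncertainty set of distributions consistent with expectation estimates of a feature mapping:

```latex
% Minimax risk classification, schematically: minimize the worst-case
% expected 0-1-loss over distributions whose feature-mapping expectations
% are consistent with estimates tau (within confidence widths lambda).
h^{\ast} \in \arg\min_{h} \, \max_{p \in \mathcal{U}} \, \mathbb{E}_{p}\!\left[ \ell\bigl(h, (x, y)\bigr) \right],
\qquad
\mathcal{U} = \bigl\{ p : \bigl| \mathbb{E}_{p}[\Phi(x, y)] - \tau \bigr| \le \lambda \bigr\}.
```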

A Regret Minimizing Set (RMS) is a useful concept in which a smaller subset of a database is selected while mostly preserving the best scores along every possible utility function. In this paper, we study the $k$-Regret Minimizing Sets ($k$-RMS) and Average Regret Minimizing Sets (ARMS) problems. $k$-RMS selects $r$ records from a database such that the maximum regret ratio between the $k$-th best score in the database and the best score in the selected records over all possible utility functions is minimized. Meanwhile, ARMS minimizes the average of this ratio over a distribution of utility functions. In particular, we study approximation algorithms for $k$-RMS and ARMS from the perspective of approximating the happiness ratio, which is equivalent to one minus the regret ratio. We first show that approximating the happiness of a $k$-RMS within any finite factor is NP-hard when the dimensionality of the database is unconstrained, and we extend the result to an inapproximability proof for the regret. We then provide approximation algorithms for the happiness of ARMS with better approximation ratios and time complexities than the known algorithms for approximating the regret. We further provide dataset reduction schemes that can be used to reduce the runtime of existing heuristic-based algorithms, as well as to derive polynomial-time approximation schemes for $k$-RMS when the dimensionality is fixed. Finally, we provide experimental validation.
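
To make the quantities concrete, the sketch below estimates the maximum regret ratio (and hence the minimum happiness ratio) of a candidate subset by sampling nonnegative linear utility functions; the dataset and subset are random placeholders, and exact optimization over all utilities is replaced by Monte Carlo.

```python
import numpy as np

# Monte Carlo estimate of the maximum regret ratio of a selected subset under
# nonnegative linear utilities; `db` and `subset` are random placeholders.
rng = np.random.default_rng(2)
db = rng.random((1000, 4))                    # 1000 records, 4 attributes
subset = db[rng.choice(1000, size=20, replace=False)]
k = 2                                         # compare against k-th best score

max_regret = 0.0
for _ in range(10_000):
    u = rng.random(4)                         # random linear utility function
    kth_best = np.sort(db @ u)[-k]            # k-th best score in the database
    best_sub = (subset @ u).max()             # best score among selected records
    max_regret = max(max_regret, max(0.0, 1.0 - best_sub / kth_best))

print("estimated maximum regret ratio:", max_regret)
print("estimated minimum happiness ratio:", 1.0 - max_regret)
```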

This paper is focused on the optimization approach to the solution of inverse problems. We introduce a stochastic dynamical system in which the parameter-to-data map is embedded, with the goal of employing techniques from nonlinear Kalman filtering to estimate the parameter given the data. The extended Kalman filter (which we refer to as ExKI in the context of inverse problems) can be effective for some inverse problems approached this way, but it is impractical when the forward map is not readily differentiable and is given as a black box, and also for high-dimensional parameter spaces, because of the need to propagate large covariance matrices. The application of ensemble Kalman filters, for example the ensemble Kalman inversion (EKI) algorithm, has emerged as a useful tool that overcomes both of these issues: it is derivative-free and works with a low-rank covariance approximation formed from the ensemble. In this paper, we work with the ExKI, EKI, and a variant of EKI which we term unscented Kalman inversion (UKI). The paper contains two main contributions. Firstly, we identify a novel stochastic dynamical system in which the parameter-to-data map is embedded. We present theory in the linear case to show exponential convergence of the mean of the filtering distribution to the solution of a regularized least-squares problem. This is in contrast to previous work employing the EKI, where the dynamical system used leads to algebraic convergence to an unregularized problem. Secondly, we show that applying the UKI to this novel stochastic dynamical system yields improved inversion results, in comparison with applying the EKI to the same novel stochastic dynamical system.
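
For orientation, here is a generic perturbed-observation EKI iteration on a toy two-parameter problem. This is the textbook form of the update, not the specific novel stochastic dynamical system introduced in the paper, and the forward map G is an invented example.

```python
import numpy as np

# Generic perturbed-observation EKI for y = G(u) + noise; a textbook variant,
# not the specific stochastic dynamical system introduced in the paper.
rng = np.random.default_rng(3)

def G(u):                                  # invented forward map, R^2 -> R^2
    return np.array([u[0] ** 3, u[0] + u[1]])

y = np.array([8.0, 5.0])                   # data consistent with u = [2, 3]
Gamma = 0.01 * np.eye(2)                   # observation noise covariance
U = rng.normal(0.0, 2.0, size=(50, 2))     # initial ensemble, 50 particles

for _ in range(20):
    P = np.array([G(u) for u in U])        # ensemble predictions
    du = U - U.mean(axis=0)
    dp = P - P.mean(axis=0)
    Cup = du.T @ dp / (len(U) - 1)         # parameter-prediction covariance
    Cpp = dp.T @ dp / (len(U) - 1)         # prediction covariance
    K = Cup @ np.linalg.inv(Cpp + Gamma)   # Kalman-type gain
    noise = rng.multivariate_normal(np.zeros(2), Gamma, size=len(U))
    U = U + (y + noise - P) @ K.T          # derivative-free update

print("ensemble mean estimate of u:", U.mean(axis=0))
```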

The Monte Carlo simulation (MCS) is a statistical methodology used in a large number of applications. It uses repeated random sampling to solve problems that admit a probabilistic interpretation and to obtain high-quality numerical results. The MCS is simple and easy to develop, implement, and apply. However, its computational cost and total runtime can be quite high, as it requires many samples to obtain an accurate approximation with low variance. In this paper, a novel MCS, called the self-adaptive BAT-MCS, is proposed; it is based on the binary-addition-tree algorithm (BAT) and our proposed self-adaptive simulation-number algorithm, and it simply and effectively reduces the runtime and variance of the MCS. The proposed self-adaptive BAT-MCS is applied to a simple benchmark problem to demonstrate its application to network reliability. Its statistical characteristics, including the expectation, variance, and simulation number, as well as its time complexity, are discussed. Furthermore, its performance is compared extensively with that of the traditional MCS on a large-scale problem.
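
The self-adaptive simulation-number idea can be sketched generically as batch sampling with a stopping rule on the relative standard error, as below; the BAT component and the paper's actual stopping criterion are not reproduced, and simulate_once is a toy stand-in for evaluating one random network state.

```python
import numpy as np

# Batch sampling with a stopping rule on the relative standard error: a
# generic rendering of the self-adaptive simulation-number idea. The BAT
# component is not reproduced; `simulate_once` is a toy stand-in for
# evaluating the network state of one random sample.
rng = np.random.default_rng(4)

def simulate_once():
    return float(rng.random() < 0.9)       # toy "system works" indicator

samples, tol, batch = [], 0.005, 1000
while True:
    samples.extend(simulate_once() for _ in range(batch))
    x = np.asarray(samples)
    est = x.mean()
    rel_se = x.std(ddof=1) / (np.sqrt(len(x)) * est)
    if rel_se < tol:                       # stop once precise enough
        break

print(f"estimate {est:.4f} after {len(x)} samples (relative SE {rel_se:.4f})")
```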

In this paper, we introduce the adaptive neuron enhancement (ANE) method for the best least-squares approximation using two-layer ReLU neural networks (NNs). For a given function f(x), the ANE method generates a two-layer ReLU NN and a numerical integration mesh such that the approximation accuracy is within the prescribed tolerance. The ANE method provides a natural process for obtaining a good initialization, which is crucial for solving the nonlinear optimization problems that arise in training. Numerical results of the ANE method are presented for functions of two variables exhibiting either intersecting interface singularities or sharp interior layers.
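
The toy sketch below captures the flavor of adaptively enhancing a two-layer ReLU network until a discrete least-squares error tolerance is met. It simplifies heavily: new neurons are drawn at random and only the output weights are solved for, whereas the actual ANE method selects and trains neurons more carefully.

```python
import numpy as np

# Grow a two-layer ReLU network until a discrete least-squares tolerance is
# met. Simplified: hidden neurons are drawn at random and only the output
# weights are solved for; the actual ANE method selects and trains neurons.
rng = np.random.default_rng(5)
f = lambda x: np.abs(x - 0.5)              # target with a kink
x = np.linspace(0.0, 1.0, 200)             # numerical integration mesh
W, b = np.empty(0), np.empty(0)            # hidden weights and biases
tol = 1e-3

while True:
    W = np.concatenate([W, rng.uniform(-2, 2, 5)])   # enhance: 5 new neurons
    b = np.concatenate([b, rng.uniform(-2, 2, 5)])
    A = np.maximum(W * x[:, None] + b, 0.0)          # ReLU feature matrix
    A = np.hstack([A, np.ones((len(x), 1))])         # output-layer bias column
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)     # output weights
    err = np.sqrt(np.mean((A @ c - f(x)) ** 2))
    if err < tol or len(W) >= 200:
        break

print(f"{len(W)} neurons reach discrete L2 error {err:.2e}")
```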

This paper deals with robust inference for parametric copula models. Estimation using Canonical Maximum Likelihood might be unstable, especially in the presence of outliers. We propose to use a procedure based on the Maximum Mean Discrepancy (MMD) principle. We derive non-asymptotic oracle inequalities, consistency, and asymptotic normality of this new estimator. In particular, the oracle inequality holds without any assumption on the copula family and can be applied in the presence of outliers or under misspecification. Moreover, in our MMD framework, the statistical inference of copula models for which there exists no density with respect to the Lebesgue measure on $[0,1]^d$, such as the Marshall-Olkin copula, becomes feasible. A simulation study shows the robustness of our new procedures, especially compared to pseudo-maximum likelihood estimation. An R package implementing the MMD estimator for copula models is available.
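
As a minimal illustration of MMD-based copula estimation (not the paper's algorithm or its R package), the sketch below fits the correlation parameter of a bivariate Gaussian copula by grid search on a V-statistic estimate of the squared MMD with a Gaussian kernel:

```python
import numpy as np
from scipy import stats

# Grid-search MMD fit of the correlation parameter of a bivariate Gaussian
# copula; a toy illustration of the criterion, not the paper's algorithm.
rng = np.random.default_rng(6)

def gaussian_copula_sample(rho, n):
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return stats.norm.cdf(z)               # map to uniforms on [0, 1]^2

def mmd2(x, y, bw=0.2):                    # V-statistic estimate of MMD^2
    k = lambda a, c: np.exp(-np.sum((a[:, None] - c[None]) ** 2, -1) / (2 * bw**2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

u_obs = gaussian_copula_sample(0.6, 300)   # stand-in pseudo-observations
grid = np.linspace(-0.9, 0.9, 37)
scores = [mmd2(u_obs, gaussian_copula_sample(r, 300)) for r in grid]
print("MMD estimate of rho:", grid[int(np.argmin(scores))])
```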

We consider applying multi-armed bandits to model-assisted designs for dose-finding clinical trials. Multi-armed bandits are simple and powerful methods for determining actions that maximize a reward in a limited number of trials. Among multi-armed bandit methods, we first consider Thompson sampling, which determines actions based on random samples from a posterior distribution. With the small sample sizes typical of dose-finding trials, the tails of the posterior distribution are heavy and random samples show too much variability, so we also consider an application of regularized Thompson sampling and a greedy algorithm. The greedy algorithm determines a dose based on the posterior mean. In addition, we propose a method to determine a dose based on the posterior median. We evaluate the performance of our proposed designs in six scenarios via simulation studies.
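
A bare-bones version of the Thompson sampling design reads as follows, assuming independent Beta-Binomial toxicity models per dose and a hypothetical toxicity scenario; the regularized, greedy, and posterior-median variants discussed above would replace the random draw with a shrunken draw, the posterior mean, or the posterior median, respectively.

```python
import numpy as np

# Thompson sampling for dose-finding with Beta-Binomial toxicity models:
# sample each dose's toxicity probability from its posterior and assign the
# next patient the dose whose sample is closest to the target toxicity.
rng = np.random.default_rng(7)
true_tox = np.array([0.05, 0.12, 0.25, 0.40, 0.55])   # hypothetical scenario
target = 0.25                                          # target toxicity level
a, b = np.ones(5), np.ones(5)                          # Beta(1,1) priors

for patient in range(30):
    draws = rng.beta(a, b)                             # Thompson sampling
    d = int(np.argmin(np.abs(draws - target)))         # dose closest to target
    tox = rng.random() < true_tox[d]                   # observe toxicity
    a[d] += tox
    b[d] += 1 - tox

post_mean = a / (a + b)
print("posterior mean toxicity per dose:", np.round(post_mean, 2))
print("recommended dose:", int(np.argmin(np.abs(post_mean - target))))
```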

Targeted high-resolution simulations driven by a general circulation model (GCM) can be used to calibrate GCM parameterizations of processes that are globally unresolvable but can be resolved in limited-area simulations. This raises the question of where to place high-resolution simulations to be maximally informative about the uncertain parameterizations in the global model. Here we construct an ensemble-based parallel algorithm to locate regions that maximize the uncertainty reduction, or information gain, in the uncertainty quantification of GCM parameters with regional data. The algorithm is based on a Bayesian framework that exploits a quantified posterior distribution on GCM parameters as a measure of uncertainty. The algorithm is embedded in the recently developed calibrate-emulate-sample (CES) framework, which performs efficient model calibration and uncertainty quantification with only $O(10^2)$ forward model evaluations, compared with $O(10^5)$ forward model evaluations typically needed for traditional approaches to Bayesian calibration. We demonstrate the algorithm with an idealized GCM, with which we generate surrogates of high-resolution data. In this setting, we calibrate parameters and quantify uncertainties in a quasi-equilibrium convection scheme. We consider (i) localization in space for a statistically stationary problem, and (ii) localization in space and time for a seasonally varying problem. In these proof-of-concept applications, the calculated information gain reflects the reduction in parametric uncertainty obtained from Bayesian inference when harnessing a targeted sample of data. The largest information gain results from regions near the intertropical convergence zone (ITCZ), and indeed the algorithm automatically targets these regions for data collection.
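
As a schematic of the region-scoring step, the sketch below measures information gain as the KL divergence from a Gaussian prior to the Gaussian posterior attributed to each candidate region and selects the region with the largest gain; the region names and posteriors are invented stand-ins for the CES-based quantities in the paper.

```python
import numpy as np

# Score candidate regions by information gain, measured as the KL divergence
# from a Gaussian prior to the Gaussian posterior a region's data would
# produce; posteriors here are random stand-ins for the CES-based ones.
rng = np.random.default_rng(8)

def kl_gauss(m1, C1, m0, C0):
    # KL( N(m1, C1) || N(m0, C0) ): information gained over the prior
    d = len(m0)
    iC0 = np.linalg.inv(C0)
    return 0.5 * (np.trace(iC0 @ C1) + (m0 - m1) @ iC0 @ (m0 - m1)
                  - d + np.log(np.linalg.det(C0) / np.linalg.det(C1)))

m_prior, C_prior = np.zeros(2), np.eye(2)
regions = {}
for name in ["tropics", "midlatitudes", "poles"]:     # hypothetical regions
    m = rng.normal(0, 0.5, 2)
    A = rng.normal(0, 0.3, (2, 2))
    C = 0.2 * np.eye(2) + A @ A.T                     # posterior covariance
    regions[name] = kl_gauss(m, C, m_prior, C_prior)

best = max(regions, key=regions.get)
print({k: round(v, 3) for k, v in regions.items()}, "-> place data in", best)
```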
