
This paper presents stochastic virtual element methods for propagating uncertainty in linear elastic stochastic problems. We first derive stochastic virtual element equations for 2D and 3D linear elastic problems that may involve uncertainties in material properties, external forces, etc. To this end, a stochastic virtual element space that couples the deterministic virtual element space and the stochastic space is constructed and used to approximate the unknown stochastic solution. Two numerical frameworks are then developed to solve the derived stochastic virtual element equations: a Polynomial Chaos (PC) approximation-based approach and a weakly intrusive approximation-based approach. In the PC-based framework, the stochastic solution is approximated using the PC basis and solved via an augmented deterministic virtual element equation generated by applying the stochastic Galerkin procedure to the original stochastic virtual element equation. In the weakly intrusive framework, the stochastic solution is approximated by a sum of products of random variables and deterministic vectors, where the deterministic vectors are obtained by converting the original stochastic problem into deterministic virtual element equations via the stochastic Galerkin approach, and the random variables are obtained by converting the original stochastic problem into one-dimensional stochastic algebraic equations via the classical Galerkin procedure. This method successfully avoids the curse of dimensionality in high-dimensional stochastic problems, since all random inputs are embedded into one-dimensional stochastic algebraic equations whose computational effort depends only weakly on the stochastic dimension. Numerical results on 2D and 3D problems with low- and high-dimensional random inputs demonstrate the good performance of the proposed methods.
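As a hedged sketch of the two approximations described above (the notation below is illustrative and not taken from the paper): the PC-based framework expands the solution in an orthogonal polynomial basis of the random inputs, while the weakly intrusive framework builds a separated representation term by term.

```latex
% Schematic forms of the two approximations (illustrative notation):
% PC-based expansion in orthogonal polynomials \Phi_k of the random vector \xi
u(x,\xi) \;\approx\; \sum_{k=0}^{P} u_k(x)\,\Phi_k(\xi)
% Weakly intrusive separated representation: scalar random variables
% \lambda_i(\xi) paired with deterministic vectors d_i(x), computed
% alternately via the stochastic and classical Galerkin procedures
u(x,\xi) \;\approx\; \sum_{i=1}^{r} \lambda_i(\xi)\, d_i(x)
```

The second form indicates why the cost depends only weakly on the stochastic dimension: each step solves deterministic equations for $d_i$ and one-dimensional stochastic algebraic equations for $\lambda_i$.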

Related content

A well-known boundary observability inequality for the elasticity system establishes that the energy of the system can be estimated from the solution on a sufficiently large part of the boundary over a sufficiently large time. This inequality is relevant in different contexts, such as exact boundary controllability, boundary stabilization, or some inverse source problems. Here we show that a corresponding boundary observability inequality for the spectral collocation approximation of the linear elasticity system in a d-dimensional cube also holds, uniformly with respect to the discretization parameter. This property is essential to prove that natural numerical approaches to the previous problems, based on replacing the elasticity system by its collocation discretization, give successful approximations of their continuous counterparts.
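For orientation, a boundary observability inequality of the kind referred to above typically takes the following schematic form (the observed quantity, the boundary portion $\Gamma_0$, and the constant are illustrative; the precise statement for the elasticity system is in the paper):

```latex
% Schematic boundary observability inequality: the initial energy is
% controlled by a boundary trace observed on \Gamma_0 over (0, T)
E(0) \;\le\; C(T,\Gamma_0) \int_0^T \!\!\int_{\Gamma_0} \big| \partial_\nu u \big|^2 \, d\sigma \, dt
```

The contribution is that an inequality of this kind survives spectral collocation discretization with a constant uniform in the discretization parameter.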

Hikers and hillwalkers typically use the gradient in the direction of travel (walking slope) as the main variable in established methods for predicting walking time (via the walking speed) along a route. Research into fell-running has suggested further variables that may improve speed algorithms in this context: the gradient of the terrain (hill slope) and the level of terrain obstruction. Recent improvements in data availability, as well as the widespread use of GPS tracking, now make it possible to explore these variables in a walking speed model at a scale sufficient to test statistical significance. We tested various established models used to predict walking speed against public GPS data from almost 88,000 km of UK walking/hiking tracks. Tracks were filtered to remove breaks and non-walking sections. A new generalised linear model (GLM) was then used to predict walking speeds. Key differences between the GLM and established rules were that the GLM considered the gradient of the terrain (hill slope) irrespective of walking slope, as well as the terrain type and the level of terrain obstruction in off-road travel. All of these factors were shown to be highly significant, and this is supported by a lower root-mean-square error (RMSE) compared to existing functions. We also observed an increase in RMSE between the GLM and established methods as hill slope increases, further supporting the importance of this variable.
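As a concrete example of the kind of established rule such a GLM is benchmarked against, here is a minimal sketch of Tobler's hiking function, which predicts walking speed from the walking slope alone (a standard reference rule, not the paper's GLM; the paper's model additionally includes hill slope, terrain type, and obstruction):

```python
import numpy as np

def tobler_speed_kmh(walking_slope):
    """Tobler's hiking function: walking speed (km/h) as a function of the
    gradient in the direction of travel (rise/run, dimensionless).
    Peak speed occurs on a slight downhill slope of -5%."""
    return 6.0 * np.exp(-3.5 * np.abs(walking_slope + 0.05))

# Flat ground vs. a 10% climb: roughly 5.0 km/h vs. 3.5 km/h
print(tobler_speed_kmh(np.array([0.0, 0.10])))
```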

Despite the significant interest and progress in reinforcement learning (RL) problems with adversarial corruption, current works are either confined to the linear setting or lead to an undesired $\tilde{O}(\sqrt{T}\zeta)$ regret bound, where $T$ is the number of rounds and $\zeta$ is the total amount of corruption. In this paper, we consider the contextual bandit with general function approximation and propose a computationally efficient algorithm that achieves a regret of $\tilde{O}(\sqrt{T}+\zeta)$. The proposed algorithm relies on the recently developed uncertainty-weighted least-squares regression from the linear contextual bandit literature and a new weighted estimator of uncertainty for general function classes. In contrast to existing analyses that rely heavily on the linear structure, we develop a novel technique to control the sum of weighted uncertainty, thus establishing the final regret bounds. We then generalize our algorithm to the episodic MDP setting and, for the first time, achieve an additive dependence on the corruption level $\zeta$ in the scenario of general function approximation. Notably, our algorithms achieve regret bounds that either nearly match the performance lower bound or improve on existing methods for all corruption levels, in both the known and unknown $\zeta$ cases.
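To make the uncertainty-weighting idea concrete, below is a hedged sketch of uncertainty-weighted least squares in the linear special case (the weight rule and names are illustrative; the paper's estimator for general function classes differs in detail). Down-weighting high-uncertainty rounds caps how much any corrupted round can shift the estimate:

```python
import numpy as np

def weighted_ridge(X, r, w, lam=1.0):
    """Uncertainty-weighted ridge regression (schematic).
    X: (n, d) contexts, r: (n,) observed rewards,
    w: (n,) per-round weights in (0, 1], smaller for more uncertain rounds."""
    d = X.shape[1]
    A = X.T @ (w[:, None] * X) + lam * np.eye(d)  # weighted Gram matrix
    b = X.T @ (w * r)                             # weighted moment vector
    return np.linalg.solve(A, b)                  # estimate of theta
```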

This study presents a constructive methodology for designing accelerated convex optimisation algorithms in the continuous-time domain. The two key enablers are the classical concept of passivity in control theory and a time-dependent change of variables that maps the output of the internal dynamic system to the optimisation variables. The Lyapunov function associated with the optimisation dynamics is obtained as a natural consequence of specifying the internal dynamics that drives the state evolution as a passive linear time-invariant system. The passivity-based methodology provides a general framework with the flexibility to generate convex optimisation algorithms that guarantee different convergence rate bounds on the objective function value. The same principle applies to the design of online parameter update algorithms for adaptive control, by redefining the output of the internal dynamics to allow for feedback interconnection with the tracking error dynamics.
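As one familiar instance of the accelerated continuous-time dynamics such a framework is designed to produce (an example for orientation, not the paper's construction), consider the ODE of Su, Boyd, and Candès that models Nesterov's accelerated gradient method:

```latex
% Accelerated gradient flow with vanishing damping; along solutions,
% f(x(t)) - f^\star = O(1/t^2) for smooth convex f
\ddot{x}(t) + \frac{3}{t}\,\dot{x}(t) + \nabla f\big(x(t)\big) = 0
```

A passivity-based construction aims to produce Lyapunov certificates for convergence rates of dynamics of this type.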

In a task where many similar inverse problems must be solved, evaluating costly simulations is impractical. Therefore, replacing the model $y$ with a surrogate model $y_s$ that can be evaluated quickly leads to a significant speedup. The approximation quality of the surrogate model depends strongly on the number, position, and accuracy of the sample points. Combined with a finite computational budget, this leads to a problem of (computer) experimental design. In contrast to the selection of sample points, the trade-off between accuracy and effort has hardly been studied systematically. We therefore propose an adaptive algorithm to find an optimal design in terms of position and accuracy. Pursuing a sequential design by incrementally allocating the computational budget leads to a convex and constrained optimization problem. As a surrogate, we construct a Gaussian process regression model. We measure the global approximation error in terms of its impact on the accuracy of the identified parameter and aim for a uniform absolute tolerance, assuming that $y_s$ is computed by finite element calculations. A priori error estimates and a coarse estimate of computational effort relate the expected improvement of the surrogate model error to computational effort, resulting in the most efficient combination of sample point and evaluation tolerance. We also allow for improving the accuracy of already existing sample points by continuing previously truncated finite element solution procedures.
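A minimal sketch of the surrogate construction described above, assuming scikit-learn and fabricated sample data: per-point finite element evaluation tolerances can be expressed as heteroscedastic noise on the GP's diagonal, which is the mechanism that lets position and accuracy be traded off against each other.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Illustrative data: sample positions, FE evaluations, and the tolerance
# to which each evaluation was computed (all values assumed).
X = np.array([[0.1], [0.4], [0.7], [0.9]])
y = np.array([0.95, 0.46, 0.18, 0.05])
tol = np.array([1e-2, 1e-3, 1e-3, 1e-4])

# Per-point tolerances enter as heteroscedastic noise variances (alpha),
# so a cheap, coarse evaluation constrains the surrogate less.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=tol**2)
gp.fit(X, y)
mean, std = gp.predict(np.array([[0.5]]), return_std=True)
```

Refining an existing sample point then corresponds to shrinking its entry of `tol` by continuing the truncated FE solve, rather than adding a new row to `X`.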

The principle of maximum entropy, as introduced by Jaynes in information theory, has contributed to advancements in various domains such as statistical mechanics, machine learning, and ecology. Its resultant solutions have served as a catalyst, helping researchers map their empirical observations to unbiased models while deepening the understanding of complex systems and phenomena. However, when the model elements are not directly observable, such as when noise or ocular occlusion is present, standard maximum entropy approaches may fail, as they are unable to match the feature constraints. Here we present the Principle of Uncertain Maximum Entropy as a method that encodes all available information in spite of arbitrarily noisy observations while surpassing the accuracy of some ad hoc methods. Additionally, we utilize the output of a black-box machine learning model as input to an uncertain maximum entropy model, resulting in a novel approach for scenarios where the observation function is unavailable. Previous remedies either relaxed the feature constraints when accounting for observation error, given well-characterized errors such as zero-mean Gaussian noise, or simply selected the most likely model element given an observation. We anticipate that our principle will find broad application in diverse fields, as it generalizes the traditional maximum entropy method with the ability to utilize uncertain observations.
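For contrast with the uncertain variant, here is a hedged sketch of the classical maximum entropy computation that such work generalizes (illustrative code using standard dual ascent): maximize entropy subject to matching empirical feature expectations, which yields a Gibbs-form distribution.

```python
import numpy as np

def maxent_solve(F, f_bar, lr=0.1, steps=5000):
    """Classical MaxEnt (schematic): maximize H(p) s.t. E_p[f] = f_bar.
    F: (n_elements, n_features) feature matrix; f_bar: (n_features,)
    empirical feature expectations. The solution has the Gibbs form
    p_i proportional to exp(lam . F_i)."""
    lam = np.zeros(F.shape[1])
    for _ in range(steps):
        logits = F @ lam
        p = np.exp(logits - logits.max())
        p /= p.sum()
        lam += lr * (f_bar - F.T @ p)  # dual ascent on the constraint gap
    return p, lam
```

The failure mode motivating the uncertain variant is visible here: with noisy observations of the elements, the empirical `f_bar` itself is no longer directly available to constrain against.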

We present an adaptive algorithm for the computation of quantities of interest involving the solution of a stochastic elliptic PDE where the diffusion coefficient is parametrized by means of a Karhunen-Lo\`eve expansion. The approximation of the equivalent parametric problem requires a restriction of the countably infinite-dimensional parameter space to a finite-dimensional parameter set, a spatial discretization, and an approximation in the parametric variables. We consider a sparse grid approach between these approximation directions in order to reduce the computational effort and propose a dimension-adaptive combination technique. In addition, a sparse grid quadrature for the high-dimensional parametric approximation is employed and simultaneously balanced with the spatial and stochastic approximation. Our adaptive algorithm constructs a sparse grid approximation based on the benefit-cost ratio, such that the regularity, and thus the decay of the Karhunen-Lo\`eve coefficients, need not be known beforehand. The decay is detected and exploited as the algorithm adjusts to the anisotropy in the parametric variables. We include numerical examples for the Darcy problem with a lognormal permeability field, which illustrate the good performance of the algorithm: for sufficiently smooth random fields, we essentially recover the rate of the spatial discretization as the asymptotic convergence rate with respect to the computational cost.
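Schematically, the dimension-adaptive combination technique referred to above combines component solutions $u_{\mathbf{l}}$ on anisotropic grids indexed by a downward-closed set $\mathcal{I}$ (illustrative notation, not the paper's):

```latex
% Adaptive combination technique over a downward-closed index set I
u \;\approx\; \sum_{\mathbf{l} \in \mathcal{I}} c_{\mathbf{l}}\, u_{\mathbf{l}},
\qquad
c_{\mathbf{l}} \;=\; \sum_{\substack{\mathbf{z} \in \{0,1\}^{n} \\ \mathbf{l} + \mathbf{z} \in \mathcal{I}}} (-1)^{|\mathbf{z}|_1}
```

Benefit-cost ratios of candidate indices then steer which direction (spatial, parametric, or quadrature) is refined next, which is how the anisotropy is detected without a priori knowledge of the coefficient decay.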

We establish sparsity and summability results for coefficient sequences of Wiener-Hermite polynomial chaos expansions of countably-parametric solutions of linear elliptic and parabolic divergence-form partial differential equations with Gaussian random field inputs. The novel proof technique developed here is based on analytic continuation of parametric solutions into the complex domain. It differs from previous works that used bootstrap arguments and induction on the differentiation order of solution derivatives with respect to the parameters. The present holomorphy-based argument allows a unified, ``differentiation-free'' proof of sparsity (expressed in terms of $\ell^p$-summability or weighted $\ell^2$-summability) of sequences of Wiener-Hermite coefficients in polynomial chaos expansions in various scales of function spaces. The analysis also implies corresponding analyticity and sparsity results for posterior densities in Bayesian inverse problems subject to Gaussian priors on uncertain inputs from function spaces. Our results furthermore yield dimension-independent convergence rates of various \emph{constructive} high-dimensional deterministic numerical approximation schemes such as single-level and multi-level versions of Hermite-Smolyak anisotropic sparse-grid interpolation and quadrature in both forward and inverse computational uncertainty quantification.
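In schematic terms, the expansions and the sparsity notion in question read as follows (illustrative notation: $\mathcal{F}$ denotes finitely supported multi-indices and $H_\nu$ tensorized Hermite polynomials):

```latex
% Wiener-Hermite PC expansion of a countably-parametric solution u,
% and the coefficient sparsity that yields dimension-independent rates
u(\boldsymbol{y}) \;=\; \sum_{\nu \in \mathcal{F}} u_\nu\, H_\nu(\boldsymbol{y}),
\qquad
\big(\|u_\nu\|_V\big)_{\nu \in \mathcal{F}} \in \ell^p(\mathcal{F})
\ \text{for some}\ 0 < p < 1
```

Summability of this kind is what drives the dimension-independent convergence rates of the sparse-grid interpolation and quadrature schemes mentioned above.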

As neural networks become increasingly widespread, confidence in their predictions has become more and more important. However, basic neural networks do not deliver certainty estimates and may suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. A comprehensive introduction to the most crucial sources of uncertainty is given, along with their separation into reducible model uncertainty and irreducible data uncertainty. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches for calibrating neural networks, and give an overview of existing baselines and implementations. Examples from the wide spectrum of challenges in different fields give an idea of the needs and challenges regarding uncertainty in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
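As a minimal sketch of one family of methods surveyed above (ensembles of neural networks; the `.predict` interface is an assumption for illustration), the spread across independently trained ensemble members serves as an estimate of the reducible model uncertainty:

```python
import numpy as np

def ensemble_prediction(models, x):
    """Deep-ensemble uncertainty (schematic): average member predictions
    for the point estimate, and use their spread as model uncertainty.
    `models`: list of independently trained predictors with .predict(x)."""
    preds = np.stack([m.predict(x) for m in models])  # (n_members, ...)
    return preds.mean(axis=0), preds.std(axis=0)      # prediction, spread
```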

The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts made so far at handling uncertainty in general and formalizing this distinction in particular.
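One common way to formalize the aleatoric/epistemic distinction for classifiers, shown here as a hedged sketch (this information-theoretic split is standard in the literature, not specific to this paper): total predictive entropy decomposes into expected member entropy (aleatoric) plus the mutual information between prediction and model (epistemic).

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy along the class axis."""
    return -(p * np.log(np.clip(p, 1e-12, 1.0))).sum(axis=axis)

def decompose_uncertainty(member_probs):
    """member_probs: (n_members, n_classes) predictive distributions from
    an ensemble or posterior samples. Returns (total, aleatoric, epistemic)."""
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)                         # entropy of mean prediction
    aleatoric = entropy(member_probs).mean(axis=0)  # mean member entropy
    return total, aleatoric, total - aleatoric      # epistemic = difference
```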
