We consider a class of density-driven flow problems. We are particularly interested in the problem of the salinization of coastal aquifers. We consider the Henry saltwater intrusion problem with uncertain porosity, permeability, and recharge parameters as a test case. These uncertainties stem from a lack of knowledge, inaccurate measurements, and the inability to measure parameters at every spatial or temporal location. This problem is nonlinear and time-dependent. The solution is the salt mass fraction, which is uncertain and changes in time. Uncertainties in porosity, permeability, recharge, and mass fraction are modeled using random fields. This work investigates the applicability of the well-known multilevel Monte Carlo (MLMC) method to such problems. The MLMC method reduces the total computational and storage costs by running multiple scenarios on different spatial and time meshes and then estimating the mean value of the mass fraction. The parallelization is performed in both the physical space and the stochastic space. To solve each deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion. We use the solution obtained with the quasi-Monte Carlo method as a reference solution.
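
The telescoping structure of the MLMC estimator described above can be summarized in a few lines. The following is a minimal sketch, assuming a hypothetical `solve_henry(level, sample)` stand-in for one deterministic ug4 run on a given mesh level; it is not part of the referenced software, and the input distributions are illustrative.

```python
# Minimal multilevel Monte Carlo (MLMC) sketch for estimating the mean salt
# mass fraction at a point of interest.  `solve_henry` is a hypothetical
# stand-in for one deterministic density-driven flow simulation on mesh
# level `level` with sampled porosity/permeability/recharge.
import numpy as np

rng = np.random.default_rng(0)

def solve_henry(level, sample):
    # Placeholder "solver": only mimics a quantity of interest whose
    # discretisation error shrinks as the level increases.
    porosity, permeability, recharge = sample
    return porosity * permeability + recharge + 2.0 ** (-level) * rng.normal(scale=0.1)

def mlmc_estimate(levels, samples_per_level):
    """Telescoping MLMC estimator: E[Q_L] ~ sum_l mean(Q_l - Q_{l-1})."""
    estimate = 0.0
    for l, n_l in zip(levels, samples_per_level):
        diffs = []
        for _ in range(n_l):
            sample = rng.lognormal(mean=0.0, sigma=0.25, size=3)  # uncertain inputs
            fine = solve_henry(l, sample)
            coarse = solve_henry(l - 1, sample) if l > levels[0] else 0.0
            diffs.append(fine - coarse)
        estimate += np.mean(diffs)
    return estimate

print(mlmc_estimate(levels=[2, 3, 4], samples_per_level=[400, 100, 25]))
```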

Tomographic reconstruction, despite its revolutionary impact on a wide range of applications, suffers from its ill-posed nature in that there is no unique solution because of limited and noisy measurements. Therefore, in the absence of ground truth, quantifying the solution quality is highly desirable but under-explored. In this work, we address this challenge through Gaussian process modeling to flexibly and explicitly incorporate prior knowledge of sample features and experimental noises through the choices of the kernels and noise models. Our proposed method yields not only comparable reconstruction to existing practical reconstruction methods (e.g., regularized iterative solver for inverse problem) but also an efficient way of quantifying solution uncertainties. We demonstrate the capabilities of the proposed approach on various images and show its unique capability of uncertainty quantification in the presence of various noises.
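The reconstruction-with-uncertainty idea can be illustrated with standard linear-Gaussian conditioning. The sketch below is not the authors' implementation: it uses a toy 1-D "image", a random binary matrix `A` standing in for the tomographic projection operator, and a squared-exponential kernel with a Gaussian noise model; the posterior mean gives the reconstruction and the posterior variance quantifies its uncertainty.

```python
# Gaussian-process-style linear inverse reconstruction with uncertainty
# quantification on a toy problem.  The forward operator, kernel, and noise
# level are illustrative assumptions, not the paper's choices.
import numpy as np

n = 60                                  # number of unknown pixels (1-D toy image)
x = np.linspace(0.0, 1.0, n)
truth = np.exp(-0.5 * ((x - 0.35) / 0.08) ** 2) + 0.5 * (x > 0.7)

# Toy "projection" operator: each measurement averages a random subset of pixels.
rng = np.random.default_rng(1)
m = 25
A = (rng.random((m, n)) < 0.2).astype(float)
sigma = 0.01                            # measurement noise level
y = A @ truth + sigma * rng.normal(size=m)

# GP prior on the image: squared-exponential kernel encodes smoothness.
ell = 0.05
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)

# Linear-Gaussian conditioning on noisy projections y = A f + e, e ~ N(0, sigma^2 I).
S = A @ K @ A.T + sigma ** 2 * np.eye(m)
gain = K @ A.T @ np.linalg.solve(S, np.eye(m))
post_mean = gain @ y                                         # reconstruction
post_cov = K - gain @ A @ K
post_std = np.sqrt(np.clip(np.diag(post_cov), 0.0, None))    # pixelwise uncertainty

print(post_mean[:5], post_std[:5])
```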

Tumble dryers offer a fast and convenient way of drying textiles independent of weather conditions and are therefore frequently used in ordinary households. However, artificial drying of textiles consumes considerable amounts of energy: approximately 8.2 percent of residential electricity consumption in northern European countries goes to drying textiles (Cranston et al., 2019). Several authors have investigated aspects of the clothes-drying cycle with experimental and numerical methods to understand and improve the process. The first landmark study on the physics of evaporation in tumble dryers was presented by Lambert et al. (1991) in the early 90s. With the aid of the Chilton-Colburn analogy, they introduced the concept of an area-mass transfer coefficient to describe the evaporation rate. Afterwards, several experimental and numerical studies were published based on this concept, and the model was subsequently extended to 0-dimensional (Deans, 2001) and 1-dimensional (Wei et al., 2017) formulations to gain accuracy. The evaporation rate is considered the main system parameter for dryers, from which other performance parameters, including drying time, effectiveness, moisture content, and efficiency, can be estimated. More recent literature has focused on utilizing dimensional analysis or image-processing techniques to correlate drying indices with system parameters. However, the validity of these regressed models is machine-specific and hence cannot be generalized yet. All the previous models for estimating the evaporation rate in tumble dryers are discussed. The review of the related literature shows that all previous models for predicting the evaporation rate in clothes dryers have limitations in terms of accuracy and applicability.
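For concreteness, the area-mass-transfer-coefficient concept can be turned into a back-of-the-envelope evaporation-rate estimate via the Chilton-Colburn analogy. The sketch below uses illustrative numbers throughout (heat transfer coefficient, effective area, and vapour densities are assumptions, not data from the cited studies).

```python
# Back-of-the-envelope evaporation-rate estimate: the Chilton-Colburn analogy
# converts a convective heat transfer coefficient into a mass transfer
# coefficient, and the evaporation rate follows from the vapour-density
# difference across the textile surface.  All values are illustrative.
rho_air = 1.1          # kg/m^3, moist-air density
cp_air = 1007.0        # J/(kg K), specific heat of air
Le = 0.87              # Lewis number of water vapour in air
h_heat = 25.0          # W/(m^2 K), convective heat transfer coefficient (assumed)
area = 0.5             # m^2, effective evaporation area of the load (assumed)
rho_v_surface = 0.035  # kg/m^3, saturation vapour density at the fabric surface (assumed)
rho_v_bulk = 0.010     # kg/m^3, vapour density of the drum air (assumed)

# Chilton-Colburn analogy: h_m = h / (rho * cp * Le^(2/3))
h_mass = h_heat / (rho_air * cp_air * Le ** (2.0 / 3.0))         # m/s
evaporation_rate = h_mass * area * (rho_v_surface - rho_v_bulk)  # kg/s
print(f"estimated evaporation rate: {evaporation_rate * 3600:.2f} kg/h")
```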

Inferring the parameters of ordinary differential equations (ODEs) from noisy observations is an important problem in many scientific fields. Currently, most parameter estimation methods that bypass numerical integration tend to rely on basis functions or Gaussian processes to approximate the ODE solution and its derivatives. Due to the sensitivity of the ODE solution to its derivatives, these methods can be hindered by estimation error, especially when only sparse time-course observations are available. We present a Bayesian collocation framework that operates on the integrated form of the ODEs and also avoids the expensive use of numerical solvers. Our methodology has the capability to handle general nonlinear ODE systems. We demonstrate the accuracy of the proposed method through a simulation study, where the estimated parameters and recovered system trajectories are compared with other recent methods. A real data example is also provided.
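A minimal sketch of the integrated-form collocation idea follows: the solution is represented by latent state values on the observation grid, the integral is approximated by trapezoidal quadrature, and a penalized least-squares (MAP-style) objective stands in for the full Bayesian treatment of the paper. The logistic test equation, penalty weights, and optimizer are illustrative choices.

```python
# Collocation on the *integrated* form of an ODE,
#   x(t) = x(0) + \int_0^t f(x(s), theta) ds,
# fitted by penalized least squares, avoiding any numerical ODE solver.
import numpy as np
from scipy.optimize import minimize

def f(x, theta):                     # logistic growth: dx/dt = r x (1 - x/K)
    r, K = theta
    return r * x * (1.0 - x / K)

# Sparse, noisy observations of the true trajectory (r=1.0, K=2.0, x0=0.1).
t = np.linspace(0.0, 6.0, 13)
r_true, K_true, x0 = 1.0, 2.0, 0.1
x_true = K_true * x0 * np.exp(r_true * t) / (K_true + x0 * (np.exp(r_true * t) - 1.0))
rng = np.random.default_rng(2)
y = x_true + 0.05 * rng.normal(size=t.size)

def objective(params):
    theta, z = params[:2], params[2:]          # parameters and latent states
    # Cumulative trapezoidal quadrature of f(z, theta) approximates the integral.
    integrand = f(z, theta)
    integral = np.concatenate(([0.0],
        np.cumsum(0.5 * np.diff(t) * (integrand[1:] + integrand[:-1]))))
    data_fit = np.sum((y - z) ** 2) / 0.05 ** 2          # match the observations
    ode_fit = np.sum((z - z[0] - integral) ** 2) / 0.01 ** 2  # match the integrated ODE
    return data_fit + ode_fit

start = np.concatenate(([0.5, 1.0], y))        # crude initial guess
bounds = [(1e-3, None)] * 2 + [(None, None)] * y.size
result = minimize(objective, start, method="L-BFGS-B", bounds=bounds)
print("estimated (r, K):", result.x[:2])
```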

In this paper we investigate the stability properties of the so-called gBBKS and GeCo methods, which belong to the class of nonstandard schemes and preserve the positivity as well as all linear invariants of the underlying system of ordinary differential equations for any step size. A stability investigation for these methods, which are outside the class of general linear methods, is challenging since the iterates are always generated by a nonlinear map even for linear problems. Recently, a stability theorem was derived presenting criteria for understanding such schemes. For the analysis, the schemes are applied to general linear equations and proven to be generated by $\mathcal C^1$-maps with locally Lipschitz continuous first derivatives. As a result, the above mentioned stability theorem can be applied to investigate the Lyapunov stability of non-hyperbolic fixed points of the numerical method by analyzing the spectrum of the corresponding Jacobian of the generating map. In addition, if a fixed point is proven to be stable, the theorem guarantees the local convergence of the iterates towards it. In the case of first and second order gBBKS schemes the stability domain coincides with that of the underlying Runge--Kutta method. Furthermore, while the first order GeCo scheme converts steady states to stable fixed points for all step sizes and all linear test problems of finite size, the second order GeCo scheme has a bounded stability region for the considered test problems. Finally, all theoretical predictions from the stability analysis are validated numerically.
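The analysis recipe sketched above (apply the scheme to a linear test problem, differentiate the generating map at a steady state, and inspect its spectrum) can be mimicked numerically. The example below uses the modified Patankar-Euler scheme as a stand-in from the same class of positivity- and invariant-preserving nonstandard methods; it is not the gBBKS or GeCo scheme itself.

```python
# Numerical stability check for a nonstandard one-step scheme: finite-difference
# Jacobian of the generating map at a steady state, then its spectrum.
import numpy as np

a, b, dt = 2.0, 1.0, 0.5        # linear test system y1' = b y2 - a y1, y2' = a y1 - b y2

def generating_map(y):
    """One modified Patankar-Euler step (solves a small linear system)."""
    M = np.array([[1.0 + dt * a, -dt * b],
                  [-dt * a, 1.0 + dt * b]])
    return np.linalg.solve(M, y)

def numerical_jacobian(g, y, eps=1e-7):
    n = y.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (g(y + e) - g(y - e)) / (2.0 * eps)
    return J

y_star = np.array([b, a]) / (a + b)      # steady state with total mass 1 (a*y1 = b*y2)
J = numerical_jacobian(generating_map, y_star)
print("spectrum of the generating map's Jacobian:", np.linalg.eigvals(J))
# One eigenvalue equals 1 (direction of the conserved linear invariant), so the
# fixed point is non-hyperbolic; stability hinges on the remaining spectrum.
```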

Quantifying the predictive uncertainty of neural networks has recently attracted increasing attention. In this work, we focus on measuring the uncertainty of graph neural networks (GNNs) for the task of node classification. Most existing GNNs model message passing among nodes, and the messages are often deterministic. Questions naturally arise: Does there exist uncertainty in the messages? How could we propagate such uncertainty over a graph together with messages? To address these issues, we propose a Bayesian uncertainty propagation (BUP) method, which embeds GNNs in a Bayesian modeling framework and models the predictive uncertainty of node classification with the Bayesian confidence of the predictive probability and the uncertainty of messages. Our method introduces a novel uncertainty propagation mechanism inspired by Gaussian models. Moreover, we present an uncertainty-oriented loss for node classification that allows the GNNs to explicitly integrate predictive uncertainty into the learning procedure; consequently, training examples with large predictive uncertainty are penalized. We demonstrate BUP with respect to prediction reliability and out-of-distribution (OOD) predictions. The learned uncertainty is also analyzed in depth, and the relations between uncertainty and graph topology, as well as predictive uncertainty in the OOD cases, are investigated with extensive experiments. The empirical results on popular benchmark datasets demonstrate the superior performance of the proposed method.
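A stripped-down version of Gaussian-style uncertainty propagation over a graph is shown below: every node carries a message mean and variance, and mean aggregation over neighbors propagates both under an independence assumption. This is only an illustration of the idea, not the authors' BUP implementation.

```python
# Propagate per-node means and variances through one round of mean aggregation.
import numpy as np

# Toy graph: adjacency matrix (4 nodes) and per-node message mean/variance.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
mu = np.array([0.2, 1.0, -0.5, 0.7])        # message means
var = np.array([0.1, 0.3, 0.05, 0.2])       # message uncertainties (variances)

def propagate(A, mu, var):
    """Mean and variance of the averaged neighbor messages (independence assumed)."""
    deg = A.sum(axis=1)
    mu_out = (A @ mu) / deg
    var_out = (A @ var) / deg ** 2           # Var of a mean of independent messages
    return mu_out, var_out

mu1, var1 = propagate(A, mu, var)
print("propagated means:", mu1)
print("propagated variances:", var1)
# Nodes whose neighborhoods carry large variance end up with uncertain
# predictions; a loss can then penalize such training examples.
```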

In the present paper we consider the initial data, external force, viscosity coefficients, and heat conductivity coefficient as random data for the compressible Navier--Stokes--Fourier system. The Monte Carlo method, which is frequently used for the approximation of statistical moments, is combined with a suitable deterministic discretisation method in physical space and time. Under the assumption that numerical densities and temperatures are bounded in probability, we prove the convergence of random finite volume solutions to a statistical strong solution by applying genuine stochastic compactness arguments. Further, we show the convergence and error estimates for the Monte Carlo estimators of the expectation and deviation. We present several numerical results to illustrate the theoretical results.
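The Monte Carlo estimators for the expectation and deviation are straightforward to write down. The sketch below uses a hypothetical `finite_volume_solve` stand-in for one deterministic space-time discretisation run with sampled viscosity and heat-conductivity coefficients; the coefficient distributions are illustrative.

```python
# Plain Monte Carlo estimators of expectation and deviation of a random
# solution functional.  `finite_volume_solve` is a placeholder, not a real solver.
import numpy as np

rng = np.random.default_rng(3)

def finite_volume_solve(mu, kappa):
    # Placeholder for a compressible Navier-Stokes-Fourier solve; returns a
    # scalar quantity of interest (e.g. mean density at the final time).
    return 1.0 + 0.3 * mu - 0.1 * kappa + 0.02 * rng.normal()

N = 500
samples = np.array([finite_volume_solve(rng.uniform(0.5, 1.5), rng.uniform(0.8, 1.2))
                    for _ in range(N)])

expectation = samples.mean()        # Monte Carlo estimator of E[Q]
deviation = samples.std(ddof=1)     # deviation estimator (unbiased sample variance)
print(f"E[Q] ~ {expectation:.4f}, deviation ~ {deviation:.4f}")
```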

Using the equivalent inclusion method (a method strongly related to the Hashin-Shtrikman variational principle) as a surrogate model, we propose a variance reduction strategy for the numerical homogenization of random composites made of inclusions (or rather inhomogeneities) embedded in a homogeneous matrix. The efficiency of this strategy is demonstrated within the framework of two-dimensional, linear conductivity. Significant computational gains over full-field simulations are obtained even for high contrast values. We also show that our strategy allows us to investigate the influence of parameters of the microstructure on the macroscopic response. Our strategy readily extends to three-dimensional problems and to linear elasticity. Attention is paid to the computational cost of the surrogate model. In particular, an inexpensive approximation of the so-called influence tensors (which are used to compute the surrogate model) is proposed.
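Such a surrogate-based variance reduction amounts to a control-variate-type combination of a few expensive full-field solves with many cheap surrogate evaluations. The sketch below illustrates this with toy stand-ins `full_field` and `eim_surrogate`; neither reproduces the actual homogenization solver or the equivalent inclusion method.

```python
# Control-variate (two-level) estimator: few expensive reference solves plus
# many cheap, correlated surrogate evaluations.  All models here are toys.
import numpy as np

rng = np.random.default_rng(4)

def sample_microstructure():
    return rng.uniform(0.1, 0.4)                       # e.g. inclusion volume fraction

def full_field(vf):                                    # expensive reference model (toy)
    return 1.0 + 2.5 * vf + 0.4 * vf ** 2 + 0.05 * rng.normal()

def eim_surrogate(vf):                                 # cheap, correlated surrogate (toy)
    return 1.0 + 2.5 * vf

# E[Q] ~ mean(Q - Q_s) over a few samples + mean(Q_s) over many samples.
few = [sample_microstructure() for _ in range(20)]
many = [sample_microstructure() for _ in range(20000)]
correction = np.mean([full_field(v) - eim_surrogate(v) for v in few])
surrogate_mean = np.mean([eim_surrogate(v) for v in many])
print("variance-reduced estimate of the effective property:", correction + surrogate_mean)
```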

An in-depth understanding of uncertainty is the first step to making effective decisions under uncertainty. Machine and deep learning (ML/DL) have been heavily leveraged to solve complex problems involving high-dimensional data. However, reasoning about and quantifying different types of uncertainty to achieve effective decision-making has been much less explored in ML/DL than in other Artificial Intelligence (AI) domains. In particular, belief/evidence theories have been studied in knowledge representation and reasoning (KRR) since the 1960s to reason about and measure uncertainty in order to enhance decision-making effectiveness. We found that only a few studies have leveraged the mature uncertainty research of belief/evidence theories in ML/DL to tackle complex problems under different types of uncertainty. In this survey paper, we discuss several popular belief theories and their core ideas for dealing with uncertainty causes and types and for quantifying them, along with a discussion of their applicability in ML/DL. In addition, we discuss three main approaches that leverage belief theories in Deep Neural Networks (DNNs), namely Evidential DNNs, Fuzzy DNNs, and Rough DNNs, in terms of their uncertainty causes, types, and quantification methods, along with their applicability in diverse problem domains. Based on our in-depth survey, we discuss insights, lessons learned, and limitations of the current state of the art bridging belief theories and ML/DL, and finally, future research directions.
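As a concrete taste of the belief-theoretic machinery discussed above, the snippet below implements Dempster's rule of combination for two basic probability assignments over a small frame of discernment; the frame and mass values are illustrative.

```python
# Dempster's rule: fuse two mass functions and report the conflict mass.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + mB * mC
        else:
            conflict += mB * mC                 # mass falling on the empty set
    return {A: v / (1.0 - conflict) for A, v in combined.items()}, conflict

# Two sources of evidence about a label in {cat, dog, bird}.
m1 = {frozenset({"cat"}): 0.6, frozenset({"cat", "dog"}): 0.3,
      frozenset({"cat", "dog", "bird"}): 0.1}
m2 = {frozenset({"dog"}): 0.5, frozenset({"cat", "dog"}): 0.4,
      frozenset({"cat", "dog", "bird"}): 0.1}

fused, conflict = dempster_combine(m1, m2)
print("conflict mass K =", round(conflict, 3))
for A, v in fused.items():
    print(set(A), round(v, 3))
```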

Artificial neural networks thrive at solving classification problems for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, and attempts to extend this knowledge without targeting the original task result in catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task-incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions are 1) a taxonomy and extensive overview of the state of the art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny ImageNet, the large-scale unbalanced iNaturalist dataset, and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which tasks are presented, and qualitatively compare methods in terms of required memory, computation time, and storage.

Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter-efficient. In this paper, we design ensembles not only over weights but also over hyperparameters to improve the state of the art in both settings. For the best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks, with MLP, LeNet, and Wide ResNet 28-10 architectures, our methodology improves upon both deep and batch ensembles.
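The hyper-deep-ensemble recipe can be caricatured in a few lines: run a random hyperparameter search stratified across random initializations, keep a member per initialization, and average the members' predictions. The sketch below uses a tiny logistic-regression "model" in place of a neural network and omits the greedy member-selection step of the paper; all hyperparameter ranges are illustrative.

```python
# Caricature of hyper-deep ensembles: random search over hyperparameters,
# stratified across random initializations, then prediction averaging.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=400) > 0).astype(float)
X_tr, y_tr, X_val, y_val = X[:300], y[:300], X[300:], y[300:]

def train(lr, l2, seed, epochs=200):
    r = np.random.default_rng(seed)
    w = 0.1 * r.normal(size=X_tr.shape[1])             # random initialization
    for _ in range(epochs):                            # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-X_tr @ w))
        w -= lr * (X_tr.T @ (p - y_tr) / len(y_tr) + l2 * w)
    return w

members = []
for seed in range(4):                                  # stratify over random inits
    candidates = []
    for _ in range(8):                                 # random hyperparameter search
        lr, l2 = 10 ** rng.uniform(-3, -0.5), 10 ** rng.uniform(-5, -1)
        w = train(lr, l2, seed)
        acc = np.mean((1.0 / (1.0 + np.exp(-X_val @ w)) > 0.5) == y_val)
        candidates.append((acc, w))
    members.append(max(candidates, key=lambda c: c[0])[1])   # best config per init

ensemble_prob = np.mean([1.0 / (1.0 + np.exp(-X_val @ w)) for w in members], axis=0)
print("ensemble validation accuracy:", np.mean((ensemble_prob > 0.5) == y_val))
```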
