
This paper presents a comparison of two multi-fidelity methods for the forward uncertainty quantification of a naval engineering problem. Specifically, we consider the problem of quantifying the uncertainty of the hydrodynamic resistance of a roll-on/roll-off passenger ferry advancing in calm water and subject to two operational uncertainties (ship speed and payload). The first four statistical moments (mean, variance, skewness, kurtosis) and the probability density function of this quantity of interest (QoI) are computed with two multi-fidelity methods, namely the Multi-Index Stochastic Collocation (MISC) method and an adaptive multi-fidelity Stochastic Radial Basis Functions (SRBF) algorithm. The QoI is evaluated via computational fluid dynamics simulations, which are performed with the in-house unsteady Reynolds-Averaged Navier-Stokes (RANS) multi-grid solver $\chi$navis. The different fidelities employed by both methods are obtained by stopping the RANS solver at different grid levels of the multi-grid cycle. The performance of both methods is presented and discussed: in a nutshell, the findings suggest that, at least for the current implementations of both algorithms, MISC could be preferred whenever a limited computational budget is available, whereas for larger computational budgets SRBF seems preferable, thanks to its robustness to the numerical noise in the evaluations of the QoI.
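
As a rough illustration of the multi-fidelity idea underlying both MISC and SRBF (combining many cheap low-fidelity evaluations with a few expensive high-fidelity corrections), the sketch below estimates the mean of a toy resistance QoI with a two-level estimator. The toy models `resistance_lo`/`resistance_hi`, the input distributions, and the sample sizes are hypothetical stand-ins of mine, not the paper's MISC or SRBF machinery or the $\chi$navis solver.

```python
# Illustrative two-fidelity Monte Carlo estimate of the mean resistance.
# Not the paper's MISC/SRBF algorithms; it only sketches the general
# multi-fidelity idea: many cheap low-fidelity evaluations plus a few
# expensive high-fidelity corrections. The toy models below are
# hypothetical stand-ins for a RANS solver on coarse and fine grids.
import numpy as np

rng = np.random.default_rng(0)

def resistance_hi(speed, payload):
    # hypothetical "fine grid" QoI
    return 1.0 + 0.8 * speed**2 + 0.3 * payload + 0.05 * speed * payload

def resistance_lo(speed, payload):
    # hypothetical "coarse grid" QoI: cheaper, biased approximation
    return 1.1 + 0.75 * speed**2 + 0.28 * payload

def sample_inputs(n):
    # operational uncertainties: speed and payload, uniform for illustration
    return rng.uniform(0.9, 1.1, n), rng.uniform(0.8, 1.2, n)

n_lo, n_hi = 2000, 20                      # budget: many cheap, few expensive
s_lo, p_lo = sample_inputs(n_lo)
s_hi, p_hi = sample_inputs(n_hi)

# E[f_hi] ~ E[f_lo] + E[f_hi - f_lo]  (two-level multi-fidelity estimator)
mean_lo = resistance_lo(s_lo, p_lo).mean()
correction = (resistance_hi(s_hi, p_hi) - resistance_lo(s_hi, p_hi)).mean()
print("multi-fidelity mean estimate:", mean_lo + correction)
```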

Related content

Parareal is a well-studied algorithm for numerically integrating systems of time-dependent differential equations by parallelising the temporal domain. Given approximate initial values at each temporal sub-interval, the algorithm locates a solution in a fixed number of iterations using a predictor-corrector, stopping once a tolerance is met. This iterative process combines solutions located by inexpensive (coarse resolution) and expensive (fine resolution) numerical integrators. In this paper, we introduce a stochastic parareal algorithm aimed at accelerating the convergence of the deterministic parareal algorithm. Instead of providing the predictor-corrector with a deterministically located set of initial values, the stochastic algorithm samples initial values from dynamically varying probability distributions in each temporal sub-interval. All samples are then propagated in parallel using the expensive integrator. The set of sampled initial values yielding the most continuous (smoothest) trajectory across consecutive sub-intervals is fed into the predictor-corrector, converging in fewer iterations than the deterministic algorithm with a given probability. The performance of the stochastic algorithm, implemented using various probability distributions, is illustrated on low-dimensional systems of ordinary differential equations (ODEs). We provide numerical evidence that when the number of sampled initial values is large enough, stochastic parareal converges almost certainly in fewer iterations than the deterministic algorithm while maintaining solution accuracy. Given its stochastic nature, we also highlight that multiple simulations of stochastic parareal return a distribution of solutions that can serve as a measure of uncertainty over the ODE solution.
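
A minimal sketch of the deterministic parareal predictor-corrector that the stochastic variant builds on is given below, for a scalar ODE with explicit Euler coarse and fine integrators (all toy choices of mine). The stochastic algorithm would replace the deterministic initial values by samples drawn around them and keep the smoothest propagated trajectory; that sampling step is only indicated in the comments.

```python
# Minimal deterministic parareal on the scalar ODE y' = -y.
# The stochastic variant described above would replace the initial values
# U[j] by samples from a distribution centred on them and feed the smoothest
# propagated trajectory into the correction below.
import numpy as np

f = lambda t, y: -y                       # right-hand side
T, N = 2.0, 10                            # horizon, number of sub-intervals
t = np.linspace(0.0, T, N + 1)

def coarse(y, t0, t1):
    # one explicit Euler step: the cheap "coarse" integrator G
    return y + (t1 - t0) * f(t0, y)

def fine(y, t0, t1, m=100):
    # m explicit Euler sub-steps: the expensive "fine" integrator F
    h = (t1 - t0) / m
    for i in range(m):
        y = y + h * f(t0 + i * h, y)
    return y

# initial coarse sweep for the starting values
U = np.empty(N + 1); U[0] = 1.0
for j in range(N):
    U[j + 1] = coarse(U[j], t[j], t[j + 1])

# parareal iterations: U_{j+1} = F(U_j^old) + G(U_j^new) - G(U_j^old)
for k in range(N):
    F_vals = np.array([fine(U[j], t[j], t[j + 1]) for j in range(N)])  # parallelisable
    U_old = U.copy()
    for j in range(N):
        U[j + 1] = F_vals[j] + coarse(U[j], t[j], t[j + 1]) - coarse(U_old[j], t[j], t[j + 1])
    if np.max(np.abs(U - U_old)) < 1e-10:  # stopping tolerance
        break

print("parareal solution at T:", U[-1], " exact:", np.exp(-T))
```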

In recent work (Maierhofer & Huybrechs, 2022, Adv. Comput. Math.), the authors showed that least-squares oversampling can improve the convergence properties of collocation methods for boundary integral equations involving operators of certain pseudo-differential form. The underlying principle is that the discrete method approximates a Bubnov$-$Galerkin method in a suitable sense. In the present work, we extend this analysis to the case when the integral operator is perturbed by a compact operator $\mathcal{K}$ which is continuous as a map on Sobolev spaces on the boundary, $\mathcal{K}:H^{p}\rightarrow H^{q}$ for all $p,q\in\mathbb{R}$. This study is complicated by the fact that both the test and trial functions in the discrete Bubnov-Galerkin orthogonality conditions are modified over the unperturbed setting. Our analysis guarantees that previous results concerning optimal convergence rates and sufficient rates of oversampling are preserved in the more general case. Indeed, for the first time, this analysis provides a complete explanation of the advantages of least-squares oversampled collocation for boundary integral formulations of the Laplace equation on arbitrary smooth Jordan curves in 2D. Our theoretical results are shown to be in very good agreement with numerical experiments.
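
To make the least-squares oversampling idea concrete, the toy sketch below solves a generic second-kind integral equation by collocating at more points than trial functions and solving the resulting rectangular system in a least-squares sense. The smooth kernel, Chebyshev trial basis, and manufactured solution are illustrative assumptions of mine and are unrelated to the specific pseudo-differential operators analysed in the paper.

```python
# Toy least-squares oversampled collocation for a second-kind integral
# equation u(x) + int_0^1 k(x,y) u(y) dy = f(x) on [0,1].
# Kernel, basis and exact solution are made up for illustration only.
import numpy as np

k = lambda x, y: np.exp(-(x - y) ** 2)          # smooth (compact-operator-like) kernel
u_exact = lambda x: np.cos(2 * np.pi * x)       # manufactured solution

# trapezoidal quadrature for the integral operator
nq = 200
yq = np.linspace(0.0, 1.0, nq)
wq = np.full(nq, 1.0 / (nq - 1)); wq[0] *= 0.5; wq[-1] *= 0.5

# right-hand side consistent with the manufactured solution
f = lambda x: u_exact(x) + np.array([np.sum(wq * k(xi, yq) * u_exact(yq))
                                     for xi in np.atleast_1d(x)])

N = 12                                          # number of trial (Chebyshev) basis functions
M = 3 * N                                       # oversampled number of collocation points
xc = np.linspace(0.0, 1.0, M)
phi = lambda n, x: np.polynomial.chebyshev.chebval(2 * x - 1, np.eye(N)[n])

# collocation matrix A[m, n] = phi_n(x_m) + int k(x_m, y) phi_n(y) dy
A = np.zeros((M, N))
for n in range(N):
    A[:, n] = phi(n, xc) + np.array([np.sum(wq * k(xi, yq) * phi(n, yq)) for xi in xc])

coeffs, *_ = np.linalg.lstsq(A, f(xc), rcond=None)   # least-squares oversampled solve
approx = sum(c * phi(n, xc) for n, c in enumerate(coeffs))
print("max error at collocation points:", np.max(np.abs(approx - u_exact(xc))))
```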

This study introduces a novel computational framework for Robust Topology Optimization (RTO) considering imprecise random field parameters. Unlike the worst-case approach, the present method provides upper and lower bounds for the mean and standard deviation of compliance, as well as the optimized topological layouts of a structure, for various scenarios. In the proposed approach, the imprecise random field variables are determined utilizing parameterized p-boxes with different confidence intervals. The Karhunen-Lo\`eve (K-L) expansion is extended to provide a spectral description of the imprecise random field. The linear superposition method, in conjunction with a linear combination of orthogonal functions, is employed to obtain explicit mathematical expressions for the first- and second-order statistical moments of the structural compliance. Then, an interval sensitivity analysis is carried out, applying the Orthogonal Similarity Transformation (OST) method, with the boundaries of each intermediate variable searched efficiently at every iteration using a Combinatorial Approach (CA). Finally, the validity, accuracy, and applicability of the work are rigorously checked by comparing the outputs of the proposed approach with those obtained using the particle swarm optimization (PSO) and Quasi-Monte-Carlo Simulation (QMCS) methods. Three different numerical examples with imprecise random field loads are presented to show the effectiveness and feasibility of the study.
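
The sketch below shows a plain truncated Karhunen-Loève expansion of a 1D Gaussian random field with an exponential covariance, discretised on a grid. The covariance model, correlation length, and truncation order are illustrative assumptions, and the paper's additional layer of imprecision (parameterized p-boxes on the field parameters) is not reproduced here.

```python
# Truncated Karhunen-Loeve expansion of a 1D Gaussian random field with
# exponential covariance, discretised on a grid (illustrative parameters).
import numpy as np

n, corr_len, sigma = 200, 0.2, 1.0
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance matrix

# spectral decomposition of the (discretised) covariance operator
eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1]                    # sort descending
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

m = 10                                             # truncation order
rng = np.random.default_rng(1)
xi = rng.standard_normal(m)                        # independent standard normal variables

# K-L realisation: sum_k sqrt(lambda_k) * xi_k * phi_k(x)
field = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)
print("captured variance fraction:", eigvals[:m].sum() / eigvals.sum())
```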

This paper studies a novel multi-access coded caching (MACC) model with a two-dimensional (2D) topology, which generalizes the one-dimensional (1D) MACC model proposed by Hachem et al. We formulate a 2D MACC model consisting of a server containing $N$ files, $K_1\times K_2$ cache-nodes with limited memory $M$ placed on a grid with $K_1$ rows and $K_2$ columns, and $K_1\times K_2$ cache-less users such that each user is connected to $L^2$ nearby cache-nodes. More precisely, in each row (or column), every user can access $L$ consecutive cache-nodes, which we refer to as the row (or column) 1D MACC problem of the 2D MACC model. The server is connected to the users through an error-free shared link, while the users can retrieve the cached content of the connected cache-nodes without cost. Our objective is to minimize the worst-case transmission load over all possible user demands. In this paper, we propose a baseline scheme that directly extends an existing 1D MACC scheme to the 2D model by using a Maximum Distance Separable (MDS) code, together with two improved schemes. The first improved scheme, referred to as the grouping scheme, works when $K_1$ and $K_2$ are divisible by $L$: we partition the cache-nodes and users into $L^2$ groups according to their positions, such that no two users in the same group share any cache-node, and we apply the seminal shared-link coded caching scheme proposed by Maddah-Ali and Niesen within each group. Subsequently, for any model parameters satisfying $\min\{K_1,K_2\}>L$, we propose the second improved scheme, referred to as the hybrid scheme, which constructs a 2D MACC scheme in a highly non-trivial way from a vertical and a horizontal 1D MACC scheme.
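
For a back-of-the-envelope feel for the grouping scheme, the sketch below combines the standard Maddah-Ali-Niesen worst-case load formula per group with two assumptions of mine: each user's effective cache within its group is $L^2 M$, and the $L^2$ group transmissions are simply time-shared so their loads add. This is a plausibility estimate, not the paper's exact achievable load.

```python
# Rough load estimate for the grouping scheme (my assumptions: effective
# per-user cache L^2 * M inside each group, and loads of the L^2 groups add).
from fractions import Fraction

def man_load(K, M, N):
    """Worst-case load of the Maddah-Ali--Niesen shared-link scheme with K
    users, cache size M and N >= K files: K(1 - M/N) / (1 + K*M/N).
    Exact for integer t = K*M/N (memory sharing is needed otherwise)."""
    t = Fraction(K) * Fraction(M, N)
    return Fraction(K) * (1 - Fraction(M, N)) / (1 + t)

def grouping_load(K1, K2, L, M, N):
    assert K1 % L == 0 and K2 % L == 0, "grouping scheme needs L | K1 and L | K2"
    users_per_group = (K1 // L) * (K2 // L)
    return L**2 * man_load(users_per_group, L**2 * M, N)

# toy parameters: 6x6 grid, access degree L=2, N=36 files, cache M=3
print("estimated total load:", grouping_load(6, 6, 2, 3, 36))
```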

The quadratic Wasserstein metric has shown its power in measuring the difference between probability densities: it gives the optimization objective function better convexity and is insensitive to data noise. Nevertheless, how to make seismic signals suitable for comparison under the quadratic Wasserstein metric remains an important question. The squaring scaling is worth exploring since it guarantees convexity with respect to data shifts. However, as mentioned in [Commun. Inf. Syst., 2019, 19:95-145], the squaring scaling may lose uniqueness and introduce more local minima into the misfit function. In our previous work [J. Comput. Phys., 2018, 373:188-209], the quadratic Wasserstein metric with squaring scaling was successfully applied to the earthquake location problem, but only for an inverse problem with few degrees of freedom. In this work, we present a more in-depth study of the combination of the squaring scaling technique and the quadratic Wasserstein metric. By discarding some inapplicable data, picking seismic phases, and developing a new normalization method, we successfully invert the seismic velocity structure based on the squaring scaling technique and the quadratic Wasserstein metric. The numerical experiments suggest that this newly proposed method is an efficient approach to obtaining more accurate inversion results.
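
The sketch below illustrates the basic ingredients discussed above: squaring scaling to turn two 1D signals into densities, followed by the closed-form 1D quadratic Wasserstein distance computed from inverse CDFs. The synthetic Ricker-type wavelets and the simple trapezoidal/interpolation choices are illustrative assumptions, not the paper's exact normalization or inversion pipeline.

```python
# Quadratic Wasserstein (W2) distance between two 1D signals after squaring
# scaling, using the 1D optimal-transport formula based on inverse CDFs.
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def w2_squared(f, g, t):
    # squaring scaling: make both signals nonnegative densities
    f2, g2 = f**2, g**2

    # cumulative distribution functions, normalised to end at 1
    F = np.concatenate(([0.0], np.cumsum(0.5 * (f2[1:] + f2[:-1]) * np.diff(t))))
    G = np.concatenate(([0.0], np.cumsum(0.5 * (g2[1:] + g2[:-1]) * np.diff(t))))
    F /= F[-1]
    G /= G[-1]

    # 1D optimal transport: W2^2 = int_0^1 |F^{-1}(s) - G^{-1}(s)|^2 ds
    s = np.linspace(0.0, 1.0, len(t))
    Finv, Ginv = np.interp(s, F, t), np.interp(s, G, t)
    return trapz((Finv - Ginv) ** 2, s)

t = np.linspace(0.0, 4.0, 2001)
def ricker(tau):
    # synthetic wavelet centred at time tau
    return (1 - 2 * (np.pi * (t - tau)) ** 2) * np.exp(-(np.pi * (t - tau)) ** 2)

print("W2^2 between shifted wavelets:", w2_squared(ricker(1.5), ricker(2.0), t))
```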

We introduce a novel framework for optimization based on energy-conserving Hamiltonian dynamics in a strongly mixing (chaotic) regime and establish its key properties analytically and numerically. The prototype is a discretization of Born-Infeld dynamics, with a squared relativistic speed limit depending on the objective function. This class of frictionless, energy-conserving optimizers proceeds unobstructed until slowing naturally near the minimal loss, which dominates the phase space volume of the system. Building from studies of chaotic systems such as dynamical billiards, we formulate a specific algorithm with good performance on machine learning and PDE-solving tasks, including generalization. It cannot stop at a high local minimum and cannot overshoot the global minimum, yielding an advantage on non-convex loss functions, and it proceeds faster than GD+momentum in shallow valleys.
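
The sketch below only illustrates the "frictionless, energy-conserving" ingredient, using a generic leapfrog integration of Hamiltonian dynamics on a toy non-convex loss; it does not implement the paper's Born-Infeld speed limit or its chaotic-billiard analysis, and the loss, step size, and initial conditions are arbitrary choices of mine.

```python
# Energy-conserving (leapfrog) Hamiltonian dynamics on a toy non-convex loss,
# tracking the lowest loss visited by the frictionless trajectory.
import numpy as np

def V(q):       # toy non-convex loss with global minima at (+/- sqrt(3), 0)
    x, y = q
    return 0.25 * (x**2 - 3.0)**2 + 0.5 * y**2

def grad_V(q):
    x, y = q
    return np.array([x * (x**2 - 3.0), y])

q0 = np.array([0.1, 1.5])
q, p = q0.copy(), np.zeros(2)     # initial position and momentum
h = 0.01                          # leapfrog step size
best_q, best_V = q.copy(), V(q)

for _ in range(20000):
    p -= 0.5 * h * grad_V(q)      # half kick
    q += h * p                    # drift
    p -= 0.5 * h * grad_V(q)      # half kick
    if V(q) < best_V:             # track the lowest loss visited
        best_q, best_V = q.copy(), V(q)

print("lowest loss visited:", best_V, "at", best_q)
print("energy drift:", abs(V(q) + 0.5 * p @ p - V(q0)))
```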

How should social scientists understand and communicate the uncertainty of statistically estimated causal effects? It is well known that the conventional significance-vs.-insignificance approach is associated with misunderstandings and misuses. Behavioral research suggests that people understand uncertainty more appropriately on a numerical, continuous scale than on a verbal, discrete scale. Motivated by this background, I propose presenting the probabilities of different effect sizes. Probability is an intuitive, continuous measure of uncertainty. It allows researchers to better understand and communicate the uncertainty of statistically estimated effects. In addition, unlike the conventional approaches, my approach requires no decision threshold for an uncertainty measure or an effect size, allowing researchers to remain agnostic about thresholds such as p<5% and about how to justify them. I apply my approach to a previous social scientific study, showing that it enables richer inference than the significance-vs.-insignificance approach taken by the original study. The accompanying R package makes my approach easy to implement.
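
A minimal sketch of the reporting style proposed above: given a point estimate and standard error, summarise the estimate's uncertainty as probabilities that the effect exceeds various sizes under a normal approximation. The coefficient, standard error, and thresholds are made-up numbers, and the author's accompanying R package is not used here.

```python
# Report probabilities of different effect sizes from a normal approximation
# to the estimate's uncertainty, instead of a binary significance verdict.
import numpy as np
from scipy import stats

beta_hat, se = 0.8, 0.5          # hypothetical estimated effect and standard error
effect = stats.norm(loc=beta_hat, scale=se)

thresholds = [0.0, 0.5, 1.0]     # effect sizes a reader might care about
for c in thresholds:
    print(f"P(effect > {c:.1f}) = {1 - effect.cdf(c):.2f}")

# the same information as a full curve: P(effect > c) over a grid of c
grid = np.linspace(-1.0, 2.5, 8)
print(np.round(1 - effect.cdf(grid), 2))
```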

We present a hybrid sampling-surrogate approach for reducing the computational expense of uncertainty quantification in nonlinear dynamical systems. Our motivation is to enable rapid uncertainty quantification in complex mechanical systems such as automotive propulsion systems. Our approach builds upon ideas from multifidelity uncertainty quantification to leverage the benefits of both sampling and surrogate modeling, while mitigating their downsides. In particular, the surrogate model is selected to exploit problem structure, such as smoothness, and offers a highly correlated information source for the original nonlinear dynamical system. We utilize an intrusive generalized Polynomial Chaos surrogate because it avoids any statistical errors in its construction and provides analytic estimates of output statistics. We then leverage a Monte Carlo-based control variate technique to correct the bias caused by the surrogate approximation error. The primary theoretical contribution of this work is the analysis and solution of an estimator design strategy that optimally balances the computational effort spent adapting the surrogate against that spent sampling the original expensive nonlinear system. While previous works have similarly combined surrogates and sampling, to the best of our knowledge this work is the first to provide a rigorous analysis of estimator design. We deploy our approach on multiple examples stemming from the simulation of mechanical automotive propulsion system models. We show that, in some cases, the estimator achieves orders-of-magnitude reductions in the mean squared error of statistics estimation at costs comparable to those of purely sampling or purely surrogate approaches.
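
The sketch below illustrates the control-variate idea described above: correct a small Monte Carlo sample of an "expensive" model with a cheap, highly correlated surrogate whose mean is known analytically (as a gPC surrogate would provide). The toy models, input distribution, and sample size are illustrative stand-ins of mine, not the automotive propulsion models or the estimator-design analysis of the paper.

```python
# Monte Carlo control-variate estimator using a cheap surrogate with a
# known (analytic) mean to reduce the variance of a small expensive sample.
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(x):          # hypothetical costly nonlinear response
    return np.sin(x) + 0.1 * x**3

def surrogate(x):                # cheap, highly correlated surrogate
    return x - x**3 / 6 + 0.1 * x**3

# "analytic" surrogate statistic: the surrogate is an odd polynomial, so its
# mean under the symmetric input density X ~ N(0,1) is exactly zero
surrogate_mean = 0.0

n = 50                                           # small budget of expensive evaluations
x = rng.standard_normal(n)
f, g = expensive_model(x), surrogate(x)

alpha = np.cov(f, g)[0, 1] / np.var(g, ddof=1)   # estimated optimal CV weight
cv_estimate = f.mean() + alpha * (surrogate_mean - g.mean())

print("plain Monte Carlo mean :", f.mean())
print("control-variate mean   :", cv_estimate)
```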

Due to the increasing spread of neural networks, confidence in their predictions has become more and more important. However, basic neural networks either do not deliver certainty estimates or suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. A comprehensive introduction to the most crucial sources of uncertainty is given, and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical applications, we discuss different measures of uncertainty and approaches for calibrating neural networks, and give an overview of existing baselines and implementations. Examples from a wide spectrum of fields give an idea of the needs and challenges regarding uncertainties in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
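
As a concrete illustration of the separation into reducible model uncertainty and irreducible data uncertainty, the sketch below applies the standard mixture (law-of-total-variance) decomposition to the outputs of a hypothetical ensemble of probabilistic regressors; the per-member predictions are synthetic placeholders and no networks are trained.

```python
# Decomposition of predictive uncertainty for an ensemble of probabilistic
# regressors: aleatoric = mean of predicted variances (data uncertainty),
# epistemic = variance of predicted means (model uncertainty).
import numpy as np

rng = np.random.default_rng(3)
n_members, n_points = 5, 4

# hypothetical ensemble outputs: each member predicts a mean and a variance
member_means = rng.normal(loc=2.0, scale=0.3, size=(n_members, n_points))
member_vars = rng.uniform(0.05, 0.15, size=(n_members, n_points))

aleatoric = member_vars.mean(axis=0)             # irreducible data uncertainty
epistemic = member_means.var(axis=0)             # reducible model uncertainty
total = aleatoric + epistemic                    # law of total variance (mixture)

print("aleatoric:", np.round(aleatoric, 3))
print("epistemic:", np.round(epistemic, 3))
print("total    :", np.round(total, 3))
```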

Despite their state-of-the-art performance for medical image segmentation, deep convolutional neural networks (CNNs) have rarely provided uncertainty estimates regarding their segmentation outputs, e.g., model (epistemic) and image-based (aleatoric) uncertainties. In this work, we analyze these different types of uncertainties for CNN-based 2D and 3D medical image segmentation tasks. We additionally propose a test-time augmentation-based aleatoric uncertainty to analyze the effect of different transformations of the input image on the segmentation output. Test-time augmentation has previously been used to improve segmentation accuracy, yet it has not been formulated in a consistent mathematical framework. Hence, we also propose a theoretical formulation of test-time augmentation, in which a distribution of the prediction is estimated by Monte Carlo simulation with prior distributions of parameters in an image acquisition model that involves image transformations and noise. We compare and combine our proposed aleatoric uncertainty with model uncertainty. Experiments with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that 1) the test-time augmentation-based aleatoric uncertainty provides a better uncertainty estimation than calculating the test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions, and 2) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions.
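
A minimal sketch of the Monte Carlo test-time augmentation idea described above: sample transformations (here only horizontal flips and additive Gaussian noise), run the model on each transformed input, map the predictions back, and use the per-pixel mean and variance as the prediction and its aleatoric uncertainty. The toy image and the soft-thresholding "model" are placeholders for a trained CNN.

```python
# Test-time augmentation (TTA) based uncertainty for a toy segmentation task.
import numpy as np

rng = np.random.default_rng(4)

def model(image):
    # stand-in for a CNN: foreground probability via a soft threshold
    return 1.0 / (1.0 + np.exp(-10.0 * (image - 0.5)))

image = np.zeros((32, 32))
image[8:24, 10:22] = 1.0                        # synthetic object
image += rng.normal(0.0, 0.1, image.shape)      # acquisition noise

n_aug = 20
probs = np.empty((n_aug,) + image.shape)
for i in range(n_aug):
    flip = rng.random() < 0.5                   # sampled spatial transformation
    noisy = image + rng.normal(0.0, 0.05, image.shape)   # sampled noise
    x = noisy[:, ::-1] if flip else noisy
    p = model(x)
    probs[i] = p[:, ::-1] if flip else p        # map prediction back to input space

mean_prob = probs.mean(axis=0)                  # TTA prediction
aleatoric = probs.var(axis=0)                   # TTA-based uncertainty map
print("mean foreground prob:", mean_prob.mean().round(3),
      "| max per-pixel uncertainty:", aleatoric.max().round(3))
```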
