
This article presents a high-order accurate numerical method for the evaluation of singular volume integral operators, with attention focused on operators associated with the Poisson and Helmholtz equations in two dimensions. Following the ideas of the density interpolation method for boundary integral operators, the proposed methodology leverages Green's third identity and a local polynomial interpolant of the density function to recast the volume potential as a sum of single- and double-layer potentials and a volume integral with a regularized (bounded or smoother) integrand. The layer potentials can be accurately and efficiently evaluated everywhere in the plane by means of existing methods (e.g., the density interpolation method), while the regularized volume integral can be accurately evaluated by applying elementary quadrature rules. We describe the method for domains meshed by mapped quadrilaterals and by triangles, introducing for each case (i) well-conditioned methods for producing the requisite source polynomial interpolants and (ii) efficient translation formulae for polynomial particular solutions. Compared to straightforwardly computing corrections for every singular and nearly singular volume target, the method significantly reduces the amount of required specialized quadrature by pushing all singular and near-singular corrections to near-singular layer-potential evaluations at target points in a small neighborhood of the domain boundary. Error estimates for the regularization and quadrature approximations are provided. The method is compatible with well-established fast algorithms and is efficient not only in the online phase but also in set-up. Numerical examples demonstrate the high-order accuracy and efficiency of the proposed methodology.
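The translation from a polynomial source to a polynomial particular solution admits a simple recursion. As a hedged sketch of that single ingredient (the monomial-dictionary representation `{(a, b): coeff}` for `x^a y^b` and the recursion below are illustrative choices, not the paper's implementation), the following computes a bivariate polynomial P with Laplacian(P) = p:

```python
def ix2(p):
    # Antidifferentiate twice in x: c * x^a y^b  ->  c/((a+1)(a+2)) * x^(a+2) y^b.
    return {(a + 2, b): c / ((a + 1) * (a + 2)) for (a, b), c in p.items()}

def dyy(p):
    # Differentiate twice in y.
    return {(a, b - 2): c * b * (b - 1) for (a, b), c in p.items() if b >= 2}

def laplacian(p):
    out = {}
    for (a, b), c in p.items():
        if a >= 2:
            out[(a - 2, b)] = out.get((a - 2, b), 0.0) + c * a * (a - 1)
        if b >= 2:
            out[(a, b - 2)] = out.get((a, b - 2), 0.0) + c * b * (b - 1)
    return {k: v for k, v in out.items() if v != 0.0}

def particular_solution(p):
    # If P0 is the double x-antiderivative of r, then laplacian(P0) = r + d_yy P0,
    # so accumulate P0 and recurse on -d_yy P0; the y-degree drops by two each
    # pass, so the loop terminates.
    P, r = {}, dict(p)
    while r:
        P0 = ix2(r)
        for k, c in P0.items():
            P[k] = P.get(k, 0.0) + c
        r = {k: -c for k, c in dyy(P0).items()}
    return P

# Example: p(x, y) = y^2 yields P = x^2 y^2 / 2 - x^4 / 12.
```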

Related content

Journal: Integration, the VLSI Journal. Publisher: Elsevier. SIT:

As global attention on renewable and clean energy grows, the research and implementation of microgrids become paramount. This paper delves into the methodology of exploring the relationship between the operational and environmental costs of microgrids through multi-objective optimization models. By integrating various optimization algorithms, such as the Genetic Algorithm, Simulated Annealing, Ant Colony Optimization, and Particle Swarm Optimization, we propose an integrated approach for microgrid optimization. Simulation results show that these algorithms provide different dispatch results under economic and environmental dispatch, revealing distinct roles of diesel generators and micro gas turbines in microgrids. Overall, this study offers in-depth insights and practical guidance for microgrid design and operation.
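The economic-versus-environmental trade-off can be made concrete with a toy weighted-sum scalarization (every coefficient below is invented for illustration; none comes from the study):

```python
def fuel_cost(p_diesel, p_turbine):
    # Hypothetical quadratic fuel-cost curves; all coefficients are invented.
    return 0.08 * p_diesel ** 2 + 2.0 * p_diesel + 0.05 * p_turbine ** 2 + 3.0 * p_turbine

def emissions(p_diesel, p_turbine):
    # Hypothetical linear emission curves; diesel assumed dirtier per kW.
    return 1.5 * p_diesel + 0.4 * p_turbine

def dispatch(demand, weight, step=0.5):
    # Weighted-sum scalarization of the two objectives, scanned over the split
    # of demand between the diesel generator and the micro gas turbine.
    # weight = 1 gives economic dispatch, weight = 0 environmental dispatch.
    best = None
    p = 0.0
    while p <= demand:
        obj = (weight * fuel_cost(p, demand - p)
               + (1.0 - weight) * emissions(p, demand - p))
        if best is None or obj < best[0]:
            best = (obj, p, demand - p)
        p += step
    return best  # (objective, diesel power, turbine power)

economic = dispatch(40.0, 1.0)       # cost-only dispatch
environmental = dispatch(40.0, 0.0)  # emissions-only dispatch
```

Under these invented curves the two scalarizations allocate the units very differently, mirroring the distinct roles of the two generator types discussed above.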

We study in this paper the monotonicity properties of the numerical solutions to Volterra integral equations with nonincreasing completely positive kernels on nonuniform meshes. There is a duality between the complete positivity and the properties of the complementary kernel being nonnegative and nonincreasing. Based on this, we propose the "complementary monotonicity" to describe the nonincreasing completely positive kernels, and the "right complementary monotone" (R-CMM) kernels as the analogue for nonuniform meshes. We then establish the monotonicity properties of the numerical solutions inherited from the continuous equation if the discretization has the R-CMM property. Such a property seems weaker than being log-convex, and there is no restriction on the step size ratio of the discretization for the R-CMM property to hold.

Nowadays, numerical models are widely used in most engineering fields to simulate the behaviour of complex systems, such as power plants or wind turbines in the energy sector. These models are nevertheless affected by uncertainties of different natures (numerical, epistemic), which can affect the reliability of their predictions. We develop here a new method for quantifying conditional parameter uncertainty within a chain of two numerical models in the context of multiphysics simulation. More precisely, we aim to calibrate the parameters $\theta$ of the second model of the chain conditionally on the value of parameters $\lambda$ of the first model, while assuming the probability distribution of $\lambda$ is known. This conditional calibration is carried out from the available experimental data of the second model. In doing so, we aim to quantify as well as possible the impact of the uncertainty of $\lambda$ on the uncertainty of $\theta$. To perform this conditional calibration, we set out a nonparametric Bayesian formalism to estimate the functional dependence between $\theta$ and $\lambda$, denoted by $\theta(\lambda)$. First, each component of $\theta(\lambda)$ is assumed to be the realization of a Gaussian process prior. Then, if the second model is written as a linear function of $\theta(\lambda)$, the Bayesian machinery allows us to compute analytically the posterior predictive distribution of $\theta(\lambda)$ for any set of realizations $\lambda$. The effectiveness of the proposed method is illustrated on several analytical examples.
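The Gaussian-process ingredient of such a formalism is standard machinery. As a minimal sketch (one scalar component of $\theta(\lambda)$, an RBF kernel, and a zero prior mean are all illustrative assumptions, not the paper's full formalism), the posterior predictive mean at a new $\lambda$ can be computed as:

```python
import math

def rbf(a, b, ell=1.0):
    # Squared-exponential (RBF) covariance kernel.
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_posterior_mean(lams, thetas, lam_star, noise=1e-6):
    # Posterior predictive mean of a zero-mean GP prior on theta(lambda):
    # m(lam*) = k(lam*, L) [K(L, L) + noise * I]^{-1} thetas.
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(lams)]
         for i, a in enumerate(lams)]
    alpha = solve(K, thetas)
    return sum(rbf(lam_star, a) * w for a, w in zip(lams, alpha))

# Hypothetical calibrated theta values at three lambda realizations.
theta_hat = gp_posterior_mean([0.0, 0.5, 1.0], [1.2, 0.7, 0.4], 0.25)
```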

Preference-based optimization algorithms are iterative procedures that seek the optimal calibration of a decision vector based only on comparisons between pairs of different tunings. At each iteration, a human decision-maker expresses a preference between two calibrations (samples), highlighting which one, if any, is better than the other. The optimization procedure must use the observed preferences to find the tuning of the decision vector that is most preferred by the decision-maker, while also minimizing the number of comparisons. In this work, we formulate the preference-based optimization problem from a utility theory perspective. Then, we propose GLISp-r, an extension of a recent preference-based optimization procedure called GLISp. The latter uses a Radial Basis Function surrogate to describe the tastes of the decision-maker. Iteratively, GLISp proposes new samples to compare with the best calibration available by trading off exploitation of the surrogate model and exploration of the decision space. In GLISp-r, we propose a different criterion to use when looking for new candidate samples that is inspired by MSRS, a popular procedure in the black-box optimization framework. Compared to GLISp, GLISp-r is less likely to get stuck on local optima of the preference-based optimization problem. We motivate this claim theoretically, with a proof of global convergence, and empirically, by comparing the performance of GLISp and GLISp-r on several benchmark optimization problems.
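The MSRS-inspired exploration/exploitation trade-off can be illustrated in one dimension. This sketch (the rescaling, weights, surrogate, and samples are simplified stand-ins, not GLISp-r's actual acquisition function) scores candidates by a convex combination of surrogate value and distance to already-compared tunings:

```python
def msrs_scores(candidates, samples, surrogate, weight):
    # MSRS-style criterion (simplified): convex combination of the rescaled
    # surrogate prediction (exploitation) and one minus the rescaled distance
    # to the nearest already-evaluated sample (exploration). Lower is better;
    # weight = 1 is pure exploitation, weight = 0 is pure exploration.
    s = [surrogate(c) for c in candidates]
    d = [min(abs(c - x) for x in samples) for c in candidates]

    def rescale(v):
        lo, hi = min(v), max(v)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in v]

    return [weight * si + (1.0 - weight) * (1.0 - di)
            for si, di in zip(rescale(s), rescale(d))]

# Hypothetical 1-D decision space with two tunings compared so far.
samples = [0.2, 0.8]
candidates = [i / 100 for i in range(101)]
scores = msrs_scores(candidates, samples, lambda x: (x - 0.3) ** 2, 0.5)
next_sample = candidates[scores.index(min(scores))]
```

With weight 0 the criterion proposes the point farthest from all compared samples; with weight 1 it proposes the surrogate minimizer.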

Temporal analysis of products (TAP) reactors enable experiments that probe numerous kinetic processes within a single set of experimental data through variations in pulse intensity, delay, or temperature. Selecting additional TAP experiments often involves arbitrary selection of reaction conditions or the use of chemical intuition. To make experiment selection in TAP more robust, we explore the efficacy of model-based design of experiments (MBDoE) for precision in TAP reactor kinetic modeling. We successfully applied this approach to a case study of synthetic oxidative propane dehydrogenation (OPDH) that involves pulses of propane and oxygen. We found that experiments identified as optimal through the MBDoE for precision generally reduce parameter uncertainties to a higher degree than alternative experiments. The performance of MBDoE for model divergence was also explored for OPDH, with the relevant active sites (catalyst structure) being unknown. An experiment that maximized the divergence between the three proposed mechanisms was identified and led to clear mechanism discrimination. However, re-optimization of kinetic parameters eliminated the ability to discriminate. The findings yield insight into the prospects and limitations of MBDoE for TAP and transient kinetic experiments.
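The precision-oriented MBDoE step can be sketched with a textbook D-optimality toy (a model linear in two parameters; the sensitivity vectors and candidate experiments below are invented, not the OPDH kinetics): pick the candidate that most increases the determinant of the Fisher information matrix.

```python
def fim(designs):
    # Fisher information for a model linear in two parameters: the sum of
    # outer products of the per-experiment sensitivity vectors (du1, du2).
    m = [[0.0, 0.0], [0.0, 0.0]]
    for u1, u2 in designs:
        m[0][0] += u1 * u1
        m[0][1] += u1 * u2
        m[1][0] += u2 * u1
        m[1][1] += u2 * u2
    return m

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def best_next_experiment(done, candidates):
    # D-optimal selection: the candidate that most increases det(FIM), i.e.
    # shrinks the joint parameter confidence ellipsoid the most.
    return max(candidates, key=lambda c: det2(fim(done + [c])))

# Past experiments mostly excited the first sensitivity direction, so the
# most informative next experiment excites the second one.
done = [(1.0, 0.1), (0.9, 0.2)]
candidates = [(1.0, 0.0), (0.7, 0.7), (0.0, 1.0)]
```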

In this article, we propose an interval constraint programming method for globally solving catalog-based categorical optimization problems. It supports catalogs of arbitrary size and properties of arbitrary dimension, and does not require any modeling effort from the user. A novel catalog-based contractor (or filtering operator) guarantees consistency between the categorical properties and the existing catalog items. This results in an intuitive and generic approach that is exact, rigorous (robust to roundoff errors) and can be easily implemented in an off-the-shelf interval-based continuous solver that interleaves branching and constraint propagation. We demonstrate the validity of the approach on a numerical problem in which a categorical variable is described by a two-dimensional property space. A Julia prototype is available as open-source software under the MIT license at //github.com/cvanaret/CateGOrical.jl
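The catalog-based contractor described above can be sketched as follows (intervals as `(lo, hi)` pairs; an illustrative Python analogue with an invented catalog, not the CateGOrical.jl implementation): catalog items outside the current box are discarded, and the box is contracted to the interval hull of the survivors.

```python
def catalog_contract(box, catalog):
    # Catalog-based contractor: discard catalog items whose property vector
    # falls outside the current interval box, then contract the box to the
    # interval hull of the surviving items. Returns (contracted_box, items);
    # (None, []) signals an empty (inconsistent) box.
    feasible = [item for item in catalog
                if all(lo <= v <= hi for v, (lo, hi) in zip(item, box))]
    if not feasible:
        return None, []
    hull = [(min(it[k] for it in feasible), max(it[k] for it in feasible))
            for k in range(len(box))]
    return hull, feasible

# Two-dimensional property space, as in the article's numerical problem
# (the items and box below are invented for illustration).
box = [(0.0, 3.0), (2.0, 6.0)]
catalog = [(1.0, 5.0), (2.0, 3.0), (4.0, 8.0)]
new_box, items = catalog_contract(box, catalog)
```

This guarantees the consistency property mentioned above: after contraction, every point retained in the box corresponds to the hull of actually existing catalog items.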

This paper aims to investigate nonparametric estimation of the volatility component in a heteroscedastic scalar-on-function regression model when the underlying discrete-time process is ergodic and affected by a missing at random mechanism. First, we introduce a simplified estimator of the regression and volatility operators based on observed data only. We study their asymptotic properties, such as almost sure uniform consistency rate and asymptotic distribution. Then, the simplified estimators are used to impute the missing data in the original process in order to improve the estimation of the regression and volatility components. The asymptotic properties of the imputed estimators are also investigated. A numerical comparison between the estimators is discussed through simulated data. Finally, a real-data analysis is conducted to model the volatility of daily Brent crude oil returns using intraday (1-minute frequency) natural gas returns.
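A "simplified estimator based on observed data only" has a familiar kernel-regression flavor. As a minimal sketch (with a scalar covariate standing in for the functional one, and a Gaussian kernel as an illustrative choice), the regression component can be estimated by weighting only the pairs whose missingness indicator equals one:

```python
import math

def nw_estimate(x0, xs, ys, deltas, h):
    # Simplified Nadaraya-Watson-type estimator built from observed pairs only:
    # responses with missingness indicator delta = 0 are dropped, which keeps
    # the estimator consistent under a missing-at-random mechanism.
    num = den = 0.0
    for x, y, d in zip(xs, ys, deltas):
        if d == 1:
            w = math.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel
            num += w * y
            den += w
    return num / den if den > 0.0 else float("nan")

# Toy data: regression function y = 2x with the first response missing.
xs = [i / 10 for i in range(11)]
ys = [2.0 * x for x in xs]
deltas = [0 if i == 0 else 1 for i in range(11)]
```

The volatility operator is handled analogously in the paper, with squared residuals in place of the responses.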

This paper presents a physics and data co-driven surrogate modeling method for efficient rare event simulation of civil and mechanical systems with high-dimensional input uncertainties. The method fuses interpretable low-fidelity physical models with data-driven error corrections. The hypothesis is that a well-designed and well-trained simplified physical model can preserve salient features of the original model, while data-fitting techniques can fill the remaining gaps between the surrogate and original model predictions. The coupled physics-data-driven surrogate model is adaptively trained using active learning, aiming to achieve a high correlation and small bias between the surrogate and original model responses in the critical parametric region of a rare event. A final importance sampling step is introduced to correct the surrogate model-based probability estimations. Static and dynamic problems with input uncertainties modeled by random fields and stochastic processes are studied to demonstrate the proposed method.
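The final importance-sampling correction can be illustrated in one dimension (a standard IS toy with an invented limit state, not the paper's adaptive scheme): the proposal is centered near the surrogate-predicted critical region, but the original limit-state function is evaluated, so surrogate bias affects only the variance of the estimate, not its correctness.

```python
import math
import random

def is_failure_prob(g, shift, n=200_000, seed=0):
    # Importance-sampling correction: draw from a proposal N(shift, 1) centered
    # near the (surrogate-predicted) critical region, but evaluate the ORIGINAL
    # limit state g. The likelihood ratio phi(x)/phi(x - shift)
    # = exp(shift^2/2 - shift*x) keeps the estimator unbiased for
    # P[g(X) <= 0] under X ~ N(0, 1).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if g(x) <= 0.0:
            total += math.exp(0.5 * shift * shift - shift * x)
    return total / n

# Toy limit state g(x) = 4 - x: failure means x > 4, with exact
# probability 1 - Phi(4), roughly 3.17e-5.
p_fail = is_failure_prob(lambda x: 4.0 - x, shift=4.0)
```

Crude Monte Carlo with the same budget would see only a handful of failures here; shifting the proposal makes roughly half the samples informative.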

We investigate a class of parametric elliptic semilinear partial differential equations of second order with homogeneous essential boundary conditions, where the coefficients and the right-hand side (and hence the solution) may depend on a parameter. This model can be seen as a reaction-diffusion problem with a polynomial nonlinearity in the reaction term. The efficiency of various numerical approximations across the entire parameter space is closely related to the regularity of the solution with respect to the parameter. We show that if the coefficients and the right-hand side are analytic or Gevrey class regular with respect to the parameter, the same type of parametric regularity is valid for the solution. The key ingredient of the proof is the combination of the alternative-to-factorial technique from our previous work [1] with a novel argument for the treatment of the power-type nonlinearity in the reaction term. As an application of this abstract result, we obtain rigorous convergence estimates for numerical integration of semilinear reaction-diffusion problems with random coefficients using Gaussian and Quasi-Monte Carlo quadrature. Our theoretical findings are confirmed in numerical experiments.
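The Quasi-Monte Carlo side of the convergence estimates can be illustrated generically (a Halton-sequence sketch with a toy smooth integrand over the parameter box; not the specific Gaussian or QMC rules analysed in the paper):

```python
def halton(i, base):
    # i-th element of the van der Corput radical-inverse sequence in `base`.
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(func, n, dim):
    # Quasi-Monte Carlo average of func over an n-point Halton point set
    # in [0, 1]^dim (supports dim <= 6 with the primes listed here).
    primes = [2, 3, 5, 7, 11, 13][:dim]
    return sum(func([halton(i, p) for p in primes]) for i in range(1, n + 1)) / n

# Toy smooth parametric integrand over [0, 1]^2 with exact integral 1/4.
approx = qmc_integrate(lambda y: y[0] * y[1], 4096, 2)
```

For integrands that are analytic or Gevrey regular in the parameter, such low-discrepancy rules converge much faster than plain Monte Carlo, which is exactly the regime the parametric-regularity result above certifies.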

In this paper, we revisit McFadden (1978)'s correction factor for sampling of alternatives in multinomial logit (MNL) and mixed multinomial logit (MMNL) models. McFadden (1978) proved that consistent parameter estimates are obtained when estimating MNL models using a sampled subset of alternatives, including the chosen alternative, in combination with a correction factor. We decompose this correction factor into i) a correction for overestimating the MNL choice probability due to using a smaller subset of alternatives, and ii) a correction reflecting which subset of alternatives is contrasted through utility differences, and thereby the extent to which we learn about the parameters of interest in MNL. Keane and Wasi (2016) proved that the overall expected positive information divergence - comprising the above two elements - is minimised between the true and sampled likelihood when applying a sampling protocol satisfying uniform conditioning. We generalise their result to the case of positive conditioning and show that whilst McFadden (1978)'s correction factor may not minimise the overall expected information divergence, it does minimise the expected information loss with respect to the parameters of interest. We apply this result in the context of Bayesian analysis and show that McFadden (1978)'s correction factor minimises the expected information loss regarding the parameters of interest across the entire posterior density irrespective of sample size. In other words, McFadden (1978)'s correction factor has desirable small and large sample properties. We also show that our results for Bayesian MNL models transfer to MMNL and that McFadden (1978)'s correction factor alone is sufficient to minimise the expected information loss in the parameters of interest. Monte Carlo simulations illustrate the successful application of sampling of alternatives in Bayesian MMNL models.
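The corrected sampled likelihood term can be sketched directly (notation simplified; `log_pi[j]` stands for ln π(D | j), the log-probability of drawing the sampled set D given that alternative j was chosen, and the utilities below are invented):

```python
import math

def corrected_choice_prob(i, sampled, V, log_pi):
    # McFadden (1978) sampled-likelihood term: within the sampled set D
    # (which must contain the chosen alternative i), each systematic utility
    # V[j] is shifted by log_pi[j] = ln pi(D | j). Under uniform conditioning
    # all log_pi[j] are equal, the correction cancels, and this reduces to a
    # plain MNL over the sampled subset.
    num = math.exp(V[i] + log_pi[i])
    den = sum(math.exp(V[j] + log_pi[j]) for j in sampled)
    return num / den

# Illustrative systematic utilities for three alternatives.
V = {0: 1.0, 1: 0.5, 2: -0.2}
```

Because only utility differences matter, adding any constant to all `log_pi` values leaves the probability unchanged, which is why uniform conditioning makes the correction vanish.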
