
In this article, we propose an interval constraint programming method for globally solving catalog-based categorical optimization problems. It supports catalogs of arbitrary size and properties of arbitrary dimension, and does not require any modeling effort from the user. A novel catalog-based contractor (or filtering operator) guarantees consistency between the categorical properties and the existing catalog items. This results in an intuitive and generic approach that is exact, rigorous (robust to roundoff errors), and can be easily implemented in an off-the-shelf interval-based continuous solver that interleaves branching and constraint propagation. We demonstrate the validity of the approach on a numerical problem in which a categorical variable is described by a two-dimensional property space. A Julia prototype is available as open-source software under the MIT license at https://github.com/cvanaret/CateGOrical.jl.
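To make the central idea concrete, here is a minimal Python sketch of a catalog-based contractor (the released prototype is written in Julia; the function name and data layout below are illustrative assumptions, not the CateGOrical.jl API). Given an interval box over the property space, it discards catalog items outside the box and contracts each property interval to the hull of the remaining items, which is exactly the consistency between categorical properties and existing catalog items described above.

```python
# Illustrative sketch only: not the CateGOrical.jl implementation.
def contract_catalog(box, catalog):
    """box: list of (lo, hi) intervals, one per property dimension.
    catalog: list of property vectors, one per catalog item.
    Returns the contracted box, or None if no item is consistent."""
    feasible = [item for item in catalog
                if all(lo <= x <= hi for x, (lo, hi) in zip(item, box))]
    if not feasible:
        return None  # empty box: no catalog item fits, so prune this node
    # Contract each dimension to the hull of the remaining items.
    return [(min(it[d] for it in feasible), max(it[d] for it in feasible))
            for d in range(len(box))]

# Two-dimensional property space, as in the paper's numerical example.
catalog = [(1.0, 4.0), (2.0, 1.5), (3.5, 3.0)]
print(contract_catalog([(0.0, 3.0), (1.0, 5.0)], catalog))
# -> [(1.0, 2.0), (1.5, 4.0)]
```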

Related content

In this work, we present a high-order finite volume framework for the numerical simulation of shallow water flows. The method is designed to accurately capture the complex dynamics inherent in shallow water systems, and is particularly suited for applications such as tsunami simulations. The arbitrarily high-order framework ensures a precise representation of flow behaviors, crucial for simulating phenomena characterized by rapid changes and fine-scale features. Thanks to an ad hoc reformulation in terms of production-destruction terms, the time integration ensures positivity preservation without any time-step restrictions, a vital attribute for physical consistency, especially in scenarios where negative water depth reconstructions could lead to unrealistic results. In order to preserve general steady equilibria dictated by the underlying balance law, the high-order reconstruction and numerical flux are blended in a convex fashion with a well-balanced approximation, which is able to provide exact preservation of both static and moving equilibria. Through numerical experiments, we demonstrate the effectiveness and robustness of the proposed approach in capturing the intricate dynamics of shallow water flows, while preserving key physical properties essential for flood simulations.
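As a hedged illustration of the positivity-preserving time integration, the sketch below implements the classical first-order modified Patankar-Euler step for a generic production-destruction system (a standard building block of such schemes, not the paper's full high-order method). The Patankar weights turn the update into a linear system whose matrix is an M-matrix, so the solution stays positive and conservative for any time step.

```python
import numpy as np

def mpe_step(c, dt, prod):
    """One modified Patankar-Euler step for a conservative
    production-destruction system dc_i/dt = sum_j (p_ij - p_ji).
    prod(c) returns the nonnegative matrix P with P[i, j] = p_ij(c).
    The update is unconditionally positivity-preserving."""
    P = prod(c)
    n = len(c)
    M = np.eye(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                M[i, i] += dt * P[j, i] / c[i]   # destruction of i (feeds j)
                M[i, j] -= dt * P[i, j] / c[j]   # production of i from j
    return np.linalg.solve(M, c)

# Linear exchange between two species: stays positive even for huge dt.
prod = lambda c: np.array([[0.0, 0.3 * c[1]], [5.0 * c[0], 0.0]])
c = np.array([0.9, 0.1])
for _ in range(5):
    c = mpe_step(c, dt=10.0, prod=prod)
print(c, c.sum())  # positive components, total mass conserved at 1.0
```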

Motivated by recent work on a preconditioned MINRES method for flipped linear systems in imaging, in this note we extend the scope of that research to include more precise boundary conditions, such as reflective and anti-reflective ones. We prove spectral results for the matrix-sequences associated with the original problem, which justify the use of MINRES in the current setting. The theoretical spectral analysis is supported by a wide variety of numerical experiments concerning the visualization of the spectra of the original matrices. We also report numerical tests regarding the convergence speed and regularization features of the associated GMRES and MINRES methods. Conclusions and open problems end the present study.
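The basic flipping trick behind this line of work can be demonstrated in a few lines (a sketch for the pure Toeplitz case; the reflective and anti-reflective boundary conditions studied in the note lead to richer Toeplitz-plus-Hankel structures). A real nonsymmetric Toeplitz matrix becomes symmetric after multiplication by the anti-identity Y, so MINRES can be applied to Y A x = Y b:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import minres, LinearOperator

# Real nonsymmetric Toeplitz system A x = b: A itself is not symmetric,
# but the flipped matrix Y A (Y = anti-identity) is a symmetric Hankel
# matrix, so MINRES applies to Y A x = Y b.
n = 200
col, row = 0.5 ** np.arange(n), 0.3 ** np.arange(n)
A = toeplitz(col, row)
b = np.ones(n)

YA = np.flipud(A)                       # Y A: symmetric by construction
assert np.allclose(YA, YA.T)
op = LinearOperator((n, n), matvec=lambda v: YA @ v)
x, info = minres(op, np.flipud(b))      # Y b = flipped right-hand side
print(info, np.linalg.norm(A @ x - b))  # residual of the original system
```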

In this paper, we apply quasi-Monte Carlo (QMC) methods with an initial preintegration step to estimate cumulative distribution functions and probability density functions in uncertainty quantification (UQ). The distribution and density functions correspond to a quantity of interest involving the solution to an elliptic partial differential equation (PDE) with a lognormally distributed coefficient and a normally distributed source term. There is extensive previous work on using QMC to compute expected values in UQ, which has proven very successful in tackling a range of different PDE problems. However, the use of QMC for density estimation applied to UQ problems is explored here for the first time. Density estimation presents a more difficult challenge compared to computing the expected value due to discontinuities present in the integral formulations of both the distribution and density. Our strategy is to use preintegration to eliminate the discontinuity by integrating out a carefully selected random parameter, so that QMC can be used to approximate the remaining integral. First, we establish regularity results for the PDE quantity of interest that are required for smoothing by preintegration to be effective. We then show that an $N$-point lattice rule can be constructed for the integrands corresponding to the distribution and density, such that after preintegration the QMC error is of order $\mathcal{O}(N^{-1+\epsilon})$ for arbitrarily small $\epsilon>0$. This is the same rate achieved for computing the expected value of the quantity of interest. Numerical results are presented to reaffirm our theory.
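A toy example illustrates the preintegration idea outside the PDE setting. For $Q(y) = y_1 + \dots + y_s$ with i.i.d. standard normal inputs, integrating out $y_1$ in closed form replaces the discontinuous indicator in the CDF by the smooth integrand $\Phi(t - y_2 - \dots - y_s)$, which a randomly shifted rank-1 lattice rule then integrates at close to first order. The generating vector below is an illustrative assumption, not one constructed by the paper's method:

```python
import numpy as np
from scipy.stats import norm

# Estimate F(t) = P(y_1 + ... + y_s <= t) for i.i.d. standard normals.
# Preintegration over y_1 yields the smooth integrand Phi(t - sum(rest)),
# then a randomly shifted rank-1 lattice rule handles the remaining dims.
s, t, N = 8, 1.0, 2**13
z = np.array([1, 182667, 469891, 498753, 110745, 446247, 250185],
             dtype=np.int64)                        # assumed generating vector
k = np.arange(N, dtype=np.int64)[:, None]
shift = np.random.default_rng(0).random(s - 1)
pts = ((k * z[None, :]) / N + shift) % 1.0          # shifted lattice points
y_rest = norm.ppf(pts)                              # map to N(0,1) inputs
F = np.mean(norm.cdf(t - y_rest.sum(axis=1)))       # preintegrated integrand
print(F, norm.cdf(t / np.sqrt(s)))                  # exact: Phi(t / sqrt(s))
```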

The ability to extract material parameters from quantitative experimental analysis is essential for rational design and theory advancement. However, the difficulty of this analysis increases significantly with the complexity of the theoretical model and the number of material parameters. Here we use Bayesian optimization to develop an analysis platform that can extract up to 8 fundamental material parameters of an organometallic perovskite semiconductor from a transient photoluminescence experiment, based on a complex full physics model that includes drift-diffusion of carriers and dynamic defect occupation. An example study of thermal degradation reveals that changes in doping concentration and carrier mobility dominate, while the defect energy level remains nearly unchanged. This platform can be conveniently applied to other experiments or to combinations of experiments, accelerating materials discovery and optimization of semiconductor materials for photovoltaics and other applications.
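A minimal sketch of such a fitting loop, assuming the scikit-optimize package is available and substituting a toy biexponential decay for the full drift-diffusion/defect model (the real platform would call the physics solver inside the loss function):

```python
import numpy as np
from skopt import gp_minimize   # scikit-optimize; an assumed dependency

# Bayesian optimization searches the parameter space of a forward model
# to match a transient-PL trace. The "full physics model" is replaced
# here by a biexponential decay with three parameters (A, k1, k2).
rng = np.random.default_rng(1)
t = np.linspace(0, 100, 200)
def model(A, k1, k2):
    return A * np.exp(-k1 * t) + (1 - A) * np.exp(-k2 * t)
data = model(0.7, 0.30, 0.02) * rng.lognormal(0.0, 0.03, t.size)  # synthetic

def loss(p):            # mismatch in log space, since PL spans decades
    A, k1, k2 = p
    return float(np.mean((np.log(model(A, k1, k2)) - np.log(data)) ** 2))

res = gp_minimize(loss, [(0.1, 0.9), (0.05, 1.0), (0.001, 0.1)],
                  n_calls=60, random_state=0)
print(res.x, res.fun)   # recovered (A, k1, k2) and final mismatch
```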

In this contribution we apply an adaptive model hierarchy, consisting of a full-order model, a reduced basis reduced order model, and a machine learning surrogate, to parametrized linear-quadratic optimal control problems. The involved reduced order models are constructed adaptively and are called in such a way that the model hierarchy returns an approximate solution of given accuracy for every parameter value. At the same time, the fastest model of the hierarchy is evaluated whenever possible and slower models are only queried if the faster ones are not sufficiently accurate. The performance of the model hierarchy is studied for a parametrized heat equation example with boundary value control.
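The dispatch logic of such a hierarchy can be sketched as follows (the interface is an assumption for illustration, not the authors' implementation): models are ordered fastest to slowest, each faster model is paired with an a-posteriori error estimator, and a full-order solve both answers the query and adaptively enriches the surrogates.

```python
class ModelHierarchy:
    def __init__(self, models, fom_solve, tol):
        # models: list of (solve, estimate_error, enrich) triples ordered
        # fastest first (ML surrogate, then RB-ROM); fom_solve is the
        # full-order fallback, assumed accurate by construction.
        self.models, self.fom_solve, self.tol = models, fom_solve, tol

    def solve(self, mu):
        for solve, estimate_error, enrich in self.models:
            u = solve(mu)
            if estimate_error(mu, u) <= self.tol:
                return u        # fastest sufficiently accurate model wins
        u = self.fom_solve(mu)  # all surrogates too inaccurate for this mu
        for _, _, enrich in self.models:
            enrich(mu, u)       # adapt: extend basis / training data
        return u
```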

The literature is full of inference techniques developed to estimate the parameters of stochastic dynamical systems driven by the well-known Brownian noise. Such diffusion models are often inappropriate for properly describing the dynamics reflected in many real-world data, which are dominated by jump discontinuities of various sizes and frequencies. To account for the presence of jumps, jump-diffusion models have been introduced and some inference techniques developed. Jump-diffusion models are also inadequate, since they fail to reflect the frequent occurrence as well as the continuous spectrum of natural jumps. It is, therefore, crucial to depart from classical stochastic systems like diffusion and jump-diffusion models and resort to stochastic systems where the regime of stochasticity is governed by stochastic fluctuations of L\'evy type. Reconstruction of L\'evy-driven dynamical systems, however, has been a major challenge. The literature on the reconstruction of L\'evy-driven systems is rather sparse: the few reconstruction algorithms that have been developed suffer from one or several problems, such as being data-hungry, failing to provide a full reconstruction of the noise parameters, tackling only some specific systems, failing to cope with multivariate data in practice, and lacking proper validation mechanisms. This letter introduces a maximum likelihood estimation procedure which grants a full reconstruction of the system, requires less data, and is straightforward to implement for multivariate data. To the best of our knowledge, this contribution is the first to tackle all the mentioned shortcomings. We apply our algorithm to simulated data as well as an ice-core dataset spanning the last glaciation. In particular, we find new insights about the dynamics of the climate in the course of the last glaciation that were not found in previous studies.
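To illustrate the maximum likelihood step on the simplest tractable instance (not the letter's general algorithm), the sketch below fits a mean-reverting system driven by a Cauchy ($\alpha = 1$ stable) process, for which the Euler transition density is available in closed form:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import cauchy

# Toy Levy-driven system: dX = theta*(mu - X) dt + sigma dL_t with L a
# Cauchy process. Under the Euler scheme, the residual increments are
# Cauchy with scale sigma * dt, so the likelihood is explicit.
rng = np.random.default_rng(0)
dt, n = 0.01, 50_000
theta, mu, sigma = 2.0, 1.0, 0.5
x = np.empty(n); x[0] = mu
for i in range(n - 1):                  # simulate a sample path (Euler)
    x[i + 1] = (x[i] + theta * (mu - x[i]) * dt
                + sigma * dt * rng.standard_cauchy())

def nll(p):                             # negative log-likelihood
    th, m, sg = p
    if sg <= 0:
        return np.inf
    resid = np.diff(x) - th * (m - x[:-1]) * dt
    return -cauchy.logpdf(resid, scale=sg * dt).sum()

fit = minimize(nll, x0=[1.0, 0.0, 1.0], method="Nelder-Mead")
print(fit.x)   # should be close to (theta, mu, sigma) = (2.0, 1.0, 0.5)
```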

We consider the numerical behavior of the fixed-stress splitting method for coupled poromechanics as undrained regimes are approached. We explain that pressure stability is related to the splitting error of the scheme, not to the fact that the discrete saddle-point matrix never appears in the fixed-stress approach. This observation reconciles previous results regarding the pressure stability of the splitting method. Using examples of compositional poromechanics with application to geological CO$_2$ sequestration, we show that solutions obtained using the fixed-stress scheme with a low-order finite element-finite volume discretization, which is not inherently inf-sup stable, can exhibit the same pressure oscillations obtained with the corresponding fully implicit scheme. Moreover, pressure jump stabilization can effectively remove these spurious oscillations in the fixed-stress setting, while also improving the efficiency of the scheme in terms of the number of iterations required at every time step to reach convergence.
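At the algebraic level, one time step of the fixed-stress iteration can be sketched as follows (the matrices are random stand-ins satisfying the relevant definiteness assumptions, not an actual discretization, and the stabilization parameter is a placeholder for the physical value, roughly $\alpha^2 / K_{dr}$ per cell):

```python
import numpy as np

# Schematic fixed-stress iteration for one time step of a linear
# Biot-type system:  mechanics  A u - B^T p = f,   flow  B u + C p = g.
rng = np.random.default_rng(0)
nu, npres = 20, 10
A = (lambda M: M @ M.T + nu * np.eye(nu))(rng.standard_normal((nu, nu)))
C = (lambda M: M @ M.T + npres * np.eye(npres))(
    rng.standard_normal((npres, npres)))
B = rng.standard_normal((npres, nu))
f, g = rng.standard_normal(nu), rng.standard_normal(npres)

L = 2.0 * np.eye(npres)      # fixed-stress stabilization (placeholder)
u, p = np.zeros(nu), np.zeros(npres)
for it in range(200):
    p_new = np.linalg.solve(C + L, g - B @ u + L @ p)  # flow, lagged u
    u_new = np.linalg.solve(A, f + B.T @ p_new)        # mechanics update
    done = max(np.linalg.norm(p_new - p), np.linalg.norm(u_new - u)) < 1e-10
    u, p = u_new, p_new
    if done:
        break
print(it, np.linalg.norm(B @ u + C @ p - g))  # coupled residual -> 0
```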

In this paper, we introduce a new simple approach to developing and establishing the convergence of splitting methods for a large class of stochastic differential equations (SDEs), including additive, diagonal and scalar noise types. The central idea is to view the splitting method as a replacement of the driving signal of an SDE, namely Brownian motion and time, with a piecewise linear path that yields a sequence of ODEs, which can be discretized to produce a numerical scheme. This new way of understanding splitting methods is inspired by, but does not use, rough path theory. We show that when the driving piecewise linear path matches certain iterated stochastic integrals of Brownian motion, then a high order splitting method can be obtained. We propose a general proof methodology for establishing the strong convergence of these approximations that is akin to the general framework of Milstein and Tretyakov. That is, once local error estimates are obtained for the splitting method, then a global rate of convergence follows. This approach can then be readily applied in future research on SDE splitting methods. By incorporating recently developed approximations for iterated integrals of Brownian motion into these piecewise linear paths, we propose several high order splitting methods for SDEs satisfying a certain commutativity condition. In our experiments, which include the Cox-Ingersoll-Ross model and additive noise SDEs (noisy anharmonic oscillator, stochastic FitzHugh-Nagumo model, underdamped Langevin dynamics), the new splitting methods exhibit convergence rates of $O(h^{3/2})$ and outperform schemes previously proposed in the literature.
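The driving-path-replacement view is easy to demonstrate for additive noise (a minimal variant matching only the Brownian increment; the paper's high-order methods also match iterated integrals): each step replaces the Brownian path by its chord, and the resulting ODE is integrated with an off-the-shelf method such as RK4.

```python
import numpy as np

# For dY = f(Y) dt + sigma dW, replacing W on a step of size h by a
# linear path with matching increment dW turns the step into the ODE
#   y' = f(y) + sigma * (dW / h),
# which is then integrated with classical RK4 substeps.
def path_step(y, h, dW, f, sigma, substeps=4):
    drift = lambda u: f(u) + sigma * (dW / h)   # constant "noise velocity"
    dt = h / substeps
    for _ in range(substeps):
        k1 = drift(y)
        k2 = drift(y + 0.5 * dt * k1)
        k3 = drift(y + 0.5 * dt * k2)
        k4 = drift(y + dt * k3)
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

# Noisy anharmonic oscillator (position-velocity form), one sample path.
f = lambda y: np.array([y[1], -y[0] ** 3])
sigma = np.array([0.0, 0.5])                    # noise on velocity only
rng = np.random.default_rng(3)
y, h = np.array([1.0, 0.0]), 0.01
for _ in range(1000):
    y = path_step(y, h, np.sqrt(h) * rng.standard_normal(), f, sigma)
print(y)
```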

In this paper, we aim to perform sensitivity analysis of set-valued models and, in particular, to quantify the impact of uncertain inputs on feasible sets, which are key elements in solving a robust optimization problem under constraints. While most sensitivity analysis methods deal with scalar outputs, this paper introduces a novel approach for performing sensitivity analysis with set-valued outputs. Our innovative methodology is designed for excursion sets, but is versatile enough to be applied to set-valued simulators, including those found in viability fields, or when working with maps like pollutant concentration maps or flood zone maps. We propose to use the Hilbert-Schmidt Independence Criterion (HSIC) with a kernel designed for set-valued outputs. After proposing a probabilistic framework for random sets, a first contribution is the proof that this kernel is characteristic, an essential property in a kernel-based sensitivity analysis context. To measure the contribution of each input, we then propose to use HSIC-ANOVA indices. With these indices, we can identify which inputs should be neglected (screening) and we can rank the others according to their influence (ranking). The estimation of these indices is also adapted to the set-valued outputs. Finally, we test the proposed method on three test cases of excursion sets.
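A compact sketch of HSIC-based screening for set-valued outputs, with sets represented as binary masks on a grid and a symmetric-difference output kernel (one natural choice for a kernel on sets, not necessarily the characteristic kernel constructed in the paper):

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC estimator from input/output Gram matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
n, grid = 300, np.linspace(-2, 2, 64)
X = rng.uniform(0.2, 1.0, (n, 2))                # two scalar inputs
# Excursion sets {t : g(t) >= 0} of g(t) = x1 - t^2: depend on x1 only.
masks = (X[:, 0][:, None] - grid[None, :] ** 2 >= 0).astype(float)

# Output kernel from the normalized measure of the symmetric difference.
symdiff = np.abs(masks[:, None, :] - masks[None, :, :]).mean(axis=2)
K_out = np.exp(-symdiff / 0.1)
for j in range(2):                               # Gaussian input kernels
    d2 = (X[:, j][:, None] - X[:, j][None, :]) ** 2
    K_in = np.exp(-d2 / (2 * X[:, j].std() ** 2))
    print(f"HSIC(x{j+1}, set) = {hsic(K_in, K_out):.4f}")
# x1 drives the excursion set while x2 is inert: HSIC for x1 dominates,
# so x2 would be screened out.
```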

The spread of the Internet of Things (IoT) is demanding new, powerful architectures for handling the huge amounts of data produced by IoT devices. In many scenarios, existing isolated solutions applied to IoT devices use a set of rules to detect, report, and mitigate malware activities or threats. This paper describes a development environment that allows the programming and debugging of such rule-based multi-agent solutions. The solution consists of the integration of a rule engine into the agent, the use of a specialized wrapping agent class with a graphical user interface for programming and testing purposes, and a mechanism for the incremental composition of behaviors. Finally, a set of examples and a comparative study were carried out to test the suitability and validity of the approach. The JADE multi-agent middleware has been used for the practical implementation of the approach.
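The pattern can be sketched in a few lines (written here in Python for illustration; the actual environment wraps a Java rule engine inside JADE agents): an agent keeps a working memory of facts, forward-chains condition-action rules over incoming facts, and composes behaviors incrementally by adding rules at runtime.

```python
# Hypothetical names throughout; a sketch of the pattern, not the JADE API.
class Rule:
    def __init__(self, name, condition, action):
        self.name, self.condition, self.action = name, condition, action

class RuleAgent:
    def __init__(self):
        self.facts = []          # working memory (e.g. sensor readings)
        self.rules = []          # incrementally composed behaviors

    def add_rule(self, rule):    # incremental composition of behaviors
        self.rules.append(rule)

    def tell(self, fact):        # forward-chaining pass over a new fact
        self.facts.append(fact)
        for rule in self.rules:
            if rule.condition(fact):
                rule.action(self, fact)

agent = RuleAgent()
agent.add_rule(Rule(
    "report-malware",
    condition=lambda f: f.get("type") == "traffic" and f.get("rate", 0) > 1000,
    action=lambda a, f: print(f"ALERT: possible malware flood from {f['src']}"),
))
agent.tell({"type": "traffic", "src": "10.0.0.7", "rate": 4200})
```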
