
In this paper, we propose a multidimensional statistical model of intraday electricity prices at the scale of the trading session, which allows all products to be simulated simultaneously. This model, based on Poisson measures and inspired by the Common Shock Poisson Model, reproduces the Samuelson effect (intensity and volatility increase as time to maturity decreases). It also reproduces the price correlation structure, highlighted here in the data, in which the correlation decreases as two maturities move apart. The model has only three parameters, which can be estimated by a method of moments that we propose. We demonstrate the usefulness of the model on a case of storage valuation by dynamic programming over a trading session.
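
To make the mechanism concrete, here is a minimal Python sketch of a common-shock, Poisson-driven simulation of several delivery products over one trading session. It is only illustrative: the intensity form `lam0 * tau**(-beta)`, the Gaussian jump sizes, and the single shock shared by all products are assumptions made here, and the paper's model uses a richer common-shock structure so that cross-maturity correlation decays with the distance between maturities.

```python
import numpy as np

def simulate_session(maturities, dt=1.0 / 60, lam0=2.0, beta=0.5,
                     sigma=0.01, rho=0.5, seed=0):
    """Toy common-shock Poisson simulation of intraday price changes.

    Jump intensity grows as time to maturity shrinks (Samuelson effect);
    a shock shared by all products induces cross-maturity correlation.
    All parameter names and functional forms are illustrative.
    """
    rng = np.random.default_rng(seed)
    maturities = np.asarray(maturities, dtype=float)
    t_grid = np.arange(0.0, maturities.min(), dt)
    prices = np.zeros((len(t_grid), len(maturities)))
    for i in range(1, len(t_grid)):
        tau = np.maximum(maturities - t_grid[i], dt)   # time to maturity
        lam = lam0 * tau ** (-beta)                    # Samuelson-type intensity
        common = rng.poisson(rho * lam.min() * dt)     # shock hitting every product
        idio = rng.poisson((1.0 - rho) * lam * dt)     # product-specific shocks
        jumps = sigma * (np.sqrt(idio) * rng.standard_normal(len(maturities))
                         + np.sqrt(common) * rng.standard_normal())
        prices[i] = prices[i - 1] + jumps
    return t_grid, prices

# three hourly products delivering 8, 9 and 10 hours after session start
t, P = simulate_session([8.0, 9.0, 10.0])
```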


The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modelling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experience around modelling and model-driven software and systems. This year's edition offers the modelling community an opportunity to further advance the foundations of modelling and to present innovative applications of modelling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.

We examine a stochastic formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution, but knows that it lies in some hypothesis set and possesses a historical data set from which information about it can be gleaned. We define a prescriptive solution as a decision rule mapping such a data set to decisions. As no prescriptive solution is generalizable over the entire hypothesis set, we define out-of-sample optimality as a local average over a neighbourhood of hypotheses, averaged over the sampling distribution. We prove sufficient conditions for local out-of-sample optimality, which reduce to functions of the sufficient statistic of the hypothesis family. We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently through a combination of sampling and bisection search algorithms. Finally, we illustrate our approach on the newsvendor model and find strong performance compared against alternatives in the literature. Our results have potential implications for end-to-end learning and Bayesian optimization.
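
A hedged sketch of the sampling-plus-bisection idea on the newsvendor example follows. The exponential demand hypotheses with a conjugate Gamma posterior, the prior parameters, and the function name are assumptions introduced for illustration, not the paper's setup; the order quantity is found by bisection on the hypothesis-averaged CDF at the critical ratio.

```python
import numpy as np

def newsvendor_order(data, cu=4.0, co=1.0, n_hyp=2000, seed=0):
    """Toy prescriptive newsvendor rule: sample candidate demand
    distributions (here, exponential rates from a Gamma(1,1) posterior
    given the data), then bisect for the order quantity x solving
    avg_F(x) = cu / (cu + co). Illustrative only."""
    rng = np.random.default_rng(seed)
    rates = rng.gamma(shape=1.0 + len(data),
                      scale=1.0 / (1.0 + np.sum(data)), size=n_hyp)
    target = cu / (cu + co)                      # critical ratio

    def avg_cdf(x):
        return np.mean(1.0 - np.exp(-rates * x)) # CDF averaged over hypotheses

    lo, hi = 0.0, 10.0 * np.mean(data) + 1.0     # bracket for the root
    for _ in range(100):                         # bisection on avg_F(x) = target
        mid = 0.5 * (lo + hi)
        if avg_cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# example: ten observed demands
demand = np.array([7.0, 3.5, 9.1, 4.2, 6.8, 5.0, 8.3, 2.9, 7.7, 6.1])
print(newsvendor_order(demand))
```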

In this study, we propose a staging area for ingesting new experimental data on superconductors, machine-collected from scientific articles, into SuperCon. Our objective is to improve the efficiency of updating SuperCon while maintaining or enhancing the data quality. We present a semi-automatic staging area driven by a workflow that combines automatic and manual processes on the extracted database. An automatic anomaly-detection process pre-screens the collected data. Users can then manually correct any errors through a user interface tailored to simplify data verification against the original PDF documents. Additionally, when a record is corrected, its raw data are collected and used as training data to improve the machine learning models. Evaluation experiments demonstrate that our staging area significantly improves curation quality. We compare the interface with the traditional manual approach of reading PDF documents and recording information in an Excel document. Using the interface boosts precision and recall by 6% and 50%, respectively, corresponding to an average increase of 40% in F1-score.
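
As a minimal, hypothetical illustration of such an anomaly-detection pre-screen, the sketch below flags non-physical or statistically outlying critical-temperature values for manual review. The field name `critical_temperature_k`, the thresholds, and the robust z-score rule are assumptions, not SuperCon's schema or the paper's detector.

```python
import numpy as np

def prescreen(records, tc_field="critical_temperature_k", z_thresh=4.0):
    """Flag machine-extracted records for manual review (illustrative).

    Assumes Tc values are numeric or missing; flags missing / non-physical
    values and robust z-score outliers.
    """
    tcs = np.array([r.get(tc_field, np.nan) for r in records], dtype=float)
    flags = np.isnan(tcs) | (tcs < 0) | (tcs > 400)        # non-physical values
    med = np.nanmedian(tcs)
    mad = np.nanmedian(np.abs(tcs - med)) + 1e-9
    robust_z = 0.6745 * (tcs - med) / mad
    flags |= np.abs(robust_z) > z_thresh                    # statistical outliers
    return [r for r, f in zip(records, flags) if f]
```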

In this paper we investigate the relationships between a multipreferential semantics for defeasible reasoning in knowledge representation and a multilayer neural network model. Weighted knowledge bases for a simple description logic with typicality are considered under a (many-valued) ``concept-wise'' multipreference semantics. The semantics is used to provide a preferential interpretation of MultiLayer Perceptrons (MLPs). Both a model-checking approach and an entailment-based approach are exploited in the verification of conditional properties of MLPs.
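
The sketch below gives one hedged reading of the model-checking step: hidden- and output-unit activations are treated as many-valued membership degrees of concepts, and a conditional property T(C) ⊑ D (typical Cs are Ds) is checked by requiring the most typical C-instances to satisfy D above a threshold. The concept-to-unit assignment, the threshold, and the function name are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def check_typicality_property(W1, b1, W2, b2, X, c_unit, d_unit, k=10, thresh=0.5):
    """Toy check of T(C) ⊑ D on a 2-layer MLP: among the k inputs with the
    highest membership degree of concept C (hidden unit c_unit), the degree
    of concept D (output unit d_unit) must exceed `thresh`."""
    H = sigmoid(X @ W1 + b1)                 # hidden-layer membership degrees
    Y = sigmoid(H @ W2 + b2)                 # output-layer membership degrees
    typical = np.argsort(-H[:, c_unit])[:k]  # most typical C-instances
    return bool(np.all(Y[typical, d_unit] > thresh))
```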

Despite the growing availability of sensing and data in general, we remain unable to fully characterise many in-service engineering systems and structures from a purely data-driven approach. The vast data and resources available to capture human activity are unmatched in our engineered world, and, even in cases where data could be referred to as ``big,'' they will rarely hold information across operational windows or life spans. This paper pursues the combination of machine learning technology and physics-based reasoning to enhance our ability to make predictive models with limited data. By explicitly linking the physics-based view of stochastic processes with a data-based regression approach, a spectrum of possible Gaussian process models is introduced that enables the incorporation of different levels of expert knowledge of a system. Examples illustrate how these approaches can significantly reduce reliance on data collection whilst also increasing the interpretability of the model, another important consideration in this context.
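
A minimal sketch of one point on this spectrum, assuming a physics-derived mean function and an RBF kernel on the residual: the GP then only has to learn the discrepancy between the physical model and the data, which is what reduces the reliance on data collection. The kernel, noise level, and example physics model below are illustrative choices, not the paper's.

```python
import numpy as np

def rbf(X1, X2, ell=0.5, sf=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def physics_informed_gp(x_train, y_train, x_test, mean_fn, noise=1e-2, **kern):
    """GP regression with a physics-derived prior mean: the GP models only
    the discrepancy between the data and mean_fn."""
    K = rbf(x_train, x_train, **kern) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_test, x_train, **kern)
    resid = y_train - mean_fn(x_train)
    alpha = np.linalg.solve(K, resid)
    return mean_fn(x_test) + Ks @ alpha          # posterior mean at x_test

# example: physics mean = linear stiffness model, data add a mild nonlinearity
rng = np.random.default_rng(0)
mean_fn = lambda x: 2.0 * x                      # assumed physics-based model
x_tr = np.linspace(0, 1, 8)
y_tr = 2.0 * x_tr + 0.3 * np.sin(6 * x_tr) + 0.01 * rng.standard_normal(8)
x_te = np.linspace(0, 1, 50)
y_hat = physics_informed_gp(x_tr, y_tr, x_te, mean_fn)
```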

In this work, we theoretically and numerically discuss the time-fractional subdiffusion-normal transport equation, which depicts a crossover from sub-diffusion (as $t\rightarrow 0$) to normal diffusion (as $t\rightarrow \infty$). First, the well-posedness and regularity of the model are studied by using the bivariate Mittag-Leffler function. Theoretical results show that introducing the first-order derivative operator substantially improves the regularity of the solution. Then, a high-accuracy numerical scheme is developed that applies whether the initial value is smooth or non-smooth. More specifically, we use the contour integral method (CIM) with a parameterized hyperbolic contour to approximate the temporal local and non-local operators, and employ the standard Galerkin finite element method for spatial discretization. Rigorous error estimates show that the proposed numerical scheme has spectral accuracy in time and optimal convergence order in space. In addition, we further improve the algorithm and reduce the computational cost by using barycentric Lagrange interpolation. Finally, the theoretical results and the acceleration algorithm are verified by several 1-D and 2-D numerical experiments, which also show that the numerical scheme developed in this paper is effective and robust.
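
As a hedged illustration of the contour-integral idea, the sketch below inverts a Laplace transform by trapezoidal quadrature on a parabolic contour (rather than the hyperbolic contour used in the paper). The parameter choices h = 3/N and mu = pi*N/(12*t) follow a common recommendation but are assumptions here, and the test transform F(z) = 1/(z+1) is only a sanity check, not the paper's operator.

```python
import numpy as np

def inverse_laplace_parabolic(F, t, N=32):
    """Approximate f(t) = (1/(2*pi*i)) * integral of exp(z*t) * F(z) dz
    by the trapezoid rule on the parabolic contour z(x) = mu*(i*x + 1)**2.
    Contour and step parameters are a common (not necessarily optimal) choice."""
    h = 3.0 / N
    mu = np.pi * N / (12.0 * t)
    x = h * np.arange(-N, N + 1)
    z = mu * (1j * x + 1.0) ** 2            # contour points
    dz = 2.0 * mu * 1j * (1j * x + 1.0)     # dz/dx
    return (h / (2j * np.pi)) * np.sum(np.exp(z * t) * F(z) * dz)

# sanity check: F(z) = 1/(z + 1) has inverse transform exp(-t)
t = 1.0
print(inverse_laplace_parabolic(lambda z: 1.0 / (z + 1.0), t).real, np.exp(-t))
```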

In this paper, we evaluate the portability of the SYCL programming model on some of the latest CPUs and GPUs from a wide range of vendors, using the two main compilers: DPC++ and hipSYCL/OpenSYCL. Both compilers currently support GPUs from all three major vendors; we evaluate performance on the Intel(R) Data Center GPU Max 1100, the NVIDIA A100 GPU, and the AMD MI250X GPU. Support on CPUs is currently less established: DPC++ supports only x86 CPUs through OpenCL, whereas OpenSYCL has an OpenMP backend capable of targeting all modern CPUs. We benchmark the Intel Xeon Platinum 8360Y Processor (Ice Lake), the AMD EPYC 9V33X (Genoa-X), and the Ampere Altra platforms. We study a range of primarily bandwidth-bound applications implemented using the OPS and OP2 DSLs, evaluate different formulations in SYCL, and contrast their performance with "native" programming approaches where available (CUDA/HIP/OpenMP). On GPU architectures, SYCL on average even slightly outperforms the native approaches, while on CPUs it falls behind, highlighting a continued need for improving CPU performance. While SYCL does not solve all the challenges of performance portability (e.g. needing different algorithms on different hardware), it does provide a single programming model and ecosystem with which to target most current HPC architectures productively.

In this paper, we consider experiments involving both discrete and continuous factors under general parametric statistical models. To search for optimal designs under the D-criterion, we propose a new algorithm, called the ForLion algorithm, which performs an exhaustive search in a design space with discrete and continuous factors while maintaining high efficiency and a reduced number of design points. Its optimality is guaranteed by the general equivalence theorem. We show its advantage using a real-life experiment under multinomial logistic models, and we further specialize the algorithm to generalized linear models, where model-specific formulae and iterative steps improve its efficiency.
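
To make the D-criterion concrete, here is a small sketch of the objective that a search such as ForLion would maximize over candidate design points: the log-determinant of the Fisher information. The logistic model, the equal-weight exact design, and the helper name `logdet_info_logistic` are assumptions made for illustration; the paper covers general parametric models and weighted designs.

```python
import numpy as np

def logdet_info_logistic(X, beta):
    """log |X' W X| for a logistic model at a candidate (equal-weight) design
    matrix X: the D-criterion objective up to constants."""
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    W = p * (1.0 - p)                        # GLM weights
    M = X.T @ (X * W[:, None])               # Fisher information matrix
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

# example: one two-level discrete factor crossed with one continuous factor
levels = [0.0, 1.0]
xc = np.linspace(-1, 1, 5)
X = np.array([[1.0, a, c] for a in levels for c in xc])   # intercept, discrete, continuous
print(logdet_info_logistic(X, beta=np.array([0.5, -1.0, 2.0])))
```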

In this paper, we combine the Smolyak technique for multi-dimensional interpolation with the Filon-Clenshaw-Curtis (FCC) rule for one-dimensional oscillatory integration, to obtain a new Filon-Clenshaw-Curtis-Smolyak (FCCS) rule for oscillatory integrals with linear phase over the $d$-dimensional cube $[-1,1]^d$. By combining stability and convergence estimates for the FCC rule with error estimates for the Smolyak interpolation operator, we obtain an error estimate for the FCCS rule, consisting of a Smolyak-type error estimate multiplied by a term that decays like $\mathcal{O}(k^{-\tilde{d}})$, where $k$ is the wavenumber and $\tilde{d}$ is the number of oscillatory dimensions. If all dimensions are oscillatory, a higher negative power of $k$ appears in the estimate. As an application, we consider the forward problem of uncertainty quantification (UQ) for a one-space-dimensional Helmholtz problem with wavenumber $k$ and a random heterogeneous refractive index, depending in an affine way on $d$ i.i.d. uniform random variables. After applying a classical hybrid numerical-asymptotic approximation, expectations of functionals of the solution of this problem can be formulated as a sum of oscillatory integrals over $[-1,1]^d$, which we compute using the FCCS rule. We give numerical results for the FCCS rule and the UQ algorithm showing that accuracy improves as both $k$ and the order of the rule increase. We also give results for dimension-adaptive sparse-grid FCCS quadrature showing its efficiency as the dimension increases.
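
As a simplified, hedged stand-in for the one-dimensional Filon building block, the sketch below interpolates $f$ at Clenshaw-Curtis points, expands the interpolant in the Legendre basis, and integrates the oscillatory moments exactly via $\int_{-1}^{1} P_n(x) e^{ikx}\,dx = 2 i^n j_n(k)$. The actual FCC rule works with Chebyshev moments and a dedicated recurrence, so this Legendre variant is illustrative only.

```python
import numpy as np
from scipy.special import spherical_jn

def filon_legendre(f, k, N=16):
    """Filon-type rule for I = int_{-1}^{1} f(x) exp(i*k*x) dx:
    degree-N polynomial interpolation of f, exact oscillatory moments
    of the Legendre basis via spherical Bessel functions."""
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Clenshaw-Curtis nodes
    c = np.polynomial.legendre.legfit(x, f(x), N)        # Legendre coefficients
    n = np.arange(N + 1)
    moments = 2.0 * (1j ** n) * spherical_jn(n, k)       # exact moments
    return np.dot(c, moments)

# example: f(x) = 1/(1 + x^2) with wavenumber k = 50
print(filon_legendre(lambda x: 1.0 / (1.0 + x**2), 50.0))
```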

In this paper we present an abstract nonsmooth optimization problem for which we recall existence and uniqueness results. We present a numerical scheme to approximate its solution. The theory is then applied to a sample static contact problem describing an elastic body in frictional contact with a foundation. This problem leads to a hemivariational inequality, which we solve numerically. Finally, we compare three computational methods for solving contact problems in mechanics: a direct optimization method, the augmented Lagrangian method, and the primal-dual active set strategy.
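
For the last of the three methods, a minimal sketch of a primal-dual active set iteration is given below for a bound-constrained quadratic model problem, min ½u'Au − f'u subject to u ≤ g. This is an illustrative stand-in under stated assumptions (symmetric positive definite A, a simple obstacle example), not the paper's contact solver.

```python
import numpy as np

def primal_dual_active_set(A, f, g, c=1.0, maxit=50):
    """Primal-dual active set iteration for min 1/2 u'Au - f'u s.t. u <= g.
    lam is the multiplier of the bound constraint; the method stops when the
    predicted active set no longer changes."""
    n = len(f)
    u, lam = np.zeros(n), np.zeros(n)
    for _ in range(maxit):
        active = lam + c * (u - g) > 0              # predicted contact set
        inactive = ~active
        u_new = np.empty(n)
        lam_new = np.zeros(n)
        u_new[active] = g[active]                   # constraint holds with equality
        if inactive.any():                          # reduced linear system
            rhs = f[inactive] - A[np.ix_(inactive, active)] @ g[active]
            u_new[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)], rhs)
        lam_new[active] = f[active] - (A @ u_new)[active]
        if np.array_equal(active, lam_new + c * (u_new - g) > 0):
            return u_new, lam_new                   # active set settled: converged
        u, lam = u_new, lam_new
    return u, lam

# small example: 1-D Laplacian pushed against an obstacle g = 2
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u, lam = primal_dual_active_set(A, f=np.ones(n), g=np.full(n, 2.0))
```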

Testing cross-sectional independence in panel data models is of fundamental importance in econometric analysis with high-dimensional panels. Recently, econometricians have begun to turn their attention to this problem in the presence of serial dependence. The existing procedure for testing cross-sectional independence under serial correlation is based on the sum of the sample cross-sectional correlations; it generally performs well when the alternative has dense cross-sectional correlations, but suffers from low power against sparse alternatives. To deal with sparse alternatives, we propose a test based on the maximum of the squared sample cross-sectional correlations. Furthermore, we propose a combined test that combines the p-values of the max-based and sum-based tests, and which performs well under both dense and sparse alternatives. The combined test relies on the asymptotic independence of the max-based and sum-based test statistics, which we establish rigorously. We show that the proposed max-based and combined tests have attractive theoretical properties and demonstrate their superior performance via extensive simulations. We apply the two new tests to analyze the weekly returns on the securities in the S\&P 500 index under the Fama-French three-factor model, and confirm the usefulness of the proposed combined test in detecting cross-sectional dependence.
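
A hedged sketch of the sum-based, max-based, and combined statistics is given below. The null approximations used here are the classical i.i.d. ones (a Breusch-Pagan-type chi-square for the sum statistic and a Gumbel-type limit for the max statistic), not the serial-correlation-robust calibration developed in the paper; the Fisher-type combination simply exploits the asymptotic independence of the two p-values.

```python
import numpy as np
from scipy import stats

def independence_tests(E):
    """Sum-, max-, and combined tests of cross-sectional independence from a
    T x N residual panel E, under classical i.i.d. null approximations
    (illustrative only; the paper's statistics allow serial correlation)."""
    T, N = E.shape
    R = np.corrcoef(E, rowvar=False)                  # N x N sample correlations
    r2 = R[np.triu_indices(N, k=1)] ** 2              # squared pairwise correlations
    sum_stat = T * r2.sum()
    max_stat = T * r2.max()
    p_sum = stats.chi2.sf(sum_stat, df=N * (N - 1) // 2)
    y = max_stat - 4.0 * np.log(N) + np.log(np.log(N))
    p_max = 1.0 - np.exp(-np.exp(-y / 2.0) / np.sqrt(8.0 * np.pi))
    fisher = -2.0 * (np.log(p_sum) + np.log(p_max))   # combine the two p-values
    p_comb = stats.chi2.sf(fisher, df=4)
    return p_sum, p_max, p_comb

# example: 200 weeks of factor-model residuals for 50 securities (simulated null)
rng = np.random.default_rng(0)
print(independence_tests(rng.standard_normal((200, 50))))
```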
