
In this paper, the accuracy and robustness of quality measures for the assessment of machine learning models are investigated. The prediction quality of a machine learning model is evaluated in a model-independent manner based on a cross-validation approach, where the approximation error is estimated for unseen data. The presented measures quantify the amount of explained variation in the model prediction. The reliability of these measures is assessed by means of several numerical examples, for which an additional data set is available to verify the estimated prediction error. Furthermore, the confidence bounds of the presented quality measures are estimated, and local quality measures are derived from the prediction residuals obtained by the cross-validation approach.
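
As an illustration of the kind of cross-validation-based quality measure described above, the sketch below estimates an explained-variation score (a $Q^2$-type coefficient) from k-fold cross-validation residuals. The model, data, and fold count are placeholders, not those of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# Hedged sketch: a model-independent, CV-based explained-variation measure.
# Any regressor can be plugged in; RandomForestRegressor is a placeholder.

def cv_explained_variation(model, X, y, n_splits=10, seed=0):
    residuals = np.empty_like(y, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model.fit(X[train], y[train])
        residuals[test] = y[test] - model.predict(X[test])
    # Q^2-type score: share of output variance explained on held-out data
    q2 = 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)
    return q2, residuals

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]**2 + 0.1 * rng.normal(size=500)

q2, res = cv_explained_variation(RandomForestRegressor(n_estimators=200), X, y)
print(f"CV explained variation Q^2 = {q2:.3f}")
# the per-sample residuals `res` could feed local quality measures, as in the text
```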

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry practitioners. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will provide the modeling community with an opportunity to further advance the foundations of modeling, and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
November 13, 2024

This paper analyzes a full discretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. The discretization uses the Euler scheme for temporal discretization and the finite element method for spatial discretization. A key contribution of this work is the introduction of a novel stability estimate for a discrete stochastic convolution, which plays a crucial role in establishing pathwise uniform convergence estimates for fully discrete approximations of nonlinear stochastic parabolic equations. By using this stability estimate in conjunction with the discrete stochastic maximal $L^p$-regularity estimate, the study derives a pathwise uniform convergence rate that encompasses general spatial $L^q$-norms. Moreover, the theoretical convergence rate is verified by numerical experiments.
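
For orientation only, here is a minimal one-dimensional finite-difference analogue of such a full discretization: semi-implicit Euler in time, centered differences in space, and a single scalar Wiener increment as multiplicative noise. The paper itself analyzes the three-dimensional equation with finite elements, so this is an assumption-laden toy, not the analyzed scheme.

```python
import numpy as np

# Toy 1D analogue of a fully discrete stochastic Allen-Cahn equation:
#   du = (u_xx + u - u^3) dt + u dW(t),  u(0,t) = u(1,t) = 0.
# Diffusion is treated implicitly; reaction and noise are explicit.

rng = np.random.default_rng(0)
N, M, T = 128, 1000, 1.0                 # spatial cells, time steps, final time
h, dt = 1.0 / N, T / M
x = np.linspace(0, 1, N + 1)[1:-1]       # interior nodes
u = np.sin(np.pi * x)                    # initial condition

# Dirichlet Laplacian (tridiagonal) and implicit operator (I - dt * A)
main = -2.0 * np.ones(N - 1) / h**2
off = np.ones(N - 2) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
lhs = np.eye(N - 1) - dt * A

for _ in range(M):
    dW = rng.normal(0.0, np.sqrt(dt))    # scalar Wiener increment
    rhs = u + dt * (u - u**3) + u * dW   # explicit reaction + noise terms
    u = np.linalg.solve(lhs, rhs)        # implicit diffusion step
```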

This paper deals with two fields related to active imaging systems. First, we explore image processing algorithms to correct artefacts such as speckle, scintillation, and image dancing caused by atmospheric turbulence. Next, we examine how to evaluate the performance of this kind of system. For this task, we propose a modified version of the German TRM3 metric, which permits obtaining MTF-like measurements. We use the database acquired during the NATO-TG40 field trials for our tests.

Missing data is a common problem that challenges the study of treatment effects. In the context of mediation analysis, this paper addresses missingness in the two key variables, the mediator and the outcome, focusing on identification. We consider self-separated missingness models, where identification is achieved by conditional independence assumptions only, and self-connected missingness models, where identification relies on so-called shadow variables. The first class is somewhat limited, as it is constrained by the need to remove a certain number of connections from the model. The second class turns out to admit substantial variation in the position of the shadow variable in the causal structure (vis-à-vis the mediator and outcome) and in the corresponding implications for the model. In constructing the models, to improve plausibility, we pay close attention to allowing, where possible, dependencies due to unobserved causes of the missingness. In this exploration, we develop theory where needed. The result is a set of templates for identification in this mediation setting, generally useful identification techniques, and, perhaps most significantly, a synthesis and substantial expansion of shadow variable theory.

Polynomial approximations of functions are widely used in scientific computing. In certain applications, it is often desired to require the polynomial approximation to be non-negative (resp. non-positive), or bounded within a given range, due to constraints posed by the underlying physical problems. Efficient numerical methods are thus needed to enforce such conditions. In this paper, we discuss effective numerical algorithms for polynomial approximation under non-negativity constraints. We first formulate the constrained optimization problem, its primal and dual forms, and then discuss efficient first-order convex optimization methods, with a particular focus on high dimensional problems. Numerical examples are provided, for up to $200$ dimensions, to demonstrate the effectiveness and scalability of the methods.
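
A minimal sketch of one such first-order method: dual projected gradient for a least-squares polynomial fit with non-negativity enforced at collocation points. The basis, grids, step size, and iteration count are illustrative assumptions, and the paper's high-dimensional formulation is not reproduced here.

```python
import numpy as np

# Constrained fit: min_c 0.5*||V c - f||^2  subject to  W c >= 0,
# where V, W are Chebyshev-Vandermonde matrices on fitting/constraint grids.
# The dual is solved by projected gradient ascent on the multipliers.

rng = np.random.default_rng(1)
f = lambda x: np.maximum(np.sin(2 * np.pi * x), 0.0)   # non-negative target
xs = np.linspace(0, 1, 200)                            # fitting nodes
xc = np.linspace(0, 1, 400)                            # constraint nodes
d = 12                                                 # polynomial degree

V = np.polynomial.chebyshev.chebvander(2 * xs - 1, d)  # fitting matrix
W = np.polynomial.chebyshev.chebvander(2 * xc - 1, d)  # constraint matrix
G = np.linalg.inv(V.T @ V)                             # small (d+1)x(d+1) system
lam = np.zeros(len(xc))                                # dual variable, lam >= 0

for _ in range(5000):                                  # dual projected gradient
    c = G @ (V.T @ f(xs) + W.T @ lam)                  # primal minimizer c*(lam)
    lam = np.maximum(0.0, lam - 1e-3 * (W @ c))        # gradient step + projection

print("min p(x) on constraint grid:", (W @ c).min())   # ~ >= 0 up to tolerance
```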

Agent-based modeling (ABM) offers powerful insights into complex systems, but its practical utility has been limited by computational constraints and simplistic agent behaviors, especially when simulating large populations. Recent advancements in large language models (LLMs) could enhance ABMs with adaptive agents, but their integration into large-scale simulations remains challenging. This work introduces a novel methodology that bridges this gap by efficiently integrating LLMs into ABMs, enabling the simulation of millions of adaptive agents. We present LLM archetypes, a technique that balances behavioral complexity with computational efficiency, allowing for nuanced agent behavior in large-scale simulations. Our analysis explores the crucial trade-off between simulation scale and individual agent expressiveness, comparing different agent architectures ranging from simple heuristic-based agents to fully adaptive LLM-powered agents. We demonstrate the real-world applicability of our approach through a case study of the COVID-19 pandemic, simulating 8.4 million agents representing New York City and capturing the intricate interplay between health behaviors and economic outcomes. Our method significantly enhances ABM capabilities for predictive and counterfactual analyses, addressing limitations of historical data in policy design. By implementing these advances in an open-source framework, we facilitate the adoption of LLM archetypes across diverse ABM applications. Our results show that LLM archetypes can markedly improve the realism and utility of large-scale ABMs while maintaining computational feasibility, opening new avenues for modeling complex societal challenges and informing data-driven policy decisions.
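
The following toy sketch illustrates the archetype idea as described: one cached LLM call per (archetype, context) pair drives an arbitrarily large agent population. `query_llm`, the archetype names, and the prompt are hypothetical stand-ins, not the paper's framework.

```python
import random
from functools import lru_cache

# Toy sketch of "LLM archetypes": agents share a small set of behavioral
# archetypes, so one cached LLM call per (archetype, context) pair serves
# millions of agents. `query_llm` stands in for any real LLM API call.

def query_llm(prompt: str) -> float:
    """Deterministic pseudo-probability so the sketch runs offline."""
    return random.Random(prompt).random()

@lru_cache(maxsize=None)
def archetype_policy(archetype: str, context: str) -> float:
    prompt = f"As a {archetype} during {context}, how likely are you to stay home?"
    return query_llm(prompt)

ARCHETYPES = ["risk-averse senior", "essential worker", "remote professional"]
agents = random.choices(ARCHETYPES, k=1_000_000)   # one archetype label per agent

context = "a week of rising COVID-19 case counts"
stay_home = sum(random.random() < archetype_policy(a, context) for a in agents)
print(f"{stay_home:,} of {len(agents):,} agents stay home; "
      f"LLM calls: {archetype_policy.cache_info().misses}")
```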

This study examines the effect that different feature selection methods have on models created with XGBoost, a popular machine learning algorithm with strong regularization. It shows that three different ways of reducing the dimensionality of the features produce no statistically significant change in the prediction accuracy of the model. This suggests that the traditional idea of removing noisy training data so that models do not overfit may not apply to XGBoost, although feature reduction may still be viable for reducing computational complexity.
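
A hedged sketch of this kind of experiment: three dimensionality-reduction settings feeding XGBoost, scored by cross-validation and compared with a non-parametric significance test. The dataset, reducers, and hyperparameters are illustrative assumptions, not those of the study.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

# Compare feature-reduction settings ahead of XGBoost via 10-fold CV,
# then test whether accuracy differences are statistically significant.

X, y = make_classification(n_samples=2000, n_features=100, n_informative=15,
                           random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4)

pipelines = {
    "all features": make_pipeline(model),
    "SelectKBest": make_pipeline(SelectKBest(f_classif, k=20), model),
    "PCA": make_pipeline(PCA(n_components=20), model),
}
scores = {name: cross_val_score(p, X, y, cv=10) for name, p in pipelines.items()}

for name, s in scores.items():
    print(f"{name:12s} accuracy: {s.mean():.3f} +/- {s.std():.3f}")
_, pval = friedmanchisquare(*scores.values())
print(f"Friedman test p-value: {pval:.3f}")  # large p => no significant difference
```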

Fairness holds a pivotal role in the realm of machine learning, particularly when it comes to addressing groups categorised by protected attributes, e.g., gender or race. Prevailing algorithms in fair learning predominantly hinge on access to, or estimates of, these protected attributes, at least during training. We design a single group-blind projection map that aligns the feature distributions of both groups in the source data, achieving (demographic) group parity without requiring values of the protected attribute for individual samples, either when computing the map or when applying it. Instead, our approach utilises the feature distributions of the privileged and unprivileged groups in a broader population, together with the essential assumption that the source data are an unbiased representation of that population. We present numerical results on synthetic and real data.

We first present a simple recursive algorithm that generates cyclic rotation Gray codes for stamp foldings and semi-meanders, where consecutive strings differ by a stamp rotation. These are the first known Gray codes for stamp foldings and semi-meanders, and we thus solve an open problem posed by Sawada and Li in [Electron. J. Comb. 19(2), 2012]. We then introduce an iterative algorithm that generates the same rotation Gray codes for stamp foldings and semi-meanders. The recursive and iterative algorithms generate stamp foldings and semi-meanders in constant amortized time and in $O(n)$ amortized time per string, respectively, using a linear amount of memory.

In this paper, we propose high order numerical methods to solve a 2D advection diffusion equation in the highly oscillatory regime. We use an integrator strategy that allows the construction of arbitrarily high-order schemes, leading to an accurate approximation of the solution without any time step-size restriction. This paper focuses on the multiscale challenges in time of the problem, which come from the velocity, an $\varepsilon$-periodic function whose expression is explicitly known. $\varepsilon$-uniform third order in time numerical approximations are obtained. For the space discretization, this strategy is combined with high order finite difference schemes. Numerical experiments show that the proposed methods achieve the expected order of accuracy, as validated by several tests across diverse domains and boundary conditions. The novelty of the paper consists in introducing a numerical scheme that is high order accurate in space and time, with particular attention to the dependency on a small parameter in the time scale. The high order in space is obtained by enlarging the interpolation stencil already established in [44], and further refined in [46], with a special emphasis on the square boundary, especially when a Dirichlet condition is assigned. In that case, we compute an \textit{ad hoc} Taylor expansion of the solution to ensure that there is no degradation of the accuracy order at the boundary. On the other hand, the high accuracy in time is obtained by extending the work proposed in [19]. The combination of high-order accuracy in both space and time is particularly significant due to the presence of two small parameters, $\delta$ and $\varepsilon$, in space and time, respectively.
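
For context, a naive reference discretization of a 1D model problem with an $\varepsilon$-periodic velocity (pseudo-spectral in space, classical RK4 in time) shows why $\varepsilon$-uniform schemes matter: without them, the step size must resolve the fast scale. This baseline is explicitly not the paper's method; the coefficients and domain are illustrative assumptions.

```python
import numpy as np

# Naive baseline for u_t + a(t/eps) u_x = delta * u_xx on a periodic domain.
# RK4 must take dt ~ eps to resolve the oscillatory velocity, which is the
# restriction that eps-uniform integrators are designed to remove.

N, eps, delta, T = 128, 1e-3, 1e-2, 0.5
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # integer wavenumbers
u = np.exp(-10 * (x - np.pi) ** 2)                   # initial bump

a = lambda s: 1.0 + 0.5 * np.cos(2 * np.pi * s)      # eps-periodic velocity

def rhs(t, u):
    uh = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * uh))           # spectral first derivative
    uxx = np.real(np.fft.ifft(-(k ** 2) * uh))       # spectral second derivative
    return -a(t / eps) * ux + delta * uxx

dt, t = eps / 20, 0.0                                # dt resolves the fast scale
while t < T:
    k1 = rhs(t, u)
    k2 = rhs(t + dt / 2, u + dt / 2 * k1)
    k3 = rhs(t + dt / 2, u + dt / 2 * k2)
    k4 = rhs(t + dt, u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
```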

In this paper we develop a novel neural network model for predicting the implied volatility surface. Prior financial domain knowledge is taken into account. A new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage, the boundaries, and the asymptotic slope, are embedded into the loss function. This is one of the very first studies to propose a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms the benchmark models on S&P 500 index option data spanning 20 years. More importantly, the domain knowledge is satisfied empirically, showing that the model is consistent with existing financial theories and conditions related to the implied volatility surface.
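
As a rough sketch of how such domain knowledge can enter both the architecture and the loss, the hypothetical PyTorch fragment below uses a convex "smile-like" activation on the moneyness branch and a soft penalty for one no-arbitrage condition (total implied variance non-decreasing in maturity). The layer sizes, activation form, and penalty are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class SmileActivation(nn.Module):
    """Hypothetical convex 'smile' activation: a*x^2 + softplus(x)."""
    def forward(self, x):
        return 0.1 * x**2 + nn.functional.softplus(x)

class IVSurfaceNet(nn.Module):
    """sigma(m, tau): separate branches for log-moneyness m and maturity tau."""
    def __init__(self, hidden=32):
        super().__init__()
        self.m_branch = nn.Sequential(nn.Linear(1, hidden), SmileActivation())
        self.t_branch = nn.Sequential(nn.Linear(1, hidden), nn.Softplus())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, m, tau):
        h = torch.cat([self.m_branch(m), self.t_branch(tau)], dim=-1)
        return self.head(h)                        # positive implied volatility

def loss_fn(model, m, tau, iv_obs, penalty=1.0):
    fit = ((model(m, tau) - iv_obs) ** 2).mean()
    tau_g = tau.clone().requires_grad_(True)
    w = model(m, tau_g) ** 2 * tau_g               # total implied variance
    dw = torch.autograd.grad(w.sum(), tau_g, create_graph=True)[0]
    return fit + penalty * torch.relu(-dw).mean()  # calendar-arbitrage penalty

# toy training loop on synthetic smile data
m, tau = torch.randn(256, 1), torch.rand(256, 1) + 0.05
iv_obs = 0.2 + 0.1 * m**2 / (1 + tau)
model = IVSurfaceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss_fn(model, m, tau, iv_obs).backward()
    opt.step()
```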
