
Many materials processes and properties depend on the anisotropy of the energy of grain boundaries, i.e. on the fact that this energy is a function of the five geometric degrees of freedom (DOF) of the grain boundaries. To access this parameter space in an efficient way and discover energy cusps in unexplored regions, a method was recently established which combines atomistic simulations with statistical methods (DOI: 10.1002/adts.202100615). This sequential sampling technique is now extended in the spirit of an active-learning algorithm by adding a criterion to decide when the sampling is advanced enough to stop. To this end, two parameters for analysing the sampling results on the fly are introduced: the number of cusps, which correspond to the most interesting and important regions of the energy landscape, and the maximum change of energy between two sequential iterations. Monitoring these two quantities provides valuable insight into how the subspaces are energetically structured. The combination of both parameters provides the information necessary to evaluate the sampling of the 2D subspaces of grain-boundary-plane inclinations of even non-periodic, low-angle grain boundaries. With a reasonable number of data points in the initial design, only a few sequential iterations already improve the accuracy of the sampling substantially, and the new algorithm outperforms regular high-throughput sampling.
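The stopping criterion described above can be sketched in a few lines. The function names, the cusp-counting heuristic (local minima on a 1D energy profile), and the tolerance below are illustrative assumptions, not the published implementation:

```python
import numpy as np

def count_cusps(energies):
    """Count local minima (cusps) along a 1D energy profile.
    Illustrative heuristic; the real subspaces are 2D."""
    e = np.asarray(energies, dtype=float)
    return int(np.sum((e[1:-1] < e[:-2]) & (e[1:-1] < e[2:])))

def should_stop(prev_energies, new_energies, tol=1e-3):
    """Stop the sequential sampling once (a) the cusp count no longer
    changes between iterations and (b) the maximum pointwise energy
    change drops below a tolerance (hypothetical value)."""
    prev = np.asarray(prev_energies, dtype=float)
    new = np.asarray(new_energies, dtype=float)
    same_cusps = count_cusps(prev) == count_cusps(new)
    max_change = np.max(np.abs(new - prev))
    return same_cusps and max_change < tol
```

In practice the two quantities would be evaluated on the sampled subspace grid, and the tolerance tuned to the energy scale of the boundaries of interest.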


We consider the problem of estimating the roughness of the volatility in a stochastic volatility model that arises as a nonlinear function of fractional Brownian motion with drift. To this end, we introduce a new estimator that measures the so-called roughness exponent of a continuous trajectory, based on discrete observations of its antiderivative. We provide conditions on the underlying trajectory under which our estimator converges in a strictly pathwise sense. Then we verify that these conditions are satisfied by almost every sample path of fractional Brownian motion (with drift). As a consequence, we obtain strong consistency theorems in the context of a large class of rough volatility models. Numerical simulations show that our estimation procedure performs well after passing to a scale-invariant modification of our estimator.
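For intuition, a generic scaling estimator of a Hurst-type roughness exponent can be written as a log-log regression of mean-squared increments over dyadic lags. This is a standard textbook illustration, not the antiderivative-based estimator constructed in the paper:

```python
import numpy as np

def roughness_exponent(x, n_scales=6):
    """Estimate a Hurst-type roughness exponent of a sampled path by
    regressing log mean-squared increments against log lag, using
    E[(X_{t+s} - X_t)^2] ~ s^{2H}. Generic illustration only."""
    x = np.asarray(x, dtype=float)
    log_s, log_v = [], []
    for k in range(n_scales):
        s = 2 ** k
        inc = x[s:] - x[:-s]
        log_s.append(np.log(s))
        log_v.append(np.log(np.mean(inc ** 2)))
    slope, _ = np.polyfit(log_s, log_v, 1)
    return slope / 2.0
```

For Brownian motion the estimate should be close to 1/2, while trajectories of rough volatility models would yield values well below 1/2.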

A variant of the standard notion of branching bisimilarity for processes with discrete relative timing is proposed which is coarser than the standard notion. Using a version of ACP (Algebra of Communicating Processes) with abstraction for processes with discrete relative timing, it is shown that the proposed variant allows both the functional correctness and the performance properties of the PAR (Positive Acknowledgement with Retransmission) protocol to be analyzed. In the version of ACP concerned, the difference between the standard notion of branching bisimilarity and its proposed variant is characterized by a single axiom schema.

This article revisits the fundamental problem of parameter selection for Gaussian process interpolation. By choosing the mean and the covariance functions of a Gaussian process within parametric families, the user obtains a family of Bayesian procedures to perform predictions about the unknown function, and must choose a member of the family that will hopefully provide good predictive performance. We base our study on the general concept of scoring rules, which provides an effective framework for building leave-one-out selection and validation criteria, and a notion of extended likelihood criteria based on an idea proposed by Fasshauer and co-authors in 2009, which makes it possible to recover standard selection criteria such as, for instance, the generalized cross-validation criterion. Under this setting, we empirically show on several test problems from the literature that the choice of an appropriate family of models is often more important than the choice of a particular selection criterion (e.g., the likelihood versus a leave-one-out selection criterion). Moreover, our numerical results show that the regularity parameter of a Matérn covariance can be selected effectively by most selection criteria.
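The leave-one-out criteria mentioned above admit a cheap closed form: for a zero-mean GP with kernel matrix K, the LOO residuals are e_i = (K^{-1}y)_i / (K^{-1})_{ii}. A minimal sketch, using a squared-exponential lengthscale as a stand-in for the Matérn regularity parameter (names and jitter value are illustrative):

```python
import numpy as np

def loo_sq_error(K, y):
    """Mean leave-one-out squared prediction error for a zero-mean GP,
    via the identity e_i = (K^{-1} y)_i / (K^{-1})_{ii}."""
    Kinv = np.linalg.inv(K)
    resid = (Kinv @ y) / np.diag(Kinv)
    return float(np.mean(resid ** 2))

def select_lengthscale(x, y, candidates):
    """Pick the squared-exponential lengthscale with the smallest LOO
    error; a stand-in for selecting the Matern regularity parameter."""
    best, best_err = None, np.inf
    for ell in candidates:
        d = x[:, None] - x[None, :]
        K = np.exp(-0.5 * (d / ell) ** 2) + 1e-8 * np.eye(len(x))
        err = loo_sq_error(K, y)
        if err < best_err:
            best, best_err = ell, err
    return best
```

The same loop structure works for any scoring rule: only the per-candidate criterion changes.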

The application of deep learning to non-stationary temporal datasets can lead to overfitted models that underperform under regime changes. In this work, we propose a modular machine learning pipeline for ranking predictions on temporal panel datasets which is robust under regime changes. The modularity of the pipeline allows the use of different models, including Gradient Boosting Decision Trees (GBDTs) and Neural Networks, with and without feature engineering. We evaluate our framework on financial data for stock portfolio prediction, and find that GBDT models with dropout display high performance, robustness and generalisability with reduced complexity and computational cost. We then demonstrate how online learning techniques, which require no retraining of models, can be used post-prediction to enhance the results. First, we show that dynamic feature projection improves robustness by reducing drawdown in regime changes. Second, we demonstrate that dynamical model ensembling based on selection of models with good recent performance leads to improved Sharpe and Calmar ratios of out-of-sample predictions. We also evaluate the robustness of our pipeline across different data splits and random seeds with good reproducibility.
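The recency-based ensembling step can be sketched as follows; `recent_sharpe`, the window length, and `top_k` are hypothetical choices, not the paper's configuration:

```python
import numpy as np

def recent_sharpe(returns, window=60):
    """Sharpe ratio over the most recent window of returns."""
    r = np.asarray(returns, dtype=float)[-window:]
    sd = r.std()
    return r.mean() / sd if sd > 0 else 0.0

def ensemble_predictions(model_returns, model_preds, top_k=2, window=60):
    """Average the predictions of the top_k models ranked by recent
    Sharpe ratio; no retraining is required, only re-weighting."""
    scores = [recent_sharpe(r, window) for r in model_returns]
    top = np.argsort(scores)[::-1][:top_k]
    return np.mean([model_preds[i] for i in top], axis=0)
```

Because selection uses only realized returns, the ensemble adapts after a regime change as soon as the recent window reflects the new regime.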

Forecasting of renewable energy generation provides key insights which may help with decision-making towards global decarbonisation. Renewable energy generation can often be represented through cross-sectional hierarchies, whereby a single farm may have multiple individual generators. Hierarchical forecasting through reconciliation has demonstrated a significant increase in the quality of forecasts both theoretically and empirically. However, it is not evident whether forecasts generated by individual temporal and cross-sectional aggregation can be superior to integrated cross-temporal forecasts and to individual forecasts on more granular data. In this study, we investigate the accuracies of different cross-sectional and cross-temporal reconciliation methods using both linear regression and gradient boosting machine learning for forecasting wind farm power generation. We find that cross-temporal reconciliation is superior to individual cross-sectional reconciliation at multiple temporal aggregations. Cross-temporally reconciled machine learning base forecasts also demonstrate high accuracy at coarser temporal granularities, which may encourage adoption for short-term wind forecasts. We also show that linear regression can outperform machine learning models across most levels in cross-sectional wind time series.
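OLS projection is one standard cross-sectional reconciliation method: base forecasts for all hierarchy levels are projected onto the coherent subspace spanned by the summing matrix S. For a hypothetical farm total over two generators, S = [[1, 1], [1, 0], [0, 1]]. A minimal sketch:

```python
import numpy as np

def ols_reconcile(S, y_hat):
    """OLS (orthogonal projection) reconciliation:
    y_tilde = S (S^T S)^{-1} S^T y_hat, so the reconciled forecasts
    satisfy the aggregation constraints exactly."""
    S = np.asarray(S, dtype=float)
    P = S @ np.linalg.inv(S.T @ S) @ S.T
    return P @ np.asarray(y_hat, dtype=float)
```

Already-coherent base forecasts pass through unchanged; incoherent ones are mapped to the nearest coherent vector in the Euclidean sense.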

Text summarization is an essential task in natural language processing, and researchers have developed various approaches over the years, ranging from rule-based systems to neural networks. However, there is no single model or approach that performs well on every type of text. We propose a system that recommends the most suitable summarization model for a given text. The proposed system employs a fully connected neural network that analyzes the input content and predicts which summarizer will achieve the highest ROUGE score on that input. The meta-model selects among four different summarization models, developed for the Slovene language, using different properties of the input, in particular its Doc2Vec document representation. The four Slovene summarization models deal with different challenges associated with text summarization in a less-resourced language. We evaluate the proposed SloMetaSum model automatically and parts of it manually. The results show that the system successfully automates the step of manually selecting the best model.
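At inference time, the meta-model's selection step reduces to a forward pass and an argmax over per-summarizer scores. The weights, layer sizes, and summarizer names below are hypothetical placeholders, not the SloMetaSum implementation:

```python
import numpy as np

def meta_select(doc_vec, W1, b1, W2, b2, model_names):
    """One hidden layer maps a Doc2Vec document vector to a predicted
    ROUGE score per summarizer; the argmax picks the recommended model."""
    h = np.maximum(0.0, doc_vec @ W1 + b1)  # ReLU hidden layer
    scores = h @ W2 + b2                    # one score per summarizer
    return model_names[int(np.argmax(scores))]
```

The real meta-model would be trained on (document vector, per-summarizer ROUGE) pairs so that the predicted scores track the actual ROUGE ranking.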

The purpose of this paper is to introduce a new numerical method to solve multi-marginal optimal transport problems with pairwise interaction costs. The complexity of multi-marginal optimal transport generally scales exponentially in the number of marginals $m$. We introduce a one parameter family of cost functions that interpolates between the original and a special cost function for which the problem's complexity scales linearly in $m$. We then show that the solution to the original problem can be recovered by solving an ordinary differential equation in the parameter $\epsilon$, whose initial condition corresponds to the solution for the special cost function mentioned above; we then present some simulations, using both explicit Euler and explicit higher order Runge-Kutta schemes to compute solutions to the ODE, and, as a result, the multi-marginal optimal transport problem.
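The continuation idea can be sketched generically with an explicit Euler stepper in the interpolation parameter; the state and right-hand side here are placeholders for the transport potentials/couplings and their $\epsilon$-derivative, not the paper's actual ODE:

```python
import numpy as np

def euler_continuation(f, x0, eps_grid):
    """Explicit-Euler continuation: starting from the known solution x0
    at eps_grid[0] (the easy, linearly-scaling cost), step the ODE
    dx/deps = f(eps, x) toward eps_grid[-1] (the original cost)."""
    x = np.asarray(x0, dtype=float)
    for e0, e1 in zip(eps_grid[:-1], eps_grid[1:]):
        x = x + (e1 - e0) * f(e0, x)
    return x
```

Replacing the Euler step with a higher-order Runge-Kutta update trades more right-hand-side evaluations per step for a smaller discretization error, matching the two scheme families compared in the paper.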

In this paper we propose a new finite element discretization for the two-field formulation of poroelasticity which uses the elastic displacement and the pore pressure as primary variables. The main goal is to develop a numerical method with small problem sizes that still achieves key features such as parameter-robustness, local mass conservation, and robust preconditioner construction. To this end, we combine a nonconforming finite element method and the interior over-stabilized enriched Galerkin method with a suitable stabilization term. Robust a priori error estimates and parameter-robust preconditioner construction are proved, and numerical results illustrate our theoretical findings.

A population-averaged additive subdistribution hazards model is proposed to assess the marginal effects of covariates on the cumulative incidence function and to analyze correlated failure time data subject to competing risks. This approach extends the population-averaged additive hazards model by accommodating potentially dependent censoring due to competing events other than the event of interest. Assuming an independent working correlation structure, an estimating equations approach is outlined to estimate the regression coefficients and a new sandwich variance estimator is proposed. The proposed sandwich variance estimator accounts for both the correlations between failure times and between the censoring times, and is robust to misspecification of the unknown dependency structure within each cluster. We further develop goodness-of-fit tests to assess the adequacy of the additive structure of the subdistribution hazards for the overall model and each covariate. Simulation studies are conducted to investigate the performance of the proposed methods in finite samples. We illustrate our methods using data from the STrategies to Reduce Injuries and Develop confidence in Elders (STRIDE) trial.
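The sandwich form itself is generic: the "bread" is the derivative of the estimating equations at the solution, and the "meat" collects per-cluster score contributions, which is what makes the variance robust to misspecified within-cluster dependence. A minimal sketch of that generic form (not the paper's estimator, which additionally accounts for the censoring correlations):

```python
import numpy as np

def sandwich_variance(score_contribs, bread):
    """Cluster-robust sandwich variance A^{-1} B A^{-T}: 'bread' A is
    the derivative of the estimating equations; 'meat' B sums outer
    products of per-cluster score contributions."""
    A_inv = np.linalg.inv(np.asarray(bread, dtype=float))
    B = sum(np.outer(u, u) for u in score_contribs)
    return A_inv @ B @ A_inv.T
```

With independent clusters and correctly specified equations, this reduces to the usual model-based variance; under dependence, only the meat changes.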

Stochastic processes have found numerous applications in science, as they are broadly used to model a variety of natural phenomena. Due to their intrinsic randomness and uncertainty, they are, however, difficult to characterize. Here, we introduce an unsupervised machine learning approach to determine the minimal set of parameters required to effectively describe the dynamics of a stochastic process. Our method builds upon an extended $\beta$-variational autoencoder architecture. By means of simulated datasets corresponding to paradigmatic diffusion models, we showcase its effectiveness in extracting the minimal relevant parameters that accurately describe these dynamics. Furthermore, the method enables the generation of new trajectories that faithfully replicate the expected stochastic behavior. Overall, our approach enables the autonomous discovery of unknown parameters describing stochastic processes, hence enhancing our comprehension of complex phenomena across various fields.
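The $\beta$-VAE objective underlying such an architecture is the reconstruction error plus $\beta$ times the KL divergence of the diagonal-Gaussian posterior from the standard normal prior; raising $\beta$ is what pressures the latent code toward a minimal set of informative dimensions. A minimal numpy sketch of the loss (not the paper's network or training loop):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: MSE reconstruction term plus beta times the
    closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    averaged over the batch. beta=4.0 is an illustrative value."""
    recon = np.mean((x - x_recon) ** 2)
    kl = 0.5 * np.mean(
        np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
    )
    return recon + beta * kl
```

Latent dimensions whose posterior collapses to the prior (mu near 0, log_var near 0) contribute no KL cost and can be read off as uninformative, which is how the minimal parameter set is identified.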
