
A model for corrosion-induced cracking of reinforced concrete subjected to non-uniform chloride-induced corrosion is presented. The gradual corrosion initiation of the steel surface is investigated by simulating chloride transport considering binding. The transport of iron from the steel surface, its subsequent precipitation into rust, and the associated precipitation-induced pressure are explicitly modelled. Model results, obtained through finite element simulations, agree very well with experimental data, showing significantly improved accuracy over uniform corrosion modelling. The results obtained from case studies reveal that crack-facilitated transport of chlorides cannot be neglected, that the size of the anodic region must be considered, and that precipitate accumulation in pores can take years.
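The chloride-transport ingredient can be illustrated with a minimal 1D Fickian diffusion scheme for chloride ingress through the concrete cover. This is a toy sketch: binding, cracking, and the corrosion coupling described in the abstract are omitted, and the diffusivity value is purely illustrative.

```python
import numpy as np

# Explicit finite-difference solution of 1D Fickian chloride ingress
# into a concrete cover. Binding and crack effects are omitted here.
D = 1e-11             # apparent diffusivity, m^2/s (illustrative value)
L, nx = 0.05, 51      # 50 mm cover depth, number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D  # stable explicit time step (D*dt/dx^2 < 0.5)

c = np.zeros(nx)      # normalized chloride concentration profile
c[0] = 1.0            # constant surface chloride (Dirichlet boundary)

for _ in range(2000):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0] = 1.0        # reapply the surface boundary condition
```

The resulting profile decreases monotonically with depth; the time at which the concentration at the rebar depth exceeds a threshold would mark corrosion initiation in such a simplified picture.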

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition offers the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
January 31, 2024

Data generation remains a bottleneck in training surrogate models to predict molecular properties. We demonstrate that multitask Gaussian process regression overcomes this limitation by leveraging both expensive and cheap data sources. In particular, we consider training sets constructed from coupled-cluster (CC) and density functional theory (DFT) data. We report that multitask surrogates can predict at CC-level accuracy while reducing the data generation cost by over an order of magnitude. Of note, our approach allows the training set to include DFT data generated by a heterogeneous mix of exchange-correlation functionals without imposing any artificial hierarchy on functional accuracy. More generally, the multitask framework can accommodate a wider range of training set structures -- including full disparity between the different levels of fidelity -- than existing kernel approaches based on $\Delta$-learning, though we show that the accuracy of the two approaches can be similar. Consequently, multitask regression can be a tool for reducing data generation costs even further by opportunistically exploiting existing data sources.
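The multitask idea can be sketched with a minimal two-fidelity Gaussian process using an intrinsic coregionalization kernel: a shared input kernel scaled by an inter-task covariance. The functions, correlation value, and data sizes below are toy stand-ins, not the paper's setup.

```python
import numpy as np

# Two-task GP (intrinsic coregionalization): task 0 is a cheap, biased
# surrogate ("DFT-like"), task 1 the expensive target ("CC-like").
def rbf(x1, x2, ls=1.0):
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / ls) ** 2)

def mt_kernel(x1, t1, x2, t2, B, ls=1.0):
    # K[i, j] = B[t1[i], t2[j]] * k_rbf(x1[i], x2[j])
    return B[np.ix_(t1, t2)] * rbf(x1, x2, ls)

B = np.array([[1.0, 0.9], [0.9, 1.0]])      # inter-task covariance

x_cheap = np.linspace(0.0, 3.0, 20)         # many cheap evaluations
x_exp = np.array([0.0, 1.0, 2.0, 3.0])      # few expensive evaluations
y = np.concatenate([np.sin(x_cheap) + 0.1 * np.cos(2 * x_cheap),  # biased
                    np.sin(x_exp)])                               # exact
X = np.concatenate([x_cheap, x_exp])
T = np.concatenate([np.zeros(20, int), np.ones(4, int)])

K = mt_kernel(X, T, X, T, B) + 1e-6 * np.eye(len(X))
x_star, t_star = np.array([1.5]), np.array([1])
k_star = mt_kernel(x_star, t_star, X, T, B)
pred = (k_star @ np.linalg.solve(K, y))[0]  # posterior mean at "CC" level
```

The posterior mean at the expensive level borrows strength from the dense cheap data through the off-diagonal entries of `B`, which is what makes the heterogeneous training set useful.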

It is well known that decision-making problems arising in stochastic control can be formulated by means of forward-backward stochastic differential equations (FBSDEs). Recently, Ji et al. (2022) proposed an efficient deep learning-based algorithm built on the stochastic maximum principle (SMP). In this paper, we provide a convergence result for this deep SMP-BSDE algorithm and compare its performance with other existing methods. In particular, by adopting a strategy similar to that of Han and Long (2020), we derive an a posteriori error estimate and show that the total approximation error can be bounded by the value of the loss functional and the discretization error. We present numerical examples for high-dimensional stochastic control problems, covering both drift and diffusion control, which showcase superior performance compared to existing algorithms.
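The forward discretization underlying such deep (F)BSDE solvers is a plain Euler-Maruyama scheme. A minimal sketch, using geometric Brownian motion because its mean is known in closed form (all parameters are illustrative):

```python
import numpy as np

# Euler-Maruyama simulation of dX = mu*X dt + sigma*X dW, the forward
# SDE step used in deep (F)BSDE solvers. E[X_T] = X0*exp(mu*T) is known.
rng = np.random.default_rng(0)
mu, sigma, X0, T = 0.05, 0.2, 1.0, 1.0
n_steps, n_paths = 50, 20000
dt = T / n_steps

X = np.full(n_paths, X0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X = X + mu * X * dt + sigma * X * dW

mc_mean = X.mean()            # Monte Carlo estimate of E[X_T]
exact = X0 * np.exp(mu * T)   # closed-form mean for comparison
```

In the full algorithm, neural networks parameterize the control (and the adjoint processes), and the discretization error of exactly this kind of scheme enters the a posteriori error bound.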

Chemical reactivity models are developed to predict chemical reaction outcomes in the form of classification (success/failure) or regression (product yield) tasks. The vast majority of the reported models are trained solely on chemical information such as reactants, products, reagents, and solvents, but not on the details of a synthetic protocol. Herein, we present the incorporation of procedural text with the aim of augmenting the Graphormer reactivity model and improving its accuracy. Two major approaches are used: training an adapter Graphormer model that is provided with a GPT-2-derived latent representation of the text procedure (ReacLLaMA-Adapter), and labeling an unlabeled part of a dataset with the LLaMA 2 model followed by training the Graphormer on the extended dataset (Zero-Shot Labeling ReacLLaMA). Both methodologies enhance the discernment of unpromising reactions, thereby providing more accurate models with improved specificity.
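The zero-shot labeling workflow can be sketched in a few lines: an LLM assigns success/failure labels to unlabeled procedures, and the reactivity model then trains on the union of human-labeled and pseudo-labeled data. The `llm_label` function below is a hypothetical heuristic stub, not the actual LLaMA 2 call, and the reaction strings are invented.

```python
# Sketch of zero-shot pseudo-labeling for dataset extension.
# `llm_label` is a hypothetical stand-in for the LLM labeler.
def llm_label(procedure_text):
    # Toy heuristic: a mention of "no product" marks a failed reaction.
    return 0 if "no product" in procedure_text.lower() else 1

labeled = [("rxn A", 1), ("rxn B", 0)]                 # human-labeled pairs
unlabeled = ["Stirred 2 h, no product observed.",
             "Heated to 80 C, isolated 85% yield."]

pseudo = [(text, llm_label(text)) for text in unlabeled]
training_set = labeled + pseudo                        # extended dataset
```

The Graphormer would then be trained on `training_set`; the value of the scheme rests entirely on the labeler's zero-shot accuracy.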

We present an overview of recent developments on the convergence analysis of numerical methods for inviscid multidimensional compressible flows that preserve underlying physical structures. We introduce the concept of generalized solutions, the so-called dissipative solutions, and explain their relationship to other commonly used solution concepts. In numerical experiments we apply K-convergence of numerical solutions and approximate turbulent solutions together with the Reynolds stress defect and the energy defect.
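The idea behind K-convergence is that a sequence of numerical solutions may oscillate without converging, while its Cesàro (arithmetic) averages do converge. A scalar toy illustration, far simpler than the compressible-flow setting:

```python
import numpy as np

# K-convergence replaces a possibly oscillating sequence of approximations
# by its Cesaro averages. Toy example: u_n = (-1)^n does not converge,
# but its running arithmetic averages tend to 0.
n = np.arange(1, 1001)
u = (-1.0) ** n               # oscillating "numerical solutions"
cesaro = np.cumsum(u) / n     # Cesaro averages of the first n terms
```

For the actual flow approximations, the same averaging is applied to sequences of discrete solutions, and the Reynolds stress defect measures what is lost in the weak limit.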

A popular method for variance reduction in observational causal inference is propensity-based trimming, the practice of removing units with extreme propensities from the sample. This practice has theoretical grounding when the data are homoscedastic and the propensity model is parametric (Yang and Ding, 2018; Crump et al. 2009), but in modern settings where heteroscedastic data are analyzed with non-parametric models, existing theory fails to support current practice. In this work, we address this challenge by developing new methods and theory for sample trimming. Our contributions are three-fold: first, we describe novel procedures for selecting which units to trim. Our procedures differ from previous work in that we trim not only units with small propensities, but also units with extreme conditional variances. Second, we give new theoretical guarantees for inference after trimming. In particular, we show how to perform inference on the trimmed subpopulation without requiring that our regressions converge at parametric rates. Instead, we make only fourth-root rate assumptions like those in the double machine learning literature. This result applies to conventional propensity-based trimming as well and thus may be of independent interest. Finally, we propose a bootstrap-based method for constructing simultaneously valid confidence intervals for multiple trimmed sub-populations, which are valuable for navigating the trade-off between sample size and variance reduction inherent in trimming. We validate our methods in simulation, on the 2007-2008 National Health and Nutrition Examination Survey, and on a semi-synthetic Medicare dataset and find promising results in all settings.
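A minimal sketch of the trimming idea: drop units whose estimated propensities are extreme and, additionally, units whose estimated conditional variances are extreme. The thresholds, data, and variance rule below are illustrative stand-ins, not the paper's exact procedure.

```python
import numpy as np

# Illustrative trimming rule combining propensity and variance criteria.
rng = np.random.default_rng(0)
n = 1000
e_hat = rng.uniform(0.01, 0.99, n)   # stand-in estimated propensities
v_hat = rng.gamma(2.0, 1.0, n)       # stand-in conditional variances

alpha = 0.1                          # propensity trimming threshold
keep = ((e_hat > alpha) & (e_hat < 1 - alpha)
        & (v_hat < np.quantile(v_hat, 0.95)))  # drop top-5% variances
trimmed_sample = np.flatnonzero(keep)
```

Inference would then target the trimmed subpopulation defined by `keep`, which is why post-trimming guarantees of the kind developed in the paper are needed.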

We consider time-harmonic scalar transmission problems between dielectric and dispersive materials with generalized Lorentz frequency laws. For certain frequency ranges such equations involve a sign-change in their principal part. Due to the resulting loss of coercivity properties, the numerical simulation of such problems is demanding. Furthermore, the related eigenvalue problems are nonlinear and give rise to additional challenges. We present a new finite element method for both of these types of problems, which is based on a weakly coercive reformulation of the PDE. The new scheme can handle $C^{1,1}$-interfaces composed piecewise of elementary geometries. Neglecting quadrature errors, the method allows for a straightforward convergence analysis. In our implementation we apply a simple but nonstandard quadrature rule to achieve negligible quadrature errors. We present computational experiments in 2D and 3D for both source and eigenvalue problems which confirm the stability and convergence of the new scheme.

Treatment approaches for colorectal cancer (CRC) are highly dependent on the molecular subtype, as immunotherapy has shown efficacy in cases with microsatellite instability (MSI) but is ineffective for the microsatellite stable (MSS) subtype. There is promising potential in utilizing deep neural networks (DNNs) to automate the differentiation of CRC subtypes by analyzing Hematoxylin and Eosin (H\&E) stained whole-slide images (WSIs). Due to the extensive size of WSIs, Multiple Instance Learning (MIL) techniques are typically explored. However, existing MIL methods focus on identifying the most representative image patches for classification, which may result in the loss of critical information. Additionally, these methods often overlook clinically relevant information, like the tendency for MSI class tumors to predominantly occur on the proximal (right side) colon. We introduce `CIMIL-CRC', a DNN framework that: 1) solves the MSI/MSS MIL problem by efficiently combining a pre-trained feature extraction model with principal component analysis (PCA) to aggregate information from all patches, and 2) integrates clinical priors, particularly the tumor location within the colon, into the model to enhance patient-level classification accuracy. We assessed our CIMIL-CRC method using the average area under the curve (AUC) from a 5-fold cross-validation experimental setup for model development on the TCGA-CRC-DX cohort, contrasting it with a baseline patch-level classification, a MIL-only approach, and a clinically informed patch-level classification approach. Our CIMIL-CRC outperformed all methods (AUROC: $0.92\pm0.002$ (95\% CI 0.91-0.92), vs. $0.79\pm0.02$ (95\% CI 0.76-0.82), $0.86\pm0.01$ (95\% CI 0.85-0.88), and $0.87\pm0.01$ (95\% CI 0.86-0.88), respectively). The improvement was statistically significant.
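The aggregation step can be sketched as follows: project all patch features of a slide onto their top principal components, pool the result into a fixed-size vector, and append the clinical prior (tumor side). The feature sizes and the specific pooling choice are illustrative, not the published CIMIL-CRC pipeline.

```python
import numpy as np

# PCA-based aggregation over ALL patch features of a slide, concatenated
# with a clinical prior (right-sided tumor indicator). Illustrative only.
def slide_representation(patch_feats, tumor_right_sided, k=8):
    X = patch_feats - patch_feats.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:k].T                   # patch scores on top-k components
    pooled = np.concatenate([patch_feats.mean(axis=0),  # mean pooling
                             scores.std(axis=0)])       # spread per component
    return np.concatenate([pooled, [float(tumor_right_sided)]])

rng = np.random.default_rng(0)
patch_feats = rng.normal(size=(500, 64))    # 500 patches, 64-dim features
rep = slide_representation(patch_feats, tumor_right_sided=True)
```

Because every patch contributes to the component scores, no patch is discarded, which is the contrast with selection-based MIL drawn in the abstract.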

In this work, we determine the material parameters in the relaxed micromorphic generalized continuum model for a given periodic microstructure. This is achieved through a least-squares fitting of the total energy of the relaxed micromorphic homogeneous continuum to the total energy of the fully-resolved heterogeneous microstructure, governed by classical linear elasticity. The relaxed micromorphic model is a generalized continuum that utilizes the $\Curl$ of a micro-distortion field instead of its full gradient as in the classical micromorphic theory, leading to several advantages and differences. The most crucial advantage is that it operates between two well-defined scales. These scales are determined by linear elasticity with microscopic and macroscopic elasticity tensors, which respectively bound the stiffness of the relaxed micromorphic continuum from above and below. While the macroscopic elasticity tensor is established a priori through standard periodic first-order homogenization, the microscopic elasticity tensor remains to be determined. Additionally, the characteristic length parameter, associated with curvature measurement, controls the transition between the micro- and macro-scales. Both the microscopic elasticity tensor and the characteristic length parameter are here determined using a computational approach based on the least-squares fitting of energies. This process involves the consideration of an adequate number of quadratic deformation modes and different specimen sizes. We conduct a comparative analysis between the least-squares fitting results of the relaxed micromorphic model, the fitting of a skew-symmetric micro-distortion field (Cosserat-micropolar model), and the fitting of the classical micromorphic model with two different formulations for the curvature...
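The fitting step reduces, in the simplest possible caricature, to a linear least-squares problem: if the model energy of each deformation mode and specimen size is linear in the unknown parameters, those parameters can be recovered by matching against the fully resolved energies. The scalar energy model below is a stand-in, not the full tensorial problem.

```python
import numpy as np

# Toy least-squares fit of two scalar unknowns, a "microscopic stiffness"
# C_micro and a squared characteristic length Lc2, from energies of
# several deformation modes and specimen sizes. E = C_micro*a + Lc2*b
# is assumed linear in the unknowns for illustration.
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(12, 2))   # per-mode energy coefficients
theta_true = np.array([3.0, 0.25])        # (C_micro, Lc2) ground truth
E_target = A @ theta_true                 # "fully resolved" energies

theta_fit, *_ = np.linalg.lstsq(A, E_target, rcond=None)
```

In the actual procedure the energies come from finite element simulations of the heterogeneous microstructure, and the dependence on the parameters is generally nonlinear, requiring iterative least squares rather than a single `lstsq` solve.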

The notion of an e-value has been recently proposed as a possible alternative to critical regions and p-values in statistical hypothesis testing. In this paper we consider testing the nonparametric hypothesis of symmetry, introduce analogues for e-values of three popular nonparametric tests, define an analogue for e-values of Pitman's asymptotic relative efficiency, and apply it to the three nonparametric tests. We discuss limitations of our simple definition of asymptotic relative efficiency and list directions of further research.
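A minimal concrete e-value for the symmetry null can be built by betting on the signs of the observations: under symmetry about 0, sign(X) is a fair coin given |X|, so each factor below has expectation 1 and the product is a valid e-variable. This is a simple textbook-style construction for illustration, not necessarily one of the three tests studied in the paper.

```python
import numpy as np

# E-value for the null "X is symmetric about 0": each factor
# (1 + lam*sign(x_i)) has expectation 1 under the null for lam in [-1, 1],
# so the product is an e-variable (a test supermartingale).
def symmetry_evalue(x, lam=0.3):
    return float(np.prod(1.0 + lam * np.sign(x)))

rng = np.random.default_rng(0)
e_sym = symmetry_evalue(rng.normal(0.0, 1.0, 200))   # null holds
e_asym = symmetry_evalue(rng.normal(0.5, 1.0, 200))  # null violated
```

Large e-values constitute evidence against symmetry; the growth rate of the log e-value under alternatives is what an e-value analogue of Pitman efficiency would compare across tests.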

This paper addresses dimensionality reduction in a regression setting where the predictors are a mixture of a functional variable and a high-dimensional vector. A flexible model, combining sparse linear ideas with semiparametric ones, is proposed. A wide scope of asymptotic results is provided, covering both the rates of convergence of the estimators and the asymptotic behaviour of the variable selection procedure. Practical issues are analysed through finite-sample simulated experiments, while an application to Tecator's data illustrates the usefulness of our methodology.
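The sparse-linear ingredient can be sketched with a lasso fitted by cyclic coordinate descent with soft-thresholding. In the functional setting the design columns would be basis scores of the functional predictor alongside the vector covariates; here the design is purely synthetic.

```python
import numpy as np

# Lasso via cyclic coordinate descent with soft-thresholding.
def lasso_cd(X, y, lam, n_iter=200):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r
            beta[j] = (np.sign(rho) * max(abs(rho) - lam, 0.0)
                       / (X[:, j] @ X[:, j]))      # soft-threshold update
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[5] = 2.0, -3.0             # sparse ground truth
y = X @ beta_true + 0.1 * rng.normal(size=100)
beta_hat = lasso_cd(X, y, lam=10.0)
```

The penalty zeroes out irrelevant coordinates exactly, which is the variable-selection behaviour whose asymptotics the paper characterizes in the mixed functional/high-dimensional model.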
