
Defining the effect of the exposure of interest and selecting an appropriate estimation method are prerequisites for causal inference. Understanding of how the association between heatwaves (i.e., consecutive days of extremely high temperature) and an outcome depends on whether adjustment was made for temperature, and on how such adjustment was conducted, is limited. This paper aims to investigate this dependency, demonstrate that temperature is a confounder in heatwave-outcome associations, and introduce a new modeling approach to estimate the heatwave-outcome relation E[R(Y) | HW = 1, Z] / E[R(Y) | T = OT, Z], where HW is a daily binary variable indicating the presence of a heatwave; R(Y) is the risk of an outcome Y; T is a temperature variable; OT is the optimal temperature; and Z is a set of confounders that includes the usual confounders as well as certain forms of T as a confounder. We recommend characterization of heatwave-outcome relations and careful selection of modeling approaches to understand the impacts of heatwaves under climate change. We demonstrate our approach using real-world data for Seoul, which suggests that the total effect of heatwaves may be larger than what may be inferred from the extant literature. An R package, HEAT (Heatwave effect Estimation via Adjustment for Temperature), was developed and made publicly available.
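
The following is an illustrative sketch only, not the authors' HEAT package: one way to approximate a ratio of the form E[R(Y) | HW = 1, Z] / E[R(Y) | T = OT, Z] with a Poisson GLM that includes a heatwave indicator and a temperature spline. The column names (deaths, temp, hw, dow), the spline choice, and the crude proxy for OT are all assumptions.

```python
# Hypothetical sketch: heatwave rate ratio relative to the optimal temperature,
# using a Poisson GLM with a heatwave indicator and adjustment for temperature.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("seoul_daily.csv")           # hypothetical daily series
model = smf.glm(
    "deaths ~ hw + bs(temp, df=4) + C(dow)",  # HW indicator + temperature spline
    data=df,
    family=sm.families.Poisson(),
).fit()

# Numerator: predicted risk on heatwave days (at their observed temperatures).
num = model.predict(df.loc[df.hw == 1]).mean()

# Denominator: predicted risk at the optimal temperature OT without a heatwave.
ot = df.loc[model.predict(df.assign(hw=0)).idxmin(), "temp"]  # crude OT proxy
ref = df.loc[df.hw == 1].assign(hw=0, temp=ot)
den = model.predict(ref).mean()

print("heatwave rate ratio relative to OT:", num / den)
```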

Related content

Stress prediction in porous materials and structures is challenging due to the high computational cost associated with direct numerical simulations. Convolutional Neural Network (CNN) based architectures have recently been proposed as surrogates to approximate and extrapolate the solution of such multiscale simulations. These methodologies are usually limited to 2D problems due to the high computational cost of 3D voxel-based CNNs. We propose a novel geometric learning approach based on a Graph Neural Network (GNN) that efficiently deals with three-dimensional problems by performing convolutions over 2D surfaces only. Following our previous developments using pixel-based CNNs, we train the GNN to automatically add local fine-scale stress corrections to an inexpensively computed coarse stress prediction in the porous structure of interest. Our method is Bayesian and generates densities of stress fields, from which credible intervals may be extracted. As a second scientific contribution, we propose to improve the extrapolation ability of our network by deploying a strategy of online physics-based corrections. Specifically, at the inference stage, we condition the posterior predictions of our probabilistic model to satisfy partial equilibrium at the microscale. This is done using an Ensemble Kalman algorithm to ensure tractability of the Bayesian conditioning operation. We show that this methodology alleviates the effect of undesirable biases observed in the outputs of the uncorrected GNN and improves the accuracy of the predictions in general.
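
A minimal sketch of the ensemble Kalman conditioning step described above, assuming the GNN has already produced an ensemble of stress fields and that a linear operator B encodes the discretised microscale equilibrium residual (B @ sigma ≈ 0). The operator B, the noise level, and all shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def enkf_equilibrium_update(sigma_ens, B, obs_noise=1e-3):
    """Condition an ensemble of stress predictions on B @ sigma = 0.

    sigma_ens : (n_ens, n_dof) array of sampled stress fields
    B         : (n_constraints, n_dof) discrete equilibrium operator
    """
    n_ens = sigma_ens.shape[0]
    X = sigma_ens - sigma_ens.mean(axis=0)        # ensemble anomalies
    Y = X @ B.T                                   # anomalies in residual space
    C_yy = Y.T @ Y / (n_ens - 1) + obs_noise * np.eye(B.shape[0])
    C_xy = X.T @ Y / (n_ens - 1)
    K = C_xy @ np.linalg.solve(C_yy, np.eye(B.shape[0]))  # Kalman gain
    residual = sigma_ens @ B.T                    # current equilibrium violation
    return sigma_ens - residual @ K.T             # push ensemble toward B @ sigma = 0

# Usage with random placeholders:
rng = np.random.default_rng(0)
sigma_ens = rng.normal(size=(64, 200))            # 64 samples, 200 stress DOFs
B = rng.normal(size=(50, 200))                    # hypothetical equilibrium operator
corrected = enkf_equilibrium_update(sigma_ens, B)
```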

The flexoelectric effect, which couples polarization with strain gradients as well as strain with electric field gradients, is universal to dielectrics but, compared with piezoelectricity, is more difficult to harness because it requires field gradients and is a small-scale effect. These drawbacks can be overcome by suitably designing metamaterials made of a non-piezoelectric base material but exhibiting apparent piezoelectricity. We develop a theoretical and computational framework to perform topology optimization of the representative volume element of such metamaterials by accurately modeling the governing equations of flexoelectricity using a Cartesian B-spline method, describing the geometry with a level set, and resorting to genetic algorithms for the optimization. We consider a multi-objective optimization problem in which area fraction competes with four fundamental piezoelectric functionalities (stress/strain sensor/actuator). We computationally obtain Pareto fronts and discuss the different geometries depending on the apparent piezoelectric coefficient being optimized. In general, we find competitive estimates of apparent piezoelectricity compared with reference materials such as quartz and PZT ceramics. This opens the possibility of designing devices for sensing, actuation, and energy harvesting from a much wider, cheaper, and more effective class of materials.
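
A small illustrative sketch (not the authors' solver) of the multi-objective bookkeeping mentioned above: extracting a Pareto front from candidate unit-cell designs when area fraction is to be minimised while an apparent piezoelectric coefficient is to be maximised. The candidate designs and their objective values below are random placeholders.

```python
import numpy as np

def pareto_front(area_fraction, apparent_d):
    """Return indices of non-dominated designs.

    A design dominates another if it has lower (or equal) area fraction and a
    higher (or equal) apparent piezoelectric coefficient, strictly better in one.
    """
    n = len(area_fraction)
    keep = []
    for i in range(n):
        dominated = any(
            area_fraction[j] <= area_fraction[i]
            and apparent_d[j] >= apparent_d[i]
            and (area_fraction[j] < area_fraction[i] or apparent_d[j] > apparent_d[i])
            for j in range(n)
        )
        if not dominated:
            keep.append(i)
    return np.array(keep)

rng = np.random.default_rng(1)
af = rng.uniform(0.1, 0.9, size=200)                # area fractions of 200 designs
d_app = (1 - af) * rng.uniform(0.5, 1.5, size=200)  # mock apparent piezo coefficients
front = pareto_front(af, d_app)                     # indices of Pareto-optimal designs
```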

The use of the non-parametric Restricted Mean Survival Time (RMST) endpoint has grown in popularity as trialists look to analyse time-to-event outcomes without the restrictions of the proportional hazards assumption. In this paper, we evaluate the power and type I error rate of the parametric and non-parametric RMST estimators when the treatment effect is explained by multiple covariates, including an interaction term. Utilising the RMST estimator in this way allows the combined treatment effect to be summarised as a one-dimensional estimator, which is evaluated with a one-sided Z-test. The estimators are either fully specified or misspecified, in terms of either unaccounted-for covariates or misspecified knot points (where trials exhibit crossing survival curves). A placebo-controlled trial of Gamma interferon is used as a motivating example to simulate associated survival times. When correctly specified, the parametric RMST estimator has the greatest power, regardless of the time of analysis. The misspecified RMST estimator generally performs similarly when covariates mirror those of the fitted case-study dataset. However, as the magnitude of the unaccounted-for covariate increases, the power of the estimator decreases. In all cases, the non-parametric RMST estimator has the lowest power, and its power remains heavily dependent on the time of analysis (with later analysis times associated with greater power).
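
For readers unfamiliar with the endpoint, a minimal sketch of the non-parametric RMST (the area under the Kaplan-Meier curve up to a truncation time tau) and a one-sided Z-test for its difference between arms. The bootstrap standard error used here is an assumption for illustration; it is not the paper's variance estimator. Inputs are assumed to be NumPy arrays of durations and event indicators.

```python
import numpy as np
from scipy.stats import norm
from lifelines import KaplanMeierFitter

def rmst(durations, events, tau):
    """Area under the Kaplan-Meier curve up to the truncation time tau."""
    kmf = KaplanMeierFitter().fit(durations, events)
    sf = kmf.survival_function_
    times = np.clip(sf.index.to_numpy(dtype=float), None, tau)
    surv = sf.iloc[:, 0].to_numpy()
    # integrate the right-continuous step function S(t) on [0, tau]
    return np.sum(surv[:-1] * np.diff(times)) + surv[-1] * max(tau - times[-1], 0.0)

def one_sided_rmst_test(d1, e1, d0, e0, tau, n_boot=500, seed=0):
    """Z-test of H0: RMST(arm 1) <= RMST(arm 0), with a bootstrap standard error."""
    rng = np.random.default_rng(seed)
    diff = rmst(d1, e1, tau) - rmst(d0, e0, tau)
    boot = []
    for _ in range(n_boot):
        i1 = rng.integers(0, len(d1), len(d1))
        i0 = rng.integers(0, len(d0), len(d0))
        boot.append(rmst(d1[i1], e1[i1], tau) - rmst(d0[i0], e0[i0], tau))
    z = diff / np.std(boot, ddof=1)
    return diff, z, 1.0 - norm.cdf(z)   # estimate, Z statistic, one-sided p-value
```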

We propose a new framework for the simultaneous inference of monotone and smoothly time-varying functions under complex temporal dynamics, utilizing monotone rearrangement and nonparametric estimation. We capitalize on the Gaussian approximation for the nonparametric monotone estimator and construct asymptotically correct simultaneous confidence bands (SCBs) by carefully designed bootstrap methods. We investigate two general and practical scenarios. The first is the simultaneous inference of monotone smooth trends from moderately high-dimensional time series; the proposed algorithm has been employed for the joint inference of temperature curves from multiple areas. Notably, most existing methods are designed for a single monotone smooth trend. In such cases, our proposed SCB empirically exhibits the narrowest width among existing approaches while maintaining the nominal confidence level, and has been used for testing several hypotheses tailored to global warming. The second scenario involves simultaneous inference of monotone and smoothly time-varying regression coefficients in time-varying coefficient linear models. The proposed algorithm has been utilized for testing the impact of sunshine duration on temperature, which is believed to be increasing owing to the increasingly severe greenhouse effect. The validity of the proposed methods is justified in theory as well as by extensive simulations.
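
A toy sketch of the monotone rearrangement step underlying the approach above: smooth the series nonparametrically, then sort the fitted values over an equally spaced grid to obtain an increasing trend estimate. The kernel smoother and bandwidth below are assumptions, and the paper's SCB construction is not reproduced.

```python
import numpy as np

def nadaraya_watson(t_grid, t_obs, y_obs, bandwidth):
    """Gaussian-kernel smoother evaluated on t_grid."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / bandwidth) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

def monotone_rearrangement(fitted):
    # On an equally spaced grid, sorting the fitted values is the increasing
    # rearrangement; it preserves the empirical distribution of the values.
    return np.sort(fitted)

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 300)
y = 2 * t + 0.3 * np.sin(6 * t) + rng.normal(scale=0.3, size=t.size)  # noisy trend
fitted = nadaraya_watson(t, t, y, bandwidth=0.05)
monotone_fit = monotone_rearrangement(fitted)      # increasing trend estimate
```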

In prediction settings where data are collected over time, it is often of interest to understand both the importance of variables for predicting the response at each time point and the importance summarized over the time series. Building on recent advances in estimation and inference for variable importance measures, we define summaries of variable importance trajectories. These measures can be estimated and the same approaches for inference can be applied regardless of the choice of the algorithm(s) used to estimate the prediction function. We propose a nonparametric efficient estimation and inference procedure as well as a null hypothesis testing procedure that are valid even when complex machine learning tools are used for prediction. Through simulations, we demonstrate that our proposed procedures have good operating characteristics, and we illustrate their use by investigating the longitudinal importance of risk factors for suicide attempt.
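
An illustrative sketch, in the spirit of the trajectory summaries described above, of a time-point-wise "leave-out-one-variable" importance and its average over the series. The paper's efficient estimator and inference procedure are not reproduced here; the data, the learner, and the R-squared-type importance measure are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def importance_at_time(X, y, var_idx):
    """Drop in cross-validated R^2 when the variable at var_idx is removed."""
    learner = RandomForestRegressor(n_estimators=100, random_state=0)
    full = cross_val_predict(learner, X, y, cv=5)
    red = cross_val_predict(learner, np.delete(X, var_idx, axis=1), y, cv=5)
    var_y = np.var(y)
    r2_full = 1 - np.mean((y - full) ** 2) / var_y
    r2_red = 1 - np.mean((y - red) ** 2) / var_y
    return r2_full - r2_red

rng = np.random.default_rng(3)
trajectory = []
for t in range(4):                                  # 4 hypothetical time points
    X = rng.normal(size=(500, 6))
    y = 1.5 * X[:, 0] * (t + 1) / 4 + rng.normal(size=500)
    trajectory.append(importance_at_time(X, y, var_idx=0))

summary = np.mean(trajectory)                       # summary of the importance trajectory
```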

At least two different approaches exist for defining and solving statistical models for the analysis of economic systems: the typical econometric one, which interprets the Gravity Model specification as the expected link weight of an arbitrary probability distribution, and the one rooted in statistical physics, which constructs maximum-entropy distributions constrained to satisfy certain network properties. In two recent companion papers, these have been successfully integrated within the framework induced by the constrained minimisation of the Kullback-Leibler divergence: specifically, two broad classes of models have been devised, the integrated and the conditional ones, defined by different probabilistic rules to place links, load them with weights, and turn them into proper econometric prescriptions. Still, the recipes adopted by the two approaches to estimate the parameters entering the definition of each model differ. In econometrics, one typically maximises a likelihood that decouples the binary and weighted parts of a model, treating the network as deterministic; to restore its random character, two alternatives exist: either solve the likelihood maximisation on each configuration of the ensemble and average the parameters afterwards, or average the likelihood function and maximise the latter. The difference between these recipes lies in the order in which the operations of averaging and maximisation are taken, a difference reminiscent of the quenched and annealed ways of averaging out the disorder in spin glasses. The results of the present contribution, devoted to comparing these recipes for continuous, conditional network models, indicate that the annealed estimation recipe represents the best alternative to the deterministic one.
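
In symbols, the two averaging orders described above can be written as follows (the notation is ours, with L(theta | G) the log-likelihood of a network configuration G and angle brackets denoting the ensemble average):

```latex
% Quenched recipe: maximise on each configuration, then average the estimates
\theta^{*}_{\mathrm{quenched}} \;=\; \Big\langle \arg\max_{\theta}\, \mathcal{L}(\theta \mid G) \Big\rangle_{G}

% Annealed recipe: average the likelihood first, then maximise once
\theta^{*}_{\mathrm{annealed}} \;=\; \arg\max_{\theta}\, \big\langle \mathcal{L}(\theta \mid G) \big\rangle_{G}
```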

The estimation of the effect of environmental exposures and of overall mixtures on a survival time outcome is common in environmental epidemiological studies. While advanced statistical methods are increasingly being used for mixture analyses, their applicability and performance for survival outcomes have yet to be explored. We identified readily available methods for analyzing an environmental mixture's effect on a survival outcome and assessed their performance via simulations replicating various real-life scenarios. Using prespecified criteria, we selected Bayesian Additive Regression Trees (BART), Cox Elastic Net, Cox Proportional Hazards (PH) with and without penalized splines, Gaussian Process Regression (GPR), and Multivariate Adaptive Regression Splines (MARS) to compare the bias and efficiency produced when estimating individual exposure, overall mixture, and interaction effects on a survival outcome. We illustrate the selected methods in a real-world data application, estimating the effects of arsenic, cadmium, molybdenum, selenium, tungsten, and zinc on the incidence of cardiovascular disease in American Indians using data from the Strong Heart Study (SHS). In the simulation study, there was a consistent bias-variance trade-off. The more flexible models (BART, GPR, and MARS) were most advantageous in the presence of non-proportional hazards, where the Cox models often failed to capture the true effects owing to their higher bias and lower variance. In the SHS, estimates of the effect of selenium and of the overall mixture indicated negative effects, but the magnitudes of the estimated effects varied across methods. In practice, we recommend evaluating whether findings are consistent across methods.
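
A hedged sketch (not the SHS analysis itself) of one of the simpler comparators above: a Cox proportional hazards fit for a metal mixture, with individual exposure coefficients and a crude "overall mixture" contrast comparing predicted log partial hazards at the 75th versus the 25th percentile of every exposure. The file name, column names, and the contrast definition are assumptions, and no additional confounders are adjusted for here.

```python
import pandas as pd
from lifelines import CoxPHFitter

exposures = ["arsenic", "cadmium", "molybdenum", "selenium", "tungsten", "zinc"]
df = pd.read_csv("cohort.csv")                     # hypothetical cohort file

cph = CoxPHFitter()
cph.fit(df[exposures + ["time", "event"]], duration_col="time", event_col="event")

# Individual exposure effects: log hazard ratios per unit increase.
print(cph.params_)

# Overall mixture effect: shift all exposures from their 25th to 75th percentile.
lo = df[exposures].quantile(0.25).to_frame().T
hi = df[exposures].quantile(0.75).to_frame().T
mixture_log_hr = float(cph.predict_log_partial_hazard(hi).to_numpy().squeeze()
                       - cph.predict_log_partial_hazard(lo).to_numpy().squeeze())
print("overall mixture log-HR (IQR shift in all exposures):", mixture_log_hr)
```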

We propose a new method for the construction of layer-adapted meshes for singularly perturbed differential equations (SPDEs), based on mesh partial differential equations (MPDEs) that incorporate \emph{a posteriori} solution information. There are numerous studies on the development of parameter-robust numerical methods for SPDEs that depend on the layer-adapted mesh of Bakhvalov. In~\citep{HiMa2021}, a novel MPDE-based approach for constructing a generalisation of these meshes was proposed. As with most layer-adapted mesh methods, the algorithms in that article depended on detailed derivations of \emph{a priori} bounds on the SPDE's solution and its derivatives. In this work we extend that approach so that it instead uses \emph{a posteriori} computed estimates of the solution. We present detailed algorithms for the efficient implementation of the method, and numerical results for the robust solution of two-parameter reaction-convection-diffusion problems in one and two dimensions. We also provide full FEniCS code for a one-dimensional example.
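
A simple illustration of the underlying idea of adapting a mesh to a posteriori solution information: de Boor-style equidistribution of an arc-length-type monitor function built from a computed solution. This is a much simpler device than the MPDE approach of the paper (and the FEniCS code it provides); the monitor function and the placeholder "computed" solution below are our own assumptions.

```python
import numpy as np

def equidistribute(x, u, n_new):
    """Return a mesh of n_new + 1 points equidistributing M(x) = sqrt(1 + u'(x)^2)."""
    du = np.gradient(u, x)
    monitor = np.sqrt(1.0 + du ** 2)
    # cumulative integral of the monitor function (trapezoidal rule)
    cumulative = np.concatenate(
        ([0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x)))
    )
    targets = np.linspace(0.0, cumulative[-1], n_new + 1)
    return np.interp(targets, cumulative, x)   # invert the cumulative monitor

# Placeholder "computed" solution with a boundary layer of width eps at x = 0.
eps = 1e-3
x_uniform = np.linspace(0.0, 1.0, 2001)
u_h = np.exp(-x_uniform / eps) + x_uniform     # mimics a reaction-diffusion layer
x_adapted = equidistribute(x_uniform, u_h, n_new=64)   # points cluster in the layer
```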

Palimpsests are historical manuscripts in which erased writing has been partially covered by the superimposition of a second writing. By employing imaging techniques, e.g., multispectral imaging, it becomes possible to identify features that are imperceptible to the naked eye, including faded and erased inks. When dealing with overlapping inks, artificial intelligence techniques can be used to disentangle complex nodes of overlapping letters. In this work, we propose deep learning-based semantic segmentation as a method for identifying and segmenting individual letters within overlapping characters. The experiment was conceived as a proof of concept, focusing on the palimpsests of the Ars Grammatica by Prisciano as a case study. Caveats and prospects of our approach combined with multispectral imaging are also discussed.
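
A minimal, hypothetical sketch of the kind of per-pixel classifier meant by "semantic segmentation of overlapping letters": a tiny fully convolutional network trained with per-pixel cross-entropy on multispectral patches. The number of bands, the three classes (background / erased text / overtext), and the architecture are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

n_bands, n_classes = 8, 3

model = nn.Sequential(
    nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, n_classes, kernel_size=1),       # per-pixel class scores
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random placeholder batch of 64x64 multispectral patches.
patches = torch.randn(4, n_bands, 64, 64)
labels = torch.randint(0, n_classes, (4, 64, 64))  # per-pixel class indices
optimiser.zero_grad()
loss = loss_fn(model(patches), labels)
loss.backward()
optimiser.step()
```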

We propose to approximate a (possibly discontinuous) multivariate function f(x) on a compact set by the partial minimizer arg min_y p(x, y) of an appropriate polynomial p whose construction can be cast in a univariate sum of squares (SOS) framework, resulting in a highly structured convex semidefinite program. In a number of non-trivial cases (e.g. when f is a piecewise polynomial) we prove that the approximation is exact with a low-degree polynomial p. Our approach has three distinguishing features: (i) It is mesh-free and does not require the knowledge of the discontinuity locations. (ii) It is model-free in the sense that we only assume that the function to be approximated is available through samples (point evaluations). (iii) The size of the semidefinite program is independent of the ambient dimension and depends linearly on the number of samples. We also analyze the sample complexity of the approach, proving a generalization error bound in a probabilistic setting. This allows for a comparison with machine learning approaches.
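
A toy numerical illustration of the partial-minimizer idea (the SOS semidefinite construction of p in the paper is not reproduced): a smooth bivariate polynomial whose minimiser in y jumps as x crosses 0, so that x -> arg min_y p(x, y) recovers a discontinuous step-like function. The particular polynomial below is our own toy choice, not one produced by the paper's program.

```python
import numpy as np

def p(x, y):
    return 0.25 * y**4 - 0.5 * y**2 - x * y       # double well in y, tilted by x

x_grid = np.linspace(-1.0, 1.0, 401)
y_grid = np.linspace(-2.0, 2.0, 2001)

# f_hat(x) = arg min_y p(x, y), evaluated over a grid of y values for each x
f_hat = y_grid[np.argmin(p(x_grid[:, None], y_grid[None, :]), axis=1)]

# f_hat is close to -1 for x < 0 and +1 for x > 0: a discontinuous step recovered
# as the partial minimizer of a smooth polynomial.
```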
