
The estimation of the effects of environmental exposures and of overall mixtures on a survival time outcome is common in environmental epidemiological studies. While advanced statistical methods are increasingly being used for mixture analyses, their applicability and performance for survival outcomes have yet to be explored. We identified readily available methods for analyzing an environmental mixture's effect on a survival outcome and assessed their performance via simulations replicating various real-life scenarios. Using prespecified criteria, we selected Bayesian Additive Regression Trees (BART), Cox Elastic Net, Cox Proportional Hazards (PH) with and without penalized splines, Gaussian Process Regression (GPR), and Multivariate Adaptive Regression Splines (MARS) to compare the bias and efficiency produced when estimating individual exposure, overall mixture, and interaction effects on a survival outcome. We illustrate the selected methods in a real-world data application, estimating the effects of arsenic, cadmium, molybdenum, selenium, tungsten, and zinc on the incidence of cardiovascular disease in American Indians using data from the Strong Heart Study (SHS). In the simulation study, there was a consistent bias-variance trade-off. The more flexible models (BART, GPR, and MARS) were found to be most advantageous in the presence of nonproportional hazards, where the Cox models, with their higher bias and lower variance, often failed to capture the true effects. In the SHS, estimates of the effects of selenium and of the overall mixture were negative, but the magnitudes of the estimated effects varied across methods. In practice, we recommend evaluating whether findings are consistent across methods.
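
As a concrete illustration of one of the compared approaches, the following minimal sketch fits a Cox elastic net with the lifelines Python package. The file name and column names are hypothetical placeholders, not the actual SHS variables.

```python
# Minimal sketch of a Cox elastic-net fit with lifelines.
# The CSV file and column names below are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("metals_cvd.csv")  # one row per participant (hypothetical)

# penalizer > 0 turns on regularization; l1_ratio mixes L1 (lasso) and L2 (ridge)
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(
    df[["time", "event", "arsenic", "cadmium", "molybdenum",
        "selenium", "tungsten", "zinc"]],
    duration_col="time",
    event_col="event",
)
cph.print_summary()  # log hazard ratios for each exposure
```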

Related content

Dependence is undoubtedly a central concept in statistics. Yet it proves difficult to locate in the literature a formal definition that goes beyond the self-evident 'dependence = non-independence'. This absence has allowed the term 'dependence' and its derivatives to be used vaguely and indiscriminately to qualify a variety of disparate notions, leading to numerous incongruities. For example, the classical Pearson, Spearman, and Kendall correlations are widely regarded as 'dependence measures' of major interest, in spite of returning 0 in some cases of deterministic relationships between the variables at play, thus evidently not measuring dependence at all. Arguing that research on such a fundamental topic would benefit from a more rigorous framework, this paper suggests a general definition of the dependence between two random variables defined on the same probability space. Natural enough to align with intuition, the definition is still sufficiently precise to allow unequivocal identification of a 'universal' representation of the dependence structure of any bivariate distribution. Links between this representation and familiar concepts are highlighted, and ultimately the idea of a dependence measure based on that universal representation is explored and shown to satisfy Rényi's postulates.
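
The classic counterexample behind the correlation remark is worth spelling out: if X is symmetric about 0 and Y = X^2, then Y is a deterministic function of X, yet the Pearson correlation vanishes. A short derivation of this standard fact (not taken from the paper itself):

```latex
% X symmetric about 0 (so all odd moments vanish), Y = X^2:
\operatorname{Cov}(X, Y) = \mathbb{E}[X^{3}] - \mathbb{E}[X]\,\mathbb{E}[X^{2}]
                         = 0 - 0 \cdot \mathbb{E}[X^{2}] = 0,
\qquad\text{hence}\qquad
\rho(X, Y) = 0 \quad\text{although } Y = X^{2}.
```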

Microring resonators (MRRs) are promising devices for time-delay photonic reservoir computing, but the impact of the different physical effects taking place in MRRs on reservoir computing performance is yet to be fully understood. We numerically analyze the impact of linear losses, as well as of the relaxation times of thermo-optic and free-carrier effects, on the prediction error for the time-series task NARMA-10. We demonstrate the existence of three regions, defined by the input power and the frequency detuning between the optical source and the microring resonance, that reveal the cavity's transition from linear to nonlinear regimes. One of these regions offers very low time-series prediction error at relatively low input power and node count, while the other regions either lack nonlinearity or become unstable. This study provides insight into the design of MRRs and the optimization of their physical properties for improving the prediction performance of time-delay reservoir computing.
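
For readers unfamiliar with the benchmark, NARMA-10 asks the reservoir readout to reproduce a nonlinear autoregressive target series from a random input. A generator using the standard recurrence from the reservoir computing literature (not specific to this paper):

```python
import numpy as np

def narma10(n_steps, seed=0):
    """Generate the standard NARMA-10 input/target pair."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, n_steps)  # i.i.d. input in the usual range
    y = np.zeros(n_steps)
    for t in range(9, n_steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])  # 10-step memory
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(10_000)  # train a linear readout on reservoir states to predict y
```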

Many interesting physical problems described by systems of hyperbolic conservation laws are stiff and thus impose a very small time-step because of the restrictive CFL stability condition. In this case, one can exploit the superior stability properties of implicit time integration, which allows the time-step to be chosen from accuracy requirements alone, avoiding the need for small time-steps. We discuss an efficient framework for devising high order implicit schemes for stiff hyperbolic systems without tailoring it to a specific problem. The nonlinearity of high order schemes, due to the space- and time-limiting procedures that control nonphysical oscillations, makes implicit time integration difficult, e.g., because the discrete system is nonlinear even for linear problems. This nonlinearity of the scheme is circumvented as proposed in (Puppo et al., Comm. Appl. Math. & Comput., 2023) for scalar conservation laws, where a first order implicit predictor is computed to freeze the nonlinear coefficients of the essentially non-oscillatory space reconstruction and to assist the limiting in time. In addition, we propose a novel conservative flux-centered a-posteriori time-limiting procedure that uses numerical entropy indicators to detect troubled cells. The numerical tests involve classical and artificially devised stiff problems based on the Euler system of gas dynamics.
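
A toy illustration of the underlying point (not the authors' high order scheme): first order backward Euler with upwind differencing for the linear advection equation u_t + a u_x = 0 remains stable even when the CFL number a*dt/dx far exceeds 1, at the cost of solving a linear system per step.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Backward Euler + first-order upwind for u_t + a u_x = 0, periodic grid.
# Unconditionally stable: the CFL number a*dt/dx may greatly exceed 1.
a, nx = 1.0, 200
dx = 1.0 / nx
dt = 10 * dx / a                      # CFL = 10, far beyond the explicit limit
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.5) ** 2)     # smooth initial bump

c = a * dt / dx
# Implicit update: (1 + c) u_j^{n+1} - c u_{j-1}^{n+1} = u_j^n
main = (1 + c) * np.ones(nx)
lower = -c * np.ones(nx - 1)
A = sp.diags([main, lower], [0, -1], format="lil")
A[0, nx - 1] = -c                     # periodic wrap-around
solve = spla.factorized(A.tocsc())    # factor once, reuse every step

for _ in range(50):
    u = solve(u)
```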

Polarization information of light can provide rich cues for computer vision and scene understanding tasks, such as the type of material, and the pose and shape of objects. With the advent of new, inexpensive polarimetric sensors, this imaging modality is becoming accessible to a wider public for solving problems such as pose estimation, 3D reconstruction, underwater navigation, and depth estimation. However, we observe several limitations in the usage of this sensing modality, as well as a lack of standards and publicly available tools for analyzing polarization images. Furthermore, although polarization camera manufacturers usually provide acquisition tools to interface with their cameras, they rarely include processing algorithms that make use of the polarization information. In this paper, we review recent advances in polarization imaging, including a comprehensive survey of its use in vision and robotics perception tasks. We also introduce a complete software toolkit that provides common standards to communicate with and process information from most of the micro-grid polarization cameras currently on the market. The toolkit also implements several image processing algorithms for this modality, and it is publicly available on GitHub: //github.com/vibot-lab/Pola4all_JEI_2023.
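
The core computation such toolkits implement is recovering the linear Stokes parameters, degree of linear polarization (DoLP), and angle of linear polarization (AoLP) from the four intensity channels (0°, 45°, 90°, 135°) of a micro-grid camera. A minimal sketch using the standard formulas (array names are hypothetical, and this is not necessarily the toolkit's own API):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135, eps=1e-12):
    """Stokes parameters, DoLP and AoLP from four demosaiced micro-grid channels."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                                # horizontal/vertical preference
    s2 = i45 - i135                              # diagonal preference
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)              # angle of linear polarization (rad)
    return s0, s1, s2, dolp, aolp
```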

Disability insurance claims are often affected by lengthy reporting delays and adjudication processes. The classic multistate life insurance modeling framework is ill-suited to handle such information delays since the cash flow and available information can no longer be based on the biometric multistate process determining the contractual payments. We propose a new individual reserving model for disability insurance schemes which describes the claim evolution in real-time. Under suitable independence assumptions between the available information and the underlying biometric multistate process, we show that these new reserves may be calculated as natural modifications of the classic reserves. We propose suitable parametric estimators for the model constituents and a real data application shows the practical relevance of our concepts and results.
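
For context, the classic reserve that the paper modifies is the conditional expected present value of future contractual payments. In standard textbook notation (our formulation, not the paper's new reserve):

```latex
% Classic prospective reserve at time t, for biometric state process Z,
% accumulated payment stream B, and constant interest rate r:
V(t) \;=\; \mathbb{E}\!\left[\,\int_{t}^{\infty} e^{-r(s-t)}\,\mathrm{d}B(s)
\;\middle|\; Z(t)\right].
```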

This work studies nonparametric Bayesian estimation of the intensity function of an inhomogeneous Poisson point process in the important case where the intensity depends on covariates, based on the observation of a single realisation of the point pattern over a large area. It is shown how the presence of covariates allows information to be borrowed from faraway locations in the observation window, enabling consistent inference in the growing-domain asymptotics. In particular, optimal posterior contraction rates under both global and point-wise loss functions are derived. The rates in global loss are obtained under conditions on the prior distribution resembling those in the well-established theory of Bayesian nonparametrics, here combined with concentration inequalities for functionals of stationary processes to control certain random covariate-dependent loss functions appearing in the analysis. The local rates are derived with an ad hoc study that builds on recent advances in the theory of Pólya tree priors, extended to the present multivariate setting with a novel construction that makes use of the random geometry induced by the covariates.
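
The likelihood underlying such analyses takes the standard inhomogeneous Poisson form; writing the covariate-driven intensity as lambda(x) = rho(Z(x)), with the link function rho as the unknown (our notation, for orientation only):

```latex
% Inhomogeneous Poisson likelihood for points x_1,\dots,x_N in window W,
% with intensity \lambda(x) = \rho(Z(x)) driven by the covariate field Z:
L(\rho) = \exp\!\left( -\int_{W} \rho(Z(x)) \,\mathrm{d}x \right)
          \prod_{i=1}^{N} \rho(Z(x_i)).
```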

Recent discussions on the future of metropolitan cities underscore the pivotal role of (social) equity, driven by demographic and economic trends. More equal policies can foster and contribute to a city's economic success and social stability. In this work, we focus on identifying metropolitan areas with distinct economic and social levels in the greater Los Angeles area, one of the most diverse yet unequal areas in the United States. Utilizing American Community Survey data, we propose a Bayesian model for boundary detection based on income distributions. The model identifies areas with significant income disparities, offering actionable insights for policymakers seeking to address social and economic inequalities. Our approach, formalized as a Bayesian structural learning framework, models areal densities through finite mixture models. Efficient posterior computation is facilitated by a transdimensional Markov chain Monte Carlo sampler. The methodology is validated via extensive simulations and applied to the income distributions of the greater Los Angeles area. We identify several boundaries in the income distributions, which can be explained in light of other social dynamics such as crime rates and healthcare, showing the usefulness of such an analysis to policymakers.
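
Schematically, the building block of such a model is a finite mixture representation of each areal income density; a boundary is flagged where the mixture structure changes abruptly between neighbouring areas. In our notation (not necessarily the paper's exact parameterization):

```latex
% Income density in area i as a finite mixture of K shared Gaussian kernels;
% boundary detection compares the weight vectors (w_{i1},\dots,w_{iK})
% of neighbouring areas i and j:
f_i(y) = \sum_{k=1}^{K} w_{ik}\, \mathcal{N}\!\left(y \mid \mu_k, \sigma_k^{2}\right).
```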

Compositional data are nowadays defined as positive vectors in which the ratios among the elements are of interest to the researcher. Financial statement analysis by means of accounting ratios fulfils this definition to the letter. Compositional data analysis solves the major problems of statistical analysis of standard financial ratios at the industry level, such as skewness, non-normality, non-linearity, and dependence of the results on the choice of which accounting figure goes to the numerator and which to the denominator of the ratio. In spite of this, compositional applications to financial statement analysis are still rare. In this article, we present some transformations within compositional data analysis that are particularly useful for financial statement analysis. We show how to compute industry or sub-industry means of standard financial ratios from a compositional perspective. We show how to visualise the firms in an industry with a compositional biplot, how to classify them with compositional cluster analysis, and how to relate financial and non-financial indicators with compositional regression models. We present an application to the accounting statements of Spanish wineries using DuPont analysis, together with a step-by-step tutorial for the compositional freeware CoDaPack.
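
One workhorse transformation in compositional data analysis is the centered log-ratio (clr), which maps a composition to log-contrasts against its geometric mean, after which standard multivariate tools (biplots, clustering, regression) apply. A minimal sketch with hypothetical figures:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of a composition (positive parts)."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    # subtracting the mean of the logs divides by the geometric mean
    return logx - logx.mean(axis=-1, keepdims=True)

# e.g., a firm's balance-sheet composition (hypothetical shares of 3 items):
firm = np.array([0.5, 0.3, 0.2])
print(clr(firm))  # clr coordinates sum to zero by construction
```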

Rational best approximations (in a Chebyshev sense) to real functions are characterized by an equioscillating approximation error. Similar results do not hold for rational best approximations to complex functions in general. In the present work, we consider unitary rational approximations to the exponential function on the imaginary axis, i.e., rational functions which map the imaginary axis to the unit circle. In the class of unitary rational functions, best approximations are shown to exist, to be uniquely characterized by equioscillation of a phase error, and to possess a super-linear convergence rate. Furthermore, the best approximations have full degree (i.e., they are non-degenerate), achieve their maximum approximation error at points of equioscillation, and interpolate at intermediate points. Asymptotic properties of the poles, interpolation nodes, and equioscillation points of these approximants are studied. Three algorithms, which we have found very effective for computing unitary rational approximations, including candidates for best approximations, are sketched briefly. Some consequences for numerical time-integration are discussed. In particular, time propagators based on unitary best approximants are unitary, symmetric, and A-stable.
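
A concrete family illustrating the class under study (a standard construction, offered here for orientation): for a polynomial p with real coefficients, p(-ix) is the complex conjugate of p(ix) for real x, so r(z) = p(z)/p(-z) maps the imaginary axis to the unit circle. The diagonal Padé approximants to e^z, such as the degree-one Crank-Nicolson propagator, are of this form.

```latex
% For p with real coefficients and x \in \mathbb{R}, p(-ix) = \overline{p(ix)}:
r(z) = \frac{p(z)}{p(-z)}, \qquad
|r(ix)| = \frac{|p(ix)|}{\left|\overline{p(ix)}\right|} = 1 .
```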

Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion or link prediction, i.e., to predict whether a relationship not in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
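
To make the embedding-model idea concrete, consider the scoring function of TransE, one of the most widely surveyed models: a triple (h, r, t) is deemed plausible when the head embedding translated by the relation embedding lands near the tail embedding. A minimal numpy sketch with toy, untrained vectors:

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score: closer to 0 means more plausible."""
    return -np.linalg.norm(h + r - t, ord=norm)

# toy 4-dimensional embeddings (illustrative only, not trained):
h = np.array([0.1, 0.3, -0.2, 0.5])   # head entity
r = np.array([0.2, -0.1, 0.4, 0.0])   # relation
t = np.array([0.3, 0.2, 0.2, 0.5])    # tail entity
print(transe_score(h, r, t))          # rank candidate tails by this score
```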
