
Spatio-temporal pathogen spread is often partially observed at the metapopulation scale. Available data correspond to proxies and are incomplete, censored and heterogeneous. Moreover, representing such biological systems often leads to complex stochastic models. Such complexity, together with data characteristics, makes the analysis of these systems a challenge. Our objective was to develop a new inference procedure to estimate key parameters of stochastic metapopulation models of animal disease spread from longitudinal and spatial datasets, while accurately accounting for characteristics of census data. We applied our procedure to provide new knowledge on the regional spread of \emph{Mycobacterium avium} subsp. \emph{paratuberculosis} (\emph{Map}), which causes bovine paratuberculosis, a worldwide endemic disease. \emph{Map} spread between herds through trade movements was modeled with a stochastic mechanistic model. Comprehensive data from 2005 to 2013 on cattle movements in 12,857 dairy herds in Brittany (western France) and partial data on animal infection status in 2,278 herds sampled from 2007 to 2013 were used. Inference was performed using a new criterion based on a Monte-Carlo approximation of a composite likelihood, coupled to a numerical optimization algorithm (Nelder-Mead Simplex-like). Our criterion showed clear superiority over alternative criteria in identifying the correct parameter values, as assessed by an empirical identifiability study on simulated data. Point estimates and profile likelihoods allowed us to establish the initial state of the system, identify the risk of pathogen introduction from outside the metapopulation, and confirm the assumption of the low sensitivity of the diagnostic test. Our inference procedure could easily be applied to other spatio-temporal infection dynamics, especially when ABC-like methods face challenges in defining relevant summary statistics.
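The criterion described above can be illustrated in miniature. The sketch below is an assumption-laden toy, not the authors' herd model: it fits a transmission parameter `beta` of a simple stochastic infection process by minimizing the negative of a Monte-Carlo composite log-likelihood (a product of MC-estimated yearly marginals, ignoring between-year dependence) with scipy's Nelder-Mead. Common random numbers make the criterion deterministic in `beta`, which helps the derivative-free search.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_counts(beta, n_herds=100, n_years=5, n_sims=400, seed=1):
    """Toy stochastic SIS-like dynamics standing in for the metapopulation model:
    each year an uninfected herd becomes infected with probability beta*I/N
    and an infected herd clears infection with probability 0.1."""
    rng = np.random.default_rng(seed)  # fixed seed = common random numbers
    counts = np.zeros((n_sims, n_years))
    for s in range(n_sims):
        infected = 5
        for t in range(n_years):
            p = min(1.0, beta * infected / n_herds)
            infected += rng.binomial(n_herds - infected, p) - rng.binomial(infected, 0.1)
            counts[s, t] = infected
    return counts

def neg_mc_composite_loglik(theta, observed):
    """Composite log-likelihood: product of Monte-Carlo-estimated yearly
    marginals, deliberately ignoring the dependence between years."""
    beta = float(theta[0])
    if beta <= 0:
        return 1e9
    sims = simulate_counts(beta)
    ll = 0.0
    for t, obs in enumerate(observed):
        freq = np.mean(np.abs(sims[:, t] - obs) <= 2)  # P(count within +-2 of obs)
        ll += np.log(max(freq, 1e-6))
    return -ll

observed = simulate_counts(0.3, n_sims=1, seed=42)[0]  # pretend census data
res = minimize(neg_mc_composite_loglik, x0=[0.1], args=(observed,),
               method="Nelder-Mead",
               options={"initial_simplex": [[0.05], [0.4]], "xatol": 1e-3})
beta_hat = float(res.x[0])
```

A wide initial simplex is used because the MC criterion is piecewise flat for very small parameter perturbations.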


Most existing neural network-based approaches for solving stochastic optimal control problems using the associated backward dynamic programming principle rely on the ability to simulate the underlying state variables. However, in some problems, this simulation is infeasible, leading to the discretization of state variable space and the need to train one neural network for each data point. This approach becomes computationally inefficient when dealing with large state variable spaces. In this paper, we consider a class of this type of stochastic optimal control problems and introduce an effective solution employing multitask neural networks. To train our multitask neural network, we introduce a novel scheme that dynamically balances the learning across tasks. Through numerical experiments on real-world derivatives pricing problems, we demonstrate that our method outperforms state-of-the-art approaches.
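Dynamic loss balancing across tasks can be sketched as follows. This is a minimal NumPy illustration under assumed choices (two synthetic regression tasks, a shared tanh layer with per-task linear heads, and loss-proportional weights), not the paper's scheme: each step, the task with the larger current loss receives the larger gradient weight.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
true_w = [rng.normal(size=8) for _ in range(2)]
Y = [X @ true_w[t] + 0.1 * rng.normal(size=200) for t in range(2)]  # two tasks

W = 0.1 * rng.normal(size=(8, 16))                 # shared representation weights
V = [0.1 * rng.normal(size=16) for _ in range(2)]  # one linear head per task

def task_losses():
    H = np.tanh(X @ W)
    return [float(np.mean((H @ V[t] - Y[t]) ** 2)) for t in range(2)]

L_start = task_losses()
lr = 0.05
for _ in range(300):
    H = np.tanh(X @ W)
    L = [np.mean((H @ V[t] - Y[t]) ** 2) for t in range(2)]
    lam = [l / sum(L) for l in L]   # dynamic balancing (assumed scheme):
                                    # weight each task by its share of the loss
    gH = np.zeros_like(H)
    for t in range(2):
        r = 2.0 * lam[t] * (H @ V[t] - Y[t]) / len(X)  # d(lam_t*MSE_t)/d(pred_t)
        gH += np.outer(r, V[t])      # accumulate before the head is updated
        V[t] = V[t] - lr * (H.T @ r)
    W = W - lr * (X.T @ (gH * (1.0 - H ** 2)))  # backprop through tanh
L_end = task_losses()
```

The normalized weights `lam` sum to one, so the balancing rescales rather than inflates the overall gradient.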

The classification of different grapevine varieties is a relevant phenotyping task in Precision Viticulture since it enables estimating the growth of vineyard rows dedicated to different varieties, among other applications concerning the wine industry. This task can be performed with destructive methods that require time-consuming tasks, including data collection and analysis in the laboratory. However, Unmanned Aerial Vehicles (UAVs) provide a more efficient and less costly approach to collecting hyperspectral data, despite acquiring noisier data. Therefore, the first task is to process these data, correcting and downsampling the large volumes acquired. In addition, the hyperspectral signatures of grape varieties are very similar. In this work, a Convolutional Neural Network (CNN) is proposed for classifying seventeen red and white grape varieties. Rather than classifying single samples, these are processed together with their neighbourhood. Hence, the extraction of spatial and spectral features is addressed with 1) a spatial attention layer and 2) Inception blocks. The pipeline goes from processing to dataset elaboration, finishing with the training phase. The fitted model is evaluated in terms of response time, accuracy and data separability, and compared with other state-of-the-art CNNs for classifying hyperspectral data. Our network proved to be much more lightweight, with a reduced number of input bands, a lower number of trainable weights and, therefore, reduced training time. Despite this, the evaluated metrics showed much better results for our network (~99% overall accuracy), compared with previous works that barely achieve 81% OA.
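The spatial-attention idea can be sketched in a few lines. The function below is a CBAM-style variant under stated assumptions (pooling over the spectral axis and two learned scalars standing in for a small convolution), not the paper's exact layer: each pixel of the neighbourhood is gated by a sigmoid attention weight computed from its pooled spectral response.

```python
import numpy as np

def spatial_attention(patch, w_max=1.0, w_avg=1.0, bias=0.0):
    """CBAM-style spatial attention sketch. patch: (H, W, B) hyperspectral
    neighbourhood. Pools over the band axis, scores each pixel, and gates
    every band of that pixel by a sigmoid attention weight in (0, 1)."""
    mx = patch.max(axis=-1)                  # (H, W) per-pixel band maximum
    av = patch.mean(axis=-1)                 # (H, W) per-pixel band average
    score = w_max * mx + w_avg * av + bias   # 1x1 "conv" over the 2 pooled maps
    mask = 1.0 / (1.0 + np.exp(-score))      # sigmoid attention map
    return patch * mask[..., None], mask

rng = np.random.default_rng(0)
patch = rng.normal(size=(5, 5, 32))          # 5x5 neighbourhood, 32 bands
out, mask = spatial_attention(patch)
```

In a full network the scalars `w_max`, `w_avg`, and `bias` would be trained jointly with the convolutional weights.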

Network meta-analysis combines aggregate data (AgD) from multiple randomised controlled trials, assuming that any effect modifiers are balanced across populations. Individual patient data (IPD) meta-regression is the ``gold standard'' method to relax this assumption; however, IPD are frequently only available in a subset of studies. Multilevel network meta-regression (ML-NMR) extends IPD meta-regression to incorporate AgD studies whilst avoiding aggregation bias, but currently requires the aggregate-level likelihood to have a known closed form. Notably, this prevents application to time-to-event outcomes. We extend ML-NMR to individual-level likelihoods of any form, by integrating the individual-level likelihood function over the AgD covariate distributions to obtain the respective marginal likelihood contributions. We illustrate with two examples of time-to-event outcomes, showing the performance of ML-NMR in a simulated comparison with little loss of precision from a full IPD analysis, and demonstrating flexible modelling of baseline hazards using cubic M-splines with synthetic data on newly diagnosed multiple myeloma. ML-NMR is a general method for synthesising individual and aggregate level data in networks of all sizes. Extension to general likelihoods, including for survival outcomes, greatly increases the applicability of the method. R and Stan code is provided, and the methods are implemented in the multinma R package.
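The core integration step can be shown in miniature. The sketch below is a toy under assumed forms (a logistic individual-level model and a Normal AgD covariate distribution), not the paper's survival models: the aggregate study's marginal event probability is obtained by Monte Carlo integration of the individual-level likelihood over the covariate distribution, and then enters a Binomial aggregate-level contribution.

```python
import numpy as np
from math import lgamma, log

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def marginal_event_prob(beta0, beta1, mu, sd, n_mc=50000, seed=0):
    """Integrate the individual-level (logistic) event probability over an
    assumed Normal covariate distribution reported by the AgD study."""
    x = np.random.default_rng(seed).normal(mu, sd, n_mc)
    return float(expit(beta0 + beta1 * x).mean())

def agd_loglik(beta0, beta1, events, n, mu, sd):
    """Aggregate-level contribution: Binomial log-likelihood evaluated at the
    marginal event probability for the study's covariate distribution."""
    p = marginal_event_prob(beta0, beta1, mu, sd)
    logC = lgamma(n + 1) - lgamma(events + 1) - lgamma(n - events + 1)
    return logC + events * log(p) + (n - events) * log(1 - p)
```

With a degenerate covariate distribution (`sd` near zero) the marginal probability collapses back to the individual-level one, which is a useful sanity check.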

A crucial challenge for solving problems in conflict research is in leveraging the semi-supervised nature of the data that arise. Observed response data such as counts of battle deaths over time indicate latent processes of interest such as intensity and duration of conflicts, but defining and labeling instances of these unobserved processes requires nuance and involves imprecision. The availability of such labels, however, would make it possible to study the effect of intervention-related predictors - such as ceasefires - directly on conflict dynamics (e.g., latent intensity) rather than through an intermediate proxy like observed counts of battle deaths. Motivated by this problem and the new availability of the ETH-PRIO Civil Conflict Ceasefires data set, we propose a Bayesian autoregressive (AR) hidden Markov model (HMM) framework as a sufficiently flexible machine learning approach for semi-supervised regime labeling with uncertainty quantification. We motivate our approach by illustrating the way it can be used to study the role that ceasefires play in shaping conflict dynamics. This ceasefires data set is the first systematic and globally comprehensive data on ceasefires, and our work is the first to analyze this new data and to explore the effect of ceasefires on conflict dynamics in a comprehensive and cross-country manner.
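The regime-labeling machinery can be sketched with a forward filter. This is a simplified stand-in under assumed settings (two latent intensity regimes, Poisson emissions whose mean depends autoregressively on the previous count, fixed parameters rather than Bayesian inference), not the paper's full model; the `labels` argument shows how semi-supervision clamps known regimes.

```python
import math

def pois_logpmf(y, mean):
    return -mean + y * math.log(mean) - math.lgamma(y + 1)

def forward_posteriors(counts, lam=(2.0, 20.0), rho=0.3,
                       trans=((0.95, 0.05), (0.10, 0.90)), labels=None):
    """Forward filter for a 2-state AR(1) Poisson HMM: state k emits
    Poisson(lam[k] + rho * y_{t-1}). Returns filtered state probabilities.
    labels[t] may clamp the state at time t (semi-supervised labeling)."""
    K = 2
    alpha = [0.5, 0.5]          # initial state distribution (assumed uniform)
    post, prev = [], 0
    for t, y in enumerate(counts):
        pred = [sum(alpha[j] * trans[j][k] for j in range(K)) for k in range(K)]
        like = [math.exp(pois_logpmf(y, lam[k] + rho * prev)) for k in range(K)]
        a = [pred[k] * like[k] for k in range(K)]
        if labels is not None and labels[t] is not None:
            a = [a[k] if k == labels[t] else 0.0 for k in range(K)]  # clamp
        z = sum(a)
        alpha = [v / z for v in a]
        post.append(alpha[:])
        prev = y
    return post

counts = [1, 2, 1, 3, 25, 30, 28, 2, 1]   # a burst of high-intensity counts
post = forward_posteriors(counts)
```

In the Bayesian version, the transition matrix, emission means, and AR coefficient would themselves get priors and be sampled, propagating label uncertainty.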

In this work some advances in the theory of curvature of two-dimensional probability manifolds corresponding to families of distributions are proposed. It is proved that location-scale distributions are hyperbolic in the Information Geometry sense even when the generatrix is non-even or non-smooth. A novel formula is obtained for the computation of curvature in the case of exponential families: this formula implies some new flatness criteria in dimension 2. Finally, it is observed that many two parameter distributions, widely used in applications, are locally hyperbolic, which highlights the role of hyperbolic geometry in the study of commonly employed probability manifolds. These results have benefited from the use of explainable computational tools, which can substantially boost scientific productivity.
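A standard example consistent with the hyperbolicity claim (a textbook fact, not the paper's novel formula) is the Gaussian location-scale family $\{\,\mathcal{N}(\mu,\sigma^2)\,\}$, whose Fisher information metric is

\[
ds^2 \;=\; \frac{d\mu^2 + 2\,d\sigma^2}{\sigma^2}.
\]

The substitution $\mu = \sqrt{2}\,u$ turns this into $2\,(du^2 + d\sigma^2)/\sigma^2$, a constant multiple of the Poincaré half-plane metric; since scaling a metric by $\lambda$ divides the Gaussian curvature by $\lambda$, the family has constant curvature $K = -1/2$, i.e., it is hyperbolic in the Information Geometry sense.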

In this study, a finite deformation phase-field formulation is developed to investigate the effect of hygrothermal conditions on the viscoelastic-viscoplastic fracture behavior of epoxy nanocomposites under cyclic loading. The formulation incorporates a definition of the Helmholtz free energy, which considers the effect of nanoparticles, moisture content, and temperature. The free energy is additively decomposed into a deviatoric equilibrium, a deviatoric non-equilibrium, and a volumetric contribution, with distinct definitions for tension and compression. The proposed derivation offers a realistic modeling of damage and viscoplasticity mechanisms in the nanocomposites by coupling the phase-field damage model with a modified crack driving force and a viscoelastic-viscoplastic model. Numerical simulations are conducted to study the cyclic force-displacement response of both dry and saturated boehmite nanoparticle (BNP)/epoxy samples, considering BNP contents and temperature. Comparing numerical results with experimental data shows good agreement at various BNP contents. In addition, the predictive capability of the phase-field model is evaluated through simulations of single-edge notched nanocomposite plates subjected to monolithic tensile and shear loading.

Recently, a family of unconventional integrators for ODEs with polynomial vector fields was proposed, based on the polarization of vector fields. The simplest instance is the now-famous Kahan discretization for quadratic vector fields. All these integrators seem to possess remarkable conservation properties. In particular, it has been proved that, when the underlying ODE is Hamiltonian, its polarization discretization possesses an integral of motion and an invariant volume form. In this note, we propose a new algebraic approach to the derivation of integrals of motion for polarization discretizations.
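The Kahan discretization is easy to exhibit concretely. For the Lotka-Volterra system $\dot x = x(a - by)$, $\dot y = y(cx - d)$ (a standard worked example, not taken from this note), each quadratic term $xy$ is polarized as $(x_{n+1}y_n + x_n y_{n+1})/2$ and the linear terms are averaged, so the update is linear in $(x_{n+1}, y_{n+1})$ and can be solved exactly:

```python
def kahan_step(x, y, h, a=2.0, b=1.0, c=1.0, d=1.0):
    """One Kahan step for x' = x(a - b*y), y' = y(c*x - d).
    Polarization x*y -> (x_new*y + x*y_new)/2 makes the implicit update
    linear in (x_new, y_new); solve the 2x2 system by Cramer's rule."""
    A11 = 1.0 - h * a / 2 + h * b * y / 2
    A12 = h * b * x / 2
    A21 = -h * c * y / 2
    A22 = 1.0 + h * d / 2 - h * c * x / 2
    r1 = x * (1.0 + h * a / 2)
    r2 = y * (1.0 - h * d / 2)
    det = A11 * A22 - A12 * A21
    return (r1 * A22 - A12 * r2) / det, (A11 * r2 - A21 * r1) / det

# trace an orbit; Kahan steps keep it positive and bounded
traj = [(1.5, 1.0)]
for _ in range(2000):
    traj.append(kahan_step(*traj[-1], h=0.05))
```

Note that the method is implicit by construction but never requires iteration: linearity in the new point is exactly what polarization buys.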

During multiple testing, researchers often adjust their alpha level to control the familywise error rate for a statistical inference about a joint union alternative hypothesis (e.g., "H1 or H2"). However, in some cases, they do not make this inference and instead make separate inferences about each of the individual hypotheses that comprise the joint hypothesis (e.g., H1 and H2). For example, a researcher might use a Bonferroni correction to adjust their alpha level from the conventional level of 0.050 to 0.025 when testing H1 and H2, find a significant result for H1 (p < 0.025) and not for H2 (p > 0.025), and so claim support for H1 and not for H2. However, these separate individual inferences do not require an alpha adjustment. Only a statistical inference about the union alternative hypothesis "H1 or H2" requires an alpha adjustment because it is based on "at least one" significant result among the two tests, and so it depends on the familywise error rate. When a researcher corrects their alpha level during multiple testing but does not make an inference about the union alternative hypothesis, their correction is redundant. In the present article, I discuss this redundant correction problem, including its associated loss of statistical power and its potential causes vis-\`a-vis error rate confusions and the alpha adjustment ritual. I also provide three illustrations of redundant corrections from recent psychology studies. I conclude that redundant corrections represent a symptom of statisticism, and I call for a more nuanced and context-specific approach to multiple testing corrections.
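The power cost of a redundant correction is easy to quantify. Under an assumed standardized effect (the value 2.8 below is a hypothetical non-centrality, roughly an 80%-power design, not from the article), halving alpha from 0.050 to 0.025 lowers the power of each individual test:

```python
from statistics import NormalDist

def one_sided_power(alpha, delta):
    """Power of a one-sided z-test at level alpha when the true standardized
    (non-centrality) effect is delta."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return 1 - NormalDist().cdf(z_crit - delta)

delta = 2.8                                        # hypothetical effect size
power_unadjusted = one_sided_power(0.050, delta)   # testing H1 on its own
power_bonferroni = one_sided_power(0.025, delta)   # redundant Bonferroni alpha
```

Here the redundant halving of alpha costs roughly eight percentage points of power per test, which is exactly the loss the article argues is incurred for no inferential benefit when no union-hypothesis claim is made.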

Methods for estimating heterogeneous treatment effects (HTE) from observational data have largely focused on continuous or binary outcomes, with less attention paid to survival outcomes and almost none to settings with competing risks. In this work, we develop censoring unbiased transformations (CUTs) for survival outcomes both with and without competing risks. After converting time-to-event outcomes using these CUTs, direct application of HTE learners for continuous outcomes yields consistent estimates of heterogeneous cumulative incidence effects, total effects, and separable direct effects. Our CUTs enable application of a much larger set of state-of-the-art HTE learners for censored outcomes than had previously been available, especially in competing risks settings. We provide generic, model-free, learner-specific oracle inequalities bounding the finite-sample excess risk. The oracle efficiency results depend on the oracle selector and estimated nuisance functions from all steps involved in the transformation. We demonstrate the empirical performance of the proposed methods in simulation studies.
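One classical censoring unbiased transformation (the Koul-Susarla-Van Ryzin type, given here as a textbook instance rather than the paper's construction) replaces a censored outcome by zero and inflates each uncensored outcome by the inverse of the estimated censoring survivor function, so the transformed variable has approximately the same conditional mean as the uncensored outcome:

```python
import random

def ksv_transform(times, deltas):
    """Y*_i = delta_i * Y_i / G_hat(Y_i-), where G_hat is the Kaplan-Meier
    estimate of the censoring survivor function (left limit at Y_i).
    times: observed times min(T, C); deltas: 1 if the event was observed."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    g = 1.0                       # running Kaplan-Meier product for censoring
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = deltas[i] * times[i] / g   # g = G_hat at times[i]- (left limit)
        if deltas[i] == 0:                  # censoring "events" update G_hat
            g *= 1.0 - 1.0 / (n - rank)
    return out

# simulated check of approximate unbiasedness: E[Y*] should recover E[T] = 1
random.seed(7)
n = 4000
T = [random.expovariate(1.0) for _ in range(n)]   # true event times
C = [random.expovariate(0.3) for _ in range(n)]   # independent censoring
times = [min(t, c) for t, c in zip(T, C)]
deltas = [1 if t <= c else 0 for t, c in zip(T, C)]
star = ksv_transform(times, deltas)
```

After such a transformation, off-the-shelf HTE learners for continuous outcomes can be run on `star` directly, which is the pattern the abstract describes.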

Comparisons of frequency distributions often invoke the concept of shift to describe directional changes in properties such as the mean. In the present study, we sought to define shift as a property in and of itself. Specifically, we define distributional shift (DS) as the concentration of frequencies away from the discrete class having the greatest value (e.g., the right-most bin of a histogram). We derive a measure of DS using the normalized sum of exponentiated cumulative frequencies. We then define relative distributional shift (RDS) as the difference in DS between two distributions, revealing the magnitude and direction by which one distribution is concentrated to lesser or greater discrete classes relative to another. We find that RDS is highly related to popular measures that, while based on the comparison of frequency distributions, do not explicitly consider shift. While RDS provides a useful complement to other comparative measures, DS allows shift to be quantified as a property of individual distributions, similar in concept to a statistical moment.
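The DS and RDS measures can be sketched numerically. The normalization below is one plausible reading of "normalized sum of exponentiated cumulative frequencies" (an assumption, not the paper's exact formula): the raw sum is rescaled so that DS is 0 when all mass sits in the greatest class and 1 when all mass sits in the least class.

```python
import math

def distributional_shift(freqs):
    """DS sketch: sum exp(cumulative relative frequency) over classes, then
    normalize between the all-mass-in-last-class and all-mass-in-first-class
    extremes, so DS lies in [0, 1]."""
    n = len(freqs)
    total = sum(freqs)
    cum, s = 0.0, 0.0
    for f in freqs:
        cum += f / total
        s += math.exp(cum)
    lo = (n - 1) + math.e   # all mass in the last (greatest) class
    hi = n * math.e         # all mass in the first (least) class
    return (s - lo) / (hi - lo)

def relative_distributional_shift(f1, f2):
    """RDS: positive when f1 is concentrated in lesser classes than f2."""
    return distributional_shift(f1) - distributional_shift(f2)
```

By construction the measure depends only on the cumulative profile, so it behaves like a per-distribution property rather than a pairwise comparison, matching the abstract's framing.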
