
Purpose: The purpose of this study is to identify the important determinants responsible for the variation in women's attitudes towards intimate partner violence (IPV). Methods: Nationally representative data on 17,863 women from the Bangladesh Demographic and Health Survey 2014 are used to address the research questions. Two response variables are constructed from the five attitude questions, and a series of individual- and community-level predictors are tested. The preliminary statistical methods employed in the study include univariate and bivariate distributions, while the adopted statistical models include binary logistic, ordinal logistic, and mixed-effects multilevel logistic models for each response variable and, finally, generalized ordinal logistic regression. Results: The statistical analyses reveal that, among the individual-level independent variables, age at first marriage, respondent's education, decision score, religion, NGO membership, access to information, husband's education, normalized wealth score, and division indicator have significant effects on women's attitudes towards IPV. Among the three community-level variables, only the mean decision score is found to significantly lower the likelihood of justifying IPV. Conclusions: For all significant determinants other than religion, NGO membership, and the division indicator, the higher the value of the variable, the lower the likelihood of justifying IPV. However, Muslim women, NGO members, and residents of divisions other than the reference division are found to be more tolerant of IPV than their respective counterparts. These findings suggest that the government, policymakers, practitioners, academics, and all other stakeholders should act on these significant determinants to change women's attitudes towards IPV and thus help remove this deep-rooted problem from society.
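
As a rough illustration of one of the adopted models, the sketch below fits a proportional-odds (ordinal logistic) regression with statsmodels. The data are synthetic stand-ins for the survey variables, and the column names (justify_count, decision_score, etc.) are hypothetical.

```python
# Minimal sketch of the proportional-odds (ordinal logistic) model used for
# the attitude outcome. Synthetic data stand in for the BDHS 2014 variables;
# all column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 1_000
df = pd.DataFrame({
    "age_first_marriage": rng.integers(12, 30, n),
    "education_years": rng.integers(0, 16, n),
    "decision_score": rng.integers(0, 5, n),
})
# Ordinal response: number of IPV-justifying answers (0..5).
latent = (-0.1 * df["age_first_marriage"]
          - 0.2 * df["decision_score"]
          + rng.logistic(size=n))
df["justify_count"] = pd.cut(latent, bins=6, labels=False)

res = OrderedModel(
    df["justify_count"],
    df[["age_first_marriage", "education_years", "decision_score"]],
    distr="logit",
).fit(method="bfgs", disp=False)
print(res.summary())
```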

Related content

In the initial wave of the COVID-19 pandemic we observed great discrepancies in both infection and mortality rates between countries. Besides biological and epidemiological factors, a multitude of social and economic criteria also influence the extent to which these discrepancies appear. Consequently, there is an active debate regarding the critical socio-economic and health factors that correlate with the infection and mortality outcomes of the pandemic. Here, we leverage Bayesian model averaging techniques and country-level data to investigate the potential of 28 variables, describing a diverse set of health and socio-economic characteristics, as correlates of the final number of infections and deaths during the first wave of the coronavirus pandemic. We show that only a few variables are able to robustly correlate with these outcomes. To understand the relationship between the potential correlates in explaining the infection and death rates, we create a Jointness Space. Using this space, we conclude that the extent to which each variable is able to provide a credible explanation for the COVID-19 infection/mortality outcome varies between countries because of their heterogeneous features.
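
To make the averaging step concrete, here is a minimal sketch of Bayesian model averaging over all subsets of a handful of candidate correlates, using the standard BIC approximation to each model's marginal likelihood. The data are synthetic and the variable names are invented; the paper screens 28 real covariates.

```python
# Illustrative Bayesian model averaging over all subsets of a few candidate
# correlates, using the BIC approximation to the marginal likelihood.
# Data and column names are made up for the example.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, cols = 100, ["gdp_pc", "share_age65", "hospital_beds", "pop_density"]
X = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)
y = 0.8 * X["share_age65"] + rng.normal(size=n)   # synthetic outcome

subsets = [s for r in range(len(cols) + 1)
           for s in itertools.combinations(cols, r)]
bics = np.array([
    sm.OLS(y, sm.add_constant(X[list(s)]) if s else np.ones((n, 1))).fit().bic
    for s in subsets
])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()

# Posterior inclusion probability: total weight of models containing each variable.
pip = {c: sum(w for w, s in zip(weights, subsets) if c in s) for c in cols}
print(pip)
```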

Opioid misuse is a national epidemic and a significant drug-related threat to the United States. While the scale of the problem is undeniable, estimates of the local prevalence of opioid misuse are lacking, despite their importance to policy-making and resource allocation. This is due, in part, to the challenge of directly measuring opioid misuse at a local level. In this paper, we develop a Bayesian hierarchical spatio-temporal abundance model that integrates indirect county-level data on opioid-related outcomes with state-level survey estimates of the prevalence of opioid misuse to estimate the latent county-level prevalence and counts of people who misuse opioids. A simulation study shows that our integrated model accurately recovers the latent counts and prevalence. We apply our model to county-level surveillance data on opioid overdose deaths and treatment admissions from the state of Ohio. Our proposed framework can be applied to other small area estimation problems for hard-to-reach populations, a common occurrence with many health conditions, such as those related to illicit behaviors.
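
A heavily simplified sketch of the data-integration idea, assuming PyMC and made-up numbers: latent county prevalences drive observed overdose deaths, while their population-weighted average is tied to the state survey estimate. The paper's actual model adds spatio-temporal structure and multiple outcome streams.

```python
# Toy version of integrating county outcome counts with a state survey:
# latent county-level misuse prevalence informs county overdose deaths,
# and the population-weighted state prevalence is anchored to the survey.
# All numbers are invented for illustration.
import numpy as np
import pymc as pm

pop = np.array([50_000, 120_000, 80_000])        # county populations
deaths = np.array([12, 40, 21])                  # observed opioid deaths
survey_mean, survey_se = 0.045, 0.004            # state survey prevalence

with pm.Model() as model:
    theta = pm.Beta("theta", 2, 50, shape=3)     # latent county prevalence
    death_rate = pm.Beta("death_rate", 1, 99)    # deaths per person misusing
    pm.Poisson("deaths", mu=pop * theta * death_rate, observed=deaths)
    state_prev = pm.Deterministic("state_prev", (theta * pop).sum() / pop.sum())
    pm.Normal("survey", mu=state_prev, sigma=survey_se, observed=survey_mean)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["theta"].mean(dim=("chain", "draw")).values)
```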

The methodological development of this paper is motivated by the need to address the following scientific question: does the issuance of heat alerts prevent adverse health effects? Our goal is to address this question within a causal inference framework in the context of time series data. A key challenge is that causal inference methods require the overlap assumption to hold: each unit (i.e., a day) must have a positive probability of receiving the treatment (i.e., issuing a heat alert on that day). In our motivating example, the overlap assumption is often violated: the probability of issuing a heat alert on a cool day is zero. To overcome this challenge, we propose a stochastic intervention for time series data which is implemented via an incremental time-varying propensity score (ItvPS). The ItvPS intervention is executed by multiplying the probability of issuing a heat alert on day $t$ -- conditional on past information up to day $t$ -- by an odds ratio $\delta_t$. First, we introduce a new class of causal estimands that relies on the ItvPS intervention. We provide theoretical results to show that these causal estimands can be identified and estimated under a weaker version of the overlap assumption. Second, we propose nonparametric estimators based on the ItvPS and derive an upper bound for the variances of these estimators. Third, we extend this framework to multi-site time series using a meta-analysis approach. Fourth, we show that the proposed estimators perform well in terms of bias and root mean squared error via simulations. Finally, we apply our proposed approach to estimate the causal effects of increasing the probability of issuing heat alerts on each warm-season day in reducing deaths and hospitalizations among Medicare enrollees in $2,837$ U.S. counties.
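
The heart of the ItvPS intervention is a shift on the odds scale: multiplying the odds of an alert by $\delta$ maps a propensity $p$ to $\delta p/(\delta p + 1 - p)$. The small sketch below makes the mapping explicit; note that $p = 0$ stays $0$, which is why only a weaker version of the overlap assumption is needed.

```python
# The ItvPS intervention shifts the heat-alert propensity on the odds scale:
# odds are multiplied by delta, so p becomes delta*p / (delta*p + 1 - p).
# A propensity of exactly 0 (a cool day) remains 0 under the intervention.
import numpy as np

def itvps_shift(p: np.ndarray, delta: float) -> np.ndarray:
    """Map propensities p under an odds-ratio multiplier delta."""
    return delta * p / (delta * p + 1.0 - p)

p = np.array([0.0, 0.05, 0.2, 0.5, 0.9])   # fitted alert propensities
print(itvps_shift(p, delta=2.0))
# -> [0.     0.0952 0.3333 0.6667 0.9474] (approximately)
```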

Sport climbing, which made its Olympic debut at the 2020 Summer Games, generally consists of three separate disciplines: speed climbing, bouldering, and lead climbing. However, the International Olympic Committee (IOC) only allowed one set of medals each for men and women in sport climbing. As a result, the governing body of sport climbing, rather than choosing only one of the three disciplines to include in the Olympics, decided to create a competition combining all three disciplines. In order to determine a winner, a combined scoring system was created using the product of the ranks across the three disciplines to determine an overall score for each climber. In this work, the rank-product scoring system of sport climbing is evaluated through simulation to investigate its general features, specifically, the advancement probabilities and scores for climbers given certain placements. Additionally, analyses of historical climbing contest results are presented and real examples of violations of the independence of irrelevant alternatives are illustrated. Finally, this work finds evidence that the current competition format is putting speed climbers at a disadvantage.
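
A minimal Monte Carlo sketch of rank-product scoring, assuming independent uniformly random ranks in the three disciplines (the paper's simulations condition on specific placements):

```python
# Monte Carlo sketch of the rank-product combined score: each climber's
# overall score is the product of their ranks in speed, bouldering, and
# lead (lower is better). Independent uniform ranks are a simplifying
# assumption for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n_climbers, n_sims = 8, 100_000
speed_winner_takes_gold = 0

for _ in range(n_sims):
    ranks = np.stack([rng.permutation(n_climbers) + 1 for _ in range(3)])
    overall_winner = ranks.prod(axis=0).argmin()
    if ranks[0, overall_winner] == 1:      # overall winner also won speed
        speed_winner_takes_gold += 1

print("P(overall winner won speed) ≈", speed_winner_takes_gold / n_sims)
```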

Recent work has demonstrated the catastrophic effects of poor cardinality estimates on query processing time. In particular, underestimating query cardinality can result in overly optimistic query plans which take orders of magnitude longer to complete than one generated with the true cardinality. Cardinality bounding avoids this pitfall by computing a strict upper bound on the query's output size using statistics about the database such as table sizes and degrees, i.e. value frequencies. In this paper, we extend this line of work by proving a novel bound called the Degree Sequence Bound which takes into account the full degree sequences and the max tuple multiplicity. This bound improves upon previous work incorporating degree constraints which focused on the maximum degree rather than the degree sequence. Further, we describe how to practically compute this bound using a learned approximation of the true degree sequences.
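
To convey the flavor of degree-sequence bounds, the toy sketch below bounds a single two-table join: the join size equals the sum over join keys v of dR(v)*dS(v), and by the rearrangement inequality this sum is maximised when the two degree sequences are paired in sorted order. This is only an illustration; the paper's Degree Sequence Bound handles general queries and max tuple multiplicities.

```python
# Toy degree-sequence bound for a single join R ⋈ S on one key:
# |R ⋈ S| = sum_v dR(v) * dS(v) <= sum_i dr[i] * ds[i], where dr and ds
# are the degree sequences sorted in descending order.
from collections import Counter

def degree_sequence_bound(keys_r, keys_s):
    dr = sorted(Counter(keys_r).values(), reverse=True)
    ds = sorted(Counter(keys_s).values(), reverse=True)
    return sum(a * b for a, b in zip(dr, ds))

R = ["x", "x", "x", "y", "z"]       # join-key column of R
S = ["x", "y", "y", "y", "w"]       # join-key column of S
true_size = sum(Counter(R)[k] * Counter(S)[k] for k in set(R) & set(S))
print(degree_sequence_bound(R, S), ">=", true_size)   # 11 >= 6
```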

Probabilistic modelling needs specialized tools to support modelers, decision-makers, and researchers in the design, checking, refinement, and communication of models. Users' comprehension of probabilistic models is vital in all of these cases, and interactive visualisations could enhance it. Although there are various studies evaluating interactivity in Bayesian reasoning and available tools for visualising inference-related distributions, we focus specifically on evaluating the effect of interaction on users' comprehension of a probabilistic model's structure. We conducted a user study based on our Interactive Pair Plot, which visualises a model's distributions and lets users condition the sample space graphically. Our results suggest that the interactive group's gains in understanding over the static group are most pronounced for more exotic structures, such as hierarchical models or unfamiliar parameterisations. As the detail of the inferred information increases, interaction does not lead to considerably longer response times. Finally, interaction improves users' confidence.

Counterfactual explanations are usually generated through heuristics that are sensitive to the search's initial conditions. The absence of guarantees of performance and robustness hinders trustworthiness. In this paper, we take a disciplined approach towards counterfactual explanations for tree ensembles. We advocate for a model-based search aiming at "optimal" explanations and propose efficient mixed-integer programming approaches. We show that isolation forests can be modeled within our framework to focus the search on plausible explanations with a low outlier score. We provide comprehensive coverage of additional constraints that model important objectives, heterogeneous data types, structural constraints on the feature space, along with resource and actionability restrictions. Our experimental analyses demonstrate that the proposed search approach requires a computational effort that is orders of magnitude smaller than previous mathematical programming algorithms. It scales up to large data sets and tree ensembles, where it provides, within seconds, systematic explanations grounded on well-defined models solved to optimality.
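
To illustrate the "optimal, model-based" idea in miniature, the sketch below finds an exact minimum-distance counterfactual for a single sklearn decision tree by enumerating leaves and projecting the query point onto each leaf's feature box; the paper instead encodes entire tree ensembles (and isolation-forest plausibility constraints) as a mixed-integer program.

```python
# Exact counterfactual for a single decision tree via leaf enumeration:
# collect the feature box of every leaf that predicts the target class,
# project the query point onto each box, and keep the closest projection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t, eps = clf.tree_, 1e-9

def leaf_boxes(node=0, lo=None, hi=None):
    lo = np.full(X.shape[1], -np.inf) if lo is None else lo
    hi = np.full(X.shape[1], np.inf) if hi is None else hi
    if t.children_left[node] == -1:            # reached a leaf
        yield node, lo, hi
        return
    f, thr = t.feature[node], t.threshold[node]
    l_hi, r_lo = hi.copy(), lo.copy()
    l_hi[f] = min(hi[f], thr)                  # left branch:  x[f] <= thr
    r_lo[f] = max(lo[f], thr + eps)            # right branch: x[f] >  thr
    yield from leaf_boxes(t.children_left[node], lo.copy(), l_hi)
    yield from leaf_boxes(t.children_right[node], r_lo, hi.copy())

x = X[0]
target = 1 - clf.predict(x.reshape(1, -1))[0]
candidates = [np.clip(x, lo, hi)               # projection onto the box
              for node, lo, hi in leaf_boxes()
              if t.value[node].argmax() == target]
best = min(candidates, key=lambda c: np.linalg.norm(c - x))
print("counterfactual:", best, "at distance", np.linalg.norm(best - x))
```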

Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process -- generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.
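
A toy sketch of the joint training setup, with an invented architecture: a shared encoder feeds a veracity head and an explanation head, and the two losses are summed. It shows the mechanics of optimising both objectives at once, not the paper's actual model.

```python
# Illustrative multi-task setup: one encoder, two heads, one summed loss.
# Dimensions, architecture, and data are placeholders.
import torch
import torch.nn as nn

class JointFactChecker(nn.Module):
    def __init__(self, vocab=10_000, d=256, n_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.veracity_head = nn.Linear(d, n_labels)
        self.explain_head = nn.Linear(d, vocab)   # token-level generator

    def forward(self, claim_ids):
        h, _ = self.encoder(self.embed(claim_ids))
        return self.veracity_head(h[:, -1]), self.explain_head(h)

model = JointFactChecker()
claim = torch.randint(0, 10_000, (2, 20))          # toy batch of claims
veracity_logits, explain_logits = model(claim)
labels = torch.tensor([0, 2])                      # toy veracity labels
expl_targets = torch.randint(0, 10_000, (2, 20))   # toy explanation tokens
loss = (nn.functional.cross_entropy(veracity_logits, labels)
        + nn.functional.cross_entropy(explain_logits.reshape(-1, 10_000),
                                      expl_targets.reshape(-1)))
loss.backward()                                    # one gradient step serves both tasks
```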

Previous work on event extraction has mainly focused on predicting event triggers and argument roles, treating entity mentions as being provided by human annotators. This is unrealistic, as entity mentions are usually predicted by existing toolkits whose errors may propagate to event trigger and argument role recognition. A few recent studies have addressed this problem by jointly predicting entity mentions, event triggers, and arguments. However, such work is limited to using discrete, hand-engineered features to represent contextual information for the individual tasks and their interactions. In this work, we propose a novel model that jointly performs predictions for entity mentions, event triggers, and arguments based on shared hidden representations from deep learning. The experiments demonstrate the benefits of the proposed method, which achieves state-of-the-art performance for event extraction.

This paper addresses the problem of viewpoint estimation of an object in a given image. It presents five key insights that should be taken into consideration when designing a CNN that solves the problem. Based on these insights, the paper proposes a network in which (i) the architecture jointly solves detection, classification, and viewpoint estimation; (ii) new types of data are added to the training set; and (iii) a novel loss function, which takes into account both the geometry of the problem and the new types of data, is proposed. Our network improves the state-of-the-art results for this problem by 9.8%.
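
As a hypothetical example of a loss that accounts for the geometry of the problem, the sketch below measures azimuth error on the circle, so 359° vs. 1° costs 2° rather than 358°; the paper's actual loss is more elaborate and also involves the new data types.

```python
# Hypothetical geometry-aware viewpoint loss: angular error is measured on
# the circle rather than on the real line, so wrap-around is handled.
import torch

def angular_loss(pred_deg: torch.Tensor, true_deg: torch.Tensor) -> torch.Tensor:
    diff = torch.remainder(pred_deg - true_deg, 360.0)
    return torch.minimum(diff, 360.0 - diff).mean()

pred = torch.tensor([359.0, 10.0, 180.0])
true = torch.tensor([1.0, 350.0, 0.0])
print(angular_loss(pred, true))   # mean of [2, 20, 180] ≈ 67.33
```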
