Cancer is a significant health issue globally, and it is well known that cancer risk varies geographically. However, in many countries there are no small area level data on cancer risk factors with high resolution and complete reach, which hinders the development of targeted prevention strategies. Using Australia as a case study, we used the 2017-2018 National Health Survey to generate prevalence estimates for 2221 small areas across Australia for eight cancer risk factor measures covering smoking, alcohol, physical activity, diet and weight. Utilising a recently developed Bayesian two-stage small area estimation methodology, our models incorporated survey-only covariates, spatial smoothing and hierarchical modelling techniques, along with a vast array of small area level auxiliary data, including census, remoteness, and socioeconomic data. The models borrowed strength from previously published cancer risk estimates provided by the Social Health Atlases of Australia. Estimates were internally and externally validated. By improving the reach and resolution of previously published cancer risk factor estimates, we show that health behaviours across Australia in 2017-18 exhibited greater spatial disparities than previously realised. The derived estimates reveal a higher prevalence of unhealthy behaviours in more remote areas and in areas of lower socioeconomic status, a trend that aligns well with previous work. Our study addresses the gaps in small area level cancer risk factor estimates in Australia. The new estimates provide improved spatial resolution and reach and will enable more targeted cancer prevention strategies at the small area level, supporting policy makers, researchers, and the general public in understanding the spatial distribution of cancer risk factors in Australia. To help disseminate this work, the estimates will be made available in the Australian Cancer Atlas 2.0.
There are various applications in which companies need to decide which individuals to target with a treatment. To support such decisions, uplift models are applied to predict treatment effects at the individual level. Based on the predicted treatment effects, individuals can be ranked and treatment allocation can be prioritized according to this ranking. An implicit assumption, which has not been questioned in the previous uplift modeling literature, is that this treatment prioritization approach tends to bring individuals with high treatment effects to the top and individuals with low treatment effects to the bottom of the ranking. In our research, we show that heteroskedasticity in the training data can bias the uplift model ranking: individuals with the highest treatment effects can accumulate in large numbers at the bottom of the ranking. We explain theoretically how heteroskedasticity can bias the ranking of uplift models and demonstrate this process in a simulation and on real-world data. We argue that this ranking bias due to heteroskedasticity is likely to occur in many real-world applications and requires modification of the treatment prioritization to achieve efficient treatment allocation.
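To make the mechanism concrete, here is a minimal simulation sketch (ours, not the paper's experiment) using a simple two-model (T-learner) uplift estimator built with scikit-learn random forests. The segment with the highest true treatment effect also has the noisiest outcomes, and under these assumptions it typically dominates the bottom decile of the predicted-uplift ranking despite having the largest true effect:

```python
# Illustrative sketch of ranking bias under heteroskedasticity (not the paper's code).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 10_000
segment = rng.integers(0, 2, n)                 # segment 1: high effect, high noise
x = np.column_stack([segment, rng.normal(size=(n, 3))])
treated = rng.integers(0, 2, n)

true_effect = np.where(segment == 1, 2.0, 0.5)  # segment 1 has the largest true uplift
noise_sd = np.where(segment == 1, 10.0, 1.0)    # ...but heteroskedastic (noisy) outcomes
y = true_effect * treated + rng.normal(scale=noise_sd)

# T-learner: separate outcome models for treated and control, uplift = difference
m1 = RandomForestRegressor(n_estimators=100, random_state=0).fit(x[treated == 1], y[treated == 1])
m0 = RandomForestRegressor(n_estimators=100, random_state=0).fit(x[treated == 0], y[treated == 0])
uplift = m1.predict(x) - m0.predict(x)

rank = np.argsort(-uplift)                      # highest predicted uplift first
bottom = rank[-n // 10:]                        # bottom decile of the ranking
print("share of high-effect segment in bottom decile:", (segment[bottom] == 1).mean())
```

Because the noisy segment's predictions spread into both tails, a large share of its individuals lands at the bottom of the ranking, which is the bias discussed above.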
Brain atrophy and white matter hyperintensity (WMH) are critical neuroimaging features for ascertaining brain injury in cerebrovascular disease and multiple sclerosis. Automated segmentation and quantification are desirable, but existing methods require high-resolution MRI with good signal-to-noise ratio (SNR). This precludes application to clinical and low-field portable MRI (pMRI) scans, thus hampering large-scale tracking of atrophy and WMH progression, especially in underserved areas where pMRI has huge potential. Here we present a method that segments white matter hyperintensity and 36 brain regions from scans of any resolution and contrast (including pMRI) without retraining. We show results on six public datasets and on a private dataset with paired high- and low-field scans (3T and 64mT), where we attain strong correlations between the WMH ($\rho$=.85) and hippocampal volumes (r=.89) estimated at both fields. Our method is publicly available as part of FreeSurfer, at: //surfer.nmr.mgh.harvard.edu/fswiki/WMH-SynthSeg.
In Bayesian theory, the role of information is central. The influence exerted by prior information on posterior outcomes often jeopardizes Bayesian studies, due to the potentially subjective nature of the prior choice. When the studied model is not enriched with sufficient a priori information, reference prior theory emerges as a proficient tool. Based on the mutual information criterion, the theory handles the construction of a non-informative prior whose choice could be called objective. We unveil an original analogy between reference prior theory and Global Sensitivity Analysis, from which we propose a natural generalization of the mutual information definition. A class of our generalized metrics is studied, and our results reinforce the choice of the Jeffreys prior, which satisfies our extended definition of a reference prior.
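For context, the Berger-Bernardo mutual information criterion that reference prior theory builds on can be written as follows (our notation; the generalization discussed above replaces this quantity with a broader class of metrics):
\[
  I(\pi) \;=\; \int_{\Theta} \pi(\theta) \int_{\mathcal{X}} p(x \mid \theta)\,
  \log \frac{p(x \mid \theta)}{m_{\pi}(x)}\, dx\, d\theta,
  \qquad
  m_{\pi}(x) \;=\; \int_{\Theta} p(x \mid \theta)\, \pi(\theta)\, d\theta,
\]
i.e. the mutual information between the parameter $\theta \sim \pi$ and the data $X$; the reference prior is the prior that maximizes an asymptotic version of $I(\pi)$.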
Efficiently counting or detecting defective items is a crucial task in various fields, ranging from biological testing to quality control to streaming algorithms. The \emph{group testing estimation problem} concerns estimating the number of defective elements $d$ in a collection of $n$ elements to within a given factor. We primarily consider the classical query model, in which a query reveals whether the selected group of elements contains a defective one. We show that any non-adaptive randomized algorithm that estimates the value of $d$ within a constant factor requires $\Omega(\log n)$ queries. This confirms that the known $O(\log n)$ upper bound by Bshouty (2019) is tight and resolves a conjecture by Damaschke and Sheikh Muhammad (2010). Additionally, we prove similar matching upper and lower bounds in the threshold query model.
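For intuition about the matching upper bound, here is a hedged sketch of a folklore non-adaptive randomized scheme with $O(\log n)$ queries (an illustration only, not Bshouty's construction): each query includes every element independently with probability $2^{-i}$, and the largest scale $i$ at which a query still comes back positive tracks $d$ up to a constant factor with constant probability (which can be boosted by repetition).

```python
# Illustrative sketch of a folklore non-adaptive estimator (not Bshouty's algorithm).
import math, random

def estimate_defectives(n, is_defective):
    """Return a rough constant-factor estimate of the number of defective items among 0..n-1."""
    estimate = 1
    for i in range(1, math.ceil(math.log2(n)) + 1):
        p = 2.0 ** -i
        # Non-adaptive: all groups could be sampled up front, independent of answers.
        group = [j for j in range(n) if random.random() < p]
        if any(is_defective(j) for j in group):   # one group-testing query
            estimate = 2 ** i                     # largest scale still positive ~ d
    return estimate

# toy check: 40 defectives among 10,000 items
defects = set(random.sample(range(10_000), 40))
print(estimate_defectives(10_000, lambda j: j in defects))
```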
Radiology reports are an instrumental part of modern medicine, informing key clinical decisions such as diagnosis and treatment. The worldwide shortage of radiologists, however, restricts access to expert care and imposes heavy workloads, contributing to avoidable errors and delays in report delivery. While recent progress in automated report generation with vision-language models offers clear potential for ameliorating the situation, the path to real-world adoption has been stymied by the challenge of evaluating the clinical quality of AI-generated reports. In this study, we build a state-of-the-art report generation system for chest radiographs, \textit{Flamingo-CXR}, by fine-tuning a well-known vision-language foundation model on radiology data. To evaluate the quality of the AI-generated reports, a group of 16 certified radiologists provided detailed evaluations of AI-generated and human-written reports for chest X-rays from an intensive care setting in the United States and an inpatient setting in India. At least one radiologist (out of two per case) preferred the AI report to the ground truth report in over 60$\%$ of cases for both datasets. Among the subset of AI-generated reports that contained errors, the most frequently cited reasons related to the location and the finding, whereas for human-written reports most mistakes related to severity and the finding. This disparity suggests potential complementarity between our AI system and human experts, prompting us to develop an assistive scenario in which \textit{Flamingo-CXR} generates a first-draft report, which is subsequently revised by a clinician. This is the first demonstration of clinician-AI collaboration for report writing, and the resultant reports are judged by at least one radiologist to be equivalent to or preferred over reports written by experts alone in 80$\%$ of inpatient cases and 60$\%$ of intensive care cases.
Alzheimer's disease has an increasing prevalence in the population worldwide, yet current diagnostic methods based on recommended biomarkers are only available in specialized clinics. Due to these circumstances, Alzheimer's disease is usually diagnosed late, which contrasts with the currently available treatment options that are only effective for patients at an early stage. Blood-based biomarkers could fill this gap by providing easily accessible and low-cost methods for early diagnosis of the disease. In particular, immune-based blood biomarkers might be a promising option, given the recently discovered cross-talk between immune cells of the central nervous system and those of the peripheral immune system. With the help of machine learning algorithms and mechanistic modeling approaches, such as agent-based modeling, in-depth analyses are possible both of simulated cell dynamics and of high-dimensional omics resources indicative of pathway signaling changes. Here, we give a background on advances in research on brain-immune system cross-talk in Alzheimer's disease and review recent machine learning and mechanistic modeling approaches that leverage modern omics technologies for blood-based, immune system-related biomarker discovery.
Physics-informed neural networks (PINNs) have recently been widely used for the robust and accurate approximation of PDEs. We provide rigorous upper bounds on the generalization error of PINNs approximating solutions of the forward problem for PDEs. An abstract formalism is introduced, and stability properties of the underlying PDE are leveraged to derive an estimate for the generalization error in terms of the training error and the number of training samples. This abstract framework is illustrated with several examples of nonlinear PDEs. Numerical experiments validating the proposed theory are also presented.
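Schematically (our notation, not the paper's exact statement or constants), estimates of this type take the form
\[
  \mathcal{E}_{G} \;\lesssim\; C_{1}\,\mathcal{E}_{T} \;+\; C_{2}\, N^{-\alpha},
\]
where $\mathcal{E}_{G}$ denotes the generalization error, $\mathcal{E}_{T}$ the training (PDE residual) error evaluated at $N$ training/quadrature points, $\alpha > 0$ reflects the accuracy of the underlying quadrature rule, and the constants depend on the stability properties of the PDE.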
Hesitant fuzzy sets are widely used in instances of uncertainty and hesitation. The inclusion relationship is an important and foundational notion for sets, so hesitant fuzzy sets, as a kind of set, need an explicit definition of inclusion. Based on hesitant fuzzy membership degrees of discrete form, several kinds of inclusion relationships for hesitant fuzzy sets are proposed. Some foundational propositions of hesitant fuzzy sets and of families of hesitant fuzzy sets are then presented. Finally, some foundational propositions of hesitant fuzzy information systems with respect to parameter reductions are put forward, and an example and an algorithm are given to illustrate the process of parameter reduction.
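For orientation, one widely used componentwise candidate (stated here purely as an illustration, not necessarily one of the definitions proposed above): for hesitant fuzzy sets $A$ and $B$ on $X$ whose discrete membership sets are sorted in increasing order and extended to a common length,
\[
  A \subseteq B \iff h_{A}^{\sigma(k)}(x) \le h_{B}^{\sigma(k)}(x) \quad \text{for all } x \in X \text{ and all } k .
\]
For example, with $h_{A}(x)=\{0.2, 0.4\}$ and $h_{B}(x)=\{0.3, 0.6\}$ the condition holds at $x$, since $0.2 \le 0.3$ and $0.4 \le 0.6$.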
The identification of essential proteins in protein-protein interaction networks (PINs) can help to discover drug targets and prevent disease. To improve the accuracy of essential protein identification, researchers have attempted to obtain refined PINs by combining multiple sources of biological information to filter out unreliable interactions. Unfortunately, such approaches drastically reduce the number of nodes in the PIN after multiple refinements and result in a sparser PIN, which makes a considerable portion of essential proteins unidentifiable. In this paper, we propose a multi-layer refined network (MR-PIN) that addresses this problem. Firstly, four refined networks are constructed by respectively integrating different biological information into the static PIN to form a multi-layer heterogeneous network. Then, scores of proteins in each network layer are calculated by an existing node ranking method, and the importance score of a protein in the MR-PIN is evaluated as the geometric mean of its scores across all layers. Finally, all nodes are sorted by their importance scores to determine their essentiality. To evaluate the effectiveness of the multi-layer refined network model, we apply 16 node ranking methods to the MR-PIN and compare the results with those on the SPIN, DPIN and RDPIN. The predictive performance of these ranking methods is then validated in terms of the number of essential proteins identified among the top 100 to top 600 candidates, sensitivity, specificity, positive predictive value, negative predictive value, F-measure, accuracy, Jackknife, ROC AUC and PR AUC. The experimental results show that the MR-PIN is superior to the existing refined PINs in the identification accuracy of essential proteins.
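A minimal sketch (not the authors' implementation) of the aggregation step: per-layer centrality scores are combined by their geometric mean and proteins are ranked by the combined score. For simplicity, this hypothetical toy example uses two made-up layers and only considers proteins present in every layer:

```python
# Illustrative multi-layer score aggregation via geometric mean (simplified sketch).
import math

def multilayer_rank(layer_scores):
    """layer_scores: {layer_name: {protein: centrality score}}.
    Returns proteins ranked by the geometric mean of their per-layer scores."""
    proteins = set.intersection(*(set(s) for s in layer_scores.values()))
    combined = {}
    for p in proteins:
        scores = [layer_scores[layer][p] for layer in layer_scores]
        combined[p] = math.prod(scores) ** (1.0 / len(scores))  # geometric mean
    return sorted(combined, key=combined.get, reverse=True)     # most essential first

layers = {  # toy scores from two hypothetical layers
    "coexpression": {"P1": 0.9, "P2": 0.2, "P3": 0.6},
    "subcellular":  {"P1": 0.7, "P2": 0.8, "P3": 0.1},
}
print(multilayer_rank(layers))  # ['P1', 'P2', 'P3']
```

The geometric mean rewards proteins that score consistently well across layers, so a protein must be supported by several sources of biological evidence to rank near the top.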
Breast cancer remains a global challenge, causing over 600,000 deaths globally in 2018. To achieve earlier breast cancer detection, screening x-ray mammography is recommended by health organizations worldwide and has been estimated to decrease breast cancer mortality by 20-40%. Nevertheless, significant false positive and false negative rates, as well as high interpretation costs, leave opportunities for improving quality and access. To address these limitations, there has been much recent interest in applying deep learning to mammography; however, obtaining large amounts of annotated data poses a challenge for training deep learning models for this purpose, as does ensuring generalization beyond the populations represented in the training dataset. Here, we present an annotation-efficient deep learning approach that 1) achieves state-of-the-art performance in mammogram classification, 2) successfully extends to digital breast tomosynthesis (DBT; "3D mammography"), 3) detects cancers in clinically negative prior mammograms of cancer patients, 4) generalizes well to a population with low screening rates, and 5) outperforms five out of five full-time breast imaging specialists, improving absolute sensitivity by an average of 14%. Our results demonstrate promise towards software that can improve the accuracy of and access to screening mammography worldwide.