
Recent advances in AI applications to healthcare have shown remarkable promise in surpassing human performance in diagnosis and disease prognosis. With the increasing complexity of AI models, however, concerns regarding their opacity, potential biases, and the need for interpretability have grown. To ensure trust and reliability in AI systems, especially in clinical risk prediction models, explainability becomes crucial. Explainability usually refers to an AI system's ability to provide a robust interpretation of its decision-making logic, or of the decisions themselves, to human stakeholders. In clinical risk prediction, related aspects of explainability such as fairness, bias, trust, and transparency also represent important concepts beyond interpretability alone. In this review, we address the relationships between these concepts, as they are often used together or interchangeably. The review also discusses recent progress in developing explainable models for clinical risk prediction, highlighting the importance of quantitative and clinical evaluation and validation across the modalities common in clinical practice. It emphasizes the need for external validation and for combining diverse interpretability methods to enhance trust and fairness. Rigorous testing, such as using synthetic datasets with known generative factors, can further improve the reliability of explainability methods. Open-access and code-sharing resources are essential for transparency and reproducibility, enabling the growth and trustworthiness of explainable research. While challenges remain, an end-to-end approach to explainability in clinical risk prediction, incorporating stakeholders from clinicians to developers, is essential for success.
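The suggestion to test explainability methods on synthetic data with known generative factors can be made concrete. Below is a minimal, hypothetical sketch: labels are generated from two known features, a classifier is fit, and we check whether a standard attribution method (permutation importance, used here as one illustrative choice) recovers the true factors. All names, coefficients, and parameters are illustrative assumptions, not from the review.

```python
# Hypothetical sketch: validate an explainability method on synthetic data
# whose generative factors are known. Only features 0 and 1 drive the label;
# a faithful attribution method should rank them highest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
signal = 2.0 * X[:, 0] - 1.5 * X[:, 1]          # the known generative factors
y = (signal + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
print("Top-ranked features:", np.argsort(imp.importances_mean)[::-1][:2])  # expect 0 and 1
```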

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling, and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
October 5, 2023

Objective: Semantic indexing of biomedical literature is usually done at the level of MeSH descriptors, with several related but distinct biomedical concepts often grouped together and treated as a single topic. This study proposes a new method for the automated refinement of subject annotations at the level of MeSH concepts. Methods: Lacking labelled data, we rely on weak supervision based on concept occurrence in the abstract of an article, enhanced by dictionary-based heuristics. In addition, we investigate deep learning approaches, making design choices to tackle the particular challenges of this task. The new method is evaluated in a large-scale retrospective scenario, based on concepts that have been promoted to descriptors. Results: In our experiments, concept occurrence was the strongest heuristic, achieving a macro-F1 score of about 0.63 across several labels. The proposed method improved on it further by more than 4 percentage points. Conclusion: The results suggest that concept occurrence is a strong heuristic for refining coarse-grained labels at the level of MeSH concepts, and that the proposed method improves on it further.
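The concept-occurrence heuristic is simple enough to sketch. The snippet below weakly labels an article with a MeSH concept if the concept name or a dictionary synonym appears in its abstract; the concept IDs and synonym dictionary are illustrative stand-ins, not the paper's actual resources.

```python
# Hypothetical sketch of the concept-occurrence weak-labelling heuristic:
# assign a MeSH concept if it (or a dictionary synonym) occurs in the abstract.
import re

concept_synonyms = {  # illustrative concept ID and synonyms
    "C000657245": ["covid-19", "sars-cov-2 infection"],
}

def weak_labels(abstract: str, synonyms: dict) -> set:
    text = abstract.lower()
    labels = set()
    for concept_id, terms in synonyms.items():
        if any(re.search(r"\b" + re.escape(t) + r"\b", text) for t in terms):
            labels.add(concept_id)
    return labels

print(weak_labels("We study SARS-CoV-2 infection outcomes in adults.", concept_synonyms))
```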

Graph neural networks (GNNs) are becoming increasingly popular in the medical domain for the tasks of disease classification and outcome prediction. Since patient data is not readily available as a graph, most existing methods either manually define a patient graph or learn a latent graph based on pairwise similarities between patients. Hypergraph neural network (HGNN)-based methods were also introduced recently to exploit potential higher-order associations between patients by representing them as a hypergraph. In this work, we propose a patient hypergraph network (PHGN), investigated for the first time in an inductive learning setup for binary outcome prediction in oropharyngeal cancer (OPC) patients using computed tomography (CT)-based radiomic features. Additionally, the proposed model was extended to perform time-to-event analyses and compared with GNN and baseline linear models.
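The abstract does not specify how the patient hypergraph is built; a common generic recipe, shown below purely as an assumption, forms one hyperedge per patient connecting that patient and its k nearest neighbours in radiomic-feature space, yielding a node-by-hyperedge incidence matrix H that an HGNN can consume.

```python
# Assumed k-NN hypergraph construction over radiomic features: one hyperedge
# per patient containing the patient and its k nearest neighbours. A generic
# recipe for illustration, not necessarily the PHGN paper's exact method.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_hypergraph_incidence(features: np.ndarray, k: int = 5) -> np.ndarray:
    n = features.shape[0]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)        # each row includes the point itself
    H = np.zeros((n, n))                    # rows: patients, columns: hyperedges
    for e, members in enumerate(idx):
        H[members, e] = 1.0
    return H

H = knn_hypergraph_incidence(np.random.rand(20, 100), k=3)
print(H.shape, H.sum(axis=0))               # each hyperedge has k+1 members
```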

This paper addresses the problem of selective classification for deep neural networks, where a model is allowed to abstain from low-confidence predictions to avoid potential errors. We focus on so-called post-hoc methods, which replace the confidence estimator of a given classifier without retraining or modifying it, thus being practically appealing. Considering neural networks with softmax outputs, our goal is to identify the best confidence estimator that can be computed directly from the unnormalized logits. This problem is motivated by the intriguing observation in recent work that many classifiers appear to have a "broken" confidence estimator, in the sense that their selective classification performance is much worse than what could be expected by their corresponding accuracies. We perform an extensive experimental study of many existing and proposed confidence estimators applied to 84 pretrained ImageNet classifiers available from popular repositories. Our results show that a simple $p$-norm normalization of the logits, followed by taking the maximum logit as the confidence estimator, can lead to considerable gains in selective classification performance, completely fixing the pathological behavior observed in many classifiers. As a consequence, the selective classification performance of any classifier becomes almost entirely determined by its corresponding accuracy. Moreover, these results are shown to be consistent under distribution shift. We also investigate why certain classifiers innately have a good confidence estimator that apparently cannot be improved by post-hoc methods.
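The recipe in this abstract fits in a few lines: normalize the logit vector by its p-norm, then take the maximum normalized logit as the confidence score, and abstain below a threshold. The sketch below implements that recipe; the value p=2 and the 80% coverage threshold are illustrative choices, since p is a tunable hyperparameter.

```python
# Sketch of the confidence estimator described above: p-norm-normalize the
# logits, take the maximum as the confidence score, abstain below a threshold.
import numpy as np

def pnorm_maxlogit(logits: np.ndarray, p: float = 2.0) -> np.ndarray:
    norm = np.linalg.norm(logits, ord=p, axis=-1, keepdims=True)
    return (logits / norm).max(axis=-1)

logits = np.random.randn(4096, 1000)        # placeholder unnormalized logits
scores = pnorm_maxlogit(logits)             # one confidence score per sample
accept = scores >= np.quantile(scores, 0.2) # e.g. keep the top 80% coverage
```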

Agent-based models are widely used to predict infectious disease spread. For these predictions, one needs to understand how each input parameter affects the result. Here, some parameters may affect the sensitivities of others, requiring the analysis of higher-order coefficients through, e.g., Sobol sensitivity analysis. The geographical structures of real-world regions are distinct in that they are difficult to reduce to single parameter values, making a unified sensitivity analysis intractable. Yet analyzing the importance of geographical structure on the sensitivity of other input parameters is important, because a strong effect would justify the use of models with real-world geographical representations, as opposed to stylized ones. Here we perform a grouped Sobol sensitivity analysis on COVID-19 spread simulations across a set of three diverse real-world geographical representations. We study the differences in both results and the sensitivity of non-geographical parameters across these geographies. By comparing Sobol indices of parameters across geographies, we find evidence that the infection rate may be more sensitive in regions where the population is segregated, while parameters like the recovery period of mild cases are more sensitive in regions with mixed populations. We also show how geographical structure affects the way parameter sensitivities change over time.
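A grouped Sobol analysis of this kind can be set up with the SALib package, which accepts a "groups" entry in the problem definition. The sketch below is a minimal illustration with a toy stand-in for the epidemic simulator; the parameter names, bounds, and grouping are assumptions, not the paper's configuration.

```python
# Sketch of a grouped Sobol sensitivity analysis with SALib. The model,
# parameter names, bounds, and grouping are illustrative stand-ins for an
# agent-based epidemic simulator.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["infection_rate", "recovery_period_mild", "geography_mixing"],
    "bounds": [[0.05, 0.5], [5.0, 20.0], [0.0, 1.0]],
    "groups": ["epidemic", "epidemic", "geography"],
}

def toy_model(x):
    beta, recovery, mixing = x
    return beta * mixing * 100.0 / recovery     # placeholder for a simulated attack rate

X = saltelli.sample(problem, 512, calc_second_order=False)
Y = np.apply_along_axis(toy_model, 1, X)
Si = sobol.analyze(problem, Y, calc_second_order=False)
print(Si["S1"])   # one first-order index per group, in order of first appearance
```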

Mobile device proficiency is increasingly important for everyday living, including for the delivery of healthcare services. Human-device interactions represent a potential source of measures for cognitive neurology and aging research. Although traditional pen-and-paper evaluations serve as valuable tools within public health strategies for population-scale cognitive assessment, digital devices could substantially extend cognitive assessment. However, even person-centered studies often fail to incorporate measures of mobile device proficiency, and research with digital mobile technology frequently neglects these evaluations. Moreover, cognitive screening, a fundamental part of brain health evaluation and a widely accepted strategy for identifying individuals at high risk of cognitive impairment and dementia, still lacks standardization in research using digital devices with older adults. To address this shortfall, the DigiTAU collaborative and interdisciplinary project is developing refined methodological parameters for the investigation of digital biomarkers. With careful consideration of cognitive design elements, we describe here the performance-based Mobile Device Abilities Test (MDAT), a simple, low-cost, and reproducible open-source test framework. This result was achieved with a cross-sectional population sample of 101 low- and middle-income participants aged 20 to 79 years. Partial least squares structural equation modeling (PLS-SEM) was used to assess the measurement of the construct. We obtained a reliable method with internal consistency and good content validity with respect to digital competences, one that shows little interference from self-perceived global functional disability, health self-perception, and motor dexterity. Limitations of the method are discussed, and paths to improve it and establish better standards are highlighted.
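Internal consistency of a test battery like this is commonly summarized with Cronbach's alpha. The sketch below computes it from a participants-by-items score matrix on synthetic data; it illustrates the kind of reliability check reported above and is not the study's PLS-SEM pipeline.

```python
# Generic internal-consistency check (Cronbach's alpha): rows are participants,
# columns are test items. Synthetic data for illustration only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(101, 1))                      # shared latent trait
items = ability + rng.normal(scale=0.5, size=(101, 6))   # 6 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")             # high for consistent items
```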

The need for skilled medical support is growing in the era of digital healthcare. This research presents an innovative strategy, utilizing the RuBERT model, for categorizing user inquiries in the field of medical consultation with a focus on expert specialization. By harnessing the capabilities of transformers, we fine-tuned the pre-trained RuBERT model on a varied dataset, which facilitates precise matching between queries and particular medical specialties. Using a comprehensive dataset, we demonstrated our approach's superior performance, with an F1-score of over 92% computed through both cross-validation and a traditional train/test split. Our approach has shown excellent generalization across medical domains such as cardiology, neurology, and dermatology. This methodology provides practical benefits by directing users to appropriate specialists for prompt and targeted medical advice. It also enhances healthcare system efficiency, reduces practitioner burden, and improves patient care quality. In summary, our suggested strategy facilitates access to specific medical knowledge, offering prompt and precise advice within the digital healthcare field.
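A fine-tuning setup of this kind is typically expressed with the Hugging Face transformers API. The sketch below is a hedged illustration: the checkpoint name, label set, toy example data (real queries would be in Russian), and hyperparameters are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: fine-tune a public RuBERT checkpoint to route user queries
# to medical specialties. Checkpoint, labels, data, and hyperparameters are
# illustrative assumptions.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "DeepPavlov/rubert-base-cased"   # a public RuBERT checkpoint
labels = ["cardiology", "neurology", "dermatology"]

data = Dataset.from_dict({
    "text": ["Chest pain during exercise", "Frequent migraines", "Itchy rash"],
    "label": [0, 1, 2],
})
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
data = data.map(lambda b: tokenizer(b["text"], truncation=True,
                                    padding="max_length", max_length=64),
                batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=len(labels))
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rubert-specialty", num_train_epochs=3),
    train_dataset=data,
)
trainer.train()
```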

Unscheduled treatment interruptions may lead to reduced quality of care in radiation therapy (RT). Identifying the effects of the RT prescription dose on treatment interruptions, mediated through the doses delivered to different organs at risk (OARs), can inform future treatment planning. The radiation exposure to OARs can be summarized by a matrix of dose-volume histograms (DVH) for each patient. Although various methods for high-dimensional mediation analysis have been proposed recently, few studies have investigated how matrix-valued data can be treated as mediators. In this paper, we propose a novel Bayesian joint mediation model for high-dimensional matrix-valued mediators. In this joint model, latent features are extracted from the matrix-valued data through an adaptation of probabilistic multilinear principal components analysis (MPCA), retaining the inherent matrix structure. We derive and implement a Gibbs sampling algorithm to jointly estimate all model parameters, and introduce a Varimax rotation method to identify active indicators of mediation among the matrix-valued data. Our simulation study finds that the proposed joint model has higher efficiency in estimating causal decomposition effects compared to an alternative two-step method, and demonstrates that the mediation effects can be identified and visualized in the matrix form. We apply the method to study the effect of prescription dose on treatment interruptions in anal canal cancer patients.
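Varimax rotation, used above to sharpen which indicators load on which latent feature, admits a compact implementation. The sketch below is the standard SVD-based iteration applied to an arbitrary loadings matrix; it is a generic routine, independent of the paper's Gibbs sampler and MPCA model.

```python
# Standard SVD-based varimax rotation of a loadings matrix (rows: indicators,
# columns: latent features). Generic illustration, not the paper's code.
import numpy as np

def varimax(loadings: np.ndarray, gamma: float = 1.0,
            max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(loadings.T @ (
            L**3 - (gamma / p) * L @ np.diag(np.diag(L.T @ L))))
        R = u @ vt
        var_new = s.sum()
        if var_new < var * (1 + tol):   # stop once the criterion plateaus
            break
        var = var_new
    return loadings @ R

rotated = varimax(np.random.rand(30, 4))
```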

We study pointwise estimation and uncertainty quantification for a sparse variational Gaussian process method with eigenvector inducing variables. For a rescaled Brownian motion prior, we derive theoretical guarantees and limitations for the frequentist size and coverage of pointwise credible sets. For sufficiently many inducing variables, we precisely characterize the asymptotic frequentist coverage, deducing when credible sets from this variational method are conservative and when overconfident/misleading. We numerically illustrate the applicability of our results and discuss connections with other common Gaussian process priors.
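The notion of frequentist coverage of pointwise credible sets can be illustrated empirically. The sketch below uses sklearn's exact GP regressor as a stand-in (it does not implement the sparse variational method or the eigenvector inducing variables studied in the paper): over repeated simulated datasets, it counts how often the 95% credible interval at a fixed point contains the true function value.

```python
# Empirical frequentist coverage of pointwise 95% credible sets, using an
# exact GP as a stand-in for the sparse variational method analyzed above.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
f = lambda x: np.sin(4 * x)                  # ground-truth regression function
x0, hits, reps = 0.5, 0, 200                 # coverage checked at a fixed point

for _ in range(reps):
    X = rng.uniform(0, 1, size=(50, 1))
    y = f(X).ravel() + rng.normal(scale=0.1, size=50)
    gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(0.01)).fit(X, y)
    mu, sd = gp.predict(np.array([[x0]]), return_std=True)
    hits += abs(mu[0] - f(x0)) <= 1.96 * sd[0]

print(f"empirical coverage at x0: {hits / reps:.2f}")  # compare with nominal 0.95
```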

Active surveillance (AS) is a suitable management option for newly-diagnosed prostate cancer (PCa), which usually presents low to intermediate clinical risk. Patients enrolled in AS have their tumor closely monitored via longitudinal multiparametric magnetic resonance imaging (mpMRI), serum prostate-specific antigen tests, and biopsies. Hence, the patient is prescribed treatment when these tests identify progression to higher-risk PCa. However, current AS protocols rely on detecting tumor progression through direct observation according to standardized monitoring strategies. This approach limits the design of patient-specific AS plans and may lead to the late detection and treatment of tumor progression. Here, we propose to address these issues by leveraging personalized computational predictions of PCa growth. Our forecasts are obtained with a spatiotemporal biomechanistic model informed by patient-specific longitudinal mpMRI data. Our results show that our predictive technology can represent and forecast the global tumor burden for individual patients, achieving concordance correlation coefficients ranging from 0.93 to 0.99 across our cohort (n=7). Additionally, we identify a model-based biomarker of higher-risk PCa: the mean proliferation activity of the tumor (p=0.041). Using logistic regression, we construct a PCa risk classifier based on this biomarker that achieves an area under the receiver operating characteristic curve of 0.83. We further show that coupling our tumor forecasts with this PCa risk classifier enables the early identification of PCa progression to higher-risk disease by more than one year. Thus, we posit that our predictive technology constitutes a promising clinical decision-making tool to design personalized AS plans for PCa patients.
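The biomarker-to-classifier step described above is a standard univariate logistic regression scored with ROC AUC. The sketch below reproduces that analysis pattern on synthetic placeholder data; the sample size and biomarker values are illustrative, not the paper's cohort.

```python
# Sketch of the biomarker-based risk classifier: univariate logistic
# regression on mean tumor proliferation activity, scored with ROC AUC.
# Synthetic placeholder data, not the paper's cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 40
higher_risk = rng.integers(0, 2, size=n)                           # 0/1 outcome
proliferation = 0.1 + 0.05 * higher_risk + rng.normal(0, 0.03, n)  # biomarker

X = proliferation.reshape(-1, 1)
clf = LogisticRegression().fit(X, higher_risk)
probs = clf.predict_proba(X)[:, 1]
print(f"AUC = {roc_auc_score(higher_risk, probs):.2f}")
```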

Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: https://github.com/holgerroth/3Dunet_abdomen_cascade.
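The core of the cascade is the hand-off between stages: the first FCN's coarse mask defines a candidate bounding box, the volume is cropped to it (with a safety margin), and the second FCN segments only that region. The sketch below shows that hand-off with placeholder data; coarse_fcn and fine_fcn stand in for the trained networks, and the margin value is an illustrative assumption.

```python
# Sketch of the coarse-to-fine hand-off: crop the CT volume to the bounding
# box of the stage-1 candidate mask (plus a margin) before running stage 2.
import numpy as np

def candidate_crop(volume: np.ndarray, coarse_mask: np.ndarray, margin: int = 8):
    idx = np.argwhere(coarse_mask > 0)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[sl], sl

volume = np.random.rand(128, 256, 256)        # placeholder CT volume
coarse_mask = np.zeros_like(volume)
coarse_mask[40:80, 100:160, 90:150] = 1       # placeholder stage-1 candidate
roi, sl = candidate_crop(volume, coarse_mask)
print(roi.shape)   # stage 2 classifies only a small fraction of the voxels
# fine_seg = fine_fcn(roi); full_seg = np.zeros_like(volume); full_seg[sl] = fine_seg
```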
