Radiology reports are an instrumental part of modern medicine, informing key clinical decisions such as diagnosis and treatment. The worldwide shortage of radiologists, however, restricts access to expert care and imposes heavy workloads, contributing to avoidable errors and delays in report delivery. While recent progress in automated report generation with vision-language models offers clear potential for ameliorating the situation, the path to real-world adoption has been stymied by the challenge of evaluating the clinical quality of AI-generated reports. In this study, we build a state-of-the-art report generation system for chest radiographs, $\textit{Flamingo-CXR}$, by fine-tuning a well-known vision-language foundation model on radiology data. To evaluate the quality of the AI-generated reports, a group of 16 certified radiologists provided detailed evaluations of AI-generated and human-written reports for chest X-rays from an intensive care setting in the United States and an inpatient setting in India. At least one radiologist (out of two per case) preferred the AI report to the ground-truth report in over 60$\%$ of cases for both datasets. Among the subset of AI-generated reports that contain errors, the most frequently cited problems concerned location and findings, whereas for human-written reports most mistakes concerned severity and findings. This disparity suggests potential complementarity between our AI system and human experts, prompting us to develop an assistive scenario in which Flamingo-CXR generates a first-draft report that is subsequently revised by a clinician. This is the first demonstration of clinician-AI collaboration for report writing, and the resulting reports are assessed to be equivalent or preferred by at least one radiologist to reports written by experts alone in 80$\%$ of inpatient cases and 60$\%$ of intensive care cases.

Medical imaging datasets are fundamental to artificial intelligence (AI) in healthcare. The accuracy, robustness and fairness of diagnostic algorithms depend on the data (and its quality) on which the models are trained and evaluated. Medical imaging datasets have become increasingly available to the public, and are often hosted on Community-Contributed Platforms (CCPs), including platforms run by private companies such as Kaggle or Hugging Face. While open data is important for redistributing the public value of data, we find that the current CCP governance model fails to uphold the quality and recommended practices needed for sharing, documenting, and evaluating datasets. In this paper we investigate medical imaging datasets on CCPs and how they are documented, shared, and maintained. We first highlight some differences between medical imaging and computer vision, particularly the potentially harmful downstream effects of poor adoption of recommended dataset management practices. We then analyze 20 popular datasets on CCPs (10 medical and 10 computer vision) and find vague licenses, a lack of persistent identifiers and storage, duplicates, and missing metadata, with differences between the platforms. We present "actionability" as a conceptual metric to reveal the data quality gap between the characteristics of data on CCPs and the desired characteristics of data for AI in healthcare. Finally, we propose a commons-based stewardship model for documenting, sharing and maintaining datasets on CCPs, and end with a discussion of limitations and open questions.

When generating in-silico clinical electrophysiological outputs, such as electrocardiograms (ECGs) and body surface potential maps (BSPMs), mathematical models have relied on a single physics, namely cardiac electrophysiology (EP), neglecting the role of heart motion. Since the heart is the most powerful source of electrical activity in the human body, its motion dynamically shifts the position of the principal electrical sources in the torso, influencing the electrical potential distribution and potentially altering the EP outputs. In this work, we propose a computational model for the simulation of ECGs and BSPMs that couples a cardiac electromechanical model with a model of EP signal propagation in the torso, via a flexible numerical approach that simulates the torso domain deformation induced by the myocardial displacement. Our model accounts for the major mechano-electrical feedbacks, along with unidirectional displacement and potential couplings from the heart to the surrounding body. For the numerical discretization, we employ a versatile intergrid transfer operator that allows different finite element spaces to be used in the cardiac and torso domains. Our numerical results are obtained on a realistic 3D biventricular-torso geometry, and cover both sinus rhythm and ventricular tachycardia (VT), solving both the electromechanical-torso model in dynamical domains and the classical electrophysiology-torso model in static domains. By comparing standard 12-lead ECGs and BSPMs, we highlight the non-negligible effects of myocardial contraction on the EP outputs, especially in pathological conditions such as VT.
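As a toy illustration of the intergrid transfer idea mentioned above, the following 1-D sketch (not the authors' solver; grids, field, and resolutions are all hypothetical) interpolates a nodal field from a coarse "cardiac" grid onto a finer "torso" grid:

```python
import numpy as np

# Coarse "cardiac" grid and finer "torso" grid (1-D stand-ins for FE spaces)
heart_nodes = np.linspace(0.0, 1.0, 5)
torso_nodes = np.linspace(0.0, 1.0, 17)
v_heart = np.sin(np.pi * heart_nodes)   # a nodal potential on the heart grid

# Piecewise-linear intergrid transfer of the nodal field between the grids
v_torso = np.interp(torso_nodes, heart_nodes, v_heart)
print(v_torso.shape)   # (17,)
```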

Longitudinal data are important in numerous fields, such as healthcare, sociology and seismology, but real-world datasets present notable challenges for practitioners because they can be high-dimensional, contain structured missingness patterns, and have measurement time points governed by an unknown stochastic process. While various solutions have been suggested, the majority of them have been designed to account for only one of these challenges. In this work, we propose a flexible and efficient latent-variable model that is capable of addressing all these limitations. Our approach utilizes Gaussian processes to capture temporal correlations between samples and their associated missingness masks, as well as to model the underlying point process. We construct our model as a variational autoencoder with encoder and decoder models parameterised by deep neural networks, and develop a scalable amortised variational inference approach for efficient model training. We demonstrate competitive performance on both simulated and real datasets.
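The temporal-correlation ingredient described above can be sketched with a standard squared-exponential Gaussian-process kernel. This toy example (not the authors' model; all times, hyperparameters and the missingness rate are hypothetical) builds the kernel over irregular visit times and draws one latent trajectory with a random observation mask:

```python
import numpy as np

def rbf_kernel(t, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel over (possibly irregular) time points."""
    t = np.asarray(t, dtype=float)[:, None]
    return variance * np.exp(-0.5 * (t - t.T) ** 2 / lengthscale**2)

# Irregular visit times, as would be produced by some unknown point process
times = [0.0, 0.3, 1.1, 2.5, 2.6]
K = rbf_kernel(times) + 1e-6 * np.eye(len(times))     # jitter for stability

rng = np.random.default_rng(0)
z = rng.multivariate_normal(np.zeros(len(times)), K)  # one latent trajectory
mask = rng.random(len(times)) < 0.8                   # True where observed
print(z.round(2), mask)
```

Nearby visits (e.g. 2.5 and 2.6) get strongly correlated latent values, distant ones nearly independent; in the full model such kernels also act on the missingness masks.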

Autonomous vehicles rely on accurate trajectory prediction to inform decision-making processes related to navigation and collision avoidance. However, current trajectory prediction models show signs of overfitting, which may lead to unsafe or suboptimal behavior. To address these challenges, this paper presents a comprehensive framework that categorizes and assesses the definitions and strategies used in the literature on evaluating and improving the robustness of trajectory prediction models. This involves a detailed exploration of various approaches, including data slicing methods, perturbation techniques, model architecture changes, and post-training adjustments. The literature offers many promising methods for increasing robustness, which is a prerequisite for safe and reliable autonomous driving.

Over the last decades, it has been shown that teaching and learning statistics is complex, regardless of the teaching methodology. This research presents the different learning profiles identified in a group of future Primary Education (PE) teachers during the study of the Statistics block, depending on the methodology used and gender. The sample consists of 132 students in the third year of the PE undergraduate degree at the University of the Basque Country (Universidad del Pa\'is Vasco/Euskal Herriko Unibertsitatea, UPV/EHU). To determine the profiles, a cluster analysis technique has been used, where the main variables are, on the one hand, the students' statistical competence development and, on the other hand, the evolution of their attitude towards statistics. In order to better understand the nature of the profiles obtained, the type of teaching methodology used to work on the Statistics block has been taken into account. This comparison is based on the fact that the sample is divided into two groups: one worked with a Project Based Learning (PBL) methodology, while the other worked with a methodology in which theoretical explanations and typically decontextualized exercises predominate. Among the results obtained, three differentiated profiles are observed, highlighting the proportion of students with an advantageous profile in the group where PBL is included. With regard to gender, the results show that women's attitudes towards statistics evolved more positively than men's after the sessions devoted to statistics in the PBL group.
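A minimal sketch of this kind of cluster analysis (with entirely synthetic student data and a plain k-means routine, not the study's actual method or data) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-student features: statistical-competence gain (x) and change
# in attitude towards statistics (y), both standardised; values hypothetical
X = np.vstack([
    rng.normal([ 1.0,  1.0], 0.3, size=(40, 2)),   # an "advantageous" profile
    rng.normal([ 0.0,  0.0], 0.3, size=(50, 2)),   # an intermediate profile
    rng.normal([-1.0, -0.8], 0.3, size=(42, 2)),   # a less favourable profile
])  # 132 synthetic students, matching the study's sample size

def kmeans(X, k=3, iters=50, seed=1):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([
            X[labels == j].mean(0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels, centers

labels, centers = kmeans(X)
print(np.bincount(labels, minlength=3))   # cluster sizes
```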

Understanding the mechanisms through which neural networks extract statistics from input-label pairs is one of the most important unsolved problems in supervised learning. Prior works have identified that the Gram matrices of the weights in trained neural networks of general architectures are proportional to the average gradient outer product of the model, in a statement known as the Neural Feature Ansatz (NFA). However, the reason these quantities become correlated during training is poorly understood. In this work, we explain the emergence of this correlation. We identify that the NFA is equivalent to alignment between the left singular structure of the weight matrices and a significant component of the empirical neural tangent kernels associated with those weights. We establish that the NFA introduced in prior works is driven by a centered NFA that isolates this alignment. We show that the speed of NFA development can be predicted analytically at early training times in terms of simple statistics of the inputs and labels. Finally, we introduce a simple intervention to increase NFA correlation at any given layer, which dramatically improves the quality of features learned.
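The two quantities related by the NFA can be illustrated in the simplest possible setting: for a linear map the input-output Jacobian is constant, so the average gradient outer product coincides with the weight Gram matrix exactly, and the correlation is 1. A minimal NumPy sketch (illustrative only; trained nonlinear networks only approximately satisfy this):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3
W = rng.normal(size=(k, d))   # weights of a one-layer linear "network"

# For f(x) = W x, the input-output Jacobian is W at every input, so the
# average gradient outer product E[J(x)^T J(x)] equals W^T W, which is
# exactly the Gram matrix of the weights in this degenerate case.
agop = W.T @ W                # average gradient outer product
gram = W.T @ W                # Gram matrix of the weights

# Cosine similarity between the flattened matrices (the NFA correlation)
cos = np.sum(agop * gram) / (np.linalg.norm(agop) * np.linalg.norm(gram))
print(round(float(cos), 6))   # -> 1.0
```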

We present MIPS, a novel method for program synthesis based on automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code. We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are not solved by GPT-4 (which also solves 30). MIPS uses an integer autoencoder to convert the RNN into a finite state machine, then applies Boolean or integer symbolic regression to capture the learned algorithm. As opposed to large language models, this program synthesis technique makes no use of (and is therefore not limited by) human training data such as algorithms and code from GitHub. We discuss opportunities and challenges for scaling up this approach to make machine-learned models more interpretable and trustworthy.
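The step of discretizing an RNN's hidden state into a finite state machine can be sketched on a toy parity task. The "RNN" below is a hand-built stand-in with slightly noisy integer-like states, and plain rounding plays the role of the integer autoencoder; all names and details are hypothetical, not the MIPS implementation:

```python
import numpy as np

# Toy stand-in for a "trained RNN" that tracks the parity of a bit stream:
# its hidden state is continuous but stays close to an integer (0 or 1).
def rnn_step(h: float, x: int) -> float:
    target = (round(h) + x) % 2
    return target + 0.03 * np.sin(h + x)     # small continuous drift

def extract_fsm(step, states=(0, 1), inputs=(0, 1)):
    # Rounding plays the role of the integer autoencoder: snap each continuous
    # successor state to its nearest integer and tabulate the transitions.
    return {(s, x): int(round(step(float(s), x))) for s in states for x in inputs}

def run_fsm(fsm, bits, s0=0):
    s = s0
    for b in bits:
        s = fsm[(s, b)]
    return s

fsm = extract_fsm(rnn_step)
print(fsm)                          # a 2-state parity machine
print(run_fsm(fsm, [1, 0, 1, 1]))  # parity of three 1-bits -> 1
```

Once the transition table is explicit, symbolic regression (the next MIPS stage) can search for a closed-form rule such as `s' = (s + x) % 2`.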

We present a graph-based discretization method for solving hyperbolic systems of conservation laws using discontinuous finite elements. The method is based on the convex limiting technique introduced by Guermond et al. (SIAM J. Sci. Comput. 40, A3211--A3239, 2018). As such, these methods are mathematically guaranteed to be invariant-set preserving and to satisfy discrete pointwise entropy inequalities. In this paper we extend the theory to the specific case of discontinuous finite elements, incorporating the effect of boundary conditions into the formulation. From a practical point of view, the implementation of these methods is algebraic, meaning that they operate directly on the stencil of the spatial discretization. This first paper in a sequence of two introduces and verifies essential building blocks for the convex limiting procedure using discontinuous Galerkin discretizations. In particular, we discuss a minimally stabilized high-order discontinuous Galerkin method that exhibits optimal convergence rates comparable to linear stabilization techniques for cell-based methods. In addition, we discuss a proper choice of local bounds for the convex limiting procedure. A follow-up contribution will focus on the high-performance implementation, benchmarking and verification of the method. We verify convergence rates on a sequence of one- and two-dimensional tests with differing regularity. In particular, we obtain optimal convergence rates for single rarefaction waves. We also propose a simple test to verify the implementation of boundary conditions and their convergence rates.
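The core mechanism of a convex limiting step can be sketched as blending a bound-preserving low-order update with a high-order one by the largest convex weight that keeps the result inside local bounds. This minimal NumPy sketch treats the scalar case with hypothetical nodal values; it is not the paper's full algorithm:

```python
import numpy as np

def convex_limiter(u_low, u_high, u_min, u_max):
    """Blend a bound-preserving low-order value with a high-order one,
    u = u_low + l * (u_high - u_low), using the largest l in [0, 1]
    that keeps u inside the local bounds [u_min, u_max]."""
    d = u_high - u_low
    l = np.ones_like(u_low)
    pos, neg = d > 0, d < 0
    l = np.where(pos, np.minimum(1.0, (u_max - u_low) / np.where(pos, d, 1.0)), l)
    l = np.where(neg, np.minimum(1.0, (u_min - u_low) / np.where(neg, d, 1.0)), l)
    return u_low + np.clip(l, 0.0, 1.0) * d

u_low  = np.array([0.2, 0.5, 0.9])    # invariant-set-preserving update
u_high = np.array([0.1, 1.4, 0.95])   # high-order update, overshoots at node 1
u = convex_limiter(u_low, u_high, u_min=0.0, u_max=1.0)
print(u)   # node 1 is limited back to the upper bound 1.0
```

Because the result is a convex combination of two admissible states per node, it inherits the bounds of the low-order solution while retaining as much of the high-order accuracy as the limiter allows.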

We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy when the model has access to it in addition to another modality. We refer to this gain as the conditional utilization rate. In the experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
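The conditional utilization rate described above reduces to an accuracy difference between two evaluations of the same model. A minimal sketch with hypothetical accuracies (the modalities, evaluation protocol and numbers are illustrative, not taken from the paper):

```python
def conditional_utilization_rate(acc_both: float, acc_other_only: float) -> float:
    """Accuracy gain from giving the model a modality on top of the other one."""
    return acc_both - acc_other_only

# Hypothetical accuracies from three evaluations of one audio-video model
acc_audio_video = 0.92   # both modalities available
acc_video_only  = 0.88   # audio zeroed out at test time
acc_audio_only  = 0.75   # video zeroed out at test time

u_audio = conditional_utilization_rate(acc_audio_video, acc_video_only)
u_video = conditional_utilization_rate(acc_audio_video, acc_audio_only)
print(f"u(audio|video) = {u_audio:.2f}, u(video|audio) = {u_video:.2f}")
```

Here video contributes far more on top of audio than vice versa, the kind of imbalance the paper's balancing algorithm targets.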

Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic for various learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving great performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
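Self-attention scoring of a variable-sized node tuple can be sketched as follows. This toy NumPy example contrasts a "dynamic" (attention-based) embedding with a "static" (context-free) one in the spirit of Hyper-SAGNN, but all weights, dimensions, and the squared-difference score are illustrative assumptions rather than the published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
emb = {i: rng.normal(size=d) for i in range(6)}   # hypothetical node features

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def score_hyperedge(nodes, Wq, Wk, Wv, Ws):
    """Score a candidate hyperedge of any size: each node gets a 'dynamic'
    embedding by attending over the tuple and a 'static' embedding from its
    own features alone; the score aggregates how far the two disagree."""
    X = np.stack([emb[n] for n in nodes])   # (k, d), variable tuple size k
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    att = softmax(Q @ K.T / np.sqrt(d))     # self-attention over the tuple
    dynamic = att @ V                       # context-dependent embedding
    static = X @ Ws                         # context-free embedding
    return float(np.mean((dynamic - static) ** 2))

Wq, Wk, Wv, Ws = (rng.normal(size=(d, d)) for _ in range(4))
print(score_hyperedge([0, 2, 4], Wq, Wk, Wv, Ws))  # triadic candidate
print(score_hyperedge([1, 3], Wq, Wk, Wv, Ws))     # pairwise candidate
```

Because nothing in the scoring function depends on a fixed tuple length, the same weights handle hyperedges of different sizes, which is the property the abstract emphasizes.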
