
Identifying predictive biomarkers, which forecast individual treatment effectiveness, is crucial for personalized medicine and informs decision-making across diverse disciplines. These biomarkers are extracted from pre-treatment data, often within randomized controlled trials, and must be distinguished from prognostic biomarkers, which are independent of treatment assignment. Our study focuses on the discovery of predictive imaging biomarkers, aiming to leverage pre-treatment images to unveil new causal relationships. Previous approaches relied on labor-intensive handcrafted or manually derived features, which may introduce biases. In response, we present a new task of discovering predictive imaging biomarkers directly from pre-treatment images by learning relevant image features. We propose an evaluation protocol for this task that assesses a model's ability to identify predictive imaging biomarkers and differentiate them from prognostic ones, employing statistical testing and a comprehensive analysis of image feature attribution. We explore the suitability of deep learning models originally designed for estimating the conditional average treatment effect (CATE); such models have previously been assessed primarily for the precision of CATE estimation, while their ability to discover imaging biomarkers has been overlooked. Our proof-of-concept analysis demonstrates promising results in discovering and validating predictive imaging biomarkers on synthetic outcomes and real-world image datasets.
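
As a concrete illustration of the predictive-versus-prognostic distinction that the evaluation protocol targets, the sketch below estimates the CATE with a simple T-learner on synthetic tabular data. It is a minimal, hypothetical example in place of the paper's image-based models: the feature layout, outcome model, and random-forest learners are all assumptions for illustration.

```python
# Minimal T-learner sketch of CATE estimation on synthetic tabular data.
# An illustration of the general CATE idea only, not the paper's image-based
# models or evaluation protocol; all choices here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))          # X[:, 0]: predictive, X[:, 1]: prognostic
T = rng.integers(0, 2, size=n)       # randomized treatment assignment
# Outcome: prognostic effect of X[:, 1] plus a treatment effect modulated
# by the predictive feature X[:, 0].
tau = 1.0 + 2.0 * X[:, 0]            # true CATE depends only on X[:, 0]
Y = 0.5 * X[:, 1] + T * tau + rng.normal(scale=0.1, size=n)

# T-learner: fit separate outcome models for treated and control arms.
mu1 = RandomForestRegressor(random_state=0).fit(X[T == 1], Y[T == 1])
mu0 = RandomForestRegressor(random_state=0).fit(X[T == 0], Y[T == 0])
cate_hat = mu1.predict(X) - mu0.predict(X)

# The estimated CATE should track the predictive feature but not the
# prognostic one -- exactly the distinction the evaluation protocol probes.
print(np.corrcoef(cate_hat, X[:, 0])[0, 1])  # high correlation expected
print(np.corrcoef(cate_hat, X[:, 1])[0, 1])  # near zero expected
```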

Related content

Predictive estimation, which comprises model calibration, model prediction, and validation, is a common objective when performing inverse uncertainty quantification (UQ) in diverse scientific applications. These techniques typically require thousands to millions of realisations of the forward model, leading to high computational costs. Surrogate models are often used to approximate these simulations. However, many surrogate models suffer from the fundamental limitation of being unable to estimate plausible high-dimensional outputs, inevitably compromising their use in the UQ framework. To address this challenge, this study introduces an efficient surrogate modelling workflow tailored for high-dimensional outputs. Specifically, a two-step approach is developed: (1) a dimensionality reduction technique is used for extracting data features and mapping the original output space into a reduced space; and (2) a multivariate surrogate model is constructed directly on the reduced space. The combined approach is shown to improve the accuracy of the surrogate model while retaining the computational efficiency required for UQ inversion. The proposed surrogate method, combined with Bayesian inference, is evaluated for a civil engineering application by performing inverse analyses on a laterally loaded pile problem. The results demonstrate the superiority of the proposed framework over traditional surrogate methods in dealing with high-dimensional outputs for sequential inversion analysis.
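
A minimal sketch of the two-step workflow, assuming a toy forward model: PCA plays the role of the dimensionality reduction step and independent Gaussian-process regressors form the multivariate surrogate on the reduced space. The specific reduction technique and surrogate family used in the study may differ.

```python
# Two-step surrogate sketch: (1) PCA compresses the high-dimensional model
# output, (2) a Gaussian-process surrogate is fit per retained component.
# Toy forward model and dimensions are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def forward_model(theta):
    """Toy forward model: 2 inputs -> 200-dimensional output curve."""
    s = np.linspace(0, 1, 200)
    return theta[0] * np.sin(4 * np.pi * s) + theta[1] * s**2

# Training designs and corresponding high-dimensional outputs.
thetas = rng.uniform(-1, 1, size=(50, 2))
Y = np.array([forward_model(t) for t in thetas])       # (50, 200)

# Step 1: map outputs to a low-dimensional latent space.
pca = PCA(n_components=5).fit(Y)
Z = pca.transform(Y)                                   # (50, 5)

# Step 2: one GP surrogate per latent coordinate.
gps = [GaussianProcessRegressor().fit(thetas, Z[:, k]) for k in range(5)]

def surrogate(theta):
    z = np.array([gp.predict(theta[None, :])[0] for gp in gps])
    return pca.inverse_transform(z[None, :])[0]        # back to 200 dims

theta_test = np.array([0.3, -0.7])
err = np.linalg.norm(surrogate(theta_test) - forward_model(theta_test))
print(f"reconstruction error: {err:.3f}")
```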

New knowledge builds upon existing foundations, which means that an interdependent relationship exists between pieces of knowledge, manifested in the historical development of the scientific system over hundreds of years. By leveraging natural language processing techniques, this study introduces the Scientific Concept Navigator (SciConNav), an embedding-based navigation model that infers "knowledge pathways" from the research trajectories of millions of scholars. We validate that the learned representations effectively delineate disciplinary boundaries and capture the intricate relationships between diverse concepts. The utility of the inferred navigation space is showcased through multiple applications. First, we demonstrate multi-step analogy inferences within the knowledge space and the interconnectivity between concepts in different disciplines. Second, we formulate attribute dimensions of knowledge across domains, observing distributional shifts in the arrangement of 19 disciplines along these conceptual dimensions, including "Theoretical" to "Applied" and "Chemical" to "Biomedical", highlighting the evolution of functional attributes within knowledge domains. Lastly, by analyzing the high-dimensional knowledge network structure, we find that knowledge is connected by shorter global pathways and that interdisciplinary knowledge plays a critical role in the accessibility of the global knowledge network. Our framework offers a novel approach to mining knowledge-inheritance pathways in the extensive scientific literature, which is of great significance for understanding patterns of scientific progression, tailoring scientific learning trajectories, and accelerating scientific progress.
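
The multi-step analogy inference can be made concrete with a toy embedding space. The concept vectors below are invented placeholders; SciConNav learns its embeddings from millions of scholars' research trajectories, but the query mechanics are the same vector arithmetic sketched here.

```python
# Sketch of an analogy query in a concept-embedding space, in the spirit of
# SciConNav's navigation. Vectors are hypothetical 2-D placeholders.
import numpy as np

concepts = {
    "calculus":        np.array([0.9, 0.1]),
    "analysis":        np.array([0.8, 0.3]),
    "mechanics":       np.array([0.5, 0.6]),
    "fluid dynamics":  np.array([0.4, 0.8]),
}
for name, vec in concepts.items():
    concepts[name] = vec / np.linalg.norm(vec)   # unit-normalize

def analogy(a, b, c):
    """Return concept closest to vec(b) - vec(a) + vec(c), excluding inputs."""
    target = concepts[b] - concepts[a] + concepts[c]
    target /= np.linalg.norm(target)
    return max((k for k in concepts if k not in {a, b, c}),
               key=lambda k: concepts[k] @ target)

# "calculus is to analysis as mechanics is to ...?"
print(analogy("calculus", "analysis", "mechanics"))  # -> "fluid dynamics"
```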

The pressure-correction method is a well-established approach for simulating unsteady, incompressible fluids. It is well known that implicit discretization of the time derivative in the momentum equation, e.g., using a backward differentiation formula with explicit handling of the nonlinear term, results in a conditionally stable method. In certain scenarios, employing explicit time integration in the momentum equation can be advantageous, as it avoids the need to solve a linear system involving each differential operator. Additionally, we will demonstrate that the fully discrete method can be expressed in the form of simple matrix-vector multiplications, allowing for efficient implementation on modern, highly parallel acceleration hardware. Despite being common practice in various commercial codes, there is currently no available literature on error analysis for this scenario. In this work, we conduct a theoretical analysis of both the implicit variant and two explicit variants of the pressure-correction method in a fully discrete setting. We demonstrate to what extent the presented implicit and explicit methods exhibit conditional stability. Furthermore, we establish a Courant-Friedrichs-Lewy (CFL) type condition for the explicit scheme and show that the explicit variant exhibits the same asymptotic behavior as the implicit variant when the CFL condition is satisfied.
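
To make the structure of the explicit variant concrete, here is a toy Chorin-style projection step on a periodic 2-D grid: an explicit momentum update followed by an FFT-based pressure Poisson solve, with a CFL-type restriction on the time step. This is an illustrative sketch of the general pressure-correction mechanics, not the discretization analyzed in the paper.

```python
# One explicit pressure-correction (projection) step per loop iteration,
# on a periodic 2-D grid with spectral derivatives. Toy parameters.
import numpy as np

n, L, nu = 64, 2 * np.pi, 1e-2
h = L / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)           # initial divergence-free velocity
v = -np.cos(X) * np.sin(Y)

k = np.fft.fftfreq(n, d=h) * 2 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                      # avoid dividing the mean mode by zero

def ddx(f, kk):                     # spectral derivative
    return np.real(np.fft.ifft2(1j * kk * np.fft.fft2(f)))

def lap(f):
    return ddx(ddx(f, KX), KX) + ddx(ddx(f, KY), KY)

# CFL-type restriction for the explicit momentum step.
umax = max(np.abs(u).max(), np.abs(v).max())
dt = 0.5 * min(h / umax, h**2 / (4 * nu))

for _ in range(10):
    # Explicit momentum step: tentative velocity ignores the pressure.
    us = u + dt * (-u * ddx(u, KX) - v * ddx(u, KY) + nu * lap(u))
    vs = v + dt * (-u * ddx(v, KX) - v * ddx(v, KY) + nu * lap(v))
    # Pressure correction: solve lap(p) = div(u*)/dt, then project.
    div = ddx(us, KX) + ddx(vs, KY)
    p = np.real(np.fft.ifft2(-np.fft.fft2(div / dt) / K2))
    u = us - dt * ddx(p, KX)
    v = vs - dt * ddx(p, KY)

print("max divergence:", np.abs(ddx(u, KX) + ddx(v, KY)).max())
```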

To solve many problems on graphs, graph traversals are used, the usual variants of which are depth-first search and breadth-first search. By performing a graph traversal, we successively visit all vertices of the graph that belong to a connected component. Breadth-first search is the usual choice when constructing efficient algorithms for finding the connected components of a graph. Simple-iteration methods for solving systems of linear equations with modified graph adjacency matrices and a properly specified right-hand side can be regarded as graph traversal algorithms. These traversal algorithms are, generally speaking, equivalent neither to depth-first search nor to breadth-first search. One example of such a traversal algorithm is the one associated with the Gauss-Seidel method. For an arbitrary connected graph, visiting all of its vertices requires no more iterations than breadth-first search does; for a large number of problem instances, fewer iterations are required.
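
The sketch below illustrates the correspondence on a path graph, under the assumption of a contraction-style iteration x ← cAx + b with b a unit vector at the source: the support of the Jacobi iterate after k sweeps matches the BFS levels, while an in-place Gauss-Seidel sweep can cover the whole component in fewer sweeps for a favorable vertex ordering. This is an illustrative formulation, not necessarily the paper's exact system.

```python
# Simple iteration as graph traversal: vertices "visited" by the iteration
# are exactly the nonzero entries of the iterate x.
import numpy as np

# Path graph 0-1-2-3-4 as an adjacency matrix.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

c, s = 0.25, 0            # contraction factor (c * max degree < 1), source
b = np.zeros(n); b[s] = 1.0

def sweeps_to_cover(gauss_seidel):
    x = np.zeros(n)
    for sweep in range(1, n + 1):
        if gauss_seidel:
            for i in range(n):              # in-place, reuses fresh values
                x[i] = c * A[i] @ x + b[i]
        else:
            x = c * A @ x + b               # Jacobi: simultaneous update
        if np.all(x > 0):                   # every vertex reached
            return sweep
    return None

print("Jacobi sweeps:      ", sweeps_to_cover(False))  # 5 (= BFS levels)
print("Gauss-Seidel sweeps:", sweeps_to_cover(True))   # 1 with this ordering
```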

The current prevalence of conspiracy theories on the internet is a significant issue, tackled by many computational approaches. However, these approaches fail to recognize the relevance of distinguishing between texts that contain a conspiracy theory and texts that are simply critical of and oppose mainstream narratives. Furthermore, little attention is usually paid to the role of inter-group conflict in oppositional narratives. We contribute a novel topic-agnostic annotation scheme that differentiates between conspiratorial and critical texts and defines span-level categories of inter-group conflict. We also contribute the multilingual XAI-DisInfodemics corpus (English and Spanish), which contains high-quality annotations of Telegram messages related to COVID-19 (5,000 messages per language). We further demonstrate the feasibility of NLP-based automation by performing a range of experiments that yield strong baseline solutions. Finally, we perform an analysis demonstrating that the promotion of inter-group conflict and the presence of violence and anger are key aspects distinguishing the two types of oppositional narratives, i.e., conspiracy vs. critical.
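
As a hedged illustration of what an NLP baseline for the conspiracy-versus-critical distinction might look like, the snippet below fits a TF-IDF plus logistic-regression pipeline on two invented placeholder messages. Neither the corpus nor the paper's stronger baselines are reproduced here.

```python
# Minimal text-classification baseline sketch for conspiracy vs. critical.
# The example messages are invented placeholders, not corpus data; the
# paper's strong baselines are presumably more sophisticated than this.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "They are hiding the truth from us, the elites planned it all",
    "The vaccine rollout was mismanaged and deserves criticism",
]
labels = ["conspiracy", "critical"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["the government planned this against us"]))
```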

The adoption of contrast agents in medical imaging protocols is crucial for accurate and timely diagnosis. While highly effective and characterized by an excellent safety profile, the use of contrast agents has its limitations, including a rare risk of allergic reactions, potential environmental impact, and economic burdens on patients and healthcare systems. In this work, we address the contrast agent reduction (CAR) problem, which involves reducing the administered dosage of contrast agent while preserving the visual enhancement. The current literature on the CAR task is based on deep learning techniques within a fully image-processing framework. These techniques digitally simulate high-dose images from images acquired with a low dose of contrast agent. We investigate the feasibility of a "learned inverse problem" (LIP) approach, as opposed to the end-to-end paradigm of the state-of-the-art literature. Specifically, we learn the image-to-image operator that maps high-dose images to their corresponding low-dose counterparts, and we frame the CAR task as an inverse problem, which we then solve through a regularized optimization reformulation. Regularization methods are well-established mathematical techniques that offer robustness and explainability, and our approach combines these rigorous techniques with cutting-edge deep learning tools. Numerical experiments performed on pre-clinical medical images confirm the effectiveness of this strategy, showing improved stability and accuracy in the simulated high-dose images.
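
A minimal sketch of the LIP idea, with an untrained toy CNN standing in for the learned high-dose-to-low-dose operator and plain Tikhonov regularization standing in for the paper's regularizer: given a low-dose observation y, the high-dose image x is recovered by gradient-based minimization of a regularized data-fidelity objective.

```python
# "Learned inverse problem" sketch: recover x such that F(x) ~ y, where F
# is a (here untrained, toy) network playing the learned forward operator.
import torch

F = torch.nn.Sequential(              # stand-in for the learned operator
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, padding=1),
)
for p in F.parameters():
    p.requires_grad_(False)           # operator is fixed at inversion time

y = torch.randn(1, 1, 32, 32)         # observed "low-dose" image (toy data)
x = torch.zeros(1, 1, 32, 32, requires_grad=True)
lam = 1e-2                            # regularization weight (assumed)
opt = torch.optim.Adam([x], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    # Data fidelity + Tikhonov regularizer (the paper's choice may differ).
    loss = ((F(x) - y) ** 2).mean() + lam * (x ** 2).mean()
    loss.backward()
    opt.step()

print(f"final objective: {loss.item():.4f}")
```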

Deep learning models can exhibit what appears to be a sudden ability to solve a new problem as training time, training data, or model size increases, a phenomenon known as emergence. In this paper, we present a framework where each new ability (a skill) is represented as a basis function. We solve a simple multi-linear model in this skill basis, finding analytic expressions for the emergence of new skills as well as for the scaling laws of the loss with training time, data size, model size, and optimal compute ($C$). We compare our detailed calculations to direct simulations of a two-layer neural network trained on multitask sparse parity, where the tasks in the dataset are distributed according to a power law. Our simple model captures, using a single fit parameter, the sigmoidal emergence of multiple new skills as training time, data size, or model size increases in the neural network.
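
A toy numerical illustration of this picture, under assumed functional forms (power-law task frequencies and a sigmoid-in-log-time per-skill accuracy): skills emerge one by one as training time grows, and the frequency-weighted loss decays smoothly even though each individual skill turns on abruptly. These forms are illustrative, not the paper's exact analytic expressions.

```python
# Toy skill-emergence model: power-law task frequencies, sigmoidal per-skill
# accuracy around a skill-dependent time threshold t_k ~ 1/p_k.
import numpy as np

alpha = 1.5
k = np.arange(1, 51)                    # skill index
p = k**-alpha; p /= p.sum()             # power-law task frequencies

def skill_accuracy(t):
    t_k = 1.0 / p                       # later skills emerge later
    return 1.0 / (1.0 + (t_k / t) ** 2) # sigmoidal in log-time

for t in [1e1, 1e2, 1e3, 1e4]:
    acc = skill_accuracy(t)
    loss = np.sum(p * (1 - acc))        # frequency-weighted error
    n_emerged = np.sum(acc > 0.5)
    print(f"t={t:8.0f}  skills emerged: {n_emerged:2d}  loss: {loss:.4f}")
```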

Causal networks are useful in a wide variety of applications, from medical diagnosis to root-cause analysis in manufacturing. In practice, however, causal networks are often incomplete, with missing causal relations. This paper presents a novel approach, called CausalLP, that formulates the issue of incomplete causal networks as a knowledge graph completion problem. More specifically, the task of finding new causal relations in an incomplete causal network is mapped to the task of knowledge graph link prediction. The use of knowledge graphs to represent causal relations enables the integration of external domain knowledge; as an added complexity, the causal relations carry weights representing the strength of the causal association between entities in the knowledge graph. Two primary tasks are supported by CausalLP: causal explanation and causal prediction. An evaluation of this approach uses a benchmark dataset of simulated videos for causal reasoning, CLEVRER-Humans, and compares the performance of multiple knowledge graph embedding algorithms. Two distinct dataset splitting approaches are used for evaluation: (1) random-based split, which is the method typically employed to evaluate link prediction algorithms, and (2) Markov-based split, a novel data split technique that utilizes the Markovian property of causal relations. Results show that using weighted causal relations improves causal link prediction over the baseline without weighted relations.
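
The role of weighted causal relations can be sketched with a TransE-style score in which the causal-strength weight scales each triple's contribution to the training loss. Entities, weights, and hyperparameters below are hypothetical stand-ins, not the CLEVRER-Humans data or CausalLP's full set of embedding models.

```python
# Weighted causal link prediction sketch with a TransE-style score:
# score(h, t) = -||E[h] + r - E[t]||, trained with weight-scaled updates.
import numpy as np

rng = np.random.default_rng(0)
entities = ["collision", "push", "motion", "fall"]
E = {e: rng.normal(scale=0.1, size=8) for e in entities}
r = rng.normal(scale=0.1, size=8)              # single "causes" relation

# (head, tail, causal strength) training triples -- hypothetical weights.
triples = [("push", "motion", 0.9), ("collision", "fall", 0.6)]

lr = 0.1
for _ in range(500):
    for h, t, w in triples:
        diff = E[h] + r - E[t]                 # TransE residual
        grad = 2 * w * diff                    # weight scales the loss
        E[h] -= lr * grad; r -= lr * grad; E[t] += lr * grad

def score(h, t):
    return -np.linalg.norm(E[h] + r - E[t])    # higher = more plausible

# Rank candidate effects of "push": the trained pair should score highest.
print(sorted(entities, key=lambda t: -score("push", t)))
```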

We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In the experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
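
The definition of the conditional utilization rate is simple enough to state in a few lines; the accuracies below are placeholder numbers for a hypothetical bimodal model, chosen to show the kind of imbalance described above.

```python
# Conditional utilization rate: the accuracy gain from giving the model
# modality A in addition to modality B. Numbers are illustrative placeholders.
def conditional_utilization(acc_both, acc_single_other):
    """Gain in accuracy when a modality is added on top of the other one."""
    return acc_both - acc_single_other

acc_ab = 0.92   # model uses both modalities
acc_a = 0.88    # only modality A available
acc_b = 0.61    # only modality B available

# u(A | B): what A adds on top of B, and vice versa.
print("u(A|B) =", conditional_utilization(acc_ab, acc_b))  # 0.31
print("u(B|A) =", conditional_utilization(acc_ab, acc_a))  # 0.04 -> imbalance
```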

Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic across learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
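
A minimal sketch of the Hyper-SAGNN scoring mechanism, with toy dimensions: each node in a candidate hyperedge receives a static embedding and a dynamic embedding computed by self-attention over the tuple, and the per-node distance between the two is aggregated into a tuple-level score that works for any hyperedge size. The layers and aggregation below are simplified assumptions, not the published architecture.

```python
# Variable-sized hyperedge scoring via self-attention, Hyper-SAGNN style.
import torch

n_nodes, dim = 100, 16
node_emb = torch.nn.Embedding(n_nodes, dim)        # shared node features
static = torch.nn.Linear(dim, dim)                 # position-independent
attn = torch.nn.MultiheadAttention(dim, num_heads=2, batch_first=True)

def hyperedge_score(node_ids):
    """Score a candidate hyperedge given a list of node indices."""
    x = node_emb(torch.tensor(node_ids)).unsqueeze(0)    # (1, k, dim)
    s = torch.tanh(static(x))                            # static embeddings
    d, _ = attn(x, x, x)                                 # dynamic embeddings
    d = torch.tanh(d)
    per_node = ((d - s) ** 2).mean(dim=-1)               # (1, k)
    return torch.sigmoid(per_node).mean().item()         # tuple-level score

print(hyperedge_score([3, 17, 42]))       # triadic hyperedge
print(hyperedge_score([5, 9, 21, 60]))    # works for any hyperedge size
```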
