
Accurate prediction of Drug-Target Affinity (DTA) is of vital importance in early-stage drug discovery, facilitating the identification of drugs that can effectively interact with specific targets and regulate their activities. While wet experiments remain the most reliable method, they are time-consuming and resource-intensive, resulting in limited data availability that poses challenges for deep learning approaches. Existing methods have primarily focused on developing techniques based on the available DTA data, without adequately addressing the data scarcity issue. To overcome this challenge, we present the SSM-DTA framework, which incorporates three simple yet highly effective strategies: (1) A multi-task training approach that combines DTA prediction with masked language modeling (MLM) using paired drug-target data. (2) A semi-supervised training method that leverages large-scale unpaired molecules and proteins to enhance drug and target representations. This approach differs from previous methods that only employed molecules or proteins in pre-training. (3) The integration of a lightweight cross-attention module to improve the interaction between drugs and targets, further enhancing prediction accuracy. Through extensive experiments on benchmark datasets such as BindingDB, DAVIS, and KIBA, we demonstrate the superior performance of our framework. Additionally, we conduct case studies on specific drug-target binding activities, virtual screening experiments, drug feature visualizations, and real-world applications, all of which showcase the significant potential of our work. In conclusion, our proposed SSM-DTA framework addresses the data limitation challenge in DTA prediction and yields promising results, paving the way for more efficient and accurate drug discovery processes. Our code is available at $\href{https://github.com/QizhiPei/SSM-DTA}{GitHub}$.
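
The lightweight cross-attention module can be pictured roughly as in the PyTorch sketch below. This is a minimal illustration, not the authors' implementation: the class name, hidden size, number of heads, mean pooling, and regression head are all assumptions.

```python
import torch
import torch.nn as nn

class DrugTargetCrossAttention(nn.Module):
    """Sketch of a lightweight cross-attention between drug and target tokens.

    Hypothetical: the real SSM-DTA module may differ in depth, normalization,
    and pooling choices.
    """

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Drug tokens attend to target tokens and vice versa.
        self.drug_to_target = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.target_to_drug = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.affinity_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, drug_tokens: torch.Tensor, target_tokens: torch.Tensor) -> torch.Tensor:
        # drug_tokens: (B, Ld, dim), target_tokens: (B, Lt, dim)
        drug_ctx, _ = self.drug_to_target(drug_tokens, target_tokens, target_tokens)
        target_ctx, _ = self.target_to_drug(target_tokens, drug_tokens, drug_tokens)
        # Mean-pool each side and regress a single affinity value per pair.
        pooled = torch.cat([drug_ctx.mean(dim=1), target_ctx.mean(dim=1)], dim=-1)
        return self.affinity_head(pooled).squeeze(-1)


if __name__ == "__main__":
    model = DrugTargetCrossAttention()
    affinity = model(torch.randn(2, 64, 256), torch.randn(2, 512, 256))
    print(affinity.shape)  # torch.Size([2])
```

In the multi-task setting described above, the DTA regression loss on paired data would be combined with MLM losses on unpaired molecules and proteins; the weighting of these terms is not specified here.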

Related Content


Large language models (LLMs) have performed well in providing general and extensive health suggestions in single-turn conversations, as exemplified by systems such as ChatGPT, ChatGLM, ChatDoctor, and DoctorGLM. However, the limited information users provide in a single turn leads to suggestions that are poorly personalized and targeted, forcing users to pick out the useful parts on their own. This stems mainly from the models' inability to engage in multi-turn questioning. In real-world medical consultations, doctors usually employ a series of iterative inquiries to understand the patient's condition thoroughly, which enables them to provide effective and personalized suggestions; for LLMs, this can be defined as a chain of questioning (CoQ). To improve the CoQ of LLMs, we propose BianQue, a ChatGLM-based LLM fine-tuned on the self-constructed health conversation dataset BianQueCorpus, which consists of multiple turns of questioning and health suggestions polished by ChatGPT. Experimental results demonstrate that the proposed BianQue can balance the capabilities of questioning and health suggestion, which will help promote the research and application of LLMs in the field of proactive health.

Coronary stent designs have undergone significant transformations in geometry, materials, and drug-elution coatings, contributing to the continuous improvement of stenting success over recent decades. However, the increasing use of percutaneous coronary intervention techniques on complex coronary artery disease anatomy continues to be a challenge and pushes the boundaries of stent design. Design optimisation techniques in particular are a unique set of tools for assessing and balancing competing design objectives, unlocking the capacity to maximise the performance of stents. This review provides a brief history of metallic and bioresorbable stent design evolution before exploring the latest developments in performance metrics and design optimisation techniques in detail. This includes insights into contemporary stent designs, structural and haemodynamic performance metrics, shape and topology representation, and optimisation, along with the use of surrogates to deal with the computationally expensive nature of the underlying problem. Finally, current key gaps and future possibilities are explored, including hybrid optimisation of clinically relevant metrics, non-geometric variables such as material properties, and the possibility of personalised stenting devices.

Digital phenotyping in mental health often consists of collecting behavioural and experience-based information through sensor and self-reported data from devices such as smartphones. Such rich and comprehensive data could be used to develop insights into the relationships between daily behaviour and a range of mental health conditions. However, current analytical approaches have seen limited application because these datasets are both high-dimensional and multimodal in nature. This study demonstrates the first use of a principled method that consolidates the complexities of subjective self-reported data (Ecological Momentary Assessments, EMAs) with concurrent sensor-based data. The CrossCheck dataset is used to analyse data from 50 participants diagnosed with schizophrenia. Network analysis is applied to EMAs at an individual (n-of-1) level, while sensor data are used to identify periods of differing behavioural context. Networks generated during certain behavioural contexts, such as variations in the daily number of locations visited, were found to differ significantly from baseline networks and from networks generated over randomly sampled periods of time. The framework presented here lays a foundation to reveal behavioural contexts and the concurrent impact of self-reporting at an n-of-1 level. These insights are valuable in the management of serious mental illnesses such as schizophrenia.
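
To make the network-analysis step concrete, the sketch below estimates a sparse partial-correlation network from an n-of-1 EMA matrix using the graphical lasso. This is a hedged illustration only: the EMA data are simulated, and the study's actual network estimator may differ.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Hypothetical n-of-1 EMA matrix: rows are time points, columns are EMA items
# (e.g., mood, sleep quality, stress). Values here are simulated for illustration.
rng = np.random.default_rng(0)
ema = rng.normal(size=(120, 6))

# Estimate a sparse Gaussian graphical (partial-correlation) network, one common
# choice for EMA network analysis; the study's exact estimator may differ.
model = GraphicalLassoCV().fit(ema)
precision = model.precision_
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

print(np.round(partial_corr, 2))  # edge weights between EMA items
```

Networks estimated within sensor-defined behavioural contexts could then be compared against baseline and randomly sampled windows, as described above.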

Magnetic resonance imaging (MRI) is commonly used for brain tumor segmentation, which is critical for patient evaluation and treatment planning. To reduce the labor and expertise required for labeling, weakly-supervised semantic segmentation (WSSS) methods with class activation mapping (CAM) have been proposed. However, existing CAM methods suffer from low resolution due to strided convolution and pooling layers, resulting in inaccurate predictions. In this study, we propose a novel CAM method, Attentive Multiple-Exit CAM (AME-CAM), which extracts activation maps at multiple resolutions and hierarchically aggregates them to improve prediction accuracy. We evaluate our method on the BraTS 2021 dataset and show that it outperforms state-of-the-art methods.
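
A rough idea of combining activation maps from multiple exits is sketched below. This is a simplified stand-in rather than AME-CAM itself: uniform (or user-supplied) weights replace the paper's attentive aggregation, and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def aggregate_multi_exit_cams(cams, weights=None):
    """Upsample CAMs from multiple exits to the finest resolution and fuse them.

    cams: list of tensors shaped (B, 1, Hi, Wi) from coarse to fine exits.
    A simplified stand-in for AME-CAM's learned, attention-based aggregation.
    """
    target_size = cams[-1].shape[-2:]
    if weights is None:
        weights = torch.ones(len(cams)) / len(cams)
    upsampled = [
        F.interpolate(c, size=target_size, mode="bilinear", align_corners=False)
        for c in cams
    ]
    fused = sum(w * c for w, c in zip(weights, upsampled))
    # Normalize to [0, 1] per sample so the map can be thresholded into a mask.
    flat = fused.flatten(1)
    mn = flat.min(dim=1, keepdim=True).values.view(-1, 1, 1, 1)
    mx = flat.max(dim=1, keepdim=True).values.view(-1, 1, 1, 1)
    return (fused - mn) / (mx - mn + 1e-8)

cams = [torch.rand(2, 1, 16, 16), torch.rand(2, 1, 32, 32), torch.rand(2, 1, 64, 64)]
print(aggregate_multi_exit_cams(cams).shape)  # torch.Size([2, 1, 64, 64])
```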

Objective: Despite the recent increase in research activity, deep-learning models have not yet been widely accepted in medicine. The shortage of high-quality annotated data often hinders the development of robust and generalizable models, which do not suffer from degraded effectiveness when presented with newly collected, out-of-distribution (OOD) datasets. Methods: Contrastive Self-Supervised Learning (SSL) offers a potential solution to the scarcity of labeled data, as it takes advantage of unlabeled data to increase model effectiveness and robustness. In this research, we propose applying contrastive SSL for detecting abnormalities in phonocardiogram (PCG) samples by learning a generalized representation of the signal. Specifically, we perform an extensive comparative evaluation of a wide range of audio-based augmentations and evaluate trained classifiers on multiple datasets across different downstream tasks. Results: We experimentally demonstrate that, depending on its training distribution, the effectiveness of a fully-supervised model can degrade by up to 32% when evaluated on unseen data, while SSL models lose at most 10% or even improve in some cases. Conclusions: Contrastive SSL pretraining can provide robust classifiers that generalize to unseen, OOD data without relying on time- and labor-intensive annotation by medical experts. Furthermore, the proposed extensive evaluation protocol sheds light on the most promising and appropriate augmentations for robust PCG signal processing. Significance: We provide researchers and practitioners with a roadmap towards producing robust models for PCG classification, in addition to an open-source codebase for developing novel approaches.
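
The core contrastive pretraining recipe can be sketched as follows: two augmented views of each PCG segment are embedded and pulled together with an NT-Xent (SimCLR-style) loss. The augmentations, encoder, segment length, and hyperparameters here are placeholder assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def augment_pcg(x: torch.Tensor) -> torch.Tensor:
    """Two simple audio augmentations (Gaussian noise + circular time shift).
    The paper evaluates a much wider augmentation set; this is only a sketch."""
    x = x + 0.01 * torch.randn_like(x)
    shift = torch.randint(0, x.shape[-1], (1,)).item()
    return torch.roll(x, shifts=shift, dims=-1)

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """SimCLR-style contrastive loss between two augmented views, each (B, D)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))          # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage with a stand-in encoder; any 1D CNN over PCG waveforms would do.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(4000, 128))
wave = torch.randn(8, 1, 4000)                 # batch of short PCG segments (length is hypothetical)
loss = nt_xent(encoder(augment_pcg(wave)), encoder(augment_pcg(wave)))
print(loss.item())
```

After pretraining, the encoder would be reused (frozen or fine-tuned) for the downstream abnormality-detection classifiers evaluated across datasets.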

Digital Imaging and Communications in Medicine (DICOM) is widely used throughout the public health sector for portability in medical imaging. However, DICOM files contain vulnerabilities in the preamble section. Successful exploitation of these vulnerabilities allows attackers to embed executable code in the 128-byte preamble of DICOM files. Embedding the malicious executable does not interfere with the readability or functionality of the DICOM imagery, but it silently affects the underlying system when the files are viewed. This paper demonstrates the infiltration of Windows malware executables into DICOM files. On viewing, the malicious DICOM gets executed and can eventually infect the entire hospital network through the radiologist's workstation. This code injection process of executing malware via DICOM files affects hospital networks and workstation memory. Memory forensics on the infected radiologist's workstation is therefore crucial: it can identify which malware is disrupting the hospital environment and inform future detection methods. In this paper, we use machine learning (ML) algorithms to conduct memory forensics on three memory-dump categories, Trojan, Spyware, and Ransomware, taken from the CIC-MalMem-2022 dataset. We obtain the highest accuracy of 75% with the Random Forest model. To estimate feature importance for the ML model predictions, we leverage the concept of Shapley values.
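
A minimal version of the classification and Shapley-value pipeline might look like the following sketch, assuming a flat CSV export of the CIC-MalMem-2022 features with a "Category" label column (the file name and column names are hypothetical).

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical loading of the CIC-MalMem-2022 memory-dump feature table;
# the actual file name, column names, and label encoding may differ.
df = pd.read_csv("malmem2022.csv")
X = df.drop(columns=["Category"])           # memory-forensics features
y = df["Category"]                           # Trojan / Spyware / Ransomware

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Shapley-value estimates of per-feature contributions to the predictions.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```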

Studies have shown marked sex disparities in Coronary Artery Disease (CAD) epidemiology, yet the underlying mechanisms remain unclear. We explored sex disparities in the coronary anatomy and the resulting haemodynamics in patients with suspected, but no significant, CAD. Left Main (LM) bifurcations were reconstructed from CTCA images of 127 cases (42 males and 85 females, aged 38 to 81). Detailed shape parameters were measured for comparison, including bifurcation angles, curvature, and diameters, before computing the haemodynamic metrics using CFD. The severity and location of the normalised vascular area exposed to physiologically adverse haemodynamics were statistically compared between sexes for all branches. We found significant differences between sexes in potentially adverse haemodynamics. Females were more likely than males to exhibit adversely low Time Averaged Endothelial Shear Stress along the inner wall of a bifurcation (16.8% vs. 10.7%). Males had a higher percentage of areas exposed to both adversely high Relative Residence Time (6.1% vs 4.2%, p=0.001) and high Oscillatory Shear Index (4.6% vs 2.3%, p<0.001). However, the OSI values were generally small and should be interpreted cautiously. Males had larger arteries (M vs F, LM: 4.0mm vs 3.3mm, LAD: 3.6mm vs 3.0mm, LCx: 3.5mm vs 2.9mm), and females exhibited higher curvatures in all three branches (M vs F, LM: 0.40 vs 0.46, LAD: 0.45 vs 0.51, LCx: 0.47 vs 0.55, p<0.001) and a larger inflow angle of the LM trunk (M: 12.9° vs F: 18.5°, p=0.025). Haemodynamic differences were found between male and female patients, which may contribute, at least in part, to differences in CAD risk. This work may facilitate a better understanding of sex differences in the clinical presentation of CAD, contributing to improved sex-specific screening, especially relevant for women with CAD who currently have worse predictive outcomes.
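
For reference, the commonly used definitions of these haemodynamic metrics, computed from the wall shear stress vector $\boldsymbol{\tau}_w$ over a cardiac cycle of period $T$, are (assuming the study follows the standard formulations):

\[
\mathrm{TAESS} = \frac{1}{T}\int_0^T \lvert \boldsymbol{\tau}_w \rvert \, dt, \qquad
\mathrm{OSI} = \frac{1}{2}\left(1 - \frac{\left\lvert \int_0^T \boldsymbol{\tau}_w \, dt \right\rvert}{\int_0^T \lvert \boldsymbol{\tau}_w \rvert \, dt}\right), \qquad
\mathrm{RRT} = \frac{1}{(1 - 2\,\mathrm{OSI})\,\mathrm{TAESS}}.
\]

The reported percentages presumably correspond to the normalised area of each branch where these metrics cross adverse thresholds.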

Transparency of information disclosure has always been considered an instrumental component of effective governance, accountability, and ethical behavior in any organization or system. However, a natural question follows: \emph{what is the cost or benefit of being transparent}, as one may suspect that transparency imposes additional constraints on the information structure, decreasing the maneuverability of the information provider. This work proposes and quantitatively investigates the \emph{price of transparency} (PoT) in strategic information disclosure by comparing the perfect Bayesian equilibrium payoffs under two representative information structures: overt persuasion and covert signaling models. PoT is defined as the ratio between the payoff outcomes in covert and overt interactions. As the main contribution, this work develops a bilevel-bilinear programming approach, called $Z$-programming, to solve for non-degenerate perfect Bayesian equilibria of dynamic incomplete information games with finite states and actions. Using $Z$-programming, we show that it is always in the information provider's interest to choose the transparent information structure, as $0\leq \textrm{PoT}\leq 1$. The upper bound is attainable for any strictly Bayesian-posterior competitive games, of which zero-sum games are a particular case. For continuous games, the PoT, still upper-bounded by $1$, can be arbitrarily close to $0$, indicating the tightness of the lower bound. This tight lower bound suggests that the lack of transparency can result in significant loss for the provider. We corroborate our findings using quadratic games and numerical examples.
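
In symbols, the stated definition amounts to the ratio below, where $U^{\text{covert}}$ and $U^{\text{overt}}$ denote the information provider's perfect Bayesian equilibrium payoffs under covert signaling and overt persuasion, respectively (the symbol $U$ is ours, not necessarily the paper's notation):

\[
\mathrm{PoT} \;=\; \frac{U^{\text{covert}}}{U^{\text{overt}}}, \qquad 0 \le \mathrm{PoT} \le 1 .
\]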

Due to its conceptual simplicity and generality, compressive neural representation has emerged as a promising alternative to traditional compression methods for managing massive volumetric datasets. The current practice of neural compression utilizes a single large multilayer perceptron (MLP) to encode the global volume, incurring slow training and inference. This paper presents an efficient compressive neural representation (ECNR) solution for time-varying data compression, utilizing the Laplacian pyramid for adaptive signal fitting. Following a multiscale structure, we leverage multiple small MLPs at each scale for fitting local content or residual blocks. By assigning similar blocks to the same MLP via size uniformization, we enable balanced parallelization among MLPs to significantly speed up training and inference. Working in concert with the multiscale structure, we tailor a deep compression strategy to compact the resulting model. We show the effectiveness of ECNR with multiple datasets and compare it with state-of-the-art compression methods (mainly SZ3, TTHRESH, and neurcomp). The results position ECNR as a promising solution for volumetric data compression.
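
The multiscale residual fitting can be pictured with the sketch below: a 3D Laplacian pyramid splits the volume into per-scale residuals, each of which would be fit by small MLPs. The pyramid depth, MLP width, use of raw coordinate inputs, and the omission of block assignment and deep compression are simplifying assumptions; this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def laplacian_pyramid_3d(volume: torch.Tensor, levels: int = 3):
    """Decompose a (1, 1, D, H, W) volume into Laplacian residuals plus a coarse base.
    Simplified stand-in for ECNR's multiscale signal fitting."""
    pyramid, current = [], volume
    for _ in range(levels - 1):
        down = F.avg_pool3d(current, kernel_size=2)
        up = F.interpolate(down, size=current.shape[-3:], mode="trilinear", align_corners=False)
        pyramid.append(current - up)    # residual (detail) at this scale
        current = down
    pyramid.append(current)             # coarsest approximation
    return pyramid

class SmallMLP(torch.nn.Module):
    """One of many small per-scale MLPs mapping (x, y, z) coordinates to values."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )
    def forward(self, coords):
        return self.net(coords)

volume = torch.rand(1, 1, 32, 32, 32)
residuals = laplacian_pyramid_3d(volume)
print([r.shape for r in residuals])  # residuals at 32^3 and 16^3, base at 8^3
```

Grouping similar residual blocks onto the same small MLP, as the paper describes, is what enables balanced parallel training across MLPs.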

We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available.
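
The attention gate follows the general additive-attention form sketched below in PyTorch; channel sizes, resampling choices, and the omission of batch normalization are illustrative simplifications rather than the exact published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Additive attention gate applied to U-Net skip connections (sketch).

    g is the gating signal from the coarser decoder stage, x the skip features
    from the encoder; the gate suppresses irrelevant regions in x.
    """

    def __init__(self, in_channels: int, gating_channels: int, inter_channels: int):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # Project skip features and gating signal into a common space.
        theta = self.theta_x(x)
        phi = F.interpolate(self.phi_g(g), size=theta.shape[-2:], mode="bilinear",
                            align_corners=False)
        # Additive attention coefficients in [0, 1], one per spatial location.
        alpha = torch.sigmoid(self.psi(F.relu(theta + phi)))
        return x * alpha                      # gated skip features


skip = torch.randn(1, 64, 128, 128)           # encoder features
gate = torch.randn(1, 128, 64, 64)            # coarser decoder features
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 128, 128])
```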
