
In healthcare, multimodal data such as medical images and clinical reports is prevalent and must be analyzed comprehensively before diagnostic decisions are made. However, current large-scale artificial intelligence models predominantly focus on single-modal cognitive abilities and neglect the integration of multiple modalities. We therefore propose Stone Needle, a general multimodal large-scale model framework tailored explicitly for healthcare applications. Stone Needle serves as a comprehensive medical multimodal foundation, integrating modalities such as text, images, video, and audio to surpass the limitations of single-modal systems. Through its components of intent analysis, medical foundation models, a prompt manager, and a medical language module, the architecture supports multimodal interaction across multiple rounds of dialogue. Because the framework is general, it integrates diverse modalities and can be tailored to specific tasks. Experimental results demonstrate the superior performance of our method compared to single-modal systems. The fusion of different modalities and the ability to process complex medical information enable Stone Needle to support accurate diagnosis, treatment recommendations, and patient care.

Related Content

Log data is pivotal in activities such as anomaly detection and failure diagnosis in the automated maintenance of software systems. Because logs are unstructured, log parsing is often required to transform them into a structured format for automated analysis. A variety of log parsers exist, making it vital to benchmark these tools to understand their features and performance. However, existing datasets for log parsing are limited in scale and representativeness, posing challenges for studies that aim to evaluate or develop log parsers. The problem becomes more pronounced when these parsers are evaluated for production use. To address these issues, we introduce a new collection of large-scale annotated log datasets, named LogPub, which more accurately mirrors the log data observed in real-world software systems. LogPub comprises 14 datasets, each averaging 3.6 million log lines. Utilizing LogPub, we re-evaluate 15 log parsers in a more rigorous and practical setting. We also propose a new evaluation metric to lessen the sensitivity of current metrics to imbalanced data distributions. Furthermore, we are the first to scrutinize the detailed performance of log parsers on logs that represent rare system events and offer comprehensive information for system troubleshooting; parsing such logs accurately is vital yet challenging. We believe that our work can shed light on the design and evaluation of log parsers in more realistic settings, thereby facilitating their adoption in production systems.
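The core idea of log parsing, transforming free-form log lines into structured templates, can be sketched in a few lines. This is only a minimal regex-based illustration of what parsers automate; real parsers such as Drain infer templates from many lines rather than from hand-written rules.

```python
import re

def parse_log_line(line):
    """Mask variable fields (IPs, hex IDs, numbers) to recover a log template.

    A toy illustration of the log-parsing task; production parsers learn
    templates automatically instead of using fixed regexes like these.
    """
    template = re.sub(r"\b\d+\.\d+\.\d+\.\d+\b", "<*>", line)   # IP addresses
    template = re.sub(r"\b0x[0-9a-fA-F]+\b", "<*>", template)   # hex values
    template = re.sub(r"\b\d+\b", "<*>", template)              # plain numbers
    return template

print(parse_log_line("Connection from 10.0.0.5 failed after 3 retries"))
# -> Connection from <*> failed after <*> retries
```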

The analysis of walking patterns is an important area of research with numerous applications in security, healthcare, sports, and human-computer interaction. Recently, walking patterns have come to be regarded as a unique fingerprint for automatic person identification at a distance. In this work, we propose a novel gait recognition architecture called Gait Pyramid Transformer (GaitPT) that leverages pose estimation skeletons to capture unique walking patterns without relying on appearance information. GaitPT adopts a hierarchical transformer architecture that effectively extracts both spatial and temporal features of movement in an anatomically consistent manner, guided by the structure of the human skeleton. Our results show that GaitPT achieves state-of-the-art performance compared to other skeleton-based gait recognition methods in both controlled and in-the-wild scenarios. GaitPT obtains 82.6% average accuracy on CASIA-B, surpassing other works by a margin of 6%. Moreover, it obtains 52.16% Rank-1 accuracy on GREW, outperforming both skeleton-based and appearance-based approaches.

For many data-processing applications, a comprehensive set of efficient operations for the management of priority values is required. Indexed priority queues are particularly promising to satisfy this requirement by design. In this work, we report the design and analysis of an efficient indexed priority queue with a comprehensive set of operations. In particular, $\mathtt{insert}$, $\mathtt{delete}$ and $\mathtt{decrease}$ all run in expected $O(\log^{*}{n})$ time, while $\mathtt{increase}$ is conjectured by means of Monte Carlo simulations to run in expected $O(\log\log{n})$ time. The space complexity as well as the time complexity for the construction of the empty heap data structure is $O(n)$. For certain massive computational problems, such as specific analyses of extremely large graphs and (chemical) simulations, this heap system may exhibit considerable utility.
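The interface described above (keyed items with `insert`, `decrease`, and `delete` addressable by item) can be illustrated with an ordinary binary-heap indexed priority queue. This sketch only demonstrates the operation set with textbook O(log n) bounds; it is not the paper's data structure, which achieves far better expected complexities.

```python
class IndexedMinHeap:
    """Binary-heap indexed priority queue: illustrates the insert/decrease/
    delete interface (all O(log n) here, unlike the paper's structure)."""

    def __init__(self):
        self.heap = []   # list of (key, item) pairs
        self.pos = {}    # item -> index in self.heap

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.heap[i][0] < self.heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and self.heap[c][0] < self.heap[smallest][0]:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def insert(self, item, key):
        self.heap.append((key, item))
        self.pos[item] = len(self.heap) - 1
        self._sift_up(len(self.heap) - 1)

    def decrease(self, item, new_key):
        i = self.pos[item]
        assert new_key <= self.heap[i][0], "decrease must not raise the key"
        self.heap[i] = (new_key, item)
        self._sift_up(i)

    def delete(self, item):
        i = self.pos.pop(item)
        last = self.heap.pop()
        if i < len(self.heap):        # re-seat the displaced last element
            self.heap[i] = last
            self.pos[last[1]] = i
            self._sift_down(i)
            self._sift_up(i)

    def pop_min(self):
        key, item = self.heap[0]
        self.delete(item)
        return item, key
```

The `pos` index map is what makes the queue "indexed": `decrease` and `delete` can locate any item in O(1) before restoring the heap property.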

Metaverse-enabled digital healthcare systems are expected to exploit an unprecedented amount of personal health data while ensuring that sensitive or private information about individuals is not disclosed. Machine learning and artificial intelligence (ML/AI) techniques can be widely utilized in metaverse healthcare systems, such as virtual clinics and intelligent consultations. In such scenarios, the key challenge is that data privacy laws might not allow virtual clinics to share their medical data with other parties. Moreover, clinical AI/ML models themselves carry extensive information about the medical datasets, such that private attributes can easily be inferred by malicious actors in the metaverse (if not rigorously privatized). In this paper, inspired by the idea of "incognito mode", which has recently been developed as a promising solution to safeguard metaverse users' privacy, we propose global differential privacy for distributed metaverse healthcare systems. In our scheme, a randomized mechanism in the form of artificial "mix-up" noise is applied to the federated clinical ML/AI models before they are shared with other peers. This way, we provide an adjustable level of distributed privacy against both malicious actors and honest-but-curious metaverse servers. Our evaluations on the Breast Cancer Wisconsin dataset (BCWD) highlight the privacy-utility trade-off (PUT) in terms of diagnosis accuracy and loss function for different levels of privacy. We also compare our private scheme with the non-private centralized setup in terms of diagnosis accuracy.
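The general pattern of privatizing a model update before sharing can be sketched with the standard Gaussian mechanism: clip the update to bound its sensitivity, then add calibrated noise. This is a generic differential-privacy illustration, not the paper's "mix-up" mechanism; the function and its parameters (`clip_norm`, `epsilon`, `delta`) are illustrative names.

```python
import numpy as np

def privatize_weights(weights, clip_norm=1.0, epsilon=1.0, delta=1e-5):
    """Gaussian-mechanism sketch: clip a model update to bound its L2
    sensitivity, then add noise scaled to the (epsilon, delta) budget.
    A generic illustration only; the paper's randomized mechanism differs.
    """
    w = np.asarray(weights, dtype=float)
    norm = np.linalg.norm(w)
    w = w / max(1.0, norm / clip_norm)                       # clip to clip_norm
    sigma = clip_norm * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return w + np.random.normal(0.0, sigma, size=w.shape)    # calibrated noise
```

Lower `epsilon` means stronger privacy and larger noise, which is exactly the privacy-utility trade-off the abstract evaluates.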

Integration against, and hence sampling from, high-dimensional probability distributions is of essential importance in many application areas and has been an active research area for decades. One approach that has drawn increasing attention in recent years has been the generation of samples from a target distribution $\mathbb{P}_{\mathrm{tar}}$ using transport maps: if $\mathbb{P}_{\mathrm{tar}} = T_\# \mathbb{P}_{\mathrm{ref}}$ is the pushforward of an easily-sampled probability distribution $\mathbb{P}_{\mathrm{ref}}$ under the transport map $T$, then the application of $T$ to $\mathbb{P}_{\mathrm{ref}}$-distributed samples yields $\mathbb{P}_{\mathrm{tar}}$-distributed samples. This paper proposes the application of transport maps not just to random samples, but also to quasi-Monte Carlo points, higher-order nets, and sparse grids in order for the transformed samples to inherit the original convergence rates that are often better than $N^{-1/2}$, $N$ being the number of samples/quadrature nodes. Our main result is the derivation of an explicit transport map for the case that $\mathbb{P}_{\mathrm{tar}}$ is a mixture of simple distributions, e.g.\ a Gaussian mixture, in which case application of the transport map $T$ requires the solution of an \emph{explicit} ODE with \emph{closed-form} right-hand side. Mixture distributions are of particular applicability and interest since many methods proceed by first approximating $\mathbb{P}_{\mathrm{tar}}$ by a mixture and then sampling from that mixture (often using importance reweighting). Hence, this paper allows for the sampling step to provide a better convergence rate than $N^{-1/2}$ for all such methods.
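The pushforward idea, applying a transport map T to structured quadrature points rather than i.i.d. samples, can be shown in one dimension, where the inverse CDF is itself a transport map. This sketch pushes a base-2 van der Corput low-discrepancy sequence through the inverse CDF of a toy two-component Gaussian mixture (all parameters illustrative); the paper's construction uses an explicit ODE rather than CDF inversion.

```python
from statistics import NormalDist

# Toy target: equal-weight mixture of N(-2, 1) and N(3, 0.5).
components = [(0.5, NormalDist(-2.0, 1.0)), (0.5, NormalDist(3.0, 0.5))]

def mixture_cdf(x):
    return sum(w * d.cdf(x) for w, d in components)

def transport(u, lo=-10.0, hi=10.0, iters=60):
    """1-D transport map T = F^{-1}: invert the mixture CDF by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def van_der_corput(n):
    """First n points of the base-2 van der Corput sequence in (0, 1)."""
    pts = []
    for i in range(1, n + 1):
        q, b, x = i, 0.5, 0.0
        while q:
            q, r = divmod(q, 2)
            x += r * b
            b *= 0.5
        pts.append(x)
    return pts

# Pushforward of QMC points: the transformed points target the mixture
# while keeping the low-discrepancy structure of the reference points.
samples = [transport(u) for u in van_der_corput(256)]
```

The sample mean approximates the mixture mean 0.5 · (−2) + 0.5 · 3 = 0.5, with the faster-than-N^{−1/2} error behavior inherited from the QMC points.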

Accurate face recognition systems are increasingly important in sensitive applications like border control and migration management. It therefore becomes crucial to quantify the quality of facial images to ensure that low-quality images do not affect recognition accuracy. In this context, the current draft of ISO/IEC 29794-5 introduces the concept of component quality to estimate how single factors of variation affect recognition outcomes. In this study, we propose a quality measure (NeutrEx) based on the accumulated distances of a 3D face reconstruction to a neutral-expression anchor. Our evaluations demonstrate the superiority of our proposed method compared to baseline approaches obtained by training Support Vector Machines on face embeddings extracted from a pre-trained Convolutional Neural Network for facial expression classification. Furthermore, we highlight the explainable nature of our NeutrEx measures by computing per-vertex distances to unveil the most impactful face regions, allowing operators to give actionable feedback to subjects.
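The core computation, accumulating per-vertex distances between a reconstructed mesh and a neutral-expression anchor, is simple to sketch. The exponential mapping from accumulated distance to a (0, 1] score below is an illustrative choice, not the paper's exact formula; returning the per-vertex distances mirrors the explainability use described in the abstract.

```python
import numpy as np

def expression_neutrality_score(vertices, anchor_vertices):
    """Sketch of a NeutrEx-style measure: accumulate per-vertex distances
    between a reconstructed 3D face and a neutral-expression anchor mesh,
    then map the mean distance to a (0, 1] quality score.
    The exp(-mean) mapping is illustrative, not the paper's formula.
    """
    v = np.asarray(vertices, dtype=float)
    a = np.asarray(anchor_vertices, dtype=float)
    per_vertex = np.linalg.norm(v - a, axis=1)          # distance per vertex
    score = float(np.exp(-per_vertex.mean()))           # 1.0 = perfectly neutral
    return score, per_vertex
```

The per-vertex vector can be rendered as a heat map over the mesh to show which face regions drive a low score.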

In healthcare, the role of AI is continually evolving, and understanding the challenges its introduction poses to relationships between healthcare providers and patients will require a regulatory and behavioural approach that can provide a guiding base for all users involved. In this paper, we present ACIPS (Acceptability, Comfortability, Informed Consent, Privacy, and Security), a framework for evaluating patient response to the introduction of AI-enabled digital technologies in healthcare settings. We justify the need for ACIPS with a general introduction to the challenges with, and perceived relevance of, AI in human-welfare-centered fields, with an emphasis on the provision of healthcare. The framework is composed of five principles that measure the perceptions of acceptability, comfortability, informed consent, privacy, and security that patients hold when learning how AI is used in their healthcare. We propose that the tenets composing this framework can be translated into guidelines outlining the proper use of AI in healthcare while broadening the limited understanding of this topic.

In recommendation systems (RS), user behavior data is observational rather than experimental, resulting in widespread bias in the data. Consequently, tackling bias has emerged as a major challenge in the field of recommendation systems. Recently, Doubly Robust learning (DR) has gained significant attention due to its remarkable performance and robustness properties. However, our experimental findings indicate that existing DR methods are severely impacted by the presence of so-called poisonous imputation, where the imputation significantly deviates from the truth and becomes counterproductive. To address this issue, this work proposes a Conservative Doubly Robust strategy (CDR), which filters imputations by scrutinizing their mean and variance. Theoretical analyses show that CDR offers reduced variance and improved tail bounds. In addition, our experimental investigations illustrate that CDR significantly enhances performance and can indeed reduce the frequency of poisonous imputation.
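The filtering idea, rejecting a batch of imputed errors whose mean or variance deviates too far from observed errors, can be sketched as follows. The thresholds `mean_tol` and `var_tol` and the fall-back-to-observed behavior are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np

def filter_imputations(imputed, observed, mean_tol=0.5, var_tol=1.0):
    """Sketch of a conservative imputation filter in the spirit of CDR:
    accept a batch of imputed errors only if its mean and variance stay
    close to those of the observed errors; otherwise fall back to the
    observed errors. Thresholds and fallback are illustrative.
    """
    imputed = np.asarray(imputed, dtype=float)
    observed = np.asarray(observed, dtype=float)
    mean_ok = abs(imputed.mean() - observed.mean()) <= mean_tol
    var_ok = abs(imputed.var() - observed.var()) <= var_tol
    return imputed if (mean_ok and var_ok) else observed
```

Screening both moments is what makes the strategy "conservative": an imputation model that is badly miscalibrated in location or spread is never allowed to dominate the estimator.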

Designing and generating new data with targeted properties underpins critical applications such as molecule design, image editing, and speech synthesis. Traditional hand-crafted approaches rely heavily on expert experience and intensive human effort, yet still suffer from insufficient scientific knowledge and low throughput, limiting effective and efficient data generation. Recently, advances in deep learning have produced expressive methods that can learn the underlying representation and properties of data. This capability creates new opportunities to uncover the mutual relationship between the structural patterns and functional properties of data, and to leverage that relationship to generate structured data given desired properties. This article provides a systematic review of this promising research area, commonly known as controllable deep data generation. First, the potential challenges are raised and preliminaries are provided. Then controllable deep data generation is formally defined, a taxonomy of techniques is proposed, and the evaluation metrics in this specific domain are summarized. After that, exciting applications of controllable deep data generation are introduced and existing works are experimentally analyzed and compared. Finally, promising future directions of controllable deep data generation are highlighted and five potential challenges are identified.

Applying artificial intelligence techniques in medical imaging is one of the most promising areas in medicine. However, most of the recent success in this area highly relies on large amounts of carefully annotated data, whereas annotating medical images is a costly process. In this paper, we propose a novel method, called FocalMix, which, to the best of our knowledge, is the first to leverage recent advances in semi-supervised learning (SSL) for 3D medical image detection. We conducted extensive experiments on two widely used datasets for lung nodule detection, LUNA16 and NLST. Results show that our proposed SSL methods can achieve a substantial improvement of up to 17.3% over state-of-the-art supervised learning approaches with 400 unlabeled CT scans.
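FocalMix builds on MixUp-style augmentation for semi-supervised learning; the basic MixUp operation it adapts can be sketched generically. This shows only the standard convex combination of two samples and their soft labels (with `alpha` as the Beta-distribution prior), not FocalMix's 3D-detection-specific target mixing.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Generic MixUp: draw lam ~ Beta(alpha, alpha) and form convex
    combinations of two inputs and their (soft) labels. FocalMix adapts
    this idea to 3D detection; this is only the basic operation.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * np.asarray(x1, dtype=float) + (1 - lam) * np.asarray(x2, dtype=float)
    y = lam * np.asarray(y1, dtype=float) + (1 - lam) * np.asarray(y2, dtype=float)
    return x, y, lam
```

In the semi-supervised setting, the soft labels for unlabeled scans would come from model predictions (pseudo-labels) rather than annotations.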
