
Retrieving information from EHR systems is essential for answering specific questions about patient journeys and improving the delivery of clinical care. Despite this, most EHR systems still rely on keyword-based search. With the advent of generative large language models (LLMs), retrieval can enable better search and summarization capabilities, and such retrievers can also feed Retrieval-Augmented Generation (RAG) pipelines to answer arbitrary queries. However, retrieving information from the real-world clinical data contained within EHR systems to support downstream use cases is challenging because creating query-document support pairs is difficult. We provide a blueprint for building such datasets affordably using large language models. Our method yields a retriever that is 30-50 F1 points better than proprietary counterparts such as Ada and Mistral on oncology data elements. We also compare our model, called Onco-Retriever, against a fine-tuned PubMedBERT model. We conduct an extensive manual evaluation on real-world EHR data, analyze the latency of the different models, and outline a path forward for healthcare organizations to build domain-specific retrievers.
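
To make the pair-construction step concrete, here is a minimal sketch, assuming a generic chat-completion client and an illustrative set of oncology data elements; the element names, prompt, and helper functions below are invented for illustration and do not come from the paper.

```python
# Minimal sketch (not the authors' pipeline): using an LLM to label EHR
# snippets with the oncology data elements they support, yielding
# query-document pairs for retriever training and evaluation.
# `call_llm` is a placeholder for any chat-completion client.

import json
from typing import Callable, Dict, List

DATA_ELEMENTS = ["cancer stage", "biopsy result", "chemotherapy regimen"]  # illustrative

PROMPT = (
    "For the clinical note below, list which of these data elements it "
    "contains evidence for: {elements}.\nReturn a JSON list.\n\nNote:\n{note}"
)

def build_pairs(notes: List[str], call_llm: Callable[[str], str]) -> List[Dict]:
    """Create (query, supporting-document) pairs from raw notes."""
    pairs = []
    for note in notes:
        raw = call_llm(PROMPT.format(elements=DATA_ELEMENTS, note=note))
        for element in json.loads(raw):          # elements the LLM found
            pairs.append({"query": element, "document": note})
    return pairs

if __name__ == "__main__":
    # Stub LLM so the sketch runs end-to-end without an API key.
    fake_llm = lambda prompt: json.dumps(["cancer stage"])
    notes = ["Pathology: pT2N1M0, stage IIB adenocarcinoma..."]
    print(build_pairs(notes, fake_llm))
```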

Related content

The INFORMS Journal on Computing publishes high-quality papers that broaden the scope of operations research and computing. It seeks original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
May 27, 2024

The current study presents a comprehensive review of the benefits and barriers associated with the adoption of Building Information Modeling (BIM) in infrastructure projects, focusing on the period from 2013 to 2023. The research explores the manifold advantages offered by BIM across the entire project life cycle, including planning, design, construction, maintenance, and sustainability. Notably, BIM enhances collaboration, facilitates real-time data-driven decision-making, and leads to substantial cost and time savings. In parallel, a systematic literature review was conducted to identify and categorize the barriers hindering BIM adoption within the infrastructure industry. Eleven studies were selected for in-depth analysis, revealing a total of 74 obstacles. Through synthetic analysis and thematic clustering, seven primary impediments to BIM adoption were identified: challenges related to education/training, resistance to change, unclear business value, perceived cost, lack of standards and guidelines, lack of mandates, and lack of initiatives. By examining both the benefits of and the barriers to BIM adoption in infrastructure projects, this review offers an important perspective for improving BIM adoption strategies, policies, and standards. Future directions for research and industry development are outlined, including efforts to enhance education and training, promote standardization, advocate for policy and mandates, and integrate BIM with emerging technologies.

This paper introduces Ethics Pathways, a design activity aimed at understanding HCI and design researchers' ethics engagements and flows during their research process. Despite a strong ethical commitment in these fields, challenges persist in grasping the complexity of researchers' engagement with ethics -- the practices conducted to operationalize ethics -- in situated institutional contexts. Ethics Pathways, developed through six playtesting sessions, offers a design approach to understanding the complexities of researchers' past ethics engagements in their work. The activity involves four main tasks: recalling ethical incidents; describing the stakeholders involved in the situation; recounting their actions or speculative alternatives; and a reflection and emotion walk-through. The paper reflects on the role of design decisions and facilitation strategies in achieving these goals. The design activity contributes to the discourse on ethical HCI research by conceptualizing ethics engagement as part of an ongoing research process, highlighting connections between individual affective experiences, social interactions across power differences, and institutional goals.

We introduce Score identity Distillation (SiD), an innovative data-free method that distills the generative capabilities of pretrained diffusion models into a single-step generator. SiD not only facilitates an exponentially fast reduction in Fréchet inception distance (FID) during distillation but also approaches or even exceeds the FID performance of the original teacher diffusion models. By reformulating forward diffusion processes as semi-implicit distributions, we leverage three score-related identities to create an innovative loss mechanism. This mechanism achieves rapid FID reduction by training the generator on its own synthesized images, eliminating the need for real data or reverse-diffusion-based generation, all within significantly shortened generation time. Evaluated across four benchmark datasets, the SiD algorithm demonstrates high iteration efficiency during distillation and surpasses competing distillation approaches, whether one-step or few-step, data-free or dependent on training data, in terms of generation quality. This achievement redefines the benchmarks for efficiency and effectiveness not only in diffusion distillation but also in the broader field of diffusion-based generation. The PyTorch implementation is available at //github.com/mingyuanzhou/SiD
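
As a rough intuition for the data-free setup (not SiD's actual score-identity loss), the sketch below shows the general shape of distilling a frozen teacher into a one-step generator by re-noising the generator's own samples; the networks, interpolant, and surrogate loss are all toy placeholders.

```python
# Highly simplified sketch of data-free score distillation (NOT SiD's exact
# loss): the one-step generator G maps noise to samples, the samples are
# re-noised by a toy forward process, and the frozen teacher's output on
# those noisy samples supplies the training signal.

import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(16, 64), nn.SiLU(), nn.Linear(64, 16))
for p in teacher.parameters():            # teacher is treated as frozen
    p.requires_grad_(False)
generator = nn.Sequential(nn.Linear(16, 64), nn.SiLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

for step in range(3):                     # a few illustrative steps
    z = torch.randn(8, 16)
    x = generator(z)                      # single-step generation
    t = torch.rand(8, 1)
    noise = torch.randn_like(x)
    x_t = (1 - t) * x + t * noise         # toy forward-diffusion interpolant
    with torch.no_grad():
        target = teacher(x_t)             # teacher output as the signal
    loss = ((x_t - target) ** 2).mean()   # surrogate stand-in for the
    opt.zero_grad()                       # paper's score-identity objective
    loss.backward()
    opt.step()
```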

The rapid advancement of artificial intelligence technologies has given rise to diversified intelligent services, which place unprecedented demands on massive connectivity and gigantic data aggregation. However, scarce radio resources and stringent latency requirements make it challenging to meet these demands. To tackle these challenges, over-the-air computation (AirComp) emerges as a potential technology. Specifically, AirComp seamlessly integrates the communication and computation procedures through the superposition property of multiple-access channels, which yields a revolutionary multiple-access paradigm shift from "compute-after-communicate" to "compute-when-communicate". By this means, AirComp enables spectral-efficient and low-latency wireless data aggregation by allowing multiple devices to occupy the same channel for transmission. In this paper, we present recent advancements in AirComp in terms of foundations, technologies, and applications. The mathematical form and communication design are introduced as the foundations of AirComp, and the critical issues of AirComp over different network architectures are then discussed along with a review of the existing literature. The technologies employed for the analysis and optimization of AirComp are reviewed from the information-theoretic and signal-processing perspectives. Moreover, we present existing studies that tackle practical implementation issues in AirComp systems and elaborate on the applications of AirComp in the Internet of Things and edge intelligent networks. Finally, potential research directions are highlighted to motivate the future development of AirComp.
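
A toy simulation can make the superposition idea concrete. The sketch below assumes simple channel-inversion pre-equalization and purely illustrative parameters; it is not drawn from the paper.

```python
# Toy NumPy simulation of over-the-air computation: K devices pre-equalize
# their values against their channel gains and transmit simultaneously; the
# multiple-access channel superposes the signals, so the receiver observes
# (approximately) the sum in a single channel use.

import numpy as np

rng = np.random.default_rng(0)
K = 10
values = rng.uniform(0, 1, K)            # each device's local datum
h = rng.rayleigh(1.0, K) + 0.1           # channel gains (kept away from 0)

tx = values / h                          # channel-inversion pre-equalization
noise = rng.normal(0, 0.01)
y = np.sum(h * tx) + noise               # superposition over the MAC

print("true sum:", values.sum(), " over-the-air estimate:", y)
```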

This paper reports on an audit study of generative AI systems (ChatGPT, Bing Chat, and Perplexity) that investigates how these new search engines construct responses and establish authority for topics of public importance. We collected system responses using a set of 48 authentic queries for 4 topics over a 7-day period and analyzed the data using sentiment analysis, inductive coding, and source classification. The results provide an overview of the nature of system responses across these systems and provide evidence of sentiment bias based on the queries and topics, as well as commercial and geographic bias in sources. The quality of the sources used to support claims is uneven, relying heavily on News and Media, Business, and Digital Media websites. The implications for system users emphasize the need to critically examine generative AI system outputs when making decisions related to public interest and personal well-being.
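
For readers unfamiliar with this kind of analysis pipeline, the sketch below shows one way sentiment could be tallied per topic over collected responses; the lexicon, data, and scoring rule are invented for illustration, since the abstract does not specify the tooling used.

```python
# Illustrative sketch only (not the authors' tooling): scoring collected
# system responses with a tiny polarity lexicon and tallying sentiment
# per query topic.

from collections import defaultdict

POS = {"benefit", "safe", "effective", "improve"}
NEG = {"risk", "harm", "misleading", "bias"}

def polarity(text: str) -> int:
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

responses = [  # (topic, system response) pairs, invented examples
    ("vaccines", "Studies show the vaccine is safe and effective."),
    ("vaccines", "Some sources claim there is harm and risk."),
]

by_topic = defaultdict(list)
for topic, text in responses:
    by_topic[topic].append(polarity(text))

for topic, scores in by_topic.items():
    print(topic, "mean polarity:", sum(scores) / len(scores))
```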

The success of current Large Language Models (LLMs) hinges on extensive training data that is collected and stored centrally, an approach called Centralized Learning (CL). However, such a collection manner poses a privacy threat, and one potential solution is Federated Learning (FL), which transfers gradients, not raw data, among clients. Unlike traditional networks, FL for LLMs incurs significant communication costs due to their tremendous parameter counts. This study introduces an innovative approach to compressing gradients to improve communication efficiency during LLM FL, formulating a new FL pipeline named CG-FedLLM. This approach integrates an encoder on the client side to acquire compressed gradient features and a decoder on the server side to reconstruct the gradients. We also developed a novel training strategy comprising Temporal-ensemble Gradient-Aware Pre-training (TGAP), to identify characteristic gradients of the target model, and Federated AutoEncoder-Involved Fine-tuning (FAF), to compress gradients adaptively. Extensive experiments confirm that our approach reduces communication costs and improves performance (e.g., an average 3-point improvement over traditional CL- and FL-based fine-tuning with LLaMA on the well-recognized C-Eval benchmark). This improvement arises because our encoder-decoder, trained via TGAP and FAF, can filter gradients while selectively preserving critical features. Furthermore, we present a series of experimental analyses of the signal-to-noise ratio, compression rate, and robustness within this privacy-centric framework, providing insight into developing more efficient and secure LLMs.
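
The core encoder/decoder idea can be sketched in a few lines; here plain linear layers and toy dimensions stand in for the paper's TGAP/FAF-trained autoencoder, so this is a shape-level illustration, not the actual CG-FedLLM architecture.

```python
# Minimal sketch of the gradient-compression idea (not the paper's TGAP/FAF
# training recipe): a client-side encoder shrinks a flattened gradient, and
# the server-side decoder reconstructs it before aggregation.

import torch
import torch.nn as nn

GRAD_DIM, CODE_DIM = 1024, 64
encoder = nn.Linear(GRAD_DIM, CODE_DIM)   # runs on the client
decoder = nn.Linear(CODE_DIM, GRAD_DIM)   # runs on the server

def client_upload(grad: torch.Tensor) -> torch.Tensor:
    return encoder(grad)                   # only CODE_DIM values are sent

def server_recover(code: torch.Tensor) -> torch.Tensor:
    return decoder(code)

grad = torch.randn(GRAD_DIM)
recovered = server_recover(client_upload(grad))
print("compression ratio:", GRAD_DIM / CODE_DIM,
      "reconstruction MSE:", torch.mean((grad - recovered) ** 2).item())
```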

The rise of Web3 social ecosystems signifies the dawn of a new chapter in digital interaction, offering significant prospects for user engagement and financial advancement. Nonetheless, this progress is shadowed by potential privacy concessions, especially as these platforms frequently merge with existing Web2.0 social media accounts, amplifying data privacy risks for users. In this study, we investigate the nuanced dynamics between user engagement on Web3 social platforms and the consequent privacy concerns. We scrutinize the widespread phenomenon of fabricated activities, which encompasses the establishment of bogus accounts aimed at mimicking popularity and the deliberate distortion of social interactions by some individuals to gain financial rewards. Such deceptive maneuvers not only distort the true measure of the active user base but also amplify privacy threats for all members of the user community. We also find that, notwithstanding their attempts to limit social exposure, users remain entangled in privacy vulnerabilities. The actions of those highly engaged users, albeit often a minority group, can inadvertently breach the privacy of the larger collective. By casting light on the delicate interplay between user engagement, financial motives, and privacy issues, we offer a comprehensive examination of the intrinsic challenges and hazards present in the Web3 social milieu. We highlight the urgent need for more stringent privacy measures and ethical protocols to navigate the complex web of social exchanges and financial ambitions in the rapidly evolving Web3.

In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.

With the advent of 5G commercialization, the need for more reliable, faster, and more intelligent telecommunication systems is envisaged for the next generation of beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most existing surveys on B5G security focus on the performance and accuracy of AI/ML models, while often overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the security domain of B5G is to make the decision-making processes of security systems transparent and comprehensible to stakeholders, making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, along with the use cases that general users will ultimately enjoy. Furthermore, we present the lessons learned from recent efforts and future research directions on top of currently conducted projects involving XAI.
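
As a concrete (if simplified) example of what XAI adds in this setting, the sketch below applies permutation feature importance to a toy traffic classifier; the feature names and "model" are placeholders invented for illustration, not taken from any surveyed system.

```python
# Hedged illustration of one XAI technique a B5G security pipeline might use:
# permutation feature importance. Shuffling a feature column breaks its
# relationship to the label; the resulting accuracy drop measures how much
# the classifier relies on that feature.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                       # toy traffic features
y = X[:, 0] + 0.1 * rng.normal(size=200) > 0        # label driven by feature 0

def accuracy(features):                             # stand-in "model"
    return np.mean((features[:, 0] > 0) == y)

base = accuracy(X)
for j, name in enumerate(["pkt_rate", "pkt_size", "payload_entropy"]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])                           # break feature j
    print(name, "importance:", base - accuracy(Xp))
```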

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining their value ranges. The research then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
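
For illustration, a minimal random-search loop captures the basic HPO pattern such reviews discuss; the search space and objective below are invented stand-ins for "train the model and return validation performance", not any specific toolkit's API.

```python
# Minimal random-search HPO loop over a black-box objective. Real toolkits
# add smarter samplers (Bayesian optimization, Hyperband, etc.), but the
# propose-evaluate-keep-best skeleton is the same.

import math
import random

def objective(cfg):
    # Placeholder for "train and validate"; peaks near lr=1e-3, batch=16.
    return -abs(math.log10(cfg["lr"]) + 3) - 0.01 * cfg["batch_size"]

def sample():
    return {"lr": 10 ** random.uniform(-5, -1),      # log-uniform learning rate
            "batch_size": random.choice([16, 32, 64, 128])}

best_cfg, best_score = None, float("-inf")
for _ in range(50):                                  # trial budget
    cfg = sample()
    score = objective(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score
print("best config:", best_cfg, "score:", best_score)
```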
