
Understanding the properties of excited states of complex molecules is crucial for many chemical and physical processes. Calculating these properties is often significantly more resource-intensive than calculating their ground-state counterparts. We present a quantum machine learning model that predicts excited-state properties from the molecular ground state across different geometric configurations. The model comprises a symmetry-invariant quantum neural network and a conventional neural network, and provides accurate predictions with only a few training data points. The proposed procedure is fully NISQ compatible: it uses a quantum circuit whose parameter count scales linearly with the number of molecular orbitals, together with a parameterized measurement observable, thereby reducing the number of necessary measurements. We benchmark the algorithm on three different molecules by evaluating its performance in predicting excited-state transition energies and transition dipole moments. We show that, in many instances, the procedure outperforms various classical models that rely solely on classical features.
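As a rough illustration of the kind of hybrid pipeline the abstract describes, the sketch below builds a variational circuit whose trainable parameter count grows linearly with the number of qubits (one per molecular orbital here), and approximates the parameterized observable as a learned linear combination of single-qubit expectations. It uses PennyLane and is a minimal sketch of the general idea, not the authors' symmetry-invariant architecture.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # one qubit per molecular orbital in this sketch
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(features, weights):
    # Encode ground-state features (e.g., orbital quantities) as rotations
    for i in range(n_qubits):
        qml.RY(features[i], wires=i)
    # One variational layer: parameter count is linear in n_qubits
    for i in range(n_qubits):
        qml.RY(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def model(features, weights, obs_weights):
    # "Parameterized observable", simplified: a trainable linear
    # combination of Z expectations, read out in a single basis
    z = np.stack(circuit(features, weights))
    return np.dot(obs_weights, z)

features = np.random.uniform(0, np.pi, n_qubits)
weights = np.random.uniform(0, np.pi, n_qubits, requires_grad=True)
obs_weights = np.random.uniform(-1, 1, n_qubits, requires_grad=True)
print(model(features, weights, obs_weights))
```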

Related content

Neural Networks is the archival journal of the world's three oldest neural modelling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to engineering and technological applications of systems that make significant use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications. Official website:

Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose new architectures, tweak existing architectures with refined training strategies, increase context length, use higher-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey comprehensively analyses LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.
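The "basic building blocks" behind essentially all the surveyed LLMs center on attention. For reference, here is the standard scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, in plain NumPy; this is textbook material, not a construct specific to any model in the survey.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (n_q, n_k) logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

# Toy usage: 3 query tokens, 5 key/value tokens, model width 8
Q, K, V = (np.random.randn(n, 8) for n in (3, 5, 5))
out = scaled_dot_product_attention(Q, K, V)           # shape (3, 8)
```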

Recent artificial intelligence (AI) systems have reached milestones in "grand challenges" ranging from Go to protein-folding. The capability to retrieve medical knowledge, reason over it, and answer medical questions comparably to physicians has long been viewed as one such grand challenge. Large language models (LLMs) have catalyzed significant progress in medical question answering; Med-PaLM was the first model to exceed a "passing" score in US Medical Licensing Examination (USMLE) style questions with a score of 67.2% on the MedQA dataset. However, this and other prior work suggested significant room for improvement, especially when models' answers were compared to clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by leveraging a combination of base LLM improvements (PaLM 2), medical domain finetuning, and prompting strategies including a novel ensemble refinement approach. Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM by over 19% and setting a new state-of-the-art. We also observed performance approaching or exceeding state-of-the-art across MedMCQA, PubMedQA, and MMLU clinical topics datasets. We performed detailed human evaluations on long-form questions along multiple axes relevant to clinical applications. In pairwise comparative ranking of 1066 consumer medical questions, physicians preferred Med-PaLM 2 answers to those produced by physicians on eight of nine axes pertaining to clinical utility (p < 0.001). We also observed significant improvements compared to Med-PaLM on every evaluation axis (p < 0.001) on newly introduced datasets of 240 long-form "adversarial" questions to probe LLM limitations. While further studies are necessary to validate the efficacy of these models in real-world settings, these results highlight rapid progress towards physician-level performance in medical question answering.
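The abstract names a novel ensemble refinement prompting strategy. The sketch below shows one plausible reading of that idea, sampling several reasoning paths and then asking the model to refine them into a single answer; the `llm` callable and the prompt wording are hypothetical stand-ins, not Med-PaLM 2's actual interface.

```python
def ensemble_refinement(llm, question, n_samples=8):
    """Hedged sketch: `llm(prompt, temperature)` is a hypothetical
    text-completion callable, not a real Med-PaLM API."""
    # Step 1: sample several independent reasoning paths at high temperature
    drafts = [llm(f"Answer with step-by-step reasoning:\n{question}",
                  temperature=0.7)
              for _ in range(n_samples)]
    # Step 2: condition the model on its own drafts and request one
    # refined answer at low temperature
    context = "\n\n".join(f"Draft {i + 1}: {d}" for i, d in enumerate(drafts))
    prompt = (f"Question: {question}\n\nCandidate answers:\n{context}\n\n"
              "Considering the drafts above, give a single refined answer.")
    return llm(prompt, temperature=0.0)
```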

Following unprecedented success on natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as de facto operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest in Transformers, which can capture global context, in contrast to CNNs with local receptive fields. Inspired by this transition, in this survey we attempt to provide a comprehensive review of the applications of Transformers in medical imaging, covering various aspects ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, reconstruction, synthesis, registration, clinical report generation, and other tasks. In particular, for each of these applications, we develop a taxonomy, identify application-specific challenges as well as insights to solve them, and highlight recent trends. Further, we provide a critical discussion of the field's current state as a whole, including the identification of key challenges, open problems, and promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development of this field, we intend to regularly update the relevant latest papers and their open-source implementations at \url{//github.com/fahadshamshad/awesome-transformers-in-medical-imaging}.
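To make the global-context versus local-receptive-field contrast concrete, here is a standard ViT-style patch embedding (textbook, not specific to any surveyed medical model): once an image is tokenized into patches, self-attention can relate any two regions regardless of their spatial distance.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """ViT-style patch embedding: the tokenization step that lets
    self-attention relate any two image regions globally."""
    def __init__(self, chans=3, patch=16, dim=768):
        super().__init__()
        # A strided convolution slices the image into non-overlapping
        # patches and linearly projects each one to `dim` channels
        self.proj = nn.Conv2d(chans, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                     # (B, C, H, W)
        x = self.proj(x)                      # (B, dim, H/patch, W/patch)
        return x.flatten(2).transpose(1, 2)   # (B, num_patches, dim)

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))   # (1, 196, 768)
```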

Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications. Most empirical studies of GNNs directly take the observed graph as input, assuming the observed structure perfectly depicts the accurate and complete relations between nodes. However, graphs in the real world are inevitably noisy or incomplete, which can degrade the quality of graph representations. In this work, we propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL, from the perspective of information theory. VIB-GSL advances the Information Bottleneck (IB) principle for graph structure learning, providing a more elegant and universal framework for mining underlying task-relevant relations. VIB-GSL learns an informative and compressive graph structure to distill the actionable information for specific downstream tasks. VIB-GSL deduces a variational approximation for irregular graph data to form a tractable IB objective function, which facilitates training stability. Extensive experimental results demonstrate the superior effectiveness and robustness of VIB-GSL.
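For readers unfamiliar with the IB principle the framework builds on, the generic graph form of the objective can be written as below; the notation is ours, so this states the general principle rather than VIB-GSL's exact tractable variational objective.

```latex
% Generic graph Information Bottleneck objective (notation ours):
% learn a structure G' whose representation Z_{G'} keeps what predicts
% the target Y while discarding the rest of the observed graph G;
% \beta trades compression against predictiveness.
\min_{G'} \; -\, I\bigl(Z_{G'}; Y\bigr) \;+\; \beta \, I\bigl(Z_{G'}; G\bigr)
```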

Human-in-the-loop aims to train an accurate prediction model with minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, can directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of system-independent human-in-the-loop frameworks. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we briefly classify and discuss applications in natural language processing, computer vision, and other domains. In addition, we outline open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
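One of the most common human-in-the-loop patterns such surveys cover is active learning, where the model routes only its most uncertain examples to a human annotator. The sketch below (uncertainty sampling with a scikit-learn-style classifier) is a generic illustration of the pattern, not a method from any specific surveyed work; in a deployed system the human would supply the labels that `y_pool` provides here.

```python
import numpy as np

def uncertainty_sampling_loop(model, X_pool, y_pool, X_init, y_init,
                              rounds=10, batch=20):
    """Generic active-learning loop: ask a 'human' to label only the
    pool items the current model is least confident about."""
    X_train, y_train = X_init, y_init
    for _ in range(rounds):
        model.fit(X_train, y_train)
        proba = model.predict_proba(X_pool)
        uncertainty = 1.0 - proba.max(axis=1)      # least-confidence score
        idx = np.argsort(uncertainty)[-batch:]     # most uncertain items
        # Stand-in for human annotation of X_pool[idx]
        X_train = np.vstack([X_train, X_pool[idx]])
        y_train = np.concatenate([y_train, y_pool[idx]])
        keep = np.setdiff1d(np.arange(len(X_pool)), idx)
        X_pool, y_pool = X_pool[keep], y_pool[keep]
    return model
```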

Generalization to out-of-distribution (OOD) data is a capability natural to humans yet challenging for machines to reproduce. This is because most learning algorithms strongly rely on the i.i.d. assumption on source/target data, which is often violated in practice due to domain shift. Domain generalization (DG) aims to achieve OOD generalization by using only source data for model learning. Since it was first introduced in 2011, research in DG has made great progress. In particular, intensive research on this topic has led to a broad spectrum of methodologies, e.g., those based on domain alignment, meta-learning, data augmentation, or ensemble learning, to name a few, and has covered various vision applications such as object recognition, segmentation, action recognition, and person re-identification. In this paper, a comprehensive literature review is provided for the first time to summarize the developments in DG for computer vision over the past decade. Specifically, we first cover the background by formally defining DG and relating it to other research fields such as domain adaptation and transfer learning. Second, we conduct a thorough review of existing methods and present a categorization based on their methodologies and motivations. Finally, we conclude this survey with insights and discussions on future research directions.
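Among the method families listed, domain alignment is perhaps the easiest to make concrete: penalize the distance between feature distributions of different source domains so the learned features transfer. The sketch below uses the classic RBF-kernel Maximum Mean Discrepancy as such a penalty; it is a generic illustration, not the loss of any particular DG paper.

```python
import torch

def mmd_rbf(Xs, Xt, sigma=1.0):
    """Maximum Mean Discrepancy with an RBF kernel between two feature
    batches (e.g., two source domains). Minimizing it as an auxiliary
    loss pulls the two feature distributions together."""
    def k(a, b):
        d = torch.cdist(a, b) ** 2               # pairwise squared distances
        return torch.exp(-d / (2 * sigma ** 2))  # RBF kernel matrix
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()

# Toy usage: features from two source domains, 32 samples x 16 dims each
loss = mmd_rbf(torch.randn(32, 16), torch.randn(32, 16) + 0.5)
```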

Graph Neural Networks (GNNs) have proven to be useful for many different practical applications. However, many existing GNN models have implicitly assumed homophily among the nodes connected in the graph, and therefore have largely overlooked the important setting of heterophily, where most connected nodes are from different classes. In this work, we propose a novel framework called CPGNN that generalizes GNNs for graphs with either homophily or heterophily. The proposed framework incorporates an interpretable compatibility matrix for modeling the heterophily or homophily level in the graph, which can be learned in an end-to-end fashion, enabling it to go beyond the assumption of strong homophily. Theoretically, we show that replacing the compatibility matrix in our framework with the identity matrix (which represents pure homophily) reduces our model to GCN. Our extensive experiments demonstrate the effectiveness of our approach in more realistic and challenging experimental settings with significantly less training data compared to previous works: CPGNN variants achieve state-of-the-art results in heterophily settings, with or without contextual node features, while maintaining comparable performance in homophily settings.
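The following sketch illustrates the compatibility-matrix idea in simplified form (our reading, not the exact CPGNN update rule): neighbors' class beliefs are mapped through a learnable class-compatibility matrix H before aggregation, and initializing H at the identity corresponds to the pure-homophily case mentioned above.

```python
import torch

def compat_propagate(B0, A, H, k=2):
    """Simplified compatibility-guided belief propagation.
    B0: (n, c) prior class beliefs, A: (n, n) normalized adjacency,
    H:  (c, c) learnable compatibility matrix (H = I recovers a
        homophily-style propagation, echoing the GCN reduction)."""
    B = B0
    for _ in range(k):
        B = B0 + A @ B @ H   # residual prior + compatibility-weighted passing
    return B

n, c = 5, 3
B0 = torch.softmax(torch.randn(n, c), dim=1)   # e.g., from an MLP on features
A = torch.rand(n, n)
A = A / A.sum(1, keepdim=True)                 # row-normalized adjacency
H = torch.nn.Parameter(torch.eye(c))           # initialized at pure homophily
out = compat_propagate(B0, A, H)               # (n, c) refined beliefs
```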

The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
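As a rough sketch of attention over a triple's local neighborhood (a simplified stand-in, not the paper's exact formulation): each (relation, neighbor) pair around an entity is scored, and the entity's update is the attention-weighted sum of the resulting triple features.

```python
import torch
import torch.nn.functional as F

def neighborhood_attention(h_e, neigh_ent, neigh_rel, W, a):
    """Attention over an entity's (relation, neighbor) pairs.
    h_e: (d,) entity embedding; neigh_ent, neigh_rel: (m, d) embeddings
    of m neighboring entities and the connecting relations;
    W: (3d, d) triple projection; a: (d,) attention vector."""
    m = neigh_ent.shape[0]
    # Concatenate [entity, relation, neighbor] for each neighboring triple
    triples = torch.cat([h_e.expand(m, -1), neigh_rel, neigh_ent], dim=1)
    c = torch.tanh(triples @ W)                # (m, d) triple features
    alpha = F.softmax(c @ a, dim=0)            # attention weight per neighbor
    return (alpha.unsqueeze(1) * c).sum(0)     # aggregated entity update

d, m = 8, 4
out = neighborhood_attention(torch.randn(d), torch.randn(m, d),
                             torch.randn(m, d), torch.randn(3 * d, d),
                             torch.randn(d))   # (d,)
```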

We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in the scientific literature.
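A minimal sketch of the shared-span-representation idea (dimensions and heads are illustrative; this is not the actual SciIE architecture): one encoder produces span representations that the entity, relation, and coreference heads all consume, which is what lets the three tasks share signal.

```python
import torch
import torch.nn as nn

class SharedSpanModel(nn.Module):
    """One encoder, shared span representations, three task heads."""
    def __init__(self, vocab, dim=64, n_ent=6, n_rel=7):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.ent_head = nn.Linear(4 * dim, n_ent)   # span -> entity type
        self.rel_head = nn.Linear(8 * dim, n_rel)   # span pair -> relation
        self.coref_head = nn.Linear(8 * dim, 1)     # span pair -> coref score

    def span_repr(self, H, s, e):
        # Shared span representation: concatenated endpoint states
        return torch.cat([H[:, s], H[:, e]], dim=-1)      # (batch, 4*dim)

    def forward(self, tokens, s1, e1, s2, e2):
        H, _ = self.enc(self.emb(tokens))                 # (batch, seq, 2*dim)
        a, b = self.span_repr(H, s1, e1), self.span_repr(H, s2, e2)
        pair = torch.cat([a, b], dim=-1)
        return self.ent_head(a), self.rel_head(pair), self.coref_head(pair)

model = SharedSpanModel(vocab=100)
ent, rel, coref = model(torch.randint(0, 100, (2, 12)), 1, 3, 5, 8)
```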

Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, edge devices, cloud infrastructure, and their quality-of-service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among different configuration knowledge representation models limit the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
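To illustrate what a unified configuration record might look like, the sketch below defines a hypothetical resource schema spanning device, edge, and cloud layers with capabilities, QoS, and context; the field names are illustrative and are not IoT-CANE's actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class IoTResource:
    """Hypothetical unified record for one IoT resource: the same schema
    describes devices, edge nodes, and cloud services."""
    resource_id: str
    layer: str                                           # "device" | "edge" | "cloud"
    capabilities: List[str] = field(default_factory=list)
    qos: Dict[str, float] = field(default_factory=dict)  # e.g. latency_ms
    context: Dict[str, str] = field(default_factory=dict)  # location, owner, ...

camera = IoTResource("cam-01", "device",
                     capabilities=["rtsp-stream"],
                     qos={"latency_ms": 50.0},
                     context={"location": "gate-3"})
```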
