
Supervised machine learning approaches have been increasingly used to accelerate electronic structure prediction as surrogates for first-principles computational methods such as density functional theory (DFT). While numerous quantum chemistry datasets focus on chemical properties and atomic forces, accurate and efficient prediction of the Hamiltonian matrix is highly desirable, as the Hamiltonian is the fundamental physical quantity that determines the quantum states of a physical system and, in turn, its chemical properties. In this work, we generate a new quantum Hamiltonian dataset, named QH9, that provides precise Hamiltonian matrices for 2,399 molecular dynamics trajectories and 130,831 stable molecular geometries, based on the QM9 dataset. By designing benchmark tasks with various molecules, we show that current machine learning models have the capacity to predict Hamiltonian matrices for arbitrary molecules. Both the QH9 dataset and the baseline models are provided to the community through an open-source benchmark, which can be highly valuable for developing machine learning methods and for accelerating molecular and materials design in scientific and technological applications. Our benchmark is publicly available at //github.com/divelab/AIRS/tree/main/OpenDFT/QHBench.
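
As a minimal sketch of how such a benchmark is typically evaluated, the snippet below computes a mean-absolute-error metric between a predicted and a reference Hamiltonian matrix. The function name `hamiltonian_mae`, the toy matrices, and the basis size `n_orbitals` are illustrative assumptions, not the QH9 evaluation code.

```python
import numpy as np

def hamiltonian_mae(h_pred: np.ndarray, h_ref: np.ndarray) -> float:
    """Mean absolute error between a predicted and a reference (DFT)
    Hamiltonian matrix, averaged over all matrix entries."""
    assert h_pred.shape == h_ref.shape
    return float(np.mean(np.abs(h_pred - h_ref)))

# Toy example: a random symmetric "reference" and a noisy "prediction".
rng = np.random.default_rng(0)
n_orbitals = 24                        # size of the atomic-orbital basis
h_ref = rng.normal(size=(n_orbitals, n_orbitals))
h_ref = 0.5 * (h_ref + h_ref.T)        # Hamiltonians are symmetric
h_pred = h_ref + 1e-3 * rng.normal(size=h_ref.shape)
print(f"MAE: {hamiltonian_mae(h_pred, h_ref):.2e}")
```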

Related Content

Machine Learning is an international forum for research on computational learning methods. The journal publishes articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems. Featured papers describe research problems and methods, application studies, and issues of research methodology. Papers about learning problems or methods provide solid support via empirical studies, theoretical analysis, or comparison to psychological phenomena. Application papers show how to apply learning methods to solve important application problems. Research methodology papers improve how machine learning research is conducted. All papers describe their supporting evidence in ways that other researchers can verify or replicate, detail the components of learning, and discuss assumptions about knowledge representation and performance tasks. Official website:

With the introduction of deep learning models, semantic parsing-based knowledge base question answering (KBQA) systems have achieved high performance in handling complex questions. However, most existing approaches primarily focus on enhancing the model's effectiveness on individual benchmark datasets, disregarding the high cost of adapting the system to disparate datasets in real-world scenarios (e.g., a multi-tenant platform). Therefore, we present ADMUS, a progressive knowledge base question answering framework designed to accommodate a wide variety of datasets, including multiple languages, diverse backbone knowledge bases, and disparate question answering datasets. To accomplish this, we decouple the architecture of conventional KBQA systems and propose a dataset-independent framework. Our framework supports the seamless integration of new datasets with minimal effort, requiring only the creation of a dataset-related micro-service at negligible cost. To enhance the usability of ADMUS, we design a progressive pipeline consisting of three stages, ranging from executing exact queries and generating approximate queries to retrieving open-domain knowledge from large language models. An online demonstration of ADMUS is available at: //answer.gstore.cn/pc/index.html
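
The decoupled, micro-service-per-dataset design can be sketched as a small registry plus a staged fallback. Everything below (the class names `DatasetService` and `ProgressiveKBQA`, the registry dict, the stage ordering) is an illustrative assumption about the architecture described above, not ADMUS's actual code.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DatasetService:
    """One micro-service per dataset: holds only dataset-specific logic."""
    name: str
    query_exact: Callable[[str], str]    # question -> answer, or "" on miss

class ProgressiveKBQA:
    """Stage 1: exact query; stage 2: approximate query; stage 3: LLM."""

    def __init__(self, approximate: Callable[[str], str],
                 llm_fallback: Callable[[str], str]):
        self.services: Dict[str, DatasetService] = {}
        self.approximate = approximate
        self.llm_fallback = llm_fallback

    def register(self, service: DatasetService) -> None:
        # Supporting a new dataset costs only a micro-service registration.
        self.services[service.name] = service

    def answer(self, dataset: str, question: str) -> str:
        answer = self.services[dataset].query_exact(question)
        if answer:
            return answer                               # stage 1 hit
        answer = self.approximate(question)             # stage 2: relaxed query
        return answer or self.llm_fallback(question)    # stage 3: open-domain LLM
```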

Recently, uncertainty-aware deep learning methods for multiclass labeling problems have been developed that provide calibrated class prediction probabilities and out-of-distribution (OOD) indicators, letting machine learning (ML) consumers and engineers gauge a model's confidence in its predictions. However, this extra neural network prediction information is challenging to convey visually, at scale, for arbitrary data sources under multiple uncertainty contexts. To address these challenges, we present ScatterUQ, an interactive system that provides targeted visualizations allowing users to better understand model performance in context-driven uncertainty settings. ScatterUQ leverages recent advances in distance-aware neural networks, together with dimensionality reduction techniques, to construct robust 2-D scatter plots explaining why a model predicts a test example to be (1) in-distribution and of a particular class, (2) in-distribution but unsure of the class, or (3) out-of-distribution. ML consumers and engineers can visually compare the salient features of test samples with training examples through a "hover callback" to understand model uncertainty performance and decide on follow-up courses of action. We demonstrate the effectiveness of ScatterUQ in explaining model uncertainty for multiclass image classification with a distance-aware neural network trained on Fashion-MNIST and tested on Fashion-MNIST (in distribution) and MNIST digits (out of distribution), as well as for a deep learning model on a cyber dataset. We quantitatively evaluate dimensionality reduction techniques to optimize our contextually driven UQ visualizations. Our results indicate that the ScatterUQ system should scale to arbitrary, multiclass datasets. Our code is available at //github.com/mit-ll-responsible-ai/equine-webapp
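
A minimal sketch of this kind of projection-based uncertainty scatter plot, assuming PCA as a stand-in for whichever dimensionality reduction ScatterUQ ultimately selects; the toy embeddings and confidence scores are fabricated placeholders, not outputs of the actual system.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Toy stand-ins for distance-aware network embeddings and confidences.
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(500, 64))            # training embeddings
test_emb = rng.normal(loc=0.5, size=(100, 64))    # test embeddings
test_conf = rng.uniform(size=100)                 # calibrated confidences

# Fit the 2-D projection on training data, then reuse it for test points
# so both sets live in the same visual space.
proj = PCA(n_components=2).fit(train_emb)
train_2d, test_2d = proj.transform(train_emb), proj.transform(test_emb)

plt.scatter(*train_2d.T, c="lightgray", s=10, label="train")
sc = plt.scatter(*test_2d.T, c=test_conf, cmap="viridis", s=25, label="test")
plt.colorbar(sc, label="predicted class confidence")
plt.legend()
plt.show()
```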

The design of asynchronous circuits typically requires a judicious definition of signals and modules, combined with a proper specification of their timing constraints, which can be a complex and error-prone process using standard Hardware Description Languages (HDLs). In this paper, we introduce Yak, a new dataflow description language for asynchronous bundled-data circuits. Yak allows designers to generate Verilog and timing constraints automatically from a textual description of bundled-data control flow structures and combinational logic blocks. The timing constraints are generated using the Local Clock Set methodology and can be consumed by standard industry tools. Yak includes ergonomic language features such as structured bindings of channels undergoing fork and join operations, named value scope propagation along channels, and channel typing. Here we present Yak's language front-end and compare its automated synthesis and layout results on an example circuit against a manual constraint-specification approach.

Contrastive learning based cross-modality pretraining approaches have recently exhibited impressive success in diverse fields. In this paper, we propose GEmo-CLAP, a gender-attribute-enhanced contrastive language-audio pretraining (CLAP) method for speech emotion recognition. Specifically, a novel emotion CLAP model (Emo-CLAP) is first built, utilizing pre-trained WavLM and RoBERTa models. Second, given the significance of the gender attribute in speech emotion modeling, two novel variants, soft-label-based GEmo-CLAP (SL-GEmo-CLAP) and multi-task-learning-based GEmo-CLAP (ML-GEmo-CLAP), are further proposed to integrate the emotion and gender information of speech signals, forming more reasonable training objectives. Extensive experiments on IEMOCAP show that the two proposed GEmo-CLAP models consistently outperform the baseline Emo-CLAP, while also achieving the best recognition performance compared with recent state-of-the-art methods. Notably, the proposed SL-GEmo-CLAP model achieves the best UAR of 81.43% and WAR of 83.16%, outperforming other state-of-the-art SER methods by at least 3%.
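
To make the objective concrete, here is a hedged sketch of a CLAP-style symmetric contrastive loss, plus a soft-label variant where the target distribution can give weight to same-gender or same-emotion pairs. The function names and the weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def clap_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric CLIP/CLAP-style contrastive loss over a batch of
    matched (audio, text) pairs; positives lie on the diagonal."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = audio_emb @ text_emb.t() / temperature
    targets = torch.arange(audio_emb.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def soft_label_contrastive_loss(audio_emb, text_emb, soft_targets,
                                temperature=0.07):
    """Soft-label variant: `soft_targets` is a (B, B) row-stochastic
    matrix that may also assign mass to same-gender / same-emotion
    pairs (an illustrative weighting, not the paper's exact scheme)."""
    logits = (F.normalize(audio_emb, dim=-1)
              @ F.normalize(text_emb, dim=-1).t()) / temperature
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()
```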

Object-centric representation is an essential abstraction for forward prediction. Most existing forward models learn this representation through extensive supervision (e.g., object class and bounding box) although such ground-truth information is not readily accessible in reality. To address this, we introduce KINet (Keypoint Interaction Network) -- an end-to-end unsupervised framework to reason about object interactions based on a keypoint representation. Using visual observations, our model learns to associate objects with keypoint coordinates and discovers a graph representation of the system as a set of keypoint embeddings and their relations. It then learns an action-conditioned forward model using contrastive estimation to predict future keypoint states. By learning to perform physical reasoning in the keypoint space, our model automatically generalizes to scenarios with a different number of objects, novel backgrounds, and unseen object geometries. Experiments demonstrate the effectiveness of our model in accurately performing forward prediction and learning plannable object-centric representations for downstream robotic pushing manipulation tasks.
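
A rough sketch of the two ingredients named above, an action-conditioned forward model over keypoint embeddings and a contrastive (InfoNCE-style) training signal, under assumed dimensions; the module `KeypointForward` and the loss below are illustrative stand-ins, not the KINet architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointForward(nn.Module):
    """Action-conditioned forward model over a keypoint embedding:
    predicts the next keypoint state from the current one plus an action."""

    def __init__(self, k_dim=16, a_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k_dim + a_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, k_dim),
        )

    def forward(self, z_t, action):
        return self.net(torch.cat([z_t, action], dim=-1))

def contrastive_forward_loss(z_pred, z_next, temperature=0.1):
    """InfoNCE-style contrastive estimation: each prediction should match
    its own next state rather than the other next states in the batch."""
    z_pred = F.normalize(z_pred, dim=-1)
    z_next = F.normalize(z_next, dim=-1)
    logits = z_pred @ z_next.t() / temperature
    targets = torch.arange(z_pred.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```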

There has recently been a surge of interest in developing a new class of deep learning (DL) architectures that integrate an explicit time dimension as a fundamental building block of learning and representation mechanisms. In turn, many recent results show that topological descriptors of the observed data, that is, its persistent homology, which encodes information on the shape of the dataset in a topological space at different scales, may contain important complementary information, improving both the performance and robustness of DL. As a convergence of these two emerging ideas, we propose to enhance DL architectures with the most salient time-conditioned topological information of the data and introduce the concept of zigzag persistence into time-aware graph convolutional networks (GCNs). Zigzag persistence provides a systematic and mathematically rigorous framework to track the most important topological features of the observed data that tend to manifest themselves over time. To integrate the extracted time-conditioned topological descriptors into DL, we develop a new topological summary, the zigzag persistence image, and derive its theoretical stability guarantees. We validate the new GCNs with a time-aware zigzag topological layer (Z-GCNETs) on traffic forecasting and Ethereum blockchain price prediction. Our results indicate that Z-GCNETs outperforms 13 state-of-the-art methods on 4 time series datasets.
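
To illustrate the kind of summary involved, below is a minimal sketch of the standard persistence-image construction applied to (birth, death) intervals, such as those produced by zigzag persistence; the grid size, bandwidth, and persistence weighting are generic choices, not the paper's exact construction.

```python
import numpy as np

def persistence_image(pairs, grid=32, sigma=0.1, span=(0.0, 1.0)):
    """Rasterize (birth, death) intervals into a fixed-size image:
    each interval maps to (birth, persistence), is smoothed with a
    Gaussian, and is weighted by its persistence so that long-lived
    topological features dominate the summary."""
    xs = np.linspace(*span, grid)
    img = np.zeros((grid, grid))
    for birth, death in pairs:
        pers = death - birth
        gx = np.exp(-((xs - birth) ** 2) / (2 * sigma ** 2))
        gy = np.exp(-((xs - pers) ** 2) / (2 * sigma ** 2))
        img += pers * np.outer(gy, gx)
    return img

# Two toy features: one short-lived (likely noise), one persistent.
img = persistence_image([(0.1, 0.2), (0.2, 0.9)])
print(img.shape)  # (32, 32), ready to feed into a convolutional layer
```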

Language model pre-training, such as BERT, has significantly improved the performance of many natural language processing tasks. However, pre-trained language models are usually computationally expensive and memory intensive, so it is difficult to execute them effectively on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel transformer distillation method, a knowledge distillation (KD) method specially designed for transformer-based models. By leveraging this new KD method, the abundant knowledge encoded in a large teacher BERT can be effectively transferred to a small student TinyBERT. Moreover, we introduce a new two-stage learning framework for TinyBERT, which performs transformer distillation at both the pre-training and task-specific learning stages. This framework ensures that TinyBERT can capture both the general-domain and task-specific knowledge of the teacher BERT. TinyBERT is empirically effective and achieves results comparable to BERT on the GLUE benchmark, while being 7.5x smaller and 9.4x faster at inference. TinyBERT is also significantly better than state-of-the-art baselines, even with only about 28% of the parameters and 31% of the inference time of the baselines.
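
A hedged sketch of a layer-wise transformer distillation loss in the spirit of the method described above: match attention maps and hidden states layer by layer, plus the softened output logits. The dict-based interface and the omission of layer mapping and hidden-size projection are simplifying assumptions, not TinyBERT's actual implementation.

```python
import torch
import torch.nn.functional as F

def transformer_distill_loss(student, teacher, temperature=1.0):
    """`student`/`teacher` are dicts with keys 'attn' (list of attention
    tensors), 'hidden' (list of hidden states, assumed already mapped and
    projected to matching shapes), and 'logits'."""
    loss = 0.0
    for a_s, a_t in zip(student["attn"], teacher["attn"]):
        loss = loss + F.mse_loss(a_s, a_t)        # attention transfer
    for h_s, h_t in zip(student["hidden"], teacher["hidden"]):
        loss = loss + F.mse_loss(h_s, h_t)        # hidden-state transfer
    # Soft-logit matching with the usual temperature-squared scaling.
    log_p_s = F.log_softmax(student["logits"] / temperature, dim=-1)
    log_p_t = F.log_softmax(teacher["logits"] / temperature, dim=-1)
    loss = loss + F.kl_div(log_p_s, log_p_t, log_target=True,
                           reduction="batchmean") * temperature ** 2
    return loss
```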

This paper surveys the machine learning literature and presents machine learning methods as optimization models. Such models can benefit from advances in numerical optimization techniques, which have already played a distinctive role in several machine learning settings. In particular, mathematical optimization models are presented for commonly used machine learning approaches to regression, classification, clustering, and deep neural networks, as well as for emerging applications in machine teaching and empirical model learning. The strengths and shortcomings of these models are discussed, and potential research directions are highlighted.
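
As a concrete instance of "machine learning as an optimization model", the sketch below writes ridge regression as an explicit objective and solves it two ways: via its first-order optimality condition and via generic gradient descent. The data, step size, and iteration count are toy choices for illustration.

```python
import numpy as np

# Ridge regression written explicitly as an optimization model:
#   min_w  (1/2n) ||X w - y||^2 + (lam/2) ||w||^2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)
lam, n = 0.1, len(y)

# Solving the first-order optimality condition in closed form.
w_closed = np.linalg.solve(X.T @ X / n + lam * np.eye(5), X.T @ y / n)

# The same model solved as a generic optimization problem.
w = np.zeros(5)
for _ in range(2000):
    grad = X.T @ (X @ w - y) / n + lam * w   # gradient of the objective
    w -= 0.1 * grad
print(np.allclose(w, w_closed, atol=1e-4))   # both routes agree: True
```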

Graph-based semi-supervised learning (SSL) is an important learning problem where the goal is to assign labels to initially unlabeled nodes in a graph. Graph Convolutional Networks (GCNs) have recently been shown to be effective for graph-based SSL problems. GCNs inherently assume the existence of pairwise relationships in graph-structured data. However, in many real-world problems, relationships go beyond pairwise connections and are hence more complex. Hypergraphs provide a natural modeling tool to capture such complex relationships. In this work, we explore the use of GCNs for hypergraph-based SSL. In particular, we propose HyperGCN, an SSL method that uses a layer-wise propagation rule for convolutional neural networks operating directly on hypergraphs. To the best of our knowledge, this is the first principled adaptation of GCNs to hypergraphs. HyperGCN is able to encode both the hypergraph structure and hypernode features in an effective manner. Through detailed experimentation, we demonstrate HyperGCN's effectiveness at hypergraph-based SSL.
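
A simplified sketch of the core idea, reducing each hyperedge to an ordinary edge so a GCN-style propagation can run; this variant connects only the two most signal-distant nodes per hyperedge and omits the mediator edges and self-loops of the full method, so it is an approximation for illustration, not the paper's exact rule.

```python
import numpy as np

def hyperedge_reduction(X, hyperedges):
    """Replace each hyperedge with a single pairwise edge between its two
    most distant member nodes under the current node signal X."""
    n = X.shape[0]
    A = np.zeros((n, n))
    for e in hyperedges:
        nodes = sorted(e)
        i, j = max(((u, v) for u in nodes for v in nodes if u < v),
                   key=lambda p: np.linalg.norm(X[p[0]] - X[p[1]]))
        A[i, j] = A[j, i] = 1.0
    return A

X = np.random.default_rng(0).normal(size=(6, 4))      # 6 hypernodes, 4-D signal
A = hyperedge_reduction(X, [{0, 1, 2}, {2, 3, 4, 5}])
deg = A.sum(axis=1, keepdims=True)
X_next = (A @ X) / np.maximum(deg, 1e-9)              # one propagation step
```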

Deep learning has emerged as a powerful machine learning technique that learns multiple layers of representations or features of the data and produces state-of-the-art prediction results. Alongside its success in many other application domains, deep learning has also been widely applied to sentiment analysis in recent years. This paper first gives an overview of deep learning and then provides a comprehensive survey of its current applications in sentiment analysis.
