
The advent of Large Language Models (LLMs) has had a transformative impact. However, the potential for LLMs such as ChatGPT to be exploited to generate misinformation poses a serious concern for online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation, then categorize and validate the potential real-world methods for generating misinformation with LLMs. Through extensive empirical investigation, we discover that LLM-generated misinformation can be harder for humans and detectors to detect than human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery for combating misinformation in the age of LLMs, and the corresponding countermeasures.

Related Content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and a full taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "Car" might appear under both parents "Vehicle" and "Steel structures". To some, however, this merely means that "Car" is part of several different taxonomies. A taxonomy might also simply be the organization of things into groups, or an alphabetically ordered list; here, though, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of the structure is a single classification that applies to all objects: the root node. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus progresses from the general to the more specific.
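
To make the tree and multiple-parent cases concrete, here is a minimal Python sketch of a taxonomy; the class and label names are illustrative only. With one parent per node this is the classic tree; allowing several parents (as in the "Car" example above) makes it a directed acyclic graph.

```python
# Minimal sketch of a hierarchical taxonomy (labels are illustrative).
class TaxonomyNode:
    def __init__(self, label):
        self.label = label
        self.parents = []   # more general classifications
        self.children = []  # more specific classifications

    def add_child(self, child):
        self.children.append(child)
        child.parents.append(self)

    def ancestors(self):
        """All more general classifications reachable from this node."""
        seen, stack = set(), list(self.parents)
        while stack:
            node = stack.pop()
            if node.label not in seen:
                seen.add(node.label)
                stack.extend(node.parents)
        return seen

root = TaxonomyNode("Thing")           # the root applies to all objects
vehicle = TaxonomyNode("Vehicle")
steel = TaxonomyNode("Steel structures")
car = TaxonomyNode("Car")              # one child, two parents
root.add_child(vehicle)
root.add_child(steel)
vehicle.add_child(car)
steel.add_child(car)
print(car.ancestors())  # {'Vehicle', 'Steel structures', 'Thing'}
```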

Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP, aimed at addressing limitations in existing frameworks while aligning with the ultimate goals of artificial intelligence. This paradigm considers language models as agents capable of observing, acting, and receiving feedback iteratively from external entities. Specifically, language models in this context can: (1) interact with humans for better understanding and addressing user needs, personalizing responses, aligning with human values, and improving the overall user experience; (2) interact with knowledge bases for enriching language representations with factual knowledge, enhancing the contextual relevance of responses, and dynamically leveraging external information to generate more accurate and informed responses; (3) interact with models and tools for effectively decomposing and addressing complex tasks, leveraging specialized expertise for specific subtasks, and fostering the simulation of social behaviors; and (4) interact with environments for learning grounded representations of language, and effectively tackling embodied tasks such as reasoning, planning, and decision-making in response to environmental observations. This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept. We then provide a systematic classification of iNLP, dissecting its various components, including interactive objects, interaction interfaces, and interaction methods. We proceed to delve into the evaluation methodologies used in the field, explore its diverse applications, scrutinize its ethical and safety issues, and discuss prospective research directions. This survey serves as an entry point for researchers who are interested in this rapidly evolving area and offers a broad view of the current landscape and future trajectory of iNLP.
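
As a rough illustration of the observe-act-feedback loop the survey describes, the following sketch stubs the language model as a plain prompt-to-text function; all class and method names here are hypothetical, not from the paper.

```python
# Illustrative sketch of an iterative observe/act/feedback loop.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Interaction:
    observation: str  # what the agent saw (user request, tool output, ...)
    action: str       # what the model produced in response
    feedback: str     # external signal (human rating, environment result, ...)

@dataclass
class InteractiveAgent:
    lm: Callable[[str], str]  # the language model, stubbed as a function
    history: List[Interaction] = field(default_factory=list)

    def act(self, observation: str) -> str:
        # Condition the model on the accumulated interaction history.
        context = "\n".join(
            f"obs: {i.observation}\nact: {i.action}\nfb: {i.feedback}"
            for i in self.history
        )
        return self.lm(f"{context}\nobs: {observation}\nact:")

    def receive(self, observation: str, action: str, feedback: str) -> None:
        self.history.append(Interaction(observation, action, feedback))

# Usage with a trivial stand-in model:
agent = InteractiveAgent(lm=lambda prompt: "draft summary")
action = agent.act("user asks for a summary")
agent.receive("user asks for a summary", action, "too short; expand")
```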

Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications. Most empirical studies of GNNs directly take the observed graph as input, assuming the observed structure perfectly depicts the accurate and complete relations between nodes. However, graphs in the real world are inevitably noisy or incomplete, which can degrade the quality of graph representations. In this work, we propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL, from the perspective of information theory. VIB-GSL advances the Information Bottleneck (IB) principle for graph structure learning, providing a more elegant and universal framework for mining underlying task-relevant relations. VIB-GSL learns an informative and compressive graph structure to distill the actionable information for specific downstream tasks. VIB-GSL deduces a variational approximation for irregular graph data to form a tractable IB objective function, which facilitates training stability. Extensive experimental results demonstrate the superior effectiveness and robustness of VIB-GSL.
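
For intuition, below is a minimal sketch of a variational Information Bottleneck objective of the kind the abstract invokes, assuming a Gaussian posterior q(z|x) and a standard-normal prior. It is a generic IB loss of the form min -I(Z; Y) + beta * I(Z; X), not the paper's exact VIB-GSL objective for graphs.

```python
# Generic variational Information Bottleneck loss (a sketch, not VIB-GSL).
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps sampling differentiable.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def vib_loss(mu, logvar, logits, labels, beta=0.1):
    """mu, logvar: parameters of q(z|x); logits: a classifier applied to
    a sample z = reparameterize(mu, logvar)."""
    # -I(Z; Y) is bounded (up to a constant) by the cross-entropy term.
    ce = F.cross_entropy(logits, labels)
    # I(Z; X) is bounded by KL(q(z|x) || N(0, I)), in closed form here.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return ce + beta * kl
```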

Recently, a considerable literature has grown up around the theme of Graph Convolutional Networks (GCNs). How to effectively leverage the rich structural information in complex graphs, such as knowledge graphs with heterogeneous types of entities and relations, is a primary open challenge in the field. Most GCN methods are either restricted to graphs with a homogeneous type of edges (e.g., citation links only), or focused on representation learning for nodes only, instead of jointly propagating and updating the embeddings of both nodes and edges for target-driven objectives. This paper addresses these limitations by proposing a novel framework, namely the Knowledge Embedding based Graph Convolutional Network (KE-GCN), which combines the power of GCNs in graph-based belief propagation and the strengths of advanced knowledge embedding (a.k.a. knowledge graph embedding) methods, and goes beyond. Our theoretical analysis shows that KE-GCN offers an elegant unification of several well-known GCN methods as specific cases, with a new perspective of graph convolution. Experimental results on benchmark datasets show the advantageous performance of KE-GCN over strong baseline methods in the tasks of knowledge graph alignment and entity classification.
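
To illustrate what jointly propagating and updating both node and edge embeddings can look like, here is an assumed sketch of one such layer; the additive (TransE-style) composition and the update rules are illustrative choices, not the paper's exact equations.

```python
# Assumed sketch of one layer updating node AND relation embeddings.
import torch

def joint_layer(h, r, triples, W_n, W_r):
    """h: [N, d] node embeddings; r: [R, d] relation embeddings;
    triples: LongTensor [T, 3] of (head, relation, tail) indices;
    W_n, W_r: [d, d] weight matrices."""
    heads, rels, tails = triples[:, 0], triples[:, 1], triples[:, 2]
    # Node update: aggregate (head + relation) messages into each tail.
    node_msgs = h[heads] + r[rels]
    h_agg = torch.zeros_like(h).index_add_(0, tails, node_msgs)
    # Relation update: aggregate (tail - head) messages into each relation.
    rel_msgs = h[tails] - h[heads]
    r_agg = torch.zeros_like(r).index_add_(0, rels, rel_msgs)
    # Residual connections keep the previous embeddings in play.
    return torch.tanh(h_agg @ W_n + h), torch.tanh(r_agg @ W_r + r)
```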

Graph Neural Networks (GNNs) have proven to be useful for many different practical applications. However, many existing GNN models have implicitly assumed homophily among the nodes connected in the graph, and therefore have largely overlooked the important setting of heterophily, where most connected nodes are from different classes. In this work, we propose a novel framework called CPGNN that generalizes GNNs for graphs with either homophily or heterophily. The proposed framework incorporates an interpretable compatibility matrix for modeling the heterophily or homophily level in the graph, which can be learned in an end-to-end fashion, enabling it to go beyond the assumption of strong homophily. Theoretically, we show that replacing the compatibility matrix in our framework with the identity (which represents pure homophily) reduces our framework to GCN. Our extensive experiments demonstrate the effectiveness of our approach in more realistic and challenging experimental settings with significantly less training data compared to previous works: CPGNN variants achieve state-of-the-art results in heterophily settings with or without contextual node features, while maintaining comparable performance in homophily settings.
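
A minimal sketch of belief propagation through a compatibility matrix, as the abstract describes; the propagation rule and names below are an assumed rendering of the idea, not the authors' code. Setting H to the identity recovers plain homophilous averaging of neighbor beliefs.

```python
# Assumed sketch of belief propagation with a class compatibility matrix H.
import torch

def propagate_beliefs(A_norm, B0, H, num_hops=2):
    """A_norm: [N, N] normalized adjacency; B0: [N, C] prior beliefs,
    e.g. softmax outputs of a feature-only classifier; H: [C, C] with
    H[i, j] ~ affinity between class-i nodes and class-j neighbors."""
    B = B0
    for _ in range(num_hops):
        # Neighbors' beliefs, translated through the compatibility matrix.
        # With H = identity (pure homophily) this collapses to GCN-style
        # averaging of neighbor beliefs.
        B = B0 + A_norm @ B @ H
    return B
```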

Graph Neural Networks (GNNs) draw their strength from explicitly modeling the topological information of structured data. However, existing GNNs suffer from limited capability in capturing the hierarchical graph representation which plays an important role in graph classification. In this paper, we propose the hierarchical graph capsule network (HGCN), which can jointly learn node embeddings and extract graph hierarchies. Specifically, disentangled graph capsules are established by identifying heterogeneous factors underlying each node, such that their instantiation parameters represent different properties of the same entity. To learn the hierarchical representation, HGCN characterizes the part-whole relationship between lower-level capsules (part) and higher-level capsules (whole) by explicitly considering the structure information among the parts. Experimental studies demonstrate the effectiveness of HGCN and the contribution of each component.
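
For readers unfamiliar with capsules, here is a compact sketch of the standard part-to-whole routing mechanics this line of work builds on; the graph-specific details of HGCN are omitted, and all shapes and names are illustrative.

```python
# Compact sketch of routing-by-agreement between part and whole capsules.
import torch

def squash(s, dim=-1, eps=1e-8):
    # Shrinks vector length into (0, 1) while preserving orientation.
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1 + n2)) * s / torch.sqrt(n2 + eps)

def route(u_hat, num_iters=3):
    """u_hat: [num_parts, num_wholes, d] predictions made by each part
    capsule for each whole capsule."""
    b = torch.zeros(u_hat.shape[:2])                  # routing logits
    for _ in range(num_iters):
        c = torch.softmax(b, dim=1)                   # parts pick wholes
        v = squash((c.unsqueeze(-1) * u_hat).sum(0))  # [num_wholes, d]
        b = b + (u_hat * v.unsqueeze(0)).sum(-1)      # agreement update
    return v
```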

Compared with the cheap addition operation, multiplication has much higher computational complexity. The widely used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values. In this paper, we present adder networks (AdderNets) that trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs. In AdderNets, we take the $\ell_1$-norm distance between filters and input features as the output response. The influence of this new similarity measure on the optimization of neural networks is thoroughly analyzed. To achieve better performance, we develop a special back-propagation approach for AdderNets by investigating the full-precision gradient. We then propose an adaptive learning rate strategy to enhance the training procedure of AdderNets according to the magnitude of each neuron's gradient. As a result, the proposed AdderNets achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy using ResNet-50 on the ImageNet dataset without any multiplications in the convolution layers.
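
The core operation is easy to state in code: replace the cross-correlation with a negative ℓ1 distance between filters and input patches. The sketch below is an illustrative (and deliberately unoptimized) rendering of that idea, not the authors' implementation.

```python
# Sketch of an adder "convolution": the output is the negative l1 distance
# between each filter and each input patch, so the layer needs additions
# and absolute values only.
import torch
import torch.nn.functional as F

def adder2d(x, weight, stride=1, padding=0):
    """x: [B, C_in, H, W]; weight: [C_out, C_in, k, k]."""
    k = weight.shape[-1]
    patches = F.unfold(x, k, stride=stride, padding=padding)  # [B, C_in*k*k, L]
    w = weight.reshape(weight.shape[0], -1)                   # [C_out, C_in*k*k]
    # -sum |patch - filter|, broadcast over all output positions L.
    out = -(patches.unsqueeze(1) - w.unsqueeze(0).unsqueeze(-1)).abs().sum(2)
    B, _, H, W = x.shape
    H_out = (H + 2 * padding - k) // stride + 1
    W_out = (W + 2 * padding - k) // stride + 1
    return out.reshape(B, w.shape[0], H_out, W_out)           # [B, C_out, H', W']
```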

Embedding models for deterministic Knowledge Graphs (KGs) have been extensively studied, with the purpose of capturing latent semantic relations between entities and incorporating structured knowledge into machine learning. However, many KGs model uncertain knowledge, typically representing the inherent uncertainty of relation facts with a confidence score, and embedding such uncertain knowledge remains an unresolved challenge. Capturing uncertain knowledge will benefit many knowledge-driven applications, such as question answering and semantic search, by providing a more natural characterization of the knowledge. In this paper, we propose a novel uncertain KG embedding model, UKGE, which aims to preserve both the structural and the uncertainty information of relation facts in the embedding space. Unlike previous models that characterize relation facts with binary classification techniques, UKGE learns embeddings according to the confidence scores of uncertain relation facts. To further enhance the precision of UKGE, we also introduce probabilistic soft logic to infer confidence scores for unseen relation facts during training. We propose and evaluate two variants of UKGE based on different learning objectives. Experiments are conducted on three real-world uncertain KGs via three tasks: confidence prediction, relation fact ranking, and relation fact classification. UKGE shows effectiveness in capturing uncertain knowledge, achieving promising results and consistently outperforming baselines on these tasks.
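
As a sketch of the general recipe (a plausibility score mapped into [0, 1] and regressed toward the observed confidence), assuming a DistMult-style score and a logistic mapping; the parameter names and exact forms are illustrative assumptions, not the paper's specification.

```python
# Sketch of scoring an uncertain triple with a confidence in [0, 1].
import torch

def confidence(h, r, t, w, b):
    """h, r, t: [d] embeddings of head, relation, tail; w, b: scalars
    of the logistic mapping from plausibility to confidence."""
    plausibility = (r * h * t).sum()      # DistMult-style score
    return torch.sigmoid(w * plausibility + b)

def confidence_loss(predicted, observed):
    # Regress the predicted confidence toward the KG's confidence score,
    # rather than treating the fact as a binary positive/negative.
    return (predicted - observed) ** 2
```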

Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
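
The simplification is compact enough to sketch directly: with the nonlinearities removed, K graph-convolution layers collapse into one precomputed propagation followed by a plain linear classifier. Below is a minimal dense-matrix version, assuming the usual renormalized adjacency; variable names are illustrative.

```python
# Sketch: collapse K linear graph convolutions into S^K X, then classify.
import torch

def simplified_features(A, X, K=2):
    """A: [N, N] dense adjacency; X: [N, d] node features."""
    A_hat = A + torch.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = A_hat.sum(1).pow(-0.5)
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]   # D^-1/2 A_hat D^-1/2
    for _ in range(K):
        X = S @ X                                           # fixed low-pass filter
    return X

# The features can be precomputed once; training is then just logistic
# regression: logits = simplified_features(A, X, K=2) @ W.
```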

We introduce an effective model to overcome the problem of mode collapse when training Generative Adversarial Networks (GANs). Firstly, we propose a new generator objective that better tackles mode collapse, and we apply an independent Autoencoder (AE) to constrain the generator, treating its reconstructed samples as "real" samples to slow down the convergence of the discriminator; this reduces the gradient vanishing problem and stabilizes the model. Secondly, using the mappings between latent and data spaces provided by the AE, we further regularize the AE by the relative distance between latent and data samples, to explicitly prevent the generator from falling into mode collapse. This idea arose from a new way of visualizing mode collapse on the MNIST dataset. To the best of our knowledge, our method is the first to propose and successfully apply the relative distance between latent and data samples for stabilizing GANs. Thirdly, our proposed model, namely the Generative Adversarial Autoencoder Network (GAAN), is stable and suffers from neither gradient vanishing nor mode collapse, as empirically demonstrated on synthetic, MNIST, MNIST-1K, CelebA and CIFAR-10 datasets. Experimental results show that our method approximates multi-modal distributions well and achieves better results than state-of-the-art methods on these benchmark datasets. Our model implementation is published here: //github.com/tntrung/gaan
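
The two ideas are easy to sketch: autoencoder reconstructions are shown to the discriminator as "real" samples, and generator latents are tied to data-space distances. The loss forms, names, and weights below are assumptions for illustration, not the published implementation (see the repository above for that).

```python
# Loose sketch of (i) reconstructions treated as "real" by the
# discriminator and (ii) a latent/data relative-distance regularizer.
import torch
import torch.nn.functional as F

def discriminator_loss(D, x_real, x_fake, x_recon):
    # D is assumed to output probabilities in (0, 1).
    p_real, p_fake, p_recon = D(x_real), D(x_fake), D(x_recon)
    return (F.binary_cross_entropy(p_real, torch.ones_like(p_real))
            # Reconstructed samples count as "real", slowing D's convergence.
            + F.binary_cross_entropy(p_recon, torch.ones_like(p_recon))
            + F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)))

def relative_distance_penalty(z1, z2, x1, x2, eps=1e-8):
    # If distinct latents z1, z2 map to nearly identical samples x1, x2
    # (a symptom of mode collapse), the data/latent distance ratio falls
    # toward 0 and the penalty grows toward 1.
    dz = (z1 - z2).flatten(1).norm(dim=1)
    dx = (x1 - x2).flatten(1).norm(dim=1)
    return ((dx / (dz + eps)) - 1.0).abs().mean()
```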
