As the developer community grows rapidly, the number of posts on online technical forums keeps increasing, which makes it difficult for users to filter useful posts and find important information. Tags provide a concise feature dimension for users to locate the posts they are interested in and for search engines to index the most relevant posts for a query. However, most tags focus only on the technical perspective (e.g., programming language, platform, tool). In most cases, forum posts in online developer communities reveal the author's intention to solve a problem, ask for advice, share information, etc. Modeling the intentions of posts can add an extra dimension to the current tag taxonomy. By referencing previous studies and learning from industrial perspectives, we create a refined taxonomy for the intentions of technical forum posts. Through manual labeling and analysis of a sampled post dataset extracted from online forums, we characterize the relationship between the constituents of posts (e.g., code snippets, error messages) and their intentions. Furthermore, inspired by our manual study, we design a pre-trained transformer-based model to automatically predict post intentions. The best variant of our intention prediction framework, which achieves a Micro F1-score of 0.589, Top 1-3 accuracy of 62.6% to 87.8%, and an average AUC of 0.787, outperforms the state-of-the-art baseline approach. Our characterization and automated classification of forum posts with respect to their intentions may help forum maintainers or third-party tool developers improve the organization and retrieval of posts on technical forums. We have released our annotated dataset and code in our supplementary material package.
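
As a concrete illustration of the kind of model the abstract describes, the following is a minimal sketch of multi-label intention prediction with a pre-trained transformer. The label names, model checkpoint (bert-base-uncased), and decision threshold are illustrative assumptions, not the paper's actual configuration, and the classification head is untrained here.

```python
# Minimal sketch: multi-label intention prediction with a pre-trained
# transformer. Label names, checkpoint, and threshold are illustrative
# assumptions, not the paper's actual taxonomy or setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["problem-solving", "advice-seeking", "information-sharing",
          "discussion", "review", "other"]  # hypothetical intention labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)

def predict_intentions(post_text: str, threshold: float = 0.5):
    """Return every intention label whose sigmoid score exceeds the threshold."""
    inputs = tokenizer(post_text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(0)
    scores = torch.sigmoid(logits)
    return [label for label, s in zip(LABELS, scores) if s > threshold]

# A post can carry several intentions at once, hence multi-label prediction.
print(predict_intentions("Getting a NullPointerException when parsing JSON -- any advice?"))
```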

Related content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and a full taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of a computational lexicon like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "Car" might appear under both "Vehicles" and "Steel structures", though to some this merely means that "Car" is part of several different taxonomies. A taxonomy might also simply organize things into groups, or be an alphabetically ordered list; here, however, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a larger variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus progresses from the general to the more specific.
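
To make the tree-structure definition concrete, here is a small sketch of a hierarchical taxonomy as a tree, with a root that applies to all objects and a general-to-specific lookup; the category names are invented for the example. A multi-parent scheme (e.g., "Car" under both "Vehicles" and "Steel structures") would require a directed acyclic graph instead of a tree.

```python
# Illustrative sketch of a hierarchical taxonomy as a tree: the root applies
# to all objects, and each child node classifies a subset. Category names
# are invented for the example.
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    name: str
    children: list["TaxonomyNode"] = field(default_factory=list)

    def add(self, child: "TaxonomyNode") -> "TaxonomyNode":
        self.children.append(child)
        return child

    def path_to(self, target: str, path=()):
        """General-to-specific path from this node down to the target category."""
        path = path + (self.name,)
        if self.name == target:
            return path
        for child in self.children:
            found = child.path_to(target, path)
            if found:
                return found
        return None

root = TaxonomyNode("Thing")                # root node applies to all objects
vehicle = root.add(TaxonomyNode("Vehicle"))
vehicle.add(TaxonomyNode("Car"))
print(root.path_to("Car"))                  # ('Thing', 'Vehicle', 'Car')
```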

With the rise of social media, users are exposed to many misleading claims. However, the pervasive noise inherent in these posts presents a challenge in identifying precise and prominent claims that require verification. Extracting the important claims from such posts is arduous and time-consuming, yet it is an underexplored problem. Here, we aim to bridge this gap. We introduce a novel task, Claim Normalization (aka ClaimNorm), which aims to decompose complex and noisy social media posts into more straightforward and understandable forms, termed normalized claims. We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation, mimicking human reasoning processes, to comprehend intricate claims. Moreover, we capitalize on the in-context learning capabilities of large language models to provide guidance and to improve claim normalization. To evaluate the effectiveness of our proposed model, we meticulously compile a comprehensive real-world dataset, CLAN, comprising more than 6k instances of social media posts alongside their respective normalized claims. Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures. Finally, our rigorous error analysis validates CACN's capabilities and pitfalls.
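
The following is a hypothetical sketch of the prompting strategy the abstract outlines: in-context demonstrations combined with a chain-of-thought instruction that asks the model to isolate and assess the central claim before normalizing it. `call_llm` stands in for any chat-completion client, and the demonstration is invented; none of this is CACN's actual prompt.

```python
# Hypothetical sketch of claim normalization via in-context learning plus a
# chain-of-thought instruction with a check-worthiness step. The demonstration
# is invented, and `call_llm` is a stand-in for any chat-completion client.
DEMONSTRATIONS = [
    {"post": "BREAKING!!1 they PROVED 5g towers cause illness, share b4 deleted",
     "claim": "5G towers cause illness."},
]

def build_prompt(post: str) -> str:
    shots = "\n\n".join(
        f"Post: {d['post']}\nNormalized claim: {d['claim']}"
        for d in DEMONSTRATIONS
    )
    return (
        "Rewrite each noisy social media post as one clear, check-worthy claim.\n"
        "Think step by step: first identify the central factual assertion, then\n"
        "judge whether it is worth fact-checking, then state it plainly.\n\n"
        f"{shots}\n\nPost: {post}\nNormalized claim:"
    )

def normalize_claim(post: str, call_llm) -> str:
    """call_llm: any function mapping a prompt string to a completion string."""
    return call_llm(build_prompt(post)).strip()
```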

With the rise of online abuse, the NLP community has begun investigating the use of neural architectures to generate counterspeech that can "counter" the vicious tone of such abusive speech and dilute/ameliorate their rippling effect over the social network. However, most of the efforts so far have been primarily focused on English. To bridge the gap for low-resource languages such as Bengali and Hindi, we create a benchmark dataset of 5,062 abusive speech/counterspeech pairs, of which 2,460 pairs are in Bengali and 2,602 pairs are in Hindi. We implement several baseline models considering various interlingual transfer mechanisms with different configurations to generate suitable counterspeech to set up an effective benchmark. We observe that the monolingual setup yields the best performance. Further, using synthetic transfer, language models can generate counterspeech to some extent; specifically, we notice that transferability is better when languages belong to the same language family.
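
As an illustration of the monolingual setup the abstract finds strongest, here is a minimal sketch of fine-tuning a multilingual seq2seq model on abusive-speech/counterspeech pairs in a single language. The model choice (google/mt5-small) and hyperparameters are assumptions, not the paper's configuration.

```python
# Minimal sketch of monolingual counterspeech generation: fine-tune a
# multilingual seq2seq model on pairs from one language. Model choice and
# hyperparameters are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def training_step(abusive: str, counterspeech: str) -> float:
    """One gradient step on a single abusive-speech/counterspeech pair."""
    inputs = tokenizer(abusive, return_tensors="pt", truncation=True)
    labels = tokenizer(counterspeech, return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

def generate_counterspeech(abusive: str) -> str:
    ids = tokenizer(abusive, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=64, num_beams=4)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```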

In the emerging space economy, autonomous robotic missions with specialized goals such as mapping and mining are gaining traction, with agencies and enterprises increasingly investing resources. Multirobot systems (MRS) research has provided many approaches to establish control and communication layers to facilitate collaboration from a technical perspective, such as granting more autonomy to heterogeneous robotic groups through auction-based interactions in mesh networks. However, stakeholders' competing economic interests often prevent them from cooperating within a proprietary ecosystem. Related work suggests that distributed ledger technology (DLT) might serve as a mechanism for enterprises to coordinate workflows and trade services to explore space resources through a transparent, reliable, non-proprietary digital platform. We challenge this perspective by pointing to the core technical weaknesses of blockchains, in particular, increased energy consumption, low throughput, and full transparency through redundancy. Our objective is to advance the discussion in a direction where the benefits of DLT from an economic perspective are weighed against the drawbacks from a technical perspective. We finally present a possible DLT-driven heterogeneous MRS for map exploration to study the opportunities for economic collaboration and competitiveness.

Current state-of-the-art recommender systems predominantly rely on either implicit or explicit feedback from users to suggest new items. While effective in recommending novel options, these conventional systems often use uninterpretable embeddings. This lack of transparency not only limits user understanding of why certain items are suggested but also reduces the user's ability to easily scrutinize and edit their preferences. For example, if a user has a change in interests, they would need to make significant changes to their interaction history to adjust the model's recommendations. To address these limitations, we introduce a novel method that utilizes user reviews to craft personalized, natural language profiles describing users' preferences. Through these descriptive profiles, our system provides transparent recommendations in natural language. Our evaluations show that this novel approach maintains a performance level on par with established recommender systems, but with the added benefits of transparency and user control. By enabling users to scrutinize why certain items are recommended, they can more easily verify, adjust, and have greater autonomy over their recommendations.
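
A hypothetical sketch of the two-stage idea follows: distill a user's reviews into a natural-language preference profile, then rank items by how well their descriptions match that profile. `summarize_reviews` stands in for any LLM call, and the embedding model is an assumed choice; the paper's actual pipeline may differ.

```python
# Hypothetical sketch: build a natural-language profile from reviews, then
# rank items by semantic similarity to the profile. `summarize_reviews` is a
# stand-in for any LLM call; the embedding model is an assumed choice.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def build_profile(reviews: list[str], summarize_reviews) -> str:
    # e.g. "Enjoys character-driven sci-fi; dislikes slow pacing."
    return summarize_reviews(reviews)

def rank_items(profile: str, item_descriptions: list[str]) -> list[int]:
    """Return item indices ordered from best to worst match with the profile."""
    p = encoder.encode(profile, convert_to_tensor=True)
    d = encoder.encode(item_descriptions, convert_to_tensor=True)
    scores = util.cos_sim(p, d).squeeze(0)
    return scores.argsort(descending=True).tolist()
```

Because the profile is plain text, the user can read and edit it directly, which is the transparency and control benefit the abstract emphasizes.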

In Federated Learning (FL), forgetting, or the loss of knowledge across rounds, hampers algorithm convergence, particularly in the presence of severe data heterogeneity among clients. This study explores the nuances of this issue, emphasizing the critical role of forgetting in FL's inefficient learning within heterogeneous data contexts. Knowledge loss occurs in both client-local updates and server-side aggregation steps; addressing one without the other fails to mitigate forgetting. We introduce a metric that measures forgetting granularly, ensuring it is recognized as distinct from new knowledge acquisition. Leveraging these insights, we propose Flashback, an FL algorithm that uses a dynamic distillation approach to regularize the local models and to aggregate their knowledge effectively. Across different benchmarks, Flashback outperforms other methods, mitigates forgetting, and achieves faster round-to-target accuracy, converging in 6 to 16 rounds.
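
To ground the distillation idea, here is a minimal sketch of a distillation-regularized client update in FL, where a KL term keeps the local model close to the received global model's predictions. This illustrates the general mechanism only; Flashback's dynamic weighting and server-side aggregation step are not reproduced here.

```python
# Minimal sketch of a distillation-regularized local update in FL: a KL term
# penalizes drift from the received global model, mitigating forgetting.
# This shows the general mechanism, not Flashback's actual dynamic scheme.
import copy
import torch
import torch.nn.functional as F

def local_update(global_model, loader, lr=0.01, alpha=0.5, epochs=1):
    teacher = copy.deepcopy(global_model).eval()   # frozen global model
    student = copy.deepcopy(global_model).train()  # local model to train
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            logits = student(x)
            with torch.no_grad():
                teacher_logits = teacher(x)
            ce = F.cross_entropy(logits, y)
            kd = F.kl_div(F.log_softmax(logits, dim=-1),
                          F.softmax(teacher_logits, dim=-1),
                          reduction="batchmean")
            loss = ce + alpha * kd  # distillation term counters forgetting
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student.state_dict()  # sent back to the server for aggregation
```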

The fusion of causal models with deep learning has introduced increasingly intricate data sets, such as the causal associations within images or between textual components, and has surfaced as a focal research area. Nonetheless, extending original causal concepts and theories to such complex, non-statistical data has been met with serious challenges. In response, our study proposes redefinitions of causal data into three distinct categories from the standpoint of causal structure and representation: definite data, semi-definite data, and indefinite data. Definite data chiefly pertains to the statistical data used in conventional causal scenarios, while semi-definite data refers to a spectrum of data formats germane to deep learning, including time series, images, text, and others. Indefinite data is an emergent research sphere that we infer from the progression of data forms. To comprehensively present these three data paradigms, we elaborate on their formal definitions, the differences they manifest in datasets, their resolution pathways, and the development of research on them. We summarize key tasks and achievements pertaining to definite and semi-definite data from myriad research undertakings, and we present a roadmap for indefinite data, beginning with its current research conundrums. Lastly, we classify and scrutinize the key datasets presently utilized within these three paradigms.

Recently, graph neural networks have been gaining a lot of attention for simulating dynamical systems, thanks to their inductive nature, which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulating large-scale realistic systems.
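
As a worked example of the Hamiltonian inductive bias shared by several of the surveyed models, the sketch below learns a scalar energy H(q, p) and obtains the dynamics from Hamilton's equations, dq/dt = ∂H/∂p and dp/dt = -∂H/∂q, via automatic differentiation. The plain MLP (rather than a GNN) and the Euler integrator are simplifying assumptions.

```python
# Illustrative sketch of the Hamiltonian inductive bias: a network predicts a
# scalar energy H(q, p), and the dynamics follow Hamilton's equations.
# The MLP (not a GNN) and the Euler integrator are simplifying assumptions.
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(2 * dim, 64), nn.Tanh(),
                               nn.Linear(64, 1))

    def time_derivatives(self, q, p):
        q = q.detach().requires_grad_(True)
        p = p.detach().requires_grad_(True)
        H = self.h(torch.cat([q, p], dim=-1)).sum()
        dHdq, dHdp = torch.autograd.grad(H, (q, p))
        return dHdp, -dHdq              # dq/dt = dH/dp, dp/dt = -dH/dq

def rollout(model, q, p, dt=0.01, steps=100):
    """Integrate the learned dynamics forward from the initial state (q, p)."""
    trajectory = [(q, p)]
    for _ in range(steps):
        dq, dp = model.time_derivatives(q, p)
        q, p = q + dt * dq, p + dt * dp  # Euler step; symplectic works better
        trajectory.append((q, p))
    return trajectory
```

Because the dynamics are derived from a single learned energy, such models conserve energy far better in rollouts than unconstrained networks, which is one of the conserved-quantity comparisons the study reports.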

Over the past few years, the rapid development of deep learning technologies for computer vision has greatly promoted the performance of medical image segmentation (MedISeg). However, recent MedISeg publications usually focus on presenting their major contributions (e.g., network architectures, training strategies, and loss functions) while unwittingly ignoring some marginal implementation details (also known as "tricks"), leading to potentially unfair comparisons of experimental results. In this paper, we collect a series of MedISeg tricks for different model implementation phases (i.e., pre-training model, data pre-processing, data augmentation, model implementation, model inference, and result post-processing), and experimentally explore the effectiveness of these tricks on consistent baseline models. Compared to paper-driven surveys that focus only on analyzing the advantages and limitations of segmentation models, our work provides a large number of solid experiments and is more technically operable. With extensive experimental results on both representative 2D and 3D medical image datasets, we explicitly clarify the effect of these tricks. Moreover, based on the surveyed tricks, we have also open-sourced a strong MedISeg repository, where each component has the advantage of being plug-and-play. We believe that this milestone work not only completes a comprehensive and complementary survey of state-of-the-art MedISeg approaches, but also offers a practical guide for addressing future medical image processing challenges, including but not limited to small-dataset learning, class-imbalance learning, multi-modality learning, and domain adaptation. The code has been released at: //github.com/hust-linyi/MedISeg
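
As an example of an inference-phase trick of the kind the survey catalogues, the sketch below applies test-time augmentation (TTA) to a segmentation model by averaging predictions over horizontal flips. The choice of trick and its implementation are ours for illustration, not a claim about the released repository.

```python
# Sketch of one inference-phase "trick": test-time augmentation (TTA) for
# segmentation, averaging per-pixel class probabilities over horizontal
# flips. Illustrative only; not the repository's implementation.
import torch

@torch.no_grad()
def tta_segment(model, image):
    """image: (B, C, H, W) -> averaged per-pixel class probabilities."""
    probs = torch.softmax(model(image), dim=1)
    flipped = torch.flip(image, dims=[-1])       # horizontal flip
    probs_flipped = torch.softmax(model(flipped), dim=1)
    probs += torch.flip(probs_flipped, dims=[-1])  # un-flip the prediction
    return probs / 2
```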

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, and disadvantages of the techniques in each category, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
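
To make category (1) concrete, here is a minimal sketch of global unstructured magnitude pruning with PyTorch's pruning utilities, which zeroes the smallest-magnitude weights across the network. The 50% sparsity level is an illustrative choice; practical pipelines typically fine-tune after pruning to recover accuracy.

```python
# Minimal sketch of one technique from category (1): global unstructured
# magnitude pruning, zeroing the smallest weights network-wide. The 50%
# sparsity level is illustrative; fine-tuning afterwards is the usual recipe.
import torch.nn as nn
import torch.nn.utils.prune as prune

def magnitude_prune(model: nn.Module, amount: float = 0.5) -> nn.Module:
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Linear, nn.Conv2d))]
    prune.global_unstructured(params,
                              pruning_method=prune.L1Unstructured,
                              amount=amount)
    for module, name in params:      # bake the masks into the weights
        prune.remove(module, name)
    return model
```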

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, SIM10K, etc. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
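
The sketch below illustrates one common way to realize the adversarial domain classifier the abstract describes: a gradient reversal layer (GRL) placed before a small classifier, so that minimizing the domain classification loss pushes the upstream features toward domain invariance. The layer sizes are illustrative, and this is not the authors' exact implementation.

```python
# Sketch of an adversarial domain classifier via a gradient reversal layer
# (GRL), a common way to realize H-divergence-style feature alignment.
# Layer sizes are illustrative; this is not the paper's exact implementation.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # reverse gradients into the backbone

class DomainClassifier(nn.Module):
    def __init__(self, feat_dim=256, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, feats):
        # feats: (N, feat_dim) image-level or instance-level features
        return self.net(GradReverse.apply(feats, self.lam))

# Training minimizes BCE on source/target domain labels; the reversed
# gradient drives the detector's features to become domain-invariant.
```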
