
The landscape of software developer learning resources has continuously evolved, with recent trends favoring engaging formats like video tutorials. The emergence of Large Language Models (LLMs) like ChatGPT presents a new learning paradigm. While existing research explores the potential of LLMs in software development and education, their impact on developers' learning and solution-seeking behavior remains largely unexplored. To address this gap, we conducted a survey targeting software developers and computer science students, gathering 341 responses, of which 268 were complete and analyzed. This study investigates how AI chatbots like ChatGPT have influenced developers' learning preferences when acquiring new skills, exploring technologies, and resolving programming issues. Through quantitative and qualitative analysis, we explore whether AI tools supplement or replace traditional learning resources such as video tutorials, written tutorials, and Q&A forums. Our findings reveal a nuanced view: while video tutorials continue to be highly preferred for their comprehensive coverage, a significant number of respondents view AI chatbots as potential replacements for written tutorials, underscoring a shift towards more interactive and personalized learning experiences. Additionally, AI chatbots are increasingly considered valuable supplements to video tutorials, indicating their growing role among developers' learning resources. These insights shed light on evolving preferences for learning resources in the era of ChatGPT and offer valuable directions for educators and the software development community.

Related Content

Deep reinforcement learning (DRL) has had success across various domains, but applying it to environments with constraints remains challenging due to poor sample efficiency and slow convergence. Recent literature has explored incorporating model knowledge to mitigate these problems, particularly through models that assess the feasibility of proposed actions. However, efficiently integrating feasibility models into DRL pipelines for environments with continuous action spaces is non-trivial. We propose a novel DRL training strategy based on action mapping, which leverages feasibility models to streamline the learning process. By decoupling the learning of feasible actions from policy optimization, action mapping allows DRL agents to focus on selecting the optimal action from a reduced feasible action set. Our experiments demonstrate that action mapping significantly improves training performance in constrained environments with continuous action spaces, especially with imperfect feasibility models.
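
The abstract does not spell out the action-mapping mechanism, so the following is only a minimal sketch under assumptions: a hypothetical `is_feasible` model (here a toy state-dependent box constraint) and a nearest-feasible-candidate mapping. It illustrates just the decoupling idea the abstract names: the policy emits a raw action, and the mapping confines it to the feasible set, so policy optimization happens over feasible actions only.

```python
import numpy as np

def is_feasible(state, action):
    """Hypothetical feasibility model: a toy box constraint whose
    bounds depend on the state. A learned or analytic model would
    take its place in practice."""
    lo, hi = -1.0 + 0.1 * state[0], 1.0 - 0.1 * state[0]
    return np.all((action >= lo) & (action <= hi))

def map_action(state, raw_action, n_candidates=256, dim=2, rng=None):
    """Map the policy's raw output onto the feasible set by picking
    the nearest feasible candidate from a random sample. The policy
    then only has to learn *which* feasible action is best, not
    where the feasible set lies."""
    rng = rng or np.random.default_rng(0)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, dim))
    feasible = np.array([c for c in candidates if is_feasible(state, c)])
    if len(feasible) == 0:  # nothing feasible sampled: fall back to raw action
        return raw_action
    return feasible[np.argmin(np.linalg.norm(feasible - raw_action, axis=1))]

state = np.array([2.0, 0.0])
raw = np.array([0.9, -0.9])    # raw policy output in [-1, 1]^2
print(map_action(state, raw))  # projected into the state-dependent box
```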

The rapid evolution of programming languages and software systems has necessitated multilingual and scalable clone detection tools. However, it is difficult to satisfy both requirements at once, and most existing tools address only one of them. In this work, we propose TGMM, a tree- and GPU-based tool for multilingual and multi-granularity code clone detection. By generating parse trees from user-provided grammar files, TGMM can extract code blocks at a specified granularity and detect Type-3 clones efficiently. To evaluate the performance of TGMM, we compare it with seven state-of-the-art tools in terms of recall, precision, and execution time. TGMM ranks first in execution time and precision, while its recall is comparable to the others. Moreover, we analyzed the language extensibility of TGMM across 30 mainstream programming languages: 25 are supported, while the remaining five currently lack the necessary grammar files. Finally, we analyzed the clone characteristics of nine popular languages at five common granularities, hoping to inspire future researchers. The source code of TGMM is available at: //github.com/TGMM24/TGMM.git.
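
TGMM's tree- and GPU-based pipeline is not detailed in the abstract; the sketch below illustrates only the two ingredients it names, granularity-based block extraction and near-miss (Type-3) matching, using Python's stdlib `ast` and `tokenize` modules in place of user-provided grammar files and GPU kernels. The identifier normalization and the 0.7 threshold are illustrative choices, not TGMM's.

```python
import ast
import io
import keyword
import tokenize
from collections import Counter

def function_blocks(source):
    """Extract function-granularity blocks (TGMM selects granularity
    via user-provided grammar files; Python's stdlib ast stands in)."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            yield node.name, ast.get_source_segment(source, node)

def tokens(code):
    """Lexical token stream with identifiers normalized to 'ID', a
    common normalization so renamed variables still match."""
    out = []
    for t in tokenize.generate_tokens(io.StringIO(code + "\n").readline):
        if t.type == tokenize.NAME:
            out.append(t.string if keyword.iskeyword(t.string) else "ID")
        elif t.type in (tokenize.OP, tokenize.NUMBER, tokenize.STRING):
            out.append(t.string)
    return out

def similarity(a, b):
    """Token-multiset overlap in [0, 1]; Type-3 clones tolerate small
    edits, so pairs above a threshold count rather than exact matches."""
    ca, cb = Counter(a), Counter(b)
    return 2 * sum((ca & cb).values()) / (len(a) + len(b))

src = '''
def add_all(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_list(values):
    result = 0
    for v in values:
        result += v
    return result + 0
'''

blocks = list(function_blocks(src))
for i in range(len(blocks)):
    for j in range(i + 1, len(blocks)):
        (n1, c1), (n2, c2) = blocks[i], blocks[j]
        score = similarity(tokens(c1), tokens(c2))
        if score >= 0.7:  # illustrative Type-3 threshold
            print(f"clone pair: {n1} ~ {n2} (similarity {score:.2f})")
```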

The technique of forgetting in knowledge representation has been shown to be a powerful and useful knowledge engineering tool with widespread application. Yet, very little research has been done on how different policies of forgetting, or the use of different forgetting operators, affect the inferential strength of the original theory. The goal of this paper is to define loss functions for measuring changes in inferential strength based on intuitions from model counting and probability theory. Properties of such loss measures are studied, and a pragmatic knowledge engineering tool for computing them using ProbLog is proposed. The paper includes a working methodology for studying and determining the strength of different forgetting policies, together with concrete examples showing how to apply the theoretical results using ProbLog. Although the focus is on forgetting, the results are much more general and should have wider application to other areas.
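
As a worked toy example of the model-counting intuition (not the paper's actual loss definitions, and using plain enumeration instead of ProbLog), the sketch below measures how propositional forgetting changes the fraction of satisfying assignments of a small theory:

```python
from itertools import product

def models(theory, atoms):
    """Enumerate satisfying assignments of a propositional theory,
    given as a boolean function over a dict of atom values."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if theory(dict(zip(atoms, vals)))]

def forget(theory, atom):
    """Propositional forgetting: theory[atom/True] OR theory[atom/False]."""
    return lambda a: theory({**a, atom: True}) or theory({**a, atom: False})

# Toy theory over {p, q, r}: (p -> q) and (q -> r)
atoms = ["p", "q", "r"]
T = lambda a: (not a["p"] or a["q"]) and (not a["q"] or a["r"])

before = len(models(T, atoms))               # 4 models
after = len(models(forget(T, "q"), atoms))   # 6 models: the theory got weaker
# One possible loss measure in the model-counting spirit: the change in
# the fraction of satisfying assignments (here 6/8 - 4/8 = 0.25).
print(before, after, after / 2**len(atoms) - before / 2**len(atoms))
```

Forgetting q leaves more models, so the theory entails less; a loss measure of this kind quantifies exactly that drop in inferential strength.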

Guided data visualization systems are highly useful for domain experts who need to highlight important trends in their large-scale and complex datasets. However, more work is needed to understand how guidance affects both the interpretation of data visualizations and the subsequent use of those visualizations to communicate insights. We conducted two user studies with domain experts and found that they benefit from a guided coarse-to-fine structure when using data visualization systems, as this mirrors the structure in which they communicate their findings.

In the ever-evolving landscape of computing, the advent of edge and fog computing has revolutionized data processing by bringing it closer to end users. While cloud computing offers numerous advantages, including mobility, flexibility, and scalability, it introduces challenges such as latency. Fog and edge computing emerge as complementary solutions, bridging the gap and enhancing the proximity of services to users. The pivotal challenge addressed in this paper is optimizing the placement of application microservices to minimize latency in the cloud-to-edge continuum, where proper node selection can strongly influence application performance. The task gains further complexity from the paradigm shift from monolithic to microservices-based architectures. Two distinct placement approaches, app-based and service-based, are compared through four placement algorithms based on criteria such as link latency, node resources, and gateway proximity. App-based allocation places all the services of one app sequentially, while service-based allocation places one service of each app at a time. The study, conducted using YAFS (Yet Another Fog Simulator), evaluates the impact of these approaches on latency and load balance. The findings consistently confirm the hypothesis that service-based strategies outperform, or perform at least as well as, app-based ones, offering valuable insights into the trade-offs and performance differences among the algorithms and approaches for efficient microservice placement in cloud-to-edge environments.
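
The paper's four algorithms and YAFS experiments are not reproduced here; the sketch below isolates the one difference the abstract describes, the allocation order, under a simplified greedy placement with toy per-node latencies and capacities (all names and numbers are hypothetical):

```python
from itertools import zip_longest

def place(order, capacity, latency):
    """Greedy placement: each service goes to the lowest-latency node
    with free capacity. The `order` argument is the only difference
    between the two strategies."""
    placement, free = {}, dict(capacity)
    for app, svc in order:
        node = min((n for n in free if free[n] > 0), key=lambda n: latency[n])
        placement[(app, svc)] = node
        free[node] -= 1
    return placement

def app_based_order(apps):
    # All services of one app sequentially, then the next app.
    return [(a, s) for a, services in apps.items() for s in services]

def service_based_order(apps):
    # One service of each app per round (round-robin across apps).
    rounds = zip_longest(*([(a, s) for s in services] for a, services in apps.items()))
    return [p for rnd in rounds for p in rnd if p is not None]

apps = {"A": ["a1", "a2"], "B": ["b1", "b2"]}
capacity = {"edge": 2, "fog": 2}  # service slots per node (toy values)
latency = {"edge": 2, "fog": 10}  # ms to the gateway (toy values)
print(place(app_based_order(apps), capacity, latency))
print(place(service_based_order(apps), capacity, latency))
```

In this toy run, the app-based order lets the first app monopolize the low-latency node, while the service-based order gives each app one low-latency slot, mirroring the load-balancing advantage the study reports.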

The increasing complexity of computer systems necessitates innovative approaches to fault and error management, going beyond traditional manual log analysis. While existing solutions using large language models (LLMs) show promise, they are limited by a gap between natural and domain-specific languages, which restricts their effectiveness in real-world applications. Our approach addresses these limitations by integrating interpretable domain knowledge into open-source LLMs through continual pre-training (CPT), enhancing performance on log tasks while retaining natural language processing capabilities. We created a comprehensive dataset, NLPLog, with over 250,000 question-answer pairs to facilitate this integration. Our model, SuperLog, trained with this dataset, achieves the best performance across four log analysis tasks, surpassing the second-best model by an average of 12.01%. Our contributions include a novel CPT paradigm that significantly improves model performance, the development of SuperLog with state-of-the-art results, and the release of a large-scale dataset to support further research in this domain.
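
SuperLog's training recipe is only named in the abstract; below is a minimal continual pre-training sketch with Hugging Face Transformers, assuming NLPLog-style question-answer pairs are serialized to plain text for causal-LM training. The base model (`gpt2`), the serialization format, and all hyperparameters are placeholders, not the paper's.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; the paper only says "open-source LLMs"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# NLPLog-style question-answer pairs serialized to plain text for
# causal-LM continual pre-training (toy examples; the real dataset
# has over 250,000 pairs).
pairs = [
    {"q": "What does the log line 'kernel: Out of memory' indicate?",
     "a": "The kernel's OOM killer fired because memory was exhausted."},
]
texts = [f"Question: {p['q']}\nAnswer: {p['a']}" for p in pairs]

ds = Dataset.from_dict({"text": texts}).map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cpt-out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```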

The fusion of causal models with deep learning has introduced increasingly intricate data sets, such as the causal associations within images or between textual components, and has surfaced as a focal research area. Nonetheless, broadening the original causal concepts and theories to such complex, non-statistical data has met with serious challenges. In response, our study proposes redefinitions of causal data into three distinct categories from the standpoint of causal structure and representation: definite data, semi-definite data, and indefinite data. Definite data chiefly pertains to statistical data used in conventional causal scenarios, while semi-definite data refers to a spectrum of data formats germane to deep learning, including time series, images, text, and others. Indefinite data is an emergent research area that we infer from the progression of data forms. To comprehensively present these three data paradigms, we elaborate on their formal definitions, the differences manifested in datasets, resolution pathways, and the development of research. We summarize key tasks and achievements pertaining to definite and semi-definite data from myriad research undertakings, and we present a roadmap for indefinite data, beginning with its current research conundrums. Lastly, we classify and scrutinize the key datasets presently utilized within these three paradigms.

In contrast to batch learning, where all training data is available at once, continual learning represents a family of methods that accumulate knowledge and learn continuously from data arriving in sequential order. Similar to human learning, with its ability to learn, fuse, and accumulate new knowledge arriving at different time steps, continual learning is considered to have high practical significance. Hence, continual learning has been studied in various artificial intelligence tasks. In this paper, we present a comprehensive review of the recent progress of continual learning in computer vision. In particular, the works are grouped by their representative techniques: regularization, knowledge distillation, memory, generative replay, parameter isolation, and combinations of these. For each category, both its characteristics and its applications in computer vision are presented. At the end of this overview, we discuss several subareas where continuous knowledge accumulation is potentially helpful but continual learning has not yet been well studied.
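
Of the technique families listed, memory/replay is the easiest to make concrete; the sketch below shows a reservoir-sampling replay buffer (a representative memory-based mechanism, not any specific surveyed method) that lets each training batch mix examples from past and current tasks:

```python
import random
import torch

class ReplayBuffer:
    """Reservoir-sampling memory: every example seen so far is kept
    with equal probability, so batches can mix past and current tasks
    and mitigate catastrophic forgetting."""
    def __init__(self, capacity=500):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        xs, ys = zip(*random.sample(self.data, min(k, len(self.data))))
        return torch.stack(xs), torch.stack(ys)

# In a task-sequential training loop one would add each incoming
# (x, y) to the buffer and append a replayed mini-batch to every step:
buf = ReplayBuffer(capacity=4)
for i in range(10):
    buf.add(torch.tensor([float(i)]), torch.tensor(i % 2))
replay_x, replay_y = buf.sample(2)
print(replay_x.shape, replay_y.shape)  # torch.Size([2, 1]) torch.Size([2])
```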

We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in the scientific literature.
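
SciIE's architecture is only named in the abstract; the PyTorch sketch below illustrates the shared-span-representation idea, with three task heads reading from one span encoder. All dimensions, the endpoint-based span representation, and the simplified span pairing are hypothetical stand-ins, not the paper's design.

```python
import torch
import torch.nn as nn

class SharedSpanModel(nn.Module):
    """One shared span representation feeding three task heads
    (entities, relations, coreference). Dimensions and the span
    pairing are hypothetical stand-ins."""
    def __init__(self, hidden=256, n_entity_types=6, n_relation_types=7):
        super().__init__()
        self.encoder = nn.LSTM(300, hidden // 2, bidirectional=True, batch_first=True)
        self.entity_head = nn.Linear(2 * hidden, n_entity_types)
        self.relation_head = nn.Linear(4 * hidden, n_relation_types)
        self.coref_head = nn.Linear(4 * hidden, 1)

    def forward(self, embeddings, spans):
        h, _ = self.encoder(embeddings)
        # Endpoint concatenation as the shared span representation.
        s = torch.cat([h[:, spans[:, 0]], h[:, spans[:, 1]]], dim=-1)
        # Pairing simplified to span-vs-first-span for brevity; a real
        # system scores all (or pruned) span pairs.
        pairs = torch.cat([s[:, :1].expand_as(s), s], dim=-1)
        return self.entity_head(s), self.relation_head(pairs), self.coref_head(pairs)

x = torch.randn(1, 20, 300)                       # toy word embeddings
spans = torch.tensor([[0, 2], [5, 8], [10, 12]])  # candidate span boundaries
entities, relations, coref = SharedSpanModel()(x, spans)
print(entities.shape, relations.shape, coref.shape)
```

Because all heads backpropagate through the same span encoder, an error signal from one task (e.g., coreference) shapes the representations the others consume, which is the mechanism behind the reduced cascading errors.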

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style and illumination, and 2) the instance-level shift, such as object appearance and size. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, one at the image level and one at the instance level, to reduce the domain discrepancy. The two components are based on H-divergence theory and are implemented by learning domain classifiers in an adversarial training manner. The domain classifiers at different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our approach using multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate its effectiveness for robust object detection in various domain shift scenarios.
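
The full Faster R-CNN pipeline is out of scope here; the sketch below isolates one adversarially trained domain classifier, implemented with a gradient reversal layer, a standard way to realize this kind of adversarial feature alignment in PyTorch (the layer sizes and loss wiring are illustrative, not the paper's exact design).

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward
    pass: the backbone is trained to *fool* the domain classifier,
    aligning source and target feature distributions."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class ImageLevelDomainClassifier(nn.Module):
    """Image-level adaptation head: predicts, per feature-map location,
    whether backbone features come from the source or target domain.
    An instance-level head over RoI features would be analogous."""
    def __init__(self, channels=512, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(nn.Conv2d(channels, 256, 1), nn.ReLU(),
                                 nn.Conv2d(256, 1, 1))

    def forward(self, feat):
        return self.net(GradReverse.apply(feat, self.lam))

# Toy wiring: a domain loss added on top of the usual detection loss.
feat = torch.randn(2, 512, 38, 50, requires_grad=True)  # backbone features
logits = ImageLevelDomainClassifier()(feat)
domain_labels = torch.zeros_like(logits)  # 0 = source, 1 = target (toy)
loss = nn.functional.binary_cross_entropy_with_logits(logits, domain_labels)
loss.backward()  # gradients w.r.t. feat arrive sign-flipped
```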
