
In software development, developers frequently perform maintenance activities that change only a few lines of source code in a single commit. A good understanding of the characteristics of such small changes can support quality assurance approaches (e.g., automated program repair), since small changes are likely to address deficiencies introduced by other changes; understanding why small changes are made can therefore help us understand the types of errors being introduced. Eventually, these reasons and error types can be used to enhance quality assurance approaches and improve code quality. While prior studies used code churn to characterize and investigate small changes, this definition has a critical limitation: it discards information about which tokens changed within a line. For example, it fails to distinguish the following two one-line changes: (1) changing a string literal to fix a displayed message and (2) changing a function call and adding a new parameter. Both are maintenance activities, but we expect researchers and practitioners to be more interested in supporting the latter. To address this limitation, in this paper we define micro commits, a type of small change defined in terms of changed tokens. Our goal is to quantify small changes using changed tokens, which allow us to identify small changes more precisely; indeed, this token-level definition distinguishes the two changes above. We investigate micro commits in four OSS projects and characterize them in the first empirical study of token-based micro commits. We find that micro commits mainly replace a single name or literal token, and that micro commits are more likely to be used to fix bugs. We also propose using token-based information to support software engineering approaches whose effectiveness is significantly affected by very small changes.
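
To make the token-level idea concrete, here is a minimal sketch (ours, not the paper's tooling) that lexes two versions of a line with Python's built-in tokenize module and reports the changed tokens via difflib. A churn-based view would score both edits below identically as one changed line; the token view separates the single-literal change from the call-plus-parameter change:

```python
import difflib
import io
import tokenize

def tokens(line: str) -> list[str]:
    """Lex a single line of Python-like code into token strings."""
    return [tok.string
            for tok in tokenize.generate_tokens(io.StringIO(line).readline)
            if tok.string.strip()]

def changed_tokens(old: str, new: str) -> list[tuple[str, str]]:
    """Return (old, new) pairs for the token runs touched by the change."""
    matcher = difflib.SequenceMatcher(a=tokens(old), b=tokens(new))
    return [(" ".join(matcher.a[i1:i2]), " ".join(matcher.b[j1:j2]))
            for op, i1, i2, j1, j2 in matcher.get_opcodes() if op != "equal"]

# (1) only a single literal token is replaced -- a candidate micro commit
print(changed_tokens('show("Hello")', 'show("Hi")'))
# (2) a renamed call plus newly added argument tokens -- several tokens change
print(changed_tokens('load(path)', 'load_all(path, recursive=True)'))
```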

Related Content

MICRO: IEEE/ACM International Symposium on Microarchitecture. Publisher: IEEE/ACM.

During software development, poor design and implementation choices can detrimentally impact software maintainability. Design smells, recurring patterns of poorly designed code fragments, signal these issues. Role-stereotypes denote the generic responsibilities that classes assume in a system's design. Although role-stereotypes and design smells are distinct concepts, both contribute significantly to the design and maintenance of software systems. Understanding the relationship between them is crucial for enhancing software maintainability, code quality, efficient code review, guided refactoring, and the design of role-specific metrics. This paper employs an exploratory approach, combining statistical analysis and unsupervised learning methods, to understand how design smells relate to role-stereotypes across desktop and mobile applications. Analyzing 11,350 classes from 30 GitHub repositories, we identified several design smells that frequently co-occur within certain role-stereotypes. Specifically, three of the six role-stereotypes we studied are more prone to design smells. We also examined how design smells vary across the two ecosystems, driven by notable differences in their underlying architectures. The findings reveal that design smells are more prevalent in desktop than in mobile applications, especially within the Service Provider and Information Holder role-stereotypes. Additionally, the unsupervised learning method showed that certain pairs or groups of role-stereotypes are prone to similar types of design smells. We believe these relationships stem from the characteristics of, and collaborations between, role-stereotypes. The insights from this research provide valuable guidance for software teams on implementing design smell prevention and correction mechanisms, ensuring conceptual integrity during design and maintenance.
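
As a hedged illustration of the unsupervised step, the sketch below clusters role-stereotypes by their normalized design-smell profiles. The smell counts are invented for illustration, and the paper's actual features and algorithm may differ; only the six role-stereotype names (Wirfs-Brock's) are taken from the abstract's domain:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import normalize

# Hypothetical counts of smell-affected classes per role-stereotype
# (rows: stereotypes, columns: smells); real values would come from
# the 11,350 analyzed classes.
stereotypes = ["Information Holder", "Service Provider", "Structurer",
               "Controller", "Coordinator", "Interfacer"]
smells = ["God Class", "Data Class", "Long Method", "Feature Envy"]
counts = np.array([[120, 340,  80,  60],
                   [310,  90, 150, 110],
                   [ 40,  30,  20,  25],
                   [ 90,  20, 130,  70],
                   [ 35,  25,  30,  40],
                   [ 60,  45,  50,  30]])

# Normalize each stereotype's smell profile, then group stereotypes
# with similar profiles.
profiles = normalize(counts, norm="l1")
labels = AgglomerativeClustering(n_clusters=3).fit_predict(profiles)
for stereotype, label in zip(stereotypes, labels):
    print(f"{stereotype}: cluster {label}")
```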

In software development, resolving emergent issues within GitHub repositories is a complex challenge that involves not only incorporating new code but also maintaining existing code. Large Language Models (LLMs) have shown promise in code generation but struggle to resolve GitHub issues, particularly at the repository level. To overcome this challenge, we empirically study why LLMs fail to resolve GitHub issues and analyze the major factors. Motivated by the empirical findings, we propose MAGIS, a novel LLM-based Multi-Agent framework for GitHub Issue reSolution, consisting of four agents customized for software evolution: Manager, Repository Custodian, Developer, and Quality Assurance Engineer. This framework leverages the collaboration of these agents during planning and coding to unlock the potential of LLMs for resolving GitHub issues. In our experiments, we use the SWE-bench benchmark to compare MAGIS with popular LLMs, including GPT-3.5, GPT-4, and Claude-2. MAGIS resolves 13.94% of GitHub issues, significantly outperforming the baselines; in particular, it achieves an eight-fold increase in the resolution ratio over the direct application of GPT-4.
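
For intuition, here is a minimal sketch of how the four roles could divide the work around a generic text-in/text-out llm callable. The prompts and the single review round are our simplifications for illustration, not MAGIS's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

LLM = Callable[[str], str]  # any text-in/text-out model endpoint

@dataclass
class Agent:
    role: str
    llm: LLM
    def act(self, task: str) -> str:
        return self.llm(f"You are the {self.role}.\n{task}")

def resolve_issue(issue: str, repo_summary: str, llm: LLM) -> str:
    """Illustrative pipeline mirroring the four MAGIS roles."""
    custodian = Agent("Repository Custodian", llm)
    manager = Agent("Manager", llm)
    developer = Agent("Developer", llm)
    qa = Agent("Quality Assurance Engineer", llm)

    # 1. locate relevant files, 2. plan, 3. code, 4. review.
    files = custodian.act(f"Select files relevant to this issue:\n{issue}\n{repo_summary}")
    plan = manager.act(f"Break the issue into coding tasks:\n{issue}\nFiles:\n{files}")
    patch = developer.act(f"Write a patch implementing this plan:\n{plan}")
    review = qa.act(f"Review the patch; reply APPROVED or list problems:\n{patch}")
    if "APPROVED" in review:
        return patch
    return developer.act(f"Revise the patch to address this review:\n{review}\n{patch}")
```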

It is critically important to design digital identity systems that ensure the privacy of citizens and protect them from issuer corruption. Unfortunately, the systems currently being developed by the public sectors of Europe and the USA do not offer such basic protections. We aim to solve this issue and propose a method for untraceable selective disclosure and privacy-preserving revocation of digital credentials, using the homomorphic characteristics of second-order elliptic curves and Boneh-Lynn-Shacham (BLS) signatures. Our approach ensures that users can selectively reveal only the necessary credentials while protecting their privacy across multiple presentations. We also aim to protect users from issuer corruption by making it possible to apply a threshold to revocation, requiring collective agreement among multiple revocation issuers.
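
To illustrate the BLS aggregation property that such multi-issuer revocation builds on, here is a small sketch using the py_ecc library's IETF-style BLS interface (our choice of library, not the paper's; the paper's construction, with selective disclosure and true t-of-n thresholds, is considerably richer than plain n-of-n aggregation):

```python
# pip install py_ecc
from py_ecc.bls import G2ProofOfPossession as bls

message = b"revoke credential #42"

# Three independent revocation issuers, each with its own key pair.
secret_keys = [3**i + 7 for i in range(1, 4)]  # toy keys; never do this in production
public_keys = [bls.SkToPk(sk) for sk in secret_keys]

# Each issuer signs the revocation statement independently...
signatures = [bls.Sign(sk, message) for sk in secret_keys]

# ...and the signatures combine into one constant-size aggregate that
# verifies only if every issuer signed the same statement.
aggregate = bls.Aggregate(signatures)
assert bls.FastAggregateVerify(public_keys, message, aggregate)
```

A true t-of-n threshold (as the abstract describes) would additionally require secret sharing of the signing key or a dedicated threshold-BLS scheme; the sketch only shows the homomorphic combination step.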

We address the challenges and opportunities in the development of knowledge systems for Sanskrit, with a focus on question answering. By proposing a framework for the automated construction of knowledge graphs, introducing annotation tools for ontology-driven and general-purpose tasks, and offering a diverse collection of web interfaces, tools, and software libraries, we make significant contributions to the field of computational Sanskrit. These contributions not only enhance the accessibility and accuracy of Sanskrit text analysis but also pave the way for further advances in knowledge representation and language processing. Ultimately, this research contributes to the preservation, understanding, and utilization of the rich linguistic information embodied in Sanskrit texts.

We present ConvoCache, a conversational caching system that addresses the high latency and cost of generative AI models in spoken chatbots. ConvoCache finds a semantically similar past prompt and reuses its response. In this paper we evaluate ConvoCache on the DailyDialog dataset. We find that ConvoCache can apply a UniEval coherence threshold of 90% and respond to 89% of prompts using the cache, with an average latency of 214 ms, replacing LLM generation and voice synthesis that can take over 1 s. To further reduce latency we test prefetching and find limited usefulness: prefetching on the first 80% of a request yields only a 63% hit rate and a drop in overall coherence. ConvoCache can be used with any chatbot to reduce costs by cutting generative AI usage by up to 89%.
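
A minimal sketch of the caching idea follows, using cosine similarity over a generic embedding function as a stand-in for the UniEval coherence check the paper actually applies; the class and parameter names are ours:

```python
import numpy as np
from typing import Callable, Optional

Embed = Callable[[str], np.ndarray]  # any sentence-embedding model

class SemanticCache:
    """Sketch of a ConvoCache-style prompt cache. Embedding cosine
    similarity here is an illustrative proxy for UniEval coherence."""

    def __init__(self, embed: Embed, threshold: float = 0.9):
        self.embed, self.threshold = embed, threshold
        self.keys: list[np.ndarray] = []
        self.responses: list[str] = []

    def get(self, prompt: str) -> Optional[str]:
        """Return a cached response if a similar-enough past prompt exists."""
        if not self.keys:
            return None
        q = self.embed(prompt)
        sims = [float(q @ k / (np.linalg.norm(q) * np.linalg.norm(k)))
                for k in self.keys]
        best = int(np.argmax(sims))
        return self.responses[best] if sims[best] >= self.threshold else None

    def put(self, prompt: str, response: str) -> None:
        """Store a freshly generated response for future reuse."""
        self.keys.append(self.embed(prompt))
        self.responses.append(response)
```

On a cache hit the chatbot skips both LLM generation and voice synthesis, which is where the reported latency and cost savings come from.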

Modern automated driving solutions utilize trajectory planning and control components with numerous parameters that must be tuned for different driving situations and vehicle types to achieve optimal performance. This paper proposes a method to tune such parameters automatically so that the system resembles expert demonstrations. We use a cost function that captures deviations of the controller's closed-loop behavior from the recorded desired driving behavior, and accomplish parameter tuning with local optimization techniques. Three optimization alternatives are compared in a case study in which a trajectory planner is tuned for lane following in a real-world driving scenario. The results suggest that the proposed approach significantly improves on manually tuned initial parameters, even with noisy demonstration data.
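
The sketch below illustrates the tuning loop with scipy.optimize.minimize: a toy closed-loop response stands in for the real planner-in-the-loop simulation, and the cost is the mean squared deviation from a noisy synthetic demonstration. All names and dynamics here are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

def closed_loop(params: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Stand-in for simulating the planner/controller with given parameters;
    here a toy first-order lane response parameterized by gain and rate."""
    gain, rate = params
    return gain * (1.0 - np.exp(-rate * t))

def tuning_cost(params, t, demo_y):
    """Deviation of the closed-loop trajectory from the expert demonstration."""
    return np.mean((closed_loop(params, t) - demo_y) ** 2)

# Noisy expert demonstration (synthetic here; recorded driving in practice).
t = np.linspace(0.0, 5.0, 200)
demo = 1.0 * (1.0 - np.exp(-1.5 * t)) + np.random.normal(0.0, 0.02, t.shape)

# Local optimization starting from a manually tuned initial guess.
result = minimize(tuning_cost, x0=[0.5, 0.5], args=(t, demo),
                  method="Nelder-Mead")
print("tuned parameters:", result.x)
```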

AI assistance tools such as ChatGPT, Copilot, and Gemini have dramatically changed the nature of software development in recent years. Numerous studies have examined the benefits practitioners gain from using these tools in their work. While there is a growing body of knowledge on the usability of AI tools, we still lack concrete details on the issues that organizations and practitioners need to consider should they want to expand their adoption or use of AI tools. In this study, we conducted a mixed-methods study involving interviews with 26 industry practitioners and a survey of 395 respondents. We found several motives and challenges that affect individuals and organizations, and we developed a theory of AI Tool Adoption. For example, we found that creating a culture of sharing AI best practices and tips is a key motive for practitioners' adoption and use of AI tools. In total, we identified 2 individual motives, 4 individual challenges, 3 organizational motives, 3 organizational challenges, and 3 interleaved relationships. The interleaved relationships act in a push-pull manner: motives pull practitioners toward greater use of AI tools, while challenges push practitioners away from using them.

The process of software defect prediction (SDP) involves predicting which software modules or components pose the highest risk of being defective. The projections derived from SDP can then help the software development team allocate its finite resources toward the modules most likely to be defective. Because of this, SDP models need to be improved and refined continuously. Hence, this research proposes deploying a cascade generalization (CG) function to enhance the predictive performance of machine learning (ML)-based SDP models. The CG function extends the initial sample space by introducing new samples into the neighbourhood of the distribution function generated by the base classification algorithm, thereby mitigating its bias. Experiments were conducted to investigate the effectiveness of CG-based Naïve Bayes (NB), Decision Tree (DT), and k-Nearest Neighbor (kNN) models on NASA software defect datasets. The experimental results show that the CG-based models (CG-NB, CG-DT, CG-kNN) outperformed the baseline NB, DT, and kNN models, respectively. Average accuracy increased by +11.06%, +3.91%, and +5.14% over the baseline NB, DT, and kNN models, respectively, and a similar pattern was observed for the area under the curve (AUC), with CG-NB, CG-DT, and CG-kNN recording average AUC improvements of +7.98%, +26%, and +24.9% over the respective baselines. In addition, the proposed CG-based models outperformed the Bagging and Boosting ensemble variants of the NB, DT, and kNN models as well as existing computationally diverse SDP models.
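
As a sketch of the cascade idea as we read it (following Gama and Brazdil's cascade generalization; the paper's exact configuration may differ), the base classifier's class-probability outputs are appended to the original features before a second-stage learner is trained on the extended space. The dataset here is synthetic, standing in for a NASA defect dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a NASA defect dataset (modules x metrics).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: base classifier whose class-probability outputs extend the
# original feature space (the cascade generalization step).
base = GaussianNB().fit(X_tr, y_tr)
X_tr_ext = np.hstack([X_tr, base.predict_proba(X_tr)])
X_te_ext = np.hstack([X_te, base.predict_proba(X_te)])

# Stage 2: a learner trained on the extended space (here a CG-DT analogue).
cg_dt = DecisionTreeClassifier(random_state=0).fit(X_tr_ext, y_tr)
baseline_dt = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("baseline DT accuracy:", baseline_dt.score(X_te, y_te))
print("CG-DT accuracy:      ", cg_dt.score(X_te_ext, y_te))
```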

Agent-based modeling and simulation has evolved into a powerful tool for modeling complex systems, offering insights into emergent behaviors and interactions among diverse agents. Integrating large language models into agent-based modeling and simulation presents a promising avenue for enhancing simulation capabilities. This paper surveys the landscape of utilizing large language models in agent-based modeling and simulation, examining the challenges and promising future directions. Since this is an interdisciplinary field, we first introduce the background of agent-based modeling and simulation and of large language model-empowered agents. We then discuss the motivation for applying large language models to agent-based simulation and systematically analyze the challenges in environment perception, human alignment, action generation, and evaluation. Most importantly, we provide a comprehensive overview of recent work on large language model-empowered agent-based modeling and simulation across multiple scenarios, which can be divided into four domains: cyber, physical, social, and hybrid, covering simulation of both real-world and virtual environments. Finally, since this area is new and quickly evolving, we discuss the open problems and promising future directions.

Recommender systems play a crucial role in mitigating information overload by suggesting personalized items or services to users. The vast majority of traditional recommender systems treat recommendation as a static process and follow a fixed strategy. In this paper, we propose a novel recommender system that continuously improves its strategy during its interactions with users. We model the sequential interactions between users and the recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategy by recommending items on a trial-and-error basis and receiving reinforcement signals from users' feedback. In particular, we introduce an online user-agent interaction environment simulator, which can pre-train and evaluate model parameters offline before the model is applied online. Moreover, we validate the importance of list-wise recommendations during the interactions between users and agent, and develop a novel approach to incorporate them into the proposed framework, LIRD, for list-wise recommendations. Experimental results on a real-world e-commerce dataset demonstrate the effectiveness of the proposed framework.
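
To illustrate the trial-and-error interaction loop with an offline simulator, here is a deliberately simplified, bandit-style tabular sketch. The actual framework learns list-wise policies with deep RL over a real e-commerce log, so everything below (the states, the toy preference model, and the update rule) is our illustrative assumption, not LIRD itself:

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ITEMS, LIST_SIZE, EPS, LR = 10, 50, 3, 0.1, 0.05

def simulator(user_state: int, rec_list: np.ndarray) -> float:
    """Offline user simulator standing in for real feedback: reward is
    how well the recommended list matches a hidden user preference."""
    prefs = np.cos(user_state + rec_list)  # toy preference model
    return float(np.clip(prefs, 0.0, None).sum())

# Tabular stand-in for the learned policy: Q[state, item].
Q = np.zeros((N_STATES, N_ITEMS))
for episode in range(2000):
    state = rng.integers(N_STATES)
    # List-wise action: pick a whole page of items (epsilon-greedy).
    if rng.random() < EPS:
        action = rng.choice(N_ITEMS, LIST_SIZE, replace=False)
    else:
        action = np.argsort(Q[state])[-LIST_SIZE:]
    # Trial-and-error update from the simulated feedback.
    reward = simulator(state, action)
    Q[state, action] += LR * (reward / LIST_SIZE - Q[state, action])

print("top items for state 0:", np.argsort(Q[0])[-LIST_SIZE:])
```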
