
Dependency cycles pose a significant challenge to software quality and maintainability, yet there is limited understanding of how practitioners resolve them in real-world scenarios. This paper presents an empirical study of the recurring patterns software developers employ to resolve dependency cycles between two classes in practice. We analyzed data from 38 open-source projects across different domains and manually inspected hundreds of cycle-untangling cases. Our findings reveal that developers tend to employ five recurring patterns to address dependency cycles. The chosen pattern is determined not only by the dependency relations between the cyclic classes, but also by their design context, i.e., how the cyclic classes depend on, or are depended on by, their neighbor classes. Through this empirical study, we also identified three common counterintuitive solutions developers adopt when handling cycles. These recurring patterns and counterintuitive solutions can serve as a taxonomy to improve developers' awareness, and as learning material for software engineering students and inexperienced developers. Our results also suggest that, in addition to the internal structure of dependency cycles, automatic tools need to consider the design context of cycles to provide better support for refactoring them.
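To make the notion of a two-class dependency cycle concrete, here is a minimal sketch that detects mutually dependent class pairs in a class-level dependency graph. The edge-list representation and the class names are illustrative assumptions, not the paper's tooling.

```python
from collections import defaultdict

# Illustrative class-level dependency edges: (source, target) means
# "source depends on target". These class names are hypothetical.
depends = [
    ("Order", "Customer"),
    ("Customer", "Order"),   # completes a two-class cycle
    ("Order", "Invoice"),
    ("Invoice", "Payment"),
]

def two_class_cycles(edges):
    """Return each pair of classes that depend on each other."""
    adj = defaultdict(set)
    for src, dst in edges:
        adj[src].add(dst)
    seen = set()
    for src, dst in edges:
        # A two-class cycle exists when both directions are present.
        if src in adj[dst] and (dst, src) not in seen:
            seen.add((src, dst))
    return sorted(seen)

print(two_class_cycles(depends))  # [('Order', 'Customer')]
```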

Related Content

With the booming popularity of smartphones, threats related to these devices are increasingly on the rise. Smishing, a combination of SMS (Short Message Service) and phishing, has emerged as a treacherous cyber threat used by malicious actors to deceive users, aiming to steal sensitive information or money, or to install malware on their mobile devices. Despite the increase in smishing attacks in recent years, there are very few studies aimed at understanding the factors that contribute to a user's ability to differentiate real from fake messages. To address this gap in knowledge, we conducted an online survey on smishing detection with 214 participants. In this study, we presented them with 16 SMS screenshots and evaluated how different factors affect their decision-making process in smishing detection. Next, we conducted a follow-up survey to gather information on the participants' security attitudes, behavior, and knowledge. Our results highlight that attention and security-behavior scores had a significant impact on participants' accuracy in identifying smishing messages. Interestingly, we found that participants had more difficulty identifying real messages than fake ones, with an accuracy of 65.6% on fake messages and 44.6% on real messages. Our study is a step toward developing proactive strategies to counter and mitigate smishing attacks. By understanding which factors influence smishing detection, we aim to bolster users' resilience against such threats and create a safer digital environment for all.
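As a minimal illustration of the per-category accuracy breakdown reported above, the sketch below computes accuracy separately for fake and real messages from hypothetical survey responses; the response data and field layout are invented for the example.

```python
# Hypothetical responses: each entry is (message_is_fake, participant_said_fake).
responses = [
    (True, True), (True, True), (True, False),    # fake messages
    (False, False), (False, True), (False, True), # real messages
]

def accuracy(rows, fake):
    """Fraction of messages of one type (fake or real) judged correctly."""
    relevant = [(is_fake, said_fake) for is_fake, said_fake in rows if is_fake == fake]
    correct = sum(1 for is_fake, said_fake in relevant if said_fake == is_fake)
    return correct / len(relevant)

print(f"fake-message accuracy: {accuracy(responses, fake=True):.1%}")
print(f"real-message accuracy: {accuracy(responses, fake=False):.1%}")
```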

The advances of deep learning (DL) have paved the way for automatic software vulnerability repair approaches, which effectively learn the mapping from vulnerable code to fixed code. Nevertheless, existing DL-based vulnerability repair methods face notable limitations: 1) they struggle to handle lengthy vulnerable code, 2) they treat code as natural-language text, neglecting its inherent structure, and 3) they do not tap into the valuable expert knowledge present in expert systems. To address these limitations, we propose VulMaster, a Transformer-based neural network model that excels at generating vulnerability repairs by comprehensively understanding the entire vulnerable code, irrespective of its length. The model also integrates diverse information, encompassing vulnerable code structures and expert knowledge from the CWE system. We evaluated VulMaster on a real-world C/C++ vulnerability repair dataset comprising 1,754 projects with 5,800 vulnerable functions. The experimental results demonstrate that VulMaster achieves substantial improvements over the state-of-the-art learning-based vulnerability repair approach. Specifically, VulMaster improves the EM, BLEU, and CodeBLEU scores from 10.2% to 20.0%, from 21.3% to 29.3%, and from 32.5% to 40.9%, respectively.
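As a minimal sketch of how the exact-match (EM) metric above can be computed, assuming predicted repairs are compared to reference fixes after whitespace normalization (the normalization convention is ours, not necessarily the paper's):

```python
def exact_match(predictions, references):
    """Fraction of predicted repairs identical to the reference fix
    after whitespace normalization (one illustrative convention)."""
    def norm(code):
        return " ".join(code.split())
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["if (len < MAX) { copy(buf, src, len); }"]
refs  = ["if (len < MAX) {  copy(buf, src, len); }"]
print(f"EM: {exact_match(preds, refs):.1%}")  # 100.0% after normalization
```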

In the rapidly evolving global business landscape, the demand for software has intensified competition among organizations, leading to challenges in retaining highly qualified IT members in software organizations. One of the problems faced by IT organizations is the retention of these strategic professionals, also known as talent. This work presents an actionable framework for Talent Retention (TR) in IT organizations, based on our findings from interviews conducted with 21 IT managers. The TR framework is our main research outcome; it encompasses a set of factors, contextual characteristics, barriers, strategies, and coping mechanisms. Our findings indicate that software engineers can be differentiated from other professional groups and that, beyond competitive salaries, other elements for retaining talent in IT organizations should be considered, such as psychological safety, work-life balance, a positive work environment, innovative and challenging projects, and flexible work. A better understanding of these factors could guide IT managers in improving talent management processes by addressing software engineering challenges, identifying important elements, and exploring strategies at the individual, team, and organizational levels.

Recent developments enable the quantification of causal control given a structural causal model (SCM). This has been accomplished by introducing quantities which encode changes in the entropy of one variable when intervening on another. These measures, named causal entropy and causal information gain, aim to address limitations in existing information-theoretic approaches for machine learning tasks where causality plays a crucial role. However, they have not yet been properly studied mathematically. Our research contributes to the formal understanding of the notions of causal entropy and causal information gain by establishing and analyzing fundamental properties of these concepts, including bounds and chain rules. Furthermore, we elucidate the relationship between causal entropy and stochastic interventions. We also propose definitions for causal conditional entropy and causal conditional information gain. Overall, this exploration paves the way for enhancing causal machine learning tasks through the study of recently proposed information-theoretic quantities grounded in considerations about causality.
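For orientation, a hedged sketch of the quantities involved, in our own notation (the paper's exact formulation may differ): given a policy $\pi$ over interventions on $X$, causal entropy averages the post-intervention entropy of $Y$, and causal information gain measures the resulting entropy reduction.

```latex
% Sketch of the definitions, under our notational assumptions.
% Causal entropy of Y for interventions on X drawn from a policy \pi:
H_c\bigl(Y \mid \mathrm{do}(X)\bigr)
  = \mathbb{E}_{x \sim \pi}\!\left[ H\bigl(Y \mid \mathrm{do}(X = x)\bigr) \right]
% Causal information gain: the entropy of Y removed by intervening on X:
I_c\bigl(Y;\, \mathrm{do}(X)\bigr)
  = H(Y) - H_c\bigl(Y \mid \mathrm{do}(X)\bigr)
```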

Hackathons and software competitions, increasingly pivotal in the software industry, serve as vital catalysts for innovation and skill development for both organizations and students. These platforms enable companies to prototype ideas swiftly, while students gain enriched learning experiences that enhance their practical skills. Over the years, hackathons have transitioned from mere competitive events to significant educational tools, fusing theoretical knowledge with real-world problem solving. The integration of hackathons into computer science and software engineering curricula aims to align educational proficiencies within a collaborative context, promoting peer connectivity and enriched learning via industry-academia collaborations. However, the infusion of advanced technologies, notably artificial intelligence (AI) and machine learning, into hackathons is revolutionizing their structure and outcomes. This evolution brings forth both opportunities, like enhanced learning experiences, and challenges, such as ethical concerns. This study delves into the impact of generative AI, examining its influence on students' technological choices based on a case study of the University of Iowa 2023 event. The exploration provides insights into AI's role in hackathons and its educational implications, and it offers a roadmap for the integration of such technologies in future events, ensuring innovation is balanced with ethical and educational considerations.

In the exascale era, in which application behavior has a large power and energy footprint, per-application, job-level awareness of this behavior is crucial for achieving efficiency goals beyond performance, such as energy efficiency and sustainability. To achieve these goals, we have developed a novel low-latency job power profiling machine learning pipeline that can group job-level power profiles based on their shapes as they complete. This pipeline leverages a comprehensive feature extraction and clustering pipeline powered by a generative adversarial network (GAN) model to handle the feature-rich time series of job-level power measurements. The output is then used to train a classification model that can predict whether an incoming job power profile is similar to a known group of profiles or is completely new. With extensive evaluations, we demonstrate the effectiveness of each component in our pipeline. We also provide a preliminary analysis of the resulting clusters, which depict the power-profile landscape of the Summit supercomputer from more than 60K jobs sampled from the year 2021.
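A minimal sketch of the "known cluster vs. new profile" idea, with simple hand-rolled shape features and k-means standing in for the paper's GAN-based feature extractor; the features, threshold, and synthetic data are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def features(profile):
    """Crude shape features for a power time series (illustrative only)."""
    return np.array([profile.mean(), profile.std(), profile.max() - profile.min()])

# Synthetic stand-ins for completed job power profiles (watts over time).
profiles = [rng.normal(loc=base, scale=5.0, size=200) for base in (100, 100, 300, 300)]
X = np.stack([features(p) for p in profiles])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def classify(profile, threshold=50.0):
    """Assign a profile to a known cluster, or flag it as new if it is
    too far from every centroid (threshold is an assumed tuning knob)."""
    dists = np.linalg.norm(kmeans.cluster_centers_ - features(profile), axis=1)
    return int(dists.argmin()) if dists.min() < threshold else "new"

print(classify(rng.normal(100, 5.0, 200)))  # matches a known cluster
print(classify(rng.normal(900, 5.0, 200)))  # far from all centroids: "new"
```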

Hardware security is an important facet of system security, as vulnerabilities can arise from design errors introduced throughout the development lifecycle. Recent works have proposed techniques to detect hardware security bugs, such as static analysis, fuzzing, and symbolic execution. However, the fundamental properties of hardware security bugs remain relatively unexplored. To gain a better understanding of hardware security bugs, we perform a deep dive into the popular OpenTitan project, including its bug reports and bug fixes. We manually classify the bugs as relevant to functionality or security and analyze characteristics such as the impact and location of security bugs and the size of their bug fixes. We also investigate relationships between security impact and bug management during development. Finally, we propose an abstract syntax tree (AST)-based analysis to identify the syntactic characteristics of bug fixes. Our results show that 53% of the bugs in OpenTitan have potential security implications and that 55% of all bug fixes modify only one file. Our findings underscore the importance of security-aware development practices and tools, and they motivate techniques that leverage the highly localized nature of hardware bugs.
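To illustrate the idea of AST-based fix characterization, the sketch below compares the syntactic node types of a buggy and a fixed snippet. We use Python's built-in `ast` module purely as a stand-in; OpenTitan itself is hardware description code, so this is an analogy for the technique, not the paper's tooling.

```python
import ast
from collections import Counter

def node_histogram(source):
    """Count AST node types in a code snippet."""
    return Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))

buggy = "def check(x):\n    return x > 0"
fixed = "def check(x):\n    return x >= 0"

before, after = node_histogram(buggy), node_histogram(fixed)
# Node types whose counts changed hint at the syntactic shape of the fix.
changed = {k: (before.get(k, 0), after.get(k, 0))
           for k in set(before) | set(after)
           if before.get(k, 0) != after.get(k, 0)}
print(changed)  # {'Gt': (1, 0), 'GtE': (0, 1)}
```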

Modern software engineering builds on the composability of software components, which rely on more and more direct and transitive dependencies to provide their functionality. This principle of reusability, however, makes it harder to reproduce projects' build environments, even though reproducibility of build environments is essential for collaboration, maintenance, and component lifetime. In this work, we argue that functional package managers provide the tooling to make build environments reproducible in space and time, and we present a preliminary evaluation to justify this claim. Using historical data, we show that we are able to reproduce build environments of about 7 million Nix packages, and to rebuild 99.94% of the 14 thousand packages from a 6-year-old Nixpkgs revision.
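A minimal sketch of one way to check that two rebuilds of the same package produce bit-for-bit identical outputs, by hashing every file in each output tree in a stable order; the paths and hashing convention are our assumptions, not the paper's methodology.

```python
import hashlib
from pathlib import Path

def tree_digest(root):
    """Hash all files under a build output tree, in a stable order."""
    root = Path(root)
    h = hashlib.sha256()
    for path in sorted(root.rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()

# Hypothetical output paths of two independent rebuilds of one package.
first, second = "result-build1", "result-build2"
print("reproducible" if tree_digest(first) == tree_digest(second) else "differs")
```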

The objective of this research is the development of a practical system to manipulate and validate software package specifications. The validation process developed is based on consistency checks. Furthermore, by means of scenarios, the customer will be able to interactively experience the specified system prior to its implementation. Functions, data, and data types constitute the framework of our validation system. The specification of the Graphical Kernel System (GKS) is a typical example of the target software package specifications to be manipulated.
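As a toy illustration of consistency checking over a specification built from functions, data, and data types, the sketch below verifies that every function's parameter and result types are declared; the specification encoding and all names in it are invented for the example.

```python
# A toy specification: declared data types plus function signatures.
spec = {
    "types": {"Point", "Color", "WindowId"},
    "functions": {
        "open_window": (["Point", "Point"], "WindowId"),
        "set_color":   (["WindowId", "Colour"], "WindowId"),  # "Colour" is undeclared
    },
}

def undeclared_types(spec):
    """Report every function referencing a type missing from the declarations."""
    problems = []
    for name, (params, result) in spec["functions"].items():
        for t in params + [result]:
            if t not in spec["types"]:
                problems.append((name, t))
    return problems

print(undeclared_types(spec))  # [('set_color', 'Colour')]
```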

The primary motivation for this work was the need to implement hardware accelerators for a newly proposed ANN structure called the Auto Resonance Network (ARN) for robotic motion planning. ARN is an approximating, feed-forward, hierarchical, and explainable network. It can be used in various AI applications, but its application base was small. Therefore, the objective of the research was twofold: to develop a new application using ARN and to implement a hardware accelerator for ARN. As per the suggestions of the Doctoral Committee, an image recognition system using ARN was implemented. An accuracy of around 94% was achieved with only two layers of ARN. The network also required a small training set of about 500 images. The publicly available MNIST dataset was used for this experiment. All coding was done in Python. The massive parallelism seen in ANNs presents several challenges to CPU design. For a given functionality, e.g., multiplication, several copies of a serial module can be realized within the same area as one parallel module. The advantage of using serial modules over parallel modules under area constraints is discussed. One module often useful in ANNs is multi-operand addition. One problem in its implementation is estimating the number of carry bits required when the number of operands changes. A theorem to calculate the exact number of carry bits required for a multi-operand addition is presented in the thesis, which alleviates this problem. The main advantage of the modular approach to multi-operand addition is the possibility of pipelined addition with low reconfiguration overhead. This results in an overall increase in throughput for the large numbers of additions typically seen in several DNN configurations.
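The thesis' exact theorem is not reproduced here; as a hedged illustration, the widely known bound is that summing k unsigned n-bit operands requires at most ceil(log2(k)) extra result bits, since the sum is strictly below k·2^n. The sketch below checks this bound on small cases (the exhaustive check and its parameters are our own).

```python
import math

def extra_bits(k):
    """Extra result bits needed to add k unsigned n-bit operands:
    the sum is < k * 2**n <= 2**(n + ceil(log2(k)))."""
    return math.ceil(math.log2(k))

# Verify the bound at the worst case (all operands at maximum value).
n = 4  # operand width in bits
for k in (2, 3, 5, 8):
    worst = k * (2**n - 1)
    assert worst < 2 ** (n + extra_bits(k))
    print(f"k={k}: {extra_bits(k)} extra bit(s) suffice")
```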
