
Developer turnover is inevitable on software projects and leads to knowledge loss, a reduction in productivity, and an increase in defects. Mitigation strategies to deal with turnover tend to disrupt and increase workloads for developers. In this work, we suggest that through code review recommendation we can distribute knowledge and mitigate turnover while more evenly distributing review workload. We conduct historical analyses to understand the natural concentration of review workload and the degree of knowledge spreading that is inherent in code review. Even though review workload is highly concentrated, we show that code review naturally spreads knowledge, thereby reducing the files at risk to turnover. Using simulation, we evaluate existing code review recommenders and develop novel recommenders to understand their impact on the level of expertise during review, the workload of reviewers, and the files at risk to turnover. Our simulations use seeded random replacement of reviewers, allowing us to compare the reviewer recommenders without the confounding variation of different reviewers being replaced for each recommender. Combining recommenders, we develop the SofiaWL recommender, which suggests experts with low active review workload when none of the files under review is known by only one developer. In contrast, when knowledge of a file is concentrated in one developer, it sends the review to other reviewers to spread knowledge. For the projects we study, we are able to globally increase expertise during reviews by 3%, reduce workload concentration by 12%, and reduce the files at risk by 28%. We make our scripts and data available in our replication package. Developers can optimize for a particular outcome measure based on the needs of their project, or use our GitHub bot to automatically balance the outcomes.
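
To make the combined policy concrete, here is a minimal sketch of the decision rule described above. The function and data-structure names (`recommend_reviewer`, `knowledge`, `active_reviews`) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the SofiaWL decision rule; names are illustrative.

def recommend_reviewer(changed_files, candidates, knowledge, active_reviews):
    """Pick a reviewer for a change.

    knowledge: dict mapping file -> set of developers who know it
    active_reviews: dict mapping developer -> number of open reviews
    """
    # Files known by a single developer are at risk if that developer leaves.
    hoarded = [f for f in changed_files if len(knowledge.get(f, set())) == 1]

    if hoarded:
        # Spread knowledge: prefer candidates who do NOT already know
        # the at-risk files, so the review teaches them the code.
        owners = {next(iter(knowledge[f])) for f in hoarded}
        learners = [c for c in candidates if c not in owners]
        pool = learners or candidates
    else:
        # No file is at risk: prefer experts, i.e. candidates who
        # already know the most files under review.
        expertise = {c: sum(c in knowledge.get(f, set()) for f in changed_files)
                     for c in candidates}
        best = max(expertise.values())
        pool = [c for c in candidates if expertise[c] == best]

    # Among the shortlisted candidates, balance workload by choosing
    # the one with the fewest active reviews.
    return min(pool, key=lambda c: active_reviews.get(c, 0))
```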

Related Content

Knowledge: the understanding, judgment, or skills acquired through learning, practice, or exploration.

In research on manufacturing systems and autonomous robots, the term capability is used for a machine-interpretable specification of a system function. Approaches in this research area develop information models that capture all information relevant to interpreting the requirements, effects, and behavior of functions. These approaches are intended to overcome the heterogeneity resulting from the various types of processes and from the large number of different vendors. However, these models and associated methods do not offer solutions for automated process planning, i.e., finding a sequence of individual capabilities required to manufacture a certain product or to accomplish a mission using autonomous robots. Instead, this is a typical task for AI planning approaches, which unfortunately require considerable effort to create the respective planning problem descriptions. In this paper, we present an approach that combines these two topics: starting from a semantic capability model, an AI planning problem is automatically generated. The planning problem is encoded using Satisfiability Modulo Theories, and an existing solver is used to find valid capability sequences, including required parameter values. The approach also offers possibilities to integrate existing human expertise and to provide explanations for human operators in order to help them understand planning decisions.
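
As an illustration of the encoding step, here is a toy fixed-horizon capability-sequencing problem expressed as an SMT problem. The abstract does not name a specific solver; Z3 and its Python bindings stand in here, and the two capabilities (drill, coat) are invented for the example.

```python
# Toy capability-sequencing problem as SMT, with Z3 as a stand-in solver.
from z3 import Bool, Solver, Or, And, Not, Implies, sat, is_true

HORIZON = 3
# State predicates over time: the part starts raw, must end drilled and coated.
drilled = [Bool(f"drilled_{t}") for t in range(HORIZON + 1)]
coated = [Bool(f"coated_{t}") for t in range(HORIZON + 1)]
# Capability choice per step.
do_drill = [Bool(f"do_drill_{t}") for t in range(HORIZON)]
do_coat = [Bool(f"do_coat_{t}") for t in range(HORIZON)]

s = Solver()
s.add(Not(drilled[0]), Not(coated[0]))            # initial state
for t in range(HORIZON):
    s.add(Or(do_drill[t], do_coat[t]))            # pick a capability...
    s.add(Not(And(do_drill[t], do_coat[t])))      # ...exactly one per step
    s.add(Implies(do_coat[t], drilled[t]))        # precondition: coat after drilling
    # Effects plus frame axioms: state persists unless changed.
    s.add(drilled[t + 1] == Or(drilled[t], do_drill[t]))
    s.add(coated[t + 1] == Or(coated[t], do_coat[t]))
s.add(drilled[HORIZON], coated[HORIZON])          # goal
if s.check() == sat:
    m = s.model()
    plan = ["drill" if is_true(m.eval(do_drill[t], model_completion=True))
            else "coat" for t in range(HORIZON)]
    print(plan)   # e.g. ['drill', 'coat', 'coat']
```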

Interdisciplinary research has emerged as a hotbed for innovation and a key approach to addressing complex societal challenges. The increasing dominance of grant-supported research in shaping scientific advances, coupled with growing interest in funding interdisciplinary work, raises fundamental questions about the effectiveness of interdisciplinary grants in fostering high-impact interdisciplinary research outcomes. Here, we quantify the interdisciplinarity of both research grants and publications, capturing 350,000 grants from 164 funding agencies across 26 countries and 1.3 million papers that acknowledged their support from 1985 to 2009. Our analysis uncovers two seemingly contradictory patterns: Interdisciplinary grants tend to produce interdisciplinary papers, which are generally associated with high impact. However, compared to disciplinary grants, interdisciplinary grants on average yield fewer papers and interdisciplinary papers they support tend to have substantially reduced impact. We demonstrate that the key to explaining this paradox lies in the power of disciplinary grants in propelling high-impact interdisciplinary research. Specifically, our results show that highly interdisciplinary papers supported by deeply disciplinary grants garner disproportionately more citations, both within their core disciplines and from broader fields. Moreover, disciplinary grants, particularly when combined with other similar grants, are more effective in producing high-impact interdisciplinary research. Amidst the rapid rise of support for interdisciplinary work across the sciences, these results highlight the hitherto unknown role of disciplinary grants in driving crucial interdisciplinary advances, suggesting that interdisciplinary research requires deep disciplinary expertise and investments.
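
The abstract does not name the interdisciplinarity measure it uses; a common choice in this literature is Rao-Stirling diversity, sketched below as a worked example. The discipline shares and distances are invented for illustration.

```python
# Rao-Stirling diversity: D = sum_{i != j} p_i * p_j * d_ij, where p_i is
# the share of a paper's (or grant's) references in discipline i and d_ij
# is a dissimilarity between disciplines i and j.

def rao_stirling(shares, dissimilarity):
    """shares: dict discipline -> proportion (sums to 1).
    dissimilarity: dict (i, j) -> d_ij in [0, 1], symmetric."""
    disciplines = list(shares)
    d = 0.0
    for i in disciplines:
        for j in disciplines:
            if i != j:
                d += shares[i] * shares[j] * dissimilarity[(i, j)]
    return d

# A grant citing three fields, where biology and physics are closer
# to each other than either is to sociology:
shares = {"bio": 0.4, "phys": 0.4, "soc": 0.2}
dis = {("bio", "phys"): 0.3, ("phys", "bio"): 0.3,
       ("bio", "soc"): 0.8, ("soc", "bio"): 0.8,
       ("phys", "soc"): 0.9, ("soc", "phys"): 0.9}
print(rao_stirling(shares, dis))  # higher = more interdisciplinary
```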

Learnable embedding vectors are one of the most important techniques in machine learning and are widely used in various database-related domains. However, the high dimensionality of sparse data in recommendation tasks and the huge volume of corpora in retrieval-related tasks lead to a large memory footprint for the embedding table, which poses a great challenge to the training and deployment of models. Recent research has proposed various methods to compress embeddings at the cost of a slight decrease in model quality or the introduction of other overheads. Nevertheless, the relative performance of these methods remains unclear. Existing experimental comparisons cover only a subset of these methods and focus on limited metrics. In this paper, we perform a comprehensive comparative analysis and experimental evaluation of embedding compression. We introduce a new taxonomy that categorizes these techniques based on their characteristics and methodologies, and further develop a modular benchmarking framework that integrates 14 representative methods. Under a uniform test environment, our benchmark fairly evaluates each approach, presents their strengths and weaknesses under different memory budgets, and recommends the best method for a given use case. In addition to providing useful guidelines, our study also uncovers the limitations of current methods and suggests potential directions for future research.
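
As a flavor of the techniques such a benchmark integrates, below is a sketch of one well-known compression method, the quotient-remainder trick, chosen purely as a representative example; the table sizes and the elementwise-product combine operator are illustrative.

```python
# Quotient-remainder trick: replace one |V| x d embedding table with two
# much smaller tables indexed by id // m and id % m.
import numpy as np

V, d, m = 1_000_000, 16, 1000          # vocab size, dim, bucket count
q_table = np.random.randn(V // m + 1, d).astype(np.float32)
r_table = np.random.randn(m, d).astype(np.float32)

def embed(ids):
    """Memory: O((V/m + m) * d) instead of O(V * d); each id still gets
    a distinct vector because the pair (quotient, remainder) is unique."""
    ids = np.asarray(ids)
    return q_table[ids // m] * r_table[ids % m]

print(embed([3, 42, 999_999]).shape)   # (3, 16)
```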

Correctness properties are critical to conducting verification and validation on software systems, especially those cyberphysical systems whose functionality changes frequently due to software updates, changes in the operating environment, or newly learned behaviors. We detail a novel method to automatically construct expressive, executable correctness properties in the form of machine-learned correctness properties which can be used to ensure that a system's behavior is correct with respect to its design and operating requirements. We propose a method to bootstrap the creation of these correctness properties using a novel simulation-based generation of training and testing data using multiple extensions to the Cross Entropy algorithm for search-based optimization. Then, we apply this method to a software-in-the-loop evaluation of an autonomous vehicle to demonstrate that such models can make assertions about important properties of multi-agent cyberphysical systems. We demonstrate that this process brings the task of developing robust correctness properties from the realm of formal methods experts into the domain of system developers and engineers, and that machine-learned correctness properties are expressive enough to capture the correct behavior of cyberphysical systems in their complex environments. This advancement can provide evidence of dependability to system designers and users, enhancing trust in the deployment of autonomous vehicles and other intelligent transportation systems.
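
For reference, the baseline Cross-Entropy method that the paper extends works roughly as follows: sample candidates from a parametric distribution, keep an elite fraction under the objective, and refit the distribution to the elites. The objective below is a toy stand-in, not the paper's scenario-severity score.

```python
# Baseline Cross-Entropy method for search-based optimization.
import numpy as np

def cross_entropy_search(score, dim, iters=30, pop=100, elite_frac=0.1):
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        # Sample a population from the current Gaussian.
        samples = np.random.randn(pop, dim) * sigma + mu
        scores = np.array([score(x) for x in samples])
        # Keep the highest-scoring fraction and refit the distribution.
        elites = samples[np.argsort(scores)[-n_elite:]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu

# Toy objective standing in for "severity of a near-violation".
best = cross_entropy_search(lambda x: -np.sum((x - 2.0) ** 2), dim=3)
print(best)  # converges near [2, 2, 2]
```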

Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.

Prior work has studied the computational complexity of computing optimal strategies to commit to in Stackelberg or leadership games, where a leader commits to a strategy which is observed by one or more followers. We extend this setting to one where the leader can additionally commit to outcome-conditional utility transfers. We characterize the computational complexity of finding optimal strategies in normal-form and Bayesian games, giving a mix of efficient algorithms and NP-hardness results. Finally, we allow the leader to also commit to a signaling scheme which induces a correlated equilibrium. In this setting, optimal commitments can be found in polynomial time for arbitrarily many players.
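
For the base case without transfers, the classic construction from that prior work (one LP per follower best response) can be sketched as follows; the payoff matrices are invented, and the paper's transfer and signaling extensions would add further variables to these programs.

```python
# Optimal commitment in a normal-form Stackelberg game, one follower:
# for each follower action j, maximize leader payoff subject to j being
# a follower best response; keep the best j over all LPs.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 4.0], [1.0, 3.0]])   # leader payoffs (rows: leader actions)
B = np.array([[1.0, 0.0], [0.0, 2.0]])   # follower payoffs

best_val, best_x = -np.inf, None
n, m = A.shape
for j in range(m):
    # Incentive constraints: follower weakly prefers j to every other j'.
    A_ub = np.stack([B[:, jp] - B[:, j] for jp in range(m) if jp != j])
    res = linprog(c=-A[:, j],                 # linprog minimizes, so negate
                  A_ub=A_ub, b_ub=np.zeros(len(A_ub)),
                  A_eq=np.ones((1, n)), b_eq=[1.0],   # x is a distribution
                  bounds=[(0, 1)] * n)
    if res.success and -res.fun > best_val:
        best_val, best_x = -res.fun, res.x

print(best_val, best_x)   # optimal leader value and mixed strategy
```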

Writing declarative models has numerous benefits, ranging from automated reasoning and correction of design-level properties before systems are built, to automated testing and debugging of their implementations after they are built. Alloy is a declarative modeling language that is well-suited for verifying system designs. A key strength of Alloy is its scenario-finding toolset, the Analyzer, which allows users to explore all valid scenarios that adhere to the model's constraints up to a user-provided scope. However, even with visualized scenarios, it is difficult to write correct Alloy models. To address this, a growing body of work explores different techniques for debugging Alloy models. In order to develop and evaluate these techniques in an effective manner, this paper presents an empirical study of over 97,000 models written by novice users trying to learn Alloy. We investigate how users write both correct and incorrect models in order to produce a comprehensive benchmark for future use as well as a series of observations to guide debugging and educational efforts for Alloy model development.

Face recognition technology has advanced significantly in recent years due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications. These applications require lower resolutions, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.

Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application in embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while maintaining accuracy. Second, we further propose reverse attention to guide this side-output residual learning in a top-down manner. By erasing the currently predicted salient regions from the side-output features, the network can eventually explore the missing object parts and details, which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in terms of simplicity, efficiency (45 FPS), and model size (81 MB).
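
A minimal PyTorch sketch of the reverse-attention plus residual-refinement step described above; the layer sizes and the small conv head are illustrative, not the paper's exact architecture.

```python
# Reverse attention: invert the deeper prediction (1 - sigmoid) so the
# gate highlights regions NOT yet predicted salient, then learn a
# residual from the gated side-output features to refine the prediction.
import torch
import torch.nn.functional as F
from torch import nn

class ReverseAttentionRefine(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.residual_head = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, side_feat, coarse_pred):
        # Upsample the coarse prediction to the side-output resolution.
        pred = F.interpolate(coarse_pred, size=side_feat.shape[2:],
                             mode="bilinear", align_corners=False)
        reverse = 1.0 - torch.sigmoid(pred)       # erase found regions
        gated = side_feat * reverse               # attend to the rest
        return pred + self.residual_head(gated)   # residual refinement

x = torch.randn(1, 256, 44, 44)                   # side-output features
coarse = torch.randn(1, 1, 11, 11)                # deeper-stage prediction
print(ReverseAttentionRefine(256)(x, coarse).shape)  # (1, 1, 44, 44)
```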

Object detection is considered one of the most challenging problems in computer vision, since it requires correctly predicting both the classes and the locations of objects in images. In this study, we define a more difficult scenario, namely zero-shot object detection (ZSD), where no visual training data is available for some of the target object classes. We present a novel approach to tackle this ZSD problem, in which a convex combination of embeddings is used in conjunction with a detection framework. For the evaluation of ZSD methods, we propose a simple dataset constructed from Fashion-MNIST images as well as a custom zero-shot split for the Pascal VOC detection challenge. The experimental results suggest that our method yields promising results for ZSD.
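
In the spirit of the convex-combination idea (as in ConSE-style zero-shot classification, adapted here to a single detected region), a sketch: scores over seen classes weight the seen-class word embeddings into a predicted semantic vector, which is then matched to the nearest unseen-class embedding. All embeddings and scores below are random placeholders.

```python
# Convex combination of class embeddings for zero-shot recognition.
import numpy as np

rng = np.random.default_rng(0)
seen_emb = rng.standard_normal((5, 50))     # 5 seen classes, 50-d word vectors
unseen_emb = rng.standard_normal((3, 50))   # 3 unseen classes

def zsd_classify(seen_scores):
    """seen_scores: detector scores over seen classes for one region."""
    p = seen_scores / seen_scores.sum()          # convex weights
    region_vec = p @ seen_emb                    # convex combination
    sims = (unseen_emb @ region_vec) / (
        np.linalg.norm(unseen_emb, axis=1) * np.linalg.norm(region_vec))
    return int(np.argmax(sims))                  # nearest unseen class

print(zsd_classify(rng.random(5)))
```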
