
Bug reports are vital for software maintenance: they allow users to inform developers of problems encountered while using software. However, it is difficult for non-technical users to write clear descriptions of how a bug occurred. As a result, more and more users record their screen when reporting bugs, since a recording is easy to create and captures the detailed steps that trigger the bug. Yet it remains tedious and time-consuming for developers to reproduce a bug from a recording, owing to its length and the ambiguity of the actions it shows. To overcome these issues, we propose GIFdroid, a lightweight approach to automatically replay the execution trace in visual bug reports. GIFdroid adopts image-processing techniques to extract keyframes from the recording, maps them to states in a GUI transition graph, and generates the execution trace over those states to trigger the bug. Our automated experiments and user study demonstrate the accuracy, efficiency, and usefulness of the approach.
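As an illustration of the keyframe-extraction step (a sketch only; GIFdroid's actual pipeline is more involved, and the threshold values here are assumptions chosen for illustration), consecutive-frame differencing can flag the frames at which the recorded GUI changes:

```python
# Minimal sketch of keyframe extraction from a screen recording via
# frame differencing (illustrative only; not GIFdroid's actual pipeline).
import cv2
import numpy as np

def extract_keyframes(video_path, diff_threshold=0.02):
    """Return frames at which the recorded GUI changes noticeably."""
    cap = cv2.VideoCapture(video_path)
    keyframes, prev_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Fraction of pixels whose intensity changed between frames.
            diff = np.mean(cv2.absdiff(gray, prev_gray) > 25)
            if diff > diff_threshold:
                keyframes.append(frame)
        prev_gray = gray
    cap.release()
    return keyframes
```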

Related Content

Automator is an application developed by Apple for their Mac OS X system. Through simple point-and-click and drag-and-drop operations, a series of actions can be combined into a workflow, helping you automate complex, repetitive tasks. Automator can also work across many different kinds of applications, including Finder, the Safari web browser, iCal, Address Book, and others. It also works with third-party applications such as Microsoft Office, Adobe Photoshop, and Pixelmator.

The concept of traceability between artifacts is considered an enabler of software project success. This concept has received plenty of attention from the research community and is perceived by many to always be available in an industrial setting. In this industry-academia collaborative project, a team of researchers, supported by testing practitioners from a large telecommunication company, sought to investigate the partner company's issues related to software quality. However, it was soon identified that the fundamental traceability links between requirements and test cases were missing. This lack of traceability impeded the implementation of a solution to help the company deal with its quality issues. In this experience report, we discuss lessons learned about the practical value of creating and maintaining traceability links in complex industrial settings and provide a cautionary tale for researchers.

In longitudinal studies, it is common for the response and covariates not to be measured at the same time, which complicates the analysis considerably. In this paper, we consider estimation of the generalized varying coefficient model under such asynchronous observations. A penalized kernel-weighted estimating equation is constructed using kernel techniques within the framework of functional data analysis. Moreover, local sparsity is incorporated into the estimating equation to improve the interpretability of the estimate. We extend the iteratively reweighted least squares (IRLS) algorithm for the computation. Theoretical properties are established in terms of both consistency and sparsistency, and simulation studies further verify the satisfactory performance of our method compared with existing approaches. The method is applied to an AIDS study to demonstrate its practical merits.
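For concreteness, a kernel-weighted estimating equation of the kind described might take the following form (an illustrative reconstruction, not the paper's exact equation; the penalty term and weighting may differ):

```latex
% Responses Y_i observed at times t_{ij}, covariates X_i at times s_{ik},
% link function g, bandwidth-h kernel K_h, sparsity penalty p_lambda.
\sum_{i=1}^{n} \sum_{j=1}^{L_i} \sum_{k=1}^{M_i}
  K_h\!\left(t_{ij} - s_{ik}\right) X_i(s_{ik})
  \left\{ Y_i(t_{ij}) - g^{-1}\!\left( X_i(s_{ik})^{\top} \beta(t_{ij}) \right) \right\}
  - n\, p_{\lambda}'(\beta) = \mathbf{0}
```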

We present a video generation model that accurately reproduces object motion, changes in camera viewpoint, and new content that arises over time. Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence. A common failure case is for content to never change due to over-reliance on inductive biases to provide temporal consistency, such as a single latent code that dictates content for the entire video. On the other extreme, without long-term consistency, generated videos may morph unrealistically between different scenes. To address these limitations, we prioritize the time axis by redesigning the temporal latent representation and learning long-term consistency from data by training on longer videos. To this end, we leverage a two-phase training strategy, where we separately train using longer videos at a low resolution and shorter videos at a high resolution. To evaluate the capabilities of our model, we introduce two new benchmark datasets with explicit focus on long-term temporal dynamics.
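A schematic of the two-phase schedule described above (the model, data loaders, and loss are hypothetical placeholders, not the paper's implementation):

```python
# Sketch of two-phase training: long low-resolution clips first, then
# short high-resolution clips. All names here are illustrative.
def train_two_phase(model, long_lowres_loader, short_highres_loader, opt, steps):
    # Phase 1: long clips at low resolution teach long-term dynamics.
    for _, batch in zip(range(steps), long_lowres_loader):
        loss = model.loss(batch)  # e.g. an adversarial or reconstruction loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Phase 2: short clips at high resolution refine per-frame detail.
    for _, batch in zip(range(steps), short_highres_loader):
        loss = model.loss(batch)
        opt.zero_grad()
        loss.backward()
        opt.step()
```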

A single-step high-order implicit time integration scheme with controllable numerical dissipation at high frequency is presented for the transient analysis of structural dynamic problems. The amount of numerical dissipation is controlled by a user-specified value of the spectral radius $\rho_\infty$ in the high frequency limit. Using this user-specified parameter as a weight factor, a Padé expansion of the matrix exponential solution of the equation of motion is constructed by mixing the diagonal and sub-diagonal expansions. An efficient time stepping scheme is designed where systems of equations similar in complexity to the standard Newmark method are solved recursively. It is shown that the proposed high-order scheme achieves high-frequency dissipation while minimizing low-frequency dissipation and period errors. The effectiveness of dissipation control and efficiency of the scheme are demonstrated with numerical examples. A simple recommendation on the choice of the controlling parameter and time step size is provided. The source code written in MATLAB and FORTRAN is available for download at: https://github.com/ChongminSong/HighOrderTimeIntegration.
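As a low-order illustration of how a single parameter can control high-frequency dissipation (this is the classical generalized trapezoidal rule, not the paper's high-order Padé-based scheme), choosing $\theta = 1/(1+\rho_\infty)$ makes the amplification factor tend to $\rho_\infty$ as $\omega h \to \infty$:

```python
# Generalized trapezoidal rule for the first-order form z' = A z.
# With theta = 1/(1 + rho_inf), |amplification| -> rho_inf at high frequency.
import numpy as np

def step(z, A, h, rho_inf):
    theta = 1.0 / (1.0 + rho_inf)
    lhs = np.eye(len(z)) - h * theta * A
    rhs = z + h * (1.0 - theta) * (A @ z)
    return np.linalg.solve(lhs, rhs)

# Undamped oscillator x'' + w^2 x = 0 in first-order form.
w = 10.0
A = np.array([[0.0, 1.0], [-w * w, 0.0]])
z = np.array([1.0, 0.0])
for _ in range(100):
    z = step(z, A, h=1.0, rho_inf=0.0)  # large step: high frequency is damped
print(z)  # amplitude decays toward zero when rho_inf < 1
```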

Much of the literature on optimal design of bandit algorithms is based on minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically, that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$'th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense, namely when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas, and indicate that the most likely way that regret becomes larger than expected is when the optimal arm returns below-average rewards in the first few arm plays, thereby causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
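For reference, the standard UCB1 rule (a common simplification of the asymptotically optimal designs the paper analyzes, not the modified robust variant it proposes) plays the arm with the largest upper confidence bound:

```python
# Standard UCB1: play each arm once, then maximize mean + exploration bonus.
import math
import random

def ucb1(arms, horizon):
    """arms: list of no-arg callables returning rewards in [0, 1]."""
    k = len(arms)
    counts, means = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:                      # initialization: play each arm once
            a = t - 1
        else:                           # pick the largest upper confidence bound
            a = max(range(k), key=lambda i:
                    means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = arms[a]()
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # running mean update
    return counts, means

counts, means = ucb1([lambda: random.random() < 0.5,
                      lambda: random.random() < 0.6], horizon=10_000)
```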

Fuzzing has proven to be a fundamental technique for automated software testing, but also a costly one. With the increased adoption of CI/CD practices in software development, a natural question to ask is 'What are the best ways to integrate fuzzing into CI/CD pipelines, considering the velocity of code changes and the automated delivery/deployment practices?'. Indeed, a recent study by Böhme and Zhu shows that four in every five bugs have been introduced by recent code changes (i.e., regressions). In this paper, we take a close look at the integration of fuzzers into CI/CD pipelines from both the automated software testing and continuous development angles. First, we study an optimization opportunity to triage commits that do not require fuzzing and find, through experimental analysis, that the average fuzzing effort in CI/CD can be reduced by ~63% in three of the nine libraries we analyzed (>40% for six libraries). Second, we investigate the impact of fuzzing campaign duration on the CI/CD process: a shorter fuzzing campaign, such as 15 minutes (as opposed to the conventional wisdom of 24 hours in the field), facilitates a faster pipeline and can still uncover important bugs, but may also reduce the capability to detect sophisticated bugs. Lastly, we discuss a prioritization strategy that automatically assigns resources to fuzzing campaigns based on a set of predefined priority strategies. Our findings suggest that continuous fuzzing (as part of the automated testing in CI/CD) is indeed beneficial, and there are many optimization opportunities to improve the effectiveness and scalability of fuzz testing.
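A hypothetical sketch of the commit-triage idea (the reachability data and its provenance are assumptions; a real system would derive them from static analysis or coverage maps, and the exact criterion in the paper may differ):

```python
# Skip a fuzzing campaign when none of a commit's changed files is
# reachable from any fuzz target. Reachability is a precomputed dict here.
def needs_fuzzing(changed_files, reachable_files_by_target):
    return any(f in reachable
               for reachable in reachable_files_by_target.values()
               for f in changed_files)

# Usage with made-up data:
reach = {"parse_fuzzer": {"src/parser.c", "src/lexer.c"}}
print(needs_fuzzing({"docs/README.md"}, reach))  # False -> skip campaign
print(needs_fuzzing({"src/lexer.c"}, reach))     # True  -> run campaign
```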

Democratization of AI involves training and deploying machine learning models across heterogeneous and potentially massive environments. Diversity of data opens up a number of possibilities to advance AI systems, but also introduces pressing concerns such as privacy, security, and equity that require special attention. This work shows that it is theoretically impossible to design a rational learning algorithm that has the ability to successfully learn across heterogeneous environments, which we call collective intelligence (CI). By representing learning algorithms as choice correspondences over a hypothesis space, we are able to axiomatize them with essential properties. Unfortunately, the only feasible algorithm compatible with all of the axioms is standard empirical risk minimization (ERM), which learns arbitrarily from a single environment. Our impossibility result reveals informational incomparability between environments as one of the foremost obstacles for researchers who design novel algorithms that learn from multiple environments, which sheds light on prerequisites for success in critical areas of machine learning such as out-of-distribution generalization, federated learning, algorithmic fairness, and multi-modal learning.
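For reference, the ERM rule singled out by the impossibility result is the standard one (notation here is generic):

```latex
% Empirical risk minimization over a hypothesis space H with loss l,
% given n samples (x_i, y_i) from a single environment.
\hat{h} \in \operatorname*{arg\,min}_{h \in \mathcal{H}}
  \frac{1}{n} \sum_{i=1}^{n} \ell\big(h(x_i), y_i\big)
```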

The IS discourse on the potential of distributed ledger technology (DLT) in financial services has grown at a tremendous pace in recent years. Yet, little has been said about the related implications for the costly and highly regulated process of compliance reporting. Working with a group of representatives from industry and regulatory authorities, we employ the design science research (DSR) methodology in the design, development, and evaluation of an artefact that enables the automated collection and enrichment of transactional data. Our findings indicate that DLT may facilitate the automation of key compliance processes through the implementation of a "pull-model", in which regulators can access compliance data in near real-time to gauge aggregate exposures at the supranational level. Generalizing our preliminary results, we present four propositions on the implications of DLT for compliance. The findings contribute new practical insights on the topic of compliance to the growing IS discourse on DLT.
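A minimal, purely illustrative sketch of the "pull-model" (the ledger API and field names below are invented; the paper's artefact is not specified here):

```python
# Instead of firms pushing periodic reports, the regulator queries a
# shared ledger for enriched transaction records on demand.
def aggregate_exposure(ledger, institution_ids, asset_class):
    total = 0.0
    for tx in ledger.query(institutions=institution_ids, asset=asset_class):
        total += tx.notional  # enriched transactional record (hypothetical)
    return total
```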

The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well as, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
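As one concrete instance of the sparsification approaches such a survey covers (a minimal sketch; many criteria and schedules exist), global magnitude pruning zeroes out the smallest-magnitude weights across all layers:

```python
# Global magnitude pruning: remove the smallest fraction of weights by
# absolute value, using a single threshold shared across layers.
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction `sparsity` of all weights."""
    flat = np.concatenate([w.ravel() for w in weights])
    threshold = np.quantile(np.abs(flat), sparsity)
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

layers = [np.random.randn(64, 64), np.random.randn(64, 10)]
pruned = magnitude_prune(layers, sparsity=0.9)
print([float((w == 0).mean()) for w in pruned])  # per-layer sparsity, ~0.9 overall
```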

Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers while disregarding explanations. We argue that the explanation for an answer is as important as, or even more important than, the answer itself, since it makes the question-answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the computational models are required to generate an explanation along with the predicted answer. We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of explanations synthesized by our method. We quantitatively show that the additional supervision from explanations can not only produce insightful textual sentences to justify the answers, but also improve the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.
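A sketch of the multi-task objective implied above (the branch structure and the trade-off weight `alpha` are assumptions, not the paper's exact formulation): one head classifies the answer while another generates the explanation, and the two losses are summed.

```python
# Joint loss for VQA-E-style multi-task learning (illustrative).
import torch
import torch.nn.functional as F

def vqa_e_loss(answer_logits, answer_labels, expl_logits, expl_tokens, alpha=1.0):
    # Answer branch: standard classification over candidate answers.
    ans_loss = F.cross_entropy(answer_logits, answer_labels)
    # Explanation branch: per-token cross-entropy over the vocabulary.
    expl_loss = F.cross_entropy(expl_logits.reshape(-1, expl_logits.size(-1)),
                                expl_tokens.reshape(-1))
    return ans_loss + alpha * expl_loss
```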
