
We report on intermediate results of what is, to the best of our knowledge, the first study of completeness thresholds for (partially) bounded memory safety proofs. Specifically, we consider heap-manipulating programs that iterate over arrays without allocating or freeing memory. In this setting, we present the first notion of completeness thresholds for program verification, which reduce unbounded memory safety proofs to (partially) bounded ones. Moreover, we demonstrate that we can characterise completeness thresholds for simple classes of array-traversing programs. Finally, we suggest avenues of research to scale this technique theoretically, i.e., to larger classes of programs (heap manipulation, tree-like data structures), and practically, by highlighting automation opportunities.
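To make the idea concrete, here is a minimal Python sketch, with an invented instrumented array and example program rather than the paper's formalism: a bounded check tests memory safety for every array size up to a candidate threshold, and the completeness-threshold claim is that, for suitable program classes, passing this bounded check already implies unbounded safety.

```python
# Minimal sketch, assuming a toy setting: an instrumented array records
# out-of-bounds accesses, and we check every array size up to a bound.
class CheckedArray:
    def __init__(self, size):
        self.size = size
        self.safe = True

    def read(self, i):
        if not (0 <= i < self.size):
            self.safe = False   # out-of-bounds access detected
        return 0                # dummy value; we only track safety

def traverse(arr):
    # Example array-traversing program: no allocation, no freeing.
    for i in range(arr.size):
        arr.read(i)

def bounded_safety_check(program, threshold):
    """Check memory safety for every array size up to the threshold."""
    for n in range(threshold + 1):
        arr = CheckedArray(n)
        program(arr)
        if not arr.safe:
            return False
    return True

# If 3 were a completeness threshold for this class of programs, this
# bounded result would already imply unbounded memory safety.
print(bounded_safety_check(traverse, threshold=3))  # True
```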

Related Content

Automatic blood vessel segmentation from retinal images plays an important role in the diagnosis of many systemic and eye diseases, including retinopathy of prematurity. Current state-of-the-art research in blood vessel segmentation from retinal images is based on convolutional neural networks. The solutions proposed so far are trained and tested on images from a few available retinal blood vessel segmentation datasets, which might limit their performance on images showing signs of retinopathy of prematurity. In this paper, we evaluate three high-performing convolutional neural networks for retinal blood vessel segmentation on retinopathy of prematurity retinal images. The main motivation behind the study is to test whether existing public datasets suffice to develop a high-performing predictor that could assist an ophthalmologist in retinopathy of prematurity diagnosis. To do so, we create a dataset consisting solely of retinopathy of prematurity images, with retinal blood vessels manually annotated by two observers, one of whom is an ophthalmologist experienced in retinopathy of prematurity treatment. Experimental results show that all three solutions have difficulty detecting the retinal blood vessels of infants, owing to the lower contrast of these images compared to those in public datasets, as demonstrated by a significant drop in classification sensitivity. In addition to retinal vessels, all three solutions also segment choroidal blood vessels, which are not used to diagnose retinopathy of prematurity and instead constitute noise that is confused with retinal vessels. Based on visual and numerical observations, we conclude that existing solutions for retinal blood vessel segmentation need improvement, through more detailed datasets or deeper models, in order to assist the ophthalmologist in retinopathy of prematurity diagnosis.
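The sensitivity metric whose drop the study reports is straightforward to compute from binary masks; a minimal sketch, assuming predictions and ground truth are same-shaped pixel arrays:

```python
import numpy as np

def sensitivity(pred: np.ndarray, truth: np.ndarray) -> float:
    """True positive rate on vessel pixels: TP / (TP + FN)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)    # vessel pixels found
    fn = np.count_nonzero(~pred & truth)   # vessel pixels missed
    return tp / (tp + fn) if tp + fn else float("nan")

# Toy 1D example: 3 of 4 vessel pixels detected -> sensitivity 0.75.
print(sensitivity(np.array([1, 1, 1, 0, 0]), np.array([1, 1, 1, 1, 0])))
```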

A growing line of work shows how learned predictions can be used to break through worst-case barriers and improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This paper takes a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem. In this problem, n items arrive over time and must be stored in sorted order in an array of size Θ(n). The array slot of an element is its label, and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions. Our data structure provides strong guarantees: it is optimal for any prediction error and matches the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error. Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure performs well on real temporal data sets where predictions are constructed from elements that arrived in the past, as is typical in practice.
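To make the cost model concrete, here is a naive Python baseline, not the paper's data structure: a dense sorted array in which an item's label is its index, so every insertion relabels all items shifted to its right. The paper's structure, and its use of predictions, improves on exactly this cost.

```python
import bisect

def naive_list_labeling_cost(stream):
    """Dense sorted array: an item's label is its index, so inserting at
    position p relabels every item currently to its right."""
    arr, relabels = [], 0
    for x in stream:
        p = bisect.bisect_left(arr, x)
        relabels += len(arr) - p   # items shifted one slot to the right
        arr.insert(p, x)
    return relabels

print(naive_list_labeling_cost([5, 1, 4, 2, 3]))  # 0 + 1 + 1 + 2 + 2 = 6
```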

The analysis of fairness in process mining is a significant aspect of data-driven decision-making, yet progress in this field is constrained by the scarcity of event data that incorporates fairness considerations. To bridge this gap, we present a collection of simulated event logs, spanning four critical domains, which encapsulate a variety of discrimination scenarios. By simulating these event logs with CPN Tools, we ensure data with known ground truth, offering a robust foundation for fairness analysis. These logs are made freely available under the CC-BY-4.0 license and adhere to the XES standard, assuring broad compatibility with various process mining tools. This initiative aims to empower researchers with the resources to test and develop fairness techniques within process mining, ultimately contributing to the pursuit of equitable, data-driven decision-making processes.
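Because the logs adhere to the XES standard, any XES-capable toolkit can load them; as one hedged example, using the pm4py Python library (the file name below is a placeholder, not the collection's actual naming):

```python
import pm4py

# Load a simulated log (placeholder file name).
log = pm4py.read_xes("hiring_log.xes")

# Recent pm4py versions return a pandas DataFrame with standard XES
# columns such as case:concept:name (case id) and concept:name (activity).
print(log["concept:name"].value_counts().head())
```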

Dependency cycles pose a significant challenge to software quality and maintainability, yet there is limited understanding of how practitioners resolve them in real-world scenarios. This paper presents an empirical study of the recurring patterns software developers employ to resolve dependency cycles between two classes in practice. We analyzed data from 18 open-source projects across different domains and manually inspected hundreds of cycle untangling cases. Our findings reveal that developers tend to employ five recurring patterns to address dependency cycles. The chosen pattern is determined not only by the dependency relations between the cyclic classes but also by their design context, i.e., how the cyclic classes depend on, or are depended upon by, their neighboring classes. Through this empirical study, we also identified three common mistakes developers make when untangling cycles. These recurring patterns and common mistakes can serve as a taxonomy to improve developers' awareness and as learning material for software engineering students and inexperienced developers. Our results also suggest that, in addition to considering the internal structure of dependency cycles, automatic tools need to consider the design context of cycles to provide better support for refactoring them.
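As a hedged illustration of what untangling a two-class cycle can look like, the sketch below applies textbook dependency inversion in Python; this is not necessarily one of the five patterns the study identifies, and all names are hypothetical.

```python
from abc import ABC, abstractmethod

# Before the refactoring (not shown), Order called Invoice and Invoice
# called back into Order, forming a two-class dependency cycle.

class Billable(ABC):                 # extracted abstraction
    @abstractmethod
    def total(self) -> float: ...

class Order(Billable):
    def __init__(self, amounts):
        self.amounts = amounts

    def total(self) -> float:
        return sum(self.amounts)

class Invoice:
    def __init__(self, billable: Billable):  # depends on the abstraction,
        self.billable = billable             # not on Order directly

    def render(self) -> str:
        return f"Invoice total: {self.billable.total():.2f}"

print(Invoice(Order([9.99, 5.00])).render())  # Invoice total: 14.99
```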

Refactoring is a crucial technique for improving the efficiency and maintainability of software by restructuring its internal design while preserving its external behavior. While classical programs have benefited from various refactoring methods, the field of quantum programming lacks dedicated refactoring techniques. The distinct properties of quantum computing, such as quantum superposition, entanglement, and the no-cloning principle, necessitate specialized refactoring techniques. This paper bridges this gap by presenting a comprehensive set of refactorings specifically designed for quantum programs. Each refactoring is carefully designed and explained to ensure the effective restructuring of quantum programs. Additionally, we highlight the importance of tool support in automating the refactoring process for quantum programs. Although our study focuses on the quantum programming language Q#, our approach is applicable to other quantum programming languages, offering a general solution for enhancing the maintainability and efficiency of quantum software.

AI-powered programming assistants are increasingly gaining popularity, with GitHub Copilot alone used by over a million developers worldwide. These tools are far from perfect, however, producing code suggestions that may be incorrect or incomplete in subtle ways. As a result, developers face a new set of challenges when they need to understand, validate, and choose between the AI's suggestions. This paper explores whether Live Programming, a continuous display of a program's runtime values, can help address these challenges. We introduce Live Exploration of AI-Generated Programs, a new interaction model for AI programming assistants that supports exploring multiple code suggestions through Live Programming. We implement this interaction model in LEAP, a prototype Python environment, and evaluate it through a between-subjects study. Our results motivate several design opportunities for future AI-powered programming tools.

User ratings are widely used in web systems and applications to provide personalized interaction and to help other users make better choices. Previous research has shown that rating scale features and user personality can both influence users' rating behaviour, but relatively little work has been devoted to understanding whether the effects of rating scale features vary depending on users' personality. In this paper, we study the impact of scale granularity and colour on the ratings of individuals with different personalities, represented according to the Big Five model. To this end, we carried out a user study with 203 participants, in the context of a web-based survey where users were assigned an image rating task. Our results confirm that both colour and granularity can affect user ratings, but their specific effects also depend on user scores for certain personality traits, in particular agreeableness, openness to experience, and conscientiousness.

Online platforms employ recommendation systems to enhance customer engagement and drive revenue. However, in a multi-sided platform where the platform interacts with diverse stakeholders such as sellers (items) and customers (users), each with their own desired outcomes, finding an appropriate middle ground becomes a complex operational challenge. In this work, we investigate the "price of fairness", which captures the platform's potential compromises when balancing the interests of different stakeholders. Motivated by this, we propose a fair recommendation framework where the platform maximizes its revenue while interpolating between item and user fairness constraints. We further examine the fair recommendation problem in a more realistic yet challenging online setting, where the platform lacks knowledge of user preferences and can only observe binary purchase decisions. To address this, we design a low-regret online optimization algorithm that preserves the platform's revenue while achieving fairness for both items and users. Finally, we demonstrate the effectiveness of our framework and proposed method via a case study on MovieLens data.
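A toy linear program makes the "price of fairness" concrete; the sketch below (in Python with scipy, with invented numbers, and not the paper's formulation) maximizes revenue subject to a minimum-exposure floor per item, and the price of fairness is the revenue lost relative to the unconstrained optimum.

```python
import numpy as np
from scipy.optimize import linprog

revenue = np.array([5.0, 3.0, 1.0])  # expected revenue per recommendation
floor = 0.15                         # each item must get >= 15% exposure

# Variables: x_j = probability of recommending item j.
# linprog minimizes, so negate the revenue vector to maximize.
res = linprog(
    c=-revenue,
    A_eq=np.ones((1, 3)), b_eq=[1.0],   # the mix is a distribution
    bounds=[(floor, 1.0)] * 3,          # item-fairness exposure floors
)

print("mix:", res.x.round(3))                         # [0.7, 0.15, 0.15]
print("fair revenue:", -res.fun)                      # 4.1
print("price of fairness:", revenue.max() + res.fun)  # 5.0 - 4.1 = 0.9
```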

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches, and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
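As a deliberately simplified stand-in for the speed-budget question (not the paper's model or its dynamic program), the Python sketch below splits a budget between two defenders whose covered value grows with the square root of speed, weighted by the importance of their arcs; the optimal speed ratio then depends only on the weights, not on the budget, echoing the near-constant-ratio observation.

```python
from math import sqrt

def best_split(budget: float, w1: float, w2: float, steps: int = 1000):
    """Brute-force search over discretized two-defender speed splits,
    maximizing w1*sqrt(v1) + w2*sqrt(v2) with v1 + v2 = budget."""
    best_value, best_alloc = -1.0, (0.0, 0.0)
    for i in range(steps + 1):
        v1 = budget * i / steps
        v2 = budget - v1
        value = w1 * sqrt(v1) + w2 * sqrt(v2)
        if value > best_value:
            best_value, best_alloc = value, (v1, v2)
    return best_alloc

v1, v2 = best_split(budget=10.0, w1=2.0, w2=1.0)
# Optimum at v1/v2 = (w1/w2)**2 = 4, for any budget.
print(round(v1 / v2, 2))  # ~4.0
```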

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical foundations, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament to how the integrated consideration of optimization and statistics leads to fruitful research findings.
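As a minimal numerical sketch of the two-stage recipe, spectral initialization followed by gradient refinement, here is matrix completion in Python with numpy; the problem size, sampling rate, and step size are illustrative choices rather than the tuned settings any particular analysis prescribes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r ground truth
mask = rng.random((n, n)) < 0.5                        # observed entries
p = mask.mean()

# Stage 1: spectral initialization from the zero-filled, rescaled matrix.
Uc, s, Vt = np.linalg.svd(mask * M / p)
U = Uc[:, :r] * np.sqrt(s[:r])
V = Vt[:r].T * np.sqrt(s[:r])

# Stage 2: gradient descent on f(U, V) = 0.5 * ||P_Omega(U V^T - M)||_F^2.
lr = 0.5 / s[0]                   # step size on the order of 1 / sigma_1
for _ in range(500):
    R = mask * (U @ V.T - M)      # residual on observed entries only
    U, V = U - lr * (R @ V), V - lr * (R.T @ U)

# Near-zero error on all entries indicates exact completion.
print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```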
