
Analyses of a software product line (SPL) typically report variable results that are annotated with logical expressions indicating the set of product variants for which the results hold. These expressions can become complicated and difficult to reason about when the SPL has many features and product variants. Previous work introduced a visualizer that supports filters for highlighting the analysis results that apply to product variants of interest, but that work was only weakly evaluated. In this paper, we report on a controlled user study that evaluates the effectiveness of this new visualizer in helping the user search variable results and compare the results of multiple variants. Our findings indicate that use of the new visualizer significantly improves the correctness and efficiency of the user's work and reduces the user's cognitive load when working with variable results.
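To make the filtering idea concrete, here is a minimal sketch of presence-condition filtering over variable results. All names (the feature set, `holds_for`, the result list) are hypothetical illustrations, not the visualizer's actual API.

```python
# Minimal sketch: analysis results annotated with presence conditions
# (boolean expressions over features), filtered for one product variant.
# All names here are hypothetical, not the visualizer's actual API.

FEATURES = ["A", "B", "C"]

results = [
    ("warning at line 10", "A and not B"),
    ("type error in f()",  "B or C"),
    ("dead code in g()",   "A and C"),
]

def holds_for(condition, config):
    """Evaluate a presence condition against one product configuration."""
    env = {f: (f in config) for f in FEATURES}
    return eval(condition, {"__builtins__": {}}, env)

variant = {"A", "C"}  # the product variant of interest
for msg, cond in results:
    if holds_for(cond, variant):
        print(f"[{cond}] {msg}")
```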

Related Content

Modeling the structure and behavior of software systems plays a crucial role in various areas of software engineering. As with other software engineering artifacts, software models are subject to evolution. Supporting modelers in evolving models through model completion facilities and high-level edit operations, such as frequently occurring editing patterns, is still an open problem. Recently, large language models (i.e., generative neural networks) have garnered significant attention in various research areas, including software engineering. In this paper, we explore the potential of large language models to support the evolution of software models. We propose an approach that utilizes large language models for model completion and for discovering editing patterns in the model histories of software systems. Through controlled experiments using simulated model repositories, we evaluate the potential of large language models for these two tasks. We found that large language models are indeed a promising technology for supporting software model evolution and that this direction is worth investigating further.
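As a purely illustrative sketch of how LLM-driven model completion might be invoked, the snippet below prompts a chat model with a partial class-model fragment. The prompt format, model name, and fragment syntax are assumptions for illustration; the paper's actual pipeline may differ.

```python
# Hypothetical sketch: asking a large language model to complete a
# partial class model. Prompt format and model choice are assumptions,
# not the paper's actual setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

partial_model = """\
class Order { attr date : Date; ref items : OrderItem[*]; }
class OrderItem { attr quantity : Int; }
class Customer { """  # completion target

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Complete the following structural model fragment. "
                    "Return only the completed declaration."},
        {"role": "user", "content": partial_model},
    ],
)
print(response.choices[0].message.content)
```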

Long-run average optimization problems for Markov decision processes (MDPs) require constructing policies with optimal steady-state behavior, i.e., an optimal limit frequency of visits to the states. However, such policies may suffer from local instability: the frequency of states visited within a bounded time horizon along a run can differ significantly from the limit frequency. In this work, we propose an efficient algorithmic solution to this problem.
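For intuition, the quantity being optimized is the limit frequency of state visits. A minimal sketch, assuming a fixed memoryless policy whose induced chain is ergodic (this computes the objective, not the paper's algorithm):

```python
# Sketch: limit frequency of state visits under a fixed memoryless
# policy. The induced chain P is assumed ergodic; this is only the
# quantity the paper optimizes, not its algorithm.
import numpy as np

# Transition matrix of the Markov chain induced by the policy.
P = np.array([[0.1, 0.9, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.3, 0.7]])

# Stationary distribution: solve pi P = pi subject to sum(pi) = 1.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("limit visit frequencies:", pi)
```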

Missing data are an unavoidable complication frequently encountered in many causal discovery tasks. When a missingness process depends on the missing values themselves (known as self-masking missingness), recovery of the joint distribution becomes unattainable, and detecting the presence of such self-masking missingness remains a perplexing challenge. Consequently, owing to the inability to reconstruct the original distribution and to discern the underlying missingness mechanism, simply applying existing causal discovery methods would lead to wrong conclusions. In this work, we find that recent advances in additive noise models have the potential to enable learning causal structure in the presence of self-masking missingness. With this observation, we investigate the problem of identifying causal structure from missing data under an additive noise model with different missingness mechanisms, where the `no self-masking missingness' assumption can be appropriately eliminated. Specifically, we first extend the scope of identifiability of the causal skeleton to the case of weak self-masking missingness (i.e., no variable other than itself can be the cause of a self-masking indicator). We further provide necessary and sufficient conditions for identifying the causal direction under the additive noise model and show that the causal structure can be identified up to an IN-equivalent pattern. Finally, we propose a practical algorithm, based on the above theoretical results, for learning the causal skeleton and causal direction. Extensive experiments on synthetic and real data demonstrate the efficiency and effectiveness of the proposed algorithms.
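For background, a bare-bones additive-noise-model direction test on fully observed data is sketched below; the paper's missing-data machinery is omitted, and residual dependence is proxied by a crude correlation statistic rather than a proper independence test such as HSIC.

```python
# Minimal ANM direction test on complete data (missing-data handling
# omitted). Dependence between cause and residual is proxied by a crude
# correlation of squared values; proper ANM tests use HSIC or similar.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x ** 3 + rng.normal(scale=0.5, size=500)   # ground truth: X -> Y

def residual_dependence(cause, effect):
    reg = GradientBoostingRegressor().fit(cause.reshape(-1, 1), effect)
    resid = effect - reg.predict(cause.reshape(-1, 1))
    # crude dependence proxy between cause and regression residual
    return abs(np.corrcoef(cause ** 2, resid ** 2)[0, 1])

d_xy = residual_dependence(x, y)   # fit Y = f(X) + N
d_yx = residual_dependence(y, x)   # fit X = g(Y) + N
print("inferred:", "X -> Y" if d_xy < d_yx else "Y -> X")
```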

The environmental impact of video streaming services has been discussed as part of strategies towards sustainable information and communication technologies. A first step in this direction is profiling and assessing the energy consumption of existing video technologies. This paper presents a comprehensive study of power measurement techniques in video compression, comparing the use of hardware and software power meters. An experimental methodology to ensure the reliability of measurements is introduced. Key findings demonstrate a high correlation between hardware- and software-based energy measurements for two video codecs across different spatial and temporal resolutions, with the software-based measurements incurring lower computational overhead.
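As a small illustration of the agreement check involved, the snippet below computes the Pearson correlation between hardware-meter and software-meter readings for the same encoding runs; the numbers are placeholders, not measured data.

```python
# Sketch: correlating hardware-meter and software-meter energy readings
# for the same encoding runs. Values are illustrative, not measured data.
import numpy as np

hw_joules = np.array([120.4, 233.1, 310.9, 451.2, 502.7])  # external meter
sw_joules = np.array([118.0, 229.5, 305.2, 447.8, 498.1])  # e.g., RAPL-based

r = np.corrcoef(hw_joules, sw_joules)[0, 1]
print(f"Pearson correlation: {r:.4f}")
```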

Recommender systems are expected to act as assistants that help human users find relevant information automatically, without explicit queries. As recommender systems evolve, increasingly sophisticated learning techniques are applied and have achieved better performance in terms of user engagement metrics such as clicks and browsing time. The increase in measured performance, however, can have two possible attributions: a better understanding of user preferences, or a more proactive ability to exploit human bounded rationality and induce user over-consumption. A natural follow-up question is whether current recommendation algorithms are manipulating user preferences, and if so, whether the level of manipulation can be measured. In this paper, we present a general framework for benchmarking the degree of manipulation by recommendation algorithms, in both slate recommendation and sequential recommendation scenarios. The framework consists of four stages: initial preference calculation, training data collection, algorithm training and interaction, and metric calculation involving two proposed metrics. We benchmark several representative recommendation algorithms on both synthetic and real-world datasets under the proposed framework. We observe that a high online click-through rate does not necessarily indicate a better understanding of users' initial preferences; it can instead result from prompting users to choose more documents they initially did not favor. Moreover, we find that the training data have a notable impact on the degree of manipulation, and that algorithms with more powerful modeling abilities are more sensitive to such impacts. The experiments also verify the usefulness of the proposed metrics for measuring the degree of manipulation. We advocate that future recommendation algorithms be studied as an optimization problem with constraints on user preference manipulation.
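As an illustration of the flavor of such metrics, the sketch below computes a hypothetical "off-preference click rate": the fraction of clicked items outside a user's initial top-k preference. This is not the paper's exact metric definition.

```python
# One plausible manipulation-style metric (hypothetical, not the paper's
# exact definition): the fraction of clicked items that lie outside the
# user's initial top-k preferred items.
import numpy as np

def off_preference_rate(initial_scores, clicked_items, k=10):
    """Share of clicks on items the user did not initially favor."""
    top_k = set(np.argsort(initial_scores)[::-1][:k])
    off = sum(1 for item in clicked_items if item not in top_k)
    return off / max(len(clicked_items), 1)

rng = np.random.default_rng(1)
initial = rng.random(100)                  # initial preference per item
clicks = rng.integers(0, 100, size=20)     # items clicked after exposure
print(f"off-preference click rate: {off_preference_rate(initial, clicks):.2f}")
```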

Data preparation is a trial-and-error process that typically involves countless iterations over the data to define the best pipeline of operators for a given task. With tabular data, practitioners often perform this burdensome activity on local machines, writing ad hoc scripts with libraries based on the Pandas dataframe API and testing them on samples of the entire dataset: the faster the library, the less idle time its users have. In this paper, we evaluate the most popular Python dataframe libraries on general data preparation use cases to assess how they perform on a single machine. To do so, we employ 4 real-world datasets and pipelines with distinct characteristics, covering a variety of scenarios. The insights gained from this experimentation are useful to data scientists who need to choose which dataframe library best suits their data preparation task at hand. In a nutshell, we found that: for small datasets, Pandas consistently proves to be the best choice, with the richest API; when RAM is limited and full compatibility with the Pandas API is not needed, Polars is the go-to choice thanks to its resource and query optimization; when a GPU is available, cuDF often yields the best performance; and for very large datasets that fit in neither GPU memory nor RAM, PySpark (thanks to multi-thread execution and a query optimizer) and Vaex (exploiting a columnar data format) are the best options.
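A minimal harness for this kind of single-machine comparison might look like the following; the file, column names, and the choice of a groupby aggregation are placeholders (Polars versions from 0.19 onward use `group_by`; older ones use `groupby`).

```python
# Minimal timing harness comparing the same aggregation in Pandas and
# Polars. File and column names are placeholders for a real dataset.
import time
import pandas as pd
import polars as pl

PATH = "data.csv"  # placeholder dataset

def timed(label, fn):
    t0 = time.perf_counter()
    out = fn()
    print(f"{label}: {time.perf_counter() - t0:.3f}s")
    return out

pdf = timed("pandas read", lambda: pd.read_csv(PATH))
timed("pandas groupby",
      lambda: pdf.groupby("category")["value"].mean())

pldf = timed("polars read", lambda: pl.read_csv(PATH))
timed("polars group_by",
      lambda: pldf.group_by("category").agg(pl.col("value").mean()))
```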

Quantum low-density parity-check (QLDPC) codes have emerged as a promising technique for quantum error correction. A variety of decoders have been proposed for QLDPC codes and many of them utilize belief propagation (BP) decoding in some fashion. However, the use of BP decoding for degenerate QLDPC codes is known to face issues with convergence. These issues are commonly attributed to short cycles in the Tanner graph and multiple syndrome-matching error patterns due to code degeneracy. Although various methods have been proposed to mitigate the non-convergence issue, such as BP with ordered statistics decoding (BP-OSD) and BP with stabilizer inactivation (BP-SI), achieving better performance with lower complexity remains an active area of research. In this work, we propose to decode QLDPC codes with BP guided decimation (BPGD), which has been previously studied for constraint satisfaction and lossy compression problems. The decimation process is applicable to both binary BP and quaternary BP and involves sequentially freezing the value of the most reliable qubits to encourage BP convergence. Despite its simplicity, we find that BPGD significantly reduces BP failures due to non-convergence while maintaining a low probability of error given convergence, achieving performance on par with BP-OSD and BP-SI. To better understand how and why BPGD improves performance, we discuss several interpretations of BPGD and their connection to BP syndrome decoding.
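Since the decimation mechanism is described concretely, the sketch below illustrates it on a simplified classical analogue: min-sum syndrome BP on a binary parity-check matrix, where the most reliable bit is frozen (its prior LLR saturated) whenever BP fails to converge. This is a toy binary stand-in under stated assumptions, not the paper's quaternary decoder.

```python
# Sketch of BP guided decimation on a classical binary syndrome-decoding
# analogue: run min-sum BP; on non-convergence, freeze the most reliable
# bit by saturating its prior LLR and rerun.
import numpy as np

BIG = 50.0  # saturated LLR used to freeze a decided bit

def min_sum_bp(H, syndrome, prior, iters=30):
    """Min-sum syndrome BP. Returns (hard_decision, posterior, converged)."""
    m, n = H.shape
    rows, cols = np.nonzero(H)
    v2c = prior[cols].astype(float).copy()
    c2v = np.zeros_like(v2c)
    post = prior.astype(float).copy()
    for _ in range(iters):
        for c in range(m):                       # check-node update
            e = np.where(rows == c)[0]
            s = np.where(v2c[e] >= 0, 1.0, -1.0)
            total = np.prod(s) * (-1.0) ** syndrome[c]
            mags = np.abs(v2c[e])
            for k, i in enumerate(e):
                c2v[i] = total * s[k] * np.delete(mags, k).min()
        post = prior.astype(float).copy()        # variable-node update
        np.add.at(post, cols, c2v)
        v2c = post[cols] - c2v                   # extrinsic messages
        x = (post < 0).astype(int)
        if np.array_equal(H @ x % 2, syndrome):
            return x, post, True
    return x, post, False

def bpgd(H, syndrome, prior):
    prior = prior.astype(float).copy()
    for _ in range(H.shape[1]):
        x, post, ok = min_sum_bp(H, syndrome, prior)
        if ok:
            return x, True
        free = np.where(np.abs(prior) < BIG)[0]  # bits not yet frozen
        if free.size == 0:
            return x, False
        v = free[np.argmax(np.abs(post[free]))]  # most reliable bit
        prior[v] = BIG if post[v] >= 0 else -BIG
    return x, False

# Toy example: a small parity-check matrix and an error to decode.
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
err = np.array([0, 1, 0, 0, 0])
syn = H @ err % 2
prior = np.full(5, 2.0)            # uniform prior: errors are unlikely
x, ok = bpgd(H, syn, prior)
print("converged:", ok, "estimate:", x)
```

Freezing via prior saturation leaves the BP message schedule untouched, which is what makes the decimation loop easy to bolt onto an existing decoder.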

With the rapid progress of virtual reality (VR) technology, the scope of VR applications has greatly expanded across various domains. However, the superiority of VR training over traditional methods and its impact on learning efficacy are still uncertain. To investigate whether VR training is more effective than traditional methods, we designed virtual training systems for mechanical assembly on both VR and desktop platforms and subsequently conducted pre-test and post-test experiments. A cohort of 53 students, all enrolled in an engineering drawing course and without prior differences in relevant knowledge, was randomly divided into three groups: physical training, desktop virtual training, and immersive VR training. Our investigation used analysis of covariance (ANCOVA) to examine differences in post-test scores among the three groups while controlling for pre-test scores. The group that received VR training showed the highest post-test scores. Another facet of our study examined the sense of presence induced by the virtual system, for which we developed a specialized scale tailored to our research objectives. Our findings indicate that VR training can enhance the sense of presence, particularly in terms of sensory factors and realism factors. Moreover, correlation analysis uncovers connections between the various dimensions of presence. This study confirms that VR training can improve learning efficacy and presence in the context of mechanical assembly, surpassing traditional training methods. Furthermore, it provides empirical evidence supporting the integration of VR technology into higher education and engineering training, serving as a reference for the practical application of VR technology in different fields.
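The ANCOVA itself is standard; a minimal sketch with statsmodels looks like the following, where the toy data and column names are hypothetical.

```python
# Sketch of the ANCOVA: post-test score by training group, controlling
# for pre-test score. Column names and the toy data are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["physical"] * 3 + ["desktop"] * 3 + ["vr"] * 3,
    "pre":  [55, 60, 58, 57, 62, 54, 56, 61, 59],
    "post": [70, 74, 72, 73, 78, 70, 80, 85, 83],
})

model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-test for group, adjusted for pre
```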

The enormous amount of data represented as large graphs can, in some cases, exceed the resources of a conventional computer. Edges in particular can take up a considerable amount of memory relative to the number of nodes. However, storing edges exactly is not always essential for drawing the needed conclusions. A similar problem takes records with many variables and attempts to extract the most discernible features; the ``dimension'' of such data is said to be reduced. Following an approach with the same objective in mind, we map a graph representation to a $k$-dimensional space and answer queries about neighboring nodes mainly by measuring Euclidean distances. The accuracy of our answers decreases, but this is compensated for by fuzzy logic, which gives an idea of the likelihood of error. This method allows for a reasonable representation in memory while maintaining a fair amount of useful information: it provides a concise embedding in $k$-dimensional Euclidean space and solves some problems without having to decompress the graph. Of particular interest is the case $k=2$. Promising, highly accurate experimental results are obtained and reported.
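A minimal sketch of the idea for $k=2$: embed nodes using Laplacian eigenvectors and convert Euclidean distance into a fuzzy "likely neighbors" score via a logistic membership function. Both the spectral embedding and the membership function are assumptions for illustration, not the paper's construction.

```python
# Sketch for k = 2: embed nodes via Laplacian eigenvectors, then answer
# "is v a neighbor of u?" from Euclidean distance, with a logistic score
# standing in for the fuzzy likelihood. Embedding and membership function
# are illustrative assumptions.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
vals, vecs = np.linalg.eigh(L)
emb = vecs[:, 1:3]                        # two smallest nonzero eigenvectors

def neighbor_likelihood(u, v, tau=0.4, beta=10.0):
    """Fuzzy 'likely neighbors' score: near 1 at distance 0, near 0 far away."""
    d = np.linalg.norm(emb[u] - emb[v])
    return 1.0 / (1.0 + np.exp(beta * (d - tau)))

print(neighbor_likelihood(0, 1))  # adjacent pair: high score expected
print(neighbor_likelihood(0, 4))  # distant pair: low score expected
```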

In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven particularly relevant for natural language processing (NLP), experiencing rapid spread and wide adoption in recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
