
Context: Requirements engineering (RE) is an important part of Software Engineering (SE), consisting of various human-centric activities that require frequent collaboration among a variety of roles. Prior research has shown that personality is one such human aspect with a significant impact on the success of a software project. However, only a limited number of empirical studies have focused on the impact of personality on RE activities. Objective: The objective of this study is to explore and identify the impact of personality on RE activities, provide a better understanding of these impacts, and offer guidance on how to better handle them in RE. Method: We used a mixed-methods approach, including a personality-test-based survey (50 participants) and an in-depth interview study (15 participants) with software practitioners from around the world who are involved in RE activities. Results: Through the personality test analysis, we found that a majority of the practitioners score high on the agreeableness and conscientiousness traits and average on the extraversion and neuroticism traits. Through analysis of the interviews, we found a range of impacts related to the personality traits of software practitioners, their team members, and external stakeholders. These impacts can be positive or negative, depending on the RE activities, the overall software development process, and the people involved. Moreover, we identified a set of strategies that can be applied to mitigate the negative impacts of personality on RE activities. Conclusion: The identified impacts of personality on RE activities and the mitigation strategies provide guidance to software practitioners on handling possible personality impacts on RE activities, and to researchers on investigating these impacts in greater depth in the future.

Related Content

The IEEE International Requirements Engineering Conference is the premier international forum for researchers, practitioners, educators, and students to present and discuss the most recent innovations, experiences, and concerns in the discipline of requirements engineering. The conference offers academia, government, and industry an extensive program featuring several distinguished keynote speakers and three days of sessions comprising papers, panels, posters, and demonstrations.
January 10, 2024

This work introduces a new class of Runge-Kutta methods for solving nonlinearly partitioned initial value problems. These new methods, named nonlinearly partitioned Runge-Kutta (NPRK) methods, generalize existing additive and component-partitioned Runge-Kutta methods, and allow one to distribute different types of implicitness within nonlinear terms. The paper introduces the NPRK framework and discusses order conditions, linear stability, and the derivation of implicit-explicit and implicit-implicit NPRK integrators. The paper concludes with numerical experiments that demonstrate the utility of NPRK methods for solving the viscous Burgers' equation and the gray thermal radiation transport equations.
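To make the partitioning idea concrete, here is a minimal sketch (not one of the paper's derived schemes) of the simplest NPRK-type step, an implicit-explicit Euler method y_{n+1} = y_n + h F(y_{n+1}, y_n). The toy partition of the logistic equation, the fixed-point solver, and all step counts are illustrative assumptions.

```python
import numpy as np

def nprk_imex_euler(F, y0, t0, t1, n_steps, max_iters=20, tol=1e-12):
    """First-order IMEX Euler step for a nonlinearly partitioned ODE
    y' = F(u, v), where the first argument is treated implicitly and
    the second explicitly:
        y_{n+1} = y_n + h * F(y_{n+1}, y_n).
    The implicit equation is solved by fixed-point iteration here;
    a Newton solve would be typical in practice."""
    h = (t1 - t0) / n_steps
    y = float(y0)
    for _ in range(n_steps):
        u = y  # initial guess for y_{n+1}
        for _ in range(max_iters):
            u_new = y + h * F(u, y)
            if abs(u_new - u) < tol:
                break
            u = u_new
        y = u_new
    return y

# Toy nonlinear partition of the logistic equation y' = y*(1 - y):
# F(u, v) = u*(1 - v), so the factor u is treated implicitly.
F = lambda u, v: u * (1.0 - v)
print(nprk_imex_euler(F, y0=0.1, t0=0.0, t1=5.0, n_steps=500))
```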

Recently, a new paradigm, meta learning, has been widely applied to Deep Learning Recommendation Models (DLRM) and significantly improves statistical performance, especially in cold-start scenarios. However, existing systems are not tailored to meta-learning-based DLRM models and have critical efficiency problems in distributed training on GPU clusters. This is because the conventional deep learning pipeline is not optimized for the two task-specific datasets and two update loops used in meta learning. This paper presents G-Meta, a high-performance framework for large-scale training of optimization-based meta DLRM models on GPU clusters. Firstly, G-Meta utilizes both data parallelism and model parallelism, with careful orchestration of computation and communication efficiency, to enable high-speed distributed training. Secondly, it proposes a Meta-IO pipeline for efficient data ingestion to alleviate the I/O bottleneck. Various experimental results show that G-Meta achieves notable training speed without loss of statistical performance. Since early 2022, G-Meta has been deployed in Alipay's core advertising and recommender system, shortening the continuous model delivery cycle by a factor of four. It also obtains a 6.48% improvement in Conversion Rate (CVR) and a 1.06% increase in CPM (Cost Per Mille) in Alipay's homepage display advertising, with the benefit of larger training samples and tasks.
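For readers unfamiliar with the two update loops mentioned above, the following is a minimal sketch of an optimization-based (MAML-style) meta-learning step in PyTorch. The toy linear model, synthetic support/query data, and learning rates are assumptions for illustration only and do not reflect G-Meta's actual architecture or pipeline.

```python
import torch

# Minimal MAML-style two-loop update on a toy regression model.
# The support/query split mirrors the "two task-specific datasets";
# all shapes and rates are illustrative assumptions.
torch.manual_seed(0)
model = torch.nn.Linear(8, 1)
outer_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, loss_fn = 0.1, torch.nn.MSELoss()

def task_batch():
    x = torch.randn(16, 8)
    return x, x.sum(dim=1, keepdim=True)  # toy target

for step in range(100):
    xs, ys = task_batch()  # support set (inner loop)
    xq, yq = task_batch()  # query set (outer loop)
    params = list(model.parameters())
    # Inner loop: one adaptation step, keeping the graph for meta-gradients.
    loss_s = loss_fn(torch.nn.functional.linear(xs, params[0], params[1]), ys)
    grads = torch.autograd.grad(loss_s, params, create_graph=True)
    fast = [p - inner_lr * g for p, g in zip(params, grads)]
    # Outer loop: evaluate the adapted weights on the query set.
    loss_q = loss_fn(torch.nn.functional.linear(xq, fast[0], fast[1]), yq)
    outer_opt.zero_grad()
    loss_q.backward()
    outer_opt.step()
```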

Synthetic ground motions (GMs) play a fundamental role in both deterministic and probabilistic seismic engineering assessments. This paper shows that the family of filtered and modulated white noise stochastic GM models overlooks a key parameter: the high-pass filter's corner frequency, $f_c$. In the simulated motions, this causes significant distortions in the long-period range of the linear-response spectra and in the linear-response spectral correlations. To address this, we incorporate $f_c$ as an explicitly fitted parameter in a site-based stochastic model. We optimize $f_c$ by individually matching the long-period linear-response spectrum (i.e., $Sa(T)$ for $T \geq 1$s) of synthetic GMs with that of each recorded GM. We show that by fitting $f_c$, the resulting stochastically simulated GMs closely capture the spectral amplitudes, variability (i.e., variances of $\log(Sa(T))$), and the correlation structure (i.e., correlation of $\log(Sa(T))$ between distinct periods $T_1$ and $T_2$) of recorded GMs. To quantify the impact of $f_c$, a sensitivity analysis is conducted through linear regression, relating the logarithmic linear-response spectrum ($\log(Sa(T))$) to seven GM parameters, including the optimized $f_c$. The results indicate that the variance of $f_c$ observed in natural GMs, along with its correlation with the other GM parameters, accounts for 26% of the spectral variability at long periods. Neglecting either the $f_c$ variance or the $f_c$ correlation typically results in a substantial overestimation of the linear-response spectral correlation.
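As a rough illustration of where $f_c$ enters such a model, the sketch below generates modulated white noise and then applies a high-pass filter with corner frequency $f_c$. The envelope shape and the fourth-order Butterworth filter are illustrative assumptions, not the paper's fitted site-based model.

```python
import numpy as np
from scipy import signal

def synthetic_gm(duration=20.0, fs=100.0, fc=0.2, seed=0):
    """Toy filtered/modulated white-noise ground motion.

    fc is the high-pass corner frequency (Hz) whose role the paper
    highlights; the Gamma-like envelope and 4th-order Butterworth
    filter are illustrative choices only."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)
    noise = rng.standard_normal(t.size)
    envelope = (t / 5.0) ** 2 * np.exp(-t / 5.0)  # simple modulating shape
    motion = envelope * noise
    # The high-pass filter controls the low-frequency content of the
    # motion, and hence Sa(T) at long periods T.
    sos = signal.butter(4, fc, btype="highpass", fs=fs, output="sos")
    return t, signal.sosfiltfilt(sos, motion)

t, acc = synthetic_gm(fc=0.2)
```

Lowering `fc` in this sketch leaves more long-period energy in the motion, which is exactly the spectral range where the paper reports distortions when `fc` is not fitted.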

Educators are increasingly concerned about the use of Large Language Models (LLMs) such as ChatGPT in programming education, particularly the potential exploitation of imperfections in Artificial Intelligence Generated Content (AIGC) detectors for academic misconduct. In this paper, we present an empirical study that examines whether an LLM can bypass detection by AIGC detectors when generating code for a given problem using different prompt variants. We collected a dataset comprising 5,069 samples, each consisting of a textual description of a coding problem and its corresponding human-written Python solution code. These samples were obtained from various sources: 80 from Quescol, 3,264 from Kaggle, and 1,725 from LeetCode. From the dataset, we created 13 sets of code-problem variant prompts, which were used to instruct ChatGPT to generate outputs. Subsequently, we assessed the performance of five AIGC detectors. Our results demonstrate that existing AIGC detectors perform poorly in distinguishing between human-written code and AI-generated code.
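As a sketch of how such detector evaluations are typically scored, the snippet below computes accuracy and AUC for a hypothetical detector over labeled samples. The labels and scores here are random stand-ins, not data from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# Illustrative evaluation of an AIGC detector on labeled code samples.
# labels: 1 = AI-generated, 0 = human-written. The scores below are
# random stand-ins for a real detector's "probability AI" outputs.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = np.clip(0.5 + 0.1 * (2 * labels - 1)
                 + 0.3 * rng.standard_normal(1000), 0.0, 1.0)

preds = (scores >= 0.5).astype(int)           # common decision threshold
print("accuracy:", accuracy_score(labels, preds))
print("AUC:", roc_auc_score(labels, scores))  # threshold-free ranking quality
```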

In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven particularly relevant for natural language processing (NLP) and have seen rapid spread and wide adoption in recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.

Graph Neural Networks (GNNs) have gained momentum in graph representation learning and boosted the state of the art in a variety of areas, such as data mining (e.g., social network analysis and recommender systems), computer vision (e.g., object detection and point cloud learning), and natural language processing (e.g., relation extraction and sequence learning), to name a few. With the emergence of Transformers in natural language processing and computer vision, graph Transformers embed a graph structure into the Transformer architecture to overcome the limitations of local neighborhood aggregation while avoiding strict structural inductive biases. In this paper, we present a comprehensive review of GNNs and graph Transformers in computer vision from a task-oriented perspective. Specifically, we divide their applications in computer vision into five categories according to the modality of input data, i.e., 2D natural images, videos, 3D data, vision + language, and medical images. In each category, we further divide the applications according to a set of vision tasks. Such a task-oriented taxonomy allows us to examine how each task is tackled by different GNN-based approaches and how well these approaches perform. Based on the necessary preliminaries, we provide the definitions and challenges of the tasks, in-depth coverage of the representative approaches, as well as discussions regarding insights, limitations, and future directions.
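One common way graph Transformers inject structure into attention is to mask or bias attention scores with the adjacency matrix. The following is a minimal sketch of that idea; the single-head formulation, identity projections, and toy graph are illustrative assumptions, not a specific model from the surveyed literature.

```python
import torch

def graph_attention(x, adj):
    """Single-head self-attention with an adjacency-based mask:
    disconnected node pairs receive -inf before the softmax, a simple
    way to restrict attention to the graph structure. Real models use
    learned q/k/v projections; identity maps are used here for brevity."""
    d = x.size(-1)
    q, k, v = x, x, x
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    mask = torch.zeros_like(adj).masked_fill(adj == 0, float("-inf"))
    attn = torch.softmax(scores + mask, dim=-1)
    return attn @ v

x = torch.randn(5, 16)  # 5 nodes, 16-dim features
# Path graph with self-loops, so every row attends to something.
adj = torch.eye(5) + torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
out = graph_attention(x, adj)
```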

Seeking equivalent entities among multi-source Knowledge Graphs (KGs), known as entity alignment (EA), is the pivotal step in KG integration. However, most existing EA methods are inefficient and scale poorly. A recent summary points out that some of them even require several days to process a dataset containing 200,000 nodes (DWY100K). We believe an over-complex graph encoder and an inefficient negative sampling strategy are the two main reasons. In this paper, we propose a novel KG encoder, the Dual Attention Matching Network (Dual-AMN), which not only models both intra-graph and cross-graph information effectively but also greatly reduces computational complexity. Furthermore, we propose the Normalized Hard Sample Mining Loss to smoothly select hard negative samples with reduced loss shift. Experimental results on widely used public datasets indicate that our method achieves both high accuracy and high efficiency. On DWY100K, the whole running process of our method finishes in 1,100 seconds, at least 10× faster than previous work. Our method also outperforms previous works across all datasets, improving Hits@1 and MRR by 6% to 13%.
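The exact Normalized Hard Sample Mining Loss is defined in the paper; as a hedged sketch of the general idea behind smooth hard-negative mining, the snippet below applies a LogSumExp over negative similarities so that harder negatives (those most similar to the anchor) dominate the gradient. The margin, temperature, and tensor shapes are illustrative assumptions.

```python
import torch

def smooth_hard_negative_loss(sim_pos, sim_neg, margin=0.5, tau=10.0):
    """Illustrative smooth hard-negative mining loss (not the exact
    Dual-AMN formulation). LogSumExp acts as a soft maximum over the
    negatives, so the hardest negatives contribute most.

    sim_pos: (B,)   similarity of each aligned entity pair
    sim_neg: (B, N) similarities to N negative candidates per pair
    """
    logits = tau * (sim_neg - sim_pos.unsqueeze(1) + margin)
    soft_max_neg = torch.logsumexp(logits, dim=1)
    return torch.nn.functional.softplus(soft_max_neg).mean() / tau

sim_pos = torch.rand(32)
sim_neg = torch.rand(32, 50)
loss = smooth_hard_negative_loss(sim_pos, sim_neg)
```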

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems of the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
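As a minimal instance of category (1), the sketch below performs uniform symmetric post-training int8 quantization of a weight tensor. The per-tensor symmetric scheme is one simple choice among many such surveys cover; real toolchains also use per-channel scales and asymmetric zero-points.

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric quantization of a float32 weight tensor to
    int8: one scale for the whole tensor, derived from its max
    magnitude, so dequantization is a single multiply."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
# Reconstruction error is bounded by about half the quantization step.
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```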

The problem of Multiple Object Tracking (MOT) consists of following the trajectories of different objects in a sequence, usually a video. In recent years, with the rise of deep learning, the algorithms that address this problem have benefited from the representational power of deep models. This paper provides a comprehensive survey of works that employ deep learning models to solve the task of MOT on single-camera videos. Four main steps in MOT algorithms are identified, and an in-depth review of how deep learning is employed in each of these stages is presented. A complete experimental comparison of the presented works on the three MOTChallenge datasets is also provided, identifying a number of similarities among the top-performing methods and presenting some possible future research directions.
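For concreteness, the affinity and association steps common to tracking-by-detection pipelines can be sketched as IoU-based cost computation followed by Hungarian matching. The box format and threshold below are illustrative assumptions; deep MOT methods typically replace or augment the IoU affinity with learned features.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(tracks, detections, iou_min=0.3):
    """Hungarian matching on an IoU cost matrix: the classic
    'association' step of tracking-by-detection pipelines."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_min]

tracks = [(10, 10, 50, 50), (60, 60, 100, 100)]
dets = [(12, 11, 52, 49), (200, 200, 240, 240)]
print(associate(tracks, dets))  # -> [(0, 0)]: second track goes unmatched
```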

Machine learning techniques have become deeply rooted in our everyday lives. However, since achieving good learning performance is knowledge- and labor-intensive, human experts are heavily involved in every aspect of machine learning. To make machine learning techniques easier to apply and to reduce the demand for experienced human experts, automated machine learning (AutoML) has emerged as a hot topic of both industrial and academic interest. In this paper, we provide an up-to-date survey on AutoML. First, we introduce and define the AutoML problem, drawing inspiration from both the automation and machine learning realms. Then, we propose a general AutoML framework that not only covers most existing approaches to date but can also guide the design of new methods. Subsequently, we categorize and review the existing works from two aspects, i.e., the problem setup and the employed techniques. Finally, we provide a detailed analysis of AutoML approaches and explain the reasons underlying their successful applications. We hope this survey can serve not only as an insightful guideline for AutoML beginners but also as an inspiration for future research.
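As a minimal sketch of the outermost AutoML loop described above, the snippet below runs a random search over a small hyperparameter space, scoring each configuration by cross-validation. The model family, search space, and budget are illustrative assumptions; real AutoML systems use far more sophisticated optimizers (e.g., Bayesian or evolutionary) and also search over pipelines, not just hyperparameters.

```python
import random
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Random search over a toy configuration space, scored by 3-fold CV.
X, y = load_digits(return_X_y=True)
space = {"n_estimators": [50, 100, 200],
         "max_depth": [4, 8, 16, None],
         "max_features": ["sqrt", "log2"]}

random.seed(0)
best_cfg, best_score = None, -1.0
for _ in range(10):  # evaluation budget
    cfg = {k: random.choice(v) for k, v in space.items()}
    score = cross_val_score(RandomForestClassifier(**cfg, random_state=0),
                            X, y, cv=3).mean()
    if score > best_score:
        best_cfg, best_score = cfg, score
print("best config:", best_cfg, "cv accuracy: %.3f" % best_score)
```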
