Purpose: The purpose of this paper is to present empirical evidence on the implementation, acceptance and quality-related aspects of research information systems (RIS) in academic institutions.
Design/methodology/approach: The study is based on a 2018 survey of 160 German universities and research institutions.
Findings: The paper presents recent figures on the implementation of RIS in German academic institutions, including results on satisfaction, perceived usefulness and ease of use. It also contains information about perceived data quality and preferred quality management practices. RIS acceptance can be achieved only if the highest possible data quality is ensured. For this reason, the impact of data quality on the technology acceptance model (TAM) is examined, and the relation between the level of data quality and user acceptance of the associated institutional RIS is addressed.
Research limitations/implications: The data provide empirical elements for a better understanding of the role of data quality in the acceptance of RIS, within the framework of a TAM. The study focuses on commercial and open-source solutions; in-house developments were excluded. Also, mainly because of the small sample size, the data analysis was limited to descriptive statistics.
Practical implications: The results are helpful for the management of RIS projects, for increasing acceptance of and satisfaction with the system, and for the further development of RIS functionalities.
Originality/value: The number of empirical studies on the implementation and acceptance of RIS is low, and very few address the question of data quality in this context. This study tries to fill that gap.
Despite much creative work on methods and tools, reproducibility -- the ability to repeat the computational steps used to obtain a research result -- remains elusive. One reason for these difficulties is that extant tools for capturing research processes do not align well with the rich working practices of scientists. We advocate here for simple mechanisms that can be integrated easily with current work practices to capture basic information about every data product consumed or produced in a project. We argue that by thus extending the scope of findable, accessible, interoperable, and reusable (FAIR) data in both time and space to enable the creation of a continuous chain of continuous and ubiquitous FAIRness linkages (CUF-Links) from inputs to outputs, such mechanisms can provide a strong foundation for documenting the provenance linkages that are essential to reproducible research. We give examples of mechanisms that can achieve these goals, and review how they have been applied in practice.
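The "simple mechanisms" the abstract advocates can be illustrated with a minimal sketch: content-hash every file a processing step consumes or produces, and append an input-to-output linkage record to a log. This is not the authors' implementation; the function names, record schema, and log filename are illustrative assumptions.

```python
import hashlib
import json
import time
from pathlib import Path

def file_hash(path):
    """A content hash identifies a data product independent of its name or location."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def record_linkage(inputs, outputs, log_path="provenance_log.jsonl"):
    """Append one provenance record linking consumed inputs to produced outputs.

    A chain of such records, written by every processing step, documents the
    provenance linkages from original inputs to final research outputs.
    """
    record = {
        "timestamp": time.time(),
        "inputs": {str(p): file_hash(p) for p in inputs},
        "outputs": {str(p): file_hash(p) for p in outputs},
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because products are identified by content hash rather than filename, records from different machines and times can still be joined into one continuous chain.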
Graph analytics attracts much attention from both the research and industry communities. Thanks to its linear time complexity, $k$-core decomposition is widely used in many real-world applications such as biology, social networks, community detection, ecology, and information spreading. In many such applications, the data graphs change continuously over time, with the changes corresponding to edge insertions and removals. Instead of recomputing the $k$-core from scratch, which is time-consuming, we study how to maintain it efficiently: when an edge is inserted or deleted, we need to identify the affected vertices whose core numbers change. The state-of-the-art order-based method maintains an order, the so-called $k$-order, among all vertices, which can significantly reduce the search space. However, this order-based method is complicated to understand and implement, and its correctness is not formally discussed. In this work, we propose a simplified order-based approach that introduces the classical Order Data Structure to maintain the $k$-order, which significantly improves the worst-case time complexity of both the edge insertion and removal algorithms. Our simplified method is also intuitive to understand and implement, and its correctness is easy to argue formally. Additionally, we discuss a simplified batch insertion approach. The experiments evaluate our simplified method on 12 real and synthetic graphs with billions of vertices. Compared with the existing method, our simplified approach achieves speedups of up to 7.7x and 9.7x for edge insertion and removal, respectively.
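For readers unfamiliar with $k$-core decomposition, the classical peeling algorithm it rests on can be sketched as follows. This is a minimal quadratic-time illustration, not the paper's method: the linear-time variant replaces the min-selection with bucket ordering over degrees, and the maintenance algorithms the abstract describes update the resulting order incrementally instead of re-peeling.

```python
from collections import defaultdict

def core_decomposition(edges):
    """Compute each vertex's core number by iterative peeling:
    repeatedly remove a vertex of minimum remaining degree; a vertex's
    core number is the largest minimum degree seen up to its removal."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        v = min(remaining, key=lambda x: degree[x])  # peel min-degree vertex
        k = max(k, degree[v])
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:
            if w in remaining:
                degree[w] -= 1  # neighbor loses one remaining edge
    return core
```

On a triangle with one pendant vertex, the triangle's vertices get core number 2 and the pendant gets 1, matching the intuition that the $k$-core is the maximal subgraph in which every vertex has degree at least $k$.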
Software non-functional requirements address a multitude of objectives, expectations, and even liabilities that must be considered during development and operation. Typically, these non-functional requirements originate from different domains and their concrete scope, notion, and demarcation to functional requirements is often ambiguous. In this study we seek to categorize and analyze relevant work related to software engineering in a DevOps context in order to clarify the different focus areas, themes, and objectives underlying non-functional requirements and also to identify future research directions in this field. We conducted a systematic mapping study, including 142 selected primary studies, extracted the focus areas, and synthesized the themes and objectives of the described NFRs. In order to examine non-engineering-focused studies related to non-functional requirements in DevOps, we conducted a backward snowballing step and additionally included 17 primary studies. Our analysis revealed 7 recurrent focus areas and 41 themes that characterize NFRs in DevOps, along with typical objectives for these themes. Overall, the focus areas and themes of NFRs in DevOps are very diverse and reflect the different perspectives required to align software engineering with technical quality, business, compliance, and organizational considerations. The lack of methodological support for specifying, measuring, and evaluating fulfillment of these NFRs in DevOps-driven projects offers ample opportunities for future research in this field. Particularly, there is a need for empirically validated approaches for operationalizing non-engineering-focused objectives of software.
This paper reviews and summarizes the main process by which the close combination of computers and network communication has promoted the rapid development of information technology, and discusses the important role of a series of technical achievements in the movement and application of information. In connection with the recently popularized concept of the metaverse, this paper studies the relationship between the real world, information space and information systems, and puts forward an integrated framework of the real world and information systems. Drawing on recent research and practical results, the basic mathematical theories of information models, properties and measurement are comprehensively revised and supplemented. On this basis, taking eleven kinds of information measurement as a guide, this paper puts forward eleven kinds of measurement effects of information systems and their distribution across system links, and then analyzes eight typical dynamic configurations of information systems. Together these constitute a basic theoretical system of information system dynamics with universal significance, intended to support analysis, design, R&D and evaluation.
Human factors engineering usually emphasizes research on human-computer interaction and pays little attention to societal and organizational factors. Traditional sociotechnical systems (STS) theory has been widely used, but the STS environment exhibits many new characteristics as we enter the intelligence era, exposing the limitations of traditional STS. Based on the "user-centered design" philosophy and the perspective of human factors engineering, this paper proposes a new framework of intelligent sociotechnical systems (iSTS) and outlines the new characteristics of iSTS as well as its implications for the development of intelligent systems. Finally, this paper puts forward recommendations for future research on and application of iSTS with respect to human factors engineering methodology and its research agenda.
Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. Technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can thus be supported by three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension, which concerns the storage, analytics and visualization of big data; the human aspects of big data; and the process management dimension, which addresses big data management from both a technological and a business perspective.
Phobia is a widespread mental illness, and severe phobias can seriously impact patients' daily lives. One-session Exposure Treatment (OST) was used to treat phobias in the early days, but it has many disadvantages. As a new way to treat phobias, virtual reality exposure therapy (VRET) based on serious games has been introduced. There has been much research in the field of serious games for phobia therapy (SGPT), so this paper presents a detailed review of SGPT from three perspectives. First, SGPT has taken different forms at different stages as technology has been updated and iterated, so we review the development history of SGPT from the perspective of equipment. Second, there is no unified classification framework for the large number of SGPT systems, so we classify and organize SGPT according to the different types of phobias treated. Finally, most articles on SGPT have studied the therapeutic effects of serious games from a medical perspective, and few have studied them from a technical perspective. Therefore, we conduct in-depth research on SGPT from a technical perspective in order to provide technical guidance for the development of SGPT. Accordingly, the challenges facing the existing technology are explored and listed.
This paper describes the development of the Microsoft XiaoIce system, the most popular social chatbot in the world. XiaoIce is uniquely designed as an AI companion with an emotional connection to satisfy the human need for communication, affection, and social belonging. We take into account both intelligent quotient (IQ) and emotional quotient (EQ) in system design, cast human-machine social chat as decision-making over Markov Decision Processes (MDPs), and optimize XiaoIce for long-term user engagement, measured in expected Conversation-turns Per Session (CPS). We detail the system architecture and key components, including the dialogue manager, core chat, skills, and an empathetic computing module. We show how XiaoIce dynamically recognizes human feelings and states, understands user intents, and responds to user needs throughout long conversations. Since its release in 2014, XiaoIce has communicated with over 660 million users and succeeded in establishing long-term relationships with many of them. Analysis of large-scale online logs shows that XiaoIce has achieved an average CPS of 23, which is significantly higher than that of other chatbots and even human conversations.
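The CPS metric above is straightforward to compute from conversation logs: group turns by session and average the per-session turn counts. The sketch below assumes a hypothetical log layout (one `(session_id, turn_index)` event per turn); it illustrates the metric only, not XiaoIce's actual logging pipeline.

```python
from collections import defaultdict

def average_cps(log):
    """Average Conversation-turns Per Session (CPS).

    log: iterable of (session_id, turn_index) events, one per conversation
    turn (a user message plus a bot reply). Returns the mean number of
    turns per session, the long-term engagement metric described above.
    """
    turns_per_session = defaultdict(int)
    for session_id, _ in log:
        turns_per_session[session_id] += 1
    if not turns_per_session:
        return 0.0
    return sum(turns_per_session.values()) / len(turns_per_session)
```

A reported average CPS of 23 would mean a typical session sustains 23 such exchanges, which is why the paper treats CPS as a proxy for long-term engagement rather than single-response quality.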
Music recommender systems (MRS) have experienced a boom in recent years, thanks to the emergence and success of online streaming services, which nowadays put almost all of the world's music at the user's fingertips. While today's MRS considerably help users find interesting music in these huge catalogs, MRS research still faces substantial challenges. In particular, when it comes to building, incorporating, and evaluating recommendation strategies that integrate information beyond simple user-item interactions or content-based descriptors and dig deep into the very essence of listener needs, preferences, and intentions, MRS research becomes a big endeavor and related publications are quite sparse. The purpose of this trends and survey article is twofold. First, we identify and shed light on what we believe are the most pressing challenges facing MRS research, from both academic and industry perspectives, review the state of the art towards solving these challenges, and discuss its limitations. Second, we detail possible future directions and visions we contemplate for the further evolution of the field. The article should therefore serve two purposes: giving the interested reader an overview of current challenges in MRS research, and providing guidance for young researchers by identifying interesting yet under-researched directions in the field.
Internet of Things (IoT) infrastructure within the physical library environment is the basis for an integrative, hybrid approach to digital resource recommenders. The IoT infrastructure provides mobile, dynamic wayfinding support for items in the collection, including features for location-based recommendations. The evaluation and analysis herein clarify the nature of users' requests for recommendations based on their location, and describe the subject areas of the library for which users request recommendations. The results indicate that users of IoT-based recommendations are interested in a broad distribution of subjects, with the short head of the distribution in this collection concentrated in American and English literature. A long-tail finding shows the diversity of topics recommended to users in the library book stacks through IoT-powered recommendations.