
Building software that can support the huge growth in data and computation required by modern research needs individuals with increasingly specialist skill sets that take time to develop and maintain. The Research Software Engineering movement, which started in the UK and has been built up over recent years, aims to recognise and support these individuals. Why does research software matter to professional software development practitioners outside the research community? Research software can have a great impact on the wider world, and recent progress means the area can now be considered a more realistic option for a professional software development career. In this article we present a structure, along with supporting evidence of real-world activities, that defines four elements we believe are key to providing comprehensive and sustainable support for Research Software Engineering. We also highlight ways that the wider developer community can learn from, and engage with, these activities.

Related Content

Engineering (《工程》) is an international open-access journal launched by the Chinese Academy of Engineering (CAE) in 2015. Its aim is to provide a high-level platform for disseminating and sharing cutting-edge advances, major current research results, and key achievements in engineering R&D; to report progress in engineering science; to discuss hot topics, areas of interest, challenges, and prospects in engineering development; to consider human and environmental well-being and ethics in engineering; and to encourage engineering breakthroughs and innovations of far-reaching economic and social significance, so that they reach an internationally advanced level, become a new productive force, change the world, benefit humanity, and create a new future.
March 17, 2023

New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better. In particular, data-driven learning approaches (i.e., Machine Learning (ML)) have been a true revolution in the advancement of multiple technologies in various application domains. But at the same time, there is growing concern about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights. Although there are mechanisms in the adoption process to minimize these risks (e.g., safety regulations), these do not exclude the possibility of harm occurring, and if this happens, victims should be able to seek compensation. Liability regimes will therefore play a key role in ensuring basic protection for victims using or interacting with these systems. However, the same characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability, or their self- and continuous-learning capabilities, may lead to considerable difficulties when it comes to proving causation. This paper presents three case studies, as well as the methodology used to construct them, that illustrate these difficulties. Specifically, we address the cases of cleaning robots, delivery drones, and robots in education. The outcome of the proposed analysis suggests the need to revise liability regimes to alleviate the burden of proof on victims in cases involving AI technologies.

The arrival of handwriting recognition technologies offers new possibilities for research in heritage studies. However, it is now necessary to reflect on the experiences and practices developed by research teams. Our use of the Transkribus platform since 2018 has led us to search for the most significant ways to improve the performance of our handwritten text recognition (HTR) models, which are built to transcribe French handwriting dating from the 17th century. This article therefore reports on the impact of creating transcription protocols, using the language model at full scale, and determining the best way to use base models in order to increase the performance of HTR models. Combining all of these elements can indeed increase the performance of a single model by more than 20% (reaching a Character Error Rate below 5%). This article also discusses some challenges regarding the collaborative nature of HTR platforms such as Transkribus and the ways researchers can share the data they generate while creating or training handwritten text recognition models.
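For readers unfamiliar with the metric, the Character Error Rate (CER) cited above is the character-level edit distance between a model's transcription and the ground-truth text, divided by the length of the ground truth. Below is a minimal sketch in Python; the function names and sample strings are ours and are not part of the Transkribus API.

```python
# Minimal CER computation: edit distance per reference character.

def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance between two strings via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution (0 if characters match)
            ))
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edits needed per reference character."""
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

# One deletion error over 29 reference characters: CER ~ 0.034, i.e. below 5%.
print(f"CER = {cer('lettre du dix-septieme siecle', 'letre du dix-septieme siecle'):.3f}")
```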

Since 2021, the term "Metaverse" has attracted enormous interest. With its contained environment and built-in computing and networking capabilities, a modern car is an intriguing place to host its own small metaverse. Moreover, travellers have little to occupy them while on the road, making them ideal customers for immersive services. We define Vetaverse (Vehicular-Metaverse) as the future continuum between the vehicular industries and the Metaverse: a blended immersive realm that scales up to cities and countries, comprising digital twins of intelligent transportation systems, referred to as "TS-Metaverse", as well as customized XR services inside each individual vehicle, referred to as "IV-Metaverse". The two subcategories serve fundamentally different purposes: long-term interconnection, maintenance, monitoring, and management at scale for large transportation systems (TS), and personalized, private, and immersive infotainment services (IV). By outlining the framework of Vetaverse and examining key enabler technologies, we reveal this impending trend. We also examine unresolved issues and potential routes for future study, and highlight some intriguing Vetaverse services.

Large language models (LLMs) have transformed many fields, including natural language processing, computer vision, and reinforcement learning. These models have also made a significant impact in the field of law, where they are being increasingly utilized to automate various legal tasks, such as legal judgement prediction, legal document analysis, and legal document writing. However, the integration of LLMs into the legal field has also raised several legal problems, including privacy concerns, bias, and explainability. In this survey, we explore the integration of LLMs into the field of law. We discuss the various applications of LLMs in legal tasks, examine the legal challenges that arise from their use, and explore the data resources that can be used to specialize LLMs in the legal domain. Finally, we discuss several promising directions and conclude this paper. By doing so, we hope to provide an overview of the current state of LLMs in law and highlight the potential benefits and challenges of their integration.
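As a concrete, hedged illustration of one application named above, legal judgement prediction can be cast as zero-shot text classification with a pretrained model. The checkpoint, candidate labels, and case summary below are illustrative choices of ours, not resources recommended by the survey.

```python
# Zero-shot outcome prediction for a toy case summary.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

case_summary = (
    "The tenant withheld rent after the landlord failed to repair "
    "a documented heating defect for six months."
)
candidate_outcomes = ["claim upheld", "claim dismissed", "case remanded"]

result = classifier(case_summary, candidate_labels=candidate_outcomes)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Note that the scores arrive without any legal rationale, which is a small-scale instance of the explainability concern discussed above.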

Context: Requirements engineering (RE) is an important part of Software Engineering (SE), consisting of various human-centric activities that require the frequent collaboration of a variety of roles. Prior research has shown that personality is one such human aspect that has a huge impact on the success of a software project. However, only a limited number of empirical studies focus on the impact of personality on RE activities. Objective: The objective of this study is to explore and identify the impact of personality on RE activities, provide a better understanding of these impacts, and provide guidance on how to better handle these impacts in RE. Method: We used a mixed-methods approach, including a personality test-based survey (50 participants) and an in-depth interview study (15 participants) with software practitioners from around the world involved in RE activities. Results: Through personality test analysis, we found that a majority of the practitioners score high on the agreeableness and conscientiousness traits and average on the extraversion and neuroticism traits. Through analysis of the interviews, we found a range of impacts related to the personality traits of software practitioners, their team members, and external stakeholders. These impacts can be positive or negative, depending on the RE activities, the overall software development process, and the people involved in these activities. Moreover, we found a set of strategies that can be applied to mitigate the negative impact of personality on RE activities. Conclusion: Our identified impacts of personality on RE activities and mitigation strategies provide guidance for software practitioners on handling such personality impacts on RE activities, and for researchers on investigating these impacts in greater depth in the future.

As software has become more essential to research across disciplines, and as recognition of this fact has grown, the importance of professionalizing the development and maintenance of this software has also increased. The community of software professionals who work on this software has come together under the title Research Software Engineer (RSE) over the last decade. This has led to the formalization of RSE roles and organized RSE groups in universities, national labs, and industry. This, in turn, has created the need to understand how RSEs come into this profession and into these groups, how to further promote this career path to potential members, and what training gaps need to be filled for RSEs coming from different entry points. We have identified three main classifications of entry paths into the RSE profession and key elements, both advantages and disadvantages, that should be acknowledged and addressed by the broader research community in order to attract and retain a talented and diverse pool of future RSEs.

Along with the massive growth of the Internet since the 1990s, various innovative technologies have been created to bring users breathtaking experiences with richer virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed for immersive experience and digital transformation, but most remain isolated rather than integrated into a single platform. In this context, the metaverse, a term formed by combining "meta" and "universe", has been introduced as a shared virtual world fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among these technologies, AI has proven crucial for processing big data to enhance immersive experiences and enable human-like intelligence in virtual agents. In this survey, we explore the role of AI in the foundation and development of the metaverse. We first deliver a preliminary overview of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then present a comprehensive investigation of AI-based methods across six technical aspects with potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twin, and neural interface. Subsequently, several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied for deployment in virtual worlds. Finally, we summarize the key contributions of this survey and open some future research directions in AI for the metaverse.

Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. It is now time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or some other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
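To make the ML-based modelling category concrete, here is a minimal sketch of a learned surrogate that predicts a performance metric from design-space parameters. The features, the synthetic data, and the model choice are placeholder assumptions of ours, not taken from any surveyed work.

```python
# A surrogate model standing in for slow cycle-accurate simulation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical design points: [cache size (KB), core count, clock (GHz)].
X = rng.uniform([64, 1, 1.0], [8192, 64, 4.0], size=(500, 3))
# Synthetic stand-in for a measured metric (e.g., throughput), with noise.
y = (0.4 * np.log2(X[:, 0]) + 0.8 * np.sqrt(X[:, 1]) + 2.0 * X[:, 2]
     + rng.normal(0.0, 0.3, 500))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# A cheap learned model like this can rank candidate designs quickly
# during design-space exploration.
print(f"held-out R^2: {surrogate.score(X_test, y_test):.3f}")
```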

Over the past few years, we have seen fundamental breakthroughs in core problems in machine learning, largely driven by advances in deep neural networks. At the same time, the amount of data collected in a wide array of scientific domains is dramatically increasing in both size and complexity. Taken together, this suggests many exciting opportunities for deep learning applications in scientific settings. But a significant challenge to this is simply knowing where to start. The sheer breadth and diversity of different deep learning techniques makes it difficult to determine what scientific problems might be most amenable to these methods, or which specific combination of methods might offer the most promising first approach. In this survey, we focus on addressing this central issue, providing an overview of many widely used deep learning models, spanning visual, sequential and graph structured data, associated tasks and different training methods, along with techniques to use deep learning with less data and to better interpret these complex models; these are two central considerations for many scientific use cases. We also include overviews of the full design process, implementation tips, and links to a plethora of community-developed tutorials, research summaries, open-source deep learning pipelines, and pretrained models. We hope that this survey will help accelerate the use of deep learning across different scientific domains.
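As one concrete instance of the "less data" techniques such a survey covers, the sketch below fine-tunes only the head of a pretrained network (transfer learning). The backbone (torchvision's ResNet-18), the class count, and the random batch standing in for real data are illustrative assumptions of ours.

```python
# Transfer learning: freeze a pretrained backbone, train a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every pretrained parameter so only the new head is updated.
for param in model.parameters():
    param.requires_grad = False

num_classes = 5  # placeholder for a small scientific dataset
model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```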

Deep Convolutional Neural Networks (CNNs) are a special type of neural network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved through multiple non-linear feature extraction stages that can automatically learn hierarchical representations from data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and recently very interesting deep CNN architectures have been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and the restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in recently reported CNN architectures and consequently classifies the recent innovations in CNN architectures into seven different categories, based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, it covers the elementary understanding of CNN components and sheds light on current challenges and applications of CNNs.
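To make the "block as a structural unit" idea concrete, here is a minimal sketch of a residual block, a well-known instance of the multi-path category above. It is an illustrative reimplementation, not code from any surveyed architecture.

```python
# A residual block: two conv layers plus an identity skip connection.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The identity skip path is what makes very deep stacks trainable.
        return self.relu(self.body(x) + x)

# Blocks compose into networks the same way individual layers used to.
net = nn.Sequential(ResidualBlock(64), ResidualBlock(64))
print(net(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```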
