
Quantum software systems are an emerging software engineering (SE) genre that exploits the principles of quantum bits (qubits) and quantum gates to solve complex computing problems that classical computers today cannot solve effectively in a reasonable time. According to its proponents, agile software development practices have the potential to address many of the problems endemic to the development of quantum software. However, there is a dearth of evidence confirming whether agile practices suit quantum software development and can be adopted as-is by software teams in that context. To address this gap, we conducted an empirical study to investigate the needs and challenges of using agile practices to develop quantum software. While our semi-structured interviews with 26 practitioners across 10 countries highlighted the applicability of agile practices in this domain, the interview findings also revealed new challenges impeding their effective incorporation. Our research findings provide a springboard for further contextualization and seamless integration of agile practices into the development of the next generation of quantum software.
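The superposition behaviour that makes qubits and quantum gates powerful can be illustrated with a few lines of linear algebra. Below is a minimal, illustrative sketch (not tied to any particular quantum SDK) that applies a Hadamard gate to the |0⟩ state and recovers the 50/50 measurement probabilities via the Born rule; the `apply_gate` helper is a hypothetical name introduced here.

```python
import math

# |0> and the Hadamard gate as plain Python lists (a 2x2 unitary).
ket0 = [1.0, 0.0]
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_gate(gate, state):
    """Matrix-vector product: apply a single-qubit gate to a state."""
    return [sum(g * s for g, s in zip(row, state)) for row in gate]

state = apply_gate(H, ket0)            # (|0> + |1>) / sqrt(2)
probs = [amp ** 2 for amp in state]    # Born rule (amplitudes are real here)
print(probs)
```

Measuring this state yields 0 or 1 with equal probability, which is exactly the behaviour classical bits cannot exhibit.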

Related Content


We thought data to be simply given, but reality tells otherwise: it is costly, situation-dependent, and muddled with dilemmas, constantly requiring human intervention. In the same vein, the ML community's focus on data quality is increasing, as good data is vital for successful ML systems. Nonetheless, few works have investigated dataset builders and the specifics of what they do, and struggle with, to make good data. In this study, through semi-structured interviews with 19 ML experts, we present what humans actually do and consider in each step of the data construction pipeline. We further organize their struggles under three themes: 1) trade-offs arising from real-world constraints; 2) harmonizing assorted data workers for consistency; and 3) the necessity of human intuition and tacit knowledge for processing data. Finally, we discuss why such struggles are inevitable for good data and what practitioners aspire to, with the aim of providing systematic support for data work.
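One concrete instance of harmonizing assorted data workers for consistency is measuring inter-annotator agreement. The sketch below computes Cohen's kappa, a standard chance-corrected agreement statistic, for two hypothetical annotators; the function name and the toy labels are illustrative, not taken from the study.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance agreement."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled the same.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independent labeling with each annotator's
    # empirical class frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["cat", "cat", "dog", "dog", "cat", "dog"]
b = ["cat", "dog", "dog", "dog", "cat", "cat"]
print(cohens_kappa(a, b))
```

Low kappa on a labeling batch is a typical trigger for refining guidelines or retraining annotators before more data is collected.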

In this paper, we propose a modular navigation system that can be mounted on a regular powered wheelchair to assist disabled children and the elderly with autonomous mobility and shared-control features. The lack of independent mobility drastically affects an individual's mental and physical health, making them feel less self-reliant, especially children with cerebral palsy and limited cognitive skills. To address this problem, we propose a comparatively inexpensive and modular system that uses a stereo camera to perform tasks such as path planning, obstacle avoidance, and collision detection in environments with narrow corridors. We avoid any major changes to the wheelchair's hardware, enabling easy installation, by replacing wheel encoders with a stereo camera for visual odometry. An open-source software package, the Real-Time Appearance-Based Mapping (RTAB-Map) package, running on top of the Robot Operating System (ROS), allows us to perform visual SLAM, enabling the wheelchair to map and localize itself in the environment. Path planning is performed by the move_base package provided by ROS, which quickly and efficiently computes the path trajectory for the wheelchair. In this work, we present the design and development of the system along with its significant functionalities. Further, we report experimental results from a Gazebo simulation and real-world scenarios to demonstrate the effectiveness of our proposed system with a compact form factor and a single stereo camera.
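move_base-style global planners typically search an occupancy grid with Dijkstra or A*. As an illustration of the kind of grid search involved (not the actual move_base implementation), here is a minimal A* planner with a Manhattan heuristic on a toy grid containing a narrow corridor.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle).

    Returns the length of the shortest path, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start)]  # (priority, cost so far, cell)
    best = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                new_cost = cost + 1
                if new_cost < best.get((r, c), float("inf")):
                    best[(r, c)] = new_cost
                    heapq.heappush(
                        frontier, (new_cost + h((r, c)), new_cost, (r, c)))
    return None

# A narrow corridor: the only route squeezes through column 2.
grid = [[0, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0]]
print(astar(grid, (0, 0), (3, 3)))
```

In a real stack, the grid would come from the SLAM costmap and the returned path would be handed to a local planner for velocity control.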

Quantum computing (QC) is no longer only a scientific interest but is rapidly becoming an industrially available technology that can potentially overcome the limitations of classical computing. Over the last few years, major technology giants have invested in developing hardware and programming frameworks for quantum-specific applications. QC hardware technologies are gaining momentum; however, operationalizing them triggers the need for software-intensive methodologies, techniques, processes, tools, roles, and responsibilities for developing industry-centric quantum software applications. This paper presents a vision of the quantum software engineering (QSE) life cycle, consisting of quantum requirements engineering, quantum software design, quantum software implementation, quantum software testing, and quantum software maintenance. It particularly calls for joint contributions from the software engineering research and industrial communities to present real-world solutions supporting the entire range of quantum software development activities. The proposed vision encourages researchers and practitioners to propose new processes, reference architectures, novel tools, and practices to leverage quantum computers and develop the emerging next generation of quantum software.

Context: User-Centered Design and agile methodologies both focus on human issues. Nevertheless, agile methodologies emphasize contact with contracting customers and generating value for them. Usually, communication between end users and the agile team is mediated by customers, who do not know the problems end users face in their routines. Hence, UX issues are typically identified only after implementation, during user testing and validation. Objective: Aiming to improve the understanding and definition of the problem in agile projects, this research investigates the practices and difficulties experienced by agile teams during the development of data science and process automation projects. We also analyze the benefits and the teams' perceptions regarding user participation in these projects. Method: We collected data from four agile teams in an academia-industry collaboration focused on delivering data science and process automation solutions. We applied a carefully designed questionnaire answered by developers, scrum masters, and UX designers; in total, 18 subjects answered the questionnaire. Results: From the results, we identify practices used by the teams to define and understand the problem and to represent the solution. The practices most often used are prototypes and meetings with stakeholders. Another practice that helped the teams understand the problem was the use of Lean Inceptions. Our results also reveal some issues specific to data science projects. Conclusion: We observed that end-user participation can be critical to understanding and defining the problem; users help define elements of the domain and barriers to implementation. We identified a need for approaches that facilitate user-team communication in data science projects, and for more detailed requirements representations to support data science solutions.

Quantum computing (QC) has received a great deal of attention owing to the small number of trainable parameters and the computational speed enabled by qubits. Moreover, various researchers have tried to enable quantum machine learning (QML) using QC, including multifarious efforts to use QC to implement quantum multi-agent reinforcement learning (QMARL). Classical multi-agent reinforcement learning (MARL) using neural networks exhibits non-stationarity and uncertainty due to its large number of parameters. Therefore, this paper presents a visual simulation software framework for a novel QMARL algorithm that controls autonomous multi-drone systems, taking advantage of QC. Our proposed QMARL framework achieves reasonable reward convergence and service quality performance with fewer trainable parameters than classical MARL. Furthermore, QMARL shows more stable training results than existing MARL algorithms. Lastly, our proposed visual simulation software allows us to analyze the agents' training process and results.
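The appeal of fewer trainable parameters in QMARL comes from variational quantum circuits, where a handful of rotation angles play the role of network weights. The toy sketch below (illustrative only, not the paper's framework) optimises a single Ry rotation angle by gradient ascent on the Z expectation value, using the parameter-shift rule that such circuits rely on.

```python
import math

def ry_expectation_z(theta):
    """<Z> after Ry(theta)|0>: amplitudes are (cos θ/2, sin θ/2)."""
    return math.cos(theta / 2) ** 2 - math.sin(theta / 2) ** 2  # = cos(theta)

def grad(theta):
    """Parameter-shift rule: d<Z>/dθ = (f(θ+π/2) - f(θ-π/2)) / 2."""
    s = math.pi / 2
    return (ry_expectation_z(theta + s) - ry_expectation_z(theta - s)) / 2

# Gradient ascent on <Z> with a single trainable parameter.
theta = 1.0
for _ in range(200):
    theta += 0.1 * grad(theta)
print(ry_expectation_z(theta))
```

The parameter shift gives an exact gradient without backpropagation through the circuit, which is what makes hardware-executed training of such policies feasible.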

Considering the market's competitiveness and the complexity of organizations and projects, analyzing data is crucial for decision support in software development and project management processes. These practices are essential to increase performance, reduce costs and risks of failure, and guarantee the quality of results, keeping work organized and controlled. ITLingo-Cloud is a multi-organization, multi-workspace collaborative platform for managing and analyzing data that can support translating project performance knowledge into improved decision-making. The platform allows users to quickly set up their environment, manage workspaces and technical documentation, and analyze statistics to aid both technical and business decisions. ITLingo-Cloud supports multiple technologies and languages, promotes data synchronization with templates and reusable libraries, and provides automation tasks, namely automatic data extraction, automatic validation, and document automation. The usability of ITLingo-Cloud was recently evaluated in two experiments and compared with other related approaches.

Autonomic computing investigates how systems can achieve (user-)specified control outcomes on their own, without the intervention of a human operator. Its fundamentals have been substantially influenced by those of control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and interdependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to resource ensembles (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. Such integration of AI/ML for autonomic, self-managing systems can occur at different levels of granularity, from full automation to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing across emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.
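A closed control loop of the kind described above is often structured as a Monitor-Analyze-Plan-Execute (MAPE) cycle. The following minimal sketch (a hypothetical autoscaler, not drawn from the article) resizes a replica pool toward a per-replica load target, damping changes to one replica per control period, a classic closed-loop stability concern.

```python
import math

def autoscale_step(replicas, observed_load, target_per_replica=100.0):
    """One MAPE-style pass of a closed-loop autoscaler.

    Monitor: read the current load. Analyze/Plan: compute the replica
    count that would hit the per-replica target. Execute: move toward
    it by at most one replica to avoid oscillation.
    """
    desired = max(1, math.ceil(observed_load / target_per_replica))
    if desired > replicas:
        return replicas + 1
    if desired < replicas:
        return replicas - 1
    return replicas

replicas = 2
for load in (150, 420, 420, 420, 90, 90, 90):  # fluctuating demand
    replicas = autoscale_step(replicas, load)
print(replicas)
```

An ML-driven variant would replace the fixed target and damping rule with a learned model of load, which is precisely where the AI/ML integration discussed above enters the loop.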

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications; however, we hope that our encyclopedic presentation will offer a point of reference for the rich work undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear, and we offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with extensive lists of free or open-source software implementations and publicly available databases.
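As a flavour of the methods such a review covers, simple exponential smoothing is among the oldest and most widely used forecasting techniques. The sketch below (illustrative, with made-up demand data) produces a one-step-ahead forecast by recursively blending each observation into a smoothed level.

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: return the one-step-ahead forecast.

    The smoothed level is updated as level = alpha*y + (1-alpha)*level,
    so recent observations carry geometrically more weight.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

demand = [10, 12, 11, 13, 12, 14]  # toy demand history
print(ses_forecast(demand))
```

The smoothing parameter alpha trades responsiveness against noise suppression; in practice it is chosen by minimising in-sample forecast error.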

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
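The criticality idea, tuning the initialisation so signals neither explode nor vanish with depth, can be seen in a toy experiment. The sketch below (illustrative, not from the book) pushes a unit-norm input through a stack of random linear layers with N(0, sigma_w^2/width) weights: sigma_w = 1 roughly preserves the signal's scale, while sigma_w = 0.5 collapses it exponentially.

```python
import math
import random

def propagate_norm(depth, width, sigma_w, seed=0):
    """Mean squared activation after `depth` random linear layers.

    Weights are drawn N(0, sigma_w^2 / width), the standard width
    scaling; sigma_w = 1 is the critical point for a linear network.
    """
    rng = random.Random(seed)
    x = [1.0] * width
    for _ in range(depth):
        x = [sum(rng.gauss(0, sigma_w / math.sqrt(width)) * xi for xi in x)
             for _ in range(width)]
    return sum(v * v for v in x) / width

deep_critical = propagate_norm(depth=30, width=64, sigma_w=1.0)
deep_vanishing = propagate_norm(depth=30, width=64, sigma_w=0.5)
print(deep_critical, deep_vanishing)
```

At finite width the critical norm still fluctuates from layer to layer; the book's depth-to-width ratio controls exactly how large those fluctuations grow.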

Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm drawing tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varied hardware specifications and dynamic states across participating devices. Heterogeneity can, in theory, exert a huge influence on the FL training process, e.g., causing a device to be unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings, and we build a heterogeneity-aware FL platform that complies with the standard FL protocol while taking heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. The results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two potential causes of performance degradation. Our study provides insightful implications for FL practitioners: on the one hand, our findings suggest that FL algorithm designers account for heterogeneity during evaluation; on the other hand, they urge system providers to design specific mechanisms to mitigate its impacts.
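The effect of unavailable devices on federated averaging can be sketched in a few lines. The toy simulation below (illustrative only; not the paper's platform, and the clients, dropout rate, and scalar "model" are made up) runs FedAvg rounds for mean estimation in which each device fails to report with some probability, so the aggregate drifts with whichever clients happen to survive each round.

```python
import random

def fedavg_round(global_model, client_data, lr=0.1, dropout_p=0.0, rng=None):
    """One FedAvg round for a scalar model fitting the clients' mean.

    Each client takes one gradient step on its local squared loss; with
    probability dropout_p a device is unavailable (heterogeneity) and
    its update never arrives at the server.
    """
    rng = rng or random.Random(0)
    updates = []
    for data in client_data:
        if rng.random() < dropout_p:
            continue  # straggler / failed device: update is lost
        grad = sum(global_model - y for y in data) / len(data)
        updates.append(global_model - lr * grad)
    return sum(updates) / len(updates) if updates else global_model

clients = [[1.0, 1.2], [0.9, 1.1], [5.0, 5.2]]  # one statistically skewed client
w = 0.0
rng = random.Random(42)
for _ in range(200):
    w = fedavg_round(w, clients, dropout_p=0.3, rng=rng)
print(w)
```

With no dropout the model settles at the average of the client means; with dropout the trajectory wobbles round to round, a scalar analogue of the participant bias the study identifies.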
