
Quantum computing is a young field that has recently attracted researchers from a broad range of disciplines due to its representational power, flexibility, and promising results in both speed and scalability. Since 2020, laboratories around the globe have started to experiment with models that lie at the intersection of machine learning and quantum computing. The availability of quantum processing units (QPUs) to the general scientific community through open APIs (e.g., Qiskit from IBM) has kindled interest in developing and testing new approaches to old problems. In this paper, we present a hybrid quantum machine learning framework for health state diagnostics and prognostics. The framework is exemplified on a ball-bearing dataset. To the best of our knowledge, this is the first attempt to leverage quantum computing to develop and apply a hybrid quantum-classical machine learning approach to a prognostics and health management (PHM) problem. We hope that this paper initiates the exploration and application of quantum machine learning algorithms in areas of risk and reliability.
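
As a rough illustration of the kind of hybrid quantum-classical pipeline described above, the sketch below builds a small variational quantum classifier for bearing health states. It uses PennyLane rather than the IBM Qiskit stack mentioned in the abstract, and the feature dimensions, layer counts, and data are placeholders, not the authors' actual architecture.

```python
# Minimal sketch of a hybrid quantum-classical classifier for bearing health states.
# Assumptions: 4 hand-crafted vibration features per sample, binary healthy/faulty labels in {-1, +1}.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, features):
    # Encode classical features as rotation angles, then apply trainable entangling layers.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def predict(weights, bias, features):
    return circuit(weights, features) + bias

def cost(weights, bias, X, y):
    # Simple square loss on +/-1 labels, averaged over the batch.
    loss = 0.0
    for x, label in zip(X, y):
        loss = loss + (predict(weights, bias, x) - label) ** 2
    return loss / len(X)

# Placeholder data: 8 samples of 4 features, labels in {-1, +1}.
X = np.array(np.random.uniform(0, np.pi, (8, n_qubits)), requires_grad=False)
y = np.array([1, -1, 1, -1, 1, -1, 1, -1], requires_grad=False)

weights = 0.01 * np.random.randn(2, n_qubits, 3, requires_grad=True)  # 2 variational layers
bias = np.array(0.0, requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
for step in range(20):
    (weights, bias), loss = opt.step_and_cost(lambda w, b: cost(w, b, X, y), weights, bias)
```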

Related Content

Context: Advances in machine learning techniques have encouraged researchers to apply them to a myriad of software engineering tasks that rely on source code analysis, such as testing and vulnerability detection. The large number of published studies, however, makes it challenging for the community to understand the current landscape. Objective: We aim to summarize the current knowledge in the area of applied machine learning for source code analysis. Method: We investigate studies belonging to twelve categories of software engineering tasks and the corresponding machine learning techniques, tools, and datasets that have been applied to solve them. To do so, we carried out an extensive literature search and identified 364 primary studies published between 2002 and 2021. We summarize our observations and findings with the help of the identified studies. Results: Our findings suggest that the use of machine learning techniques for source code analysis tasks is consistently increasing. We synthesize commonly used steps and the overall workflow for each task, and summarize the employed machine learning techniques. Additionally, we collate a comprehensive list of available datasets and tools usable in this context. Finally, we summarize the perceived challenges in this area, which include the availability of standard datasets, reproducibility and replicability, and hardware resources.
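
To make the commonly observed workflow concrete, here is a deliberately small sketch of one typical pipeline such studies follow: tokenize source code, vectorize, train a classifier, and evaluate. The character n-gram features, the scikit-learn models, and the toy snippets are illustrative assumptions, not drawn from any specific primary study.

```python
# Illustrative sketch of a common ML-for-source-code workflow:
# source snippets -> token/character n-gram features -> classifier -> evaluation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy corpus: code snippets labeled as potentially vulnerable (1) or benign (0).
snippets = [
    "strcpy(buf, user_input);",
    "strncpy(buf, user_input, sizeof(buf) - 1);",
    "system(user_cmd);",
    "execvp(argv[0], argv);",
    "int sum = a + b;",
    "printf(\"%d\\n\", count);",
]
labels = [1, 0, 1, 1, 0, 0]

# Character n-grams are a crude but language-agnostic stand-in for real code tokenization.
pipeline = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(pipeline, snippets, labels, cv=3)
print("cross-validated accuracy:", scores.mean())
```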

This work investigates how transfer learning, widely used in classical machine learning, can be extended to the emerging hybrid end-to-end quantum neural network (QNN) for spoken command recognition (SCR). Our QNN-based SCR system is composed of classical and quantum components: (1) the classical part mainly relies on a 1D convolutional neural network (CNN) to extract speech features; (2) the quantum part is built upon a variational quantum circuit with a few learnable parameters. Since it is inefficient to train the hybrid end-to-end QNN from scratch on a noisy intermediate-scale quantum (NISQ) device, we put forth a hybrid transfer learning algorithm that allows a pre-trained classical network to be transferred to the classical part of the hybrid QNN model. The pre-trained classical network is then modified and augmented by jointly fine-tuning it with a variational quantum circuit (VQC). The hybrid transfer learning methodology is particularly attractive for QNN-based SCR because low-dimensional classical features are expected to be encoded into quantum states. We assess the hybrid transfer learning algorithm on the hybrid classical-quantum QNN for SCR using the Google speech command dataset, and our classical simulation results suggest that hybrid transfer learning can boost baseline performance on the SCR task.
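
A minimal sketch of the hybrid architecture and transfer-learning idea described above: a pre-trained 1D CNN front end is frozen (and could later be unfrozen for joint fine-tuning), its low-dimensional output is encoded into a small variational quantum circuit, and a classical head produces command logits. PennyLane's TorchLayer is used here as an assumption; the layer sizes, qubit counts, and missing pretraining step are placeholders rather than the paper's exact configuration.

```python
# Sketch of a hybrid CNN + VQC model for spoken command recognition with transfer learning.
import torch
import torch.nn as nn
import pennylane as qml

n_qubits, n_layers, n_classes = 4, 2, 10
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def vqc(inputs, weights):
    # Encode the CNN features into qubit rotations, then apply trainable entangling layers.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

quantum_head = qml.qnn.TorchLayer(vqc, {"weights": (n_layers, n_qubits, 3)})

# Classical front end: assume it was pre-trained on a classical task and is reused here.
cnn = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=9, stride=4), nn.ReLU(),
    nn.Conv1d(8, 16, kernel_size=9, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, n_qubits),  # compress to a few features so they fit on a few qubits
)

model = nn.Sequential(cnn, quantum_head, nn.Linear(n_qubits, n_classes))

# Transfer learning: freeze the pre-trained CNN first, train only the quantum/classical head,
# then optionally unfreeze everything for joint fine-tuning.
for p in cnn.parameters():
    p.requires_grad = False

x = torch.randn(2, 1, 16000)   # two 1-second waveforms at 16 kHz (placeholder input)
logits = model(x)              # shape: (2, n_classes)
```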

High-energy solar flares and coronal mass ejections have the potential to destroy Earth's ground and satellite infrastructures, causing trillions of dollars in damage and mass human suffering. Destruction of these critical systems would disable power grids and satellites, crippling communications and transportation. This would lead to food shortages and an inability to respond to emergencies. A solution to this impending problem is proposed herein: satellites in solar orbit that continuously monitor the Sun, use artificial intelligence and machine learning to calculate the probability of massive solar explosions from the sensed data, and then signal defense mechanisms that will mitigate the threat. With modern technology, safeguards can only be put in place if there is enough warning, which is why the best algorithm must be identified and continuously trained with existing and new data to maximize true positive rates while minimizing false negatives. This paper conducts a survey of current machine learning models using open-source solar flare prediction data. The rise of edge computing allows machine learning hardware to be placed on the same satellites as the sensor arrays, saving critical time by not having to transmit remote sensing data across the vast distances of space. A system-of-systems approach would allow enough warning for safety measures to be put into place, mitigating the risk of disaster.
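
Since the abstract emphasizes maximizing true positives while minimizing false negatives, the following sketch shows one way a flare classifier could be trained and scored with that asymmetry in mind. The synthetic features standing in for magnetogram summary statistics, the class weighting, and the decision threshold are illustrative assumptions, not the paper's benchmark.

```python
# Sketch: flare/no-flare classifier evaluated with emphasis on recall (few false negatives).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, confusion_matrix

rng = np.random.default_rng(0)
# Placeholder features standing in for active-region magnetic-field summary statistics.
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Weight the rarer flare class more heavily so misses (false negatives) are penalized.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

# Lower the decision threshold to trade extra false positives for a higher true-positive rate.
probs = clf.predict_proba(X_te)[:, 1]
preds = (probs > 0.3).astype(int)
print("recall (true-positive rate):", recall_score(y_te, preds))
print("confusion matrix:\n", confusion_matrix(y_te, preds))
```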

A survey of machine learning techniques trained to detect ransomware is presented. This work builds upon the efforts of Taylor et al. in using sensor-based methods that utilize data collected from built-in instruments, such as CPU power and temperature monitors, to identify encryption activity. Exploratory data analysis (EDA) shows that the most useful features in this simulated data are clock speed, temperature, and CPU load. These features are used to train multiple algorithms and determine an optimal detection approach. Performance is evaluated with accuracy, F1 score, and false-negative rate metrics. With robust data preparation, a multilayer perceptron with three hidden layers achieves scores of 97% in both accuracy and F1. A random forest model produces scores of 93% accuracy and 92% F1, showing that sensor-based detection is currently a viable option for detecting even zero-day ransomware attacks before the code fully executes.
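
A small sketch of the detection setup described above, assuming tabular sensor readings (clock speed, temperature, CPU load) with ransomware/benign labels. The three-hidden-layer MLP and the random forest mirror the model families named in the abstract, but the layer widths, synthetic data, and preprocessing are placeholders.

```python
# Sketch: sensor-based ransomware detection with an MLP (three hidden layers) and a random forest.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

rng = np.random.default_rng(1)
# Placeholder features: [clock speed (GHz), temperature (C), CPU load (%)].
benign = rng.normal([2.4, 55.0, 30.0], [0.3, 5.0, 15.0], size=(500, 3))
ransom = rng.normal([3.2, 75.0, 90.0], [0.3, 5.0, 8.0], size=(500, 3))
X = np.vstack([benign, ransom])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=1)

models = {
    "mlp": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=2000, random_state=1)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    preds = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, preds).ravel()
    print(name, "acc:", accuracy_score(y_te, preds),
          "f1:", f1_score(y_te, preds), "false-negative rate:", fn / (fn + tp))
```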

Recent developments in machine learning (ML) and deep learning (DL) have expanded opportunities across many sectors. ML is a significant tool that can be applied across many disciplines, but its direct application to civil engineering problems can be challenging. ML models for civil engineering applications that are developed and validated in the lab often fail in real-world tests. This is usually attributed to a mismatch between the data used to train and test the ML model and the data it encounters in the real world, a phenomenon known as data shift. Physics-based ML addresses this problem by integrating data, partial differential equations (PDEs), and mathematical models: such models are trained to solve supervised learning tasks while respecting given laws of physics described by general nonlinear equations. Physics-based ML, which is taking center stage across many scientific disciplines, plays an important role in fluid dynamics and quantum mechanics, and can reduce demands on computational resources and data storage. This paper reviews the history of physics-based ML and its application in civil engineering.
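
To make the "data plus PDE residual" idea concrete, here is a minimal physics-informed neural network sketch in PyTorch for a toy one-dimensional problem (u'' = f with fixed boundary values), loosely in the spirit of civil-engineering PDE constraints. The equation, architecture, and loss weights are illustrative assumptions, not taken from the reviewed literature.

```python
# Minimal physics-informed neural network (PINN) sketch: fit u(x) so that u''(x) = f(x)
# on [0, 1] with u(0) = u(1) = 0, combining a physics (PDE residual) term and a boundary term.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

def pde_residual(x):
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, grad_outputs=torch.ones_like(du), create_graph=True)[0]
    f = -torch.sin(math.pi * x) * math.pi ** 2   # source term whose exact solution is u(x) = sin(pi x)
    return d2u - f

x_col = torch.rand(100, 1)              # collocation points where the PDE is enforced
x_bc = torch.tensor([[0.0], [1.0]])     # boundary points where u must vanish
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss_pde = pde_residual(x_col).pow(2).mean()   # physics loss: PDE residual
    loss_bc = net(x_bc).pow(2).mean()              # boundary-condition loss
    loss = loss_pde + 10.0 * loss_bc
    loss.backward()
    opt.step()
```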

Recent work suggests that quantum machine learning techniques can be used for classical image classification by encoding the images in quantum states and using a quantum neural network for inference. However, such work has been restricted to very small input images, at most 4 x 4, that are unrealistic and cannot even be accurately labeled by humans. The primary difficulty in using larger input images is that hitherto-proposed encoding schemes require more qubits than are physically realizable. We propose a framework to classify larger, realistic images using quantum systems. Our approach relies on a novel encoding mechanism that embeds images in quantum states while requiring fewer qubits than prior work. Our framework is able to classify images that are larger than previously possible, up to 16 x 16 for the MNIST dataset on a personal laptop, and obtains accuracy comparable to classical neural networks with the same number of learnable parameters. We also propose a technique for further reducing the number of qubits needed to represent images, which may result in an easier physical implementation at the expense of final performance. Our work enables quantum machine learning and classification on classical datasets of dimensions that were previously intractable for physically realizable quantum computers or classical simulation.
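
As a rough illustration of how a qubit-efficient encoding keeps 16 x 16 images tractable, the sketch below amplitude-encodes the 256 pixel values into 8 qubits (since 2^8 = 256) and applies a small variational classifier. Amplitude embedding is used here as a stand-in and is an assumption on my part; the paper proposes its own encoding mechanism.

```python
# Sketch: encode a 16x16 image into 8 qubits via amplitude encoding (2**8 = 256 amplitudes),
# then classify with a small variational circuit. The paper's actual encoding differs.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 8                      # log2(16 * 16) qubits suffice for amplitude encoding
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classify(weights, image):
    flat = image.reshape(256)
    qml.AmplitudeEmbedding(flat, wires=range(n_qubits), normalize=True)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))   # sign of the expectation gives a binary class

weights = 0.01 * np.random.randn(3, n_qubits, 3, requires_grad=True)
image = np.random.rand(16, 16)         # placeholder for a downsampled MNIST digit
score = classify(weights, image)       # value in [-1, 1]; threshold at 0 for a label
```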

We apply machine learning methods to predict chemical hazards, focusing on fish acute toxicity across taxa. We analyze the relevance of taxonomy and experimental setup, and show that taking them into account can lead to considerable improvements in classification performance. We quantify the gain obtained by introducing the taxonomic and experimental information, compared to classifying based on chemical information alone. We use our approach with standard machine learning models (k-nearest neighbors, random forests, and deep neural networks), as well as the recently proposed Read-Across Structure Activity Relationship (RASAR) models, which were very successful in predicting chemical hazards to mammals based on chemical similarity. We obtain accuracies of over 0.93 on datasets where, due to noise in the data, the maximum achievable accuracy is expected to be below 0.95, which corresponds to an effective accuracy of 0.98. The best performances are obtained by random forests and RASAR models. We analyze metrics to compare our results with animal test reproducibility, and although most of our models 'outperform animal test reproducibility' as measured by recently proposed metrics, we show that the comparison between machine learning performance and animal test reproducibility should be treated with particular care. While we focus on fish mortality, our approach, provided that the right data is available, is valid for any combination of chemicals, effects, and taxa.
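
The sketch below illustrates the core idea of augmenting chemical descriptors with taxonomic and experimental-setup features before training a random forest. The descriptor columns, species and exposure encodings, and toxicity threshold are placeholder assumptions, not the datasets or features used in the study.

```python
# Sketch: predict acute-toxicity class from chemical descriptors plus taxonomy and experimental setup.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "logP": rng.normal(2.0, 1.0, n),              # placeholder chemical descriptors
    "mol_weight": rng.normal(250.0, 60.0, n),
    "species": rng.choice(["O. mykiss", "D. rerio", "P. promelas"], n),   # taxonomy
    "exposure_h": rng.choice([24, 48, 96], n),    # experimental setup (exposure duration)
})
y = (df["logP"] + 0.01 * df["mol_weight"] + rng.normal(0, 0.5, n) > 4.5).astype(int)  # toxic / non-toxic

# One-hot encode the categorical taxonomy and experiment columns, pass numeric descriptors through.
pre = ColumnTransformer([
    ("cat", OneHotEncoder(), ["species", "exposure_h"]),
], remainder="passthrough")
model = make_pipeline(pre, RandomForestClassifier(n_estimators=300, random_state=2))
print("CV accuracy:", cross_val_score(model, df, y, cv=5).mean())
```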

Human-in-the-loop machine learning aims to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing work on human-in-the-loop from a data perspective and classify it into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and briefly classify and discuss their use in natural language processing, computer vision, and other domains. We also outline open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop machine learning and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
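
As one concrete data-centric pattern from category (1), the sketch below shows an uncertainty-sampling active-learning loop in which a human oracle labels only the examples the model is least sure about. The dataset, model, and `human_label` stand-in are illustrative assumptions rather than a method taken from the survey.

```python
# Sketch: human-in-the-loop via uncertainty sampling; a human oracle labels the most uncertain points.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_oracle = make_classification(n_samples=500, n_features=10, random_state=3)

def human_label(i):
    return y_oracle[i]                           # placeholder for a real human annotator

labeled = list(range(10))                        # small seed set labeled up front
labels = {i: human_label(i) for i in labeled}
unlabeled = [i for i in range(500) if i not in labels]

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    model.fit(X[labeled], [labels[i] for i in labeled])
    probs = model.predict_proba(X[unlabeled])[:, 1]
    uncertainty = np.abs(probs - 0.5)            # probabilities near 0.5 = least confident
    queries = [unlabeled[j] for j in np.argsort(uncertainty)[:20]]
    for i in queries:                            # ask the human only about uncertain examples
        labels[i] = human_label(i)
        labeled.append(i)
    unlabeled = [i for i in unlabeled if i not in labels]
    print(f"round {round_}: {len(labeled)} labeled, accuracy={model.score(X, y_oracle):.3f}")
```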

Over the past decades, numerous practical applications of machine learning techniques have shown the potential of data-driven approaches in a large number of computing fields. Machine learning is increasingly included in computing curricula in higher education, and a quickly growing number of initiatives are expanding it in K-12 computing education, too. As machine learning enters K-12 computing education, understanding how intuition and agency in the context of such systems are developed becomes a key research area. But as schools and teachers are already struggling to integrate traditional computational thinking and traditional artificial intelligence into school curricula, understanding the challenges behind teaching machine learning in K-12 is an even more daunting problem for computing education research. Despite the central position of machine learning in the field of modern computing, the computing education research literature contains remarkably few studies of how people learn to train, test, improve, and deploy machine learning systems. This is especially true of the K-12 curriculum space. This article charts the emerging trajectories in educational practice, theory, and technology related to teaching machine learning in K-12 education. The article situates the existing work in the context of computing education in general, and describes some differences that K-12 computing educators should take into account when facing this challenge. The article focuses on key aspects of the paradigm shift that will be required in order to successfully integrate machine learning into the broader K-12 computing curricula. A crucial step is abandoning the belief that rule-based "traditional" programming is a central aspect and building block in developing next-generation computational thinking.

Graphs are widely used as a popular representation of the network structure of connected data. Graph data can be found in a broad spectrum of application domains such as social systems, ecosystems, biological networks, knowledge graphs, and information systems. With the continuous penetration of artificial intelligence technologies, graph learning (i.e., machine learning on graphs) is gaining attention from both researchers and practitioners. Graph learning proves effective for many tasks, such as classification, link prediction, and matching. Generally, graph learning methods extract relevant features of graphs by taking advantage of machine learning algorithms. In this survey, we present a comprehensive overview on the state-of-the-art of graph learning. Special attention is paid to four categories of existing graph learning methods, including graph signal processing, matrix factorization, random walk, and deep learning. Major models and algorithms under these categories are reviewed respectively. We examine graph learning applications in areas such as text, images, science, knowledge graphs, and combinatorial optimization. In addition, we discuss several promising research directions in this field.
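
To ground one of the surveyed families, the sketch below implements a single graph-convolution-style propagation step (normalized adjacency times features times weights) in plain NumPy. The toy graph and feature dimensions are illustrative assumptions; in practice, dedicated graph learning libraries would be used.

```python
# Sketch: one graph-convolution propagation step, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
import numpy as np

# Toy undirected graph with 4 nodes and 3-dimensional node features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.rand(4, 3)                 # node feature matrix
W = np.random.rand(3, 2)                 # weight matrix (trainable in a real model, random here)

A_hat = A + np.eye(4)                    # add self-loops so each node keeps its own features
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt # symmetric normalization of the adjacency matrix

H_next = np.maximum(A_norm @ H @ W, 0)   # aggregate neighbors, transform, apply ReLU
print(H_next.shape)                      # (4, 2): new 2-dimensional node embeddings
```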
