
High-energy solar flares and coronal mass ejections have the potential to destroy Earth's ground-based and satellite infrastructure, causing trillions of dollars in damage and mass human suffering. Destruction of these critical systems would disable power grids and satellites, crippling communications and transportation. This would lead to food shortages and an inability to respond to emergencies. A solution to this impending problem is proposed herein using satellites in solar orbit that continuously monitor the Sun, use artificial intelligence and machine learning to calculate the probability of massive solar explosions from the sensed data, and then signal defense mechanisms that will mitigate the threat. With modern technology, safeguards can only be implemented given enough warning, which is why the best algorithm must be identified and continuously trained with existing and new data to maximize the true positive rate while minimizing false negatives. This paper conducts a survey of current machine learning models using open-source solar flare prediction data. The rise of edge computing allows machine learning hardware to be placed on the same satellites as the sensor arrays, saving critical time by not having to transmit remote sensing data across the vast distances of space. A system-of-systems approach will allow enough warning for safety measures to be put into place, mitigating the risk of disaster.
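
The kind of model comparison described above can be prototyped with standard tooling. The sketch below trains two off-the-shelf classifiers on synthetic stand-ins for flare-related features and scores them with the true skill statistic, a recall-oriented metric commonly used in flare forecasting; the feature set, labels, and model choices are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: comparing flare classifiers with a recall-oriented skill score.
# The data here is synthetic; in practice one would use open-source
# magnetogram-derived features with flare/no-flare labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))  # stand-in for active-region features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

def true_skill_statistic(y_true, y_pred):
    """TSS = TPR - FPR; a common flare-forecast skill score."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn) - fp / (fp + tn)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, class_weight="balanced")):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, round(true_skill_statistic(y_te, model.predict(X_te)), 3))
```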

Related Content

Machine Learning is an international forum for research on computational approaches to learning. The journal publishes articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems. Featured papers describe research on problems and methods, applications research, and issues of research methodology. Papers on learning problems or methods are supported by solid evidence through empirical studies, theoretical analysis, or comparison to psychological phenomena. Application papers show how learning methods can be applied to solve important application problems. Research methodology papers improve how machine learning research is conducted. All papers describe their supporting evidence in ways that can be verified or replicated by other researchers, detail the components of learning, and discuss assumptions about knowledge representation and the performance task.

Cross-lingual Machine Reading Comprehension (xMRC) is challenging due to the lack of training data in low-resource languages. Recent approaches use training data only in a resource-rich language like English to fine-tune large-scale cross-lingual pre-trained language models. Due to the big difference between languages, a model fine-tuned only on a source language may not perform well for target languages. Interestingly, we observe that while the top-1 results predicted by the previous approaches may often fail to hit the ground-truth answers, the correct answers are often contained in the top-k predicted results. Based on this observation, we develop a two-stage approach to enhance the model performance. The first stage targets recall: we design a hard-learning (HL) algorithm to maximize the likelihood that the top-k predictions contain the accurate answer. The second stage focuses on precision: an answer-aware contrastive learning (AA-CL) mechanism is developed to learn the fine difference between the accurate answer and other candidates. Our extensive experiments show that our model significantly outperforms a series of strong baselines on two cross-lingual MRC benchmark datasets.
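
As a rough illustration of the precision-oriented second stage, the sketch below computes a generic InfoNCE-style contrastive loss that pushes a question representation toward the gold answer span and away from other top-k candidates. It is a stand-in under assumed tensor shapes, not the paper's AA-CL mechanism.

```python
# Hedged sketch: a generic contrastive objective over answer candidates,
# illustrating the idea of separating the gold answer from near-miss
# candidates. Shapes and inputs are made up for illustration.
import torch
import torch.nn.functional as F

def candidate_contrastive_loss(question_vec, candidate_vecs, gold_idx, temperature=0.1):
    """question_vec: (d,), candidate_vecs: (k, d), gold_idx: index of the correct span."""
    sims = F.cosine_similarity(question_vec.unsqueeze(0), candidate_vecs) / temperature  # (k,)
    return F.cross_entropy(sims.unsqueeze(0), torch.tensor([gold_idx]))

q = torch.randn(256)          # question/context representation
cands = torch.randn(5, 256)   # top-k candidate span representations
print(candidate_contrastive_loss(q, cands, gold_idx=2).item())
```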

Hyperparameter optimization (HPO) is a necessary step to ensure the best possible performance of Machine Learning (ML) algorithms. Several methods have been developed to perform HPO; most of these are focused on optimizing one performance measure (usually an error-based measure), and the literature on such single-objective HPO problems is vast. Recently, though, algorithms have appeared which focus on optimizing multiple conflicting objectives simultaneously. This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms, distinguishing between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both. We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
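
Comparing multi-objective HPO procedures ultimately comes down to reasoning about Pareto-optimal trade-offs. The sketch below extracts the Pareto front from a set of hyperparameter configurations scored on two conflicting objectives; the objective values are synthetic placeholders.

```python
# Hedged sketch: extracting the Pareto front from candidate hyperparameter
# configurations evaluated on two conflicting objectives (e.g., error and
# inference time, both minimized). Scores are synthetic.
import numpy as np

def pareto_front(scores):
    """scores: (n, m) array of objective values, all minimized.
    Returns a boolean mask of non-dominated points."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Point i is dominated if some other point is <= on all objectives
        # and strictly < on at least one.
        dominated = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(1)
scores = rng.random((50, 2))           # [error, runtime] for 50 configurations
front = scores[pareto_front(scores)]
print(front[np.argsort(front[:, 0])])  # Pareto-optimal trade-offs, sorted by error
```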

The need to approximate functions is ubiquitous in science, either due to empirical constraints or the high computational cost of accessing the function. In high-energy physics, the precise computation of the scattering cross-section of a process requires the evaluation of computationally intensive integrals. A wide variety of methods in machine learning have been used to tackle this problem, but often the motivation for using one method over another is lacking. Comparing these methods is typically highly dependent on the problem at hand, so we specialize to the case where we can evaluate the function a large number of times, after which quick and accurate evaluation can take place. We consider four interpolation and three machine learning techniques and compare their performance on three toy functions, the four-point scalar Passarino-Veltman $D_0$ function, and the two-loop self-energy master integral $M$. We find that in low dimensions ($d = 3$), traditional interpolation techniques like the Radial Basis Function perform very well, but in higher dimensions ($d=5, 6, 9$) we find that multi-layer perceptrons (a.k.a. neural networks) do not suffer as much from the curse of dimensionality and provide the fastest and most accurate predictions.
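
To make the trade-off concrete, the sketch below fits both a radial-basis-function interpolator and a small multi-layer perceptron to a toy function at two dimensionalities and compares their test RMSE. The toy function, sample sizes, and network size are arbitrary assumptions, not the paper's integrals or configurations.

```python
# Hedged sketch: RBF interpolation versus a small MLP surrogate on a toy
# function, in the spirit of the comparison above.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.neural_network import MLPRegressor

def toy(x):
    # Smooth multidimensional test function.
    return np.sin(x).sum(axis=1) + np.prod(np.cos(0.5 * x), axis=1)

rng = np.random.default_rng(0)
for d in (3, 6):
    X_train = rng.uniform(-2, 2, size=(2000, d))
    X_test = rng.uniform(-2, 2, size=(500, d))
    y_train, y_test = toy(X_train), toy(X_test)

    rbf = RBFInterpolator(X_train, y_train, neighbors=100)
    mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=0).fit(X_train, y_train)

    for name, pred in (("RBF", rbf(X_test)), ("MLP", mlp.predict(X_test))):
        rmse = np.sqrt(np.mean((pred - y_test) ** 2))
        print(f"d={d} {name}: RMSE={rmse:.4f}")
```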

Radar shows great potential for autonomous driving by accomplishing long-range sensing under diverse weather conditions. But radar is also a particularly challenging sensing modality due to radar noise. Recent works have made enormous progress in classifying free and occupied spaces in radar images by leveraging lidar label supervision. However, several issues remain unsolved. Firstly, the sensing distance of the results is limited by the sensing range of lidar. Secondly, the performance of the results is degraded by the physical sensing discrepancies between the two sensors. For example, some objects visible to lidar are invisible to radar, and some objects occluded in lidar scans are visible in radar images because of radar's penetrating capability. These sensing differences cause false positives and a degeneration of the penetrating capability, respectively. In this paper, we propose training data preprocessing and polar sliding window inference to solve these issues. The data preprocessing aims to reduce the effect caused by radar-invisible measurements in lidar scans. The polar sliding window inference aims to solve the limited sensing range issue by applying a near-range trained network to the long-range region. Instead of using the common Cartesian representation, we propose to use a polar representation to reduce the shape dissimilarity between long-range and near-range data. We find that extending a near-range trained network to long-range region inference in polar space yields 4.2 times better IoU than in Cartesian space. Besides, the polar sliding window inference can preserve the radar penetrating capability by changing the viewpoint of the inference region, which makes some occluded measurements appear non-occluded to a pretrained network.
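
The representational point about polar versus Cartesian grids can be illustrated with a simple resampling. The sketch below converts a Cartesian bird's-eye-view grid, with the sensor at its centre, into a (range, azimuth) grid by nearest-neighbour lookup; the grid sizes and resolutions are made-up values, not the paper's configuration.

```python
# Hedged sketch: resampling a Cartesian bird's-eye-view grid into a polar
# (range, azimuth) grid. Grid sizes and resolutions are illustrative only.
import numpy as np

def cartesian_to_polar_grid(cart, cell_size, n_range, n_azimuth, max_range):
    """cart: (H, W) grid with the sensor at the centre; nearest-neighbour lookup."""
    h, w = cart.shape
    ranges = (np.arange(n_range) + 0.5) * (max_range / n_range)
    azimuths = (np.arange(n_azimuth) + 0.5) * (2 * np.pi / n_azimuth)
    r, a = np.meshgrid(ranges, azimuths, indexing="ij")
    # Polar cell centres back-projected into Cartesian pixel coordinates.
    col = np.clip((r * np.cos(a) / cell_size + w / 2).astype(int), 0, w - 1)
    row = np.clip((r * np.sin(a) / cell_size + h / 2).astype(int), 0, h - 1)
    return cart[row, col]  # (n_range, n_azimuth) polar image

cart = np.zeros((400, 400))
cart[220:240, 300:320] = 1.0  # a toy "occupied" blob
polar = cartesian_to_polar_grid(cart, cell_size=0.5, n_range=200,
                                n_azimuth=360, max_range=100.0)
print(polar.shape, polar.sum())
```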

We present an approach to solving hard geometric optimization problems in the RANSAC framework. The hard minimal problems arise from relaxing the original geometric optimization problem into a minimal problem with many spurious solutions. Our approach avoids computing large numbers of spurious solutions. We design a learning strategy for selecting a starting problem-solution pair that can be numerically continued to the problem and the solution of interest. We demonstrate our approach by developing a RANSAC solver for the problem of computing the relative pose of three calibrated cameras, via a minimal relaxation using four points in each view. On average, we can solve a single problem in under 70 $\mu$s. We also benchmark and study our engineering choices on the very familiar problem of computing the relative pose of two calibrated cameras, via the minimal case of five points in two views.
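
For readers less familiar with the surrounding framework, the sketch below shows the generic RANSAC loop that a minimal solver plugs into, using two-point line fitting as a stand-in minimal problem; it is not the paper's three-view relative-pose solver, and all data are synthetic.

```python
# Hedged sketch: the generic RANSAC loop around a minimal solver, shown for
# 2D line fitting rather than multi-view relative pose.
import numpy as np

def fit_line(p1, p2):
    """Minimal solver: line through two points, returned as (a, b, c) with ax + by + c = 0."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = np.hypot(a, b)
    return np.array([a, b, c]) / norm

def ransac_line(points, n_iters=500, threshold=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.zeros(len(points), dtype=bool), None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        model = fit_line(points[i], points[j])        # minimal sample -> hypothesis
        residuals = np.abs(points @ model[:2] + model[2])
        inliers = residuals < threshold               # score hypothesis by consensus
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, model
    return best_model, best_inliers

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
inlier_pts = np.stack([x, 0.7 * x + 0.1 + rng.normal(scale=0.01, size=200)], axis=1)
outliers = rng.uniform(0, 1, size=(50, 2))
model, inliers = ransac_line(np.vstack([inlier_pts, outliers]))
print(model, inliers.sum())
```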

Continual learning (CL) learns a sequence of tasks incrementally with the goal of achieving two main objectives: overcoming catastrophic forgetting (CF) and encouraging knowledge transfer (KT) across tasks. However, most existing techniques focus only on overcoming CF and have no mechanism to encourage KT, and thus do not do well in KT. Although several papers have tried to deal with both CF and KT, our experiments show that they suffer from serious CF when the tasks do not have much shared knowledge. Another observation is that most current CL methods do not use pre-trained models, but it has been shown that such models can significantly improve the end task performance. For example, in natural language processing, fine-tuning a BERT-like pre-trained language model is one of the most effective approaches. However, for CL, this approach suffers from serious CF. An interesting question is how to make the best use of pre-trained models for CL. This paper proposes a novel model called CTR to solve these problems. Our experimental results demonstrate the effectiveness of CTR.
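
As background for the CF and KT discussion, the sketch below shows the standard bookkeeping used to quantify forgetting from a task-accuracy matrix (accuracy on each earlier task, re-measured after training on each later task). The matrix entries are made-up numbers for illustration, not results from the paper.

```python
# Hedged sketch: common continual-learning metrics computed from a
# task-accuracy matrix. acc[i, j] is accuracy on task j after training on
# task i; the values below are illustrative placeholders.
import numpy as np

acc = np.array([
    [0.90, 0.00, 0.00],
    [0.70, 0.88, 0.00],  # task 1 accuracy drops after learning task 2 -> forgetting
    [0.65, 0.80, 0.91],
])
T = acc.shape[0]

avg_accuracy = acc[-1, :].mean()
forgetting = np.mean([acc[:T - 1, j].max() - acc[-1, j] for j in range(T - 1)])
print(f"average accuracy: {avg_accuracy:.3f}, average forgetting: {forgetting:.3f}")
```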

The decline in the number of newly discovered mineral deposits and the increase in demand for different minerals in recent years have led exploration geologists to look for more efficient and innovative methods for processing different data types at each stage of mineral exploration. As a primary step, various features, such as lithological units, alteration types, structures, and indicator minerals, are mapped to aid decision-making in targeting ore deposits. Different types of remote sensing datasets, such as satellite and airborne data, make it possible to overcome common problems associated with mapping geological features. The rapid increase in the volume of remote sensing data obtained from different platforms has encouraged scientists to develop advanced, innovative, and robust data processing methodologies. Machine learning methods can help process a wide range of remote sensing datasets and determine the relationship between components such as the reflectance continuum and features of interest. These methods are robust in processing spectral and ground truth measurements against noise and uncertainties. In recent years, many studies have been carried out by supplementing geological surveys with remote sensing datasets, an approach that is now prominent in geoscience research. This paper provides a comprehensive review of the implementation and adaptation of some popular and recently established machine learning methods for processing different types of remote sensing data and investigates their applications for detecting various ore deposit types. We demonstrate the high capability of combining remote sensing data and machine learning methods for mapping different geological features that are critical for providing potential maps. Moreover, we find there is scope for advanced methods to process the new generation of remote sensing data for creating improved mineral prospectivity maps.
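
A minimal version of such a workflow is a pixel-wise classifier from spectral bands to geological classes. The sketch below uses a random forest on synthetic multispectral features with placeholder labels; the bands, labels, and class names are assumptions for illustration, not a real survey dataset.

```python
# Hedged sketch: a pixel-wise classifier mapping spectral bands to geological
# classes, in the spirit of the workflows reviewed above. All data synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_pixels, n_bands = 20000, 8  # e.g., 8 multispectral bands per pixel
X = rng.normal(size=(n_pixels, n_bands))
# Fake ground truth with three classes: background, alteration, lithological unit.
y = np.where(X[:, 5] < -1.2, 2, (X[:, 2] > 0.8).astype(int))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["background", "alteration", "lithology"]))
```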

Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning. This article discusses where links have been and should be established, introducing key concepts along the way. It argues that the hard open problems of machine learning and AI are intrinsically related to causality, and explains how the field is beginning to understand them.

Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on a large amount of target-domain data can be reduced for constructing target learners. Due to its wide application prospects, transfer learning has become a popular and promising area in machine learning. Although there are already some valuable and impressive surveys on transfer learning, these surveys introduce approaches in a relatively isolated way and lack the recent advances in transfer learning. Given the rapid expansion of the transfer learning area, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize the existing transfer learning research, as well as to summarize and interpret the mechanisms and strategies in a comprehensive way, which may help readers gain a better understanding of the current research status and ideas. Different from previous surveys, this survey paper reviews over forty representative transfer learning approaches from the perspectives of data and model. The applications of transfer learning are also briefly introduced. In order to show the performance of different transfer learning models, twenty representative transfer learning models are used for experiments. The models are evaluated on three different datasets, i.e., Amazon Reviews, Reuters-21578, and Office-31. The experimental results demonstrate the importance of selecting appropriate transfer learning models for different applications in practice.
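
One common model-based strategy in this space is parameter transfer: reuse a feature extractor trained on the source domain and fine-tune only a small head on the target domain. The sketch below illustrates the pattern; for self-containment the backbone is randomly initialised and the data are random tensors, whereas in practice the backbone would carry source-trained weights.

```python
# Hedged sketch: parameter-based transfer by freezing a source-trained
# feature extractor and fine-tuning only a small task head on target data.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU())
head = nn.Linear(64, 2)          # target task head (e.g., binary sentiment)

for p in backbone.parameters():  # freeze the source-domain knowledge
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy target-domain batch (e.g., averaged word embeddings and labels).
x, y = torch.randn(32, 300), torch.randint(0, 2, (32,))
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(head(backbone(x)), y)
    loss.backward()
    opt.step()
print(f"target-domain training loss: {loss.item():.4f}")
```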

Automated machine learning (AutoML) aims to find optimal machine learning solutions automatically for a given machine learning problem. It can relieve data scientists of the burden of the multifarious manual tuning process and give domain experts without extensive experience access to off-the-shelf machine learning solutions. In this paper, we review the current developments of AutoML in terms of three categories: automated feature engineering (AutoFE), automated model and hyperparameter learning (AutoMHL), and automated deep learning (AutoDL). State-of-the-art techniques adopted in the three categories are presented, including Bayesian optimization, reinforcement learning, evolutionary algorithms, and gradient-based approaches. We summarize popular AutoML frameworks and conclude with current open challenges of AutoML.
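
The model-and-hyperparameter-learning category can be previewed with a very small search. The sketch below jointly selects a model family and its hyperparameters by cross-validated grid search over a scikit-learn pipeline; real AutoML systems typically use Bayesian optimization or evolutionary search, so plain grid search here is a deliberately simple stand-in, and the dataset is synthetic.

```python
# Hedged sketch: a minimal AutoMHL-style search that jointly selects a model
# family and its hyperparameters with cross-validation.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=1000))])

# Each dict swaps in a different estimator together with its own hyperparameter grid.
search_space = [
    {"clf": [LogisticRegression(max_iter=1000)], "clf__C": [0.01, 0.1, 1.0, 10.0]},
    {"clf": [RandomForestClassifier(random_state=0)],
     "clf__n_estimators": [100, 300], "clf__max_depth": [None, 10]},
]
search = GridSearchCV(pipe, search_space, cv=5, n_jobs=-1).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```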
