
Reinforcement learning (RL) has been shown to learn sophisticated control policies for complex tasks including games, robotics, heating and cooling systems, and text generation. The action-perception cycle in RL, however, generally assumes that a measurement of the state of the environment is available at each time step without cost. In applications such as materials design, deep-sea and planetary robot exploration, and medicine, there can be a high cost associated with measuring, or even approximating, the state of the environment. In this paper, we survey the recently growing literature that adopts the perspective that an RL agent might not need, or even want, a costly measurement at each time step. Within this context, we propose the Deep Dynamic Multi-Step Observationless Agent (DMSOA), contrast it with the literature, and empirically evaluate it on OpenAI Gym and Atari Pong environments. Our results show that DMSOA learns a better policy with fewer decision steps and measurements than the considered alternative from the literature. The corresponding code is available at \url{https://github.com/cbellinger27/Learning-when-to-observe-in-RL}.
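For readers who want to experiment with the costly-measurement setting the abstract describes, the following is a minimal sketch, not the DMSOA implementation: a Gymnasium-style wrapper whose augmented action carries a binary "measure" flag, with a per-measurement penalty subtracted from the reward. The wrapper name, the measurement_cost parameter, and the stale-observation behaviour are illustrative assumptions.

```python
# Illustrative sketch only: a Gym-style wrapper in which the agent explicitly
# pays for measurements. The wrapper and its parameters are assumptions for
# exposition, not the DMSOA implementation from the paper.
import gymnasium as gym


class CostlyObservationWrapper(gym.Wrapper):
    """Augments the action with an 'observe' flag; observing incurs a cost."""

    def __init__(self, env, measurement_cost=0.1):
        super().__init__(env)
        self.measurement_cost = measurement_cost
        # Original action plus a binary "measure or not" decision.
        self.action_space = gym.spaces.Tuple(
            (env.action_space, gym.spaces.Discrete(2))
        )
        self._last_obs = None

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self._last_obs = obs
        return obs, info

    def step(self, action):
        env_action, measure = action
        obs, reward, terminated, truncated, info = self.env.step(env_action)
        if measure:
            reward -= self.measurement_cost   # pay for the fresh measurement
            self._last_obs = obs
        else:
            obs = self._last_obs              # agent keeps acting on a stale state
        return obs, reward, terminated, truncated, info


if __name__ == "__main__":
    env = CostlyObservationWrapper(gym.make("CartPole-v1"), measurement_cost=0.05)
    obs, _ = env.reset(seed=0)
    total = 0.0
    for t in range(200):
        action = (env.env.action_space.sample(), int(t % 3 == 0))  # measure every 3rd step
        obs, r, term, trunc, _ = env.step(action)
        total += r
        if term or trunc:
            break
    print("return with measurement costs:", total)
```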

Related content

An underlying mechanism for successful deep learning (DL) with a limited deep architecture and dataset, namely VGG-16 on CIFAR-10, was recently presented based on a quantitative method to measure the quality of a single filter in each layer. In this method, each filter identifies small clusters of possible output labels, with additional noise selected as labels out of the clusters. This feature is progressively sharpened with the layers, resulting in an enhanced signal-to-noise ratio (SNR) and higher accuracy. In this study, the suggested universal mechanism is verified for VGG-16 and EfficientNet-B0 trained on the CIFAR-100 and ImageNet datasets with the following main results. First, the accuracy progressively increases with the layers, whereas the noise per filter typically progressively decreases. Second, for a given deep architecture, the maximal error rate increases approximately linearly with the number of output labels. Third, the average filter cluster size and the number of clusters per filter at the last convolutional layer adjacent to the output layer are almost independent of the number of dataset labels in the range [3, 1,000], while a high SNR is preserved. The presented DL mechanism suggests several techniques, such as applying filter's cluster connections (AFCC), to improve the computational complexity and accuracy of deep architectures and furthermore pinpoints the simplification of pre-existing structures while maintaining their accuracies.
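As a rough illustration of the per-filter quality measure the abstract refers to, the sketch below assumes access to spatially pooled filter activations and class labels, takes each filter's top-k classes as its "cluster", and reports the ratio of in-cluster to out-of-cluster mean activation as an SNR. The cluster rule and the SNR definition are assumptions for exposition, not the paper's exact metric.

```python
# Hedged sketch: quantify how selectively a single conv filter responds to a
# small cluster of labels. The top-k cluster rule and the SNR definition are
# illustrative assumptions, not the paper's exact measure.
import numpy as np


def filter_label_clusters(activations, labels, num_classes, cluster_size=3):
    """activations: (n_samples, n_filters) spatially pooled filter responses."""
    n_samples, n_filters = activations.shape
    # Mean activation of each filter for each class.
    class_means = np.zeros((n_filters, num_classes))
    for c in range(num_classes):
        class_means[:, c] = activations[labels == c].mean(axis=0)

    clusters, snrs = [], []
    for f in range(n_filters):
        order = np.argsort(class_means[f])[::-1]
        cluster = order[:cluster_size]          # labels this filter "prefers"
        rest = order[cluster_size:]
        signal = class_means[f, cluster].mean()
        noise = class_means[f, rest].mean() + 1e-8
        clusters.append(cluster)
        snrs.append(signal / noise)
    return clusters, np.array(snrs)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 10, size=2000)
    acts = rng.random((2000, 64))
    acts[:, 0] += (labels < 3) * 2.0            # filter 0 prefers labels {0, 1, 2}
    clusters, snrs = filter_label_clusters(acts, labels, num_classes=10)
    print("filter 0 cluster:", clusters[0], "SNR:", round(snrs[0], 2))
```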

The swift progression of machine learning (ML) has not gone unnoticed in the realm of statistical mechanics. ML techniques have attracted the attention of the classical density-functional theory (DFT) community, as they enable the discovery of free-energy functionals to determine the equilibrium density profile of a many-particle system. Within DFT, the external potential accounts for the interaction of the many-particle system with an external field, thus affecting the density distribution. In this context, we introduce a statistical-learning framework to infer the external potential exerted on a many-particle system. We combine a Bayesian inference approach with the classical DFT apparatus to reconstruct the external potential, yielding a probabilistic description of the external-potential functional form with inherent uncertainty quantification. Our framework is exemplified with a grand-canonical one-dimensional particle ensemble with excluded-volume interactions in a confined geometry. The required training dataset is generated using a Monte Carlo (MC) simulation where the external potential is applied to the grand-canonical ensemble. The resulting particle coordinates from the MC simulation are fed into the learning framework to uncover the external potential. This eventually allows us to compute the equilibrium density profile of the system by using the tools of DFT. Our approach benchmarks the inferred density against the exact one calculated through the DFT formulation with the true external potential. The proposed Bayesian procedure accurately infers the external potential and the density profile. We also highlight the external-potential uncertainty quantification conditioned on the amount of available simulated data. The seemingly simple case study introduced in this work might serve as a prototype for studying a wide variety of applications, including adsorption and capillarity.
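A minimal sketch of the inference idea, under strong simplifying assumptions: for non-interacting particles the single-particle density is proportional to exp(-beta * V_ext(x)), so sampled particle positions define a likelihood over parameters of the external potential. The sinusoidal form of V_ext, the flat prior, and the grid posterior below are illustrative choices and do not reproduce the paper's DFT-based treatment of excluded-volume interactions.

```python
# Sketch assuming an ideal (non-interacting) 1D gas: positions are distributed
# as exp(-beta * V_ext(x)) / Z, so observed coordinates give a likelihood over
# the potential parameter theta. All modelling choices here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
L, beta = 1.0, 1.0
theta_true = 2.0


def V(x, theta):
    return theta * np.sin(2 * np.pi * x / L)


def sample_positions(theta, n):
    """Rejection-sample particle positions with density ~ exp(-beta * V)."""
    xs = []
    bound = np.exp(beta * abs(theta))          # max of exp(-beta*V) on [0, L]
    while len(xs) < n:
        x = rng.uniform(0, L)
        if rng.uniform() < np.exp(-beta * V(x, theta)) / bound:
            xs.append(x)
    return np.array(xs)


data = sample_positions(theta_true, 500)       # stand-in for MC particle coordinates

# Grid posterior over theta with a flat prior; the log-likelihood of the data
# is sum_i [-beta * V(x_i)] - N * log Z(theta), with Z estimated by quadrature.
thetas = np.linspace(-4.0, 4.0, 161)
grid = np.linspace(0.0, L, 400)
log_post = np.empty_like(thetas)
for i, th in enumerate(thetas):
    Z = L * np.exp(-beta * V(grid, th)).mean()
    log_post[i] = -beta * V(data, th).sum() - len(data) * np.log(Z)

post = np.exp(log_post - log_post.max())
dtheta = thetas[1] - thetas[0]
post /= post.sum() * dtheta
mean = (thetas * post).sum() * dtheta
std = np.sqrt(((thetas - mean) ** 2 * post).sum() * dtheta)
print(f"inferred theta = {mean:.2f} +/- {std:.2f}  (true value {theta_true})")
```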

Quantum reinforcement learning (QRL) has emerged as a framework to solve sequential decision-making tasks, showcasing empirical quantum advantages. A notable development is the use of quantum recurrent neural networks (QRNNs) for memory-intensive tasks such as partially observable environments. However, QRL models that incorporate QRNNs are challenging to train efficiently, since computing gradients through a QRNN is both computationally expensive and time-consuming. This work presents a novel approach to address this challenge by constructing QRL agents utilizing QRNN-based reservoirs, specifically employing quantum long short-term memory (QLSTM). The QLSTM parameters are randomly initialized and kept fixed without training. The model is trained using the asynchronous advantage actor-critic (A3C) algorithm. Through numerical simulations, we validate the efficacy of our QLSTM-Reservoir RL framework. Its performance is assessed on standard benchmarks, demonstrating results comparable to a fully trained QLSTM RL model with identical architecture and training settings.
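A classical stand-in for the architecture described above (not the paper's code): a randomly initialized, frozen LSTM plays the role of the QLSTM reservoir, and only small actor and critic heads are trained. The asynchronous A3C workers are omitted and a single synchronous update is shown.

```python
# Classical stand-in sketch: the paper's reservoir is a QLSTM; here a frozen,
# randomly initialized LSTM plays that role, and only small actor/critic heads
# are trainable. The asynchronous (A3C) worker machinery is omitted.
import torch
import torch.nn as nn


class ReservoirActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.reservoir = nn.LSTM(obs_dim, hidden, batch_first=True)
        for p in self.reservoir.parameters():        # reservoir stays fixed
            p.requires_grad_(False)
        self.actor = nn.Linear(hidden, n_actions)    # trainable policy head
        self.critic = nn.Linear(hidden, 1)           # trainable value head

    def forward(self, obs_seq):
        with torch.no_grad():                        # no gradients through the reservoir
            h, _ = self.reservoir(obs_seq)
        last = h[:, -1]                              # reservoir state after the sequence
        dist = torch.distributions.Categorical(logits=self.actor(last))
        return dist, self.critic(last).squeeze(-1)


if __name__ == "__main__":
    model = ReservoirActorCritic(obs_dim=4, n_actions=2)
    optim = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=1e-3
    )

    obs_seq = torch.randn(8, 10, 4)                  # batch of 8 length-10 observation histories
    returns = torch.randn(8)                         # placeholder Monte Carlo returns
    dist, value = model(obs_seq)
    actions = dist.sample()
    advantage = returns - value.detach()
    loss = -(dist.log_prob(actions) * advantage).mean() \
        + 0.5 * (returns - value).pow(2).mean()
    loss.backward()
    optim.step()
    print("actor-critic loss:", float(loss))
```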

When students and users of statistical methods first learn about regression analysis there is an emphasis on the technical details of models and estimation methods that invariably runs ahead of the purposes for which these models might be used. More broadly, statistics is widely understood to provide a body of techniques for "modelling data", underpinned by what we describe as the "true model myth", according to which the task of the statistician/data analyst is to build a model that closely approximates the true data generating process. By way of our own historical examples and a brief review of mainstream clinical research journals, we describe how this perspective leads to a range of problems in the application of regression methods, including misguided "adjustment" for covariates, misinterpretation of regression coefficients and the widespread fitting of regression models without a clear purpose. We then outline an alternative approach to the teaching and application of regression methods, which begins by focussing on clear definition of the substantive research question within one of three distinct types: descriptive, predictive, or causal. The simple univariable regression model may be introduced as a tool for description, while the development and application of multivariable regression models should proceed differently according to the type of question. Regression methods will no doubt remain central to statistical practice as they provide a powerful tool for representing variation in a response or outcome variable as a function of "input" variables, but their conceptualisation and usage should follow from the purpose at hand.

Matthias Eisenmann,Annika Reinke,Vivienn Weru,Minu Dietlinde Tizabi,Fabian Isensee,Tim J. Adler,Patrick Godau,Veronika Cheplygina,Michal Kozubek,Sharib Ali,Anubha Gupta,Jan Kybic,Alison Noble,Carlos Ortiz de Solórzano,Samiksha Pachade,Caroline Petitjean,Daniel Sage,Donglai Wei,Elizabeth Wilden,Deepak Alapatt,Vincent Andrearczyk,Ujjwal Baid,Spyridon Bakas,Niranjan Balu,Sophia Bano,Vivek Singh Bawa,Jorge Bernal,Sebastian Bodenstedt,Alessandro Casella,Jinwook Choi,Olivier Commowick,Marie Daum,Adrien Depeursinge,Reuben Dorent,Jan Egger,Hannah Eichhorn,Sandy Engelhardt,Melanie Ganz,Gabriel Girard,Lasse Hansen,Mattias Heinrich,Nicholas Heller,Alessa Hering,Arnaud Huaulmé,Hyunjeong Kim,Bennett Landman,Hongwei Bran Li,Jianning Li,Jun Ma,Anne Martel,Carlos Martín-Isla,Bjoern Menze,Chinedu Innocent Nwoye,Valentin Oreiller,Nicolas Padoy,Sarthak Pati,Kelly Payette,Carole Sudre,Kimberlin van Wijnen,Armine Vardazaryan,Tom Vercauteren,Martin Wagner,Chuanbo Wang,Moi Hoon Yap,Zeyun Yu,Chun Yuan,Maximilian Zenk,Aneeq Zia,David Zimmerer,Rina Bao,Chanyeol Choi,Andrew Cohen,Oleh Dzyubachyk,Adrian Galdran,Tianyuan Gan,Tianqi Guo,Pradyumna Gupta,Mahmood Haithami,Edward Ho,Ikbeom Jang,Zhili Li,Zhengbo Luo,Filip Lux,Sokratis Makrogiannis,Dominik Müller,Young-tack Oh,Subeen Pang,Constantin Pape,Gorkem Polat,Charlotte Rosalie Reed,Kanghyun Ryu,Tim Scherr,Vajira Thambawita,Haoyu Wang,Xinliang Wang,Kele Xu,Hung Yeh,Doyeob Yeo,Yixuan Yuan,Yan Zeng,Xin Zhao,Julian Abbing,Jannes Adam,Nagesh Adluru,Niklas Agethen,Salman Ahmed,Yasmina Al Khalil,Mireia Alenyà,Esa Alhoniemi,Chengyang An,Talha Anwar,Tewodros Weldebirhan Arega,Netanell Avisdris,Dogu Baran Aydogan,Yingbin Bai,Maria Baldeon Calisto,Berke Doga Basaran,Marcel Beetz,Cheng Bian,Hao Bian,Kevin Blansit,Louise Bloch,Robert Bohnsack,Sara Bosticardo,Jack Breen,Mikael Brudfors,Raphael Brüngel,Mariano Cabezas,Alberto Cacciola,Zhiwei Chen,Yucong Chen,Daniel Tianming Chen,Minjeong Cho,Min-Kook Choi,Chuantao Xie,Dana Cobzas,Julien Cohen-Adad,Jorge Corral Acero,Sujit Kumar Das,Marcela de Oliveira,Hanqiu Deng,Guiming Dong,Lars Doorenbos,Cory Efird,Sergio Escalera,Di Fan,Mehdi Fatan Serj,Alexandre Fenneteau,Lucas Fidon,Patryk Filipiak,René Finzel,Nuno R. Freitas,Christoph M. Friedrich,Mitchell Fulton,Finn Gaida,Francesco Galati,Christoforos Galazis,Chang Hee Gan,Zheyao Gao,Shengbo Gao,Matej Gazda,Beerend Gerats,Neil Getty,Adam Gibicar,Ryan Gifford,Sajan Gohil,Maria Grammatikopoulou,Daniel Grzech,Orhun Güley,Timo Günnemann,Chunxu Guo,Sylvain Guy,Heonjin Ha,Luyi Han,Il Song Han,Ali Hatamizadeh,Tian He,Jimin Heo,Sebastian Hitziger,SeulGi Hong,SeungBum Hong,Rian Huang,Ziyan Huang,Markus Huellebrand,Stephan Huschauer,Mustaffa Hussain,Tomoo Inubushi,Ece Isik Polat,Mojtaba Jafaritadi,SeongHun Jeong,Bailiang Jian,Yuanhong Jiang,Zhifan Jiang,Yueming Jin,Smriti Joshi,Abdolrahim Kadkhodamohammadi,Reda Abdellah Kamraoui,Inha Kang,Junghwa Kang,Davood Karimi,April Khademi,Muhammad Irfan Khan,Suleiman A. Khan,Rishab Khantwal,Kwang-Ju Kim,Timothy Kline,Satoshi Kondo,Elina Kontio,Adrian Krenzer,Artem Kroviakov,Hugo Kuijf,Satyadwyoom Kumar,Francesco La Rosa,Abhi Lad,Doohee Lee,Minho Lee,Chiara Lena,Hao Li,Ling Li,Xingyu Li,Fuyuan Liao,KuanLun Liao,Arlindo Limede Oliveira,Chaonan Lin,Shan Lin,Akis Linardos,Marius George Linguraru,Han Liu,Tao Liu,Di Liu,Yanling Liu,João Lourenço-Silva,Jingpei Lu,Jiangshan Lu,Imanol Luengo,Christina B. Lund,Huan Minh Luu,Yi Lv,Yi Lv,Uzay Macar,Leon Maechler,Sina Mansour L.,Kenji Marshall,Moona Mazher,Richard McKinley,Alfonso Medela,Felix Meissen,Mingyuan Meng,Dylan Miller,Seyed Hossein Mirjahanmardi,Arnab Mishra,Samir Mitha,Hassan Mohy-ud-Din,Tony Chi Wing Mok,Gowtham Krishnan Murugesan,Enamundram Naga Karthik,Sahil Nalawade,Jakub Nalepa,Mohamed Naser,Ramin Nateghi,Hammad Naveed,Quang-Minh Nguyen,Cuong Nguyen Quoc,Brennan Nichyporuk,Bruno Oliveira,David Owen,Jimut Bahan Pal,Junwen Pan,Wentao Pan,Winnie Pang,Bogyu Park,Vivek Pawar,Kamlesh Pawar,Michael Peven,Lena Philipp,Tomasz Pieciak,Szymon Plotka,Marcel Plutat,Fattaneh Pourakpour,Domen Preložnik,Kumaradevan Punithakumar,Abdul Qayyum,Sandro Queirós,Arman Rahmim,Salar Razavi,Jintao Ren,Mina Rezaei,Jonathan Adam Rico,ZunHyan Rieu,Markus Rink,Johannes Roth,Yusely Ruiz-Gonzalez,Numan Saeed,Anindo Saha,Mostafa Salem,Ricardo Sanchez-Matilla,Kurt Schilling,Wei Shao,Zhiqiang Shen,Ruize Shi,Pengcheng Shi,Daniel Sobotka,Théodore Soulier,Bella Specktor Fadida,Danail Stoyanov,Timothy Sum Hon Mun,Xiaowu Sun,Rong Tao,Franz Thaler,Antoine Théberge,Felix Thielke,Helena Torres,Kareem A. Wahid,Jiacheng Wang,YiFei Wang,Wei Wang,Xiong Wang,Jianhui Wen,Ning Wen,Marek Wodzinski,Ye Wu,Fangfang Xia,Tianqi Xiang,Chen Xiaofei,Lizhan Xu,Tingting Xue,Yuxuan Yang,Lin Yang,Kai Yao,Huifeng Yao,Amirsaeed Yazdani,Michael Yip,Hwanseung Yoo,Fereshteh Yousefirizi,Shunkai Yu,Lei Yu,Jonathan Zamora,Ramy Ashraf Zeineldin,Dewen Zeng,Jianpeng Zhang,Bokai Zhang,Jiapeng Zhang,Fan Zhang,Huahong Zhang,Zhongchen Zhao,Zixuan Zhao,Jiachen Zhao,Can Zhao,Qingshuo Zheng,Yuheng Zhi,Ziqi Zhou,Baosheng Zou,Klaus Maier-Hein,Paul F. Jäger,Annette Kopp-Schneider,Lena Maier-Hein

The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
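To make the most commonly reported strategies concrete, here is a small illustrative sketch of patch-based training data preparation and of treating a 3D volume as a series of 2D slices. Shapes, patch sizes, and function names are assumptions, not tied to any specific challenge pipeline.

```python
# Illustrative sketch of two strategies the survey reports for data that is
# too large to process at once: random 2D patch sampling, and decomposing a
# 3D volume into 2D slices. Shapes and names are assumptions.
import numpy as np


def random_patches(image, patch_size=(128, 128), n_patches=16, rng=None):
    """Sample random 2D patches from a large image."""
    rng = rng or np.random.default_rng()
    H, W = image.shape[:2]
    ph, pw = patch_size
    patches = []
    for _ in range(n_patches):
        y = rng.integers(0, H - ph + 1)
        x = rng.integers(0, W - pw + 1)
        patches.append(image[y:y + ph, x:x + pw])
    return np.stack(patches)


def volume_as_slices(volume):
    """Turn a (D, H, W) 3D volume into D separate 2D analysis tasks."""
    return [volume[d] for d in range(volume.shape[0])]


if __name__ == "__main__":
    big_image = np.zeros((2048, 2048), dtype=np.float32)
    print(random_patches(big_image).shape)                    # (16, 128, 128)
    print(len(volume_as_slices(np.zeros((64, 256, 256)))))    # 64
```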

In-context learning, i.e., learning from in-context samples, is an impressive ability of the Transformer. However, the mechanism driving in-context learning is not yet fully understood. In this study, we investigate it from the underexplored perspective of representation learning. The representation is more complex in the in-context learning scenario, where it can be affected by both the model weights and the in-context samples. We refer to these two conceptual aspects of the representation as the in-weights component and the in-context component, respectively. To study how the two components affect in-context learning capabilities, we construct a novel synthetic task, which makes it possible to devise two probes, an in-weights probe and an in-context probe, to evaluate the two components respectively. We demonstrate that the quality of the in-context component is highly correlated with in-context learning performance, which indicates the entanglement between in-context learning and representation learning. Furthermore, we find that a good in-weights component can actually benefit the learning of the in-context component, indicating that in-weights learning should be the foundation of in-context learning. To further understand the in-context learning mechanism and the importance of the in-weights component, we prove by construction that a simple Transformer, which uses pattern matching and a copy-paste mechanism to perform in-context learning, can match the in-context learning performance of a more complex, best-tuned Transformer under the assumption of a perfect in-weights component. In short, these findings from the representation learning perspective shed light on new approaches to improve in-context capacity.
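The pattern-matching-and-copying mechanism invoked in the constructive proof is essentially the induction-head idea. The following attention-free sketch shows the mechanism itself (find the most recent earlier occurrence of the query token and copy the token that followed it); it is only an illustration, not the paper's Transformer construction.

```python
# Sketch of the pattern-match-and-copy ("induction head") mechanism: to answer
# a query, look up the earlier position whose token matches the query and copy
# the token that followed it. Attention-free and purely illustrative.
from typing import Optional, Sequence


def pattern_match_and_copy(context: Sequence[str], query: str) -> Optional[str]:
    """Return the token that followed the most recent occurrence of `query`."""
    for i in range(len(context) - 2, -1, -1):   # scan backwards over (token, next) pairs
        if context[i] == query:
            return context[i + 1]
    return None                                 # no matching pattern in the context


if __name__ == "__main__":
    # In-context "training" pairs followed by a query: x1 y1 x2 y2 x3 y3, query x2 -> ?
    ctx = ["red", "A", "blue", "B", "green", "C"]
    print(pattern_match_and_copy(ctx, "blue"))  # copies "B" from the context
```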

We study feedback controller synthesis for reach-avoid control of discrete-time, linear time-invariant (LTI) systems with Gaussian process and measurement noise. The problem is to compute a controller such that, with at least some required probability, the system reaches a desired goal state in finite time while avoiding unsafe states. Due to stochasticity and nonconvexity, this problem does not admit exact algorithmic or closed-form solutions in general. Our key contribution is a correct-by-construction controller synthesis scheme based on a finite-state abstraction of a Gaussian belief over the unmeasured state, obtained using a Kalman filter. We formalize this abstraction as a Markov decision process (MDP). To be robust against numerical imprecision in approximating transition probabilities, we use MDPs with intervals of transition probabilities. By construction, any policy on the abstraction can be refined into a piecewise linear feedback controller for the LTI system. We prove that the closed-loop LTI system under this controller satisfies the reach-avoid problem with at least the required probability. The numerical experiments show that our method is able to solve reach-avoid problems for systems with up to 6D state spaces, and with control input constraints that cannot be handled by methods such as the rapidly-exploring random belief trees (RRBT).
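The Gaussian belief over the unmeasured state that the abstraction is built on comes from a standard Kalman filter; the sketch below shows one predict/update step for a toy LTI system with Gaussian process and measurement noise. The matrices are illustrative, and the interval-MDP abstraction and controller refinement are not shown.

```python
# Minimal sketch of the Gaussian belief the abstraction is built on: a Kalman
# filter for x_{k+1} = A x_k + B u_k + w_k, y_k = C x_k + v_k with Gaussian
# noise. Toy matrices; the iMDP abstraction and controller refinement are omitted.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])      # double-integrator dynamics
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])                  # only position is measured
Q = 0.01 * np.eye(2)                        # process noise covariance
R = np.array([[0.1]])                       # measurement noise covariance


def kalman_step(mean, cov, u, y):
    # Predict: propagate the Gaussian belief through the LTI dynamics.
    mean_pred = A @ mean + B @ u
    cov_pred = A @ cov @ A.T + Q
    # Update: condition the belief on the noisy measurement y.
    S = C @ cov_pred @ C.T + R
    K = cov_pred @ C.T @ np.linalg.inv(S)
    mean_new = mean_pred + K @ (y - C @ mean_pred)
    cov_new = (np.eye(2) - K @ C) @ cov_pred
    return mean_new, cov_new


if __name__ == "__main__":
    mean, cov = np.zeros((2, 1)), np.eye(2)
    for k in range(5):
        u = np.array([[0.1]])
        y = C @ mean + np.array([[0.05]])    # stand-in measurement
        mean, cov = kalman_step(mean, cov, u, y)
    print("belief mean:", mean.ravel(), "belief cov diag:", np.diag(cov))
```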

The modeling and simulation of high-dimensional multiscale systems is a critical challenge across all areas of science and engineering. It is broadly believed that, even with today's advances in computing, resolving all spatiotemporal scales described by the governing equations remains a remote target. This realization has prompted intense efforts to develop model order reduction techniques. In recent years, techniques based on deep recurrent neural networks have produced promising results for the modeling and simulation of complex spatiotemporal systems, and they offer large flexibility in model development as they can incorporate experimental and computational data. However, neural networks lack interpretability, which limits their utility and generalizability across complex systems. Here we propose a novel framework of Interpretable Learning Effective Dynamics (iLED) that offers comparable accuracy to state-of-the-art recurrent neural network-based approaches while providing the added benefit of interpretability. The iLED framework is motivated by Mori-Zwanzig and Koopman operator theory, which justifies the choice of the specific architecture. We demonstrate the effectiveness of the proposed framework in simulations of three benchmark multiscale systems. Our results show that the iLED framework can generate accurate predictions and obtain interpretable dynamics, making it a promising approach for solving high-dimensional multiscale systems.
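As a point of reference for the interpretable linear core that Koopman-motivated frameworks rely on, here is a plain DMD-style least-squares fit of a linear evolution operator from snapshot pairs. It is only a sketch of that ingredient, not the iLED architecture, which additionally includes Mori-Zwanzig-inspired memory terms.

```python
# Hedged sketch: a plain DMD-style least-squares fit of a linear evolution
# operator K with x_{t+1} ~= K x_t. This illustrates the interpretable linear
# core used by Koopman-motivated frameworks; it is not the iLED architecture.
import numpy as np

rng = np.random.default_rng(0)

# Toy trajectory of a damped rotation, observed in 2D.
theta, decay = 0.1, 0.99
K_true = decay * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
X = [np.array([1.0, 0.0])]
for _ in range(200):
    X.append(K_true @ X[-1] + 0.001 * rng.standard_normal(2))
X = np.array(X).T                      # shape (2, 201), columns are snapshots

# Least-squares operator from snapshot pairs (x_t, x_{t+1}): K = X1 @ pinv(X0).
X0, X1 = X[:, :-1], X[:, 1:]
K_fit = X1 @ np.linalg.pinv(X0)

eigvals = np.linalg.eigvals(K_fit)
print("true |eig|:", decay, " fitted |eig|:", np.abs(eigvals))
# The eigenvalues of K_fit expose growth/decay rates and frequencies directly,
# which is the sense in which such learned linear dynamics are interpretable.
```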

The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
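A toy numerical illustration of two of the ingredients surveyed above, under illustrative choices of dimensions and noise: the minimum-norm interpolant (the solution gradient descent from zero converges to in this linear setting) fits noisy training labels exactly, while the spiky component that absorbs the noise adds little to the prediction error when there are many low-variance directions.

```python
# Toy illustration of the decomposition the survey emphasizes: with many
# low-variance "junk" directions, the minimum-norm interpolant fits noisy
# labels exactly, yet fitting the noise barely hurts prediction. All
# dimensions, variances, and the noise level are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 50, 5, 2000                  # n samples, k signal directions, d features total
eps = 0.01                             # variance of the junk directions

w_star = np.zeros(d)
w_star[:k] = 1.0                       # signal lives in the k high-variance directions


def draw(m):
    X = rng.standard_normal((m, d))
    X[:, k:] *= np.sqrt(eps)           # many low-variance directions absorb the noise
    return X


X = draw(n)
y = X @ w_star + 0.5 * rng.standard_normal(n)

w_hat = np.linalg.pinv(X) @ y          # minimum-norm interpolant (limit of GD from zero)

X_test = draw(5000)
print("max train residual:", float(np.max(np.abs(X @ w_hat - y))))    # ~0: interpolates the noise
print("test MSE:", float(np.mean((X_test @ (w_hat - w_star)) ** 2)))  # well below the ~5 risk of predicting zero
```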

Deep learning constitutes a recent, modern technique for image processing and data analysis, with promising results and large potential. As deep learning has been successfully applied in various domains, it has recently also entered the domain of agriculture. In this paper, we perform a survey of 40 research efforts that employ deep learning techniques, applied to various agricultural and food production challenges. We examine the particular agricultural problems under study, the specific models and frameworks employed, the sources, nature and pre-processing of the data used, and the overall performance achieved according to the metrics used in each work under study. Moreover, we study comparisons of deep learning with other existing popular techniques with respect to differences in classification or regression performance. Our findings indicate that deep learning provides high accuracy, outperforming existing commonly used image processing techniques.
