In this paper, we study dimension reduction techniques for large-scale controlled stochastic differential equations (SDEs). The drift of the considered SDEs contains a polynomial term satisfying a one-sided growth condition. Such nonlinearities in high-dimensional settings occur, e.g., when stochastic reaction-diffusion equations are discretized in space. We briefly discuss existence, uniqueness, and stability of solutions. (Almost) stability is then the basis for the new concepts of Gramians that we introduce and study in this work. With the help of these Gramians, dominant subspaces are identified, leading to a balancing-related, highly accurate reduced-order SDE. We provide an algebraic error criterion and an error analysis of the proposed model reduction schemes. The paper concludes by applying our method to spatially discretized reaction-diffusion equations.
In this paper, we consider a model reduction technique for stabilizable and detectable stochastic systems. It is based on a pair of Gramians that we analyze in terms of well-posedness. Subsequently, dominant subspaces of the stochastic systems are identified by exploiting these Gramians. An associated balancing-related scheme is proposed that removes unimportant information from the stochastic dynamics in order to obtain a reduced system. We show that this reduced model preserves important features like stabilizability and detectability. Additionally, a comprehensive error analysis based on the eigenvalues of the Gramian pair product is conducted. This provides an a priori criterion for the reduction quality, which we illustrate in numerical experiments.
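As an illustration of how a Gramian pair drives the balancing-related reduction described above, the following Python sketch implements classical square-root balanced truncation for a deterministic linear system; the stochastic constructions in these works generalize this idea, and the routine below is only an assumed baseline, not their exact algorithm.

```python
# Minimal sketch: Gramian-based balanced truncation for a linear system
# dx = Ax dt + Bu dt, y = Cx, assuming A is stable and the system is
# controllable and observable (so both Gramians are positive definite).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    # Controllability and observability Gramians:
    # A P + P A^T + B B^T = 0  and  A^T Q + Q A + C^T C = 0.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Square-root method: Hankel singular values from Cholesky factors.
    Lp, Lq = cholesky(P, lower=True), cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)
    S_r = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S_r     # right projection onto the dominant subspace
    W = Lq @ U[:, :r] @ S_r     # left projection (W.T @ T = I)
    Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T
    return Ar, Br, Cr, s        # s: Hankel singular values (error indicator)
```

The eigenvalues of the Gramian pair product (the squared Hankel singular values `s`) serve as the a priori indicator of reduction quality in this classical setting.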
We consider stochastic approximations of sampling algorithms, such as Stochastic Gradient Langevin Dynamics (SGLD) and the Random Batch Method (RBM) for Interacting Particle Dynamics (IPD). We observe that the noise introduced by the stochastic approximation is nearly Gaussian due to the Central Limit Theorem (CLT), while the driving Brownian motion is exactly Gaussian. We harness this structure to absorb the stochastic approximation error inside the diffusion process and obtain improved convergence guarantees for these algorithms. For SGLD, we prove the first stable convergence rate in KL divergence without requiring a uniform warm start, assuming the target density satisfies a Log-Sobolev Inequality. Our result implies superior first-order oracle complexity compared to prior works, under significantly milder assumptions. We also prove the first guarantees for SGLD under even weaker conditions such as H\"{o}lder smoothness and a Poincar\'e Inequality, thus bridging the gap between the state-of-the-art guarantees for LMC and SGLD. Our analysis motivates a new algorithm called covariance correction, which corrects for the additional noise introduced by the stochastic approximation by rescaling the strength of the diffusion. Finally, we apply our techniques to analyze RBM and significantly improve upon the guarantees in prior works (such as removing the exponential dependence on the horizon), under minimal assumptions.
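For intuition, a minimal Python sketch of a single SGLD step with a covariance-corrected diffusion term is given below; the way the gradient-noise covariance estimate enters is an illustrative assumption, not the paper's exact covariance correction algorithm.

```python
import numpy as np

def sgld_step(x, stoch_grad, step, rng, grad_cov_est=None):
    """One SGLD update: x_{k+1} = x_k - h * g(x_k) + noise.

    If an estimate of the stochastic-gradient noise covariance is supplied,
    the injected Gaussian noise is shrunk so that the total noise
    (minibatch noise plus injected noise) still has covariance 2h*I.
    This mimics the covariance-correction idea; the paper's exact
    construction may differ.
    """
    g = stoch_grad(x)
    d = x.shape[0]
    if grad_cov_est is None:
        noise_cov = 2.0 * step * np.eye(d)
    else:
        # Subtract the (step-scaled) gradient-noise covariance ...
        noise_cov = 2.0 * step * np.eye(d) - step**2 * grad_cov_est
        # ... and project onto the PSD cone to keep a valid covariance.
        w, V = np.linalg.eigh(noise_cov)
        noise_cov = (V * np.clip(w, 0.0, None)) @ V.T
    xi = rng.multivariate_normal(np.zeros(d), noise_cov)
    return x - step * g + xi
```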
A fundamental open problem in deep learning theory is how to define and understand the stability of stochastic gradient descent (SGD) close to a fixed point. Conventional literature relies on the convergence of statistical moments, especially the variance, of the parameters to quantify stability. We revisit the definition of stability for SGD and use the \textit{convergence in probability} condition to define the \textit{probabilistic stability} of SGD. The proposed notion of stability directly answers a fundamental question in deep learning theory: how SGD selects a meaningful solution for a neural network from an enormous number of solutions that may overfit badly. To achieve this, we show that only under the lens of probabilistic stability does SGD exhibit rich and practically relevant phases of learning, such as the phases of complete loss of stability, incorrect learning, convergence to low-rank saddles, and correct learning. When applied to a neural network, these phase diagrams imply that SGD prefers low-rank saddles when the underlying gradient is noisy, thereby improving the learning performance. This result is in sharp contrast to the conventional wisdom that SGD prefers flatter minima to sharp ones, which we find insufficient to explain the experimental data. We also prove that the probabilistic stability of SGD can be quantified by the Lyapunov exponents of the SGD dynamics, which can easily be measured in practice. Our work potentially opens a new avenue for addressing the fundamental question of how the learning algorithm affects the learning outcome in deep learning.
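As a hedged illustration of how Lyapunov exponents of the SGD dynamics might be measured in practice, the sketch below estimates the largest per-step exponent by tracking the growth of a small perturbation driven by the same minibatch sequence; the estimator and its parameters are generic assumptions, not the paper's exact procedure.

```python
import numpy as np

def sgd_lyapunov_exponent(params0, grad_fn, sample_batch, lr,
                          eps=1e-6, steps=2000, seed=0):
    """Estimate the largest Lyapunov exponent of an SGD trajectory by
    following an infinitesimal perturbation, with both copies driven by
    the same sequence of minibatches. A negative estimate suggests the
    trajectory is stable near the point, in the sense discussed above."""
    rng = np.random.default_rng(seed)
    x = params0.copy()
    v = rng.standard_normal(x.shape)
    v *= eps / np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(steps):
        batch = sample_batch()
        x_pert = x + v
        x = x - lr * grad_fn(x, batch)
        x_pert = x_pert - lr * grad_fn(x_pert, batch)
        v = x_pert - x
        norm = np.linalg.norm(v)
        log_growth += np.log(norm / eps)
        v *= eps / norm             # renormalize to avoid over/underflow
    return log_growth / steps       # per-iteration exponent
```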
Malware is among the most significant threats to computer security. This paper aims to overview the malware detection field, focusing on the recent and promising hardware-based approach. This approach leverages the Hardware Performance Counters already available in modern processors and the power of Machine Learning, offering attractive advantages such as resilience to disabling of the protection, resilience to unknown malware, low complexity/overhead/cost, and run-time detection. The approach is analyzed in depth in light of a generic hardware-based detection framework. Several challenges related to the approach are presented: the necessary accuracy improvements, how to deal with classification errors, how to better correlate hardware event behavior with malware activity, and essential improvements to the hardware performance monitors.
In this paper, we examine Internet of Things systems dedicated to smart cities, smart factories, connected cars, and similar applications. To support such systems over wide areas with low power consumption, energy harvesting technology that does not rely on wired charging infrastructure is one of the key issues for network longevity. Since the position of each device and the amount of energy it harvests may be unbalanced depending on the distribution of nodes and energy sources, maximizing the minimum throughput among all nodes becomes a challenging NP-hard problem. To overcome this complexity, we propose a machine-learning-based relaying topology algorithm with a novel backward-pass rate assessment method that provides a proper learning direction, together with an iterative balancing time-slot allocation algorithm that can utilize nodes with sufficient energy as relays. To validate the proposed scheme, we conduct simulations on the established system model and confirm that the proposed scheme is stable and superior to conventional schemes.
Recently, several studies have considered stochastic optimization in a heavy-tailed noise regime, i.e., the difference between the stochastic gradient and the true gradient is assumed to have a finite $p$-th moment (say, upper bounded by $\sigma^{p}$ for some $\sigma\geq0$) where $p\in(1,2]$, which not only generalizes the traditional finite variance assumption ($p=2$) but has also been observed in practice for several different tasks. Under this challenging assumption, much progress has been made for both convex and nonconvex problems; however, most of it considers only smooth objectives. In contrast, the nonsmooth setting has not been fully explored and well understood. This paper aims to fill this crucial gap by providing a comprehensive analysis of stochastic nonsmooth convex optimization with heavy-tailed noises. We revisit a simple clipping-based algorithm, which so far has only been proved to converge in expectation, and only under an additional strong convexity assumption. Under appropriate choices of parameters, for both convex and strongly convex functions, we not only establish the first high-probability rates but also give refined in-expectation bounds compared with existing works. Remarkably, all of our results are optimal (or nearly optimal up to logarithmic factors) with respect to the time horizon $T$, even when $T$ is unknown in advance. Additionally, we show how to make the algorithm parameter-free with respect to $\sigma$; in other words, the algorithm can still guarantee convergence without any prior knowledge of $\sigma$.
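For concreteness, a generic clipping-based stochastic gradient method of the kind revisited above might look like the following sketch; the step-size and clipping schedules are placeholders rather than the paper's parameter choices.

```python
import numpy as np

def clipped_sgd(x0, stoch_grad, steps, lr_schedule, clip_schedule):
    """Generic gradient clipping for heavy-tailed stochastic gradients:
    the update direction is the stochastic gradient with its norm
    truncated at a (possibly time-varying) threshold."""
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        g = stoch_grad(x)
        tau = clip_schedule(t)
        norm = np.linalg.norm(g)
        if norm > tau:
            g = g * (tau / norm)       # clip to the ball of radius tau
        x = x - lr_schedule(t) * g
    return x

# Example usage with placeholder schedules (not the paper's choices):
# x = clipped_sgd(x0, stoch_grad, steps=10_000,
#                 lr_schedule=lambda t: 0.1 / np.sqrt(t + 1),
#                 clip_schedule=lambda t: 10.0 * np.sqrt(t + 1))
```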
With the rapid development of Pattern Recognition and Computer Vision technologies, tasks such as object detection and semantic segmentation have achieved accuracy surpassing that of humans. Building on these solid foundations, autonomous driving has become an important research direction, aiming to revolutionize the future of transportation and mobility. Sensors, which perceive the surrounding environment, are critical to the safety and feasibility of autonomous driving. Multi-sensor fusion has become a research hot spot because of its potential for multidimensional perception and integration. In this paper, we propose a novel feature-level multi-sensor fusion technology for end-to-end autonomous driving navigation with imitation learning. Our paper mainly focuses on fusion technologies for Lidar and RGB information. We also provide a brand-new penalty-based imitation learning method to reinforce the model's compliance with traffic rules and to unify the objective of imitation learning with the metric of autonomous driving.
Estimation-of-distribution algorithms (EDAs) are optimization algorithms that learn a distribution on the search space from which good solutions can be sampled easily. A key parameter of most EDAs is the sample size (population size). If the population size is too small, the update of the probabilistic model builds on few samples, leading to the undesired effect of genetic drift. Too large population sizes avoid genetic drift but slow down the process. Building on a recent quantitative analysis of how the population size leads to genetic drift, we design a smart-restart mechanism for EDAs. By stopping runs when the risk of genetic drift is high, it automatically runs the EDA in good parameter regimes. Via a mathematical runtime analysis, we prove a general performance guarantee for this smart-restart scheme. In particular, this shows that in many situations where the optimal (problem-specific) parameter values are known, the restart scheme automatically finds them, leading to asymptotically optimal performance. We also conduct an extensive experimental analysis. On four classic benchmark problems, we clearly observe the critical influence of the population size on the performance, and we find that the smart-restart scheme leads to a performance close to the one obtainable with optimal parameter values. Our results also show that previous theory-based suggestions for the optimal population size can be far from optimal, leading to a performance clearly inferior to the one obtained via the smart-restart scheme. We also conduct experiments with PBIL (the cross-entropy algorithm) on two combinatorial optimization problems from the literature, the max-cut problem and the bipartition problem. Again, we observe that the smart-restart mechanism finds much better values for the population size than those suggested in the literature, leading to a much better performance.
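A hedged sketch of such a smart-restart wrapper is given below; the drift-based iteration cap, the growth factor, and the restart limit are illustrative placeholders, not the quantities derived in the runtime analysis.

```python
def smart_restart(run_eda, initial_pop_size=8, growth=2.0, max_restarts=20):
    """Smart-restart sketch for an EDA: run with a small population size,
    cap each run's iteration budget so that genetic drift is unlikely to
    have set in yet, and restart with a larger population on failure.
    The cap 4 * lam**2 and the doubling factor are placeholders."""
    lam = initial_pop_size
    for _ in range(max_restarts):
        solution, success = run_eda(pop_size=lam, max_iters=4 * lam**2)
        if success:
            return solution, lam        # satisfactory solution found
        lam = int(growth * lam)         # larger population => less genetic drift
    return None, lam                    # restart budget exhausted
```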
Typical cooperative multi-agent systems (MASs) exchange information to coordinate their motion in proximity-based consensus control schemes to complete a common objective. However, in the event of faults or cyber attacks on the on-board positioning sensors of agents, global control performance may be compromised, resulting in a hijacking of the entire MAS. For systems that operate in unknown or landmark-free environments (e.g., open terrain, sea, or air) and beyond the range/proximity sensing of nearby agents, compromised agents lose localization capabilities. To maintain resilience in these scenarios, we propose a method to recover compromised agents by utilizing Received Signal Strength Indication (RSSI) from nearby agents (i.e., mobile landmarks) to provide reliable position measurements for localization. To minimize estimation error: i) a multilateration scheme is proposed to leverage RSSI and position information received from neighboring agents acting as mobile landmarks, and ii) a Kalman filtering method that is robust to unreliable state estimates adaptively updates the unknown RSSI-based position measurement covariance matrix at runtime. The proposed framework is demonstrated with simulations of MAS formations in the presence of faults and cyber attacks on on-board position sensors.
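To illustrate the multilateration component, the following Python sketch converts RSSI readings to distances with a log-distance path-loss model and solves a linear least-squares multilateration from neighboring agents acting as mobile landmarks; the path-loss constants are assumed values, and the adaptive Kalman filtering part of the framework is not shown.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_ref_dbm=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model (reference values are illustrative)."""
    return 10.0 ** ((rssi_ref_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def multilaterate(anchor_positions, distances):
    """Linear least-squares multilateration from >= 3 non-collinear
    anchors (here: neighboring agents serving as mobile landmarks)."""
    p = np.asarray(anchor_positions, dtype=float)   # shape (m, 2)
    d = np.asarray(distances, dtype=float)          # shape (m,)
    # Subtract the last anchor's circle equation to linearize the system.
    A = 2.0 * (p[:-1] - p[-1])
    b = (np.sum(p[:-1] ** 2, axis=1) - np.sum(p[-1] ** 2)
         - d[:-1] ** 2 + d[-1] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos                                      # estimated 2D position
```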
In complex large-scale systems such as climate, important effects are caused by a combination of confounding processes that are not fully observable. The identification of sources from observations of system state is vital for attribution and prediction, which inform critical policy decisions. The difficulty of these types of inverse problems lies in the inability to isolate sources and the cost of simulating computational models. Surrogate models may enable the many-query algorithms required for source identification, but data challenges arise from the high dimensionality of the state and source, limited ensembles of costly model simulations to train a surrogate model, and few and potentially noisy state observations for inversion due to measurement limitations. The influence of auxiliary processes adds a further layer of uncertainty that confounds source identification. We introduce a framework based on (1) calibrating deep neural network surrogates to the flow maps provided by an ensemble of simulations obtained by varying sources, and (2) using these surrogates in a Bayesian framework to identify sources from observations via optimization. Focusing on an atmospheric dispersion exemplar, we find that the expressive and computationally efficient nature of the deep neural network operator surrogates in appropriately reduced dimension allows for source identification with uncertainty quantification using limited data. Introducing a variable wind field as an auxiliary process, we find that a Bayesian approximation error approach is essential for reliable source inversion when uncertainty due to wind stresses the algorithm.
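As a hedged sketch of the surrogate-accelerated Bayesian inversion step, the code below computes a MAP estimate of the source parameters by minimizing a Gaussian negative log-posterior with a trained surrogate standing in for the expensive forward model; all names and the optimizer choice are assumptions, and the Bayesian approximation error treatment of the wind field is not shown.

```python
import numpy as np
from scipy.optimize import minimize

def map_source_estimate(surrogate, observations, obs_operator,
                        prior_mean, prior_cov_inv, noise_cov_inv, s0):
    """MAP estimate of source parameters s from noisy state observations,
    with a trained surrogate replacing the expensive forward model.
    All arguments are placeholders for this sketch."""
    def neg_log_posterior(s):
        # Data misfit under a Gaussian observation-noise model ...
        misfit = observations - obs_operator(surrogate(s))
        data_term = 0.5 * misfit @ noise_cov_inv @ misfit
        # ... plus a Gaussian prior on the source parameters.
        dev = s - prior_mean
        prior_term = 0.5 * dev @ prior_cov_inv @ dev
        return data_term + prior_term

    result = minimize(neg_log_posterior, s0, method="L-BFGS-B")
    return result.x
```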