Dynamic mode decomposition (DMD) is an emerging methodology that has recently attracted computational scientists working on nonintrusive reduced order modeling. One of the major strengths of DMD is its solid theoretical grounding in Koopman approximation theory. Indeed, DMD may be viewed as a data-driven realization of the Koopman operator. Nonetheless, a stable implementation of DMD requires computing the singular value decomposition of the input data matrix, which makes the process computationally demanding for high dimensional systems. In order to alleviate this burden, we develop a framework based on sketching methods, wherein a sketch of a matrix is simply another, significantly smaller matrix that still approximates the original system sufficiently well. Such a sketch, or embedding, is produced by applying random transformations with certain properties to the input matrix, yielding a compressed version of the initial system. Many of the expensive computations can then be carried out on the smaller matrix, thereby accelerating the solution of the original problem. We conduct numerical experiments using the spherical shallow water equations as a prototypical model in the context of geophysical flows. The performance of several sketching approaches is evaluated for capturing the range and co-range of the data matrix. The proposed sketching-based framework can accelerate various portions of the DMD algorithm compared to classical methods that operate directly on the raw input data, ultimately leading to substantial computational gains that are vital for digital twinning of high dimensional systems.
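To make the sketching idea concrete, the following minimal sketch (an illustration of the general technique, not the exact pipeline of this work; all matrix names and sizes are hypothetical) compresses a tall snapshot matrix with a Gaussian test matrix so that the SVD, the bottleneck of a stable DMD implementation, is computed on a much smaller matrix:

```python
import numpy as np

def sketched_svd(X, k, p=10, seed=None):
    """Approximate rank-k SVD of a tall matrix X via a Gaussian sketch.

    A random test matrix compresses X to k + p columns; the expensive
    factorization is then performed on the small sketch only.
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((X.shape[1], k + p))  # random test matrix
    Q, _ = np.linalg.qr(X @ Omega)                    # basis for the sketched range
    B = Q.T @ X                                       # small (k+p) x m co-range sketch
    Ub, S, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], S[:k], Vt[:k, :]            # lift left factors back

# Illustrative snapshot matrix: 10000 states, 200 snapshots.
X = np.random.default_rng(0).standard_normal((10000, 200))
U, S, Vt = sketched_svd(X, k=20)
```

The QR step captures the range of the data matrix, and `B = Q.T @ X` plays the role of a co-range sketch, mirroring the two quantities on which the sketching schemes are evaluated in the experiments.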
This work proposes a new model reduction framework for parametric complex systems. The framework employs a popular model reduction technique, dynamic mode decomposition (DMD), which is capable of combining data-driven learning with physics ingredients grounded in Koopman operator theory. In the offline step of the proposed framework, DMD constructs a low-rank linear surrogate model for the high dimensional quantities of interest (QoIs) derived from (nonlinear) complex high fidelity models (HFMs) of unknown form. In the online step, the resulting local reduced order bases (ROBs) and parametric reduced order models (PROMs) at the training parameter sample points are interpolated to construct a new PROM, with the corresponding ROB, for a new set of target/test parameter values. The interpolations need to be performed on the appropriate manifolds within consistent sets of generalized coordinates. The proposed framework is illustrated by numerical examples for both linear and nonlinear problems. In particular, its advantages in computational cost and accuracy are demonstrated by comparison with projection-based proper orthogonal decomposition (POD) PROMs and Kriging.
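As background for the offline step, a minimal sketch of standard exact DMD (the manifold interpolation of ROBs and PROMs across parameters is not shown; names and the rank are illustrative) might look as follows:

```python
import numpy as np

def exact_dmd(snapshots, r):
    """Standard exact DMD: fit a rank-r linear surrogate x_{k+1} ~ A x_k."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]     # time-shifted snapshot pairs
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], S[:r], Vt[:r, :].T      # rank-r ROB from the SVD
    A_tilde = Ur.T @ Y @ Vr / Sr                   # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vr / Sr @ W                        # exact DMD modes
    return A_tilde, eigvals, modes
```

In the proposed framework, such a reduced operator and ROB would be built at each training parameter sample; the online step then interpolates these local quantities on appropriate manifolds for unseen parameter values.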
The perfectly matched layer (PML) formulation is a prominent way of handling radiation problems in unbounded domains and has gained interest due to its simple implementation in finite element codes. Its simplicity can be taken further, however, within the isogeometric framework. This work presents a spline-based PML formulation that avoids an additional coordinate transformation, since the formulation is posed in the same spline space in which the numerical solution is sought. The procedure can be automated for any convex artificial boundary. This removes restrictions on constructing the PML domain and can therefore reduce computational cost and improve mesh quality. The use of spline basis functions with higher continuity also improves the accuracy of the numerical solution.
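For context, the classical PML is usually derived from a complex coordinate stretch of the form $\tilde{x}(x) = x + \frac{i}{\omega}\int_0^x \sigma(s)\,\mathrm{d}s$ (sign conventions vary with the assumed time dependence), where the absorption profile $\sigma$ vanishes in the physical domain and is positive inside the layer, so that outgoing waves decay exponentially there; the spline-based formulation builds this absorbing behavior into the discretization space itself, which is what removes the need for the explicit transformation.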
On-demand delivery has become increasingly popular around the world. Motivated by a large grocery chain store that offers fast on-demand delivery services, we model and solve a stochastic dynamic driver dispatching and routing problem for last-mile delivery systems where on-time performance is the main target. The system operator must dispatch a set of drivers and specify their delivery routes while facing random demand that arrives over a fixed number of periods. The resulting stochastic dynamic program is challenging to solve due to the curse of dimensionality. We propose a novel structured approximation framework that approximates the value function via a parametrized dispatching and routing policy. We analyze the structural properties of the approximation framework and establish its performance guarantee under large-demand scenarios. We then develop efficient exact algorithms for the approximation problem based on Benders decomposition and column generation, which deliver verifiably optimal solutions within minutes. Evaluation results on a real-world data set show that our framework outperforms the company's current policy by 36.53% on average in terms of delivery time. We also perform several policy experiments to understand the value of dynamic dispatching and routing under varying fleet sizes and dispatch frequencies.
This manuscript gives a theoretical framework for a new Hilbert space of functions, the so-called occupation kernel Hilbert space (OKHS), whose elements operate on collections of signals rather than on real or complex numbers. To support this new definition, an explicit class of OKHSs is constructed through the consideration of a reproducing kernel Hilbert space (RKHS). This space enables the definition of nonlocal operators, such as fractional order Liouville operators, as well as spectral decomposition methods for the corresponding fractional order dynamical systems. A fractional order DMD routine is presented, and the details of the finite rank representations are given. Significantly, despite the added theoretical content of the OKHS formulation, the resulting computations differ only slightly from those of occupation kernel DMD methods for integer order systems posed over RKHSs.
This paper proposes a numerical method based on the Adomian decomposition approach for the time discretization, applied to the Euler equations. A recursive property is demonstrated that allows the method to be formulated in an appropriate and efficient way. To obtain a fully numerical scheme, the space discretization is carried out using classical discontinuous Galerkin (DG) techniques. The efficiency of the resulting numerical scheme is demonstrated through numerical tests, by comparison with exact solutions and with results from the popular Runge-Kutta DG method.
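To illustrate the recursive Adomian time discretization on the simplest possible example (a scalar model ODE, not the Euler equations treated in the paper; all names are illustrative), one step of such a scheme for $u' = u^2$ can be written as:

```python
import numpy as np

def adomian_step(u0, dt, n_terms=6):
    """One Adomian time step for the scalar model ODE u' = u**2.

    Each series term u_n(tau) is a polynomial in tau (coefficient arrays);
    the Adomian polynomials of N(u) = u**2 are A_n = sum_{i+j=n} u_i * u_j.
    """
    terms = [np.array([u0])]                   # u_0: the initial value
    for n in range(n_terms - 1):
        A = np.zeros(n + 1)
        for i in range(n + 1):
            A += np.convolve(terms[i], terms[n - i])   # degree-n polynomial
        # recursion: u_{n+1}(tau) = integral_0^tau A_n(s) ds
        terms.append(np.concatenate(([0.0], A / np.arange(1, len(A) + 1))))
    powers = dt ** np.arange(n_terms + 1)
    return sum(np.dot(p, powers[: len(p)]) for p in terms)

# March u' = u**2, u(0) = 1; the exact solution is 1 / (1 - t).
u, dt = 1.0, 0.01
for _ in range(50):
    u = adomian_step(u, dt)
```

Each step sums a truncated Adomian series built by the recursion above; in the paper this recursive time structure is combined with a DG discretization in space.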
High-dimensional and sparse (HiDS) matrices are frequently encountered in big-data applications such as e-commerce systems or social network services. Performing highly accurate representation learning on them is of great significance, owing to the desire to extract latent knowledge and patterns from such data. Latent factor analysis (LFA), which represents an HiDS matrix by learning low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models perform such embeddings on an HiDS matrix directly, without exploiting its hidden graph structures, thereby incurring accuracy loss. To address this issue, this paper proposes a graph-incorporated latent factor analysis (GLFA) model. It adopts two ideas: 1) a graph is constructed to identify the hidden high-order interaction (HOI) among nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed to incorporate HOI, thereby improving the representation learning ability of the resulting model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which clearly supports its strong representation learning ability on HiDS data.
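For reference, the plain LFA baseline that GLFA extends trains low-rank embeddings by stochastic gradient descent over the observed entries only; a minimal sketch (the graph construction and recurrent HOI structure of GLFA are not shown, and all names and hyperparameters are illustrative) is:

```python
import numpy as np

def lfa_sgd(observed, n_rows, n_cols, rank=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Plain latent factor analysis on an HiDS matrix.

    observed: iterable of (i, j, value) triples for the known entries only.
    Learns embeddings P, Q such that P[i] @ Q[j] approximates each value.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_rows, rank))
    Q = 0.1 * rng.standard_normal((n_cols, rank))
    for _ in range(epochs):
        for i, j, v in observed:
            err = v - P[i] @ Q[j]             # error on one observed entry
            grad_p = err * Q[j] - reg * P[i]  # regularized gradients
            grad_q = err * P[i] - reg * Q[j]
            P[i] += lr * grad_p
            Q[j] += lr * grad_q
    return P, Q
```

Missing entries are then predicted as `P[i] @ Q[j]`; GLFA improves on this baseline by feeding graph-derived high-order interactions into a recurrent variant of the same factorization.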
We introduce a novel methodology for particle filtering in dynamical systems where the evolution of the signal of interest is described by an SDE and observations are collected instantaneously at prescribed time instants. The new approach includes the discretisation of the SDE and the design of efficient particle filters for the resulting discrete-time state-space model. The discretisation scheme converges with weak order 1 and is devised to create a sequential dependence structure along the coordinates of the discrete-time state vector. We introduce a class of space-sequential particle filters that exploits this structure to improve performance when the system dimension is large. This is numerically illustrated by a set of computer simulations for a stochastic Lorenz 96 system with additive noise. The new space-sequential particle filters attain approximately constant estimation errors as the dimension of the Lorenz 96 system is increased, with a computational cost that grows polynomially, rather than exponentially, with the system dimension. Besides the new numerical scheme and particle filters, we provide a general framework for discrete-time filtering in continuous-time dynamical systems described by an SDE with instantaneous observations. Provided that the SDE is discretised using a weakly convergent scheme, we prove that the marginal posterior laws of the resulting discrete-time state-space model converge, under a suitably defined metric, to the marginal posterior laws of the original continuous-time state-space model. This result is general and not restricted to the numerical scheme or particle filters specifically studied in this manuscript.
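As a point of reference, the following minimal sketch combines an Euler-Maruyama discretisation (weak order 1) of a Lorenz 96 SDE with the standard bootstrap particle filter; this is the classical baseline, not the space-sequential filters proposed here, and the model and noise parameters are illustrative:

```python
import numpy as np

def lorenz96(x, F=8.0):
    """Lorenz 96 drift, vectorised over particles along the last axis."""
    return (np.roll(x, -1, -1) - np.roll(x, 2, -1)) * np.roll(x, 1, -1) - x + F

def bootstrap_pf(y_obs, x0, dt, steps_per_obs, sigma, obs_std, n_part=500, seed=0):
    """Bootstrap particle filter for instantaneous noisy observations."""
    rng = np.random.default_rng(seed)
    particles = x0 + 0.1 * rng.standard_normal((n_part, x0.size))
    means = []
    for y in y_obs:
        for _ in range(steps_per_obs):   # Euler-Maruyama between observations
            particles += lorenz96(particles) * dt \
                + sigma * np.sqrt(dt) * rng.standard_normal(particles.shape)
        logw = -0.5 * np.sum((y - particles) ** 2, axis=1) / obs_std**2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ particles)      # posterior-mean estimate
        particles = particles[rng.choice(n_part, n_part, p=w)]  # resampling
    return np.array(means)
```

The space-sequential filters proposed in the paper instead exploit a sequential dependence structure along the state coordinates, which this all-at-once baseline does not use.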
Present-day atomistic simulations generate long trajectories of ever more complex systems. Analyzing these data, discovering metastable states, and uncovering their nature is becoming increasingly challenging. In this paper, we first use the variational approach to conformation dynamics to discover the slowest dynamical modes of the simulations. This allows the different metastable states of the system to be located and organized hierarchically. The physical descriptors that characterize the metastable states are then discovered by means of a machine learning method. We show, in the cases of two proteins, Chignolin and Bovine Pancreatic Trypsin Inhibitor, how such an analysis can be performed effortlessly in a matter of seconds. Another strength of our approach is that it can be applied to the analysis of both unbiased and biased simulations.
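In its simplest linear form, the variational approach to conformation dynamics reduces to a time-lagged generalized eigenvalue problem over trajectory features (closely related to TICA); a minimal illustrative sketch, with hypothetical names, is:

```python
import numpy as np
from scipy.linalg import eigh

def slow_modes(features, lag, k=3):
    """Linear variational estimate of the k slowest dynamical modes.

    features: (T, d) array of physical descriptors along a trajectory.
    Solves C_tau v = lambda C_0 v; eigenvalues near 1 mark slow modes.
    """
    X = features - features.mean(axis=0)
    X0, Xt = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0) + 1e-10 * np.eye(X.shape[1])  # covariance
    Ct = 0.5 * (X0.T @ Xt + Xt.T @ X0) / len(X0)           # lagged, symmetrised
    lam, V = eigh(Ct, C0)                # generalized symmetric eigenproblem
    order = np.argsort(lam)[::-1]        # slowest modes first
    return lam[order[:k]], V[:, order[:k]]
```

Clustering the trajectory in the resulting slow coordinates is what exposes the metastable states, whose physical descriptors are then identified by the machine learning step described above.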
We provide a new analysis of local SGD, removing unnecessary assumptions and elaborating on the difference between two data regimes: identical and heterogeneous. In both cases, we improve upon the existing theory and provide values for the optimal stepsize and the optimal number of local iterations. Our bounds are based on a new notion of variance that is specific to local SGD methods with different data. The tightness of our results is confirmed by recovering known statements when we plug in $H=1$, where $H$ is the number of local steps. The empirical evidence further validates the severe impact of data heterogeneity on the performance of local SGD.
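A minimal sketch of the local SGD scheme being analyzed (the objective and all hyperparameters are illustrative toys) makes the role of $H$ explicit:

```python
import numpy as np

def local_sgd(grads, x0, H, rounds, lr):
    """Local SGD: each worker takes H local steps, then iterates are averaged.

    grads[m](x) returns a stochastic gradient of worker m's local loss;
    data heterogeneity enters through differences between the grads[m].
    """
    x = np.tile(x0, (len(grads), 1)).astype(float)
    for _ in range(rounds):
        for _ in range(H):                    # H local steps per worker
            for m, g in enumerate(grads):
                x[m] -= lr * g(x[m])
        x[:] = x.mean(axis=0)                 # communication: average iterates
    return x[0]

# Heterogeneous toy problem: worker m minimises ||x - b_m||^2 plus noise.
rng = np.random.default_rng(0)
bs = [rng.standard_normal(5) for _ in range(4)]
grads = [lambda x, b=b: 2 * (x - b) + 0.1 * rng.standard_normal(5) for b in bs]
x_out = local_sgd(grads, np.zeros(5), H=8, rounds=100, lr=0.05)
```

Setting `H=1` recovers fully synchronized (minibatch-style) parallel SGD, which is exactly the sanity check used above to confirm the tightness of the bounds.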
Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions, arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g., evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs significantly outperform previous approaches while being more computationally efficient at the same time. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of the different components of our framework and devise the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
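A heavily simplified sketch of the memory idea (hypothetical names and dimensions; a full TGN also includes message aggregation, temporal attention, and learned time encodings) is:

```python
import torch
import torch.nn as nn

class TinyMemory(nn.Module):
    """Toy TGN-style memory: each node keeps a state vector that a GRU cell
    updates whenever the node participates in a timed interaction event."""
    def __init__(self, n_nodes, dim):
        super().__init__()
        self.register_buffer("memory", torch.zeros(n_nodes, dim))
        self.cell = nn.GRUCell(2 * dim + 1, dim)

    @torch.no_grad()
    def update(self, src, dst, t):
        # Message for the source nodes: partner memory, own memory, event time.
        msg = torch.cat([self.memory[dst], self.memory[src], t.unsqueeze(-1)], dim=-1)
        self.memory[src] = self.cell(msg, self.memory[src])

mem = TinyMemory(n_nodes=100, dim=32)
mem.update(torch.tensor([3, 7]), torch.tensor([7, 3]), torch.tensor([0.5, 0.5]))
```

Downstream node embeddings in TGN combine such memory states with graph-based operators over a node's temporal neighbourhood; the ablation study quantifies how much each of these components contributes.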