This paper presents a novel physics-infused reduced-order modeling (PIROM) methodology for efficient and accurate modeling of nonlinear dynamical systems. The PIROM consists of a physics-based analytical component that represents the known physical processes, and a data-driven dynamical component that represents the unknown physical processes. The PIROM is applied to aerothermal load modeling for hypersonic aerothermoelastic (ATE) analysis and is found to accelerate the ATE simulations by two to three orders of magnitude while maintaining accuracy comparable to high-fidelity solutions based on computational fluid dynamics (CFD). Moreover, the PIROM-based solver is benchmarked against the conventional POD-kriging surrogate model and is found to significantly outperform the latter in accuracy, generalizability, and sampling efficiency over a wide range of operating conditions and in the presence of complex structural boundary conditions. Finally, the PIROM-based ATE solver is demonstrated through a parametric study on the effects of boundary conditions and rib supports on the ATE response of a compliant, heat-conducting panel structure. The results not only reveal a dramatic snap-through behavior with respect to the spring constraints of the boundary conditions, but also demonstrate the potential of PIROM to facilitate the rapid and accurate design and optimization of multi-disciplinary systems such as hypersonic structures.
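As an illustrative aside (not the authors' PIROM implementation), the minimal sketch below shows the general physics-plus-data decomposition described above: a known analytical term is combined with a correction fitted to the residual dynamics. The relaxation term, the polynomial dictionary, and all names are hypothetical placeholders.

```python
# Minimal sketch of a physics-infused hybrid model: known physics + data-driven
# correction fitted to residuals (illustrative assumptions throughout).
import numpy as np

def known_physics(x):
    # Placeholder analytical component: linear relaxation.
    return -0.5 * x

def features(x):
    # Polynomial dictionary for the unknown, data-driven component.
    return np.stack([x, x**2, x**3], axis=-1)

# "Training": fit the correction so that physics + correction matches observed rates.
rng = np.random.default_rng(0)
x_data = rng.uniform(-1.0, 1.0, size=200)
dxdt_data = -0.5 * x_data + 0.3 * x_data**3           # synthetic "true" dynamics
residual = dxdt_data - known_physics(x_data)          # what the physics term misses
theta, *_ = np.linalg.lstsq(features(x_data), residual, rcond=None)

def hybrid_rhs(x):
    # Physics-based term plus learned correction.
    return known_physics(x) + features(x) @ theta

# Time integration with the hybrid right-hand side (explicit Euler for brevity).
x, dt = 0.8, 1e-2
for _ in range(1000):
    x = x + dt * hybrid_rhs(x)
print(f"state after integration: {x:.4f}")
```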
Modern data collection in many data paradigms, including bioinformatics, often incorporates multiple traits derived from different data types (i.e. platforms). We call this data multi-block, multi-view, or multi-omics data. The emergent field of data integration develops and applies new methods for studying multi-block data and identifying how different data types relate and differ. One major frontier in contemporary data integration research is methodology that can identify partially-shared structure between sub-collections of data types. This work presents a new approach: Data Integration Via Analysis of Subspaces (DIVAS). DIVAS combines new insights in angular subspace perturbation theory with recent developments in matrix signal processing and convex-concave optimization into one algorithm for exploring partially-shared structure. Based on principal angles between subspaces, DIVAS provides built-in inference on the results of the analysis, and is effective even in high-dimension-low-sample-size (HDLSS) situations.
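Since the analysis above is built on principal angles between subspaces, the short sketch below shows the standard way such angles are computed from orthonormal bases (illustrative only; this is not the DIVAS code, and the synthetic data are placeholders).

```python
# Principal angles between column spaces via QR + SVD (standard construction).
import numpy as np

def principal_angles(A, B):
    """Principal angles (in radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)                      # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)                      # orthonormal basis of span(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))                          # 3-dim subspace in R^100
Y = np.hstack([X[:, :2], rng.standard_normal((100, 2))])   # shares two directions with X
print(np.degrees(principal_angles(X, Y)))                  # two angles near 0, one large
```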
Simulation of 3D low-frequency electromagnetic fields propagating in the Earth is computationally expensive. We present a fictitious wave domain high-order finite-difference time-domain (FDTD) modelling method on nonuniform grids to compute frequency-domain 3D controlled-source electromagnetic (CSEM) data. The method overcomes the inconsistency issue that is widely present in the conventional second-order staggered-grid finite-difference scheme on nonuniform grids, achieving high accuracy with an arbitrarily high-order scheme. The finite-difference coefficients, adapted to the node spacings, can be computed accurately by inverting a Vandermonde matrix system with an efficient algorithm. A generic stability condition applicable to nonuniform grids is established, revealing the dependence of the time step on these finite-difference coefficients. A recursive scheme using fixed-point iterations is designed to determine the stretching factor that generates the optimal nonuniform grid. The grid stretching in our method reduces the number of grid points required in the discretization, making it more efficient than the standard high-order FDTD on a densely sampled uniform grid. Rather than stretching in both the vertical and horizontal directions, our method achieves better accuracy when the grid is stretched along depth only, without horizontal stretching. The efficiency and accuracy of our method are demonstrated by numerical examples.
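For illustration, the sketch below computes finite-difference weights on a nonuniform stencil from the usual Taylor-matching (Vandermonde-type) system. It uses a generic linear solve rather than the efficient specialized inversion referred to above, and the stencil values are arbitrary.

```python
# Finite-difference weights on nonuniform nodes via a Vandermonde-type system.
import numpy as np
from math import factorial

def fd_weights(nodes, x0, deriv):
    """Weights c such that  sum_j c[j] * f(nodes[j])  ~  f^(deriv)(x0)."""
    n = len(nodes)
    h = np.asarray(nodes, dtype=float) - x0
    V = np.array([h**k / factorial(k) for k in range(n)])   # Taylor-matching rows
    rhs = np.zeros(n)
    rhs[deriv] = 1.0                                         # pick out the target derivative
    return np.linalg.solve(V, rhs)

# First-derivative weights on a stretched four-point stencil.
nodes = [0.0, 0.1, 0.25, 0.45]                 # nonuniform node locations
w = fd_weights(nodes, x0=0.1, deriv=1)
f = np.sin(np.asarray(nodes))
print(w @ f, np.cos(0.1))                      # FD approximation vs. exact derivative
```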
Source analysis of electroencephalography (EEG) data requires the computation of the scalp potential induced by current sources in the brain. This so-called EEG forward problem is based on an accurate estimation of the volume conduction effects in the human head, represented by a partial differential equation which can be solved using the finite element method (FEM). FEM offers flexibility when modeling anisotropic tissue conductivities but requires a volumetric discretization, a mesh, of the head domain. Structured hexahedral meshes are easy to create automatically, while tetrahedral meshes are better suited to model curved geometries. Tetrahedral meshes thus offer better accuracy but are more difficult to create. Methods: We introduce CutFEM for EEG forward simulations to integrate the strengths of hexahedra and tetrahedra. It belongs to the family of unfitted finite element methods, decoupling the mesh from the geometry representation. Following a description of the method, we employ CutFEM in both controlled spherical scenarios and the reconstruction of somatosensory evoked potentials. Results: CutFEM outperforms competing FEM approaches with regard to numerical accuracy, memory consumption and computational speed while being able to mesh arbitrarily touching compartments. Conclusion: CutFEM balances numerical accuracy, computational efficiency and a smooth approximation of complex geometries that has previously not been available in FEM-based EEG forward modeling.
Auto-regressive moving-average (ARMA) models are ubiquitous forecasting tools. Parsimony in such models is highly valued for interpretability and computational tractability, and as such the identification of model orders remains a fundamental task. We propose a novel method of ARMA order identification through projection predictive inference, which is grounded in Bayesian decision theory and naturally allows for uncertainty communication. It benefits from improved stability through the use of a reference model. The procedure consists of two steps: in the first, the practitioner incorporates their understanding of the underlying data-generating process into a reference model; in the second, this reference model is projected onto possibly parsimonious submodels. These submodels are optimally inferred to best replicate the predictive performance of the reference model. We further propose a search heuristic amenable to the ARMA framework. We show that the submodels selected by our procedure exhibit predictive performance at least as good as those produced by auto.arima over simulated and real-data experiments, and in some cases outperform the latter. Finally, we show that our procedure is robust to noise and scales well to larger data sets.
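As a rough stand-in for the two-step "reference then project" workflow described above, the sketch below fits a rich reference ARMA model and then refits candidate submodels to the reference's fitted values. The paper's projection is decision-theoretic rather than this plain refit, and the statsmodels-based setup, orders, and discrepancy measure are all assumptions made for illustration.

```python
# Illustrative "reference model -> parsimonious submodels" workflow (simplified).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic AR(2) data standing in for the practitioner's series.
rng = np.random.default_rng(2)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

# Step 1: a deliberately rich reference model encoding prior understanding.
reference = ARIMA(y, order=(3, 0, 3)).fit()
y_ref = reference.fittedvalues                   # the reference model's view of the signal

# Step 2: "project" onto candidate parsimonious submodels by fitting them to the
# reference predictions, keeping the smallest submodel that tracks them well.
for p, q in [(1, 0), (2, 0), (1, 1), (2, 1)]:
    sub = ARIMA(y_ref, order=(p, 0, q)).fit()
    rmse = np.sqrt(np.mean((sub.fittedvalues - y_ref) ** 2))
    print(f"ARMA({p},{q}): discrepancy from reference = {rmse:.3f}")
```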
We propose a regularization for Reduced Order Models (ROMs) of the quasi-geostrophic equations (QGE) to increase accuracy when the Proper Orthogonal Decomposition (POD) modes retained to construct the reduced basis are insufficient to describe the system dynamics. Our regularization is based on the so-called BV-alpha model, which modifies the nonlinear term in the QGE and adds a linear differential filter for the vorticity. To show the effectiveness of the BV-alpha model for ROM closure, we compare the results computed by a POD-Galerkin ROM with and without regularization for the classical double-gyre wind forcing benchmark. Our numerical results show that the solution computed by the regularized ROM is more accurate, even when the retained POD modes account for a small percentage of the eigenvalue energy. Additionally, we show that, although computationally more expensive than the ROM with no regularization, the regularized ROM is still a competitive alternative to full-order simulations of the QGE.
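The sketch below illustrates only the generic POD bookkeeping referred to above: extracting a reduced basis from snapshots, measuring the retained eigenvalue energy, and Galerkin-projecting a placeholder linear full-order operator. It does not implement the BV-alpha regularization or the QGE solver; the toy snapshot data and operator are assumptions.

```python
# Generic POD basis extraction and Galerkin projection (toy data, not the QGE ROM).
import numpy as np

rng = np.random.default_rng(3)
snapshots = rng.standard_normal((500, 80))       # columns = state snapshots (toy data)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

r = 10                                           # number of retained POD modes
basis = U[:, :r]                                 # reduced basis
energy = np.sum(s[:r] ** 2) / np.sum(s ** 2)     # retained eigenvalue energy fraction
print(f"retained eigenvalue energy: {energy:.1%}")

# Galerkin projection of a placeholder linear full-order operator onto the basis.
A = -0.1 * np.eye(500)
A_rom = basis.T @ A @ basis                      # r x r reduced operator
print(A_rom.shape)
```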
In this work we study the asymptotic consistency of the weak-form sparse identification of nonlinear dynamics algorithm (WSINDy) in the identification of differential equations from noisy samples of solutions. We prove that the WSINDy estimator is unconditionally asymptotically consistent for a wide class of models which includes the Navier-Stokes equations and the Kuramoto-Sivashinsky equation. We thus provide a mathematically rigorous explanation for the observed robustness to noise of weak-form equation learning. Conversely, we also show that in general the WSINDy estimator is only conditionally asymptotically consistent, yielding discovery of spurious terms with probability one if the noise level is above some critical threshold and the nonlinearities exhibit sufficiently fast growth. We derive explicit bounds on the critical noise threshold in the case of Gaussian white noise and provide an explicit characterization of these spurious terms in the case of trigonometric and/or polynomial model nonlinearities. However, a silver lining to this negative result is that if the data is suitably denoised (a simple moving average filter is sufficient), then we recover unconditional asymptotic consistency on the class of models with locally-Lipschitz nonlinearities. Altogether, our results reveal several important aspects of weak-form equation learning which may be used to improve future algorithms. We demonstrate our results numerically using the Lorenz system, the cubic oscillator, a viscous Burgers growth model, and a Kuramoto-Sivashinsky-type higher-order PDE.
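To make the weak-form idea concrete, the toy sketch below identifies the scalar ODE x' = -x^3 from noisy samples using compactly supported test functions and an optional moving-average filter. The test-function choice, window sizes, and plain least-squares solve are simplifications of the actual WSINDy algorithm, which uses carefully designed test functions and sparsity-promoting regression.

```python
# Toy weak-form identification of x' = -x^3 from noisy data (WSINDy-flavored sketch).
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 2001)
dt = t[1] - t[0]
x_true = 1.0 / np.sqrt(1.0 + 2.0 * t)            # exact solution of x' = -x^3, x(0) = 1
x_noisy = x_true + 0.005 * rng.standard_normal(t.size)

# Simple moving-average denoising, as mentioned in the abstract.
k = 21
x_f = np.convolve(x_noisy, np.ones(k) / k, mode="same")

# Weak form: for compactly supported phi,  -∫ phi' x dt = ∫ phi f(x) dt.
m = 100                                          # window half-width (in samples)
centers = np.arange(m + k, t.size - m - k, 50)
library = np.stack([x_f, x_f**2, x_f**3], axis=1)   # candidate terms x, x^2, x^3

s = np.linspace(-1.0, 1.0, 2 * m + 1)
phi = (1.0 - s**2) ** 2                          # bump test function, zero at window edges
dphi = -4.0 * s * (1.0 - s**2) / (m * dt)        # d(phi)/dt via the chain rule

G, b = [], []
for c in centers:
    window = slice(c - m, c + m + 1)
    b.append(-np.sum(dphi * x_f[window]) * dt)                     # -∫ phi' x dt
    G.append(np.sum(phi[:, None] * library[window], axis=0) * dt)  # ∫ phi [x, x^2, x^3] dt

w, *_ = np.linalg.lstsq(np.array(G), np.array(b), rcond=None)
print(dict(zip(["x", "x^2", "x^3"], np.round(w, 2))))  # weights roughly (0, 0, -1)
```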
We propose quasi-stable coloring, an approximate version of stable coloring. Stable coloring, also called color refinement, is a well-studied technique in graph theory for classifying vertices, which can be used to build compact, lossless representations of graphs. However, its usefulness is limited due to its reliance on strict symmetries; real data compresses very poorly using color refinement. We propose the first, to our knowledge, approximate color refinement scheme, which we call quasi-stable coloring. By using approximation, we alleviate the need for strict symmetry and allow for a tradeoff between the degree of compression and the accuracy of the representation. We study three applications: Linear Programming, Max-Flow, and Betweenness Centrality, and provide theoretical evidence in each case that a quasi-stable coloring can lead to good approximations on the reduced graph. Next, we consider how to compute a maximal quasi-stable coloring: we prove that, in general, this problem is NP-hard, and propose a simple yet effective algorithm based on heuristics. Finally, we experimentally evaluate the quasi-stable coloring technique on several real graphs and applications, comparing it with prior approximation techniques. A reference implementation and the experiment code are available at //github.com/mkyl/QuasiStableColors.jl.
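One plausible greedy reading of approximate color refinement is sketched below: repeatedly split the color class whose members disagree most about their neighbor counts toward some class, until the disagreement is within a tolerance q. This is a simplified stand-in written for illustration, not the reference Julia implementation or the paper's exact heuristic.

```python
# Greedy sketch of approximate (quasi-stable) color refinement with tolerance q.
from collections import defaultdict

def quasi_stable_coloring(adj, q=1, max_colors=32):
    """Split color classes until, within every class, neighbor counts toward
    each class differ by at most q (or the color budget is exhausted)."""
    color = {v: 0 for v in adj}                       # start with a single class
    while True:
        counts = {v: defaultdict(int) for v in adj}   # counts[v][c] = #neighbors of v in class c
        for v, nbrs in adj.items():
            for u in nbrs:
                counts[v][color[u]] += 1
        classes = defaultdict(list)
        for v, c in color.items():
            classes[c].append(v)
        # Find the (class, target class) pair with the largest disagreement.
        worst, split = 0, None
        for c, members in classes.items():
            for target in classes:
                vals = [counts[v][target] for v in members]
                spread = max(vals) - min(vals)
                if spread > worst:
                    worst, split = spread, (c, target)
        if worst <= q or len(classes) >= max_colors:
            return color
        # Split the offending class around the midpoint of its neighbor counts.
        c, target = split
        vals = [counts[v][target] for v in classes[c]]
        thr = (min(vals) + max(vals)) / 2.0
        new_c = max(color.values()) + 1
        for v in classes[c]:
            if counts[v][target] > thr:
                color[v] = new_c

# A 10-node path: endpoints have degree 1, interior nodes degree 2; with q=1
# a single color class already satisfies the tolerance.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
print(quasi_stable_coloring(path, q=1))
```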
The exponential increase in wireless devices with highly demanding services such as streaming video, gaming and others has imposed several challenges on Wireless Local Area Networks (WLANs). In the context of Wi-Fi, IEEE 802.11ax brings high data rates in dense user deployments. Additionally, it comes with new flexible features in the physical layer, such as a dynamic Clear-Channel-Assessment (CCA) threshold, with the goal of improving spatial reuse (SR) in response to radio spectrum scarcity in dense scenarios. In this paper, we formulate the Transmission Power (TP) and CCA configuration problem with the objective of maximizing fairness and minimizing station starvation. We present four main contributions to distributed SR optimization using Multi-Agent Multi-Armed Bandits (MA-MABs). First, we propose to reduce the action space, given the large number of combinations of TP and CCA threshold values per Access Point (AP). Second, we present two deep Multi-Agent Contextual MABs (MA-CMABs), named Sample Average Uncertainty (SAU)-Coop and SAU-NonCoop, as cooperative and non-cooperative versions to improve SR. In addition, we analyze whether cooperation is beneficial using MA-MAB solutions based on the epsilon-greedy, Upper Confidence Bound (UCB) and Thompson sampling techniques. Finally, we propose a deep reinforcement transfer learning technique to improve adaptability in dynamic environments. Simulation results show that cooperation via the SAU-Coop algorithm yields a 14.7% improvement in cumulative throughput and a 32.5% improvement in PLR compared with non-cooperative approaches. Moreover, under dynamic scenarios, transfer learning helps mitigate service drops for at least 60% of the users.
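As a minimal illustration of the bandit view of this configuration problem, the sketch below runs a single epsilon-greedy agent over a reduced joint (TP, CCA) action set with a placeholder reward. The action grids, reward shape, and class names are assumptions for illustration; this is not the paper's SAU algorithms or simulation setup.

```python
# Epsilon-greedy bandit over a reduced joint (TP, CCA) action set (toy reward).
import numpy as np

tp_levels = [5, 10, 15, 20]          # dBm (illustrative values)
cca_levels = [-82, -72, -62]         # dBm (illustrative values)
actions = [(tp, cca) for tp in tp_levels for cca in cca_levels]

class EpsilonGreedyAP:
    def __init__(self, n_actions, eps=0.1):
        self.eps = eps
        self.counts = np.zeros(n_actions)
        self.values = np.zeros(n_actions)    # running mean reward per action

    def select(self, rng):
        if rng.random() < self.eps:
            return int(rng.integers(len(self.values)))   # explore
        return int(np.argmax(self.values))               # exploit

    def update(self, a, reward):
        self.counts[a] += 1
        self.values[a] += (reward - self.values[a]) / self.counts[a]

rng = np.random.default_rng(5)
ap = EpsilonGreedyAP(len(actions))
for step in range(2000):
    a = ap.select(rng)
    tp, cca = actions[a]
    # Stand-in reward: pretend moderate power with a sensitive CCA threshold is best.
    reward = -abs(tp - 10) - 0.1 * abs(cca + 82) + rng.normal(0, 1)
    ap.update(a, reward)
print("learned best action:", actions[int(np.argmax(ap.values))])
```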
When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
Since deep neural networks were developed, they have made substantial contributions to everyday life. In almost every aspect of daily life, machine learning can provide more rational advice than humans are capable of offering. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical barrier for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. Then, the review focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility to new user-designed modules. The paper concludes with open problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
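For readers new to the topic, the sketch below shows the simplest HPO baseline discussed in such reviews: random search over a learning rate and a hidden-layer width. The objective function is a toy placeholder standing in for "train the model and return validation loss"; all ranges and names are illustrative assumptions.

```python
# Random-search hyper-parameter optimization over a toy objective.
import numpy as np

def validation_loss(lr, width):
    # Placeholder for "train the network with (lr, width), return validation loss".
    return (np.log10(lr) + 2.5) ** 2 + 0.001 * (width - 128) ** 2

rng = np.random.default_rng(6)
best = None
for _ in range(50):
    lr = 10 ** rng.uniform(-5, -1)            # log-uniform range for the learning rate
    width = int(rng.integers(16, 513))        # integer-valued structural hyper-parameter
    loss = validation_loss(lr, width)
    if best is None or loss < best[0]:
        best = (loss, lr, width)
print(f"best: loss={best[0]:.3f}, lr={best[1]:.2e}, width={best[2]}")
```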