This paper investigates model-order reduction methods for geometrically nonlinear structures. The parametrisation method of invariant manifolds is used and adapted to mechanical systems expressed in the physical basis, so that the technique is directly applicable to problems discretised by the finite element method. Two nonlinear mappings, related to displacement and velocity respectively, are introduced, and the link between the two is made explicit at arbitrary order of expansion. The same development is performed on the reduced-order dynamics, which is computed at generic order following the different styles of parametrisation. More specifically, three styles are introduced and discussed: the graph style, the complex normal form style and the real normal form style. These developments allow clearer connections to be drawn with earlier works using these parametrisation methods. The technique is then applied to three examples. A clamped-clamped arch with increasing curvature is first used to show a system whose softening behaviour turns to hardening at larger amplitudes, which can be replicated with a single-mode reduction. Secondly, the case of a cantilever beam is investigated. It is shown that the invariant manifold of the first mode exhibits a folding point at large amplitudes which is not connected to an internal resonance. This exemplifies the failure of the graph style due to the folding point, whereas the normal form style is able to pass over the folding. Finally, a MEMS micromirror undergoing large rotations is used to show the importance of high-order expansions on an industrial example.
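As a compact illustration of the expansions just described (the notation below is assumed for exposition, not taken from the paper), the displacement, the velocity and the reduced dynamics are all written as polynomial maps of the reduced coordinates $\mathbf{z}$, truncated at a common order $o$:

```latex
% Sketch of the parametrisation (notation assumed for illustration):
\begin{aligned}
\mathbf{u} &= \boldsymbol{\Psi}(\mathbf{z}) = \sum_{p=1}^{o} \boldsymbol{\Psi}^{(p)}(\mathbf{z}),
&\qquad
\mathbf{v} &= \boldsymbol{\Upsilon}(\mathbf{z}) = \sum_{p=1}^{o} \boldsymbol{\Upsilon}^{(p)}(\mathbf{z}),
\\
\dot{\mathbf{z}} &= \mathbf{f}(\mathbf{z}) = \sum_{p=1}^{o} \mathbf{f}^{(p)}(\mathbf{z}). &&
\end{aligned}
```

Here each superscript $(p)$ collects the homogeneous polynomial terms of order $p$; the chosen style of parametrisation (graph, complex normal form, or real normal form) determines which monomials are retained in the reduced dynamics $\mathbf{f}$.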
Computational fluctuating hydrodynamics aims to understand the impact of thermal fluctuations on fluid motion at small scales through numerical exploration. These fluctuations are modeled as stochastic flux terms and incorporated into the classical Navier-Stokes equations, which must then be solved numerically. In this paper, we present a novel projection-based method for solving the incompressible fluctuating hydrodynamics equations. By analyzing the equilibrium structure factor spectrum of the velocity field, we investigate how the inherent splitting errors affect the numerical solution of the stochastic partial differential equations in the presence of non-periodic boundary conditions, and how iterative corrections can reduce these errors. Our computational examples demonstrate both the capability of our approach to correctly reproduce the stochastic properties of fluids at small scales and its potential use in simulations of multi-physics problems.
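A minimal numpy sketch of the structure-factor diagnostic mentioned above; the Gaussian draws are a hypothetical stand-in for equilibrium velocity snapshots produced by a fluctuating-hydrodynamics solver:

```python
import numpy as np

# Assumed stand-in for solver output: independent unit-variance velocity
# snapshots on a 1-D grid of n_cells points.
rng = np.random.default_rng(0)
n_cells, n_samples = 64, 4000
samples = rng.standard_normal((n_samples, n_cells))

# Discrete structure factor S(k) = <|u_hat(k)|^2>.
u_hat = np.fft.rfft(samples, axis=1) / np.sqrt(n_cells)
S_k = (np.abs(u_hat) ** 2).mean(axis=0)
print(np.round(S_k[1:9], 3))  # should hover near 1 for unit-variance input
```

At equilibrium, equipartition implies a flat spectrum, so any systematic dip or bump in $S(k)$ quantifies the discretization and splitting errors that the iterative corrections aim to reduce.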
Non-isolated systems have diverse coupling relations with their external environment. These relations generate complex thermodynamics and information transmission between the system and its environment. The framework developed in this work examines the critical role of the internal order of a non-isolated system in shaping this information-thermodynamics coupling. We characterize the coupling as a generalized encoding process, in which the system acts as an information thermodynamics encoder that encodes external information on a thermodynamic basis. We formalize the encoding process in the context of the nonequilibrium second law of thermodynamics, revealing an intrinsic difference in information thermodynamics characteristics between encoders with and without internal correlations. During the encoding of an external source $\mathsf{Y}$, specific sub-systems of an encoder $\mathsf{X}$ with internal correlations can exceed the information thermodynamics bound on $\left(\mathsf{X},\mathsf{Y}\right)$ and encode more information than the system $\mathsf{X}$ does as a whole. We computationally verify this theoretical finding in an Ising model with a random external field and in a neural dataset of the human brain during visual perception and recognition. Our analysis demonstrates that stronger internal correlation inside these systems implies a higher probability that specific sub-systems encode more information than the global system. These findings may suggest a new perspective for studying information thermodynamics in diverse physical and biological systems.
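In practice, the information encoded about $\mathsf{Y}$ by the whole system or by a sub-system must be estimated from finite samples (spin configurations or neural recordings). A generic plug-in mutual-information estimator, sketched below, is one minimal option; it is not the paper's estimator, and the toy data are assumptions:

```python
import numpy as np

def plugin_mi(x, y, bins=8):
    """Plug-in mutual information estimate (in nats) from paired samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip zero cells in the sum
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
y = rng.standard_normal(20_000)                 # external source Y
x_sub = y + 0.5 * rng.standard_normal(20_000)   # a sub-system's noisy response
print(f"I(X_sub; Y) ~ {plugin_mi(x_sub, y):.3f} nats")
```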
We introduce a universal framework for characterizing the statistical efficiency of statistical estimation problems with differential privacy guarantees. Our framework, which we call High-dimensional Propose-Test-Release (HPTR), builds upon three crucial components: the exponential mechanism, robust statistics, and the Propose-Test-Release mechanism. Gluing these components together is the concept of resilience, which is central to robust statistical estimation. Resilience guides the design of the algorithm, the sensitivity analysis, and the success-probability analysis of the test step in Propose-Test-Release. The key insight is that if we design an exponential mechanism that accesses the data only via one-dimensional robust statistics, then the resulting local sensitivity can be dramatically reduced; resilience then yields tight local-sensitivity bounds, which readily translate into near-optimal utility guarantees in several cases. We give a general recipe for applying HPTR to a given instance of a statistical estimation problem and demonstrate it on the canonical problems of mean estimation, linear regression, covariance estimation, and principal component analysis. We introduce a general utility-analysis technique showing that HPTR nearly achieves the optimal sample complexity under several scenarios studied in the literature.
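To make the Propose-Test-Release component concrete, here is a hedged sketch for a differentially private median in the style of Dwork and Lei: propose a local-sensitivity bound `beta`, privately test that the dataset is far from any dataset violating that bound, and only then release the statistic with noise calibrated to `beta`. The distance computation below is a simplified illustration, not HPTR itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def ptr_median(x, beta, eps, delta):
    """Propose-Test-Release sketch for a DP median.

    beta is the *proposed* local-sensitivity bound; eps, delta are the
    privacy parameters. Returns None when the test refuses to release.
    """
    x = np.sort(x)
    n, m = len(x), (len(x) - 1) // 2

    # Distance to instability: smallest number k of changed points after
    # which some gap around the median exceeds beta (simplified formula).
    k = 0
    while k < m:
        gaps = [x[min(m + t, n - 1)] - x[max(m + t - k - 1, 0)]
                for t in range(k + 2)]
        if max(gaps) > beta:
            break
        k += 1

    # Test: the noisy distance must clear the delta-dependent threshold.
    if k + rng.laplace(scale=1.0 / eps) <= np.log(1.0 / delta) / eps:
        return None
    # Release: median plus Laplace noise calibrated to the proposed bound.
    return x[m] + rng.laplace(scale=beta / eps)

print(ptr_median(rng.standard_normal(1001), beta=0.1, eps=1.0, delta=1e-6))
```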
In this paper, we propose a network model, the multiclass classification-based ROM (MC-ROM), for solving time-dependent parametric partial differential equations (PPDEs). This work is inspired by observations made when applying the deep learning-based reduced order model (DL-ROM) to diffusion-dominant PPDEs: the DL-ROM approximates the solution well for some model parameters, but it cannot capture the drastic changes of the solution as time evolves. Based on this fact, we classify the dataset according to the magnitude of the solutions and construct a corresponding subnet for each type of data. We then train a classifier that routes inputs to the appropriate subnet; together these constitute the MC-ROM. When the subnets share the same architecture, transfer learning can be used to accelerate offline training. Numerical experiments show that the MC-ROM improves the generalization ability of the DL-ROM for both diffusion- and convection-dominant problems while maintaining the advantages of the DL-ROM. We also compare the approximation accuracy and computational efficiency with proper orthogonal decomposition (POD), which is not suitable for convection-dominant problems. For diffusion-dominant problems, the MC-ROM reduces the online computational cost by roughly a factor of 100 relative to POD, with slightly better accuracy for a reduced space of the same dimension.
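The offline/online split can be sketched in a few lines: partition the training pairs by solution magnitude, fit one small network per class, and train a classifier to route each new input to the right subnet. Everything below (toy data, network sizes, magnitude thresholds) is hypothetical, and sklearn MLPs stand in for the paper's DL-ROM subnets:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical data: inputs are (parameter, time) pairs; the target is a
# scalar "solution" whose magnitude decays drastically over time.
X = rng.uniform(0.0, 1.0, size=(2000, 2))
y = np.exp(-5.0 * X[:, 1]) * np.sin(8.0 * X[:, 0])

# 1) Partition the training data by solution magnitude (the MC step).
labels = np.digitize(np.abs(y), bins=[0.05, 0.3])  # three magnitude classes

# 2) One subnet per class, plus a classifier that routes new inputs.
subnets = {c: MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=0).fit(X[labels == c], y[labels == c])
           for c in np.unique(labels)}
router = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                       random_state=0).fit(X, labels)

# 3) Online stage: classify, then evaluate only the selected subnet.
X_test = rng.uniform(0.0, 1.0, size=(5, 2))
pred = np.array([subnets[c].predict(x[None])[0]
                 for c, x in zip(router.predict(X_test), X_test)])
print(pred)
```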
Advances in information technology have led to extremely large datasets that are often kept in different storage centers. Existing statistical methods must be adapted to overcome the resulting computational obstacles while retaining statistical validity and efficiency. Split-and-conquer approaches have been applied in many areas, including quantile processes, regression analysis, principal eigenspaces, and exponential families. We study split-and-conquer approaches for the distributed learning of finite Gaussian mixtures. We recommend a reduction strategy and develop an effective MM algorithm. The new estimator is shown to be consistent and to retain the root-n convergence rate under general conditions. Experiments based on simulated and real-world data show that the proposed split-and-conquer approach has statistical performance comparable to that of the global estimator based on the full dataset, when the latter is feasible. It can even slightly outperform the global estimator when the model assumption does not match the real-world data, and it has better statistical and computational performance than some existing methods.
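A hedged end-to-end sketch of the split-and-conquer pipeline: fit a mixture on each split locally, pool the local components into one large mixture, and reduce it back to K components. For brevity the reduction below uses weighted clustering with moment matching in place of the paper's MM algorithm; the data and split counts are made up:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
K, n_splits = 3, 8

# Hypothetical full dataset, held as equal splits across machines.
true_means = np.array([[-4.0], [0.0], [5.0]])
data = np.vstack([m + rng.standard_normal((4000, 1)) for m in true_means])
rng.shuffle(data)
splits = np.array_split(data, n_splits)

# 1) Local stage: fit a K-component mixture independently on each split.
local_fits = [GaussianMixture(K, random_state=0).fit(s) for s in splits]

# 2) Pool the n_splits * K local components into one large mixture.
w = np.concatenate([g.weights_ for g in local_fits]) / n_splits
mu = np.vstack([g.means_ for g in local_fits])

# 3) Reduction stage -- a simple stand-in for the paper's MM algorithm:
#    cluster the pooled component means, then moment-match within clusters.
lab = KMeans(K, n_init=10, random_state=0).fit_predict(mu)
for c in range(K):
    wc, mc = w[lab == c], mu[lab == c]
    m_hat = (wc[:, None] * mc).sum(axis=0)[0] / wc.sum()
    print(f"weight={wc.sum():.3f}  mean={m_hat:+.3f}")
```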
Causal mediation analysis concerns the pathways through which a treatment affects an outcome. While most of the mediation literature focuses on settings with a single mediator, a flourishing line of research has examined settings involving multiple mediators, under which path-specific effects (PSEs) are often of interest. We consider estimation of PSEs when the treatment effect operates through $K\ (\geq 1)$ causally ordered, possibly multivariate mediators. In this setting, the PSEs for many causal paths are not nonparametrically identified, and we focus on a set of PSEs that are identified under Pearl's nonparametric structural equation model. These PSEs are defined as contrasts between the expectations of $2^{K+1}$ potential outcomes and identified via what we call the generalized mediation functional (GMF). We introduce an array of regression-imputation, weighting, and "hybrid" estimators, and, in particular, two $(K+2)$-robust and locally semiparametric efficient estimators for the GMF. The latter estimators are well suited to the use of data-adaptive methods for estimating their nuisance functions. We establish the rate conditions required of the nuisance functions for semiparametric efficiency. We also discuss how our framework applies to several estimands that may be of particular interest in empirical applications. The proposed estimators are illustrated with a simulation study and an empirical example.
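For intuition, consider the simplest case $K = 1$ with a single mediator $M$: the $2^{K+1} = 4$ potential-outcome means $\mathbb{E}[Y(a, M(a'))]$, $a, a' \in \{0,1\}$, are then identified by the standard mediation formula (the notation here is assumed for exposition):

```latex
% K = 1 case: identification of E[Y(a, M(a'))] under Pearl's NPSEM,
% with X the baseline covariates (notation assumed for illustration):
\mathbb{E}\bigl[Y\bigl(a, M(a')\bigr)\bigr]
  = \int\!\!\int \mathbb{E}\bigl[Y \mid A=a,\, M=m,\, X=x\bigr]\,
    \mathrm{d}P\bigl(m \mid A=a',\, X=x\bigr)\,\mathrm{d}P(x).
```

PSEs are then contrasts of these four means; for instance, $\mathbb{E}[Y(1, M(1))] - \mathbb{E}[Y(1, M(0))]$ is the effect transmitted through $M$.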
We investigate a gradient flow structure of the Ginzburg--Landau--Devonshire (GLD) model for anisotropic ferroelectric materials by reconstructing its energy form. We show that the modified energy form admits at least one minimizer. Under some regularity assumptions on the electric charge distribution and the initial polarization field, we prove that the $L^2$ gradient flow structure has a unique solution. To simulate the GLD model numerically, we propose an energy-stable semi-implicit time-stepping scheme combined with a hybridizable discontinuous Galerkin method for the space discretization. Numerical tests verify the stability and convergence of the proposed scheme and illustrate characteristic properties of ferroelectric materials.
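As a generic illustration of the semi-implicit idea (not the paper's HDG scheme, and with a scalar Ginzburg-Landau energy standing in for the anisotropic GLD functional), the stiff linear part can be treated implicitly while the nonlinearity is lagged explicitly:

```python
import numpy as np

# L2 gradient flow of E(p) = integral of (kappa/2)|p_x|^2 + (1/4)(p^2-1)^2
# on a periodic 1-D grid; a stand-in problem chosen for illustration only.
n, L, kappa, tau = 256, 2 * np.pi, 0.02, 0.05
k = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)   # wavenumbers
p = 0.1 * np.random.default_rng(0).standard_normal(n)

def energy(p):
    px = np.fft.irfft(1j * k * np.fft.rfft(p), n)
    return np.sum(0.5 * kappa * px**2 + 0.25 * (p**2 - 1) ** 2) * (L / n)

for step in range(201):
    if step % 50 == 0:
        print(f"step {step:3d}   E = {energy(p):.6f}")  # E should decay
    # Laplacian implicit, nonlinearity (p^3 - p) lagged explicitly:
    # (I - tau*kappa*Lap) p_new = p - tau*(p^3 - p), solved in Fourier space.
    p = np.fft.irfft(np.fft.rfft(p - tau * (p**3 - p))
                     / (1.0 + tau * kappa * k**2), n)
```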
The inductive biases of graph representation learning algorithms are often encoded in the background geometry of their embedding space. In this paper, we show that general directed graphs can be effectively represented by an embedding model that combines three components: a pseudo-Riemannian metric structure, a non-trivial global topology, and a novel likelihood function that explicitly incorporates a preferred direction in embedding space. We demonstrate the representational capabilities of this method by applying it to the task of link prediction on a series of synthetic and real directed graphs from natural language applications and biology. In particular, we show that low-dimensional cylindrical Minkowski and anti-de Sitter spacetimes can produce graph representations equal to or better than those of curved Riemannian manifolds of higher dimensions.
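A minimal sketch of how metric structure and a preferred direction can combine to score directed edges in the Minkowski case; the specific Fermi-Dirac-style form and the parameters below are assumptions for illustration, not the paper's likelihood:

```python
import numpy as np

def minkowski_interval2(u, v):
    """Squared interval with signature (-,+,...,+); coordinate 0 is time."""
    d = v - u
    return -d[..., 0] ** 2 + np.sum(d[..., 1:] ** 2, axis=-1)

def directed_edge_prob(u, v, r=1.0, t=0.1):
    """Hypothetical likelihood of an edge u -> v: large when v lies inside
    u's future light cone, i.e. time-like separation plus a positive lag
    along the preferred (time) direction."""
    s2 = minkowski_interval2(u, v)
    future = v[..., 0] - u[..., 0]           # preferred-direction component
    return 1.0 / (1.0 + np.exp((s2 + r) / t)) * (future > 0)

u, v = np.array([0.0, 0.0]), np.array([1.0, 0.3])
print(directed_edge_prob(u, v), directed_edge_prob(v, u))  # asymmetric scores
```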
We present a new clustering method, in the form of a single clustering equation, that is able to directly discover groupings in the data. The main proposition is that the first neighbor of each sample is all one needs to discover large chains and find the groups in the data. In contrast to most existing clustering algorithms, our method does not require any hyperparameters or distance thresholds, nor does the number of clusters need to be specified. The proposed algorithm belongs to the family of hierarchical agglomerative methods. The technique has very low computational overhead, is easily scalable, and is applicable to large practical problems. Evaluation on well-known datasets from different domains, ranging between 1077 and 8.1 million samples, shows substantial performance gains over existing clustering techniques.
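The single linking rule is easy to state: connect every sample to its first nearest neighbor and read off the connected components; recursing on the resulting cluster means then yields the agglomerative hierarchy. A sketch of one such pass (which typically over-segments before the recursive merging):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import NearestNeighbors

def first_neighbor_partition(X):
    """One pass of first-neighbor linking: join each sample to its nearest
    neighbor and return connected components. No thresholds, no preset
    number of clusters -- a sketch of the rule described in the abstract."""
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    kappa = nn.kneighbors(X, return_distance=False)[:, 1]  # first neighbor
    n = len(X)
    # Symmetric adjacency i ~ kappa[i]; samples sharing a first neighbor
    # also end up linked, via a path through that shared neighbor.
    A = csr_matrix((np.ones(n), (np.arange(n), kappa)), shape=(n, n))
    return connected_components(A + A.T, directed=False)[1]

X = np.vstack([np.random.default_rng(0).normal(c, 0.3, size=(100, 2))
               for c in (0.0, 5.0, 10.0)])
labels = first_neighbor_partition(X)
print(len(np.unique(labels)), "clusters in the first partition")
```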
Matter evolved under the influence of gravity from minuscule density fluctuations. Non-perturbative structure formed hierarchically over all scales and developed the non-Gaussian features of the Universe known as the Cosmic Web. Fully understanding the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and employ large ensembles of computer simulations to compare with the observed data in order to extract the full information about our own Universe. However, evolving trillions of galaxies over billions of years, even with the simplest physics, is a daunting task. We build a deep neural network, the Deep Density Displacement Model (hereafter D$^3$M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory. Our extensive analysis demonstrates that D$^3$M outperforms second-order perturbation theory (hereafter 2LPT), the commonly used fast approximate simulation method, in point-wise comparison, the 2-point correlation, and the 3-point correlation. We also show that D$^3$M is able to accurately extrapolate far beyond its training data and predict structure formation for significantly different cosmological parameters. Our study shows, for the first time, that deep learning is a practical and accurate alternative to approximate simulations of the gravitational structure formation of the Universe.
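One of the comparison metrics above, the 2-point correlation, is routinely evaluated through the isotropically binned power spectrum. A sketch of that diagnostic follows, with random fields as hypothetical stand-ins for the simulation outputs:

```python
import numpy as np

def power_spectrum(delta, box=128.0, nbins=12):
    """Isotropically binned power spectrum of a 3-D overdensity field
    (the Fourier-space form of the 2-point correlation)."""
    n = delta.shape[0]
    pk3d = np.abs(np.fft.fftn(delta) / n**3) ** 2 * box**3
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    bins = np.linspace(2 * np.pi / box, kmag.max() / 2, nbins + 1)
    sums, _ = np.histogram(kmag, bins=bins, weights=pk3d)
    counts, _ = np.histogram(kmag, bins=bins)
    return 0.5 * (bins[1:] + bins[:-1]), sums / np.maximum(counts, 1)

# Hypothetical stand-in fields; in practice these would be the N-body truth
# and the D^3M or 2LPT prediction painted onto the same grid.
rng = np.random.default_rng(0)
truth = rng.standard_normal((32, 32, 32))
pred = truth + 0.1 * rng.standard_normal((32, 32, 32))
k, pk_t = power_spectrum(truth)
_, pk_p = power_spectrum(pred)
print(np.round(pk_p / pk_t, 3))  # per-bin ratio, a transfer-function-style check
```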