In this paper, we study the generalized multiscale finite element method (GMsFEM) for single-phase compressible flow in highly heterogeneous porous media. We follow the major steps of the GMsFEM to construct a permeability-dependent offline basis for fast coarse-grid simulation. The offline coarse space is efficiently constructed only once, based on the initial permeability field, using parallel computing. A rigorous convergence analysis is performed for two types of snapshot spaces. The analysis indicates that the convergence rate of the proposed multiscale method depends on the coarse mesh size and the eigenvalue decay of the local spectral problem. To further increase the accuracy of the multiscale method, residual-driven online multiscale basis functions are added to the offline space. Their construction is based on a carefully designed error indicator motivated by the analysis. We find that the online basis functions are particularly important for singular source terms. Extensive numerical tests on typical 3D highly heterogeneous media are presented to demonstrate the computational advantages of the proposed multiscale method.
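To make the offline construction concrete, the following is a minimal 1D sketch of the local spectral problem used to select offline basis functions on one coarse block; the random permeability, block size, and number of retained modes are illustrative assumptions, not the paper's actual setup.

```python
# Minimal 1D sketch of the local spectral problem behind the offline basis.
import numpy as np
from scipy.linalg import eigh

n = 64                                     # fine cells in one coarse block
h = 1.0 / n
kappa = np.exp(2.0 * np.random.rand(n))    # heterogeneous permeability (illustrative)

A = np.zeros((n + 1, n + 1))               # kappa-weighted stiffness matrix
M = np.zeros((n + 1, n + 1))               # kappa-weighted mass matrix
for i in range(n):
    A[i:i+2, i:i+2] += kappa[i] / h * np.array([[1., -1.], [-1., 1.]])
    M[i:i+2, i:i+2] += kappa[i] * h / 6.0 * np.array([[2., 1.], [1., 2.]])

# Generalized eigenproblem A v = lambda M v; the eigenvectors belonging to
# the smallest eigenvalues span the offline space for this block, and the
# eigenvalue decay governs the convergence rate in the analysis.
lam, V = eigh(A, M)
offline_basis = V[:, :4]                   # keep the first few modes
print("smallest eigenvalues:", lam[:4])
```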
This work presents a numerical formulation to model isotropic viscoelastic material behavior for membranes and thin shells. The surface and shell theory is formulated within a curvilinear coordinate system, which allows the representation of general surfaces and deformations. The kinematics follow Kirchhoff-Love theory, and the discretization uses isogeometric shape functions. A multiplicative split of the surface deformation gradient is employed, introducing an intermediate surface configuration. The surface metric and curvature of this intermediate configuration follow from the solution of nonlinear evolution laws - ordinary differential equations (ODEs) - that stem from a generalized viscoelastic solid model. The evolution laws are integrated numerically with the implicit Euler scheme and linearized within the Newton-Raphson scheme of the nonlinear finite element framework. The implementation of surface and bending viscosity is verified against analytical solutions and shows the expected convergence behavior. The chosen numerical examples capture large deformations and typical viscoelastic behavior, such as creep, relaxation, and strain-rate dependence. It is shown that the proposed formulation can also be applied straightforwardly to model boundary viscoelasticity of 3D bodies.
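As an illustration of the time integration, here is a scalar sketch of the implicit Euler update for a generalized viscoelastic solid (standard linear solid) evolution law; the material constants are invented, and the surface metric and curvature tensors are reduced to a single scalar strain.

```python
# Scalar sketch of the implicit-Euler update for a standard linear solid.
import numpy as np

E_inf, E_1, tau = 1.0, 2.0, 0.5    # long-term modulus, branch modulus, relaxation time
dt, n_steps = 0.01, 400
eps = 0.1                          # held strain: a stress-relaxation test
eps_v = 0.0                        # internal (viscous) strain variable

sigma_hist = []
for _ in range(n_steps):
    # backward Euler for the evolution law d(eps_v)/dt = (eps - eps_v)/tau
    eps_v = (eps_v + dt / tau * eps) / (1.0 + dt / tau)
    sigma = E_inf * eps + E_1 * (eps - eps_v)
    sigma_hist.append(sigma)

print("initial stress:", sigma_hist[0],
      "relaxed stress:", sigma_hist[-1],
      "long-term limit:", E_inf * eps)
```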
The problem of continuous inverse optimal control (over a finite time horizon) is to learn the unknown cost function over a sequence of continuous control variables from expert demonstrations. In this article, we study this fundamental problem in the framework of energy-based models, where the observed expert trajectories are assumed to be random samples from a probability density defined as the exponential of the negative cost function, up to a normalizing constant. The parameters of the cost function are learned by maximum likelihood via an "analysis by synthesis" scheme, which iterates (1) a synthesis step: sample synthesized trajectories from the current probability density using Langevin dynamics via back-propagation through time, and (2) an analysis step: update the model parameters based on the statistical difference between the synthesized trajectories and the observed trajectories. Since an efficient optimization algorithm is usually available for an optimal control problem, we also consider a convenient approximation of the above learning method in which the sampling in the synthesis step is replaced by optimization. Moreover, to make the sampling or optimization more efficient, we propose to train the energy-based model simultaneously with a top-down trajectory generator via cooperative learning, where the trajectory generator is used to quickly initialize the synthesis step of the energy-based model. We demonstrate the proposed methods on autonomous driving tasks and show that they can learn suitable cost functions for optimal control.
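To illustrate the "analysis by synthesis" loop, the following is a toy sketch with an illustrative quadratic cost C_theta(u) = ||u - theta||^2 in place of the learned cost; in the actual method the Langevin gradient comes from back-propagating through the control dynamics, which is not reproduced here.

```python
# Toy analysis-by-synthesis loop for p(u) ~ exp(-C_theta(u)).
import numpy as np

rng = np.random.default_rng(0)
T = 20                                            # time horizon
theta = np.zeros(T)                               # cost parameters to learn
observed = 0.5 + 0.05 * rng.standard_normal((100, T))  # stand-in expert data

def grad_cost(u, theta):
    return 2.0 * (u - theta)                      # analytic for the toy cost

def synthesize(theta, n_steps=500, step=1e-2):
    u = rng.standard_normal(T)                    # initial trajectory
    for _ in range(n_steps):                      # Langevin dynamics
        u = (u - 0.5 * step * grad_cost(u, theta)
             + np.sqrt(step) * rng.standard_normal(T))
    return u

for it in range(200):
    syn = np.stack([synthesize(theta) for _ in range(10)])   # synthesis step
    # analysis step: for this cost, the likelihood gradient is proportional
    # to the difference of sufficient statistics (trajectory means)
    theta += 0.05 * (observed.mean(axis=0) - syn.mean(axis=0))

print("learned theta mean vs expert mean:", theta.mean(), observed.mean())
```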
This paper makes the first attempt to apply the newly developed upwind generalized finite difference method (GFDM) to the meshless solution of two-phase porous flow equations. In the presented method, a node cloud is used to flexibly discretize the computational domain, avoiding complicated mesh generation. Combining moving least squares approximation with local Taylor expansion, the spatial derivatives of the oil-phase pressure at a node are approximated by generalized difference operators over the node's local influence domain. By introducing a first-order upwind scheme for the phase relative permeability and incorporating the discrete boundary conditions, fully implicit GFDM-based nonlinear discrete equations of immiscible two-phase porous flow are obtained. They are solved by a Newton-type nonlinear solver with automatic differentiation, avoiding the additional computational cost and possible instability of a sequentially coupled scheme. Two numerical examples are implemented to test the computational performance of the presented method. A detailed error analysis identifies two sources of calculation error and studies the convergence order: the low-order error of the GFDM makes its convergence order lower than that of the FDM when the node spacing is small. The analysis also shows that the symmetry and uniformity of the node collocation within a node's influence domain significantly affect the accuracy of the generalized difference operators, and that the radius of the influence domain should be small to achieve high accuracy. This is a significant difference between the hyperbolic two-phase porous flow problem studied here and the elliptic problems to which the GFDM has previously been applied.
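The following is a minimal 2D sketch of how a generalized difference operator can be built from moving least squares and a local Taylor expansion; the distance weighting, neighbor set, and influence radius are illustrative choices.

```python
# Sketch of GFDM derivative weights from a weighted least-squares Taylor fit.
import numpy as np

def gfdm_weights(x0, neighbors, radius):
    """Weights w such that grad p(x0) ~ w @ (p(neighbors) - p(x0))."""
    d = neighbors - x0                          # offsets, shape (m, 2)
    # quadratic Taylor basis: [dx, dy, dx^2/2, dx*dy, dy^2/2]
    A = np.column_stack([d[:, 0], d[:, 1],
                         0.5 * d[:, 0] ** 2, d[:, 0] * d[:, 1],
                         0.5 * d[:, 1] ** 2])
    r = np.linalg.norm(d, axis=1)
    W = np.diag((1.0 - r / radius) ** 2)        # simple distance weighting
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W)  # normal equations, (5, m)
    return coef[:2]                             # rows for d/dx and d/dy

# verify on p(x, y) = x^2 + 3 y with scattered neighbors
rng = np.random.default_rng(1)
x0 = np.array([0.5, 0.5])
nbrs = x0 + 0.1 * (rng.random((8, 2)) - 0.5)

def p(pts):
    return pts[:, 0] ** 2 + 3.0 * pts[:, 1]

w = gfdm_weights(x0, nbrs, radius=0.1)
grad = w @ (p(nbrs) - p(x0[None, :]))
print("approx grad:", grad, "exact:", [2 * x0[0], 3.0])
```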
Split learning (SL) is a collaborative learning framework that can train an artificial intelligence (AI) model between a device and an edge server by splitting the AI model into a device-side model and a server-side model at a cut layer. The existing SL approach conducts the training process sequentially across devices, which incurs significant training latency, especially when the number of devices is large. In this paper, we design a novel SL scheme, named Cluster-based Parallel SL (CPSL), to reduce the training latency; it conducts model training in a "first-parallel-then-sequential" manner. Specifically, CPSL partitions devices into several clusters, trains device-side models in parallel within each cluster and aggregates them, and then trains the whole AI model sequentially across clusters, thereby parallelizing the training process and reducing training latency. Furthermore, we propose a resource management algorithm to minimize the training latency of CPSL, considering device heterogeneity and network dynamics in wireless networks. This is achieved by stochastically optimizing the cut layer selection, real-time device clustering, and radio spectrum allocation. The proposed two-timescale algorithm jointly makes the cut layer selection decision on a large timescale and the device clustering and radio spectrum allocation decisions on a small timescale. Extensive simulation results on non-independent and identically distributed data demonstrate that the proposed solutions can greatly reduce the training latency compared with existing SL benchmarks, while adapting to network dynamics.
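A skeleton of the "first-parallel-then-sequential" schedule might look as follows; models are reduced to plain weight vectors, and train_round is a hypothetical stand-in for one split-learning exchange (device forward to the cut layer, server forward/backward, gradients returned to the device).

```python
# Skeleton of the CPSL training schedule (all names are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_devices, dim = 8, 4
clusters = np.array_split(np.arange(n_devices), 2)  # device clustering

w_device = rng.standard_normal(dim)                 # device-side model
w_server = rng.standard_normal(dim)                 # server-side model

def train_round(w_dev, w_srv, device_id):
    # placeholder update standing in for one split-learning round
    return (w_dev - 0.01 * rng.standard_normal(dim),
            w_srv - 0.01 * rng.standard_normal(dim))

for cluster in clusters:                            # sequential across clusters
    updates = []
    for dev in cluster:                             # parallel within a cluster
        w_d, w_server = train_round(w_device.copy(), w_server, dev)
        updates.append(w_d)
    w_device = np.mean(updates, axis=0)             # aggregate device-side models

print("trained device-side model:", w_device)
```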
This paper introduces a novel approach to computing the numerical fluxes at cell boundaries for a cell-centered conservative numerical scheme. Explicit gradients used in deriving the reconstruction polynomials are replaced by high-order gradients computed by compact finite differences, referred to as implicit gradients in this paper. The new approach has superior dispersion and dissipation properties compared with the compact reconstruction approach. A problem-independent shock-capturing approach via the Boundary Variation Diminishing (BVD) algorithm is used to suppress oscillations in simulations of flows with shocks and material interfaces. Several numerical test cases are carried out to verify the capability of the proposed implicit gradient method for compressible flows.
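As a concrete example of implicit gradients, here is a sketch of the classical fourth-order compact (Padé) finite difference scheme, in which the derivatives at all nodes are coupled through a tridiagonal system; a periodic 1D grid is assumed for simplicity.

```python
# Fourth-order compact (Pade) derivative: an "implicit gradient" sketch.
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
f = np.sin(x)

# (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) (f_{i+1} - f_{i-1})
A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
A[0, -1] = A[-1, 0] = 0.25                      # periodic wrap-around
rhs = 3.0 / (4.0 * h) * (np.roll(f, -1) - np.roll(f, 1))
df = np.linalg.solve(A, rhs)                    # derivatives at all nodes at once

print("max error vs cos(x):", np.abs(df - np.cos(x)).max())
```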
We study the numerical approximation, by space-time finite element methods, of a multi-physics system that couples hyperbolic elastodynamics with parabolic transport and models poro- and thermoelasticity. The equations are rewritten as a first-order system in time. Discretizations by continuous Galerkin methods in space and time with inf-sup stable pairs of finite elements for the spatial approximation of the unknowns are investigated. Optimal-order energy-type error estimates are proven. Superconvergence at the time nodes is addressed briefly. The error analysis can be extended to discontinuous and enriched Galerkin space discretizations. The error estimates are confirmed by numerical experiments.
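Schematically, the first-order-in-time rewrite of the hyperbolic part introduces the velocity as a separate unknown (coupling and transport terms omitted for brevity):

```latex
% Generic sketch of the first-order-in-time rewrite of elastodynamics.
\[
  \rho\,\partial_{tt} u - \nabla\cdot\sigma(u) = f
  \quad\Longrightarrow\quad
  \begin{cases}
    \partial_t u = v,\\[2pt]
    \rho\,\partial_t v - \nabla\cdot\sigma(u) = f.
  \end{cases}
\]
```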
Multigrid is a powerful solver for large-scale linear systems arising from discretized partial differential equations. The convergence theory of multigrid methods for symmetric positive definite problems has been well developed over the past decades, while, for nonsymmetric problems, such theory is still not mature. As a foundation for multigrid analysis, two-grid convergence theory plays an important role in motivating multigrid algorithms. Regarding two-grid methods for nonsymmetric problems, most previous works focus on the spectral radius of the iteration matrix or rely on convergence measures that are typically difficult to compute in practice. Moreover, the existing results are confined to two-grid methods with exact solution of the coarse-grid system. In this paper, we analyze the convergence of a two-grid method for nonsymmetric positive definite problems (e.g., linear systems arising from discretizations of convection-diffusion equations). In the case of an exact coarse solver, we establish an elegant identity characterizing the two-grid convergence factor, measured in a smoother-induced norm. The identity can be conveniently used to derive a class of optimal restriction operators and to analyze how the convergence factor is influenced by restriction. More generally, we present convergence estimates for an inexact variant of the two-grid method, in which both linear and nonlinear coarse solvers are considered.
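For concreteness, the following sketches one two-grid iteration for a 1D upwind convection-diffusion discretization; the damped-Jacobi smoother and the interpolation-based restriction and prolongation are illustrative choices, not the optimal operators derived in the paper.

```python
# One two-grid iteration for a nonsymmetric 1D convection-diffusion matrix.
import numpy as np

n = 63                                   # interior fine-grid points
h = 1.0 / (n + 1)
eps, b = 0.1, 1.0                        # diffusion and convection coefficients
A = (eps / h**2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) \
    + (b / h) * (np.eye(n) - np.eye(n, k=-1))   # upwind convection term

nc = (n - 1) // 2                        # coarse points at every other node
P = np.zeros((n, nc))                    # prolongation: linear interpolation
for j in range(nc):
    P[2*j:2*j+3, j] = [0.5, 1.0, 0.5]
R = 0.5 * P.T                            # restriction
Ac = R @ A @ P                           # Galerkin coarse-grid operator

def two_grid(x, rhs, nu=2, omega=0.6):
    d = np.diag(A)
    for _ in range(nu):                  # pre-smoothing (damped Jacobi)
        x = x + omega * (rhs - A @ x) / d
    x = x + P @ np.linalg.solve(Ac, R @ (rhs - A @ x))  # exact coarse solve
    for _ in range(nu):                  # post-smoothing
        x = x + omega * (rhs - A @ x) / d
    return x

rhs, x = np.ones(n), np.zeros(n)
for _ in range(15):
    x = two_grid(x, rhs)
print("residual norm:", np.linalg.norm(rhs - A @ x))
```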
High-dimensional and sparse (HiDS) matrices are frequently encountered in big-data applications such as e-commerce systems and social network services. Performing highly accurate representation learning on them is of great significance, owing to the desire to extract latent knowledge and patterns from them. Latent factor analysis (LFA), which represents an HiDS matrix by learning low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models embed an HiDS matrix directly without exploiting its hidden graph structures, resulting in accuracy loss. To address this issue, this paper proposes a graph-incorporated latent factor analysis (GLFA) model. It adopts two ideas: 1) a graph is constructed to identify the hidden high-order interaction (HOI) among nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed to incorporate HOI, thereby improving the representation learning ability of the resultant model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which evidently supports its strong representation learning ability on HiDS data.
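As background, here is a minimal sketch of the base LFA step that GLFA builds on: low-rank embeddings fitted by SGD to the observed entries only. The graph construction and the recurrent HOI component, which are the paper's contributions, are not reproduced here, and the data is synthetic.

```python
# Plain latent factor analysis on the observed entries of a sparse matrix.
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols, rank = 100, 80, 5
P = 0.1 * rng.standard_normal((n_rows, rank))    # row latent factors
Q = 0.1 * rng.standard_normal((n_cols, rank))    # column latent factors

# synthetic observed entries (i, j, value) of an HiDS matrix
truth_P, truth_Q = rng.random((n_rows, rank)), rng.random((n_cols, rank))
obs = [(i, j, truth_P[i] @ truth_Q[j])
       for i, j in zip(rng.integers(0, n_rows, 2000),
                       rng.integers(0, n_cols, 2000))]

lr, reg = 0.05, 0.01
for epoch in range(30):
    for i, j, v in obs:
        e = v - P[i] @ Q[j]                      # prediction error
        P[i] += lr * (e * Q[j] - reg * P[i])     # SGD on observed entries only
        Q[j] += lr * (e * P[i] - reg * Q[j])

rmse = np.sqrt(np.mean([(v - P[i] @ Q[j]) ** 2 for i, j, v in obs]))
print("training RMSE:", rmse)
```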
In this paper, we consider the constrained energy minimizing generalized multiscale finite element method (CEM-GMsFEM) with discontinuous Galerkin (DG) coupling for the linear elasticity equations in highly heterogeneous and high-contrast media. We introduce the construction of the DG version of the CEM-GMsFEM, including the auxiliary basis functions and the offline basis functions. The DG version of the method offers advantages such as flexibility in coarse-grid construction and sparsity of the resulting discrete systems. Moreover, to the best of our knowledge, this is the first proof of convergence of the CEM-GMsFEM in the DG form. Numerical examples are presented to illustrate the performance of the method.
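Schematically, each multiscale basis function in the CEM-GMsFEM is defined by a constrained energy minimization of the following form (notation simplified as an illustration: $K_i^+$ is an oversampled region around coarse element $K_i$, $a(\cdot,\cdot)$ the energy bilinear form, $s(\cdot,\cdot)$ a weighted $L^2$ form, and $\phi_j^{(i)}$ the auxiliary basis from a local spectral problem):

```latex
% Schematic constrained energy minimization defining a multiscale basis function.
\[
  \psi_{j}^{(i)} \;=\; \operatorname*{arg\,min}\Big\{\, a(\psi,\psi) \;:\;
  \psi \in V\big(K_i^+\big),\;
  s\big(\psi,\phi_{j'}^{(i')}\big) = \delta_{ii'}\,\delta_{jj'}
  \;\; \forall\, i',\,j' \,\Big\}.
\]
```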
The minimum energy path (MEP) describes the mechanism of a reaction, and the energy barrier along the path can be used to calculate the reaction rate in thermal systems. The nudged elastic band (NEB) method is one of the most commonly used schemes to compute MEPs numerically. It approximates an MEP by a discrete set of configuration images, where the discretization size determines both the computational cost and the accuracy of the simulations. In this paper, we consider a discrete MEP to be a stationary state of the NEB method and prove an optimal convergence rate of the discrete MEP with respect to the number of images. Numerical simulations of the transitions of several prototypical model systems are performed to support the theory.
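To illustrate the discretization by images, here is a minimal NEB sketch on a 2D double-well potential; the potential, spring constant, step size, and number of images are illustrative choices.

```python
# Minimal nudged elastic band on a 2D double-well potential.
import numpy as np

def V(p):                                   # minima at (+-1, 0), saddle at (0, 0)
    x, y = p[..., 0], p[..., 1]
    return (x**2 - 1.0) ** 2 + 2.0 * y**2

def gradV(p):
    x, y = p[..., 0], p[..., 1]
    return np.stack([4.0 * x * (x**2 - 1.0), 4.0 * y], axis=-1)

n_img, k_spring, step = 12, 2.0, 0.02
band = np.linspace([-1.0, 0.0], [1.0, 0.0], n_img)   # images along the band
band[1:-1, 1] += 0.3                                 # bow the initial path

for _ in range(2000):
    tau = band[2:] - band[:-2]                       # tangents at interior images
    tau /= np.linalg.norm(tau, axis=1, keepdims=True)
    g = gradV(band[1:-1])
    g_perp = g - np.sum(g * tau, axis=1, keepdims=True) * tau
    spring = k_spring * (np.linalg.norm(band[2:] - band[1:-1], axis=1)
                         - np.linalg.norm(band[1:-1] - band[:-2], axis=1))
    # NEB force: true force perpendicular to the band + spring force along it
    band[1:-1] += step * (-g_perp + spring[:, None] * tau)

print("barrier estimate:", V(band).max() - V(band[0]))   # exact barrier: 1.0
```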