We propose a novel formulation for parametric finite element methods to simulate surface diffusion of closed curves, also known as curve diffusion. Several high-order temporal discretizations are proposed based on this new formulation. To ensure that the numerical methods preserve the geometric structures of curve diffusion (i.e., the perimeter-decreasing and area-preserving properties), our formulation incorporates two scalar Lagrange multipliers and two evolution equations involving the perimeter and area, respectively. By discretizing the spatial variable with piecewise linear finite elements and the temporal variable with either the Crank-Nicolson method or the backward differentiation formulae, we develop high-order temporal schemes that preserve the structure at the fully discrete level. These new schemes are implicit and can be efficiently solved using Newton's method. Extensive numerical experiments demonstrate that our methods achieve the desired temporal accuracy, as measured by the manifold distance, while simultaneously preserving the geometric structure of the curve diffusion.
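The perimeter-decreasing and area-preserving properties above can be monitored directly on a piecewise linear (polygonal) discretization. As a minimal illustration (our own sketch, not the paper's scheme), the discrete perimeter is the sum of edge lengths, and the enclosed area follows from the shoelace formula:

```python
import math

def perimeter(pts):
    """Discrete perimeter: sum of edge lengths of a closed polygon."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def shoelace_area(pts):
    """Signed enclosed area of a closed polygon (shoelace formula)."""
    n = len(pts)
    return 0.5 * sum(pts[i][0] * pts[(i + 1) % n][1]
                     - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))

# Regular 1000-gon approximating the unit circle
N = 1000
circle = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
          for k in range(N)]
print(round(perimeter(circle), 3))      # → 6.283 (≈ 2π)
print(round(shoelace_area(circle), 3))  # → 3.142 (≈ π)
```

A structure-preserving fully discrete scheme keeps the second quantity constant up to solver tolerance while the first decreases monotonically from step to step.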
In this paper, we propose a new class of splitting methods for the stochastic Langevin equation that simultaneously preserve the ergodicity and exponential integrability of the original equation. The central idea is to extract from the original equation a stochastic subsystem that possesses strict dissipation, inspired by the inheritance of the Lyapunov structure used to obtain ergodicity. We prove that the exponential moment of the numerical solution is bounded, thus validating the exponential integrability of the proposed methods. Further, we show that under moderate, verifiable conditions the methods have first-order convergence in both the strong and weak senses, and we present several concrete splitting schemes based on the methods. The splitting strategy can be readily extended to construct conformal symplectic methods and high-order methods that preserve both the ergodicity and the exponential integrability, as demonstrated in numerical experiments. Our numerical experiments also show that the proposed methods perform well in long-time simulation.
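As a minimal illustration of the splitting idea (our own sketch with illustrative parameters, not one of the paper's schemes), the underdamped Langevin equation dq = p dt, dp = (-V'(q) - gamma*p) dt + sigma dW can be split into a strictly dissipative Ornstein-Uhlenbeck subsystem, which is solved exactly, and a Hamiltonian part, here integrated with symplectic Euler; for V(q) = q^2/2 the ergodic average of p^2 should approach the stationary value sigma^2/(2*gamma):

```python
import math, random

def langevin_splitting(q, p, gamma, sigma, dt, n_steps, dV, rng):
    """Lie-Trotter splitting: exact solve of the dissipative subsystem
    dp = -gamma*p dt + sigma dW (an Ornstein-Uhlenbeck step), composed
    with a symplectic Euler step for dq = p dt, dp = -dV(q) dt."""
    decay = math.exp(-gamma * dt)
    noise = sigma * math.sqrt((1.0 - decay * decay) / (2.0 * gamma))
    traj = []
    for _ in range(n_steps):
        p = decay * p + noise * rng.gauss(0.0, 1.0)  # exact OU step
        p = p - dt * dV(q)                           # symplectic Euler
        q = q + dt * p
        traj.append((q, p))
    return traj

rng = random.Random(0)
traj = langevin_splitting(0.0, 0.0, gamma=1.0, sigma=math.sqrt(2.0),
                          dt=0.05, n_steps=200_000, dV=lambda q: q, rng=rng)
# Ergodic average of p^2; close to sigma^2/(2*gamma) = 1 for these parameters
mean_p2 = sum(p * p for _, p in traj[10_000:]) / len(traj[10_000:])
print(mean_p2)
```

The strictly dissipative subsystem carries the noise and the damping, which is what lets the exact OU step inherit the Lyapunov structure in this toy setting.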
The phase-field method has become popular for the numerical modeling of fluid-filled fractures, thanks to its ability to represent complex fracture geometry without algorithms. However, the algorithm-free representation of fracture geometry poses a significant challenge in calculating the crack opening (aperture) of phase-field fracture, which governs the fracture permeability and hence the overall hydromechanical behavior. Although several approaches have been devised to compute the crack opening of phase-field fracture, they require a sophisticated algorithm for post-processing the phase-field values or an additional parameter sensitive to the element size and alignment. Here, we develop a novel method for calculating the crack opening of fluid-filled phase-field fracture, which enables one to obtain the crack opening without additional algorithms or parameters. We transform the displacement-jump-based kinematics of a fracture into a continuous strain-based version, insert it into a force balance equation on the fracture, and apply the phase-field approximation. Through this procedure, we obtain a simple equation for the crack opening, which can be calculated with quantities at individual material points. We verify the proposed method with analytical and numerical solutions obtained based on discrete representations of fractures, demonstrating its capability to calculate the crack opening regardless of the element size or alignment.
This paper introduces a nonconforming virtual element method for general second-order elliptic problems with variable coefficients on domains with curved boundaries and curved internal interfaces. We prove arbitrary-order optimal convergence in the energy and $L^2$ norms, confirmed by numerical experiments on a set of polygonal meshes. The accuracy of the numerical approximation provided by the method is shown to be consistent with the theoretical analysis.
Several formulations for constructing novel physics-informed neural networks (PINNs) for the solution of partial differential-algebraic equations based on derivative operator splitting are proposed, using the nonlinear Kirchhoff rod as a prototype for demonstration. The open-source DeepXDE is likely the best-documented framework, with many examples. Yet we encountered some pathological problems and propose novel methods to resolve them. Among these novel methods are PDE forms that evolve from the lower-level form, with fewer unknown dependent variables, to higher-level forms with more dependent variables, in addition to forms derived from the lower-level ones. Traditionally, the highest-level form, the balance-of-momenta form, is the starting point for (hand) deriving the lowest-level form through a tedious (and error-prone) process of successive substitutions. In a finite element method, the next step is to construct a weak form of the lowest-level form, linearize it, and discretize it with appropriate interpolation functions, followed by implementation in a code and testing. The time-consuming tedium of all these steps can be bypassed by applying the proposed novel PINN directly to the highest-level form. We also developed a script based on JAX. While our JAX script did not show the pathological problems of DDE-T (DeepXDE with TensorFlow backend), it is slower than DDE-T. That DDE-T is itself more efficient on the higher-level form than on the lower-level form makes working directly with the higher-level form even more attractive, in addition to the advantages mentioned above. Since devising an appropriate learning-rate schedule for a good solution is more art than science, we systematically codify our optimization experience in detail through a normalization/standardization of the network-training process, so that readers can reproduce our results.
We investigate the dividing line between classical and quantum computational power in estimating properties of matrix functions. More precisely, we study the computational complexity of two primitive problems: given a function $f$ and a Hermitian matrix $A$, compute a matrix element of $f(A)$ or compute a local measurement on $f(A)|0\rangle^{\otimes n}$, with $|0\rangle^{\otimes n}$ an $n$-qubit reference state vector, in both cases up to additive approximation error. We consider four functions -- monomials, Chebyshev polynomials, the time evolution function, and the inverse function -- and probe the complexity across a broad landscape covering different problem input regimes. Namely, we consider two types of matrix inputs (sparse and Pauli access), matrix properties (norm, sparsity), the approximation error, and function-specific parameters. We identify BQP-complete forms of both problems for each function and then toggle the problem parameters to easier regimes to see where hardness remains, or where the problem becomes classically easy. As part of our results, we establish a concrete hierarchy of hardness across the functions: in parameter regimes where we have classically efficient algorithms for monomials, all three other functions remain robustly BQP-hard, or hard under standard computational-complexity assumptions. In identifying classically easy regimes, among others, we show that for any polynomial of degree $\mathrm{poly}(n)$ both problems can be efficiently classically simulated when $A$ has $O(\log n)$ non-zero coefficients in the Pauli basis. This contrasts with the fact that the problems are BQP-complete in the sparse access model even for constant row sparsity, whereas the stated Pauli access efficiently constructs sparse access with row sparsity $O(\log n)$. Our work provides a catalog of efficient quantum and classical algorithms for fundamental linear-algebra tasks.
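The classically easy Pauli-access regime mentioned above can be made concrete: when A = sum_j c_j P_j has m Pauli terms, a matrix element of the monomial A^k expands into m^k products of Pauli strings, and a product contributes to <0...0| A^k |0...0> only if it contains no X or Y factor. A small self-contained sketch (our own illustration, with an illustrative two-qubit input):

```python
from itertools import product

# Single-qubit Pauli multiplication: (P, Q) -> (phase, R) with P*Q = phase*R
MUL = {
    ('I', 'I'): (1, 'I'), ('I', 'X'): (1, 'X'), ('I', 'Y'): (1, 'Y'), ('I', 'Z'): (1, 'Z'),
    ('X', 'I'): (1, 'X'), ('X', 'X'): (1, 'I'), ('X', 'Y'): (1j, 'Z'), ('X', 'Z'): (-1j, 'Y'),
    ('Y', 'I'): (1, 'Y'), ('Y', 'X'): (-1j, 'Z'), ('Y', 'Y'): (1, 'I'), ('Y', 'Z'): (1j, 'X'),
    ('Z', 'I'): (1, 'Z'), ('Z', 'X'): (1j, 'Y'), ('Z', 'Y'): (-1j, 'X'), ('Z', 'Z'): (1, 'I'),
}

def mul_strings(a, b):
    """Multiply two Pauli strings letterwise; return (phase, product string)."""
    phase, out = 1, []
    for p, q in zip(a, b):
        ph, r = MUL[(p, q)]
        phase *= ph
        out.append(r)
    return phase, ''.join(out)

def matrix_element_monomial(terms, k):
    """<0...0| A^k |0...0> for A = sum_j c_j P_j, by brute-force expansion.
    Cost m^k for m Pauli terms, so feasible when m is small and k modest."""
    n = len(terms[0][1])
    total = 0
    for combo in product(terms, repeat=k):
        coeff, phase, s = 1, 1, 'I' * n
        for c, pstr in combo:
            coeff *= c
            ph, s = mul_strings(s, pstr)
            phase *= ph
        if all(ch in 'IZ' for ch in s):  # <0|X|0> = <0|Y|0> = 0
            total += coeff * phase
    return total

A = [(0.5, 'XI'), (0.5, 'ZZ')]
print(matrix_element_monomial(A, 2))  # → 0.5, since A^2 = 0.5 * I
```

The cross terms (X⊗I)(Z⊗Z) and (Z⊗Z)(X⊗I) produce Y factors and drop out, which is exactly the mechanism the indicator test implements.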
Motivated by information geometry, distance functions on the space of stochastic matrices are investigated. Starting with sequences of Markov chains, the Bhattacharyya angle is advocated as the natural tool for comparing both short- and long-term Markov chain runs. Bounds on the convergence of the distance and on mixing times are derived. Guided by the desire to compare different Markov chain models, especially in the setting of healthcare processes, a new distance function on the space of stochastic matrices is presented. It is a true distance measure, has a closed form, and is efficient to implement for numerical evaluation. In the case of ergodic Markov chains, it is shown that considering either the Bhattacharyya angle on Markov sequences or the new stochastic matrix distance leads to the same distance between models.
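For two discrete distributions p and q, the Bhattacharyya angle is the arccosine of the Bhattacharyya coefficient sum_i sqrt(p_i * q_i). A minimal sketch (our own illustration, not the paper's new matrix distance) comparing the n-step distributions of two chains started from the same state:

```python
import math

def bhattacharyya_angle(p, q):
    """Bhattacharyya angle arccos(sum_i sqrt(p_i * q_i)) between two
    discrete probability distributions given as equal-length lists."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return math.acos(min(1.0, bc))  # clamp against round-off

def step(dist, P):
    """One Markov step: row vector times row-stochastic matrix."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

P1 = [[0.9, 0.1], [0.2, 0.8]]   # slowly mixing chain
P2 = [[0.5, 0.5], [0.5, 0.5]]   # mixes in one step
d1 = d2 = [1.0, 0.0]
for _ in range(5):
    d1, d2 = step(d1, P1), step(d2, P2)
print(bhattacharyya_angle(d1, d2))  # positive: the 5-step laws still differ
```

The angle is zero exactly when the two distributions coincide, which makes it a natural yardstick for how far two chain runs have drifted apart.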
We propose and analyze an $H^2$-conforming Virtual Element Method (VEM) for the simplest linear elliptic PDEs in nondivergence form with Cordes coefficients. The VEM hinges on a hierarchical construction valid for any dimension $d \ge 2$. The analysis relies on the continuous Miranda-Talenti estimate for convex domains $\Omega$ and is rather elementary. We prove stability and error estimates in $H^2(\Omega)$, including the effect of quadrature, under minimal regularity of the data. Numerical experiments illustrate the interplay of coefficient regularity and convergence rates in $H^2(\Omega)$.
The incompressible Euler equations are an important model system in computational fluid dynamics. Fast high-order methods for the solution of this time-dependent system of partial differential equations are of particular interest: thanks to their exponential convergence in the polynomial degree, they can make efficient use of computational resources. To address this challenge, we describe a novel timestepping method which combines a hybridised Discontinuous Galerkin method for the spatial discretisation with IMEX timestepping schemes, thus achieving high-order accuracy in both space and time. The computational bottleneck is the solution of a (block-)sparse linear system to compute updates to pressure and velocity at each stage of the IMEX integrator. Following Chorin's projection approach, this update of the velocity and pressure fields is split into two stages. As a result, the hybridised equation for the implicit pressure-velocity problem reduces to the well-known system which arises in hybridised mixed formulations of the Poisson or diffusion problem, and for which efficient multigrid preconditioners have been developed. Splitting errors can be reduced systematically by embedding this update in a preconditioned Richardson iteration. The accuracy and efficiency of the new method are demonstrated numerically for two time-dependent test cases that have been previously studied in the literature.
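The splitting-error correction mentioned above has the generic form of a preconditioned Richardson iteration, x ← x + P⁻¹(b − Ax), which converges when the spectral radius of I − P⁻¹A is below one. A minimal sketch with a Jacobi (diagonal) preconditioner standing in for the multigrid preconditioner (illustrative system, not the paper's solver):

```python
def richardson(A, b, apply_Pinv, x0, n_iter):
    """Preconditioned Richardson iteration x <- x + P^{-1} (b - A x)
    for a dense matrix A stored as a list of rows."""
    n = len(b)
    x = list(x0)
    for _ in range(n_iter):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        dx = apply_Pinv(r)
        x = [x[i] + dx[i] for i in range(n)]
    return x

# Diagonally dominant test system; Jacobi preconditioner P = diag(A)
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
jacobi = lambda r: [r[i] / A[i][i] for i in range(len(r))]
x = richardson(A, b, jacobi, [0.0, 0.0, 0.0], 50)
print(x)  # residual b - A x is negligible after 50 iterations
```

Each iteration only needs one application of A and one of P⁻¹, which is why wrapping an inexpensive approximate solve (here Jacobi, in the paper a multigrid-preconditioned update) in this loop systematically reduces the splitting error.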
This paper focuses on decoupled finite element methods for the fourth-order exterior differential equation. Based on differential complexes and the Helmholtz decomposition, the fourth-order exterior differential equation is decomposed into two second-order exterior differential equations and one generalized Stokes equation. A family of conforming finite element methods is developed for the decoupled formulation. Numerical results are provided for verifying the decoupled finite element methods of the biharmonic equation in three dimensions.
Gaussian graphical models are nowadays commonly applied to the comparison of groups sharing the same variables, by jointly learning their independence structures. We consider the case where there are exactly two dependent groups and the association structure is represented by a family of coloured Gaussian graphical models suited to paired data problems. To learn the two dependent graphs, together with their across-graph association structure, we implement a fused graphical lasso penalty. We carry out a comprehensive analysis of this approach, with special attention to the role played by some relevant submodel classes. In this way, we provide a broad set of tools for the application of Gaussian graphical models to paired data problems. These include results useful for the specification of penalty values in order to obtain a path of lasso solutions, and an ADMM algorithm that solves the fused graphical lasso optimization problem. Finally, we present an application of our method to cancer genomics, where it is of interest to compare cancer cells with a control sample from histologically normal tissues adjacent to the tumor. All the methods described in this article are implemented in the $\texttt{R}$ package $\texttt{pdglasso}$ available at https://github.com/savranciati/pdglasso.
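The fused graphical lasso penalty couples the two precision matrices through a term lambda2 * |theta1_ij - theta2_ij| in addition to the usual lasso term lambda1 * (|theta1_ij| + |theta2_ij|). Inside an ADMM, the corresponding proximal step for a pair of entries has a closed form when there are exactly two groups: fuse the pair first, then soft-threshold. A minimal sketch (our own illustration; function and parameter names are ours, not the pdglasso API):

```python
def soft(z, t):
    """Soft-thresholding operator: shrink z toward zero by t."""
    return (abs(z) - t) * (1 if z > 0 else -1) if abs(z) > t else 0.0

def fused_pair_prox(z1, z2, lam1, lam2, rho):
    """Closed-form proximal operator of the two-group fused lasso penalty
    lam1*(|x1| + |x2|) + lam2*|x1 - x2|, with ADMM penalty parameter rho.
    Step 1: fuse the pair (set both to the mean if they are close enough,
    otherwise move each toward the other by lam2/rho).
    Step 2: elementwise soft-thresholding with lam1/rho."""
    if abs(z1 - z2) <= 2.0 * lam2 / rho:
        y1 = y2 = 0.5 * (z1 + z2)
    elif z1 > z2:
        y1, y2 = z1 - lam2 / rho, z2 + lam2 / rho
    else:
        y1, y2 = z1 + lam2 / rho, z2 - lam2 / rho
    return soft(y1, lam1 / rho), soft(y2, lam1 / rho)

# Nearby entries get fused to a common value, then shrunk
print(fused_pair_prox(1.0, 0.9, lam1=0.1, lam2=0.2, rho=1.0))
# Distant entries stay distinct but are pulled together and shrunk
print(fused_pair_prox(1.0, 0.0, lam1=0.1, lam2=0.2, rho=1.0))
```

This fuse-then-shrink step is applied entrywise at each ADMM iteration, which is what produces exact across-group equalities in the estimated precision matrices.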