In this paper, a peridynamics-based finite element method (Peri-FEM) is proposed for quasi-static fracture analysis, which shares a consistent computational framework with the classical finite element method (FEM). First, the integral domain of peridynamics is reconstructed, and a new type of element called the peridynamic element (PE) is defined. Although PEs are generated from the continuous elements (CEs) of classical FEM, they do not affect each other. Then the spatial discretization is performed based on PEs and CEs, and linear equations for the nodal displacements are established according to the principle of minimum potential energy. In addition, cracks are characterized as the degradation of the mechanical properties of PEs. Finally, the validity of the proposed method is demonstrated through numerical examples.
The objective of the current study is to utilize an innovative method called 'change probabilities' for describing fracture roughness. In order to detect and visualize the anisotropy of rock joint surfaces, the roughness of one-dimensional profiles taken in different directions is quantified. The central quantifiers, 'change probabilities', are based on counting monotone changes in discretizations of a profile. These probabilities, which usually vary with the scale, can be reinterpreted as scale-dependent Hurst exponents. For a large class of Gaussian stochastic processes, change probabilities are shown to be directly related to the classical Hurst exponent, which generalizes a relationship known for fractional Brownian motion. While being related to this classical roughness measure, the proposed method is more generally applicable, thereby increasing the flexibility of modeling and investigating surface profiles. In particular, it allows a quick and efficient visualization and detection of roughness anisotropy and scale dependence of roughness.
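The core quantifier can be illustrated concretely: at a given scale, subsample the profile, take increments, and count sign flips between consecutive increments. The following Python sketch uses illustrative names of my own (the paper's exact estimator may differ); as a sanity check, for ordinary Brownian motion the increments are independent, so the change probability should be close to $1/2$.

```python
import numpy as np

def change_probability(profile, scale=1):
    """Estimate the probability of a monotone change (sign flip between
    consecutive increments) of a 1-D profile subsampled at the given scale.
    Function and parameter names are illustrative, not from the paper."""
    x = np.asarray(profile, dtype=float)[::scale]
    inc = np.diff(x)
    inc = inc[inc != 0]                 # ignore flat steps
    signs = np.sign(inc)
    changes = np.count_nonzero(signs[1:] != signs[:-1])
    return changes / max(len(signs) - 1, 1)
```

Evaluating this at several scales, and along profiles taken in several directions, gives the scale- and direction-dependent roughness maps the abstract describes.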
The method-of-moments implementation of the electric-field integral equation yields many code-verification challenges due to the various sources of numerical error and their possible interactions. Matters are further complicated by singular integrals, which arise from the presence of a Green's function. In this paper, we provide approaches to separately assess the numerical errors arising from the use of basis functions to approximate the solution and the use of quadrature to approximate the integration. Through these approaches, we are able to verify the code and compare the error from different quadrature options.
The statistical finite element method (StatFEM) is an emerging probabilistic method that allows observations of a physical system to be synthesised, in a coherent statistical framework, with the numerical solution of a PDE intended to describe it, in order to compensate for model error. This work presents a new theoretical analysis of the statistical finite element method demonstrating that it has similar convergence properties to the finite element method on which it is based. Our results constitute a bound on the Wasserstein-2 distance between the ideal prior and posterior and the StatFEM approximation thereof, and show that this distance converges at the same mesh-dependent rate as finite element solutions converge to the true solution. Several numerical examples are presented to demonstrate our theory, including an example which tests the robustness of StatFEM when extended to nonlinear quantities of interest.
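As a point of reference for the metric appearing in the bound: between two one-dimensional Gaussians the Wasserstein-2 distance has the closed form $W_2^2 = (m_1-m_2)^2 + (\sigma_1-\sigma_2)^2$. A minimal Python sketch (this illustrates the metric only, not StatFEM itself):

```python
import math

def w2_gauss_1d(m1, s1, m2, s2):
    """Wasserstein-2 distance between the 1-D Gaussians N(m1, s1^2) and
    N(m2, s2^2): sqrt((m1-m2)^2 + (s1-s2)^2). Illustrative helper name."""
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)
```

In the multivariate Gaussian case the variance term is replaced by a trace of covariance square roots, but the scalar formula already shows how the distance penalises both mean and uncertainty mismatch.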
In the present paper we initiate the challenging task of building a mathematically sound theory for Adaptive Virtual Element Methods (AVEMs). Among the realm of polygonal meshes, we restrict our analysis to triangular meshes with hanging nodes in 2d -- the simplest meshes with a systematic refinement procedure that preserves shape regularity and optimal complexity. A major challenge in the a posteriori error analysis of AVEMs is the presence of the stabilization term, which is of the same order as the residual-type error estimator but prevents the equivalence of the latter with the energy error. Under the assumption that any chain of recursively created hanging nodes has uniformly bounded length, we show that the stabilization term can be made arbitrarily small relative to the error estimator provided the stabilization parameter of the scheme is sufficiently large. This quantitative estimate leads to stabilization-free upper and lower a posteriori bounds for the energy error. This novel and crucial property of VEMs hinges on the largest subspace of continuous piecewise linear functions and the delicate interplay between its coarser scales and the finer ones of the VEM space. Our results apply to $H^1$-conforming (lowest order) VEMs of any kind, including the classical and enhanced VEMs.
We investigate the equilibrium behavior for the decentralized cheap talk problem for real random variables and quadratic cost criteria in which an encoder and a decoder have misaligned objective functions. In prior work, it has been shown that the number of bins in any equilibrium has to be countable, generalizing a classical result due to Crawford and Sobel who considered sources with density supported on $[0,1]$. In this paper, we first refine this result in the context of log-concave sources. For sources with two-sided unbounded support, we prove that, for any finite number of bins, there exists a unique equilibrium. In contrast, for sources with semi-unbounded support, there may be a finite upper bound on the number of bins in equilibrium depending on certain conditions stated explicitly. Moreover, we prove that for log-concave sources, the expected costs of the encoder and the decoder in equilibrium decrease as the number of bins increases. Furthermore, for strictly log-concave sources with two-sided unbounded support, we prove convergence to the unique equilibrium under best response dynamics starting from a given number of bins, making a connection with the classical theory of optimal quantization and convergence results of Lloyd's method. In addition, we consider more general sources which satisfy certain assumptions on the tail(s) of the distribution and we show that there exist equilibria with infinitely many bins for sources with two-sided unbounded support. Further explicit characterizations are provided for sources with exponential, Gaussian, and compactly-supported probability distributions.
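The connection to Lloyd's method can be made concrete: for a scalar source, the classical algorithm alternates nearest-representative partitioning with conditional-mean updates. A generic sample-based sketch in Python (function and parameter names are mine, and this is the standard quantization routine, not the paper's equilibrium analysis):

```python
import numpy as np

def lloyd_quantizer(samples, n_bins, iters=200):
    """1-D Lloyd's method on empirical data: alternate (i) assigning each
    sample to its nearest representative and (ii) moving each representative
    to the conditional mean of its bin. Illustrative sketch only."""
    samples = np.sort(np.asarray(samples, dtype=float))
    # initialize representatives at sample quantiles
    reps = np.quantile(samples, (np.arange(n_bins) + 0.5) / n_bins)
    for _ in range(iters):
        edges = (reps[:-1] + reps[1:]) / 2       # bin boundaries: midpoints
        idx = np.searchsorted(edges, samples)    # nearest-representative bins
        new = np.array([samples[idx == j].mean() if np.any(idx == j) else reps[j]
                        for j in range(n_bins)])
        if np.allclose(new, reps):
            break
        reps = new
    return reps
```

For a standard Gaussian source and two bins, this converges to representatives near $\pm\sqrt{2/\pi} \approx \pm 0.80$, the optimal two-level quantizer; in the cheap talk setting the misaligned objectives shift these fixed points.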
In this paper we derive error bounds for fully discrete approximations of infinite horizon problems via the dynamic programming approach. It is well known that, for a time discretization with positive step size $h$, an error bound of size $h$ can be proved for the difference between the value function (the viscosity solution of the Hamilton-Jacobi-Bellman equation corresponding to the infinite horizon problem) and the value function of the discrete-time problem. However, when a spatial discretization based on elements of size $k$ is also included, an error bound of size $O(k/h)$ can be found in the literature for the error between the value functions of the continuous problem and the fully discrete problem. In this paper we revisit the error bound of the fully discrete method and prove, under assumptions similar to those of the time-discrete case, that the error of the fully discrete case is in fact $O(h+k)$, which gives first order in time and space for the method. This error bound matches the numerical experiments of many papers in the literature, in which the behaviour $1/h$ from the bound $O(k/h)$ has not been observed.
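On a finite state/control grid, the fully discrete problem reduces to the value-iteration fixed point $V(x)=\min_a\{h\,\ell(x,a)+(1-\lambda h)\,V(x_a)\}$, a contraction with factor $1-\lambda h$; that factor is precisely where $1/h$ terms can enter older bounds. A minimal Python sketch under the assumption of finitely many states and controls (the arrays and names are illustrative, not the paper's scheme):

```python
import numpy as np

def value_iteration(costs, trans, h=0.1, lam=1.0, tol=1e-10):
    """Discounted infinite-horizon dynamic programming on a finite grid.
    costs[x, a]: running cost of control a in state x.
    trans[x, a]: index of the successor state. Illustrative sketch."""
    V = np.zeros(costs.shape[0])
    while True:
        # Bellman operator: one step of cost plus discounted continuation
        Q = h * costs + (1.0 - lam * h) * V[trans]
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

The iteration converges geometrically at rate $1-\lambda h$ to the value function of the fully discrete problem, which the paper compares against the continuous one.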
We present a hybrid-mixed finite element method for a novel hybrid-dimensional model of single-phase Darcy flow in fractured porous media. In this model, the fracture is treated as a $(d-1)$-dimensional interface within the $d$-dimensional fractured porous domain, for $d=2, 3$. Two classes of fracture are distinguished based on the permeability magnitude ratio between the fracture and its surrounding medium: when the permeability in the fracture is (significantly) larger than in its surrounding medium, it is considered a {\it conductive} fracture; when the permeability in the fracture is (significantly) smaller than in its surrounding medium, it is considered a {\it blocking} fracture. The conductive fractures are treated using the classical hybrid-dimensional approach of the interface model, where the pressure is assumed to be continuous across the fracture interfaces, while the blocking fractures are treated using the recent Dirac-$\delta$ function approach, where the normal component of the Darcy velocity is assumed to be continuous across the interface. Due to the use of the Dirac-$\delta$ function approach for the blocking fractures, our numerical scheme allows for nonconforming meshes with respect to the blocking fractures. This is the major novelty of our model and numerical discretization. Moreover, our numerical scheme produces locally conservative velocity approximations and leads to a symmetric positive definite linear system involving pressure degrees of freedom on the mesh skeleton only. The performance of the proposed method is demonstrated by various benchmark test cases in both two and three dimensions. Numerical results indicate that the proposed scheme is highly competitive with existing methods in the literature.
In this paper, we study an initial-boundary value problem of Kirchhoff type involving a memory term for non-homogeneous materials. The purpose of this research is threefold. First, we prove the existence and uniqueness of weak solutions to the problem using the Galerkin method. Second, to obtain numerical solutions efficiently, we develop an L1-type backward Euler-Galerkin FEM, which is $O(h+k^{2-\alpha})$ accurate, where $\alpha~(0<\alpha<1)$ is the order of the fractional time derivative, and $h$ and $k$ are the discretization parameters in the space and time directions, respectively. Next, to achieve the optimal rate of convergence in time, we propose a fractional Crank-Nicolson-Galerkin FEM based on the L2-1$_{\sigma}$ scheme. We prove that the numerical solutions of this scheme converge to the exact solution with accuracy $O(h+k^{2})$. We also derive a priori bounds on the numerical solutions for the proposed schemes. Finally, some numerical experiments are conducted to validate our theoretical claims.
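The L1 ingredient of the backward Euler-Galerkin scheme can be shown in isolation: the Caputo derivative of order $\alpha$ is approximated with the standard weights $b_j=(j+1)^{1-\alpha}-j^{1-\alpha}$. A short Python sketch of this textbook discretization (names are mine; the approximation is exact for piecewise-linear data such as $u(t)=t$, whose Caputo derivative is $t^{1-\alpha}/\Gamma(2-\alpha)$):

```python
import math

def l1_caputo(u, k, alpha):
    """L1 approximation of the Caputo derivative of order alpha (0<alpha<1)
    at the last grid point of the samples u, taken on a uniform grid of
    step k. Standard textbook formula; illustrative sketch only."""
    n = len(u) - 1
    c = k ** (-alpha) / math.gamma(2.0 - alpha)
    b = [(j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha) for j in range(n)]
    return c * sum(b[j] * (u[n - j] - u[n - j - 1]) for j in range(n))
```

The $O(k^{2-\alpha})$ accuracy quoted in the abstract is the generic order of this approximation for smooth solutions; the L2-1$_{\sigma}$ weights replace it to recover second order in time.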
Computational fluctuating hydrodynamics aims at understanding the impact of thermal fluctuations on fluid motion at small scales through numerical exploration. These fluctuations are modeled as stochastic flux terms and incorporated into the classical Navier-Stokes equations, which need to be solved numerically. In this paper, we present a novel projection-based method for solving the incompressible fluctuating hydrodynamics equations. By analyzing the equilibrium structure factor spectrum of the velocity field, we investigate how the inherent splitting errors affect the numerical solution of the stochastic partial differential equations in the presence of non-periodic boundary conditions, and how iterative corrections can reduce these errors. Our computational examples demonstrate both the capability of our approach to correctly reproduce the stochastic properties of fluids at small scales and its potential use in simulations of multi-physics problems.
A key advantage of isogeometric discretizations is their accurate and well-behaved eigenfrequencies and eigenmodes. For degree two and higher, however, optical branches of spurious outlier frequencies and modes may appear due to boundaries or reduced continuity at patch interfaces. In this paper, we introduce a variational approach based on perturbed eigenvalue analysis that eliminates outlier frequencies without negatively affecting the accuracy in the remainder of the spectrum and modes. We then propose a pragmatic iterative procedure that estimates the perturbation parameters in such a way that the outlier frequencies are effectively reduced. We demonstrate that our approach allows for a much larger critical time-step size in explicit dynamics calculations. In addition, we show that the critical time-step size obtained with the proposed approach does not depend on the polynomial degree of spline basis functions.