Surface Stokes and Navier-Stokes equations are used to model fluid flow on surfaces. They have attracted significant recent attention in the numerical analysis literature because approximating their solutions poses challenges not encountered in the Euclidean context. One challenge comes from the need to simultaneously enforce tangentiality and $H^1$ conformity (continuity) of the discrete vector fields used to approximate solutions in the velocity-pressure formulation. Existing methods in the literature all enforce one of these two constraints weakly, either by penalization or by the use of Lagrange multipliers. Missing so far is a robust and systematic construction of surface Stokes finite element spaces which employ nodal degrees of freedom, including MINI, Taylor-Hood, Scott-Vogelius, and other composite elements which can lead to divergence-conforming or pressure-robust discretizations. In this paper we construct surface MINI spaces whose velocity fields are tangential. They are not $H^1$-conforming, but do lie in $H({\rm div})$ and do not require penalization to achieve optimal convergence rates. We prove stability and optimal-order energy-norm convergence of the method and demonstrate optimal-order convergence of the velocity field in $L_2$ via numerical experiments. The core advance in the paper is the construction of nodal degrees of freedom for the velocity field. This technique may also be used to construct surface counterparts to many other standard Euclidean Stokes spaces, and we accordingly present numerical experiments indicating optimal-order convergence of nonconforming tangential surface Taylor-Hood $\mathbb{P}^2-\mathbb{P}^1$ elements.
We study a class of Gaussian processes for which the posterior mean, for a particular choice of data, replicates a truncated Taylor expansion of any order. The data consist of derivative evaluations at the expansion point, and the prior covariance kernel belongs to the class of Taylor kernels, which can be written in a certain power series form. We discuss and prove some results on maximum likelihood estimation of parameters of Taylor kernels. The proposed framework is a special case of Gaussian process regression based on data that are orthogonal in the reproducing kernel Hilbert space of the covariance kernel.
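As a rough numerical illustration of this replication property (our own toy example under stated assumptions, not the paper's code): for the Taylor kernel $k(x,y) = \exp(xy)$, the Gram matrix of derivative evaluations at the expansion point $0$ is diagonal with entries $j!$, so the posterior mean given the data $f^{(j)}(0)$ collapses to the truncated Taylor polynomial of $f$.

```python
import math
import numpy as np

# Toy sketch: Taylor kernel k(x, y) = exp(x*y), data f^(j)(0) for f = exp.
# The Gram matrix of the derivative functionals at 0 is diag(j!), and the
# cross-covariance vector is (1, x, x^2, ...), so the posterior mean
# reproduces the degree-(n-1) Taylor polynomial.
n = 6
K = np.diag([math.factorial(j) for j in range(n)])  # covariance of derivative data
y = np.ones(n)  # f^(j)(0) = 1 for f(x) = exp(x)

def posterior_mean(x):
    k_star = np.array([x**j for j in range(n)])  # d^j/dy^j k(x, y) at y = 0
    return k_star @ np.linalg.solve(K, y)

x = 0.5
taylor = sum(x**j / math.factorial(j) for j in range(n))
assert abs(posterior_mean(x) - taylor) < 1e-12
```

The diagonal Gram matrix here is exactly the orthogonality of the derivative data in the reproducing kernel Hilbert space that the abstract refers to.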
This paper deals with speeding up the convergence of a class of two-step iterative methods for solving linear systems of equations. To implement the acceleration technique, the residual norm associated with computed approximations for each sub-iterate is minimized over a certain two-dimensional subspace. Convergence properties of the proposed method are studied in detail. The approach is further developed to solve (regularized) normal equations arising from the discretization of ill-posed problems. The results of numerical experiments are reported to illustrate the performance of exact and inexact variants of the method on several test problems from different application areas.
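The residual-minimization step can be sketched generically (our own illustration with hypothetical names, not the authors' implementation): given two correction directions produced by the sub-iterates, the best combination over the spanned two-dimensional subspace is found from a tiny least-squares problem.

```python
import numpy as np

# Generic sketch of the acceleration idea: given two correction directions
# d1, d2 from the sub-iterates of a two-step scheme, choose the combination
# x + a*d1 + b*d2 that minimizes ||rhs - A x|| via a 2-column least-squares
# problem. (Names and setup are illustrative, not from the paper.)
def minimize_residual_2d(A, rhs, x, d1, d2):
    V = np.column_stack([A @ d1, A @ d2])  # image of the 2D subspace under A
    coef, *_ = np.linalg.lstsq(V, rhs - A @ x, rcond=None)
    return x + coef[0] * d1 + coef[1] * d2

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)  # well-conditioned test matrix
rhs = rng.standard_normal(50)
x = np.zeros(50)
r = rhs - A @ x
x_new = minimize_residual_2d(A, rhs, x, r, A @ r)  # steepest-descent-like directions
# By optimality over the subspace (which contains the zero correction),
# the residual norm cannot increase.
assert np.linalg.norm(rhs - A @ x_new) <= np.linalg.norm(r) + 1e-12
```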
The numerical solution of continuum damage mechanics (CDM) problems suffers from critical points during the material softening stage, and consequently existing iterative solvers are subject to a trade-off between computational expense and solution accuracy. Displacement-controlled arc-length methods were developed to address these challenges, but are currently applicable only to geometrically non-linear problems. In this work, we present a novel displacement-controlled arc-length (DAL) method for CDM problems in both local damage and non-local gradient damage versions. The analytical tangent matrix is derived for the DAL solver for both the local and the non-local models. In addition, several consistent and non-consistent implementation algorithms are proposed, implemented, and evaluated. Unlike existing force-controlled arc-length solvers that monolithically scale the external force vector, the proposed method treats the external force vector as an independent variable and determines the position of the system on the equilibrium path based on all the nodal variations of the external force vector. This flexibility makes the proposed solver substantially more efficient and versatile than existing solvers used in CDM problems. The considerable advantages of the proposed DAL algorithm are demonstrated on several benchmark 1D problems with sharp snap-backs and 2D examples with various boundary conditions and loading scenarios, where the proposed method drastically outperforms existing conventional approaches in terms of accuracy, computational efficiency, and the ability to predict the complete equilibrium path, including all critical points.
Introduction. There is currently no guidance on how to assess the calibration of multistate models used for risk prediction. We introduce several techniques that can be used to produce calibration plots for the transition probabilities of a multistate model, before assessing their performance in the presence of non-informative and informative censoring through a simulation. Methods. We studied pseudo-values based on the Aalen-Johansen estimator, binary logistic regression with inverse probability of censoring weights (BLR-IPCW), and multinomial logistic regression with inverse probability of censoring weights (MLR-IPCW). The MLR-IPCW approach results in a calibration scatter plot, providing extra insight into calibration. We simulated data with varying levels of censoring and evaluated the ability of each method to estimate the calibration curve for a set of predicted transition probabilities. We also developed a model predicting the incidence of cardiovascular disease, type 2 diabetes and chronic kidney disease among a cohort of patients derived from linked primary and secondary healthcare records, and evaluated its calibration. Results. The pseudo-value, BLR-IPCW and MLR-IPCW approaches give unbiased estimates of the calibration curves under non-informative censoring. These methods remained unbiased in the presence of informative censoring, unless the mechanism was strongly informative, with bias concentrated in the areas of predicted transition probabilities of low density. Conclusions. We recommend implementing either the pseudo-value or BLR-IPCW approach to produce a calibration curve, combined with the MLR-IPCW approach to produce a calibration scatter plot, which provides additional information over either of the other methods.
The Bayesian inference approach is widely used to tackle inverse problems due to its versatile and natural ability to handle ill-posedness. However, it often faces challenges in high-dimensional settings involving continuous fields or high-resolution discrete representations. Moreover, the prior distribution of the unknown parameters is often difficult to determine. In this study, an Operator Learning-based Generative Adversarial Network (OL-GAN) is proposed and integrated into the Bayesian inference framework to handle these issues. Unlike most Bayesian approaches, the distinctive characteristic of the proposed method is that it learns the joint distribution of parameters and responses. By leveraging the trained generative model, the posteriors of the unknown parameters can theoretically be approximated by any sampling algorithm (e.g., Markov chain Monte Carlo, MCMC) in a low-dimensional latent space shared by the components of the joint distribution. The latent space typically carries a simple, easy-to-sample distribution (e.g., Gaussian, uniform), which significantly reduces the computational cost of Bayesian inference while avoiding prior selection concerns. Furthermore, incorporating operator learning makes the generator resolution-independent: predictions can be obtained at desired coordinates, and inversions can be performed even if the observation data are misaligned with the training data. Finally, the effectiveness of the proposed method is validated through several numerical experiments.
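The latent-space sampling idea can be illustrated with a deliberately simple stand-in (our own toy, not the paper's OL-GAN): pretend a trained generator maps a Gaussian latent variable to a (parameter, response) pair, then run Metropolis sampling in the latent space while conditioning on an observed response.

```python
import numpy as np

# Toy sketch of latent-space Bayesian inversion. G is a hypothetical
# stand-in for a trained generator producing joint (parameter, response)
# samples; all names and the forward map are illustrative assumptions.
rng = np.random.default_rng(1)

def G(z):                          # stand-in "trained" generator
    theta = z[0]                   # parameter component
    resp = theta**2 + 0.1 * z[1]   # response component (toy forward map + noise)
    return theta, resp

y_obs, sigma = 0.5, 0.05           # observed response and noise level

def log_post(z):                   # N(0, I) latent prior + Gaussian likelihood
    _, resp = G(z)
    return -0.5 * z @ z - 0.5 * ((resp - y_obs) / sigma) ** 2

z = np.zeros(2)
lp = log_post(z)
thetas = []
for _ in range(5000):              # random-walk Metropolis in latent space
    zp = z + 0.3 * rng.standard_normal(2)
    lpp = log_post(zp)
    if np.log(rng.random()) < lpp - lp:
        z, lp = zp, lpp
    thetas.append(G(z)[0])
# Posterior mass concentrates where theta**2 is close to y_obs.
assert 0.2 < np.mean(np.square(thetas[1000:])) < 0.8
```

The point of the construction is that the chain only ever explores the easy-to-sample latent distribution; the generator supplies the link to the physical parameters.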
This paper proposes a hierarchy of numerical fluxes for the compressible flow equations which are kinetic-energy and pressure-equilibrium preserving and asymptotically entropy conservative, i.e., they are able to arbitrarily reduce the numerical error in entropy production due to the spatial discretization. The fluxes are based on the use of the harmonic mean for the internal energy and use only algebraic operations, making them less computationally expensive than the entropy-conserving fluxes based on the logarithmic mean. The use of the geometric mean is also explored and found to be well suited to reducing errors in the entropy evolution. Numerical tests confirm the theoretical predictions, and the entropy-conservation capabilities of a selection of schemes are compared.
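The mean values involved can be compared on a toy pair of states (our own illustration, independent of the paper's fluxes): the logarithmic mean needs a transcendental evaluation, while the harmonic and geometric means are purely algebraic, with the geometric mean sitting closer to the logarithmic one.

```python
import math

# Toy comparison of the averages the fluxes rely on (illustrative only).
def log_mean(a, b):
    # Used by standard entropy-conserving fluxes; needs a log evaluation
    # (and, in practice, a series expansion to stay well-defined as a -> b).
    return a if a == b else (b - a) / (math.log(b) - math.log(a))

def harmonic_mean(a, b):
    return 2.0 * a * b / (a + b)   # purely algebraic

def geometric_mean(a, b):
    return math.sqrt(a * b)        # also algebraic

a, b = 1.0, 2.0
# The geometric mean approximates the logarithmic mean more closely than
# the harmonic mean does, consistent with its use to reduce entropy errors.
assert abs(geometric_mean(a, b) - log_mean(a, b)) < abs(harmonic_mean(a, b) - log_mean(a, b))
```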
We discuss a system of stochastic differential equations with a stiff linear term and additive noise driven by fractional Brownian motions (fBms) with Hurst parameter H > 1/2, which arise, e.g., from spatial approximations of stochastic partial differential equations. For their numerical approximation, we present an exponential Euler scheme and show that it converges in the strong sense with an exact rate close to the Hurst parameter H. Further, based on [2], we conclude the existence of a unique stationary solution of the exponential Euler scheme that is pathwise asymptotically stable.
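A scalar sketch of one common variant of such a scheme (our own illustration under stated assumptions, not the paper's exact formulation): for $dX = \lambda X\,dt + dB^H$ with stiff $\lambda < 0$, the linear part is integrated exactly via $e^{\lambda h}$, and the fBm path is sampled from its known covariance.

```python
import numpy as np

# Illustrative scalar test equation dX = lam*X dt + dB^H, stiff lam < 0,
# Hurst parameter H > 1/2. (Setup and scheme variant are assumptions.)
rng = np.random.default_rng(0)
H, lam, T, N = 0.75, -50.0, 1.0, 200
h = T / N
t = np.linspace(h, T, N)
# Sample one fBm path at the grid points from the exact covariance
# cov(s, u) = 0.5*(s^(2H) + u^(2H) - |s - u|^(2H)), via Cholesky.
cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
             - np.abs(t[:, None] - t[None, :])**(2*H))
B = np.linalg.cholesky(cov) @ rng.standard_normal(N)
dB = np.diff(np.concatenate(([0.0], B)))

# Exponential Euler: the stiff linear part is integrated exactly,
# X_{n+1} = exp(lam*h) * (X_n + dB_n).
X = np.empty(N + 1)
X[0] = 1.0
E = np.exp(lam * h)
for n in range(N):
    X[n + 1] = E * (X[n] + dB[n])
# Despite the stiffness, the trajectory stays bounded (|exp(lam*h)| < 1).
assert np.isfinite(X).all() and abs(X[-1]) < 10.0
```

Exactly integrating the linear part is what keeps the step size free of the stiffness restriction an explicit Euler step would suffer from.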
This paper aims to reconstruct the initial condition of a hyperbolic equation with an unknown damping coefficient. Our approach approximates the solution of the hyperbolic equation by its truncated Fourier expansion in the time domain, using a polynomial-exponential basis. This truncation eliminates the time variable, yielding a system of quasi-linear elliptic equations. To solve the system globally without needing an accurate initial guess, we employ the Carleman contraction principle. We provide several numerical examples to illustrate the efficacy of our method. The method not only delivers precise solutions but also showcases remarkable computational efficiency.
Polyurethane (PU) is an ideal thermal insulation material due to its excellent thermal properties. The incorporation of Phase Change Material (PCM) capsules into PU has been shown to be effective in building envelopes: this design can significantly increase the stability of the indoor thermal environment and reduce fluctuations in indoor air temperature. We develop a multiscale model of a PU-PCM foam composite and study the thermal conductivity of this material; the computed conductivity can then be used to optimize the material design. Based on the performance of the optimized material, we conduct a case study of a single room whose building envelope uses the PU-PCM composite, assessing the thermal comfort of the occupants, and we also predict the energy consumption for this case. All the outcomes show that this design is promising, enabling passive building energy design and significantly improving occupants' comfort.
We present new Dirichlet-Neumann and Neumann-Dirichlet algorithms with a time domain decomposition applied to unconstrained parabolic optimal control problems. After a spatial semi-discretization, we use the Lagrange multiplier approach to derive a coupled forward-backward optimality system, which can then be solved using a time domain decomposition. Due to the forward-backward structure of the optimality system, three variants can be found for the Dirichlet-Neumann and Neumann-Dirichlet algorithms. We analyze their convergence behavior and determine the optimal relaxation parameter for each algorithm. Our analysis reveals that the most natural algorithms are actually only good smoothers, and there are better choices which lead to efficient solvers. We illustrate our analysis with numerical experiments.