Approximating differential operators defined on two-dimensional surfaces is an important problem that arises in many areas of science and engineering. Over the past ten years, localized meshfree methods based on generalized moving least squares (GMLS) and radial basis function finite differences (RBF-FD) have been shown to be effective for this task, as they can give high orders of accuracy at low computational cost and can be applied to surfaces defined only by point clouds. However, no studies have yet directly compared these methods for approximating surface differential operators (SDOs). The first purpose of this work is to fill that gap. For this comparison, we focus on an RBF-FD method based on polyharmonic spline kernels and polynomials (PHS+Poly), since it is most closely related to the GMLS method. Additionally, we use a relatively new technique for approximating SDOs with RBF-FD called the tangent plane method, since it is simpler than previous techniques and natural to use with PHS+Poly RBF-FD. The second purpose of this work is to relate the tangent plane formulation of SDOs to the local coordinate formulation used in GMLS and to show that they are equivalent when the tangent space to the surface is known exactly. The final purpose is to use ideas from the GMLS SDO formulation to derive a new RBF-FD method for approximating the tangent space for a point cloud surface when it is unknown. For the numerical comparisons of the methods, we examine their convergence rates for approximating the surface gradient, divergence, and Laplacian as the point clouds are refined, for various parameter choices. We also compare their efficiency in terms of accuracy per computational cost, both including and excluding setup costs.
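For readers unfamiliar with the tangent plane method, the following is a minimal sketch of how PHS+Poly RBF-FD weights for the surface Laplacian can be assembled at one stencil: the stencil is projected onto the tangent plane and standard 2D RBF-FD weights are computed there. It assumes the unit normal at the stencil center is known exactly; all names are illustrative, and the kernel degree, polynomial degree, and stencil are choices made for the sketch, not the paper's exact settings.

```python
import numpy as np

def tangent_plane_laplacian_weights(X, x0, n0):
    """RBF-FD weights for the surface Laplacian at x0 via the tangent
    plane method: cubic PHS kernel r^3 augmented with bivariate
    polynomials up to degree 2.  A hedged sketch, not the paper's code."""
    # Orthonormal tangent basis {t1, t2} for the plane with unit normal n0.
    a = np.array([1.0, 0.0, 0.0])
    if abs(n0 @ a) > 0.9:                 # avoid a near-parallel helper vector
        a = np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n0, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(n0, t1)
    # Project the stencil points into tangent coordinates centered at x0.
    Y = (X - x0) @ np.column_stack([t1, t2])          # shape (k, 2)
    k = Y.shape[0]
    r = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    A = r**3                                           # PHS kernel phi(r) = r^3
    P = np.column_stack([np.ones(k), Y[:, 0], Y[:, 1],
                         Y[:, 0]**2, Y[:, 0]*Y[:, 1], Y[:, 1]**2])
    # Saddle-point system enforcing exact reproduction of the polynomials.
    M = np.block([[A, P], [P.T, np.zeros((6, 6))]])
    # 2D Laplacian of r^3 is 9r; of the poly basis at the origin: x^2, y^2 -> 2.
    rhs = np.concatenate([9.0 * np.linalg.norm(Y, axis=1),
                          [0, 0, 0, 2, 0, 2]])
    return np.linalg.solve(M, rhs)[:k]                 # weights at stencil nodes
```

When the normal (and hence the tangent plane) is exact, these weights approximate the Laplace-Beltrami operator at x0, which is the equivalence with the GMLS local coordinate formulation discussed above.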
Meta-analysis aggregates data from multiple studies to identify patterns on a particular subject across a broad range of settings. It is becoming increasingly useful for summarizing the growing number of studies conducted across various fields. In meta-analysis, it is common to compare studies using the mean and standard deviation reported by each. While many studies report the mean and standard deviation as their summary statistics, some report other values, including the minimum, maximum, median, and first and third quartiles. The median and quartiles are often reported when the data are skewed and do not follow a normal distribution. To correctly summarize the data and draw conclusions across studies, it is necessary to estimate the mean and standard deviation of each study while accounting for the variation and skewness within it. Methods have been proposed in past literature to estimate the mean and standard deviation, but they do not accommodate negative values. Data that include negative values are common, and handling them properly would increase the accuracy and impact of the meta-analysis. We propose a method that implements a generalized Box-Cox transformation to estimate the mean and standard deviation while accounting for such negative values and maintaining similar accuracy.
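As a rough illustration of the ingredients described above, the sketch below combines a two-parameter (shifted) Box-Cox transform, which admits negative data, with classical quantile-based estimators of the mean and standard deviation that assume approximate normality. This is a hedged sketch of the general approach; the function names, the shift parameterization, and the constants are illustrative, not the paper's exact estimator.

```python
import numpy as np
from scipy.stats import norm

def shifted_boxcox(x, lam, shift):
    # Two-parameter Box-Cox transform; the shift makes negative values
    # admissible (requires x + shift > 0).  Illustrative of the kind of
    # transform the abstract refers to, not the paper's exact formulation.
    z = np.asarray(x, dtype=float) + shift
    return np.log(z) if lam == 0 else (z**lam - 1.0) / lam

def mean_sd_from_quartiles(q1, median, q3):
    # Classical large-sample estimators valid under approximate normality
    # (e.g., after transformation): E[X] ~ (q1 + m + q3)/3 and
    # SD ~ IQR / (2 * Phi^{-1}(0.75)) ~ IQR / 1.349.  Finite-sample
    # corrections (e.g., Wan et al., 2014) refine the denominator.
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / (2.0 * norm.ppf(0.75))
    return mean, sd
```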
We prove that the native space of a Wu function is a dense subspace of a Sobolev space. An explicit characterization of the native spaces of Wu functions is given. Three definitions of Wu functions are introduced and proven to be equivalent. Based on these new equivalent definitions and the so-called $f$-form tricks, we can generalize the Wu functions to the even-dimensional spaces $\R^{2k}$, whereas the original Wu functions are only defined in the odd-dimensional spaces $\R^{2k+1}$. Such functions in even-dimensional spaces are referred to as the `missing Wu functions'. Furthermore, we can generalize the Wu functions to `fractional'-dimensional spaces. We call all these Wu functions the generalized Wu functions. Closed forms of the generalized Wu functions are given in terms of hypergeometric functions. Finally, we prove that the Wu functions and the missing Wu functions can be written as linear combinations of the generalized Wendland functions.
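For orientation, one standard construction of the original Wu functions is recalled below: a compactly supported piece is convolved with itself, and an operator $\mathcal{D}$ then walks the resulting function up through the odd-dimensional spaces, each application raising the dimension of positive definiteness by two (hence $\R^{2k+1}$). This is the classical presentation, stated here as background; the $f$-form definitions mentioned above are what extend this walk to even and fractional dimensions.

```latex
% Classical construction of Wu's functions (background, not the paper's
% new equivalent definitions):
\phi_\ell(x) = (1 - x^2)_+^{\,\ell}, \qquad
\psi_\ell = \phi_\ell * \phi_\ell, \qquad
(\mathcal{D}f)(r) = -\frac{1}{r}\,\frac{df}{dr}(r), \qquad
\psi_{k,\ell} = \mathcal{D}^{k}\psi_\ell .
% Each application of D raises the dimension of positive definiteness
% by two, so psi_{k,l} is positive definite on R^{2k+1}.
```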
Obtaining the solutions of partial differential equations with machine learning methods has drawn increasing attention in scientific computation and engineering applications. In this work, we propose a coupled Extreme Learning Machine (CELM) method, incorporating the physical laws, to solve a class of fourth-order biharmonic equations by reformulating them as two well-posed Poisson problems. We introduce several activation functions, including tangent, Gaussian, sine, and trigonometric functions, to assess the performance of our CELM method. Notably, the sine and trigonometric functions demonstrate a remarkable ability to minimize the approximation error of the CELM model. Finally, several numerical experiments are performed to study initialization strategies for both the weights and biases of the hidden units in our CELM model and to explore the required number of hidden units. Numerical results show that the proposed CELM algorithm is highly accurate and efficient in addressing biharmonic equations on both regular and irregular domains.
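As a sketch of the underlying mechanics (with illustrative names and parameters, not the paper's exact CELM architecture): an ELM approximates the solution by a random-feature expansion whose output coefficients are found by linear least squares on collocation equations, and the biharmonic problem $\Delta^2 u = f$ splits into $\Delta v = f$ followed by $\Delta u = v$. The sketch below solves the two Poisson problems sequentially, which is equivalent when boundary data for both $u$ and $\Delta u$ are given (Navier-type conditions); the paper instead couples them.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    # Random-feature map with sine activation: phi_j(x) = sin(w_j . x + b_j).
    return np.sin(X @ W.T + b)

def elm_laplacian(X, W, b):
    # Laplacian of each sine feature: Delta phi_j = -||w_j||^2 sin(w_j . x + b_j).
    return -np.sum(W**2, axis=1) * np.sin(X @ W.T + b)

def solve_poisson(X_in, X_bd, f_in, g_bd, m=200, scale=4.0):
    # Least-squares ELM collocation for: Delta u = f in Omega, u = g on boundary.
    W = scale * rng.standard_normal((m, 2))   # hidden weights, fixed at random
    b = scale * rng.standard_normal(m)        # hidden biases, fixed at random
    A = np.vstack([elm_laplacian(X_in, W, b), elm_features(X_bd, W, b)])
    beta, *_ = np.linalg.lstsq(A, np.concatenate([f_in, g_bd]), rcond=None)
    return lambda X: elm_features(X, W, b) @ beta

# Biharmonic split with Navier-type data: Delta^2 u = f, u = g1 and
# Delta u = g2 on the boundary, reduces to two sequential solves:
#   v = solve_poisson(X_in, X_bd, f_in, g2_bd)     # Delta v = f,  v = g2
#   u = solve_poisson(X_in, X_bd, v(X_in), g1_bd)  # Delta u = v,  u = g1
```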
In this paper we propose a variant of enriched Galerkin methods for second order elliptic equations with over-penalization of interior jump terms. The bilinear form with interior over-penalization gives a non-standard norm which is different from the discrete energy norm in the classical discontinuous Galerkin methods. Nonetheless we prove that optimal a priori error estimates with the standard discrete energy norm can be obtained by combining a priori and a posteriori error analysis techniques. We also show that the interior over-penalization is advantageous for constructing preconditioners robust to mesh refinement by analyzing spectral equivalence of bilinear forms. Numerical results are included to illustrate the convergence and preconditioning results.
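Schematically, over-penalization replaces the standard $O(h_e^{-1})$ interior-penalty weight on the jump terms by a stronger one. The display below shows the standard symmetric interior-penalty form with that modification, written in generic notation; the paper's enriched Galerkin spaces and exact penalty exponent may differ.

```latex
% Schematic symmetric interior-penalty bilinear form with over-penalized
% jumps (generic notation, stated for orientation):
a_h(u,v) = \sum_{T \in \mathcal{T}_h} \int_T \nabla u \cdot \nabla v \, dx
  - \sum_{e \in \mathcal{E}_h} \int_e \left( \{\!\{\nabla u\}\!\} \cdot n_e \, [\![v]\!]
    + \{\!\{\nabla v\}\!\} \cdot n_e \, [\![u]\!] \right) ds
  + \sum_{e \in \mathcal{E}_h} \frac{\sigma}{h_e^{\alpha}} \int_e [\![u]\!] \, [\![v]\!] \, ds,
  \qquad \alpha > 1,
% versus the standard choice alpha = 1; the stronger penalty changes the
% induced norm, which is why it differs from the usual discrete energy norm.
```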
Derivatives are a key nonparametric functional in wide-ranging applications where the rate of change of an unknown function is of interest. In the Bayesian paradigm, Gaussian processes (GPs) are routinely used as a flexible prior for unknown functions and are arguably one of the most popular tools in many areas. However, little is known about the optimal modelling strategy and theoretical properties when using GPs for derivatives. In this article, we study a plug-in strategy that differentiates the posterior distribution under GP priors for derivatives of any order. This practically appealing plug-in GP method has previously been perceived as suboptimal and degraded, but this is not necessarily the case. We provide posterior contraction rates for plug-in GPs and establish that they remarkably adapt to derivative orders. We show that the posterior measure of the regression function and its derivatives, with the same choice of hyperparameter that does not depend on the order of derivatives, converges at the minimax optimal rate up to a logarithmic factor for functions in certain classes. We analyze a data-driven hyperparameter tuning method based on empirical Bayes and show that it satisfies the optimal rate condition while maintaining computational efficiency. To the best of our knowledge, this article provides the first positive result for plug-in GPs in the context of inferring derivative functionals, and it leads to a practically simple nonparametric Bayesian method with optimal and adaptive hyperparameter tuning for simultaneously estimating the regression function and its derivatives. Simulations show competitive finite-sample performance of the plug-in GP method. A climate change application analyzing global sea-level rise is discussed.
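The plug-in strategy is computationally simple because differentiating the posterior only requires differentiating the covariance kernel. The one-dimensional sketch below, with a squared-exponential kernel and fixed hyperparameters (the article tunes them by empirical Bayes), illustrates this; all names are ours, not the article's.

```python
import numpy as np

def se_kernel(x, y, ell):
    # Squared-exponential kernel matrix: k(x, y) = exp(-(x - y)^2 / (2 ell^2)).
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def plug_in_gp(x, y, xs, ell=0.2, noise=1e-2):
    # Posterior means of f and f' at test points xs.  Differentiating the
    # posterior mean only requires differentiating the cross-covariance,
    # since alpha is fixed by the training data.
    K = se_kernel(x, x, ell) + noise * np.eye(len(x))
    alpha = np.linalg.solve(K, y)
    Ks = se_kernel(xs, x, ell)
    dKs = -(xs[:, None] - x[None, :]) / ell**2 * Ks   # d k(xs, x) / d xs
    return Ks @ alpha, dKs @ alpha                    # mean of f, mean of f'
```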
Symmetry is a cornerstone of much of mathematics, and many probability distributions possess symmetries characterized by their invariance to a collection of group actions. Thus, many mathematical and statistical methods rely on such symmetry holding and ostensibly fail if symmetry is broken. This work considers under what conditions a sequence of probability measures asymptotically gains such symmetry or invariance to a collection of group actions. Considering the many symmetries of the Gaussian distribution, this work effectively proposes a non-parametric type of central limit theorem. That is, a Lipschitz function of a high dimensional random vector will be asymptotically invariant to the actions of certain compact topological groups. Applications of this include a partial law of the iterated logarithm for uniformly random points in an $\ell_p^n$-ball and an asymptotic equivalence between classical parametric statistical tests and their randomization counterparts even when invariance assumptions are violated.
In this work, a general semi-parametric multivariate model is introduced in which the first two conditional moments are assumed to follow multivariate time-series dynamics. The focus of the estimation is the conditional mean parameter vector for discrete-valued distributions. Quasi-Maximum Likelihood Estimators (QMLEs) based on the linear exponential family are typically employed for such estimation problems when the true multivariate conditional probability distribution is unknown or too complex. Although QMLEs provide consistent estimates, they may be inefficient. In this paper, novel two-stage Multivariate Weighted Least Squares Estimators (MWLSEs) are introduced, which enjoy the same consistency property as the QMLEs but can provide improved efficiency with a suitable choice of the covariance matrix of the observations. The proposed method allows for more accurate estimation of model parameters, in particular for count and categorical data when maximum likelihood estimation is infeasible. Moreover, consistency and asymptotic normality of the MWLSEs are derived. The estimation performance of the QMLEs and MWLSEs is compared through simulation experiments and a real data application, showing the superior accuracy of the proposed methodology.
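Generically, a two-stage weighted least squares scheme of this kind first computes a consistent but possibly inefficient QMLE, then re-solves an estimating equation weighted by an estimated conditional covariance. The display below states this schematically in generic notation; it illustrates the idea rather than reproducing the paper's exact MWLSE.

```latex
% Generic two-stage weighted least squares scheme (illustrative):
% Stage 1: a consistent QMLE \tilde\theta from a linear exponential
% family quasi-likelihood.  Stage 2: re-weight by an estimated
% conditional covariance \hat V_t = V_t(\tilde\theta) and solve
\sum_{t=1}^{n} \bigl( \partial_\theta \mu_t(\theta) \bigr)^{\top}
  \hat V_t^{-1} \bigl( Y_t - \mu_t(\theta) \bigr) = 0
% for the second-stage estimator; efficiency improves as \hat V_t
% approaches the true conditional covariance of Y_t.
```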
A numerical method is proposed for the simulation of composite open quantum systems. It is based on Lindblad master equations and adiabatic elimination. Each subsystem is assumed to converge exponentially towards a stationary subspace, slightly impacted by some decoherence channels and weakly coupled to the other subsystems. The numerical method is based on a perturbation analysis with an asymptotic expansion. It exploits the formulation of the slow dynamics with reduced dimension and relies on the invariant operators of the local and nominal dissipative dynamics attached to each subsystem. The second-order expansion can be computed using only local numerical calculations, avoiding computations on the tensor-product Hilbert space attached to the full system. This numerical method is particularly well suited for autonomous quantum error correction schemes. Simulations of such reduced models agree with full-model simulations for typical gates acting on one and two cat-qubits (Z, ZZ and CNOT) when the mean photon number of each cat-qubit is less than 8. For larger mean photon numbers and gates with three cat-qubits (ZZZ and CCNOT), full-model simulations are almost impossible, whereas reduced-model simulations remain accessible. In particular, they capture both the dominant phase-flip error rate and the very small bit-flip error rate with its exponential suppression versus the mean photon number.
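For context, the Lindblad master equation on which such simulations are based has the standard form below; the comment recalls the engineered dissipator typically used for a single dissipative cat-qubit, given as an illustrative instance rather than the paper's full composite model.

```latex
% Standard Lindblad master equation underlying the simulations:
\frac{d\rho}{dt} = -\frac{i}{\hbar}[H, \rho]
  + \sum_k \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2} \{ L_k^\dagger L_k, \rho \} \right)
% For a single dissipative cat-qubit, the dominant engineered dissipator
% is L = \sqrt{\kappa_2} (a^2 - \alpha^2), stabilizing the cat manifold
% with mean photon number |\alpha|^2 (an illustrative instance, not the
% paper's full composite model).
```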
Models of complex technological systems inherently contain interactions and dependencies among their input variables that affect their joint influence on the output. Such models are often computationally expensive, and few sensitivity analysis methods can effectively process such complexities. Moreover, the sensitivity analysis field as a whole pays limited attention to the nature of interaction effects, whose understanding can prove critical for the design of safe and reliable systems. In this paper, we introduce and extensively test a simple binning approach for computing sensitivity indices and demonstrate how complementing it with the smart visualization method, simulation decomposition (SimDec), can yield important insights into the behavior of complex engineering models. The simple binning approach computes first- and second-order effects and a combined sensitivity index, and it is considerably more computationally efficient than Sobol' indices. Altogether, the sensitivity analysis framework provides an efficient and intuitive way to analyze the behavior of complex systems containing interactions and dependencies.
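The first-order part of such a binning estimator is easy to state: bin one input, average the output within each bin, and compare the variance of those bin means to the total output variance. The sketch below illustrates this; the names and binning choices are ours, and the paper's estimator, including its second-order and combined indices, may differ in detail.

```python
import numpy as np

def first_order_binning(x, y, bins=20):
    # First-order sensitivity index S_i = Var(E[Y | X_i]) / Var(Y),
    # estimated by binning x into quantile bins and averaging y per bin.
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    s = 0.0
    for b in range(bins):
        yb = y[idx == b]
        if yb.size:
            # Weighted squared deviation of the bin mean from the global mean.
            s += yb.size / y.size * (yb.mean() - y.mean()) ** 2
    return s / y.var()
```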
A new mechanical model of noncircular shallow tunnelling considering the initial stress field is proposed in this paper by constraining the far-field ground surface to eliminate the displacement singularity at infinity; the originally unbalanced tunnel excavation problem in existing solutions is thereby turned into an equilibrium problem with mixed boundaries. By applying analytic continuation, the mixed boundaries are transformed into a homogeneous Riemann-Hilbert problem, which is subsequently solved via an efficient and accurate iterative method with boundary conditions of static equilibrium, displacement single-valuedness, and traction along the tunnel periphery. The Lanczos filtering technique is used in the final stress and displacement solution to reduce the Gibbs phenomenon caused by the constrained far-field ground surface, yielding more accurate results. Several numerical cases are conducted to verify the proposed solution intensively by examining boundary conditions and comparing with existing solutions, and all the results are in good agreement. Further numerical cases are then conducted to investigate the stress and deformation distributions along the ground surface and tunnel periphery, and several engineering recommendations are given. For objectivity, the limitations of the proposed solution are also discussed.
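For reference, Lanczos filtering damps a truncated Fourier series with sigma factors $\sigma_k = \mathrm{sinc}(k/N)$, suppressing Gibbs oscillations at the cost of slight smoothing. The snippet below shows the generic operation on a coefficient vector; the paper applies it within its complex-variable stress and displacement series.

```python
import numpy as np

def lanczos_filter_coeffs(c):
    # Apply Lanczos sigma factors sigma_k = sinc(k/N) to Fourier
    # coefficients c_0 .. c_{N-1} to damp Gibbs oscillations.
    # np.sinc(x) = sin(pi x) / (pi x), so this is exactly the classical
    # sigma-approximation (sigma_0 = 1 leaves the mean untouched).
    N = len(c)
    return c * np.sinc(np.arange(N) / N)
```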