Testing the goodness of fit of a model through the functional constraints it imposes on its parameters dates back at least to Spearman (1927), who analyzed the famous "tetrad" polynomial in the covariance matrix of the observed variables in a single-factor model. Despite this long history, the Wald test typically used to operationalize this approach can produce very inaccurate test sizes in many situations, even when the regularity conditions for classical normal asymptotics hold and a very large sample is available. Focusing on testing a polynomial constraint in a Gaussian covariance matrix, we obtain a new understanding of this baffling phenomenon: when the null hypothesis is true but "near-singular", the standardized Wald statistic exhibits slow weak convergence, owing to the intricate dependence structure of the underlying U-statistic that ultimately drives its limiting distribution; this is also rigorously explained by a key ratio of moments appearing in the Berry-Esseen bound that quantifies the normal approximation error involved. As an alternative, we advocate the use of an incomplete U-statistic, which mildly tones down this dependence and renders the speed of convergence agnostic to the singularity status of the hypothesis. In parallel, we develop a Berry-Esseen bound that is mathematically descriptive of the singularity-agnostic nature of our standardized incomplete U-statistic, using some of the sharpest exponential-type inequalities in the literature.
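To make the contrast concrete, the following sketch forms an incomplete U-statistic for the simplest such constraint, the tetrad $\sigma_{12}\sigma_{34}-\sigma_{13}\sigma_{24}=0$, by averaging an order-two kernel over a random subset of index pairs rather than over all pairs; the zero-mean assumption, the kernel, and the number of subsampled pairs are illustrative choices and not the construction analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)

def tetrad_kernel(x, y):
    """Order-2 kernel whose expectation is sigma12*sigma34 - sigma13*sigma24
    for independent zero-mean observations x, y (symmetrized in (x, y))."""
    t1 = 0.5 * (x[0] * x[1] * y[2] * y[3] + y[0] * y[1] * x[2] * x[3])
    t2 = 0.5 * (x[0] * x[2] * y[1] * y[3] + y[0] * y[2] * x[1] * x[3])
    return t1 - t2

def incomplete_u_stat(X, n_pairs=2000):
    """Average the kernel over a random subset of index pairs (i, j), i != j,
    instead of over all ~n^2/2 pairs of the complete U-statistic."""
    n = X.shape[0]
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    keep = i != j
    vals = [tetrad_kernel(X[a], X[b]) for a, b in zip(i[keep], j[keep])]
    return np.mean(vals)

# Toy data from a one-factor model (so the tetrad constraint holds).
n, lam = 5000, np.array([0.8, 0.7, 0.6, 0.5])
f = rng.standard_normal(n)
X = np.outer(f, lam) + 0.5 * rng.standard_normal((n, 4))   # zero-mean by construction
print("incomplete U-statistic for the tetrad:", incomplete_u_stat(X))

Because only a fixed number of index pairs enters the average, the dependence among kernel evaluations is much weaker than in the complete U-statistic, which is the informal reason for the singularity-agnostic behaviour described above.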
In this work, we propose the application of the eXtended Finite Element Method (XFEM) to the coupling between three-dimensional and one-dimensional elliptic problems. In particular, we consider the case in which the 3D-1D coupled problem arises from the geometrical model reduction of a fully three-dimensional problem characterized by thin tubular inclusions embedded in a much wider domain. In the 3D-1D coupling framework, the use of non-conforming meshes is widely adopted. However, since the inclusions typically behave as singular sinks or sources for the 3D problem, mesh adaptation near the embedded 1D domains may be necessary to enhance solution accuracy and recover optimal convergence rates. An alternative to mesh adaptation is represented by the XFEM, which we propose here to enhance the approximation capabilities of an optimization-based 3D-1D coupling approach. An effective quadrature strategy is devised to integrate the enrichment functions, and numerical tests on single and multiple segments are presented to demonstrate the effectiveness of the approach.
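As a minimal illustration of the two ingredients mentioned above, the sketch below evaluates an assumed log-type radial enrichment around an embedded segment and integrates it with a sub-cell midpoint rule on a box-shaped element; the actual enrichment functions and quadrature strategy of the method are not reproduced here.

import numpy as np

def dist_to_segment(p, a, b):
    """Euclidean distance from point p to the segment [a, b] in 3D."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def enrichment(p, a, b, r0=1e-3):
    """Illustrative log-type enrichment capturing the near-segment singular behaviour."""
    return -np.log(max(dist_to_segment(p, a, b), r0))

def integrate_on_box(f, lo, hi, n=16):
    """Composite midpoint rule on the box [lo, hi]^3 with n^3 sub-cells; a stand-in
    for the element-wise sub-cell quadrature used on enriched elements."""
    pts = lo + (hi - lo) * (np.arange(n) + 0.5) / n
    h3 = ((hi - lo) / n) ** 3
    total = 0.0
    for x in pts:
        for y in pts:
            for z in pts:
                total += f(np.array([x, y, z]))
    return total * h3

a, b = np.array([0.5, 0.5, 0.1]), np.array([0.5, 0.5, 0.9])   # embedded 1D segment
for n in (4, 8, 16, 32):
    val = integrate_on_box(lambda p: enrichment(p, a, b), 0.0, 1.0, n)
    print(f"sub-cells per direction = {n:2d}, integral = {val:.5f}")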
In this paper, a fifth-order moment-based Hermite weighted essentially non-oscillatory scheme with unified stencils (termed HWENO-U) is proposed for hyperbolic conservation laws. The main idea of the HWENO-U scheme is to modify the first-order moment with an HWENO limiter only in the time discretization, reusing the information of the spatial reconstructions; the limiter not only suppresses spurious oscillations effectively, but also ensures the stability of the fully discrete scheme. For the HWENO reconstructions, a new scale-invariant nonlinear weight is designed by incorporating only the integral average values of the solution, which retains all properties of the original weights while being more robust for simulating challenging problems with sharp scale variations. Compared with previous HWENO schemes, the advantages of the HWENO-U scheme are: (1) a simpler implementation, involving only a single HWENO reconstruction applied throughout the entire procedure without any modification of the governing equations; (2) increased efficiency by utilizing the same candidate stencils, reconstructed polynomials, and linear and nonlinear weights in both the HWENO limiter and the spatial reconstructions; (3) reduced problem-specific dependence and improved consistency, since the nonlinear weights are identical for a function $u$ and any non-zero multiple $\zeta u$. Besides, the proposed scheme retains the advantages of previous HWENO schemes, including compact reconstruction stencils and the use of artificial linear weights. Extensive benchmarks are carried out to validate the accuracy, efficiency, resolution, and robustness of the proposed scheme.
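As an isolated illustration of the scale-invariance property (3), the sketch below computes nonlinear weights from the classical fifth-order smoothness indicators on cell averages, with a regularization that scales with the data; the indicators and the particular normalization are stand-ins, since the actual HWENO reconstruction also uses the first-order moments and has its own weight formula.

import numpy as np

def smoothness_indicators(u):
    """Jiang-Shu indicators on five cell averages u = [u_{i-2}, ..., u_{i+2}]
    (a WENO stand-in; the HWENO scheme also uses first-order moments)."""
    b0 = 13/12*(u[0]-2*u[1]+u[2])**2 + 0.25*(u[0]-4*u[1]+3*u[2])**2
    b1 = 13/12*(u[1]-2*u[2]+u[3])**2 + 0.25*(u[1]-u[3])**2
    b2 = 13/12*(u[2]-2*u[3]+u[4])**2 + 0.25*(3*u[2]-4*u[3]+u[4])**2
    return np.array([b0, b1, b2])

def scale_invariant_weights(u, d=np.array([0.1, 0.6, 0.3]), eps_rel=1e-6):
    """Nonlinear weights whose regularization scales with the data, so that
    u -> zeta*u multiplies every denominator by the same factor zeta**4 and
    leaves the normalized weights unchanged (assumes u is not identically zero)."""
    beta = smoothness_indicators(u)
    scale = np.mean(np.asarray(u)**2)          # data-dependent normalization
    alpha = d / (eps_rel*scale + beta)**2
    return alpha / alpha.sum()

u = np.array([1.0, 1.2, 3.5, 3.6, 3.7])        # cell averages across a sharp jump
for zeta in (1.0, 1e-6, 1e6):
    print(f"zeta = {zeta:.0e}, weights = {scale_invariant_weights(zeta*u)}")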
This paper presents likelihood-based inference methods for the family of univariate gamma-normal distributions $GN(\alpha, r, \mu, \sigma^2)$ that result from summing independent $\mathrm{gamma}(\alpha, r)$ and $N(\mu, \sigma^2)$ random variables. First, the probability density function of a gamma-normal variable is provided in compact form using parabolic cylinder functions, along with key properties. We then provide analytic expressions for the maximum-likelihood score equations and the Fisher information matrix, and discuss inferential methods for the gamma-normal distribution. Given the widespread use of the two constituent distributions, the gamma-normal distribution is a general-purpose tool for a variety of applications. In particular, we discuss two distributions that are obtained as special cases and that are featured in a variety of statistical applications: the exponential-normal distribution and the chi-squared-normal (or overdispersed chi-squared) distribution.
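As a numerical cross-check of the density (not the paper's closed form in terms of parabolic cylinder functions), the sketch below evaluates the gamma-normal density by direct convolution and compares it with a crude Monte Carlo estimate; the rate parameterization $\mathrm{gamma}(\alpha, r)$ with scale $1/r$ is an assumption.

import numpy as np
from scipy import stats
from scipy.integrate import quad

# Parameterization assumed here: gamma(alpha, r) with rate r (scale = 1/r).
alpha, r, mu, sigma = 2.5, 1.5, 0.0, 1.0

def gn_pdf(z):
    """Density of Z = G + W, G ~ gamma(alpha, rate r), W ~ N(mu, sigma^2),
    evaluated by numerical convolution."""
    integrand = lambda g: stats.gamma.pdf(g, a=alpha, scale=1/r) * \
                          stats.norm.pdf(z - g, loc=mu, scale=sigma)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# Monte Carlo check of the density at a few points.
rng = np.random.default_rng(1)
z_samples = rng.gamma(alpha, 1/r, 200_000) + rng.normal(mu, sigma, 200_000)
for z in (-1.0, 0.5, 2.0, 4.0):
    mc = np.mean(np.abs(z_samples - z) < 0.05) / 0.10   # crude local density estimate
    print(f"z = {z:4.1f}   pdf (quad) = {gn_pdf(z):.4f}   pdf (MC) = {mc:.4f}")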
In this article, we aim to obtain the Fisher-Riemann geodesics for nonparametric families of probability densities as a weak limit of the parametric case with an increasing number of parameters.
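For background, and stated here in its standard parametric form rather than the article's exact setting, the metric whose geodesics are studied is the Fisher (Fisher-Rao) information metric
$$ g_{ij}(\theta) \;=\; \int \frac{\partial \log p(x\mid\theta)}{\partial \theta_i}\,\frac{\partial \log p(x\mid\theta)}{\partial \theta_j}\,p(x\mid\theta)\,dx, $$
with geodesic distance obtained by minimizing $\int_0^1 \sqrt{\dot\theta(t)^{\top} g(\theta(t))\,\dot\theta(t)}\,dt$ over smooth paths $\theta(t)$ joining the two densities; the nonparametric case is then approached as the number of parameters grows.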
Effect modification occurs when the impact of the treatment on an outcome varies based on the levels of other covariates, known as effect modifiers. Modeling these effect differences is important for etiological goals and for optimizing treatment. Structural nested mean models (SNMMs) are useful causal models for estimating the potentially heterogeneous effect of a time-varying exposure on the mean of an outcome in the presence of time-varying confounding. A data-driven approach for selecting the effect modifiers of an exposure may be necessary if these effect modifiers are unknown a priori and need to be identified. Although variable selection techniques are available for estimating conditional average treatment effects using marginal structural models, or for estimating optimal dynamic treatment regimens, all of these methods consider an outcome measured at a single point in time. In the context of an SNMM for repeated outcomes, we propose a doubly robust penalized G-estimator for the causal effect of a time-varying exposure with simultaneous selection of effect modifiers, and use this estimator to analyze effect modification in a study of hemodiafiltration. We prove the oracle property of our estimator and conduct a simulation study to evaluate its performance in finite samples and to verify its double-robustness property. Our work is motivated by and applied to the study of hemodiafiltration for treating patients with end-stage renal disease at the Centre Hospitalier de l'Universit\'e de Montr\'eal. We apply the proposed method to investigate the effect heterogeneity of dialysis facility on the repeated session-specific hemodiafiltration outcomes.
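For intuition, the sketch below implements a toy doubly robust G-estimator for a single-time-point linear blip with a crude soft-thresholding step; it is only meant to convey the estimating-equation structure, and it does not reproduce the paper's penalized estimator for repeated outcomes and time-varying exposures.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 5000

# Toy data: X1 is a true effect modifier, X2 is noise; confounded binary exposure A.
X = rng.standard_normal((n, 2))
p_A = 1/(1 + np.exp(-(0.3*X[:, 0] - 0.2*X[:, 1])))
A = rng.binomial(1, p_A)
Y = 1.0 + X @ np.array([0.5, -0.4]) + A*(1.0 + 0.8*X[:, 0]) + rng.standard_normal(n)

# Nuisance models: propensity score e(X) and treatment-free outcome model q(X);
# the G-estimator is consistent if either one is correctly specified.
e = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
q = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# G-estimation for a linear blip A*(psi0 + psi1*X1 + psi2*X2):
# solve sum_i (A_i - e_i) * Xmod_i * (Y_i - A_i*psi'Xmod_i - q_i) = 0 for psi.
Xmod = np.column_stack([np.ones(n), X])
w = A - e
M = (w * A)[:, None, None] * (Xmod[:, :, None] * Xmod[:, None, :])
psi = np.linalg.solve(M.sum(axis=0), (w * (Y - q)) @ Xmod)
print("G-estimate of blip coefficients:", psi.round(3))

# Crude selection step: soft-threshold the effect-modifier coefficients.
lam = 0.05
psi_sel = np.concatenate([psi[:1], np.sign(psi[1:])*np.maximum(np.abs(psi[1:]) - lam, 0)])
print("after soft-thresholding:", psi_sel.round(3))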
Intracranial aneurysms are a leading cause of hemorrhagic stroke. One of the established treatment approaches is embolization induced by coil insertion. However, the prediction of the treatment outcome and of the resulting change in flow characteristics in the aneurysm is still an open problem. In this work, we present an approach based on patient-specific geometry and parameters, including a representation of the coil as an inhomogeneous porous medium. The model consists of the volume-averaged Navier-Stokes equations, including the non-Newtonian blood rheology. We solve these equations using a problem-adapted lattice Boltzmann method and present a comparison between fully resolved and volume-averaged simulations. The results indicate the validity of the model. Overall, this workflow allows for a patient-specific assessment of the flow resulting from a potential treatment.
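As an illustration of the porous-medium treatment of the coil, the sketch below computes an assumed Kozeny-Carman permeability from the coil porosity and wire diameter, together with the resulting Darcy-type drag force density entering a volume-averaged momentum balance; the specific closure, viscosity value, and parameters are hypothetical and are not taken from the paper.

import numpy as np

def kozeny_carman_permeability(porosity, d_wire):
    """Illustrative Kozeny-Carman closure: permeability of the coil mass as a
    function of porosity and an effective wire diameter (an assumed closure)."""
    return d_wire**2 * porosity**3 / (180.0 * (1.0 - porosity)**2)

def brinkman_drag(u, porosity, d_wire, mu=3.5e-3):
    """Darcy-type volume-averaged drag force density -mu/K * u acting on the
    superficial velocity u inside the coiled region; mu is an assumed
    effective blood viscosity in Pa*s."""
    K = kozeny_carman_permeability(porosity, d_wire)
    return -mu / K * np.asarray(u)

# Example: a coil with 30% packing density (porosity 0.7) of 0.25 mm wire,
# with a residual velocity of 1 cm/s inside the aneurysm sac.
porosity, d_wire = 0.70, 0.25e-3
print("permeability K [m^2]:", kozeny_carman_permeability(porosity, d_wire))
print("drag force density [N/m^3]:", brinkman_drag([0.01, 0.0, 0.0], porosity, d_wire))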
We consider the numerical behavior of the fixed-stress splitting method for coupled poromechanics as undrained regimes are approached. We explain that pressure stability is related to the splitting error of the scheme, rather than to the fact that the discrete saddle-point matrix never appears in the fixed-stress approach. This observation reconciles previous results regarding the pressure stability of the splitting method. Using examples of compositional poromechanics with application to geological CO$_2$ sequestration, we show that solutions obtained with the fixed-stress scheme and a low-order finite element-finite volume discretization that is not inherently inf-sup stable can exhibit the same pressure oscillations as the corresponding fully implicit scheme. Moreover, pressure-jump stabilization can effectively remove these spurious oscillations in the fixed-stress setting, while also improving the efficiency of the scheme in terms of the number of iterations required at each time step to reach convergence.
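Algebraically, the fixed-stress split amounts to solving the flow problem with a stabilization term added to the pressure block and then updating the mechanics; the sketch below illustrates this iteration on a generic two-field Biot-type system with randomly generated blocks. The block structure, sign conventions, and choice of stabilization L are illustrative assumptions, and none of the compositional physics, the finite element-finite volume discretization, or the pressure-jump stabilization of the paper is reproduced.

import numpy as np

rng = np.random.default_rng(3)
nu, npr = 40, 25           # sizes of the displacement and pressure blocks

# Generic Biot-type block system:  K u - Q p = f_u,   Q^T u + S p = f_p,
# with K (stiffness) and S (storage + flow) symmetric positive definite.
K = rng.standard_normal((nu, nu)); K = K @ K.T + nu*np.eye(nu)
S = rng.standard_normal((npr, npr)); S = S @ S.T + npr*np.eye(npr)
Q = rng.standard_normal((nu, npr))
f_u, f_p = rng.standard_normal(nu), rng.standard_normal(npr)

# Fixed-stress stabilization: L must dominate Q^T K^{-1} Q for convergence.
L = np.linalg.norm(Q, 2)**2 / np.linalg.eigvalsh(K)[0] * np.eye(npr)

u, p = np.zeros(nu), np.zeros(npr)
for k in range(500):
    # Flow solve with the fixed-stress stabilization term L (p^{k+1} - p^k).
    p = np.linalg.solve(S + L, f_p - Q.T @ u + L @ p)
    # Mechanics solve with the updated pressure.
    u = np.linalg.solve(K, f_u + Q @ p)
    res = np.linalg.norm(f_p - Q.T @ u - S @ p)
    if res < 1e-10:
        break
print(f"stopped after {k+1} fixed-stress iterations, flow residual {res:.2e}")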
We consider the Sherrington-Kirkpatrick model of spin glasses at high temperature and with no external field, and study the problem of sampling from the Gibbs distribution $\mu$ in polynomial time. We prove that, for any inverse temperature $\beta<1/2$, there exists an algorithm with complexity $O(n^2)$ that samples from a distribution $\mu^{alg}$ which is close in normalized Wasserstein distance to $\mu$. Namely, there exists a coupling of $\mu$ and $\mu^{alg}$ such that if $(x,x^{alg})\in\{-1,+1\}^n\times \{-1,+1\}^n$ is a pair drawn from this coupling, then $n^{-1}\mathbb{E}\{\|x-x^{alg}\|_2^2\}=o_n(1)$. The best previous results, by Bauerschmidt and Bodineau and by Eldan, Koehler, and Zeitouni, implied efficient algorithms to approximately sample (under a stronger metric) for $\beta<1/4$. We complement this result with a negative one, by introducing a suitable "stability" property for sampling algorithms, which is verified by many standard techniques. We prove that no stable algorithm can approximately sample for $\beta>1$, even under the normalized Wasserstein metric. Our sampling method is based on an algorithmic implementation of stochastic localization, which progressively tilts the measure $\mu$ towards a single configuration, together with an approximate message passing (AMP) algorithm that is used to approximate the mean of the tilted measure.
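The sketch below assembles the two ingredients, an AMP/TAP iteration for the mean of the tilted measure and an Euler-Maruyama discretization of the stochastic localization dynamics, in a standard form; the step size, horizon, iteration count, and final rounding are illustrative choices rather than the authors' exact specification, and no attempt is made to match their $O(n^2)$ complexity or theoretical guarantees.

import numpy as np

rng = np.random.default_rng(4)
n, beta = 300, 0.3
J = rng.standard_normal((n, n)); J = (J + J.T) / np.sqrt(2 * n)   # SK couplings
np.fill_diagonal(J, 0.0)

def amp_mean(y, n_iter=30):
    """TAP/AMP iteration approximating the mean of the SK Gibbs measure
    tilted by an external field y (with the usual Onsager correction)."""
    m_old = np.zeros(n)
    m = np.tanh(y)
    for _ in range(n_iter):
        q = np.mean(m**2)
        m_new = np.tanh(y + beta * (J @ m) - beta**2 * (1.0 - q) * m_old)
        m_old, m = m, m_new
    return m

# Stochastic localization: discretize dy_t = m(y_t) dt + dW_t with Euler-Maruyama;
# the growing tilt y_t progressively concentrates the measure on one configuration.
T, dt = 10.0, 0.02
y = np.zeros(n)
for _ in range(int(T / dt)):
    m = amp_mean(y)
    y += m * dt + np.sqrt(dt) * rng.standard_normal(n)

# Round the final estimated mean to a spin configuration.
m_final = amp_mean(y)
x = np.where(rng.random(n) < (1.0 + m_final) / 2.0, 1, -1)
print("sampled configuration (first 10 spins):", x[:10])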
Non-Hermitian topological phases can exhibit remarkable properties compared with their Hermitian counterparts, such as the breakdown of the conventional bulk-boundary correspondence and non-Hermitian topological edge modes. Here, we introduce several algorithms based on the multi-layer perceptron (MLP) and the convolutional neural network (CNN) from the field of deep learning to predict the winding of eigenvalues of non-Hermitian Hamiltonians. Subsequently, we use the smallest module of the periodic circuit as one unit to construct high-dimensional circuit data features. Further, we use the Dense Convolutional Network (DenseNet), a type of convolutional neural network that utilizes dense connections between layers, to design a non-Hermitian topolectrical Chern circuit, as the DenseNet algorithm is more suitable for processing high-dimensional data. Our results demonstrate the effectiveness of deep learning networks in capturing the global topological characteristics of a non-Hermitian system based on training data.
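As a minimal illustration of the first task, the sketch below generates random single-band non-Hermitian Bloch Hamiltonians, labels each by the spectral winding of its eigenvalue loop around $E=0$, and trains an off-the-shelf MLP classifier on the sampled band; the model class, features, and network are stand-ins and do not reproduce the circuit data construction or the DenseNet architecture used in the paper.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
k = np.linspace(0, 2*np.pi, 201, endpoint=False)

def random_band():
    """Single-band non-Hermitian Bloch Hamiltonian h(k) = sum_j t_j e^{i j k}
    with independent complex hoppings for j = -2, ..., 2."""
    t = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    return sum(t[j + 2] * np.exp(1j * j * k) for j in range(-2, 3))

def winding_number(h):
    """Spectral winding of h(k) around E = 0: total phase accumulated over the
    Brillouin zone divided by 2*pi (wrapped increments avoid branch issues)."""
    dphi = np.angle(np.roll(h, -1) / h)
    return int(np.rint(dphi.sum() / (2*np.pi)))

# Build a labelled dataset: features are Re/Im h(k) on the k-grid, labels the winding.
H = np.array([random_band() for _ in range(4000)])
X = np.hstack([H.real, H.imag])
y = np.array([winding_number(h) for h in H])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(Xtr, ytr)
print("test accuracy on predicting the winding number:", clf.score(Xte, yte))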
The broad class of multivariate unified skew-normal (SUN) distributions has recently been shown to possess fundamental conjugacy properties. When used as priors for the vector of parameters in general probit, tobit, and multinomial probit models, these distributions yield posteriors that still belong to the SUN family. Although this core result has led to important advancements in Bayesian inference and computation, its applicability beyond likelihoods associated with fully-observed, discretized, or censored realizations from multivariate Gaussian models remains unexplored. This article covers this important gap by proving that the wider family of multivariate unified skew-elliptical (SUE) distributions, which extends SUNs to more general perturbations of elliptical densities, guarantees conjugacy for broader classes of models beyond those relying on fully-observed, discretized, or censored Gaussians. This result leverages the closure of the SUE family under linear combinations, conditioning, and marginalization to prove that this family is conjugate to the likelihood induced by general multivariate regression models for fully-observed, censored, or dichotomized realizations from skew-elliptical distributions. This advancement substantially enlarges the set of models that enable conjugate Bayesian inference to general formulations arising from elliptical and skew-elliptical families, including the multivariate Student's $t$ and skew-$t$, among others.
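As the simplest illustration of the conjugacy mechanism at play (stated here as standard background rather than the article's general result), a univariate probit likelihood combined with a Gaussian prior already produces a skewed posterior of this type:
$$ p(\theta \mid y = 1) \;\propto\; \phi(\theta; \mu, \sigma^2)\,\Phi(\theta), $$
which is the kernel of an extended skew-normal density, i.e. a SUN distribution with a single latent truncation; the SUE result replaces the Gaussian ingredients $\phi$ and $\Phi$ by elliptical and skew-elliptical counterparts, while the closure properties mentioned above keep the posterior within the same family.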