Network data is prevalent in many contemporary big data applications in which a common interest is to unveil important latent links between different pairs of nodes. Yet the simple, fundamental question of how to precisely quantify the statistical uncertainty associated with the identification of latent links remains largely unexplored. In this paper, we propose the method of statistical inference on membership profiles in large networks (SIMPLE) in the setting of the degree-corrected mixed membership model, where the null hypothesis assumes that the pair of nodes share the same profile of community memberships. In the simpler case of no degree heterogeneity, the model reduces to the mixed membership model, for which an alternative, more robust test is also proposed. Both tests are Hotelling-type statistics based on the rows of empirical eigenvectors or their ratios, whose asymptotic covariance matrices are very challenging to derive and estimate. Nevertheless, their analytical expressions are unveiled and the unknown covariance matrices are consistently estimated. Under mild regularity conditions, we establish the exact limiting distributions of the two forms of SIMPLE test statistics under the null hypothesis and under contiguous alternatives. These are chi-square and noncentral chi-square distributions, respectively, with degrees of freedom depending on whether or not the degrees are corrected. We also address the important issue of estimating the unknown number of communities and establish the asymptotic properties of the associated test statistics. The advantages and practical utility of our new procedures, in terms of both size and power, are demonstrated through several simulation examples and real network applications.
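As a rough illustration of the non-degree-corrected case, the sketch below forms a Hotelling-type quadratic form in the difference of two rows of the K leading empirical eigenvectors and refers it to a chi-square distribution with K degrees of freedom. The covariance estimate used here is a crude first-order plug-in, not the paper's consistent estimator, and the function name and inputs are placeholders.

```python
# Minimal sketch of a Hotelling-type two-node test on rows of empirical
# eigenvectors (illustrative simplification; K is assumed known).
import numpy as np
from scipy.stats import chi2

def simple_type_test(A, i, j, K):
    """Test H0: nodes i and j share the same membership profile (A symmetric)."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:K]      # K leading eigenvalues by magnitude
    lam, V = vals[idx], vecs[:, idx]
    diff = V[i] - V[j]
    # Crude plug-in covariance of V[i] - V[j] from a first-order expansion;
    # the paper derives and consistently estimates the exact expression.
    W = A - V @ np.diag(lam) @ V.T                # rough estimate of the noise part
    cov = 2.0 * W.var() * np.diag(1.0 / lam**2)
    T = diff @ np.linalg.solve(cov, diff)
    return T, chi2.sf(T, df=K)                    # chi-square reference with K d.o.f.
```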
Bayesian inference provides a framework to combine an arbitrary number of model components with shared parameters, allowing joint uncertainty estimation and the use of all available data sources. However, misspecification of any part of the model might propagate to all other parts and lead to unsatisfactory results. Cut distributions have been proposed as a remedy, where the information is prevented from flowing along certain directions. We consider cut distributions from an asymptotic perspective, find the equivalent of the Laplace approximation, and notice a lack of frequentist coverage for the associated credible regions. We propose algorithms based on the Posterior Bootstrap that deliver credible regions with the nominal frequentist asymptotic coverage. The algorithms involve numerical optimization programs that can be performed fully in parallel. The results and methods are illustrated in various settings, such as causal inference with propensity scores and epidemiological studies.
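A minimal sketch of the parallelisable bootstrap idea, assuming a toy two-module Gaussian cut model with placeholder data; it uses weighted-likelihood-bootstrap-style random weights and omits the prior pseudo-samples of the full Posterior Bootstrap.

```python
# Each bootstrap draw is an independent weighted optimisation, so the loop
# over draws can run fully in parallel.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y1 = rng.normal(1.0, 1.0, size=200)           # data informing module 1
y2 = rng.normal(2.0, 1.0, size=200)           # data informing module 2

def draw(_):
    w1 = rng.exponential(size=y1.size)        # random weights, module 1
    w2 = rng.exponential(size=y2.size)        # random weights, module 2
    # Module 1: weighted loss depends only on y1 (the "cut").
    theta1 = minimize(lambda t: np.sum(w1 * (y1 - t) ** 2), x0=0.0).x[0]
    # Module 2: weighted loss uses y2 and the plugged-in theta1; no feedback.
    theta2 = minimize(lambda t: np.sum(w2 * (y2 - theta1 - t) ** 2), x0=0.0).x[0]
    return theta1, theta2

draws = np.array([draw(b) for b in range(500)])   # approximate cut "posterior" sample
```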
The recent statistical finite element method (statFEM) provides a coherent statistical framework to synthesise finite element models with observed data. Through embedding uncertainty inside of the governing equations, finite element solutions are updated to give a posterior distribution which quantifies all sources of uncertainty associated with the model. However, to incorporate all sources of uncertainty, one must integrate over the uncertainty associated with the model parameters, the well-known forward problem of uncertainty quantification. In this paper, we make use of Langevin dynamics to solve the statFEM forward problem, studying the utility of the unadjusted Langevin algorithm (ULA), a Metropolis-free Markov chain Monte Carlo sampler, to build a sample-based characterisation of this otherwise intractable measure. Due to the structure of the statFEM problem, these methods are able to solve the forward problem without explicit full PDE solves, requiring only sparse matrix-vector products. ULA is also gradient-based and hence provides an approach that scales to high degrees of freedom. Leveraging the theory behind Langevin-based samplers, we provide theoretical guarantees on sampler performance, demonstrating convergence, for both the prior and posterior, in Kullback-Leibler divergence and in Wasserstein-2 distance, with further results on the effect of preconditioning. Numerical experiments for both the prior and posterior demonstrate the efficacy of the sampler, and a Python package is also included.
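The sketch below illustrates the kind of sampler studied: unadjusted Langevin steps for a Gaussian target whose sparse precision matrix stands in for a statFEM-induced one, so that each iteration needs only a sparse matrix-vector product. The target, step size, and names are illustrative and are not taken from the accompanying package.

```python
import numpy as np
import scipy.sparse as sp

n = 200
# Illustrative sparse precision (1D Laplacian-like), standing in for a
# FEM-induced precision matrix; target is N(m, P^{-1}).
P = sp.diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
m = np.zeros(n)

def ula(n_steps=5000, step=1e-3, rng=np.random.default_rng(1)):
    u = np.zeros(n)
    samples = []
    for _ in range(n_steps):
        grad_log_pi = -(P @ (u - m))          # only a sparse matrix-vector product
        u = u + step * grad_log_pi + np.sqrt(2 * step) * rng.standard_normal(n)
        samples.append(u.copy())
    return np.array(samples)

samples = ula()
```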
Establishing the invariance property of an instrument (e.g., a questionnaire or test) is a key step for establishing its measurement validity. Measurement invariance is typically assessed by differential item functioning (DIF) analysis, i.e., detecting DIF items whose response distribution depends not only on the latent trait measured by the instrument but also on the group membership. DIF analysis is confounded by the group difference in the latent trait distributions. Many DIF analysis methods require several anchor items that are known to be DIF-free and then draw inference on whether each of the remaining items is a DIF item, where the anchor items are used to calibrate the latent trait distributions. In this paper, we propose a new method for DIF analysis under a multiple indicators and multiple causes (MIMIC) model for DIF. This method adopts a minimal L1 norm condition for identifying the latent trait distributions. It can not only accurately estimate the DIF effects of individual items without requiring prior knowledge about an anchor set, but also draw valid statistical inference, which yields accurate detection of DIF items. The inference results further allow us to control the type-I error for DIF detection. We conduct simulation studies to evaluate the performance of the proposed method and compare it with the anchor-set-based likelihood ratio test approach and the LASSO approach. The proposed method is further applied to analyzing the three personality scales of the Eysenck Personality Questionnaire-Revised (EPQ-R).
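To illustrate the minimal-L1-norm identification idea on synthetic numbers (not the EPQ-R data): item-level group effects are only identified up to a common shift that is confounded with the group difference in the latent trait, and the shift minimising the L1 norm of the DIF effects is the median of the unconstrained estimates. The paper's estimator and inference procedure go well beyond this toy step.

```python
import numpy as np

# Hypothetical unconstrained item-level group effects (identified only up to a shift).
raw_effects = np.array([0.05, -0.10, 0.02, 1.20, 0.00, -0.03, 0.95, 0.04])
shift = np.median(raw_effects)             # argmin_c sum_j |raw_effects[j] - c|
dif_effects = raw_effects - shift          # identified DIF effects under the L1 condition
flagged = np.where(np.abs(dif_effects) > 0.5)[0]   # items 3 and 6 stand out as DIF
print(dif_effects, flagged)
```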
Randomized iterative algorithms have attracted much attention in recent years because they can approximately solve large-scale linear systems of equations without accessing the entire coefficient matrix. In this paper, we propose two novel pseudoinverse-free randomized block iterative algorithms for solving consistent and inconsistent linear systems. The proposed algorithms require two user-defined random matrices: one for row sampling and the other for column sampling. We can recover the well-known doubly stochastic Gauss--Seidel, randomized Kaczmarz, randomized coordinate descent, and randomized extended Kaczmarz algorithms by choosing appropriate random matrices in our algorithms. Because our algorithms allow for a much wider selection of these two random matrices, a number of new specific algorithms can be obtained. We prove the linear convergence of our algorithms in the mean square sense. Numerical experiments for linear systems with synthetic and real-world coefficient matrices demonstrate the efficiency of some special cases of our algorithms.
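As one concrete special case recovered by the framework, the following is a standard randomized Kaczmarz sketch on a synthetic consistent system; the block, pseudoinverse-free variants proposed in the paper generalise this row-sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 50))
x_true = rng.standard_normal(50)
b = A @ x_true                               # consistent system

x = np.zeros(50)
row_norms2 = np.sum(A**2, axis=1)
probs = row_norms2 / row_norms2.sum()        # sample rows proportional to squared norms
for _ in range(5000):
    i = rng.choice(A.shape[0], p=probs)
    a_i = A[i]
    x += (b[i] - a_i @ x) / row_norms2[i] * a_i   # project onto the i-th hyperplane

print(np.linalg.norm(x - x_true))            # approaches zero
```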
Aggregated Relational Data, known as ARD, capture information about a social network by asking a respondent questions of the form "How many people with characteristic X do you know?" rather than asking about connections between each pair of individuals directly. Despite widespread use and a growing literature on ARD methodology, there is still no systematic understanding of when and why ARD should accurately recover features of the unobserved network. This paper provides such a characterization. First, we show that ARD provide sufficient information to consistently estimate the parameters of a common generative model for graphs. Then, we characterize conditions under which ARD should recover individual- and graph-level statistics from the unobserved graph.
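A small illustration of how ARD responses carry network information, assuming a toy simulated population: the classical scale-up estimator recovers respondents' degrees from "how many X do you know?" counts. The paper targets the parameters of a full generative graph model rather than degrees alone.

```python
import numpy as np

N = 1_000_000                                      # population size (hypothetical)
group_sizes = np.array([5_000, 20_000, 1_000])     # known sizes of traits X1..X3
rng = np.random.default_rng(0)
true_degree = rng.integers(50, 500, size=10)
# Simulated ARD: alters known in each group ~ Binomial(degree, group share).
ard = rng.binomial(true_degree[:, None], group_sizes / N)

degree_hat = ard.sum(axis=1) / group_sizes.sum() * N   # scale-up degree estimate
print(np.c_[true_degree, np.round(degree_hat)])
```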
Gaussian smoothed sliced Wasserstein distance has been recently introduced for comparing probability distributions while preserving privacy on the data. It has been shown, in applications such as domain adaptation, to provide performances similar to its non-private (non-smoothed) counterpart. However, the computational and statistical properties of such a metric have not yet been well established. In this paper, we analyze the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian smoothed sliced divergences. We show that smoothing and slicing preserve the metric property and the weak topology. We also provide results on the sample complexity of such divergences. Since the privacy level depends on the amount of Gaussian smoothing, we analyze the impact of this parameter on the divergence. We support our theoretical findings with empirical studies of the Gaussian smoothed and sliced versions of the Wasserstein distance, the Sinkhorn divergence, and maximum mean discrepancy (MMD). In the context of privacy-preserving domain adaptation, we confirm that these Gaussian smoothed sliced Wasserstein and MMD divergences perform very well while ensuring data privacy.
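A minimal Monte Carlo sketch of the Gaussian smoothed sliced Wasserstein distance, assuming synthetic Gaussian samples: projections onto random directions are smoothed by adding Gaussian noise of standard deviation sigma, and the one-dimensional Wasserstein distances are averaged over slices. This uses the order-1 distance from SciPy and is not an official implementation.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def gs_sliced_wasserstein(X, Y, sigma=1.0, n_slices=100, rng=np.random.default_rng(0)):
    d = X.shape[1]
    dists = []
    for _ in range(n_slices):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)                        # random unit direction
        px = X @ theta + sigma * rng.standard_normal(len(X))  # smoothed projection of X
        py = Y @ theta + sigma * rng.standard_normal(len(Y))  # smoothed projection of Y
        dists.append(wasserstein_distance(px, py))            # 1D Wasserstein-1
    return float(np.mean(dists))

rng = np.random.default_rng(1)
print(gs_sliced_wasserstein(rng.normal(0, 1, (500, 5)), rng.normal(0.5, 1, (500, 5))))
```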
No interference between experimental units is a critical assumption in causal inference. Over the past decades, there have been significant advances to go beyond this assumption using the design of experiments; two-stage randomization is one such design. Researchers have shown that this design enables us to estimate treatment effects in the presence of interference. On the other hand, the noncompliance behavior of experimental units is another fundamental issue in many social experiments, and researchers have established methods to deal with noncompliance under the assumption of no interference between units. In this article, we propose a Bayesian approach to analyze a causal inference problem with both interference and noncompliance. Building on previous work on two-stage randomized experiments and noncompliance, we apply the principal stratification framework to compare treatments while adjusting for post-treatment variables, yielding special principal effects in the two-stage randomized experiment. We illustrate the proposed methodology by conducting simulation studies and reanalyzing the evaluation of India's National Health Insurance Program, where we draw more definitive conclusions than existing results.
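For concreteness, the sketch below simulates the assignment mechanism of a two-stage randomized design with hypothetical saturations: clusters are first randomized to a treatment saturation, and individuals are then randomized within clusters. The Bayesian principal stratification analysis itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, cluster_size = 20, 50
saturations = rng.choice([0.3, 0.7], size=n_clusters)     # stage 1: cluster-level saturation
assignment = np.vstack([
    rng.binomial(1, s, size=cluster_size) for s in saturations   # stage 2: within-cluster
])
print(assignment.mean(axis=1)[:5], saturations[:5])        # realized vs. assigned saturation
```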
Clustering methods have led to a number of important discoveries in bioinformatics and beyond. A major challenge in their use is determining which clusters represent important underlying structure, as opposed to spurious sampling artifacts. This challenge is especially serious, and very few methods are available, when the data are very high in dimension. Statistical Significance of Clustering (SigClust) is a recently developed cluster evaluation tool for high-dimensional, low-sample-size data. An important component of the SigClust approach is the very definition of a single cluster as a subset of data sampled from a multivariate Gaussian distribution. The implementation of SigClust requires the estimation of the eigenvalues of the covariance matrix for the null multivariate Gaussian distribution. We show that the original eigenvalue estimation can lead to a test that suffers from severe inflation of type-I error in the important case where there are a few very large eigenvalues. This paper addresses this critical challenge using a novel likelihood-based soft-thresholding approach to estimate these eigenvalues, which leads to a much improved SigClust. Major improvements in SigClust performance are shown by both mathematical analysis, based on the new notion of Theoretical Cluster Index, and extensive simulation studies. Applications to some cancer genomic data further demonstrate the usefulness of these improvements.
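A condensed SigClust-style sketch with placeholder choices: the two-means cluster index is compared against its Monte Carlo null under a single Gaussian whose eigenvalues are thresholded at a robust background-noise level, a simplified stand-in for the paper's likelihood-based soft-thresholding estimator.

```python
import numpy as np
from scipy.stats import norm
from sklearn.cluster import KMeans

def cluster_index(X):
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    within = km.inertia_                                   # within-cluster sum of squares
    total = np.sum((X - X.mean(axis=0)) ** 2)
    return within / total

def sigclust_pvalue(X, n_sim=200, rng=np.random.default_rng(0)):
    n, d = X.shape
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    sigma_bg2 = (np.median(np.abs(X - np.median(X))) / norm.ppf(0.75)) ** 2  # robust noise level
    eigvals = np.maximum(eigvals, sigma_bg2)               # simplified thresholding step
    ci_obs = cluster_index(X)
    null_ci = [cluster_index(rng.standard_normal((n, d)) * np.sqrt(eigvals))
               for _ in range(n_sim)]                      # null: single Gaussian cluster
    return np.mean(np.array(null_ci) <= ci_obs)            # small CI = strong clustering
```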
Graph Neural Networks (GNNs) are widely adopted to analyse non-Euclidean data, such as chemical networks, brain networks, and social networks, modelling complex relationships and interdependency between objects. Recently, membership inference attacks (MIAs) against GNNs have raised severe privacy concerns, where training data can be leaked from trained GNN models. However, prior studies focus on inferring the membership of only the components in a graph, e.g., an individual node or edge. How to infer the membership of an entire graph record is yet to be explored. In this paper, we take the first step in MIA against GNNs for graph-level classification. Our objective is to infer whether a graph sample has been used for training a GNN model. We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, reflecting different adversarial capabilities. We perform comprehensive experiments to evaluate our attacks on seven real-world datasets using five representative GNN models. Both attacks are shown to be effective and achieve high performance, reaching attack F1 scores above 0.7 in most cases. Furthermore, we analyse the implications behind the MIA against GNNs. Our findings confirm that GNNs can be even more vulnerable to MIA than models trained on non-graph-structured data. Moreover, unlike in node-level classification, MIAs on graph-level classification tasks are more closely correlated with the overfitting level of GNNs than with the statistical properties of their training graphs.
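A minimal sketch of a threshold-based attack of the kind described, with simulated confidence scores rather than a trained GNN: a graph is predicted to be a training member when the target model's confidence on it exceeds a threshold calibrated on graphs whose membership the adversary already knows.

```python
import numpy as np

def threshold_attack(member_conf, nonmember_conf, target_conf):
    """Confidences = max softmax probability the target model assigns to each graph."""
    # Calibrate the threshold on known member / non-member graphs.
    thresholds = np.linspace(0, 1, 101)
    accs = [(np.mean(member_conf >= t) + np.mean(nonmember_conf < t)) / 2 for t in thresholds]
    t_star = thresholds[int(np.argmax(accs))]
    return (target_conf >= t_star).astype(int)     # 1 = predicted training member

rng = np.random.default_rng(0)
members = rng.beta(8, 2, 1000)        # overfit models are more confident on training graphs
nonmembers = rng.beta(5, 3, 1000)
print(threshold_attack(members, nonmembers, rng.beta(8, 2, 10)))
```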
We develop an inference method for a (sub)vector of parameters identified by conditional moment restrictions, which are implied by economic models such as rational behavior and Euler equations. Building on Bierens (1990), we propose penalized maximum statistics and combine bootstrap inference with model selection. Our method is optimized to be powerful against a set of local alternatives of interest by solving a data-dependent max-min problem for tuning parameter selection. We demonstrate the efficacy of our method in a proof of concept using two empirical examples: rational unbiased reporting of ability status and the elasticity of intertemporal substitution.
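A stripped-down illustration of a Bierens-type maximum statistic for a conditional moment restriction E[rho(Z, theta) | X] = 0, using synthetic data: the restriction is converted into unconditional moments with exponential weight functions and the largest standardised moment over a grid is taken. The penalisation and bootstrap max-min tuning steps of the proposed method are omitted.

```python
import numpy as np

def max_statistic(residuals, x, t_grid=np.linspace(-5, 5, 101)):
    n = len(residuals)
    phi = np.arctan(x)                         # bounded one-to-one transform of X
    stats = []
    for t in t_grid:
        w = np.exp(t * phi)                    # Bierens-type weight function
        m = residuals * w
        stats.append(abs(m.sum()) / (np.sqrt(n) * m.std()))   # standardised moment
    return max(stats)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(max_statistic(rng.normal(size=500), x),                # restriction roughly holds
      max_statistic(0.5 * x**2 + rng.normal(size=500), x))   # restriction violated
```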