Verbal autopsies (VAs) are widely used to investigate population-level distributions of deaths by cause in low-resource settings without well-organized vital statistics systems. Computer-based methods are often adopted to assign causes of death to deceased individuals based on the interview responses of their family members or caregivers. In this article, we develop a new Bayesian approach that extracts information about cause-of-death distributions from VA data while accounting for age- and sex-related variation in the associations between symptoms. Its performance is compared with that of existing approaches using gold-standard data from the Population Health Metrics Research Consortium. In addition, we compute the relevance of predictors to causes of death based on information-theoretic measures.
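As one concrete instance of such a measure (a minimal sketch, not the authors' method; the symptom and cause arrays are hypothetical toy data), the relevance of a reported symptom can be quantified by its empirical mutual information with the cause-of-death label:

```python
import numpy as np

def mutual_information(symptom, cause):
    """Empirical mutual information (in nats) between two discrete arrays."""
    symptom, cause = np.asarray(symptom), np.asarray(cause)
    mi = 0.0
    for s in np.unique(symptom):
        for c in np.unique(cause):
            p_sc = np.mean((symptom == s) & (cause == c))  # joint frequency
            if p_sc > 0:
                p_s = np.mean(symptom == s)                # marginal frequencies
                p_c = np.mean(cause == c)
                mi += p_sc * np.log(p_sc / (p_s * p_c))
    return mi

# Hypothetical toy data: 1 = symptom reported in the VA interview.
symptom = [1, 1, 0, 0, 1, 0, 1, 0]
cause   = ["A", "A", "B", "B", "A", "B", "B", "B"]
print(mutual_information(symptom, cause))  # higher = more relevant predictor
```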
Single-particle entangled states (SPES) can offer a more secure way of encoding and processing quantum information than their multi-particle counterparts. The SPES generated via a 2D alternate quantum-walk setup from initially separable states can be either 3-way or 2-way entangled. This letter shows that the generated genuine three-way and nonlocal two-way SPES can be used as cryptographic keys to securely encode two distinct messages simultaneously. We detail the message encryption-decryption steps and show the resilience of the 3-way and 2-way SPES-based cryptographic protocols against eavesdropper attacks such as intercept-and-resend and man-in-the-middle. We also detail how these protocols can be experimentally realized using single photons, with orbital angular momentum (OAM), path, and polarization serving as the three degrees of freedom, making the schemes well suited to secure quantum communication tasks. The ability to simultaneously encode two distinct messages using the generated SPES showcases the versatility and efficiency of the proposed cryptographic protocols, and this capability could significantly improve the throughput of quantum communication systems.
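For intuition on how a quantum walk entangles a single particle's internal and positional degrees of freedom, here is a minimal 1D coined-walk simulation (our illustration only; the paper's setup is a 2D alternate walk over three degrees of freedom):

```python
import numpy as np

N = 21                                   # lattice sites
psi = np.zeros((N, 2), dtype=complex)    # amplitudes psi[position, coin]
psi[N // 2, 0] = 1.0                     # walker starts at the centre, coin |0>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard coin

def step(psi):
    psi = psi @ H.T                      # toss the coin (internal DOF)
    shifted = np.zeros_like(psi)
    shifted[:-1, 0] = psi[1:, 0]         # coin |0>: shift one site left
    shifted[1:, 1] = psi[:-1, 1]         # coin |1>: shift one site right
    return shifted

for _ in range(10):
    psi = step(psi)

# After a few steps the coin-position state is no longer a product state:
# the single walker carries entanglement between its degrees of freedom.
print(np.sum(np.abs(psi) ** 2, axis=1))  # position distribution
```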
Online communities offer their members various benefits, such as information access, social and emotional support, and entertainment. Despite the important role that founders play in shaping communities, prior research has focused primarily on what drives users to participate and contribute; the motivations and goals of founders remain underexplored. To uncover how and why online communities get started, we present findings from a survey of 951 recent founders of Reddit communities. We find that topical interest is the most common motivation for community creation, followed by motivations to exchange information, connect with others, and self-promote. Founders have heterogeneous goals for their nascent communities, but they tend to privilege community quality and engagement over sheer growth. These differences in founders' early attitudes towards their communities help predict not only the community-building actions that they pursue, but also the ability of their communities to attract visitors, contributors, and subscribers over the first 28 days. We end with a discussion of the implications for researchers, designers, and founders of online communities.
Representing ecosystems at equilibrium has been foundational for building ecological theories, forecasting species populations and planning conservation actions. The equilibrium "balance of nature" ideal suggests that populations will eventually stabilise to a coexisting balance of species. However, a growing body of literature argues that the equilibrium ideal is inappropriate for ecosystems. Here, we develop and demonstrate a new framework for representing ecosystems without requiring equilibrium dynamics. Instead, ecosystem models are constructed pragmatically from population trajectories, regardless of whether those trajectories exhibit equilibrium or transient (i.e. non-equilibrium) behaviour. This framework makes full use of readily available, but often overlooked, knowledge from field observations and expert elicitation, rather than relying on theoretical ecosystem properties. We develop Bayesian algorithms to generate ecosystem models in this new statistical framework without excessive computational burden. Our results indicate that this pragmatic framework can substantially affect conservation decision-making and enhance the realism of ecosystem models and forecasts.
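To make the trajectory-based construction concrete, here is a minimal rejection-sampling sketch (our illustration, not the authors' algorithm): candidate parameterisations of a generalised Lotka-Volterra model are retained whenever their simulated trajectories satisfy plausibility bounds of the kind an expert might elicit, with no equilibrium requirement imposed.

```python
import numpy as np

def glv_trajectory(r, A, n0, dt=0.01, steps=2000):
    """Generalised Lotka-Volterra, dn/dt = n * (r + A @ n), Euler scheme."""
    n = np.array(n0, dtype=float)
    traj = [n.copy()]
    for _ in range(steps):
        n = n + dt * n * (r + A @ n)
        traj.append(n.copy())
    return np.array(traj)

rng = np.random.default_rng(1)
accepted = []
for _ in range(500):                              # crude rejection sampler
    r = rng.uniform(0.1, 1.0, size=2)             # growth rates
    A = -np.abs(rng.normal(0.0, 0.5, (2, 2)))     # competitive interactions
    traj = glv_trajectory(r, A, n0=[0.5, 0.5])
    # Keep parameters whose *trajectories* stay plausible -- equilibrium
    # or transient behaviour is irrelevant to the acceptance rule.
    if np.all(traj > 0.01) and np.all(traj < 10):
        accepted.append((r, A))

print(len(accepted), "parameter sets satisfy the trajectory constraints")
```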
Linear structural vector autoregressive models can be identified statistically, without imposing restrictions on the model, if the shocks are mutually independent and at most one of them is Gaussian. We show that this result extends to structural threshold and smooth transition vector autoregressive models incorporating a time-varying impact matrix defined as a weighted sum of the impact matrices of the regimes. Our empirical application studies the effects of the climate policy uncertainty shock on the U.S. macroeconomy. In a structural logistic smooth transition vector autoregressive model consisting of two regimes, we find that a positive climate policy uncertainty shock decreases production in times of low economic policy uncertainty but slightly increases it in times of high economic policy uncertainty. The introduced methods are implemented in the accompanying R package sstvars.
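Schematically (our notation), with $M$ regimes the time-varying impact matrix takes the form
\[
B_t = \sum_{m=1}^{M} \alpha_{m,t} B_m, \qquad \alpha_{m,t} \ge 0, \qquad \sum_{m=1}^{M} \alpha_{m,t} = 1,
\]
where $B_m$ is the impact matrix of regime $m$ and the weights $\alpha_{m,t}$ are determined by the transition function (logistic in the empirical application).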
High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decompositions or factorizations, are natural and potent instruments for studying collections of tensor-variate objects that may be dependent or independent. However, statistical inferential theory for estimating the various low-rank structures that customarily play the role of signals in tensor factor models is still at an early stage. In this paper, we "decode" the estimation of a higher-order tensor factor model by leveraging tensor matricization. Specifically, we recast it into mode-wise traditional high-dimensional vector/fiber factor models, enabling the deployment of conventional principal component analysis (PCA) for estimation. Using the Tucker tensor factor model (TuTFaM), which is induced from the noisy version of the widely used Tucker decomposition, we show that estimators of the signal components are essentially mode-wise PCA techniques, and that incorporating projection and iteration enhances the signal-to-noise ratio to varying extents. We establish the inferential theory of the proposed estimators, conduct extensive simulation experiments, and illustrate how the proposed estimators work in tensor reconstruction and in clustering for independent video and dependent economic datasets, respectively.
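A minimal sketch of the recasting (ours, with hypothetical dimensions; the paper's full procedure additionally involves projection and iteration): estimating one mode's loading space reduces to PCA on that mode's matricization.

```python
import numpy as np

def mode_unfold(T, mode):
    """Matricize tensor T along `mode`: rows index that mode, columns its fibers."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_pca(T, mode, rank):
    """Top-`rank` principal directions of the mode-`mode` unfolding."""
    U, _, _ = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)
    return U[:, :rank]

# Hypothetical noisy Tucker tensor: core G, orthonormal loadings U1, U2, U3.
rng = np.random.default_rng(0)
G = rng.normal(size=(2, 2, 2))
Us = [np.linalg.qr(rng.normal(size=(d, 2)))[0] for d in (10, 12, 14)]
T = np.einsum('abc,ia,jb,kc->ijk', G, *Us)
T = T + 0.01 * rng.normal(size=T.shape)          # additive noise

U1_hat = mode_pca(T, mode=0, rank=2)             # mode-1 loading-space estimate
```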
This paper addresses the problem of deciding whether the dose-response relationships of subgroups and the full population in a multi-regional trial are similar to each other. Similarity is measured in terms of the maximal deviation between the dose-response curves. Within a parametric framework, we develop two powerful bootstrap tests: one for the similarity between the dose-response curves of one subgroup and the full population, and one for the similarity between the dose-response curves of several subgroups and the full population. We prove the validity of the tests, investigate their finite-sample properties by means of a simulation study, and finally illustrate the methodology in a case study.
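To illustrate the similarity measure (a sketch under our own assumptions: a hypothetical Emax dose-response model and toy data; the paper's bootstrap calibration of the test is more involved), the statistic is the maximal deviation between two fitted parametric curves over the dose range:

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(d, e0, emax_, ed50):
    """Hypothetical parametric dose-response model (Emax)."""
    return e0 + emax_ * d / (ed50 + d)

def max_deviation(theta1, theta2, grid):
    """Maximal deviation between the two fitted curves on a dose grid."""
    return np.max(np.abs(emax(grid, *theta1) - emax(grid, *theta2)))

# Toy responses for one subgroup and the full population.
rng = np.random.default_rng(2)
doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
y_sub  = emax(doses, 0.2, 1.0, 1.0) + rng.normal(0, 0.1, doses.size)
y_full = emax(doses, 0.2, 0.9, 1.2) + rng.normal(0, 0.1, doses.size)

t_sub,  _ = curve_fit(emax, doses, y_sub,  p0=[0.0, 1.0, 1.0], maxfev=5000)
t_full, _ = curve_fit(emax, doses, y_full, p0=[0.0, 1.0, 1.0], maxfev=5000)

grid = np.linspace(0, 4, 200)
print("observed maximal deviation:", max_deviation(t_sub, t_full, grid))
```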
We study the balance between privacy protection and statistical accuracy for multivariate data subject to \textit{componentwise local differential privacy} (CLDP). Under CLDP, each component of the private data is made public through a separate privacy channel. This allows for varying levels of privacy protection across components, or for the privatization of each component by a different entity, each with its own privacy policy. We develop general techniques for establishing minimax bounds that shed light on the statistical cost of privacy in this context, as a function of the privacy levels $\alpha_1, \ldots, \alpha_d$ of the $d$ components. We demonstrate the versatility and efficiency of these techniques through various statistical applications. Specifically, we examine nonparametric density and covariance estimation under CLDP, providing upper and lower bounds that match up to constant factors, as well as an associated data-driven adaptive procedure. Furthermore, we quantify the probability of extracting sensitive information from one component by exploiting the fact that a smaller degree of privacy protection is guaranteed on another, possibly correlated, component.
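One valid componentwise channel, shown only as a sketch (the Laplace mechanism applied per component; the clipping bound and privacy levels below are illustrative assumptions), privatizes component $j$ at its own level $\alpha_j$:

```python
import numpy as np

def cldp_laplace(x, alphas, bound=1.0):
    """Componentwise local DP via per-component Laplace mechanisms.

    Component j, clipped to [-bound, bound] (sensitivity 2*bound), receives
    Laplace noise of scale 2*bound/alpha_j, giving alpha_j-LDP for that
    component; different components may use different privacy levels.
    """
    x = np.clip(np.asarray(x, dtype=float), -bound, bound)
    scales = 2.0 * bound / np.asarray(alphas, dtype=float)
    return x + np.random.laplace(scale=scales, size=x.shape)

# Three components released at different (illustrative) privacy levels.
z = cldp_laplace([0.3, -0.7, 0.1], alphas=[0.5, 2.0, 1.0])
print(z)
```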
The population-wise error rate (PWER) is a type I error rate for clinical trials with multiple target populations. In such trials, one treatment is tested for its efficacy in each population. The PWER is defined as the probability that a randomly selected future patient will be exposed to an ineffective treatment based on the study results. The PWER can be understood and computed as an average of strata-specific family-wise error rates, weighted by the prevalences of these strata. A major issue with this concept is that the prevalences are usually not known in practice, so the PWER cannot be directly controlled. Instead, one can use an estimator based on the given sample, such as the maximum-likelihood estimator under a multinomial distribution. In this paper we show in simulations that this does not substantially inflate the true PWER. We distinguish between the expected PWER, which is almost perfectly controlled, and study-specific values of the PWER, which are conditioned on all subgroup sample sizes and vary within a narrow range. We consider up to eight different overlapping patient populations and moderate to large sample sizes. In these settings, we also consider the maximum strata-wise family-wise error rate, which is found to be bounded, on average, by twice the significance level used for PWER control. Finally, we introduce an adjustment of the PWER for the case where, by chance, no patients are recruited from a stratum, so that this stratum is not counted in PWER control; the PWER is then reduced in order to control for multiplicity in this stratum as well.
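In symbols (our notation): if $\pi_s$ denotes the prevalence of stratum $s$ and $\mathrm{FWER}_s$ the family-wise error rate within that stratum, then
\[
\mathrm{PWER} = \sum_{s=1}^{S} \pi_s \, \mathrm{FWER}_s ,
\]
and replacing the unknown $\pi_s$ by their multinomial maximum-likelihood estimates (the sample proportions) yields the estimated PWER whose behaviour is examined in the simulations.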
We show that the maximum likelihood estimator for mixtures of elliptically symmetric distributions is consistent for its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. When $P$ is a mixture of sufficiently well-separated but nonparametric distributions, we show that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis in case $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
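As a quick empirical illustration of this message (ours; the Gaussian mixture is one member of the elliptical family, and the toy subpopulations below are deliberately non-Gaussian):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two well-separated but non-Gaussian (uniform) subpopulations.
rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(-1, 1, size=(200, 2)),        # subpopulation 0
               rng.uniform(-1, 1, size=(200, 2)) + 8.0]) # subpopulation 1
labels_true = np.repeat([0, 1], 200)

# Mixture MLE under a misspecified (Gaussian) component model.
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels_fit = gm.predict(X)

# Components still align with the true subpopulations (up to label swap).
agreement = max(np.mean(labels_fit == labels_true),
                np.mean(labels_fit != labels_true))
print(f"cluster/subpopulation agreement: {agreement:.2f}")
```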
Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion or link prediction, i.e., to predict whether a relationship not in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
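As one example of the embedding models such surveys cover (a minimal TransE-style sketch with toy embeddings, not this paper's contribution), link prediction scores a candidate triple by how closely the translated head matches the tail:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50
# Toy randomly initialised embeddings; in practice these are trained.
entity_emb   = {e: rng.normal(size=dim) for e in ("Paris", "France", "Berlin")}
relation_emb = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    """TransE plausibility: negative L2 distance of h + r from t."""
    return -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

# Rank candidate tails for the incomplete triple (Paris, capital_of, ?).
for tail in ("France", "Berlin"):
    print(tail, transe_score("Paris", "capital_of", tail))
```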