We derive the exact asymptotic distribution of the maximum likelihood estimator $(\hat{\alpha}_n, \hat{\theta}_n)$ of $(\alpha, \theta)$ for the Ewens--Pitman partition in the regime $0<\alpha<1$ and $\theta>-\alpha$: we show that $\hat{\alpha}_n$ is $n^{\alpha/2}$-consistent and converges to a variance mixture of normal distributions, i.e., $\hat{\alpha}_n$ is asymptotically mixed normal, while $\hat{\theta}_n$ is not consistent and converges to a transformation of the generalized Mittag-Leffler distribution. As an application, we derive a confidence interval for $\alpha$ and propose a hypothesis test of sparsity for network data. In the proof, we define an empirical measure induced by the Ewens--Pitman partition and establish suitable convergence of this measure against certain test functions, from which we derive the asymptotic behavior of the log-likelihood.
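As a rough illustration of the object being estimated, the Ewens--Pitman partition can be simulated by the two-parameter Chinese restaurant process. The sketch below (function and variable names are hypothetical, not from the paper) samples block sizes for given $(\alpha, \theta)$:

```python
import random

def ewens_pitman_partition(n, alpha, theta, seed=0):
    """Sample block sizes of an Ewens--Pitman partition of n items via
    the two-parameter Chinese restaurant process (a sketch)."""
    rng = random.Random(seed)
    tables = []  # current block sizes
    for i in range(n):
        if not tables:
            tables.append(1)  # the first item always opens a block
            continue
        k = len(tables)
        # a new block opens with probability (theta + k*alpha) / (i + theta)
        if rng.random() < (theta + k * alpha) / (i + theta):
            tables.append(1)
        else:
            # otherwise join block j with probability proportional to (n_j - alpha);
            # the weights sum to i - k*alpha since the blocks hold i items in total
            u = rng.random() * (i - k * alpha)
            acc = 0.0
            for j, nj in enumerate(tables):
                acc += nj - alpha
                if u <= acc:
                    tables[j] += 1
                    break
    return tables

sizes = ewens_pitman_partition(1000, alpha=0.5, theta=1.0)
```

With a fixed seed the sketch is deterministic; for $0<\alpha<1$ the number of blocks grows at rate $n^{\alpha}$, which is the source of the $n^{\alpha/2}$ rate for $\hat{\alpha}_n$.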
In survey sampling, survey data do not necessarily represent the target population, and the samples are often biased. However, information on the survey weights aids in the elimination of selection bias. The Horvitz-Thompson estimator is a well-known unbiased, consistent, and asymptotically normal estimator; however, it is not efficient. Thus, this study derives the semiparametric efficiency bound for various target parameters by considering the survey weight as a random variable and consequently proposes a semiparametric optimal estimator with certain working models on the survey weights. The proposed estimator is consistent, asymptotically normal, and efficient in the class of regular and asymptotically linear estimators. Further, a limited simulation study is conducted to investigate the finite-sample performance of the proposed method. The proposed method is applied to the 1999 Canadian Workplace and Employee Survey data.
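For reference, the Horvitz-Thompson estimator mentioned above inverse-weights each sampled response by its inclusion probability; a minimal sketch of the textbook estimator of a population total (names are hypothetical):

```python
def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimate of a population total: each sampled
    value y_i is weighted by 1/pi_i, its inverse inclusion probability."""
    return sum(yi / pii for yi, pii in zip(y, pi))

# Two sampled units with inclusion probabilities 0.5 and 0.25:
est = horvitz_thompson_total([10.0, 20.0], [0.5, 0.25])  # 10/0.5 + 20/0.25
```

This classical form is unbiased but, as the abstract notes, not efficient; the paper's contribution is an estimator attaining the semiparametric efficiency bound.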
Standardness is a popular assumption in the literature on set estimation. It also appears in statistical approaches to topological data analysis, where it is common to assume that the data were sampled from a probability measure that satisfies the standard assumption. Relevant results in this field, such as rates of convergence and confidence sets, depend on the standardness parameter, which in practice may be unknown. In this paper, we review the notion of standardness and its connection to other geometrical restrictions. We prove the almost sure consistency of a plug-in type estimator for the so-called standardness constant, already studied in the literature. We propose a method to correct the bias of the plug-in estimator and corroborate our theoretical findings through a small simulation study. We also show that it is not possible to determine, based on a finite sample, whether a probability measure satisfies the standard assumption.
We investigate the joint statistical estimation of several parameters for a stochastic differential equation driven by an additive fractional Brownian motion. Based on discrete-time observations of the model, we construct an estimator of the Hurst parameter, the diffusion parameter, and the drift, which lies in a parametrised family of coercive drift coefficients. Our procedure relies on the assumption that the stationary distributions of the SDE and of its increments identify the parameters of the model. Under this assumption, we prove consistency results and derive a rate of convergence for the estimator. Finally, we show that the identifiability assumption is satisfied for a family of fractional Ornstein-Uhlenbeck processes and illustrate our results with numerical experiments.
In this paper we report a new finding on the linear sampling and factorization methods: in addition to shape identification, these methods are also capable of parameter identification. Our demonstration concerns shape/parameter identification associated with a restricted Fourier integral operator, which arises from the multi-frequency inverse source problem for a fixed observation direction and from Born inverse scattering problems. Within the framework of the linear sampling method, we develop both a shape identification theory and a parameter identification theory, which are motivated, analyzed, and implemented with the help of the prolate spheroidal wave functions and their generalizations. Both the shape and parameter identification theories are general, since they allow any general regularization scheme such as Tikhonov or singular-value cutoff regularization. We further propose a prolate-Galerkin formulation of the linear sampling method for implementation and provide numerical experiments demonstrating how the linear sampling method can reconstruct both the shape and the parameter.
Partial orders are a natural model for the social hierarchies that may constrain "queue-like" rank-order data. However, the computational cost of counting the linear extensions of a general partial order on a ground set with more than a few tens of elements is prohibitive. Vertex-series-parallel partial orders (VSPs) are a subclass of partial orders which admit rapid counting and represent the sorts of relations we expect to see in a social hierarchy. However, no Bayesian analysis of VSPs has been given to date. We construct a marginally consistent family of priors over VSPs with a parameter controlling the prior distribution over VSP depth. The prior for VSPs is given in closed form. We extend an existing observation model for queue-like rank-order data to represent noise in our data and carry out Bayesian inference on "Royal Acta" data and Formula 1 race data. Model comparison shows our model is a better fit to the data than Plackett-Luce mixtures, Mallows mixtures, and "bucket order" models and competitive with more complex models fitting general partial orders.
We study the non-parametric estimation of a multidimensional unknown density $f$ in a tomography problem based on independent and identically distributed observations whose common density is proportional to the Radon transform of $f$. We identify the underlying statistical inverse problem and use a spectral cut-off regularisation to derive an estimator. A fully data-driven choice of the cut-off parameter $m \in \mathbb{R}_{+}$ is proposed and studied. To discuss the bias-variance trade-off, we consider Sobolev spaces and show the minimax optimality of the spectral cut-off density estimator. In a simulation study, we illustrate the reasonable behaviour of the fully data-driven estimator.
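A generic spectral cut-off regularisation of the kind referred to above can be sketched on a discretized linear inverse problem: keep only the $m$ largest singular values of the operator and invert on that subspace. The sketch below is purely illustrative (it is not the paper's tomography-specific construction, and the names are hypothetical):

```python
import numpy as np

def spectral_cutoff_solve(A, y, m):
    """Regularized solution of A x = y that inverts only the m largest
    singular values, zeroing the unstable small-sigma directions."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:m] = 1.0 / s[:m]  # truncate the spectrum at level m
    return Vt.T @ (s_inv * (U.T @ y))

A = np.array([[2.0, 0.0], [0.0, 1.0]])
x_true = np.array([1.0, 2.0])
y = A @ x_true
x_hat = spectral_cutoff_solve(A, y, m=2)  # full spectrum: exact recovery
```

Choosing $m$ trades bias (components discarded) against variance (noise amplified by small singular values), which is the trade-off the data-driven rule addresses.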
We introduce a sparse estimation method for ordinary kriging of functional data. Functional kriging predicts a feature, given as a function, at an unobserved location by a linear combination of the data observed at other locations. To estimate the weights of the linear combination, we apply a lasso-type regularization when minimizing the expected squared error. We develop an algorithm that computes the estimator via the augmented Lagrangian method. Tuning parameters in the estimation procedure are selected by cross-validation. Since the proposed method can shrink some of the weights of the linear combination exactly to zero, we can investigate which locations are necessary or unnecessary for predicting the feature. Simulation and real data analyses show that the proposed method provides reasonable results.
This paper studies inference in two-stage randomized experiments under covariate-adaptive randomization. In the initial stage of this experimental design, clusters (e.g., households, schools, or graph partitions) are stratified and randomly assigned to control or treatment groups based on cluster-level covariates. Subsequently, an independent second-stage design is carried out, wherein units within each treated cluster are further stratified and randomly assigned to either control or treatment groups, based on individual-level covariates. Under the homogeneous partial interference assumption, I establish conditions under which the proposed difference-in-"average of averages" estimators are consistent and asymptotically normal for the corresponding average primary and spillover effects and develop consistent estimators of their asymptotic variances. Combining these results establishes the asymptotic validity of tests based on these estimators. My findings suggest that ignoring covariate information in the design stage can result in efficiency loss, and commonly used inference methods that ignore or improperly use covariate information can lead to either conservative or invalid inference. Finally, I apply these results to studying optimal use of covariate information under covariate-adaptive randomization in large samples, and demonstrate that a specific generalized matched-pair design achieves minimum asymptotic variance for each proposed estimator. The practical relevance of the theoretical results is illustrated through a simulation study and an empirical application.
In this work, an integer linear programming (ILP) based model is proposed for the computation of a minimal-cost addition sequence for a given set of integers. Since exponents are additive under multiplication, a minimal-length addition sequence provides an economical solution for the evaluation of a requested set of power terms. This, in turn, finds application in, e.g., window-based exponentiation for cryptography and polynomial evaluation. Not only is an optimal model proposed; the model is also extended to consider different costs for multipliers and squarers, as well as to control the depth of the resulting addition sequence.
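To illustrate why addition sequences economize exponentiation: once an addition sequence covering the target exponents is known, each new power costs exactly one multiplication (a squaring when the two summands coincide). A minimal sketch with hypothetical names:

```python
def eval_addition_sequence(x, seq):
    """Evaluate x**e for every e in an addition sequence seq:
    seq starts at 1 and each later element is the sum of two earlier
    ones, so each new power costs exactly one multiplication."""
    powers = {1: x}
    for e in seq[1:]:
        for a in list(powers):
            if e - a in powers:
                # one multiply (a squaring when a == e - a)
                powers[e] = powers[a] * powers[e - a]
                break
        else:
            raise ValueError(f"{e} is not a sum of two earlier elements")
    return powers

# Target exponents {2, 3, 6}: the sequence 1, 2, 3, 6 (1+1, 1+2, 3+3)
# evaluates all three powers with only three multiplications.
p = eval_addition_sequence(3, [1, 2, 3, 6])
```

The ILP model in the paper searches for the cheapest such sequence; this sketch only evaluates a sequence that is already given.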
Consider a random sample $(X_{1},\ldots,X_{n})$ from an unknown discrete distribution $P=\sum_{j\geq1}p_{j}\delta_{s_{j}}$ on a countable alphabet $\mathbb{S}$, and let $(Y_{n,j})_{j\geq1}$ be the empirical frequencies of the distinct symbols $s_{j}$ in the sample. We consider the problem of estimating the $r$-order missing mass, a discrete functional of $P$ defined as $$\theta_{r}(P;\mathbf{X}_{n})=\sum_{j\geq1}p^{r}_{j}I(Y_{n,j}=0).$$ This is a generalization of the missing mass, whose estimation is a classical problem in statistics and the subject of numerous studies in both theory and methods. First, we introduce a nonparametric estimator of $\theta_{r}(P;\mathbf{X}_{n})$ and a corresponding non-asymptotic confidence interval through concentration properties of $\theta_{r}(P;\mathbf{X}_{n})$. Then, we investigate minimax estimation of $\theta_{r}(P;\mathbf{X}_{n})$, which is the main contribution of our work. We show that minimax estimation is not feasible over the class of all discrete distributions on $\mathbb{S}$, and not even for distributions with regularly varying tails, which only guarantee that our estimator is consistent for $\theta_{r}(P;\mathbf{X}_{n})$. This leads us to introduce the stronger assumption of second-order regular variation for the tail behaviour of $P$, which is proved to be sufficient for minimax estimation of $\theta_r(P;\mathbf{X}_{n})$, making the proposed estimator minimax optimal for $\theta_{r}(P;\mathbf{X}_{n})$. Our interest in the $r$-order missing mass arises from forensic statistics, where the estimation of the $2$-order missing mass appears in connection with the estimation of the likelihood ratio $T(P,\mathbf{X}_{n})=\theta_{1}(P;\mathbf{X}_{n})/\theta_{2}(P;\mathbf{X}_{n})$, known as the "fundamental problem of forensic mathematics". We present theoretical guarantees for the nonparametric estimation of $T(P,\mathbf{X}_{n})$.
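For context, the classical $r=1$ case admits the well-known Good-Turing estimator of the missing mass: the proportion of symbols observed exactly once. A minimal sketch of that textbook special case (not the paper's $r$-order estimator):

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Classical Good-Turing estimate of the (first-order) missing mass
    theta_1: the fraction of observations that are singletons."""
    n = len(sample)
    freqs = Counter(sample)
    n1 = sum(1 for c in freqs.values() if c == 1)  # symbols seen exactly once
    return n1 / n

# "abracadabra": the singletons are 'c' and 'd', so the estimate is 2/11.
est = good_turing_missing_mass(list("abracadabra"))
```

The paper's contribution is the harder problem of estimating $\theta_{r}(P;\mathbf{X}_{n})$ for general $r$, where minimax rates require tail assumptions on $P$.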