Many spatio-temporal datasets record the birth and death times of individuals along with their spatial trajectories during their lifetime, whether through continuous-time or discrete-time observations. Natural applications include epidemiology, individual-based modelling in ecology, spatio-temporal dynamics observed in bio-imaging, and computer vision. The aim of this article is to estimate, in this context, the birth and death intensity functions, which in full generality depend on the current spatial configuration of all alive individuals. While the temporal evolution of the population size is a simple birth-death process, observing the lifetime and trajectories of all individuals calls for a new paradigm. To formalise this framework, we introduce spatial birth-death-move processes, where the birth and death dynamics depend on the current spatial configuration of the population and where individuals can move during their lifetime according to a continuous Markov process with possible interactions. We consider non-parametric kernel estimators of their birth and death intensity functions. The setting is original because each observation in time belongs to a non-vectorial, infinite-dimensional space and the dependence between observations is barely tractable. We prove the consistency of the estimators in the presence of continuous-time and discrete-time observations, under fairly simple conditions. We moreover discuss how structural assumptions on the intensity functions can be exploited in practice, and we explain how data-driven bandwidth selection can be conducted despite the unknown (and sometimes undefined) second-order moments of the estimators. We finally apply our statistical method to the analysis of the spatio-temporal dynamics of proteins involved in exocytosis in cells, providing new insights into this complex mechanism.
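To fix ideas, here is a minimal sketch of such a kernel intensity estimator under the illustrative structural assumption that the birth intensity depends on the configuration only through the population size; the function names and the Gaussian kernel are our choices, not the paper's:

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def birth_intensity(birth_sizes, interval_sizes, holding_times, n, h):
    # Kernel estimate of the birth intensity at population size n:
    # births observed near size n, normalized by the kernel-weighted
    # time the process spent near size n.
    # birth_sizes:    population size just before each observed birth
    # interval_sizes: population size on each inter-jump interval
    # holding_times:  duration of each inter-jump interval
    num = gaussian_kernel((np.asarray(birth_sizes, float) - n) / h).sum()
    den = (gaussian_kernel((np.asarray(interval_sizes, float) - n) / h)
           * np.asarray(holding_times, float)).sum()
    return num / den if den > 0 else float("nan")
```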
Sleeve functions are generalizations of the well-established ridge functions, which play a major role in the theory of partial differential equations, medical imaging, statistics, and neural networks. Whereas ridge functions are non-linear, univariate functions of the distance to hyperplanes, sleeve functions are based on the squared distance to lower-dimensional manifolds. The present work is a first step towards the study of general sleeve functions, starting with sleeve functions based on finite-length curves. To capture these curve-based sleeve functions, we propose and study a two-step method, where first the outer univariate function (the profile) is recovered, and second the underlying curve is represented by a polygonal chain. Introducing a concept of well-separation, we ensure that the proposed method always terminates and approximates the true sleeve function with a guaranteed quality. Investigating the local geometry, we study an inexact version of our method and show its success under certain conditions.
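For concreteness, a small sketch of how a curve-based sleeve function with a polygonal-chain representation can be evaluated: the value at a point is the profile applied to the squared distance to the chain (names illustrative):

```python
import numpy as np

def squared_dist_to_segment(x, a, b):
    # Squared Euclidean distance from point x to the segment [a, b].
    ab = b - a
    t = np.clip(np.dot(x - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.sum((x - (a + t * ab)) ** 2))

def sleeve_value(x, vertices, profile):
    # Sleeve function value: the profile applied to the squared distance
    # from x to the polygonal chain through `vertices`.
    d2 = min(squared_dist_to_segment(x, vertices[i], vertices[i + 1])
             for i in range(len(vertices) - 1))
    return profile(d2)
```

For example, with `profile = lambda s: np.exp(-s)` the sleeve function decays with the squared distance to the chain.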
We study the problem of non-parametric estimation of the density of the stationary distribution of a multivariate stochastic differential equation with jumps $(X_t)$, when the dimension $d$ is larger than $3$. From continuous observation of the sample path on $[0, T]$ we show that, under anisotropic Hölder smoothness constraints, kernel-based estimators can achieve fast convergence rates. In particular, they are as fast as the ones found by Dalalyan and Reiss [9] for the estimation of the invariant density in the case without jumps under isotropic Hölder smoothness constraints. Moreover, they are faster than the ones found by Strauch [29] for the invariant density estimation of continuous stochastic differential equations, under anisotropic Hölder smoothness constraints. Furthermore, we obtain a minimax lower bound on the $L^2$-risk for pointwise estimation, with the same rate up to a $\log(T)$ term. This implies that, on the class of diffusions whose invariant density belongs to the anisotropic Hölder class we consider, it is impossible to find an estimator with a rate of estimation faster than the one we propose.
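As an illustration, a minimal anisotropic product-kernel density estimator of the kind studied here, evaluated from discretized path observations; the Gaussian kernel and coordinate-wise bandwidths are illustrative choices:

```python
import numpy as np

def anisotropic_kde(samples, x, bandwidths):
    # Product Gaussian kernel with one bandwidth per coordinate, evaluated
    # at a point x from discretized observations of the path (X_t).
    samples = np.asarray(samples, float)          # shape (n, d)
    h = np.asarray(bandwidths, float)             # shape (d,)
    u = (samples - x) / h
    d = samples.shape[1]
    kern = np.exp(-0.5 * (u ** 2).sum(axis=1)) / (2.0 * np.pi) ** (d / 2.0)
    return kern.mean() / h.prod()
```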
Dynamic treatment regimes (DTRs) consist of a sequence of decision rules, one per stage of intervention, that find effective treatments for individual patients according to patient information history. DTRs can be estimated from models which include the interaction between treatment and a small number of covariates that are often chosen a priori. However, with increasingly large and complex data being collected, it is difficult to know which prognostic factors might be relevant in the treatment rule. Therefore, a more data-driven approach to selecting these covariates might improve the estimated decision rules and simplify models to make them easier to interpret. We propose a variable selection method for DTR estimation using penalized dynamic weighted least squares. Our method has the strong heredity property, that is, an interaction term can be included in the model only if the corresponding main terms have also been selected. Through simulations, we show that our method has both the double robustness property and the oracle property, and that it compares favorably with other variable selection approaches.
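To make the strong heredity property concrete, a toy illustration of the constraint itself (not the paper's penalized estimator): an interaction coefficient may survive only if its main effect was selected.

```python
import numpy as np

def enforce_strong_heredity(main_coefs, interaction_coefs, tol=1e-8):
    # Strong heredity: a treatment-covariate interaction may be nonzero
    # only when the corresponding main effect has been selected.
    selected = np.abs(np.asarray(main_coefs)) > tol
    return np.where(selected, interaction_coefs, 0.0)

# e.g. main effects (0.8, 0.0, -0.3) force the second interaction to zero:
# enforce_strong_heredity([0.8, 0.0, -0.3], [0.5, 0.4, 0.2]) -> [0.5, 0.0, 0.2]
```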
We consider the estimation of densities in multiple subpopulations, where the available sample size in each subpopulation varies greatly. This problem occurs in epidemiology, for example, where different diseases may share similar pathogenic mechanisms but differ in their prevalence. Without specifying a parametric form, our proposed method pools information across the population and estimates the density in each subpopulation in a data-driven fashion. Drawing from functional data analysis, low-dimensional approximating density families in the form of exponential families are constructed from the principal modes of variation in the log-densities. Subpopulation densities are subsequently fitted within the approximating families based on likelihood principles and shrinkage. The approximating families increase in flexibility as the number of components increases and can approximate arbitrary infinite-dimensional densities. We also derive convergence results for the density estimates with discrete observations. The proposed methods are shown to be interpretable and efficient in simulations as well as in applications to electronic medical record and rainfall data.
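A minimal sketch of the final fitting step, assuming the principal modes of variation of the log-densities have already been extracted and are supplied as (vectorized) basis functions; shrinkage is omitted for brevity:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

def fit_exponential_family(data, basis_funcs, grid):
    # Maximum likelihood fit of p(x) = exp(sum_k theta_k phi_k(x) - c(theta))
    # within the approximating family spanned by the supplied basis
    # functions (assumed vectorized over numpy arrays).
    B = np.column_stack([f(grid) for f in basis_funcs])    # basis on the grid
    Bx = np.column_stack([f(np.asarray(data)) for f in basis_funcs])

    def neg_loglik(theta):
        log_norm = np.log(trapezoid(np.exp(B @ theta), grid))
        return -(Bx @ theta).mean() + log_norm

    theta_hat = minimize(neg_loglik, np.zeros(B.shape[1])).x
    dens = np.exp(B @ theta_hat)
    return theta_hat, dens / trapezoid(dens, grid)
```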
Truncated conditional expectation functions are objects of interest in a wide range of economic applications, including income inequality measurement, financial risk management, and impact evaluation. They typically involve truncating the outcome variable above or below certain quantiles of its conditional distribution. In this paper, a novel two-stage nonparametric estimator of such functions, based on local linear methods, is proposed. In this estimation problem, the conditional quantile function is a nuisance parameter that has to be estimated in the first stage. The proposed estimator is insensitive to the first-stage estimation error owing to the use of a Neyman-orthogonal moment in the second stage. This construction ensures that inference methods developed for standard nonparametric regression can be readily adapted to conduct inference on truncated conditional expectations. As an extension, estimation with an estimated truncation quantile level is considered. The proposed estimator is applied in two empirical settings: sharp regression discontinuity designs with a manipulated running variable and randomized experiments with sample selection.
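One standard Neyman-orthogonal moment for this object (used, for instance, in expected-shortfall regression) replaces the truncated outcome with a pseudo-outcome whose conditional mean equals the target and which is first-order insensitive to errors in the estimated quantile; the paper's exact construction may differ:

```python
import numpy as np

def orthogonal_pseudo_outcome(y, qhat, tau):
    # Pseudo-outcome psi = Y 1{Y <= qhat} + qhat (tau - 1{Y <= qhat}):
    # its conditional mean equals E[Y 1{Y <= q_tau(X)} | X] at qhat = q_tau,
    # and first-stage errors in qhat enter the bias only at second order.
    y, qhat = np.asarray(y, float), np.asarray(qhat, float)
    below = (y <= qhat).astype(float)
    return y * below + qhat * (tau - below)
```

The second stage is then an ordinary local linear regression of this pseudo-outcome on the covariates.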
Stein importance sampling is a widely applicable technique based on kernelized Stein discrepancy, which corrects the output of approximate sampling algorithms by reweighting the empirical distribution of the samples. A general analysis of this technique is conducted for the previously unconsidered setting where samples are obtained via the simulation of a Markov chain, and it applies to an arbitrary underlying Polish space. We prove that Stein importance sampling yields consistent estimators for quantities related to a target distribution of interest by using samples obtained from a geometrically ergodic Markov chain with a possibly unknown invariant measure that differs from the desired target. The approach is shown to be valid under conditions that are satisfied for a large number of unadjusted samplers, and is capable of retaining consistency when data subsampling is used. Along the way, a universal theory of reproducing Stein kernels is established, which enables the construction of kernelized Stein discrepancy on general Polish spaces, and provides sufficient conditions for kernels to be convergence-determining on such spaces. These results are of independent interest for the development of future methodology based on kernelized Stein discrepancies.
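A finite-dimensional sketch of Stein importance sampling on $\mathbb{R}^d$ with an IMQ base kernel, the most common concrete instance; the paper's construction extends this to general Polish spaces:

```python
import numpy as np
from scipy.optimize import minimize

def imq_stein_kernel(X, scores, c=1.0, beta=-0.5):
    # Langevin-Stein kernel built from the IMQ base kernel
    # k(x, y) = (c^2 + ||x - y||^2)^beta; scores[i] = grad log p at X[i].
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]
    r2 = (diff ** 2).sum(-1)
    u = c ** 2 + r2
    k = u ** beta
    gx = 2 * beta * u[..., None] ** (beta - 1) * diff          # grad_x k
    cross = -2 * beta * (2 * (beta - 1) * u ** (beta - 2) * r2
                         + d * u ** (beta - 1))                # div_x div_y k
    return (k * (scores @ scores.T)
            - (gx * scores[:, None, :]).sum(-1)                # s(x).grad_y k
            + (gx * scores[None, :, :]).sum(-1)                # s(y).grad_x k
            + cross)

def stein_weights(Kp):
    # Reweight the samples by minimizing the kernelized Stein discrepancy
    # w' Kp w over the probability simplex.
    n = Kp.shape[0]
    res = minimize(lambda w: w @ Kp @ w, np.full(n, 1.0 / n),
                   jac=lambda w: 2.0 * Kp @ w,
                   bounds=[(0.0, 1.0)] * n,
                   constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),
                   method="SLSQP")
    return res.x
```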
Point processes in time have a wide range of applications, including the claims arrival process in insurance and the analysis of queues in operations research. Due to advances in technology, samples of such point processes are increasingly encountered. A key object of interest is the local intensity function, which has a straightforward interpretation that allows one to understand and explore point process data. We consider functional approaches for point processes, where one has a sample of repeated realizations of the point process. This situation is inherently connected with Cox processes, where the intensity functions of the replications are modeled as random functions. Here we study a situation where one records covariates for each replication of the process, such as the daily temperature for bike rentals. For modeling point processes as responses with vector covariates as predictors, we propose a novel regression approach for the intensity function that is intrinsically nonparametric. While the intensity function of a point process that is only observed once on a fixed domain cannot be identified, we show how covariates and repeated observations of the process can be utilized to make consistent estimation possible, and we also derive asymptotic rates of convergence without invoking parametric assumptions.
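One natural kernel-based sketch of such an intensity regression, assuming a scalar covariate per replication (e.g., daily temperature): smooth each replication's events in time, then average across replications with Nadaraya-Watson weights in covariate space. The paper's estimator need not take this exact form:

```python
import numpy as np

def intensity_at(event_times, covariates, t, z, ht, hz):
    # Kernel-smoothed intensity of each replication around time t, averaged
    # across replications with Nadaraya-Watson weights in covariate space.
    wz = np.exp(-0.5 * ((np.asarray(covariates, float) - z) / hz) ** 2)
    lam = np.array([np.exp(-0.5 * ((np.asarray(times) - t) / ht) ** 2).sum()
                    / (ht * np.sqrt(2.0 * np.pi))
                    for times in event_times])
    return float((wz * lam).sum() / wz.sum())
```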
We consider the problem of correctly identifying the mode of a discrete distribution $\mathcal{P}$ with sufficiently high probability by observing a sequence of i.i.d. samples drawn according to $\mathcal{P}$. This problem reduces to the estimation of a single parameter when $\mathcal{P}$ has a support set of size $K = 2$. Noting the efficiency of prior-posterior-ratio (PPR) martingale confidence sequences for handling this special case, we propose a generalisation to mode estimation, in which $\mathcal{P}$ may take $K \geq 2$ values. We observe that the "one-versus-one" principle yields a more efficient generalisation than the "one-versus-rest" alternative. Our resulting stopping rule, denoted PPR-ME, is optimal in its sample complexity up to a logarithmic factor. Moreover, PPR-ME empirically outperforms several other competing approaches for mode estimation. We demonstrate the gains offered by PPR-ME in two practical applications: (1) sample-based forecasting of the winner in indirect election systems, and (2) efficient verification of smart contracts in permissionless blockchains.
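A toy sketch of the one-versus-one stopping check, assuming a uniform prior over each pairwise Bernoulli bias; the precise mistake-probability allocation and test in PPR-ME may differ:

```python
import numpy as np
from scipy.stats import beta

def half_excluded(wins, losses, delta):
    # PPR martingale for a Bernoulli bias under a uniform prior: the value
    # 1/2 leaves the confidence sequence once prior(1/2)/posterior(1/2),
    # i.e. 1 / Beta(1/2; 1 + wins, 1 + losses), reaches 1/delta.
    return 1.0 / beta.pdf(0.5, 1 + wins, 1 + losses) >= 1.0 / delta

def mode_identified(counts, delta):
    # One-versus-one check: the empirical mode must beat every other symbol
    # in its pairwise Bernoulli test (mistake budget split evenly here).
    counts = np.asarray(counts)
    i = int(np.argmax(counts))
    others = [j for j in range(len(counts)) if j != i]
    done = all(counts[i] > counts[j]
               and half_excluded(counts[i], counts[j], delta / len(others))
               for j in others)
    return done, i
```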
This paper addresses the task of estimating a covariance matrix under a patternless sparsity assumption. In contrast to existing approaches based on thresholding or shrinkage penalties, we propose a likelihood-based method that regularizes the distance from the covariance estimate to a symmetric sparsity set. This formulation avoids unwanted shrinkage induced by more common norm penalties and enables optimization of the resulting non-convex objective by solving a sequence of smooth, unconstrained subproblems. These subproblems are generated and solved via the proximal distance version of the majorization-minimization principle. The resulting algorithm executes rapidly, gracefully handles settings where the number of parameters exceeds the number of cases, yields a positive definite solution, and enjoys desirable convergence properties. Empirically, we demonstrate that our approach outperforms competing methods by several metrics across a suite of simulated experiments. Its merits are illustrated on an international migration dataset and a classic case study on flow cytometry. Our findings suggest that the marginal and conditional dependency networks for the cell signalling data are more similar than previously concluded.
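The workhorse of the proximal distance approach is the projection onto the sparsity set: at iterate $\Sigma_t$, the squared distance penalty is majorized by $\|\Sigma - P_C(\Sigma_t)\|_F^2$, so each subproblem is a smooth penalized likelihood. A sketch of the projection, assuming the sparsity set consists of symmetric matrices with at most $k$ nonzero off-diagonal pairs:

```python
import numpy as np

def project_to_sparsity(S, k):
    # Euclidean projection of a symmetric matrix onto the "patternless"
    # sparsity set: keep the diagonal and the k largest off-diagonal
    # entries in magnitude, zeroing the rest.
    P = np.diag(np.diag(S)).astype(float)
    iu = np.triu_indices_from(S, k=1)
    order = np.argsort(-np.abs(S[iu]))[:k]
    rows, cols = iu[0][order], iu[1][order]
    P[rows, cols] = S[rows, cols]
    P[cols, rows] = S[cols, rows]
    return P
```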
Implicit probabilistic models are models defined naturally in terms of a sampling procedure, and they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing the likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
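To illustrate the setting (though not the paper's estimator), a toy likelihood-free fit of an implicit model: the model is defined only through its sampler, and a parameter is chosen to minimize a sample-based discrepancy, here the energy distance, with common random numbers to keep the objective smooth:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def energy_distance(x, y):
    # Sample-based discrepancy: 2 E|X - Y| - E|X - X'| - E|Y - Y'|.
    a = np.abs(x[:, None] - y[None, :]).mean()
    b = np.abs(x[:, None] - x[None, :]).mean()
    c = np.abs(y[:, None] - y[None, :]).mean()
    return 2 * a - b - c

def fit_location(data, n_sim=2000, seed=0):
    # Toy implicit model: samples are generated as theta + noise; we only
    # ever call the sampler, never evaluate a likelihood.
    eps = np.random.default_rng(seed).standard_normal(n_sim)  # common random numbers
    obj = lambda theta: energy_distance(theta + eps, np.asarray(data, float))
    return minimize_scalar(obj, bounds=(-10.0, 10.0), method="bounded").x
```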