We study the problem of testing the null hypothesis that X and Y are conditionally independent given Z, where each of X, Y and Z may be functional random variables. This generalises testing the significance of X in a regression model of scalar response Y on functional regressors X and Z. We show, however, that even in the idealised setting where additionally (X, Y, Z) has a Gaussian distribution, the power of any test cannot exceed its size. Further modelling assumptions are needed, and we argue that a convenient way of specifying these is based on choosing methods for regressing each of X and Y on Z. We propose a test statistic involving inner products of the resulting residuals that is simple to compute and calibrate: type I error is controlled uniformly when the in-sample prediction errors are sufficiently small. We show this requirement is met by ridge regression in functional linear model settings without requiring any eigen-spacing conditions or lower bounds on the eigenvalues of the covariance of the functional regressor. We apply our test in constructing confidence intervals for truncation points in truncated functional linear models and testing for edges in a functional graphical model for EEG data.
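The residual inner-product idea can be sketched as follows. This is a simplified illustration under assumptions of our own (X and Y observed on a common grid, a plain normalized statistic compared to Gaussian quantiles), not the paper's exact statistic or calibration:

```python
import numpy as np

def rip_test_stat(X, Y, Z, alpha=1.0):
    """Normalized inner product of ridge-regression residuals.
    X, Y: (n, p) functional observations on a common grid; Z: (n, r) regressors."""
    G = Z.T @ Z + alpha * np.eye(Z.shape[1])
    H = Z @ np.linalg.solve(G, Z.T)          # ridge smoother matrix
    eps, xi = X - H @ X, Y - H @ Y           # residuals of X on Z and of Y on Z
    R = np.sum(eps * xi, axis=1)             # per-observation residual inner product
    return np.sqrt(len(R)) * R.mean() / R.std()

rng = np.random.default_rng(0)
n, p = 500, 10
Z = rng.normal(size=(n, 3))
A, B = rng.normal(size=(3, p)), rng.normal(size=(3, p))
X = Z @ A + rng.normal(size=(n, p))
Y_null = Z @ B + rng.normal(size=(n, p))     # X independent of Y given Z
Y_alt = Z @ B + (X - Z @ A)                  # shared residual curve: dependent given Z
T_null, T_alt = rip_test_stat(X, Y_null, Z), rip_test_stat(X, Y_alt, Z)
```

Under the null the statistic behaves approximately like a standard normal, while conditional dependence inflates it far beyond the Gaussian quantiles.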
We analyze the convergence of the harmonic balance method for computing isolated periodic solutions of a large class of continuously differentiable Hilbert-space-valued differential-algebraic equations (DAEs). We establish asymptotic convergence estimates for (i) the approximate periodic solution in terms of the number of approximated harmonics and (ii) the inexact Newton method used to compute the approximate Fourier coefficients. The convergence estimates are determined by the rate of convergence of the Fourier series of the exact solution and the structure of the DAE. Both the case where the period is known and the case where it is unknown are analyzed; in the latter case an appropriately defined phase condition must be enforced. The theoretical results are illustrated with several numerical experiments from circuit modeling and structural dynamics.
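A minimal sketch of the harmonic balance idea, under our own assumptions (a scalar 2π-periodic ODE rather than a Hilbert-space DAE, a collocation/FFT discretization, and an exact rather than inexact Newton method):

```python
import numpy as np

# Harmonic balance by trigonometric collocation for the scalar test problem
#   u'(t) + u(t) + 0.1*u(t)^3 = cos(t),  u 2*pi-periodic.
N = 32                                        # collocation points / harmonics
t = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)              # integer Fourier frequencies
# Spectral differentiation matrix: (D @ u) is the derivative of the
# trigonometric interpolant of u at the collocation points.
D = np.real(np.fft.ifft(1j * k[:, None] * np.fft.fft(np.eye(N), axis=0), axis=0))
f = np.cos(t)

u = np.zeros(N)                               # Newton iteration on the unknowns
for _ in range(20):
    r = D @ u + u + 0.1 * u**3 - f            # residual of the harmonic-balance system
    J = D + np.diag(1.0 + 0.3 * u**2)         # Jacobian of the residual
    u = u - np.linalg.solve(J, r)
res = np.linalg.norm(D @ u + u + 0.1 * u**3 - f)
```

Newton drives the discrete residual to machine precision; the number of retained harmonics N controls how well the trigonometric interpolant approximates the exact periodic solution.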
Strong invariance principles describe the error term of a Brownian approximation of the partial sums of a stochastic process. While these strong approximation results have many applications, the results for continuous-time settings have been limited. In this paper, we obtain strong invariance principles for a broad class of ergodic Markov processes. The main results rely on ergodicity requirements and an application of Nummelin splitting for continuous-time processes. Strong invariance principles provide a unified framework for analysing commonly used estimators of the asymptotic variance in settings with a dependence structure. We demonstrate how this can be used to analyse the batch means method for simulation output of Piecewise Deterministic Monte Carlo samplers. We also derive a fluctuation result for additive functionals of ergodic diffusions using our strong approximation results.
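The batch means method mentioned above can be sketched as follows; this toy uses a discrete-time AR(1) chain as a stand-in for simulation output, not the continuous-time samplers analysed in the paper:

```python
import numpy as np

def batch_means(x, n_batches=200):
    """Batch-means estimate of the asymptotic variance sigma^2 in the CLT
    sqrt(n) * (mean(x) - mu) => N(0, sigma^2) for a stationary ergodic sequence."""
    n = len(x) - len(x) % n_batches
    b = n // n_batches                          # batch length
    means = x[:n].reshape(n_batches, b).mean(axis=1)
    return b * means.var(ddof=1)

# AR(1) chain with rho = 0.5: the true asymptotic variance is 1/(1-rho)^2 = 4,
# larger than the marginal variance because of the positive dependence.
rng = np.random.default_rng(1)
rho, n = 0.5, 200_000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for i in range(1, n):
    x[i] = rho * x[i - 1] + e[i]
sigma2_hat = batch_means(x)
```

With batch lengths long relative to the mixing time, the scaled variance of the batch means recovers the asymptotic variance rather than the (smaller) marginal variance.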
We study the Bayesian inverse problem of inferring the log-normal slowness function of the eikonal equation from noisy observations of its solution at a set of spatial points. We study the approximation of the posterior probability measure obtained by solving, with the Fast Marching Method, a truncated eikonal equation in which the slowness function retains only a finite number of terms of its Karhunen-Loève expansion. The error of this approximation in the Hellinger metric is deduced in terms of the truncation level of the slowness and the grid size of the Fast Marching Method resolution. It is well known that the plain Markov Chain Monte Carlo procedure for sampling the posterior probability is highly expensive. We develop and justify the convergence of a Multilevel Markov Chain Monte Carlo method. Using the heap sort procedure in solving the forward eikonal equation by the Fast Marching Method, our Multilevel Markov Chain Monte Carlo method achieves a prescribed level of accuracy for approximating the posterior expectation of quantities of interest with an essentially optimal level of complexity. Numerical examples confirm the theoretical results.
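The multilevel principle (a telescoping sum over discretization levels with level-dependent sample sizes) can be sketched in a plain Monte Carlo toy. This is not the paper's MCMC construction or forward solver; the grid-rounded integrand below is merely a stand-in for a discretized solve whose cost grows and bias shrinks with the level:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: x ** 2                            # quantity of interest; E[f(U)] = 1/3
# Level-l "solver": evaluate f on a grid of width 2^-l.
P = lambda x, l: f(np.round(x * 2 ** l) / 2 ** l)

# Telescoping sum: E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].
L, N0 = 6, 100_000
est = 0.0
for l in range(L + 1):
    N = max(N0 >> (2 * l), 100)                 # fewer samples on finer, costlier levels
    x = rng.uniform(size=N)                     # coupled samples within each level
    est += np.mean(P(x, l) - (P(x, l - 1) if l > 0 else 0.0))
```

Because the level corrections P_l - P_{l-1} have rapidly shrinking variance, most samples can be spent on the cheap coarse level while retaining the fine level's small bias.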
Estimating nested expectations is an important task in computational mathematics and statistics. In this paper we propose a new Monte Carlo method using post-stratification to estimate nested expectations efficiently without sampling the inner random variable from the conditional distribution given the outer random variable. Unlike many existing methods, our method therefore requires only a dataset of inner-outer variable pairs drawn from the joint distribution. We prove an upper bound on the mean squared error of the proposed method under some assumptions. Numerical experiments compare the proposed method with several existing methods (the nested Monte Carlo method, the multilevel Monte Carlo method, and a regression-based method) and show that it is superior in terms of both efficiency and applicability.
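The post-stratification idea can be sketched as follows: using only joint draws of the (outer, inner) pair, stratify on the outer variable and use within-stratum averages as surrogates for conditional expectations. This is our own simplified illustration, not the paper's estimator or error analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_strata = 200_000, 50
X = rng.normal(size=n)                 # outer variable
Y = X + rng.normal(size=n)             # inner variable; E[Y | X] = X
# Target: E[ g(E[Y | X]) ] with g(m) = m^2, so the truth is E[X^2] = 1.
edges = np.quantile(X, np.linspace(0, 1, n_strata + 1))
idx = np.clip(np.searchsorted(edges, X, side="right") - 1, 0, n_strata - 1)
m = np.array([Y[idx == k].mean() for k in range(n_strata)])   # stratum means of Y
w = np.array([(idx == k).mean() for k in range(n_strata)])    # stratum weights
est = np.sum(w * m ** 2)               # post-stratified nested estimate
```

No draws of Y from the conditional distribution given X are needed; the quantile strata localize X enough that each stratum mean of Y approximates the conditional expectation.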
Bipartite graphs are rich data structures with prevalent applications and identifiable structural features. However, less is known about their growth patterns, particularly in streaming settings. Existing works study the patterns of static or aggregated temporal graphs, optimize for particular downstream analytics, or ignore multipartite/non-stationary data distributions, the emergence patterns of subgraphs, and streaming paradigms. To address these gaps, we perform statistical network analysis over web log streams and identify the governing patterns underlying the bursty emergence of mesoscopic building blocks, 2,2-bicliques known as butterflies, leading to a phenomenon that we call "scale-invariant strength assortativity of streaming butterflies". We provide a graph-theoretic explanation of this phenomenon. We further introduce a set of micro-mechanics in the body of a streaming growth algorithm, sGrow, to pinpoint its generative origins. sGrow supports streaming paradigms and the emergence of 4-vertex graphlets, and provides user-specified configurations for the scale, burstiness, level of strength assortativity, probability of out-of-order records, generation time, and time-sensitive connections. Comprehensive evaluations of pattern reproduction and stress testing validate the effectiveness, efficiency, and robustness of sGrow in realizing the observed patterns independently of initial conditions, scale, temporal characteristics, and model configurations. Theoretical and experimental analyses further verify that sGrow robustly generates streaming graphs according to these user-specified configurations.
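For reference, a butterfly (2,2-biclique) is a pair of left-side vertices sharing a pair of right-side neighbours; a minimal static counter (not sGrow's streaming machinery) looks like this:

```python
from collections import defaultdict
from itertools import combinations

def count_butterflies(edges):
    """Count butterflies (2,2-bicliques) in a bipartite edge list [(left, right), ...]."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
    total = 0
    for u, w in combinations(sorted(adj), 2):   # pairs of left-side vertices
        c = len(adj[u] & adj[w])                # shared right-side neighbours
        total += c * (c - 1) // 2               # each shared pair closes one butterfly
    return total
```

For example, the 4-cycle {(a,1), (a,2), (b,1), (b,2)} contains exactly one butterfly, and adding a third shared neighbour raises the count to C(3,2) = 3.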
Selective inference (post-selection inference) is a methodology that has attracted much attention in recent years in the fields of statistics and machine learning. Naive inference based on data that were also used for model selection tends to overestimate significance, so selective inference conditions on the event that the model was selected. In this paper, we develop selective inference in propensity score analysis with a semiparametric approach, which has become a standard tool in causal inference. Specifically, for the most basic causal inference model, in which the causal effect can be written as a linear sum of confounding variables, we conduct Lasso-type variable selection by adding an $\ell_1$ penalty term to the loss function that gives a semiparametric estimator. Confidence intervals are then given for the coefficients of the selected confounding variables, conditional on the event of variable selection, with asymptotic guarantees. An important property of this method is that it does not require modeling the nonparametric regression functions of the outcome variables, as is usually the case in semiparametric propensity score analysis.
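The problem that motivates conditioning on selection can be demonstrated in a toy simulation of our own (simple mean estimation with max-selection, not the paper's Lasso-plus-semiparametric setting): naive confidence intervals for a data-selected parameter undercover badly:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, reps, hits = 100, 10, 2000, 0
for _ in range(reps):
    means = rng.normal(size=(n, p)).mean(axis=0)   # p sample means; all true means are 0
    j = np.argmax(np.abs(means))                   # select the "most significant" variable
    half = 1.96 / np.sqrt(n)                       # naive 95% CI half-width
    hits += abs(means[j]) < half                   # does the CI cover the true value 0?
coverage = hits / reps
```

Because the selected coordinate is the maximum of ten null statistics, the naive 95% interval covers the truth only about 0.95^10 ≈ 60% of the time; conditioning on the selection event is what restores validity.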
A well-studied challenge in the structure learning problem for causal directed acyclic graphs (DAGs) is that, using observational data, one can only learn the graph up to a "Markov equivalence class" (MEC). The remaining undirected edges have to be oriented using interventions, which can be very expensive to perform in applications. Thus, the problem of minimizing the number of interventions needed to fully orient the MEC has received a lot of recent attention, and is also the focus of this work. We prove two main results. The first is a new universal lower bound on the number of atomic interventions that any algorithm (whether active or passive) would need to perform in order to orient a given MEC. Our second result shows that this bound is, in fact, within a factor of two of the size of the smallest set of atomic interventions that can orient the MEC. Our lower bound is provably better than previously known lower bounds. The proof of our lower bound is based on the new notion of clique-block shared-parents (CBSP) orderings, which are topological orderings of DAGs without v-structures that satisfy certain special properties. Further, using simulations on synthetic graphs and by giving examples of special graph families, we show that our bound is often significantly better.
Regression models are used in a wide range of applications, providing a powerful scientific tool for researchers from different fields. Linear, or simple parametric, models are often not sufficient to describe complex relationships between input variables and a response. Such relationships can be better described through flexible approaches such as neural networks, but this results in less interpretable models and potential overfitting. Alternatively, specific parametric nonlinear functions can be used, but the specification of such functions is in general complicated. In this paper, we introduce an approach for the construction and selection of highly flexible nonlinear parametric regression models. Nonlinear features are generated hierarchically, similarly to deep learning, but with additional flexibility in the types of features that may be considered. This flexibility, combined with variable selection, allows us to find a small set of important features and thereby more interpretable models. Within the space of possible functions, a Bayesian approach, introducing priors for functions based on their complexity, is considered. A genetically modified mode jumping Markov chain Monte Carlo algorithm is adopted to perform Bayesian inference and estimate posterior probabilities for model averaging. In various applications, we illustrate how our approach is used to obtain meaningful nonlinear models. Additionally, we compare its predictive performance with several machine learning algorithms.
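The hierarchical feature generation can be sketched as follows. This is our own toy illustration of the growth step only; the particular function set, the Bayesian priors, and the variable-selection and MCMC machinery of the paper are omitted:

```python
import numpy as np

def grow_features(F, rng, n_new=5, funcs=(np.tanh, np.sin, np.abs)):
    """One generation: apply a random nonlinearity to a random projection of
    existing features. Repeated generations give hierarchical (deep) features
    that remain explicit, interpretable parametric expressions."""
    cols = []
    for _ in range(n_new):
        w = rng.normal(size=F.shape[1])            # random linear combination
        g = funcs[rng.integers(len(funcs))]        # random nonlinear transform
        cols.append(g(F @ w))
    return np.column_stack([F] + cols)

rng = np.random.default_rng(4)
F = rng.normal(size=(200, 3))                      # raw input variables
for _ in range(2):                                 # two generations of features
    F = grow_features(F, rng)
```

Each generated column is a closed-form expression in the original inputs (e.g. tanh of a linear combination of earlier features), which is what makes subsequent selection yield interpretable models.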
The Gaussian stochastic process (GaSP) has been widely used as a prior over functions due to its flexibility and tractability in modeling. However, the computational cost of evaluating the likelihood is $O(n^3)$, where $n$ is the number of observed points in the process, as it requires inverting the covariance matrix. This bottleneck prevents GaSP from being widely used on large-scale data. We propose a general class of nonseparable GaSP models for multiple functional observations with a fast and exact algorithm, whose computation is linear in $n$ ($O(n)$) and requires no approximation to the likelihood. We show that the commonly used linear regression and separable models are special cases of the proposed nonseparable GaSP model. In an epigenetic application, the proposed nonseparable GaSP model accurately predicts genome-wide DNA methylation levels and compares favorably to alternative methods such as linear regression, random forests and localized kriging. The algorithm for fast computation is implemented in the ${\tt FastGaSP}$ R package on CRAN.
This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models equipped with causal assumptions that allow for inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effects estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. Furthermore, we propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
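The K-class family referenced above can be sketched in a toy instrumental-variable simulation. This illustrates only the textbook estimator with its classical endpoints (kappa = 0 for OLS, kappa = 1 for TSLS), not the thesis's regularized or distributionally robust constructions:

```python
import numpy as np

def k_class(y, X, Z, kappa):
    """K-class estimator (X'(I - kappa M_Z) X)^{-1} X'(I - kappa M_Z) y,
    rewritten via projections to avoid forming n-by-n matrices."""
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)      # projection of X onto span(Z)
    Pzy = Z @ np.linalg.solve(Z.T @ Z, Z.T @ y)
    A = (1 - kappa) * (X.T @ X) + kappa * (X.T @ PzX)
    b = (1 - kappa) * (X.T @ y) + kappa * (X.T @ Pzy)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(6)
n, beta = 20_000, 2.0
Z = rng.normal(size=(n, 1))                          # instrument
H = rng.normal(size=n)                               # hidden confounder
X = (Z[:, 0] + H + rng.normal(size=n))[:, None]      # endogenous regressor
y = beta * X[:, 0] + H + rng.normal(size=n)
ols, tsls = k_class(y, X, Z, 0.0)[0], k_class(y, X, Z, 1.0)[0]
```

Because the hidden confounder H enters both X and y, OLS (kappa = 0) is biased away from the causal coefficient, while TSLS (kappa = 1) recovers it; intermediate kappa interpolates between the two.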