The estimation of the potential impact fraction (including the population attributable fraction) with continuous exposure data frequently relies on strong distributional assumptions. However, these assumptions are often violated if the underlying exposure distribution is unknown or if the same distribution is assumed across time or space. Nonparametric methods to estimate the potential impact fraction are available for cohort data, but no alternatives exist for cross-sectional data. In this article, we discuss the impact of distributional assumptions in the estimation of the potential impact fraction, showing that, under an infinite set of scenarios, distributional violations lead to biased estimates. We propose nonparametric methods to estimate the potential impact fraction for aggregated data (mean and standard deviation) or individual-level data (e.g., observations from a cross-sectional population survey), and develop simulation scenarios to compare their performance against standard parametric procedures. We illustrate our methodology with an application to the effect of sugar-sweetened beverage consumption on the incidence of type 2 diabetes. We also present the R package pifpaf, which implements these methods.
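For reference, the individual-level nonparametric (plug-in) estimator of the potential impact fraction simply replaces the exposure distribution with the empirical distribution of the survey observations. The sketch below illustrates that idea under an assumed dose-response curve; the relative-risk function and counterfactual transformation are hypothetical, and this is not the pifpaf implementation itself.

```python
import numpy as np

def pif_empirical(x, rr, cft=lambda x: np.zeros_like(x)):
    """Plug-in (empirical) potential impact fraction.

    x   : 1-D array of observed individual-level exposures
    rr  : relative-risk function, normalized so that rr(reference) = 1
    cft : counterfactual transformation of the exposure; the default
          (zero exposure) gives the population attributable fraction
    """
    observed = np.mean(rr(x))             # average relative risk under observed exposure
    counterfactual = np.mean(rr(cft(x)))  # average relative risk under counterfactual exposure
    return 1.0 - counterfactual / observed

# Toy illustration with a hypothetical log-linear relative risk
rng = np.random.default_rng(0)
exposure = rng.gamma(shape=2.0, scale=1.5, size=1_000)       # e.g. servings per day
rr = lambda x: np.exp(0.1 * x)                               # assumed dose-response curve
print(pif_empirical(exposure, rr))                           # PAF (exposure removed)
print(pif_empirical(exposure, rr, cft=lambda x: 0.5 * x))    # 50% exposure-reduction scenario
```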
Physics-based covariance models provide a systematic way to construct covariance models that are consistent with the underlying physical laws in Gaussian process analysis. The unknown parameters in the covariance models can be estimated using maximum likelihood estimation, but direct construction of the covariance matrix and classical strategies for computing with it require $n$ physical model runs, $O(n^2)$ storage, and $O(n^3)$ computation. To address these challenges, we propose to approximate the discretized covariance function using hierarchical matrices. By utilizing randomized range sketching for the individual off-diagonal blocks, the construction of the hierarchical covariance approximation requires $O(\log{n})$ physical model applications, and the maximum likelihood computations require $O(n\log^2{n})$ effort per iteration. We propose a new approach to compute exactly the trace of products of hierarchical matrices, which allows the expected Fisher information matrix to be computed in $O(n\log^2{n})$ as well. The construction is entirely matrix-free, and the derivatives of the covariance matrix can be approximated in the same hierarchical structure by differentiating the whole process. Numerical results demonstrate the effectiveness, accuracy, and efficiency of the proposed method for parameter estimation and uncertainty quantification.
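The block construction relies on the standard randomized range-finding idea for numerically low-rank blocks: probe the block with a Gaussian test matrix through (matrix-free) applications, then orthonormalize the sample to obtain a range basis. A minimal sketch follows; the dense toy block stands in for an off-diagonal covariance block, and all names, sizes, and ranks are illustrative.

```python
import numpy as np

def randomized_range(apply_block, n_cols, rank, oversample=10, rng=None):
    """Randomized range sketch of a (numerically low-rank) off-diagonal block.

    apply_block : callable returning block @ X for a test matrix X,
                  e.g. realized through physical-model applications
    n_cols      : number of columns of the block
    rank        : target rank of the approximation
    """
    rng = np.random.default_rng(rng)
    omega = rng.standard_normal((n_cols, rank + oversample))  # Gaussian test matrix
    sample = apply_block(omega)                               # block @ omega
    q, _ = np.linalg.qr(sample)                               # orthonormal range basis
    return q                                                  # block ~ q @ (q.T @ block)

# Toy block with rapidly decaying singular values (stand-in for a covariance block)
rng = np.random.default_rng(1)
u, v = rng.standard_normal((500, 8)), rng.standard_normal((8, 400))
block = u @ np.diag(2.0 ** -np.arange(8)) @ v
q = randomized_range(lambda x: block @ x, n_cols=400, rank=8)
print(np.linalg.norm(block - q @ (q.T @ block)) / np.linalg.norm(block))
```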
Autoencoders have demonstrated remarkable success in learning low-dimensional latent features of high-dimensional data across various applications. Assuming that data are sampled near a low-dimensional manifold, we employ chart autoencoders, which encode data into low-dimensional latent features on a collection of charts, preserving the topology and geometry of the data manifold. Our paper establishes statistical guarantees on the generalization error of chart autoencoders, and we demonstrate their denoising capabilities by considering $n$ noisy training samples, along with their noise-free counterparts, on a $d$-dimensional manifold. We show that chart autoencoders can effectively denoise input data corrupted by noise in the normal direction. We prove that, under proper network architectures, chart autoencoders achieve a squared generalization error of order $\displaystyle n^{-\frac{2}{d+2}}\log^4 n$, which depends on the intrinsic dimension of the manifold and only weakly on the ambient dimension and noise level. We further extend our theory to data with noise containing both normal and tangential components, where chart autoencoders still exhibit a denoising effect for the normal component. As a special case, our theory also applies to classical autoencoders, as long as the data manifold admits a global parametrization. Our results provide a solid theoretical foundation for the effectiveness of autoencoders, which is further validated through several numerical experiments.
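A toy chart autoencoder can be sketched as several encoder/decoder pairs (one per chart) combined through a soft chart-assignment network; training it on noisy samples near a circle illustrates the denoising effect described above. The architecture, widths, and assignment scheme below are illustrative choices, not the construction analyzed in the paper.

```python
import torch
import torch.nn as nn

class ChartAutoencoder(nn.Module):
    """Toy chart autoencoder: several encoder/decoder pairs ("charts")
    combined through a soft chart-assignment network."""

    def __init__(self, ambient_dim, latent_dim, n_charts=2, width=64):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(ambient_dim, width), nn.ReLU(),
                          nn.Linear(width, latent_dim))
            for _ in range(n_charts))
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, width), nn.ReLU(),
                          nn.Linear(width, ambient_dim))
            for _ in range(n_charts))
        self.chart_weights = nn.Sequential(
            nn.Linear(ambient_dim, width), nn.ReLU(),
            nn.Linear(width, n_charts), nn.Softmax(dim=-1))

    def forward(self, x):
        w = self.chart_weights(x)                                   # (batch, n_charts)
        recons = torch.stack([dec(enc(x)) for enc, dec in
                              zip(self.encoders, self.decoders)], dim=1)
        return (w.unsqueeze(-1) * recons).sum(dim=1)                # weighted reconstruction

# Noisy samples near a circle (1-D manifold) embedded in R^10
theta = 2 * torch.pi * torch.rand(512, 1)
clean = torch.cat([torch.cos(theta), torch.sin(theta), torch.zeros(512, 8)], dim=1)
noisy = clean + 0.05 * torch.randn_like(clean)

model = ChartAutoencoder(ambient_dim=10, latent_dim=1, n_charts=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(noisy) - noisy) ** 2).mean()   # denoising emerges from the low-dimensional bottleneck
    loss.backward()
    opt.step()
```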
Approximate message passing (AMP) has emerged as an effective iterative paradigm for solving high-dimensional statistical problems. However, prior AMP theory -- which focused mostly on high-dimensional asymptotics -- fell short of predicting the AMP dynamics when the number of iterations surpasses $o\big(\frac{\log n}{\log\log n}\big)$ (with $n$ the problem dimension). To address this inadequacy, this paper develops a non-asymptotic framework for understanding AMP in spiked matrix estimation. Built upon a new decomposition of the AMP updates and controllable residual terms, we lay out an analysis recipe to characterize the finite-sample behavior of AMP with an independent initialization, which is further generalized to allow for spectral initialization. As two concrete consequences of the proposed analysis recipe: (i) when solving $\mathbb{Z}_2$ synchronization, we predict the behavior of spectrally initialized AMP for up to $O\big(\frac{n}{\mathrm{poly}\log n}\big)$ iterations, showing that the algorithm succeeds without the need for a subsequent refinement stage (as conjectured recently by \citet{celentano2021local}); (ii) we characterize the non-asymptotic behavior of AMP in sparse PCA (in the spiked Wigner model) for a broad range of signal-to-noise ratios.
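As a concrete point of reference for (i), AMP for the spiked Wigner / $\mathbb{Z}_2$-synchronization model iterates $x^{t+1} = Y\,\eta(x^t) - b_t\,\eta(x^{t-1})$ with an Onsager correction coefficient $b_t$. The sketch below runs this recursion with the generic tanh denoiser from an independent random initialization; the denoiser scaling, problem size, and signal strength are illustrative, and spectral initialization is not shown.

```python
import numpy as np

def amp_z2_sync(Y, n_iter=50, rng=None):
    """Generic AMP recursion for the spiked Wigner / Z2-synchronization model
    Y = (lam/n) x x^T + W / sqrt(n), using tanh as the denoiser."""
    n = Y.shape[0]
    rng = np.random.default_rng(rng)
    x_prev = np.zeros(n)
    x_cur = rng.standard_normal(n)                   # independent (uninformative) initialization
    for _ in range(n_iter):
        f_cur, f_prev = np.tanh(x_cur), np.tanh(x_prev)
        onsager = np.mean(1.0 - f_cur ** 2)          # Onsager correction coefficient b_t
        x_cur, x_prev = Y @ f_cur - onsager * f_prev, x_cur
    return np.sign(x_cur)

# Toy instance
rng = np.random.default_rng(0)
n, lam = 2000, 2.0
x_star = rng.choice([-1.0, 1.0], size=n)
W = rng.standard_normal((n, n)); W = (W + W.T) / np.sqrt(2)
Y = (lam / n) * np.outer(x_star, x_star) + W / np.sqrt(n)
x_hat = amp_z2_sync(Y)
print(abs(x_hat @ x_star) / n)                       # alignment with the planted signs
```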
Transition systems are often used to describe the behaviour of software systems. Viewed as graphs, at their most basic level, vertices correspond to the states of a program and each edge represents a transition between states via the (atomic) action that labels it. In this setting, systems are assumed to be consistent, so that at each state formulas evaluate to either True or False. On the other hand, when a structure of this sort - for example a map where states represent locations, some local properties are known, and labelled transitions represent information available about different routes - is built from multiple sources of information, it is common to find inconsistent or incomplete information about what holds at each state, both at the level of propositional variables and of transitions. This paper brings together Belnap's four values, Dynamic Logic, and hybrid machinery such as nominals and the satisfaction operator, so that reasoning remains possible in the face of contradictory evidence. The proof theory of this new logic is explored by means of a terminating, sound and complete tableaux system.
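The key ingredient enabling reasoning under contradiction is Belnap's four-valued lattice, in which a formula may be True, False, Both (contradictory evidence) or Neither (no evidence). A minimal sketch of the four values in evidence-pair form is given below; the dynamic and hybrid machinery of the logic (nominals, satisfaction operator, tableaux rules) is not reproduced here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Four:
    """Belnap value encoded as a pair of evidence bits: (evidence for, evidence against)."""
    pos: bool
    neg: bool

TRUE, FALSE = Four(True, False), Four(False, True)
BOTH, NEITHER = Four(True, True), Four(False, False)   # contradictory / no information

def neg(a):           # negation swaps the two kinds of evidence
    return Four(a.neg, a.pos)

def conj(a, b):       # meet in the truth order
    return Four(a.pos and b.pos, a.neg or b.neg)

def disj(a, b):       # join in the truth order
    return Four(a.pos or b.pos, a.neg and b.neg)

# A state carrying contradictory evidence about p still yields usable answers
p, q = BOTH, TRUE
print(conj(p, q) == BOTH)        # p and q is still contradictory
print(disj(p, NEITHER) == TRUE)  # evidence for p survives the disjunction
```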
Functional quantile regression (FQR) is a useful alternative to mean regression for functional data, as it provides a comprehensive understanding of how scalar predictors influence the conditional distribution of functional responses. In this article, we study the FQR model for densely sampled, high-dimensional functional data without relying on parametric or independence assumptions on the residual process, with a focus on statistical inference and scalable implementation. This is achieved by a simple but powerful distributed strategy, in which we first perform separate quantile regressions to compute $M$-estimators at each sampling location, and then carry out estimation and inference for the entire coefficient functions by properly exploiting the uncertainty quantification and dependence structure of the $M$-estimators. We derive a uniform Bahadur representation and a strong Gaussian approximation result for the $M$-estimators on the discrete sampling grid, which serve as the basis for inference. An interpolation-based estimator with minimax optimality is proposed, and large sample properties for point and simultaneous interval estimators are established. The minimax optimal rate obtained under the FQR model exhibits an interesting phase transition phenomenon that has previously been observed in functional mean regression. The proposed methods are illustrated via simulations and an application to a mass spectrometry proteomics dataset.
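The first stage of such a distributed strategy - fitting a separate quantile regression at each of the $M$ sampling locations - can be sketched as follows, using statsmodels' QuantReg on simulated functional responses. The interpolation and simultaneous inference steps of the paper are not shown, and all data-generating choices are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def pointwise_quantile_fits(Y, X, tau=0.5):
    """First stage of a distributed FQR-type strategy (illustrative only):
    run a separate quantile regression at each of the M sampling locations.

    Y : (n, M) matrix of functional responses on a common grid
    X : (n, p) matrix of scalar predictors
    Returns an (M, p+1) array of location-wise coefficient estimates.
    """
    Xc = sm.add_constant(X)
    return np.vstack([QuantReg(Y[:, m], Xc).fit(q=tau).params
                      for m in range(Y.shape[1])])

# Toy functional data: smooth coefficient functions over the grid, heavy-tailed errors
rng = np.random.default_rng(0)
n, M = 300, 40
grid = np.linspace(0, 1, M)
X = rng.standard_normal((n, 2))
Y = (np.sin(2 * np.pi * grid) + X[:, :1] * np.cos(2 * np.pi * grid)
     + X[:, 1:] * grid + rng.standard_t(3, size=(n, M)))
betas = pointwise_quantile_fits(Y, X, tau=0.5)   # rows vary smoothly over the grid
print(betas.shape)
```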
Given a $k\times n$ integer primitive matrix $\bf{A}$ (i.e., a matrix that can be extended to an $n\times n$ unimodular matrix over the integers) whose maximal absolute entry $\|\bf{A}\|$ is bounded from above by an integer $\lambda$, we study the probability that the $m\times n$ matrix extended from $\bf{A}$ by appending $m-k$ additional row vectors of dimension $n$, with entries chosen randomly and independently from the uniform distribution over $\{0, 1,\ldots, \lambda-1\}$, is still primitive. We present a complete and rigorous proof of a lower bound on this probability, which is at least a constant for fixed $m$ in the range $[k+1, n-4]$. As an application, we prove that there exists a fast Las Vegas algorithm that completes a $k\times n$ primitive matrix $\bf{A}$ to an $n\times n$ unimodular matrix in an expected number of $\tilde{O}(n^{\omega}\log \|\bf{A}\|)$ bit operations, where $\tilde{O}$ denotes big-$O$ with logarithmic factors omitted and $\omega$ is the exponent of matrix multiplication.
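Primitivity itself has a classical characterization: a $k\times n$ integer matrix can be extended to an $n\times n$ unimodular matrix exactly when the gcd of its $k\times k$ minors equals $1$. The brute-force check below illustrates this criterion; it enumerates all $\binom{n}{k}$ minors, so it is exponential in $n$ and is unrelated to the fast Las Vegas completion algorithm developed in the paper.

```python
from itertools import combinations
from math import gcd
from sympy import Matrix

def is_primitive(A):
    """Check whether a k x n integer matrix is primitive, i.e. extendable to an
    n x n unimodular matrix: equivalently, the gcd of its k x k minors is 1."""
    k, n = A.shape
    g = 0
    for cols in combinations(range(n), k):
        minor = A[:, list(cols)].det()      # exact integer determinant of the k x k submatrix
        g = gcd(g, int(minor))
        if g == 1:                          # early exit once the gcd hits 1
            return True
    return g == 1

A = Matrix([[2, 3, 5, 7],
            [1, 1, 2, 3]])
print(is_primitive(A))                      # True: gcd of the 2 x 2 minors is 1
print(is_primitive(Matrix([[2, 4, 6, 8]]))) # False: all entries share the factor 2
```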
In epidemiological research with cancer registry data, the primary interest is often inference on cancer death rather than overall survival. Since cause of death is difficult to collect, or is not necessarily reliable, in cancer registries, specialized methodologies based on the concepts of the relative survival ratio and the net survival have been introduced and are widely used. In making inference on these measures, external life tables for the general population are utilized to adjust for the impact of non-cancer death on overall survival. The validity of this adjustment relies on the assumption that mortality in the external life table approximates the non-cancer mortality of cancer patients. However, the population used to calculate a life table may include cancer patients and cancer deaths. A sensitivity analysis proposed by Talb\"{a}ck and Dickman to address this issue requires additional information that is often not readily available. We propose a method to make inference on the net survival that accounts for the potential presence of cancer patients and cancer deaths in the life table for the general population. The idea of the adjustment is to exploit the correspondence between cancer mortality in the life table and cancer mortality in the cancer registry. This yields a novel method to adjust for cancer mortality in the cancer registry without requiring any information beyond that used in standard cancer registry analyses. Our simulation study shows that the proposed method successfully removes the bias. We illustrate the proposed method with cancer registry data from England.
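For context, the sketch below computes the standard relative survival ratio from annual life-table hazards and shows, in the crudest possible way, how removing a cancer-specific component from the population hazard changes the result. This is only an illustration of the contamination problem; it is not the proposed estimator, which avoids requiring the cancer component as additional information, and all numbers are hypothetical.

```python
import numpy as np

def relative_survival(obs_surv, pop_hazard, pop_cancer_hazard=None):
    """Relative survival ratio from annual life-table hazards (illustrative only).

    obs_surv          : observed all-cause survival of the cohort at years 1..T
    pop_hazard        : annual all-cause mortality hazard from the life table
    pop_cancer_hazard : optional cancer-specific component of the population hazard;
                        subtracting it crudely approximates non-cancer mortality when
                        the life table is contaminated by cancer deaths
    """
    pop_hazard = np.asarray(pop_hazard, dtype=float)
    if pop_cancer_hazard is not None:
        pop_hazard = pop_hazard - np.asarray(pop_cancer_hazard, dtype=float)
    expected_surv = np.exp(-np.cumsum(pop_hazard))   # expected survival of a comparable general population
    return np.asarray(obs_surv) / expected_surv

obs = np.array([0.90, 0.82, 0.76, 0.72, 0.69])        # cohort all-cause survival, years 1-5 (hypothetical)
pop = np.array([0.012, 0.013, 0.014, 0.015, 0.016])   # life-table hazards (contain some cancer deaths)
pop_cancer = np.array([0.002] * 5)                    # hypothetical cancer component of the hazard
print(relative_survival(obs, pop))                    # standard relative survival ratio
print(relative_survival(obs, pop, pop_cancer))        # after crudely removing the cancer component
```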
Many machine learning problems can be framed in the context of estimating functions, and often these are time-dependent functions that are estimated in real time as observations arrive. Gaussian processes (GPs) are an attractive choice for modeling real-valued nonlinear functions due to their flexibility and uncertainty quantification. However, the typical GP regression model suffers from several drawbacks: 1) conventional GP inference scales as $O(N^{3})$ in the number of observations; 2) updating a GP model sequentially is not trivial; and 3) covariance kernels typically enforce stationarity constraints on the function, while GPs with non-stationary covariance kernels are often intractable to use in practice. To overcome these issues, we propose a sequential Monte Carlo algorithm to fit infinite mixtures of GPs that capture non-stationary behavior while allowing for online, distributed inference. Our approach empirically improves performance over state-of-the-art methods for online GP estimation in the presence of non-stationarity in time-series data. To demonstrate the utility of our proposed online Gaussian process mixture-of-experts approach in applied settings, we show that we can successfully implement an optimization algorithm using online Gaussian process bandits.
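Drawback 1) refers to the exact GP regression baseline, whose Cholesky-based posterior computation is sketched below; this is the $O(N^3)$ building block that the sequential Monte Carlo mixture-of-experts approach is designed to avoid, and the kernel and data choices here are illustrative.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(x, y, x_star, noise=0.1):
    """Exact GP regression: the O(N^3) Cholesky solve that motivates
    sequential, mixture-based alternatives."""
    K = rbf(x, x) + noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(K)                        # O(N^3) factorization
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(x, x_star)
    mean = Ks.T @ alpha                              # posterior predictive mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(x_star, x_star)) - np.sum(v ** 2, axis=0)  # predictive variance
    return mean, var

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 100))
y = np.sin(x) + 0.1 * rng.standard_normal(100)       # stationary toy data
mean, var = gp_posterior(x, y, np.linspace(0, 10, 200))
```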
Causal inference in spatial settings is met with unique challenges and opportunities. On one hand, a unit's outcome can be affected by the exposure at many locations, leading to interference. On the other hand, unmeasured spatial variables can confound the effect of interest. Our work has two overarching goals. First, using causal diagrams, we illustrate that spatial confounding and interference can manifest as each other, meaning that investigating the presence of one can lead to wrongful conclusions in the presence of the other, and that statistical dependencies in the exposure variable can render standard analyses invalid. This has crucial implications for analyzing data with spatial or other dependencies, and for understanding the effect of interventions on dependent units. Second, we propose a parametric approach that mitigates bias from local and neighborhood unmeasured spatial confounding and accounts for interference simultaneously. This approach is based on simultaneous modeling of the exposure and the outcome while accounting for the presence of spatially structured unmeasured predictors of both variables. We illustrate our approach with a simulation study and with an analysis of the local and interference effects of sulfur dioxide emissions from power plants on cardiovascular mortality.
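The following toy simulation illustrates only the confounding half of the problem: a spatially structured unmeasured variable drives both exposure and outcome, so a naive regression is biased even though the true effect is known. Interference and the paper's joint exposure-outcome model are not represented, and all parameter values are arbitrary.

```python
import numpy as np

# Minimal illustration (not the paper's model): a spatially structured unmeasured
# confounder U drives both exposure and outcome, so a naive regression of the
# outcome on the exposure is biased even though the true effect is known.
rng = np.random.default_rng(0)
n = 400
coords = rng.uniform(0, 1, size=(n, 2))                        # unit-square locations
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = np.exp(-dists / 0.2)                                   # exponential spatial covariance
U = rng.multivariate_normal(np.zeros(n), Sigma)                # unmeasured spatial confounder

exposure = 0.8 * U + rng.standard_normal(n)
true_effect = 1.0
outcome = true_effect * exposure + 1.5 * U + rng.standard_normal(n)

naive = np.polyfit(exposure, outcome, 1)[0]                    # ignores U entirely
adjusted = np.linalg.lstsq(np.column_stack([exposure, U, np.ones(n)]),
                           outcome, rcond=None)[0][0]          # oracle adjustment for U
print(f"true={true_effect:.2f}  naive={naive:.2f}  oracle-adjusted={adjusted:.2f}")
```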
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing estimators such that this error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the ability of the classical and the proposed estimators to estimate the causal quantities. The comparison is conducted across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
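To make the confounding issue concrete, the sketch below contrasts a naive difference in means with a standard inverse-propensity-weighted estimator on simulated credit decisions. This is a generic textbook adjustment used purely for illustration, not the estimators proposed in the paper, and the data-generating process is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Standard confounding-adjusted estimator (inverse-propensity weighting) versus a
# naive difference in means. X: borrower features, D: credit decision, Y: repayment.
rng = np.random.default_rng(0)
n = 5000
X = rng.standard_normal((n, 3))
propensity = 1 / (1 + np.exp(-(X @ np.array([1.0, -0.5, 0.3]))))
D = rng.binomial(1, propensity)                      # decision depends on borrower features
tau = 2.0                                            # true causal effect on repayment
Y = tau * D + X @ np.array([1.5, 1.0, -1.0]) + rng.standard_normal(n)

naive = Y[D == 1].mean() - Y[D == 0].mean()          # ignores confounding by X

e_hat = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]
ipw = np.mean(D * Y / e_hat) - np.mean((1 - D) * Y / (1 - e_hat))
print(f"true={tau:.2f}  naive={naive:.2f}  ipw={ipw:.2f}")
```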