Causal effect estimation for dynamic treatment regimes (DTRs) contributes to sequential decision making. However, estimation under DTRs is challenging in the presence of censoring and time-dependent confounding: the amount of observational data shrinks over time as the sample size decreases, while the feature dimension grows, and long-term follow-up compounds both problems. A further challenge is the highly complex relationships among confounders, treatments, and outcomes, which cause the traditional, commonly used linear methods to fail. We combine outcome regression models with treatment models for high-dimensional features using the small subsample of uncensored subjects, and we fit deep Bayesian models as the outcome regressions to capture the complex relationships among confounders, treatments, and outcomes. In addition, the deep Bayesian models quantify uncertainty and output the prediction variance, which is essential for safety-aware applications such as self-driving cars and medical treatment design. Experimental results on medical simulations of HIV treatment show that the proposed method obtains stable and accurate dynamic causal effect estimates from observational data, especially under long-term follow-up. Our technique provides practical guidance for sequential decision making and policy making.
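As a minimal sketch of the uncertainty-aware outcome regression idea, the snippet below uses Monte Carlo dropout as a stand-in for a deep Bayesian outcome model that returns both a prediction and its variance; the architecture, layer sizes, and number of Monte Carlo samples are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch: an MC-dropout regressor as a stand-in for a deep Bayesian
# outcome model that outputs both a prediction and its variance.
# Architecture, layer sizes, and the number of MC samples are illustrative.
import torch
import torch.nn as nn

class BayesianOutcomeNet(nn.Module):
    def __init__(self, n_features, hidden=64, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=100):
    """Keep dropout active at test time and average stochastic forward passes."""
    model.train()  # keeps the dropout layers stochastic
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.var(dim=0)  # predictive mean and variance

# Usage on dummy data, with confounder history and treatment as features:
x = torch.randn(32, 20)
model = BayesianOutcomeNet(n_features=20)
mean, var = predict_with_uncertainty(model, x)
```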
Neural networks and Gaussian processes are complementary in their strengths and weaknesses. A better understanding of their relationship promises to let each method benefit from the strengths of the other. In this work, we establish an equivalence between the forward passes of neural networks and (deep) sparse Gaussian process models. The theory we develop is based on interpreting activation functions as interdomain inducing features through a rigorous analysis of the interplay between activation functions and kernels. This results in models that can either be seen as neural networks with improved uncertainty prediction or deep Gaussian processes with increased prediction accuracy. These claims are supported by experimental results on regression and classification datasets.
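The sketch below is not the paper's interdomain construction; it only illustrates, under simplifying assumptions, the general correspondence being exploited: the hidden layer of a one-hidden-layer network is treated as a finite set of basis functions, and a Gaussian prior on the output weights yields a degenerate GP whose predictive mean is an ordinary network forward pass and whose predictive variance adds uncertainty.

```python
# Simplified illustration (not the paper's exact construction): ReLU hidden
# units phi(x) = relu(W x + b) act as basis functions, and Bayesian linear
# regression on these features gives a GP-style predictive mean and variance.
# All sizes and the noise level are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, noise = 50, 1, 200, 0.1

X = rng.uniform(-3, 3, size=(n, d))
y = np.sin(X[:, 0]) + noise * rng.standard_normal(n)

W = rng.standard_normal((d, m))
b = rng.uniform(0, 2 * np.pi, m)
feats = lambda Z: np.maximum(Z @ W + b, 0.0) / np.sqrt(m)  # ReLU features

Phi = feats(X)
A = Phi.T @ Phi / noise**2 + np.eye(m)            # posterior precision of the output weights
mean_w = np.linalg.solve(A, Phi.T @ y) / noise**2  # posterior mean of the output weights

Xs = np.linspace(-3, 3, 100)[:, None]
Phis = feats(Xs)
f_mean = Phis @ mean_w                             # equals a network forward pass
f_var = np.einsum("ij,ij->i", Phis, np.linalg.solve(A, Phis.T).T)  # added uncertainty
```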
We extend the theoretical results for FOU(p) processes to the case in which the Hurst parameter is less than 1/2, and we show, theoretically and by simulation, that under some conditions on T and the sample size n it is possible to obtain consistent estimators of the parameters when the process is observed on a discretized, equispaced interval [0, T]. We also show that FOU(p) processes can model a wide range of time series, from short-range to long-range dependence, with results similar to ARMA or ARFIMA models and, in several cases, better. Lastly, we give a way to obtain explicit formulas for the auto-covariance function of any FOU(p) process and present an application for FOU(2) and FOU(3).
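For concreteness, the following sketch simulates a fractional Ornstein-Uhlenbeck path (the p = 1 case only) on an equispaced grid of [0, T]; the Cholesky fBm generator, the Euler discretization, and the parameter values (including H < 1/2) are illustrative assumptions rather than the estimation procedure studied above.

```python
# Sketch: simulate a fractional Ornstein-Uhlenbeck path (p = 1) on an
# equispaced grid of [0, T]. The fBm is generated via a Cholesky factor of its
# covariance and the Langevin equation is discretized with an Euler scheme;
# H, lambda, sigma, T and n are illustrative choices.
import numpy as np

def fbm_increments(n, T, H, rng):
    t = np.linspace(T / n, T, n)
    # Covariance of fractional Brownian motion: 0.5 (s^2H + t^2H - |t - s|^2H)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    B = np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)
    return np.diff(np.concatenate(([0.0], B)))

def fou_path(n=2000, T=20.0, H=0.3, lam=1.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = fbm_increments(n, T, H, rng)
    x = np.zeros(n + 1)
    for k in range(n):  # Euler scheme for dX = -lam X dt + sigma dB_H
        x[k + 1] = x[k] - lam * x[k] * dt + sigma * dB[k]
    return np.linspace(0, T, n + 1), x

t, x = fou_path()
```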
Background/aims: While randomized controlled trials are the gold standard for measuring causal effects, robust conclusions about causal relationships can be obtained from observational studies if proper statistical techniques are used to account for the imbalance of pretreatment confounders across groups. Propensity scores (PS) and balancing weights are useful techniques that aim to reduce observed imbalances between treatment groups by weighting the groups to be as similar as possible with respect to observed confounders. Methods: We have created CoBWeb, a free and easy-to-use web application for the estimation of causal treatment effects from observational data, using PS and balancing weights to control for confounding bias. CoBWeb uses multiple algorithms to estimate the PS and balancing weights, allowing for more flexible relations between the treatment indicator and the observed confounders (different algorithms make different, or no, assumptions about the structural relationship between the treatment indicator and the confounders). The optimal algorithm can be chosen as the one that achieves the best trade-off between balance and effective sample size. Results: CoBWeb follows all the key steps required for robust estimation of the causal treatment effect from observational study data and includes a sensitivity analysis of the potential impact of unobserved confounders. We illustrate the practical use of the app using a dataset derived from a study of an intervention for adolescents with substance use disorder, which is available to users within the app environment. Conclusion: CoBWeb is intended to enable non-specialists to understand and apply all the key steps required to perform robust estimation of causal treatment effects using observational data.
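The snippet below is a generic sketch of the weighting workflow, not CoBWeb's implementation: it estimates propensity scores with one candidate algorithm (logistic regression), forms inverse probability weights, and checks covariate balance via standardized mean differences. The simulated data and variable names are illustrative.

```python
# Generic sketch of PS weighting (not CoBWeb's implementation): estimate the
# propensity score, form inverse probability weights, check balance with
# standardized mean differences, and compute a weighted treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))                       # observed confounders
t = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3]))))
y = 2.0 * t + X @ [1.0, 1.0, -1.0] + rng.standard_normal(n)

ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
w = np.where(t == 1, 1 / ps, 1 / (1 - ps))            # ATE-type IPW weights

def smd(x, t, w):
    """Weighted standardized mean difference for one covariate."""
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    s = np.sqrt((x[t == 1].var() + x[t == 0].var()) / 2)
    return (m1 - m0) / s

balance = [smd(X[:, j], t, w) for j in range(X.shape[1])]  # ideally all near 0
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
```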
We introduce a procedure for conditional density estimation under logarithmic loss, which we call SMP (Sample Minmax Predictor). This estimator minimizes a new general excess risk bound for statistical learning. On standard examples, this bound scales as $d/n$ with $d$ the model dimension and $n$ the sample size, and critically remains valid under model misspecification. Being an improper (out-of-model) procedure, SMP improves over within-model estimators such as the maximum likelihood estimator, whose excess risk degrades under misspecification. Compared to approaches reducing to the sequential problem, our bounds remove suboptimal $\log n$ factors and can handle unbounded classes. For the Gaussian linear model, the predictions and risk bound of SMP are governed by leverage scores of covariates, nearly matching the optimal risk in the well-specified case without conditions on the noise variance or approximation error of the linear model. For logistic regression, SMP provides a non-Bayesian approach to calibration of probabilistic predictions relying on virtual samples, and can be computed by solving two logistic regressions. It achieves a non-asymptotic excess risk of $O((d + B^2R^2)/n)$, where $R$ bounds the norm of features and $B$ that of the comparison parameter; by contrast, no within-model estimator can achieve better rate than $\min({B R}/{\sqrt{n}}, {d e^{BR}}/{n} )$ in general. This provides a more practical alternative to Bayesian approaches, which require approximate posterior sampling, thereby partly addressing a question raised by Foster et al. (2018).
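Following the description above of SMP for logistic regression, the rough sketch below fits one logistic regression per virtual label at the query point and renormalizes the two within-model predictions; the regularization settings and other details are illustrative and may differ from the paper's exact procedure.

```python
# Rough sketch of the virtual-sample idea for binary logistic regression: for a
# query point x, refit once with the virtual example (x, y=0) and once with
# (x, y=1), then combine and renormalize the two predictions. Regularization
# and solver settings are illustrative, not the paper's exact procedure.
import numpy as np
from sklearn.linear_model import LogisticRegression

def smp_predict_proba(X, y, x_new, C=1e6):
    probs = []
    for label in (0, 1):
        Xa = np.vstack([X, x_new])              # augment with the virtual sample
        ya = np.concatenate([y, [label]])
        clf = LogisticRegression(C=C, max_iter=1000).fit(Xa, ya)
        probs.append(clf.predict_proba(x_new.reshape(1, -1))[0, label])
    p0, p1 = probs
    return p1 / (p0 + p1)                        # normalized probability of y = 1

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X @ [1.5, -1.0] + 0.5 * rng.standard_normal(200) > 0).astype(int)
p = smp_predict_proba(X, y, x_new=np.array([0.3, -0.2]))
```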
Traditional manual age estimation is labor-intensive and relies on many kinds of X-ray images. Recent studies have shown that lateral cephalometric (LC) images can be used to estimate age. However, these methods depend on manually measured image features and estimate age from experience or scoring, so they are time-consuming, labor-intensive, and affected by subjective judgment. In this work, we propose a saliency map-enhanced age estimation method that automatically estimates age from LC images. It also shows the importance of each image region for age estimation, which increases the method's interpretability. Our method was tested on 3014 LC images from subjects aged 4 to 40 years. The experimental MAE is 1.250, lower than that of the state-of-the-art benchmark, because our method performs significantly better in age groups with fewer data. In addition, our model is trained on each region of the LC image that contributes strongly to age estimation, so the effect of these different regions on the age estimation task was verified. We conclude that the proposed saliency map-enhanced chronological age estimation method for lateral cephalometric radiographs works well, especially when the amount of data is small, and, unlike traditional deep learning, it is also interpretable.
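As an illustration of how a saliency map can expose which image regions drive an automatic age estimate, the sketch below computes a gradient-based saliency map for a generic image-regression network; the toy architecture is a stand-in and is not the paper's model.

```python
# Illustrative sketch of a gradient-based saliency map for an image-regression
# network (a generic stand-in, not the paper's model): the absolute gradient of
# the predicted age with respect to the input highlights the regions of the LC
# image that most influence the estimate.
import torch
import torch.nn as nn

model = nn.Sequential(               # toy CNN regressor; architecture illustrative
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
    nn.Flatten(), nn.Linear(8 * 8 * 8, 1),
)

def saliency_map(model, image):
    """image: tensor of shape (1, 1, H, W); returns an (H, W) saliency map."""
    image = image.clone().requires_grad_(True)
    age = model(image).sum()
    age.backward()
    return image.grad.abs().squeeze()

img = torch.rand(1, 1, 64, 64)       # placeholder for a lateral cephalometric image
sal = saliency_map(model, img)
```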
This paper considers maximum likelihood (ML) estimation in a large class of models with hidden Markov regimes. We investigate consistency of the ML estimator and local asymptotic normality for the models under general conditions which allow for autoregressive dynamics in the observable process, Markov regime sequences with covariate-dependent transition matrices, and possible model misspecification. A Monte Carlo study examines the finite-sample properties of the ML estimator in correctly specified and misspecified models. An empirical application is also discussed.
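The compact sketch below shows the (scaled) forward-algorithm log-likelihood that ML estimation of a hidden-Markov-regime model maximizes; Gaussian emissions and a covariate-free transition matrix are simplifications for illustration, whereas the class of models considered above is considerably more general.

```python
# Compact sketch of the scaled forward-algorithm log-likelihood for a
# hidden-Markov-regime model. Gaussian emissions and a fixed transition matrix
# are illustrative simplifications.
import numpy as np
from scipy.stats import norm

def hmm_loglik(y, pi0, P, means, sds):
    """y: observations; pi0: initial distribution; P: transition matrix (rows sum to 1)."""
    alpha = pi0 * norm.pdf(y[0], means, sds)
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(y)):
        alpha = (alpha @ P) * norm.pdf(y[t], means, sds)
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy two-regime example; ML estimation would choose the parameters maximizing this value.
y = np.concatenate([np.random.default_rng(0).normal(0, 1, 100),
                    np.random.default_rng(1).normal(3, 1, 100)])
ll = hmm_loglik(y, pi0=np.array([0.5, 0.5]),
                P=np.array([[0.95, 0.05], [0.05, 0.95]]),
                means=np.array([0.0, 3.0]), sds=np.array([1.0, 1.0]))
```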
Estimating individualized treatment effects (ITEs) from observational data is crucial for decision-making. In order to obtain unbiased ITE estimates, a common assumption is that all confounders are observed. However, in practice, it is unlikely that we observe these confounders directly. Instead, we often observe noisy measurements of true confounders, which can serve as valid proxies. In this paper, we address the problem of estimating ITEs in the longitudinal setting where we observe noisy proxies instead of true confounders. To this end, we develop the Deconfounding Temporal Autoencoder (DTA), a novel method that leverages observed noisy proxies to learn a hidden embedding that reflects the true hidden confounders. In particular, the DTA combines a long short-term memory autoencoder with a causal regularization penalty that renders the potential outcomes and treatment assignment conditionally independent given the learned hidden embedding. Once the hidden embedding is learned via DTA, state-of-the-art outcome models can be used to control for it and obtain unbiased estimates of ITE. Using synthetic and real-world medical data, we demonstrate the effectiveness of our DTA by improving over state-of-the-art benchmarks by a substantial margin.
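The sketch below only illustrates the overall architecture of an LSTM autoencoder over noisy longitudinal proxies with additional treatment and outcome heads; the causal regularization penalty enforcing conditional independence is represented by a generic placeholder loss and is not the paper's exact term, and all dimensions are illustrative.

```python
# Structural sketch of an LSTM autoencoder over noisy longitudinal proxies with
# treatment and outcome heads. The causal regularization term is summarized by
# a placeholder loss; dimensions are illustrative.
import torch
import torch.nn as nn

class TemporalAutoencoder(nn.Module):
    def __init__(self, n_proxies, n_hidden=16):
        super().__init__()
        self.encoder = nn.LSTM(n_proxies, n_hidden, batch_first=True)
        self.decoder = nn.LSTM(n_hidden, n_proxies, batch_first=True)
        self.treat_head = nn.Linear(n_hidden, 1)        # predicts treatment assignment
        self.outcome_head = nn.Linear(n_hidden + 1, 1)  # predicts outcome given embedding and treatment

    def forward(self, proxies, treatment):
        z, _ = self.encoder(proxies)                    # learned hidden embedding
        recon, _ = self.decoder(z)
        t_logit = self.treat_head(z)
        y_hat = self.outcome_head(torch.cat([z, treatment], dim=-1))
        return recon, z, t_logit, y_hat

x = torch.randn(8, 10, 5)                   # (batch, time, noisy proxies)
a = torch.randint(0, 2, (8, 10, 1)).float() # treatments
y = torch.randn(8, 10, 1)                   # outcomes
model = TemporalAutoencoder(n_proxies=5)
recon, z, t_logit, y_hat = model(x, a)
loss = (nn.functional.mse_loss(recon, x)
        + nn.functional.mse_loss(y_hat, y)
        + nn.functional.binary_cross_entropy_with_logits(t_logit, a))  # placeholder for the causal penalty
```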
Univariate and multivariate general linear regression models subject to linear inequality constraints arise in many scientific applications. Linear inequality restrictions on model parameters are often available from phenomenological knowledge and are motivated by machine learning applications in high-consequence engineering systems (Agrell, 2019; Veiga and Marrel, 2012). Some studies of multiple linear models consider known linear combinations of the regression coefficients restricted between upper and lower bounds. In the present paper, we consider both univariate and multivariate general linear models subject to this kind of linear restriction. Existing Bayesian research on the univariate case assumes that the coefficient matrix of the linear restrictions is a square matrix of full rank, a condition that is not always met. Another difficulty arises at the estimation step when implementing the Gibbs algorithm, which in most cases converges slowly. This paper presents a Bayesian method to estimate the regression parameters without imposing any condition on the matrix of constraints defining the linear inequality restrictions. For the multivariate case, our Bayesian method estimates the regression parameters when the number of constraints is smaller than the number of regression coefficients in each multiple linear model. We examine the efficiency of our Bayesian method through simulation studies for both univariate and multivariate regression and show that our algorithm converges faster than previous methods. Finally, we use our approach to analyze two real datasets.
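To make the target of inference concrete, the sketch below samples the posterior of beta in y = X beta + eps restricted to a region R beta >= r using a naive rejection step from the unconstrained conjugate posterior; it only illustrates the object being estimated, is not the paper's algorithm, and becomes inefficient in exactly the settings the paper is designed to handle.

```python
# Naive illustration of constrained Bayesian regression: draw from the
# unconstrained conjugate posterior and keep only draws satisfying R beta >= r.
# This is not the paper's method; it only shows the constrained posterior.
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma2 = 200, 3, 1.0
X = rng.standard_normal((n, p))
beta_true = np.array([0.5, 0.2, 0.8])
y = X @ beta_true + rng.standard_normal(n)

R, r = np.eye(p), np.zeros(p)                       # example constraints: beta_j >= 0
V = np.linalg.inv(X.T @ X / sigma2 + np.eye(p))     # posterior covariance (N(0, I) prior)
m = V @ (X.T @ y) / sigma2                          # posterior mean

draws = []
while len(draws) < 1000:
    b = rng.multivariate_normal(m, V)
    if np.all(R @ b >= r):                          # keep only draws in the feasible set
        draws.append(b)
beta_hat = np.mean(draws, axis=0)
```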
Dynamic treatment regimes (DTRs) consist of a sequence of decision rules, one per stage of intervention, that recommend effective treatments for individual patients according to their information history. DTRs can be estimated from models that include interactions between treatment and a small number of covariates, often chosen a priori. However, with increasingly large and complex data being collected, it is difficult to know which prognostic factors might be relevant in the treatment rule. A more data-driven approach to selecting these covariates might therefore improve the estimated decision rules and simplify models, making them easier to interpret. We propose a variable selection method for DTR estimation using penalized dynamic weighted least squares. Our method has the strong heredity property: an interaction term can be included in the model only if the corresponding main terms have also been selected. Through simulations, we show that our method has both the double robustness property and the oracle property, and that it compares favorably with other variable selection approaches.
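The following single-stage sketch shows penalized weighted least squares with balancing weights and an L1 penalty over main effects and treatment interactions; the strong-heredity penalty and the backward multi-stage recursion of the full method are not reproduced here, and the simulated data and tuning values are illustrative.

```python
# Single-stage sketch of penalized (dynamic) weighted least squares: balancing
# weights from a propensity model, then an L1-penalized weighted regression on
# main effects and treatment interactions. The strong-heredity constraint and
# the multi-stage recursion are not reproduced here.
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.standard_normal((n, p))                      # candidate tailoring covariates
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # treatment assignment
y = X[:, 0] + a * (1.0 + 2.0 * X[:, 1]) + rng.standard_normal(n)

ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
w = np.abs(a - ps)                                   # dWOLS-type balancing weights

design = np.hstack([X, a[:, None], a[:, None] * X])  # main effects + treatment + interactions
fit = Lasso(alpha=0.02).fit(design, y, sample_weight=w)
blip_coefs = fit.coef_[p + 1:]                       # selected treatment-interaction terms
```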
Data augmentation has been widely used for training deep learning systems for medical image segmentation and plays an important role in obtaining robust and transformation-invariant predictions. However, it has seldom been used at test time for segmentation, nor has it been formulated in a consistent mathematical framework. In this paper, we first propose a theoretical formulation of test-time augmentation for deep learning in image recognition, where the prediction is obtained by estimating its expectation via Monte Carlo simulation with prior distributions over the parameters of an image acquisition model that involves image transformations and noise. We then propose a novel uncertainty estimation method based on this formulation of test-time augmentation. Experiments on segmentation of fetal brains and brain tumors from 2D and 3D magnetic resonance images (MRI) showed that 1) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions, and 2) it provides better uncertainty estimation than model-based uncertainty alone and helps reduce overconfident incorrect predictions.
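The sketch below is a generic Monte Carlo test-time augmentation loop: it samples transformations (here random flips and Gaussian noise, as a stand-in for the assumed acquisition model), predicts on each augmented copy, maps the predictions back, and aggregates the mean and variance as an uncertainty estimate; the transformations, their parameters, and the toy network are illustrative assumptions.

```python
# Generic sketch of Monte Carlo test-time augmentation for a segmentation
# network: sample flips and noise, predict, invert the transformation on the
# prediction, and aggregate mean and per-pixel variance.
import torch

def tta_predict(model, image, n_samples=20, noise_std=0.05):
    """image: (1, C, H, W); returns per-pixel mean prediction and variance."""
    preds = []
    for _ in range(n_samples):
        flip = torch.rand(1).item() < 0.5
        x = torch.flip(image, dims=[-1]) if flip else image  # sampled transformation
        x = x + noise_std * torch.randn_like(x)              # noise term of the acquisition model
        with torch.no_grad():
            p = torch.sigmoid(model(x))
        if flip:                                             # map the prediction back
            p = torch.flip(p, dims=[-1])
        preds.append(p)
    preds = torch.stack(preds)
    return preds.mean(dim=0), preds.var(dim=0)

model = torch.nn.Conv2d(1, 1, 3, padding=1)  # stand-in for a real segmentation network
image = torch.rand(1, 1, 64, 64)
mean_seg, uncertainty = tta_predict(model, image)
```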