To achieve the goal of providing the best possible care to each patient, physicians need to customize treatments for patients with the same diagnosis, especially when treating diseases that can progress and require additional treatment, such as cancer. Making decisions at multiple stages as a disease progresses can be formalized as a dynamic treatment regime (DTR). Most existing optimization approaches for estimating dynamic treatment regimes, including the popular method of Q-learning, were developed in a frequentist context. Recently, a general Bayesian machine learning framework that facilitates using Bayesian regression modeling to optimize DTRs has been proposed. In this article, we adapt this approach to censored outcomes by using Bayesian additive regression trees (BART) at each stage under the accelerated failure time modeling framework, and we compare the proposed approach with Q-learning in simulation studies and a real data example. We also develop an R wrapper function that utilizes a standard BART survival model to optimize DTRs for censored outcomes. The wrapper function can easily be extended to accommodate any type of Bayesian machine learning model.
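To make the stage-wise optimization concrete, here is a minimal, hedged sketch of the backward-induction logic that both Q-learning and the Bayesian machine learning approach share, written in Python with a generic regressor standing in for the stage-specific BART accelerated failure time models; censoring handling and posterior uncertainty are omitted, and all function and variable names are illustrative rather than taken from the R wrapper described above.

```python
# Two-stage backward induction on the log survival time (illustrative sketch only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for a BART survival model

def optimize_two_stage_dtr(X1, A1, X2, A2, log_T):
    """X1, X2: stage-1/2 covariate matrices; A1, A2: binary treatments; log_T: log survival times."""
    # Stage 2: regress the outcome on the full history and the stage-2 treatment.
    m2 = GradientBoostingRegressor().fit(np.column_stack([X1, A1, X2, A2]), log_T)
    q2 = lambda a: m2.predict(np.column_stack([X1, A1, X2, np.full(len(X2), a)]))
    a2_opt = (q2(1) > q2(0)).astype(int)        # optimal stage-2 decision rule
    pseudo = np.maximum(q2(1), q2(0))           # predicted outcome under the optimal stage-2 rule

    # Stage 1: regress the stage-2 pseudo-outcome on stage-1 information and treatment.
    m1 = GradientBoostingRegressor().fit(np.column_stack([X1, A1]), pseudo)
    q1 = lambda a: m1.predict(np.column_stack([X1, np.full(len(X1), a)]))
    a1_opt = (q1(1) > q1(0)).astype(int)        # optimal stage-1 decision rule
    return a1_opt, a2_opt
```

In the Bayesian version, each regression fit would be replaced by posterior draws from a BART survival model, so that the estimated optimal rules and their values carry posterior uncertainty.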
The Bayesian additive regression trees (BART) model is an ensemble method extensively and successfully used in regression tasks due to its consistently strong predictive performance and its ability to quantify uncertainty. BART combines "weak" tree models through a set of shrinkage priors, whereby each tree explains a small portion of the variability in the data. However, the lack of smoothness and the absence of a covariance structure over the observations in standard BART can yield poor performance in settings where such assumptions are needed. We propose Gaussian process Bayesian additive regression trees (GP-BART) as an extension of BART which places Gaussian process (GP) priors on the predictions of each terminal node across all trees. We illustrate our model on simulated and real data and compare its performance to traditional modelling approaches, finding that it outperforms them in many scenarios. An implementation of our method is available in the R package rGPBART at: //github.com/MateusMaiaDS/gpbart
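As a schematic of the model structure (our notation, not necessarily the paper's), standard BART writes the regression function as a sum of trees,

$$
y_i \;=\; \sum_{j=1}^{m} g(\mathbf{x}_i;\, T_j, \mathcal{M}_j) + \varepsilon_i, \qquad \varepsilon_i \sim \mathrm{N}(0, \sigma^2),
$$

with independent normal priors on the scalar terminal-node parameters; GP-BART instead assigns each terminal node of every tree a Gaussian process prior, e.g. $\mu_{j\ell}(\cdot) \sim \mathcal{GP}\big(0, k_{j\ell}(\cdot,\cdot)\big)$ for node $\ell$ of tree $T_j$, so that predictions within a node vary smoothly with the covariates and observations share a covariance structure.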
In social, medical, and behavioral research we often encounter datasets with a multilevel structure and multiple correlated dependent variables. These data are frequently collected from a study population that comprises several subpopulations with different (i.e., heterogeneous) effects of an intervention. Despite the frequent occurrence of such data, methods to analyze them are less common, and researchers often resort to ignoring the multilevel and/or heterogeneous structure, analyzing only a single dependent variable, or a combination of these. These analysis strategies are suboptimal: ignoring multilevel structures inflates Type I error rates, while neglecting the multivariate or heterogeneous structure masks detailed insights. To analyze such data comprehensively, the current paper presents a novel Bayesian multilevel multivariate logistic regression model. The clustered structure of multilevel data is taken into account, such that posterior inferences can be made with accurate error rates. Further, the model shares information between different subpopulations in the estimation of average and conditional average multivariate treatment effects. To facilitate interpretation, multivariate logistic regression parameters are transformed into posterior success probabilities and differences between them. A numerical evaluation compared our framework to less comprehensive alternatives and highlighted the need to model the multilevel structure: treatment comparisons based on the multilevel model had Type I error rates at the targeted level, while single-level alternatives resulted in inflated Type I errors. A re-analysis of the Third International Stroke Trial data illustrated how incorporating a multilevel structure, assessing treatment heterogeneity, and combining dependent variables contributed to an in-depth understanding of treatment effects.
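One schematic form such a model can take (our notation; the paper's exact parameterization may differ) is, for dependent variable $k$ of subject $i$ in cluster $j$,

$$
\operatorname{logit}\,\Pr\!\big(y_{ijk}=1 \mid \mathbf{x}_{ij}, \mathbf{u}_j\big) \;=\; \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}_k + u_{jk}, \qquad \mathbf{u}_j = (u_{j1},\dots,u_{jK})^{\top} \sim \mathrm{N}(\mathbf{0}, \boldsymbol{\Sigma}),
$$

where the cluster-level random effects $\mathbf{u}_j$ account for the multilevel structure, their covariance $\boldsymbol{\Sigma}$ ties the $K$ correlated dependent variables together, and treatment-by-subpopulation terms in $\mathbf{x}_{ij}$ carry the heterogeneous effects; posterior success probabilities follow by applying the inverse-logit transform to linear predictors of interest.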
We study the classic facility location setting, where we are given $n$ clients and $m$ possible facility locations in some arbitrary metric space, and want to choose a location at which to build a facility. The exact same setting also arises in spatial social choice, where voters are the clients and the goal is to choose a candidate or outcome, with the distance from a voter to an outcome representing the cost of this outcome for the voter (e.g., based on their ideological differences). Unlike most previous work, we do not focus on a single objective to optimize (e.g., the total distance from clients to the facility, or the maximum distance), but instead attempt to optimize several different objectives simultaneously. More specifically, we consider the $l$-centrum family of objectives, which includes the total distance, the maximum distance, and many others. We present tight bounds on how well any pair of such objectives (e.g., max and sum) can be simultaneously approximated relative to their optimum outcomes. In particular, we show that for any such pair of objectives, it is always possible to choose an outcome which simultaneously approximates both objectives within a factor of $1+\sqrt{2}$, and give a precise characterization of how this factor improves as the two objectives being optimized become more similar. For $q>2$ different centrum objectives, we show that it is always possible to approximate all $q$ objectives within a small constant, and that this constant approaches 3 as $q\rightarrow \infty$. Our results show that when optimizing only a few objectives simultaneously, it is always possible to form an outcome that approximates all of them within a factor significantly better than 3.
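For concreteness, the $l$-centrum objective is commonly defined (notation ours) as the sum of the $l$ largest client distances,

$$
C_l(y) \;=\; \sum_{i=1}^{l} d\big(c_{(i)}, y\big), \qquad d\big(c_{(1)}, y\big) \ge d\big(c_{(2)}, y\big) \ge \cdots \ge d\big(c_{(n)}, y\big),
$$

so that $l=1$ recovers the maximum-distance (egalitarian) objective and $l=n$ the total-distance (utilitarian) objective, with intermediate values of $l$ interpolating between the two.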
In this paper, we develop a novel spatial variable selection method for scalar-on-image regression with vector-valued images in a multi-group setting. Here, 'vector-valued image' refers to imaging datasets that contain vector-valued information at each pixel/voxel location, as in RGB color images, multimodal medical images, diffusion tensor imaging (DTI), etc. The focus of this work is to identify the spatial locations in the image that have an important effect on the scalar outcome measure. Specifically, the overall effect of each voxel is of interest. We thus develop a novel shrinkage prior by soft-thresholding the $\ell_2$ norm of a latent multivariate Gaussian process, which allows us to estimate sparse and piecewise-smooth spatially varying vector-valued regression coefficient functions. For posterior inference, an efficient MCMC algorithm is developed. We establish the posterior contraction rate for parameter estimation and consistency for variable selection of the proposed Bayesian model, assuming that the true regression coefficients are Hölder smooth. Finally, we demonstrate the advantages of the proposed method in simulation studies and further illustrate it on an ADNI dataset, modeling MMSE scores based on DTI-based vector-valued imaging markers.
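One natural way to write such a thresholded construction (a sketch in our notation; the paper's exact form may differ) is

$$
\boldsymbol{\beta}(v) \;=\; \left(1 - \frac{\lambda}{\lVert \mathbf{g}(v)\rVert_2}\right)_{\!+} \mathbf{g}(v),
$$

where $\mathbf{g}(\cdot)$ is the latent multivariate Gaussian process, $v$ indexes voxels, $\lambda>0$ is the threshold, and $(x)_+ = \max(x,0)$: voxels whose latent vector has $\ell_2$ norm below $\lambda$ receive an exactly zero coefficient vector, while the Gaussian process prior keeps the nonzero part piecewise smooth across space.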
This work studies the problem of transfer learning under the functional linear regression model framework, which aims to improve the estimation and prediction of the target model by leveraging information from related source models. We measure the relatedness between target and source models using a reproducing kernel Hilbert space (RKHS) norm, allowing the type of information being transferred to be interpreted through the structural properties of the spaces. Two transfer learning algorithms are proposed: one transfers information from source tasks when we know which sources to use, while the other aggregates multiple transfer learning results from the first algorithm to achieve robust transfer learning without prior information about the sources. Furthermore, we establish the optimal convergence rates for the prediction risk in the target model, making the statistical gain from transfer learning mathematically provable. The theoretical analysis of the prediction risk also provides insight into which factors affect the transfer learning effect, i.e., what makes source tasks useful to the target task. We demonstrate the effectiveness of the proposed transfer learning algorithms on extensive synthetic data as well as a real financial data application.
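As a schematic of the setting (our notation), each task follows a functional linear regression model,

$$
Y \;=\; \alpha + \int_{\mathcal{T}} X(t)\,\beta(t)\,dt + \varepsilon,
$$

and the relatedness of a source task to the target can be encoded as a bound on the RKHS distance between slope functions, e.g. $\lVert \beta^{(0)} - \beta^{(s)} \rVert_{\mathcal{H}} \le h$, so that the smoothness structure of $\mathcal{H}$ determines what kind of information is transferable and how large the gain can be.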
Gaussian mixtures are commonly used for modeling heavy-tailed error distributions in robust linear regression. Combining the likelihood of a multivariate robust linear regression model with a standard improper prior distribution yields an analytically intractable posterior distribution that can be sampled using a data augmentation algorithm. When the response matrix has missing entries, the application of the algorithm and the analysis of its convergence properties pose unique challenges. Conditions for geometric ergodicity are provided when the incomplete data have a "monotone" structure. In the absence of a monotone structure, an intermediate imputation step is necessary for implementing the algorithm; in this case, we provide sufficient conditions for the algorithm to be Harris ergodic. Finally, we show that, when there is a monotone structure and intermediate imputation is unnecessary, intermediate imputation slows the convergence of the underlying Markov chain, while post hoc imputation does not. An R package implementing the data augmentation algorithm is provided.
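A schematic of the scale-mixture representation underlying such data augmentation samplers (our notation) is

$$
\mathbf{y}_i \mid \mathbf{x}_i, w_i \;\sim\; \mathrm{N}_d\!\big(\boldsymbol{\beta}^{\top}\mathbf{x}_i,\; \boldsymbol{\Sigma}/w_i\big), \qquad w_i \overset{\text{iid}}{\sim} \pi(w),
$$

so that marginally the errors are heavy-tailed (for instance, multivariate $t$ when $\pi$ is a Gamma mixing density); the data augmentation algorithm then alternates between drawing the latent weights $w_i$ (and, when needed, the missing response entries) and drawing $(\boldsymbol{\beta}, \boldsymbol{\Sigma})$ from their conditional posterior.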
This paper considers the problem of kernel regression and classification with possibly unobservable response variables, where the mechanism that causes the absence of information is unknown and can depend on both the predictors and the response variables. Our proposed approach involves two steps: in the first step, we construct a family of models (possibly infinite dimensional) indexed by the unknown parameter of the missing-probability mechanism. In the second step, a search is carried out to find the empirically optimal member of an appropriate cover (or subclass) of the underlying family, in the sense of minimizing the mean squared prediction error. The main focus of the paper is the theoretical properties of these estimators. The issue of identifiability is also addressed. Our methods use a data-splitting approach that is quite easy to implement. We also derive exponential bounds on the performance of the resulting estimators in terms of their deviations from the true regression curve in general $L_p$ norms, where we allow the size of the cover or subclass to diverge as the sample size $n$ increases. These bounds immediately yield various strong convergence results for the proposed estimators. As an application of our findings, we consider the problem of statistical classification based on the proposed regression estimators and also study their rates of convergence under different settings. Although this work is mainly stated for kernel-type estimators, the results can also be extended to other popular local-averaging methods such as nearest-neighbor and histogram estimators.
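One natural form such a family can take (our notation; not necessarily the paper's exact construction) is an inverse-probability-weighted kernel estimator indexed by the missingness parameter $\theta$,

$$
\widehat{m}_{\theta}(x) \;=\; \frac{\sum_{i=1}^{n} K\!\big(\tfrac{x - X_i}{h}\big)\, \delta_i\, Y_i \,/\, \pi_{\theta}(X_i, Y_i)}{\sum_{i=1}^{n} K\!\big(\tfrac{x - X_i}{h}\big)\, \delta_i \,/\, \pi_{\theta}(X_i, Y_i)},
$$

where $\delta_i$ indicates whether $Y_i$ is observed and $\pi_{\theta}(x,y)$ is the assumed probability of observing the response given the predictors and the response itself; the data-splitting step then selects, over a cover of candidate values of $\theta$, the member with the smallest empirical mean squared prediction error.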
Constructing asymptotically valid confidence intervals through a valid central limit theorem is crucial for A/B tests, where a classical goal is to statistically assert whether a treatment plan is significantly better than a control plan. In some emerging applications on online platforms, the treatment plan is not a single plan, but instead encompasses an infinite continuum of plans indexed by a continuous treatment parameter. As such, the experimenter not only needs to provide valid statistical inference, but also wants to effectively and adaptively find the optimal value of the treatment parameter to use for the treatment plan. However, we find that classical optimization algorithms, despite their fast convergence rates under convexity assumptions, do not come with a central limit theorem that can be used to construct asymptotically valid confidence intervals. We fix this issue by providing a new optimization algorithm that maintains the same fast convergence rate while permitting the establishment of a valid central limit theorem. We discuss practical implementations of the proposed algorithm and conduct numerical experiments to illustrate the theoretical findings.
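For concreteness (standard notation), the inferential target is a central limit theorem of the form $\sqrt{n}\,(\hat{\theta}_n - \theta^{\star}) \Rightarrow \mathrm{N}(0, \sigma^2)$ for the algorithm's output $\hat{\theta}_n$, which justifies the asymptotically valid interval

$$
\hat{\theta}_n \;\pm\; z_{1-\alpha/2}\, \frac{\hat{\sigma}_n}{\sqrt{n}},
$$

with $\hat{\sigma}_n$ a consistent estimator of the asymptotic standard deviation; the point is that fast classical iterates need not admit such a limit, whereas the proposed algorithm does while keeping the convergence rate.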
Consider $\ell_{\alpha}$-regularized linear regression, also termed Bridge regression. For $\alpha\in (0,1)$, Bridge regression enjoys several statistical properties of interest, such as sparsity and near-unbiasedness of the estimates (Fan and Li, 2001). However, the main difficulty lies in the non-convex nature of the penalty for these values of $\alpha$, which makes optimization challenging; usually it is only possible to find a local optimum. To address this issue, Polson et al. (2013) took a sampling-based, fully Bayesian approach to this problem, using the correspondence between the Bridge penalty and a power exponential prior on the regression coefficients. However, their sampling procedure relies on Markov chain Monte Carlo (MCMC) techniques, which are inherently sequential and not scalable to large problem dimensions. Cross-validation approaches are similarly computation-intensive. Our contribution is a novel non-iterative method to fit a Bridge regression model. The key ingredient is an explicit formula for Stein's unbiased risk estimate for the out-of-sample prediction risk of Bridge regression, which can then be optimized to select the desired tuning parameters, allowing us to completely bypass MCMC as well as computation-intensive cross-validation approaches. Our procedure yields results in about 1/8th to 1/10th of the computational time of iterative schemes, without any appreciable loss in statistical performance. An R implementation is publicly available online at: //github.com/loriaJ/Sure-tuned_BridgeRegression.
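For orientation, the classical Gaussian-model form of Stein's unbiased risk estimate (standard notation, not the paper's Bridge-specific or out-of-sample expression) is

$$
\mathrm{SURE}(\hat{\boldsymbol{\mu}}) \;=\; \lVert \mathbf{y} - \hat{\boldsymbol{\mu}} \rVert_2^2 \;-\; n\sigma^2 \;+\; 2\sigma^2 \sum_{i=1}^{n} \frac{\partial \hat{\mu}_i}{\partial y_i}, \qquad \mathbf{y} \sim \mathrm{N}(\boldsymbol{\mu}, \sigma^2 \mathbf{I}_n),
$$

which is unbiased for the risk $\mathbb{E}\,\lVert \hat{\boldsymbol{\mu}} - \boldsymbol{\mu} \rVert_2^2$; the contribution here is an explicit evaluation of such a criterion for the Bridge estimator, so that the tuning parameters can be chosen by direct minimization rather than by MCMC or cross-validation.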
The adaptive processing of structured data is a long-standing research topic in machine learning that investigates how to automatically learn a mapping from a structured input to outputs of various natures. Recently, there has been increasing interest in the adaptive processing of graphs, which has led to the development of different neural network-based methodologies. In this thesis, we take a different route and develop a Bayesian deep learning framework for graph learning. The dissertation begins with a review of the principles on which most of the methods in the field are built, followed by a study of reproducibility issues in graph classification. We then proceed to bridge the basic ideas of deep learning for graphs with the Bayesian world by building our deep architectures in an incremental fashion. This framework allows us to consider graphs with discrete and continuous edge features, producing unsupervised embeddings rich enough to reach the state of the art on several classification tasks. Our approach is also amenable to a Bayesian nonparametric extension that automates the choice of almost all of the model's hyper-parameters. Two real-world applications demonstrate the efficacy of deep learning for graphs. The first concerns the prediction of information-theoretic quantities for molecular simulations with supervised neural models. After that, we exploit our Bayesian models to solve a malware-classification task while remaining robust to intra-procedural code obfuscation techniques. We conclude the dissertation with an attempt to blend the best of the neural and Bayesian worlds together. The resulting hybrid model is able to predict multimodal distributions conditioned on input graphs, with the consequent ability to model stochasticity and uncertainty better than most existing works. Overall, we aim to provide a Bayesian perspective on the articulated research field of deep learning for graphs.