Inference is crucial in modern astronomical research, where hidden astrophysical features and patterns are often estimated from indirect and noisy measurements. Inferring the posterior of hidden features, conditioned on the observed measurements, is essential for understanding the uncertainty of results and downstream scientific interpretations. Traditional approaches for posterior estimation include sampling-based methods and variational inference. However, sampling-based methods are typically slow for high-dimensional inverse problems, while variational inference often lacks estimation accuracy. In this paper, we propose alpha-DPI, a deep learning framework that first learns an approximate posterior using alpha-divergence variational inference paired with a generative neural network, and then produces more accurate posterior samples through importance re-weighting of the network samples. It inherits strengths from both sampling and variational inference methods: it is fast, accurate, and scalable to high-dimensional problems. We apply our approach to two high-impact astronomical inference problems using real data: exoplanet astrometry and black hole feature extraction.
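To make the re-weighting step concrete, here is a minimal sketch on a toy one-dimensional Gaussian inverse problem: samples are drawn from a deliberately miscalibrated Gaussian stand-in for the learned variational posterior and then corrected with self-normalised importance weights. The toy forward model, the Gaussian surrogate, and all names below are illustrative assumptions, not the alpha-DPI code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: y = theta + noise, with a single observed measurement.
y_obs, noise_std = 1.3, 0.5
prior_mean, prior_std = 0.0, 2.0

def log_prior(theta):
    return -0.5 * ((theta - prior_mean) / prior_std) ** 2

def log_likelihood(theta):
    return -0.5 * ((y_obs - theta) / noise_std) ** 2

# Pretend a variational network produced this (slightly off) Gaussian posterior.
q_mean, q_std = 1.0, 0.8

def log_q(theta):
    return -0.5 * ((theta - q_mean) / q_std) ** 2 - np.log(q_std)

# 1) Draw samples from the approximate posterior.
samples = rng.normal(q_mean, q_std, size=20_000)

# 2) Self-normalised importance weights: unnormalised posterior over proposal.
log_w = log_prior(samples) + log_likelihood(samples) - log_q(samples)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# 3) Re-weighted estimates correct the bias of the variational approximation.
post_mean = np.sum(w * samples)
post_std = np.sqrt(np.sum(w * (samples - post_mean) ** 2))
print(f"re-weighted posterior mean {post_mean:.3f}, std {post_std:.3f}")
# Analytic answer for this conjugate toy problem: mean 1.224, std 0.485.
```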
Recently, subsampling or refining images generated by unconditional GANs has been actively studied as a way to improve overall image quality. Unfortunately, these methods are often observed to be less effective or inefficient when handling conditional GANs (cGANs) -- conditioned on a class (aka class-conditional GANs) or on a continuous variable (aka continuous cGANs or CcGANs). In this work, we introduce an effective and efficient subsampling scheme, named conditional density ratio-guided rejection sampling (cDR-RS), to sample high-quality images from cGANs. Specifically, we first develop a novel conditional density ratio estimation method, termed cDRE-F-cSP, by proposing the conditional Softplus (cSP) loss and an improved feature extraction mechanism. We then derive the error bound of a density ratio model trained with the cSP loss. Finally, we accept or reject a fake image based on its estimated conditional density ratio. A filtering scheme is also developed to increase the label consistency of fake images without losing diversity when sampling from CcGANs. We extensively test the effectiveness and efficiency of cDR-RS in sampling from both class-conditional GANs and CcGANs on five benchmark datasets. When sampling from class-conditional GANs, cDR-RS outperforms modern state-of-the-art methods by a large margin (except DRE-F-SP+RS) in terms of effectiveness. Although the effectiveness of cDR-RS is often comparable to that of DRE-F-SP+RS, cDR-RS is substantially more efficient. When sampling from CcGANs, the superiority of cDR-RS is even more noticeable in terms of both effectiveness and efficiency. Notably, with reasonable computational resources, cDR-RS can substantially reduce the Label Score without decreasing the diversity of CcGAN-generated images, whereas other methods often trade away substantial diversity for only a slight improvement in Label Score.
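The rejection step can be illustrated on a toy class-conditional setting where the real and fake conditionals are one-dimensional Gaussians, so the conditional density ratio is known in closed form and stands in for the learned cDRE-F-cSP model. The distributions, the ratio cap, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Per-class real and (miscalibrated) fake conditional distributions.
real_params = {0: (0.0, 1.0), 1: (3.0, 1.0)}
fake_params = {0: (0.4, 1.3), 1: (2.5, 1.3)}

def density_ratio(x, y):
    """Conditional density ratio p_real(x|y) / p_fake(x|y)."""
    rm, rs = real_params[y]
    fm, fs = fake_params[y]
    return norm.pdf(x, rm, rs) / norm.pdf(x, fm, fs)

def sample_with_rejection(y, n, ratio_cap=10.0):
    """Accept a fake sample with probability min(r(x|y), cap) / cap."""
    accepted = []
    while len(accepted) < n:
        fm, fs = fake_params[y]
        x = rng.normal(fm, fs)
        if rng.uniform() < min(density_ratio(x, y), ratio_cap) / ratio_cap:
            accepted.append(x)
    return np.array(accepted)

subsampled = sample_with_rejection(y=0, n=2000)
print("raw fake mean:", fake_params[0][0],
      "subsampled mean:", subsampled.mean().round(3))  # pulled toward the real mean 0.0
```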
Bayesian policy reuse (BPR) is a general policy transfer framework for selecting a source policy from an offline library by inferring the task belief from observation signals and a trained observation model. In this paper, we propose an improved BPR method to achieve more efficient policy transfer in deep reinforcement learning (DRL). First, most BPR algorithms use the episodic return as the observation signal, which contains limited information and cannot be obtained until the end of an episode. Instead, we employ the state transition sample, which is informative and instantaneous, as the observation signal for faster and more accurate task inference. Second, BPR algorithms usually require numerous samples to estimate the probability distribution of the tabular observation model, which may be expensive or even infeasible to learn and maintain, especially when using state transition samples as the signal. Hence, we propose a scalable observation model based on fitting the state transition functions of the source tasks from only a small number of samples, which can generalize to any signal observed in the target task. Moreover, we extend offline-mode BPR to the continual learning setting by expanding the scalable observation model in a plug-and-play fashion, which avoids negative transfer when faced with new, unknown tasks. Experimental results show that our method consistently facilitates faster and more efficient policy transfer.
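A minimal sketch of the transition-based belief update follows, assuming linear-Gaussian transition models as placeholders for the fitted state transition functions: a single (s, a, s') sample from the target task updates the belief over source tasks via Bayes' rule. All models and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fitted transition models s' = A_k s + B_k a + noise, one per source task.
source_models = [
    {"A": np.array([[1.0, 0.1], [0.0, 1.0]]), "B": np.array([[0.0], [0.1]]), "sigma": 0.05},
    {"A": np.array([[1.0, 0.2], [0.0, 0.9]]), "B": np.array([[0.0], [0.2]]), "sigma": 0.05},
]

def transition_log_likelihood(model, s, a, s_next):
    pred = model["A"] @ s + model["B"] @ a
    resid = s_next - pred
    return -0.5 * np.sum(resid ** 2) / model["sigma"] ** 2

def update_belief(belief, s, a, s_next):
    log_post = np.log(belief) + np.array(
        [transition_log_likelihood(m, s, a, s_next) for m in source_models]
    )
    log_post -= log_post.max()           # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# One observed transition from the (unknown) target task, here generated by task 0.
s, a = np.array([1.0, 0.5]), np.array([1.0])
s_next = source_models[0]["A"] @ s + source_models[0]["B"] @ a + rng.normal(0, 0.05, 2)

belief = np.array([0.5, 0.5])
belief = update_belief(belief, s, a, s_next)
print("posterior task belief:", belief.round(3))
```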
Remote-sensing (RS) Change Detection (CD) aims to detect "changes of interest" from co-registered bi-temporal images. The performance of existing deep supervised CD methods is attributed to the large amounts of annotated data used to train the networks. However, annotating large amounts of remote sensing images is labor-intensive and expensive, particularly with bi-temporal images, as it requires pixel-wise comparisons by a human expert. On the other hand, we often have access to unlimited unlabeled multi-temporal RS imagery thanks to ever-increasing earth observation programs. In this paper, we propose a simple yet effective way to leverage the information in unlabeled bi-temporal images to improve the performance of CD approaches. More specifically, we propose a semi-supervised CD model in which, alongside the supervised Cross-Entropy (CE) loss, we formulate an unsupervised CD loss: for a given unlabeled bi-temporal image pair, the output change probability map is constrained to be consistent under small random perturbations applied to the deep feature difference map obtained by subtracting the latent feature representations of the two images. Experiments conducted on two publicly available CD datasets show that the proposed semi-supervised CD method approaches the performance of supervised CD even with access to as little as 10% of the annotated training data. Code available at //github.com/wgcban/SemiCD
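The consistency term can be sketched in a few lines of PyTorch. The tiny encoder/decoder, the Gaussian perturbation of the feature difference map, and the loss weighting below are placeholder choices for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
decoder = nn.Conv2d(16, 2, 1)   # 2 classes: change / no-change

def change_logits(feat_diff):
    return decoder(feat_diff)

def semi_supervised_loss(img_a_l, img_b_l, label, img_a_u, img_b_u, lam=1.0):
    # Supervised cross-entropy on labeled pairs.
    diff_l = encoder(img_a_l) - encoder(img_b_l)
    sup = F.cross_entropy(change_logits(diff_l), label)

    # Unsupervised consistency on unlabeled pairs: the change probability map
    # should be stable under small perturbations of the feature difference map.
    diff_u = encoder(img_a_u) - encoder(img_b_u)
    clean = F.softmax(change_logits(diff_u), dim=1).detach()
    perturbed = diff_u + 0.1 * torch.randn_like(diff_u)
    noisy = F.softmax(change_logits(perturbed), dim=1)
    cons = F.mse_loss(noisy, clean)

    return sup + lam * cons

# Dummy batch: 4 labeled and 4 unlabeled 64x64 bi-temporal pairs.
x = lambda: torch.randn(4, 3, 64, 64)
y = torch.randint(0, 2, (4, 64, 64))
loss = semi_supervised_loss(x(), x(), y, x(), x())
loss.backward()
```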
In this study, we examine a clustering problem in which the covariates of each individual element in a dataset are associated with an uncertainty specific to that element. More specifically, we consider a clustering approach in which a pre-processing step that applies a non-linear transformation to the covariates is used to capture the hidden data structure. To this end, we empirically approximate the sets representing the uncertainty propagated to the pre-processed features. To exploit these empirical uncertainty sets, we propose a greedy and optimistic clustering (GOC) algorithm that finds better feature candidates over such sets, yielding more condensed clusters. As an important application, we apply the GOC algorithm to synthetic datasets of the orbital properties of stars generated through our numerical simulation mimicking the formation process of the Milky Way. The GOC algorithm demonstrates improved performance in finding sibling stars originating from the same dwarf galaxy. These realistic datasets have also been made publicly available.
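A simplified, k-means-style reading of the greedy and optimistic assignment is sketched below: each element carries an empirical set of feature candidates (its propagated uncertainty), and assignment optimistically picks the candidate closest to any current cluster centre before the centres are updated. This is an illustration of the idea under that reading, not the authors' GOC implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def goc(candidate_sets, k, n_iter=20):
    """candidate_sets: array of shape (n_elements, n_candidates, n_features)."""
    n, m, d = candidate_sets.shape
    centres = candidate_sets[rng.choice(n, k, replace=False), 0]  # random init
    for _ in range(n_iter):
        chosen = np.empty((n, d))
        labels = np.empty(n, dtype=int)
        for i, cands in enumerate(candidate_sets):
            # Distance of every candidate to every centre; pick the
            # optimistic (smallest) candidate/centre combination.
            dists = np.linalg.norm(cands[:, None, :] - centres[None], axis=-1)
            c_idx, k_idx = np.unravel_index(dists.argmin(), dists.shape)
            chosen[i], labels[i] = cands[c_idx], k_idx
        for j in range(k):
            if np.any(labels == j):
                centres[j] = chosen[labels == j].mean(axis=0)
    return labels, centres

# Two well-separated groups, each element with 5 noisy feature candidates.
truth = np.repeat(np.array([[0.0, 0.0], [4.0, 4.0]]), 50, axis=0)
cands = truth[:, None, :] + rng.normal(0, 0.5, (100, 5, 2))
labels, _ = goc(cands, k=2)
print("cluster sizes:", np.bincount(labels))
```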
Existing inferential methods for small area data involve a trade-off between maintaining area-level frequentist coverage rates and improving inferential precision by incorporating indirect information. In this article, we propose a method for obtaining an area-level prediction region for a future observation that mitigates this trade-off. The proposed method takes a conformal prediction approach in which the conformity measure is the posterior predictive density of a working model that incorporates indirect information. The resulting prediction region has guaranteed frequentist coverage regardless of the working model, and, if the working model assumptions are accurate, the region has the smallest expected volume among regions with the same coverage rate. When the region is constructed under a normal working model, we prove that it is an interval and provide an efficient algorithm to obtain the exact interval. We illustrate the performance of our method through simulation studies and an application to EPA radon survey data.
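As a concrete illustration, the sketch below builds the conformal region on a grid of candidate values, using the posterior predictive density under a conjugate normal working model as the conformity score. The grid search and this particular working model are simplifying assumptions; the paper derives the exact interval without a grid.

```python
import numpy as np
from scipy.stats import norm

def predictive_density(y_all, prior_mean=0.0, prior_var=10.0, sigma2=1.0):
    """Posterior predictive density of each point under a conjugate normal
    working model fit to the full (augmented) vector y_all."""
    n = len(y_all)
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + y_all.sum() / sigma2)
    return norm.pdf(y_all, post_mean, np.sqrt(post_var + sigma2))

def conformal_region(y, grid, alpha=0.1):
    region = []
    for y_star in grid:
        aug = np.append(y, y_star)
        scores = predictive_density(aug)
        # Conformal p-value: fraction of conformity scores no larger than
        # the candidate's score (higher density = more conforming).
        p = np.mean(scores <= scores[-1])
        if p > alpha:
            region.append(y_star)
    return min(region), max(region)

rng = np.random.default_rng(4)
y = rng.normal(2.0, 1.0, size=30)          # observations from one small area
lo, hi = conformal_region(y, np.linspace(-3, 7, 400))
print(f"90% conformal prediction interval: [{lo:.2f}, {hi:.2f}]")
```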
Low-rank matrix estimation under heavy-tailed noise is challenging, both computationally and statistically. Convex approaches have been proven statistically optimal but suffer from high computational costs, especially since robust loss functions are usually non-smooth. More recently, computationally fast non-convex approaches via sub-gradient descent have been proposed, which, unfortunately, fail to deliver a statistically consistent estimator even under sub-Gaussian noise. In this paper, we introduce a novel Riemannian sub-gradient (RsGrad) algorithm that is not only computationally efficient, with linear convergence, but also statistically optimal, whether the noise is Gaussian or heavy-tailed. Convergence theory is established for a general framework, and specific applications to the absolute loss, Huber loss, and quantile loss are investigated. Compared with existing non-convex methods, ours reveals a surprising phenomenon of dual-phase convergence. In phase one, RsGrad behaves as in typical non-smooth optimization, requiring gradually decaying stepsizes. However, phase one only delivers a statistically sub-optimal estimator, a limitation already observed in the existing literature. Interestingly, during phase two, RsGrad converges linearly as if minimizing a smooth and strongly convex objective function, so a constant stepsize suffices. Underlying the phase-two convergence is the smoothing effect of random noise on the non-smooth robust losses in a region close, but not too close, to the truth. Lastly, RsGrad is applicable to low-rank tensor estimation under heavy-tailed noise, where a statistically optimal rate is attainable with the same phenomenon of dual-phase convergence, and a novel shrinkage-based second-order moment method is guaranteed to deliver a warm initialization. Numerical simulations confirm our theoretical findings and showcase the superiority of RsGrad over prior methods.
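To make the dual-phase stepsize idea tangible, the sketch below runs a projected sub-gradient method with Huber loss on a toy low-rank denoising problem, switching from a geometrically decaying to a constant stepsize. The rank-r SVD projection is a simple stand-in for the paper's Riemannian update; this is not the RsGrad algorithm itself, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 50, 3
L_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))
Y = L_true + rng.standard_t(df=1.5, size=(n, n))     # heavy-tailed noise

def huber_subgrad(resid, delta=1.0):
    """Sub-gradient of the entrywise Huber loss evaluated at the residuals."""
    return np.clip(resid, -delta, delta)

def project_rank(X, rank):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

X = project_rank(Y, r)                    # simple spectral initialisation
for t in range(300):
    G = huber_subgrad(X - Y)              # sub-gradient of sum_ij huber(X_ij - Y_ij)
    # Phase one: geometrically decaying stepsize; phase two: constant stepsize.
    eta = 0.5 * 0.98 ** min(t, 100)
    X = project_rank(X - eta * G, r)
    if t % 100 == 0 or t == 299:
        err = np.linalg.norm(X - L_true) / np.linalg.norm(L_true)
        print(f"iter {t:3d}  stepsize {eta:.3f}  relative error {err:.3f}")
```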
Bayesian model selection provides a powerful framework for objectively comparing models directly from observed data, without reference to ground truth data. However, Bayesian model selection requires the computation of the marginal likelihood (model evidence), which is computationally challenging, prohibiting its use in many high-dimensional Bayesian inverse problems. With Bayesian imaging applications in mind, in this work we present the proximal nested sampling methodology to objectively compare alternative Bayesian imaging models for applications that use images to inform decisions under uncertainty. The methodology is based on nested sampling, a Monte Carlo approach specialised for model comparison, and exploits proximal Markov chain Monte Carlo techniques to scale efficiently to large problems and to tackle models that are log-concave and not necessarily smooth (e.g., involving l_1 or total-variation priors). The proposed approach can be applied computationally to problems of dimension O(10^6) and beyond, making it suitable for high-dimensional inverse imaging problems. It is validated on large Gaussian models, for which the likelihood is available analytically, and subsequently illustrated on a range of imaging problems where it is used to analyse different choices of dictionary and measurement model.
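For readers unfamiliar with the evidence computation, the sketch below is textbook nested sampling on a one-dimensional Gaussian model whose analytic marginal likelihood is available for comparison. It samples the likelihood-constrained prior by simple rejection, so it does not reflect the proximal MCMC machinery that lets the proposed method scale to imaging problems; all settings are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(6)
y_obs, sigma, prior_std = 0.7, 0.5, 2.0

def log_like(theta):
    return -0.5 * ((y_obs - theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

n_live = 100
live = rng.normal(0.0, prior_std, n_live)       # live points drawn from the prior
live_logL = log_like(live)

logZ, log_X_prev = -np.inf, 0.0
for i in range(1, 601):
    worst = int(np.argmin(live_logL))
    log_X = -i / n_live                         # expected log prior volume remaining
    log_w = np.log(np.exp(log_X_prev) - np.exp(log_X))
    logZ = np.logaddexp(logZ, live_logL[worst] + log_w)
    log_X_prev = log_X
    # Replace the worst live point with a prior draw above the likelihood threshold.
    while True:
        cand = rng.normal(0.0, prior_std)
        if log_like(cand) > live_logL[worst]:
            live[worst], live_logL[worst] = cand, log_like(cand)
            break

# Add the contribution of the remaining live points, then compare with the truth.
logZ = np.logaddexp(logZ, logsumexp(live_logL) + log_X_prev - np.log(n_live))
analytic = (-0.5 * y_obs ** 2 / (sigma ** 2 + prior_std ** 2)
            - 0.5 * np.log(2 * np.pi * (sigma ** 2 + prior_std ** 2)))
print(f"nested sampling logZ: {logZ:.3f}   analytic logZ: {analytic:.3f}")
```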
One of the most important problems in system identification and statistics is how to estimate the unknown parameters of a given model. Optimization methods and specialized procedures, such as Expectation-Maximization (EM), can be used when the likelihood function can be computed. For situations where one can only simulate from a parametric model, but the likelihood is difficult or impossible to evaluate, a technique known as the Two-Stage (TS) Approach can be applied to obtain reliable parametric estimates. Unfortunately, there is currently a lack of theoretical justification for TS. In this paper, we propose a statistical decision-theoretical derivation of TS, which leads to Bayesian and Minimax estimators. We also show how to apply the TS approach to models for independent and identically distributed samples, by computing quantiles of the data as a first step and using a linear function as the second stage. The proposed method is illustrated via numerical simulations.
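A hedged sketch of this two-stage recipe follows, for an i.i.d. model whose likelihood we pretend is unavailable: stage one compresses each simulated dataset to a few quantiles, stage two fits a linear map from those quantiles to the parameter, and the fitted map is applied to the observed data. The Gamma scale-parameter example and the quantile levels are illustrative choices, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(7)
n_obs, quantile_levels = 100, [0.25, 0.5, 0.75, 0.9]

def simulate(theta, size=n_obs):
    return rng.gamma(shape=2.0, scale=theta, size=size)

def stage_one(data):
    # First stage: reduce the dataset to a small vector of quantiles.
    return np.quantile(data, quantile_levels)

# Second stage: fit a linear function from quantiles to the parameter
# using many simulated (parameter, quantile) pairs.
thetas = rng.uniform(0.5, 5.0, size=2000)
features = np.array([stage_one(simulate(t)) for t in thetas])
design = np.column_stack([np.ones(len(thetas)), features])
coefs, *_ = np.linalg.lstsq(design, thetas, rcond=None)

# Apply the estimator to an "observed" dataset with unknown theta = 2.3.
observed = simulate(2.3)
theta_hat = np.array([1.0, *stage_one(observed)]) @ coefs
print(f"two-stage estimate: {theta_hat:.3f} (truth 2.3)")
```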
We present a novel static analysis technique for deriving higher moments of program variables for a large class of probabilistic loops with potentially uncountable state spaces. Our approach is fully automatic, meaning it does not rely on externally provided invariants or templates. We employ algebraic techniques based on linear recurrences and introduce program transformations that simplify probabilistic programs while preserving their statistical properties. We develop power-reduction techniques to further simplify the polynomial arithmetic of probabilistic programs and define the theory of moment-computable probabilistic loops, for which higher moments can be computed precisely. Our work has applications in recovering probability distributions of random variables and computing tail probabilities. The empirical evaluation of our results demonstrates the applicability of our work to many challenging examples.
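A tiny worked example of the kind of moment computation described above: for the loop x := x + u, with u drawn uniformly from [0, 1] at each iteration, the first and second moments of x obey linear recurrences that can be unrolled exactly and checked against a Monte Carlo run of the loop. The loop is an illustrative toy, not one of the paper's benchmarks.

```python
import numpy as np

n_iters = 20
e_u, e_u2 = 0.5, 1.0 / 3.0       # E[u] and E[u^2] for u ~ Uniform(0, 1)

# Exact moment recurrences (u independent of x_k):
#   E[x_{k+1}]   = E[x_k] + E[u]
#   E[x_{k+1}^2] = E[x_k^2] + 2 E[x_k] E[u] + E[u^2]
m1, m2 = 0.0, 0.0
for _ in range(n_iters):
    m1, m2 = m1 + e_u, m2 + 2 * m1 * e_u + e_u2   # RHS evaluated with the old m1

# Monte Carlo check: simulate the loop many times.
rng = np.random.default_rng(8)
x = np.zeros(200_000)
for _ in range(n_iters):
    x += rng.uniform(0.0, 1.0, size=x.shape)

print(f"recurrence : E[x]={m1:.3f}  E[x^2]={m2:.3f}")
print(f"monte carlo: E[x]={x.mean():.3f}  E[x^2]={np.mean(x**2):.3f}")
```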