This paper addresses the deconvolution problem of estimating a square-integrable probability density from observations contaminated with additive measurement errors having a known density. The estimator begins with a density estimate of the contaminated observations and minimizes a reconstruction error penalized by an integrated squared $m$-th derivative. Theory for deconvolution has mainly focused on kernel- and wavelet-based techniques, yet other methods, including spline-based techniques and this smoothness-penalized estimator, have been found to outperform kernel methods in simulation studies while lacking comparable asymptotic theory. This paper fills in part of that gap by establishing asymptotic guarantees for the smoothness-penalized approach. Consistency is established in mean integrated squared error, and rates of convergence are derived for Gaussian, Cauchy, and Laplace error densities, attaining some lower bounds already in the literature. The assumptions are weak for most results, and the estimator can be used with a broader class of error densities than the deconvoluting kernel. Our application estimates the density of the mean cytotoxicity of certain bacterial isolates under random sampling; this mean cytotoxicity can only be measured experimentally with additive error, leading to the deconvolution problem. We also describe a method for approximating the solution by a cubic spline, which reduces the optimization to a quadratic program.
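For concreteness, the objective just described can be written in the following generic form, where the notation is ours rather than the paper's: $\hat g$ is the preliminary density estimate of the contaminated observations, $\phi$ the known error density, $*$ convolution, and $\lambda > 0$ the smoothing parameter,
\[
\hat f \in \operatorname*{arg\,min}_{f \in L^2} \int \bigl((\phi * f)(x) - \hat g(x)\bigr)^2 \, dx \;+\; \lambda \int \bigl(f^{(m)}(x)\bigr)^2 \, dx.
\]
Expanding $f$ in a cubic-spline basis makes both integrals quadratic in the coefficients, which is consistent with the quadratic-program reduction mentioned above.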
Model misspecification can create significant challenges for the implementation of probabilistic models, and this has led to the development of a range of robust methods which directly account for this issue. However, whether these more involved methods are required will depend on whether the model is really misspecified, and there is a lack of generally applicable methods to answer this question. In this paper, we propose one such method. More precisely, we propose kernel-based hypothesis tests for the challenging composite testing problem, where we are interested in whether the data come from any distribution in some parametric family. Our tests make use of minimum distance estimators based on the maximum mean discrepancy (MMD) and the kernel Stein discrepancy. They are widely applicable, including whenever the density of the parametric model is known up to a normalisation constant, or whenever the model takes the form of a simulator. As our main result, we show that we are able to estimate the parameter and conduct our test on the same data (without data splitting) while maintaining a correct test level. Our approach is illustrated on a range of problems, including testing for goodness-of-fit of an unnormalised non-parametric density model, and an intractable generative model of a biological cellular network.
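As a rough illustration of the MMD ingredient, here is a minimal sketch; the function names, Gaussian kernel, bandwidth, and toy Gaussian location model are our assumptions, and the paper's actual estimators and test calibration are more involved:

```python
import numpy as np

def gaussian_kernel(a, b, bw=1.0):
    """Gaussian kernel matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 bw^2))."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

def mmd2_u(x, y, bw=1.0):
    """Unbiased U-statistic estimate of the squared maximum mean discrepancy."""
    kxx = gaussian_kernel(x, x, bw); np.fill_diagonal(kxx, 0.0)
    kyy = gaussian_kernel(y, y, bw); np.fill_diagonal(kyy, 0.0)
    kxy = gaussian_kernel(x, y, bw)
    n, m = len(x), len(y)
    return kxx.sum() / (n * (n - 1)) + kyy.sum() / (m * (m - 1)) - 2 * kxy.mean()

# Composite-test sketch: fit the parameter, then compare the observed data
# against fresh draws from the fitted simulator on the *same* data.
rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, size=(200, 1))
theta_hat = data.mean()                  # stand-in for a minimum-MMD estimate
sims = rng.normal(theta_hat, 1.0, size=(200, 1))
print(mmd2_u(data, sims))                # small value -> no evidence against the model
```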
Motivated by the need to model joint dependence between regions of interest in functional neuroconnectivity for efficient inference, we propose a new sampling-based Bayesian clustering approach for covariance structures of high-dimensional Gaussian outcomes. The key technique is a Dirichlet process that clusters covariance sub-matrices into independent groups of outcomes, thereby naturally inducing sparsity in the whole-brain connectivity matrix. A new split-merge algorithm is employed to improve the mixing of the Markov chain sampler, and is shown empirically to recover both uniform and Dirichlet partitions with high accuracy. We investigate the empirical performance of the proposed method through extensive simulations. Finally, the proposed approach is used to group regions of interest into functionally independent groups in the Autism Brain Imaging Data Exchange participants with autism spectrum disorder and attention-deficit/hyperactivity disorder.
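To illustrate the partition mechanism only (not the covariance model or the split-merge sampler), here is a sketch of Chinese-restaurant-process sampling and the block structure a partition induces; all names and the concentration parameter are our choices:

```python
import numpy as np

def crp_partition(p, alpha, rng):
    """Sample a partition of p outcomes from the Chinese restaurant process.

    Under the induced model, outcomes in different clusters are independent,
    so the covariance matrix is block diagonal up to a permutation.
    """
    labels = [0]
    for i in range(1, p):
        counts = np.bincount(labels)
        probs = np.append(counts, alpha) / (i + alpha)  # existing clusters vs. new one
        labels.append(rng.choice(len(probs), p=probs))
    return np.array(labels)

rng = np.random.default_rng(1)
z = crp_partition(p=20, alpha=1.0, rng=rng)
# Mask of permitted nonzero covariance entries: 1 iff two outcomes share a cluster.
mask = (z[:, None] == z[None, :]).astype(int)
print(z)
print(mask.sum(), "nonzero entries allowed out of", 20 * 20)
```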
Massive datasets have been generated in fields including computer vision, medical imaging, and astronomy, whose large-scale and high-dimensional properties hamper the implementation of classical statistical models. To tackle the computational challenges, one efficient approach is subsampling, which draws subsamples from the original large dataset according to a carefully designed, task-specific probability distribution to form an informative sketch. The computational cost is reduced by applying the original algorithm to the substantially smaller sketch. Previous studies of subsampling have focused on non-regularized regression, such as ordinary least squares and logistic regression, from the perspectives of computational efficiency and theoretical guarantees. In this article, we introduce a randomized algorithm under the subsampling scheme for Elastic-net regression, which gives novel insights into the L1-norm regularized regression problem. To effectively conduct consistency analysis, a smooth approximation technique based on the $\alpha$-absolute function is first employed and theoretically verified. Concentration bounds and asymptotic normality for the proposed randomized algorithm are then established under mild conditions. Moreover, an optimal subsampling probability is constructed according to A-optimality. The effectiveness of the proposed algorithm is demonstrated on both synthetic and real datasets.
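The abstract does not define the smoothing precisely; one common smooth surrogate for $|x|$ is $\sqrt{x^2+\alpha}$, which we use in the hedged sketch below of subsample-then-reweight Elastic-net fitting (the sampling probabilities, step size, and all function names are our assumptions, not the paper's):

```python
import numpy as np

def smooth_abs(x, alpha=1e-3):
    """Smooth surrogate for |x|: differentiable everywhere, -> |x| as alpha -> 0."""
    return np.sqrt(x ** 2 + alpha)

def subsampled_elastic_net(X, y, probs, r, lam=0.1, gamma=0.5, alpha=1e-3,
                           lr=0.01, iters=2000, rng=None):
    """Fit an elastic net on a weighted subsample of size r drawn with probabilities probs.

    Each sampled row is reweighted by 1/(r * p_i), so the subsample loss is an
    unbiased estimate of the full-data least-squares loss.
    """
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(y), size=r, replace=True, p=probs)
    w = 1.0 / (r * probs[idx])
    Xs, ys = X[idx], y[idx]
    beta = np.zeros(X.shape[1])
    for _ in range(iters):  # plain gradient descent on the smoothed objective
        resid = Xs @ beta - ys
        grad = Xs.T @ (w * resid) / len(y)
        grad += lam * (gamma * beta / smooth_abs(beta, alpha)   # smoothed L1 part
                       + (1 - gamma) * beta)                    # L2 (ridge) part
        beta -= lr * grad
    return beta
```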
The simultaneous estimation of multiple unknown parameters lies at the heart of a broad class of important problems across science and technology. Currently, state-of-the-art performance in such problems is achieved by nonparametric empirical Bayes methods. However, these approaches still suffer from two major issues. First, they solve a frequentist problem but do so by following Bayesian reasoning, posing a philosophical dilemma that has contributed to somewhat uneasy attitudes toward empirical Bayes methodology. Second, their computation relies on certain density estimates that become extremely unreliable in some complex simultaneous estimation problems. In this paper, we study these issues in the context of the canonical Gaussian sequence problem. We propose an entirely frequentist alternative to nonparametric empirical Bayes methods by establishing a connection between simultaneous estimation and penalized nonparametric regression. We use flexible regularization strategies, such as shape constraints, to derive accurate estimators without appealing to Bayesian arguments. We prove that our estimators achieve asymptotically optimal regret and show that they are competitive with or can outperform nonparametric empirical Bayes methods in simulations and an analysis of spatially resolved gene expression data.
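The paper's estimators are not reproduced here; as a toy frequentist analogue in the same Gaussian sequence setting, one can pick a separable rule by minimizing Stein's unbiased risk estimate (SURE) over a simple linear-shrinkage family, with no Bayesian reasoning involved (the paper uses much richer penalized, shape-constrained classes):

```python
import numpy as np

# Gaussian sequence problem: observe z_i = mu_i + eps_i, eps_i ~ N(0, 1), and
# estimate all mu_i at once. For the rule delta(z) = (1 - c) z with unit noise,
# SURE(c) = n (1 - 2c) + c^2 * sum(z_i^2); minimizing over c is purely frequentist.
rng = np.random.default_rng(0)
mu = np.concatenate([np.zeros(800), rng.normal(3, 1, 200)])  # sparse-ish means
z = mu + rng.normal(size=mu.size)

cs = np.linspace(0, 1, 201)
sure = mu.size * (1 - 2 * cs) + cs ** 2 * np.sum(z ** 2)
c_hat = cs[np.argmin(sure)]
mu_hat = (1 - c_hat) * z
print("estimated shrinkage:", c_hat)
print("risk of z itself  :", np.mean((z - mu) ** 2))
print("risk of SURE rule :", np.mean((mu_hat - mu) ** 2))
```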
We study a family of adversarial multiclass classification problems and provide equivalent reformulations in terms of: 1) a family of generalized barycenter problems introduced in the paper and 2) a family of multimarginal optimal transport (MOT) problems where the number of marginals equals the number of classes in the original classification problem. These new theoretical results reveal a rich geometric structure of adversarial learning problems in multiclass classification and extend recent results restricted to the binary classification setting. A direct computational implication of our results is that by solving either the barycenter problem and its dual, or the MOT problem and its dual, we can recover the optimal robust classification rule and the optimal adversarial strategy for the original adversarial problem. Examples with synthetic and real data illustrate our results.
Elitism, which constructs the new population by preserving the best solutions from the old population and the newly generated solutions, has been the default way to perform population update since its introduction into multi-objective evolutionary algorithms (MOEAs) in the late 1990s. In this paper, we take the opposite perspective and conduct population update in MOEAs by simply discarding elitism. That is, we treat the newly generated solutions directly as the new population (so that all selection pressure comes from mating selection). We propose a simple non-elitist MOEA (called NE-MOEA) that only uses Pareto dominance sorting to compare solutions, without involving any diversity-related selection criterion. Preliminary experimental results show that NE-MOEA can compete with well-known elitist MOEAs (NSGA-II, SMS-EMOA and NSGA-III) on several combinatorial problems. Lastly, we discuss limitations of the proposed non-elitist algorithm and suggest possible future research directions.
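A minimal sketch of a non-elitist update; the dominance-tournament mating selection, operator names, and toy problem below are our simplification, not necessarily the paper's exact operators:

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return np.all(a <= b) and np.any(a < b)

def non_elitist_step(pop, evaluate, mutate, rng):
    """One generation of a non-elitist MOEA: all selection pressure comes from
    mating selection; the offspring replace the population unconditionally."""
    objs = [evaluate(x) for x in pop]
    offspring = []
    for _ in range(len(pop)):
        i, j = rng.integers(len(pop), size=2)
        if dominates(objs[i], objs[j]):      # binary tournament on dominance
            parent = pop[i]
        elif dominates(objs[j], objs[i]):
            parent = pop[j]
        else:
            parent = pop[rng.choice([i, j])]  # incomparable: break tie at random
        offspring.append(mutate(parent, rng))
    return offspring  # no elitism: the old population is discarded entirely

# Example usage on a toy bi-objective problem (minimize (sum(x), sum(1-x))):
rng = np.random.default_rng(0)
pop = [rng.integers(0, 2, 16) for _ in range(20)]
f = lambda x: np.array([x.sum(), (1 - x).sum()])
mut = lambda x, rng: np.where(rng.random(x.size) < 1 / x.size, 1 - x, x)
for _ in range(50):
    pop = non_elitist_step(pop, f, mut, rng)
```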
Noise plagues many numerical datasets, where the recorded values may fail to match the true underlying values for reasons including erroneous sensors, data entry/processing mistakes, or imperfect human estimates. Here we consider estimating \emph{which} data values are incorrect along a numerical column. We present a model-agnostic approach that can utilize \emph{any} regressor (i.e.\ statistical or machine learning model) that was fit to predict values in this column based on the other variables in the dataset. By accounting for various uncertainties, our approach distinguishes between genuine anomalies and natural data fluctuations, conditioned on the available information in the dataset. We establish theoretical guarantees for our method and show that other approaches, such as conformal inference, struggle to detect errors. We also contribute a new error detection benchmark involving 5 regression datasets with real-world numerical errors (for which the true values are also known). In this benchmark and additional simulation studies, our method identifies incorrect values with better precision/recall than other approaches.
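A minimal sketch of the model-agnostic recipe, assuming scikit-learn; the robust residual score below is our simplification of "accounting for various uncertainties", not the paper's exact statistic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def error_scores(X, y, model=None):
    """Score how likely each recorded value y_i is to be erroneous (sketch).

    Predict the column from the other variables with any regressor, then flag
    values far from their out-of-sample prediction relative to the typical
    residual spread.
    """
    model = model or RandomForestRegressor(n_estimators=200, random_state=0)
    preds = cross_val_predict(model, X, y, cv=5)   # out-of-sample predictions
    resid = np.abs(y - preds)
    scale = np.median(resid) * 1.4826 + 1e-12      # robust residual scale (MAD)
    return resid / scale                            # large score -> likely error

# Usage: values whose score exceeds a chosen threshold are candidate errors.
```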
Photographs captured by digital cameras often suffer from improper (over- or under-) exposure. For image exposure enhancement, the tasks of Single-Exposure Correction (SEC) and Multi-Exposure Fusion (MEF) are widely studied in the image processing community. However, current SEC and MEF methods are developed under different motivations and thus ignore the internal correlation between SEC and MEF, making it difficult to process arbitrary-length sequences with inaccurate exposures. Besides, MEF methods usually fail at estimating the exposure of a sequence containing only under-exposed or over-exposed images. To alleviate these problems, in this paper we develop an integrated convolutional neural network able to handle an arbitrary-length (including length-one) image sequence suffering from inaccurate exposures. Specifically, we propose a novel Fusion-Correction Network (FCNet) that fuses and corrects an image sequence by employing a multi-level Laplacian Pyramid (LP) image decomposition scheme. In each LP level, the low-frequency base component(s) of the input image sequence are fed into a Fusion block and a Correction block sequentially for consecutive exposure estimation, implemented by alternating image fusion and exposure correction. The exposure-corrected image at the current LP level is upsampled and re-composed with the high-frequency detail component(s) of the input image sequence at the next LP level, yielding the base component for the Fusion and Correction blocks at that level. Experiments on the benchmark dataset demonstrate that our FCNet is effective for arbitrary-length exposure estimation (both SEC and MEF). The code will be publicly released.
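For reference, the LP decomposition FCNet is built around can be sketched as follows, using OpenCV; the network's Fusion and Correction blocks are not reproduced here:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Decompose an image into high-frequency detail bands plus a low-frequency base."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    details = []
    for k in range(levels):
        up = cv2.pyrUp(gauss[k + 1], dstsize=gauss[k].shape[1::-1])
        details.append(gauss[k] - up)              # high-frequency band at level k
    return details, gauss[-1]                       # detail bands + base component

def reconstruct(details, base):
    """Invert the decomposition: upsample and add the detail bands back."""
    img = base
    for d in reversed(details):
        img = cv2.pyrUp(img, dstsize=d.shape[1::-1]) + d
    return img
```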
The configuration model is a standard tool for uniformly generating random graphs with a specified degree sequence, and is often used as a null model to evaluate how much of an observed network's structure can be explained by its degree structure alone. A Markov chain Monte Carlo (MCMC) algorithm, based on a degree-preserving double-edge swap, provides an asymptotic solution for sampling from the configuration model. However, accurately and efficiently detecting this Markov chain's convergence to its stationary distribution remains an unsolved problem. Here, we provide a solution to detect convergence and sample from the configuration model. We develop an algorithm, based on the assortativity of the sampled graphs, for estimating the gap between effectively independent MCMC states, along with a computationally efficient gap-estimation heuristic derived from analyzing a corpus of 509 empirical networks. We also provide a convergence detection method based on the Dickey-Fuller Generalized Least Squares test, which we show is more accurate and efficient than three alternative Markov chain convergence tests.
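The underlying MCMC move is simple to state; here is a sketch of one degree-preserving double-edge swap on a simple graph (networkx also ships nx.double_edge_swap; the convergence-detection machinery is the paper's contribution and is not sketched here):

```python
import random
import networkx as nx

def double_edge_swap_step(G, rng=random):
    """One degree-preserving double-edge swap (MCMC move) on a simple graph.

    Pick two edges (u, v) and (x, y) and rewire to (u, x), (v, y); reject the
    move if it would create a self-loop or multi-edge, keeping the chain on
    the space of simple graphs with the given degree sequence.
    """
    (u, v), (x, y) = rng.sample(list(G.edges()), 2)
    if len({u, v, x, y}) < 4:
        return False                 # shared endpoint would create a self-loop
    if G.has_edge(u, x) or G.has_edge(v, y):
        return False                 # rewiring would create a multi-edge
    G.remove_edges_from([(u, v), (x, y)])
    G.add_edges_from([(u, x), (v, y)])
    return True
```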
We consider the problem of estimating the optimal transport map between two probability distributions, $P$ and $Q$ in $\mathbb R^d$, on the basis of i.i.d. samples. All existing statistical analyses of this problem require the assumption that the transport map is Lipschitz, a strong requirement that, in particular, excludes any examples where the transport map is discontinuous. As a first step towards developing estimation procedures for discontinuous maps, we consider the important special case where the data distribution $Q$ is a discrete measure supported on a finite number of points in $\mathbb R^d$. We study a computationally efficient estimator initially proposed by Pooladian and Niles-Weed (2021), based on entropic optimal transport, and show in the semi-discrete setting that it converges at the minimax-optimal rate $n^{-1/2}$, independent of dimension. Other standard map estimation techniques both lack finite-sample guarantees in this setting and provably suffer from the curse of dimensionality. We confirm these results in numerical experiments, and provide experiments for other settings, not covered by our theory, which indicate that the entropic estimator is a promising methodology for other discontinuous transport map estimation problems.
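A minimal sketch of the entropic estimator in the semi-discrete setting (plain Sinkhorn iterations followed by barycentric projection; the hyperparameters and names are ours, and small $\varepsilon$ would call for log-domain updates):

```python
import numpy as np

def entropic_map(X, Y, b, eps=0.1, iters=500):
    """Entropic OT map estimate via Sinkhorn + barycentric projection (sketch).

    X: (n, d) source samples; Y: (m, d) target support points with weights b.
    Each source point is mapped to the conditional mean of the entropic
    coupling, in the spirit of the Pooladian and Niles-Weed (2021) estimator.
    """
    a = np.full(len(X), 1.0 / len(X))
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared-distance cost
    K = np.exp(-C / eps)
    u, v = np.ones(len(X)), np.ones(len(Y))
    for _ in range(iters):                               # Sinkhorn scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                      # entropic coupling
    return (P @ Y) / P.sum(1, keepdims=True)             # barycentric projection
```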