An accurate covariance matrix is essential for obtaining reliable cosmological results when using a Gaussian likelihood. In this paper we study the covariance of pseudo-$C_\ell$ estimates of tomographic cosmic shear power spectra. Using two existing publicly available codes in combination, we calculate the full covariance matrix, including mode-coupling contributions arising from both partial sky coverage and non-linear structure growth. For three different sky masks, we compare the theoretical covariance matrix to that estimated from publicly available N-body weak lensing simulations and find good agreement. As progressively more extreme sky cuts are applied, both the Gaussian off-diagonal covariance and the non-Gaussian super-sample covariance increase, in theory and simulations alike, in accordance with expectations. Studying the different contributions to the covariance in detail, we find that the Gaussian covariance dominates along the main diagonal and the closest off-diagonals, but that further away from the main diagonal the super-sample covariance dominates. Forming mock constraints on parameters describing matter clustering and dark energy, we find that neglecting non-Gaussian contributions to the covariance can lead to underestimating the true size of confidence regions by up to 70 per cent. The dominant non-Gaussian component is the super-sample covariance, but neglecting the smaller connected non-Gaussian covariance can still lead to uncertainties being underestimated by 10--20 per cent. A real cosmological analysis will require marginalisation over many nuisance parameters, which will decrease the relative importance of all cosmological contributions to the covariance, so these values should be taken as upper limits on the importance of each component.
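One way to quantify such an effect is to compare forecast parameter errors with and without the non-Gaussian blocks. The Python sketch below does this with a simple Fisher forecast; the band powers, derivatives and covariance amplitudes are purely synthetic placeholders chosen to mimic the qualitative structure described above, not the spectra, masks or mock-likelihood machinery used in the paper.

```python
# Illustrative sketch: combine Gaussian, super-sample (SSC) and connected
# non-Gaussian (cNG) covariance blocks for a band-power data vector, and
# compare Fisher-forecast parameter errors with and without the
# non-Gaussian terms.  All inputs are synthetic placeholders.
import numpy as np

n_ell = 20                                       # number of band powers
ells = np.arange(n_ell)

# Fiducial band powers and toy derivatives w.r.t. two parameters.
cl = 1e-7 * (1.0 + ells) ** -1.5
dcl_dtheta = np.stack([2.0 * cl,                       # toy d C_ell / d sigma_8
                       0.3 * cl * np.log(2.0 + ells)]) # toy d C_ell / d w_0

# Synthetic covariance blocks: Gaussian ~ diagonal, SSC/cNG broad off-diagonal.
cov_gauss = np.diag((0.05 * cl) ** 2)
broad = np.outer(cl, cl)                         # fully correlated template
cov_ssc = 0.004 * broad                          # dominant non-Gaussian piece
cov_cng = 0.001 * broad                          # smaller connected term

def param_errors(cov):
    """1-sigma marginalised errors from a Fisher forecast."""
    fisher = dcl_dtheta @ np.linalg.inv(cov) @ dcl_dtheta.T
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

err_gauss = param_errors(cov_gauss)
err_full = param_errors(cov_gauss + cov_ssc + cov_cng)
print("Gaussian-only errors  :", err_gauss)
print("Full-covariance errors:", err_full)
print("Underestimation factor:", err_full / err_gauss)
```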
We consider Gaussian measures $\mu, \tilde{\mu}$ on a separable Hilbert space, with fractional-order covariance operators $A^{-2\beta}$ and $\tilde{A}^{-2\tilde{\beta}}$, respectively, and derive necessary and sufficient conditions on $A, \tilde{A}$ and $\beta, \tilde{\beta} > 0$ for I. equivalence of the measures $\mu$ and $\tilde{\mu}$, and II. uniform asymptotic optimality of linear predictions for $\mu$ based on the misspecified measure $\tilde{\mu}$. These results hold, e.g., for Gaussian processes on compact metric spaces. As an important special case, we consider the class of generalized Whittle-Mat\'ern Gaussian random fields, where $A$ and $\tilde{A}$ are elliptic second-order differential operators, formulated on a bounded Euclidean domain $\mathcal{D}\subset\mathbb{R}^d$ and augmented with homogeneous Dirichlet boundary conditions. Our results explain why the predictive performances of stationary and non-stationary models in spatial statistics are often comparable, and they provide a crucial first step in deriving consistency results for parameter estimation of generalized Whittle-Mat\'ern fields.
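Prediction under a misspecified measure can be made concrete with a small numerical experiment. The sketch below is a hypothetical 1-d stationary Mat\'ern example, not a generalized Whittle-Mat\'ern field on a bounded domain: it compares simple-kriging predictors computed under the true covariance parameters and under misspecified ones, with all parameter values chosen for illustration only.

```python
# Sketch of misspecified linear prediction (simple kriging) for a Gaussian
# process with a Matern covariance; the 1-d setting and all parameter
# values are illustrative choices, not taken from the paper.
import numpy as np
from scipy.special import gamma, kv

def matern_cov(x, y, sigma2=1.0, ell=0.2, nu=1.5):
    """Matern covariance matrix between 1-d point sets x and y."""
    h = np.abs(x[:, None] - y[None, :])
    c = np.full_like(h, sigma2, dtype=float)     # value at lag 0
    nz = h > 0
    s = np.sqrt(2.0 * nu) * h[nz] / ell
    c[nz] = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * s ** nu * kv(nu, s)
    return c

rng = np.random.default_rng(1)
x_obs = np.sort(rng.uniform(0, 1, 60))
x_new = np.linspace(0, 1, 200)

# Simulate data from the "true" measure mu.
K_true = matern_cov(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))
z = np.linalg.cholesky(K_true) @ rng.standard_normal(len(x_obs))

def krige(params):
    """Simple-kriging predictor at x_new under a (possibly wrong) model."""
    K = matern_cov(x_obs, x_obs, **params) + 1e-10 * np.eye(len(x_obs))
    k = matern_cov(x_new, x_obs, **params)
    return k @ np.linalg.solve(K, z)

pred_true = krige(dict(sigma2=1.0, ell=0.2, nu=1.5))
pred_miss = krige(dict(sigma2=3.0, ell=0.1, nu=1.5))   # misspecified model
print("max |difference| between predictors:",
      np.max(np.abs(pred_true - pred_miss)))
```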
Let $\mathbb{Z}_n = \{Z_1, \ldots, Z_n\}$ be a design; that is, a collection of $n$ points $Z_j \in [-1,1]^d$. We study the quality of quantization of $[-1,1]^d$ by the points of $\mathbb{Z}_n$ and the quality of coverage of $[-1,1]^d$ by ${\cal B}_d(\mathbb{Z}_n,r)$, the union of balls of radius $r$ centred at the points $Z_j \in \mathbb{Z}_n$. We concentrate on the cases where the dimension $d$ is not small ($d\geq 5$) and $n$ is not too large, $n \leq 2^d$. We define the design ${\mathbb{D}_{n,\delta}}$ as a collection of $n=2^{d-1}$ points placed at vertices of the cube $[-\delta,\delta]^d$, $0\leq \delta\leq 1$. For this design, we derive a closed-form expression for the quantization error and very accurate approximations for the coverage volume $\mathrm{vol}([-1,1]^d \cap {\cal B}_d(\mathbb{Z}_n,r))$. We provide results of a large-scale numerical investigation confirming the accuracy of the developed approximations and the efficiency of the designs ${\mathbb{D}_{n,\delta}}$.
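Both quantities can be estimated by Monte Carlo, as in the sketch below. Since the abstract does not specify which $2^{d-1}$ vertices enter $\mathbb{D}_{n,\delta}$, the code assumes the even-parity half of the vertices of $[-\delta,\delta]^d$; the values of $d$, $\delta$ and $r$ are arbitrary.

```python
# Monte Carlo sketch of the quantization error and coverage for a design
# placed on vertices of [-delta, delta]^d.  Assumption: the design uses the
# even-parity half of the 2^d vertices (the abstract does not say which half).
import itertools
import numpy as np

d, delta, r = 7, 0.5, 1.6
rng = np.random.default_rng(2)

# Even-parity half of the vertices of [-delta, delta]^d  ->  2^(d-1) points.
verts = np.array([v for v in itertools.product([-delta, delta], repeat=d)
                  if np.sum(np.array(v) < 0) % 2 == 0])

# Uniform sample over [-1, 1]^d.
u = rng.uniform(-1.0, 1.0, size=(100_000, d))

# Squared distances to all design points via the Gram-matrix identity.
d2 = ((u ** 2).sum(1)[:, None] + (verts ** 2).sum(1)[None, :]
      - 2.0 * u @ verts.T)
nearest = np.sqrt(np.maximum(d2.min(axis=1), 0.0))

# Mean squared distance to the nearest design point, and coverage fraction.
print("quantization error (mean squared distance):", np.mean(nearest ** 2))
print("coverage fraction for radius r            :", np.mean(nearest <= r))
```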
For large classes of group testing problems, we derive lower bounds on the probability that all significant items are uniquely identified using specially constructed random designs. These bounds allow us to optimize the parameters of the randomization schemes. We also suggest and numerically justify a procedure for constructing designs with better separability properties than purely random designs. We illustrate the theoretical considerations with a large simulation-based study. This study indicates, in particular, that in the case of common binary group testing, the suggested families of designs have better separability than the popular designs constructed from disjunct matrices. We also derive several asymptotic expansions and discuss the situations in which the resulting approximations achieve high accuracy.
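For intuition, the probability of unique identification can be estimated by simulation. The sketch below does this for binary (OR-channel) group testing with a Bernoulli random design and the simple COMP decoder; the problem sizes, inclusion probability and decoder are illustrative assumptions, not the randomization schemes constructed in the paper.

```python
# Monte Carlo sketch: probability that all defective items are uniquely
# identified in binary group testing under a Bernoulli random design,
# using the simple COMP decoder.
import numpy as np

rng = np.random.default_rng(3)
n_items, n_tests, n_defective, p = 200, 100, 5, 0.2
n_trials = 2000

def exact_recovery():
    X = rng.random((n_tests, n_items)) < p          # random design matrix
    defect = rng.choice(n_items, n_defective, replace=False)
    y = X[:, defect].any(axis=1)                    # OR of defective columns
    # COMP: an item is declared non-defective if it appears in a negative test.
    declared = ~((X & ~y[:, None]).any(axis=0))
    return set(np.flatnonzero(declared)) == set(defect)

prob = np.mean([exact_recovery() for _ in range(n_trials)])
print("estimated probability of unique identification:", prob)
```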
Microscopy research often requires recovering particle-size distributions in three dimensions from only a few (10--200) profile measurements in a planar section. This problem is especially relevant for petrographic and mineralogical studies, where parametric assumptions are reasonable and finding distribution parameters from the microscopic study of small sections is essential. This paper deals with the specific case where particles are approximately spherical (i.e. Wicksell's problem). The paper presents a novel approximation of the probability density of spherical particle profile sizes, which exploits the actual non-smoothness of mineral particles rather than assuming perfect spheres. The new approximation facilitates numerically efficient use of the maximum likelihood method, a generally powerful method that in most practical cases provides distribution parameter estimates of minimal variance. The variance and bias of the maximum likelihood estimates were compared numerically, for several typical particle-size distributions, with those of alternative parametric methods (the method of moments and minimum distance estimation), and maximum likelihood estimation was found to be preferable for both small and large samples. The maximum likelihood method, together with the suggested approximation, may also be used for model selection, for constructing narrow confidence intervals for distribution parameters using all the profiles without random sampling, and for including measurements of profiles intersected by section boundaries. The utility of the approach is illustrated with an example from glacier ice petrography.
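To make the estimation problem concrete, the sketch below fits a lognormal sphere-radius distribution to simulated profile radii by maximum likelihood using the classical Wicksell integral equation. It illustrates the setting only; it does not implement the paper's new approximation for non-smooth particles, and all simulation values are arbitrary.

```python
# Maximum-likelihood fitting of a lognormal sphere-radius distribution from
# planar profile radii via the classical Wicksell transform.
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def lognorm_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * np.sqrt(2 * np.pi))

def profile_pdf(r, mu, sigma, n_quad=1000):
    """Wicksell transform: pdf of profile radii for lognormal sphere radii.
    Uses the substitution s = sqrt(R^2 - r^2) to remove the singularity."""
    s = np.linspace(0.0, np.exp(mu + 5 * sigma), n_quad)
    R = np.sqrt(r[:, None] ** 2 + s[None, :] ** 2)
    integrand = lognorm_pdf(R, mu, sigma) / R
    mean_R = np.exp(mu + sigma ** 2 / 2)
    return r / mean_R * trapezoid(integrand, s, axis=1)

def neg_log_lik(params, r):
    mu, log_sigma = params
    dens = profile_pdf(r, mu, np.exp(log_sigma))
    return -np.sum(np.log(dens + 1e-300))

# Simulate profiles: spheres are hit with probability proportional to R,
# and the profile radius is r = R * sqrt(1 - U^2) with U uniform on (0, 1).
true_mu, true_sigma, n = 0.0, 0.4, 150
R_pop = np.exp(true_mu + true_sigma * rng.standard_normal(20000))
R = rng.choice(R_pop, size=n, p=R_pop / R_pop.sum())   # size-biased sampling
r_obs = R * np.sqrt(1 - rng.uniform(0, 1, n) ** 2)

fit = minimize(neg_log_lik, x0=[0.5, np.log(0.8)], args=(r_obs,),
               method="Nelder-Mead")
print("estimated mu, sigma:", fit.x[0], np.exp(fit.x[1]))
```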
When evaluating and comparing models using leave-one-out cross-validation (LOO-CV), the uncertainty of the estimate is typically assessed using the variance of the sampling distribution. Considering this uncertainty is important, as the variability of the estimate can be high in some cases. An important result by Bengio and Grandvalet (2004) states that no general unbiased variance estimator can be constructed that applies to any utility or loss measure and any model. We show that it is nevertheless possible to construct an unbiased estimator for a specific predictive performance measure and model. We demonstrate an unbiased sampling-distribution variance estimator for the Bayesian normal model with fixed model variance, using the expected log pointwise predictive density (elpd) utility score. This example demonstrates that it is possible to obtain improved, problem-specific, unbiased estimators for assessing the uncertainty in LOO-CV estimation.
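The quantities involved can be written down exactly for this conjugate model. The sketch below computes the LOO pointwise log predictive densities for a normal model with known variance and a normal prior on the mean, together with the commonly used naive variance estimate; the paper's unbiased estimator itself is not reproduced here, and the prior and data values are arbitrary.

```python
# Exact LOO-CV elpd for a conjugate normal model with known observation
# variance, plus the commonly used naive variance estimate
# n * Var(pointwise elpd).
import numpy as np

rng = np.random.default_rng(5)
n, sigma2 = 50, 1.0                    # known observation variance
mu0, tau0_2 = 0.0, 10.0                # normal prior on the mean
y = rng.normal(0.5, np.sqrt(sigma2), n)

def lpd_loo(y, i):
    """Log posterior-predictive density of y[i] given the other points."""
    y_rest = np.delete(y, i)
    m = len(y_rest)
    post_var = 1.0 / (1.0 / tau0_2 + m / sigma2)
    post_mean = post_var * (mu0 / tau0_2 + y_rest.sum() / sigma2)
    pred_var = post_var + sigma2
    return (-0.5 * np.log(2 * np.pi * pred_var)
            - 0.5 * (y[i] - post_mean) ** 2 / pred_var)

pointwise = np.array([lpd_loo(y, i) for i in range(n)])
print("elpd_loo estimate      :", pointwise.sum())
print("naive variance estimate:", n * pointwise.var(ddof=1))
```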
The unlabeled sensing problem is to solve a noisy linear system of equations under an unknown permutation of the measurements. We study a particular case of the problem in which the permutations are restricted to be r-local, i.e., the permutation matrix is block diagonal with r x r blocks. Assuming a Gaussian measurement matrix, we argue that the r-local permutation model is more challenging than a recently proposed sparse permutation model. We propose a proximal alternating minimization algorithm for the general unlabeled sensing problem that provably converges to a first-order stationary point. Applied to the r-local model, we show that the resulting algorithm is efficient. We validate the algorithm on synthetic and real datasets. We also formulate the 1-d unassigned distance geometry problem as an unlabeled sensing problem with a structured measurement matrix.
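The r-local structure lends itself to a simple alternating scheme: fix the permutation and solve least squares for x, then fix x and solve an assignment problem within each block. The sketch below implements this simplified alternation (not the paper's proximal algorithm) on synthetic Gaussian data; all sizes are arbitrary.

```python
# Simplified alternating scheme for r-local unlabeled sensing: alternate
# between a least-squares update of x and an exact assignment within each
# r x r block (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(6)
n, d, r = 60, 5, 6                      # measurements, unknowns, block size
B = rng.standard_normal((n, d))         # Gaussian measurement matrix
x_true = rng.standard_normal(d)

# Ground-truth r-local permutation: shuffle indices within each block.
perm = np.concatenate([b + rng.permutation(r) for b in range(0, n, r)])
y = (B @ x_true)[perm] + 0.01 * rng.standard_normal(n)

x = np.linalg.lstsq(B, y, rcond=None)[0]        # initial (unpermuted) fit
for _ in range(25):
    # Permutation step: match entries of y to entries of Bx in each block.
    Bx = B @ x
    est_perm = np.empty(n, dtype=int)
    for b in range(0, n, r):
        cost = (y[b:b + r, None] - Bx[None, b:b + r]) ** 2
        rows, cols = linear_sum_assignment(cost)
        est_perm[b + rows] = b + cols
    # x step: least squares with the current permutation estimate.
    x = np.linalg.lstsq(B[est_perm], y, rcond=None)[0]

print("relative error in x:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```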
This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing estimators such that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is carried out across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are correctly accounted for.
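The gap between a confounded and an adjusted estimate is easy to reproduce on simulated data. The sketch below uses a toy data-generating process and a standard inverse-propensity-weighting adjustment purely for illustration; neither the simulation nor the IPW estimator is the construction proposed in the paper.

```python
# Illustration of how confounding biases a naive estimate of the effect of a
# credit decision on repayment, and how a standard adjustment (inverse
# propensity weighting) removes most of the bias on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 50_000
credit_score = rng.normal(0, 1, n)                      # confounder
# The lender approves more often for higher scores.
approve = rng.random(n) < 1 / (1 + np.exp(-2 * credit_score))
# Repayment depends on both the decision and the confounder; true effect = 1.
repay = 1.0 * approve + 2.0 * credit_score + rng.normal(0, 1, n)

naive = repay[approve].mean() - repay[~approve].mean()

# Estimate propensity scores and form Hajek-type IPW means.
ps = LogisticRegression().fit(credit_score[:, None], approve) \
                         .predict_proba(credit_score[:, None])[:, 1]
w1, w0 = approve / ps, (~approve) / (1 - ps)
ipw = (w1 * repay).sum() / w1.sum() - (w0 * repay).sum() / w0.sum()

print("true effect: 1.0   naive:", round(naive, 3), "  IPW:", round(ipw, 3))
```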
To learn intrinsic low-dimensional structures from high-dimensional data that most discriminate between classes, we propose the principle of Maximal Coding Rate Reduction ($\text{MCR}^2$), an information-theoretic measure that maximizes the coding-rate difference between the whole dataset and the sum over the individual classes. We clarify its relationships with most existing frameworks such as cross-entropy, information bottleneck, information gain, and contractive and contrastive learning, and provide theoretical guarantees for learning diverse and discriminative features. The coding rate can be computed accurately from finite samples of degenerate subspace-like distributions, and the principle can be used to learn intrinsic representations in supervised, self-supervised, and unsupervised settings in a unified manner. Empirically, the representations learned using this principle alone are significantly more robust to label corruption in classification than those learned using cross-entropy, and can lead to state-of-the-art results in clustering mixed data from self-learned invariant features.
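The objective itself is compact. The sketch below evaluates the coding rate and its reduction for toy features drawn from two low-dimensional subspaces, following the rate-distortion expressions associated with MCR^2; the value of epsilon and the synthetic data are arbitrary choices for illustration.

```python
# The MCR^2 objective in a few lines: coding rate of the whole representation
# minus the weighted sum of coding rates of the per-class parts.
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z, eps) = 1/2 logdet(I + d/(n eps^2) Z Z^T), with Z of shape (d, n)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (n * eps ** 2) * Z @ Z.T)[1]

def coding_rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) R(Z_j)."""
    n = Z.shape[1]
    rc = sum(np.sum(labels == c) / n * coding_rate(Z[:, labels == c], eps)
             for c in np.unique(labels))
    return coding_rate(Z, eps) - rc

rng = np.random.default_rng(8)
# Two classes lying near two different 3-dimensional subspaces of R^16.
U1 = np.linalg.qr(rng.standard_normal((16, 3)))[0]
U2 = np.linalg.qr(rng.standard_normal((16, 3)))[0]
Z = np.hstack([U1 @ rng.standard_normal((3, 100)),
               U2 @ rng.standard_normal((3, 100))])
Z /= np.linalg.norm(Z, axis=0)                     # features on the sphere
labels = np.repeat([0, 1], 100)
print("Delta R for subspace-structured features:",
      coding_rate_reduction(Z, labels))
```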
Deep learning is the mainstream technique for many machine learning tasks, including image recognition, machine translation, and speech recognition. It has outperformed conventional methods in various fields and achieved great success. Unfortunately, our understanding of how it works remains limited, and laying down a theoretical foundation for deep learning is of central importance. In this work, we give a geometric view of deep learning: we argue that the fundamental principle behind its success is the manifold structure in data, namely that natural high-dimensional data concentrate close to a low-dimensional manifold, and that deep learning learns the manifold and the probability distribution on it. We further introduce the concept of rectified linear complexity of a deep neural network, which measures its learning capability, and the rectified linear complexity of an embedding manifold, which describes the difficulty of learning it. We then show that for any deep neural network with fixed architecture, there exists a manifold that cannot be learned by the network. Finally, we propose to apply optimal mass transportation theory to control the probability distribution in the latent space.
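A crude numerical proxy for this piecewise-linear picture is to count the distinct ReLU activation patterns a network realises on a set of inputs, which lower-bounds the number of linear pieces on that set. The sketch below does this for a small random network; the architecture and sampling grid are arbitrary, and this is not the paper's formal definition of rectified linear complexity.

```python
# Count distinct ReLU activation patterns on a 2-d input grid as a rough
# lower bound on the number of linear pieces realised by the network.
import numpy as np

rng = np.random.default_rng(9)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 8)), rng.standard_normal(8)

# Dense grid on a 2-d input square.
g = np.linspace(-2, 2, 400)
X = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

h1 = np.maximum(X @ W1.T + b1, 0.0)              # first ReLU layer
h2 = np.maximum(h1 @ W2.T + b2, 0.0)             # second ReLU layer
pattern = np.hstack([h1 > 0, h2 > 0])            # joint activation pattern
n_pieces = len(np.unique(pattern, axis=0))
print("distinct activation patterns on the grid:", n_pieces)
```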
In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales as well as trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components corresponding to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and to investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. In addition, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
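The basic building block of such an iterative estimator is the "flip-flop" update for a Kronecker-structured covariance. The sketch below shows the two-factor (space-by-time) version on simulated matrix-normal trials; the paper's model adds a third trial factor and further structure that are not reproduced here, and the simulated factors are arbitrary.

```python
# Flip-flop iteration for maximum-likelihood estimation of a two-factor
# Kronecker covariance (space x time) from repeated trials.
import numpy as np

rng = np.random.default_rng(10)
p, q, n_trials = 6, 40, 200                      # channels, time points, trials

# Simulate matrix-normal data with known Kronecker factors.
A = np.linalg.cholesky(0.5 * np.eye(p) + 0.5)    # spatial factor root (p x p)
B = np.linalg.cholesky(
    np.exp(-np.abs(np.subtract.outer(range(q), range(q))) / 5.0))
X = np.array([A @ rng.standard_normal((p, q)) @ B.T for _ in range(n_trials)])

# Flip-flop updates: alternate closed-form solutions for each factor.
S_space, S_time = np.eye(p), np.eye(q)
for _ in range(20):
    inv_t = np.linalg.inv(S_time)
    S_space = sum(x @ inv_t @ x.T for x in X) / (n_trials * q)
    inv_s = np.linalg.inv(S_space)
    S_time = sum(x.T @ inv_s @ x for x in X) / (n_trials * p)

print("spatial factor estimate (identified up to scale):\n",
      np.round(S_space, 2))
```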