It is increasingly recognized that participation bias can pose problems for genetic studies. Recently, to overcome the challenge that genetic information on non-participants is unavailable, it has been shown that, by comparing IBD (identity-by-descent) segments shared and not shared among participants, one can estimate the genetic component underlying participation. That, however, does not directly address how to adjust estimates of heritability and genetic correlation for phenotypes correlated with participation. Here, for phenotypes whose mean differences between the population and the sample are known, we demonstrate a way to do so by adopting a statistical framework that separates the genetic and non-genetic correlations between participation and these phenotypes. Crucially, our method avoids the assumption that the effect of the genetic component underlying participation is manifested entirely through these other phenotypes. Applying the method to 12 UK Biobank phenotypes, we found that 8 have significant genetic correlations with participation, including body mass index, educational attainment, and smoking status. For most of these phenotypes, unadjusted estimates of heritability and of the absolute value of genetic correlation are biased downwards.
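A toy simulation, under assumed effect sizes and not the paper's estimation method, showing the direction of this bias: when participation loads partly on the genetic component of a phenotype, the naive within-sample heritability is attenuated relative to the population value.

```python
# Toy simulation, not the paper's method: participation depends partly on the
# genetic component of a phenotype, so within-sample heritability is attenuated.
# All effect sizes and the participation rate are made-up illustrative values.
import numpy as np

rng = np.random.default_rng(1)
n, h2 = 500_000, 0.5
G = rng.normal(0, np.sqrt(h2), n)            # genetic component
E = rng.normal(0, np.sqrt(1 - h2), n)        # environmental component
Y = G + E                                    # phenotype, population heritability 0.5

# participation liability loads on the genetic component of Y, plus noise
liability = 0.6 * G + rng.normal(0, 1, n)
participates = liability > np.quantile(liability, 0.95)   # ~5% participation rate

h2_sample = G[participates].var() / Y[participates].var()
print(f"population h2 = {h2:.2f}, naive within-sample h2 = {h2_sample:.2f}")
```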
We propose a topological mapping and localization system able to operate on real human colonoscopies despite significant shape and illumination changes. The map is a graph where each node encodes a colon location by a set of real images, while edges represent traversability between nodes. For close-in-time images, where scene changes are minor, place recognition can be successfully managed with recent transformer-based local feature matching algorithms. However, under long-term changes -- such as different colonoscopies of the same patient -- feature-based matching fails. To address this, we train a deep global descriptor on real colonoscopies that achieves high recall under significant scene changes. The addition of a Bayesian filter boosts the accuracy of long-term place recognition, enabling relocalization in a previously built map. Our experiments show that ColonMapper is able to autonomously build a map and localize against it in two important use cases: localization within the same colonoscopy or within different colonoscopies of the same patient. Code: https://github.com/jmorlana/ColonMapper.
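A minimal sketch of what a discrete Bayesian filter over such a topological map could look like; the graph, transition matrix, and descriptor-similarity likelihoods below are illustrative assumptions, not the ColonMapper implementation.

```python
# Discrete Bayes filter over the nodes of a topological map: predict along graph
# edges, then update with a per-node likelihood (e.g. global-descriptor similarity).
import numpy as np

def bayes_filter_step(belief, adjacency, likelihood):
    """belief: (N,) posterior over nodes; adjacency: (N, N) row-stochastic
    transition matrix (edges = traversability); likelihood: (N,) p(image | node)."""
    predicted = adjacency.T @ belief          # prediction step along graph edges
    posterior = predicted * likelihood        # measurement update
    return posterior / posterior.sum()        # normalize

# toy example: 4 nodes in a chain, query image most similar to node 2
A = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.2, 0.6, 0.2, 0.0],
              [0.0, 0.2, 0.6, 0.2],
              [0.0, 0.0, 0.4, 0.6]])
belief = np.full(4, 0.25)
likelihood = np.array([0.05, 0.15, 0.70, 0.10])
belief = bayes_filter_step(belief, A, likelihood)
print(belief.argmax())   # most likely current node
```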
The brains of all bilaterally symmetric animals on Earth are divided into left and right hemispheres. The anatomy and functionality of the hemispheres have a large degree of overlap, but there are asymmetries, and the hemispheres specialise in different attributes. Other authors have used computational models to mimic hemispheric asymmetries with a focus on reproducing human data on semantic and visual processing tasks. We took a different approach and aimed to understand how dual hemispheres in a bilateral architecture interact to perform well in a given task. We propose a bilateral artificial neural network that imitates the lateralisation observed in nature: the left hemisphere specialises in local features and the right in global features. We used different training objectives to achieve the desired specialisation and tested the network on an image classification task with two different CNN backbones: ResNet and VGG. Our analysis found that the hemispheres represent complementary features that are exploited by a network head implementing a type of weighted attention. The bilateral architecture outperformed a range of baselines of similar representational capacity that do not exploit differential specialisation, with the exception of a conventional ensemble of unilateral networks trained on dual training objectives for local and global features. The results demonstrate the efficacy of bilateralism, contribute to the discussion of bilateralism in biological brains, and suggest the principle may serve as an inductive bias for new AI systems.
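A minimal PyTorch-style sketch of a bilateral classifier of this kind, assuming two backbone feature extractors (trained with local- and global-feature objectives, respectively) and a simple two-weight attention head; module names, dimensions, and the fusion rule are illustrative assumptions, not the paper's code.

```python
# Two "hemisphere" backbones whose features are fused by a small weighted-attention head.
import torch
import torch.nn as nn

class BilateralClassifier(nn.Module):
    def __init__(self, left_backbone, right_backbone, feat_dim, num_classes):
        super().__init__()
        self.left = left_backbone    # specialised for local features
        self.right = right_backbone  # specialised for global features
        self.attn = nn.Linear(2 * feat_dim, 2)       # one weight per hemisphere
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f_left = self.left(x)                        # (B, feat_dim)
        f_right = self.right(x)                      # (B, feat_dim)
        w = torch.softmax(self.attn(torch.cat([f_left, f_right], dim=1)), dim=1)
        fused = w[:, :1] * f_left + w[:, 1:] * f_right   # weighted combination
        return self.head(fused)

# toy usage with stand-in backbones mapping (B, 3, 32, 32) images to feat_dim features
feat_dim = 16
toy = lambda: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim))
model = BilateralClassifier(toy(), toy(), feat_dim, num_classes=10)
print(model(torch.randn(2, 3, 32, 32)).shape)        # torch.Size([2, 10])
```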
Mediation analyses allow researchers to quantify the effect of an exposure variable on an outcome variable through a mediator variable. If a binary mediator variable is misclassified, the resulting analysis can be severely biased. Misclassification is especially difficult to deal with when it is differential and when there are no gold standard labels available. Previous work has addressed this problem using a sensitivity analysis framework or by assuming that misclassification rates are known. We leverage a variable related to the misclassification mechanism to recover unbiased parameter estimates without using gold standard labels. The proposed methods require the reasonable assumption that the sum of the sensitivity and specificity is greater than 1. Three correction methods are presented: (1) an ordinary least squares correction for Normal outcome models, (2) a multi-step predictive value weighting method, and (3) a seamless expectation-maximization algorithm. We apply our misclassification correction strategies to investigate the mediating role of gestational hypertension on the association between maternal age and pre-term birth.
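As a small illustration of why the identifiability condition sensitivity + specificity > 1 matters, the classic Rogan-Gladen-style inversion below recovers a true prevalence from a misclassified one; it is a building block for intuition, not the paper's OLS, predictive value weighting, or EM corrections.

```python
# Invert P(M*=1) = se * P(M=1) + (1 - sp) * P(M=0) to recover P(M=1);
# the denominator se + sp - 1 must be positive for the inversion to work.
def corrected_prevalence(p_observed, sensitivity, specificity):
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("requires sensitivity + specificity > 1")
    p_true = (p_observed - (1.0 - specificity)) / denom
    return min(max(p_true, 0.0), 1.0)   # clip to [0, 1]

print(corrected_prevalence(0.30, sensitivity=0.85, specificity=0.90))  # ~0.267
```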
We obtain the almost sure strong consistency and a Berry-Esseen type bound for the maximum likelihood estimator L_n of the ensemble L for determinantal point processes (DPPs), strengthening and completing previous work initiated in Brunel, Moitra, Rigollet, and Urschel [BMRU17]. Numerical algorithms for estimating DPPs are developed and simulation studies are performed. Lastly, we give an explicit formula and a detailed discussion of the maximum likelihood estimator for block determinantal matrices of two-by-two submatrices and compare it with the frequency method.
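For reference, the log-likelihood that maximum likelihood estimation of an L-ensemble targets is log P(A) = log det(L_A) - log det(L + I); a minimal sketch of evaluating it is given below (the kernel and observed subsets are illustrative, and this is not the estimator studied in the paper).

```python
# Evaluate the L-ensemble DPP log-likelihood of a list of observed subsets.
import numpy as np

def dpp_log_likelihood(L, samples):
    """L: (N, N) symmetric PSD kernel; samples: list of non-empty index subsets."""
    n = L.shape[0]
    _, log_norm = np.linalg.slogdet(L + np.eye(n))     # log det(L + I)
    ll = 0.0
    for A in samples:
        A = np.asarray(A)
        _, log_det_A = np.linalg.slogdet(L[np.ix_(A, A)])
        ll += log_det_A - log_norm
    return ll

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
L = B @ B.T                                            # a PSD kernel on 5 items
print(dpp_log_likelihood(L, samples=[[0, 2], [1, 3, 4]]))
```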
This paper draws on two similar areas that have so far remained detached from each other. On the one hand, Dung's pioneering contributions to abstract argumentation, almost thirty years ago, gave rise to a plethora of successors, including abstract dialectical frameworks (ADFs). On the other hand, Boolean networks (BNs), devised as models of gene regulation, have been successful for studying the behavior of molecular processes within cells. ADFs and BNs are similar to each other: both can be viewed as functions from vectors of bits to vectors of bits. As soon as similarities emerge between these two formalisms, however, differences appear. For example, conflict-freedom is prominent in argumentation (where we are interested in a self-consistent, i.e., conflict-free, set of beliefs) but absent in BNs. By contrast, asynchrony (where only one gene is updated at a time) is conspicuous in BNs and lacking in argumentation. Finally, while a monotonicity-based notion occurs in the signed reasoning of both argumentation and gene regulation, a different, derivative-based notion appears only in the BN literature. To identify common mathematical structure between the two formalisms, these differences need clarification. This contribution is a partial review of both areas, in which we cover enough ground to exhibit their more evident similarities and then reconcile some of their apparent differences. We highlight a range of research avenues resulting from ironing out the discrepancies between these two fields. Unveiling their common concerns should enable the two areas to cross-fertilize and transfer ideas and results between each other.
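A minimal sketch of the shared view of both formalisms as a function f : {0,1}^n -> {0,1}^n, contrasting synchronous updates (all components at once) with the asynchronous single-component updates typical of BNs; the three-component example system is an illustrative assumption, not taken from either literature.

```python
# View a system (ADF or BN) as a Boolean update function on state vectors.
from itertools import product

def f(state):
    x, y, z = state
    return (y and not z,      # component 0
            x or z,           # component 1
            not x)            # component 2

def synchronous_step(state):
    """All components updated at once."""
    return f(state)

def asynchronous_successors(state):
    """All states reachable by updating exactly one component (BN-style asynchrony)."""
    target = f(state)
    succs = []
    for i in range(len(state)):
        if target[i] != state[i]:
            s = list(state)
            s[i] = target[i]
            succs.append(tuple(s))
    return succs

for s in product((False, True), repeat=3):
    print(s, "->", synchronous_step(s), "| async:", asynchronous_successors(s))
```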
In the analysis of spatially resolved transcriptomics data, detecting spatially variable genes (SVGs) is crucial. Numerous computational methods exist, but varying SVG definitions and methodologies lead to incomparable results. We review 31 state-of-the-art methods, categorizing SVGs into three types: overall, cell-type-specific, and spatial-domain-marker SVGs. Our review explains the intuitions underlying these methods, summarizes their applications, and categorizes the hypothesis tests they use in the trade-off between generality and specificity for SVG detection. We discuss challenges in SVG detection and propose future directions for improvement. Our review offers insights for method developers and users, advocating for category-specific benchmarking.
We investigate the set of invariant idempotent probabilities for countable idempotent iterated function systems (IFSs) defined on compact metric spaces. We demonstrate that, with constant weights, there exists a unique invariant idempotent probability. Utilizing Secelean's approach to countable IFSs, we introduce partially finite idempotent IFSs and prove that the sequence of invariant idempotent measures for these systems converges to the invariant measure of the original countable IFS. We then apply these results to approximate such measures with discrete systems, producing, in the one-dimensional case, data series whose Higuchi fractal dimension can be calculated. Finally, we provide numerical approximations for two-dimensional cases and discuss the application of generalized Higuchi dimensions in these scenarios.
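For the one-dimensional data series mentioned above, the Higuchi fractal dimension can be computed with the standard Higuchi (1988) procedure; a minimal sketch follows (the choice of k_max and the test series are illustrative assumptions).

```python
# Higuchi fractal dimension: fit the slope of log L(k) versus log(1/k),
# where L(k) is the average normalised curve length at scale k.
import numpy as np

def higuchi_fd(x, k_max=10):
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_k, log_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                           # offset m = 0 .. k-1
            n_steps = (N - 1 - m) // k
            if n_steps < 1:
                continue
            idx = m + np.arange(n_steps + 1) * k
            dist = np.abs(np.diff(x[idx])).sum()
            lengths.append(dist * (N - 1) / (n_steps * k) / k)   # Higuchi normalisation
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_k, log_L, 1)
    return slope

print(higuchi_fd(np.random.default_rng(0).standard_normal(2000)))  # ~2 for white noise
```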
We investigate a Tikhonov regularization scheme specifically tailored for shallow neural networks in the context of solving a classic inverse problem: approximating an unknown function and its derivatives on a unit cubic domain from noisy measurements. The proposed Tikhonov regularization scheme incorporates a penalty term based on one of three distinct yet closely related network (semi)norms: the extended Barron norm, the variation norm, and the Radon-BV seminorm. The choice of penalty term is contingent on the specific architecture of the neural network being used. We establish connections between these network norms, tracing in particular their dependence on the dimensionality index, to deepen our understanding of how the norms interplay with each other. We revisit the universality of function approximation under the various norms, establish a rigorous error-bound analysis for the Tikhonov regularization scheme, and make the dependence on the dimensionality index explicit, providing a clearer understanding of how dimensionality affects approximation performance and of how to design a neural network for diverse approximation tasks.
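A minimal PyTorch sketch of Tikhonov-style regularised training of a shallow ReLU network on noisy samples of an unknown function on the unit cube; the penalty below is a path-norm surrogate, sum_i |a_i| ||(w_i, b_i)||, standing in for the Barron/variation/Radon-BV (semi)norms, and all hyperparameters are illustrative assumptions rather than the paper's exact scheme.

```python
# Shallow ReLU network fitted by least squares plus a norm penalty (Tikhonov-style).
import torch
import torch.nn as nn

class ShallowReLU(nn.Module):
    def __init__(self, dim, width):
        super().__init__()
        self.hidden = nn.Linear(dim, width)
        self.outer = nn.Linear(width, 1, bias=False)

    def forward(self, x):
        return self.outer(torch.relu(self.hidden(x)))

    def path_norm(self):
        # sum_i |a_i| * ||(w_i, b_i)||_2 over hidden units
        inner = torch.cat([self.hidden.weight, self.hidden.bias[:, None]], dim=1)
        return (self.outer.weight.abs().squeeze(0) * inner.norm(dim=1)).sum()

torch.manual_seed(0)
dim, n = 3, 200
X = torch.rand(n, dim)                                       # samples in the unit cube
y = torch.sin(X.sum(dim=1, keepdim=True)) + 0.05 * torch.randn(n, 1)   # noisy target

model = ShallowReLU(dim, width=64)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 1e-3                                                   # penalty weight
for _ in range(500):
    opt.zero_grad()
    loss = ((model(X) - y) ** 2).mean() + lam * model.path_norm()
    loss.backward()
    opt.step()
print(loss.item())
```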
Non-representative surveys are commonly used and widely available but suffer from selection bias that generally cannot be entirely eliminated using weighting techniques. Instead, we propose a Bayesian method to synthesize longitudinal representative unbiased surveys with non-representative biased surveys by estimating the degree of selection bias over time. We show in a simulation study that synthesizing biased and unbiased surveys together outperforms using the unbiased surveys alone, even when the selection bias evolves in a complex manner over time. Using COVID-19 vaccination data, we synthesize two large-sample biased surveys with an unbiased survey to reduce uncertainty in nowcasting and inference estimates while retaining the empirical credible interval coverage. Ultimately, we can in principle obtain the properties of a large-sample unbiased survey, provided the survey assumed to be unbiased, which anchors the estimates, is indeed unbiased at all time points.
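A toy illustration of the core idea, assuming for simplicity a constant selection bias on the logit scale (the paper's model lets the bias evolve over time) and using a normal-normal conjugate update in place of the full Bayesian synthesis; all numbers below are made up.

```python
# Learn the biased survey's offset on the logit scale where both surveys overlap,
# then use the posterior mean of that offset to de-bias a biased-only time point.
import numpy as np

def logit(p): return np.log(p / (1 - p))
def expit(x): return 1 / (1 + np.exp(-x))

p_unbiased = np.array([0.20, 0.32, 0.45])        # small unbiased survey
p_biased   = np.array([0.28, 0.41, 0.55])        # large biased survey (overstates uptake)

bias_obs = logit(p_biased) - logit(p_unbiased)   # per-time-point bias measurements
se2 = np.full_like(bias_obs, 0.02)               # rough squared standard errors

tau2 = 1.0                                       # prior: delta ~ N(0, tau^2)
post_prec = 1 / tau2 + np.sum(1 / se2)
post_mean = np.sum(bias_obs / se2) / post_prec

p_biased_new = 0.63                              # later time point, biased survey only
print("corrected estimate:", expit(logit(p_biased_new) - post_mean))
```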
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
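A minimal sketch of the comparison mechanism, assuming a simple distance between normalised saliency maps and Shepard's exponential decay g = exp(-c d); how the resulting generalisation strength would map onto predicted agreement with the AI is an illustrative assumption, not the paper's fitted model.

```python
# Compare the AI's saliency map to the explanation the observer would give,
# and turn the distance into a generalisation strength via Shepard's law.
import numpy as np

def shepard_generalization(ai_saliency, own_saliency, c=1.0):
    a = ai_saliency / ai_saliency.sum()
    b = own_saliency / own_saliency.sum()
    d = np.abs(a - b).sum()           # distance in a simple similarity space
    return np.exp(-c * d)             # Shepard's universal law of generalization

rng = np.random.default_rng(0)
ai_map = rng.random((8, 8))
own_map = ai_map + 0.1 * rng.random((8, 8))   # observer's own explanation is close
print(shepard_generalization(ai_map, own_map))
```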