Diagnosticians use an observed proportion as a direct estimate of the posterior probability of a diagnosis. A diagnostician might therefore regard a continuous Gaussian probability distribution of possible numerical outcomes, conditional on the information in the study methods and data, as a posterior probability distribution. Similarly, they might regard the distribution of possible means based on a SEM as a posterior probability distribution too. If the converse likelihood distribution of the observed mean conditional on any hypothetical mean (e.g. the null hypothesis) is assumed to be the same as the above posterior distribution (as is customary), then by Bayes' rule the prior distribution of all possible hypothetical means is uniform. It follows that the probability Q of any theoretically true mean falling into a tail beyond a null hypothesis would be equal to that tail's area as a proportion of the whole. It also follows that the P value (the probability of the observed mean or something more extreme conditional on the null hypothesis) is equal to Q. Replication involves doing two independent studies, thus doubling the variance for the combined posterior probability distribution. So, if the original effect size was 1.96, the number of observations was 100, the SEM was 1, and the original P value was 0.025, the theoretical probability of a replicating study getting a P value of up to 0.025 again is only 0.283. Applying this doubled variance to achieve a power of 80% doubles the required number of observations compared with conventional approaches. If a replicating study is to achieve a P value of up to 0.025 yet again with a probability of 0.8, this requires 3 times as many observations in the power calculation. This might explain the replication crisis.
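A quick numerical check of the replication figure above is possible under one reading of the arithmetic: the sketch below assumes that the combined predictive distribution of the replicate's standardised mean is centred at the observed 1.96 with the variance doubled to 2, and that reaching a P value of up to 0.025 again corresponds to exceeding the 1.96 criterion rescaled to the doubled-variance scale. These modelling choices are illustrative assumptions, not a restatement of the exact derivation.

\begin{verbatim}
from math import sqrt
from scipy.stats import norm

z_obs = 1.96            # observed effect size in SEM units (original one-sided P = 0.025)
var_combined = 2.0      # replication doubles the variance of the combined distribution

# Assumption: the 1.96 criterion is rescaled by sqrt(2) on the doubled-variance scale.
threshold = z_obs * sqrt(var_combined)

# Probability that the replicating study again reaches P <= 0.025.
p_replicate = 1 - norm.cdf(threshold, loc=z_obs, scale=sqrt(var_combined))
print(round(p_replicate, 3))    # approximately 0.283
\end{verbatim}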
This study introduces a novel knowledge-enhanced tokenisation mechanism, K-Tokeniser, for clinical text processing. Technically, at the initialisation stage, K-Tokeniser populates global representations of tokens based on the semantic types of domain concepts (such as drugs or diseases) from either a domain ontology like the Unified Medical Language System or the training data of the task-related corpus. At the training or inference stage, sentence-level localised context is utilised for choosing the optimal global token representation to realise semantic-based tokenisation. To avoid pretraining with the new tokeniser, an embedding initialisation approach is proposed to generate representations for new tokens. Using three transformer-based language models, a comprehensive set of experiments is conducted on four real-world datasets to evaluate K-Tokeniser in a wide range of clinical text analytics tasks, including clinical concept and relation extraction, automated clinical coding, clinical phenotype identification, and clinical research article classification. Overall, our models demonstrate consistent improvements over their counterparts in all tasks. In particular, substantial improvements are observed in the automated clinical coding task, with a 13\% increase in Micro $F_1$ score. Furthermore, K-Tokeniser also shows a significant capacity to facilitate faster convergence of language models. Specifically, using K-Tokeniser, the language models require only 50\% of the training data to achieve the best performance of the baseline tokeniser using all training data in the concept extraction task, and less than 20\% of the data for the automated coding task. It is worth mentioning that all these improvements require no pre-training process, making the approach generalisable.
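One widely used way to give newly added tokens usable embeddings without further pretraining is to initialise each new token's vector as the mean of the embeddings of the subwords it replaces under the original tokeniser. The sketch below illustrates that idea with the Hugging Face transformers library; the backbone model, the example token, and the mean-of-subwords rule are assumptions for illustration and are not necessarily the exact K-Tokeniser procedure.

\begin{verbatim}
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"   # placeholder backbone; any BERT-style model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

new_tokens = ["pneumonitis"]       # hypothetical domain token added by a clinical tokeniser

# Record how the *original* tokenizer splits each new token into subwords.
subword_ids = {t: tokenizer(t, add_special_tokens=False)["input_ids"] for t in new_tokens}

tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
embedding_matrix = model.get_input_embeddings().weight

with torch.no_grad():
    for token in new_tokens:
        # Assumed rule: new token embedding = mean of its original subword embeddings.
        ids = torch.tensor(subword_ids[token])
        new_id = tokenizer.convert_tokens_to_ids(token)
        embedding_matrix[new_id] = embedding_matrix[ids].mean(dim=0)
\end{verbatim}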
We consider the problem of a graph subjected to adversarial perturbations, such as those arising from cyber-attacks, where edges are covertly added or removed. The adversarial perturbations occur during the transmission of the graph between a sender and a receiver. To counteract potential perturbations, we explore a repetition coding scheme with sender-assigned binary noise and majority voting on the receiver's end to rectify the graph's structure. Our approach operates without prior knowledge of the attack's characteristics. We provide an analytical derivation of a bound on the number of repetitions needed to satisfy probabilistic constraints on the quality of the reconstructed graph. We show that the method can accurately decode graphs subjected to non-random edge removal, namely removal of edges connected to the vertices with the highest eigenvector centrality, in addition to random addition and removal of edges by the attacker.
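A minimal sketch of the repetition-plus-majority-vote idea is given below, assuming the graph is sent as a binary adjacency matrix, the sender XORs each copy with its own random binary noise (known to the receiver), and the attacker flips a random subset of entries in transit; the graph size, noise level, attack rate, and purely random attack model are illustrative assumptions only.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def transmit_and_decode(adj, n_repeats=15, flip_noise=0.1, attack_rate=0.05):
    """Send adj n_repeats times with sender-assigned XOR noise; majority-vote decode."""
    n = adj.shape[0]
    votes = np.zeros((n, n), dtype=int)
    for _ in range(n_repeats):
        noise = rng.random((n, n)) < flip_noise        # sender's own binary noise
        sent = adj ^ noise                             # XOR-masked copy of the graph
        attack = rng.random((n, n)) < attack_rate      # adversarial bit flips in transit
        received = sent ^ attack
        votes += received ^ noise                      # receiver removes the known noise
    return (votes > n_repeats // 2).astype(int)        # per-edge majority vote

# Toy example: a random symmetric graph, decoded after adversarial transmission.
adj = (rng.random((20, 20)) < 0.2).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T
decoded = transmit_and_decode(adj.astype(bool))
print("edge errors:", int(np.abs(decoded - adj).sum()))
\end{verbatim}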
In nonparametric statistics, rate-optimal estimators typically balance bias and stochastic error. Recent work on overparametrization raises the question of whether rate-optimal estimators exist that do not obey this trade-off. In this work we consider pointwise estimation in the Gaussian white noise model with regression function $f$ in a class of $\beta$-H\"older smooth functions. Let 'worst-case' refer to the supremum over all functions $f$ in the H\"older class. It is shown that any estimator with worst-case bias $\lesssim n^{-\beta/(2\beta+1)}=: \psi_n$ must necessarily also have a worst-case mean absolute deviation that is lower bounded by $\gtrsim \psi_n.$ To derive the result, we establish abstract inequalities relating the change of expectation between two probability measures to the mean absolute deviation.
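In display form, the claimed trade-off can be written as follows, where $\hat f(x_0)$ denotes an arbitrary estimator of $f(x_0)$ at a fixed point $x_0$, $\mathcal{H}^{\beta}$ denotes the $\beta$-H\"older class, and the mean absolute deviation is taken about the estimator's own expectation (this notation is introduced here for illustration):
\[
\sup_{f\in\mathcal{H}^{\beta}}\big|\mathbb{E}_f[\hat f(x_0)]-f(x_0)\big| \lesssim \psi_n
\quad\Longrightarrow\quad
\sup_{f\in\mathcal{H}^{\beta}}\mathbb{E}_f\big|\hat f(x_0)-\mathbb{E}_f[\hat f(x_0)]\big| \gtrsim \psi_n,
\qquad \psi_n := n^{-\beta/(2\beta+1)}.
\]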
One of the most recent and fascinating breakthroughs in artificial intelligence is ChatGPT, a chatbot which can simulate human conversation. ChatGPT is an instance of GPT-4, a language model based on generative pre-trained transformers. So if one wants to study, from a theoretical point of view, how powerful such artificial intelligence can be, one approach is to consider transformer networks and to study which problems one can solve with these networks theoretically. Here it is important not only what kinds of models these networks can approximate, and how well they can generalize the knowledge learned by choosing the best possible approximation to a concrete data set, but also how well optimization of such transformer networks based on a concrete data set works. In this article we consider all three of these aspects simultaneously and show a theoretical upper bound on the misclassification probability of a transformer network fitted to the observed data. For simplicity we focus in this context on transformer encoder networks, which can be applied to define an estimate in the context of a classification problem involving natural language.
Neighborhood disadvantage is associated with worse health and cognitive outcomes. Morphological similarity networks (MSNs) are a promising approach to elucidate cortical network patterns underlying complex cognitive functions. We hypothesized that MSNs could capture changes in cortical patterns related to neighborhood disadvantage and cognitive function. This cross-sectional study included cognitively unimpaired participants from two large Alzheimer's disease studies at the University of Wisconsin-Madison. Neighborhood disadvantage status was obtained using the Area Deprivation Index (ADI). Cognitive performance was assessed on memory, processing speed, and executive function. MSNs were constructed for each participant based on the similarity in the distribution of cortical thickness across brain regions, followed by computation of local and global network features. Associations of ADI with cognitive scores and MSN features were examined using linear regression and mediation analysis. ADI showed negative associations with category fluency, implicit learning speed, story recall, and modified preclinical Alzheimer's cognitive composite scores, indicating worse cognitive function among those living in more disadvantaged neighborhoods. Local network features of frontal and temporal regions differed by ADI status. Centrality of the left lateral orbitofrontal region partially mediated the association between neighborhood disadvantage and story recall performance. Our preliminary findings suggest differences in local cortical organization by neighborhood disadvantage, which partially mediated the relationship between ADI and cognitive performance, providing a possible network-based mechanism to explain, in part, the risk for poor cognitive functioning associated with disadvantaged neighborhoods.
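A simplified sketch of how a participant-level morphological similarity network might be constructed and summarised is shown below, using vertex-wise cortical thickness per region, one minus the Kolmogorov-Smirnov distance as the distributional similarity, a proportional threshold, and networkx centrality; the simulated data, similarity measure, threshold, and network feature are illustrative assumptions rather than the study's exact pipeline.

\begin{verbatim}
import numpy as np
import networkx as nx
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical data: vertex-wise cortical thickness values for 10 regions of one
# participant (in practice these come from a FreeSurfer-style parcellation).
regions = {f"region_{i}": rng.normal(2.5 + 0.1 * i, 0.3, size=200) for i in range(10)}
names = list(regions)
n = len(names)

# Similarity between regions: 1 minus the Kolmogorov-Smirnov distance between
# their thickness distributions (one possible choice of distributional similarity).
sim = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        ks = ks_2samp(regions[names[i]], regions[names[j]]).statistic
        sim[i, j] = sim[j, i] = 1.0 - ks

# Keep the strongest similarities (simple proportional threshold) and build the MSN.
threshold = np.percentile(sim[sim > 0], 70)
G = nx.Graph()
G.add_nodes_from(names)
for i in range(n):
    for j in range(i + 1, n):
        if sim[i, j] >= threshold:
            G.add_edge(names[i], names[j], weight=sim[i, j])

# Local network feature, e.g. weighted degree centrality per region.
print(dict(G.degree(weight="weight")))
\end{verbatim}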
Difference-in-differences (DID) is a popular approach to identify the causal effects of treatments and policies in the presence of unmeasured confounding. DID identifies the sample average treatment effect in the treated (SATT). However, a goal of such research is often to inform decision-making in target populations outside the treated sample. Transportability methods have been developed to extend inferences from study samples to external target populations; these methods have primarily been developed and applied in settings where identification is based on conditional independence between the treatment and potential outcomes, such as in a randomized trial. We present a novel approach to identifying and estimating effects in a target population, based on DID conducted in a study sample that differs from the target population. We present a range of assumptions under which one may identify causal effects in the target population and employ causal diagrams to illustrate these assumptions. In most realistic settings, results depend critically on the assumption that any unmeasured confounders are not effect measure modifiers on the scale of the effect of interest (e.g., risk difference, odds ratio). We develop several estimators of transported effects, including g-computation, inverse odds weighting, and a doubly robust estimator based on the efficient influence function. Simulation results support theoretical properties of the proposed estimators. As an example, we apply our approach to study the effects of a 2018 US federal smoke-free public housing law on air quality in public housing across the US, using data from a DID study conducted in New York City alone.
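As a rough illustration of one of the estimator families mentioned above, the sketch below transports a DID estimate to a target population via inverse odds of sampling weights: a model for membership in the study sample versus the target is fit on a shared covariate, and study-sample units are reweighted by the odds of belonging to the target before the pre/post contrast is taken. The simulated data, logistic sampling model, and variable names are assumptions for illustration, not the authors' exact estimators.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated study sample and target population, sharing a covariate X.
n_study, n_target = 2000, 2000
x_study = rng.normal(0.5, 1, n_study)
x_target = rng.normal(0.0, 1, n_target)

# Within the study sample: treatment indicator A and pre/post outcomes, with a
# treatment effect on the change score of 2 + 0.5 * X (so X modifies the effect).
a = rng.integers(0, 2, n_study)
y_pre = 1.0 + x_study + rng.normal(0, 1, n_study)
y_post = y_pre + 0.5 + a * (2.0 + 0.5 * x_study) + rng.normal(0, 1, n_study)

# Sampling model: probability of being in the study sample given X.
X = np.concatenate([x_study, x_target]).reshape(-1, 1)
S = np.concatenate([np.ones(n_study), np.zeros(n_target)])
p_study = LogisticRegression().fit(X, S).predict_proba(x_study.reshape(-1, 1))[:, 1]

# Inverse odds weights: odds of being in the target rather than the study sample.
w = (1 - p_study) / p_study

# Weighted DID, transported to the target population (true value here is 2.0).
change = y_post - y_pre
did_transported = (np.average(change[a == 1], weights=w[a == 1])
                   - np.average(change[a == 0], weights=w[a == 0]))
print(round(did_transported, 2))
\end{verbatim}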
Recently, addressing spatial confounding has become a major topic in spatial statistics. However, the literature has provided conflicting definitions, and many proposed definitions do not address the issue of confounding as it is understood in causal inference. We define spatial confounding as the existence of an unmeasured causal confounder with a spatial structure. We present a causal inference framework for nonparametric identification of the causal effect of a continuous exposure on an outcome in the presence of spatial confounding. We propose a double machine learning (DML) procedure in which flexible models are used to regress both the exposure and outcome variables on confounders to arrive at a causal estimator with favorable robustness properties and convergence rates, and we prove that this approach is consistent and asymptotically normal under spatial dependence. As far as we are aware, this is the first approach to spatial confounding that does not rely on restrictive parametric assumptions (such as linearity, effect homogeneity, or Gaussianity) for both identification and estimation. We demonstrate the advantages of the DML approach analytically and in simulations. We apply our methods and reasoning to a study of the effect of fine particulate matter exposure during pregnancy on birthweight in California.
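For concreteness, the sketch below shows the generic partialling-out form of DML with two-fold cross-fitting: flexible regressions of the outcome and the exposure on confounders, followed by a regression of outcome residuals on exposure residuals. The simulated data, random-forest learners, and two-fold split are illustrative assumptions, and the spatial-dependence theory of the paper is not reproduced here.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Simulated data: confounders W (spatially indexed in the real application),
# continuous exposure A, and outcome Y with a true causal effect of 1.5.
n = 2000
W = rng.normal(size=(n, 5))
A = np.sin(W[:, 0]) + 0.5 * W[:, 1] + rng.normal(size=n)
Y = 1.5 * A + np.cos(W[:, 0]) + W[:, 2] ** 2 + rng.normal(size=n)

res_A, res_Y = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(W):
    # Cross-fitted nuisance regressions E[A | W] and E[Y | W], evaluated out of fold.
    m_A = RandomForestRegressor(n_estimators=200, random_state=0).fit(W[train], A[train])
    m_Y = RandomForestRegressor(n_estimators=200, random_state=0).fit(W[train], Y[train])
    res_A[test] = A[test] - m_A.predict(W[test])
    res_Y[test] = Y[test] - m_Y.predict(W[test])

# Final stage: effect estimate from the residual-on-residual regression.
theta_hat = np.sum(res_A * res_Y) / np.sum(res_A ** 2)
print(round(theta_hat, 2))   # close to the true effect of 1.5
\end{verbatim}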
We consider the problem of sampling a high-dimensional multimodal target probability measure. We assume that a good proposal kernel to move only a subset of the degrees of freedom (also known as collective variables) is known a priori. This proposal kernel can for example be built using normalizing flows. We show how to extend the move from the collective-variable space to the full space and how to implement an accept-reject step in order to obtain a chain that is reversible with respect to the target probability measure. The accept-reject step does not require knowledge of the marginal of the original measure in the collective variables (namely, the free energy). The obtained algorithm admits several variants, some of them being very close to methods which have been proposed previously in the literature. We show how the obtained acceptance ratio can be expressed in terms of the work which appears in the Jarzynski-Crooks equality, at least for some variants. Numerical illustrations demonstrate the efficiency of the approach on various simple test cases, and allow us to compare the variants of the algorithm.
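As a much-simplified illustration of keeping a chain reversible when the proposal moves only a subset of coordinates, the sketch below runs a plain Metropolis-Hastings step in which a symmetric random-walk proposal acts on a single 'collective' coordinate while the remaining coordinates are left unchanged, and the accept-reject step uses the full target. The actual construction described above (extending a learned collective-variable kernel to the full space and its Jarzynski-Crooks work interpretation) is richer than this; the bimodal target, proposal, and step size are illustrative assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Illustrative target: bimodal in the first ("collective") coordinate,
    # standard Gaussian in the remaining coordinates.
    return (np.logaddexp(-0.5 * (x[0] - 3) ** 2, -0.5 * (x[0] + 3) ** 2)
            - 0.5 * np.sum(x[1:] ** 2))

def mh_step(x, step=4.0):
    # Symmetric random-walk proposal acting only on the collective coordinate x[0].
    y = x.copy()
    y[0] += step * rng.normal()
    # Accept-reject with respect to the full target keeps the chain reversible.
    if np.log(rng.random()) < log_target(y) - log_target(x):
        return y
    return x

x = np.zeros(5)
samples = []
for _ in range(20000):
    x = mh_step(x)
    samples.append(x[0])
print("fraction of samples in the right-hand mode:", np.mean(np.array(samples) > 0))
\end{verbatim}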
Transparency is an important aspect of human-robot interaction (HRI), as it can improve system trust and usability, leading to improved communication and performance. However, most transparency models focus only on the amount of information given to users. In this paper, we propose a bidirectional transparency model, termed a transparency-based action (TBA) model, which, in addition to providing transparency information (robot-to-human), allows the robot to take actions based on transparency information received from the human (robot-of-human and human-to-robot). We implemented a three-level (High, Medium, and Low) TBA model on a robotic system trainer in two pilot studies (with students as participants) to examine its impact on acceptance and HRI. Based on the pilot studies' results, the Medium TBA level was not included in the main experiment, which was conducted with older adults (aged 75-85). In that experiment, two TBA levels were compared: Low (basic information including only robot-to-human transparency) and High (including additional information relating to predicted outcomes, with robot-of-human and human-to-robot transparency). The results revealed a significant difference between the two TBA levels of the model in terms of perceived usefulness, ease of use, and attitude. The High TBA level resulted in improved user acceptance and was preferred by the users.
Error bounds are derived for sampling and estimation using a discretization of an intrinsically defined Langevin diffusion with invariant measure $\text{d}\mu_\phi \propto e^{-\phi} \mathrm{dvol}_g $ on a compact Riemannian manifold. Two estimators of linear functionals of $\mu_\phi $ based on the discretized Markov process are considered: a time-averaging estimator based on a single trajectory and an ensemble-averaging estimator based on multiple independent trajectories. Imposing no restrictions beyond a nominal level of smoothness on $\phi$, first-order error bounds, in the discretization step size, on the bias and variance/mean-square error of both estimators are derived. The order of error matches the optimal rate in Euclidean and flat spaces, and leads to a first-order bound on the distance between the invariant measure $\mu_\phi$ and a stationary measure of the discretized Markov process. This order is preserved even when retractions are used in place of exponential maps that are unavailable in closed form, thus enhancing the practicality of the proposed algorithms. The generality of the proof techniques, which exploit links between two partial differential equations and the semigroup of operators corresponding to the Langevin diffusion, renders them amenable to the study of a more general class of sampling algorithms related to the Langevin diffusion. Conditions for extending the analysis to the case of non-compact manifolds are discussed. Numerical illustrations with distributions, log-concave and otherwise, on manifolds of positive and negative curvature elucidate the derived bounds and demonstrate the practical utility of the sampling algorithm.
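A small sketch of one discretisation of this kind is given below, assuming the manifold is the unit sphere, the exponential map is replaced by the projective retraction $x \mapsto (x+v)/\|x+v\|$, and $\phi(x) = \alpha x_3$; the time-averaging estimator of a linear functional corresponds to the first of the two estimators described above. The manifold, potential, retraction, step size, and trajectory length are illustrative choices, not those analysed in the paper.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha = 4.0                       # illustrative potential phi(x) = alpha * x[2]

def grad_phi(x):
    return np.array([0.0, 0.0, alpha])

def project_tangent(x, v):
    # Orthogonal projection of v onto the tangent space of the unit sphere at x.
    return v - np.dot(x, v) * x

def retract(x, v):
    # Projective retraction used in place of the exponential map.
    y = x + v
    return y / np.linalg.norm(y)

def langevin_step(x, h=0.01):
    drift = -project_tangent(x, grad_phi(x))
    noise = np.sqrt(2 * h) * project_tangent(x, rng.normal(size=3))
    return retract(x, h * drift + noise)

# Time-averaging estimator of a linear functional, here E[x_3] under mu_phi.
x = np.array([1.0, 0.0, 0.0])
total, n_steps, burn_in = 0.0, 200000, 10000
for k in range(n_steps):
    x = langevin_step(x)
    if k >= burn_in:
        total += x[2]
print("time-average estimate of E[x_3]:", total / (n_steps - burn_in))
\end{verbatim}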