Knowing whether vaccine protection wanes over time is important for health policy and drug development. However, quantifying waning effects is difficult. A simple contrast of vaccine efficacy at two different times compares different populations of individuals: those who were uninfected at the first time versus those who remain uninfected until the second time. Thus, the contrast of vaccine efficacy at early and late times cannot be interpreted as a causal effect. We propose to quantify vaccine waning using the challenge effect, which is a contrast of outcomes under controlled exposures to the infectious agent following vaccination. We identify sharp bounds on the challenge effect under non-parametric assumptions that are broadly applicable in vaccine trials using routinely collected data. We demonstrate that the challenge effect can differ substantially from conventional vaccine efficacy due to depletion of susceptible individuals from the risk set over time. Finally, we apply the methods to derive bounds on the waning of the BNT162b2 COVID-19 vaccine using data from a placebo-controlled randomized trial. Our estimates of the challenge effect suggest waning protection beyond two months after administration of the second vaccine dose.
This paper describes our submission to the MEDIQA-CORR 2024 shared task on automatically detecting and correcting medical errors in clinical notes. We report results for three methods of few-shot In-Context Learning (ICL) augmented with Chain-of-Thought (CoT) and reason prompts using a large language model (LLM). In the first method, we manually analyse a subset of the training and validation datasets to infer three CoT prompts by examining the error types in the clinical notes. In the second method, we utilise the training dataset to prompt the LLM to deduce reasons for the correctness or incorrectness of each note. The constructed CoTs and reasons are then augmented with ICL examples to solve the tasks of error detection, span identification, and error correction. Finally, we combine the two methods using a rule-based ensemble. Across the three sub-tasks, our ensemble method ranks 3rd in both sub-tasks 1 and 2, and 7th in sub-task 3 among all submissions.
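To make the prompting setup concrete, the sketch below assembles a few-shot ICL prompt with CoT exemplars for the error detection and correction task. The exemplar notes, reasoning strings, answer format, and build_prompt helper are hypothetical placeholders invented for illustration; they are not the prompts or LLM interface used in the submission.

    # Hypothetical few-shot ICL prompt with chain-of-thought exemplars for clinical
    # error detection/correction; exemplar texts and the answer format are invented.

    FEW_SHOT_EXEMPLARS = [
        {
            "note": "Patient with type 2 diabetes was started on levothyroxine for glycemic control.",
            "cot": "Levothyroxine treats hypothyroidism, not hyperglycemia, so the drug contradicts the stated indication.",
            "answer": "ERROR | span: 'levothyroxine' | correction: 'metformin'",
        },
        {
            "note": "A 45-year-old man presents with chest pain radiating to the left arm.",
            "cot": "The note is internally consistent; no contradictory statement is found.",
            "answer": "CORRECT",
        },
    ]

    def build_prompt(target_note: str) -> str:
        """Concatenate CoT exemplars and the target note into a single prompt string."""
        parts = [
            "Decide whether the clinical note contains a medical error. "
            "If it does, report the erroneous span and a corrected version.\n"
        ]
        for ex in FEW_SHOT_EXEMPLARS:
            parts.append(f"Note: {ex['note']}\nReasoning: {ex['cot']}\nAnswer: {ex['answer']}\n")
        parts.append(f"Note: {target_note}\nReasoning:")
        return "\n".join(parts)

    # The assembled prompt would then be sent to the LLM and its answer parsed
    # into an error flag, an error span, and a proposed correction.
    print(build_prompt("The patient was prescribed insulin to manage his hypertension."))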
Green hydrogen is critical for decarbonising hard-to-electrify sectors, but faces high costs and investment risks. Here we define and quantify the green hydrogen ambition and implementation gap, showing that meeting hydrogen expectations will remain challenging despite surging announcements of projects and subsidies. Tracking 137 projects over three years, we identify a wide 2022 implementation gap with only 2% of global capacity announcements finished on schedule. In contrast, the 2030 ambition gap towards 1.5°C scenarios is gradually closing as the announced project pipeline has nearly tripled to 441 GW within three years. However, we estimate that, without carbon pricing, realising all these projects would require global subsidies of $1.6 trillion ($1.2-2.6 trillion range), far exceeding announced subsidies. Given past and future implementation gaps, policymakers must prepare for prolonged green hydrogen scarcity. Policy support needs to secure hydrogen investments, but should focus on applications where hydrogen is indispensable.
In evidence synthesis, effect modifiers are typically described as variables that induce treatment effect heterogeneity at the individual level, through treatment-covariate interactions in an outcome model parametrized at that level. As such, effect modification is defined with respect to a conditional measure, but marginal effect estimates are required for population-level decisions in health technology assessment. For non-collapsible measures, purely prognostic variables that are not determinants of treatment response at the individual level may modify marginal effects, even where there is individual-level treatment effect homogeneity. With heterogeneity, marginal effects for measures that are not directly collapsible cannot be expressed in terms of marginal covariate moments, and generally depend on the joint distribution of conditional effect measure modifiers and purely prognostic variables. There are implications for recommended practices in evidence synthesis. Unadjusted anchored indirect comparisons can be biased even in the absence of individual-level treatment effect heterogeneity, or even when marginal covariate moments are balanced across studies. Covariate adjustment may be necessary to account for cross-study imbalances in joint covariate distributions involving purely prognostic variables. In the absence of individual patient data for the target, covariate adjustment approaches are inherently limited in their ability to remove bias for measures that are not directly collapsible. Directly collapsible measures would facilitate the transportability of marginal effects between studies by: (1) reducing dependence on model-based covariate adjustment where there is individual-level treatment effect homogeneity or marginal covariate moments are balanced; and (2) facilitating the selection of baseline covariates for adjustment where there is individual-level treatment effect heterogeneity.
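A small numerical example, not taken from the paper, illustrates the underlying non-collapsibility: with two equally sized strata of a purely prognostic covariate and a common conditional odds ratio of 2, the marginal odds ratio is roughly 1.72 rather than 2, even though there is no conditional effect modification.

    # Non-collapsibility of the odds ratio: homogeneous conditional OR = 2 across
    # two equally sized strata of a purely prognostic covariate X, yet the
    # marginal OR differs from 2.

    def odds(risk):
        return risk / (1 - risk)

    control_risk = {"X=0": 0.10, "X=1": 0.50}   # X is purely prognostic
    conditional_or = 2.0                        # identical in both strata

    treated_risk = {x: odds(r) * conditional_or / (1 + odds(r) * conditional_or)
                    for x, r in control_risk.items()}

    marginal_control = sum(control_risk.values()) / 2    # 0.30
    marginal_treated = sum(treated_risk.values()) / 2    # ~0.424

    marginal_or = odds(marginal_treated) / odds(marginal_control)
    print(round(marginal_or, 3))   # ~1.719, not 2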
We use the multilevel Monte Carlo (MLMC) method to estimate uncertainties in a Henry-like salt water intrusion problem with a fracture. The flow is induced by the variation of the density of the fluid phase, which depends on the mass fraction of salt. We assume that the fracture has a known, fixed location but an uncertain aperture. Other uncertain inputs are the porosity and permeability fields and the recharge. In our setting, porosity and permeability vary spatially and the recharge is time-dependent. For each realisation of these uncertain parameters, the evolution of the mass fraction and pressure fields is modelled by a system of non-linear, time-dependent PDEs with a jump of the solution at the fracture. The uncertainties propagate into the distribution of the salt concentration, which is an important characteristic of the quality of water resources. We show that MLMC is able to reduce the overall computational cost compared to classical Monte Carlo methods. This is achieved by balancing discretisation and statistical errors. Multiple scenarios are evaluated at different spatial and temporal mesh levels. The deterministic solver ug4 is run in parallel to compute all stochastic scenarios.
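As a point of reference, the generic MLMC estimator behind such studies replaces a single-level Monte Carlo average with a telescoping sum over mesh levels, E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}], so that most samples are drawn on coarse, cheap levels. The sketch below is a minimal, solver-agnostic illustration with a toy quantity of interest; the coupled density-driven flow solver, the sample allocation strategy, and the ug4 setup from the paper are not reproduced.

    import numpy as np

    def mlmc_estimate(sample_pair, n_samples):
        """Generic MLMC estimator.

        sample_pair(l) -> (q_fine, q_coarse): the quantity of interest computed on
        mesh levels l and l-1 from the SAME random input (q_coarse == 0 at l == 0).
        n_samples[l]   -> number of Monte Carlo samples allocated to level l.
        """
        estimate = 0.0
        for level, n in enumerate(n_samples):
            diffs = np.array([np.subtract(*sample_pair(level)) for _ in range(n)])
            estimate += diffs.mean()          # estimate of E[Q_l - Q_{l-1}]
        return estimate

    # Toy usage: Q_l is a midpoint-rule approximation of E[g(U)], U ~ Uniform(0, 1),
    # with 2**l cells; fine and coarse values share the same draw of U.
    rng = np.random.default_rng(1)

    def sample_pair(level):
        u = rng.random()
        def q(l):
            cells = 2 ** l
            midpoint = (np.floor(u * cells) + 0.5) / cells
            return np.exp(midpoint)
        return q(level), (q(level - 1) if level > 0 else 0.0)

    print(mlmc_estimate(sample_pair, n_samples=[4000, 1000, 250]))
    # ~1.71, approximating e - 1 up to level-2 discretisation bias and sampling noise

In practice, the per-level sample sizes are chosen from estimated variances and costs so that the statistical error is balanced against the discretisation error, which is where the cost savings over single-level Monte Carlo come from.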
Nonparametric estimators for the mean and the covariance functions of functional data are proposed. The setup covers a wide range of practical situations. The random trajectories are not necessarily differentiable, have unknown regularity, and are measured with error at discrete design points. The measurement error can be heteroscedastic, and the design points can be either randomly drawn or common to all curves. The estimators depend on the local regularity of the stochastic process generating the functional data. We consider a simple estimator of this local regularity which exploits the replication and regularization features of functional data. Next, we use the "smoothing first, then estimate" approach for the mean and the covariance functions. The resulting estimators can be applied to both sparsely and densely sampled curves, are easy to calculate and to update, and perform well in simulations. Simulations built upon a real data example illustrate the effectiveness of the new approach.
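A minimal sketch of the "smoothing first, then estimate" idea is given below: each noisy, discretely observed curve is smoothed onto a common grid, and the mean and covariance are then computed pointwise across the smoothed curves. A fixed-bandwidth kernel smoother stands in for the paper's regularity-adaptive smoothing, so the bandwidth choice here is purely illustrative.

    import numpy as np

    def kernel_smooth(t_obs, y_obs, grid, bandwidth):
        """Nadaraya-Watson smoother of one noisy curve onto a common grid."""
        w = np.exp(-0.5 * ((grid[:, None] - t_obs[None, :]) / bandwidth) ** 2)
        return (w @ y_obs) / w.sum(axis=1)

    def mean_and_cov(curves, grid, bandwidth=0.05):
        """curves: list of (t_obs, y_obs) pairs with curve-specific design points."""
        smoothed = np.stack([kernel_smooth(t, y, grid, bandwidth) for t, y in curves])
        mu = smoothed.mean(axis=0)                   # pointwise mean function
        cov = np.cov(smoothed, rowvar=False)         # pointwise covariance function
        return mu, cov

    # Example: 50 irregularly sampled, noisy curves of a random-amplitude sine.
    rng = np.random.default_rng(0)
    grid = np.linspace(0, 1, 101)
    curves = []
    for _ in range(50):
        t = np.sort(rng.random(30))
        y = rng.normal(1, 0.2) * np.sin(2 * np.pi * t) + rng.normal(0, 0.1, 30)
        curves.append((t, y))
    mu, cov = mean_and_cov(curves, grid)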
This study investigates how surgical intervention for speech pathology (specifically, as a result of oral cancer surgery) impacts the performance of an automatic speaker verification (ASV) system. Using two recently collected Dutch datasets with parallel pre- and post-surgery audio from the same speakers, NKI-OC-VC and SPOKE, we assess the extent to which speech pathology influences ASV performance, and whether objective and subjective measures of speech severity are correlated with that performance. Finally, we carry out a perceptual study to compare the judgements of the ASV system and human listeners. Our findings reveal that pathological speech negatively affects ASV performance, and that speech severity is negatively correlated with performance. There is moderate agreement between perceptual and objective scores of speaker similarity and severity; however, the perceptual study could not clearly establish whether the same phenomenon also exists in human perception.
We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees even if the domain and the gradient noise scale linearly with the size of the iterates (and are thus potentially unbounded). Besides providing a set of general results, we also apply our algorithm to a specific problem in reinforcement learning, where it leads to performance guarantees for finding near-optimal policies in an average-reward MDP without prior knowledge of the bias span.
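For illustration only, the sketch below shows one generic way to stabilise stochastic gradient descent-ascent: a small quadratic anchor on each player's iterates keeps them (and hence iterate-dependent gradient noise) from growing without bound. The anchoring scheme, step size, and toy bilinear game are generic choices and do not reproduce the paper's regularization technique or its guarantees.

    import numpy as np

    def regularized_sgda(grad_x, grad_y, x0, y0, steps=20_000, eta=0.01, reg=0.05):
        """Stochastic gradient descent-ascent with a quadratic anchor on both players.

        grad_x, grad_y return (possibly noisy) partial gradients of a convex-concave
        f at (x, y); `reg` pulls the iterates back towards the starting point.
        """
        x_anchor, y_anchor = np.asarray(x0, float), np.asarray(y0, float)
        x, y = x_anchor.copy(), y_anchor.copy()
        for _ in range(steps):
            gx = grad_x(x, y) + reg * (x - x_anchor)   # descent player, regularised
            gy = grad_y(x, y) - reg * (y - y_anchor)   # ascent player, regularised
            x = x - eta * gx
            y = y + eta * gy
        return x, y

    # Toy bilinear game f(x, y) = x^T A y with noisy gradient oracles; the saddle
    # point is the origin, and the regularised iterates end up close to it.
    rng = np.random.default_rng(0)
    A = np.array([[1.0, -0.5], [0.5, 2.0]])
    gx = lambda x, y: A @ y + rng.normal(0.0, 0.1, 2)
    gy = lambda x, y: A.T @ x + rng.normal(0.0, 0.1, 2)
    print(regularized_sgda(gx, gy, np.ones(2), np.ones(2)))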
In comparative effectiveness research, treated and control patients may start follow-up at different times, as treatment is often initiated later in the disease trajectory. This typically occurs when data on treated and control patients are not collected within the same source: only patients who have not yet experienced the event of interest while in the control condition end up in the treatment data source. In the presence of unobserved heterogeneity, these treated patients will have a lower average risk than the controls. We illustrate how failing to account for this time-lag between treated and controls leads to bias in the estimated treatment effect. We define estimands and time axes, and then explore five methods that adjust for this time-lag bias by utilising the time between diagnosis and treatment initiation in different ways. We conducted a simulation study to evaluate whether these methods reduce the bias and then applied them to a comparison between fertility patients treated with insemination and similar but untreated patients. We conclude that time-lag bias can be substantial and that the time between diagnosis and treatment initiation should be taken into account in the analysis to respect the chronology of the disease and treatment trajectory.
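A toy simulation, not the paper's simulation study, makes the depletion mechanism explicit: under unobserved heterogeneity, individuals who remain event-free long enough to initiate treatment have, on average, lower frailty than the full cohort at diagnosis. The frailty distribution and treatment delay below are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    frailty = rng.gamma(shape=2.0, scale=0.5, size=n)     # unobserved heterogeneity, mean 1
    event_time = rng.exponential(1.0 / frailty)           # hazard proportional to frailty
    time_to_treatment = rng.uniform(0.5, 1.5, size=n)     # delay between diagnosis and treatment

    still_event_free = event_time > time_to_treatment     # those who reach treatment initiation
    print(frailty.mean())                                 # ~1.0: average frailty at diagnosis
    print(frailty[still_event_free].mean())               # noticeably lower: a healthier selection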
Dexterous in-hand manipulation is a unique and valuable human skill requiring sophisticated sensorimotor interaction with the environment while respecting stability constraints. Satisfying these constraints with generated motions is essential for a robotic platform to achieve reliable in-hand manipulation skills. Explicitly modelling these constraints can be challenging, but they can be implicitly modelled and learned through experience or human demonstrations. We propose a learning and control approach based on dictionaries of motion primitives generated from human demonstrations. To achieve this, we define an optimization process that combines motion primitives to generate robot fingertip trajectories for moving an object from an initial pose to a desired final pose. Based on our experiments, our approach allows a robotic hand to handle objects as humans do, adhering to stability constraints without requiring their explicit formalization. In other words, the proposed motion primitive dictionaries learn and implicitly embed the constraints crucial to the in-hand manipulation task.
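As a rough illustration of composing primitives from a dictionary, the sketch below fits least-squares weights over demonstrated displacement primitives so that the composed fingertip trajectory ends near a desired position. The representation, objective, and absence of stability constraints are simplifications for illustration; the paper's optimization process is not reproduced.

    import numpy as np

    def combine_primitives(primitives, start, goal):
        """Weighted combination of demonstrated displacement primitives.

        primitives: array of shape (K, T, 3), K displacement profiles over T steps
                    in 3-D, each starting at zero displacement.
        Returns a trajectory of shape (T, 3) starting at `start` whose endpoint
        is as close as possible (in the least-squares sense) to `goal`.
        """
        end_displacements = primitives[:, -1, :]               # (K, 3) net displacement per primitive
        weights, *_ = np.linalg.lstsq(end_displacements.T, goal - start, rcond=None)
        displacement = np.tensordot(weights, primitives, axes=1)   # (T, 3) composed motion
        return start + displacement

    # Toy usage with K = 4 straight-line primitives over T = 50 steps.
    T = 50
    ramps = np.linspace(0, 1, T)[:, None]
    directions = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], float)
    primitives = ramps[None, :, :] * directions[:, None, :]        # (4, 50, 3)
    traj = combine_primitives(primitives, start=np.zeros(3), goal=np.array([0.1, 0.05, 0.02]))
    print(traj[-1])   # ends at (approximately) the goal position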
While physical activity is critical to human health, most people do not meet recommended guidelines. More walkable built environments have the potential to increase activity across the population. However, previous studies on the built environment and physical activity have led to mixed findings, possibly due to methodological limitations such as small cohorts, few or single locations, over-reliance on self-reported measures, and cross-sectional designs. Here, we address these limitations by leveraging a large U.S. cohort of smartphone users (N=2,112,288) to evaluate within-person longitudinal behavior changes that occurred over 248,266 days of objectively-measured physical activity across 7,447 relocations among 1,609 U.S. cities. By analyzing the results of this natural experiment, which exposed individuals to differing built environments, we find that increases in walkability are associated with significant increases in physical activity after relocation (and vice versa). These changes hold across subpopulations defined by gender, age, and body-mass index (BMI), and are sustained over three months after moving. The added activity observed after moving to a more walkable location is predominantly composed of moderate-to-vigorous physical activity (MVPA), which is linked to an array of associated health benefits across the life course. A simulation experiment demonstrates that substantial walkability improvements (i.e., bringing all US locations to the walkability level of Chicago or Philadelphia) may lead to 10.3% (33 million) more Americans meeting aerobic physical activity guidelines. Evidence against residential self-selection confounding is reported. Our findings provide robust evidence supporting the importance of the built environment in directly improving health-enhancing physical activity, in addition to offering potential guidance for public policy activities in this area.