A common concern in non-inferiority (NI) trials is that non-adherence due, for example, to poor study conduct can make treatment arms artificially similar. Because intention-to-treat analyses can be anti-conservative in this situation, per-protocol analyses are sometimes recommended. However, such advice does not consider the estimands framework, nor the risk of bias from per-protocol analyses. We therefore sought to update the above guidance using the estimands framework, and to compare estimators that improve on the performance of per-protocol analyses. We argue that the main threat to the validity of NI trials is the occurrence of trial-specific intercurrent events (IEs), that is, IEs which occur in a trial setting but would not occur in practice. To guard against erroneous conclusions of non-inferiority, we suggest that an estimand using a hypothetical strategy for trial-specific IEs should be employed, with the handling of other, non-trial-specific IEs chosen on clinical grounds. We provide an overview of estimators that could be used to estimate a hypothetical estimand, including inverse probability weighting (IPW) and two instrumental variable approaches (one using an informative Bayesian prior on the effect of standard treatment, and one using a treatment-by-covariate interaction as an instrument). We compare them using simulation in the setting of all-or-nothing compliance in two active treatment arms, and conclude that both IPW and the instrumental variable method using a Bayesian prior are potentially useful approaches, with the choice between them depending on which assumptions are most plausible for a given trial.
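As a rough illustration of the IPW estimator mentioned above, the sketch below (not the authors' code; the logistic adherence model, the single baseline covariate `x`, and the simulated data are assumptions) weights adherent patients by the inverse of their estimated probability of adhering before comparing arm means, targeting a hypothetical "everyone adheres" contrast under all-or-nothing compliance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
arm = rng.integers(0, 2, n)                    # 0 = reference arm, 1 = experimental arm
x = rng.normal(size=n)                         # baseline covariate driving adherence (assumed)
p_adhere = 1 / (1 + np.exp(-(1.0 + 0.8 * x)))
adhere = rng.binomial(1, p_adhere)             # all-or-nothing compliance indicator
y = 0.5 * arm * adhere + 0.3 * x + rng.normal(size=n)  # observed outcome: effect realised only with adherence

# Model adherence on baseline covariates and form inverse-probability weights
ps = LogisticRegression().fit(np.c_[x, arm], adhere).predict_proba(np.c_[x, arm])[:, 1]
w = adhere / ps                                # adherers up-weighted, non-adherers get weight zero

# Weighted difference in arm means approximates the hypothetical "no non-adherence" contrast
mean1 = np.average(y[arm == 1], weights=w[arm == 1])
mean0 = np.average(y[arm == 0], weights=w[arm == 0])
print("IPW estimate of arm difference:", mean1 - mean0)   # close to the true effect of 0.5
```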
Most existing neural network-based approaches for solving stochastic optimal control problems via the associated backward dynamic programming principle rely on the ability to simulate the underlying state variables. However, in some problems this simulation is infeasible, which leads to a discretization of the state-variable space and the need to train one neural network per grid point. This approach becomes computationally inefficient for large state-variable spaces. In this paper, we consider a class of such stochastic optimal control problems and introduce an effective solution based on multitask neural networks. To train our multitask neural network, we introduce a novel scheme that dynamically balances the learning across tasks. Through numerical experiments on real-world derivatives pricing problems, we show that our method outperforms state-of-the-art approaches.
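The paper's specific balancing scheme is not reproduced here; as a generic illustration of a shared-trunk multitask network whose per-task losses are dynamically re-weighted, the following PyTorch sketch weights each task inversely to an exponential moving average of its recent loss (the architecture, placeholder data, and weighting rule are all assumptions).

```python
import torch
import torch.nn as nn

n_tasks, d_in = 4, 8

class MultiTaskNet(nn.Module):
    """Shared trunk with one small head per task."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                   nn.Linear(64, 64), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(64, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.trunk(x)
        return [head(h) for head in self.heads]

net = MultiTaskNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
ema = torch.ones(n_tasks)                          # running scale of each task's loss

for step in range(200):
    x = torch.randn(128, d_in)                     # placeholder inputs
    targets = [torch.randn(128, 1) for _ in range(n_tasks)]  # placeholder targets
    preds = net(x)
    losses = torch.stack([nn.functional.mse_loss(p, t) for p, t in zip(preds, targets)])
    ema = 0.9 * ema + 0.1 * losses.detach()        # track each task's recent loss level
    weights = ema.mean() / ema                     # down-weight tasks that currently dominate
    loss = (weights * losses).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```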
Home-based physical therapies are effective only if the prescribed exercises are correctly executed and patients adhere to the routines. This is especially important for older adults, who can easily forget the therapists' guidelines. Inertial Measurement Units (IMUs) are commonly used to track exercise execution, providing patients' motion data. In this work, we propose using Machine Learning techniques to recognize which exercise is being carried out and to assess whether the recognized exercise is properly executed, using data from four IMUs placed on the person's limbs. To the best of our knowledge, these two tasks have never before been addressed together as a single complex task. However, their combination is needed for a complete characterization of the performance of physical therapies. We evaluate the performance of six machine learning classifiers in three contexts: recognition and evaluation in a single classifier, recognition of correct executions only (excluding wrongly performed exercises), and a two-stage approach that first recognizes the exercise and then evaluates it. We apply our proposal to a set of 8 upper- and lower-limb exercises designed to maintain the health status of elderly people; to do so, the motion of volunteers was monitored with 4 IMUs. We obtain accuracies of 88.4\% and 91.4\% in the first two scenarios. In the third, recognition reaches an accuracy of 96.2\%, whereas exercise evaluation varies between 93.6\% and 100.0\%. This work demonstrates the feasibility of IMUs for the complete monitoring of physical therapies, providing information on which exercise is being performed and on its quality, as a basis for designing virtual coaches.
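A minimal sketch of the two-stage idea (not the authors' pipeline; the feature representation, the random-forest choice, and the placeholder labels are assumptions): one classifier predicts the exercise label from IMU-derived features, and a second, per-exercise classifier judges whether the execution was correct.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, d = 600, 24                      # windows x features (e.g. per-IMU summary statistics)
X = rng.normal(size=(n, d))         # placeholder IMU features
exercise = rng.integers(0, 8, n)    # 8 exercises
correct = rng.integers(0, 2, n)     # execution-quality label

# Stage 1: recognize which exercise is being performed
stage1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, exercise)

# Stage 2: one quality classifier per exercise, trained on that exercise's windows
stage2 = {e: RandomForestClassifier(n_estimators=200, random_state=0)
             .fit(X[exercise == e], correct[exercise == e]) for e in range(8)}

def assess(window):
    e = stage1.predict(window[None])[0]       # which exercise?
    ok = stage2[e].predict(window[None])[0]   # was it executed correctly?
    return e, bool(ok)

print(assess(X[0]))
```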
This work proposes a novel variational approximation of partial differential equations on moving geometries determined by explicit boundary representations. The main benefit of the proposed formulation is the ability to handle large displacements of explicitly represented domain boundaries without body-fitted meshes or remeshing techniques. For the space discretization, we use a background mesh and an unfitted method that relies on integration over cut cells only. We perform this intersection using clipping algorithms. To deal with the mesh movement, we pull back the equations to a reference configuration (the spatial mesh at the initial time of the slab times the time interval) that is constant in time. In this way, the geometrical intersection algorithm is only required in 3D, another key property of the proposed scheme. At the end of each time slab, we compute the deformed mesh, intersect the deformed boundary with the background mesh, and use an exact transfer operator between meshes to compute jump terms in the time-discontinuous Galerkin integration. The transfer is also computed using geometrical intersection algorithms. We demonstrate the applicability of the method to fluid problems around rotating (2D and 3D) geometries described by oriented boundary meshes, and provide a set of numerical experiments that shows the optimal convergence of the method.
Network meta-analysis combines aggregate data (AgD) from multiple randomised controlled trials, assuming that any effect modifiers are balanced across populations. Individual patient data (IPD) meta-regression is the ``gold standard'' method for relaxing this assumption; however, IPD are frequently available for only a subset of studies. Multilevel network meta-regression (ML-NMR) extends IPD meta-regression to incorporate AgD studies whilst avoiding aggregation bias, but currently requires the aggregate-level likelihood to have a known closed form. Notably, this prevents application to time-to-event outcomes. We extend ML-NMR to individual-level likelihoods of any form by integrating the individual-level likelihood function over the AgD covariate distributions to obtain the respective marginal likelihood contributions. We illustrate the approach with two examples of time-to-event outcomes: a simulated comparison showing that ML-NMR loses little precision relative to a full IPD analysis, and an analysis of synthetic data on newly diagnosed multiple myeloma demonstrating flexible modelling of baseline hazards using cubic M-splines. ML-NMR is a general method for synthesising individual- and aggregate-level data in networks of all sizes, and the extension to general likelihoods, including for survival outcomes, greatly increases its applicability. R and Stan code is provided, and the methods are implemented in the multinma R package.
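To make the key step concrete, here is a minimal numerical sketch (not the multinma implementation; the Weibull individual-level model, the normal covariate distribution, and the integration-by-sampling scheme are all assumptions) of approximating an AgD arm's marginal likelihood contribution for an event time by integrating the individual-level density over the reported covariate distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed individual-level model: Weibull proportional hazards with log-hazard-ratio beta per unit covariate
shape, scale0, beta = 1.4, 5.0, -0.5

def individual_density(t, x):
    """Weibull density of event time t given covariate x (hazard multiplied by exp(beta * x))."""
    scale = scale0 * np.exp(-beta * x / shape)   # scale adjusted so the hazard scales by exp(beta * x)
    return stats.weibull_min.pdf(t, c=shape, scale=scale)

# The AgD study reports only a covariate summary; integrate the individual likelihood over that distribution
x_draws = rng.normal(loc=0.3, scale=1.0, size=5000)   # draws from the reported covariate distribution
t_obs = 4.2                                           # an observed event time in the AgD arm
marginal_contrib = individual_density(t_obs, x_draws).mean()
print("marginal likelihood contribution:", marginal_contrib)
```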
Recently, addressing spatial confounding has become a major topic in spatial statistics. However, the literature has provided conflicting definitions, and many proposed definitions do not address the issue of confounding as it is understood in causal inference. We define spatial confounding as the existence of an unmeasured causal confounder with a spatial structure. We present a causal inference framework for nonparametric identification of the causal effect of a continuous exposure on an outcome in the presence of spatial confounding. We propose a double machine learning (DML) procedure in which flexible models are used to regress both the exposure and the outcome on confounders, yielding a causal estimator with favorable robustness properties and convergence rates, and we prove that this approach is consistent and asymptotically normal under spatial dependence. As far as we are aware, this is the first approach to spatial confounding that does not rely on restrictive parametric assumptions (such as linearity, effect homogeneity, or Gaussianity) for both identification and estimation. We demonstrate the advantages of the DML approach analytically and in simulations. We apply our methods and reasoning to a study of the effect of fine particulate matter exposure during pregnancy on birthweight in California.
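As a rough illustration of the DML idea described above (a generic partially linear, cross-fitted sketch, not the authors' spatial estimator; the random-forest nuisance models and the simulated data-generating process are assumptions), one residualises both the exposure and the outcome on the measured confounders and regresses one residual on the other.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 2000
U = rng.normal(size=(n, 3))                                          # measured confounders
A = np.sin(U[:, 0]) + 0.5 * U[:, 1] + rng.normal(size=n)             # continuous exposure
Y = 1.5 * A + np.cos(U[:, 0]) + U[:, 2] ** 2 + rng.normal(size=n)    # true causal effect = 1.5

res_A, res_Y = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(U):
    # Cross-fitting: nuisance models are fit on one part, residuals formed on the held-out part
    mA = RandomForestRegressor(n_estimators=200, random_state=0).fit(U[train], A[train])
    mY = RandomForestRegressor(n_estimators=200, random_state=0).fit(U[train], Y[train])
    res_A[test] = A[test] - mA.predict(U[test])
    res_Y[test] = Y[test] - mY.predict(U[test])

theta = np.sum(res_A * res_Y) / np.sum(res_A ** 2)   # partialling-out estimate of the exposure effect
print("DML estimate of exposure effect:", theta)
```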
A crucial challenge in conflict research is leveraging the semi-supervised nature of the data that arise. Observed response data, such as counts of battle deaths over time, are indicative of latent processes of interest, such as the intensity and duration of conflicts, but defining and labeling instances of these unobserved processes requires nuance and is inherently imprecise. The availability of such labels, however, would make it possible to study the effect of intervention-related predictors, such as ceasefires, directly on conflict dynamics (e.g., latent intensity) rather than through an intermediate proxy like observed counts of battle deaths. Motivated by this problem and by the newly available ETH-PRIO Civil Conflict Ceasefires data set, we propose a Bayesian autoregressive (AR) hidden Markov model (HMM) framework as a sufficiently flexible machine learning approach for semi-supervised regime labeling with uncertainty quantification. We motivate our approach by illustrating how it can be used to study the role that ceasefires play in shaping conflict dynamics. The ceasefires data set provides the first systematic and globally comprehensive data on ceasefires, and our work is the first to analyze these new data and to explore the effect of ceasefires on conflict dynamics in a comprehensive, cross-country manner.
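The modelling backbone can be sketched in a few lines. The code below is a minimal numerical illustration, not the authors' Bayesian model: the two-regime Gaussian AR(1) emissions, fixed parameters, and simulated series are assumptions, and it only runs the scaled forward recursion to obtain filtered regime probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed two-regime AR(1) HMM: y_t | state k ~ N(c_k + phi_k * y_{t-1}, sigma_k^2)
c, phi, sigma = np.array([0.0, 3.0]), np.array([0.5, 0.8]), np.array([1.0, 1.5])
P = np.array([[0.95, 0.05], [0.10, 0.90]])       # regime transition matrix
pi0 = np.array([0.5, 0.5])

# Simulate a short series (placeholder for, e.g., transformed battle-death counts)
T, y, s = 300, np.zeros(300), 0
for t in range(1, T):
    s = rng.choice(2, p=P[s])
    y[t] = c[s] + phi[s] * y[t - 1] + sigma[s] * rng.normal()

def gauss(y_t, mean, sd):
    return np.exp(-0.5 * ((y_t - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Scaled forward recursion: filtered P(state_t | y_{1:t})
alpha = np.zeros((T, 2))
alpha[0] = pi0
for t in range(1, T):
    lik = gauss(y[t], c + phi * y[t - 1], sigma)  # emission density in each regime
    a = (alpha[t - 1] @ P) * lik
    alpha[t] = a / a.sum()

print("filtered regime probabilities at the final time:", alpha[-1])
```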
In this study, we consider a class of linear matroid interdiction problems, where the feasible sets for the upper-level decision-maker (referred to as the leader) and the lower-level decision-maker (referred to as the follower) are given by partition matroids with a common ground set. In contrast to classical network interdiction models where the leader is subject to a single budget constraint, in our setting, both the leader and the follower are subject to several independent cardinality constraints and engage in a zero-sum game. While a single-level linear integer programming problem over a partition matroid is known to be polynomially solvable, we prove that the considered bilevel problem is NP-hard, even when the objective function coefficients are all binary. On a positive note, it turns out that, if the number of cardinality constraints is fixed for either the leader or the follower, then the considered class of bilevel problems admits several polynomial-time solution schemes. Specifically, these schemes are based on a single-level dual reformulation, a dynamic programming-based approach, and a 2-flip local search algorithm for the leader.
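To make the polynomially solvable single-level building block mentioned above concrete, the sketch below illustrates the standard greedy argument (it is not one of the paper's bilevel schemes; the part structure, budgets, and weights are assumptions): a linear objective over a partition matroid is maximised by keeping the largest-weight elements within each part's cardinality budget.

```python
# Maximize the sum of weights subject to picking at most b_i elements from part i
# (a partition matroid); greedy within each part is optimal for this single-level problem.
weights = {"a1": 5, "a2": 3, "a3": 9, "b1": 4, "b2": 7}
parts = {"A": ["a1", "a2", "a3"], "B": ["b1", "b2"]}
budget = {"A": 2, "B": 1}

chosen = []
for part, elems in parts.items():
    best = sorted(elems, key=lambda e: weights[e], reverse=True)[: budget[part]]
    chosen.extend(best)

print(chosen, sum(weights[e] for e in chosen))   # ['a3', 'a1', 'b2'], value 21
```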
We consider Maxwell eigenvalue problems on uncertain shapes, with perfectly conducting TESLA cavities as the driving example. Due to the shape uncertainty, the resulting eigenvalues and eigenmodes are also uncertain, and it is well known that the eigenvalues may exhibit crossings or bifurcations under perturbation. We discuss how the shape uncertainties can be modelled using the domain-mapping approach and how the deformation mapping can be expressed as coefficients in Maxwell's equations. Using derivatives of these coefficients and derivatives of the eigenpairs, we follow a perturbation approach to compute approximations of the mean and covariance of the eigenpairs. For small perturbations, these approximations are faster and more accurate than Monte Carlo or similar sampling-based strategies. Numerical experiments for a three-dimensional 9-cell TESLA cavity are presented.
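A minimal numerical sketch of the perturbation idea (a generic symmetric-matrix stand-in, not the Maxwell finite-element operator; the 3x3 matrix, the single scalar shape parameter, and the Gaussian perturbation are assumptions): first-order eigenvalue derivatives give cheap approximations of the mean and variance, which can be checked against Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
A0 = np.array([[2.0, 0.3, 0.0], [0.3, 5.0, 0.1], [0.0, 0.1, 9.0]])   # nominal operator
dA = np.array([[0.5, 0.1, 0.0], [0.1, -0.2, 0.0], [0.0, 0.0, 0.3]])  # dA/dp for a scalar shape parameter p
sigma_p = 0.05                                                        # standard deviation of p

lam0, V0 = np.linalg.eigh(A0)
# First-order derivative of a simple eigenvalue: dlam_i/dp = v_i^T (dA/dp) v_i
dlam = np.array([V0[:, i] @ dA @ V0[:, i] for i in range(3)])
mean_pert = lam0                       # first-order mean (p has zero mean)
var_pert = (dlam * sigma_p) ** 2       # first-order variance

# Monte Carlo reference for comparison
samples = np.array([np.linalg.eigvalsh(A0 + rng.normal(0, sigma_p) * dA) for _ in range(20000)])
print("perturbation mean:", mean_pert, " MC mean:", samples.mean(axis=0))
print("perturbation var: ", var_pert,  " MC var: ", samples.var(axis=0))
```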
During multiple testing, researchers often adjust their alpha level to control the familywise error rate for a statistical inference about a joint union alternative hypothesis (e.g., "H1 or H2"). However, in some cases, they do not make this inference and instead make separate inferences about each of the individual hypotheses that comprise the joint hypothesis (e.g., H1 and H2). For example, a researcher might use a Bonferroni correction to adjust their alpha level from the conventional level of 0.050 to 0.025 when testing H1 and H2, find a significant result for H1 (p < 0.025) and not for H2 (p > 0.025), and so claim support for H1 but not for H2. However, these separate individual inferences do not require an alpha adjustment. Only a statistical inference about the union alternative hypothesis "H1 or H2" requires an alpha adjustment, because it is based on "at least one" significant result among the two tests and so depends on the familywise error rate. When a researcher corrects their alpha level during multiple testing but does not make an inference about the union alternative hypothesis, the correction is redundant. In the present article, I discuss this redundant correction problem, including its associated loss of statistical power and its potential causes vis-\`a-vis error rate confusions and the alpha adjustment ritual. I also provide three illustrations of redundant corrections from recent psychology studies. I conclude that redundant corrections represent a symptom of statisticism, and I call for a more nuanced and context-specific approach to multiple testing corrections.
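The arithmetic behind the argument can be checked with a quick simulation (a generic illustration, not taken from the article; two independent one-sample t-tests under true nulls are assumed): the per-test error rate at alpha = 0.05 is already 0.05 without any adjustment, while the "at least one significant" familywise rate approaches 1 - 0.95^2 ≈ 0.0975 and is what a Bonferroni-adjusted alpha of 0.025 controls.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reps, n = 100_000, 30
# Both null hypotheses are true: all samples have mean zero
x1 = rng.normal(size=(reps, n))
x2 = rng.normal(size=(reps, n))
p1 = stats.ttest_1samp(x1, 0, axis=1).pvalue
p2 = stats.ttest_1samp(x2, 0, axis=1).pvalue

print("per-test error rate for H1 at alpha = .05: ", np.mean(p1 < 0.05))                    # ~0.050
print("familywise ('at least one') rate at .05:   ", np.mean((p1 < 0.05) | (p2 < 0.05)))    # ~0.0975
print("familywise rate with Bonferroni alpha .025:", np.mean((p1 < 0.025) | (p2 < 0.025)))  # ~0.050
```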
The need to Fourier transform data sets with irregular sampling is shared by various domains of science, for example astronomy or seismology. Iterative methods have been developed that reach approximate solutions. Here, an exact solution to the problem for band-limited periodic signals is presented. The exact spectrum can be deduced from the spectrum of the non-equispaced data through the inversion of a Toeplitz matrix. The result applies to data of any dimension. The method also provides an excellent approximation for non-periodic band-limited signals, and it reaches very high dynamic ranges ($10^{13}$ in double precision), which depend on the regularity of the samples.
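A minimal numerical sketch of the recovery step (an illustration of the general principle, not the paper's algorithm or code; the harmonic cut-off, sample times, and normal-equation formulation are assumptions): for a band-limited periodic signal sampled at irregular times, the Fourier coefficients solve a Hermitian Toeplitz system built from the non-equispaced data.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8                                            # band limit: harmonics -K..K
ks = np.arange(-K, K + 1)
t = np.sort(rng.uniform(0, 1, size=60))          # irregular sample times within one period

c_true = rng.normal(size=ks.size) + 1j * rng.normal(size=ks.size)   # true Fourier coefficients
A = np.exp(2j * np.pi * np.outer(t, ks))         # non-equispaced synthesis matrix
y = A @ c_true                                   # irregularly sampled band-limited signal

# Normal equations: (A^H A) c = A^H y, where (A^H A)_{k,k'} = sum_j exp(2i*pi*(k'-k)*t_j)
# depends only on k'-k, i.e. the system matrix is Hermitian Toeplitz.
T_mat = A.conj().T @ A
c_rec = np.linalg.solve(T_mat, A.conj().T @ y)

print("max coefficient error:", np.max(np.abs(c_rec - c_true)))   # close to machine precision
```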