The past few years have seen an increasing number of initiatives aimed at integrating information generated outside of confirmatory randomised clinical trials (RCTs) into drug development. However, data generated non-concurrently and through observational studies can provide results that are difficult to compare with randomised trial data. Moreover, the scientific questions these data can serve to answer often remain vague. Our starting point is to use clearly defined objectives for evidence generation, formulated with a view to early discussions with health technology assessment (HTA) bodies and going beyond the regulatory requirements for authorisation of a new treatment. We propose FACTIVE (Flexible Augmented Clinical Trial for Improved eVidencE generation), a new class of study designs enabling flexible augmentation of confirmatory randomised controlled trials with concurrent and close-to-real-world elements. These enabling designs facilitate estimation of certain treatment effects in the confirmatory part and of other, complementary treatment effects in a concurrent real-world part. Each stakeholder should use the evidence that is relevant within their own decision-making framework. High-quality data are generated under a single protocol, and the use of randomisation ensures rigorous statistical inference and interpretation within and between the different parts of the experiment. Evidence for the decision-making of HTA bodies could be available earlier than is currently the case.
Granular jamming has recently become popular in soft robotics, with widespread applications including industrial gripping, surgical robotics and haptics. Previous work has investigated various techniques that exploit the nature of granular physics to improve jamming performance; however, this area remains underrepresented in the literature relative to its potential impact. We present the first research that exploits vibration-based fluidisation actively (e.g., during a grip) to elicit bespoke performance from granular jamming grippers. We augment a conventional universal gripper with a computer-controlled audio exciter, attached to the gripper via a 3D printed mount, and build an automated test rig to allow large-scale data collection to explore the effects of active vibration. We show that vibration in soft jamming grippers can improve holding strength. In a series of studies, we show that the frequency and amplitude of the waveforms are key determinants of performance, and that jamming performance also depends on temporal properties of the induced waveform. We hope to encourage further study focused on active vibrational control of jamming in soft robotics to improve performance and increase the diversity of potential applications.
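As a rough illustration of the waveform parameters explored above (frequency, amplitude, duration), the following is a minimal sketch of generating a parameterised excitation signal of the kind one might stream to an audio exciter. The function name, sampling rate, and parameter grid are illustrative assumptions, not the authors' software.

```python
import numpy as np

def excitation_waveform(freq_hz, amplitude, duration_s, sample_rate=44100):
    """Generate a sine burst of the given frequency and amplitude.

    A signal like this could be streamed to an audio exciter
    (e.g. via a sound-output library) to fluidise the grain bed
    during a grip. All parameter values here are placeholders.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# Sweep a small grid of frequencies and amplitudes, mirroring the kind of
# large-scale parameter exploration described in the abstract.
grid = [(f, a) for f in (50, 100, 200) for a in (0.25, 0.5, 1.0)]
waveforms = {fa: excitation_waveform(*fa, duration_s=0.5) for fa in grid}
```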
Many policies in the US are determined locally, e.g., at the county-level. Local policy regimes provide flexibility between regions, but may become less effective in the presence of geographic spillovers, where populations circumvent local restrictions by traveling to less restricted regions nearby. Due to the endogenous nature of policymaking, there have been few opportunities to reliably estimate causal spillover effects or evaluate their impact on local policies. In this work, we identify a novel setting and develop a suitable methodology that allow us to make unconfounded estimates of spillover effects of local policies. Focusing on California's Blueprint for a Safer Economy, we leverage how county-level mobility restrictions were deterministically set by public COVID-19 severity statistics, enabling a regression discontinuity design framework to estimate spillovers between counties. We estimate these effects using a mobility network with billions of timestamped edges and find significant spillover movement, with larger effects in retail, eating places, and gyms. Contrasting local and global policy regimes, our spillover estimates suggest that county-level restrictions are only 54% as effective as statewide restrictions at reducing mobility. However, an intermediate strategy of macro-county restrictions -- where we optimize county partitions by solving a minimum k-cut problem on a graph weighted by our spillover estimates -- can recover over 90% of statewide mobility reductions, while maintaining substantial flexibility between counties.
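To make the partitioning step concrete, here is a minimal sketch of a greedy minimum k-cut heuristic on a county graph whose edges are weighted by estimated spillover; it repeatedly applies networkx's Stoer-Wagner minimum cut and is only a stand-in for the exact optimisation used in the paper. The toy graph and weights are made up.

```python
import networkx as nx

def greedy_min_k_cut(G, k, weight="weight"):
    """Greedy minimum k-cut heuristic: repeatedly split the part whose
    cheapest 2-cut is smallest until k parts remain."""
    parts = [G]
    while len(parts) < k:
        best = None
        for i, H in enumerate(parts):
            if H.number_of_nodes() < 2:
                continue
            cut_value, (a, b) = nx.stoer_wagner(H, weight=weight)
            if best is None or cut_value < best[0]:
                best = (cut_value, i, a, b)
        _, i, a, b = best
        H = parts.pop(i)
        parts.extend([H.subgraph(a).copy(), H.subgraph(b).copy()])
    return [set(H.nodes) for H in parts]

# Toy example: counties as nodes, estimated spillover mobility as edge weights.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 5.0), ("B", "C", 0.5), ("C", "D", 4.0), ("D", "A", 0.3),
])
print(greedy_min_k_cut(G, k=2))  # e.g. [{'A', 'B'}, {'C', 'D'}]
```

Cutting where the spillover weight is smallest groups counties with strong mutual spillover into the same macro-county, which is the intuition behind the macro-county strategy described above.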
For the analysis of a time-to-event endpoint in a single-arm or randomized clinical trial, it is generally perceived that interpretation of a given estimate of the survival function, or of the comparison between two groups, hinges on some quantification of the amount of follow-up. Typically, a median of some loosely defined quantity is reported. However, whatever median is reported typically does not answer the question(s) trialists actually have about follow-up quantification. In this paper, inspired by the estimand framework, we formulate a comprehensive list of relevant scientific questions that trialists have when reporting time-to-event data. We illustrate how these questions should be answered, and show that reference to an unclearly defined follow-up quantity is not needed at all. In drug development, key decisions are made based on randomized controlled trials, and we therefore also discuss relevant scientific questions not only when looking at a time-to-event endpoint in one group, but also for comparisons. We find that different thinking about some of the relevant scientific questions around follow-up is required depending on whether a proportional hazards assumption can be made or other patterns of survival functions are anticipated, e.g., delayed separation, crossing survival functions, or the potential for cure. We conclude the paper with practical recommendations.
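To illustrate what "a median of some loosely defined quantity" often means in practice, the hedged sketch below contrasts one such quantity, the reverse Kaplan-Meier median follow-up (a Kaplan-Meier estimate with the censoring indicator flipped), with a directly interpretable landmark summary of the survival function. The simulated data and the use of the lifelines package are assumptions for illustration, not part of the paper.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
time = rng.exponential(12, size=200)      # months to event
cens = rng.uniform(0, 24, size=200)       # administrative censoring
T = np.minimum(time, cens)
E = (time <= cens).astype(int)

# One commonly reported "median follow-up": reverse Kaplan-Meier,
# i.e. a KM estimate with the censoring indicator flipped.
rkm = KaplanMeierFitter().fit(T, event_observed=1 - E)
print("reverse-KM median follow-up:", rkm.median_survival_time_)

# A more directly interpretable summary: the KM estimate of the
# survival function at a clinically relevant landmark time.
km = KaplanMeierFitter().fit(T, event_observed=E)
print("S(12 months):", float(km.survival_function_at_times(12.0)))
```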
In many applications, heterogeneous treatment effects on a censored response variable are of primary interest, and it is natural to evaluate the effects at different quantiles (e.g., the median). The large number of potential effect modifiers, the unknown structure of the treatment effects, and the presence of right censoring pose significant challenges. In this paper, we develop a hybrid forest approach, the Hybrid Censored Quantile Regression Forest (HCQRF), to assess heterogeneous effects varying with high-dimensional variables. The hybrid estimation approach combines the strengths of random forests and censored quantile regression. We propose a doubly-weighted estimation procedure that consists of a redistribution-of-mass weight to handle censoring and an adaptive nearest neighbor weight derived from the forest to handle high-dimensional effect functions. We also propose a variable importance decomposition to measure the impact of a variable on the treatment effect function. Extensive simulation studies demonstrate the efficacy and stability of HCQRF, and the simulation results also demonstrate the effectiveness of the variable importance decomposition. We apply HCQRF to a clinical trial of colorectal cancer, obtaining insightful estimates of the treatment effects and meaningful variable importance results that further confirm the necessity of the decomposition.
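The doubly-weighted idea can be illustrated with a simplified sketch: forest-derived nearest-neighbour weights in the style of quantile regression forests, combined with only a crude censoring adjustment (censored points receive zero weight) standing in for the redistribution-of-mass weights of the actual method. This is not the HCQRF estimator itself, and the data, forest settings, and query point below are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forest_weights(forest, X_train, x_query):
    """Adaptive nearest-neighbour weights in the style of quantile regression
    forests: average over trees of 1{i shares a leaf with x} / |leaf|."""
    leaves_train = forest.apply(X_train)                   # (n, n_trees)
    leaves_query = forest.apply(x_query.reshape(1, -1))    # (1, n_trees)
    w = np.zeros(len(X_train))
    for t in range(leaves_train.shape[1]):
        same = leaves_train[:, t] == leaves_query[0, t]
        w[same] += 1.0 / same.sum()
    return w / leaves_train.shape[1]

def weighted_quantile(y, w, tau):
    order = np.argsort(y)
    cum = np.cumsum(w[order]) / w.sum()
    return y[order][np.searchsorted(cum, tau)]

# Toy right-censored data: event time T, censoring time C, observed Y, indicator.
rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))
T = np.exp(1.0 + 0.5 * X[:, 0] + 0.3 * rng.normal(size=n))
C = rng.exponential(8.0, size=n)
Y, delta = np.minimum(T, C), (T <= C).astype(float)

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=20,
                               random_state=0).fit(X, Y)
x0 = np.zeros(p)
w_forest = forest_weights(forest, X, x0)
# Censoring handled only crudely here (censored points get zero weight);
# HCQRF replaces this with redistribution-of-mass weights.
w = w_forest * delta
print("rough conditional median at x0:", weighted_quantile(Y, w, 0.5))
```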
High-dimensional data can often display heterogeneity due to heteroscedastic variance or inhomogeneous covariate effects. Penalized quantile and expectile regression methods offer useful tools to detect heteroscedasticity in high-dimensional data. The former is computationally challenging due to the non-smooth nature of the check loss, and the latter is sensitive to heavy-tailed error distributions. In this paper, we propose and study (penalized) robust expectile regression (retire), with a focus on iteratively reweighted $\ell_1$-penalization which reduces the estimation bias from $\ell_1$-penalization and leads to oracle properties. Theoretically, we establish the statistical properties of the retire estimator under two regimes: (i) low-dimensional regime in which $d \ll n$; (ii) high-dimensional regime in which $s\ll n\ll d$ with $s$ denoting the number of significant predictors. In the high-dimensional setting, we carefully characterize the solution path of the iteratively reweighted $\ell_1$-penalized retire estimation, adapted from the local linear approximation algorithm for folded-concave regularization. Under a mild minimum signal strength condition, we show that after as many as $\log(\log d)$ iterations the final iterate enjoys the oracle convergence rate. At each iteration, the weighted $\ell_1$-penalized convex program can be efficiently solved by a semismooth Newton coordinate descent algorithm. Numerical studies demonstrate the competitive performance of the proposed procedure compared with either non-robust or quantile regression based alternatives.
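A minimal sketch of the iteratively reweighted $\ell_1$ scheme follows, using an asymmetric Huber (expectile-type) loss, SCAD-derivative weights, and a plain proximal-gradient inner solver as a stand-in for the semismooth Newton coordinate descent algorithm in the paper. Step sizes, tuning constants, and the toy data are illustrative assumptions.

```python
import numpy as np

def asym_huber_grad(r, tau, c):
    """Gradient of the asymmetric (expectile-type) Huber loss w.r.t. residual r."""
    huber_score = np.where(np.abs(r) <= c, r, c * np.sign(r))
    return np.where(r >= 0, tau, 1 - tau) * huber_score

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_l1_retire(X, y, lam_weights, tau=0.5, c=1.345, n_iter=500):
    """One inner stage: weighted l1-penalised robust expectile regression,
    solved here by proximal gradient descent (a simple stand-in for the
    semismooth Newton coordinate descent solver)."""
    n, d = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz bound
    beta = np.zeros(d)
    for _ in range(n_iter):
        r = y - X @ beta
        grad = -X.T @ asym_huber_grad(r, tau, c) / n
        beta = soft_threshold(beta - step * grad, step * lam_weights)
    return beta

def scad_weight(b, lam, a=3.7):
    """Derivative of the SCAD penalty, used to reweight the l1 penalty."""
    ab = np.abs(b)
    return np.where(ab <= lam, lam, np.maximum(a * lam - ab, 0.0) / (a - 1))

def irw_retire(X, y, lam, tau=0.5, n_stages=3):
    """Iteratively reweighted l1: each stage solves a weighted l1 problem with
    weights given by the SCAD derivative at the previous iterate (LLA idea)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_stages):
        beta = weighted_l1_retire(X, y, scad_weight(beta, lam), tau=tau)
    return beta

# Toy high-dimensional example with heavy-tailed noise.
rng = np.random.default_rng(0)
n, d, s = 200, 400, 5
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:s] = 2.0
y = X @ beta_true + rng.standard_t(df=3, size=n)
beta_hat = irw_retire(X, y, lam=0.3)
```

The first stage reduces to ordinary $\ell_1$-penalisation (all weights equal $\lambda$), and later stages shrink the penalty on coordinates with large estimates, which is the mechanism by which the reweighting reduces the $\ell_1$ bias.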
Architectural design contexts contain a set of factors that influence software application development. Among them, \textit{\textbf{organizational}} design contexts comprise high-level company concerns and how the company is structured, for example, its stakeholders and development schedule, which heavily impact design considerations. A Decentralized Autonomous Organization (DAO), a vital concept in the Web3 space, is an organization governed by automatically executed rules, such as smart contracts, featuring a permissionless committee, transparent proposals, and fair contribution by stakeholders. In this work, we conduct a systematic literature review to summarize how DAOs are structured and to explore their benefits and challenges in Web3 applications.
Engineers and scientists have been collecting and analyzing fatigue data since the 1800s to ensure the reliability of life-critical structures. Applications include (but are not limited to) bridges, building structures, aircraft and spacecraft components, ships, ground-based vehicles, and medical devices. Engineers need to estimate S-N relationships (Stress or Strain versus Number of cycles to failure), typically with a focus on estimating small quantiles of the fatigue-life distribution. Estimates from this kind of model are used as input to models (e.g., cumulative damage models) that predict failure-time distributions under varying stress patterns. Also, design engineers need to estimate lower-tail quantiles of the closely related fatigue-strength distribution. The history of applying incorrect statistical methods is nearly as long, and such practices continue to the present. Examples include treating the applied stress (or strain) as the response and the number of cycles to failure as the explanatory variable in regression analyses (because of the need to estimate strength distributions), and ignoring or otherwise mishandling censored observations (known as runouts in the fatigue literature). The first part of the paper reviews the traditional modeling approach where a fatigue-life model is specified. We then show how this specification induces a corresponding fatigue-strength model. The second part of the paper presents a novel alternative modeling approach where a fatigue-strength model is specified and a corresponding fatigue-life model is induced. We explain and illustrate the important advantages of this new modeling approach.
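To illustrate the points about response choice and runouts, here is a minimal sketch of the traditional approach: a lognormal fatigue-life regression with cycles (not stress) as the response and explicit likelihood contributions for right-censored runouts. The S-N data, starting values, and quantile of interest are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, stress, cycles, runout):
    """Lognormal fatigue-life regression: log N ~ Normal(b0 + b1*log(S), sigma).
    Runouts (right-censored cycle counts) contribute a survival probability."""
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    z = (np.log(cycles) - (b0 + b1 * np.log(stress))) / sigma
    ll_event = norm.logpdf(z) - np.log(sigma) - np.log(cycles)  # density of N
    ll_cens = norm.logsf(z)                                     # P(N > runout)
    return -np.sum(np.where(runout, ll_cens, ll_event))

# Toy S-N data: higher stress -> fewer cycles to failure; two runouts at 2e6.
stress = np.array([300, 300, 250, 250, 200, 200, 150, 150.0])
cycles = np.array([2e4, 3e4, 8e4, 1.1e5, 4e5, 6e5, 2e6, 2e6])
runout = np.array([0, 0, 0, 0, 0, 0, 1, 1], dtype=bool)

fit = minimize(neg_loglik, x0=np.array([50.0, -7.0, 0.0]),
               args=(stress, cycles, runout), method="Nelder-Mead")
b0, b1, log_sigma = fit.x
# Estimated 5% quantile of the fatigue-life distribution at 200 MPa:
q05 = np.exp(b0 + b1 * np.log(200) + norm.ppf(0.05) * np.exp(log_sigma))
print(q05)
```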
This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
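The criticality tuning can be illustrated with a short numerical sketch: propagate a signal through a deep tanh MLP at initialization and track the per-layer preactivation scale for weight variances $C_W/\text{width}$ near the tanh critical value $C_W = 1$. The widths, depths, and seed below are arbitrary choices for illustration.

```python
import numpy as np

def preactivation_scale(C_w, depth=60, width=1000, C_b=0.0, seed=0):
    """Propagate a random signal through a deep tanh MLP at initialization
    and record the mean squared preactivation at each layer."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=width)                 # first-layer preactivations
    scales = [np.mean(z ** 2)]
    for _ in range(depth - 1):
        W = rng.normal(scale=np.sqrt(C_w / width), size=(width, width))
        b = rng.normal(scale=np.sqrt(C_b), size=width)
        z = W @ np.tanh(z) + b
        scales.append(np.mean(z ** 2))
    return np.array(scales)

for C_w in (0.8, 1.0, 1.2):
    s = preactivation_scale(C_w)
    print(f"C_w={C_w}: layer-5 scale {s[4]:.3f}, layer-60 scale {s[-1]:.3f}")
# Away from the tanh critical value C_w = 1, the preactivation scale is driven
# exponentially fast to its fixed point (zero for C_w < 1); at criticality it
# changes only slowly with depth.
```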
This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the performance of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly large when the causal effects are correctly accounted for.
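As a generic illustration of why ignoring confounding inflates estimation error, the sketch below compares a naive difference in mean repayment with an inverse-propensity-weighted estimate on simulated confounded lending data. This is a standard textbook correction, not the specific estimator proposed in the paper, and all variable names and data are simulated assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
credit_score = rng.normal(size=n)                     # confounder
# Lender approves a larger credit line for higher scores (confounded decision).
treat = rng.binomial(1, 1 / (1 + np.exp(-2 * credit_score)))
# Repayment depends on the score and on the decision; true effect = 1.0.
repay = 2.0 * credit_score + 1.0 * treat + rng.normal(size=n)

naive = repay[treat == 1].mean() - repay[treat == 0].mean()

# Inverse-propensity weighting as one standard adjustment for confounding.
ps = LogisticRegression().fit(credit_score.reshape(-1, 1), treat)
e = ps.predict_proba(credit_score.reshape(-1, 1))[:, 1]
ipw = np.mean(treat * repay / e) - np.mean((1 - treat) * repay / (1 - e))

# The naive contrast is far from the truth; the adjusted estimate is close to 1.0.
print(f"naive: {naive:.2f}, IPW: {ipw:.2f}, truth: 1.00")
```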