The sequential multiple assignment randomized trial (SMART) is the gold-standard trial design for generating data to evaluate multi-stage treatment regimes. As with conventional (single-stage) randomized clinical trials, interim monitoring allows early stopping; however, there are few methods for principled interim analysis in SMARTs. Because SMARTs involve multiple stages of treatment, a key challenge is that not all enrolled participants will have progressed through all treatment stages at the time of an interim analysis. Wu et al. (2021) propose basing interim analyses on an estimator of the mean outcome under a given regime that uses data only from participants who have completed all treatment stages. We propose an estimator of the mean outcome under a given regime that gains efficiency by using partial information from enrolled participants regardless of their progression through the treatment stages. Using the asymptotic distribution of this estimator, we derive associated Pocock and O'Brien-Fleming testing procedures for early stopping. In simulation experiments, the proposed procedures control the type I error rate and achieve nominal power while reducing the expected sample size relative to the method of Wu et al. (2021). We present an illustrative application of the proposed estimator based on a recent SMART evaluating behavioral pain interventions for breast cancer patients.
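The boundary calibration behind such stopping rules can be illustrated independently of the paper's estimator. The sketch below is a minimal Python example, assuming K equally spaced looks and the canonical joint distribution of group-sequential z-statistics, that computes Pocock and O'Brien-Fleming critical values by Monte Carlo; it is not the authors' SMART-specific procedure.

```python
import numpy as np

def boundary_constant(shape, K=2, alpha=0.05, n_sim=200_000, seed=0):
    """Monte Carlo calibration of the critical constant c for a two-sided
    group-sequential test with K equally spaced analyses.

    shape(t_k) is the boundary multiplier at information fraction t_k:
    Pocock uses 1, O'Brien-Fleming uses 1/sqrt(t_k)."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, K + 1) / K                      # information fractions
    incr = rng.normal(size=(n_sim, K)) * np.sqrt(np.diff(t, prepend=0.0))
    z = np.cumsum(incr, axis=1) / np.sqrt(t)         # canonical z-statistics under H0
    mult = np.array([shape(tk) for tk in t])
    # Smallest c with P(reject at any look under H0) <= alpha,
    # where look k rejects if |Z_k| >= c * mult_k
    stat = np.max(np.abs(z) / mult, axis=1)
    return np.quantile(stat, 1 - alpha)

c_pocock = boundary_constant(lambda t: 1.0)
c_obf = boundary_constant(lambda t: 1.0 / np.sqrt(t))
print("Pocock boundary at both looks:", round(c_pocock, 3))
print("O'Brien-Fleming boundaries:", np.round(c_obf / np.sqrt([0.5, 1.0]), 3))
```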
By exploiting random sampling techniques, this paper derives an efficient randomized algorithm for computing a generalized CUR decomposition, which simultaneously provides low-rank approximations of both matrices in terms of subsets of their rows and columns. For large-scale data sets that are expensive to store and manipulate, we further combine the random sampling approach with a variant of the discrete empirical interpolation method known as L-DEIM, which has a much lower cost and yields a significant acceleration in practice, to improve the efficiency of our algorithm. Moreover, by using the randomized algorithm to implement the truncation step of the restricted singular value decomposition (RSVD) and combining it with the L-DEIM procedure, we propose a fast algorithm for computing an RSVD-based CUR decomposition, which simultaneously provides a coordinated low-rank approximation of the three matrices in a CUR-type format and offers advantages over the standard CUR approximation in some applications. We establish a detailed probabilistic error analysis for the algorithms and provide numerical results that show the promise of our approaches.
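As a rough illustration of the general recipe (a randomized range finder followed by interpolation-index selection), the following NumPy sketch implements a plain DEIM-based CUR of a single matrix. It is a simplification under stated assumptions: it uses standard DEIM rather than L-DEIM and does not address the generalized or RSVD-based decompositions studied in the paper.

```python
import numpy as np

def deim_indices(V):
    """Greedy DEIM index selection from the columns of an orthonormal V (n x k)."""
    n, k = V.shape
    idx = np.zeros(k, dtype=int)
    idx[0] = np.argmax(np.abs(V[:, 0]))
    for j in range(1, k):
        # Interpolate the new column on the rows selected so far, then take
        # the largest residual entry as the next index
        c = np.linalg.solve(V[idx[:j], :j], V[idx[:j], j])
        r = V[:, j] - V[:, :j] @ c
        idx[j] = np.argmax(np.abs(r))
    return idx

def randomized_deim_cur(A, k, oversample=10, seed=0):
    """Rank-k CUR approximation via a randomized SVD plus DEIM selection."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k + oversample)))
    Us, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U, V = (Q @ Us)[:, :k], Vt[:k].T
    rows, cols = deim_indices(U), deim_indices(V)
    C, R = A[:, cols], A[rows, :]
    U_mid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)   # middle factor
    return C, U_mid, R

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 60)) @ rng.standard_normal((60, 300))  # low-rank test matrix
C, U_mid, R = randomized_deim_cur(A, k=20)
print(np.linalg.norm(A - C @ U_mid @ R) / np.linalg.norm(A))         # relative error
```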
For many business applications that require processing, indexing, and retrieving professional documents such as legal briefs (e.g., in PDF format), it is often essential to first classify the pages of a given document into their corresponding types. Most existing studies on document image classification either focus on single-page documents or treat the pages of a document independently. Although a few techniques have recently been proposed to exploit context information from neighboring pages to enhance document page classification, they typically cannot be used with large pre-trained language models because of input-length constraints. In this paper, we present a simple but effective approach that overcomes this limitation. Specifically, we augment the input with extra tokens carrying sequential information about previous pages, thereby introducing recurrence, which enables the use of pre-trained Transformer models such as BERT for context-aware page classification. Experiments on two legal datasets, in English and Portuguese respectively, show that the proposed approach significantly improves document page classification over the non-recurrent setup as well as other context-aware baselines.
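A minimal sketch of the recurrence idea, under our own assumptions about the token scheme and label set (both hypothetical, not the authors' exact design): previous-page predictions are injected as special tokens prepended to the current page's text, so an off-the-shelf BERT classifier sees the sequential context within its usual input length.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

LABELS = ["cover", "body", "signature"]                  # hypothetical page types
SPECIAL = [f"[PREV_{l.upper()}]" for l in LABELS] + ["[PREV_NONE]"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": SPECIAL})
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))
model.resize_token_embeddings(len(tokenizer))            # room for the new tokens

def classify_document(pages):
    """Classify pages left to right, feeding each prediction forward as a token."""
    prev_token, preds = "[PREV_NONE]", []
    for text in pages:
        enc = tokenizer(prev_token + " " + text, truncation=True,
                        max_length=512, return_tensors="pt")
        with torch.no_grad():
            pred = model(**enc).logits.argmax(-1).item()
        preds.append(LABELS[pred])
        prev_token = f"[PREV_{LABELS[pred].upper()}]"     # recurrence signal
    return preds
```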
In this article, we propose a reactive task allocation architecture for a multi-agent system in scenarios where tasks arrive at random times and are grouped into multiple queues. We consider two-stage tasks in which every task has a beginning, an intermediate, and a final part, as is typical in pick-and-drop and inspect-and-report scenarios. A centralized auction-based task allocation system is proposed, in which the auction system takes into consideration the bids submitted by the agents for individual tasks, the current queue lengths, and the waiting times of the tasks in the queues to decide on a task allocation strategy. The costs associated with these considerations, along with the constraints of a unique mapping between tasks and agents and a cap on the maximum number of agents that can be assigned to a queue, result in a Linear Integer Program (LIP) that is solved using the SCIP solver. For the scenario where queue lengths are penalized but waiting times are not, we demonstrate that the auction system allocates tasks in a manner that makes all queue lengths constant, which we term balancing. For scenarios where both costs are considered, we qualitatively analyse the effect of the choice of relative weights on the resulting task allocation and provide guidelines for choosing them. We present simulation results that illustrate the balanced allocation of tasks and validate the analysis of the trade-off between the costs related to queue lengths and task waiting times.
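The auction step can be sketched as a small integer program. The formulation below, written with PySCIPOpt, uses illustrative cost weights and simplified constraints (unique task-agent mapping, a cap per queue); the paper's exact cost terms and constraint structure may differ.

```python
from pyscipopt import Model, quicksum

def allocate(bids, task_queue, queue_len, wait_time, max_per_queue,
             w_queue=1.0, w_wait=1.0):
    """bids[a][t]: cost bid of agent a for task t; task_queue[t]: queue of task t;
    queue_len[q], wait_time[t]: current queue lengths and task waiting times."""
    agents, tasks = range(len(bids)), range(len(bids[0]))
    m = Model("auction")
    x = {(a, t): m.addVar(vtype="B", name=f"x_{a}_{t}")
         for a in agents for t in tasks}
    for t in tasks:                                   # at most one agent per task
        m.addCons(quicksum(x[a, t] for a in agents) <= 1)
    for a in agents:                                  # at most one task per agent
        m.addCons(quicksum(x[a, t] for t in tasks) <= 1)
    for q, cap in max_per_queue.items():              # cap on agents per queue
        m.addCons(quicksum(x[a, t] for a in agents for t in tasks
                           if task_queue[t] == q) <= cap)
    # Prefer cheap bids; reward serving long queues and long-waiting tasks
    m.setObjective(quicksum(
        (bids[a][t] - w_queue * queue_len[task_queue[t]] - w_wait * wait_time[t])
        * x[a, t] for a in agents for t in tasks), "minimize")
    m.optimize()
    return [(a, t) for (a, t) in x if m.getVal(x[a, t]) > 0.5]
```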
We analyze the bit complexity of efficient algorithms for fundamental optimization problems, such as linear regression, $p$-norm regression, and linear programming (LP). State-of-the-art algorithms are iterative, and in terms of the number of arithmetic operations, they match the current time complexity of multiplying two $n$-by-$n$ matrices (up to polylogarithmic factors). However, previous work has typically assumed infinite-precision arithmetic, and due to complicated inverse maintenance techniques, the actual running times of these algorithms are unknown. To settle the running time and bit complexity of these algorithms, we demonstrate that a core common subroutine, known as \emph{inverse maintenance}, is backward stable. Additionally, we show that iterative approaches for solving constrained weighted regression problems can be accomplished with bounded-error preconditioners. Specifically, we prove that linear programs can be solved approximately in matrix multiplication time multiplied by polylog factors that depend on the condition number $\kappa$ of the matrix and the inner and outer radii of the LP. $p$-norm regression can be solved approximately in matrix multiplication time multiplied by polylog factors in $\kappa$. Lastly, linear regression can be solved approximately in input-sparsity time multiplied by polylog factors in $\kappa$. Furthermore, we present results for achieving running times below matrix multiplication time for $p$-norm regression by utilizing faster solvers for sparse linear systems.
This paper studies the class of scenario-based safety testing algorithms in the black-box safety testing configuration. For algorithms that share the same state-action set coverage but use different sampling distributions, it is commonly believed that prioritizing the exploration of high-risk state-actions leads to better sampling efficiency. We dispute this intuition with an impossibility theorem showing that all safety testing algorithms that differ only in their sampling distributions perform equally well, with the same expected sampling efficiency. Moreover, for testing algorithms that cover different sets of state-actions, the sampling efficiency criterion is no longer applicable, as different algorithms do not necessarily converge to the same termination condition. We therefore propose a definition of testing aggressiveness based on the almost safe set concept, along with an unbiased and efficient algorithm for comparing the aggressiveness of testing algorithms. Empirical observations from the safety testing of bipedal locomotion controllers and vehicle decision-making modules support the proposed theoretical implications and methodologies.
Deep latent generative models have attracted increasing attention because they combine the strengths of deep learning and probabilistic models in an elegant way. The data representations learned with these models are often continuous and dense. In many applications, however, sparse representations are expected, such as learning sparse, high-dimensional embeddings of data in an unsupervised setting, or learning multi-label assignments from thousands of candidate tags in a supervised setting. In some scenarios, there may be a further restriction on the degree of sparsity: the number of non-zero features of a representation cannot be larger than a pre-defined threshold $L_0$. In this paper we propose a sparse deep latent generative model, SDLGM, that explicitly models the degree of sparsity and thus enables learning the sparse structure of the data under the quantified sparsity constraint. The resulting sparsity of a representation is not fixed but adapts to the observation itself under the pre-defined restriction. In particular, we introduce for each observation $i$ an auxiliary random variable $L_i$ that models the sparsity of its representation. The sparse representations are then generated with a two-step sampling process via two Gumbel-Softmax distributions. For inference and learning, we develop an amortized variational method based on a Monte Carlo gradient estimator. The resulting sparse representations are differentiable and compatible with backpropagation. Experimental evaluation on multiple datasets for unsupervised and supervised learning problems shows the benefits of the proposed method.
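The two-step sampling can be sketched in PyTorch under our own reading of the abstract: one Gumbel-Softmax draw selects the sparsity level $L_i \le L_0$, and repeated Gumbel-Softmax draws over feature logits select the active positions. The hard gate on $L_i$ below is a simplification that blocks gradients through the level variable; the paper's construction keeps the whole procedure differentiable.

```python
import torch
import torch.nn.functional as F

def sample_sparse_mask(level_logits, feature_logits, L0, tau=0.5):
    """level_logits: (B, L0+1) logits over sparsity levels 0..L0;
    feature_logits: (B, D) logits over which features are active."""
    B, D = feature_logits.shape
    # Step 1: straight-through Gumbel-Softmax draw of the sparsity level L_i
    level_onehot = F.gumbel_softmax(level_logits, tau=tau, hard=True)    # (B, L0+1)
    L = level_onehot @ torch.arange(L0 + 1, dtype=feature_logits.dtype)  # (B,)
    # Step 2: draw up to L0 positions, keeping only the first L_i of them.
    # The hard gate (k < L) is a simplification of the paper's relaxation.
    mask = torch.zeros(B, D)
    for k in range(L0):
        pick = F.gumbel_softmax(feature_logits, tau=tau, hard=True)      # (B, D)
        keep = (k < L).to(mask.dtype).unsqueeze(-1)
        mask = torch.clamp(mask + keep * pick, max=1.0)
    return mask                                                          # (B, D)

z = sample_sparse_mask(torch.randn(4, 6), torch.randn(4, 32), L0=5)
print(z.sum(dim=1))   # roughly the sampled sparsity levels per observation
```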
Data augmentation is essential when applying machine learning in small-data regimes. It generates new samples that follow the observed data distribution while increasing their diversity and variability, helping researchers and practitioners improve the robustness of their models and thus deploy them in the real world. Nevertheless, its use with tabular data remains limited, as prior knowledge about the underlying data-generating mechanism is seldom considered, limiting the fidelity and diversity of the generated data. Causal data augmentation strategies have been proposed to address these challenges by relying on the conditional independences encoded in a causal graph. In this context, this paper experimentally analyzes the ADMG causal augmentation method under different settings to help researchers and practitioners understand under which conditions prior knowledge helps generate new data points and, consequently, enhances the robustness of their models. The results highlight that the studied method (a) is independent of the underlying model mechanism, (b) requires a minimal number of observations, which may be challenging in a small-data regime, to improve an ML model's accuracy, (c) propagates outliers to the augmented set, degrading the performance of the model, and (d) is sensitive to its hyperparameter values.
The log odds ratio is a well-established metric for evaluating the association between binary outcome and exposure variables. Despite its widespread use, there has been limited discussion of how to summarize the log odds ratio as a function of confounders through averaging. To address this issue, we propose the Average Adjusted Association (AAA), a summary measure of association in a heterogeneous population, adjusted for observed confounders. To facilitate its use, we also develop efficient double/debiased machine learning (DML) estimators of the AAA. Our DML estimators use two equivalent forms of the efficient influence function and are applicable in various sampling scenarios, including random sampling, outcome-based sampling, and exposure-based sampling. Through real data and simulations, we demonstrate the practicality and effectiveness of the proposed estimators in measuring the AAA.
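For intuition, a cross-fitted plug-in version of such a summary (averaging the conditional log odds ratio over the empirical confounder distribution) can be written in a few lines with scikit-learn. This is only a simplified sketch under random sampling; the paper's DML estimators add the efficient-influence-function correction and handle outcome- and exposure-based sampling.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

def plug_in_aaa(Y, A, X, n_splits=2, eps=1e-6, seed=0):
    """Cross-fitted plug-in average of the conditional log odds ratio between
    exposure A and outcome Y given confounders X (illustrative only)."""
    Y, A, X = np.asarray(Y), np.asarray(A), np.asarray(X)
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X)
    est = []
    for train, test in folds:
        clf = GradientBoostingClassifier().fit(
            np.column_stack([A[train], X[train]]), Y[train])

        def logit(a):
            # Predicted P(Y=1 | A=a, X) on the held-out fold, on the logit scale
            p = clf.predict_proba(
                np.column_stack([np.full(len(test), a), X[test]]))[:, 1]
            p = np.clip(p, eps, 1 - eps)
            return np.log(p / (1 - p))

        est.append(np.mean(logit(1) - logit(0)))      # fold-specific average
    return float(np.mean(est))
```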
Homogeneity tests and interval estimation of the risk difference between two groups are of general interest under paired Bernoulli settings in the presence of stratification effects. Dallal [1] proposed a model that parameterizes the probability of an occurrence at one site given an occurrence at the other site. Based on this model, we propose three test statistics and evaluate their performance in terms of type I error control and power. Confidence intervals for a common risk difference with satisfactory coverage probabilities and interval lengths are constructed. Our simulation results show that the score test is the most robust and that the profile likelihood confidence interval outperforms the other proposed methods. Data from a study of acute otitis media are used to illustrate the proposed procedures.
We develop flexible and nonparametric estimators of the average treatment effect (ATE) transported to a new population that offer potential efficiency gains by incorporating into the transport step only a sufficient subset of effect modifiers that are differentially distributed between the source and target populations. We develop both a one-step estimator for when this sufficient subset of effect modifiers is known and a collaborative one-step estimator for when it is unknown. We discuss when we would expect our estimators to be more efficient than those that treat all covariates as potentially relevant effect modifiers, as well as the exceptions in which we would expect worse efficiency. We use simulations to compare finite-sample performance across our proposed estimators and existing estimators of the transported ATE, including in the presence of practical violations of the positivity assumption. Lastly, we apply our proposed estimators to a large-scale housing trial.
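A bare-bones weighting sketch conveys the main idea of transporting with only a subset of effect modifiers. The code below assumes a randomized source trial with known treatment probability and uses a logistic participation model on a chosen subset W; it is illustrative only and is not the paper's one-step or collaborative one-step estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def transported_ate(Y, A, S, W, e=0.5):
    """Y, A: outcome and treatment (used only where S == 1, the source trial);
    S: 1 for source, 0 for target; W: chosen subset of effect modifiers (n x d);
    e: known randomization probability in the source trial."""
    Y, A, S = map(np.asarray, (Y, A, S))
    # Participation model uses only the subset W of effect modifiers
    p = LogisticRegression().fit(W, S).predict_proba(W)[:, 1]   # P(S=1 | W)
    odds = (1 - p) / np.clip(p, 1e-6, None)                     # density-ratio weights
    src = S == 1
    ipw = A[src] / e - (1 - A[src]) / (1 - e)                   # randomization weights
    return np.sum(odds[src] * ipw * Y[src]) / np.sum(S == 0)
```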