In this paper, we study the problem of designing experiments that are conducted on a set of units, such as users or groups of users in an online marketplace, over multiple time periods, such as weeks or months. These experiments are particularly useful for studying treatments that have causal effects on both current and future outcomes (instantaneous and lagged effects). The design problem involves selecting a treatment time for each unit, before or during the experiment, so as to most precisely estimate the instantaneous and lagged effects after the experiment ends. Optimizing these treatment decisions directly reduces the opportunity cost of the experiment by lowering its sample size requirement. The optimization is an NP-hard integer program, for which we provide a near-optimal solution when all design decisions are made at the beginning (fixed-sample-size designs). Next, we study sequential experiments that allow adaptive decisions during the experiment and can potentially be stopped early, further reducing their cost. However, the sequential nature of these experiments complicates both the design phase and the estimation phase. We propose a new algorithm, PGAE, that addresses these challenges by adaptively making treatment decisions, estimating the treatment effects, and drawing valid post-experiment inference. PGAE combines ideas from Bayesian statistics, dynamic programming, and sample splitting. Using synthetic experiments on real data sets from multiple domains, we demonstrate that our proposed solutions for fixed-sample-size and sequential experiments reduce the opportunity cost of the experiments by over 50% and 70%, respectively, compared to benchmarks.
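As a minimal illustration of the fixed-sample-size design problem (the outcome model, the staggered-adoption encoding, and the D-optimality criterion below are our own assumptions, not the paper's exact formulation), one can search over per-unit treatment start times to make the OLS estimates of an instantaneous and a one-period lagged effect as precise as possible:

```python
import itertools
import numpy as np

def design_matrix(adopt_times, T):
    """Stack one row per (unit, period): intercept, instantaneous
    exposure D_it = 1[t >= a_i], and its one-period lag D_{i,t-1}."""
    rows = []
    for a in adopt_times:
        for t in range(1, T + 1):
            rows.append([1.0, float(t >= a), float(t - 1 >= a)])
    return np.array(rows)

def best_design(n_units, T):
    """Brute-force D-optimal staggered-adoption design: maximize
    det(X'X), i.e. minimize the generalized variance of the OLS
    estimates of the instantaneous and lagged effects."""
    best, best_det = None, -np.inf
    # a_i = T + 1 encodes a never-treated (control) unit.
    for assign in itertools.product(range(1, T + 2), repeat=n_units):
        X = design_matrix(assign, T)
        d = np.linalg.det(X.T @ X)
        if d > best_det:
            best, best_det = assign, d
    return best, best_det

print(best_design(n_units=4, T=5))
```

Brute force is only viable for toy instances; the NP-hardness noted above is exactly why a near-optimal solution matters at realistic scale.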
We study a new two-time-scale stochastic gradient method for solving optimization problems in which the gradients are computed with the aid of an auxiliary variable, under samples generated by time-varying Markov random processes parameterized by the underlying optimization variable. These time-varying samples make the gradient directions in our update biased and dependent, which can potentially lead to the divergence of the iterates. In our two-time-scale approach, one iterate estimates the true gradient from these samples, and the other uses this estimate to update the approximation of the optimal solution. While the two iterates are run simultaneously, the former is updated "faster" (using bigger step sizes) than the latter (using smaller step sizes). Our first contribution is to characterize the finite-time complexity of the proposed two-time-scale stochastic gradient method. In particular, we provide explicit formulas for its convergence rates under different structural assumptions, namely strong convexity, convexity, the Polyak-Lojasiewicz condition, and general non-convexity. We apply our framework to two problems in control and reinforcement learning. First, we study the standard online actor-critic algorithm over finite state and action spaces and derive a convergence rate of O(k^(-2/5)), which recovers the best known rate derived specifically for this problem. Second, we study an online actor-critic algorithm for the linear-quadratic regulator and show that a convergence rate of O(k^(-2/3)) is achieved; to our knowledge, this is the first such result in the literature. Finally, we support our theoretical analysis with numerical simulations in which the convergence rates are visualized.
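A minimal sketch of the two-time-scale template on a toy problem (the quadratic objective, the AR(1) stand-in for Markovian samples, and the step-size exponents are illustrative assumptions, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):            # true gradient of f(x) = 0.5 * ||x||^2
    return x

x = np.array([5.0, -3.0])   # slow iterate: estimate of the solution
y = np.zeros(2)             # fast iterate: running gradient estimate
noise = np.zeros(2)         # Markovian disturbance (AR(1) chain)

for k in range(1, 20001):
    noise = 0.9 * noise + rng.normal(size=2)   # dependent, not i.i.d.
    sample_grad = grad(x) + noise              # biased, correlated direction
    beta = k ** -0.6       # bigger steps: track the true gradient fast
    alpha = k ** -0.9      # smaller steps: move the solution slowly
    y += beta * (sample_grad - y)
    x -= alpha * y

print(np.round(x, 3))       # -> approaches the minimizer [0, 0]
```

The key mechanic is visible in the step sizes: the gradient tracker moves on the faster scale, so by the time the slow iterate takes a step, the dependent noise has largely been averaged out.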
We consider the question of adaptive data analysis within the framework of convex optimization. We ask how many samples are needed to compute $\epsilon$-accurate estimates of the $O(1/\epsilon^2)$ gradients queried by gradient descent, and we provide two intermediate answers. First, we show that for a general analyst (not necessarily gradient descent), $\Omega(1/\epsilon^3)$ samples are required. This rules out the possibility of a foolproof mechanism. Our construction builds on a new lower bound (which may be of interest in its own right) for an analyst that may ask several non-adaptive questions per batch over a fixed and known number $T$ of rounds of adaptivity and requires a fraction of the answers to be true discoveries. We show that for such an analyst, $\Omega(\sqrt{T}/\epsilon^2)$ samples are necessary. Second, we show that, under certain assumptions on the oracle, $\tilde \Omega(1/\epsilon^{2.5})$ samples are necessary in an interaction with gradient descent. Our assumptions are that the oracle has only \emph{first-order access} and is \emph{post-hoc generalizing}. First-order access means that it can only compute the gradients of the sampled function at points queried by the algorithm. Our assumption of \emph{post-hoc generalization} follows from existing lower bounds for statistical queries. More generally, we provide a generic reduction from the standard setting of statistical queries to the problem of estimating gradients queried by gradient descent. These results contrast with classical bounds showing that $O(1/\epsilon^2)$ samples suffice to optimize the population risk to accuracy $O(\epsilon)$, but, as it turns out, with spurious gradients.
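Collecting the abstract's bounds in one place (restated from the text above, with no new claims):

```latex
\begin{aligned}
\text{classical, non-adaptive:}\quad & O(1/\epsilon^{2}) \text{ samples suffice (but with spurious gradients)},\\
\text{general adaptive analyst:}\quad & \Omega(1/\epsilon^{3}) \text{ samples are necessary},\\
T \text{ known rounds of adaptivity:}\quad & \Omega(\sqrt{T}/\epsilon^{2}) \text{ samples are necessary},\\
\text{gradient descent, first-order, post-hoc generalizing oracle:}\quad & \tilde{\Omega}(1/\epsilon^{2.5}) \text{ samples are necessary}.
\end{aligned}
```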
Randomized field experiments are the gold standard for evaluating the impact of software changes on customers. In the online domain, randomization has been the main tool to ensure exchangeability. However, due to different deployment conditions and a strong dependence on the surrounding environment, designing experiments for automotive software requires considering a larger number of restricted variables to ensure conditional exchangeability. In this paper, we show how we at Volvo Cars utilize causal graphical models to design experiments and to communicate their assumptions explicitly. These graphical models are further used to assess experiment validity, compute direct and indirect causal effects, and reason about the transportability of the causal conclusions.
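As a toy illustration of how a graphical model turns into an estimator (the graph, variable names, and effect sizes below are our own stand-ins, not Volvo Cars data), backdoor adjustment over a restricted variable Z recovers the causal effect of a treatment X on an outcome Y that a naive comparison gets wrong:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 100_000

# Assumed graph: Z -> X, Z -> Y, X -> Y  (Z is a restricted variable,
# e.g. a deployment-environment factor).
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, np.where(z == 1, 0.8, 0.2))      # confounded treatment
y = 2.0 * x + 3.0 * z + rng.normal(size=n)           # true effect of X is 2.0

df = pd.DataFrame({"Z": z, "X": x, "Y": y})

# Naive contrast is biased because Z opens a backdoor path.
naive = df[df.X == 1].Y.mean() - df[df.X == 0].Y.mean()

# Backdoor adjustment: average the Z-stratified contrasts over P(Z).
strata = df.groupby("Z").apply(
    lambda g: g[g.X == 1].Y.mean() - g[g.X == 0].Y.mean()
)
adjusted = (strata * df.Z.value_counts(normalize=True).sort_index()).sum()

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")  # ~3.8 vs ~2.0
```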
Nowadays, machine learning plays a crucial role in harnessing the power of the massive amounts of data that we produce every day in our digital world. With the booming demand for machine learning applications, it has been recognized that the number of knowledgeable data scientists cannot scale with the growing data volumes and application needs. In response to this demand, several automated machine learning (AutoML) techniques and frameworks have been developed to fill the gap in human expertise by automating the process of building machine learning pipelines. In this study, we present a comprehensive evaluation and comparison of the performance characteristics of six popular AutoML frameworks, namely Auto-Weka, AutoSKlearn, TPOT, Recipe, ATM, and SmartML, across 100 data sets from established AutoML benchmark suites. Our experimental evaluation compares the frameworks along several dimensions, including the performance impact of design decisions such as the time budget, the size of the search space, meta-learning, and ensemble construction. The results of our study reveal various interesting insights that can significantly guide and impact the design of AutoML frameworks.
This paper considers the problem of inference in cluster randomized experiments when cluster sizes are non-ignorable. Here, by a cluster randomized experiment, we mean one in which treatment is assigned at the level of the cluster; by non-ignorable cluster sizes we mean that "large" clusters and "small" clusters may be heterogeneous, and, in particular, the effects of the treatment may vary across clusters of differing sizes. In order to permit this sort of flexibility, we consider a sampling framework in which cluster sizes themselves are random. In this way, our analysis departs from earlier analyses of cluster randomized experiments in which cluster sizes are treated as non-random. We distinguish between two different parameters of interest: the equally-weighted cluster-level average treatment effect, and the size-weighted cluster-level average treatment effect. For each parameter, we provide methods for inference in an asymptotic framework where the number of clusters tends to infinity and treatment is assigned using simple random sampling. We additionally permit the experimenter to sample only a subset of the units within each cluster rather than the entire cluster and demonstrate the implications of such sampling for some commonly used estimators. A small simulation study shows the practical relevance of our theoretical results.
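To make the two parameters concrete (our own notation; the abstract defines them only in words), let $N_g$ denote the random size of cluster $g$ and $\bar{Y}_g(d)$ the within-cluster average potential outcome under treatment $d$:

```latex
\theta_{\mathrm{eq}} \;=\; \mathbb{E}\!\left[\bar{Y}_g(1) - \bar{Y}_g(0)\right],
\qquad
\theta_{\mathrm{size}} \;=\; \frac{\mathbb{E}\!\left[N_g\,\bigl(\bar{Y}_g(1) - \bar{Y}_g(0)\bigr)\right]}{\mathbb{E}\!\left[N_g\right]} .
```

The two coincide when cluster-level effects are uncorrelated with cluster size, which is one way to read the ignorable-cluster-sizes condition that this paper relaxes.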
The security of quantum key distribution (QKD) is severely threatened by discrepancies between realistic devices and theoretical assumptions. Recently, a significant framework called the reference technique was proposed to provide security against arbitrary source flaws, including pulse correlations. Here, we propose an efficient four-phase twin-field QKD protocol using laser pulses that adopts the reference technique to guarantee security against all possible source imperfections. We present a characterization of source flaws and connect them to experimental data, together with a finite-key analysis. In addition, we demonstrate the feasibility of our protocol through a proof-of-principle experimental implementation, achieving a secure key rate of 1.63 kbps under 20 dB of channel loss. Compared with previous QKD protocols with imperfect devices, our work considerably improves both the secure key rate and the transmission distance, showing its potential for practical deployment of secure QKD with device imperfections.
Unlike conventional cars, connected and autonomous vehicles (CAVs) can cross intersections in a lane-free order and utilise the whole area of the intersection. This paper presents a minimum-time optimal control problem to centrally control the CAVs so that they simultaneously cross an intersection in the shortest possible time. Dual problem theory is employed to convexify the constraints that prevent CAVs from colliding with each other and with road boundaries; the resulting formulation is smooth and solvable by gradient-based algorithms. Simulation results show that the proposed strategy reduces the intersection crossing time by an average of 52% and 54% compared with, respectively, state-of-the-art reservation-based and lane-free methods. Furthermore, the crossing time under the proposed strategy remains constant for a given intersection regardless of the number of CAVs.
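A minimal sketch of the dual trick for a single pair of polytopic footprints (the squares, the margin, and the cvxpy solve below are illustrative assumptions): by LP duality, dist({x : A1 x <= b1}, {x : A2 x <= b2}) >= d_min is equivalent to the existence of multipliers lam1, lam2 >= 0 with -b1'lam1 - b2'lam2 >= d_min, A1'lam1 + A2'lam2 = 0, and ||A1'lam1|| <= 1, replacing a nonsmooth distance constraint with smooth algebraic ones:

```python
import cvxpy as cp
import numpy as np

# Two unit-square footprints {x : A x <= b}; the second is centered at (3, 0).
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b1 = np.array([1., 1., 1., 1.])
b2 = np.array([4., -2., 1., 1.])
d_min = 0.5                              # required safety margin

lam1 = cp.Variable(4, nonneg=True)
lam2 = cp.Variable(4, nonneg=True)
constraints = [
    A.T @ lam1 + A.T @ lam2 == 0,        # dual feasibility
    cp.norm(A.T @ lam1, 2) <= 1,         # normalization of the dual ray
]
# Maximize the certified separation; >= d_min means "no collision".
prob = cp.Problem(cp.Maximize(-b1 @ lam1 - b2 @ lam2), constraints)
prob.solve()
print(f"certified distance: {prob.value:.2f} (need >= {d_min})")  # -> 1.00
```

Embedded in a trajectory optimization, the multipliers become decision variables alongside the vehicle states, which is what makes the formulation amenable to gradient-based solvers.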
We provide a decision-theoretic analysis of bandit experiments. The setting corresponds to a dynamic programming problem, but solving it directly is typically infeasible. Working within the framework of diffusion asymptotics, we define suitable notions of asymptotic Bayes and minimax risk for bandit experiments. For normally distributed rewards, the minimal Bayes risk can be characterized as the solution to a nonlinear second-order partial differential equation (PDE). Using a limit-of-experiments approach, we show that this PDE characterization also holds asymptotically under both parametric and non-parametric reward distributions. The approach further identifies the state variables to which it is asymptotically sufficient to restrict attention, and therefore suggests a practical strategy for dimension reduction. The upshot is that the dynamic programming problem defining the bandit experiment can be approximated by a PDE that is efficiently solvable using sparse matrix routines. We derive the optimal Bayes and minimax policies from the numerical solutions to these equations. The proposed policies substantially dominate existing methods such as Thompson sampling. The framework also allows for substantial generalizations of the bandit problem, such as time discounting and pure-exploration motives.
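A schematic of the computational claim (the specific equation, grid, and boundary handling below are our own stand-ins, not the paper's PDE): once a second-order HJB-type equation is discretized on a grid, each backward time step is an application of a sparse linear operator plus a pointwise maximum, so the whole solve reduces to sparse matrix algebra:

```python
import numpy as np
from scipy.sparse import diags

# Toy equation u_t + max_{a in {-1, 1}} a * u_x + 0.5 * u_xx = 0,
# marched backward from a terminal condition on a 1-D grid.
nx, nt = 201, 4000
x = np.linspace(-2.0, 2.0, nx)
dx = x[1] - x[0]
dt = 0.25 * dx**2                        # explicit scheme, CFL-stable

# Sparse tridiagonal second-difference operator.
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)).tocsr() / dx**2

u_term = np.maximum(x, 0.0)              # terminal value
u = u_term.copy()
for _ in range(nt):
    fwd = np.empty(nx); bwd = np.empty(nx)
    fwd[:-1] = (u[1:] - u[:-1]) / dx; fwd[-1] = fwd[-2]   # upwind slopes
    bwd[1:] = (u[1:] - u[:-1]) / dx; bwd[0] = bwd[1]
    hamiltonian = np.maximum(fwd, -bwd)  # pointwise max over the two drifts
    u = u + dt * (0.5 * (L @ u) + hamiltonian)
    u[0], u[-1] = u_term[0], u_term[-1]  # crude Dirichlet clamp, for illustration

print(round(float(u[nx // 2]), 4))       # value at x = 0
```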
We present IOHexperimenter, the experimentation module of the IOHprofiler project, which aims to provide an easy-to-use and highly customizable toolbox for benchmarking iterative optimization heuristics such as local search, evolutionary and genetic algorithms, and Bayesian optimization techniques. IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler, such as IOHanalyzer, the module for interactive performance analysis and visualization. IOHexperimenter provides an efficient interface between optimization problems and their solvers while allowing for granular logging of the optimization process. These logs are fully compatible with existing tools for interactive data analysis, which significantly speeds up the deployment of a benchmarking pipeline. The main components of IOHexperimenter are its environment for building customized problem suites and its various logging options, which allow users to steer the granularity of the data records.
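In spirit, the pattern looks like the following (a generic sketch with made-up names; this is not IOHexperimenter's actual API, for which the IOHprofiler documentation should be consulted): a thin interface between problem and solver that records every evaluation:

```python
import csv
import random

class LoggedProblem:
    """Wraps an objective so every evaluation is recorded, mimicking
    the granular per-evaluation logging described above (hypothetical names)."""
    def __init__(self, fn, name, writer):
        self.fn, self.name, self.writer = fn, name, writer
        self.evals, self.best = 0, float("inf")

    def __call__(self, xs):
        value = self.fn(xs)
        self.evals += 1
        self.best = min(self.best, value)
        self.writer.writerow([self.name, self.evals, value, self.best])
        return value

def random_search(problem, dim, budget, lo=-5.0, hi=5.0):
    # Any solver only sees the callable, never the logging machinery.
    for _ in range(budget):
        problem([random.uniform(lo, hi) for _ in range(dim)])
    return problem.best

with open("run.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["problem", "evaluation", "raw_y", "best_so_far"])
    sphere = LoggedProblem(lambda xs: sum(v * v for v in xs), "sphere", writer)
    print(random_search(sphere, dim=5, budget=200))
```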
The fact that the millimeter-wave (mmWave) multiple-input multiple-output (MIMO) channel has sparse support in the spatial domain has motivated recent compressed sensing (CS)-based mmWave channel estimation methods, in which the angles of arrival (AoAs) and angles of departure (AoDs) are quantized using angle dictionary matrices. However, existing CS-based methods usually obtain the estimate through one-stage channel sounding, which has two limitations: (i) the need for a large-dimensional dictionary and (ii) unresolvable quantization error. These two drawbacks are irreconcilable: improving one worsens the other. To address these challenges, we propose a two-stage method to estimate the AoAs and AoDs of mmWave channels. In the proposed method, the channel estimation task is divided into two stages. In Stage I, the AoAs are estimated by solving a multiple measurement vectors (MMV) problem. In Stage II, based on the estimated AoAs, the receive sounders are designed to estimate the AoDs. The dimension of the angle dictionary in each stage can be reduced, which in turn substantially reduces the computational complexity. We then analyze the successful recovery probability (SRP) of the proposed method, revealing the superiority of the proposed framework over existing one-stage CS-based methods. We further enhance the reconstruction performance by allocating resources between the two stages. We also overcome the unresolvable quantization error of prior techniques by applying atomic norm minimization to each stage of the proposed two-stage approach. Simulation results illustrate the substantially improved performance and low complexity of the proposed two-stage method.
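A bare-bones sketch of the Stage I idea (the array size, grid, sparsity level, and the use of simultaneous OMP as the MMV solver are our illustrative choices): measurement vectors that share a common AoA support allow the row-sparse support to be recovered greedily, before any AoD estimation takes place:

```python
import numpy as np

rng = np.random.default_rng(0)

def somp(Y, A, k):
    """Simultaneous OMP: recover the k-row-sparse support of X in Y = A X."""
    support, R = [], Y.copy()
    for _ in range(k):
        scores = np.linalg.norm(A.conj().T @ R, axis=1)   # match all MMVs at once
        scores[support] = 0.0
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        X_hat, *_ = np.linalg.lstsq(As, Y, rcond=None)
        R = Y - As @ X_hat
    return sorted(support)

m, n_grid, n_mmv, k = 32, 128, 8, 3        # sounders, AoA grid, snapshots, paths
sin_grid = np.linspace(-1.0, 1.0, n_grid, endpoint=False)   # quantized sin(AoA)
A = np.exp(1j * np.pi * np.outer(np.arange(m), sin_grid)) / np.sqrt(m)

true_support = [10, 60, 110]               # well-separated paths for the demo
X = np.zeros((n_grid, n_mmv), dtype=complex)
X[true_support] = rng.normal(size=(k, n_mmv)) + 1j * rng.normal(size=(k, n_mmv))
Y = A @ X + 0.01 * (rng.normal(size=(m, n_mmv)) + 1j * rng.normal(size=(m, n_mmv)))

print(true_support, somp(Y, A, k))         # supports should match
```

Because Stage II then only needs a dictionary over the AoD grid conditioned on the recovered AoAs, neither stage requires the full joint AoA-AoD dictionary, which is the source of the complexity reduction described above.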