In the online bisection problem one has to maintain a partition of $n$ elements into two clusters of cardinality $n/2$. During runtime, an online algorithm is given a sequence of requests, each being a pair of elements: an inter-cluster request costs one unit while an intra-cluster one is free. The algorithm may change the partition, paying a unit cost for each element that changes its cluster. This natural problem admits a simple deterministic $O(n^2)$-competitive algorithm [Avin et al., DISC 2016]. While several significant improvements over this result have been obtained since the original work, all of them either limit the generality of the input or assume some form of resource augmentation (e.g., larger clusters). Moreover, the algorithm of Avin et al. achieves the best known competitive ratio even if randomization is allowed. In this paper, we present the first randomized online algorithm that breaks this natural barrier and achieves a competitive ratio of $\tilde{O}(n^{27/14})$ without resource augmentation and for an arbitrary sequence of requests.
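To fix ideas, the following Python sketch spells out the cost model described above; the data structures and the pluggable repartitioning rule are illustrative choices for this example, not the algorithm analyzed in the paper.

def serve_requests(n, requests, repartition_rule):
    """Maintain a bisection of n elements and account for request and migration costs."""
    assert n % 2 == 0
    cluster = {x: 0 if x < n // 2 else 1 for x in range(n)}  # initial balanced partition
    total_cost = 0
    for u, v in requests:
        if cluster[u] != cluster[v]:
            total_cost += 1  # inter-cluster request costs one unit; intra-cluster is free
        new_partition = repartition_rule(cluster, (u, v))  # may return None to keep the partition
        if new_partition is not None:
            assert sum(new_partition.values()) == n // 2  # clusters must stay balanced
            total_cost += sum(1 for x in cluster if cluster[x] != new_partition[x])  # migration cost
            cluster = new_partition
    return total_cost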
AI-powered programming assistants are increasingly gaining popularity, with GitHub Copilot alone used by over a million developers worldwide. These tools are far from perfect, however, producing code suggestions that may be incorrect or incomplete in subtle ways. As a result, developers face a new set of challenges when they need to understand, validate, and choose between the AI's suggestions. This paper explores whether Live Programming, a continuous display of a program's runtime values, can help address these challenges. We introduce Live Exploration of AI-Generated Programs, a new interaction model for AI programming assistants that supports exploring multiple code suggestions through Live Programming. We implement this interaction model in LEAP, a prototype Python environment, and evaluate it through a between-subjects study. Our results motivate several design opportunities for future AI-powered programming tools.
The number of disclosed vulnerabilities has been steadily increasing over the years. At the same time, organizations face significant challenges patching their systems, leading to a need to prioritize vulnerability remediation in order to reduce the risk of attacks. Unfortunately, existing vulnerability scoring systems are either vendor-specific, proprietary, or only commercially available. Moreover, these and other prioritization strategies based on vulnerability severity are poor predictors of actual vulnerability exploitation because they do not incorporate new information that might impact the likelihood of exploitation. In this paper, we present the efforts behind building a Special Interest Group (SIG) that seeks to develop a completely data-driven exploit scoring system that produces scores for all known vulnerabilities, that is freely available, and that adapts to new information. The Exploit Prediction Scoring System (EPSS) SIG consists of more than 170 experts from around the world and across all industries, providing crowd-sourced expertise and feedback. Based on these collective insights, we describe the design decisions and trade-offs that led to the development of the next version of EPSS. This new machine learning model provides an 82\% performance improvement over past models in distinguishing vulnerabilities that are exploited in the wild and thus may be prioritized for remediation.
In federated frequency estimation (FFE), multiple clients work together to estimate the frequencies of their collective data by communicating with a server that respects the privacy constraints of Secure Summation (SecSum), a cryptographic multi-party computation protocol that ensures that the server can only access the sum of client-held vectors. For single-round FFE, it is known that count sketching is nearly information-theoretically optimal for achieving the fundamental accuracy-communication trade-offs [Chen et al., 2022]. However, we show that under the more practical multi-round FFE setting, simple adaptations of count sketching are strictly sub-optimal, and we propose a novel hybrid sketching algorithm that is provably more accurate. We also address the following fundamental question: how should a practitioner set the sketch size in a way that adapts to the hardness of the underlying problem? We propose a two-phase approach that allows for the use of a smaller sketch size for simpler problems (e.g., near-sparse or light-tailed distributions). We conclude our work by showing how differential privacy can be added to our algorithm and verifying its superior performance through extensive experiments conducted on large-scale datasets.
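For reference, here is a minimal NumPy sketch of the plain count-sketch encode/decode pair that serves as the single-round baseline discussed above (not the proposed hybrid algorithm); hash seeds, sizes, and function names are illustrative.

import numpy as np

def make_hashes(depth, width, vocab_size, seed=0):
    rng = np.random.default_rng(seed)
    bucket = rng.integers(0, width, size=(depth, vocab_size))  # bucket hash for each row
    sign = rng.choice([-1, 1], size=(depth, vocab_size))       # sign hash for each row
    return bucket, sign

def client_sketch(counts, bucket, sign, width):
    """Each client sketches its local count vector; under SecSum the server only sees the sum of these tables."""
    depth = bucket.shape[0]
    table = np.zeros((depth, width))
    for j in range(depth):
        np.add.at(table[j], bucket[j], sign[j] * counts)       # signed accumulation into buckets
    return table

def estimate(item, summed_table, bucket, sign):
    """Decode an item's aggregate frequency from the summed sketch by a median over rows."""
    depth = bucket.shape[0]
    return np.median([sign[j, item] * summed_table[j, bucket[j, item]] for j in range(depth)])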
We study non-monetary mechanisms for the fair and efficient allocation of reusable public resources, i.e., resources used for varying durations. We consider settings where a limited resource is repeatedly shared among a set of agents, each of whom may request to use the resource over multiple consecutive rounds, receiving utility only if they get to use the resource for the full duration of their request. Such settings are of particular significance in scientific research where large-scale instruments such as electron microscopes, particle colliders, or telescopes are shared between multiple research groups; this model also subsumes and extends existing models of repeated non-monetary allocation where resources are required for a single round only. We study a simple pseudo-market mechanism where upfront we endow each agent with a budget of artificial credits, proportional to the fair share of the resource we want the agent to receive. The endowments thus define for each agent her ideal utility as that which she derives from her favorite allocation with no competition, but subject to getting at most her fair share of the resource across rounds. Next, on each round, and for each available resource item, our mechanism runs a first-price auction with a selective reserve, wherein each agent submits a desired duration and a per-round bid, which must be at least the reserve price if the request spans multiple rounds; the bidder with the highest per-round bid wins and gets to use the item for the desired duration. We consider this problem in a Bayesian setting and show that under a carefully chosen reserve price, irrespective of how others bid, each agent has a simple strategy that guarantees she receives a $1/2$ fraction of her ideal utility in expectation. We also show this result is tight, i.e., no mechanism can guarantee that all agents get more than half of their ideal utility.
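As a concrete illustration of one round of the mechanism described above, the Python sketch below implements the selective-reserve rule; the budget-feasibility check, the payment convention (the winner pays her per-round bid for each round of use), and tie-breaking are assumptions made for this example, not details taken from the paper.

def run_auction(bids, reserve, budgets):
    """bids maps each agent to (duration, per_round_bid); returns (winner, duration) or None."""
    eligible = {a: (d, b) for a, (d, b) in bids.items()
                if (d == 1 or b >= reserve)       # multi-round requests must meet the reserve
                and b * d <= budgets[a]}          # assumed budget-feasibility check
    if not eligible:
        return None
    winner = max(eligible, key=lambda a: eligible[a][1])  # highest per-round bid wins
    duration, bid = eligible[winner]
    budgets[winner] -= bid * duration             # assumed first-price payment over the duration
    return winner, duration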
We consider robot learning in the context of shared autonomy, where control of the system can switch between a human teleoperator and autonomous control. In this setting, we address reinforcement learning and learning from demonstration, where there is a cost associated with human time. This cost represents the human time required to teleoperate the robot, or to recover the robot from failures. For each episode, the agent must choose between requesting human teleoperation or using one of its autonomous controllers. In our approach, we learn to predict the success probability of each controller, given the initial state of an episode. This prediction is used in a contextual multi-armed bandit algorithm to choose the controller for the episode. A controller is learnt online from demonstrations and reinforcement learning so that autonomous performance improves, and the system becomes less reliant on the teleoperator with more experience. We show that our approach to controller selection reduces the human cost of performing two simulated tasks and a single real-world task.
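A minimal sketch of such a selection loop is given below; the logistic-regression success predictor, the epsilon-greedy exploration, and the fallback threshold are illustrative assumptions, not the paper's exact bandit algorithm.

import numpy as np
from sklearn.linear_model import LogisticRegression

class ControllerSelector:
    def __init__(self, n_controllers, epsilon=0.1):
        self.models = [LogisticRegression() for _ in range(n_controllers)]
        self.data = [([], []) for _ in range(n_controllers)]  # (initial states, outcomes) per controller
        self.epsilon = epsilon

    def predict_success(self, i, state):
        X, y = self.data[i]
        if len(set(y)) < 2:
            return 0.5                          # uninformed prior until both outcomes are observed
        return self.models[i].predict_proba([state])[0, 1]

    def choose(self, state, threshold=0.5):
        """Return a controller index, or 'human' when no controller looks likely enough to succeed."""
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.models))
        probs = [self.predict_success(i, state) for i in range(len(self.models))]
        best = int(np.argmax(probs))
        return best if probs[best] >= threshold else "human"

    def update(self, i, state, success):
        X, y = self.data[i]
        X.append(state)
        y.append(int(success))
        if len(set(y)) == 2:                    # refit once both success and failure are seen
            self.models[i].fit(X, y)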
Learning to control unknown nonlinear dynamical systems is a fundamental problem in reinforcement learning and control theory. A commonly applied approach is to first explore the environment (exploration), learn an accurate model of it (system identification), and then compute an optimal controller with the minimum cost on this estimated system (policy optimization). While existing work has shown that it is possible to learn a uniformly good model of the system~\citep{mania2020active}, in practice, if we aim to learn a good controller with a low cost on the actual system, certain system parameters may be significantly more critical than others, and we therefore ought to focus our exploration on learning such parameters. In this work, we consider the setting of nonlinear dynamical systems and seek to formally quantify, in such settings, (a) which parameters are most relevant to learning a good controller, and (b) how we can best explore so as to minimize uncertainty in such parameters. Inspired by recent work in linear systems~\citep{wagenmaker2021task}, we show that minimizing the controller loss in nonlinear systems translates to estimating the system parameters in a particular, task-dependent metric. Motivated by this, we develop an algorithm able to efficiently explore the system to reduce uncertainty in this metric, and prove a lower bound showing that our approach learns a controller at a near-instance-optimal rate. Our algorithm relies on a general reduction from policy optimization to optimal experiment design in arbitrary systems, and may be of independent interest. We conclude with experiments demonstrating the effectiveness of our method in realistic nonlinear robotic systems.
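One way to make the task-dependent metric concrete, under standard smoothness assumptions and in notation of our own choosing (a sketch of the idea, not necessarily the paper's exact definitions): writing $\mathcal{C}(\theta)$ for the cost incurred on the true system by the controller synthesized from parameter estimate $\theta$, a second-order expansion around the true parameter $\theta_\star$ gives
\[
\mathcal{C}(\widehat{\theta}) - \mathcal{C}(\theta_\star) \;\approx\; \big\|\widehat{\theta} - \theta_\star\big\|_{\mathcal{H}(\theta_\star)}^2, \qquad \mathcal{H}(\theta_\star) := \tfrac{1}{2}\,\nabla_\theta^2\, \mathcal{C}(\theta)\big|_{\theta=\theta_\star},
\]
so the exploration phase should aim to minimize $\mathbb{E}\big[\|\widehat{\theta}-\theta_\star\|^2_{\mathcal{H}(\theta_\star)}\big]$, i.e., drive down the estimation error specifically in the directions weighted by the task-dependent matrix $\mathcal{H}(\theta_\star)$, rather than uniformly across all parameters.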
Multiple measures, such as WEAT or MAC, attempt to quantify the magnitude of bias present in word embeddings in terms of a single-number metric. However, such metrics and the related statistical significance calculations rely on treating pre-averaged data as individual data points and employing bootstrapping techniques with low sample sizes. We show that similar results can be easily obtained using such methods even if the data are generated by a null model lacking the intended bias. Consequently, we argue that this approach generates false confidence. To address this issue, we propose a Bayesian alternative: hierarchical Bayesian modeling, which enables a more uncertainty-sensitive inspection of bias in word embeddings at different levels of granularity. To showcase our method, we apply it to the Religion, Gender, and Race word lists from the original research, together with our control lists of neutral words. We deploy the method using the Google, GloVe, and Reddit embeddings. Further, we utilize our approach to evaluate a debiasing technique applied to the Reddit word embeddings. Our findings reveal a more complex landscape than suggested by the proponents of single-number metrics. The datasets and source code for the paper are publicly available.
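For reference, the standard WEAT effect size can be computed as in the NumPy sketch below, which makes explicit where the averaging happens: each target word contributes a single pre-averaged association score before the test statistic is formed. (This is the standard published definition, shown here only for illustration; variable names are our own.)

import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    """s(w, A, B): mean cosine similarity of w to attribute set A minus that to attribute set B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Effect size: difference of mean per-word association scores, normalized by their pooled std."""
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)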
The rank-constrained optimization problem (RCOP) minimizes a linear objective function over the intersection of a prespecified closed rank-constrained domain set and $m$ generic two-sided linear matrix inequalities. Motivated by the Dantzig-Wolfe (DW) decomposition, a popular approach to solving many nonconvex optimization problems, we investigate the strength of the DW relaxation (DWR) of the RCOP, which admits the same formulation as the RCOP except that the domain set is replaced by its closed convex hull. Notably, our goal is to characterize conditions under which the DWR matches the RCOP for any $m$ two-sided linear matrix inequalities. From the primal perspective, we develop the first known simultaneously necessary and sufficient conditions that achieve: (i) extreme point exactness -- all the extreme points of the DWR feasible set belong to the RCOP feasible set; (ii) convex hull exactness -- the DWR feasible set is identical to the closed convex hull of the RCOP feasible set; and (iii) objective exactness -- the optimal values of the DWR and RCOP coincide. The proposed conditions unify, refine, and extend the existing exactness results in the quadratically constrained quadratic program (QCQP) and fair unsupervised learning. These conditions can be used to identify new results, including the extreme point exactness for a QCQP problem that admits an inhomogeneous objective function with two homogeneous two-sided quadratic constraints and the convex hull exactness for fair SVD.
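In notation of our own choosing (purely to fix ideas; the paper's exact notation may differ), the two problems compared above can be written as
\[
\text{(RCOP)}\qquad \min_{X}\ \langle C, X\rangle \quad \text{s.t.}\quad X \in \mathcal{D},\ \ \ell_i \le \langle A_i, X\rangle \le u_i,\ \ i\in[m],
\]
\[
\text{(DWR)}\qquad \min_{X}\ \langle C, X\rangle \quad \text{s.t.}\quad X \in \operatorname{cl}\operatorname{conv}(\mathcal{D}),\ \ \ell_i \le \langle A_i, X\rangle \le u_i,\ \ i\in[m],
\]
where $\mathcal{D}$ is the closed rank-constrained domain set, e.g., $\mathcal{D}=\{X\in\mathbb{S}^n_+ : \operatorname{rank}(X)\le k\}$, and each pair $(\ell_i, u_i)$ defines one two-sided linear matrix inequality. Exactness asks when the relaxation obtained by convexifying $\mathcal{D}$ alone loses nothing relative to the original problem.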
Reconfigurable intelligent surfaces (RISs) allow controlling the propagation environment in wireless networks by tuning multiple reflecting elements. RISs have traditionally been realized through single-connected architectures, mathematically characterized by a diagonal scattering matrix. Recently, beyond-diagonal RISs (BD-RISs) have been proposed as a novel branch of RISs whose scattering matrix is not restricted to be diagonal, which creates new benefits and opportunities for RISs. Efficient BD-RIS architectures have been realized based on group-connected and fully-connected reconfigurable impedance networks. However, a closed-form solution for the globally optimal scattering matrix of these architectures is not yet available. In this paper, we provide such a closed-form solution, proving that the theoretical performance upper bounds can be exactly achieved for any channel realization. We first consider the received signal power maximization in single-user single-input single-output (SISO) systems aided by a BD-RIS working in reflective or transmissive mode. Then, we extend our solution to single-user multiple-input multiple-output (MIMO) and multi-user multiple-input single-output (MISO) systems. We show that our algorithm is less complex than the iterative optimization algorithms employed in the previous literature. The complexity of our algorithm grows linearly (resp. cubically) with the number of RIS elements in the case of group-connected (resp. fully-connected) architectures.
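To make the SISO setting concrete (in our own notation, stated here only as an illustration of the commonly used lossless, reciprocal impedance-network model), the received signal power maximization for a reflective fully-connected BD-RIS can be written as
\[
\max_{\boldsymbol{\Theta}}\ \big| h_{RT} + \mathbf{h}_{RI}^{\mathsf T}\,\boldsymbol{\Theta}\,\mathbf{h}_{IT} \big|^2 \quad \text{s.t.}\quad \boldsymbol{\Theta} = \boldsymbol{\Theta}^{\mathsf T},\ \ \boldsymbol{\Theta}^{\mathsf H}\boldsymbol{\Theta} = \mathbf{I},
\]
where $h_{RT}$, $\mathbf{h}_{RI}$, and $\mathbf{h}_{IT}$ denote the transmitter-receiver, RIS-receiver, and transmitter-RIS channels, respectively; for group-connected architectures the symmetric unitary constraint is imposed block-wise on a block-diagonal scattering matrix.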
Graph convolutional network (GCN) has been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs 1961 seconds) and uses much less memory (2.2GB vs 11.2GB). Furthermore, for training a 4-layer GCN on this data, our algorithm can finish in around 36 minutes while all the existing GCN training algorithms fail due to out-of-memory issues. Finally, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
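The training loop can be sketched as below (an illustrative sketch, not the released implementation); the list of node clusters is assumed to come from a graph clustering routine such as METIS, and the two-layer dense GCN and hyperparameters are placeholder choices.

import torch
import torch.nn.functional as F

def train_cluster_gcn(A, X, y, clusters, hidden=64, epochs=10, lr=0.01):
    """A: dense (n, n) adjacency, X: (n, d) features, y: (n,) labels, clusters: lists of node indices."""
    n_feat, n_class = X.shape[1], int(y.max()) + 1
    W1 = torch.randn(n_feat, hidden, requires_grad=True)
    W2 = torch.randn(hidden, n_class, requires_grad=True)
    opt = torch.optim.Adam([W1, W2], lr=lr)
    for _ in range(epochs):
        for nodes in clusters:                               # one SGD step per dense subgraph
            A_sub = A[nodes][:, nodes]                       # neighborhood search restricted to the cluster
            A_hat = A_sub + torch.eye(len(nodes))            # add self-loops
            A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)   # row-normalize
            H = torch.relu(A_hat @ X[nodes] @ W1)            # layer 1
            logits = A_hat @ H @ W2                          # layer 2
            loss = F.cross_entropy(logits, y[nodes])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return W1, W2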