We revisit the classic regular expression matching problem, that is, given a regular expression $R$ and a string $Q$, decide if $Q$ matches any of the strings specified by $R$. A standard textbook solution [Thompson, CACM 1968] solves this problem in $O(nm)$ time, where $n$ is the length of $Q$ and $m$ is the number of characters in $R$. More recently, several results that improve this bound by polylogarithmic factors have appeared. All of these solutions are essentially based on constructing and simulating a non-deterministic finite automaton. On the other hand, assuming the strong exponential time hypothesis, regular expression matching cannot be solved in $O((nm)^{1-\epsilon})$ time [Backurs and Indyk, FOCS 2016]. Hence, a natural question is whether we can design algorithms that take advantage of other parameters of the problem to obtain more fine-grained bounds. We present the first algorithm for regular expression matching that can take advantage of sparsity in the automaton simulation. More precisely, we define the \emph{density}, $\Delta$, of the instance to be the total number of states in a simulation of a natural automaton for $R$. The density is always at most $nm+1$ but may be significantly smaller in many typical scenarios, e.g., when a string only matches a small part of the regular expression. Our main result is a new algorithm that solves the problem in $$O\left(\Delta \log \log \frac{nm}{\Delta} + n + m\right)$$ time. This result essentially replaces $nm$ with $\Delta$ in the complexity of regular expression matching. Prior to this work, no non-trivial bound in terms of $\Delta$ was known. The key technical contribution is a new linear-space representation of the classic position automaton that supports fast state-set transition computation, in near-linear time in the size of the input and output state sets.
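As a minimal illustration of the classic state-set simulation referred to above (a sketch of the textbook Thompson-style approach, not the paper's new position-automaton representation; the NFA class and the hand-built example automaton are assumptions for illustration), note how the work per text position is proportional to the number of currently active states, which is exactly the sparsity that the density parameter $\Delta$ captures:

```python
# Minimal sketch: Thompson-style state-set simulation of an NFA.
# Per-character cost scales with the number of active states, not with m.
from collections import defaultdict

class NFA:
    def __init__(self):
        self.eps = defaultdict(set)     # epsilon transitions: state -> set of states
        self.delta = defaultdict(dict)  # labeled transitions: state -> {char: state}
        self.start, self.accept = 0, 1

    def eps_closure(self, states):
        stack, closure = list(states), set(states)
        while stack:
            s = stack.pop()
            for t in self.eps[s]:
                if t not in closure:
                    closure.add(t)
                    stack.append(t)
        return closure

def matches(nfa, text):
    current = nfa.eps_closure({nfa.start})
    for ch in text:
        nxt = {nfa.delta[s][ch] for s in current if ch in nfa.delta[s]}
        current = nfa.eps_closure(nxt)   # sparse state sets mean little work here
        if not current:
            return False
    return nfa.accept in current

# Example: a hand-built NFA for the expression a b* .
nfa = NFA()
nfa.delta[0]['a'] = 2        # start --a--> 2
nfa.eps[2] = {1, 3}          # 2 --eps--> accept, 2 --eps--> b-loop
nfa.delta[3]['b'] = 2        # b-loop --b--> 2
print(matches(nfa, "abbb"), matches(nfa, "ba"))  # True False
```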
This paper investigates the collaboration of multiple connected and automated vehicles (CAVs) in different scenarios. In general, the collaboration of CAVs can be formulated as a nonlinear and nonconvex model predictive control (MPC) problem. Most existing approaches to solving such an optimization problem suffer from a considerable computational burden, which hinders practical real-time implementation. This paper proposes the use of sequential convex programming (SCP), a powerful approach to solving the nonlinear and nonconvex MPC problem in real time. As a first stage, SCP linearizes and discretizes the nonlinear dynamics of the system model. Based on this linearization and discretization, the original MPC problem can be transformed into a quadratically constrained quadratic programming (QCQP) problem. SCP then applies convexification to handle the associated nonconvex constraints, so the nonconvex QCQP is reduced to a quadratic programming (QP) problem that can be solved quickly. As a result, computational efficiency is substantially improved despite the nonlinear and nonconvex characteristics of the problem, enabling real-time implementation. Furthermore, simulation results in three different scenarios of autonomous driving are presented to validate the effectiveness and efficiency of our proposed approach.
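The core SCP mechanism can be sketched generically: at each iteration the nonconvex constraint is replaced by its linearization around the previous iterate, so every subproblem is convex. The sketch below uses cvxpy, a toy obstacle-avoidance-style constraint, and a box trust region (all of which are illustrative assumptions, not the paper's vehicle model or full MPC formulation):

```python
# Generic SCP sketch: linearize the nonconvex constraint g(x) <= 0 around the
# previous iterate so that each subproblem is a small QP.
import numpy as np
import cvxpy as cp

def g(x):                 # toy nonconvex constraint: stay OUTSIDE the unit disk
    return 1.0 - np.sum(x**2)

def grad_g(x):
    return -2.0 * x

def scp(x_ref, x0, n_iter=20, trust=0.5):
    x_prev = x0.copy()
    for _ in range(n_iter):
        x = cp.Variable(2)
        cost = cp.sum_squares(x - x_ref)                  # convex quadratic objective
        lin = g(x_prev) + grad_g(x_prev) @ (x - x_prev)   # convexified constraint
        prob = cp.Problem(cp.Minimize(cost),
                          [lin <= 0,
                           cp.norm(x - x_prev, "inf") <= trust])  # box trust region
        prob.solve()
        x_prev = x.value
    return x_prev

# Drive the point toward the origin while staying outside the unit disk.
print(scp(x_ref=np.zeros(2), x0=np.array([2.0, 0.0])))   # converges near [1, 0]
```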
In this paper we prove the efficacy of a simple greedy algorithm for a finite-horizon online resource allocation/matching problem when the corresponding static planning linear program (SPP) exhibits a non-degeneracy condition called the general position gap (GPG). The key intuition that we formalize is that the solution of the reward-maximizing SPP is the same as that of a feasibility linear program restricted to the optimal basic activities, and under GPG this solution can be tracked with bounded regret by a greedy algorithm, i.e., without the commonly used technique of periodically re-solving the SPP. The goal of the decision maker is to combine resources (from a finite set of resource types) into configurations (from a finite set of feasible configurations), where each configuration is specified by the number of resources consumed of each type and a reward. The resources are further subdivided into three types: offline (whose quantity is known and available at time 0), online-queueable (which arrive online and can be stored in a buffer), and online-nonqueueable (which arrive online and must be matched on arrival or lost). Under GPG we prove that (i) our greedy algorithm achieves bounded any-time regret of $\mathcal{O}(1/\epsilon_0)$ for the matching reward ($\epsilon_0$ is a measure of the GPG) when no configuration contains both an online-queueable and an online-nonqueueable resource, and (ii) $\mathcal{O}(\log t)$ expected any-time regret otherwise (we also prove a matching lower bound). By considering the three types of resources, our matching framework encompasses several well-studied problems such as dynamic multi-sided matching, network revenue management, online stochastic packing, and multiclass queueing systems.
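To make the "solve once, then track greedily" idea concrete, here is a hypothetical sketch (the LP setup, function names, and the tie-breaking rule are illustrative assumptions, not the paper's algorithm): the SPP is solved a single time, only configurations with positive mass are kept as targets, and each arrival is routed greedily toward the most under-served target without re-solving.

```python
# Hypothetical sketch of "solve the SPP once, then track it greedily".
import numpy as np
from scipy.optimize import linprog

def solve_spp(rewards, usage, budgets):
    """Static planning LP: max rewards^T z  s.t.  usage @ z <= budgets, z >= 0."""
    res = linprog(c=-rewards, A_ub=usage, b_ub=budgets, bounds=(0, None))
    return res.x                      # target mass for each configuration

def greedy_assign(resource_type, usage, targets, assigned):
    """Route one arriving resource toward the most under-served basic configuration."""
    candidates = [j for j in range(usage.shape[1])
                  if usage[resource_type, j] > 0 and assigned[j] < targets[j]]
    if not candidates:
        return None                   # arrival is dropped / left unmatched
    j = max(candidates, key=lambda k: targets[k] - assigned[k])
    assigned[j] += 1
    return j
```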
We consider the problem of estimating a dose-response curve, both globally and locally at a point. Continuous treatments arise often in practice, e.g., in the form of time spent on an operation, distance traveled to a location, or dosage of a drug. Letting $A$ denote a continuous treatment variable, the target of inference is the expected outcome if everyone in the population takes treatment level $A=a$. Under standard assumptions, the dose-response function takes the form of a partial mean. Building upon the recent literature on nonparametric regression with estimated outcomes, we study three different estimators. As a global method, we construct an empirical-risk-minimization-based estimator with an explicit characterization of second-order remainder terms. As a local method, we develop a two-stage, doubly robust (DR) learner. Finally, we construct an $m$th-order estimator based on the theory of higher-order influence functions. Under certain conditions, this higher-order estimator achieves the fastest rate of convergence that we are aware of for this problem. However, the other two approaches are easier to implement using off-the-shelf software, since they are formulated as two-stage regression tasks. For each estimator, we provide an upper bound on the mean-squared error and investigate its finite-sample performance in a simulation. Finally, we describe a flexible, nonparametric method to perform sensitivity analysis to the no-unmeasured-confounding assumption when the treatment is continuous.
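For concreteness, under the usual no-unmeasured-confounding and positivity assumptions the partial-mean form referred to above can be written as (the covariate, outcome, and regression notation $X$, $Y$, $\mu$ is introduced here for illustration)
$$\theta(a) \;=\; \mathbb{E}\{\mu(X, a)\}, \qquad \mu(x, a) \;=\; \mathbb{E}(Y \mid X = x, A = a),$$
so that estimating $\theta(a)$ amounts to averaging the outcome regression $\mu(x, a)$ over the covariate distribution.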
We consider studies where multiple measures on an outcome variable are collected over time, but some subjects drop out before the end of follow-up. Analyses of such data often proceed under either a 'last observation carried forward' or 'missing at random' assumption. We consider two alternative strategies for identification; the first is closely related to the difference-in-differences methodology in the causal inference literature. The second enables correction for violations of the parallel trend assumption, so long as one has access to a valid 'bespoke instrumental variable'. These are compared with existing approaches, first conceptually and then in an analysis of data from the Framingham Heart Study.
Citations in science are studied from several perspectives. On the one hand, there are approaches such as scientometrics and the science of science, which take a more quantitative perspective. In this chapter I briefly review some of the literature on citations, citation distributions, and models of citations. On the other hand, citations feature prominently in the part of the literature that deals with research evaluation and the role of metrics and indicators in that process. Here I briefly review part of the discussion in research evaluation, which also touches on how citations relate to peer review. Finally, I try to integrate the two literatures with the aim of clarifying what I believe the two can learn from each other. The fundamental problem in research evaluation is that research quality is unobservable, which has consequences for the conclusions we can draw from quantitative studies of citations and citation models. The concept of an "indicator" is relevant in this context, and I try to clarify it. Causality is important for properly understanding indicators, especially when indicators are used in practice: when we act on indicators, we enter causal territory. Even when an indicator might once have been valid, the consequences of its very use may invalidate it. By combining citation models with proper causal reasoning and acknowledging the fundamental problem of unobservable research quality, we may hope to make progress.
Classically, for many computational problems one can derive time lower bounds conditioned on the hardness of one or more key problems: k-SAT, 3SUM, and APSP. More recently, similar results have been derived in the quantum setting conditioned on the hardness of k-SAT and 3SUM. This is done using fine-grained reductions, where the approach is to (1) select a key problem $X$ that, for some function $T$, is conjectured not to be solvable by any $O(T(n)^{1-\epsilon})$ time algorithm for any constant $\epsilon > 0$ (in a fixed model of computation), and (2) reduce $X$ in a fine-grained way to these computational problems, thus giving (mostly) tight conditional time lower bounds for them. Interestingly, for $\Delta$-Matching Triangles and Triangle Collection, classical hardness results have been derived conditioned on the hardness of all three key problems mentioned above. More precisely, it is proven that an $n^{3-\epsilon}$ time classical algorithm for either of these two graph problems would imply faster classical algorithms for k-SAT, 3SUM, and APSP, which makes $\Delta$-Matching Triangles and Triangle Collection worthwhile to study. In this paper, we show that an $n^{1.5-\epsilon}$ time quantum algorithm for either of these two graph problems would imply faster quantum algorithms for k-SAT, 3SUM, and APSP. We first formulate a quantum hardness conjecture for APSP and then present quantum reductions from k-SAT, 3SUM, and APSP to $\Delta$-Matching Triangles and Triangle Collection. Additionally, based on the quantum APSP conjecture, we are also able to prove quantum lower bounds for a matrix problem and many graph problems. The matching upper bounds follow trivially for most of them, except for $\Delta$-Matching Triangles and Triangle Collection, for which we present quantum algorithms that require careful use of data structures and Ambainis' variable time search.
Optimizing noisy functions online, when evaluating the objective requires experiments on a deployed system, is a crucial task arising in manufacturing, robotics, and many other domains. Often, constraints on safe inputs are unknown ahead of time, and we only obtain noisy information indicating how close we are to violating the constraints. Yet, safety must be guaranteed at all times, not only for the final output of the algorithm. We introduce a general approach for seeking a stationary point in high-dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial. Our approach, called LB-SGD, is based on applying stochastic gradient descent (SGD) with a carefully chosen adaptive step size to a logarithmic barrier approximation of the original problem. We provide a complete convergence analysis for non-convex, convex, and strongly convex smooth constrained problems, with first-order and zeroth-order feedback. Our approach yields efficient updates and scales better with dimensionality compared to existing approaches. We empirically compare the sample complexity and the computational cost of our method with existing safe learning approaches. Beyond synthetic benchmarks, we demonstrate the effectiveness of our approach on minimizing constraint violation in policy search tasks in safe reinforcement learning (RL).
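The log-barrier-plus-adaptive-step idea can be sketched in a few lines. The step-size rule below (shrink the step near the constraint boundary) is a simplified illustration under assumed smoothness, not the paper's exact adaptive rule, and the toy problem at the end is an assumption for demonstration:

```python
# Simplified LB-SGD-style sketch: gradient descent on a log-barrier surrogate
# with a step size that is capped near the constraint boundary.
import numpy as np

def lb_sgd(grad_f, g, grad_g, x0, eta=0.1, base_lr=0.1, n_steps=500):
    x = x0.copy()
    for _ in range(n_steps):
        gx = g(x)                                   # constraint values, all < 0 inside
        # gradient of the surrogate  f(x) - eta * sum_i log(-g_i(x))
        barrier_grad = grad_f(x) + eta * np.sum(grad_g(x) / (-gx)[:, None], axis=0)
        # adaptive step: shrink near the boundary to help preserve feasibility
        margin = np.min(-gx)
        step = min(base_lr, 0.5 * margin / (np.linalg.norm(barrier_grad) + 1e-12))
        x = x - step * barrier_grad
    return x

# Toy example: minimize ||x - target||^2 while staying inside the unit ball.
target = np.array([2.0, 0.0])
grad_f = lambda x: 2.0 * (x - target)
g = lambda x: np.array([np.sum(x**2) - 1.0])       # g(x) <= 0  <=>  inside the ball
grad_g = lambda x: np.array([2.0 * x])
print(lb_sgd(grad_f, g, grad_g, x0=np.zeros(2)))   # approaches the boundary near [1, 0]
```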
Deep neural networks have exhibited remarkable performance in image super-resolution (SR) tasks by learning a mapping from low-resolution (LR) images to high-resolution (HR) images. However, the SR problem is typically ill-posed, and existing methods come with several limitations. First, the space of possible SR mappings can be extremely large, since many different HR images may be downsampled to the same LR image. As a result, it is hard to directly learn a promising SR mapping from such a large space. Second, yielding promising SR performance often requires very large models with extremely high computational cost. In practice, one can use model compression techniques to obtain compact models by reducing model redundancy. Nevertheless, it is hard for existing model compression methods to accurately identify the redundant components due to the extremely large SR mapping space. To alleviate the first challenge, we propose a dual regression learning scheme to reduce the space of possible SR mappings. Specifically, in addition to the mapping from LR to HR images, we learn an additional dual regression mapping to estimate the downsampling kernel and reconstruct LR images. In this way, the dual mapping acts as a constraint that reduces the space of possible mappings. To address the second challenge, we propose a lightweight dual regression compression method to reduce model redundancy at both the layer level and the channel level, based on channel pruning. Specifically, we first develop a channel number search method that minimizes the dual regression loss to determine the redundancy of each layer. Given the searched channel numbers, we further exploit the dual regression manner to evaluate the importance of channels and prune the redundant ones. Extensive experiments show the effectiveness of our method in obtaining accurate and efficient SR models.
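The dual regression constraint itself is compact to express. The sketch below is a simplified PyTorch rendering of the idea (the class name, the choice of L1 losses, and the weighting factor `lam` are illustrative assumptions; the paper's actual architectures and training setup are more involved): the dual network maps the super-resolved output back to LR, and its reconstruction error regularizes the primal LR-to-HR mapping.

```python
# Simplified sketch of a dual regression training loss for SR.
import torch.nn as nn
import torch.nn.functional as F

class DualRegressionSR(nn.Module):
    def __init__(self, primal: nn.Module, dual: nn.Module, lam: float = 0.1):
        super().__init__()
        self.primal, self.dual, self.lam = primal, dual, lam

    def training_loss(self, lr, hr):
        sr = self.primal(lr)                 # primal mapping: LR -> HR
        lr_rec = self.dual(sr)               # dual mapping:   HR -> LR
        primal_loss = F.l1_loss(sr, hr)      # standard SR reconstruction loss
        dual_loss = F.l1_loss(lr_rec, lr)    # closes the LR -> HR -> LR cycle
        return primal_loss + self.lam * dual_loss
```

Any SR backbone can play the role of the primal model here, while the dual model only needs to be a small downsampling network.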
Graph machine learning has been extensively studied in both academia and industry. However, as the literature on graph learning booms with a vast number of emerging methods and techniques, it becomes increasingly difficult to manually design the optimal machine learning algorithm for different graph-related tasks. To tackle this challenge, automated graph machine learning, which aims at discovering the best hyper-parameter and neural architecture configuration for different graph tasks/data without manual design, is attracting increasing attention from the research community. In this paper, we extensively discuss automated graph machine learning approaches, covering hyper-parameter optimization (HPO) and neural architecture search (NAS) for graph machine learning. We briefly overview existing libraries designed for either graph machine learning or automated machine learning, and then introduce in depth AutoGL, our dedicated library and the first open-source library for automated graph machine learning. Last but not least, we share our insights on future research directions for automated graph machine learning. This paper is the first systematic and comprehensive discussion of approaches, libraries, and directions for automated graph machine learning.
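To make the HPO component of such a pipeline concrete, here is a minimal random-search loop over typical GNN hyper-parameters. This is a generic sketch with hypothetical helper names; it does not use or reflect AutoGL's API, and the search space and the user-supplied `train_and_eval` callback are assumptions:

```python
# Generic random-search HPO sketch over GNN hyper-parameters (not AutoGL's API).
import random

SEARCH_SPACE = {
    "hidden_dim": [16, 32, 64, 128],
    "num_layers": [1, 2, 3],
    "dropout": [0.0, 0.3, 0.5],
    "lr": [1e-3, 5e-3, 1e-2],
}

def random_search(train_and_eval, n_trials=20, seed=0):
    """train_and_eval(config) -> validation score; supplied by the user."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```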
This manuscript portrays optimization as a process. In many practical applications, the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and use classical algorithmic theory and mathematical optimization. It is necessary, as well as beneficial, to take a robust approach, by applying an optimization method that learns as one goes along, learning from experience as more aspects of the problem are observed. This view of optimization as a process has become prominent in varied fields and has led to some spectacular successes in modeling and systems that are now part of our daily lives.