In 1990, Floyd and Knuth investigated register machines which can add, subtract and compare integers as primitive operations. They asked whether their bound on the number of registers needed for fast multiplication and division (running in time linear in the size of the input) can be improved, and whether the powers of two summing up to a positive integer can be output in subquadratic time. Both questions are answered affirmatively. Furthermore, it is shown that every function computed with only one register is automatic, and that automatic functions with one input can be computed with four registers in linear time; automatic functions with more than one input can be computed with five registers in linear time. There is also a nonautomatic function with one input which can be computed with two registers in linear time.
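As an illustrative sketch of the second question, the following Python fragment outputs the powers of two summing up to a positive integer using only addition, subtraction and comparison, mirroring the primitive operations of Floyd and Knuth's register machines; it makes no attempt to model the register count or the subquadratic running time established in the paper.

\begin{verbatim}
def powers_of_two(n):
    # Emit the powers of two summing to n, using only addition,
    # subtraction and comparison (doubling is done by adding a
    # value to itself); register economy is not modelled here.
    out = []
    while n > 0:
        p = 1
        while p + p <= n:
            p = p + p
        out.append(p)
        n = n - p
    return out

# powers_of_two(13) -> [8, 4, 1]
\end{verbatim}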
In many machine learning tasks, a common approach for dealing with large-scale data is to build a small summary, {\em e.g.,} a coreset, that can efficiently represent the original input. However, real-world datasets usually contain outliers, and most existing coreset construction methods are not resilient against them (in particular, an outlier can be located arbitrarily in the space by an adversarial attacker). In this paper, we propose a novel robust coreset method for {\em continuous-and-bounded learning} problems (with outliers), a class which includes a broad range of popular optimization objectives in machine learning, {\em e.g.,} logistic regression and $k$-means clustering. Moreover, our robust coreset can be efficiently maintained in a fully-dynamic environment. To the best of our knowledge, this is the first robust and fully-dynamic coreset construction method for these optimization problems. Another highlight is that our coreset size can depend on the doubling dimension of the parameter space, rather than the VC dimension of the objective function, which could be very large or even challenging to compute. Finally, we conduct experiments on real-world datasets to evaluate the effectiveness of our proposed robust coreset method.
\emph{$k$-best enumeration}, which asks to output the $k$ best solutions without duplication, plays an important role in data analysis in many fields. In such fields, data can typically be represented by graphs, and thus subgraph enumeration has received much attention. However, $k$-best enumeration tends to be intractable since, in many cases, finding even one optimum solution is \NP-hard. To overcome this difficulty, we combine $k$-best enumeration with the recently proposed concept of \emph{approximation enumeration algorithms}. As a main result, we propose an $\alpha$-approximation algorithm for minimal connected edge dominating sets which outputs $k$ minimal solutions with cardinality at most $\alpha\cdot\overline{\rm OPT}$, where $\overline{\rm OPT}$ is the cardinality of a mini\emph{mum} solution which is \emph{not} output by the algorithm, and $\alpha$ is a constant. Moreover, our proposed algorithm runs with $O(nm^2\Delta)$ delay, where $n$, $m$, and $\Delta$ are the number of vertices, the number of edges, and the maximum degree of the input graph, respectively.
The voter process is a classic stochastic process that models the invasion of a mutant trait $A$ (e.g., a new opinion, belief, legend, genetic mutation, magnetic spin) in a population of agents (e.g., people, genes, particles) who share a resident trait $B$, spread over the nodes of a graph. An agent may adopt the trait of one of its neighbors at any time, while the invasion bias $r\in(0,\infty)$ quantifies the stochastic preference towards ($r>1$) or against ($r<1$) adopting $A$ over $B$. Success is measured in terms of the fixation probability, i.e., the probability that eventually all agents have adopted the mutant trait $A$. In this paper we study the problem of fixation probability maximization under this model: given a budget $k$, find a set of $k$ agents to initiate the invasion that maximizes the fixation probability. We show that the problem is NP-hard for both $r>1$ and $r<1$, while the latter case is also inapproximable within any multiplicative factor. On the positive side, we show that when $r>1$, the optimization function is submodular and thus can be greedily approximated within a factor $1-1/e$. An experimental evaluation of some proposed heuristics corroborates our results.
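For intuition about the positive result for $r>1$, the following Python sketch pairs a crude Monte Carlo estimate of the fixation probability with the standard greedy rule for submodular maximization; the update rule, the bias handling and the trial count are simplifying assumptions rather than the exact process analyzed in the paper, and the sketch is only practical for small connected graphs.

\begin{verbatim}
import random

def estimate_fixation(graph, seeds, r=1.5, trials=200):
    # crude Monte Carlo estimate: run a biased voter dynamic until
    # the mutant trait A fixates or goes extinct (small graphs only)
    nodes = list(graph)
    wins = 0
    for _ in range(trials):
        state = {v: v in seeds for v in nodes}            # True = trait A
        while 0 < sum(state.values()) < len(nodes):
            v = random.choice(nodes)
            u = random.choice(graph[v])                    # copy a neighbour,
            p = r / (r + 1) if state[u] else 1 / (r + 1)   # biased towards A
            if random.random() < p:
                state[v] = state[u]
        wins += all(state.values())
    return wins / trials

def greedy_seeds(graph, k, r=1.5):
    # standard greedy rule, (1 - 1/e)-approximate when the
    # objective is submodular (the r > 1 case)
    chosen = set()
    for _ in range(k):
        best = max((v for v in graph if v not in chosen),
                   key=lambda v: estimate_fixation(graph, chosen | {v}, r))
        chosen.add(best)
    return chosen
\end{verbatim}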
In many real-life situations, we are faced with combinatorial problems under linear cost functions. In this paper, we propose a fast method for exactly enumerating a very large number of all lower-cost solutions for various combinatorial problems. Our method is based on backtracking over a given decision diagram which represents the feasible solutions of a problem. The main idea is to memoize intervals of cost bounds to avoid duplicate search in the backtracking process. Existing state-of-the-art methods based on dynamic programming require pseudo-polynomial time in the total cost values, and may therefore take exponential time when the cost values become large. In contrast, the computation time of the proposed method does not directly depend on the total cost values, but is bounded by the input and output size of the decision diagrams. Therefore, it can be much faster than the existing methods when the cost values are large but the output decision diagrams are well compressed. Our experimental results show that, for some practical instances of the Hamiltonian path problem, we succeeded in exactly enumerating billions of lower-cost solutions in a few seconds, a hundred or more times faster than existing methods. Our method can be regarded as a novel search algorithm which integrates two classical techniques, branch-and-bound and dynamic programming. It has potential applications in many fields, including operations research, data mining, statistical testing, and hardware/software system design.
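The pruning idea can be sketched in Python as follows: the decision diagram is modelled as nested (variable, low-child, high-child) tuples with Boolean terminals, and a memoized per-node lower bound cuts off any branch whose cheapest completion already exceeds the cost bound. This only illustrates the backtracking-with-memoization principle; the interval memoization and the output-size guarantees of the paper are not reproduced.

\begin{verbatim}
def min_cost(node, cost, memo):
    # cheapest cost of any path from `node` to the True terminal
    if node is True:
        return 0
    if node is False:
        return float('inf')
    if id(node) not in memo:
        var, lo, hi = node
        memo[id(node)] = min(min_cost(lo, cost, memo),
                             cost[var] + min_cost(hi, cost, memo))
    return memo[id(node)]

def enumerate_low_cost(node, cost, budget, chosen=(), acc=0, memo=None):
    # yield every solution (variables set to 1) with total cost <= budget
    memo = {} if memo is None else memo
    if node is False:
        return
    if node is True:
        yield chosen
        return
    if acc + min_cost(node, cost, memo) > budget:
        return   # prune: even the cheapest completion is too expensive
    var, lo, hi = node
    yield from enumerate_low_cost(lo, cost, budget, chosen, acc, memo)
    yield from enumerate_low_cost(hi, cost, budget,
                                  chosen + (var,), acc + cost[var], memo)
\end{verbatim}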
Quantum error mitigation (QEM) is a class of promising techniques for reducing the computational error of variational quantum algorithms. In general, the computational error reduction comes at the cost of a sampling overhead due to the variance-boosting effect caused by the channel inversion operation, which ultimately limits the applicability of QEM. Existing sampling overhead analyses of QEM typically assume exact channel inversion, which is unrealistic in practical scenarios. In this treatise, we consider a practical channel inversion strategy based on Monte Carlo sampling, which introduces additional computational error that can in turn be eliminated at the cost of an extra sampling overhead. In particular, we show that when the computational error is small compared to the dynamic range of the error-free results, it scales with the square root of the number of gates. By contrast, the error exhibits a linear scaling with the number of gates in the absence of QEM under the same assumptions. Hence, the error scaling of QEM remains preferable even without the extra sampling overhead. Our analytical results are accompanied by numerical examples.
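A toy numerical check of the claimed scaling, under the simplifying assumption that each of $G$ gates contributes either an independent zero-mean fluctuation of size $\epsilon$ (the mitigated, unbiased case) or a systematic bias $\epsilon$ (the unmitigated case); the constants and the noise model are illustrative assumptions only.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3
for G in (100, 400, 1600):
    # mitigated: G independent zero-mean fluctuations -> RMS error ~ eps*sqrt(G)
    mitigated = rng.normal(0.0, eps, size=(10000, G)).sum(axis=1)
    rms = np.sqrt((mitigated ** 2).mean())
    # unmitigated: a systematic per-gate bias accumulates -> error ~ eps*G
    print(G, round(rms, 4), eps * G)
\end{verbatim}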
We prove a lower bound on the probability of Shor's order-finding algorithm successfully recovering the order $r$ in a single run. The bound implies that by performing two limited searches in the classical post-processing part of the algorithm, a high success probability can be guaranteed, for any $r$, without re-running the quantum part or increasing the exponent length compared to Shor. Asymptotically, in the limit as $r$ tends to infinity, the probability of successfully recovering $r$ in a single run tends to one. Already for moderate $r$, a high success probability exceeding e.g. $1 - 10^{-4}$ can be guaranteed. As corollaries, we prove analogous results for the probability of completely factoring any integer $N$ in a single run of the order-finding algorithm.
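A minimal sketch of the classical post-processing step, assuming the standard continued-fraction recovery: the outcome $j$ of an $m$-qubit control register is close to $(k/r)\,2^m$ for some $k$, so a convergent of $j/2^m$ with denominator below $N$ reveals a divisor of $r$, and a small search over multiples of that denominator recovers $r$ itself. The two limited searches of the paper are more refined; the search bound below is an assumption.

\begin{verbatim}
from fractions import Fraction

def recover_order(j, m, a, N, max_multiple=64):
    # denominator of the best rational approximation of j / 2^m with
    # denominator < N; for a good outcome j this divides the order r
    q = Fraction(j, 2 ** m).limit_denominator(N - 1).denominator
    for t in range(1, max_multiple + 1):
        if pow(a, q * t, N) == 1:   # verify candidate order classically
            return q * t
    return None
\end{verbatim}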
We consider answering queries on data available through access methods that provide lookup access to the tuples matching a given binding. Such interfaces are common on the Web; further, they often have bounds on how many results they can return, e.g., because of pagination or rate limits. We thus study result-bounded methods, which may return only a limited number of tuples. We study how to decide if a query is answerable using result-bounded methods, i.e., how to compute a plan that returns all answers to the query using the methods, assuming that the underlying data satisfies some integrity constraints. We first show how to reduce answerability to a query containment problem with constraints. Second, we show "schema simplification" theorems describing when and how result-bounded services can be used. Finally, we use these theorems to give decidability and complexity results about answerability for common constraint classes.
Logistic Bandits have recently undergone careful scrutiny by virtue of their combined theoretical and practical relevance. This research effort delivered statistically efficient algorithms, improving the regret of previous strategies by exponentially large factors. Such algorithms are, however, strikingly costly, as they require $\Omega(t)$ operations at each round. On the other hand, a different line of research focused on computational efficiency ($\mathcal{O}(1)$ per-round cost), but at the cost of letting go of the aforementioned exponential improvements. Obtaining the best of both worlds is unfortunately not a matter of simply marrying the two approaches. Instead, we introduce a new learning procedure for Logistic Bandits. It yields confidence sets whose sufficient statistics can be easily maintained online without sacrificing statistical tightness. Combined with efficient planning mechanisms, we design fast algorithms whose regret performance still matches the problem-dependent lower bound of Abeille et al. (2021). To the best of our knowledge, these are the first Logistic Bandit algorithms that simultaneously enjoy statistical and computational efficiency.
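The flavour of "sufficient statistics that can be maintained online" can be illustrated with a generic rank-one (Sherman-Morrison) update of a regularised design matrix and its inverse, giving $\mathcal{O}(d^2)$ work per round; the actual confidence sets of the paper use logistic-specific statistics, so the Python class below is only an analogy.

\begin{verbatim}
import numpy as np

class OnlineDesign:
    # maintains V_t = lam*I + sum_s x_s x_s^T together with its inverse,
    # updated in O(d^2) per round via the Sherman-Morrison formula
    def __init__(self, d, lam=1.0):
        self.V = lam * np.eye(d)
        self.V_inv = np.eye(d) / lam

    def update(self, x):
        self.V += np.outer(x, x)
        Vx = self.V_inv @ x
        self.V_inv -= np.outer(Vx, Vx) / (1.0 + x @ Vx)

    def bonus(self, x):
        # exploration width ~ ||x||_{V^{-1}}, a usual confidence-set ingredient
        return float(np.sqrt(x @ (self.V_inv @ x)))
\end{verbatim}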
Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world. However, existing datasets are typically dominated by questions that can be solved well by context matching, and thus fail to test this capability. To encourage progress on knowledge-based reasoning in MRC, we present knowledge-based MRC in this paper, and build a new dataset consisting of 40,047 question-answer pairs. The annotation of this dataset is designed so that successfully answering the questions requires understanding and making use of the knowledge involved in a document. We implement a framework consisting of both a question answering model and a question generation model, both of which take as input the knowledge extracted from the document as well as relevant facts from an external knowledge base such as Freebase/ProBase/Reverb/NELL. Results show that incorporating side information from the external KB improves the accuracy of the baseline question answering system. We compare it with a standard MRC model, BiDAF, analyze the difficulty of the dataset, and lay out remaining challenges.
Knowledge graph embedding aims to embed the entities and relations of knowledge graphs into low-dimensional vector spaces. Translating embedding methods regard relations as translations from head entities to tail entities, and achieve state-of-the-art results among knowledge graph embedding methods. However, a major limitation of these methods is the time-consuming training process, which may take several days or even weeks for large knowledge graphs and causes great difficulty in practical applications. In this paper, we propose an efficient parallel framework for translating embedding methods, called ParTrans-X, which enables these methods to be parallelized without locks by exploiting the distinguished structures of knowledge graphs. Experiments on two datasets with three typical translating embedding methods, i.e., TransE [3], TransH [17], and a more efficient variant TransE-AdaGrad [10], validate that ParTrans-X can speed up the training process by more than an order of magnitude.
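The lock-free idea can be sketched in a Hogwild-style fashion: each training triple touches only three embedding rows, so the sparse structure of a knowledge graph makes conflicting concurrent updates rare. The Python snippet below is an illustrative simplification of TransE updates (hyper-parameters, the sampling of corrupted triples and the thread model are assumptions), not the ParTrans-X framework itself.

\begin{verbatim}
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def transe_step(E, R, pairs, lr=0.01, margin=1.0):
    # pairs: (h, r, t, h', t') = a positive triple and a corrupted version
    for h, r, t, h2, t2 in pairs:
        pos, neg = E[h] + R[r] - E[t], E[h2] + R[r] - E[t2]
        if margin + np.linalg.norm(pos) - np.linalg.norm(neg) > 0:
            g_pos = pos / (np.linalg.norm(pos) + 1e-9)
            g_neg = neg / (np.linalg.norm(neg) + 1e-9)
            E[h] -= lr * g_pos;  E[t] += lr * g_pos
            E[h2] += lr * g_neg; E[t2] -= lr * g_neg
            R[r] -= lr * (g_pos - g_neg)

def parallel_epoch(E, R, batches, workers=4):
    # no locks: workers write concurrently to the shared E and R arrays
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda b: transe_step(E, R, b), batches))
\end{verbatim}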