We study the query version of the approximate heavy hitter and quantile problems. In the former problem, the input is a parameter $\varepsilon$ and a set $P$ of $n$ points in $\mathbb{R}^d$ where each point is assigned a color from a set $C$, and we want to build a structure such that, given any geometric range $\gamma$, we can efficiently find a list of approximate heavy hitters in $\gamma\cap P$, i.e., colors that appear at least $\varepsilon |\gamma \cap P|$ times in $\gamma \cap P$, as well as their frequencies with an additive error of $\varepsilon |\gamma \cap P|$. In the latter problem, each point is assigned a weight from a totally ordered universe and the query must output a sequence $S$ of $1+1/\varepsilon$ weights such that the $i$-th weight in $S$ has approximate rank $i\varepsilon|\gamma\cap P|$, i.e., rank $i\varepsilon|\gamma\cap P|$ up to an additive error of $\varepsilon|\gamma\cap P|$. Previously, optimal results were only known in 1D [WY11], but a few sub-optimal methods were available in higher dimensions [AW17, ACH+12]. We study the problems for 3D halfspace and dominance queries. We consider the real RAM model with integer registers of size $w=\Theta(\log n)$ bits. For dominance queries, we show optimal solutions for both the heavy hitter and quantile problems: using linear space, we can answer both queries in time $O(\log n + 1/\varepsilon)$. Note that as the output size is $1/\varepsilon$, after investing the initial $O(\log n)$ search time, our structure takes on average $O(1)$ time to find a heavy hitter or a quantile! For the more general halfspace heavy hitter queries, the same optimal query time can be achieved by increasing the space by an extra $\log_w\frac{1}{\varepsilon}$ (resp. $\log\log_w\frac{1}{\varepsilon}$) factor in 3D (resp. 2D). By spending extra $\log^{O(1)}\frac{1}{\varepsilon}$ factors in time and space, we can also support quantile queries.
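As a concrete reference for the query semantics in the dominance case (here a point is taken to lie in the range of a query point $q$ if it is coordinate-wise at most $q$), a brute-force version of the heavy hitter query can be written as below; the structure in the paper answers the same query in $O(\log n + 1/\varepsilon)$ time rather than by scanning, and only needs frequencies up to an additive $\varepsilon|\gamma\cap P|$ error.

```python
from collections import Counter

def dominance_heavy_hitters(points, colors, q, eps):
    """Brute-force reference for the approximate heavy hitter query semantics.

    points : list of (x, y, z) tuples; colors : parallel list of color ids;
    q      : 3D query point (the range is every point coordinate-wise <= q);
    eps    : error parameter.
    Returns every color whose frequency in the range is at least eps * |range|,
    together with its exact frequency, plus the range size.
    """
    in_range = [c for p, c in zip(points, colors)
                if all(p[i] <= q[i] for i in range(3))]
    k = len(in_range)
    counts = Counter(in_range)
    return {c: f for c, f in counts.items() if f >= eps * k}, k
```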
As their size increases, Large Language Models (LLMs) are natural candidates for network pruning methods: approaches that drop a subset of network weights while striving to preserve performance. Existing methods, however, require either retraining, which is rarely affordable for billion-scale LLMs, or solving a weight reconstruction problem reliant on second-order information, which may also be computationally expensive. In this paper, we introduce a novel, straightforward yet effective pruning method, termed Wanda (Pruning by Weights and activations), designed to induce sparsity in pretrained LLMs. Motivated by the recent observation of emergent large-magnitude features in LLMs, our approach prunes weights with the smallest magnitudes multiplied by the corresponding input activations, on a per-output basis. Notably, Wanda requires no retraining or weight update, and the pruned LLM can be used as is. We conduct a thorough evaluation of our method on LLaMA across various language benchmarks. Wanda significantly outperforms the established baseline of magnitude pruning and competes favorably against recent methods involving intensive weight updates. Code is available at https://github.com/locuslab/wanda.
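A minimal PyTorch sketch of the pruning rule described above (illustrative only, not the official implementation at the repository): each weight is scored by its magnitude times the norm of the corresponding input activation, and the lowest-scoring weights in each output row are zeroed.

```python
import torch

def wanda_prune_layer(W, X, sparsity=0.5):
    """Sketch of the Wanda-style pruning rule for one linear layer.

    W : (out_features, in_features) weight matrix.
    X : (num_tokens, in_features) input activations from calibration data.
    Each weight is scored by |weight| * l2-norm of its input feature; the
    lowest-scoring weights are zeroed per output row (no weight update).
    """
    act_norm = X.norm(p=2, dim=0)           # per-input-feature norm, (in_features,)
    scores = W.abs() * act_norm             # broadcast over output rows
    k = int(W.shape[1] * sparsity)          # weights to drop in each row
    _, idx = torch.topk(scores, k, dim=1, largest=False)
    mask = torch.ones_like(W)
    mask.scatter_(1, idx, 0.0)              # zero out the k smallest per row
    return W * mask
```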
This paper considers an ML-inspired approach to hypothesis testing known as classifier/classification-accuracy testing ($\mathsf{CAT}$). In $\mathsf{CAT}$, one first trains a classifier by feeding it labeled synthetic samples generated by the null and alternative distributions, and then uses it to predict labels of the actual data samples. This method is widely used in practice when the null and alternative are only specified via simulators (as in many scientific experiments). We study goodness-of-fit, two-sample ($\mathsf{TS}$) and likelihood-free hypothesis testing ($\mathsf{LFHT}$), and show that $\mathsf{CAT}$ achieves (near-)minimax optimal sample complexity in both the dependence on the total-variation ($\mathsf{TV}$) separation $\epsilon$ and the probability of error $\delta$ in a variety of non-parametric settings, including discrete distributions, $d$-dimensional distributions with a smooth density, and the Gaussian sequence model. In particular, we settle the high-probability sample complexity of $\mathsf{LFHT}$ for each of these classes. As another highlight, we recover the minimax optimal complexity of $\mathsf{TS}$ over discrete distributions, which was recently established by Diakonikolas et al. (2021). The corresponding $\mathsf{CAT}$ simply compares empirical frequencies in the first half of the data, and rejects the null when the classification accuracy on the second half is better than random.
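For the discrete two-sample case, the test described in the last sentence can be written in a few lines; the sketch below is only illustrative of the split-train-then-score scheme, and the rejection threshold is left as a user-chosen parameter rather than the calibrated choice analyzed in the paper.

```python
import numpy as np

def cat_two_sample_test(x, y, threshold):
    """Illustrative classification-accuracy two-sample test on discrete data.

    x, y : 1-d arrays of non-negative integer symbols from the two samples
           (label 0 for x-samples, label 1 for y-samples).
    A 'classifier' is fit on the first halves by comparing empirical
    frequencies symbol by symbol; the null (equal distributions) is rejected
    when its accuracy on the second halves exceeds 1/2 + threshold.
    """
    n = min(len(x), len(y)) // 2
    x_train, x_test = x[:n], x[n:2 * n]
    y_train, y_test = y[:n], y[n:2 * n]

    k = int(max(x.max(), y.max())) + 1            # alphabet size
    fx = np.bincount(x_train, minlength=k) / n    # empirical frequencies
    fy = np.bincount(y_train, minlength=k) / n
    predict_one = fy > fx                         # predict label 1 on these symbols

    correct = np.sum(~predict_one[x_test]) + np.sum(predict_one[y_test])
    accuracy = correct / (2 * n)
    return accuracy > 0.5 + threshold, accuracy
```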
We study the problem of social welfare maximization in bilateral trade, where two agents, a buyer and a seller, trade an indivisible item. We consider arguably the simplest form of mechanisms, the fixed-price mechanisms, where the designer offers trade at a fixed price to the seller and buyer. Besides their simple form, fixed-price mechanisms are also the only DSIC and budget-balanced mechanisms in bilateral trade. We obtain improved approximation ratios of fixed-price mechanisms in different settings. In the full prior information setting, where the designer has access to the value distributions of both the seller and the buyer, we show that the optimal fixed-price mechanism can achieve at least $0.72$ of the optimal welfare, and no fixed-price mechanism can achieve more than $0.7381$ of the optimal welfare. Prior to our result, the state-of-the-art approximation ratio was $1 - 1/e + 0.0001 \approx 0.632$. Interestingly, we further show that the optimal approximation ratio achievable with full prior information is identical to the optimal approximation ratio obtainable with only one-sided prior information. We further consider two limited information settings. In the first, the designer is only given the mean of the buyer's (or the seller's) value. We show that with such minimal information, one can already design a fixed-price mechanism that achieves $2/3$ of the optimal social welfare, which surpasses the previous state-of-the-art ratio even when the designer has access to the full prior information. Furthermore, $2/3$ is the optimal attainable ratio in this setting. In the second, we assume that the designer has sample access to the value distributions. We propose a new family of mechanisms called order statistic mechanisms and provide a complete characterization of their approximation ratios for any fixed number of samples.
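As a small illustration of the objects involved (not of the price rules that achieve the bounds above), the Monte Carlo sketch below estimates the welfare ratio of an arbitrary fixed price under sampled priors; the uniform priors and the price of $0.5$ in the example are placeholders.

```python
import numpy as np

def fixed_price_welfare_ratio(price, seller_values, buyer_values):
    """Monte Carlo sketch: realized vs. optimal welfare of a fixed-price mechanism.

    Trade happens exactly when the seller's value is at most the price and the
    buyer's value is at least the price (both agents weakly gain).  Realized
    welfare is the buyer's value if trade happens and the seller's value
    otherwise; optimal welfare always allocates the item to the higher value.
    seller_values, buyer_values : arrays of i.i.d. draws from the two priors.
    """
    s, b = np.asarray(seller_values), np.asarray(buyer_values)
    trade = (s <= price) & (b >= price)
    realized = np.where(trade, b, s)
    optimal = np.maximum(s, b)
    return realized.mean() / optimal.mean()

# example: uniform priors on [0, 1] and an arbitrary price of 0.5
rng = np.random.default_rng(0)
print(fixed_price_welfare_ratio(0.5, rng.random(10**6), rng.random(10**6)))
```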
Consider a two-person zero-sum search game between a Hider and a Searcher. The Hider chooses to hide in one of $n$ discrete locations (or "boxes") and the Searcher chooses a search sequence specifying the order in which to look in these boxes until finding the Hider. A search at box $i$ takes $t_i$ time units and, if the Hider is hidden there, finds them independently with probability $q_i$, for $i=1,\ldots,n$. The Searcher wants to minimize the expected total time needed to find the Hider, while the Hider wants to maximize it. It is shown in the literature that the Searcher has an optimal search strategy that mixes up to $n$ distinct search sequences with appropriate probabilities. This paper investigates the existence of optimal pure strategies for the Searcher, that is, a single deterministic search sequence that achieves the optimal expected total search time regardless of where the Hider hides. We identify several cases in which the Searcher has an optimal pure strategy, and several cases in which such an optimal pure strategy does not exist. An optimal pure search strategy has significant practical value because the Searcher does not need to randomize their actions and avoids second-guessing themselves if a search sequence chosen from an optimal mixed strategy does not turn out well.
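For concreteness, the expected total search time of a given pure (deterministic) search sequence against a fixed hiding location can be evaluated directly, as in the sketch below; this is illustrative only, and the supplied sequence must be long enough that the chance of not yet having found the Hider at its end is negligible.

```python
def expected_search_time(sequence, hide_box, t, q, tol=1e-12):
    """Expected time for a pure search sequence to find a Hider in `hide_box`.

    sequence : iterable of box indices in the order they are searched;
    t[i], q[i] : search time and detection probability of box i.
    Each search of `hide_box` succeeds independently with probability
    q[hide_box]; searches of other boxes only cost time.
    """
    expected, p_not_found = 0.0, 1.0
    for box in sequence:
        if p_not_found < tol:                 # remaining contribution negligible
            break
        expected += p_not_found * t[box]      # this search happens w.p. p_not_found
        if box == hide_box:
            p_not_found *= 1.0 - q[box]
    return expected
```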
A fundamental problem in data management is to find the elements in an array that match a query. Recently, learned indexes have been used extensively to solve this problem: they learn a model to predict the location of the items in the array. They have been shown empirically to outperform non-learned methods (e.g., B-trees or binary search, which answer queries in $O(\log n)$ time) by orders of magnitude. However, the success of learned indexes has not been theoretically justified. The only existing attempt shows the same $O(\log n)$ query time, but with a constant-factor improvement in space complexity over non-learned methods, under some assumptions on the data distribution. In this paper, we significantly strengthen this result, showing that under mild assumptions on the data distribution, and with the same space complexity as non-learned methods, learned indexes can answer queries in $O(\log\log n)$ expected query time. We also show that, allowing for a slightly larger but still near-linear space overhead, a learned index can achieve $O(1)$ expected query time. Our results theoretically prove that learned indexes are orders of magnitude faster than non-learned methods, grounding their empirical success.
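To make the idea concrete, here is a minimal caricature of a learned index (a single linear model plus a bounded local search); it illustrates the predict-then-correct lookup pattern only, not the constructions or assumptions analyzed in the paper.

```python
import bisect
import numpy as np

def build_linear_index(keys):
    """Fit one linear model mapping key -> position in the sorted array,
    and record the worst-case prediction error over the data."""
    k = np.asarray(keys, dtype=float)
    pos = np.arange(len(k))
    slope, intercept = np.polyfit(k, pos, 1)
    max_err = int(np.max(np.abs(slope * k + intercept - pos))) + 1
    return slope, intercept, max_err

def lookup(keys, model, query):
    """Predict the position, then binary-search only inside the error window."""
    slope, intercept, max_err = model
    guess = int(slope * query + intercept)
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    i = bisect.bisect_left(keys, query, lo, hi)
    return i if i < len(keys) and keys[i] == query else -1
```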
Machine learning methods are commonly evaluated and compared by their performance on data sets from public repositories. This allows multiple methods, oftentimes several thousands, to be evaluated under identical conditions and across time. The highest ranked performance on a problem is referred to as state-of-the-art (SOTA) performance, and is used, among other things, as a reference point for publication of new methods. Using the highest-ranked performance as an estimate of SOTA is a biased estimator, giving overly optimistic results. The mechanisms at play are those of multiplicity, a topic that is well studied in the context of multiple comparisons and multiple testing, but that has, as far as the authors are aware, been nearly absent from the discussion of SOTA estimates. The optimistic state-of-the-art estimate is used as a standard for evaluating new methods, and methods with substantially inferior results are easily overlooked. In this article, we provide a probability distribution for the case of multiple classifiers so that known analysis methods can be applied and a better SOTA estimate can be provided. We demonstrate the impact of multiplicity through a simulated example with independent classifiers. We show how classifier dependency affects the variance, but also that its impact is limited when the accuracy is high. Finally, we discuss a real-world example: a Kaggle competition from 2020.
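The optimistic bias of the best-observed-accuracy estimate is easy to reproduce in a small simulation; the sketch below uses independent classifiers with a common true accuracy (the parameters are illustrative and not taken from the article).

```python
import numpy as np

def sota_bias(n_classifiers, test_size, true_accuracy, n_sims=10_000, seed=0):
    """Simulate the bias of taking the best observed accuracy as the SOTA estimate
    when many independent classifiers share the same true accuracy.

    Each classifier's observed accuracy is Binomial(test_size, true_accuracy)
    divided by test_size; returns the expected maximum over classifiers.
    """
    rng = np.random.default_rng(seed)
    obs = rng.binomial(test_size, true_accuracy,
                       size=(n_sims, n_classifiers)) / test_size
    return obs.max(axis=1).mean()

# e.g. 1000 equally good classifiers on a test set of 10,000 examples:
print(sota_bias(1000, 10_000, 0.90))   # noticeably above the true 0.90
```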
We consider the problem of answering connectivity queries on a real algebraic curve. The curve is given as the real trace of an algebraic curve that is assumed to be in generic position and defined by rational parametrizations. The query points are given by a zero-dimensional parametrization. We design an algorithm which counts the number of connected components of the real curve under study and decides which query points lie in which connected components, in time log-linear in $N^6$, where $N$ is the maximum of the degrees and coefficient bit-sizes of the polynomials given as input. This matches the currently best-known bound for computing the topology of real plane curves. The main novelty of this algorithm is that it avoids computing the complete topology of the curve.
We consider the effects of allowing a finite-state verifier in an interactive proof system to use a bounded number of private coins, in addition to "public" coins whose outcomes are visible to the prover. Although swapping between private- and public-coin machines does not change the class of verifiable languages when the verifiers are given reasonably large time and space bounds, this distinction has well-known effects on the capabilities of constant-space verifiers. We show that a constant private-coin "budget" (independent of the length of the input) increases the power of public-coin interactive proofs with finite-state verifiers considerably, and provide a new characterization of the complexity class $\rm P$ as the set of languages that are verifiable by such machines with arbitrarily small error in expected polynomial time.
Reliable probabilistic primality tests are fundamental in public-key cryptography. In adversarial scenarios, a composite with a high probability of passing a specific primality test could be chosen. In such cases, we need worst-case error estimates for the test. However, in many scenarios the numbers are chosen at random and thus have a significantly smaller error probability. Therefore, we are interested in average-case error estimates. In this paper, we establish such bounds for the strong Lucas primality test, for which only worst-case, but no average-case, error bounds are currently available. This allows us to use this test with more confidence. We examine an algorithm that draws odd $k$-bit integers uniformly and independently, runs $t$ independent iterations of the strong Lucas test with randomly chosen parameters, and outputs the first number that passes all $t$ rounds. We obtain numerical upper bounds on the probability of returning a composite. Furthermore, we consider a modified version of this algorithm that excludes integers divisible by small primes, resulting in improved bounds. Additionally, we classify the numbers that contribute most to our estimate.
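The generation procedure under study can be summarized in a short sketch; the strong Lucas test itself and the parameter distribution are left as caller-supplied routines with hypothetical names, and the details of the paper's parameter sampling are not reproduced here.

```python
import random

def random_strong_lucas_candidate(k, t, strong_lucas_test, draw_parameters):
    """Sketch of the analysed generation procedure (hypothetical helper names).

    Draws odd k-bit integers uniformly and independently and returns the first
    one that passes t independent rounds of the strong Lucas test.
    `strong_lucas_test(n, P, Q)` should return True iff n is a strong Lucas
    probable prime for parameters (P, Q); `draw_parameters(n)` should sample a
    random admissible pair (P, Q) for n.  Both are supplied by the caller.
    """
    while True:
        # uniform odd k-bit integer: top and bottom bits forced to 1
        n = random.getrandbits(k - 1) | (1 << (k - 1)) | 1
        if all(strong_lucas_test(n, *draw_parameters(n)) for _ in range(t)):
            return n
```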
In this brief note, we consider estimation of the bitwise combination $x_1 \lor \dots \lor x_n = \max_i x_i$ from a set of noisy bits $\tilde x_i \in \{0, 1\}$ that represent the true, unobserved bits $x_i \in \{0, 1\}$ under randomized response. We demonstrate that various existing estimators for the extreme bit, including those based on computationally costly estimates of the sum of bits, can be reduced to a simple closed form computed in linear time (in $n$) and constant space, including in an online fashion as new $\tilde x_i$ are observed. In particular, we derive such an estimator and provide its variance using only elementary techniques.
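As an illustration of such a closed-form estimator, the sketch below assumes the symmetric randomized-response model in which each bit is reported truthfully with probability $p > 1/2$ and flipped otherwise; the note's exact mechanism and estimator may differ in the details. Since $1 - \max_i x_i = \prod_i (1 - x_i)$ and $(\tilde x_i - (1-p))/(2p-1)$ is an unbiased estimate of $x_i$, multiplying the per-bit estimates of $1 - x_i$ gives an unbiased estimate of $1 - \max_i x_i$ in one pass and constant space.

```python
def estimate_or(noisy_bits, p):
    """Closed-form unbiased estimate of max_i x_i from randomized-response bits.

    Assumes the symmetric-flip model: each true bit is reported correctly with
    probability p > 1/2 and flipped otherwise.  Streams over the bits once,
    so it also works online as new noisy bits arrive.
    """
    prod = 1.0
    for xt in noisy_bits:
        prod *= (p - xt) / (2 * p - 1)    # unbiased estimate of 1 - x_i
    return 1.0 - prod                      # unbiased estimate of max_i x_i
```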