Quantum computing devices can now perform sampling tasks which, according to complexity-theoretic and numerical evidence, are beyond the reach of classical computers. This raises the question of how one can efficiently verify that a quantum computer operating in this regime works as intended. In 2008, Shepherd and Bremner proposed a protocol in which a verifier constructs a unitary from the comparatively easy-to-implement family of so-called IQP circuits, and challenges a prover to execute it on a quantum computer. The challenge problem is designed to contain an obfuscated secret, which can be turned into a statistical test that accepts samples from a correct quantum implementation. It was conjectured that extracting the secret from the challenge problem is NP-hard, so that the ability to pass the test constitutes strong evidence that the prover possesses a quantum device and that it works as claimed. Unfortunately, about a decade later, Kahanamoku-Meyer found an efficient classical secret extraction attack. Bremner, Cheng, and Ji very recently followed up by constructing a wide-ranging generalization of the original protocol. Their IQP Stabilizer Scheme has been explicitly designed to circumvent the known weakness. They also suggested that the original construction can be made secure by adjusting the problem parameters. In this work, we develop a number of secret extraction attacks which are effective against both new approaches in a wide range of problem parameters. The important problem of finding an efficient and reliable verification protocol for sampling-based proofs of quantum supremacy thus remains open.
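To make the role of the obfuscated secret concrete, the following sketch shows the general shape of such a statistical test, assuming (purely for illustration) that the secret is a binary string $s$ and that samples $x$ from a faithful quantum implementation are biased towards $x \cdot s = 0 \pmod 2$. The function name, the threshold of 0.75, and the acceptance rule are illustrative choices, not the protocol's actual parameters.

    def iqp_bias_test(samples, secret, threshold=0.75):
        """Accept if the empirical fraction of samples x with x . s = 0 (mod 2)
        exceeds the threshold; `samples` is a list of 0/1 lists, `secret` a 0/1 list.
        The threshold is illustrative, not the value prescribed by the protocol."""
        hits = sum(1 for x in samples
                   if sum(xi & si for xi, si in zip(x, secret)) % 2 == 0)
        return hits / len(samples) >= threshold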
One of the challenges of natural language understanding is dealing with the subjectivity of sentences, which may express opinions and emotions that add layers of complexity and nuance. Sentiment analysis is a field that aims to extract and analyze these subjective elements from text, and it can be applied at different levels of granularity, such as document, paragraph, sentence, or aspect. Aspect-based sentiment analysis is a well-studied topic with many available datasets and models. However, there is no clear definition of what makes a sentence difficult for aspect-based sentiment analysis. In this paper, we explore this question by conducting an experiment with three datasets, "Laptops", "Restaurants", and "MTSC" (Multi-Target-dependent Sentiment Classification), as well as a merged version of all three. We study the impact of domain diversity and syntactic diversity on difficulty, and we use a combination of classifiers to identify the most difficult sentences and analyze their characteristics. We employ two ways of defining sentence difficulty. The first is binary and labels a sentence as difficult if the classifiers fail to correctly predict its sentiment polarity. The second is a six-level scale based on how many of the top five best-performing classifiers can correctly predict the sentiment polarity. We also define nine linguistic features that, in combination, aim to estimate difficulty at the sentence level.
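As a concrete illustration of the two difficulty definitions, the following sketch computes both labels for a single sentence from the predictions of the top five classifiers. The reading of the binary label as "no classifier is correct" and the direction of the six-level scale (0 easiest, 5 hardest) are assumptions made for the sketch.

    def difficulty_labels(predictions, gold):
        """`predictions`: polarity labels from the top five classifiers for one
        sentence; `gold`: the true polarity.
        Returns (binary_difficult, scale_level)."""
        correct = sum(1 for p in predictions if p == gold)
        binary_difficult = (correct == 0)   # assumed reading: every classifier fails
        scale_level = 5 - correct           # six levels: 0 (easiest) .. 5 (hardest)
        return binary_difficult, scale_level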
Clocks are a central part of many computing paradigms and are mainly used to synchronise the delicate switching operations that drive modern computational processes. Unfortunately, this synchronisation process is reaching a natural ``apocalypse''. No longer can clock scaling be used as a blunt tool to accelerate computation; we are up against the natural limits of switching and synchronisation across large processors. Therefore, we need to rethink how time is utilised in computation, using it more naturally in the role of representing data. This can be achieved by using a time interval delineated by discrete start and end events, and by re-casting computational operations into the time domain. With this, computer systems can be developed that are naturally scalable in time and space, and that can use ambient time references built to the best effort of the available technology. Our ambition is to better manage the trade-off between energy and computation time, and to explicitly embed the resolution of the data in the time domain. We aim to recast calculations into the ``for free'' format that time offers and, in addition, to perform these calculations at the highest clock or oscillator resolution available.
We develop an anytime-valid permutation test, where the dataset is fixed and the permutations are sampled sequentially one by one, with the objective of saving computational resources by sampling fewer permutations and stopping early. The core technical advance is the development of new test martingales (nonnegative martingales with initial value one) for testing exchangeability against a very particular alternative. These test martingales are constructed using new and simple betting strategies that smartly bet on the relative ranks of permuted test statistics. The betting strategies are guided by the derivation of a simple log-optimal betting strategy, and display excellent power in practice. In contrast to a well-known method by Besag and Clifford, our method yields a valid e-value or a p-value at any stopping time, and with particular stopping rules, it yields computational gains under both the null and the alternative without compromising power.
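The following sketch illustrates the general mechanics of such a sequentially sampled permutation test, using a deliberately simplified betting strategy rather than the log-optimal one derived in the paper. It relies on the classical fact that, under exchangeability and in the absence of ties, the indicator that the $t$-th permuted statistic exceeds the observed one has conditional probability $(1+k)/(t+1)$ given that $k$ of the previous permuted statistics exceeded it; multiplying likelihood ratios against that null probability yields a nonnegative martingale with initial value one (with respect to the filtration generated by these exceedance indicators). The alternative prediction $q_t = \tfrac{1}{2}\,p_t$ and the stopping rule are illustrative assumptions.

    import numpy as np

    def sequential_perm_evalue(data, stat, max_perms=1000, q_factor=0.5, seed=None):
        """Sequentially sample permutations of `data` and grow a betting wealth
        (an e-value) against the null of exchangeability; assumes no ties among
        the test statistics."""
        rng = np.random.default_rng(seed)
        data = np.asarray(data)
        t_obs = stat(data)
        wealth, exceed = 1.0, 0
        for t in range(1, max_perms + 1):
            t_perm = stat(rng.permutation(data))
            p_t = (1 + exceed) / (t + 1)      # null conditional exceedance probability
            q_t = q_factor * p_t              # illustrative alternative prediction
            if t_perm >= t_obs:
                wealth *= q_t / p_t
                exceed += 1
            else:
                wealth *= (1 - q_t) / (1 - p_t)
            if wealth >= 1e6:                 # example stopping rule: overwhelming evidence
                break
        return wealth                         # e-value; min(1, 1/wealth) is a valid p-value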
Human participants play a central role in the development of modern artificial intelligence (AI) technology, in psychological science, and in user research. Recent advances in generative AI have generated growing interest in the possibility of replacing human participants in these domains with AI surrogates. We survey several such "substitution proposals" to better understand the arguments for and against substituting human participants with modern generative AI. Our scoping review indicates that the recent wave of these proposals is motivated by goals such as reducing the costs of research and development and increasing the diversity of collected data. However, these proposals ignore, and ultimately conflict with, foundational values of work with human participants: representation, inclusion, and understanding. This paper critically examines the principles and goals underlying human participation to help chart paths for future work that truly centers and empowers participants.
In social choice theory with ordinal preferences, a voting method satisfies the axiom of positive involvement if adding to a preference profile a voter who ranks an alternative uniquely first cannot cause that alternative to go from winning to losing. In this note, we prove a new impossibility theorem concerning this axiom: there is no ordinal voting method satisfying positive involvement that also satisfies the Condorcet winner and loser criteria, resolvability, and a common invariance property for Condorcet methods, namely that the choice of winners depends only on the ordering of majority margins by size.
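One way to state the axiom formally (the notation is introduced here only for illustration): let $F$ be a voting method mapping each preference profile $\mathbf{P}$ to a nonempty set of winners $F(\mathbf{P})$, and let $\mathbf{P}+v$ denote the profile obtained from $\mathbf{P}$ by adding a new voter $v$. Positive involvement then requires
\[
x \in F(\mathbf{P}) \;\Longrightarrow\; x \in F(\mathbf{P}+v)
\quad\text{whenever the added voter } v \text{ ranks } x \text{ uniquely first.}
\]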
Support Vector Machines (SVMs) are an important tool for performing classification on scattered data, where one usually has to deal with many data points in high-dimensional spaces. We propose solving SVMs in primal form using feature maps based on trigonometric functions or wavelets. In low-dimensional settings, the Fast Fourier Transform (FFT) and related methods are a powerful tool for dealing with the considered basis functions. For growing dimensions, the classical FFT-based methods become inefficient due to the curse of dimensionality. Therefore, we restrict ourselves to multivariate basis functions, each of which depends only on a small number of dimensions. This is motivated by the well-known sparsity-of-effects principle and by recent results on the reconstruction of functions from scattered data in terms of truncated analysis of variance (ANOVA) decompositions, which moreover makes the resulting model interpretable in terms of the importance of individual features as well as their couplings. Using small superposition dimensions has the consequence that the computational effort no longer grows exponentially but only polynomially with respect to the dimension. In addition to the frequently applied $\ell_2$-norm regularization, we use $\ell_1$-norm regularization in order to enforce sparsity of the basis coefficients. The resulting classifying function, which is a linear combination of basis functions, and its variance can then be analyzed in terms of the classical ANOVA decomposition of functions. Based on numerical examples, we show that we are able to recover the signum of a function that perfectly fits our model assumptions. We obtain better results with $\ell_1$-norm regularization, in terms of both accuracy and interpretability.
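The following is a minimal sketch of the overall model structure, not of the FFT-based machinery actually used: it builds a trigonometric feature map whose basis functions each depend on at most two coordinates (superposition dimension 2) and fits the primal SVM with hinge loss and $\ell_1$-norm regularization by proximal subgradient descent. The number of frequencies, step size, and regularization parameter are placeholder values.

    import numpy as np
    from itertools import combinations

    def trig_features(X, n_freq=2):
        """Truncated trigonometric feature map whose terms each depend on at
        most two coordinates (superposition dimension 2)."""
        n, d = X.shape
        feats = [np.ones((n, 1))]                        # constant term
        for i in range(d):                               # univariate terms
            for k in range(1, n_freq + 1):
                z = 2 * np.pi * k * X[:, i:i + 1]
                feats += [np.cos(z), np.sin(z)]
        for i, j in combinations(range(d), 2):           # bivariate coupling terms
            for k1 in range(1, n_freq + 1):
                for k2 in range(1, n_freq + 1):
                    z = 2 * np.pi * (k1 * X[:, i:i + 1] + k2 * X[:, j:j + 1])
                    feats += [np.cos(z), np.sin(z)]
        return np.hstack(feats)

    def train_l1_svm(X, y, n_freq=2, lam=1e-3, lr=0.1, epochs=500):
        """Primal SVM with hinge loss and an l1 penalty on the basis coefficients,
        optimized by subgradient descent with soft-thresholding (prox of the l1 term);
        labels y are expected to be +1 / -1."""
        Phi = trig_features(X, n_freq)
        w = np.zeros(Phi.shape[1])
        for _ in range(epochs):
            margins = y * (Phi @ w)
            active = margins < 1                         # points violating the margin
            grad = -(Phi[active] * y[active][:, None]).sum(axis=0) / len(y)
            w -= lr * grad
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
        return w

    def predict(w, X, n_freq=2):
        """Signum of the fitted linear combination of basis functions."""
        return np.sign(trig_features(X, n_freq) @ w)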
Direct reciprocity is a widespread mechanism for the evolution of cooperation. In repeated interactions, players can condition their behavior on previous outcomes. A well-known approach is given by reactive strategies, which respond to the co-player's previous move. Here we extend reactive strategies to longer memories. A reactive-$n$ strategy takes into account the sequence of the co-player's last $n$ moves, whereas a reactive-$n$ counting strategy records only how often the co-player has cooperated during the last $n$ rounds. Partner strategies are those that ensure mutual cooperation without being exploited. We derive an algorithm to identify all partner strategies among reactive-$n$ strategies and give explicit conditions for all partner strategies among reactive-2 strategies, reactive-3 strategies, and reactive-$n$ counting strategies. We perform evolutionary simulations and find that longer memory increases the average cooperation rate for reactive-$n$ strategies but not for reactive counting strategies. Paying attention to the sequence of moves is thus necessary for reaping the advantages of longer memory.
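The following sketch shows how a reactive-$n$ counting strategy can be represented and simulated: a vector of length $n+1$ whose $k$-th entry is the probability of cooperating after the co-player cooperated $k$ times in the last $n$ rounds. The all-cooperation initial history and the example strategy values are illustrative choices, not ones taken from the paper.

    import random
    from collections import deque

    def play_reactive_counting(p, q, n, rounds=10000, seed=0):
        """Simulate two reactive-n counting strategies against each other.
        p[k] (resp. q[k]) is the probability that player 1 (resp. player 2)
        cooperates when its co-player cooperated k times in the last n rounds.
        Returns the empirical cooperation rate of each player."""
        rng = random.Random(seed)
        seen_by_1 = deque([1] * n, maxlen=n)   # player 2's last n moves
        seen_by_2 = deque([1] * n, maxlen=n)   # player 1's last n moves
        coop1 = coop2 = 0
        for _ in range(rounds):
            a1 = 1 if rng.random() < p[sum(seen_by_1)] else 0
            a2 = 1 if rng.random() < q[sum(seen_by_2)] else 0
            seen_by_1.append(a2)
            seen_by_2.append(a1)
            coop1 += a1
            coop2 += a2
        return coop1 / rounds, coop2 / rounds

    # Example: two identical reactive-3 counting strategies that cooperate more
    # the more the co-player has cooperated recently (illustrative values).
    print(play_reactive_counting([0.1, 0.4, 0.8, 1.0], [0.1, 0.4, 0.8, 1.0], n=3))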
Data reduction is a fundamental challenge of modern technology, where classical statistical methods are no longer applicable because of computational limitations. We consider linear regression with an extraordinarily large number of observations but only a few covariates. Subsampling aims to select a given percentage of the original data. Under distributional assumptions on the covariates, we derive D-optimal subsampling designs and study their theoretical properties. We make use of fundamental concepts of optimal design theory and an equivalence theorem from constrained convex optimization. The resulting subsampling designs provide simple rules for whether to accept or reject a data point, allowing for an easy algorithmic implementation. In addition, we propose a simplified subsampling method of lower computational complexity that differs from the D-optimal design. We present a simulation study comparing both subsampling schemes with the IBOSS method for a fixed subsample size.
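As an illustration of what such an accept/reject rule can look like (a stylized example, not the design derived in the paper): for roughly elliptically distributed covariates, a natural rule accepts an observation when its covariate vector lies far enough from the center in Mahalanobis distance, with the threshold calibrated to the target subsampling proportion.

    import numpy as np

    def accept_by_mahalanobis(X, frac=0.1):
        """Stylized accept/reject subsampling rule: accept observation i if the
        Mahalanobis distance of its covariate vector exceeds a threshold chosen
        so that roughly a fraction `frac` of the data is kept."""
        centered = X - X.mean(axis=0)
        Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
        d2 = np.einsum('ij,jk,ik->i', centered, Sigma_inv, centered)
        return d2 > np.quantile(d2, 1 - frac)   # boolean mask of accepted points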
Selecting the number of clusters is one of the key steps when applying clustering algorithms. To fulfill this task, various cluster validity indices (CVIs) have been introduced, most of which are designed to detect the optimal number of clusters hidden in a dataset. However, users sometimes do not want the optimal number of groups but rather a secondary one that is more reasonable for their application. This has motivated us to introduce a Bayesian cluster validity index (BCVI) based on existing underlying indices. The index is defined using either a Dirichlet or a generalized Dirichlet prior, both of which result in the same posterior distribution. We then test our BCVI with the Wiroonsri index (WI) and the Wiroonsri-Preedasawakul index (WP) as underlying indices for hard and soft clustering, respectively. We compare the outcomes with those of the original underlying indices as well as several other existing CVIs, including the Davies-Bouldin (DB), Starczewski (STR), Xie-Beni (XB), and KWON2 indices. Our proposed BCVI is particularly beneficial when prior experience matters, since it allows users to specify their expected range for the final number of clusters. We emphasize this aspect in experiments organized into three different cases. Finally, we present applications to real-world datasets, including MRI brain tumor images. Our tools will be added to a new version of the recently developed R package ``UniversalCVI''.
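The following sketch is only a guess at the general shape of such a construction, not the paper's actual BCVI formula: the values of an underlying index over the candidate numbers of clusters are rescaled into proportions and combined with Dirichlet prior parameters, through which a user encodes the expected range of the number of clusters, to obtain posterior weights over the candidates.

    import numpy as np

    def bayesian_cvi(index_values, prior_alpha, n):
        """Illustrative Bayesian-style blending of an underlying CVI with a
        Dirichlet prior over candidate cluster numbers (a guess at the general
        construction, not the paper's exact BCVI definition).

        index_values : underlying CVI value for each candidate k (larger = better)
        prior_alpha  : Dirichlet parameters encoding the user's expected range of k
        n            : number of data points
        Returns posterior-mean weights over the candidate numbers of clusters."""
        v = np.asarray(index_values, dtype=float)
        r = (v - v.min()) / (v.max() - v.min() + 1e-12)   # rescale to [0, 1]
        r = r / r.sum()                                    # turn into proportions
        posterior = np.asarray(prior_alpha, dtype=float) + n * r
        return posterior / posterior.sum()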
Modern machine learning applications have witnessed the remarkable success of optimization algorithms that are designed to find flat minima. Motivated by this design choice, we undertake a formal study that (i) formulates the notion of flat minima, and (ii) studies the complexity of finding them. Specifically, we adopt the trace of the Hessian of the cost function as a measure of flatness, and use it to formally define the notion of approximate flat minima. Under this notion, we then analyze algorithms that find approximate flat minima efficiently. For general cost functions, we discuss a gradient-based algorithm that finds an approximate flat local minimum efficiently. The main component of the algorithm is to use gradients computed from randomly perturbed iterates to estimate a direction that leads to flatter minima. For the setting where the cost function is an empirical risk over training data, we present a faster algorithm that is inspired by a recently proposed practical algorithm called sharpness-aware minimization, supporting its success in practice.
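A minimal sketch of the idea behind using gradients at randomly perturbed iterates (a reading of the high-level mechanism, not the paper's precise algorithm or its guarantees): averaging gradients at Gaussian perturbations of the current iterate approximates the gradient of the smoothed cost $\mathbb{E}_u[f(x+\sigma u)] \approx f(x) + \tfrac{\sigma^2}{2}\,\mathrm{tr}(\nabla^2 f(x))$, so descending along this estimate penalizes the Hessian trace and steers the iterates toward flatter minima. The step size, perturbation scale, and sample count below are placeholder values.

    import numpy as np

    def perturbed_gradient_step(x, grad_f, sigma=0.1, n_samples=8, lr=0.01, seed=None):
        """One descent step on the Gaussian-smoothed cost, whose second-order
        expansion adds (sigma^2 / 2) * trace(Hessian) to the original cost.
        `grad_f` returns the gradient of the cost at a given point."""
        rng = np.random.default_rng(seed)
        g = np.mean([grad_f(x + sigma * rng.standard_normal(x.shape))
                     for _ in range(n_samples)], axis=0)
        return x - lr * g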