Chandran et al. (SIAM J. Comput. '14) formally introduced the cryptographic task of position verification and showed that it cannot be achieved by classical protocols. In this work, we initiate the study of position verification protocols with classical verifiers. We show that proofs of quantumness (and hence computational assumptions) are necessary for such position verification protocols. In the other direction, we adapt the proof of quantumness protocol of Brakerski et al. (FOCS'18) to instantiate such a position verification protocol. As a result, we achieve classically verifiable position verification assuming the quantum hardness of Learning with Errors. Along the way, we develop the notion of 1-of-2 non-local soundness for the framework of 1-of-2 puzzles, first introduced by Radian and Sattath (AFT'19), which can be viewed as a computational unclonability property. We show that 1-of-2 non-local soundness follows from the standard 2-of-2 soundness, which may be of independent interest.
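One schematic way to read the two soundness notions (our notation, not necessarily that of the papers): a 1-of-2 puzzle $z$ with verification key $k$ admits answers for two slots $b \in \{0,1\}$, and 2-of-2 soundness requires that no efficient quantum adversary $\mathcal{A}$ can answer both slots,
\[
\Pr\bigl[\mathsf{Ver}(k, z, 0, a_0) = \mathsf{Ver}(k, z, 1, a_1) = 1 \,:\, (a_0, a_1) \leftarrow \mathcal{A}(z)\bigr] \leq \mathrm{negl}(\lambda),
\]
while 1-of-2 non-local soundness requires the same bound for two non-communicating (possibly entangled) adversaries that each receive $z$ and each answer one of the two slots; the claim above is that the former implies the latter.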
We give a direct product theorem for the entanglement-assisted interactive quantum communication complexity of an $l$-player predicate $\mathsf{V}$. In particular, we show that for a distribution $p$ that is product across the input sets of the $l$ players, the success probability of any entanglement-assisted quantum communication protocol for computing $n$ copies of $\mathsf{V}$, whose communication is $o(\log(\mathrm{eff}^*(\mathsf{V},p))\cdot n)$, decays exponentially in $n$. Here $\mathrm{eff}^*(\mathsf{V}, p)$ is a distributional version of the quantum efficiency or partition bound introduced by Laplante, Lerays and Roland (2014), which is a lower bound on the distributional quantum communication complexity of computing a single copy of $\mathsf{V}$ with respect to $p$. As an application of our result, we show that it is possible to do device-independent quantum key distribution (DIQKD) without the assumption that devices do not leak any information after inputs are provided to them. We analyze the DIQKD protocol given by Jain, Miller and Shi (2017), and show that when the protocol is carried out with devices that are compatible with $n$ copies of the Magic Square game, it is possible to extract $\Omega(n)$ bits of key from it, even in the presence of $O(n)$ bits of leakage. Our security proof is parallel, i.e., the honest parties can enter all their inputs into their devices at once, and it works for a leakage model that is arbitrarily interactive, i.e., the devices of the honest parties Alice and Bob can exchange information with each other and with the eavesdropper Eve in any number of rounds, as long as the total number of bits or qubits communicated is bounded.
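In symbols, and suppressing constants, the direct product statement can be read schematically as follows (our rendering, not a verbatim theorem statement): for inputs drawn from $p^{\otimes n}$ and any entanglement-assisted protocol $\Pi$,
\[
\mathrm{comm}(\Pi) = o\bigl(\log(\mathrm{eff}^*(\mathsf{V},p))\cdot n\bigr) \;\Longrightarrow\; \Pr\bigl[\Pi \text{ correctly computes } \mathsf{V}^{\otimes n}\bigr] \leq 2^{-\Omega(n)}.
\]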
We revisit approaches to the solution of parity games based on progress measures and show how the notion of quasi dominions can be integrated with them. The idea is that, while progress-measure-based techniques typically focus on one of the two players, little information is gathered on the other player during the solution process. Adding quasi dominions provides additional information on this player that can be leveraged to greatly accelerate convergence to a progress measure. To accommodate quasi dominions, however, non-trivial refinements of the approach are necessary. In particular, we need to introduce a novel notion of measure and a new method for proving correctness of the resulting solution technique.
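For reference, the sketch below gives a minimal Python rendering of the classical lifting procedure for small progress measures (Jurdziński-style) that such approaches build on; the game encoding is hypothetical, and the quasi-dominion acceleration introduced in the paper is deliberately not reproduced here.

```python
# Minimal small-progress-measures sketch (max-parity convention: player 0,
# Even, wins a play iff the highest priority seen infinitely often is even).
# This is the baseline lifting procedure only, not the paper's refinement.

TOP = None  # sentinel: top element of the measure lattice (vertex lost by Even)

def solve(priority, owner, edges):
    """Return the set of vertices won by player 0 (Even)."""
    d = max(priority.values())
    odds = [i for i in range(d, 0, -1) if i % 2 == 1]   # most significant first
    cap = {i: sum(1 for v in priority if priority[v] == i) for i in odds}
    inf = tuple(cap[i] + 1 for i in odds)               # strictly above any measure

    def key(m):
        """Total order on measures: lexicographic, with TOP largest."""
        return inf if m is TOP else tuple(m[i] for i in odds)

    def prog(rho, v, w):
        """Least measure m with m >=_{p(v)} rho[w], strictly if p(v) is odd."""
        p, t = priority[v], rho[w]
        if t is TOP:
            return TOP
        m = {i: (t[i] if i >= p else 0) for i in odds}
        if p % 2 == 1:                                   # strict increase required
            for i in sorted(i for i in odds if i >= p):  # least significant first
                if m[i] < cap[i]:
                    m[i] += 1
                    return m
                m[i] = 0                                 # overflow: carry upward
            return TOP
        return m

    rho = {v: {i: 0 for i in odds} for v in priority}
    changed = True
    while changed:                                       # lift to the least fixpoint
        changed = False
        for v in priority:
            vals = [prog(rho, v, w) for w in edges[v]]
            pick = min if owner[v] == 0 else max         # Even minimizes, Odd maximizes
            best = pick(vals, key=key)
            if key(best) > key(rho[v]):
                rho[v] = best
                changed = True
    return {v for v in priority if rho[v] is not TOP}

if __name__ == "__main__":
    # Even owns vertex 0 and can loop on even priority 2, so Even wins everywhere.
    priority = {0: 2, 1: 1}
    owner = {0: 0, 1: 1}
    edges = {0: [0, 1], 1: [0]}
    print(solve(priority, owner, edges))  # -> {0, 1}
```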
It has been hypothesized that quantum computers may lend themselves well to applications in machine learning. In the present work, we analyze function classes defined via quantum kernels. Quantum computers offer the possibility to efficiently compute inner products of exponentially large density operators that are classically hard to compute. However, having an exponentially large feature space renders the problem of generalization hard. Furthermore, being able to evaluate inner products in high-dimensional spaces efficiently does not by itself guarantee a quantum advantage, as classically tractable kernels can already correspond to high- or infinite-dimensional reproducing kernel Hilbert spaces (RKHS). We analyze the spectral properties of quantum kernels and find that we can expect an advantage if their RKHS is low dimensional and contains functions that are hard to compute classically. If the target function is known to lie in this class, this implies a quantum advantage, as the quantum computer can encode this inductive bias, whereas there is no classically efficient way to constrain the function class in the same way. However, we show that finding suitable quantum kernels is not easy, because the kernel evaluation might require exponentially many measurements. In conclusion, our message is a somewhat sobering one: we conjecture that quantum machine learning models can offer speed-ups only if we manage to encode knowledge about the problem at hand into quantum circuits, while encoding the same bias into a classical model would be hard. Such situations may plausibly occur when learning on data generated by a quantum process; however, they appear to be harder to come by for classical datasets.
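The spectral analysis above suggests a simple numerical diagnostic. The sketch below is illustrative only: the toy feature map, the dataset, and the effective-dimension measure are our stand-ins, not the paper's constructions. It simulates a fidelity kernel $k(x,x') = \mathrm{tr}[\rho(x)\rho(x')]$ for pure-state encodings and inspects the eigenvalue decay of the Gram matrix; rapid decay indicates an effectively low-dimensional RKHS on the sampled data.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_state(x, n_qubits=3):
    """Toy pure-state feature map: a product of single-qubit RX rotations
    with simple data-dependent angles (an illustrative stand-in)."""
    state = np.array([1.0 + 0j])
    for q in range(n_qubits):
        theta = x * (q + 1)
        qubit = np.array([np.cos(theta / 2), -1j * np.sin(theta / 2)])
        state = np.kron(state, qubit)
    return state

xs = rng.uniform(0, 2 * np.pi, size=40)
states = np.array([feature_state(x) for x in xs])

# Fidelity kernel: k(x, x') = |<phi(x)|phi(x')>|^2 = tr[rho(x) rho(x')],
# a valid PSD kernel for pure-state encodings.
overlaps = states.conj() @ states.T
K = np.abs(overlaps) ** 2

eigvals = np.clip(np.linalg.eigvalsh(K)[::-1], 0, None)  # descending spectrum
p = eigvals / eigvals.sum()
# One illustrative notion of effective dimension: exp of the spectral entropy.
eff_dim = np.exp(-(p[p > 0] * np.log(p[p > 0])).sum())

print("top eigenvalues:", np.round(eigvals[:5], 3))
print("effective dimension (exp of spectral entropy): %.2f" % eff_dim)
```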
Deciding whether a knot diagram can be untangled with a given number of moves (given as part of the input) is known to be NP-complete. In this paper, we determine the parameterized complexity of this problem with respect to a natural parameter called the defect. Roughly speaking, the defect measures the efficiency of the moves used in a shortest untangling sequence of Reidemeister moves. We show that the II$^-$ moves (the crossing-removing Reidemeister II moves) in a shortest untangling sequence can essentially be performed greedily. Using this, we show that the problem belongs to W[P] when parameterized by the defect. We also show that the problem is W[P]-hard, by a reduction from Minimum Axiom Set.
Screening for any of the Autism Spectrum Disorders is a complicated process, often involving a hybrid of behavioural observations and questionnaire-based tests, typically carried out in a controlled setting by trained clinicians or psychiatrists. Riding the wave of technological advancement in mobile platforms, several attempts have been made to incorporate such assessments into mobile and tablet devices. In this paper, we analyse videos generated using one such screening test. To our knowledge, this is the first study of the efficacy of using the observer's distance from the display screen during a sensory sensitivity test as a behavioural marker for autism in children aged 2-7 years. The potential for using such a test in casual home settings is promising.
Most algorithmic studies of multi-agent information design so far have focused on the restricted setting with no inter-agent externalities; the few exceptions that investigated truly strategic games, such as zero-sum games and second-price auctions, focused only on optimal public signaling. This paper initiates the study of algorithmic information design for both \emph{public} and \emph{private} signaling in a fundamental class of games with negative externalities, namely singleton congestion games, which have wide applications in today's digital economy, machine scheduling, routing, etc. For both public and private signaling, we show that the optimal information design can be computed efficiently when the number of resources is a constant. To our knowledge, this is the first set of efficient \emph{exact} algorithms for information design in succinctly representable many-player games. Our results hinge on novel techniques, such as developing ``reduced forms'' to compactly characterize equilibria in public signaling and to represent players' marginal beliefs in private signaling. When there are many resources, we show computational intractability results. To overcome the issue of multiple equilibria, we introduce a new notion of equilibrium-\emph{oblivious} hardness, which rules out any possibility of computing a good signaling scheme irrespective of the equilibrium selection rule.
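For context, a standard formalization of singleton congestion games (our notation, not necessarily the paper's): each player $i$ selects a single resource $r_i$ from a set $R$ and pays a cost that depends only on the load of the chosen resource,
\[
c_i(r) = f_{r_i}\bigl(\ell_{r_i}(r)\bigr), \qquad \ell_j(r) = \bigl|\{\, i : r_i = j \,\}\bigr|,
\]
where each latency function $f_j$ is nondecreasing in the load; these increasing costs are precisely the negative externalities referred to above.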
Many common ``consumer'' applications, i.e., applications widely used by non-technical users, are now provided by a very small number of companies, even if the set of companies differs across geographic regions, or rely on a very small number of implementations, even when the applications are largely standards-based. While likely only a partial solution, drawing on earlier regulatory experience may help facilitate competition, or at least lessen the impact of its absence.
We investigate properties of Poncelet $N$-gon families inscribed in a parabola and circumscribing a focus-centered circle. These can be regarded as the polar images of a bicentric family with respect to its circumcircle, in the case where the bicentric incircle contains the circumcenter. We derive closure conditions for several $N$ and describe curious Euclidean properties, such as straight-line, circular, and point loci, as well as a (perhaps new) conserved quantity.
Blockchain protocols come with a variety of security guarantees. For example, BFT-inspired protocols such as Algorand tend to be secure in the partially synchronous setting, while longest-chain protocols like Bitcoin normally require stronger synchrony assumptions to be secure. Another fundamental distinction, directly relevant to scalability solutions such as sharding, is whether a single untrusted user is able to point to *certificates*, which provide incontrovertible proof of block confirmation: Algorand produces such certificates, while Bitcoin does not. Are these properties accidental, or are they inherent consequences of the paradigm of protocol design? Our aim in this paper is to understand what fundamentally governs the nature of security for permissionless blockchain protocols. Using the framework developed by Lewis-Pye and Roughgarden (2021), we prove general results showing that these questions relate directly to properties of the user selection process, i.e., the method (such as proof-of-work or proof-of-stake) used to select the users tasked with updating state. Our results suffice to establish, for example, that the production of certificates is impossible for proof-of-work protocols, but is automatic for standard forms of proof-of-stake protocols. As a byproduct of our work, we also define a number of security notions and identify the equivalences and inequivalences among them.
We present an algorithm for the repair of parameterized systems. The repair problem is, given a process implementation, to find a refinement such that the resulting parameterized system satisfies a given property and avoids deadlocks. Our algorithm employs a constraint-based search for candidate repairs and uses a parameterized model checker to determine their correctness, updating the constraint system whenever errors are reachable. We apply this algorithm to systems that can be represented as well-structured transition systems (WSTS), including disjunctive systems, pairwise rendezvous systems, and broadcast protocols. Moreover, we show that parameterized deadlock detection can be decided in NEXPTIME for disjunctive systems, vastly improving on the best known upper bound, and that it is undecidable in general for broadcast protocols.
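Schematically, the repair loop can be pictured as a counterexample-guided search. In the Python skeleton below, all component names (propose_candidate, check_parameterized, to_constraint) are hypothetical stand-ins rather than the paper's actual interfaces.

```python
def repair(initial_constraints, propose_candidate, check_parameterized, to_constraint):
    """Counterexample-guided repair loop (schematic sketch).

    propose_candidate(constraints) -> a candidate refinement satisfying the
        current constraints (e.g., via an SMT solver), or None if none exists.
    check_parameterized(candidate)  -> None if the candidate is correct and
        deadlock-free for every number of processes, else an error trace.
    to_constraint(candidate, cex)   -> a constraint ruling out this failure
        (ideally a whole class of similar candidates) in future iterations.
    """
    constraints = list(initial_constraints)
    while True:
        candidate = propose_candidate(constraints)   # constraint-based search
        if candidate is None:
            return None                              # no repair in the search space
        counterexample = check_parameterized(candidate)  # parameterized model checking
        if counterexample is None:
            return candidate                         # correct, deadlock-free repair
        constraints.append(to_constraint(candidate, counterexample))
```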