In this work, we show that a pair of entangled qubits can be used to compute a product privately. More precisely, two participants, each holding a private input from a finite field, can perform local operations on a shared, Bell-like quantum state, and when these qubits are later sent to a third participant, the third participant can determine the product of the inputs without learning anything further about the individual inputs. We give a concrete way to realize this product computation for arbitrary finite fields of prime order.
We consider the problems of testing and learning quantum $k$-junta channels, which are $n$-qubit to $n$-qubit quantum channels acting non-trivially on at most $k$ out of the $n$ qubits and leaving the remaining qubits unchanged. We show the following. 1. An $\widetilde{O}\left(k\right)$-query algorithm to distinguish whether the given channel is a $k$-junta channel or is far from any $k$-junta channel, and a lower bound of $\Omega\left(\sqrt{k}\right)$ on the number of queries; 2. An $\widetilde{O}\left(4^k\right)$-query algorithm to learn a $k$-junta channel, and a lower bound of $\Omega\left(4^k/k\right)$ on the number of queries. These give the first junta channel testing and learning results, and partially answer an open problem raised by Chen et al. (2023). In order to settle these problems, we develop a Fourier analysis framework over the space of superoperators and prove several of its fundamental properties, extending the Fourier analysis over the space of operators introduced in Montanaro and Osborne (2010).
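For intuition about the operator-level framework being generalized, the following minimal sketch (our own illustration, not code from the paper) computes the Pauli/Fourier expansion of an $n$-qubit operator in the sense of Montanaro and Osborne (2010): $A=\sum_x \hat{A}(x)\,\sigma_x$ with $\hat{A}(x)=2^{-n}\,\mathrm{tr}(\sigma_x A)$, where $\sigma_x$ ranges over $n$-fold tensor products of Pauli matrices.

```python
# Minimal sketch of the operator-level Fourier (Pauli) expansion that the
# superoperator framework extends; illustration only, not the paper's code.
import itertools
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_string(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZ' -> X (x) Z."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, PAULIS[ch])
    return op

def fourier_coefficients(A, n):
    """Pauli/Fourier coefficients Ahat(x) = 2^{-n} tr(sigma_x A)."""
    return {"".join(lbl): np.trace(pauli_string("".join(lbl)) @ A) / 2**n
            for lbl in itertools.product("IXYZ", repeat=n)}

# Example: a random 2-qubit operator is exactly recovered from its coefficients.
n = 2
A = np.random.randn(2**n, 2**n) + 1j * np.random.randn(2**n, 2**n)
coeffs = fourier_coefficients(A, n)
A_rec = sum(c * pauli_string(lbl) for lbl, c in coeffs.items())
assert np.allclose(A, A_rec)
```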
This paper presents an efficient algorithm for the sequential positioning, also called nested dissection, of two planes in an arbitrary polyhedron. Two planar interfaces are positioned such that the first plane truncates a given volume from this arbitrary polyhedron and the second plane truncates a second given volume from the residual polyhedron. This is a relevant task in the numerical simulation of three-phase flows when resorting to the geometric Volume-of-Fluid (VoF) method with a Piecewise Linear Interface Calculation (PLIC). An efficient algorithm for this task significantly speeds up the three-phase PLIC algorithm. The present study describes a method based on a recursive application of the Gaussian divergence theorem, where the fact that the truncated polyhedron shares multiple faces with the original polyhedron can be exploited to reduce the computational effort. A careful choice of the coordinate system origin for the volume computation allows for successive positioning of the two planes without reestablishing polyhedron connectivity. Combined with highly efficient root finding, this results in a significant performance gain in the reconstruction of three-phase interface configurations. The performance of the new method is assessed in a series of carefully designed numerical experiments. Compared to a conventional decomposition-based approach, the number of iterations and, thus, the number of required truncations was reduced by up to an order of magnitude. The PLIC positioning run-time was reduced by about 90% in our reference implementation. Integrated into the multi-phase flow solver Free Surface 3D (FS3D), the method yields an overall performance gain of about 20%. The proposed algorithm is self-contained (an example Fortran module is available at //doi.org/10.18419/darus-2488) and requires no external decomposition libraries, allowing for simple integration into existing numerical schemes.
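The basic building block named above can be illustrated with a short, self-contained sketch (an illustration of the principle under our own conventions, not the paper's Fortran implementation): the volume of a closed polyhedron with outward-oriented faces is obtained face-by-face from the Gaussian divergence theorem, and the total is independent of the chosen origin, which is what permits reusing the contributions of faces shared between the original and the truncated polyhedron.

```python
# Volume of a closed polyhedron via the Gaussian divergence theorem,
# V = (1/3) * sum over faces of the integral of x . n dA; illustration only.
# Faces are lists of vertex indices, ordered counter-clockwise when viewed
# from outside, and are fan-triangulated around their first vertex.
import numpy as np

def polyhedron_volume(vertices, faces, origin=(0.0, 0.0, 0.0)):
    """Signed volume; the total does not depend on `origin`, only the
    individual face contributions do."""
    verts = np.asarray(vertices, dtype=float) - np.asarray(origin, dtype=float)
    volume = 0.0
    for face in faces:
        a = verts[face[0]]
        for i in range(1, len(face) - 1):
            b, c = verts[face[i]], verts[face[i + 1]]
            volume += np.dot(a, np.cross(b, c)) / 6.0  # tetrahedron (O, a, b, c)
    return volume

# Unit cube: 8 vertices, 6 quadrilateral faces with outward orientation.
V = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
F = [(0,3,2,1),(4,5,6,7),(0,1,5,4),(1,2,6,5),(2,3,7,6),(3,0,4,7)]
print(polyhedron_volume(V, F))                      # -> 1.0
print(polyhedron_volume(V, F, origin=(5, -2, 3)))   # -> 1.0 (origin-independent)
```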
We consider the problem of sequentially optimizing a time-varying objective function using time-varying Bayesian optimization (TVBO). Here, the key challenge is the exploration-exploitation trade-off under time variations. Current approaches to TVBO require prior knowledge of a constant rate of change. However, in practice, the rate of change is usually unknown. We propose an event-triggered algorithm, ET-GP-UCB, that treats the optimization problem as static until it detects changes in the objective function online and then resets the dataset. This allows the algorithm to adapt to realized temporal changes without the need for prior knowledge. The event-trigger is based on probabilistic uniform error bounds used in Gaussian process regression. We provide regret bounds for ET-GP-UCB and show in numerical experiments that it outperforms state-of-the-art algorithms on synthetic and real-world data. Furthermore, these results demonstrate that ET-GP-UCB is readily applicable to various settings without tuning hyperparameters.
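A rough sketch of the event-triggered reset is given below (simplifying assumptions throughout: the random acquisition step, the bound scaling `beta`, and the toy objective are ours, not the paper's). The loop treats the problem as static and clears the dataset whenever a new observation violates a uniform error bound around the GP posterior mean.

```python
# Sketch of an event-triggered dataset reset based on a GP error bound;
# illustration only, not the ET-GP-UCB algorithm itself.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1e-2))

X, y = [], []            # dataset collected since the last reset
beta = 3.0               # scaling of the uniform error bound (illustrative)

def objective(x, t):
    """Toy time-varying objective with an abrupt change at t = 50."""
    return np.sin(6 * x) + (1.0 if t >= 50 else 0.0) + 0.05 * rng.standard_normal()

for t in range(100):
    x_t = rng.uniform(0, 1)                     # stand-in for the UCB acquisition step
    y_t = objective(x_t, t)
    if len(X) >= 2:
        mu, sigma = gp.predict(np.array([[x_t]]), return_std=True)
        if abs(y_t - mu[0]) > beta * sigma[0]:  # event trigger: error bound violated
            print(f"t={t}: change detected, resetting dataset")
            X, y = [], []
    X.append([x_t]); y.append(y_t)
    gp.fit(np.array(X), np.array(y))
```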
We present a novel and easy-to-use method for calibrating error-rate-based confidence intervals to evidence-based support intervals. Support intervals are obtained by inverting Bayes factors based on a parameter estimate and its standard error. A $k$ support interval can be interpreted as "the observed data are at least $k$ times more likely under the included parameter values than under a specified alternative". Support intervals depend on the specification of prior distributions for the parameter under the alternative, and we present several types that allow different forms of external knowledge to be encoded. We also show how prior specification can to some extent be avoided by considering a class of prior distributions and then computing so-called minimum support intervals which, for a given class of priors, are in one-to-one correspondence with confidence intervals. We further illustrate how the sample size of a future study can be determined based on the concept of support. Finally, we show how the bound for the type I error rate of Bayes factors leads to a bound for the coverage of support intervals. An application to data from a clinical trial illustrates how support intervals can lead to inferences that are both intuitive and informative.
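As a concrete illustration of inverting a Bayes factor into a $k$ support interval, the sketch below assumes a normal approximation for the estimate and a normal prior for the parameter under the alternative (an assumed setup chosen for illustration): it returns all parameter values under which the observed estimate is at least $k$ times more likely than under the alternative.

```python
# k support interval under a normal approximation; illustrative sketch only.
import numpy as np
from scipy.stats import norm

def support_interval(theta_hat, se, k, prior_mean, prior_sd, grid=None):
    """All theta0 with p(theta_hat | theta0) / p(theta_hat | alternative) >= k,
    where the alternative marginalizes a N(prior_mean, prior_sd^2) prior."""
    if grid is None:
        grid = np.linspace(theta_hat - 6 * se, theta_hat + 6 * se, 10001)
    lik_point = norm.pdf(theta_hat, loc=grid, scale=se)      # data given theta0
    lik_alt = norm.pdf(theta_hat, loc=prior_mean,            # data given alternative
                       scale=np.sqrt(se**2 + prior_sd**2))
    included = grid[lik_point / lik_alt >= k]
    return included.min(), included.max()

# Example: estimate 0.5 with standard error 0.2, k = 3, vague normal alternative.
print(support_interval(0.5, 0.2, k=3, prior_mean=0.0, prior_sd=1.0))
```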
The general adversary dual is a powerful tool in quantum computing because it gives a query-optimal bounded-error quantum algorithm for deciding any Boolean function. Unfortunately, the algorithm uses a linear number of qubits in the worst case, and it only works if the constraints of the general adversary dual are exactly satisfied. The challenge in improving the algorithm is that it is brittle to arbitrarily small errors, since it relies on a reflection over a span of vectors. We overcome this challenge and build a robust dual adversary algorithm that can handle approximately satisfied constraints. As one application of our robust algorithm, we prove that for any Boolean function with polynomially many 1-valued inputs (or, in fact, under a slightly weaker condition) there is a query-optimal algorithm that uses logarithmically many qubits. As another application, we prove that numerically derived, approximate solutions to the general adversary dual give a bounded-error quantum algorithm under certain conditions. Further, we show that these conditions empirically hold, with a reasonable number of iterations, for Boolean functions with small domains. We also develop several tools that may be of independent interest, including a robust approximate spectral gap lemma, a method to compress a general adversary dual solution using the Johnson-Lindenstrauss lemma, and open-source code to find solutions to the general adversary dual.
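For concreteness, the sketch below (our own illustration, not the open-source code referenced above) numerically solves the general adversary dual for AND on two bits with an off-the-shelf SDP solver; the optimal value of this program is the general adversary bound, here approximately $\sqrt{2}$.

```python
# General adversary dual SDP for a small Boolean function, solved with cvxpy:
#   minimize  max_x  sum_j X_j[x, x]
#   s.t.      sum_{j : x_j != y_j} X_j[x, y] = 1  whenever f(x) != f(y),
#             X_j positive semidefinite.
import itertools
import cvxpy as cp

n = 2
inputs = ["".join(bits) for bits in itertools.product("01", repeat=n)]
f = {x: int(all(b == "1" for b in x)) for x in inputs}   # AND on 2 bits
idx = {x: i for i, x in enumerate(inputs)}

X = [cp.Variable((len(inputs), len(inputs)), PSD=True) for _ in range(n)]
constraints = []
for x in inputs:
    for y in inputs:
        if f[x] != f[y]:
            constraints.append(
                sum(X[j][idx[x], idx[y]] for j in range(n) if x[j] != y[j]) == 1)

objective = cp.Minimize(cp.max(cp.hstack(
    [sum(X[j][idx[x], idx[x]] for j in range(n)) for x in inputs])))
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value)   # approximately sqrt(2) ~ 1.414 for AND on 2 bits
```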
Cumulative memory -- the sum of space used per step over the duration of a computation -- is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications like password hashing. It is a more accurate cost measure for algorithms that have infrequent spikes in memory usage and are run in environments such as cloud computing that allow dynamic allocation and de-allocation of resources during execution, or when many instances of an algorithm are interleaved in parallel. We prove the first lower bounds on cumulative memory complexity for both sequential classical computation and quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which can only lower bound the maximum space used during an execution. The resulting lower bounds on cumulative memory that we obtain are just as strong as the best time-space tradeoff lower bounds, which are very often known to be tight. Although previous results for pebbling and random oracle models have yielded time-space tradeoff lower bounds larger than the cumulative memory complexity, our results show that in general computational models such separations cannot follow from known lower bound techniques and are not true for many functions. Among many possible applications of our general methods, we show that any classical sorting algorithm with success probability at least $1/\text{poly}(n)$ requires cumulative memory $\tilde \Omega(n^2)$, any classical matrix multiplication algorithm requires cumulative memory $\Omega(n^6/T)$, any quantum sorting circuit requires cumulative memory $\Omega(n^3/T)$, and any quantum circuit that finds $k$ disjoint collisions in a random function requires cumulative memory $\Omega(k^3n/T^2)$.
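A toy computation makes the distinction concrete (illustrative numbers, not from the paper): an execution whose space usage is constant except for one brief spike has cumulative memory close to its running time, whereas time times maximum space charges the peak at every step.

```python
# Cumulative memory vs. time x maximum space for a toy execution trace.
T, S = 10_000, 1_000
space_per_step = [1] * T
space_per_step[T // 2] = S                       # one step with a large memory spike

cumulative_memory = sum(space_per_step)          # ~ T + S  = 10_999
time_times_max_space = T * max(space_per_step)   # = T * S  = 10_000_000
print(cumulative_memory, time_times_max_space)
```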
Entropic optimal transport (EOT) presents an effective and computationally viable alternative to unregularized optimal transport (OT), offering diverse applications for large-scale data analysis. In this work, we derive novel statistical bounds for empirical plug-in estimators of the EOT cost and show that their statistical performance in the entropy regularization parameter $\epsilon$ and the sample size $n$ only depends on the simpler of the two probability measures. For instance, under sufficiently smooth costs this yields the parametric rate $n^{-1/2}$ with factor $\epsilon^{-d/2}$, where $d$ is the minimum dimension of the two population measures. This confirms that empirical EOT also adheres to the lower complexity adaptation principle, a hallmark feature only recently identified for unregularized OT. As a consequence of our theory, we show that the empirical entropic Gromov-Wasserstein distance and its unregularized version for measures on Euclidean spaces also obey this principle. Additionally, we comment on computational aspects and complement our findings with Monte Carlo simulations. Our techniques employ empirical process theory and rely on a dual formulation of EOT over a single function class. Crucial to our analysis is the observation that the entropic cost-transformation of a function class does not increase its uniform metric entropy by much.
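The following Monte Carlo sketch (illustrative only, not the paper's experiments) shows how the empirical plug-in estimator is formed: sample $n$ points from each measure, build the empirical distributions, and run Sinkhorn; here one measure is supported on a segment embedded in $\mathbb{R}^5$ while the other is full-dimensional, the kind of asymmetry behind lower complexity adaptation. The sample sizes, dimension, and regularization value are arbitrary choices, and the POT package is assumed to be available.

```python
# Empirical plug-in estimator of the entropic OT cost via Sinkhorn (POT package).
import numpy as np
import ot

rng = np.random.default_rng(0)
d, eps = 5, 0.5

def empirical_eot_cost(n):
    xs = np.zeros((n, d))
    xs[:, 0] = rng.uniform(0, 1, n)          # "simple" measure: segment in R^d
    xt = rng.standard_normal((n, d))         # "complex" measure: full-dimensional
    M = ot.dist(xs, xt)                      # squared Euclidean cost matrix
    return ot.sinkhorn2(ot.unif(n), ot.unif(n), M, reg=eps)

for n in [100, 200, 400, 800]:
    print(n, empirical_eot_cost(n))
```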
An important tool in algorithm design is the ability to build algorithms from other algorithms that run as subroutines. In the case of quantum algorithms, a subroutine may be called on a superposition of different inputs, which complicates things. For example, a classical algorithm that calls a subroutine $Q$ times, where the average probability of querying the subroutine on input $i$ is $p_i$, and the cost of the subroutine on input $i$ is $T_i$, incurs expected cost $Q\sum_i p_i E[T_i]$ from all subroutine queries. While this statement is obvious for classical algorithms, for quantum algorithms, it is much less so, since naively, if we run a quantum subroutine on a superposition of inputs, we need to wait for all branches of the superposition to terminate before we can apply the next operation. We nonetheless show an analogous quantum statement (*): If $q_i$ is the average query weight on $i$ over all queries, the cost from all quantum subroutine queries is $Q\sum_i q_i E[T_i]$. Here the query weight on $i$ for a particular query is the probability of measuring $i$ in the input register if we were to measure right before the query. We prove this result using the technique of multidimensional quantum walks, recently introduced in arXiv:2208.13492. We present a more general version of their quantum walk edge composition result, which yields variable-time quantum walks, generalizing variable-time quantum search, by, for example, replacing the update cost with $\sqrt{\sum_{u,v}\pi_u P_{u,v} E[T_{u,v}^2]}$, where $T_{u,v}$ is the cost to move from vertex $u$ to vertex $v$. The same technique that allows us to compose quantum subroutines in quantum walks can also be used to compose in any quantum algorithm, which is how we prove (*).
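A toy computation of the statement (*) with made-up numbers illustrates the gap between the composed cost and the naive bound obtained by waiting for the slowest branch of every superposition.

```python
# Composed subroutine cost Q * sum_i q_i * E[T_i] vs. the naive Q * max_i T_i;
# the query weights and costs below are illustrative, not from the paper.
Q = 10
q = {"cheap": 0.9, "expensive": 0.1}     # average query weight per input
ET = {"cheap": 1, "expensive": 100}      # expected subroutine cost per input

composed_cost = Q * sum(q[i] * ET[i] for i in q)   # 10 * (0.9*1 + 0.1*100) = 109
naive_cost = Q * max(ET.values())                  # 10 * 100              = 1000
print(composed_cost, naive_cost)
```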
An approximate projection onto the tangent cone to the variety of third-order tensors of bounded tensor-train rank is proposed and proven to satisfy a better angle condition than the one proposed by Kutschan (2019). Such an approximate projection makes it possible, e.g., to compute gradient-related directions in the tangent cone, as required by algorithms aiming to minimize a continuously differentiable function on the variety, a problem appearing notably in tensor completion. A numerical experiment indicates that, in practice, the angle condition satisfied by the proposed approximate projection is better than both the condition satisfied by the approximate projection introduced by Kutschan and the proven theoretical bound.
The distributed computation of a Nash equilibrium in aggregative games has been gaining traction in recent years. Of particular interest is the mediator-free scenario, where individual players only access or observe the decisions of their neighbors due to practical constraints. Given the competitive rivalry among participating players, protecting the privacy of individual players becomes imperative when sensitive information is involved. We propose a fully distributed equilibrium-computation approach for aggregative games that achieves both rigorous differential privacy and guaranteed computation accuracy of the Nash equilibrium. This is in sharp contrast to existing differential-privacy solutions for aggregative games, which have to either sacrifice the accuracy of equilibrium computation to gain rigorous privacy guarantees, or allow the cumulative privacy budget to grow unbounded, hence losing privacy guarantees, as iterations proceed. Our approach uses independent noises across players, making it effective even when adversaries have access to all shared messages as well as the underlying algorithm structure. The encryption-free nature of the proposed approach also ensures efficiency in computation and communication. The approach further applies to stochastic aggregative games, ensuring both rigorous differential privacy and guaranteed computation accuracy of the Nash equilibrium when individual players have access only to stochastic estimates of their pseudo-gradient mappings. Numerical comparisons with existing counterparts confirm the effectiveness of the proposed approach.