The $\Sigma$-QMAC problem is introduced, involving $S$ servers, $K$ classical ($\mathbb{F}_d$) data streams, and $T$ independent quantum systems. Data stream ${\sf W}_k, k\in[K]$ is replicated at a subset of servers $\mathcal{W}(k)\subset[S]$, and quantum system $\mathcal{Q}_t, t\in[T]$ is distributed among a subset of servers $\mathcal{E}(t)\subset[S]$ such that Server $s\in\mathcal{E}(t)$ receives subsystem $\mathcal{Q}_{t,s}$ of $\mathcal{Q}_t=(\mathcal{Q}_{t,s})_{s\in\mathcal{E}(t)}$. Servers manipulate their quantum subsystems according to their data and send the subsystems to a receiver. The total download cost is $\sum_{t\in[T]}\sum_{s\in\mathcal{E}(t)}\log_d|\mathcal{Q}_{t,s}|$ qudits, where $|\mathcal{Q}|$ is the dimension of $\mathcal{Q}$. The states and measurements of $(\mathcal{Q}_t)_{t\in[T]}$ are required to be separable across $t\in[T]$ throughout, but for each $t\in[T]$, the subsystems of $\mathcal{Q}_{t}$ can be prepared initially in an arbitrary (data-independent) entangled state, manipulated arbitrarily by the respective servers, and measured jointly by the receiver. From the measurements, the receiver must recover the sum of all data streams. The rate is defined as the number of dits ($\mathbb{F}_d$ symbols) of the desired sum computed per qudit of download. The capacity of the $\Sigma$-QMAC, i.e., the supremum of achievable rates, is characterized for arbitrary data and entanglement distributions $\mathcal{W}, \mathcal{E}$. Coding based on the $N$-sum box abstraction is optimal in every case.
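For concreteness, here is a toy instance of the download-cost and rate bookkeeping defined above; the distributions $\mathcal{W},\mathcal{E}$ and subsystem dimensions below are hypothetical, not taken from the paper.

```python
import math

# Hypothetical toy configuration: F_d data, two quantum systems, three servers.
d = 3                                               # data is over F_d
E = {1: [1, 2], 2: [2, 3]}                          # system t -> servers E(t) holding it
dim = {(1, 1): 3, (1, 2): 3, (2, 2): 9, (2, 3): 3}  # subsystem dimensions |Q_{t,s}|

# Total download cost in qudits: sum over t and s in E(t) of log_d |Q_{t,s}|.
download = sum(math.log(dim[(t, s)], d) for t in E for s in E[t])
L = 2                                               # dits of the desired sum recovered
print(download, L / download)                       # ~5 qudits, rate ~0.4
```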
We present a randomized, inverse-free algorithm for producing an approximate diagonalization of any $n \times n$ matrix pencil $(A,B)$. The bulk of the algorithm rests on a randomized divide-and-conquer eigensolver for the generalized eigenvalue problem originally proposed by Ballard, Demmel, and Dumitriu [Technical Report 2010]. We demonstrate that this divide-and-conquer approach can be formulated to succeed with high probability provided the input pencil is sufficiently well-behaved, which we accomplish by generalizing the recent pseudospectral shattering work of Banks, Garza-Vargas, Kulkarni, and Srivastava [Foundations of Computational Mathematics 2022]. In particular, we show that perturbing and scaling $(A,B)$ regularizes its pseudospectra, allowing divide-and-conquer to run over a simple random grid and in turn produce an accurate diagonalization of $(A,B)$ in the backward error sense. The main result of the paper states the existence of a randomized algorithm that, with high probability (and in exact arithmetic), produces invertible $S,T$ and diagonal $D$ such that $||A - SDT^{-1}||_2 \leq \varepsilon$ and $||B - SIT^{-1}||_2 \leq \varepsilon$ in at most $O \left( \log(n) \log^2 \left( \frac{n}{\varepsilon} \right) T_{\text{MM}}(n) \right)$ operations, where $T_{\text{MM}}(n)$ is the asymptotic complexity of matrix multiplication. This not only provides a new set of guarantees for highly parallel generalized eigenvalue solvers but also establishes nearly matrix multiplication time as an upper bound on the complexity of exact-arithmetic matrix pencil diagonalization.
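The following sketch illustrates the backward-error notion $||A - SDT^{-1}||_2 \leq \varepsilon$, $||B - SIT^{-1}||_2 \leq \varepsilon$ numerically. It is not the paper's inverse-free divide-and-conquer algorithm: it uses a dense generalized eigensolver and explicit inverses, and the perturbation size is an illustrative stand-in for the shattering step.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n, gamma = 50, 1e-6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
# Small Gaussian perturbation, mimicking the pseudospectral regularization.
A = A + gamma * rng.standard_normal((n, n))
B = B + gamma * rng.standard_normal((n, n))

lam, V = eig(A, B)            # generalized eigenpairs: A V = B V diag(lam)
T = V                         # right eigenvector matrix
S = B @ T                     # chosen so that B = S T^{-1} exactly
D = np.diag(lam)

err_A = np.linalg.norm(A - S @ D @ np.linalg.inv(T), 2)
err_B = np.linalg.norm(B - S @ np.linalg.inv(T), 2)
print(err_A, err_B)           # small for a well-behaved (shattered) pencil
```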
A sequence of random variables is called exchangeable if its joint distribution is invariant under permutations. The original formulation of de Finetti's theorem says that any exchangeable sequence of $\{0,1\}$-valued random variables can be thought of as a mixture of independent and identically distributed sequences in a certain precise mathematical sense. Interpreting this statement from a convex analytic perspective, Hewitt and Savage obtained the same conclusion for more general state spaces under some topological conditions. The main contribution of this paper is in providing a new framework that explains the theorem purely as a consequence of the underlying distribution of the random variables, with no topological conditions (beyond Hausdorffness) on the state space being necessary if the distribution is Radon. We also show that it is consistent with the axioms of ZFC that de Finetti's theorem holds for all sequences of exchangeable random variables taking values in any complete metric space. The framework we use is based on nonstandard analysis. We have provided a self-contained introduction to nonstandard analysis as an appendix, thus rendering measure-theoretic probability and point-set topology the only prerequisites for this paper. Our introduction aims to develop some new ideologies that might be of interest to mathematicians, philosophers, and mathematics educators alike. Our technical tools come from nonstandard topological measure theory, in which a highlight is a new generalization of Prokhorov's theorem. Modulo such technical tools, our proof relies on properties of the empirical measures induced by hyperfinitely many identically distributed random variables -- a feature that allows us to establish de Finetti's theorem in the generality that we seek while still retaining the combinatorial intuition of proofs of simpler versions of de Finetti's theorem.
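For concreteness, the classical $\{0,1\}$-valued statement referenced above reads: an infinite sequence $(X_n)_{n \geq 1}$ of $\{0,1\}$-valued random variables is exchangeable if and only if there exists a probability measure $\mu$ on $[0,1]$ such that for all $n$ and all $(x_1, \ldots, x_n) \in \{0,1\}^n$,
\[
\mathbb{P}(X_1 = x_1, \ldots, X_n = x_n) = \int_0^1 p^{\sum_{i=1}^n x_i} (1-p)^{\,n - \sum_{i=1}^n x_i} \, d\mu(p),
\]
i.e., the sequence is a $\mu$-mixture of i.i.d. Bernoulli$(p)$ sequences.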
This paper considers an ML-inspired approach to hypothesis testing known as classifier/classification-accuracy testing ($\mathsf{CAT}$). In $\mathsf{CAT}$, one first trains a classifier by feeding it labeled synthetic samples generated by the null and alternative distributions, and then uses the trained classifier to predict the labels of the actual data samples. This method is widely used in practice when the null and alternative are only specified via simulators (as in many scientific experiments). We study goodness-of-fit, two-sample ($\mathsf{TS}$) and likelihood-free hypothesis testing ($\mathsf{LFHT}$), and show that $\mathsf{CAT}$ achieves (near-)minimax optimal sample complexity in both the dependence on the total-variation ($\mathsf{TV}$) separation $\epsilon$ and the probability of error $\delta$ in a variety of non-parametric settings, including discrete distributions, $d$-dimensional distributions with a smooth density, and the Gaussian sequence model. In particular, we settle the high-probability sample complexity of $\mathsf{LFHT}$ for each class. As another highlight, we recover the minimax optimal complexity of $\mathsf{TS}$ over discrete distributions, which was recently established by Diakonikolas et al. (2021). The corresponding $\mathsf{CAT}$ simply compares empirical frequencies in the first half of the data, and rejects the null when the classification accuracy on the second half is better than random.
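A minimal sketch of that discrete two-sample $\mathsf{CAT}$ follows; the function name and rejection threshold are illustrative (the paper's constants are not reproduced), and even-length samples are assumed for the split.

```python
import numpy as np

def cat_two_sample(x, y, k, thresh):
    """x, y: integer samples over {0,...,k-1}; even lengths assumed."""
    x1, x2 = np.split(x, 2)
    y1, y2 = np.split(y, 2)
    # "Train" on the first halves: label a point by whichever half-sample
    # has the larger empirical frequency at that point.
    fx = np.bincount(x1, minlength=k) / len(x1)
    fy = np.bincount(y1, minlength=k) / len(y1)
    vote = fx > fy                        # vote[v]: "v looks like an x-sample"
    # "Test" on the second halves: classification accuracy vs. chance.
    acc = 0.5 * (np.mean(vote[x2]) + np.mean(~vote[y2]))
    return acc > 0.5 + thresh             # reject the null (equal distributions)
```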
This paper considers a single-trajectory system identification problem for linear systems under general nonlinear and/or time-varying policies with i.i.d. random excitation noises. The problem is motivated by safe learning-based control for constrained linear systems, where the safe policies used during learning are typically nonlinear and time-varying in order to satisfy the state and input constraints. In this paper, we provide a non-asymptotic error bound for least-squares estimation when the data trajectory is generated by any nonlinear and/or time-varying policy, as long as the generated state and action trajectories are bounded. This significantly generalizes the existing non-asymptotic guarantees for linear system identification, which usually consider i.i.d. random inputs or linear policies. Interestingly, our error bound is consistent with that for linear policies in its dependence on the trajectory length, system dimensions, and excitation levels. Lastly, we demonstrate an application of our results to safe learning with robust model predictive control and provide a numerical analysis.
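A minimal sketch of the estimator in this setting is given below; the saturating (hence nonlinear) policy, noise levels, and dimensions are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, T = 3, 2, 500
A = rng.standard_normal((n, n))
A *= 0.9 / np.linalg.norm(A, 2)                  # stable true dynamics
B = rng.standard_normal((n, m))

x, X, U, Y = np.zeros(n), [], [], []
for t in range(T):
    # Bounded nonlinear policy (saturation) plus i.i.d. random excitation.
    u = np.clip(-0.1 * x[:m], -1.0, 1.0) + 0.1 * rng.standard_normal(m)
    x_next = A @ x + B @ u + 0.01 * rng.standard_normal(n)
    X.append(x); U.append(u); Y.append(x_next)
    x = x_next

Z = np.hstack([np.array(X), np.array(U)])        # regressors [x_t, u_t]
Theta, *_ = np.linalg.lstsq(Z, np.array(Y), rcond=None)
A_hat, B_hat = Theta[:n].T, Theta[n:].T          # since Y = Z [A^T; B^T]
print(np.linalg.norm(A - A_hat), np.linalg.norm(B - B_hat))
```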
We construct a quantum oracle relative to which $\mathsf{BQP} = \mathsf{QMA}$ but cryptographic pseudorandom quantum states and pseudorandom unitary transformations exist, a counterintuitive result in light of the fact that pseudorandom states can be "broken" by quantum Merlin-Arthur adversaries. We explain how this nuance arises as the result of a distinction between algorithms that operate on quantum and classical inputs. On the other hand, we show that some computational complexity assumption is needed to construct pseudorandom states, by proving that pseudorandom states do not exist if $\mathsf{BQP} = \mathsf{PP}$. We discuss implications of these results for cryptography, complexity theory, and quantum tomography.
Empirical evidence demonstrates that citations received by scholarly publications follow a pattern of preferential attachment, resulting in a power-law distribution. Such asymmetry has sparked significant debate regarding the use of citations for research evaluation. However, a consensus has yet to be established concerning the historical trends in citation concentration. Are citations becoming more concentrated in a small number of articles? Or have recent geopolitical and technical changes in science led to more decentralized distributions? This ongoing debate stems from a lack of technical clarity in measuring inequality. Given the variations in citation practices across disciplines and over time, it is crucial to account for multiple factors that can influence the findings. This article explores how reference-based and citation-based approaches, uncited articles, citation inflation, the expansion of bibliometric databases, disciplinary differences, and self-citations affect the evolution of citation concentration. Our results indicate a decreasing trend in citation concentration, primarily driven by a decline in uncited articles, which, in turn, can be attributed to the growing significance of Asia and Europe. On the whole, our findings clarify current debates on citation concentration and show that, contrary to a widely held belief, citations are increasingly scattered.
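One standard way to operationalize "concentration" here is an inequality index such as the Gini coefficient of a citation-count vector. The computation below is illustrative only; the article's full protocol additionally controls for the disciplinary, temporal, and database factors listed above.

```python
import numpy as np

def gini(c):
    """Gini coefficient of a nonnegative count vector (0 = equal, 1 = concentrated)."""
    c = np.sort(np.asarray(c, dtype=float))
    n = len(c)
    return (2 * np.arange(1, n + 1) - n - 1) @ c / (n * c.sum())

# Heavy-tailed synthetic citation counts, including uncited articles (zeros).
cites = np.random.default_rng(2).zipf(2.0, size=10_000) - 1
print(gini(cites))
```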
An algorithm for robust initial orbit determination (IOD) under perturbed orbital dynamics is presented. By leveraging map inversion techniques defined in the algebra of Taylor polynomials, this tool is capable of not only returning a highly accurate solution to the IOD problem, but also estimating a range of validity for this solution, in which the true orbit state should lie. Automatic domain splitting is then used on top of the IOD routines to ensure that the local truncation error introduced by a polynomial representation of the state estimate remains below a predefined threshold, so as to meet the specified accuracy requirements. The algorithm is adapted to three types of ground-based sensors, namely range radars, Doppler-only radars, and optical telescopes, by taking into account their different constraints in terms of available measurements and sensor noise. Its improved performance with respect to a Keplerian-based IOD solution is finally demonstrated with large-scale numerical simulations over a subset of tracked objects in low Earth orbit.
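A one-dimensional toy of the underlying map-inversion step is sketched below: fixed-point inversion of a near-identity truncated Taylor map. The paper's routines are multivariate and add automatic domain splitting, neither of which this sketch reproduces; the map coefficients are arbitrary.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def compose(f, g, order):
    """Coefficients of f(g(y)), truncated at the given order (low order first)."""
    out = np.zeros(1)
    for c in f[::-1]:                      # Horner's scheme on polynomials
        out = P.polymul(out, g)[: order + 1]
        out[0] += c
    return out

order = 5
f = np.array([0.0, 1.0, 0.3, -0.2, 0.1, 0.05])  # near-identity map f = id + N
N = f.copy(); N[1] -= 1.0                        # nonlinear part N = f - id
g = np.array([0.0, 1.0])                         # initial guess g(y) = y
for _ in range(order):                           # each sweep fixes one more order
    g = -compose(N, g, order)                    # g <- y - N(g(y))
    g = P.polyadd(g, [0.0, 1.0])[: order + 1]
print(compose(f, g, order))                      # ~ [0, 1, 0, ...]: f(g(y)) = y
```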
A novel positive dependence property is introduced, called positive measure inducing (PMI for short), which is fulfilled by numerous copula classes, including Gaussian, Fr\'echet, Farlie-Gumbel-Morgenstern and Frank copulas; it is conjectured that all positive quadrant dependent Archimedean copulas meet this property. From a geometric viewpoint, a PMI copula concentrates more mass near the main diagonal than near the opposite diagonal. A striking feature of PMI copulas is that they impose an ordering on a certain class of copula-induced measures of concordance, the latter originating in Edwards et al. (2004) and including Spearman's rho $\rho$ and Gini's gamma $\gamma$, leading to numerous new inequalities such as $3 \gamma \geq 2 \rho$. The measures of concordance within this class are estimated using (classical) empirical copulas and the intrinsic construction via empirical checkerboard copulas, and the estimators' asymptotic behaviour is determined. Building upon the presented inequalities, asymptotic tests are constructed that can be used to detect whether the underlying dependence structure of a given sample is PMI, which in turn can be used to exclude certain copula families from model building. The excellent performance of the tests is demonstrated in a simulation study and by means of a real-data example.
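An illustrative empirical check of one such inequality, $3\gamma \geq 2\rho$, for a Gaussian copula sample is sketched below, using plain grid-based plug-in estimates of the empirical copula (not the paper's checkerboard construction) and the classical identity $\gamma = 4\int_0^1 [C(u,u) + C(u,1-u)]\,du - 2$.

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(3)
n = 2000
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(cov).T  # Gaussian, rho = 0.5
u = rankdata(z[:, 0]) / (n + 1)                              # pseudo-observations
v = rankdata(z[:, 1]) / (n + 1)

rho = spearmanr(z[:, 0], z[:, 1])[0]                         # Spearman's rho

t = np.linspace(0.0, 1.0, 513)                               # evaluation grid
C_diag = np.mean((u[:, None] <= t) & (v[:, None] <= t), axis=0)      # C(t, t)
C_anti = np.mean((u[:, None] <= t) & (v[:, None] <= 1 - t), axis=0)  # C(t, 1-t)
gamma = 4.0 * np.mean(C_diag + C_anti) - 2.0                 # Gini's gamma

print(rho, gamma, 3 * gamma >= 2 * rho)
```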
In federated frequency estimation (FFE), multiple clients work together to estimate the frequencies of their collective data by communicating with a server that respects the privacy constraints of Secure Summation (SecSum), a cryptographic multi-party computation protocol ensuring that the server can only access the sum of client-held vectors. For single-round FFE, it is known that count sketching is nearly information-theoretically optimal for achieving the fundamental accuracy-communication trade-offs [Chen et al., 2022]. However, we show that under the more practical multi-round FFE setting, simple adaptations of count sketching are strictly sub-optimal, and we propose a novel hybrid sketching algorithm that is provably more accurate. We also address the following fundamental question: how should a practitioner set the sketch size in a way that adapts to the hardness of the underlying problem? We propose a two-phase approach that allows for the use of a smaller sketch size for simpler problems (e.g., near-sparse or light-tailed distributions). We conclude our work by showing how differential privacy can be added to our algorithm and verifying its superior performance through extensive experiments conducted on large-scale datasets.
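A minimal count sketch under SecSum is sketched below: all clients share the hash seed, so the server can aggregate by summing the tables entrywise, which is exactly what SecSum exposes. This is the single-round baseline discussed above; the paper's hybrid sketch and two-phase size selection are not reproduced, and all parameters are illustrative.

```python
import numpy as np

class CountSketch:
    def __init__(self, rows, width, domain, seed=0):
        rng = np.random.default_rng(seed)                    # shared seed = shared hashes
        self.h = rng.integers(0, width, size=(rows, domain)) # bucket hashes
        self.s = rng.choice([-1, 1], size=(rows, domain))    # sign hashes
        self.table = np.zeros((rows, width))
        self.rows = rows

    def add(self, items):
        for x in np.asarray(items):
            self.table[np.arange(self.rows), self.h[:, x]] += self.s[:, x]

    def estimate(self, x):
        return np.median(self.s[:, x] * self.table[np.arange(self.rows), self.h[:, x]])

rng = np.random.default_rng(1)
clients = [np.concatenate([rng.integers(0, 1000, 4000), np.full(1000, 42)])
           for _ in range(10)]                     # item 42 is a heavy hitter
tables = []
for data in clients:
    cs = CountSketch(rows=5, width=512, domain=1000)
    cs.add(data)
    tables.append(cs.table)
server = CountSketch(rows=5, width=512, domain=1000)
server.table = np.sum(tables, axis=0)              # all the server sees under SecSum
true = sum(np.bincount(c, minlength=1000) for c in clients)
print(server.estimate(42), true[42])               # ~10000 for the heavy hitter
```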
Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for the precise execution of exactly defined tasks, such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks, such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts, leading to more robust, better-performing, more interpretable, more computationally efficient, and more data-efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable, so that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
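A minimal sketch of the relaxation idea for a single comparator: perturbing the comparison with logistic noise and taking the expectation in closed form yields a sigmoid, giving a differentiable conditional swap of the kind used in differentiable sorting networks (an illustration of the principle, not the thesis code; the temperature `beta` is arbitrary).

```python
import torch

def soft_cswap(a, b, beta=10.0):
    # E[ 1{a + eps < b} ] for logistic noise eps has the closed form
    # sigmoid(beta * (b - a)); no sampling is needed.
    p = torch.sigmoid(beta * (b - a))       # probability that a is the smaller value
    lo = p * a + (1 - p) * b                # expected minimum
    hi = p * b + (1 - p) * a                # expected maximum
    return lo, hi

x = torch.tensor([3.0, 1.0, 2.0], requires_grad=True)
a, b = soft_cswap(x[0], x[1])               # a 3-wire sorting network:
b, c = soft_cswap(b, x[2])                  # three conditional swaps suffice
a, b = soft_cswap(a, b)
torch.stack([a, b, c]).sum().backward()     # gradients flow through the "sort"
print([v.item() for v in (a, b, c)], x.grad)
```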