In this paper, we propose a novel bipartite entanglement purification protocol built upon hashing and upon the guessing random additive noise decoding (GRAND) approach recently devised for classical error correction codes. Our protocol offers substantial advantages over existing hashing protocols: it requires fewer qubits for purification, achieves higher fidelities, and delivers better yields at reduced computational cost. We provide numerical and semi-analytical results to corroborate our findings and give a detailed comparison with the hashing protocol of Bennett et al. Although that pioneering work derived performance bounds, it did not offer an explicit construction for implementation. The present work fills that gap, offering both an explicit and a more efficient purification method. We demonstrate that our protocol can purify states with noise on the order of 10% per Bell pair even with a small ensemble of 16 pairs. We also explore a measurement-based implementation of the protocol to address practical setups with noise. This work opens the path to practical and efficient entanglement purification using hashing-based methods with feasible computational costs. Compared to the original hashing protocol, the proposed method can achieve a given target fidelity with up to one hundred times fewer initial resources. The proposed method is therefore well suited to future quantum networks with a limited number of resources, and it entails a relatively low computational overhead.
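To make the decoding step concrete, here is a minimal classical sketch of GRAND-style syndrome decoding on a bit-flip analogue. Everything here (parameter choices, random parity checks, function names) is our own illustration; the actual protocol operates on Bell pairs via bilateral measurements rather than on classical bits.

```python
# Minimal classical analogue of GRAND-style syndrome decoding (illustration
# only; the paper's protocol acts on Bell pairs, and all names here are ours).
import itertools
import numpy as np

def grand_decode(H, syndrome, max_weight=3):
    """Guess error patterns from most to least likely (i.e. in order of
    increasing Hamming weight, for error rates below 1/2) and return the
    first pattern whose syndrome matches."""
    n = H.shape[1]
    for w in range(max_weight + 1):
        for support in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            if np.array_equal(H @ e % 2, syndrome):
                return e
    return None  # abandon guessing: declare a decoding failure

n, k, p = 16, 8, 0.10                    # 16 "pairs", 10% error rate
rng = np.random.default_rng(0)
H = rng.integers(0, 2, size=(n - k, n))  # random parity checks (hashing)
error = (rng.random(n) < p).astype(int)  # hidden noise pattern
syndrome = H @ error % 2                 # parities the parties would compare
print(grand_decode(H, syndrome))         # often recovers `error`; more checks
                                         # lower the miss rate
```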
In this paper, we propose a new test for the equality of two population covariance matrices in the ultra-high-dimensional setting, where the dimension is much larger than both sample sizes. Our proposed methodology relies on a data-splitting procedure and a comparison of a set of well-selected eigenvalues of the sample covariance matrices on the split data sets. Compared to existing methods, our methodology is adaptive in the sense that (i) it does not require specific assumptions (e.g., comparable or balanced sizes) on the two samples; (ii) it does not need quantitative or structural assumptions on the population covariance matrices; and (iii) it does not require parametric distributions or detailed knowledge of the moments of the two populations. Theoretically, we establish the asymptotic distributions of the statistics used in our method and conduct a power analysis, showing that our method is powerful under very weak alternatives. We conduct extensive numerical simulations and show that our method significantly outperforms existing ones in terms of both size and power. Analyses of two real data sets are also carried out to demonstrate the usefulness and superior performance of our proposed methodology. An $\texttt{R}$ package, $\texttt{UHDtst}$, is developed for easy implementation of our proposed methodology.
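As a rough illustration of the data-splitting idea (our simplification: the paper's eigenvalue selection rule, test statistic, and calibration are different and come with theoretical guarantees), one can use the first split to select where the spectra disagree most and the independent second split to evaluate the discrepancy:

```python
# Hedged sketch of a data-splitting spectral comparison (our simplification;
# the UHDtst methodology and its calibration are not reproduced here).
import numpy as np

def top_eigs(X, m):
    """Top-m eigenvalues of the sample covariance of the rows of X."""
    S = np.cov(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(S))[::-1][:m]

def split_discrepancy(X, Y, m=5, seed=0):
    rng = np.random.default_rng(seed)
    X1, X2 = np.array_split(rng.permutation(X), 2)   # data splitting
    Y1, Y2 = np.array_split(rng.permutation(Y), 2)
    # First split: pick the eigenvalue index with the largest disagreement...
    j = int(np.argmax(np.abs(top_eigs(X1, m) - top_eigs(Y1, m))))
    # ...second, independent split: evaluate the discrepancy at that index.
    return abs(top_eigs(X2, m)[j] - top_eigs(Y2, m)[j])
```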
In this article, we study nonparametric inference for a covariate-adjusted regression function. This parameter captures the average association between a continuous exposure and an outcome after adjusting for other covariates. In particular, under certain causal conditions, this parameter corresponds to the average outcome had all units been assigned to a specific exposure level, known as the causal dose-response curve. We propose a debiased local linear estimator of the covariate-adjusted regression function, and demonstrate that our estimator converges pointwise to a mean-zero normal limit distribution. We use this result to construct asymptotically valid confidence intervals for function values and differences thereof. In addition, we use approximation results for the distribution of the supremum of an empirical process to construct asymptotically valid uniform confidence bands. Our methods do not require undersmoothing and permit the use of data-adaptive estimators of nuisance functions, and our estimator attains the optimal rate of convergence for a twice-differentiable function. We illustrate the practical performance of our estimator using numerical studies and an analysis of the effect of air pollution exposure on cardiovascular mortality.
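For intuition, here is a generic sketch of a bias-corrected local linear fit at a point, using a pilot local quadratic fit to estimate the curvature term (our simplification: the paper's estimator additionally adjusts for covariates through nuisance-function estimation, which is not modeled here):

```python
# Generic debiased local linear regression at a point x0 (illustration only;
# the bandwidths h, b and the Gaussian kernel are our own choices).
import numpy as np

def local_poly_fit(x, y, x0, h, degree):
    """Kernel-weighted polynomial fit; returns coefficients of (x - x0)^j."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # Gaussian kernel weights
    X = np.vander(x - x0, degree + 1, increasing=True)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def debiased_local_linear(x, y, x0, h, b):
    beta = local_poly_fit(x, y, x0, h, 1)       # local linear fit, bandwidth h
    gamma = local_poly_fit(x, y, x0, b, 2)      # pilot local quadratic, bandwidth b
    mu2 = 1.0                                   # 2nd moment of the Gaussian kernel
    bias = 0.5 * h ** 2 * mu2 * (2 * gamma[2])  # (h^2 / 2) * mu2 * m''(x0)
    return beta[0] - bias                       # debiased point estimate
```

In generic kernel-smoothing settings, an explicit correction of this kind is what removes the need for undersmoothing: the leading smoothing bias is estimated and subtracted rather than forced to vanish by shrinking $h$.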
In this paper, we introduce a new, simple approach to developing splitting methods for a large class of stochastic differential equations (SDEs), including additive, diagonal and scalar noise types, and to establishing their convergence. The central idea is to view a splitting method as a replacement of the driving signal of an SDE, namely Brownian motion and time, with a piecewise linear path; this yields a sequence of ODEs, which can be discretised to produce a numerical scheme. This new way of understanding splitting methods is inspired by, but does not use, rough path theory. We show that when the driving piecewise linear path matches certain iterated stochastic integrals of Brownian motion, a high-order splitting method can be obtained. We propose a general proof methodology for establishing the strong convergence of these approximations that is akin to the general framework of Milstein and Tretyakov: once local error estimates are obtained for the splitting method, a global rate of convergence follows. This approach can then be readily applied in future research on SDE splitting methods. By incorporating recently developed approximations of iterated integrals of Brownian motion into these piecewise linear paths, we propose several high-order splitting methods for SDEs satisfying a certain commutativity condition. In our experiments, which include the Cox-Ingersoll-Ross model and additive-noise SDEs (noisy anharmonic oscillator, stochastic FitzHugh-Nagumo model, underdamped Langevin dynamics), the new splitting methods exhibit convergence rates of $O(h^{3/2})$ and outperform schemes previously proposed in the literature.
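As a minimal illustration of the splitting viewpoint, the sketch below alternates the drift flow (integrated by RK4) with the Brownian increment for an additive-noise SDE; it uses only the increment, whereas the paper's high-order methods also match iterated stochastic integrals:

```python
# Basic Strang-type splitting for an additive-noise SDE dy = f(y) dt + sigma dW
# (our illustration; the paper's schemes embed iterated-integral information
# into the piecewise linear driving path to reach higher order).
import numpy as np

def ode_step(f, y, h):
    """One classical RK4 step for the deterministic flow dy/dt = f(y)."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def strang_splitting(f, sigma, y0, T, n, rng):
    h, y = T / n, np.asarray(y0, dtype=float)
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(h), size=y.shape)
        y = ode_step(f, y, h / 2)   # half step of the drift ODE
        y = y + sigma * dW          # full noise increment
        y = ode_step(f, y, h / 2)   # half step of the drift ODE
    return y

f = lambda y: np.array([y[1], -y[0] ** 3])   # noisy anharmonic oscillator
sigma = np.array([0.0, 0.1])                 # additive noise on the velocity
print(strang_splitting(f, sigma, [1.0, 0.0], 1.0, 100, np.random.default_rng(1)))
```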
AI alignment work is important from both a commercial and a safety lens. With this paper, we aim to help actors who support alignment efforts make these efforts as effective as possible and avoid potential adverse effects. We begin by suggesting that institutions trying to act in the public interest (such as governments) should aim to support specifically the alignment work that reduces accident or misuse risks. We then describe four problems that might cause alignment efforts to be counterproductive, increasing large-scale AI risks, and suggest mitigations for each. Finally, we make a broader recommendation that institutions trying to act in the public interest should think systematically about how to make their alignment efforts as effective, and as likely to be beneficial, as possible.
Can generative AI help us speed up the authoring of tools to help self-represented litigants? In this paper, we describe three approaches to automating the completion of court forms: a generative AI approach that uses GPT-3 to iteratively prompt the user to answer questions, a constrained template-driven approach that uses GPT-4-turbo to generate a draft of questions that are subject to human review, and a hybrid method. We use the open-source Docassemble platform in all three experiments, together with a tool created at Suffolk University Law School called the Assembly Line Weaver. We conclude that the hybrid model of constrained automated drafting with human review is best suited to the task of authoring guided interviews.
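A hypothetical sketch of the constrained drafting step is below. The model name, prompts, and field names are our own placeholders, and the paper's pipeline runs through Docassemble and the Assembly Line Weaver with human review, none of which is modeled here:

```python
# Hypothetical sketch of drafting one plain-language question per form field
# (our own prompts and field names; not the paper's Docassemble integration).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_question(field_name: str, form_title: str) -> str:
    """Ask the model to phrase a plain-language question for one form field."""
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write plain-language questions for court forms "
                        "aimed at self-represented litigants."},
            {"role": "user",
             "content": f"Form: {form_title}. Field: {field_name}. "
                        "Write one clear question a non-lawyer can answer."},
        ],
    )
    return resp.choices[0].message.content

# Each drafted question would then go to a human reviewer before deployment.
print(draft_question("petitioner_address", "Petition for Name Change"))
```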
Haagerup's proof of the non-commutative little Grothendieck inequality raises some questions about the commutative little inequality, and it offers a new result on scalar matrices with non-negative entries. The theory of completely bounded maps implies that the commutative Grothendieck inequality follows from the little commutative inequality, and that this passage may be given a geometric form as a relation between a pair of compact convex sets of positive matrices, which, in turn, characterizes the little constant in the complex case.
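For orientation, one standard formulation of the commutative little Grothendieck inequality reads as follows (our statement for context, not taken from the paper): for a bounded linear map $T \colon C(K) \to H$ into a Hilbert space and $f_1, \dots, f_n \in C(K)$,
$$
\sum_{i=1}^{n} \|T f_i\|^2 \;\le\; C\, \|T\|^2 \, \Bigl\| \sum_{i=1}^{n} |f_i|^2 \Bigr\|_\infty ,
$$
where the best constant $C$ is commonly cited as $\pi/2$ in the real case and $4/\pi$ in the complex case; the complex constant is the "little constant" whose geometric characterization is referred to above.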
This paper proposes a Bayes-net-based Monte Carlo optimization method for motion planning (BN-MCO). First, we adjust the potential fields determined by the goal and start constraints to progressively guide the sampled clusters toward the goal and start points. Then, we use a Gaussian mixture model (GMM) to perform Monte Carlo optimization over these two non-convex potential fields. KL divergence measures the bias between the true distribution determined by the fields and the proposed GMM, whose parameters are learned incrementally according to the manifold information of the bias. In this way, the Bayesian network consisting of sequentially updated GMMs expands until the constraints are satisfied and a shortest-path method can find a feasible path. Finally, we tune the key parameters and benchmark BN-MCO against five other planners on an LBR iiwa manipulator in a bookshelf environment. The results show that BN-MCO achieves the highest success rate with moderate solving efficiency.
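The toy sketch below conveys the flavor of a sampled, field-weighted mixture update (a heavy simplification: the potential, the weighting, and the refitting rule are ours, and the paper's incremental KL-guided learning from manifold information is more involved):

```python
# Toy cross-entropy-style GMM update toward low-potential regions (our
# simplification of BN-MCO's KL-guided incremental GMM learning).
import numpy as np

def potential(x):
    """Example non-convex potential: two wells (stand-ins for goal and start)."""
    return np.minimum(np.sum((x - 2.0) ** 2, axis=1),
                      np.sum((x + 2.0) ** 2, axis=1))

def update_gmm(means, covs, weights, n_samples=500, rng=None):
    rng = rng or np.random.default_rng()
    K, d = means.shape
    comp = rng.choice(K, size=n_samples, p=weights)    # pick components
    x = np.array([rng.multivariate_normal(means[c], covs[c]) for c in comp])
    w = np.exp(-potential(x))                          # field-based weights
    w /= w.sum()
    for k in range(K):                                 # weighted refit
        mask = comp == k
        if w[mask].sum() > 0:
            wk = w[mask] / w[mask].sum()
            means[k] = wk @ x[mask]
            diff = x[mask] - means[k]
            covs[k] = (wk[:, None] * diff).T @ diff + 1e-6 * np.eye(d)
        weights[k] = w[mask].sum()
    return means, covs, weights / weights.sum()
```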
In this paper we present a low-rank method for conforming multipatch discretizations of compressible linear elasticity problems using Isogeometric Analysis. The proposed technique is a non-trivial extension of [M. Montardini, G. Sangalli, and M. Tani. A low-rank isogeometric solver based on Tucker tensors. Comput. Methods Appl. Mech. Engrg., page 116472, 2023.] to multipatch geometries. We tackle the model problem using an overlapping Schwarz method, where the subdomains can be defined as unions of neighbouring patches. On each subdomain, we approximate the blocks of the linear system matrix and of the right-hand-side vector using Tucker matrices and Tucker vectors, respectively. We use the Truncated Preconditioned Conjugate Gradient method as a linear solver, coupled with a suitable preconditioner. The numerical experiments show the advantages of this approach in terms of memory storage. Moreover, the number of iterations is robust with respect to the relevant parameters.
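Schematically, the truncation-inside-the-solver idea looks as follows in a plain low-rank matrix setting (illustration only: the paper works with Tucker tensors and an overlapping-Schwarz preconditioner, neither of which is modeled here):

```python
# Schematic truncated CG with low-rank recompression of the iterates
# (plain-matrix stand-in for the paper's Tucker-format solver).
import numpy as np

def truncate(X, rank):
    """Recompress a matrix iterate to the given rank via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def truncated_cg(apply_A, B, rank, iters=50, tol=1e-8):
    X = np.zeros_like(B)
    R = B - apply_A(X)
    P = R.copy()
    for _ in range(iters):
        AP = apply_A(P)
        alpha = np.vdot(R, R) / np.vdot(P, AP)
        X = truncate(X + alpha * P, rank)  # keep the iterate low-rank
        R_new = B - apply_A(X)             # recompute residual after truncation
        if np.linalg.norm(R_new) < tol:
            break
        beta = np.vdot(R_new, R_new) / np.vdot(R, R)
        P = truncate(R_new + beta * P, rank)
        R = R_new
    return X
```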
In this paper, we find a necessary and sufficient condition for multi-twisted Reed-Solomon codes to be MDS. In particular, we introduce a new class of MDS double-twisted Reed-Solomon codes $\mathcal{C}_{\bm \alpha, \bm t, \bm h, \bm \eta}$ with twists $\bm t = (1, 2)$ and hooks $\bm h = (0, 1)$ over the finite field $\mathbb{F}_q$, providing a non-trivial example over $\mathbb{F}_{16}$ and an enumeration over finite fields of size up to 17. Moreover, we obtain necessary conditions for the existence of multi-twisted Reed-Solomon codes with small-dimensional hull. Consequently, we derive conditions for the existence of MDS multi-twisted Reed-Solomon codes with small-dimensional hull.
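To make the construction concrete, here is a brute-force MDS check for a small double-twisted Reed-Solomon code over the prime field $\mathbb{F}_{17}$ (twists $\bm t = (1,2)$ and hooks $\bm h = (0,1)$ as above; the length, dimension, twist coefficients, and evaluation points are our own toy choices, not parameters the paper certifies as MDS):

```python
# Brute-force MDS check for a double-twisted Reed-Solomon code over F_17
# (prime field, so plain modular arithmetic suffices; parameters are toys).
from itertools import combinations

p, n, k = 17, 6, 3
eta1, eta2 = 2, 3                 # illustrative twist coefficients
alphas = [1, 2, 3, 4, 5, 6]       # distinct evaluation points

def codeword(a):
    """Evaluate f = a0 + a1 x + a2 x^2 + eta1*a0 x^3 + eta2*a1 x^4
    (hooks (0, 1) pick the coefficients a0, a1; twists (1, 2) give the
    extra monomials x^{k-1+1} = x^3 and x^{k-1+2} = x^4 for k = 3)."""
    return [(a[0] + a[1] * x + a[2] * x * x
             + eta1 * a[0] * pow(x, 3, p)
             + eta2 * a[1] * pow(x, 4, p)) % p for x in alphas]

# Generator matrix: images of the standard-basis messages.
G = [codeword([1 if j == i else 0 for j in range(k)]) for i in range(k)]

def det_mod_p(M):
    """Determinant mod p by Gaussian elimination with modular inverses."""
    M = [row[:] for row in M]
    d = 1
    for c in range(len(M)):
        piv = next((r for r in range(c, len(M)) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        inv = pow(M[c][c], p - 2, p)
        d = d * M[c][c] % p
        for r in range(c + 1, len(M)):
            f = M[r][c] * inv % p
            M[r] = [(M[r][j] - f * M[c][j]) % p for j in range(len(M))]
    return d % p

# The code is MDS iff every k x k minor of G is nonsingular.
is_mds = all(det_mod_p([[G[r][c] for c in cols] for r in range(k)])
             for cols in combinations(range(n), k))
print("MDS:", is_mds)
```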
This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.
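As a toy numerical illustration of the island idea only (our sketch; GLOM itself stacks several part-whole levels with learned bottom-up and top-down networks, none of which appears here), repeated similarity-weighted averaging makes nearby vectors coalesce into islands of near-identical vectors:

```python
# Toy "islands of agreement": similarity-weighted averaging pulls similar
# location vectors together until near-duplicate rows (islands) emerge.
import numpy as np

def form_islands(vecs, steps=50, temp=0.1):
    for _ in range(steps):
        sims = vecs @ vecs.T                     # pairwise similarities
        attn = np.exp(sims / temp)
        attn /= attn.sum(axis=1, keepdims=True)  # attention-style weights
        vecs = attn @ vecs                       # consensus update
        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))                      # 8 locations, 4-dim vectors
x /= np.linalg.norm(x, axis=1, keepdims=True)
print(np.round(form_islands(x), 2))              # groups of near-identical rows
```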