Quantum error correction has recently been shown to benefit greatly from specific physical encodings of the code qubits. In particular, several researchers have considered encoding the individual code qubits with the continuous-variable Gottesman-Kitaev-Preskill (GKP) code and then imposing an outer discrete-variable code, such as the surface code, on these GKP qubits. Under such a concatenation scheme, the analog information from the inner GKP error correction improves the noise threshold of the outer code. However, the surface code has vanishing rate and demands substantial resources as the distance grows. In this work, we concatenate the GKP code with generic quantum low-density parity-check (QLDPC) codes and demonstrate a natural way to exploit the GKP analog information in iterative decoding algorithms. We first establish the noise thresholds for two lifted-product QLDPC code families, and then show how these thresholds improve when the iterative decoder, a hardware-friendly min-sum algorithm (MSA), utilizes the GKP analog information. We also show that, when the GKP analog information is combined with a sequential update schedule for the MSA, the scheme surpasses the well-known CSS Hamming bound for these code families. Furthermore, we observe that the GKP analog information helps the iterative decoder escape harmful trapping sets in the Tanner graph of the QLDPC code, thereby eliminating or significantly lowering the error floor of the logical error rate curves. Finally, we discuss new fundamental and practical questions that arise from this work on channel capacity under GKP analog information and on improving decoder design and analysis.
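As a rough illustration of how the analog information can enter the decoder, a common recipe in the GKP literature converts the measured displacement deviation into a per-qubit channel log-likelihood ratio that initializes min-sum decoding. The Python sketch below uses the nearest-lattice-point approximation; the noise standard deviation `sigma` and the deviations `z` are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def gkp_llr(z, sigma):
    """LLR from GKP analog information (nearest-point approximation):
    compare the Gaussian likelihood of 'no qubit error' (deviation z from
    the nearest lattice point) against 'qubit error' (deviation sqrt(pi)-|z|).
    Assumes z lies in (-sqrt(pi)/2, sqrt(pi)/2]."""
    return ((np.sqrt(np.pi) - np.abs(z))**2 - z**2) / (2 * sigma**2)

# Initialize the min-sum channel messages with one analog LLR per GKP qubit,
# instead of a single constant LLR derived from the average error probability.
rng = np.random.default_rng(0)
sigma = 0.3
z = rng.uniform(-np.sqrt(np.pi)/2, np.sqrt(np.pi)/2, size=100)
channel_llrs = gkp_llr(z, sigma)
```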
We consider a coded compressed sensing approach for unsourced random access and replace the outer tree code proposed by Amalladinne et al. with a list-recoverable code capable of correcting t errors. A finite-length random coding bound for such codes is derived. Numerical experiments in the single-antenna quasi-static Rayleigh fading MAC show that the transition to list-recoverable codes correcting t errors improves the performance of the coded compressed sensing scheme by 7-10 dB compared to the tree-code-based scheme. We propose two practical constructions of outer codes. The first is a modification of the tree code: it retains the same code structure, the key difference being a decoder capable of correcting up to t errors. The second is based on Reed-Solomon codes and the Guruswami-Sudan list-decoding algorithm. The first scheme provides energy efficiency very close to the random coding bound when the decoding complexity is unbounded. For practical parameters, however, the second scheme is better and improves on the tree-code-based scheme when the number of active users is less than 200.
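To make the outer-code structure concrete, here is a toy Python sketch of the baseline tree code of Amalladinne et al. that both constructions build on: information chunks are linked across slots by random parities, and the decoder stitches parity-consistent paths through the per-slot candidate lists. All parameters are illustrative, and no error correction (the t = 0 baseline) is shown:

```python
import numpy as np

rng = np.random.default_rng(1)

L, J = 4, 6                     # slots, bits per slot
k = [6, 4, 4, 2]                # information bits per slot; J - k[l] are parity bits
# random binary matrices linking all previous info bits to slot-l parity bits
G = [rng.integers(0, 2, size=(sum(k[:l]), J - k[l])) for l in range(L)]

def encode(w):
    """Split a sum(k)-bit message into L slot values, appending linked parities."""
    chunks, out, used = [], [], 0
    for l in range(L):
        info = w[used:used + k[l]]; used += k[l]
        prev = np.concatenate(chunks) if chunks else np.zeros(0, dtype=int)
        par = prev @ G[l] % 2 if G[l].size else np.zeros(J - k[l], dtype=int)
        chunks.append(info)
        out.append(np.concatenate([info, par]).astype(int))
    return out

def stitch(lists):
    """Keep only paths through the per-slot candidate lists whose parities check."""
    paths = [[c] for c in lists[0]]
    for l in range(1, L):
        survivors = []
        for p in paths:
            prev = np.concatenate([c[:k[i]] for i, c in enumerate(p)])
            par = prev @ G[l] % 2
            survivors += [p + [c] for c in lists[l] if np.array_equal(c[k[l]:], par)]
        paths = survivors
    return paths

# two active "users"; each slot list contains both users' values, unordered
msgs = [rng.integers(0, 2, sum(k)) for _ in range(2)]
tx = [encode(w) for w in msgs]
lists = [[tx[u][l] for u in range(2)] for l in range(L)]
print(len(stitch(lists)), "parity-consistent paths recovered")
```

With these parameters the stitching decoder typically returns exactly the two transmitted messages; false paths survive only when random parities collide.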
System-oriented IR evaluations are limited to rather abstract understandings of real user behavior. As a solution, simulating user interactions provides a cost-efficient way to support system-oriented experiments with more realistic directives when no interaction logs are available. While there are several user models for simulated clicks or result-list interactions, very few attempts have been made at query simulation, and it has not been investigated whether simulated queries can reproduce the properties of real ones. In this work, we validate simulated user query variants with the help of TREC test collections, in reference to real user queries that were made for the corresponding topics. In addition, we introduce a simple yet effective method that reproduces real queries better than the established methods. Our evaluation framework validates the simulations with regard to retrieval performance, reproducibility of topic score distributions, shared-task utility, effort and effect, and query term similarity when compared with real user query variants. While the retrieval effectiveness and the statistical properties of the topic score distributions, as well as the economic aspects, are close to those of real queries, it remains challenging to simulate exact term matches and later query reformulations.
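As a minimal illustration of the kind of simulator being validated (a generic TF-IDF baseline, not the method proposed here), the following Python sketch generates query variants for a topic by ranking its terms against assumed collection statistics `background_df` and `n_docs` and emitting combinations of the top terms:

```python
import itertools, math, re
from collections import Counter

def simulate_queries(topic_text, background_df, n_docs, k=4, max_len=3):
    """Toy query simulator: rank the topic's terms by TF-IDF and emit
    combinations of the top-k terms as simulated query variants."""
    terms = re.findall(r"[a-z]+", topic_text.lower())
    tf = Counter(terms)
    idf = lambda t: math.log(n_docs / (1 + background_df.get(t, 0)))
    ranked = sorted(tf, key=lambda t: tf[t] * idf(t), reverse=True)[:k]
    return [" ".join(q) for n in range(1, max_len + 1)
            for q in itertools.combinations(ranked, n)]

print(simulate_queries("hubble telescope achievements since the launch",
                       background_df={"the": 900, "telescope": 40},
                       n_docs=1000))
```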
A randomized Gram-Schmidt algorithm is developed for the orthonormalization of high-dimensional vectors and QR factorization. The proposed process can be less computationally expensive than the classical Gram-Schmidt process while being at least as numerically stable as the modified Gram-Schmidt process. Our approach is based on random sketching, a dimension reduction technique that estimates inner products of high-dimensional vectors by inner products of their small, efficiently computable random images, so-called sketches. In this way, approximate orthogonality of the full vectors can be obtained by orthogonalizing their sketches. The proposed Gram-Schmidt algorithm can reduce computational cost on any architecture. The benefit of random sketching can be amplified by performing the non-dominant operations in higher precision. In this case, numerical stability can be guaranteed with a working unit roundoff independent of the dimension of the problem. The proposed Gram-Schmidt process can be applied to the Arnoldi iteration, resulting in new Krylov subspace methods for solving high-dimensional systems of equations or eigenvalue problems. Among them, we chose the randomized GMRES method as a practical application of the methodology.
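A compact Python sketch of the idea follows, with a Gaussian sketching matrix `Theta` standing in for any suitable oblivious subspace embedding; the projection coefficients are obtained from the sketches rather than from the full vectors (a minimal sketch, not the paper's full algorithm):

```python
import numpy as np

def randomized_gram_schmidt(W, k, seed=0):
    """Randomized Gram-Schmidt: orthogonalize each new column against the
    previous ones using only their k-dimensional random sketches."""
    n, m = W.shape
    rng = np.random.default_rng(seed)
    Theta = rng.standard_normal((k, n)) / np.sqrt(k)   # random sketching matrix
    Q = np.zeros((n, m)); S = np.zeros((k, m)); R = np.zeros((m, m))
    for j in range(m):
        p = Theta @ W[:, j]
        if j:  # sketched least-squares projection replaces full inner products
            R[:j, j] = np.linalg.lstsq(S[:, :j], p, rcond=None)[0]
        q = W[:, j] - Q[:, :j] @ R[:j, j]
        s = Theta @ q
        R[j, j] = np.linalg.norm(s)                    # sketched norm
        Q[:, j], S[:, j] = q / R[j, j], s / R[j, j]
    return Q, R

W = np.random.default_rng(1).standard_normal((10000, 50))
Q, R = randomized_gram_schmidt(W, k=400)
print(np.linalg.norm(Q.T @ Q - np.eye(50)))  # small, up to the sketching distortion
```

Note that only the sketched quantities are kept orthonormal exactly; the full columns of Q are orthonormal up to the embedding distortion, which is the source of the computational savings.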
The main aim of this paper is to solve an inverse source problem for a general nonlinear hyperbolic equation. Combining the quasi-reversibility method and a suitable Carleman weight function, we define a map whose fixed point is the solution to the inverse problem. To find this fixed point, we define a recursive sequence with an arbitrary initial term, in the same manner as in the classical proof of the contraction principle. Applying a Carleman estimate, we show that this sequence converges to the desired solution at an exponential rate. Our new method can therefore be considered an analog of the contraction principle. We rigorously study the stability of our method with respect to noise. Numerical examples are presented.
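Schematically, writing $\Phi$ for the map defined via quasi-reversibility and the Carleman weight, and $u^*$ for the solution of the inverse problem, the iteration and the claimed convergence read (an illustrative summary, with $C$ and $\theta$ unspecified constants):
$$ u_{n+1} = \Phi(u_n), \qquad \|u_n - u^*\| \leq C\,\theta^{n}\,\|u_0 - u^*\|, \qquad \theta \in (0,1), $$
where the norm is Carleman-weighted and the initial term $u_0$ is arbitrary.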
In this article we prove that a class of Goppa codes whose Goppa polynomial is of the form $g(x) = x + x^q + \cdots + x^{q^{m-1}}$, where $m \geq 3$ (i.e., $g(x)$ is a trace polynomial from a field extension of degree $m \geq 3$), has a better minimum distance than the Goppa bound $d \geq 2\deg(g(x)) + 1$ implies. Our improvement is based on finding another Goppa polynomial $h$ such that $C(L, g) = C(M, h)$ but $\deg(h) > \deg(g)$. This stands in marked contrast to trace Goppa codes over quadratic field extensions (i.e., the case $m = 2$), for which the Goppa bound is sharp.
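Spelled out, the mechanism behind the improvement is simply the Goppa bound applied to the better of the two representations of the same code:
$$ C(L, g) = C(M, h), \quad \deg(h) > \deg(g) \;\Longrightarrow\; d \geq 2\deg(h) + 1 > 2\deg(g) + 1. $$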
In this paper we consider a linearized variable-time-step two-step backward differentiation formula (BDF2) scheme for solving nonlinear parabolic equations. The scheme is constructed by using the variable-step BDF2 for the linear term and a Newton-linearized method for the nonlinear term in time, combined with a Galerkin finite element method (FEM) in space. We prove the unconditionally optimal error estimate of the proposed scheme under mild restrictions on the ratio of adjacent time steps, namely $0 < r_k < r_{\max} \approx 4.8645$, and on the maximum time step. The proof involves the discrete orthogonal convolution (DOC) and discrete complementary convolution (DCC) kernels, together with an error splitting approach. In addition, our analysis shows that the first-level solution $u^1$ obtained by BDF1 (i.e., the backward Euler scheme) does not cause a loss of the global second-order accuracy. Numerical examples are provided to demonstrate our theoretical results.
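As a toy stand-in for the full scheme (omitting the spatial FEM discretization), the following Python sketch applies the variable-step BDF2 formula with a Newton-linearized right-hand side to a scalar ODE $u' = f(u)$, with the first level computed by linearized BDF1; the logistic test problem is an illustrative assumption:

```python
import numpy as np

def bdf2_newton(f, df, u0, t):
    """Variable-step BDF2 with Newton linearization of f (toy scalar version)."""
    u = np.empty(len(t)); u[0] = u0
    tau1 = t[1] - t[0]       # first level: Newton-linearized BDF1 (backward Euler)
    u[1] = (u[0]/tau1 + f(u[0]) - df(u[0])*u[0]) / (1/tau1 - df(u[0]))
    for n in range(2, len(t)):
        tau = t[n] - t[n-1]
        r = tau / (t[n-1] - t[n-2])                # adjacent step ratio r_k
        a = (1 + 2*r) / ((1 + r) * tau)            # BDF2 coefficient of u^n
        b = ((1 + r)*u[n-1] - r**2/(1 + r)*u[n-2]) / tau
        us = (1 + r)*u[n-1] - r*u[n-2]             # extrapolated linearization point
        u[n] = (b + f(us) - df(us)*us) / (a - df(us))
    return u

# logistic equation u' = u(1-u) on a random mesh with bounded step ratios
rng = np.random.default_rng(0)
t = np.concatenate([[0.0], np.cumsum(rng.uniform(0.01, 0.04, 200))])
u = bdf2_newton(lambda v: v*(1 - v), lambda v: 1 - 2*v, 0.1, t)
print(np.max(np.abs(u - 1/(1 + 9*np.exp(-t)))))    # small second-order error
```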
We study the problem of erasure correction (node repair) for regenerating codes defined on graphs, wherein the cost of transmitting information to the failed node depends on the graphical distance from this node to the helper vertices of the graph. The information passed to the failed node from the helpers traverses several vertices of the graph, and savings in communication complexity can be attained if the intermediate vertices process the information rather than simply relaying it toward the failed node. We derive simple information-theoretic bounds on the amount of information communicated between the nodes in the course of the repair. Next, we show that Minimum Storage Regenerating (MSR) codes can be modified to perform such intermediate processing, thereby attaining the lower bound on the information exchange on the graph. We also consider node repair when the underlying graph is random, deriving conditions on the parameters that support recovery of the failed node with communication complexity smaller than that required by simple relaying.
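A toy accounting of the savings, under the assumption (as for MSR codes with linear repair) that an intermediate vertex can combine the repair symbols it receives, so every edge of the union of helper-to-failed-node paths carries roughly one symbol of size beta instead of relaying each helper's symbol hop by hop; the graph and helper placement are illustrative:

```python
import networkx as nx

G = nx.path_graph(8)              # toy repair graph: vertices 0..7 on a path
failed, helpers = 0, [3, 5, 7]

paths = [nx.shortest_path(G, h, failed) for h in helpers]
relay_cost = sum(len(p) - 1 for p in paths)          # each symbol relayed hop by hop
tree_edges = {frozenset(e) for p in paths for e in zip(p, p[1:])}
process_cost = len(tree_edges)                       # one combined symbol per tree edge

print(relay_cost, process_cost)   # 15 vs 7 units of beta on this graph
```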
Lattice Boltzmann schemes rely on enlarging the size of the target problem in order to solve PDEs in a highly parallelizable, efficient, kinetic-like fashion split into a collision phase and a stream phase. Despite its well-known computational advantages, this structure is not suited to constructing a rigorous notion of consistency with respect to the target equations or to providing a precise notion of stability. To alleviate these shortcomings and introduce a rigorous framework, we demonstrate that any lattice Boltzmann scheme can be rewritten as a corresponding multi-step Finite Difference scheme on the conserved variables. This is achieved by devising a suitable formalism based on operators, commutative algebra, and polynomials. The notion of consistency of the corresponding Finite Difference scheme then allows one to invoke the Lax-Richtmyer theorem in the case of linear lattice Boltzmann schemes. Moreover, we show that the frequently used von Neumann-like stability analysis for lattice Boltzmann schemes corresponds exactly to the von Neumann stability analysis of their Finite Difference counterparts. More generally, the usual tools for the analysis of Finite Difference schemes are now readily available for the study of lattice Boltzmann schemes. Their relevance is verified by means of numerical illustrations.
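To illustrate the correspondence on the simplest example, the classical D1Q2 scheme for linear advection $u_t + c\,u_x = 0$ can be eliminated onto the conserved variable, yielding the two-step scheme $u^{n+1}_j = (2-\omega)\frac{u^n_{j-1}+u^n_{j+1}}{2} + \omega c\,\frac{u^n_{j-1}-u^n_{j+1}}{2} - (1-\omega)\,u^{n-1}_j$, which reduces to Lax-Friedrichs at $\omega = 1$ and to leapfrog at $\omega = 2$. The Python sketch below (equilibrium initialization and periodic boundaries assumed, parameters illustrative) checks the equivalence numerically:

```python
import numpy as np

N, c, omega, steps = 64, 0.5, 1.7, 60
x = np.arange(N)
u0 = np.exp(-0.05 * (x - N/2)**2)

# D1Q2 lattice Boltzmann: collide (relax q = f+ - f- toward c*u), then stream
fp, fm = (1 + c)/2 * u0, (1 - c)/2 * u0              # equilibrium initialization
lbm = [u0.copy()]
for _ in range(steps):
    u, q = fp + fm, fp - fm
    qs = (1 - omega)*q + omega*c*u                   # collision
    fp, fm = np.roll((u + qs)/2, 1), np.roll((u - qs)/2, -1)   # stream
    lbm.append(fp + fm)

# corresponding two-step finite difference scheme on the conserved variable only
A = lambda v: 0.5*(np.roll(v, 1) + np.roll(v, -1))   # centered average
D = lambda v: 0.5*(np.roll(v, 1) - np.roll(v, -1))   # centered difference
fd = [u0.copy(), A(u0) + c*D(u0)]                    # u^1 from the equilibrium start
for n in range(1, steps):
    fd.append((2 - omega)*A(fd[n]) + omega*c*D(fd[n]) - (1 - omega)*fd[n-1])

print(max(np.max(np.abs(a - b)) for a, b in zip(lbm, fd)))   # ~1e-15
```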
Hamilton and Moitra (2021) showed that, in certain regimes, it is not possible to accelerate Riemannian gradient descent in the hyperbolic plane if we restrict ourselves to algorithms that make queries in a (large) bounded domain and receive gradients and function values corrupted by a (small) amount of noise. We show that acceleration remains unachievable for any deterministic algorithm that receives exact gradient and function-value information (unbounded queries, no noise). Our results hold for the classes of strongly and non-strongly geodesically convex functions, and for a large class of Hadamard manifolds including hyperbolic spaces and the symmetric space $\mathrm{SL}(n) / \mathrm{SO}(n)$ of positive definite $n \times n$ matrices of determinant one. This cements a surprising gap between the complexity of convex optimization and geodesically convex optimization: for hyperbolic spaces, Riemannian gradient descent is optimal on the class of smooth and strongly geodesically convex functions, in the regime where the condition number scales with the radius of the optimization domain. The key idea for proving the lower bound is to perturb the hard functions of Hamilton and Moitra (2021) with sums of bump functions chosen by a resisting oracle.
Graph convolution is the core of most Graph Neural Networks (GNNs) and is usually approximated by message passing between direct (one-hop) neighbors. In this work, we remove the restriction to direct neighbors by introducing a powerful yet spatially localized graph convolution: graph diffusion convolution (GDC). GDC leverages generalized graph diffusion, examples of which are the heat kernel and personalized PageRank. It alleviates the problem of noisy and often arbitrarily defined edges in real graphs. We show that GDC is closely related to spectral-based models and thus combines the strengths of both spatial (message passing) and spectral methods. We demonstrate that replacing message passing with graph diffusion convolution consistently leads to significant performance improvements across a wide range of models on both supervised and unsupervised tasks and a variety of datasets. Furthermore, GDC is not limited to GNNs: it can trivially be combined with any graph-based model or algorithm (e.g., spectral clustering) without requiring any changes to the latter or affecting its computational complexity. Our implementation is available online.
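A dense toy sketch of GDC with the personalized PageRank kernel follows; `alpha` and the sparsification threshold `eps` are illustrative hyperparameters, and real implementations use sparse linear algebra rather than a dense inverse:

```python
import numpy as np

def gdc_ppr(A, alpha=0.15, eps=1e-4):
    """Graph diffusion convolution with the personalized PageRank kernel:
    S = alpha * (I - (1 - alpha) * T)^{-1}, then sparsified and renormalized."""
    A = A + np.eye(len(A))                        # add self-loops
    dinv = 1.0 / np.sqrt(A.sum(axis=1))
    T = dinv[:, None] * A * dinv[None, :]         # symmetric transition matrix
    S = alpha * np.linalg.inv(np.eye(len(A)) - (1 - alpha) * T)
    S[S < eps] = 0.0                              # sparsify: drop small entries
    return S / S.sum(axis=1, keepdims=True)       # row-normalize the new graph

# a GNN layer then uses S in place of the adjacency: H_next = act(S @ H @ W)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(gdc_ppr(A).round(2))
```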