We construct long quantum codes from projective Reed-Muller codes. Projective Reed-Muller codes are evaluation codes obtained by evaluating homogeneous polynomials on the projective space. We obtain asymmetric and symmetric quantum codes by using the CSS construction and the Hermitian construction, respectively. We also provide entanglement-assisted quantum error-correcting codes from projective Reed-Muller codes with flexible amounts of entanglement by considering equivalent codes. Moreover, we construct quantum codes from subfield subcodes of projective Reed-Muller codes.
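For context, a brief recap of the CSS construction invoked here (standard background; these are the generic distance bounds, not the specific parameters obtained in the paper): given nested linear codes $C_2 \subseteq C_1 \subseteq \mathbb{F}_q^n$, one obtains an $[[n, k_1 - k_2]]_q$ quantum code with, up to the usual $X$/$Z$ labeling convention,
\[
d_z \ge \operatorname{wt}\big(C_1 \setminus C_2\big), \qquad d_x \ge \operatorname{wt}\big(C_2^{\perp} \setminus C_1^{\perp}\big),
\]
so that the symmetric case uses $d \ge \min\{d_z, d_x\}$, while asymmetric quantum codes keep the two bounds separate.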
We propose and analyze an $H^2$-conforming Virtual Element Method (VEM) for the simplest linear elliptic PDEs in nondivergence form with Cordes coefficients. The VEM hinges on a hierarchical construction valid for any dimension $d \ge 2$. The analysis relies on the continuous Miranda-Talenti estimate for convex domains $\Omega$ and is rather elementary. We prove stability and error estimates in $H^2(\Omega)$, including the effect of quadrature, under minimal regularity of the data. Numerical experiments illustrate the interplay of coefficient regularity and convergence rates in $H^2(\Omega)$.
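For context, the model problem and the two standard ingredients mentioned above can be stated as follows (standard formulations, with the Dirichlet model problem shown for concreteness; not specific to the VEM construction):
\[
A : D^2 u = \sum_{i,j=1}^{d} a_{ij}\, \partial_{ij} u = f \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega,
\]
\[
\exists\, \varepsilon \in (0,1] : \quad \frac{|A(x)|^2}{\big(\operatorname{tr} A(x)\big)^2} \le \frac{1}{d-1+\varepsilon} \ \text{ for a.e. } x \in \Omega \qquad \text{(Cordes condition)},
\]
\[
|v|_{H^2(\Omega)} \le \|\Delta v\|_{L^2(\Omega)} \ \text{ for all } v \in H^2(\Omega) \cap H^1_0(\Omega), \ \Omega \text{ convex} \qquad \text{(Miranda-Talenti)},
\]
where $|A|^2 = \sum_{i,j} a_{ij}^2$ denotes the squared Frobenius norm.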
Quantum Tanner codes constitute a family of quantum low-density parity-check (LDPC) codes with good parameters, i.e., constant encoding rate and relative distance. In this article, we prove that quantum Tanner codes also facilitate single-shot quantum error correction (QEC) of adversarial noise, where one measurement round (consisting of constant-weight parity checks) suffices to perform reliable QEC even in the presence of measurement errors. We establish this result for both the sequential and parallel decoding algorithms introduced by Leverrier and Z\'emor. Furthermore, we show that in order to suppress errors over multiple repeated rounds of QEC, it suffices to run the parallel decoding algorithm for constant time in each round. Combined with good code parameters, the resulting constant-time overhead of QEC and robustness to (possibly time-correlated) adversarial noise make quantum Tanner codes appealing from the perspective of fault-tolerant quantum protocols.
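Purely as a schematic illustration of the round structure described above (hypothetical callback names; this is not the Leverrier-Z\'emor decoder itself), each QEC round consists of one noisy syndrome measurement followed by a constant number of parallel decoding iterations:

    import numpy as np

    def repeated_single_shot_qec(H, rounds, new_error, noisy_syndrome, parallel_decode,
                                 decode_iters=10):
        # H: binary parity-check matrix; new_error(), noisy_syndrome(H, e) and
        # parallel_decode(H, s, iters) are hypothetical callbacks standing in for the
        # adversarial noise model, the faulty measurement, and a constant-time decoder.
        residual = np.zeros(H.shape[1], dtype=int)
        for _ in range(rounds):
            residual = (residual + new_error()) % 2           # fresh data error this round
            syndrome = noisy_syndrome(H, residual)            # single noisy measurement round
            correction = parallel_decode(H, syndrome, decode_iters)  # O(1) decoding iterations
            residual = (residual + correction) % 2
        return residual   # single-shot decoding keeps this residual bounded across rounds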
The method of fundamental solutions (MFS), also known as the method of auxiliary sources (MAS), is a well-known computational method for the solution of boundary-value problems. The final solution ("MAS solution") is obtained once we have found the amplitudes of $N$ auxiliary "MAS sources." Past studies have demonstrated that it is possible for the MAS solution to converge to the true solution even when the $N$ auxiliary sources diverge and oscillate. The present paper extends the past studies by demonstrating this possibility within the context of Laplace's equation with Neumann boundary conditions. One can thus obtain the correct solution from sources that, when $N$ is large, must be considered unphysical. We carefully explain the underlying reasons for the unphysical results, distinguish them from other difficulties that might concurrently arise, and point out significant differences from the time-dependent problems studied in the past.
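A minimal sketch of the kind of computation involved (an interior Laplace problem on the unit disk with Neumann data, auxiliary monopoles on a larger circle; all radii, counts, and the harmonic test field are illustrative choices, not the paper's setup):

    import numpy as np

    # Collocation points on the unit circle and N auxiliary "MAS sources" on a larger circle.
    M, N, R = 200, 60, 1.8
    t_b = 2 * np.pi * np.arange(M) / M
    t_s = 2 * np.pi * np.arange(N) / N
    xb = np.c_[np.cos(t_b), np.sin(t_b)]        # boundary collocation points
    xs = R * np.c_[np.cos(t_s), np.sin(t_s)]    # auxiliary sources
    nrm = xb.copy()                             # outward unit normals on the unit circle

    # Normal derivative at xb of the 2D fundamental solution -(1/(2*pi)) ln|x - y|.
    diff = xb[:, None, :] - xs[None, :, :]
    r2 = np.sum(diff ** 2, axis=2)
    A = -np.einsum('mnk,mk->mn', diff, nrm) / (2 * np.pi * r2)

    # Neumann data of the harmonic test field u(x, y) = x^2 - y^2 (zero mean, hence compatible).
    g = 2 * xb[:, 0] * nrm[:, 0] - 2 * xb[:, 1] * nrm[:, 1]

    # Least-squares amplitudes; these may grow and oscillate as N increases.
    c, *_ = np.linalg.lstsq(A, g, rcond=None)

    # Evaluate the MAS solution at an interior point; it matches the exact potential
    # up to the additive constant inherent to the Neumann problem.
    x0 = np.array([0.3, 0.2])
    u_mas = np.sum(-c * np.log(np.linalg.norm(x0 - xs, axis=1)) / (2 * np.pi))
    print(u_mas, x0[0] ** 2 - x0[1] ** 2)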
We consider a sharp interface formulation for the multi-phase Mullins-Sekerka flow. The flow is characterized by a network of curves evolving such that the total surface energy of the curves is reduced, while the areas of the enclosed phases are conserved. Making use of a variational formulation, we introduce a fully discrete finite element method. Our discretization features a parametric approximation of the moving interfaces that is independent of the discretization used for the equations in the bulk. The scheme can be shown to be unconditionally stable and to satisfy an exact volume conservation property. Moreover, an inherent tangential velocity for the vertices on the discrete curves leads to asymptotically equidistributed vertices, meaning no remeshing is necessary in practice. Several numerical examples, including a convergence experiment for the three-phase Mullins-Sekerka flow, demonstrate the capabilities of the introduced method.
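For orientation, the classical two-phase Mullins-Sekerka problem reads (up to scaling and sign conventions; the multi-phase case considered here couples a network of such curves through additional conditions at triple junctions):
\[
\Delta \mu = 0 \ \text{ in } \Omega \setminus \Gamma(t), \qquad \mu = \varkappa \ \text{ on } \Gamma(t), \qquad \mathcal{V} = \big[\partial_{\nu} \mu\big] \ \text{ on } \Gamma(t),
\]
where $\mu$ denotes the chemical potential, $\varkappa$ the curvature of the interface $\Gamma(t)$, $\mathcal{V}$ its normal velocity and $[\partial_{\nu}\mu]$ the jump of the normal derivative of $\mu$ across $\Gamma(t)$; the evolution decreases the total length of the interfaces while conserving the enclosed areas.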
Sparse recovery principles play an important role in solving many nonlinear ill-posed inverse problems. We investigate a variational framework with a support oracle for compressed sensing sparse reconstruction, where the available measurements are nonlinear and possibly corrupted by noise. A graph neural network, named Oracle-Net, is proposed to predict the support from the nonlinear measurements and is integrated into a regularized recovery model to enforce sparsity. The resulting nonsmooth optimization problem is then efficiently solved through a constrained proximal gradient method. Error bounds on the approximate solution of the proposed Oracle-based optimization are provided in the context of the ill-posed Electrical Impedance Tomography (EIT) problem. Numerical solutions of the EIT nonlinear inverse reconstruction problem confirm the potential of the proposed method, which improves the reconstruction quality from undersampled measurements under sparsity assumptions.
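A minimal sketch of an oracle-constrained proximal gradient iteration of the kind alluded to above (the support predictor is treated as a black box, and the forward map, step size, and penalty are illustrative stand-ins, not the paper's Oracle-Net model or EIT operator):

    import numpy as np

    def soft_threshold(x, tau):
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def oracle_proximal_gradient(grad_f, x0, support, lam, step, iters=200):
        # Proximal gradient for min_x f(x) + lam*||x||_1 subject to supp(x) in `support`,
        # where grad_f is the gradient of the smooth data-fidelity term and `support`
        # is a boolean mask predicted by a support oracle (e.g. a graph neural network).
        x = x0.copy()
        for _ in range(iters):
            x = x - step * grad_f(x)            # gradient step on the smooth term
            x = soft_threshold(x, step * lam)   # proximal step for the l1 penalty
            x[~support] = 0.0                   # enforce the oracle-predicted support
        return x

    # Toy usage with a linear forward map standing in for the nonlinear measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[[3, 17, 62]] = [1.0, -2.0, 1.5]
    y = A @ x_true
    support = np.abs(x_true) > 0                # an exact oracle, for illustration only
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the quadratic fidelity term
    x_rec = oracle_proximal_gradient(lambda x: A.T @ (A @ x - y),
                                     np.zeros(100), support, lam=1e-2, step=step)
    print(np.linalg.norm(x_rec - x_true))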
When analyzing large datasets, it is common to select a model prior to making inferences. For reliable inferences, it is important to make adjustments that account for the model selection process, resulting in selective inferences. Our paper introduces an asymptotic pivot for inference on the effects of selected variables on conditional quantile functions. Utilizing estimators from smoothed quantile regression, our proposed pivot is easy to compute and ensures asymptotically exact selective inferences without making strict distributional assumptions about the response variable. At the core of the pivot is the use of external randomization, which enables us to utilize the full sample for both selection and inference without the need to partition the data into independent subsets or discard data at either step. On simulated data, we find that: (i) the asymptotic confidence intervals based on our pivot achieve the desired coverage rates, even in cases where sample splitting fails due to insufficient sample size for inference; (ii) across various models and signal settings, our intervals are consistently shorter than those produced by sample splitting. We report similar findings when we apply our approach to study risk factors for low birth weights in a publicly accessible dataset of US birth records from 2022.
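For reference, one standard convolution-smoothed quantile regression estimator, which yields the differentiable loss such pivots rely on, takes the form (a generic formulation; the kernel $K$ and bandwidth $h$ shown here are not those of the paper):
\[
\hat{\beta}_h \in \arg\min_{\beta}\ \frac{1}{n}\sum_{i=1}^{n} \int_{\mathbb{R}} \rho_{\tau}(u)\, K_h\big(u - (y_i - x_i^{\top}\beta)\big)\, \mathrm{d}u, \qquad \rho_{\tau}(u) = u\big(\tau - \mathbf{1}\{u < 0\}\big),
\]
with $K_h(\cdot) = h^{-1}K(\cdot/h)$ for a smooth kernel $K$; the smoothing makes the quantile loss differentiable, which underpins the asymptotically linear expansion used to build the pivot.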
There has been increasing interest in summary-free versions of approximate Bayesian computation (ABC), which replace distances among summaries with discrepancies between the empirical distributions of the observed data and the synthetic samples generated under the proposed parameter values. The success of these solutions has motivated theoretical studies on the limiting properties of the induced posteriors. However, current results (i) are often tailored to a specific discrepancy, (ii) require, either explicitly or implicitly, regularity conditions on the data generating process and the assumed statistical model, and (iii) yield bounds depending on sequences of control functions that are not made explicit. As such, there is a lack of a theoretical framework that (i) is unified, (ii) facilitates the derivation of limiting properties that hold uniformly, and (iii) relies on verifiable assumptions that provide concentration bounds clarifying which factors govern the limiting behavior of the ABC posterior. We address this gap via a novel theoretical framework that introduces the concept of Rademacher complexity in the analysis of the limiting properties of discrepancy-based ABC posteriors. This yields a unified theory that relies on constructive arguments and provides more informative asymptotic results and uniform concentration bounds, even in settings not covered by current studies. These advancements are obtained by relating the properties of summary-free ABC posteriors to the behavior of the Rademacher complexity associated with the chosen discrepancy within the family of integral probability semimetrics. This family extends summary-based ABC and includes the Wasserstein distance and maximum mean discrepancy (MMD), among others. As clarified through a focus on the MMD case and via illustrative simulations, this perspective yields an improved understanding of summary-free ABC.
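As a small, self-contained illustration of what a summary-free ABC posterior looks like in the MMD case (a toy Gaussian location model with a biased V-statistic MMD estimate and generic Gaussian kernel; none of these choices are taken from the paper):

    import numpy as np

    def mmd2(x, y, bandwidth=1.0):
        # Biased (V-statistic) estimate of the squared MMD with a Gaussian kernel.
        k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    def mmd_abc(y_obs, prior_sample, simulate, n_draws=2000, quantile=0.02):
        # Summary-free rejection ABC: keep the parameter draws whose synthetic
        # samples are closest to the observed data in MMD.
        thetas = np.array([prior_sample() for _ in range(n_draws)])
        dists = np.array([mmd2(y_obs, simulate(t)) for t in thetas])
        return thetas[dists <= np.quantile(dists, quantile)]

    # Toy example: Gaussian location model with a diffuse normal prior.
    rng = np.random.default_rng(1)
    y_obs = rng.normal(2.0, 1.0, size=100)
    posterior = mmd_abc(y_obs,
                        prior_sample=lambda: rng.normal(0.0, 5.0),
                        simulate=lambda t: rng.normal(t, 1.0, size=100))
    print(posterior.mean(), posterior.std())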
We apply a generalized piecewise-linear (PL) version of Morse theory due to Grunert-Kuhnel-Rote to define and study new local and global notions of topological complexity for fully-connected feedforward ReLU neural network functions, $F: \mathbb{R}^n \to \mathbb{R}$. Along the way, we show how to construct, for each such $F$, a canonical polytopal complex $K(F)$ and a deformation retract of the domain onto $K(F)$, yielding a convenient compact model for performing calculations. We also give a construction showing that local complexity can be arbitrarily high.
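As a toy illustration of the piecewise-linear structure these complexity notions are built on (not the Grunert-Kuhnel-Rote machinery itself), one can count the distinct ReLU activation patterns, i.e. affine pieces, that a small random network realizes on a sampling grid:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hidden layers of a small fully-connected ReLU network F: R^2 -> R; the final
    # affine output layer does not change the cell decomposition and is omitted.
    W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
    W2, b2 = rng.standard_normal((8, 8)), rng.standard_normal(8)

    def activation_pattern(x):
        # Records which ReLUs fire at x; the pattern is constant on each polyhedral
        # cell of the input space on which F restricts to an affine function.
        h1 = np.maximum(W1 @ x + b1, 0.0)
        h2 = np.maximum(W2 @ h1 + b2, 0.0)
        return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

    # Sample [-3, 3]^2 densely and count the distinct affine pieces that are hit.
    grid = np.linspace(-3.0, 3.0, 300)
    patterns = {activation_pattern(np.array([u, v])) for u in grid for v in grid}
    print(len(patterns), "distinct activation patterns found on the grid")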
An interesting case of the well-known Dataset Shift Problem is the classification of Electroencephalogram (EEG) signals in the context of Brain-Computer Interface (BCI). The non-stationarity of EEG signals can lead to poor generalisation performance in BCI classification systems used across different sessions, even for the same subject. In this paper, we start from the hypothesis that the Dataset Shift problem can be alleviated by exploiting suitable eXplainable Artificial Intelligence (XAI) methods to locate and transform the characteristics of the input that are relevant for classification. In particular, we focus on an experimental analysis of the explanations produced by several XAI methods for an ML system trained on a typical EEG dataset for emotion recognition. Results show that many of the relevant components found by the XAI methods are shared across sessions and can be used to build a system that generalises better. However, the relevant components of the input signal also appear to be highly dependent on the input itself.
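As a small, self-contained sketch of the cross-session comparison described above (the attribution scores are simulated here; in practice they would come from XAI methods applied to the trained classifier, and the feature count is only an example):

    import numpy as np

    def top_k_overlap(attr_a, attr_b, k=50):
        # Jaccard overlap of the k most relevant input components in two sessions,
        # given per-feature attribution scores (e.g. session-averaged saliency maps).
        top_a = set(np.argsort(-np.abs(attr_a))[:k])
        top_b = set(np.argsort(-np.abs(attr_b))[:k])
        return len(top_a & top_b) / len(top_a | top_b)

    # Toy data: two sessions sharing a common set of genuinely relevant components.
    rng = np.random.default_rng(0)
    shared = rng.standard_normal(310)                 # e.g. 62 channels x 5 frequency bands
    session_1 = shared + 0.5 * rng.standard_normal(310)
    session_2 = shared + 0.5 * rng.standard_normal(310)
    print(top_k_overlap(session_1, session_2))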
Galois self-orthogonal (SO) codes are generalizations of Euclidean and Hermitian SO codes. Algebraic geometry (AG) codes are the first known class of linear codes exceeding the Gilbert-Varshamov bound. Both have attracted much attention in recent years for their rich algebraic structures and wide applications. In this paper, we consider them together and study Galois SO AG codes. A criterion for an AG code to be Galois SO is presented. Based on this criterion, we construct several new classes of maximum distance separable (MDS) Galois SO AG codes from projective lines and several new classes of Galois SO AG codes from projective elliptic curves, hyper-elliptic curves and Hermitian curves. In addition, we give an embedding method that allows us to obtain more MDS Galois SO codes from known MDS Galois SO AG codes.
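For completeness, the Galois inner products underlying these self-orthogonality notions are the standard ones: for $q = p^e$ and $0 \le h < e$,
\[
\langle \mathbf{x}, \mathbf{y} \rangle_h = \sum_{i=1}^{n} x_i\, y_i^{p^h}, \qquad \mathbf{x}, \mathbf{y} \in \mathbb{F}_q^n,
\]
and a linear code $C$ is $h$-Galois self-orthogonal if $C \subseteq C^{\perp_h}$; the Euclidean and Hermitian cases are recovered for $h = 0$ and, when $e$ is even, $h = e/2$, respectively.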