The Pauli stabilizer formalism is perhaps the most thoroughly studied means of constructing quantum error-correcting codes: the code is specified by a group of commuting Pauli operators and is ``stabilized'' by them. In this work we show that every quantum error-correcting code, including Pauli stabilizer codes and subsystem codes, has a similar structure, in that the code can be stabilized by commuting ``Paulian'' operators, which share many features with Pauli operators and which form a \textbf{Paulian stabilizer group}. By applying a controlled gate we can measure these Paulian operators to acquire the error syndrome. Examples concerning codeword stabilized codes and bosonic codes are presented; notably, one of the examples has been demonstrated experimentally, and the observable used to detect the error turns out to be Paulian, illustrating the potential utility of this approach. This work thus provides a possible route to implementing error-correcting codes and to finding new ones.
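As a schematic of the syndrome-measurement step described above, the sketch below simulates in plain NumPy how an ancilla together with a controlled gate reads out the eigenvalue of a stabilizer. It uses the ordinary Pauli stabilizer $Z \otimes Z$ on two data qubits as a stand-in; the two-qubit example and all names are illustrative assumptions, not the Paulian constructions of the paper.
\begin{verbatim}
import numpy as np

# Toy simulation of syndrome extraction through a controlled gate: an ancilla
# prepared in |+> controls the stabilizer S on the data register; after a final
# Hadamard on the ancilla, the measurement outcome reveals the eigenvalue of S.
# Shown for the ordinary Pauli stabilizer S = Z (x) Z (an illustrative stand-in).
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
S = np.kron(Z, Z)                                  # stabilizer on two data qubits

def measure_stabilizer(psi, S):
    d = S.shape[0]
    state = np.kron(H @ np.array([1.0, 0.0]), psi)            # ancilla |+> (x) data
    controlled_S = np.block([[np.eye(d), np.zeros((d, d))],
                             [np.zeros((d, d)), S]])           # apply S iff ancilla is |1>
    state = np.kron(H, np.eye(d)) @ (controlled_S @ state)     # Hadamard on the ancilla
    p0 = np.linalg.norm(state[:d]) ** 2                        # P(ancilla reads 0)
    return +1 if p0 > 0.5 else -1                              # syndrome bit

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)             # +1 eigenstate of S
flipped = np.kron(X, I2) @ bell                                # X error on qubit 1
print(measure_stabilizer(bell, S), measure_stabilizer(flipped, S))   # +1 -1
\end{verbatim}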
Engineers are often faced with selecting the most appropriate model, among a set of candidates, for simulating the behavior of an engineered system. Experimental monitoring data can generate significant value by supporting engineers in making such decisions. These data can be leveraged within a Bayesian model updating process, enabling the uncertainty-aware calibration of any candidate model. The model selection task can then be cast as a problem of decision-making under uncertainty, where one seeks the model that yields an optimal balance between the reward associated with model precision, in terms of recovering target Quantities of Interest (QoI), and the cost of each model, in terms of complexity and compute time. In this work, we examine the model selection task by means of Bayesian decision theory, under the premise that models of varying refinement, and thus varying levels of fidelity, are available. In doing so, we offer an exemplary application of this framework to the IMAC-MVUQ Round-Robin Challenge. Numerical investigations show that the outcome of model selection varies depending on the target QoI.
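As a toy illustration of the decision-theoretic selection step, the following sketch scores each calibrated candidate model by a simple additive utility (a reward for QoI precision minus a weighted cost) and selects the maximizer. All model names, samples, costs, and the trade-off weight are hypothetical placeholders, not quantities from the IMAC-MVUQ Round-Robin Challenge.
\begin{verbatim}
import numpy as np

# Schematic expected-utility model selection: each calibrated candidate model is
# scored by a reward for precision on the target QoI minus a weighted cost.
# All names, samples, costs and the trade-off weight are hypothetical placeholders.
rng = np.random.default_rng(0)
qoi_target = 1.0

candidates = {
    # posterior-predictive samples of the QoI and a normalized cost per model
    "low_fidelity":  {"qoi_samples": rng.normal(1.10, 0.30, 2000), "cost": 0.1},
    "mid_fidelity":  {"qoi_samples": rng.normal(1.03, 0.15, 2000), "cost": 0.4},
    "high_fidelity": {"qoi_samples": rng.normal(1.01, 0.05, 2000), "cost": 1.0},
}

def expected_utility(qoi_samples, cost, trade_off=0.5):
    reward = -np.mean((qoi_samples - qoi_target) ** 2)   # negative expected squared error
    return reward - trade_off * cost                     # simple additive utility

best = max(candidates, key=lambda name: expected_utility(**candidates[name]))
print(best, {name: round(expected_utility(**candidates[name]), 4) for name in candidates})
\end{verbatim}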
The supersingular Endomorphism Ring problem is the following: given a supersingular elliptic curve, compute all of its endomorphisms. The presumed hardness of this problem is foundational for isogeny-based cryptography. The One Endomorphism problem only asks to find a single non-scalar endomorphism. We prove that these two problems are equivalent, under probabilistic polynomial time reductions. We prove a number of consequences. First, assuming the hardness of the endomorphism ring problem, the Charles--Goren--Lauter hash function is collision resistant, and the SQIsign identification protocol is sound. Second, the endomorphism ring problem is equivalent to the problem of computing arbitrary isogenies between supersingular elliptic curves, a result previously known only for isogenies of smooth degree. Third, there exists an unconditional probabilistic algorithm to solve the endomorphism ring problem in time $\tilde{O}(\sqrt{p})$, a result that previously required assuming the generalized Riemann hypothesis. To prove our main result, we introduce a flexible framework for the study of isogeny graphs with additional information. We prove a general and easy-to-use rapid mixing theorem. The proof of this result goes via an augmented Deuring correspondence and the Jacquet--Langlands correspondence.
We deal with McKean-Vlasov and Boltzmann type jump equations. This means that the coefficients of the stochastic equation depend on the law of the solution, and the equation is driven by a Poisson point measure whose intensity measure also depends on the law of the solution. In [3], Alfonsi and Bally proved that, under suitable conditions, the solution $X_t$ of such an equation exists and is unique, and that $X_t$ is the probabilistic interpretation of an analytical weak equation. Moreover, the Euler scheme $X_t^{\mathcal{P}}$ of this equation converges to $X_t$ in Wasserstein distance. In this paper, under more restrictive assumptions, we show that the Euler scheme $X_t^{\mathcal{P}}$ converges to $X_t$ in total variation distance and that $X_t$ has a smooth density (which is a function solving the analytical weak equation). Moreover, in view of simulation, we use a truncated Euler scheme $X^{\mathcal{P},M}_t$ which has a finite number of jumps in any compact interval. We prove that $X^{\mathcal{P},M}_{t}$ also converges to $X_t$ in total variation distance. Finally, we give an algorithm, based on a particle system associated with $X^{\mathcal{P},M}_t$, to approximate the density of the law of $X_t$. Complete estimates of the error are obtained.
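To make the simulation side concrete, here is a minimal particle-system Euler scheme for a toy one-dimensional McKean-Vlasov jump equation in which the drift and jump amplitude depend on the law only through the empirical mean. The dynamics, coefficients, and parameters are illustrative assumptions and do not reproduce the scheme $X^{\mathcal{P},M}_t$ or the conditions of the paper.
\begin{verbatim}
import numpy as np

# Toy particle-system Euler scheme for a one-dimensional McKean-Vlasov jump
# equation in which the drift and the jump amplitude depend on the law of the
# solution only through its (empirical) mean; all coefficients are illustrative.
rng = np.random.default_rng(0)

N, T, h, lam = 5000, 1.0, 0.01, 2.0        # particles, horizon, step, jump intensity
steps = int(T / h)

def b(x, m):                 # drift depending on the state and on the mean of the law
    return -(x - m)

def c(x, m, z):              # jump amplitude depending on state, mean and a mark z
    return 0.1 * z * (1.0 + np.abs(x - m))

X = rng.normal(0.0, 1.0, size=N)            # i.i.d. sample from the initial law

for _ in range(steps):
    m = X.mean()                            # empirical substitute for E[X_t]
    X = X + b(X, m) * h                     # Euler drift step
    n_jumps = rng.poisson(lam * h, size=N)  # number of jumps of each particle on the step
    z = rng.normal(0.0, 1.0, size=N)        # jump marks (a truncation level M would cap them)
    X = X + n_jumps * c(X, m, z)            # add the jump contribution

# A (kernel-smoothed) histogram of the particles approximates the density of X_T.
density, edges = np.histogram(X, bins=60, density=True)
print("empirical mean / variance at time T:", X.mean(), X.var())
\end{verbatim}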
As models grow larger and more complex, achieving better off-sample generalization with minimal trial and error is critical to the reliability and economy of machine learning workflows. As a proxy for the well-studied heuristic of seeking "flat" local minima, gradient regularization is a natural avenue, and first-order approximations such as Flooding and sharpness-aware minimization (SAM) have received significant attention. However, their performance depends critically on hyperparameters (the flood threshold and the neighborhood radius, respectively) that are non-trivial to specify in advance. To develop a procedure that is more resilient to misspecified hyperparameters, and taking the hard-threshold "ascent-descent" switching device used in Flooding as motivation, we propose a softened, pointwise mechanism called SoftAD that downweights points on the borderline, limits the effect of outliers, and retains the ascent-descent effect. We contrast formal stationarity guarantees with those for Flooding, and empirically demonstrate that SoftAD can achieve classification accuracy competitive with SAM and Flooding while maintaining a much smaller loss generalization gap and model norm. Our empirical tests range from simple binary classification on the plane to image classification with neural networks having millions of parameters; the key trends are observed across all datasets and models studied, and suggest a potential new approach to implicit regularization.
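For orientation, the sketch below shows the hard-threshold Flooding objective together with one hypothetical way of softening it pointwise, in the spirit of what is described above; the softened function and its parameters are illustrative guesses, not the paper's definition of SoftAD.
\begin{verbatim}
import torch

def flooding_loss(avg_loss, b):
    # Flooding: hard ascent-descent switch around the flood level b
    # (descend while the average loss exceeds b, ascend once it drops below b).
    return (avg_loss - b).abs() + b

def soft_ad_loss(per_example_loss, theta, sigma=1.0):
    # Hypothetical pointwise softening in the spirit of SoftAD (illustrative only):
    # a smooth, bounded-slope surrogate of |.| is applied to each example's loss
    # around a threshold theta, so borderline points are downweighted, outliers
    # have bounded influence, and points below theta still push the loss upward.
    x = (per_example_loss - theta) / sigma
    rho = torch.sqrt(x ** 2 + 1.0) - 1.0        # smooth absolute value, slope in (-1, 1)
    return sigma * rho.mean() + theta

losses = torch.tensor([0.02, 0.10, 0.90, 3.00])   # made-up per-example losses
print(flooding_loss(losses.mean(), b=0.3), soft_ad_loss(losses, theta=0.3))
\end{verbatim}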
We introduce time-ordered multibody interactions to describe complex systems manifesting temporal as well as multibody dependencies. First, we show how the dynamics of multivariate Markov chains can be decomposed into ensembles of time-ordered multibody interactions. Then, we present an algorithm to extract such interactions from data that capture the system-level dynamics of node states, together with a measure characterizing the complexity of interaction ensembles. Finally, we experimentally validate the robustness of our algorithm against statistical errors and its efficiency at inferring parsimonious interaction ensembles.
Partial Least Squares (PLS) regression emerged as an alternative to ordinary least squares for addressing multicollinearity in a wide range of scientific applications. As multidimensional tensor data become more widespread, tensor adaptations of PLS have been developed. Our investigations reveal that the previously established asymptotic result for the PLS estimator with a tensor response breaks down as the tensor dimensions and the number of features increase relative to the sample size. To address this, we propose Sparse Higher Order Partial Least Squares (SHOPS) regression and an accompanying algorithm. SHOPS simultaneously accommodates variable selection, dimension reduction, and tensor association denoising. We establish the asymptotic accuracy of the SHOPS algorithm under a high-dimensional regime and verify these results through comprehensive simulation experiments and applications to two contemporary high-dimensional biological data analyses.
I propose an alternative algorithm to compute the Maximin Support (MMS) voting rule. Instead of using linear programming, the new algorithm computes the maximin support value of a committee by solving a sequence of maximum-flow problems.
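The following sketch is one plausible reconstruction of such an approach (not necessarily the author's exact construction): the feasibility of a support level $t$ is checked by a single maximum-flow computation on a source-voters-committee-sink network, and the maximin support value is located by binary search over $t$, i.e. by a sequence of maximum-flow problems.
\begin{verbatim}
import networkx as nx

def min_support_at_least(approvals, committee, t):
    # Feasibility check via one maximum-flow problem: can each committee member
    # receive support at least t when every voter splits one unit of vote among
    # the committee members they approve?  (Illustrative reconstruction.)
    G = nx.DiGraph()
    for v, ballot in enumerate(approvals):
        G.add_edge("source", ("voter", v), capacity=1.0)
        for c in ballot:
            if c in committee:
                G.add_edge(("voter", v), ("cand", c), capacity=1.0)
    for c in committee:
        G.add_edge(("cand", c), "sink", capacity=t)
    flow = nx.maximum_flow_value(G, "source", "sink")
    return flow >= t * len(committee) - 1e-9

def maximin_support(approvals, committee, iters=50):
    # Binary search on the support level; each step costs one max-flow computation.
    lo, hi = 0.0, float(len(approvals))
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if min_support_at_least(approvals, committee, mid):
            lo = mid
        else:
            hi = mid
    return lo

# 4 approval ballots, committee {0, 1}: the maximin support value is 2.
print(maximin_support([[0], [0, 1], [1], [0, 1]], {0, 1}))
\end{verbatim}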
Linear logic has provided new perspectives on proof theory, denotational semantics and the study of programming languages. One of its main successes is proof-nets, canonical representations of proofs that lie at the intersection of logic and graph theory. In the case of the minimalist proof system of multiplicative linear logic without units (MLL), these two aspects are completely fused: proof-nets for this system are graphs satisfying a correctness criterion that can be fully expressed in the language of graphs. For more expressive logical systems (containing logical constants, quantifiers and exponential modalities), this is not entirely the case. The purely graphical approach of proof-nets deprives them of any sequential structure able to represent the order in which arguments are presented, an order that is necessary for these extensions. Rebuilding this order of presentation - sequentializing the graph - is thus a requirement for a graph to be logical. The presentation and study of the artifacts ensuring that sequentialization can be done, such as boxes or jumps, are an integral part of research on linear logic. Jumps, extensively studied by Faggian and di Giamberardino, can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. We propose to analyze the logical strength of jumps by internalizing them in an extension of MLL where axioms on a specific formula, the jumping formula, introduce constraints on the possible sequentializations. The jumping formula needs to be treated non-linearly, which we do either axiomatically or by embedding it in a very controlled fragment of multiplicative-exponential linear logic, uncovering the exponential logic of sequentialization.
In order to give quantitative estimates for approximating the ergodic limit, we investigate the probabilistic limit behaviors of time-averaging estimators of numerical discretizations for a class of time-homogeneous Markov processes, by studying the corresponding strong law of large numbers and central limit theorem. Verifiable general sufficient conditions are proposed to ensure these limit behaviors; they are related to the strong mixing and strong convergence properties of numerical discretizations of Markov processes. Our results hold for test functionals of lower regularity than required by existing results, and the analysis does not require the existence of the Poisson equation associated with the underlying Markov process. Notably, our results are applicable to numerical discretizations of a large class of stochastic systems, including stochastic ordinary differential equations, infinite-dimensional stochastic evolution equations, and stochastic functional differential equations.
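As a minimal illustration of the objects involved, the sketch below forms the time-averaging estimator along an Euler-Maruyama discretization of an ergodic Ornstein-Uhlenbeck process and compares it with the known ergodic limit; the model, test functional, and step size are chosen for illustration only and are not tied to the paper's assumptions.
\begin{verbatim}
import numpy as np

# Time-averaging estimator along an Euler-Maruyama discretization of the ergodic
# Ornstein-Uhlenbeck SDE dX = -X dt + dW, whose invariant law is N(0, 1/2);
# the model, test functional, and step size are chosen purely for illustration.
rng = np.random.default_rng(1)

h, n_steps, burn_in = 1e-2, 200_000, 10_000
f = lambda x: x ** 2                       # test functional; its ergodic limit is 1/2

x, acc = 0.0, 0.0
for k in range(n_steps):
    x = x - x * h + np.sqrt(h) * rng.standard_normal()   # one Euler-Maruyama step
    if k >= burn_in:
        acc += f(x)

time_average = acc / (n_steps - burn_in)
print(time_average)                        # should be close to the ergodic limit 0.5
\end{verbatim}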
Neural ordinary differential equations (neural ODEs) are a popular family of continuous-depth deep learning models. In this work, we consider a large family of parameterized ODEs with continuous-in-time parameters, which include time-dependent neural ODEs. We derive a generalization bound for this class by a Lipschitz-based argument. By leveraging the analogy between neural ODEs and deep residual networks, our approach yields in particular a generalization bound for a class of deep residual networks. The bound involves the magnitude of the difference between successive weight matrices. We illustrate numerically how this quantity affects the generalization capability of neural networks.
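The quantity in question can be computed directly from a network's parameters. The sketch below does so for a toy residual network with identically shaped linear layers; the architecture, the $1/\text{depth}$ scaling, and the choice of spectral norm are illustrative assumptions rather than the exact setting of the bound.
\begin{verbatim}
import torch
import torch.nn as nn

# Toy residual network with identically shaped layers, and the quantity entering
# the bound: the total magnitude of differences between successive weight matrices.
# The architecture, 1/depth scaling, and choice of norm are illustrative only.
class ToyResNet(nn.Module):
    def __init__(self, dim=32, depth=20):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(dim, dim, bias=False) for _ in range(depth)])

    def forward(self, x):
        for layer in self.layers:
            x = x + torch.relu(layer(x)) / len(self.layers)   # residual update
        return x

def successive_weight_variation(model):
    # sum_k || W_{k+1} - W_k ||, here with the spectral norm as one possible choice
    W = [layer.weight.detach() for layer in model.layers]
    return sum(torch.linalg.matrix_norm(W[k + 1] - W[k], ord=2)
               for k in range(len(W) - 1))

print(successive_weight_variation(ToyResNet()))
\end{verbatim}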