Universal fault-tolerant quantum computers will require efficient protocols to implement the encoded operations needed to execute algorithms. In this work, we show how satisfiability modulo theories (SMT) solvers can be used to automate the construction of Clifford circuits with certain fault-tolerance properties, and we apply our techniques to a fault-tolerant magic state preparation protocol. Part of the protocol requires converting magic states encoded in the color code to magic states encoded in the surface code via teleportation. Since this teleportation step involves decoding a color code merged with a surface code, we develop a new decoding algorithm applicable to such codes.
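To make the SMT-synthesis idea concrete, here is a minimal, hedged sketch (in Python with Z3; not the paper's actual construction) that searches for a fixed-depth CNOT circuit realizing a target invertible GF(2) matrix. The depth, qubit count, and target matrix are illustrative assumptions:

```python
# A minimal sketch, not the paper's construction: SMT-based circuit
# synthesis with Z3, searching for a fixed-depth CNOT circuit that
# realizes a target invertible GF(2) matrix.
from itertools import combinations, permutations
from z3 import And, Bool, BoolVal, Not, Or, Solver, Xor, is_true, sat

N, DEPTH = 3, 3
target = [[1, 1, 0],  # toy linear reversible map to synthesize (assumption)
          [0, 1, 0],
          [0, 0, 1]]

s = Solver()
# state[i][j]: Z3 formula for entry (i, j) of the map built so far (identity).
state = [[BoolVal(i == j) for j in range(N)] for i in range(N)]
picks = []  # per-step gate-choice variables, kept for model extraction

for step in range(DEPTH):
    pick = {(c, t): Bool(f"g{step}_{c}_{t}")
            for c, t in permutations(range(N), 2)}
    picks.append(pick)
    s.add(Or(*pick.values()))                    # at least one gate per step
    for p, q in combinations(pick.values(), 2):  # at most one gate per step
        s.add(Not(And(p, q)))
    # guarded GF(2) row update: CNOT(c, t) means row t ^= row c
    new_state = [row[:] for row in state]
    for (c, t), p in pick.items():
        for j in range(N):
            new_state[t][j] = Xor(new_state[t][j], And(p, state[c][j]))
    state = new_state

for i in range(N):
    for j in range(N):
        s.add(state[i][j] == bool(target[i][j]))

if s.check() == sat:
    m = s.model()
    circuit = [g for pick in picks for g, v in pick.items() if is_true(m[v])]
    print("CNOT sequence (control, target):", circuit)
```

In the same spirit, fault-tolerance requirements can be imposed as additional constraints on the symbolic circuit, and the solver either produces a witness circuit or proves none exists at the given depth.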
Workflow nets are a well-established mathematical formalism for the analysis of business processes arising from either modeling tools or process mining. The central decision problems for workflow nets are $k$-soundness, generalised soundness and structural soundness. Most existing tools focus on $k$-soundness. In this work, we propose novel scalable semi-procedures for generalised and structural soundness. This is achieved via integral and continuous Petri net reachability relaxations. We show that our approach is competitive against state-of-the-art tools.
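As a concrete illustration of reachability relaxations (a sketch, not the paper's semi-procedures): the classical state equation $m_f = m_0 + C x$, $x \ge 0$, is a necessary condition for reachability and can be checked over the integers or the reals (the continuous relaxation); if even the continuous relaxation is unsatisfiable, the target marking is unreachable. The net below is a toy assumption:

```python
# Hedged sketch: the classical state-equation relaxation for Petri net
# reachability, checked with Z3 over the integers or the reals.
from z3 import Ints, Reals, Solver, sat

# Incidence matrix C (places x transitions) of a tiny chain net (assumption):
# t0 moves a token p0 -> p1, t1 moves a token p1 -> p2.
C = [[-1,  0],
     [ 1, -1],
     [ 0,  1]]
m0 = [1, 0, 0]   # initial marking
mf = [0, 0, 1]   # target marking

def state_equation_sat(integral: bool) -> bool:
    """Does m0 + C.x = mf admit a solution with x >= 0 (Int or Real)?"""
    s = Solver()
    mk = Ints if integral else Reals
    xs = mk(" ".join(f"x{j}" for j in range(len(C[0]))))
    for x in xs:
        s.add(x >= 0)
    for i, row in enumerate(C):
        s.add(m0[i] + sum(r * x for r, x in zip(row, xs)) == mf[i])
    return s.check() == sat

# If the continuous relaxation already fails, mf is certainly unreachable.
print(state_equation_sat(integral=False), state_equation_sat(integral=True))
```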
In this paper we introduce CorticalFlow, a new geometric deep-learning model that, given a 3-dimensional image, learns to deform a reference template towards a targeted object. To preserve the template mesh's topological properties, we train our model over a set of diffeomorphic transformations. This new implementation of a flow Ordinary Differential Equation (ODE) framework benefits from a small GPU memory footprint, allowing the generation of surfaces with several hundred thousand vertices. To reduce topological errors introduced by its discrete resolution, we derive numerical conditions which improve the manifoldness of the predicted triangle mesh. To exhibit the utility of CorticalFlow, we demonstrate its performance on the challenging task of brain cortical surface reconstruction. In contrast to the current state-of-the-art, CorticalFlow produces superior surfaces while reducing the computation time from nine and a half minutes to one second. More significantly, CorticalFlow enforces the generation of anatomically plausible surfaces; their absence has been a major impediment to the clinical relevance of such surface reconstruction methods.
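A minimal sketch of the flow-ODE mechanism (forward-Euler discretization and a toy analytic velocity field; not CorticalFlow's trained network): template vertices are advected through a velocity field, and keeping the step size small relative to the field's smoothness is what heuristically keeps the composed map close to a diffeomorphism:

```python
# Hedged sketch of deforming a template by integrating a flow ODE.
import numpy as np

def flow_vertices(verts, velocity, n_steps=32, T=1.0):
    """Integrate dx/dt = velocity(x) from t=0 to t=T with forward Euler.

    Small steps relative to the field's Lipschitz constant keep each
    Euler step invertible, so the composed map stays near-diffeomorphic.
    """
    h = T / n_steps
    x = verts.copy()
    for _ in range(n_steps):
        x = x + h * velocity(x)
    return x

# Toy smooth velocity field (assumption): a rotation about the z-axis.
vel = lambda x: np.stack([-x[:, 1], x[:, 0], np.zeros(len(x))], axis=1)
template = np.random.rand(100, 3).astype(np.float32)  # stand-in template mesh
deformed = flow_vertices(template, vel)
```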
Given a family of squares in the plane, their $packing \ problem$ asks for the maximum number, $\nu$, of pairwise disjoint squares among them, while their $hitting \ problem$ asks for the minimum number, $\tau$, of points hitting all of them; clearly $\tau \ge \nu$. Both problems are NP-hard even if all the squares are axis-parallel unit squares. The main result of this work is the first bounds on the $\tau / \nu$ ratio for squares that are not necessarily axis-parallel. We establish an upper bound of $6$ for unit squares and $10$ for squares of varying sizes; the worst ratios we can realize with examples are $3$ and $4$, respectively. For comparison, in the axis-parallel case, the supremum of the considered ratio lies in the interval $[\frac{3}{2},2]$ for unit squares and $[\frac{3}{2},4]$ for arbitrary squares. The new bounds rely on a mixture of novel and classical techniques that may find further use. Furthermore, we study rectangles with a bounded $aspect \ ratio$, i.e., the length of the longer side divided by that of the shorter side. We improve on the best known $\tau/\nu$ bound, which is quadratic in the aspect ratio: we reduce it from quadratic to linear for rectangles, even if they are not axis-parallel, and from linear to logarithmic for axis-parallel rectangles. Finally, we prove similar bounds for the chromatic numbers of squares and rectangles with a bounded aspect ratio.
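To illustrate the two quantities, here is a hedged brute-force computation of $\nu$ and $\tau$ for a few axis-parallel unit squares (toy input; the candidate piercing points use the standard observation that a piercing point can be slid up and to the right onto a top-right corner grid without losing any square it hits):

```python
# Hedged toy illustration: brute-force nu (max pairwise-disjoint squares)
# and tau (min piercing set) for axis-parallel unit squares given by
# their lower-left corners. Input is an assumption.
from itertools import combinations

squares = [(0.0, 0.0), (0.6, 0.2), (1.4, 0.1), (0.3, 0.9)]

def disjoint(a, b):  # squares treated as open for disjointness
    return abs(a[0] - b[0]) >= 1 or abs(a[1] - b[1]) >= 1

def hits(p, sq):
    return sq[0] <= p[0] <= sq[0] + 1 and sq[1] <= p[1] <= sq[1] + 1

nu = max(k for k in range(1, len(squares) + 1)
         for sub in combinations(squares, k)
         if all(disjoint(a, b) for a, b in combinations(sub, 2)))

cands = [(sx + 1, sy + 1) for sx, _ in squares for _, sy in squares]
tau = min(k for k in range(1, len(cands) + 1)
          for pts in combinations(cands, k)
          if all(any(hits(p, sq) for p in pts) for sq in squares))

print(nu, tau)  # here nu = tau = 2; in general tau >= nu
```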
Linear error-correcting codes can be used to construct secret sharing schemes; however, determining the access structures of these schemes in general and, in particular, finding efficient access structures is difficult. Here we investigate the properties of certain algebraic hypersurfaces over finite fields whose intersection numbers with any hyperplane take only a few values; these varieties give rise to $q$-divisible linear codes with at most $5$ weights. Furthermore, for $q$ odd these codes turn out to be minimal, and we characterize the access structures of the secret sharing schemes based on their dual codes. Indeed, the secret sharing schemes thus obtained are democratic, that is, each participant belongs to the same number of minimal access sets, and these sets can easily be described.
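As a small, hedged illustration of the code-theoretic side (a toy binary code, not the hypersurface construction): one can enumerate the weights of a linear code from its generator matrix and test the Ashikhmin-Barg sufficient condition for minimality, $w_{\min}/w_{\max} > (q-1)/q$:

```python
# Hedged sketch: enumerate a small binary code's nonzero weights and test
# the Ashikhmin-Barg sufficient condition for minimality.
from itertools import product

G = [[1, 0, 0, 1, 1, 0],   # toy [6,3] generator matrix over GF(2) (assumption)
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1]]

def codewords(G):
    k = len(G)
    for msg in product((0, 1), repeat=k):
        yield tuple(sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G))

weights = sorted({sum(c) for c in codewords(G)} - {0})
w_min, w_max, q = weights[0], weights[-1], 2
print("nonzero weights:", weights)                      # a two-weight code
print("Ashikhmin-Barg minimal:", w_min / w_max > (q - 1) / q)
```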
Networks are hard to configure correctly, and misconfigurations occur frequently, leading to outages or security breaches. Formal verification techniques have been applied to guarantee the correctness of network configurations, thereby improving network reliability. This work addresses the verification of distributed network control planes and makes two distinct contributions to improve the scalability of formal verification. Our first contribution is a hierarchy of abstractions of varying precision that introduce nondeterminism into the procedure by which routers select the best available route. We prove the soundness of these abstractions and show their benefits. Our second contribution is a novel SMT encoding which uses symbolic graphs to encode all possible stable routing trees that are compliant with the given network control plane configurations. We have implemented our abstractions and SMT encodings in a prototype tool called ACORN. Our evaluations show that the abstractions can provide significant relative speedups (up to 323x) in performance, and that ACORN can scale up to $\approx37,000$ routers (organized in FatTree topologies, with synthesized shortest-path routing and valley-free policies) when verifying reachability. This far exceeds the performance of existing tools for control plane verification.
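To give a flavor of such encodings, here is a hedged sketch (not ACORN's actual symbolic-graph encoding) that expresses stable shortest-path routing as SMT constraints over per-router distances on a toy topology; all names are assumptions:

```python
# Hedged sketch: stable shortest-path routing as SMT constraints.
from z3 import And, Int, Or, Solver, sat

nodes = {"a", "b", "c"}
edges = {("a", "b"), ("b", "c"), ("a", "c")}  # toy triangle topology
dst = "c"

def neighbors(v):
    return [u for u in nodes if (v, u) in edges or (u, v) in edges]

s = Solver()
dist = {v: Int(f"dist_{v}") for v in nodes}
s.add(dist[dst] == 0)
for v in nodes - {dst}:
    nbrs = neighbors(v)
    s.add(dist[v] >= 0)
    # stability: v adopts some neighbor's route...
    s.add(Or(*[dist[v] == dist[u] + 1 for u in nbrs]))
    # ...and no neighbor offers a strictly better one
    s.add(And(*[dist[v] <= dist[u] + 1 for u in nbrs]))

if s.check() == sat:
    m = s.model()
    print({v: m[d] for v, d in dist.items()})  # one stable routing solution
```

Reachability properties can then be posed as queries over all solutions of such constraints, which is where symbolic encodings of entire families of stable routing trees pay off.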
Topological Data Analysis is a growing area of data science which aims at computing and characterizing the geometry and topology of data sets, in order to produce useful descriptors for subsequent statistical and machine learning tasks. Its main computational tool is persistent homology, which amounts to tracking the topological changes in growing families of subsets of the data set, called filtrations, and encoding them in an algebraic object called a persistence module. Even though algorithms and theoretical properties of modules are now well understood in the single-parameter case, that is, when there is only one filtration to study, much less is known in the multi-parameter case, where several filtrations are given at once. Though more complicated, the resulting persistence modules are usually richer and encode more information, making them better descriptors for data science. In this article, we present the first approximation scheme for computing and decomposing general multi-parameter persistence modules; it is based on fibered barcodes and exact matchings, two constructions that stem from the theory of single-parameter persistence. Our algorithm has controlled complexity and running time, and works in arbitrary dimension, i.e., with an arbitrary number of filtrations. Moreover, when restricting to specific classes of multi-parameter persistence modules, namely those that can be decomposed into intervals, we establish theoretical bounds on the approximation error between our estimate and the true module in terms of the interleaving distance. Finally, we present empirical evidence validating the output quality and speed-up of our method on several data sets.
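A hedged sketch of the fibered-barcode idea (using the gudhi library, assumed installed; the toy bifiltration is an assumption): restricting a 2-parameter filtration to a line, here the diagonal, yields an ordinary 1-parameter filtration whose barcode is one fiber of the module:

```python
# Hedged sketch: one fiber of a bipersistence module, via restriction of
# a toy 2-parameter filtration to the diagonal line {(t, t)}.
import gudhi

# Each simplex carries a bigrade (f1, f2); along the diagonal it enters
# the restricted filtration at t = max(f1, f2).
bifiltration = {
    (0,): (0.0, 0.0), (1,): (0.1, 0.3), (2,): (0.2, 0.1),
    (0, 1): (0.5, 0.4), (1, 2): (0.6, 0.7), (0, 2): (0.9, 0.8),
}

st = gudhi.SimplexTree()
for simplex, (f1, f2) in bifiltration.items():
    st.insert(list(simplex), filtration=max(f1, f2))

# Barcode of the restriction: one fibered barcode of the bipersistence module.
print(st.persistence())
```

Sweeping such lines across the parameter space and matching the resulting bars is, roughly, the raw material the approximation scheme works with.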
Solving the time-dependent Schr\"odinger equation is an important application area for quantum algorithms. We consider Schr\"odinger's equation in the semi-classical regime. Here the solutions exhibit strong multiple-scale behavior due to a small parameter $\hbar$, in the sense that the dynamics of the quantum states and the induced observables can occur on different spatial and temporal scales. Such a Schr\"odinger equation finds many applications, including in Born-Oppenheimer molecular dynamics and Ehrenfest dynamics. This paper considers quantum analogues of pseudo-spectral (PS) methods on classical computers. Estimates of the gate counts in terms of $\hbar$ and the precision $\varepsilon$ are obtained. It is found that the number of required qubits, $m$, scales only logarithmically with respect to $\hbar$. When the solution has bounded derivatives up to order $\ell$, the symmetric Trotter method has gate complexity $\mathcal{O}\Big({ (\varepsilon \hbar)^{-\frac12} \mathrm{polylog}(\varepsilon^{-\frac{3}{2\ell}} \hbar^{-1-\frac{1}{2\ell}})}\Big),$ provided that the diagonal unitary operators in the pseudo-spectral methods can be implemented with $\mathrm{poly}(m)$ operations. When physical observables are the desired outcomes, however, the step size in the time integration can be chosen independently of $\hbar$. The gate complexity in this case is reduced to $\mathcal{O}\Big({\varepsilon^{-\frac12} \mathrm{polylog}( \varepsilon^{-\frac3{2\ell}} \hbar^{-1} )}\Big),$ with $\ell$ again indicating the smoothness of the solution.
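For orientation, here is a hedged classical sketch of the pseudo-spectral scheme that such quantum algorithms emulate: symmetric (Strang) splitting in which the potential term is diagonal in position space and the kinetic term is diagonal in Fourier space. The grid, potential, and $\hbar$ value are toy assumptions:

```python
# Hedged classical sketch: split-operator pseudo-spectral scheme for the
# semiclassical Schrodinger equation i*hbar*psi_t = -(hbar^2/2) psi_xx + V psi.
import numpy as np

hbar, L, n, dt, steps = 0.1, 2 * np.pi, 256, 1e-3, 100
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)          # angular wavenumbers
V = 1 - np.cos(x)                                   # toy potential (assumption)
psi = np.exp(-((x - np.pi) ** 2) / (2 * hbar))      # semiclassical wave packet
psi = psi / np.linalg.norm(psi)

half_V = np.exp(-0.5j * dt * V / hbar)              # diagonal in position space
kin = np.exp(-0.5j * dt * hbar * k ** 2)            # diagonal in Fourier space
for _ in range(steps):                              # symmetric (Strang) splitting
    psi = half_V * np.fft.ifft(kin * np.fft.fft(half_V * psi))
print("norm preserved:", round(float(np.linalg.norm(psi)), 6))
```

On a quantum computer, the FFTs become quantum Fourier transforms and the two diagonal factors become diagonal unitaries, which is where the $\mathrm{poly}(m)$ implementability assumption enters.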
This paper introduces Eilmer, a general-purpose open-source compressible flow solver developed at the University of Queensland, designed to support research calculations in hypersonics and high-speed aerothermodynamics. Eilmer has a broad userbase in several university research groups and a wide range of capabilities, which are documented on the project's website, in the accompanying reference manuals, and in an extensive catalogue of example simulations. The first part of this paper describes the formulation of the code: the equations, physical models, and numerical methods that are used in a basic fluid dynamics simulation, as well as a handful of optional multi-physics models that are commonly added for calculations of hypersonic flow. The second section describes the processes used to develop and maintain the code, documenting our adherence to good programming practice and endorsing certain techniques that we have found particularly helpful for scientific codes. The final section presents a half-dozen example simulations that span the range of Eilmer's capabilities, each consisting of sample results and a short explanation of the problem being solved, which together should assist new users in applying Eilmer to their own research projects.
A striking observation about iterative magnitude pruning (IMP; Frankle et al. 2020) is that, after just a few hundred steps of dense training, the method can find a sparse sub-network that can be trained to the same accuracy as the dense network. However, the same does not hold at step 0, i.e., at random initialization. In this work, we seek to understand how this early phase of pre-training leads to a good initialization for IMP, both through the lens of the data distribution and through the loss landscape geometry. Empirically, we observe that, holding the number of pre-training iterations constant, training on a small fraction of (randomly chosen) data suffices to obtain an equally good initialization for IMP. We additionally observe that by pre-training only on "easy" training data, we can decrease the number of steps necessary to find a good initialization for IMP, compared to training on the full dataset or a randomly chosen subset. Finally, we identify novel properties of the loss landscape of dense networks that are predictive of IMP performance, showing in particular that more examples being linearly mode connected in the dense network correlates well with good initializations for IMP. Combined, these results provide new insight into the role played by the early phase of training in IMP.
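For reference, a hedged sketch of the IMP-with-rewinding loop the abstract refers to (after Frankle et al.; the model, sparsity fraction, and omitted training loop are toy assumptions):

```python
# Hedged sketch of one round of iterative magnitude pruning with rewinding:
# prune the globally smallest-magnitude weights, then reset the survivors
# to their values from an early pre-training checkpoint.
import torch
import torch.nn as nn

def global_magnitude_masks(model, fraction):
    """Mask out the `fraction` smallest-magnitude weights, globally."""
    all_w = torch.cat([p.abs().flatten() for n, p in model.named_parameters()
                       if "weight" in n])
    threshold = torch.quantile(all_w, fraction)
    return {n: (p.abs() > threshold).float()
            for n, p in model.named_parameters() if "weight" in n}

def rewind_and_mask(model, rewind_state, masks):
    """Reset surviving weights to their early-training (rewound) values."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.copy_(rewind_state[n] * masks[n])

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
# checkpoint after a few hundred steps of dense pre-training (omitted here)
rewind_state = {n: p.detach().clone() for n, p in model.named_parameters()}
# ... train to convergence, then prune and rewind; repeat for more rounds ...
masks = global_magnitude_masks(model, fraction=0.2)
rewind_and_mask(model, rewind_state, masks)
```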
Nowadays, the shipbuilding industry is facing a radical shift towards solutions with a smaller environmental impact. This can be achieved with low-emission engines, with optimized shape designs with lower wave resistance and noise generation, and by reducing the metal raw materials used during manufacturing. This work focuses on the last aspect by presenting a complete structural optimization pipeline for modern passenger ship hulls which exploits advanced model order reduction techniques to reduce the dimensionality of both the input parameters and the outputs of interest. We introduce a novel approach which incorporates parameter space reduction through active subspaces into the proper orthogonal decomposition with interpolation method, in a multi-fidelity setting. We test the whole framework on a simplified model of a midship section and on the full model of a passenger ship, controlled by 20 and 16 parameters, respectively. We present a comprehensive error analysis and show the capabilities and usefulness of the methods, especially during the preliminary design phase, where they uncover previously unconsidered designs while handling high-dimensional parameterizations.
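A hedged sketch of the active-subspace ingredient (the objective here is a synthetic stand-in, not a hull model): estimate $C = \mathbb{E}[\nabla f \, \nabla f^{\top}]$ from sampled gradients and keep the dominant eigenvectors as reduced input directions:

```python
# Hedged sketch: discovering an active subspace from sampled gradients.
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples = 20, 500                  # e.g. 20 hull parameters (assumption)
A = rng.normal(size=dim)                  # hidden 1D structure: f(x) = sin(A.x)
grad_f = lambda x: np.cos(A @ x) * A      # gradient of the stand-in objective

X = rng.uniform(-1, 1, size=(n_samples, dim))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / n_samples                   # Monte Carlo estimate of E[grad grad^T]
eigvals, eigvecs = np.linalg.eigh(C)
print(eigvals[-3:])                       # one dominant eigenvalue expected
W1 = eigvecs[:, -1:]                      # active directions: y = W1.T @ x
```

In a pipeline like the one described, the reduced coordinates $y$ would then feed a proper orthogonal decomposition with interpolation surrogate, trained across fidelity levels.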