We introduce a novel quantum programming language featuring higher-order programs and quantum control flow which ensures that all qubit transformations are unitary. Our language boasts a type system guaranteeing both unitarity and polynomial-time normalization. Unitarity is achieved by using a special modality for superpositions while requiring orthogonality among superposed terms. Polynomial-time normalization is achieved using a linear-logic-based type discipline employing Barber and Plotkin duality along with a specific modality to account for potential duplications. This type discipline also guarantees that derived values have polynomial size. Our language seamlessly combines the two modalities: quantum circuit programs uphold unitarity, and all programs are evaluated in polynomial time, ensuring their feasibility.
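To make the orthogonality requirement concrete, here is a minimal, hypothetical sketch of a typing rule for a superposition modality S (our own illustration, not the paper's actual rule): a superposition is well typed only if the branches are orthogonal and the amplitudes are normalized, which is what forces the resulting transformation to be unitary.

```latex
% Hypothetical sketch: superposed branches must be orthogonal and the
% amplitudes normalized for the superposition to inhabit the modality S.
\[
\frac{\Gamma \vdash t : A \qquad \Gamma \vdash u : A \qquad
      t \perp u \qquad |\alpha|^2 + |\beta|^2 = 1}
     {\Gamma \vdash \alpha \cdot t + \beta \cdot u : S\,A}
\]
```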
We propose a data-driven closure model for Reynolds-averaged Navier-Stokes (RANS) simulations that incorporates aleatoric model uncertainty. The proposed closure consists of two parts: a parametric one, which utilizes previously proposed neural-network-based tensor basis functions that depend on the invariants of the rate-of-strain and rotation tensors, complemented by latent random variables that account for aleatoric model errors. A fully Bayesian formulation is proposed, combined with a sparsity-inducing prior, in order to identify regions in the problem domain where the parametric closure is insufficient and where stochastic corrections to the Reynolds stress tensor are needed. Training is performed using sparse, indirect data, such as mean velocities and pressures, in contrast to the majority of alternatives, which require direct Reynolds stress data. For inference and learning, a Stochastic Variational Inference scheme is employed, based on Monte Carlo estimates of the pertinent objective in conjunction with the reparametrization trick. This necessitates derivatives of the output of the RANS solver, for which we developed an adjoint-based formulation. In this manner, the parametric sensitivities from the differentiable solver can be combined with the built-in automatic differentiation capability of the neural network library to enable an end-to-end differentiable framework. We demonstrate the capability of the proposed model to produce accurate, probabilistic predictive estimates for all flow quantities, even in regions where model errors are present, on a separated flow in the backward-facing step benchmark problem.
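A minimal PyTorch sketch of the end-to-end differentiation pattern described above, under stated assumptions: a fixed linear map stands in for the RANS solver, and its transpose stands in for the adjoint solve; the reparametrization trick exposes the variational parameters to the solver's adjoint-based vector-Jacobian product.

```python
import torch

# A fixed linear map stands in for the RANS solver; in the paper's setting the
# forward call would be the real solver and backward its adjoint formulation.
A = torch.randn(50, 100)

class ToySolver(torch.autograd.Function):
    @staticmethod
    def forward(ctx, c):
        return A @ c                      # "solve" the forward problem

    @staticmethod
    def backward(ctx, grad_out):
        return A.T @ grad_out             # adjoint solve supplies dL/dc

mu = torch.zeros(100, requires_grad=True)          # variational mean
log_sigma = torch.zeros(100, requires_grad=True)   # variational log-std
y_obs = torch.randn(50)                            # stand-in observations

eps = torch.randn(100)                             # reparametrization trick:
z = mu + log_sigma.exp() * eps                     # z = mu + sigma * eps
loss = ((ToySolver.apply(z) - y_obs) ** 2).sum()   # Monte Carlo objective term
loss.backward()                                    # gradients reach mu, log_sigma
```

Swapping the linear map for a real solver only requires supplying the forward solve and its adjoint, which is exactly the division of labor the abstract describes.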
Traffic flow modeling relies heavily on fundamental diagrams. However, deterministic fundamental diagrams, such as single- or multi-regime models, cannot capture the uncertainty pattern that underlies traffic flow. To address this limitation, a sparse non-parametric regression model is proposed in this paper to formulate the stochastic fundamental diagram. Unlike parametric stochastic fundamental diagram models, a non-parametric model is insensitive to parameters, flexible, and broadly applicable. The computational complexity and the large memory footprint of training a Gaussian process regression are reduced by introducing sparse Gaussian process regression. The paper also analyzes how empirical knowledge, encoded in the prior of the stochastic fundamental diagram model, influences the modeling process, and whether it can improve the robustness and accuracy of the proposed model. By introducing several well-known single-regime fundamental diagram models as the prior and testing the model's robustness and accuracy with different sampling methods on real-world data, the authors find that empirical knowledge only benefits the model when the number of inducing samples is small, given a relatively clean and large dataset. A purely data-driven approach is sufficient to estimate and describe the pattern of the density-speed relationship.
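To illustrate where the computational savings come from, here is a small numpy sketch of a standard sparse Gaussian process predictive mean (a subset-of-regressors/DTC approximation with inducing points) on toy density-speed data; the kernel, hyperparameters, and data generator are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy density-speed data; the logistic shape mimics a fundamental diagram.
rng = np.random.default_rng(0)
density = rng.uniform(0, 120, 500)                       # veh/km
speed = 100 / (1 + np.exp(0.08 * (density - 40))) + rng.normal(0, 3, 500)

def rbf(a, b, ell=15.0, sf=20.0):
    """Squared-exponential kernel on 1-d inputs."""
    return sf**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

Z = np.linspace(0, 120, 15)                  # m = 15 inducing inputs
sigma2 = 9.0                                 # observation noise variance
Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
Kmn = rbf(Z, density)

# Cost is O(n m^2) rather than O(n^3): data enter only through K_mn products.
Sigma = np.linalg.inv(Kmm + Kmn @ Kmn.T / sigma2)
xs = np.linspace(0, 120, 200)
mean = rbf(xs, Z) @ Sigma @ Kmn @ speed / sigma2   # sparse GP predictive mean
```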
We consider the problems of testing and learning an $n$-qubit $k$-local Hamiltonian from queries to its evolution operator, with respect to the 2-norm of the Pauli spectrum, or equivalently, the normalized Frobenius norm. For testing whether a Hamiltonian is $\epsilon_1$-close to $k$-local or $\epsilon_2$-far from $k$-local, we show that $O(1/(\epsilon_2-\epsilon_1)^{8})$ queries suffice. This solves two questions posed in a recent work by Bluhm, Caro and Oufkir. For learning up to error $\epsilon$, we show that $\exp(O(k^2+k\log(1/\epsilon)))$ queries suffice. Our proofs are simple, concise and based on Pauli-analytic techniques.
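To make the distance measure concrete, a small numpy sketch (a toy two-qubit example of our own construction): expand $H$ in the Pauli basis and measure how much of the normalized Frobenius mass sits on Pauli strings of weight greater than $k$, which is the squared 2-norm distance to the closest $k$-local Hamiltonian.

```python
import numpy as np
from itertools import product

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

def pauli_spectrum(H, n):
    """Coefficients h(x) = Tr(H sigma_x) / 2^n over all n-qubit Pauli strings."""
    coeffs = {}
    for s in product('IXYZ', repeat=n):
        sigma = np.eye(1)
        for p in s:
            sigma = np.kron(sigma, paulis[p])
        coeffs[''.join(s)] = np.trace(H @ sigma).real / 2**n
    return coeffs

n, k = 2, 1
H = np.kron(Z, I) + 0.5 * np.kron(I, X) + 0.1 * np.kron(Z, Z)  # one weight-2 term
c = pauli_spectrum(H, n)
total = sum(v**2 for v in c.values())                  # = ||H||_F^2 / 2^n
klocal = sum(v**2 for s, v in c.items()
             if sum(p != 'I' for p in s) <= k)         # mass on weight <= k
print(f"squared 2-norm distance to k-local: {total - klocal:.3f}")   # 0.010
```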
We present a new approach to compute eigenvalues and eigenvectors of locally definite multiparameter eigenvalue problems, indexed by their signed multiindex. The method can be interpreted as a semismooth Newton method applied to certain functions that have a unique zero. We can therefore show local quadratic convergence, and for certain extreme eigenvalues even global linear convergence of the method. Local definiteness is a weaker condition than the right and left definiteness often considered for multiparameter eigenvalue problems. These conditions are naturally satisfied for the multiparameter Sturm-Liouville problems that arise when separation of variables is applied to multidimensional boundary eigenvalue problems.
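A minimal Python sketch of a semismooth Newton iteration on a toy piecewise-smooth scalar function with a unique zero (a stand-in for the functions the method is actually applied to): any element of the generalized derivative replaces the classical derivative at kink points.

```python
def semismooth_newton(f, g, x0, tol=1e-12, maxit=50):
    """Newton iteration x <- x - f(x)/g(x), where g(x) returns any element
    of the generalized (Clarke) derivative of f at x."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / g(x)
    return x

# Toy piecewise-smooth function with a kink at its unique zero x* = 1.
f = lambda x: 2.0 * (x - 1.0) if x >= 1.0 else 0.5 * (x - 1.0)
g = lambda x: 2.0 if x >= 1.0 else 0.5       # generalized derivative element
print(semismooth_newton(f, g, x0=5.0))       # converges to 1.0
```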
Signal cancellation provides a radically new and efficient approach to exploratory factor analysis, without matrix decomposition or presetting the required number of factors. Its current implementation requires that each factor has at least two unique indicators. Its principle is that it is always possible to combine two indicator variables exclusive to the same factor with weights that cancel their common factor information. Successful combinations, consisting of noise only, are recognized by their null correlations with all remaining variables. The optimal combinations of multifactorial indicators, though, typically retain correlations with some other variables. Their signal, however, can be cancelled through combinations with unifactorial indicators of their contributing factors. The loadings are estimated from the relative signal-cancellation weights of the variables involved, along with their observed correlations. The factor correlations are obtained from those of their unifactorial indicators, corrected by their factor loadings. The method is illustrated with synthetic data from a complex six-factor structure that even includes two doublet factors. Another example, using actual data, documents that signal cancellation can rival confirmatory factor analysis.
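A small numpy simulation of the cancellation principle on synthetic data (loadings and noise levels are arbitrary choices of ours): weighting two indicators unique to the same factor by the inverse of their loadings removes the common factor, and the resulting noise-only combination decorrelates from every other variable.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
f1, f2 = rng.standard_normal((2, n))          # two uncorrelated factors

l1, l2 = 0.8, 0.6                             # loadings of the unique indicators
x1 = l1 * f1 + 0.5 * rng.standard_normal(n)   # indicators exclusive to f1
x2 = l2 * f1 + 0.5 * rng.standard_normal(n)
x3 = 0.7 * f1 + 0.4 * f2 + 0.5 * rng.standard_normal(n)   # other variables
x4 = 0.9 * f2 + 0.5 * rng.standard_normal(n)

combo = x1 / l1 - x2 / l2                     # cancels the common f1 signal
print(np.corrcoef(combo, x3)[0, 1])           # ~0: noise-only combination
print(np.corrcoef(combo, x4)[0, 1])           # ~0
print(np.corrcoef(x1 - x2, x3)[0, 1])         # wrong weights: residual f1 signal
```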
Boolean networks are extensively applied as models of complex dynamical systems, aiming at capturing essential features related to causality and synchronicity of the state changes of components along time. Dynamics of Boolean networks result from the application of their Boolean map according to a so-called update mode, specifying the possible transitions between network configurations. In this paper, we explore update modes that possess a memory on past configurations, and provide a generic framework to define them. We show that recently introduced modes such as the most permissive and interval modes can be naturally expressed in this framework. We propose novel update modes, the history-based and trapping modes, and provide a comprehensive comparison between them. Furthermore, we show that trapping dynamics, which further generalize the most permissive mode, correspond to a rich class of networks related to transitive dynamics and encompassing commutative networks. Finally, we provide a thorough characterization of the structure of minimal and principal trapspaces, bringing a combinatorial and algebraic understanding of these objects.
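To fix the shape of the generic framework, a toy Python sketch: an update mode with memory is a map from a history of past configurations to a set of successor configurations. The memoryless asynchronous mode is shown in this signature, together with a deliberately simple, hypothetical mode with memory (for illustration only; it is not the paper's history-based or trapping mode).

```python
# Toy 2-node Boolean network: f1(x) = x2, f2(x) = x1.
f = [lambda x: x[1], lambda x: x[0]]

def asynchronous(history):
    """Memoryless mode written with the history-based signature:
    successors depend only on the last configuration."""
    x = history[-1]
    return {tuple(f[i](x) if j == i else x[j] for j in range(len(x)))
            for i in range(len(f))} - {x}

def no_revisit(history):
    """Hypothetical mode with memory (shape only, not the paper's mode):
    forbid transitions back into configurations already visited."""
    return asynchronous(history) - set(history)

print(asynchronous([(0, 1)]))           # {(1, 1), (0, 0)}
print(no_revisit([(1, 1), (0, 1)]))     # {(0, 0)}: (1, 1) already visited
```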
Transformers are neural networks that revolutionized natural language processing and machine learning. They process sequences of inputs, like words, using a mechanism called self-attention, which is trained via masked language modeling (MLM). In MLM, a word is randomly masked in an input sequence, and the network is trained to predict the missing word. Despite the practical success of transformers, it remains unclear what type of data distribution self-attention can learn efficiently. Here, we show analytically that if one decouples the treatment of word positions and embeddings, a single layer of self-attention learns the conditionals of a generalized Potts model with interactions between sites and Potts colors. Moreover, we show that training this neural network is exactly equivalent to solving the inverse Potts problem by the so-called pseudo-likelihood method, well known in statistical physics. Using this mapping, we compute the generalization error of self-attention in a model scenario analytically using the replica method.
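A small numpy sketch of the statistical-physics side of the correspondence (toy random parameters, our own construction): the Potts conditionals that a masked-language-modeling head must predict, and the pseudo-likelihood objective whose minimization the training is shown to be equivalent to.

```python
import numpy as np

def potts_conditional(x, i, J, h):
    """P(x_i = a | x_{-i}) for a Potts model with couplings J[i, j, a, b]
    and fields h[i, a]: exactly what a masked-LM head must predict."""
    L, q = h.shape
    logits = h[i].copy()
    for j in range(L):
        if j != i:
            logits += J[i, j, :, x[j]]
    e = np.exp(logits - logits.max())
    return e / e.sum()

def neg_pseudo_likelihood(X, J, h):
    """Classical inverse-Potts objective: sum over sites of -log P(x_i | x_{-i})."""
    nll = 0.0
    for x in X:
        for i in range(len(x)):
            nll -= np.log(potts_conditional(x, i, J, h)[x[i]])
    return nll / len(X)

L, q = 5, 3                                   # 5 sites, 3 Potts colors
rng = np.random.default_rng(0)
J = rng.normal(0, 0.1, (L, L, q, q)); h = rng.normal(0, 0.1, (L, q))
X = rng.integers(0, q, (100, L))              # toy "sentences"
print(neg_pseudo_likelihood(X, J, h))
```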
We consider the computation of model-free bounds for multi-asset options in a setting that combines dependence uncertainty with additional information on the dependence structure. More specifically, we consider the setting where the marginal distributions are known and partial information, in the form of known prices for multi-asset options, is also available in the market. We provide a fundamental theorem of asset pricing in this setting, as well as a superhedging duality that allows one to transform the maximization problem over probability measures into a more tractable minimization problem over trading strategies. The latter is solved using a penalization approach combined with a deep learning approximation using artificial neural networks. The numerical method is fast and the computational time scales linearly with the number of traded assets. We finally examine the significance of various pieces of additional information. Empirical evidence suggests that "relevant" information, i.e. prices of derivatives with the same payoff structure as the target payoff, is more useful than other information, and should be prioritized in view of the trade-off between accuracy and computational efficiency.
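A toy PyTorch sketch of the penalization approach for the upper price bound (scenarios, hedging instruments, and market prices are all invented, and a static linear strategy replaces the paper's neural-network strategies): the superhedging constraint is relaxed into an expected-shortfall penalty and the hedge cost is minimized by gradient descent.

```python
import torch

torch.manual_seed(0)
S = torch.rand(10_000, 2) * 2            # toy scenarios for two asset prices
target = (S.sum(1) - 2.0).clamp(min=0)   # basket call payoff to be bounded

# Hedging instruments: cash, the two single-asset calls, one traded basket call
instruments = torch.stack([
    torch.ones(len(S)),
    (S[:, 0] - 1.0).clamp(min=0),
    (S[:, 1] - 1.0).clamp(min=0),
    (S.sum(1) - 1.8).clamp(min=0),       # the "additional information" option
], dim=1)
prices = torch.tensor([1.0, 0.08, 0.08, 0.15])   # toy market prices

w = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.05)
beta = 500.0                                     # penalization strength
for _ in range(2000):
    opt.zero_grad()
    shortfall = (target - instruments @ w).clamp(min=0)
    loss = w @ prices + beta * (shortfall ** 2).mean()  # penalized superhedge
    loss.backward(); opt.step()
print((w @ prices).item())               # approximate model-free upper bound
```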
In relational verification, judicious alignment of computational steps facilitates proofs of relations between programs using simple relational assertions. Relational Hoare logics (RHLs) provide compositional rules that embody various alignments of executions. Seemingly more flexible alignments can be expressed in terms of product automata based on program transition relations. A single degenerate alignment rule (self-composition), atop a complete Hoare logic, comprises an RHL for $\forall\forall$ properties that is complete in the ordinary logical sense (Cook'78). The notion of alignment completeness was previously proposed as a more satisfactory measure, and some rules were shown to be alignment complete with respect to a few ad hoc forms of alignment automata. This paper proves alignment completeness with respect to a general class of $\forall\forall$ alignment automata, for an RHL comprising standard rules together with a rule of semantics-preserving rewrites based on Kleene algebra with tests. A new logic for $\forall\exists$ properties is introduced and shown to be alignment complete. The $\forall\forall$ and $\forall\exists$ automata are shown to be semantically complete. Thus the logics are both complete in the ordinary sense. Recent work by D'Osualdo et al. highlights the importance of completeness relative to assumptions (which we term entailment completeness), and presents $\forall\forall$ examples seemingly beyond the scope of RHLs. Additional rules enable these examples to be proved in our RHL, shedding light on the open problem of entailment completeness.
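For readers unfamiliar with the degenerate alignment mentioned above, the standard self-composition rule reduces a $\forall\forall$ relational judgment to an ordinary Hoare triple over the sequenced programs (a standard formulation, sketched here for orientation):

```latex
% Self-composition: c2' is c2 with its variables renamed apart from c1's,
% so one Hoare triple over the sequenced program proves the relational claim.
\[
\frac{\vdash \{P\}\ c_1 ;\, c_2' \ \{Q\}}
     {\vdash c_1 \sim c_2 : P \Rightarrow Q}
\]
```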
We consider the problem of causal inference based on observational data (or the related missing data problem) with a binary or discrete treatment variable. In that context we study counterfactual density estimation, which provides more nuanced information than counterfactual mean estimation (i.e., the average treatment effect). We impose the shape constraint of log-concavity (a unimodality constraint) on the counterfactual densities, then develop doubly robust estimators of the log-concave counterfactual density (based on an augmented inverse-probability-weighted pseudo-outcome), and show the consistency of these estimators in various global metrics. Based on these estimators we also develop asymptotically valid pointwise confidence intervals for the counterfactual density.
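A minimal numpy sketch of the doubly robust (AIPW) pseudo-outcome idea, shown here for the counterfactual CDF rather than the density (a simpler cousin of the paper's estimator; the log-concave MLE step is omitted). The toy uses a randomized treatment with a deliberately crude outcome model to illustrate double robustness: a correct propensity alone already yields consistency.

```python
import numpy as np
from scipy.stats import norm

def aipw_cdf(y_grid, Y, A, pi_hat, m_hat):
    """Doubly robust (AIPW) estimate of F_1(y) = P(Y(1) <= y).
    pi_hat[i] = estimated P(A=1 | X_i);
    m_hat[i, j] = estimated P(Y <= y_grid[j] | A=1, X_i)."""
    ind = (Y[:, None] <= y_grid[None, :]).astype(float)
    pseudo = (A / pi_hat)[:, None] * (ind - m_hat) + m_hat
    return pseudo.mean(axis=0)

rng = np.random.default_rng(0)
n = 50_000
A = rng.integers(0, 2, n)                     # randomized binary treatment
Y = rng.normal(1.0 * A, 1.0)                  # so Y(1) ~ N(1, 1)
y_grid = np.linspace(-1, 3, 5)
pi_hat = np.full(n, 0.5)                      # correct propensity
m_hat = np.zeros((n, len(y_grid)))            # deliberately wrong outcome model

print(np.round(aipw_cdf(y_grid, Y, A, pi_hat, m_hat), 3))   # still consistent
print(np.round(norm.cdf(y_grid, loc=1.0), 3))               # true F_1
```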