We study many-valued coalgebraic logics with semi-primal algebras of truth-degrees. We provide a systematic way to lift endofunctors defined on the variety of Boolean algebras to endofunctors on the variety generated by a semi-primal algebra. We show that this can be extended to a technique to lift classical coalgebraic logics to many-valued ones, and that (one-step) completeness and expressivity are preserved under this lifting. For specific classes of endofunctors, we also describe how to obtain an axiomatization of the lifted many-valued logic directly from an axiomatization of the original classical one. In particular, we apply all of these techniques to classical modal logic.
Due to the linearity of quantum operations, it is not straightforward to implement nonlinear transformations on a quantum computer, making some practical tasks, such as realizing neural networks, hard to achieve. In this work, we define a task called nonlinear transformation of complex amplitudes and provide an algorithm to achieve it. Specifically, we construct a block-encoding of complex amplitudes from a state preparation unitary. This allows us to transform the complex amplitudes by using quantum singular value transformation. We evaluate the required overhead in terms of input dimension and precision, which reveals that the algorithm's cost scales roughly as the square root of the input dimension and achieves an exponential speedup in precision compared with previous work. We also discuss its possible applications to quantum machine learning, where complex amplitudes encoding classical or quantum data are processed by the proposed method. This paper provides a promising way to introduce highly complex nonlinearity into quantum states, which is essentially missing in quantum mechanics.
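As a point of reference for the task itself (not for the quantum algorithm), the following minimal NumPy sketch applies a polynomial elementwise to the complex amplitudes of a state vector and renormalizes; on a quantum computer, the approach described above realizes such a polynomial via a block-encoding of the amplitudes and quantum singular value transformation. The function name and the cubic polynomial below are purely illustrative.
\begin{verbatim}
import numpy as np

def nonlinear_amplitude_transform(psi, poly):
    """Classical reference of the target map: apply a polynomial elementwise
    to the complex amplitudes of a state vector and renormalize."""
    transformed = np.polyval(poly, psi)              # f(c_i) for each amplitude c_i
    return transformed / np.linalg.norm(transformed)

# Example: a cubic nonlinearity f(c) = 0.5 c^3 + c on a random 3-qubit state
psi = np.random.randn(8) + 1j * np.random.randn(8)
psi /= np.linalg.norm(psi)
print(nonlinear_amplitude_transform(psi, poly=[0.5, 0.0, 1.0, 0.0]))
\end{verbatim}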
Building on the recent development of the model-free generalized fiducial (MFGF) paradigm (Williams, 2023) for predictive inference with finite-sample frequentist validity guarantees, in this paper we develop an MFGF-based approach to decision theory. Beyond the utility of the new tools we contribute to the field of decision theory, our work establishes a formal connection between decision theories from the perspectives of fiducial inference, conformal prediction, and imprecise probability theory. We establish pointwise and uniform consistency of an {\em MFGF upper risk function} as an approximation to the true risk function via the derivation of nonasymptotic concentration bounds, and our work serves as the foundation for future investigations of the properties of the MFGF upper risk from the perspective of new decision-theoretic, finite-sample validity criteria, as in Martin (2021).
We analyze a bilinear optimal control problem for the Stokes--Brinkman equations: the control variable enters the state equations as a coefficient. In two- and three-dimensional Lipschitz domains, we perform a complete continuous analysis that includes the existence of solutions and first- and second-order optimality conditions. We also develop two finite element methods that differ fundamentally in whether the admissible control set is discretized or not. For each of the proposed methods, we perform a convergence analysis and derive a priori error estimates; the latter under the assumption that the domain is convex. Finally, assuming that the domain is Lipschitz, we develop an a posteriori error estimator for each discretization scheme and obtain a global reliability bound.
Parametrized and random unitary (or orthogonal) $n$-qubit circuits play a central role in quantum information. As such, one could naturally assume that circuits implementing symplectic transformations would attract similar attention. However, this is not the case, as $\mathbb{SP}(d/2)$ -- the group of $d\times d$ unitary symplectic matrices -- has thus far been overlooked. In this work, we aim to start righting this wrong. We begin by presenting a universal set of generators $\mathcal{G}$ for the symplectic algebra $i\mathfrak{sp}(d/2)$, consisting of one- and two-qubit Pauli operators acting on neighboring sites in a one-dimensional lattice. Here, we uncover two critical differences between this set and its counterparts for unitary and orthogonal circuits. Namely, we find that the operators in $\mathcal{G}$ cannot generate arbitrary local symplectic unitaries and that they are not translationally invariant. We then review the Schur-Weyl duality between the symplectic group and the Brauer algebra, and use tools from Weingarten calculus to prove that Pauli measurements at the output of Haar random symplectic circuits can converge to Gaussian processes. As a by-product, this analysis provides us with concentration bounds for Pauli measurements in circuits that form $t$-designs over $\mathbb{SP}(d/2)$. To finish, we present tensor-network tools to analyze shallow random symplectic circuits, and we use these to numerically show that computational-basis measurements anti-concentrate at logarithmic depth.
We consider nonlinear delay differential and renewal equations with infinite delay. We extend the work of Gyllenberg et al., Appl. Math. Comput. (2018) by introducing a unifying abstract framework, and derive a finite-dimensional approximating system via pseudospectral discretization. For renewal equations, we consider a reformulation in the space of absolutely continuous functions via integration. We prove the one-to-one correspondence of equilibria between the original equation and its approximation, and that linearization and discretization commute. Our most important result is the proof of convergence of the characteristic roots of the pseudospectral approximation of the linear(ized) equations when the collocation nodes are chosen as the family of scaled zeros or extrema of Laguerre polynomials. This ensures that the finite-dimensional system correctly reproduces the stability properties of the original linear equation if the dimension of the approximation is large enough. The result is illustrated with several numerical tests, which also demonstrate the effectiveness of the approach for the bifurcation analysis of equilibria of nonlinear equations. The new approach used to prove convergence also provides the exact location of the spectrum of the differentiation matrices for the Laguerre zeros and extrema, adding new insights into properties that are important in the numerical solution of differential equations by pseudospectral methods.
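For readers who want to experiment with the objects involved, the sketch below builds collocation nodes as scaled zeros of a Laguerre polynomial (via NumPy's laggauss) and the associated polynomial differentiation matrix in barycentric form, whose spectrum is the quantity the convergence result concerns. This is a generic small-order illustration under those assumptions, not the paper's exact discretization of the infinite-delay problem.
\begin{verbatim}
import numpy as np
from numpy.polynomial.laguerre import laggauss

def differentiation_matrix(nodes):
    """Polynomial differentiation matrix at arbitrary nodes (barycentric form)."""
    diffs = nodes[:, None] - nodes[None, :]
    np.fill_diagonal(diffs, 1.0)
    w = 1.0 / diffs.prod(axis=1)                  # barycentric weights
    D = (w[None, :] / w[:, None]) / diffs         # D_ij = (w_j / w_i) / (x_i - x_j)
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))           # each row sums to zero
    return D

n, c = 20, 2.0                                    # approximation order and scaling (illustrative)
nodes, _ = laggauss(n)                            # zeros of the degree-n Laguerre polynomial
D = differentiation_matrix(c * nodes)
print(np.sort(np.linalg.eigvals(D)))              # spectrum relevant to the stability analysis
\end{verbatim}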
Missing values have been thoroughly analyzed in the context of linear models, where the final aim is to estimate coefficients. However, estimating coefficients does not directly solve the problem of prediction with missing entries: a way to handle the missing components must be designed. Major approaches to prediction with missing values are empirically driven and can be decomposed into two families: imputation (filling in empty fields) and pattern-by-pattern prediction, where a predictor is built for each missingness pattern. Unfortunately, most simple imputation techniques used in practice (such as constant imputation) are not consistent when combined with linear models. In this paper, we focus on the more flexible pattern-by-pattern approaches and study their predictive performance on Missing Completely At Random (MCAR) data. We first show that a pattern-by-pattern logistic regression model is intrinsically ill-defined, implying that even classical logistic regression cannot be applied to missing data. We then analyze the perceptron model and show how the linear separability property extends to partially-observed inputs. Finally, we use Linear Discriminant Analysis (LDA) to prove that pattern-by-pattern LDA is consistent in a high-dimensional regime. We refine our analysis to more complex MNAR data.
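To fix ideas, here is a minimal sketch of the pattern-by-pattern approach, assuming NumPy arrays with NaNs marking missing entries and numeric class labels: one LDA classifier is fit per missingness pattern, using only the coordinates observed under that pattern. Function names are illustrative, and degenerate patterns (no observed columns or a single class) are simply skipped.
\begin{verbatim}
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_pattern_by_pattern(X, y):
    """Fit one LDA classifier per missingness pattern, on observed columns only."""
    patterns = np.isnan(X)
    models = {}
    for pattern in np.unique(patterns, axis=0):
        rows = (patterns == pattern).all(axis=1)
        observed = ~pattern
        if observed.sum() == 0 or len(np.unique(y[rows])) < 2:
            continue                                    # skip degenerate patterns
        clf = LinearDiscriminantAnalysis()
        clf.fit(X[rows][:, observed], y[rows])
        models[tuple(pattern)] = (observed, clf)
    return models

def predict_pattern_by_pattern(models, X):
    """Predict each point with the classifier trained on its own missingness pattern."""
    predictions = np.full(len(X), np.nan)
    for i, pattern in enumerate(np.isnan(X)):
        key = tuple(pattern)
        if key in models:
            observed, clf = models[key]
            predictions[i] = clf.predict(X[i, observed][None, :])[0]
    return predictions
\end{verbatim}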
We build a valid p-value based on a concentration inequality for bounded random variables introduced by Pelekis, Ramon and Wang. The motivation behind this work is the calibration of predictive algorithms in a distribution-free setting. The resulting super-uniform p-value is tighter than the Hoeffding and Bentkus alternatives in certain regions. Even though we are motivated by a calibration setting in a machine learning context, the ideas presented in this work are also relevant in classical statistical inference. Furthermore, we compare the power of a collection of valid p-values for bounded losses presented in the previous literature.
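The two baselines mentioned above are easy to state in code. The sketch below implements the standard Hoeffding and Bentkus p-values for the null hypothesis that the mean of a [0,1]-bounded loss is at least a target level alpha; the Pelekis-Ramon-Wang-based p-value developed in the paper is not reproduced here. Function names and the toy data are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def hoeffding_pvalue(losses, alpha):
    """p-value for H0: E[loss] >= alpha, with losses bounded in [0, 1] (Hoeffding)."""
    n, r_hat = len(losses), np.mean(losses)
    return min(1.0, np.exp(-2.0 * n * max(alpha - r_hat, 0.0) ** 2))

def bentkus_pvalue(losses, alpha):
    """p-value for H0: E[loss] >= alpha, with losses bounded in [0, 1] (Bentkus)."""
    n, r_hat = len(losses), np.mean(losses)
    return min(1.0, np.e * binom.cdf(np.ceil(n * r_hat), n, alpha))

losses = np.random.beta(1, 9, size=500)   # toy bounded losses with mean around 0.1
print(hoeffding_pvalue(losses, alpha=0.15), bentkus_pvalue(losses, alpha=0.15))
\end{verbatim}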
Generalization is the ability of machine learning models to make accurate predictions on new data by learning from training data. However, understanding the generalization of quantum machine learning models has been a major challenge. Here, we introduce the data quantum Fisher information metric (DQFIM). It describes the capacity of variational quantum algorithms depending on the variational ansatz, the training data, and their symmetries. We apply the DQFIM to quantify the number of circuit parameters and the amount of training data needed to successfully train and generalize. Using the dynamical Lie algebra, we explain how to generalize using a low number of training states. Counter-intuitively, breaking the symmetries of the training data can help to improve generalization. Finally, we find that out-of-distribution generalization, where training and testing data are drawn from different data distributions, can be better than using the same distribution. Our work provides a useful framework to explore the power of quantum machine learning models.
This paper presents an analysis of the properties of two hybrid discretization methods for Gaussian derivatives, based on convolution with either the normalized sampled Gaussian kernel or the integrated Gaussian kernel, followed by central differences. The motivation for studying these discretization methods is that, in situations where multiple spatial derivatives of different orders are needed at the same scale level, they can be computed significantly more efficiently than more direct derivative approximations based on explicit convolutions with either sampled Gaussian kernels or integrated Gaussian kernels. These computational benefits also hold for the genuinely discrete approach of computing discrete analogues of Gaussian derivatives, based on convolution with the discrete analogue of the Gaussian kernel followed by central differences. However, the underlying mathematical primitives for the discrete analogue of the Gaussian kernel, in terms of modified Bessel functions of integer order, may not be available in certain frameworks for image processing, such as when performing deep learning based on scale-parameterized filters in terms of Gaussian derivatives, with learning of the scale levels. In this paper, we characterize the properties of these hybrid discretization methods in terms of quantitative performance measures concerning the amount of spatial smoothing that they imply, as well as the relative consistency of scale estimates obtained from scale-invariant feature detectors with automatic scale selection. The emphasis is on the behaviour for very small values of the scale parameter, which may differ significantly both from corresponding results obtained from the fully continuous scale-space theory and between the different types of discretization methods.
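A minimal one-dimensional sketch of the first of these hybrid discretizations (the sampled-Gaussian variant) is given below, assuming SciPy is available: the signal is smoothed once with the normalized sampled Gaussian kernel, and first- and second-order derivatives are then obtained by central differences, so that additional derivative orders at the same scale only cost extra small-support difference filters. The integrated-Gaussian variant would only change the smoothing kernel; names and parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.ndimage import correlate1d

def sampled_gaussian_kernel(sigma, radius=None):
    """Normalized sampled Gaussian kernel."""
    radius = int(np.ceil(4 * sigma)) if radius is None else radius
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def hybrid_gaussian_derivatives(signal, sigma):
    """Smooth once with the sampled Gaussian, then apply central differences."""
    smoothed = correlate1d(signal, sampled_gaussian_kernel(sigma), mode='reflect')
    Lx = correlate1d(smoothed, np.array([-0.5, 0.0, 0.5]), mode='reflect')   # first derivative
    Lxx = correlate1d(smoothed, np.array([1.0, -2.0, 1.0]), mode='reflect')  # second derivative
    return smoothed, Lx, Lxx

signal = np.sin(np.linspace(0, 4 * np.pi, 200))
L, Lx, Lxx = hybrid_gaussian_derivatives(signal, sigma=2.0)
\end{verbatim}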
We study stochastic approximation procedures for approximately solving a $d$-dimensional linear fixed point equation based on observing a trajectory of length $n$ from an ergodic Markov chain. We first exhibit a non-asymptotic bound of the order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme, where $t_{\mathrm{mix}}$ is a mixing time. We then prove a non-asymptotic instance-dependent bound on a suitably averaged sequence of iterates, with a leading term that matches the local asymptotic minimax limit, including sharp dependence on the parameters $(d, t_{\mathrm{mix}})$ in the higher order terms. We complement these upper bounds with a non-asymptotic minimax lower bound that establishes the instance-optimality of the averaged SA estimator. We derive corollaries of these results for policy evaluation with Markov noise -- covering the TD($\lambda$) family of algorithms for all $\lambda \in [0, 1)$ -- and linear autoregressive models. Our instance-dependent characterizations open the door to the design of fine-grained model selection procedures for hyperparameter tuning (e.g., choosing the value of $\lambda$ when running the TD($\lambda$) algorithm).
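As a concrete instance of the averaged stochastic-approximation scheme analyzed above, here is a minimal sketch of linear TD(0) (the $\lambda = 0$ member of the TD($\lambda$) family) with Polyak-Ruppert iterate averaging on a single trajectory. The constant step size and function name are illustrative choices, and the instance-dependent guarantees discussed above concern the averaged iterate rather than the last one.
\begin{verbatim}
import numpy as np

def averaged_td0(features, rewards, next_features, gamma=0.9, step=0.05):
    """Linear TD(0) on a single Markovian trajectory, with Polyak-Ruppert averaging."""
    theta = np.zeros(features.shape[1])
    theta_bar = np.zeros_like(theta)
    for t, (phi, r, phi_next) in enumerate(zip(features, rewards, next_features), start=1):
        td_error = r + gamma * phi_next @ theta - phi @ theta
        theta = theta + step * td_error * phi        # stochastic-approximation update
        theta_bar += (theta - theta_bar) / t         # running average of the iterates
    return theta, theta_bar                          # last iterate vs. averaged iterate
\end{verbatim}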