Quantum algorithms offer significant speed-ups over their classical counterparts in various applications. In this paper, we develop quantum algorithms, based on the block-encoding method, for the Kalman filter widely used in classical control engineering. The entire calculation is carried out through matrix operations on Hamiltonians within the block-encoding framework, including addition, multiplication, and inversion, so that, in contrast to previous quantum algorithms for control problems, all steps are completed in a single unified framework. We demonstrate that the quantum algorithm exponentially accelerates the computation of the Kalman filter compared to traditional methods: the time complexity is reduced from $O(n^3)$ to $O(\kappa\,\mathrm{poly}\log(n/\epsilon)\log(1/\epsilon'))$, where $n$ is the matrix dimension, $\kappa$ is the condition number of the matrix to be inverted, $\epsilon$ is the desired precision of the block encoding, and $\epsilon'$ is the desired precision of the matrix inversion. This paper provides a comprehensive quantum solution for implementing the Kalman filter and serves as an attempt to broaden the scope of quantum-computing applications. Finally, we present an illustrative example implemented in Qiskit (a Python-based open-source toolkit) as a proof of concept.
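For reference, the classical $O(n^3)$ cost quoted above comes from the dense matrix products and the matrix inversion in each predict–update cycle of the standard Kalman filter. A minimal NumPy sketch of one such cycle (this is the classical baseline, not the paper's quantum construction; variable names are illustrative):

```python
import numpy as np

def kalman_step(x, P, A, Q, H, R, z):
    """One predict-update cycle of the classical Kalman filter.

    The matrix multiplications and the inversion of the innovation
    covariance S below are the O(n^3) operations that a block-encoded
    quantum approach would replace with Hamiltonian matrix arithmetic.
    """
    # Predict step: propagate state estimate and covariance
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update step: form innovation covariance and Kalman gain
    S = H @ P_pred @ H.T + R              # innovation covariance (inverted below)
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```
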
In this paper we continue the work on implicit-explicit (IMEX) time discretizations for the incompressible Oseen equations that we started in \cite{BGG23} (E. Burman, D. Garg, J. Guzm\'an, {\emph{Implicit-explicit time discretization for Oseen's equation at high Reynolds number with application to fractional step methods}}, SIAM J. Numer. Anal., 61, 2859--2886, 2023). The pressure-velocity coupling and the viscous terms are treated implicitly, while the convection term is treated explicitly using extrapolation. Herein we focus on the implicit-explicit Crank-Nicolson method for time discretization. For the discretization in space we consider finite element methods with stabilization on the gradient jumps. The stabilizing terms ensure inf-sup stability for equal-order interpolation and robustness at high Reynolds number. Under suitable Courant conditions we prove stability of the implicit-explicit Crank-Nicolson scheme in this regime. The stabilization allows us to prove error estimates of order $O(h^{k+\frac12} + \tau^2)$. Here $h$ is the mesh parameter, $k$ the polynomial order and $\tau$ the time step. Finally we discuss some fractional step methods that are implied by the IMEX scheme. Numerical examples are reported comparing the different methods when applied to the Navier-Stokes equations.
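The implicit-explicit splitting described above can be illustrated on a scalar model problem: one term is treated implicitly with Crank-Nicolson while the other is advanced with second-order extrapolation, mirroring the implicit viscous/explicit convective split. This is a sketch of the time-stepping structure only, not the paper's finite element scheme; all parameter names are illustrative:

```python
def imex_cn_ab2(nu, c, u0, tau, nsteps):
    """IMEX Crank-Nicolson stepping for the scalar model u' = -nu*u + c*u.

    The 'viscous' term -nu*u is treated implicitly (Crank-Nicolson), and
    the 'convective' term c*u explicitly via the second-order extrapolant
    c*(3*u^n - u^{n-1})/2, analogous to the extrapolated convection term
    in the IMEX scheme. (Model-problem sketch under stated assumptions.)
    """
    u_prev, u = u0, u0  # startup: first step uses u^{-1} = u^0
    out = [u0]
    for _ in range(nsteps):
        conv = c * (3.0 * u - u_prev) / 2.0   # extrapolated explicit term
        # Solve (1 + tau*nu/2) u^{n+1} = (1 - tau*nu/2) u^n + tau*conv
        u_next = ((1.0 - tau * nu / 2.0) * u + tau * conv) / (1.0 + tau * nu / 2.0)
        u_prev, u = u, u_next
        out.append(u)
    return out
```

With $c = 0$ the scheme reduces to plain Crank-Nicolson for exponential decay and reproduces the expected second-order accuracy in $\tau$.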
Mechanical interactions between rigid rings and flexible cables find broad application in both daily life (hanging clothes) and engineering systems (closing a tether-net). A reduced-order method for the dynamic analysis of sliding rings on a deformable one-dimensional (1D) rod-like object is proposed. In contrast to the conventional approach of discretizing joint rings into multiple nodes and edges for contact detection and numerical simulation, a single point is used to reduce the order of the model. To ensure that the sliding ring and flexible rod do not deviate from their desired positions, a new barrier function is formulated using the incremental potential theory. Subsequently, the tangential frictional interaction forces are obtained through a delayed dissipative approach. The proposed barrier functional and the associated frictional functional are $C^2$-continuous, hence the nonlinear elastodynamic system can be solved variationally by an implicit time-stepping scheme. The numerical framework is initially applied to simple examples where analytical solutions are available for validation. Then, multiple complex practical engineering examples are considered to showcase the effectiveness of the proposed method. The simplified ring-to-rod interaction model has the capacity to enhance the realism of visual effects in image animations, while simultaneously facilitating the optimization of designs for space debris removal systems.
In this paper, we present a set of private and secure delegated quantum computing protocols and techniques tailored to user-level and industry-level use cases, depending on the computational resources available to the client, the specific privacy needs required, and the type of algorithm. Our protocols are presented at a high level, as they are independent of the particular algorithms used for the underlying encryption and decryption processes. Additionally, we propose a method to verify the correct execution of operations by the external server.
Multiple imputation (MI) models can be improved by including auxiliary covariates (AC), but their performance in high-dimensional data is not well understood. We aimed to develop and compare high-dimensional MI (HDMI) approaches using structured and natural language processing (NLP)-derived AC in studies with partially observed confounders. We conducted a plasmode simulation study using data from opioid vs. non-steroidal anti-inflammatory drug (NSAID) initiators (X) with observed serum creatinine labs (Z2) and time to acute kidney injury as the outcome. We simulated 100 cohorts with a null treatment effect, including X, Z2, atrial fibrillation (U), and 13 other investigator-derived confounders (Z1) in the outcome generation. We then imposed missingness (MZ2) on 50% of Z2 measurements as a function of Z2 and U, and created different HDMI candidate AC sets using structured and NLP-derived features. We mimicked scenarios where U was unobserved by omitting it from all candidate AC sets. Using LASSO, we data-adaptively selected HDMI covariates associated with Z2 and MZ2 for MI, and covariates associated with U for inclusion in the propensity score models. The treatment effect was estimated following propensity score matching in the MI datasets, and we benchmarked the HDMI approaches against a baseline imputation and a complete-case analysis with Z1 only. HDMI using claims data showed the lowest bias (0.072). Combining claims data and sentence embeddings improved efficiency, yielding the lowest root-mean-squared error (0.173) with 94% coverage. NLP-derived AC alone did not perform better than baseline MI. HDMI approaches may decrease bias in studies with partially observed confounders where missingness depends on unobserved factors.
This paper outlines a method aiming to increase the efficiency of proof-of-work based blockchains using a ticket-based approach. To avoid the limitation of serially adding one block at a time to a blockchain, multiple semi-independent chains are used, so that several valid blocks can be added in parallel provided they are added to separate chains. The chain index for each new block is determined by a "ticket" that the miner must produce before mining the block. This increases the transaction rate by several orders of magnitude while the system remains fully decentralized and permissionless, and maintains security in the sense that a successful attack would require the attacker to control a significant portion of the whole network.
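The ticket-to-chain assignment described above can be sketched as a deterministic mapping from a miner's ticket to a chain index. The following is a hedged illustration of that idea only; the paper's actual ticket construction and index rule may differ, and `NUM_CHAINS` is an assumed parameter:

```python
import hashlib

NUM_CHAINS = 16  # assumed number of parallel semi-independent chains

def chain_index(ticket: bytes) -> int:
    """Map a miner's ticket to the index of the chain it may extend.

    Hashing the ticket and reducing modulo the number of chains gives
    every miner a deterministic, uniformly distributed chain assignment,
    so valid blocks for distinct chains can be mined in parallel.
    (Illustrative sketch, not the paper's exact rule.)
    """
    digest = hashlib.sha256(ticket).digest()
    return int.from_bytes(digest[:8], "big") % NUM_CHAINS
```

Because the index is derived from the ticket, a miner cannot freely choose which chain to extend, which is what keeps the parallel chains semi-independent.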
Class imbalance in real-world data poses a common bottleneck for machine learning tasks, since achieving good generalization on under-represented examples is often challenging. Mitigation strategies, such as under- or oversampling the data depending on class abundances, are routinely proposed and tested empirically, but how they should adapt to the data statistics remains poorly understood. In this work, we determine exact analytical expressions for the generalization curves in the high-dimensional regime for linear classifiers (Support Vector Machines). We also provide a sharp prediction of the effects of under/oversampling strategies depending on class imbalance, the first and second moments of the data, and the performance metrics considered. We show that mixed strategies involving under- and oversampling of data lead to performance improvement. Through numerical experiments, we show the relevance of our theoretical predictions on real datasets, on deeper architectures, and with sampling strategies based on unsupervised probabilistic models.
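The two baseline strategies analyzed above can be stated concretely: undersampling drops majority-class points until classes are balanced, while oversampling duplicates minority-class points. A minimal NumPy sketch of both (illustrative only; the paper's analysis concerns their effect on generalization, not this implementation):

```python
import numpy as np

def undersample(X, y, rng):
    """Drop majority-class points until all classes have equal size."""
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

def oversample(X, y, rng):
    """Duplicate minority-class points (with replacement) until balanced."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_max, replace=True)
        for c in classes
    ])
    return X[keep], y[keep]

# Imbalanced toy data: 90 majority vs. 10 minority points in 2D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (90, 2)), rng.normal(2.0, 1.0, (10, 2))])
y = np.array([0] * 90 + [1] * 10)
```

A mixed strategy, as studied in the paper, would resample each class to some intermediate target size between the minority and majority counts.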
Rough set theory is one of the important methods for rule acquisition and attribute reduction. Current rough set attribute reduction focuses mainly on minimizing the number of reduced attributes, while ignoring the spatial similarity between the reduced and decision attributes, which may lead to problems such as an increased number of rules and limited generality. In this paper, a rough set attribute reduction algorithm based on spatial optimization is proposed. By introducing the concept of spatial similarity, the algorithm seeks the reduction with the highest spatial similarity to the decision attributes, so that more concise and more widely applicable rules are obtained. In addition, a comparative experiment against traditional rough set attribute reduction algorithms is designed to demonstrate the effectiveness of the proposed algorithm, which achieves significant improvements on many datasets.
In the present paper, we introduce new tensor Krylov subspace methods for solving large Sylvester tensor equations. The proposed methods use the well-known T-product for tensors and tensor subspaces. We introduce some new tensor products and their related algebraic properties. These new products enable us to develop the third-order tensor FOM (tFOM), tensor GMRES (tGMRES), the tubal Block Arnoldi process, and the tensor tubal Block Arnoldi method to solve large Sylvester tensor equations. We give some properties related to these methods and present some numerical experiments.
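The T-product underlying these methods has a standard concrete form: two third-order tensors are multiplied by taking an FFT along the third (tubal) mode, performing ordinary matrix products on the frontal slices in the Fourier domain, and transforming back. A minimal NumPy sketch of this definition (the definition is standard; the function name is ours):

```python
import numpy as np

def t_product(A, B):
    """T-product of third-order tensors A (m x p x n3) and B (p x l x n3).

    Computed via FFT along the third mode, slice-wise matrix products in
    the Fourier domain, and an inverse FFT, which is equivalent to the
    block-circulant definition of the T-product.
    """
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    n3 = A.shape[2]
    Ch = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]  # per-slice matrix product
    # Real inputs yield a real T-product up to round-off
    return np.real(np.fft.ifft(Ch, axis=2))
```

The identity tensor for this product has the identity matrix as its first frontal slice and zeros elsewhere, which gives a quick correctness check.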
When modeling a vector of risk variables, extreme scenarios are often of special interest. The peaks-over-thresholds method hinges on the notion that, asymptotically, the excesses over a vector of high thresholds follow a multivariate generalized Pareto distribution. However, the existing literature has primarily concentrated on the setting in which all risk variables are always large simultaneously. In reality, this assumption is often not met, especially in high dimensions. In response to this limitation, we study scenarios where distinct groups of risk variables may exhibit joint extremes while others do not. These discernible groups are derived from the angular measure inherent in the corresponding max-stable distribution, whence the term extreme direction. We explore such extreme directions within the framework of multivariate generalized Pareto distributions, with a focus on their probability density functions in relation to an appropriate dominating measure. Furthermore, we provide a stochastic construction that allows any prespecified set of risk groups to constitute the distribution's extreme directions. This construction takes the form of a smoothed max-linear model and accommodates the full spectrum of conceivable max-stable dependence structures. Additionally, we introduce a generic simulation algorithm tailored for multivariate generalized Pareto distributions, offering specific implementations for extensions of the logistic and H\"usler-Reiss families capable of carrying arbitrary extreme directions.
In this paper, we present an implicit Crank-Nicolson finite element (FE) scheme for solving a nonlinear Schr\"odinger-type system, which includes the Schr\"odinger-Helmholtz system and the Schr\"odinger-Poisson system. In our numerical scheme, we employ an implicit Crank-Nicolson method for time discretization and a conforming FE method for spatial discretization. The proposed method is proved to be well-posed and to conserve mass and energy at the discrete level. Furthermore, we prove optimal $L^2$ error estimates for the fully discrete solutions. Finally, some numerical examples are provided to verify the convergence rate and conservation properties.