This paper addresses the problem of solving nonlinear systems in the context of symmetric quantum signal processing (QSP), a powerful technique for implementing matrix functions on quantum computers. Symmetric QSP focuses on representing target polynomials as products of matrices in SU(2) that possess symmetry properties. We present a novel Newton's method tailored for efficiently solving the nonlinear system involved in determining the phase factors within the symmetric QSP framework. Our method demonstrates rapid and robust convergence in all parameter regimes, including the challenging scenario of ill-conditioned Jacobian matrices, using standard double precision arithmetic. For instance, solving symmetric QSP for the highly oscillatory target function $\alpha \cos(1000 x)$ (polynomial degree $\approx 1433$) takes $6$ iterations to converge to machine precision when $\alpha=0.9$, and the iteration count only increases to $18$ when $\alpha=1-10^{-9}$, where the Jacobian matrix is highly ill conditioned. Leveraging the matrix product state structure of symmetric QSP, the computation of the Jacobian matrix incurs a computational cost comparable to a single function evaluation. Moreover, we introduce a reformulation of symmetric QSP using real-number arithmetic, further enhancing the method's efficiency. Extensive numerical tests validate the effectiveness and robustness of our approach, which has been implemented in the QSPPACK software package.
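To illustrate the core iteration, the sketch below implements a plain Newton solve for a generic nonlinear system $F(\Phi)=0$; it is a minimal stand-in, not the QSPPACK implementation, and the residual `F`, Jacobian `J`, and initial guess `phi0` are assumed to be supplied by the caller.

```python
import numpy as np

def newton_solve(F, J, phi0, tol=1e-14, max_iter=50):
    """Plain Newton iteration for F(phi) = 0 in double precision.

    F    : callable returning the residual vector (e.g., target Chebyshev
           coefficients minus those generated by the phase factors phi)
    J    : callable returning the Jacobian of F at phi
    phi0 : initial guess for the phase factors
    """
    phi = phi0.copy()
    for k in range(max_iter):
        r = F(phi)
        if np.linalg.norm(r, np.inf) < tol:
            return phi, k        # converged in k iterations
        phi -= np.linalg.solve(J(phi), r)
    return phi, max_iter
```

The paper's contribution lies in how `F` and `J` are evaluated: exploiting the matrix product state structure makes the Jacobian about as cheap as one residual evaluation.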
This paper introduces novel bulk-surface splitting schemes of first and second order for the wave equation with kinetic and acoustic boundary conditions of semi-linear type. For kinetic boundary conditions, we propose reinterpreting the system equations as a coupled system: the bulk and surface dynamics are modeled separately and connected through a coupling constraint. This allows the construction of splitting schemes, which show first-order convergence in numerical experiments. Acoustic boundary conditions, on the other hand, naturally separate bulk and surface dynamics. Here, Lie and Strang splitting schemes reach first- and second-order convergence, respectively, as we demonstrate numerically.
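As a minimal sketch of the two schemes, assuming (placeholder) flow maps `flow_bulk` and `flow_surf` that advance the separated bulk and surface subproblems over a given time step:

```python
def lie_step(flow_bulk, flow_surf, u, dt):
    # First-order Lie splitting: full bulk step, then full surface step.
    return flow_surf(flow_bulk(u, dt), dt)

def strang_step(flow_bulk, flow_surf, u, dt):
    # Second-order Strang splitting: half bulk step, full surface step,
    # then another half bulk step (symmetric composition).
    return flow_bulk(flow_surf(flow_bulk(u, dt / 2), dt), dt / 2)
```

For the coupled reformulation with kinetic boundary conditions, each flow map would additionally enforce the coupling constraint; the sketch omits this.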
Quantum computing is a growing field in which information is processed by two-level quantum states known as qubits. Current physical realizations of qubits require careful calibration, consisting of several different experiments, due to noise and decoherence phenomena. Among these characterization experiments, a crucial step is to develop a model that classifies the measured state by discriminating the ground state from the excited state. In these proceedings, we benchmark multiple classification techniques applied to real quantum devices.
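A minimal sketch of such a benchmark, using synthetic two-dimensional readout data as a stand-in for measured in-phase/quadrature (IQ) points (the classifiers and data below are illustrative, not necessarily the ones benchmarked in the paper):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
# Two Gaussian blobs standing in for ground-state (0) and
# excited-state (1) readout points in the IQ plane.
X = np.vstack([rng.normal([0.0, 0.0], 0.6, (n, 2)),
               rng.normal([1.5, 1.0], 0.6, (n, 2))])
y = np.repeat([0, 1], n)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for clf in (LogisticRegression(), GaussianNB(), RandomForestClassifier()):
    acc = clf.fit(Xtr, ytr).score(Xte, yte)
    print(f"{type(clf).__name__}: test accuracy = {acc:.3f}")
```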
This paper aims at the algorithmic/theoretical core of reinforcement learning (RL) by introducing the novel class of proximal Bellman mappings. These mappings are defined in reproducing kernel Hilbert spaces (RKHSs) to benefit from the rich approximation properties and inner product of RKHSs; they are shown to belong to the powerful Hilbertian family of (firmly) nonexpansive mappings, regardless of the values of their discount factors, and they possess ample degrees of design freedom to even reproduce attributes of the classical Bellman mappings and to pave the way for novel RL designs. An approximate policy-iteration scheme is built on the proposed class of mappings to solve the problem of selecting online, at every time instance, the "optimal" exponent $p$ in a $p$-norm loss to combat outliers in linear adaptive filtering, without training data or any knowledge of the statistical properties of the outliers. Numerical tests on synthetic data showcase the superior performance of the proposed framework over several non-RL and kernel-based RL schemes.
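To make the adaptive-filtering setting concrete, here is a hypothetical sketch of one stochastic-gradient step under a $p$-norm loss; the function name and step size are illustrative, and the paper's policy-iteration scheme is what would select the exponent `p` online at each step:

```python
import numpy as np

def pnorm_lms_step(w, x, d, p, mu=0.01):
    """One stochastic-gradient step on the p-norm loss |d - w.x|^p.

    w : filter weights, x : input regressor, d : desired output,
    p : loss exponent (in the paper, chosen online by the RL scheme).
    """
    e = d - w @ x                                      # prediction error
    grad = -p * np.abs(e) ** (p - 1) * np.sign(e) * x  # gradient of |e|^p
    return w - mu * grad
```

Small values of $p$ downweight large errors and hence suppress outliers, while $p=2$ recovers the classical LMS update.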
This study presents an importance sampling formulation based on adaptively relaxing parameters from the indicator function and/or the probability density function. The formulation embodies the prevalent mathematical concept of relaxing a complex problem into a sequence of progressively easier sub-problems. Thanks to the flexibility in constructing relaxation parameters, relaxation-based importance sampling provides a unified framework for various existing variance reduction techniques, such as subset simulation, sequential importance sampling, and annealed importance sampling. More importantly, the framework lays the foundation for creating new importance sampling strategies tailored to specific applications. To demonstrate this potential, two importance sampling strategies are proposed. The first couples annealed importance sampling with subset simulation, focusing on low-dimensional problems. The second aims to solve high-dimensional problems by leveraging spherical sampling and scaling techniques. Both methods are attractive for fragility analysis in performance-based engineering, as they can produce the entire fragility surface in a single run of the sampling algorithm. Three numerical examples, including a 1000-dimensional stochastic dynamic problem, are studied to demonstrate the proposed methods.
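The relaxation of the indicator function can be illustrated with a toy example; the limit-state function `g`, the logistic smoothing, and the schedule of relaxation parameters below are illustrative stand-ins, not the strategies proposed in the paper:

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(1)

def g(x):
    # Toy limit-state function: failure when g(x) <= 0.
    return 3.0 - x.sum(axis=1) / np.sqrt(x.shape[1])

def relaxed_indicator(x, s):
    # Logistic relaxation of 1{g(x) <= 0}; it recovers the exact
    # indicator in the limit of relaxation parameter s -> 0.
    return expit(-g(x) / s)

x = rng.standard_normal((10**6, 2))
for s in (1.0, 0.3, 0.1, 0.03):
    print(f"s = {s:4.2f}: relaxed estimate = {relaxed_indicator(x, s).mean():.3e}")
print(f"exact indicator:  {np.mean(g(x) <= 0.0):.3e}")
```

In an adaptive scheme, each relaxed sub-problem would serve as the importance sampling density for the next, progressively tighter one.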
Combinatorial optimization - a field of research addressing problems that feature strongly in a wealth of scientific and industrial contexts - has been identified as one of the core potential fields of applicability of quantum computers. It is still unclear, however, to what extent quantum algorithms can actually outperform classical algorithms for this class of problems. In this work, by resorting to computational learning theory and cryptographic notions, we prove that quantum computers feature an in-principle super-polynomial advantage over classical computers in approximating solutions to combinatorial optimization problems. Specifically, building on seminal work by Kearns and Valiant and introducing a new reduction, we identify special types of problems that are hard for classical computers to approximate up to polynomial factors. At the same time, we give a quantum algorithm that can efficiently approximate the optimal solution within a polynomial factor. The core of the quantum advantage discovered in this work is ultimately borrowed from Shor's quantum algorithm for factoring. Concretely, we prove a super-polynomial advantage for approximating special instances of the so-called integer programming problem. In doing so, we provide an explicit end-to-end construction for advantage-bearing instances. This result shows that quantum devices have, in principle, the power to approximate combinatorial optimization solutions beyond the reach of efficient classical algorithms. Our results also give clear guidance on how to construct such advantage-bearing problem instances.
We are interested in numerical algorithms for computing the electric field generated by a charge distribution localized on scale $l$ in an infinite heterogeneous correlated random medium, in a situation where the medium is only known in a box of diameter $L\gg l$ around the support of the charge. We show that the algorithm of Lu, Otto and Wang, which suggests optimal Dirichlet boundary conditions motivated by the multipole expansion of Bella, Giunti and Otto, still performs well in correlated media. With overwhelming probability, we obtain a convergence rate in terms of $l$, $L$, and the size of the correlations; numerical simulations support the optimality of this rate. These estimates are provided for ensembles that satisfy a multi-scale logarithmic Sobolev inequality, where our main tool is an extension of the semi-group estimates established by the first author. As part of our strategy, we construct sub-linear second-order correctors in this correlated setting, which is of independent interest.
This paper presents a numerical method for the simulation of elastic solid materials coupled to fluid inclusions. The application is motivated by the modeling of vascularized tissues and by problems in medical imaging that target the estimation of effective (i.e., macroscale) material properties, taking into account the influence of microscale dynamics, such as fluid flow in the microvasculature. The method is based on the recently proposed Reduced Lagrange Multipliers framework. In particular, the interface between the solid and fluid domains is not resolved within the computational mesh for the elastic material but discretized independently, with the coupling condition imposed via non-matching Lagrange multipliers. Exploiting the multiscale properties of the problem, the resulting Lagrange multiplier space is reduced to a lower-dimensional characteristic set. We present a detailed stability analysis of the resulting method, considering a non-standard boundary condition that enforces a local deformation on the solid-fluid boundary. The method is validated with several numerical examples.
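A generic toy version of imposing a coupling constraint through Lagrange multipliers is sketched below; the small dense matrices stand in for the actual finite element discretization and reduced multiplier space, which the sketch does not attempt to reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 2                      # primal unknowns, coupling constraints
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # stand-in SPD "stiffness" matrix
B = rng.standard_normal((m, n))  # constraint operator: enforce B u = g
f = rng.standard_normal(n)       # load vector
g = rng.standard_normal(m)       # prescribed interface data

# Saddle-point (KKT) system [[A, B^T], [B, 0]] [u; lam] = [f; g].
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([f, g]))
u, lam = sol[:n], sol[n:]
print("constraint residual:", np.linalg.norm(B @ u - g))
```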
We study scalable machine learning models for full event reconstruction in high-energy electron-positron collisions based on a highly granular detector simulation. Particle-flow (PF) reconstruction can be formulated as a supervised learning task using tracks and calorimeter clusters or hits. We compare a graph neural network and a kernel-based transformer and demonstrate that both avoid quadratic memory allocation and computational cost while achieving realistic PF reconstruction. We show that hyperparameter tuning on a supercomputer significantly improves the physics performance of the models. We also demonstrate that the resulting model is highly portable across hardware processors, supporting Nvidia, AMD, and Intel Habana cards. Finally, we demonstrate that the model can be trained on highly granular inputs consisting of tracks and calorimeter hits, resulting in physics performance competitive with the baseline. Datasets and software to reproduce the studies are published following the findable, accessible, interoperable, and reusable (FAIR) principles.
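As a toy stand-in for the set-to-set formulation (not the paper's GNN or transformer; the feature and output dimensions are invented), each input element, i.e. a track or a calorimeter cluster/hit given as a feature vector, is mapped to a candidate-particle prediction:

```python
import torch
from torch import nn

class ToyPFModel(nn.Module):
    """Per-element MLP mapping detector elements to particle candidates.

    Real PF models also exchange information between elements (via graph
    message passing or attention); this sketch omits that entirely."""
    def __init__(self, n_feat=12, n_hidden=64, n_out=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_feat, n_hidden), nn.ELU(),
            nn.Linear(n_hidden, n_hidden), nn.ELU())
        self.head = nn.Linear(n_hidden, n_out)  # class logits + kinematics

    def forward(self, elements):  # elements: (batch, n_elements, n_feat)
        return self.head(self.encoder(elements))

pred = ToyPFModel()(torch.randn(4, 100, 12))  # -> shape (4, 100, 7)
```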
The joint design of an optical system and its downstream algorithm is a challenging yet promising task. Because of the need to balance the global optimality of the imaging system against the computational cost of physical simulation, existing methods cannot achieve efficient joint design of complex systems such as smartphones and drones. In this work, starting from the perspective of optical design, we characterize the optics by separated aberrations. Additionally, to bridge hardware and software without gradients, an image simulation system is presented to reproduce the genuine imaging procedure of lenses with large fields of view. For aberration correction, we propose a network that perceives and corrects the spatially varying aberrations and validate its superiority over state-of-the-art methods. Comprehensive experiments reveal that the preferred order for correcting separated aberrations in joint design is: longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, field curvature, and coma, with astigmatism coming last. Drawing on this preference, a 10% reduction in the total track length of a consumer-level mobile phone lens module is accomplished. Moreover, the procedure spares more space for manufacturing deviations, realizing extreme-quality enhancement of computational photography. The optimization paradigm provides innovative insight into the practical joint design of sophisticated optical systems and post-processing algorithms.
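A toy sketch of the image-simulation idea, emulating just two of the separated aberrations (Gaussian blur as a stand-in for defocus/spherical blur, a per-channel shift for lateral chromatic aberration); a genuine simulation would derive spatially varying PSFs from the lens prescription:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def simulate_aberrations(img_rgb, blur_sigma=1.2, chroma_shift=1.5):
    """img_rgb: float array of shape (H, W, 3).

    Blurs each channel and shifts red/blue in opposite directions to
    mimic lateral chromatic aberration (illustrative parameters)."""
    out = np.empty_like(img_rgb)
    for c, s in enumerate((-chroma_shift, 0.0, chroma_shift)):
        blurred = gaussian_filter(img_rgb[..., c], sigma=blur_sigma)
        out[..., c] = shift(blurred, (0.0, s), order=1)
    return out
```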
We study feedback controller synthesis for reach-avoid control of discrete-time, linear time-invariant (LTI) systems with Gaussian process and measurement noise. The problem is to compute a controller such that, with at least some required probability, the system reaches a desired goal state in finite time while avoiding unsafe states. Due to stochasticity and nonconvexity, this problem does not, in general, admit exact algorithmic or closed-form solutions. Our key contribution is a correct-by-construction controller synthesis scheme based on a finite-state abstraction of a Gaussian belief over the unmeasured state, obtained using a Kalman filter. We formalize this abstraction as a Markov decision process (MDP). To be robust against numerical imprecision in approximating the transition probabilities, we use MDPs with intervals of transition probabilities. By construction, any policy on the abstraction can be refined into a piecewise linear feedback controller for the LTI system. We prove that the closed-loop LTI system under this controller satisfies the reach-avoid problem with at least the required probability. Numerical experiments show that our method solves reach-avoid problems for systems with up to 6D state spaces, and with control input constraints that cannot be handled by methods such as rapidly-exploring random belief trees (RRBT).
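The Gaussian belief over the unmeasured state evolves by the standard Kalman filter recursion, a minimal version of which is sketched below (the matrix names follow the usual LTI conventions and are assumptions of the sketch, not the paper's notation):

```python
import numpy as np

def kalman_step(mu, S, u, y, A, B, C, Q, R):
    """One predict/update step for the belief N(mu, S) over the state of
    x+ = A x + B u + w,  y = C x + v,  w ~ N(0, Q),  v ~ N(0, R).
    """
    # Predict through the LTI dynamics.
    mu_p = A @ mu + B @ u
    S_p = A @ S @ A.T + Q
    # Update with the noisy measurement y.
    K = S_p @ C.T @ np.linalg.inv(C @ S_p @ C.T + R)
    mu_new = mu_p + K @ (y - C @ mu_p)
    S_new = (np.eye(mu.size) - K @ C) @ S_p
    return mu_new, S_new
```

The abstraction is then built over a finite set of such beliefs, with the interval transition probabilities absorbing the numerical approximation error.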