We present a novel quantum high-dimensional linear regression algorithm with an $\ell_1$-penalty based on the classical LARS (Least Angle Regression) pathwise algorithm. Like existing classical algorithms for Lasso, our quantum algorithm provides the full regularisation path as the penalty term varies, but quadratically faster per iteration under specific conditions. A quadratic speedup in the number of features $d$ is possible by using the quantum minimum-finding routine from D\"urr and Hoyer (arXiv'96) to compute the joining time at each iteration. We then improve upon this simple quantum algorithm and obtain a quadratic speedup in both the number of features $d$ and the number of observations $n$ by using the approximate quantum minimum-finding routine from Chen and de Wolf (ICALP'23). As one of our main contributions, we construct a quantum unitary that approximately computes the joining times to be searched over by approximate quantum minimum finding. Since the joining times are only computed approximately, it is no longer clear that the resulting quantum algorithm obtains a good solution. As our second main contribution, we prove, via an approximate version of the KKT conditions and a duality gap, that the LARS algorithm (and thus our quantum algorithm) is robust to errors: it still outputs a path that minimises the Lasso cost function up to a small error if the joining times are computed approximately. Moreover, we show that, when the observations are sampled from a Gaussian distribution, our quantum algorithm's complexity depends only polylogarithmically on $n$, exponentially better than the classical LARS algorithm, while keeping the quadratic improvement in $d$. Finally, we propose a dequantised algorithm that also retains the polylogarithmic dependence on $n$, albeit with the linear scaling in $d$ of the standard LARS algorithm.
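For reference, the standard objects underlying the above (textbook Lasso notation, not the paper's own): the regularisation path traces the minimiser of the Lasso cost as the penalty $\lambda$ varies,
\[
\hat{\beta}(\lambda) \in \operatorname*{argmin}_{\beta \in \mathbb{R}^d} \; \tfrac{1}{2}\,\|X\beta - y\|_2^2 + \lambda\,\|\beta\|_1, \qquad X \in \mathbb{R}^{n \times d},
\]
with KKT conditions
\[
X_j^\top (y - X\hat{\beta}) = \lambda\,\operatorname{sign}(\hat{\beta}_j) \ \text{ if } \hat{\beta}_j \neq 0, \qquad \big|X_j^\top (y - X\hat{\beta})\big| \le \lambda \ \text{ if } \hat{\beta}_j = 0;
\]
the approximate KKT conditions mentioned above are, roughly, relaxations of these conditions by a small error.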
We initiate a novel approach to explaining the predictions and out-of-sample performance of random forest (RF) regression and classification models by exploiting the fact that any RF can be mathematically formulated as an adaptive weighted k-nearest-neighbors model. Specifically, we employ a recent result that, for both regression and classification tasks, any RF prediction can be rewritten exactly as a weighted sum of the training targets, where the weights are RF proximities between the corresponding pairs of data points. We show that this linearity facilitates a local notion of explainability of RF predictions that generates attributions for any model prediction across observations in the training set, thereby complementing established feature-based methods like SHAP, which generate attributions for a model prediction across input features. We show how this proximity-based approach to explainability can be used in conjunction with SHAP to explain not only the model predictions but also out-of-sample performance, in the sense that proximities furnish a novel means of assessing when a given model prediction is more or less likely to be correct. We demonstrate this approach in the modeling of US corporate bond prices and returns in both regression and classification settings.
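As a concrete illustration of the identity this approach rests on, here is a minimal sketch (not the authors' implementation) using scikit-learn with bootstrapping disabled, so that each tree's leaf value is the mean of all training targets in that leaf and the weighted-sum identity holds exactly:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
rf = RandomForestRegressor(n_estimators=50, bootstrap=False, random_state=0)
rf.fit(X, y)

x_new = X[:1]                    # query point
leaves_train = rf.apply(X)       # (n_samples, n_trees) leaf indices
leaves_new = rf.apply(x_new)     # (1, n_trees)

# Proximity weight of training point i: average over trees of
# 1{same leaf as x_new} / (size of that leaf in that tree).
same_leaf = leaves_train == leaves_new               # (n, T) boolean
leaf_sizes = same_leaf.sum(axis=0, keepdims=True)    # points per matching leaf
weights = (same_leaf / leaf_sizes).mean(axis=1)      # (n,), sums to 1

# The RF prediction equals the proximity-weighted sum of training targets.
assert np.isclose(weights @ y, rf.predict(x_new)[0])
```

Classification works analogously with one-hot class indicators in place of $y$; with bootstrapping enabled, the exact identity requires a proximity definition that accounts for in-bag sample counts, as in the result the paper cites.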
This paper introduces a novel approach to image denoising that leverages the advantages of Generative Adversarial Networks (GANs). Specifically, we propose a model that combines elements of the Pix2Pix model and the Wasserstein GAN (WGAN) with Gradient Penalty (WGAN-GP). This hybrid framework seeks to capitalize on the denoising capabilities of conditional GANs, as demonstrated in the Pix2Pix model, while avoiding the exhaustive search for optimal hyperparameters that can destabilise the learning process. In the proposed method, the GAN's generator produces the denoised images, harnessing the power of a conditional GAN for noise reduction, while the Lipschitz continuity constraint enforced during updates, as featured in WGAN-GP, reduces susceptibility to mode collapse. This design allows the proposed model to benefit from the strengths of both Pix2Pix and WGAN-GP, generating superior denoising results while ensuring training stability. Drawing on previous work on image-to-image translation and GAN stabilization techniques, this research highlights the potential of GANs as a general-purpose solution for denoising. The paper details the development and testing of this model, demonstrating its effectiveness through numerical experiments. The training dataset was created by adding synthetic noise to clean images, and numerical results on real-world validation data underscore the efficacy of the approach, exhibiting significant improvements over traditional techniques. Notably, the proposed model demonstrates strong generalization, performing effectively even though it was trained with synthetic noise.
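To make the stabilisation mechanism concrete, here is a minimal PyTorch sketch of the WGAN-GP gradient penalty of Gulrajani et al., the term that softly enforces the Lipschitz constraint on the critic; this is illustrative code under the assumption of NCHW image batches, not the authors' implementation:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviations of the critic's gradient norm from 1
    at points interpolated between real and fake batches."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True)                       # keep graph for backprop
    grad_norm = grads.flatten(1).norm(2, dim=1)  # per-sample gradient norm
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```

The penalty is simply added to the critic loss at each update; $\lambda_{gp} = 10$ is the value suggested in the original WGAN-GP paper.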
In this note we derive a new version of the trek rule for the continuous Lyapunov equation. This linear matrix equation characterizes the cross-sectional steady-state covariance matrix of a Gaussian Markov process, and the trek rule links the graphical structure of the drift of the process to the entries of this covariance matrix. In general, the trek rule is a power series expansion of the covariance matrix, while in the special case where the drift is acyclic, it simplifies to a polynomial in the off-diagonal entries of the drift matrix. Using the trek rule we give relatively explicit formulas for the entries of the covariance matrix for some special cases of the drift matrix. Furthermore, we use the trek rule to derive a new lower bound on the variances in the acyclic case.
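To fix ideas (standard background, not the note's new results): for an Ornstein-Uhlenbeck process $dX_t = M X_t\,dt + D\,dW_t$ with stable drift matrix $M$, the steady-state covariance $\Sigma$ solves the continuous Lyapunov equation
\[
M\Sigma + \Sigma M^\top + C = 0, \qquad C = DD^\top,
\]
and the trek rule expresses the entries of $\Sigma$ as a series over weighted walks (treks) in the graph whose edges correspond to the nonzero entries of $M$.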
We present PV-OSIMr, an efficient algorithm for computing the Delassus matrix (also known as the inverse operational-space inertia matrix) for a kinematic tree, with the lowest-order computational complexity known in the literature. PV-OSIMr is derived by optimizing the Popov-Vereshchagin (PV) solver computations using the compositionality of the force and motion propagators. It has a computational complexity of $O(n + m^2)$, compared to $O(n + m^2 d)$ for the original PV-OSIM algorithm and $O(n + md + m^2)$ for the extended force propagator algorithm (EFPA), where $n$ is the number of joints, $m$ is the number of constraints and $d$ is the depth of the kinematic tree. Since computing the Delassus matrix requires constructing an $m \times m$ matrix and must consider all $n$ joints at least once, the asymptotic computational complexity of PV-OSIMr is optimal. We further benchmark our algorithm and find it to be often more efficient than PV-OSIM and EFPA in practice.
We propose a metaheuristic algorithm enhanced with feature-based guidance, designed to solve the Capacitated Vehicle Routing Problem (CVRP). The guidance is derived from a supervised Machine Learning (ML) model that we develop and explain, and it is used to control the diversity of solutions during the optimization process. The metaheuristic itself combines neighborhood search with a novel hybrid split and path-relinking mechanism, within which the proposed guidance is implemented. The guidance yields a statistically significant improvement to the metaheuristic when solving the CVRP. Moreover, the guided metaheuristic produces solutions competitive with state-of-the-art metaheuristic algorithms.
We present an Alternating Direction Method of Multipliers (ADMM) algorithm designed to solve the Weighted Generalized Fused LASSO Signal Approximator (wFLSA). First, we show that wFLSAs can always be reformulated as a Generalized LASSO problem. With the availability of algorithms tailored to the Generalized LASSO, the issue appears to be, in principle, resolved. However, the computational complexity of these algorithms is high, with a time complexity of $O(p^4)$ for a single iteration, where $p$ represents the number of coefficients. To overcome this limitation, we propose an ADMM algorithm specifically tailored for wFLSA-equivalent problems, significantly reducing the complexity to $O(p^2)$. Our algorithm is publicly accessible through the R package wflsa.
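For orientation, here is a minimal ADMM sketch for the generalized LASSO signal approximator $\min_\beta \tfrac{1}{2}\|y - \beta\|_2^2 + \lambda\|D\beta\|_1$, the problem class to which wFLSAs reduce. This naive version solves a dense $p \times p$ system per iteration and is illustrative only, not the $O(p^2)$ algorithm of the paper or the wflsa package:

```python
import numpy as np

def admm_gen_lasso(y, D, lam, rho=1.0, n_iter=500):
    """ADMM for min_b 0.5*||y - b||^2 + lam*||D b||_1 via the split D b = z."""
    p = y.size
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])  # scaled dual variable
    A = np.eye(p) + rho * D.T @ D  # beta-update system matrix
    for _ in range(n_iter):
        beta = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Db = D @ beta
        # z-update: soft-thresholding at level lam/rho
        z = np.sign(Db + u) * np.maximum(np.abs(Db + u) - lam / rho, 0.0)
        u = u + Db - z
    return beta

# Example: 1D fused lasso (D = first differences) on a noisy step signal
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
D = np.diff(np.eye(100), axis=0)
beta = admm_gen_lasso(y, D, lam=1.0)
```

Pre-factorizing $I + \rho D^\top D$ would amortize the per-iteration solve; the paper's contribution is an ADMM tailored to the structure of wFLSA-equivalent problems that brings the per-iteration cost down to $O(p^2)$.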
The author previously introduced models of linear logic known as ``Interaction Graphs'', which generalise Girard's various Geometry of Interaction constructions. In this work, we establish how these models essentially rely on a deep connection between zeta functions and the execution of programs, expressed as a cocycle. This is first shown in the simple case of graphs, before being lifted to dynamical systems. Focussing on probabilistic models, we then explain how the notion of graphings used in Interaction Graphs captures a natural class of sub-Markov processes. We then extend the realisability constructions and the notion of zeta function to provide a realisability model of second-order linear logic over the set of all (discrete-time) sub-Markov processes.
We present a framework for solving partial differential equations on evolving surfaces. Based on the grid-based particle method (GBPM) [18], the method can naturally resample the surface even under large deformations induced by the motion law. We introduce a new component in the local reconstruction step of the algorithm and demonstrate numerically that this modification improves computational accuracy when a region of large curvature develops during the evolution. The method also incorporates a recently developed constrained least-squares ghost sample points (CLS-GSP) formulation, which can lead to a better-conditioned discretized matrix for computing certain surface differential operators. The proposed framework is flexible enough to incorporate many existing methods, thereby linking various approaches to the same problem. Several numerical experiments are carried out to show the accuracy and effectiveness of the proposed method.
We apply functional acceleration to Policy Mirror Descent (PMD), a general family of algorithms covering a wide range of novel and fundamental methods in Reinforcement Learning (RL). Leveraging duality, we propose a momentum-based PMD update. By taking the functional route, our approach is independent of the policy parametrization and applicable to large-scale optimization, and it covers previous applications of momentum at the level of policy parameters as a special case. We theoretically analyze several properties of this approach and complement the analysis with a numerical ablation study, which serves to illustrate the policy optimization dynamics on the value polytope relative to different algorithmic design choices in this space. We further characterize numerically several features of the problem setting relevant to functional acceleration, and lastly, we investigate the impact of approximation on the learning mechanics of these methods.
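For readers unfamiliar with the family, a standard formulation of the base method (the momentum-based variant is the paper's contribution and is not reproduced here): per state $s$, the basic PMD update with step size $\eta$ and mirror map $\Phi$ is
\[
\pi_{k+1}(\cdot \mid s) = \operatorname*{argmax}_{p \,\in\, \Delta(\mathcal{A})} \; \eta\,\langle Q^{\pi_k}(s,\cdot),\, p\rangle - D_\Phi\big(p,\ \pi_k(\cdot \mid s)\big),
\]
where $D_\Phi$ is the Bregman divergence induced by $\Phi$; the negative-entropy mirror map, for instance, recovers multiplicative (softmax) policy updates.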
The classical Fokker-Planck equation (FPE) is a key tool in physics for describing systems influenced by drag forces and Gaussian noise, with applications spanning multiple fields. We consider the fractional Fokker-Planck equation (FFPE), which models the time evolution of probability densities for systems driven by L\'evy processes, relevant in scenarios where Gaussian assumptions fail. The paper presents an efficient and accurate numerical approach for the free-space FFPE with constant coefficients and Dirac-delta initial conditions. This method utilizes the integral representation of the solutions and enables the efficient handling of very high-dimensional problems using fast algorithms. Our work is the first to present a high-precision numerical solver for the free-space FFPE with Dirac-delta initial conditions. This opens the door for future research on more complex scenarios, including those with variable coefficients and other types of initial conditions.
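As an example of such an integral representation (the simplest constant-coefficient, drift-free case, stated as standard background rather than taken from the paper): for $\partial_t p = -D(-\Delta)^{\alpha/2} p$ with $p(x,0) = \delta(x)$, the Fourier transform gives $\hat{p}(k,t) = e^{-D|k|^\alpha t}$, hence
\[
p(x,t) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} e^{\,i k \cdot x}\, e^{-D |k|^{\alpha} t}\, \mathrm{d}k,
\]
the density of an isotropic $\alpha$-stable process; evaluating such oscillatory integrals by fast quadrature is the kind of task the proposed solver targets.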