The quantum squaring operation is a useful building block in implementing quantum algorithms such as linear regression, regularized least squares, order finding, quantum search, Newton-Raphson division, Euclidean distance calculation, cryptography, and the computation of roots and reciprocals. Quantum circuits can be made fault-tolerant by using error-correcting codes and fault-tolerant quantum gates (such as the Clifford + T gate set). However, the T-gate is very costly to implement, and two-qubit gates (such as the CNOT gate) are more prone to noise errors than single-qubit gates. Consequently, to realize reliable quantum algorithms, quantum circuits should have a low T-count and CNOT-count. In this paper, we present a novel quantum integer squaring architecture optimized for T-count, CNOT-count, T-depth, CNOT-depth, and $KQ_T$ that produces no garbage outputs. To reduce costs, we use a novel arrangement of the generated partial products that reduces the number of adders by 50%. We also use the resource-efficient logical-AND gate and uncomputation gate shown in [1] to further save resources. The proposed quantum squaring circuit achieves an asymptotic reduction of 66.67% in T-count, 50% in T-depth, 29.41% in CNOT-count, 42.86% in CNOT-depth, and 25% in $KQ_T$ with respect to Thapliyal et al. [2]. With respect to Nagamani et al. [3], the design achieves an asymptotic reduction of 77.27% in T-count, 68.75% in T-depth, 50% in CNOT-count, 61.90% in CNOT-depth, and 6.25% in $KQ_T$.
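As a purely classical illustration of the partial-product saving that squaring permits (the symmetric cross terms $a_ia_j$ and $a_ja_i$ merge into a single term shifted left by one bit, and the diagonal terms satisfy $a_ia_i=a_i$), consider the sketch below; it mimics the arithmetic only, not the paper's quantum architecture, and the function name is ours.

```python
def square_partial_products(bits):
    """Classically emulate the folded partial-product matrix of squaring.

    bits: list of 0/1 values, bits[i] being the weight-2^i bit.
    Squaring needs only n(n-1)/2 AND terms instead of the n^2 terms of a
    general multiplier: a_i*a_i = a_i (no AND gate needed), and each
    symmetric pair a_i*a_j + a_j*a_i collapses to one term of weight
    2^(i+j+1).
    """
    n = len(bits)
    total = 0
    for i in range(n):
        total += bits[i] << (2 * i)                      # diagonal: a_i^2 = a_i
        for j in range(i + 1, n):
            total += (bits[i] & bits[j]) << (i + j + 1)  # merged symmetric pair
    return total

# 13 = 1101b, as a little-endian bit list:
assert square_partial_products([1, 0, 1, 1]) == 13 ** 2
```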
Robust and effective scaling of models from small to large width typically requires the precise adjustment of many algorithmic and architectural details, such as parameterization and optimizer choices. In this work, we propose a new perspective on parameterization by investigating a key assumption in prior work about the alignment between parameters and data, and we derive new theoretical results under weaker assumptions and for a broader set of optimizers. Our extensive empirical investigation includes tens of thousands of models trained with all combinations of three optimizers, four parameterizations, several alignment assumptions, more than a dozen learning rates, and fourteen model sizes up to 26.8B parameters. We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work. Our results show that all parameterizations, not just maximal update parameterization (muP), can achieve hyperparameter transfer; moreover, our novel per-layer learning rate prescription for standard parameterization outperforms muP. Finally, we demonstrate that an overlooked aspect of parameterization, the epsilon parameter in Adam, must be scaled correctly to avoid gradient underflow, and we propose Adam-atan2, a new numerically stable, scale-invariant version of Adam that eliminates the epsilon hyperparameter entirely.
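To make the epsilon problem concrete, here is a minimal NumPy sketch contrasting the standard Adam direction with an atan2-style one; the paper's exact Adam-atan2 formulation and any scaling constants may differ, so treat this purely as an illustration of the idea.

```python
import numpy as np

def adam_direction(m, v, eps=1e-8):
    """Standard Adam direction: when sqrt(v) shrinks below eps, the
    update is dominated by eps and effectively vanishes (underflow)."""
    return m / (np.sqrt(v) + eps)

def adam_atan2_direction(m, v):
    """atan2-style direction: scale-invariant, since
    atan2(c*m, c*sqrt(v)) == atan2(m, sqrt(v)) for any c > 0; bounded in
    (-pi/2, pi/2); and it has no epsilon hyperparameter at all."""
    return np.arctan2(m, np.sqrt(v))

# Shrinking both moments by a common factor should not change the step:
m, v = np.array([1e-10]), np.array([1e-20])   # sqrt(v) == |m| == 1e-10
print(adam_direction(m, v))        # ~0.0099: swamped by eps
print(adam_atan2_direction(m, v))  # ~0.785 (= atan2(1, 1)): scale ignored
```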
Despite their linguistic competence, Large Language Models (LLMs) often exhibit limitations in their ability to reason reliably and flexibly. To address this, we propose a neurosymbolic approach that prompts LLMs to extract and encode all relevant information from a problem statement as logical code statements, and then uses a logic programming language (Prolog) to conduct the iterative computations of explicit deductive reasoning. Our approach significantly enhances the performance of LLMs on the standard mathematical reasoning benchmark GSM8k and on the Navigate task from the BIG-bench benchmark. Additionally, we introduce a novel dataset, the Non-Linear Reasoning (NLR) dataset, consisting of 55 unique word problems that target the shortcomings of the next-token-prediction paradigm of LLMs and that require complex non-linear reasoning but only basic arithmetic skills to solve. Our findings demonstrate that the integration of Prolog enables LLMs to achieve high performance on the NLR dataset, which even the most advanced language models (including GPT-4) fail to solve using text alone.
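As a schematic of the pipeline (assuming SWI-Prolog with the pyswip bindings; the toy problem and predicate names are ours, not from the NLR dataset), the LLM would emit logic statements like the clause asserted below, and Prolog performs the search and deduction:

```python
from pyswip import Prolog  # assumes SWI-Prolog and the pyswip bindings

prolog = Prolog()
# Statements an LLM might emit for: "Alice has 3 times as many apples
# as Bob; together they have 24 apples. How many does Alice have?"
prolog.assertz("solve(Alice, Bob) :- between(0, 24, Bob), Alice is 3 * Bob, 24 is Alice + Bob")

# Prolog, not the LLM, carries out the iterative computation:
for solution in prolog.query("solve(Alice, Bob)"):
    print(solution["Alice"], solution["Bob"])  # -> 18 6
```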
Herein, the topics of (natural) gradient descent, data decorrelation, and approximate methods for backpropagation are brought into dialogue. Natural gradient descent illuminates how gradient vectors, pointing at directions of steepest descent, can be improved by considering the local curvature of loss landscapes. We extend this perspective and show that to fully solve the problem illuminated by natural gradients in neural networks, one must recognise that correlations in the data at any linear transformation, including the node responses at every layer of a neural network, cause a non-orthonormal relationship between the model's parameters. Solving this requires decorrelating the inputs at each individual layer of a neural network. We describe a range of methods that have been proposed for decorrelation and whitening of node output, and we provide a novel method that is specifically useful for distributed computing and computational neuroscience. Implementing decorrelation within multi-layer neural networks, we show that not only is training via backpropagation sped up significantly, but existing approximations of backpropagation, which have failed catastrophically in the past, are also made performant once more. This has the potential to provide a route forward for approximate gradient descent methods that have previously been discarded, for training approaches for analogue and neuromorphic hardware, and potentially for insights into the efficacy and utility of decorrelation processes in the brain.
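A minimal sketch of one such layer-wise decorrelation step, under our own simplifying assumptions (the paper's exact update rule may differ): maintain a square matrix R per layer and nudge it so that the transformed activations have covariance closer to diagonal.

```python
import numpy as np

def decorrelation_step(R, X, lr=1e-3):
    """One update of a per-layer decorrelating transform R (a simplified
    anti-Hebbian rule; the paper's exact update may differ).

    X: (batch, d) activations entering the layer. The update shrinks the
    off-diagonal covariance of the transformed activations Y = X @ R.T.
    """
    Y = X @ R.T
    C = (Y.T @ Y) / len(Y)            # empirical covariance of the outputs
    off = C - np.diag(np.diag(C))     # the unwanted cross-correlations
    R = R - lr * (off @ R)            # push them toward zero
    return Y, R

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 8))  # correlated inputs
R = np.eye(8)
for _ in range(2000):
    Y, R = decorrelation_step(R, X)
print(np.round(np.corrcoef(Y.T), 2))  # off-diagonal entries shrink toward 0
```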
High dimensional expanders (HDXs) are a hypergraph generalization of expander graphs. They are extensively studied in the math and TCS communities due to their many applications. Like expander graphs, HDXs are especially interesting for applications when they have bounded degree, namely, when the number of edges incident to every vertex is bounded. However, only a handful of constructions are known to have this property, all of which rely on algebraic techniques. In particular, no random or combinatorial construction of bounded degree HDXs is known. As a result, our understanding of these objects is limited. The degree of an $i$-face in an HDX is the number of $(i+1)$-faces containing it. In this work we construct HDXs whose higher dimensional faces have bounded degree. This is done by giving an elementary and deterministic algorithm that takes as input a regular $k$-dimensional HDX $X$ and outputs another $k$-dimensional HDX $\widehat{X}$ with twice as many vertices. While the degree of the vertices in $\widehat{X}$ grows, the degree of the $(k-1)$-faces in $\widehat{X}$ stays the same. As a result, we obtain a new `algebra-free' construction of HDXs whose $(k-1)$-face degree is bounded. Our algorithm is based on a simple and natural generalization of the construction of Bilu and Linial (Combinatorica, 2006), which builds expanders using lifts coming from edge signings. Our construction is based on local lifts of HDXs, where a local lift is a complex whose top-level links are lifts of links in the original complex. We demonstrate that a local lift of an HDX is an HDX in many cases. In addition, combining local lifts with existing bounded degree constructions yields new families of bounded degree HDXs whose links are significantly different from those of previous constructions. For every large enough $D$, we use this technique to construct families of bounded degree HDXs with links of diameter $\geq D$.
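For intuition, the graph (one-dimensional) case of a lift from an edge signing, following Bilu and Linial, is easy to state in code; the paper's local lifts of complexes generalize this, and the sketch below is only the classical graph construction.

```python
def two_lift(edges, sign):
    """Bilu-Linial style 2-lift of a graph from an edge signing.

    edges: (u, v) pairs; sign: dict mapping each edge to +1 or -1.
    The lift has vertices (v, 0) and (v, 1): a +1 edge connects the two
    copies in parallel, a -1 edge connects them crosswise.
    """
    lifted = []
    for (u, v) in edges:
        if sign[(u, v)] == +1:
            lifted += [((u, 0), (v, 0)), ((u, 1), (v, 1))]
        else:
            lifted += [((u, 0), (v, 1)), ((u, 1), (v, 0))]
    return lifted

# A triangle with exactly one -1 edge lifts to a 6-cycle:
print(two_lift([(0, 1), (1, 2), (2, 0)],
               {(0, 1): +1, (1, 2): +1, (2, 0): -1}))
```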
Differential Privacy (DP) mechanisms usually force a reduction in data utility by producing "out-of-bound" noisy results under a tight privacy budget. We introduce the Budget Recycling Differential Privacy (BR-DP) framework, designed to provide soft-bounded noisy outputs for a broad range of existing DP mechanisms. By "soft-bounded," we refer to the mechanism's ability to release most outputs within a predefined error boundary, thereby improving utility while maintaining privacy. The core of BR-DP consists of two components: a DP kernel responsible for generating a noisy answer per iteration, and a recycler that probabilistically recycles/regenerates or releases the noisy answer. We delve into the privacy accounting of BR-DP, culminating in a budgeting principle that optimally sub-allocates the available budget between the DP kernel and the recycler. Furthermore, we introduce algorithms for tight BR-DP accounting in composition scenarios; our findings indicate that BR-DP achieves reduced privacy leakage post-composition compared to DP. Additionally, we explore privacy amplification via subsampling within the BR-DP framework and propose optimal sampling rates for BR-DP across various queries. Experiments with real data demonstrate BR-DP's effectiveness in improving the utility-privacy tradeoff provided by DP mechanisms.
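A schematic of the kernel/recycler loop follows; this is our own simplification, in which the Laplace kernel, the out-of-bound release probability, and the iteration cap are arbitrary placeholders, whereas the paper derives them from its budgeting principle and privacy accounting.

```python
import numpy as np

def br_dp_release(true_answer, bound, eps_kernel, p_out=0.1, max_iters=50,
                  rng=None):
    """Kernel/recycler loop sketch. Per iteration, the DP kernel draws a
    Laplace-noised answer (sensitivity assumed to be 1); the recycler
    releases it if it lies within the error bound, otherwise releases it
    only with probability p_out and recycles the budget to try again.
    """
    rng = rng or np.random.default_rng()
    for _ in range(max_iters):
        noisy = true_answer + rng.laplace(scale=1.0 / eps_kernel)
        if abs(noisy - true_answer) <= bound or rng.random() < p_out:
            return noisy
    return noisy  # give up recycling: release the last draw

print(br_dp_release(100.0, bound=2.0, eps_kernel=0.5))  # usually in [98, 102]
```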
A new variational inference method, SPH-ParVI, based on smoothed particle hydrodynamics (SPH), is proposed for sampling from partially known densities (e.g., densities known up to a constant) or for sampling using gradients. SPH-ParVI simulates the flow of a fluid under external effects driven by the target density; the transient or steady state of the fluid approximates the target density. The continuum fluid is modelled as an interacting particle system (IPS) via SPH, where each particle carries smoothed properties and interacts and evolves according to the Navier-Stokes equations. This mesh-free, Lagrangian simulation method offers fast, flexible, scalable and deterministic sampling and inference for a class of probabilistic models such as those encountered in Bayesian inference and generative modelling.
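For flavour, a kernel-based interacting-particle update is sketched below; note that this is an SVGD-style toy stand-in, not the paper's SPH/Navier-Stokes discretization, and the kernel bandwidth and step size are arbitrary.

```python
import numpy as np

def particle_step(x, score, h=0.5, dt=0.1):
    """One SVGD-style update of an interacting particle system: a
    kernel-smoothed score drives particles toward the target while the
    kernel gradient acts as a repulsive 'pressure' keeping them spread
    out. x: (n, d) positions; score: function returning grad log p."""
    diff = x[:, None, :] - x[None, :, :]             # pairwise x_i - x_j
    k = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))  # Gaussian kernel k_ij
    repulsion = (diff * k[..., None]).sum(axis=1) / h ** 2
    return x + dt * (k @ score(x) + repulsion) / len(x)

rng = np.random.default_rng(1)
x = rng.uniform(-4, 4, size=(200, 2))
for _ in range(500):
    x = particle_step(x, lambda z: -z)    # target: standard 2D Gaussian
print(np.round(x.mean(0), 2), np.round(x.std(0), 2))  # mean near 0, spread ~1
```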
We are interested in the distribution of Wishart samples after forgetting their scaling factors. We call such a distribution a projective Wishart distribution. We show that projective Wishart distributions have strong links with the affine-invariant geometry of symmetric positive definite matrices in the real case or Hermitian positive definite matrices in the complex case. First, the Fréchet mean of a projective Wishart distribution is the covariance parameter, up to a scaling factor, of the corresponding Wishart distribution. Second, in the case of $2 \times 2$ matrices, the densities have simple expressions in terms of the affine-invariant distance.
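For reference, the affine-invariant distance between positive definite matrices $A$ and $B$ that enters such density expressions is $d(A,B) = \lVert \log\big(A^{-1/2} B A^{-1/2}\big) \rVert_F = \big(\sum_i \log^2 \lambda_i\big)^{1/2}$, where the $\lambda_i$ are the eigenvalues of $A^{-1}B$.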
Long-horizon tasks, which have a large discount factor, pose a challenge for most conventional reinforcement learning (RL) algorithms. Algorithms such as Value Iteration and Temporal Difference (TD) learning have slow convergence rates and become inefficient in these tasks. When the transition distributions are given, PID VI was recently introduced to accelerate the convergence of Value Iteration using ideas from control theory. Inspired by this, we introduce the PID TD Learning and PID Q-Learning algorithms for the RL setting, in which only samples from the environment are available. We give a theoretical analysis of their convergence and of their acceleration relative to their traditional counterparts. We also introduce a method for adapting the PID gains in the presence of noise and empirically verify its effectiveness.
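A minimal sketch of the PID idea applied to TD(0) is given below; the gain values and the exact placement of the integral and derivative terms are our assumptions, whereas the paper provides the principled analysis and a gain-adaptation scheme.

```python
import numpy as np

def pid_td(transitions, n_states, kp=1.0, ki=0.05, kd=0.2,
           alpha=0.1, beta=0.9, gamma=0.99):
    """PID TD(0) sketch for policy evaluation. P = the usual TD error,
    I = a decaying accumulation of TD errors, D = the change in the
    value estimate; kp=1, ki=0, kd=0 recovers plain TD(0)."""
    V, V_prev, z = (np.zeros(n_states) for _ in range(3))
    for (s, r, s_next) in transitions:
        delta = r + gamma * V[s_next] - V[s]   # P term
        z[s] = beta * z[s] + alpha * delta     # I term: trace of TD errors
        d = V[s] - V_prev[s]                   # D term: estimate's change
        V_prev[s] = V[s]
        V[s] += alpha * kp * delta + ki * z[s] + kd * d
    return V

# Two-state chain: 0 -> 1 (r=0), 1 -> 1 (r=1); V* = [99, 100] at gamma=0.99.
stream = [(0, 0.0, 1), (1, 1.0, 1)] * 5000
print(pid_td(stream, n_states=2))  # approaches [99, 100]
```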
We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
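To make the lattice concrete, the extra word paths are exactly the lexicon matches over the character sequence. A small sketch of that matching step follows (with a toy lexicon and the classic 南京市长江大桥 example):

```python
def lattice_words(chars, lexicon, max_len=4):
    """Find all lexicon words over a character sequence, keyed by their
    start/end positions; these spans become the extra 'word' paths that
    the lattice LSTM gates into the character chain."""
    spans = []
    for i in range(len(chars)):
        for j in range(i + 1, min(i + max_len, len(chars)) + 1):
            word = "".join(chars[i:j])
            if word in lexicon:
                spans.append((i, j, word))
    return spans

lexicon = {"南京", "南京市", "长江", "长江大桥", "大桥", "市长"}
print(lattice_words(list("南京市长江大桥"), lexicon))
# [(0, 2, '南京'), (0, 3, '南京市'), (2, 4, '市长'), (3, 5, '长江'),
#  (3, 7, '长江大桥'), (5, 7, '大桥')]
```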
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
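The core computation, scaled dot-product attention, follows the paper's formula $\mathrm{Attention}(Q,K,V)=\mathrm{softmax}(QK^T/\sqrt{d_k})V$; a small NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v). The 1/sqrt(d_k) scaling
    keeps the logits from growing with d_k and saturating the softmax.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 4))
print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)
```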