The problem of reconstructing a sequence of independent and identically distributed symbols from a set of equal-size, consecutive fragments, together with a dependent reference sequence, is considered. First, in the regime in which the fragments are relatively long, and typically no fragment appears more than once, the scaling of the failure probability of the maximum likelihood reconstruction algorithm is exactly determined for perfect reconstruction and bounded for partial reconstruction. Second, the regime in which the fragments are relatively short and repeating fragments abound is characterized. A trade-off is established between the fraction of fragments that cannot be adequately reconstructed and the distortion level allowed for the reconstruction of each fragment, while still allowing a vanishing failure probability.
We study the question of whether submodular functions of random variables satisfying various notions of negative dependence satisfy Chernoff-like concentration inequalities. We prove such a concentration inequality for the lower tail when the random variables satisfy negative association or negative regression, resolving an open problem raised by \citet{approx/QiuS22}. Previous work showed such concentration results for random variables that come from specific dependent-rounding algorithms \citep{focs/ChekuriVZ10,soda/HarveyO14}. We discuss some applications of our results to combinatorial optimization and beyond.
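For concreteness, the Chernoff-like lower-tail bounds in question typically take the following shape (constants vary between results; this is a representative form, not the paper's exact statement): for a monotone submodular $f$ with marginal values in $[0,1]$ and $\mu = \mathbb{E}[f(X_1,\dots,X_n)]$,
\[
\Pr\big[f(X_1,\dots,X_n) \le (1-\delta)\mu\big] \;\le\; e^{-\Omega(\mu\delta^2)}, \qquad \delta \in (0,1).
\]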
Stable Diffusion, a generative model used in text-to-image synthesis, frequently encounters resolution-induced composition problems when generating images of varying sizes. This issue primarily stems from the model being trained on pairs of single-scale images and their corresponding text descriptions. Moreover, direct training on images of unlimited sizes is infeasible, as it would require an immense number of text-image pairs and entail substantial computational expense. To overcome these challenges, we propose a two-stage pipeline named Any-Size-Diffusion (ASD), designed to efficiently generate well-composed images of any size, while minimizing the need for high-memory GPU resources. Specifically, the initial stage, dubbed Any Ratio Adaptability Diffusion (ARAD), leverages a selected set of images with a restricted range of ratios to optimize the text-conditional diffusion model, thereby improving its ability to adjust composition to accommodate diverse image sizes. To support the creation of images at any desired size, we further introduce a technique called Fast Seamless Tiled Diffusion (FSTD) at the subsequent stage. This method allows for the rapid enlargement of the ASD output to any high-resolution size, avoiding seaming artifacts and memory overloads. Experimental results on the LAION-COCO and MM-CelebA-HQ benchmarks demonstrate that ASD can produce well-structured images of arbitrary sizes, cutting inference time by 2x compared to the traditional tiled algorithm.
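To illustrate the seam problem that tiled generation must solve, the sketch below feathers overlapping tiles with strictly positive blending ramps so adjacent tiles agree in the overlap region. It is a generic tiled-processing sketch under our own simplifying assumptions (`denoise_tile` is a stand-in for one model pass), not the paper's FSTD implementation:

```python
import numpy as np

def blend_weights(tile, overlap):
    """Strictly positive 1-D ramp on both ends, used to feather tile seams."""
    w = np.ones(tile)
    ramp = (np.arange(overlap) + 1.0) / (overlap + 1.0)
    w[:overlap] = ramp
    w[-overlap:] = ramp[::-1]
    return w

def tiled_process(image, denoise_tile, tile=64, overlap=16):
    """Apply `denoise_tile` over overlapping tiles and blend the overlaps.

    `denoise_tile` is a placeholder for one diffusion pass on a tile; the
    real FSTD pipeline operates in latent space and is more involved.
    """
    h, w = image.shape
    step = tile - overlap
    ys = list(range(0, h - tile, step)) + [h - tile]
    xs = list(range(0, w - tile, step)) + [w - tile]
    wgt = np.outer(blend_weights(tile, overlap), blend_weights(tile, overlap))
    out = np.zeros_like(image, dtype=float)
    acc = np.zeros_like(image, dtype=float)
    for y in ys:
        for x in xs:
            out[y:y+tile, x:x+tile] += denoise_tile(image[y:y+tile, x:x+tile]) * wgt
            acc[y:y+tile, x:x+tile] += wgt
    return out / acc  # every pixel has positive accumulated weight

# Toy usage with an identity "denoiser" on a 128x128 array.
img = np.random.rand(128, 128)
print(np.allclose(tiled_process(img, lambda t: t), img))  # True
```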
Spiking Neural Networks (SNNs) have emerged as a promising alternative within the field of Artificial Neural Networks (ANNs), attracting the attention of researchers due to their ability to mimic the human brain and process complex information with remarkable speed and accuracy. This research aimed to optimise the training process of Liquid State Machines (LSMs), a recurrent SNN architecture, by identifying the most effective weight range to assign in an SNN so as to minimise the difference between the desired and actual output. The experimental results showed that, by using spike metrics and a suitable range of weights, the desired output and the actual output of spiking neurons could be effectively optimised, leading to improved performance of SNNs. The results were tested and confirmed using three different weight initialisation approaches, with the best results obtained using the Barabási-Albert random graph method.
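As a small illustration of the best-performing initialisation named above, the following sketch draws a reservoir topology from a Barabási-Albert random graph and assigns weights uniformly from a candidate range; the function name and the specific range are our own placeholders, not the paper's code:

```python
import numpy as np
import networkx as nx

def ba_reservoir_weights(n_neurons=100, m_edges=3, w_range=(0.05, 0.5), seed=0):
    """Initialise a reservoir weight matrix from a Barabasi-Albert graph.

    `w_range` is a placeholder for whatever interval a search over weight
    ranges selects; the BA topology is the part taken from the abstract.
    """
    rng = np.random.default_rng(seed)
    g = nx.barabasi_albert_graph(n_neurons, m_edges, seed=seed)
    w = np.zeros((n_neurons, n_neurons))
    for u, v in g.edges:
        w[u, v] = w[v, u] = rng.uniform(*w_range)  # symmetric synapse weight
    return w

weights = ba_reservoir_weights()
print(weights.shape, (weights != 0).mean())  # matrix size and connection density
```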
Until high-fidelity quantum computers with a large number of qubits become widely available, classical simulation remains a vital tool for algorithm design, tuning, and validation. We present a simulator for the Quantum Approximate Optimization Algorithm (QAOA). Our simulator is designed with the goal of reducing the computational cost of QAOA parameter optimization and supports both CPU and GPU execution. Our central observation is that the computational cost of both simulating the QAOA state and computing the QAOA objective to be optimized can be reduced by precomputing the diagonal Hamiltonian encoding the problem. We reduce the time for a typical QAOA parameter optimization by a factor of eleven for $n = 26$ qubits compared to a state-of-the-art GPU quantum circuit simulator based on cuQuantum. Our simulator is available on GitHub: https://github.com/jpmorganchase/QOKit
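The precomputation idea can be made concrete in a few lines: the $2^n$ diagonal of the cost Hamiltonian is computed once, after which the phase separator is an elementwise multiplication and the objective a weighted sum of probabilities. The following MaxCut sketch illustrates the principle under our own simplifications and is not the QOKit implementation:

```python
import numpy as np

def maxcut_diagonal(n, edges):
    """Precompute the 2^n diagonal of the MaxCut cost Hamiltonian once."""
    idx = np.arange(2 ** n)
    bits = (idx[:, None] >> np.arange(n)) & 1  # bit j of each basis state
    c = np.zeros(2 ** n)
    for u, v in edges:
        c += bits[:, u] ^ bits[:, v]           # edge is cut iff bits differ
    return c

def qaoa_objective(gammas, betas, costs, n):
    """Simulate QAOA and return <psi|C|psi> using the precomputed diagonal."""
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)  # uniform |+>^n state
    for gamma, beta in zip(gammas, betas):
        psi = np.exp(-1j * gamma * costs) * psi          # phase separator: elementwise
        for q in range(n):                               # mixer: exp(-i*beta*X) per qubit
            psi = psi.reshape(-1, 2, 2 ** q)
            a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
            psi[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
            psi[:, 1, :] = np.cos(beta) * b - 1j * np.sin(beta) * a
            psi = psi.reshape(-1)
    return float(np.real(np.abs(psi) ** 2 @ costs))      # expectation of the diagonal

costs = maxcut_diagonal(3, [(0, 1), (1, 2)])  # toy path graph on 3 vertices
print(qaoa_objective([0.4], [0.3], costs, 3))
```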
In the context of distributed certification, the recognition of graph classes has started to be intensively studied. For instance, different results related to the recognition of planar, bounded tree-width and $H$-minor free graphs have been recently obtained. The goal of the present work is to design compact certificates for the local recognition of relevant geometric intersection graph classes, namely interval, chordal, circular-arc, trapezoid and permutation graphs. More precisely, we give proof labeling schemes recognizing each of these classes with logarithmic-sized certificates. We also provide tight logarithmic lower bounds on the size of the certificates of proof labeling schemes for the recognition of each of the aforementioned geometric intersection graph classes.
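To give a flavour of the certificate format (a hypothetical toy, not the paper's scheme): for interval graphs, a natural certificate assigns each vertex an interval, and each vertex checks locally that its interval intersects every neighbour's. The check below covers only this direction; certifying that non-adjacent vertices' intervals are consistent is where the actual schemes do their work:

```python
def verify_interval_certificates(adj, cert):
    """Each vertex checks its interval intersects every neighbour's interval.

    `adj` maps vertex -> set of neighbours, `cert` maps vertex -> (l, r).
    This is only a partial consistency check: a full proof labeling scheme
    must also rule out missing edges, which cannot be read off the local
    neighbourhood alone.
    """
    for v, neighbours in adj.items():
        lv, rv = cert[v]
        for u in neighbours:
            lu, ru = cert[u]
            if max(lv, lu) > min(rv, ru):  # intervals do not intersect
                return False
    return True

# A path a-b-c with pairwise-intersecting intervals along it.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
cert = {"a": (0, 2), "b": (1, 4), "c": (3, 5)}
print(verify_interval_certificates(adj, cert))  # True
```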
A well-known approach in the design of efficient algorithms, called matrix sparsification, approximates a matrix $A$ with a sparse matrix $A'$. Achlioptas and McSherry [2007] initiated a long line of work on spectral-norm sparsification, which aims to guarantee that $\|A'-A\|\leq \epsilon \|A\|$ for error parameter $\epsilon>0$. Various forms of matrix approximation motivate considering this problem with a guarantee according to the Schatten $p$-norm for general $p$, which includes the spectral norm as the special case $p=\infty$. We investigate the relation between fixed but different $p\neq q$, that is, whether sparsification in the Schatten $p$-norm implies (existentially and/or algorithmically) sparsification in the Schatten $q$-norm with similar sparsity. An affirmative answer could be tremendously useful, as it would identify which value of $p$ to focus on. Our main finding is a surprising contrast between this question and the analogous case of $\ell_p$-norm sparsification for vectors: For vectors, the answer is affirmative for $p<q$ and negative for $p>q$, but for matrices we answer negatively for almost all sufficiently distinct $p\neq q$. In addition, our explicit constructions may be of independent interest.
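For reference, the Schatten $p$-norm of a matrix $A$ with singular values $\sigma_1 \ge \sigma_2 \ge \dots$ is
\[
\|A\|_{S_p} = \Big(\sum_i \sigma_i(A)^p\Big)^{1/p},
\]
so $p=\infty$ recovers the spectral norm $\|A\| = \sigma_1(A)$, and the guarantee above reads $\|A'-A\|_{S_p} \le \epsilon \|A\|_{S_p}$.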
We introduce two new classes of measures of information for statistical experiments which generalise and subsume $\phi$-divergences, integral probability metrics, $\mathfrak{N}$-distances (MMD), and $(f,\Gamma)$-divergences between two or more distributions. This enables us to derive a simple geometrical relationship between measures of information and the Bayes risk of a statistical decision problem, thus extending the variational $\phi$-divergence representation to multiple distributions in an entirely symmetric manner. The new families of divergences are closed under the action of Markov operators, which yields an information processing equality that refines and generalises the classical data processing inequality. This equality gives insight into the significance of the choice of hypothesis class in classical risk minimisation.
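The two-distribution variational representation being generalised is the classical one (stated here in its standard form): for a convex $\phi$ with $\phi(1)=0$ and convex conjugate $\phi^*$,
\[
D_\phi(P \,\|\, Q) = \sup_{g} \; \mathbb{E}_P[g(X)] - \mathbb{E}_Q[\phi^*(g(X))],
\]
with the supremum taken over a suitable class of functions $g$; the paper's construction extends this to multiple distributions symmetrically.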
Computing gradients of an expectation with respect to the distributional parameters of a discrete distribution is a problem arising in many fields of science and engineering. Typically, this problem is tackled using REINFORCE, which frames gradient estimation as a Monte Carlo simulation. Unfortunately, the REINFORCE estimator is especially sensitive to discrepancies between the true probability distribution and the drawn samples, a common issue in low-sample regimes that results in inaccurate gradient estimates. In this paper, we introduce DBsurf, a REINFORCE-based estimator for discrete distributions that uses a novel sampling procedure to reduce the discrepancy between the samples and the actual distribution. To assess the performance of our estimator, we subject it to a diverse set of tasks. Among existing estimators, DBsurf attains the lowest variance in a least squares problem commonly used in the literature for benchmarking. Furthermore, DBsurf achieves the best results for training variational auto-encoders (VAE) across different datasets and sampling setups. Finally, we apply DBsurf to build a simple and efficient Neural Architecture Search (NAS) algorithm with state-of-the-art performance.
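For context, the baseline score-function (REINFORCE) estimator that DBsurf builds on can be written in a few lines for a categorical distribution; this sketch shows only the standard estimator, not DBsurf's sampling procedure:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def reinforce_grad(logits, f, n_samples=1000, seed=0):
    """Score-function (REINFORCE) estimate of d/d(logits) E_{x~p}[f(x)].

    Uses grad = E[f(x) * d log p(x)/d logits]; for a softmax categorical,
    d log p(x)/d logits = onehot(x) - p.
    """
    rng = np.random.default_rng(seed)
    p = softmax(logits)
    xs = rng.choice(len(p), size=n_samples, p=p)
    grads = np.zeros_like(logits)
    for x in xs:
        score = -p.copy()
        score[x] += 1.0          # onehot(x) - p
        grads += f(x) * score
    return grads / n_samples

logits = np.array([0.5, -0.2, 0.1])
est = reinforce_grad(logits, f=lambda x: float(x == 2))
p = softmax(logits)
exact = p[2] * (np.eye(3)[2] - p)  # analytic gradient of E[1{x=2}] = p_2
print(est, exact)                  # the estimate approaches the exact gradient
```

With few samples the estimate is noisy, illustrating the low-sample sensitivity the abstract describes.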
Memristors provide a tempting solution for weighted synapse connections in neuromorphic computing due to their size and non-volatile nature. However, memristors are unreliable under the commonly used voltage-pulse-based programming approaches and require precisely shaped pulses to avoid programming failure. In this paper, we demonstrate a current-limiting-based solution that provides more predictable analog memory behavior when reading and writing memristive synapses. Compared to the 1T1R design, our proposed design improves the READ current by about 19x and saves about 9x energy. Our 3T1R design also shows a promising write operation that is less affected by process variation in MOSFETs and by the inherent stochastic behavior of memristors. The memristors used for testing are hafnium-oxide based and were fabricated in a 65nm hybrid CMOS-memristor process. The proposed design also shows a linear relationship between the applied voltage and the resulting resistance for the write operation. The simulated and measured data show similar patterns for both voltage-pulse-based programming and current-compliance-based programming. We further observed the impact of this behavior on neuromorphic-specific applications such as spiking neural networks.
Consider the normal linear regression setup where the number of covariates $p$ is much larger than the sample size $n$, and the covariates form correlated groups. The response variable $y$ is not related to an entire group of covariates on an all-or-none basis; rather, the sparsity assumption holds both within and between groups. We extend the traditional g-prior setup to this framework. Variable selection consistency of the proposed method is shown under fairly general conditions, assuming the covariates to be random and allowing the true model to grow with both $n$ and $p$. To implement the proposed g-prior method in the high-dimensional setup, we propose two procedures: first, a group screening procedure termed group SIS (GSIS), and second, a novel stochastic search variable selection algorithm termed the group informed variable selection algorithm (GiVSA), which uses the known group structure efficiently to explore the model space without discarding any covariate based on an initial screening. Screening consistency of GSIS and the theoretical mixing time of GiVSA are studied using the canonical path ensemble approach of Yang et al. (2016). The performance of the proposed prior with GSIS as well as GiVSA is validated using various simulated examples and a real dataset related to residential buildings.
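For reference, the classical Zellner g-prior being extended has the standard form
\[
\beta \mid \sigma^2, g \;\sim\; \mathcal{N}_p\!\big(0,\; g\,\sigma^2 (X^\top X)^{-1}\big),
\]
which is well defined only when $X^\top X$ is invertible and hence requires modification when $p > n$, motivating extensions such as the one proposed here.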