The quantum communication cost of computing a classical sum of distributed sources is studied over a quantum erasure multiple access channel (QEMAC). $K$ classical messages are distributed across $S$ servers, who also share quantum entanglement in advance. Each server $s\in[S]$ manipulates and sends its quantum subsystem $\mathcal{Q}_s$ to the receiver who computes the sum of the messages. The download cost from Server $s\in [S]$ is the logarithm of the dimension of $\mathcal{Q}_s$. The rate $R$ is defined as the number of instances of the sum computed at the receiver, divided by the total download cost from all the servers. In the symmetric setting with $K = \binom{S}{\alpha}$ messages where each message is replicated among a unique subset of $\alpha$ servers, and the answers from any $\beta$ servers may be erased, we show that the capacity (maximal rate) is $C = \max\left\{ \min \left\{ \frac{2(\alpha-\beta)}{S}, \frac{S-2\beta}{S} \right\}, \frac{\alpha-\beta}{S} \right\}$.
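As a quick sanity check of the closed-form expression, the following snippet (function and variable names hypothetical) evaluates the stated capacity for a small symmetric instance.

```python
from math import comb

def qemac_capacity(S: int, alpha: int, beta: int) -> float:
    """Capacity C = max{ min{ 2(alpha-beta)/S, (S-2*beta)/S }, (alpha-beta)/S }."""
    return max(min(2 * (alpha - beta) / S, (S - 2 * beta) / S),
               (alpha - beta) / S)

# Example: S = 4 servers, each message replicated on alpha = 3 servers,
# answers from up to beta = 1 server may be erased; K = C(4, 3) = 4 messages.
S, alpha, beta = 4, 3, 1
print(comb(S, alpha), qemac_capacity(S, alpha, beta))  # 4 0.5
```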
One of the theoretically intriguing problems in computer-aided geometric modeling arises from stitching tensor-product Bézier patches. When the patches share an extraordinary vertex, $C^1$ or $G^1$ continuity cannot be obtained along the edges emanating from that vertex. Unfortunately, this stitching problem cannot be solved by using higher-degree or rational polynomials. In this paper, we present a modified de Casteljau subdivision algorithm that provides a solution to this problem. Our modified de Casteljau subdivision, when combined with topological modeling, provides a framework for interactive real-time modeling of piecewise smooth manifold meshes with arbitrary topology. The main advantage of the modified subdivision is that $C^1$ continuity on a given boundary edge does not depend on the positions of the control points on the other boundary edges. The modified subdivision allows us to obtain the desired $C^1$ continuity along the edges emanating from the extraordinary vertices, along with the desired $G^1$ continuity at the extraordinary vertices.
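As background for the subdivision step that the paper modifies, the sketch below implements the classical (unmodified) de Casteljau subdivision for a single Bézier curve; the modified, patch-level variant and its continuity guarantees are not reproduced here.

```python
import numpy as np

def de_casteljau_subdivide(control_points, t=0.5):
    """Classical de Casteljau evaluation at parameter t; returns the control
    polygons of the two sub-curves (left, right) produced by subdivision."""
    pts = np.asarray(control_points, dtype=float)
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # one level of the triangle
        left.append(pts[0])
        right.append(pts[-1])
    return np.array(left), np.array(right)[::-1]

# Subdivide a cubic Bezier curve at its midpoint.
left, right = de_casteljau_subdivide([[0, 0], [1, 2], [3, 2], [4, 0]])
```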
A major limitation for the broader scope of problems solvable by transformers is the quadratic scaling of computational complexity with input size. In this study, we investigate the recurrent memory augmentation of pre-trained transformer models to extend input context length while linearly scaling compute. Our approach demonstrates the capability to store information in memory for sequences of up to an unprecedented two million tokens while maintaining high retrieval accuracy. Experiments with language modeling tasks show perplexity improvement as the number of processed input segments increases. These results underscore the effectiveness of our method, which has significant potential to enhance long-term dependency handling in natural language understanding and generation tasks, as well as enable large-scale context processing for memory-intensive applications.
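As a rough illustration of the idea (not the authors' exact implementation), the sketch below processes an input segment by segment, prepending memory tokens and carrying their updated states forward; this is why compute grows linearly with the number of segments rather than quadratically with total length.

```python
import torch

def process_with_recurrent_memory(transformer, segments, memory):
    """Schematic segment-level recurrence: memory tokens are prepended to each
    segment and their updated states are carried to the next segment."""
    m = memory.shape[0]
    outputs = []
    for seg in segments:                      # each seg: (seg_len, d_model)
        x = torch.cat([memory, seg], dim=0)   # [memory tokens | segment tokens]
        y = transformer(x)                    # any fixed-context encoder
        memory, out = y[:m], y[m:]            # read back memory, keep segment output
        outputs.append(out)
    return torch.cat(outputs, dim=0), memory
```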
We conduct a systematic study of the approximation properties of the Transformer for sequence modeling with long, sparse, and complicated memory. We investigate the mechanisms through which different components of the Transformer, such as the dot-product self-attention, positional encoding, and feed-forward layer, affect its expressive power, and we study their combined effects by establishing explicit approximation rates. Our study reveals the roles of critical parameters in the Transformer, such as the number of layers and the number of attention heads, and these insights also provide natural suggestions for alternative architectures.
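For readers who want the components named above in one place, here is a stripped-down single-head block in NumPy (layer normalization and multi-head projections omitted); it is a reference sketch of the standard architecture, not code from this study.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def transformer_block(X, P, Wq, Wk, Wv, W1, W2):
    """One single-head block exposing the components studied above:
    positional encoding P, dot-product self-attention, feed-forward layer."""
    H = X + P                                    # positional encoding
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # dot-product self-attention
    H = H + A @ V                                # residual connection
    return H + np.maximum(H @ W1, 0) @ W2        # ReLU feed-forward layer
```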
We present a theoretical approach to overcoming the curse of dimensionality using a neural computation algorithm that can be distributed across several machines. Our modular distributed deep learning paradigm, termed \textit{neural pathways}, can achieve arbitrary accuracy while only loading a small number of parameters into GPU VRAM. Formally, we prove that for every error level $\varepsilon>0$ and every Lipschitz function $f:[0,1]^n\to \mathbb{R}$, one can construct a neural pathways model which uniformly approximates $f$ to $\varepsilon$ accuracy over $[0,1]^n$ while only requiring networks of $\mathcal{O}(\varepsilon^{-1})$ parameters to be loaded in memory and $\mathcal{O}(\varepsilon^{-1}\log(\varepsilon^{-1}))$ to be loaded during the forward pass. This improves on the optimal bounds for traditional non-distributed deep learning models, namely ReLU MLPs, which need $\mathcal{O}(\varepsilon^{-n/2})$ parameters to achieve the same accuracy. The only other available deep learning models that break the curse of dimensionality are MLPs with super-expressive activation functions. However, we demonstrate that these models have infinite VC dimension, even with bounded depth and width restrictions, unlike the neural pathways model. This implies that only the latter generalizes. Our analysis is validated experimentally in both regression and classification tasks, demonstrating that our model exhibits superior performance compared to larger centralized benchmarks.
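To make the gap concrete, the following back-of-the-envelope comparison plugs illustrative values into the two stated bounds (constants and log factors ignored; the numbers carry no meaning beyond their order of magnitude).

```python
# Illustrative comparison of the stated bounds for a Lipschitz target on [0,1]^n:
# a ReLU MLP needs on the order of eps**(-n/2) parameters, while the
# neural pathways model only loads on the order of eps**(-1) parameters.
eps, n = 0.01, 10
mlp_params     = eps ** (-n / 2)   # ~1e10
pathway_params = eps ** (-1)       # 1e2
print(mlp_params, pathway_params)
```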
We describe a version of the FGLM algorithm that can be used to compute generic fibers of positive-dimensional polynomial ideals. It combines the FGLM algorithm with a Hensel lifting strategy. We show that this algorithm has complexity quasi-linear in the number of lifting steps. We also provide experimental data demonstrating the practical efficacy of our algorithm. Additionally, we sketch a related Hensel lifting method for computing Gr\"obner bases using so-called tracers.
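As a point of reference only, the snippet below uses SymPy to compute Gröbner bases of the same zero-dimensional ideal under a degree order and under the lexicographic order. It illustrates the change-of-order problem that FGLM solves efficiently, but it invokes SymPy's generic routine rather than FGLM itself, and it says nothing about the Hensel lifting strategy of this paper.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
F = [x**2 + y**2 - 1, x*y - 1]

# Same ideal, two monomial orders: a "cheap" degree order and the lex order
# typically wanted for elimination / structured output.
print(groebner(F, x, y, order='grevlex'))
print(groebner(F, x, y, order='lex'))
```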
We develop an enthalpy-based modeling and computational framework to quantify uncertainty in Stefan problems with an injection boundary. Inspired by airfoil icing studies, we consider a system featuring an injection boundary inducing domain changes and a free boundary separating phases, resulting in two types of moving boundaries. Our proposed enthalpy-based formulation seamlessly integrates thermal diffusion across the domain with energy fluxes at the boundaries, addressing a modified injection condition for boundary movement. Uncertainty then stems from random variations in the injection boundary. The primary focus of our Uncertainty Quantification (UQ) centers on investigating the effects of uncertainty on free boundary propagation. Through mapping to a reference domain, we derive an enthalpy-based numerical scheme tailored to the transformed coordinate system, facilitating a simple and efficient simulation. Numerical and UQ studies in one and two dimensions validate the proposed model and the extended enthalpy method. They offer intriguing insights into ice accretion and other multiphysics processes involving phase transitions.
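For orientation, the textbook enthalpy formulation of a Stefan problem (without the injection boundary treated in this work) couples the enthalpy $H$ and temperature $T$ as $\frac{\partial H}{\partial t} = \nabla \cdot (k \nabla T)$ with $H(T) = \rho c\, T + \rho L\, \phi(T)$, where $\phi \in [0,1]$ is the liquid fraction and $L$ the latent heat; the free boundary is recovered implicitly as the level set where $\phi$ jumps, which is what makes enthalpy-based schemes convenient for problems with moving boundaries.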
Quantum annealing (QA) is a type of analog quantum computation that is a relaxed form of adiabatic quantum computation and uses quantum fluctuations to search for ground-state solutions of a programmable Ising model. Here we present extensive experimental random number results from a D-Wave 2000Q quantum annealer, totaling over 20 billion bits of QA measurements, which is significantly more data than in previous D-Wave QA random number generator studies. Current quantum annealers are susceptible to noise from environmental sources and to calibration errors, and are not in general unbiased samplers. Therefore, it is of interest to quantify whether noisy quantum annealers can effectively function as unbiased quantum random number generators (QRNGs). The amount of data collected from the quantum annealer allows a comprehensive analysis of the random bits to be performed using the NIST SP 800-22 Rev 1a test suite, as well as min-entropy estimates from NIST SP 800-90B. The randomness tests show that the random bits generated by the D-Wave 2000Q are biased and do not form unpredictable random bit sequences. With no server-side sampling post-processing, the $1$ microsecond annealing time measurements had a min-entropy of $0.824$.
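For context, per-bit min-entropy can be approximated by the most-common-value estimate $H_{\min} = -\log_2 \max(p_0, p_1)$; the sketch below computes this simplified proxy on a synthetic biased bit stream (the full NIST SP 800-90B suite applies several further estimators and confidence corrections).

```python
import numpy as np

def most_common_value_min_entropy(bits):
    """Simplified most-common-value estimate of per-bit min-entropy."""
    bits = np.asarray(bits)
    p1 = bits.mean()
    return -np.log2(max(p1, 1 - p1))

# Example: a slightly biased bit stream (bias chosen for illustration only).
rng = np.random.default_rng(0)
sample = (rng.random(1_000_000) < 0.53).astype(int)
print(most_common_value_min_entropy(sample))  # well below 1 bit per sample
```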
Accurately modeling soft robots in simulation is computationally expensive and commonly falls short of representing the real world. This well-known discrepancy, the sim-to-real gap, can have several causes, such as coarsely approximated geometry and material models, manufacturing defects, viscoelasticity and plasticity, and hysteresis effects. Residual physics networks learn from real-world data to augment a discrepant model and bring it closer to reality. Here, we present a residual physics method for modeling soft robots with many degrees of freedom. We train neural networks to learn a residual term -- the modeling error between the simulated and physical systems. Concretely, the residual term is a force applied to the whole simulated mesh, while real position data is collected with only sparse motion markers. The physical prior of the analytical simulation provides a starting point for the residual network, and the combined model is better informed than if the physics were learned tabula rasa. We demonstrate our method on 1) a silicone elastomeric beam and 2) a soft pneumatic arm with hard-to-model, anisotropic fiber reinforcements. Our method outperforms traditional system identification by up to 60%. We show that residual physics need not be limited to low degrees of freedom but can effectively bridge the sim-to-real gap for high-dimensional systems.
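A schematic of the residual-physics setup, with all names, dimensions, and the toy stand-in simulator purely hypothetical: the network predicts a force correction that is added to the analytical simulation, and the loss compares predicted positions at sparse marker locations against real measurements.

```python
import torch
import torch.nn as nn

n_nodes, state_dim = 50, 150                   # mesh nodes, flattened state size
marker_idx = torch.arange(0, n_nodes, 10)      # sparse motion-capture markers

def simulate_step(state, extra_forces):
    """Stand-in for a differentiable analytical simulator: a toy linear update
    so the example runs end to end; not a real soft-body solver."""
    return state.view(-1, n_nodes, 3) + 0.01 * extra_forces

residual_net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                             nn.Linear(256, n_nodes * 3))
optimizer = torch.optim.Adam(residual_net.parameters(), lr=1e-3)

state = torch.randn(8, state_dim)              # dummy batch of simulator states
marker_targets = torch.randn(8, len(marker_idx), 3)

f_residual = residual_net(state).view(-1, n_nodes, 3)  # learned force correction
positions = simulate_step(state, f_residual)           # simulation + residual term
loss = ((positions[:, marker_idx] - marker_targets) ** 2).mean()
loss.backward()
optimizer.step()
```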
The existence of representative datasets is a prerequisite of many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, remains a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
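As one concrete example of the schemes surveyed, the sketch below implements plain uniform affine (asymmetric) quantization to a low-bit integer grid; the survey covers many refinements (per-channel scales, non-uniform quantizers, quantization-aware training) that this sketch omits.

```python
import numpy as np

def quantize_uniform(x, num_bits=4):
    """Uniform affine quantization: map real values to num_bits-wide integers
    via a scale and zero-point, then dequantize for comparison."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(-x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    x_hat = (q - zero_point) * scale              # dequantized approximation
    return q, x_hat

x = np.random.randn(1000).astype(np.float32)
q, x_hat = quantize_uniform(x, num_bits=4)
print(np.abs(x - x_hat).max())                    # worst-case rounding error
```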