For a convolutional code over a symbol erasure channel, the information debt $I(t)$ at time $t$ measures the number of additional code symbols required to recover all message symbols up to time $t$. Information-debt-optimal streaming ($i$DOS) codes are convolutional codes which allow for the recovery of all message symbols up to time $t$ whenever $I(t)$ returns to zero, under the following conditions: (i) the information debt is non-zero for at most $\tau$ consecutive time slots and (ii) the information debt never exceeds a given threshold. The existence of periodically time-varying $i$DOS codes is known for all parameters. In this paper, we address the problem of constructing explicit, time-invariant $i$DOS codes. We present an explicit time-invariant construction of $i$DOS codes for the unit-memory ($m=1$) case. It is also shown that a construction method for convolutional codes due to Almeida et al. leads to explicit time-invariant $i$DOS codes for all parameters. However, this general construction requires a larger field size than the first construction for the $m=1$ case.
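To make the central quantity concrete, the following is a minimal sketch of how the information debt of an $(n,k)$ convolutional code can be tracked over an erasure pattern. It assumes the standard recursion in which the debt grows by the $k$ new message symbols of each time slot and shrinks by each unerased code symbol; the recursion and all names are illustrative, not taken from the paper.

```python
def information_debt(erasures, n, k):
    """Track I(t) for an (n, k) streaming code over an erasure pattern.

    erasures[t] is the number of erased code symbols in slot t
    (0 <= erasures[t] <= n). Assumes the standard recursion
    I(t) = max(I(t-1) + k - (n - erasures[t]), 0).
    """
    debt, history = 0, []
    for e in erasures:
        received = n - e                      # unerased code symbols this slot
        debt = max(debt + k - received, 0)    # debt never goes negative
        history.append(debt)
    return history

# Example: a rate-1/2 code (n=2, k=1) hit by a burst of two fully erased
# slots; the debt returns to zero two slots after the burst ends.
print(information_debt([0, 2, 2, 0, 0], n=2, k=1))  # [0, 1, 2, 1, 0]
```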
Variational Autoencoders and their many variants have displayed an impressive ability to perform dimensionality reduction, often achieving state-of-the-art performance. Many current methods, however, struggle to learn good representations in High Dimensional, Low Sample Size (HDLSS) tasks, an inherently challenging setting. We address this challenge by using an ensemble of lightweight VAEs to learn posteriors over subsets of the feature space, which are aggregated into a joint posterior in a novel divide-and-conquer approach. Specifically, we present an alternative factorisation of the joint posterior that induces a form of implicit data augmentation, yielding greater sample efficiency. Through a series of experiments on eight real-world datasets, we show that our method learns better latent representations in HDLSS settings, which leads to higher accuracy in a downstream classification task. Furthermore, we verify that our approach has a positive effect on disentanglement and achieves a lower estimated Total Correlation on the learnt representations. Finally, we show that our approach is robust to partial features at inference time, exhibiting little performance degradation even with most features missing.
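The abstract does not specify the aggregation rule, so as one concrete illustration, the sketch below combines the diagonal-Gaussian posteriors produced by an ensemble of feature-subset VAEs with a product-of-experts rule, a common choice for merging Gaussian posteriors over a shared latent space; it should be read as an assumption, not the paper's exact factorisation.

```python
import numpy as np

def product_of_gaussians(mus, vars_):
    """Combine diagonal-Gaussian posteriors q_i(z) = N(mu_i, var_i) into the
    Gaussian proportional to their product (product-of-experts aggregation).

    mus, vars_: arrays of shape (num_experts, latent_dim).
    """
    precisions = 1.0 / vars_
    var = 1.0 / precisions.sum(axis=0)         # combined variance
    mu = var * (precisions * mus).sum(axis=0)  # precision-weighted mean
    return mu, var

# Two lightweight VAEs, each seeing a different feature subset, report
# posteriors over the same 3-dimensional latent space.
mus = np.array([[0.0, 1.0, -1.0], [0.5, 0.5, -0.5]])
vars_ = np.array([[1.0, 0.5, 2.0], [1.0, 1.0, 1.0]])
print(product_of_gaussians(mus, vars_))
```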
We present a novel and easy-to-use method for calibrating error-rate based confidence intervals to evidence-based support intervals. Support intervals are obtained by inverting Bayes factors based on a parameter estimate and its standard error. A $k$ support interval can be interpreted as "the observed data are at least $k$ times more likely under the included parameter values than under a specified alternative". Support intervals depend on the specification of prior distributions for the parameter under the alternative, and we present several types that allow different forms of external knowledge to be encoded. We also show how prior specification can, to some extent, be avoided by considering a class of prior distributions and then computing so-called minimum support intervals which, for a given class of priors, have a one-to-one mapping with confidence intervals. We also illustrate how the sample size of a future study can be determined based on the concept of support. Finally, we show how the bound for the type I error rate of Bayes factors leads to a bound for the coverage of support intervals. An application to data from a clinical trial illustrates how support intervals can lead to inferences that are both intuitive and informative.
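As a worked illustration of inverting a Bayes factor into a support interval, the sketch below assumes a normal likelihood for the parameter estimate and a normal prior under the alternative, so the Bayes factor at a candidate value $\theta_0$ is $\mathrm{N}(\hat\theta; \theta_0, \mathrm{se}^2)/\mathrm{N}(\hat\theta; m, \mathrm{se}^2 + v)$; the grid search and all numbers are illustrative.

```python
import numpy as np
from scipy import stats

def support_interval(est, se, k, prior_mean, prior_sd):
    """k support interval, assuming est ~ N(theta, se^2) and a normal prior
    N(prior_mean, prior_sd^2) for theta under the alternative.

    BF(theta0) = N(est; theta0, se^2) / N(est; prior_mean, se^2 + prior_sd^2);
    the interval collects all theta0 with BF(theta0) >= k.
    """
    marginal = stats.norm.pdf(est, prior_mean, np.sqrt(se**2 + prior_sd**2))
    grid = np.linspace(est - 6 * se, est + 6 * se, 10_001)
    bf = stats.norm.pdf(est, grid, se) / marginal
    inside = grid[bf >= k]
    if inside.size == 0:
        return None  # no parameter value reaches support level k
    return inside.min(), inside.max()

# Estimate 0.5 with standard error 0.2; prior N(0, 1) under the alternative.
print(support_interval(est=0.5, se=0.2, k=3, prior_mean=0.0, prior_sd=1.0))
```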
In this paper, we introduce a new code completion task that focuses on handling long code input, and we propose a sparse Transformer model, called LongCoder, to address it. LongCoder employs a sliding window mechanism for self-attention and introduces two types of globally accessible tokens - bridge tokens and memory tokens - to improve performance and efficiency. Bridge tokens are inserted throughout the input sequence to aggregate local information and facilitate global interaction, while memory tokens highlight important statements that may be invoked later and need to be memorized, such as package imports and definitions of classes, functions, or structures. We conduct experiments on a newly constructed dataset that contains longer code contexts as well as on the publicly available CodeXGLUE benchmark. Experimental results demonstrate that LongCoder achieves superior performance on code completion tasks compared to previous models while maintaining comparable efficiency in terms of computational resources during inference. All code and data are available at https://github.com/microsoft/CodeBERT.
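A rough way to picture LongCoder's attention pattern is a boolean mask that combines a local sliding window with a handful of globally accessible positions; the sketch below is an illustrative simplification (window size, token placement, and the absence of causal masking are all assumptions, not the model's exact design).

```python
import numpy as np

def sparse_attention_mask(seq_len, window, global_positions):
    """Boolean mask (True = attention allowed) combining a sliding window
    with globally accessible tokens, in the spirit of bridge/memory tokens.
    """
    idx = np.arange(seq_len)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window  # local window
    mask[:, global_positions] = True   # every token can attend to global ones
    mask[global_positions, :] = True   # global tokens attend everywhere
    return mask

# 16 tokens, window of 2, with bridge-like global tokens at positions 0 and 8.
m = sparse_attention_mask(16, window=2, global_positions=[0, 8])
print(int(m.sum()), "allowed attention pairs out of", m.size)
```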
This work extends the theory of identifiability in supervised learning by considering the consequences of having access to a distribution of tasks. In such cases, we show that identifiability is achievable even in the case of regression, extending prior work restricted to the single-task classification case. Furthermore, we show that the existence of a task distribution which defines a conditional prior over latent variables reduces the equivalence class for identifiability to permutations and scaling, a much stronger and more useful result. When we further assume a causal structure over these tasks, our approach enables simple maximum marginal likelihood optimization together with downstream applicability to causal representation learning. Empirically, we validate that our model outperforms more general unsupervised models in recovering canonical representations for synthetic and real-world data.
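As a toy rendering of the setup (purely illustrative: the paper is not restricted to linear models), suppose each task $t$ fixes a conditional prior $z \sim \mathrm{N}(\mu_t, I)$ over latents and observations are decoded linearly; the marginal likelihood is then Gaussian, so "maximum marginal likelihood" can be evaluated in closed form.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

d_z, d_x, n_tasks = 2, 3, 5
A_true = rng.normal(size=(d_x, d_z))          # linear "decoder" (toy assumption)
task_means = rng.normal(size=(n_tasks, d_z))  # each task's conditional prior mean
sigma = 0.1                                   # observation noise scale

def log_marginal(A, xs, mus, sigma):
    """Log marginal likelihood: x | task ~ N(A mu_t, A A^T + sigma^2 I)."""
    cov = A @ A.T + sigma**2 * np.eye(d_x)
    return sum(stats.multivariate_normal.logpdf(x, mean=A @ mu, cov=cov)
               for x, mu in zip(xs, mus))

# One observation per task, drawn from the true model.
xs = [A_true @ (mu + rng.normal(size=d_z)) + sigma * rng.normal(size=d_x)
      for mu in task_means]
print(log_marginal(A_true, xs, task_means, sigma))
```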
Entropy coding is essential to data compression, image and video coding, etc. The Range variant of Asymmetric Numeral Systems (rANS) is a modern entropy coder, featuring superior speed and compression rate. Because rANS is not designed for parallel execution, the conventional approach to parallel rANS partitions the input symbol sequence and encodes the partitions with independent codecs, and each additional partition adds overhead. This approach, found in state-of-the-art implementations such as DietGPU, is unsuitable for content-delivery applications: if the decoder cannot decode all partitions in parallel, the parallelism is wasted, yet all the overhead is still transferred. To solve this, we propose Recoil, a parallel rANS decoding approach with decoder-adaptive scalability. We discover that a single rANS-encoded bitstream can be decoded from any position if the intermediate coder states are known. After renormalization, these states also have a smaller upper bound, so they can be stored efficiently. We then split the encoded bitstream using a heuristic that evenly distributes the workload, and store the intermediate states and corresponding symbol indices as metadata. Splits can later be combined simply by eliminating the extra metadata entries. The main contribution of Recoil is reducing unnecessary data transfer by adaptively scaling the parallelism overhead to match the decoder's capability. Experiments show that Recoil's decoding throughput is comparable to the conventional approach, while scaling massively on CPUs and GPUs and greatly outperforming various other ANS-based codecs.
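The key observation, that an rANS stream can be picked up mid-sequence once the intermediate coder state is known, can be demonstrated with a deliberately minimal coder. The sketch below omits renormalization and bitstream splitting (both essential to Recoil itself) and keeps the state as a Python big integer; it is a toy model of the idea, not Recoil's implementation.

```python
def rans_encode(msg, freq, cum, M):
    """Minimal rANS without renormalization (big-int state). The message is
    encoded in reverse so decoding runs forward; states[i] is the coder state
    from which symbols i, i+1, ... can be decoded -- the intermediate states
    that a Recoil-style scheme would store as metadata.
    """
    x = 1
    states = [0] * (len(msg) + 1)
    states[len(msg)] = x
    for i in range(len(msg) - 1, -1, -1):
        s = msg[i]
        x = (x // freq[s]) * M + cum[s] + (x % freq[s])
        states[i] = x
    return states

def rans_decode(x, n, freq, cum, M):
    """Decode n symbols starting from any intermediate state x."""
    out = []
    for _ in range(n):
        r = x % M
        s = next(t for t in freq if cum[t] <= r < cum[t] + freq[t])
        x = freq[s] * (x // M) + r - cum[s]
        out.append(s)
    return out

freq = {"a": 2, "b": 1, "c": 1}   # symbol frequencies summing to M = 4
cum = {"a": 0, "b": 2, "c": 3}    # cumulative frequencies
states = rans_encode(list("abacab"), freq, cum, 4)
# A second decoder starts mid-stream from the stored state at index 3.
print(rans_decode(states[3], 3, freq, cum, 4))  # ['c', 'a', 'b']
```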
Many common types of data can be represented as functions that map coordinates to signal values, such as pixel locations to RGB values in the case of an image. Based on this view, data can be compressed by overfitting a compact neural network to its functional representation and then encoding the network weights. However, most current solutions for this are inefficient, as quantization to low-bit precision substantially degrades the reconstruction quality. To address this issue, we propose overfitting variational Bayesian neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding instead of quantizing and entropy coding it. This strategy enables direct optimization of the rate-distortion performance by minimizing the $\beta$-ELBO, and allows targeting different rate-distortion trade-offs for a given network architecture by adjusting $\beta$. Moreover, we introduce an iterative algorithm for learning prior weight distributions and employ a progressive refinement process for the variational posterior that significantly enhances performance. Experiments show that our method achieves strong performance on image and audio compression while retaining simplicity.
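The training objective can be pictured as a distortion term plus $\beta$ times the KL "rate" of the variational weight posterior against the prior. The sketch below overfits a single variational layer to a toy coordinate-to-signal mapping; the architecture, priors, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.distributions as D

coords = torch.linspace(-1, 1, 64).unsqueeze(1)   # "coordinates"
signal = torch.sin(3 * coords)                    # "signal values"

mu = torch.zeros(1, 16, requires_grad=True)       # variational means
log_sigma = torch.full((1, 16), -3.0, requires_grad=True)
prior = D.Normal(0.0, 1.0)                        # prior over weights
beta = 1e-3
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for step in range(500):
    q = D.Normal(mu, log_sigma.exp())
    w = q.rsample()                               # reparameterized weight sample
    pred = torch.tanh(coords @ w) @ torch.ones(16, 1) / 16
    distortion = ((pred - signal) ** 2).mean()
    rate = D.kl_divergence(q, prior).sum()        # proxy for coding cost of w
    loss = distortion + beta * rate               # the beta-ELBO trade-off
    opt.zero_grad(); loss.backward(); opt.step()

print(float(distortion), float(rate))  # raising beta lowers rate, raises distortion
```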
Knowledge graph embedding (KGE), which maps entities and relations into vector representations, is essential for downstream tasks. Conventional KGE methods require relatively high-dimensional entity representations to preserve the structural information of the knowledge graph, but this leads to oversized model parameters. Recent methods reduce model parameters by adopting low-dimensional entity representations, while developing techniques (e.g., knowledge distillation) to compensate for the reduced dimension. However, such operations degrade model accuracy and achieve only a limited reduction in model parameters. Specifically, we view the concatenation of all entity representations as an embedding layer; conventional KGE methods that adopt high-dimensional entity representations then amount to enlarging the width of the embedding layer to gain expressiveness. To achieve parameter efficiency without sacrificing accuracy, we instead increase the depth and propose a deeper embedding network for entity representations, i.e., a narrow embedding layer combined with a multi-layer dimension-lifting network (LiftNet). Experiments on three public datasets show that the proposed method (implemented based on TransE and DistMult) with 4-dimensional entity representations achieves more accurate link prediction results than counterpart parameter-efficient KGE methods and strong KGE baselines, including TransE and DistMult with 512-dimensional entity representations.
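The architectural idea, a narrow embedding layer followed by a dimension-lifting network, can be sketched as follows, with TransE as the scoring function; the widths, depth, and activation are illustrative guesses rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LiftNetTransE(nn.Module):
    """Narrow entity embeddings lifted to a wider space by a small MLP,
    scored with TransE. Sizes are illustrative, not the paper's.
    """
    def __init__(self, n_entities, n_relations, dim_in=4, dim_out=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim_in)   # narrow embedding layer
        self.rel = nn.Embedding(n_relations, dim_out)
        self.lift = nn.Sequential(                    # dimension-lifting network
            nn.Linear(dim_in, 32), nn.ReLU(),
            nn.Linear(32, dim_out),
        )

    def score(self, h, r, t):
        h_e, t_e = self.lift(self.ent(h)), self.lift(self.ent(t))
        return -(h_e + self.rel(r) - t_e).norm(p=1, dim=-1)  # TransE score

model = LiftNetTransE(n_entities=1000, n_relations=20)
print(model.score(torch.tensor([0]), torch.tensor([1]), torch.tensor([2])))
```

The parameter saving comes from the embedding table: $|E| \times 4$ entries plus a small shared LiftNet, instead of $|E| \times 512$.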
Neural implicit surface representations have recently emerged as a popular alternative to explicit 3D object encodings, such as polygonal meshes, tabulated points, or voxels. While significant work has improved the geometric fidelity of these representations, much less attention has been given to their final appearance. Traditional explicit object representations commonly couple the 3D shape data with auxiliary surface-mapped image data, such as diffuse color textures and fine-scale geometric detail in normal maps, which typically require a mapping of the 3D surface onto a plane, i.e., a surface parameterization. Implicit representations, on the other hand, cannot easily be textured, as they lack a configurable surface parameterization. Inspired by this digital content authoring methodology, we design a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data. As such, our model remains compatible with existing mesh-based digital content with appearance data. Motivated by recent work that overfits compact networks to individual 3D objects, we present a new weight-encoded neural implicit representation that extends the capability of neural implicit surfaces to enable various common and important applications of texture mapping. Our method outperforms reasonable baselines and state-of-the-art alternatives.
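One way to picture the approach is a network that maps surface points to UV coordinates, which then index ordinary texture images; the sketch below makes that pipeline concrete, with the layer sizes and the bilinear lookup being illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitUV(nn.Module):
    """Sketch of an implicitly encoded surface parameterization: an MLP maps
    3D surface points to UV coordinates used to sample a standard texture map.
    """
    def __init__(self):
        super().__init__()
        self.to_uv = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Sigmoid(),     # UV coordinates in [0, 1]^2
        )

    def forward(self, points, texture):         # points (N, 3), texture (1, C, H, W)
        uv = self.to_uv(points)
        grid = (2 * uv - 1).view(1, -1, 1, 2)   # grid_sample expects [-1, 1]
        color = F.grid_sample(texture, grid, align_corners=True)
        return color.squeeze(0).squeeze(-1).T   # (N, C) sampled appearance

net = ImplicitUV()
pts, tex = torch.randn(5, 3), torch.rand(1, 3, 128, 128)  # RGB texture image
print(net(pts, tex).shape)                      # torch.Size([5, 3])
```

Because the appearance data still lives in ordinary texture images, such a model stays compatible with mesh-based authoring tools.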
We study variable-length codes for point-to-point discrete memoryless channels with noiseless, unlimited-rate feedback that occurs in $L$ bursts. We term such codes variable-length bursty-feedback (VLBF) codes. Unlike classical codes with feedback after each transmitted code symbol, bursty feedback fits better with protocols that employ sparse feedback after a packet is sent, and with half-duplex end devices that cannot transmit and listen to the channel at the same time. We present a novel non-asymptotic achievability bound for VLBF codes with $L$ bursts of feedback over any discrete memoryless channel, and we numerically evaluate the bound over the binary symmetric channel (BSC). We optimize over the time instances at which feedback occurs, both for our bound and for Yavas et al.'s non-asymptotic achievability bound for variable-length stop-feedback (VLSF) codes, where only a single bit is sent at each feedback instance. Our results demonstrate the advantages of richer feedback: VLBF codes significantly outperform VLSF codes at short blocklengths, especially as the error probability $\epsilon$ decreases. Remarkably, for BSC(0.11) and error probability $10^{-10}$, our VLBF code with $L=5$ and expected decoding time $N\leq 400$ outperforms the achievability bound given by Polyanskiy et al. for VLSF codes with $L=\infty$; even our VLBF code with $L=3$ does so.
For a given function $F$ from $\mathbb F_{p^n}$ to itself, determining whether there exists a function which is CCZ-equivalent but EA-inequivalent to $F$ is an important and interesting problem. For example, K\"olsch \cite{KOL21} showed that there is no function which is CCZ-equivalent but EA-inequivalent to the inverse function. On the other hand, for the Gold function $F(x)=x^{2^i+1}$ and for $F(x)=x^3+{\rm Tr}(x^9)$ over $\mathbb F_{2^n}$, Budaghyan, Carlet and Pott (respectively, Budaghyan, Carlet and Leander) \cite{BCP06, BCL09FFTA} found functions which are CCZ-equivalent but EA-inequivalent to $F$. In this paper, when a given function $F$ has a component function admitting a linear structure, we present functions which are CCZ-equivalent to $F$ and, under suitable conditions, show that the constructed functions are EA-inequivalent to $F$. As a consequence, for every quadratic function $F$ on $\mathbb F_{2^n}$ ($n\geq 4$) with nonlinearity $>0$ and differential uniformity $\leq 2^{n-3}$, we explicitly construct functions which are CCZ-equivalent but EA-inequivalent to $F$. Likewise, for every non-planar quadratic function on $\mathbb F_{p^n}$ ($p>2$, $n\geq 4$) with $|\mathcal W_F|\leq p^{n-1}$ and differential uniformity $\leq p^{n-3}$, we explicitly construct functions which are CCZ-equivalent but EA-inequivalent to $F$.
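For reference, the two equivalence notions being separated can be stated as follows (standard definitions, recalled here as background):

```latex
\textbf{EA-equivalence:} $G = A_1 \circ F \circ A_2 + A$ for affine
permutations $A_1, A_2$ of $\mathbb{F}_{p^n}$ and an affine map $A$.

\textbf{CCZ-equivalence:} some affine permutation of
$\mathbb{F}_{p^n} \times \mathbb{F}_{p^n}$ maps the graph
$G_F = \{(x, F(x)) : x \in \mathbb{F}_{p^n}\}$ onto
$G_G = \{(x, G(x)) : x \in \mathbb{F}_{p^n}\}$.
```

EA-equivalence implies CCZ-equivalence, while the converse can fail; that gap is exactly what the constructions above exploit.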