In this paper, we consider interpolation by \textit{completely monotonous} polynomials (CMPs for short), that is, polynomials with non-negative real coefficients. In particular, given a finite set $S\subset \mathbb{R}_{>0} \times \mathbb{R}_{\geq 0}$, we consider \textit{the minimal polynomial} of $S$, introduced by Berg [1985], which is `minimal' in the sense that it is eventually majorized by every other CMP interpolating $S$. We give an upper bound on the degree of the minimal polynomial of $S$ when it exists. Furthermore, we give an alternative algorithm for computing the minimal polynomial of a given $S$, which exploits an order structure on sign sequences. Applying the upper bound above, we also analyze the computational complexity of algorithms for computing minimal polynomials, including ours.
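To make the interpolation constraint concrete, the sketch below (our illustration, not the paper's minimal-polynomial algorithm) checks by linear programming whether some CMP of a fixed degree $d$ interpolates a given finite set $S$; this is the feasibility problem $Vc=y$, $c\geq 0$, where $V$ is the Vandermonde matrix of the abscissae. The degree cap $d$ and the use of \texttt{scipy.optimize.linprog} are our assumptions.
\begin{verbatim}
# Hedged illustration: feasibility check for interpolation by a polynomial
# with non-negative coefficients (a CMP) of degree at most d.
import numpy as np
from scipy.optimize import linprog

def cmp_interpolates(S, d):
    """Return coefficients c >= 0 with sum_k c_k x^k = y for all (x, y) in S,
    or None if no CMP of degree <= d interpolates S."""
    xs = np.array([x for x, _ in S], dtype=float)
    ys = np.array([y for _, y in S], dtype=float)
    V = np.vander(xs, N=d + 1, increasing=True)   # Vandermonde matrix
    res = linprog(c=np.zeros(d + 1),              # pure feasibility problem
                  A_eq=V, b_eq=ys,
                  bounds=[(0, None)] * (d + 1))
    return res.x if res.success else None

# Example: S = {(1, 2), (2, 5)} is interpolated by 1 + x^2
# (all coefficients non-negative), so the call below returns a solution.
print(cmp_interpolates([(1.0, 2.0), (2.0, 5.0)], d=2))
\end{verbatim}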
In this paper, we introduce and study the problem of \textit{binary stretch embedding} of edge-weighted graphs. This problem is closely related to the well-known \textit{addressing problem} of Graham and Pollak, which asks for an assignment of the shortest possible strings (called ``addresses'') over the alphabet $\{0,1,*\}$ to the vertices of an input graph $G$ with the following property: for every pair of vertices $u,v$, the number of positions in which one of their addresses is $1$ and the other is $0$ is exactly the distance between $u$ and $v$ in $G$. When the addresses do not contain the symbol $*$, the problem is called \textit{isometric hypercube embedding}; as far as we know, it was introduced by Firsov in 1965, and it is known that such addresses do not exist for general graphs. Inspired by the addressing problem, we introduce the \textit{binary stretch embedding problem}, or BSEP for short, for edge-weighted undirected graphs, and we discuss how it relates to other graph embedding problems in the literature. Using tools and techniques such as Hadamard codes and the theory of linear programming, we obtain several upper and lower bounds, as well as exact solutions, for certain classes of graphs. As an application of these results, we derive improved upper bounds or exact values for the maximum size of Lee metric codes with certain parameters.
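As a small illustration of the addressing property (our example, not taken from the paper), the sketch below computes the distance induced by two addresses over $\{0,1,*\}$: the number of positions where one address is $1$ and the other is $0$; a $*$ never contributes.
\begin{verbatim}
# Hedged illustration of the Graham-Pollak addressing property.
def address_distance(a: str, b: str) -> int:
    assert len(a) == len(b)
    # Count positions where one address has '1' and the other has '0'.
    return sum(1 for x, y in zip(a, b) if {x, y} == {'0', '1'})

# A valid addressing must satisfy
#   address_distance(addr[u], addr[v]) == dist_G(u, v)  for all pairs u, v.
# Example with three hypothetical addresses:
addr = {'u': '10*', 'v': '01*', 'w': '1*0'}
print(address_distance(addr['u'], addr['v']))  # 2
print(address_distance(addr['u'], addr['w']))  # 0
\end{verbatim}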
In this paper, we propose \textbf{UniCode}, a novel approach within the domain of multimodal large language models (MLLMs) that learns a unified codebook to efficiently tokenize visual, textual, and potentially other types of signals. This innovation addresses a critical limitation of existing MLLMs: their reliance on a text-only codebook, which restricts their ability to generate images and text in a multimodal context. Towards this end, we propose a language-driven iterative training paradigm, coupled with an in-context pre-training task we term ``image decompression'', enabling our model to interpret compressed visual data and generate high-quality images. The unified codebook empowers our model to extend visual instruction tuning to non-linguistic generation tasks. Moreover, UniCode is adaptable to diverse stacked quantization approaches that compress visual signals into a more compact token representation. Despite using significantly fewer parameters and less data during training, UniCode demonstrates promising capabilities in visual reconstruction and generation. It also achieves performance comparable to leading MLLMs across a spectrum of VQA benchmarks.
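Since the abstract centres on a learned codebook for tokenizing visual signals, the minimal sketch below (our illustration; not UniCode's architecture, training paradigm, or stacked quantization scheme) shows the basic vector-quantization step behind any such codebook: mapping a continuous feature vector to the index of its nearest codebook entry.
\begin{verbatim}
# Minimal codebook lookup, purely illustrative of vector quantization.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(1024, 256))   # 1024 code vectors of dimension 256

def tokenize(features: np.ndarray) -> np.ndarray:
    """Map each feature vector in (n, 256) to the index of its nearest code."""
    # Squared Euclidean distance between every feature and every code vector.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)              # one discrete token per feature

tokens = tokenize(rng.normal(size=(4, 256)))
print(tokens.shape)  # (4,)
\end{verbatim}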
We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a ``price of adaptivity'' (PoA) that, roughly speaking, measures the multiplicative increase in suboptimality due to uncertainty in these parameters. When the initial distance to the optimum is unknown but a gradient norm bound is known, we show that the PoA is at least logarithmic for expected suboptimality, and double-logarithmic for median suboptimality. When there is uncertainty in both distance and gradient norm, we show that the PoA must be polynomial in the level of uncertainty. Our lower bounds nearly match existing upper bounds, and establish that there is no parameter-free lunch.
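Read informally, the price of adaptivity can be written schematically as the following ratio (our paraphrase of the abstract, not the paper's formal definition):
\[
\mathrm{PoA} \;\approx\; \sup_{\substack{\text{problems with}\\ \text{unknown parameters}}} \frac{\text{suboptimality of the best adaptive (parameter-free) method}}{\text{suboptimality achievable when the parameters are known}},
\]
so a logarithmic or polynomial lower bound on the PoA quantifies how much must be paid for not knowing, e.g., the initial distance to the optimum or the gradient norm bound.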
We give a procedure for computing group-level $(\epsilon, \delta)$-DP guarantees for DP-SGD, when using Poisson sampling or fixed batch size sampling. Up to discretization errors in the implementation, the DP guarantees computed by this procedure are tight (assuming we release every intermediate iterate).
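For context, a standard fact (not the paper's procedure): the generic group-privacy conversion states that any $(\epsilon,\delta)$-DP mechanism is also
\[
\bigl(k\epsilon,\; k\, e^{(k-1)\epsilon}\,\delta\bigr)\text{-DP for groups of size } k,
\]
which is generally loose; the procedure described above instead targets tight group-level guarantees tailored to DP-SGD's Poisson or fixed-batch-size sampling, rather than relying on this black-box bound.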
Authorship Verification (AV) is the process of analyzing a set of documents to determine whether they were written by a specific author. This problem often arises in forensic scenarios, e.g., in cases where the documents in question constitute evidence for a crime. Existing state-of-the-art AV methods use computational solutions that are not supported by a plausible scientific explanation for their functioning and that are often difficult for analysts to interpret. To address this, we propose a method that relies on calculating a quantity we call $\lambda_G$ (LambdaG): the ratio between the likelihood of a document given a model of the Grammar for the candidate author and the likelihood of the same document given a model of the Grammar for a reference population. These Grammar Models are estimated using $n$-gram language models trained solely on grammatical features. Despite not needing large amounts of data for training, LambdaG still outperforms other established AV methods with higher computational complexity, including a fine-tuned Siamese Transformer network. Our empirical evaluation, based on four baseline methods applied to twelve datasets, shows that LambdaG yields better results in terms of both accuracy and AUC in eleven of the twelve cases, and in all twelve when considering only topic-agnostic methods. LambdaG is also highly robust to substantial variations in the genre of the reference population in many cross-genre comparisons. In addition to these properties, we demonstrate that LambdaG is easier to interpret than the current state of the art. We argue that the advantage of LambdaG over other methods is due to the fact that it is compatible with Cognitive Linguistic theories of language processing.
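A hedged sketch of the likelihood-ratio idea (our reading of the abstract; the interfaces, smoothing constant, and $n$-gram order are assumptions, not the authors' code):
\begin{verbatim}
import math

# Log-ratio of the document likelihood under the candidate author's grammar
# model versus a reference-population grammar model (n-gram approximation).
def log_lambda_g(doc_tokens, author_ngram_probs, reference_ngram_probs,
                 n=3, eps=1e-12):
    score = 0.0
    for i in range(len(doc_tokens) - n + 1):
        gram = tuple(doc_tokens[i:i + n])
        p_author = author_ngram_probs.get(gram, eps)      # unseen -> tiny prob
        p_reference = reference_ngram_probs.get(gram, eps)
        score += math.log(p_author) - math.log(p_reference)
    # > 0 favours the candidate author, < 0 favours the reference population.
    return score
\end{verbatim}
In practice the tokens would be grammatical features (e.g., part-of-speech or function-word sequences) rather than raw words, consistent with the abstract's emphasis on grammar rather than topic.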
Equipping the rototranslation group $SE(2)$ with a sub-Riemannian structure inspired by the visual cortex V1, we propose algorithms for image inpainting and enhancement based on hypoelliptic diffusion. We improve on previous implementations of the methods of Citti, Sarti, and Boscain et al. by proposing an alternative that prevents fading and produces sharper results, in a procedure that we call WaxOn-WaxOff. We also exploit the sub-Riemannian structure to define a completely new unsharp filter using $SE(2)$, analogous to the classical unsharp filter for 2D image processing. We demonstrate our method on blood vessel enhancement in retinal scans.
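For reference, the classical 2D unsharp filter that the abstract alludes to can be sketched as follows; this is only the Euclidean baseline, whereas the paper's filter replaces the isotropic Gaussian blur with hypoelliptic smoothing on $SE(2)$.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

# Classical 2D unsharp masking (Euclidean baseline only).
def unsharp(image: np.ndarray, sigma: float = 2.0,
            amount: float = 1.0) -> np.ndarray:
    blurred = gaussian_filter(image, sigma=sigma)   # low-pass component
    return image + amount * (image - blurred)       # boost high frequencies

sharpened = unsharp(np.random.rand(64, 64))
\end{verbatim}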
The objective of this paper is to provide an introduction to the principles of Bayesian joint modeling of longitudinal measurements and time-to-event outcomes, as well as model implementation using the BUGS language syntax. This syntax can be executed directly using OpenBUGS, or by utilizing convenient functions to invoke OpenBUGS and JAGS from R software. All details of joint models are provided, ranging from simple to more advanced models. The presentation starts with the joint modeling of a Gaussian longitudinal marker and a time-to-event outcome, and the Bayesian implementation of this model is reviewed. Strategies for simulating data from the joint model are also discussed. A proportional hazards model with various forms of baseline hazard, along with a discussion of all possible association structures between the two sub-models, is then considered. The paper further covers joint models with multivariate longitudinal measurements, zero-inflated longitudinal measurements, competing risks, and time-to-event outcomes with a cure fraction. The models are illustrated by analyses of several real data sets. All simulated and real data and code are available at \url{//github.com/tbaghfalaki/JM-with-BUGS-and-JAGS}.
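As a concrete anchor (a standard textbook formulation, not one of the specific models fitted in the paper), the simplest Gaussian-longitudinal/proportional-hazards joint model with a current-value association can be written as
\[
y_i(t) = m_i(t) + \varepsilon_i(t), \qquad m_i(t) = \mathbf{x}_i^\top(t)\,\boldsymbol\beta + \mathbf{z}_i^\top(t)\,\mathbf{b}_i, \qquad \varepsilon_i(t) \sim N(0, \sigma^2),
\]
\[
h_i(t) = h_0(t)\exp\bigl\{\boldsymbol\gamma^\top \mathbf{w}_i + \alpha\, m_i(t)\bigr\},
\]
where $\mathbf{b}_i$ are subject-level random effects shared by the two sub-models and $\alpha$ encodes the association structure; the more advanced models in the paper vary the baseline hazard $h_0$, the form of the association, and the distribution of the longitudinal marker.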
When training deep neural networks, the phenomenon of \textit{dying neurons} (units that become inactive or saturated and output zero during training) has traditionally been viewed as undesirable, linked to optimization challenges and contributing to plasticity loss in continual learning scenarios. In this paper, we reassess this phenomenon, focusing on sparsity and pruning. By systematically exploring the impact of various hyperparameter configurations on dying neurons, we unveil their potential to facilitate simple yet effective structured pruning algorithms. We introduce \textit{Demon Pruning} (DemP), a method that controls the proliferation of dead neurons to dynamically induce network sparsity. Achieved through a combination of noise injection on active units and a one-cycled schedule regularization strategy, DemP stands out for its simplicity and broad applicability. Experiments on CIFAR10 and ImageNet demonstrate that DemP surpasses existing structured pruning techniques, showcasing superior accuracy-sparsity tradeoffs and training speedups. These findings suggest a novel perspective on dying neurons as a valuable resource for efficient model compression and optimization.
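A heavily simplified sketch of the two ingredients named in the abstract, noise injection on active units and a one-cycle regularization schedule, with all shapes, constants, and the dead-unit criterion being our assumptions rather than the authors' implementation:
\begin{verbatim}
import numpy as np

def one_cycle(step: int, total_steps: int, peak: float = 1e-4) -> float:
    """Regularization strength that ramps up to `peak` and back down."""
    phase = step / max(total_steps - 1, 1)
    return peak * (1.0 - abs(2.0 * phase - 1.0))

def noisy_activations(pre_act: np.ndarray, reg: float, rng) -> np.ndarray:
    """ReLU with noise added only on active (non-zero) units."""
    act = np.maximum(pre_act, 0.0)
    noise = rng.normal(scale=reg, size=act.shape)
    return np.where(act > 0.0, act + noise, act)

def dead_units(act_history: np.ndarray, tol: float = 0.0) -> np.ndarray:
    """Units whose recorded activations never exceed `tol` are considered
    dead and can be removed by structured pruning."""
    return (act_history <= tol).all(axis=0)
\end{verbatim}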
In this paper, we explore the descriptive complexity theory of finite groups by examining the power of the second Ehrenfeucht-Fra\"iss\'e bijective pebble game in Hella's (Ann. Pure Appl. Log., 1989) hierarchy. This is a Spoiler-Duplicator game in which Spoiler can place up to two pebbles each round. While it trivially solves graph isomorphism, it may be nontrivial for finite groups and other ternary relational structures. We first provide a novel generalization of Weisfeiler-Leman (WL) coloring, which we call 2-ary WL. We then show that 2-ary WL is equivalent to the second Ehrenfeucht-Fra\"iss\'e bijective pebble game in Hella's hierarchy. Our main result is that, in the pebble game characterization, only $O(1)$ pebbles and $O(1)$ rounds suffice to identify all groups without Abelian normal subgroups (a class of groups for which isomorphism testing is known to be in $\mathsf{P}$; Babai, Codenotti, & Qiao, ICALP 2012). In particular, we show that within the first few rounds, Spoiler can force Duplicator to select an isomorphism between two such groups at each subsequent round. By Hella's results (\emph{ibid.}), this is equivalent to saying that these groups are identified by formulas in first-order logic with generalized 2-ary quantifiers, using only $O(1)$ variables and $O(1)$ quantifier depth.
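For orientation, the classical 1-dimensional WL colour refinement that the paper generalizes can be sketched as below; the paper's 2-ary WL refines colours of pairs of elements using the group's ternary multiplication relation and is strictly richer, which this background sketch does not implement.
\begin{verbatim}
# Classical 1-dimensional Weisfeiler-Leman colour refinement on a graph,
# given only as background for the paper's 2-ary generalization.
def wl_refine(adj, rounds=None):
    """adj: dict vertex -> set of neighbours.  Returns a stable colouring."""
    colour = {v: 0 for v in adj}                  # start with a uniform colour
    for _ in range(rounds if rounds is not None else len(adj)):
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: palette[signature[v]] for v in adj}
        if new_colour == colour:                  # refinement has stabilized
            break
        colour = new_colour
    return colour

# Example: a path on three vertices -- the two endpoints get the same colour.
print(wl_refine({0: {1}, 1: {0, 2}, 2: {1}}))
\end{verbatim}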
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, although PLMs with huge numbers of parameters can effectively capture rich knowledge from massive training text and benefit downstream tasks at the fine-tuning stage, they still have limitations such as poor reasoning ability due to the lack of external knowledge. Much research has been dedicated to incorporating knowledge into PLMs to tackle these issues. In this paper, we present a comprehensive review of Knowledge-Enhanced Pre-trained Language Models (KE-PLMs) to provide a clear insight into this thriving field. We introduce appropriate taxonomies for Natural Language Understanding (NLU) and Natural Language Generation (NLG), respectively, to highlight these two main tasks of NLP. For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions for KE-PLMs.