We prove a few new lower bounds on the randomized competitive ratio for the $k$-server problem and other related problems, resolving some long-standing conjectures. In particular, for metrical task systems (MTS) we asymptotically settle the competitive ratio and obtain the first improvement to an existential lower bound since the introduction of the model 35 years ago (in 1987). More concretely, we show: 1. There exist $(k+1)$-point metric spaces in which the randomized competitive ratio for the $k$-server problem is $\Omega(\log^2 k)$. This refutes the folklore conjecture (which is known to hold in some families of metrics) that in all metric spaces with at least $k+1$ points, the competitive ratio is $\Theta(\log k)$. 2. Consequently, there exist $n$-point metric spaces in which the randomized competitive ratio for MTS is $\Omega(\log^2 n)$. This matches the upper bound that holds for all metrics. The previously best existential lower bound was $\Omega(\log n)$ (which was known to be tight for some families of metrics). 3. For all $k<n\in\mathbb N$, for *all* $n$-point metric spaces the randomized $k$-server competitive ratio is at least $\Omega(\log k)$, and consequently the randomized MTS competitive ratio is at least $\Omega(\log n)$. These universal lower bounds are asymptotically tight. The previous bounds were $\Omega(\log k/\log\log k)$ and $\Omega(\log n/\log \log n)$, respectively. 4. The randomized competitive ratio for the $w$-set metrical service systems problem, and its equivalent width-$w$ layered graph traversal problem, is $\Omega(w^2)$. This slightly improves the previous lower bound and matches the recently discovered upper bound. 5. Our results imply improved lower bounds for other problems like $k$-taxi, distributed paging and metric allocation. These lower bounds share a common thread, and other than the third bound, also a common construction.
This paper characterizes when an $m \times n$ rectangle, where $m$ and $n$ are integers, can be tiled (exactly packed) by squares where each has an integer side length of at least 2. In particular, we prove that tiling is always possible when both $m$ and $n$ are sufficiently large (at least 10). When one dimension $m$ is small, the behavior is eventually periodic in $n$ with period 1, 2, or 3. When both dimensions $m,n$ are small, the behavior is determined computationally by an exhaustive search.
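The small-dimension cases described above can indeed be settled by an exhaustive search. Below is a minimal backtracking sketch (the function `tileable` and its structure are our illustration, not the paper's program): it tries every square of side at least 2 whose top-left corner sits at the first uncovered cell in row-major order, and backtracks on failure.

```python
def tileable(m, n):
    """Backtracking search: can an m x n rectangle be tiled by
    axis-aligned integer squares of side at least 2?"""
    grid = [[False] * n for _ in range(m)]

    def first_empty():
        for i in range(m):
            for j in range(n):
                if not grid[i][j]:
                    return i, j
        return None

    def solve():
        cell = first_empty()
        if cell is None:
            return True  # every cell covered
        i, j = cell
        # Any square covering the first empty cell (in row-major order)
        # must have its top-left corner exactly there.
        for s in range(2, min(m - i, n - j) + 1):
            if all(not grid[i + a][j + b] for a in range(s) for b in range(s)):
                for a in range(s):
                    for b in range(s):
                        grid[i + a][j + b] = True
                if solve():
                    return True
                for a in range(s):
                    for b in range(s):
                        grid[i + a][j + b] = False
        return False

    return solve()

# e.g. a 2x3 strip cannot be tiled, but a 3x3 square can (one side-3 square)
```

This brute force is only practical for the small rectangles the paper handles computationally; the large cases are covered by the constructive result.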
Inspired by recent work by Christensen and Popovski on secure $2$-user product computation for finite fields of prime order over a quantum multiple access channel, the generalization to $K$ users and arbitrary finite fields is explored. Asymptotically optimal (capacity-achieving for large alphabet) schemes are proposed. Additionally, the capacity of modulo-$d$ ($d\geq 2$) secure $K$-sum computation is shown to be $2/K$ computations/qudit, generalizing a result of Nishimura and Kawachi beyond binary, and improving upon it for odd $K$.
The continuous representation of spatiotemporal data commonly relies on abstract data types, such as \textit{moving regions}, to represent entities whose shape and position continuously change over time. Creating this representation from discrete snapshots of real-world entities requires interpolation methods to compute in-between data representations and estimate the position and shape of the object of interest at arbitrary temporal points. Existing region interpolation methods often fail to generate smooth and realistic representations of a region's evolution. However, recent advancements in deep learning techniques have revealed the potential of deep models trained on discrete observations to capture spatiotemporal dependencies through implicit feature learning. In this work, we explore the capabilities of Conditional Variational Autoencoder (C-VAE) models to generate smooth and realistic representations of the spatiotemporal evolution of moving regions. We evaluate our proposed approach on a sparsely annotated dataset of the burnt area of a forest fire. We apply compression operations to sample from the dataset and use the C-VAE model and other commonly used interpolation algorithms to generate in-between region representations. To evaluate the performance of the methods, we compare their interpolation results with manually annotated data and with regions generated by a U-Net model. We also assess the quality of the generated data with respect to temporal consistency metrics. The proposed C-VAE-based approach demonstrates competitive results on geometric similarity metrics. It also exhibits superior temporal consistency, suggesting that C-VAE models may be a viable alternative for modelling the spatiotemporal evolution of 2D moving regions.
The success of researchers is now mostly measured using the Hirsch index ($h$). Our recent precise demonstration that statistically $h \sim \sqrt {N_c} \sim \sqrt {N_p}$, where $N_p$ and $N_c$ denote respectively the total number of publications and the total number of citations of a researcher, suggests that the average number of citations per paper ($N_c/N_p$), and hence $h$, are statistical numbers (Dunbar numbers) depending on the community or network to which the researcher belongs. We show here, extending our earlier observations, that success is reflected not by the total citations $N_c$ but by the inequality among citations across publications. Specifically, we show that for very successful authors the yearly values of the Gini index ($g$, giving the average inequality of citations over the publications) and the Kolkata index ($k$, giving the fraction of total citations received by the top $1 - k$ fraction of publications; $k = 0.80$ corresponds to Pareto's 80/20 law) approach each other, and $g = k \simeq 0.82$ signals a precursor for the arrival at (or departure from) the Self-Organized Critical (SOC) state of their publication statistics. Analyzing the citation statistics (from Google Scholar) of thirty successful scientists over their entire recorded publication history, we find that $g$ and $k$ for the most successful among them (mostly Nobel Laureates, highest-rank Stanford Cite-Scorers, and a few others) reach and hover just above, and then below, the $g = k \simeq 0.82$ mark, while for the others they remain below it. We also find that all values of $k$ and $g$ below the SOC mark 0.82 fit a linear relationship $k = 1/2 + cg$, with $c = 0.39$.
The prevalent use of Large Language Models (LLMs) has necessitated studying their mental models, yielding noteworthy theoretical and practical implications. Current research has demonstrated that state-of-the-art LLMs, such as ChatGPT, exhibit certain theory of mind capabilities and possess relatively stable Big Five and/or MBTI personality traits. In addition, cognitive process features form an essential component of these mental models. Research in cultural psychology indicated significant differences in the cognitive processes of Eastern and Western people when processing information and making judgments. While Westerners predominantly exhibit analytical thinking that isolates things from their environment to analyze their nature independently, Easterners often showcase holistic thinking, emphasizing relationships and adopting a global viewpoint. In our research, we probed the cultural cognitive traits of ChatGPT. We employed two scales that directly measure the cognitive process: the Analysis-Holism Scale (AHS) and the Triadic Categorization Task (TCT). Additionally, we used two scales that investigate the value differences shaped by cultural thinking: the Dialectical Self Scale (DSS) and the Self-construal Scale (SCS). In cognitive process tests (AHS/TCT), ChatGPT consistently tends towards Eastern holistic thinking, but regarding value judgments (DSS/SCS), ChatGPT does not significantly lean towards the East or the West. We suggest that the result could be attributed to both the training paradigm and the training data in LLM development. We discuss the potential value of this finding for AI research and directions for future research.
App reviews reflect various user requirements that can aid in planning maintenance tasks. Recently proposed approaches for automatically classifying user reviews rely on machine learning algorithms. Devine et al. demonstrated that models trained on existing labeled datasets exhibit poor performance when predicting new ones. Although integrating datasets improves the results to some extent, there is still a need for greater generalizability. Therefore, a comprehensive labeled dataset is essential for training a more precise model. This paper introduces an approach to train a more generalizable model by leveraging information from an additional source, the GitHub issue tracking system, that contains valuable information about user requirements. We propose an approach that assists in augmenting labeled datasets by utilizing information extracted from GitHub issues. First, we identify issues concerning review intentions (bug reports, feature requests, and others) by examining the issue labels. Then, we analyze issue bodies and define 19 language patterns for extracting the targeted information. Finally, we augment the manually labeled review dataset with a subset of processed issues through the Within-App, Within-Context, and Between-App Analysis methods. The first two methods train app-specific models, and the last suits general-purpose models. We conducted several experiments to evaluate the proposed approach. Our results demonstrate that using labeled issues for data augmentation can improve the F1-score and recall to 13.9 and 29.9 for bug reports, respectively, and to 7.5 and 13.5 for feature requests. Furthermore, we identify an effective volume range of 0.3 to 0.7, which provides better performance improvements.
We prove that the {\sf PoA} of {\sf First Price Auctions} is $1 - 1/e^2 \approx 0.8647$, closing the gap between the best known bounds $[0.7430,\, 0.8689]$.
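A quick numeric check of the stated constant (pure arithmetic, using only the numbers quoted in the abstract):

```python
import math

# claimed tight PoA bound for first-price auctions
poa = 1 - 1 / math.e ** 2
assert 0.7430 <= poa <= 0.8689  # lies inside the previously known gap
print(round(poa, 4))  # 0.8647
```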
Originating in Girard's Linear logic, Ehrhard and Regnier's Taylor expansion of $\lambda$-terms has been broadly used as a tool to approximate the terms of several variants of the $\lambda$-calculus. Many results arise from a Commutation theorem relating the normal form of the Taylor expansion of a term to its B\"ohm tree. This led us to consider extending this formalism to the infinitary $\lambda$-calculus, since the $\Lambda_{\infty}^{001}$ version of this calculus has B\"ohm trees as normal forms and seems to be the ideal framework to reformulate the Commutation theorem. We give a (co-)inductive presentation of $\Lambda_{\infty}^{001}$. We define a Taylor expansion on this calculus, and state that the infinitary $\beta$-reduction can be simulated through this Taylor expansion. The target language is the usual resource calculus, and in particular the resource reduction remains finite, confluent and terminating. Finally, we state the generalised Commutation theorem and use our results to provide simple proofs of some normalisation and confluence properties in the infinitary $\lambda$-calculus.
Compared with the cheap addition operation, multiplication is of much higher computational complexity. The widely used convolutions in deep neural networks are exactly cross-correlations measuring the similarity between the input features and the convolution filters, which involves massive multiplications between floating-point values. In this paper, we present adder networks (AdderNets) that trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs. In AdderNets, we take the $\ell_1$-norm distance between the filters and the input features as the output response. The influence of this new similarity measure on the optimization of the neural network has been thoroughly analyzed. To achieve better performance, we develop a special back-propagation approach for AdderNets by investigating the full-precision gradient. We then propose an adaptive learning rate strategy to enhance the training procedure of AdderNets according to the magnitude of each neuron's gradient. As a result, the proposed AdderNets achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy using ResNet-50 on the ImageNet dataset without any multiplication in the convolution layers.
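The $\ell_1$-based similarity at the heart of AdderNets can be sketched in a few lines for the single-channel, stride-1, no-padding case (toy code of ours, not the authors' implementation): each output entry is the negative $\ell_1$ distance between a patch of the input and the filter, so a perfect match yields the maximal response 0 using only additions, subtractions, and absolute values.

```python
def adder_conv2d(x, w):
    """'Adder' layer response: negative l1 distance between every
    k x k patch of x (list of lists) and the filter w (k x k)."""
    k = len(w)
    H, W = len(x), len(x[0])
    out = [[0.0] * (W - k + 1) for _ in range(H - k + 1)]
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            out[i][j] = -sum(abs(x[i + a][j + b] - w[a][b])
                             for a in range(k) for b in range(k))
    return out

x = [[float(4 * r + c) for c in range(4)] for r in range(4)]
w = [[5.0, 6.0], [9.0, 10.0]]  # equals the patch of x at offset (1, 1)
out = adder_conv2d(x, w)
# the response peaks (at 0) exactly where the filter matches the input
```

Unlike cross-correlation, every response is non-positive, which is what motivates the special back-propagation and adaptive learning rate treatment described above.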
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.