The LogRank conjecture of Lov\'asz and Saks from 1988 is the most famous open problem in communication complexity theory. It states the following: suppose that two players want to compute a Boolean function $f(x,y)$, where $x$ is known to the first player and $y$ to the second, and they may send and receive messages encoded with bits; then they can compute $f(x,y)$ by exchanging $(\log \rank (M_f))^c$ bits, where $M_f$ is a Boolean matrix determined by the function $f$. The problem is wide open and very popular, and it has resisted numerous attacks in the last 35 years. The best known upper bound is still exponential in the bound of the conjecture. We cannot prove the conjecture, but we present a communication protocol with $(\log \rank (M_f))^c$ bits that computes a quantity somewhat related to $f(x,y)$. The relation is characterized by a representation of low-degree, multilinear polynomials modulo composite numbers. This result may help to settle this long-standing open conjecture.
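As a concrete illustration of the quantities appearing in the conjecture (my own toy example, not taken from the paper), the sketch below builds the communication matrix $M_f$ of the equality function on $n$-bit inputs and computes its rank; equality needs about $n$ bits of deterministic communication, which is polynomial (here linear) in $\log \rank(M_f)$.

```python
import numpy as np

n = 4  # each player holds an n-bit input

# Communication matrix of the equality function: M_f[x, y] = 1 iff x == y,
# i.e., the 2^n x 2^n identity matrix.
M = np.eye(2 ** n, dtype=int)

rank = np.linalg.matrix_rank(M)
print(rank, np.log2(rank))  # 16, 4.0
# Equality is computable with about n bits of communication, i.e.,
# polynomial in log rank(M_f), consistent with the conjectured bound.
```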
Compliance with the European Union's Platform-to-Business (P2B) Regulation is challenging for online platforms, and assessing their compliance can be difficult for public authorities. This is partly due to the lack of automated tools for assessing the information (e.g., software documentation) that platforms provide concerning ranking transparency. Our study tackles this issue in two ways. First, we empirically evaluate the compliance of six major platforms (Amazon, Bing, Booking, Google, Tripadvisor, and Yahoo), revealing substantial differences in their documentation. Second, we introduce and test automated compliance assessment tools based on ChatGPT and information retrieval technology. These tools are evaluated against human judgments, showing promising results as reliable proxies for compliance assessments. Our findings could help enhance regulatory compliance and align with the United Nations Sustainable Development Goal 10.3, which seeks to reduce inequality, including business disparities, on these platforms.
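A minimal sketch of how such a tool could be validated against human judgments (the labels below are hypothetical, not the study's data): one can measure chance-corrected agreement between the tool's and an annotator's compliance ratings with Cohen's kappa.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary compliance labels (1 = requirement addressed) for ten
# documentation passages: one set from a human annotator, one from an
# automated tool (e.g., an LLM prompted with a P2B transparency requirement).
human = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
tool  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

print(cohen_kappa_score(human, tool))  # chance-corrected agreement, ~0.58
```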
A Gr\"obner basis computation for the Weyl algebra with respect to a tropical term order and by using a homogenization-dehomogenization technique is sufficiently sluggish. A significant number of reductions to zero occur. To improve the computation, a tropical F5 algorithm is developed for this context. As a member of the family of signature-based algorithms, this algorithm keeps track of where Weyl algebra elements come from to anticipate reductions to zero. The total order for ordering module monomials or signatures in this paper is designed as close as possible to the definition of the tropical term order. As in Vaccon et al. (2021), this total order is not compatible with the tropical term order.
Finding the exact spanning ratio of a Delaunay graph has been one of the longstanding open problems in Computational Geometry. Currently, there are only four convex shapes for which the exact spanning ratio of the Delaunay graph is known: the equilateral triangle, the square, the regular hexagon, and the rectangle. In this paper, we establish the exact spanning ratio of the parallelogram Delaunay graph, making the parallelogram the fifth convex shape for which an exact bound is known. The worst-case spanning ratio is exactly $$\frac{\sqrt{2}\sqrt{1+A^2+2A\cos(\theta_0)+(A+\cos(\theta_0))\sqrt{1+A^2+2A\cos(\theta_0)}}}{\sin(\theta_0)},$$ where $A$ is the aspect ratio and $\theta_0$ is the non-obtuse angle of the parallelogram. Moreover, we show how to construct a parallelogram Delaunay graph whose spanning ratio matches this bound.
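As a quick sanity check of the formula (my own arithmetic, not taken from the paper): setting $A=1$ and $\theta_0=\pi/2$ degenerates the parallelogram into a square, and the expression reduces to $\sqrt{4+2\sqrt{2}}\approx 2.61$, the known exact spanning ratio of the square Delaunay graph.

```python
import math

def spanning_ratio(A, theta0):
    """Worst-case spanning ratio of the parallelogram Delaunay graph,
    for aspect ratio A and non-obtuse angle theta0 (radians)."""
    s = 1 + A**2 + 2 * A * math.cos(theta0)
    return math.sqrt(2) * math.sqrt(s + (A + math.cos(theta0)) * math.sqrt(s)) / math.sin(theta0)

print(spanning_ratio(1.0, math.pi / 2))    # ~2.6131
print(math.sqrt(4 + 2 * math.sqrt(2)))     # square Delaunay bound, for comparison
```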
With the emergence of the Quantum Internet, the need for advanced quantum networking techniques has significantly risen. Various models of quantum repeaters have been presented, each delineating a unique strategy to ensure quantum communication over long distances. We focus on repeaters that employ entanglement generation and swapping, which establish remote end-to-end entanglement through intermediate repeaters; we denote these as "quantum-native" repeaters (also called "first-generation" repeaters in some literature). The challenges in routing with quantum-native repeaters arise from probabilistic entanglement generation and restricted coherence time. Current approaches use synchronized time slots to search for entanglement-swapping paths, resulting in inefficiencies. Here, we propose a new set of asynchronous routing protocols for quantum networks that maintain a dynamic topology in a distributed manner, an idea extensively studied in classical routing for lossy networks, e.g., using a destination-oriented directed acyclic graph (DODAG) or a spanning tree. The protocols update the entanglement-link topology asynchronously, identify optimal entanglement-swapping paths, and preserve unused direct-link entanglements. Our results indicate that, under an appropriate setting, the asynchronous protocols achieve a larger upper bound on the entanglement rate and a significantly higher rate than existing synchronous approaches; the rate increases with coherence time, suggesting that asynchronous routing will have a much more profound impact on quantum networks as technology advances.
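A toy Monte Carlo sketch (my own simplification, not the paper's protocol) illustrates why the achievable rate grows with coherence time: with memory, links entangled in different slots can wait for one another, whereas a one-slot lifetime mimics the synchronous setting.

```python
import random

def rate(num_links, p_gen, t_coherence, slots=200_000, seed=0):
    """Toy path model: each link attempts entanglement generation every slot
    (success prob p_gen); a link's entanglement survives t_coherence slots.
    End-to-end entanglement is delivered (consuming all links) once every
    link along the path is entangled simultaneously. t_coherence = 1
    mimics a synchronous, single-slot scheme."""
    rng = random.Random(seed)
    age = [None] * num_links          # slots since each link was entangled
    delivered = 0
    for _ in range(slots):
        for i in range(num_links):
            if age[i] is not None:
                age[i] += 1
                if age[i] >= t_coherence:
                    age[i] = None     # decohered
            if age[i] is None and rng.random() < p_gen:
                age[i] = 0            # fresh entanglement
        if all(a is not None for a in age):
            delivered += 1
            age = [None] * num_links  # swapping consumes the links
    return delivered / slots

for t_c in (1, 5, 20):
    print(t_c, rate(num_links=4, p_gen=0.3, t_coherence=t_c))
```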
Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey on the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices of XAI models over IoT systems in these applications, with appropriate examples, and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.
Learning on big data brings success to artificial intelligence (AI), but annotation and training costs are expensive. In the future, learning on small data will be one of the ultimate goals of AI, requiring machines to recognize objects and scenarios from small data, as humans do. A series of machine learning models pursues this direction, such as active learning, few-shot learning, and deep clustering. However, there are few theoretical guarantees on their generalization performance. Moreover, most of their settings are passive; that is, the label distribution is explicitly controlled by one specified sampling scenario. This survey follows agnostic active sampling under a PAC (Probably Approximately Correct) framework to analyze the generalization error and label complexity of learning on small data in supervised and unsupervised fashions. With these theoretical analyses, we categorize small data learning models from two geometric perspectives: Euclidean and non-Euclidean (hyperbolic) mean representations, and we also present and discuss their optimization solutions. Potential learning scenarios that may benefit from small data learning are then summarized and analyzed. Finally, challenging applications that may benefit from learning on small data, such as computer vision and natural language processing, are also surveyed.
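For intuition on the label-complexity flavor of such PAC analyses (a textbook bound, not the survey's sharper results): for a finite hypothesis class in the realizable setting, $m \ge \frac{1}{\epsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right)$ labeled examples suffice.

```python
import math

def pac_sample_complexity(hypothesis_count, epsilon, delta):
    """Realizable-case PAC bound for a finite hypothesis class: with
    m >= (ln|H| + ln(1/delta)) / epsilon labeled examples, empirical risk
    minimization returns, with probability >= 1 - delta, a hypothesis
    with true error <= epsilon."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

print(pac_sample_complexity(hypothesis_count=2**20, epsilon=0.05, delta=0.01))  # 370
```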
The generalization mystery in deep learning is the following: why do over-parameterized neural networks trained with gradient descent (GD) generalize well on real datasets even though they are capable of fitting random datasets of comparable size? Furthermore, from among all solutions that fit the training data, how does GD find one that generalizes well (when such a well-generalizing solution exists)? We argue that the answer to both questions lies in the interaction of the gradients of different examples during training. Intuitively, if the per-example gradients are well-aligned, that is, if they are coherent, then one may expect GD to be (algorithmically) stable, and hence generalize well. We formalize this argument with an easy-to-compute and interpretable metric for coherence, and show that the metric takes on very different values on real and random datasets for several common vision networks. The theory also explains a number of other phenomena in deep learning, such as why some examples are reliably learned earlier than others, why early stopping works, and why it is possible to learn from noisy labels. Moreover, since the theory provides a causal explanation of how GD finds a well-generalizing solution when one exists, it motivates a class of simple modifications to GD that attenuate memorization and improve generalization. Generalization in deep learning is an extremely broad phenomenon and therefore requires an equally general explanation. We conclude with a survey of alternative lines of attack on this problem and argue on this basis that the proposed approach is the most viable one.
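A minimal sketch of one such coherence proxy (not necessarily the paper's exact metric): the ratio of the squared norm of the mean per-example gradient to the mean squared norm of the individual gradients, which is near 1 when gradients are well-aligned and near $1/m$ when they are mutually independent.

```python
import numpy as np

def coherence(grads):
    """grads: (m, d) array of per-example gradients. Returns a value near 1
    when the gradients are well-aligned and near 1/m when they are random-like."""
    g_mean = grads.mean(axis=0)
    return float(g_mean @ g_mean / (grads ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
aligned = np.tile(rng.normal(size=128), (32, 1)) + 0.1 * rng.normal(size=(32, 128))
random_ = rng.normal(size=(32, 128))
print(coherence(aligned), coherence(random_))  # ~1.0 vs ~1/32
```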
Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g., evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs are able to significantly outperform previous approaches while at the same time being more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of the different components of our framework and devise the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
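A bare-bones sketch of the memory mechanism (learned components, time encodings, batching, and the graph embedding module are all omitted; a plain tanh RNN cell stands in for TGN's learnable updater, which is typically a GRU):

```python
import numpy as np

DIM = 8
rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.1, size=(DIM, DIM))        # recurrent weights
W_m = rng.normal(scale=0.1, size=(DIM, 3 * DIM))    # message weights
memory = {}                                         # node id -> memory vector

def get_memory(node):
    return memory.setdefault(node, np.zeros(DIM))

def process_event(src, dst, feat):
    """On an interaction event between src and dst with feature vector feat:
    form a message from both endpoint memories and the event feature, then
    update each endpoint's memory with a recurrent cell. (TGN learns these
    weights and also encodes the event timestamp; both are simplified away.)"""
    msg = np.concatenate([get_memory(src), get_memory(dst), feat])
    for node in (src, dst):
        memory[node] = np.tanh(W_h @ get_memory(node) + W_m @ msg)

for src, dst in [(0, 1), (1, 2), (0, 2)]:           # a sequence of timed events
    process_event(src, dst, rng.normal(size=DIM))
print(memory[0])
```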
Deep Convolutional Neural Networks (CNNs) are a special type of Neural Network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved through multiple non-linear feature extraction stages that can automatically learn hierarchical representations from data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and recently very interesting deep CNN architectures have been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in recently reported CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven categories, based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, it covers an elementary understanding of CNN components and sheds light on the current challenges and applications of CNNs.
Visual Question Answering (VQA) models have so far struggled with counting objects in natural images. We identify a fundamental problem caused by the soft attention in these models. To circumvent this problem, we propose a neural network component that enables robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced-pair metric, the component improves counting over a strong baseline by a substantial 6.6%.
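A tiny numeric illustration of the soft-attention problem (my own toy, not the paper's module): softmax normalization makes the attended feature for one object identical to that for two duplicates of the same object, so the count is lost.

```python
import numpy as np

def attended_feature(features, logits):
    """Soft attention: softmax-normalized weighted average of the features."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ features

cat = np.array([1.0, 0.0])                                   # one detected object
one_cat = attended_feature(np.stack([cat]), np.array([5.0]))
two_cats = attended_feature(np.stack([cat, cat]), np.array([5.0, 5.0]))
print(one_cat, two_cats)  # identical: normalization discards the count
```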