The sums and maxima of non-stationary, random-length sequences of regularly varying random variables may have the same tail and extremal indices (Markovich and Rodionov, 2020). The main constraint there is that a unique series in the scheme of series attains the minimum tail index. The result is now revised to allow a random but bounded number of series to attain the minimum tail index. This new result is applied to random networks.
Age of information (AoI) is a performance metric that captures the freshness of status updates. While AoI has been studied thoroughly for point-to-point links, the impact of modern random-access protocols on this metric is still unclear. In this paper, we extend the recent results by Munari to prioritized random access, where devices are divided into classes according to their AoI requirements. We consider the irregular repetition slotted ALOHA protocol and analyze the AoI evolution by means of a Markovian analysis along similar lines to Munari (2021). We aim to design the protocol so that the AoI requirements of each class are satisfied while the power consumption is minimized. To this end, we optimize the update probability and the degree distribution of each class such that the probability that their AoI exceeds a given threshold lies below a given target and the average number of transmitted packets is minimized.
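To make the design problem concrete, here is a minimal sketch of the constrained optimization described above, in our own notation rather than the paper's (the symbols $\mu_k$, $\Lambda_k$, $\Delta_k$, $\delta_k$, $\epsilon_k$ and $\bar{T}$ are assumptions introduced for illustration):

```latex
% Hedged sketch of the per-class design problem (notation ours): update
% probability \mu_k and degree distribution \Lambda_k for class k, AoI \Delta_k,
% age threshold \delta_k, violation target \epsilon_k, and average number of
% transmitted packets \bar{T}.
\begin{aligned}
\min_{\{\mu_k,\,\Lambda_k\}_{k=1}^{K}} \quad & \bar{T}\bigl(\{\mu_k,\Lambda_k\}\bigr) \\
\text{subject to} \quad & \Pr\bigl(\Delta_k > \delta_k\bigr) \le \epsilon_k,
\qquad k = 1,\dots,K .
\end{aligned}
```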
Existing analyses of optimization in deep learning are either continuous, focusing on (variants of) gradient flow, or discrete, directly treating (variants of) gradient descent. Gradient flow is amenable to theoretical analysis, but is stylized and disregards computational efficiency. The extent to which it represents gradient descent is an open question in the theory of deep learning. The current paper studies this question. Viewing gradient descent as an approximate numerical solution to the initial value problem of gradient flow, we find that the degree of approximation depends on the curvature around the gradient flow trajectory. We then show that, over deep neural networks with homogeneous activations, gradient flow trajectories enjoy favorable curvature, suggesting they are well approximated by gradient descent. This finding allows us to translate an analysis of gradient flow over deep linear neural networks into a guarantee that gradient descent efficiently converges to a global minimum almost surely under random initialization. Experiments suggest that, over simple deep neural networks, gradient descent with a conventional step size is indeed close to gradient flow. We hypothesize that the theory of gradient flows will unravel mysteries behind deep learning.
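As a minimal illustration of the viewpoint taken here (our own toy example, not the paper's code), the following sketch treats gradient descent with step size eta as a coarse Euler discretization of the gradient flow ODE and compares it against a much finer Euler integration of the same flow on a simple homogeneous objective:

```python
# Hedged illustration (not the paper's code): gradient descent with step size eta
# is the coarse Euler discretization of the gradient flow ODE
#   d theta / dt = -grad L(theta);
# compare it against a much finer Euler integration of the same flow.
import numpy as np

def grad(theta):
    # toy loss L(theta) = 0.25 * ||theta||^4, a simple homogeneous objective
    return np.linalg.norm(theta) ** 2 * theta

def gradient_descent(theta0, eta, steps):
    theta = theta0.copy()
    for _ in range(steps):
        theta -= eta * grad(theta)
    return theta

def gradient_flow(theta0, t_end, dt=1e-4):
    # fine-grained Euler integration as a stand-in for the exact flow
    theta, t = theta0.copy(), 0.0
    while t < t_end:
        theta -= dt * grad(theta)
        t += dt
    return theta

theta0 = np.array([1.0, -0.5])
eta, steps = 0.05, 200
gd = gradient_descent(theta0, eta, steps)
gf = gradient_flow(theta0, t_end=eta * steps)
print("discrepancy between GD and gradient flow:", np.linalg.norm(gd - gf))
```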
One of the most pressing problems in modern analysis is the study of the growth rate of the norms of all possible matrix products $\|A_{i_{n}}\cdots A_{i_{0}}\|$ with factors from a set of matrices $\mathscr{A}$. So far, only for a relatively small number of classes of matrices $\mathscr{A}$ has it been possible to rigorously describe the sequences of matrices $\{A_{i_{n}}\}$ that guarantee the maximal growth rate of the corresponding norms. Moreover, in almost all theoretically studied cases, the index sequences $\{i_{n}\}$ of matrices maximizing the norms of the corresponding matrix products turned out to be periodic or so-called Sturmian sequences, which entails a whole set of ``good'' properties of the sequences $\{A_{i_{n}}\}$, in particular the existence of a limiting frequency of occurrence of each matrix factor $A_{i}\in\mathscr{A}$ in them. The paper identifies a class of $2\times 2$ matrices, consisting of two matrices similar to rotations of the plane, for which the sequence $\{A_{i_{n}}\}$ maximizing the growth rate of the norms $\|A_{i_{n}}\cdots A_{i_{0}}\|$ is not Sturmian. All considerations are based on numerical modeling and, in this respect, cannot be considered mathematically rigorous; rather, they should be interpreted as a set of questions for further comprehensive theoretical analysis.
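The kind of numerical experiment alluded to can be sketched as follows (a toy setup with arbitrary parameters, not the matrices studied in the paper): exhaustively search index sequences over a two-matrix family and record which sequence maximizes the norm of the product.

```python
# Hedged numerical sketch (our own toy experiment, not the paper's setup):
# search over index sequences {i_n} for a two-matrix family and record which
# sequence maximizes the spectral norm of the product A_{i_n} ... A_{i_0}.
import itertools
import numpy as np

def rotation_like(angle, scale):
    c, s = np.cos(angle), np.sin(angle)
    return scale * np.array([[c, -s], [s, c]])

# toy family of two matrices "similar to rotations of the plane" (parameters arbitrary)
A = [rotation_like(0.7, 1.02), rotation_like(-1.9, 1.01)]

def max_norm_sequence(length):
    best_norm, best_seq = -np.inf, None
    for seq in itertools.product(range(len(A)), repeat=length):
        P = np.eye(2)
        for i in seq:
            P = A[i] @ P
        n = np.linalg.norm(P, 2)
        if n > best_norm:
            best_norm, best_seq = n, seq
    return best_seq, best_norm

for length in range(2, 10):
    seq, n = max_norm_sequence(length)
    print(length, seq, round(n, 4))   # inspect whether the maximizing sequences look Sturmian
```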
Although there is an extensive literature on the maxima of Gaussian processes, there are relatively few non-asymptotic bounds on their lower-tail probabilities. The aim of this paper is to develop such a bound, while also allowing for many types of dependence. Let $(\xi_1,\dots,\xi_N)$ be a centered Gaussian vector with standardized entries, whose correlation matrix $R$ satisfies $\max_{i\neq j} R_{ij}\leq \rho_0$ for some constant $\rho_0\in (0,1)$. Then, for any $\epsilon_0\in(0,\sqrt{1-\rho_0})$, we establish an upper bound on the probability $\mathbb{P}(\max_{1\leq j\leq N} \xi_j\leq \epsilon_0\sqrt{2\log(N)})$ in terms of $(\rho_0,\epsilon_0,N)$. The bound is also sharp, in the sense that it is attained up to a constant that is independent of $N$. Next, we apply this result in the context of high-dimensional statistics, where we simplify and weaken conditions that have recently been used to establish near-parametric rates of bootstrap approximation. Lastly, an interesting aspect of this application is that it makes use of recent refinements of Bourgain and Tzafriri's "restricted invertibility principle".
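As a quick illustration of the quantity being bounded (a Monte Carlo sketch under an equicorrelated construction of our own choosing, not taken from the paper):

```python
# Hedged Monte Carlo sketch (not from the paper): estimate the lower-tail
# probability P(max_j xi_j <= eps0 * sqrt(2 log N)) for an equicorrelated
# standardized Gaussian vector with off-diagonal correlation rho0.
import numpy as np

rng = np.random.default_rng(0)
N, rho0, eps0, trials = 500, 0.3, 0.4, 10000   # note eps0 < sqrt(1 - rho0)

# equicorrelated construction: xi_j = sqrt(rho0) * Z + sqrt(1 - rho0) * W_j
Z = rng.standard_normal((trials, 1))
W = rng.standard_normal((trials, N))
xi = np.sqrt(rho0) * Z + np.sqrt(1.0 - rho0) * W

threshold = eps0 * np.sqrt(2.0 * np.log(N))
estimate = np.mean(xi.max(axis=1) <= threshold)
print("estimated lower-tail probability:", estimate)
```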
The question of whether a partition $\mathcal{P}$ and a hierarchy $\mathcal{H}$ or a tree-like split system $\mathfrak{S}$ are compatible naturally arises in a wide range of classification problems. In the setting of phylogenetic trees, one asks whether the sets of $\mathcal{P}$ coincide with the leaf sets of the connected components obtained by deleting some edges from the tree $T$ that represents $\mathcal{H}$ or $\mathfrak{S}$, respectively. More generally, we ask whether a refinement $T^*$ of $T$ exists such that $T^*$ and $\mathcal{P}$ are compatible in this sense. The latter is closely related to the question of whether there exists a tree at all that is compatible with $\mathcal{P}$. We report several characterizations for (refinements of) hierarchies and split systems that are compatible with (systems of) partitions. In addition, we provide a linear-time algorithm to check whether refinements of trees and a given partition are compatible. The latter problem becomes NP-complete, but fixed-parameter tractable, if a system of partitions is considered instead of a single partition. In this context, we also explore the close relationship between the concept of compatibility and so-called Fitch maps.
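For concreteness, the compatibility notion can be checked naively as follows (a hedged brute-force reference implementation, exponential in the number of edges and therefore emphatically not the linear-time algorithm of the paper):

```python
# Hedged brute-force reference check (our own sketch): a partition P of the leaf
# set is compatible with a tree T if deleting some edge set yields connected
# components whose non-empty leaf sets are exactly the blocks of P.
from itertools import combinations

def components(vertices, edges):
    # connected components via repeated depth-first search over an adjacency list
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u); stack.extend(adj[u] - comp)
        seen |= comp; comps.append(comp)
    return comps

def compatible(vertices, edges, leaves, partition):
    blocks = {frozenset(b) for b in partition}
    for k in range(len(edges) + 1):
        for removed in combinations(edges, k):
            kept = [e for e in edges if e not in removed]
            leaf_sets = {frozenset(c & leaves) for c in components(vertices, kept)}
            leaf_sets.discard(frozenset())
            if leaf_sets == blocks:
                return True
    return False

# toy example: a caterpillar tree with leaves a, b, c, d and inner vertices x, y
vertices = {"a", "b", "c", "d", "x", "y"}
edges = [("a", "x"), ("b", "x"), ("x", "y"), ("c", "y"), ("d", "y")]
leaves = {"a", "b", "c", "d"}
print(compatible(vertices, edges, leaves, [{"a", "b"}, {"c", "d"}]))   # True
print(compatible(vertices, edges, leaves, [{"a", "c"}, {"b", "d"}]))   # False
```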
We present a method for comparing point forecasts in a region of interest, such as the tails or centre of a variable's range. This method cannot be hedged, in contrast to conditionally selecting events to evaluate and then using a scoring function that would have been consistent (or proper) prior to event selection. Our method also gives decompositions of scoring functions that are consistent for the mean or a particular quantile or expectile. Each member of each decomposition is itself a consistent scoring function that emphasises performance over a selected region of the variable's range. The score of each member of the decomposition has a natural interpretation rooted in optimal decision theory. It is the weighted average of economic regret over user decision thresholds, where the weight emphasises those decision thresholds in the corresponding region of interest.
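For the quantile case, the flavour of decomposition meant here can be sketched via the standard mixture representation of consistent quantile scoring functions (our rendering and notation, not necessarily the paper's): every scoring function $S$ consistent for the $\alpha$-quantile can be written as a mixture of elementary scores over decision thresholds $\theta$,

```latex
S(x, y) = \int_{-\infty}^{\infty} S_\theta(x, y)\, \mathrm{d}H(\theta),
\qquad
S_\theta(x, y) = \bigl(\mathbf{1}\{y < x\} - \alpha\bigr)
                 \bigl(\mathbf{1}\{\theta < x\} - \mathbf{1}\{\theta < y\}\bigr),
```

so that restricting or re-weighting the mixing measure $H$ on a region of decision thresholds yields a member of the decomposition emphasising performance over that region.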
We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks. With this toolset, we aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer. We demonstrate that the inner representations emerging in deep convolutional neural networks with the same architecture but different initializations can be matched with a surprisingly high degree of accuracy even with a single, affine stitching layer. We choose the stitching layer from several possible classes of linear transformations and investigate their performance and properties. The task of matching representations is closely related to notions of similarity. Using this toolset, we also provide a novel viewpoint on the current line of research regarding similarity indices of neural network representations: the perspective of performance on a task.
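A minimal PyTorch-style sketch of the stitching setup (our own simplification, not the authors' Dr. Frankenstein code; `net_a`, `net_b` and the cut point below are hypothetical):

```python
# Hedged sketch of the stitching idea: keep two trained networks frozen, cut both
# at a chosen layer, and train a single affine (1x1 convolution) stitching layer
# that maps activations of network A into the corresponding layer of network B.
import torch
import torch.nn as nn

class StitchedModel(nn.Module):
    def __init__(self, front, back, channels_a, channels_b):
        super().__init__()
        self.front = front          # layers of network A up to the cut
        self.back = back            # layers of network B after the cut
        # affine stitching layer: per-position linear map plus bias
        self.stitch = nn.Conv2d(channels_a, channels_b, kernel_size=1, bias=True)
        for p in list(self.front.parameters()) + list(self.back.parameters()):
            p.requires_grad_(False)  # only the stitching layer is trained

    def forward(self, x):
        with torch.no_grad():
            h = self.front(x)
        return self.back(self.stitch(h))

# usage sketch (hypothetical): net_a and net_b share an architecture but differ in
# initialization; cut both after their second block and train model.stitch on the
# original task loss, then compare accuracy with that of net_b alone.
# model = StitchedModel(nn.Sequential(*list(net_a.children())[:2]),
#                       nn.Sequential(*list(net_b.children())[2:]),
#                       channels_a=64, channels_b=64)
```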
Despite the recent success of graph neural networks (GNN), common architectures often exhibit significant limitations, including sensitivity to oversmoothing, long-range dependencies, and spurious edges, e.g., as can occur as a result of graph heterophily or adversarial attacks. To at least partially address these issues within a simple transparent framework, we consider a new family of GNN layers designed to mimic and integrate the update rules of two classical iterative algorithms, namely, proximal gradient descent and iterative reweighted least squares (IRLS). The former defines an extensible base GNN architecture that is immune to oversmoothing while nonetheless capturing long-range dependencies by allowing arbitrary propagation steps. In contrast, the latter produces a novel attention mechanism that is explicitly anchored to an underlying end-to-end energy function, contributing stability with respect to edge uncertainty. When combined, we obtain an extremely simple yet robust model that we evaluate across disparate scenarios, including standardized benchmarks, adversarially perturbed graphs, graphs with heterophily, and graphs involving long-range dependencies. In doing so, we compare against SOTA GNN approaches that have been explicitly designed for the respective task, achieving competitive or superior node classification accuracy.
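To illustrate the first ingredient, here is a hedged numpy sketch (our simplification, not the authors' full model) of propagation derived from gradient descent on an energy of the form $E(Y) = \|Y - f(X)\|_F^2 + \lambda\,\mathrm{tr}(Y^\top L Y)$, where $f(X)$ are transformed input features and $L$ is the graph Laplacian; each layer is one descent step, and the fixed anchor $f(X)$ counteracts oversmoothing while allowing arbitrarily many propagation steps:

```python
# Hedged sketch: GNN propagation as gradient descent on
#   E(Y) = ||Y - f(X)||_F^2 + lam * tr(Y^T L Y).
import numpy as np

def propagate(f_x, L, lam=1.0, alpha=0.1, steps=16):
    Y = f_x.copy()
    for _ in range(steps):
        grad = (Y - f_x) + lam * (L @ Y)   # gradient of the energy above
        Y = Y - alpha * grad               # one descent step per "layer"
    return Y

# toy usage: path graph on 4 nodes, 2-dimensional node features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
f_x = np.random.default_rng(0).standard_normal((4, 2))
print(propagate(f_x, L))
```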
Spectral clustering (SC) is a popular clustering technique to find strongly connected communities on a graph. SC can be used in Graph Neural Networks (GNNs) to implement pooling operations that aggregate nodes belonging to the same cluster. However, the eigendecomposition of the Laplacian is expensive and, since clustering results are graph-specific, pooling methods based on SC must perform a new optimization for each new sample. In this paper, we propose a graph clustering approach that addresses these limitations of SC. We formulate a continuous relaxation of the normalized minCUT problem and train a GNN to compute cluster assignments that minimize this objective. Our GNN-based implementation is differentiable, does not require computing the spectral decomposition, and learns a clustering function that can be quickly evaluated on out-of-sample graphs. From the proposed clustering method, we design a graph pooling operator that overcomes some important limitations of state-of-the-art graph pooling techniques and achieves the best performance in several supervised and unsupervised tasks.
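A numpy sketch of the kind of relaxed normalized-minCUT objective described above (our own rendering; the GNN producing the soft assignment matrix `S` is omitted):

```python
# Hedged sketch of a relaxed minCUT objective: S is a soft cluster-assignment
# matrix (e.g. produced by a GNN with a softmax head), A the adjacency matrix and
# D the degree matrix. The cut term rewards keeping strongly connected nodes in
# the same cluster; the orthogonality term discourages collapsed clusters.
import numpy as np

def mincut_losses(S, A):
    D = np.diag(A.sum(axis=1))
    cut_loss = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)
    K = S.shape[1]
    SS = S.T @ S
    ortho_loss = np.linalg.norm(SS / np.linalg.norm(SS) - np.eye(K) / np.sqrt(K))
    return cut_loss, ortho_loss

# toy usage: two 3-node cliques joined by one edge, soft assignments into K=2 clusters
A = np.zeros((6, 6))
A[:3, :3] = 1; A[3:, 3:] = 1; A[2, 3] = A[3, 2] = 1
np.fill_diagonal(A, 0)
S = np.array([[.9, .1]] * 3 + [[.1, .9]] * 3)
print(mincut_losses(S, A))   # both terms would be minimized jointly during training
```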
Graph neural networks (GNNs) are effective machine learning models for various graph learning problems. Despite their empirical successes, the theoretical limitations of GNNs have been revealed recently. Consequently, many GNN models have been proposed to overcome these limitations. In this survey, we provide a comprehensive overview of the expressive power of GNNs and provably powerful variants of GNNs.