Timestamped relational datasets consisting of records between pairs of entities are ubiquitous in data and network science. For applications like peer-to-peer communication, email, social network interactions, and computer network security, it makes sense to organize these records into groups based on how and when they occur. Weighted line graphs offer a natural way to model how records are related in such datasets, but for large real-world graph topologies the complexity of building and utilizing the line graph is prohibitive. We present an algorithm to cluster the edges of a dynamic graph via the associated line graph without forming it explicitly. We outline a novel hierarchical dynamic graph edge clustering approach that efficiently breaks massive relational datasets into small sets of edges containing events at various timescales. This is in stark contrast to traditional graph clustering algorithms, which prioritize highly connected community structures. Our approach relies on constructing a sufficient subgraph of a weighted line graph and applying hierarchical agglomerative clustering; it draws particular inspiration from HDBSCAN. We present a parallel algorithm and show that it is able to break billion-scale dynamic graphs into small sets that correlate in topology and time. The entire clustering process for a graph with $O(10 \text{ billion})$ edges takes just a few minutes of run time on 256 nodes of a distributed compute environment. We argue that the output of the edge clustering is useful for a multitude of data visualization and machine learning tasks involving the original massive dynamic graph data and/or its non-relational metadata. Finally, we demonstrate its use on a real-world large-scale directed dynamic graph and describe how it can be extended to dynamic hypergraphs and graphs with unstructured data living on vertices and edges.
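As a minimal sketch of the core idea (not the paper's parallel algorithm), the following clusters the edges of a tiny timestamped graph by enumerating only line-graph adjacencies (pairs of edges sharing a vertex), weighting them by temporal proximity, and running hierarchical agglomerative clustering; the edge list, weights, and cut threshold are illustrative assumptions.

```python
# A minimal sketch: edge clustering via line-graph adjacencies without ever
# materializing the line graph itself.
import itertools
from collections import defaultdict

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# (u, v, t): a record between entities u and v at time t.
edges = [("a", "b", 0.0), ("b", "c", 1.0), ("a", "c", 1.5),
         ("x", "y", 10.0), ("y", "z", 10.5), ("x", "z", 11.0)]

# Index edges incident to each vertex; two edges sharing a vertex are exactly
# adjacent vertices of the line graph, which is never built explicitly here.
incident = defaultdict(list)
for idx, (u, v, _) in enumerate(edges):
    incident[u].append(idx)
    incident[v].append(idx)

n = len(edges)
dist = np.full((n, n), 1e6)          # large distance = not adjacent
np.fill_diagonal(dist, 0.0)
for ids in incident.values():
    for i, j in itertools.combinations(ids, 2):
        # Weight line-graph edges by temporal proximity of the two records.
        dist[i, j] = dist[j, i] = abs(edges[i][2] - edges[j][2])

Z = linkage(squareform(dist), method="single")
labels = fcluster(Z, t=5.0, criterion="distance")
print(labels)  # the two temporally separated triangles form two clusters
```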
Over the past decade, deep learning has proven to be a highly effective tool for learning meaningful features from raw data. However, it remains an open question how deep networks perform hierarchical feature learning across layers. In this work, we attempt to unveil this mystery by investigating the structures of intermediate features. Motivated by our empirical findings that linear layers mimic the roles of deep layers in nonlinear networks for feature learning, we explore how deep linear networks transform input data into output by investigating the output (i.e., features) of each layer after training in the context of multi-class classification problems. Toward this goal, we first define metrics to measure within-class compression and between-class discrimination of intermediate features, respectively. Through theoretical analysis of these two metrics, we show that the evolution of features follows a simple and quantitative pattern from shallow to deep layers when the input data is nearly orthogonal and the network weights are minimum-norm, balanced, and approximately low-rank: each layer of the linear network progressively compresses within-class features at a geometric rate and discriminates between-class features at a linear rate with respect to the number of layers that data have passed through. To the best of our knowledge, this is the first quantitative characterization of feature evolution in hierarchical representations of deep linear networks. Empirically, our extensive experiments not only validate our theoretical results numerically but also reveal a similar pattern in deep nonlinear networks, which aligns well with recent empirical studies. Moreover, we demonstrate the practical implications of our results in transfer learning. Our code is available at \url{https://github.com/Heimine/PNC_DLN}.
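To make the two kinds of metrics concrete, here is a minimal sketch under assumed definitions: within-class compression as the ratio of within-class to between-class scatter of a layer's features, and discrimination as the smallest pairwise class-mean distance; the paper's exact metrics may differ.

```python
# Assumed metrics: scatter ratio for compression, class-mean gap for
# discrimination. Apply to one layer's features at a time.
import numpy as np

def layer_metrics(features, labels):
    """features: (n_samples, dim) outputs of one layer; labels: (n_samples,)."""
    classes = np.unique(labels)
    mu = features.mean(axis=0)
    within = between = 0.0
    means = {}
    for c in classes:
        Xc = features[labels == c]
        means[c] = Xc.mean(axis=0)
        within += ((Xc - means[c]) ** 2).sum()
        between += len(Xc) * ((means[c] - mu) ** 2).sum()
    compression = within / between              # smaller = more compressed
    discrimination = min(
        np.linalg.norm(means[a] - means[b])
        for i, a in enumerate(classes) for b in classes[i + 1:])
    return compression, discrimination

# Toy check: two well-separated classes give a tiny ratio and a large gap.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 8)), rng.normal(3, 0.1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
print(layer_metrics(X, y))
```

Tracking these two numbers layer by layer is how one would read off the geometric compression rate and linear discrimination rate the abstract describes.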
Testing network effects in weighted directed networks is a foundational problem in econometrics, sociology, and psychology. Yet the prevalent edge dependency poses a significant methodological challenge. Most existing methods are model-based and come with stringent assumptions, limiting their applicability. In response, we introduce a novel, fully nonparametric framework that requires only minimal regularity assumptions. While inspired by recent developments in the $U$-statistic literature (arXiv:1712.00771, arXiv:2004.06615), our approach notably broadens their scope. Specifically, we identify and carefully address the challenge of indeterminate degeneracy in the test statistics, a problem that the aforementioned tools do not handle. We establish a Berry-Esseen-type bound for the accuracy of type-I error rate control. Using original analysis, we also prove the minimax optimality of our test's power. Simulations underscore the superiority of our method in computation speed, accuracy, and numerical robustness compared to competing methods. We also apply our method to U.S. faculty hiring network data and uncover intriguing patterns.
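As a deliberately simpler contrast (not the paper's $U$-statistic test), the toy permutation test below targets one network effect, reciprocity in a weighted directed network, and shows what a fully nonparametric test looks like; the statistic and permutation scheme are illustrative assumptions only.

```python
# Toy nonparametric test for reciprocity: does W[i,j] correlate with W[j,i]
# beyond what independent dyads would produce?
import numpy as np

rng = np.random.default_rng(1)
n = 30
W = rng.exponential(1.0, (n, n))
W = 0.5 * (W + W.T) + 0.1 * rng.exponential(1.0, (n, n))  # plant reciprocity
np.fill_diagonal(W, 0.0)

mask = ~np.eye(n, dtype=bool)
x, y = W[mask], W.T[mask]              # pairs (W[i,j], W[j,i])
obs = np.corrcoef(x, y)[0, 1]          # observed dyadic correlation

# Null distribution: break the (i->j, j->i) pairing by permuting one side.
null = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                 for _ in range(2000)])
pval = (1 + np.sum(np.abs(null) >= abs(obs))) / (len(null) + 1)
print(f"reciprocity = {obs:.3f}, p = {pval:.4f}")  # small p: effect detected
```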
Supporting the interactive exploration of large datasets is a popular and challenging use case for data management systems. Traditionally, the interface and the back-end system are built and optimized separately, and interface design and system optimization require different skill sets that are difficult for one person to master. To enable analysts to focus on visualization design, we contribute VegaPlus, a system that automatically optimizes interactive dashboards to support large datasets. To achieve this, VegaPlus leverages two core ideas. First, we introduce an optimizer that can reason about execution plans in Vega, a back-end DBMS, or a mix of both environments. Second, the optimizer considers how user interactions may alter execution plan performance and can partially or fully rewrite the plans when needed. Through a series of benchmark experiments on seven different dashboard designs, our results show that VegaPlus provides superior performance and versatility compared to standard dashboard optimization techniques.
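As a hypothetical illustration of the plan-rewriting idea (made-up field and table names, not VegaPlus's actual rules), the sketch below pushes a single Vega aggregate transform down to the DBMS as SQL, so the client never materializes the raw rows.

```python
# Hypothetical pushdown: translate one Vega 'aggregate' transform into a
# SQL GROUP BY query for the back-end DBMS to execute.
def aggregate_to_sql(transform, table):
    assert transform["type"] == "aggregate"
    selects = ", ".join(
        f"{op}({field}) AS {alias}"
        for op, field, alias in zip(transform["ops"],
                                    transform["fields"],
                                    transform["as"]))
    group = ", ".join(transform["groupby"])
    return f"SELECT {group}, {selects} FROM {table} GROUP BY {group}"

# A Vega-style aggregate transform over an assumed 'flights' table.
vega_transform = {"type": "aggregate", "groupby": ["origin"],
                  "ops": ["avg"], "fields": ["delay"], "as": ["avg_delay"]}
print(aggregate_to_sql(vega_transform, "flights"))
# SELECT origin, avg(delay) AS avg_delay FROM flights GROUP BY origin
```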
This paper conducts a comprehensive benchmarking analysis of the performance of two innovative cryptographic schemes: Homomorphic Polynomial Public Key (HPPK)-Key Encapsulation Mechanism (KEM) and Digital Signature (DS), recently proposed by Kuang et al. These schemes represent a departure from traditional cryptographic paradigms, with HPPK leveraging the security of homomorphic symmetric encryption across two hidden rings without reliance on NP-hard problems. HPPK can be viewed as a specialized variant of Multivariate Public Key Cryptography (MPKC), intricately associated with two vector spaces: the polynomial vector space for the secret exchange and the multivariate vector space for randomized encapsulation. The unique integration of asymmetric, symmetric, and homomorphic cryptography within HPPK necessitates a careful examination of its performance metrics. This study focuses on the thorough benchmarking of HPPK KEM and DS across key cryptographic operations, encompassing key generation, encapsulation, decapsulation, signing, and verification. The results highlight the exceptional efficiency of HPPK, characterized by compact key, cipher, and signature sizes. The use of symmetric encryption in HPPK enhances its overall performance. Key findings underscore the outstanding performance of HPPK KEM and DS across various security levels, emphasizing their superiority in crucial cryptographic operations. This research positions HPPK as a promising and competitive solution for post-quantum cryptography across a wide range of applications, including blockchain, digital currency, and Internet of Things (IoT) devices.
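For readers who want to reproduce this style of measurement, here is a minimal, hypothetical harness: it times the KEM operations for any scheme exposing the assumed interface. The DummyKEM below is a stand-in, not an HPPK implementation, and a real benchmark would add warm-up runs, outlier handling, and the signing/verification operations.

```python
# Hypothetical KEM benchmarking harness; DummyKEM is a placeholder with the
# interface we assume a KEM implementation exposes.
import os
import statistics
import time

class DummyKEM:
    def keygen(self):
        return os.urandom(32), os.urandom(32)      # (public key, secret key)
    def encapsulate(self, pk):
        return os.urandom(32), os.urandom(48)      # (shared key, ciphertext)
    def decapsulate(self, sk, ct):
        return os.urandom(32)                      # shared key

def bench(fn, *args, runs=1000):
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return statistics.median(times) * 1e6          # microseconds

kem = DummyKEM()
pk, sk = kem.keygen()
_, ct = kem.encapsulate(pk)
print(f"keygen:      {bench(kem.keygen):8.1f} us")
print(f"encapsulate: {bench(kem.encapsulate, pk):8.1f} us")
print(f"decapsulate: {bench(kem.decapsulate, sk, ct):8.1f} us")
```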
Multi-sensor data that track system operating behaviors are widely available nowadays from various engineering systems. Measurements from each sensor over time form a curve and can be viewed as functional data. Clustering of these multivariate functional curves is important for studying the operating patterns of systems. One complication in such applications is the possible presence of sensors whose data do not contain relevant information. Hence it is desirable for the clustering method to be equipped with an automatic sensor selection procedure. Motivated by a real engineering application, we propose a functional data clustering method that simultaneously removes noninformative sensors and groups functional curves into clusters using the informative sensors. Functional principal component analysis is used to transform the multivariate functional data into a coefficient matrix for data reduction. We then model the transformed data by a Gaussian mixture distribution to perform model-based clustering with variable selection. Three types of penalties, the individual, variable, and group penalties, are considered to achieve automatic variable selection. Extensive simulations are conducted to assess the clustering and variable selection performance of the proposed methods. The application of the proposed methods to an engineering system with multiple sensors shows the promise of the methods and reveals interesting patterns in the sensor data.
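A minimal sketch of the first two stages (FPCA-style reduction to a coefficient matrix, then model-based clustering) follows; the penalized variable selection is omitted, and the simulated sensors and component counts are illustrative assumptions.

```python
# Sketch: per-sensor FPCA via SVD, stack the scores into a coefficient
# matrix, then cluster with a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n, t_pts, n_sensors, n_pcs = 60, 100, 3, 2
t = np.linspace(0, 1, t_pts)

# Two operating patterns, visible only in sensor 0; sensors 1-2 are noise.
curves = rng.normal(0, 0.2, (n, n_sensors, t_pts))
curves[:30, 0] += np.sin(2 * np.pi * t)
curves[30:, 0] += np.cos(2 * np.pi * t)

coeffs = []
for s in range(n_sensors):
    X = curves[:, s] - curves[:, s].mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    coeffs.append(X @ Vt[:n_pcs].T)      # (n, n_pcs) score matrix per sensor
coeffs = np.hstack(coeffs)               # the coefficient matrix

labels = GaussianMixture(n_components=2, random_state=0).fit_predict(coeffs)
print(labels)  # the first 30 curves should separate from the last 30
```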
Recently, graph neural networks (GNNs) have been widely used for document classification. However, most existing methods are based on static word co-occurrence graphs without sentence-level information, which poses three challenges: (1) word ambiguity, (2) word synonymity, and (3) dynamic contextual dependency. To address these challenges, we propose a novel GNN-based sparse structure learning model for inductive document classification. Specifically, a document-level graph is initially generated by a disjoint union of sentence-level word co-occurrence graphs. Our model collects a set of trainable edges connecting disjoint words between sentences and employs structure learning to sparsely select edges with dynamic contextual dependencies. Graphs with sparse structures can jointly exploit local and global contextual information in documents through GNNs. For inductive learning, the refined document graph is further fed into a general readout function for graph-level classification and optimization in an end-to-end manner. Extensive experiments on several real-world datasets demonstrate that the proposed model outperforms most state-of-the-art methods and reveal the necessity of learning sparse structures for each document.
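The graph-construction step can be sketched as follows, under assumed tokenization and window size, with occurrences of the same word in different sentences taken as the candidate inter-sentence edges; the trainable edge weights and the sparse structure learning itself are omitted.

```python
# Sketch: a document graph as the disjoint union of sentence-level word
# co-occurrence graphs, plus candidate inter-sentence edges.
from collections import defaultdict

doc = ["the cat sat on the mat", "a cat chased the dog"]
window = 2                        # assumed co-occurrence window size

nodes, sent_id, intra_edges = [], [], []
word_occurrences = defaultdict(list)        # word -> node ids across sentences
for s, sent in enumerate(doc):
    words = sent.split()
    offset = len(nodes)
    for i, w in enumerate(words):
        word_occurrences[w].append(offset + i)
        nodes.append(w)
        sent_id.append(s)
    for i in range(len(words)):             # sentence-level co-occurrence edges
        for j in range(i + 1, min(i + window + 1, len(words))):
            intra_edges.append((offset + i, offset + j))

# Candidate trainable edges: same word occurring in different sentences.
inter_edges = [(a, b) for occ in word_occurrences.values()
               for a in occ for b in occ if a < b and sent_id[a] != sent_id[b]]
print(len(nodes), "nodes,", len(intra_edges), "intra,", len(inter_edges), "inter")
```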
We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains its predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
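The Shapley approximation idea can be sketched generically: sample permutations of the players (here, nodes of a candidate subgraph) and average marginal contributions. The toy value function below stands in for a trained GNN's prediction score and is an assumption, as are the player names and sample count.

```python
# Generic Monte Carlo Shapley value estimation with a toy value function.
import random

players = ["n1", "n2", "n3", "n4"]   # e.g., nodes of a candidate subgraph

def value(coalition):
    # Stand-in for "GNN prediction score when only this coalition is kept".
    score = {"n1": 0.5, "n2": 0.3, "n3": 0.1, "n4": 0.1}
    bonus = 0.4 if {"n1", "n2"} <= set(coalition) else 0.0  # interaction term
    return sum(score[p] for p in coalition) + bonus

def mc_shapley(target, samples=5000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        perm = players[:]
        rng.shuffle(perm)
        k = perm.index(target)
        total += value(perm[: k + 1]) - value(perm[:k])  # marginal contribution
    return total / samples

for p in players:
    print(p, round(mc_shapley(p), 3))  # n1 and n2 split the interaction bonus
```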
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
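A minimal sketch of the kind of depth-scaling diagnostic such experiments rely on: given per-layer residual weights, track how the largest weight norm and the largest layer-to-layer increment behave as depth L grows. Random iid weights with an assumed 1/sqrt(L) scale stand in for trained ones here.

```python
# Diagnostic sketch: weight norms and layer-to-layer increments vs. depth.
import numpy as np

def diagnostics(weights):
    """weights: list of per-layer arrays W_1..W_L of equal shape."""
    norms = [np.linalg.norm(W) for W in weights]
    increments = [np.linalg.norm(W2 - W1)
                  for W1, W2 in zip(weights, weights[1:])]
    return max(norms), max(increments)

rng = np.random.default_rng(0)
for L in (8, 32, 128, 512):
    weights = [rng.normal(0.0, 1.0 / np.sqrt(L), (16, 16)) for _ in range(L)]
    max_norm, max_inc = diagnostics(weights)
    print(f"L={L:4d}  max|W|={max_norm:.3f}  max increment={max_inc:.3f}")
# For iid weights the increments stay as large as the weights themselves,
# so no smooth limiting function exists: the regime in which the neural ODE
# description breaks down.
```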
We describe the new field of mathematical analysis of deep learning. This field emerged around a list of research questions that were not answered within the classical framework of learning theory. These questions concern the outstanding generalization power of overparametrized neural networks, the role of depth in deep architectures, the apparent absence of the curse of dimensionality, the surprisingly successful optimization performance despite the non-convexity of the problem, understanding what features are learned, why deep architectures perform exceptionally well in physical problems, and how particular aspects of an architecture affect the behavior of a learning task. We present an overview of modern approaches that yield partial answers to these questions. For selected approaches, we describe the main ideas in more detail.
Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state-of-the-art RotatE. However, N-1, 1-N, and N-N predictions still remain challenging. In this work, we propose a novel translational distance-based approach for knowledge graph link prediction. The proposed method is two-fold: first, we extend RotatE from the 2D complex domain to a high-dimensional space with orthogonal transforms to model relations, yielding better modeling capacity. Second, the graph context is explicitly modeled via two directed context representations. These context representations are used as part of the distance scoring function to measure the plausibility of triples during training and inference. The proposed approach effectively improves prediction accuracy on the difficult N-1, 1-N, and N-N cases of the knowledge graph link prediction task. Experimental results show that it achieves better performance than the baseline RotatE on two benchmark datasets, especially on the dataset (FB15k-237) that has many nodes with high in-degree.
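A minimal sketch of the scoring idea follows, assuming a QR-based parametrization of the orthogonal transform (not necessarily the paper's construction) and omitting the directed graph-context representations.

```python
# Sketch: translational-distance scoring with a d-dimensional orthogonal
# relation transform, generalizing RotatE's 2D rotations.
import numpy as np

rng = np.random.default_rng(0)
d = 8

def random_orthogonal(d):
    Q, R = np.linalg.qr(rng.normal(size=(d, d)))
    return Q * np.sign(np.diag(R))   # sign fix for a well-spread orthogonal Q

h, t = rng.normal(size=d), rng.normal(size=d)
R_rel = random_orthogonal(d)         # one orthogonal transform per relation

def score(head, R, tail):
    return -np.linalg.norm(R @ head - tail)  # higher = more plausible triple

print(score(h, R_rel, t))            # random triple: clearly negative
print(score(h, R_rel, R_rel @ h))    # constructed true triple: 0, the maximum
```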