Routing in the Internet backbone relies on manually configured BGP policies, making it susceptible to configuration errors. Mitigating these errors requires verifying that a configuration is correct and that the protocol converges. However, network verification faces challenges of privacy and scalability. This paper addresses both by introducing a novel approach: leveraging privacy-preserving computation, specifically multiparty computation (MPC), to verify the correctness of BGP configurations in the Internet backbone. Our proposed solution not only addresses scalability concerns but also establishes a robust privacy framework. Through rigorous analysis, we demonstrate that our approach preserves privacy by disclosing no information beyond the query result, providing a comprehensive and secure solution to routing protocol verification in large-scale networks.
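As background on the MPC primitive underlying this approach, here is a toy sketch of additive secret sharing (an illustration of the general technique only, not the paper's verification protocol): each party splits its private value into random shares that sum to the value modulo a public prime, so an aggregate can be computed without revealing any individual input.

```python
import secrets

P = 2**61 - 1  # a public prime modulus

def share(value, n_parties):
    """Split `value` into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three parties each hold a private local value; none reveals theirs.
inputs = [17, 42, 5]
all_shares = [share(v, 3) for v in inputs]

# Party j locally sums the j-th share of every input ...
partial = [sum(s[j] for s in all_shares) % P for j in range(3)]
# ... and only the combined partials reveal the aggregate result.
print(sum(partial) % P)  # 64, with no individual input disclosed
```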
Statistical tools which satisfy rigorous privacy guarantees are necessary for modern data analysis. It is well-known that robustness against contamination is linked to differential privacy. Despite this fact, using multivariate medians for differentially private and robust multivariate location estimation has not been systematically studied. We develop novel finite-sample performance guarantees for differentially private multivariate depth-based medians, which are essentially sharp. Our results cover commonly used depth functions, such as the halfspace (or Tukey) depth, spatial depth, and the integrated dual depth. We show that under Cauchy marginals, the cost of heavy-tailed location estimation outweighs the cost of privacy. We demonstrate our results numerically using a Gaussian contamination model in dimensions up to d = 100, and compare them to a state-of-the-art private mean estimation algorithm. As a by-product of our investigation, we prove concentration inequalities for the output of the exponential mechanism about the maximizer of the population objective function. This bound applies to objective functions that satisfy a mild regularity condition.
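To make the mechanism concrete, below is a minimal one-dimensional sketch of a depth-based private median computed with the exponential mechanism (the paper's setting is multivariate; this simplification uses the 1-D halfspace depth, whose sensitivity to changing one data point is 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def tukey_depth_1d(x, data):
    """1-D halfspace depth: size of the smaller side of x."""
    return min(np.sum(data <= x), np.sum(data >= x))

def private_median(data, candidates, eps):
    """Exponential mechanism with utility = depth, sensitivity 1."""
    utilities = np.array([tukey_depth_1d(c, data) for c in candidates])
    # Subtract the max before exponentiating for numerical stability.
    logits = eps * utilities / 2.0
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(candidates, p=probs)

data = rng.standard_cauchy(500)  # heavy-tailed, as in the paper
grid = np.linspace(-10, 10, 2001)
print(private_median(data, grid, eps=1.0))
```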
Statistical analysis of bipartite networks frequently requires randomly sampling from the set of all bipartite networks with the same degree sequence as an observed network. Trade algorithms offer an efficient way to generate such samples by incrementally `trading' the positions of some of a network's edges. However, it is difficult to know how many trades are required to ensure that the sample is random. I propose a stopping rule that focuses on the distribution of distances between sampled networks and the observed network, and stops performing trades when this distribution stabilizes. Analyses demonstrate that, for over 300 different degree sequences, this stopping rule ensures a random sample with high probability, and that it is practical for use in empirical applications.
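For concreteness, a minimal sketch of a single `trade' (a checkerboard swap on the biadjacency matrix) follows; repeating it preserves both degree sequences, and the proposed stopping rule would monitor how the distances of successive samples from the observed network are distributed:

```python
import numpy as np

rng = np.random.default_rng(1)

def trade(B):
    """One degree-preserving edge trade on biadjacency matrix B."""
    rows, cols = np.nonzero(B)
    i = rng.choice(len(rows), size=2, replace=False)
    (r1, c1), (r2, c2) = (rows[i[0]], cols[i[0]]), (rows[i[1]], cols[i[1]])
    # Swap only if the opposite checkerboard corners are empty.
    if r1 != r2 and c1 != c2 and B[r1, c2] == 0 and B[r2, c1] == 0:
        B[r1, c1] = B[r2, c2] = 0
        B[r1, c2] = B[r2, c1] = 1

B = (rng.random((20, 30)) < 0.2).astype(int)
before = (B.sum(axis=1).copy(), B.sum(axis=0).copy())
for _ in range(1000):
    trade(B)
# Row and column degree sequences are unchanged after any number of trades.
assert (B.sum(axis=1) == before[0]).all() and (B.sum(axis=0) == before[1]).all()
```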
Many classical inferential approaches fail to hold when interference exists among the population units, that is, when the treatment status of one unit affects the potential outcomes of other units. In this setting, testing for spillover effects involves a non-sharp null hypothesis. One appealing way to tackle this non-sharpness is to construct conditional randomization tests such that the null is sharp on a restricted subpopulation. In randomized experiments, such conditional randomization tests retain finite-sample validity, but they can pose computational challenges: finding the appropriate subpopulations based on the experimental design can involve solving an NP-hard problem. In this paper, we view the network among the population units as a random variable instead of a fixed object, and propose a new approach that builds a conditional quasi-randomization test. Our main idea is to build the (non-sharp) null distribution of no spillover effects using random graph null models. We show that our method is exactly valid in finite samples under mild assumptions, and it exhibits higher power than competing methods, with substantial gains in complex experimental designs. Notably, the method reduces to a simple permutation test, making it easy to implement in practice. We conduct a simulation study to verify the finite-sample validity of our approach and illustrate our methodology by testing for interference in a weather insurance adoption experiment run in rural China.
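Because the method reduces to a permutation test, the following generic sketch shows the mechanics (here the null distribution comes from re-randomizing treatment; in the paper the resampling is additionally governed by a random graph null model, which this toy omits):

```python
import numpy as np

rng = np.random.default_rng(2)

def permutation_test(outcomes, treatment, n_perm=5000):
    """Two-sided permutation p-value for a difference in means."""
    def stat(z):
        return outcomes[z == 1].mean() - outcomes[z == 0].mean()
    observed = stat(treatment)
    null = np.array([stat(rng.permutation(treatment)) for _ in range(n_perm)])
    return np.mean(np.abs(null) >= np.abs(observed))

y = rng.normal(size=200)
z = rng.permutation(np.repeat([0, 1], 100))
y[z == 1] += 0.3  # inject a small effect
print(permutation_test(y, z))
```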
Phylogenetics is a branch of computational biology that studies the evolutionary relationships among biological entities. Its long history and numerous applications notwithstanding, inference of phylogenetic trees from sequence data remains challenging: the high complexity of tree space poses a significant obstacle for the current combinatorial and probabilistic techniques. In this paper, we adopt the framework of generative flow networks (GFlowNets) to tackle two core problems in phylogenetics: parsimony-based and Bayesian phylogenetic inference. Because GFlowNets are well-suited for sampling complex combinatorial structures, they are a natural choice for exploring and sampling from the multimodal posterior distribution over tree topologies and evolutionary distances. We demonstrate that our amortized posterior sampler, PhyloGFN, produces diverse and high-quality evolutionary hypotheses on real benchmark datasets. PhyloGFN is competitive with prior works in marginal likelihood estimation and achieves a closer fit to the target distribution than state-of-the-art variational inference methods. Our code is available at https://github.com/zmy1116/phylogfn.
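As a sketch of the training signal (assuming the standard trajectory-balance objective for GFlowNets; PhyloGFN's actual parameterization over tree topologies is more involved), the loss pushes the forward policy's flow to match the reward:

```python
import torch

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    """Trajectory balance: (log Z + sum log P_F - log R - sum log P_B)^2.

    log_pf, log_pb: (batch, T) per-step forward/backward log-probs,
    log_reward:     (batch,) log of the unnormalized posterior reward.
    """
    lhs = log_Z + log_pf.sum(dim=1)
    rhs = log_reward + log_pb.sum(dim=1)
    return ((lhs - rhs) ** 2).mean()

# Toy shapes: a batch of 8 trajectories of length 12.
log_Z = torch.zeros((), requires_grad=True)
loss = trajectory_balance_loss(
    log_Z, torch.randn(8, 12), torch.randn(8, 12), torch.randn(8))
loss.backward()
```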
Establishing efficient and robust covert channels is crucial for secure communication within insecure network environments. With its inherent benefits of decentralization and anonymization, blockchain has gained considerable attention as a substrate for covert channels. To guarantee a highly secure covert channel, channel negotiation should be contactless before communication, carrier transaction features must be indistinguishable from those of normal transactions during communication, and communication identities must be untraceable after communication. Such a full-lifecycle covert channel is indispensable for defending against a versatile adversary who comprehensively monitors both communicating parties (e.g., on-chain and off-chain), yet it has not been thoroughly investigated in the literature. We make the first effort toward a full-lifecycle covert channel, proposing a novel blockchain-based covert channel named ABC-Channel. We tackle a series of challenges, including off-chain contact dependency, masquerading that becomes harder as transaction volume grows, and identities that must evolve over time while remaining communicable yet untraceable, to achieve contactless channel negotiation, indistinguishable transaction features, and untraceable communication identities, respectively. We develop a working prototype of ABC-Channel and conduct extensive tests on the Bitcoin testnet. The experimental results demonstrate that ABC-Channel delivers the intended covert capabilities securely and, in comparison to existing methods, exhibits state-of-the-art transmission efficiency.
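One generic way to realize time-evolving yet synchronizable identities (a hypothetical illustration, not ABC-Channel's actual construction) is to derive a fresh identity per epoch from a pre-shared seed, so both parties can compute the next identity while outsiders cannot link successive ones:

```python
import hashlib

def identity_for_epoch(seed: bytes, epoch: int) -> str:
    """Hypothetical epoch-keyed identity: H(seed || epoch)."""
    return hashlib.sha256(seed + epoch.to_bytes(8, "big")).hexdigest()

seed = b"pre-shared secret between the two parties"
# Both parties derive the same identity per epoch; without the seed,
# successive identities look unrelated.
print(identity_for_epoch(seed, 0))
print(identity_for_epoch(seed, 1))
```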
We propose a method for obtaining parsimonious decompositions of networks into higher-order interactions, which can take the form of arbitrary motifs. The method is based on a class of analytically solvable generative models in which vertices are connected via explicit copies of motifs; combined with non-parametric priors, these models allow us to infer higher-order interactions from dyadic graph data without any prior knowledge of the types or frequencies of such interactions. Crucially, we also consider degree-corrected models that correctly reflect the degree distribution of the network and consequently prove to be a better fit for many real-world networks than non-degree-corrected models. We test the presented approach on simulated data, for which we recover the set of underlying higher-order interactions with high accuracy. For empirical networks, the method identifies concise sets of atomic subgraphs from among thousands of candidates that cover a large fraction of edges and include higher-order interactions of known structural and functional significance. The method not only produces an explicit higher-order representation of the network but also fits the network to analytically tractable models, opening new avenues for the systematic study of higher-order network structures.
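For intuition about motif decompositions (a naive greedy illustration, not the paper's Bayesian non-parametric inference), the snippet below covers a graph's edges with explicit triangle copies and reports the fraction of edges explained:

```python
import networkx as nx

G = nx.karate_club_graph()
uncovered = set(map(frozenset, G.edges()))
triangles = []

# Greedily claim triangles whose three edges are all still uncovered.
for u, v in G.edges():
    for w in set(G[u]) & set(G[v]):
        sides = [frozenset((u, v)), frozenset((u, w)), frozenset((v, w))]
        if all(s in uncovered for s in sides):
            triangles.append((u, v, w))
            uncovered.difference_update(sides)

covered = G.number_of_edges() - len(uncovered)
print(f"{len(triangles)} triangles cover {covered}/{G.number_of_edges()} edges")
```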
As more devices connect to the internet, it becomes crucial to address their limitations and basic security needs. While much research focuses on using ML and DL to tackle security challenges, the practicality and feasibility of deploying these methods in real-time settings is often overlooked, owing to the constrained processing power and memory of devices such as IoT devices, as well as concerns about the generalizability of the approaches. Focusing on the detection of DNS-tunneling attacks in a router as a case study, we present an end-to-end process designed to address these challenges, spanning from the development of a lightweight DNS-tunneling detection model to its integration into a resource-constrained device for real-time detection. Through our experiments, we demonstrate that training the ML model on stateless features, chosen to be independent of the network configuration, yields highly accurate results. Deploying this carefully crafted model, optimized for embedded devices across diverse environments, achieved high DNS-tunneling detection accuracy with minimal latency. With this work, we aim to encourage solutions that strike a balance between theoretical advances and the practical applicability of ML approaches in the ever-evolving landscape of device security.
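To illustrate what stateless, configuration-independent features can look like (a sketch under assumed features; the paper's exact feature set is not reproduced here), each DNS query name is scored in isolation, e.g., by length, label count, and character entropy, all of which tend to be elevated for tunneled payloads:

```python
import math
from collections import Counter

def entropy(s: str) -> float:
    """Shannon entropy of the characters in s (bits/char)."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def stateless_features(qname: str) -> dict:
    """Per-query features requiring no connection or network state."""
    labels = qname.rstrip(".").split(".")
    sub = ".".join(labels[:-2])  # crude subdomain extraction
    return {
        "qname_len": len(qname),
        "label_count": len(labels),
        "subdomain_entropy": entropy(sub) if sub else 0.0,
        "digit_ratio": sum(ch.isdigit() for ch in qname) / len(qname),
    }

print(stateless_features("aGVsbG8gd29ybGQx9f2.t.example.com"))
print(stateless_features("www.example.com"))
```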
Neural network models have shown outstanding performance and have successfully resolved complex problems in various fields. However, most of these models are viewed as black boxes and require large amounts of data for development. Consequently, in situations with limited data, constructing appropriate models becomes challenging due to this lack of transparency and scarcity of data. To tackle these challenges, this study proposes a grey-informed neural network (GINN). The GINN constrains the output of the neural network to follow the differential equation model of the grey system, improving interpretability. Moreover, incorporating prior knowledge from grey system theory enables traditional neural networks to handle small data samples effectively. Empirically, our proposed model uncovers underlying patterns in real-world data and produces reliable forecasts from small samples.
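The grey-system prior that a GINN builds on is the classical GM(1,1) model, whose whitening equation dx^(1)/dt + a x^(1) = b is fit by least squares on the accumulated series. A minimal sketch of standard GM(1,1) follows, independent of the paper's network architecture:

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit GM(1,1) to series x0 and forecast `horizon` steps ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)     # de-accumulate

series = [2.87, 3.28, 3.34, 3.77, 3.85, 4.17]
print(gm11_forecast(series, horizon=3)[-3:])
```

A GINN-style training objective could then, for example, penalize the network's outputs for deviating from this grey-model dynamic alongside the usual data-fitting loss.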
Decoding EEG signals is crucial for unraveling the workings of the human brain and advancing brain-computer interfaces. Traditional machine learning algorithms have been hindered by the high noise levels and inherent inter-person variation in EEG signals. Recent advances in deep neural networks (DNNs) have shown promise owing to their nonlinear modeling capabilities, but DNNs still struggle to decode EEG samples from unseen individuals. To address this, this paper introduces a novel approach that incorporates conditional identification information for each individual into the neural network, enhancing the model's representation through the synergistic interaction of EEG signals and personal traits. We test our model on the WithMe dataset and demonstrate that including these identifiers substantially boosts accuracy both for subjects in the training set and for unseen subjects. This enhancement suggests promising potential for improving EEG interpretability and for understanding the relevant identification features.
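A minimal sketch of the conditioning idea (assuming a learned subject-identity embedding concatenated with the EEG representation; the paper's actual architecture and the WithMe specifics are not reproduced here):

```python
import torch
import torch.nn as nn

class ConditionedEEGNet(nn.Module):
    """Toy classifier conditioned on a subject-identity embedding."""
    def __init__(self, n_features, n_subjects, n_classes, id_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.subject_embed = nn.Embedding(n_subjects, id_dim)
        self.head = nn.Linear(64 + id_dim, n_classes)

    def forward(self, eeg, subject_id):
        h = self.encoder(eeg)
        e = self.subject_embed(subject_id)     # identity information
        return self.head(torch.cat([h, e], dim=-1))

model = ConditionedEEGNet(n_features=310, n_subjects=20, n_classes=2)
logits = model(torch.randn(8, 310), torch.randint(0, 20, (8,)))
print(logits.shape)  # torch.Size([8, 2])
```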
Graph Convolutional Neural Networks (GCNs) possess strong capabilities for processing graph data in non-grid domains: they capture the topological structure and node features of a graph and integrate them into the nodes' final representations. GCNs have been extensively studied in various fields, such as recommendation systems, social networks, and protein molecular structures. With the growing adoption of graph neural networks, research has focused on improving their performance while compressing their size. In this work, we propose a plug-in module named the Graph Knowledge Enhancement and Distillation Module (GKEDM), which enhances node representations and improves the performance of GCNs by extracting and aggregating graph information via a multi-head attention mechanism. Furthermore, GKEDM can serve as an auxiliary medium for knowledge distillation: with a specially designed attention distillation method, it distills the knowledge of large teacher models into compact, high-performance student models. Experiments on multiple datasets demonstrate that GKEDM significantly improves the performance of various GCNs with minimal overhead and efficiently transfers distilled knowledge from large teacher networks to small student networks via attention distillation.
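A sketch of the attention-distillation idea, formulated generically under the assumption that teacher and student expose per-edge attention weights (not GKEDM's exact loss):

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(att_teacher, att_student):
    """Match student edge attention to the teacher's; both (E, heads)."""
    # Normalize per head so maps of different scales are comparable.
    t = F.softmax(att_teacher, dim=0)
    s = F.log_softmax(att_student, dim=0)
    return F.kl_div(s, t, reduction="batchmean")

E, heads = 1000, 4
loss = attention_distillation_loss(torch.randn(E, heads),
                                   torch.randn(E, heads))
print(loss.item())
```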