IP addresses and port numbers (hereafter, network-based identifiers) in packets are the two major identifiers that network devices use to determine the systems and roles of hosts sending and receiving packets, for purposes such as access control lists and priority control. However, in modern cloud system designs such as the microservices architecture, network-based identifiers are inefficient for this purpose. Due to autoscaling and automatic deployment of new software, the many VMs and containers that make up a system (hereafter, workloads) are frequently created and deleted on whichever servers have available resources, while network-based identifiers are assigned based on the servers where those containers and VMs happen to run. In this paper, we propose a new system, Acila, that classifies packets at network devices based on the identity of a workload, by marking packets with the necessary information extracted from that identity, which is usually stored in orchestrators or controllers. We then implement Acila and show that packet filtering and priority control can be built on it, that its entries are more compact than those of the conventional network-based identifier approach, and that it incurs little performance overhead.
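A minimal sketch of the marking idea in Python (the label format, role names, and functions below are hypothetical illustrations, not Acila's actual wire format or API):

```python
# Hypothetical sketch of identity-based packet marking: a controller maps
# workload identities to short labels, and a network device filters on the
# label instead of on IP/port tuples.

# Identity registry, as an orchestrator/controller might hold it.
WORKLOAD_LABELS = {
    "frontend": 0x01,   # role "frontend" -> label 1
    "payments": 0x02,   # role "payments" -> label 2
}

def mark(packet: bytes, role: str) -> bytes:
    """Prepend a 1-byte identity label extracted from the workload's role."""
    return bytes([WORKLOAD_LABELS[role]]) + packet

def allow(packet: bytes, acl: set[int]) -> bool:
    """Device-side check: one ACL entry per role, regardless of how many
    VMs/containers currently implement that role."""
    return packet[0] in acl

# One entry covers every 'payments' workload, however often it is rescheduled.
acl = {WORKLOAD_LABELS["payments"]}
print(allow(mark(b"\x45...", "payments"), acl))  # True
```

The point of the sketch is the entry count: the ACL holds one entry per role rather than one per address, so it is unaffected by workload churn.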
Multi-access Edge Computing (MEC) is a 5G-enabling solution that aims to bring cloud-computing capabilities closer to end-users. This paper focuses on mitigation techniques against Distributed Denial-of-Service (DDoS) attacks in the context of 5G MEC, providing solutions that involve the virtualized environment and the management entities of the MEC architecture. The proposed solutions aim to reduce the risk of affecting legitimate traffic during DDoS attacks. Our work builds on the idea of a network flow collector that feeds an anomaly detection system based on artificial-intelligence techniques and, as an improvement over previous work, redirects traffic flagged as anomalous to a separate virtual machine for isolation. This virtual machine analyzes the traffic with deep-packet-inspection tools and continues to provide services until a final verdict is reached. By isolating the suspicious traffic, we decrease the risk of compromising the virtual machine that serves legitimate users. The management entities of the MEC architecture make it possible to re-instantiate or reconfigure the virtual machines; hence, if the machine inspecting the isolated traffic crashes because of an attack, it can be restored while the services provided to legitimate users remain unaffected.
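A hedged sketch of the redirect-on-anomaly step (component names, addresses, and the threshold are illustrative assumptions, not the paper's actual implementation):

```python
# Hypothetical sketch: flows flagged by an anomaly detector are steered to an
# isolation VM for deep packet inspection, while normal flows keep reaching
# the service VM.

SERVICE_VM = "10.0.0.10"
ISOLATION_VM = "10.0.0.99"
THRESHOLD = 0.8  # assumed anomaly-score cutoff

def next_hop(flow_features: dict, detector) -> str:
    score = detector(flow_features)  # AI-based anomaly score in [0, 1]
    return ISOLATION_VM if score >= THRESHOLD else SERVICE_VM

# Toy detector: flags flows with abnormally high packet rates.
detector = lambda f: min(1.0, f["pkts_per_s"] / 10_000)
print(next_hop({"pkts_per_s": 50_000}, detector))  # 10.0.0.99 (isolated)
print(next_hop({"pkts_per_s": 200}, detector))     # 10.0.0.10 (service)
```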
The rise of NewSpace provides a platform for small and medium businesses to commercially launch and operate satellites in space. In contrast to traditional satellites, NewSpace offers the opportunity to deliver computing platforms in space. However, computational resources in space are usually expensive, and satellites may not be able to compute all tasks locally. Computation Offloading (CO), a popular practice in Edge/Fog computing, could prove effective in saving energy and time in this resource-limited space ecosystem. However, CO alters the threat and risk profile of the system. In this paper, we analyse security issues in space systems and propose a security-aware algorithm for CO. Our method is based on the reinforcement learning technique Deep Deterministic Policy Gradient (DDPG). We show, using Monte Carlo simulations, that our algorithm is effective under a variety of environment and network conditions, and we provide novel insights into the challenge of optimally locating computation.
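A minimal DDPG update sketch in Python/PyTorch (generic: the replay buffer and target networks of full DDPG are omitted for brevity, and the state/action layout is an assumed illustration, not the paper's exact design):

```python
# Minimal DDPG update: the actor outputs a continuous offloading decision,
# e.g., the fraction of a task to offload, and the critic scores it.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 1   # assumed sizes for illustration

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Sigmoid())  # action in [0, 1]
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(s, a, r, s2, gamma=0.99):
    # Critic: regress Q(s, a) toward r + gamma * Q(s2, actor(s2)).
    with torch.no_grad():
        target = r + gamma * critic(torch.cat([s2, actor(s2)], dim=1))
    critic_loss = ((critic(torch.cat([s, a], dim=1)) - target) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```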
In topology optimization, the state of a structure is typically obtained by numerically evaluating a discretized PDE-based model. The degrees of freedom of such a model can be partitioned into free and prescribed sets to define the boundary conditions. A multi-partition problem involves multiple partitions of the same discretization, typically corresponding to different loading scenarios. As a result, solving multi-partition problems requires multiple factorizations/preconditionings of the system matrix, at high computational cost. In this paper, a novel method is proposed that uses static condensation to efficiently calculate the responses and accompanying design sensitivities in such multi-partition problems, for use in gradient-based topology optimization. A main problem class that benefits from the proposed method is the topology optimization of small-displacement multi-input-multi-output compliant mechanisms; however, the method is applicable to any linear problem. We present its formulation and an algorithmic complexity analysis that estimates the computational advantages for both direct and iterative solution methods, verified by numerical experiments. It is demonstrated that substantial gains are achievable for large-scale multi-partition problems, especially for problems in which a small number of degrees of freedom fully describes the performance of the structure and the different partitions are largely similar. A major contribution to the gain is that no large adjoint analyses are required to obtain the sensitivities of the performance measure.
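The reuse that static condensation enables can be seen from the standard identity below (a textbook Guyan-style reduction; the partition labels m for the retained performance-describing DOFs and s for the condensed DOFs are illustrative): once $K_{ss}$ is factorized a single time, each partition's response follows from a small dense system in $u_m$.

```latex
% Standard static condensation; m = retained (performance) DOFs, s = condensed DOFs.
\begin{bmatrix} K_{mm} & K_{ms} \\ K_{sm} & K_{ss} \end{bmatrix}
\begin{bmatrix} u_m \\ u_s \end{bmatrix}
=
\begin{bmatrix} f_m \\ 0 \end{bmatrix}
\quad\Rightarrow\quad
\underbrace{\bigl( K_{mm} - K_{ms} K_{ss}^{-1} K_{sm} \bigr)}_{\tilde{K}}\, u_m = f_m .
```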
Blockchain technologies have been boosting the development of data-driven decentralized services in a wide range of fields. However, in the spirit of full transparency, many public blockchains, such as Ethereum, a leading public blockchain platform, expose all types of data to the public. Moreover, on-chain persistence of large data is prohibitively expensive. Together, these factors make it difficult to share fairly large private data while preserving the attractive properties of public blockchains. Although directly encrypting data for on-chain persistence can provide confidentiality, new challenges such as key sharing, access control, and proving legal rights remain open. Meanwhile, although decentralized storage systems such as IPFS make persistence of fairly large data possible, cross-chain collaboration still requires secure and effective protocols. In this paper, we propose Sunspot, a decentralized framework for privacy-preserving data sharing with an access-control mechanism, to solve these issues. We also show the practicality and applicability of Sunspot through MyPub, a decentralized privacy-preserving publishing platform built on it. Furthermore, we evaluate the security, privacy, and performance of Sunspot through theoretical analysis and experiments.
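A hedged sketch of the general pattern the abstract motivates (encrypt, persist the ciphertext off-chain, anchor a small commitment on-chain); all names are illustrative and this is not Sunspot's actual protocol:

```python
# Encrypt the data, persist the ciphertext off-chain (e.g., on IPFS), and
# anchor only a small commitment on-chain.
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # key sharing / access control is the hard part
ciphertext = Fernet(key).encrypt(b"fairly large private document ...")

cid = hashlib.sha256(ciphertext).hexdigest()  # stand-in for an IPFS content ID
on_chain_record = {"cid": cid, "owner": "0xabc..."}  # cheap to persist on-chain

# Anyone granted `key` by the access-control mechanism can fetch the
# ciphertext by `cid` and verify its integrity before decrypting.
assert hashlib.sha256(ciphertext).hexdigest() == on_chain_record["cid"]
print(Fernet(key).decrypt(ciphertext))
```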
Network embedding (or graph embedding) has been widely used in many real-world applications. However, existing methods mainly focus on networks with nodes and edges of a single type and cannot scale to large networks. Many real-world networks consist of billions of nodes and edges of multiple types, with each node associated with different attributes. In this paper, we formalize the problem of embedding learning for the Attributed Multiplex Heterogeneous Network and propose a unified framework to address it. The framework supports both transductive and inductive learning. We also give a theoretical analysis of the proposed framework, showing its connection with previous works and proving its better generalization ability. We conduct systematic evaluations of the proposed framework on four challenging datasets of different genres: Amazon, YouTube, Twitter, and Alibaba. Experimental results demonstrate that, with the embeddings learned by the proposed framework, we achieve statistically significant improvements (e.g., a 5.99-28.23% lift in F1 score; p<<0.01, t-test) over previous state-of-the-art methods for link prediction. The framework has also been successfully deployed in the recommendation system of Alibaba, a world-leading e-commerce company. Offline A/B tests on product recommendation further confirm the framework's effectiveness and efficiency in practice.
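As an illustration only (the abstract does not spell out the model, so this sketch merely shows one plausible way a node can carry different embeddings per edge type in a multiplex network, not the paper's actual architecture):

```python
# Each node gets a shared base embedding plus an edge-type-specific offset,
# so the same node presents different views per edge type.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, edge_types = 1000, 64, 3

base = rng.normal(size=(n_nodes, dim))                       # shared across types
offsets = rng.normal(size=(edge_types, n_nodes, dim)) * 0.1  # per edge type

def embed(node: int, etype: int, alpha: float = 0.5) -> np.ndarray:
    """View of `node` under edge type `etype`; alpha weights the offset."""
    return base[node] + alpha * offsets[etype, node]

# The same node pair scores differently under each edge type:
u, v = 1, 2
for etype in range(edge_types):
    print(etype, float(embed(u, etype) @ embed(v, etype)))
```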
Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice. We investigate how to optimize existing deep RL algorithms for modern computers, specifically for a combination of CPUs and GPUs. We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm. All neural network computations use GPUs, accelerating both data collection and training. Our results include using an entire DGX-1 to learn successful strategies in Atari games in mere minutes, using both synchronous and asynchronous algorithms.
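A hedged sketch of the synchronous-sampling pattern (the toy environment and policy are illustrative stand-ins, not the paper's framework):

```python
# Step many simulator instances in lockstep so that observations arrive as
# one batch, which a GPU policy can process in a single forward pass.
import numpy as np

class ToyEnv:
    def reset(self):   return np.zeros(4)
    def step(self, a): return np.random.randn(4), float(a), False  # obs, reward, done

envs = [ToyEnv() for _ in range(64)]        # many parallel simulators
obs = np.stack([e.reset() for e in envs])   # (64, 4) observation batch

def policy(obs_batch):                      # stands in for a batched GPU forward pass
    return (obs_batch.sum(axis=1) > 0).astype(int)

for _ in range(100):                        # one sampling iteration
    actions = policy(obs)                   # single batched inference
    results = [e.step(a) for e, a in zip(envs, actions)]
    obs = np.stack([o for o, _, _ in results])
```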
Currently, neural network architecture design is mostly guided by the \emph{indirect} metric of computational complexity, i.e., FLOPs. However, the \emph{direct} metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. This work therefore proposes evaluating the direct metric on the target platform, rather than considering FLOPs alone. Based on a series of controlled experiments, it derives several practical \emph{guidelines} for efficient network design. Accordingly, a new architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation experiments verify that our model is the state of the art in terms of the speed-accuracy tradeoff.
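The channel shuffle operation used throughout the ShuffleNet family can be written in a few lines (a standard re-implementation of the op; the tensor sizes are illustrative):

```python
# Reshape channels into groups, transpose, and flatten so information mixes
# across groups at negligible FLOP and memory-access cost.
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    n, c, h, w = x.shape
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(n, c, h, w))

x = torch.arange(8, dtype=torch.float32).view(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).flatten().tolist())
# [0., 4., 1., 5., 2., 6., 3., 7.] -- channels interleaved across the 2 groups
```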
Network embedding has attracted considerable research attention recently. However, existing methods are incapable of handling billion-scale networks, because they are computationally expensive and, at the same time, difficult to accelerate with distributed computing schemes. To address these problems, we propose RandNE, a novel and simple billion-scale network embedding method. Specifically, we propose a Gaussian random projection approach that maps the network into a low-dimensional embedding space while preserving the high-order proximities between nodes. To reduce the time complexity, we design an iterative projection procedure that avoids explicitly calculating the high-order proximities. Theoretical analysis shows that our method is extremely efficient and, as it requires no communication during the calculation, friendly to distributed computing schemes. We demonstrate the efficacy of RandNE over state-of-the-art methods on network reconstruction and link prediction tasks, using multiple datasets whose scales range from thousands to billions of nodes and edges.
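A compact sketch of the iterative projection described in the abstract (the order weights and dimensions are illustrative assumptions): powers of the adjacency matrix are never formed; each order costs one sparse-matrix-times-dense-matrix product on the previous projection.

```python
import numpy as np
import scipy.sparse as sp

def randne(A: sp.csr_matrix, dim=128, order=3, weights=(1.0, 1.0, 1.0, 1.0), seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=1.0 / np.sqrt(dim), size=(n, dim))  # U_0: Gaussian projection
    emb = weights[0] * U
    for q in range(1, order + 1):
        U = A @ U                 # U_q = A @ U_{q-1}: avoids forming A^q explicitly
        emb += weights[q] * U
    return emb

A = sp.random(10_000, 10_000, density=1e-4, format="csr", random_state=0)
print(randne(A).shape)  # (10000, 128)
```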
Network Virtualization is one of the most promising technologies for future networking: it is considered a critical IT resource that connects distributed, virtualized cloud-computing services and components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist simultaneously on the same shared physical infrastructure. A crucial problem in Network Virtualization is Virtual Network Embedding (VNE), which allocates physical substrate resources to virtual network requests. In this paper, we investigate VNE strategies and related resource-allocation issues for an Infrastructure Provider (InP) that must efficiently embed the virtual networks requested by Virtual Network Operators (VNOs) sharing its infrastructure. To this end, we design a heuristic VNE algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that the proposed scheme significantly improves VNE performance, enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests compared to prior algorithms.
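A hedged toy baseline for intuition (the paper's heuristic coordinates node and link mapping, which this greedy sketch only approximates; all names and parameters are illustrative):

```python
# Map each virtual node to a feasible substrate node with the most spare CPU,
# then route each virtual link over a shortest path with enough bandwidth.
import networkx as nx

def embed_vnr(substrate: nx.Graph, vnr: nx.Graph):
    node_map = {}
    for v, d in sorted(vnr.nodes(data=True), key=lambda x: -x[1]["cpu"]):
        candidates = [s for s, sd in substrate.nodes(data=True)
                      if sd["cpu"] >= d["cpu"] and s not in node_map.values()]
        if not candidates:
            return None                              # reject the request
        node_map[v] = max(candidates, key=lambda s: substrate.nodes[s]["cpu"])
        substrate.nodes[node_map[v]]["cpu"] -= d["cpu"]
    link_map = {}
    for u, v, d in vnr.edges(data=True):
        ok = [(a, b) for a, b in substrate.edges
              if substrate.edges[a, b]["bw"] >= d["bw"]]
        try:
            path = nx.shortest_path(substrate.edge_subgraph(ok),
                                    node_map[u], node_map[v])
        except (nx.NodeNotFound, nx.NetworkXNoPath):
            return None                              # no feasible route
        for a, b in zip(path, path[1:]):
            substrate.edges[a, b]["bw"] -= d["bw"]
        link_map[(u, v)] = path
    return node_map, link_map
```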
Embedding network data into a low-dimensional vector space has shown promising performance for many real-world applications, such as node classification and entity retrieval. However, most existing methods focus only on the network structure. In social networks, besides the structure, there is also rich information about the social actors, such as user profiles in friendship networks and textual content in citation networks. This rich attribute information reveals the homophily effect and exerts a strong influence on the formation of social networks. In this paper, we exploit attributes as a rich evidence source in social networks to improve network embedding. We propose a generic Social Network Embedding framework (SNE) that learns representations for social actors (i.e., nodes) by preserving both structural proximity and attribute proximity: while structural proximity captures the global network structure, attribute proximity accounts for the homophily effect. To justify our proposal, we conduct extensive experiments on four real-world social networks. Compared with state-of-the-art network embedding approaches, SNE learns more informative representations, achieving substantial gains on link prediction and node classification. Specifically, SNE significantly outperforms node2vec, with an 8.2% relative improvement on the link prediction task and a 12.7% gain on the node classification task.
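As an illustration only (SNE's actual architecture is not given in the abstract): a minimal sketch of representing a node by its structural (ID) embedding concatenated with a projection of its attributes, so that one similarity score can preserve both proximities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, attr_dim, d_id, d_attr = 500, 30, 32, 32

id_emb = rng.normal(size=(n_nodes, d_id)) * 0.1   # learned per node (structure)
W = rng.normal(size=(attr_dim, d_attr)) * 0.1     # shared attribute projection
X = rng.random(size=(n_nodes, attr_dim))          # node attributes

def represent(v: int) -> np.ndarray:
    return np.concatenate([id_emb[v], X[v] @ W])  # structure || attributes

def edge_score(u: int, v: int) -> float:
    return float(represent(u) @ represent(v))     # trained to be high for edges

print(edge_score(1, 2))
```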