While the concept of the Artificial Intelligence of Things (AIoT) is booming, computation- and/or communication-intensive tasks composed of several sub-tasks are gradually moving from centralized deployment to edge-side deployment; the idea of edge computing likewise pushes intelligent services down to the local level. In practical scenarios such as dynamic edge computing networks (DECN), however, fluctuations in the available computing resources of intermediate servers, changes in bandwidth during data transmission, and variations in the amount of data a service carries make service reliability difficult to guarantee and render existing reliability evaluation methods inaccurate. To study distributed service deployment strategies under this background, this paper proposes a reliability evaluation method (REMR) based on the lower-boundary rule under a time constraint, which measures how reasonable a service deployment plan is when combined with DECN. In this scenario the main concern is time delay, which is affected by three quantitative factors: the time to store and forward data packets, the data transmission time, and the time to execute sub-tasks on the node devices; the latter two are dynamic. In the actual calculation, based on the idea of minimal paths, we first find the solution sets that can meet the requirements under the current deployment. The reliability of the service supported by these solution sets is then derived from the inclusion-exclusion principle, combined with the distributions of available transmission bandwidth and of available node computing resources. Besides an illustrative example, NS-3 is used together with the Google cluster data set to verify, by simulation, the calculated reliability of the designed service deployment plan.
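The inclusion-exclusion computation over minimal paths mentioned in this abstract can be sketched as follows. This is a generic illustration, not REMR itself: it assumes each minimal path is a set of components with known, independent success probabilities, whereas the paper derives these probabilities from bandwidth and computing-resource distributions.

```python
from itertools import combinations

def service_reliability(minimal_paths, comp_prob):
    """Reliability of a service that works iff at least one minimal path
    has all of its components working.

    minimal_paths: list of sets of component ids (each set is one minimal path)
    comp_prob: dict mapping component id -> independent success probability
    """
    total = 0.0
    for r in range(1, len(minimal_paths) + 1):
        sign = 1.0 if r % 2 == 1 else -1.0
        for subset in combinations(minimal_paths, r):
            # intersection event: every component used by any path in the
            # subset must work; with independence this is a plain product
            comps = set().union(*subset)
            p = 1.0
            for c in comps:
                p *= comp_prob[c]
            total += sign * p
    return total
```

Two disjoint single-component paths with probabilities 0.9 and 0.8, for example, yield 0.9 + 0.8 - 0.72 = 0.98, matching the parallel-redundancy intuition.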
Unmanned aerial vehicles (UAVs) and Terahertz (THz) technology are envisioned to play paramount roles in next-generation wireless communications. In this paper, we present a novel secure UAV-assisted mobile relaying system operating at THz bands for data acquisition from multiple ground user equipments (UEs) towards a destination. We assume that the UAV-mounted relay may act, besides providing relaying services, as a potential eavesdropper called the untrusted UAV-relay (UUR). To safeguard end-to-end communications, we present a secure two-phase transmission strategy with cooperative jamming. Then, we devise an optimization framework in terms of a new measure $-$ secrecy energy efficiency (SEE), defined as the ratio of achievable average secrecy rate to average system power consumption, which enables us to obtain the best possible security level while taking UUR's inherent flight power limitation into account. For the sake of quality-of-service fairness amongst all the UEs, we aim to maximize the minimum SEE (MSEE) performance via the joint design of key system parameters, including UUR's trajectory and velocity, communication scheduling, and network power allocation. Since the formulated problem is a mixed-integer nonconvex optimization and computationally intractable, we decouple it into four subproblems and propose alternating algorithms to solve it efficiently via greedy/sequential block successive convex approximation and non-linear fractional programming techniques. Numerical results demonstrate significant MSEE performance improvement of our designs compared to other known benchmarks.
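The non-linear fractional programming step invoked above is commonly handled with Dinkelbach's parametric method, which turns a ratio objective such as SEE (secrecy rate over power) into a sequence of subtractive subproblems. The sketch below is a minimal, generic version over a finite candidate set; the actual paper optimizes continuous trajectory/scheduling/power variables, and the scalar example functions are purely illustrative.

```python
def dinkelbach(numer, denom, candidates, tol=1e-9, max_iter=100):
    """Maximize numer(x)/denom(x), with denom(x) > 0, over a finite
    candidate set via Dinkelbach's parametric method."""
    x = candidates[0]
    lam = numer(x) / denom(x)
    for _ in range(max_iter):
        # inner step: maximize the subtractive surrogate numer - lam * denom
        x = max(candidates, key=lambda c: numer(c) - lam * denom(c))
        gap = numer(x) - lam * denom(x)
        if gap < tol:      # zero gap <=> lam is the optimal ratio
            break
        lam = numer(x) / denom(x)
    return x, lam
```

On the toy ratio (x + 1) / (x^2 + 1) over a fine grid, the method converges in a handful of iterations to x ≈ sqrt(2) - 1 with optimal ratio (sqrt(2) + 1) / 2.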
Advances in quantum computing make Shor's algorithm for factorising numbers ever more tractable. This threatens the security of any cryptographic system whose security relies on the difficulty of factorisation; it also threatens methods based on discrete logarithms, such as the Diffie-Hellman key exchange. For a cryptographic system to remain secure against a quantum adversary, we need to build methods on hard mathematical problems that are not susceptible to Shor's algorithm; such methods constitute Post-Quantum Cryptography (PQC). While high-powered computing devices may be able to run these new methods, we need to investigate how well they run on devices with limited power. This paper outlines an evaluation framework for PQC within constrained devices, and contributes to the area by providing benchmarks of the front-running algorithms on a popular low-power single-board device.
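A benchmarking framework of the kind described typically times each primitive (key generation, encapsulation/signing, verification) repeatedly and reports robust statistics, since single timings on a low-power board are noisy. The harness below is a generic sketch under that assumption; the timed operation is a placeholder, not a call into any specific PQC library.

```python
import statistics
import time

def benchmark(op, runs=50):
    """Time one cryptographic primitive `runs` times and report summary
    statistics in milliseconds; median and p95 are less sensitive to
    scheduler noise on constrained devices than the mean."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        op()                      # e.g. keygen(), sign(msg), verify(sig)
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (runs - 1))],
    }
```

On a real device one would pin the CPU frequency governor and discard warm-up runs before collecting the reported samples.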
For data streaming applications, existing solutions are not yet able to close the gap between high data rates and low delay. This work considers the problem of data streaming under mixed delay constraints over a single communication channel with delayed feedback. We propose a novel layered adaptive causal random linear network coding (LAC-RLNC) approach with forward error correction. LAC-RLNC is a variable-to-variable coding scheme: variable amounts of information data are recovered at the receiver over variable short block lengths and rates. Specifically, for data streaming with base and enhancement layers of content, we characterize a high-dimensional throughput-delay trade-off managed by the adaptive causal layering coding scheme. The base layer is designed to satisfy the strict delay constraints, as it contains the data needed to sustain the streaming service. The sender can then manage the throughput-delay trade-off of the second layer by adjusting the retransmission rate a priori and a posteriori, since the enhancement layer, which contains the remaining data that augments the quality of the streaming service, has relaxed delay constraints. We numerically show that the layered network coding approach can dramatically increase performance. We demonstrate that, compared with the non-layered approach, LAC-RLNC gains a factor of three in mean and maximum delay for the base layer, close to the lower bound, and a factor of two for the enhancement layer.
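The random linear network coding core underlying schemes like the one above can be illustrated compactly: coded packets are random linear combinations of source packets over a finite field, and any set of combinations with full rank suffices to decode. This sketch uses a small prime field for readability (practical RLNC uses GF(2^8)) and omits the adaptive/causal layering logic that is the paper's contribution.

```python
import random

P = 257  # small prime field; real systems typically use GF(2^8)

def rlnc_encode(packets, n_coded):
    """Emit n_coded random linear combinations of equal-length packets."""
    coded = []
    for _ in range(n_coded):
        coeffs = [random.randrange(P) for _ in packets]
        payload = [sum(c * pkt[i] for c, pkt in zip(coeffs, packets)) % P
                   for i in range(len(packets[0]))]
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, n_src):
    """Recover the n_src source packets by Gaussian elimination over GF(P)."""
    rows = [list(c) + list(p) for c, p in coded]   # augmented [coeffs | data]
    rank = 0
    for col in range(n_src):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            raise ValueError("not enough independent combinations")
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], P - 2, P)       # modular inverse (Fermat)
        rows[rank] = [(v * inv) % P for v in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % P for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return [rows[i][n_src:] for i in range(n_src)]
```

Sending a few extra combinations beyond n_src is the forward-error-correction margin: with high probability the receiver still sees a full-rank system after losses.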
The metaverse provides an alternative platform for human interaction in the virtual world. Since the virtual platform places few restrictions on changing the surrounding environment or the appearance of avatars, it can serve as a platform that reflects human thoughts, or even dreams, at least within the metaverse world. Merged with current brain-computer interface (BCI) technology, which enables system control via brain signals, a new paradigm of human interaction through the mind may be established in metaverse settings. Recent BCI systems aim to provide user-friendly and intuitive means of communication using brain signals. Imagined speech has become an alternative neuro-paradigm for communicative BCI since it relies directly on a person's speech production process, rather than on speech-unrelated neural activity, as the means of communication. In this paper, we propose a brain-to-speech (BTS) system for real-world smart communication using brain signals. We also demonstrate imagined-speech-based smart home control through communication with a virtual assistant, which could become one of the future applications of a brain-metaverse system. We performed pseudo-online analysis using imagined speech electroencephalography data from nine subjects to investigate the potential use of a virtual BTS system in the real world. Average accuracies of 46.54% (chance level = 7.7%) and 75.56% (chance level = 50%) were acquired in the thirteen-class and binary pseudo-online analyses, respectively. Our results support the potential of imagined-speech-based smart communication to be applied in the metaverse world.
Network slicing introduces customized and agile network deployment for managing different service types for various verticals under the same infrastructure. To cater to the dynamic service requirements of these verticals and meet the required quality-of-service (QoS) stated in the service-level agreement (SLA), network slices need to be isolated through dedicated elements and resources. Additionally, the resources allocated to these slices need to be continuously monitored and intelligently managed. This enables immediate detection and correction of any SLA violation to support automated service assurance in a closed-loop fashion. By reducing human intervention, intelligent and closed-loop resource management reduces the cost of offering flexible services. Resource management in a network shared among verticals (potentially administered by different providers) would be further facilitated through open and standardized interfaces. The open radio access network (O-RAN) is perhaps the most promising RAN architecture that inherits all the aforementioned features, namely intelligence, open and standard interfaces, and a closed control loop. Inspired by this, in this article we provide a closed-loop and intelligent resource provisioning scheme for O-RAN slicing to prevent SLA violations. To maintain realism, a real-world dataset of a large operator is used to train a learning solution for optimizing resource utilization in the proposed closed-loop service automation process. Moreover, the deployment architecture and the corresponding flow that are cognizant of the O-RAN requirements are also discussed.
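One iteration of such a closed loop, monitor per-slice demand, then re-provision with an SLA safety margin, can be sketched as follows. This is a deliberately simple stand-in for the article's learned provisioning policy; the headroom value and the 273-PRB cap (the PRB count of a 100 MHz 5G NR carrier at 30 kHz subcarrier spacing) are illustrative assumptions.

```python
def provision(slices, max_prbs=273, headroom=0.15):
    """One closed-loop pass: give each slice its observed demand (in PRBs)
    plus SLA headroom, then scale all slices down proportionally if the
    cell is oversubscribed. Integer PRB quantization is left out."""
    want = {s: demand * (1 + headroom) for s, demand in slices.items()}
    total = sum(want.values())
    scale = min(1.0, max_prbs / total) if total else 1.0
    return {s: w * scale for s, w in want.items()}
```

A real rApp/xApp would feed the monitored KPIs through the trained model to predict demand before this allocation step, closing the loop without human intervention.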
Local community search is an important research topic supporting complex network data analysis in various scenarios, such as social networks, collaboration networks, and cellular networks. The evolution of networks over time has motivated several recent studies to identify local communities in dynamic networks. However, they only utilize the aggregation of disjoint structural information to measure the quality of communities, which ignores the reliability of communities over a continuous time interval. To fill this research gap, we propose a novel $(\theta,k)$-core reliable community (CRC) model for weighted dynamic networks, and define the problem of most-reliable community search, which couples the desirable properties of connection strength, continuity of cohesive structure, and maximal member engagement. To solve this problem, we first develop an online CRC search algorithm by proposing a definition of the eligible edge set and deriving pruning rules based on it. We then devise a Weighted Core Forest index and an index-based dynamic programming CRC search algorithm, which prunes a large number of insignificant intermediate results according to the weight and structure information maintained in the index, as well as the proposed upper-bound properties. Finally, we conduct extensive experiments on eight real datasets under different parameter settings to verify the efficiency of our proposed algorithms and the effectiveness of our proposed community model.
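For a static snapshot, the structural core of the $(\theta,k)$-core notion reduces to a weighted variant of classic k-core peeling: keep only edges whose weight is at least $\theta$, then repeatedly remove nodes of degree below $k$. The sketch below shows only this snapshot computation, not the paper's continuity-over-time model or its index-based algorithms.

```python
def theta_k_core(edges, theta, k):
    """Node set of the (theta, k)-core of one snapshot: the maximal
    subgraph, using only edges of weight >= theta, in which every
    node has degree >= k.

    edges: iterable of (u, v, w) undirected weighted edges
    """
    adj = {}
    for u, v, w in edges:
        if w >= theta:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    changed = True
    while changed:                       # peel until stable
        changed = False
        for u in [n for n in list(adj) if len(adj[n]) < k]:
            if u not in adj:             # already removed this pass
                continue
            for v in adj.pop(u):
                if v in adj:
                    adj[v].discard(u)
            changed = True
    return set(adj)
```

On a triangle of heavy edges with one pendant node, the pendant is peeled and the triangle survives as the (theta, 2)-core.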
The rapid development of virtual network architectures makes it possible for wireless networks to be widely used. With the growing presence of the artificial intelligence (AI) industry in daily life, efficient resource allocation in wireless networks has become a pressing problem, especially when network users request wireless network resources from different management domains. From the perspective of virtual network embedding (VNE), this paper designs and implements a multi-objective optimization VNE algorithm for wireless network resource allocation. Resource allocation in a virtual network is essentially the problem of allocating underlying resources to virtual network requests (VNRs). Following the proposed objective formula, we jointly optimize mapping cost, network delay, and VNR acceptance rate. VNE is completed by node mapping and link mapping. In the experiment and simulation stage, the cross-domain VNE algorithm proposed in this paper is compared with other VNE algorithms and is optimal on all three of the above indicators, which shows its effectiveness for wireless network resource allocation.
The development of Intelligent Cyber-Physical Systems (ICPSs) in virtual network environments faces severe challenges. On the one hand, building the Internet of Things (IoT) on top of ICPSs requires the support of a large amount of well-provisioned network resources. On the other hand, ICPSs face severe network security problems. Integrating ICPSs with network virtualization (NV) can provide more efficient network resource support and stronger security guarantees for IoT users. Motivated by these two problems, we propose a virtual network embedding (VNE) algorithm with computing, storage, and security constraints to ensure the rationality and security of resource allocation in ICPSs. In particular, we use reinforcement learning (RL) to improve the algorithm's performance. We extract important attribute characteristics of the underlying network as the training environment for the RL agent. Through training, the agent derives the optimal node embedding strategy that meets the resource management and security requirements of ICPSs. Virtual links are embedded with a breadth-first search (BFS) strategy. The result is a comprehensive two-stage RL-VNE algorithm that considers three dimensions of resource constraints: computing, storage, and security. Finally, we design a large number of simulation experiments based on typical VNE performance indicators. The experimental results illustrate the effectiveness of the algorithm for ICPS applications.
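The BFS-based link-embedding stage mentioned above can be sketched independently of the RL node-mapping stage: for each virtual link, search the substrate for a shortest (fewest-hop) path whose every edge still has enough residual bandwidth. This is a generic illustration of the strategy, not the paper's full algorithm; node names and demands are made up.

```python
from collections import deque

def bfs_link_map(adj, src, dst, demand, bandwidth):
    """Map one virtual link onto the substrate: BFS from src to dst using
    only substrate edges with at least `demand` residual bandwidth.

    adj: dict node -> iterable of neighbor nodes
    bandwidth: dict frozenset({u, v}) -> residual bandwidth of edge (u, v)
    Returns the substrate path as a node list, or None if infeasible.
    """
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                     # reconstruct path from parents
            path, node = [], dst
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for v in adj[u]:
            if v not in parent and bandwidth[frozenset((u, v))] >= demand:
                parent[v] = u
                queue.append(v)
    return None
```

After a successful mapping, the demand would be subtracted from each edge on the returned path before embedding the next virtual link.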
With the overwhelming popularity of Knowledge Graphs (KGs), researchers have long devoted attention to link prediction to fill in missing facts. However, they mainly focus on link prediction over binary relational data, where facts are usually represented as triples of the form (head entity, relation, tail entity). In practice, n-ary relational facts are also ubiquitous. When encountering such facts, existing studies usually decompose them into triples by introducing a multitude of auxiliary virtual entities and additional triples. These conversions complicate link prediction on n-ary relational data, and it has even been proven that they may cause loss of structural information. To overcome these problems, in this paper we represent each n-ary relational fact as a set of its role and role-value pairs. We then propose a method called NaLP to conduct link prediction on n-ary relational data, which explicitly models the relatedness of all the role and role-value pairs in an n-ary relational fact. We further extend NaLP by introducing type constraints on roles and role-values without any external type-specific supervision, and by proposing a more reasonable negative sampling mechanism. Experimental results validate the effectiveness and merits of the proposed methods.
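The contrast between the two representations is easy to make concrete. Below, an n-ary fact is kept whole as role/role-value pairs, versus the conventional reification into binary triples through an auxiliary virtual entity; the fact's content and the entity name `fact_001` are purely illustrative, not drawn from the paper.

```python
# The paper's representation: one n-ary fact as role / role-value pairs,
# with no auxiliary virtual entities (illustrative values).
fact = {
    "person": "Marie Curie",
    "award": "Nobel Prize in Chemistry",
    "point_in_time": "1911",
}

# The conventional workaround instead reifies the fact: a virtual entity
# is introduced and everything is decomposed into binary triples, which
# multiplies the entities/triples and can lose structural information.
virtual_entity = "fact_001"
triples = [(virtual_entity, role, value) for role, value in sorted(fact.items())]
```

NaLP scores the whole pair set jointly, so the relatedness between, say, the award role and the point-in-time role is modeled directly rather than through the detour of `fact_001`.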
Network Virtualization is one of the most promising technologies for future networking; it is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist simultaneously on the same shared physical infrastructure. One of the crucial problems in Network Virtualization is Virtual Network Embedding, which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related resource allocation issues for an Internet Provider (InP) that must efficiently embed the virtual networks requested by Virtual Network Operators (VNOs) sharing the infrastructure provided by the InP. To achieve that goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding, enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests compared to prior algorithms.
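A common baseline for the node-embedding half of this problem ranks substrate nodes by residual CPU weighted by attached bandwidth and matches the largest virtual demands first. The sketch below shows that greedy baseline only, as a point of comparison; it is not the paper's coordinated node-and-link algorithm, and all node names and capacities are invented for illustration.

```python
def embed_nodes(vnr_cpu, sub_cpu, sub_bw_sum):
    """Greedy node embedding baseline.

    vnr_cpu: dict virtual node -> CPU demand
    sub_cpu: dict substrate node -> residual CPU
    sub_bw_sum: dict substrate node -> total bandwidth of attached links
    Returns a virtual->substrate mapping, or None to reject the VNR.
    """
    # rank substrate nodes by residual CPU weighted by local bandwidth
    rank = sorted(sub_cpu, key=lambda n: sub_cpu[n] * sub_bw_sum[n],
                  reverse=True)
    mapping, used = {}, set()
    for vnode in sorted(vnr_cpu, key=vnr_cpu.get, reverse=True):
        host = next((n for n in rank
                     if n not in used and sub_cpu[n] >= vnr_cpu[vnode]), None)
        if host is None:
            return None          # not enough substrate capacity: reject
        mapping[vnode] = host
        used.add(host)
    return mapping
```

Two-stage baselines like this one then map virtual links separately (e.g. by shortest feasible path), which is exactly the decoupling the paper's simultaneous node-and-link embedding is designed to avoid.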