SDN and NFV have recently changed the way we operate networks. By decoupling control- and data-plane operations and virtualising their components, they have opened up new frontiers for reducing network ownership costs and improving usability and efficiency. Recently, their applicability has extended to public telecommunications networks, with concepts such as the cloud central office (cloud-CO) pioneering their use in access and metro networks: an idea that has quickly attracted the interest of network operators. By merging mobile, residential and enterprise services into a common framework, built around commoditised data-centre architectures, future embodiments of this CO virtualisation concept could achieve significant capital and operational cost savings, while providing a customised network experience to high-capacity, low-latency future applications. This tutorial provides an overview of the frameworks and architectures underpinning current network disaggregation trends that are leading to the virtualisation/cloudification of central offices. It also provides insight into the virtualisation of the access-metro network, showcasing new software functionalities such as virtual dynamic bandwidth allocation (DBA) mechanisms for passive optical networks (PONs). In addition, we explore how this virtualisation can bring together different network technologies to enable convergence of mobile and optical access networks and pave the way for the integration of disaggregated ROADM networks. Finally, the tutorial discusses some of the open challenges towards realising networks capable of delivering guaranteed performance while sharing resources across multiple operators and services.
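To make the virtual DBA idea concrete, below is a minimal sketch of one status-reporting grant cycle, in which an OLT splits upstream capacity among ONUs according to their reported queue occupancy. The frame size, guaranteed minimum, and proportional-sharing rule are illustrative assumptions, not the scheme of any particular PON standard or of the tutorial's own mechanism.

```python
# Toy status-report DBA cycle: each ONU reports queued bytes; the OLT grants a
# guaranteed minimum, then shares leftover capacity in proportion to residual
# demand. All constants are hypothetical.
FRAME_BYTES = 10_000   # upstream capacity per grant cycle (assumed)
GUARANTEED = 500       # per-ONU guaranteed allocation (assumed)

def dba_grants(reports: dict[str, int]) -> dict[str, int]:
    grants = {onu: min(GUARANTEED, q) for onu, q in reports.items()}
    leftover = max(0, FRAME_BYTES - sum(grants.values()))
    residual = {onu: q - grants[onu] for onu, q in reports.items() if q > grants[onu]}
    total = sum(residual.values())
    for onu, need in residual.items():
        grants[onu] += min(need, leftover * need // total)
    return grants

print(dba_grants({"onu1": 4000, "onu2": 12000, "onu3": 200}))
```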
Artificial intelligence is already ubiquitous and is increasingly being used to make ever more consequential decisions autonomously. However, there has been relatively little research into the existing and possible consequences for population health equity. A narrative review was undertaken using a hermeneutic approach to explore current and future uses of narrow AI and automated decision systems (ADS) in medicine and public health, the issues that have emerged, and the implications for equity. Accounts reveal a tremendous expectation that AI will transform medical and public health practices. Prominent demonstrations of AI capability, particularly in diagnostic decision making, risk prediction, and surveillance, are stimulating rapid adoption, spurred by COVID-19. The automated decisions being made have significant consequences for individual and population health and wellbeing. Meanwhile, it is evident that hazards including bias, incontestability, and privacy erosion have emerged in sensitive domains such as criminal justice, where narrow AI and ADS are in common use. Reports of issues arising from their use in health are already appearing. As the use of ADS in health expands, these hazards will probably manifest more widely. Bias, incontestability, and privacy erosion give rise to mechanisms by which existing social, economic and health disparities are perpetuated and amplified. Consequently, there is a significant risk that the use of ADS in health will exacerbate existing population health inequities. The industrial scale and rapidity with which ADS can be applied heighten the risk to population health equity. It is therefore incumbent on health practitioners and policy makers to explore the potential implications of using ADS, and to ensure that the use of artificial intelligence promotes population health and equity.
Payment channel networks (PCNs) such as the Lightning Network offer an appealing solution to the scalability problem faced by many cryptocurrencies operating on a blockchain such as Bitcoin. However, PCNs also inherit the stringent dependability requirements of the blockchain. In particular, in order to mitigate liquidity bottlenecks as well as on-path attacks, it is important that payment channel networks maintain a high degree of decentralization. Motivated by this requirement, we conduct an empirical centrality analysis of the popular Lightning Network, and in particular of the betweenness centrality distribution of its routing system. Based on our extensive data set (comprising several million channel update messages), we implement a TimeMachine tool that enables us to study the network's evolution over time. We find that although the network is generally fairly decentralized, a small number of nodes attract a significant fraction of the transactions, introducing skew. Furthermore, our analysis suggests that over the last two years centrality has increased significantly; for example, inequality (measured by the Gini index) has grown by more than 10%.
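The centrality and inequality measurements are straightforward to reproduce on any graph snapshot. Below is a minimal sketch computing the Gini index over betweenness centralities, using a synthetic scale-free graph as a stand-in for a real Lightning Network snapshot:

```python
import networkx as nx
import numpy as np

def gini(values):
    # Gini coefficient: 0 = perfectly equal, 1 = maximally concentrated.
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    if n == 0 or x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n  # Lorenz-curve identity

# Synthetic stand-in for a channel-graph snapshot (nodes = LN nodes, edges = channels).
G = nx.barabasi_albert_graph(1000, 2, seed=7)
bc = nx.betweenness_centrality(G)  # fraction of shortest paths through each node
print(f"Gini of betweenness: {gini(list(bc.values())):.3f}")
```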
Health care has evolved drastically over the past century and continues to evolve every day, with new tools and strategies being developed by physicians and researchers alike. Health care and technology have become intertwined with the advancement of cloud computing and big data. This study aims to analyze the impact of Industry 4.0 on health care systems. To do so, a systematic literature review was carried out on peer-reviewed articles extracted from two popular databases, Scopus and Web of Science (WoS), with the PRISMA 2015 statement used to define the inclusion and exclusion criteria. First, a bibliometric analysis was carried out on 346 articles considering the following factors: publication by year, journal, authors, countries, institutions, authors' keywords, and citations. Then, a qualitative analysis was carried out on 32 selected articles considering the following factors: conceptual frameworks, scheduling problems, security, COVID-19, the digital supply chain, and blockchain technology. The findings suggest that during the onset of COVID-19, health care and Industry 4.0 merged and evolved jointly in response to crises such as data security, resource allocation, and data transparency. Industry 4.0 encompasses technologies such as the Internet of Things (IoT), blockchain, big data, cloud computing, machine learning, deep learning, and information and communication technologies (ICT), which help track patient records and reduce the social transmission of COVID-19. The study findings offer future researchers and practitioners insights into the integration of health care and Industry 4.0.
The past few years have witnessed a remarkable rise in interest in driverless cars; naturally, in parallel, the demand for an accurate and reliable object localization and mapping system is higher than ever. Such a system would have to provide its subscribers with precise information within close range. Many previous research works have explored the different possible approaches to implementing such a highly dynamic mapping system in an intelligent transportation system setting, but few have discussed its applicability to enabling other 5G verticals and services. In this article we start by describing the concept of dynamic maps. We then introduce the approach we took when creating a spatio-temporal dynamic maps system by presenting its architecture and different components. After that, we propose different scenarios where this emerging technology can be adapted to serve other 5G services, in particular UAV geofencing, and finally we test the object detection module and discuss the results.
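As a small illustration of the geofencing use case, a position check against a restricted area can be as simple as the following sketch; the coordinates and no-fly polygon are made up, and a real system would also handle altitude, safety buffers, and live map updates:

```python
from shapely.geometry import Point, Polygon

# Hypothetical no-fly zone given as (lon, lat) vertices.
no_fly = Polygon([(-6.27, 53.42), (-6.22, 53.42), (-6.22, 53.44), (-6.27, 53.44)])

def geofence_violation(lon: float, lat: float) -> bool:
    # True if the reported UAV position lies inside the restricted polygon.
    return no_fly.contains(Point(lon, lat))

print(geofence_violation(-6.25, 53.43))  # inside -> True
```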
The past decade has seen a remarkable series of advances in machine learning, and in particular in deep learning approaches based on artificial neural networks, that have improved our ability to build more accurate systems across a broad range of areas, including computer vision, speech recognition, language translation, and natural language understanding tasks. This paper is a companion to a keynote talk at the 2020 International Solid-State Circuits Conference (ISSCC) discussing some of these advances in machine learning and their implications for the kinds of computational devices we need to build, especially in the post-Moore's-Law era. It also discusses some of the ways that machine learning may be able to help with aspects of the circuit design process. Finally, it provides a sketch of at least one interesting direction towards much larger-scale multi-task models that are sparsely activated and employ much more dynamic, example- and task-based routing than the machine learning models of today.
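The "sparsely activated, dynamically routed" direction can be illustrated with a toy top-1 mixture-of-experts layer, where a router sends each example to a single expert so only a fraction of the parameters run per input. This is a NumPy sketch of the general idea, not the specific models discussed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 16, 4
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # toy expert weights
router = rng.standard_normal((d, n_experts))                       # toy gating weights

def sparse_forward(x):
    # Route each example to its single highest-scoring expert (top-1 gating).
    scores = x @ router             # (batch, n_experts) gating logits
    chosen = scores.argmax(axis=1)  # one expert index per example
    out = np.empty_like(x)
    for i, e in enumerate(chosen):
        out[i] = x[i] @ experts[e]  # only the chosen expert's weights are used
    return out

y = sparse_forward(rng.standard_normal((8, d)))
```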
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to follow an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skill for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
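One loose way to read the proposed definition (our paraphrase in simplified notation, not the paper's exact Algorithmic Information Theory formalism) is as an efficiency ratio averaged over the evaluation scope:

$$ I \;\approx\; \operatorname*{avg}_{C \,\in\, \text{scope}} \; \frac{GD_C}{P_C + E_C} $$

where $GD_C$ is the generalization difficulty of task $C$, $P_C$ the priors the system brings to it, and $E_C$ the experience it consumes; skill acquired with fewer priors and less experience counts for more.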
Recent successes of value-based multi-agent deep reinforcement learning employ optimism in the value function by carefully controlling the learning rate (Omidshafiei et al., 2017) or reducing the update probability (Palmer et al., 2018). We introduce a decentralized quantile estimator, the Responsible Implicit Quantile Network (RIQN), which is robust to teammate-environment interactions while reducing the amount of imposed optimism. Upon benchmarking against the related Hysteretic-DQN (HDQN) and Lenient-DQN (LDQN), we find RIQN agents more stable, more sample efficient, and more likely to converge to the optimal policy.
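For context, the optimism-via-learning-rate idea behind the HDQN baseline fits in a few lines; this is a tabular sketch of hysteretic Q-learning, not the RIQN estimator the abstract introduces, and the constants are illustrative:

```python
import numpy as np

# Asymmetric learning rates: react strongly to positive TD errors (ALPHA) and
# weakly to negative ones (BETA < ALPHA), so an agent under-reacts to losses
# caused by exploring teammates.
ALPHA, BETA, GAMMA = 0.1, 0.01, 0.95

def hysteretic_update(Q, s, a, r, s_next):
    # One tabular update; Q is indexed as Q[state][action].
    delta = r + GAMMA * np.max(Q[s_next]) - Q[s][a]
    Q[s][a] += (ALPHA if delta >= 0 else BETA) * delta
```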
We present an end-to-end framework for solving the Vehicle Routing Problem (VRP) using reinforcement learning. In this approach, we train a single model that finds near-optimal solutions for problem instances sampled from a given distribution, only by observing the reward signals and following feasibility rules. Our model represents a parameterized stochastic policy, and by applying a policy gradient algorithm to optimize its parameters, the trained model produces the solution as a sequence of consecutive actions in real time, without the need to re-train for every new problem instance. On capacitated VRP, our approach outperforms classical heuristics and Google's OR-Tools on medium-sized instances in solution quality with comparable computation time (after training). We demonstrate how our approach can handle problems with split delivery and explore the effect of such deliveries on the solution quality. Our proposed framework can be applied to other variants of the VRP such as the stochastic VRP, and has the potential to be applied more generally to combinatorial optimization problems.
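A minimal sketch of the kind of policy-gradient step such a framework uses; the function name, baseline, and tensor shapes here are placeholders rather than the paper's exact training loop:

```python
import torch

def reinforce_step(log_probs, tour_lengths, baseline, optimizer):
    # log_probs: (batch,) summed log pi(a_t | s_t) over each constructed tour.
    # tour_lengths: (batch,) route costs; shorter tours earn higher reward.
    advantage = -(tour_lengths - baseline)           # negative cost minus baseline
    loss = -(advantage.detach() * log_probs).mean()  # REINFORCE surrogate loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```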
Network Virtualization is one of the most promising technologies for future networking, and is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and components such as storage, servers and applications. Network Virtualization allows multiple virtual networks to coexist simultaneously on the same shared physical infrastructure. A key problem in Network Virtualization is Virtual Network Embedding, which provides a method for allocating physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related resource-allocation issues for an Internet provider (InP) that must efficiently embed the virtual networks requested by Virtual Network Operators (VNOs) sharing its infrastructure. To achieve that goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding, enhancing the long-term average revenue, the acceptance ratio, and the resource utilization of virtual network requests compared to prior algorithms.
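To illustrate the embedding problem, below is a toy two-stage greedy heuristic: node mapping by CPU demand followed by shortest-path link mapping. The paper's algorithm embeds nodes and links simultaneously, so this sequential version is only a simpler stand-in, with a hypothetical 'cpu'/'bw' attribute schema:

```python
import networkx as nx

def greedy_vne(substrate, vnr):
    # Map virtual nodes greedily onto the highest-capacity feasible substrate
    # nodes, then map each virtual link onto a bandwidth-feasible shortest path.
    # Mutates substrate capacities; no rollback on rejection, for brevity.
    node_map = {}
    for v, vd in sorted(vnr.nodes(data=True), key=lambda nv: -nv[1]["cpu"]):
        candidates = [s for s, sd in substrate.nodes(data=True)
                      if sd["cpu"] >= vd["cpu"] and s not in node_map.values()]
        if not candidates:
            return None  # reject: no substrate node can host v
        best = max(candidates, key=lambda s: substrate.nodes[s]["cpu"])
        node_map[v] = best
        substrate.nodes[best]["cpu"] -= vd["cpu"]
    link_map = {}
    for u, v, ed in vnr.edges(data=True):
        feasible = [(a, b) for a, b, d in substrate.edges(data=True)
                    if d["bw"] >= ed["bw"]]
        try:
            path = nx.shortest_path(substrate.edge_subgraph(feasible),
                                    node_map[u], node_map[v])
        except (nx.NodeNotFound, nx.NetworkXNoPath):
            return None  # reject: no bandwidth-feasible path
        for a, b in zip(path, path[1:]):
            substrate[a][b]["bw"] -= ed["bw"]
        link_map[(u, v)] = path
    return node_map, link_map
```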
The Deep Q-Network proposed by Mnih et al. [2015] has become a benchmark and starting point for much deep reinforcement learning research. However, replicating results for complex systems is often challenging, since the original scientific publications are not always able to describe every important parameter setting and software engineering solution in detail. In this paper, we present results from our work reproducing the results of the DQN paper. We highlight key areas of the implementation that were not covered in great detail in the original paper, including termination conditions and gradient descent algorithms, to make it easier for researchers to replicate these results. Finally, we discuss methods for improving the computational performance and provide our own implementation, designed to work with a range of domains and not just the original Arcade Learning Environment [Bellemare et al., 2013].
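Two of the under-documented details called out here, termination handling and the loss/gradient treatment, meet in the per-batch update; a minimal PyTorch sketch (names and shapes assumed, not the authors' code):

```python
import torch
import torch.nn.functional as F

GAMMA = 0.99

def dqn_loss(q_net, target_net, batch):
    # s, s_next: state tensors; a: (batch,) long; r, done: (batch,) float.
    # done masks the bootstrap term, which is exactly where termination
    # conventions (e.g. episode end vs. loss of life) change the target.
    s, a, r, s_next, done = batch
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s_next).max(1).values * (1.0 - done)
    return F.smooth_l1_loss(q, target)  # Huber loss, akin to clipping the TD error
```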