
Recently, the wireless community has initiated research on the sixth-generation (6G) cellular network for the next decade. The 6G vision is still under development but is converging toward a ubiquitous, sustainable, and automated digital society. A network-in-a-box (NIB) is a portable, fully fledged networking solution with great potential to advance the 6G vision, especially for ubiquitous and resilient network connectivity. In this article, we highlight how NIB features suit 6G use cases and requirements and how a NIB can be used for 6G communications. In addition, we discuss the challenges of the potential 6G enabling technologies that can reinforce NIB performance.

Related Content

While operating communication networks adaptively may improve utilization and performance, frequent adjustments also introduce an algorithmic challenge: the re-optimization of traffic engineering solutions is time-consuming and may limit the granularity at which a network can be adjusted. This paper is motivated by the question of whether the reactivity of a network can be improved by re-optimizing solutions dynamically rather than from scratch, especially if inputs such as link weights do not change significantly. We explore to what extent dynamic algorithms can be used to speed up fundamental tasks in network operations. We specifically investigate optimizations related to traffic engineering (namely shortest-path and maximum-flow computations), but also consider spanning tree and matching applications. While prior work on dynamic graph algorithms focuses on link insertions and deletions, we are interested in the practical problem of link weight changes. We revisit existing upper bounds in the weight-dynamic model and present several novel lower bounds on the amortized runtime of recomputing solutions. In general, we find that the potential performance gains depend on the application, and there are also strict limitations on what can be achieved, even if link weights change only slightly.
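To make the weight-dynamic setting concrete, here is a minimal Python sketch (not the paper's algorithms) of repairing a solution instead of recomputing it from scratch: after a single link weight decrease, shortest-path distances are fixed up by re-relaxing only the affected region of the graph.

```python
import heapq

def sssp(graph, source):
    """Standard Dijkstra: graph is {u: {v: weight}}; returns a distance dict."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def decrease_weight(graph, dist, u, v, new_w):
    """Incrementally repair SSSP distances after the weight of edge (u, v)
    decreases: only nodes whose shortest path actually improves are
    re-relaxed, instead of rerunning Dijkstra on the whole graph."""
    graph[u][v] = new_w
    if dist.get(u, float("inf")) + new_w >= dist.get(v, float("inf")):
        return dist  # the cheaper edge does not improve anything
    dist[v] = dist[u] + new_w
    heap = [(dist[v], v)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue
        for y, w in graph[x].items():
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(heap, (d + w, y))
    return dist

# Toy run: decreasing (a, b) from 5 to 2 re-relaxes only node b.
g = {"a": {"b": 5.0, "c": 1.0}, "b": {}, "c": {"b": 3.0}}
d = sssp(g, "a")                          # {'a': 0.0, 'c': 1.0, 'b': 4.0}
d = decrease_weight(g, d, "a", "b", 2.0)  # {'a': 0.0, 'c': 1.0, 'b': 2.0}
```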

Purpose: People are increasingly adhering to social networking platforms (SNPs), and this adhesion is often unreflective, leading them to cede data, actions, and decisions to tech giants. This essay discusses what happens when, eventually, someone chooses to cancel their participation in a large SNP. Methodology/design: This is a theoretical essay whose narrative resembles a theoretical-empirical manuscript, grounded in the author's experience and his subjective perceptions of being off the WhatsApp network (nowadays the main SNP in the world). Findings/highlights: This study proposes a definition and implications of the supposedly new "digital near-death experience" concept, a metaphor for the classic near-death experience (NDE). A research agenda is also proposed. Limitations: The resulting propositions are grounded in a set of assumptions that, if falsified, would invalidate the findings.

Today's communication networks have stringent availability requirements and hence need to rapidly restore connectivity after failures. Modern networks thus implement various forms of fast-reroute mechanisms in the data plane to bridge the gap until the slow global control plane converges. State-of-the-art fast reroute commonly relies on disjoint route structures to offer multiple independent paths to the destination. We propose to leverage the network's path diversity to extend edge-disjoint path mechanisms to tree routing, in order to improve the performance of fast rerouting. We present two such tree mechanisms in detail and show that they boost resilience by up to 12% and 25%, respectively, on real-world, synthetic, and data-center topologies, while still retaining good path lengths.
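As an illustration of the disjoint-path baseline that the tree mechanisms extend, the following sketch (assuming networkx and a toy Petersen topology; not the paper's mechanisms) precomputes two edge-disjoint paths and switches to the backup on failure without any recomputation:

```python
import networkx as nx

def precompute_routes(G, src, dst):
    """Precompute a primary and an edge-disjoint backup path, as in
    classic disjoint-path fast reroute."""
    paths = list(nx.edge_disjoint_paths(G, src, dst))
    if len(paths) < 2:
        raise ValueError("topology offers no disjoint backup path")
    return paths[0], paths[1]

def route(primary, backup, failed_links):
    """On failure, switch to the backup path entirely in the 'data plane',
    i.e., without recomputing any routes."""
    def hits_failure(path):
        return any((a, b) in failed_links or (b, a) in failed_links
                   for a, b in zip(path, path[1:]))
    if not hits_failure(primary):
        return primary
    if not hits_failure(backup):
        return backup
    return None  # both precomputed routes are broken

G = nx.petersen_graph()  # well-connected toy topology
primary, backup = precompute_routes(G, 0, 7)
print(route(primary, backup, failed_links={(primary[0], primary[1])}))
```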

Preserving energy in households and office buildings is a significant challenge, mainly due to the recent shortage of energy resources, mounting environmental problems, and the global underutilization of energy-saving technologies. Moreover, in some regions, COVID-19 social-distancing measures have temporarily shifted energy demand from commercial and urban centers to residential areas, increasing usage and charges and, in turn, creating economic impacts on customers. Therefore, the marketplace could benefit from an Internet of Things (IoT) ecosystem that monitors energy-consumption habits and promptly recommends actions to improve energy efficiency. This paper presents the full integration of a proposed energy-efficiency framework into the Home-Assistant platform using an edge-based architecture. End-users can visualize their consumption patterns as well as ambient environmental data through the Home-Assistant user interface. More notably, explainable energy-saving recommendations are delivered to end-users as notifications via the mobile application to facilitate habit change. To the best of the authors' knowledge, this is the first attempt to develop and implement an energy-saving recommender system on edge devices, which ensures better privacy preservation since data are processed locally on the edge without being transmitted to remote servers, as is the case with cloudlet platforms.
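A minimal sketch of the notification step, assuming a local Home Assistant instance and a hypothetical registered device called mobile_app_phone (Home Assistant's REST service-call endpoint is real, but all names and the example rule below are illustrative):

```python
import requests

HA_URL = "http://homeassistant.local:8123"   # local edge instance (assumption)
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # created in the HA user profile

def recommend(message):
    """Push an explainable energy-saving recommendation to the user's phone
    through Home Assistant's REST service-call endpoint. The service name
    'notify/mobile_app_phone' depends on the registered device."""
    resp = requests.post(
        f"{HA_URL}/api/services/notify/mobile_app_phone",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"title": "Energy saver", "message": message},
        timeout=10,
    )
    resp.raise_for_status()

# Example rule: data stays on the edge device; only the notification leaves it.
standby_watts = 45  # value read locally, e.g. from a smart-plug sensor
if standby_watts > 30:
    recommend("The TV corner has drawn ~45 W on standby for 2 h; "
              "consider switching the smart plug off.")
```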

This paper discusses cellular network security for unmanned aircraft systems (UASs) and provides insights into the ongoing Third Generation Partnership Project (3GPP) standardization efforts with respect to authentication and authorization, location information privacy, and command and control signaling. We introduce the 3GPP reference architecture for network-connected UAS and the new network functions of the 5G core network, and discuss the three security contexts, potential threats, and the corresponding 3GPP procedures. The paper identifies research opportunities for UAS communications security and recommends critical security features and processes to be considered for standardization.
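The 3GPP procedures themselves are out of scope here, but as a generic illustration of message authentication for command-and-control traffic, the following sketch signs and verifies C2 messages with an HMAC over a shared key (the key is a placeholder for material that would, in a 3GPP setting, be derived during authentication):

```python
import hashlib
import hmac
import json

SHARED_KEY = b"uas-session-key"  # placeholder; real keys come from network authentication

def sign_c2(command: dict) -> dict:
    """Attach an integrity tag so the UAV can verify that a command-and-control
    message really originates from its authorized controller."""
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": command, "tag": tag}

def verify_c2(message: dict) -> bool:
    """Recompute the tag over the received payload and compare in constant time."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_c2({"cmd": "set_waypoint", "lat": 48.137, "lon": 11.575})
assert verify_c2(msg)  # tampering with msg["payload"] would make this fail
```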

Heterogeneous tabular data are the most commonly used form of data and are essential for numerous critical and computationally demanding applications. On homogeneous data sets, deep neural networks have repeatedly shown excellent performance and have therefore been widely adopted. However, their application to modeling tabular data (inference or generation) remains highly challenging. This work provides an overview of state-of-the-art deep learning methods for tabular data. We start by categorizing them into three groups: data transformations, specialized architectures, and regularization models. We then provide a comprehensive overview of the main approaches in each group. A discussion of deep learning approaches for generating tabular data is complemented by strategies for explaining deep models on tabular data. Our primary contribution is to address the main research streams and existing methodologies in this area, while highlighting relevant challenges and open research questions. To the best of our knowledge, this is the first in-depth look at deep learning approaches for tabular data. This work can serve as a valuable starting point and guide for researchers and practitioners interested in deep learning with tabular data.
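As a minimal illustration of the "specialized architectures" category, the following hedged PyTorch sketch embeds each categorical column into a learned vector space and concatenates the result with the numeric columns before an MLP, a common building block in tabular deep learning (all dimensions below are arbitrary):

```python
import torch
import torch.nn as nn

class TabularMLP(nn.Module):
    """Minimal tabular network: each categorical column gets its own
    embedding table; the embeddings are concatenated with the numeric
    columns and fed through an MLP."""
    def __init__(self, cat_cardinalities, num_numeric, emb_dim=8, hidden=64):
        super().__init__()
        self.embeddings = nn.ModuleList(
            nn.Embedding(card, emb_dim) for card in cat_cardinalities
        )
        in_dim = emb_dim * len(cat_cardinalities) + num_numeric
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_cat, x_num):
        embs = [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)]
        return self.mlp(torch.cat(embs + [x_num], dim=1))

# Toy batch: two categorical columns (cardinalities 10 and 4), three numeric.
model = TabularMLP(cat_cardinalities=[10, 4], num_numeric=3)
x_cat = torch.randint(0, 4, (32, 2))  # indices valid for both columns
x_num = torch.randn(32, 3)
print(model(x_cat, x_num).shape)  # torch.Size([32, 1])
```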

In the past few decades, artificial intelligence (AI) technology has developed swiftly, changing everyone's daily life and profoundly altering the course of human society. AI is intended to benefit humans by reducing human labor, bringing everyday convenience, and promoting social good. However, recent research and AI applications show that AI can cause unintentional harm to humans, for example by making unreliable decisions in safety-critical scenarios or undermining fairness by inadvertently discriminating against a group. Thus, trustworthy AI has recently attracted immense attention: careful consideration is required to avoid the adverse effects AI may bring to humans, so that humans can fully trust and live in harmony with AI technologies. Recent years have witnessed a tremendous amount of research on trustworthy AI. This survey presents a comprehensive overview of trustworthy AI from a computational perspective, to help readers understand the latest technologies for achieving it. Trustworthy AI is a large and complex area involving various dimensions; in this work, we focus on six of the most crucial: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being. For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. We also discuss the harmonious and conflicting interactions among the different dimensions and outline directions for future investigation.
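To make one dimension computationally concrete, here is a small sketch of a standard audit for Non-discrimination & Fairness, the demographic parity gap between two groups (the data and the threshold interpretation are illustrative):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups.
    A gap of zero means the classifier satisfies demographic parity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy audit: 0.6 vs 0.4 positive rate -> a gap of 0.2 flags a disparity
# that an auditor would investigate further.
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 1]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))  # 0.2
```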

Knowledge is a formal way of understanding the world and provides the basis for human-level cognition and intelligence in next-generation artificial intelligence (AI). One representation of knowledge is the set of structural relations between entities. Relation Extraction (RE), a sub-task of information extraction, is an effective way to acquire this knowledge automatically and plays a vital role in Natural Language Processing (NLP). Its purpose is to identify semantic relations between entities in natural language text. To date, numerous studies have addressed RE, and techniques based on Deep Neural Networks (DNNs) have become the prevailing approach; in particular, supervised and distantly supervised DNN-based methods are the most popular and reliable solutions for RE. This article 1) introduces some general concepts, and 2) gives a comprehensive overview of DNNs in RE from two points of view: supervised RE, which attempts to improve standard RE systems, and distant supervision RE, which adopts DNNs to design the sentence encoder and the de-noising method. We further 3) cover some novel methods, describe recent trends, and discuss possible future research directions for this task.
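For concreteness, the following simplified PyTorch sketch shows one widely used sentence-encoder design for supervised and distantly supervised RE: word embeddings combined with position embeddings (token offsets to the head and tail entities), a 1-D convolution, and max-over-time pooling (all hyperparameters are illustrative, and real systems add de-noising on top):

```python
import torch
import torch.nn as nn

class CNNSentenceEncoder(nn.Module):
    """Simplified RE sentence encoder: word embeddings plus position
    embeddings (distance to each entity mention), a 1-D convolution, and
    max-over-time pooling, followed by a relation classifier."""
    def __init__(self, vocab=5000, max_len=100, d_word=50, d_pos=5,
                 filters=128, n_relations=10):
        super().__init__()
        self.word = nn.Embedding(vocab, d_word)
        self.pos1 = nn.Embedding(2 * max_len, d_pos)  # offsets to head entity
        self.pos2 = nn.Embedding(2 * max_len, d_pos)  # offsets to tail entity
        self.conv = nn.Conv1d(d_word + 2 * d_pos, filters,
                              kernel_size=3, padding=1)
        self.out = nn.Linear(filters, n_relations)

    def forward(self, tokens, p1, p2):
        x = torch.cat([self.word(tokens), self.pos1(p1), self.pos2(p2)], dim=-1)
        h = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, filters, len)
        h = h.max(dim=2).values                       # max-over-time pooling
        return self.out(h)                            # relation logits

enc = CNNSentenceEncoder()
tokens = torch.randint(0, 5000, (4, 100))
p1 = torch.randint(0, 200, (4, 100))
p2 = torch.randint(0, 200, (4, 100))
print(enc(tokens, p1, p2).shape)  # torch.Size([4, 10])
```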

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community; the underlying algorithmic problem, however, is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear program formulation is required: only with this formulation, which accounts for the structure of the request graphs, can convex combinations of valid mappings be computed. Accordingly, to capture this structure, we introduce the graph-theoretic notions of extraction orders and extraction width and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.
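The rounding step itself can be illustrated compactly: once the LP solution for a request is decomposed into a convex combination of valid mappings, randomized rounding samples one mapping per request with probability equal to its weight. A toy sketch (the mapping format is hypothetical):

```python
import random

def round_request(decomposition):
    """decomposition: list of (probability, mapping) pairs whose
    probabilities sum to at most 1 (with the remaining mass, the
    request stays unembedded). Returns a sampled mapping or None."""
    r = random.random()
    acc = 0.0
    for prob, mapping in decomposition:
        acc += prob
        if r < acc:
            return mapping
    return None  # request rejected with the leftover probability mass

# Toy decomposition of one request into two valid mappings.
decomposition = [
    (0.6, {"v1": "serverA", "v2": "serverB"}),
    (0.3, {"v1": "serverC", "v2": "serverB"}),
]
print(round_request(decomposition))
```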

Multi-view networks are ubiquitous in real-world applications. To extract knowledge or business value, it is of interest to transform such networks into representations that are easily machine-actionable. Meanwhile, network embedding has emerged as an effective approach to generating distributed network representations. We are therefore motivated to study the problem of multi-view network embedding, with a focus on the characteristics that are specific and important to embedding this type of network. In our practice of embedding real-world multi-view networks, we identify two such characteristics, which we refer to as preservation and collaboration. We then explore the feasibility of achieving better embedding quality by simultaneously modeling preservation and collaboration, and propose the mvn2vec algorithms. With experiments on a series of synthetic datasets, an internal Snapchat dataset, and two public datasets, we further confirm the presence and importance of preservation and collaboration. These experiments also demonstrate that better embeddings can be obtained by simultaneously modeling the two characteristics, without over-complicating the model or requiring additional supervision.
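A hedged sketch of the underlying modeling idea (not the exact mvn2vec objective): give every node a shared embedding, capturing collaboration across views, plus a view-specific embedding, capturing preservation, and score node pairs per view on their concatenation:

```python
import torch
import torch.nn as nn

class MultiViewEmbedding(nn.Module):
    """Each node has one shared embedding (collaboration across views) and
    one embedding per view (preservation of view-specific semantics).
    A skip-gram-style score uses their concatenation."""
    def __init__(self, n_nodes, n_views, d_shared=32, d_view=32):
        super().__init__()
        self.shared = nn.Embedding(n_nodes, d_shared)
        self.view = nn.ModuleList(
            nn.Embedding(n_nodes, d_view) for _ in range(n_views)
        )

    def score(self, u, v, view):
        eu = torch.cat([self.shared(u), self.view[view](u)], dim=-1)
        ev = torch.cat([self.shared(v), self.view[view](v)], dim=-1)
        return (eu * ev).sum(-1)  # higher for nodes co-occurring in this view

model = MultiViewEmbedding(n_nodes=1000, n_views=2)
u, v = torch.tensor([3, 7]), torch.tensor([42, 7])
print(model.score(u, v, view=0))  # trained with, e.g., negative sampling
```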
