
Optical interconnects are already the dominant technology in large-scale data center networks. However, the high optical loss of many optical components, coupled with the low efficiency of laser sources, results in high aggregate power requirements for the thousands of optical transceivers used by these networks. Because optical interconnects remain always on even as traffic demands ebb and flow, most of this power is wasted. We present LC/DC, a data center network system architecture in which the operating system, the switch, and the optical components are co-designed to achieve energy proportionality. LC/DC capitalizes on the path diversity of data center networks to turn redundant paths on and off according to traffic demand, while maintaining full connectivity. Turning off redundant paths allows the optical transceivers and their electronic drivers to power down and save energy, while maintaining full connectivity hides the laser turn-on delay. At the node layer, intercepting send requests within the OS allows the NIC's laser turn-on delay to be fully overlapped with TCP/IP packet processing, so egress links can remain powered off until needed with zero performance penalty. We demonstrate the feasibility of LC/DC by i) implementing the necessary modifications in the Linux kernel and device drivers, ii) implementing a 10 Gbit/s FPGA switch, and iii) performing physical experiments with optical devices and circuit simulations. Our results on university data center traces and on models of Facebook and Microsoft data center traffic show that LC/DC saves on average 60% of the optical transceivers' power (68% maximum) at the cost of 6% higher packet delay.
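
The switch-layer power-gating decision can be pictured with a small sketch. The function name, the headroom parameter, and the thresholding policy below are illustrative assumptions, not the paper's algorithm; the only property taken from the abstract is that at least one path always stays powered so full connectivity is preserved while idle transceivers shut down.

```python
# A minimal sketch (not LC/DC's actual controller): decide how many of a
# switch's parallel uplinks stay powered based on offered traffic, always
# keeping at least one link on to preserve connectivity.
import math

def links_to_power(demand_gbps: float, link_capacity_gbps: float,
                   num_links: int, headroom: float = 0.2) -> int:
    """Number of parallel links that should remain powered on."""
    # Provision for the demand plus headroom so a modest burst does not
    # have to wait for a laser to turn back on.
    needed = demand_gbps * (1.0 + headroom) / link_capacity_gbps
    return max(1, min(num_links, math.ceil(needed)))

if __name__ == "__main__":
    # Four 10 Gbit/s uplinks carrying 12 Gbit/s of traffic: two links stay
    # on, the other two (and their transceivers/drivers) can power down.
    print(links_to_power(demand_gbps=12.0, link_capacity_gbps=10.0, num_links=4))
```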

Related Content

Unmanned aerial vehicles (UAVs) and Terahertz (THz) technology are envisioned to play paramount roles in next-generation wireless communications. In this paper, we present a novel secure UAV-assisted mobile relaying system operating at THz bands for data acquisition from multiple ground user equipments (UEs) towards a destination. We assume that the UAV-mounted relay may act, besides providing relaying services, as a potential eavesdropper called the untrusted UAV-relay (UUR). To safeguard end-to-end communications, we present a secure two-phase transmission strategy with cooperative jamming. Then, we devise an optimization framework in terms of a new measure, the secrecy energy efficiency (SEE), defined as the ratio of achievable average secrecy rate to average system power consumption, which enables us to obtain the best possible security level while taking UUR's inherent flight power limitation into account. For the sake of quality-of-service fairness amongst all the UEs, we aim to maximize the minimum SEE (MSEE) performance via the joint design of key system parameters, including UUR's trajectory and velocity, communication scheduling, and network power allocation. Since the formulated problem is a mixed-integer nonconvex optimization and computationally intractable, we decouple it into four subproblems and propose alternative algorithms to solve it efficiently via greedy/sequential block successive convex approximation and non-linear fractional programming techniques. Numerical results demonstrate significant MSEE performance improvement of our designs compared to other known benchmarks.
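
The ratio structure of the SEE objective is what non-linear fractional programming (named in the abstract) handles; a Dinkelbach-style iteration is the standard tool. The sketch below applies it to a toy one-dimensional secrecy-rate and power model on a grid; the rate and power expressions are placeholders, not the paper's system model.

```python
# Minimal Dinkelbach-style fractional programming sketch: maximize a
# ratio f/g (e.g., secrecy rate / power) over a finite candidate grid.
import numpy as np

def dinkelbach(f_vals, g_vals, tol=1e-9, max_iter=100):
    """Maximize f/g over precomputed candidate values."""
    lam = 0.0
    for _ in range(max_iter):
        idx = int(np.argmax(f_vals - lam * g_vals))  # parametric subproblem
        new_lam = f_vals[idx] / g_vals[idx]
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return idx, new_lam

if __name__ == "__main__":
    p = np.linspace(0.01, 1.0, 1000)                          # toy power grid
    secrecy_rate = np.log2(1 + 10 * p) - np.log2(1 + 2 * p)   # toy secrecy rate
    total_power = 0.5 + p                                     # static + transmit
    i, see = dinkelbach(secrecy_rate, total_power)
    print(f"best power {p[i]:.3f}, SEE {see:.3f}")
```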

The Internet of Things (IoT) brings connectivity to a massive number of devices that demand energy-efficient solutions to deal with limited battery capacities, uplink-dominant traffic, and channel impairments. In this work, we explore the use of Unmanned Aerial Vehicles (UAVs) equipped with configurable antennas as a flexible solution for serving low-power IoT networks. We formulate an optimization problem to set the position and antenna beamwidth of the UAV, and the transmit power of the IoT devices, subject to average-Signal-to-average-Interference-plus-Noise Ratio ($\bar{\text{S}}\overline{\text{IN}}\text{R}$) Quality of Service (QoS) constraints. We minimize the worst-case average energy consumption of the IoT devices, thus targeting the fairest allocation of the energy resources. The problem is non-convex and highly non-linear; therefore, we reformulate it as a series of three geometric programs that can be solved iteratively. Results reveal the benefits of planning the network, compared to a random deployment, in terms of reducing the worst-case average energy consumption. Furthermore, we show that the target $\bar{\text{S}}\overline{\text{IN}}\text{R}$ is limited by the number of IoT devices, and we highlight the dominant impact of the UAV hovering height when serving wider areas. Our proposed algorithm outperforms other optimization benchmarks in terms of both the average energy consumption at the most energy-demanding IoT device and the convergence time.
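
As a rough illustration of the geometric-programming step, the sketch below solves an assumed max-min-energy toy problem in cvxpy's geometric-programming mode. The channel gains, SNR target, and constraint set are invented placeholders and are far simpler than the paper's formulation.

```python
# A hedged sketch of a geometric program: minimize the worst-case transmit
# energy of a few IoT devices subject to a per-device SNR-style constraint.
import cvxpy as cp
import numpy as np

g = np.array([1e-6, 5e-7, 2e-6])     # placeholder channel gains
noise = 1e-9                         # placeholder noise power
snr_target = 5.0                     # placeholder QoS target
T = 1.0                              # transmission time [s]

p = cp.Variable(3, pos=True)         # device transmit powers
t = cp.Variable(pos=True)            # worst-case energy (epigraph variable)

constraints = []
for i in range(3):
    constraints += [p[i] * T <= t,                    # energy of device i <= t
                    snr_target * noise <= g[i] * p[i]]  # SNR >= target

prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve(gp=True)                  # solve as a geometric program
print("worst-case energy:", t.value, "powers:", p.value)
```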

The popularity of deep convolutional autoencoders (CAEs) has engendered new and effective reduced-order models (ROMs) for the simulation of large-scale dynamical systems. Despite this, it is still unknown whether deep CAEs provide superior performance over established linear techniques or other network-based methods in all modeling scenarios. To elucidate this, the effect of autoencoder architecture on its associated ROM is studied through the comparison of deep CAEs against two alternatives: a simple fully connected autoencoder, and a novel graph convolutional autoencoder. Through benchmark experiments, it is shown that the superior autoencoder architecture for a given ROM application is highly dependent on the size of the latent space and the structure of the snapshot data, with the proposed architecture demonstrating benefits on data with irregular connectivity when the latent space is sufficiently large.
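
For concreteness, the sketch below shows the kind of simple fully connected autoencoder used as a baseline in such comparisons: snapshots of dimension n are encoded into a latent space of dimension r and decoded back. Layer sizes and activations are illustrative choices, not the paper's.

```python
# Minimal fully connected autoencoder for snapshot compression in a ROM.
import torch
import torch.nn as nn

class FCAutoencoder(nn.Module):
    def __init__(self, n: int, r: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n, hidden), nn.ELU(),
                                     nn.Linear(hidden, r))
        self.decoder = nn.Sequential(nn.Linear(r, hidden), nn.ELU(),
                                     nn.Linear(hidden, n))

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    snapshots = torch.randn(64, 1024)       # 64 snapshots of dimension n = 1024
    model = FCAutoencoder(n=1024, r=10)     # latent dimension r = 10
    recon = model(snapshots)
    loss = nn.functional.mse_loss(recon, snapshots)
    print(recon.shape, float(loss))
```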

Private inference (PI) enables inference directly on cryptographically secure data. While promising to address many privacy issues, it has seen limited use due to extreme runtimes. Unlike plaintext inference, where latency is dominated by FLOPs, in PI non-linear functions (namely ReLU) are the bottleneck. Thus, practical PI demands novel ReLU-aware optimizations. To reduce PI latency we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy. We evaluate our algorithm on several standard PI benchmarks. The results demonstrate up to $4.25\%$ more accuracy (iso-ReLU count at 50K) or $2.2\times$ less latency (iso-accuracy at 70\%) than the current state of the art and advance the Pareto frontier across the latency-accuracy space. To complement empirical results, we present a "no free lunch" theorem that sheds light on how and when network linearization is possible while maintaining prediction accuracy.
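
One common way to make the ReLU-versus-linear choice differentiable, so it can be driven by gradients as described above, is to give each ReLU a trainable gate that mixes the non-linearity with the identity. The parameterization below is a generic sketch of that idea, not necessarily the paper's exact formulation.

```python
# A gated ReLU: units whose gate drifts toward 0 can later be replaced by
# the identity, reducing the ReLU count that dominates PI latency.
import torch
import torch.nn as nn

class GatedReLU(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # sigmoid(3) ~ 0.95: start close to a plain ReLU.
        self.alpha = nn.Parameter(torch.full((channels,), 3.0))

    def forward(self, x):
        a = torch.sigmoid(self.alpha).view(1, -1, 1, 1)   # soft gate in (0, 1)
        return a * torch.relu(x) + (1.0 - a) * x          # mix ReLU and identity

if __name__ == "__main__":
    layer = GatedReLU(channels=16)
    x = torch.randn(2, 16, 8, 8)
    y = layer(x)
    # An L1 penalty on the gates pushes some of them toward the identity,
    # trading ReLU count (PI latency) against accuracy.
    penalty = torch.sigmoid(layer.alpha).sum()
    print(y.shape, float(penalty))
```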

Wide Area Networks (WANs) are a key infrastructure in today's society. In recent years, WANs have seen a considerable increase in network traffic and network applications, imposing new requirements on existing network technologies (e.g., low latency and high throughput). Consequently, Internet Service Providers (ISPs) are under pressure to ensure the customers' Quality of Service and fulfill Service Level Agreements. Network operators leverage Traffic Engineering (TE) techniques to efficiently manage network resources. However, WAN traffic can change drastically over time, and connectivity can be affected by external factors (e.g., link failures). Therefore, TE solutions must be able to adapt to dynamic scenarios in real time. In this paper we propose Enero, an efficient real-time TE solution based on a two-stage optimization process. In the first stage, Enero leverages Deep Reinforcement Learning (DRL) to optimize the routing configuration by generating a long-term TE strategy. To enable efficient operation over dynamic network scenarios (e.g., when link failures occur), we integrate a Graph Neural Network into the DRL agent. In the second stage, Enero uses a Local Search algorithm to improve the DRL solution without adding computational overhead to the optimization process. The experimental results indicate that Enero is able to operate on real-world dynamic network topologies in 4.5 seconds on average for topologies of up to 100 edges.
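
The second-stage local search can be pictured as a simple hill-climbing loop over per-link routing weights starting from the DRL agent's proposal. The sketch below uses a placeholder objective standing in for, e.g., maximum link utilization; Enero's actual neighborhood and evaluation are more elaborate.

```python
# Greedy local search over per-link weights: try single-link changes and
# keep any change that lowers the (placeholder) objective.
import random

def local_search(weights, evaluate, candidate_values=(1, 2, 4, 8),
                 iters=200, seed=0):
    rng = random.Random(seed)
    best, best_cost = dict(weights), evaluate(weights)
    for _ in range(iters):
        link = rng.choice(list(best))            # pick a link to perturb
        trial = dict(best)
        trial[link] = rng.choice(candidate_values)
        cost = evaluate(trial)
        if cost < best_cost:                     # keep only improving moves
            best, best_cost = trial, cost
    return best, best_cost

if __name__ == "__main__":
    # Toy objective standing in for max-utilization under weight-based routing.
    links = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 1}
    evaluate = lambda w: abs(w[("a", "b")] - 3) + abs(w[("a", "c")] - 2)
    print(local_search(links, evaluate))
```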

In real-world applications, data often come in a growing manner, where the data volume and the number of classes may increase dynamically. This brings a critical challenge for learning: given the increasing data volume or number of classes, one has to instantaneously adjust the neural model capacity to obtain promising performance. Existing methods either ignore the growing nature of data or seek to independently search an optimal architecture for a given dataset, and thus are incapable of promptly adjusting the architecture for the changed data. To address this, we present a neural architecture adaptation method, namely Adaptation eXpert (AdaXpert), to efficiently adjust previous architectures on the growing data. Specifically, we introduce an architecture adjuster to generate a suitable architecture for each data snapshot, based on the previous architecture and the extent of the difference between the current and previous data distributions. Furthermore, we propose an adaptation condition to determine the necessity of adjustment, thereby avoiding unnecessary and time-consuming adjustments. Extensive experiments on two growth scenarios (increasing data volume and increasing number of classes) demonstrate the effectiveness of the proposed method.
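
An adaptation condition of the kind described above can be as simple as a distribution-distance test between consecutive data snapshots. The sketch below uses an RBF-kernel maximum mean discrepancy on feature batches with an arbitrary threshold; both the distance measure and the threshold are assumptions, not AdaXpert's exact criterion.

```python
# Adjust the architecture only when the new snapshot differs enough from
# the previous one, measured by a kernel MMD between feature batches.
import numpy as np

def mmd_rbf(x, y, gamma=None):
    if gamma is None:
        gamma = 1.0 / x.shape[1]                 # simple bandwidth heuristic
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def should_adjust(prev_feats, new_feats, threshold=0.05):
    return mmd_rbf(prev_feats, new_feats) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.normal(0.0, 1.0, size=(200, 16))
    new_similar = rng.normal(0.0, 1.0, size=(200, 16))
    new_shifted = rng.normal(1.5, 1.0, size=(200, 16))
    print(should_adjust(prev, new_similar))   # small shift: skip adjustment
    print(should_adjust(prev, new_shifted))   # large shift: adapt the model
```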

Neural Architecture Search (NAS) was first proposed to achieve state-of-the-art performance through the discovery of new architecture patterns, without human intervention. An over-reliance on expert knowledge in the search space design has however led to increased performance (local optima) without significant architectural breakthroughs, thus preventing truly novel solutions from being reached. In this work we 1) are the first to investigate casting NAS as a problem of finding the optimal network generator and 2) we propose a new, hierarchical and graph-based search space capable of representing an extremely large variety of network types, yet only requiring few continuous hyper-parameters. This greatly reduces the dimensionality of the problem, enabling the effective use of Bayesian Optimisation as a search strategy. At the same time, we expand the range of valid architectures, motivating a multi-objective learning approach. We demonstrate the effectiveness of this strategy on six benchmark datasets and show that our search space generates extremely lightweight yet highly competitive models.
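
With only a few continuous generator hyper-parameters, standard Bayesian optimisation applies directly. The sketch below runs expected-improvement BO with a scikit-learn Gaussian process over two placeholder hyper-parameters; the `train_and_score` objective is a stand-in for generating and evaluating a network, not the paper's pipeline.

```python
# Expected-improvement Bayesian optimisation over two continuous
# generator hyper-parameters (illustrative stand-ins).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from scipy.stats import norm

def train_and_score(theta):                      # placeholder objective
    return -((theta[0] - 0.3) ** 2 + (theta[1] - 0.7) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))               # initial random designs
y = np.array([train_and_score(t) for t in X])

for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(256, 2))      # candidate hyper-parameters
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    theta = cand[np.argmax(ei)]
    X = np.vstack([X, theta])
    y = np.append(y, train_and_score(theta))

print("best hyper-parameters:", X[np.argmax(y)], "score:", y.max())
```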

The ever-growing interest witnessed in the acquisition and development of unmanned aerial vehicles (UAVs), commonly known as drones, over the past few years has produced a very promising and effective technology. Because of their small size and fast deployment, UAVs have proven effective at collecting data over unreachable areas and restricted coverage zones. Moreover, their flexibly defined capacity enables them to collect information with a very high level of detail, leading to high-resolution images. UAVs initially served mainly in military scenarios; however, in the last decade they have been broadly adopted in civilian applications as well. The task of aerial surveillance and situational awareness is usually accomplished by integrating intelligence, surveillance, observation, and navigation systems, all interacting in the same operational framework. To build this capability, UAVs are well-suited tools that can be equipped with a wide variety of sensors, such as cameras or radars. Deep learning has been widely recognized as a prominent approach in different computer vision applications. Specifically, one-stage and two-stage object detectors are regarded as the two most important groups of Convolutional Neural Network based object detection methods. A one-stage object detector usually outperforms a two-stage object detector in speed, but it normally trails in detection accuracy. In this study, the focal-loss-based RetinaNet, a one-stage object detector, is employed for UAV-based object detection so as to match the speed of regular one-stage detectors while surpassing two-stage detectors in accuracy. State-of-the-art performance is shown on the UAV-captured image dataset, the Stanford Drone Dataset (SDD).
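
RetinaNet's defining ingredient is the focal loss, $\mathrm{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log p_t$, which down-weights easy examples so a one-stage detector is not swamped by background anchors. A minimal sketch for binary classification logits follows, using the commonly reported defaults $\alpha = 0.25$ and $\gamma = 2$.

```python
# Focal loss for binary classification logits, as used by RetinaNet-style
# one-stage detectors to focus training on hard examples.
import torch

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    p = torch.sigmoid(logits)
    ce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")            # equals -log(p_t)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

if __name__ == "__main__":
    logits = torch.randn(8, 4)                        # e.g. anchors x classes
    targets = torch.randint(0, 2, (8, 4)).float()
    print(float(focal_loss(logits, targets)))
```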

Accurate Traffic Sign Detection (TSD) can help intelligent systems make better decisions according to traffic regulations. TSD, which can be regarded as a typical small object detection problem, is fundamental in Advanced Driver Assistance Systems (ADAS) and self-driving. However, although deep neural networks have achieved human or even superhuman performance on several tasks, small object detection remains an open question due to their inherent limitations. In this paper, we propose a brain-inspired network, named KB-RANN, to handle this problem. Since the attention mechanism is an essential function of the human brain, we use a novel recurrent attentive neural network to improve detection accuracy in a fine-grained manner. Further, we combine domain-specific knowledge and intuitive knowledge to improve efficiency. Experimental results show that our method achieves better performance than several popular object detection methods. More significantly, we ported our method to our custom-designed embedded system and successfully deployed it on our self-driving car.
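
The abstract does not detail KB-RANN's architecture, so the sketch below shows only a generic soft spatial-attention step of the kind a recurrent attentive detector might apply repeatedly to re-weight feature maps around small objects such as traffic signs; it is illustrative, not the authors' network.

```python
# Generic soft spatial attention: score each location, normalise the
# scores over the map, and re-weight the feature map accordingly.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):                        # feats: (B, C, H, W)
        b, c, h, w = feats.shape
        weights = torch.softmax(self.score(feats).view(b, 1, -1), dim=-1)
        return feats * weights.view(b, 1, h, w)      # re-weighted features

if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    print(SpatialAttention(64)(x).shape)
```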

Vision-based vehicle detection approaches have achieved remarkable success in recent years with the development of deep convolutional neural networks (CNNs). However, existing CNN-based algorithms suffer from the fact that convolutional features are scale-sensitive in the object detection task, while traffic images and videos commonly contain vehicles with a large variance of scales. In this paper, we delve into the source of scale sensitivity and reveal two key issues: 1) existing RoI pooling destroys the structure of small-scale objects, and 2) the large intra-class distance caused by a large variance of scales exceeds the representation capability of a single network. Based on these findings, we present a scale-insensitive convolutional neural network (SINet) for fast detection of vehicles with a large variance of scales. First, we present a context-aware RoI pooling that maintains the contextual information and original structure of small-scale objects. Second, we present a multi-branch decision network to minimize the intra-class distance of features. These lightweight techniques add no extra time complexity yet bring a notable improvement in detection accuracy. The proposed techniques can be equipped with any deep network architecture and keep it trainable end-to-end. Our SINet achieves state-of-the-art performance in terms of accuracy and speed (up to 37 FPS) on the KITTI benchmark and on a new highway dataset that contains a large variance of scales and extremely small objects.
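
One plausible reading of context-aware RoI pooling for small proposals is sketched below: instead of max-pooling a tiny region down to the fixed grid (which destroys its structure), the cropped feature patch is interpolated up to the output size. This follows the spirit of the description above; SINet's exact operator may differ.

```python
# Structure-preserving RoI pooling sketch: upsample small cropped regions
# to the fixed output size rather than collapsing them with a coarse pool.
import torch
import torch.nn.functional as F

def roi_pool_preserving(feature_map, box, out_size=7):
    """feature_map: (C, H, W); box: (x1, y1, x2, y2) in feature coordinates."""
    x1, y1, x2, y2 = [int(v) for v in box]
    patch = feature_map[:, y1:y2 + 1, x1:x2 + 1].unsqueeze(0)
    # Bilinear resize keeps the small object's spatial layout.
    return F.interpolate(patch, size=(out_size, out_size),
                         mode="bilinear", align_corners=False)[0]

if __name__ == "__main__":
    fmap = torch.randn(256, 64, 64)
    small_roi = (10, 10, 13, 13)                       # a 4x4 proposal
    print(roi_pool_preserving(fmap, small_roi).shape)  # (256, 7, 7)
```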
