Realizing today's cloud-level artificial intelligence functionalities directly on devices distributed at the edge of the internet calls for edge hardware capable of processing multiple modalities of sensory data (e.g., video, audio) at unprecedented energy efficiency. Today's AI hardware architectures cannot meet the demand due to a fundamental "memory wall": data movement between separate compute and memory units consumes substantial energy and incurs long latency. Resistive random-access memory (RRAM) based compute-in-memory (CIM) architectures promise orders-of-magnitude improvements in energy efficiency by performing computation directly within memory. However, conventional approaches to CIM hardware design limit the functional flexibility necessary for processing diverse AI workloads, and must overcome hardware imperfections that degrade inference accuracy. Such trade-offs between efficiency, versatility, and accuracy cannot be addressed by isolated improvements at any single level of the design. By co-optimizing across all hierarchies of the design, from algorithms and architecture to circuits and devices, we present NeuRRAM, the first multimodal edge AI chip using RRAM CIM to simultaneously deliver a high degree of versatility for diverse model architectures, record energy efficiency ($5\times$ to $8\times$ better than prior art across various computational bit-precisions), and inference accuracy comparable to software models with 4-bit weights on all measured standard AI benchmarks: 99.0% on MNIST and 85.7% on CIFAR-10 image classification, 84.7% on Google speech command recognition, and a 70% reduction in image reconstruction error on a Bayesian image recovery task. This work paves the way towards building highly efficient and reconfigurable edge AI hardware platforms for the more demanding and heterogeneous AI applications of the future.
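To make the compute-in-memory idea concrete, the toy Python sketch below quantizes a weight matrix to 4-bit levels (a stand-in for programming RRAM conductances) and performs a matrix-vector product with multiplicative noise standing in for analog device non-idealities. This is an illustrative numerical model only; the function names and the 2% noise level are assumptions, not NeuRRAM's actual circuit behavior.

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Uniform symmetric quantization, standing in for mapping software
    weights onto discrete RRAM conductance levels."""
    qmax = 2 ** (bits - 1) - 1                 # 7 for 4-bit signed weights
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).astype(np.int8), scale

def cim_matvec(w_q, scale, x, noise_std=0.02, rng=None):
    """Matrix-vector product 'inside memory', with multiplicative Gaussian
    noise as a crude stand-in for analog device non-idealities."""
    rng = rng or np.random.default_rng()
    ideal = (w_q * scale) @ x
    return ideal * (1.0 + rng.normal(0.0, noise_std, ideal.shape))

rng = np.random.default_rng(0)
w, x = rng.standard_normal((16, 64)), rng.standard_normal(64)
w_q, s = quantize_weights(w)
print(np.abs(cim_matvec(w_q, s, x, rng=rng) - w @ x).max())  # total error
```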

Classic image scaling (e.g., bicubic) can be seen as one convolutional layer with a single upscaling filter. Its implementation is ubiquitous in all display devices and image processing software. In the last decade, deep learning systems have been introduced for the task of image super-resolution (SR), using several convolutional layers and numerous filters. These methods have taken over the benchmarks of image quality for upscaling tasks. Would it be possible to replace classic upscalers with deep learning architectures on edge devices such as display panels, tablets, and laptop computers? On one hand, the current trend in Edge-AI chips shows a promising future in this direction, with rapid development of hardware that can run deep learning tasks efficiently. On the other hand, in image SR only a few architectures have been pushed to the extremely small sizes that can actually run on edge devices in real time. We explore possible solutions to this problem with the aim of filling the gap between classic upscalers and small deep learning configurations. As a transition from classic to deep learning upscaling, we propose edge-SR (eSR), a set of one-layer architectures that use interpretable mechanisms to upscale images. Certainly, a one-layer architecture cannot reach the quality of deep learning systems. Nevertheless, we find that under high speed requirements, eSR offers a better trade-off between image quality and runtime performance. Filling the gap between classic and deep learning architectures for image upscaling is critical for the massive adoption of this technology. It is equally important to have an interpretable system that can reveal its inner strategies for solving this problem, guiding us towards future improvements and a better understanding of larger networks.
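As an illustration of how small a one-layer upscaler can be, the PyTorch sketch below implements a single convolution whose $s^2$ output channels are rearranged into an image $s\times$ larger by a pixel shuffle. This is a generic sketch in the spirit of eSR, not the paper's exact architectures; the class name `OneLayerSR` and the kernel size are assumptions.

```python
import torch
import torch.nn as nn

class OneLayerSR(nn.Module):
    """One convolution + pixel shuffle: arguably the smallest learnable
    upscaler (a sketch in the spirit of eSR, not the paper's exact design)."""
    def __init__(self, scale=2, kernel=5):
        super().__init__()
        # The conv maps 1 channel to scale**2 channels; PixelShuffle then
        # rearranges them into a (scale*H, scale*W) single-channel image.
        self.conv = nn.Conv2d(1, scale ** 2, kernel, padding=kernel // 2)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

lr = torch.rand(1, 1, 32, 32)         # a low-resolution grayscale patch
print(OneLayerSR(scale=2)(lr).shape)  # -> torch.Size([1, 1, 64, 64])
```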

We develop two distributed downlink resource allocation algorithms for user-centric, cell-free, spatially distributed, multiple-input multiple-output (MIMO) networks. In such networks, each user is served by a subset of nearby transmitters that we call distributed units (DUs). The operation of the DUs in a region is controlled by a central unit (CU). Our first scheme is implemented at the DUs, while the second is implemented at the CUs controlling these DUs. We define a hybrid quality-of-service metric that enables distributed optimization of system resources in a proportional fair manner. Specifically, each of our algorithms performs user scheduling, beamforming, and power control while accounting for channel estimation errors. Importantly, our algorithm requires no information exchange amongst DUs (CUs) for the DU-distributed (CU-distributed) system, while also converging smoothly. Our results show that the CU-distributed system provides 1.3 to 1.8 times the network throughput of the DU-distributed system, with only minor increases in complexity and front-haul load, and substantial gains over benchmark schemes such as local zero-forcing. We also analyze the trade-offs introduced by the CU-distributed system, highlighting the significance of deploying multiple CUs in user-centric cell-free networks.
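The paper's hybrid QoS metric is not spelled out in the abstract, so the sketch below shows only the generic proportional-fair ingredient it builds on: each slot, the scheduler picks the user with the largest ratio of instantaneous rate to exponentially averaged throughput. Function names and the smoothing factor are illustrative assumptions.

```python
import numpy as np

def pf_schedule(inst_rate, avg_tput, eps=1e-9):
    """Pick the user maximizing instantaneous rate / average throughput."""
    return int(np.argmax(inst_rate / (avg_tput + eps)))

def update_avg(avg_tput, served, inst_rate, alpha=0.05):
    """Exponentially averaged per-user throughput after serving one user."""
    out = (1 - alpha) * avg_tput
    out[served] += alpha * inst_rate[served]
    return out

rates = np.array([5.0, 1.0, 2.0])   # instantaneous rates this slot
avg = np.array([4.0, 0.5, 2.0])     # long-term average throughputs
u = pf_schedule(rates, avg)         # -> 1: best rate-to-average ratio
avg = update_avg(avg, u, rates)
```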

In this paper, we propose an approach for constructing a multi-layer multi-orbit space information network (SIN) to provide high-speed continuous broadband connectivity for space missions (nanosatellite terminals) from the emerging space-based Internet providers. This notion is motivated by rapid developments in satellite technology, in terms of satellite miniaturization and reusable rocket launches, as well as the growing number of nanosatellite constellations in lower orbits for downstream space applications such as Earth observation, remote sensing, and Internet of Things (IoT) data collection. Specifically, space-based Internet providers, such as Starlink, OneWeb, and SES O3b, can be utilized for broadband connectivity directly to/from the nanosatellites, allowing a larger degree of connectivity in space network topologies. Moreover, such an arrangement is more economically efficient and eliminates the need for an excessive number of ground stations while achieving real-time and reliable space communications. This objective necessitates developing suitable radio access schemes and efficient, scalable space backhauling using inter-satellite links (ISLs) and inter-orbit links (IOLs). In particular, service-oriented radio access methods, together with a software-defined networking (SDN)-based architecture employing optimal routing mechanisms over multiple ISLs and IOLs, are the most essential enablers for this novel concept. Developing this symbiotic interaction between versatile satellite nodes across different orbits will lead to a breakthrough in the way future downstream space missions and satellite networks are designed and operated.
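As a minimal illustration of route computation over ISLs and IOLs, the sketch below runs Dijkstra's algorithm on a tiny satellite graph whose edge weights are propagation delays. The topology, node names, and delay values are invented for illustration; the paper's actual SDN-based optimal routing mechanism is not specified in the abstract.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path delays from src over a graph of ISLs/IOLs."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Invented toy topology: two LEO satellites and one MEO relay;
# edge weights are one-way propagation delays in milliseconds.
adj = {"leo1": [("leo2", 4.0), ("meo", 12.0)],
       "leo2": [("leo1", 4.0), ("meo", 11.0)],
       "meo":  [("leo1", 12.0), ("leo2", 11.0)]}
print(dijkstra(adj, "leo1"))  # {'leo1': 0.0, 'leo2': 4.0, 'meo': 12.0}
```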

For practical deep neural network design on mobile devices, it is essential to consider the constraints imposed by computational resources and inference latency in various applications. Among approaches to deep network acceleration, pruning is a widely adopted practice for balancing computational resource consumption and accuracy, where unimportant connections can be removed either channel-wise or randomly with minimal impact on model accuracy. Channel pruning instantly yields a significant latency reduction, while random weight pruning is more flexible for balancing latency and accuracy. In this paper, we present a unified framework with Joint Channel pruning and Weight pruning (JCW), which achieves a better Pareto frontier between latency and accuracy than previous model compression approaches. To fully optimize the trade-off between latency and accuracy, we develop a tailored multi-objective evolutionary algorithm in the JCW framework, which enables a single search to obtain optimal candidate architectures for various deployment requirements. Extensive experiments demonstrate that JCW achieves a better trade-off between latency and accuracy than various state-of-the-art pruning methods on the ImageNet classification dataset. Our code is available at //github.com/jcw-anonymous/JCW.
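The notion of a latency-accuracy Pareto frontier, central to JCW's search, can be made concrete in a few lines of Python: a candidate architecture survives if no other candidate is at least as good in both objectives and strictly better in one. This is a generic non-dominated filter, not JCW's evolutionary algorithm itself; the data points are invented.

```python
def pareto_front(points):
    """Non-dominated (latency, error) pairs: a point survives unless some
    other point is no worse in both objectives and better in at least one."""
    front = []
    for i, (l1, e1) in enumerate(points):
        dominated = any(
            l2 <= l1 and e2 <= e1 and (l2 < l1 or e2 < e1)
            for j, (l2, e2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((l1, e1))
    return front

cands = [(10, 0.30), (12, 0.25), (15, 0.24), (11, 0.35)]
print(pareto_front(cands))  # (11, 0.35) is dominated by (10, 0.30)
```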

Efficient data offloading plays a pivotal role in computation-intensive platforms, as the data rate over wireless channels is fundamentally limited. On top of that, high mobility adds an extra burden in vehicular edge networks (VENs), strengthening the need for efficient user-centric solutions. Therefore, unlike the legacy inflexible network-centric approach, this paper exploits a software-defined flexible, open, and programmable networking platform for an efficient user-centric, fast, reliable, and deadline-constrained offloading solution in VENs. In the proposed model, each active vehicle user (VU) is served by multiple low-powered access points (APs) that form a novel virtual cell (VC). A joint node association, power allocation, and distributed resource allocation problem is formulated. As centralized learning is impractical in many real-world problems, and following the distributed nature of autonomous VUs, each VU is treated as an edge learning agent. To that end, considering practical location-aware node associations, a joint radio and power resource allocation non-cooperative stochastic game is formulated. Leveraging the efficacy of reinforcement learning (RL), a multi-agent RL (MARL) solution is proposed in which the edge learners aim to learn Nash equilibrium (NE) strategies to solve the game efficiently. Real-world map data, with a practical microscopic mobility model, are used for the simulation. The results suggest that the proposed user-centric approach delivers remarkable performance in VENs. Moreover, the proposed MARL solution achieves near-optimal performance, with approximately 3% collision probability in the case of distributed random access in the uplink.
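As a minimal sketch of the multi-agent learning setup, the code below gives each vehicle user an independent tabular Q-learner that updates from purely local rewards; with many such agents interacting, their greedy policies can approach an equilibrium of the underlying game. The class name and hyperparameters are illustrative assumptions; the paper's NE-seeking MARL algorithm is more elaborate.

```python
import numpy as np

class IQLAgent:
    """Independent tabular Q-learner: each vehicle user updates its own
    Q-table from purely local observations and rewards."""
    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, s, rng):
        if rng.random() < self.eps:                # epsilon-greedy explore
            return int(rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[s]))           # exploit

    def update(self, s, a, r, s_next):
        target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])

rng = np.random.default_rng(1)
agents = [IQLAgent(n_states=4, n_actions=3) for _ in range(5)]  # 5 VUs
a = agents[0].act(0, rng)
agents[0].update(0, a, r=1.0, s_next=2)
```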

The confluence of 5G and AI is transforming wireless networks into platforms that deliver diverse services at the Edge, driving towards a vision of pervasive distributed intelligence. Future 6G networks will need to deliver quality of experience through the seamless integration of communication, computation, and AI. Therefore, networks must become intelligent, distributed, scalable, and programmable platforms across the continuum of data delivery to address ever-increasing service requirements and deployment complexity. We present novel results across three research directions that are expected to be integral to 6G systems, and we also discuss new 6G metrics.

Next-generation satellite systems require greater flexibility in resource management so that available radio resources can be dynamically allocated to meet time-varying and non-uniform traffic demands. Considering the potential benefits of beam hopping (BH) and non-orthogonal multiple access (NOMA), we exploit the time-domain flexibility of multi-beam satellite systems by optimizing the BH design, and enhance the power-domain flexibility via NOMA. In this paper, we investigate the synergy and mutual influence of beam hopping and NOMA. We jointly optimize power allocation, beam scheduling, and terminal-timeslot assignment to minimize the gap between the requested traffic demand and the offered capacity. In developing the solution, we formally prove the NP-hardness of the optimization problem. Next, we develop a bounding scheme to tightly gauge the global optimum and propose a suboptimal algorithm that enables efficient resource assignment. Numerical results demonstrate the benefits of combining NOMA and BH, and validate the superiority of the proposed BH-NOMA schemes over benchmarks.
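To make the demand-capacity gap objective tangible, the sketch below implements a naive greedy beam-hopping baseline: each timeslot illuminates the beam with the largest unmet demand, and the residual gap $\sum_b |d_b - c_b|$ is reported at the end. This baseline is an assumption for illustration, not the paper's bounding scheme or suboptimal algorithm, and it ignores the NOMA power dimension.

```python
import numpy as np

def greedy_bh(demand, capacity_per_slot, n_slots):
    """Each timeslot, illuminate the beam with the largest unmet demand;
    return the offered capacity and the residual demand-capacity gap."""
    offered = np.zeros_like(demand, dtype=float)
    for _ in range(n_slots):
        b = int(np.argmax(demand - offered))
        offered[b] += capacity_per_slot
    return offered, np.abs(demand - offered).sum()

demand = np.array([10.0, 4.0, 6.0])              # per-beam traffic demand
offered, gap = greedy_bh(demand, capacity_per_slot=2.0, n_slots=8)
print(offered, gap)                              # [10. 2. 4.], gap 4.0
```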

Many video classification applications require access to personal data, thereby posing an invasive risk to users' privacy. We propose a privacy-preserving implementation of video classification with convolutional neural networks, based on the single-frame method, that allows a party to infer a label from a video without requiring the video owner to disclose their video to other entities in unencrypted form. Similarly, our approach removes the requirement for the classifier owner to reveal their model parameters to outside entities in plaintext. To this end, we combine existing Secure Multi-Party Computation (MPC) protocols for private image classification with our novel MPC protocols for oblivious single-frame selection and secure label aggregation across frames. The result is an end-to-end privacy-preserving video classification pipeline. We evaluate our proposed solution in an application for private human emotion recognition. Our results, across a variety of security settings spanning honest- and dishonest-majority configurations of the computing parties, and for both passive and active adversaries, demonstrate that videos can be classified with state-of-the-art accuracy and without leaking sensitive user information.
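The flavor of secure label aggregation can be shown with additive secret sharing, one standard MPC building block: each frame's one-hot class vote is split into random shares, parties sum their shares locally, and only the aggregated per-class counts are ever reconstructed. The modulus, party count, and vote data below are illustrative; the paper's actual protocols also cover oblivious frame selection and active adversaries.

```python
import random

P = 2 ** 61 - 1       # prime modulus for additive secret sharing
N_PARTIES = 3

def share(x):
    """Split integer x into N_PARTIES random additive shares mod P."""
    s = [random.randrange(P) for _ in range(N_PARTIES - 1)]
    s.append((x - sum(s)) % P)
    return s

# One-hot class votes from 3 frames; in the real pipeline these would be
# produced by private image classification and never exist in the clear.
votes = [[0, 1, 0], [0, 1, 0], [1, 0, 0]]

# Each party adds its own shares across frames, per class, locally.
party_sums = [[0] * 3 for _ in range(N_PARTIES)]
for frame in votes:
    for c, v in enumerate(frame):
        for p, sh in enumerate(share(v)):
            party_sums[p][c] = (party_sums[p][c] + sh) % P

# Only the aggregate counts are reconstructed: [1, 2, 0] -> label 1.
counts = [sum(party_sums[p][c] for p in range(N_PARTIES)) % P
          for c in range(3)]
print(counts.index(max(counts)))
```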

News recommendation aims to display news articles to users based on their personal interests. Existing news recommendation methods rely on centralized storage of user behavior data for model training, which may raise privacy concerns and risks due to the privacy-sensitive nature of user behaviors. In this paper, we propose a privacy-preserving method for news recommendation model training based on federated learning, where the user behavior data is stored locally on user devices. Our method can leverage the useful information in the behaviors of a massive number of users to train accurate news recommendation models while removing the need for centralized storage. More specifically, on each user device we keep a local copy of the news recommendation model and compute gradients of the local model based on the user behaviors on that device. The local gradients from a group of randomly selected users are uploaded to the server, where they are aggregated to update the global model. Since the model gradients may contain implicit private information, we apply local differential privacy (LDP) to them before uploading for better privacy protection. The updated global model is then distributed to each user device for a local model update. We repeat this process for multiple rounds. Extensive experiments on a real-world dataset show the effectiveness of our method in training news recommendation models with privacy protection.
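A minimal sketch of one training round under the stated design: each selected device clips its local gradient and perturbs it with Laplace noise before upload (a common LDP mechanism), and the server averages the noisy gradients to update the global model. The clipping bound, noise scale, and learning rate are assumptions, not the paper's tuned values.

```python
import numpy as np

def ldp_perturb(grad, clip=1.0, scale=0.1, rng=None):
    """Clip the local gradient's L2 norm, then add Laplace noise so the
    upload satisfies local differential privacy."""
    rng = rng or np.random.default_rng()
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    return g + rng.laplace(0.0, scale, g.shape)

def federated_round(global_w, local_grads, lr=0.1):
    """Server averages the noisy uploaded gradients and takes one step."""
    avg = np.mean([ldp_perturb(g) for g in local_grads], axis=0)
    return global_w - lr * avg

rng = np.random.default_rng(0)
w = np.zeros(8)
grads = [rng.standard_normal(8) for _ in range(10)]  # 10 sampled devices
w = federated_round(w, grads)
```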

Adder Neural Networks (ANNs), which contain only additions, offer a new way of developing deep neural networks with low energy consumption. Unfortunately, there is an accuracy drop when all convolution filters are replaced with adder filters. The main reason is the difficulty of optimizing ANNs under the $\ell_1$-norm, in which the gradient estimated in back-propagation is inaccurate. In this paper, we present a novel method for further improving the performance of ANNs without increasing the number of trainable parameters, via a progressive kernel based knowledge distillation (PKKD) method. A convolutional neural network (CNN) with the same architecture is simultaneously initialized and trained as a teacher network, and the features and weights of the ANN and CNN are transformed into a new space to eliminate the accuracy drop. The similarity is computed in a higher-dimensional space to disentangle the difference between their distributions using a kernel-based method. Finally, the desired ANN is learned progressively from both the ground truth and the teacher. The effectiveness of the proposed method for learning ANNs with higher performance is verified on several benchmarks. For instance, ANN-50 trained with the proposed PKKD method obtains 76.8\% top-1 accuracy on the ImageNet dataset, which is 0.6\% higher than that of ResNet-50.
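The kernel-based comparison of feature distributions can be illustrated with a maximum mean discrepancy (MMD) style loss under an RBF kernel, one standard way to measure distribution mismatch in a kernel-induced space; the paper's exact PKKD loss may differ. The shapes, kernel bandwidth, and function names below are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """RBF kernel matrix between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_distill_loss(f_ann, f_cnn, gamma=1.0):
    """MMD-style distance: match ANN and CNN feature distributions in the
    kernel-induced space instead of comparing raw activations."""
    return (rbf_kernel(f_ann, f_ann, gamma).mean()
            + rbf_kernel(f_cnn, f_cnn, gamma).mean()
            - 2.0 * rbf_kernel(f_ann, f_cnn, gamma).mean())

rng = np.random.default_rng(0)
f_ann = rng.standard_normal((32, 16))  # ANN batch features (student)
f_cnn = rng.standard_normal((32, 16))  # CNN batch features (teacher)
print(kernel_distill_loss(f_ann, f_cnn, gamma=0.1))
```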
