In conventional dual-function radar-communication (DFRC) systems, the radar and communication channels are routinely estimated at fixed time intervals based on their worst-case operating scenarios. Such situation-agnostic repeated estimation causes significant training overhead and dramatically degrades system performance, especially in applications with dynamic sensing/communication demands and limited radio resources. In this paper, we leverage channel aging characteristics to reduce training overhead and to design a situation-dependent, channel re-estimation interval optimization-based resource allocation scheme that improves performance in a multi-target tracking DFRC system. Specifically, we exploit channel temporal correlation to predict the radar and communication channels, reducing the need for training preamble retransmission. We then characterize the channel aging effects on the Cramér-Rao lower bounds (CRLBs), which quantify radar tracking performance, and on the achievable rates under maximum ratio transmission (MRT) and zero-forcing (ZF) transmit beamforming, which quantify communication performance. In particular, the aged CRLBs and achievable rates are derived in closed form as functions of the channel aging time, bandwidth, and power. Based on these results, we optimize those factors to maximize the average total aged achievable rate subject to individual target tracking precision demands, communication rate requirements, and other practical constraints. Since the formulated problem is non-convex, we develop an efficient one-dimensional-search-based optimization algorithm to obtain suboptimal solutions. Finally, simulation results are presented to validate the correctness of the derived theoretical results and the effectiveness of the proposed allocation scheme.
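To make the channel-aging notion concrete, a widely used first-order Gauss-Markov model relates the aged channel to the most recently estimated one; this model and the symbols below (aging interval Δ, symbol duration T_s, maximum Doppler frequency f_D, correlation coefficient ρ) are a standard illustration of channel temporal correlation and are not taken from the paper itself.

```latex
\mathbf{h}[n+\Delta] \;=\; \rho\,\mathbf{h}[n] \;+\; \sqrt{1-\rho^{2}}\;\mathbf{e}[n+\Delta],
\qquad
\rho \;=\; J_{0}\!\left(2\pi f_{D} T_{s} \Delta\right)
```

Here e[n+Δ] is an innovation term independent of h[n] and J_0(·) is the zeroth-order Bessel function of the first kind. As the aging interval grows, ρ shrinks, degrading both the aged CRLBs and the aged achievable rates; the paper's interval optimization trades this degradation against the training overhead saved by re-estimating less often.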
Large graphs have unique value for organizations and research, such as user-connection graphs in online social networks and customer rating matrices in social media channels. Because of their size and continuous growth, they require substantial resources to store and process, so owners of large graphs may turn to the extensive pool of public cloud resources for elastic storage and computation. However, the confidentiality and protection of graph data outsourced to the cloud has become a significant concern. In this study, we consider privacy-preserving algorithms for a fundamental graph analysis task: graph spectral analysis of outsourced graphs in the cloud. We develop privacy-preserving variants of the two proposed eigen-decomposition algorithms, using two cryptographic building blocks: additively homomorphic encryption (AHE) schemes and somewhat homomorphic encryption (SHE) schemes. For sparse matrices, we additionally design a privacy-preserving data submission protocol that trades off confidentiality against data sparsity. Both dense and sparse constructions are investigated. Experimental results show that sparse encodings can drastically reduce data size and cost; SHE-based approaches have lower computation time, while AHE-based approaches have lower storage cost.
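As a concrete illustration of the additive homomorphic property that such AHE building blocks provide, the following self-contained Python sketch implements a toy Paillier-style cryptosystem with tiny hard-coded primes. It only shows why multiplying ciphertexts decrypts to the sum of plaintexts, which is what lets a cloud server combine encrypted matrix entries without seeing them; it is not the paper's construction and is far too small to be secure.

```python
import math
import random

# Toy Paillier parameters (insecure, for illustration only).
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # private key lambda
mu = pow(lam, -1, n)           # with g = n + 1, mu = lambda^{-1} mod n

def encrypt(m: int) -> int:
    """Encrypt m in [0, n) with fresh randomness r coprime to n."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt using L(x) = (x - 1) / n."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
a, b = 42, 99
ca, cb = encrypt(a), encrypt(b)
assert decrypt((ca * cb) % n2) == (a + b) % n
print("Enc(42) * Enc(99) decrypts to", decrypt((ca * cb) % n2))
```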
In the era of social media, people frequently share their opinions online on various issues and, in the process, are exposed to the opinions of others. Whether because of the selective exposure created by news-feed recommendation algorithms or our own inclination to listen to opinions that agree with ours, the result is that we become more and more exposed to opinions close to our own. Further, any population is inherently heterogeneous, i.e., people hold a wide range of opinions on a topic and exhibit a wide range of openness to being influenced by others. In this paper, we demonstrate the different behaviors exhibited by open- and closed-minded agents towards an issue when they are allowed to intermix and communicate freely. We show that this intermixing leads to the formation of opinion echo chambers, i.e., small closed networks of people who hold similar opinions and are unaffected by the opinions of people outside the network. Echo chambers are evidently harmful for a society because they inhibit free and healthy communication and thereby prevent the exchange of opinions, spread misinformation, and reinforce extremist beliefs. This calls for a reduction in echo chambers, since total consensus of opinion is neither possible nor desirable. We show that the number of echo chambers depends on the number of closed-minded agents and cannot be lessened by increasing the number of open-minded agents. We identify certain 'moderate'-minded agents who possess the capability of manipulating and reducing the number of echo chambers. The paper proposes an algorithm for the intelligent placement of moderate-minded agents in the opinion-time spectrum by which the opinion echo chambers can be maximally reduced. Across various experimental setups, we demonstrate that the proposed algorithm compares favorably with the placement of other agents (open- or closed-minded) and with the random placement of 'moderate'-minded agents.
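The paper's agent model is not reproduced here, but the flavor of heterogeneous open/closed-mindedness can be illustrated with a minimal bounded-confidence (Deffuant-style) simulation, where each agent's confidence threshold encodes how open-minded it is; the thresholds and parameter values below are illustrative assumptions, not the paper's settings.

```python
import random

def simulate(n_agents=200, steps=20000, mu=0.3, seed=0):
    """Bounded-confidence opinion dynamics with heterogeneous confidence thresholds."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    # Closed-minded agents accept only very similar opinions; open-minded accept a wide range.
    thresholds = [0.05 if i < n_agents // 2 else 0.4 for i in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        diff = opinions[j] - opinions[i]
        if abs(diff) < thresholds[i]:   # i moves toward j only within i's confidence bound
            opinions[i] += mu * diff
        if abs(diff) < thresholds[j]:   # and vice versa
            opinions[j] -= mu * diff
    return opinions

ops = simulate()
# Count rough opinion clusters ("echo chambers") by binning the final opinions.
clusters = {round(o, 1) for o in ops}
print("final opinion clusters:", sorted(clusters))
```

With these toy thresholds, the closed-minded half tends to persist as several separate opinion clusters regardless of how many open-minded agents are present, which mirrors the paper's observation that open-minded agents alone cannot reduce the number of echo chambers.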
Recent advances in state-of-the-art DNN architecture design have been moving toward Transformer models. These models achieve superior accuracy across a wide range of applications, and this trend has been consistent over the several years since Transformer models were originally introduced. However, the amount of compute and bandwidth required for inference of recent Transformer models is growing at a significant rate, which has made their deployment in latency-sensitive applications challenging. As such, there has been an increased focus on making Transformer models more efficient, with methods ranging from changing the architecture design all the way to developing dedicated domain-specific accelerators. In this work, we survey different approaches for efficient Transformer inference, including: (i) analysis and profiling of the bottlenecks in existing Transformer architectures and their similarities to and differences from previous convolutional models; (ii) implications of the Transformer architecture for hardware, including the impact of non-linear operations such as Layer Normalization, Softmax, and GELU, as well as linear operations, on hardware design; (iii) approaches for optimizing a fixed Transformer architecture; (iv) challenges in finding the right mapping and scheduling of operations for Transformer models; and (v) approaches for optimizing Transformer models by adapting the architecture using neural architecture search. Finally, we perform a case study by applying the surveyed optimizations to Gemmini, the open-source, full-stack DNN accelerator generator, and we show how each of these approaches can yield improvements over previous benchmark results on Gemmini. Among other things, we find that a full-stack co-design approach with the aforementioned methods can result in up to an 88.7x speedup with minimal performance degradation for Transformer inference.
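As a small example of the kind of bottleneck profiling surveyed in item (i), the sketch below estimates per-layer FLOPs, weight-memory traffic, and arithmetic intensity for a full-sequence (prefill) pass through one decoder layer; the model dimensions and the 16-bit weight assumption are illustrative, not measurements from the paper or from Gemmini.

```python
def layer_cost(d_model=4096, n_heads=32, seq_len=2048, d_ff=None, bytes_per_param=2):
    """Rough FLOP and weight-byte counts for one Transformer decoder layer."""
    d_ff = d_ff or 4 * d_model
    # Q, K, V, and output projections: four matmuls of ~2 * seq_len * d_model^2 FLOPs each.
    proj_flops = 4 * 2 * seq_len * d_model * d_model
    # Two FFN matmuls of ~2 * seq_len * d_model * d_ff FLOPs each.
    ffn_flops = 2 * 2 * seq_len * d_model * d_ff
    # Attention score (QK^T) and value (PV) matmuls: ~2 * seq_len^2 * d_model FLOPs each.
    attn_flops = 2 * 2 * seq_len * seq_len * d_model
    weights = 4 * d_model * d_model + 2 * d_model * d_ff
    flops = proj_flops + ffn_flops + attn_flops
    weight_bytes = weights * bytes_per_param
    intensity = flops / weight_bytes        # FLOPs per byte of weights moved
    return flops, weight_bytes, intensity

flops, weight_bytes, ai = layer_cost()
print(f"{flops / 1e9:.1f} GFLOPs, {weight_bytes / 1e6:.0f} MB of weights, "
      f"intensity {ai:.1f} FLOPs/byte")
```

Repeating the same calculation for single-token decoding (seq_len of new tokens equal to 1) drops the intensity to only a few FLOPs per weight byte, which is the memory-bound regime that motivates many of the surveyed optimizations.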
In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as those on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system that shares tasks across many agents, but this in turn raises the resource cost of communication and synchronisation and is itself difficult to scale. In this paper we present four algorithms to address these problems. Their combination enables each agent to improve its task-allocation strategy through reinforcement learning, while changing how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems in which agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those subtasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
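The four algorithms themselves are not given in the abstract; the sketch below only illustrates the general idea of an agent adjusting its exploration rate according to how confident it is in its current allocation strategy, using a simple ε-greedy Q-learning rule. All class names, parameters, and reward values here are hypothetical.

```python
import random

class TaskAllocator:
    """Q-learning agent that lowers its exploration as its value estimates stabilise."""

    def __init__(self, n_peers, alpha=0.1, seed=0):
        self.q = [0.0] * n_peers          # estimated value of allocating to each peer
        self.alpha = alpha
        self.recent_error = 1.0           # running estimate of how wrong we still are
        self.rng = random.Random(seed)

    def epsilon(self):
        # Explore more while prediction error is high, less once the strategy looks optimal.
        return min(1.0, max(0.05, self.recent_error))

    def choose_peer(self):
        if self.rng.random() < self.epsilon():
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda i: self.q[i])

    def update(self, peer, reward):
        error = reward - self.q[peer]
        self.q[peer] += self.alpha * error
        self.recent_error = 0.9 * self.recent_error + 0.1 * abs(error)

# Toy usage: peer 2 is secretly the best choice for this subtask.
agent = TaskAllocator(n_peers=4)
for _ in range(500):
    peer = agent.choose_peer()
    reward = 1.0 if peer == 2 else 0.2 + agent.rng.random() * 0.1
    agent.update(peer, reward)
print("learned values:", [round(v, 2) for v in agent.q],
      "| epsilon:", round(agent.epsilon(), 2))
```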
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their application on many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that can lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems, and applications. We start by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific acceleration techniques for point cloud, video, and natural language processing that exploit their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
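To ground two of the compression methods listed above, the short NumPy sketch below applies global magnitude pruning followed by symmetric 8-bit quantization to a weight matrix; it is a minimal illustration of the techniques, not code from any of the surveyed systems, and the 90% sparsity target is an arbitrary choice.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until the target sparsity is reached."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor 8-bit quantization: w ~ scale * q with q in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)        # keep roughly 10% of the weights
q, scale = quantize_int8(w_sparse)
w_hat = q.astype(np.float32) * scale               # dequantize to check the error
print("kept weights:", int(np.count_nonzero(w_sparse)),
      "| mean abs reconstruction error:", float(np.mean(np.abs(w_hat - w_sparse))))
```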
Recent times are witnessing rapid development in machine learning algorithms and systems, especially in reinforcement learning, natural language processing, computer and robot vision, image processing, speech, and emotional processing and understanding. In tune with the increasing importance and relevance of machine learning models, algorithms, and their applications, and with the emergence of more innovative use cases of deep learning and artificial intelligence, the current volume presents a few innovative research works and their applications in the real world, such as stock trading, medical and healthcare systems, and software automation. The chapters in the book illustrate how machine learning and deep learning algorithms and models are designed, optimized, and deployed. The volume will be useful for advanced graduate and doctoral students, researchers, faculty members of universities, practicing data scientists and data engineers, professionals, and consultants working in the broad areas of machine learning, deep learning, and artificial intelligence.
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. It is now time to reconsider the relationship between ML and systems and to let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and we envision that applying ML to computer architecture and systems will thrive in the community.
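As a minimal example of ML-based modelling in this sense, the sketch below fits a linear model that predicts a performance metric (here, latency) from hand-picked design features using ordinary least squares; the features, design points, and measurements are made up for illustration and stand in for whatever simulator or profiling data a real study would use.

```python
import numpy as np

# Hypothetical design points: [log2(cache_kb), issue_width, clock_ghz] -> measured latency (ms).
X = np.array([
    [5, 2, 1.0],
    [6, 2, 1.5],
    [6, 4, 2.0],
    [7, 4, 2.5],
    [8, 8, 3.0],
], dtype=float)
y = np.array([9.1, 7.2, 5.0, 4.1, 2.9])

# Ordinary least squares with an intercept term.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_latency(features):
    """Predict latency (ms) for an unseen design described by the same three features."""
    return float(np.append(np.asarray(features, dtype=float), 1.0) @ coef)

print("predicted latency for an unseen design:", round(predict_latency([7, 8, 2.8]), 2), "ms")
```

In practice such surrogate models (often random forests, boosted trees, or neural networks rather than a linear fit) let designers screen large design spaces without running a slow cycle-accurate simulator for every candidate.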
Since deep neural networks were developed, they have made substantial contributions to everyday life. Machine learning now provides more rational advice than humans are capable of in almost every aspect of daily life. Despite this achievement, however, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining their value ranges. The paper then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. Next, it reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
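As a baseline against which the surveyed optimization algorithms are usually compared, the following sketch shows a plain random-search HPO loop over a small hypothetical search space; the objective function is a stand-in for an actual training-and-validation run, and the hyper-parameter ranges are illustrative only.

```python
import math
import random

# Hypothetical search space: each entry maps a name to a sampler over its value range.
search_space = {
    "learning_rate": lambda rng: 10 ** rng.uniform(-5, -1),    # log-uniform
    "batch_size":    lambda rng: rng.choice([32, 64, 128, 256]),
    "dropout":       lambda rng: rng.uniform(0.0, 0.5),
}

def validation_score(cfg):
    """Stand-in for training a model with cfg and returning its validation accuracy."""
    return 1.0 - abs(math.log10(cfg["learning_rate"]) + 3) * 0.1 - cfg["dropout"] * 0.2

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: sample(rng) for name, sample in search_space.items()}
        score = validation_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

cfg, score = random_search()
print("best configuration:", cfg, "| score:", round(score, 3))
```

More sophisticated methods covered by such reviews (e.g., Bayesian optimization or successive halving) replace the uniform sampling step with a model of past trials or an early-stopping schedule, but keep the same evaluate-and-compare loop.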
Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have now received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization for edge computing systems. Open issues in analyzing and designing an edge computing system are also discussed in this survey.
Automatic License Plate Recognition (ALPR) has been a frequent topic of research due to its many practical applications. However, many current solutions are still not robust in real-world situations and commonly depend on many constraints. This paper presents a robust and efficient ALPR system based on the state-of-the-art YOLO object detector. The Convolutional Neural Networks (CNNs) are trained and fine-tuned for each ALPR stage so that they remain robust under different conditions (e.g., variations in camera, lighting, and background). Specifically for character segmentation and recognition, we design a two-stage approach employing simple data augmentation tricks such as inverted License Plates (LPs) and flipped characters. The resulting ALPR approach achieved impressive results on two datasets. First, on the SSIG dataset, composed of 2,000 frames from 101 vehicle videos, our system achieved a recognition rate of 93.53% at 47 frames per second (FPS), performing better than both the Sighthound and OpenALPR commercial systems (89.80% and 93.03%, respectively) and considerably outperforming previous results (81.80%). Second, targeting a more realistic scenario, we introduce a larger public dataset, called the UFPR-ALPR dataset, designed for ALPR. This dataset contains 150 videos and 4,500 frames captured while both the camera and the vehicles are moving, and it includes different types of vehicles (cars, motorcycles, buses, and trucks). On our proposed dataset, the trial versions of the commercial systems achieved recognition rates below 70%, whereas our system performed better, with a recognition rate of 78.33% at 35 FPS.
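The flip-based augmentation tricks mentioned above can be illustrated with a few lines of NumPy: mirroring character crops whose glyphs stay valid when flipped, and rotating whole license-plate crops by 180 degrees to create additional samples. The set of "flip-safe" characters and the exact augmentation rule are illustrative assumptions, not the paper's recipe.

```python
import numpy as np

# Characters whose glyphs remain valid after a horizontal mirror (illustrative subset).
FLIP_SAFE = {"0", "1", "8", "A", "H", "I", "M", "O", "T", "U", "V", "W", "X", "Y"}

def flip_character(crop: np.ndarray, label: str):
    """Mirror a character crop horizontally; keep it only if the glyph stays valid."""
    if label not in FLIP_SAFE:
        return None
    return np.ascontiguousarray(crop[:, ::-1]), label

def invert_plate(crop: np.ndarray) -> np.ndarray:
    """Rotate a license-plate crop by 180 degrees to create an additional training sample."""
    return np.ascontiguousarray(crop[::-1, ::-1])

# Toy usage with fake grayscale crops.
char_aug = flip_character(np.zeros((32, 24), dtype=np.uint8), "8")
plate_aug = invert_plate(np.zeros((40, 120), dtype=np.uint8))
print("flipped char shape:", char_aug[0].shape, "| inverted plate shape:", plate_aug.shape)
```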