
In recent decades, driven by growing demand for computation acceleration, cloud FPGAs have become popular in public clouds. Major cloud service providers, e.g., AWS and Microsoft Azure, offer FPGA computing resources in their infrastructure and enable users to design and deploy their own accelerators on these FPGAs. Multi-tenant FPGAs, where multiple users share the same FPGA fabric with certain forms of isolation to improve resource efficiency, have already been proven feasible. However, this also raises security concerns. Various types of side-channel attacks targeting multi-tenant FPGAs have been proposed and validated. Awareness of these security vulnerabilities has motivated cloud providers to take action to harden their cloud environments. In FPGA security research, attacks are typically performed under the assumption that attackers have successfully co-located with victims and are aware of the victims' presence on the same FPGA board. However, the way to reach this point, i.e., how attackers covertly obtain information about accelerators on the same fabric, is commonly overlooked despite being non-trivial and important for attackers. In this paper, we present a novel fingerprinting attack that reveals the types of co-located FPGA accelerators. We use a seemingly non-malicious benchmark accelerator to sniff the FPGA-host communication link and collect performance traces of that link. By analyzing these traces, we achieve high classification accuracy in fingerprinting co-located accelerators, demonstrating that attackers can use our method to fingerprint cloud FPGA accelerators with a high success rate. To the best of our knowledge, this is the first paper targeting multi-tenant FPGA accelerator fingerprinting via the communication side channel.
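The paper itself contains no code; the following Python fragment is a minimal sketch, under assumed access to a benign benchmark accelerator, of how communication-link traces might be collected and classified. The helper `measure_link_bandwidth()` and all parameter values are hypothetical and are not part of any real cloud FPGA API.

```python
# Hypothetical sketch: collect FPGA-host link performance traces with a benign
# benchmark accelerator and fingerprint co-located accelerators with a standard
# classifier. Nothing here is a real cloud FPGA API; names are illustrative.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def measure_link_bandwidth() -> float:
    """Stand-in returning synthetic throughput (MB/s) so the sketch runs;
    in a real attack this would query the benchmark accelerator."""
    return float(np.random.normal(loc=1000.0, scale=50.0))

def collect_trace(n_samples: int = 500, interval_s: float = 0.001) -> np.ndarray:
    """Sample the communication link at a fixed rate to form one trace."""
    samples = []
    for _ in range(n_samples):
        samples.append(measure_link_bandwidth())
        time.sleep(interval_s)
    return np.asarray(samples)

def trace_features(trace: np.ndarray) -> np.ndarray:
    """Summary statistics of a trace used as classification features."""
    return np.array([trace.mean(), trace.std(), trace.min(), trace.max(),
                     np.percentile(trace, 25), np.percentile(trace, 75)])

def train_fingerprinter(X: np.ndarray, y: np.ndarray) -> RandomForestClassifier:
    """X: one feature vector per labeled trace; y: accelerator-type labels.
    The classifier is trained offline on traces of known accelerators."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf
```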

Related Content

FPGA: ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. Publisher: ACM.

We present new min-max relations in digraphs between the number of paths satisfying certain conditions and the order of the corresponding cuts. We define these objects in order to capture, in the context of solving the half-integral linkage problem, the essential properties needed for reaching a large bramble of congestion two (or any other constant) from the terminal set. This strategy has been used ad hoc in several articles, usually with lengthy technical proofs, and our objective is to abstract it so that it can be applied in a simpler and unified way. We provide two proofs of the min-max relations: one consists of applying Menger's Theorem on appropriately defined auxiliary digraphs, and the other is a simpler alternative using matroids, albeit with a worse polynomial running time. As an application, we simplify and improve several results of Edwards et al. [ESA 2017] and of Giannopoulou et al. [SODA 2022] about finding half-integral linkages in digraphs. Concerning the former, besides being simpler, our proof provides an almost optimal bound on the strong connectivity of a digraph for it to be half-integrally feasible under the presence of a large bramble of congestion two (or equivalently, if the directed tree-width is large, which is the hard case). Concerning the latter, our proof uses brambles as rerouting objects instead of cylindrical grids, hence yielding much better bounds and being largely independent of a particular topology. We hope that our min-max relations will find further applications as, in our opinion, they are simple, robust, and versatile enough to be easily applicable to different types of routing problems in digraphs.
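For reference, the first proof applies the classical set version of Menger's Theorem on auxiliary digraphs; the textbook statement (not the paper's specific auxiliary construction) is:

```latex
% Menger's Theorem for digraphs (vertex/set version): for a digraph $D$ and
% vertex sets $A, B \subseteq V(D)$,
\[
  \max\{\, k : \text{there are $k$ pairwise vertex-disjoint $A$--$B$ paths in } D \,\}
  \;=\;
  \min\{\, |S| : S \subseteq V(D) \text{ and } S \text{ meets every $A$--$B$ path} \,\}.
\]
```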

Context modeling and recognition are crucial for adaptive mobile and ubiquitous computing. Context-awareness in mobile environments relies on prompt reactions to context changes. However, current solutions focus on limited context information processed on centralized architectures, risking privacy leakage and lacking personalization. On-device context modeling and recognition are emerging research trends, addressing these concerns. Social interactions and visited locations play significant roles in characterizing daily life scenarios. This paper proposes an unsupervised and lightweight approach to model the user's social context and locations directly on the mobile device. Leveraging the ego-network model, the system extracts high-level, semantic-rich context features from smartphone-embedded sensor data. For the social context, the approach utilizes data on physical and cyber social interactions among users and their devices. Regarding location, it prioritizes modeling the familiarity degree of specific locations over raw location data, such as GPS coordinates and proximity devices. The effectiveness of the proposed approach is demonstrated through three sets of experiments, employing five real-world datasets. These experiments evaluate the structure of social and location ego networks, provide a semantic evaluation of the proposed models, and assess mobile computing performance. Finally, the relevance of the extracted features is showcased by the improved performance of three machine learning models in recognizing daily-life situations. Compared to using only features related to physical context, the proposed approach achieves a 3% improvement in AUROC, 9% in Precision, and 5% in Recall.
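As an illustration of the ego-network idea on-device, here is a minimal Python sketch that groups peers into concentric circles by contact frequency; the function name, the use of k-means, and the chosen features are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch: build a simple ego network from per-peer interaction
# frequencies and derive high-level social-context features on-device.
# The clustering into "circles" by contact frequency follows the general
# ego-network idea; names and thresholds here are hypothetical.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def ego_network_features(interactions: list[str], n_circles: int = 3) -> dict:
    """interactions: list of peer IDs observed (physical or cyber contacts)."""
    freq = Counter(interactions)                      # contacts per peer
    rates = np.array(sorted(freq.values(), reverse=True), dtype=float)
    if len(rates) < n_circles:                        # too few peers to cluster
        return {"n_peers": len(rates), "circle_sizes": [len(rates)]}
    # Group peers into concentric circles by contact frequency.
    labels = KMeans(n_clusters=n_circles, n_init=10).fit_predict(rates.reshape(-1, 1))
    circle_sizes = [int((labels == c).sum()) for c in range(n_circles)]
    return {
        "n_peers": len(rates),                        # ego-network size
        "circle_sizes": sorted(circle_sizes, reverse=True),
        # Share of all interactions attributed to the innermost circle
        # (the cluster containing the most-contacted peer).
        "top_circle_share": rates[labels == labels[0]].sum() / rates.sum(),
    }
```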

Change detection and irregular object extraction in 3D point clouds are challenging tasks that are of high importance not only for autonomous navigation but also for updating existing digital twin models of various industrial environments. This article proposes an innovative approach for change detection in 3D point clouds using deep-learned place recognition descriptors and irregular object extraction based on voxel-to-point comparison. The proposed method first aligns the bi-temporal point clouds using a map-merging algorithm in order to establish a common coordinate frame. Then, it utilizes deep learning techniques to extract robust and discriminative features from the 3D point cloud scans, which are used to detect changes between consecutive point cloud frames and therefore find the changed areas. Finally, the altered areas are sampled and compared between the two time instances to extract any obstructions that caused the area to change. The proposed method was successfully evaluated in real-world field experiments, where it detected different types of changes in 3D point clouds, such as object or muck-pile addition and displacement, showcasing the effectiveness of the approach. The results of this study have important implications for various applications, including safety and security monitoring on construction sites, mapping, and exploration, and suggest potential future research directions in this field.
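The following is a minimal sketch of the general voxel-to-point comparison idea applied to two already-registered point clouds; the function names, voxel size, and use of plain NumPy are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of voxel-to-point change detection between two aligned
# point clouds (assumed to share a common coordinate frame). This is an
# illustration of the general voxel-to-point idea, not the paper's pipeline.
import numpy as np

def occupied_voxels(points: np.ndarray, voxel_size: float) -> set:
    """Return the set of voxel indices occupied by `points` (N x 3 array)."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    return set(map(tuple, idx))

def changed_points(reference: np.ndarray, current: np.ndarray,
                   voxel_size: float = 0.2) -> np.ndarray:
    """Points of `current` that fall in voxels unoccupied in `reference`,
    i.e., candidate additions or obstructions."""
    ref_voxels = occupied_voxels(reference, voxel_size)
    cur_idx = np.floor(current / voxel_size).astype(np.int64)
    mask = np.array([tuple(v) not in ref_voxels for v in cur_idx])
    return current[mask]

# Example usage with random data standing in for two registered scans:
# added = changed_points(np.random.rand(10000, 3), np.random.rand(10000, 3))
```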

We present Citadel, to our knowledge, the first side-channel-resistant enclave platform to run realistic secure programs on a speculative out-of-order multicore processor. First, we develop a new hardware mechanism to enable secure shared memory while defending against transient execution attacks. Then, we develop an efficient dynamic cache partitioning scheme, improving both enclaves' and unprotected processes' performance. We conduct an in-depth security analysis and a performance evaluation of our new mechanisms. Finally, we build the hardware and software infrastructure required to run our secure enclaves. Our multicore processor runs on an FPGA and boots untrusted Linux from which users can securely launch and interact with enclaves. We open-source our end-to-end hardware and software infrastructure, hoping to spark more research and bridge the gap between conceptual proposals and FPGA prototypes.

Natural interaction between multiple users within a shared virtual environment (VE) relies on each user's awareness of the current positions of their interaction partners. This, however, cannot be guaranteed when users employ non-continuous locomotion techniques, such as teleportation, which may cause confusion among bystanders. In this paper, we pursue two approaches to create a pleasant experience for both the moving user and the bystanders observing that movement. First, we introduce a Smart Avatar system that delivers continuous full-body human representations for non-continuous locomotion in shared virtual reality (VR) spaces. Smart Avatars imitate their assigned user's real-world movements when close by and autonomously navigate to their user when the distance between them exceeds a certain threshold, i.e., after the user teleports. As part of the Smart Avatar system, we implemented four avatar transition techniques and compared them to conventional avatar locomotion in a user study, revealing significant positive effects on the observers' spatial awareness, as well as on pragmatic and hedonic quality scores. Second, we introduce the concept of Stuttered Locomotion, which can be applied to any continuous locomotion method. By converting a continuous movement into short-interval teleport steps, we provide the merits of non-continuous locomotion for the moving user while observers can easily keep track of their path. Thus, while the experience for observers is similarly positive as with continuous motion, a user study confirmed that Stuttered Locomotion can significantly reduce the occurrence of cybersickness symptoms for the moving user, making it an attractive choice for shared VEs. We discuss the potential of Smart Avatars and Stuttered Locomotion for shared VR experiences, both when applied individually and in combination.
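A toy Python sketch of the distance-threshold behavior described for Smart Avatars is given below; the threshold, speed, and update function are hypothetical values chosen for illustration, not the study's implementation.

```python
# Toy sketch of the Smart Avatar follow/navigate behavior: the avatar mirrors
# its user while nearby and autonomously walks toward the user once their
# separation exceeds a threshold (e.g., after a teleport).
import numpy as np

DISTANCE_THRESHOLD = 2.0   # meters; beyond this the avatar navigates on its own
AVATAR_SPEED = 1.4         # meters per second, roughly walking speed

def update_avatar(avatar_pos: np.ndarray, user_pos: np.ndarray,
                  dt: float) -> np.ndarray:
    """Return the avatar's next position for one simulation step of length dt."""
    offset = user_pos - avatar_pos
    distance = float(np.linalg.norm(offset))
    if distance <= DISTANCE_THRESHOLD:
        # Close by: mirror the user's real-world position directly.
        return user_pos.copy()
    # Far away (e.g., the user teleported): walk toward the user.
    step = AVATAR_SPEED * dt
    return avatar_pos + offset / distance * min(step, distance)
```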

Trends in hardware, the prevalence of the cloud, and the rise of highly demanding applications have ushered in an era of specialization that is quickly changing how data is processed at scale. These changes are likely to continue and accelerate in the coming years as new technologies are adopted and deployed: smart NICs, smart storage, smart memory, disaggregated storage, disaggregated memory, specialized accelerators (GPUs, TPUs, FPGAs), and a wealth of ASICs created specifically to handle computationally expensive tasks (e.g., cryptography or compression). In this tutorial, we focus on data processing on FPGAs, a technology that has received less attention than, e.g., TPUs or GPUs but that is increasingly being deployed in the cloud for data processing tasks due to the architectural flexibility of FPGAs, along with their ability to process data at line rate, something not possible with other types of processors or accelerators. In the tutorial, we cover what FPGAs are, their characteristics, their advantages and disadvantages, as well as examples of deployments in industry and how they are used in various data processing tasks. We introduce FPGA programming with high-level languages and describe hardware and software resources available to researchers. The tutorial includes case studies borrowed from research done in collaboration with companies that illustrate the potential of FPGAs in data processing and how software and hardware are evolving to take advantage of the possibilities offered by FPGAs. The use cases include: (1) approximate nearest neighbor search, which is relevant to databases and machine learning; (2) remote disaggregated memory, showing how the cloud architecture is evolving and demonstrating the potential for operator offloading and line-rate data processing; and (3) recommendation systems as an application with tight latency constraints.

Deep neural networks (DNNs) have succeeded in many different perception tasks, e.g., computer vision, natural language processing, and reinforcement learning. High-performing DNNs, however, rely heavily on intensive resource consumption. For example, training a DNN requires a large amount of dynamic memory, a large-scale dataset, and a large number of computations (a long training time); even inference with a DNN demands a large amount of static storage, computations (a long inference time), and energy. Therefore, state-of-the-art DNNs are often deployed on cloud servers with large numbers of supercomputers, a high-bandwidth communication bus, a shared storage infrastructure, and a high-power supply. Recently, emerging intelligent applications, e.g., AR/VR, mobile assistants, and the Internet of Things, require us to deploy DNNs on resource-constrained edge devices. Compared to a cloud server, edge devices often have a rather small amount of resources. To deploy DNNs on edge devices, we need to reduce their size, i.e., we target a better trade-off between resource consumption and model accuracy. In this dissertation, we studied four edge intelligence scenarios, i.e., Inference on Edge Devices, Adaptation on Edge Devices, Learning on Edge Devices, and Edge-Server Systems, and developed different methodologies to enable deep learning in each scenario. Since current DNNs are often over-parameterized, our goal is to find and reduce the redundancy of the DNNs in each scenario.

Technology ecosystems often undergo significant transformations as they mature. For example, telephony, the Internet, and PCs all started with a single provider, but in the United States each is now served by a competitive market that uses comprehensive and universal technology standards to provide compatibility. This white paper presents our view on how the cloud ecosystem, barely over fifteen years old, could evolve as it matures.

Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that are able to lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand in order to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems and applications. We start by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce the efficient deep learning system design from both software and hardware perspectives.
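As a concrete illustration of one of the compression methods listed above, here is a minimal, framework-agnostic Python sketch of magnitude-based weight pruning; it is a generic example, not code from the surveyed systems.

```python
# Minimal, framework-agnostic sketch of magnitude-based weight pruning,
# one of the model compression methods surveyed above. Real systems apply
# this per layer inside a training loop, with fine-tuning afterwards.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    assert 0.0 <= sparsity < 1.0
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Example: prune 90% of a random weight matrix.
W = np.random.randn(256, 256)
W_sparse = magnitude_prune(W, sparsity=0.9)
print(f"sparsity achieved: {(W_sparse == 0).mean():.2%}")
```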

The world population is anticipated to increase by close to 2 billion by 2050, causing a rapid escalation of food demand. A recent projection shows that the world is lagging behind in accomplishing the "Zero Hunger" goal, in spite of some advancements. Socio-economic and well-being fallout will affect food security, and vulnerable groups of people will suffer malnutrition. To cater to the needs of the increasing population, the agricultural industry needs to be modernized, become smart, and be automated. Traditional agriculture can be remade into efficient, sustainable, eco-friendly smart agriculture by adopting existing technologies. In this survey paper the authors present the applications, technological trends, available datasets, networking options, and challenges in smart agriculture. How Agro Cyber Physical Systems are built upon the Internet-of-Agro-Things is discussed through various application fields. Agriculture 4.0 is also discussed as a whole. We focus on technologies such as Artificial Intelligence (AI) and Machine Learning (ML), which support automation, along with Distributed Ledger Technology (DLT), which provides data integrity and security. After an in-depth study of different architectures, we also present a smart agriculture framework that relies on the location of data processing. We have divided the open research problems of smart agriculture, as future research work, into two groups: a technological perspective and a networking perspective. AI, ML, the blockchain as a DLT, and Physical Unclonable Function (PUF) based hardware security fall under the technology group, whereas network-related attacks, fake data injection, and similar threats fall under the networking research problem group.
