
There is increasing interest in using Linux in the real-time domain due to the emergence of cloud and edge computing, the need to decrease costs, and the growing number of complex functional and non-functional requirements of real-time applications. Linux presents a valuable opportunity as it has rich hardware support, an open-source development model, a well-established programming environment, and avoids vendor lock-in. Although Linux was initially developed as a general-purpose operating system, some real-time capabilities have been added to the kernel over many years to increase its predictability and reduce its scheduling latency. Unfortunately, Linux currently has no support for time-triggered (TT) scheduling, which is widely used in the safety-critical domain for its determinism, low run-time scheduling latency, and strong isolation properties. We present an enhancement of the Linux scheduler as a new low-overhead TT scheduling class to support offline table-driven scheduling of tasks on multicore Linux nodes. Inspired by the slot-shifting algorithm, we complement the new scheduling class with a low-overhead slot-shifting manager running on a non-time-triggered core, which provides guaranteed execution time to real-time aperiodic tasks by using the slack of the time-triggered tasks and avoids high-overhead table regeneration when adding new periodic tasks. Furthermore, we evaluate our implementation on server-grade hardware with an Intel Xeon Scalable processor.
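
To make the slot-shifting idea concrete, here is a minimal Python sketch of an acceptance test that grants an aperiodic job guaranteed execution time out of the slack left by time-triggered reservations. The table layout, function names, and slot granularity are illustrative assumptions, not the paper's kernel implementation, which operates on per-interval spare capacities.

```python
# Minimal slot-shifting sketch: a schedule table maps each slot to a
# time-triggered job or None (slack). All names here are assumptions.

def free_slots(table, now, deadline):
    """Count slack slots in [now, deadline) not reserved by TT jobs."""
    return sum(1 for t in range(now, deadline) if table[t] is None)

def admit_aperiodic(table, now, wcet, deadline, name):
    """Admit an aperiodic job iff enough slack exists before its deadline,
    then reserve the earliest free slots (TT jobs are never moved)."""
    if free_slots(table, now, deadline) < wcet:
        return False  # cannot guarantee execution without disturbing TT jobs
    placed = 0
    for t in range(now, deadline):
        if table[t] is None:
            table[t] = name
            placed += 1
            if placed == wcet:
                break
    return True

# A 10-slot table with a TT task occupying slots 2-4 and 7.
table = [None, None, "TT1", "TT1", "TT1", None, None, "TT1", None, None]
print(admit_aperiodic(table, now=0, wcet=3, deadline=8, name="AP1"))  # True
print(table)  # AP1 placed in slots 0, 1, and 5
```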

Related Content

Linux is the collective name for a family of Unix-like computer operating systems. The core of these operating systems is the Linux kernel. The Linux operating system is also one of the most prominent examples of open-source software development.

This paper presents a rigorous Bayesian analysis of the information in the signal (consisting of both the line-of-sight (LOS) path and reflections from multiple reconfigurable intelligent surfaces (RISs)) that originates from a single base station (BS) and is received by a user equipment (UE). For a comprehensive Bayesian analysis, both the near-field and far-field regimes are considered. The Bayesian analysis views both the location of the RISs and previous information about the UE as a priori information for UE localization. With outdated a priori information, the position and orientation offsets of the RISs become parameters that need to be estimated and fed back to the BS for correction. We first show that when the RIS elements have half-wavelength spacing, the RIS orientation offset is a factor in the pathloss of the RIS paths. Subsequently, we show through the Bayesian equivalent Fisher information matrix (EFIM) for the channel parameters that the RIS orientation offset cannot be corrected when there is an unknown phase offset in the received signal in the far-field regime. However, the corresponding EFIM for the channel parameters in the received signal observed in the near field shows that this unknown phase offset does not hinder the estimation of the RIS orientation offset when the UE has more than one receive antenna. Furthermore, we use the EFIM for the UE location parameters to present bounds for UE localization in the presence of RIS uncertainty.
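
As a concrete piece of this kind of analysis, the equivalent Fisher information matrix for a parameter subset is the Schur complement of the nuisance block of the full FIM. A minimal numpy sketch, with a toy matrix standing in for the paper's channel-parameter FIM (the partition sizes and parameter labels below are assumptions):

```python
import numpy as np

def efim(J, k):
    """Equivalent Fisher information matrix for the first k parameters:
    the Schur complement of the nuisance block, J11 - J12 J22^{-1} J21."""
    J11, J12 = J[:k, :k], J[:k, k:]
    J21, J22 = J[k:, :k], J[k:, k:]
    return J11 - J12 @ np.linalg.solve(J22, J21)

# Toy 4x4 FIM: 2 parameters of interest (e.g., UE position) and
# 2 nuisance parameters (e.g., phase offset, RIS orientation offset).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
J = A @ A.T  # symmetric positive definite by construction
Je = efim(J, k=2)
# A (near-)singular Je signals that the parameters of interest are not
# identifiable once the nuisance parameters are profiled out.
print(np.linalg.eigvalsh(Je))
```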

In human-robot collaboration, unintentional physical contacts occur in the form of collisions and clamping, which must be detected and classified separately so that an appropriate reaction can be triggered. If certain collision or clamping situations are misclassified, reactions might occur that make the true contact case more dangerous. This work analyzes data-driven modeling based on physically modeled features, such as estimated external forces, for clamping and collision classification with a real parallel robot. The prediction reliability of a feedforward neural network is investigated. Quantifying the classification uncertainty enables the distinction between safe and unreliable classifications, and the selection of optimal reactions such as a retraction movement for collisions, opening of the structure for the clamped joint, and a fallback reaction in the form of a zero-g mode. This hypothesis is tested on experimental data of clamping and collision cases by analyzing dangerous misclassifications and then reducing them with the proposed uncertainty quantification. Finally, we investigate how the approach influences correctly classified clamping and collision scenarios.
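
A minimal sketch of the safe-versus-unreliable routing described above, assuming a softmax classifier and a hand-picked confidence threshold; the paper's actual uncertainty quantification and threshold selection may differ:

```python
import numpy as np

CLASSES = ["collision", "clamping"]

def classify_with_fallback(probs, threshold=0.9):
    """Commit to a class-specific reaction only when the classifier is
    confident; otherwise fall back to a conservative zero-g mode. The
    threshold is an assumed tuning parameter, not a value from the paper."""
    i = int(np.argmax(probs))
    if probs[i] < threshold:
        return "zero-g fallback"          # unreliable: do not commit
    return {"collision": "retraction movement",
            "clamping": "open clamping structure"}[CLASSES[i]]

print(classify_with_fallback(np.array([0.97, 0.03])))  # retraction movement
print(classify_with_fallback(np.array([0.55, 0.45])))  # zero-g fallback
```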

This dissertation gives an overview of Martin-Löf's dependent type theory, focusing on its computational content and addressing the question of whether a fully canonical and computable semantic presentation is possible.

We consider a network of smart sensors for an edge computing application that sample a time-varying signal and send updates to a base station for remote global monitoring. Sensors are equipped with sensing and computing capabilities, and can either send raw data or process them on board before transmission. Limited hardware resources at the edge generate a fundamental latency-accuracy trade-off: raw measurements are inaccurate but timely, whereas accurate processed updates are available only after a processing delay. Hence, one needs to decide when sensors should transmit raw measurements and when they should rely on local processing to maximize network monitoring performance. To tackle this sensing design problem, we develop an estimation-theoretic optimization framework that embeds both computation and communication latency, and propose a Reinforcement Learning-based approach that dynamically allocates computational resources at each sensor. The effectiveness of our proposed approach is validated through numerical experiments motivated by smart sensing for the Internet of Drones and self-driving vehicles. In particular, we show that, under constrained computation at the base station, monitoring performance can be further improved by online sensor selection.
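
The latency-accuracy trade-off can be made concrete with a one-step decision rule under an assumed random-walk signal model: a raw update is noisy but current, a processed one is accurate but stale. This myopic sketch only illustrates the trade-off that the Reinforcement Learning policy navigates dynamically; all parameter names and values are assumptions.

```python
def expected_mse(sigma2_meas, delay, drift_var_rate):
    """Error of an update when it arrives: measurement noise plus the
    variance the signal accumulates (random-walk model) during the delay."""
    return sigma2_meas + drift_var_rate * delay

def choose_update(sigma2_raw, sigma2_proc, proc_delay, drift_var_rate):
    """Send whichever update has the smaller expected error at arrival."""
    raw = expected_mse(sigma2_raw, 0.0, drift_var_rate)
    proc = expected_mse(sigma2_proc, proc_delay, drift_var_rate)
    return ("raw", raw) if raw <= proc else ("processed", proc)

# Fast-varying signal: timeliness wins. Slow-varying signal: accuracy wins.
print(choose_update(sigma2_raw=4.0, sigma2_proc=0.5,
                    proc_delay=2.0, drift_var_rate=3.0))   # ('raw', 4.0)
print(choose_update(sigma2_raw=4.0, sigma2_proc=0.5,
                    proc_delay=2.0, drift_var_rate=0.1))   # ('processed', 0.7)
```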

The security of confidential information associated with devices in the industrial Internet of Things (IIoT) network is a serious concern. This article focuses on achieving a nonorthogonal multiple access (NOMA)-enabled secure IIoT network in the presence of untrusted devices by jointly optimizing the resources, such as the decoding order and the power allocated to devices. Assuming that the devices are resource-constrained and cannot perform perfect successive interference cancellation (SIC), we characterize the residual interference at the receivers with a linear model. First, considering all possible decoding orders in an untrusted scenario, we obtain the secure decoding orders that can yield a positive secrecy rate for each device. Then, under the secrecy fairness criterion, we formulate a joint optimization problem of maximizing the minimum secrecy rate among devices. Since the formulated problem is non-convex and combinatorial, we first obtain the optimal secure decoding order and then solve for the power allocation by analyzing the Karush-Kuhn-Tucker points. Thus, we provide the closed-form global-optimal solution of the formulated optimization problem. Numerical results validate the analytical claims and reveal an interesting observation: the conventional decoding order and the assignment of more power to the weak device, as presumed in many works on NOMA, is not an optimal strategy from the secrecy fairness viewpoint. Also, average percentage gains of about 22.75%, 50.58%, 94.59%, and 98.16% are achieved by the jointly optimized solution over the benchmarks ODEP (optimal decoding order, equal power allocation), ODFP (optimal decoding order, fixed power allocation), FDEP (fixed decoding order, equal power allocation), and FDFP (fixed decoding order, fixed power allocation), respectively.
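
A simplified two-device sketch of why the decoding order matters for secrecy under imperfect SIC with the linear residual-interference model; the channel gains, power split, and residual fraction below are illustrative assumptions, not the paper's system model or values:

```python
import numpy as np

def secrecy_rate(p_sig_n, p_int_n, p_sig_m, p_int_m, noise=1.0):
    """[R_n - R_m]^+: achievable rate at the legitimate device n minus
    the rate at the untrusted device m overhearing n's signal."""
    r_n = np.log2(1 + p_sig_n / (p_int_n + noise))
    r_m = np.log2(1 + p_sig_m / (p_int_m + noise))
    return max(0.0, r_n - r_m)

# Two-device downlink NOMA: power fractions a, channel gains g, budget P.
# Imperfect SIC leaves a fraction beta of a cancelled signal as residual.
a, g, beta, P = [0.3, 0.7], [1.5, 0.6], 0.05, 10.0

# Secrecy of the weak device (index 1) against the strong, untrusted
# device (index 0), under two decoding orders at device 0.
# Order A: device 0 decodes device 1's signal first, so device 0's own
# signal is full interference; device 1 has already cancelled device 0's
# signal imperfectly (residual fraction beta).
order_A = secrecy_rate(
    a[1] * P * g[1], beta * a[0] * P * g[1],   # legitimate link at device 1
    a[1] * P * g[0], a[0] * P * g[0])          # eavesdropping at device 0
# Order B: device 0 first removes its own signal via imperfect SIC and
# only then overhears device 1, facing just the residual interference.
order_B = secrecy_rate(
    a[1] * P * g[1], beta * a[0] * P * g[1],
    a[1] * P * g[0], beta * a[0] * P * g[0])
print(order_A, order_B)  # the decoding order alone changes the secrecy rate
```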

Network alignment (NA) is the task of finding the correspondence of nodes between two networks based on the network structure and node attributes. Our study is motivated by the fact that, since most existing NA methods attempt to discover all node pairs at once, they do not harness the information enriched through the interim discovery of node correspondences to find the next correspondences more accurately during node matching. To tackle this challenge, we propose Grad-Align, a new NA method that gradually discovers node pairs by making full use of node pairs exhibiting strong consistency, which are easy to discover in the early stage of gradual matching. Specifically, Grad-Align first generates node embeddings of the two networks based on graph neural networks along with our layer-wise reconstruction loss, a loss built upon capturing the first-order and higher-order neighborhood structures. Then, nodes are gradually aligned by computing dual-perception similarity measures, including the multi-layer embedding similarity as well as the Tversky similarity, an asymmetric set similarity using the Tversky index that is applicable to networks of different scales. Additionally, we incorporate an edge augmentation module into Grad-Align to reinforce structural consistency. Through comprehensive experiments on real-world and synthetic datasets, we empirically demonstrate that Grad-Align consistently outperforms state-of-the-art NA methods.
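
The Tversky-index piece of the dual-perception similarity can be sketched directly; the neighborhood sets and asymmetry weights below are illustrative, and Grad-Align's exact weighting may differ:

```python
def tversky(A, B, alpha=0.5, beta=0.5):
    """Tversky index between node-neighborhood sets A and B. Asymmetric
    weights (alpha != beta) let the measure handle networks whose sizes
    differ, which is what makes it suitable for cross-network matching."""
    A, B = set(A), set(B)
    inter = len(A & B)
    return inter / (inter + alpha * len(A - B) + beta * len(B - A) + 1e-12)

# Neighbor sets of a candidate node pair in two networks of different scale:
# weighting the larger network's unmatched neighbors less keeps the score high.
print(tversky({1, 2, 3}, {2, 3, 4, 5, 6}, alpha=0.8, beta=0.2))  # ~0.588
```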

Many open-domain questions are under-specified and thus have multiple possible answers, each of which is correct under a different interpretation of the question. Answering such ambiguous questions is challenging, as it requires retrieving and then reasoning about diverse information from multiple passages. We present a new state-of-the-art method for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia. On the challenging ASQA benchmark, which requires generating long-form answers that summarize the multiple answers to an ambiguous question, our method improves performance by 15% (relative improvement) on recall measures and by 10% on measures that evaluate disambiguating questions from predicted outputs. Retrieving from the database of generated questions also gives large improvements in diverse passage retrieval (by matching user questions q to passages p indirectly, via questions q' generated from p).
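
A minimal sketch of the indirect retrieval step: match the user question q to generated questions q' by embedding similarity, then return the passages p the matched questions were generated from. The encoder is left abstract (embeddings are assumed to come from any off-the-shelf model), and the toy index is an assumption:

```python
import numpy as np

def normalize(X):
    return X / np.linalg.norm(X, axis=-1, keepdims=True)

def retrieve_via_questions(q_emb, gen_q_embs, gen_q_to_passage, k=3):
    """Rank generated questions q' by cosine similarity to the user
    question q, then return their (deduplicated) source passages p."""
    scores = normalize(gen_q_embs) @ normalize(q_emb)
    passages, seen = [], set()
    for i in np.argsort(-scores):
        p = gen_q_to_passage[int(i)]
        if p not in seen:
            seen.add(p)
            passages.append(p)
        if len(passages) == k:
            break
    return passages

# Toy index: 4 generated questions drawn from 3 passages.
rng = np.random.default_rng(1)
gen_q_embs = rng.standard_normal((4, 8))
q_emb = gen_q_embs[2] + 0.1 * rng.standard_normal(8)  # close to question 2
print(retrieve_via_questions(q_emb, gen_q_embs, {0: "p0", 1: "p0", 2: "p1", 3: "p2"}))
```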

Detection and recognition of text in natural images are two main problems in the field of computer vision with a wide variety of applications in the analysis of sports videos, autonomous driving, and industrial automation, to name a few. They face common challenges stemming from how text is represented and from the environmental conditions affecting it. The current state-of-the-art scene text detection and/or recognition methods have exploited the advancement in deep learning architectures and report superior accuracy on benchmark datasets when tackling multi-resolution and multi-oriented text. However, several challenges posed by text in the wild still cause existing methods to underperform, because their models cannot generalize to unseen data and labeled data are insufficient. Thus, unlike previous surveys in this field, the objectives of this survey are as follows: first, to offer the reader not only a review of the recent advances in scene text detection and recognition, but also the results of extensive experiments with a unified evaluation framework that assesses pre-trained models of the selected methods on challenging cases and applies the same evaluation criteria to all of them. Second, to identify several existing challenges for detecting or recognizing text in wild images, namely in-plane rotation, multi-oriented and multi-resolution text, perspective distortion, illumination reflection, partial occlusion, complex fonts, and special characters. Finally, the paper presents insights into potential research directions in this field that address some of the challenges still confronting scene text detection and recognition techniques.
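
A unified evaluation of text detectors typically reduces to IoU-based box matching. Below is a minimal sketch of precision and recall under greedy one-to-one matching at an IoU threshold of 0.5; the survey's exact protocol may differ, and the boxes are toy values:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedily match each predicted box to the best unmatched ground-truth
    box; a match counts as a true positive iff IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        best = max(((iou(p, g), j) for j, g in enumerate(gts)
                    if j not in matched), default=(0, -1))
        if best[0] >= thr:
            matched.add(best[1])
            tp += 1
    return tp / max(len(preds), 1), tp / max(len(gts), 1)

preds = [(0, 0, 10, 10), (20, 20, 30, 30)]
gts = [(1, 1, 10, 10), (50, 50, 60, 60)]
print(precision_recall(preds, gts))  # (0.5, 0.5)
```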

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems of the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
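
As a concrete instance of category (1), here is a minimal numpy sketch of unstructured magnitude pruning followed by symmetric uniform (fake) quantization; the sparsity level and bit-width are illustrative assumptions rather than values from any surveyed method:

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out roughly the smallest-magnitude fraction of weights
    (unstructured pruning)."""
    k = int(sparsity * W.size)
    thresh = np.partition(np.abs(W), k, axis=None)[k]
    return np.where(np.abs(W) < thresh, 0.0, W)

def quantize_uniform(W, bits=8):
    """Symmetric uniform quantization to signed integers and back
    (fake quantization, as used to simulate low-precision inference)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / qmax
    return np.round(W / scale).astype(np.int8) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)).astype(np.float32)
Wp = magnitude_prune(W, sparsity=0.5)
Wq = quantize_uniform(Wp, bits=8)
print((Wp == 0).mean(), np.abs(Wq - Wp).max())  # sparsity, quantization error
```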

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues in analyzing and designing an edge computing system are also discussed.
