
Modern cars are no longer purely mechanical devices; they embed so much digital technology that they resemble networks of computers. Electronic Control Units (ECUs) need to exchange a large amount of data for the various functions of the car to work, and such data must be secured if we want those functions to work as intended despite malicious activity by attackers. TOUCAN is a new security protocol designed to be secure while remaining both CAN and AUTOSAR compliant. It achieves security in terms of authenticity, integrity and confidentiality, yet without the need to upgrade (the hardware of) existing ECUs or enrich the network with new components. The overhead is tiny, namely a reduction of the size of the Data field of a frame. A prototype implementation exhibits promising performance on an STM32F407 Discovery board.
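To make the Data-field trade-off concrete, here is a minimal sketch of carving a truncated MAC out of a classic 8-byte CAN Data field. The 5-byte payload / 3-byte tag split, the HMAC-SHA-256 primitive, and the counter-based freshness are illustrative assumptions, not necessarily TOUCAN's actual design choices.

```python
# Illustrative sketch: authenticating a classic CAN frame by carving a
# truncated MAC out of the 8-byte Data field. The 5+3 byte split and the
# use of HMAC-SHA-256 are assumptions for illustration; the actual TOUCAN
# protocol may use a different MAC primitive and truncation length.
import hmac
import hashlib

PAYLOAD_LEN = 5   # bytes of application payload kept in the Data field
MAC_LEN = 3       # bytes of truncated MAC appended after the payload

def build_data_field(key: bytes, can_id: int, payload: bytes, counter: int) -> bytes:
    assert len(payload) <= PAYLOAD_LEN
    payload = payload.ljust(PAYLOAD_LEN, b"\x00")
    # Bind the MAC to the CAN identifier and a freshness counter to
    # resist replay and cross-ID reuse.
    msg = can_id.to_bytes(4, "big") + counter.to_bytes(4, "big") + payload
    tag = hmac.new(key, msg, hashlib.sha256).digest()[:MAC_LEN]
    return payload + tag  # still fits in the 8-byte Data field

def verify_data_field(key: bytes, can_id: int, data: bytes, counter: int) -> bool:
    payload, tag = data[:PAYLOAD_LEN], data[PAYLOAD_LEN:]
    expected = build_data_field(key, can_id, payload, counter)[PAYLOAD_LEN:]
    return hmac.compare_digest(tag, expected)
```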

Related Content

Networking: IFIP International Conferences on Networking. Explanation: international networking conference. Publisher: IFIP. SIT:

The connectivity and resource-constrained nature of single-board devices opens the door to cybersecurity concerns affecting Internet of Things (IoT) scenarios. One of the most important issues is the presence of evil IoT twins: malicious devices, with hardware and software specifications identical to authorized ones, that can provoke sensitive information leakage, data poisoning, or privilege escalation in IoT scenarios. Combining behavioral fingerprinting and Machine/Deep Learning (ML/DL) techniques is a promising approach to identifying evil IoT twins by detecting minor performance differences caused by imperfections in manufacturing. However, existing solutions are not suitable for single-board devices because they do not consider their hardware and software limitations, underestimate critical aspects such as fingerprint stability, and do not explore the potential of ML/DL techniques. To address these limitations, this work proposes an ML/DL-oriented methodology that uses behavioral fingerprinting to identify identical single-board devices. The methodology leverages the different built-in components of the system, comparing their internal behavior with each other to detect variations introduced during manufacturing. The validation has been performed in a real environment composed of identical Raspberry Pi 4 Model B and Raspberry Pi 3 Model B+ devices, obtaining a 91.9% average TPR with an XGBoost model and identifying all devices correctly when a 50% threshold is applied in the evaluation process. Finally, a discussion compares the proposed solution with related work and provides important lessons learned and limitations.
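As a rough sketch of the classification stage, the snippet below trains an XGBoost model to tell devices apart from behavioral feature vectors. The features and data are synthetic placeholders; the paper's actual fingerprints come from comparing the internal behavior of built-in hardware components.

```python
# Minimal sketch of ML-based device fingerprinting in the spirit of the
# abstract: an XGBoost classifier trained on behavioral features. The
# feature distributions and data here are synthetic placeholders, not the
# paper's actual fingerprints.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_devices, samples_per_device, n_features = 4, 200, 16

# Each "device" gets a slightly shifted feature distribution, standing in
# for manufacturing variations between physically identical boards.
X = np.vstack([rng.normal(loc=d * 0.05, scale=1.0, size=(samples_per_device, n_features))
               for d in range(n_devices)])
y = np.repeat(np.arange(n_devices), samples_per_device)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="mlogloss")
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```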

Cyber-attacks are among the most damaging threats in today's world, and DDoS (Distributed Denial of Service) is one of them. In a DDoS attack, the attacker makes a network or a machine temporarily or indefinitely unavailable to its intended users, interrupting the services of hosts connected to the network. In simple terms, the attack floods the target machine with unnecessary requests in an attempt to overload the system, crash it, and prevent users from accessing the network or machine. In this research paper, we present the detection of DDoS attacks using neural networks that flag data flows as malicious or legitimate, preventing network performance degradation. We compared and assessed our proposed system against current models in the field; our model achieved 99.7% accuracy.
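A minimal sketch of such a neural-network flow classifier is shown below; the architecture, feature count, and synthetic data are assumptions for illustration, not the paper's exact model.

```python
# Sketch of a feed-forward network that labels flow records as DDoS or
# benign, as described in the abstract. Architecture, feature count, and
# data are illustrative assumptions; the paper's exact model may differ.
import torch
from torch import nn

N_FEATURES = 20  # e.g., packet rate, byte counts, flag ratios (assumed)

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),  # probability that the flow is an attack
)
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in for labeled flow data.
X = torch.randn(1024, N_FEATURES)
y = (X[:, 0] > 0).float().unsqueeze(1)  # placeholder labels

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

accuracy = ((model(X) > 0.5).float() == y).float().mean()
print(f"training accuracy: {accuracy:.3f}")
```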

Millions of consumers depend on smart camera systems to remotely monitor their homes and businesses. However, the architecture and design of popular commercial systems require users to relinquish control of their data to untrusted third parties, such as service providers (e.g., the cloud). Third parties therefore can access (and in some instances have accessed) the video footage without the users' knowledge or consent -- violating the core tenet of user privacy. In this paper, we present CaCTUs, a privacy-preserving smart Camera system Controlled Totally by Users. CaCTUs returns control to the user; the root of trust begins with the user and is maintained through a series of cryptographic protocols designed to support popular features, such as sharing, deleting, and viewing videos live. We show that the system can support live streaming with a latency of 2 s at a frame rate of 10 fps and a resolution of 480p. In so doing, we demonstrate that it is feasible to implement a performant smart-camera system that leverages the convenience of a cloud-based model while retaining the ability to control access to (private) data.
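The heart of the design is that footage is encrypted under keys the user controls, so the cloud only ever relays ciphertext. Below is a minimal sketch of that idea using AES-GCM; the framing, nonce handling, and cipher choice are illustrative assumptions, and the sketch omits CaCTUs' key-agreement, sharing, and deletion protocols.

```python
# Sketch: frames are encrypted under a user-held key before upload, so an
# untrusted cloud only relays ciphertext. AES-GCM and the framing format
# are illustrative choices, not necessarily CaCTUs' exact construction.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

user_key = AESGCM.generate_key(bit_length=128)  # held by the user, not the cloud

def encrypt_frame(key: bytes, frame: bytes, frame_no: int) -> bytes:
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)
    aad = frame_no.to_bytes(8, "big")  # bind the ciphertext to its position
    return nonce + aesgcm.encrypt(nonce, frame, aad)

def decrypt_frame(key: bytes, blob: bytes, frame_no: int) -> bytes:
    aesgcm = AESGCM(key)
    nonce, ct = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ct, frame_no.to_bytes(8, "big"))

blob = encrypt_frame(user_key, b"\x00" * 1024, frame_no=42)  # fake 1 KB frame
assert decrypt_frame(user_key, blob, frame_no=42) == b"\x00" * 1024
```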

During the COVID-19 epidemic in China, millions of workers in tech companies had to start working from home (WFH). The change was sudden and unexpected, and companies were not ready for it. It was also the first time that WFH was experienced on such a large scale. We used the opportunity to describe the effect of WFH at scale over a sustained period of time. As the lockdown was easing, we conducted semi-structured interviews with 12 participants from China working in tech companies. While WFH was at first reported as a pleasant experience with advantages such as a flexible schedule and more time with family, over time this evolved into a rather negative experience in which workers started working all day, every day, and felt a higher workload despite the actual workload being reduced. We discuss these results, how they could apply to other extreme circumstances, and how they could help improve WFH in general.

It is known that a Sleptsov net, which allows a transition to fire multiple times at a single step, runs exponentially faster than a Petri net, opening prospects for its application as a graphical language of concurrent programming. We provide a classification of place-transition nets based on firability rules, considering general definitions together with their strong and weak variants. We introduce and study the strong Sleptsov net, in which a transition fires at a step with its maximal firing multiplicity, and prove that it is Turing-complete. We follow the proof pattern Peterson applied to prove that an inhibitor Petri net is Turing-complete, simulating a Shepherdson and Sturgis register machine. The central construct of our proof is a strong Sleptsov net that checks whether a register value (place marking) equals zero.
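The strong firing rule can be made concrete in a few lines: a transition fires with the maximal multiplicity k allowed by its input places, i.e. k = min over input places p of floor(m(p)/w(p,t)). The toy simulator below is a sketch under that reading of the rule; the net encoding and helper names are my own.

```python
# Toy simulation of the strong Sleptsov net firing rule: a transition
# fires at its maximal multiplicity k, consuming k times its input
# weights and producing k times its output weights in a single step.
def max_multiplicity(marking, pre):
    """Largest k such that k * pre <= marking on every input place."""
    return min(marking[p] // w for p, w in pre.items()) if pre else 0

def fire_strong(marking, pre, post):
    k = max_multiplicity(marking, pre)
    if k == 0:
        return marking, 0
    new = dict(marking)
    for p, w in pre.items():
        new[p] -= k * w
    for p, w in post.items():
        new[p] = new.get(p, 0) + k * w
    return new, k

# Transition t: consumes 2 tokens from p1 and 1 from p2, produces 1 in p3.
m, k = fire_strong({"p1": 7, "p2": 5, "p3": 0}, {"p1": 2, "p2": 1}, {"p3": 1})
print(m, "fired with multiplicity", k)  # {'p1': 1, 'p2': 2, 'p3': 3} 3
```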

We consider the problem of efficiently evaluating a secret polynomial at a given public point, when the polynomial is stored on an untrusted server. The server performs the evaluation and returns a certificate, and the client can efficiently check that the evaluation is correct using some pre-computed keys. Our protocols support two important features: the polynomial itself can be encrypted on the server, and it can be dynamically updated by changing individual coefficients cheaply without redoing the entire setup. As an important application, we show how these new techniques can be used to instantiate a Dynamic Proof of Retrievability (DPoR) for arbitrary outsourced data storage that achieves low server storage size and audit complexity. Our methods rely only on linearly homomorphic encryption and pairings, and preliminary timing results indicate reasonable performance for polynomials with millions of coefficients, as well as efficient DPoR for databases of, for instance, 1 TB in size.
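The algebraic identity that makes such certificates possible is that y = P(x) exactly when P(X) - y is divisible by X - x, so a verifier holding a secret point r can spot-check P(r) - y = q(r)(r - x). The sketch below runs this check entirely in the clear over a prime field; in the paper's protocol the same check happens under linearly homomorphic encryption and pairings, so this plaintext version is only an illustration.

```python
# Plaintext sketch of the divisibility check behind verifiable polynomial
# evaluation. Everything below is in the clear purely for illustration;
# the real protocol hides P and the check under encryption and pairings.
import random

PRIME = 2**61 - 1  # arithmetic over a prime field

def horner(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def quotient_by_linear(coeffs, x):
    """Coefficients of q(X) = (P(X) - P(x)) / (X - x), via synthetic division."""
    q = [0] * (len(coeffs) - 1)
    carry = 0
    for i in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[i] + carry * x) % PRIME
        q[i - 1] = carry
    return q

P = [random.randrange(PRIME) for _ in range(10**4)]  # secret polynomial
x = random.randrange(PRIME)                          # public point
y, q = horner(P, x), quotient_by_linear(P, x)        # server's answer + certificate
r = random.randrange(PRIME)                          # client's secret check point
assert (horner(P, r) - y) % PRIME == horner(q, r) * (r - x) % PRIME
```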

Designing and implementing secure software is inarguably more important than ever. However, despite years of research into privilege-separating programs, it remains difficult to actually do so, and such efforts can take years of labor-intensive engineering to reach fruition. At the same time, new intra-process isolation primitives make strong data isolation and privilege separation more attractive from a performance perspective. Yet, substituting intra-process security boundaries for time-tested process boundaries opens the door to subtle but devastating privilege leaks. In this work, we present Polytope, a language extension to C++ that aims to make efficient privilege separation accessible to a wider audience of developers. Polytope defines a policy language encoded as C++11 attributes that separate code and data into distinct program partitions. A modified Clang front-end embeds source-level policy as metadata nodes in the LLVM IR. An LLVM pass interprets the embedded policy and instruments the IR with code to enforce the source-level policy using Intel MPK. A run-time support library manages partitions, protection keys, dynamic memory operations, and indirect call target privileges. An evaluation demonstrates that Polytope provides protection equivalent to prior systems with a low annotation burden and comparable performance overhead. Polytope also renders privilege leaks that contradict the intended policy impossible to express.
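Polytope's policy lives in C++11 attributes and is enforced by compiler instrumentation and Intel MPK, which cannot be reproduced here; as a loose analogy only, the Python toy below mimics the shape of annotation-driven partitioning, with every name invented for illustration.

```python
# Language analogy only: this toy mimics the *shape* of annotation-driven
# privilege separation (code and data tagged with partitions, cross-
# partition access denied). It does not reflect Polytope's C++ syntax or
# its MPK-based enforcement; all names are invented.
import functools

_current_partition = "main"

class Partitioned:
    """A value that may only be touched from its home partition."""
    def __init__(self, partition, value):
        self._partition, self._value = partition, value
    def get(self):
        if _current_partition != self._partition:
            raise PermissionError(f"partition '{_current_partition}' "
                                  f"cannot read '{self._partition}' data")
        return self._value

def partition(name):
    """Run the decorated function inside the named partition."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            global _current_partition
            prev, _current_partition = _current_partition, name
            try:
                return fn(*args, **kwargs)
            finally:
                _current_partition = prev
        return wrapper
    return deco

signing_key = Partitioned("crypto", b"secret-key-material")

@partition("crypto")
def sign(msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, signing_key.get()))  # toy "signature"

print(sign(b"hello"))      # allowed: runs inside the "crypto" partition
try:
    signing_key.get()      # denied: caller is in "main"
except PermissionError as e:
    print(e)
```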

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to follow an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skill for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
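Read schematically, the abstract's "skill-acquisition efficiency" can be condensed as below; this is a simplified rendering of the quantities the abstract names (scope, generalization difficulty, priors, experience), not the paper's exact Algorithmic-Information-Theoretic definition.

```latex
% Schematic condensation only, not the paper's formal definition:
% skill counts for more when the task is hard to generalize to, and for
% less when it was "bought" with priors or experience.
\[
  \text{Intelligence} \;\propto\;
  \operatorname*{avg}_{T \,\in\, \text{scope}}
  \frac{(\text{skill attained on } T)\cdot(\text{generalization difficulty of } T)}
       {\text{priors} + \text{experience}}
\]
```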

In the last few years, Artificial Intelligence (AI) has achieved notable momentum that may deliver the best of expectations across many application sectors. For this to occur, the entire community stands in front of the barrier of explainability, an inherent problem of AI techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that was not present in the previous hype of AI. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including a second taxonomy built for Deep Learning methods. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.

Generative adversarial networks (GANs) have shown promise for many computer vision problems due to their powerful ability to enhance data for training and testing. In this paper, we leverage GANs and propose a new architecture with a cascaded Single Shot Detector (SSD) for pedestrian detection at a distance, which remains a challenge due to the varied sizes of pedestrians in videos captured at a distance. To overcome the low-resolution issues in pedestrian detection at a distance, a DCGAN is first employed to improve the resolution and reconstruct more discriminative features for an SSD to detect objects in images or videos. A crucial advantage of our method is that it learns a multi-scale metric to distinguish multiple objects at different distances within one image, while the DCGAN serves as an encoder-decoder platform to generate parts of an image that contain better discriminative information. To measure the effectiveness of our proposed method, experiments were carried out on the Canadian Institute for Advanced Research (CIFAR) dataset, and the results demonstrate that the proposed architecture achieves a much better detection rate, particularly on vehicles and pedestrians at a distance, making it highly suitable for smart-city applications that need to discover key objects or pedestrians at a distance.
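A pipeline sketch of the idea follows: a generator upsamples a low-resolution crop before an SSD searches it for small, distant objects. The untrained 4x generator is a stand-in for a trained DCGAN, and torchvision's off-the-shelf SSD300 substitutes for the paper's cascaded SSD.

```python
# Pipeline sketch: super-resolve a low-resolution crop, then run an SSD
# on the enhanced image. The generator here is untrained (a real DCGAN
# generator would be trained adversarially), and torchvision's SSD300
# stands in for the paper's cascaded SSD.
import torch
from torch import nn
from torchvision.models.detection import ssd300_vgg16

# Stand-in super-resolution generator: 4x upsampling via two
# stride-2 transposed convolutions.
generator = nn.Sequential(
    nn.ConvTranspose2d(3, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1), nn.Tanh(),
)

detector = ssd300_vgg16(weights="DEFAULT").eval()

low_res = torch.rand(1, 3, 75, 75)           # small, distant crop
with torch.no_grad():
    enhanced = (generator(low_res) + 1) / 2  # map Tanh output to [0, 1]
    detections = detector([enhanced.squeeze(0)])  # boxes, labels, scores
print(detections[0]["boxes"].shape)
```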
