
The rapid development of the Internet of Medical Things (IoMT) creates opportunities for real-time health monitoring using various data types such as electroencephalography (EEG) and electrocardiography (ECG). However, security issues have significantly impeded the implementation of e-healthcare systems. Three important challenges for privacy-preserving systems need to be addressed: accurate matching, privacy enhancement without compromising security, and computational efficiency. It is essential to guarantee prediction accuracy, since disease diagnosis directly affects health and life. In this paper, we propose an efficient disease prediction scheme that guarantees security against malicious clients and an honest-but-curious server using a matrix encryption technique. A biomedical signal provided by the client is diagnosed such that the server learns neither the signal nor the final result of the diagnosis, while the client learns nothing about the server's medical data. A thorough security analysis illustrates the disclosure resilience of the proposed scheme, and the encryption algorithm satisfies local differential privacy. The client's device decrypts the result locally, and performing prediction on encrypted data does not degrade prediction performance. The proposed scheme is efficient enough to support real-time health monitoring.
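
The overall workflow can be illustrated with a minimal sketch of matrix masking applied to a linear model. All names, dimensions, and the one-time setup that produces the server-side matrix `W_enc` are hypothetical assumptions for illustration, not the paper's actual construction; a real instantiation would also inject calibrated noise into the encrypted signal to meet the local differential privacy claim.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 3                              # feature dimension, number of disease classes (hypothetical)

# --- one-time setup on the client's device (assumed trusted setup) ---
M = rng.standard_normal((d, d))          # secret invertible input mask
P = rng.standard_normal((m, m))          # secret invertible output mask
M_inv, P_inv = np.linalg.inv(M), np.linalg.inv(P)

# The server's linear model W is assumed to have been transformed once, during
# setup, into W_enc = P @ W @ M_inv, so the server never sees the raw signal
# and the client never sees W.
W = rng.standard_normal((m, d))          # stands in for the server's medical model
W_enc = P @ W @ M_inv

# --- client: encrypt the biomedical signal before upload ---
x = rng.standard_normal(d)               # e.g. an EEG/ECG feature vector
x_enc = M @ x                            # matrix encryption of the signal

# --- server: predicts on encrypted data only ---
y_enc = W_enc @ x_enc                    # server learns neither x nor W @ x

# --- client: decrypt the diagnosis locally ---
y = P_inv @ y_enc
assert np.allclose(y, W @ x)             # prediction is preserved exactly for linear models
```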

Related Content

Although robust learning and local differential privacy are both widely studied fields of research, combining the two settings is only starting to be explored. We consider the problem of estimating a discrete distribution in total variation from $n$ contaminated data batches under a local differential privacy constraint. A fraction $1-\epsilon$ of the batches contain $k$ i.i.d. samples drawn from a discrete distribution $p$ over $d$ elements. To protect the users' privacy, each of the samples is privatized using an $\alpha$-locally differentially private mechanism. The remaining $\epsilon n$ batches are adversarially contaminated. The minimax rate of estimation under contamination alone, with no privacy, is known to be $\epsilon/\sqrt{k}+\sqrt{d/kn}$, up to a $\sqrt{\log(1/\epsilon)}$ factor. Under the privacy constraint alone, the minimax rate of estimation is $\sqrt{d^2/\alpha^2 kn}$. We show that combining the two constraints leads to a minimax estimation rate of $\epsilon\sqrt{d/\alpha^2 k}+\sqrt{d^2/\alpha^2 kn}$, up to a $\sqrt{\log(1/\epsilon)}$ factor, which is larger than the sum of the two separate rates. We provide a polynomial-time algorithm achieving this bound, as well as a matching information-theoretic lower bound.
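
For concreteness, a standard $\alpha$-locally differentially private mechanism that fits this setting is $k$-ary randomized response. The sketch below shows how each sample in a clean batch would be privatized before leaving the user's device; the mechanism choice and parameters are illustrative and not necessarily the mechanism analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def k_ary_randomized_response(sample, d, alpha):
    """Privatize one sample from {0,...,d-1} with alpha-local differential privacy.

    Keep the true value with probability e^alpha / (e^alpha + d - 1); otherwise
    report one of the other d-1 values uniformly at random.
    """
    p_keep = np.exp(alpha) / (np.exp(alpha) + d - 1)
    if rng.random() < p_keep:
        return sample
    others = [v for v in range(d) if v != sample]
    return rng.choice(others)

# A clean batch of k i.i.d. samples from p over d elements, privatized per sample.
d, k, alpha = 5, 100, 1.0
p = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
batch = rng.choice(d, size=k, p=p)
private_batch = np.array([k_ary_randomized_response(s, d, alpha) for s in batch])
```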

The Internet of Things (IoT) is one of the emerging technologies that has grabbed the attention of researchers from academia and industry. The idea behind the Internet of Things is the interconnection of internet-enabled things or devices with each other and with humans to achieve common goals. In the near future, the IoT is expected to be seamlessly integrated into our environment, and humans will become heavily dependent on this technology for comfort and an easier lifestyle. Any security compromise of the system will directly affect human life. Therefore, the security and privacy of this technology are issues of foremost importance. In this paper, we present a thorough study of security problems in the IoT and classify possible cyberattacks on each layer of the IoT architecture. We also discuss challenges to traditional security solutions such as cryptographic solutions, authentication mechanisms, and key management in the IoT. Device authentication and access control is an essential area of IoT security that has not been surveyed so far. We therefore bring together state-of-the-art device authentication and access control techniques in a single paper.

Over the past few decades, interest in algorithms for face recognition has been growing rapidly, and these algorithms have even surpassed human-level performance. Despite these accomplishments, their practical integration with real-time, performance-hungry systems is often not feasible due to high computational costs. In this paper, we therefore explore recent, fast, and accurate face recognition systems that can be easily integrated with real-time devices, and we test the algorithms on robot hardware platforms to confirm their robustness and speed.
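
Most of the systems in this space share a common matching step: compare a probe face embedding against a gallery of known identities. The sketch below is a generic illustration of that step only; the function names, embedding size, and threshold are assumptions, not taken from any specific system in the study.

```python
import numpy as np

def cosine_match(probe_embedding, gallery, threshold=0.5):
    """Match a probe face embedding against a gallery of known identities by cosine similarity."""
    best_name, best_score = None, -1.0
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    for name, emb in gallery.items():
        score = float(probe @ (emb / np.linalg.norm(emb)))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy usage with random 128-d embeddings standing in for a real detector/encoder.
rng = np.random.default_rng(0)
gallery = {"alice": rng.standard_normal(128), "bob": rng.standard_normal(128)}
print(cosine_match(gallery["alice"] + 0.1 * rng.standard_normal(128), gallery))
```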

When IP-packet processing is unconditionally carried out on behalf of an operating system kernel thread, processing systems can experience overload in high incoming traffic scenarios. This is especially worrying for embedded real-time devices controlling their physical environment in industrial IoT scenarios and automotive systems. We propose an embedded real-time aware IP stack adaptation with an early demultiplexing scheme for incoming packets and subsequent per-flow aperiodic scheduling. By instrumenting existing embedded IP stacks, rigid prioritization with minimal latency is deployed without the need for further task resources. Simple mitigation techniques can be applied to individual flows, causing hardly measurable overhead while at the same time protecting the system from overload conditions. Our IP stack adaptation is able to reduce the low-priority packet processing time by over 86% compared to an unmodified stack. The network subsystem can thereby remain active at a 7x higher general traffic load before disabling the receive IRQ as a last resort to assure deadlines.
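
The scheduling idea can be illustrated with a small Python simulation of early demultiplexing, strict per-flow prioritization, and a simple overload mitigation (a packet budget for low-priority flows). This is an illustration of the policy only; the flow table, priorities, and budgets are hypothetical, and the paper's mechanism is deployed inside an embedded IP stack, not in Python.

```python
import heapq

FLOW_TABLE = {
    ("10.0.0.2", 5000): {"priority": 0, "budget": None},  # control flow, highest priority
    ("10.0.0.9", 9000): {"priority": 7, "budget": 2},     # bulk flow, budget-limited
}
DEFAULT_FLOW = {"priority": 7, "budget": 2}

def process(packets):
    """Demultiplex each packet to its flow early, then process strictly by priority."""
    queue = []
    for seq, pkt in enumerate(packets):
        key = (pkt["src"], pkt["dport"])
        flow = FLOW_TABLE.get(key, DEFAULT_FLOW)
        heapq.heappush(queue, (flow["priority"], seq, key, pkt, flow["budget"]))
    spent, processed = {}, []
    while queue:
        _, _, key, pkt, budget = heapq.heappop(queue)
        if budget is not None and spent.get(key, 0) >= budget:
            continue                      # mitigation: shed excess low-priority packets
        spent[key] = spent.get(key, 0) + 1
        processed.append(pkt)
    return processed

burst = [{"src": "10.0.0.9", "dport": 9000, "id": i} for i in range(5)]
burst.append({"src": "10.0.0.2", "dport": 5000, "id": 99})
print([p["id"] for p in process(burst)])  # control packet first, bulk flow capped at its budget
```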

Modern web services routinely provide REST APIs for clients to access their functionality. These APIs present unique challenges and opportunities for automated testing, driving the recent development of many techniques and tools that generate test cases for API endpoints using various strategies. Understanding how these techniques compare to one another is difficult, as they have been evaluated on different benchmarks and using different metrics. To fill this gap, we performed an empirical study aimed at understanding the landscape of automated testing of REST APIs and guiding future research in this area. We first identified, through a systematic selection process, a set of 10 state-of-the-art REST API testing tools that included tools developed by both researchers and practitioners. We then applied these tools to a benchmark of 20 real-world open-source RESTful services and analyzed their performance in terms of code coverage achieved and unique failures triggered. This analysis allowed us to identify strengths, weaknesses, and limitations of the tools considered and of their underlying strategies, as well as implications of our findings for future research in this area.

The concept of federated learning (FL) was first proposed by Google in 2016. Since then, FL has been widely studied for its feasibility in various application fields due to its potential to make full use of data without compromising privacy. However, limited by the capacity of wireless data transmission, the deployment of federated learning on mobile devices has made slow progress in practice. The development and commercialization of 5th generation (5G) mobile networks has shed some light on this problem. In this paper, we analyze the challenges of existing federated learning schemes for mobile devices and propose a novel cross-device federated learning framework that utilizes anonymous communication technology and ring signatures to protect the privacy of participants while reducing the computation overhead of mobile devices participating in FL. In addition, our scheme implements a contribution-based incentive mechanism to encourage mobile users to participate in FL. We also give a case study of autonomous driving. Finally, we present a performance evaluation of the proposed scheme and discuss some open issues in federated learning.
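
A minimal sketch of one federated round with a contribution-based incentive is shown below. The anonymous channel and ring signatures are abstracted away, and using the local sample count as the contribution metric with a proportionally split reward pool is a hypothetical stand-in for the paper's incentive mechanism.

```python
import numpy as np

def federated_round(global_model, client_deltas, rewards_pool=100.0):
    """One FL round: weighted averaging of updates plus proportional rewards.

    client_deltas is a list of (local update vector, local sample count) pairs;
    updates are assumed to arrive over an anonymous channel with a ring
    signature proving group membership without revealing identity.
    """
    total = sum(n for _, n in client_deltas)
    update = sum(delta * (n / total) for delta, n in client_deltas)
    rewards = [rewards_pool * n / total for _, n in client_deltas]
    return global_model + update, rewards

global_model = np.zeros(4)
client_deltas = [(np.array([0.4, 0.0, 0.1, 0.0]), 50),    # (local update, #samples)
                 (np.array([0.0, 0.2, 0.1, 0.0]), 150)]
new_model, rewards = federated_round(global_model, client_deltas)
print(new_model, rewards)   # rewards split 25.0 / 75.0 by contribution
```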

Medical data is often highly sensitive in terms of data privacy and security. Federated learning, a type of machine learning technique, has begun to be used to improve the privacy and security of medical data. In federated learning, the training data is distributed across multiple machines, and the learning process is performed in a collaborative manner. There are several privacy attacks on deep learning (DL) models that allow attackers to extract sensitive information. Therefore, the DL model itself should be protected from such adversarial attacks, especially in applications using medical data. One solution to this problem is homomorphic encryption-based model protection against an adversarial collaborator. This paper proposes a privacy-preserving federated learning algorithm for medical data using homomorphic encryption. The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from adversaries. In this study, the proposed algorithm is evaluated on a real-world medical dataset in terms of model performance.
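
The core idea, aggregating encrypted model updates so that the aggregator never sees plaintext parameters, can be sketched with an additively homomorphic scheme. The example below assumes the `phe` (python-paillier) package; the paper's actual scheme and its secure multi-party computation protocol may differ, and in practice decryption of the aggregate would be restricted (e.g. via threshold decryption).

```python
import numpy as np
from phe import paillier  # python-paillier; assumed here for illustration

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Two hospitals' local model updates (toy 3-parameter models).
update_a = np.array([0.12, -0.05, 0.30])
update_b = np.array([0.08, 0.01, 0.22])

# Each hospital encrypts its update before sending it to the aggregator.
enc_a = [public_key.encrypt(float(x)) for x in update_a]
enc_b = [public_key.encrypt(float(x)) for x in update_b]

# The aggregator sums ciphertexts without ever seeing plaintext updates.
enc_sum = [a + b for a, b in zip(enc_a, enc_b)]

# Only the key holder decrypts, and only the aggregate, then averages it.
avg_update = np.array([private_key.decrypt(c) for c in enc_sum]) / 2
print(avg_update)  # approximately [0.10, -0.02, 0.26]
```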

The outbreak of the COVID-19 pandemic has deeply influenced the lifestyle of the general public and the healthcare system of society. The Internet of Medical Things (IoMT) deployed in hospitals, clinics, and healthcare centers is a promising approach to addressing the emerging challenges caused by epidemics of infectious diseases like COVID-19: it can save diagnosis time and improve the efficiency of medical resources, although the privacy and security concerns of IoMT stall its wide adoption. In order to tackle the privacy, security, and interoperability issues of IoMT, we propose a framework of blockchain-enabled IoMT by introducing blockchain to incumbent IoMT systems. In this paper, we review the benefits of this architecture and illustrate the opportunities brought by blockchain-enabled IoMT. We also provide use cases of blockchain-enabled IoMT in fighting the COVID-19 pandemic, including the prevention of infectious diseases, location sharing and contact tracing, and the supply chain of injectable medicines. We also outline future work in this area.

With the rapid growth of new technological paradigms such as the Internet of Things (IoT), new doors are opening for many applications that better human life in the modern era. One recent application of the IoT is the Internet of Vehicles (IoV), which has led to unprecedented growth of connected vehicles on the roads. The IoV is gaining attention because it enhances traffic safety and provides route information. One of the most important requirements of the IoV is preserving security and privacy under strict latency constraints. Moreover, vehicles must be authenticated frequently and quickly, considering the limited bandwidth, high mobility, and density of vehicles. To address these security vulnerabilities and protect data integrity, an ultralight authentication scheme is proposed in this article. A Physical Unclonable Function (PUF) and the XOR function are used to authenticate both the server and the vehicle in a two-message flow, which makes the proposed scheme ultralight and requires little computation. The proposed Easy-Sec authenticates vehicles with low latency while resisting known security threats. Furthermore, Easy-Sec needs low overhead, so it does not increase the burden on the IoV network. The computational overhead (around 4 ms) and communication overhead (32 bytes) show its feasibility and efficiency, and its security features are demonstrated using formal analysis, Burrows-Abadi-Needham (BAN) logic, and informal analysis, showing the robustness of the proposed mechanisms against security threats.
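
The flavor of a PUF- and XOR-based two-message authentication can be sketched as follows. The PUF is modeled here as a keyed pseudorandom function (HMAC), since a real PUF is a hardware primitive, and the message format is a hypothetical illustration rather than the exact Easy-Sec protocol.

```python
import hmac, hashlib, os

def puf_response(device_secret, challenge):
    """Stand-in for a hardware PUF: deterministic per device, unpredictable without it."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

device_secret = os.urandom(32)               # the physical PUF itself never leaves the vehicle
server_crp = {}                              # server-side challenge-response pairs (enrollment)

# Enrollment: the server stores one challenge-response pair per vehicle.
c1 = os.urandom(16)
server_crp["vehicle42"] = (c1, puf_response(device_secret, c1))

# Message 1 (server -> vehicle): the challenge plus a nonce masked (XOR) with the expected response.
challenge, expected = server_crp["vehicle42"]
nonce = os.urandom(16)
msg1 = (challenge, xor(nonce, expected))

# Message 2 (vehicle -> server): the vehicle recomputes the response with its PUF,
# unmasks the nonce, and proves knowledge of it.
r = puf_response(device_secret, msg1[0])
recovered_nonce = xor(msg1[1], r)
msg2 = hmac.new(r, recovered_nonce, hashlib.sha256).digest()

# Server check: the vehicle is authentic only if it could unmask the nonce.
assert hmac.compare_digest(msg2, hmac.new(expected, nonce, hashlib.sha256).digest())
```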

As data are increasingly being stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
