
The aviation industry, as well as the industries that benefit from and are linked to it, is ripe for innovation in the form of Big Data analytics. The number of available big data technologies is constantly growing, while the existing ones are rapidly evolving and gaining new features. However, the Big Data era poses a crucial challenge: how to effectively handle information security while managing massive, rapidly evolving data from heterogeneous data sources. Although multiple technologies have emerged, a balance must be found between security requirements, privacy obligations, system performance, and rapid dynamic analysis on large datasets. This paper introduces the Secure Experimentation Sandbox of the ICARUS platform. ICARUS is a big data-enabled platform that aspires to become a 'one-stop shop' for an aviation data and intelligence marketplace, providing a trusted and secure 'sandboxed' analytics workspace that allows the exploration, integration and deep analysis of original and derivative data in a trusted and fair manner. To this end, a Secure Experimentation Sandbox has been designed and integrated into the ICARUS platform offering: a sophisticated environment that guarantees the safety and confidentiality of data, allowing any interested party to utilise the platform to conduct analytical experiments under closed-lab conditions.

Related content

Integration, the VLSI Journal. Publisher: Elsevier.

There is a growing need for authentication methodology in virtual reality applications. Current systems assume that the immersive experience technology is a collection of peripheral devices connected to a personal computer or mobile device, and hence rely entirely on the computing device and its traditional authentication mechanisms to handle authentication and authorization decisions. Using the virtual reality controllers and headset poses a different set of challenges: the user is subject to unauthorized observation without noticing it, since the headset completely covers the field of vision in order to provide an immersive experience. As the demand for virtual reality experiences in the commercial world increases, alternative mechanisms for secure authentication are needed. In this paper, we analyze several proposed authentication systems and conclude that a multidimensional approach to authentication is needed to address the granular authentication and authorization needs of commercial virtual reality applications.

Federated Learning (FL) preserves privacy by allowing model training at edge devices without sending data from the edge to a centralized server; it thus distributes the implementation of machine learning. A variant of FL well suited to the Internet of Things (IoT) is Collaborative Federated Learning (CFL), which does not require an edge device to have a direct link to the model aggregator: devices can reach the central aggregator via other devices acting as relays. Although FL and CFL protect the privacy of edge devices, they raise security challenges for the centralized server that performs model aggregation, which is prone to malfunction, backdoor attacks, model corruption, adversarial attacks and external attacks. Moreover, while FL and CFL require no raw data exchange between edge devices and the server, model parameters are still sent from the model aggregator (global model) to the edge devices (local models), and this exchange remains exposed to cyber-attacks. These security and privacy concerns can potentially be addressed by blockchain technology: a decentralized, consensus-based chain in which devices share consensus ledgers with increased reliability and security, significantly reducing cyberattacks on information exchange. In this work, we investigate the efficacy of blockchain-based decentralized exchange of model parameters and related information among edge devices and from the centralized server to edge devices. We also conduct a feasibility analysis of blockchain-based CFL models for different application scenarios such as the Internet of Vehicles and the Internet of Things. The proposed study aims to improve security, reliability and privacy preservation through blockchain-powered CFL.
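
As a rough illustration of the integrity guarantee a ledger could add to CFL, the Python sketch below hash-chains a record of every relayed model update before a plain federated-averaging step. The Block, Ledger and fedavg-style names, the single-machine setup and the SHA-256 chaining are illustrative assumptions for this sketch, not the implementation studied in the paper.

import hashlib
import json

import numpy as np


class Block:
    """One ledger entry committing to a single model update."""
    def __init__(self, index, payload_hash, prev_hash):
        self.index = index
        self.payload_hash = payload_hash
        self.prev_hash = prev_hash
        # The block hash commits to the payload and to the predecessor.
        self.hash = hashlib.sha256(
            json.dumps([index, payload_hash, prev_hash]).encode()
        ).hexdigest()


class Ledger:
    """Hash-chained log of model updates shared among devices."""
    def __init__(self):
        self.chain = [Block(0, "genesis", "0")]

    def record(self, update):
        payload_hash = hashlib.sha256(update.tobytes()).hexdigest()
        prev = self.chain[-1]
        self.chain.append(Block(prev.index + 1, payload_hash, prev.hash))

    def verify(self):
        # Re-derive every hash; tampering with any update breaks the chain.
        for prev, blk in zip(self.chain, self.chain[1:]):
            expected = hashlib.sha256(
                json.dumps([blk.index, blk.payload_hash, prev.hash]).encode()
            ).hexdigest()
            if blk.prev_hash != prev.hash or blk.hash != expected:
                return False
        return True


if __name__ == "__main__":
    ledger = Ledger()
    rng = np.random.default_rng(0)
    updates = [rng.normal(size=4) for _ in range(3)]  # three edge devices
    for u in updates:
        ledger.record(u)       # each relayed update is committed to the chain
    assert ledger.verify()     # aggregator checks integrity before averaging
    print(np.mean(updates, axis=0))  # plain federated averaging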

Artificial intelligence (AI), especially deep learning, requires vast amounts of data for training, testing, and validation. Collecting these data and the corresponding annotations requires the implementation of imaging biobanks that provide access to them in a standardized way. This calls for careful design and implementation based on current standards and guidelines and in compliance with current legal restrictions. However, the realization of proper imaging data collections is not sufficient to train, validate and deploy AI, as resource demands are high and require a careful hybrid implementation of AI pipelines both on-premises and in the cloud. This chapter aims to help the reader make technical decisions about the AI environment by providing a technical background on the different concepts and implementation aspects involved in data storage, cloud usage, and AI pipelines.

In parallel with the rapid adoption of Artificial Intelligence (AI) empowered by advances in AI research, awareness of and concern about data privacy have been growing. Recent significant developments in the data regulation landscape have prompted a seismic shift of interest towards privacy-preserving AI. This has contributed to the popularity of Federated Learning (FL), the leading paradigm for training machine learning models on data silos in a privacy-preserving manner. In this survey, we explore the domain of Personalized FL (PFL) to address the fundamental challenges FL faces on heterogeneous data, a characteristic inherent in all real-world datasets. We analyze the key motivations for PFL and present a unique taxonomy of PFL techniques categorized according to the key challenges and personalization strategies in PFL. We highlight their key ideas, challenges and opportunities, and envision promising future research trajectories towards new PFL architectural designs, realistic PFL benchmarking, and trustworthy PFL approaches.

The provision of communication services via portable and mobile devices, such as aerial base stations, is a crucial concept to be realized in 5G/6G networks. Conventionally, IoT/edge devices must transmit their data directly to the base station to train models using machine learning techniques, and this data transmission introduces privacy issues that can lead to security concerns and monetary losses. Federated learning was recently proposed to partially solve these privacy issues via model sharing with the base station. However, the centralized nature of federated learning allows only devices within the vicinity of the base station to share trained models, and the long-range communication compels devices to increase transmission power, raising energy efficiency concerns. In this work, we propose a distributed federated learning (DBFL) framework that overcomes the connectivity and energy-efficiency issues for distant devices. The DBFL framework is compatible with the mobile edge computing architecture and connects devices in a distributed manner using clustering protocols. Experimental results show that the framework increases classification performance by 7.4% compared to conventional federated learning while reducing energy consumption.
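
A minimal sketch of the two-tier aggregation that such a clustered setup implies follows; the cluster sizes, the size-weighted averaging rule and the absence of a real clustering protocol are simplifying assumptions rather than the DBFL framework itself.

import numpy as np


def cluster_average(models):
    # Intra-cluster averaging performed at a nearby cluster head,
    # so distant devices never need a long-range link.
    return np.mean(models, axis=0)


def global_average(head_models, cluster_sizes):
    # Size-weighted averaging across cluster heads only.
    weights = np.asarray(cluster_sizes, dtype=float)
    weights /= weights.sum()
    return np.average(head_models, axis=0, weights=weights)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two clusters of edge devices holding local model parameters.
    clusters = [[rng.normal(size=3) for _ in range(4)],
                [rng.normal(size=3) for _ in range(2)]]
    heads = [cluster_average(c) for c in clusters]
    print(global_average(heads, [len(c) for c in clusters]))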

Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades the user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce a theoretical framework for the important aspects of AI trustworthiness: robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey the leading approaches to these aspects in industry. To unify the currently fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, from data acquisition through model development and deployment to continuous monitoring and governance. Within this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, arguing for a paradigm shift towards comprehensively trustworthy AI systems.

Federated Learning (FL) is a concept first introduced by Google in 2016, in which multiple devices collaboratively learn a machine learning model without sharing their private data, under the supervision of a central server. This offers ample opportunities in critical domains such as healthcare and finance, where it is risky to share private user information with other organisations or devices. While FL appears to be a promising Machine Learning (ML) technique for keeping local data private, it is vulnerable to attacks just like other ML models. Given the growing interest in the FL domain, this report discusses the opportunities and challenges in federated learning.
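
To make the core mechanism concrete, here is a minimal federated-averaging sketch in Python: only model weights travel between clients and the server, never the raw data. The linear model, synthetic datasets and hyperparameters are illustrative assumptions, not the production algorithm.

import numpy as np


def local_train(w, X, y, lr=0.1, epochs=5):
    # A few gradient-descent steps on one device's private data.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w


def server_round(w_global, devices):
    # One FL round: broadcast, local training, weighted averaging.
    local_ws, sizes = [], []
    for X, y in devices:
        local_ws.append(local_train(w_global.copy(), X, y))
        sizes.append(len(y))
    weights = np.asarray(sizes, dtype=float) / sum(sizes)
    return np.average(local_ws, axis=0, weights=weights)


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    true_w = np.array([1.5, -2.0])
    devices = []
    for _ in range(3):  # three clients, each with a private dataset
        X = rng.normal(size=(20, 2))
        devices.append((X, X @ true_w + 0.01 * rng.normal(size=20)))
    w = np.zeros(2)
    for _ in range(30):
        w = server_round(w, devices)
    print(w)  # approaches true_w although no device shared its data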

As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks on robustness and the corresponding defenses, and 3) inference attacks on privacy and the corresponding defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques and fundamental assumptions adopted by the various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
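
As a toy illustration of the robustness theme, the snippet below pairs a model-poisoning attack against plain averaging with a standard robust-aggregation defence (coordinate-wise median); both are techniques of the kind the survey covers, and all numbers are invented for demonstration.

import numpy as np

rng = np.random.default_rng(3)
# Eight honest updates near the true value, two malicious outliers.
honest = [np.array([1.0, 1.0]) + 0.05 * rng.normal(size=2) for _ in range(8)]
poisoned = [np.array([-50.0, 50.0]) for _ in range(2)]
updates = honest + poisoned

print("mean  :", np.mean(updates, axis=0))    # plain FedAvg: dragged far off
print("median:", np.median(updates, axis=0))  # robust: stays near (1, 1)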

In recent years, with the rise of Cloud Computing (CC), many companies providing services in the cloud have added a new series of services to their catalogues, such as data mining (DM) and data processing, taking advantage of the vast computing resources available to them. Several service definition proposals have been put forward to describe CC services in a comprehensive way. Bearing in mind that each provider has its own definition of the logic of its services, and specifically of its DM services, the ability to describe services in a flexible way across providers is fundamental to maintaining the usability and portability of this type of CC service. Using semantic technologies based on Linked Data (LD) for the definition of services allows DM services to be designed and modelled with a high degree of interoperability. This article presents a schema for the definition of DM services on CC that covers all key aspects of a CC service, such as prices, interfaces, Service Level Agreements, instances and experimentation workflows, among others. The proposal is based on LD and reuses other schemata to obtain a better definition of the service. To validate the schema, a series of DM services have been created in which well-known algorithms such as Random Forest or KMeans are modelled as services.
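
A hedged sketch of what such an LD-based description of a DM service might look like, written with the rdflib library, is shown below; the ex: vocabulary and its property names (hasAlgorithm, hasPrice, hasInterface, hasSLA) are invented for illustration, whereas the actual schema reuses existing vocabularies and covers further aspects such as instances and experimentation workflows.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/dm-services#")  # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

svc = EX.RandomForestService
g.add((svc, RDF.type, EX.DataMiningService))
g.add((svc, EX.hasAlgorithm, Literal("Random Forest")))
g.add((svc, EX.hasPrice, Literal("0.10 USD per 1000 predictions")))
g.add((svc, EX.hasInterface, Literal("REST/JSON")))
g.add((svc, EX.hasSLA, Literal("99.9% availability")))

print(g.serialize(format="turtle"))  # machine-readable, provider-neutral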

Recommender systems rely on large datasets of historical data and entail serious privacy risks. A server offering recommendations as a service to a client might leak more information than necessary about its recommendation model and training dataset; at the same time, the disclosure of the client's preferences to the server is also a matter of concern. Providing recommendations while preserving privacy in both senses is a difficult task, which often conflicts with the utility of the system in terms of recommendation accuracy and efficiency. General-purpose cryptographic primitives such as secure multi-party computation and homomorphic encryption offer strong security guarantees, but combined with state-of-the-art recommender systems they yield far-from-practical solutions. We precisely define the above notion of security and propose CryptoRec, a novel recommendations-as-a-service protocol built around a crypto-friendly recommender system. This model possesses two interesting properties: (1) it models user-item interactions in a user-free latent feature space, capturing personalized user features by an aggregation of item features; this means that a server with a pre-trained model can provide recommendations for a client without having to re-train the model with the client's preferences, although re-training still improves accuracy; (2) it uses only addition and multiplication operations, making the model straightforwardly compatible with homomorphic encryption schemes.
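
A simplified numpy sketch of the user-free idea follows: the user's latent vector is an aggregation of the feature vectors of the items they rated, and predictions are inner products, so inference needs only additions and multiplications and can in principle run over homomorphically encrypted inputs. The matrices Q and A and the rating-weighted sum are simplifying assumptions, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(4)
n_items, k = 6, 3
Q = rng.normal(size=(n_items, k))  # item features used to build user features
A = rng.normal(size=(n_items, k))  # item features used for scoring

# Client-side inputs: which items the user rated, and the ratings.
rated = np.array([1, 0, 1, 1, 0, 0], dtype=float)
ratings = np.array([4.0, 0.0, 3.0, 5.0, 0.0, 0.0])

# User feature vector as an aggregation of rated items' features;
# only additions and multiplications are involved (HE-friendly).
p_u = (ratings * rated) @ Q

scores = A @ p_u  # predicted affinity for every item, again add/mul only
print(scores)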
