
We propose how a developing country like Sri Lanka can benefit from privacy-enabled machine learning techniques such as Federated Learning to detect road conditions using crowd-sourced data collection, and we propose implementing a Digital Twin of the national road system in Sri Lanka. Developing countries such as Sri Lanka are far behind developed countries in implementing smart road systems and smart cities. The work discussed in this paper aligns with UN Sustainable Development Goal (SDG) 9: "Build Resilient Infrastructure, Promote Inclusive and Sustainable Industrialization and Foster Innovation". We discuss how government and private-sector vehicles that make routine trips can collect crowd-sourced data using smartphone devices to identify road conditions and to locate potholes, surface unevenness (roughness), and other major distresses on the roads. We explore Mobile Edge Computing (MEC) techniques that can bring machine learning intelligence closer to the edge devices where the produced data is stored, and we show how Federated Learning can be applied to detect and improve road conditions. During the second phase of this study, we plan to implement a Digital Twin of the road system in Sri Lanka, using data provided by both Dedicated and Non-Dedicated systems. At the time of writing, and to the best of our knowledge, no Digital Twin has been implemented for roads or other infrastructure systems in Sri Lanka, so the proposed Digital Twin will be one of the first such implementations in the country. Lessons learned from this pilot project will benefit other developing countries that wish to follow the same path and make data-driven decisions.
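
As a rough illustration of the federated approach sketched above, the following is a minimal FedAvg loop in Python, assuming a toy logistic-regression pothole detector over synthetic accelerometer features; only model weights leave each simulated vehicle, never the raw trip data.

```python
# Minimal FedAvg sketch (hypothetical setup): each vehicle's smartphone trains
# a logistic-regression pothole detector on private accelerometer features and
# shares only model weights with the server.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on private data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)   # gradient of the logistic loss
    return w

# Synthetic stand-in for per-vehicle accelerometer feature windows.
clients = []
for _ in range(10):
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # toy "pothole" labels
    clients.append((X, y))

w_global = np.zeros(4)
for _ in range(20):                            # one FL round per iteration
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)       # FedAvg: average client weights
```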

Related Content

Millions of battery-powered sensors deployed for monitoring purposes in a multitude of scenarios (e.g., agriculture, smart cities, and industry) require energy-efficient solutions to prolong their lifetime. When these sensors observe a phenomenon distributed in space and evolving in time, the collected observations are expected to be correlated in time and space. In this paper, we propose a Deep Reinforcement Learning (DRL) based scheduling mechanism capable of taking advantage of such correlated information. We design our solution using the Deep Deterministic Policy Gradient (DDPG) algorithm. The proposed mechanism determines the frequency with which sensors should transmit their updates to ensure accurate collection of observations, while simultaneously considering the energy available. To evaluate our scheduling mechanism, we use multiple datasets containing environmental observations obtained in real deployments, which enable us to model the environment the mechanism interacts with as realistically as possible. We show that our solution can significantly extend the sensors' lifetime. We compare our mechanism to an idealized, all-knowing scheduler to demonstrate that its performance is near-optimal. Additionally, we highlight the unique feature of our design, energy-awareness, by showing the impact of sensors' energy levels on the frequency of updates.
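
A hedged sketch of the kind of environment such a DDPG scheduler would interact with (all dynamics and constants below are illustrative assumptions, not the paper's model): the agent picks a transmit interval, and sparser updates drain less energy but let the observation error grow.

```python
import numpy as np

class SensorSchedulingEnv:
    """Toy stand-in for the scheduling environment: state = (battery, current
    interval), continuous action = the next transmit interval in seconds."""
    def __init__(self, tx_cost=0.05, drift=0.02, seed=0):
        self.rng = np.random.default_rng(seed)
        self.tx_cost, self.drift = tx_cost, drift
        self.battery, self.interval = 1.0, 1.0

    def step(self, action):
        self.interval = float(np.clip(action, 1.0, 60.0))
        # Sparser updates drain less energy per time step, but reconstruction
        # error grows with staleness of the correlated observations.
        error = self.drift * self.interval + self.rng.normal(0.0, 0.005)
        self.battery -= self.tx_cost / self.interval
        done = self.battery <= 0.0
        return np.array([self.battery, self.interval]), -error, done

env = SensorSchedulingEnv()
total, done = 0.0, False
while not done:
    action = env.rng.uniform(1.0, 60.0)   # placeholder; a trained DDPG actor goes here
    state, reward, done = env.step(action)
    total += reward
print(f"lifetime reward under a random policy: {total:.2f}")
```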

The digitalization of railway systems is expected to increase the efficiency of train operation and help meet future mobility challenges and climate goals. However, this digitalization also brings several new challenges for providing secure and reliable train operation. The work behind this paper tackles two major ones. First, there is no single university curriculum combining computer science, railway operation, and certification processes. Second, many railway processes are still manual, carried out without digital tools, and result in static implementations and configurations of railway infrastructure devices. This case study took place as part of the Digital Rail Summer School 2021, a university course combining the three aspects above as a cooperation among several German universities and partners from the railway industry. It passes through all steps: from digital Control-Command and Signalling (CCS) planning in ProSig 7.3, through the transfer and validation of the planning in the PlanPro data format and toolbox, to the generation of interlocking code for the digital CCS planning, contributing to the vision of test automation. This paper contributes the experiences of the case study and a proof-of-concept of the whole lifecycle for the Digital Testfield of Deutsche Bahn in Scheibenberg. This proof-of-concept will be continued in ongoing and follow-up projects to fulfill the vision of test automation and the automated launch of new devices.
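
As a loose illustration of the validation step in such a toolchain (the element names below are hypothetical, not the real PlanPro schema), one could sanity-check that a planning export parses as XML and contains the object types a downstream interlocking code generator expects:

```python
# Hedged sketch with made-up tag names; a real check would validate against
# the actual PlanPro XSD rather than a hand-picked tag list.
import xml.etree.ElementTree as ET

def check_planning_export(path, required_tags=("Signal", "Weiche", "Gleis")):
    tree = ET.parse(path)                                  # fails on malformed XML
    tags = {el.tag.split("}")[-1] for el in tree.iter()}   # strip XML namespaces
    missing = [t for t in required_tags if t not in tags]
    if missing:
        raise ValueError(f"planning export lacks object types: {missing}")
    return sorted(tags)
```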

Emerging application scenarios, such as cyber-physical systems (CPSs), the Internet of Things (IoT), and edge computing, call for coordination approaches addressing openness, self-adaptation, heterogeneity, and deployment agnosticism. Field-based coordination is one such approach, promoting the idea of programming system coordination declaratively from a global perspective, in terms of the functional manipulation and evolution in "space and time" of distributed data structures called fields. Regarding time specifically, field-based coordination (like many other distributed approaches to coordination) assumes that local activities in each device are regulated by a fair and unsynchronised fixed clock working at the platform level. In this work, we challenge this assumption and propose an alternative approach where scheduling is programmed in a natural way (alongside usual field-based coordination) in terms of causality fields, each enacting a programmable, distributed notion of a computation "cause" (why and when a field computation has to be locally computed) and of how it should change across time and space. Starting from low-level platform triggers, such causality fields can be organised into multiple layers, up to high-level, collectively-computed time abstractions to be used at the application level. This reinterpretation of time in terms of articulated causality relations allows us to express what we call "time-fluid" coordination, where scheduling can be finely tuned to select the triggers to react to, generally allowing the performance (system reactivity) and cost (resource usage) of computations to be balanced adaptively. We formalise the proposed scheduling framework for field-based coordination in the context of the field calculus, discuss an implementation in the aggregate computing framework, and finally evaluate the approach via simulation on several case studies.
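
A rough Python sketch of the core idea, not the field-calculus API: a "cause" is modeled as a predicate over local events that decides when a device re-evaluates its field computation, and causes compose into higher-level triggers in place of a fixed platform clock.

```python
import time

def timer_cause(period):
    """Fire at most once per `period` seconds (a reactivity floor)."""
    state = {"last": time.monotonic()}
    def fire(event):
        now = time.monotonic()
        if now - state["last"] >= period:
            state["last"] = now
            return True
        return False
    return fire

def change_cause(threshold):
    """Fire when the locally sensed value changes by more than `threshold`."""
    state = {"prev": None}
    def fire(event):
        value, prev = event["sensor"], state["prev"]
        state["prev"] = value
        return prev is not None and abs(value - prev) > threshold
    return fire

def any_of(*causes):
    """Causes compose; evaluate all so each can update its internal state."""
    return lambda event: any([cause(event) for cause in causes])

# React every 5 s, or sooner on a large sensor change: computation cost is
# paid mainly when something actually happens.
should_run = any_of(timer_cause(5.0), change_cause(0.5))
if should_run({"sensor": 1.7}):
    pass  # re-run the local field computation here
```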

The growth of the Internet and its associated technologies, including digital services, has tremendously impacted our society. However, scholars have noted a trend in data flow and collection and have alleged mass surveillance and digital supremacy. To this end, nations of the world such as Russia, China, Germany, Canada, France, and Brazil, among others, have taken steps toward changing the narrative. The question now is: should African nations join these giants in this school of thought on digital sovereignty, or fold their hands and remain on the other side of the divide? This question, among others, is the main reason for putting this paper together, with a view to demystifying the strategies needed to reconfigure data infrastructure in the context of Africa. The paper also highlights the benefits of digital technologies and their propensity to foster all-round development in the continent, in terms of economic uplift, employment creation, and national security, among others. There is therefore a need for African nations to design an appropriate blueprint to ensure the security of their digital infrastructure and the flow of data within their cyberspace. In addition, a roadmap for the immediate, short, and long term, in accordance with the framework of African developmental goals, should be put in place to guide the implementation.

The rapid development of System on Chip (SoC) technology, the Internet of Things (IoT), cloud computing, and artificial intelligence has brought new possibilities for improving and solving current problems. With data analytics and the use of machine learning/deep learning, it becomes possible to learn the underlying patterns in the massive data generated by IoT sensors and to make decisions based on what was learned. When combined with cloud computing, the whole pipeline can be automated and freed of manual control and operation. In this paper, an implementation of an automated data engineering pipeline for anomaly detection of IoT sensor data is studied and proposed. The process involves IoT sensors, Raspberry Pis, Amazon Web Services (AWS), and multiple machine learning techniques, with the intent of identifying anomalous cases for a smart home security system.
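
As a hedged stand-in for the anomaly-detection stage of such a pipeline (the model choice and feature layout are assumptions, not the paper's design), an IsolationForest can flag unusual (temperature, humidity, motion) readings before an alert would be forwarded, e.g., to an AWS notification topic:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic "normal" home-sensor readings: temperature, humidity, motion level.
normal = rng.normal(loc=[21.0, 45.0, 0.1], scale=[1.0, 5.0, 0.1], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

window = np.vstack([normal[:5], [[35.0, 90.0, 1.0]]])   # last row: injected anomaly
flags = model.predict(window)                           # -1 marks anomalies
for reading, flag in zip(window, flags):
    if flag == -1:
        print("anomaly:", reading)   # a real pipeline would publish an alert here
```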

Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm that has drawn tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states of the participating devices. Heterogeneity can exert a huge influence on the FL training process, e.g., making a device unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings, and we build a heterogeneity-aware FL platform that complies with the standard FL protocol while taking heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments comparing the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two factors responsible for the degradation. Our study provides insightful implications for FL practitioners: on the one hand, our findings suggest that FL algorithm designers should account for heterogeneity during evaluation; on the other hand, they urge system providers to design specific mechanisms to mitigate its impacts.
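
A toy simulation of the participant-bias effect described above (all numbers synthetic): if failure-prone devices also hold systematically different data, averaging only the surviving clients skews the aggregate away from the ideal full-participation value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
fail_prob = rng.uniform(0.0, 0.8, n)              # heterogeneous dropout rates
# Assume failure-prone (low-end) devices also hold different data, so their
# per-round "update" differs systematically.
client_update = 2.0 * fail_prob + rng.normal(0.0, 0.1, n)

ideal, actual = [], []
for _ in range(2000):
    chosen = rng.choice(n, size=10, replace=False)
    survivors = chosen[rng.random(10) > fail_prob[chosen]]
    ideal.append(client_update[chosen].mean())            # everyone reports
    if len(survivors):
        actual.append(client_update[survivors].mean())    # only survivors do

print(f"heterogeneity-unaware (ideal) aggregate: {np.mean(ideal):.3f}")
print(f"with dropouts (participant bias):        {np.mean(actual):.3f}")
```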

As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and the corresponding defenses for robustness; 3) inference attacks and the corresponding defenses for privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
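
One defense family covered by such surveys, sketched here with synthetic numbers: coordinate-wise median aggregation bounds the influence that a few poisoned model updates can have on the global model, unlike plain averaging.

```python
import numpy as np

rng = np.random.default_rng(2)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 5))   # 8 honest client updates
poisoned = np.full((2, 5), -50.0)                      # 2 malicious updates
updates = np.vstack([honest, poisoned])

print("mean aggregate:  ", updates.mean(axis=0))        # dragged far from 1.0
print("median aggregate:", np.median(updates, axis=0))  # stays near honest value
```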

The demand for artificial intelligence has grown significantly over the last decade, and this growth has been fueled by advances in machine learning techniques and the ability to leverage hardware acceleration. However, in order to increase the quality of predictions and render machine learning solutions feasible for more complex applications, a substantial amount of training data is required. Although small machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in computational power of computing machinery, there is a need to distribute the machine learning workload across multiple machines, turning the centralized system into a distributed one. These distributed systems present new challenges, first and foremost the efficient parallelization of the training process and the creation of a coherent model. This article provides an extensive overview of the current state of the art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the systems that are available.
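
A minimal sketch of the synchronous data-parallel pattern behind much of this work, as a simulated single-process stand-in for what an all-reduce does across real machines: each worker computes a gradient on its shard, and the averaged gradient yields one coherent model step.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
true_w = np.arange(5.0)
y = X @ true_w + rng.normal(scale=0.1, size=1000)

shards = np.array_split(np.arange(1000), 4)       # 4 simulated workers
w = np.zeros(5)
for _ in range(500):
    # Each worker computes a least-squares gradient on its own shard.
    grads = [X[s].T @ (X[s] @ w - y[s]) / len(s) for s in shards]
    w -= 0.05 * np.mean(grads, axis=0)            # "all-reduce", then apply
print(np.round(w, 2))                             # approaches true_w
```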

In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now run on mobile devices. To take advantage of the resources available on mobile devices and to preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and uploads only computation results, rather than the original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods, and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey gives a clear overview of mobile distributed machine learning and provides guidelines for applying it to real applications.
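
A back-of-the-envelope illustration of the bandwidth argument, with all sizes purely assumed: uploading a small model update instead of the raw local data.

```python
# All figures below are illustrative assumptions, not measurements.
raw_samples = 10_000          # local samples collected per training round
bytes_per_sample = 2_048      # assumed serialized size of one sample
model_params = 50_000         # assumed small on-device model
bytes_per_param = 4           # float32 weights

raw_upload = raw_samples * bytes_per_sample
update_upload = model_params * bytes_per_param
print(f"raw-data upload:     {raw_upload / 1e6:.1f} MB")
print(f"model-update upload: {update_upload / 1e6:.2f} MB")
print(f"reduction:           {raw_upload / update_upload:.0f}x")
```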

The field of Multi-Agent System (MAS) is an active area of research within Artificial Intelligence, with an increasingly important impact in industrial and other real-world applications. Within a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as one of the prominent agent architectures to govern the agents' autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have enabled them to support MAS in complex, real-time, and uncertain environments. This survey aims at providing an overview of the DCOP model, giving a classification of its multiple extensions and addressing both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions, and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.
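
For readers unfamiliar with the model, here is a tiny DCOP instance sketched in Python: three agents, binary constraint costs, and a brute-force search standing in for the distributed algorithms (e.g., DPOP, Max-Sum) that the survey classifies.

```python
from itertools import product

domain = [0, 1, 2]
agents = ["a1", "a2", "a3"]

def cost(assign):
    """Sum of binary constraint costs over the constraint graph."""
    c = 0
    c += 0 if assign["a1"] != assign["a2"] else 3   # a1, a2 prefer different values
    c += abs(assign["a2"] - assign["a3"])           # a2, a3 prefer close values
    return c

# Centralized brute force for illustration; DCOP algorithms distribute this
# search across the agents via local message passing.
best = min((dict(zip(agents, vals)) for vals in product(domain, repeat=3)),
           key=cost)
print(best, "cost:", cost(best))
```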
