
Computational task offloading based on edge computing can address the performance bottleneck of traditional cloud-based systems for the Internet of Things (IoT). To further optimize computing efficiency and resource allocation, collaborative offloading has been proposed to enable offloading from edge devices to IoT terminal devices. However, incentive mechanisms that encourage participants to take over tasks from others are still lacking. To address this, this paper proposes a distributed computational resource trading strategy that accounts for the multiple preferences of IoT users. Unlike most existing work, the objective of our trading strategy comprehensively considers the satisfaction of both requesters and collaborators with task delay, energy consumption, price, and user reputation. The system design uses blockchain to enhance decentralization, security, and automation. Compared with a trading method based on the classical double-auction matching mechanism, our strategy offloads and executes more tasks, and its trading results are more favorable to collaborators with good reputations.
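To make the baseline concrete, below is a minimal Python sketch of the classical double-auction matching mechanism that the proposed strategy is compared against: bids are sorted in descending order, asks in ascending order, and pairs are matched greedily while the bid covers the ask. The identifiers and the midpoint pricing rule are illustrative assumptions, not the paper's exact mechanism.

```python
# Minimal sketch of the classical double-auction matching baseline.
# Names and the midpoint pricing rule are illustrative assumptions.

def double_auction_match(bids, asks):
    """Greedily match requester bids with collaborator asks.

    bids: list of (requester_id, bid_price); asks: list of (collaborator_id, ask_price).
    Returns a list of (requester_id, collaborator_id, trade_price) tuples.
    """
    bids = sorted(bids, key=lambda b: b[1], reverse=True)  # highest bid first
    asks = sorted(asks, key=lambda a: a[1])                # lowest ask first
    matches = []
    for (req, bid), (col, ask) in zip(bids, asks):
        if bid < ask:            # no further mutually beneficial trade exists
            break
        price = (bid + ask) / 2  # assumed midpoint pricing rule
        matches.append((req, col, price))
    return matches

print(double_auction_match([("r1", 10), ("r2", 6), ("r3", 3)],
                           [("c1", 4), ("c2", 5), ("c3", 9)]))
# -> [('r1', 'c1', 7.0), ('r2', 'c2', 5.5)]
```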

Related Content


The coordination of actions and the allocation of profit in supply chains under decentralized control play an important role in improving the profits of the retailers and suppliers in the chain. We focus on supply chains under decentralized control in which non-competing retailers can order from multiple suppliers, whose production capacity is bounded, to replenish their stocks. The goal of each firm in the chain is to maximize its individual profit. Since the outcome under decentralized control is inefficient, coordination of actions between cooperating agents can improve individual profits. We use cooperative game theory to analyze this cooperation. We define multi-retailer-supplier games and show that the agents can jointly achieve the optimal total profit and have incentives to cooperate and form the grand coalition. Moreover, we show that there always exist stable allocations of the total profit upon which no coalition can improve. Finally, we propose and characterize a stable allocation of the total surplus induced by cooperation.
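As an illustration of the stability notion used here, the following sketch checks whether a given profit allocation lies in the core of a toy three-player characteristic-function game, i.e., whether no coalition can improve upon it. The players and coalition values are illustrative, not taken from the paper.

```python
# Hedged sketch: a generic core-membership check on a toy 3-player
# characteristic function. The numbers are illustrative, not the paper's.
from itertools import chain, combinations

players = ["retailer1", "retailer2", "supplier"]
v = {  # coalition value = joint profit (toy numbers)
    (): 0,
    ("retailer1",): 2, ("retailer2",): 3, ("supplier",): 0,
    ("retailer1", "retailer2"): 5,
    ("retailer1", "supplier"): 7, ("retailer2", "supplier"): 8,
    ("retailer1", "retailer2", "supplier"): 13,
}

def coalitions(ps):
    return chain.from_iterable(combinations(ps, r) for r in range(len(ps) + 1))

def in_core(alloc):
    """alloc maps player -> payoff; the core requires efficiency and
    sum(alloc[i] for i in S) >= v(S) for every coalition S."""
    if abs(sum(alloc.values()) - v[tuple(players)]) > 1e-9:  # efficiency
        return False
    return all(sum(alloc[p] for p in S) + 1e-9 >= v[S] for S in coalitions(players))

print(in_core({"retailer1": 4, "retailer2": 5, "supplier": 4}))  # True
```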

In conventional dual-function radar-communication (DFRC) systems, the radar and communication channels are routinely estimated at fixed time intervals based on their worst-case operation scenarios. Such situation-agnostic repeated estimation causes significant training overhead and dramatically degrades system performance, especially for applications with dynamic sensing/communication demands and limited radio resources. In this paper, we leverage channel aging characteristics to reduce training overhead and design a situation-dependent, channel re-estimation interval optimization-based resource allocation scheme for performance improvement in a multi-target tracking DFRC system. Specifically, we exploit the temporal correlation of the channel to predict the radar and communication channels, reducing the need for training-preamble retransmission. Then, we characterize the channel aging effects on the Cramér-Rao lower bounds (CRLBs) for radar tracking performance and on the achievable rates with maximum ratio transmission (MRT) and zero-forcing (ZF) transmit beamforming for communication performance. In particular, the aged CRLBs and achievable rates are derived in closed form with respect to the channel aging time, bandwidth, and power. Based on these results, we optimize these factors to maximize the average total aged achievable rate subject to individual target tracking precision demands, communication rate requirements, and other practical constraints. Since the formulated problem is non-convex, we develop an efficient one-dimensional-search-based optimization algorithm to obtain suboptimal solutions. Finally, simulation results validate the correctness of the derived theoretical results and the effectiveness of the proposed allocation scheme.
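For the communication side, the sketch below contrasts the two transmit beamformers the analysis covers, MRT and ZF, computing per-user SINRs and the sum rate in a toy K-user MISO downlink. The dimensions, equal power split, and noise level are illustrative assumptions.

```python
# Hedged sketch of MRT vs. ZF transmit beamforming for a K-user MISO
# downlink. Dimensions, power split, and noise variance are illustrative.
import numpy as np

rng = np.random.default_rng(0)
M, K, P, noise = 8, 3, 1.0, 0.1          # antennas, users, total power, noise var
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

W_mrt = H.conj().T                                  # MRT: match each user's channel
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # ZF: null multi-user interference
for name, W in [("MRT", W_mrt), ("ZF", W_zf)]:
    W = W / np.linalg.norm(W, axis=0)    # unit-norm beamforming columns
    W = W * np.sqrt(P / K)               # equal power split (assumption)
    G = np.abs(H @ W) ** 2               # G[k, j]: power user k receives from beam j
    sinr = np.diag(G) / (G.sum(axis=1) - np.diag(G) + noise)
    print(name, "sum rate:", np.log2(1 + sinr).sum())
```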

Research in HCI has shown a growing interest in unethical design practices across numerous domains, often referred to as "dark patterns". There is, however, a gap in the related literature regarding social networking services (SNSs). In this context, studies emphasise a lack of user self-determination regarding control over personal data and time spent on SNSs. We collected over 16 hours of screen recordings from Facebook's, Instagram's, TikTok's, and Twitter's mobile applications to understand how dark patterns manifest in these SNSs. For this task, we turned to HCI experts to mitigate the difficulties that non-expert participants have in recognising dark patterns, as prior studies have noted. Supported by the recordings, two authors of this paper conducted a thematic analysis based on previously described taxonomies, manually classifying the recorded material. This yielded two key findings: we observed which known dark-pattern instances occur in SNSs, and we identified two overarching strategies, engaging and governing, comprising five previously undescribed dark patterns.

Federated learning is one of the most appealing alternatives to the standard centralized learning paradigm, allowing a heterogeneous set of devices to train a machine learning model without sharing their raw data. However, it requires a central server to coordinate the learning process, thus introducing potential scalability and security issues. In the literature, server-less federated learning approaches such as gossip federated learning and blockchain-enabled federated learning have been proposed to mitigate these issues. In this work, we provide a complete overview of these three techniques (standard, gossip, and blockchain-enabled federated learning) and compare them according to an integral set of performance indicators, including model accuracy, time complexity, communication overhead, convergence time, and energy consumption. An extensive simulation campaign allows us to carry out a quantitative analysis considering both feedforward and convolutional neural network models. The results show that gossip federated learning and the standard federated solution reach a similar level of accuracy, and that their energy consumption is influenced by the machine learning model adopted, the software library, and the hardware used. In contrast, blockchain-enabled federated learning represents a viable solution for implementing decentralized learning with a higher level of security, at the cost of extra energy usage and data sharing. Finally, we identify open issues in the two decentralized federated learning implementations and provide insights on potential extensions and research directions in this new field.
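To make the gossip variant concrete, here is a minimal sketch of one gossip averaging round: each node mixes its model parameters with those of its neighbors via a mixing matrix. The ring topology and uniform weights are illustrative assumptions; because the example graph is regular, the row-normalized matrix is also doubly stochastic, so repeated rounds drive all nodes toward the global average.

```python
# Minimal sketch of the gossip averaging step behind gossip federated
# learning. Topology and weights are illustrative assumptions.
import numpy as np

def gossip_round(models, adjacency):
    """models: (n_nodes, n_params) array; adjacency: symmetric 0/1 matrix."""
    n = len(models)
    W = adjacency + np.eye(n)             # include a self-loop for each node
    W = W / W.sum(axis=1, keepdims=True)  # row-stochastic mixing weights
    return W @ models                     # every node averages over its neighborhood

models = np.array([[0.0], [1.0], [2.0], [3.0]])
ring = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
for _ in range(20):
    models = gossip_round(models, ring)
print(models.ravel())  # all nodes converge toward the global average 1.5
```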

Future wireless networks are expected to support diverse mobile services, including artificial intelligence (AI) services and ubiquitous data transmissions. Federated learning (FL), as a revolutionary learning approach, enables collaborative AI model training across distributed mobile edge devices. By exploiting the superposition property of multiple-access channels, over-the-air computation allows concurrent model uploading from massive devices over the same radio resources, and thus significantly reduces the communication cost of FL. In this paper, we study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network. We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system. Under this framework, we aim to maximize the IT data rate and guarantee a given FL convergence performance by optimizing the long-term radio resource allocation. A key challenge that limits the spectrum efficiency of the coexisting system lies in the large overhead incurred by frequent communication between the server and edge devices for FL model aggregation. To address the challenge, we rigorously analyze the impact of the computation-to-communication ratio on the convergence of over-the-air FL in wireless fading channels. The analysis reveals the existence of an optimal computation-to-communication ratio that minimizes the amount of radio resources needed for over-the-air FL to converge to a given error tolerance. Based on the analysis, we propose a low-complexity online algorithm to jointly optimize the radio resource allocation for both the FL devices and IT devices. Extensive numerical simulations verify the superior performance of the proposed design for the coexistence of FL and IT devices in wireless cellular systems.
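A minimal sketch of the over-the-air aggregation idea follows: devices pre-invert their fading channels and transmit simultaneously, so the multiple-access channel itself sums the updates, and the server merely rescales the noisy superposition. The channel model and noise level are illustrative assumptions.

```python
# Hedged sketch of over-the-air model aggregation via channel-inversion
# precoding. Fading model and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_dev, dim, noise_std = 10, 5, 0.01
updates = rng.standard_normal((n_dev, dim))                        # local model updates
h = rng.standard_normal((n_dev, 1)) + 1j * rng.standard_normal((n_dev, 1))  # fading

tx = updates / h                               # channel-inversion precoding
y = (h * tx).sum(axis=0)                       # superposition over the air
y += noise_std * (rng.standard_normal(dim) + 1j * rng.standard_normal(dim))
aggregate = y.real / n_dev                     # server rescales to get the average
print(np.allclose(aggregate, updates.mean(axis=0), atol=0.05))  # True (up to noise)
```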

Renewable energy sources, such as wind and solar power, are increasingly being integrated into smart grid systems. However, compared with traditional energy resources, the unpredictability of renewable energy generation poses significant challenges for both electricity providers and utility companies. Furthermore, the large-scale integration of distributed energy resources (such as PV systems) creates new challenges for energy management in microgrids. To tackle these issues, we propose a novel framework with two objectives: (i) combating the uncertainty of renewable energy in the smart grid by leveraging time-series forecasting with Long Short-Term Memory (LSTM) solutions, and (ii) establishing a distributed and dynamic decision-making framework with multi-agent reinforcement learning using the Deep Deterministic Policy Gradient (DDPG) algorithm. The proposed framework pursues both objectives concurrently and fully integrates them, while considering both the wholesale and retail markets, thereby enabling efficient energy management in the presence of uncertain and distributed renewable energy sources. Through extensive numerical simulations, we demonstrate that the proposed solution significantly improves the profit of load-serving entities (LSEs) by providing a more accurate wind generation forecast. Furthermore, our results show that households with PV and battery installations can increase their profits through intelligent battery charge/discharge actions determined by the DDPG agents.
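As a sketch of the forecasting component, the following PyTorch snippet trains a one-layer LSTM that maps a sliding window of past generation values to the next value. The window length, hidden size, and synthetic sine-wave "wind" series are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of an LSTM one-step-ahead forecaster for a generation
# series. Window size, width, and data are illustrative assumptions.
import torch
import torch.nn as nn

class WindLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next step from the last state

# Synthetic "wind" series: a noisy sine wave, windowed into (input, target) pairs.
t = torch.linspace(0, 20, 500)
series = torch.sin(t) + 0.1 * torch.randn_like(t)
window = 24
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = WindLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```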

Graph neural networks (GNNs) are a class of deep learning models that learn over graphs and have been successfully applied in many domains. Despite their effectiveness, it remains challenging for GNNs to scale efficiently to large graphs. As a remedy, distributed computing is a promising solution for training large-scale GNNs, since it provides abundant computing resources. However, the dependencies imposed by the graph structure make high-efficiency distributed GNN training difficult, as training suffers from massive communication and workload imbalance. In recent years, many efforts have been devoted to distributed GNN training, and an array of training algorithms and systems have been proposed. Yet, there is a lack of systematic review of the optimization techniques spanning graph processing to distributed execution. In this survey, we analyze three major challenges in distributed GNN training: massive feature communication, loss of model accuracy, and workload imbalance. We then introduce a new taxonomy of the optimization techniques that address these challenges, classifying existing techniques into four categories: GNN data partition, GNN batch generation, GNN execution model, and GNN communication protocol. We carefully discuss the techniques in each category. Finally, we summarize existing distributed GNN systems for multi-GPU, GPU-cluster, and CPU-cluster settings, and discuss future directions for scalable GNNs.
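To illustrate why GNN data partition matters, the sketch below hash-partitions the nodes of a random graph across workers and counts cut edges, each of which forces a cross-worker feature transfer during neighborhood aggregation. The random graph and the naive hash partitioner are illustrative assumptions.

```python
# Hedged sketch of the "GNN data partition" category: count cut edges as a
# proxy for cross-worker feature traffic. Graph and partitioner are toy choices.
import random

random.seed(0)
n_nodes, n_workers = 1000, 4
edges = [(random.randrange(n_nodes), random.randrange(n_nodes)) for _ in range(5000)]

part = {v: hash(v) % n_workers for v in range(n_nodes)}  # naive hash partition
cut = sum(part[u] != part[v] for u, v in edges)
print(f"cut edges: {cut}/{len(edges)}")  # roughly 3/4 of edges cut for 4 random parts
# A locality-aware partitioner (e.g. METIS-style) would shrink this cut and
# hence the cross-worker feature communication during aggregation.
```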

Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013 [1], it has attracted significant attention from researchers across multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) up to 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in the prestigious venues of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminology for non-experts in the domain. Finally, it discusses the challenges and future outlook of this direction based on the literature reviewed herein and in [2].
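For readers new to the area, here is a sketch of the canonical first-generation attack that much of the reviewed literature builds on, the fast gradient sign method (FGSM): perturb the input by a small step in the sign direction of the loss gradient. The toy linear "classifier" and the epsilon value are illustrative assumptions.

```python
# Hedged sketch of FGSM: a one-step, loss-increasing perturbation bounded
# in infinity norm. The untrained linear model is a stand-in, not a real classifier.
import torch
import torch.nn as nn

def fgsm(model, x, label, eps):
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()  # small step that increases the loss

model = nn.Linear(784, 10)                 # stand-in for a trained classifier
x = torch.rand(1, 784)
label = torch.tensor([3])
x_adv = fgsm(model, x, label, eps=0.05)
print((x_adv - x).abs().max())             # perturbation bounded by eps
```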

Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
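The aggregation step that defines this setting can be sketched in a few lines: the server forms a data-size-weighted average of the client models (federated averaging). The client parameters and dataset sizes below are illustrative.

```python
# Minimal sketch of the federated averaging (FedAvg) aggregation step:
# a data-size-weighted average of client models. Numbers are illustrative.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 700]
print(fedavg(clients, sizes))  # data-weighted global model: [4.2, 5.2]
```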

The concept of the smart grid has been introduced as a new vision of the conventional power grid to enable efficient integration of green and renewable energy technologies. Along these lines, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to delivering energy anywhere, at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large and growing number of connections is a challenging issue for the traditional centralized grid system. Consequently, the smart grid is transforming from its centralized form to a decentralized topology. Meanwhile, blockchain has several excellent features that make it a promising application for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey of the application of blockchain in the smart grid. We identify the significant security challenges of smart grid scenarios that can be addressed by blockchain, and then present a number of recent blockchain-based research works from the literature that address security issues in the smart grid. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions for applying blockchain to smart grid security.
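As a sketch of the tamper-evidence property that motivates blockchain in this setting, the snippet below chains blocks of energy transactions by hash: altering a recorded trade changes that block's hash and breaks the link to every later block. The record format is an illustrative assumption.

```python
# Hedged sketch of hash chaining, the core tamper-evidence mechanism of a
# blockchain. The energy-trade record format is an illustrative assumption.
import hashlib, json

def make_block(prev_hash, transactions):
    body = json.dumps({"prev": prev_hash, "txs": transactions}, sort_keys=True)
    return {"prev": prev_hash, "txs": transactions,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

genesis = make_block("0" * 64, [{"from": "pv_house_1", "kwh": 5, "to": "grid"}])
block2 = make_block(genesis["hash"], [{"from": "grid", "kwh": 2, "to": "ev_1"}])

genesis["txs"][0]["kwh"] = 50  # tamper with a recorded trade...
recomputed = make_block("0" * 64, genesis["txs"])["hash"]
print(recomputed == block2["prev"])  # False: the chain exposes the change
```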
