
In this work, we consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints. Due to limited communication resources and latency requirements, only a subset of devices is scheduled to participate in local training in each iteration. We formulate a stochastic network optimization problem for designing a dynamic scheduling policy that maximizes the time-averaged data importance of the scheduled devices subject to energy consumption and latency constraints. Our proposed algorithm, based on the Lyapunov optimization framework, outperforms alternative methods that do not consider time-varying data importance, especially when the generation of training data exhibits strong temporal correlation.
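To make the scheduling idea concrete, below is a minimal sketch of a drift-plus-penalty step in the spirit of Lyapunov optimization, assuming per-device virtual queues that track energy-budget violations; the importance scores, energy costs, trade-off weight V, and per-round budget are hypothetical placeholders rather than the paper's exact formulation.

```python
import numpy as np

def schedule_devices(importance, energy_cost, energy_budget, Q, V=10.0, k=2):
    """One drift-plus-penalty scheduling step (a sketch, not the paper's exact rule).

    importance   : (N,) current data-importance score per device (hypothetical metric)
    energy_cost  : (N,) energy a device would spend if scheduled this round
    energy_budget: (N,) per-round average energy allowance (long-term constraint)
    Q            : (N,) virtual energy-deficit queues, one per device
    V            : importance-vs-energy trade-off weight
    k            : number of devices that can be scheduled per round
    """
    # Per-device score: reward for data importance, penalty for queue-weighted energy.
    score = V * importance - Q * energy_cost
    scheduled = np.argsort(score)[::-1][:k]          # pick the k highest scores
    served = np.zeros_like(importance, dtype=bool)
    served[scheduled] = True
    # Virtual queue update: grows when energy is spent, drains by the per-round budget.
    Q = np.maximum(Q + energy_cost * served - energy_budget, 0.0)
    return scheduled, Q

# Toy usage with random importance scores for 5 devices.
rng = np.random.default_rng(0)
Q = np.zeros(5)
for t in range(3):
    sel, Q = schedule_devices(rng.random(5), rng.uniform(0.5, 1.0, 5),
                              np.full(5, 0.3), Q)
    print(f"round {t}: scheduled {sel}, queues {np.round(Q, 2)}")
```

Devices with high current data importance and a low accumulated energy deficit are favoured, which is the intuition behind a scheduler that respects long-term energy constraints.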

Related Content

To improve application-level communication performance, scheduling of coflows (each a collection of parallel flows sharing the same objective) is prevalent in modern data center networks (DCNs). Meanwhile, a hybrid-switched DCN design that combines optical circuit switches (OCS) for high-volume traffic with electrical packet switches (EPS) for low-volume traffic has received considerable research attention recently. Efficient scheduling of coflows on hybrid network links is crucial for reducing the overall communication time. However, because the circuit switch offers an ultra-high transmission rate but incurs a reconfiguration delay, while the packet switch has limited bandwidth, coflow scheduling becomes increasingly challenging. Existing coflow scheduling algorithms in hybrid-switched DCNs are all heuristic and provide no performance guarantees. In this work, we propose an approximation algorithm with a worst-case performance guarantee of $2+\lambda$, where $\lambda$ is a factor related to system parameters and demand characteristics, for single-coflow scheduling in hybrid-switched DCNs to minimize the coflow completion time (CCT). Extensive simulations based on Facebook data traces show that our algorithm outperforms the state-of-the-art schemes Solstice and Reco-Sin by 1.14× and 1.42×, respectively, in terms of minimizing CCT.
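As a rough illustration of why the reconfiguration delay and the packet-switch bandwidth make this problem hard, the toy calculation below estimates the CCT of a single coflow under a given circuit schedule; it assumes the packet switch runs in parallel from time zero with a fixed per-port rate, which is a simplification, and it is not the proposed approximation algorithm.

```python
import numpy as np

def hybrid_cct(demand, configs, durations, r_ckt=10.0, r_pkt=1.0, delta=0.1):
    """Toy CCT estimate for one coflow on a hybrid switch (illustrative only).

    demand   : (n, n) bytes to move from each ingress to each egress port
    configs  : list of (n, n) 0/1 permutation matrices (circuit matchings)
    durations: list of times each matching is held
    r_ckt    : per-port circuit rate; r_pkt: per-port packet-switch rate
    delta    : circuit reconfiguration delay paid before every matching
    """
    left = demand.astype(float).copy()
    circuit_time = 0.0
    for M, d in zip(configs, durations):
        left -= np.minimum(left, M * r_ckt * d)   # circuit drains matched port pairs
        circuit_time += delta + d                 # pay reconfiguration plus hold time
    # Residual traffic goes over the (slower) packet switch, port-constrained.
    port_load = max(left.sum(axis=0).max(), left.sum(axis=1).max())
    return max(circuit_time, port_load / r_pkt)

# Example: a 2x2 demand matrix served by two matchings.
D = np.array([[8.0, 2.0], [1.0, 8.0]])
I2 = np.eye(2); P2 = np.array([[0.0, 1.0], [1.0, 0.0]])
print(hybrid_cct(D, [I2, P2], [0.8, 0.2]))
```

The trade-off visible here is the one the abstract describes: every extra circuit matching costs a reconfiguration delay, while pushing leftover traffic to the packet switch is limited by its per-port bandwidth.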

Online video streaming is fundamentally limited by transmission bandwidth and computational capacity, and super-resolution is a promising potential solution. However, applying existing video super-resolution methods to online streaming is non-trivial. Existing video codecs and streaming protocols (e.g., WebRTC) dynamically change the video quality both spatially and temporally, which leads to diverse and dynamic degradations. Furthermore, online streaming imposes a strict latency requirement that makes most existing methods inapplicable. As a result, this paper focuses on the rarely explored problem setting of online streaming video super-resolution. To facilitate research on this problem, a new benchmark dataset named LDV-WebRTC is constructed based on a real-world online streaming system. Leveraging the new benchmark dataset, we propose a novel method specifically for online video streaming, which contains a convolution and Look-Up Table (LUT) hybrid model to achieve a better performance-latency trade-off. To tackle the changing degradations, we propose a mixture-of-expert-LUT module, where a set of LUTs specialized in different degradations is built and adaptively combined to handle different degradations. Experiments show that our method achieves 720P video super-resolution at around 100 FPS, significantly outperforming existing LUT-based methods and offering competitive performance compared to efficient CNN-based methods.
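The following is a toy sketch of the mixture-of-expert-LUT idea, assuming each expert is reduced to a simple per-intensity lookup table and a lightweight router predicts per-image expert weights from global statistics; the actual module operates on quantized local patches for super-resolution, so the shapes and details here are hypothetical.

```python
import torch
import torch.nn as nn

class MoELUT(nn.Module):
    """Toy mixture-of-expert LUT: K value-mapping tables combined by a
    degradation-aware router (a simplified stand-in for the paper's module)."""
    def __init__(self, num_experts=4, bins=256):
        super().__init__()
        # Each expert is a 1D lookup table over quantized pixel intensities.
        self.luts = nn.Parameter(torch.rand(num_experts, bins))
        # Lightweight router: predicts per-image expert weights from global stats.
        self.router = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
                                    nn.Linear(16, num_experts))

    def forward(self, x):                                     # x: (B, 1, H, W) in [0, 1]
        stats = torch.stack([x.mean(dim=(1, 2, 3)),
                             x.std(dim=(1, 2, 3))], dim=1)    # (B, 2) global statistics
        w = torch.softmax(self.router(stats), dim=1)          # (B, K) expert weights
        idx = (x.clamp(0, 1) * 255).long()                    # quantize pixels to bins
        expert_out = self.luts[:, idx]                        # (K, B, 1, H, W) lookups
        return torch.einsum('bk,kbchw->bchw', w, expert_out)  # adaptive combination

model = MoELUT()
print(model(torch.rand(2, 1, 32, 32)).shape)                  # torch.Size([2, 1, 32, 32])
```

The point of the design is that the expensive per-pixel work is a table lookup, while only the tiny router runs as a learned network, which is how a LUT-heavy model can keep latency low.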

The agricultural sector currently faces significant challenges in water resource conservation and crop yield optimization, primarily due to concerns over freshwater scarcity. Traditional irrigation scheduling methods often prove inadequate in meeting the needs of large-scale irrigation systems. To address this issue, this paper proposes a predictive irrigation scheduler that leverages the three paradigms of machine learning to optimize irrigation schedules. The proposed scheduler employs the k-means clustering approach to divide the field into distinct irrigation management zones based on soil hydraulic parameters and topology information. Furthermore, a long short-term memory network is employed to develop dynamic models for each management zone, enabling accurate predictions of soil moisture dynamics. Formulated as a mixed-integer model predictive control problem, the scheduler aims to maximize water uptake while minimizing overall water consumption and irrigation costs. To tackle the mixed-integer optimization challenge, the proximal policy optimization algorithm is utilized to train a reinforcement learning agent responsible for making daily irrigation decisions. To evaluate the performance of the proposed scheduler, a 26.4-hectare field in Lethbridge, Canada, was chosen as a case study for the 2015 and 2022 growing seasons. The results demonstrate the superiority of the proposed scheduler compared to a traditional irrigation scheduling method in terms of water use efficiency and crop yield improvement for both growing seasons. Notably, the proposed scheduler achieved water savings ranging from 6.4% to 22.8%, along with yield increases ranging from 2.3% to 4.3%.
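A minimal sketch of the zoning step is shown below, assuming each field cell is described by a small vector of soil hydraulic and elevation features (placeholder random values here) and k-means assigns cells to irrigation management zones.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-cell field features: [saturated hydraulic conductivity,
# field capacity, wilting point, elevation]; real inputs would come from
# soil surveys and a digital elevation model.
rng = np.random.default_rng(42)
features = rng.normal(size=(500, 4))

# Standardize features, then split the field into k irrigation management zones.
features = (features - features.mean(axis=0)) / features.std(axis=0)
zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print("cells per zone:", np.bincount(zones))
```

Each resulting zone would then get its own LSTM soil-moisture model and its own binary irrigation decision inside the mixed-integer scheduler described above.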

With the rapid advancement of technology, parallel computing applications have become increasingly popular and are commonly executed in large data centers. These applications involve two phases, computation and communication, which are executed repeatedly to complete the work. However, due to the ever-increasing demand for computing power, large data centers are struggling to meet the massive communication demands. To address this problem, coflow has been proposed as a networking abstraction that captures communication patterns in data-parallel computing frameworks. This paper focuses on the coflow scheduling problem in identical parallel networks, where the primary objective is to minimize the makespan, that is, the maximum completion time of coflows. This problem is $\mathcal{NP}$-hard and is considered one of the most significant scheduling problems in large data centers. In this paper, we consider two problems: flow-level scheduling and coflow-level scheduling. In the flow-level scheduling problem, distinct flows can be transferred through different network cores, whereas in the coflow-level scheduling problem, all flows must be transferred through the same network core. To address the flow-level scheduling problem, this paper proposes two algorithms: a $(3-\tfrac{2}{m})$-approximation algorithm and a $(\tfrac{8}{3}-\tfrac{2}{3m})$-approximation algorithm, where $m$ represents the number of network cores. For the coflow-level scheduling problem, this paper proposes a $(2m)$-approximation algorithm. Finally, we compare our proposed algorithms in simulation against Weaver's algorithm, presented by Huang \textit{et al.} at the 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS). We also validate the effectiveness of the proposed algorithms on heterogeneous parallel networks.
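For intuition on the flow-level setting, the sketch below assigns flows to the least-loaded of $m$ identical network cores in longest-processing-time order; this is classic list scheduling used as a baseline illustration, not the paper's $(3-\tfrac{2}{m})$- or $(\tfrac{8}{3}-\tfrac{2}{3m})$-approximation algorithms.

```python
import heapq

def greedy_flow_assignment(flow_sizes, m):
    """Assign each flow to the currently least-loaded of m identical network
    cores (classic list scheduling, used here only as a baseline sketch).
    Returns the flow-to-core assignment and the resulting makespan."""
    loads = [(0.0, core) for core in range(m)]
    heapq.heapify(loads)
    assignment = {}
    order = sorted(range(len(flow_sizes)), key=lambda i: -flow_sizes[i])  # LPT order
    for i in order:
        load, core = heapq.heappop(loads)        # pick the least-loaded core
        assignment[i] = core
        heapq.heappush(loads, (load + flow_sizes[i], core))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Example: 6 flows spread over 3 network cores.
_, ms = greedy_flow_assignment([5, 3, 8, 2, 7, 4], m=3)
print("makespan:", ms)
```

The coflow-level variant would instead force every flow of a coflow onto the same core, which is what makes its approximation ratio depend on $m$.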

The majority of Multi-Agent Reinforcement Learning (MARL) literature equates the cooperation of self-interested agents in mixed environments to the problem of social welfare maximization, allowing agents to arbitrarily share rewards and private information. This results in agents that forgo their individual goals in favour of the social good, which can potentially be exploited by selfish defectors. We argue that cooperation also requires agents' identities and boundaries to be respected, by ensuring that the emergent behaviour is an equilibrium, i.e., a convention from which no agent can deviate and receive a higher individual payoff. Inspired by advances in mechanism design, we propose to solve the problem of cooperation, defined as finding a socially beneficial equilibrium, by using mediators. A mediator is a benevolent entity that may act on behalf of agents, but only for the agents that agree to it. We show how a mediator can be trained alongside agents with policy gradient to maximize social welfare subject to constraints that encourage agents to cooperate through the mediator. Our experiments in matrix and iterated games highlight the potential power of applying mediators in MARL.
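The toy check below illustrates the mediated-equilibrium idea in a one-shot Prisoner's Dilemma, assuming the standard construction in which the mediator cooperates on a committed agent's behalf only when every agent commits; it is not the paper's learned mediator, and the payoff values are the usual textbook ones.

```python
# Toy check that committing to a mediator is an equilibrium in a one-shot
# Prisoner's Dilemma. Payoffs: R=3 (both cooperate), T=5 (temptation),
# S=0 (sucker), P=1 (both defect).
R, T, S, P = 3, 5, 0, 1

def payoff(a1, a2):
    """Row player's payoff for actions in {'C', 'D'}."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(a1, a2)]

def mediated_action(committed_self, committed_other):
    # The mediator cooperates on a committed agent's behalf only if everyone
    # committed; otherwise it defects to protect that agent from exploitation.
    return 'C' if committed_self and committed_other else 'D'

# Both agents commit: each earns R = 3.
both = payoff(mediated_action(True, True), mediated_action(True, True))
# Unilateral deviation: agent 1 defects alone, the mediator defects for agent 2.
deviate = payoff('D', mediated_action(True, False))
print(f"commit: {both}, deviate: {deviate}")   # 3 vs 1 -> no profitable deviation
```

Because the deviator earns 1 instead of 3, committing to the mediator is an equilibrium that also maximizes social welfare, which is exactly the kind of outcome the constrained policy-gradient training targets.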

Data processing and analytics are fundamental and pervasive. Algorithms play a vital role in data processing and analytics, and many algorithm designs have incorporated heuristics and general rules from human knowledge and experience to improve their effectiveness. Recently, reinforcement learning, and deep reinforcement learning (DRL) in particular, has been increasingly explored and exploited in many areas because it can learn better strategies in the complicated environments it interacts with than statically designed algorithms can. Motivated by this trend, we provide a comprehensive review of recent works focusing on utilizing DRL to improve data processing and analytics. First, we present an introduction to key concepts, theories, and methods in DRL. Next, we discuss DRL deployment in database systems, facilitating data processing and analytics in various aspects, including data organization, scheduling, tuning, and indexing. Then, we survey the application of DRL in data processing and analytics, ranging from data preparation and natural language processing to healthcare, fintech, etc. Finally, we discuss important open challenges and future research directions for using DRL in data processing and analytics.

Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can produce drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users, regulating local training using the learned knowledge as an inductive bias. Empirical studies supported by theoretical analysis show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state-of-the-art.
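Below is a minimal sketch of what the server-side, data-free generator update might look like, assuming the generator maps a noise vector and a sampled label to a latent feature that the ensemble of user classifier heads should recognize; the network sizes, the ensemble weighting, and the helper names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions for the latent feature space shared by user heads.
latent_dim, noise_dim, num_classes = 32, 16, 10

generator = nn.Sequential(
    nn.Linear(noise_dim + num_classes, 64), nn.ReLU(),
    nn.Linear(64, latent_dim))
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

def generator_step(user_heads, user_weights, batch=64):
    """One data-free distillation step on the server (a sketch)."""
    y = torch.randint(0, num_classes, (batch,))                       # sampled labels
    z = torch.randn(batch, noise_dim)                                 # sampled noise
    feats = generator(torch.cat([z, F.one_hot(y, num_classes).float()], dim=1))
    # The weighted ensemble of user heads should recognize the sampled labels
    # from the generated features; this distills user knowledge without real data.
    logits = sum(w * head(feats) for head, w in zip(user_heads, user_weights))
    loss = F.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with three hypothetical client heads weighted by their data sizes.
heads = [nn.Linear(latent_dim, num_classes) for _ in range(3)]
print(generator_step(heads, user_weights=[0.5, 0.3, 0.2]))
```

After such updates, the generator would be broadcast to clients so that locally generated features act as the inductive bias regularizing local training.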

The vast amount of data generated from networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, given the need for edge computation and licensing (data access) constraints. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model, Cross-Node Federated Graph Neural Network (CNFGNN), which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, and utilizes alternating optimization to reduce the communication cost and facilitate computation on edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring a modest communication cost.
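A minimal sketch of the device/server split is given below, assuming each device summarizes its local time series with a GRU and the server applies one GCN-style layer over the node summaries; the alternating optimization and the federated averaging of the shared encoder are omitted, and all sizes are placeholders.

```python
import torch
import torch.nn as nn

num_nodes, seq_len, hidden = 5, 12, 16     # hypothetical sizes

class NodeEncoder(nn.Module):              # runs on each edge device
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
    def forward(self, series):             # series: (1, seq_len, 1), stays local
        _, h = self.gru(series)
        return h.squeeze(0)                # (1, hidden) summary sent to the server

class ServerGNN(nn.Module):                # runs on the central server
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(hidden, hidden)
    def forward(self, H, A):               # H: (num_nodes, hidden), A: adjacency
        A_hat = A + torch.eye(num_nodes)                   # add self-loops
        D_inv = torch.diag(1.0 / A_hat.sum(dim=1))         # row-normalization
        return torch.relu(self.lin(D_inv @ A_hat @ H))     # one GCN-style layer

encoders = [NodeEncoder() for _ in range(num_nodes)]       # raw data never leaves nodes
server = ServerGNN()
H = torch.cat([enc(torch.randn(1, seq_len, 1)) for enc in encoders], dim=0)
A = (torch.rand(num_nodes, num_nodes) > 0.5).float()
print(server(H, A).shape)                  # spatially aggregated node embeddings
```

Only the compact per-node summaries cross the network, which is the source of the communication savings compared with centralizing raw time series.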

Heterogeneous graph neural networks (HGNNs), as an emerging technique, have shown a superior capacity for dealing with heterogeneous information networks (HINs). However, most HGNNs follow a semi-supervised learning paradigm, which notably limits their use in practice since labels are usually scarce in real applications. Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms and shows great potential when no labels are available. In this paper, we study the problem of self-supervised HGNNs and propose a novel co-contrastive learning mechanism for HGNNs, named HeCo. Different from traditional contrastive learning, which only focuses on contrasting positive and negative samples, HeCo employs a cross-view contrastive mechanism. Specifically, two views of a HIN (network schema and meta-path views) are proposed to learn node embeddings, so as to capture both local and high-order structures simultaneously. Then cross-view contrastive learning, together with a view mask mechanism, is proposed to extract positive and negative embeddings from the two views. This enables the two views to collaboratively supervise each other and finally learn high-level node embeddings. Moreover, two extensions of HeCo are designed to generate harder negative samples of high quality, which further boosts the performance of HeCo. Extensive experiments conducted on a variety of real-world networks show the superior performance of the proposed methods over the state-of-the-art.
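The sketch below shows an InfoNCE-style cross-view contrastive loss between embeddings from the two views, with a mask marking positives; it captures the idea of contrasting across views but is not HeCo's exact formulation.

```python
import torch
import torch.nn.functional as F

def cross_view_contrastive(z_sc, z_mp, pos_mask, tau=0.5):
    """InfoNCE-style cross-view loss (a sketch of the cross-view idea, not
    HeCo's exact loss). z_sc / z_mp: (N, d) node embeddings from the
    network-schema and meta-path views; pos_mask: (N, N) 0/1 matrix marking
    which nodes count as positives for each anchor."""
    z_sc = F.normalize(z_sc, dim=1)
    z_mp = F.normalize(z_mp, dim=1)
    sim = torch.exp(z_sc @ z_mp.t() / tau)            # (N, N) cross-view similarities
    pos = (sim * pos_mask).sum(dim=1)                 # positive mass per anchor
    return -torch.log(pos / sim.sum(dim=1)).mean()    # contrast against all nodes

# Toy usage: 8 nodes, each node is its own positive in the other view.
z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
print(cross_view_contrastive(z1, z2, torch.eye(8)))
```

In the full method the positive mask is built from meta-path connectivity rather than the identity, and a symmetric term with the views swapped lets the two views supervise each other.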

A key requirement for the success of supervised deep learning is a large labeled dataset - a condition that is difficult to meet in medical image analysis. Self-supervised learning (SSL) can help in this regard by providing a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations. Contrastive learning, a particular variant of SSL, is a powerful technique for learning image-level representations. In this work, we propose strategies for extending the contrastive learning framework for segmentation of volumetric medical images in the semi-supervised setting with limited annotations, by leveraging domain-specific and problem-specific cues. Specifically, we propose (1) novel contrasting strategies that leverage structural similarity across volumetric medical images (domain-specific cue) and (2) a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation (problem-specific cue). We carry out an extensive evaluation on three Magnetic Resonance Imaging (MRI) datasets. In the limited annotation setting, the proposed method yields substantial improvements compared to other self-supervision and semi-supervised learning techniques. When combined with a simple data augmentation technique, the proposed method reaches within 8% of benchmark performance using only two labeled MRI volumes for training, corresponding to only 4% (for ACDC) of the training data used to train the benchmark.
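As a sketch of the local contrastive idea, the function below contrasts matching spatial locations of two augmented views' feature maps against all other locations; it is an illustration under simplified assumptions (a single image and dense location-wise positives), not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(f1, f2, tau=0.1):
    """Local contrastive loss sketch. f1, f2: (C, H, W) feature maps of two
    augmented views of the same image; matching spatial locations are
    positives and all other locations in the second view are negatives."""
    C, H, W = f1.shape
    a = F.normalize(f1.reshape(C, -1).t(), dim=1)      # (H*W, C) anchor features
    b = F.normalize(f2.reshape(C, -1).t(), dim=1)      # (H*W, C) candidate features
    logits = a @ b.t() / tau                           # (H*W, H*W) similarities
    targets = torch.arange(H * W)                      # positive = same location
    return F.cross_entropy(logits, targets)

# Toy usage on 32-channel, 8x8 feature maps.
print(local_contrastive_loss(torch.randn(32, 8, 8), torch.randn(32, 8, 8)))
```

Training local features to be distinctive in this way is what makes the pre-trained representations useful for the per-pixel decisions required in segmentation.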
