
Mobile edge computing (MEC) integrated with multiple radio access technologies (RATs) is a promising technique for satisfying the growing low-latency computation demand of emerging intelligent internet of things (IoT) applications. Under the distributed MapReduce framework, this paper investigates the joint RAT selection and transceiver design for over-the-air (OTA) aggregation of intermediate values (IVAs) in wireless multiuser MEC systems, while taking into account the energy budget constraint for the local computing and IVA transmission per wireless device (WD). We aim to minimize the weighted sum of the computation mean squared error (MSE) of the aggregated IVA at the RAT receivers, the WDs' IVA transmission cost, and the associated transmission time delay, which is a mixed-integer and non-convex problem. Based on the Lagrange duality method and primal decomposition, we develop a low-complexity algorithm by solving the WDs' RAT selection problem, the WDs' transmit coefficients optimization problem, and the aggregation beamforming problem. Extensive numerical results are provided to demonstrate the effectiveness and merit of our proposed algorithm as compared with other existing schemes.
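
As a rough illustration of the over-the-air aggregation model behind the computation MSE, the following numpy sketch superposes the devices' scaled IVAs over a noisy multi-antenna channel and compares the beamformed estimate with the true sum. The channel, transmit coefficients, and beamformer below are placeholder values chosen for illustration, not the optimized quantities from the paper.

```python
# A minimal numpy sketch of the over-the-air (OTA) aggregation signal model,
# with hypothetical names: h[k] is device k's channel vector, b[k] its transmit
# coefficient, and m the receive (aggregation) beamformer. Placeholder values only.
import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 8                                  # wireless devices, receive antennas
h = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))   # channels
s = rng.normal(size=K)                       # real-valued IVAs to be summed
b = np.ones(K, dtype=complex)                # per-device transmit coefficients
m = np.ones(N, dtype=complex) / N            # aggregation beamformer
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(0.05)

# Received signal: superposition of all devices' scaled IVAs plus noise.
y = sum(h[k] * b[k] * s[k] for k in range(K)) + noise
estimate = np.real(np.vdot(m, y))            # m^H y
target = s.sum()                             # desired aggregated IVA
print("squared computation error:", (estimate - target) ** 2)
```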

Related Content

In computer networking, a heterogeneous network is a network that connects computers and other devices whose operating systems and protocols differ significantly. For example, a local area network (LAN) that connects Microsoft Windows- and Linux-based personal computers with Apple Macintosh computers is heterogeneous. The term is also used for wireless networks that employ different access technologies. For example, a wireless network that provides service through a wireless LAN and can maintain that service when switching to a cellular network is called a wireless heterogeneous network.

We develop two distributed downlink resource allocation algorithms for user-centric, cell-free, spatially distributed, multiple-input multiple-output (MIMO) networks. In such networks, each user is served by a subset of nearby transmitters that we call distributed units (DUs). The operation of the DUs in a region is controlled by a central unit (CU). Our first scheme is implemented at the DUs, while the second is implemented at the CUs controlling these DUs. We define a hybrid quality-of-service metric that enables distributed optimization of system resources in a proportionally fair manner. Specifically, each of our algorithms performs user scheduling, beamforming, and power control while accounting for channel estimation errors. Importantly, our algorithms do not require information exchange amongst DUs (or CUs) in the DU-distributed (or CU-distributed) system, while still converging smoothly. Our results show that the CU-distributed system provides 1.3 to 1.8 times the network throughput of the DU-distributed system, with minor increases in complexity and front-haul load, and substantial gains over benchmark schemes such as local zero-forcing. We also analyze the trade-offs of the CU-distributed system, highlighting the significance of deploying multiple CUs in user-centric cell-free networks.
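
To make the proportional-fair scheduling ingredient concrete, here is a toy single-cell sketch in which one user is served per slot according to the ratio of instantaneous rate to long-term average throughput. The rates, the EWMA factor, and the single-DU setting are simplifying assumptions for illustration, not the paper's hybrid QoS metric or beamforming design.

```python
# A toy proportional-fair scheduler: pick the user with the largest ratio of
# instantaneous rate to long-term average throughput, then update the averages.
import numpy as np

rng = np.random.default_rng(1)
U, T, beta = 6, 200, 0.05              # users, slots, EWMA factor
avg = np.full(U, 1e-3)                 # long-term average throughput per user

for t in range(T):
    inst = rng.rayleigh(size=U)        # stand-in for achievable instantaneous rates
    u = int(np.argmax(inst / avg))     # proportional-fair selection metric
    served = np.zeros(U)
    served[u] = inst[u]
    avg = (1 - beta) * avg + beta * served   # update long-term averages
print("normalized throughput per user:", np.round(avg / avg.sum(), 3))
```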

In this paper, we propose an approach for constructing a multi-layer, multi-orbit space information network (SIN) to provide high-speed continuous broadband connectivity for space missions (nanosatellite terminals) from the emerging space-based Internet providers. This notion is motivated by rapid developments in satellite technologies, namely satellite miniaturization and reusable rocket launches, as well as the growing number of nanosatellite constellations in lower orbits for downstream space applications such as earth observation, remote sensing, and Internet of Things (IoT) data collection. Specifically, space-based Internet providers, such as Starlink, OneWeb, and SES O3b, can be utilized for broadband connectivity directly to/from the nanosatellites, which allows a larger degree of connectivity in space network topologies. Moreover, this arrangement is more economically efficient and eliminates the need for an excessive number of ground stations while achieving real-time and reliable space communications. This objective necessitates developing suitable radio access schemes and efficient, scalable space backhauling using inter-satellite links (ISLs) and inter-orbit links (IOLs). In particular, service-oriented radio access methods, together with a software-defined networking (SDN)-based architecture employing optimal routing mechanisms over multiple ISLs and IOLs, are the essential enablers of this concept. Developing this symbiotic interaction between versatile satellite nodes across different orbits will thus lead to a breakthrough in the way future downstream space missions and satellite networks are designed and operated.
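
As a small illustration of shortest-path backhauling over ISLs and IOLs, the sketch below runs Dijkstra on a toy satellite graph whose node names and link costs are invented for the example; it is not the paper's SDN-based routing architecture.

```python
# Shortest-path routing over a toy space network graph; edge weights stand in
# for hypothetical ISL/IOL link latencies.
import heapq

def dijkstra(graph, src):
    """graph: dict node -> list of (neighbor, cost). Returns cost-to-node map."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A nanosatellite ("nano1") reaching a higher-orbit relay via ISLs and an IOL.
isl_iol_graph = {
    "nano1": [("leo_a", 2.0), ("leo_b", 3.5)],
    "leo_a": [("meo_relay", 9.0), ("leo_b", 1.0), ("nano1", 2.0)],
    "leo_b": [("meo_relay", 7.5), ("leo_a", 1.0), ("nano1", 3.5)],
    "meo_relay": [("leo_a", 9.0), ("leo_b", 7.5)],
}
print(dijkstra(isl_iol_graph, "nano1"))
```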

The efficient combination of collaborative machine learning and wireless communication, known as federated edge learning (FEEL), has spawned a series of next-generation intelligent applications. However, owing to the openness of network connections, the FEEL framework typically involves hundreds of remote devices (or clients), which incurs expensive communication costs that resource-constrained FEEL systems can ill afford. To address this issue, we propose a distributed approximate Newton-type algorithm with fast convergence to alleviate FEEL's communication-resource constraints. Specifically, the proposed algorithm builds on the distributed L-BFGS algorithm and allows each client to approximate the high-cost Hessian matrix by computing a low-cost Fisher matrix in a distributed manner, thereby finding a "better" descent direction and speeding up convergence. We further prove that the proposed algorithm converges linearly in both strongly convex and non-convex cases and analyze its computational and communication complexity. In addition, because the connected remote devices are heterogeneous, FEEL faces the challenge of non-IID (not independent and identically distributed) data. To this end, we design a simple but effective training scheme, FedOVA, which addresses this statistical heterogeneity: it first decomposes a multi-class classification problem into simpler binary classification problems and then combines their respective outputs using ensemble learning. The scheme integrates well with our communication-efficient algorithm to serve FEEL. Numerical results verify the effectiveness and superiority of the proposed algorithm.
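
The one-vs-all idea behind FedOVA can be illustrated with a small centralized sketch: decompose a multi-class problem into binary problems and combine the binary scores by taking the largest. The toy data, the logistic-regression base learner, and the argmax combination are illustrative assumptions, not the exact federated training procedure.

```python
# One-vs-all decomposition plus an argmax ensemble of the binary outputs,
# on synthetic data (centralized toy example, no federation).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
y = rng.integers(0, 3, size=300)                 # 3 classes

binary_models = []
for c in range(3):
    clf = LogisticRegression(max_iter=200).fit(X, (y == c).astype(int))  # class c vs rest
    binary_models.append(clf)

scores = np.column_stack([m.predict_proba(X)[:, 1] for m in binary_models])
y_pred = scores.argmax(axis=1)                   # ensemble of binary outputs
print("train accuracy:", (y_pred == y).mean())
```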

Next-generation satellite systems require more flexibility in resource management such that available radio resources can be dynamically allocated to meet time-varying and non-uniform traffic demands. Considering potential benefits of beam hopping (BH) and non-orthogonal multiple access (NOMA), we exploit the time-domain flexibility in multi-beam satellite systems by optimizing BH design, and enhance the power-domain flexibility via NOMA. In this paper, we investigate the synergy and mutual influence of beam hopping and NOMA. We jointly optimize power allocation, beam scheduling, and terminal-timeslot assignment to minimize the gap between requested traffic demand and offered capacity. In the solution development, we formally prove the NP-hardness of the optimization problem. Next, we develop a bounding scheme to tightly gauge the global optimum and propose a suboptimal algorithm to enable efficient resource assignment. Numerical results demonstrate the benefits of combining NOMA and BH, and validate the superiority of the proposed BH-NOMA schemes over benchmarks.
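
A toy greedy sketch of beam hopping, shown below, illuminates the beams with the largest unmet demand in each timeslot and tracks the remaining demand-capacity gap. The capacities, illumination limit, and window length are made-up numbers, NOMA power allocation is not modeled, and the paper's joint optimization is far more involved.

```python
# Greedy beam-hopping illustration: serve the beams with the largest unmet
# demand each slot and report the residual demand-capacity gap.
import numpy as np

demand = np.array([30.0, 10.0, 25.0, 5.0, 18.0])  # per-beam traffic demand
capacity_per_slot = 4.0                            # capacity offered by an active beam
active_beams_per_slot = 2                          # illumination constraint
T = 10                                             # hopping window length

remaining = demand.copy()
for t in range(T):
    chosen = np.argsort(remaining)[-active_beams_per_slot:]   # largest unmet demand
    remaining[chosen] = np.maximum(remaining[chosen] - capacity_per_slot, 0.0)

print("unserved demand per beam:", remaining)
print("total demand-capacity gap:", remaining.sum())
```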

The proliferation of Internet-of-Things (IoT) devices and cloud-computing applications over siloed data centers is motivating renewed interest in the collaborative training of a shared model by multiple individual clients via federated learning (FL). To improve the communication efficiency of FL implementations in wireless systems, recent works have proposed compression and dimension reduction mechanisms, along with digital and analog transmission schemes that account for channel noise, fading, and interference. The prior art has mainly focused on star topologies consisting of distributed clients and a central server. In contrast, this paper studies FL over wireless device-to-device (D2D) networks by providing theoretical insights into the performance of digital and analog implementations of decentralized stochastic gradient descent (DSGD). First, we introduce generic digital and analog wireless implementations of communication-efficient DSGD algorithms, leveraging random linear coding (RLC) for compression and over-the-air computation (AirComp) for simultaneous analog transmissions. Next, under the assumptions of convexity and connectivity, we provide convergence bounds for both implementations. The results demonstrate the dependence of the optimality gap on the connectivity and on the signal-to-noise ratio (SNR) levels in the network. The analysis is corroborated by experiments on an image-classification task.
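
The following numpy sketch conveys the flavor of analog decentralized SGD with random-linear-coding compression and over-the-air superposition, on a toy least-squares problem over a ring of devices. The mixing rule, pseudo-inverse decoder, and noise scaling are simplifying assumptions, not the paper's exact RLC/AirComp scheme.

```python
# Decentralized SGD sketch: each device compresses its gradient with a shared
# random linear code, neighbors' coded gradients superpose (AirComp) with noise,
# and the device decodes an approximate average gradient for its local step.
import numpy as np

rng = np.random.default_rng(3)
n_dev, d, m, snr = 5, 10, 4, 20.0            # devices, model dim, coded dim, linear SNR
A = [rng.normal(size=(20, d)) for _ in range(n_dev)]
b = [Ai @ rng.normal(size=d) for Ai in A]    # heterogeneous local least-squares data
x = np.zeros((n_dev, d))                     # one model copy per device
lr = 0.01

for it in range(200):
    C = rng.normal(size=(m, d)) / np.sqrt(m)              # shared RLC projection
    grads = np.stack([Ai.T @ (Ai @ xi - bi) / len(bi)
                      for Ai, bi, xi in zip(A, b, x)])
    coded = grads @ C.T                                    # RLC compression (d -> m)
    for i in range(n_dev):
        nbrs = [(i - 1) % n_dev, (i + 1) % n_dev]          # ring topology
        # AirComp: neighbors' coded gradients superpose over the air, plus noise.
        rx = coded[nbrs].sum(axis=0) + rng.normal(size=m) / np.sqrt(snr)
        decoded = np.linalg.pinv(C) @ (rx + coded[i]) / (len(nbrs) + 1)
        x[i] -= lr * decoded                               # local DSGD step
print("mean local residual:",
      np.mean([np.linalg.norm(Ai @ xi - bi) for Ai, bi, xi in zip(A, b, x)]))
```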

The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, which remains decentralized because of edge-computation requirements and licensing (data access) issues. While federated learning (FL) has emerged as a framework for training models without direct data sharing and exchange, effectively modeling complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model, the Cross-Node Federated Graph Neural Network (CNFGNN), which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning: data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on edge devices. Experiments on a traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring modest communication cost.
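
The client/server split in cross-node federated spatio-temporal modeling can be sketched as follows: each node encodes its own history locally, and the server applies one graph-convolution step over the resulting embeddings. The encoder, weights, and toy graph below are placeholders, not CNFGNN's GRU/GNN modules or its alternating-optimization procedure.

```python
# Schematic split: on-device temporal encoding, server-side spatial mixing.
import numpy as np

rng = np.random.default_rng(4)
N, T, d = 6, 24, 8                               # nodes, history length, embedding dim
series = rng.normal(size=(N, T))                 # each node's local traffic history
adj = (rng.random((N, N)) < 0.4).astype(float)   # toy sensor graph
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 1.0)
adj_norm = adj / adj.sum(axis=1, keepdims=True)  # row-normalized adjacency

def local_encoder(x, W):
    """Stands in for the on-device temporal model (e.g. a GRU)."""
    return np.tanh(x @ W)

W_enc = rng.normal(size=(T, d)) * 0.1            # shared encoder weights
H_local = local_encoder(series, W_enc)           # computed locally on each device
H_spatial = adj_norm @ H_local                   # server-side spatial mixing (one GNN hop)
print("server-side embeddings shape:", H_spatial.shape)
```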

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have now received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.

Network embedding (or graph embedding) has been widely used in many real-world applications. However, existing methods mainly focus on networks with single-typed nodes/edges and cannot scale well to large networks. Many real-world networks consist of billions of nodes and edges of multiple types, with each node associated with different attributes. In this paper, we formalize the problem of embedding learning for the Attributed Multiplex Heterogeneous Network and propose a unified framework to address it. The framework supports both transductive and inductive learning. We also provide a theoretical analysis of the proposed framework, showing its connection with previous works and proving its better generalization ability. We conduct systematic evaluations of the proposed framework on four challenging datasets of different genres: Amazon, YouTube, Twitter, and Alibaba. Experimental results demonstrate that, with the embeddings learned by the proposed framework, we achieve statistically significant improvements (e.g., 5.99-28.23% lift in F1 score; p<<0.01, t-test) over previous state-of-the-art methods for link prediction. The framework has also been successfully deployed in the recommendation system of Alibaba, a world-leading e-commerce company. Results of offline A/B tests on product recommendation further confirm the effectiveness and efficiency of the framework in practice.
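
One common idea for embedding a multiplex heterogeneous network, combining a shared base embedding with edge-type-specific adjustments, can be sketched as below. The dimensions, projection matrices, and dot-product link score are illustrative assumptions, not the paper's exact model.

```python
# Per-node base embedding plus edge-type-specific parts, combined per edge type.
import numpy as np

rng = np.random.default_rng(5)
n_nodes, d_base, d_edge, n_types = 50, 16, 8, 3
base = rng.normal(size=(n_nodes, d_base))                   # shared base embeddings
edge_spec = rng.normal(size=(n_types, n_nodes, d_edge))     # per-edge-type embeddings
W = rng.normal(size=(n_types, d_edge, d_base)) * 0.1        # projection per edge type

def overall_embedding(node, edge_type):
    """Combine base and edge-type-specific parts for a (node, edge type) pair."""
    return base[node] + edge_spec[edge_type, node] @ W[edge_type]

# Link-prediction style score for nodes (u, v) under edge type r: dot product.
u, v, r = 0, 1, 2
print(float(overall_embedding(u, r) @ overall_embedding(v, r)))
```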

Network embedding has attracted considerable research attention recently. However, existing methods are incapable of handling billion-scale networks because they are computationally expensive and, at the same time, difficult to accelerate with distributed computing schemes. To address these problems, we propose RandNE, a novel and simple billion-scale network embedding method. Specifically, we propose a Gaussian random projection approach to map the network into a low-dimensional embedding space while preserving the high-order proximities between nodes. To reduce the time complexity, we design an iterative projection procedure that avoids explicitly calculating the high-order proximities. Theoretical analysis shows that our method is extremely efficient and friendly to distributed computing schemes, requiring no communication during the calculation. We demonstrate the efficacy of RandNE over state-of-the-art methods in network reconstruction and link prediction tasks on multiple datasets of different scales, ranging from thousands to billions of nodes and edges.
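
The random-projection idea is easy to sketch: draw a Gaussian projection, then iterate sparse adjacency multiplications so higher-order proximities enter the embedding without ever being formed explicitly. The order, weights, and row normalization below are illustrative choices, not necessarily those of RandNE.

```python
# Gaussian random projection embedding with iterative adjacency multiplications.
import numpy as np
import scipy.sparse as sp

def random_projection_embed(A, dim=32, order=3, alphas=(1.0, 1.0, 0.5, 0.25), seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    deg = np.asarray(A.sum(axis=1)).ravel()
    S = sp.diags(1.0 / np.maximum(deg, 1)) @ A              # row-normalized adjacency
    U = rng.normal(scale=1.0 / np.sqrt(dim), size=(n, dim)) # Gaussian random projection
    emb = alphas[0] * U
    for i in range(1, order + 1):
        U = S @ U                                           # one more hop of proximity
        emb = emb + alphas[i] * U
    return emb

# Toy graph: a 100-node ring.
rows = np.arange(100)
cols = (rows + 1) % 100
A = sp.coo_matrix((np.ones(100), (rows, cols)), shape=(100, 100)).tocsr()
A = A + A.T
print(random_projection_embed(A).shape)
```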

When deploying resource-intensive signal processing applications in wireless sensor or mesh networks, distributing processing blocks over multiple nodes becomes promising. Such distributed applications need to solve the placement problem (which block to run on which node), the routing problem (which link between blocks to map on which path between nodes), and the scheduling problem (which transmission is active when). We investigate a variant where the application graph may contain feedback loops and we exploit wireless networks' inherent multicast advantage. Thus, we propose Multicast-Aware Routing for Virtual network Embedding with Loops in Overlays (MARVELO) to find efficient solutions for scheduling and routing under a detailed interference model. We cast this as a mixed integer quadratically constrained optimisation problem and provide an efficient heuristic. Simulations show that our approach handles complex scenarios quickly.
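
As a rough illustration of the placement subproblem, the greedy sketch below assigns each processing block to the node that already hosts most of its neighbors in the application graph, so as to reduce inter-node traffic. It is a simple heuristic with invented blocks and capacities, not MARVELO's mixed-integer formulation or its interference-aware scheduling.

```python
# Greedy block-to-node placement that favors co-locating connected blocks.
from collections import defaultdict

app_edges = [("src", "filter"), ("filter", "fft"), ("fft", "detect"),
             ("detect", "filter")]               # note the feedback loop
nodes = ["n1", "n2"]
capacity = {"n1": 2, "n2": 2}                    # max blocks hosted per node

placement, load = {}, defaultdict(int)
for block in ["src", "filter", "fft", "detect"]:
    best, best_score = None, -1
    for n in nodes:
        if load[n] >= capacity[n]:
            continue
        # score = number of already-placed neighbors co-located on node n
        score = sum(1 for a, c in app_edges
                    if (a == block and placement.get(c) == n)
                    or (c == block and placement.get(a) == n))
        if score > best_score:
            best, best_score = n, score
    placement[block] = best
    load[best] += 1

inter_node = sum(1 for a, c in app_edges if placement[a] != placement[c])
print(placement, "inter-node transmissions:", inter_node)
```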
