
Unmanned Aerial Vehicles (UAVs) offer promising potential as communications node carriers, providing on-demand wireless connectivity to users. While existing literature presents various wireless channel models, it often overlooks the impact of UAV heading. This paper provides an experimental characterization of the Air-to-Ground (A2G) and Ground-to-Air (G2A) wireless channels in an open environment with no obstacles or interference, considering both the distance and the UAV heading. We analyze the Received Signal Strength Indicator (RSSI) and the TCP throughput between a ground user and a UAV, covering distances between 50 m and 500 m and considering different UAV headings. Additionally, we characterize the antenna's radiation pattern as a function of UAV heading. The paper provides valuable perspectives on the capabilities of UAVs in offering on-demand and dynamic wireless connectivity, and highlights the significance of considering UAV heading and antenna configurations in real-world scenarios.
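
A common way to condense such RSSI-versus-distance measurements is a log-distance path-loss fit. The sketch below shows this on made-up numbers: the distances match the paper's 50-500 m range, but the RSSI values, the reference distance, and the least-squares fit are illustrative assumptions, not the paper's data or method.

```python
# Hypothetical sketch: fitting a log-distance path-loss model to RSSI samples.
import numpy as np

# Illustrative (distance, mean RSSI) pairs over the 50-500 m range.
distances = np.array([50, 100, 200, 300, 400, 500], dtype=float)   # meters
rssi_dbm = np.array([-52, -58, -65, -69, -72, -74], dtype=float)   # assumed values

# Log-distance model: RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0), with d0 = 50 m.
d0 = 50.0
x = -10.0 * np.log10(distances / d0)        # regressor for the path-loss exponent
n, rssi_d0 = np.polyfit(x, rssi_dbm, 1)     # slope n, intercept RSSI(d0)
print(f"path-loss exponent n ~ {n:.2f}, RSSI at {d0:.0f} m ~ {rssi_d0:.1f} dBm")
```

A fit like this yields a single exponent per heading, which makes the heading-dependent antenna pattern directly comparable across orientations.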

Related Content

In recent years, non-terrestrial networks (NTNs) have emerged as a viable solution for providing ubiquitous connectivity for future wireless networks due to their ability to reach large geographical areas. However, the efficient integration and operation of an NTN with a classic terrestrial network (TN) is challenging due to the large number of parameters to tune. In this paper, we consider the downlink scenario of an integrated TN-NTN transmitting over the S band, comprising low-earth orbit (LEO) satellites overlapping a large-scale ground cellular network. We propose a new resource management framework that optimizes user equipment (UE) performance by properly controlling the spectrum allocation, the UE association, and the transmit power of ground base stations (BSs) and satellites. Our study reveals that, in rural scenarios, NTNs combined with the proposed radio resource management framework reduce the number of UEs that are out of coverage, highlighting the important role of NTNs in providing ubiquitous connectivity, and greatly improve the overall capacity of the network. Specifically, our solution yields a gain of more than 200% in mean data rate with respect to a network without satellites and to a standard integrated TN-NTN whose resource allocation follows the 3GPP recommendations.
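
To make the controlled quantities concrete, here is a deliberately simplified sketch of one decision such a framework takes: max-SNR UE association across ground BSs and a satellite, followed by a Shannon-rate estimate over each station's bandwidth. The received powers, the bandwidth split, and the greedy rule are assumptions for illustration; the paper's actual optimizer is more sophisticated.

```python
# Toy max-SNR UE association in an integrated TN-NTN downlink (not the
# paper's optimization framework; all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_ue, n_tn, n_ntn = 8, 3, 1
noise_dbm = -100.0                                # assumed noise floor

# Illustrative received powers (dBm) at each UE from ground BSs and one LEO satellite.
p_tn = rng.uniform(-95, -70, size=(n_ue, n_tn))
p_ntn = rng.uniform(-98, -85, size=(n_ue, n_ntn))
bw_hz = np.array([20e6] * n_tn + [10e6] * n_ntn)  # assumed spectrum split

p_all = np.hstack([p_tn, p_ntn])
best = p_all.argmax(axis=1)                       # greedy max-SNR association
snr_db = p_all[np.arange(n_ue), best] - noise_dbm
rate_bps = bw_hz[best] * np.log2(1 + 10 ** (snr_db / 10))   # Shannon capacity

for u in range(n_ue):
    kind = "TN BS" if best[u] < n_tn else "LEO satellite"
    print(f"UE {u}: {kind} {best[u]}, {rate_bps[u] / 1e6:.1f} Mbit/s")
```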

Context: Domain-Driven Design (DDD) addresses software challenges and is gaining attention for refactoring, reimplementation, and adoption. It centers on domain knowledge to solve complex business problems. Objective: This Systematic Literature Review (SLR) analyzes DDD research in software development to assess its effectiveness in solving architecture problems, identify challenges, and explore outcomes. Method: We selected 36 peer-reviewed studies and conducted quantitative and qualitative analysis. Results: DDD effectively improved software systems, with Ubiquitous Language, Bounded Context, and Domain Events the most emphasized concepts. DDD in microservices gained prominence for system decomposition. Some studies lacked empirical evaluations, and challenges were identified in onboarding and expertise. Conclusion: Adopting DDD benefits software development, involving stakeholders such as engineers, architects, managers, and domain experts. More empirical evaluations and open discussions of challenges are needed. Collaboration between academia and industry advances DDD adoption and knowledge transfer in projects.
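
For readers new to the building blocks the review highlights, here is a minimal, hypothetical Python illustration of a Domain Event published out of one Bounded Context; the names (Order, OrderPlaced, the billing handler) are invented for the example and are not drawn from any reviewed study.

```python
# Hypothetical DDD sketch: a Domain Event crossing a Bounded Context boundary.
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class OrderPlaced:
    """Domain Event: an immutable fact stated in the Ubiquitous Language."""
    order_id: str
    total: float

class Order:
    """Aggregate living inside a hypothetical 'Ordering' Bounded Context."""
    def __init__(self, order_id: str):
        self.order_id = order_id
        self.events: List[OrderPlaced] = []

    def place(self, total: float) -> None:
        # The state change and the event recording it stay together.
        self.events.append(OrderPlaced(self.order_id, total))

# Another context (e.g. billing) subscribes to events instead of sharing the model.
handlers: List[Callable[[OrderPlaced], None]] = [
    lambda e: print(f"billing context notified: {e.order_id} ({e.total:.2f})")
]

order = Order("A-42")
order.place(99.90)
for event in order.events:
    for handle in handlers:
        handle(event)
```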

The division of one physical 5G communications infrastructure into several virtual network slices with distinct characteristics such as bandwidth, latency, reliability, security, and service quality is known as 5G network slicing. Each slice is a separate logical network that meets the requirements of specific services or use cases, such as virtual reality, gaming, autonomous vehicles, or industrial automation. A network slice can be adjusted dynamically to meet the changing demands of its service, resulting in a more cost-effective and efficient approach to delivering diverse services and applications over a shared infrastructure. This paper assesses various machine learning techniques, including logistic regression, linear discriminant analysis, k-nearest neighbors, decision tree, random forest, support vector classifier (SVC), BernoulliNB, and GaussianNB models, to investigate the accuracy and precision of each model in detecting network slices. The paper also gives an overview of 5G network slicing.
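
The comparison can be reproduced in outline with scikit-learn. The sketch below trains the same model families, but on synthetic placeholder data, since the paper's network-slicing dataset is not included here.

```python
# Sketch of the model comparison on placeholder data (not the paper's dataset).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import BernoulliNB, GaussianNB

# Synthetic stand-in for slice-classification features and labels.
X, y = make_classification(n_samples=1000, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "k-NN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "SVC": SVC(),
    "BernoulliNB": BernoulliNB(),
    "GaussianNB": GaussianNB(),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name:20s} accuracy = {acc:.3f}")
```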

The emergence of Fifth-Generation (5G) communication networks has brought forth unprecedented connectivity with ultra-low latency, high data rates, and pervasive coverage. However, meeting the increasing demands of applications for seamless and high-quality communication, especially in rural areas, requires exploring innovative solutions that expand 5G beyond traditional terrestrial networks. Within the context of Non-Terrestrial Networks (NTNs), two promising technologies with vast potential are High Altitude Platforms (HAPs) and satellites. The combination of these two platforms can provide wide coverage and reliable communication in remote and inaccessible areas, and/or where terrestrial infrastructure is unavailable. This study evaluates the performance of the communication link between a Geostationary Equatorial Orbit (GEO) satellite and a HAP using the Internet of Drones Simulator (IoD-Sim), implemented in ns-3 and incorporating the 3GPP TR 38.811 channel model. The code base of IoD-Sim is extended to simulate HAPs, accounting for the Earth's curvature in various geographic coordinate systems, and considering realistic mobility patterns. A simulation campaign is conducted to evaluate the GEO-to-HAP communication link in terms of Signal-to-Noise Ratio (SNR) in two different scenarios, considering the mobility of the HAP, and as a function of the frequency and the distance.
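
For intuition on how the SNR scales with frequency and distance, a free-space link-budget sketch is shown below. The EIRP, receive gain, bandwidth, and noise temperature are assumed values, and free-space path loss is only a crude stand-in for the 3GPP TR 38.811 channel model used in the paper.

```python
# Back-of-the-envelope GEO-to-HAP SNR via free-space path loss (illustrative
# link-budget values; not the paper's 3GPP TR 38.811 model).
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

eirp_dbw = 60.0                                  # assumed satellite EIRP
rx_gain_db = 30.0                                # assumed HAP antenna gain
bw_hz = 50e6                                     # assumed bandwidth
noise_dbw = -228.6 + 10 * math.log10(290) + 10 * math.log10(bw_hz)  # kTB

d_m = 35_786e3 - 20e3    # GEO altitude minus a ~20 km HAP, sub-satellite point
for f_ghz in (2.0, 20.0):
    loss = fspl_db(d_m, f_ghz * 1e9)
    snr_db = eirp_dbw + rx_gain_db - loss - noise_dbw
    print(f"{f_ghz:5.1f} GHz: FSPL = {loss:.1f} dB, SNR ~ {snr_db:.1f} dB")
```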

Decentralized storage networks offer services with intriguing possibilities to reduce inequalities in an extremely centralized market. The challenge is to conceive incentives that are fair with regard to the income distribution among peers. Although many systems, such as Swarm, use tokens to incentivize forwarding data, little is known about the interplay between incentives, storage parameters, and network parameters. This paper aims to help fill this gap by developing Tit-for-Token (Tit4Tok), a framework for understanding fairness. Tit4Tok realizes a triad of altruism (acts of kindness such as debt forgiveness), reciprocity (Tit-for-Tat's mirroring cooperation), and monetary rewards as desired in the free market. Tit4Tok sheds light on incentives across the accounting and settlement layers. We present a comprehensive exploration of different factors when incentivized peers share bandwidth in a libp2p-based network, including the uneven distributions that emerge when gateways provide data to users outside the network. We quantified Income-Fairness with the Gini coefficient, using multiple model instantiations and diverse approaches to debt cancellation. We propose regular changes to the gateway neighborhood and show that our shuffling method improves the Income-Fairness from 0.66 to 0.16. We also quantified the non-negligible cost of tolerating free-riding (altruism). The performance is evaluated by extensive computer simulations and using an IPFS workload to study the effects of caching.
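
The Income-Fairness values quoted above are Gini coefficients, which can be computed independently of the system; a minimal sketch with made-up incomes follows.

```python
# Gini coefficient of peer incomes (incomes below are illustrative).
import numpy as np

def gini(incomes: np.ndarray) -> float:
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    # Sorted-rank identity for the mean-absolute-difference formulation.
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

equal = np.full(100, 10.0)
skewed = np.concatenate([np.full(90, 1.0), np.full(10, 91.0)])
print(f"equal incomes:  Gini = {gini(equal):.2f}")    # 0.00
print(f"skewed incomes: Gini = {gini(skewed):.2f}")   # 0.81, highly unequal
```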

We present MOTLEE, a distributed mobile multi-object tracking algorithm that enables a team of robots to collaboratively track moving objects in the presence of localization error. Existing approaches to distributed tracking make limiting assumptions about the relative spatial relationship of sensors, such as assuming a static sensor network or perfect localization. Instead, we develop an algorithm based on the Kalman-Consensus filter for distributed tracking that properly leverages localization uncertainty in collaborative tracking. Further, our method allows the team to maintain an accurate understanding of dynamic objects in the environment by realigning robot frames and incorporating frame alignment uncertainty into our object tracking formulation. We evaluate our method in hardware on a team of three mobile ground robots tracking four people. Compared to previous works that do not account for localization error, we show that MOTLEE is resilient to localization uncertainties, enabling accurate tracking in distributed, dynamic settings with mobile tracking sensors.
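
For orientation, below is a minimal single-step sketch in the spirit of the Kalman-Consensus filter that MOTLEE builds on: each robot fuses its own measurement with a consensus term pulling its estimate toward its neighbors' priors. The gains, noise levels, and fully connected three-robot team are assumptions; this is not the MOTLEE algorithm itself, which additionally handles frame alignment and its uncertainty.

```python
# One Kalman-Consensus-style update for three robots observing a static target.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([4.0, 2.0])            # true 2-D target position
H = np.eye(2)                            # robots observe position directly
R = 0.25 * np.eye(2)                     # measurement noise covariance
gamma = 0.1                              # assumed consensus gain

# Each robot's prior estimate and covariance (priors deliberately perturbed).
priors = [target + rng.normal(0.0, 1.0, 2) for _ in range(3)]
P = [np.eye(2) for _ in range(3)]

posteriors = []
for i, (x_bar, P_i) in enumerate(zip(priors, P)):
    z = target + rng.normal(0.0, 0.5, 2)                # local noisy measurement
    K = P_i @ H.T @ np.linalg.inv(H @ P_i @ H.T + R)    # Kalman gain
    consensus = sum(priors[j] - x_bar for j in range(3) if j != i)
    posteriors.append(x_bar + K @ (z - H @ x_bar) + gamma * consensus)

for i, x in enumerate(posteriors):
    print(f"robot {i}: estimate = {np.round(x, 2)}, "
          f"error = {np.linalg.norm(x - target):.2f}")
```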

Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through methods such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them based on the explainable methods they use. We further provide the common performance metrics for GNN explanations and point out several future research directions.
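
As a toy illustration of one family of surveyed techniques, the sketch below computes a gradient-based edge saliency for a single-layer, unnormalized GCN in plain NumPy. Real explainers such as GNNExplainer learn edge masks rather than using raw gradients; everything here, from the graph to the single linear layer, is a simplified assumption.

```python
# Toy gradient-based edge saliency for a one-layer, unnormalized GCN.
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 5, 4, 3
A = (rng.random((n, n)) < 0.4).astype(float)   # random directed adjacency
np.fill_diagonal(A, 1.0)                       # add self-loops
X = rng.normal(size=(n, d))                    # node features
W = rng.normal(size=(d, h))                    # layer weights

t = 2                                          # node whose prediction we explain
score = (A @ X @ W)[t].sum()                   # scalar score for node t

# Analytic gradient d(score)/dA: only row t is nonzero, since the score
# depends on A through y_t = A[t] @ (X @ W) alone.
grad_A = np.zeros_like(A)
grad_A[t] = (X @ W).sum(axis=1)

saliency = np.abs(grad_A) * A                  # zero out non-existent edges
for j in np.argsort(-saliency[t]):
    if A[t, j]:
        print(f"edge {j} -> {t}: saliency = {saliency[t, j]:.3f}")
```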

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. The paper then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. It next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, their compatibility with major deep learning frameworks, and their extensibility for new modules designed by users. The paper concludes with problems that arise when HPO is applied to deep learning, a comparison of optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
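
Among the optimization algorithms such a review covers, random search is the simplest; a self-contained sketch follows, with a placeholder objective standing in for a real validation loss and an invented two-parameter search space.

```python
# Minimal random-search HPO sketch (placeholder objective, assumed search space).
import math
import random

def validation_loss(lr: float, hidden: int) -> float:
    # Placeholder objective with its optimum near lr=1e-2, hidden=128.
    return (math.log10(lr) + 2) ** 2 + ((hidden - 128) / 128) ** 2

space = {
    "lr": lambda: 10 ** random.uniform(-5, -1),     # log-uniform learning rate
    "hidden": lambda: random.choice([32, 64, 128, 256, 512]),
}

random.seed(0)
best = None
for _ in range(50):                                 # fixed trial budget
    cfg = {name: draw() for name, draw in space.items()}
    loss = validation_loss(**cfg)
    if best is None or loss < best[0]:
        best = (loss, cfg)
print(f"best loss {best[0]:.4f} with config {best[1]}")
```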

Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2 GB vs. 11.2 GB). Furthermore, for training a 4-layer GCN on this data, our algorithm finishes in around 36 minutes, while all existing GCN training algorithms fail due to out-of-memory issues. Finally, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, whereas the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
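
The core idea can be sketched in a few lines of NumPy: partition the nodes, and at each step run the GCN forward pass only on one cluster's induced subgraph, so between-cluster edges are never touched. The random partition below is a placeholder for a real graph-clustering algorithm such as METIS, and the single untrained layer omits the SGD loop entirely.

```python
# Sketch of Cluster-GCN's mini-batch idea on a tiny random graph.
import numpy as np

rng = np.random.default_rng(0)
n, d, h, n_clusters = 12, 8, 4, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                      # make the graph undirected
np.fill_diagonal(A, 1.0)                    # add self-loops
X = rng.normal(size=(n, d))                 # node features
W = rng.normal(size=(d, h))                 # layer weights

# Placeholder partition; a real implementation would use graph clustering.
clusters = np.array_split(rng.permutation(n), n_clusters)

for step, nodes in enumerate(clusters):
    A_sub = A[np.ix_(nodes, nodes)]         # drop all between-cluster edges
    deg = A_sub.sum(axis=1)
    A_hat = A_sub / deg[:, None]            # row-normalized propagation
    H_sub = np.maximum(A_hat @ X[nodes] @ W, 0)   # one GCN layer with ReLU
    print(f"step {step}: {len(nodes)} nodes, embedding shape {H_sub.shape}")
```

Because each step only materializes one cluster's adjacency and features, memory scales with the cluster size rather than with the whole graph.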

The prevalence of networked sensors and actuators in many real-world systems, such as smart buildings, factories, power plants, and data centers, generates substantial amounts of multivariate time series data for these systems. The rich sensor data can be continuously monitored for intrusion events through anomaly detection. However, conventional threshold-based anomaly detection methods are inadequate due to the dynamic complexities of these systems, while supervised machine learning methods are unable to exploit the large amounts of data due to the lack of labels. On the other hand, current unsupervised machine learning approaches have not fully exploited the spatial-temporal correlations and other dependencies amongst the multiple variables (sensors/actuators) in the system for detecting anomalies. In this work, we propose an unsupervised multivariate anomaly detection method based on Generative Adversarial Networks (GANs). Instead of treating each data stream independently, our proposed MAD-GAN framework considers the entire variable set concurrently to capture the latent interactions amongst the variables. We also fully exploit both the generator and the discriminator produced by the GAN, using a novel anomaly score called the DR-score to detect anomalies through discrimination and reconstruction. We have tested MAD-GAN on two recent datasets collected from real-world CPS testbeds: the Secure Water Treatment (SWaT) and the Water Distribution (WADI) datasets. Our experimental results show that MAD-GAN is effective in reporting anomalies caused by various cyber-intrusions in these complex real-world systems.
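
A hedged sketch of a combined discrimination-plus-reconstruction score in the spirit of the DR-score is given below. The stand-in generator, discriminator, inversion-by-random-search step, and mixing weight alpha are all illustrative assumptions rather than MAD-GAN's actual LSTM-based networks and inversion procedure.

```python
# Illustrative discrimination + reconstruction anomaly score (not MAD-GAN itself).
import numpy as np

rng = np.random.default_rng(0)
G_W = rng.standard_normal((4, 6))     # frozen weights of a stand-in generator

def generator(z: np.ndarray) -> np.ndarray:
    return np.tanh(z @ G_W)           # maps a latent z to a 6-variable window

def discriminator(x: np.ndarray) -> float:
    # Stand-in trained D: outputs near 1 for "looks real", near 0 otherwise.
    return float(1 / (1 + np.exp(np.linalg.norm(x) - 2.0)))

def dr_score(x: np.ndarray, alpha: float = 0.5, n_trials: int = 200) -> float:
    # Invert G by random search (a stand-in for gradient-based inversion)
    # to get a reconstruction error, then mix it with the discriminator term.
    best_err = min(np.linalg.norm(generator(rng.standard_normal(4)) - x)
                   for _ in range(n_trials))
    return alpha * (1.0 - discriminator(x)) + (1 - alpha) * best_err

normal_window = 0.1 * np.tanh(rng.standard_normal(6))
attack_window = np.full(6, 5.0)       # grossly out-of-range sensor readings
print(f"normal score: {dr_score(normal_window):.3f}")
print(f"attack score: {dr_score(attack_window):.3f}")
```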
