
The massive multiple-input multiple-output (MIMO) transmission technology has recently attracted much attention in non-geostationary, e.g., low earth orbit (LEO), satellite communication (SATCOM) systems, since it can significantly improve energy efficiency (EE) and spectral efficiency. In this work, we develop a hybrid analog/digital precoding technique for the massive MIMO LEO SATCOM downlink that reduces onboard hardware complexity and power consumption. In the proposed scheme, the analog precoder is implemented via a more practical twin-resolution phase shifting (TRPS) network to strike a careful tradeoff between power consumption and array gain. In addition, we study the impact of the distortion introduced by nonlinear power amplifiers (NPAs) on the system design. By jointly considering all the above factors, we propose an efficient algorithmic approach for the TRPS-based hybrid precoding problem with NPAs. Numerical results show the EE gains obtained by accounting for the nonlinear distortion and the performance superiority of the proposed TRPS-based hybrid precoding scheme over the baselines.
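A minimal sketch of the twin-resolution idea discussed above: projecting a target analog precoder onto the set of unit-modulus matrices whose phases are realizable by a mix of high- and low-resolution phase shifters. The split of antennas between resolutions and the bit widths are hypothetical illustration parameters, not values from the paper, and this is not claimed to be the paper's actual algorithm.

```python
# Illustrative TRPS quantization sketch (assumptions: first n_high rows use b_high-bit
# phase shifters, the rest use b_low-bit ones).
import numpy as np

def quantize_phases(F, n_high, b_high=4, b_low=1):
    """Project an analog precoder F (N_t x N_rf, complex) onto the TRPS feasible set."""
    N_t, N_rf = F.shape
    F_q = np.empty_like(F)
    for i in range(N_t):
        bits = b_high if i < n_high else b_low
        levels = 2 ** bits
        grid = 2 * np.pi * np.arange(levels) / levels          # admissible phases
        phase = np.angle(F[i, :])                              # target phases
        idx = np.argmin(np.abs(np.exp(1j * phase[:, None]) - np.exp(1j * grid[None, :])), axis=1)
        F_q[i, :] = np.exp(1j * grid[idx])                     # unit-modulus entries
    return F_q

# Example: random target precoder for 16 antennas, 4 RF chains, half high-resolution PSs.
rng = np.random.default_rng(0)
F = np.exp(1j * rng.uniform(0, 2 * np.pi, (16, 4)))
F_q = quantize_phases(F, n_high=8)
```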

Related Content

In the modern digital world, a user of a smart system is surrounded by, and observed by, a number of tiny IoT devices round the clock almost everywhere. Unfortunately, the ability of these devices to sense and share various physical parameters, although it plays a key role in these smart systems, also poses a threat to users' privacy. Existing solutions for privacy-preserving computation in decentralized systems either use overly complex cryptographic techniques or require an extremely high degree of message passing, and hence are not suitable for the resource-constrained IoT devices that constitute a significant fraction of a smart system. In this work, we propose LiPI, a novel lightweight strategy for privacy-preserving data aggregation in low-power IoT systems. The strategy is based on decentralized and collaborative data obfuscation and does not depend on any trusted third party. In addition, besides minimizing the communication requirements, we make appropriate use of recent advances in Synchronous-Transmission (ST)-based protocols to accomplish this goal efficiently. Extensive evaluation based on comprehensive experiments in both simulation platforms and publicly available WSN/IoT testbeds demonstrates that our strategy works up to 51.7% faster and consumes 50.5% less energy compared with existing state-of-the-art strategies.
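One common way to realize decentralized, collaborative obfuscation without a trusted third party is pairwise additive masking, where masks cancel in the network-wide sum. The sketch below only illustrates this general idea; it is not claimed to be LiPI's actual protocol, and the node values and mask range are made up.

```python
# Pairwise additive masking sketch: masks cancel in the aggregate, so individual
# readings stay hidden while the sum is preserved.
import random

def pairwise_masks(node_ids, mask_range=1_000_000, seed=42):
    """Each pair (i, j) shares a random mask; one adds it, the other subtracts it."""
    rng = random.Random(seed)                    # stands in for a pairwise shared secret
    masks = {i: 0 for i in node_ids}
    for a in range(len(node_ids)):
        for b in range(a + 1, len(node_ids)):
            r = rng.randint(-mask_range, mask_range)
            masks[node_ids[a]] += r
            masks[node_ids[b]] -= r
    return masks

readings = {1: 23, 2: 19, 3: 31, 4: 27}                       # true sensor values (private)
masks = pairwise_masks(list(readings))
obfuscated = {i: readings[i] + masks[i] for i in readings}    # what each node shares
assert sum(obfuscated.values()) == sum(readings.values())     # aggregate is preserved
```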

Rate Splitting Multiple Access (RSMA) has emerged as an effective interference management scheme for applications that require high data rates. Although RSMA has shown advantages in rate enhancement and spectral efficiency, it is not yet ready for latency-sensitive applications such as virtual reality streaming, an essential building block of future 6G networks. Unlike conventional high-definition streaming, virtual reality streaming not only imposes stringent latency requirements but also demands computation capability at the transmitter to quickly respond to dynamic user demands. Thus, conventional RSMA approaches usually fail to address the challenges caused by the computational demands at the transmitter, let alone the dynamic nature of virtual reality streaming applications. To overcome the aforementioned challenges, we first formulate RSMA-assisted virtual reality streaming as a joint communication and computation optimization problem. A novel multicast approach is then proposed to cluster users into different groups based on a Field-of-View metric and to transmit multicast streams in a hierarchical manner. After that, we propose a deep reinforcement learning approach to obtain the solution of the optimization problem. Extensive simulations show that our framework can meet millisecond-level latency requirements, much lower than those achieved by the baseline schemes.
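A hedged sketch of one way the Field-of-View-based grouping could look: each user's FoV is modelled as a set of viewport tiles, and users are greedily merged into a group when their overlap with the group's common tiles is large enough. The tile model and the Jaccard threshold are illustrative assumptions, not the paper's exact clustering rule.

```python
# Greedy FoV-overlap clustering sketch (assumed tile-set FoV model).
def fov_clusters(user_tiles, overlap_threshold=0.5):
    """user_tiles: dict mapping user id -> set of viewed tile indices."""
    groups = []                                   # each group: [common_tiles, member_ids]
    for uid, tiles in user_tiles.items():
        for group in groups:
            common, members = group
            jaccard = len(common & tiles) / len(common | tiles)
            if jaccard >= overlap_threshold:
                group[0] &= tiles                 # multicast only the shared tiles
                members.append(uid)
                break
        else:
            groups.append([set(tiles), [uid]])
    return groups

users = {"u1": {1, 2, 3, 4}, "u2": {2, 3, 4, 5}, "u3": {10, 11, 12}}
print(fov_clusters(users))    # u1 and u2 share most tiles; u3 forms its own group
```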

This paper investigates the accuracy of recently proposed stochastic geometry-based models of low earth orbit (LEO) satellite networks. In particular, we use a Wasserstein-distance-inspired method to analyze the distances between different models, including the Fibonacci lattice and orbit models. We propose an algorithm to calculate the distance between the generated point sets. We then test the algorithm's performance and use numerical results to analyze the distance between the stochastic geometry model and other, more widely accepted models.
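To make the point-set comparison concrete, the sketch below computes an empirical Wasserstein distance between two equal-size constellations on the unit sphere by solving an optimal assignment over great-circle distances. The constellation generators are simplified stand-ins, not the paper's exact Fibonacci-lattice, orbit, or stochastic geometry models, and this need not match the paper's algorithm.

```python
# Empirical Wasserstein distance between equal-size point sets on the sphere.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fibonacci_lattice(n):
    """Near-uniform points on the unit sphere via the golden-angle (Fibonacci) lattice."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z**2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def random_sphere_points(n, rng):
    """Uniform (binomial point process) points on the unit sphere."""
    p = rng.normal(size=(n, 3))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

def wasserstein_point_sets(A, B):
    """W1 distance between two equal-size point sets using great-circle cost."""
    cost = np.arccos(np.clip(A @ B.T, -1.0, 1.0))          # pairwise geodesic distances
    row, col = linear_sum_assignment(cost)                 # optimal matching
    return cost[row, col].mean()

rng = np.random.default_rng(0)
n = 200
print(wasserstein_point_sets(fibonacci_lattice(n), random_sphere_points(n, rng)))
```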

Clustering has received much attention in statistics and machine learning, with the aim of developing statistical models and autonomous algorithms capable of extracting information from raw data for exploratory analysis. Several techniques have been developed to cluster sampled univariate vectors by considering only the average value over the whole period; as such, they cannot fully explore the underlying distribution or other features of the data, especially for structured time series. We propose a model-based clustering technique based on quantile regression that permits us to cluster bivariate time series at different quantile levels. We model the within-cluster density using the asymmetric Laplace distribution, allowing us to take into account asymmetry in the distribution of the data. We evaluate the performance of the proposed technique through a simulation study. The method is then applied to cluster time series obtained from Glob-colour satellite data related to trophic status indices, with the aim of evaluating their temporal dynamics in order to identify areas that are homogeneous in terms of trophic status in the Gulf of Gabes.
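A brief sketch of the asymmetric Laplace (AL) building block mentioned above: maximizing the AL likelihood at level tau is equivalent to minimizing the check (pinball) loss of quantile regression, which is what allows clusters to be formed at a chosen quantile rather than around the mean. Parameter names below are generic, not the paper's notation.

```python
# Check loss and asymmetric Laplace log-density used in quantile-based modelling.
import numpy as np

def check_loss(u, tau):
    """rho_tau(u) = u * (tau - 1{u < 0}), the quantile-regression check function."""
    return u * (tau - (u < 0))

def al_logpdf(y, mu, sigma, tau):
    """Log-density of the asymmetric Laplace distribution AL(mu, sigma, tau)."""
    return np.log(tau * (1 - tau) / sigma) - check_loss(y - mu, tau) / sigma

# The tau-th sample quantile minimises the summed check loss (equivalently, maximises
# the AL likelihood in mu), e.g. tau = 0.9 recovers the 90th percentile.
rng = np.random.default_rng(1)
y = rng.normal(size=10_000)
grid = np.linspace(-3, 3, 601)
best_mu = grid[np.argmin([check_loss(y - m, 0.9).sum() for m in grid])]
print(best_mu, np.quantile(y, 0.9))                        # these should be close
```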

By moving to millimeter wave (mmWave) frequencies, base stations (BSs) will be densely deployed to provide seamless coverage in sixth generation (6G) mobile communication systems, which unfortunately leads to a severe cell-edge problem. In addition, with massive multiple-input-multiple-output (MIMO) antenna arrays employed at the BSs, the beamspace channel of each user is sparse, and thus there is no need to jointly serve all the users in a cell with all the beams therein. Therefore, it is of paramount importance to develop a flexible clustered cell-free networking scheme that can decompose the whole network into a number of weakly interfering small subnetworks operating independently and in parallel. Given a per-user rate constraint for service quality guarantee, this paper aims to maximize the number of decomposed subnetworks so as to reduce the signaling overhead and system complexity as much as possible. By formulating this as a bipartite graph partitioning problem, we propose a rate-constrained network decomposition (RC-NetDecomp) algorithm, which can smoothly tune the network structure from the current cellular network with simple beam allocation to a fully cooperative network by increasing the required per-user rate. Simulation results demonstrate that the proposed RC-NetDecomp algorithm outperforms existing baselines in terms of average per-user rate, fairness among users and energy efficiency.
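A hedged sketch of the decomposition idea: build a bipartite user-beam graph from beamspace channel gains, prune weak edges, and take connected components as independent subnetworks. RC-NetDecomp additionally ties the pruning to the per-user rate constraint; the fixed gain threshold used here is a simplification for illustration only.

```python
# Bipartite user-beam graph decomposition via union-find on strong links.
import numpy as np

def decompose(gains, threshold):
    """gains: (n_users, n_beams) beamspace gain matrix. Returns subnetworks as lists
    of ('u', i) / ('b', j) nodes."""
    n_users, n_beams = gains.shape
    parent = list(range(n_users + n_beams))          # users: 0..n_users-1, beams after

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n_users):
        for j in range(n_beams):
            if gains[i, j] >= threshold:             # keep only strong user-beam links
                parent[find(i)] = find(n_users + j)

    groups = {}
    for i in range(n_users):
        groups.setdefault(find(i), []).append(('u', i))
    for j in range(n_beams):
        groups.setdefault(find(n_users + j), []).append(('b', j))
    return list(groups.values())

rng = np.random.default_rng(2)
gains = rng.exponential(size=(6, 8))                 # toy sparse beamspace gains
print(decompose(gains, threshold=1.5))
```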

In the era of multinational cooperation, gathering and analyzing satellite images is becoming easier and more important. A typical satellite image analysis procedure involves transmitting bulky image data from the satellite to the ground, which produces significant overhead. To reduce this transmission overhead without harming the analysis result, we propose RDIC, a novel reasoning-based image compression scheme that compresses an image according to pixel importance scores obtained from the analysis model itself. Experimental results show that RDIC successfully captures the important regions in an image, achieving a high compression rate with low accuracy loss.
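An illustrative sketch of importance-driven compression: given per-pixel importance scores from the analysis model (a gradient-based saliency map is assumed here, since the abstract does not specify RDIC's scoring), high-importance blocks are kept untouched while the rest are coarsely quantized before transmission.

```python
# Block-wise importance-based degradation sketch (block size, keep ratio and
# quantization depth are arbitrary illustration parameters).
import numpy as np

def compress_by_importance(image, importance, block=16, keep_ratio=0.2, coarse_levels=8):
    """image, importance: 2-D arrays of equal shape. Returns the degraded image."""
    h, w = image.shape
    out = image.copy()
    scores, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            scores.append(importance[y:y + block, x:x + block].mean())
            coords.append((y, x))
    cutoff = np.quantile(scores, 1.0 - keep_ratio)        # keep the top `keep_ratio` blocks
    for s, (y, x) in zip(scores, coords):
        if s < cutoff:                                    # unimportant block: quantize hard
            patch = out[y:y + block, x:x + block]
            q = np.round(patch / 255.0 * (coarse_levels - 1)) / (coarse_levels - 1) * 255.0
            out[y:y + block, x:x + block] = q
    return out

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (128, 128)).astype(float)
sal = rng.random((128, 128))                              # stand-in for model saliency
compressed = compress_by_importance(img, sal)
```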

In this paper, we consider multiple solar-powered wireless nodes that use harvested solar energy to transmit collected data to multiple unmanned aerial vehicles (UAVs) in the uplink. In this context, we jointly design the UAV flight trajectories, UAV-node communication associations, and uplink power control to effectively utilize the harvested energy and manage co-channel interference within a finite time horizon. To ensure fairness among the wireless nodes, the design goal is to maximize the worst user rate. The joint design problem is highly non-convex and requires non-causal (future) knowledge of the instantaneous energy state information (ESI) and channel state information (CSI), which is difficult to obtain in reality. To overcome these challenges, we propose an offline method based on convex optimization that utilizes only the average ESI and CSI. The problem is solved via three convex subproblems using successive convex approximation (SCA) and alternating optimization. We further design an online convex-assisted reinforcement learning (CARL) method to improve the system performance based on real-time environmental information. A multi-UAV regulated flight corridor scheme, built around the optimal offline UAV trajectories, is proposed to avoid unnecessary flight exploration by the UAVs, which improves the learning efficiency and system performance compared with the conventional reinforcement learning (RL) method. Computer simulations verify the effectiveness of the proposed methods: the proposed CARL method provides 25% and 12% improvements in the worst user rate over the offline and conventional RL methods, respectively.
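A small sketch of the regulated flight corridor idea described above: any waypoint proposed by the online policy is projected back into a tube around the offline-optimized trajectory, so the agent never explores far from a known-good flight path. The corridor radius, trajectory, and 2-D geometry are placeholders, not values or details from the paper.

```python
# Project a policy-proposed waypoint into a corridor around the offline trajectory.
import numpy as np

def project_to_corridor(proposed, offline_traj, t, corridor_radius):
    """Clip the proposed 2-D waypoint at time step t toward the offline waypoint."""
    center = offline_traj[t]
    delta = proposed - center
    dist = np.linalg.norm(delta)
    if dist <= corridor_radius:
        return proposed                                   # already inside the corridor
    return center + delta * (corridor_radius / dist)      # project onto the tube boundary

offline_traj = np.stack([np.linspace(0, 100, 50), np.zeros(50)], axis=1)   # straight path
action = np.array([30.0, 25.0])                           # waypoint suggested by the policy
print(project_to_corridor(action, offline_traj, t=15, corridor_radius=10.0))
```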

With the growing complexity of big data workloads that require abundant data and computation, data centers consume a tremendous amount of power daily. In an effort to minimize data center power consumption, several studies have developed power models that can be used for job scheduling, either by reducing the number of active servers or by balancing workloads across servers at their peak energy efficiency points. Due to increasing software and hardware heterogeneity, we observe that there is no single power model that works best under all server conditions. Complicated machine learning models themselves incur performance and power overheads, so it is not desirable to use them frequently, and no existing power models consider containerized workload execution. In this paper, we propose Hydra, a hybrid server power model that considers both prediction accuracy and performance overhead. Hydra dynamically chooses the best power model for the given server conditions. Compared with state-of-the-art solutions, Hydra outperforms them across all compute-intensity levels on heterogeneous servers.
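A hedged sketch of the hybrid-selection idea: keep several candidate power models, record which one has been most accurate per operating region (e.g., CPU utilization band), and dispatch predictions accordingly at run time. The two candidate models and the utilization banding below are illustrative stand-ins, not Hydra's actual model set or selection logic.

```python
# Dispatch to the historically best power model for the current utilisation band.
from typing import Callable, Dict

def linear_model(util: float) -> float:          # cheap baseline: idle + slope * util
    return 80.0 + 1.7 * util

def piecewise_model(util: float) -> float:       # captures an efficiency knee near 60%
    return 80.0 + (1.2 * util if util < 60 else 1.2 * 60 + 2.4 * (util - 60))

class HybridPowerModel:
    def __init__(self, models: Dict[str, Callable[[float], float]],
                 best_per_band: Dict[int, str]):
        self.models = models
        self.best_per_band = best_per_band       # band index -> name of best model

    def predict(self, util: float) -> float:
        band = min(int(util // 20), 4)           # five utilisation bands: 0-20, ..., 80-100
        return self.models[self.best_per_band[band]](util)

hydra = HybridPowerModel(
    models={"linear": linear_model, "piecewise": piecewise_model},
    best_per_band={0: "linear", 1: "linear", 2: "piecewise", 3: "piecewise", 4: "piecewise"},
)
print(hydra.predict(35.0), hydra.predict(90.0))   # watts predicted by the chosen models
```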

Clustering is one of the most fundamental and widespread techniques in exploratory data analysis. Yet the basic approach to clustering has not really changed: a practitioner hand-picks a task-specific clustering loss to optimize and fits the given data to reveal the underlying cluster structure. Some losses---such as k-means, its non-linear version kernelized k-means (centroid based), and DBSCAN (density based)---are popular choices due to their good empirical performance on a range of applications. Every so often, however, the clustering output obtained with these standard losses fails to reveal the underlying structure, and the practitioner has to custom-design their own variation. In this work we take an intrinsically different approach to clustering: rather than fitting a dataset to a specific clustering loss, we train a recurrent model that learns how to cluster. The model is trained on pairs of datasets (as input) and their corresponding cluster identities (as output). By providing multiple types of training datasets as inputs, our model gains the ability to generalize well to unseen datasets (new clustering tasks). Our experiments reveal that by training on simple synthetically generated datasets or on existing real datasets, we can achieve better clustering performance on unseen real-world datasets than standard benchmark clustering techniques. Our meta clustering model works well even for small datasets, where the usual deep learning models tend to perform poorly.
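A minimal sketch of the training-data side of this "learning to cluster" setup: each training example is an entire dataset paired with its ground-truth cluster identities, here generated as synthetic Gaussian blobs. The recurrent model that consumes these pairs is not shown, and the blob parameters are arbitrary choices for illustration.

```python
# Generate (dataset, cluster_labels) training pairs from synthetic blobs.
import numpy as np
from sklearn.datasets import make_blobs

def make_training_pairs(n_tasks=100, seed=0):
    """Return a list of (dataset, cluster_labels) pairs, each a separate clustering task."""
    rng = np.random.default_rng(seed)
    pairs = []
    for t in range(n_tasks):
        k = int(rng.integers(2, 6))                       # number of clusters per task
        n = int(rng.integers(50, 200))                    # dataset size per task
        X, y = make_blobs(n_samples=n, n_features=2, centers=k, random_state=t)
        pairs.append((X.astype(np.float32), y.astype(np.int64)))
    return pairs

tasks = make_training_pairs()
X0, y0 = tasks[0]
print(X0.shape, np.unique(y0))        # one small dataset and its cluster identities
```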

To address the sparsity and cold start problem of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance. This paper considers the knowledge graph as the source of side information. To address the limitations of existing embedding-based and path-based methods for knowledge-graph-aware recommendation, we propose Ripple Network, an end-to-end framework that naturally incorporates the knowledge graph into recommender systems. Similar to actual ripples propagating on the surface of water, Ripple Network stimulates the propagation of user preferences over the set of knowledge entities by automatically and iteratively extending a user's potential interests along links in the knowledge graph. The multiple "ripples" activated by a user's historically clicked items are thus superposed to form the preference distribution of the user with respect to a candidate item, which could be used for predicting the final clicking probability. Through extensive experiments on real-world datasets, we demonstrate that Ripple Network achieves substantial gains in a variety of scenarios, including movie, book and news recommendation, over several state-of-the-art baselines.
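A sketch of the "ripple" expansion step only: starting from the entities of a user's historically clicked items, the candidate entity set is grown hop by hop along knowledge graph links, giving one ripple set per hop. The preference-propagation weighting and the click-probability prediction head of Ripple Network are omitted, and the toy knowledge graph below is made up.

```python
# Hop-by-hop ripple-set expansion over a toy knowledge graph.
def ripple_sets(kg, seed_entities, n_hops=2):
    """kg: dict head -> list of (relation, tail). Returns a list of per-hop triple lists."""
    hops = []
    frontier = set(seed_entities)
    for _ in range(n_hops):
        triples = [(h, r, t) for h in frontier for (r, t) in kg.get(h, [])]
        hops.append(triples)
        frontier = {t for (_, _, t) in triples}           # next ripple starts from the tails
    return hops

kg = {
    "Forrest Gump": [("directed_by", "Robert Zemeckis"), ("genre", "Drama")],
    "Robert Zemeckis": [("directed", "Cast Away")],
    "Drama": [("genre_of", "The Green Mile")],
}
print(ripple_sets(kg, seed_entities=["Forrest Gump"]))
```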
