
Spectrum slicing of the shared radio resources is a critical task in 5G networks with heterogeneous services, through which each service gets performance guarantees. In this paper, we consider a setup in which a Base Station (BS) should serve two types of downlink traffic: enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC). Two resource allocation strategies are compared: non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA). A framework for power minimization is presented, in which the BS knows the channel state information (CSI) of the eMBB users only. Nevertheless, due to the resource sharing, it is shown that this knowledge can also be used to the benefit of the URLLC users. The numerical results show that NOMA leads to lower power consumption than OMA for every simulation parameter under test.
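As a rough illustration of why superposition can reduce transmit power, the minimal sketch below (a two-user toy with assumed channel gains, rate targets, and noise power, not the paper's optimization framework) compares the power needed to meet fixed rate targets under NOMA with successive interference cancellation (SIC) at the stronger user against an equal-split OMA baseline.

```python
# Toy two-user downlink: user 1 (eMBB, strong channel), user 2 (URLLC, weak channel).
# All quantities are illustrative assumptions, not the paper's model.
g1, g2 = 2.0, 0.5      # channel power gains
N0 = 1.0               # noise power
R1, R2 = 2.0, 0.5      # target rates in bits/s/Hz

# NOMA with superposition coding: user 1 applies SIC and removes user 2's signal,
# user 2 decodes its own signal treating user 1's signal as interference.
P1_noma = (2**R1 - 1) * N0 / g1
P2_noma = (2**R2 - 1) * (P1_noma * g2 + N0) / g2
P_noma = P1_noma + P2_noma

# OMA: each user gets half of the time/frequency resources, so the per-resource
# rate target doubles; the comparison metric is the average power over the block.
P1_oma = (2**(2 * R1) - 1) * N0 / g1
P2_oma = (2**(2 * R2) - 1) * N0 / g2
P_oma = 0.5 * (P1_oma + P2_oma)

print(f"NOMA total power: {P_noma:.2f}")
print(f"OMA  total power: {P_oma:.2f}")
```

For these assumed numbers the NOMA power is noticeably lower, in line with the qualitative conclusion of the abstract; the actual paper optimizes over the full parameter space rather than a single toy point.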

Related Content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing, seeking original research papers on theory, methodology, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
December 27, 2021

Unrolled computation graphs arise in many scenarios, including training RNNs, tuning hyperparameters through unrolled optimization, and training learned optimizers. Current approaches to optimizing parameters in such computation graphs suffer from high variance gradients, bias, slow updates, or large memory usage. We introduce a method called Persistent Evolution Strategies (PES), which divides the computation graph into a series of truncated unrolls, and performs an evolution strategies-based update step after each unroll. PES eliminates bias from these truncations by accumulating correction terms over the entire sequence of unrolls. PES allows for rapid parameter updates, has low memory usage, is unbiased, and has reasonable variance characteristics. We experimentally demonstrate the advantages of PES compared to several other methods for gradient estimation on synthetic tasks, and show its applicability to training learned optimizers and tuning hyperparameters.
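As a hedged sketch of the core mechanism (not the authors' exact algorithm; the toy unrolled recurrence, antithetic population, and hyperparameters below are assumed for illustration), each particle keeps a running sum of all perturbations applied to it across truncated unrolls, and that accumulated noise, rather than only the latest perturbation, enters the evolution-strategies update:

```python
import numpy as np

rng = np.random.default_rng(0)

def unroll(state, theta, K):
    """Toy inner loop: K steps of a contracting scalar recurrence, summed loss."""
    loss = 0.0
    for _ in range(K):
        state = 0.5 * state + theta      # assumed toy dynamics, fixed point 2*theta
        loss += (state - 1.0) ** 2       # assumed per-step loss, minimized at theta = 0.5
    return state, loss

# Illustrative hyperparameters (assumed, not from the paper).
n_pairs, sigma, K, n_unrolls, lr = 128, 0.1, 10, 40, 0.01
theta = 0.0

# One positively and one negatively perturbed state per antithetic pair,
# plus the accumulated perturbation applied to that pair so far.
s_pos = np.zeros(n_pairs)
s_neg = np.zeros(n_pairs)
accum = np.zeros(n_pairs)

for _ in range(n_unrolls):
    eps = sigma * rng.standard_normal(n_pairs)   # fresh perturbation for this unroll
    accum += eps                                 # key idea: keep the running sum of perturbations
    L_pos = np.empty(n_pairs)
    L_neg = np.empty(n_pairs)
    for i in range(n_pairs):
        s_pos[i], L_pos[i] = unroll(s_pos[i], theta + eps[i], K)
        s_neg[i], L_neg[i] = unroll(s_neg[i], theta - eps[i], K)
    # Antithetic ES estimate using the accumulated perturbations, not just eps.
    grad_est = np.mean(accum * (L_pos - L_neg)) / (2.0 * sigma**2)
    theta -= lr * grad_est                       # update after every truncated unroll

print(f"theta after training: {theta:.3f} (should approach 0.5)")
```

The only difference from plain truncated ES in this sketch is the `accum` buffer; using it in the update is what compensates for the dependence of each unroll's loss on parameter values applied in earlier unrolls.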

The study of statistical estimation without distributional assumptions on data values, but with knowledge of data collection methods, was recently introduced by Chen, Valiant and Valiant (NeurIPS 2020). In this framework, the goal is to design estimators that minimize the worst-case expected error. Here the expectation is over a known, randomized data collection process from some population, and the data values corresponding to each element of the population are assumed to be worst-case. Chen, Valiant and Valiant show that, when data values are $\ell_{\infty}$-normalized, there is a polynomial time algorithm to compute an estimator for the mean with worst-case expected error that is within a factor $\frac{\pi}{2}$ of the optimum over the natural class of semilinear estimators. However, their algorithm is based on optimizing a somewhat complex concave objective function over a constrained set of positive semidefinite matrices, and thus does not come with explicit runtime guarantees beyond being polynomial time in the input. In this paper we design provably efficient algorithms for approximating the optimal semilinear estimator based on online convex optimization. In the setting where data values are $\ell_{\infty}$-normalized, our algorithm achieves a $\frac{\pi}{2}$-approximation by iteratively solving a sequence of standard SDPs. When data values are $\ell_2$-normalized, our algorithm iteratively computes the top eigenvector of a sequence of matrices, and does not lose any multiplicative approximation factor. We complement these positive results by stating a simple combinatorial condition which, if satisfied by a data collection process, implies that any (not necessarily semilinear) estimator for the mean has constant worst-case expected error.
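In the $\ell_2$-normalized setting, the inner primitive the abstract describes is extracting the top eigenvector of a matrix at each iteration. A minimal sketch of that primitive via power iteration (the test matrix and iteration budget are assumed; this is not the paper's full estimator-construction algorithm) is:

```python
import numpy as np

def top_eigenvector(A, n_iter=200, tol=1e-10, rng=None):
    """Power iteration for the leading eigenvector of a symmetric PSD matrix A."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A @ v
        w_norm = np.linalg.norm(w)
        if w_norm == 0:
            return v                      # A is (numerically) zero; any unit vector works
        w /= w_norm
        if np.linalg.norm(w - v) < tol:   # converged
            return w
        v = w
    return v

# Usage on an assumed PSD matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T                               # symmetric positive semidefinite
v = top_eigenvector(A, rng=rng)
print("Rayleigh quotient:", v @ A @ v, "vs largest eigenvalue:", np.linalg.eigvalsh(A)[-1])
```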

In this paper, we study a general multi-cluster wireless powered communication network (WPCN) with user cooperation under the harvest-then-transmit (HTT) protocol, where the hybrid access point (HAP) as well as each user is equipped with multiple antennas. In the downlink phase of HTT, the HAP employs beamforming to transfer energy to the users. In the uplink phase, users in each cluster transmit their signals to the HAP and to their cluster heads (CHs). Afterward, the CHs first relay the signals of their cluster users and then transmit their own information signals to the HAP. The aim is to design the energy beamforming (EB) matrix, the transmit covariance matrices of the users, and the time allocations among the energy transfer and cooperation phases in order to optimize the max-min and sum throughputs of the network. The corresponding maximization problems are non-convex and NP-hard in general. We devise an iterative algorithm based on alternating optimization (AO), in which the minorization-maximization (MM) technique is used to deal with the non-convex sub-problems with respect to (w.r.t.) the EB and covariance matrices in each iteration. We recast the resulting sub-problems as convex second-order cone programs (SOCP) and quadratically constrained quadratic programs (QCQP) for the max-min and sum throughput maximization problems, respectively. We also consider imperfect channel state information (CSI) at the HAP and CHs and non-linearity in the energy harvesting (EH) circuits. Numerical examples show that the proposed cooperative method can effectively improve the achievable throughput of the multi-cluster wireless powered communication network under various setups.
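To convey the basic harvest-then-transmit trade-off that the time-allocation variables control, the sketch below sweeps the downlink energy-transfer fraction of a single-user, single-antenna toy link with a linear harvesting model (all parameters are assumed; the paper's multi-cluster MIMO problem with MM-based sub-problem solvers is far more general):

```python
import numpy as np

# Toy single-user harvest-then-transmit link (assumed parameters, linear EH model;
# not the paper's multi-cluster MIMO setup).
eta = 0.7          # energy-harvesting efficiency
P_hap = 1.0        # HAP transmit power (W)
h_dl = 0.5         # downlink channel power gain
h_ul = 0.5         # uplink channel power gain
N0 = 1e-3          # noise power (W)

def throughput(tau):
    """Bits/s/Hz achieved when a fraction tau of the block is used for energy transfer."""
    energy = eta * P_hap * h_dl * tau          # energy harvested during the DL phase
    p_ul = energy / (1.0 - tau)                # power available in the UL phase
    return (1.0 - tau) * np.log2(1.0 + p_ul * h_ul / N0)

taus = np.linspace(0.01, 0.99, 500)
rates = np.array([throughput(t) for t in taus])
best = taus[rates.argmax()]
print(f"best DL time fraction: {best:.2f}, throughput: {rates.max():.2f} bits/s/Hz")
```

Too little time for energy transfer starves the uplink of power, while too much leaves no time to transmit; the paper jointly tunes this split with the beamforming and covariance matrices across all clusters.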

The reconfigurable intelligent surface (RIS) has attracted a surge of research interest recently due to its promising outlook in 5G-and-beyond wireless networks. With the assistance of an RIS, the wireless propagation environment is no longer static and can be customized to support diverse service requirements. In this paper, we approach rate maximization problems in RIS-aided wireless networks by considering the beamforming and reflecting design jointly. Three representative design problems from different system settings are investigated based on a proposed unified algorithmic framework via the block minorization-maximization (BMM) method. Extensions and generalizations of the proposed framework to some other related problems are further presented. Merits of the proposed algorithms are demonstrated through numerical simulations in comparison with state-of-the-art methods.
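As a toy illustration of how the reflecting design reshapes the channel, consider the single-antenna, single-user special case with assumed i.i.d. channels, where the rate-optimal RIS phases simply co-phase every reflected path with the direct link (the paper's BMM framework targets much more general joint designs):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                        # number of RIS elements (assumed)

# Assumed i.i.d. complex Gaussian channels: direct link, BS->RIS, and RIS->user.
h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

cascade = h_r * g                             # per-element cascaded channel
# Co-phase every reflected path with the direct path (optimal for this SISO case).
theta = np.exp(1j * (np.angle(h_d) - np.angle(cascade)))

snr = 10.0                                    # transmit SNR (assumed)
h_random = h_d + cascade @ np.exp(1j * rng.uniform(0, 2 * np.pi, N))
h_opt = h_d + cascade @ theta
print(f"rate, random phases : {np.log2(1 + snr * abs(h_random)**2):.2f} bits/s/Hz")
print(f"rate, aligned phases: {np.log2(1 + snr * abs(h_opt)**2):.2f} bits/s/Hz")
```

With multiple antennas, multiple users, or discrete phase sets, this closed-form alignment no longer applies, which is where block-wise minorization-maximization of the rate objective comes in.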

This paper analyzes how the distortion created by hardware impairments in a multiple-antenna base station affects the uplink spectral efficiency (SE), with a focus on Massive MIMO. This distortion is correlated across the antennas, but it has often been approximated as uncorrelated to facilitate (tractable) SE analysis. To determine when this approximation is accurate, basic properties of distortion correlation are first uncovered. Then, we separately analyze the distortion correlation caused by third-order non-linearities and by quantization. Finally, we study the SE numerically and show that the distortion correlation can be safely neglected in Massive MIMO when there are sufficiently many users. Under i.i.d. Rayleigh fading and equal signal-to-noise ratios (SNRs), this occurs for more than five transmitting users. Other channel models and SNR variations have only a minor impact on the accuracy. We also demonstrate the importance of taking the distortion characteristics into account in the receive combining.
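The following Monte Carlo sketch illustrates the effect with a memoryless third-order non-linearity, two antennas, and assumed i.i.d. Rayleigh channels (an illustration, not the paper's analytical treatment): with one user the per-antenna distortion terms are fully correlated, and the correlation weakens as users are added.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_samples = 2, 200_000                      # antennas and signal realizations (assumed)

def distortion_correlation(K):
    """Correlation of third-order distortion across two antennas with K users."""
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    S = (rng.standard_normal((K, n_samples)) + 1j * rng.standard_normal((K, n_samples))) / np.sqrt(2)
    U = H @ S                                  # per-antenna input to the non-linearity
    E = np.abs(U) ** 2 * U                     # memoryless third-order term
    # Remove the linear (Bussgang-type) part per antenna so only the distortion remains.
    B = np.sum(E * U.conj(), axis=1) / np.sum(np.abs(U) ** 2, axis=1)
    D = E - B[:, None] * U
    c12 = np.mean(D[0] * D[1].conj())
    return abs(c12) / np.sqrt(np.mean(np.abs(D[0]) ** 2) * np.mean(np.abs(D[1]) ** 2))

for K in (1, 2, 5, 10):
    print(f"K = {K:2d} users: |correlation| = {distortion_correlation(K):.2f}")
```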

We propose a new wavelet-based method for density estimation when the data are size-biased. More specifically, we consider a power of the density of interest, where this power is some value greater than or equal to one half. Warped wavelet bases are employed, where warping is attained by some continuous cumulative distribution function. This can be seen as a general framework in which conventional orthonormal wavelet estimation is the special case corresponding to the standard uniform c.d.f. We show that both linear and nonlinear wavelet estimators are consistent, with optimal and/or near-optimal rates. Monte Carlo simulations are performed to compare four special set-ups which are easy to interpret in practice. A real dataset application illustrates the method. We observe that warped bases provide more flexible and better estimates for both simulated and real data. Moreover, we can see that estimating a power of the density (for instance, its square root) further improves the results.
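To make the warping idea concrete, here is a minimal sketch of a linear warped estimator for length-biased data (a Haar/histogram scaling expansion, an exponential warping c.d.f., and a Gamma target are all assumed; the paper's estimators act on a power of the density and also use genuinely nonlinear thresholding): warp the data through the c.d.f. $G$, estimate the warped density on $[0,1]$ with size-bias-correcting weights, then map back by a change of variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate length-biased draws from a Gamma(2, 1) target: size-biasing a Gamma(2, 1)
# by x gives a Gamma(3, 1), so the biased sample can be drawn directly (assumed set-up).
n = 5000
x = rng.gamma(shape=3.0, scale=1.0, size=n)      # observed, size-biased sample

# Warping c.d.f. G and its density g: an Exp(1/2) c.d.f. (an assumed choice).
lam = 0.5
G = lambda t: 1.0 - np.exp(-lam * t)
g = lambda t: lam * np.exp(-lam * t)

# Linear Haar estimator of the warped density on [0, 1]: at resolution j the
# scaling-coefficient expansion is just a histogram on 2**j dyadic bins.
j = 4
n_bins = 2 ** j
u = G(x)
w = (1.0 / x) / np.sum(1.0 / x)                  # weights that undo the size bias
bin_idx = np.minimum((u * n_bins).astype(int), n_bins - 1)
f_u = np.bincount(bin_idx, weights=w, minlength=n_bins) * n_bins   # density on [0, 1]

def f_hat(t):
    """Estimated target density: change of variables back through the warping."""
    idx = np.minimum((G(t) * n_bins).astype(int), n_bins - 1)
    return f_u[idx] * g(t)

grid = np.linspace(0.01, 10, 200)
true = grid * np.exp(-grid)                      # Gamma(2, 1) density
print(f"mean abs error on grid: {np.mean(np.abs(f_hat(grid) - true)):.3f}")
```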

Influence maximization is the task of selecting a small number of seed nodes in a social network to maximize the spread of influence from these seeds, and it has been widely investigated in the past two decades. In the canonical setting, both the social network and its diffusion parameters are given as input. In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades that record the set of activated nodes at each diffusion step. We study the task of influence maximization from these cascade samples (IMS), and present constant approximation algorithms for this task under mild conditions on the seed set distribution. To achieve the optimization goal, we also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from the cascade data. Compared with prior solutions, our network inference algorithm requires weaker assumptions and does not rely on maximum-likelihood estimation or convex programming. Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, and we do not need any assumption related to the network structure or diffusion parameters.
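For reference, the optimization half that such learning-and-then-optimization pipelines build on is the classic greedy seed selection under the independent-cascade model, sketched below on an assumed toy graph with assumed edge probabilities and Monte Carlo budget (the paper's contribution, inferring the model from cascade samples while preserving an approximation guarantee, is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy directed graph with independent-cascade (IC) activation probabilities.
n_nodes, p_edge = 30, 0.15
adj = {u: [] for u in range(n_nodes)}
for u in range(n_nodes):
    for v in rng.choice(n_nodes, size=3, replace=False):
        if int(v) != u:
            adj[u].append((int(v), p_edge))

def simulate_ic(seeds):
    """One IC diffusion from the seed set; returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(seeds, n_sims=300):
    return np.mean([simulate_ic(seeds) for _ in range(n_sims)])

def greedy_seeds(k):
    """Classic greedy selection by estimated marginal gain in expected spread."""
    seeds = []
    for _ in range(k):
        best = max((v for v in range(n_nodes) if v not in seeds),
                   key=lambda v: expected_spread(seeds + [v]))
        seeds.append(best)
    return seeds

chosen = greedy_seeds(3)
print("seeds:", chosen, "estimated spread:", round(float(expected_spread(chosen)), 1))
```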

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey paper provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues for analyzing and designing an edge computing system are also studied in this survey.

Implicit probabilistic models are defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.

Network Virtualization is one of the most promising technologies for future networking and is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and different components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist on the same shared physical infrastructure simultaneously. One of the crucial keys in Network Virtualization is Virtual Network Embedding, which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related issues for resource allocation of an Internet Provider (InP) to efficiently embed virtual networks that are requested by Virtual Network Operators (VNOs) who share the same infrastructure provided by the InP. In order to achieve that goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding by enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests compared to prior algorithms.
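To show what a (simpler, uncoordinated) embedding heuristic looks like in practice, the sketch below greedily maps virtual nodes to the substrate nodes with the most residual CPU and then maps each virtual link onto a shortest substrate path with sufficient residual bandwidth; the toy substrate, the request, and the greedy scoring rule are assumed and differ from the paper's coordinated node-and-link algorithm.

```python
from collections import deque

# Assumed toy substrate: node -> CPU capacity, undirected links with bandwidth.
cpu = {0: 50, 1: 40, 2: 60, 3: 30, 4: 45}
bw = {(0, 1): 50, (1, 2): 40, (2, 3): 60, (3, 4): 30, (0, 4): 45, (1, 3): 35}
bw.update({(b, a): c for (a, b), c in list(bw.items())})   # make links symmetric

# Assumed virtual network request: node -> CPU demand, link -> bandwidth demand.
v_cpu = {"a": 20, "b": 15, "c": 10}
v_links = {("a", "b"): 15, ("b", "c"): 10}

def bfs_path(src, dst, need_bw):
    """Shortest (hop-count) substrate path with enough residual bandwidth on every link."""
    parent, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path, node = [], dst
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for v in cpu:
            if v not in parent and bw.get((u, v), 0) >= need_bw:
                parent[v] = u
                queue.append(v)
    return None

# Node mapping: place each virtual node on the feasible substrate node with the
# most residual CPU (at most one virtual node per substrate node).
mapping = {}
for vn, demand in sorted(v_cpu.items(), key=lambda kv: -kv[1]):
    candidates = [n for n in cpu if cpu[n] >= demand and n not in mapping.values()]
    if not candidates:
        raise RuntimeError("node mapping failed")
    host = max(candidates, key=lambda n: cpu[n])
    mapping[vn] = host
    cpu[host] -= demand

# Link mapping: shortest feasible path for each virtual link, then reserve bandwidth.
for (va, vb), demand in v_links.items():
    path = bfs_path(mapping[va], mapping[vb], demand)
    if path is None:
        raise RuntimeError("link mapping failed")
    for u, v in zip(path, path[1:]):
        bw[(u, v)] -= demand
        bw[(v, u)] -= demand
    print(f"virtual link {va}-{vb} -> substrate path {path}")

print("node mapping:", mapping)
```

A coordinated algorithm like the one in the abstract instead chooses node placements with the subsequent link mapping in mind, which is what improves acceptance ratio and long-term revenue over such two-stage greedy schemes.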
