
In this work, we propose a globally optimal joint successive interference cancellation (SIC) ordering and power allocation (JSPA) algorithm for the sum-rate maximization problem in downlink multi-cell non-orthogonal multiple access (NOMA) systems. The proposed algorithm is based on exploring the power consumption of the base stations (BSs) and on closed-form expressions for the optimal powers in each cell. Although the optimal JSPA algorithm scales well with the number of users, its complexity remains exponential in the number of cells. For any suboptimal decoding order, we propose a low-complexity, near-optimal joint rate and power allocation (JRPA) strategy that exploits the complete rate region of the users. Furthermore, we design a near-optimal semi-centralized JSPA framework for two-tier heterogeneous networks that scales well with the number of small BSs and users. Numerical results show that JRPA substantially outperforms the approach in which users are forced to achieve their channel capacity by imposing the well-known SIC necessary condition on the power allocation. Moreover, the proposed semi-centralized JSPA framework significantly outperforms a fully distributed framework in which all BSs operate at their maximum power budget. Therefore, the centralized JRPA and semi-centralized JSPA algorithms, with their near-optimal performance, are good choices for networks with many cells and users.
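
As a toy illustration of the joint search space (not the paper's closed-form method), the following sketch brute-forces all SIC decoding orders together with a discretized power split in a single three-user downlink NOMA cell. The channel gains, noise power, and grid resolution are assumed values chosen only for illustration.

```python
# Brute-force joint SIC ordering and power allocation in one NOMA cell.
# Toy sketch: gains, noise, and grid are assumptions, not the paper's model.
import itertools
import numpy as np

h = np.array([0.2, 1.0, 3.5])   # channel gains of three users (assumed)
P, sigma2 = 10.0, 1.0           # BS power budget and noise power (assumed)

def sum_rate(order, p):
    # Under SIC order, user order[k] cancels the signals of users decoded
    # before it and treats users order[k+1:] as interference.
    rate = 0.0
    for k, u in enumerate(order):
        interference = sigma2 + h[u] * sum(p[v] for v in order[k + 1:])
        rate += np.log2(1 + h[u] * p[u] / interference)
    return rate

grid = np.linspace(0.0, P, 21)
best = max(
    (sum_rate(o, {0: a, 1: b, 2: P - a - b}), o, (a, b, P - a - b))
    for o in itertools.permutations(range(3))
    for a in grid for b in grid if a + b <= P
)
print(f"sum-rate {best[0]:.3f} b/s/Hz, SIC order {best[1]}, powers {best[2]}")
```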

Related Content

This paper considers a discrete-time Poisson noise channel, which is used to model pulse-amplitude-modulated optical communication with a direct-detection receiver. The goal of this paper is to obtain insights into the capacity and the structure of the capacity-achieving distribution for this channel under an amplitude constraint $\mathsf{A}$ and in the presence of dark current $\lambda$. Using recent theoretical progress on the structure of the capacity-achieving distribution, this paper develops a numerical algorithm, based on the gradient-ascent and Blahut-Arimoto algorithms, for computing the capacity and the capacity-achieving distribution. The algorithm is used to perform extensive numerical simulations for various regimes of $\mathsf{A}$ and $\lambda$.
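
For intuition, the sketch below runs a plain Blahut-Arimoto iteration on a discretized version of this channel. The grid of candidate input mass points, the output truncation level, and the values of $\mathsf{A}$ and $\lambda$ are assumptions for illustration; the paper's algorithm additionally uses gradient ascent to refine the support points.

```python
# Minimal Blahut-Arimoto sketch for Y ~ Poisson(x + lam), x in [0, A].
# Grid, truncation, and parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import poisson

A, lam = 5.0, 1.0                    # amplitude constraint, dark current
X = np.linspace(0.0, A, 200)         # candidate input mass points
Y = np.arange(0, 60)                 # truncated output alphabet
W = poisson.pmf(Y[None, :], X[:, None] + lam)  # channel matrix W[x, y]
W /= W.sum(axis=1, keepdims=True)    # renormalize rows after truncation

p = np.full(len(X), 1.0 / len(X))    # uniform initial input distribution
for _ in range(500):
    q = p @ W                        # induced output distribution
    # D[x] = KL divergence between W(.|x) and q
    D = np.sum(W * np.log((W + 1e-300) / (q + 1e-300)[None, :]), axis=1)
    p = p * np.exp(D)                # multiplicative Blahut-Arimoto update
    p /= p.sum()

capacity_nats = p @ D                # equals capacity at convergence
print(f"capacity ≈ {capacity_nats / np.log(2):.4f} bits/channel use")
```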

With the development of the Internet of Things (IoT), we are witnessing explosive growth in the number of devices with sensing, computing, and communication capabilities, along with a large amount of raw data generated at the network edge. Mobile (multi-access) edge computing (MEC), which acquires and processes data at the network edge (e.g., at a base station (BS)) via wireless links, has emerged as a promising technique for real-time applications. In this paper, we consider the scenario in which multiple devices sense and then offload data to an edge server/BS, and we study offloading-throughput maximization problems via joint radio-and-computation resource allocation, based on time-division multiple access (TDMA) and non-orthogonal multiple access (NOMA) multiuser computation offloading. In particular, we take the sequence of TDMA-based multiuser transmission/offloading into account. The studied problems are NP-hard and non-convex. A set of low-complexity algorithms is designed based on a decomposition approach and on structural insights into the problems; they are either optimal or achieve close-to-optimal performance, as shown by simulation. Comprehensive simulation results show that the sequence-optimized TDMA scheme achieves better throughput than the NOMA scheme, while the NOMA scheme performs better under the assumptions of a time-sharing strategy and identical sensing capabilities across devices.
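
As a loose illustration of why the offloading sequence matters under TDMA (not the paper's system model), the toy sketch below brute-forces device orderings when devices finish sensing at different times and share a single channel. All numbers, including the deadline, are assumed.

```python
# Toy sketch: TDMA offloading order affects how much data meets a deadline,
# because each device only becomes ready once its sensing completes.
import itertools

sense = [2.0, 0.5, 1.0]   # sensing (release) times per device (assumed)
tx = [1.5, 2.0, 1.0]      # channel time to offload each device's data
data = [3.0, 4.0, 2.0]    # data volume per device
T = 5.0                   # offloading deadline (assumed)

def throughput(order):
    t, total = 0.0, 0.0
    for i in order:
        t = max(t, sense[i]) + tx[i]   # wait for sensing, then transmit
        if t <= T:
            total += data[i]
    return total

best = max(itertools.permutations(range(3)), key=throughput)
print("best order:", best, "data offloaded:", throughput(best))
```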

The graphical balls-into-bins process is a generalization of the classical 2-choice balls-into-bins process, where the bins correspond to vertices of an arbitrary underlying graph $G$. At each time step an edge of $G$ is chosen uniformly at random, and a ball must be assigned to one of the two endpoints of this edge. The standard 2-choice process corresponds to the case $G=K_n$. For any $k(n)$-edge-connected, $d(n)$-regular graph on $n$ vertices, and any number of balls, we give an allocation strategy that, with high probability, ensures a gap of $O((d/k) \log^4 n \log \log n)$ between the loads of any two bins. In particular, this implies polylogarithmic bounds for natural graphs such as cycles and tori, for which the classical greedy allocation strategy is conjectured to have a polynomial gap between the bin loads. For every graph $G$, we also show an $\Omega((d/k) + \log n)$ lower bound on the gap achievable by any allocation strategy. This implies that our strategy achieves the optimal gap, up to polylogarithmic factors, for every graph $G$. Our allocation algorithm is simple to implement and requires only $O(\log n)$ time per allocation. It can be viewed as a more global version of the greedy strategy that compares the average load on certain fixed sets of vertices, rather than on individual vertices. A key idea is to relate the problem of designing a good allocation strategy to that of finding suitable multi-commodity flows. To this end, we consider R\"{a}cke's cut-based decomposition tree and define certain orthogonal flows on it.
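
For context, here is a small simulation of the baseline greedy strategy on a cycle (where $d = k = 2$), the regime in which greedy is conjectured to have a polynomial gap; the paper's flow-based strategy itself is not reproduced here, and the instance sizes are assumed.

```python
# Graphical 2-choice process on the cycle C_n with the greedy strategy:
# sample a uniformly random edge, place the ball on its less-loaded endpoint.
import random

n, m = 100, 100_000           # bins (cycle vertices) and balls (assumed)
load = [0] * n
for _ in range(m):
    u = random.randrange(n)   # random edge (u, u+1 mod n) of the cycle
    v = (u + 1) % n
    load[u if load[u] <= load[v] else v] += 1  # greedy choice

print("load gap:", max(load) - min(load))
```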

The tensor-train (TT) decomposition expresses a tensor in a data-sparse format that is used in molecular simulations, high-order correlation functions, and optimization. In this paper, we propose four parallelizable algorithms that compute the TT format from various tensor inputs: (1) Parallel-TTSVD for tensors in the traditional dense format, (2) PSTT and its variants for streaming data, (3) Tucker2TT for tensors in Tucker format, and (4) TT-fADI for solutions of Sylvester tensor equations. We provide theoretical guarantees of accuracy, parallelization methods, scaling analysis, and numerical results. For example, for a $d$-dimensional tensor in $\mathbb{R}^{n\times\dots\times n}$, the two-sided sketching algorithm PSTT2 is shown to have a memory complexity of $\mathcal{O}(n^{\lfloor d/2 \rfloor})$, improving upon the $\mathcal{O}(n^{d-1})$ of previous algorithms.
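
As a reference point, the following is a minimal sequential TT-SVD sketch, the core decomposition that the paper's parallel algorithms build upon; the tolerance-based rank truncation is a standard choice and not the paper's exact scheme.

```python
# Sequential TT-SVD sketch with round-trip verification.
import numpy as np

def tt_svd(T, eps=1e-12):
    """Return TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims, d = T.shape, T.ndim
    cores, r_prev, M = [], 1, T.reshape(dims[0], -1)
    for k in range(d - 1):
        M = M.reshape(r_prev * dims[k], -1)     # unfold remaining tensor
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0]))) # truncate rank by tolerance
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M, r_prev = s[:r, None] * Vt[:r], r     # carry the rest forward
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into a dense tensor (for verification)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(-1, 0))
    return T.squeeze(axis=(0, -1))

T = np.random.rand(4, 5, 6, 7)
print("reconstruction error:", np.linalg.norm(T - tt_full(tt_svd(T))))
```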

We develop necessary conditions for geometrically fast convergence in the Wasserstein distance for Metropolis-Hastings algorithms on $\mathbb{R}^d$ when the metric used is a norm. This is accomplished through a lower bound which is of independent interest. We show that exact convergence expressions in more general Wasserstein distances (e.g., total variation) can be achieved for a large class of distributions by centering an independent Gaussian proposal, that is, by matching the optimal points of the proposal and target densities. This approach has applications to sampling the posteriors of many popular Bayesian generalized linear models. In the case of Bayesian binary response regression, we show that when the sample size $n$ and the dimension $d$ grow in such a way that the ratio $d/n \to \gamma \in (0, +\infty)$, the exact convergence rate can be upper bounded asymptotically.
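
A minimal sketch of the centering idea: an independence Metropolis-Hastings sampler whose Gaussian proposal is centered at the target's mode, i.e., the optimal points of proposal and target are matched. The one-dimensional target below is an assumed toy, not one of the Bayesian GLM posteriors the paper analyzes.

```python
# Independence Metropolis-Hastings with a mode-centered Gaussian proposal.
import numpy as np

rng = np.random.default_rng(0)
log_pi = lambda x: -x**4 / 4.0        # unnormalized log target, mode at 0
mode, sigma = 0.0, 1.0                # proposal centered at the target mode
log_q = lambda x: -(x - mode)**2 / (2 * sigma**2)

x, chain = mode, []
for _ in range(20_000):
    y = rng.normal(mode, sigma)       # proposal is independent of x
    # acceptance ratio of the independence sampler
    log_alpha = (log_pi(y) + log_q(x)) - (log_pi(x) + log_q(y))
    if np.log(rng.uniform()) < log_alpha:
        x = y
    chain.append(x)
print("posterior mean ≈", np.mean(chain[5000:]))
```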

In this paper, we demonstrate a formulation for optimizing coupled submodular maximization problems with provable sub-optimality bounds. In robotics applications, it is quite common that optimization problems are coupled with one another and therefore cannot be solved independently. Specifically, we consider two problems coupled if the outcome of the first problem affects the solution of a second problem that operates over a longer time scale. For example, in our motivating problem of environmental monitoring, we posit that multi-robot task allocation may impact environmental dynamics and thus influence the quality of future monitoring, here modeled as a multi-robot intermittent deployment problem. The general theoretical approach for solving this type of coupled problem is demonstrated through this motivating example. Specifically, we propose a method for solving coupled problems modeled by submodular set functions with matroid constraints. A greedy algorithm for solving this class of problems is presented, along with sub-optimality guarantees. Finally, practical optimality ratios are shown through Monte Carlo simulations to demonstrate that the proposed algorithm can generate near-optimal solutions with high efficiency.
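
A generic sketch of the greedy scheme for monotone submodular maximization under a matroid constraint (here a uniform/cardinality matroid); the coverage objective is an assumed toy stand-in for the paper's deployment utility.

```python
# Greedy maximization of a monotone submodular function under a matroid
# constraint, passed in as an independence oracle.
def greedy(ground, f, is_independent):
    S = set()
    while True:
        # marginal gain of each feasible element
        cand = [(f(S | {e}) - f(S), e) for e in ground - S
                if is_independent(S | {e})]
        if not cand:
            return S
        gain, e = max(cand)
        if gain <= 0:
            return S
        S.add(e)

# toy coverage instance: each element covers a subset of targets (assumed)
cover = {1: {1, 2, 3}, 2: {3, 4}, 3: {4, 5, 6}, 4: {1, 6}}
f = lambda S: len(set().union(*[cover[e] for e in S]))
k = 2                                  # rank of the uniform matroid
print(greedy(set(cover), f, lambda S: len(S) <= k))
```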

Since deep neural networks were developed, they have made huge contributions to everyday life. In almost every aspect of daily life, machine learning can now offer advice that complements or exceeds human judgment. However, despite this progress, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. The paper then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for user-designed modules. The paper concludes with open problems in applying HPO to deep learning, a comparison of optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
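
As a minimal example of the kind of search loop the surveyed toolkits automate, here is a random-search HPO sketch; the quadratic "validation loss" and the hyper-parameter ranges are assumed stand-ins for a real train/validate cycle.

```python
# Random-search HPO sketch over a learning rate and a depth parameter.
import random

def validation_loss(lr, depth):
    # Hypothetical objective standing in for train-then-validate.
    return (lr - 0.01)**2 * 1e4 + (depth - 6)**2 * 0.1

best = None
for _ in range(100):
    cfg = {"lr": 10 ** random.uniform(-4, -1),   # log-uniform search range
           "depth": random.randint(2, 12)}
    loss = validation_loss(cfg["lr"], cfg["depth"])
    if best is None or loss < best[0]:
        best = (loss, cfg)
print("best config:", best[1], "loss:", round(best[0], 4))
```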

We propose accelerated randomized coordinate descent algorithms for stochastic optimization and online learning. Our algorithms have significantly lower per-iteration complexity than known accelerated gradient algorithms. The proposed algorithms for online learning achieve better regret performance than known randomized online coordinate descent algorithms. Furthermore, the proposed algorithms for stochastic optimization exhibit convergence rates as good as those of the best known randomized coordinate descent algorithms. We also present simulation results to demonstrate the performance of the proposed algorithms.
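
For reference, a minimal non-accelerated randomized coordinate descent sketch on a quadratic, the baseline that accelerated variants such as those proposed here improve upon; the problem data are random assumptions.

```python
# Randomized coordinate descent on f(x) = 0.5 x'Ax - b'x.
import numpy as np

rng = np.random.default_rng(1)
n = 50
R = rng.standard_normal((n, n))
A = R @ R.T + n * np.eye(n)            # symmetric positive definite (assumed)
b = rng.standard_normal(n)
x = np.zeros(n)
L = np.diag(A)                         # coordinate-wise Lipschitz constants
for _ in range(20_000):
    i = rng.integers(n)                # sample one coordinate uniformly
    g = A[i] @ x - b[i]                # partial derivative along coordinate i
    x[i] -= g / L[i]                   # coordinate step with stepsize 1/L_i
print("residual:", np.linalg.norm(A @ x - b))
```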

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
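
A single-machine sketch of the randomized-smoothing primitive behind DRS: replace the non-smooth $f$ by $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ with Gaussian $Z$ and descend along a Monte Carlo estimate of its gradient. The objective, step size, and $\gamma$ below are assumptions; the paper runs this primitive inside a distributed primal-dual scheme rather than a plain descent loop.

```python
# Gaussian randomized smoothing: grad f_gamma(x) = E[(f(x+gZ)-f(x)) Z] / g.
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.abs(x).sum()          # non-smooth convex objective (assumed)
d, gamma, step, K = 20, 0.1, 0.05, 200
x = rng.standard_normal(d)
for _ in range(2000):
    Z = rng.standard_normal((K, d))    # K Gaussian smoothing directions
    fz = np.abs(x + gamma * Z).sum(axis=1) - f(x)
    g = (fz[:, None] * Z).mean(axis=0) / gamma   # smoothed-gradient estimate
    x -= step * g
print("f(x) after smoothing descent:", f(x))
```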

In this paper, we study optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is: strongly convex and smooth, strongly convex, smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
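
For reference, a minimal single-machine Nesterov accelerated gradient sketch on a smooth, strongly convex quadratic, the primitive the paper executes on the dual problem; the problem data and the constant-momentum scheme are illustrative assumptions.

```python
# Nesterov accelerated gradient on f(x) = 0.5 x'Ax - b'x.
import numpy as np

rng = np.random.default_rng(3)
n = 100
R = rng.standard_normal((n, n))
A = R @ R.T / n + np.eye(n)            # smooth, strongly convex (assumed)
b = rng.standard_normal(n)
evals = np.linalg.eigvalsh(A)
L, mu = evals.max(), evals.min()       # smoothness / strong convexity
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))

x = y = np.zeros(n)
for _ in range(300):
    x_new = y - (A @ y - b) / L        # gradient step at the lookahead point
    y = x_new + beta * (x_new - x)     # momentum extrapolation
    x = x_new
print("residual:", np.linalg.norm(A @ x - b))
```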
