
Achieving high bit rates is the main goal of wireless technologies such as 5G and beyond. This translates to obtaining high spectral efficiencies using a large number of antennas at the transmitter and receiver (single-user massive multiple-input multiple-output, or SU-MMIMO). A large number of antennas can fit in a mobile handset at mm-wave frequencies in the range $30 - 300$ GHz due to the small antenna size. In this work, we investigate the bit-error-rate (BER) performance of SU-MMIMO in two scenarios: (a) using a serially concatenated turbo code (SCTC) in an uncorrelated channel and (b) using a parallel concatenated turbo code (PCTC) in a correlated channel. Computer simulation results indicate that the BER is quite insensitive to re-transmissions and to wide variations in the number of transmit and receive antennas. Moreover, we have obtained a BER of $10^{-5}$ at an average signal-to-interference-plus-noise ratio (SINR) per bit of just 1.25 dB with 512 transmit and receive antennas (a $512\times 512$ SU-MMIMO system) and a spectral efficiency of 256 bits/transmission, or 256 bits/sec/Hz, in an uncorrelated channel. Similar BER results have been obtained for SU-MMIMO using PCTC in a correlated channel. A semi-analytic approach to estimating the BER of a turbo code is also derived.
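The paper's semi-analytic derivation is not reproduced here, but the general recipe behind such estimators is easy to illustrate: simulate only the post-detection SINR distribution, then average the analytic bit-error probability over those samples instead of counting rare bit errors directly. A minimal sketch, assuming BPSK-like decisions and a hypothetical Gaussian spread of per-bit SINR (both are illustrative assumptions, not the paper's model):

```python
import numpy as np
from scipy.stats import norm

def semi_analytic_ber(sinr_db_samples):
    """Semi-analytic BER: average the analytic BPSK error probability
    Q(sqrt(2*gamma)) over simulated per-bit SINR samples gamma."""
    gamma = 10.0 ** (np.asarray(sinr_db_samples) / 10.0)
    return np.mean(norm.sf(np.sqrt(2.0 * gamma)))  # Q(x) = norm.sf(x)

# Toy stand-in for simulated post-detection SINR (hypothetical distribution):
rng = np.random.default_rng(0)
sinr_db = 1.25 + 2.0 * rng.standard_normal(100_000)  # assumed mean/spread
print(f"semi-analytic BER ~ {semi_analytic_ber(sinr_db):.2e}")
```

The point of the approach is that the rare-event counting is replaced by an expectation of a smooth analytic function, so far fewer channel realizations are needed to reach BER levels like $10^{-5}$.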

Related content

In the next generation Internet-of-Things, the overhead introduced by grant-based multiple access protocols may engulf the access network as a consequence of the unprecedented number of connected devices. Grant-free access protocols are therefore gaining increasing interest to support massive access from machine-type devices with intermittent activity. In this paper, coded random access (CRA) with massive multiple-input multiple-output (MIMO) is investigated as a solution for designing highly scalable massive multiple access protocols, taking into account stringent requirements on latency and reliability. With a focus on signal processing aspects at the physical layer and their impact on the overall system performance, critical issues of successive interference cancellation (SIC) over fading channels are first analyzed. Then, SIC algorithms and a scheduler are proposed that can overcome some of the limitations of current access protocols. The effectiveness of the proposed processing algorithms is validated by Monte Carlo simulation, for different CRA protocols and by comparison with benchmarks developed for this purpose.
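The SIC principle at the heart of CRA is straightforward to sketch: resolve the strongest contribution in a collision slot, re-modulate it, subtract it from the received signal, and repeat. A toy numpy illustration for two BPSK users on one flat-fading slot (the gains, noise level, and matched-filter decisions are illustrative assumptions, not the paper's algorithms):

```python
import numpy as np

def sic_receiver(y, h):
    """Toy successive interference cancellation on one collision slot:
    detect the strongest remaining user, re-modulate, subtract, repeat."""
    decoded = {}
    residual = y.copy()
    active = list(range(len(h)))
    while active:
        k = max(active, key=lambda u: abs(h[u]))               # strongest remaining user
        decoded[k] = np.sign((np.conj(h[k]) * residual).real)  # matched-filter BPSK decision
        residual = residual - h[k] * decoded[k]                # cancel its contribution
        active.remove(k)
    return decoded

rng = np.random.default_rng(1)
h = np.array([1.5, 0.6]) * np.exp(1j * rng.uniform(0, 2 * np.pi, 2))  # two users' gains
bits = rng.choice([-1.0, 1.0], size=(2, 8))                           # BPSK payloads
noise = np.sqrt(0.05 / 2) * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
y = h[0] * bits[0] + h[1] * bits[1] + noise
decoded = sic_receiver(y, h)
print({k: float((decoded[k] == bits[k]).mean()) for k in decoded})
```

The fading-channel issues the paper analyzes show up exactly here: an imperfect decision or channel estimate leaves residual interference after subtraction, which then propagates to every later cancellation step.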

We consider the problem of communicating a sequence of concepts, i.e., unknown and potentially stochastic maps that can be observed only through examples, since the mapping rules themselves are unknown. The transmitter applies a learning algorithm to the available examples and extracts knowledge from the data by optimizing a probability distribution over a set of models, i.e., known functions that can better describe the observed data, and thus potentially the underlying concepts. The transmitter then needs to communicate the learned models to a remote receiver through a rate-limited channel, to allow the receiver to decode models that describe the underlying sampled concepts as accurately as possible in their semantic space. After motivating our analysis, we formalize the problem of communicating concepts and provide its rate-distortion characterization, pointing out its connection with the notions of empirical and strong coordination in a network. We also provide a bound on the distortion-rate function.
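For reference, the classical Shannon rate-distortion function, which the characterization above generalizes to the semantic setting, is

$$ R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\; \mathbb{E}\left[d(X,\hat{X})\right] \,\le\, D} I(X;\hat{X}), $$

with the paper's setting replacing the source $X$ by the distribution over learned models and $d(\cdot,\cdot)$ by a distortion measured in the concepts' semantic space rather than on raw symbols.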

Ultra-reliable low-latency communication (uRLLC) is adopted in fifth generation (5G) mobile networks to better support mission-critical applications that demand a high level of reliability and low latency. With the aid of well-established multiple-input multiple-output (MIMO) information theory, uRLLC in the future 6G is expected to provide enhanced capability towards extreme connectivity. Since the latency constraint can be represented equivalently by the blocklength, channel coding theory at finite blocklength plays an important role in the theoretical analysis of uRLLC. On the basis of Polyanskiy's and Yang's asymptotic results, we first derive exact closed-form expressions for the expectation and variance of the channel dispersion. Then, a bound on the average maximal achievable rate is given for massive MIMO systems in ideal independent and identically distributed fading channels. To our knowledge, this is the first study to reveal the underlying connections among the fundamental parameters of MIMO transmission in a concise and complete closed-form formula. Most importantly, the inversely proportional law observed therein implies that the latency can be further reduced at the expense of spatial degrees of freedom.
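The finite-blocklength baseline behind this analysis is the well-known normal approximation of Polyanskiy et al., which ties the maximal achievable rate to the blocklength $n$, the target error probability $\epsilon$, the capacity $C$, and the channel dispersion $V$:

$$ R^{*}(n,\epsilon) \;\approx\; C \;-\; \sqrt{\frac{V}{n}}\,Q^{-1}(\epsilon) \;+\; \mathcal{O}\!\left(\frac{\log n}{n}\right), $$

where $Q^{-1}$ is the inverse Gaussian tail function. Shortening $n$ (i.e., lowering latency) costs rate through the dispersion term, which is what motivates deriving closed-form statistics of $V$ for the massive MIMO setting.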

Distributed tensor decomposition (DTD) is a fundamental data-analytics technique that extracts latent important properties from high-dimensional multi-attribute datasets distributed over edge devices. Conventionally, its wireless implementation follows a one-shot approach that first computes local results at the devices using local data and then aggregates them at a server, with communication-efficient techniques such as over-the-air computation (AirComp) for the global computation. Such an implementation is confronted with the issues of limited storage-and-computation capacities and link interruption, which motivates us to propose a framework of on-the-fly communication-and-computing (FlyCom$^2$) in this work. The proposed framework enables streaming computation with low complexity by leveraging a random sketching technique, and achieves progressive global aggregation through the integration of progressive uploading and multiple-input multiple-output (MIMO) AirComp. To develop FlyCom$^2$, an on-the-fly subspace estimator is designed that takes the real-time sketches accumulated at the server and generates online estimates for the decomposition. Its performance is evaluated by deriving both deterministic and probabilistic error bounds using perturbation theory and the concentration of measure. Both results reveal that the decomposition error is inversely proportional to the population of sketching observations received by the server. To further rein in the noise effect on the error, we propose a threshold-based scheme that selects a subset of sufficiently reliable received sketches for DTD at the server. Experimental results validate the performance gain of the proposed selection algorithm and show that, compared to its one-shot counterparts, FlyCom$^2$ achieves comparable (and, in the case of large eigen-gaps, even better) decomposition accuracy while dramatically reducing the devices' complexity costs.
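The streaming ingredient here, random sketching, is easy to illustrate outside the tensor setting: each arriving data block is folded into a small fixed-size sketch, and the dominant subspace is recovered from the sketch alone, so the error naturally shrinks as more sketched observations arrive. A minimal numpy sketch of this generic idea (Gram-style accumulation against a random test matrix, with hypothetical dimensions; this is not the FlyCom$^2$ estimator itself):

```python
import numpy as np

def accumulate_sketch(sketch, block, omega):
    """Fold a newly arrived data block into the running sketch of A^T A Omega,
    so the full data matrix A never needs to be stored."""
    return sketch + block.T @ (block @ omega)

rng = np.random.default_rng(2)
d, r, n_blocks = 64, 4, 50
basis = np.linalg.qr(rng.standard_normal((d, r)))[0]   # ground-truth subspace
omega = rng.standard_normal((d, 2 * r))                # oversampled random test matrix
sketch = np.zeros((d, 2 * r))
for _ in range(n_blocks):                              # blocks stream in one by one
    block = rng.standard_normal((20, r)) @ basis.T + 0.01 * rng.standard_normal((20, d))
    sketch = accumulate_sketch(sketch, block, omega)
u = np.linalg.svd(sketch, full_matrices=False)[0][:, :r]  # online subspace estimate
print("subspace error:", np.linalg.norm(basis - u @ (u.T @ basis)))
```

Rerunning with fewer blocks makes the printed error grow, a toy echo of the paper's finding that the decomposition error is inversely proportional to the number of received sketching observations.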

In conventional backscatter communication (BackCom) systems, time division multiple access (TDMA) and frequency division multiple access (FDMA) are generally adopted for multiuser backscattering due to their simplicity of implementation. However, as the number of backscatter devices (BDs) proliferates, traditional centralized control incurs high overhead, and inter-user coordination is unaffordable for the passive BDs; these issues have received little attention in existing works and remain unsolved. To this end, in this paper we propose slotted ALOHA-based random access for BackCom systems, in which each BD is randomly chosen and allowed to coexist with one active device for hybrid multiple access. To evaluate the performance, a resource allocation problem maximizing the minimum transmission rate is formulated, in which transmit antenna selection, receive beamforming design, reflection coefficient adjustment, power control, and access probability determination are jointly considered. To deal with this intractable problem, we first transform the max-min objective function into an equivalent linear one, and then decompose the resulting problem into three sub-problems. Next, a block coordinate descent (BCD)-based greedy algorithm with a penalty function, successive convex approximation, and linear programming is designed to obtain sub-optimal solutions with tractable analysis. Simulation results demonstrate that the proposed algorithm outperforms benchmark algorithms in terms of transmission rate and fairness.
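Before any joint optimization, the slotted ALOHA backbone already constrains one of the variables above, the access probability. In the textbook model, where each of $n$ devices transmits independently with probability $p$ and a slot succeeds when exactly one device transmits, the success probability $np(1-p)^{n-1}$ peaks at $p = 1/n$; a few lines of numpy confirm this (the device count is an arbitrary assumption):

```python
import numpy as np

def success_prob(n, p):
    """Slotted ALOHA: probability that exactly one of n devices, each
    independently active with probability p, transmits in a given slot."""
    return n * p * (1 - p) ** (n - 1)

n = 50                                    # assumed number of backscatter devices
ps = np.linspace(0.001, 0.2, 2000)
best = ps[np.argmax(success_prob(n, ps))]
print(f"best access probability ~ {best:.4f} (analytic optimum 1/n = {1 / n:.4f})")
```

The paper's setting is richer, since a chosen BD coexists with an active device rather than requiring a singleton slot, but the same trade-off between access probability and collision rate drives the joint design.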

Iterative memory-bound solvers commonly occur in HPC codes. Typical GPU implementations have a loop on the host side that invokes the GPU kernel once per time/algorithm step. The termination of each kernel implicitly acts as the barrier required after advancing the solution at every time step. We propose an execution model for running memory-bound iterative GPU kernels: PERsistent KernelS (PERKS). In this model, the time loop is moved inside a persistent kernel, and device-wide barriers are used for synchronization. We then reduce the traffic to device memory by caching a subset of the output of each time step in the otherwise unused registers and shared memory. PERKS can be generalized to any iterative solver: it is largely independent of the solver's implementation. We explain the design principles of PERKS and demonstrate its effectiveness on a wide range of iterative 2D/3D stencil benchmarks (geomean speedup of $2.12$x for 2D stencils and $1.24$x for 3D stencils over state-of-the-art libraries) and on a Krylov subspace conjugate gradient solver (geomean speedup of $4.86$x on smaller SpMV datasets from SuiteSparse and $1.43$x on larger SpMV datasets over a state-of-the-art library). All PERKS-based implementations are available at: //github.com/neozhang307/PERKS.
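The structural change PERKS makes, hoisting the time loop into the kernel and replacing kernel relaunches with a device-wide barrier, can be mimicked on the CPU with persistent threads and a `threading.Barrier`. The Python sketch below is only a structural analogy (Python threads gain no speedup under the GIL, and the 1D Jacobi stencil, worker count, and step count are all illustrative assumptions):

```python
import threading
import numpy as np

def persistent_worker(u, u_next, lo, hi, steps, barrier):
    """Analogue of a PERKS persistent kernel: the time loop lives inside the
    worker, and one barrier per step replaces the implicit end-of-kernel sync."""
    for _ in range(steps):
        for i in range(max(lo, 1), min(hi, len(u) - 1)):
            u_next[i] = 0.5 * (u[i - 1] + u[i + 1])  # 1D Jacobi stencil update
        barrier.wait()             # all workers have finished writing this step
        u, u_next = u_next, u      # every worker swaps its local views identically

n, workers, steps = 1024, 4, 100
u = np.random.rand(n)
u_next = u.copy()                  # keep boundary values consistent across buffers
barrier = threading.Barrier(workers)
chunk = n // workers
threads = [threading.Thread(target=persistent_worker,
                            args=(u, u_next, w * chunk, (w + 1) * chunk, steps, barrier))
           for w in range(workers)]
for t in threads: t.start()
for t in threads: t.join()
print("checksum after", steps, "steps:", u.sum())
```

On a GPU the same shape is achieved with cooperative launches and grid-wide synchronization, and the payoff PERKS adds on top is keeping part of the working set resident in registers and shared memory across steps instead of round-tripping it through device memory.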

Backscatter communication (BackCom), one of the core technologies for realizing zero-power communication, is expected to be a pivotal paradigm for the next generation of the Internet of Things (IoT). However, the "strong" direct-link (DL) interference (DLI) is traditionally assumed to be harmful, since it generally drowns out the "weak" backscattered signals and thus deteriorates the performance of BackCom. In contrast to previous efforts to eliminate the DLI, in this paper we exploit constructive interference (CI), in which the DLI contributes to the backscattered signal. Specifically, our objective is to maximize the received signal power by jointly optimizing the receive beamforming vectors and tag selection factors, a problem that is non-convex and difficult to solve due to constraints on the Kullback-Leibler (KL) divergence. To solve it, we first decompose the original problem; we then propose two algorithms, via a change of variables and semi-definite programming (SDP), to solve the beamforming sub-problem, and a greedy algorithm to solve the tag-selection sub-problem. To gain insight into the CI, we consider a special case with a single-antenna reader and derive the channel angle between the backscattering link (BL) and the DL at which the DLI becomes constructive. Simulation results show that the proposed algorithms always achieve a significant gain in received signal strength over traditional algorithms that discard the DL. The derived constructive channel angle for the BackCom system with a single-antenna reader is also confirmed by the simulation results.
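The single-antenna special case admits a quick sanity check: with composite BL gain $h_{bl}$ and DL gain $h_{dl}$, the coherent power $|h_{bl}+h_{dl}|^2$ exceeds the incoherent sum $|h_{bl}|^2+|h_{dl}|^2$ exactly when the channel angle between the two links is below $90°$. A toy numpy sweep (all gains are hypothetical, and this is a generic two-path check, not the paper's derivation):

```python
import numpy as np

h_bl = 0.3 * np.exp(1j * 0.2)                 # "weak" backscattering-link gain
for theta in np.linspace(0, np.pi, 7):        # channel angle between DL and BL
    h_dl = 1.0 * np.exp(1j * (0.2 + theta))   # "strong" direct-link gain
    p_coherent = abs(h_bl + h_dl) ** 2            # received power keeping the DL
    p_incoherent = abs(h_bl) ** 2 + abs(h_dl) ** 2  # baseline: powers add without phase
    tag = "constructive" if p_coherent > p_incoherent else "destructive"
    print(f"angle {np.degrees(theta):6.1f} deg: {tag}")
```

The cross term $2|h_{bl}||h_{dl}|\cos\theta$ is what flips sign at $90°$, which is the kind of angle condition the paper derives and then verifies by simulation.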

This paper studies the performance of a transmission and reception scheme for massive access under some practical challenges. One challenge is the near-far problem, i.e., an access point often receives signals from different transmitting devices at vastly different signal strengths. Another challenge is that the signals from different devices may be subject to arbitrary, analog, and heterogeneous delays. This paper considers a fully asynchronous model, which is more realistic than the frame- or symbol-level synchrony needed in most existing work. A main theorem characterizes the asymptotic scaling of the codelength with the number of devices, a device delay upper bound, and the dynamic range of received signal strengths across devices. The scaling result suggests potential advantages of grouping devices with similar received signal strengths and letting the groups use time sharing. The performance of the proposed scheme is evaluated using simulations with and without grouping.
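The intuition behind the grouping suggestion is that time sharing among strength-sorted groups shrinks the dynamic range each group must cope with, which is precisely the quantity the codelength scaling penalizes. A toy numpy illustration with hypothetical received strengths:

```python
import numpy as np

rng = np.random.default_rng(3)
strength_db = rng.uniform(-30, 30, 1000)    # assumed received strengths, 60 dB spread
order = np.sort(strength_db)
groups = np.array_split(order, 4)           # strength-sorted, time-shared groups
print(f"overall dynamic range: {order[-1] - order[0]:.1f} dB")
for g, grp in enumerate(groups):
    print(f"group {g}: dynamic range {grp[-1] - grp[0]:.1f} dB")
```

Each group sees roughly a quarter of the original spread here, at the cost of the time-sharing overhead across groups.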

Intelligent reflecting surfaces (IRSs) have emerged as a promising technology for improving the efficiency of wireless communication systems. However, passive IRSs suffer from the "multiplicative fading" effect, because the transmitted signal goes through two fading hops. With the ability to amplify and reflect signals, active IRSs offer a potential way to tackle this issue, since the amplified signal experiences only the second hop. However, the fundamental limits and system design for active IRSs have not been fully understood, especially for multiple-input multiple-output (MIMO) systems. In this work, we consider the analysis and design of a large-scale active IRS-aided MIMO system assuming only statistical channel state information (CSI) at the transmitter and the IRS. The evaluation of the fundamental limit, i.e., the ergodic rate, turns out to be a very difficult problem. To this end, we leverage random matrix theory (RMT) to derive a deterministic approximation (DA) for the ergodic rate, and then design an algorithm to jointly optimize the transmit covariance matrix at the transmitter and the reflection matrix at the active IRS. Numerical results demonstrate the accuracy of the derived DA and the effectiveness of the proposed optimization algorithm. The results in this work reveal interesting physical insights into the advantage of active IRSs over their passive counterparts.
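The quantity the DA approximates can always be checked by brute force: sample the two-hop channels, build the effective channel and the noise covariance (including the noise the active IRS amplifies and forwards), and average the log-det rate. A small Monte Carlo sketch under i.i.d. Rayleigh fading with isotropic inputs, random IRS phases, and no direct link (all model parameters are illustrative assumptions, not the paper's optimized design):

```python
import numpy as np

def ergodic_rate_mc(nt, nr, n_irs, amp, sigma_v, snr, trials=200):
    """Monte Carlo ergodic rate of a toy active-IRS MIMO link H_eff = H2 Psi H1,
    where the IRS gain `amp` also forwards its own noise of variance sigma_v."""
    rng = np.random.default_rng(4)
    rates = []
    for _ in range(trials):
        h1 = (rng.standard_normal((n_irs, nt)) + 1j * rng.standard_normal((n_irs, nt))) / np.sqrt(2)
        h2 = (rng.standard_normal((nr, n_irs)) + 1j * rng.standard_normal((nr, n_irs))) / np.sqrt(2)
        psi = amp * np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n_irs)))
        h_eff = h2 @ psi @ h1
        # Noise covariance: receiver thermal noise + noise amplified at the IRS.
        n_cov = np.eye(nr) + sigma_v * (h2 @ psi) @ (h2 @ psi).conj().T
        k = snr / nt * h_eff @ h_eff.conj().T          # isotropic transmit covariance
        rates.append(np.log2(np.linalg.det(np.eye(nr) + np.linalg.solve(n_cov, k)).real))
    return np.mean(rates)

print("ergodic rate ~ %.2f bits/s/Hz" % ergodic_rate_mc(4, 4, 32, amp=2.0, sigma_v=0.1, snr=1.0))
```

The value of the RMT deterministic approximation is replacing this averaging, which is far too slow inside an optimization loop over the transmit covariance and reflection matrix, with a closed-form surrogate.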

To address the sparsity and cold-start problems of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance. This paper considers the knowledge graph as the source of side information. To address the limitations of existing embedding-based and path-based methods for knowledge-graph-aware recommendation, we propose Ripple Network, an end-to-end framework that naturally incorporates the knowledge graph into recommender systems. Similar to actual ripples propagating on the surface of water, Ripple Network stimulates the propagation of user preferences over the set of knowledge entities by automatically and iteratively extending a user's potential interests along links in the knowledge graph. The multiple "ripples" activated by a user's historically clicked items are thus superposed to form the preference distribution of the user with respect to a candidate item, which can be used to predict the final click probability. Through extensive experiments on real-world datasets, we demonstrate that Ripple Network achieves substantial gains in a variety of scenarios, including movie, book, and news recommendation, over several state-of-the-art baselines.
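The hop-by-hop propagation is simple to caricature: start from the user's clicked items, expand one hop along knowledge-graph triples at a time, score each reached tail entity against the candidate item, and attenuate each successive ripple. A deliberately tiny Python sketch (the embeddings, relation matrix, decay factor, and 4-entity graph are all made-up stand-ins for the quantities Ripple Network learns end-to-end):

```python
import numpy as np

def ripple_score(user_items, kg, emb, rel, item, hops=2, decay=0.5):
    """Toy ripple propagation: expand clicked items hop by hop along KG triples
    and accumulate similarity-weighted votes for a candidate item."""
    score, frontier, weight = 0.0, set(user_items), 1.0
    for _ in range(hops):
        next_frontier = set()
        for head in frontier:
            for r, tail in kg.get(head, []):
                score += weight * (emb[tail] @ rel[r] @ emb[item])  # tail vs candidate
                next_frontier.add(tail)
        frontier, weight = next_frontier, weight * decay  # ripples attenuate per hop
    return score

# Hypothetical 4-entity KG with one relation and 2-d embeddings.
emb = {e: v for e, v in zip("abcd", 1.0 * np.eye(2)[[0, 0, 1, 1]])}
rel = {"directed_by": np.eye(2)}
kg = {"a": [("directed_by", "c")], "c": [("directed_by", "d")]}
print("score for candidate d:", ripple_score({"a"}, kg, emb, rel, "d"))
```

The real model replaces the hard frontier sets with soft attention over head-relation-tail triples and learns the embeddings jointly with the click predictor, but the superposition of decaying hop-wise contributions is the same idea.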
