
This paper presents LuMaMi28, a real-time 28 GHz massive multiple-input multiple-output (MIMO) testbed. In this testbed, the base station has 16 transceiver chains with a fully digital beamforming architecture (supporting different precoding algorithms) and simultaneously serves multiple user equipments (UEs) with spatial multiplexing. The UEs are equipped with a beam-switchable antenna array for real-time antenna selection, where the beam with the highest channel magnitude, out of four pre-defined beams, is selected. For the beam-switchable antenna array, we consider two kinds of UE antennas with different beam-widths and peak gains. Based on this testbed, we provide measurement results for millimeter-wave (mmWave) massive MIMO performance in different real-life scenarios with static and mobile UEs. We explore the potential benefit of mmWave massive MIMO systems with antenna selection based on measured channel data, and discuss the performance results obtained through real-time measurements.
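
As a simple illustration of the UE-side selection rule, the following sketch (hypothetical code, not the testbed implementation) picks, out of four pre-defined beams, the one whose measured channel coefficient has the largest magnitude; the function name and the random channel draw are assumptions made for the example.

```python
import numpy as np

def select_beam(channel_per_beam):
    """Return the index of the pre-defined beam with the largest channel magnitude."""
    return int(np.argmax(np.abs(channel_per_beam)))

# Toy example: four candidate beams, the UE switches to the strongest one.
rng = np.random.default_rng(0)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
print("selected beam:", select_beam(h), "with magnitudes", np.abs(h).round(2))
```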

Related content

We propose a joint channel estimation and data detection (JED) algorithm for densely populated cell-free massive multiuser (MU) multiple-input multiple-output (MIMO) systems, which reduces the channel training overhead caused by the presence of hundreds of simultaneously transmitting user equipments (UEs). Our algorithm iteratively solves a relaxed version of a maximum a-posteriori JED problem and simultaneously exploits the sparsity of cell-free massive MU-MIMO channels as well as the boundedness of QAM constellations. To improve the performance and convergence of the algorithm, we propose methods that permute the access point and UE indices to form so-called virtual cells, which leads to better initial solutions. We assess the performance of our algorithm in terms of root-mean-square symbol error, bit error rate, and mutual information, and we demonstrate that JED significantly reduces the pilot overhead compared to orthogonal training, which enables reliable communication with short packets to a large number of UEs.
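
To make the structure of such iterations concrete, here is a heavily simplified sketch of a JED-style alternating update under several assumptions: plain least-squares updates stand in for the relaxed MAP iterations, the channel-sparsity prior and virtual-cell permutations are omitted, and all names and dimensions are illustrative.

```python
import numpy as np

def jed_alternating(Y, S_pilot, num_data, iters=10):
    """Toy joint channel estimation and data detection (JED) sketch.

    Y:        received matrix, shape (B, T), with T = Tp + Td slots
    S_pilot:  known pilot symbols, shape (U, Tp)
    num_data: Td, number of payload slots
    Alternates LS channel estimation with a box-constrained data update,
    a stand-in for the relaxed MAP iterations described in the paper.
    """
    U, Tp = S_pilot.shape
    S_data = np.zeros((U, num_data), dtype=complex)      # initial data guess
    H = np.zeros((Y.shape[0], U), dtype=complex)
    for _ in range(iters):
        S = np.concatenate([S_pilot, S_data], axis=1)
        H = Y @ np.linalg.pinv(S)                         # LS channel update
        Z = np.linalg.pinv(H) @ Y[:, Tp:]                 # LS data update
        # Exploit QAM boundedness: clip real/imag parts to the constellation box.
        S_data = np.clip(Z.real, -1, 1) + 1j * np.clip(Z.imag, -1, 1)
    return H, S_data

# Toy usage: 32 receive antennas, 4 UEs, 4 pilot slots and 16 data slots.
rng = np.random.default_rng(0)
B, U, Tp, Td = 32, 4, 4, 16
H_true = (rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))) / np.sqrt(2)
S_p = np.exp(2j * np.pi * rng.random((U, Tp)))            # unit-modulus pilots
S_d = (rng.choice([-1, 1], (U, Td)) + 1j * rng.choice([-1, 1], (U, Td))) / np.sqrt(2)
Y = H_true @ np.concatenate([S_p, S_d], axis=1)
Y += 0.05 * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape))
H_hat, S_hat = jed_alternating(Y, S_p, Td)
print("max data-symbol estimation error:", np.max(np.abs(S_hat - S_d)).round(3))
```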

THz transmissions suffer from pointing errors due to antenna misalignment and incur higher path loss because of molecular absorption at such high frequencies. In this paper, we employ an amplify-and-forward (AF) dual-hop relay to mitigate the effect of pointing errors and extend the range of a wireless backhaul network. We provide a statistical analysis of the performance of the considered system by deriving analytical expressions for the outage probability, average bit-error rate (BER), average signal-to-noise ratio (SNR), and a lower bound on the ergodic capacity over an independent and identically distributed (i.i.d.) $\alpha$-$\mu$ fading model with statistical pointing errors. Using computer simulations, we validate the derived analysis of the relay-assisted system. We demonstrate the effect of the system parameters on the outage probability and average BER with the help of the diversity order. We show that data rates of up to several Gbps can be achieved using THz transmissions, which is desirable for next-generation wireless systems, especially for backhaul applications.
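
A minimal Monte Carlo sketch of the outage analysis is given below, assuming the common min(SNR) upper bound on the end-to-end SNR of a CSI-assisted AF relay and an $\alpha$-$\mu$ envelope sampled through its gamma-distributed $\alpha$-th power; pointing errors are omitted for brevity, and all parameter values are illustrative.

```python
import numpy as np

def alpha_mu_power_gain(alpha, mu, size, rng, r_hat=1.0):
    """Sample the squared envelope |h|^2 of alpha-mu fading:
    R^alpha ~ Gamma(mu, r_hat^alpha / mu), hence R = X**(1/alpha)."""
    x = rng.gamma(shape=mu, scale=r_hat**alpha / mu, size=size)
    return x ** (2.0 / alpha)

def outage_prob_af_relay(snr_db, gamma_th_db, alpha=2.0, mu=1.5,
                         n=200_000, seed=1):
    """Monte Carlo outage probability of a dual-hop relay, using the common
    min(SNR1, SNR2) upper bound on the end-to-end SNR of CSI-assisted AF."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    g1 = snr * alpha_mu_power_gain(alpha, mu, n, rng)   # first hop SNR
    g2 = snr * alpha_mu_power_gain(alpha, mu, n, rng)   # second hop SNR
    gamma_e2e = np.minimum(g1, g2)
    return np.mean(gamma_e2e < 10 ** (gamma_th_db / 10))

print("estimated outage probability:", outage_prob_af_relay(snr_db=20.0, gamma_th_db=5.0))
```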

We present an analytical framework for the channel estimation and the data detection in massive multiple-input multiple-output uplink systems with 1-bit analog-to-digital converters (ADCs) and i.i.d. Rayleigh fading. First, we provide closed-form expressions of the mean squared error (MSE) of the channel estimation considering the state-of-the-art linear minimum MSE estimator and the class of scaled least-squares estimators. For the data detection, we provide closed-form expressions of the expected value and the variance of the estimated symbols when maximum ratio combining is adopted, which can be exploited to efficiently implement minimum distance detection and, potentially, to design the set of transmit symbols. Our analytical findings explicitly depend on key system parameters such as the signal-to-noise ratio (SNR), the number of user equipments, and the pilot length, thus enabling a precise characterization of the performance of the channel estimation and the data detection with 1-bit ADCs. The proposed analysis highlights a fundamental SNR trade-off, according to which operating at the right noise level significantly enhances the system performance.
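
The following toy simulation (not the paper's analytical framework) illustrates the setting: the received signal is quantized by 1-bit ADCs per real and imaginary component, combined with maximum ratio combining under the simplifying assumption of perfect CSI, and the resulting symbol error rate is measured empirically.

```python
import numpy as np

def one_bit(x):
    """1-bit ADC: keep only the sign of the real and imaginary parts."""
    return (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2)

# Toy uplink: B antennas, one UE, QPSK, i.i.d. Rayleigh fading, perfect CSI
# at the combiner (the paper additionally analyzes the channel-estimation stage).
rng = np.random.default_rng(0)
B, snr_db, trials = 64, 0.0, 20_000
snr = 10 ** (snr_db / 10)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
errors = 0
for _ in range(trials):
    h = (rng.standard_normal(B) + 1j * rng.standard_normal(B)) / np.sqrt(2)
    s = qpsk[rng.integers(4)]
    n = (rng.standard_normal(B) + 1j * rng.standard_normal(B)) / np.sqrt(2)
    r = one_bit(np.sqrt(snr) * h * s + n)           # quantized observation
    z = np.vdot(h, r)                               # maximum ratio combining
    s_hat = (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)
    errors += (s_hat != s)
print("symbol error rate:", errors / trials)
```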

Mobile parcel lockers (MPLs) have recently been introduced by urban logistics operators as a means to reduce traffic congestion and operational cost. Their capability to relocate during the day has the potential to improve customer accessibility and convenience (if deployed and planned accordingly), allowing customers to collect parcels at their preferred time from one of multiple locations. This paper proposes an integer programming model to solve the Location Routing Problem for MPLs and determine the optimal configuration and locker routes. To solve this model, a Hybrid Q-Learning-algorithm-based Method (HQM) integrated with global and local search mechanisms is developed, whose performance is examined for different problem sizes and benchmarked against genetic algorithms. Furthermore, we introduce two route adjustment strategies to resolve stochastic events that may cause delays. The results show that HQM achieves an average solution improvement of 443.41%, compared with 94.91% for its heuristic counterparts, suggesting that HQM enables a more efficient search for better solutions. Finally, we identify critical factors that contribute to service delays and investigate their effects.
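
For readers unfamiliar with the learning component, the sketch below shows a generic tabular Q-learning update of the kind an HQM-style method couples with global and local search; the state/action encoding, reward, and parameters are purely illustrative assumptions and not the paper's formulation.

```python
import numpy as np

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update toward the temporal-difference target."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

# Hypothetical toy: states and actions are candidate locker sites, and the
# reward penalizes travel distance between consecutive sites.
num_locations = 5
Q = np.zeros((num_locations, num_locations))
rng = np.random.default_rng(0)
state = 0
for _ in range(1000):
    action = int(rng.integers(num_locations))      # pure exploration for brevity
    reward = -abs(action - state)                  # toy cost: travel distance
    q_update(Q, state, action, reward, action)
    state = action
print("greedy next-location per site:", np.argmax(Q, axis=1))
```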

We investigate a multi-pair two-way decode-and-forward relaying aided massive multiple-input multiple-output antenna system under Rician fading channels, in which multiple pairs of users exchange information through a relay station having multiple antennas. Imperfect channel state information is considered in the context of maximum-ratio processing. Closed-form expressions are derived for approximating the sum spectral efficiency (SE) of the system. Moreover, we obtain the power-scaling laws at the users and the relay station to satisfy a certain SE requirement in three typical scenarios. Finally, simulations validate the accuracy of the derived results.

We consider an information update system consisting of $N$ sources sending status packets at random instances according to a Poisson process to a remote monitor through a single server. We assume a heterogeneous server with exponentially distributed service times which is equipped with a waiting room holding the freshest packet from each source, referred to as Single Buffer Per-Source Queueing (SBPSQ). The sources are assumed to be equally important, i.e., the non-weighted average AoI is used as the information freshness metric, and subsequently two symmetric scheduling policies are studied in this paper, namely the First Source First Serve (FSFS) and Earliest Served First Serve (ESFS) policies, the latter being proposed, to the best of our knowledge, for the first time in this paper. By employing the theory of Markov Fluid Queues (MFQ), an analytical model is proposed to obtain the exact distribution of the Age of Information (AoI) for each source when the FSFS and ESFS policies are employed at the server. Subsequently, a benchmark scheduling-free scheme named Single Buffer with Replacement (SBR), which uses a single one-packet buffer shared by all sources, is also studied with a similar but less complex analytical model. We comparatively study the performance of the three schemes through numerical examples and show that the proposed ESFS policy outperforms the other two schemes in terms of the average AoI and the age violation probability averaged across all sources, in a scenario where the sources possess different traffic intensities but share a common service time.
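
As a rough companion to the analytical model, the following event-driven sketch estimates the per-source average AoI of the SBR benchmark under simplifying assumptions (a common exponential service rate and equal per-source Poisson arrival rates); it is not the MFQ-based exact analysis.

```python
import math, random

def sbr_average_aoi(num_sources=3, lam=0.4, mu=1.0, horizon=2e5, seed=1):
    """Event-driven sketch of the SBR benchmark: all sources share a single
    one-packet waiting slot, a newer arrival replaces the waiting packet, and
    services are exponential with a common rate mu (a simplification of the
    model in the paper). Returns the time-average AoI of each source."""
    rng = random.Random(seed)
    in_service, buffered = None, None              # (source, generation time)
    next_arrival, departure = rng.expovariate(num_sources * lam), math.inf
    last_gen = [0.0] * num_sources                 # generation time of last delivery
    last_dep = [0.0] * num_sources                 # time of last delivery
    area = [0.0] * num_sources                     # integral of the AoI sawtooth
    while min(next_arrival, departure) < horizon:
        if next_arrival < departure:               # arrival event
            t = next_arrival
            pkt = (rng.randrange(num_sources), t)
            if in_service is None:
                in_service, departure = pkt, t + rng.expovariate(mu)
            else:
                buffered = pkt                     # replacement keeps the freshest packet
            next_arrival = t + rng.expovariate(num_sources * lam)
        else:                                      # departure (delivery) event
            t = departure
            src, gen = in_service
            area[src] += (t - last_dep[src]) * ((last_dep[src] + t) / 2 - last_gen[src])
            last_gen[src], last_dep[src] = gen, t
            if buffered is not None:
                in_service, buffered = buffered, None
                departure = t + rng.expovariate(mu)
            else:
                in_service, departure = None, math.inf
    for s in range(num_sources):                   # close the final sawtooth segment
        area[s] += (horizon - last_dep[s]) * ((last_dep[s] + horizon) / 2 - last_gen[s])
    return [a / horizon for a in area]

print([round(a, 2) for a in sbr_average_aoi()])
```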

Contention-based wireless channel access methods like CSMA and ALOHA paved the way for the rise of the Internet of Things in industrial applications (IIoT). However, to cope with increasing demands for reliability and throughput, several mostly TDMA-based protocols, such as IEEE 802.15.4 and its extensions, were proposed. Nonetheless, many of these IIoT protocols still require contention-based communication, e.g., for slot allocation and broadcast transmission. In many cases, subtle but hidden patterns characterize this secondary traffic. Existing contention-based protocols are unaware of these hidden patterns and therefore cannot exploit this information. Especially in dense networks, they often do not provide sufficient reliability for primary traffic, e.g., they are unable to allocate transmission slots in time. In this paper, we propose QMA, a contention-based multiple access scheme based on Q-learning, which dynamically adapts transmission times to avoid collisions by learning patterns in the contention-based traffic. QMA is designed to be resource-efficient and targets small embedded devices. We show that QMA solves the hidden node problem without the additional overhead of RTS/CTS messages and verify the behaviour of QMA in the FIT IoT-LAB testbed. Finally, QMA's scalability is studied by simulation, where it is used for GTS allocation in IEEE 802.15.4 DSME. Results show that QMA considerably increases reliability and throughput in comparison to CSMA/CA, especially in networks with a high load.
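
A stripped-down sketch of the underlying idea is shown below: two nodes run stateless Q-learning over the slots of a frame and learn collision-avoiding transmission times; the reward design, parameters, and the absence of any protocol state are assumptions for illustration and do not reproduce QMA itself.

```python
import numpy as np

# Toy stateless Q-learning slot selection: each node keeps one Q-value per slot,
# gets reward +1 for a collision-free transmission and -1 for a collision, and
# thereby learns transmission times that avoid the other node's pattern.
rng = np.random.default_rng(0)
num_slots, num_nodes, alpha, eps = 4, 2, 0.1, 0.1
Q = np.zeros((num_nodes, num_slots))
for frame in range(2000):
    choices = [int(rng.integers(num_slots)) if rng.random() < eps
               else int(np.argmax(Q[n])) for n in range(num_nodes)]
    for n, slot in enumerate(choices):
        reward = 1.0 if choices.count(slot) == 1 else -1.0
        Q[n, slot] += alpha * (reward - Q[n, slot])
print("learned slots:", [int(np.argmax(Q[n])) for n in range(num_nodes)])
```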

Unmanned aerial vehicles (UAVs) can be integrated into wireless sensor networks (WSNs) for smart city applications in several ways. Among them, a UAV can be employed as a relay in a "store-carry-and-forward" fashion, uploading data from ground sensors and metering devices and then downloading it to a central unit. However, both the uploading and downloading phases can be prone to potential threats and attacks. As a legacy from traditional wireless networks, the jamming attack is still one of the most serious threats to UAV-aided communications, especially when the jammer is also mobile, e.g., mounted on a UAV or inside a terrestrial vehicle. In this paper, we investigate anti-jamming communications for UAV-aided WSNs operating over doubly-selective channels in the downloading phase. In such a scenario, the signals transmitted by the UAV and the malicious mobile jammer undergo both time dispersion due to multipath propagation and frequency dispersion caused by their mobility. To suppress high-power jamming signals, we propose a blind physical-layer technique that jointly detects the UAV and jammer symbols through serial disturbance cancellation based on symbol-level post-sorting of the detector output. The amplitudes, phases, time delays, and Doppler shifts required to implement the proposed detection strategy are blindly estimated from data using algorithms that exploit the almost-cyclostationarity properties of the received signal and the detailed structure of the multicarrier modulation format. Simulation results corroborate the anti-jamming capabilities of the proposed method for different mobility scenarios of the jammer.
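
The sketch below illustrates the serial-cancellation-with-post-sorting idea in a heavily simplified form: flat, perfectly known channels replace the doubly-selective channels and the blind estimation stage, and a single strong jammer stream is detected and subtracted before the UAV stream.

```python
import numpy as np

def sic_detect(r, h_list, constellation):
    """Serial cancellation with post-sorting: the stream with the strongest
    matched-filter output (typically the high-power jammer) is detected and
    subtracted first, then the remaining streams are detected in turn.
    Channels are assumed flat and already estimated; the paper's blind
    estimation of amplitudes, delays and Doppler shifts is not reproduced."""
    r = np.array(r, dtype=complex)
    decisions = [None] * len(h_list)
    remaining = list(range(len(h_list)))
    while remaining:
        k = max(remaining, key=lambda i: abs(np.vdot(h_list[i], r)))   # post-sorting
        est = np.vdot(h_list[k], r) / np.vdot(h_list[k], h_list[k])
        decisions[k] = constellation[np.argmin(np.abs(constellation - est))]
        r -= h_list[k] * decisions[k]                                   # cancel it
        remaining.remove(k)
    return decisions

# Toy example: one UAV stream plus a ~14 dB stronger jammer on 8 receive antennas.
rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
h_uav = (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(2)
h_jam = 5 * (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(2)
s_true = [qpsk[1], qpsk[3]]
r = h_uav * s_true[0] + h_jam * s_true[1]
r += 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
print("detected:", sic_detect(r, [h_uav, h_jam], qpsk), "true:", s_true)
```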

The stringent reliability and processing-latency requirements of ultra-reliable low-latency communication (URLLC) traffic make the design of linear massive multiple-input multiple-output (M-MIMO) receivers very challenging. Recently, Bayesian concepts have been used to increase the detection reliability of minimum-mean-square-error (MMSE) linear receivers. However, processing latency remains a major concern due to the complexity of the matrix inversion operations in MMSE schemes. This paper proposes an iterative M-MIMO receiver developed from a Bayesian concept and a parallel interference cancellation (PIC) scheme, referred to as a linear Bayesian learning (LBL) receiver. PIC has linear complexity because it uses a combination of maximum ratio combining (MRC) and decision statistic combining (DSC) to avoid matrix inversion operations. Simulation results show that, in terms of bit-error rate (BER) and processing latency, the proposed receiver outperforms the MMSE and the best Bayesian-based receivers by at least $2$ dB and a factor of $19$, respectively, for various M-MIMO system configurations.
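
A matrix-inversion-free PIC/MRC iteration of the kind referred to above can be sketched as follows; this toy version omits the Bayesian symbol priors and uses a simple fixed DSC-style averaging, so it illustrates the linear-complexity structure rather than the full LBL receiver.

```python
import numpy as np

def pic_mrc_detector(y, H, iters=5):
    """Matrix-inversion-free PIC sketch: each iteration cancels the other
    users' current soft estimates in parallel and re-detects every user with
    MRC; a fixed averaging step stands in for decision statistic combining."""
    B, U = H.shape
    col_energy = np.sum(np.abs(H) ** 2, axis=0)          # ||h_u||^2 per user
    x = (H.conj().T @ y) / col_energy                    # MRC initialization
    for _ in range(iters):
        x_new = np.empty_like(x)
        for u in range(U):
            residual = y - H @ x + H[:, u] * x[u]        # cancel all users but u
            x_new[u] = np.vdot(H[:, u], residual) / col_energy[u]
        x = 0.5 * x + 0.5 * x_new                        # DSC-style combining
    return x

# Toy check: 64 antennas, 8 users, QPSK, light noise -> estimates near the symbols.
rng = np.random.default_rng(0)
B, U = 64, 8
H = (rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))) / np.sqrt(2)
s = (rng.choice([-1, 1], U) + 1j * rng.choice([-1, 1], U)) / np.sqrt(2)
y = H @ s + 0.1 * (rng.standard_normal(B) + 1j * rng.standard_normal(B))
print(np.round(pic_mrc_detector(y, H), 2))
```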

The stringent requirements on reliability and processing delay in fifth-generation ($5$G) cellular networks introduce considerable challenges in the design of massive multiple-input multiple-output (M-MIMO) receivers. The two main components of an M-MIMO receiver are a detector and a decoder. To improve the trade-off between reliability and complexity, the Bayesian concept has been considered a promising approach for enhancing classical detectors, e.g., the minimum-mean-square-error detector. This work proposes an iterative M-MIMO detector based on a Bayesian framework, a parallel interference cancellation scheme, and a decision-statistics-combining concept. We then develop a high-performance M-MIMO receiver by integrating the proposed detector with low-complexity sequential decoding of polar codes. Simulation results for the proposed detector show a significant performance gain compared to other low-complexity detectors. Furthermore, the proposed M-MIMO receiver with sequential decoding achieves an order of magnitude lower complexity than a receiver with stack successive cancellation decoding of polar codes from the 5G New Radio standard.
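
To hint at the decoder side of such a receiver, the sketch below implements a plain successive-cancellation decoder for a toy (8,4) polar code as a stand-in for the low-complexity sequential decoding used in the paper; the frozen-bit choice and the AWGN-based soft input are illustrative assumptions.

```python
import numpy as np

def polar_encode(u):
    """Natural-order polar transform x = u * F^{(tensor)n} (no bit reversal)."""
    n = len(u)
    if n == 1:
        return u[:]
    a, b = polar_encode(u[:n // 2]), polar_encode(u[n // 2:])
    return [x ^ y for x, y in zip(a, b)] + b

def sc_decode(llr, frozen):
    """Plain successive-cancellation decoding (min-sum), used here as a simple
    stand-in for the sequential/stack decoders discussed in the paper.
    Returns (decoded u bits, re-encoded codeword) for the recursion."""
    n = len(llr)
    if n == 1:
        u = 0 if frozen[0] or llr[0] >= 0 else 1
        return [u], [u]
    la, lb = llr[:n // 2], llr[n // 2:]
    f = [np.sign(a) * np.sign(b) * min(abs(a), abs(b)) for a, b in zip(la, lb)]
    u_a, c_a = sc_decode(f, frozen[:n // 2])
    g = [b + (1 - 2 * ca) * a for a, b, ca in zip(la, lb, c_a)]
    u_b, c_b = sc_decode(g, frozen[n // 2:])
    return u_a + u_b, [x ^ y for x, y in zip(c_a, c_b)] + c_b

# (8,4) toy polar code: information set taken from a standard reliability order.
N, info_set = 8, {3, 5, 6, 7}
frozen = [i not in info_set for i in range(N)]
rng = np.random.default_rng(0)
u = [0 if frozen[i] else int(rng.integers(2)) for i in range(N)]
x = polar_encode(u)
tx = np.array([1.0 - 2 * b for b in x])                      # BPSK: bit 0 -> +1
llr = 2 * (tx + 0.3 * rng.standard_normal(N)) / 0.3 ** 2     # toy soft detector output
u_hat, _ = sc_decode(list(llr), frozen)
print("info bits recovered:",
      [u_hat[i] for i in sorted(info_set)] == [u[i] for i in sorted(info_set)])
```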
