
In this paper, we study the performance of wideband terahertz (THz) communications assisted by an intelligent reflecting surface (IRS). Specifically, we first introduce a generalized channel model that is suitable for electrically large THz IRSs operating in the near-field. Unlike prior works, our channel model takes into account the spherical wavefront of the emitted electromagnetic waves and the spatial-wideband effect. We next show that conventional frequency-flat beamfocusing significantly reduces the power gain due to beam squint, and hence is highly suboptimal. More importantly, we analytically characterize this reduction when the spacing between adjacent reflecting elements is negligible, i.e., holographic reflecting surfaces. Numerical results corroborate our analysis and provide important insights into the design of future IRS-aided THz systems.
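As a rough illustration of the beam-squint effect the abstract describes, the sketch below designs frequency-flat (fc-matched) phases for a near-field focal point using exact spherical-wavefront distances, then evaluates the array gain away from the center frequency. The geometry, element count, and frequencies are arbitrary assumptions for illustration, not the paper's setup.

```python
import numpy as np

c = 3e8                      # speed of light (m/s)
fc = 300e9                   # design (center) frequency: 300 GHz (assumed)
N = 256                      # reflecting elements (assumed)
d = c / fc / 2               # half-wavelength spacing at fc
x = (np.arange(N) - N / 2) * d           # element positions on a line

tx = np.array([0.0, 2.0])                # transmitter location (near field)
rx = np.array([1.0, 1.5])                # receiver location

# Propagation delay through element n (spherical wavefront: exact distances).
elem = np.stack([x, np.zeros(N)], axis=1)
tau = (np.linalg.norm(elem - tx, axis=1) +
       np.linalg.norm(elem - rx, axis=1)) / c

# Frequency-flat phase profile: compensates the delays only at fc.
phase = 2 * np.pi * fc * tau

def gain(f):
    """Normalized array gain at frequency f with the fc-designed phases."""
    return np.abs(np.exp(1j * (2 * np.pi * f * tau - phase)).sum()) / N

g_center = gain(fc)          # perfectly focused at the design frequency
g_edge = gain(fc + 15e9)     # 15 GHz away: beam squint degrades the gain
```

The phases cancel the per-element delays exactly at fc, so `g_center` is 1; at an offset frequency the residual phase error grows across the aperture and the coherent sum collapses, which is the power-gain reduction the abstract quantifies.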

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series on model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition gives the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling to emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
December 31, 2021

Future cellular networks are expected to support new communication paradigms such as machine-type communication (MTC) services along with human-type communication (HTC) services. This requires base stations to serve a large number of devices within relatively short channel coherence intervals, which renders approaches that allocate an orthogonal pilot sequence to each device impractical. Furthermore, the stringent power constraints, place-and-play type connectivity, and diverse data rate requirements of MTC devices make it impossible for the traditional cellular architecture to accommodate MTC and HTC services together. Massive multiple-input multiple-output (MaMIMO) technology has the potential to allow the coexistence of HTC and MTC services, thanks to its inherent spatial multiplexing properties and low transmission power requirements. In this work, we investigate the performance of a single cell under a shared physical channel assumption for MTC and HTC services and propose a novel scheme for sharing the time-frequency resources. The analysis reveals that MaMIMO can significantly enhance the performance of such a setup and allow the inclusion of MTC services in cellular networks without requiring additional resources.
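The spatial-multiplexing property that the abstract relies on is often explained through favorable propagation: as the number of base-station antennas grows, i.i.d. channel vectors of different devices become nearly orthogonal. A minimal Monte-Carlo sketch (array sizes and trial counts are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_cross_correlation(M, trials=200):
    """Estimate E[|h1^H h2| / M] for i.i.d. CN(0,1) channels of length M."""
    vals = []
    for _ in range(trials):
        h1 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        h2 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        vals.append(abs(np.vdot(h1, h2)) / M)
    return np.mean(vals)

rho_small = avg_cross_correlation(M=8)     # few antennas: noticeable cross-talk
rho_large = avg_cross_correlation(M=512)   # massive array: near-orthogonal users
```

The normalized inner product shrinks roughly as 1/sqrt(M), which is what lets MTC and HTC devices share the same time-frequency resources and be separated spatially.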

Achieving high channel estimation accuracy and reducing hardware cost as well as power dissipation constitute substantial challenges in the design of massive multiple-input multiple-output (MIMO) systems. To resolve these difficulties, sophisticated pilot designs have been conceived for the family of energy-efficient hybrid analog-digital (HAD) beamforming architectures relying on adaptive-resolution analog-to-digital converters (RADCs). In this paper, we jointly optimize the pilot sequences, the number of RADC quantization bits, and the hybrid receiver combiner in the uplink of multiuser massive MIMO systems. We solve the associated mean square error (MSE) minimization problem of channel estimation in the context of correlated Rayleigh fading channels subject to practical constraints. The associated mixed-integer problem is quite challenging due to the nonconvex nature of the objective function and of the constraints. By relying on advanced fractional programming (FP) techniques, we first recast the original problem into a more tractable yet equivalent form, which allows the decoupling of the fractional objective function. We then conceive a pair of novel algorithms for solving the resultant problems for codebook-based and codebook-free pilot schemes, respectively. To reduce the design complexity, we also propose a simplified algorithm for the codebook-based pilot scheme. Our simulation results confirm the superiority of the proposed algorithms over the relevant state-of-the-art benchmark schemes.
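To make the bit-count/MSE trade-off concrete, here is a minimal scalar Monte-Carlo sketch: a pilot observation is passed through a uniform b-bit quantizer and a simple linear estimator, and the channel-estimation MSE is compared for coarse and fine resolution. The SNR, clipping range, and estimator are illustrative assumptions, not the paper's FP-based design.

```python
import numpy as np

rng = np.random.default_rng(1)

def channel_mse(bits, trials=20000, snr_db=10):
    """Monte-Carlo MSE of a scalar channel estimate from a b-bit quantized pilot."""
    snr = 10 ** (snr_db / 10)
    h = rng.standard_normal(trials)                      # channel, unit variance
    y = h + rng.standard_normal(trials) / np.sqrt(snr)   # noisy pilot observation
    # Uniform mid-rise quantizer clipped to [-4, 4] with 2^bits levels.
    step = 8.0 / (2 ** bits)
    yq = np.clip(np.floor(y / step) * step + step / 2, -4, 4)
    # Simple linear (LMMSE-like) estimator for a unit-variance channel.
    h_hat = yq * snr / (snr + 1)
    return np.mean((h - h_hat) ** 2)

mse_2bit = channel_mse(bits=2)
mse_6bit = channel_mse(bits=6)
```

With 2 bits the quantization noise dominates the thermal noise and inflates the MSE; with 6 bits the estimate is essentially noise-limited, which is the kind of trade-off the joint bit-allocation in the paper exploits.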

Reconfigurable intelligent surface (RIS) is very promising for wireless networks to achieve high energy efficiency, extended coverage, improved capacity, massive connectivity, etc. To unleash the full potential of RIS-aided communications, acquiring accurate channel state information is crucial, which however is very challenging. For RIS-aided multiple-input multiple-output (MIMO) communications, the existing channel estimation methods have computational complexity growing rapidly with the number of RIS units $N$ (e.g., in the order of $N^2$ or $N^3$) and/or have special requirements on the matrices involved (e.g., the matrices need to be sparse for algorithm convergence to achieve satisfactory performance), which hinder their applications. In this work, instead of using the conventional signal model in the literature, we derive a new signal model obtained through proper vectorization and reduction operations. Then, leveraging the unitary approximate message passing (UAMP), we develop a more efficient channel estimator that has complexity linear in $N$ and does not have special requirements on the relevant matrices, thanks to the robustness of UAMP. These properties facilitate the application of the proposed algorithm to a general RIS-aided MIMO system with a larger $N$. Moreover, extensive numerical results show that the proposed estimator delivers much better performance and/or requires significantly fewer training symbols, thereby leading to notable reductions in both training overhead and latency.
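The vectorization step underlying such reformulations is typically the Kronecker identity vec(AXB) = (Bᵀ ⊗ A) vec(X), which turns a matrix channel sandwiched between known pilot/combining matrices into a standard linear model. A quick numerical check (the matrix sizes are arbitrary; the paper's actual reduction involves more than this identity):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))

# vec() with column-major (Fortran) ordering, as in the usual identity.
vec = lambda M: M.flatten(order="F")

lhs = vec(A @ X @ B)                 # vectorized matrix product
rhs = np.kron(B.T, A) @ vec(X)       # equivalent linear model in vec(X)
err = np.linalg.norm(lhs - rhs)
```

Once the unknown channel appears as vec(X) in a linear model, message-passing estimators such as UAMP can be applied directly.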

Intelligent reflecting surfaces (IRSs) are promising enablers for next-generation wireless communications due to their reconfigurability and high energy efficiency in improving poor propagation conditions, e.g., limited-scattering environments. However, most existing works assumed full-rank channels, which require rich scattering that may not be available in practice. To analyze the impact of rank-deficient channels and mitigate the ensuing performance loss, we consider a large-scale IRS-aided MIMO system with statistical channel state information (CSI), where the double-scattering channel is adopted to model rank deficiency. By leveraging random matrix theory (RMT), we first derive a deterministic approximation (DA) of the ergodic rate with low computational complexity and prove the existence and uniqueness of the DA parameters. Then, we propose an alternating optimization algorithm for maximizing the DA with respect to phase shifts and signal covariance matrices. Numerical results show that the DA is tight and that our proposed method can effectively mitigate the performance loss induced by channel rank deficiency.
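The rank deficiency of a double-scattering channel is easy to see numerically: when the signal must pass through a small cluster of S scatterers, the channel factors through S-dimensional matrices and its rank cannot exceed S. A minimal sketch (dimensions and the 1/sqrt(S) normalization are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, S = 64, 64, 8        # antennas, antennas, scatterers (S << M, N)

# Double-scattering: the link factors through an M x S and an S x N matrix.
H1 = (rng.standard_normal((M, S)) + 1j * rng.standard_normal((M, S))) / np.sqrt(2)
H2 = (rng.standard_normal((S, N)) + 1j * rng.standard_normal((S, N))) / np.sqrt(2)
H = H1 @ H2 / np.sqrt(S)

rank = np.linalg.matrix_rank(H)   # at most S, far below min(M, N)
```

This is exactly the regime where full-rank channel assumptions break down and the DA-based analysis in the abstract becomes relevant.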

In research on vehicular ad hoc networks (VANETs) and their applications, it is often assumed that vehicles obtain cloud computing services by accessing roadside units (RSUs). However, because RSUs are deployed in insufficient numbers, have limited communication range, and can become computationally overloaded, a computation model that relies solely on vehicle-to-RSU offloading struggles to handle complex and changeable computation tasks. In this paper, when no RSU is available, a mobile vehicle is treated as a natural edge computing node, making full use of the surplus computing power of mobile vehicles to execute the offloaded tasks of surrounding vehicles in time. We design the OPFTO framework, propose an improved task allocation algorithm, HGSA, and devise a pre-filtering process that fully accounts for the mobility characteristics of vehicles. In addition, vehicle simulation experiments show that the proposed strategy achieves lower delay and higher accuracy than other task scheduling strategies, providing a reference scheme for the construction of future urban intelligent transportation systems.
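The abstract does not specify HGSA, but the mobility-aware pre-filtering idea can be sketched simply: discard candidate vehicles predicted to leave communication range before a task would finish, then pick the fastest remaining finisher. All names, numbers, and the greedy rule below are hypothetical illustrations, not the paper's algorithm.

```python
import math

def predicted_dwell(distance, radial_speed, comm_range=300.0):
    """Seconds until a vehicle leaves communication range (inf if approaching)."""
    if radial_speed <= 0:
        return math.inf
    return max(comm_range - distance, 0.0) / radial_speed

def assign_task(task_cycles, vehicles, comm_range=300.0):
    """Pre-filter by predicted contact time, then greedily pick the fastest finisher.
    vehicles: list of (name, distance_m, radial_speed_mps, cpu_hz)."""
    best = None
    for name, dist, speed, cpu in vehicles:
        exec_time = task_cycles / cpu
        if predicted_dwell(dist, speed, comm_range) < exec_time:
            continue                      # would drive out of range mid-task
        if best is None or exec_time < best[1]:
            best = (name, exec_time)
    return best

vehicles = [
    ("fast_but_leaving", 290.0, 20.0, 4e9),   # leaves range in 0.5 s
    ("slow_and_staying", 100.0, -5.0, 1e9),   # approaching: always reachable
    ("ok_and_staying",   150.0,  2.0, 2e9),   # 75 s of contact time left
]
choice = assign_task(task_cycles=3e9, vehicles=vehicles)
```

Here the fastest vehicle is rejected by the pre-filter because it would exit the communication range before completing the task, which is the failure mode mobility-aware filtering is meant to prevent.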

String vibration represents an active field of research in acoustics. Small-amplitude vibration is often assumed, leading to simplified physical models that can be simulated efficiently. However, the inclusion of nonlinear phenomena due to larger string stretching is necessary to capture important features, and efficient numerical algorithms are currently lacking in this context. Of the available techniques, many lead to schemes that may only be solved iteratively, resulting in high computational cost and additional concerns about the existence and uniqueness of solutions. Slow and fast waves are present concurrently in the transverse and longitudinal directions of motion, adding further complications concerning numerical dispersion. This work presents a linearly-implicit scheme for the simulation of the geometrically exact nonlinear string model. The scheme conserves a numerical energy expressed as a sum of quadratic terms only, including an auxiliary state variable that yields the nonlinear effects. This scheme allows the transverse and longitudinal waves to be treated separately, using a mixed finite difference/modal scheme for the two directions of motion, thus making it possible to accurately resolve the wave speeds at reference sample rates.  Numerical experiments are presented throughout.
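The key ingredients, a linearly-implicit update (one linear solve per step, no iteration) and a conserved energy that is a sum of quadratic terms via an auxiliary variable, can be illustrated on a scalar analogue. The sketch below applies an auxiliary-variable scheme to a Duffing-type oscillator u'' = -ω²u - u³ with ψ = sqrt(2V(u) + ε), V(u) = u⁴/4; it is a toy model, not the paper's string scheme, and all parameters are assumed.

```python
import numpy as np

omega, eps, k, steps = 1.0, 1.0, 1e-3, 20000   # assumed parameters

def g(u):
    """g = V'(u) / sqrt(2 V(u) + eps), so that g * psi = V'(u) exactly."""
    return u ** 3 / np.sqrt(u ** 4 / 2 + eps)

u_prev, u_curr = 1.0, 1.0            # u(0) = 1, u'(0) = 0 (first-order start)
psi_prev = np.sqrt(1.0 / 2 + eps)    # psi at t = 0
psi_curr = psi_prev

def energy(u0, u1, p0, p1):
    """Discrete energy: quadratic terms only, conserved exactly by the scheme."""
    return (((u1 - u0) / k) ** 2 / 2 + omega ** 2 * (u1 ** 2 + u0 ** 2) / 4
            + (p1 ** 2 + p0 ** 2) / 4)

E0 = None
for n in range(steps):
    gn = g(u_curr)
    # Linearly implicit: u_next solves a single scalar linear equation.
    a = 1 / k ** 2 + omega ** 2 / 2 + gn ** 2 / 2
    rhs = (2 * u_curr / k ** 2 - u_prev * (1 / k ** 2 + omega ** 2 / 2)
           - gn * psi_prev + gn ** 2 * u_prev / 2)
    u_next = rhs / a
    psi_next = psi_prev + gn * (u_next - u_prev)   # auxiliary-variable update
    if E0 is None:
        E0 = energy(u_curr, u_next, psi_curr, psi_next)
    u_prev, u_curr = u_curr, u_next
    psi_prev, psi_curr = psi_curr, psi_next

E_end = energy(u_prev, u_curr, psi_prev, psi_curr)
drift = abs(E_end - E0) / abs(E0)    # zero up to roundoff
```

Because the update is an exact linear solve, the telescoping energy identity holds to machine precision, which is the conservation-plus-no-iteration combination the abstract highlights.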

A high-order finite element method is proposed to solve the nonlinear convection-diffusion equation on a time-varying domain whose boundary is implicitly driven by the solution of the equation. The method is semi-implicit in the sense that the boundary is traced explicitly with a high-order surface-tracking algorithm, while the convection-diffusion equation is solved implicitly with high-order backward differentiation formulas and fictitious-domain finite element methods. Through two numerical experiments on severely deforming domains, we show that optimal convergence orders are obtained in the energy norm for third-order and fourth-order methods.
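As a self-contained illustration of the backward-differentiation-formula time stepping mentioned above, the sketch below applies BDF2 to the scalar test problem u' = λu and verifies second-order convergence by halving the step size. This is a standard textbook check, not the paper's PDE solver.

```python
import math

def bdf2_solve(lam, T, n):
    """Integrate u' = lam * u, u(0) = 1, with BDF2 on n uniform steps."""
    k = T / n
    u0 = 1.0
    u1 = math.exp(lam * k)          # exact first step preserves second order
    for _ in range(n - 1):
        # BDF2: (3 u^{n+1} - 4 u^n + u^{n-1}) / (2k) = lam * u^{n+1}
        u2 = (4 * u1 - u0) / (3 - 2 * k * lam)
        u0, u1 = u1, u2
    return u1

lam, T = -1.0, 1.0
exact = math.exp(lam * T)
e_coarse = abs(bdf2_solve(lam, T, 100) - exact)
e_fine = abs(bdf2_solve(lam, T, 200) - exact)
order = math.log2(e_coarse / e_fine)   # should approach 2
```

Higher-order BDF formulas (BDF3, BDF4) follow the same pattern with more history terms, which is how the third- and fourth-order time accuracy in the abstract is obtained.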

Training a machine learning model with federated edge learning (FEEL) is typically time-consuming due to the constrained computation power of edge devices and limited wireless resources in edge networks. In this paper, the training time minimization problem is investigated in a quantized FEEL system, where the heterogeneous edge devices send quantized gradients to the edge server via orthogonal channels. In particular, a stochastic quantization scheme is adopted for compression of uploaded gradients, which can reduce the burden of per-round communication but may come at the cost of an increased number of communication rounds. The training time is modeled by taking into account the communication time, the computation time, and the number of communication rounds. Based on the proposed training time model, the intrinsic trade-off between the number of communication rounds and per-round latency is characterized. Specifically, we analyze the convergence behavior of the quantized FEEL in terms of the optimality gap. Further, a joint data-and-model-driven fitting method is proposed to obtain the exact optimality gap, based on which closed-form expressions for the number of communication rounds and the total training time are obtained. Constrained by the total bandwidth, the training time minimization problem is formulated as a joint quantization level and bandwidth allocation optimization problem. To this end, an algorithm based on alternating optimization is proposed, which alternately solves the subproblem of quantization optimization via successive convex approximation and the subproblem of bandwidth allocation via bisection search. Experimental results with different learning tasks and models validate our analysis and demonstrate the near-optimal performance of the proposed optimization algorithm.
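The stochastic quantization referred to above is typically an unbiased randomized rounding of each gradient coordinate onto a small set of levels, so that averaging many quantized uploads recovers the true gradient in expectation. A minimal sketch in the spirit of such quantizers (the level count, dimensions, and normalization are assumptions, not the paper's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(4)

def stochastic_quantize(g, levels):
    """Unbiased stochastic quantizer: round |g_i| / ||g|| onto `levels` uniform
    levels in [0, 1], rounding up with probability equal to the remainder."""
    norm = np.linalg.norm(g)
    if norm == 0:
        return np.zeros_like(g)
    scaled = np.abs(g) / norm * levels
    lower = np.floor(scaled)
    xi = lower + (rng.random(g.shape) < (scaled - lower))
    return norm * np.sign(g) * xi / levels

g = rng.standard_normal(100)
# Unbiasedness: averaging many independent quantizations recovers the gradient.
avg = np.mean([stochastic_quantize(g, levels=8) for _ in range(3000)], axis=0)
bias = np.max(np.abs(avg - g))
```

Fewer levels mean fewer bits per round but higher quantization variance, hence more rounds to converge, which is exactly the rounds-versus-per-round-latency trade-off the paper optimizes.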

Intelligent reflecting surface (IRS) has emerged as a promising technique to enhance wireless communication performance cost-effectively. The existing literature has mainly considered IRS being deployed near user terminals to improve their performance. However, this approach may incur a high cost if IRSs need to be densely deployed in the network to cater to random user locations. To avoid such high deployment cost, in this paper we consider a new IRS-aided wireless network architecture, where IRSs are deployed in the vicinity of each base station (BS) to assist in its communications with distributed users regardless of their locations. Besides significantly enhancing IRSs' signal coverage, this scheme helps reduce the IRS-associated channel estimation overhead as compared to conventional user-side IRSs, by exploiting the nearly static BS-IRS channels over short distance. For this scheme, we propose a new two-stage transmission protocol to efficiently achieve IRS channel estimation and reflection optimization for uplink data transmission. In addition, we propose effective methods for solving the user-IRS association problem based on long-term channel knowledge and the selected user-IRS-BS cascaded channel estimation problem. Finally, all IRSs' passive reflections are jointly optimized with the BS's multi-antenna receive combining to maximize the minimum achievable rate among all users for data transmission. Numerical results show that the proposed co-site-IRS-empowered BS scheme can achieve significant performance gains over the conventional BS without a co-site IRS and over existing schemes for IRS channel estimation and reflection optimization, thus enabling an appealing low-cost and high-performance BS design for future wireless networks.
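The reflection-optimization step can be illustrated in its simplest single-user, single-antenna form: with per-element cascaded coefficients known, co-phasing every element aligns all reflected paths, and the received power grows roughly as N² rather than N. The i.i.d. channels and element count below are illustrative assumptions; the paper's min-rate multi-IRS, multi-antenna optimization is far more general.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 128   # reflecting elements (assumed)

# BS-IRS link g (near-static, short distance) and IRS-user link r.
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

cascade = g * r                      # per-element cascaded coefficients

theta_opt = np.exp(-1j * np.angle(cascade))    # co-phase every element
gain_opt = np.abs(cascade @ theta_opt) ** 2    # coherent: ~N^2 scaling

# Baseline: average received power over random phase configurations (~N scaling).
avg_rand = np.mean([np.abs(cascade @ np.exp(1j * 2 * np.pi * rng.random(N))) ** 2
                    for _ in range(200)])
```

The large gap between the co-phased and random-phase gains is what makes accurate cascaded channel estimation, the other half of the proposed protocol, worth its overhead.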

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
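A concrete instance of the floating-point-to-low-precision mapping discussed above is uniform affine (scale/zero-point) quantization, a common baseline in neural network quantization. The sketch below quantizes a weight vector to 8 and 4 bits and measures the reconstruction error; the clipping-to-min/max calibration is one simple choice among many.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Asymmetric uniform quantization: map floats to ints via scale + zero-point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(np.round(-x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map integer codes back to (approximate) floating-point values."""
    return (q.astype(np.float64) - zero_point) * scale

rng = np.random.default_rng(6)
w = rng.standard_normal(4096)

q8, s8, z8 = quantize_affine(w, num_bits=8)
q4, s4, z4 = quantize_affine(w, num_bits=4)
err8 = np.max(np.abs(dequantize(q8, s8, z8) - w))
err4 = np.max(np.abs(dequantize(q4, s4, z4) - w))
```

The per-element error is bounded by the quantization step, so halving the bit width roughly squares the number of representable levels lost, which is why 4-bit schemes usually need the more sophisticated calibration and training techniques the survey covers.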
