
A promising waveform candidate for future joint sensing and communication systems is orthogonal frequency-division multiplexing (OFDM). For such systems, supporting multiple transmit antennas requires multiplexing methods for the generation of orthogonal transmit signals, where equidistant subcarrier interleaving (ESI) is the most popular multiplexing method. In this work, we analyze a multiplexing method called Doppler-division multiplexing (DDM). This method applies a phase shift from OFDM symbol to OFDM symbol to separate signals transmitted by different Tx antennas along the velocity axis of the range-Doppler map. While general properties of DDM for the task of radar sensing are analyzed in this work, the main focus lies on the implications of DDM for the communication task. It will be shown that for DDM, the channels observed at the communication receiver are heavily time-varying, preventing any meaningful transmission of data when not taken into account. In this work, a communication system designed to combat these time-varying channels is proposed, which includes methods for data estimation, synchronization, and channel estimation. Bit error ratio (BER) simulations demonstrate the superiority of this communication system compared to a system utilizing ESI.
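The DDM principle described above can be sketched numerically: each Tx antenna applies a linearly increasing phase across OFDM symbols (slow time), so a Doppler FFT at the radar receiver separates the antennas into distinct Doppler bins. The parameters below (antenna count, symbol count, unit channels, no payload modulation) are illustrative assumptions, not values from the paper.

```python
import numpy as np

M, N = 4, 64          # M Tx antennas, N OFDM symbols (slow time); illustrative
n = np.arange(N)

# DDM phase code: antenna m multiplies OFDM symbol n by exp(j*2*pi*m*n/M),
# which shifts its contribution by m*N/M Doppler bins.
tx = np.array([np.exp(2j * np.pi * m * n / M) for m in range(M)])

rx = tx.sum(axis=0)                # superposition at the radar Rx (unit channels, no noise)
spectrum = np.abs(np.fft.fft(rx))  # Doppler FFT over slow time

peaks = np.flatnonzero(spectrum > N / 2)
print(peaks.tolist())              # antennas land in separated bins 0, 16, 32, 48
```

With data modulation on top of the phase code, each antenna's energy occupies its own N/M-bin-wide Doppler sub-band instead of a single bin, which is why the communication receiver sees the heavily time-varying channels the abstract refers to.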

Related content

In conventional backscatter communication (BackCom) systems, time division multiple access (TDMA) and frequency division multiple access (FDMA) are generally adopted for multiuser backscattering due to their simplicity of implementation. However, as the number of backscatter devices (BDs) proliferates, traditional centralized control techniques incur high overhead, and inter-user coordination is unaffordable for the passive BDs; these issues have received little attention in existing works and remain unsolved. To this end, in this paper, we propose a slotted ALOHA-based random access scheme for BackCom systems, in which each BD is randomly chosen and allowed to coexist with one active device for hybrid multiple access. To evaluate the performance, a resource allocation problem for the max-min transmission rate is formulated, where transmit antenna selection, receive beamforming design, reflection coefficient adjustment, power control, and access probability determination are jointly considered. To deal with this intractable problem, we first transform the max-min objective function into an equivalent linear one, and then decompose the resulting problem into three sub-problems. Next, a block coordinate descent (BCD)-based greedy algorithm with a penalty function, successive convex approximation, and linear programming is designed to obtain sub-optimal solutions for tractable analysis. Simulation results demonstrate that the proposed algorithm outperforms benchmark algorithms in terms of transmission rate and fairness.
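The slotted ALOHA access underpinning the scheme above can be illustrated with a Monte Carlo sketch: each device independently accesses a slot with probability q, and a slot succeeds only when exactly one device transmits. The device count, slot count, and probability grid below are illustrative assumptions; the sketch only reproduces the classic result that the success rate peaks near q = 1/N, motivating the access probability determination in the joint design.

```python
import numpy as np

rng = np.random.default_rng(1)
num_bds, slots = 20, 50_000   # illustrative device and slot counts

def success_rate(q):
    # Each BD independently transmits in a slot with probability q;
    # a slot succeeds when exactly one BD transmits (no collision).
    tx = rng.random((slots, num_bds)) < q
    return np.mean(tx.sum(axis=1) == 1)

qs = np.linspace(0.01, 0.3, 30)
best_q = qs[np.argmax([success_rate(q) for q in qs])]
print(round(float(best_q), 3))   # near 1/20 = 0.05, approaching 1/e success rate
```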

Backscatter communication (BackCom), one of the core technologies for realizing zero-power communication, is expected to be a pivotal paradigm for the next generation of the Internet of Things (IoT). However, the "strong" direct link (DL) interference (DLI) is traditionally assumed to be harmful, generally drowning out the "weak" backscattered signals and thus deteriorating the performance of BackCom. In contrast to previous efforts to eliminate the DLI, in this paper we exploit constructive interference (CI), in which the DLI contributes to the backscattered signal. Specifically, our objective is to maximize the received signal power by jointly optimizing the receive beamforming vectors and tag selection factors, which is, however, non-convex and difficult to solve due to constraints on the Kullback-Leibler (KL) divergence. In order to solve this problem, we first decompose the original problem, and then propose two algorithms: one solves the beamforming sub-problem via a change of variables and semi-definite programming (SDP), and a greedy algorithm solves the tag selection sub-problem. To gain insight into the CI, we consider a special case with a single-antenna reader to reveal the channel angle between the backscattering link (BL) and the DL at which the DLI becomes constructive. Simulation results show that significant performance gains are consistently achieved by the proposed algorithms compared with traditional algorithms without the DL in terms of received signal strength. The derived constructive channel angle for the BackCom system with a single-antenna reader is also confirmed by simulation results.
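The single-antenna-reader insight above can be sketched with elementary arithmetic: the received power of the superposed DL and BL depends only on their magnitudes and relative channel angle, and the DL adds power (is constructive) whenever that angle is within ±π/2. The magnitudes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative single-antenna-reader sketch: combined power of the direct
# link (DL) and backscattering link (BL) versus their relative channel angle.
dl_mag, bl_mag = 1.0, 0.2                  # assumed channel magnitudes
angles = np.linspace(-np.pi, np.pi, 721)   # relative phase of BL w.r.t. DL

# |h_DL + h_BL * exp(j*angle)|^2 = dl^2 + bl^2 + 2*dl*bl*cos(angle)
power = dl_mag**2 + bl_mag**2 + 2 * dl_mag * bl_mag * np.cos(angles)

best = angles[np.argmax(power)]
print(round(abs(float(best)), 6))   # ~0: fully constructive at zero relative angle
# For |angle| < pi/2, cos(angle) > 0, so the DLI still adds received power.
```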

We demonstrate the relevance of an algorithm called generalized iterative scaling (GIS) or simultaneous multiplicative algebraic reconstruction technique (SMART) and its rescaled block-iterative version (RBI-SMART) in the field of optimal transport (OT). Many OT problems can be tackled through entropic regularization by solving the Schrödinger problem, which is an information projection problem, that is, a projection with respect to the Kullback-Leibler divergence. Here we consider problems that have several affine constraints. It is well known that cyclic information projections onto the individual affine sets converge to the solution. In practice, however, even these individual projections are generally not explicitly available. In this paper, we replace each of them with one GIS iteration. If this is done for every affine set, we obtain RBI-SMART. We provide a convergence proof using an interpretation of these iterations as two-step affine projections in an equivalent problem. This is done in a slightly more general setting than RBI-SMART, since we use a mix of explicitly known information projections and GIS iterations. We proceed to specialize this algorithm to several OT applications. First, we find the measure that minimizes the regularized OT divergence to a given measure under moment constraints. Second and third, the proposed framework yields an algorithm for solving a regularized martingale OT problem, as well as a relaxed version of the barycentric weak OT problem. Finally, we show an approach from the literature for unbalanced OT problems.
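The flavor of a SMART/GIS iteration can be conveyed on a toy problem: for a consistent nonnegative linear system Ax = b, each sweep applies one simultaneous multiplicative update in place of an exact information projection. The matrix, right-hand side, starting point, and iteration count below are illustrative assumptions; this is a generic SMART sketch, not the paper's RBI-SMART algorithm for OT.

```python
import numpy as np

# Toy consistent nonnegative system (illustrative values).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
x = np.array([0.2, 0.3, 0.9])        # positive initial guess

# SMART: multiplicative update with relaxation from the max column sum,
# x_j <- x_j * exp(gamma * sum_i A_ij * log(b_i / (A x)_i)).
gamma = 1.0 / A.sum(axis=0).max()
for _ in range(5000):
    log_ratio = np.log(b / (A @ x))
    x *= np.exp(gamma * (A.T @ log_ratio))

print(np.round(A @ x, 4))            # approaches b = [1, 1]
```

Among all nonnegative solutions of Ax = b, iterations of this type single out the one closest to the starting point in Kullback-Leibler divergence, which is what connects them to the information projections in the abstract.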

Integrated sensing and communications (ISAC) is a spectrum-sharing paradigm that allows different users to jointly utilize and access the crowded electromagnetic spectrum. In this context, intelligent reflecting surfaces (IRSs) have lately emerged as an enabler for non-line-of-sight (NLoS) ISAC. Prior IRS-aided ISAC studies assume passive surfaces and rely on the continuous-valued phase-shift model. In practice, the phase shifts are quantized. Moreover, recent research has shown substantial performance benefits with active IRS. In this paper, we include these characteristics in our IRS-aided ISAC model to maximize the receive radar and communications signal-to-noise ratios (SNRs) subject to a unimodular IRS phase-shift vector and a power budget. The resulting optimization is a highly non-convex unimodular quartic optimization problem. We tackle this problem via a bi-quadratic transformation to split the design into two quadratic sub-problems that are solved using the power iteration method. The proposed approach employs the M-ary unimodular sequence design via relaxed power method-like iteration (MaRLI) to design the quantized phase shifts. Numerical experiments employ continuous-valued phase shifts as a benchmark and demonstrate that our active-IRS-aided ISAC design with MaRLI converges to a higher SNR as the number of IRS quantization bits increases.
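The power-method-like step with quantized phases can be sketched as follows: to increase x^H R x over unimodular x with M-ary phases (R Hermitian positive semi-definite), each iteration snaps the phases of Rx to the nearest grid point. The matrix, element count, and phase resolution below are illustrative assumptions in the spirit of MaRLI, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 16, 8                            # IRS elements, M-ary phase levels (3 bits)

H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = H.conj().T @ H                      # Hermitian PSD matrix standing in for the SNR quadratic form

def quantize(phases, levels):
    # Snap each phase to the nearest point of the grid {2*pi*k/levels}.
    return 2 * np.pi * np.round(phases * levels / (2 * np.pi)) / levels

x = np.exp(1j * quantize(rng.uniform(0, 2 * np.pi, N), M))
obj = [float(np.real(x.conj() @ R @ x))]
for _ in range(50):
    # Maximizing Re(x^H R x_k) over quantized unimodular x is elementwise:
    # take the nearest grid phase to angle((R x_k)_i).
    x = np.exp(1j * quantize(np.angle(R @ x), M))
    obj.append(float(np.real(x.conj() @ R @ x)))
```

Because R is positive semi-definite and each iterate maximizes Re(x^H R x_k) over the feasible set, the objective sequence is non-decreasing, which is the monotonicity property these power-method-like iterations rely on.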

As of today, 5G is rolling out across the world, but academia and industry have shifted their attention to the sixth generation (6G) cellular technology for a fully digitalized, intelligent society in 2030 and beyond. 6G demands far more bandwidth to support extreme performance, exacerbating the problem of spectrum shortage in mobile communications. In this context, this paper proposes a novel concept coined Full-Spectrum Wireless Communications (FSWC). It makes use of all communication-feasible spectral resources over the whole electromagnetic (EM) spectrum, from microwave, millimeter wave, terahertz (THz), and infrared light to visible and ultraviolet light. FSWC not only provides sufficient bandwidth but also enables new paradigms that take advantage of the peculiarities of different EM bands. This paper defines FSWC, justifies its necessity for 6G, and then discusses the opportunities and challenges of exploiting the THz and optical bands.

In semantic communications, only task-relevant information is transmitted, yielding significant performance gains over conventional communications. To satisfy user requirements for different tasks, in this paper we investigate semantic-aware resource allocation in a multi-cell network serving multiple tasks. First, semantic entropy is defined and quantified to measure the semantic information for different tasks. Then, we develop a novel quality-of-experience (QoE) model to formulate the semantic-aware resource allocation problem in terms of semantic compression, channel assignment, and transmit power allocation. To solve the formulated problem, we first decouple it into two subproblems. The first is to optimize semantic compression given the channel assignment and power allocation results, which is solved by a deep Q-network (DQN) based method. The second is to optimize the channel assignment and transmit power, which is modeled as a many-to-one matching game and solved by a proposed low-complexity matching algorithm. Simulation results validate the effectiveness and superiority of the proposed semantic-aware resource allocation method, as well as its compatibility with conventional and semantic communications.
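The many-to-one matching game mentioned above can be illustrated with a toy deferred-acceptance sketch: users propose to channels in preference order, and each channel keeps its best-ranked users up to a quota. The user/channel names, preference lists, and quotas below are hypothetical, and this generic matching routine stands in for the paper's low-complexity algorithm, which is not specified in the abstract.

```python
# Toy many-to-one matching via deferred acceptance (all data illustrative).
user_prefs = {                       # user -> channels, most preferred first
    "u1": ["c1", "c2"],
    "u2": ["c1", "c2"],
    "u3": ["c1", "c2"],
}
channel_rank = {                     # channel -> users, most preferred first
    "c1": ["u2", "u1", "u3"],
    "c2": ["u1", "u3", "u2"],
}
quota = {"c1": 2, "c2": 2}

matched = {c: [] for c in quota}
next_choice = {u: 0 for u in user_prefs}
free = list(user_prefs)

while free:
    u = free.pop(0)
    if next_choice[u] >= len(user_prefs[u]):
        continue                     # user exhausted its list; stays unmatched
    c = user_prefs[u][next_choice[u]]
    next_choice[u] += 1
    matched[c].append(u)
    if len(matched[c]) > quota[c]:
        # Channel over quota: reject its least-preferred current proposer.
        matched[c].sort(key=channel_rank[c].index)
        free.append(matched[c].pop())

print(matched)  # {'c1': ['u2', 'u1'], 'c2': ['u3']}
```

The resulting matching is stable: no user-channel pair would both prefer each other over their assigned partners, which is the equilibrium notion matching-game formulations target.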

To make indoor industrial cell-free massive multiple-input multiple-output (CF-mMIMO) networks free from wired fronthaul, this paper studies a multicarrier-division duplex (MDD)-enabled two-tier terahertz (THz) fronthaul scheme. More specifically, the two layers of fronthaul links rely on mutually orthogonal subcarrier sets in the same THz band, while access links are implemented over the sub-6 GHz band. The proposed scheme leads to a complicated mixed-integer nonconvex optimization problem incorporating access point (AP) clustering, device selection, the assignment of subcarrier sets between the two fronthaul links, and the resource allocation at both the central processing unit (CPU) and APs. In order to address the formulated problem, we first resort to low-complexity but efficient heuristic methods, thereby relaxing the binary variables. Then, the overall end-to-end rate is obtained by iteratively optimizing the assignment of subcarrier sets and the number of AP clusters. Furthermore, an advanced MDD frame structure consisting of three parallel data streams is tailored for the proposed scheme. Simulation results demonstrate the effectiveness of the proposed dynamic AP clustering approach in dealing with varying network sizes. Moreover, benefiting from the well-designed frame structure, MDD is capable of outperforming TDD in the two-tier fronthaul networks. Additionally, the effect of the THz bandwidth on system performance is analyzed, and it is shown that with sufficient frequency resources, our proposed two-tier fully wireless fronthaul scheme can achieve performance comparable to fiber-optic based systems. Finally, the superiority of the proposed MDD-enabled fronthaul scheme is verified in a practical scenario with realistic ray-tracing simulations.

Graph Neural Networks (GNNs) have gained momentum in graph representation learning and boosted the state of the art in a variety of areas, such as data mining (\emph{e.g.,} social network analysis and recommender systems), computer vision (\emph{e.g.,} object detection and point cloud learning), and natural language processing (\emph{e.g.,} relation extraction and sequence learning), to name a few. With the emergence of Transformers in natural language processing and computer vision, graph Transformers embed a graph structure into the Transformer architecture to overcome the limitations of local neighborhood aggregation while avoiding strict structural inductive biases. In this paper, we present a comprehensive review of GNNs and graph Transformers in computer vision from a task-oriented perspective. Specifically, we divide their applications in computer vision into five categories according to the modality of input data, \emph{i.e.,} 2D natural images, videos, 3D data, vision + language, and medical images. In each category, we further divide the applications according to a set of vision tasks. Such a task-oriented taxonomy allows us to examine how each task is tackled by different GNN-based approaches and how well these approaches perform. Based on the necessary preliminaries, we provide the definitions and challenges of the tasks, in-depth coverage of the representative approaches, as well as discussions regarding insights, limitations, and future directions.
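The local neighborhood aggregation that the survey above contrasts with graph Transformers can be shown in a few lines: a basic GCN-style layer averages each node's features with its neighbors' (after adding self-loops) and applies a shared weight and nonlinearity. The toy graph, features, and identity weight below are illustrative assumptions.

```python
import numpy as np

# Toy 3-node graph: node 0 connects to nodes 1 and 2 (illustrative).
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 2.0]])

# Add self-loops, then row-normalize so each node averages over itself
# and its neighbors -- the local aggregation step of a GCN-style layer.
a_hat = adj + np.eye(3)
a_hat /= a_hat.sum(axis=1, keepdims=True)
W = np.eye(2)                              # identity weight, for clarity

out = np.maximum(a_hat @ feats @ W, 0.0)   # aggregation + linear map + ReLU
print(out)
```

Stacking such layers only propagates information along edges one hop at a time; graph Transformers instead let every node attend to every other node, which is the limitation-and-remedy trade-off the survey discusses.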

The transformer, first applied to the field of natural language processing, is a type of deep neural network based mainly on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply the transformer to computer vision tasks. On a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks, such as convolutional and recurrent neural networks. Given its high performance and reduced need for vision-specific inductive bias, the transformer is receiving more and more attention from the computer vision community. In this paper, we review these vision transformer models by categorizing them by task and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also include efficient transformer methods for pushing the transformer into real device-based applications. Furthermore, we take a brief look at the self-attention mechanism in computer vision, as it is the base component of the transformer. Toward the end of this paper, we discuss the challenges and provide several further research directions for vision transformers.
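Since the survey above centers on self-attention as the transformer's base component, a minimal sketch may help: scaled dot-product attention computes pairwise similarities between tokens, normalizes them with a softmax, and returns a weighted mix of the token features. The single head, absence of learned projections, and the toy input below are simplifying assumptions.

```python
import numpy as np

def self_attention(X):
    # Scaled dot-product self-attention with Q = K = V = X (no projections).
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax per query row
    return weights @ X                               # weighted mix of values

X = np.array([[1.0, 0.0],     # three toy tokens with 2-dim features
              [0.0, 1.0],
              [1.0, 1.0]])
out = self_attention(X)
print(out.shape)  # (3, 2)
```

Each output row is a convex combination of all input rows, which is exactly the global (non-local) aggregation that distinguishes self-attention from the fixed local receptive fields of convolutions.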

Games and simulators can be a valuable platform for executing complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting the development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of AI algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with AI algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.
