
Enabling communications in the (sub-)THz band will call for massive multiple-input multiple-output (MIMO) arrays at either the transmit- or receive-side, or at both. To scale down the complexity and power consumption when operating across massive frequency and antenna dimensions, a sacrifice in the resolution of the digital-to-analog/analog-to-digital converters (DACs/ADCs) will be inevitable. In this paper, we analyze the extreme scenario where both the transmit- and receive-side are equipped with fully digital massive MIMO arrays and 1-bit DACs/ADCs, which leads to a system with minimum radio-frequency complexity, cost, and power consumption. Building upon the Bussgang decomposition, we derive a tractable approximation of the mean squared error (MSE) between the transmitted data symbols and their soft estimates. Numerical results show that, despite its simplicity, a doubly 1-bit quantized massive MIMO system with very large antenna arrays can deliver an impressive performance in terms of MSE and symbol error rate.
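
The abstract gives no simulation details, so the following is only a small Monte Carlo sketch (numpy, with illustrative array sizes and a simple eigen-beamforming precoder/combiner, not necessarily the paper's transceiver) of the quantity being analyzed: the empirical MSE between transmitted QPSK symbols and their scaled soft estimates when both the DACs and the ADCs are 1-bit.

    import numpy as np

    rng = np.random.default_rng(0)
    Nt, Nr, K = 128, 128, 4          # antenna counts and number of streams (illustrative values)
    snr_db = 0.0
    sigma2 = 10 ** (-snr_db / 10)

    def q1(z):
        # 1-bit quantization of the real and imaginary parts (unit output power per antenna)
        return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

    mse, trials = 0.0, 200
    for _ in range(trials):
        H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
        U, _, Vh = np.linalg.svd(H)
        P, W = Vh[:K].conj().T, U[:, :K]                  # eigen-beamforming precoder/combiner
        s = (rng.choice([-1.0, 1.0], K) + 1j * rng.choice([-1.0, 1.0], K)) / np.sqrt(2)  # QPSK
        x = q1(P @ s)                                     # 1-bit DACs distort the precoded signal
        n = np.sqrt(sigma2 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
        r = q1(H @ x / np.sqrt(Nt) + n)                   # 1-bit ADCs distort the received signal
        z = W.conj().T @ r                                # soft symbol estimates before scaling
        a = np.vdot(z, s) / np.vdot(z, z)                 # per-trial MMSE scaling (genie-aided)
        mse += np.mean(np.abs(s - a * z) ** 2) / trials

    print(f"empirical MSE over {trials} channel draws: {mse:.4f}")

The Bussgang-based approximation in the paper targets exactly this kind of symbol-level MSE; the sketch simply estimates it by simulation rather than analytically.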

Related content

We consider the movable-antenna (MA) array-enabled wireless communication with coordinated multi-point (CoMP) reception, where multiple destinations adopt the maximal ratio combining technique to jointly decode the common message sent from the transmitter equipped with the MA array. Our goal is to maximize the effective received signal-to-noise ratio by jointly optimizing the transmit beamforming and the positions of the MA array. Although the formulated problem is highly non-convex, we reveal that it fundamentally reduces to maximizing the principal eigenvalue of a Hermitian channel matrix, which is a function of the positions of the MA array. The corresponding sub-problem is still non-convex, for which we develop a computationally efficient algorithm. Afterwards, the optimal transmit beamforming is determined in closed form. In addition, the theoretical performance upper bound is analyzed. Since the MA array brings an additional spatial degree of freedom by flexibly adjusting all antennas' positions, it achieves significant performance gains compared to competitive benchmarks.
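
As a rough illustration of the eigenvalue viewpoint, the sketch below assumes a hypothetical 1-D line-of-sight channel model and a brute-force random search over antenna placements (the paper's efficient algorithm is not reproduced here): for any candidate placement, the effective-SNR proxy is the principal eigenvalue of the Hermitian Gram matrix of the destination channels, and the closed-form beamformer is its principal eigenvector.

    import numpy as np

    def principal_mode(positions, angles, wavelength=1.0):
        # Build LoS steering rows toward each destination for the given 1-D MA positions
        # (in wavelengths), form the Hermitian matrix A = H^H H, and return its principal
        # eigenvalue (effective-SNR proxy) and eigenvector (closed-form beamformer).
        k = 2 * np.pi / wavelength
        H = np.exp(1j * k * np.outer(np.sin(angles), positions))   # M x N steering matrix
        A = H.conj().T @ H                                         # Hermitian, PSD
        evals, evecs = np.linalg.eigh(A)                           # ascending eigenvalues
        return evals[-1], evecs[:, -1]

    # toy search over random placements of 8 movable antennas inside a 4-wavelength aperture
    rng = np.random.default_rng(1)
    angles = np.deg2rad([-40.0, 10.0, 55.0])     # hypothetical destination directions
    best = max((principal_mode(np.sort(rng.uniform(0, 4, 8)), angles) for _ in range(200)),
               key=lambda t: t[0])
    print("best principal eigenvalue found:", best[0])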

We consider the problem of learning linear operators under squared loss between two infinite-dimensional Hilbert spaces in the online setting. We show that the class of linear operators with uniformly bounded $p$-Schatten norm is online learnable for any $p \in [1, \infty)$. On the other hand, we prove an impossibility result by showing that the class of uniformly bounded linear operators with respect to the operator norm is \textit{not} online learnable. Moreover, we show a separation between sequential uniform convergence and online learnability by identifying a class of bounded linear operators that is online learnable but uniform convergence does not hold. Finally, we prove that the impossibility result and the separation between uniform convergence and learnability also hold in the batch setting.
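
For readers less familiar with the setting, the standard online protocol implicitly used here is the following (notation assumed, not taken from the abstract): at each round $t = 1, \dots, T$ the learner picks an operator $\hat{A}_t \in \mathcal{A}$, nature reveals a pair $(x_t, y_t) \in \mathcal{H}_1 \times \mathcal{H}_2$, and the learner suffers the squared loss $\ell_t(\hat{A}_t) = \|\hat{A}_t x_t - y_t\|^2$. A class $\mathcal{A}$ is online learnable if some strategy guarantees sublinear regret against any adversarial sequence,
$$\sum_{t=1}^{T} \ell_t(\hat{A}_t) - \inf_{A \in \mathcal{A}} \sum_{t=1}^{T} \ell_t(A) = o(T).$$
The positive result then concerns classes of the form $\{A : \|A\|_{S_p} \le c\}$ with $p \in [1, \infty)$, and the negative result the class $\{A : \|A\|_{\mathrm{op}} \le c\}$.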

The principle of orthogonal time-frequency-space (OTFS) signaling is first analyzed, and it is shown that OTFS embeds another signaling scheme referred to as orthogonal short-time Fourier (OSTF). Then, the relationship among OTFS, OSTF, orthogonal frequency-division multiplexing (OFDM) and single-carrier frequency-division multiple access (SC-FDMA) is explored, demonstrating that OSTF/OTFS are fundamentally the extensions of OFDM/SC-FDMA from one-dimensional (1D) to two-dimensional (2D) signaling. Hence, the characteristics and performance of the OSTF/OTFS schemes can be understood from the well-studied OFDM/SC-FDMA schemes. Accordingly, the advantages and disadvantages of OSTF/OTFS are discussed. Furthermore, starting from the principles of OFDM/SC-FDMA, multiuser multiplexing in OSTF/OTFS systems is analyzed for the uplink and downlink, respectively. Building on this, a range of generalized multiplexing schemes is presented, whose characteristics are briefly analyzed.
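
To make the "2D extension" claim concrete, the sketch below (numpy, illustrative grid sizes, rectangular transmit pulses assumed) modulates a delay-Doppler symbol grid once via the textbook ISFFT-plus-OFDM route and once as an (I)DFT spread along the time axis alone, and verifies that the two coincide, mirroring how SC-FDMA is OFDM with a DFT spread along the frequency axis.

    import numpy as np

    M, N = 64, 16   # delay bins (subcarriers) and Doppler bins (time slots), illustrative sizes
    rng = np.random.default_rng(0)
    X_dd = (rng.choice([-1.0, 1.0], (M, N)) + 1j * rng.choice([-1.0, 1.0], (M, N))) / np.sqrt(2)

    # Route 1: ISFFT (delay-Doppler -> time-frequency), then an OFDM modulator per time slot
    # (the rectangular-pulse Heisenberg transform).
    X_tf = np.fft.fft(np.fft.ifft(X_dd, axis=1, norm="ortho"), axis=0, norm="ortho")
    S1 = np.fft.ifft(X_tf, axis=0, norm="ortho")

    # Route 2: the same samples written directly as an N-point (I)DFT along the Doppler axis
    # only, i.e. each delay bin's N symbols are spread across the N time slots, analogous to
    # SC-FDMA spreading symbols across subcarriers before the OFDM IFFT.
    S2 = np.fft.ifft(X_dd, axis=1, norm="ortho")

    print("max deviation between the two modulators:", np.max(np.abs(S1 - S2)))  # ~1e-15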

Detecting if a graph contains a $k$-Clique is one of the most fundamental problems in computer science. The asymptotically fastest algorithm runs in time $O(n^{\omega k/3})$, where $\omega$ is the exponent of Boolean matrix multiplication. To date, this is the only technique capable of beating the trivial $O(n^k)$ bound by a polynomial factor. Due to this technique's various limitations, much effort has gone into designing "combinatorial" algorithms that improve over exhaustive search via other techniques. The first contribution of this work is a faster combinatorial algorithm for $k$-Clique, improving Vassilevska's bound of $O(n^{k}/\log^{k-1}{n})$ by two log factors. Technically, our main result is a new reduction from $k$-Clique to Triangle detection that exploits the same divide-and-conquer at the core of recent combinatorial algorithms by Chan (SODA'15) and Yu (ICALP'15). Our second contribution is exploiting combinatorial techniques to improve the state-of-the-art (even of non-combinatorial algorithms) for generalizations of the $k$-Clique problem. In particular, we give the first $o(n^k)$ algorithm for $k$-clique in hypergraphs and an $O(n^3/\log^{2.25}{n} + t)$ algorithm for listing $t$ triangles in a graph.
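
The new reduction itself is not spelled out in the abstract; as background, the snippet below implements the classic reduction from $k$-Clique to Triangle detection (the one underlying the $O(n^{\omega k/3})$ bound, not the paper's new divide-and-conquer reduction), with brute-force triangle search standing in for a fast triangle algorithm.

    from itertools import combinations

    def has_k_clique(adj, k):
        # Classic reduction for k divisible by 3: the nodes of an auxiliary graph are the
        # (k/3)-cliques of G, two of them are adjacent when their (disjoint) union is a
        # clique of size 2k/3, and G has a k-clique iff the auxiliary graph has a triangle.
        assert k % 3 == 0
        t = k // 3
        n = len(adj)

        def is_clique(vs):
            return all(adj[u][v] for u, v in combinations(vs, 2))

        parts = [frozenset(c) for c in combinations(range(n), t) if is_clique(c)]

        def joined(a, b):
            return a.isdisjoint(b) and is_clique(sorted(a | b))

        # brute-force triangle detection on the auxiliary graph
        for i, a in enumerate(parts):
            for j in range(i + 1, len(parts)):
                b = parts[j]
                if not joined(a, b):
                    continue
                for c in parts[j + 1:]:
                    if joined(a, c) and joined(b, c):
                        return True
        return False

    # tiny usage example: K6 contains a 6-clique, the 6-cycle C6 does not
    K6 = [[int(i != j) for j in range(6)] for i in range(6)]
    C6 = [[int(abs(i - j) in (1, 5)) for j in range(6)] for i in range(6)]
    print(has_k_clique(K6, 6), has_k_clique(C6, 6))   # True False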

Efficient implementation of massive multiple-input multiple-output (MIMO) transceivers is essential for next-generation wireless networks. To reduce the high computational complexity of the massive MIMO transceiver, in this paper we propose a new massive MIMO architecture using finite-precision arithmetic. First, we conduct a rounding error analysis and derive lower bounds on the achievable rate for single-input multiple-output (SIMO) systems using maximal ratio combining (MRC) and multiple-input single-output (MISO) systems using maximal ratio transmission (MRT) with finite-precision arithmetic. Then, considering the multi-user scenario, the rounding error analysis of zero-forcing (ZF) detection and precoding is derived using the normal equations (NE) method. The corresponding lower bounds on the achievable sum rate are also derived, and asymptotic analyses are presented. Built upon the insights from these analyses and lower bounds, we propose a mixed-precision architecture for massive MIMO systems to offset the performance loss caused by finite-precision arithmetic. The corresponding analysis of rounding errors and computational costs is provided. Simulation results validate the derived bounds and underscore the superiority of the proposed mixed-precision architecture over the conventional structure.
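
As a toy illustration of the effect being analyzed (illustrative sizes and a real-valued single-stream MRC example, not the paper's transceiver or its error bounds), the sketch below compares fully low-precision combining against a mixed-precision variant that keeps the elementwise products in low precision but accumulates in a wider format.

    import numpy as np

    rng = np.random.default_rng(0)
    Nr = 4096                                   # receive antennas (large, to expose rounding)
    h = rng.standard_normal(Nr)
    y = h * 1.0 + 0.1 * rng.standard_normal(Nr) # received signal for a unit pilot symbol

    def mrc(h_, y_, dtype, acc_dtype):
        # MRC estimate z = h^T y / ||h||^2 with products in `dtype` and sums in `acc_dtype`
        hq, yq = h_.astype(dtype), y_.astype(dtype)
        num = np.sum((hq * yq).astype(acc_dtype))
        den = np.sum((hq * hq).astype(acc_dtype))
        return float(num / den)

    z_fp64  = mrc(h, y, np.float64, np.float64)   # reference
    z_fp16  = mrc(h, y, np.float16, np.float16)   # fully low-precision
    z_mixed = mrc(h, y, np.float16, np.float32)   # low-precision products, wider accumulator

    print(f"fp16 error:  {abs(z_fp16 - z_fp64):.2e}")
    print(f"mixed error: {abs(z_mixed - z_fp64):.2e}")   # the wider accumulator typically helps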

Message passing graph neural networks (GNNs) are known to have their expressiveness upper-bounded by the 1-dimensional Weisfeiler-Leman (1-WL) algorithm. To achieve more powerful GNNs, existing attempts either require ad hoc features or involve operations that incur high time and space complexities. In this work, we propose a general and provably powerful GNN framework that preserves the scalability of the message passing scheme. In particular, we first propose to empower 1-WL for graph isomorphism testing by considering edges among neighbors, giving rise to NC-1-WL. The expressiveness of NC-1-WL is theoretically shown to be strictly above 1-WL and below 3-WL. Further, we propose the NC-GNN framework as a differentiable neural version of NC-1-WL. Our simple implementation of NC-GNN is provably as powerful as NC-1-WL. Experiments demonstrate that NC-GNN performs effectively and efficiently on various benchmarks.
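
The exact NC-1-WL update is not given in the abstract; the sketch below is one plausible reading of "considering edges among neighbors": a color-refinement step that hashes, in addition to the 1-WL neighbor-color multiset, the multiset of color pairs of neighbor pairs that are themselves connected. On the classic pair of 1-WL-indistinguishable graphs (two disjoint triangles vs. a 6-cycle) this extra information already separates the two.

    from collections import Counter

    def nc_wl_colors(adj, rounds=3):
        # Color refinement in the spirit of NC-1-WL (a sketch of the idea, not the paper's
        # exact definition): each node hashes its color, its neighbors' colors, and the
        # colors of the edges that exist among its neighbors.
        n = len(adj)
        colors = [0] * n
        for _ in range(rounds):
            new = []
            for v in range(n):
                nbrs = [u for u in range(n) if adj[v][u]]
                nbr_cols = sorted(colors[u] for u in nbrs)
                edge_cols = sorted(
                    tuple(sorted((colors[a], colors[b])))
                    for i, a in enumerate(nbrs) for b in nbrs[i + 1:] if adj[a][b]
                )
                new.append(hash((colors[v], tuple(nbr_cols), tuple(edge_cols))))
            # compress hashes to small integers so colors stay comparable across rounds
            relabel = {c: i for i, c in enumerate(sorted(set(new)))}
            colors = [relabel[c] for c in new]
        return Counter(colors)

    # 1-WL cannot distinguish two triangles from a 6-cycle; counting edges among neighbors can.
    two_triangles = [[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,0,0,0],
                     [0,0,0,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]]
    six_cycle = [[1 if abs(i - j) in (1, 5) else 0 for j in range(6)] for i in range(6)]
    print(nc_wl_colors(two_triangles) == nc_wl_colors(six_cycle))   # False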

Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence, late-stage fusion of final representations or predictions from each modality (`late fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and share only what is necessary. We find that such a strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies and achieve state-of-the-art results on multiple audio-visual classification benchmarks, including AudioSet, Epic-Kitchens and VGGSound. All code and models will be released.
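
A minimal sketch of the bottleneck idea (assumed token and bottleneck sizes, no residual, MLP or normalization blocks, so not the released model): each modality self-attends together with a shared set of bottleneck tokens, and only those tokens carry information across modalities.

    import torch
    import torch.nn as nn

    class BottleneckFusionLayer(nn.Module):
        def __init__(self, dim=256, heads=4, n_bottleneck=4):
            super().__init__()
            self.bottleneck = nn.Parameter(torch.randn(1, n_bottleneck, dim) * 0.02)
            self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.attn_v = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, audio_tokens, video_tokens):
            b = audio_tokens.shape[0]
            z = self.bottleneck.expand(b, -1, -1)
            n_z = z.shape[1]
            # each modality self-attends jointly with its own copy of the bottleneck tokens
            xa, _ = self.attn_a(*(3 * [torch.cat([audio_tokens, z], dim=1)]))
            xv, _ = self.attn_v(*(3 * [torch.cat([video_tokens, z], dim=1)]))
            # the two updated bottleneck copies are averaged and shared across modalities
            z_new = 0.5 * (xa[:, -n_z:] + xv[:, -n_z:])
            return xa[:, :-n_z], xv[:, :-n_z], z_new

    layer = BottleneckFusionLayer()
    audio, video = torch.randn(2, 50, 256), torch.randn(2, 100, 256)
    a_out, v_out, z = layer(audio, video)
    print(a_out.shape, v_out.shape, z.shape)   # (2,50,256) (2,100,256) (2,4,256)

In a full model, z_new would be fed to the next fusion layer, so cross-modal information repeatedly funnels through the small bottleneck.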

Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can incur drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users, regulating local training using the learned knowledge as an inductive bias. Empirical studies backed by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state of the art.
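
A minimal sketch of the server-side idea, with assumed layer sizes and stand-in user prediction heads rather than the paper's exact architecture: the server fits a lightweight conditional generator whose outputs the ensemble of user heads classifies as the conditioning label, and this generator is what gets broadcast to regularize local training.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    latent_dim, feat_dim, n_classes = 32, 64, 10

    # generator maps (noise, label) to a feature vector; sizes are illustrative
    generator = nn.Sequential(
        nn.Linear(latent_dim + n_classes, 128), nn.ReLU(),
        nn.Linear(128, feat_dim),
    )
    # stand-ins for the prediction heads uploaded by 5 users
    user_heads = [nn.Linear(feat_dim, n_classes) for _ in range(5)]
    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

    for step in range(100):
        y = torch.randint(0, n_classes, (64,))
        z = torch.randn(64, latent_dim)
        feats = generator(torch.cat([z, F.one_hot(y, n_classes).float()], dim=1))
        # ensemble cross-entropy: every user head should recognize the target label
        loss = sum(F.cross_entropy(head(feats), y) for head in user_heads) / len(user_heads)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # the trained generator is then broadcast; users sample it to regularize local updates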

Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for many applications: 1) the lack of aligned training pairs and 2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. To achieve diversity, we propose to embed images into two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and attribute vectors sampled from the attribute space to produce diverse outputs at test time. To handle unpaired training data, we introduce a novel cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative comparisons, we measure realism with a user study and diversity with a perceptual distance metric. We apply the proposed model to domain adaptation and show competitive performance when compared to the state-of-the-art on the MNIST-M and LineMod datasets.
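
A compact sketch of the cross-cycle consistency idea with stub linear networks (module names and sizes are placeholders, not the authors' architecture): attribute codes are swapped across domains for a first translation, then swapped back, and the loss asks the second translation to reconstruct the original inputs.

    import torch
    import torch.nn as nn

    dim = 8
    E_c = nn.Linear(dim, dim)                                     # shared content encoder (stub)
    E_a_x, E_a_y = nn.Linear(dim, dim), nn.Linear(dim, dim)       # per-domain attribute encoders
    G_x, G_y = nn.Linear(2 * dim, dim), nn.Linear(2 * dim, dim)   # per-domain generators

    def translate(G, content, attr):
        return G(torch.cat([content, attr], dim=1))

    x, y = torch.randn(4, dim), torch.randn(4, dim)     # unpaired samples from domains X and Y

    # first translation: exchange attribute codes across domains
    c_x, c_y = E_c(x), E_c(y)
    a_x, a_y = E_a_x(x), E_a_y(y)
    u = translate(G_x, c_y, a_x)    # content of y rendered with an X-domain attribute
    v = translate(G_y, c_x, a_y)    # content of x rendered with a Y-domain attribute

    # second translation: swap back and require reconstruction of the originals
    x_rec = translate(G_x, E_c(v), E_a_x(u))
    y_rec = translate(G_y, E_c(u), E_a_y(v))
    cross_cycle_loss = (x - x_rec).abs().mean() + (y - y_rec).abs().mean()
    print(cross_cycle_loss.item())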
