
This letter proposes a new frequency domain equalization (FDE) scheme that works with a pseudo-random quantization (PRQ) scheme utilizing non-zero threshold quantization in one-bit uplink multi-user massive multiple-input multiple-output (MIMO) systems, mitigating quantization distortion and supporting high-order modulation schemes. The equalizer is based on Newton's method (NM) and is applicable to orthogonal frequency division multiplexing (OFDM) transmission under frequency-selective fading by exploiting the properties of massive MIMO. We develop a low-complexity approximation of the Hessian, yielding a quasi-Newton FDE scheme. The proposed detector outperforms the benchmark detector at comparable complexity.
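The abstract does not spell out the update rule, so the sketch below only illustrates the general idea of a diagonal quasi-Newton step that exploits channel hardening in massive MIMO (the Gram matrix concentrates around its diagonal, making a cheap Hessian approximation possible). All shapes, names, and the diagonal approximation itself are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def quasi_newton_fde(Y, H, n_iter=10):
    """Per-subcarrier quasi-Newton equalization sketch.

    Y: (M, K) received frequency-domain signal, M antennas, K subcarriers
    H: (M, U, K) per-subcarrier channel for U users
    Returns X_hat: (U, K) equalized symbols.
    """
    M, U, K = H.shape
    X_hat = np.zeros((U, K), dtype=complex)
    for k in range(K):
        Hk, yk = H[:, :, k], Y[:, k]
        # Channel hardening: Hk^H Hk concentrates around its diagonal,
        # so the diagonal alone serves as a cheap Hessian approximation.
        B_inv = 1.0 / np.real(np.einsum('mu,mu->u', Hk.conj(), Hk))
        x = np.zeros(U, dtype=complex)
        for _ in range(n_iter):
            grad = Hk.conj().T @ (Hk @ x - yk)  # gradient of (1/2)||y - Hx||^2
            x = x - B_inv * grad                # diagonal quasi-Newton step
        X_hat[:, k] = x
    return X_hat
```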

Related Content

Quantum annealing (QA) is proposed for vector perturbation precoding (VPP) in multiple-input multiple-output (MIMO) communication systems. The mathematical framework of VPP is presented, outlining the problem formulation and the benefits of lattice reduction algorithms. Lattice reduction aided quantum vector perturbation (LRAQVP) is designed by harnessing physical quantum hardware, and the optimization of hardware parameters is discussed. We observe a 5 dB gain over lattice reduction zero forcing precoding (LRZFP), which behaves similarly to a quantum annealing algorithm operating without a lattice reduction stage. The proposed algorithm is also shown to approach the performance of a sphere encoder, whose complexity escalates exponentially.
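For intuition, here is a minimal classical sketch of the search problem the annealer is asked to solve: pick a complex-integer perturbation vector that minimizes transmit power. The brute-force enumeration below is only a stand-in for the annealer, and its exponential cost is precisely what QA and sphere encoding aim to avoid; `tau` and the search radius are illustrative assumptions.

```python
import itertools
import numpy as np

def vpp_brute_force(H, s, tau=4.0, radius=1):
    """Exhaustive stand-in for the annealer: search complex-integer
    perturbations l minimizing the transmit power ||H^{-1}(s + tau*l)||^2.

    H: (K, K) channel matrix, s: (K,) data symbols. The enumeration is
    exponential in K -- exactly the cost QA and sphere encoding avoid.
    """
    Hinv = np.linalg.inv(H)
    ints = range(-radius, radius + 1)
    K = len(s)
    best_x, best_p = None, np.inf
    for re in itertools.product(ints, repeat=K):
        for im in itertools.product(ints, repeat=K):
            l = np.array(re) + 1j * np.array(im)
            x = Hinv @ (s + tau * l)          # perturbed zero-forcing precoder
            p = np.linalg.norm(x) ** 2
            if p < best_p:
                best_p, best_x = p, x
    return best_x / np.sqrt(best_p)           # unit transmit power after scaling
```

In LRAQVP, a lattice reduction stage preconditions $H$ before this search, which is why it outperforms both plain annealing and LRZFP.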

Open-domain question answering (QA) requires extracting relevant information from large corpora to formulate precise answers. This paper introduces a new methodology, termed Generator-Retriever-Generator (GRG), which combines document retrieval with large language models (LLMs). First, an LLM generates context-specific documents in response to a posed question. In parallel, a dual-encoder network retrieves documents relevant to the question from a large external corpus. Both the generated and retrieved documents are then passed to a second LLM, which produces the final answer. By combining document retrieval with LLM-based generation, our method addresses the challenges of open-domain QA, notably in delivering informative and contextually apt answers. GRG surpasses state-of-the-art generate-then-read and retrieve-then-read methods (GENREAD and RFiD), improving on them by at least +5.2, +4.2, and +1.6 points on the TriviaQA, NQ, and WebQ datasets, respectively. Our code, datasets, and checkpoints are available at \url{//github.com/abdoelsayed2016/GRG}.
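The three-stage pipeline can be summarized in a few lines. The stubs below merely stand in for the paper's actual models (the generator LLM, the dual-encoder retriever, and the reader LLM); every function here is a placeholder for illustration, not the released API.

```python
from typing import List

def llm_generate_documents(question: str, n: int = 2) -> List[str]:
    """Stand-in for the first LLM, which writes context documents."""
    return [f"(generated context {i} for: {question})" for i in range(n)]

def dual_encoder_retrieve(question: str, k: int = 3) -> List[str]:
    """Stand-in for dense retrieval over an external corpus."""
    return [f"(retrieved passage {i} for: {question})" for i in range(k)]

def llm_answer(question: str, contexts: List[str]) -> str:
    """Stand-in for the second (reader) LLM producing the final answer."""
    return f"(answer conditioned on {len(contexts)} contexts)"

def grg(question: str) -> str:
    generated = llm_generate_documents(question)        # generator stage
    retrieved = dual_encoder_retrieve(question)         # retriever stage
    return llm_answer(question, generated + retrieved)  # reader stage

print(grg("Who wrote The Selfish Gene?"))
```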

We introduce fair-density parity-check (FDPC) codes targeting high-rate applications. In particular, we start with a base parity-check matrix $H_b$ of dimension $2 \sqrt{n} \times n$, where $n$ is the code block length, and the number of ones in each row and column of $H_b$ is equal to $\sqrt{n}$ and $2$, respectively. We propose a deterministic combinatorial method for picking the base matrix $H_b$, assuming $n=4t^2$ for some integer $t \geq 2$. We then extend this by obtaining permuted versions of $H_b$ (e.g., via random permutations of its columns) and stacking them on top of each other, leading to codes of dimension $k \geq n-2s\sqrt{n}+s$, for some $s \geq 2$, referred to as order-$s$ FDPC codes. We propose methods to explicitly characterize and bound the weight distribution of the new codes and utilize them to derive union-type approximate upper bounds on their error probability under maximum likelihood (ML) decoding. For the binary erasure channel (BEC), we demonstrate that the approximate ML bound of FDPC codes closely follows the random coding union (RCU) bound for a wide range of channel parameters. Also, remarkably, FDPC codes, under the low-complexity min-sum decoder, improve upon 5G-LDPC codes for transmission over the binary-input additive white Gaussian noise (B-AWGN) channel by almost 0.5 dB (for $n=1024$ and rate $=0.878$). Furthermore, we propose a new decoder combining a weighted min-sum message-passing (MP) decoding algorithm with a new progressive list (PL) decoding component, referred to as the MP-PL decoder, to further boost the performance of FDPC codes. This paper opens new avenues for the investigation of new code constructions and decoding algorithms in high-rate regimes suitable for ultra-high-throughput (high-frequency/optical) applications.
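One natural product-style construction satisfies the stated degree constraints ($2\sqrt{n}$ rows, row weight $\sqrt{n}$, column weight 2): index the $n = 4t^2$ columns by pairs $(a, b)$ with $a, b \in \{0, \dots, 2t-1\}$ and let one row group cover each $a$-slice and the other each $b$-slice. Whether this matches the paper's deterministic choice is an assumption; treat the sketch below as illustrative.

```python
import numpy as np

def fdpc_base_matrix(t: int) -> np.ndarray:
    """A 2*sqrt(n) x n base matrix H_b for n = 4t^2 with row weight
    sqrt(n) = 2t and column weight 2 (product-style construction)."""
    m = 2 * t                      # sqrt(n)
    n = m * m
    Hb = np.zeros((2 * m, n), dtype=np.uint8)
    for a in range(m):
        for b in range(m):
            col = a * m + b
            Hb[a, col] = 1         # first row group: covers slice (a, *)
            Hb[m + b, col] = 1     # second row group: covers slice (*, b)
    return Hb

Hb = fdpc_base_matrix(t=2)                     # n = 16, H_b is 8 x 16
assert Hb.sum(axis=1).tolist() == [4] * 8      # row weight sqrt(n) = 4
assert Hb.sum(axis=0).tolist() == [2] * 16     # column weight 2
```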

We present a distributed quasi-Newton (DQN) method, which enables a group of agents to compute an optimal solution of a separable multi-agent optimization problem locally using an approximation of the curvature of the aggregate objective function. Each agent computes a descent direction from its local estimate of the aggregate Hessian, obtained from quasi-Newton approximation schemes using the gradient of its local objective function. Moreover, we introduce a distributed quasi-Newton method for equality-constrained optimization (EC-DQN), where each agent takes Karush-Kuhn-Tucker-like update steps to compute an optimal solution. In our algorithms, each agent communicates with its one-hop neighbors over a peer-to-peer communication network to compute a common solution. We prove convergence of our algorithms to a stationary point of the optimization problem. In addition, we demonstrate that our algorithms converge competitively on both well-conditioned and ill-conditioned problems, in terms of the computation time and communication cost each agent incurs, compared to existing distributed first-order and second-order methods. In particular, on ill-conditioned problems our algorithms converge faster while incurring lower communication cost, across a range of communication networks with different degrees of connectedness, by leveraging curvature information.
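A minimal sketch of this family of methods, assuming a standard inverse-BFGS curvature estimate and a doubly-stochastic mixing matrix for the consensus step; the paper's exact update and step-size rule are not given in the abstract, so the details below are illustrative.

```python
import numpy as np

def bfgs_inverse_update(B_inv, s, y):
    """Standard inverse-BFGS update from step s and gradient difference y."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ B_inv @ V.T + rho * np.outer(s, s)

def dqn_round(X, grads, B_invs, W, step=1.0):
    """One round: each agent takes a local curvature-corrected step,
    then averages with its one-hop neighbors.

    X:      (N, d) current iterates, one row per agent
    grads:  (N, d) gradients of each agent's local objective
    B_invs: (N, d, d) per-agent inverse-Hessian approximations
    W:      (N, N) doubly-stochastic mixing matrix of the peer-to-peer graph
    """
    d_local = np.einsum('nij,nj->ni', B_invs, grads)  # B_inv @ grad per agent
    return W @ (X - step * d_local)                   # consensus averaging
```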

Learning a universal policy across different robot morphologies can significantly improve learning efficiency and enable zero-shot generalization to unseen morphologies. However, learning a highly performant universal policy requires sophisticated architectures like transformers (TF), which have higher memory and computational costs than simpler multi-layer perceptrons (MLPs). To achieve both the performance of a TF and the inference-time efficiency of an MLP, we propose HyperDistill, which consists of: (1) a morphology-conditioned hypernetwork (HN) that generates robot-wise MLP policies, and (2) a policy distillation approach that is essential for successful training. We show that on UNIMAL, a benchmark with hundreds of diverse morphologies, HyperDistill performs as well as a universal TF teacher policy on both training and unseen test robots, while reducing model size by 6-14 times and computational cost by 67-160 times in different environments. Our analysis attributes the inference-time efficiency advantage of HyperDistill to knowledge decoupling, i.e., the ability to decouple inter-task and intra-task knowledge, a general principle that could also be applied to improve inference efficiency in other domains.
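To make the hypernetwork idea concrete, here is a rough sketch: a small network maps a morphology embedding to the full weight vector of a per-robot MLP, which then runs at plain-MLP cost. All sizes, the embedding, and the two-layer layout are assumptions for illustration, not the paper's architecture; in training, the generated MLP's outputs would be distilled against the TF teacher's actions.

```python
import torch
import torch.nn as nn

class MorphologyHypernetwork(nn.Module):
    """Generates the weights of a small per-robot MLP policy from a
    fixed-size morphology embedding (illustrative sizes)."""
    def __init__(self, emb_dim=64, obs_dim=32, act_dim=8, hidden=128):
        super().__init__()
        self.obs_dim, self.act_dim, self.hidden = obs_dim, act_dim, hidden
        n_params = (obs_dim * hidden + hidden) + (hidden * act_dim + act_dim)
        self.hyper = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU(),
                                   nn.Linear(256, n_params))

    def forward(self, morph_emb, obs):
        p = self.hyper(morph_emb)            # all MLP weights in one vector
        i = self.obs_dim * self.hidden
        W1 = p[:i].view(self.hidden, self.obs_dim)
        b1 = p[i:i + self.hidden]; i += self.hidden
        W2 = p[i:i + self.hidden * self.act_dim].view(self.act_dim, self.hidden)
        b2 = p[i + self.hidden * self.act_dim:]
        h = torch.relu(obs @ W1.T + b1)      # the generated MLP runs at
        return h @ W2.T + b2                 # plain-MLP inference cost
```

The key point is that the (expensive) hypernetwork runs once per morphology, while per-timestep inference uses only the generated MLP.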

Self-supervised learning (SSL) in audio holds significant potential across various domains, particularly where abundant unlabeled data is available at no cost. This is pertinent in bioacoustics, where biologists routinely collect extensive sound datasets from the natural environment. In this study, we demonstrate that SSL can acquire meaningful representations of bird sounds from audio recordings without the need for annotations. Our experiments show that these learned representations generalize to new bird species in few-shot learning (FSL) scenarios. Additionally, we show that selecting windows with high bird activation for self-supervised learning, using a pretrained audio neural network, significantly enhances the quality of the learned representations.
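The window-selection step might look like the following sketch, where `detector` stands in for the pretrained audio network scoring bird activation; the window length and keep fraction are illustrative assumptions.

```python
import numpy as np

def select_active_windows(waveform, sr, detector, win_s=5.0, keep=0.25):
    """Keep the fraction of fixed-length windows that the pretrained
    detector scores as most 'bird-active' before SSL pre-training.

    detector: callable mapping a window (1-D array) to a bird-activation
              score in [0, 1]; treated here as a black box.
    """
    win = int(win_s * sr)
    windows = [waveform[i:i + win]
               for i in range(0, len(waveform) - win + 1, win)]
    scores = np.array([detector(w) for w in windows])
    n_keep = max(1, int(keep * len(windows)))
    top = np.argsort(scores)[::-1][:n_keep]   # highest-activation windows
    return [windows[i] for i in sorted(top)]  # keep temporal order
```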

We present a large-scale study on unsupervised spatiotemporal representation learning from videos. With a unified perspective on four recent image-based frameworks, we study a simple objective that can easily generalize all these methods to space-time. Our objective encourages temporally-persistent features in the same video, and in spite of its simplicity, it works surprisingly well across: (i) different unsupervised frameworks, (ii) pre-training datasets, (iii) downstream datasets, and (iv) backbone architectures. We draw a series of intriguing observations from this study, e.g., we discover that encouraging long-spanned persistency can be effective even if the timespan is 60 seconds. In addition to state-of-the-art results in multiple benchmarks, we report a few promising cases in which unsupervised pre-training can outperform its supervised counterpart. Code is made available at //github.com/facebookresearch/SlowFast.
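In contrastive instantiations of this objective, "temporally-persistent features" means two clips sampled from the same video (possibly far apart in time) are treated as positives. A minimal InfoNCE-style sketch, assuming batch-wise negatives and a temperature hyperparameter:

```python
import torch
import torch.nn.functional as F

def temporal_persistence_loss(z1, z2, tau=0.1):
    """Two clips from the same video are positives; clips from other
    videos in the batch are negatives.

    z1, z2: (B, D) embeddings of the two clips per video.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                             # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # diagonal positives
    return F.cross_entropy(logits, labels)
```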

Unsupervised domain adaptation (UDA) methods for person re-identification (re-ID) aim at transferring re-ID knowledge from labeled source data to unlabeled target data. Despite their great success, most of them use only limited data from a single source domain for model pre-training, leaving the rich labeled data insufficiently exploited. To make full use of the valuable labeled data, we introduce the multi-source concept into the UDA person re-ID field, where multiple source datasets are used during training. However, because of domain gaps, simply combining different datasets brings only limited improvement. In this paper, we address this problem from two perspectives, i.e., a domain-specific view and a domain-fusion view. Two complementary modules are proposed, and they are compatible with each other. First, a rectification domain-specific batch normalization (RDSBN) module is explored to simultaneously reduce domain-specific characteristics and increase the distinctiveness of person features. Second, a graph convolutional network (GCN) based multi-domain information fusion (MDIF) module is developed, which minimizes domain distances by fusing features of different domains. The proposed method outperforms state-of-the-art UDA person re-ID methods by a large margin, and even achieves performance comparable to supervised approaches without any post-processing techniques.
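The domain-specific half of the idea reduces to keeping separate normalization statistics per source domain, as in the simplified sketch below; the rectification step that distinguishes RDSBN from plain domain-specific batch normalization is omitted, so this is only the baseline mechanism.

```python
import torch
import torch.nn as nn

class DomainSpecificBN(nn.Module):
    """One set of BatchNorm statistics (and affine parameters) per source
    domain, so domain-specific feature shifts are normalized away.
    The rectification component of RDSBN is not modeled here."""
    def __init__(self, num_features, num_domains):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(num_features)
                                 for _ in range(num_domains))

    def forward(self, x, domain_idx):
        # Route each batch (drawn from a single domain) to its own BN.
        return self.bns[domain_idx](x)
```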

In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them. However, the trained model cannot produce a highly discriminative feature representation for the target domain because the training data is dominated by labeled samples from the source domain. This could lead to disconnection between the labeled and unlabeled target samples as well as misalignment between unlabeled target samples and the source domain. In this paper, we propose a novel approach called Cross-domain Adaptive Clustering to address this problem. To achieve both inter-domain and intra-domain adaptation, we first introduce an adversarial adaptive clustering loss to group features of unlabeled target data into clusters and perform cluster-wise feature alignment across the source and target domains. We further apply pseudo labeling to unlabeled samples in the target domain and retain pseudo-labels with high confidence. Pseudo labeling expands the number of "labeled" samples in each class in the target domain, and thus produces a more robust and powerful cluster core for each class to facilitate adversarial learning. Extensive experiments on benchmark datasets, including DomainNet, Office-Home and Office, demonstrate that our proposed approach achieves state-of-the-art performance in semi-supervised domain adaptation.
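The confidence-based retention step is standard and easy to sketch; the threshold value below is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def confident_pseudo_labels(logits, threshold=0.95):
    """Assign pseudo-labels only to unlabeled target samples whose top
    predicted class probability exceeds the threshold.

    logits: (N, C) classifier outputs on unlabeled target data.
    Returns (indices, labels) of the retained samples.
    """
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold
    return keep.nonzero(as_tuple=True)[0], labels[keep]
```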

Social relations are often used to improve recommendation quality when user-item interaction data is sparse in recommender systems. Most existing social recommendation models exploit pairwise relations to mine potential user preferences. However, real-life interactions among users are very complicated and user relations can be high-order. Hypergraphs provide a natural way to model complex high-order relations, yet their potential for improving social recommendation remains under-explored. In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations. Technically, each channel in the network encodes a hypergraph that depicts a common high-order user relation pattern via hypergraph convolution. By aggregating the embeddings learned through multiple channels, we obtain comprehensive user representations to generate recommendation results. However, the aggregation operation might also obscure the inherent characteristics of different types of high-order connectivity information. To compensate for the aggregation loss, we integrate self-supervised learning into the training of the hypergraph convolutional network to regain the connectivity information with hierarchical mutual information maximization. The experimental results on multiple real-world datasets show that the proposed model outperforms state-of-the-art methods, and the ablation study verifies the effectiveness of the multi-channel setting and the self-supervised task. The implementation of our model is available at //github.com/Coder-Yu/RecQ.
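A single hypergraph convolution step, in its commonly used normalized form $X' = D_v^{-1} H D_e^{-1} H^\top X \Theta$, can be sketched as follows; whether the paper uses exactly this normalization is an assumption, and nonlinearities and the multi-channel aggregation are omitted.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """Simplified hypergraph convolution: X' = Dv^-1 H De^-1 H^T X Theta.

    X:     (n_users, d) user embeddings
    H:     (n_users, n_edges) incidence matrix, H[u, e] = 1 if user u
           belongs to hyperedge e (e.g., a friend group); one such
           propagation runs per channel
    Theta: (d, d') learnable transform
    """
    Dv = np.maximum(H.sum(axis=1), 1)  # vertex degrees (guard empty rows)
    De = np.maximum(H.sum(axis=0), 1)  # hyperedge degrees (guard empty edges)
    P = (H / Dv[:, None]) @ (H.T / De[:, None])  # user-to-user propagation
    return P @ X @ Theta
```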
