
We investigate the fundamental limits of unsourced random access over the binary-input Gaussian channel. By fundamental limits, we mean the minimal energy per bit required to achieve a target per-user probability of error. The original method proposed by Y. Polyanskiy (2017), based on Gallager's trick, does not work well for binary signaling. We instead utilize Fano's method, which relies on the choice of a so-called ``good'' region. We apply this method to the cases of Gaussian and binary codebooks and obtain two achievability bounds. The first bound is very close to Polyanskiy's bound but does not yield any improvement. At the same time, the numerical results show that the bound for the binary case practically coincides with the bound for the Gaussian codebook. Thus, we conclude that binary modulation does not lead to performance degradation, and energy-efficient schemes with binary modulation do exist.
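
As background for the two quantities above (these are the standard definitions used in the unsourced random access literature, stated here as context rather than quoted from the abstract): with $K_a$ active users, each transmitting $k$ bits over $n$ real channel uses under power constraint $P$ and unit-variance noise, one typically takes
$$\frac{E_b}{N_0} = \frac{nP}{2k}, \qquad P_e = \frac{1}{K_a}\sum_{j=1}^{K_a} \Pr\left[W_j \notin \hat{\mathcal{L}}\right],$$
where $W_j$ is the message of the $j$-th active user and $\hat{\mathcal{L}}$ is the list of messages produced by the decoder.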

Related content

With the ever-growing sizes of pre-trained models (PTMs), it has become an emerging practice to provide only inference APIs to users, namely the model-as-a-service (MaaS) setting. To adapt PTMs with model parameters frozen, most current approaches focus on the input side, seeking powerful prompts that stimulate models to produce correct answers. However, we argue that input-side adaptation can be arduous due to the lack of gradient signals, and such methods usually require thousands of API queries, resulting in high computation and time costs. In light of this, we present Decoder Tuning (DecT), which instead optimizes task-specific decoder networks on the output side. Specifically, DecT first extracts prompt-stimulated output scores for initial predictions. On top of that, we train an additional decoder network on the output representations to incorporate posterior data knowledge. By gradient-based optimization, DecT can be trained within several seconds and requires only one PTM query per sample. Empirically, we conduct extensive natural language understanding experiments and show that DecT significantly outperforms state-of-the-art algorithms with a $200\times$ speed-up.
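
To make the output-side idea concrete, here is a minimal PyTorch sketch of training a small decoder on cached PTM outputs; the class and function names, the decoder architecture, and the hyperparameters are illustrative assumptions and not the authors' released DecT code.

    import torch
    import torch.nn as nn

    class OutputDecoder(nn.Module):
        """Small trainable head that refines prompt-stimulated scores."""
        def __init__(self, hidden_dim, num_classes):
            super().__init__()
            self.proj = nn.Linear(hidden_dim, num_classes)

        def forward(self, reps, prompt_scores, alpha=1.0):
            # combine the frozen model's prompt-based scores with a learned correction
            return prompt_scores + alpha * self.proj(reps)

    def tune_decoder(reps, prompt_scores, labels, num_classes, epochs=200, lr=1e-2):
        # reps: [N, hidden_dim] output representations, one cached PTM query per sample
        # prompt_scores: [N, num_classes] prompt-stimulated scores for initial predictions
        decoder = OutputDecoder(reps.shape[1], num_classes)
        optimizer = torch.optim.Adam(decoder.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            optimizer.zero_grad()
            loss = loss_fn(decoder(reps, prompt_scores), labels)
            loss.backward()
            optimizer.step()
        return decoder

Because the PTM is queried only once per sample and only the decoder receives gradients, the whole fit runs in seconds on cached representations.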

Being capable of enhancing spectral efficiency (SE), faster-than-Nyquist (FTN) signaling is a promising approach for wireless communication systems. This paper investigates doubly-selective (i.e., time- and frequency-selective) channel estimation and data detection for FTN signaling. We consider the intersymbol interference (ISI) resulting from both the FTN signaling and the frequency-selective channel, and adopt an efficient frame structure with reduced overhead. We propose a novel channel estimation technique for FTN signaling based on the least sum of squared errors (LSSE) approach to estimate the complex channel coefficients at the pilot locations within the frame. In particular, we find the optimal pilot sequence that minimizes the mean square error (MSE) of the channel estimation. To address the time-selective nature of the channel, we use low-complexity linear interpolation to track the complex channel coefficients at the data symbol locations within the frame. To detect the data symbols of FTN signaling, we adopt a turbo equalization technique based on a linear soft-input soft-output (SISO) minimum mean square error (MMSE) equalizer. Simulation results show that the MSE of the proposed FTN signaling channel estimation employing the designed optimal pilot sequence is lower than that of its counterpart designed for conventional Nyquist transmission. The bit error rate (BER) of FTN signaling employing the proposed optimal pilot sequence also improves on that of FTN signaling employing the conventional Nyquist pilot sequence. Additionally, for the same SE, the proposed FTN signaling channel estimation employing the designed optimal pilot sequence performs better than competing techniques from the literature.
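
As a rough illustration of the two estimation steps described above (least-squares fitting at pilot locations, then linear interpolation across the frame), here is a minimal NumPy sketch; the function names, the convolution-matrix construction, and the block structure are my own assumptions rather than the paper's exact LSSE formulation.

    import numpy as np

    def lsse_channel_estimate(y_pilot, pilots, num_taps):
        # Build a convolution matrix A of the known pilots so that y_pilot ~ A @ h + noise,
        # then solve the least-sum-of-squared-errors problem argmin_h ||y_pilot - A h||^2.
        N = len(y_pilot)
        A = np.zeros((N, num_taps), dtype=complex)
        for i in range(N):
            for l in range(num_taps):
                if 0 <= i - l < len(pilots):
                    A[i, l] = pilots[i - l]
        h_hat, *_ = np.linalg.lstsq(A, y_pilot, rcond=None)
        return h_hat

    def interpolate_taps(h_blocks, pilot_positions, data_positions):
        # Track time-selective taps at data-symbol locations by linear interpolation
        # of the per-pilot-block estimates (real and imaginary parts separately).
        h_blocks = np.asarray(h_blocks)  # shape: [num_pilot_blocks, num_taps]
        taps = []
        for l in range(h_blocks.shape[1]):
            re = np.interp(data_positions, pilot_positions, h_blocks[:, l].real)
            im = np.interp(data_positions, pilot_positions, h_blocks[:, l].imag)
            taps.append(re + 1j * im)
        return np.stack(taps, axis=1)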

While prior research has proposed a plethora of methods that enhance the adversarial robustness of neural classifiers, practitioners are still reluctant to adopt these techniques due to their unacceptably severe penalties in clean accuracy. This paper shows that by mixing the output probabilities of a standard classifier and a robust model, where the standard network is optimized for clean accuracy and is not robust in general, this accuracy-robustness trade-off can be significantly alleviated. We show that the robust base classifier's confidence difference for correct and incorrect examples is the key ingredient of this improvement. In addition to providing intuitive and empirical evidence, we also theoretically certify the robustness of the mixed classifier under realistic assumptions. Furthermore, we adapt an adversarial input detector into a mixing network that adaptively adjusts the mixture of the two base models, further reducing the accuracy penalty of achieving robustness. The proposed flexible method, termed "adaptive smoothing", can work in conjunction with existing or even future methods that improve clean accuracy, robustness, or adversary detection. Our empirical evaluation considers strong attack methods, including AutoAttack and adaptive attacks. On the CIFAR-100 dataset, our method achieves an 85.21% clean accuracy while maintaining a 38.72% $\ell_\infty$-AutoAttacked ($\epsilon$=8/255) accuracy, becoming the second most robust method on the RobustBench CIFAR-100 benchmark as of submission, while improving the clean accuracy by ten percentage points compared with all listed models. The code that implements our method is available at //github.com/Bai-YT/AdaptiveSmoothing.
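
The core mixing step can be written in a few lines; the following PyTorch sketch is an illustration under my own assumptions (a fixed mixing weight and names invented here), not the paper's full adaptive-smoothing model, in which the weight is produced by a mixing network driven by an adversarial-input detector.

    import torch

    def mixed_classifier_log_probs(std_logits, robust_logits, alpha=0.5):
        # alpha = 0 recovers the standard (clean-accuracy) classifier and alpha = 1
        # recovers the robust classifier; intermediate values trade the two off.
        p_std = torch.softmax(std_logits, dim=-1)
        p_rob = torch.softmax(robust_logits, dim=-1)
        return torch.log((1.0 - alpha) * p_std + alpha * p_rob + 1e-12)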

Consider two $D$-dimensional data vectors (e.g., embeddings): $u, v$. In many embedding-based retrieval (EBR) applications where the vectors are generated from trained models, dimensions of $D=256\sim 1024$ are common. In this paper, OPORP (one permutation + one random projection) uses a variant of the ``count-sketch'' type of data structures to achieve data reduction/compression. With OPORP, we first apply a permutation to the data vectors. A random vector $r$ is generated i.i.d. with moments: $E(r_i) = 0, E(r_i^2)=1, E(r_i^3) =0, E(r_i^4)=s$. We multiply $r$ element-wise with all permuted data vectors. Then we break the $D$ columns into $k$ equal-length bins and aggregate (i.e., sum) the values in each bin to obtain $k$ samples from each data vector. One crucial step is to normalize the $k$ samples to the unit $l_2$ norm. We show that the estimation variance is essentially $(s-1)A + \frac{D-k}{D-1}\frac{1}{k}\left[ (1-\rho^2)^2 -2A\right]$, where $A\geq 0$ is a function of the data ($u,v$). This formula reveals several key properties: (1) We need $s=1$. (2) The factor $\frac{D-k}{D-1}$ can be highly beneficial in reducing variances. (3) The term $\frac{1}{k}(1-\rho^2)^2$ is a substantial improvement compared with $\frac{1}{k}(1+\rho^2)$, which corresponds to the un-normalized estimator. We illustrate that by letting the $k$ in OPORP be $k=1$ and repeating the procedure $m$ times, we exactly recover the work of ``very sparse random projections'' (VSRP). This immediately leads to a normalized estimator for VSRP which substantially improves the original estimator of VSRP. In summary, with OPORP, the two key steps, (i) the normalization and (ii) the fixed-length binning scheme, have considerably improved the accuracy in estimating the cosine similarity, which is a routine (and crucial) task in modern embedding-based retrieval (EBR) applications.
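
The construction described above is compact enough to state in code; the following NumPy sketch is an illustration under the stated choices ($s=1$, i.e., a Rademacher projection vector, and $k$ dividing $D$), with names and parameters assumed here rather than taken from the paper's implementation.

    import numpy as np

    def oporp_sketch(x, perm, r, k):
        D = x.shape[0]
        z = x[perm] * r                        # one permutation, then element-wise product with r
        z = z.reshape(k, D // k).sum(axis=1)   # sum within k equal-length bins (assumes k divides D)
        return z / np.linalg.norm(z)           # normalize the k samples to unit l2 norm

    rng = np.random.default_rng(0)
    D, k = 1024, 64
    perm = rng.permutation(D)
    r = rng.choice([-1.0, 1.0], size=D)        # E r_i = 0, E r_i^2 = 1, E r_i^3 = 0, E r_i^4 = 1 (s = 1)
    u = rng.standard_normal(D)
    v = u + 0.5 * rng.standard_normal(D)
    rho_hat = float(oporp_sketch(u, perm, r, k) @ oporp_sketch(v, perm, r, k))
    rho = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    print(rho, rho_hat)                        # the inner product of the sketches estimates the cosine similarity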

Due to complex interactions among various deep neural network (DNN) optimization techniques, modern DNNs can have weights and activations that are dense or sparse with diverse sparsity degrees. To offer a good trade-off between accuracy and hardware performance, an ideal DNN accelerator should have high flexibility to efficiently translate DNN sparsity into reductions in energy and/or latency without incurring significant complexity overhead. This paper introduces hierarchical structured sparsity (HSS), with the key insight that we can systematically represent diverse sparsity degrees by having them hierarchically composed from multiple simple sparsity patterns. As a result, HSS simplifies the underlying hardware since it only needs to support simple sparsity patterns; this significantly reduces the sparsity acceleration overhead, which improves efficiency. Motivated by such opportunities, we propose a simultaneously efficient and flexible accelerator, named HighLight, to accelerate DNNs that have diverse sparsity degrees (including dense). Due to the flexibility of HSS, different HSS patterns can be introduced to DNNs to meet different applications' accuracy requirements. Compared to existing works, HighLight achieves a geomean of up to 6.4x better energy-delay product (EDP) across workloads with diverse sparsity degrees, and always sits on the EDP-accuracy Pareto frontier for representative DNNs.
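
To illustrate the compositional idea (not HighLight's actual hardware encoding), the following NumPy sketch composes two simple N:M structured-sparsity levels into a hierarchical mask whose overall density is the product of the per-level densities; all names and the example pattern are assumptions made here.

    import numpy as np

    def nm_select(scores, n, m):
        # Boolean mask keeping the n largest scores in every group of m consecutive entries.
        s = scores.reshape(-1, m)
        keep = np.argsort(-s, axis=1)[:, :n]
        mask = np.zeros_like(s, dtype=bool)
        np.put_along_axis(mask, keep, True, axis=1)
        return mask.reshape(scores.shape)

    def hierarchical_mask(w, block_size, outer_nm, inner_nm):
        # Two-level hierarchical structured sparsity:
        #   outer level: an N:M pattern over blocks (scored by block magnitude),
        #   inner level: an N:M pattern over elements inside the kept blocks.
        # Overall density = (outer n/m) * (inner n/m), yet the hardware only ever
        # has to support the two simple N:M patterns.
        blocks = w.reshape(-1, block_size)
        outer = nm_select(np.abs(blocks).sum(axis=1), *outer_nm)
        inner = nm_select(np.abs(blocks).ravel(), *inner_nm).reshape(blocks.shape)
        return (outer[:, None] & inner).reshape(w.shape)

    w = np.random.default_rng(0).standard_normal((16, 16))
    mask = hierarchical_mask(w, block_size=8, outer_nm=(2, 4), inner_nm=(2, 4))
    print(mask.mean())   # 0.5 * 0.5 = 25% overall density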

Fault-aware retraining has emerged as a prominent technique for mitigating permanent faults in Deep Neural Network (DNN) hardware accelerators. However, retraining incurs huge overheads, especially when used to fine-tune large DNNs designed for solving complex problems. Moreover, as each fabricated chip can have a distinct fault pattern, fault-aware retraining must be performed for each chip individually, considering its unique fault map, which further aggravates the problem. To reduce the overall retraining cost, in this work we introduce the concept of resilience-driven retraining amount selection. To realize this concept, we propose a novel framework, Reduce, which first computes the resilience of the given DNN to faults at different fault rates and with different amounts of retraining. Then, based on this resilience, it computes the amount of retraining required for each chip considering its unique fault map. We demonstrate the effectiveness of our methodology for a systolic array-based DNN accelerator experiencing permanent faults in its computational array.
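
A bare-bones version of the selection step could look like the following Python sketch; the profiling table, the thresholding rule, and all names are hypothetical illustrations of the idea rather than the Reduce framework itself.

    import numpy as np

    def select_retraining_amount(fault_rates, epoch_options, accuracy_table,
                                 chip_fault_rate, target_accuracy):
        # accuracy_table[i, j]: profiled accuracy of the DNN at fault rate
        # fault_rates[i] after epoch_options[j] epochs of fault-aware retraining.
        # Use the closest profiled fault rate at or above the chip's measured rate
        # and return the smallest retraining amount that meets the accuracy target.
        i = min(int(np.searchsorted(fault_rates, chip_fault_rate)), len(fault_rates) - 1)
        for j, epochs in enumerate(epoch_options):
            if accuracy_table[i, j] >= target_accuracy:
                return epochs
        return epoch_options[-1]   # fall back to the maximum profiled amount

    # Toy numbers purely to illustrate the interface.
    fault_rates = np.array([1e-4, 1e-3, 1e-2])
    epoch_options = [0, 1, 2, 5]
    accuracy_table = np.array([[0.74, 0.75, 0.75, 0.75],
                               [0.70, 0.73, 0.75, 0.75],
                               [0.55, 0.65, 0.71, 0.74]])
    print(select_retraining_amount(fault_rates, epoch_options, accuracy_table,
                                   chip_fault_rate=2e-3, target_accuracy=0.74))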

Most existing studies on joint activity detection and channel estimation for grant-free massive random access (RA) systems assume perfect synchronization among all active users, which is hard to achieve in practice. Therefore, this paper considers asynchronous grant-free massive RA systems and develops novel algorithms for joint user activity detection, synchronization delay detection, and channel estimation. In particular, the framework of orthogonal approximate message passing (OAMP) is first utilized to deal with the non-independent and identically distributed (i.i.d.) pilot matrix in asynchronous grant-free massive RA systems, and an OAMP-based algorithm capable of leveraging the common sparsity among the received pilot signals from multiple base station antennas is developed. To reduce the computational complexity, a memory AMP (MAMP)-based algorithm is further proposed that eliminates the matrix inversions in the OAMP-based algorithm. Simulation results demonstrate the effectiveness of the two proposed algorithms over the baseline methods. Besides, the MAMP-based algorithm reduces the computations by 37% while maintaining comparable detection/estimation accuracy compared with the OAMP-based algorithm.

Due to the power consumption and high circuit cost of antenna arrays, the practical application of massive multiple-input multiple-output (MIMO) in the sixth generation (6G) and future wireless networks remains challenging. Employing low-resolution analog-to-digital converters (ADCs) and a hybrid analog and digital (HAD) structure are two low-cost choices with acceptable performance loss. In this paper, the combination of the mixed-ADC architecture and the HAD structure employed at the receiver is proposed for direction of arrival (DOA) estimation, which can be applied to beamforming tracking and alignment in 6G. By adopting the additive quantization noise model, the exact closed-form expression of the Cram\'{e}r-Rao lower bound (CRLB) for the HAD architecture with mixed-ADCs is derived. Moreover, the closed-form expression of the performance loss factor is derived as a benchmark. In addition, to take power consumption into account, energy efficiency is also investigated in our paper. The numerical results reveal that the HAD structure with mixed-ADCs can significantly reduce the power consumption and hardware cost. Furthermore, this architecture is able to achieve a better trade-off between performance loss and power consumption. Finally, adopting 2-4 bits of resolution may be a good choice in practical massive MIMO systems.

Maximum mean discrepancy (MMD) flows suffer from high computational costs in large-scale computations. In this paper, we show that MMD flows with Riesz kernels $K(x,y) = - \|x-y\|^r$, $r \in (0,2)$, have exceptional properties that allow for their efficient computation. First, the MMD of Riesz kernels coincides with the MMD of their sliced version. As a consequence, the computation of gradients of MMDs can be performed in the one-dimensional setting. Here, for $r=1$, a simple sorting algorithm can be applied to reduce the complexity from $O(MN+N^2)$ to $O((M+N)\log(M+N))$ for two empirical measures with $M$ and $N$ support points. For the implementation, we approximate the gradient of the sliced MMD by using only a finite number $P$ of slices. We show that the resulting error has complexity $O(\sqrt{d/P})$, where $d$ is the data dimension. These results enable us to train generative models by approximating MMD gradient flows by neural networks, even for large-scale applications. We demonstrate the efficiency of our model by image generation on MNIST, FashionMNIST, and CIFAR10.
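
For intuition about the slicing step, here is a minimal NumPy sketch of a sliced MMD estimate with the $r=1$ Riesz (negative distance) kernel, written under my own assumptions: it uses the plain $O(MN)$ one-dimensional computation rather than the sorting-based $O((M+N)\log(M+N))$ algorithm mentioned above, and all names are illustrative.

    import numpy as np

    def mmd2_1d(x, y):
        # Squared MMD in one dimension with kernel K(a, b) = -|a - b|
        # (O(MN) reference version; the paper replaces this with a sorting algorithm).
        dxy = np.abs(x[:, None] - y[None, :]).mean()
        dxx = np.abs(x[:, None] - x[None, :]).mean()
        dyy = np.abs(y[:, None] - y[None, :]).mean()
        return 2.0 * dxy - dxx - dyy

    def sliced_mmd2(X, Y, P=64, rng=None):
        # Project both point sets onto P random unit directions and average
        # the one-dimensional squared MMDs over the slices.
        rng = np.random.default_rng() if rng is None else rng
        dirs = rng.standard_normal((P, X.shape[1]))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        return float(np.mean([mmd2_1d(X @ w, Y @ w) for w in dirs]))

    X = np.random.default_rng(0).standard_normal((200, 10))
    Y = np.random.default_rng(1).standard_normal((300, 10)) + 0.5
    print(sliced_mmd2(X, Y))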

This paper presents a parameter-efficient learning (PEL) approach to develop low-resource accent adaptation for text-to-speech (TTS). A resource-efficient adaptation of a frozen pre-trained TTS model is developed by using only 1.2\% to 0.8\% of the original trainable parameters to achieve competitive performance in voice synthesis. Motivated by a theoretical foundation of optimal transport (OT), this study carries out PEL for TTS in which an auxiliary unsupervised loss based on OT is introduced, in addition to the supervised training loss, to maximize a difference between the pre-trained source domain and the (unseen) target domain. Further, we leverage this unsupervised loss refinement to boost system performance via either the sliced Wasserstein distance or the maximum mean discrepancy. The merit of this work is demonstrated by building PEL solutions based on residual adapter learning and model reprogramming, evaluated on Mandarin accent adaptation. Experimental results show that the proposed methods can achieve competitive naturalness with parameter-efficient decoder fine-tuning, and that the auxiliary unsupervised loss empirically improves model performance.
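
As an illustration of one of the two auxiliary losses mentioned above, here is a minimal NumPy sketch of a sliced Wasserstein distance between two sets of feature vectors; the quantile grid, the number of projections, and all names are assumptions made here and do not reproduce the paper's training code.

    import numpy as np

    def sliced_wasserstein(X, Y, P=50, num_quantiles=100, rng=None):
        # X: [N, d] features from one domain, Y: [M, d] features from the other.
        # Project onto P random unit directions and compare matched quantiles
        # of the two one-dimensional projected distributions.
        rng = np.random.default_rng() if rng is None else rng
        dirs = rng.standard_normal((P, X.shape[1]))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        q = np.linspace(0.0, 1.0, num_quantiles)
        total = 0.0
        for w in dirs:
            xq = np.quantile(X @ w, q)
            yq = np.quantile(Y @ w, q)
            total += np.mean(np.abs(xq - yq))
        return total / P

    src = np.random.default_rng(0).standard_normal((128, 64))
    tgt = np.random.default_rng(1).standard_normal((96, 64)) + 0.3
    print(sliced_wasserstein(src, tgt))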
