
Sixth-generation (6G) communication systems are poised to accommodate high data-rate wireless communication services in highly dynamic channels, with applications including high-speed trains, unmanned aerial vehicles, and intelligent transportation systems. Orthogonal frequency-division multiplexing (OFDM) modulation suffers from performance degradation in such high-mobility applications due to high Doppler spread in the channel. The recently proposed Orthogonal Time Frequency Space (OTFS) modulation scheme outperforms OFDM in terms of supporting higher transmitter (Tx) and receiver (Rx) user velocities. Additionally, the highly dynamic time-frequency (TF) channel has little effect on OTFS-modulated signals, which enables low-complexity pre-processing architectures for implementing massive multiple-input multiple-output (MIMO) OTFS systems. However, while OTFS has received attention in the literature from a theory and simulation perspective, there has been comparatively little work on real-time FPGA implementation of OTFS waveforms. Thus, in this paper, we first present a mathematical overview of OTFS modulation and then describe an FPGA hardware implementation of OTFS. Power, area, and timing analysis of the implemented design on a Zynq UltraScale+ RFSoC FPGA is provided for benchmarking purposes.
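
As a complement to the mathematical overview, the sketch below illustrates the delay-Doppler to time-domain mapping used by an OTFS transmitter (ISFFT followed by a Heisenberg transform with rectangular pulses) in NumPy. The grid sizes, pulse shape, and normalization are illustrative assumptions, not the paper's RTL design.

```python
import numpy as np

def otfs_modulate(X_dd):
    """Map an M x N delay-Doppler symbol grid to a time-domain frame:
    ISFFT to the time-frequency grid, then a per-column M-point IFFT
    (Heisenberg transform with rectangular pulses).  Normalization
    conventions vary across papers; unitary scaling is used here."""
    M, N = X_dd.shape
    # ISFFT: M-point DFT along the delay axis, N-point IDFT along the Doppler axis
    X_tf = np.fft.fft(np.fft.ifft(X_dd, axis=1), axis=0) * np.sqrt(N / M)
    # Heisenberg transform (rectangular pulse): unitary IFFT per TF column
    s = np.fft.ifft(X_tf, axis=0) * np.sqrt(M)
    return s.flatten(order="F")          # serialize one M-sample block per symbol

# Example: QPSK symbols on a small 8 x 4 delay-Doppler grid
rng = np.random.default_rng(0)
X_dd = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, (8, 4)) + np.pi / 4))
tx_frame = otfs_modulate(X_dd)
print(tx_frame.shape)                    # (32,) time-domain samples
```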

Related content

FPGA: ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. Publisher: ACM. SIT:

Deep convolutional neural networks have been widely applied in salient object detection and have achieved remarkable results in this field. However, existing models suffer from information distortion caused by interpolation during up-sampling and down-sampling. In response to this drawback, this article addresses the problem from two directions in the network: features and labels. On the one hand, a novel cascaded interaction network with a guidance module named global-local aligned attention (GAA) is designed to reduce the negative impact of interpolation on the feature side. On the other hand, a deep supervision strategy based on edge erosion is proposed to reduce the negative guidance of label interpolation on the lateral outputs. Extensive experiments on five popular datasets demonstrate the superiority of our method.
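
As a hedged illustration of the edge-erosion idea (the exact erosion strategy is not given in this abstract), the snippet below erodes a toy saliency label so that unreliable boundary pixels are excluded from supervising a low-resolution side output.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Hedged illustration: shrink a binary saliency label before using it for
# deep supervision, so the boundary band that interpolation distorts the
# most no longer contributes to the loss on a coarse lateral output.
label = np.zeros((16, 16), dtype=bool)
label[4:12, 4:12] = True                       # toy square saliency mask
eroded = binary_erosion(label, iterations=2)   # drop a 2-pixel boundary band
print(label.sum(), eroded.sum())               # 64 -> 16 foreground pixels
```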

Internet of Things (IoT) applications are composed of massive quantities of resource-limited devices that collect sensitive data with long-term operational and security requirements. With the threat of emerging quantum computers, Post-Quantum Cryptography (PQC) is a critical requirement for IoTs. In particular, digital signatures offer scalable authentication with non-repudiation and are an essential tool for IoTs. However, as seen in the NIST PQC standardization, post-quantum signatures are extremely costly for resource-limited IoTs. Hence, there is a significant need for quantum-safe signatures that respect the processing, memory, and bandwidth limitations of IoTs. In this paper, we propose a new lightweight quantum-safe digital signature referred to as INFinity-HORS (INF-HORS), which is (to the best of our knowledge) the first signer-optimal hash-based signature with (polynomially) unbounded signing capability. INF-HORS enables a verifier to non-interactively construct one-time public keys from a master public key via encrypted function evaluations. This strategy avoids the performance bottleneck of hash-based standards (e.g., SPHINCS+) by eliminating hyper-tree structures. It also does not require a trusted party or non-colliding servers to distribute public keys. Our performance analysis confirms that INF-HORS is orders of magnitude more efficient in signer computation than selected NIST PQC schemes (e.g., SPHINCS+, Dilithium, Falcon), with a small memory footprint.
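
The abstract does not spell out INF-HORS's construction. As background, here is a minimal sketch of the classic HORS one-time signature that HORS-style schemes build on; the parameters (t = 256, k = 16) and hash choice are illustrative, not the paper's.

```python
import hashlib
import os

T, K = 256, 16                 # t secret values, k revealed per signature
LOG_T = T.bit_length() - 1     # bits needed to index one secret value (8)

def keygen():
    """One-time key pair: t random secrets, their hashes form the public key."""
    sk = [os.urandom(32) for _ in range(T)]
    pk = [hashlib.sha256(s).digest() for s in sk]
    return sk, pk

def _indices(msg: bytes):
    """Split the message digest into k chunks of log2(t) bits each."""
    bits = ''.join(f'{b:08b}' for b in hashlib.sha256(msg).digest())
    return [int(bits[i * LOG_T:(i + 1) * LOG_T], 2) for i in range(K)]

def sign(sk, msg: bytes):
    return [(i, sk[i]) for i in _indices(msg)]        # reveal k secrets

def verify(pk, msg: bytes, sig):
    return all(i == j and hashlib.sha256(s).digest() == pk[i]
               for (j, s), i in zip(sig, _indices(msg)))

sk, pk = keygen()
sig = sign(sk, b"sensor reading #42")
assert verify(pk, b"sensor reading #42", sig)
```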

With the increasing demand from passengers for data-intensive services, millimeter-wave (mmWave) communication is considered an effective technique to relieve the transmission pressure on high-speed train (HST) networks. However, mmWave signals encounter severe losses when passing through the carriage, which degrades the quality of service on board. In this paper, we investigate an intelligent refracting surface (IRS)-assisted HST communication system. Herein, an IRS is deployed on the train window to dynamically reconfigure the propagation environment, and a hybrid time-division multiple access/non-orthogonal multiple access scheme is leveraged for interference mitigation. We aim to maximize the overall throughput while taking into account the constraints imposed by base station beamforming, IRS discrete phase shifts, and transmit power. To obtain a practical solution, we employ an alternating optimization method and propose a two-stage algorithm. In the first stage, the successive convex approximation method and the branch-and-bound algorithm are leveraged for IRS phase shift design. In the second stage, the Lagrangian multiplier method is utilized for power allocation. Simulation results demonstrate the benefits of IRS adoption and power allocation for throughput improvement in mmWave HST networks.
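
As a rough illustration of the Lagrangian-multiplier power allocation idea in the second stage (not the paper's exact formulation), the sketch below performs classic water-filling by bisecting on the Lagrange multiplier; the channel gains and power budget are made-up numbers.

```python
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    """Classic water-filling power allocation via bisection on the Lagrange
    multiplier (the "water level").  `gains` are channel gains normalized by
    noise power; returns the per-stream transmit powers."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / g.min()      # water level bracketing the budget
    while hi - lo > tol:
        mu = (lo + hi) / 2
        p = np.maximum(mu - 1.0 / g, 0.0)      # KKT condition: p_i = (mu - 1/g_i)^+
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / g, 0.0)

p = water_filling([2.0, 1.0, 0.25], p_total=4.0)
print(p, p.sum())                              # powers sum to (almost) the budget
```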

Collaborative inference systems are one of the emerging solutions for deploying deep neural networks (DNNs) at the wireless network edge. Their main idea is to divide a DNN into two parts, where the first is shallow enough to be reliably executed at edge devices of limited computational power, while the second part is executed at an edge server with higher computational capabilities. The main advantage of such systems is that the input of the DNN gets compressed as the subsequent layers of the shallow part extract only the information necessary for the task. As a result, significant communication savings can be achieved compared to transmitting raw input samples. In this work, we study early exiting in the context of collaborative inference, which allows obtaining inference results at the edge device for certain samples, without the need to transmit the partially processed data to the edge server at all, leading to further communication savings. The central part of our system is the transmission-decision (TD) mechanism, which, given the information from the early exit and the wireless channel conditions, decides whether to keep the early exit prediction or transmit the data to the edge server for further processing. In this paper, we evaluate various TD mechanisms and show experimentally that, for an image classification task over the wireless edge, proper utilization of early exits can provide both performance gains and significant communication savings.
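
To make the TD mechanism concrete, here is one possible rule sketched in Python; the specific criterion (softmax confidence plus a channel-SNR cutoff) and its thresholds are assumptions for illustration, not the mechanisms evaluated in the paper.

```python
import numpy as np

def transmission_decision(early_logits, channel_snr_db,
                          conf_thresh=0.8, snr_min_db=0.0):
    """Toy transmission-decision (TD) rule: keep the early-exit prediction
    when it is confident enough, or when the channel is too poor to justify
    offloading the intermediate features to the edge server."""
    probs = np.exp(early_logits - early_logits.max())
    probs /= probs.sum()                       # softmax confidence at the early exit
    confident = probs.max() >= conf_thresh
    bad_channel = channel_snr_db < snr_min_db
    if confident or bad_channel:
        return "local", int(probs.argmax())    # use the early-exit result
    return "offload", None                     # transmit features to the server

print(transmission_decision(np.array([4.0, 0.5, 0.2]), channel_snr_db=10.0))
print(transmission_decision(np.array([1.0, 0.9, 0.8]), channel_snr_db=10.0))
```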

Optimal transport (OT) and unbalanced optimal transport (UOT) are central in many machine learning, statistics and engineering applications. 1D OT is easily solved, with complexity O(n log n), but no efficient algorithm was known for 1D UOT. We present a new approach that leverages the successive shortest path algorithm for the corresponding network flow problem. By employing a suitable representation, we bundle together multiple steps that do not change the cost of the shortest path. We prove that our algorithm solves 1D UOT in O(n log n), closing the gap.
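
For context, the balanced 1D OT problem admits the well-known O(n log n) sort-and-match solution sketched below (equal-size supports, uniform weights). The paper's successive-shortest-path algorithm for 1D UOT is more involved and is not reproduced here.

```python
import numpy as np

def ot_1d(x, y, p=1):
    """Balanced 1D optimal transport between two equal-size empirical
    distributions with uniform weights: sort both supports and match
    order statistics, which costs O(n log n)."""
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p)

rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 1.0, 1000), rng.normal(0.5, 1.0, 1000)
print(ot_1d(x, y, p=1))   # roughly the 0.5 mean shift between the samples
```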

Relaying increases the coverage area and reliability of wireless communication systems by mitigating the fading effect on the received signal. Most technical contributions in the context of these systems assume ideal hardware (ID), neglecting the non-idealities of the transceivers, which include phase noise, in-phase/quadrature mismatch, and high-power-amplifier nonlinearities. These non-idealities distort the received signal by causing variations in the phase and attenuating the amplitude. The resulting deterioration of the performance of wireless communication systems is further magnified as the transmission frequency increases. In this paper, we investigate the aggregate impact of hardware impairments (HI) on a general multi-hop relay system using amplify-and-forward (AF) and decode-and-forward (DF) relaying techniques over a general H-fading model. The H-fading model includes free-space optics, radio frequency, millimeter-wave, Terahertz, and underwater fading models. Closed-form expressions for the outage probability, bit error probability, and ergodic capacity are derived in terms of H-functions. Following an asymptotic analysis at high signal-to-noise ratio (SNR), practical optimization problems are formulated with the objective of finding the optimal level of HI subject to a limit on the total HI level. An analytical solution is derived for the Nakagami-m fading channel, a special case of H-fading, for both AF and DF relaying techniques. The overall instantaneous signal-to-noise-plus-distortion ratio is shown to reach a ceiling at high SNR that is inversely proportional to the HI levels of the transceivers of all hops, in contrast to the ID case.
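
To make the ceiling behavior concrete, the snippet below evaluates a widely used aggregate-impairment model (per-hop SNDR rho / (rho * kappa^2 + 1) combined over a dual-hop AF link); the exact multi-hop H-fading expressions of the paper may differ, so the numbers are purely illustrative.

```python
import numpy as np

# Hedged numeric illustration, not the paper's exact expressions: each hop's
# SNDR is gamma_i = rho / (rho * kappa_i**2 + 1), and a dual-hop AF link obeys
# gamma_e2e = gamma1 * gamma2 / (gamma1 + gamma2 + 1).
kappa1, kappa2 = 0.1, 0.15                     # per-hop impairment levels (EVM-like)
for snr_db in (10, 30, 50, 70):
    rho = 10 ** (snr_db / 10)
    g1 = rho / (rho * kappa1 ** 2 + 1)
    g2 = rho / (rho * kappa2 ** 2 + 1)
    g = g1 * g2 / (g1 + g2 + 1)
    print(snr_db, 10 * np.log10(g))
# The end-to-end SNDR saturates near 1 / (kappa1**2 + kappa2**2) ~ 30.8,
# i.e. about 14.9 dB, which illustrates the ceiling described above.
```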

Although robust statistical estimators are less affected by outlying observations, their computation is usually more challenging. This is particularly the case in high-dimensional sparse settings. The availability of new optimization procedures, mainly developed in the computer science domain, offers new possibilities for the field of robust statistics. This paper investigates how such procedures can be used for robust sparse association estimators. The problem can be split into a robust estimation step followed by an optimization for the remaining decoupled, (bi-)convex problem. A combination of the augmented Lagrangian algorithm and adaptive gradient descent is implemented to also include suitable constraints for inducing sparsity. We provide results concerning the precision of the algorithm and show the advantages over existing algorithms in this context. High-dimensional empirical examples underline the usefulness of this procedure. Extensions to other robust sparse estimators are possible.
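
As a hedged illustration of how sparsity-inducing penalties are handled with gradient-type updates (a stand-in example, not the paper's robust association estimator or its augmented-Lagrangian formulation), the sketch below runs ISTA with soft-thresholding on a toy lasso problem.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: the sparsity-inducing step."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam, n_iter=500):
    """Illustrative sparse regression via ISTA: a gradient step on the
    squared loss followed by soft-thresholding of the coefficients."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]               # sparse ground truth
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(np.round(ista_lasso(X, y, lam=5.0), 2))  # mostly zeros, first three recovered
```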

Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it uses. First, distributed GNN training is classified into several categories according to their workflows. In addition, their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work, are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
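
A minimal example of the uniform quantization described above, mapping floating-point values to 4-bit integers with a scale and zero-point and then dequantizing to expose the rounding error; the bit-width and the asymmetric min/max calibration are illustrative choices.

```python
import numpy as np

def quantize_uniform(x, num_bits=4):
    """Asymmetric uniform quantization: map the float range [min, max]
    onto the integer grid via a scale and zero-point, then dequantize
    to obtain the low-precision approximation."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.int32)
    x_hat = (q - zero_point) * scale            # dequantized approximation
    return q, x_hat

w = np.random.default_rng(0).normal(0, 1, 8).astype(np.float32)
q, w_hat = quantize_uniform(w, num_bits=4)
print(q)                                        # 4-bit integer codes
print(np.abs(w - w_hat).max())                  # worst-case quantization error
```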

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models achieve high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first, with 92.76% accuracy, on a public MNIST black-box attack challenge.
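
The sketch below assembles an AdvGAN-style generator objective in PyTorch: a GAN term to keep perturbed inputs realistic, a targeted adversarial term against the classifier, and a hinge bound on the perturbation norm. The network sizes, loss weights, and perturbation bound are placeholder values for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the generator G, discriminator D, and target classifier f
G = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))
f = nn.Sequential(nn.Linear(784, 10))

x = torch.rand(16, 784)                              # batch of flattened images
target = torch.full((16,), 3, dtype=torch.long)      # adversary-selected class
alpha, beta, c = 1.0, 10.0, 3.0                      # illustrative loss weights / bound

delta = G(x)                                         # generated perturbation
x_adv = torch.clamp(x + delta, 0.0, 1.0)             # keep pixels in a valid range

# GAN loss: make the discriminator believe perturbed inputs are real
loss_gan = F.binary_cross_entropy_with_logits(D(x_adv), torch.ones(16, 1))
# Adversarial loss: push the target classifier toward the chosen label
loss_adv = F.cross_entropy(f(x_adv), target)
# Hinge loss: penalize perturbations whose L2 norm exceeds the bound c
loss_hinge = torch.clamp(delta.norm(dim=1) - c, min=0.0).mean()

loss_G = loss_adv + alpha * loss_gan + beta * loss_hinge
loss_G.backward()                                    # one generator update step
```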
