
Fast multipliers with large bit widths can occupy significant silicon area, which can be reduced by employing multi-cycle multipliers. This paper introduces architectures and parameterized Verilog circuit generators for 2-cycle integer multipliers. When implementing an algorithm in hardware, it is common that fewer than one multiplication needs to be performed per clock cycle. It is also possible that the required number of multiplications per cycle is fractional, e.g., 3.5. In such a case, we could certainly use 4 multipliers, each with a throughput of 1 result per cycle. However, we can instead use 3 such multipliers plus one multiplier with a throughput of 1/2. Resource sharing allows a multiplier with lower throughput to be smaller, hence the area savings. These multipliers can be customized with respect to latency and clock frequency. All proposed designs were automatically synthesized and tested for various bit widths. Two main architectures are presented in this work, each with several variants. Our 2-cycle multipliers offer area savings of up to 21%, 42%, 32%, 41%, and 48% for bit widths of 8, 16, 32, 64, and 128, respectively, with respect to synthesizing the "*" operator with a throughput of 1. Furthermore, some of the proposed designs also offer power savings under certain conditions.
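
A minimal Python sketch can make the throughput arithmetic concrete. It models one generic way a 2-cycle multiplier shares resources: a single W x W/2 partial-product unit is reused across two cycles, so the unit accepts a new operand pair only every other cycle, i.e., a throughput of 1/2 result per cycle. The width W and the operand-splitting scheme here are illustrative assumptions, not the paper's specific architectures.

```python
# Sketch of one generic 2-cycle multiplier scheme (an assumption for
# illustration, not the paper's architecture): split one operand into
# halves and process one half per cycle, reusing a single W x W/2
# partial-product unit plus an accumulator.

W = 16  # operand bit width (illustrative)

def two_cycle_multiply(a: int, b: int) -> int:
    """Unsigned W x W multiply computed over two 'cycles'."""
    half = W // 2
    b_lo = b & ((1 << half) - 1)     # slice consumed in cycle 1
    b_hi = b >> half                 # slice consumed in cycle 2

    acc = a * b_lo                   # cycle 1: low partial product
    acc += (a * b_hi) << half        # cycle 2: high partial product, shifted
    return acc

assert two_cycle_multiply(1234, 5678) == 1234 * 5678
```

In the fractional-throughput example above, a demand of 3.5 multiplications per cycle can then be served by 3 full-throughput multipliers plus one such 2-cycle unit.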

Related Content

Numerical simulations with rigid particles, drops, or vesicles constitute some examples that involve 3D objects with spherical topology. When the numerical method is based on boundary integral equations, the error in using a regular quadrature rule to approximate the layer potentials that appear in the formulation will increase rapidly as the evaluation point approaches the surface and the integrand becomes sharply peaked. To determine when the accuracy becomes insufficient and a more costly special quadrature method should be used, error estimates are needed. In this paper we present quadrature error estimates for layer potentials evaluated near surfaces of genus 0, parametrized using a polar and an azimuthal angle and discretized by a combination of the Gauss-Legendre and trapezoidal quadrature rules. The error estimates involve no unknown coefficients, only complex-valued roots of a specified distance function. Evaluating the error estimates in general requires a one-dimensional local root-finding procedure, but for specific geometries we obtain analytical results. Based on these explicit solutions, we derive simplified error estimates for layer potentials evaluated near spheres; these simple formulas depend only on the distance from the surface, the radius of the sphere, and the number of discretization points. The usefulness of these error estimates is illustrated with numerical examples.
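
A small numpy sketch can illustrate the phenomenon these estimates quantify (not the estimates themselves): a Laplace single-layer potential with unit density on the unit sphere, discretized with Gauss-Legendre nodes in cos(theta) and the trapezoidal rule in the azimuthal angle, loses accuracy rapidly as the target point approaches the surface. The exact exterior value 1/|x| is available in closed form, so the error can be measured directly; the grid sizes are illustrative.

```python
import numpy as np

def layer_potential_error(dist, n_theta=16, n_phi=32):
    """Quadrature error for the Laplace single-layer potential with unit
    density on the unit sphere, evaluated at x = (0, 0, 1 + dist)."""
    u, wu = np.polynomial.legendre.leggauss(n_theta)  # u = cos(theta)
    phi = 2 * np.pi * np.arange(n_phi) / n_phi        # trapezoidal nodes
    w_phi = 2 * np.pi / n_phi

    U, PHI = np.meshgrid(u, phi, indexing="ij")
    s = np.sqrt(1.0 - U**2)
    y = np.stack([s * np.cos(PHI), s * np.sin(PHI), U])  # surface points

    x = np.array([0.0, 0.0, 1.0 + dist])
    r = np.linalg.norm(y - x[:, None, None], axis=0)
    approx = np.sum(wu[:, None] * w_phi / (4.0 * np.pi * r))
    exact = 1.0 / (1.0 + dist)       # closed form outside the sphere
    return abs(approx - exact)

for d in (0.5, 0.2, 0.1, 0.05):      # error grows as the point nears the surface
    print(f"dist = {d:4.2f}   error = {layer_potential_error(d):.2e}")
```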

Edge computing must be capable of executing computationally intensive algorithms, such as Deep Neural Networks (DNNs), while operating within a constrained computational resource budget. Such computations involve Matrix Vector Multiplications (MVMs), which are the dominant contributor to the memory and energy budget of DNNs. To alleviate the computational intensity and storage demand of MVMs, we propose circuit-algorithm co-design techniques with low-complexity approximate Multiply-Accumulate (MAC) units derived from the principles of Alphabet Set Multipliers (ASMs). Selecting a few proper alphabets from ASMs leads to a multiplier-less DNN implementation and enables encoding of low-precision weights and input activations into fewer bits. To maintain accuracy under alphabet-set approximations, we developed a novel ASM-alphabet-aware training method. The proposed low-complexity multiplication-aware algorithm was implemented in-memory and near-memory with efficient shift operations to further reduce the data-movement cost between memory and processing unit. We benchmark our design on the CIFAR10 and ImageNet datasets for ResNet and MobileNet models and attain <1-2% accuracy degradation against full precision, with energy benefits of >50% compared to a standard von Neumann counterpart.
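
As a rough illustration of the ASM idea (the specific alphabet, bit width, and single-term encoding below are assumptions for the sketch, not the paper's design): a few small multiples of the input are precomputed once with shift-and-add, and each general multiplication is then replaced by selecting an alphabet product and shifting it.

```python
import numpy as np

# Hypothetical alphabet: small odd multiples that hardware can precompute
# once with shift-and-add (e.g., 3x = 2x + x).
ALPHABET = (1, 3, 5, 7)
BITS = 8                              # weight precision (assumption)

def representable_values(bits=BITS):
    """All magnitudes encodable as one alphabet member times a power of 2."""
    vals = {0}
    for a in ALPHABET:
        for shift in range(bits):
            v = a << shift
            if v < (1 << bits):
                vals.add(v)
    return np.array(sorted(vals))

_TABLE = representable_values()

def asm_quantize(w: np.ndarray) -> np.ndarray:
    """Round each weight magnitude to the nearest representable value."""
    idx = np.searchsorted(_TABLE, np.abs(w))
    idx = np.clip(idx, 1, len(_TABLE) - 1)
    lo, hi = _TABLE[idx - 1], _TABLE[idx]
    near = np.where(np.abs(w) - lo <= hi - np.abs(w), lo, hi)
    return np.sign(w) * near

# Multiplying by a quantized weight now reduces to one alphabet selection
# plus one shift, with no general-purpose multiplier in the datapath.
w = np.array([37, -90, 12, 255])
print(asm_quantize(w))                # e.g., 37 -> 40, i.e., 5 << 3
```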

This paper develops fast and efficient algorithms for computing the Tucker decomposition with a given multilinear rank. By combining random projection and the power scheme, we propose two efficient randomized versions of the truncated high-order singular value decomposition (T-HOSVD) and the sequentially truncated HOSVD (ST-HOSVD), which are two common algorithms for approximating the Tucker decomposition. To reduce the complexities of these two algorithms further, fast and efficient variants are designed by combining them with approximate matrix multiplication. Theoretical results are also derived, based on bounds for the singular values of standard Gaussian matrices and on the theory of approximate matrix multiplication. Finally, the efficiency of these algorithms is illustrated on test tensors from synthetic and real datasets.
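
A minimal numpy sketch of a randomized ST-HOSVD in the spirit described above, combining a random projection with a power iteration per mode; the oversampling and power parameters are illustrative defaults, and the approximate-matrix-multiplication acceleration is omitted.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding of a tensor."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def randomized_range(A, rank, oversample=5, power=1, rng=None):
    """Randomized range finder with the power scheme: Q spans range(A)."""
    rng = rng or np.random.default_rng(0)
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))
    for _ in range(power):            # power iterations sharpen the
        Y = A @ (A.T @ Y)             # singular-value decay
    Q, _ = np.linalg.qr(Y)
    return Q[:, :rank]

def randomized_st_hosvd(t, ranks, rng=None):
    """Sequentially truncated HOSVD with randomized projections."""
    core = t.copy()
    factors = []
    for mode, r in enumerate(ranks):
        A = unfold(core, mode)
        Q = randomized_range(A, r, rng=rng)
        factors.append(Q)
        # Project this mode, then continue with the (smaller) core.
        rest = tuple(np.delete(core.shape, mode))
        core = np.moveaxis((Q.T @ A).reshape((r,) + rest), 0, mode)
    return core, factors

t = np.random.default_rng(1).standard_normal((20, 25, 30))
core, facs = randomized_st_hosvd(t, ranks=(5, 5, 5))
print(core.shape, [f.shape for f in facs])
```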

Large Language Models (LLMs) have so far impressed the world with unprecedented capabilities that emerge in models at large scales. On the vision side, transformer models (i.e., ViT) are following the same trend, achieving the best performance on challenging benchmarks. With the abundance of such unimodal models, a natural question arises: do we also need to follow this trend to tackle multimodal tasks? In this work, we propose instead to direct effort toward efficient adaptations of existing models, and we propose to augment Language Models with perception. Existing approaches for adapting pretrained models to vision-language tasks still rely on several key components that hinder their efficiency. In particular, they still train a large number of parameters, rely on large multimodal pretraining, use encoders (e.g., CLIP) trained on huge image-text datasets, and add significant inference overhead. In addition, most of these approaches have focused on Zero-Shot and In-Context Learning, with little to no effort on direct finetuning. We investigate the minimal computational effort needed to adapt unimodal models to multimodal tasks and propose a new challenging setup, alongside different approaches, that efficiently adapts unimodal pretrained models. We show that by freezing more than 99% of total parameters, training only one linear projection layer, and prepending only one trainable token, our approach (dubbed eP-ALM) significantly outperforms other baselines on VQA and Captioning across Image, Video, and Audio modalities, following the proposed setup. The code will be available here: //github.com/mshukor/eP-ALM.
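
A minimal PyTorch sketch of this adaptation recipe, using toy stand-in modules (the sizes and placeholder encoders are assumptions; eP-ALM builds on real pretrained LMs and perceptual encoders): everything pretrained is frozen, and only one linear projection and one soft token receive gradients.

```python
import torch
import torch.nn as nn

# Toy stand-ins for pretrained unimodal models (assumptions for the sketch).
d_vis, d_lm, vocab = 384, 512, 1000
vision_encoder = nn.Sequential(nn.Linear(3 * 16 * 16, d_vis), nn.GELU())
lm_embed = nn.Embedding(vocab, d_lm)
lm_body = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_lm, nhead=8, batch_first=True), num_layers=2)

# Freeze everything pretrained: the vast majority of parameters stay fixed.
for module in (vision_encoder, lm_embed, lm_body):
    for p in module.parameters():
        p.requires_grad = False

# The only trainable pieces: one linear projection and one soft token.
# Only these would be handed to the optimizer.
proj = nn.Linear(d_vis, d_lm)
soft_token = nn.Parameter(torch.zeros(1, 1, d_lm))

def forward(patches, token_ids):
    """Prepend [soft token, projected visual feature] to text embeddings."""
    v = proj(vision_encoder(patches)).unsqueeze(1)    # (B, 1, d_lm)
    t = lm_embed(token_ids)                           # (B, T, d_lm)
    s = soft_token.expand(t.size(0), -1, -1)          # (B, 1, d_lm)
    return lm_body(torch.cat([s, v, t], dim=1))

out = forward(torch.randn(2, 3 * 16 * 16), torch.randint(0, vocab, (2, 7)))
print(out.shape)                                      # torch.Size([2, 9, 512])
```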

A longitudinal network consists of a sequence of temporal edges among multiple nodes, where the temporal edges are observed in real time. Such networks have become ubiquitous with the rise of online social platforms and e-commerce, but remain largely under-investigated in the literature. In this paper, we propose an efficient estimation framework for longitudinal networks, leveraging the strengths of adaptive network merging, tensor decomposition, and point processes. It merges neighboring sparse networks so as to enlarge the number of observed edges and reduce the estimation variance, whereas the estimation bias introduced by network merging is controlled by exploiting local temporal structures to choose the network neighborhood adaptively. A projected gradient descent algorithm is proposed to facilitate estimation, where an upper bound on the estimation error in each iteration is established. A thorough analysis is conducted to quantify the asymptotic behavior of the proposed method, which shows that it can significantly reduce the estimation error and which also provides guidelines for network merging under various scenarios. We further demonstrate the advantage of the proposed method through extensive numerical experiments on synthetic datasets and a militarized interstate dispute dataset.
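
A toy numpy sketch of the merge-then-estimate idea (the least-squares loss, fixed merging window, and rank projection are simplifying assumptions, not the paper's estimator): neighboring sparse snapshots are averaged to reduce variance, and projected gradient descent keeps the estimate low-rank.

```python
import numpy as np

def low_rank_project(M, r):
    """Project a matrix onto the set of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def estimate_intensity(snapshots, t, window, rank, steps=200, lr=0.5):
    """Estimate the edge-intensity matrix at time t by merging neighboring
    snapshots and running projected gradient descent."""
    lo, hi = max(0, t - window), min(len(snapshots), t + window + 1)
    target = np.mean(snapshots[lo:hi], axis=0)       # merged network
    Theta = np.zeros_like(target)
    for _ in range(steps):
        grad = Theta - target                        # least-squares gradient
        Theta = low_rank_project(Theta - lr * grad, rank)
    return Theta

rng = np.random.default_rng(0)
n, T, r = 30, 20, 2
U = rng.uniform(size=(n, r))
P = U @ U.T / r                                      # true low-rank intensity
snaps = np.array([rng.binomial(1, P) for _ in range(T)])
est = estimate_intensity(snaps, t=10, window=3, rank=r)
print(np.linalg.norm(est - P) / np.linalg.norm(P))   # relative error
```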

This paper proposes a criterion for detecting change structures in tensor data. To accommodate a tensor structure with a structural mode whose slices should not be treated equally when summarizing, in a single distance, the difference between any two adjacent tensors, we define a mode-based signal-screening Frobenius distance for the moving sums of slices of tensor data that can handle both dense and sparse model structures of the tensors. As a general distance, it can also deal with the case without a structural mode. Based on this distance, we then construct signal statistics using ratios with adaptive-to-change ridge functions. The number of changes and their locations can then be consistently estimated in certain senses, and confidence intervals for the locations of the change points are constructed. The results hold when the size of the tensor and the number of change points diverge at certain rates, respectively. Numerical studies are conducted to examine the finite-sample performance of the proposed method. We also analyze two real data examples for illustration.
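
A stripped-down numpy sketch of a moving-sum Frobenius statistic along the time mode (without the signal screening and adaptive-to-change ridge functions of the actual criterion): change-point candidates show up as peaks of the statistic.

```python
import numpy as np

def mosum_stats(tensor_seq, h):
    """Moving-sum Frobenius statistic along the first (time) mode.

    D[t] compares the average slice over (t-h, t] with that over (t, t+h];
    large values flag change-point candidates.
    """
    T = tensor_seq.shape[0]
    D = np.zeros(T)
    for t in range(h, T - h):
        left = tensor_seq[t - h:t].mean(axis=0)
        right = tensor_seq[t:t + h].mean(axis=0)
        D[t] = np.linalg.norm(left - right)          # Frobenius distance
    return D

rng = np.random.default_rng(0)
T, shape = 120, (8, 8)
mean_shift = np.zeros((T,) + shape)
mean_shift[60:] += 0.8                               # one true change at t = 60
data = rng.standard_normal((T,) + shape) + mean_shift
D = mosum_stats(data, h=15)
print("peak at t =", int(np.argmax(D)))              # expect a peak near 60
```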

Head-related transfer functions (HRTFs) are essential for virtual acoustic realities, as they contain all cues for localizing sound sources in three-dimensional space. Acoustic measurements are one way to obtain high-quality HRTFs. To reduce measurement time, cost, and complexity of measurement systems, a promising approach is to capture only a few HRTFs on a sparse sampling grid and then upsample them to a dense HRTF set by interpolation. However, HRTF interpolation is challenging because small changes in source position can result in significant changes in the HRTF phase and magnitude response. Previous studies greatly improved the interpolation by time-aligning the HRTFs in preprocessing, but magnitude interpolation errors, especially in contralateral regions, remain a problem. Building upon the time-alignment approaches, we propose an additional post-interpolation magnitude correction derived from a frequency-smoothed HRTF representation. Employing all 96 individual simulated HRTF sets of the HUTUBS database, we show that the magnitude correction significantly reduces interpolation errors compared to state-of-the-art interpolation methods applying only time alignment. Our analysis shows that when upsampling very sparse HRTF sets, the subject-averaged magnitude error in the critical higher frequency range is up to 1.5 dB lower when averaged over all directions and even up to 4 dB lower in the contralateral region. As a result, the interaural level differences in the upsampled HRTFs are considerably improved. The proposed algorithm thus has the potential to further reduce the minimum number of HRTFs required for perceptually transparent interpolation.
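
A deliberately simplified numpy sketch of the post-interpolation magnitude-correction idea (plain moving-average smoothing and two-point linear interpolation stand in for the paper's fractional-octave smoothing and time-aligned sparse-grid interpolation; the toy "HRTFs" are synthetic spectra with a moving notch): the interpolated HRTF is rescaled so that its smoothed magnitude matches the interpolation of the neighbors' smoothed magnitudes.

```python
import numpy as np

def smooth_mag(H, width=5):
    """Frequency-smoothed magnitude: a moving average over |H| in bins,
    standing in for fractional-octave smoothing."""
    kernel = np.ones(width) / width
    return np.convolve(np.abs(H), kernel, mode="same")

def corrected_interp(H1, H2, w=0.5, width=5):
    """Interpolate two (assumed time-aligned) HRTFs, then apply a magnitude
    correction derived from their frequency-smoothed representations."""
    H_int = (1 - w) * H1 + w * H2                    # complex interpolation
    target = (1 - w) * smooth_mag(H1, width) + w * smooth_mag(H2, width)
    current = smooth_mag(H_int, width)
    gain = target / np.maximum(current, 1e-12)       # per-bin correction
    return H_int * gain

# Toy spectra with a spectral notch at slightly different frequencies,
# mimicking how small position changes shift HRTF magnitude features.
f = np.arange(256)
H1 = 1.0 - 0.9 * np.exp(-((f - 100) / 6.0) ** 2) + 0j
H2 = 1.0 - 0.9 * np.exp(-((f - 120) / 6.0) ** 2) + 0j
H = corrected_interp(H1, H2)
print(np.round(np.abs(H[95:125:5]), 3))
```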

Like most modern blockchain networks, Ethereum has relied on economic incentives to promote honest participation in the chain's consensus. The distributed character of the platform, together with the "randomness" or "luck" factor that both proof of work (PoW) and proof of stake (PoS) introduce when electing the next block proposer, pushed the industry to model and improve the platform's reward system. After several improvements aimed at predicting PoW block proposal rewards and maximizing the rewards extractable from them, Ethereum's eventual transition to PoS, applied in the Paris hard fork and more generally known as "The Merge", has meant a significant modification of the platform's reward system. In this paper, we aim to break down, both theoretically and empirically, the new reward system in this post-merge era. We present a highly detailed description of the different rewards and their shares among validators' rewards. Finally, we offer a study that uses the presented reward model to analyze the performance of the network during this transition.
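
For concreteness, here is a small Python sketch of the post-merge base-reward computation and the Altair-era weight split among duties. The constants are reproduced from the Ethereum consensus specification as best understood, and the participation-free "ideal" attestation share is a simplification of the spec's participation-weighted formula; treat both as illustrative.

```python
import math

# Altair-era duty weights (source/target/head attestations, sync committee,
# proposer), which sum to the weight denominator of 64.
WEIGHTS = {"source": 14, "target": 26, "head": 14, "sync": 2, "proposer": 8}
WEIGHT_DENOMINATOR = 64
BASE_REWARD_FACTOR = 64
EFFECTIVE_BALANCE_INCREMENT = 10**9          # Gwei (1 ETH)

def base_reward(effective_balance_gwei, total_active_balance_gwei):
    """Per-epoch base reward for one validator (Gwei)."""
    increments = effective_balance_gwei // EFFECTIVE_BALANCE_INCREMENT
    per_increment = (EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR
                     // math.isqrt(total_active_balance_gwei))
    return increments * per_increment

def ideal_attestation_reward(eb, total):
    """Attestation share of the base reward if every duty is timely,
    ignoring the spec's participation-rate scaling (simplification)."""
    share = WEIGHTS["source"] + WEIGHTS["target"] + WEIGHTS["head"]
    return base_reward(eb, total) * share // WEIGHT_DENOMINATOR

total = 20_000_000 * 10**9                   # assume 20M ETH staked
print(base_reward(32 * 10**9, total))        # per-epoch base reward, Gwei
print(ideal_attestation_reward(32 * 10**9, total))
```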

Reliable multi-agent trajectory prediction is crucial for the safe planning and control of autonomous systems. Compared with single-agent cases, the major challenge in simultaneously processing multiple agents lies in modeling complex social interactions caused by various driving intentions and road conditions. Previous methods typically leverage graph-based message propagation or attention mechanisms to encapsulate such interactions in the form of marginal probabilistic distributions; however, representing interactions by marginals alone is inherently sub-optimal. In this paper, we propose IPCC-TP, a novel relevance-aware module based on the Incremental Pearson Correlation Coefficient, to improve multi-agent interaction modeling. IPCC-TP learns pairwise joint Gaussian distributions through the tightly coupled estimation of the means and covariances according to interactive incremental movements. Our module can be conveniently embedded into existing multi-agent prediction methods to extend the original motion distribution decoders. Extensive experiments on the nuScenes and Argoverse 2 datasets demonstrate that IPCC-TP improves the performance of baselines by a large margin.
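
A minimal numpy sketch of the underlying construction, in 1D with hand-picked numbers (IPCC-TP predicts the correlation from interactive incremental movements rather than fixing it): a scalar Pearson coefficient turns two marginal Gaussians into a pairwise joint Gaussian whose likelihood rewards correlated motion.

```python
import numpy as np

def pairwise_joint_gaussian(mu_i, sigma_i, mu_j, sigma_j, rho):
    """Assemble a pairwise joint Gaussian from two marginals and a Pearson
    correlation coefficient rho in (-1, 1)."""
    mean = np.array([mu_i, mu_j])
    cov = np.array([[sigma_i**2, rho * sigma_i * sigma_j],
                    [rho * sigma_i * sigma_j, sigma_j**2]])
    return mean, cov

def joint_nll(x, mean, cov):
    """Negative log-likelihood of the pair's observed motion increments."""
    d = x - mean
    return 0.5 * (d @ np.linalg.solve(cov, d)
                  + np.log(np.linalg.det(cov)) + 2 * np.log(2 * np.pi))

# Two agents whose motion increments are positively correlated (e.g., a
# car-following situation): rho shifts probability mass accordingly.
mean, cov = pairwise_joint_gaussian(1.0, 0.3, 0.9, 0.4, rho=0.7)
print(joint_nll(np.array([1.1, 1.0]), mean, cov))   # correlated move: likely
print(joint_nll(np.array([1.1, 0.3]), mean, cov))   # decorrelated: unlikely
```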

Compared with the cheap addition operation, multiplication is of much higher computational complexity. The widely used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between float values. In this paper, we present adder networks (AdderNets) to trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs. In AdderNets, we take the $\ell_1$-norm distance between the filters and the input feature as the output response. The influence of this new similarity measure on the optimization of the neural network has been thoroughly analyzed. To achieve a better performance, we develop a special back-propagation approach for AdderNets by investigating the full-precision gradient. We then propose an adaptive learning rate strategy to enhance the training procedure of AdderNets according to the magnitude of each neuron's gradient. As a result, the proposed AdderNets can achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy using ResNet-50 on the ImageNet dataset without any multiplication in the convolution layers.
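
A direct numpy sketch of the AdderNet forward response (the special back-propagation and adaptive learning rate strategy are not shown): each output is the negative $\ell_1$ distance between an input patch and a filter, so the similarity computation itself uses only subtractions, absolute values, and additions.

```python
import numpy as np

def adder_conv2d(x, w):
    """AdderNet 'convolution': the response is the negative L1 distance
    between each input patch and each filter, replacing the multiply-heavy
    cross-correlation of ordinary convolutions."""
    C, H, W = x.shape
    F, _, K, _ = w.shape
    out = np.zeros((F, H - K + 1, W - K + 1))
    for f in range(F):
        for i in range(H - K + 1):
            for j in range(W - K + 1):
                patch = x[:, i:i + K, j:j + K]
                out[f, i, j] = -np.abs(patch - w[f]).sum()
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))      # C x H x W input feature map
w = rng.standard_normal((4, 3, 3, 3))   # F filters of size C x K x K
y = adder_conv2d(x, w)
print(y.shape)                          # (4, 6, 6)
```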
