
Approximate computing is an emerging paradigm for improving the power and performance efficiency of error-resilient applications. Recent work has significantly extended the design space of accuracy-power-configurable approximate adders, with optimal designs found by exploring this space. In this paper, a new energy-efficient heterogeneous block-based approximate adder (HBBA) is proposed: a generic, configurable model that can be instantiated as a particular adder by fixing a set of configuration parameters. An HBBA is, in general, composed of heterogeneous sub-adders, each of which may have a different configuration; the set of sub-adder configurations defines the configuration of the HBBA as a whole. The block-based adders are approximated through inexact logic configurations and truncated carry chains. HBBA enlarges the design space, providing additional design points that fall on the Pareto front and offer a better power-accuracy trade-off than other configurations. Furthermore, to avoid Monte Carlo simulations, we propose an analytical modelling technique to evaluate the probability of error and the probability mass function (PMF) of the error value. Moreover, an estimation method is used to estimate the delay, area and power of heterogeneous block-based approximate adders. Based on the analytical model and the estimation method, the optimal configuration under a given error constraint can be selected from the whole design space of the proposed adder model by exhaustive search. Simulation results show that our HBBA provides improved accuracy in terms of error metrics compared to several state-of-the-art approximate adders. A 32-bit HBBA achieves about a 15% reduction in area and up to a 17% reduction in energy compared to state-of-the-art approximate adders.
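A minimal functional sketch of the truncated-carry-chain idea behind block-based approximate adders. The block width, the carry-lookahead window, and all names here are illustrative assumptions, not the paper's exact HBBA configuration (in an HBBA each sub-adder could use different parameters):

```python
def truncated_block_adder(a, b, width=16, block=4, lookahead=2):
    """Approximate adder sketch: each `block`-bit sub-adder takes its
    carry-in from only the `lookahead` bits just below it (a truncated
    carry chain), so long carry propagation is cut and may cause error.
    Parameters are illustrative, not the paper's HBBA configuration."""
    mask = (1 << width) - 1
    a, b = a & mask, b & mask
    result = 0
    for lo in range(0, width, block):
        carry = 0
        if lo > 0:
            # Speculate the carry into this block from a short window below it.
            w_lo = lo - lookahead
            w_mask = (1 << lookahead) - 1
            wa = (a >> w_lo) & w_mask
            wb = (b >> w_lo) & w_mask
            carry = (wa + wb) >> lookahead  # 1 iff the window generates a carry
        ba = (a >> lo) & ((1 << block) - 1)
        bb = (b >> lo) & ((1 << block) - 1)
        result |= ((ba + bb + carry) & ((1 << block) - 1)) << lo
    return result
```

Sums whose carries stay within a block, or are caught by the lookahead window, are exact; only carries generated further below the window are lost, which is why the error probability drops quickly as the window widens.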

Related Content

In recent years, many design automation methods have been developed to routinely create approximate implementations of circuits and programs that show excellent trade-offs between the quality of output and required resources. This paper deals with evolutionary approximation as one of the popular approximation methods. The paper provides the first survey of evolutionary algorithm (EA)-based approaches applied in the context of approximate computing. The survey reveals that EAs are primarily applied as multi-objective optimizers. We propose to divide these approaches into two main classes: (i) parameter optimization, in which the EA optimizes a vector of system parameters, and (ii) synthesis and optimization, in which the EA is responsible for determining the architecture and parameters of the resulting system. Evolutionary approximation has been applied at all levels of design abstraction and in many different applications. Neural architecture search enabling the automated hardware-aware design of approximate deep neural networks was identified as a newly emerging topic in this area.

Automated Guided Vehicles (AGVs) have been widely used for material handling on flexible shop floors. Each product requires various raw materials to complete its assembly in the production process, and AGVs are used to realize the automatic handling of raw materials at different locations. An efficient AGV task allocation strategy can reduce transportation costs and improve distribution efficiency. However, traditional centralized approaches make high demands on the control center's computing power and real-time capability. In this paper, we present decentralized solutions to achieve flexible and self-organized AGV task allocation. In particular, we propose two improved multi-agent reinforcement learning algorithms, MADDPG-IPF (Information Potential Field) and BiCNet-IPF, to realize coordination among AGVs adapting to different scenarios. To address the reward-sparsity issue, we propose a reward shaping strategy based on an information potential field, which provides stepwise rewards and implicitly guides the AGVs to different material targets. We conduct experiments under different settings (3 AGVs and 6 AGVs), and the results indicate that, compared with baseline methods, our work achieves up to a 47\% improvement in task response and a 22\% reduction in training iterations.
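The paper's information-potential-field construction is more elaborate, but the stepwise-reward idea it builds on can be sketched as classic potential-based shaping, r' = r + γΦ(s') − Φ(s). Here Φ and all names are simplifying assumptions (negative distance to the nearest material target stands in for the IPF):

```python
import math

def potential(pos, targets):
    """Phi(s): negative distance to the nearest material target.
    A simple stand-in for the paper's information potential field."""
    return -min(math.dist(pos, t) for t in targets)

def shaped_reward(r, pos, next_pos, targets, gamma=0.99):
    """Potential-based shaping: r' = r + gamma*Phi(s') - Phi(s).
    A step toward a target earns a positive bonus, a step away a penalty,
    which densifies an otherwise sparse task-completion reward."""
    return r + gamma * potential(next_pos, targets) - potential(pos, targets)
```

Because the shaping term telescopes along any trajectory, potential-based bonuses of this form are known not to change the optimal policy, only to speed up learning under sparse rewards.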

In this paper we give a completely new approach to the problem of covariate selection in linear regression. A covariate or a set of covariates is included only if it is better in the sense of least squares than the same number of Gaussian covariates consisting of i.i.d. $N(0,1)$ random variables. The Gaussian P-value is defined as the probability that the Gaussian covariates are better. It is given in terms of the Beta distribution, it is exact and it holds for all data. The covariate selection procedures based on this require only a cut-off value $\alpha$ for the Gaussian P-value: the default value in this paper is $\alpha=0.01$. The resulting procedures are very simple, very fast, do not overfit and require only least squares. In particular there is no regularization parameter, no data splitting, no use of simulations, no shrinkage and no post selection inference is required. The paper includes the results of simulations, applications to real data sets and theorems on the asymptotic behaviour under the standard linear model. Here the stepwise procedure performs overwhelmingly better than any other procedure we are aware of. An R-package {\it gausscov} is available.
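The paper derives the Gaussian P-value exactly via the Beta distribution; as a hedged illustration of what that probability means, the sketch below simply estimates it by simulation: how often does one i.i.d. $N(0,1)$ covariate beat the candidate covariate in least-squares RSS? All names and the Monte Carlo approach are assumptions for illustration, not the package's implementation:

```python
import numpy as np

def gaussian_pvalue_mc(y, X, x_cand, n_sim=300, rng=None):
    """Monte-Carlo stand-in for the Gaussian P-value: the probability
    that a single i.i.d. N(0,1) covariate, added to the design X, gives
    a smaller residual sum of squares than the candidate x_cand.
    (The paper gives this probability exactly via the Beta distribution;
    here it is just simulated for illustration.)"""
    rng = np.random.default_rng(rng)

    def rss(cols):
        beta = np.linalg.lstsq(cols, y, rcond=None)[0]
        r = y - cols @ beta
        return float(r @ r)

    rss_cand = rss(np.column_stack([X, x_cand]))
    n = len(y)
    wins = sum(rss(np.column_stack([X, rng.standard_normal(n)])) <= rss_cand
               for _ in range(n_sim))
    return wins / n_sim
```

A covariate is kept when this probability falls below the cut-off $\alpha$ (default 0.01 in the paper); a genuinely informative covariate is almost never beaten by pure noise, so its P-value is essentially zero.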

Understanding how traffic congestion propagates has become crucial in GPS navigation systems for providing users with a more accurate estimated time of arrival (ETA). However, providing an exact ETA during congestion is a challenge owing to the complex propagation process between roads and the high uncertainty regarding the future behavior of the process. Recent studies have focused on finding frequent congestion propagation patterns and determining the propagation probabilities. By contrast, this study proposes a novel time delay estimation method for traffic congestion propagation between roads using lag-specific transfer entropy (TE). Nonlinear normalization with a sliding window is used to effectively reveal the causal relationship between the source and target time series when calculating the TE. Moreover, Markov bootstrap techniques were adopted to quantify the uncertainty in the time delay estimator. To the best of our knowledge, the time delay estimation method presented in this article is the first to determine the time delay between roads for any congestion propagation pattern. The proposed method was validated using simulated data as well as real user trajectory data obtained from a major GPS navigation system used in South Korea.
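A minimal sketch of lag-specific transfer entropy on binarized (congested/free) road indicators, with the delay estimated as the lag maximizing TE. The paper's nonlinear sliding-window normalization and Markov bootstrap are omitted, and all names here are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, lag):
    """Lag-specific TE from source x to target y (binary series):
    TE(lag) = sum p(y_t, y_{t-1}, x_{t-lag})
              * log[ p(y_t | y_{t-1}, x_{t-lag}) / p(y_t | y_{t-1}) ],
    with all probabilities estimated by counting observed triples."""
    triples = Counter()
    for t in range(max(lag, 1), len(y)):
        triples[(y[t], y[t - 1], x[t - lag])] += 1
    n = sum(triples.values())
    te = 0.0
    for (yt, yp, xp), c in triples.items():
        p_xyz = c / n
        p_hist = sum(v for (a, b, d), v in triples.items() if b == yp and d == xp) / n
        p_yp = sum(v for (a, b, d), v in triples.items() if b == yp) / n
        p_y_given_past = sum(v for (a, b, d), v in triples.items()
                             if a == yt and b == yp) / n / p_yp
        te += p_xyz * np.log((p_xyz / p_hist) / p_y_given_past)
    return te

def estimate_delay(x, y, max_lag=5):
    """Estimated propagation delay: the lag maximizing TE(x -> y)."""
    return max(range(1, max_lag + 1), key=lambda d: transfer_entropy(x, y, d))
```

When the target road's state is driven by the source road's state from a few steps earlier, TE spikes at exactly that lag, which is the quantity the bootstrap then wraps with a confidence interval.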

The paper describes a new class of capture-recapture models for closed populations when individual covariates are available. The novelty consists in combining a latent class model for capture probabilities, where the class weights and the conditional distributions given the latent class may depend on covariates, with a model for the marginal distribution of the available covariates as in Liu et al., Biometrika (2017). In addition, the conditional distributions given the latent class and covariates are allowed to take into account any general form of serial dependence. A Fisher scoring algorithm for maximum likelihood estimation is presented, and a powerful result based on the implicit function theorem is used to show that the marginal distribution of observed covariates is uniquely determined once an estimate of the probabilities of being never captured is available. Asymptotic results are outlined, and a procedure for constructing likelihood-based confidence intervals for the population total is presented. Two examples with real data are used to illustrate the proposed approach.

In this paper, we revisit the classical goodness-of-fit problem for univariate distributions; we propose a new testing procedure based on a characterisation of the uniform distribution. Asymptotic theory for the simple hypothesis case is provided in a Hilbert-space setting, including the asymptotic null distribution as well as values for the first four cumulants of this distribution, which are used to fit a Pearson system of distributions as an approximation to the limit distribution. Numerical results indicate that the null distribution of the test converges quickly to its asymptotic distribution, making the critical values obtained using the Pearson system particularly useful. Consistency of the test is shown against any fixed alternative distribution, and we derive the limiting behaviour under fixed alternatives with an application to power approximation. We demonstrate the applicability of the newly proposed test when testing composite hypotheses. A Monte Carlo power study compares the finite-sample power performance of the newly proposed test to existing omnibus tests in both the simple and composite hypothesis settings. This power study includes results related to testing for the uniform, normal and Pareto distributions. The empirical results obtained indicate that the test is competitive. An application of the newly proposed test in financial modelling is also included.
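The general recipe such tests follow can be sketched with a classical stand-in statistic: reduce the null hypothesis to uniformity via the probability integral transform, compute a statistic measuring departure from Uniform(0,1), and calibrate it on the uniform null. The Cramér-von Mises statistic and Monte Carlo calibration below are assumptions for illustration; the paper's statistic comes from a different uniform characterisation and uses Pearson-system critical values:

```python
import numpy as np

def cvm_uniform_stat(u):
    """Cramer-von Mises statistic for H0: U ~ Uniform(0,1).
    A classical stand-in for the paper's characterisation-based statistic."""
    u = np.sort(np.asarray(u))
    n = len(u)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)

def gof_pvalue(x, cdf, n_mc=300, rng=None):
    """Goodness-of-fit via the probability integral transform:
    U = F(X) is Uniform(0,1) under H0, so test U for uniformity and
    calibrate the statistic by Monte Carlo on the uniform null."""
    rng = np.random.default_rng(rng)
    stat = cvm_uniform_stat(cdf(np.asarray(x)))
    null = [cvm_uniform_stat(rng.random(len(x))) for _ in range(n_mc)]
    return float(np.mean([s >= stat for s in null]))
```

For composite hypotheses, `cdf` would use estimated parameters and the null simulation would re-estimate them, which is where the paper's asymptotic treatment earns its keep.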

In real-world applications, data often come in a growing manner, where the data volume and the number of classes may increase dynamically. This will bring a critical challenge for learning: given the increasing data volume or the number of classes, one has to instantaneously adjust the neural model capacity to obtain promising performance. Existing methods either ignore the growing nature of data or seek to independently search an optimal architecture for a given dataset, and thus are incapable of promptly adjusting the architectures for the changed data. To address this, we present a neural architecture adaptation method, namely Adaptation eXpert (AdaXpert), to efficiently adjust previous architectures on the growing data. Specifically, we introduce an architecture adjuster to generate a suitable architecture for each data snapshot, based on the previous architecture and the different extent between current and previous data distributions. Furthermore, we propose an adaptation condition to determine the necessity of adjustment, thereby avoiding unnecessary and time-consuming adjustments. Extensive experiments on two growth scenarios (increasing data volume and number of classes) demonstrate the effectiveness of the proposed method.

Tracking by detection is a common approach to solving the Multiple Object Tracking problem. In this paper we show how deep metric learning can be used to improve three aspects of tracking by detection. We train a convolutional neural network to learn an embedding function in a Siamese configuration on a large person re-identification dataset offline. It is then used to improve the online performance of tracking while retaining a high frame rate. We use this learned appearance metric to robustly build estimates of pedestrians' trajectories in the MOT16 dataset. Breaking with the tracking-by-detection model, we use our appearance metric to propose detections using the predicted state of a tracklet as a prior in the case where the detector fails. This method achieves competitive results in evaluation, especially among online, real-time approaches. We present an ablative study showing the impact of each of the three uses of our deep appearance metric.
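The data-association step such an appearance metric enables can be sketched without the CNN itself: given embeddings the Siamese network would produce, associate detections to tracklets by cosine distance. The greedy matcher, threshold, and names below are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def match_by_appearance(track_embs, det_embs, max_cos_dist=0.3):
    """Greedily associate detections to tracklets by cosine distance
    between L2-normalised appearance embeddings (the kind of metric a
    Siamese re-ID network provides). Pairs above the distance threshold
    are left unmatched. Sketch only; the threshold is an assumption."""
    T = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    D = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    cost = 1.0 - T @ D.T                      # cosine-distance matrix
    pairs = sorted((cost[i, j], i, j)
                   for i in range(cost.shape[0])
                   for j in range(cost.shape[1]))
    matches, used_t, used_d = [], set(), set()
    for c, i, j in pairs:                     # cheapest valid pairs first
        if c <= max_cos_dist and i not in used_t and j not in used_d:
            matches.append((i, j))
            used_t.add(i)
            used_d.add(j)
    return matches
```

In practice this appearance cost is typically combined with a motion term from the tracklet's predicted state, which is also what allows proposing detections when the detector fails.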

Generative models (GMs) such as the Generative Adversarial Network (GAN) and the Variational Auto-Encoder (VAE) have thrived in recent years and achieved high-quality results in generating new samples. Especially in computer vision, GMs have been used in image inpainting, denoising and completion, which can be treated as inference from observed pixels to corrupted pixels. However, images are hierarchically structured, which is quite different from many real-world inference scenarios with non-hierarchical features. These inference scenarios contain heterogeneous stochastic variables and irregular mutual dependencies. Traditionally they are modeled by a Bayesian Network (BN). However, learning and inference in the BN model are NP-hard, so the number of stochastic variables in a BN is highly constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR with adversary loss (EARA) model, and give theoretical results on their effectiveness. Experiments on several BN datasets show that our proposed EAR model achieves the best performance in most cases compared to other GMs. In addition to black-box analysis, we have also done a series of experiments on Markov border inference of GMs for white-box analysis and give theoretical results.

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
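A minimal sketch of the kind of objective such a variance-penalized surrogate evaluates: empirical mean loss plus a term that grows with the sample variance of the losses. The exact constant, scaling and name are assumptions for illustration, not the paper's estimator:

```python
import numpy as np

def variance_regularized_risk(losses, rho=1.0):
    """Illustrative variance-penalized empirical risk:
    mean(L) + sqrt(2 * rho * Var(L) / n).
    Models with the same average loss but noisier per-example losses are
    penalized more, trading a little bias for reduced variance.
    Constant and scaling are assumptions, not the paper's formula."""
    losses = np.asarray(losses, dtype=float)
    n = len(losses)
    return float(losses.mean() + np.sqrt(2 * rho * losses.var(ddof=1) / n))
```

Minimizing such an objective over model parameters is what "automatically balancing bias and variance" amounts to: between two fits with equal training error, the one with more stable per-example losses wins.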
