Approximate computing (AC) has become a prominent solution for improving the performance, area, and power/energy efficiency of digital designs at the cost of reduced output accuracy. We propose a novel scalable approximate multiplier that utilizes a lookup table-based compensation unit. To improve energy efficiency, input operands are truncated to a reduced bitwidth representation (e.g., $h$ bits) based on their leading-one positions. A curve-fitting method then maps the product term to a linear function, and a piecewise constant error-correction term reduces the approximation error. To compute this piecewise constant error-compensation term, we partition the function space into $M$ segments and obtain the compensation factor for each segment by averaging the errors within it. The multiplier supports various degrees of truncation and error compensation to exploit the accuracy-efficiency trade-off. The proposed approximate multiplier offers better error metrics, such as the mean and standard deviation of the absolute relative error (MARED and StdARED), compared with a state-of-the-art integer approximate multiplier: it improves the MARED and StdARED by about 38% and 32%, respectively, at approximately equal energy consumption. Moreover, the performance of the proposed approximate multiplier is evaluated in image classification applications using a Deep Neural Network (DNN). The results indicate that the degradation in DNN accuracy is negligible, largely owing to the error-compensation properties of our approximate multiplier.
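To make the scheme concrete, here is a minimal Python sketch (a behavioral model, not the authors' hardware design) of the three steps: leading-one truncation to $h$ bits, a linear surrogate for the product of the normalized mantissas, and an $M$-entry lookup table of per-segment average errors. Indexing segments by a single mantissa is our simplifying assumption.

```python
def truncate(x: int, h: int):
    """Keep the h bits starting at the leading one; return (mantissa, dropped shift)."""
    shift = max(x.bit_length() - h, 0)
    return x >> shift, shift

def build_lut(h: int, M: int):
    """Average the linear-approximation error over M segments of the mantissa space."""
    lo, hi = 1 << (h - 1), (1 << h) - 1              # normalized mantissas have their MSB set
    sums, counts = [0.0] * M, [0] * M
    for a in range(lo, hi + 1):
        for b in range(lo, hi + 1):
            linear = (a + b) * (1 << (h - 1)) - (1 << (2 * h - 2))  # linear surrogate for a*b
            seg = (a - lo) * M // (hi - lo + 1)      # segment index from one mantissa (simplification)
            sums[seg] += a * b - linear
            counts[seg] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

def approx_mul(x: int, y: int, h: int, lut):
    lo = 1 << (h - 1)
    if x < lo or y < lo:                             # small operands: fall back to the exact product
        return x * y
    (ma, sa), (mb, sb) = truncate(x, h), truncate(y, h)
    seg = (ma - lo) * len(lut) // ((1 << h) - lo)
    prod = (ma + mb) * (1 << (h - 1)) - (1 << (2 * h - 2)) + lut[seg]
    return round(prod) * (1 << (sa + sb))

lut = build_lut(h=8, M=16)
print(approx_mul(1000, 2000, 8, lut), "vs exact", 1000 * 2000)
```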
The One-versus-One (OvO) strategy is an approach to multi-class classification that trains a binary classifier for each pair of classes. While the OvO strategy benefits from balanced training data, its classification accuracy is usually hindered by the voting mechanism used to combine all the binary classifiers. In this paper, a novel OvO multi-classification model incorporating a joint probability measure is proposed under the deep learning framework. In the proposed model, a two-stage algorithm is developed to estimate the class probabilities from the pairwise binary classifiers. Given the binary classifiers, each pairwise probability estimate is calibrated by a distance measure on the separating feature hyperplane. From these calibrated estimates, the class probabilities of a subject are obtained by solving a joint probability-based distance minimization problem. Numerical experiments in different applications show that the proposed model generally achieves higher classification accuracy than other state-of-the-art models.
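As an illustration of the second stage, the sketch below recovers class probabilities from a matrix of pairwise estimates by least-squares pairwise coupling. This classical objective is only a stand-in for the paper's joint probability-based distance minimization, and the first stage (calibrating $r_{ij}$ by hyperplane distances) is assumed already done.

```python
import numpy as np
from scipy.optimize import minimize

def couple(r: np.ndarray) -> np.ndarray:
    """r is a KxK matrix with r[i, j] ~ p_i / (p_i + p_j) for i != j."""
    K = r.shape[0]

    def loss(p):
        # classical pairwise-coupling least-squares criterion
        return sum((r[i, j] * (p[i] + p[j]) - p[i]) ** 2
                   for i in range(K) for j in range(K) if i != j)

    p0 = np.full(K, 1.0 / K)
    res = minimize(loss, p0, bounds=[(0, 1)] * K,
                   constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0})
    return res.x

# Example: 3 classes where class 0 wins both of its pairwise comparisons.
r = np.array([[0.0, 0.8, 0.7],
              [0.2, 0.0, 0.6],
              [0.3, 0.4, 0.0]])
print(couple(r))   # largest probability mass lands on class 0
```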
Multi-access Edge Computing (MEC) is an enabling technology for new network applications, such as virtual/augmented reality, that provides faster task processing at the network edge by deploying servers closer to the end users. These applications are often intensive in terms of task processing, memory usage, and communication; thus, mobile devices may take a long time to run them or may not be able to run them efficiently at all. By transferring (offloading) the execution of these applications to servers at the network edge, it is possible to achieve a lower completion time (makespan) and meet application requirements. However, offloading multiple entire applications to the edge server can overwhelm its hardware and communication channel, as well as underutilize the mobile devices' hardware. In this paper, network applications are modeled as Directed Acyclic Graphs (DAGs) and partitioned into tasks, and only a subset of these tasks is offloaded to the edge server. This is the DAG application partitioning and offloading problem, which is known to be NP-hard. To approximate its solution, this paper proposes the FlexDO algorithm. FlexDO combines a greedy phase with a permutation phase to produce a set of offloading decisions, and then chooses the one that achieves the shortest makespan. FlexDO is compared with a proposal from the literature and with two baseline decisions, considering realistic DAG applications extracted from the Alibaba Cluster Trace Program. Results show that FlexDO is consistently only 3.9% to 8.9% above the optimal makespan in all test scenarios, which include different levels of CPU availability, a multi-user case, and different communication channel transmission rates. FlexDO outperforms both baseline solutions by a wide margin and is three times closer to the optimal makespan than its competitor.
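The sketch below illustrates only a greedy phase of this kind (the permutation phase and the paper's exact cost model are omitted): tasks are visited in topological order, and each is placed on the device or the edge server, whichever yields the earlier estimated finish time, paying a transfer cost on every DAG edge that crosses the placement boundary. The cost model is our own simplification; contention on the channel and processors is ignored.

```python
import networkx as nx

def greedy_offload(dag: nx.DiGraph, local_time, edge_time, tx_time):
    """local_time/edge_time: dicts task -> processing time; tx_time: dict (u, v) -> transfer time."""
    finish, placement = {}, {}
    for task in nx.topological_sort(dag):
        best = None
        for loc in ("device", "edge"):
            ready = 0.0
            for pred in dag.predecessors(task):
                t = finish[pred]
                if placement[pred] != loc:           # crossing edge pays communication
                    t += tx_time[(pred, task)]
                ready = max(ready, t)
            run = local_time[task] if loc == "device" else edge_time[task]
            if best is None or ready + run < best[0]:
                best = (ready + run, loc)
        finish[task], placement[task] = best
    return placement, max(finish.values())           # placement and estimated makespan

dag = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
lt = {t: 4.0 for t in dag}
et = {t: 1.0 for t in dag}
tx = {e: 2.0 for e in dag.edges}
print(greedy_offload(dag, lt, et, tx))
```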
In this paper, we address the problem of distributed power allocation in a $K$-user fading multiple access wiretap channel, where global channel state information is limited: each user knows their own channel state with respect to Bob and Eve but only the distribution of the other users' channel states. We model this problem as a Bayesian game in which each user selfishly maximizes their average \emph{secrecy capacity} under partial channel state information. We first prove that the proposed game admits a unique Bayesian equilibrium. Additionally, the price of anarchy is calculated to measure the efficiency of the equilibrium solution. We also propose a fast-converging iterative algorithm for power allocation. Finally, the analysis is validated through simulations.
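For intuition, here is a generic best-response iteration of the kind such algorithms build on; the toy secrecy utility, grid search, and channel model below are our own illustrative choices, not the paper's update rule. Each user in turn maximizes its own utility while the others' powers are held fixed, until the profile stops changing.

```python
import numpy as np

def best_response(K, g_bob, g_eve, p_max, iters=50, grid=200):
    """g_bob/g_eve: channel gains to Bob and Eve; toy secrecy-rate utility per user."""
    p = np.zeros(K)
    cand = np.linspace(0.0, p_max, grid)             # candidate power levels
    for _ in range(iters):
        prev = p.copy()
        for k in range(K):
            interf = 1.0 + np.sum(g_bob * p) - g_bob[k] * p[k]   # others' interference at Bob
            rate_b = np.log2(1.0 + g_bob[k] * cand / interf)
            rate_e = np.log2(1.0 + g_eve[k] * cand)              # leakage rate to Eve
            p[k] = cand[np.argmax(rate_b - rate_e)]  # user k's best response
        if np.allclose(p, prev, atol=1e-6):          # fixed point reached
            break
    return p

print(best_response(3, np.array([2.0, 1.5, 1.0]), np.array([0.3, 0.4, 0.2]), p_max=1.0))
```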
In a Subgraph Problem, we are given some graph and want to find a feasible subgraph that optimizes some measure. We consider Multistage Subgraph Problems (MSPs), where we are given a sequence of graph instances (stages) and asked to find a sequence of subgraphs, one per stage, such that each is optimal for its respective stage and the subgraphs for subsequent stages are as similar as possible. We present a framework that provides a $(1/\sqrt{2\chi})$-approximation algorithm for the $2$-stage restriction of an MSP if the similarity of subsequent solutions is measured as the intersection cardinality and said MSP is preficient, i.e., we can efficiently find a single-stage solution that prefers some given subset. The approximation factor depends on the instance's intertwinement $\chi$, a similarity measure for multistage graphs. We also show that for any MSP, independent of similarity measure and preficiency, given an exact or approximation algorithm for a constant number of stages, we can approximate the MSP for an unrestricted number of stages. Finally, we combine and apply these results and show that the above restrictions describe a very rich class of MSPs and that proving membership in this class is mostly straightforward. As examples, we explicitly state these proofs for natural multistage versions of Perfect Matching, Shortest $s$-$t$-Path, Minimum $s$-$t$-Cut, and further classical problems on bipartite or planar graphs, namely Maximum Cut, Vertex Cover, Independent Set, and Biclique.
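For reference, the first guarantee can be restated informally as follows, in notation we introduce ourselves (stages $G_1, G_2$, solutions $S_1, S_2$, similarity $|S_1 \cap S_2|$, and the intertwinement $\chi$ of the instance):
\[
  \mathrm{ALG}(G_1, G_2) \;\ge\; \frac{1}{\sqrt{2\chi}}\,\mathrm{OPT}(G_1, G_2)
  \qquad \text{for any preficient $2$-stage MSP,}
\]
and, by the second result, any exact or approximation algorithm for a constant number of stages can be lifted to an approximation algorithm for an unrestricted number of stages.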
This paper develops an approximation to the (effective) $p$-resistance and applies it to multi-class clustering. Spectral methods based on the graph Laplacian and its generalization, the graph $p$-Laplacian, have been a backbone of non-Euclidean clustering techniques. The advantage of the $p$-Laplacian is that the parameter $p$ induces a controllable bias on cluster structure; the drawback of $p$-Laplacian eigenvector-based methods is that the third and higher eigenvectors are difficult to compute. We are thus motivated to use the $p$-resistance induced by the $p$-Laplacian for clustering instead. For $p$-resistance, small $p$ biases towards clusters with high internal connectivity, while large $p$ biases towards clusters of small ``extent,'' that is, a preference for smaller shortest-path distances between vertices in the cluster. However, the $p$-resistance is expensive to compute. We overcome this by developing an approximation to the $p$-resistance. We prove upper and lower bounds on this approximation and observe that it is exact when the graph is a tree. We also provide theoretical justification for the use of $p$-resistance for clustering. Finally, we provide experiments comparing our approximated $p$-resistance clustering to other $p$-Laplacian-based methods.
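As background, the sketch below shows resistance-based clustering in the classical $p = 2$ case, where the effective resistance has a closed form via the Laplacian pseudoinverse; the paper's contribution is an approximation for general $p$, which is not reproduced here. Vertices are embedded by their rows of the resistance matrix and then clustered.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans

def resistance_matrix(A: np.ndarray) -> np.ndarray:
    L = laplacian(A)
    Lp = np.linalg.pinv(L)                           # Moore-Penrose pseudoinverse of the Laplacian
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp          # R_ij = Lp_ii + Lp_jj - 2 Lp_ij

def resistance_clustering(A: np.ndarray, k: int):
    R = resistance_matrix(A)
    # embed each vertex by its resistance profile and run k-means on the rows
    return KMeans(n_clusters=k, n_init=10).fit_predict(R)

# Example: two triangles joined by a single bridge edge split into two clusters.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
print(resistance_clustering(A, 2))
```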
We study the Bayesian inverse problem of Electrical Impedance Tomography: recovering the conductivity from noisy measurements of the voltage on boundary surface electrodes. The uncertain conductivity depends linearly on a countable number of random parameters, each uniformly distributed in a compact interval, with the coefficient functions in the linear expansion decaying at an algebraic rate. We analyze the surrogate Markov Chain Monte Carlo (MCMC) approach for sampling the posterior probability measure, where multivariate sparse adaptive interpolation, with interpolating points chosen according to a lower index set, is used to approximate the forward map. The forward equation is approximated once, before running the MCMC for all the realizations, using interpolation on the finite element (FE) approximation at the parametric interpolating points. When the solution is needed for a realization, only a polynomial has to be evaluated, which drastically cuts the computation time. We contribute a rigorous error estimate for the MCMC convergence. In particular, we show that there is a nested sequence of interpolating lower index sets for which we can derive an interpolation error estimate in terms of the cardinality of these sets, uniformly over all parameter realizations. An explicit convergence rate for the MCMC sampling of the posterior expectation of the conductivity is rigorously derived in terms of the number of interpolating points, the accuracy of the FE approximation of the forward equation, and the number of MCMC samples. We perform numerical experiments using an adaptive greedy approach to construct the sets of interpolation points, and show the benefits of this approach over both the plain MCMC, where the forward equation is solved anew for every sample, and the non-adaptive surrogate MCMC with an isotropic index set that treats all the random parameters equally.
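The following sketch conveys the surrogate-MCMC idea in its simplest form: an expensive forward map is evaluated offline at interpolation nodes, and the Metropolis-Hastings loop only evaluates the cheap interpolant. The one-dimensional parameter, polynomial fit, and toy forward map stand in for the paper's sparse adaptive interpolation and FE solver.

```python
import numpy as np

def make_surrogate(G, nodes):
    """Fit an interpolating polynomial to the forward map G at the given nodes."""
    return np.polynomial.chebyshev.Chebyshev.fit(nodes, [G(y) for y in nodes], deg=len(nodes) - 1)

def surrogate_mcmc(G_surr, data, sigma, n_samples, step=0.2):
    # uniform prior on [-1, 1]; Gaussian likelihood with noise level sigma
    log_post = lambda y: -0.5 * ((data - G_surr(y)) / sigma) ** 2 if abs(y) <= 1 else -np.inf
    y, chain = 0.0, []
    lp = log_post(y)
    for _ in range(n_samples):
        y_new = y + step * np.random.randn()         # random-walk proposal
        lp_new = log_post(y_new)                     # only a polynomial evaluation
        if np.log(np.random.rand()) < lp_new - lp:   # Metropolis accept/reject
            y, lp = y_new, lp_new
        chain.append(y)
    return np.array(chain)

G = lambda y: np.exp(0.5 * y) / (2.0 + y)            # toy forward map (stands in for the FE solve)
surr = make_surrogate(G, np.cos(np.pi * np.arange(9) / 8))   # Chebyshev nodes on [-1, 1]
samples = surrogate_mcmc(surr, data=G(0.4) + 0.01, sigma=0.05, n_samples=5000)
print(samples.mean())                                # posterior mean of the parameter
```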
Distributing the inference of a convolutional neural network (CNN) across multiple mobile devices has been studied in recent years as a way to achieve real-time inference without losing accuracy. However, how to map a CNN onto devices remains a challenge. On the one hand, scheduling the workload of state-of-the-art CNNs over multiple devices is NP-hard, because their structures are directed acyclic graphs (DAGs) rather than simple chains. On the other hand, distributing the inference workload suffers from expensive communication and unbalanced computation due to the wireless environment and heterogeneous devices. This paper presents PICO, a pipeline cooperation framework that accelerates the inference of versatile CNNs on diverse mobile devices. At its core, PICO features: (1) a generic graph partition algorithm that considers the characteristics of any given CNN and orchestrates it into a list of model pieces with suitable granularity, and (2) a many-to-many mapping algorithm that produces the best pipeline configuration for heterogeneous devices. In our experiments with 2 to 8 Raspberry Pi devices, the throughput is improved by 1.8x to 6.8x under different CPU frequencies.
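To illustrate the mapping step, the sketch below handles the special case of a chain of pieces (which is what a graph-partition phase of this kind produces from the DAG): dynamic programming assigns one contiguous stage to each device so that the slowest stage, i.e., the pipeline bottleneck, is as fast as possible. The cost model (work divided by device speed, no communication term) is our simplification, not PICO's.

```python
from functools import lru_cache

def partition_chain(layer_cost, device_speed):
    """layer_cost[i]: work of piece i; device_speed[d]: relative speed of device d."""
    n, m = len(layer_cost), len(device_speed)
    prefix = [0.0]
    for c in layer_cost:
        prefix.append(prefix[-1] + c)                # prefix sums for O(1) stage cost

    @lru_cache(maxsize=None)
    def best(i, d):
        """Minimal bottleneck time for pieces i.. using devices d.."""
        if d == m - 1:                               # last device takes everything left
            return (prefix[n] - prefix[i]) / device_speed[d]
        # device d takes pieces i..j-1; every remaining device still gets >= 1 piece
        return min(max((prefix[j] - prefix[i]) / device_speed[d], best(j, d + 1))
                   for j in range(i + 1, n - (m - d - 2)))

    return best(0, 0)

print(partition_chain([4, 1, 3, 3, 2, 5], [1.0, 2.0, 1.0]))   # bottleneck stage time
```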
The privacy-utility tradeoff remains one of the fundamental issues of differentially private machine learning. This paper introduces a geometrically inspired kernel-based approach to mitigating the accuracy loss in classification. In this approach, a representation of the affine hull of the given data points is learned in a Reproducing Kernel Hilbert Space (RKHS). This leads to a novel distance measure that hides privacy-sensitive information about individual data points and improves the privacy-utility tradeoff by significantly reducing the risk of membership inference attacks. The effectiveness of the approach is demonstrated through experiments on the MNIST dataset, the Freiburg groceries dataset, and a real biomedical dataset, and the approach is verified to remain computationally practical. Its application to federated learning is also considered, where the accuracy loss due to the data being distributed is observed to be either marginal or not significant.
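The geometric ingredient can be sketched as follows: the squared RKHS distance from $\phi(x)$ to the affine hull of $\{\phi(x_1), \dots, \phi(x_n)\}$ is computable entirely from kernel evaluations via an equality-constrained least-squares (KKT) system. The privacy mechanism that the paper wraps around this representation is not shown; the RBF kernel and parameters are illustrative.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def affine_hull_dist2(x, X, gamma=1.0):
    """min_alpha ||phi(x) - sum_i alpha_i phi(x_i)||^2  s.t.  sum_i alpha_i = 1."""
    n = X.shape[0]
    K = rbf(X, X, gamma)
    kx = rbf(X, x[None, :], gamma).ravel()
    # KKT system of the equality-constrained least squares in feature space
    A = np.block([[K, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    b = np.concatenate([kx, [1.0]])
    alpha = np.linalg.solve(A, b)[:n]
    return float(1.0 - 2 * alpha @ kx + alpha @ K @ alpha)   # k(x, x) = 1 for the RBF kernel

X = np.random.default_rng(0).normal(size=(20, 5))
print(affine_hull_dist2(X[0], X[1:]))   # squared distance from a held-out point to the hull
```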
The reduced-rank vector autoregressive (VAR) model can be interpreted as a supervised factor model, where factor modeling is applied simultaneously to the response and predictor spaces. This article introduces a new model, called vector autoregression with common response and predictor factors, to further explore the common structure between the responses and predictors in the VAR framework. The new model provides better physical interpretations and improves estimation efficiency. In conjunction with tensor operations, the model can easily be extended to any finite-order VAR model. A regularization-based method, optimized with a gradient descent algorithm, is considered for high-dimensional estimation, and its computational and statistical convergence guarantees are established. For data with pervasive cross-sectional dependence, a transformation of the responses is developed to alleviate the diverging-eigenvalue effect. Moreover, we consider an additional sparsity structure in the factor loadings for the case of ultra-high dimension. Simulation experiments confirm our theoretical findings, and a macroeconomic application showcases the appealing properties of the proposed model in structural analysis and forecasting.
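For context, here is the plain reduced-rank VAR(1) baseline that such models refine: regress $x_t$ on $x_{t-1}$, then truncate the transition matrix to rank $r$ by SVD, which exposes the response and predictor loadings. The common-factor constraint and the paper's regularized gradient-descent estimator are not shown.

```python
import numpy as np

def reduced_rank_var1(X, r):
    """X: T x N observation matrix; returns (U, V) with A_hat = U @ V.T of rank r."""
    Y, Z = X[1:], X[:-1]                              # responses and lagged predictors
    A_ols = np.linalg.lstsq(Z, Y, rcond=None)[0].T    # N x N least-squares transition matrix
    U, s, Vt = np.linalg.svd(A_ols)
    return U[:, :r] * s[:r], Vt[:r].T                 # response / predictor loadings

# Simulate a rank-2 VAR(1) and recover its transition matrix.
rng = np.random.default_rng(1)
N, T, r = 10, 2000, 2
A = rng.normal(size=(N, r)) @ rng.normal(size=(r, N))
A *= 0.8 / np.linalg.norm(A, 2)                       # keep the process stable
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + rng.normal(size=N)
U, V = reduced_rank_var1(X, r)
print(np.linalg.norm(U @ V.T - A) / np.linalg.norm(A))   # small relative error
```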
Neural network compression has become an increasingly important subject, due to its practical implications in reducing computational requirements and its theoretical implications, as there is an explicit connection between compressibility and the generalization error. Recent studies have shown that the choice of hyperparameters for stochastic gradient descent (SGD) can affect the compressibility of the learned parameter vector. Even though these results have shed some light on the role of the training dynamics in compressibility, they rely on unverifiable assumptions, and the resulting theory does not provide a practical guideline due to its implicitness. In this study, we propose a simple modification of SGD such that the outputs of the algorithm are provably compressible without any nontrivial assumptions. We consider a one-hidden-layer neural network trained with SGD and inject additive heavy-tailed noise into the iterates at each iteration. We then show that, for any compression rate, there exists a level of overparametrization (i.e., a number of hidden units) such that the output of the algorithm is compressible with high probability. To achieve this result, we make two main technical contributions: (i) building on a recent study in stochastic analysis, we prove a 'propagation of chaos' result with improved rates for a class of heavy-tailed stochastic differential equations, and (ii) we derive strong-error estimates for their Euler discretization. We finally illustrate our approach on experiments, where the results suggest that the proposed approach achieves compressibility with a slight compromise in training and test error.
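A minimal sketch of this kind of modification on a one-hidden-layer network is given below: vanilla SGD steps on the hidden-layer weights, plus additive symmetric $\alpha$-stable noise injected into the iterates ($\alpha < 2$ gives heavy tails; $\alpha = 2$ would recover Gaussian noise). The frozen output layer, network width, step size, and noise scale are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.stats import levy_stable

def train(X, y, width=128, alpha=1.8, lr=1e-2, noise=1e-3, epochs=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(width, d)) / np.sqrt(d)      # hidden layer (trained)
    a = rng.choice([-1.0, 1.0], size=width) / width   # output layer (frozen)
    for _ in range(epochs):
        for i in rng.permutation(n):
            h = np.maximum(W @ X[i], 0.0)             # ReLU features
            err = a @ h - y[i]
            grad = err * np.outer(a * (h > 0), X[i])  # backprop through the ReLU
            W -= lr * grad                            # SGD step
            # the proposed modification: heavy-tailed noise injected into the iterate
            W += noise * levy_stable.rvs(alpha, 0.0, size=W.shape, random_state=rng)
    return W, a

X = np.random.default_rng(1).normal(size=(200, 10))
y = np.tanh(X[:, 0])
W, a = train(X, y)
print(np.mean((np.maximum(X @ W.T, 0.0) @ a - y) ** 2))  # training MSE
```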