In this paper, we study the problem of minimizing the age of information when a source can transmit status updates over two heterogeneous channels. Our work is motivated by recent developments in 5G mmWave technology, where transmissions may occur over an unreliable but fast (e.g., mmWave) channel or a slow but reliable (e.g., sub-6GHz) channel. The unreliable channel is modeled as a time-correlated Gilbert-Elliot channel, where information can be transmitted at a high rate when the channel is in the "ON" state. The reliable channel provides a deterministic but lower data rate. The scheduling strategy determines which channel to use for each transmission, with the aim of minimizing the time-average age of information (AoI). The optimal scheduling problem is formulated as a Markov decision process (MDP), which in our setting poses significant challenges because, e.g., supermodularity does not hold over part of the state space. We show that there exists a multi-dimensional threshold-based scheduling policy that is optimal for minimizing the age. A low-complexity bisection algorithm is further devised to compute the optimal thresholds. Numerical simulations are provided to compare different scheduling policies.
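The two-channel trade-off can be illustrated with a toy simulator. Everything here is an assumption for illustration: the channel persistence probabilities, the instant-success fast channel, the fixed slow-channel delay, and the simple single-threshold rule (the paper's optimal policy is multi-dimensional and computed by bisection, not hard-coded like this):

```python
import random

def simulate_aoi(T=50_000, p_stay_on=0.9, p_stay_off=0.7,
                 slow_delay=3, threshold=4, seed=0):
    """Average AoI under an illustrative single-threshold policy.

    Fast channel: Gilbert-Elliot, stays ON w.p. p_stay_on and stays
    OFF w.p. p_stay_off (time-correlated); a fast transmission
    succeeds within the slot only when the channel is ON.
    Slow channel: always succeeds, but takes slow_delay slots.
    Policy: gamble on the fast channel while age < threshold,
    otherwise fall back to the reliable slow channel.
    """
    rng = random.Random(seed)
    on = True                    # Gilbert-Elliot channel state
    age, total, t = 1, 0, 0
    while t < T:
        # evolve the time-correlated channel by one slot
        on = (rng.random() < p_stay_on) if on else (rng.random() >= p_stay_off)
        if age < threshold:
            # fast mmWave channel: instant success iff ON
            age = 1 if on else age + 1
            total += age
            t += 1
        else:
            # reliable sub-6GHz channel: occupies slow_delay slots
            for _ in range(slow_delay):
                age += 1
                total += age
                t += 1
            age = slow_delay     # update was generated slow_delay slots ago
    return total / t
```

Sweeping `threshold` in such a simulator is a cheap way to see the threshold structure the paper proves optimal.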
In this study, we generalize the problem of sampling a scalar Gauss-Markov process, namely the Ornstein-Uhlenbeck (OU) process, in which samples are sent to a remote estimator that forms a causal estimate of the observed real-time signal. Prior work solved this problem for stable OU processes. We present the optimal sampling policy, which achieves a smaller estimation error, for both the stable and unstable cases of the OU process, along with the special case in which the OU process reduces to a Wiener process. The optimal sampling policy is a threshold policy in all three cases, but with a different threshold in each. We then consider samples corrupted by additive noise, where the sampling decision is made before the noise is observed. The estimator uses the noisy samples to estimate the current signal value. The noise changes the mean-square error (MSE) relative to the noiseless case; we solve for the additional term in the MSE, which provides a performance upper bound and leaves room for further investigation into sampling strategies that minimize the estimation error under noisy observations. Numerical results show the performance degradation caused by the additive noise.
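A minimal Monte-Carlo sketch of the stable, noiseless, zero-mean case: simulate an exactly discretized OU process, apply a signal-difference threshold rule with zero sampling delay, and measure the empirical MSE. The parameter values, the zero-delay assumption, and the threshold `beta` are illustrative choices, not the paper's optimal ones:

```python
import math
import random

def ou_threshold_mse(theta=0.5, sigma=1.0, beta=0.8, dt=0.01, T=2000.0, seed=1):
    """Empirical MSE of threshold sampling for a zero-mean stable OU process.

    Exact OU transition over one step: X_{t+dt} = a*X_t + b*Z with
    a = exp(-theta*dt) and b = sigma*sqrt((1 - a^2) / (2*theta)).
    Between samples the MMSE estimate decays toward the mean:
    Xhat_{t+dt} = a * Xhat_t. A (zero-delay) sample is taken whenever
    the instantaneous error |X_t - Xhat_t| exceeds beta.
    """
    rng = random.Random(seed)
    a = math.exp(-theta * dt)
    b = sigma * math.sqrt((1 - a * a) / (2 * theta))
    x, xhat = 0.0, 0.0
    se, n = 0.0, 0
    for _ in range(int(T / dt)):
        x = a * x + b * rng.gauss(0.0, 1.0)
        xhat = a * xhat                 # estimate decays toward the mean
        if abs(x - xhat) > beta:        # threshold policy: sample now
            xhat = x                    # zero-delay, noiseless sample
        se += (x - xhat) ** 2
        n += 1
    return se / n
```

With zero delay the recorded error never exceeds `beta`, so the empirical MSE stays below `beta**2`; adding sampling noise, as in the abstract, lifts this floor.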
The picking efficiency of warehouses assisted by KIVA robots benefits from exploiting the synergy between order assignment and picking-station scheduling. We treat an integrated optimization problem that both allocates orders and racks to multiple stations and concurrently sequences their interlinked processing flows at each station. The decisions involved, although closely coupled and required in real time, have often been tackled separately in the past for ease of treatment. We instead develop a comprehensive mathematical model that minimizes the total number of rack visits, and we prove the problem to be NP-hard. Consequently, an efficient algorithm based on simulated annealing and dynamic programming is developed. Experimental results show that the proposed approach outperforms rule-based policies used in practice in terms of solution quality. Moreover, the results reveal that ignoring the order assignment policy leads to considerable optimality gaps in realistically sized settings.
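The flavor of the simulated-annealing component can be sketched on a toy version of the assignment subproblem. This is not the paper's SA+DP algorithm: the instance, the cost (distinct rack visits per station, summed), the single-order move, and the cooling schedule are all simplifying assumptions:

```python
import math
import random

def rack_visits(assign, orders, n_stations):
    """Total rack visits: each station visits every distinct rack it needs."""
    visits = 0
    for s in range(n_stations):
        racks = set()
        for i, st in enumerate(assign):
            if st == s:
                racks |= orders[i]
        visits += len(racks)
    return visits

def sa_assign(orders, n_stations, iters=20_000, t0=2.0, alpha=0.9995, seed=0):
    """Simulated annealing over order-to-station assignments.

    orders: list of sets of rack ids required by each order.
    Returns (best_assignment, best_cost).
    """
    rng = random.Random(seed)
    assign = [rng.randrange(n_stations) for _ in orders]
    cur = best = rack_visits(assign, orders, n_stations)
    best_assign = assign[:]
    temp = t0
    for _ in range(iters):
        i = rng.randrange(len(orders))          # move: reassign one order
        old = assign[i]
        assign[i] = rng.randrange(n_stations)
        new = rack_visits(assign, orders, n_stations)
        # accept improvements always, worsenings with Boltzmann probability
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            if cur < best:
                best, best_assign = cur, assign[:]
        else:
            assign[i] = old                     # revert rejected move
        temp *= alpha
    return best_assign, best
```

On a clustered toy instance such as `[{1, 2}, {1, 2}, {3, 4}, {3, 4}]` with two stations, the search quickly settles at the minimum of four rack visits.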
Despite superior performance on many computer vision tasks, deep convolutional neural networks are well known to require compression before they can run on resource-constrained devices. Most existing network pruning methods require laborious human effort and prohibitive computational resources, especially when the constraints change. This practically limits the application of model compression when a model must be deployed on a wide range of devices. Moreover, existing methods still lack theoretical guidance. In this paper, we propose an information-theory-inspired strategy for automatic model compression. The principle behind our method is the information bottleneck theory: hidden representations should compress the information they share with one another. We thus introduce the normalized Hilbert-Schmidt Independence Criterion (nHSIC) on network activations as a stable and generalized indicator of layer importance. Given a resource constraint, we combine the HSIC indicator with the constraint to transform the architecture search problem into a linear programming problem with quadratic constraints. Such a problem is easily solved by a convex optimization method within a few seconds. We also provide a rigorous proof that optimizing the normalized HSIC simultaneously minimizes the mutual information between different layers. Without any search process, our method achieves better compression tradeoffs than state-of-the-art compression algorithms. For instance, with ResNet-50, we achieve a 45.3% FLOPs reduction with 75.75% top-1 accuracy on ImageNet. Code is available at //github.com/MAC-AutoML/ITPruner/tree/master.
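The nHSIC statistic itself is straightforward to compute on two activation matrices. The sketch below uses Gaussian kernels with a median-distance bandwidth, a common default; the paper's exact kernel and bandwidth choices may differ:

```python
import numpy as np

def nhsic(X, Y, sigma=None):
    """Normalized HSIC between activation matrices X, Y (n samples x features).

    HSIC(K, L) = trace(HKH . HLH) / (n - 1)^2 with centering matrix H;
    nHSIC = HSIC(X, Y) / sqrt(HSIC(X, X) * HSIC(Y, Y)), so it is 1 when
    X and Y are identical and near 0 when they are independent.
    """
    def gram(Z):
        # Gaussian kernel; bandwidth from the median pairwise distance
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        s = np.median(d2[d2 > 0]) if sigma is None else sigma ** 2
        return np.exp(-d2 / (2 * s))

    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K = H @ gram(X) @ H
    L = H @ gram(Y) @ H
    hsic = lambda A, B: np.trace(A @ B) / (n - 1) ** 2
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))
```

Evaluating this between a layer's input and output activations gives the kind of per-layer importance score the method feeds into its constrained optimization.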
In this paper, a novel spatially non-stationary channel model is proposed for link-level computer simulations of massive multiple-input multiple-output (mMIMO) with an extremely large aperture array (ELAA). The proposed channel model allows a mix of non-line-of-sight (NLoS) and LoS links between a user and the service antennas. The NLoS/LoS state of each link is characterized by a binary random variable, which obeys a correlated Bernoulli distribution; the correlation is described by an exponentially decaying window. In addition, the proposed model incorporates shadowing effects, which differ between the NLoS and LoS states. It is demonstrated, through computer simulation, that the proposed model captures almost all spatially non-stationary fading behaviors of the ELAA-mMIMO channel while having low implementation complexity. With the proposed channel model, Monte-Carlo simulations are carried out to evaluate the channel capacity of ELAA-mMIMO. It is shown that the ELAA-mMIMO channel capacity has considerably different stochastic characteristics from conventional mMIMO due to the presence of channel spatial non-stationarity.
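One simple way to generate correlated Bernoulli LoS/NLoS states with an exponentially decaying correlation window is a Gaussian-copula construction: threshold a first-order autoregressive Gaussian field whose correlation is exp(-|m-n|/d_corr). This is an illustrative mechanism; the paper's exact generation procedure may differ:

```python
import math
import random
from statistics import NormalDist

def los_states(n_antennas, p_los=0.3, d_corr=8.0, seed=0):
    """Per-antenna LoS (1) / NLoS (0) states with spatial correlation.

    An AR(1) Gaussian sequence g_m with coefficient rho = exp(-1/d_corr)
    has correlation exp(-|m - n| / d_corr) between antennas m and n;
    thresholding it at the p_los quantile yields Bernoulli(p_los)
    marginals whose correlation decays (approximately) exponentially.
    """
    rng = random.Random(seed)
    rho = math.exp(-1.0 / d_corr)
    thr = NormalDist().inv_cdf(p_los)        # standard-normal p_los quantile
    g = rng.gauss(0.0, 1.0)
    states = []
    for _ in range(n_antennas):
        states.append(1 if g < thr else 0)   # 1 = LoS, 0 = NLoS
        g = rho * g + math.sqrt(1 - rho * rho) * rng.gauss(0.0, 1.0)
    return states
```

Per-state shadowing can then be drawn conditionally on each antenna's LoS/NLoS label, giving the non-identical shadowing the model calls for.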
This paper proposes a new general technique for maximal subgraph enumeration, which we call proximity search, whose aim is to design efficient enumeration algorithms for problems that could not be solved by existing frameworks. To support this claim and illustrate the technique, we give output-polynomial algorithms for several problems for which none were previously known, including the enumeration of Maximal Bipartite Subgraphs, Maximal k-Degenerate Subgraphs (for bounded k), Maximal Induced Chordal Subgraphs, and Maximal Induced Trees. In known techniques such as reverse search, the space of all maximal solutions induces an implicit directed graph, called the "solution graph" or "supergraph", and solutions are enumerated by traversing it; however, nodes in this graph can have exponential out-degree, forcing exponential time to be spent on each solution. The novelty of proximity search is a formalization that allows us to define a better solution graph, together with a technique, which we call canonical reconstruction, that exploits the properties of a given problem to build such graphs. This results in solution graphs whose nodes have significantly smaller (i.e., polynomial) out-degree than in existing approaches, yet remain strongly connected, so that all solutions can be enumerated with polynomial delay by a traversal. A drawback of this approach is the space required to keep track of visited solutions, which can be exponential: we further propose a technique that induces a parent-child relationship among solutions and achieves polynomial space when suitable conditions are met.
Mobile edge computing (MEC) integrated with multiple radio access technologies (RATs) is a promising technique for satisfying the growing low-latency computation demand of emerging intelligent Internet of Things (IoT) applications. Under the distributed MapReduce framework, this paper investigates joint RAT selection and transceiver design for over-the-air (OTA) aggregation of intermediate values (IVAs) in wireless multiuser MEC systems, taking into account the energy budget constraint on local computing and IVA transmission at each wireless device (WD). We aim to minimize the weighted sum of the computation mean squared error (MSE) of the aggregated IVA at the RAT receivers, the WDs' IVA transmission cost, and the associated transmission time delay, which is a mixed-integer and non-convex problem. Based on the Lagrange duality method and primal decomposition, we develop a low-complexity algorithm by solving the WDs' RAT selection problem, the WDs' transmit coefficient optimization problem, and the aggregation beamforming problem. Extensive numerical results demonstrate the effectiveness and merit of our proposed algorithm compared with existing schemes.
With the proliferation of the Internet of Things (IoT) and the wide penetration of wireless networks, the surging demand for data communications and computing calls for the emerging edge computing paradigm. By moving services and functions from the cloud to the proximity of users, edge computing can provide powerful computation, storage, networking, and communication capacity. Resource scheduling in edge computing, which is key to the success of edge computing systems, has attracted increasing research interest. In this paper, we survey state-of-the-art research to chart the progress in this field. Specifically, we present the architecture of edge computing, under which different collaborative manners for resource scheduling are discussed. We then introduce a unified model before summarizing current work on resource scheduling under three research issues: computation offloading, resource allocation, and resource provisioning. Based on the two modes of operation, centralized and distributed, different techniques for resource scheduling are discussed and compared. We also summarize the main performance indicators used in the surveyed literature. To shed light on the significance of resource scheduling in real-world scenarios, we discuss several typical application scenarios arising in the research of resource scheduling in edge computing. Finally, we highlight open research challenges yet to be addressed and outline several open issues as future research directions.
We propose a novel dimensionality reduction method for maximum inner product search (MIPS), named CEOs, based on the theory of concomitants of extreme order statistics. Utilizing the asymptotic behavior of these concomitants, we show that a few dimensions associated with the extreme values of the query signature suffice to estimate inner products. Since CEOs uses only the sign of a small subset of the query signature for estimation, all inner product estimators can be precomputed accurately before querying. These properties yield a sublinear MIPS algorithm with exponential indexing space complexity, and we show that this exponential space is optimal for $(1 + \epsilon)$-approximate MIPS on a unit sphere. The search recall of CEOs can be theoretically guaranteed under a mild condition. To deal with the exponential space complexity, we propose two practical linear-space variants, sCEOs-TA and coCEOs. sCEOs-TA exploits the threshold algorithm (TA) and achieves higher search recall than competitive MIPS solvers; coCEOs is a data and dimension co-reduction technique that outperforms sCEOs-TA when high recall is required. Empirically, both are very simple to implement and achieve at least a 100x speedup over brute-force search while returning top-10 MIPS with accuracy of at least 90% on many large-scale data sets.
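The core concomitant idea can be sketched in a few lines: after random Gaussian projections, the data's values along the single direction where the query's projection is largest already rank points roughly by inner product with the query. This one-extreme-dimension variant is a deliberate simplification of CEOs, which uses several extreme dimensions and additional machinery:

```python
import numpy as np

def ceos_rank(X, q, D=1024, seed=0):
    """Rank points for MIPS via one extreme random direction (sketch).

    Project data and query onto D random Gaussian directions. By the
    theory of concomitants of extreme order statistics, conditioning on
    the direction most aligned with the query makes each point's
    projection onto that direction an (unnormalized) estimator of its
    inner product with the query.
    """
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((D, X.shape[1]))  # random projection directions
    i_star = np.argmax(R @ q)                 # extreme dim of the query signature
    scores = X @ R[i_star]                    # concomitant estimates of <x, q>
    return np.argsort(-scores)                # indices, best estimate first
```

Because `i_star` depends only on the query, the per-point scores `X @ R[i]` can be precomputed for every direction `i`, which is the property the abstract exploits for precomputed estimators.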
Principal component analysis (PCA) has achieved great success in unsupervised learning by identifying covariance correlations among features. If the data collection fails to capture the covariance information, however, PCA cannot discover meaningful modes. In particular, PCA fails for the spatial Gaussian process (GP) model in the undersampling regime, i.e., when the average distance between neighboring anchor points (spatial features) is greater than the correlation length of the GP. Counterintuitively, by drawing a connection between PCA and the Schr\"odinger equation, we can not only attack the undersampling challenge but also perform the computation efficiently and in a decoupled way, with the proposed algorithm called Schr\"odinger PCA. Our algorithm requires only the variances of features and an estimated correlation length as input, constructs the corresponding Schr\"odinger equation, and solves it to obtain the energy eigenstates, which coincide with the principal components. We also establish the connection of our algorithm to model reduction techniques in the partial differential equation (PDE) community, where the steady-state Schr\"odinger operator is identified as a second-order approximation to the covariance function. Numerical experiments verify the validity and efficiency of the proposed algorithm, showing its potential for unsupervised learning tasks on general graphs and manifolds.
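The mechanics of "eigenstates as principal components" can be sketched on a 1D grid: assemble a discrete Schrödinger operator from a finite-difference Laplacian plus a potential, and take its low-energy eigenvectors as principal modes. The specific potential used here (the inverse local variance) and the operator scaling are illustrative stand-ins; the paper derives the exact form of the operator from the covariance function:

```python
import numpy as np

def schrodinger_modes(variance, corr_len=1.0, n_modes=3):
    """Low-energy eigenstates of a 1D discrete Schrodinger operator.

    H = -(corr_len^2 / 2) * Laplacian + V, with V a diagonal potential
    taken here as 1 / variance (an assumed, illustrative choice).
    Returns the n_modes smallest eigenvalues and their eigenvectors,
    which play the role of principal components in Schrodinger PCA.
    """
    n = len(variance)
    # standard 3-point finite-difference Laplacian on a 1D grid
    lap = (np.diag(-2.0 * np.ones(n))
           + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1))
    V = np.diag(1.0 / np.asarray(variance, dtype=float))
    H = -(corr_len ** 2 / 2.0) * lap + V
    evals, evecs = np.linalg.eigh(H)          # ascending eigenvalues
    return evals[:n_modes], evecs[:, :n_modes]
```

Note that only the variance profile and the correlation length enter the construction, matching the abstract's claim that no sample covariance matrix is needed.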
In this paper, a novel framework is proposed for channel charting (CC)-aided localization in millimeter wave networks. In particular, a convolutional autoencoder model is proposed to estimate the three-dimensional location of wireless user equipment (UE) based on multipath channel state information (CSI) received at different base stations. To learn the radio-geometry map and capture the relative position of each UE, an autoencoder-based channel chart is constructed in an unsupervised manner, such that UEs that are neighbors in physical space remain close in the channel chart. Next, the channel charting model is extended to a semi-supervised framework, in which the encoder and decoder components of the autoencoder are optimized individually, using a labeled CSI dataset with associated location information, to further improve positioning accuracy. Simulation results show that the proposed CC-aided semi-supervised localization yields higher accuracy than existing supervised positioning and conventional unsupervised CC approaches.