This paper analyzes wireless network control for remote estimation of linear time-invariant dynamical systems under various Hybrid Automatic Repeat Request (HARQ) packet retransmission schemes. In conventional HARQ, packet reliability increases gradually with each retransmission; however, every full retransmission maximally increases the Age of Information (AoI) and severely degrades the estimation mean squared error (MSE). We optimize standard HARQ schemes by allowing partial retransmissions that increase packet reliability gradually while limiting AoI growth. In incremental redundancy HARQ, we optimize the retransmission time to enable the early arrival of the next status update. In Chase combining HARQ, since the packet length remains fixed, we allow a retransmission and a new update to share a single time slot via non-orthogonal signaling; non-orthogonal retransmissions increase packet reliability without delaying fresh updates. We formulate a bi-objective optimization problem that combines the proposed variance-of-MSE cost function with the standard long-term average MSE cost to guarantee short-term performance stability. Using a Markov decision process formulation, we find the optimal static and dynamic policies under the proposed HARQ schemes to further improve MSE performance. Simulation results show that the proposed HARQ-based policies are more robust and achieve significantly better and more stable MSE performance than standard HARQ-based policies.
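To make the AoI-MSE coupling concrete, the following standard remote-estimation identity (not taken from the paper; the symbols $A$, $Q$, $P_0$, and the AoI $\Delta_k$ are generic) shows how the open-loop error covariance grows with the age of the latest received estimate:

```latex
% Error covariance of the remote estimator when the latest received
% estimate (covariance P_0) is \Delta_k slots old, for the LTI system
% x_{k+1} = A x_k + w_k with w_k ~ N(0, Q):
\begin{equation*}
  P_k \;=\; A^{\Delta_k} P_0 \left(A^{\Delta_k}\right)^{\top}
        \;+\; \sum_{i=0}^{\Delta_k - 1} A^{i} Q \left(A^{i}\right)^{\top}.
\end{equation*}
% The MSE trace grows monotonically in \Delta_k (geometrically for
% unstable A), which is why limiting AoI growth during retransmissions
% directly improves estimation performance.
```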
Generative adversarial networks (GANs) have achieved remarkable progress on natural images. However, when applying GANs to the remote sensing (RS) image generation task, we discover a striking phenomenon: the GAN model is more sensitive to the amount of training data for RS image generation than for natural image generation. In other words, the quality of generated RS images changes significantly with the number of training categories or samples per category. In this paper, we first analyze this phenomenon through two kinds of toy experiments and conclude that the amount of feature information contained in the GAN model decreases as the training data shrinks. Based on this discovery, we propose two adjustment schemes, namely Uniformity Regularization (UR) and Entropy Regularization (ER), to increase the information learned by the GAN model at the distribution and sample levels, respectively. We demonstrate the effectiveness and versatility of our methods both theoretically and empirically. Extensive experiments on the NWPU-RESISC45 and PatternNet datasets show that our methods outperform well-established models on RS image generation tasks.
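As a purely illustrative sketch of a sample-level entropy term (the actual UR/ER formulations are defined in the paper and may differ), one could penalize low-entropy feature responses of generated samples:

```python
import torch
import torch.nn.functional as F

def entropy_regularizer(features):
    """Hypothetical sample-level entropy term: encourage each generated
    sample's softmax-normalized feature response to carry more
    information. The paper's actual ER formulation may differ."""
    p = F.softmax(features, dim=1)           # (batch, dim) -> distributions
    ent = -(p * torch.log(p + 1e-8)).sum(1)  # per-sample entropy
    return -ent.mean()                       # maximizing entropy => negate

# Added to the generator loss with an assumed weight lambda_er:
# loss_G = adversarial_loss + lambda_er * entropy_regularizer(feat)
```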
Semiparametric models are useful in econometrics, the social sciences, and medicine. In this paper, a new estimator based on least squares is proposed to estimate the direction of the unknown parameters in semiparametric models. The proposed estimator is consistent and has an asymptotic distribution under mild conditions, without knowledge of the form of the link function. Simulations show that the proposed estimator is significantly superior to the maximum score estimator of Manski (1975) for binary response variables. When the error term follows a long-tailed distribution or a distribution with infinite moments, the proposed estimator still performs well. Its application is illustrated with data on the export participation of manufacturers in Guangdong.
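For context, the reason only the direction of the parameters is identifiable is the standard single-index argument: any rescaling of the index can be absorbed by the unknown link function. A generic statement (not the paper's exact model):

```latex
% Single-index model: only the direction of \beta is identified,
% since the link g can absorb any positive scaling of the index.
\begin{equation*}
  \mathbb{E}[Y \mid X] = g(X^{\top}\beta), \qquad
  g(X^{\top}\beta) = \tilde{g}\!\left(X^{\top}(c\beta)\right)
  \ \text{with}\ \tilde{g}(u) = g(u/c),\ c > 0,
\end{equation*}
% hence estimators target \beta / \lVert \beta \rVert without
% knowledge of the form of g.
```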
How to detect a small community in a large network is an interesting problem, including clique detection as a special case, where a naive degree-based $\chi^2$-test was shown to be powerful in the presence of an Erd\H{o}s-R\'enyi background. Using Sinkhorn's theorem, we show that the signal captured by the $\chi^2$-test may be a modeling artifact, and it may disappear once we replace the Erd\H{o}s-R\'enyi model with a broader network model. We show that the recent SgnQ test is more appropriate for such a setting. The test is optimal in detecting communities with sizes comparable to the whole network, but it has never been studied in our setting, which is substantially different and more challenging. Using a degree-corrected block model (DCBM), we establish phase transitions of this testing problem with respect to the size of the small community and the edge densities in the small and large communities. When the size of the small community is larger than $\sqrt{n}$, the SgnQ test is optimal, for it attains the computational lower bound (CLB), the information lower bound for methods allowing polynomial computation time. When the size of the small community is smaller than $\sqrt{n}$, we establish the parameter regime where the SgnQ test has full power and make some conjectures about the CLB. We also study the classical information lower bound (LB) and show that there is always a gap between the CLB and the LB in our range of interest.
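A minimal sketch of the naive degree-based $\chi^2$ idea the abstract refers to (the cited test's exact normalization may differ):

```python
import numpy as np

def degree_chi2(adj):
    """Naive degree-based chi-squared statistic: under an
    Erdos-Renyi null, node degrees concentrate around the mean
    degree, so large deviations suggest a planted dense subgraph.
    A sketch of the idea only; the cited construction may differ."""
    d = adj.sum(axis=1)   # node degrees
    d_bar = d.mean()      # plug-in estimate of the mean degree
    return ((d - d_bar) ** 2).sum() / d_bar

# Degree heterogeneity in a DCBM inflates this statistic even with
# no planted community -- the modeling artifact the abstract notes.
```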
This work considers Gaussian process interpolation with a periodized version of the Mat{\'e}rn covariance function introduced by Stein (22, Section 6.7). Convergence rates are studied for the joint maximum likelihood estimation of the regularity and amplitude parameters when the data are sampled according to the model. The mean integrated squared error is also analyzed with fixed and estimated parameters, showing that maximum likelihood estimation yields asymptotically the same error as if the ground truth were known. Finally, the case where the observed function is a fixed deterministic element of a Sobolev space of continuous functions is also considered, suggesting that bounding assumptions on some parameters can lead to different estimates.
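For reference, the standard (non-periodized) Mat{\'e}rn covariance, whose regularity $\nu$ and amplitude $\sigma^2$ are the parameters estimated jointly here (the paper's periodized parameterization may differ in details):

```latex
% Standard Mat\'ern covariance with lengthscale \rho; the paper
% studies a periodized version of this family.
\begin{equation*}
  k_{\nu,\sigma^2}(t) \;=\; \sigma^2 \,
  \frac{2^{1-\nu}}{\Gamma(\nu)}
  \left(\sqrt{2\nu}\,\frac{|t|}{\rho}\right)^{\!\nu}
  K_{\nu}\!\left(\sqrt{2\nu}\,\frac{|t|}{\rho}\right),
\end{equation*}
% where K_\nu is the modified Bessel function of the second kind;
% \nu controls the regularity and \sigma^2 the amplitude.
```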
In this paper, we propose a new optimal sequential test based on the sum of log-likelihood ratios (SLR), and we also present a CUSUM sequential test (control chart, stopping time) with observation-adjusted control limits (CUSUM-OAL) for quickly and adaptively monitoring a change in the distribution of sequential observations. Two limiting relationships between the optimal test and a series of CUSUM-OAL tests are established. Moreover, we estimate the in-control and out-of-control average run lengths (ARLs) of the CUSUM-OAL test. The theoretical results are illustrated by numerical simulations for detecting mean shifts in the observation sequence.
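A minimal sketch of a CUSUM stopping time with an observation-adjusted control limit, under assumed interfaces (the paper's exact OAL construction may differ):

```python
def cusum_oal(xs, llr, limit):
    """Sketch of a CUSUM stopping time with an observation-adjusted
    control limit. `llr(x)` is the log-likelihood ratio of one
    observation; `limit(x)` returns the control limit for the current
    observation (a constant function recovers the classical CUSUM)."""
    s = 0.0
    for n, x in enumerate(xs, start=1):
        s = max(0.0, s + llr(x))   # classical CUSUM recursion
        if s >= limit(x):          # limit adapts to the observation
            return n               # alarm (stopping) time
    return None                    # no alarm raised

# Example: detecting a mean shift 0 -> 1 in unit-variance Gaussians,
# with llr = lambda x: x - 0.5 and a constant baseline limit.
```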
We study a system in which two-state Markov sources send status updates to a common receiver over a slotted ALOHA random access channel. We characterize the performance of the system in terms of state estimation entropy (SEE), which measures the receiver's uncertainty about the sources' states. Two channel access strategies are considered: a reactive policy that depends on the source behavior and a random one that is independent of it. We prove that the considered policies can be studied using two different hidden Markov models (HMMs) and show through density evolution (DE) analysis that the reactive strategy outperforms the random one in terms of SEE, while the opposite is true for age of information (AoI). Furthermore, we characterize the probability of error in the state estimation at the receiver, considering a maximum a posteriori (MAP) estimator and a low-complexity (decode-and-hold) estimator. Our study provides useful insights into the design trade-offs that emerge when different performance metrics, such as SEE, AoI, or state estimation error probability, are adopted. Moreover, we show how the source statistics significantly impact the system performance.
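A minimal sketch of the low-complexity decode-and-hold estimator as described in the abstract (interfaces assumed):

```python
def decode_and_hold(observations):
    """Decode-and-hold state estimator: whenever a source update is
    decoded, adopt it; otherwise hold the last decoded state.
    `observations` yields the decoded two-state value, or None when
    the slot is erased (e.g., a collision on the ALOHA channel)."""
    estimate = None
    for obs in observations:
        if obs is not None:
            estimate = obs   # fresh update decoded
        yield estimate       # otherwise hold the previous estimate
```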
How to provide information security while fulfilling ultra-reliability and low-latency requirements is one of the major concerns for enabling the next generation of ultra-reliable and low-latency communication services (xURLLC), especially in machine-type communications. In this work, we investigate the reliability-security tradeoff by defining the leakage-failure probability, a metric that jointly characterizes the reliability and security performance of short-packet transmissions. We discover that the system performance can be enhanced by counter-intuitively allocating fewer resources to transmissions with finite blocklength (FBL) codes. To solve the corresponding joint resource allocation problem, we propose an optimization framework that leverages lower-bounded approximations of the decoding error probability in the FBL regime. We characterize the convexity of the reformulated problem and establish an efficient iterative search method whose convergence is guaranteed. To show the extensibility of the framework, we further discuss blocklength allocation schemes under practical reliability-security requirements, as well as transmissions with statistical channel state information (CSI). Numerical results verify the accuracy of the proposed approach and demonstrate the reliability-security tradeoff under various setups.
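For background, FBL analyses of this kind typically build on the normal approximation of the decoding error probability (shown here without the logarithmic correction term; the paper works with lower-bounded variants of such expressions rather than this exact formula):

```latex
% Normal approximation for the decoding error probability at
% blocklength n and rate R, over a channel with capacity C and
% dispersion V:
\begin{equation*}
  \varepsilon \;\approx\; Q\!\left(\sqrt{\tfrac{n}{V}}\,(C - R)\right),
  \qquad Q(x) = \int_x^{\infty} \tfrac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\,dt,
\end{equation*}
% which is in general neither convex nor concave in n -- one reason
% frameworks like the one above work with convex lower-bounded
% approximations for joint resource allocation.
```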
Physical simulations that accurately model reality are crucial for many engineering disciplines such as mechanical engineering and robotic motion planning. In recent years, learned Graph Network Simulators have produced accurate mesh-based simulations while requiring only a fraction of the computational cost of traditional simulators. Yet, the resulting predictors are confined to learning from data generated by existing mesh-based simulators and thus cannot incorporate real-world sensory information such as point cloud data. Because these predictors must simulate complex physical systems from only an initial state, they exhibit high error accumulation in long-term predictions. In this work, we integrate sensory information to ground Graph Network Simulators in real-world observations. In particular, we predict the mesh state of deformable objects by utilizing point cloud data. The resulting model allows for accurate predictions over longer time horizons, even under uncertainties in the simulation, such as unknown material properties. Since point clouds are usually not available at every time step, especially in online settings, we employ an imputation-based model: it uses the additional information when provided and otherwise resorts to a standard Graph Network Simulator. We experimentally validate our approach on a suite of prediction tasks for mesh-based interactions between soft and rigid bodies. Our method exploits the additional point cloud information to accurately predict stable simulations where existing Graph Network Simulators fail.
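A minimal sketch of the imputation-based dispatch described above, with assumed function names:

```python
def imputation_step(mesh_state, point_cloud, gns, grounded_gns):
    """One prediction step of the imputation-based model as described
    in the abstract: when a point cloud is observed for the current
    step, use it to ground the prediction; otherwise fall back to a
    standard Graph Network Simulator. `gns` and `grounded_gns` are
    assumed names for the two learned predictors."""
    if point_cloud is not None:
        return grounded_gns(mesh_state, point_cloud)  # grounded update
    return gns(mesh_state)                            # standard rollout
```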
Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has proved to be important, and several works incorporate fields into their models. In this paper, we propose a novel approach to model the field information effectively and efficiently. The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework and compare it with FFM. Besides pruning the cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the dimensions while preserving model performance. The FmFM model can be further optimized by caching the intermediate vectors, so it takes only thousands of floating-point operations (FLOPs) to make a prediction. Our experimental results show that it can outperform FFM, which is more complex. The FmFM model's performance is also comparable to that of DNN models, which require far more FLOPs at runtime.
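A minimal sketch of the field-matrixed interaction suggested by the model's name and its stated relation to FM/FwFM (shapes are simplified to a shared dimension; the paper additionally supports field-specific embedding dimensions):

```python
import numpy as np

def fmfm_interaction(v, field, M):
    """FmFM pairwise interaction: each feature i has an embedding
    v[i], and each field pair (f, g) has a matrix M[(f, g)] that
    transforms the left embedding before the dot product."""
    score = 0.0
    n = len(v)
    for i in range(n):
        for j in range(i + 1, n):
            fi, fj = field[i], field[j]
            score += (v[i] @ M[(fi, fj)]) @ v[j]  # v_i^T M_{f(i),f(j)} v_j
    return score

# FM is recovered when every M[(f, g)] is the identity, and FwFM when
# M[(f, g)] = r_{f,g} * I for a scalar field-pair weight r_{f,g}.
```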
It is a common paradigm in object detection frameworks to treat all samples equally and aim to maximize average performance. In this work, we revisit this paradigm through a careful study of how different samples contribute to the overall performance measured in terms of mAP. Our study suggests that the samples in each mini-batch are neither independent nor equally important, and therefore a classifier that is better on average does not necessarily yield higher mAP. Motivated by this study, we propose the notion of Prime Samples, those that play a key role in driving detection performance. We further develop a simple yet effective sampling and learning strategy called PrIme Sample Attention (PISA) that directs the focus of the training process towards such samples. Our experiments demonstrate that it is often more effective to focus on prime samples than on hard samples when training a detector. In particular, on the MS COCO dataset, PISA consistently outperforms the random sampling baseline and hard mining schemes, e.g., OHEM and Focal Loss, by more than 1% on both single-stage and two-stage detectors with a strong ResNeXt-101 backbone.