The Age of Information (AoI) has been introduced to capture the notion of freshness in real-time monitoring applications. However, this metric falls short in many scenarios, especially when quantifying the mismatch between the current and the estimated states. To circumvent this issue, in this paper, we adopt the Age of Incorrect Information (AoII) metric, which considers the quantified mismatch between the source and the knowledge at the destination while still accounting for the impact of freshness. To this end, we consider a problem where a central entity pulls information from remote sources that evolve according to a Markovian process. At each time slot, it selects which sources should send their updates. As the scheduler does not know the actual states of the remote sources, it estimates the AoII at each time slot based on the parameters of the Markovian sources. Its goal is to keep the time average of the AoII as small as possible. For that purpose, we develop a scheduling scheme based on Whittle's index policy. To that end, we use the Lagrangian relaxation approach and establish that the dual problem admits an optimal threshold policy. Building on that, we derive the expressions of Whittle's indices. Finally, we provide numerical results to highlight the performance of the derived policy compared to that of the classical AoI metric.
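As a rough illustration of the kind of index-based scheduler described above, the Python sketch below pulls, in each slot, the k sources with the largest index computed from their estimated AoII and transition parameters. The `whittle_index` function is a hypothetical stand-in for the paper's closed-form indices, and the estimated-AoII growth rule is a crude assumption, not the paper's derivation.

```python
import numpy as np

def whittle_index(est_aoii, p_change):
    # Hypothetical stand-in for the paper's closed-form indices: sources with a
    # larger estimated AoII or a more volatile state get higher priority.
    return est_aoii * p_change

def schedule(est_aoii, p_change, k):
    # Pull updates from the k sources with the largest Whittle index.
    return np.argsort(whittle_index(est_aoii, p_change))[-k:]

# Toy run: 5 symmetric binary Markovian sources, 2 pulls per slot.
rng = np.random.default_rng(0)
n, k, horizon = 5, 2, 10_000
p_change = rng.uniform(0.05, 0.4, size=n)   # per-slot probability of a state flip
est_aoii = np.zeros(n)                      # scheduler-side estimated AoII
total = 0.0

for _ in range(horizon):
    pulled = schedule(est_aoii, p_change, k)
    est_aoii[pulled] = 0.0                  # pulled sources are synchronised
    idle = np.setdiff1d(np.arange(n), pulled)
    est_aoii[idle] += p_change[idle]        # crude expected-mismatch growth
    total += est_aoii.sum()

print("time-averaged estimated AoII (summed over sources):", total / horizon)
```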
We consider a network consisting of $n$ nodes that aim to track a continually updating process or event. To disseminate updates about the event to the network, two sources are available, such that information obtained from one source is considered more reliable than that obtained from the other. The nodes wish to have access to information about the event that is not only the latest but also reliable, and they prefer a reliable packet over an unreliable one even when the former is slightly outdated with respect to the latter. We study how such a preference affects the fraction of users with reliable information in the network and their version age of information. We derive analytical equations characterizing these two quantities, the long-term expected fraction of nodes with reliable packets and their long-term expected version age, using stochastic hybrid systems (SHS) modelling, and study their properties. We also compare these results with the case where nodes give more preference to freshness of information than to its reliability. Finally, we present simulation results to verify the theoretical results and to shed further light on the behavior of the above quantities with respect to the system parameters.
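A minimal simulation sketch of this setting, under assumed simplifications (both disseminators always hold the current version, and the adoption rule below is a made-up stand-in for the paper's preference model), can be used to eyeball the two quantities of interest:

```python
import numpy as np

rng = np.random.default_rng(1)
n, horizon = 200, 50_000
lam = 0.2                    # per-slot probability the source publishes a new version
p_rel, p_unr = 0.02, 0.10    # per-node reception probabilities from each disseminator

src_version = 0
node_version = np.zeros(n, dtype=int)
node_reliable = np.zeros(n, dtype=bool)
frac_rel, age_rel = [], []

for _ in range(horizon):
    src_version += rng.random() < lam
    got_rel = rng.random(n) < p_rel
    got_unr = rng.random(n) < p_unr
    # Simplified preference rule: a reliable packet is adopted whenever the node
    # currently holds unreliable content or an older reliable version; an
    # unreliable packet is adopted only if the current content is also
    # unreliable and older.
    adopt_rel = got_rel & (~node_reliable | (node_version < src_version))
    node_version[adopt_rel] = src_version
    node_reliable[adopt_rel] = True
    adopt_unr = got_unr & ~adopt_rel & ~node_reliable & (node_version < src_version)
    node_version[adopt_unr] = src_version

    frac_rel.append(node_reliable.mean())
    if node_reliable.any():
        age_rel.append((src_version - node_version[node_reliable]).mean())

print("long-run fraction of nodes with reliable packets:", np.mean(frac_rel))
print("long-run version age among those nodes          :", np.mean(age_rel))
```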
We consider the problem of optimizing the decisions of a preemption-capable transmitter to minimize the Age of Incorrect Information (AoII) when the communication channel has a random delay. In the system, a transmitter observes a Markovian source and makes decisions based on the system status. Time is slotted and normalized. In each time slot, when the channel is busy, the transmitter decides whether to preempt the ongoing transmission or skip; when the channel is idle, it decides whether to send a new update. At the other end of the channel is a receiver that estimates the state of the Markovian source based on the updates it receives. We consider a generic transmission delay and assume that the delay is independent and identically distributed across updates. This paper aims to optimize the transmitter's decision in each time slot to minimize the AoII under generic time penalty functions. To this end, we first formulate the optimization problem as a Markov decision process and derive analytical expressions for the expected AoII achieved by two canonical preemptive policies. We then prove the existence of an optimal policy and provide a feasible value iteration algorithm to approximate it. However, the value iteration algorithm becomes computationally expensive when high confidence in the approximation is required. Therefore, we analyze the system characteristics under two canonical delay distributions and obtain the corresponding optimal policies theoretically using the policy improvement theorem. Finally, numerical results are presented to illustrate the performance improvements brought about by the preemption capability.
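The value-iteration step mentioned above can be sketched as a generic relative value iteration on a toy AoII chain. The random delay, busy-channel, and preemption details of the paper's model are deliberately omitted here, and the 0.5 transmission cost is added only to make the trade-off non-trivial; this is an algorithmic sketch, not the paper's model.

```python
import numpy as np

def relative_value_iteration(P, c, tol=1e-8, max_iter=100_000):
    """Generic relative value iteration for an average-cost MDP: P[a] is the
    |S| x |S| transition matrix under action a and c[s, a] the stage cost.
    Returns the estimated average cost and a greedy policy."""
    n_s, n_a = c.shape
    V = np.zeros(n_s)
    for _ in range(max_iter):
        Q = c + np.stack([P[a] @ V for a in range(n_a)], axis=1)
        V_new = Q.min(axis=1)
        V_new -= V_new[0]                 # keep iterates bounded (relative values)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    Q = c + np.stack([P[a] @ V for a in range(n_a)], axis=1)
    return Q.min(axis=1)[0], Q.argmin(axis=1)

# Toy AoII chain (no random delay, no busy channel): state = AoII truncated at 20,
# action 1 = transmit (delivered within the slot w.p. 0.8, resetting AoII),
# action 0 = stay idle (a symmetric source drifts out of / back into sync w.p. 0.7).
S, p_ok, p_grow = 21, 0.8, 0.7
P = np.zeros((2, S, S))
for s in range(S):
    nxt = min(s + 1, S - 1)
    P[0, s, nxt] += p_grow
    P[0, s, 0] += 1 - p_grow
    P[1, s, 0] += p_ok + (1 - p_ok) * (1 - p_grow)
    P[1, s, nxt] += (1 - p_ok) * p_grow
c = np.stack([np.arange(S), np.arange(S) + 0.5], axis=1).astype(float)  # 0.5 = send cost
avg_cost, policy = relative_value_iteration(P, c)
print("estimated average cost:", avg_cost)
print("greedy policy (1 = transmit):", policy)
```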
We consider the task of evaluating policies for algorithmic resource allocation through randomized controlled trials (RCTs). Such policies are tasked with optimizing the utilization of limited intervention resources, with the goal of maximizing the benefits derived. Evaluating allocation policies through RCTs is difficult, regardless of the scale of the trial, because the individuals' outcomes are inextricably interlinked through the resource constraints governing the policy decisions. Our key contribution is a new estimator built on a novel concept: retrospectively reshuffling participants across experimental arms at the end of an RCT. We identify conditions under which such reassignments are permissible and can be leveraged to construct counterfactual trials whose outcomes can be accurately ascertained, for free. We prove theoretically that this estimator is more accurate than common estimators based on sample means: it returns an unbiased estimate while simultaneously reducing variance. We demonstrate the value of our approach through empirical experiments on synthetic, semi-synthetic, and real case study data, showing improved estimation accuracy across the board.
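The flavour of such a reshuffling estimator can be illustrated with a toy Monte Carlo study. The permissibility rule used below (untreated participants with need below the policy arm's allocation threshold are regarded as exchangeable across arms) is a deliberately simplistic assumption and is not the paper's conditions or estimator; the point is only to show how averaging over reconstructed counterfactual trials can shrink the spread of the arm-mean estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_trial(n_per_arm=50, k=10, effect=2.0):
    """One toy RCT: the policy arm gives k resources to the highest-need
    participants, the control arm gives them uniformly at random."""
    need = rng.normal(size=2 * n_per_arm)
    noise = rng.normal(size=2 * n_per_arm)
    arm = np.repeat([1, 0], n_per_arm)            # 1 = policy arm, 0 = control arm
    pol, ctl = np.where(arm == 1)[0], np.where(arm == 0)[0]
    treated = np.zeros(2 * n_per_arm, dtype=bool)
    treated[pol[np.argsort(need[pol])[-k:]]] = True          # policy: top-k by need
    treated[rng.choice(ctl, size=k, replace=False)] = True   # control: random k
    outcome = noise + effect * treated * np.maximum(need, 0.0)
    return treated, need, outcome, pol, ctl

def naive_estimate(outcome, pol, ctl):
    return outcome[pol].mean() - outcome[ctl].mean()

def reshuffled_estimate(treated, need, outcome, pol, ctl, n_shuffles=100):
    """Toy reshuffling estimator: untreated participants below the policy arm's
    allocation threshold are pooled, counterfactual trials are built by
    re-splitting this pool, and the naive estimate is averaged over them."""
    thresh = need[pol][treated[pol]].min()
    pool = np.where(~treated & (need < thresh))[0]
    fixed_pol = np.setdiff1d(pol, pool)
    fixed_ctl = np.setdiff1d(ctl, pool)
    n_pool_pol = len(np.intersect1d(pool, pol))
    ests = []
    for _ in range(n_shuffles):
        perm = rng.permutation(pool)
        new_pol = np.concatenate([fixed_pol, perm[:n_pool_pol]])
        new_ctl = np.concatenate([fixed_ctl, perm[n_pool_pol:]])
        ests.append(outcome[new_pol].mean() - outcome[new_ctl].mean())
    return float(np.mean(ests))

naive, reshuf = [], []
for _ in range(1000):
    treated, need, outcome, pol, ctl = run_trial()
    naive.append(naive_estimate(outcome, pol, ctl))
    reshuf.append(reshuffled_estimate(treated, need, outcome, pol, ctl))
print("naive      estimate: mean %.3f  std %.3f" % (np.mean(naive), np.std(naive)))
print("reshuffled estimate: mean %.3f  std %.3f" % (np.mean(reshuf), np.std(reshuf)))
```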
In offline model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of desirability through an expensive, real-world scoring process. Offline MBO tries to approximate this expensive scoring function and use the approximation to evaluate generated designs; however, such evaluation is inexact because one approximation is being evaluated with another. Instead, we ask: if we did have the real-world scoring function at hand, which cheap-to-compute validation metrics would correlate best with it? Since the real-world scoring function is available for simulated MBO datasets, insights obtained from them can be transferred to real-world offline MBO tasks, where the scoring function is expensive to compute. To address this, we propose a conceptual evaluation framework that is amenable to measuring extrapolation, and we apply it to conditional denoising diffusion models. Empirically, we find that two validation metrics -- agreement and Frechet distance -- correlate quite well with the ground truth. When there is high variability in conditional generation, feedback is required in the form of an approximated version of the real-world scoring function. Furthermore, we find that generating high-scoring samples may require heavily weighting the generative model in favour of sample quality, potentially at the cost of sample diversity.
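A sketch of the two validation metrics is given below. The Frechet distance between Gaussian fits of two feature sets follows the standard FID-style formula, while the `agreement` function is only a plausible proxy, since the paper's exact definition may differ; the random features and scores are placeholders.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(x, y):
    """Frechet distance between Gaussian fits of two sample sets (rows = samples):
    ||m1 - m2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}), as in FID-style evaluation."""
    m1, m2 = x.mean(axis=0), y.mean(axis=0)
    s1, s2 = np.cov(x, rowvar=False), np.cov(y, rowvar=False)
    covmean = sqrtm(s1 @ s2).real     # drop tiny imaginary parts from numerical error
    return float(np.sum((m1 - m2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))

def agreement(cond_scores, predicted_scores, tol=0.1):
    """Plausible 'agreement' proxy (the paper's definition may differ): the
    fraction of generated designs whose predicted score lies within tol of the
    score they were conditioned on."""
    return float(np.mean(np.abs(cond_scores - predicted_scores) <= tol))

# Toy usage with random stand-ins for reference / generated design features.
rng = np.random.default_rng(0)
ref_feats = rng.normal(size=(500, 16))
gen_feats = rng.normal(loc=0.2, size=(500, 16))
print("Frechet distance:", frechet_distance(ref_feats, gen_feats))
print("agreement       :", agreement(rng.uniform(size=500), rng.uniform(size=500)))
```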
The impact of player age on performance has received attention across sports. Most research has focused on players' performance at each age, ignoring the reality that age also influences which players receive opportunities to perform. Our manuscript makes two contributions. First, we highlight how selection bias is linked to both (i) which players receive the opportunity to perform in sport and (ii) the ages at which we observe them perform. This approach is used to generate underlying distributions of how players move in and out of sport organizations. Second, motivated by methods for missing data, we propose novel methods for estimating age curves that use both observed and unobserved (imputed) data. We use simulations to compare several approaches for estimating age curves. Imputation-based methods, as well as models that account for individual player skill, tend to produce lower RMSE and age-curve shapes that better match the truth. We implement our approach using data from the National Hockey League.
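The contrast between an observed-only age curve and an imputation-based one can be sketched on simulated data. The selection rule and the alternating imputation loop below are bare-bones stand-ins for the paper's estimators (no uncertainty propagation, no multiple imputation), intended only to illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(3)
ages = np.arange(20, 38)
n_players = 500
true_curve = -0.02 * (ages - 27.0) ** 2            # concave aging curve peaking at 27

skill = rng.normal(size=n_players)
perf = skill[:, None] + true_curve[None, :] + rng.normal(scale=0.5, size=(n_players, len(ages)))
observed = perf > -0.5          # crude selection: weak player-seasons drop out of the league

naive_curve = np.nanmean(np.where(observed, perf, np.nan), axis=0)

# Imputation-based estimate: alternate between estimating player skill and the age
# curve from observed + imputed data.
filled = np.where(observed, perf, 0.0)
for _ in range(100):
    skill_hat = filled.mean(axis=1, keepdims=True)
    curve_hat = (filled - skill_hat).mean(axis=0)
    filled = np.where(observed, perf, skill_hat + curve_hat[None, :])

def centered_rmse(est):
    return np.sqrt(np.mean(((est - est.mean()) - (true_curve - true_curve.mean())) ** 2))

print("age-curve RMSE, observed-only   :", centered_rmse(naive_curve))
print("age-curve RMSE, imputation-based:", centered_rmse(curve_hat))
```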
Self-supervised depth estimation has drawn considerable attention recently, as it can promote the 3D sensing capabilities of self-driving vehicles. However, it intrinsically relies on the photometric consistency assumption, which hardly holds at night. Although various supervised nighttime image enhancement methods have been proposed, their generalization performance in challenging driving scenarios is not satisfactory. To this end, we propose the first method that jointly learns a nighttime image enhancer and a depth estimator, without using ground truth for either task. Our method tightly entangles the two self-supervised tasks using a newly proposed uncertain pixel masking strategy. This strategy originates from the observation that nighttime images suffer not only from underexposed regions but also from overexposed ones. By fitting a bridge-shaped curve to the illumination map distribution, both types of regions are suppressed and the two tasks are bridged naturally. We benchmark the method on two established datasets, nuScenes and RobotCar, and demonstrate state-of-the-art performance on both. Detailed ablations also reveal the mechanism behind our proposal. Last but not least, to mitigate the problem of sparse ground truth in existing datasets, we provide a new photo-realistically enhanced nighttime dataset based on CARLA, which brings meaningful new challenges to the community. Code, data, and models are available at //github.com/ucaszyp/STEPS.
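The masking idea can be sketched as follows; the `bridge_weight` function is a hypothetical bridge-shaped curve with hand-picked parameters, not the curve actually fitted in the paper, and the loss is a plain weighted L1 photometric error rather than the full training objective.

```python
import numpy as np

def bridge_weight(illum, low=0.15, high=0.85, steep=20.0):
    """Hypothetical bridge-shaped weighting over an illumination map in [0, 1]:
    close to 1 for well-exposed pixels and decaying towards 0 in both the
    underexposed (illum -> 0) and overexposed (illum -> 1) tails."""
    rise = 1.0 / (1.0 + np.exp(-steep * (illum - low)))
    fall = 1.0 / (1.0 + np.exp(-steep * (high - illum)))
    return rise * fall

def masked_photometric_loss(pred, target, illum):
    """L1 photometric error, down-weighted on uncertain (badly exposed) pixels so
    that they contribute little to the self-supervised depth objective."""
    err = np.abs(pred - target).mean(axis=-1)     # per-pixel error
    w = bridge_weight(illum)
    return float(np.sum(w * err) / (np.sum(w) + 1e-8))

# Toy usage: random stand-ins for a warped image and its target; the illumination
# map is taken as the per-pixel maximum over RGB channels.
rng = np.random.default_rng(0)
warped, target = rng.uniform(size=(2, 64, 64, 3))
illum = target.max(axis=-1)
print("masked photometric loss:", masked_photometric_loss(warped, target, illum))
```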
Aggregate measures of family planning are used to monitor demand for and usage of contraceptive methods in populations globally, for example as part of the FP2030 initiative. Family planning measures for low- and middle-income countries are typically based on data collected through cross-sectional household surveys. Recently proposed measures account for sexual activity by assessing the distribution of time-between-sex (TBS) in the population of interest. In this paper, we propose a statistical approach to estimate the distribution of TBS using data typically available in low- and middle-income countries, while addressing two major challenges. The first challenge is that information on the timing of sex is typically limited to women's time-since-last-sex (TSLS) data collected in the cross-sectional survey. In our proposed approach, we adopt the current duration method to estimate the distribution of TBS from the available TSLS data, from which the frequency of sex at the population level can be derived. The second challenge is that the observed TSLS data are subject to reporting issues: they can be reported in different units and may be rounded off. To apply the current duration approach and account for these reporting issues, we develop a flexible Bayesian model and provide a detailed technical description of the proposed modeling approach.
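The core of the current duration method can be sketched as follows: if T denotes the time between sex with survival function S and mean E[T], then under stationarity the observed time-since-last-sex has density S(t)/E[T], so a parametric TBS model can be fitted to TSLS data by maximum likelihood. The lognormal choice and the clean (non-heaped, single-unit) data below are illustrative assumptions; the paper's Bayesian model is more flexible and explicitly handles the reporting issues.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def current_duration_logpdf(t, mu, sigma):
    """Log-density of time-since-last-sex under the current duration approach:
    density S(t) / E[T], with T taken lognormal purely for illustration."""
    dist = lognorm(s=sigma, scale=np.exp(mu))
    return np.log(dist.sf(t) + 1e-300) - np.log(dist.mean())

def fit_tbs(tsls):
    # Maximum likelihood over (mu, log sigma) of the lognormal TBS distribution.
    nll = lambda p: -np.sum(current_duration_logpdf(tsls, p[0], np.exp(p[1])))
    res = minimize(nll, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

# Toy data: simulate lognormal times-between-sex, then draw length-biased
# intervals and uniform positions within them to mimic cross-sectional TSLS.
rng = np.random.default_rng(4)
true_mu, true_sigma = np.log(7.0), 0.6
gaps = rng.lognormal(true_mu, true_sigma, size=20000)
picked = rng.choice(len(gaps), size=2000, p=gaps / gaps.sum())   # length-biased pick
tsls = rng.uniform(0, gaps[picked])                              # uniform within the gap
mu_hat, sigma_hat = fit_tbs(tsls)
print("true (mu, sigma):", true_mu, true_sigma, " fitted:", mu_hat, sigma_hat)
```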
Missing data is common in datasets collected in various areas, such as medicine, sports, and finance. To enable proper and reliable analyses of such data, the missing values are often imputed, and it is important that the method used achieves a low root mean square error (RMSE) between the imputed and the true values. In addition, for some critical applications, it is often also a requirement that the logic behind the imputation be explainable, which is especially difficult for complex methods, for example those based on deep learning. This motivates us to introduce a conditional Distribution-based Imputation of Missing Values (DIMV) algorithm. This approach works by finding the conditional distribution of a feature with missing entries given the fully observed features. As illustrated in the paper, DIMV (i) gives a low RMSE for the imputed values compared to state-of-the-art methods under comparison; (ii) is explainable; (iii) can provide an approximate confidence region for the missing values in a given sample; (iv) works for both small- and large-scale data; (v) in many scenarios, does not require a huge number of parameters, unlike deep learning approaches, and can therefore be used on mobile devices or in web browsers; and (vi) is robust to departures from the normality assumption on which its theoretical grounding relies. In addition to DIMV, we also introduce the DPER* algorithm, which improves the speed of DPER for estimating the mean and covariance matrix from the data, and we confirm the speed-up via experiments.
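The conditional-distribution step of such an approach can be sketched with a Gaussian model: each missing block is imputed by its conditional mean given the observed entries, and the conditional variances provide approximate confidence regions. The ridge term and the use of complete-data moments below are simplifying assumptions; in the paper this step is paired with DPER-style estimation of the mean and covariance from the incomplete data itself.

```python
import numpy as np

def conditional_gaussian_impute(X, mu, Sigma, ridge=1e-6):
    """Impute NaN entries of X with the conditional Gaussian mean given the
    observed entries of each row, and return per-entry conditional variances
    (approximate confidence regions under the normality assumption)."""
    X_imp = X.copy()
    cond_var = np.zeros_like(X)
    for i, row in enumerate(X):
        m = np.isnan(row)
        if not m.any():
            continue
        o = ~m
        if not o.any():                   # row entirely missing: fall back to marginal
            X_imp[i] = mu
            cond_var[i] = np.diag(Sigma)
            continue
        S_oo = Sigma[np.ix_(o, o)] + ridge * np.eye(int(o.sum()))
        S_mo = Sigma[np.ix_(m, o)]
        X_imp[i, m] = mu[m] + S_mo @ np.linalg.solve(S_oo, row[o] - mu[o])
        cond_cov = Sigma[np.ix_(m, m)] - S_mo @ np.linalg.solve(S_oo, S_mo.T)
        cond_var[i, m] = np.diag(cond_cov)
    return X_imp, cond_var

# Toy usage: correlated Gaussian data with ~20% of entries missing at random.
rng = np.random.default_rng(5)
n, d = 1000, 5
A = rng.normal(size=(d, d))
X_full = rng.multivariate_normal(np.zeros(d), A @ A.T + np.eye(d), size=n)
mask = rng.random((n, d)) < 0.2
X_miss = np.where(mask, np.nan, X_full)
X_imp, _ = conditional_gaussian_impute(X_miss, X_full.mean(axis=0),
                                        np.cov(X_full, rowvar=False))
print("imputation RMSE:", np.sqrt(np.mean((X_imp[mask] - X_full[mask]) ** 2)))
```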
Many models for point process data are defined through a thinning procedure where locations of a base process (often Poisson) are either kept (observed) or discarded (thinned). In this paper, we go back to the fundamentals of the distribution theory for point processes and provide a colouring theorem that characterizes the joint density of thinned and observed locations in any such model. In practice, the marginal model of observed points is often intractable, but thinned locations can be instantiated from their conditional distribution, and typical data augmentation schemes can be employed to circumvent this problem. Such approaches have been employed in recent publications, but conceptual flaws have been introduced in this literature. We concentrate on one example, the so-called sigmoidal Gaussian Cox process, and apply our general theory to resolve contradictory viewpoints in the data augmentation step of the inference procedures therein. Finally, we provide a multitype extension of this process and conduct Bayesian inference on data consisting of the positions of two different species of trees in Lansing Woods, Michigan.
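A minimal sketch of the thinning construction underlying the sigmoidal Gaussian Cox process is given below: a homogeneous Poisson base process is thinned with retention probability given by a sigmoid, and both kept (observed) and discarded (thinned) locations are returned, since it is their joint density that the colouring theorem characterises. The function g here is a fixed stand-in for a Gaussian process draw.

```python
import numpy as np

def simulate_sgcp_1d(lam_star, g, T, rng):
    """Simulate a 1-D sigmoidal Cox process on [0, T] by thinning: draw a
    homogeneous Poisson base process of rate lam_star, keep each point x with
    probability sigmoid(g(x)), and return both kept and thinned locations."""
    n = rng.poisson(lam_star * T)
    base = rng.uniform(0.0, T, size=n)
    keep = rng.random(n) < 1.0 / (1.0 + np.exp(-g(base)))
    return base[keep], base[~keep]

rng = np.random.default_rng(6)
g = lambda x: 2.0 * np.sin(x) - 0.5          # stand-in for a GP sample path
observed, thinned = simulate_sgcp_1d(lam_star=50.0, g=g, T=10.0, rng=rng)
print(len(observed), "observed points,", len(thinned), "thinned points")
```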
Over the past few years, the rapid development of deep learning technologies for computer vision has greatly promoted the performance of medical image segmentation (MedISeg). However, recent MedISeg publications usually focus on presenting their major contributions (e.g., network architectures, training strategies, and loss functions) while unwittingly ignoring some marginal implementation details (also known as "tricks"), leading to a potential problem of unfair experimental comparisons. In this paper, we collect a series of MedISeg tricks for different model implementation phases (i.e., pre-training, data pre-processing, data augmentation, model implementation, model inference, and result post-processing), and experimentally explore the effectiveness of these tricks on consistent baseline models. Compared to paper-driven surveys that only focus on the advantages and limitations of segmentation models, our work provides a large number of solid experiments and is more technically operable. With extensive experimental results on both representative 2D and 3D medical image datasets, we explicitly clarify the effect of these tricks. Moreover, based on the surveyed tricks, we also open-source a strong MedISeg repository, in which each component is plug-and-play. We believe that this work not only completes a comprehensive and complementary survey of state-of-the-art MedISeg approaches, but also offers a practical guide for addressing future medical image processing challenges, including but not limited to small-dataset learning, class-imbalance learning, multi-modality learning, and domain adaptation. The code has been released at: //github.com/hust-linyi/MedISeg
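As one concrete example of an inference-phase trick of this kind, the sketch below applies flip-based test-time augmentation to a placeholder segmentation model; it is illustrative only and the `model` here is a dummy, not a component of the released repository.

```python
import numpy as np

def tta_flip_inference(model, image):
    """Test-time augmentation: average the model's probability maps over
    horizontal/vertical flips. `model` maps an (H, W, C) image to an (H, W)
    foreground-probability map."""
    variants = [
        (image, lambda p: p),
        (image[:, ::-1], lambda p: p[:, ::-1]),
        (image[::-1, :], lambda p: p[::-1, :]),
        (image[::-1, ::-1], lambda p: p[::-1, ::-1]),
    ]
    probs = [undo(model(np.ascontiguousarray(img))) for img, undo in variants]
    return np.mean(probs, axis=0)

# Toy usage with a dummy "model" (soft threshold on mean intensity).
rng = np.random.default_rng(7)
dummy_model = lambda x: 1.0 / (1.0 + np.exp(-(x.mean(axis=-1) - 0.5)))
image = rng.uniform(size=(128, 128, 3))
print("TTA probability map shape:", tta_flip_inference(dummy_model, image).shape)
```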