This paper addresses density-based multi-sensor cooperative fusion using random finite set (RFS) type multi-object densities (MODs). Existing fusion methods use scalar weights to characterize the relative information confidence among the local MODs, so that the contribution of each local MOD to the fused global MOD can be tuned by adjusting these weights. Our analysis shows that this fusion mechanism based on a single scalar coefficient can be oversimplified for practical scenarios, as the information confidence of an MOD is complex and usually space-varying, owing to imperfect sensing capability and the various influences of the surveillance environment. Consequently, severe fusion performance degradation can be observed when these scalar weights fail to reflect the actual situation. We make two contributions towards addressing this problem. First, we propose a novel heterogeneous fusion method to perform information averaging among local RFS MODs. By factorizing each local MOD into a number of smaller sub-MODs, it transforms the original complicated fusion problem into a much simpler, parallelizable multi-cluster fusion problem. Second, as the proposed fusion strategy is a general procedure without particular model assumptions, we further derive the detailed heterogeneous fusion equations, under a centralized network architecture, for both the probability hypothesis density (PHD) filter and the multi-Bernoulli (MB) filter. Gaussian mixture implementations of the proposed fusion algorithms are also presented. Various numerical experiments are designed to demonstrate the efficacy of the proposed fusion methods.
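As a point of reference for the scalar-weight mechanism discussed above, the following minimal NumPy sketch (all names and weights illustrative) shows arithmetic-average fusion of two Gaussian-mixture PHD intensities, where a single scalar coefficient per sensor scales its entire mixture; it is this space-invariant weighting, not the proposed heterogeneous fusion, that the paper argues is oversimplified.

```python
import numpy as np

def aa_fuse_gm_phd(gm_a, gm_b, w_a=0.5, w_b=0.5):
    """Arithmetic-average fusion of two Gaussian-mixture PHD intensities.

    Each intensity is a list of (weight, mean, cov) components. A single
    scalar coefficient per sensor scales its whole mixture, i.e. the
    space-invariant confidence assumption questioned in the paper.
    """
    fused = [(w_a * w, m, P) for (w, m, P) in gm_a]
    fused += [(w_b * w, m, P) for (w, m, P) in gm_b]
    return fused

# toy 1-D example with two local GM-PHD intensities
gm_sensor1 = [(0.9, np.array([0.0]), np.eye(1)), (0.4, np.array([5.0]), np.eye(1))]
gm_sensor2 = [(0.8, np.array([0.2]), 2 * np.eye(1)), (0.1, np.array([9.0]), np.eye(1))]
print(aa_fuse_gm_phd(gm_sensor1, gm_sensor2, w_a=0.6, w_b=0.4))
```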
We introduce the notion of universal graphs as a tool for constructing algorithms that solve games of infinite duration, such as parity games and mean payoff games. In the first part we develop the theory of universal graphs, with two goals: showing an equivalence and normalisation result between different recently introduced related models, and constructing generic value iteration algorithms for any positionally determined objective. In the second part we give four applications: to parity games, to mean payoff games, and to combinations of them (in the form of disjunctions of objectives). For each of these four cases we construct algorithms matching or improving on the best known time and space complexity.
Understanding complex social interactions among agents is a key challenge for trajectory prediction. Most existing methods consider interactions only between pairs of traffic agents or within a local area, whereas real interactions are unconstrained, potentially involving an uncertain number of agents and non-local areas simultaneously. Moreover, they treat interactions among heterogeneous traffic agents, i.e., agents of different categories, in the same way, neglecting people's diverse reaction patterns toward traffic agents of different categories. To address these problems, we propose a simple yet effective Unlimited Neighborhood Interaction Network (UNIN), which predicts trajectories of heterogeneous agents across multiple categories. Specifically, the proposed unlimited neighborhood interaction module generates the fused features of all agents involved in an interaction simultaneously, and is adaptive to any number of agents and any range of interaction area. Meanwhile, a hierarchical graph attention module is proposed to capture both category-to-category and agent-to-agent interactions. Finally, the parameters of a Gaussian mixture model are estimated for generating the future trajectories. Extensive experimental results on benchmark datasets demonstrate a significant performance improvement of our method over state-of-the-art methods.
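As an illustration of the final step described above, the NumPy sketch below shows how future positions could be sampled once per-step Gaussian-mixture parameters have been estimated; all names and values are illustrative, and this is not the authors' decoder.

```python
import numpy as np

def sample_future_positions(pis, mus, sigmas, n_samples=20, rng=None):
    """Draw future 2-D positions from a predicted Gaussian mixture.

    pis:    (K,) mixture weights summing to 1
    mus:    (K, 2) component means (x, y offsets)
    sigmas: (K, 2, 2) component covariance matrices
    """
    rng = np.random.default_rng() if rng is None else rng
    comps = rng.choice(len(pis), size=n_samples, p=pis)   # pick components
    return np.stack([rng.multivariate_normal(mus[k], sigmas[k]) for k in comps])

# toy example: a two-component mixture predicted for one agent at one time step
pis = np.array([0.7, 0.3])
mus = np.array([[1.0, 0.5], [0.2, 1.5]])
sigmas = np.array([0.1 * np.eye(2), 0.2 * np.eye(2)])
print(sample_future_positions(pis, mus, sigmas, n_samples=5))
```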
The paper describes a new class of capture-recapture models for closed populations when individual covariates are available. The novelty consists in combining a latent class model for capture probabilities, in which the class weights and the conditional distributions given the latent class may depend on covariates, with a model for the marginal distribution of the available covariates as in Liu et al. (2017, Biometrika). In addition, the conditional distributions given the latent class and covariates are allowed to account for any general form of serial dependence. A Fisher scoring algorithm for maximum likelihood estimation is presented, and a result based on the implicit function theorem is used to show that the marginal distribution of observed covariates is uniquely determined once an estimate of the probabilities of never being captured is available. Asymptotic results are outlined, and a procedure for constructing likelihood-based confidence intervals for the population total is presented. Two examples with real data illustrate the proposed approach.
In Bayesian density estimation, a question of interest is how the number of components in a finite mixture model grows with the number of observations. We provide a novel perspective on this question by using results from stochastic geometry to show that the growth rate of the expected number of components of a finite mixture model whose components belong to the unit simplex $\Delta^{J-1}$ of the Euclidean space $\mathbb{R}^J$ is $(\log n)^{J-1}$. We also provide a central limit theorem for the number of components. In addition, we relate our model to a classical non-parametric density estimator based on a P\'olya tree. Combining the latter with techniques from Choquet theory, we are able to retrieve the mixture weights. We also give the rate of convergence of the P\'olya tree posterior to the Dirac measure on the weights. We further present an algorithm to correctly specify the number of components in a latent Dirichlet allocation (LDA) analysis.
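For a quick sense of the stated rate, the snippet below simply evaluates $(\log n)^{J-1}$ for a few values of $n$ and $J$; constants and lower-order terms are ignored.

```python
import numpy as np

# Illustration of the stated growth rate: the expected number of mixture
# components scales as (log n)^(J-1) for components on the simplex in R^J.
for J in (2, 3, 5):
    for n in (10**3, 10**6, 10**9):
        print(f"J={J}, n={n:.0e}: (log n)^(J-1) = {np.log(n) ** (J - 1):.1f}")
```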
I propose a new type of confidence interval for correct asymptotic inference after using data to select a model of interest, without assuming that any model is correctly specified. This hybrid confidence interval is constructed by combining techniques from the selective inference and post-selection inference literatures to yield a short confidence interval across a wide range of data realizations. I show that hybrid confidence intervals have correct asymptotic coverage, uniformly over a large class of probability distributions that do not bound scaled model parameters. I illustrate the use of these confidence intervals in the problem of inference after using the LASSO objective function to select a regression model of interest. Evidence of their desirable length and coverage properties in small samples is provided via a set of Monte Carlo experiments covering a variety of data distributions, as well as an empirical application to the predictors of diabetes disease progression.
Partially recorded data are frequently encountered in many applications and are usually clustered by first removing incomplete cases or features with missing values, or by imputing missing values, followed by application of a clustering algorithm to the resulting altered dataset. Here, we develop clustering methodology through a model-based approach using the marginal density of the observed values, assuming a finite mixture model of multivariate $t$ distributions. We compare our approximate algorithm to the corresponding full expectation-maximization (EM) approach that considers the missing values in the incomplete dataset under a missing at random (MAR) assumption, as well as to case deletion and imputation methods. Since only the observed values are utilized, our approach is computationally more efficient than imputation or full EM. Simulation studies demonstrate that our approach recovers the true cluster partition better than case deletion and imputation under various missingness mechanisms, and is at least competitive with the full EM approach even when MAR assumptions are violated. Our methodology is demonstrated on a problem of clustering gamma-ray bursts and is implemented at //github.com/emilygoren/MixtClust.
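The key computational ingredient described above, evaluating the marginal density of only the observed coordinates, can be sketched as follows using the closure of the multivariate $t$ family under marginalisation; the mixture weights and the EM-style updates are omitted, and all names are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_t

def observed_loglik(x, mu, scale, df):
    """Log-density of the observed entries of x under a multivariate t.

    The observed coordinates of a multivariate t vector follow a
    multivariate t with the same df and the corresponding sub-vector of mu
    and sub-matrix of the scale matrix.
    """
    obs = ~np.isnan(x)                      # mask of observed coordinates
    if not obs.any():
        return 0.0                          # fully missing case contributes nothing
    return multivariate_t.logpdf(
        x[obs], loc=mu[obs], shape=scale[np.ix_(obs, obs)], df=df
    )

# toy example: 3-D observation with one missing coordinate
x = np.array([1.2, np.nan, -0.5])
mu = np.zeros(3)
scale = np.eye(3)
print(observed_loglik(x, mu, scale, df=4.0))
```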
Lymphoma detection and segmentation from whole-body Positron Emission Tomography/Computed Tomography (PET/CT) volumes are crucial for surgical indication and radiotherapy. Designing automatic segmentation methods capable of effectively exploiting the information from PET and CT as well as resolving their uncertainty remains a challenge. In this paper, we propose a lymphoma segmentation model using a UNet with an evidential PET/CT fusion layer. Single-modality volumes are trained separately to obtain initial segmentation maps, and an evidential fusion layer is proposed to fuse the two pieces of evidence using Dempster-Shafer theory (DST). Moreover, a multi-task loss function is proposed: in addition to the Dice loss for PET and CT segmentation, a loss based on the concordance between the two segmentations is added to constrain the final segmentation. We evaluate our proposal on a database of polycentric PET/CT volumes of patients treated for lymphoma, delineated by experts. Our method achieves accurate segmentation results, with a Dice score of 0.726, without any user interaction. Quantitative results show that our method is superior to state-of-the-art methods.
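The evidential fusion step can be illustrated with a generic Dempster's rule combination of two mass functions over the binary frame {tumor, background} plus ignorance; this is a minimal sketch of the principle, not the paper's exact fusion layer, and all names are illustrative.

```python
import numpy as np

def dempster_combine(m_pet, m_ct):
    """Combine two mass functions on {tumor, background, ignorance} with
    Dempster's rule; inputs have shape (..., 3) = [m_tumor, m_bg, m_theta]."""
    t1, b1, u1 = m_pet[..., 0], m_pet[..., 1], m_pet[..., 2]
    t2, b2, u2 = m_ct[..., 0], m_ct[..., 1], m_ct[..., 2]
    conflict = t1 * b2 + b1 * t2                      # contradictory evidence
    k = 1.0 - conflict                                # normalisation constant
    t = (t1 * t2 + t1 * u2 + u1 * t2) / k
    b = (b1 * b2 + b1 * u2 + u1 * b2) / k
    u = (u1 * u2) / k
    return np.stack([t, b, u], axis=-1)

# toy voxel: PET strongly suggests tumor, CT is uncertain
m_pet = np.array([0.7, 0.1, 0.2])
m_ct = np.array([0.3, 0.2, 0.5])
print(dempster_combine(m_pet, m_ct))
```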
Human pose estimation - the process of recognizing human keypoints in a given image - is one of the most important tasks in computer vision and has a wide range of applications including movement diagnostics, surveillance, and self-driving vehicles. The accuracy of human keypoint prediction has been steadily improving thanks to the burgeoning development of deep learning. Most existing methods solve human pose estimation by generating heatmaps in which the i-th heatmap indicates the location confidence of the i-th keypoint. In this paper, we introduce novel network structures, referred to as multiresolution representation learning, for human keypoint prediction. At different resolutions in the learning process, our networks branch off and use extra layers to learn heatmap generation. We first consider architectures that generate the multiresolution heatmaps after obtaining the lowest-resolution feature maps. Our second approach allows learning during the feature-extraction process, in which heatmaps are generated at each resolution of the feature extractor. The first and second approaches are referred to as multi-resolution heatmap learning and multi-resolution feature map learning, respectively. Our architectures are simple yet effective, achieving good performance. We conducted experiments on two common benchmarks for human pose estimation: the MS-COCO and MPII datasets.
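The following hypothetical PyTorch sketch illustrates the general idea of branching heatmap heads off feature maps at several resolutions and upsampling them to a common heatmap size; it is not the authors' exact architecture, and all module names and sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResolutionHeatmapHead(nn.Module):
    """Attach a 1x1 heatmap branch to feature maps at several resolutions
    and upsample every branch output to a common heatmap size."""

    def __init__(self, in_channels, num_keypoints, out_size):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c, num_keypoints, kernel_size=1) for c in in_channels
        )
        self.out_size = out_size

    def forward(self, features):
        # features: list of tensors, one per resolution, channels as in in_channels
        return [
            F.interpolate(branch(f), size=self.out_size,
                          mode="bilinear", align_corners=False)
            for branch, f in zip(self.branches, features)
        ]  # each output can be supervised against the same target heatmaps

# toy usage: three feature resolutions for 17 COCO keypoints
head = MultiResolutionHeatmapHead([256, 128, 64], num_keypoints=17, out_size=(64, 48))
feats = [torch.randn(1, 256, 16, 12), torch.randn(1, 128, 32, 24), torch.randn(1, 64, 64, 48)]
print([h.shape for h in head(feats)])
```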
Detecting objects in aerial images is challenging for at least two reasons: (1) target objects such as pedestrians are very small in pixels, making them hard to distinguish from the surrounding background; and (2) targets are in general sparsely and non-uniformly distributed, making detection very inefficient. In this paper, we address both issues, inspired by the observation that these targets are often clustered. In particular, we propose a Clustered Detection (ClusDet) network that unifies object clustering and detection in an end-to-end framework. The key components of ClusDet are a cluster proposal sub-network (CPNet), a scale estimation sub-network (ScaleNet), and a dedicated detection network (DetecNet). Given an input image, CPNet produces object cluster regions and ScaleNet estimates object scales for these regions. Then, each scale-normalized cluster region is fed into DetecNet for object detection. ClusDet has several advantages over previous solutions: (1) it greatly reduces the number of chips for final object detection and hence achieves high running-time efficiency; (2) the cluster-based scale estimation is more accurate than previously used single-object-based estimates, and hence effectively improves the detection of small objects; and (3) the final DetecNet is dedicated to clustered regions and implicitly models the prior context information, so as to boost detection accuracy. The proposed method is tested on three popular aerial image datasets: VisDrone, UAVDT, and DOTA. In all experiments, ClusDet achieves promising performance in comparison with state-of-the-art detectors. Code will be available at \url{//github.com/fyangneil}.
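A minimal sketch of the crop-then-detect idea is given below, with a placeholder detector and illustrative parameter names; it only mirrors the described pipeline (cluster regions, scale normalization, detection, mapping boxes back) and is not the ClusDet implementation.

```python
import numpy as np
import cv2

def detect_in_clusters(image, cluster_boxes, est_scales, detector, canonical=32.0):
    """Crop each proposed cluster region, rescale it so the estimated object
    size becomes `canonical` pixels, run a detector on the crop, and map the
    resulting boxes back to full-image coordinates.

    `detector` is a placeholder callable returning (x0, y0, x1, y1, score)
    boxes in (resized) crop coordinates.
    """
    results = []
    for (cx0, cy0, cx1, cy1), obj_size in zip(cluster_boxes, est_scales):
        crop = image[cy0:cy1, cx0:cx1]
        zoom = canonical / max(obj_size, 1.0)          # scale normalization
        crop = cv2.resize(crop, None, fx=zoom, fy=zoom)
        for x0, y0, x1, y1, score in detector(crop):
            results.append((x0 / zoom + cx0, y0 / zoom + cy0,
                            x1 / zoom + cx0, y1 / zoom + cy0, score))
    return results

# toy usage with a dummy detector that returns no boxes
img = np.zeros((800, 800, 3), dtype=np.uint8)
dummy_detector = lambda crop: []
print(detect_in_clusters(img, [(100, 100, 300, 300)], [8.0], dummy_detector))
```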
Network embedding has attracted considerable research attention recently. However, existing methods are incapable of handling billion-scale networks, because they are computationally expensive and, at the same time, difficult to accelerate with distributed computing schemes. To address these problems, we propose RandNE, a novel and simple billion-scale network embedding method. Specifically, we propose a Gaussian random projection approach to map the network into a low-dimensional embedding space while preserving the high-order proximities between nodes. To reduce the time complexity, we design an iterative projection procedure that avoids the explicit calculation of the high-order proximities. Theoretical analysis shows that our method is extremely efficient and friendly to distributed computing schemes, without any communication cost in the calculation. We demonstrate the efficacy of RandNE over state-of-the-art methods in network reconstruction and link prediction tasks on multiple datasets with different scales, ranging from thousands to billions of nodes and edges.
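The iterative projection idea can be sketched as follows: a Gaussian random matrix is propagated along the sparse adjacency matrix, and the intermediate results are combined with preset weights, so high-order proximities are captured without ever forming matrix powers explicitly. This is a simplified illustration with assumed parameter names; the actual RandNE additionally orthogonalizes the random matrix and uses specific weighting and transition-matrix choices.

```python
import numpy as np
import scipy.sparse as sp

def iterative_random_projection(adj, dim=128, order=3,
                                weights=(1.0, 1.0, 0.1, 0.01), seed=0):
    """Iterative Gaussian random projection in the spirit of RandNE.

    U_0 is a Gaussian random matrix; U_k = A @ U_{k-1} propagates it along
    edges, and the embedding is a weighted sum of the U_k, so A^k is never
    formed explicitly.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    U = rng.normal(scale=1.0 / np.sqrt(dim), size=(n, dim))
    emb = weights[0] * U
    for k in range(1, order + 1):
        U = adj @ U                 # one sparse-times-dense product per order
        emb = emb + weights[k] * U
    return emb

# toy example: a random sparse undirected graph with 1000 nodes
A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
A = A + A.T                         # symmetrize to mimic an undirected adjacency
print(iterative_random_projection(A, dim=32, order=3).shape)
```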