In this paper we derive a new capability for robots to measure relative direction, or Angle-of-Arrival (AOA), to other robots operating in non-line-of-sight and unmapped environments with occlusions, without requiring external infrastructure. We do so by capturing all of the paths that a WiFi signal traverses as it travels from a transmitting to a receiving robot, which we term an AOA profile. The key intuition is to "emulate antenna arrays in the air" as the robots move in 3D space, a method akin to Synthetic Aperture Radar (SAR). The main contributions include the development of i) a framework that accommodates arbitrary 3D trajectories, as well as continuous mobility of all robots, while computing AOA profiles, and ii) an accompanying analysis that provides a lower bound on the variance of AOA estimation as a function of robot trajectory geometry, based on the Cramér-Rao Bound. This is a critical distinction from previous work on SAR, which restricts robot mobility to prescribed motion patterns, does not generalize to 3D space, and/or requires transmitting robots to be static during data acquisition periods. Our method results in more accurate AOA profiles and thus better AOA estimation, and formally characterizes this observation as the informativeness of the trajectory, a computable quantity for which we derive a closed form. All theoretical developments are substantiated by extensive simulation and hardware experiments. We also show that our formulation can be used with an off-the-shelf trajectory estimation sensor. Finally, we demonstrate the performance of our system on a multi-robot dynamic rendezvous task.
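The "antenna array in the air" idea can be illustrated with a minimal 2D sketch: coherently match the channel phase measured along a robot trajectory against candidate steering directions, as in SAR. The carrier, geometry, and trajectory below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical SAR-style AOA profile in 2D. A receiver moves along a curved
# trajectory while a far-away transmitter sits at a fixed bearing; combining
# the per-position channel phases yields a profile peaked at that bearing.

wavelength = 0.125                      # ~2.4 GHz WiFi carrier, metres
k = 2 * np.pi / wavelength

true_angle = np.deg2rad(40.0)           # bearing of the transmitting robot
tx = 100.0 * np.array([np.cos(true_angle), np.sin(true_angle)])

# Receiver positions sampled along an arbitrary (curved) local trajectory.
t = np.linspace(0.0, 1.0, 60)
positions = np.stack([0.3 * t, 0.2 * np.sin(4.0 * t)], axis=1)

# Noiseless channel phase at each position.
h = np.exp(1j * k * np.linalg.norm(tx - positions, axis=1))

# Match measured phases against steering vectors over candidate angles.
angles = np.linspace(0.0, np.pi, 721)
dirs = np.stack([np.cos(angles), np.sin(angles)])        # shape (2, 721)
profile = np.abs(h @ np.exp(1j * k * (positions @ dirs))) ** 2

est_deg = np.rad2deg(angles[np.argmax(profile)])         # ~40 degrees
```

Because the trajectory is curved rather than linear, the synthetic aperture spans two dimensions and the profile has a single unambiguous peak over the search range.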
Despite the recent development of methods dealing with partially observed epidemic dynamics (unobserved model coordinates, discrete and noisy outbreak data), limitations remain in practice, mainly related to the quantity of augmented data and calibration of numerous tuning parameters. In particular, as coordinates of dynamic epidemic models are coupled, the presence of unobserved coordinates leads to a statistically difficult problem. The aim is to propose an easy-to-use and general inference method that is able to tackle these issues. First, using the properties of epidemics in large populations, a two-layer model is constructed. Via a diffusion-based approach, a Gaussian approximation of the epidemic density-dependent Markovian jump process is obtained, representing the state model. The observational model, consisting of noisy observations of certain model coordinates, is approximated by Gaussian distributions. Then, an inference method based on an approximate likelihood using Kalman filtering recursion is developed to estimate parameters of both the state and observational models. The performance of estimators of key model parameters is assessed on simulated data of SIR epidemic dynamics for different scenarios with respect to the population size and the number of observations. This performance is compared with that obtained using the well-known maximum iterated filtering method. Finally, the inference method is applied to a real data set on an influenza outbreak in a British boarding school in 1978.
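The Kalman filtering recursion behind the approximate likelihood can be sketched as follows. The linearized S/I transition matrix, noise levels, and dimensions below are toy assumptions standing in for the paper's Gaussian diffusion approximation, not its actual model.

```python
import numpy as np

# Minimal sketch: a linear-Gaussian state model (standing in for the Gaussian
# approximation of the epidemic jump process) with noisy observation of one
# coordinate, and the Kalman recursion that accumulates the log-likelihood.

def kalman_loglik(y, A, Q, H, R, x0, P0):
    """Log-likelihood of observations y under a linear-Gaussian state model."""
    x, P, ll = x0, P0, 0.0
    for yt in y:
        # Predict.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update with the innovation and its covariance.
        S = H @ P @ H.T + R
        v = yt - H @ x
        ll += -0.5 * (v @ np.linalg.solve(S, v) + np.log(np.linalg.det(S))
                      + len(yt) * np.log(2 * np.pi))
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ v
        P = P - K @ H @ P
    return ll

# Toy example: 2D state (S, I) with only the infected coordinate observed.
rng = np.random.default_rng(0)
A = np.array([[0.99, -0.05], [0.01, 0.96]])   # assumed linearized dynamics
Q = 0.1 * np.eye(2)
H = np.array([[0.0, 1.0]])                    # observe I only
R = np.array([[1.0]])
x = np.array([990.0, 10.0])
ys = []
for _ in range(50):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    ys.append(H @ x + rng.multivariate_normal(np.zeros(1), R))

ll = kalman_loglik(ys, A, Q, H, R, np.array([990.0, 10.0]), 10.0 * np.eye(2))
```

Parameter estimation then amounts to maximizing `kalman_loglik` over the entries of the state and observation models, which is the role of the approximate likelihood in the abstract.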
Density estimation plays a crucial role in many data analysis tasks, as it infers a continuous probability density function (PDF) from discrete samples. Thus, it is used in tasks as diverse as analyzing population data, spatial locations in 2D sensor readings, or reconstructing scenes from 3D scans. In this paper, we introduce a learned, data-driven deep density estimation (DDE) method to infer PDFs in an accurate and efficient manner, while being independent of domain dimensionality or sample size. Furthermore, we do not require access to the original PDF during estimation, neither in parametric form, nor as priors, nor in the form of many samples. This is enabled by training an unstructured convolutional neural network on an infinite stream of synthetic PDFs, as unbounded amounts of synthetic training data generalize better across the whole class of natural PDFs than any finite natural training data could. Thus, we hope that our publicly available DDE method will be beneficial in many areas of data analysis, where continuous models are to be estimated from discrete observations.
Future urban transportation concepts include a mixture of ground and air vehicles with varying degrees of autonomy in a congested environment. In such dynamic environments, occupancy maps alone are not sufficient for safe path planning. Safe and efficient transportation requires reasoning about the 3D flow of traffic and properly modeling uncertainty. Several different approaches can be taken for developing 3D velocity maps. This paper explores a Bayesian approach that captures our uncertainty in the map given training data. The approach involves projecting spatial coordinates into a high-dimensional feature space and then applying Bayesian linear regression to make predictions and quantify uncertainty in our estimates. On a collection of air and ground datasets, we demonstrate that this approach is effective and more scalable than several alternative approaches.
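The two-step recipe above can be sketched concretely: project coordinates into a high-dimensional feature space, then apply Bayesian linear regression to get both a predictive mean and a variance. Random Fourier features are an assumed choice of feature map here, and the data are synthetic; the sketch is illustrative, not the paper's pipeline.

```python
import numpy as np

# Bayesian linear regression on random Fourier features (assumed feature map)
# for predicting one velocity component from 2D position, with uncertainty.

rng = np.random.default_rng(1)

def features(X, W, b):
    """Random Fourier features approximating a squared-exponential kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

# Synthetic training data: 2D positions and one velocity component.
X = rng.uniform(0, 10, size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

D = 100                                   # feature dimension (assumed)
W = rng.standard_normal((2, D))           # lengthscale 1.0 (assumed)
b = rng.uniform(0, 2 * np.pi, D)
Phi = features(X, W, b)

alpha, sigma2 = 1.0, 0.01                 # prior precision, noise variance
A = alpha * np.eye(D) + Phi.T @ Phi / sigma2   # posterior precision
mean_w = np.linalg.solve(A, Phi.T @ y / sigma2)

# Predictive mean and variance at query positions.
Xq = np.array([[2.0, 5.0], [8.0, 1.0]])
Pq = features(Xq, W, b)
pred_mean = Pq @ mean_w
pred_var = sigma2 + np.einsum('ij,ji->i', Pq, np.linalg.solve(A, Pq.T))
```

The predictive variance grows away from the training data, which is exactly the map uncertainty a planner can reason about; scalability comes from the fixed feature dimension `D` being independent of the dataset size.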
In this paper, we study the problem of sparse channel estimation via a collaborative and fully distributed approach. The estimation problem is formulated in the angular domain by exploiting the spatially common sparsity structure of the involved channels in a multi-user scenario. The sparse channel estimation problem is solved via an efficient distributed approach in which the participating users collaboratively estimate their channel sparsity support sets before locally estimating the channel values, under the assumption that global and common support subsets are present. The performance of the proposed algorithm, named WDiOMP, is compared to that of DiOMP, local OMP, and a centralized solution based on SOMP, in terms of the support set recovery error under various experimental scenarios. The efficacy of WDiOMP is demonstrated even in the case in which the underlying sparsity structure is unknown.
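The centralized SOMP baseline mentioned above can be sketched in a few lines: greedily pick the dictionary atom with the largest correlation summed across users, so the recovered support is shared, then estimate each user's channel values by least squares. All dimensions are illustrative assumptions and the measurements are noiseless for clarity.

```python
import numpy as np

# Toy SOMP: recover a support set common to several users' sparse channels.

rng = np.random.default_rng(2)
n, m, users, s = 64, 24, 4, 3                  # atoms, measurements, users, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)    # shared sensing/dictionary matrix
support = rng.choice(n, size=s, replace=False)  # common angular support
X = np.zeros((n, users))
X[support] = rng.standard_normal((s, users))    # per-user channel gains
Y = A @ X                                       # noiseless measurements

R, sel = Y.copy(), []
for _ in range(s):
    # Sum correlations over users so all users vote for the same atom.
    scores = np.sum(np.abs(A.T @ R), axis=1)
    scores[sel] = -np.inf                       # do not re-select atoms
    sel.append(int(np.argmax(scores)))
    # Project the residual off the atoms selected so far.
    As = A[:, sel]
    R = Y - As @ np.linalg.lstsq(As, Y, rcond=None)[0]

recovered = sorted(sel)                         # estimated common support
# Local step: each user solves least squares restricted to `recovered`.
```

The distributed variants in the abstract replace the centralized summed-correlation vote with message exchange among users, but the greedy support-then-values structure is the same.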
We consider parameter estimation in distributed networks, where each sensor in the network observes an independent sample from an underlying distribution and has $k$ bits to communicate its sample to a centralized processor, which computes an estimate of a desired parameter. We develop lower bounds on the minimax risk of estimating the underlying parameter for a large class of losses and distributions. Our results show that under mild regularity conditions, the communication constraint reduces the effective sample size by a factor of $d$ when $k$ is small, where $d$ is the dimension of the estimated parameter. Furthermore, this penalty can decrease at most exponentially with increasing $k$, and this exponential rate is attained for some models, e.g., when estimating high-dimensional distributions. For other models, however, we show that the sample size reduction is remedied only linearly with increasing $k$, e.g., when some sub-Gaussian structure is available. We apply our results to the distributed setting with the product Bernoulli model, the multinomial model, Gaussian location models, and logistic regression, recovering or strengthening existing results. Our approach significantly deviates from existing approaches for developing information-theoretic lower bounds for communication-efficient estimation. We circumvent the need for the strong data processing inequalities used in prior work and develop a geometric approach that builds on a new representation of the communication constraint. This approach allows us to strengthen and generalize existing results with simpler and more transparent proofs.
In this paper we develop a plane wave type method for the discretization of homogeneous Helmholtz equations with variable wave numbers. In the proposed method, local basis functions (on each element) are constructed by the geometric optics ansatz such that they approximately satisfy a homogeneous Helmholtz equation without boundary condition. More precisely, each basis function is expressed as the product of an exponential plane wave function and a polynomial function, where the phase function in the exponential factor approximately satisfies the eikonal equation and the polynomial factor is recursively determined by transport equations associated with the considered Helmholtz equation. We prove that the resulting plane wave spaces possess the same high-order $h$-approximation properties as the standard plane wave spaces (which are available only in the case of constant wave numbers). We apply the proposed plane wave spaces to the discretization of nonhomogeneous Helmholtz equations with variable wave numbers and establish the corresponding error estimates for their finite element solutions. We report some numerical results to illustrate the efficiency of the proposed method.
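The ansatz described above can be written schematically as follows; the symbols (frequency $\omega$, refractive index $n$, leading amplitude $a_0$) are generic stand-ins for the standard geometric optics construction, not the paper's exact notation:

```latex
% Geometric-optics ansatz for  \Delta u + \omega^2 n(x)^2 u = 0  (schematic):
\[
  \varphi(x) \;=\; e^{\,\mathrm{i}\omega\,\phi(x)}\, p(x),
  \qquad
  |\nabla\phi(x)|^2 = n(x)^2 \quad \text{(eikonal equation)},
\]
\[
  2\,\nabla\phi\cdot\nabla a_0 + (\Delta\phi)\, a_0 = 0
  \quad \text{(leading-order transport equation determining } p\text{)}.
\]
```

With constant $n$ the phase $\phi$ is linear and $\varphi$ reduces to a standard plane wave, which is why the constructed spaces generalize the constant-wave-number plane wave spaces.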
In order for an autonomous robot to efficiently explore an unknown environment, it must account for uncertainty in sensor measurements, hazard assessment, localization, and motion execution. Making decisions for maximal reward in a stochastic setting requires value learning and policy construction over a belief space, i.e., a probability distribution over all possible robot-world states. However, belief space planning in a large spatial environment over long temporal horizons suffers from severe computational challenges. Moreover, constructed policies must safely adapt to unexpected changes in the belief at runtime. This work proposes a scalable value learning framework, PLGRIM (Probabilistic Local and Global Reasoning on Information roadMaps), that bridges the gap between (i) local, risk-aware resiliency and (ii) global, reward-seeking mission objectives. Leveraging hierarchical belief space planners with information-rich graph structures, PLGRIM addresses large-scale exploration problems while providing locally near-optimal coverage plans. We validate our proposed framework with high-fidelity dynamic simulations in diverse environments and on physical robots in Martian-analog lava tubes.
The finite element method (FEM) and the boundary element method (BEM) can numerically solve the Helmholtz system for acoustic wave propagation. When an object with heterogeneous wave speed or density is embedded in an unbounded exterior medium, the coupled FEM-BEM algorithm promises to combine the strengths of each technique. The FEM handles the heterogeneous regions while the BEM models the homogeneous exterior. Even though standard FEM-BEM algorithms are effective, they do require stabilisation at resonance frequencies. One such approach is to add a regularisation term to the system of equations. This algorithm is stable at all frequencies but also brings higher computational costs. This study proposes a regulariser based on the on-surface radiation conditions (OSRC). The OSRC operators are also used to precondition the boundary integral operators and combined with incomplete LU factorisations for the volumetric weak formulation. The proposed preconditioning strategy improves the convergence of iterative linear solvers significantly, especially at higher frequencies.
This paper introduces a novel neural network-based reinforcement learning approach to robot gaze control. Our approach enables a robot to learn and to adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention onto groups of people from its own audio-visual experiences, independently of the number of people, of their positions, and of their physical appearances. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network using a simulated environment that mimics realistic scenarios involving speaking/silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust to its parameter settings, i.e., the parameter values used by the method do not have a decisive impact on the performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step forward towards the autonomous learning of socially acceptable gaze behavior.
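The Q-learning rule underlying the approach can be shown in a tabular toy version. The paper uses a recurrent network as the Q-function over audio-visual observations; the discrete states, gaze actions, and reward below are toy stand-ins for that setting.

```python
import numpy as np

# Tabular Q-learning sketch: states stand in for (simulated) audio-visual
# observations, actions for gaze directions; the reward structure is an
# assumption for illustration.

rng = np.random.default_rng(3)
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1        # step size, discount, exploration

def step(s, a):
    """Toy environment: only action 0 in state 0 is rewarded (an assumption)."""
    r = 1.0 if (s == 0 and a == 0) else 0.0
    return rng.integers(n_states), r

s = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    # Q-learning update: bootstrap from the greedy value of the next state.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
    s = s2
```

Replacing the table `Q` with a recurrent network that consumes observation sequences, and pre-training it in simulation, gives the structure described in the abstract.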
In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. In addition, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
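The three-factor Kronecker structure can be sketched as follows, including the determinant identity that makes likelihood evaluation cheap relative to working with the full matrix. The factor sizes are illustrative assumptions, far smaller than real sensor/time/trial counts.

```python
import numpy as np

# Kronecker covariance sketch: full covariance over (trials x time x sensors)
# factorizes as C = C_trial (x) C_time (x) C_space.

rng = np.random.default_rng(4)

def random_spd(n):
    """Random symmetric positive-definite factor."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

n_space, n_time, n_trial = 4, 3, 2       # toy sizes (assumed)
C_space = random_spd(n_space)
C_time = random_spd(n_time)
C_trial = random_spd(n_trial)
C = np.kron(C_trial, np.kron(C_time, C_space))

# Determinants (and likewise inverses) decompose over the factors, so the
# Gaussian log-likelihood never needs the full 24x24 matrix:
logdet_kron = (n_time * n_space * np.linalg.slogdet(C_trial)[1]
               + n_trial * n_space * np.linalg.slogdet(C_time)[1]
               + n_trial * n_time * np.linalg.slogdet(C_space)[1])
logdet_full = np.linalg.slogdet(C)[1]    # agrees with logdet_kron
```

The iterative maximum likelihood algorithm in the abstract updates one factor at a time while holding the others fixed, exploiting exactly this decomposition.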