Drive-by sensing (i.e., vehicle-based mobile sensing) is an emerging data collection paradigm that leverages vehicle mobility to scan a city at low cost. It represents a positive social externality of urban transport activities. Bus transit systems are widely considered for drive-by sensing due to their extensive spatial coverage, reliable operations, and low maintenance costs. For the underlying monitoring scenario (e.g., air quality, traffic state, or road roughness), it is critical to assign a limited number of sensors to a bus fleet so as to ensure their optimal spatial-temporal distribution. In this paper we present a trip-based sensor deployment problem, which explicitly considers the timetabled trips that the fleet must execute while a portion of them perform sensing tasks. To address the computational challenge in large-scale instances, we design a multi-stage solution framework that decouples the spatial-temporal structure of the sensing task through line pre-selection and bi-level optimization. As a result, the computational complexity is reduced to sub-linear in the number of lines, rather than combinatorial in the number of buses as in existing vehicle-based approaches. A real-world case study covering 400 km$^2$ in central Chengdu demonstrates the effectiveness of the model in solving large-scale problems. It is found that coordinating bus scheduling and sensing tasks can substantially increase the spatial-temporal sensing coverage. We also provide several model extensions and recommendations for practice regarding the application of this method.
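The line pre-selection stage can be illustrated with a simple greedy maximum-coverage heuristic. This is only a hedged sketch of the idea, not the paper's actual formulation; the inputs `line_coverage` and `budget` are hypothetical.

```python
def greedy_line_preselection(line_coverage, budget):
    """Greedy maximum-coverage sketch of a line pre-selection stage.
    line_coverage: dict mapping bus-line id -> set of covered space-time cells.
    budget: number of lines to keep for the downstream optimization."""
    selected, covered = [], set()
    remaining = dict(line_coverage)
    for _ in range(budget):
        if not remaining:
            break
        # pick the line with the largest marginal coverage gain
        best = max(remaining, key=lambda l: len(remaining[l] - covered))
        if not remaining[best] - covered:
            break                       # no line adds new coverage
        selected.append(best)
        covered |= remaining.pop(best)
    return selected, covered
```

The selected lines would then enter a (much more involved) bi-level optimization that assigns sensors to trips.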
So-called "classification trimmed likelihood curves" have been proposed as a useful heuristic tool for determining the number of clusters and the trimming proportion in trimming-based robust clustering methods. However, these curves need careful visual inspection, and this way of choosing parameters requires subjective decisions. This work is intended to provide theoretical background for the understanding of these curves and the elements involved in their derivation. Moreover, a parametric bootstrap approach is presented in order to further automate the choice of parameters by providing a reduced list of "sensible" choices. The user can then pick, from that reduced list, the solution that best fits their aims.
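The flavor of such a parametric bootstrap check can be conveyed with a crude sketch: compare the gain from allowing one extra cluster on the real data with the gains observed on data simulated from the fitted k-cluster model. This is only an illustration of the general idea, assuming a simplistic trimmed objective and Gaussian simulation; it is not the authors' procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def trimmed_obj(X, k, alpha, seed=0):
    """Crude stand-in for a trimmed clustering objective: fit k-means, drop the
    ceil(alpha*n) worst-fitting points, and return the remaining within-cluster
    sum of squares (real trimmed methods trim during the fit)."""
    km = KMeans(n_clusters=k, n_init=5, random_state=seed).fit(X)
    d = np.min(np.linalg.norm(X[:, None] - km.cluster_centers_[None], axis=-1) ** 2, axis=1)
    return np.sort(d)[: len(X) - int(np.ceil(alpha * len(X)))].sum()

def bootstrap_extra_cluster(X, k, alpha, n_boot=50, seed=0):
    """Sketch of a parametric-bootstrap check: is the improvement from k+1
    clusters larger than expected under a fitted k-cluster model?"""
    rng = np.random.default_rng(seed)
    observed = trimmed_obj(X, k, alpha) - trimmed_obj(X, k + 1, alpha)
    km = KMeans(n_clusters=k, n_init=5, random_state=seed).fit(X)
    gains = []
    for _ in range(n_boot):
        labels = rng.integers(0, k, len(X))
        sim = km.cluster_centers_[labels] + rng.normal(scale=X.std(0), size=X.shape)
        gains.append(trimmed_obj(sim, k, alpha) - trimmed_obj(sim, k + 1, alpha))
    return np.mean(np.array(gains) >= observed)   # small value suggests k+1 is "sensible"
```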
Grid maps, especially occupancy grid maps, are ubiquitous in many mobile robot applications. To simplify the process of learning the map, grid maps subdivide the world into a grid of cells whose occupancies are estimated independently, using only measurements in the perceptual field of the particular cell. However, the world consists of objects that span multiple cells, which means that measurements falling onto one cell also provide evidence on the occupancy of other cells belonging to the same object. This correlation is not captured by current models. In this work, we present a way to generalize the update of grid maps, relaxing the assumption of independence, by modeling the relationship between the measurements and the occupancy of each cell as a set of latent variables and jointly estimating those variables and the posterior of the map. Additionally, we propose a method to estimate the latent variables by clustering based on semantic labels, as well as an extension to the Normal Distributions Transform Occupancy Map (NDT-OM) that facilitates the proposed map update. We perform comprehensive map creation and localization experiments with real-world data sets and show that the proposed method creates better maps in highly dynamic environments compared to state-of-the-art methods. Finally, we demonstrate the ability of the proposed method to remove occluded objects from the map in a lifelong map update scenario.
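A toy sketch of relaxing per-cell independence: each cell receives its own inverse-sensor-model update plus a share of the averaged evidence from cells in the same (semantic) cluster. This only conveys the coupling idea; the paper's latent-variable formulation and the NDT-OM extension are different and more principled.

```python
import numpy as np

def coupled_grid_update(log_odds, meas_log_odds, cluster_id, couple=0.5):
    """Illustrative coupled update of an occupancy grid in log-odds form.
    log_odds, meas_log_odds, cluster_id: flat arrays, one entry per cell.
    meas_log_odds is the per-cell evidence from an inverse sensor model;
    couple controls how much evidence is shared within a cluster."""
    updated = log_odds + meas_log_odds            # standard independent update
    for c in np.unique(cluster_id):
        mask = cluster_id == c
        # share a fraction of the cluster's mean evidence with all its cells
        updated[mask] += couple * meas_log_odds[mask].mean()
    return updated
```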
Current physics-informed (standard or operator) neural networks still rely on accurately learning the initial conditions of the system they are solving. In contrast, standard numerical methods evolve such initial conditions without needing to learn them. In this study, we propose to improve current physics-informed deep learning strategies so that initial conditions do not need to be learned and are instead represented exactly in the predicted solution. Moreover, this method guarantees that when a DeepONet is applied repeatedly to time-step a solution, the resulting function is continuous.
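One common way to enforce an initial condition exactly is to build it into the network ansatz, e.g. u(x, t) = u0(x) + t * NN(x, t), so that u(x, 0) = u0(x) by construction. The sketch below uses this well-known ansatz for illustration; the paper's specific construction may differ.

```python
import torch
import torch.nn as nn

class HardICNet(nn.Module):
    """Network whose output satisfies u(x, 0) == u0(x) by construction,
    via the ansatz u(x, t) = u0(x) + t * NN(x, t)."""
    def __init__(self, u0, hidden=64):
        super().__init__()
        self.u0 = u0                      # callable initial condition, u0(x) -> tensor
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))
    def forward(self, x, t):
        # x, t: tensors of shape (N, 1); at t = 0 the second term vanishes
        return self.u0(x) + t * self.net(torch.cat([x, t], dim=-1))
```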
Many applications in computational physics involve approximating problems with microstructure, characterized by multiple spatial scales in their data. However, the corresponding numerical solutions are often computationally expensive because fine details must be captured at small scales. As a result, simulating such phenomena becomes unaffordable for many-query applications, such as parametrized systems with multiple scale-dependent features. Traditional projection-based reduced order models (ROMs) fail to resolve these issues, even for second-order elliptic PDEs commonly found in engineering applications. To address this, we propose an alternative nonintrusive strategy for building a ROM that combines classical proper orthogonal decomposition (POD) with a suitable neural network (NN) model to account for the small scales. Specifically, we employ sparse mesh-informed neural networks (MINNs), which handle both spatial dependencies in the solutions and model parameters simultaneously. We evaluate the performance of this strategy on benchmark problems and then apply it to approximate a real-life problem involving the impact of microcirculation in transport phenomena through the tissue microenvironment.
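The nonintrusive POD+NN pattern can be sketched as: extract a POD basis from high-fidelity snapshots, then regress the reduced coefficients on the parameters. A plain MLP stands in here for the paper's mesh-informed neural network (MINN); shapes and names are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_pod_nn_rom(S, params, r=20):
    """Nonintrusive POD+regression sketch.
    S: snapshot matrix (n_dofs x n_snapshots), one column per parameter sample.
    params: array (n_snapshots x n_params) of the corresponding parameters."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    V = U[:, :r]                               # first r POD modes
    coeffs = V.T @ S                           # reduced coefficients per snapshot
    nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(params, coeffs.T)
    # return a cheap surrogate: parameter -> approximate full-order solution
    return lambda mu: V @ nn.predict(np.atleast_2d(mu)).ravel()
```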
Telemanipulation has become a promising technology that combines human intelligence with robotic capabilities to perform tasks remotely. However, it faces several challenges, such as insufficient transparency, low immersion, and limited feedback to the human operator. Moreover, the high cost of haptic interfaces is a major limitation for the application of telemanipulation in various fields, including elder care, on which our research is focused. To address these challenges, this paper proposes the use of nonlinear model predictive control for telemanipulation with low-cost virtual reality controllers, including multiple control goals in the objective function. The framework utilizes models for human input prediction as well as task-related models of the robot and the environment. The proposed framework is validated on a UR5e robot arm in the scenario of handling liquid without spilling. Further extensions of the framework, such as pouring assistance and collision avoidance, can easily be included.
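The receding-horizon idea can be sketched with a toy problem: a kinematic integrator model of the end effector tracks a predicted operator trajectory while penalizing control effort. This is only an illustration under simplifying assumptions; the paper's objective combines several task-related models (e.g. for spill avoidance), and `target_traj` is a hypothetical output of the human-input predictor.

```python
import numpy as np
from scipy.optimize import minimize

def nmpc_step(x0, target_traj, horizon=10, dt=0.1, w_track=1.0, w_effort=0.1):
    """Toy nonlinear MPC step: optimize a control sequence over the horizon,
    apply only the first input (receding horizon)."""
    n = x0.size
    def cost(u_flat):
        u = u_flat.reshape(horizon, n)
        x, c = x0.copy(), 0.0
        for k in range(horizon):
            x = x + dt * u[k]                                # integrator model
            c += w_track * np.sum((x - target_traj[k]) ** 2) # track predicted target
            c += w_effort * np.sum(u[k] ** 2)                # penalize effort
        return c
    res = minimize(cost, np.zeros(horizon * n), method="L-BFGS-B")
    return res.x.reshape(horizon, n)[0]
```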
Next Point-of-Interest (POI) recommendation is a critical task in location-based services that aims to provide personalized suggestions for the user's next destination. Previous works on POI recommendation have focused on modeling the user's spatial preference. However, existing works that leverage spatial information rely only on the aggregation of users' previously visited positions, which discourages the model from recommending POIs in novel areas. This trait of position-based methods harms the model's performance in many situations. Additionally, incorporating sequential information into the user's spatial preference remains a challenge. In this paper, we propose Diff-POI: a diffusion-based model that samples the user's spatial preference for next POI recommendation. Inspired by the wide application of diffusion algorithms for sampling from complex distributions, Diff-POI encodes the user's visiting sequence and spatial characteristics with two tailor-designed graph encoding modules, followed by a diffusion-based sampling strategy to explore the user's spatial visiting trends. We leverage the diffusion process and its reverse form to sample from the posterior distribution and optimize the corresponding score function. We design a joint training and inference framework to optimize and evaluate the proposed Diff-POI. Extensive experiments on four real-world POI recommendation datasets demonstrate the superiority of Diff-POI over state-of-the-art baseline methods. Further ablation and parameter studies on Diff-POI reveal the functionality and effectiveness of the proposed diffusion-based sampling strategy in addressing the limitations of existing methods.
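Diffusion-based sampling of a latent preference vector generally follows a reverse denoising loop conditioned on the user's encoded history. The sketch below is a generic DDPM-style sampler, not Diff-POI's exact module; `eps_model` is an assumed noise-prediction network and `cond` an assumed conditioning embedding.

```python
import torch

@torch.no_grad()
def ddpm_sample(eps_model, cond, dim, n_steps=100):
    """Minimal DDPM-style reverse sampler: start from Gaussian noise and
    iteratively denoise, conditioned on a user embedding."""
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    x = torch.randn(cond.shape[0], dim)                 # pure noise at step T
    for t in reversed(range(n_steps)):
        eps = eps_model(x, torch.full((x.shape[0],), t), cond)
        mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = mean + betas[t].sqrt() * torch.randn_like(x)
        else:
            x = mean                                    # final latent preference sample
    return x
```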
The vast majority of reduced-order models (ROMs) first obtain a low-dimensional representation of the problem from high-dimensional model (HDM) training data, which is afterwards used to obtain a system of reduced complexity. Unfortunately, convection-dominated problems generally have a slowly decaying Kolmogorov n-width, which makes it very challenging to obtain an accurate ROM built solely from training data. The accuracy of a ROM can be improved through enrichment with HDM solutions; however, due to the large computational expense of HDM evaluations for complex problems, they can only be used parsimoniously to obtain relevant computational savings. In this work, we exploit the local spatial and temporal coherence often exhibited by these problems to derive an accurate, cost-efficient approach that repeatedly combines HDM and ROM evaluations without a separate training phase. Our approach obtains solutions at a given time step either by fully solving the HDM or by combining partial HDM and ROM solves. A dynamic sampling procedure identifies regions that require the HDM solution for global accuracy, and the remainder of the flow is reconstructed using the ROM. Moreover, solutions combining both HDM and ROM solves use spatial filtering to eliminate potential spurious oscillations that may develop. We test the proposed method on inviscid compressible flow problems and demonstrate speedups of up to an order of magnitude.
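The structure of one hybrid time step can be sketched as: an error indicator selects the cells that need a high-fidelity solve, and the remaining cells are reconstructed from a reduced basis by a gappy-POD-style least-squares fit. This is an illustrative simplification (the full HDM step is called and then masked, whereas in practice only the flagged region would be solved); names such as `hdm_step` and `indicator` are assumptions.

```python
import numpy as np

def hybrid_step(u_prev, hdm_step, V, indicator, tol):
    """One sketched HDM/ROM hybrid time step.
    u_prev: current state (one value per cell), V: ROM basis (n_cells x r),
    hdm_step: full-order time stepper, indicator: per-cell error estimate."""
    mask = indicator(u_prev) > tol                 # cells that need the HDM
    u_new = np.empty_like(u_prev)
    u_new[mask] = hdm_step(u_prev)[mask]           # (partial) high-fidelity solve
    # reconstruct the rest by fitting ROM coefficients to the sampled cells
    coeffs, *_ = np.linalg.lstsq(V[mask], u_new[mask], rcond=None)
    u_new[~mask] = (V @ coeffs)[~mask]
    return u_new
```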
Multicast enables efficient one-to-many communication. Several applications benefit from its scalability properties, e.g., live streaming and large-scale software updates. Historically, multicast applications have used specialized transport protocols. The flexibility of the recently standardized QUIC protocol opens the possibility of providing both unicast and multicast services to applications with a single transport protocol. We introduce MCQUIC, an extended version of the QUIC protocol that supports multicast communication. We show how QUIC features and built-in security can be leveraged for multicast transport. We present the design of MCQUIC and implement it in Cloudflare quiche. We assess its performance through benchmarks and in emulated networks under realistic scenarios. We also demonstrate MCQUIC in a campus network. By coupling QUIC with our multicast extension, applications can rely on multicast for efficiency while retaining the possibility to fall back on unicast in case of incompatible network conditions.
In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents; however, this increases the resource cost of communication and synchronisation, and remains difficult to scale. In this paper we present four algorithms to address these problems. Their combination enables each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
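The core mechanism of learning an allocation strategy while adapting exploration to perceived optimality can be illustrated with a toy value-learning agent whose exploration rate tracks its recent prediction error. This sketch is not the paper's four algorithms and ignores resource limits; all names are hypothetical.

```python
import random
from collections import defaultdict

class AllocatorAgent:
    """Toy allocator: learns which peer to delegate a subtask to, and explores
    more when its recent prediction error suggests its strategy is sub-optimal."""
    def __init__(self, peers, alpha=0.1):
        self.q = defaultdict(float)        # (subtask, peer) -> estimated value
        self.peers, self.alpha = peers, alpha
        self.recent_error = 1.0            # proxy for confidence in the strategy
    def choose(self, subtask):
        eps = min(1.0, self.recent_error)  # adaptive exploration rate
        if random.random() < eps:
            return random.choice(self.peers)
        return max(self.peers, key=lambda p: self.q[(subtask, p)])
    def update(self, subtask, peer, reward):
        err = reward - self.q[(subtask, peer)]
        self.q[(subtask, peer)] += self.alpha * err
        # decay-averaged absolute error drives future exploration
        self.recent_error = 0.9 * self.recent_error + 0.1 * abs(err)
```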
Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on a more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection of 150 CT scans acquired at a different hospital, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download at https://github.com/holgerroth/3Dunet_abdomen_cascade.
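The coarse-to-fine cascade follows a simple pattern: the first network proposes a candidate region, whose bounding box (plus a margin) is cropped and passed to the second network. The sketch below shows that wiring only; `coarse_fcn` and `fine_fcn` are assumed callables returning per-voxel foreground probabilities, and thresholds are placeholders.

```python
import numpy as np

def cascade_segment(volume, coarse_fcn, fine_fcn, margin=10):
    """Two-stage coarse-to-fine segmentation sketch for a 3D volume."""
    cand = coarse_fcn(volume) > 0.5            # stage 1: candidate region
    seg = np.zeros(volume.shape, dtype=np.uint8)
    idx = np.argwhere(cand)
    if idx.size == 0:
        return seg                             # coarse stage found nothing
    lo = np.maximum(idx.min(0) - margin, 0)    # bounding box with margin
    hi = np.minimum(idx.max(0) + margin + 1, volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # stage 2: detailed segmentation restricted to the cropped region
    seg[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_fcn(crop) > 0.5
    return seg
```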