In the context of robotics, accurate ground truth positioning is essential for the development of Simultaneous Localization and Mapping (SLAM) and control algorithms. Robotic Total Stations (RTSs) provide accurate and precise reference positions in different types of outdoor environments, especially when compared to the limited accuracy of Global Navigation Satellite Systems (GNSS) in cluttered areas. Using three RTSs makes it possible to obtain the six-Degrees-Of-Freedom (DOF) reference pose of a robotic platform. However, the uncertainty of each pose is rarely computed for trajectory evaluation. As the algorithms under evaluation become increasingly precise, it becomes crucial to take this uncertainty into account. We propose a method to compute this six-DOF uncertainty from the fusion of three RTSs based on Monte Carlo (MC) methods. This solution relies on point-to-point minimization to propagate the noise of the RTSs onto the pose of the robotic platform. Five main noise sources are identified to model this uncertainty: noise inherent to the instrument, tilt noise, atmospheric factors, time synchronization noise, and extrinsic calibration noise. Based on extensive experimental work, we compare the impact of each noise source on the prism uncertainty and on the final estimated pose. Tested on more than 50 km of trajectories, our comparison highlights the importance of the calibration noise and of the measurement distance, which should ideally remain under 75 m. Moreover, we observed that no single noise source dominates the uncertainty of the robot's pose.
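To make the point-to-point step concrete, the following minimal Python sketch propagates Gaussian prism noise through a rigid-transform (Kabsch) solve via Monte Carlo sampling. The prism layout, pose, and the 5 mm noise magnitude are illustrative placeholders rather than the paper's calibrated values, and only a single lumped noise source is modelled; the full method would also include tilt, atmospheric, synchronization, and calibration terms.

```python
import numpy as np

def rigid_transform(P, Q):
    """Point-to-point minimization (Kabsch): find R, t minimizing ||R P + t - Q||."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Prism positions in the robot body frame (metres) -- illustrative layout,
# one column per prism (three non-collinear prisms).
body = np.array([[0.5, -0.5, 0.0],
                 [0.0,  0.0, 0.6],
                 [0.0,  0.0, 0.0]])

rng = np.random.default_rng(0)
true_R, true_t = np.eye(3), np.array([[30.0], [40.0], [1.0]])
measured = true_R @ body + true_t

sigma = 0.005  # 5 mm lumped RTS noise -- placeholder magnitude
samples = []
for _ in range(10_000):
    noisy = measured + rng.normal(0.0, sigma, size=measured.shape)
    R, t = rigid_transform(body, noisy)
    samples.append(t.ravel())            # translation only, for brevity
print("translation covariance (m^2):\n", np.cov(np.array(samples).T))
```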
The main question this paper addresses is: what combination of a robot controller and a learning method should be used if the morphology of the learning robot is not known in advance? Our interest is rooted in the context of morphologically evolving modular robots, but the question is also relevant in general, for system designers interested in widely applicable solutions. We perform an experimental comparison of three controller-and-learner combinations: one approach where controllers are based on models of animal locomotion (Central Pattern Generators, CPG) and the learner is an evolutionary algorithm, a completely different method using Reinforcement Learning (RL) with a neural network controller architecture, and a combination `in-between' where controllers are neural networks and the learner is an evolutionary algorithm. We apply these three combinations to a test suite of modular robots and compare their efficacy, efficiency, and robustness. Surprisingly, the usual CPG-based and RL-based options are outperformed by the in-between combination, which is more robust and efficient than the other two setups.
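For readers unfamiliar with CPG controllers, the sketch below shows a chain of coupled phase oscillators, the kind of open-loop pattern generator an evolutionary algorithm could tune. The joint count, frequency, coupling weight, and phase bias are arbitrary illustrative values, not the paper's parameterization.

```python
import numpy as np

# Minimal CPG sketch: nearest-neighbour coupled phase oscillators.
# Joint angle i is A * sin(theta_i); the frequency, amplitude, coupling
# weight, and phase bias are the knobs a learner would tune.
def cpg_step(theta, omega, w, phi, dt):
    dtheta = omega.copy()
    n = len(theta)
    for i in range(n):
        for j in (i - 1, i + 1):           # couple to chain neighbours
            if 0 <= j < n:
                dtheta[i] += w * np.sin(theta[j] - theta[i] - phi)
    return theta + dt * dtheta

theta = np.zeros(4)                        # 4 joints -- illustrative
omega = np.full(4, 2 * np.pi * 1.0)        # 1 Hz intrinsic frequency
for _ in range(1000):
    theta = cpg_step(theta, omega, w=2.0, phi=np.pi / 4, dt=0.01)
print(0.5 * np.sin(theta))                 # joint commands in radians
```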
A linear-time algorithm for generating auxiliary subgraphs for the 3-edge-connected components of a connected multigraph is presented. The algorithm uses an innovative graph contraction operation and makes only one pass over the graph. By contrast, the previously best-known algorithms make multiple passes over the graph to decompose it into its 2-edge-connected components or 2-vertex-connected components, then its 3-edge-connected components or 3-vertex-connected components, and then construct a cactus representation for the 2-cuts to generate the auxiliary subgraphs for the 3-edge-connected components.
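To give a flavour of what a contraction operation looks like on a multigraph (merging one vertex into another while preserving parallel edges and discarding the resulting self-loops), here is a small Python sketch; it illustrates the generic operation only, not the paper's specific contraction or its linear-time bookkeeping.

```python
from collections import defaultdict

# adj[u][w] stores the multiplicity of edge (u, w) in the multigraph.
def contract(adj, u, v):
    """Merge vertex v into u, keeping parallel edges, dropping self-loops."""
    for w, k in list(adj[v].items()):
        if w != u:                          # re-attach v's edges to u
            adj[u][w] += k
            adj[w][u] += k
        del adj[w][v]                       # w == u drops the u-v self-loop
    del adj[v]

adj = defaultdict(lambda: defaultdict(int))
for a, b in [(1, 2), (1, 2), (2, 3), (3, 1), (3, 4)]:   # toy multigraph
    adj[a][b] += 1
    adj[b][a] += 1
contract(adj, 1, 2)
print({u: dict(nbrs) for u, nbrs in adj.items()})
# {1: {3: 2}, 3: {1: 2, 4: 1}, 4: {3: 1}} -- parallel edges preserved
```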
Technological advancements in noninvasive imaging facilitate the construction of whole brain interconnected networks, known as brain connectivity. Existing approaches to analyze brain connectivity frequently disaggregate the entire network into a vector of unique edges or summary measures, leading to a substantial loss of information. Motivated by the need to explore the effect mechanism among genetic exposure, brain connectivity and time to disease onset, we propose an integrative Bayesian framework to model the effect pathway between each of these components while quantifying the mediating role of brain networks. To accommodate the biological architectures of brain connectivity constructed along white matter fiber tracts, we develop a structural modeling framework that includes a symmetric matrix-variate accelerated failure time model and a symmetric matrix response regression to characterize the effect paths. We further impose within-graph sparsity and between-graph shrinkage to identify informative network configurations and eliminate the interference of noisy components. Extensive simulations confirm the superiority of our method compared with existing alternatives. By applying the proposed method to the landmark Alzheimer's Disease Neuroimaging Initiative study, we obtain neurobiologically plausible insights that may inform future intervention strategies.
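As a rough, much-simplified frequentist analogue of the effect path from a symmetric connectivity matrix to onset time, the sketch below fits a log-linear (AFT-style) model on the vectorized upper triangle, with an L1 penalty standing in for within-graph sparsity. Censoring, the Bayesian priors, and the mediation structure are all omitted here, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy stand-in: log onset time is linear in the upper-triangle entries of a
# symmetric p x p connectivity matrix, with only a few informative edges.
rng = np.random.default_rng(1)
n, p = 200, 10                               # subjects, network nodes
iu = np.triu_indices(p, k=1)
X = rng.normal(size=(n, len(iu[0])))         # edge strengths per subject
beta = np.zeros(len(iu[0])); beta[:5] = 0.8  # 5 informative edges
log_T = 2.0 + X @ beta + 0.3 * rng.normal(size=n)

fit = Lasso(alpha=0.05).fit(X, log_T)        # L1 mimics within-graph sparsity
B = np.zeros((p, p)); B[iu] = fit.coef_; B += B.T   # back to a symmetric matrix
print("recovered edges:", np.flatnonzero(np.abs(B[iu]) > 1e-6))
```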
Reconfigurable intelligent surfaces (RIS) can be crucial in next-generation communication systems. However, designing the RIS phases according to the instantaneous channel state information (CSI) can be challenging in practice due to the short coherence time of the channel. In this regard, we propose a novel algorithm based on the channel statistics of massive multiple-input multiple-output systems rather than the instantaneous CSI. The beamforming at the base station (BS), the power allocation of the users, and the phase shifts at the RIS elements are optimized to maximize the minimum signal-to-interference-plus-noise ratio (SINR), guaranteeing fair operation among the users. In particular, we design the RIS phases by leveraging the asymptotic deterministic equivalent of the minimum SINR, which depends only on the channel statistics. This significantly reduces the computational complexity and the amount of control data exchanged between the BS and the RIS for updating the phases. This setup is also useful for electromagnetic field (EMF)-aware systems with constraints on the users' maximum exposure to EMF. The numerical results show that the proposed algorithms achieve more than $100\%$ gain in terms of minimum SINR, compared to a system with random RIS phase shifts, when $40$ RIS elements, $20$ BS antennas, and $10$ users are considered.
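To make the random-phase baseline concrete, the toy sketch below draws random RIS phase configurations under an i.i.d. Rayleigh channel model with maximum-ratio transmission (MRT) and equal power, and reports the best minimum SINR found; the statistics-based optimization itself is not reproduced here, and the sizes merely mirror the abstract's example numbers.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K = 20, 40, 10                         # BS antennas, RIS elements, users
sigma2, p = 1.0, 1.0                         # noise power, per-user power

# i.i.d. Rayleigh placeholders: BS->RIS, RIS->user, and direct BS->user links.
G = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)
H = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
D = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)

def min_sinr(theta):
    F = D + G.conj().T @ (np.exp(1j * theta)[:, None] * H)  # M x K effective channels
    W = F / np.linalg.norm(F, axis=0)                       # MRT beamformers
    S = np.abs(F.conj().T @ W) ** 2                         # S[k, j] = |f_k^H w_j|^2
    signal = p * np.diag(S)
    interference = p * (S.sum(axis=1) - np.diag(S))
    return (signal / (interference + sigma2)).min()

best = max(min_sinr(rng.uniform(0, 2 * np.pi, N)) for _ in range(100))
print("best min-SINR over 100 random phase draws:", best)
```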
Scientific cooperation at the international level has been well studied in the literature. However, much less is known about this cooperation at the intercontinental level. In this paper, we address this issue by creating a collection of approximately 13.8 million publications gathered around the papers of one of the most highly cited authors working on complex networks and their applications. The obtained rank-frequency distribution of the sequences describing the continents and the numbers of countries with which the authors of the papers are affiliated follows a power law with exponent $-1.9108(15)$. Such a dependence is known in the literature as Zipf's law; it was originally observed in linguistics and has since turned out to be very common in various fields. The number of distinct ``continent (number of countries)'' sequences as a function of the number of analyzed papers grows according to a power law with exponent $0.527(14)$, i.e., it follows Heaps' law.
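A minimal sketch of the Zipf and Heaps analyses on a synthetic token stream (standing in for the real ``continent (number of countries)'' sequences) might look as follows; the generator's exponent is set near the reported value purely for illustration, and the crude log-log least-squares fit would in practice be replaced by a more careful estimator.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
tokens = [f"seq{int(rng.zipf(1.9))}" for _ in range(100_000)]  # synthetic stream

# Zipf's law: rank-frequency distribution, exponent fitted in log-log space.
freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1, dtype=float)
print("Zipf exponent:", np.polyfit(np.log(ranks), np.log(freqs), 1)[0])

# Heaps' law: number of distinct sequences vs. number of tokens processed.
seen, growth = set(), []
for i, t in enumerate(tokens, 1):
    seen.add(t)
    if i % 1000 == 0:
        growth.append((i, len(seen)))
h = np.array(growth, dtype=float)
print("Heaps exponent:", np.polyfit(np.log(h[:, 0]), np.log(h[:, 1]), 1)[0])
```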
Including information from additional spectral bands (e.g., near-infrared) can improve deep learning model performance for many vision-oriented tasks. There are many possible ways to incorporate this additional information into a deep learning model, but the optimal fusion strategy has not yet been determined and can vary between applications. At one extreme, known as "early fusion," additional bands are stacked as extra channels to obtain an input image with more than three channels. At the other extreme, known as "late fusion," RGB and non-RGB bands are passed through separate branches of a deep learning model and merged immediately before a final classification or segmentation layer. In this work, we characterize the performance of a suite of multispectral deep learning models with different fusion approaches, quantify their relative reliance on different input bands, and evaluate their robustness to naturalistic image corruptions affecting one or more input channels.
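The two fusion extremes can be sketched in a few lines of PyTorch; the backbones below are stand-in convolutional stacks rather than the architectures evaluated in the paper, and the fourth band is assumed to be near-infrared.

```python
import torch
import torch.nn as nn

def backbone(in_ch):                        # stand-in feature extractor
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

class EarlyFusion(nn.Module):               # stack all bands as input channels
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(backbone(4), nn.Linear(32, n_classes))
    def forward(self, x):                   # x: (B, 4, H, W)
        return self.net(x)

class LateFusion(nn.Module):                # separate branches, merge at the end
    def __init__(self, n_classes=10):
        super().__init__()
        self.rgb, self.nir = backbone(3), backbone(1)
        self.head = nn.Linear(64, n_classes)
    def forward(self, x):
        f = torch.cat([self.rgb(x[:, :3]), self.nir(x[:, 3:])], dim=1)
        return self.head(f)

x = torch.randn(2, 4, 64, 64)
print(EarlyFusion()(x).shape, LateFusion()(x).shape)
```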
The continuous development of computer network technology has accelerated the pace of informatization, and at the same time, network security issues have become increasingly prominent. Networking technologies built on different network topologies are one of the important means of solving network security problems. A VPN's security is based on the division of geographical boundaries, but this granularity is relatively coarse, making it difficult to cope with a dynamically changing security situation. Zero-trust networking addresses the VPN's shortcomings through peer-to-peer authorization and continuous verification, but most solutions use a central proxy device, so the central node becomes the bottleneck of the network. This paper puts forward a hard-NAT traversal formula based on the birthday paradox, addressing the long-standing problem of hard NAT traversal. We also propose a full-mesh networking mechanism with a variable-parameter, full-dimensional spatial peer-to-peer grid topology, which covers all common types of networking schemes and achieves peer-to-peer resource interconnection at both the methodological and engineering levels.
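The birthday-paradox reasoning can be made concrete with a back-of-the-envelope calculation: if one endpoint opens $n$ source ports and the peer probes $m$ random destination ports out of the roughly 65535 available, the probability that at least one probe lands on an open mapping follows the standard birthday bound. The sketch below computes this bound; it is our reading of the idea, not necessarily the paper's exact formula.

```python
import math

def hit_probability(n, m, ports=65535):
    """P(at least one of m distinct random probes hits one of n open ports)."""
    p_miss = math.prod((ports - n - i) / (ports - i) for i in range(m))
    return 1.0 - p_miss

for n, m in [(256, 256), (512, 512), (1024, 1024)]:
    print(f"n={n:5d} m={m:5d}  P(hit) = {hit_probability(n, m):.3f}")
# With n = m = 256, P(hit) is already ~0.63 -- the birthday-paradox effect.
```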
Neuromorphic computing is one of the few current approaches that have the potential to significantly reduce power consumption in Machine Learning and Artificial Intelligence. Imam & Cleland presented an odour-learning algorithm that runs on a neuromorphic architecture and is inspired by circuits described in the mammalian olfactory bulb. They assess the algorithm's performance in "rapid online learning and identification" of gaseous odorants and odorless gases (collectively, "gases") using a set of gas sensor recordings of different odour presentations, corrupted by impulse noise. We replicated parts of the study and discovered limitations that affect some of the conclusions drawn. First, the dataset used suffers from sensor drift and a non-randomised measurement protocol, rendering it of limited use for odour identification benchmarks. Second, we found that the model is restricted in its ability to generalise over repeated presentations of the same gas. We demonstrate that the task the study refers to can be solved with a simple hash table approach, matching or exceeding the reported results in accuracy and runtime. Therefore, a validation of the model that goes beyond restoring a learned data sample remains to be shown, in particular its suitability for odour identification tasks.
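A minimal version of such a hash-table baseline might look as follows: store each binarised training sample as a dictionary key, answer exact hits in O(1), and fall back to the nearest stored key under Hamming distance for noise-corrupted queries. The 72-dimensional binary patterns and the threshold are placeholders for the real sensor recordings.

```python
import numpy as np

rng = np.random.default_rng(4)
# One stored (binarised) pattern per gas -- synthetic placeholders.
train = {tuple((rng.random(72) > 0.5).astype(int)): f"gas_{g}" for g in range(10)}

def identify(sample, table):
    key = tuple((sample > 0.5).astype(int))
    if key in table:                        # exact hit: O(1) lookup
        return table[key]
    # fall back to the nearest stored key under Hamming distance
    return min(table.items(),
               key=lambda kv: np.sum(np.array(kv[0]) != np.array(key)))[1]

clean = np.array(next(iter(train)), dtype=float)   # one stored pattern
noisy = clean.copy()
flip = rng.choice(72, size=7, replace=False)
noisy[flip] = 1.0 - noisy[flip]             # impulse noise: flip ~10% of bits
print(identify(noisy, train))               # recovers the stored label
```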
We propose an approach to compute inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning, or controller comparison. We propose an interval-based method that allows for tractable yet tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
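To illustrate the flavour of such interval methods on a one-dimensional toy instance, the sketch below classifies boxes of parameter values against the universally quantified constraint $\forall x \in [1,2],\; p - x^2 \le 0$, whose exact solution set over $p \in [0,4]$ is $[0,1]$. It is a simplified stand-in using interval bounds and bisection, not the paper's algorithm.

```python
# A box of p-values is inside the solution set when the constraint holds at
# the worst-case x (min x^2 = 1 on [1, 2]), outside when it fails even there,
# and is bisected otherwise; tiny undecided boxes go to the outer margin only.
def classify(p_lo, p_hi, eps=1e-3, inner=None, outer=None):
    inner = [] if inner is None else inner
    outer = [] if outer is None else outer
    x2_min = 1.0                           # min of x^2 over x in [1, 2]
    if p_hi - x2_min <= 0:                 # holds for every x: inner box
        inner.append((p_lo, p_hi))
    elif p_lo - x2_min > 0:                # fails for some x at every p: outside
        pass
    elif p_hi - p_lo < eps:                # undecided but tiny: outer margin
        outer.append((p_lo, p_hi))
    else:                                  # bisect and recurse
        mid = 0.5 * (p_lo + p_hi)
        classify(p_lo, mid, eps, inner, outer)
        classify(mid, p_hi, eps, inner, outer)
    return inner, outer

inner, outer = classify(0.0, 4.0)
print("inner approximation reaches", max(hi for _, hi in inner))  # ~1.0
```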
Convolutional Neural Networks (CNNs) have achieved impressive performance on many computer vision tasks, such as object detection, image recognition, and image retrieval. These achievements benefit from CNNs' outstanding capability to learn input features through deep layers of neuron structures and an iterative training process. However, the learned features are hard to identify and interpret from a human vision perspective, causing a lack of understanding of CNNs' internal working mechanisms. To improve CNN interpretability, CNN visualization is widely used as a qualitative analysis method that translates internal features into visually perceptible patterns. Many CNN visualization works have been proposed in the literature to interpret CNNs from the perspectives of network structure, operation, and semantic concept. In this paper, we provide a comprehensive survey of several representative CNN visualization methods, including Activation Maximization, Network Inversion, Deconvolutional Neural Networks (DeconvNet), and Network Dissection based visualization. These methods are presented in terms of their motivations, algorithms, and experimental results. Based on these visualization methods, we also discuss their practical applications to demonstrate the significance of CNN interpretability in areas such as network design, optimization, and security enhancement.
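As a pointer to how the first of these methods works, here is a minimal Activation Maximization sketch: gradient ascent on the input pixels to maximise one output unit. The untrained two-layer network is a stand-in for a real pretrained CNN, and practical implementations add regularisers such as jitter, blurring, or stronger norm penalties to obtain interpretable images.

```python
import torch
import torch.nn as nn

# Stand-in network; in practice one would load a pretrained CNN here.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()

x = torch.zeros(1, 3, 64, 64, requires_grad=True)   # synthesised input
opt = torch.optim.Adam([x], lr=0.1)
target_class = 3
for _ in range(200):
    opt.zero_grad()
    # Ascend the target activation; the small L2 term keeps pixels bounded.
    loss = -model(x)[0, target_class] + 1e-3 * x.norm()
    loss.backward()
    opt.step()
print("final activation:", model(x)[0, target_class].item())
```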