This work details a scalable framework to orchestrate a swarm of rotary-wing UAVs serving as cellular relays to facilitate beyond-line-of-sight connectivity and traffic offloading for ground users. First, a Multiscale Adaptive Energy-conscious Scheduling and TRajectory Optimization (MAESTRO) framework is developed for a single UAV. Aiming to minimize the time-averaged latency to serve user requests, subject to an average UAV power constraint, it is shown that the optimization problem can be cast as a semi-Markov decision process that exhibits a multiscale structure: outer actions on radial wait velocities and terminal service positions minimize the long-term delay-power trade-off, and are optimized via value iteration; given these outer actions, inner actions on angular wait velocities and service trajectories minimize a short-term delay-energy cost. A novel hierarchical competitive swarm optimization scheme is developed for the inner optimization, devising high-resolution trajectories via iterative pairwise updates. Next, MAESTRO is eXtended to UAV swarms (MAESTRO-X) via scalable policy replication: enabled by a decentralized command-and-control network, the optimal single-agent policy is augmented with spread maximization, consensus-driven conflict resolution, adaptive frequency reuse, and piggybacking. Numerical evaluations show that, for user requests of 10 Mbits, generated according to a Poisson arrival process with rate 0.2 req/min/UAV, single-agent MAESTRO offers 3.8x faster service than a high-altitude platform and 29% faster service than a static UAV deployment; moreover, for a swarm of 3 UAV relays, MAESTRO-X delivers data payloads 4.7x faster than a successive convex approximation scheme; and, remarkably, a single UAV optimized via MAESTRO outperforms 3 UAVs optimized via a deep Q-network by 38%.
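The outer stage above is solved via value iteration. As a minimal illustration of that solver component, the sketch below runs a generic value-iteration loop on a hypothetical toy MDP (the transition tensor, costs, and discount are stand-ins, not the paper's semi-Markov model):

```python
import numpy as np

def value_iteration(P, c, gamma=0.95, tol=1e-8):
    """Generic value iteration for cost minimization.
    P[a, s, s'] are transition probabilities, c[a, s] are stage costs."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: expected discounted cost-to-go for each (action, state)
        Q = c + gamma * P @ V            # shape (n_actions, n_states)
        V_new = Q.min(axis=0)            # greedy minimization over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=0)
        V = V_new
```

For example, with two states and actions "stay"/"swap", where staying in state 0 costs 1 per step, staying in state 1 is free, and swapping costs 0.5, the optimal policy swaps out of state 0 and stays in state 1.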
Trajectory optimization is a powerful tool for robot motion planning and control. State-of-the-art general-purpose nonlinear programming solvers are versatile, handle constraints effectively, and provide high numerical robustness, but they are slow because they do not fully exploit the structure of the optimal control problem at hand. Existing structure-exploiting solvers are fast, but they often lack techniques to deal with nonlinearity or rely on penalty methods to enforce (equality or inequality) path constraints. This work presents FATROP: a trajectory optimization solver that is fast while retaining the salient features of general-purpose nonlinear optimization solvers. The speed-up is mainly achieved through a specialized linear solver, based on a Riccati recursion that is generalized to also support stagewise equality constraints. To demonstrate the algorithm's potential, it is benchmarked on a set of robot problems that are challenging from a numerical perspective, including problems with minimum-time objectives and no-collision constraints. The solver is shown to solve trajectory generation problems for a quadrotor, a robot manipulator, and a truck-trailer system in a few tens of milliseconds. The algorithm's C++ implementation accompanies this work as open-source software, released under the GNU Lesser General Public License (LGPL). This software framework may encourage and enable the robotics community to use trajectory optimization in more challenging applications.
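FATROP's linear solver generalizes the Riccati recursion to stagewise equality constraints. As background, the classical unconstrained backward Riccati recursion it builds on can be sketched as follows (the double-integrator dynamics and cost weights in the usage example are illustrative choices, not from the paper's benchmarks):

```python
import numpy as np

def lqr_riccati(A, B, Q, R, N):
    """Finite-horizon discrete-time LQR via backward Riccati recursion.
    Returns stagewise feedback gains K[t] such that u[t] = -K[t] @ x[t]."""
    P = Q.copy()                                   # terminal cost-to-go Hessian
    gains = []
    for _ in range(N):
        # Gain minimizing the stagewise quadratic cost-to-go
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati backward update of the cost-to-go Hessian
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                             # ordered t = 0 .. N-1
```

Because the recursion sweeps backward once and then rolls the gains forward, its cost is linear in the horizon length, which is the structural property FATROP exploits for speed.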
We consider the problem of state estimation from $m$ linear measurements, where the state $u$ to recover is an element of the manifold $\mathcal{M}$ of solutions of a parameter-dependent equation. The state is estimated using prior knowledge on $\mathcal{M}$ coming from model order reduction. Variational approaches based on linear approximation of $\mathcal{M}$, such as PBDW, yield a recovery error limited by the Kolmogorov $m$-width of $\mathcal{M}$. To overcome this limitation, piecewise-affine approximations of $\mathcal{M}$ have also been considered, which consist in using a library of linear spaces among which one is selected by minimizing some distance to $\mathcal{M}$. In this paper, we propose a state estimation method relying on dictionary-based model reduction, where a space is selected from a library generated by a dictionary of snapshots, using a distance to the manifold. The selection is performed among a set of candidate spaces obtained from the path of an $\ell_1$-regularized least-squares problem. Then, in the framework of parameter-dependent operator equations (or PDEs) with affine parameterizations, we provide an efficient offline-online decomposition based on randomized linear algebra that ensures efficient and stable computations while preserving theoretical guarantees.
We present a forward sufficient dimension reduction method for categorical or ordinal responses by extending the outer product of gradients and the minimum average variance estimator to the multinomial generalized linear model. Previous work in this direction extends forward regression to binary responses and is applied in a pairwise manner to multinomial data, which is less efficient than our approach. Like other forward regression-based sufficient dimension reduction methods, our approach avoids the relatively stringent distributional requirements necessary for inverse regression alternatives. We show consistency of our proposed estimator and derive its convergence rate. We develop an algorithm for our method based on repeated applications of available algorithms for forward regression. We also propose a clustering-based tuning procedure to estimate the tuning parameters. The effectiveness of our estimator and related algorithms is demonstrated via simulations and applications.
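As background, the classical outer-product-of-gradients (OPG) estimator for a continuous response, which the method above extends to multinomial responses, can be sketched as follows (the Gaussian kernel, bandwidth, and data model are illustrative choices, not from the paper):

```python
import numpy as np

def opg_direction(X, y, h=1.0, d=1):
    """Classical OPG estimator: fit a kernel-weighted local linear regression
    at each sample, average the outer products of the fitted slope vectors,
    and return the top-d eigenvectors as estimated reduction directions."""
    n, p = X.shape
    M = np.zeros((p, p))
    for i in range(n):
        diff = X - X[i]                                      # local design
        w = np.exp(-(diff ** 2).sum(axis=1) / (2 * h ** 2))  # kernel weights
        Z = np.hstack([np.ones((n, 1)), diff])               # intercept + slopes
        WZ = Z * w[:, None]
        # Weighted least squares via the normal equations
        beta = np.linalg.lstsq(WZ.T @ Z, WZ.T @ y, rcond=None)[0]
        M += np.outer(beta[1:], beta[1:])                    # slope outer product
    _, vecs = np.linalg.eigh(M / n)
    return vecs[:, -d:]                                      # leading eigenvectors
```

On data generated from a single-index model, the leading eigenvector should align closely with the true index direction.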
Mutual localization plays a crucial role in multi-robot cooperation. CREPES, a novel system that focuses on six degrees of freedom (DOF) relative pose estimation for multi-robot systems, is proposed in this paper. CREPES has a compact hardware design using active infrared (IR) LEDs, an IR fish-eye camera, an ultra-wideband (UWB) module and an inertial measurement unit (IMU). By leveraging IR light communication, the system solves data association between visual detection and UWB ranging. Ranging measurements from the UWB and directional information from the camera offer relative 3-DOF position estimation. Combining the mutual relative positions between neighbors with the gravity constraints provided by IMUs, we can estimate the 6-DOF relative pose from a single frame of sensor measurements. In addition, we design an estimator based on the error-state Kalman filter (ESKF) to enhance system accuracy and robustness. When multiple neighbors are available, a Pose Graph Optimization (PGO) algorithm is applied to further improve system accuracy. We conduct extensive experiments to demonstrate CREPES' accuracy between robot pairs and a team of robots, as well as performance under challenging conditions.
We consider a setting in which one swarm of agents is to service or track a second swarm, and formulate an optimal control problem which trades off between the competing objectives of servicing and motion costs. We consider the continuum limit where large-scale swarms are modeled in terms of their time-varying densities, and where the Wasserstein distance between two densities captures the servicing cost. We show how this nonlinear infinite-dimensional optimal control problem is intimately related to the geometry of Wasserstein space, and provide new results in the case of absolutely continuous densities and constant-in-time references. Specifically, we show that optimal swarm trajectories follow Wasserstein geodesics, while the optimal control tradeoff determines the time schedule of travel along these geodesics. We briefly describe how this solution also provides a basis for a model-predictive control scheme for tracking time-varying, real-time reference trajectories.
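In the one-dimensional Gaussian case, both the Wasserstein distance and its geodesic are available in closed form, which makes the "travel along a geodesic" picture above concrete: $W_2(\mathcal{N}(m_0,s_0^2),\mathcal{N}(m_1,s_1^2))^2 = (m_0-m_1)^2 + (s_0-s_1)^2$, and displacement interpolation linearly interpolates mean and standard deviation. This standard result is sketched below (it is a textbook special case, not the paper's general setting):

```python
import math

def w2_gauss_1d(m0, s0, m1, s1):
    """2-Wasserstein distance between 1-D Gaussians N(m0, s0^2) and N(m1, s1^2)."""
    return math.hypot(m0 - m1, s0 - s1)

def geodesic_gauss_1d(m0, s0, m1, s1, t):
    """Point at fraction t along the W2 geodesic (displacement interpolation):
    again a Gaussian, with linearly interpolated mean and standard deviation."""
    return ((1 - t) * m0 + t * m1, (1 - t) * s0 + t * s1)
```

A defining property of the geodesic is constant speed: the distance from the start to the point at fraction $t$ equals $t$ times the total distance.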
We consider search by mobile agents for a hidden, idle target, placed on the infinite line. Feasible solutions are agent trajectories in which all agents reach the target sooner or later. A special feature of our problem is that the agents are $p$-faulty, meaning that every attempt to change direction is an independent Bernoulli trial with known probability $p$, where $p$ is the probability that a turn fails. We are looking for agent trajectories that minimize the worst-case expected termination time, as evaluated through competitive analysis. First, we study linear search with one deterministic $p$-faulty agent, i.e., with no access to random oracles, $p\in (0,1/2)$. For this problem, we provide trajectories that leverage the probabilistic faults into an algorithmic advantage. Our strongest result pertains to a search algorithm (deterministic, aside from the adversarial probabilistic faults) which, as $p\to 0$, has optimal performance $4.59112+\epsilon$, up to the additive term $\epsilon$ that can be arbitrarily small. Additionally, it has performance less than $9$ for $p\leq 0.390388$. When $p\to 1/2$, our algorithm has performance $\Theta(1/(1-2p))$, which we also show is optimal up to a constant factor. Second, we consider linear search with two $p$-faulty agents, $p\in (0,1/2)$, for which we provide three algorithms of different advantages, all with a bounded competitive ratio even as $p\rightarrow 1/2$. Indeed, for this problem, we show how the agents can simulate the trajectory of any $0$-faulty agent (deterministic or randomized), independently of the underlying communication model. As a result, searching with two agents allows for a solution with a competitive ratio of $9+\epsilon$, or a competitive ratio of $4.59112+\epsilon$. Our final contribution is a novel algorithm for searching with two $p$-faulty agents that achieves a competitive ratio $3+4\sqrt{p(1-p)}$.
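The final two-agent ratio quoted above is a simple closed form whose endpoint behavior follows directly from the formula: it equals $3$ at $p=0$, grows monotonically on $(0,1/2)$, and stays bounded at $5$ as $p\to 1/2$. A quick numerical check:

```python
import math

def two_agent_ratio(p):
    """Competitive ratio 3 + 4*sqrt(p*(1-p)) of the final two-agent algorithm."""
    return 3 + 4 * math.sqrt(p * (1 - p))
```

This boundedness as $p\to 1/2$ contrasts with the single-agent case, whose performance grows as $\Theta(1/(1-2p))$.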
The Internet of Things (IoT) promises to connect vast numbers of devices via the internet, and as more individuals and devices come online, this communication is expected to generate enormous volumes of data. IoT currently leverages Wireless Sensor Networks (WSNs) to collect, monitor, and transmit sensitive data across wireless networks using sensor nodes. WSNs face a variety of threats posed by attackers, including unauthorized access and data breaches, especially in the context of the Internet of Things, where small embedded devices with limited computational capabilities, such as sensor nodes, are expected to connect to a larger network; as a result, WSNs are vulnerable to a variety of attacks. Furthermore, security is often implemented selectively and at a cost, as traditional security algorithms degrade network performance due to their computational complexity and inherent delays. This paper describes an encryption algorithm that combines the Secure IoT (SIT) algorithm with the Security Protocols for Sensor Networks (SPINS) security protocol to create a Lightweight Security Algorithm (LSA), which addresses data security concerns while reducing power consumption in WSNs without sacrificing performance.
It is important to quantify the uncertainty of input samples, especially in mission-critical domains such as autonomous driving and healthcare, where failed predictions on out-of-distribution (OOD) data can have severe consequences. The OOD detection problem fundamentally arises because a model cannot express what it is not aware of. Post-hoc OOD detection approaches are widely explored because they do not require an additional re-training process, which might degrade the model's performance and increase the training cost. In this study, viewing neurons in the deep layers of the model as representing high-level features, we introduce a new perspective for analyzing the difference in model outputs between in-distribution and OOD data. We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc OOD detection. Shapley-value-based pruning reduces the effect of noisy outputs by selecting only high-contribution neurons for predicting specific classes of input data and masking the rest. Activation clipping fixes all values above a certain threshold to the same value, allowing LINe to treat all class-specific features equally and consider only the difference in the number of activated features between in-distribution and OOD data. Comprehensive experiments verify the effectiveness of the proposed method, which outperforms state-of-the-art post-hoc OOD detection methods on the CIFAR-10, CIFAR-100, and ImageNet datasets.
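The two operations described above, masking low-contribution neurons and clipping activations, can be sketched in isolation as follows. The contribution scores here are a hypothetical stand-in for the paper's Shapley-value-based scores, and the keep fraction and clip threshold are illustrative parameters:

```python
import numpy as np

def line_score(features, contrib, keep_frac=0.3, clip=1.0):
    """Mask all but the highest-contribution feature dimensions, clip the
    surviving activations at a fixed threshold, and return the summed
    activation as a score (higher = more in-distribution-like)."""
    k = max(1, int(keep_frac * contrib.size))
    keep = np.argsort(contrib)[-k:]           # indices of top-k contributors
    mask = np.zeros_like(contrib, dtype=bool)
    mask[keep] = True
    clipped = np.minimum(features, clip)      # activation clipping
    return (clipped * mask).sum(axis=-1)
```

After clipping, every activated surviving feature contributes at most the same capped amount, so the score mainly reflects how many class-specific features fire, which is the quantity the method compares between in-distribution and OOD inputs.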
Communication load balancing aims to balance the load between different available resources, and thus improve the quality of service for network systems. Once load balancing (LB) is formulated as a Markov decision process, reinforcement learning (RL) has recently proven effective in addressing it. To leverage the benefits of classical RL for load balancing, however, we need an explicit reward definition. Engineering this reward function is challenging because it requires expert knowledge and there is no general consensus on the form of an optimal reward function. In this work, we tackle the communication load balancing problem with an inverse reinforcement learning (IRL) approach. To the best of our knowledge, this is the first time IRL has been successfully applied in the field of communication load balancing. Specifically, we first infer a reward function from a set of demonstrations, and then learn a reinforcement learning load balancing policy with the inferred reward function. Compared to classical RL-based solutions, the proposed solution can be more general and more suitable for real-world scenarios. Experimental evaluations on different simulated traffic scenarios show our method to be effective and better than other baselines by a considerable margin.
Unmanned aerial vehicle (UAV) swarm-enabled edge computing is envisioned to be promising in sixth-generation wireless communication networks due to its wide range of application scenarios and flexible deployment. However, most existing works focus on edge computing enabled by a single UAV or a small number of UAVs, which differs substantially from UAV swarm-enabled edge computing. To facilitate practical applications of UAV swarm-enabled edge computing, this article presents the state of the art in this area. The potential applications, architectures, and implementation considerations are illustrated. Moreover, the promising enabling technologies for UAV swarm-enabled edge computing are discussed. Furthermore, we outline challenges and open issues to shed light on future research directions.