Finding a computable expression for the feedback capacity of channels with additive colored Gaussian noise is a long-standing open problem. In this paper, we solve this problem in the scenario where the channel has multiple inputs and multiple outputs (MIMO) and the noise process is generated as the output of a time-invariant state-space model. Our main result is a computable expression for the feedback capacity in terms of a finite-dimensional convex optimization. The solution to the feedback capacity problem is obtained by formulating the finite-block counterpart of the capacity problem as a \emph{sequential convex optimization problem}, which in turn leads to a single-letter upper bound. This converse derivation integrates tools and ideas from information theory, control, filtering, and convex optimization. A tight lower bound is realized by optimizing over a family of time-invariant policies, thus showing that time-invariant inputs are optimal even when the noise process may not be stationary. The optimal time-invariant policy is used to construct a simple, capacity-achieving coding scheme for scalar channels, and its analysis reveals an interesting relation between a smoothing problem and the feedback capacity expression.
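To convey the flavor of expressing capacity as a finite-dimensional convex optimization, the hedged sketch below solves the classical (non-feedback) MIMO Gaussian capacity problem as a log-det program in CVXPY. The paper's feedback formulation with state-space noise is different and more involved; the channel matrix, noise covariance, and power budget here are illustrative placeholders.

```python
import cvxpy as cp
import numpy as np

# Illustrative sketch only: the classical (no-feedback) MIMO Gaussian
# capacity as a finite-dimensional convex log-det program. The paper's
# feedback-capacity program differs; H, N, and P are made-up data.
# Requires a solver that handles log-det (e.g., SCS, shipped with CVXPY).
n = 3
rng = np.random.default_rng(0)
H = rng.standard_normal((n, n))      # channel matrix (placeholder)
N = np.eye(n)                        # noise covariance (placeholder)
P = 10.0                             # transmit power budget

S = cp.Variable((n, n), PSD=True)    # input covariance to optimize
rate = 0.5 * (cp.log_det(N + H @ S @ H.T) - np.linalg.slogdet(N)[1])
prob = cp.Problem(cp.Maximize(rate), [cp.trace(S) <= P])
prob.solve()
print("achievable rate (nats/use):", round(prob.value, 3))
```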
Haptic perception is critically important for immersive teleoperation of robots, especially for accomplishing manipulation tasks. We propose a low-cost haptic sensing and rendering system that is capable of detecting and displaying surface roughness. As the robot fingertip moves across a surface of interest, two microphones capture sound coupled directly through the fingertip and through the air, respectively. A learning-based detector system analyzes the data in real time and produces roughness estimates with both high temporal resolution and low latency. Finally, an audio-based haptic actuator displays the result to the human operator. We demonstrate the effectiveness of our system through experiments and through our winning entry in the ANA Avatar XPRIZE competition finals, where impartial judges solved a roughness-based selection task even without additional vision feedback. We publish the dataset used for training and evaluation, together with our trained models, to enable reproducibility.
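As a rough illustration of the frame-based, low-latency detection idea (our reading of the abstract, not the authors' trained model), the sketch below extracts a simple spectral roughness score per 10 ms audio frame; the frequency band and the log-energy feature are assumptions.

```python
import numpy as np

# Hedged sketch: per-frame spectral-energy roughness score from a contact
# microphone signal. The band limits and log compression are assumptions,
# not the paper's learned detector.
def roughness_scores(signal, fs=48_000, frame=480):
    """Yield one score per 10 ms frame (frame=480 samples at 48 kHz)."""
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)
    band = (freqs > 200) & (freqs < 2000)        # assumed texture band
    for i in range(0, len(signal) - frame, frame):
        spec = np.abs(np.fft.rfft(signal[i:i + frame] * window))
        yield np.log1p(np.sum(spec[band] ** 2))  # compressed band energy

dummy = np.random.default_rng(0).standard_normal(48_000)   # 1 s of noise
print(list(roughness_scores(dummy))[:3])
```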
In cooperative multi-agent robotic systems, coordination is necessary in order to complete a given task. Important examples include search and rescue, operations in hazardous environments, and environmental monitoring. Coordination, in turn, requires the simultaneous satisfaction of safety-critical constraints, in the form of state and input constraints, and of a connectivity constraint, which ensures that at every time instant there exists a communication path between every pair of agents in the network. In this work, we present a model predictive controller that performs multi-agent coordination while simultaneously satisfying safety-critical and connectivity constraints. The former are formulated as state and input constraints, and the latter as a constraint on the second-smallest eigenvalue of the communication graph Laplacian, also known as the Fiedler eigenvalue, which enforces the connectivity of the communication network. We propose a sequential quadratic programming formulation of the associated optimization problem that is amenable to distributed optimization, making the proposed solution suitable for control of multi-agent robotic systems relying on local computation. Finally, the effectiveness of the algorithm is highlighted with a numerical simulation.
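For concreteness, here is a minimal sketch of the connectivity check implied by the abstract: build the Laplacian of a proximity-based communication graph and verify that its second-smallest eigenvalue (the Fiedler eigenvalue) stays positive. The disk communication model and threshold are assumptions.

```python
import numpy as np

# Minimal sketch: Fiedler eigenvalue of a proximity graph. A disk
# communication model (assumed here) connects agents within comm_radius.
def fiedler_eigenvalue(positions, comm_radius):
    n = len(positions)
    A = np.zeros((n, n))                          # adjacency matrix
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) <= comm_radius:
                A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
    return np.sort(np.linalg.eigvalsh(L))[1]      # second-smallest eigenvalue

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
lam2 = fiedler_eigenvalue(pos, comm_radius=1.2)
print("lambda_2 =", round(lam2, 3),
      "->", "connected" if lam2 > 1e-9 else "disconnected")
```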
In this work, we investigate the sum-rate performance of multicell and cell-free massive MIMO systems using linear precoding and multiuser scheduling algorithms. We consider the use of a network-centric clustering approach to reduce the computational complexity of the techniques applied to the cell-free system. We then develop a greedy algorithm that considers multiple candidates for the subset of users to be scheduled and approaches the performance of the optimal exhaustive search. We assess the proposed and existing scheduling algorithms in both multicell and cell-free networks with the same coverage area. Numerical results illustrate the sum-rate performance of the proposed scheduling algorithm compared with existing approaches.
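The following is a hedged sketch of the plain greedy scheduling idea the abstract builds on (the multi-candidate variant is more elaborate): users are added one at a time whenever they improve a zero-forcing sum rate. The ZF rate model, unit noise power, and equal power split are assumptions.

```python
import numpy as np

# Hedged sketch: greedy user scheduling under a zero-forcing sum-rate
# metric (assumed model, unit noise power, equal power split).
def zf_sum_rate(H, total_power):
    inv = np.linalg.inv(H @ H.conj().T)          # ZF effective noise gains
    p = total_power / H.shape[0]
    return np.sum(np.log2(1 + p / np.real(np.diag(inv))))

def greedy_schedule(H_all, total_power, max_users):
    selected, best = [], 0.0
    while len(selected) < max_users:
        rates = [(zf_sum_rate(H_all[selected + [k]], total_power), k)
                 for k in range(len(H_all)) if k not in selected]
        rate, k = max(rates)
        if rate <= best:                         # stop when no improvement
            break
        selected.append(k)
        best = rate
    return selected, best

H = np.random.default_rng(0).standard_normal((8, 16))   # 8 users, 16 antennas
print(greedy_schedule(H, total_power=10.0, max_users=4))
```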
Ramp metering is one of the most effective tools to combat traffic congestion. In this paper, we present a ramp metering policy for a network of freeways with an arbitrary number of on- and off-ramps and merge and diverge junctions. The proposed policy is designed at the microscopic level and takes into account vehicle-following safety constraints. In addition, each on-ramp operates in cycles, during which it releases vehicles as long as the number of releases does not exceed its queue size at the start of the cycle. Moreover, each on-ramp dynamically adjusts its release rate based on the traffic conditions. To evaluate the performance of the policy, we analyze its throughput, which is characterized by the set of arrival rates for which the queue sizes at all on-ramps remain bounded in expectation. We show that the proposed policy maximizes the throughput if the merging speed at all on-ramps is equal to the free-flow speed and the network has no merge junctions. We provide simulations to illustrate the performance of our policy and compare it with a well-known policy from the literature.
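A toy sketch of the cycle-based release rule follows (assumed discrete-time queue dynamics; the arrival process and rates are placeholders): during each cycle, an on-ramp releases at most as many vehicles as were queued when the cycle started.

```python
import numpy as np

# Toy sketch of the cycle-based release rule (assumed dynamics): the quota
# for each cycle is the queue length at the start of that cycle.
rng = np.random.default_rng(1)
queue, cycle, steps = 0, 10, 600
quota = released = 0
history = []
for t in range(steps):
    queue += rng.poisson(0.4)                    # arrivals (placeholder rate)
    if t % cycle == 0:                           # new cycle: snapshot queue
        quota, released = queue, 0
    if released < quota and queue > 0:
        queue -= 1                               # release one vehicle
        released += 1
    history.append(queue)
print("final queue:", queue, "| max queue:", max(history))
```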
PDDLStream solvers have recently emerged as viable solutions for Task and Motion Planning (TAMP) problems, extending PDDL to problems with continuous action spaces. Prior work has shown how PDDLStream problems can be reduced to a sequence of PDDL planning problems, which can then be solved using off-the-shelf planners. However, this approach can suffer from long runtimes. In this paper, we propose LAZY, a solver for PDDLStream problems that maintains a single integrated search over action skeletons, which becomes progressively more geometrically informed as samples of possible motions are lazily drawn during motion planning. We explore how learned models of goal-directed policies and current motion sampling data can be incorporated into LAZY to adaptively guide the task planner. We show that this leads to significant speed-ups in the search for a feasible solution, evaluated on unseen test environments with varying numbers of objects, goals, and initial conditions. We evaluate our TAMP approach by comparing it to existing solvers for PDDLStream problems on a range of simulated 7-DoF rearrangement/manipulation problems.
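The sketch below is a schematic reading of the lazy-search idea (the interfaces and cost structure are our assumptions, not the LAZY implementation): a single priority queue over action skeletons, with the expensive motion sampler invoked only when a skeleton reaches the goal.

```python
import heapq

# Schematic sketch of a lazy skeleton search (assumed interfaces): `expand`
# yields (cost, child_skeleton) pairs, `sample_motions` returns a fully
# grounded plan or None if no feasible motions are found.
def lazy_search(initial_skeleton, expand, is_goal, sample_motions):
    queue, tie = [(0.0, 0, initial_skeleton)], 1
    while queue:
        cost, _, skel = heapq.heappop(queue)
        if is_goal(skel):
            plan = sample_motions(skel)           # lazy geometric grounding
            if plan is not None:
                return plan                       # feasible motions found
            continue                              # infeasible: keep searching
        for step_cost, child in expand(skel):
            heapq.heappush(queue, (cost + step_cost, tie, child))
            tie += 1                              # tie-breaker for the heap
    return None                                   # search space exhausted
```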
In this paper, we present an unsupervised learning neural model for designing transmit precoders for integrated sensing and communication (ISAC) systems, maximizing the worst-case target illumination power while ensuring a minimum signal-to-interference-plus-noise ratio (SINR) for all users. The problem of learning transmit precoders from uplink pilots and echoes can be viewed as a parameterized function estimation problem, and we propose to learn this function using a neural network model. To learn the neural network parameters, we develop a novel loss function based on the first-order optimality conditions to incorporate the SINR and power constraints. Through numerical simulations, we demonstrate that the proposed method outperforms traditional optimization-based methods in the presence of channel estimation errors, while incurring lower computational complexity and generalizing well to channel conditions not seen during training.
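As a hedged illustration of a constraint-aware training loss in this spirit (a simple penalty form; the paper's loss derived from first-order optimality conditions may differ), the sketch below penalizes SINR shortfalls and power-budget violations alongside the negated worst-case illumination power. All values and multipliers are placeholders.

```python
import numpy as np

# Hedged sketch: penalty-based training loss for worst-case illumination
# maximization under SINR and power constraints (placeholder multipliers;
# not the paper's exact first-order-optimality construction).
def isac_loss(illum_power, sinr, precoder_power,
              sinr_min=1.0, power_budget=1.0, lam=10.0, mu=10.0):
    objective = -np.min(illum_power)                     # maximize worst case
    sinr_penalty = np.sum(np.maximum(sinr_min - sinr, 0.0))
    power_penalty = max(precoder_power - power_budget, 0.0)
    return objective + lam * sinr_penalty + mu * power_penalty

print(isac_loss(np.array([0.8, 1.2]), np.array([1.5, 0.9]), 1.05))
```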
The brain is a nonlinear and highly recurrent neural network (RNN). This RNN is surprisingly plastic and supports our remarkable ability to learn and execute complex tasks. However, learning is extremely difficult to explain due to the brain's nonlinear nature and the obscurity of the mechanisms that determine the contribution of each synapse to the output error. This issue is known as the Credit Assignment Problem (CAP) and is a fundamental challenge in both neuroscience and Artificial Intelligence (AI). Nevertheless, in the current understanding of cognitive neuroscience, it is widely accepted that feedback loop systems play an essential role in synaptic plasticity. With this as inspiration, we propose a computational model that combines neural networks (NNs) and nonlinear optimal control theory. The proposed framework involves a new NN-based actor-critic method, which is used to simulate the error feedback loop systems and their projections onto the NN's synaptic plasticity so as to ensure that the output error is minimized.
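To make the actor-critic feedback-loop idea concrete, here is a toy sketch (entirely illustrative; not the paper's model): a scalar linear plant is controlled by a linear "actor", a quadratic "critic" estimates the cost-to-go, and the temporal-difference error serves as the broadcast feedback signal driving both plasticity updates. The dynamics, learning rates, and update rules are assumptions.

```python
import numpy as np

# Toy sketch (illustrative assumptions throughout): TD error as a broadcast
# feedback signal. Actor: u = -k*x + noise; critic: V(x) ~ v * x^2.
rng = np.random.default_rng(0)
k, v = 0.0, 0.0                                  # actor gain, critic weight
alpha, gamma, sigma = 0.005, 0.95, 0.1
for episode in range(3000):
    x = rng.uniform(-1, 1)
    for _ in range(20):
        noise = sigma * rng.standard_normal()    # exploratory perturbation
        u = -k * x + noise
        x_next = 0.9 * x + u                     # assumed scalar plant
        cost = x * x + 0.1 * u * u
        td = cost + gamma * v * x_next**2 - v * x**2   # feedback error signal
        v += alpha * td * x**2                   # critic plasticity (semi-gradient)
        k += alpha * td * noise * x              # actor plasticity (REINFORCE-style)
        x = x_next
print("learned feedback gain k ~", round(k, 2))
```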
In a task where many similar inverse problems must be solved, repeatedly evaluating a costly simulation is impractical. Therefore, replacing the model $y$ with a surrogate model $y_s$ that can be evaluated quickly leads to a significant speedup. The approximation quality of the surrogate model depends strongly on the number, position, and accuracy of the sample points. Under a finite computational budget, this leads to a problem of (computer) experimental design. In contrast to the selection of sample points, the trade-off between accuracy and effort has hardly been studied systematically. We therefore propose an adaptive algorithm to find an optimal design in terms of position and accuracy. Pursuing a sequential design by incrementally spending the computational budget leads to a convex and constrained optimization problem. As a surrogate, we construct a Gaussian process regression model. We measure the global approximation error in terms of its impact on the accuracy of the identified parameter and aim for a uniform absolute tolerance, assuming that $y$ is computed by finite element calculations. A priori error estimates and a coarse model of the computational effort relate the expected improvement of the surrogate model error to the computational effort, resulting in the most efficient combination of sample point and evaluation tolerance. We also allow for improving the accuracy of already existing sample points by continuing previously truncated finite element solution procedures.
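A compact sketch of one sequential-design step in this spirit follows (simplified: candidates are scored by predictive standard deviation per unit of an assumed cost model, whereas the paper couples sample position with FEM evaluation tolerance); the 1-D toy function stands in for the expensive model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Simplified sketch of a sequential design step: fit a GP surrogate, then
# pick the candidate with the best uncertainty-per-cost ratio. The cost
# model and the 1-D toy function are assumptions.
def next_design_point(X, y, candidates, cost):
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-6).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(std / cost(candidates))]

f = lambda x: np.sin(6 * x).ravel()              # stand-in for the FEM model
X = np.array([[0.1], [0.9]])
y = f(X)
cand = np.linspace(0, 1, 200).reshape(-1, 1)
cost = lambda c: 1.0 + 2.0 * c.ravel()           # assumed evaluation cost
for _ in range(5):
    x_new = next_design_point(X, y, cand, cost)
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new.reshape(1, -1)))
print("selected design points:", X.ravel().round(2))
```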
Accurate and efficient uncertainty estimation is crucial to build reliable Machine Learning (ML) models capable of providing calibrated uncertainty estimates, generalizing, and detecting Out-Of-Distribution (OOD) datasets. To this end, Deterministic Uncertainty Methods (DUMs) are a promising model family capable of performing uncertainty estimation in a single forward pass. This work investigates important design choices in DUMs: (1) we show that training schemes that decouple the core architecture from the uncertainty head can significantly improve uncertainty performance; (2) we demonstrate that the expressiveness of the core architecture is crucial for uncertainty performance and that additional architectural constraints to avoid feature collapse can deteriorate the trade-off between OOD generalization and detection; (3) contrary to other Bayesian models, we show that the prior defined by DUMs does not have a strong effect on the final performance.
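The sketch below illustrates design choice (1) in miniature (an assumed setup, with raw 2-D features standing in for a deep backbone's representation): a classifier is trained first, and a Gaussian-density uncertainty head is fitted afterwards on the frozen feature space, giving a single-pass Mahalanobis OOD score.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Miniature sketch of a decoupled DUM-style pipeline (assumed setup):
# core model first, then a density-based uncertainty head on frozen features.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)
core = LogisticRegression().fit(X, y)            # step 1: core model

means = np.array([X[y == c].mean(axis=0) for c in range(3)])
cov = np.cov(X.T) + 1e-6 * np.eye(2)             # shared covariance (GDA-style)
prec = np.linalg.inv(cov)

def ood_score(x):
    """Mahalanobis distance to the closest class; higher = more OOD."""
    d = x - means
    return np.min(np.einsum('ci,ij,cj->c', d, prec, d))

print("in-distribution:", round(ood_score(X[0]), 1),
      "| far away:", round(ood_score(np.array([20.0, 20.0])), 1))
```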
Balanced Singular Perturbation Approximation (SPA) is a model order reduction method for linear time-invariant systems that guarantees asymptotic stability and for which an a priori error bound exists. In these respects, it is similar to Balanced Truncation (BT). However, the reduced models obtained by SPA generally provide better approximations in the lower frequency range and near steady states, whereas BT is better suited for the higher frequency range. Even so, independently of the frequency range of interest, BT and its variants are more often applied in practice, since more efficient algorithmic realizations of them exist. In this paper, we aim at closing this practically relevant gap for SPA. We propose two novel and efficient algorithms that are adapted to different settings. Firstly, we derive a low-rank implementation of SPA that is applicable in the large-scale setting. Secondly, a data-driven reinterpretation of the method is proposed that only requires input-output data and is thus realization-free. A main tool for our derivations is the reciprocal transformation, which induces a distinct view on implementing the method. While the reciprocal transformation and this characterization of SPA are not new, their significance for practical algorithmic realizations has been overlooked in the literature. Our proposed algorithms have well-established counterparts for BT and, as such, a comparable computational complexity. The numerical performance of the two novel implementations is tested on several numerical benchmarks, and comparisons are made to their BT counterparts as well as to existing implementations of SPA.
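For reference, a dense textbook-style sketch of balanced SPA for small systems follows (the paper's contribution is precisely avoiding this dense $O(n^3)$ route in the large-scale and data-driven settings); the system matrices below are placeholders. Note that SPA matches the original transfer function exactly at $s=0$, which the printed DC gain illustrates.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# Dense textbook sketch of balanced SPA (small systems only; the paper's
# low-rank and data-driven algorithms avoid this route).
def balanced_spa(A, B, C, D, r):
    P = solve_continuous_lyapunov(A, -B @ B.T)   # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C) # observability Gramian
    Lc, Lo = cholesky(P, lower=True), cholesky(Q, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                    # Hankel singular values in s
    T = Lc @ Vt.T / np.sqrt(s)                   # balancing transformation
    Ti = (U / np.sqrt(s)).T @ Lo.T
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T       # balanced realization
    A11, A12 = Ab[:r, :r], Ab[:r, r:]
    A21, A22 = Ab[r:, :r], Ab[r:, r:]
    B1, B2, C1, C2 = Bb[:r], Bb[r:], Cb[:, :r], Cb[:, r:]
    X = np.linalg.solve(A22, np.hstack([A21, B2]))   # A22^{-1} [A21  B2]
    Ar = A11 - A12 @ X[:, :r]                    # singular perturbation
    Br = B1 - A12 @ X[:, r:]                     # (steady-state) correction
    Cr = C1 - C2 @ X[:, :r]
    Dr = D - C2 @ X[:, r:]
    return Ar, Br, Cr, Dr

A = np.diag([-1.0, -2.0, -5.0, -10.0])           # placeholder stable system
B = np.ones((4, 1)); C = np.ones((1, 4)); D = np.zeros((1, 1))
Ar, Br, Cr, Dr = balanced_spa(A, B, C, D, r=2)
print("reduced DC gain:", (Cr @ np.linalg.solve(-Ar, Br) + Dr).item())
```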