This paper studies the tracking control problem of networked multi-agent systems under both multiple networks and event-triggered mechanisms. The multiple networks connect the agents and their reference systems with decentralized controllers to enable information transmission, whereas the event-triggered mechanisms reduce the amount of information transmitted over the networks. In this paper, each agent has a network to communicate with its controller and reference system; all networks are independent and asynchronous, and each has a local event-triggered mechanism that, based on local measurements, determines whether those measurements need to be transmitted via the corresponding network. To address this scenario, we first apply the emulation-based approach to develop a novel hybrid model for the tracking control of networked multi-agent systems. Next, sufficient conditions are derived and decentralized event-triggered mechanisms are designed to guarantee the desired tracking performance. Furthermore, the proposed approach is applied to derive novel results for the event-triggered observer design problem of networked multi-agent systems. Finally, two numerical examples are presented to illustrate the validity of the developed results.
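As a rough illustration of the kind of decentralized event-triggered mechanism described above (not the paper's hybrid-model design), the following Python sketch transmits a local measurement only when its deviation from the last transmitted value exceeds a relative threshold; the parameters sigma and eps are illustrative assumptions.

```python
import numpy as np

def simulate_event_trigger(x_traj, sigma=0.1, eps=1e-3):
    """Relative-threshold event trigger: transmit the local measurement
    whenever the error ||x(t) - x_last|| exceeds sigma*||x(t)|| + eps.
    Returns the indices of triggering instants."""
    x_last = x_traj[0]
    events = [0]
    for k, x in enumerate(x_traj[1:], start=1):
        if np.linalg.norm(x - x_last) > sigma * np.linalg.norm(x) + eps:
            x_last = x          # transmit and update the held value
            events.append(k)
    return events

# Demo: a decaying oscillation sampled at 1 kHz.
t = np.linspace(0.0, 5.0, 5001)
x = np.stack([np.exp(-t) * np.cos(4 * t), np.exp(-t) * np.sin(4 * t)], axis=1)
events = simulate_event_trigger(x)
print(f"{len(events)} transmissions instead of {len(t)} periodic samples")
```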
The stochastic nature of iterative optimization heuristics leads to inherently noisy performance measurements. Since these measurements are often gathered once and then used repeatedly, the number of collected samples has a significant impact on the reliability of algorithm comparisons. We show that care should be taken when making decisions based on limited data. In particular, we show that the number of runs used in many benchmarking studies, e.g., the default value of 15 suggested by the COCO environment, can be insufficient to reliably rank algorithms on well-known numerical optimization benchmarks. Additionally, methods for automated algorithm configuration are sensitive to insufficient sample sizes, which may result in the configurator choosing a 'lucky' but poor-performing configuration despite having explored better ones. We show that relying on mean performance values, as many configurators do, can require a large number of runs to provide accurate comparisons between the considered configurations. Common statistical tests can greatly improve the situation in most cases, but not always. We show examples of performance losses of more than 20%, even when using statistical races to dynamically adjust the number of runs, as done by irace. Our results underline the importance of appropriately considering the statistical distribution of performance values.
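To make the sample-size effect concrete, here is a small Python experiment (with fabricated lognormal run-time distributions, not data from the paper) estimating how often the truly worse of two algorithms wins a mean-based comparison at different numbers of runs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two hypothetical algorithms: A is better on average, but the heavy
# overlap of their performance distributions makes small samples noisy.
def sample_runs(n, loc):
    return rng.lognormal(mean=loc, sigma=0.5, size=n)

def misranking_rate(n_runs, trials=5000):
    """Fraction of comparisons in which the worse algorithm B obtains
    the better (lower) sample mean over n_runs runs each."""
    flips = 0
    for _ in range(trials):
        a = sample_runs(n_runs, loc=0.00)   # truly better algorithm
        b = sample_runs(n_runs, loc=0.15)   # truly worse algorithm
        flips += a.mean() > b.mean()
    return flips / trials

for n in (15, 50, 200):
    print(f"n={n:3d}: misranking rate {misranking_rate(n):.3f}")

# A rank-based test on one 15-run sample often cannot separate them either:
a, b = sample_runs(15, 0.00), sample_runs(15, 0.15)
print("Mann-Whitney p-value:", stats.mannwhitneyu(a, b).pvalue)
```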
We propose a real-time vision-based teleoperation approach for robotic arms that employs a single depth camera, freeing the user from the need for any wearable devices. By employing a natural user interface, this novel approach replaces conventional fine-tuning control with a direct body-pose capture process. The proposed approach comprises two main parts. The first is a nonlinear, customizable pose mapping based on Thin-Plate Splines (TPS), which directly transfers human body motion to robotic arm motion in a nonlinear fashion, thus allowing dissimilar bodies with different workspace shapes and kinematic constraints to be matched. The second is a deep-neural-network hand-state classifier based on Long-term Recurrent Convolutional Networks (LRCN) that exploits the temporal coherence of the acquired depth data. We validate, evaluate, and compare our approach through classical cross-validation experiments on the proposed hand-state classifier, and through user studies over a set of practical experiments involving variants of pick-and-place and manufacturing tasks. Results revealed that LRCN networks outperform single-image convolutional neural networks, and that users' learning curves were steep, allowing the successful completion of the proposed tasks. When compared to a previous approach, the TPS approach revealed no increase in task complexity and similar completion times, while providing more precise operation in regions closer to workspace boundaries.
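A minimal sketch of the TPS pose-mapping idea, using SciPy's thin-plate-spline interpolator; the calibration correspondences below are hypothetical and stand in for the user-specific mapping described in the paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical calibration correspondences: a few human wrist positions
# recorded by the depth camera, paired with desired robot end-effector
# positions. The TPS map interpolates these exactly and deforms smoothly
# in between, accommodating dissimilar workspace shapes.
human_pts = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.0, 0.3, 0.0],
                      [0.0, 0.0, 0.3], [0.3, 0.3, 0.3], [-0.3, 0.1, 0.2]])
robot_pts = np.array([[0.4, 0.0, 0.2], [0.6, 0.0, 0.2], [0.4, 0.2, 0.2],
                      [0.4, 0.0, 0.5], [0.6, 0.2, 0.5], [0.2, 0.1, 0.4]])

tps = RBFInterpolator(human_pts, robot_pts, kernel='thin_plate_spline')

# Map a newly captured wrist position into the robot workspace.
wrist = np.array([[0.15, 0.12, 0.1]])
print("commanded end-effector position:", tps(wrist)[0])
```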
In this paper, we consider a resilient consensus problem for multi-agent networks in which some of the agents are subject to Byzantine attacks and may transmit erroneous state values to their neighbors. In particular, we develop an event-triggered update rule to tackle this problem and, at the same time, reduce the communication load for each agent. Our approach is based on the mean subsequence reduced (MSR) algorithm, with agents capable of communicating with multi-hop neighbors. Since delays are critical in such an environment, we provide necessary graph conditions for the proposed algorithm to perform correctly in the presence of communication delays. We highlight that through multi-hop communication, the required network connectivity can be reduced compared with the common one-hop communication case. Lastly, we demonstrate the effectiveness of the proposed algorithm through a numerical example.
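For readers unfamiliar with MSR-type updates, the following Python sketch implements the classic one-hop W-MSR step; the paper's contribution adds multi-hop communication, event triggering, and delay tolerance on top of this basic idea:

```python
def wmsr_update(own, neighbor_values, f):
    """One-hop W-MSR step: among the received values, discard the (up to)
    f largest that exceed one's own value and the (up to) f smallest that
    fall below it, then average the remainder with one's own value."""
    vals = sorted(neighbor_values)
    larger = [v for v in vals if v > own]
    smaller = [v for v in vals if v < own]
    drop = larger[len(larger) - min(f, len(larger)):] + smaller[:min(f, len(smaller))]
    kept = vals[:]
    for v in drop:
        kept.remove(v)
    return (own + sum(kept)) / (1 + len(kept))

# Demo: 4 normal nodes and 1 Byzantine node on a complete graph, f = 1.
x = [1.0, 2.0, 3.0, 4.0]            # normal nodes' states
for step in range(30):
    byz = 100.0                      # attacker broadcasts an extreme value
    x = [wmsr_update(x[i], [x[j] for j in range(4) if j != i] + [byz], f=1)
         for i in range(4)]
print("normal states after 30 steps:", [round(v, 3) for v in x])
```

Despite the attacker's extreme broadcasts, the normal nodes converge to a common value inside the range of their initial states, because each node discards the most extreme received values before averaging.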
Multi-Agent Systems (MAS) are notoriously complex and hard to verify. In fact, it is not trivial to model a MAS, and even when a model is built, it is not always possible to formally verify that it actually behaves as expected. Usually, it is relevant to know whether an agent is capable of fulfilling its own goals. One possible way to check this is through model checking, specifically by verifying Alternating-time Temporal Logic (ATL) properties, in which the notion of strategies for achieving goals can be expressed. Unfortunately, the resulting model checking problem is not decidable in general. In this paper, we present a verification procedure that combines model checking and runtime verification: sub-models of the MAS model belonging to decidable fragments are verified by a model checker, while runtime monitors are used to verify the rest. Furthermore, we implement our technique and report experimental results.
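As a minimal illustration of the runtime-verification half of this combination (using a simple bounded-response property rather than the paper's ATL machinery), a monitor can be written as a small state machine that consumes an execution trace event by event:

```python
from enum import Enum

class Verdict(Enum):
    INCONCLUSIVE = 0
    VIOLATION = 1

class ResponseMonitor:
    """Minimal runtime monitor for the bounded-response property
    'every request is followed by a grant within `bound` steps'.
    It reports a violation as soon as the property can no longer
    hold on the observed prefix."""
    def __init__(self, bound=3):
        self.bound = bound
        self.pending = None   # steps elapsed since the oldest open request

    def step(self, event):
        if self.pending is not None:
            self.pending += 1
            if event == "grant":
                self.pending = None
            elif self.pending > self.bound:
                return Verdict.VIOLATION
        if event == "request" and self.pending is None:
            self.pending = 0
        return Verdict.INCONCLUSIVE

mon = ResponseMonitor(bound=3)
trace = ["idle", "request", "idle", "idle", "grant",
         "request", "idle", "idle", "idle", "idle"]
for t, ev in enumerate(trace):
    if mon.step(ev) is Verdict.VIOLATION:
        print(f"violation detected at step {t} (event {ev!r})")
        break
```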
A novel distributed control law for consensus of networked double-integrator systems with biased measurements is developed in this article. The agents measure relative positions over a time-varying, undirected graph, with an unknown constant sensor bias corrupting the measurements. An adaptive control law is derived using Lyapunov methods to accurately estimate the individual sensor biases. The proposed algorithm ensures that position consensus is achieved exponentially in addition to bias estimation. The results leverage recent advances in adaptive estimation based on collective initial excitation. Conditions connecting bipartite graphs and collective initial excitation are also developed. The algorithms are illustrated via simulation studies on a network of double integrators with local communication and biased measurements.
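The paper's adaptive law is not reproduced here, but the following toy Python simulation illustrates the problem it addresses: under a standard, non-adaptive double-integrator consensus protocol, constant biases in the relative-position measurements prevent the agents from reaching consensus (the graph, gains, and bias values are illustrative assumptions):

```python
import numpy as np

# Toy illustration (not the paper's adaptive law): double-integrator
# consensus over a line graph, where agent i's relative-position sensor
# carries an unknown constant bias b_i. The standard protocol
# u_i = -sum_j (p_i - p_j + b_i) - k*v_i then fails to reach consensus.
n, k, dt, T = 4, 2.0, 0.01, 4000
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
              [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)   # line graph
p = np.array([0.0, 1.0, 3.0, 6.0])
v = np.zeros(n)
b = np.array([0.5, -0.3, 0.2, -0.4])        # unknown sensor biases

for _ in range(T):
    u = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                u[i] -= p[i] - p[j] + b[i]  # biased relative measurement
        u[i] -= k * v[i]                    # velocity damping
    v += dt * u
    p += dt * v

# Positions settle to non-identical, bias-shifted values (and drift):
print("final positions:", np.round(p, 3))
```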
Federated learning (FL) has been recognized as a viable distributed learning paradigm that trains a machine learning model collaboratively with massive numbers of mobile devices at the wireless edge while protecting user privacy. Although various communication schemes have been proposed to expedite the FL process, most of them assume ideal wireless channels that provide reliable and lossless communication links between the server and mobile clients. Unfortunately, in practical systems with limited radio resources, such as constraints on training latency, transmission power, and bandwidth, the transmission of a large number of model parameters inevitably suffers from quantization errors (QE) and transmission outage (TO). In this paper, we consider such non-ideal wireless channels and carry out the first analysis showing that FL convergence can be severely jeopardized by TO and QE, but, intriguingly, the effect can be alleviated if the clients have uniform outage probabilities. These insights motivate us to propose a robust FL scheme, named FedTOE, which performs joint allocation of wireless resources and quantization bits across the clients to minimize the QE while making the clients have the same TO probability. Extensive experimental results are presented to show the superior performance of FedTOE for deep-learning-based classification tasks under transmission latency constraints.
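The following Python sketch (with fabricated client updates and illustrative parameters, not FedTOE's actual allocation algorithm) shows the two channel impairments and why unequal outage probabilities bias the aggregate toward reliable clients:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(w, bits):
    """Uniform quantization of a parameter vector to 2**bits levels
    over its own dynamic range (a simple stand-in for the QE source)."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    return lo + np.round((w - lo) / (hi - lo) * levels) * (hi - lo) / levels

def fl_round(client_updates, bits, outage_prob):
    """One aggregation round with quantization error (QE) and
    transmission outage (TO): dropped clients are simply missing."""
    received = [quantize(w, b) for w, b, p
                in zip(client_updates, bits, outage_prob)
                if rng.random() > p]
    return np.mean(received, axis=0) if received else None

# 4 hypothetical clients with different true update means. Equalizing
# the TO probabilities (as FedTOE advocates) keeps the long-run
# aggregate unbiased; skewed probabilities underrepresent some clients.
updates = [rng.normal(mu, 0.1, size=8) for mu in (0.0, 0.5, 1.0, 1.5)]
for name, probs in [("uniform TO", [0.1] * 4),
                    ("skewed TO ", [0.0, 0.0, 0.3, 0.5])]:
    rounds = [fl_round(updates, bits=[6] * 4, outage_prob=probs)
              for _ in range(2000)]
    agg = np.mean([r for r in rounds if r is not None], axis=0)
    print(name, "mean of aggregate:", round(float(agg.mean()), 3))
```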
Modern software development is based on a series of rapid incremental changes made collaboratively to large source code repositories by developers with varying levels of experience and expertise. The ZeroIn project aims to analyze the metadata of these dynamic phenomena, including data on repositories, commits, and developers, to rapidly and accurately mark the quality of commits as they arrive at the repositories. In this context, the present article presents a characterization of software development metadata in terms of the distributions that best capture the trends in the datasets. Multiple datasets are analyzed for this purpose, including Stack Overflow data on developers' features and GitHub data on over 452 million repositories with 16 million commits. This characterization is intended to make it possible to generate multiple synthetic datasets that can be used to train and test novel machine-learning-based solutions for improving the reliability of software even as it evolves. It is also intended to help the development process exploit the latent correlations among key feature vectors across the aggregate space of repositories and developers. The data characterization of this article is designed to feed into the machine learning components of ZeroIn, including binary classifiers for early flagging of buggy software commits and graph-based learning methods that exploit the sparse connectivity among the sets of repositories, commits, and developers.
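As a sketch of the characterize-then-synthesize workflow described above (using a fabricated stand-in for a skewed metadata feature, not the actual Stack Overflow or GitHub data), one can fit candidate distributions and sample synthetic datasets from the best fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Stand-in for a heavy-tailed metadata feature, e.g. commits per
# repository; fabricated here only to show the workflow.
observed = rng.lognormal(mean=2.0, sigma=1.2, size=5000).round().clip(min=1)

# Characterization: fit candidate families (loc fixed at 0 since the
# feature is positive) and compare log-likelihoods.
candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma,
              "expon": stats.expon}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(observed, floc=0)
    fits[name] = (params, np.sum(dist.logpdf(observed, *params)))
best = max(fits, key=lambda n: fits[n][1])
print("best-fitting family:", best)

# Synthesis: draw an arbitrarily large synthetic dataset from the fit,
# usable for training/testing classifiers without the raw data.
params, _ = fits[best]
synthetic = candidates[best].rvs(*params, size=100_000, random_state=rng)
print("synthetic mean vs observed mean:",
      round(float(synthetic.mean()), 1), round(float(observed.mean()), 1))
```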
5G applications have become increasingly popular in recent years as fifth-generation (5G) network deployment has spread. For vehicular networks, mmWave-band signals have been well studied and used for both communication and sensing. In this work, we propose a new dynamic ray tracing algorithm that exploits spatial and temporal coherence. We evaluate its performance by comparing results on typical vehicular communication scenarios with GEMV^2, which uses a combination of deterministic and stochastic models, and with WinProp, which uses a deterministic model for simulations with given environment information. We also compare the performance of our algorithm on complex urban models and observe a reduction in computation time of 36% compared to GEMV^2 and 30% compared to WinProp, while maintaining similar prediction accuracy.
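A minimal sketch of the temporal-coherence idea follows: a 2D image-method reflection combined with a naive motion-threshold cache. The geometry, threshold, and caching policy are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def reflection_path(tx, rx, wall_x=(0.0, 50.0)):
    """Image-method specular reflection off a wall on the line y = 0,
    valid for x in [wall_x[0], wall_x[1]]. Returns the reflection point
    and path length, or None if no valid specular path exists."""
    img = np.array([tx[0], -tx[1]])           # mirror TX across the wall
    denom = rx[1] - img[1]
    if denom == 0:
        return None
    t = -img[1] / denom                       # where segment img->rx hits y=0
    hit = img + t * (rx - img)
    if not (0.0 <= t <= 1.0 and wall_x[0] <= hit[0] <= wall_x[1]):
        return None
    return hit, float(np.linalg.norm(rx - img))

# Temporal coherence: as the receiver (vehicle) moves a little between
# frames, reuse the cached path and re-trace only when the motion since
# the last full trace exceeds a tolerance.
tx = np.array([10.0, 8.0])
cache = {"rx": None, "path": None}
retraces = 0
for frame in range(100):
    rx = np.array([20.0 + 0.05 * frame, 5.0])   # slowly moving receiver
    if cache["rx"] is None or np.linalg.norm(rx - cache["rx"]) > 0.5:
        cache["rx"], cache["path"] = rx.copy(), reflection_path(tx, rx)
        retraces += 1
print(f"full traces: {retraces} / 100 frames")
```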
Multi-object tracking (MOT) is a crucial component of situational awareness in military defense applications. With the growing use of unmanned aerial systems (UASs), MOT methods for aerial surveillance are in high demand. Applying MOT in UAS settings presents specific challenges such as a moving sensor, changing zoom levels, dynamic backgrounds, illumination changes, obscurations, and small objects. In this work, we present a robust object tracking architecture designed to accommodate the noise present in real-time situations. We propose a kinematic prediction model, called Deep Extended Kalman Filter (DeepEKF), in which a sequence-to-sequence architecture is used to predict entity trajectories in latent space. DeepEKF utilizes a learned image embedding along with an attention mechanism trained to weight the importance of areas in an image when predicting future states. For visual scoring, we experiment with different similarity measures to calculate distance based on entity appearance, including a convolutional neural network (CNN) encoder pre-trained using Siamese networks. In initial evaluation experiments, we show that our method, which combines the scoring structures of the kinematic and visual models within a multiple hypothesis tracking (MHT) framework, achieves improved performance, especially in edge cases where entity motion is unpredictable or the data contains frames with significant gaps.
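As a sketch of how kinematic and visual evidence can be blended into a single association cost (with illustrative weights and hand-made embeddings, not the trained DeepEKF or CNN encoder), consider:

```python
import numpy as np

def mahalanobis_sq(z, z_pred, S):
    """Squared Mahalanobis distance of measurement z from the predicted
    measurement z_pred with innovation covariance S (kinematic gate)."""
    d = z - z_pred
    return float(d @ np.linalg.solve(S, d))

def appearance_score(e1, e2):
    """Cosine similarity of two appearance embeddings (e.g. produced by
    a Siamese-trained CNN encoder); 1.0 means identical appearance."""
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

def association_cost(z, z_pred, S, emb_det, emb_track, alpha=0.5):
    """Blend kinematic and visual evidence into one association cost,
    as an MHT-style scorer would; alpha weights the two terms."""
    return alpha * mahalanobis_sq(z, z_pred, S) \
        + (1 - alpha) * (1.0 - appearance_score(emb_det, emb_track))

# Toy demo: two detections compete for one predicted track.
z_pred = np.array([10.0, 5.0])
S = np.diag([2.0, 2.0])
track_emb = np.array([0.9, 0.1, 0.4])
det_near = (np.array([10.5, 5.2]), np.array([0.88, 0.12, 0.41]))  # same object
det_far = (np.array([14.0, 9.0]), np.array([0.1, 0.9, 0.2]))      # clutter
for name, (z, emb) in [("near", det_near), ("far", det_far)]:
    print(name, round(association_cost(z, z_pred, S, emb, track_emb), 3))
```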
Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis, based on artificial intelligence, in locations close to where the data is captured. Its aim is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although this field of research emerged only recently, around 2011, it has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then systematically classify the state of the solutions by examining research results and observations for each of the four components, and we present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and we discuss important open issues and possible theoretical and technical solutions.