Coverage path planning is a fundamental challenge in robotics, with diverse applications in aerial surveillance, manufacturing, cleaning, inspection, agriculture, and more. The main objective is to devise a trajectory for an agent that efficiently covers a given area while minimizing time or energy consumption. Existing practical approaches often lack a solid theoretical foundation, either relying on purely heuristic methods or over-abstracting the problem to a simple Traveling Salesman Problem in Grid Graphs. Moreover, the cost functions considered rarely account for turn cost, prize-collecting variants for uneven cover demand, or arbitrary geometric regions. In this paper, we describe an array of systematic methods for handling arbitrary meshes derived from intricate polygonal environments. This adaptation paves the way to compute efficient coverage paths with a robust theoretical foundation for real-world robotic applications. Through comprehensive evaluations, we demonstrate that the algorithm exhibits low optimality gaps while efficiently handling complex environments. Furthermore, we showcase its versatility in handling partial coverage and accommodating heterogeneous passage costs, offering the flexibility to trade off coverage quality and time efficiency.
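As a hedged, minimal illustration of the grid abstraction mentioned above (not the paper's mesh-based algorithms), the sketch below generates a boustrophedon coverage path on a rectangular grid and counts direction changes as a proxy for turn cost; all names and sizes are illustrative.

```python
# Illustrative sketch only: a boustrophedon (lawnmower) coverage path on a
# rectangular grid, with 90-degree turns counted as a proxy for turn cost.

def boustrophedon_path(rows, cols):
    """Visit every cell of a rows x cols grid in alternating sweeps."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path

def count_turns(path):
    """Count direction changes along the path (each incurs turn cost)."""
    turns = 0
    for i in range(2, len(path)):
        d1 = (path[i-1][0] - path[i-2][0], path[i-1][1] - path[i-2][1])
        d2 = (path[i][0] - path[i-1][0], path[i][1] - path[i-1][1])
        if d1 != d2:
            turns += 1
    return turns

path = boustrophedon_path(4, 5)
print(len(path), "cells covered with", count_turns(path), "turns")
```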
With the expected proliferation of delay-constrained applications, future communication technologies are being pushed towards using short codes. The performance of short codes cannot be inferred through classical channel capacity analysis, which intrinsically assumes long codes and a vanishing frame error rate (FER). This paper studies the performance of an uplink large-scale network in the finite blocklength regime. Bounds on the spatially averaged rate outage probability as well as the coding rate meta distribution are derived. The results reveal the exact achievable rate for a given blocklength and FER, and demonstrate the discrepancy between the actual network rate and the idealistic classical channel capacity.
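The abstract does not reproduce the underlying formulas; as a grounding sketch, the snippet below evaluates the standard normal approximation of Polyanskiy, Poor, and Verdú for the maximal achievable rate of an AWGN channel at blocklength n and FER eps, which captures why short codes fall short of classical capacity. The SNR values are illustrative.

```python
# Normal approximation for finite blocklength over an AWGN channel:
#   R(n, eps) ~ C - sqrt(V/n) * Qinv(eps),
# with capacity C = log2(1 + snr) and channel dispersion
#   V = (snr * (snr + 2) / (2 * (1 + snr)^2)) * log2(e)^2.
import math
from statistics import NormalDist

def finite_blocklength_rate(snr, n, eps):
    C = math.log2(1 + snr)
    V = (snr * (snr + 2)) / (2 * (1 + snr) ** 2) * (math.log2(math.e) ** 2)
    q_inv = NormalDist().inv_cdf(1 - eps)  # Q^{-1}(eps)
    return C - math.sqrt(V / n) * q_inv

# Short codes pay a visible rate penalty relative to capacity (C = 1 bit here):
for n in (100, 1000, 100000):
    print(n, round(finite_blocklength_rate(snr=1.0, n=n, eps=1e-3), 4))
```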
Accurate segmentation of interconnected line networks, such as grain boundaries in polycrystalline material microstructures, poses a significant challenge due to the fragmented masks produced by conventional computer vision algorithms, including convolutional neural networks. These algorithms struggle with thin masks, often necessitating intricate post-processing for effective contour closure and continuity. Addressing this issue, this paper introduces a fast, high-fidelity post-processing technique that leverages domain knowledge about grain boundary connectivity and employs conditional random fields and perceptual grouping rules. This approach significantly enhances segmentation mask accuracy, achieving a 79% segment identification accuracy in validation with a U-Net model on electron microscopy images of a polycrystalline oxide. Additionally, a novel grain alignment metric is introduced, showing a 51% improvement in grain alignment and providing a more detailed assessment of segmentation performance for complex microstructures. This method not only enables rapid and accurate segmentation but also facilitates an unprecedented level of data analysis, significantly improving the statistical representation of grain boundary networks and making it suitable for a range of disciplines where precise segmentation of interconnected line networks is essential.
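As a hedged illustration (not the authors' CRF-and-perceptual-grouping pipeline), one common post-processing step for fragmented thin masks is morphological closing followed by skeletonization, which reconnects small gaps in a boundary network. The sketch below uses scikit-image on a synthetic fragment.

```python
# Minimal sketch: bridge a gap in a thin boundary mask by morphological
# closing, then recover a one-pixel-wide boundary by skeletonization.
# Assumes scikit-image is installed; the test image is synthetic.
import numpy as np
from skimage.morphology import binary_closing, skeletonize

# Synthetic fragmented boundary: a horizontal line with a gap in it.
mask = np.zeros((32, 32), dtype=bool)
mask[16, 2:14] = True
mask[16, 18:30] = True  # gap between columns 14 and 17

closed = binary_closing(mask, np.ones((7, 7), dtype=bool))  # bridge small gaps
skeleton = skeletonize(closed)                              # back to 1-px width
print("gap bridged:", bool(skeleton[16, 15]))
```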
While coresets have been growing in terms of their applications, they have, barring a few exceptions, mostly been limited to unsupervised settings. We consider supervised classification problems and non-decomposable evaluation measures in such settings. We show that coresets based on stratified uniform sampling have excellent empirical performance that is backed by theoretical guarantees as well. We focus on the F1 score and the Matthews Correlation Coefficient (MCC), two widely used non-decomposable objective functions that are nontrivial to optimize for, and show that uniform coresets attain a lower bound on coreset size while exhibiting good empirical performance, comparable with ``smarter'' coreset construction strategies.
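As a hedged illustration of the sampling scheme named in the abstract, the sketch below draws a stratified uniform coreset by sampling uniformly within each class, so that rare classes (which dominate F1 and MCC behaviour) are retained; the per-stratum size and weighting are illustrative choices, not the paper's bounds.

```python
# Stratified uniform sampling: uniform within each class stratum, with
# inverse-sampling-probability weights so weighted sums stay unbiased.
import numpy as np

def stratified_uniform_coreset(X, y, m_per_class, rng=np.random.default_rng(0)):
    """Return indices and weights of a coreset with m_per_class points per class."""
    idx, w = [], []
    for c in np.unique(y):
        members = np.flatnonzero(y == c)
        take = rng.choice(members, size=min(m_per_class, len(members)), replace=False)
        idx.extend(take)
        # Each sampled point stands in for len(members)/len(take) originals.
        w.extend([len(members) / len(take)] * len(take))
    return np.array(idx), np.array(w)

X = np.random.randn(1000, 5)
y = (np.random.rand(1000) < 0.05).astype(int)   # imbalanced labels
idx, w = stratified_uniform_coreset(X, y, m_per_class=20)
print(idx.shape, w[:3])
```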
The kinematic/robotic community is interested not only in measuring the closeness of a given robot configuration to its next singular one but also in a geometrically meaningful index evaluating how far a robot design is from being architecturally singular. Such an architecture singularity distance, which can be used by engineers as a criterion within the design process, is presented for a certain class of parallel manipulators of Stewart-Gough type, namely so-called linear pentapods. Geometrically, the architecture singular designs are well understood and can be subclassified into several cases, which allows us to solve the optimization problem of computing the closest architecture singular design to a given linear pentapod with algorithms from numerical algebraic geometry.
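As a generic, heavily hedged illustration of the underlying optimization pattern (the paper works on the actual subclassified singularity varieties with numerical algebraic geometry), the sketch below finds the closest point on a toy constraint variety F(p) = 0 to a given design p0; F here is a placeholder, not a pentapod singularity condition.

```python
# Toy version of "distance to the nearest singular design": minimize the
# squared distance to p0 subject to an equality constraint F(p) = 0.
import numpy as np
from scipy.optimize import minimize

p0 = np.array([1.0, 2.0, 0.5])          # given design parameters (toy)
F = lambda p: p[0] * p[1] - p[2] ** 2   # hypothetical singularity condition

res = minimize(
    lambda p: np.sum((p - p0) ** 2),    # squared distance to the given design
    x0=p0,
    constraints=[{"type": "eq", "fun": F}],
    method="SLSQP",
)
print("closest singular design:", res.x, "distance:", np.linalg.norm(res.x - p0))
```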
In this paper, a novel artificial intelligence-based cyber-attack detection model for smart grids is developed to stop data integrity attacks (DIAs) on load data received by supervisory control and data acquisition (SCADA) systems. In the proposed model, the load data is first forecast using a regression model; after a processing stage, the processed data is clustered using an unsupervised learning method. In this work, in order to achieve the best performance, three load forecasting methods (i.e., extra tree regression (ETR), long short-term memory (LSTM), and bidirectional long short-term memory (BiLSTM)) are utilized as regression models and their performance is compared. For clustering and outlier detection, the covariance elliptic envelope (EE) is employed as the unsupervised learning method. To examine the proposed model, the hourly load data of the power company of the city of Johor in Malaysia is employed, and two common DIAs, namely DIAs targeting economic loss and DIAs targeting blackouts, are used to evaluate the accuracy of the detection methods in several scenarios. The simulation results show that the proposed EE-BiLSTM method performs more robustly and accurately than the other two methods.
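As a sketch of the outlier-detection stage only (the forecasting stage is replaced by synthetic residuals, and all numbers are illustrative), scikit-learn's EllipticEnvelope fits a robust covariance estimate and flags points outside the elliptic decision boundary:

```python
# Elliptic envelope outlier detection on forecast residuals.
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(0)
# Residuals between forecast and received load; injected attacks stand out.
residuals = rng.normal(0, 1.0, size=(500, 2))
residuals[:5] += 8.0                      # hypothetical data-integrity attacks

detector = EllipticEnvelope(contamination=0.01).fit(residuals)
labels = detector.predict(residuals)      # +1 = normal, -1 = outlier
print("flagged as attacks:", np.flatnonzero(labels == -1))
```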
Facilitating large-scale, cross-institutional collaboration in biomedical machine learning projects requires a trustworthy and resilient federated learning (FL) environment to ensure that sensitive information such as protected health information is kept confidential. In this work, we introduce APPFLx, a low-code FL framework that enables the easy setup, configuration, and running of FL experiments across organizational and administrative boundaries while providing secure end-to-end communication, privacy-preserving functionality, and identity management. APPFLx is completely agnostic to the underlying computational infrastructure of participating clients. We demonstrate the capability of APPFLx as an easy-to-use framework for accelerating biomedical studies across institutions and healthcare systems while maintaining the protection of private medical data in two case studies: (1) predicting participant age from electrocardiogram (ECG) waveforms, and (2) detecting COVID-19 disease from chest radiographs. These experiments were performed securely across heterogeneous compute resources, including a mixture of on-premise high-performance computing and cloud computing, and highlight the role of federated learning in improving model generalizability and performance when aggregating data from multiple healthcare systems.
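APPFLx's own API is not reproduced here; as a generic, hedged illustration of the aggregation step that any FL framework performs, the sketch below applies federated averaging to per-site model parameters that never leave a client's site except as weight updates. Site counts and sizes are purely hypothetical.

```python
# Federated averaging (FedAvg): weight each client's parameters by its
# local dataset size and average them into a global update.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical sites with different data volumes.
weights = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
sizes = [1000, 4000, 5000]
print("global update:", fedavg(weights, sizes))
```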
Synchronous dynamic systems are well-established models that have been used to capture a range of phenomena in networks, including opinion diffusion, spread of disease, and product adoption. We study the three most notable problems in synchronous dynamic systems: whether the system will transition to a target configuration from a starting configuration, whether the system will reach convergence from a starting configuration, and whether the system is guaranteed to converge from every possible starting configuration. While all three problems were known to be intractable in the classical sense, we initiate the study of their exact boundaries of tractability from the perspective of structural parameters of the network, making use of the more fine-grained parameterized complexity paradigm. As our first result, we consider treewidth - as the most prominent and ubiquitous structural parameter - and show that all three problems remain intractable even on instances of constant treewidth. We complement this negative finding with fixed-parameter algorithms for the former two problems parameterized by treedepth, a well-studied restriction of treewidth. While it is possible to rule out a similar algorithm for the convergence guarantee problem parameterized by treedepth alone, we conclude with a fixed-parameter algorithm for this last problem when parameterized by treedepth and the maximum in-degree.
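As a hedged illustration (not the paper's algorithms, which concern worst-case complexity rather than simulation), the sketch below runs synchronous majority dynamics from a single starting configuration and detects whether it reaches a fixed point or enters a cycle; the update rule and toy digraph are illustrative choices.

```python
# A synchronous dynamic system updates every node simultaneously from its
# in-neighbours' states; here each node adopts the majority in-neighbour
# state, keeping its own state on ties.
def step(config, in_neighbors):
    new = []
    for v, nbrs in enumerate(in_neighbors):
        ones = sum(config[u] for u in nbrs)
        new.append(1 if 2 * ones > len(nbrs) else 0 if 2 * ones < len(nbrs) else config[v])
    return tuple(new)

def converges(config, in_neighbors):
    seen = set()
    while config not in seen:
        seen.add(config)
        nxt = step(config, in_neighbors)
        if nxt == config:
            return True, config       # fixed point reached
        config = nxt
    return False, config              # entered a nontrivial cycle

in_neighbors = [[1, 2], [0, 2], [0, 1]]   # directed triangle
print(converges((1, 0, 0), in_neighbors))
```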
Time series anomaly detection has uses in a wide range of research fields and application domains, including manufacturing and healthcare. The presence of anomalies can indicate novel or unexpected events, such as production faults, system defects, or heart flutter, and is therefore of particular interest. The large size and complex patterns of time series have led researchers to develop specialised deep learning models for detecting anomalous patterns. This survey provides a structured and comprehensive overview of state-of-the-art deep learning models for time series anomaly detection. It provides a taxonomy based on the factors that divide anomaly detection models into different categories. Aside from describing the basic anomaly detection technique for each category, the advantages and limitations are also discussed. Furthermore, this study includes examples of deep anomaly detection in time series across various application domains in recent years. It finally summarises open issues in research and challenges faced while adopting deep anomaly detection models.
The development of unmanned aerial vehicles (UAVs) has been gaining momentum in recent years owing to technological advances and a significant reduction in their cost. UAV technology can be used in a wide range of domains, including communication, agriculture, security, and transportation. In certain domains it may be useful to group the UAVs into clusters/flocks, and various challenges associated with UAV usage can be alleviated by clustering. Several computational challenges arise in UAV flock management, which can be solved by using machine learning (ML) methods. In this survey, we describe the basic terms relating to UAVs and modern ML methods, and we provide an overview of related tutorials and surveys. We subsequently consider the different challenges that appear in UAV flocks. For each issue, we survey several ML-based methods that have been suggested in the literature to handle the associated challenges. Thereafter, we describe various open issues in which ML can be applied to solve the different challenges of flocks, and we suggest means of using ML methods for this purpose. This comprehensive review may be useful for both researchers and developers in providing a wide view of various aspects of state-of-the-art ML technologies that are applicable to flock management.
Knowledge graph embedding, which aims to represent entities and relations as low-dimensional vectors (or matrices, tensors, etc.), has been shown to be a powerful technique for predicting missing links in knowledge graphs. Existing knowledge graph embedding models mainly focus on modeling relation patterns such as symmetry/antisymmetry, inversion, and composition. However, many existing approaches fail to model semantic hierarchies, which are common in real-world applications. To address this challenge, we propose a novel knowledge graph embedding model---namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE)---which maps entities into the polar coordinate system. HAKE is inspired by the fact that concentric circles in the polar coordinate system can naturally reflect the hierarchy. Specifically, the radial coordinate aims to model entities at different levels of the hierarchy, and entities with smaller radii are expected to be at higher levels; the angular coordinate aims to distinguish entities at the same level of the hierarchy, and these entities are expected to have roughly the same radii but different angles. Experiments demonstrate that HAKE can effectively model the semantic hierarchies in knowledge graphs, and significantly outperforms existing state-of-the-art methods on benchmark datasets for the link prediction task.
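As a simplified sketch in the spirit of the described scoring scheme (hedged: not necessarily the paper's exact parameterization), the snippet below scores a triple from modulus (radial) and phase (angular) embeddings, with all dimensions and weights illustrative.

```python
# Polar-coordinate triple scoring: each entity gets a modulus vector
# (radial part, encoding hierarchy level) and a phase vector (angular part,
# separating entities at the same level); a relation scales moduli and
# shifts phases.
import numpy as np

def hake_style_score(h_mod, h_phase, r_mod, r_phase, t_mod, t_phase, lam=1.0):
    radial = np.linalg.norm(h_mod * r_mod - t_mod)                     # level mismatch
    angular = np.abs(np.sin((h_phase + r_phase - t_phase) / 2)).sum()  # same-level split
    return -(radial + lam * angular)   # higher score = more plausible triple

d = 4
rng = np.random.default_rng(0)
h_m, t_m = rng.uniform(0.5, 2, d), rng.uniform(0.5, 2, d)
h_p, t_p = rng.uniform(0, 2 * np.pi, d), rng.uniform(0, 2 * np.pi, d)
r_m, r_p = rng.uniform(0.5, 2, d), rng.uniform(0, 2 * np.pi, d)
print("score:", hake_style_score(h_m, h_p, r_m, r_p, t_m, t_p))
```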