The growth of low-cost narrowband radios such as Bluetooth Low Energy (BLE) has enabled applications such as asset tracking, human behavior monitoring, and keyless entry. Accurate range estimation is essential in such applications. Phase-based ranging has recently gained momentum due to its high accuracy in multipath environments compared to traditional schemes such as ranging based on received signal strength. Phase-based ranging requires tone exchange on multiple frequencies on a uniformly sampled frequency grid. Such tone exchange may not be possible due to missing tones, e.g., reserved advertisement channels. Furthermore, the IQ values at a given tone may be distorted by interference. In this paper, we propose two phase-based ranging schemes that deal with missing or interfered tones. We compare the performance and complexity of the proposed schemes using simulations, complexity analysis, and measurement setups. In particular, we show that for a small number of missing or interfered tones, the proposed scheme based on a trained neural network (NN) performs very close to a reference ranging system with no missing or interfered tones. Interestingly, this high performance comes at the cost of negligible additional computational complexity and up to 60.5 KB of additional memory compared to the reference system, making it an attractive solution for ranging with hardware-limited radios such as BLE.
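As a concrete illustration of the underlying principle (not of the paper's proposed schemes), the sketch below recovers distance from the phase slope of IQ samples measured across a uniform tone grid: for a single propagation path, round-trip phase is linear in frequency. The noiseless single-path channel, the BLE-like 1 MHz grid, and the 10 m target are illustrative assumptions.

```python
import math, cmath

C = 299_792_458.0  # speed of light (m/s)

def estimate_range(freqs_hz, iq_samples):
    """Estimate distance from the phase slope across a uniform tone grid.

    For a single path, the round-trip phase at frequency f is
    phi(f) = -4*pi*f*d/c, so adjacent tones differ by a constant phase
    step proportional to the distance. Averaging conjugate products of
    adjacent IQ pairs extracts that step without explicit unwrapping.
    """
    df = freqs_hz[1] - freqs_hz[0]
    acc = sum((iq_samples[i + 1] * iq_samples[i].conjugate()
               for i in range(len(iq_samples) - 1)), 0j)
    dphi = cmath.phase(acc)  # mean phase increment per tone step
    return -dphi * C / (4 * math.pi * df)

# toy check: synthesize noiseless IQ for a target at 10 m
d_true = 10.0
freqs = [2.402e9 + 1e6 * k for k in range(40)]  # BLE-like 1 MHz grid
iq = [cmath.exp(-1j * 4 * math.pi * f * d_true / C) for f in freqs]
print(f"{estimate_range(freqs, iq):.2f}")  # 10.00
```

Note that a 1 MHz grid limits the unambiguous range to c/(4*df), i.e. about 75 m, which is one reason the choice of tone grid matters.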
Detection-based tracking is one of the main approaches to multi-object tracking. It obtains good results when paired with an excellent detector, but it may associate the wrong targets when faced with overlapping and low-confidence detections. To address this issue, this paper proposes a multi-object tracker based on shape constraints and confidence, named SCTracker. In the data association stage, an Intersection over Union (IoU) distance with shape constraints is applied to compute the cost matrix between tracks and detections, which effectively prevents a track from following a wrong target that has a similar position but an inconsistent shape, thereby improving the accuracy of data association. Additionally, a Kalman filter driven by detection confidence is used to update the motion state, improving tracking performance when detections have low confidence. Experimental results on the MOT17 dataset show that the proposed method effectively improves multi-object tracking performance.
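The abstract does not give the exact form of the shape constraint, so the following is a hypothetical sketch of how a shape penalty can be folded into an IoU-based association cost; the `w_shape` weight and the relative width/height penalty are illustrative assumptions, not SCTracker's actual formulation.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def shape_iou_cost(track_box, det_box, w_shape=1.0):
    """Association cost: (1 - IoU) plus a penalty on relative
    width/height mismatch, so a detection at a similar position but
    with an inconsistent shape becomes expensive to match."""
    tw, th = track_box[2] - track_box[0], track_box[3] - track_box[1]
    dw, dh = det_box[2] - det_box[0], det_box[3] - det_box[1]
    shape_pen = (abs(tw - dw) / max(tw, dw)
                 + abs(th - dh) / max(th, dh))
    return (1.0 - iou(track_box, det_box)) + w_shape * shape_pen

track = (0, 0, 10, 10)
shifted_same_shape = (1, 0, 11, 10)     # overlaps, consistent shape
overlap_wrong_shape = (0, 0, 10, 20)    # overlaps, inconsistent shape
print(shape_iou_cost(track, shifted_same_shape)
      < shape_iou_cost(track, overlap_wrong_shape))  # True
```

The cost matrix built from such a function would then feed a standard assignment step (e.g., Hungarian matching) between tracks and detections.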
Tracking individuals is a vital part of many experiments conducted to understand collective behaviour. Ants are the paradigmatic model system for such experiments but their lack of individually distinguishing visual features and their high colony densities make it extremely difficult to perform reliable tracking automatically. Additionally, the wide diversity of their species' appearances makes a generalized approach even harder. In this paper, we propose a data-driven multi-object tracker that, for the first time, employs domain adaptation to achieve the required generalisation. This approach is built upon a joint-detection-and-tracking framework that is extended by a set of domain discriminator modules integrating an adversarial training strategy in addition to the tracking loss. In addition to this novel domain-adaptive tracking framework, we present a new dataset and a benchmark for the ant tracking problem. The dataset contains 57 video sequences with full trajectory annotation, including 30k frames captured from two different ant species moving on different background patterns. It comprises 33 and 24 sequences for source and target domains, respectively. We compare our proposed framework against other domain-adaptive and non-domain-adaptive multi-object tracking baselines using this dataset and show that incorporating domain adaptation at multiple levels of the tracking pipeline yields significant improvements. The code and the dataset are available at //github.com/chamathabeysinghe/da-tracker.
The proliferation of connected devices through Internet connectivity presents both opportunities for smart applications and risks to security and privacy, and it is vital to address these concerns proactively to fully leverage the potential of the Internet of Things. IoT services in which one data owner serves multiple clients, such as smart city transportation, smart building management, and healthcare, can offer benefits but also bring cybersecurity and data privacy risks. For example, in healthcare, a hospital may collect data from medical devices and make it available to multiple clients such as researchers and pharmaceutical companies. This data can be used to improve medical treatments and research, but if left unprotected it can also put patients' personal information at risk. To realize the benefits of these services, it is important to implement proper security and privacy measures. In this paper, we propose a symmetric searchable encryption scheme with dynamic updates on a single-owner, multi-client database for IoT environments. Our scheme supports both forward and backward privacy. Additionally, it supports a decentralized storage environment in which data owners can outsource data across multiple servers, or even across multiple service providers, to improve security and privacy. Further, revoking a client's access to the system at any time takes minimal effort and cost. Performance and formal security analyses show that our scheme provides better functionality and security, and is more efficient in terms of computation and storage, than closely related works.
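As background, here is a minimal toy of the core searchable-encryption idea: a server-side index keyed by a PRF of each keyword, so the server can answer searches without learning the keywords themselves. This sketch deliberately omits the paper's actual contributions (dynamic updates with forward/backward privacy, multi-client access, decentralized storage, and revocation), and the key and documents are illustrative.

```python
import hmac, hashlib

def _prf(key: bytes, msg: str) -> bytes:
    """Keyword token: HMAC-SHA256 as a pseudorandom function."""
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def build_index(key, docs):
    """Toy single-keyword searchable index: PRF(keyword) -> doc ids.

    docs: {doc_id: iterable of keywords}. The server stores only
    opaque tokens, never the plaintext keywords.
    """
    index = {}
    for doc_id, words in docs.items():
        for w in set(words):
            index.setdefault(_prf(key, w), []).append(doc_id)
    return index

def search(index, key, keyword):
    """Client derives the token; server looks it up obliviously."""
    return index.get(_prf(key, keyword), [])

key = b"illustrative-32-byte-secret-key!"
docs = {"doc1": ["heart", "rate"], "doc2": ["rate", "oxygen"]}
idx = build_index(key, docs)
print(sorted(search(idx, key, "rate")))  # ['doc1', 'doc2']
```

A static index like this leaks access patterns and breaks forward privacy on updates, which is precisely the gap that schemes like the one proposed here are designed to close.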
We propose a type of non-cooperative game, termed a multi-cluster aggregative game, in which the players are clusters, each consisting of collaborative agents whose cost functions depend on their own decisions and on the aggregate quantity of each participating cluster; the model captures large-scale, hierarchical multi-agent systems. This novel game model is motivated by decision-making problems in competitive-cooperative network systems with large numbers of nodes, such as the Energy Internet. To address the challenges of seeking a Nash equilibrium in such network systems, we develop an algorithm with a hierarchical communication topology that hybridizes distributed and semi-decentralized protocols. The upper level consists of cluster coordinators that estimate the aggregate quantities through local communication, while the lower level consists of cluster subnets, each composed of a coordinator and agents that track the gradient of the corresponding cluster. In particular, the clusters exchange aggregate quantities instead of their decisions to relieve the communication burden. Under strong monotonicity and mild Lipschitz continuity assumptions, we rigorously prove that the algorithm converges linearly to a Nash equilibrium with a fixed step size. We present applications in the context of the Energy Internet, and numerical results verify the effectiveness of the algorithm.
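To fix ideas, here is a centralized toy of (pseudo-)gradient play on an aggregative game. Unlike the paper's hierarchical algorithm, every player here observes the exact aggregate rather than a coordinator's estimate, and the quadratic costs, step size, and iteration count are illustrative assumptions.

```python
def gradient_play(grads, x0, step=0.05, iters=4000):
    """Synchronous pseudo-gradient play for an aggregative game.

    Each player i has cost J_i(x_i, sigma), where sigma is the average
    decision; grads[i](x_i, sigma) returns the partial gradient used in
    the update. This toy is centralized: in the paper's scheme, cluster
    coordinators instead estimate sigma through local communication.
    """
    x = list(x0)
    for _ in range(iters):
        sigma = sum(x) / len(x)
        x = [xi - step * g(xi, sigma) for g, xi in zip(grads, x)]
    return x

# quadratic example: J_i = (x_i - a_i)^2 + x_i * sigma, with
# pseudo-gradient g_i = 2*(x_i - a_i) + sigma. The fixed point is
# x_i* = a_i - mean(a)/3 (here mean(a) = 2).
a = [1.0, 2.0, 3.0]
grads = [lambda xi, s, ai=ai: 2 * (xi - ai) + s for ai in a]
x_star = gradient_play(grads, [0.0, 0.0, 0.0])
print([round(v, 3) for v in x_star])  # [0.333, 1.333, 2.333]
```

The strong monotonicity of the pseudo-gradient map is what guarantees the linear (geometric) convergence with a fixed step size that the paper proves for its distributed counterpart.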
This paper presents a boundary element method (BEM) for computing the energy transmittance of a singly periodic grating in 2D over a wide frequency band, which is of engineering interest in various fields, with possible applications to acoustic metamaterial design. The proposed method is based on Padé approximants of the response. The high-order frequency derivatives of the sound pressure needed to evaluate the approximants are computed by a novel fast BEM accelerated by the fast multipole and hierarchical-matrix methods combined with automatic differentiation. The target frequency band is divided adaptively, and the Padé approximation is applied in each subband so as to accurately estimate the transmittance over a wide frequency range. Through numerical examples, we confirm that the proposed method can compute the transmittance efficiently and accurately even when anomalies and stopbands exist in the target band.
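A Padé approximant is a rational function matched to a truncated Taylor series, and it typically tracks resonant (pole-like) behavior far better than the polynomial truncation itself. The sketch below computes the standard [L/M] approximant from given series coefficients; it illustrates only this approximation step, not the fast BEM that supplies the frequency derivatives in the paper.

```python
import numpy as np

def pade(coeffs, L, M):
    """[L/M] Padé approximant from Taylor coefficients c_0..c_{L+M}.

    Solves the standard linear system for the denominator Q (with
    b_0 = 1), then convolves to obtain the numerator P. Returns
    (num, den) as coefficient arrays in ascending order for P(x)/Q(x).
    """
    c = np.asarray(coeffs, dtype=float)
    # denominator: sum_{j=1..M} b_j c_{L+k-j} = -c_{L+k}, k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)]
                  for k in range(1, M + 1)])
    rhs = -c[L + 1:L + M + 1]
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # numerator: a_i = sum_{j<=min(i,M)} b_j c_{i-j}, i = 0..L
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

# [2/2] Padé of exp(x) from its Taylor coefficients 1, 1, 1/2, 1/6, 1/24;
# the classical result is (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)
num, den = pade([1, 1, 0.5, 1/6, 1/24], 2, 2)
```

In the paper's setting the series variable is frequency and the coefficients come from the automatically differentiated BEM solution, with one approximant per adaptively chosen subband.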
Solving the problem of cooperation is of fundamental importance to the creation and maintenance of functional societies, with examples of cooperative dilemmas ranging from navigating busy road junctions to negotiating carbon reduction treaties. As the use of AI becomes more pervasive throughout society, the need for socially intelligent agents that are able to navigate these complex cooperative dilemmas is becoming increasingly evident. In the natural world, direct punishment is a ubiquitous social mechanism that has been shown to benefit the emergence of cooperation within populations. However, no prior work has investigated its impact on the development of cooperation within populations of artificial learning agents experiencing social dilemmas. Additionally, within natural populations the use of any form of punishment is strongly coupled with the related social mechanisms of partner selection and reputation. However, no previous work has considered the impact of combining multiple social mechanisms on the emergence of cooperation in multi-agent systems. Therefore, in this paper we present a comprehensive analysis of the behaviours and learning dynamics associated with direct punishment in multi-agent reinforcement learning systems, and of how it compares to third-party punishment when both are combined with the related social mechanisms of partner selection and reputation. We provide an extensive and systematic evaluation of the impact of these key mechanisms on the dynamics of the strategies learned by agents. Finally, we discuss the implications of the use of these mechanisms for the design of cooperative AI systems.
While Moore's law has driven exponential computing power expectations, its nearing end calls for new avenues for improving the overall system performance. One of these avenues is the exploration of alternative brain-inspired computing architectures that aim at achieving the flexibility and computational efficiency of biological neural processing systems. Within this context, neuromorphic engineering represents a paradigm shift in computing based on the implementation of spiking neural network architectures in which processing and memory are tightly co-located. In this paper, we provide a comprehensive overview of the field, highlighting the different levels of granularity at which this paradigm shift is realized and comparing design approaches that focus on replicating natural intelligence (bottom-up) versus those that aim at solving practical artificial intelligence applications (top-down). First, we present the analog, mixed-signal and digital circuit design styles, identifying the boundary between processing and memory through time multiplexing, in-memory computation, and novel devices. Then, we highlight the key tradeoffs for each of the bottom-up and top-down design approaches, survey their silicon implementations, and carry out detailed comparative analyses to extract design guidelines. Finally, we identify necessary synergies and missing elements required to achieve a competitive advantage for neuromorphic systems over conventional machine-learning accelerators in edge computing applications, and outline the key ingredients for a framework toward neuromorphic intelligence.
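As a concrete reference point for the spiking neural networks discussed above, here is a minimal discrete-time leaky integrate-and-fire (LIF) neuron, the canonical building block of such architectures in both analog and digital implementations. All parameter values (1 ms step, 20 ms membrane time constant, unit threshold) are illustrative.

```python
def lif_spikes(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (forward-Euler discretization).

    The membrane potential leaks toward the input current with time
    constant tau; crossing the threshold emits a spike and resets the
    potential. Returns the indices of time steps at which spikes occur.
    """
    v, spikes = 0.0, []
    for i, current in enumerate(input_current):
        v += (dt / tau) * (current - v)  # leaky integration
        if v >= v_th:                    # threshold crossing -> spike
            spikes.append(i)
            v = v_reset                  # reset membrane potential
    return spikes

print(len(lif_spikes([2.0] * 200)))  # supra-threshold drive: 14 spikes
print(len(lif_spikes([0.5] * 200)))  # sub-threshold drive: 0 spikes
```

The event-driven nature of this model, where state changes only matter at threshold crossings, is what makes the tight co-location of processing and memory in neuromorphic hardware attractive.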
This paper systematizes log-based transparency enhancing technologies. Based on established work on transparency from multiple disciplines, we outline the purpose, usefulness, and pitfalls of transparency. We describe the mechanisms that allow log-based transparency enhancing technologies to be implemented: logging mechanisms; sanitisation mechanisms and their trade-offs with privacy; and data release and query mechanisms. We also examine how transparency relates to the external mechanisms that can provide the ability to contest a system and hold system operators accountable. We illustrate the role these mechanisms play with two case studies, Certificate Transparency and cryptocurrencies, showing both the role that transparency plays in their function and the issues these systems face in delivering transparency.
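The append-only Merkle tree is the logging mechanism at the heart of systems such as Certificate Transparency: any tampering with a logged entry changes the published root hash. The minimal sketch below computes such a root; note that RFC 6962 additionally domain-separates leaf and interior hashes (and defines inclusion/consistency proofs), which this toy omits.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of a simple binary Merkle tree over the given leaves.

    Leaves are hashed, then paired and re-hashed level by level
    (duplicating the last node on odd-sized levels) until a single
    root remains. Changing any leaf changes the root.
    """
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

log = [b"cert-alpha", b"cert-beta", b"cert-gamma"]
root = merkle_root(log)
print(root != merkle_root([b"cert-alpha", b"cert-beta", b"forged"]))  # True
```

Monitors and auditors comparing signed roots, and clients checking inclusion proofs against them, are the external mechanisms that turn this data structure into accountability.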
Although the expenses associated with DNA sequencing have been rapidly decreasing, the current cost stands at roughly $1.3K/TB, which is dramatically more expensive than reading from existing archival storage solutions today. In this work, we aim to reduce not only the cost but also the latency of DNA storage by studying the DNA coverage depth problem, which aims to reduce the required number of reads to retrieve information from the storage system. Under this framework, our main goal is to understand how to optimally pair an error-correcting code with a given retrieval algorithm to minimize the sequencing coverage depth, while guaranteeing retrieval of the information with high probability. Additionally, we study the DNA coverage depth problem under the random-access setup.
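One illustrative special case of this pairing (not the paper's general analysis): with an MDS [n, k] code and uniformly random reads, decoding succeeds once any k of the n distinct strands have been sequenced, so the expected coverage depth is a partial coupon-collector quantity with closed form n * (H_n - H_{n-k}). The sketch below checks that formula by simulation.

```python
import random

def expected_reads_mc(n, k, trials=3000, seed=0):
    """Monte-Carlo estimate of reads needed to see k distinct strands.

    Models uniform random sequencing of n strands, stopping once k
    distinct ones (enough for MDS decoding) have been observed.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen, reads = set(), 0
        while len(seen) < k:
            seen.add(rng.randrange(n))
            reads += 1
        total += reads
    return total / trials

def expected_reads_exact(n, k):
    """Closed form n * (H_n - H_{n-k}) for the partial coupon collector."""
    return n * sum(1.0 / i for i in range(n - k + 1, n + 1))

print(round(expected_reads_exact(8, 5), 3))  # 7.076
```

Already in this toy, lowering the code rate (smaller k for fixed n) visibly reduces the expected number of reads, which is the cost/latency trade-off the coverage depth problem formalizes.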
Real-time traffic and sensor data from connected vehicles have the potential to provide insights that lead directly to efficient management of the transportation infrastructure and related adjacent services. However, the growth of electric vehicles (EVs) and connected vehicles (CVs) has generated an abundance of CV and sensor data that has put a strain on the processing capabilities of existing data center infrastructure, so the benefits are either delayed or not fully realized. To address this issue, we propose a solution for processing state-wide CV traffic and sensor data on GPUs that provides real-time micro-scale insights in both temporal and spatial dimensions. This is achieved through the use of the NVIDIA RAPIDS framework and a Dask parallel cluster in Python. Our findings demonstrate a 70x acceleration in the extraction, transformation, and loading (ETL) of CV data for the State of Missouri for a full day of all unique CV journeys, reducing the processing time from approximately 48 hours to just 25 minutes. Given that these results cover thousands of CVs and several thousand individual journeys with sub-second sensor data, we can model and obtain actionable insights for the management of the transportation infrastructure.
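To illustrate the transformation step of such an ETL pipeline, here is a plain-Python sketch of binning sub-second per-journey sensor records into one-minute mean speeds. The record layout, field names, and bin size are assumptions for illustration; the paper's pipeline instead executes this kind of groupby/aggregate on GPUs via RAPIDS cuDF partitions scheduled by Dask.

```python
from collections import defaultdict

def bin_journeys(records, bin_seconds=60):
    """Aggregate sub-second CV sensor records into per-journey time bins.

    records: iterable of (journey_id, epoch_seconds, speed) tuples.
    Returns {(journey_id, bin_start_seconds): mean_speed}. This is the
    micro-scale temporal aggregation that, at state-wide scale, the
    GPU-parallel pipeline performs across thousands of journeys.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for jid, t, speed in records:
        key = (jid, int(t) - int(t) % bin_seconds)
        acc = sums[key]
        acc[0] += speed
        acc[1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

day = [("veh42", 0.2, 28.0), ("veh42", 30.7, 32.0), ("veh42", 61.0, 35.0)]
print(bin_journeys(day))  # {('veh42', 0): 30.0, ('veh42', 60): 35.0}
```

In the GPU version, the same logic reduces to a single partitioned `groupby(...).mean()` over the journey id and time-bin columns, which is what enables the reported 70x speedup.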