Exploring the spatio-temporal variations of COVID-19 transmission and its potential determinants could provide a deeper understanding of the dynamics of disease spread. This study aims to investigate the spatio-temporal spread of the COVID-19 infection rate in England and examine its associations with socioeconomic, demographic, and environmental risk factors. Using weekly reported COVID-19 cases from 7 March 2020 to 26 March 2022 at the Middle Layer Super Output Area (MSOA) level in mainland England, we developed a Bayesian hierarchical spatio-temporal model to predict COVID-19 infection rates and investigate the influencing factors. The analysis showed that our model outperformed the ordinary least squares (OLS) and geographically weighted regression (GWR) models in prediction accuracy. The results indicated that the spread of COVID-19 infection rates over space and time was heterogeneous, and hotspots of infection exhibited inconsistent clustering patterns over time. Among the selected risk factors, annual household income, unemployment rate, population density, percentage of Caribbean population, percentage of adults aged 45-64 years, and particulate matter concentrations were found to be positively associated with the COVID-19 infection rate. These findings can assist policymakers in developing tailored public health interventions for COVID-19 prevention and control.
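To make the modelling approach concrete, a typical Bayesian hierarchical formulation for this kind of small-area disease-mapping problem (a generic sketch; the study's exact specification, priors, and random-effect structure may differ) is

$$ y_{it} \sim \mathrm{Poisson}(E_{it}\,\theta_{it}), \qquad \log \theta_{it} = \alpha + \mathbf{x}_{it}^{\top}\boldsymbol{\beta} + u_i + v_i + \gamma_t + \phi_t + \delta_{it}, $$

where $y_{it}$ is the weekly case count in MSOA $i$ and week $t$, $E_{it}$ the expected count, $\mathbf{x}_{it}$ the covariates (income, unemployment, density, and so on), $u_i$ and $v_i$ spatially structured and unstructured area effects, $\gamma_t$ and $\phi_t$ structured and unstructured temporal effects, and $\delta_{it}$ a space-time interaction term.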
Low-count PET is an efficient way to reduce radiation exposure and acquisition time, but the reconstructed images often suffer from a low signal-to-noise ratio (SNR), thus affecting diagnosis and other downstream tasks. Recent advances in deep learning have shown great potential for improving low-count PET image quality, but acquiring a large, centralized, and diverse dataset from multiple institutions for training a robust model is difficult due to privacy and security concerns over patient data. Moreover, low-count PET data at different institutions may have different data distributions, thus requiring personalized models. While previous federated learning (FL) algorithms enable multi-institution collaborative training without the need to aggregate local data, addressing the large domain shift in the application of multi-institutional low-count PET denoising remains a challenging and highly under-explored problem. In this work, we propose FedFTN, a personalized federated learning strategy that addresses these challenges. FedFTN uses a local deep feature transformation network (FTN) to modulate the feature outputs of a globally shared denoising network, enabling personalized low-count PET denoising for each institution. During the federated learning process, only the denoising network's weights are communicated and aggregated, while the FTN remains at the local institutions for feature transformation. We evaluated our method using a large-scale dataset of multi-institutional low-count PET imaging data from three medical centers located across three continents, and showed that FedFTN provides high-quality low-count PET images, outperforming previous baseline FL reconstruction methods across all low-count levels at all three institutions.
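A minimal PyTorch sketch of the core idea as described (layer sizes, names, and the FiLM-style modulation are illustrative assumptions, not the paper's implementation): a locally kept feature transformation network produces per-channel scale and shift parameters that modulate the shared denoiser's intermediate features, and only the shared denoiser's weights would be exchanged in the FL rounds.

    import torch
    import torch.nn as nn

    class FTN(nn.Module):
        # Local feature transformation network: kept at each institution,
        # never aggregated. Produces per-channel scale/shift (FiLM-style).
        def __init__(self, channels):
            super().__init__()
            self.net = nn.Sequential(
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                nn.Linear(channels, channels * 2))

        def forward(self, feat):
            gamma, beta = self.net(feat).chunk(2, dim=1)
            shape = (-1, feat.shape[1]) + (1,) * (feat.dim() - 2)
            return feat * (1 + gamma.view(shape)) + beta.view(shape)

    class SharedDenoiser(nn.Module):
        # Globally shared denoising backbone: weights averaged across sites.
        def __init__(self, c=16):
            super().__init__()
            self.enc = nn.Conv3d(1, c, 3, padding=1)
            self.dec = nn.Conv3d(c, 1, 3, padding=1)

        def forward(self, x, ftn):
            feat = torch.relu(self.enc(x))
            feat = ftn(feat)                 # personalized modulation
            return self.dec(feat)

    def aggregate(shared_models):
        # FedAvg-style aggregation of the shared weights only.
        avg = {k: sum(m.state_dict()[k] for m in shared_models) / len(shared_models)
               for k in shared_models[0].state_dict()}
        for m in shared_models:
            m.load_state_dict(avg)

In each communication round, aggregate would be applied to the sites' denoiser copies, while every site keeps and continues training its own FTN locally.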
Simultaneous Localization and Mapping (SLAM) plays an important role in robot autonomy, and reliability and efficiency are the two most valued properties when deploying SLAM on robots. In this paper, we consider achieving reliable LiDAR-based SLAM on computation-limited platforms, such as quadrotor UAVs, through graph-based point cloud association. First, contrary to most works that select salient features for point cloud registration, we propose a non-conspicuous feature selection strategy for reliability and robustness. Then, a two-stage correspondence selection method is used to register the point clouds: KD-tree-based coarse matching followed by a graph-based matching method that uses geometric consistency to vote out incorrect correspondences. Additionally, we propose an odometry approach in which the optimization weights are guided by the voting results from the aforementioned geometric consistency graph. In this way, the LiDAR odometry optimization converges rapidly to a fairly accurate transformation, allowing the back-end module to finish the mapping task efficiently. Finally, we evaluate the proposed framework on the KITTI odometry dataset and in real-world environments. Experiments show that our SLAM system achieves comparable or higher accuracy with more balanced computational efficiency than mainstream LiDAR-based SLAM solutions.
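The two-stage correspondence selection can be sketched compactly (a minimal illustration assuming pairwise rigid-distance preservation as the geometric consistency test; the actual feature extraction and graph construction in the system are more involved):

    import numpy as np
    from scipy.spatial import cKDTree

    def coarse_matches(src_desc, dst_desc):
        # Stage 1: KD-tree nearest-neighbour matching in descriptor space.
        tree = cKDTree(dst_desc)
        _, idx = tree.query(src_desc, k=1)
        return np.stack([np.arange(len(src_desc)), idx], axis=1)

    def consistency_votes(src_pts, dst_pts, matches, tau=0.1):
        # Stage 2: geometric consistency graph. Two correspondences i, j
        # are consistent if a rigid motion could map both, i.e. it
        # preserves the pairwise distance:
        #   | ||p_i - p_j|| - ||q_i - q_j|| | < tau.
        p = src_pts[matches[:, 0]]
        q = dst_pts[matches[:, 1]]
        dp = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
        dq = np.linalg.norm(q[:, None] - q[None, :], axis=-1)
        consistent = np.abs(dp - dq) < tau
        return consistent.sum(axis=1) - 1    # votes, excluding self

Correspondences with few votes are discarded, and the remaining vote counts can serve as the weights guiding the odometry optimization, as the abstract describes.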
This paper develops a unified and computationally efficient method for change-point estimation along the time dimension in a non-stationary spatio-temporal process. By modeling a non-stationary spatio-temporal process as a piecewise stationary spatio-temporal process, we consider simultaneous estimation of the number and locations of change-points and the model parameters in each segment. A composite likelihood-based criterion is developed for change-point and parameter estimation. Under the framework of increasing domain asymptotics, theoretical results, including the consistency and distribution of the estimators, are derived under mild conditions. In contrast to the classical result in fixed-dimensional time series that the localization error of a change-point estimator is $O_{p}(1)$, exact recovery of the true change-points can be achieved in the spatio-temporal setting. More surprisingly, consistency of change-point estimation can be achieved without any penalty term in the criterion function. In addition, we establish consistency of the estimated number and locations of change-points under the infill asymptotics framework, where the time domain increases while the spatial sampling domain is fixed. A computationally efficient pruned dynamic programming algorithm is developed for the challenging criterion optimization problem. Extensive simulation studies and an application to U.S. precipitation data demonstrate the effectiveness and practicality of the proposed method.
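The pruned dynamic programming idea can be sketched with a generic segment cost; the snippet below uses a Gaussian mean-shift cost on a univariate series purely as a stand-in for the composite-likelihood criterion (a minimal PELT-style sketch, not the paper's algorithm):

    import numpy as np

    def pelt(x, beta, K=0.0):
        # Pruned DP: F[t] = min_{s in candidates} F[s] + C(x[s:t]) + beta,
        # with candidates pruned once they can no longer be optimal.
        n = len(x)
        S = np.concatenate(([0.0], np.cumsum(x)))
        S2 = np.concatenate(([0.0], np.cumsum(x ** 2)))

        def cost(s, t):                      # Gaussian mean-shift cost of x[s:t]
            return (S2[t] - S2[s]) - (S[t] - S[s]) ** 2 / (t - s)

        F = np.full(n + 1, np.inf)
        F[0] = -beta
        last = np.zeros(n + 1, dtype=int)
        cands = [0]
        for t in range(1, n + 1):
            vals = [F[s] + cost(s, t) + beta for s in cands]
            best = int(np.argmin(vals))
            F[t], last[t] = vals[best], cands[best]
            # prune candidates that can never be optimal again
            cands = [s for s, v in zip(cands, vals) if v - beta + K <= F[t]]
            cands.append(t)
        cps, t = [], n
        while last[t] > 0:                   # backtrack through change-points
            t = last[t]
            cps.append(t)
        return sorted(cps)

    rng = np.random.default_rng(1)
    x = np.concatenate([np.zeros(50), 3 + np.zeros(50)]) + rng.normal(0, 0.5, 100)
    print(pelt(x, beta=2 * np.log(100)))     # expect a change-point near 50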
We present and evaluate the Futhark implementation of reverse-mode automatic differentiation (AD) for the basic blocks of parallel programming: reduce, prefix sum (scan), and reduce by index. We first present derivations of general-case algorithms and then discuss several specializations that result in efficient differentiation of most cases of practical interest. We report an experiment that evaluates the performance of the differentiated code in the context of GPU execution and highlights the impact of the proposed specializations as well as the strengths and weaknesses of differentiating at high level vs. low level (i.e., ``differentiating the memory'').
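For the special case of scan with addition (prefix sum), the reverse-mode rule has a simple closed form that conveys the flavour of these derivations: the adjoint of a prefix sum is a suffix sum of the output adjoints. A small numpy check (illustrative only; the Futhark implementation handles general operators and GPU execution):

    import numpy as np

    x = np.random.rand(8)
    ybar = np.random.rand(8)               # adjoint of y = cumsum(x)

    # y_j = sum_{i <= j} x_i, hence dL/dx_i = sum_{j >= i} ybar_j:
    xbar = np.cumsum(ybar[::-1])[::-1]     # reverse suffix sum

    # Jacobian check: J[j, i] = 1 if i <= j else 0, so xbar = J^T ybar.
    J = np.tril(np.ones((8, 8)))
    assert np.allclose(xbar, J.T @ ybar)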
We aim to improve the Inverted Neural Radiance Fields (iNeRF) algorithm, which formulates image pose estimation as a NeRF-based iterative optimization problem. NeRFs are neural scene representation models that can synthesize photorealistic novel views of real-world scenes or objects. Our contributions are as follows: we extend the localization optimization objective with a depth-based loss function; we introduce a multi-image loss function in which a sequence of images with known relative poses is used without increasing the computational complexity; we omit hierarchical sampling during volumetric rendering, so that only the coarse model is used for pose estimation; and we show that by extending the sampling interval, convergence can be achieved even for larger initial pose estimate errors. With the proposed modifications, the convergence speed is significantly improved and the basin of convergence is substantially extended.
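A minimal sketch of the combined objective in PyTorch (render_fn is a hypothetical stand-in for the coarse-model volumetric renderer, and the relative weighting between terms is omitted; this is an illustration of the loss structure, not the actual implementation):

    import torch

    def pose_loss(pose, images, depths, rel_poses, render_fn):
        # `pose` (4x4, requires_grad) is the optimized pose of the
        # reference image; `rel_poses` are the known relative transforms
        # to the other images in the sequence. `render_fn` returns the
        # predicted RGB and depth for a given camera pose.
        loss = torch.zeros((), device=pose.device)
        for img, depth, rel in zip(images, depths, rel_poses):
            rgb_pred, depth_pred = render_fn(rel @ pose)
            loss = loss + ((rgb_pred - img) ** 2).mean()      # photometric term
            loss = loss + ((depth_pred - depth) ** 2).mean()  # depth-based term
        return loss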
We develop aspects of music theory related to harmony, such as scales, chord formation and improvisation from a combinatorial perspective. The goal is to provide a foundation for this subject by deriving the basic structure from a few assumptions, rather than writing down long lists of chords/scales to memorize without an underlying principle. Our approach involves introducing constraints that limit the possible scales we can consider. For example, we may impose the constraint that two voices cannot be only a semitone apart as this is too dissonant. We can then study scales that do not contain notes that are a semitone apart. A more refined constraint avoids three voices colliding by studying scales that do not have three notes separated only by semitones. Additionally, we require that our scales are complete, which roughly means that they are the maximal sets of tones that satisfy these constraints. As it turns out, completeness as applied to these simple two/three voice constraints characterizes the types of scales that are commonly used in music composition. Surprisingly, there is a correspondence between scales subject to the two-voice constraint and those subject to the three-voice constraint. We formulate this correspondence as a duality statement that provides a way to understand scales subject to one type of constraint in terms of scales subject to the other. Finally, we combine these constraint ideas to provide a classification of chords.
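Completeness is easy to make computational: a scale is complete when it satisfies the constraint and adding any missing tone would violate it. A short Python enumeration over the 4096 subsets of the chromatic circle (a direct transcription of the two constraints above) lists all complete scales of each kind:

    from itertools import combinations

    N = 12  # chromatic pitch classes, arranged in a circle

    def adjacent_pair(scale):    # some two notes a semitone apart?
        return any((p + 1) % N in scale for p in scale)

    def adjacent_triple(scale):  # some three chromatically consecutive notes?
        return any((p + 1) % N in scale and (p + 2) % N in scale for p in scale)

    def complete_scales(violates):
        # complete = satisfies the constraint, and adding any missing
        # tone would violate it (maximality)
        ok = {frozenset(s) for k in range(N + 1)
              for s in combinations(range(N), k) if not violates(set(s))}
        return [s for s in ok
                if all(frozenset(s | {t}) not in ok
                       for t in range(N) if t not in s)]

    two_voice = complete_scales(adjacent_pair)      # includes the whole-tone scales
    three_voice = complete_scales(adjacent_triple)  # includes the diatonic scales
    print(len(two_voice), len(three_voice))

For instance, the diatonic scale {0, 2, 4, 5, 7, 9, 11} is complete under the three-voice constraint: it has no three chromatically consecutive notes, and inserting any of the five missing tones creates such a triple.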
An extension of Transformers is proposed that enables explicit relational reasoning through a novel module called the Abstractor. At the core of the Abstractor is a variant of attention called relational cross-attention. The approach is motivated by an architectural inductive bias for relational learning that disentangles relational information from extraneous features about individual objects. This enables explicit relational reasoning, supporting abstraction and generalization from limited data. The Abstractor is first evaluated on simple discriminative relational tasks and compared to existing relational architectures. Next, the Abstractor is evaluated on purely relational sequence-to-sequence tasks, where dramatic improvements are seen in sample efficiency compared to standard Transformers. Finally, Abstractors are evaluated on a collection of tasks based on mathematical problem solving, where modest but consistent improvements in performance and sample efficiency are observed.
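As we read the description, the key departure from standard attention is that the attention scores are computed from relations between the input objects while the values are learned, input-independent symbol vectors, so the output carries relational rather than object-level information. A minimal PyTorch sketch of this reading (dimensions, naming, and details are illustrative assumptions):

    import torch
    import torch.nn as nn

    class RelationalCrossAttention(nn.Module):
        # Scores come from relations between input objects; values are
        # learned symbols carrying no object-level features, so the
        # output encodes "which objects relate how", not "what they are".
        def __init__(self, d_model, max_objects):
            super().__init__()
            self.q = nn.Linear(d_model, d_model)
            self.k = nn.Linear(d_model, d_model)
            self.symbols = nn.Parameter(torch.randn(max_objects, d_model))

        def forward(self, x):                      # x: (batch, n, d_model)
            scores = self.q(x) @ self.k(x).transpose(-2, -1)
            rel = torch.softmax(scores / x.shape[-1] ** 0.5, dim=-1)
            return rel @ self.symbols[: x.shape[1]]  # (batch, n, d_model)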
IoT (Internet of Things) refers to the network of interconnected physical devices, vehicles, home appliances, and other items embedded with sensors, software, and connectivity, enabling them to collect and exchange data. IoT forensics is the practice of collecting and analyzing digital evidence from IoT devices to investigate cybercrimes, security breaches, and other malicious activities that may have taken place on these connected devices. In particular, electromagnetic side-channel analysis (EM-SCA) has become an essential tool for IoT forensics due to its ability to reveal confidential information about the internal workings of IoT devices without interfering with these devices or wiretapping their networks. However, the accuracy and reliability of EM-SCA results can be limited by device variability, environmental factors, and data collection and processing methods. Moreover, there is very little research on these limitations, which significantly affect the accuracy of EM-SCA approaches for cross-device portability, and similarly limited research on possible solutions to address this challenge. Therefore, this empirical study examines the impact of device variability on the accuracy and reliability of EM-SCA approaches, in particular machine-learning (ML) based approaches. We first present the background, basic concepts, and techniques used to evaluate the limitations of current EM-SCA approaches and datasets. Our study then addresses one of the most important limitations, which is caused by the multi-core architecture of system-on-chip (SoC) processors. We present an approach to collecting EM-SCA datasets and demonstrate the feasibility of using transfer learning to obtain more meaningful and reliable results from EM-SCA in cross-device IoT forensics. Our study moreover contributes a new dataset for using deep learning models to analyze electromagnetic side-channel data with regard to cross-device portability.
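A minimal sketch of the transfer-learning setup for cross-device portability (the architecture and trace dimensions are illustrative assumptions, not the study's actual models): a classifier trained on EM traces from a source device keeps its convolutional feature extractor frozen, and only the classification head is retrained on a small labelled set from the target device.

    import torch
    import torch.nn as nn

    class TraceNet(nn.Module):
        # Hypothetical 1-D CNN over EM traces (layer sizes illustrative).
        def __init__(self, n_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())
            self.head = nn.Linear(32, n_classes)

        def forward(self, x):                # x: (batch, 1, trace_length)
            return self.head(self.features(x))

    def adapt_to_target_device(model, n_classes):
        # Cross-device transfer: freeze features learned on the source
        # device; retrain only the head on the target device's traces.
        for p in model.features.parameters():
            p.requires_grad = False
        model.head = nn.Linear(32, n_classes)
        return torch.optim.Adam(model.head.parameters(), lr=1e-3)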
This paper presents MindTheDApp, a toolchain designed specifically for the structural analysis of Ethereum-based Decentralized Applications (DApps), with a distinct focus on a complex network-driven approach. Unlike existing tools, our toolchain combines the power of ANTLR4 and Abstract Syntax Tree (AST) traversal techniques to transform the architecture and interactions within smart contracts into a specialized bipartite graph. This enables advanced network analytics to highlight operational efficiencies within the DApp's architecture. The bipartite graph generated by the proposed tool comprises two sets of nodes: one representing smart contracts, interfaces, and libraries, and the other representing functions, events, and modifiers. Edges in the graph connect functions to the smart contracts they interact with, offering a granular view of interdependencies and execution flow within the DApp. This network-centric approach allows researchers and practitioners to apply complex network theory to understanding the robustness, adaptability, and intricacies of decentralized systems. Our work contributes to the enhancement of security in smart contracts by allowing the visualisation of the network, and it provides a deep understanding of the architecture and operational logic within DApps. Given the growing importance of smart contracts in the blockchain ecosystem and the emerging application of complex network theory in technology, our toolchain offers a timely contribution to both academic research and practical applications in the field of blockchain technology.
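The bipartite structure is straightforward to reproduce with a standard graph library (networkx is our choice here purely for illustration; the contract and member names below are made up, whereas the tool extracts them from Solidity source via ANTLR4 and AST traversal):

    import networkx as nx
    from networkx.algorithms import bipartite

    G = nx.Graph()
    # One node set: contracts, interfaces, libraries.
    G.add_nodes_from(["Token", "Ownable", "IERC20"], part="contract")
    # The other node set: functions, events, modifiers.
    G.add_nodes_from(["transfer", "approve", "onlyOwner", "Transfer"],
                     part="member")
    # Edges link members to the contracts they belong to / interact with.
    G.add_edges_from([("Token", "transfer"), ("Token", "approve"),
                      ("Token", "Transfer"), ("Ownable", "onlyOwner")])

    # Standard bipartite analytics then apply, e.g. degree centrality:
    contracts = {n for n, d in G.nodes(data=True) if d["part"] == "contract"}
    print(bipartite.degree_centrality(G, contracts))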
Classical forgery attacks against Offset Two-Round (OTR) structures require harsh conditions, such as knowledge of certain plaintext-ciphertext pairs, and their success probability is not very high. To solve these problems, a quantum forgery attack on the OTR structure using Simon's algorithm is proposed. The attacker intercepts the ciphertext-tag pair $(C,T)$ between the sender and receiver, and Simon's algorithm is used to find the period of the tag generation function in OTR; we can then successfully forge a new ciphertext $C'$ ($C'\ne C$) for the intercepted tag $T$. For a variant of the OTR structure (the Prøst-OTR-Even-Mansour structure), we propose a universal forgery attack in which it is easy to generate the correct tag of any given message if the attacker is allowed to change a single block in it. The attack first obtains the secret parameter $L$ using Simon's algorithm and then uses $L$ to find the keys $k_1$ and $k_2$, so that the attacker can forge the modified messages. Only a few plaintext blocks are needed to obtain the keys and forge arbitrary messages. Performance analysis shows that the query complexity of our attack is $O(n)$ and its success probability is very close to 1.
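The classical post-processing at the heart of such attacks is easy to illustrate: each run of Simon's quantum subroutine returns a uniformly random vector $y$ with $y \cdot s = 0 \pmod 2$, and after $O(n)$ runs the hidden period $s$ is recovered as the nonzero null-space vector of the collected system over GF(2). A small numpy sketch in which the quantum sampling step is simulated classically:

    import numpy as np

    n, rng = 6, np.random.default_rng(0)
    s = rng.integers(0, 2, n)
    s[0] = 1                                  # hidden period, nonzero

    # Simulate Simon measurement outcomes: random y with y . s = 0 (mod 2).
    ys = [y for y in rng.integers(0, 2, (3 * n, n)) if (y @ s) % 2 == 0]

    def gf2_nullspace(A):
        # Gauss-Jordan elimination over GF(2); returns a null-space basis.
        A = A.copy() % 2
        m, n = A.shape
        pivots, r = [], 0
        for c in range(n):
            hits = np.nonzero(A[r:, c])[0]
            if len(hits) == 0:
                continue
            A[[r, r + hits[0]]] = A[[r + hits[0], r]]   # pivot row swap
            for i in range(m):
                if i != r and A[i, c]:
                    A[i] ^= A[r]                        # eliminate column c
            pivots.append(c)
            r += 1
            if r == m:
                break
        basis = []
        for f in (c for c in range(n) if c not in pivots):
            v = np.zeros(n, dtype=int)
            v[f] = 1
            for row, c in zip(A, pivots):
                v[c] = row[f]                # back-substitute free variable
            basis.append(v)
        return basis

    # With ~3n samples the null space is {0, s} with high probability.
    print("recovered:", gf2_nullspace(np.array(ys))[0], "true s:", s)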