Including Artificial Neural Networks in embedded systems at the edge allows applications to exploit Artificial Intelligence capabilities directly within devices operating at the network periphery. This paper introduces Spiker+, a comprehensive framework for generating efficient, low-power, and low-area customized Spiking Neural Network (SNN) accelerators on FPGA for inference at the edge. Spiker+ provides a configurable multi-layer hardware SNN, a library of highly efficient neuron architectures, and a design framework that enables the development of complex neural network accelerators with few lines of Python code. Spiker+ is tested on two benchmark datasets, MNIST and the Spiking Heidelberg Digits (SHD). On MNIST, it demonstrates competitive performance compared with state-of-the-art SNN accelerators and outperforms them in resource allocation, requiring only 7,612 logic cells and 18 Block RAMs (BRAMs), which allows it to fit in very small FPGAs, and in power consumption, drawing only 180 mW for a complete inference on an input image. The latency is comparable to that observed in the state of the art, at 780 us per image. To the authors' knowledge, Spiker+ is the first SNN accelerator tested on the SHD. In this case, the accelerator requires 18,268 logic cells and 51 BRAMs, with an overall power consumption of 430 mW and a latency of 54 us for a complete inference on the input data. These results underscore the significance of Spiker+ in the hardware-accelerated SNN landscape, making it an excellent solution for deploying configurable and tunable SNN architectures in resource- and power-constrained edge applications.
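The neuron architectures in such SNN accelerators are typically built around spiking models such as the leaky integrate-and-fire (LIF). As a minimal software sketch of one discrete-time LIF update step (illustrative only — this is not the Spiker+ API, and all names and parameter values are hypothetical):

```python
def lif_step(v, spikes_in, weights, v_th=1.0, decay=0.9, v_reset=0.0):
    """One discrete-time leaky integrate-and-fire (LIF) update.

    v         -- current membrane potential
    spikes_in -- iterable of 0/1 input spikes for this timestep
    weights   -- synaptic weight per input
    Returns (new_potential, output_spike).
    """
    # Leak the membrane potential, then integrate the weighted input spikes.
    v = decay * v + sum(w * s for w, s in zip(weights, spikes_in))
    if v >= v_th:          # threshold crossing emits an output spike
        return v_reset, 1  # membrane potential is reset after firing
    return v, 0
```

In a hardware realization, the multiply-accumulate, threshold comparison, and reset of each such step map naturally onto small fixed-point datapaths, which is what keeps the logic and power footprint low.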
LoRaWAN is a wireless technology that enables high-density deployments of IoT devices. Designed for Low Power Wide Area Networks (LPWAN), LoRaWAN employs large cells to serve a potentially extremely high number of devices. The technology enforces a centralized architecture, directing all data generated by the devices to a single network server for processing. End-to-end encryption is used to guarantee the confidentiality and security of data. In this demo, we present \edgelora, a system architecture designed to incorporate edge processing into LoRaWAN without compromising the security and confidentiality of data. \edgelora maintains backward compatibility and addresses the scalability issues arising from handling large amounts of data sourced from a diverse range of devices. The demo provides evidence of the advantages in terms of reduced latency, lower network bandwidth requirements, higher scalability, and improved security and privacy that result from applying the edge processing paradigm to LoRaWAN.
This paper presents a statistical forward model for a Compton imaging system, called the Compton imager. This system, under development at the University of Illinois Urbana-Champaign, is a variant of the Compton camera with a single type of sensor that can simultaneously act as scatterer and absorber. This imager is convenient for imaging situations requiring a wide field of view. The proposed statistical forward model is then used to solve the inverse problem of estimating the location and energy of point-like sources from observed data. This inverse problem is formulated and solved in a Bayesian framework, using a Metropolis-within-Gibbs algorithm for the estimation of the location and an expectation-maximization algorithm for the estimation of the energy. This approach leads to more accurate estimation than the standard deterministic back-projection approach, with the additional benefit of uncertainty quantification in the low-photon imaging setting.
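The Metropolis-within-Gibbs scheme mentioned above updates one coordinate of the unknown (here, the source location) at a time, each with its own Metropolis accept/reject step against the joint posterior. A generic sketch of one such sweep, assuming a user-supplied log-posterior and a simple random-walk proposal (this is not the paper's implementation):

```python
import math
import random

def mwg_sweep(x, log_post, step=0.5, rng=random):
    """One Metropolis-within-Gibbs sweep: each coordinate of x is
    updated in turn with a random-walk Metropolis step, holding the
    remaining coordinates fixed at their current values."""
    x = list(x)
    for i in range(len(x)):
        prop = list(x)
        prop[i] += rng.gauss(0.0, step)          # random-walk proposal on coordinate i
        log_alpha = log_post(prop) - log_post(x)  # log acceptance ratio
        if math.log(rng.random()) < log_alpha:    # Metropolis accept/reject
            x = prop
    return x
```

Repeating such sweeps produces a Markov chain whose samples approximate the posterior over the source location, from which point estimates and credible intervals (the uncertainty quantification above) can be computed.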
Deep neural networks used for reconstructing sparse-view CT data are typically trained by minimizing a pixel-wise mean-squared error or a similar loss function over a set of training images. However, networks trained with such pixel-wise losses are prone to wiping out small, low-contrast features that are critical for screening and diagnosis. To remedy this issue, we introduce a novel training loss, inspired by the model observer framework, that enhances the detectability of weak signals in the reconstructions. We evaluate our approach on the reconstruction of synthetic sparse-view breast CT data and demonstrate an improvement in signal detectability with the proposed loss.
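To make the contrast with a pixel-wise loss concrete, one hypothetical way to fold a model-observer criterion into training is to penalize, alongside the MSE, the mismatch in a linear (matched-filter-style) observer statistic computed against a known weak-signal template. This is a hedged illustration of the idea, not the paper's actual loss; the function names, the template, and the weighting are assumptions:

```python
def mse(recon, target):
    """Plain pixel-wise mean-squared error over flattened images."""
    n = len(recon)
    return sum((r - t) ** 2 for r, t in zip(recon, target)) / n

def observer_aware_loss(recon, target, template, lam=0.1):
    """Pixel-wise MSE plus a model-observer-inspired penalty.

    The penalty compares a linear observer's test statistic (inner
    product with a weak-signal template) on the reconstruction against
    the same statistic on the target, so reconstructions that blur out
    the weak signal are penalized even if their MSE is low."""
    t_recon = sum(w * r for w, r in zip(template, recon))    # observer statistic on recon
    t_target = sum(w * t for w, t in zip(template, target))  # observer statistic on target
    return mse(recon, target) + lam * (t_recon - t_target) ** 2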
We introduce Optimistix: a nonlinear optimisation library built in JAX and Equinox. Optimistix introduces a novel, modular approach for its minimisers and least-squares solvers. This modularity relies on new practical abstractions for optimisation, which we call searches and descents, and which generalise the classical notions of line-search, trust-region, and learning-rate algorithms. Optimistix provides high-level APIs and solvers for minimisation, nonlinear least squares, root-finding, and fixed-point iteration. Optimistix is available at https://github.com/patrick-kidger/optimistix.
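The search/descent split can be illustrated in plain Python, independent of Optimistix itself: a *descent* proposes a direction from local model information, and a *search* chooses how far to move along it; a minimiser is then assembled from interchangeable parts. This is a conceptual sketch of the abstraction (steepest descent plus Armijo backtracking), not Optimistix code:

```python
def descent_direction(grad):
    """Descent: propose a direction from local information
    (here, simple steepest descent)."""
    return [-g for g in grad]

def backtracking_search(f, x, direction, grad, t=1.0, beta=0.5, c=1e-4):
    """Search: choose a step length along the proposed direction
    (here, Armijo backtracking line search)."""
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad, direction))
    while f([xi + t * di for xi, di in zip(x, direction)]) > fx + c * t * slope:
        t *= beta  # shrink the step until sufficient decrease holds
    return t

def minimise(f, grad_f, x, steps=100):
    """A minimiser assembled from interchangeable search and descent parts."""
    for _ in range(steps):
        g = grad_f(x)
        d = descent_direction(g)
        t = backtracking_search(f, x, d, g)
        x = [xi + t * di for xi, di in zip(x, d)]
    return x
```

Swapping the descent (e.g. a Gauss-Newton direction) or the search (e.g. a trust-region radius update) yields a different classical solver without rewriting the outer loop, which is the point of the abstraction.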
Locally repairable codes have been extensively investigated in recent years due to their practical applications in distributed and cloud storage systems. However, not much work has been done on the asymptotic behavior of locally repairable codes. In particular, there are few results on constructive lower bounds for the asymptotic behavior of locally repairable codes with multiple recovering sets. In this paper, we construct families of asymptotically good locally repairable codes with multiple recovering sets via automorphism groups of the function fields of the Garcia-Stichtenoth towers. The main advantage of our construction is that it allows more flexibility in the localities.
Undersampled MRI reconstruction is crucial for accelerating clinical scanning. Dual-domain reconstruction networks are among the best-performing state-of-the-art (SoTA) deep learning methods. In this paper, we rethink dual-domain model design from the perspective of the receptive fields required by the image recovery and k-space interpolation problems. Building on this, we introduce domain-specific modules for dual-domain reconstruction, namely k-space global initialization and image-domain parallel local detail enhancement. We evaluate our modules by applying them to the SoTA method DuDoRNet under different conventions of MRI reconstruction, including image-domain, dual-domain, and reference-guided reconstruction, on the public IXI dataset. Our model, DuDoRNet+, achieves significant improvements over competing deep learning methods.
The spread of the Internet of Things (IoT) is demanding new, powerful architectures to handle the huge amounts of data produced by IoT devices. In many scenarios, existing isolated solutions applied to IoT devices use a set of rules to detect, report, and mitigate malware activities or threats. This paper describes a development environment that allows the programming and debugging of such rule-based multi-agent solutions. The solution consists of the integration of a rule engine into the agent, the use of a specialized wrapping agent class with a graphical user interface for programming and testing, and a mechanism for the incremental composition of behaviors. Finally, a set of examples and a comparative study were carried out to test the suitability and validity of the approach. The JADE multi-agent middleware was used for the practical implementation of the approach.
Mesh-based Graph Neural Networks (GNNs) have recently shown the capability to simulate complex multiphysics problems with accelerated performance. However, mesh-based GNNs require a large number of message-passing (MP) steps and suffer from over-smoothing for problems involving very fine meshes. In this work, we develop a multiscale mesh-based GNN framework mimicking a conventional iterative multigrid solver, coupled with adaptive mesh refinement (AMR), to mitigate these challenges. We use the framework to accelerate phase field (PF) fracture problems involving coupled partial differential equations with a near-singular operator due to the near-zero modulus inside the crack. We define the initial graph representation using all mesh resolution levels, perform a series of downsampling steps using Transformer MP GNNs to reach the coarsest graph, and then perform upsampling steps to return to the original graph. We use skip connections from the embeddings generated during coarsening to prevent over-smoothing. We use Transfer Learning (TL) to significantly reduce the size of the training datasets needed to simulate different crack configurations and loading conditions. The trained framework shows accelerated simulation times while maintaining high accuracy in all cases compared with the physics-based PF fracture model. Finally, this work provides a new approach to accelerating a variety of mesh-based engineering multiphysics problems.
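The downsample-then-upsample pass with skip connections follows the same shape as a multigrid V-cycle. A toy sketch of that control flow, with scalar per-node embeddings standing in for the learned features and simple pairwise pooling standing in for the Transformer MP steps (all names and operations are illustrative assumptions, not the paper's architecture):

```python
def coarsen(h):
    """Downsample node embeddings by pairwise pooling -- a stand-in for
    a Transformer message-passing step followed by graph coarsening."""
    return [(h[i] + h[i + 1]) / 2 for i in range(0, len(h) - 1, 2)]

def refine(h, skip):
    """Upsample back to the finer level and add the skip connection
    saved during coarsening, which counteracts over-smoothing."""
    up = [v for v in h for _ in range(2)]      # duplicate to the finer resolution
    return [u + s for u, s in zip(up, skip)]

def v_cycle(h, levels):
    """One multigrid-style V-cycle over `levels` mesh resolutions."""
    skips = []
    for _ in range(levels):        # downsampling path to the coarsest graph
        skips.append(h)            # save fine-level embeddings for skip connections
        h = coarsen(h)
    for skip in reversed(skips):   # upsampling path back to the original graph
        h = refine(h, skip)
    return h
```

The coarse levels give every node a large effective receptive field in few MP steps, while the saved fine-level embeddings restore local detail on the way back up.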
Quality assessment, including inspecting images for artifacts, is a critical step during MRI data acquisition to ensure data quality and the success of downstream analysis or interpretation. This study demonstrates a deep learning model to detect rigid motion in T1-weighted brain images. We leveraged a 2D CNN for three-class classification and tested it on publicly available retrospective and prospective datasets. Grad-CAM heatmaps enabled the identification of failure modes and provided an interpretation of the model's results. The model achieved average precision and recall of 85% and 80% on six motion-simulated retrospective datasets. Additionally, the model's classifications on the prospective dataset showed a strong inverse correlation (-0.84) with average edge strength, an image quality metric indicative of motion. The model is part of the ArtifactID tool, aimed at inline automatic detection of Gibbs ringing, wrap-around, and motion artifacts. This tool automates part of the time-consuming QA process and augments on-site expertise, which is particularly relevant in low-resource settings where local MR knowledge is scarce.
We present Cryptomite, a Python library of randomness extractor implementations. The library offers a range of two-source, seeded, and deterministic randomness extractors, together with parameter-calculation modules, making it easy to use and suitable for a variety of applications. We also present theoretical results, including new extractor constructions and improvements to existing extractor parameters. The extractor implementations are efficient in practice and tolerate input sizes of up to $2^{40} > 10^{12}$ bits. They are also numerically precise (implementing convolutions using the Number Theoretic Transform to avoid floating-point arithmetic), making them well suited to cryptography. The algorithms and parameter calculations are described in detail, including illustrative code examples and performance benchmarking.
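The point of the Number Theoretic Transform here is that convolution-based extractors can be computed exactly over a finite field, with no floating-point rounding. A self-contained pure-Python sketch of NTT-based circular convolution (illustrative only — this is not Cryptomite's implementation, and the chosen prime and recursive structure are assumptions for clarity over speed):

```python
MOD = 998244353   # NTT-friendly prime: 119 * 2**23 + 1
ROOT = 3          # primitive root modulo MOD

def ntt(a, invert=False):
    """Radix-2 number-theoretic transform over GF(MOD).
    len(a) must be a power of two (dividing 2**23)."""
    n = len(a)
    if n == 1:
        return list(a)
    even = ntt(a[0::2], invert)          # Cooley-Tukey split into
    odd = ntt(a[1::2], invert)           # even- and odd-index halves
    w = pow(ROOT, (MOD - 1) // n, MOD)   # primitive n-th root of unity
    if invert:
        w = pow(w, MOD - 2, MOD)         # inverse root for the inverse transform
    out, wk = [0] * n, 1
    for k in range(n // 2):
        t = wk * odd[k] % MOD
        out[k] = (even[k] + t) % MOD
        out[k + n // 2] = (even[k] - t) % MOD
        wk = wk * w % MOD
    return out

def circular_convolve(x, y):
    """Exact integer circular convolution of two equal-length
    power-of-two sequences, using only modular arithmetic."""
    n = len(x)
    fx, fy = ntt(x), ntt(y)
    prod = [a * b % MOD for a, b in zip(fx, fy)]  # pointwise product in the NTT domain
    res = ntt(prod, invert=True)
    n_inv = pow(n, MOD - 2, MOD)                  # divide by n via modular inverse
    return [v * n_inv % MOD for v in res]
```

Because every intermediate value is an integer modulo a prime, the result is bit-exact regardless of input size, which is precisely the property a cryptographic extractor needs.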