Exploring transactions within the Bitcoin blockchain entails examining the transfer of bitcoins among several hundred million entities. However, studying such a vast number of entities is often impractical and resource-consuming. Consequently, entity clustering serves as an initial step in most analytical studies; this step typically employs heuristics grounded in the practices and behaviors of these entities. In this research, we examine two widely used heuristics and introduce four novel ones. Our contribution includes the \textit{clustering ratio}, a metric that quantifies the reduction in the number of entities achieved by a given heuristic. Assessing this reduction plays an important role in justifying the selection of a specific heuristic for analytical purposes. Given the dynamic nature of the Bitcoin system, characterized by a continuously growing number of entities on the blockchain and by the evolving behaviors of these entities, we extend our study to the temporal evolution of the clustering ratio for each heuristic. This temporal analysis enhances our understanding of the effectiveness of these heuristics over time.
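As an illustration only (the paper's precise definition is not reproduced here), one natural way to express such a reduction is the relative decrease in the number of entities after a heuristic merges them; the helper below is a hypothetical Python sketch.

```python
def clustering_ratio(entities_before: int, entities_after: int) -> float:
    """Relative reduction in the number of entities achieved by a heuristic.

    entities_before: entity count before clustering (e.g. distinct addresses)
    entities_after:  entity count after the heuristic merges addresses
    """
    return 1.0 - entities_after / entities_before

# Example: a heuristic that merges 100M addresses into 60M clusters
print(clustering_ratio(100_000_000, 60_000_000))  # 0.4
```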
The contextual bandit with linear reward functions is one of the most extensively studied models in bandit and online learning research. Recently, there has been increasing interest in designing \emph{locally private} linear contextual bandit algorithms, where sensitive information contained in contexts and rewards is protected against leakage to the general public. While the classical linear contextual bandit algorithm admits cumulative regret upper bounds of $\tilde O(\sqrt{T})$ via multiple alternative methods, it has remained open whether such regret bounds are attainable in the presence of local privacy constraints, with the state-of-the-art result being $\tilde O(T^{3/4})$. In this paper, we show that it is indeed possible to achieve an $\tilde O(\sqrt{T})$ regret upper bound for locally private linear contextual bandits. Our solution relies on several new algorithmic and analytical ideas, such as the analysis of mean absolute deviation errors and a layered principal component regression scheme designed to keep these errors small.
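For reference, the cumulative regret bounded above is the standard notion for linear contextual bandits (generic background notation, not taken from the paper): with unknown parameter $\theta^*$, context vectors $x_{t,a}$, and chosen arms $a_t$,
$$ R(T) \;=\; \sum_{t=1}^{T} \Big( \max_{a}\, \langle \theta^*, x_{t,a} \rangle - \langle \theta^*, x_{t,a_t} \rangle \Big), $$
and the result above guarantees $R(T) = \tilde O(\sqrt{T})$ even under the local privacy constraint.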
The use of Potential-Based Reward Shaping (PBRS) has shown great promise in the ongoing research effort to tackle sample inefficiency in Reinforcement Learning (RL). However, the choice of the potential function is critical for this technique to be effective. Additionally, RL techniques are usually constrained to a finite horizon due to computational limitations. This introduces a bias when using PBRS, adding an additional layer of complexity. In this paper, we leverage abstractions to automatically produce a "good" potential function. We analyse the bias induced by finite horizons in the context of PBRS, producing novel insights. Finally, to assess sample efficiency and performance impact, we evaluate our approach on four environments, including a goal-oriented navigation task and three Arcade Learning Environment (ALE) games, demonstrating that we can reach the same level of performance as CNN-based solutions with a simple fully-connected network.
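For context, standard potential-based shaping (recalled here as generic background rather than taken from this paper) augments the reward with a potential difference: for a potential function $\Phi$ over states and discount factor $\gamma$,
$$ r'(s, a, s') \;=\; r(s, a, s') + \gamma\, \Phi(s') - \Phi(s), $$
which leaves optimal policies unchanged in the infinite-horizon discounted setting; the bias analysed in the paper arises once this guarantee meets a truncated, finite horizon.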
Digital innovation accompanied by explicit economic incentives has fundamentally changed the process of innovation diffusion. As a representative of digital innovation, NFTs provide a decentralized and secure way to authenticate and trade digital assets, offering the potential for new revenue streams in the digital space. However, current research on NFTs mainly focuses on their transaction networks and community culture, leaving the interplay among diffusion dynamics, economic dynamics, and social constraints on Twitter largely unexplored. By collecting and analyzing an NFT-related tweet dataset, we systematically investigate the motivations of retweeters, the information mechanisms behind emojis, and the network-based diffusion dynamics. Results indicate that retweeting is fueled by Freemint and trading information, with higher economic incentives as a major motivation and some potential organizational tendencies. The diffusion of NFTs is primarily driven by a 'Ringed-layered' information mechanism involving individual promoters and speculators. Both the frequency and presentation of content contribute positively to the growth of the retweet network. This study contributes to innovation diffusion theory by embedding economic incentives into it.
We investigate several online packing problems in which convex polygons arrive one by one and have to be placed irrevocably into a container, while the aim is to minimize the used space. Among other variants, we consider strip packing and bin packing, where the container is the infinite horizontal strip $[0,\infty)\times [0,1]$ or a collection of $1 \times 1$ bins, respectively. We draw interesting connections to the following online sorting problem OnlineSorting$[\gamma,n]$: We receive a stream of real numbers $s_1,\ldots,s_n$, $s_i\in[0,1]$, one by one. Each real must be placed in an array $A$ with $\gamma n$ initially empty cells without knowing the subsequent reals. The goal is to minimize the sum of differences of consecutive reals in $A$. The offline optimum is to place the reals in sorted order so the cost is at most $1$. We show that for any $\Delta$-competitive online algorithm of OnlineSorting$[\gamma,n]$, it holds that $\gamma \Delta \in\Omega(\log n/\log \log n)$. We use this lower bound to prove the non-existence of competitive algorithms for various online translational packing problems of convex polygons, among them strip packing, bin packing and perimeter packing. This also implies that there exists no online algorithm that can pack all streams of pieces of diameter and total area at most $\delta$ into the unit square. These results are in contrast to the case when the pieces are restricted to rectangles, for which competitive algorithms are known. Likewise, the offline versions of packing convex polygons have constant factor approximation algorithms. As a complement, we also include algorithms for both online sorting and translation-only online strip packing with non-trivial competitive ratios. Our algorithm for strip packing relies on a new technique for recursively subdividing the strip into parallelograms of varying height, thickness and slope.
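To make OnlineSorting$[\gamma,n]$ concrete, the following Python sketch implements a simple bucket-based placement strategy (an illustrative baseline under the stated problem definition, not the algorithm analysed in the paper): reserve a contiguous block of cells for each of roughly $\sqrt{n}$ value ranges, fill blocks left to right, and fall back to any free cell on overflow.

```python
import math

def online_bucket_sort(stream, gamma=1.0):
    """Place reals from [0, 1] online into an array of ceil(gamma * n) cells.

    Strategy: split [0, 1] into k buckets, reserve a contiguous block of cells
    per bucket, and fill each block left to right; on overflow use any free
    cell. Returns the array and the cost, i.e. the sum of absolute differences
    of consecutive non-empty entries.
    """
    n = len(stream)
    m = math.ceil(gamma * n)
    k = max(1, math.isqrt(n))                 # number of buckets
    block = m // k                            # cells reserved per bucket
    A = [None] * m
    next_free = [i * block for i in range(k)]
    for s in stream:
        b = min(int(s * k), k - 1)
        pos = next_free[b]
        limit = (b + 1) * block if b < k - 1 else m
        if pos < limit and A[pos] is None:
            A[pos] = s
            next_free[b] = pos + 1
        else:                                 # bucket block exhausted
            A[A.index(None)] = s
    placed = [x for x in A if x is not None]
    cost = sum(abs(y - x) for x, y in zip(placed, placed[1:]))
    return A, cost
```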
The optimal branch number of MDS matrices makes them a preferred choice for designing diffusion layers in many block ciphers and hash functions. Consequently, various methods have been proposed for designing MDS matrices, including search and direct methods. While exhaustive search is suitable for small-order MDS matrices, direct constructions are preferred for larger orders due to the vast search space involved. In the literature, there has been extensive research on the direct construction of MDS matrices using both recursive and nonrecursive methods. On the other hand, in lightweight cryptography, Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a better balance between security and efficiency as a diffusion layer compared to MDS matrices. However, no direct construction method is available in the literature for recursive NMDS matrices. This paper introduces some direct constructions of NMDS matrices in both nonrecursive and recursive settings. Additionally, it presents some direct constructions of nonrecursive MDS matrices from generalized Vandermonde matrices. We propose a method for constructing involutory MDS and NMDS matrices using generalized Vandermonde matrices. Furthermore, we prove some folklore results related to NMDS codes that are used in the literature.
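For background, the branch number referred to above is the standard one for an $n \times n$ matrix $M$ over a finite field (a generic definition, not specific to this paper):
$$ \mathcal{B}(M) \;=\; \min_{x \neq 0} \big( \mathrm{wt}(x) + \mathrm{wt}(Mx) \big), $$
where $\mathrm{wt}(\cdot)$ counts nonzero coordinates; MDS matrices attain the optimal value $n+1$, while NMDS matrices attain the sub-optimal value $n$.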
The human shoulder, with its glenohumeral joint, tendons, ligaments, and muscles, allows for the execution of complex tasks with precision and efficiency. However, current robotic shoulder designs lack the compliance and compactness inherent in their biological counterparts. A major limitation of these designs is their reliance on external sensors like rotary encoders, which restrict mechanical joint design and add bulk to the system. To address this constraint, we present a bio-inspired antagonistic robotic shoulder with two degrees of freedom, powered by self-sensing hydraulically amplified self-healing electrostatic actuators. Our artificial muscle design decouples the high-voltage electrostatic actuation from the pair of low-voltage self-sensing electrodes. This approach allows for proprioceptive feedback control of trajectories in the task space while eliminating the need for any additional sensors. We assess the platform's efficacy by comparing it to feedback control based on position data provided by a motion capture system. The study demonstrates closed-loop control of robotic manipulators based on the inherent self-sensing capability of electrohydraulic actuators. The proposed architecture can serve as a basis for complex musculoskeletal joint arrangements.
We give a constructive characterization of matrices satisfying the reverse-order law for the Moore--Penrose pseudoinverse. In particular, for a given matrix $A$ we construct another matrix $B$, of arbitrary compatible size and chosen rank, in terms of the right singular vectors of $A$, such that the reverse-order law for $AB$ is satisfied. Moreover, we show that any matrix satisfying this law arises from such a construction. As a consequence, we give several equivalent conditions for $B^+ A^+$ to be the pseudoinverse of $AB$, for example $\mathcal{C}(A^*AB)=\mathcal{C}(BB^*A^*)$ or $B\left(AB\right)^+A$ being an orthogonal projection. In addition, we parameterize all possible singular value decompositions of a fixed matrix and give Greville-like equivalent conditions for $B^+A^+$ being a $\{1,2\}$-inverse of $AB$, with a geometric insight in terms of the principal angles between $\mathcal{C}(A^*)$ and $\mathcal{C}(B)$.
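As a quick numerical illustration of the law in question (a NumPy sketch, not the paper's construction verbatim), one can check $(AB)^+ = B^+A^+$ directly; the law fails for generic matrices but holds, for instance, when the columns of $B$ are right singular vectors of $A$.

```python
import numpy as np

def reverse_order_law_holds(A, B):
    """Numerically test whether (AB)^+ equals B^+ A^+."""
    lhs = np.linalg.pinv(A @ B)
    rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)
    return np.allclose(lhs, rhs)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

B = rng.standard_normal((3, 2))          # generic B: law typically fails
print(reverse_order_law_holds(A, B))     # False (generically)

_, _, Vt = np.linalg.svd(A)
B = Vt[:2].T                             # columns are right singular vectors of A
print(reverse_order_law_holds(A, B))     # True
```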
Denoising diffusion probabilistic models for image inpainting aim to add noise to the image texture during the forward process and to recover the masked regions from the unmasked ones via the reverse denoising process. Despite generating meaningful semantics, existing methods suffer from a semantic discrepancy between masked and unmasked regions: the semantically dense unmasked texture is never completely degraded, while the masked regions are reduced to pure noise during the diffusion process, leading to a large discrepancy between them. In this paper, we aim to answer how unmasked semantics guide the texture denoising process, and how to tackle the semantic discrepancy, so as to facilitate consistent and meaningful semantics generation. To this end, we propose a novel structure-guided diffusion model named StrDiffusion, which reformulates the conventional texture denoising process under structure guidance to derive a simplified denoising objective for image inpainting, while revealing that: 1) the semantically sparse structure is beneficial for tackling the semantic discrepancy in the early stages, while the dense texture generates reasonable semantics in the late stages; 2) the semantics from unmasked regions essentially offer time-dependent structure guidance for the texture denoising process, benefiting from the time-dependent sparsity of the structure semantics. For the denoising process, a structure-guided neural network is trained to estimate the simplified denoising objective by exploiting the consistency of the denoised structure between masked and unmasked regions. Besides, we devise an adaptive resampling strategy as a formal criterion for whether the structure is competent to guide the texture denoising process, while regulating their semantic correlations. Extensive experiments validate the merits of StrDiffusion over state-of-the-art methods. Our code is available at //github.com/htyjers/StrDiffusion.
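For background, the forward process referred to above is the standard DDPM corruption (generic notation, not the structure-guided objective introduced in the paper): with noise schedule $\alpha_t$ and $\bar\alpha_t = \prod_{s=1}^{t}\alpha_s$,
$$ q(x_t \mid x_0) \;=\; \mathcal{N}\big(x_t;\, \sqrt{\bar\alpha_t}\, x_0,\, (1-\bar\alpha_t)\, I\big), $$
so at intermediate timesteps the unmasked texture is only partially degraded while the masked regions are already pure noise, which is the source of the semantic discrepancy discussed above.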
Diabetic Retinopathy (DR) stands as the leading cause of blindness globally, particularly affecting individuals between the ages of 20 and 70. This paper presents a Computer-Aided Diagnosis (CAD) system designed for the automatic classification of retinal images into five distinct classes: Normal, Mild, Moderate, Severe, and Proliferative Diabetic Retinopathy (PDR). The proposed system leverages Convolutional Neural Networks (CNNs) built on pre-trained deep learning models. Through the application of fine-tuning techniques, our model is trained on fundus images of diabetic retinopathy at resolutions of 350x350x3 and 224x224x3. Experimental results obtained on the Kaggle platform, utilizing resources comprising 4 CPUs, 17 GB of RAM, and 1 GB of disk space, demonstrate the efficacy of our approach. The achieved Area Under the Curve (AUC) values for the CNN, MobileNet, VGG-16, InceptionV3, and InceptionResNetV2 models are 0.50, 0.70, 0.53, 0.63, and 0.69, respectively.
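A minimal sketch of the kind of fine-tuning pipeline described above (hypothetical code using MobileNet at 224x224x3 as an example; not the authors' exact configuration or hyperparameters):

```python
import tensorflow as tf

# Hypothetical sketch: fine-tune a pre-trained MobileNet for 5-class DR grading.
base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                        # freeze the pre-trained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(5, activation="softmax"),   # Normal .. PDR
])
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",          # one-hot labels over the 5 grades
    metrics=[tf.keras.metrics.AUC(name="auc")],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```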
Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey on the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), the Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices of XAI models for IoT systems in these applications, with appropriate examples, and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.