This paper presents a novel design for a Variable Stiffness 3 DoF actuated wrist to improve task adaptability and safety during interactions with people and objects. The proposed design employs a hybrid serial-parallel configuration to achieve a 3 DoF wrist joint that can actively and continuously vary its overall stiffness thanks to a redundant elastic actuation system using only four motors. Its stiffness control principle is similar to human muscular impedance regulation: the shape of the stiffness ellipsoid depends mostly on posture, while elastic co-contraction modulates its overall size. The employed mechanical configuration achieves a compact and lightweight device that, thanks to its anthropomorphic characteristics, could be suitable for prostheses and humanoid robots. After introducing the design concept of the device, this work provides methods to estimate the posture of the wrist from joint angle measurements and to modulate its stiffness. Thereafter, this paper describes the first physical implementation of the presented design, detailing the mechanical prototype and electronic hardware, the control architecture, and the associated firmware. The reported experimental results show the potential of the proposed device while highlighting some limitations. To conclude, we show the motion and stiffness behavior of the device with some qualitative experiments.
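A minimal way to picture this posture/co-contraction split is the standard mapping from joint-space to Cartesian stiffness (generic notation introduced here for illustration, not the paper's specific wrist model):
\[
K_x(q, c) \;=\; J(q)^{-\top} \, K_q(c) \, J(q)^{-1},
\]
where $q$ is the wrist posture, $J(q)$ the wrist Jacobian, and $K_q(c)$ the joint-space stiffness set by the elastic co-contraction level $c$. The eigenstructure of $K_x$ defines the stiffness ellipsoid: its orientation and shape follow $J(q)$, while scaling $K_q(c)$ through co-contraction rescales the ellipsoid's overall size.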
In the realm of Tiny AI, we introduce "You Only Look at Interested Cells" (YOLIC), an efficient method for object localization and classification on edge devices. Seamlessly blending the strengths of semantic segmentation and object detection, YOLIC offers superior computational efficiency and precision. By adopting Cells of Interest for classification instead of individual pixels, YOLIC encapsulates relevant information, reduces computational load, and enables rough object shape inference. Importantly, the need for bounding box regression is obviated, as YOLIC capitalizes on the predetermined cell configuration that provides information about potential object location, size, and shape. To tackle the limitations of single-label classification, a multi-label classification approach is applied to each cell, effectively recognizing overlapping or closely situated objects. This paper presents extensive experiments on multiple datasets, demonstrating that YOLIC achieves detection performance comparable to that of state-of-the-art YOLO algorithms while surpassing them in speed, exceeding 30 fps on a Raspberry Pi 4B CPU. All resources related to this study, including datasets, cell designer, image annotation tool, and source code, have been made publicly available on our project website at //kai3316.github.io/yolic.github.io
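As a rough illustration of the cell-based, multi-label idea (a sketch with assumed cell and class counts, not the released YOLIC code), a backbone feature vector can be mapped to an independent sigmoid score for every (cell, class) pair over the predetermined cell layout, so that no bounding-box regression head is needed:
\begin{verbatim}
# Illustrative sketch, not the official YOLIC implementation.
import torch
import torch.nn as nn

NUM_CELLS = 96      # assumed number of predefined Cells of Interest
NUM_CLASSES = 10    # assumed number of object classes per cell

class CellMultiLabelHead(nn.Module):
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, NUM_CELLS * NUM_CLASSES)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim) from any lightweight backbone
        logits = self.fc(features).view(-1, NUM_CELLS, NUM_CLASSES)
        return torch.sigmoid(logits)   # independent per-class scores per cell

# Training would use binary cross-entropy against per-cell multi-hot labels,
# e.g. nn.BCELoss()(head(feats), targets).
\end{verbatim}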
Efforts to secure computing systems via software traditionally focus on the operating system and application levels. In contrast, the Security Protocol and Data Model (SPDM) tackles firmware-level security challenges, which are much harder (if at all possible) to detect with regular protection software. SPDM includes key features such as peripheral authentication, authenticated retrieval of hardware measurements, and secure session establishment. Since SPDM is a relatively recent proposal, there is a lack of studies evaluating its performance impact on real-world applications. In this article, we address this gap by (1) implementing the protocol on a simple virtual device and then investigating the overhead introduced by each SPDM message; and (2) creating an SPDM-capable virtual hard drive based on VirtIO and comparing the resulting read/write performance with a regular, unsecured implementation. Our results suggest that the SPDM bootstrap time is on the order of tens of milliseconds, while the toll of introducing SPDM on hard drive communication depends heavily on the specific workload pattern. For example, for mixed random read/write operations, the slowdown is negligible in comparison to the baseline unsecured setup. Conversely, for sequential read or write operations, the data encryption process becomes the bottleneck, reducing the performance indicators by several orders of magnitude.
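To make the per-message overhead measurement concrete, a toy timing harness of the following form could be used (the responder is a stub with a fixed simulated delay, not a real SPDM implementation; the listed requests follow the usual DMTF SPDM connection and session setup sequence):
\begin{verbatim}
# Toy harness for per-message bootstrap timing; the transport and the
# responder are placeholders, not an actual SPDM stack.
import time

SPDM_BOOTSTRAP = [
    "GET_VERSION", "GET_CAPABILITIES", "NEGOTIATE_ALGORITHMS",
    "GET_DIGESTS", "GET_CERTIFICATE", "CHALLENGE", "KEY_EXCHANGE", "FINISH",
]

def send_request(msg: str) -> str:
    # Placeholder for the virtual-device transport; a real setup would
    # serialize the SPDM request and wait for the device response.
    time.sleep(0.001)                 # simulated device processing time
    return msg + "_RSP"

def measure_bootstrap() -> dict:
    timings = {}
    for msg in SPDM_BOOTSTRAP:
        start = time.perf_counter()
        send_request(msg)
        timings[msg] = (time.perf_counter() - start) * 1e3   # milliseconds
    return timings

if __name__ == "__main__":
    for msg, ms in measure_bootstrap().items():
        print(f"{msg:22s} {ms:6.2f} ms")
\end{verbatim}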
The role of cryptocurrencies within financial systems has been expanding rapidly in recent years among investors and institutions. It is therefore crucial to investigate these phenomena and develop statistical methods able to capture their interrelationships, the links with other global systems, and, at the same time, the serial heterogeneity. For these reasons, this paper introduces hidden Markov regression models for jointly estimating quantiles and expectiles of cryptocurrency returns using regime-switching copulas. The proposed approach allows us to focus on extreme returns and describe their temporal evolution by introducing time-dependent coefficients evolving according to a latent Markov chain. Moreover, to model their time-varying dependence structure, we consider elliptical copula functions defined by state-specific parameters. Maximum likelihood estimates are obtained via an Expectation-Maximization algorithm. The empirical analysis investigates the relationship between daily returns of five cryptocurrencies and major world market indices.
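In generic notation (introduced here only for illustration, not necessarily the authors' exact formulation), the state-dependent part of such a model can be written as a linear quantile regression whose coefficients are driven by a latent Markov chain:
\[
Q_{Y_{it}}\!\left(\tau \mid \boldsymbol{x}_t, S_t = k\right) \;=\; \boldsymbol{x}_t^{\top} \boldsymbol{\beta}_{ik}(\tau), \qquad k = 1, \dots, K,
\]
where $Y_{it}$ is the return of cryptocurrency $i$ at time $t$, $S_t$ is the hidden state evolving with transition matrix $\boldsymbol{\Pi}$, and the cross-sectional dependence among the returns in state $k$ is captured by an elliptical copula with state-specific parameters; an analogous expression holds for expectiles.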
Due to its superior efficiency in utilizing annotations and addressing gigapixel-sized images, multiple instance learning (MIL) has shown great promise as a framework for whole slide image (WSI) classification in digital pathology diagnosis. However, existing methods tend to focus on advanced aggregators with different structures, often overlooking the intrinsic features of H\&E pathological slides. To address this limitation, we introduce two pathological priors: nuclear heterogeneity of diseased cells and spatial correlation of pathological tiles. Leveraging the former, we propose a data augmentation method that utilizes stain separation during extractor training via a contrastive learning strategy to obtain instance-level representations. We then describe the spatial relationships between the tiles using an adjacency matrix. By integrating these two views, we design a multi-instance framework for analyzing H\&E-stained tissue images based on pathological inductive bias, encompassing feature extraction, filtering, and aggregation. Extensive experiments on the Camelyon16 breast dataset and the TCGA-NSCLC lung dataset demonstrate that our proposed framework can effectively handle tasks related to cancer detection and differentiation of subtypes, outperforming state-of-the-art medical image classification methods based on MIL. The code will be released later.
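A minimal sketch of how the adjacency view can enter the aggregation step (our own simplified pooling with assumed shapes, not the paper's aggregator):
\begin{verbatim}
# Toy slide-level aggregation: smooth instance features over spatial
# neighbours via a row-normalised adjacency, then attention-pool them.
import numpy as np

def aggregate_slide(features: np.ndarray, adjacency: np.ndarray) -> np.ndarray:
    # features:  (n_tiles, dim) instance embeddings from the extractor
    # adjacency: (n_tiles, n_tiles) binary spatial-neighbourhood matrix
    a_hat = adjacency + np.eye(len(adjacency))          # keep each tile itself
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)   # row-normalise
    smoothed = a_norm @ features                        # neighbourhood smoothing

    scores = smoothed @ smoothed.mean(axis=0)           # toy attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ smoothed                           # slide-level embedding
\end{verbatim}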
This study explores the intricacies of waiting games, a novel dynamic that emerged with Ethereum's transition to a Proof-of-Stake (PoS)-based block proposer selection protocol. Within this PoS framework, validators acquire a distinct monopoly position during their assigned slots, given that block proposal rights are set deterministically, in contrast with Proof-of-Work (PoW) protocols. Consequently, validators have the power to delay block proposals, stepping outside the honest validator specification to optimize potential returns through Maximal Extractable Value (MEV) payments. Nonetheless, this strategic behaviour introduces the risk of orphaning if attestors fail to observe and vote on the block in time. Our quantitative analysis of this waiting phenomenon and its associated risks reveals an opportunity for enhanced MEV extraction, exceeding standard protocol rewards and providing sufficient incentives for validators to play the game. Notably, our findings indicate that delayed proposals do not always result in orphaning and that orphaned blocks are not consistently proposed later than non-orphaned ones. To further examine consensus stability under varying network conditions, we adopt an agent-based simulation model tailored for PoS-Ethereum, illustrating that consensus disruption is not observed unless significant delay strategies are adopted. Ultimately, this research offers valuable insights into the advent of waiting games on Ethereum, providing a comprehensive understanding of the trade-offs and potential profits for validators within the blockchain ecosystem.
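One way to frame the validator's trade-off (our own illustrative notation, not the paper's model) is as the choice of a proposal delay $t$ maximizing
\[
\mathbb{E}\!\left[R(t)\right] \;=\; \big(R_{\mathrm{proto}} + \mathrm{MEV}(t)\big)\,\big(1 - p_{\mathrm{orphan}}(t)\big),
\]
where $\mathrm{MEV}(t)$ grows as extraction opportunities accumulate during the slot, while the orphaning probability $p_{\mathrm{orphan}}(t)$ rises once attestors can no longer observe the block before their attestation deadline.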
Making contactless payments using a smartwatch is increasingly popular, but this payment medium lacks traditional biometric security measures such as facial or fingerprint recognition. In 2022, Sturgess et al. proposed WatchAuth, a system for authenticating smartwatch payments using the physical gesture of reaching towards a payment terminal. While effective, the system requires the user to undergo a burdensome enrolment period to achieve acceptable error levels. In this dissertation, we explore whether applications of deep learning can reduce the number of gestures a user must provide to enrol into an authentication system for smartwatch payment. We first construct a deep-learned authentication system that outperforms the current state of the art, including in a scenario where the target user has provided only a limited number of gestures. We then develop a regularised autoencoder model for generating synthetic user-specific gestures and show that using these gestures in training improves the classification ability of an authentication system. Through this technique, we can reduce the number of gestures required to enrol a user into a WatchAuth-like system without negatively impacting its error rates.
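As a sketch of the kind of regularised autoencoder this refers to (our own minimal architecture with assumed input shapes, not the dissertation's model):
\begin{verbatim}
# Illustrative sketch: autoencoder over fixed-length wrist-sensor gesture
# windows, with an L2 penalty on the latent code as the regulariser.
# Sampling around a user's latent codes yields synthetic user-specific gestures.
import torch
import torch.nn as nn

SEQ_LEN, CHANNELS, LATENT = 200, 6, 16    # assumed: 200 samples x 6 IMU axes

class GestureAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(SEQ_LEN * CHANNELS, 128), nn.ReLU(),
            nn.Linear(128, LATENT))
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 128), nn.ReLU(),
            nn.Linear(128, SEQ_LEN * CHANNELS))

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z).view(-1, SEQ_LEN, CHANNELS)
        return recon, z

def loss_fn(x, recon, z, lam=1e-3):
    # reconstruction error + latent regularisation
    return nn.functional.mse_loss(recon, x) + lam * z.pow(2).mean()

# Example: x = torch.randn(8, SEQ_LEN, CHANNELS); recon, z = GestureAutoencoder()(x)
\end{verbatim}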
Applications of the ensemble Kalman filter to high-dimensional problems are feasible only with small ensembles. This necessitates some form of regularization of the analysis (observation update) problem. We propose a regularization technique based on a new non-stationary, non-parametric spatial model on the sphere. The model, termed the Locally Stationary Convolution Model, is a constrained version of the general Gaussian process convolution model. The constraints on the location-dependent convolution kernel include local isotropy, positive definiteness as a function of distance, and smoothness as a function of location. The model allows for a rigorous definition of the local spectrum, which is required to be a smooth function of spatial wavenumber. We propose and test an ensemble filter in which prior covariances are postulated to obey the Locally Stationary Convolution Model. The model is estimated online in a two-stage procedure. First, ensemble perturbations are bandpass filtered in several wavenumber bands to extract aggregated local spatial spectra. Second, a neural network recovers the local spectra from the sample variances of the filtered fields. In simulation experiments, the new filter was capable of outperforming several existing techniques. With small to moderate ensemble sizes, the improvement was substantial.
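In generic process-convolution notation (illustrative only; the paper's model adds the stated constraints on the kernel), the random field on the sphere is
\[
\xi(\boldsymbol{s}) \;=\; \int_{\mathbb{S}^2} k(\boldsymbol{s}, \boldsymbol{u})\, \mathrm{d}W(\boldsymbol{u}),
\]
where $W$ is a white-noise measure and $k(\boldsymbol{s}, \cdot)$ is the location-dependent convolution kernel; the Locally Stationary Convolution Model further requires $k$ to be locally isotropic, positive definite as a function of distance, and smooth in $\boldsymbol{s}$, which is what makes the local spectrum well defined.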
This paper reviews the state of the art on parameter correlation in two-parameter models. The apparent contradictions between different authors regarding the ability of D--optimality to simultaneously reduce the correlation and the area of the confidence ellipse in two-parameter models are analyzed. Two main approaches were found: 1) those who consider that the optimality criteria simultaneously control the precision and the correlation of the parameter estimators; and 2) those who consider that a combination of criteria is needed to achieve the same objective. An analytical criterion that combines in its structure both the precision of the parameter estimators and the reduction of the correlation between them is provided. The criterion was tested both in a simple linear regression model, considering all possible design spaces, and in a non-linear model with strong correlation between the parameter estimators (Michaelis--Menten) to show its performance. The proposed criterion outperformed all the reviewed strategies and criteria in controlling the precision and the correlation simultaneously.
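The source of the apparent contradiction can be seen from standard design-theory quantities (generic notation, not the proposed criterion itself): for a two-parameter model with information matrix
\[
M(\xi) \;=\; \begin{pmatrix} m_{11} & m_{12} \\ m_{12} & m_{22} \end{pmatrix},
\qquad
\operatorname{corr}\!\big(\hat{\theta}_1, \hat{\theta}_2\big) \;=\; \frac{-m_{12}}{\sqrt{m_{11}\, m_{22}}},
\]
D--optimality maximizes $\det M(\xi) = m_{11} m_{22} - m_{12}^2$, which shrinks the area of the confidence ellipse but does not, by itself, force the off-diagonal term (and hence the correlation) towards zero.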
In this work we propose a two-step alternative clearing method for day-ahead electricity markets. In the first step, using the aggregation of bids, an approximate clearing is performed, and based on the outcome of this problem, estimates of the clearing prices of the individual periods are derived. These estimates of the range of clearing prices explicitly determine the acceptance indicators for a subset of the original bids. In the second step, another round of clearing is performed to determine the acceptance indicators of the remaining bids and the market clearing prices. We show that the bid-aggregation-based method may result in a suboptimal solution or an infeasible problem in the second step, but we also point out that these pitfalls of the algorithm may be avoided if a different aggregation pattern is used. We therefore propose to define multiple different aggregation patterns and to use parallel computing to enhance the performance of the algorithm. We test the proposed approach on setups of various problem sizes and conclude that, in the case of parallel computing with 4 threads, a significant gain in computational speed may be achieved with a high success rate.
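A toy, single-period sketch of this first step (our own simplification with plain price-quantity bids; the real problem involves multiple periods and more complex bid types) could look as follows:
\begin{verbatim}
# Toy first step: intersect the aggregated supply and demand curves to
# estimate the clearing price, then pre-fix bids clearly in/out of the money.
def approximate_price(supply, demand):
    # supply, demand: lists of (price, quantity) bids
    supply = sorted(supply)                      # cheapest supply first
    demand = sorted(demand, reverse=True)        # most expensive demand first
    i = j = 0
    s_left = supply[0][1] if supply else 0.0
    d_left = demand[0][1] if demand else 0.0
    price = None
    while i < len(supply) and j < len(demand) and supply[i][0] <= demand[j][0]:
        price = (supply[i][0] + demand[j][0]) / 2    # last pair that trades
        q = min(s_left, d_left)
        s_left -= q
        d_left -= q
        if s_left == 0:
            i += 1
            s_left = supply[i][1] if i < len(supply) else 0.0
        if d_left == 0:
            j += 1
            d_left = demand[j][1] if j < len(demand) else 0.0
    return price

def prefix_acceptance(bids, price, margin):
    # bids: list of (side, price, quantity); fix only bids far from the
    # estimated price, leaving the rest to the second clearing step.
    fixed = {}
    for idx, (side, p, _q) in enumerate(bids):
        if side == "supply":
            if p < price - margin: fixed[idx] = 1    # clearly accepted
            elif p > price + margin: fixed[idx] = 0  # clearly rejected
        else:                                        # demand
            if p > price + margin: fixed[idx] = 1
            elif p < price - margin: fixed[idx] = 0
    return fixed
\end{verbatim}
The remaining, unfixed bids and the final market clearing prices would then be determined by the smaller second-stage clearing problem.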
Games and simulators can be a valuable platform to execute complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of AI algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community aiming to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with AI algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.