In this paper, we propose a novel performance analysis for articulated robotic mechanisms, called sensor observability analysis, and the resulting sensor observability index. The goal is to analyse and evaluate how well distributed, robot-mounted directional or axial sensors can observe specific task-space axes as a function of joint configuration. For example, joint torque sensors are often used in serial robot manipulators and assumed to be perfectly capable of estimating end-effector forces, but certain joint configurations may render one or more task-space axes unobservable because of how the joint torque sensors are aligned. The proposed sensor observability analysis provides a method to analyse the cumulative quality of a robot configuration for observing the task space, akin to forward kinematics for sensors. The resulting metrics can then be used in optimization and in null-space control to avoid sensor observability singular configurations or to maximize sensor observability in particular directions. Parallels are drawn between sensor observability and the traditional kinematic Jacobian for the particular case of joint torque sensors in serial robot manipulators. Compared to kinematic analysis using the Jacobian in serial manipulators, sensor observability analysis is shown to be more generalizable: it can analyse non-joint-mounted sensors and can potentially be applied to sensor types other than force sensors, e.g., link-mounted proximity sensors. Simulations and experiments using a custom 3-DOF robot and the Baxter robot demonstrate the utility and importance of sensor observability in physical interactions.
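As a rough illustration of the idea (not the paper's exact formulation), the sketch below accumulates, per task-space axis, the absolute components of each sensor's measurement axis expressed in the task frame, and combines them into a scalar index that collapses to zero whenever some axis is unobservable; the function name, the 6-axis wrench convention, and the product-based index are assumptions made for this example.

```python
# Minimal sketch, assuming each sensor's unit measurement axis is already
# expressed in the task frame for the current joint configuration.
import numpy as np

def sensor_observability(sensor_axes_task):
    """sensor_axes_task: (n_sensors, 6) array of unit wrench axes in the task frame."""
    S = np.abs(np.asarray(sensor_axes_task, dtype=float))
    per_axis = S.sum(axis=0)           # cumulative ability to observe each task-space axis
    index = float(np.prod(per_axis))   # 0 whenever any task-space axis is unobservable
    return per_axis, index

# Toy case: three joint torque sensors all aligned with the task-frame z-moment
axes = np.zeros((3, 6))
axes[:, 5] = 1.0
per_axis, idx = sensor_observability(axes)
print(per_axis, idx)  # only the last axis is observable, so the index is 0
```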
We study distributed estimation and learning problems in a networked environment in which agents exchange information to estimate unknown statistical properties of random variables from their privately observed samples. By exchanging information about their private observations, the agents can collectively estimate the unknown quantities, but they also face privacy risks. The goal of our aggregation schemes is to combine the observed data efficiently over time and across the network, while accommodating the privacy needs of the agents and without any coordination beyond their local neighborhoods. Our algorithms enable the participating agents to estimate a complete sufficient statistic from private signals acquired offline or online over time, while preserving the privacy of their signals and network neighborhoods. This is achieved through linear aggregation schemes with adjusted randomization that adds noise to the exchanged estimates subject to differential privacy (DP) constraints. In every case, we demonstrate the efficiency of our algorithms by proving convergence to the estimators of a hypothetical, omniscient observer that has central access to all of the signals. We also provide convergence rate analysis and finite-time performance guarantees, and show that the noise minimizing the convergence time to the best estimates is Laplace noise, with parameters corresponding to each agent's sensitivity to its signal and network characteristics. Finally, to supplement and validate our theoretical results, we run experiments on real-world data from the US Power Grid Network and electric consumption data from German households to estimate the average power consumption of power stations and households under all privacy regimes.
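To make the aggregation concrete, here is a minimal sketch of one plausible exchange round: each agent perturbs its current estimate with Laplace noise of scale sensitivity/epsilon before sharing, then mixes its own signal mean with its neighbors' noisy estimates. The update rule, equal weights, and function names are illustrative assumptions rather than the paper's exact scheme.

```python
# Assumed update rule for illustration only: Laplace-perturbed estimates are
# exchanged with neighbors and mixed with each agent's own signal mean.
import numpy as np

rng = np.random.default_rng(0)

def dp_consensus_step(estimates, adjacency, own_signal_means, sensitivity, epsilon):
    """One exchange round with per-message Laplace noise of scale sensitivity/epsilon."""
    noisy = estimates + rng.laplace(scale=sensitivity / epsilon, size=len(estimates))
    new = np.empty_like(estimates)
    for i in range(len(estimates)):
        nbrs = np.flatnonzero(adjacency[i])
        # equal-weight mix of the agent's own signal mean and neighbors' noisy estimates
        new[i] = (own_signal_means[i] + noisy[nbrs].sum()) / (1 + len(nbrs))
    return new

# Toy demo: 3 fully connected agents, private samples with true mean 5
A = np.ones((3, 3)) - np.eye(3)
means = rng.normal(5.0, 1.0, size=3)            # each agent's private signal mean
est = means.copy()
for _ in range(50):
    est = dp_consensus_step(est, A, means, sensitivity=1.0, epsilon=1.0)
print(est)  # estimates hover near the network-wide mean, perturbed by the DP noise
```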
We introduce a sparse estimation method for ordinary kriging of functional data. Functional kriging predicts a feature, given as a function, at an unobserved location using a linear combination of the data observed at other locations. To estimate the weights of this linear combination, we apply lasso-type regularization when minimizing the expected squared error. We develop an algorithm that computes the estimator using the augmented Lagrangian method. Tuning parameters included in the estimation procedure are selected by cross-validation. Since the proposed method can shrink some of the weights of the linear combination to exactly zero, we can identify which locations are necessary or unnecessary for predicting the feature. Simulation and real data analysis show that the proposed method provides reasonable results.
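The weight-estimation problem can be pictured as an L1-penalized quadratic program. The paper solves it with the augmented Lagrangian method; the sketch below instead uses plain proximal gradient (ISTA) on an assumed objective w'Cw - 2c'w + lambda*||w||_1, purely to show how sparsity in the kriging weights arises.

```python
# Illustrative sketch: L1-penalized kriging weights solved by proximal gradient
# (ISTA); the paper itself uses an augmented Lagrangian algorithm.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_kriging_weights(C, c, lam, iters=1000):
    """Minimize w'Cw - 2c'w + lam*||w||_1 over the kriging weights w."""
    step = 1.0 / (2.0 * np.linalg.eigvalsh(C).max())  # 1 / Lipschitz constant of the gradient
    w = np.zeros_like(c)
    for _ in range(iters):
        grad = 2.0 * (C @ w - c)
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy example: exponential covariance over 20 locations on [0, 1]
x = np.linspace(0.0, 1.0, 20)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)    # covariances among observed locations
c = np.exp(-np.abs(x - 0.45) / 0.3)                   # covariances to the prediction location
w = sparse_kriging_weights(C, c, lam=0.5)
print(np.flatnonzero(np.abs(w) > 1e-8))               # only a few locations keep nonzero weight
```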
Power analysis poses a significant threat to the security of cryptographic algorithms, as it can be leveraged to recover secret keys. While various software-based countermeasures exist to mitigate this non-invasive attack, they often involve a trade-off between time and space constraints. Techniques such as masking and shuffling, while effective, can noticeably impact execution speed and rely heavily on run-time random number generators. In contrast, internally encoded implementations of block ciphers offer an alternative approach that does not rely on run-time random sources, but they come with the drawback of requiring substantial memory space to accommodate lookup tables. Internal encoding, commonly employed in white-box cryptography, suffers from a security limitation: it does not effectively protect the secret key against statistical analysis. To overcome this weakness, this paper introduces a secure internal encoding method for an AES implementation. By addressing the root cause of the vulnerabilities found in previous encoding methods, we propose a balanced encoding technique that aims to minimize the problematic correlation with key-dependent intermediate values. We analyze the potential weaknesses associated with the balanced encoding and present a method that utilizes complementary sets of lookup tables. In this approach, the size of the lookup tables is approximately 512KB, and the number of table lookups is 1,024. This is comparable to the table size of non-protected white-box AES-128 implementations, while requiring only half the number of lookups. By adopting this method, our aim is to introduce a non-masking technique that mitigates the vulnerability to statistical analysis present in current internally encoded AES implementations.
State estimation of robotic systems is essential to implementing feedback controllers, which usually provide better robustness to modeling uncertainties than open-loop controllers. However, state estimation of soft robots is very challenging because soft robots have theoretically infinite degrees of freedom while existing sensors only provide a limited number of discrete measurements. In this paper, we design an observer for soft continuum robotic arms based on the well-known Cosserat rod theory, which models continuum robotic arms by nonlinear partial differential equations (PDEs). The observer is able to estimate all the continuum (infinite-dimensional) robot states (poses, strains, and velocities) by sensing only the tip velocity of the continuum robot (hence it is called a ``boundary'' observer). More importantly, the estimation error dynamics is formally proven to be locally input-to-state stable. The key idea is to inject sequential tip velocity measurements into the observer in a way that dissipates the energy of the estimation errors through the boundary. Furthermore, this boundary observer can be implemented by simply changing a boundary condition in any numerical solver of Cosserat rod models. Extensive numerical studies are included and suggest that the domain of attraction is large and that the observer is robust to uncertainties in tip velocity measurements and model parameters.
There is an increasing interest in learning reward functions that model human intent and human preferences. However, many frameworks use blackbox learning methods that, while expressive, are difficult to interpret. We propose and evaluate a novel approach for learning expressive and interpretable reward functions from preferences using Differentiable Decision Trees (DDTs). Our experiments across several domains, including Cartpole, Visual Gridworld environments, and Atari games, provide evidence that the tree structure of our learned reward function is useful in determining the extent to which the reward function is aligned with human preferences. We experimentally demonstrate that using reward DDTs results in competitive performance when compared with larger-capacity deep neural network reward functions. We also observe that the choice between soft and hard (argmax) outputs of a reward DDT reveals a tension between wanting highly shaped rewards to ensure good RL performance and wanting simple, non-shaped rewards to afford interpretability.
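As a toy illustration of the soft-versus-hard distinction (our own minimal construction, not the paper's architecture), the sketch below builds a depth-2 differentiable decision tree over state features: the soft output blends leaf rewards by routing probabilities, while the hard output returns the reward of the argmax-probability leaf.

```python
# Toy depth-2 differentiable decision tree reward (illustrative construction).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ddt_reward(x, W, b, leaf_rewards, hard=False):
    """W: (3, d) and b: (3,) parameterize the 3 internal nodes; leaf_rewards: (4,)."""
    g0, g1, g2 = (sigmoid(W[i] @ x + b[i]) for i in range(3))
    # routing probabilities of the 4 leaves (left child = gate, right child = 1 - gate)
    p = np.array([g0 * g1, g0 * (1 - g1), (1 - g0) * g2, (1 - g0) * (1 - g2)])
    if hard:
        return float(leaf_rewards[int(np.argmax(p))])  # interpretable argmax path
    return float(p @ leaf_rewards)                      # differentiable, shaped reward

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
leaves = np.array([0.0, 0.25, 0.5, 1.0])
x = rng.normal(size=4)
print(ddt_reward(x, W, b, leaves), ddt_reward(x, W, b, leaves, hard=True))
```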
Nowadays, the deployment of deep learning models on edge devices to address real-world classification problems is becoming more prevalent. Moreover, early classification, a technique that classifies the input data after observing only an early portion of it, is growing in popularity; it aims to reduce communication and computation requirements, which are crucial parameters in edge intelligence environments. While early classification in the field of time series analysis has been broadly researched, existing solutions for multivariate time series problems primarily focus on early classification along the temporal dimension, treating the multiple input channels collectively. In this study, we propose a more flexible early classification pipeline that considers the input channels at a more granular level and extends the early classification paradigm to the channel dimension. To implement this method, we utilize reinforcement learning techniques and introduce constraints to ensure the feasibility and practicality of our objective. To validate its effectiveness, we conduct experiments using synthetic data and also evaluate its performance on real datasets. The comprehensive results from our experiments demonstrate that, across multiple datasets, our method can enhance the early classification paradigm by achieving improved accuracy at equal input utilization.
The AI community is increasingly focused on merging logic with deep learning to create Neuro-Symbolic (NeSy) paradigms and assist neural approaches with symbolic knowledge. A significant trend in the literature involves integrating axioms and facts into loss functions by grounding logical symbols with neural networks and operators with fuzzy semantics. Logic Tensor Networks (LTN) is one of the main representatives in this category, known for its simplicity, efficiency, and versatility. However, it has been previously shown that not all fuzzy operators perform equally when applied in a differentiable setting. Researchers have proposed several configurations of operators, trading off effectiveness, numerical stability, and generalization to different formulas. This paper presents a configuration of fuzzy operators for grounding formulas end-to-end in logarithm space. Our goal is to develop a configuration that is more effective than previous proposals, able to handle any formula, and numerically stable. To achieve this, we propose semantics that are best suited to the logarithm space and introduce novel simplifications and improvements that are crucial for optimization via gradient descent. We use LTN as the framework for our experiments, but the conclusions of our work apply to any similar NeSy framework. Our findings, both formal and empirical, show that the proposed configuration outperforms the state of the art and that each of our modifications is essential in achieving these results.
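To illustrate why log-space grounding is numerically convenient, here is a small sketch of product-t-norm semantics on log-truth-values; the operator set and the stable log(1 - exp(x)) trick are generic choices for this example, not necessarily the paper's exact configuration.

```python
# Generic log-space fuzzy operators under the product t-norm (illustrative;
# the paper's exact operator configuration may differ).
import numpy as np

def log_and(*log_truths):           # product t-norm: log(a * b * ...) = sum of logs
    return sum(log_truths)

def log_not(log_p):                 # log(1 - p), computed stably from log(p)
    if log_p > -np.log(2.0):
        return np.log(-np.expm1(log_p))
    return np.log1p(-np.exp(log_p))

def log_or(log_a, log_b):           # probabilistic sum: 1 - (1 - a) * (1 - b)
    return log_not(log_and(log_not(log_a), log_not(log_b)))

a, b = np.log(0.9), np.log(0.2)
print(np.exp(log_and(a, b)), np.exp(log_or(a, b)))  # approx. 0.18 and 0.92
```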
In everyday life, we often find that we can maintain an object's equilibrium on a tray by adjusting its orientation. Building upon this observation and extending the method we previously proposed to suppress sloshing in a moving vessel, this paper presents a feedforward control approach for robotic transport of objects that are not firmly grasped but simply placed on a tray. The proposed approach combines smoothing actions and end-effector re-orientation to prevent object sliding. It can be integrated into existing robotic systems as a plug-in element between the reference trajectory generator and the robot controller. To demonstrate the effectiveness of the proposed methods, particularly when dealing with unknown reference signals, we embed them in a direct teleoperation scheme. In this scheme, the user commands the robot carrying the tray by simply moving their hand in free space, with the hand's 3D position detected by a motion capture system. Furthermore, in the case of point-to-point motions, the same feedforward control, when fed with step inputs representing the desired goal position, dynamically generates the minimum-time reference trajectory that complies with velocity and acceleration constraints, thus avoiding sloshing and slipping. More information and accompanying videos can be found at //sites.google.com/view/robotwaiter/
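For the point-to-point case, the minimum-time reference under symmetric velocity and acceleration limits is the familiar trapezoidal (or triangular) profile; the sketch below generates it from a step of amplitude d. It omits the smoothing and re-orientation actions of the full approach and is only an assumed, simplified stand-in.

```python
# Assumed simplification: rest-to-rest, single-axis minimum-time profile under
# symmetric velocity and acceleration limits, evaluated at time t.
import numpy as np

def min_time_profile(d, vmax, amax, t):
    """Position along a move of length d >= 0 at time t (trapezoidal/triangular profile)."""
    if d < vmax**2 / amax:                    # short move: triangular velocity profile
        vpeak = np.sqrt(d * amax)
        ta, tf = vpeak / amax, 0.0
    else:                                     # long move: trapezoidal velocity profile
        vpeak = vmax
        ta, tf = vmax / amax, (d - vmax**2 / amax) / vmax
    T = 2 * ta + tf                           # total motion time
    if t <= 0:
        return 0.0
    if t < ta:                                # acceleration phase
        return 0.5 * amax * t**2
    if t < ta + tf:                           # constant-velocity phase
        return 0.5 * amax * ta**2 + vpeak * (t - ta)
    if t < T:                                 # deceleration phase
        return d - 0.5 * amax * (T - t)**2
    return d

ts = np.linspace(0.0, 3.0, 7)
print([round(min_time_profile(1.0, vmax=0.5, amax=1.0, t=t), 3) for t in ts])
```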
In reinforcement learning (RL), sparse rewards can present a significant challenge. Fortunately, expert actions can be utilized to overcome this issue. However, acquiring explicit expert actions can be costly, and expert observations are often more readily available. This paper presents a new approach that uses expert observations for learning in robot manipulation tasks with sparse rewards from pixel observations. In particular, our technique involves using expert observations as intermediate visual goals for a goal-conditioned RL agent, enabling it to complete a task by successively reaching a series of goals. We demonstrate the efficacy of our method in five challenging block construction tasks in simulation and show that when combined with two state-of-the-art agents, our approach can significantly improve their performance while requiring 4-20 times fewer expert actions during training. Moreover, our method is also superior to a hierarchical baseline.
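A highly simplified sketch of the goal-scheduling idea: expert observations form an ordered list of intermediate goals, and the agent advances to the next one once the current goal is (approximately) reached. The class name, distance metric, and threshold are hypothetical; in practice the comparison would be done on pixel or latent-space observations.

```python
# Hypothetical goal scheduler: expert observations as successive goals for a
# goal-conditioned agent (distance metric and threshold are placeholders).
import numpy as np

class ExpertGoalScheduler:
    def __init__(self, expert_observations, threshold=0.05):
        self.goals = [np.asarray(o, dtype=float) for o in expert_observations]
        self.threshold = threshold
        self.idx = 0

    def current_goal(self):
        return self.goals[self.idx]

    def update(self, observation):
        """Advance to the next expert goal once the current one is roughly reached."""
        d = np.linalg.norm(np.asarray(observation, dtype=float) - self.current_goal())
        if d < self.threshold and self.idx < len(self.goals) - 1:
            self.idx += 1
        return self.current_goal()

# Usage: condition the RL agent on scheduler.current_goal() and call
# scheduler.update(obs) after every environment step.
```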
Task and Motion Planning (TAMP) approaches are effective at planning long-horizon autonomous robot manipulation. However, because they require a planning model, it can be difficult to apply them to domains where the environment and its dynamics are not fully known. We propose to overcome these limitations by leveraging deep generative modeling, specifically diffusion models, to learn constraints and samplers that capture these difficult-to-engineer aspects of the planning model. These learned samplers are composed and combined within a TAMP solver to jointly find action parameter values that satisfy the constraints along a plan. To tractably make predictions for unseen objects in the environment, we define these samplers on low-dimensional learned latent embeddings of changing object state. We evaluate our approach in an articulated object manipulation domain and show how the combination of classical TAMP, generative learning, and latent embeddings enables long-horizon constraint-based reasoning.