Tactile servoing is an important technique because it enables robots to manipulate objects with precision and accuracy while adapting to changes in their environments in real time. One approach to tactile servo control with high-resolution soft tactile sensors is to estimate the contact pose relative to an object surface using a convolutional neural network (CNN) for use as a feedback signal. In this paper, we investigate how the surface pose estimation model can be extended to include shear, and utilize these combined pose-and-shear models to develop a tactile robotic system that can be programmed for diverse non-prehensile manipulation tasks, such as object tracking, surface following, single-arm object pushing, and dual-arm object pushing. In doing this, two technical challenges had to be overcome. Firstly, tactile data that includes shear-induced slippage can lead to error-prone estimates unsuitable for accurate control, so we modified the CNN into a Gaussian-density neural network and used a discriminative Bayesian filter to improve the predictions with a state dynamics model that utilizes the robot kinematics. Secondly, to achieve smooth robot motion in 3D space while interacting with objects, we used SE(3) velocity-based servo control, which required re-deriving the Bayesian filter update equations using Lie group theory, as many standard assumptions do not hold for state variables defined on non-Euclidean manifolds. Looking ahead, we believe that pose- and shear-based tactile servoing will enable many object manipulation tasks and the fully dexterous utilization of multi-fingered tactile robot hands. Video: https://www.youtube.com/watch?v=xVs4hd34ek0
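To make the filtering step concrete, here is a minimal numerical sketch of the two ingredients named above: a Gaussian-density output head and a Bayesian fusion of the CNN measurement with a kinematics-predicted prior. The function names and the elementwise-independent Gaussian assumption are illustrative choices, not the paper's implementation.

```python
import numpy as np

def gaussian_head(features, W_mu, b_mu, W_var, b_var):
    """Gaussian-density output head: map CNN features to a mean and a
    positive per-dimension variance (via softplus) over pose-and-shear."""
    mu = features @ W_mu + b_mu
    var = np.log1p(np.exp(features @ W_var + b_var))  # softplus > 0
    return mu, var

def fuse(prior_mu, prior_var, meas_mu, meas_var):
    """Fuse a kinematics-predicted prior with the CNN measurement,
    treating both as elementwise-independent Gaussians."""
    gain = prior_var / (prior_var + meas_var)   # Kalman-style gain
    post_mu = prior_mu + gain * (meas_mu - prior_mu)
    post_var = (1.0 - gain) * prior_var
    return post_mu, post_var
```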
Bayesian model averaging is a practical method for dealing with uncertainty due to model specification. Use of this technique requires the estimation of model probability weights. In this work, we revisit the derivation of estimators for these model weights. Use of the Kullback-Leibler divergence as a starting point leads naturally to a number of alternative information criteria suitable for Bayesian model weight estimation. We explore in detail three such criteria, previously known in the statistics literature: a Bayesian analogue of the Akaike information criterion, which we call the BAIC; the Bayesian predictive information criterion (BPIC); and the posterior predictive information criterion (PPIC). We compare the use of these information criteria in numerical analysis problems common in lattice field theory calculations. We find that the PPIC has the most appealing theoretical properties and can give the best performance in terms of model-averaging uncertainty, particularly in the presence of noisy data, while the BAIC is a simple and reliable alternative.
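As a concrete illustration of how such criteria feed into model averaging, the sketch below converts per-model information-criterion scores into normalized weights using the standard $\exp(-\mathrm{IC}/2)$ convention; the function name and example scores are invented for illustration.

```python
import numpy as np

def model_weights(ic_scores):
    """Convert per-model information-criterion scores (lower = better)
    into normalized model-averaging weights, pr(M | D) ~ exp(-IC / 2).
    Subtracting the minimum first keeps the exponentials stable."""
    ic = np.asarray(ic_scores, dtype=float)
    w = np.exp(-0.5 * (ic - ic.min()))
    return w / w.sum()

# Example: three candidate fit models scored by some criterion.
print(model_weights([10.2, 11.0, 15.7]))
```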
Generative diffusion models and many stochastic models in science and engineering naturally live in infinite dimensions before discretisation. To incorporate observed data for statistical and learning tasks, one needs to condition on observations. While recent work has treated conditioning linear processes in infinite dimensions, conditioning non-linear processes in infinite dimensions has not been explored. In this paper, we condition function-valued stochastic processes without prior discretisation. To do so, we use an infinite-dimensional version of Girsanov's theorem, leading to a stochastic differential equation (SDE) for the conditioned process that involves the score. We apply this technique to time-series analysis of organism shapes in evolutionary biology, where we discretise via the Fourier basis and then learn the coefficients of the score function with score-matching methods.
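For intuition, the finite-dimensional analogue of such a conditioned SDE (obtained via Doob's h-transform) shows where the score enters; this is a standard statement rather than the paper's infinite-dimensional result. For $\mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}W_t$, conditioning on $X_T = v$ adds a score-dependent drift:
\[
\mathrm{d}X_t^\star = \Big( b(X_t^\star) + \sigma(X_t^\star)\sigma(X_t^\star)^{\top}\, \nabla_x \log p_{T-t}\big(v \mid X_t^\star\big) \Big)\,\mathrm{d}t + \sigma(X_t^\star)\,\mathrm{d}W_t ,
\]
where $p_s(v \mid x)$ denotes the transition density; the intractable score term is exactly what score-matching methods are used to learn.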
Manufacturing assembly tasks can vary in complexity and level of automation. Yet achieving full automation can be challenging and inefficient, particularly due to the complexity of certain assembly operations. Human-robot collaborative work, leveraging the strengths of human labor alongside the capabilities of robots, can be a solution for enhancing efficiency. This paper introduces the CT benchmark, a model set and accompanying benchmark designed to facilitate the testing and evaluation of human-robot collaborative assembly scenarios. It was designed to compare manual and automatic processes using metrics such as assembly time and human workload. The components of the model set can be assembled through the most common assembly tasks, each with varying levels of difficulty. The CT benchmark was designed with a focus on its applicability in human-robot collaborative environments, with the aim of ensuring the reproducibility and replicability of experiments. Experiments were carried out to assess assembly performance in three different setups (manual, automatic, and collaborative), measuring metrics related to assembly time and the workload on human operators. The results suggest that the collaborative approach takes 70.8% longer than fully manual assembly. However, users reported a lower overall workload, as well as reduced mental demand, physical demand, and effort, according to the NASA-TLX questionnaire.
LiNGAM determines the variable order from cause to effect using additive noise models, but it faces challenges with confounding. Previous methods maintained LiNGAM's fundamental structure while trying to identify and address variables affected by confounding. As a result, these methods required significant computational resources regardless of whether confounding was present, and they did not guarantee the detection of all confounding types. In contrast, this paper enhances LiNGAM by introducing LiNGAM-MMI, a method that quantifies the magnitude of confounding using KL divergence and arranges the variables so as to minimize its impact. This method efficiently achieves a globally optimal variable order through a shortest-path problem formulation. LiNGAM-MMI processes data as efficiently as traditional LiNGAM in scenarios without confounding, while effectively addressing confounded scenarios. Our experimental results suggest that LiNGAM-MMI determines the correct variable order more accurately, both in the presence and absence of confounding.
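To illustrate the shortest-path formulation, the sketch below finds a globally optimal order by dynamic programming over the lattice of variable subsets: each state is the set of variables already placed, each edge appends one more variable, and `cost` is a stand-in for the KL-based confounding measure (the actual LiNGAM-MMI cost is not reproduced here).

```python
def best_order(n_vars, cost):
    """Shortest path from the empty set to the full set of variables;
    cost(placed, v) scores appending variable v after the set `placed`."""
    layer = {frozenset(): (0.0, [])}
    for _ in range(n_vars):
        nxt = {}
        for s, (c, order) in layer.items():
            for v in range(n_vars):
                if v in s:
                    continue
                s2, c2 = s | {v}, c + cost(s, v)
                if s2 not in nxt or c2 < nxt[s2][0]:
                    nxt[s2] = (c2, order + [v])
        layer = nxt
    (total, order), = layer.values()   # one state remains: the full set
    return order, total

# Toy cost function standing in for the confounding measure.
print(best_order(3, lambda placed, v: 1.0 if 0 in placed else float(v)))
```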
Network meta-analysis (NMA) is a useful tool for comparing multiple interventions simultaneously in a single analysis, and it can be very helpful for medical decision making when the aim is to find the best therapy among several active candidates. However, the validity of its results is threatened by publication bias. Existing methods for handling publication bias in standard pairwise meta-analysis are hard to extend to the NMA setting because of its more complicated data structure and the underlying assumptions for pooling the data. In this paper, we provide a flexible inverse probability weighting (IPW) framework, along with several t-type selection functions, to deal with publication bias in the NMA context. To estimate the proposed selection functions, we recommend making use of additional information on unpublished studies from multiple clinical trial registries. A comprehensive numerical study and a real example show that our methodology can yield more accurate estimates and higher coverage probabilities, and improve other properties of an NMA (e.g., ranking the interventions).
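As a rough illustration of the IPW idea (not the paper's exact estimator), each published study can be up-weighted by the inverse of an assumed selection probability, on top of the usual inverse-variance weights; the sigmoid selection function below is an invented example of a t-type selection function.

```python
import numpy as np

def ipw_pooled_effect(effects, se, publish_prob):
    """IPW-adjusted pooled effect: studies that were unlikely to be
    published (e.g. small, non-significant) count for more.
    `publish_prob` maps a study's t-statistic to P(publication)."""
    e, s = np.asarray(effects, float), np.asarray(se, float)
    t = e / s
    w = 1.0 / (publish_prob(t) * s**2)   # IPW times inverse-variance
    return np.sum(w * e) / np.sum(w)

# Illustrative selection function: larger |t| -> more likely published.
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-(np.abs(t) - 1.96)))
print(ipw_pooled_effect([0.3, 0.5, 0.1], [0.1, 0.2, 0.15], sigmoid))
```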
Diffusion models have recently emerged as a promising framework for Image Restoration (IR), owing to their ability to produce high-quality reconstructions and their compatibility with established methods. Existing methods for solving noisy inverse problems in IR consider only pixel-wise data fidelity. In this paper, we propose SaFaRI, a spatial-and-frequency-aware diffusion model for IR with Gaussian noise. Our model encourages images to preserve data fidelity in both the spatial and frequency domains, resulting in enhanced reconstruction quality. We comprehensively evaluate the performance of our model on a variety of noisy inverse problems, including inpainting, denoising, and super-resolution. Our thorough evaluation demonstrates that SaFaRI achieves state-of-the-art performance on both the ImageNet and FFHQ datasets, outperforming existing zero-shot IR methods in terms of the LPIPS and FID metrics.
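A hedged sketch of what a spatial-and-frequency-aware data-fidelity term can look like; the exact functional, weighting, and guidance mechanism in SaFaRI may differ. Note that an unweighted FFT of the residual would, by Parseval's theorem, just rescale the spatial term, so this sketch compares magnitude spectra instead.

```python
import numpy as np

def spatial_frequency_fidelity(x, y, forward_op, lam=0.5):
    """Penalize the measurement mismatch pixel-wise (spatial term) and
    between FFT magnitude spectra (frequency term). `forward_op` is the
    degradation operator A (blur, mask, downsampling, ...)."""
    residual = y - forward_op(x)
    spatial = np.sum(residual**2)
    mag_y = np.abs(np.fft.fft2(y))
    mag_ax = np.abs(np.fft.fft2(forward_op(x)))
    freq = np.sum((mag_y - mag_ax)**2)
    return spatial + lam * freq
```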
A general quantum circuit can be simulated classically in exponential time. If it has a planar layout, then a tensor-network contraction algorithm due to Markov and Shi has a runtime exponential in the square root of its size, or more generally exponential in the treewidth of the underlying graph. Separately, Gottesman and Knill showed that if all gates are restricted to be Clifford, then there is a polynomial-time simulation. We combine these two ideas and show that treewidth and planarity can be exploited to improve Clifford circuit simulation. Our main result is a classical algorithm with runtime scaling asymptotically as $n^{\omega/2}<n^{1.19}$ which samples from the output distribution obtained by measuring all $n$ qubits of a planar graph state in given Pauli bases. Here $\omega$ is the matrix multiplication exponent. We also provide a classical algorithm with the same asymptotic runtime which samples from the output distribution of any constant-depth Clifford circuit in a planar geometry. Our work improves on known classical algorithms with cubic runtime. A key ingredient is a mapping which, given a tree decomposition of some graph $G$, produces a Clifford circuit with a structure that mirrors the tree decomposition and which emulates measurement of the corresponding graph state. We provide a classical simulation of this circuit with the runtime stated above for planar graphs, and otherwise $nt^{\omega-1}$ where $t$ is the width of the tree decomposition. Our algorithm incorporates two subroutines which may be of independent interest. The first is a matrix-multiplication-time version of the Gottesman-Knill simulation of multi-qubit measurement on stabilizer states. The second is a new classical algorithm for solving symmetric linear systems over $\mathbb{F}_2$ in a planar geometry, extending previous works which applied only to non-singular linear systems in the analogous setting.
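For context on the second subroutine, the baseline below solves a (possibly singular) linear system over $\mathbb{F}_2$ by dense Gaussian elimination in cubic time; the paper's contribution is a faster algorithm exploiting the planar setting, which this sketch does not attempt.

```python
import numpy as np

def solve_gf2(A, b):
    """Return one solution of A x = b over GF(2), or None if the
    system is inconsistent. Handles singular (rank-deficient) A by
    setting free variables to zero."""
    A = np.array(A, dtype=np.uint8) % 2
    b = np.array(b, dtype=np.uint8) % 2
    n, m = A.shape
    aug = np.concatenate([A, b[:, None]], axis=1)
    row, pivots = 0, []
    for col in range(m):
        pr = next((r for r in range(row, n) if aug[r, col]), None)
        if pr is None:
            continue                      # no pivot: free variable
        aug[[row, pr]] = aug[[pr, row]]   # swap pivot row into place
        for r in range(n):
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]        # XOR-eliminate this column
        pivots.append(col)
        row += 1
    if aug[row:, m].any():
        return None                       # zero row with nonzero RHS
    x = np.zeros(m, dtype=np.uint8)
    for r, col in enumerate(pivots):
        x[col] = aug[r, m]
    return x

print(solve_gf2([[1, 1, 0], [1, 0, 1], [0, 1, 1]], [1, 1, 0]))
```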
It is well known that decision-making problems from stochastic control can be formulated by means of forward-backward stochastic differential equations (FBSDEs). Recently, Ji et al. (2022) proposed an efficient deep learning-based algorithm based on the stochastic maximum principle (SMP). In this paper, we provide a convergence result for this deep SMP-BSDE algorithm and compare its performance with other existing methods. In particular, adopting a similar strategy to that of Han and Long (2020), we derive an a posteriori error estimate and show that the total approximation error can be bounded by the value of the loss functional and the discretization error. We present numerical examples for high-dimensional stochastic control problems, covering both drift and diffusion control, which showcase superior performance compared to existing algorithms.
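For reference, the generic FBSDE system referred to above takes the textbook form
\[
\begin{aligned}
\mathrm{d}X_t &= b(t, X_t)\,\mathrm{d}t + \sigma(t, X_t)\,\mathrm{d}W_t, & X_0 &= x_0, \\
\mathrm{d}Y_t &= -f(t, X_t, Y_t, Z_t)\,\mathrm{d}t + Z_t\,\mathrm{d}W_t, & Y_T &= g(X_T),
\end{aligned}
\]
where the driver $f$ and terminal condition $g$ are supplied by the underlying control problem.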
In this work we demonstrate that SVD-based model reduction techniques known from ordinary differential equations, such as the proper orthogonal decomposition, can be extended to stochastic differential equations in order to reduce the computational cost arising from both the high dimension of the considered stochastic system and the large number of independent Monte Carlo runs. We also extend the proper symplectic decomposition method to stochastic Hamiltonian systems, both with and without external forcing, and argue that preserving the underlying symplectic or variational structures results in more accurate and stable solutions that conserve energy better than those obtained with a non-geometric approach. We validate the proposed techniques with numerical experiments for a semi-discretization of the stochastic nonlinear Schr\"odinger equation and for the Kubo oscillator.
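A minimal sketch of the POD step, assuming snapshots are gathered over time steps and Monte Carlo runs; the tolerance-based truncation rule is an illustrative choice.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Proper orthogonal decomposition: collect solution snapshots as
    columns, take an SVD, and keep the leading left singular vectors
    capturing a (1 - tol) fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

# Usage: project a high-dimensional state onto the reduced basis.
X = np.random.randn(200, 50)   # stand-in snapshot matrix (dofs x snapshots)
Ur = pod_basis(X)
z = Ur.T @ X[:, 0]             # reduced coordinates
x_approx = Ur @ z              # lift back to the full space
```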
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and such systems remain difficult to scale. In this paper we present four algorithms to address these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system according to how close to optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and was tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
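As a sketch of the exploration mechanism described above (the linear confidence-to-epsilon schedule and all names are our illustrative assumptions, not the paper's exact rule):

```python
import random

def choose_agent(q_values, confidence):
    """Epsilon-greedy subtask allocation with confidence-scaled
    exploration: the less optimal an agent believes its current
    strategy to be, the more it explores alternative allocations."""
    epsilon = 1.0 - confidence                # confidence in [0, 1]
    if random.random() < epsilon:
        return random.choice(list(q_values))  # explore a random peer
    return max(q_values, key=q_values.get)    # exploit the best known

# Usage: estimated value of delegating a subtask to each known agent.
q = {"agent_a": 0.8, "agent_b": 0.4, "agent_c": 0.6}
print(choose_agent(q, confidence=0.9))
```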