The increasing interest in autonomous robots with a high number of degrees of freedom for industrial applications and service robotics demands control algorithms that handle multiple tasks as well as hard constraints efficiently. This paper presents a general framework in which both kinematic (velocity- or acceleration-based) and dynamic (torque-based) control of redundant robots are handled in a unified fashion. The framework allows for the specification of redundancy resolution problems featuring a hierarchy of arbitrary (equality and inequality) constraints, arbitrary weighting of the control effort in the cost function, and an additional input used to optimize any remaining redundancy. To solve such problems, a generalization of the Saturation in the Null Space (SNS) algorithm is introduced, which extends the original method according to the features required by our general control framework. Variants of the developed algorithm are presented, which ensure both efficient computation and optimality of the solution. Experiments on a KUKA LBR iiwa robotic arm, as well as simulations with a highly redundant mobile manipulator, are reported.
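For orientation, the following is a minimal Python sketch of a single-task, velocity-level loop in the spirit of SNS: joint velocities that violate their bounds are saturated, and the task is re-solved over the remaining free joints. It omits the task scaling, constraint hierarchy, arbitrary weighting, and torque-level variants discussed above; all function and variable names are illustrative.

```python
import numpy as np

def sns_velocity_ik(J, xdot, dq_max, max_iter=None):
    """SNS-style velocity IK for a single task (simplified sketch).

    J      : (m, n) task Jacobian
    xdot   : (m,)   desired task velocity
    dq_max : (n,)   symmetric joint-velocity bounds
    """
    n = J.shape[1]
    W = np.eye(n)            # selects the non-saturated (free) joints
    dq_sat = np.zeros(n)     # contribution of saturated joints
    max_iter = max_iter or n
    for _ in range(max_iter):
        Jbar = J @ W
        dq = dq_sat + np.linalg.pinv(Jbar) @ (xdot - J @ dq_sat)
        over = np.abs(dq) > dq_max
        if not over.any():
            return dq                        # feasible solution found
        j = np.argmax(np.abs(dq) - dq_max)   # most violated joint
        W[j, j] = 0.0                        # remove it from the free set
        dq_sat[j] = np.sign(dq[j]) * dq_max[j]
    return np.clip(dq, -dq_max, dq_max)      # fallback: hard clipping
```

The full SNS method additionally scales the task when no feasible solution exists; that step is deliberately left out of this sketch.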
QBF solvers implementing the QCDCL paradigm are powerful algorithms that successfully tackle many computationally complex applications. However, our theoretical understanding of the strength and limitations of these QCDCL solvers is very limited. In this paper we propose to formally model QCDCL solvers as proof systems. We define different policies that can be used for decision heuristics and unit propagation and that give rise to a number of sound and complete QBF proof systems (and hence new QCDCL algorithms). With respect to the standard policies used in practical QCDCL solving, we show that the corresponding QCDCL proof system is incomparable (via exponential separations) to Q-resolution, the classical QBF resolution system used in the literature. This is in stark contrast to the propositional setting, where CDCL and resolution are known to be p-equivalent. This raises the question of which formulas are hard for standard QCDCL, since Q-resolution lower bounds do not necessarily apply to QCDCL, as we show here. In answer to this question we prove several lower bounds for QCDCL, including exponential lower bounds for a large class of random QBFs. We also introduce a strengthening of the decision heuristic used in classical QCDCL, which does not necessarily decide variables in the order of the prefix, but still allows the learning of asserting clauses. We show that with this decision policy, QCDCL can be exponentially faster on some formulas. We further exhibit a QCDCL proof system that is p-equivalent to Q-resolution. In comparison to classical QCDCL, this new QCDCL version adapts both the decision and the unit propagation policies.
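As a rough illustration of what a decision policy is (not the paper's formal definitions), the sketch below contrasts the standard level-ordered policy, which always decides the leftmost unassigned variable of the quantifier prefix, with a relaxed policy that may decide out of prefix order, e.g. by an activity score. The extra condition that keeps learned clauses asserting is not modelled; all names are illustrative.

```python
from typing import Dict, List, Optional, Tuple

# A QBF prefix as a list of (quantifier, variable) pairs, outermost first,
# e.g. [("forall", 1), ("exists", 2), ("exists", 3)].
Prefix = List[Tuple[str, int]]
Assignment = Dict[int, bool]

def level_ordered_decision(prefix: Prefix, assignment: Assignment) -> Optional[int]:
    """Standard QCDCL policy: decide the leftmost unassigned prefix variable."""
    for _, var in prefix:
        if var not in assignment:
            return var
    return None

def any_order_decision(prefix: Prefix, assignment: Assignment,
                       score: Dict[int, float]) -> Optional[int]:
    """Relaxed policy: decide any unassigned variable, e.g. by activity score.
    (The strengthened heuristic in the paper additionally restricts choices so
    that learned clauses remain asserting; that check is not modelled here.)"""
    unassigned = [v for _, v in prefix if v not in assignment]
    return max(unassigned, key=lambda v: score.get(v, 0.0)) if unassigned else None
```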
Although recent years have witnessed remarkable progress in single image super-resolution (SISR) with the rapid development of deep neural networks (DNNs), deep learning methods face computation and memory consumption issues in practice, especially on resource-limited platforms such as mobile devices. To overcome this challenge and facilitate real-time deployment of SISR tasks on mobile devices, we combine neural architecture search with pruning search and propose an automatic search framework that derives sparse super-resolution (SR) models with high image quality while satisfying the real-time inference requirement. To decrease the search cost, we leverage a weight sharing strategy by introducing a supernet and decouple the search problem into three stages: supernet construction, compiler-aware architecture and pruning search, and compiler-aware pruning ratio search. With the proposed framework, we are the first to achieve real-time SR inference (with only tens of milliseconds per frame) for 720p resolution with competitive image quality (in terms of PSNR and SSIM) on mobile platforms (Samsung Galaxy S20).
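As a toy illustration of the constraint structure in the pruning-ratio search stage (quality maximized subject to an on-device latency budget), here is a random-search sketch. The actual framework performs compiler-aware search over a supernet rather than this naive sampling, and all callbacks and names are assumptions.

```python
import random

def search_pruning_ratios(layers, candidate_ratios, latency_budget_ms,
                          measure_latency, measure_psnr, n_trials=100):
    """Toy random search over per-layer pruning ratios (illustrative only).

    layers           : list of layer names in the SR backbone
    candidate_ratios : e.g. [0.0, 0.3, 0.5, 0.7]
    measure_latency  : callback returning on-device latency for a config (ms)
    measure_psnr     : callback returning validation PSNR for a config
    """
    best_cfg, best_psnr = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: random.choice(candidate_ratios) for name in layers}
        if measure_latency(cfg) > latency_budget_ms:
            continue                      # violates the real-time constraint
        psnr = measure_psnr(cfg)
        if psnr > best_psnr:
            best_cfg, best_psnr = cfg, psnr
    return best_cfg, best_psnr
```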
In building practical applications of evolutionary computation (EC), two optimizations are essential. First, the parameters of the search method need to be tuned to the domain in order to balance exploration and exploitation effectively. Second, the search method needs to be distributed to take advantage of parallel computing resources. This paper presents BLADE (BLAnket Distributed Evolution) as an approach to achieving both goals simultaneously. BLADE uses blankets (i.e., masks on the genetic representation) to tune the evolutionary operators during the search, and implements the search through hub-and-spoke distribution. In the paper, (1) the blanket method is formalized for the (1 + 1)EA case as a Markov chain process. Its effectiveness is then demonstrated by analyzing dominant and subdominant eigenvalues of stochastic matrices, suggesting a generalizable theory; (2) the fitness-level theory is used to analyze the distribution method; and (3) these insights are verified experimentally on three benchmark problems, showing that both blankets and distribution lead to accelerated evolution. Moreover, a surprising synergy emerges between them: When combined with distribution, the blanket approach achieves more than $n$-fold speedup with $n$ clients in some cases. The work thus highlights the importance and potential of optimizing evolutionary computation in practical applications.
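A minimal sketch of the blanket idea for the (1+1) EA case analyzed above: a blanket is treated here as a fixed 0/1 mask that restricts which genes may mutate. BLADE additionally tunes blankets during the search and distributes evaluation through a hub-and-spoke architecture, neither of which is modelled below; names are illustrative.

```python
import random

def one_plus_one_ea_with_blanket(fitness, n_bits, blanket, n_gens=1000, seed=0):
    """(1+1) EA on bit strings where a 'blanket' masks which genes may mutate.

    fitness : callable mapping a bit list to a number (higher is better)
    blanket : list of 0/1 flags; genes with blanket[i] == 0 are frozen
    """
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    f_parent = fitness(parent)
    p_mut = 1.0 / max(1, sum(blanket))      # standard 1/n rate over free genes
    for _ in range(n_gens):
        child = [
            (1 - g) if blanket[i] and rng.random() < p_mut else g
            for i, g in enumerate(parent)
        ]
        f_child = fitness(child)
        if f_child >= f_parent:             # (1+1) elitist acceptance
            parent, f_parent = child, f_child
    return parent, f_parent

# Example: OneMax with a blanket that frees only the first half of the genome.
best, score = one_plus_one_ea_with_blanket(sum, 20, [1] * 10 + [0] * 10)
```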
In this work we address the problem of finding feasible policies for Constrained Markov Decision Processes under probability-one constraints. We argue that stationary policies are not sufficient for solving this problem, and that a rich class of policies can be found by endowing the controller with a scalar quantity, a so-called budget, that tracks how close the agent is to violating the constraint. We show that the minimal budget required to act safely can be obtained as the smallest fixed point of a Bellman-like operator, whose convergence properties we analyze. We also show how to learn this quantity when the true kernel of the Markov decision process is not known, and we provide sample-complexity bounds. The utility of knowing this minimal budget lies in that it can aid the search for optimal or near-optimal policies by shrinking the region of the state space the agent must navigate. Simulations illustrate the different nature of probability-one constraints compared with the typically used constraints in expectation.
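Since the minimal budget is characterized as the smallest fixed point of a Bellman-like operator, one standard way to approximate it numerically is Kleene-style iteration from the zero element, assuming the operator is monotone and suitably continuous. The sketch below shows only this generic iteration; the concrete operator, its domain, and the conditions under which the iteration converges are those analyzed in the paper, not assumptions verified here.

```python
import numpy as np

def smallest_fixed_point(T, shape, tol=1e-8, max_iter=10_000):
    """Iterate a monotone operator T from the zero element to approximate its
    smallest fixed point (Kleene-style iteration). Here T is any callable
    mapping an array of the given shape to an array of the same shape."""
    x = np.zeros(shape)
    for _ in range(max_iter):
        x_next = T(x)
        if np.max(np.abs(x_next - x)) < tol:
            return x_next
        x = x_next
    return x
```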
We present a novel framework based on semi-bounded spatial operators for analyzing and discretizing initial boundary value problems on moving and deforming domains. This development extends an existing framework for well-posed problems and energy stable discretizations from stationary domains to the general case including arbitrary mesh motion. In particular, we show that an energy estimate derived in the physical coordinate system is equivalent to a semi-bounded property with respect to a stationary reference domain. The continuous analysis leading up to this result is based on a skew-symmetric splitting of the material time derivative, and thus relies on the property of integration-by-parts. Following this, a mimetic energy stable arbitrary Lagrangian-Eulerian framework for semi-discretization is formulated, based on approximating the material time derivative in a way consistent with discrete summation-by-parts. Thanks to the semi-bounded property, a method-of-lines approach using standard explicit or implicit time integration schemes can be applied to march the system forward in time. The same type of stability arguments applies as for the corresponding stationary domain problem, without regard to additional properties such as discrete geometric conservation. As an additional bonus, we demonstrate that discrete geometric conservation, in the sense of exact free-stream preservation, can still be achieved in an automatic way with the new framework. However, we stress that this is not necessary for stability.
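For concreteness, the identity below is the canonical skew-symmetric splitting of a transport term with a generic advection (or mesh) velocity $a$, which is the kind of splitting applied here to the material time derivative; the precise ALE form and boundary treatment are those of the paper, not reproduced here.

```latex
% Illustrative only: canonical skew-symmetric splitting of a transport term,
% assuming a smooth scalar u and velocity field a.
\[
  a \cdot \nabla u
  \;=\;
  \tfrac{1}{2}\, a \cdot \nabla u
  \;+\; \tfrac{1}{2}\, \nabla \cdot (a\, u)
  \;-\; \tfrac{1}{2}\, (\nabla \cdot a)\, u .
\]
% Integration by parts (or discrete summation-by-parts) applied to the first
% two terms cancels their volume contributions in the energy estimate,
% leaving only boundary terms and the zeroth-order term.
```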
Electrical substations are becoming more prone to cyber-attacks due to increasing digitalization. Prevailing defense measures based on cyber rules are often inadequate to detect attacks that use legitimate-looking measurements. In this work, we design and implement a bad data detection solution for electrical substations called ResiGate, which effectively combines a physics-based approach and a machine-learning-based approach to provide substantial speed-up in high-throughput substation communication scenarios while maintaining high detection accuracy and confidence. While many existing physics-based schemes are designed for deployment in control centers (due to their high computational requirements), ResiGate is designed as a security appliance that can be deployed on low-cost industrial computers at the edge of the smart grid, so that it can detect local substation-level attacks in a timely manner. A key challenge here is to continuously run the computationally demanding physics-based analysis on the measurement data frequently transmitted in a typical substation. To provide high throughput without sacrificing accuracy, ResiGate uses machine learning to effectively filter out most of the non-suspicious (normal) data, thereby reducing the overall computational load and allowing efficient performance even with a high volume of network traffic. We implement ResiGate on a low-cost industrial computer and our experiments confirm that ResiGate can detect attacks with zero error while sustaining a high throughput.
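The overall screening structure can be sketched as a two-stage pipeline: a cheap machine-learned filter passes only suspicious samples on to the expensive physics-based consistency check. The component internals, names, and interfaces below are illustrative assumptions, not ResiGate's actual implementation.

```python
def two_stage_bad_data_screen(measurements, ml_filter, physics_check):
    """Two-stage screening in the spirit described in the abstract (sketch).

    ml_filter     : fast classifier returning True if a sample looks suspicious
    physics_check : expensive physics-based consistency test (e.g. a state-
                    estimation residual test) returning True if the data is bad
    """
    alarms = []
    for m in measurements:
        if not ml_filter(m):
            continue            # most normal traffic is dropped cheaply here
        if physics_check(m):    # costly analysis runs only on suspicious data
            alarms.append(m)
    return alarms
```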
Safety is crucial for robotic missions within an uncertain environment. Common safety requirements such as collision avoidance are only state-dependent, which can be restrictive for complex missions. In this work, we address a more general formulation as safe-return constraints, which require the existence of a return policy to drive the system back to a set of safe states with high probability. The robot motion is modeled as a Markov Decision Process (MDP) with probabilistic labels, which can be highly non-ergodic. The robotic task is specified as Linear Temporal Logic (LTL) formulas over these labels, such as surveillance and transportation. We first provide theoretical guarantees on the re-formulation of such safe-return constraints, and a baseline solution based on computing two complete product automata. Furthermore, to tackle the computational complexity, we propose a hierarchical planning algorithm that combines feature-based symbolic and temporal abstraction with constrained optimization. It simultaneously synthesizes two dependent motion policies: the outbound policy minimizes the overall cost of satisfying the task with a high probability, while the return policy ensures the safe-return constraints. The problem formulation is versatile regarding the robot model, task specifications and safety constraints. The proposed hierarchical algorithm is more efficient and can solve much larger problems than the baseline solution, with only a slight loss of optimality. Numerical validations include simulations and hardware experiments of a search-and-rescue mission and a planetary exploration mission over various system sizes.
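The baseline solution relies on product automata between the labelled MDP and automata derived from the LTL task. As background, the sketch below shows the generic construction of one such product (the hierarchical algorithm above avoids building these complete products); all names and signatures are illustrative.

```python
from itertools import product as cartesian

def build_product_mdp(mdp_states, mdp_actions, P, label, aut_states, delta):
    """Toy product between a labelled MDP and a task automaton (sketch).

    P     : dict mapping (s, a) to a dict {s_next: probability}
    label : maps an MDP state to the set of atomic propositions it satisfies
    delta : automaton transition function delta(q, label_set) -> q_next
    """
    prod_states = list(cartesian(mdp_states, aut_states))
    prod_P = {}
    for (s, q) in prod_states:
        for a in mdp_actions:
            succ = {}
            for s_next, p in P.get((s, a), {}).items():
                q_next = delta(q, label(s_next))
                succ[(s_next, q_next)] = succ.get((s_next, q_next), 0.0) + p
            if succ:
                prod_P[((s, q), a)] = succ
    return prod_states, prod_P
```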
Motion artifact reduction is one of the major concerns in magnetic resonance imaging (MRI). In recent years, deep learning-based methods have been widely investigated for artifact reduction tasks in MRI. As a retrospective processing approach, a neural network does not cost additional acquisition time or require new acquisition equipment, and tends to work better than traditional artifact reduction methods. In previous studies, training such models required paired motion-corrupted and motion-free MR images. However, it is extremely difficult or even impossible to obtain such pairs in reality, because patients can hardly maintain the same state during two acquisitions, which makes training in a supervised manner impractical. In this paper, we propose a new unsupervised abnormality extraction network (UNAEN) to alleviate this problem. Our network realizes the transition from the artifact domain to the motion-free domain by processing the abnormal information introduced by artifacts in unpaired MR images. Instead of directly generating artifact-reduced results from motion-corrupted MR images, we adopt an abnormality extraction strategy that indirectly corrects the impact of artifacts in MR images by learning deep features. Experimental results show that our method is superior to state-of-the-art networks and can potentially be applied in real clinical settings.
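A minimal PyTorch sketch of the abnormality-extraction idea (predict the artifact component and subtract it from the corrupted image) is shown below. It is only a stand-in: the actual UNAEN architecture and its unsupervised training losses on unpaired data are not reproduced here, and all names are illustrative.

```python
import torch
import torch.nn as nn

class ArtifactExtractor(nn.Module):
    """Tiny stand-in for an abnormality-extraction network: it predicts the
    artifact component, which is then subtracted from the corrupted image."""

    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, corrupted):
        artifact = self.net(corrupted)          # estimated abnormal component
        return corrupted - artifact, artifact   # artifact-reduced image, artifact map

# Usage on a dummy batch of single-channel MR slices.
model = ArtifactExtractor()
clean_est, artifact_map = model(torch.randn(2, 1, 64, 64))
```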
This paper presents an analysis of the fundamental limits on energy efficiency in both digital and analog in-memory computing architectures, and compares their performance to single instruction, single data (scalar) machines, specifically in the context of machine inference. The focus of the analysis is on how efficiency scales with the size, arithmetic intensity, and bit precision of the computation to be performed. It is shown that analog, in-memory computing architectures can approach arbitrarily high energy efficiency as both the problem size and the processor size scale.
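To make the scaling intuition concrete, here is a toy back-of-the-envelope model (an assumption for illustration, not the paper's analysis): each operation pays a fixed compute cost plus a memory-access cost amortized over the arithmetic intensity, so the energy per operation falls as the problem grows and data reuse increases.

```python
def energy_per_op_pJ(e_mac_pJ, e_access_pJ, arithmetic_intensity):
    """Toy energy model (an assumption, not the paper's): each operation pays a
    MAC cost plus a memory-access cost amortized over the number of operations
    performed per operand fetched (the arithmetic intensity)."""
    return e_mac_pJ + e_access_pJ / arithmetic_intensity

# Example with illustrative numbers: a 1 pJ MAC and a 100 pJ access.
# Raising the arithmetic intensity from 1 to 100 ops per fetch moves the
# per-op cost from 101 pJ to 2 pJ.
low  = energy_per_op_pJ(1.0, 100.0, 1)     # 101.0 pJ/op
high = energy_per_op_pJ(1.0, 100.0, 100)   #   2.0 pJ/op
```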