Soft robotic manipulators are attractive for a range of applications, such as medical interventions or industrial inspections in confined environments. A myriad of soft robotic manipulators have been proposed in the literature, but their designs tend to be similar and generally offer relatively low force. This limits the payload they can carry and therefore their usability. No comparison of the forces of the different designs under a common framework is available, and the designs differ in diameter and features, which makes them hard to compare. In this paper, we present the design of a soft robotic manipulator that is optimised to maximise its force while respecting typical application constraints such as size, workspace, payload capability, and maximum pressure. The design presented here has the advantage that it morphs towards an optimal shape as it is pressurised to move in different directions, which leads to a higher lateral force. The robot is designed using a set of principles and can therefore be adapted to other applications. We also present a non-dimensional analysis for soft robotic manipulators and apply it to compare the performance of the proposed design with other designs in the literature. We show that our design achieves a higher force than other designs in the same category, and experimental results confirm this.
It is now well documented that genetic covariance between functionally related traits leads to an uneven distribution of genetic variation across multivariate trait combinations, and possibly to a large part of phenotype space being inaccessible to evolution. How the size of this nearly-null genetic space translates to the broader phenome level is unknown. High-dimensional phenotype data to address these questions are now within reach; however, incorporating these data into genetic analyses remains a challenge. Multi-trait genetic analyses of more than a handful of traits are slow and often fail to converge when fitted with REML. This makes it challenging to estimate the genetic covariance matrix ($\mathbf{G}$) underlying thousands of traits, let alone study its properties. We present a previously proposed REML algorithm that is feasible for high-dimensional genetic studies in the specific setting of a balanced nested half-sib design, common in quantitative genetics. We show that it substantially outperforms other common approaches when the number of traits is large, and we use it to investigate the bias in estimated eigenvalues of $\mathbf{G}$ and the size of the nearly-null genetic subspace. We show that the high-dimensional biases observed are qualitatively similar to those substantiated by asymptotic approximation in the simpler setting of a sample covariance matrix based on i.i.d. vector observations, and that interpreting the estimated size of the nearly-null genetic subspace requires considerable caution in high-dimensional studies of genetic variation. Our results provide the foundation for future research characterizing the asymptotic approximation of estimated genetic eigenvalues, and for a statistical null distribution for phenome-wide studies of genetic variation.
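To make the balanced half-sib setting concrete, the sketch below simulates a simplified one-way (sires only, rather than fully nested) design and applies the classical ANOVA (method-of-moments) estimator of $\mathbf{G}$, not the REML algorithm presented here; the design sizes, the diagonal $\mathbf{G}$ with a built-in nearly-null subspace, and the residual structure are illustrative assumptions. It only shows how an estimate of $\mathbf{G}$ and its eigenvalues are obtained and why the leading estimated eigenvalues can be inflated when the number of traits is large.

```python
import numpy as np

rng = np.random.default_rng(1)
s, n, p = 100, 10, 20          # sires, offspring per sire, traits (illustrative sizes)

# Simulate a balanced paternal half-sib design with a known diagonal G whose
# trailing eigenvalues are (nearly) zero, i.e. a built-in nearly-null subspace.
G_true = np.diag(np.linspace(2.0, 0.0, p))
E = np.eye(p)                                              # residual (environmental) covariance
sire_effects = rng.multivariate_normal(np.zeros(p), G_true / 4, size=s)
Y = sire_effects[:, None, :] + rng.multivariate_normal(
    np.zeros(p), E + 3 * G_true / 4, size=(s, n))          # within-sire covariance = E + 3G/4

# ANOVA (method-of-moments) estimator of the sire variance component.
sire_means = Y.mean(axis=1)                                # (s, p)
MS_between = n * np.cov(sire_means, rowvar=False)          # between-sire mean squares
resid      = Y - sire_means[:, None, :]
MS_within  = np.einsum("sij,sik->jk", resid, resid) / (s * (n - 1))
Sigma_sire = (MS_between - MS_within) / n
G_hat      = 4 * Sigma_sire                                # half-sibs share 1/4 of additive variance

eigvals = np.linalg.eigvalsh(G_hat)[::-1]
print(eigvals[:5])   # leading estimated genetic eigenvalues, typically inflated for large p
```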
This paper reviews various Evolutionary Approaches applied to the domain of Evolutionary Robotics (ER) with the intention of resolving difficult problems in the areas of robotic design and control. Evolutionary Robotics is a fast-growing field that has attracted substantial research attention in recent years. The paper thus collates recent findings along with some anticipated applications. The reviewed literature is organized systematically to give a categorical overview of recent developments and is presented in tabulated form for quick reference. We discuss the outstanding opportunities and challenges in robotics from an ER perspective, in the belief that these can be addressed in the near future through the application of evolutionary approaches. The primary objective of this study is to explore the applicability of Evolutionary Approaches in robotic application development. We believe that this study will enable researchers to utilize Evolutionary Approaches to solve complex outstanding problems in robotics.
In this paper we present a trade study-based method to optimize the architecture of ReachBot, a new robotic concept that uses deployable booms as prismatic joints for mobility in environments with adverse gravity conditions and challenging terrain. Specifically, we introduce a design process wherein we analyze the compatibility of ReachBot's design with its mission. We incorporate terrain parameters and mission requirements to produce a final design optimized for mission-specific objectives. ReachBot's design parameters include (1) number of booms, (2) positions and orientations of the booms on ReachBot's chassis, (3) boom maximum extension, (4) boom cross-sectional geometry, and (5) number of active/passive degrees of freedom at each joint. Using first-order approximations, we analyze the relationships between these parameters and various performance metrics including stability, manipulability, and mechanical interference. We apply our method to a mission where ReachBot navigates and gathers data from a Martian lava tube. The resulting design is shown in Fig. 1.
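As a rough illustration of the trade-study idea only, the sketch below scores a small grid of candidate designs with a weighted sum of placeholder metrics; the stability, manipulability, and interference models, the parameter ranges, and the weights are all hypothetical stand-ins rather than the first-order models used in the paper.

```python
from itertools import product

def score(num_booms, max_extension_m, dof_per_joint, weights=(0.5, 0.3, 0.2)):
    """Hypothetical first-order scoring of a ReachBot design candidate (placeholders only)."""
    stability      = min(num_booms / 8.0, 1.0)               # more booms -> larger grasp polytope
    manipulability = min(max_extension_m / 10.0, 1.0) * dof_per_joint / 3.0
    interference   = 1.0 - 0.05 * num_booms * dof_per_joint  # crowding penalty on the chassis
    return sum(w * m for w, m in zip(weights, (stability, manipulability, interference)))

candidates = product(range(3, 9),          # number of booms
                     (5.0, 10.0, 15.0),    # boom maximum extension [m]
                     (1, 2, 3))            # active degrees of freedom per boom joint
best = max(candidates, key=lambda c: score(*c))
print("best candidate (booms, extension [m], DoF per joint):", best)
```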
Resource-constrained project scheduling is an important combinatorial optimisation problem with many practical applications. With complex requirements such as precedence constraints, limited resources, and finance-based objectives, finding optimal solutions for large problem instances is very challenging even with well-customised meta-heuristics and matheuristics. To address this challenge, we propose a new matheuristic algorithm based on Merge Search and parallel computing to solve the resource-constrained project scheduling problem with the aim of maximising the net present value. Merge Search is a variable partitioning and merging mechanism that formulates restricted mixed integer programs with the aim of improving an existing pool of solutions. The solution pool is obtained via a customised parallel ant colony optimisation algorithm, which is also capable of generating high-quality solutions on its own. The experimental results show that the proposed method outperforms the current state-of-the-art algorithms on known benchmark problem instances. Further analyses also demonstrate that the proposed algorithm is substantially more efficient than its counterparts with respect to convergence when multiple cores are used.
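The core variable-merging mechanism can be sketched independently of the scheduling formulation and of any particular MIP solver. In the snippet below, binary variables whose values agree across every solution in a (hypothetical) pool are merged into aggregate decision variables for a restricted MIP; the actual Merge Search partitioning and the ACO-generated solution pool used here are more elaborate.

```python
import numpy as np

def merge_variables(solution_pool):
    """Merge-Search-style variable partitioning (sketch).

    solution_pool is an (n_solutions, n_vars) binary array drawn from a pool of
    good heuristic solutions (in the paper, produced by parallel ACO). Variables
    whose value pattern is identical across the pool are merged into a single
    aggregate variable; the restricted MIP then decides one value per group,
    shrinking the search space while still containing every pooled solution."""
    groups = {}
    for var_index, column in enumerate(solution_pool.T):
        groups.setdefault(tuple(column), []).append(var_index)
    return list(groups.values())

# Toy pool: 3 heuristic solutions over 6 binary variables.
pool = np.array([[1, 1, 0, 0, 1, 0],
                 [1, 1, 0, 1, 1, 0],
                 [1, 1, 0, 1, 0, 0]])
print(merge_variables(pool))
# -> [[0, 1], [2, 5], [3], [4]]: a restricted MIP over 4 aggregate variables
```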
Two-stage randomized experiments are becoming an increasingly popular experimental design for causal inference when the outcome of one unit may be affected by the treatment assignments of other units in the same cluster. In this paper, we provide a methodological framework for general tools of statistical inference and power analysis for two-stage randomized experiments. Under the randomization-based framework, we consider the estimation of a new direct effect of interest as well as the average direct and spillover effects studied in the literature. We provide unbiased estimators of these causal quantities and their conservative variance estimators in a general setting. Using these results, we then develop hypothesis testing procedures and derive sample size formulas. We theoretically compare the two-stage randomized design with the completely randomized and cluster randomized designs, which represent two limiting designs. Finally, we conduct simulation studies to evaluate the empirical performance of our sample size formulas. For empirical illustration, the proposed methodology is applied to the randomized evaluation of the Indian national health insurance program. An open-source software package is available for implementing the proposed methodology.
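For concreteness, the average direct and spillover effects discussed above can be estimated with cluster-level difference-in-means under the two-stage design, averaging within clusters first and then across clusters assigned to the same treatment saturation. The sketch below follows this standard construction from the literature rather than this paper's exact estimators (and omits the variance estimators and sample size formulas); the column names and toy data are hypothetical.

```python
import pandas as pd

def cluster_level_mean(df, z, saturation):
    """Mean outcome among units with own treatment z, averaged first within each
    cluster and then across clusters assigned to the given saturation."""
    sub = df[(df["saturation"] == saturation) & (df["z"] == z)]
    return sub.groupby("cluster")["y"].mean().mean()

def direct_effect(df, saturation):
    """Average direct effect of own treatment at a fixed cluster saturation."""
    return cluster_level_mean(df, 1, saturation) - cluster_level_mean(df, 0, saturation)

def spillover_effect(df, z, sat_high, sat_low):
    """Average spillover effect of the cluster saturation on units with own treatment z."""
    return cluster_level_mean(df, z, sat_high) - cluster_level_mean(df, z, sat_low)

# Toy data: cluster id, cluster-level treatment saturation, unit treatment, outcome.
df = pd.DataFrame({
    "cluster":    [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "saturation": ["high"] * 6 + ["low"] * 6,
    "z":          [1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    "y":          [5, 6, 3, 7, 4, 4, 4, 2, 3, 5, 2, 1],
})
print(direct_effect(df, "high"), spillover_effect(df, 0, "high", "low"))
```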
Real-time scheduling theory assists developers of embedded systems in verifying that the timing constraints required by critical software tasks can be feasibly met on a given hardware platform. Fundamental problems in the theory are often formulated as search problems for fixed points of functions and are solved by fixed-point iterations. These fixed-point methods are used widely because they are simple to understand, simple to implement, and seem to work well in practice. These fundamental problems can also be formulated as integer programs and solved with algorithms based on, among other techniques, the theories of linear programming and cutting planes. However, such algorithms are harder to understand and implement than fixed-point iterations. In this research, we show that ideas like linear programming duality and cutting planes can be used to develop algorithms that are as easy to implement as existing fixed-point iteration schemes but have better convergence properties. We evaluate the algorithms on synthetically generated problem instances and demonstrate that the new algorithms are faster than the existing ones.
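A canonical instance of such a fixed-point formulation is worst-case response-time analysis for fixed-priority preemptive tasks, where $R_i = C_i + \sum_{j \in hp(i)} \lceil R_i / T_j \rceil C_j$. The sketch below shows only the standard iteration (not the duality- or cutting-plane-based algorithms developed in this work), with an illustrative task set and implicit deadlines assumed.

```python
import math

def response_time(C, T, i, max_iters=10_000):
    """Classic fixed-point iteration for the worst-case response time of task i
    under fixed-priority preemptive scheduling. C[j] and T[j] are the WCET and
    period of task j, and tasks 0..i-1 have higher priority than task i.
    Returns None if no fixed point is found within the period (implicit deadline)."""
    R = C[i]                          # initial guess: the task's own execution time
    for _ in range(max_iters):
        interference = sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        R_next = C[i] + interference
        if R_next == R:               # fixed point reached
            return R
        if R_next > T[i]:             # deadline exceeded: task i is unschedulable
            return None
        R = R_next
    return None

# Three tasks with rate-monotonic priorities (shorter period = higher priority).
C = [1, 2, 3]
T = [4, 6, 12]
print([response_time(C, T, i) for i in range(3)])   # -> [1, 3, 10]
```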
Bayesian optimal experimental design is a sub-field of statistics focused on developing methods to make efficient use of experimental resources. Any potential design is evaluated in terms of a utility function, such as the theoretically well-justified expected information gain (EIG); unfortunately, under most circumstances the EIG is intractable to evaluate. In this work we build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the EIG. Past work focused on learning a new variational model from scratch for each new design considered. Here we present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs. To further improve computational efficiency, we also propose to train the variational model on a significantly cheaper-to-evaluate lower bound, and show empirically that the resulting model provides an excellent guide for more accurate, but more expensive-to-evaluate, bounds on the EIG. We demonstrate the effectiveness of our technique on generalized linear models, a class of statistical models that is widely used in the analysis of controlled experiments. Experiments show that our method greatly improves accuracy over existing approximation strategies and achieves these results with far better sample efficiency.
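To make the quantities involved concrete, the sketch below evaluates, for a toy one-parameter logistic GLM, both a brute-force Monte Carlo estimate of the EIG and the standard posterior (Barber-Agakov) lower bound with a moment-matched Gaussian variational family. It is not the amortized neural architecture proposed here; the model, prior, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def simulate(design, n):
    """Draw (theta, y) from a toy GLM: theta ~ N(0,1), y ~ Bernoulli(sigmoid(theta * design))."""
    theta = rng.standard_normal(n)
    y = rng.random(n) < sigmoid(theta * design)
    return theta, y

def eig_monte_carlo(design, n=200_000):
    """Brute-force Monte Carlo EIG: E[log p(y|theta,d) - log p(y|d)]."""
    theta, y = simulate(design, n)
    p = sigmoid(theta * design)
    log_lik = np.where(y, np.log(p), np.log1p(-p))
    marg_p1 = sigmoid(rng.standard_normal(n) * design).mean()    # p(y=1 | design) under the prior
    log_marg = np.where(y, np.log(marg_p1), np.log1p(-marg_p1))
    return np.mean(log_lik - log_marg)

def eig_lower_bound(design, n=200_000):
    """Barber-Agakov posterior lower bound: E[log q(theta|y,d)] + H[p(theta)],
    with a Gaussian q per outcome fitted by moment matching."""
    theta, y = simulate(design, n)
    expected_log_q = 0.0
    for outcome in (False, True):
        t = theta[y == outcome]
        if len(t) == 0:
            continue
        mu, sd = t.mean(), t.std() + 1e-12
        log_q = -0.5 * np.log(2 * np.pi * sd**2) - (t - mu) ** 2 / (2 * sd**2)
        expected_log_q += log_q.mean() * (len(t) / n)
    return expected_log_q + 0.5 * np.log(2 * np.pi * np.e)       # + entropy of the N(0,1) prior

for d in (0.5, 1.0, 2.0, 5.0):
    print(d, round(eig_monte_carlo(d), 3), round(eig_lower_bound(d), 3))
```

The gap between the two numbers is the slack of the simple Gaussian variational family; richer variational models tighten the bound, which is the role played by the neural architecture proposed in the paper.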
The broad adoption of the Internet of Things during the last decade has widened the application horizons of distributed sensor networks, ranging from smart home appliances to automation and remote sensing. Typically, these distributed systems are composed of several nodes attached to sensing devices linked by a heterogeneous communication network. The unreliable nature of these systems (e.g., devices might run out of energy or communications might become unavailable) drives practitioners to implement heavyweight fault-tolerance mechanisms to identify untrustworthy nodes that misbehave erratically and, thus, to ensure that the sensed data from the IoT domain are correct. The resulting overhead in the communication network degrades the overall system, especially in scenarios with limited available bandwidth that are exposed to harsh conditions. The Quantum Internet might be a promising alternative: a quantum consensus layer could minimize traffic congestion and avoid the reliability degradation caused by link saturation. In this regard, the purpose of this paper is to explore and simulate the use of a quantum consensus architecture in one of the most challenging natural environments in the world in which researchers need a responsive sensor network: the remote sensing of permafrost in Antarctica. More specifically, this paper 1) describes the use case of permafrost remote sensing in Antarctica, 2) proposes the usage of a quantum consensus management plane to reduce the traffic overhead associated with fault tolerance protocols, and 3) discusses, by means of simulation, possible improvements to increase the trustworthiness of a holistic telemetry system by exploiting the complexity reduction offered by quantum parallelism. Collected insights from this research can be generalized to current and forthcoming IoT environments.
A well-known challenge in beamforming is how to optimally utilize the degrees of freedom (DoF) of the array to design a robust beamformer, especially when the array DoF is limited. In this paper, we leverage the tools of constrained convex optimization and propose a penalized inequality-constrained minimum variance (P-ICMV) beamformer to address this challenge. Specifically, a well-targeted objective function and inequality constraints are proposed to achieve the design goals. By penalizing the maximum gain of the beamformer in any interfering direction, the total interference power can be efficiently mitigated with limited DoF. Multiple robust constraints on target protection and interference suppression can be introduced to increase the robustness of the beamformer against steering vector mismatch. By integrating noise reduction, interference suppression, and target protection, the proposed formulation can efficiently obtain a robust beamformer design while optimally trading off the various design goals. To numerically solve this problem, we formulate the P-ICMV beamformer design as a convex second-order cone program (SOCP) and propose a low-complexity iterative algorithm based on the alternating direction method of multipliers (ADMM). Three applications are simulated to demonstrate the effectiveness of the proposed beamformer.
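The flavour of the design problem can be illustrated with a generic inequality-constrained minimum-variance design posed as a convex program. The sketch below is not the exact P-ICMV formulation and uses an off-the-shelf conic solver via cvxpy rather than the proposed ADMM algorithm; the array geometry, directions, penalty weight, and bounds are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

M = 8                                     # sensors in a half-wavelength uniform linear array
def steer(theta_deg):
    """Far-field ULA steering vector for a plane wave arriving from theta_deg."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

a_target = steer(0.0)                                # look (target) direction
A_interf = np.stack([steer(40.0), steer(-30.0)])     # assumed interfering directions

w = cp.Variable(M, complex=True)
noise_power = cp.sum_squares(w)                      # white noise assumed; in general use a Cholesky
                                                     # factor L of the noise covariance: sum_squares(L.conj().T @ w)
worst_interference_gain = cp.max(cp.abs(A_interf.conj() @ w))

objective = cp.Minimize(noise_power + 10.0 * worst_interference_gain)  # penalty weight is illustrative
constraints = [cp.abs(a_target.conj() @ w - 1) <= 1e-3,    # (near) distortionless response to the target
               cp.abs(A_interf.conj() @ w) <= 0.05]        # explicit caps on interference gain
cp.Problem(objective, constraints).solve()

print("gain at target:     ", np.abs(a_target.conj() @ w.value))
print("gain at interferers:", np.abs(A_interf.conj() @ w.value))
```

Robustness to steering-vector mismatch would be added in the same spirit, by imposing the gain constraints over a small set of perturbed steering vectors around each nominal direction.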
Graphs drawn in the plane are ubiquitous, arising from data sets through a variety of methods ranging from GIS analysis to image classification to shape analysis. A fundamental problem for this type of data is comparison: given a set of such graphs, can we rank how similar they are, in such a way that we capture their geometric "shape" in the plane? In this paper we explore a method to compare two such embedded graphs via a simplified combinatorial representation called a tail-less merge tree, which encodes the graph's structure with respect to a fixed direction. First, we examine the properties of the branching distance, a distance designed to compare merge trees, and show that as defined in previous work it fails to satisfy some of the requirements of a metric. We then incorporate it into a new distance function, the average branching distance, which compares graphs by aggregating the branching distance between merge trees defined over many directions. Despite the theoretical issues, we show that the definition is still quite useful in practice by using our open-source code to cluster data sets of embedded graphs.
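The per-direction construction underlying both the merge tree and the branching distance can be sketched with a union-find sweep over vertex heights. The snippet below records the component-merge (birth, death) pairs of a direction's height function on a toy embedded path graph; it is a simplification for illustration, not the tail-less merge tree data structure used here nor the branching or average branching distance themselves.

```python
import numpy as np

def merge_events(points, edges, direction):
    """Sweep the height function h(v) = <v, direction> over an embedded graph and
    record (birth, death) pairs of sublevel-set components with a union-find.
    These pairs summarise the branch structure of the merge tree in that direction."""
    h = points @ direction
    parent = list(range(len(points)))
    birth, pairs, seen = {}, [], set()

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x

    adjacency = {v: [] for v in range(len(points))}
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)

    for v in np.argsort(h):                      # process vertices from lowest to highest
        birth[v] = h[v]
        seen.add(v)
        for u in adjacency[v]:
            if u in seen:
                ru, rv = find(u), find(v)
                if ru != rv:                     # two components merge at height h[v];
                    young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
                    pairs.append((float(birth[young]), float(h[v])))  # younger branch dies (elder rule)
                    parent[young] = old
    return [p for p in pairs if p[1] > p[0]]     # drop zero-persistence pairs from non-minima

# Toy W-shaped path graph in the plane, swept in the +y direction.
pts = np.array([[0, 2], [1, 0], [2, 1.5], [3, 0.5], [4, 2]], dtype=float)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(merge_events(pts, edges, np.array([0.0, 1.0])))
# -> [(0.5, 1.5)]: the branch born at the shallower valley dies where the two valleys merge;
# the component born at the global minimum never dies and corresponds to the essential branch.
```

An average-branching-distance-style comparison would repeat this sweep over many directions for two graphs and aggregate a per-direction tree dissimilarity.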