A crossover trial is an efficient trial design when there is no carry-over effect. To reduce the impact of a biological carry-over effect, a washout period is often included. However, the carry-over effect remains an outstanding concern when a washout period is unethical or cannot sufficiently diminish its impact. The latter can occur in comparative effectiveness research, where the carry-over effect is often behavioral rather than biological. In this paper, we investigate the crossover design under a potential outcomes framework with and without the carry-over effect. We find that when the carry-over effect exists and satisfies a sign condition, the basic estimator underestimates the treatment effect, which does not inflate the type I error of one-sided tests but negatively impacts power. This leads to a power trade-off between the crossover design and the parallel-group design, and we derive the condition under which the crossover design does not inflate the type I error and is still more powerful than the parallel-group design. We also develop covariate adjustment methods for crossover trials. We evaluate the performance of the crossover design and covariate adjustment using data from the MTN-034/REACH study.
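To make the direction of the bias concrete, below is a minimal simulation sketch of a standard 2x2 (AB/BA) crossover with an additive carry-over term; the parameter values, variable names, and the specific outcome model are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, gamma = 500, 1.0, 0.3   # subjects per sequence, true effect, non-negative carry-over

def crossover_estimate(carry):
    # Sequence AB: treatment in period 1, control in period 2 (carry-over leaks into period 2)
    y_ab_1 = tau + rng.normal(size=n)
    y_ab_2 = carry + rng.normal(size=n)          # control response inflated by carry-over
    # Sequence BA: control in period 1, treatment in period 2
    y_ba_1 = rng.normal(size=n)
    y_ba_2 = tau + rng.normal(size=n)
    # Basic crossover estimator: average of the two within-sequence period contrasts
    return 0.5 * ((y_ab_1 - y_ab_2).mean() + (y_ba_2 - y_ba_1).mean())

print("no carry-over:  ", round(crossover_estimate(0.0), 3))    # ~ tau
print("with carry-over:", round(crossover_estimate(gamma), 3))  # ~ tau - gamma/2: underestimates tau
```

Under this toy model the estimator's expectation is tau - gamma/2, so a carry-over of the assumed sign shrinks the estimate toward zero, which illustrates why one-sided type I error is preserved while power suffers.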
Take a multiplicative monoid of sequences in which the multiplication is given by the Hadamard product. The set of linear combinations of interleaved monoid elements then yields a ring. We consider such a construction for the monoid of hypergeometric sequences, yielding what we call the ring of hypergeometric-type sequences -- a subring of the ring of holonomic sequences. We present two algorithms in this setting: one for computing holonomic recurrence equations from hypergeometric-type normal forms and the other for finding products of hypergeometric-type terms. These are newly implemented commands in our Maple package $\texttt{HyperTypeSeq}$, which we also describe.
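As a small illustration of the underlying objects (not using the authors' Maple package), the sketch below takes two hypergeometric sequences, forms their Hadamard (term-wise) product, and checks numerically that the product again satisfies a holonomic recurrence; the chosen sequences are an assumption for illustration.

```python
from fractions import Fraction
from math import comb, factorial

# Two hypergeometric sequences: a(n) = 2^n / n!,  b(n) = C(2n, n)
a = lambda n: Fraction(2**n, factorial(n))
b = lambda n: Fraction(comb(2 * n, n))

# Hadamard (term-wise) product c(n) = a(n) * b(n) is again hypergeometric:
# c(n+1)/c(n) = (2/(n+1)) * (2(2n+1)/(n+1)) = 4(2n+1)/(n+1)^2,
# so it satisfies the holonomic recurrence (n+1)^2 c(n+1) - 4(2n+1) c(n) = 0.
c = lambda n: a(n) * b(n)
assert all((n + 1)**2 * c(n + 1) - 4 * (2 * n + 1) * c(n) == 0 for n in range(20))
print("recurrence verified for n = 0..19")
```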
Though diffusion models have been successfully applied to various image restoration (IR) tasks, their performance is sensitive to the choice of training datasets. Typically, diffusion models trained on specific datasets fail to recover images with out-of-distribution degradations. To address this problem, this work leverages a capable vision-language model and a synthetic degradation pipeline to learn image restoration in the wild (wild IR). More specifically, all low-quality images are simulated with a synthetic degradation pipeline that combines multiple common degradations such as blur, resizing, noise, and JPEG compression. We then introduce robust training for a degradation-aware CLIP model to extract enriched image content features that assist high-quality image restoration. Our base diffusion model is the image restoration SDE (IR-SDE). Building on it, we further present a posterior sampling strategy for fast noise-free image generation. We evaluate our model on both synthetic and real-world degradation datasets. Moreover, experiments on the unified image restoration task illustrate that the proposed posterior sampling improves image generation quality for various degradations.
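For intuition, here is a minimal sketch of the kind of synthetic degradation pipeline described above; the ordering of operations, parameter ranges, and helper name are illustrative assumptions rather than the authors' exact pipeline, and an RGB input image is assumed.

```python
import io
import random
import numpy as np
from PIL import Image, ImageFilter

def degrade(img: Image.Image) -> Image.Image:
    """Apply a random chain of common degradations: blur, resize, noise, JPEG."""
    # Gaussian blur
    img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 3.0)))
    # Down- and up-resizing
    w, h = img.size
    s = random.uniform(0.25, 0.75)
    img = img.resize((int(w * s), int(h * s)), Image.BICUBIC).resize((w, h), Image.BICUBIC)
    # Additive Gaussian noise
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0, random.uniform(2, 15), arr.shape)
    img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    # JPEG compression
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.randint(20, 70))
    buf.seek(0)
    return Image.open(buf)
```

Pairs of clean images and their degraded counterparts produced this way can then serve as training data for the restoration model.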
Robot swarms can effectively serve a variety of sensing and inspection applications. Certain inspection tasks require a binary classification decision. This work presents an experimental setup for a surface inspection task based on vibration sensing and studies a Bayesian two-outcome decision-making algorithm in a swarm of miniaturized wheeled robots. The robots are tasked with individually inspecting and collectively classifying a 1 m x 1 m tiled surface consisting of vibrating and non-vibrating tiles according to the majority tile type. The robots sense vibrations using onboard IMUs and perform collision avoidance using a set of IR sensors. We develop a simulation and optimization framework leveraging the Webots robotic simulator and a Particle Swarm Optimization (PSO) method. We consider two existing information-sharing strategies and propose a new one that allows the swarm to rapidly reach accurate classification decisions. We first find optimal parameters that allow efficient sampling in simulation and then evaluate our proposed strategy against the two existing ones using 100 randomized simulations and 10 real experiments. We find that our proposed method drives the swarm to make decisions at an accelerated rate, with an improvement of up to 20.52% in mean decision time at only a 0.78% loss in accuracy.
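The following sketch shows one common form such a Bayesian two-outcome decision rule can take for a single robot: a Beta-Bernoulli posterior over the fraction of vibrating tiles and a credibility threshold for committing to a decision. The prior, threshold, and function names are assumptions for illustration, not the paper's exact algorithm or information-sharing strategy.

```python
from scipy.stats import beta

def update_and_decide(vibrating, total, p_c=0.95, prior=(1, 1)):
    """Beta-Bernoulli posterior over the fraction f of vibrating tiles sampled so far.
    Commit once P(f > 0.5) or P(f < 0.5) exceeds the credibility threshold p_c."""
    a = prior[0] + vibrating
    b = prior[1] + (total - vibrating)
    p_majority_vibrating = 1.0 - beta.cdf(0.5, a, b)
    if p_majority_vibrating > p_c:
        return "vibrating majority"
    if p_majority_vibrating < 1 - p_c:
        return "non-vibrating majority"
    return None  # keep sampling (and fusing observations shared by neighbours)

print(update_and_decide(vibrating=14, total=16))
```

Information sharing then amounts to folding neighbours' tile observations (or their decisions) into the same posterior, which is what lets the swarm converge faster than independent robots.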
The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.
Plane arrangements are a useful tool for surface and volume modelling. However, their main drawback is poor scalability. We introduce two key novelties that enable the construction of plane arrangements for complex objects and entire scenes: an ordering scheme for plane insertion and the direct use of input points during arrangement construction. Both ingredients reduce the number of unwanted splits, improving the scalability of the construction mechanism by up to two orders of magnitude compared to existing algorithms. We further introduce a remeshing and simplification technique that allows us to extract low-polygon surface meshes and lightweight convex decompositions of volumes from the arrangement. We show that our approach leads to state-of-the-art results on the aforementioned tasks by comparing it to learning-based and traditional approaches on a variety of datasets. Our implementation is available at //github.com/raphaelsulzer/compod.
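The abstract describes the insertion ordering only at a high level; as an illustrative assumption, the sketch below orders candidate planes by decreasing point support, a natural proxy for an ordering that commits large, well-supported splits first and thereby limits unnecessary cell subdivisions. The plane representation and tolerance are assumptions, not the authors' implementation.

```python
import numpy as np

def order_planes(planes, points, tol=0.01):
    """Sort candidate planes by decreasing point support (an illustrative
    proxy for an insertion order that limits unwanted cell splits).
    planes: list of (unit_normal, offset) with plane equation n . x + d = 0
    points: (N, 3) array of input points
    """
    def support(plane):
        n, d = plane
        dist = np.abs(points @ n + d)
        return np.count_nonzero(dist < tol)
    return sorted(planes, key=support, reverse=True)
```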
The need for compliant and proprioceptive actuators has grown more evident in the pursuit of more adaptable and versatile robotic systems. Hydraulically Amplified Self-Healing Electrostatic (HASEL) actuators offer distinctive advantages with their inherent softness and flexibility, making them promising candidates for various robotic tasks, including delicate interactions with humans and animals, biomimetic locomotion, prosthetics, and exoskeletons. This has resulted in growing interest in the capacitive self-sensing capabilities of HASEL actuators to create miniature displacement-estimation circuitry that does not require external sensors. However, achieving HASEL self-sensing at actuation frequencies above 1 Hz and with miniature high-voltage power supplies has remained limited. In this paper, we introduce the F-HASEL actuator, which adds an electrode pair used exclusively for capacitive sensing to a Peano-HASEL actuator. We demonstrate displacement estimation of the F-HASEL during high-frequency actuation up to 20 Hz and during external loading, using miniaturized circuitry composed of low-cost off-the-shelf components and a miniature high-voltage power supply. Finally, we propose circuitry to estimate the displacement of multiple F-HASELs and demonstrate it in a wearable application that tracks the joint rotations of a virtual reality user in real time.
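Capacitive self-sensing of this kind typically reduces to mapping the measured capacitance of the sensing electrode pair to a displacement through a calibration curve. The sketch below shows that mapping with a simple polynomial fit; the calibration values, units, and function names are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical calibration pairs: sensing-electrode capacitance (pF) vs. displacement (mm).
cap_pf  = np.array([12.0, 13.5, 15.2, 17.1, 19.3, 21.8])
disp_mm = np.array([0.0,  1.0,  2.0,  3.0,  4.0,  5.0])

# Fit a low-order polynomial capacitance -> displacement as the self-sensing map.
coeffs = np.polyfit(cap_pf, disp_mm, deg=2)

def estimate_displacement(c_measured_pf: float) -> float:
    """Map a capacitance reading from the sensing electrode pair to a displacement estimate."""
    return float(np.polyval(coeffs, c_measured_pf))

print(estimate_displacement(16.0))  # interpolated displacement estimate in mm
```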
Clustering, or unsupervised classification, is a task often plagued by outliers, yet there is a paucity of work on handling outliers in clustering. Outlier identification algorithms tend to fall into three broad categories: outlier inclusion, outlier trimming, and \textit{post hoc} outlier identification methods, with the former two often requiring pre-specification of the number of outliers. In this work, the fact that the sample squared Mahalanobis distance is beta-distributed is used to derive an approximate distribution for the log-likelihoods of subset finite Gaussian mixture models. An algorithm is then proposed that iteratively removes the least plausible points, deemed outliers, according to the subset log-likelihoods, until the subset log-likelihoods adhere to the reference distribution. This results in a trimming method, called OCLUST, which inherently estimates the number of outliers.
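The following sketch conveys the shape of such a trimming loop: fit a Gaussian mixture, drop the least plausible point, and stop once the per-point log-likelihoods stop deviating from a reference distribution. The stopping test here is a crude stand-in (a normality test on standardized log-likelihoods); OCLUST derives the exact beta-based reference, so this is an illustrative assumption rather than the paper's procedure.

```python
import numpy as np
from scipy.stats import kstest
from sklearn.mixture import GaussianMixture

def trim_outliers(X, n_components=2, alpha=0.05, max_trim=None):
    """Illustrative trimming loop: repeatedly remove the least plausible point
    until the per-point log-likelihoods adhere to a reference distribution."""
    X = np.asarray(X)
    max_trim = max_trim or len(X) // 4
    outliers = []
    for _ in range(max_trim):
        gmm = GaussianMixture(n_components=n_components).fit(X)
        ll = gmm.score_samples(X)                          # per-point log-likelihoods
        _, p = kstest((ll - ll.mean()) / ll.std(), "norm") # crude adherence proxy
        if p > alpha:                                      # subset adheres: stop trimming
            break
        worst = np.argmin(ll)                              # least plausible point
        outliers.append(X[worst])
        X = np.delete(X, worst, axis=0)
    return X, np.array(outliers)
```

Because trimming stops when the reference distribution is matched, the number of removed points is an estimate of the number of outliers rather than a user-specified input.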
Question generation aims to automatically generate questions based on a given input provided as context. A controllable question generation scheme focuses on generating questions with specific attributes, allowing finer control. In this study, we propose a few-shot prompting strategy for controlling the generation of question-answer pairs from children's narrative texts. We aim to control two attributes: the question's explicitness and its underlying narrative elements. Through empirical evaluation, we show the effectiveness of controlling the generation process by employing few-shot prompting alongside a reference model. Our experiments highlight instances where the few-shot strategy surpasses the reference model, particularly in semantic closeness evaluation and in the diversity and coherency of question-answer pairs. However, these improvements are not always statistically significant. The code is publicly available at github.com/bernardoleite/few-shot-prompting-qg-control.
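For readers unfamiliar with attribute-controlled few-shot prompting, the sketch below assembles a prompt that conditions generation on the two control attributes mentioned above. The template layout, attribute labels, and field names are illustrative assumptions; they are not the authors' prompts (see the linked repository for those).

```python
def build_prompt(story, explicitness, narrative_element, examples):
    """Assemble a few-shot prompt that conditions question-answer generation
    on two control attributes: explicitness and narrative element."""
    shots = "\n\n".join(
        f"Story: {ex['story']}\n"
        f"Attributes: explicitness={ex['explicitness']}, "
        f"narrative_element={ex['narrative_element']}\n"
        f"Question: {ex['question']}\nAnswer: {ex['answer']}"
        for ex in examples
    )
    return (
        f"{shots}\n\n"
        f"Story: {story}\n"
        f"Attributes: explicitness={explicitness}, "
        f"narrative_element={narrative_element}\n"
        f"Question:"
    )
```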
Aerial vehicles equipped with manipulators can serve contact-based industrial applications, where fundamental tasks like drilling and grinding often require aerial platforms to handle heavy tools. Industrial environments also often involve non-horizontal surfaces. Existing aerial manipulation platforms based on multirotors typically feature a fixed CoM (Center of Mass) within the rotor-defined area, leading to a considerable moment arm between the EE (End-Effector) tip and the CoM for operations on such surfaces. Carrying heavy tools at the EE tip with an extended moment arm can lead to system instability and potential damage to the servo actuators in the manipulator. To tackle this issue, we present a novel aerial vehicle tailored for handling heavy tools on non-horizontal surfaces. In this work, we provide the platform's system design, modeling, and control strategies. The platform can carry heavy manipulators within the rotor-defined area during free flight. During interactions, the manipulator can shift towards the work surface, outside the rotor-defined area, resulting in a displaced CoM with a significantly shorter moment arm. Furthermore, we propose a method for automatically determining the manipulator position that achieves the maximum CoM displacement towards the work surface. Our proposed concepts are validated through simulations that closely capture the developed physical prototype of the platform.
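A quick one-dimensional sketch of why shifting the manipulator shortens the moment arm: the combined CoM moves toward the EE tip as the manipulator mass is displaced along the surface-normal axis. All masses, positions, and names below are hypothetical numbers for illustration, not the platform's parameters.

```python
# Hypothetical masses (kg) and positions (m) along the surface-normal axis,
# measured from the geometric centre of the rotor-defined area.
m_base, m_manip, m_tool = 3.0, 1.2, 0.8

def moment_arm(x_manip, x_tip):
    """Moment arm between the EE tip and the combined CoM for a given
    manipulator position x_manip (tool mass assumed at the EE tip x_tip)."""
    x_com = (m_base * 0.0 + m_manip * x_manip + m_tool * x_tip) / (m_base + m_manip + m_tool)
    return x_tip - x_com

# Retracted (free flight) vs. shifted towards the work surface (interaction).
print(moment_arm(x_manip=0.05, x_tip=0.45))   # ~0.366 m: tool far from the CoM
print(moment_arm(x_manip=0.35, x_tip=0.45))   # ~0.294 m: displaced CoM, shorter arm
```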
The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled considerable research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
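To illustrate the general idea of attention-weighted aggregation over an entity's neighborhood triples (a sketch of the concept, with all shapes, parameter names, and the scoring form assumed for illustration rather than reproducing the paper's architecture):

```python
import torch
import torch.nn.functional as F

def neighbourhood_attention(h_entity, h_neigh_ent, h_neigh_rel, W, a):
    """Aggregate an entity's neighbourhood with attention, where each neighbour
    is represented by the concatenated embeddings of the central entity, the
    neighbouring entity, and the connecting relation.
    h_entity:    (d,)         central entity embedding
    h_neigh_ent: (k, d)       neighbouring entity embeddings
    h_neigh_rel: (k, d)       corresponding relation embeddings
    W:           (d_out, 3d)  linear map over each concatenated triple
    a:           (d_out,)     attention vector
    """
    k = h_neigh_ent.shape[0]
    triples = torch.cat(
        [h_entity.expand(k, -1), h_neigh_ent, h_neigh_rel], dim=-1)   # (k, 3d)
    c = triples @ W.T                                                  # (k, d_out) triple features
    scores = F.leaky_relu(c @ a)                                       # (k,) unnormalized attention
    alpha = torch.softmax(scores, dim=0)                               # attention weights
    return (alpha.unsqueeze(-1) * c).sum(dim=0)                        # aggregated entity embedding
```

The returned vector replaces the entity's embedding with one that blends entity and relation information from its local neighbourhood, which is the property the abstract argues independent-triple embeddings lack.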