Over the last decade, the development of unmanned aerial vehicles (UAVs) has advanced greatly, and they are now used in many fields, including surveillance, crisis management, and automated mission planning. This last field involves searching for plans for missions with multiple tasks, UAVs, and ground control stations, while optimizing several objectives such as makespan, fuel consumption, and cost. In this work, the problem is solved using a multi-objective evolutionary algorithm combined with a constraint satisfaction problem model, which is used in the fitness function of the algorithm. The algorithm has been tested on several missions of increasing complexity, and the computational complexity contributed by the different elements considered in the missions has been studied.
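As a rough illustration of how a constraint model can act as the fitness function of such an algorithm, the sketch below (with hypothetical task and UAV attributes, not the paper's mission model) rejects assignments that violate fuel or flight-time limits and otherwise returns a (makespan, fuel, cost) objective vector for the evolutionary search to minimise.

```python
# Minimal sketch: constraint-checked multi-objective fitness for task-to-UAV assignments.
from dataclasses import dataclass

@dataclass
class Task:
    duration: float   # hours needed to perform the task
    fuel: float       # fuel units consumed by the task
    cost: float       # monetary cost of the task

@dataclass
class UAV:
    fuel_capacity: float
    max_flight_time: float

INFEASIBLE = (float("inf"), float("inf"), float("inf"))

def fitness(assignment, tasks, uavs):
    """assignment[i] = index of the UAV that performs task i."""
    load_time = [0.0] * len(uavs)
    load_fuel = [0.0] * len(uavs)
    total_cost = 0.0
    for task_id, uav_id in enumerate(assignment):
        t = tasks[task_id]
        load_time[uav_id] += t.duration
        load_fuel[uav_id] += t.fuel
        total_cost += t.cost
    # Constraint-satisfaction step: reject plans violating UAV limits.
    for uav_id, uav in enumerate(uavs):
        if load_fuel[uav_id] > uav.fuel_capacity or load_time[uav_id] > uav.max_flight_time:
            return INFEASIBLE
    makespan = max(load_time)      # latest finishing UAV
    total_fuel = sum(load_fuel)    # overall fuel consumption
    return (makespan, total_fuel, total_cost)

# Example: two UAVs, three tasks, candidate assignment [0, 1, 1].
tasks = [Task(2.0, 5.0, 10.0), Task(1.5, 3.0, 8.0), Task(1.0, 2.0, 5.0)]
uavs = [UAV(fuel_capacity=10.0, max_flight_time=4.0),
        UAV(fuel_capacity=6.0, max_flight_time=3.0)]
print(fitness([0, 1, 1], tasks, uavs))   # (2.5, 10.0, 23.0)
```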
We explore the Ziv-Lempel and Crochemore factorizations of some classical automatic sequences, making extensive use of the theorem prover Walnut.
Utilizing non-concurrent controls in the analysis of late-entering experimental arms in platform trials has recently received considerable attention, at both the academic and regulatory levels. While incorporating these data can lead to increased power and lower required sample sizes, it might also introduce bias into the effect estimators if temporal drifts are present in the trial. Aiming to mitigate the potential calendar time bias, we propose various frequentist model-based approaches that leverage the non-concurrent control data while adjusting for time trends. One of the currently available frequentist models incorporates time as a categorical fixed effect, separating the duration of the trial into periods, defined as time intervals bounded by any treatment arm entering or leaving the platform. In this work, we propose two extensions of this model. First, we consider an alternative definition of the time covariate by dividing the trial into fixed-length calendar time intervals. Second, we propose alternative methods to adjust for time trends. In particular, we investigate adjusting for autocorrelated random effects to account for dependency between closer time intervals, and employing spline regression to model time with a smooth polynomial function. We evaluate the performance of the proposed approaches in a simulation study and illustrate their use by means of a case study.
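The following sketch (simulated data and hypothetical column names, using statsmodels/patsy) illustrates three of the frequentist time adjustments discussed above: period as a categorical fixed effect, fixed-length calendar-time bins, and a smooth spline of calendar time; the autocorrelated random-effects variant is omitted here.

```python
# Illustrative comparison of time-trend adjustments on simulated trial data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "time": rng.uniform(0, 24, n),            # calendar time in months
    "treatment": rng.integers(0, 2, n),       # 0 = control, 1 = experimental arm
})
# Linear time trend plus a treatment effect on a continuous outcome.
df["y"] = 0.1 * df["time"] + 0.5 * df["treatment"] + rng.normal(0, 1, n)

# (a) Period as a categorical fixed effect (here a single hypothetical split at month 12).
df["period"] = (df["time"] >= 12).astype(int)
m_period = smf.ols("y ~ treatment + C(period)", data=df).fit()

# (b) Fixed-length calendar-time intervals (3-month bins) as the time covariate.
df["cal_bin"] = pd.cut(df["time"], bins=np.arange(0, 27, 3), include_lowest=True).cat.codes
m_calendar = smf.ols("y ~ treatment + C(cal_bin)", data=df).fit()

# (c) Smooth adjustment: B-spline basis for calendar time via patsy's bs().
m_spline = smf.ols("y ~ treatment + bs(time, df=4)", data=df).fit()

for name, m in [("period", m_period), ("calendar bins", m_calendar), ("spline", m_spline)]:
    print(f"{name:13s} treatment estimate: {m.params['treatment']:.3f}")
```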
Printing custom DNA sequences is essential to scientific and biomedical research, but the technology can be used to manufacture plagues as well as cures. Just as ink printers recognize and reject attempts to counterfeit money, DNA synthesizers and assemblers should deny unauthorized requests to make viral DNA that could be used to ignite a pandemic. There are three complications. First, we don't need to quickly update printers to deal with newly discovered currencies, whereas we regularly learn of new viruses and other biological threats. Second, anti-counterfeiting specifications on a local printer can't be extracted and misused by malicious actors, unlike information on biological threats. Finally, any screening must keep the inspected DNA sequences private, as they may constitute valuable trade secrets. Here we describe SecureDNA, a free, privacy-preserving, and fully automated system capable of verifiably screening all DNA synthesis orders of 30+ base pairs against an up-to-date database of hazards, and its operational performance and specificity when applied to 67 million base pairs of DNA synthesized by providers in the United States, Europe, and China.
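To convey only the basic screening idea, the toy sketch below checks every 30-base-pair window of an order against a hazard index; it is not the SecureDNA protocol, which additionally keeps both the hazard database and the inspected sequences private through cryptographic means, and the sequences here are made up.

```python
# Toy windowed screening of synthesis orders against a hazard set.
WINDOW = 30

def build_hazard_index(hazard_sequences, window=WINDOW):
    """Collect every length-`window` subsequence of the hazard set."""
    index = set()
    for seq in hazard_sequences:
        for i in range(len(seq) - window + 1):
            index.add(seq[i:i + window])
    return index

def order_is_flagged(order, hazard_index, window=WINDOW):
    """Flag an order if any of its length-`window` windows hits the index."""
    return any(order[i:i + window] in hazard_index
               for i in range(len(order) - window + 1))

# Hypothetical example.
hazards = ["ACGT" * 20]                                # stand-in "hazard" sequence
index = build_hazard_index(hazards)
print(order_is_flagged("TTTT" + "ACGT" * 10, index))   # True: contains a hazard window
print(order_is_flagged("TTTT" * 10, index))            # False
```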
This paper presents a method for future motion prediction in multi-agent systems that incorporates group formation information and future intent. Groups are formed using a physics-based clustering method that follows the agglomerative hierarchical clustering algorithm. We identify clusters using the minimum cost-to-go function of a relevant optimal control problem as the clustering metric between agents, under the assumption that groups with similar associated costs are likely to move together. The cost metric accounts for proximity to other agents as well as the intended goal of each agent. An unscented Kalman filter based approach is used to update the established clusters and to add new clusters as new information is obtained. Our approach is verified through non-trivial numerical simulations that apply the proposed algorithm to datasets covering a variety of scenarios and agents.
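A simplified sketch of the clustering step is given below: a surrogate pairwise cost combining proximity and goal mismatch stands in for the minimum cost-to-go of the underlying optimal control problem and drives standard agglomerative hierarchical clustering; the unscented Kalman filter update of the clusters is omitted.

```python
# Agglomerative clustering of agents on a surrogate cost-to-go metric.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
positions = rng.uniform(0, 10, size=(6, 2))      # 6 agents in the plane
goals = np.array([[0, 0]] * 3 + [[10, 10]] * 3)  # first three share a goal

def pair_cost(i, j):
    """Hypothetical surrogate for cost-to-go similarity between agents i and j."""
    proximity = np.linalg.norm(positions[i] - positions[j])
    goal_mismatch = np.linalg.norm(goals[i] - goals[j])
    return proximity + 0.5 * goal_mismatch

n = len(positions)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = pair_cost(i, j)

# Groups whose members have low mutual costs are merged first.
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # agents that are close and share a goal tend to end up together
```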
We propose a material design method based on gradient-based optimization of compositions, overcoming the limitations of traditional approaches: exhaustive database searches and conditional generation models. It optimizes the inputs via backpropagation, aligning the model's output closely with the target property and facilitating the discovery of unlisted materials and precise property determination. Our method is also capable of adaptive optimization under new conditions without retraining. Applying it to the search for high-Tc superconductors, we identified potential compositions beyond existing databases and discovered new hydrogen superconductors via conditional optimization. The method is versatile and significantly advances material design by enabling efficient, extensive searches and adaptability to new constraints.
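The sketch below illustrates the general mechanism with a frozen stand-in predictor (not the paper's trained model): the composition vector is optimised by backpropagation so that the predicted property approaches a target, with a softmax keeping the composition on the simplex.

```python
# Gradient-based optimisation of a composition input against a frozen surrogate predictor.
import torch

torch.manual_seed(0)
n_elements = 10

# Stand-in for a trained property predictor (e.g. a Tc regressor); weights are frozen.
predictor = torch.nn.Sequential(
    torch.nn.Linear(n_elements, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
for p in predictor.parameters():
    p.requires_grad_(False)

target_property = torch.tensor([1.5])

# Optimise unconstrained logits; softmax maps them to element fractions summing to 1.
logits = torch.zeros(n_elements, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(500):
    composition = torch.softmax(logits, dim=0)
    pred = predictor(composition)
    loss = (pred - target_property).pow(2).mean()
    opt.zero_grad()
    loss.backward()          # gradients flow to the *input*, not to the frozen weights
    opt.step()

final = torch.softmax(logits, dim=0).detach()
print("final composition:", final.numpy().round(3))
print("predicted property:", predictor(final).item())
```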
We present the design and implementation of a tool for semi-automatic verification of functional specifications of operating system modules. Such verification tasks are traditionally done in interactive theorem provers, where the functionalities of the module are specified at abstract and concrete levels using data types such as structures, algebraic datatypes, arrays, and maps. In this work, we provide encodings to SMT for these commonly occurring data types. This allows verification conditions to be reduced into a form suitable for SMT solvers. The use of SMT solvers combined with a tactic language allows semi-automatic verification of the specification. We apply the tool to verify functional specifications for key parts of the uC-OS/II operating system, based on earlier work that gave a full verification of the system in Coq. We demonstrate a large reduction in the amount of human effort due to the increased level of automation.
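The sketch below shows the kind of SMT encoding involved, using the Z3 Python API (package `z3-solver`) and a hypothetical priority-table operation rather than an actual uC-OS/II verification condition: the map-typed state is encoded with the array theory, and the functional specification is discharged by refuting its negation.

```python
# Encoding a map-typed state update and its functional spec as an SMT query.
from z3 import Array, IntSort, Ints, Store, Select, Implies, And, Not, Solver, unsat

# Abstract state: a priority table modelled as a map from task id to priority.
prio = Array("prio", IntSort(), IntSort())
tid, other, new_prio = Ints("tid other new_prio")

# Concrete-level operation: set_priority(tid, new_prio).
prio_after = Store(prio, tid, new_prio)

# Functional specification: the updated entry holds new_prio, all other entries unchanged.
spec = And(Select(prio_after, tid) == new_prio,
           Implies(other != tid, Select(prio_after, other) == Select(prio, other)))

# Verification condition: the spec holds in all states; check by refuting its negation.
s = Solver()
s.add(Not(spec))
result = s.check()
print("spec verified" if result == unsat else s.model())
```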
Ever since the seminal work of R. A. Fisher and F. Yates, factorial designs have been an important experimental tool to simultaneously estimate the effects of multiple treatment factors. In factorial designs, the number of treatment combinations grows exponentially with the number of treatment factors, which motivates the forward selection strategy based on the sparsity, hierarchy, and heredity principles for factorial effects. Although this strategy is intuitive and has been widely used in practice, its rigorous statistical theory has not been formally established. To fill this gap, we establish design-based theory for forward factor selection in factorial designs based on the potential outcome framework. We not only prove a consistency property for the factor selection procedure but also discuss statistical inference after factor selection. In particular, with selection consistency, we quantify the advantages of forward selection based on asymptotic efficiency gain in estimating factorial effects. With inconsistent selection in higher-order interactions, we propose two strategies and investigate their impact on subsequent inference. Our formulation differs from the existing literature on variable selection and post-selection inference because our theory is based solely on the physical randomization of the factorial design and does not rely on a correctly specified outcome model.
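As a toy illustration of forward selection under strong heredity (simulated data and a simple |t|-threshold rule, not the paper's design-based estimators), the sketch below selects main effects first and then considers only interactions among the factors already selected.

```python
# Forward selection of factorial effects under strong heredity on a replicated 2^3 design.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
factors = ["A", "B", "C"]
levels = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 treatment combinations
design = np.repeat(levels, 10, axis=0)                          # 10 replicates each
A, B, C = design.T
y = 2.0 * A + 1.5 * B + 1.0 * A * B + rng.normal(0, 1, len(design))  # true model

columns = {"A": A, "B": B, "C": C}
selected = []

# Step 1: forward-select main effects whose |t|-statistic exceeds a threshold.
for name in factors:
    X = sm.add_constant(np.column_stack([columns[n] for n in selected + [name]]))
    t = sm.OLS(y, X).fit().tvalues[-1]
    if abs(t) > 2.0:
        selected.append(name)

# Step 2: heredity -- only interactions among already-selected factors are candidates.
for f1, f2 in itertools.combinations(selected, 2):
    columns[f1 + f2] = columns[f1] * columns[f2]
    X = sm.add_constant(np.column_stack([columns[n] for n in selected + [f1 + f2]]))
    t = sm.OLS(y, X).fit().tvalues[-1]
    if abs(t) > 2.0:
        selected.append(f1 + f2)

print("selected effects:", selected)    # typically ['A', 'B', 'AB']
```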
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
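A much-simplified sketch of the learning idea (one agent, one state, not the paper's four algorithms) is given below: allocation choices are learned by a Q-learning style update, and the exploration rate shrinks as the gap between the best and the average neighbour grows.

```python
# One agent learning which neighbour to allocate subtasks to, with adaptive exploration.
import random

class AllocatorAgent:
    def __init__(self, neighbours, alpha=0.1):
        self.q = {n: 0.0 for n in neighbours}   # single-state Q-value per neighbour
        self.alpha = alpha
        self.epsilon = 1.0                       # start fully exploratory

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))   # explore
        return max(self.q, key=self.q.get)       # exploit current strategy

    def update(self, neighbour, reward):
        # Q-learning style update (single state, so no bootstrap term is needed).
        self.q[neighbour] += self.alpha * (reward - self.q[neighbour])
        # Confidence heuristic: the larger the gap between the best and the average
        # neighbour, the less the agent explores (never below a small floor).
        values = sorted(self.q.values(), reverse=True)
        gap = values[0] - sum(values) / len(values)
        self.epsilon = max(0.05, 1.0 - 2.0 * gap)

# Hypothetical environment: neighbour "b" completes subtasks most reliably.
true_success = {"a": 0.4, "b": 0.9, "c": 0.6}
agent = AllocatorAgent(list(true_success))
for _ in range(2000):
    n = agent.choose()
    reward = 1.0 if random.random() < true_success[n] else 0.0
    agent.update(n, reward)
print(agent.q, round(agent.epsilon, 2))   # "b" has the highest Q; exploration reduced from 1.0
```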
In recent years, object detection has made impressive progress. Despite these improvements, there is still a significant performance gap between the detection of small and large objects. We analyze a current state-of-the-art model, Mask R-CNN, on a challenging dataset, MS COCO. We show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture that this is due to two factors: (1) only a few images contain small objects, and (2) small objects do not appear often enough even within the images that contain them. We therefore propose to oversample images with small objects and to augment each of those images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects against that on small objects. We evaluate different pasting augmentation strategies and ultimately achieve a 9.7% relative improvement on instance segmentation and a 7.1% improvement on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
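The sketch below gives a rough version of the copy-paste augmentation (hypothetical size threshold, no overlap checks or blending, unlike the strategies evaluated in the paper): small ground-truth objects are cropped and pasted several times at random locations, and their bounding boxes are duplicated accordingly.

```python
# Naive copy-paste oversampling of small objects in an annotated image.
import random
import numpy as np

SMALL_AREA = 32 * 32   # COCO-style "small object" area threshold in pixels

def paste_small_objects(image, boxes, n_copies=3):
    """image: HxWx3 uint8 array; boxes: list of (x, y, w, h) ground-truth boxes."""
    h_img, w_img = image.shape[:2]
    augmented = image.copy()
    new_boxes = list(boxes)
    for (x, y, w, h) in boxes:
        if w * h > SMALL_AREA:
            continue                       # only oversample small objects
        patch = image[y:y + h, x:x + w].copy()
        for _ in range(n_copies):
            nx = random.randint(0, w_img - w)
            ny = random.randint(0, h_img - h)
            augmented[ny:ny + h, nx:nx + w] = patch   # naive paste, no blending
            new_boxes.append((nx, ny, w, h))
    return augmented, new_boxes

# Toy usage with a blank image and one small box.
img = np.zeros((256, 256, 3), dtype=np.uint8)
aug, boxes = paste_small_objects(img, [(10, 10, 20, 20)])
print(len(boxes))   # original box plus 3 pasted copies -> 4
```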
Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We use training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection of 150 CT scans acquired at a different hospital, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download at //github.com/holgerroth/3Dunet_abdomen_cascade.
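A schematic of the cascade, with placeholder "models" standing in for the trained 3D FCNs (this is not the released code), is sketched below: a coarse candidate mask is computed on the full volume, a padded bounding box is cropped around it, and the second stage segments only within the crop.

```python
# Coarse-to-fine cascade: crop a candidate region from stage 1 and refine it in stage 2.
import numpy as np
from scipy.ndimage import gaussian_filter

def crop_candidate_region(volume, coarse_mask, margin=8):
    """Bounding box of the stage-1 candidate region, padded by `margin` voxels."""
    zs, ys, xs = np.nonzero(coarse_mask)
    lo = np.maximum(np.array([zs.min(), ys.min(), xs.min()]) - margin, 0)
    hi = np.minimum(np.array([zs.max(), ys.max(), xs.max()]) + margin + 1, volume.shape)
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[sl], sl

def coarse_fcn(volume):
    """Placeholder for the first-stage 3D FCN (here: a smoothed intensity threshold)."""
    return gaussian_filter(volume, sigma=3) > 1.0

def fine_fcn(cropped_volume):
    """Placeholder for the second-stage 3D FCN applied only inside the candidate box."""
    return cropped_volume > np.percentile(cropped_volume, 75)

# Toy CT-like volume with one bright "organ" region.
vol = np.random.default_rng(3).normal(0, 1, (64, 64, 64))
vol[20:40, 20:40, 20:40] += 3.0

mask1 = coarse_fcn(vol)                          # stage 1 on the full volume
crop, sl = crop_candidate_region(vol, mask1)     # stage 2 sees only a fraction of the voxels
mask2 = np.zeros_like(mask1)
mask2[sl] = fine_fcn(crop)                       # stage 2 refines inside the crop
print(vol.size, crop.size, int(mask2.sum()))
```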