Evacuation planning is a crucial part of disaster management, where the goal is to relocate people to safety and minimize casualties. Every evacuation plan has two essential components: routing and scheduling. However, jointly optimizing these two components with objectives such as minimizing average evacuation time or evacuation completion time is a computationally hard problem. To approach it, we present MIP-LNS, a scalable optimization method that combines heuristic search with mathematical optimization and can optimize a variety of objective functions. We use real-world road network and population data from Harris County in Houston, Texas, and apply MIP-LNS to find evacuation routes and a schedule for the area. We show that, within a given time limit, our proposed method finds better solutions than existing methods in terms of average evacuation time, evacuation completion time, and optimality guarantee. We perform agent-based simulations of evacuation in our study area to demonstrate the efficacy and robustness of our solution, and we show that the prescribed evacuation plan remains effective even if evacuees deviate from the suggested schedule up to a certain extent. We also examine how evacuation plans are affected by road failures. Our results show that MIP-LNS can use information about the estimated deadlines of roads (when they are expected to become unusable) to produce better evacuation plans, evacuating more people successfully and conveniently.
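To make the MIP-plus-large-neighborhood-search pattern concrete, here is a minimal sketch, not the paper's implementation: the toy assignment model (evacuee groups to routes, minimizing completion time), the PuLP usage, and all parameters are our own illustrative assumptions. Each LNS "repair" re-solves a restricted MIP in which assignments outside a randomly destroyed neighborhood are frozen.

```python
# Hedged MIP-LNS sketch: fix most variables, re-solve a small MIP repeatedly.
import random
import pulp

groups, routes = range(20), range(4)
travel_time = [[random.randint(10, 60) for _ in routes] for _ in groups]

def solve_restricted(fixed):
    """Re-solve the MIP with some assignments fixed (the LNS 'repair')."""
    prob = pulp.LpProblem("evac", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (groups, routes), cat="Binary")
    makespan = pulp.LpVariable("makespan", lowBound=0)
    prob += makespan                       # minimize evacuation completion time
    for g in groups:
        prob += pulp.lpSum(x[g][r] for r in routes) == 1
    for r in routes:                       # total load on a route bounds makespan
        prob += pulp.lpSum(travel_time[g][r] * x[g][r] for g in groups) <= makespan
    for g, r in fixed.items():             # freeze variables outside the neighborhood
        prob += x[g][r] == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=0, timeLimit=5))
    return {g: max(routes, key=lambda r: x[g][r].value()) for g in groups}, makespan.value()

assign, best = solve_restricted({})        # initial solution (full MIP, time-limited)
for _ in range(10):                        # LNS: destroy a random subset, repair via MIP
    free = set(random.sample(list(groups), 5))
    cand, cost = solve_restricted({g: r for g, r in assign.items() if g not in free})
    if cost < best:
        assign, best = cand, cost
print("best completion time:", best)
```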
Evolutionary multi-agent systems (EMASs) are very good at dealing with difficult, multi-dimensional problems; their efficacy has been proven theoretically through analysis of a relevant Markov-chain-based model. Research now continues on introducing autonomous hybridization into EMAS. This paper focuses on a proposed hybrid version of EMAS, covering the selection and introduction of a number of hybrid operators and the definition of rules for starting the hybrid steps of the main algorithm. Those hybrid steps leverage existing, well-known, and proven-efficient metaheuristics, and integrate their results into the main algorithm. The discussed modifications are evaluated on a number of difficult continuous-optimization benchmarks.
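The following is a hedged sketch of the hybridization idea only, not the paper's operators or rules: the trigger (every 50 steps), the agent-meeting scheme, and the choice of Nelder-Mead as the embedded metaheuristic are all our own illustrative assumptions.

```python
# Illustrative EMAS-style loop that periodically hands its best genotype
# to a proven local-search routine and reinjects the improved result.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):  # a standard continuous-optimization benchmark
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(1)
pop = rng.uniform(-5.12, 5.12, (30, 5))            # agent genotypes
for step in range(1, 201):
    i, j = rng.integers(0, len(pop), 2)            # two agents "meet"
    loser = i if rastrigin(pop[i]) > rastrigin(pop[j]) else j
    pop[loser] = pop[loser] + rng.normal(0, 0.3, 5)  # losing agent mutates
    if step % 50 == 0:                             # hybridization trigger rule (assumed)
        best = min(range(len(pop)), key=lambda k: rastrigin(pop[k]))
        res = minimize(rastrigin, pop[best], method="Nelder-Mead")
        pop[best] = res.x                          # integrate the hybrid step's result
print("best fitness:", min(rastrigin(x) for x in pop))
```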
We propose coordinating guiding vector fields to achieve two tasks simultaneously with a team of robots: first, the guidance and navigation of multiple robots to possibly different paths or surfaces, typically embedded in 2D or 3D; second, their motion coordination while tracking their prescribed paths or surfaces. The motion coordination is defined by desired parametric displacements between robots on the path or surface. Such a desired displacement is achieved by controlling the virtual coordinates, which correspond to the path or surface's parameters, between guiding vector fields. Rigorous mathematical guarantees underpinned by dynamical systems theory and Lyapunov theory are provided for the effective distributed motion coordination and navigation of robots on paths or surfaces from all initial positions. As an example of a practical robotic application, we derive a control algorithm from the proposed coordinating guiding vector fields for a Dubins-car-like model with actuation saturation. Our proposed algorithm is distributed and scalable to an arbitrary number of robots. Furthermore, extensive illustrative simulations and fixed-wing aircraft outdoor experiments validate the effectiveness and robustness of our algorithm.
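To illustrate the basic guiding-vector-field construction (a minimal single-robot 2D sketch, not the paper's coordinated multi-robot field with virtual coordinates): for a desired path given as the zero-level set phi(p) = 0, the field combines a tangent term that propagates along the path with a gradient term that drives the level-set error to zero. The unit-circle path and gain k are illustrative assumptions.

```python
# Minimal 2D guiding vector field: follow the path while reducing |phi|.
import numpy as np

def gvf(p, k=1.0):
    phi = p[0]**2 + p[1]**2 - 1.0            # unit circle as the desired path
    grad = np.array([2 * p[0], 2 * p[1]])    # gradient of phi
    tangent = np.array([grad[1], -grad[0]])  # 90-degree rotation of the gradient
    return tangent - k * phi * grad          # propagation + convergence terms

# Euler-integrate a unit-speed robot following the field from an arbitrary start.
p = np.array([2.0, 0.5])
for _ in range(2000):
    v = gvf(p)
    p += 0.01 * v / (np.linalg.norm(v) + 1e-9)
print("distance from path:", abs(np.hypot(*p) - 1.0))
```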
Patient scheduling is a difficult task involving stochastic factors such as the unknown arrival times of patients. Similarly, the scheduling of radiotherapy for cancer treatments needs to handle patients with different urgency levels when allocating resources. High-priority patients may arrive at any time, and there must be resources available to accommodate them. A common solution is to reserve a flat percentage of treatment capacity for emergency patients. However, this solution can result in overdue treatments for urgent patients, a failure to fully exploit treatment capacity, and delayed treatments for low-priority patients. This problem is especially severe in large and crowded hospitals. In this paper, we propose a prediction-based approach for online dynamic radiotherapy scheduling that adapts the present scheduling decision based on each incoming patient and the current allocation of resources. Our approach is based on a regression model trained to recognize the links between patients' arrival patterns and their ideal waiting times in optimal offline solutions where all future arrivals are known in advance. Compared to flat-reservation policies, our prediction-based approach better prevents overdue treatments for emergency patients while maintaining comparable waiting times for the other patients. We also demonstrate how our proposed approach supports explainability and interpretability in scheduling decisions using SHAP values.
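A hedged sketch of the prediction step described above: a regressor trained on features of solved offline instances learns to map a patient's arrival context to a target waiting time that can then guide the online decision. The synthetic data, the feature names, and the choice of gradient boosting are all our own illustrative assumptions.

```python
# Toy stand-in for "learn ideal waiting times from optimal offline schedules".
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Synthetic rows standing in for solved offline instances:
# [urgency_level, queue_length, reserved_capacity_left, day_of_week]
X = rng.random((500, 4))
y = 2.0 + 5.0 * X[:, 1] - 3.0 * X[:, 0] + rng.normal(0, 0.1, 500)  # toy target

model = GradientBoostingRegressor().fit(X, y)
new_patient = np.array([[0.9, 0.4, 0.2, 0.6]])   # a high-urgency arrival
predicted_wait = model.predict(new_patient)[0]
print(f"suggested target waiting time: {predicted_wait:.2f} days")
```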
Metaheuristics are general-purpose optimization algorithms suited to difficult problems that classic approaches cannot solve. In this paper we aim at constructing a novel socio-cognitive metaheuristic based on castes and apply several versions of this algorithm to the optimization of a time-delay system model. Besides giving the background and the details of the proposed algorithms, we apply them to selected variants of the problem and discuss the results.
Resource-constrained project scheduling is an important combinatorial optimisation problem with many practical applications. With complex requirements such as precedence constraints, limited resources, and finance-based objectives, finding optimal solutions for large problem instances is very challenging, even with well-customised metaheuristics and matheuristics. To address this challenge, we propose a new matheuristic algorithm based on Merge Search and parallel computing to solve resource-constrained project scheduling with the aim of maximising the net present value. Merge Search is a variable partitioning and merging mechanism that formulates restricted mixed integer programs with the aim of improving an existing pool of solutions; the pool is obtained via a customised parallel ant colony optimisation algorithm, which is also capable of generating high-quality solutions on its own. The experimental results show that the proposed method outperforms current state-of-the-art algorithms on known benchmark problem instances. Further analyses also demonstrate that the proposed algorithm is substantially more efficient than its counterparts with respect to convergence when multiple cores are considered.
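The core partitioning idea behind Merge Search can be illustrated in a few lines (a minimal sketch, not the paper's implementation; the toy solution pool is assumed): binary variables that take the same value pattern across a pool of solutions are merged into one aggregate decision, yielding a much smaller restricted MIP to re-solve.

```python
# Group variables by their value pattern across the solution pool.
from collections import defaultdict

pool = [  # each row: one solution (e.g., from the parallel ACO), per-variable values
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 0],
]

groups = defaultdict(list)
for var_idx in range(len(pool[0])):
    pattern = tuple(sol[var_idx] for sol in pool)   # value across the pool
    groups[pattern].append(var_idx)

for pattern, members in groups.items():
    kind = "fixed" if len(set(pattern)) == 1 else "free aggregate"
    print(f"{kind}: variables {members} share pattern {pattern}")
# Variables with a constant pattern can be fixed to that value; the rest
# collapse to one decision per group, shrinking the restricted MIP.
```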
The alfalfa crop is globally important as livestock feed, so highly efficient planting and harvesting could benefit many industries, especially as the global climate changes and traditional methods become less accurate. Recent work using machine learning (ML) to predict yields for alfalfa and other crops has shown promise. Previous efforts used remote sensing, weather, planting, and soil data to train machine learning models for yield prediction. However, while remote sensing works well, such models require large amounts of data and cannot make predictions until the harvesting season begins. Using weather and planting data from alfalfa variety trials in Kentucky and Georgia, our previous work compared feature selection techniques to find the best technique and feature set. In this work, we trained a variety of machine learning models, using cross-validation for hyperparameter optimization, to predict biomass yields, and we obtained better accuracy than similar work that employed more complex techniques. Our best individual model was a random forest with a mean absolute error of 0.081 tons/acre and an $R^2$ of 0.941. Next, we expanded this dataset to include Wisconsin and Mississippi and repeated our experiments, obtaining a higher best $R^2$ of 0.982 with a regression tree. We then isolated our testing datasets by state to explore this problem's suitability for domain adaptation (DA), training on multiple source states and testing on one target state. This Trivial DA (TDA) approach leaves plenty of room for improvement through more complex DA techniques in forthcoming work.
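For readers who want the shape of the modeling setup, here is a hedged sketch with synthetic data; the feature names, search grid, and data are our own illustrative assumptions, not the paper's.

```python
# Random forest with cross-validated hyperparameter search for yield prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(42)
X = rng.random((400, 5))  # e.g., rainfall, growing degree days, days since sowing, ...
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 0.05, 400)  # toy biomass yield

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=5, scoring="neg_mean_absolute_error",
)
search.fit(X_tr, y_tr)
pred = search.predict(X_te)
print("MAE:", mean_absolute_error(y_te, pred), "R^2:", r2_score(y_te, pred))
```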
Real-time scheduling theory assists developers of embedded systems in verifying that the timing constraints required by critical software tasks can be feasibly met on a given hardware platform. Fundamental problems in the theory are often formulated as searches for fixed points of functions and are solved by fixed-point iterations. These fixed-point methods are used widely because they are simple to understand, simple to implement, and seem to work well in practice. The same problems can also be formulated as integer programs and solved with algorithms based on theories of linear programming and cutting planes, among others. However, such algorithms are harder to understand and implement than fixed-point iterations. In this research, we show that ideas like linear programming duality and cutting planes can be used to develop algorithms that are as easy to implement as existing fixed-point iteration schemes but have better convergence properties. We evaluate the algorithms on synthetically generated problem instances to demonstrate that the new algorithms are faster than the existing ones.
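The classic fixed-point iteration the abstract refers to is worst-case response-time analysis for fixed-priority preemptive scheduling: iterate R = C_i + sum over higher-priority tasks j of ceil(R / T_j) * C_j until it stabilizes. The sketch below assumes implicit deadlines (deadline = period) and a toy task set.

```python
# Response-time analysis via fixed-point iteration (tasks sorted by
# decreasing priority; C = worst-case execution times, T = periods).
import math

def response_time(i, C, T, limit=10_000):
    """Iterate R = C[i] + sum_j ceil(R / T[j]) * C[j] to a fixed point."""
    R = C[i]
    for _ in range(limit):
        interference = sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        R_next = C[i] + interference
        if R_next == R:      # fixed point reached: R is the response time
            return R
        if R_next > T[i]:    # deadline (= period, assumed) missed: infeasible
            return None
        R = R_next

C, T = [1, 2, 3], [4, 10, 20]
print([response_time(i, C, T) for i in range(3)])  # -> [1, 3, 7]
```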
Deep Reinforcement Learning (DRL) has achieved remarkable success in scenarios such as games and has emerged as a potential solution for control tasks, owing to its scalability and ability to handle complex dynamics. However, few works have targeted environments grounded in real-world settings. Indeed, real-world scenarios can be challenging, especially when faced with a high-dimensional state space and an unknown reward function. To facilitate research, we release a testbed consisting of an environment simulator and demonstrations of human operation concerning pump scheduling at a real-world water distribution facility. The pump scheduling problem can be viewed as a decision process for deciding when to operate pumps to supply water while limiting electricity consumption and meeting system constraints. To provide a starting point, we release a well-documented codebase, present an overview of some challenges that can be addressed, and provide a baseline representation of the problem. The code and dataset are available at //gitlab.com/hdonancio/pumpscheduling.
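As a hedged sketch of how such a testbed is typically driven, the loop below uses a stand-in simulator class; the class name, dynamics, observation layout, and reward shaping are all our own assumptions, and the API of the released codebase may differ.

```python
# Gym-style interaction loop where the action is a pump on/off decision.
import random

class PumpEnvStub:
    """Stand-in for the released simulator; illustrative dynamics only."""
    def reset(self):
        self.level, self.t = 0.5, 0              # tank level (fraction), timestep
        return [self.level, self.t % 24]
    def step(self, action):                      # action: 1 = pump on, 0 = off
        demand = 0.05 + 0.03 * random.random()
        self.level = min(1.0, max(0.0, self.level + 0.08 * action - demand))
        energy_cost = 0.1 * action               # electricity consumed by pumping
        penalty = 1.0 if self.level <= 0.0 else 0.0   # constraint violation
        reward = -energy_cost - penalty
        self.t += 1
        return [self.level, self.t % 24], reward, self.t >= 168, {}

env = PumpEnvStub()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = 1 if obs[0] < 0.4 else 0            # naive threshold baseline policy
    obs, r, done, _ = env.step(action)
    total += r
print("episode return:", total)
```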
Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains. However, there is still enormous potential to be tapped before fully supervised performance is reached. In this paper, we present a novel active learning strategy to assist knowledge transfer in the target domain, dubbed active domain adaptation. We start from the observation that energy-based models exhibit free-energy biases when training (source) and test (target) data come from different distributions. Inspired by this inherent mechanism, we empirically show that a simple yet efficient energy-based sampling strategy selects more valuable target samples than existing approaches, which require particular architectures or the computation of distances. Our algorithm, Energy-based Active Domain Adaptation (EADA), queries groups of target data that incorporate both domain characteristics and instance uncertainty into every selection round. Meanwhile, by compacting the free energy of target data around the source domain via a regularization term, the domain gap can be implicitly diminished. Through extensive experiments, we show that EADA surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world. Code is available at //github.com/BIT-DA/EADA.
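A minimal sketch of the core free-energy score (EADA additionally combines domain and instance terms and a regularizer; this shows only the basic quantity): the free energy of a classifier's logits is E(x) = -T * logsumexp(f(x)/T), and under a source-trained model, target samples far from the source distribution tend to receive higher free energy. The logits here are random placeholders.

```python
# Energy-based selection of target samples for annotation.
import torch

def free_energy(logits, T=1.0):
    return -T * torch.logsumexp(logits / T, dim=1)

logits = torch.randn(1000, 10)           # unlabeled-target logits (placeholder)
scores = free_energy(logits)             # higher => less source-like, more valuable
budget = 50
query_idx = scores.topk(budget).indices  # query labels for the top-scoring samples
print("selected for annotation:", query_idx[:10].tolist())
```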
Previous work on event extraction has mainly focused on predicting event triggers and argument roles, treating entity mentions as provided by human annotators. This is unrealistic, as entity mentions are usually predicted by existing toolkits whose errors may propagate to event trigger and argument role recognition. Only a few recent works have addressed this problem by jointly predicting entity mentions, event triggers, and arguments. However, such work is limited to discrete, hand-engineered features for representing contextual information for the individual tasks and their interactions. In this work, we propose a novel model that jointly performs predictions for entity mentions, event triggers, and arguments based on shared hidden representations from deep learning. Experiments demonstrate the benefits of the proposed method, which achieves state-of-the-art performance for event extraction.
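The shared-representation idea can be sketched as one encoder feeding three task heads; this is an illustrative architecture of our own (encoder type, dimensions, and tag counts are assumptions), not the paper's model.

```python
# Joint prediction of entity mentions, triggers, and argument roles
# from a single shared encoder.
import torch
import torch.nn as nn

class JointEventModel(nn.Module):
    def __init__(self, vocab=10000, dim=128, n_ent=5, n_trig=9, n_arg=7):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.ent_head = nn.Linear(2 * dim, n_ent)    # entity-mention tags
        self.trig_head = nn.Linear(2 * dim, n_trig)  # event-trigger tags
        self.arg_head = nn.Linear(2 * dim, n_arg)    # argument-role tags
    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))      # shared hidden states
        return self.ent_head(h), self.trig_head(h), self.arg_head(h)

model = JointEventModel()
ent, trig, arg = model(torch.randint(0, 10000, (2, 16)))  # batch of 2 sentences
print(ent.shape, trig.shape, arg.shape)  # per-token scores for each task
```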