One popular technique for solving temporal planning problems consists in decoupling causal decisions, which are delegated to heuristic search, from temporal decisions, which are delegated to a simple temporal network (STN) solver. In this architecture, one needs to check the consistency of a series of STNs that are related to one another; methods that incrementally re-use previous computations and avoid expensive memory duplication are therefore of paramount importance. In this paper, we describe in detail how STNs are used in temporal planning, we identify a clear interface to support this use-case, and we present a data structure implementing this interface that is both time- and memory-efficient. We show that our data structure, called \deltastn, is superior to other state-of-the-art approaches on sequences of problems arising in temporal planning.
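As a rough illustration of the STN interface the abstract refers to (add a temporal constraint, then re-check consistency after each causal decision), here is a minimal sketch of a non-incremental baseline. It is not the \deltastn structure; the class and method names are our own, and consistency is checked with plain Bellman-Ford negative-cycle detection on the distance graph.

```python
class SimpleSTN:
    """Minimal STN: constraints of the form t_j - t_i <= w.
    The STN is consistent iff its distance graph has no negative cycle."""

    def __init__(self):
        self.nodes = set()
        self.edges = {}          # (i, j) -> tightest weight w

    def add_constraint(self, i, j, w):
        """Require t_j - t_i <= w (keep only the tightest bound per pair)."""
        self.nodes.update((i, j))
        if (i, j) not in self.edges or w < self.edges[(i, j)]:
            self.edges[(i, j)] = w

    def is_consistent(self):
        """Bellman-Ford from a virtual source connected to every node."""
        dist = {v: 0.0 for v in self.nodes}   # virtual source gives 0 to all
        for _ in range(len(self.nodes)):
            changed = False
            for (i, j), w in self.edges.items():
                if dist[i] + w < dist[j]:
                    dist[j] = dist[i] + w
                    changed = True
            if not changed:
                return True                    # fixpoint reached: consistent
        # a further relaxation is still possible => negative cycle
        return not any(dist[i] + w < dist[j] for (i, j), w in self.edges.items())

# usage: the planner adds temporal constraints as causal decisions are made
stn = SimpleSTN()
stn.add_constraint("start_A", "end_A", 10)    # action A lasts at most 10
stn.add_constraint("end_A", "start_A", -5)    # ...and at least 5
print(stn.is_consistent())                    # True
stn.add_constraint("end_A", "start_A", -20)   # now A must last >= 20: conflict
print(stn.is_consistent())                    # False
```

The point of the paper's data structure is to avoid re-running this whole check (and copying the network) for every related STN in the search; the sketch only shows the interface such a structure must expose.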
Embedded devices collect and process significant amounts of data in a variety of applications including environmental monitoring, industrial automation and control, and other Internet of Things (IoT) applications. Storing data efficiently is critically important, especially when the device must perform local processing on the data. The most widely used data structure for high-performance queries and inserts is the B-tree. However, existing implementations consume too much memory for small embedded devices and often rely on operating system support. This work presents an extremely memory-efficient implementation of B-trees for embedded devices that functions on the smallest devices and does not require an operating system. Experimental results demonstrate that the B-tree implementation can run on devices with as little as 4 KB of RAM while efficiently processing thousands of records.
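For readers unfamiliar with the underlying structure, the following is a minimal in-memory B-tree sketch (CLRS-style insert with pre-emptive node splits). It is illustrative only: the paper targets tiny devices with page-based storage and no OS, whereas the names and the minimum degree `T` below are our own choices.

```python
T = 3  # minimum degree: each node holds at most 2*T - 1 keys

class BTreeNode:
    def __init__(self, leaf=True):
        self.keys = []       # sorted keys
        self.children = []   # len(children) == len(keys) + 1 for internal nodes
        self.leaf = leaf

class BTree:
    def __init__(self):
        self.root = BTreeNode(leaf=True)

    def search(self, key, node=None):
        node = node or self.root
        i = 0
        while i < len(node.keys) and key > node.keys[i]:
            i += 1
        if i < len(node.keys) and node.keys[i] == key:
            return True
        return False if node.leaf else self.search(key, node.children[i])

    def insert(self, key):
        if len(self.root.keys) == 2 * T - 1:      # root full: grow tree height
            new_root = BTreeNode(leaf=False)
            new_root.children.append(self.root)
            self._split_child(new_root, 0)
            self.root = new_root
        self._insert_nonfull(self.root, key)

    def _split_child(self, parent, i):
        full = parent.children[i]
        right = BTreeNode(leaf=full.leaf)
        mid = full.keys[T - 1]                    # median key moves up
        right.keys, full.keys = full.keys[T:], full.keys[:T - 1]
        if not full.leaf:
            right.children, full.children = full.children[T:], full.children[:T]
        parent.keys.insert(i, mid)
        parent.children.insert(i + 1, right)

    def _insert_nonfull(self, node, key):
        if node.leaf:
            node.keys.append(key)
            node.keys.sort()
            return
        i = len(node.keys) - 1
        while i >= 0 and key < node.keys[i]:
            i -= 1
        i += 1
        if len(node.children[i].keys) == 2 * T - 1:
            self._split_child(node, i)
            if key > node.keys[i]:
                i += 1
        self._insert_nonfull(node.children[i], key)

tree = BTree()
for k in range(1000):
    tree.insert(k)
print(tree.search(42), tree.search(4242))   # True False
```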
Object detection in aerial images is a fundamental research topic in the geoscience and remote sensing domain. However, advanced approaches to this topic mainly focus on designing elaborate backbone or head networks while ignoring the neck network. In this letter, we first underline the importance of the neck network in object detection from the perspective of the information bottleneck. Then, to alleviate the information deficiency problem in current approaches, we propose a global semantic network (GSNet), which acts as a bridge from the backbone network to the head network in a bidirectional global pattern. Compared to existing approaches, our model can capture rich and enhanced image features at a lower computational cost. In addition, we propose a feature fusion refinement module (FRM) for features at different levels, which suffer from a semantic gap during feature fusion. To demonstrate the effectiveness and efficiency of our approach, experiments are carried out on two challenging and representative aerial image datasets (i.e., DOTA and HRSC2016). Experimental results in terms of accuracy and complexity validate the superiority of our method. The code has been open-sourced at GSNet.
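To make the role of a "neck" concrete, here is a hypothetical PyTorch sketch of a neck that injects a global semantic vector into multi-level backbone features before they reach the head. It is not the authors' GSNet or FRM; channel sizes and the fusion rule are placeholders chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalSemanticNeck(nn.Module):
    """Illustrative neck: lateral 1x1 convs + a shared global-context vector."""

    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # global branch: pooled semantics shared by every pyramid level
        self.global_fc = nn.Conv2d(out_channels, out_channels, kernel_size=1)
        self.refine = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels
        )

    def forward(self, feats):                       # feats: low -> high level
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # one global semantic vector computed from the deepest feature map
        g = self.global_fc(F.adaptive_avg_pool2d(laterals[-1], 1))
        return [conv(x + g) for conv, x in zip(self.refine, laterals)]

neck = GlobalSemanticNeck()
feats = [torch.randn(1, c, s, s) for c, s in [(256, 64), (512, 32), (1024, 16)]]
print([o.shape for o in neck(feats)])
```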
Nonresponse arises frequently in surveys, and follow-ups are routinely made to increase the response rate. In order to monitor the follow-up process, callback data have been used in social sciences and survey studies for decades. In modern surveys, the availability of callback data is increasing because response rates are decreasing and follow-ups are essential to collect as much information as possible. Although callback data are helpful for reducing bias in surveys, such data have not been widely used in statistical analysis until recently. We propose a stableness of resistance assumption for nonresponse adjustment with callback data. We establish the identification and semiparametric efficiency theory under this assumption, and propose a suite of semiparametric estimation methods, including a doubly robust one, which generalize existing parametric approaches to callback data analysis. We apply the approach to a Consumer Expenditure Survey dataset. The results suggest an association between nonresponse and high housing expenditures.
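For intuition about the "doubly robust" ingredient mentioned above, the snippet below shows a generic augmented inverse-probability-weighted (AIPW) mean estimate for nonresponse. It is not the paper's callback-specific estimator under the stableness of resistance assumption; the simulated data, response model, and outcome model are all our own placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# X: covariates, R: response indicator, Y: outcome (only used where R == 1)
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
p_resp = 1 / (1 + np.exp(-(0.5 + X[:, 0])))          # simulated response model
R = rng.binomial(1, p_resp)
Y = 1.0 + 2.0 * X[:, 0] - X[:, 1] + rng.normal(size=n)

pi = LogisticRegression().fit(X, R).predict_proba(X)[:, 1]      # propensity
m = LinearRegression().fit(X[R == 1], Y[R == 1]).predict(X)     # outcome model

# AIPW: consistent if either the propensity or the outcome model is correct
mu_dr = np.mean(m + R * (Y - m) / pi)
print(f"doubly robust mean: {mu_dr:.3f}  (complete-case: {Y[R == 1].mean():.3f})")
```

The complete-case average is biased here because responders systematically differ from nonresponders; the doubly robust combination corrects for this as long as one of the two working models is right, which is the property the paper's semiparametric estimators extend to the callback setting.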
Change detection (CD) aims to identify the changes between an image pair taken at different times. Prior methods devise specific networks from scratch to predict change masks at the pixel level and struggle with general segmentation problems. In this paper, we propose a new paradigm that reduces CD to semantic segmentation, i.e., tailoring an existing, powerful semantic segmentation network to solve CD. This new paradigm conveniently benefits from mainstream semantic segmentation techniques for handling the general segmentation problems in CD, so we can concentrate on studying how to detect changes. We propose a novel and important insight: different change types exist in CD and should be learned separately. Based on this insight, we devise a module named MTF to extract change information and fuse temporal features. MTF enjoys high interpretability and reveals the essential characteristic of CD, and most segmentation networks can be adapted to solve CD problems with our MTF module. Finally, we propose C-3PO, a network that detects changes at the pixel level. C-3PO achieves state-of-the-art performance without bells and whistles. It is simple yet effective and can be considered a new baseline in this field. Our code is at //github.com/DoctorKey/C-3PO.
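To illustrate the "reduce CD to segmentation" idea, here is a small sketch of fusing bi-temporal features so that an ordinary segmentation head can predict a change mask. It is not the paper's MTF module; the concat-plus-difference fusion and the channel sizes are assumptions made for the example.

```python
import torch
import torch.nn as nn

class TemporalFusion(nn.Module):
    """Merge features of the two time steps into one segmentable feature map."""

    def __init__(self, channels=256):
        super().__init__()
        # combine both views and their difference, then project back
        self.merge = nn.Sequential(
            nn.Conv2d(3 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_t0, f_t1):
        fused = torch.cat([f_t0, f_t1, f_t1 - f_t0], dim=1)
        return self.merge(fused)     # same shape as a single-image feature map

# usage: feed the fused features to any off-the-shelf segmentation head
fusion = TemporalFusion(channels=256)
f_t0, f_t1 = torch.randn(2, 1, 256, 64, 64)      # features of the image pair
change_features = fusion(f_t0, f_t1)
print(change_features.shape)                     # torch.Size([1, 256, 64, 64])
```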
Efficient consistency maintenance of incomplete and dynamic real-life databases is a key quality requirement for further data analysis. In prior work, we tackled the generic problem of database updating in the presence of tuple-generating constraints from a theoretical viewpoint. The current paper addresses the usability of our approach by (a) introducing incremental update routines (instead of the previous from-scratch versions) and (b) removing the restriction that the contents of the database must fit in main memory. In doing so, this paper offers new algorithms and proposes queries and data models that invite discussion on the representation of incompleteness in databases. We also propose implementations under a graph database model and the traditional relational database model. Our experiments show that overall computation times are similar but point to discrepancies in some steps.
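To give a feel for what "incremental" means for tuple-generating constraints, the toy sketch below enforces one tuple-generating dependency (TGD) by inspecting only the newly inserted tuples rather than re-chasing the whole database. The relation names, the example TGD, and the labelled-null convention are ours, not the paper's algorithms or data model.

```python
# Example TGD:  forall e, d: works_in(e, d) -> exists m: dept(d, m)

db = {
    "works_in": set(),                  # (employee, department)
    "dept": {("sales", "ann")},         # (department, manager)
}

def insert_works_in(delta):
    """Incremental chase step: only the new tuples can introduce violations,
    so only their department values are checked against dept()."""
    known_depts = {d for d, _ in db["dept"]}
    for emp, dept in delta:
        if dept not in known_depts:
            # satisfy the existential with a labelled null (invented witness)
            db["dept"].add((dept, f"NULL_{dept}"))
            known_depts.add(dept)
    db["works_in"].update(delta)

insert_works_in({("bob", "sales")})     # already satisfied, nothing added
insert_works_in({("eve", "r&d")})       # violation repaired with a null witness
print(sorted(db["dept"]))
```

A from-scratch routine would instead re-scan every works_in tuple on each update, which is what the paper's incremental versions are designed to avoid.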
In this letter, a novel method for change detection based on neighborhood structure correlation is proposed. Because structure features are insensitive to intensity differences between bi-temporal images, we perform the correlation analysis on structure features rather than intensity information. First, we extract structure feature maps using multi-oriented gradient information. Then, the structure feature maps are used to obtain the Neighborhood Structural Correlation Image (NSCI), which represents the contextual structure information. In addition, we introduce a measure named matching error, which can be used to improve the neighborhood information. Subsequently, a change detection model based on random forests is constructed; the NSCI features and the matching error are used as the model inputs for training and prediction. Finally, decision-tree voting is used to produce the change detection result. To evaluate the performance of the proposed method, we compare it with three state-of-the-art change detection methods. The experimental results on two datasets demonstrate the effectiveness and robustness of the proposed method.
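A much-simplified version of this pipeline is sketched below: gradient-based structure maps, a neighborhood correlation image, and a random forest on top. The feature definitions, window size, and toy data are our own simplifications, not the authors' exact NSCI or matching-error features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def structure_map(img):
    """Stack gradient magnitudes along two orientations as structure features."""
    gy, gx = np.gradient(img.astype(float))
    return np.stack([np.abs(gx), np.abs(gy)], axis=-1)

def neighborhood_correlation(s1, s2, k=3):
    """Correlation of the two structure maps inside a k x k neighborhood."""
    h, w, _ = s1.shape
    r = k // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            a = s1[i - r:i + r + 1, j - r:j + r + 1].ravel()
            b = s2[i - r:i + r + 1, j - r:j + r + 1].ravel()
            if a.std() > 0 and b.std() > 0:
                out[i, j] = np.corrcoef(a, b)[0, 1]
    return out

# toy bi-temporal pair with a changed block in the corner
t1 = np.random.rand(32, 32)
t2 = t1.copy()
t2[:8, :8] = np.random.rand(8, 8)

nsci = neighborhood_correlation(structure_map(t1), structure_map(t2))
labels = np.zeros((32, 32), dtype=int)
labels[:8, :8] = 1                                  # ground-truth change mask

X = nsci.reshape(-1, 1)                             # per-pixel feature vector
y = labels.ravel()
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```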
We present a novel application of neural networks to design improved mixing elements for single-screw extruders. Specifically, we propose to use neural networks in numerical shape optimization to parameterize geometries. Geometry parameterization is crucial in enabling efficient shape optimization as it allows for optimizing complex shapes using only a few design variables. Recent approaches often utilize CAD data in conjunction with spline-based methods where the spline's control points serve as design variables. Consequently, these approaches rely on the same design variables as specified by the human designer. While this choice is convenient, it either restricts the design to small modifications of given, initial design features - effectively prohibiting topological changes - or yields undesirably many design variables. In this work, we step away from CAD and spline-based approaches and construct an artificial, feature-dense yet low-dimensional optimization space using a generative neural network. Using the neural network for the geometry parameterization extends state-of-the-art methods in that the resulting design space is not restricted to user-prescribed modifications of certain basis shapes. Instead, within the same optimization space, we can interpolate between and explore seemingly unrelated designs. To show the performance of this new approach, we integrate the developed shape parameterization into our numerical design framework for dynamic mixing elements in plastics extrusion. Finally, we challenge the novel method in a competitive setting against current free-form deformation-based approaches and demonstrate the method's performance even at this early stage.
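The core idea, searching a low-dimensional latent space of a generative network instead of spline control points, can be sketched as follows. This is not the authors' network or mixing objective: the decoder here is randomly initialized rather than trained on a shape dataset, the geometry is a 2D point contour, and the cost is a placeholder standing in for the flow-simulation objective.

```python
import torch
import torch.nn as nn

LATENT_DIM, N_POINTS = 4, 64

# small generative decoder: latent design vector -> geometry (64 contour points)
decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2 * N_POINTS),          # (x, y) for each contour point
)
for p in decoder.parameters():
    p.requires_grad_(False)                # generator is fixed during optimization

def generate_shape(z):
    return decoder(z).reshape(-1, N_POINTS, 2)

# shape optimization over the latent design variables only
z = torch.zeros(1, LATENT_DIM, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for step in range(100):
    pts = generate_shape(z)
    objective = pts.pow(2).mean()          # placeholder cost, not mixing quality
    opt.zero_grad()
    objective.backward()
    opt.step()

print("optimized latent design variables:", z.detach().numpy())
```

The practical appeal is that the optimizer only ever sees the four latent variables, while the decoder is free to produce shapes that need not be small perturbations of a user-prescribed basis design.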
Given a dataset on actions and resulting long-term rewards, a direct estimation approach fits value functions that minimize prediction error on the training data. Temporal difference learning (TD) methods instead fit value functions by minimizing the degree of temporal inconsistency between estimates made at successive time-steps. Focusing on finite state Markov chains, we provide a crisp asymptotic theory of the statistical advantages of this approach. First, we show that an intuitive inverse trajectory pooling coefficient completely characterizes the percent reduction in mean-squared error of value estimates. Depending on problem structure, the reduction could be enormous or nonexistent. Next, we prove that there can be dramatic improvements in estimates of the difference in value-to-go for two states: TD's errors are bounded in terms of a novel measure - the problem's trajectory crossing time - which can be much smaller than the problem's time horizon.
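The contrast between the two estimators can be seen on a toy two-state Markov chain; the chain, rewards, and step size below are our own choices, and the printed estimates are approximate. The direct (Monte Carlo) approach regresses observed discounted returns, while TD(0) bootstraps from the estimate at the successor state and thereby pools information across trajectories that visit the same state.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],        # state 1 is absorbing
              [0.0, 1.0]])
r = np.array([1.0, 0.0])         # expected reward per visit
gamma = 0.9
true_v = np.linalg.solve(np.eye(2) - gamma * P, r)

def sample_episode(horizon=40):
    s, traj = 0, []
    for _ in range(horizon):
        traj.append((s, r[s] + rng.normal(scale=0.5)))   # noisy reward
        s = rng.choice(2, p=P[s])
    return traj

episodes = [sample_episode() for _ in range(200)]

# direct (Monte Carlo) estimate: average the discounted return from each visit
returns = {0: [], 1: []}
for traj in episodes:
    g = 0.0
    for s, rew in reversed(traj):
        g = rew + gamma * g
        returns[s].append(g)
mc_v = np.array([np.mean(returns[s]) for s in (0, 1)])

# TD(0): bootstrap from the current estimate at the successor state
td_v, alpha = np.zeros(2), 0.05
for traj in episodes:
    for (s, rew), (s_next, _) in zip(traj, traj[1:]):
        td_v[s] += alpha * (rew + gamma * td_v[s_next] - td_v[s])

print("true:", true_v.round(2), "MC:", mc_v.round(2), "TD:", td_v.round(2))
```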
Knowledge is a formal way of understanding the world, providing human-level cognition and intelligence for next-generation artificial intelligence (AI). One representation of knowledge is the set of structural relations between entities. Relation Extraction (RE), a sub-task of information extraction and an effective way to acquire this important knowledge automatically, plays a vital role in Natural Language Processing (NLP). Its purpose is to identify semantic relations between entities in natural language text. To date, numerous studies have documented that techniques based on Deep Neural Networks (DNNs) have become the prevailing approach in this research area. In particular, supervised and distantly supervised methods based on DNNs are the most popular and reliable solutions for RE. This article 1) introduces some general concepts, and 2) gives a comprehensive overview of DNNs in RE from two points of view: supervised RE, which attempts to improve standard RE systems, and distant-supervision RE, which adopts DNNs to design the sentence encoder and the de-noising method. We further 3) cover some novel methods, describe recent trends, and discuss possible future research directions for this task.
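As a concrete reference point for the "sentence encoder" notion used throughout this literature, here is a generic CNN-based encoder for relation classification with relative-position embeddings, a common design in DNN-based RE. The vocabulary size, relation set, and hyperparameters are placeholders, not taken from any specific surveyed system.

```python
import torch
import torch.nn as nn

VOCAB, MAX_LEN, N_RELATIONS = 10_000, 60, 10

class CNNRelationEncoder(nn.Module):
    def __init__(self, emb_dim=50, pos_dim=5, hidden=230):
        super().__init__()
        self.word_emb = nn.Embedding(VOCAB, emb_dim)
        # relative positions to the two entity mentions, a common RE feature
        self.pos1_emb = nn.Embedding(2 * MAX_LEN, pos_dim)
        self.pos2_emb = nn.Embedding(2 * MAX_LEN, pos_dim)
        self.conv = nn.Conv1d(emb_dim + 2 * pos_dim, hidden,
                              kernel_size=3, padding=1)
        self.classifier = nn.Linear(hidden, N_RELATIONS)

    def forward(self, tokens, pos1, pos2):
        x = torch.cat([self.word_emb(tokens),
                       self.pos1_emb(pos1),
                       self.pos2_emb(pos2)], dim=-1)      # (B, L, D)
        h = torch.relu(self.conv(x.transpose(1, 2)))      # (B, hidden, L)
        sentence_vec = h.max(dim=2).values                # max-pool over tokens
        return self.classifier(sentence_vec)              # relation logits

model = CNNRelationEncoder()
tokens = torch.randint(0, VOCAB, (4, MAX_LEN))
pos1 = torch.randint(0, 2 * MAX_LEN, (4, MAX_LEN))
pos2 = torch.randint(0, 2 * MAX_LEN, (4, MAX_LEN))
print(model(tokens, pos1, pos2).shape)                    # torch.Size([4, 10])
```

Distant-supervision methods typically wrap an encoder of this kind with a de-noising layer (e.g., bag-level attention over sentences), which is the second axis of the survey.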
Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph-structure is available. In practice, however, real-world graphs are often noisy and incomplete or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
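The core ingredient of this line of work can be sketched in a few lines: keep a learnable probability for every potential edge, sample a discrete graph from it, and run a GCN layer on the sampled graph. The sketch below is our own simplification with made-up sizes; it omits the paper's bilevel optimization and the machinery needed to learn through the non-differentiable sampling step.

```python
import torch
import torch.nn as nn

N_NODES, IN_DIM, OUT_DIM = 6, 8, 4

edge_logits = nn.Parameter(torch.zeros(N_NODES, N_NODES))   # learned structure
gcn_weight = nn.Parameter(torch.randn(IN_DIM, OUT_DIM) * 0.1)
features = torch.randn(N_NODES, IN_DIM)

def sample_graph(logits):
    """Draw a symmetric adjacency matrix from independent Bernoulli edges."""
    probs = torch.sigmoid(logits)
    adj = torch.triu(torch.bernoulli(probs), diagonal=1)
    return adj + adj.T + torch.eye(N_NODES)                  # add self-loops

def gcn_layer(adj, x, w):
    """Standard GCN propagation: D^{-1/2} A D^{-1/2} X W."""
    d_inv_sqrt = torch.diag(adj.sum(dim=1).pow(-0.5))
    return torch.relu(d_inv_sqrt @ adj @ d_inv_sqrt @ x @ w)

adj = sample_graph(edge_logits)
out = gcn_layer(adj, features, gcn_weight)
print(adj, out.shape, sep="\n")      # sampled structure and node embeddings
```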