The VALERIE tool pipeline is a synthetic data generator developed with the goal of contributing to the understanding of domain-specific factors that influence the perception performance of DNNs (deep neural networks). This work was carried out under the German research project KI Absicherung in order to develop a methodology for the validation of DNNs in the context of pedestrian detection in urban environments for automated driving. The VALERIE22 dataset was generated with the VALERIE procedural tools pipeline, providing a photorealistic sensor simulation rendered from automatically synthesized scenes. The dataset provides a uniquely rich set of metadata, allowing extraction of specific scene and semantic features (such as pixel-accurate occlusion rates, positions in the scene, and distance and angle to the camera). This enables a multitude of possible tests on the data, and we hope to stimulate research on understanding the performance of DNNs. Based on performance metrics, a comparison with several other publicly available datasets is provided, demonstrating that VALERIE22 is one of the best-performing synthetic datasets currently available in the open domain.
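As a minimal illustration of how such per-instance metadata could be used in practice, the sketch below filters pedestrian instances by occlusion rate and camera distance before an evaluation run. The metadata layout and field names (e.g. "occlusion_rate", "distance_m") are hypothetical, not the actual VALERIE22 schema.

```python
# Hypothetical metadata schema: select pedestrian instances of a synthetic dataset
# by per-instance occlusion rate and camera distance before evaluating a detector.
import json
from pathlib import Path

def select_instances(metadata_dir, max_occlusion=0.25, max_distance_m=30.0):
    """Yield (file name, instance) pairs whose metadata passes the filters."""
    for meta_file in Path(metadata_dir).glob("*.json"):
        with open(meta_file) as f:
            meta = json.load(f)
        for inst in meta.get("pedestrians", []):
            if inst["occlusion_rate"] <= max_occlusion and inst["distance_m"] <= max_distance_m:
                yield meta_file.name, inst

if __name__ == "__main__":
    for name, inst in select_instances("valerie22/metadata"):
        print(name, inst["occlusion_rate"], inst["distance_m"])
```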
This work presents the analysis of semantically segmented, longitudinally and spatially rich thermal images collected at the neighborhood scale to identify hot and cool spots in urban areas. An infrared observatory was operated over a few months to collect thermal images of different types of buildings on the educational campus of the National University of Singapore. A subset of the thermal image dataset was used to train state-of-the-art deep learning models to segment various urban features such as buildings, vegetation, sky, and roads. It was observed that the U-Net segmentation model with a `resnet34' CNN backbone achieves the highest mIoU score of 0.99 on the test dataset, compared to other models such as DeepLabV3, DeepLabV3+, FPN, and PSPNet. The masks generated using the segmentation models were then used to extract the temperature from the thermal images and correct for differences in the emissivity of various urban features. Further, various statistical measures of the temperature extracted using the predicted segmentation masks are shown to closely match those extracted using the ground-truth masks. Finally, the masks were used to identify hot and cool spots among the urban features at various instants of time. This is one of the very few studies demonstrating the automated analysis of thermal images, which can be of potential use to urban planners for devising mitigation strategies to reduce the urban heat island (UHI) effect, improve building energy efficiency, and maximize outdoor thermal comfort.
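A minimal sketch of the segmentation setup described above, assuming the widely used segmentation_models_pytorch package; the class set, input channels, loss, and thermal-image preprocessing are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch: U-Net with a resnet34 encoder for urban-feature segmentation.
import torch
import segmentation_models_pytorch as smp

NUM_CLASSES = 4  # e.g. building, vegetation, sky, road (assumed label set)

model = smp.Unet(
    encoder_name="resnet34",     # CNN backbone reported to perform best
    encoder_weights="imagenet",  # standard pretrained initialization
    in_channels=1,               # single-channel thermal frames (assumption)
    classes=NUM_CLASSES,
)

loss_fn = smp.losses.DiceLoss(mode="multiclass")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """One optimization step on a batch of thermal images and integer masks."""
    optimizer.zero_grad()
    logits = model(images)         # (B, NUM_CLASSES, H, W)
    loss = loss_fn(logits, masks)  # masks: (B, H, W) with class indices
    loss.backward()
    optimizer.step()
    return loss.item()
```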
Research in high energy physics (HEP) requires huge amounts of computing and storage, placing strong constraints on code speed and resource usage. To meet these requirements, a compiled high-performance language is typically used, while for physicists, who focus on the application when developing the code, better research productivity argues for a high-level programming language. A popular approach consists of combining Python, used for the high-level interface, and C++, used for the computing-intensive part of the code. A more convenient and efficient approach would be to use a single language that provides both high-level programming and high performance. The Julia programming language, developed at MIT especially to allow the use of a single language in research activities, has followed this path. In this paper, the applicability of the Julia language to HEP research is explored, covering the different aspects that are important for HEP code development: runtime performance, handling of large projects, interfacing with legacy code, distributed computing, training, and ease of programming. The study shows that the HEP community would benefit from a large-scale adoption of this programming language. The HEP-specific foundation libraries that would need to be consolidated are identified.
This paper proposes a novel signed $\beta$-model for directed signed networks, which are frequently encountered in application domains but largely neglected in the literature. The proposed signed $\beta$-model decomposes a directed signed network as the difference of two unsigned networks and embeds each node with two latent factors for its in-status and out-status. The presence of negative edges leads to a non-concave log-likelihood, and a one-step estimation algorithm is developed to facilitate parameter estimation, which is efficient both theoretically and computationally. We also develop an inferential procedure for pairwise and multiple node comparisons under the signed $\beta$-model, which addresses the lack of uncertainty quantification for node ranking. Theoretical results are established for the coverage probability of the confidence intervals, as well as false discovery rate (FDR) control for multiple node comparisons. The finite-sample performance of the signed $\beta$-model is also examined through extensive numerical experiments on both synthetic and real-life networks.
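The following toy simulation illustrates one plausible reading of the decomposition "signed network = difference of two unsigned networks", with each node carrying separate out- and in-parameters; the actual likelihood of the proposed signed $\beta$-model may be parameterized differently.

```python
# Illustrative simulation only: positive and negative edges are drawn from two
# beta-model-like unsigned networks, and the signed adjacency is their difference.
import numpy as np

rng = np.random.default_rng(0)

def simulate_signed_network(alpha_out, alpha_in, gamma_out, gamma_in):
    """Return a signed adjacency matrix A = A_plus - A_minus."""
    n = len(alpha_out)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    A_plus = np.zeros((n, n), dtype=int)
    A_minus = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p_pos = sigmoid(alpha_out[i] + alpha_in[j])  # positive-edge propensity
            p_neg = sigmoid(gamma_out[i] + gamma_in[j])  # negative-edge propensity
            A_plus[i, j] = rng.random() < p_pos
            A_minus[i, j] = (rng.random() < p_neg) and not A_plus[i, j]
    return A_plus - A_minus

n = 20
A = simulate_signed_network(rng.normal(size=n), rng.normal(size=n),
                            rng.normal(size=n) - 1.0, rng.normal(size=n) - 1.0)
print("positive edges:", (A == 1).sum(), "negative edges:", (A == -1).sum())
```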
Logical modeling is a powerful tool in biology, offering a system-level understanding of the complex interactions that govern biological processes. A gap that hinders the scalability of logical models is the need to specify the update function of every vertex in the network depending on the status of its predecessors. To address this, we introduce in this paper the concept of strong regulation, where a vertex is only updated to active/inactive if all its predecessors agree in their influences; otherwise, it is set to ambiguous. We explore the interplay between active, inactive, and ambiguous influences in a network. We discuss the existence of phenotype attractors in such networks, where the status of some of the variables is fixed to active/inactive, while the others can have an arbitrary status, including ambiguous.
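A small sketch of the strong-regulation update under one plausible reading of the rule, assuming each edge carries a sign (+1 for activation, -1 for inhibition); the precise influence semantics, in particular for ambiguous predecessors, are illustrative assumptions.

```python
# Three-valued "strong regulation" update: a vertex becomes ACTIVE/INACTIVE only
# if all incoming influences agree, and AMBIGUOUS otherwise.
ACTIVE, INACTIVE, AMBIGUOUS = 1, -1, 0

def influence(pred_state, edge_sign):
    """Influence a predecessor exerts on its target: +1, -1, or ambiguous (0)."""
    if pred_state == AMBIGUOUS:
        return AMBIGUOUS
    return pred_state * edge_sign  # active activator / inactive inhibitor -> +1, etc.

def strong_update(vertex, state, predecessors):
    """predecessors: dict mapping vertex -> list of (pred_vertex, edge_sign)."""
    preds = predecessors.get(vertex, [])
    if not preds:
        return state[vertex]       # no regulators: keep current status
    infl = {influence(state[p], s) for p, s in preds}
    if infl == {ACTIVE}:
        return ACTIVE
    if infl == {INACTIVE}:
        return INACTIVE
    return AMBIGUOUS               # disagreement or ambiguity among regulators

# Example: v is activated by a and inhibited by b.
predecessors = {"v": [("a", +1), ("b", -1)]}
state = {"a": ACTIVE, "b": INACTIVE, "v": AMBIGUOUS}
print(strong_update("v", state, predecessors))  # both influences agree -> ACTIVE
```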
Analysis of higher-order organization, usually in terms of small connected subgraphs called motifs, is a fundamental task on complex networks. This paper studies a new problem of testing higher-order clusterability: given query access to an undirected graph, can we judge whether this graph can be partitioned into a few clusters of highly connected motifs? This problem extends earlier work by Czumaj et al. (STOC '15), who recognized cluster structure in graphs using the framework of property testing. In this paper, a notion of a good graph cluster in the higher-order setting is first defined. Then, a query lower bound is given for testing whether such a good cluster exists. Finally, an optimal sublinear-time algorithm is developed for testing clusterability based on triangles.
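For background, the sketch below constructs the standard triangle-motif adjacency matrix commonly used in higher-order clustering, where the weight of an edge counts the triangles it participates in; this is not the paper's sublinear-time tester, only the underlying motif-based view of the graph.

```python
# Triangle-motif adjacency: W[i, j] counts the triangles that edge (i, j) lies in.
import numpy as np

def triangle_motif_adjacency(A):
    """A: symmetric 0/1 adjacency matrix of an undirected graph."""
    A = np.asarray(A)
    common = A @ A      # common[i, j] = number of common neighbours of i and j
    return A * common   # keep only pairs that are themselves edges

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(triangle_motif_adjacency(A))  # the edge (2, 3) lies in no triangle
```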
Polylla is a polygonal mesh generation algorithm that produces meshes with arbitrarily shaped polygons using the concept of terminal-edge regions. Until now, Polylla has been limited to 2D meshes, but in this work, we extend Polylla to 3D volumetric meshes. We present two versions of Polylla 3D. The first version generates terminal-edge regions, converts them into polyhedra, and repairs polyhedra that are joined only by an edge. This version differs from the original Polylla algorithm in that it does not have the same phases as the 2D version. In the second version, we define two new concepts: the longest-face propagation path and terminal-face regions. We use these concepts to create an almost direct extension of the 2D Polylla mesh with the same three phases: label phase, traversal phase, and repair phase.
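The sketch below illustrates the longest-face propagation path idea by analogy with the 2D longest-edge propagation path: starting from a tetrahedron, repeatedly cross its largest-area face into the neighboring tetrahedron, stopping at a terminal face (a face that is the largest face of both tetrahedra sharing it, or a boundary face). The data layout and details are assumptions, not the authors' implementation.

```python
# Illustrative longest-face propagation on a tetrahedral mesh given as vertex
# coordinates and 4-tuples of vertex indices.
import itertools
import numpy as np

def face_area(pts):
    a, b, c = (np.asarray(p, dtype=float) for p in pts)
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def largest_face(tet, verts):
    """Return the (sorted) vertex triple of the largest-area face of a tetrahedron."""
    return max((tuple(sorted(f)) for f in itertools.combinations(tet, 3)),
               key=lambda f: face_area([verts[v] for v in f]))

def longest_face_propagation(start, tets, verts):
    """Follow largest faces from tets[start]; return the terminal face reached."""
    face_to_tets = {}
    for t, tet in enumerate(tets):
        for f in itertools.combinations(sorted(tet), 3):
            face_to_tets.setdefault(f, []).append(t)
    current, visited = start, set()
    while current not in visited:
        visited.add(current)
        f = largest_face(tets[current], verts)
        neighbours = [t for t in face_to_tets[f] if t != current]
        if not neighbours:                        # boundary face: treat as terminal
            return f
        nxt = neighbours[0]
        if largest_face(tets[nxt], verts) == f:   # shared largest face -> terminal
            return f
        current = nxt
    return largest_face(tets[current], verts)
```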
The causal roadmap is a formal framework for causal and statistical inference that supports clear specification of the causal question, interpretable and transparent statement of the required causal assumptions, robust inference, and optimal precision. The roadmap is thus particularly well-suited to evaluating longitudinal causal effects using large-scale registries; however, application of the roadmap to registry data also introduces particular challenges. In this paper we provide a detailed case study of the longitudinal causal roadmap applied to the Danish National Registry to evaluate the comparative effectiveness of second-line diabetes drugs on dementia risk. Specifically, we evaluate the difference in counterfactual five-year cumulative risk of dementia if a target population of adults with type 2 diabetes had initiated and remained on GLP-1 receptor agonists (GLP-1RAs, a second-line diabetes drug) compared to a range of active comparator protocols. Time-dependent confounding is accounted for through use of the iterated conditional expectation representation of the longitudinal g-formula as the statistical estimand. Statistical estimation uses longitudinal targeted maximum likelihood estimation, incorporating machine learning. We provide practical guidance on the implementation of the roadmap using registry data and highlight how rare exposures and outcomes over long-term follow-up can raise challenges for flexible and robust estimators, even in the context of the large sample sizes provided by the registry. We demonstrate how outcome-blind simulations can be used to help address these challenges by supporting careful estimator pre-specification. We find a protective effect of GLP-1RAs compared to some, but not all, other second-line treatments.
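As a simplified illustration of the iterated conditional expectation representation, the sketch below performs ICE g-computation for a two-time-point "always treat" strategy, without the censoring handling or the TMLE targeting step used in the paper; column names and the choice of regressions are assumptions.

```python
# Simplified ICE g-computation for a two-time-point sustained-treatment strategy.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

def ice_always_treat(df):
    """df columns: L0, A0, L1, A1, Y (binary outcome at end of follow-up)."""
    # Backward step t=1: regress Y on the full history, then predict under A0=A1=1.
    m1 = LogisticRegression().fit(df[["L0", "A0", "L1", "A1"]], df["Y"])
    q1 = m1.predict_proba(df.assign(A0=1, A1=1)[["L0", "A0", "L1", "A1"]])[:, 1]

    # Backward step t=0: regress the pseudo-outcome q1 on the earlier history,
    # then predict under A0=1 (a linear fit is used here only for simplicity).
    m0 = LinearRegression().fit(df[["L0", "A0"]], q1)
    q0 = m0.predict(df.assign(A0=1)[["L0", "A0"]])

    # Counterfactual risk estimate: the mean of the final predictions.
    return q0.mean()

# Toy data: baseline covariate L0, treatments A0/A1, time-varying covariate L1, outcome Y.
rng = np.random.default_rng(0)
n = 5000
L0 = rng.normal(size=n)
A0 = rng.binomial(1, 0.5, size=n)
L1 = L0 + A0 + rng.normal(size=n)
A1 = rng.binomial(1, 0.5, size=n)
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * L1 - A1))))
print(ice_always_treat(pd.DataFrame(dict(L0=L0, A0=A0, L1=L1, A1=A1, Y=Y))))
```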
One central theme in machine learning is function estimation from sparse and noisy data. An example is supervised learning, where the elements of the training set are pairs, each containing an input location and an output response. In the last decades, a substantial amount of work has been devoted to designing estimators for the unknown function and to studying their convergence to the optimal predictor, also characterizing the learning rate. These results typically rely on stationarity assumptions where input locations are drawn from a probability distribution that does not change in time. In this work, we consider kernel-based ridge regression and derive convergence conditions under non-stationary distributions, also addressing cases where stochastic adaptation may happen infinitely often. This includes important exploration-exploitation problems where, e.g., a set of agents/robots has to monitor an environment to reconstruct a sensorial field, and their movement rules are continuously updated on the basis of the acquired knowledge of the field and/or the surrounding environment.
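For reference, a minimal kernel ridge regression sketch with a Gaussian kernel is given below; the non-stationary sampling discussed above affects only where the training inputs come from, not the form of the estimator.

```python
# Kernel ridge regression: f(x) = k(x, X) (K + lam * n * I)^{-1} y.
import numpy as np

def gaussian_kernel(X1, X2, lengthscale=1.0):
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def kernel_ridge_fit(X, y, lam=1e-2, lengthscale=1.0):
    K = gaussian_kernel(X, X, lengthscale)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return lambda Xs: gaussian_kernel(Xs, X, lengthscale) @ alpha

# Toy usage: input locations drawn from a distribution that drifts over time.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
X = (np.sin(3 * t) + 0.1 * rng.standard_normal(200)).reshape(-1, 1)  # drifting inputs
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(200)
f_hat = kernel_ridge_fit(X, y, lam=1e-3, lengthscale=0.3)
print(f_hat(np.array([[0.0], [0.5]])))
```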
We discuss recently developed methods that quantify the stability and generalizability of statistical findings under distributional changes. In many practical problems, the data are not drawn i.i.d. from the target population. For example, unobserved sampling bias, batch effects, or unknown associations might inflate the variance compared to i.i.d. sampling. For reliable statistical inference, it is thus necessary to account for these types of variation. We discuss and review two methods that allow quantifying distributional stability based on a single dataset. The first method computes the sensitivity of a parameter under worst-case distributional perturbations to understand which types of shift pose a threat to external validity. The second method treats distributional shifts as random, which allows assessing average robustness (instead of worst-case robustness). Based on a stability analysis of multiple estimators on a single dataset, it integrates both sampling and distributional uncertainty into a single confidence interval.
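The following toy example illustrates, in a simplified way that is not the authors' exact procedure, the idea of folding distributional variability into a confidence interval: the target parameter (here a mean) is re-estimated under random reweightings of the sample, and the spread across reweightings is added to the usual sampling variance.

```python
# Toy stability interval: widen a CI by the spread of reweighted estimates.
import numpy as np

def stability_interval(x, n_perturb=200, level=1.96, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    estimate = x.mean()
    sampling_var = x.var(ddof=1) / n
    # Random distributional perturbations via exponential reweighting of the sample.
    perturbed = [np.average(x, weights=rng.exponential(size=n)) for _ in range(n_perturb)]
    distributional_var = np.var(perturbed, ddof=1)
    half_width = level * np.sqrt(sampling_var + distributional_var)
    return estimate - half_width, estimate + half_width

x = np.random.default_rng(1).normal(loc=1.0, size=500)
print(stability_interval(x))
```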
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval due to its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as a first attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I introduce the concept of the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset demonstrate the satisfactory performance of SRH.
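A generic sketch of a pairwise deep supervised hashing objective is given below; the shadow mechanism of SRH is specific to the paper and is not reproduced here. A CNN feature is mapped to a relaxed binary code via tanh, similar pairs are pulled together, dissimilar pairs are pushed apart, and a quantization term drives the outputs toward +/-1.

```python
# Generic pairwise deep hashing head and loss (PyTorch), not the SRH method itself.
import torch
import torch.nn as nn

class HashHead(nn.Module):
    def __init__(self, feature_dim=512, code_bits=48):
        super().__init__()
        self.fc = nn.Linear(feature_dim, code_bits)

    def forward(self, features):
        return torch.tanh(self.fc(features))  # relaxed codes in (-1, 1)

def pairwise_hash_loss(codes, labels, margin=2.0, quant_weight=0.1):
    """codes: (B, bits); labels: (B,) integer class labels."""
    dist = torch.cdist(codes, codes)                    # pairwise Euclidean distances
    same = (labels[:, None] == labels[None, :]).float()
    contrastive = same * dist.pow(2) + (1 - same) * torch.clamp(margin - dist, min=0).pow(2)
    quantization = (codes.abs() - 1).pow(2).mean()      # push entries toward +/-1
    return contrastive.mean() + quant_weight * quantization

# Toy usage with random "CNN features".
codes = HashHead()(torch.randn(8, 512))
print(pairwise_hash_loss(codes, torch.randint(0, 10, (8,))))
```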