Navigation in complex 3D scenarios requires appropriate environment representation for efficient scene understanding and trajectory generation. We propose a highly efficient and extensible global navigation framework based on a tomographic understanding of the environment to navigate ground robots in multi-layer structures. Our approach generates tomogram slices using the point cloud map to encode the geometric structure as ground and ceiling elevations. Then it evaluates the scene traversability considering the robot's motion capabilities. Both the tomogram construction and the scene evaluation are accelerated through parallel computation. Our approach further alleviates the trajectory generation complexity compared with planning in 3D spaces directly. It generates 3D trajectories by searching through multiple tomogram slices and separately adjusts the robot height to avoid overhangs. We evaluate our framework in various simulation scenarios and further test it in the real world on a quadrupedal robot. Our approach reduces the scene evaluation time by 3 orders of magnitude and improves the path planning speed by 3 times compared with existing approaches, demonstrating highly efficient global navigation in various complex 3D environments. The code is available at: //github.com/byangw/PCT_planner.
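As a minimal illustration of the tomogram idea (not the authors' implementation; the grid size, slice bounds, and point cloud below are made up), one slice can be built by binning points into a 2D grid and recording per-cell ground and ceiling elevations:

```python
# Hypothetical sketch: encode one tomogram slice of a point cloud as
# per-cell ground (min z) and ceiling (max z) elevations.
def tomogram_slice(points, cell=1.0, z_min=0.0, z_max=3.0):
    ground, ceiling = {}, {}
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue  # point belongs to another slice
        key = (int(x // cell), int(y // cell))
        ground[key] = min(z, ground.get(key, float("inf")))
        ceiling[key] = max(z, ceiling.get(key, float("-inf")))
    return ground, ceiling

# Toy cloud: three points in the slice, one above it
cloud = [(0.2, 0.3, 0.0), (0.4, 0.1, 2.5), (1.5, 0.2, 0.1), (5.0, 5.0, 9.0)]
ground, ceiling = tomogram_slice(cloud)
```

In the actual framework this binning, and the subsequent traversability evaluation, would be parallelized; the sketch only shows the slice encoding itself.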
Smart contract transactions associated with security attacks often exhibit distinct behavioral patterns compared with historical benign transactions before the attacking events. While many runtime monitoring and guarding mechanisms have been proposed to validate invariants and stop anomalous transactions on the fly, the empirical effectiveness of the invariants used remains largely unexplored. In this paper, we studied 23 prevalent invariants of 8 categories, which are either deployed in high-profile protocols or endorsed by leading auditing firms and security experts. Using these well-established invariants as templates, we developed a tool, Trace2Inv, which dynamically generates new invariants customized for a given contract based on its historical transaction data. We evaluated Trace2Inv on 42 smart contracts that fell victim to 27 distinct exploits on the Ethereum blockchain. Our findings reveal that the most effective invariant guard alone can successfully block 18 of the 27 identified exploits with minimal gas overhead. Our analysis also shows that most of the invariants remain effective even when experienced attackers attempt to bypass them. Additionally, we studied the possibility of combining multiple invariant guards, which blocked up to 23 of the 27 benchmark exploits and achieved false positive rates as low as 0.32%. Trace2Inv outperforms current state-of-the-art works on smart contract invariant mining and transaction attack detection in terms of both practicality and accuracy. Although Trace2Inv is not primarily designed for transaction attack detection, it surprisingly uncovered two previously unreported exploit transactions that occurred earlier than any reported exploit transactions against the same victim contracts.
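To make the guard idea concrete, here is a hypothetical sketch (not one of Trace2Inv's actual invariant templates; the bound rule, slack factor, and history values are invented) of inferring an invariant from historical benign transactions and checking new values against it:

```python
# Hypothetical invariant template: a value observed in a new transaction
# must stay below slack * (historical maximum from benign transactions).
def infer_upper_bound(historical_values, slack=1.5):
    return slack * max(historical_values)

class InvariantGuard:
    def __init__(self, bound):
        self.bound = bound

    def check(self, value):
        # True -> transaction passes the guard; False -> it is blocked
        return value <= self.bound

history = [100, 250, 180, 220]   # e.g. token amounts in past benign txs
guard = InvariantGuard(infer_upper_bound(history))
```

A real guard would run on-chain (or in a monitor) against decoded transaction traces; the sketch only shows the mine-from-history, check-at-runtime pattern.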
Despite extensive safety assessments of drugs prior to their introduction to the market, certain adverse drug reactions (ADRs) remain undetected. The primary objective of pharmacovigilance is to identify these ADRs (i.e., signals). In addition to traditional spontaneous reporting systems (SRSs), electronic healthcare (EHC) data is being used for signal detection as well. Unlike SRS data, EHC data is longitudinal and thus requires assumptions about the patient's drug exposure history and its impact on ADR occurrences over time, assumptions that many current methods make only implicitly. We propose an exposure model framework that explicitly models the longitudinal relationship between the drug and the ADR. By considering multiple such models simultaneously, we can detect signals that might be missed by other approaches. The parameters of these models are estimated using maximum likelihood, and the Bayesian Information Criterion (BIC) is employed to select the most suitable model. Since the BIC is connected to the posterior distribution, it serves the dual purpose of identifying the best-fitting model and determining the presence of a signal by evaluating the posterior probability of the null model. We evaluate the effectiveness of this framework through a simulation study, for which we develop an EHC data simulator. Additionally, we conduct a case study applying our approach to four drug-ADR pairs using an EHC dataset comprising over 1.2 million insured individuals. Both the method and the EHC data simulator are publicly available in the R package expard: //github.com/bips-hb/expard.
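The BIC-based selection step can be sketched as follows (a minimal toy example, not the package's exposure models; the Bernoulli models and the ADR indicator data are made up for illustration). A null model with one common ADR rate is compared against an exposure model with separate rates, and the BIC values are converted into an approximate posterior probability of the null model:

```python
import math

def bic(log_lik, k, n):
    # BIC = k * ln(n) - 2 * ln(L-hat); lower is better
    return k * math.log(n) - 2 * log_lik

def bernoulli_loglik(xs, p):
    p = min(max(p, 1e-12), 1 - 1e-12)   # guard against log(0)
    ones = sum(xs)
    return ones * math.log(p) + (len(xs) - ones) * math.log(1 - p)

# Made-up ADR occurrence indicators during exposed / unexposed periods
exposed   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
unexposed = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
all_obs = exposed + unexposed
n = len(all_obs)

# Null model: one common ADR rate (k = 1, MLE = sample mean)
bic_null = bic(bernoulli_loglik(all_obs, sum(all_obs) / n), 1, n)

# Exposure model: separate rates for exposed and unexposed periods (k = 2)
ll_exp = (bernoulli_loglik(exposed, sum(exposed) / len(exposed))
          + bernoulli_loglik(unexposed, sum(unexposed) / len(unexposed)))
bic_exp = bic(ll_exp, 2, n)

# BIC approximation of the posterior probability of the null model
w_null, w_exp = math.exp(-bic_null / 2), math.exp(-bic_exp / 2)
post_null = w_null / (w_null + w_exp)
```

A low posterior probability of the null model is read as evidence for a signal; the framework in the paper does this across a family of longitudinal exposure models rather than the two toy models above.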
Teleportation, a widely-used locomotion technique in Virtual Reality (VR), allows instantaneous movement within VR environments. Enhanced hand tracking in modern VR headsets has popularized hands-only teleportation methods, which eliminate the need for physical controllers. However, these techniques have not fully explored the potential of bi-manual input, where each hand plays a distinct role in teleportation: one controls the teleportation point and the other confirms selections. Additionally, the influence of users' posture, whether sitting or standing, on these techniques remains unexplored. Furthermore, previous teleportation evaluations lacked assessments based on established human motor models such as Fitts' Law. To address these gaps, we conducted a user study (N=20) to evaluate bi-manual pointing performance in VR teleportation tasks, considering both sitting and standing postures. We proposed a variation of the Fitts' Law model to accurately assess users' teleportation performance. We designed and evaluated various bi-manual teleportation techniques, comparing them to uni-manual and dwell-based techniques. Results showed that bi-manual techniques, particularly when the dominant hand is used for pointing and the non-dominant hand for selection, enable faster teleportation compared to other methods. Furthermore, bi-manual and dwell techniques proved significantly more accurate than uni-manual teleportation. Moreover, our proposed Fitts' Law variation more accurately predicted users' teleportation performance compared to existing models. Finally, we developed a set of guidelines for designers to enhance VR teleportation experiences and optimize user interactions.
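For readers unfamiliar with the baseline model, here is the standard Shannon formulation of Fitts' Law (not the authors' proposed variation) fitted by ordinary least squares on made-up teleportation trial data:

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation: ID = log2(D / W + 1)
    return math.log2(distance / width + 1)

# Hypothetical (distance, target width, movement time in s) measurements
trials = [(2.0, 0.5, 0.62), (4.0, 0.5, 0.83), (8.0, 0.5, 1.05), (4.0, 1.0, 0.70)]

# Ordinary least squares for MT = a + b * ID
ids = [index_of_difficulty(d, w) for d, w, _ in trials]
mts = [t for _, _, t in trials]
n = len(trials)
mean_id, mean_mt = sum(ids) / n, sum(mts) / n
b = (sum((i - mean_id) * (t - mean_mt) for i, t in zip(ids, mts))
     / sum((i - mean_id) ** 2 for i in ids))
a = mean_mt - b * mean_id
```

The paper's contribution is a variation of this model tailored to teleportation; the sketch only shows the baseline form that such variations adjust.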
Parallelisation in Bayesian optimisation is a common strategy but faces several challenges: the need for flexibility in acquisition functions and kernel choices, flexibility in dealing with discrete and continuous variables simultaneously, model misspecification, and fast, massive parallelisation. To address these challenges, we introduce a versatile and modular framework for batch Bayesian optimisation via probabilistic lifting with kernel quadrature, called SOBER, which we present as a Python library based on GPyTorch/BoTorch. Our framework offers the following unique benefits: (1) versatility in downstream tasks under a unified approach; (2) a gradient-free sampler, which does not require the gradient of acquisition functions, offering domain-agnostic sampling (e.g., discrete and mixed variables, non-Euclidean spaces); (3) flexibility in the domain prior distribution; (4) adaptive batch size (autonomous determination of the optimal batch size); (5) robustness against a misspecified reproducing kernel Hilbert space; and (6) a natural stopping criterion.
Regional planning processes and associated redevelopment projects can be complex due to the vast amount of diverse data involved. However, all of this data shares a common geographical reference, especially in the renaturation of former open-cast mining areas. To ensure safety, it is crucial to maintain a comprehensive overview of the interrelated data and draw accurate conclusions. This requires special tools and can be a very time-consuming process. A geographical information system (GIS) is well-suited for this purpose, but even a GIS has limitations when dealing with multiple data types and sources. Additional tools are often necessary to process and view all the data, which can complicate the planning process. Our paper describes a system architecture that addresses the aforementioned issues and provides a simple, yet flexible tool for these activities. The architecture is based on microservices using Docker and is divided into a backend and a frontend. The backend simplifies and generalizes the integration of different data types, while a graph database is used to link relevant data and reveal potential new relationships between them. Finally, a modern web frontend displays the data and relationships.
The robustness of SLAM (Simultaneous Localization and Mapping) algorithms under challenging environmental conditions is critical for the success of autonomous driving. However, the real-world impact of such conditions remains largely unexplored due to the difficulty of altering environmental parameters in a controlled manner. To address this, we introduce CARLA-Loc, a synthetic dataset designed for challenging and dynamic environments, created using the CARLA simulator. Our dataset integrates a variety of sensors, including cameras, event cameras, LiDAR, radar, and IMU, with tuned parameters and modifications to ensure the realism of the generated data. CARLA-Loc comprises 7 maps and 42 sequences, each varying in dynamics and weather conditions. Additionally, a pipeline script is provided that allows users to generate custom sequences conveniently. We evaluated 5 vision-based and 4 LiDAR-based SLAM algorithms across different sequences, analyzing how various challenging environmental factors influence localization accuracy. Our findings demonstrate the utility of the CARLA-Loc dataset in validating the efficacy of SLAM algorithms under diverse conditions.
Emotion recognition in conversation (ERC) aims to detect the emotion label for each utterance. Motivated by recent studies which have proven that feeding training examples in a meaningful order rather than considering them randomly can boost the performance of models, we propose an ERC-oriented hybrid curriculum learning framework. Our framework consists of two curricula: (1) a conversation-level curriculum (CC); and (2) an utterance-level curriculum (UC). In CC, we construct a difficulty measurer based on the "emotion shift" frequency within a conversation, and the conversations are then scheduled in an "easy to hard" schema according to the difficulty score returned by the difficulty measurer. UC is implemented from an emotion-similarity perspective, which progressively strengthens the model's ability to identify confusing emotions. With the proposed model-agnostic hybrid curriculum learning strategy, we observe significant performance boosts over a wide range of existing ERC models, and we are able to achieve new state-of-the-art results on four public ERC datasets.
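The conversation-level difficulty measurer can be sketched in a few lines (a toy illustration under the stated "emotion shift" frequency idea; the conversations, labels, and normalization choice here are invented, not the paper's datasets):

```python
def emotion_shift_score(labels):
    # difficulty = fraction of consecutive utterance pairs whose
    # emotion label changes ("emotion shift" frequency)
    if len(labels) < 2:
        return 0.0
    shifts = sum(a != b for a, b in zip(labels, labels[1:]))
    return shifts / (len(labels) - 1)

# Hypothetical conversations with per-utterance emotion labels
conversations = {
    "conv_a": ["happy", "happy", "happy"],
    "conv_b": ["neutral", "angry", "sad", "angry"],
    "conv_c": ["sad", "sad", "neutral"],
}

# Easy-to-hard schedule: ascending difficulty score
schedule = sorted(conversations,
                  key=lambda c: emotion_shift_score(conversations[c]))
```

Training then visits conversations in `schedule` order, so low-shift (easy) conversations are seen before high-shift (hard) ones.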
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
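The chain-rule decomposition underlying this approach, I(X; Y1, Y2) = I(X; Y1) + I(X; Y2 | Y1), can be verified numerically on a discrete toy example (the samples below are made up; DEMI itself approximates these terms with learned contrastive bounds rather than counting):

```python
import math
from collections import Counter

def mi(pairs):
    # I(X;Y) in nats from the empirical joint distribution of (x, y) samples
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy data: context x, and one view split into subviews (y1, y2)
samples = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1), (0, 1, 0), (1, 0, 1)]

full  = mi([(x, (y1, y2)) for x, y1, y2 in samples])   # I(X; Y1,Y2)
first = mi([(x, y1) for x, y1, _ in samples])          # I(X; Y1)

# Conditional term I(X; Y2 | Y1) = sum_y1 p(y1) * I(X; Y2 | Y1 = y1)
cond = 0.0
for v in {y1 for _, y1, _ in samples}:
    sub = [(x, y2) for x, y1, y2 in samples if y1 == v]
    cond += len(sub) / len(samples) * mi(sub)
```

The identity holds exactly for any joint distribution, which is what licenses estimating the smaller unconditional and conditional chunks separately.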
The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we also encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
Multi-relation Question Answering is a challenging task, as it requires elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning step. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
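The hop-by-hop interface can be illustrated with a toy symbolic version (purely a sketch: the knowledge base, entity names, and relations below are invented, and the real model predicts each relation with a neural network rather than receiving it as input):

```python
# Toy knowledge base of (subject, relation) -> object triples (made up)
kb = {("jane", "mother_of"): "mary", ("mary", "spouse_of"): "tom"}

def answer(entity, predicted_relations):
    # Each hop applies one predicted relation and updates the reasoning
    # state; the trace exposes observable intermediate predictions,
    # which is what makes failures diagnosable hop by hop.
    state, trace = entity, []
    for rel in predicted_relations:
        state = kb[(state, rel)]
        trace.append((rel, state))
    return state, trace

# e.g. "Who is the spouse of Jane's mother?" -> two hops
final, trace = answer("jane", ["mother_of", "spouse_of"])
```

Inspecting `trace` after a wrong answer shows which hop's relation prediction went astray, mirroring the failure-diagnosis use described in the abstract.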