Advanced artificial intelligence (AI) systems with access to millions of research papers could inspire new research ideas that may not be conceived by humans alone. However, how interesting are these AI-generated ideas, and how can we improve their quality? Here, we introduce SciMuse, a system that uses an evolving knowledge graph built from more than 58 million scientific papers to generate personalized research ideas via an interface to GPT-4. We conducted a large-scale human evaluation with over 100 research group leaders from the Max Planck Society, who ranked more than 4,000 personalized research ideas based on their level of interest. This evaluation allows us to understand the relationships between scientific interest and the core properties of the knowledge graph. We find that data-efficient machine learning can predict research interest with high precision, allowing us to optimize the interest level of generated research ideas. This work represents a step towards an artificial scientific muse that could catalyze unforeseen collaborations and suggest interesting avenues for scientists.
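As a rough illustration of the interest-prediction step (a sketch, not the authors' implementation), one could describe each suggested idea by simple knowledge-graph features of its concept pair and train a standard classifier to separate high-interest from low-interest ideas. The feature names, labels, and estimator choice below are assumptions for illustration only.

```python
# Hypothetical sketch: predict whether a generated idea will be rated
# "interesting" from simple knowledge-graph features of its concept pair.
# Features, labels, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_ideas = 4000

# Placeholder features for each (concept_a, concept_b) pair.
X = np.column_stack([
    rng.integers(1, 500, n_ideas),   # degree of concept_a in the graph
    rng.integers(1, 500, n_ideas),   # degree of concept_b
    rng.integers(0, 50, n_ideas),    # number of shared neighbours
    rng.random(n_ideas),             # semantic similarity of the two concepts
])
y = rng.integers(0, 2, n_ideas)      # 1 = rated interesting, 0 = not (placeholder)

clf = GradientBoostingClassifier(random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```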
This paper investigates an efficient exponential integrator generalized multiscale finite element method for solving a class of time-evolving partial differential equations in bounded domains. The proposed method first performs the spatial discretization of the model problem using the constraint energy minimizing generalized multiscale finite element method (CEM-GMsFEM). This approach consists of two stages. First, the auxiliary space is constructed by solving local spectral problems, where the basis functions corresponding to small eigenvalues are retained. Second, the multiscale basis functions are obtained from the auxiliary space by solving local energy minimization problems over oversampling domains; these basis functions decay exponentially outside the corresponding local oversampling regions. We consider first- and second-order explicit exponential Runge-Kutta schemes for the temporal discretization to build a fully discrete numerical solution. The exponential integration strategy in time allows us to take full advantage of the CEM-GMsFEM, since its stability properties permit larger time steps. We derive error estimates in the energy norm under suitable regularity assumptions. Finally, we provide numerical experiments to support the efficiency of the proposed method.
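For concreteness, the first-order exponential Runge-Kutta scheme (exponential Euler) has the standard form below, written here for a generic semidiscrete semilinear system obtained after the CEM-GMsFEM spatial discretization; the paper's exact formulation and notation may differ.

```latex
% Exponential Euler step with time step \tau, applied to the semidiscrete
% system u_h'(t) = -A_h u_h(t) + f(u_h(t)) (generic sketch, not the paper's
% exact notation):
u_h^{n+1} = e^{-\tau A_h}\, u_h^{n} + \tau\, \varphi_1(-\tau A_h)\, f\!\left(u_h^{n}\right),
\qquad
\varphi_1(z) = \frac{e^{z} - 1}{z}.
```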
This research presents a comprehensive approach to predicting the duration of traffic incidents and classifying them as short-term or long-term across the Sydney Metropolitan Area. Leveraging a dataset that encompasses detailed records of traffic incidents, road network characteristics, and socio-economic indicators, we train and evaluate a variety of advanced machine learning models, including Gradient Boosted Decision Trees (GBDT), Random Forest, LightGBM, and XGBoost. The models are assessed using Root Mean Square Error (RMSE) for regression tasks and the F1 score for classification tasks. Our experimental results demonstrate that XGBoost and LightGBM outperform conventional models, with XGBoost achieving the lowest RMSE of 33.7 for predicting incident duration and the highest classification F1 score of 0.62 at a 30-minute duration threshold. For classification, the 30-minute threshold balances performance, with 70.84% short-term and 62.72% long-term duration classification accuracy. Feature importance analysis, employing both tree split counts and SHAP values, identifies the number of affected lanes, traffic volume, and the types of primary and secondary vehicles as the most influential features. The proposed methodology not only achieves high predictive accuracy but also provides stakeholders with vital insights into the factors contributing to incident durations. These insights enable more informed decision-making for traffic management and response strategies. The code is available at //github.com/Future-Mobility-Lab/SydneyIncidents
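A minimal sketch of this modelling setup is shown below (not the released code; see the linked repository). The file name and feature columns are illustrative placeholders, and the 30-minute threshold is taken from the abstract.

```python
# Sketch: duration regression (RMSE) and short/long-term classification (F1)
# with XGBoost, plus SHAP feature importance. Column names are placeholders.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.metrics import mean_squared_error, f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("sydney_incidents.csv")          # hypothetical file name
features = ["affected_lanes", "traffic_volume",
            "primary_vehicle_type", "secondary_vehicle_type"]
X = pd.get_dummies(df[features])
y_duration = df["duration_minutes"]
y_longterm = (y_duration > 30).astype(int)        # 30-minute threshold

X_tr, X_te, yd_tr, yd_te, yc_tr, yc_te = train_test_split(
    X, y_duration, y_longterm, test_size=0.2, random_state=42)

# Regression: predict incident duration, report RMSE.
reg = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05)
reg.fit(X_tr, yd_tr)
rmse = np.sqrt(mean_squared_error(yd_te, reg.predict(X_te)))

# Classification: short-term vs long-term, report F1.
clf = xgb.XGBClassifier(n_estimators=500, learning_rate=0.05)
clf.fit(X_tr, yc_tr)
f1 = f1_score(yc_te, clf.predict(X_te))

# Feature importance via SHAP values on the held-out set.
shap_values = shap.TreeExplainer(clf).shap_values(X_te)
print(f"RMSE={rmse:.1f}  F1={f1:.2f}")
```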
In the field of autonomous driving research, the use of immersive virtual reality (VR) techniques is widespread to enable a variety of studies under safe and controlled conditions. However, this methodology is only valid and consistent if the conduct of participants in the simulated setting mirrors their actions in an actual environment. In this paper, we present a first, innovative approach to evaluating what we term the behavioural gap, a concept that captures the disparity in a participant's conduct when engaging in a VR experiment compared to an equivalent real-world situation. To this end, we developed a digital twin of a pre-existing crosswalk and carried out a field experiment (N=18) to investigate pedestrian-autonomous vehicle interaction in both real and simulated driving conditions. In the experiment, the pedestrian attempts to cross the road in the presence of different driving styles and an external Human-Machine Interface (eHMI). By combining survey-based and behavioural analysis methodologies, we develop a quantitative approach to empirically assess the behavioural gap, as a mechanism to validate data obtained from real subjects interacting in a simulated VR-based environment. Results show that participants are more cautious and curious in VR, affecting their speed and decisions, and that VR interfaces significantly influence their actions.
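One simple way to quantify such a gap for a single behavioural metric is as a paired difference between the real and VR conditions; the sketch below uses a paired mean difference, Cohen's d, and a paired t-test, which are our illustrative assumptions rather than the paper's specific statistics.

```python
# Hypothetical sketch: summarize the "behavioural gap" for one metric
# (e.g., time to start crossing) as a paired mean difference, Cohen's d,
# and a paired t-test between real and VR conditions.
import numpy as np
from scipy import stats

def behavioural_gap(real: np.ndarray, vr: np.ndarray):
    diff = vr - real                      # per-participant paired differences
    d = diff.mean() / diff.std(ddof=1)    # Cohen's d for paired samples
    t, p = stats.ttest_rel(vr, real)      # paired t-test
    return diff.mean(), d, p

# Example with N=18 participants (synthetic numbers for illustration).
rng = np.random.default_rng(1)
real_times = rng.normal(2.0, 0.4, 18)                 # seconds, real crosswalk
vr_times = real_times + rng.normal(0.3, 0.2, 18)      # more cautious in VR
print(behavioural_gap(real_times, vr_times))
```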
Machine learning (ML)-based monitoring systems have been extensively developed to enhance the print quality of additive manufacturing (AM). In-situ and in-process data acquired using sensors can be used to train ML models that detect process anomalies, predict part quality, and adjust process parameters. However, the reproducibility of the proposed AM monitoring systems has not been investigated, and there has been no method to evaluate and improve reproducibility in the joint domain of AM and ML. Consequently, information crucial for reproducing the research is often missing from publications; thus, systems reproduced based on the publications often cannot achieve the claimed performance. This paper establishes a definition of reproducibility in this domain, proposes a reproducibility investigation pipeline, and composes a reproducibility checklist. A study is reproducible if a different team, using a different experimental setup, can obtain performance comparable to that of the original research. The reproducibility investigation pipeline sequentially guides readers through all the necessary reproduction steps, during which the reproducibility checklist helps extract the reproducibility information from the publication. A case study that reproduced a vision-based warping detection system demonstrated the usage and validated the efficacy of the proposed pipeline and checklist. We observed that the reproducibility checklist can help authors verify that all the information critical to reproducibility is provided in their publications. The investigation pipeline can help identify missing reproducibility information, which should be acquired from the original authors to achieve the claimed performance.
Longitudinal cohort studies, which follow a group of individuals over time, provide the opportunity to examine causal effects of complex exposures on long-term health outcomes. Utilizing data from multiple cohorts has the potential to add further benefit by improving the precision of estimates through data pooling and by allowing examination of effect heterogeneity through replication of analyses across cohorts. However, the interpretation of findings can be complicated by biases that may be compounded when pooling data or contribute to discrepant findings when analyses are replicated. The "target trial" is a powerful tool for guiding causal inference in single-cohort studies. Here we extend this conceptual framework to address the specific challenges that can arise in the multi-cohort setting. By providing a clear definition of the target estimand, the target trial offers a central point of reference against which biases arising in each cohort and from data pooling can be systematically assessed. Consequently, analyses can be designed to reduce these biases, and the resulting findings can be appropriately interpreted in light of potential remaining biases. We use a case study to demonstrate the framework and its potential to strengthen causal inference in multi-cohort studies through improved analysis design and clarity in the interpretation of findings.
This work explores multi-modal inference in a high-dimensional simplified model, analytically quantifying the performance gain of multi-modal inference over that of analyzing modalities in isolation. We present the Bayes-optimal performance and weak recovery thresholds in a model where the objective is to recover the latent structures from two noisy data matrices with correlated spikes. The paper derives the approximate message passing (AMP) algorithm for this model and characterizes its performance in the high-dimensional limit via the associated state evolution. The analysis holds for a broad range of priors and noise channels, which can differ across modalities. The linearization of AMP is compared numerically to the widely used partial least squares (PLS) and canonical correlation analysis (CCA) methods, which are both observed to suffer from a sub-optimal recovery threshold.
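As a reference point for the AMP recursion and its state evolution, the standard single-matrix (spiked Wigner) form is sketched below; the multi-modal model in the paper involves two noisy matrices with correlated spikes and would couple two such recursions. The normalization, denoiser notation, and single-matrix setting here are assumptions for illustration.

```latex
% Sketch: standard AMP for a single symmetric spiked matrix
%   Y = (\lambda/n)\, v v^{\top} + W,  W \sim \mathrm{GOE}(n),  \|v\|^2 = n,
% with denoisers f_t, prior V (E[V^2]=1), and G \sim \mathcal{N}(0,1).
\begin{align*}
  x^{t+1} &= Y\, f_t\!\left(x^{t}\right) - b_t\, f_{t-1}\!\left(x^{t-1}\right),
  &
  b_t &= \frac{1}{n}\sum_{i=1}^{n} f_t'\!\left(x_i^{t}\right),
\end{align*}
% State evolution: x^{t} \approx \mu_t v + \sigma_t g with g \sim \mathcal{N}(0, I_n), where
\begin{align*}
  \mu_{t+1} &= \lambda\, \mathbb{E}\!\left[V\, f_t(\mu_t V + \sigma_t G)\right],
  &
  \sigma_{t+1}^2 &= \mathbb{E}\!\left[f_t(\mu_t V + \sigma_t G)^2\right].
\end{align*}
```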
In the field of materials science and manufacturing, a vast amount of heterogeneous data exists, encompassing measurement and simulation data, machine data, publications, and more. This data serves as the bedrock of valuable knowledge that can be leveraged for various engineering applications. However, efficiently storing and handling such diverse data remains a significant challenge, often due to the lack of standardization and integration across different organizational units. Addressing these issues is crucial for fully utilizing the potential of data-driven approaches in these fields. In this paper, we present a novel technology stack named Dataspace Management System (DSMS) for powering dataspace solutions. The core of DSMS lies in its distinctive knowledge management approach, tuned to meet the specific demands of the materials science and manufacturing domain while adhering to the FAIR principles. This includes data integration, linkage, exploration, visualization, processing, and enrichment, in order to support engineers in decision-making and in solving design and optimization problems. We provide an architectural overview and describe the core components of DSMS. Additionally, we demonstrate the applicability of DSMS to typical data processing tasks in materials science through use cases from two research projects, namely StahlDigital and KupferDigital, both part of the German MaterialDigital initiative.
Large language models (LLMs) bear promise as a fast and accurate material modeling paradigm for evaluation, analysis, and design. Their vast number of trainable parameters necessitates a wealth of data to achieve accuracy and mitigate overfitting. However, experimental measurements are often limited and costly to obtain in sufficient quantities for finetuning. To this end, we present a physics-based training pipeline that tackles the pathology of data scarcity. The core enabler is a physics-based modeling framework that generates a multitude of synthetic data to align the LLM to a physically consistent initial state before finetuning. Our framework features a two-phase training strategy: (1) supervised pretraining on the abundant but less accurate synthetic data, and (2) finetuning the phase-1 model with the limited experimental data. We empirically demonstrate that supervised pretraining is vital for obtaining accurate finetuned LLMs, through the lens of learning polymer flammability metrics, where cone calorimeter data are sparse.
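A minimal sketch of the two-phase idea is shown below (not the authors' pipeline): phase 1 supervises a model on abundant physics-based synthetic data, and phase 2 finetunes it on a small experimental set at a lower learning rate. The tiny MLP and random tensors stand in for the LLM and the real feature/label data.

```python
# Sketch: phase-1 supervised pretraining on synthetic data, then phase-2
# finetuning on scarce experimental data. Model and data are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def run_phase(model, dataset, lr, epochs):
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x).squeeze(-1), y).backward()
            opt.step()
    return model

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
synthetic = TensorDataset(torch.randn(5000, 32), torch.randn(5000))   # abundant
experimental = TensorDataset(torch.randn(50, 32), torch.randn(50))    # scarce

model = run_phase(model, synthetic, lr=1e-3, epochs=3)       # phase 1: pretrain
model = run_phase(model, experimental, lr=1e-4, epochs=20)   # phase 2: finetune
```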
We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
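The conditional utilization rate described above can be sketched as the accuracy gain from adding one modality on top of another; the evaluation helper and the toy modality names below are illustrative assumptions, not the paper's code.

```python
# Sketch of the conditional utilization rate: accuracy gain from adding
# modality m1 on top of modality m2 alone. `evaluate` is a placeholder for
# evaluating the trained model with only the given modalities enabled.
def conditional_utilization_rate(evaluate, m1: str, m2: str) -> float:
    acc_both = evaluate(modalities=[m1, m2])   # accuracy with both modalities
    acc_m2 = evaluate(modalities=[m2])         # accuracy with m2 only
    return acc_both - acc_m2

# Toy example with fixed accuracies illustrating an imbalance between modalities.
toy_acc = {frozenset(["rgb", "depth"]): 0.91,
           frozenset(["rgb"]): 0.89,
           frozenset(["depth"]): 0.62}
evaluate = lambda modalities: toy_acc[frozenset(modalities)]
print(conditional_utilization_rate(evaluate, "depth", "rgb"))  # small gain
print(conditional_utilization_rate(evaluate, "rgb", "depth"))  # large gain
```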
Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion, or link prediction, i.e., to predict whether a relationship not in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
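To make the link-prediction task concrete, one of the simplest embedding models covered by such surveys, TransE, scores a candidate triple (head, relation, tail) by how well the relation vector translates the head embedding onto the tail embedding. The sketch below is our own illustration with random placeholder embeddings, not tied to any benchmark in the survey.

```python
# Minimal TransE-style scorer for link prediction (one representative
# embedding model); embeddings here are random placeholders, not trained.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Berlin"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head: str, rel: str, tail: str) -> float:
    # Higher (less negative) score = more plausible triple under TransE.
    return -np.linalg.norm(entities[head] + relations[rel] - entities[tail])

# Rank candidate tails for the query ("Paris", "capital_of", ?).
candidates = ["France", "Berlin"]
print(sorted(candidates, key=lambda t: -transe_score("Paris", "capital_of", t)))
```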