There is currently no established method for evaluating human response timing across a range of naturalistic traffic conflict types. Traditional notions derived from controlled experiments, such as perception-response time, fail to account for the situation-dependency of human responses and offer no clear way to define the stimulus in many common traffic conflict scenarios. As a result, they are not well suited for application in naturalistic settings. Our main contribution is a novel framework for measuring and modeling response times in naturalistic traffic conflicts, applicable to automated driving systems as well as other traffic safety domains. The framework holds that response timing must be understood relative to the subject's current (prior) belief and is always embedded in, and dependent on, the dynamically evolving situation. The response process is modeled as a belief update driven by perceived violations of this prior belief, that is, by surprising stimuli. The framework resolves two key limitations of traditional notions of response time when applied to naturalistic scenarios: (1) the strong situation-dependence of response timing and (2) the lack of an unambiguous definition of the stimulus. Resolving these issues is a challenge that must be addressed by any response timing model intended for naturalistic traffic conflicts. We show how the framework can be implemented by means of a relatively simple heuristic model fit to naturalistic human response data from real crashes and near-crashes in the SHRP2 dataset, and discuss how it is, in principle, generalizable to any traffic conflict scenario. We also discuss how the framework can be implemented computationally, based on evidence accumulation enhanced by machine learning-based generative models and the information-theoretic concept of surprise.
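As a rough illustration of the accumulation-to-threshold idea behind such a computational implementation, the sketch below integrates the Shannon surprise of incoming observations against a fixed Gaussian prior belief; the Gaussian prior, the leak, and the threshold are illustrative assumptions, not the fitted SHRP2 model.

```python
import numpy as np
from scipy.stats import norm

def surprise_response_time(observations, prior_mean, prior_std,
                           dt=0.05, threshold=8.0, leak=0.1):
    """Accumulate Shannon surprise of observations relative to a Gaussian
    prior belief; a response is triggered when the leaky accumulator
    crosses the threshold. Returns the response time in seconds, or
    None if no response is triggered."""
    accumulator = 0.0
    for i, obs in enumerate(observations):
        # Shannon surprise: -log p(obs | current belief)
        surprise = -norm.logpdf(obs, prior_mean, prior_std)
        # Leaky integration of surprise over one time step
        accumulator += dt * (surprise - leak * accumulator)
        if accumulator >= threshold:
            return (i + 1) * dt
    return None
```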
Early warning of dynamical transitions in complex systems or high-dimensional observational data is essential in many real-world applications, such as gene mutation, brain disease, natural disasters, financial crises, and engineering reliability. To effectively extract early warning signals, we develop a novel approach, the directed anisotropic diffusion map, which captures the latent evolutionary dynamics on a low-dimensional manifold. Applying the methodology to real electroencephalogram (EEG) data, we identify appropriate effective coordinates and derive early warning signals capable of detecting the tipping point during the state transition. Our method thus bridges the latent dynamics with the original dataset. Numerical experiments validate the framework as accurate and effective in terms of both density and transition probability, and show that the second coordinate carries meaningful information about the critical transition across various evaluation metrics.
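For context, a minimal sketch of the standard (isotropic) diffusion map construction on which such methods build is given below; the directed anisotropic variant modifies the kernel and normalization, which is not reproduced here.

```python
import numpy as np

def diffusion_map(X, epsilon=1.0, n_coords=2, t=1):
    """Standard isotropic diffusion map of an (n, d) data array X:
    Gaussian kernel, row-normalized Markov matrix, spectral embedding."""
    # Pairwise squared distances and Gaussian affinity kernel
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / epsilon)
    # Row-normalize to obtain a Markov transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector; scale by eigenvalues^t
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1] ** t
```

In this construction, the "second coordinate" referred to above would be the second of the scaled nontrivial eigenvectors.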
The evaluation of clustering results is difficult and highly dependent on the evaluated data set and the perspective of the beholder. Many different clustering quality measures attempt to provide a general means of validating clustering results; a very popular one is the Silhouette. We discuss the efficient medoid-based variant of the Silhouette, perform a theoretical analysis of its properties, provide two fast versions for its direct optimization, and discuss its use for choosing the optimal number of clusters. We combine ideas from the original Silhouette with the well-known PAM algorithm and its latest improvement, FasterPAM. One of the versions guarantees results equal to the original variant and provides a runtime speedup of $O(k^2)$. In experiments on real data with 30000 samples and $k$=100, we observed a 10464$\times$ speedup compared to the original PAMMEDSIL algorithm. Additionally, we provide a variant to choose the optimal number of clusters directly.
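A common formulation of the medoid-based Silhouette scores each point as $1 - d_1/d_2$, where $d_1$ and $d_2$ are its distances to the closest and second-closest medoid; a naive evaluation is sketched below (the fast versions discussed above avoid recomputing this from scratch during optimization).

```python
import numpy as np

def medoid_silhouette(dist, medoids):
    """Average medoid-based Silhouette. `dist` is an (n, n) pairwise
    distance matrix and `medoids` a list of k >= 2 point indices."""
    d = dist[:, medoids]               # distances to the k medoids
    part = np.partition(d, 1, axis=1)  # two smallest distances per point
    d1, d2 = part[:, 0], part[:, 1]
    with np.errstate(divide="ignore", invalid="ignore"):
        s = np.where(d2 > 0, 1.0 - d1 / d2, 0.0)
    return s.mean()
```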
Next Point-of-Interest (POI) recommendation is a critical task in location-based services that aims to provide personalized suggestions for the user's next destination. Previous works on POI recommendation have mainly focused on modeling the user's spatial preference. However, existing works that leverage spatial information rely solely on aggregating users' previously visited positions, which discourages the model from recommending POIs in novel areas; this trait of position-based methods harms performance in many situations. Additionally, incorporating sequential information into the user's spatial preference remains a challenge. In this paper, we propose Diff-POI, a diffusion-based model that samples the user's spatial preference for next POI recommendation. Inspired by the wide application of diffusion algorithms in sampling from distributions, Diff-POI encodes the user's visiting sequence and spatial character with two tailor-designed graph encoding modules, followed by a diffusion-based sampling strategy to explore the user's spatial visiting trends. We leverage the diffusion process and its reverse form to sample from the posterior distribution and optimize the corresponding score function. We design a joint training and inference framework to optimize and evaluate the proposed Diff-POI. Extensive experiments on four real-world POI recommendation datasets demonstrate the superiority of Diff-POI over state-of-the-art baseline methods. Further ablation and parameter studies reveal the functionality and effectiveness of the proposed diffusion-based sampling strategy in addressing the limitations of existing methods.
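As a generic illustration of the score-based sampling such models build on, the sketch below runs unadjusted Langevin dynamics with a learned score function; `score_fn` is a placeholder for a trained score network and the step schedule is an assumption, not Diff-POI's exact reverse process.

```python
import torch

def langevin_sample(score_fn, shape, n_steps=100, step_size=1e-3):
    """Draw a sample given a (learned) score function via unadjusted
    Langevin dynamics: x <- x + (eta/2)*score(x) + sqrt(eta)*z."""
    x = torch.randn(shape)  # initialize from a standard Gaussian
    for _ in range(n_steps):
        z = torch.randn_like(x)
        x = x + 0.5 * step_size * score_fn(x) + (step_size ** 0.5) * z
    return x
```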
We apply physics-informed neural networks to solve the constitutive relations for nonlinear, path-dependent material behavior. As a result, the trained network not only satisfies all thermodynamic constraints but also instantly provides information about the current material state (i.e., free energy, stress, and the evolution of internal variables) under any given loading scenario, without requiring initial data. One advantage of this work is that it bypasses the repetitive Newton iterations needed to solve the nonlinear equations in complex material models. Additionally, strategies are provided to reduce the derivative order required to obtain the tangent operator. The trained model can be used directly in any finite element package (or other numerical method) as a user-defined material model. However, challenges remain in the proper definition of collocation points and in integrating several inequality constraints that become active or inactive simultaneously. We tested this methodology on rate-independent processes such as the classical von Mises plasticity model with a nonlinear hardening law, as well as local damage models for interface cracking behavior with a nonlinear softening law. To demonstrate the applicability of the methodology to complex path dependency in a three-dimensional (3D) setting, we tested the approach on the equations governing a damage model for a 3D interface, a class of models frequently employed for intergranular fracture at grain boundaries. We observed perfect agreement between the results obtained through the proposed methodology and those obtained using the classical approach. Furthermore, the proposed approach requires significantly less implementation effort and computing time than traditional methods.
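A minimal one-dimensional sketch of the core mechanism, assuming a hypothetical free-energy network (not the paper's actual architecture): stress follows from automatic differentiation of the predicted energy, and a second backward pass over the same graph would give the tangent operator.

```python
import torch
import torch.nn as nn

# Hypothetical free-energy network psi(strain, internal variable); 1D only.
class EnergyNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, eps, alpha):
        return self.net(torch.stack([eps, alpha], dim=-1)).squeeze(-1)

model = EnergyNet()
eps = torch.linspace(-0.02, 0.02, 64, requires_grad=True)
alpha = torch.zeros_like(eps)        # internal variable, frozen here
psi = model(eps, alpha)              # free energy at collocation points
# Stress as the energy gradient, sigma = d(psi)/d(eps); create_graph=True
# keeps the graph so a further differentiation could yield the tangent.
sigma, = torch.autograd.grad(psi.sum(), eps, create_graph=True)
```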
Accurate segmentation of clustered microcalcifications in mammography is crucial for the diagnosis and treatment of breast cancer. Despite exhibiting expert-level accuracy, recent deep learning advancements in medical image segmentation contribute insufficiently to practical applications, due to the domain shift resulting from differences in patient posture, individual gland density, imaging modality, etc. In this paper, a novel framework named MLN-net, which can accurately segment multi-source images using only single-source images for training, is proposed for clustered microcalcification segmentation. We first propose a source-domain image augmentation method to generate multi-source images, leading to improved generalization. A structure of multiple layer normalization (LN) layers is then used to construct the segmentation network, which proves efficient for clustered microcalcification segmentation across different domains. Additionally, a branch selection strategy is designed to measure the similarity between the source domain data and the target domain data. To validate the proposed MLN-net, extensive analyses are performed, including ablation experiments and comparisons against 12 baseline methods. These experiments confirm the effectiveness of MLN-net in segmenting clustered microcalcifications from different domains, with segmentation accuracy surpassing state-of-the-art methods. Code will be available at //github.com/yezanting/MLN-NET-VERSON1.
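A toy sketch of the multiple-LN idea, with a simple stand-in similarity criterion (not the paper's actual branch-selection measure): parallel LayerNorm branches share the same features, and the branch whose output best matches the input statistics is chosen at inference.

```python
import torch
import torch.nn as nn

class MultiLNBlock(nn.Module):
    """Parallel LayerNorm branches over shared features; at test time a
    similarity criterion selects one branch per input."""
    def __init__(self, channels, n_branches=3):
        super().__init__()
        self.norms = nn.ModuleList(
            [nn.LayerNorm(channels) for _ in range(n_branches)])

    def forward(self, x, branch=None):
        if branch is not None:          # training: branch fixed externally
            return self.norms[branch](x)
        # Inference: pick the branch whose normalized output is closest
        # to the input in L2 distance (a stand-in similarity measure)
        outs = [ln(x) for ln in self.norms]
        dists = torch.stack([torch.norm(o - x) for o in outs])
        return outs[int(torch.argmin(dists))]
```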
Ontologies are traditionally expressed in the Web Ontology Language (OWL), which provides a syntax for expressing taxonomies with axioms regulating class membership. The semantics of OWL, based on Description Logic (DL), allows automated reasoning to be used to check the consistency of ontologies, perform classification, and answer DL queries. However, the open-world assumption of OWL, along with limitations in its expressiveness, makes OWL less suitable for modelling the rules and regulations used in public administration. In such cases, it is desirable to have closed-world semantics and a rule-based engine to check compliance with regulations. In this paper we describe and discuss data model management using the Shapes Constraint Language (SHACL) for concept modelling of concrete requirements in regulation documents within the public sector. We show how complex regulations, often containing a number of alternative requirements, can be expressed as constraints, and demonstrate the utility of SHACL engines in verifying instance data against the SHACL model. We discuss the benefits of modelling with SHACL compared to OWL, and demonstrate the maintainability of the SHACL model by domain experts without prior knowledge of ontology management.
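A small runnable sketch of the verification step using the pySHACL library; the example.org namespace and the permit/exemption properties are invented for illustration, but the sh:or pattern mirrors the "alternative requirements" idea described above.

```python
from pyshacl import validate

# Hypothetical instance data: an application holding a permit.
data = """
@prefix : <http://example.org/> .
:app1 a :Application ; :hasPermit :p1 .
"""
# Shape: every :Application must satisfy at least one of two alternatives.
shapes = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix : <http://example.org/> .
:ApplicationShape a sh:NodeShape ;
    sh:targetClass :Application ;
    sh:or ( [ sh:path :hasPermit ; sh:minCount 1 ]
            [ sh:path :hasExemption ; sh:minCount 1 ] ) .
"""
conforms, report_graph, report_text = validate(
    data_graph=data, shacl_graph=shapes,
    data_graph_format="turtle", shacl_graph_format="turtle")
print(conforms)  # True: app1 satisfies the first alternative
```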
Stein's method for Gaussian process approximation can be used to bound the differences between the expectations of smooth functionals $h$ of a c\`adl\`ag random process $X$ of interest and the expectations of the same functionals of a well-understood target random process $Z$ with continuous paths. Unfortunately, the class of smooth functionals for which this is easily possible is very restricted. Here, we prove an infinite-dimensional Gaussian smoothing inequality, which enables the class of functionals to be greatly expanded -- examples are Lipschitz functionals with respect to the uniform metric, and indicators of arbitrary events -- in exchange for a loss of precision in the bounds. Our inequalities are expressed in terms of the smooth test function bound, an expectation of a functional of $X$ that is closely related to classical tightness criteria, a similar expectation for $Z$, and, for the indicator of a set $K$, the probability $\mathbb{P}(Z \in K^\theta \setminus K^{-\theta})$ that the target process is close to the boundary of $K$.
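Schematically (with a placeholder error term, not the paper's actual constants and exponents), the resulting bounds for indicators take the form:

```latex
% Schematic form only: \varepsilon(\theta) collects the smooth test
% function bound and the tightness-type terms for X and Z, and grows
% as the slack parameter \theta shrinks.
\[
  \bigl| \mathbb{P}(X \in K) - \mathbb{P}(Z \in K) \bigr|
  \;\lesssim\; \varepsilon(\theta)
  \;+\; \mathbb{P}\bigl( Z \in K^{\theta} \setminus K^{-\theta} \bigr)
\]
```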
We suggest a global perspective on dynamic network flow problems that takes advantage of their similarities to port-Hamiltonian dynamics. Dynamic minimum cost flow problems are formulated as open-loop optimal control problems for general port-Hamiltonian systems with possibly state-dependent system matrices. We prove well-posedness of these systems and characterize optimal controls by the first-order optimality system, which is the starting point for the derivation of an adjoint-based gradient descent algorithm. Our theoretical analysis is complemented by a proof of concept, in which we apply the proposed algorithm to static and dynamic minimum cost flow problems on a simple directed acyclic graph, and we present numerical results to validate the approach.
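The adjoint-based loop has the usual forward-backward-update structure; a generic sketch is below, where `forward`, `adjoint`, and `gradient` are problem-specific callables supplied by the user (all names here are assumptions, not the paper's code).

```python
def adjoint_gradient_descent(u0, forward, adjoint, gradient,
                             lr=1e-2, n_iter=100):
    """Generic adjoint-based gradient descent: solve the state equation
    forward in time, the adjoint equation backward in time, assemble
    the cost gradient with respect to the control, and step."""
    u = u0
    for _ in range(n_iter):
        x = forward(u)        # solve the (port-Hamiltonian) state system
        lam = adjoint(x, u)   # solve the adjoint system backward in time
        u = u - lr * gradient(x, lam, u)  # first-order optimality residual
    return u
```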
Many researchers have identified distribution shift as a likely contributor to the reproducibility crisis in behavioral and biomedical sciences. The idea is that if treatment effects vary across individual characteristics and experimental contexts, then studies conducted in different populations will estimate different average effects. This paper uses ``generalizability" methods to quantify how much of the effect size discrepancy between an original study and its replication can be explained by distribution shift on observed unit-level characteristics. More specifically, we decompose this discrepancy into ``components" attributable to sampling variability (including publication bias), observable distribution shifts, and residual factors. We compute this decomposition for several directly replicated behavioral science experiments and find little evidence that observable distribution shifts contribute appreciably to non-replicability. In some cases, this is because there is too much statistical noise. In other cases, there is strong evidence that controlling for additional moderators is necessary for reliable replication.
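A standard transport-weighting sketch of the observable-shift component, assuming hypothetical inputs (one common estimator, not necessarily the authors' exact procedure): reweight the original sample toward the replication covariate distribution, then compare the reweighted estimate with the original one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def transported_mean(y_orig, X_orig, X_rep):
    """Reweight the original sample toward the replication covariate
    distribution using inverse-odds weights from a logistic model, and
    return the weighted mean outcome (a stand-in for the effect estimate)."""
    X = np.vstack([X_orig, X_rep])
    s = np.r_[np.zeros(len(X_orig)), np.ones(len(X_rep))]  # 1 = replication
    p = LogisticRegression().fit(X, s).predict_proba(X_orig)[:, 1]
    w = p / (1 - p)   # odds of belonging to the replication population
    return np.average(y_orig, weights=w)
```

The component attributable to observable shift is then the gap between the original estimate and this transported one, with the remaining discrepancy split between sampling variability and residual factors.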
In this paper, two novel classes of implicit exponential Runge-Kutta (ERK) methods are studied for solving highly oscillatory systems. First, we analyze the symplecticity conditions for two kinds of exponential integrators and obtain a symplectic method. To solve highly oscillatory problems effectively, we then design highly accurate implicit ERK integrators. By comparing the Taylor series expansion of the numerical solution with that of the exact solution, we verify that the order conditions of the two new kinds of exponential methods are identical to those of classical Runge-Kutta (RK) methods, which implies that highly accurate numerical methods can be formulated directly from the coefficients of existing RK methods. Furthermore, we investigate the linear stability properties of these exponential methods. Finally, numerical results not only display the long-time energy preservation of the symplectic method, but also demonstrate the accuracy and efficiency of the formulated methods in comparison with standard ERK methods.
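For orientation, the simplest explicit member of the ERK family is the exponential Euler method for semilinear systems $y' = My + f(y)$; a sketch is below (the implicit, higher-order schemes studied in the paper generalize this structure).

```python
import numpy as np
from scipy.linalg import expm

def exponential_euler(M, f, y0, h, n_steps):
    """Exponential Euler for y' = M y + f(y):
    y_{n+1} = e^{hM} y_n + h * phi1(hM) f(y_n),
    where phi1(A) = A^{-1} (e^A - I)."""
    E = expm(h * M)
    # phi1(hM) via a linear solve; assumes hM is invertible for simplicity
    phi1 = np.linalg.solve(h * M, E - np.eye(len(y0)))
    y = np.array(y0, dtype=float)
    traj = [y.copy()]
    for _ in range(n_steps):
        y = E @ y + h * phi1 @ f(y)
        traj.append(y.copy())
    return np.array(traj)
```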