Traffic accidents frequently lead to fatal injuries, contributing to over 50 million deaths as of 2023. To mitigate driving hazards and ensure personal safety, it is crucial to assist vehicles in anticipating important objects during travel. Previous research on important object detection primarily assessed the importance of individual participants, treating them as independent entities and frequently overlooking the connections between them. Unfortunately, this approach has proven less effective at detecting important objects in complex scenarios. In response, we introduce the Driving scene Relationship self-Understanding transformer (DRUformer), designed to enhance the important object detection task. DRUformer is a transformer-based multi-modal important object detection model that takes into account the relationships between all participants in the driving scenario. Recognizing that driving intention also significantly affects which objects are important during driving, we incorporate a driving intention embedding module. To assess the performance of our approach, we conducted a comparative experiment on the DRAMA dataset, pitting our model against other state-of-the-art (SOTA) models. The results demonstrate a noteworthy 16.2\% improvement in mIoU and a substantial 12.3\% boost in ACC compared to SOTA methods. Furthermore, we present a qualitative analysis of our model's ability to detect important objects across different road scenarios and object classes, highlighting its effectiveness in diverse contexts. Finally, we conduct ablation studies to assess the contribution of each proposed module in DRUformer.
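To make the general idea concrete, here is a minimal sketch of one way a driving-intention embedding can be injected as an extra token into a transformer that attends over per-object features, so that self-attention models inter-participant relations. The class name, dimensions, and intention categories are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an intention-aware, relation-modelling detector.
import torch
import torch.nn as nn

class IntentionAwareDetector(nn.Module):
    def __init__(self, feat_dim=256, n_intentions=4, n_heads=8, n_layers=4):
        super().__init__()
        self.intention_emb = nn.Embedding(n_intentions, feat_dim)
        layer = nn.TransformerEncoderLayer(feat_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.importance_head = nn.Linear(feat_dim, 1)  # per-object importance score

    def forward(self, object_feats, intention_ids):
        # object_feats: (B, N, feat_dim) features for N scene participants
        # intention_ids: (B,), e.g. 0=straight, 1=left, 2=right, 3=stop (assumed labels)
        intent = self.intention_emb(intention_ids).unsqueeze(1)   # (B, 1, D)
        tokens = torch.cat([intent, object_feats], dim=1)         # (B, N+1, D)
        encoded = self.encoder(tokens)      # self-attention relates all participants
        return self.importance_head(encoded[:, 1:]).squeeze(-1)   # drop intention token

model = IntentionAwareDetector()
scores = model(torch.randn(2, 10, 256), torch.tensor([0, 2]))
print(scores.shape)  # (2, 10): one importance logit per object
```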
In this article we consider an aggregate loss model with dependent losses. The loss occurrence process is governed by a two-state Markovian arrival process (MAP2), a Markov renewal process that allows for (1) correlated inter-loss times, (2) non-exponentially distributed inter-loss times, and (3) overdispersed loss counts. Some quantities of interest for measuring persistence in the loss occurrence process are obtained. Given a real operational risk database, the aggregate loss model is estimated by fitting the inter-loss times and the severities separately. The MAP2 is estimated via direct maximization of the likelihood function, and the severities are modeled by the heavy-tailed double Pareto-lognormal distribution. In comparison with the fit provided by the Poisson process, the results point out that taking into account the dependence and overdispersion of the inter-loss times leads to higher capital charges.
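As a hedged illustration of the arrival model (not the paper's fitting code), a MAP2 is specified by two 2x2 rate matrices: D0 for hidden-state transitions without a loss and D1 for transitions that trigger a loss, with the rows of D0 + D1 summing to zero. The sketch below simulates inter-loss times from illustrative matrices:

```python
# Simulating inter-loss times from a two-state Markovian arrival process (MAP2).
import numpy as np

rng = np.random.default_rng(0)
D0 = np.array([[-2.0, 0.5], [0.2, -1.0]])   # transitions without a loss (illustrative)
D1 = np.array([[1.0, 0.5], [0.3, 0.5]])     # transitions that trigger a loss

def simulate_map2(n_losses, state=0):
    times, t = [], 0.0
    while len(times) < n_losses:
        rate = -D0[state, state]             # total exit rate of the current state
        t += rng.exponential(1.0 / rate)
        # choose the next event proportionally to off-diagonal D0 and D1 rates
        probs = np.concatenate([D0[state], D1[state]]) / rate
        probs[state] = 0.0                   # exclude the diagonal D0 entry
        k = rng.choice(4, p=probs / probs.sum())
        if k >= 2:                           # a D1 transition marks a loss arrival
            times.append(t)
        state = k % 2
    return np.diff([0.0] + times)            # inter-loss times

print(simulate_map2(5))
```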
Temporal Action Detection (TAD) is a crucial but challenging task in video understanding. It aims to detect both the type and the start and end frames of each action instance in a long, untrimmed video. Most current models use both RGB and optical-flow streams for the TAD task, so the original RGB frames must be converted into optical-flow frames at additional computation and time cost, which is an obstacle to real-time processing. Many models also adopt two-stage strategies, which slow down inference and require complicated tuning of proposal generation. In contrast, we propose a one-stage, anchor-free temporal localization method using the RGB stream only, built on a novel Newtonian Mechanics-MLP architecture. It achieves accuracy comparable to all existing state-of-the-art models while surpassing their inference speed by a large margin: the typical inference speed reported in this paper is 4.44 videos per second on THUMOS14. In applications, because there is no need to compute optical flow, inference is even faster. This also shows that MLPs have great potential in downstream tasks such as TAD. The source code is available at //github.com/BonedDeng/TadML
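For readers unfamiliar with anchor-free temporal localization, the sketch below shows the generic pattern such heads follow: each temporal position predicts class scores plus positive distances to the action's start and end. This is a generic sketch under assumed layer sizes, not the paper's Newtonian Mechanics-MLP.

```python
# Generic one-stage, anchor-free temporal localization head.
import torch
import torch.nn as nn

class AnchorFreeTADHead(nn.Module):
    def __init__(self, in_dim=512, n_classes=20):
        super().__init__()
        self.cls = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_classes))
        self.reg = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2))   # (dist_to_start, dist_to_end)

    def forward(self, feats):                  # feats: (B, T, in_dim) RGB features
        scores = self.cls(feats)               # (B, T, n_classes) per-frame scores
        offsets = self.reg(feats).exp()        # exp keeps temporal offsets positive
        t = torch.arange(feats.size(1), device=feats.device).float()
        segments = torch.stack([t - offsets[..., 0], t + offsets[..., 1]], -1)
        return scores, segments                # candidate (start, end) per position
```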
It is well established that the Lipschitz constant of a neural network plays a prominent role in ensuring or certifying its robustness. However, computing this constant is NP-hard. In this note, by taking the activation regions at each layer into account as new constraints, we propose new quadratically constrained MIP formulations for the neural network Lipschitz estimation problem. The solutions of these problems yield lower and upper bounds on the Lipschitz constant, and we detail conditions under which they coincide with the exact Lipschitz constant.
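For context, the classical baseline that such formulations tighten is the norm-product bound: for a network with 1-Lipschitz activations (e.g. ReLU), the product of the layers' spectral norms upper-bounds the Lipschitz constant. The snippet below computes this coarse bound on random weights; it is not the paper's MIP formulation.

```python
# Classical coarse Lipschitz upper bound: product of layer spectral norms.
import numpy as np

rng = np.random.default_rng(1)
# weights of a small ReLU network R^4 -> R^8 -> R^4 -> R (illustrative)
weights = [rng.standard_normal((8, 4)),
           rng.standard_normal((4, 8)),
           rng.standard_normal((1, 4))]

upper = 1.0
for W in weights:
    upper *= np.linalg.norm(W, 2)   # spectral norm (largest singular value)
print(f"naive Lipschitz upper bound: {upper:.3f}")
```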
Most existing Mendelian randomization (MR) methods are limited by the assumption of linear causality between exposure and outcome, so the development of new non-linear MR methods is highly desirable. We introduce two-stage prediction estimation and control function estimation from econometrics into MR and extend them to non-linear causality. We give conditions for parameter identification and theoretically prove the consistency and asymptotic normality of the estimates. We compare the two methods theoretically under both linear and non-linear causality. We also extend control function estimation to a more flexible semi-parametric framework that requires no detailed parametric specification of the causal relationship. Extensive simulations numerically corroborate our theoretical results. An application to UK Biobank data reveals non-linear causal relationships between sleep duration and systolic/diastolic blood pressure.
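To illustrate the control-function idea in a non-linear setting, the hedged sketch below simulates a quadratic causal effect with an unobserved confounder: stage one regresses the exposure on the instrument and keeps the residual, and stage two includes that residual to absorb the confounding. All data and coefficients are made up for illustration; this is not the paper's estimator.

```python
# Control-function estimation of a quadratic exposure effect on simulated data.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
G = rng.binomial(2, 0.3, n)                 # instrument (genotype)
U = rng.standard_normal(n)                  # unobserved confounder
X = 0.5 * G + U + rng.standard_normal(n)    # exposure
Y = 0.3 * X + 0.2 * X**2 + U + rng.standard_normal(n)

# Stage 1: regress exposure on instrument, keep residuals (the control function)
Z1 = np.column_stack([np.ones(n), G])
coef1, *_ = np.linalg.lstsq(Z1, X, rcond=None)
V = X - Z1 @ coef1

# Stage 2: include the residual V to absorb confounding, allow non-linearity
Z2 = np.column_stack([np.ones(n), X, X**2, V])
beta, *_ = np.linalg.lstsq(Z2, Y, rcond=None)
print("estimated linear and quadratic effects:", beta[1], beta[2])  # ~0.3, ~0.2
```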
Large Language Models (LLMs) and, more specifically, Generative Pre-Trained Transformers (GPTs) can help stakeholders in climate action explore digital knowledge bases and extract and utilize climate action knowledge in a sustainable manner. However, LLMs are "probabilistic models of knowledge bases" that excel at generating convincing text but cannot be entirely relied upon due to the probabilistic nature of the information they produce. This brief report illustrates the problem space with examples of LLM responses to questions relevant to climate action.
Random probabilities are a key component of many nonparametric methods in statistics and machine learning. To quantify comparisons between different laws of random probabilities, several works have started to use the elegant Wasserstein-over-Wasserstein distance. In this paper we prove that the infinite-dimensionality of the space of probabilities drastically deteriorates its sample complexity, which decays slower than any polynomial rate in the sample size. We therefore propose a new distance that preserves many desirable properties of the former while achieving a parametric rate of convergence. In particular, our distance (1) metrizes weak convergence; (2) can be estimated numerically from samples with low complexity; (3) can be bounded analytically from above and below. The main ingredient is integral probability metrics, which lead to the name hierarchical IPM.
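For reference, an integral probability metric over a function class $\mathcal{F}$ takes the standard form below; roughly speaking, the hierarchical construction applies such a metric at the level of laws of random probabilities.

```latex
% Standard definition of an integral probability metric (IPM)
% between probability measures \mu and \nu:
d_{\mathcal{F}}(\mu, \nu)
  \;=\; \sup_{f \in \mathcal{F}}
  \left| \int f \, \mathrm{d}\mu - \int f \, \mathrm{d}\nu \right|.
```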
We revisit a self-supervised method that segments unlabelled speech into word-like segments. We start from the two-stage duration-penalised dynamic programming method that performs zero-resource segmentation without learning an explicit lexicon. In the first, acoustic unit discovery stage, we replace contrastive predictive coding features with HuBERT features. After word segmentation in the second stage, we obtain an acoustic word embedding for each segment by averaging its HuBERT features. These embeddings are clustered using K-means to obtain a lexicon. The result is good full-coverage segmentation with a lexicon that achieves state-of-the-art performance on the ZeroSpeech benchmarks.
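A minimal sketch of the lexicon-building step, assuming pre-extracted frame-level HuBERT features and already-hypothesised segment boundaries (both made up here): average the frames in each segment to get an acoustic word embedding, then cluster the embeddings with K-means so that each cluster ID acts as a word type.

```python
# Averaging HuBERT frame features per segment, then clustering with K-means.
import numpy as np
from sklearn.cluster import KMeans

frames = np.random.default_rng(3).standard_normal((1000, 768))  # stand-in for HuBERT features
segments = [(0, 40), (40, 95), (95, 130), (130, 210)]           # (start, end) frame indices

embeddings = np.stack([frames[s:e].mean(axis=0) for s, e in segments])
lexicon = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
print(lexicon.labels_)   # cluster ID = hypothesised word type for each segment
```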
The problem of estimating a parameter in the drift coefficient is addressed for $N$ discretely observed independent and identically distributed stochastic differential equations (SDEs). This is done under additional constraints, wherein only public data can be published and used for inference. The concept of local differential privacy (LDP) is formally introduced for a system of stochastic differential equations. The objective is to estimate the drift parameter by proposing a contrast function based on a pseudo-likelihood approach. A suitably scaled Laplace noise is incorporated to meet the privacy requirements. Our key findings encompass the derivation of explicit conditions tied to the privacy level. Under these conditions, we establish the consistency and asymptotic normality of the associated estimator. Notably, the convergence rate is intricately linked to the privacy level, and in some situations may be completely different from the case where privacy constraints are ignored. Our results hold true as the discretization step approaches zero and the number of processes $N$ tends to infinity.
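As a generic illustration of the privatization step (not the paper's contrast function), the standard Laplace mechanism releases a clipped statistic plus Laplace noise whose scale is the sensitivity divided by the privacy level $\epsilon$; clipping to $[-c, c]$ bounds the sensitivity by $2c$. Constants below are illustrative.

```python
# Generic Laplace mechanism for local differential privacy.
import numpy as np

rng = np.random.default_rng(4)

def privatize(x, clip=5.0, epsilon=1.0):
    x_clipped = np.clip(x, -clip, clip)       # bounds the sensitivity by 2*clip
    noise = rng.laplace(scale=2 * clip / epsilon, size=np.shape(x))
    return x_clipped + noise

increments = rng.standard_normal(10)          # stand-in for discretised SDE increments
print(privatize(increments, epsilon=0.5))     # smaller epsilon -> more noise
```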
In recent years, many positivity-preserving schemes for initial value problems have been constructed by modifying a Runge--Kutta (RK) method by weighting the right-hand side of the system of differential equations with solution-dependent factors. These include the classes of modified Patankar--Runge--Kutta (MPRK) and Geometric Conservative (GeCo) methods. Compared to traditional RK methods, the analysis of accuracy and stability of these methods is more complicated. In this work, we provide a comprehensive and unifying theory of order conditions for such RK-like methods, which differ from original RK schemes in that their coefficients are solution-dependent. The resulting order conditions are themselves solution-dependent and obtained using the theory of NB-series, and thus can easily be read off from labeled N-trees. We present for the first time order conditions for MPRK and GeCo schemes of arbitrary order; for MPRK schemes, the order conditions are given implicitly in terms of the stages. From these results, we recover as particular cases all known order conditions from the literature for first- and second-order GeCo as well as first-, second- and third-order MPRK methods. Additionally, we derive necessary and sufficient conditions in explicit form for third- and fourth-order GeCo schemes as well as fourth-order MPRK methods. We also present a new fourth-order MPRK method within this framework and numerically confirm its convergence rate.
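To illustrate the solution-dependent weighting on the simplest member of the family, the sketch below applies the first-order modified Patankar--Euler method to the linear production--destruction system $y_1' = b y_2 - a y_1$, $y_2' = a y_1 - b y_2$. Weighting each production/destruction term by $y^{n+1}/y^n$ turns the explicit Euler step into a linear solve that preserves positivity and the sum $y_1 + y_2$ for any step size. Parameters are illustrative.

```python
# First-order modified Patankar--Euler step on a linear
# production--destruction system (positivity- and sum-preserving).
import numpy as np

a, b, dt = 5.0, 1.0, 0.25
y = np.array([0.9, 0.1])

M = np.array([[a, -b], [-a, b]])     # for this linear system the Patankar
for _ in range(20):                  # weights reduce to (I + dt*M) y_new = y_old
    y = np.linalg.solve(np.eye(2) + dt * M, y)
    assert (y > 0).all() and abs(y.sum() - 1.0) < 1e-9   # positivity + conservation

print(y, "steady state ~", np.array([b, a]) / (a + b))   # -> (1/6, 5/6)
```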
Recent dramatic advances in artificial intelligence indicate that in the coming years, humanity may irreversibly cross a threshold by creating superhuman general-purpose AI: AI that is better than humans at cognitive tasks in general, in the way that AI is currently unbeatable in certain domains. This would upend core aspects of human society, present many unprecedented risks, and is likely to be uncontrollable in several senses. We can choose not to do so, starting by instituting hard limits - placed at the national and international level, and verified by hardware security measures - on the computation that can be used to train and run neural networks. With these limits in place, AI research and industry can focus on making both narrow and general-purpose AI that humans can understand and control, and from which we can reap enormous benefit.