This paper presents a novel approach for pointwise estimation of multivariate density functions on known domains of arbitrary dimension using nonparametric local polynomial estimators. Our method is highly flexible, as it applies both to simple domains, such as open connected sets, and to more complicated domains that are not star-shaped around the point of estimation. This enables us to handle domains with sharp concavities, holes, and local pinches, such as polynomial sectors. Additionally, we introduce a data-driven selection rule based on the general ideas of Goldenshluger and Lepski. Our results demonstrate that the local polynomial estimators are minimax under an $L^2$ risk across a wide range of H\"older-type functional classes. In the adaptive case, we provide oracle inequalities and explicitly determine the convergence rate of our statistical procedure. Simulations on polynomial sectors show that our oracle estimates outperform those of the most popular alternative method, found in the sparr package for the R software. Our statistical procedure is implemented in a readily accessible online R package.
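As a schematic reminder of the Goldenshluger–Lepski device (our notation; the paper's exact comparison estimators and constants may differ): given a family of local polynomial estimators $\hat f_h(x)$ indexed by bandwidth $h$ and a variance majorant $V(h)$, a Lepski-type rule selects $$\hat h = \operatorname*{arg\,min}_{h \in \mathcal{H}} \big\{ \hat B(h) + V(h) \big\}, \qquad \hat B(h) = \max_{h' \le h} \Big[ \big|\hat f_{h'}(x) - \hat f_{h}(x)\big| - V(h') \Big]_{+},$$ trading the bias proxy $\hat B(h)$ off against the stochastic term $V(h)$.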
The adoption of individual behavioural patterns is largely determined by stimuli arriving from peers via social interactions or from external sources. Based on these influences, individuals are commonly assumed to follow simple or complex adoption rules, inducing social contagion processes. In reality, multiple adoption rules may coexist even within the same social contagion process, introducing additional complexity into the spreading phenomena. Our goal is to understand whether coexisting adoption mechanisms can be distinguished from a microscopic view, at the egocentric network level, without requiring global information about the underlying network or the unfolding spreading process. We formulate this question as a classification problem and study it through a Bayesian likelihood approach and with random forest classifiers in various synthetic and data-driven experiments. This study offers a novel perspective on the observation of propagation processes at the egocentric level and a better understanding of landmark contagion mechanisms from a local view.
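As a toy illustration of the classification setup, with feature definitions that are hypothetical rather than taken from the paper, a random forest can be trained to label egos as simple vs. complex adopters from local adoption statistics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# synthetic egocentric features (hypothetical): number of adopting neighbours
# at the ego's own adoption time, ego degree, delay since first exposure
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.poisson(3, n),          # adopting neighbours at adoption time
    rng.poisson(20, n),         # ego degree
    rng.exponential(5.0, n),    # delay since first exposure
])
# label 1 = complex (threshold-like) adoption, 0 = simple (exposure-like);
# generated from a made-up rule purely to exercise the pipeline
y = (X[:, 0] / np.maximum(X[:, 1], 1) > 0.15).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```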
This research presents a comprehensive approach to predicting the duration of traffic incidents and classifying them as short-term or long-term across the Sydney Metropolitan Area. Leveraging a dataset that encompasses detailed records of traffic incidents, road network characteristics, and socio-economic indicators, we train and evaluate a variety of advanced machine learning models, including Gradient Boosted Decision Trees (GBDT), Random Forest, LightGBM, and XGBoost. The models are assessed using Root Mean Square Error (RMSE) for regression tasks and the F1 score for classification tasks. Our experimental results demonstrate that XGBoost and LightGBM outperform conventional models, with XGBoost achieving the lowest RMSE of 33.7 for predicting incident duration and the highest classification F1 score of 0.62 for a 30-minute duration threshold. For classification, the 30-minute threshold balances performance, with 70.84\% short-term and 62.72\% long-term duration classification accuracy. Feature importance analysis, employing both tree split counts and SHAP values, identifies the number of affected lanes, traffic volume, and the types of primary and secondary vehicles as the most influential features. The proposed methodology not only achieves high predictive accuracy but also provides stakeholders with vital insights into the factors contributing to incident durations. These insights enable more informed decision-making for traffic management and response strategies. The code is available at: https://github.com/Future-Mobility-Lab/SydneyIncidents
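A minimal sketch of the modelling pipeline, assuming the xgboost and scikit-learn packages and using synthetic stand-in data (the real features and records are not reproduced here):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, f1_score
from xgboost import XGBRegressor, XGBClassifier

# synthetic stand-in for the incident records (features are hypothetical,
# e.g. lanes affected, traffic volume, vehicle types, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
duration = np.abs(30 + 20 * X[:, 0] + rng.normal(scale=10, size=1000))

X_tr, X_te, y_tr, y_te = train_test_split(X, duration, random_state=0)

# regression: predict incident duration in minutes, scored by RMSE
reg = XGBRegressor(n_estimators=200, max_depth=4)
reg.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, reg.predict(X_te)) ** 0.5

# classification: short vs. long term at a 30-minute threshold, scored by F1
clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X_tr, (y_tr > 30).astype(int))
f1 = f1_score((y_te > 30).astype(int), clf.predict(X_te))
print(rmse, f1)
```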
We examine two-layer networks and centrality measures defined on them. We propose two fast and accurate algorithms to approximate game-theoretic centrality measures, and we examine the connection between centrality measures and characteristics of opinion dynamics processes on such networks. As an example, we consider Zachary's karate club social network and extend it by adding a second (internal) layer of communication. The internal layer represents the idea that individuals can share their real opinions with their close friends. The structures of the external and internal layers may differ. By characteristics of opinion dynamics processes we mean the consensus time and the winning rate of a particular opinion. We find a significantly strong positive correlation between internal graph density and consensus time, and a significantly strong negative correlation between the centrality of authoritative nodes and consensus time.
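The abstract does not specify the opinion dynamics, so the following toy voter-model simulation on a two-layer extension of the karate club graph is purely illustrative of how consensus time can be measured (networkx assumed; the internal layer here is a random subgraph, not the paper's construction):

```python
import networkx as nx
import numpy as np

# external layer: Zachary's karate club; internal "close friends" layer:
# a hypothetical sparser random subgraph of the external edges
rng = np.random.default_rng(0)
G_ext = nx.karate_club_graph()
G_int = nx.Graph(e for e in G_ext.edges if rng.random() < 0.3)
G_int.add_nodes_from(G_ext.nodes)

def consensus_time(G_ext, G_int, rng, max_steps=100_000):
    # toy voter model: a random node copies the opinion of a random neighbour,
    # drawn from the internal layer with probability 1/2, else the external one
    opinion = {v: int(rng.integers(2)) for v in G_ext}
    for t in range(max_steps):
        if len(set(opinion.values())) == 1:
            return t
        v = rng.choice(list(G_ext.nodes))
        layer = G_int if rng.random() < 0.5 and G_int.degree(v) > 0 else G_ext
        opinion[v] = opinion[rng.choice(list(layer.neighbors(v)))]
    return max_steps

print(np.mean([consensus_time(G_ext, G_int, rng) for _ in range(20)]))
```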
This paper studies the influence of probabilism and non-determinism on some quantitative aspect X of the execution of a system modeled as a Markov decision process (MDP). To this end, the novel notion of demonic variance is introduced: for a random variable X in an MDP M, it is defined as 1/2 times the maximal expected squared distance of the values of X in two independent executions of M in which the non-deterministic choices, too, are resolved independently by two distinct schedulers. It is shown that the demonic variance lies between 1 and 2 times the maximal variance of X in M that can be achieved by a single scheduler. This allows us to define a non-determinism score for M and X measuring how strongly the difference of X in two executions of M can be influenced by the non-deterministic choices. Properties of MDPs M with extremal values of the non-determinism score are established. Further, the algorithmic problems of computing the maximal variance and the demonic variance are investigated for two random variables, namely weighted reachability and accumulated rewards. In the process, the structure of schedulers maximizing the variance and of scheduler pairs realizing the demonic variance is also analyzed.
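In symbols (notation ours, following the verbal definition above): $$\mathbb{V}^{\mathrm{dem}}_{M}(X) \;=\; \tfrac{1}{2}\,\sup_{\mathfrak{S}_1,\mathfrak{S}_2} \mathbb{E}\big[(X_{\mathfrak{S}_1} - X_{\mathfrak{S}_2})^2\big],$$ where $X_{\mathfrak{S}_1}$ and $X_{\mathfrak{S}_2}$ are the values of $X$ in two independent executions of $M$ resolved by schedulers $\mathfrak{S}_1$ and $\mathfrak{S}_2$; the result quoted above then reads $\mathbb{V}^{\max}_M(X) \le \mathbb{V}^{\mathrm{dem}}_M(X) \le 2\,\mathbb{V}^{\max}_M(X)$, where $\mathbb{V}^{\max}_M(X)$ is the maximal variance achievable by a single scheduler.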
In a paper from 1976, Rauzy studied two complexity notions, $\underline{\beta}$ and $\overline{\beta}$, for infinite sequences over a finite alphabet. The function $\underline{\beta}$ is maximal exactly on the Borel normal sequences, and $\overline{\beta}$ is minimal exactly on the sequences that, when added to any Borel normal sequence, yield a Borel normal sequence. Although the definitions of $\underline{\beta}$ and $\overline{\beta}$ do not involve finite-state automata, we establish connections between them and the lower and upper finite-state dimensions $\underline{\rm dim}$ and $\overline{\rm dim}$ (or other equivalent notions such as finite-state compression ratio, aligned-entropy, or cumulative log-loss of finite-state predictors). We show tight lower and upper bounds on $\underline{\rm dim}$ and $\overline{\rm dim}$ as functions of $\underline{\beta}$ and $\overline{\beta}$, respectively. In particular, this implies that the sequences with $\overline{\rm dim}$ zero are exactly those that, when added to any Borel normal sequence, yield a Borel normal sequence. We also show that the finite-state dimensions $\underline{\rm dim}$ and $\overline{\rm dim}$ are essentially subadditive. We need two technical tools that are of independent interest. One is the family of local finite-state automata, whose memory consists of the last $k$ symbols read, for some fixed integer $k$. We show that compressors based on local finite-state automata are as good as standard finite-state compressors. The other is a notion of finite-state relational (non-deterministic) compressor, which can compress an input in several ways provided the input can always be recovered from any of its outputs. We show that such compressors cannot compress more than standard (deterministic) finite-state compressors.
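Finite-state dimension admits an equivalent characterization through empirical block entropies (Bourke, Hitchcock, and Vinodchandran); the toy computation below illustrates that quantity only, and is distinct from Rauzy's $\underline{\beta}$ and $\overline{\beta}$ themselves:

```python
from collections import Counter
from math import log2

def block_entropy_rate(seq, k):
    # normalized empirical entropy of overlapping k-blocks, in bits per symbol;
    # finite-state dimension is characterized via such block entropy rates
    blocks = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    n = sum(blocks.values())
    H = -sum(c / n * log2(c / n) for c in blocks.values())
    return H / k

s = "01" * 5000   # highly regular sequence: the rate tends to 0 as k grows
print([round(block_entropy_rate(s, k), 3) for k in (1, 2, 4, 8)])
```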
We consider wave propagation problems over 2-dimensional domains with piecewise-linear boundaries, possibly including scatterers. We assume that the wave speed is constant and that the initial conditions and forcing terms are radially symmetric and compactly supported. We propose an approximation of the propagating wave as a sum of special space-time functions. Each term in this sum identifies a particular field component, modeling the result of a single reflection or diffraction effect. We describe an algorithm for identifying such components automatically, based on the domain geometry. To showcase the proposed method, we present several numerical examples, such as waves scattering off wedges and waves propagating through a room in the presence of obstacles. Software implementing our numerical algorithm is made available as open-source code.
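Schematically (our notation), the proposed approximation takes the form $$u(\mathbf{x},t) \;\approx\; \sum_{j=1}^{M} u_j(\mathbf{x},t),$$ where each space-time component $u_j$ models the field produced by a single reflection off a boundary segment or a single diffraction at a corner, and the list of components is assembled automatically from the piecewise-linear geometry.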
Modelling the extremal dependence of bivariate variables is important in a wide variety of practical applications, including environmental planning, catastrophe modelling, and hydrology. The majority of existing approaches are based on the framework of bivariate regular variation, and a wide range of literature is available for estimating the dependence structure in this setting. However, such procedures are only applicable to variables exhibiting asymptotic dependence, even though asymptotic independence is often observed in practice. In this paper, we consider the so-called `angular dependence function'; this quantity summarises the extremal dependence structure for asymptotically independent variables. Until recently, only pointwise estimators of the angular dependence function were available. We introduce a range of global estimators and compare them to another recently introduced technique for global estimation through a systematic simulation study and a case study on river flow data from the north of England, UK.
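For orientation, using the standard formulation from the literature rather than anything specific to this paper: with margins standardised to unit exponentials, the angular dependence function $\lambda(\omega)$, $\omega \in [0,1]$, is defined through $$\Pr\big(X_1 > \omega u,\; X_2 > (1-\omega)u\big) \;=\; \mathcal{L}(e^{u})\, e^{-\lambda(\omega) u}, \qquad u \to \infty,$$ with $\mathcal{L}$ slowly varying; asymptotic dependence corresponds to the lower bound $\lambda(\omega) = \max(\omega, 1-\omega)$, while $\lambda(\omega) = 1$ under independence.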
Practical parameter identifiability in ODE-based epidemiological models is a known issue, yet one that merits further study. It is essentially ubiquitous due to noise and errors in real data. In this study, to avoid uncertainty stemming from data of unknown quality, simulated data with added noise are used to investigate practical identifiability in two distinct epidemiological models. Particular emphasis is placed on the role of initial conditions, which are assumed unknown, except those that are directly measured. Instead of focusing on a single method of estimation, we use and compare results from various broadly used methods, including maximum likelihood and Markov chain Monte Carlo (MCMC) estimation. Among other findings, our analysis revealed that the MCMC estimator is overall more robust than the point estimators considered. Its estimates and predictions improve when the initial conditions of certain compartments are fixed so that the model becomes globally identifiable. For the point estimators, whether fixing or fitting the initial conditions that are not directly measured improves parameter estimates is model-dependent. Specifically, in the standard SEIR model, fixing the initial condition of the susceptible population S(0) improved parameter estimates, whereas this was not true when fixing the initial condition of the asymptomatic population in a more involved model. Our study corroborates that the quality of parameter estimates changes depending on whether the pre-peak or post-peak portion of the time series is used. Finally, our examples suggest that in the presence of significantly noisy data, the value of structural identifiability is moot.
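A minimal sketch of the simulated-data workflow for the SEIR case, assuming scipy and with all numerical values illustrative rather than taken from the study:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# standard SEIR right-hand side; beta, sigma, gamma are the rates to estimate
def seir(t, y, beta, sigma, gamma):
    S, E, I, R = y
    N = S + E + I + R
    return [-beta*S*I/N, beta*S*I/N - sigma*E, sigma*E - gamma*I, gamma*I]

def simulate(theta, y0, t_obs):
    sol = solve_ivp(seir, (t_obs[0], t_obs[-1]), y0, args=tuple(theta),
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[2]   # observed compartment: infectious counts

# simulated data with multiplicative noise (values are illustrative)
t_obs = np.linspace(0, 60, 61)
true_theta, y0 = (0.5, 0.2, 0.1), [990.0, 5.0, 5.0, 0.0]
rng = np.random.default_rng(1)
data = simulate(true_theta, y0, t_obs) * (1 + 0.05*rng.normal(size=t_obs.size))

# point estimation with the initial conditions (including S(0)) held fixed,
# the setting the abstract reports as helpful for the SEIR model
fit = least_squares(lambda th: simulate(th, y0, t_obs) - data,
                    x0=(0.3, 0.3, 0.2), bounds=(1e-6, 5.0))
print(fit.x)
```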
The choice of the shape parameter strongly affects the behaviour of radial basis function (RBF) approximations, as it must be selected to balance the ill-conditioning of the interpolation matrix against high accuracy. In this paper, we demonstrate how to use neural networks to determine the shape parameters in RBFs. In particular, we construct a multilayer perceptron trained with an unsupervised learning strategy and use it to predict shape parameters for inverse multiquadric and Gaussian kernels. We test the neural network approach on RBF interpolation tasks and in an RBF-finite difference method in one and two space dimensions, demonstrating promising results.
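A minimal sketch of where the predicted shape parameter enters, for a 1-D Gaussian RBF interpolant; `predict_eps` below is a hypothetical stand-in for the trained multilayer perceptron:

```python
import numpy as np

def gaussian_rbf(r, eps):
    # Gaussian kernel phi(r) = exp(-(eps*r)^2); eps is the shape parameter
    return np.exp(-(eps * r) ** 2)

def rbf_interpolate(x_nodes, f_nodes, x_eval, eps):
    # build the interpolation matrix A_ij = phi(|x_i - x_j|) and solve A c = f;
    # eps controls the trade-off between conditioning of A and accuracy
    A = gaussian_rbf(np.abs(x_nodes[:, None] - x_nodes[None, :]), eps)
    coeffs = np.linalg.solve(A, f_nodes)
    B = gaussian_rbf(np.abs(x_eval[:, None] - x_nodes[None, :]), eps)
    return B @ coeffs

# hypothetical stand-in for the trained MLP: any callable mapping the node
# configuration to a shape parameter would slot in here
predict_eps = lambda x_nodes: 1.0 / np.mean(np.diff(np.sort(x_nodes)))

x_nodes = np.linspace(0, 1, 15)
f_nodes = np.sin(2 * np.pi * x_nodes)
x_eval = np.linspace(0, 1, 200)
f_eval = rbf_interpolate(x_nodes, f_nodes, x_eval, predict_eps(x_nodes))
```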
In this paper, we develop a novel neural network model for predicting the implied volatility surface. Prior financial domain knowledge is taken into account. A new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage, the boundaries, and the asymptotic slope, are embedded into the loss function. This is one of the very first studies to discuss a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms the benchmark models on option data on the S&P 500 index spanning 20 years. More importantly, the domain knowledge is satisfied empirically, showing that the model is consistent with existing financial theories and conditions related to the implied volatility surface.
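Schematically (our notation, not the paper's exact objective), such conditions enter the training objective as soft penalties: $$\mathcal{L} \;=\; \frac{1}{N}\sum_{i=1}^{N}\big(\hat\sigma_i - \sigma_i\big)^2 \;+\; \sum_{k}\lambda_k\,\mathcal{P}_k,$$ where the first term fits observed implied volatilities $\sigma_i$, each $\mathcal{P}_k$ penalises the violation of one financial condition (static no-arbitrage, boundary behaviour, asymptotic slope), and the weights $\lambda_k$ balance fit against compliance.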