We study the graph parameter elimination distance to bounded degree, which was introduced by Bulian and Dawar in their study of the parameterized complexity of the graph isomorphism problem. We prove that the problem is fixed-parameter tractable on planar graphs; that is, there exists an algorithm that, given a planar graph $G$ and integers $d$ and $k$, decides in time $f(k,d)\cdot n^c$, for a computable function~$f$ and a constant $c$, whether the elimination distance of $G$ to the class of degree-$d$ graphs is at most $k$.
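The parameter has a simple recursive characterization: the elimination distance to degree $d$ is 0 if every component already has maximum degree at most $d$; for a connected graph it is otherwise one more than the minimum, over vertices $v$, of the elimination distance of $G - v$; and for a disconnected graph it is the maximum over components. A brute-force sketch of this recursion (exponential time, for intuition only; the adjacency-dictionary encoding and function names are ours, not from the paper):

```python
def components(adj):
    """Connected components of a graph given as {vertex: set of neighbours}."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def elim_dist_at_most(adj, d, budget):
    """Is the elimination distance of `adj` to the class of degree-d graphs <= budget?"""
    comps = components(adj)
    if len(comps) > 1:  # disconnected: maximum over components
        return all(elim_dist_at_most({u: adj[u] & c for u in c}, d, budget)
                   for c in comps)
    if all(len(adj[u]) <= d for u in adj):  # already maximum degree <= d
        return True
    if budget == 0:
        return False
    # connected case: delete some vertex and recurse with one unit less budget
    return any(elim_dist_at_most({u: adj[u] - {v} for u in adj if u != v}, d, budget - 1)
               for v in adj)

# Star K_{1,5}: deleting the center leaves only isolated vertices (degree 0),
# so its elimination distance to the class of degree-1 graphs is exactly 1.
star = {0: {1, 2, 3, 4, 5}, **{i: {0} for i in range(1, 6)}}
```

The fixed-parameter tractability result of the paper replaces this exponential search with an $f(k,d)\cdot n^c$ algorithm on planar graphs.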
This research presents a comprehensive approach to predicting the duration of traffic incidents and classifying them as short-term or long-term across the Sydney Metropolitan Area. Leveraging a dataset that encompasses detailed records of traffic incidents, road network characteristics, and socio-economic indicators, we train and evaluate a variety of advanced machine learning models, including Gradient Boosted Decision Trees (GBDT), Random Forest, LightGBM, and XGBoost. The models are assessed using Root Mean Square Error (RMSE) for regression tasks and the F1 score for classification tasks. Our experimental results demonstrate that XGBoost and LightGBM outperform conventional models, with XGBoost achieving the lowest RMSE of 33.7 for predicting incident duration and the highest classification F1 score of 0.62 for a 30-minute duration threshold. For classification, the 30-minute threshold balances performance, with 70.84% short-term and 62.72% long-term duration classification accuracy. Feature importance analysis, employing both tree split counts and SHAP values, identifies the number of affected lanes, traffic volume, and the types of primary and secondary vehicles as the most influential features. The proposed methodology not only achieves high predictive accuracy but also provides stakeholders with vital insights into the factors contributing to incident durations. These insights enable more informed decision-making for traffic management and response strategies. The code is available at //github.com/Future-Mobility-Lab/SydneyIncidents
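As a hedged illustration of the modelling pipeline (not the paper's code or data), the sketch below trains gradient-boosted models on synthetic stand-ins for the incident features and evaluates them with the same metrics: RMSE for duration regression and F1 at a 30-minute threshold. scikit-learn's gradient boosting is used in place of XGBoost/LightGBM, and every feature and coefficient is invented:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.metrics import f1_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical stand-ins for the paper's features: lanes affected, traffic
# volume, and primary/secondary vehicle type codes.
X = np.column_stack([
    rng.integers(1, 5, n),
    rng.normal(500.0, 100.0, n),
    rng.integers(0, 6, n),
    rng.integers(0, 6, n),
])
y = 10.0 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0.0, 10.0, n)  # duration [min]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Regression task: incident duration, scored by RMSE
reg = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, reg.predict(X_te)))

# Classification task: short-term vs long-term at the 30-minute threshold
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, (y_tr > 30).astype(int))
f1 = f1_score((y_te > 30).astype(int), clf.predict(X_te))
```

The real pipeline additionally compares several model families and inspects tree split counts and SHAP values for feature importance.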
We derive bounds on the moduli of the eigenvalues of a special type of rational matrix using three techniques: (1) applying the Bauer--Fike theorem to a block matrix associated with the given rational matrix, (2) associating a real rational function with the rational matrix and applying Rouch\'e's theorem, and (3) applying a numerical radius inequality to a block matrix associated with the rational matrix. These bounds are compared when the coefficients are unitary matrices. Numerical examples are given to illustrate the results obtained.
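Technique (1) rests on the Bauer--Fike theorem, which bounds how far eigenvalues drift under perturbation: if $A = V \Lambda V^{-1}$, every eigenvalue $\mu$ of $A + E$ lies within $\kappa(V)\,\|E\|_2$ of some eigenvalue of $A$. A minimal numerical check of the theorem itself, on a diagonal toy matrix rather than the paper's block matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bauer--Fike: if A = V diag(lambda_i) V^{-1}, then every eigenvalue mu of A + E
# satisfies min_i |mu - lambda_i| <= cond(V) * ||E||_2.
lam = np.array([1.0, 3.0, 5.0])
A = np.diag(lam)                          # normal matrix, so V = I and cond(V) = 1
E = 0.01 * rng.standard_normal((3, 3))    # small perturbation
bound = np.linalg.cond(np.eye(3)) * np.linalg.norm(E, 2)

mu = np.linalg.eigvals(A + E)
dist = np.array([np.abs(m - lam).min() for m in mu])  # distance to nearest lambda_i
```

Applied to a block linearization of a rational matrix, the same inequality converts spectral information about the blocks into bounds on the moduli of the rational matrix's eigenvalues.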
This letter investigates the problem of controlling an aerial manipulator composed of an omnidirectional tilting drone equipped with a five-degree-of-freedom robotic arm. The robot must interact with the environment to inspect structures and perform non-destructive measurements. A parallel force-impedance control technique is developed to establish contact with the designated surface with a desired force profile. During the interaction, a pushing phase is required to create a vacuum between the surface and the echometer sensor mounted at the end-effector, in order to measure the thickness of the interaction surface. Repeated measurements are performed to demonstrate the repeatability of the algorithm.
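A one-dimensional sketch conveys the idea of a parallel force-impedance loop: an impedance-shaped dynamics tracks a reference that an outer integral force loop shifts until the measured contact force matches the desired profile. Every number below (gains, the spring-like surface model, the desired force) is an illustrative assumption, not the letter's controller:

```python
# 1-D sketch of a parallel force-impedance loop pressing a sensor onto a surface.
dt, steps = 1e-3, 5000
m, d, k = 1.0, 40.0, 100.0     # desired impedance: mass, damping, stiffness
k_env = 5000.0                 # hypothetical spring-like surface located at x = 0
f_des = 10.0                   # desired contact force [N]
ki = 0.02                      # integral gain of the outer force loop

x = v = x_r = 0.0              # position (penetration), velocity, reference
for _ in range(steps):
    f_c = k_env * max(x, 0.0)                       # measured contact force
    a = (k * (x_r - x) - d * v + f_des - f_c) / m   # impedance + force feedforward
    v += a * dt                                     # semi-implicit Euler step
    x += v * dt
    x_r += ki * (f_des - f_c) * dt                  # force error shifts the reference
```

At steady state the integral action drives the contact force to the desired 10 N regardless of the (unknown) surface stiffness, which is the property that makes the pushing phase repeatable.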
Researchers would often like to leverage data from a collection of sources (e.g., primary studies in a meta-analysis) to estimate causal effects in a target population of interest. However, traditional meta-analytic methods do not produce causally interpretable estimates for a well-defined target population. In this paper, we present the CausalMetaR R package, which implements efficient and robust methods to estimate causal effects in a given internal or external target population using multi-source data. The package includes estimators of average and subgroup treatment effects for the entire target population. To produce efficient and robust estimates of causal effects, the package implements doubly robust, non-parametric efficient estimators and supports flexible data-adaptive methods (e.g., machine learning techniques) and cross-fitting to estimate the nuisance models (e.g., the treatment model and the outcome model). We describe the key features of the package and demonstrate how to use it through an example.
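CausalMetaR itself is an R package; as a language-neutral sketch of the doubly robust (AIPW-type) estimators it builds on, the following Python toy fits the two nuisance models (treatment and outcome) parametrically and combines them, omitting the package's cross-fitting and multi-source machinery. The data-generating process is invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
e = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))     # true propensity score
A = rng.binomial(1, e)                                # treatment indicator
tau = 2.0                                             # true average treatment effect
Y = 1.0 + X @ np.array([0.5, -0.3]) + tau * A + rng.normal(size=n)

# Nuisance models: treatment (propensity) model and outcome regressions.
# The package instead allows flexible ML learners with cross-fitting.
ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# Augmented inverse-probability-weighted (doubly robust) ATE estimate
ate = np.mean(m1 - m0 + A * (Y - m1) / ps - (1 - A) * (Y - m0) / (1 - ps))
```

The estimate remains consistent if either the treatment model or the outcome model is correctly specified, which is the "doubly robust" property the package exploits.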
We study the asymptotic properties of an estimator of the Hurst parameter of a stochastic differential equation driven by a fractional Brownian motion with $H > 1/2$. Utilizing the theory of asymptotic expansion of Skorohod integrals introduced by Nualart and Yoshida [NY19], we derive an asymptotic expansion formula for the distribution of the estimator. As a corollary, we also obtain a mixed central limit theorem for the statistic, showing that the rate of convergence is $n^{-\frac12}$, which improves on results in the previous literature. To handle the second-order quadratic variations appearing in the estimator, we develop a theory of exponents based on weighted graphs to estimate the asymptotic orders of norms of the functionals involved.
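The estimator is built from second-order quadratic variations, i.e., sums of squared second differences of the observed path; comparing them at two time scales recovers the Hurst exponent. A self-contained sketch (exact fBm simulation by Cholesky; the simple two-scale ratio estimator here is a stand-in for the paper's statistic, which targets an SDE driven by fBm):

```python
import numpy as np

rng = np.random.default_rng(2)

def fbm_sample(n, H):
    """Exact fractional Brownian motion sample at t = 1/n, ..., 1 via Cholesky."""
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def hurst_qv(x):
    """Two-scale ratio of second-order quadratic variations.

    The step-2 quadratic variation is ~ 2^(2H-1) times the step-1 one
    in expectation, so H is read off from the log2 of the empirical ratio.
    """
    d1 = x[2:] - 2 * x[1:-1] + x[:-2]          # step-1 second differences
    d2 = x[4::2] - 2 * x[2:-2:2] + x[:-4:2]    # step-2 second differences
    return 0.5 * (np.log2((d2 @ d2) / (d1 @ d1)) + 1)

H_hat = hurst_qv(fbm_sample(2 ** 10, H=0.7))
```

Second differences are used instead of first differences because they de-correlate faster, which is what makes central limit theorems available for all $H \in (0,1)$.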
Generalized linear models (GLMs) arguably represent the standard approach for statistical regression beyond the Gaussian likelihood scenario. When Bayesian formulations are employed, the general absence of a tractable posterior distribution has motivated the development of deterministic approximations, which are generally more scalable than sampling techniques. Among them, expectation propagation (EP) has shown excellent accuracy, usually higher than that of many variational Bayes solutions. However, the higher computational cost of EP has raised concerns about its practical feasibility, especially in high-dimensional settings. We address these concerns by deriving a novel efficient formulation of EP for GLMs whose cost scales linearly in the number of covariates p. This reduces the state-of-the-art O(p^2 n) per-iteration computational cost of the EP routine for GLMs to O(p n min{p,n}), with n being the sample size. We also show that, for binary models and log-linear GLMs, approximate predictive means can be obtained at no additional cost. To preserve efficient moment matching for count data, we propose employing a combination of log-normal Laplace transform approximations, avoiding numerical integration. These novel results open the possibility of employing EP in settings that were believed to be practically impossible. Improvements over state-of-the-art approaches are illustrated both for simulated and real data. The efficient EP implementation is available at //github.com/niccoloanceschi/EPglm.
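Cost reductions of the O(p^2 n) to O(p n min{p,n}) kind typically hinge, in the n < p regime, on Woodbury-type identities that trade a p x p inverse for an n x n solve. A generic sketch of that trade (not the paper's exact EP recursions; the site-precision matrix D is a hypothetical stand-in for EP site parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, tau = 50, 500, 2.0                      # n << p regime
X = rng.standard_normal((n, p))
D = np.diag(rng.uniform(0.5, 1.5, n))         # hypothetical EP site precisions
b = rng.standard_normal(p)

# Goal: Q^{-1} b with Q = tau*I_p + X^T D X, without any p x p factorization.
# Woodbury: Q^{-1} = I/tau - X^T (D^{-1} + X X^T / tau)^{-1} X / tau^2.
S = np.linalg.inv(D) + X @ X.T / tau          # n x n system instead of p x p
z = b / tau - X.T @ np.linalg.solve(S, X @ b) / tau ** 2

# Reference computation at the naive cost, for verification only.
z_direct = np.linalg.solve(tau * np.eye(p) + X.T @ D @ X, b)
```

Forming and solving the n x n system costs O(p n min{p,n}) overall, which is the scaling claimed for the efficient EP routine.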
We explore theoretical aspects of boundary conditions for lattice Boltzmann methods, focusing on a toy two-velocities scheme. By mapping lattice Boltzmann schemes to Finite Difference schemes, we facilitate rigorous consistency and stability analyses. We develop kinetic boundary conditions for inflows and outflows, highlighting the trade-off between accuracy and stability, which we successfully overcome. Stability is assessed using GKS (Gustafsson, Kreiss, and Sundstr{\"o}m) analysis and -- when this approach fails on coarse meshes -- spectral and pseudo-spectral analyses of the scheme's matrix that explain effects germane to low resolutions.
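The matrix-spectrum side of such stability assessments can be illustrated generically: form the one-step iteration matrix of a fully discrete scheme and inspect its eigenvalue moduli. The toy below uses periodic first-order upwind advection as a stand-in, not a lattice Boltzmann scheme; for this circulant, hence normal, matrix a spectral radius of at most one is equivalent to L2 stability:

```python
import numpy as np

def upwind_matrix(N, c):
    """One-step iteration matrix u^{n+1} = M u^n for periodic first-order
    upwind advection at CFL number c: (Mu)_i = (1-c) u_i + c u_{i-1}."""
    return (1.0 - c) * np.eye(N) + c * np.roll(np.eye(N), 1, axis=0)

# Stability of the fully discrete scheme via the spectrum of its matrix:
# the upwind scheme is stable for 0 <= c <= 1 and unstable beyond.
rho_stable = np.abs(np.linalg.eigvals(upwind_matrix(64, 0.8))).max()
rho_unstable = np.abs(np.linalg.eigvals(upwind_matrix(64, 1.2))).max()
```

With boundary conditions replacing periodicity, the matrix loses normality, which is precisely when spectral and pseudo-spectral analyses can disagree with GKS predictions on coarse meshes.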
Machine learning (ML) methods, which fit the parameters of a given parameterized model class to data, have garnered significant interest as potential methods for learning surrogate models for complex engineering systems for which traditional simulation is expensive. However, in many scientific and engineering settings, generating high-fidelity data on which to train ML models is expensive, and the available budget for generating training data is limited, so that high-fidelity training data are scarce. ML models trained on scarce data have high variance, resulting in poor expected generalization performance. We propose a new multifidelity training approach for scientific machine learning via linear regression that exploits the scientific context in which data of varying fidelities and costs are available: for example, high-fidelity data may be generated by an expensive fully resolved physics simulation, whereas lower-fidelity data may arise from a cheaper model based on simplifying assumptions. We use the multifidelity data within an approximate control variate framework to define new multifidelity Monte Carlo estimators for linear regression models. We provide bias and variance analyses of our new estimators that guarantee the approach's accuracy and improved robustness to scarce high-fidelity data. Numerical results demonstrate that our multifidelity training approach achieves accuracy similar to that of the standard high-fidelity-only approach with orders-of-magnitude reduced high-fidelity data requirements.
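In its simplest form, the control-variate idea combines three regression fits: a scarce high-fidelity fit, corrected by the discrepancy between low-fidelity fits on a large sample and on the paired high-fidelity inputs. A minimal sketch with an invented data-generating process and the naive weight alpha = 1 (the paper derives variance-optimal weights and the accompanying bias/variance guarantees):

```python
import numpy as np

rng = np.random.default_rng(1)

def ols(X, y):
    """Ordinary least squares coefficient estimate."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical setup: few expensive high-fidelity samples, many cheap
# low-fidelity ones; all sizes and noise levels are illustrative.
p = 3
beta_true = np.array([1.0, -2.0, 0.5])
n_hi, n_lo = 20, 2000

X_hi = rng.standard_normal((n_hi, p))
y_hi = X_hi @ beta_true + 0.1 * rng.standard_normal(n_hi)   # high fidelity
y_lo_paired = y_hi + 0.3 * rng.standard_normal(n_hi)        # low fidelity, same inputs

X_lo = rng.standard_normal((n_lo, p))
y_lo = X_lo @ beta_true + 0.3 * rng.standard_normal(n_lo)   # abundant low fidelity

# Control-variate combination: correct the scarce high-fidelity estimate with
# the discrepancy between low-fidelity estimates on the large and paired sets.
alpha = 1.0  # variance-optimal weights are derived in the paper
beta_mf = ols(X_hi, y_hi) + alpha * (ols(X_lo, y_lo) - ols(X_hi, y_lo_paired))
```

The correction term has mean zero whenever the low-fidelity model is sampled consistently, so the combined estimator stays (approximately) unbiased while its variance is reduced by the abundant cheap data.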
The Lippmann--Schwinger--Lanczos (LSL) algorithm has recently been shown to provide an efficient tool for imaging and direct inversion of synthetic aperture radar data in multi-scattering environments \cite{DrMoZa3}, where the data set is limited to monostatic, a.k.a. single input/single output (SISO), measurements. The approach is based on constructing data-driven estimates of internal fields via a reduced-order model (ROM) framework and then plugging them into the Lippmann--Schwinger integral equation. However, the approximations of the internal solutions may suffer additional error because the off-diagonal elements of the multiple input/multiple output (MIMO) matrix-valued transfer function are missing. This, in turn, may produce multiple echoes in the image. Here we present a ROM-based data completion algorithm to mitigate this problem. First, we apply the LSL algorithm to the SISO data as in \cite{DrMoZa3} to obtain approximate reconstructions as well as estimates of the internal fields. Next, we use these estimates to evaluate a forward Lippmann--Schwinger integral to populate the missing off-diagonal data (the lifting step). Finally, to update the reconstructions, we solve the Lippmann--Schwinger equation using the original SISO data, where the internal fields are constructed from the lifted MIMO data. The steps of obtaining the approximate reconstructions and internal fields and populating the missing MIMO data entries can be repeated for complex models to improve the images even further. The efficiency of the proposed approach is demonstrated on 2D and 2.5D numerical examples, where the reconstructions improve substantially.
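The computational core throughout is the discretized Lippmann--Schwinger equation, $u = u_{\mathrm{inc}} + k^2 G q u$, relating internal fields to a scatterer $q$. A 1-D toy discretization (our own, far simpler than the paper's SAR setting) shows the forward solve and verifies the residual:

```python
import numpy as np

# Toy 1-D Lippmann--Schwinger solve: u = u_inc + k^2 * G (q u) on [0, 1].
N, k = 200, 20.0
x = (np.arange(N) + 0.5) / N
h = 1.0 / N
# 1-D Helmholtz Green's function e^{ik|x-y|}/(2ik), midpoint quadrature weight h
G = np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k) * h
q = 0.3 * np.exp(-((x - 0.5) ** 2) / 0.005)   # hypothetical scatterer profile
u_inc = np.exp(1j * k * x)                     # incident plane wave

# Internal field: solve (I - k^2 G diag(q)) u = u_inc
u = np.linalg.solve(np.eye(N) - k ** 2 * G * q[None, :], u_inc)
residual = np.abs(u - (u_inc + k ** 2 * G @ (q * u))).max()
```

In the algorithm described above this equation is used in both directions: forward (given estimated internal fields, to lift the missing off-diagonal MIMO data) and inverse (given data, to update the reconstruction of $q$).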
Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic across learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention-based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
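The mechanism that lets one network handle hyperedges of any size is self-attention applied across the members of a candidate hyperedge, producing a "dynamic" embedding per node. A minimal numpy sketch of that mechanism alone (weights and dimensions are illustrative; the full model also computes static per-node embeddings and scores a hyperedge by comparing the two):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W_q, W_k, W_v = (rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(3))

def dynamic_embeddings(X):
    """Scaled dot-product self-attention over the nodes of one candidate hyperedge.

    X: (s, dim) node features for a hyperedge of size s (variable across edges).
    Returns one dynamic embedding per node, informed by the other tuple members.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(dim)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))  # row-wise softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ V

# Variable hyperedge sizes are handled naturally: no fixed tuple length.
e3 = dynamic_embeddings(rng.standard_normal((3, dim)))
e5 = dynamic_embeddings(rng.standard_normal((5, dim)))
```

Because the same projection matrices are shared across all tuple sizes, the model needs no padding or fixed-arity assumption, which is what enables prediction of variable-sized heterogeneous hyperedges.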