Pairwise comparison models are used for quantitatively evaluating utility and ranking in various fields. The increasing scale of modern problems underscores the need to understand statistical inference in these models when the number of subjects diverges, which is currently lacking in the literature except in a few special instances. This paper addresses this gap by establishing an asymptotic normality result for the maximum likelihood estimator in a broad class of pairwise comparison models. The key idea lies in identifying the Fisher information matrix as a weighted graph Laplacian, which can be studied via a meticulous spectral analysis. Our findings provide the first unified theory for performing statistical inference in a wide range of pairwise comparison models beyond the Bradley--Terry model, equipping practitioners with solid theoretical guarantees for their use. Simulations with synthetic data are conducted to validate the asymptotic normality result, followed by a hypothesis test using a tennis competition dataset.
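To make the Laplacian identification concrete, consider the special case of the Bradley--Terry model (a worked example under notation assumed here, not the paper's general setting): with merit parameters $u_1, \dots, u_n$, win probabilities $p_{ij} = e^{u_i}/(e^{u_i} + e^{u_j})$, and $n_{ij}$ comparisons between subjects $i$ and $j$, the Fisher information matrix has entries
\[
\mathcal{I}(u)_{ij} = -\, n_{ij}\, p_{ij}(1 - p_{ij}) \quad (i \neq j), \qquad
\mathcal{I}(u)_{ii} = \sum_{j \neq i} n_{ij}\, p_{ij}(1 - p_{ij}),
\]
which is exactly the Laplacian $L = D - W$ of the comparison graph with edge weights $w_{ij} = n_{ij}\, p_{ij}(1 - p_{ij})$.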
Source conditions are a key tool in regularisation theory, needed to derive error estimates and convergence rates for ill-posed inverse problems. In this paper, we provide a recipe for practically computing source condition elements as the solution of convex minimisation problems that can be solved with first-order algorithms. We demonstrate the validity of our approach by testing it on two inverse problem case studies in machine learning and image processing: sparse coefficient estimation of a polynomial via LASSO regression, and recovering an image from a subset of the coefficients of its discrete Fourier transform. We further demonstrate that the proposed approach can easily be modified to solve the machine learning task of identifying the optimal sampling pattern in the Fourier domain for a given image and variational regularisation method, which has applications in sparsity-promoting reconstruction from magnetic resonance imaging data.
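For orientation, the source condition in question takes, in its standard convex form (notation assumed here, not taken from the paper), the following shape: for a linear forward operator $A$, a regularisation functional $R$, and a solution $u^\dagger$, one seeks an element $w$ in the data space such that
\[
A^* w \in \partial R(u^\dagger),
\]
i.e. the subdifferential of $R$ at $u^\dagger$ must intersect the range of $A^*$. For LASSO, $R = \|\cdot\|_1$, so the condition asks for a $w$ with $(A^* w)_i = \operatorname{sign}(u^\dagger_i)$ on the support of $u^\dagger$ and $|(A^* w)_i| \le 1$ elsewhere.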
Generative machine learning models have shown notable success in identifying architectures for metamaterials - materials whose behavior is determined primarily by their internal organization - that match specific target properties. By examining kirigami metamaterials, in which dependencies between cuts yield complex design restrictions, we demonstrate that this perceived success of generative models for metamaterials might be akin to survivorship bias. We assess the performance of the four most popular generative models - the Variational Autoencoder (VAE), the Generative Adversarial Network (GAN), the Wasserstein GAN (WGAN), and the Denoising Diffusion Probabilistic Model (DDPM) - in generating kirigami structures. The prohibition of cut intersections can hinder the identification of an appropriate similarity measure for kirigami metamaterials, significantly impacting the effectiveness of the VAE and WGAN, which rely on the Euclidean distance - a metric shown to be unsuitable for the geometries considered. This imposes significant limitations on employing modern generative models for the creation of diverse metamaterials.
Several subjective proposals have been made for interpreting the strength of evidence in likelihood ratios and Bayes factors. I identify a more objective scaling by modelling the effect of evidence on belief. The resulting scale with base 3.73 aligns with previous proposals and may partly explain intuitions.
In this short note we formulate a stabilizer formalism in the language of noncommutative graphs. The classes of noncommutative graphs we consider are obtained via unitary representations of compact groups, and suitably chosen operators on finite-dimensional Hilbert spaces. Furthermore, in this framework, we generalize previous results in this area for determining when such noncommutative graphs have anticliques.
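For readers outside the area, the two standard definitions underlying such results (as introduced by Duan, Severini and Winter, and by Weaver; notation assumed here, not taken from the note) are: the noncommutative graph of a quantum channel with Kraus operators $E_1, \dots, E_m$ is the operator system
\[
\mathcal{S} = \operatorname{span}\{\, E_i^\dagger E_j : 1 \le i, j \le m \,\},
\]
and a quantum $k$-anticlique of $\mathcal{S}$ is a rank-$k$ projection $P$ with $P \mathcal{S} P = \mathbb{C} P$, i.e. a $k$-dimensional subspace on which the graph induces no structure, which is precisely the Knill--Laflamme condition for a $k$-dimensional error-correcting code.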
Tactile sensing in mobile robots remains under-explored, mainly due to challenges related to sensor integration and the complexities of distributed sensing. In this work, we present a tactile sensing architecture for mobile robots based on wheel-mounted acoustic waveguides. Our sensor architecture enables tactile sensing along the entire circumference of a wheel with a single active component: an off-the-shelf acoustic rangefinder. We present findings showing that our sensor, mounted on the wheel of a mobile robot, is capable of discriminating between different terrains, detecting and classifying obstacles with different geometries, and performing collision detection via contact localization. We also present a comparison between our sensor and sensors traditionally used in mobile robots, and point to the potential for sensor fusion approaches that leverage the unique capabilities of our tactile sensing architecture. Our findings demonstrate that autonomous mobile robots can further leverage our sensor architecture for diverse mapping tasks requiring knowledge of terrain material, surface topology, and underlying structure.
Bayesian statistical graphical models are typically either continuous and parametric (Gaussian, parameterized by the graph-dependent precision matrix with Wishart-type priors) or discrete and non-parametric (with graph-dependent structure of probabilities of cells and Dirichlet-type priors). We propose to break this dichotomy by introducing two discrete parametric graphical models on finite decomposable graphs: the graph negative multinomial and the graph multinomial distributions. These models interpolate between the product of univariate negative binomial laws and the negative multinomial distribution, and between the product of binomial laws and the multinomial distribution, respectively. We derive their Markov decomposition and present related probabilistic representations of these models. We also introduce graphical versions of the Dirichlet and inverted Dirichlet distributions, which serve as conjugate priors for the two discrete graphical Markov models. We derive explicit normalizing constants for both graphical Dirichlet laws and demonstrate that their independence structure (a graphical version of neutrality) yields a strong hyper Markov property for both Bayesian models. We also provide characterization theorems for the graphical Dirichlet laws via the strong hyper Markov property. Finally, we develop a model selection procedure for the Bayesian graphical negative multinomial model with the respective Dirichlet-type priors.
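As a point of reference, the two classical endpoints of the first interpolation (standard formulas; the graph-indexed family itself is the paper's contribution) are the product of independent negative binomial laws and the negative multinomial law: for counts $x = (x_1, \dots, x_d)$,
\[
P(X = x) = \prod_{i=1}^{d} \binom{x_i + r_i - 1}{x_i} p_i^{x_i} (1 - p_i)^{r_i}
\qquad \text{versus} \qquad
P(X = x) = \frac{\Gamma\!\left(r + \sum_i x_i\right)}{\Gamma(r)\, \prod_i x_i!}\; p_0^{\,r} \prod_{i=1}^{d} p_i^{x_i},
\]
with $p_0 = 1 - \sum_i p_i$; presumably the graph negative multinomial recovers the former for the empty graph and the latter for the complete graph.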
The categorical Gini covariance is a dependence measure between a numerical variable and a categorical variable. It quantifies dependence through the difference between the conditional and unconditional distribution functions, and a value of zero implies independence of the numerical and categorical variables. We propose a non-parametric test for the independence between a numerical and a categorical variable based on the categorical Gini covariance. We use the theory of U-statistics to construct the test statistic and study its properties; the statistic has an asymptotic normal distribution. As the implementation of a normal-based test is difficult, we develop a jackknife empirical likelihood (JEL) ratio test for testing independence. Extensive Monte Carlo simulation studies are carried out to validate the performance of the proposed JEL-based test. We illustrate the test procedure using a real data set.
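A minimal sketch of the JEL mechanics follows, using the standard jackknife-pseudo-value construction (Jing, Yuan and Zhou) rather than anything specific to this paper; the categorical Gini covariance kernel is not reproduced here, so a variance U-statistic stands in for the user-supplied estimator, and `var_ustat`, `jel_ratio`, and all parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def jackknife_pseudo_values(data, ustat):
    """Pseudo-values V_i = n*U_n - (n-1)*U_{n-1}^{(-i)} for a U-statistic."""
    n = len(data)
    u_full = ustat(data)
    return np.array([n * u_full - (n - 1) * ustat(np.delete(data, i, axis=0))
                     for i in range(n)])

def jel_ratio(pseudo, theta0=0.0):
    """-2 log JEL ratio for H0: theta = theta0; asymptotically chi^2_1."""
    z = pseudo - theta0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # theta0 lies outside the convex hull of the pseudo-values
    # Lagrange multiplier: solve sum_i z_i / (1 + lam * z_i) = 0 on the
    # interval where all empirical-likelihood weights stay positive.
    eps = 1e-10
    lam = brentq(lambda t: np.sum(z / (1.0 + t * z)),
                 -1.0 / z.max() + eps, -1.0 / z.min() - eps)
    return 2.0 * np.sum(np.log1p(lam * z))

# Stand-in kernel: the variance U-statistic with h(x, y) = (x - y)^2 / 2;
# the paper's categorical Gini covariance kernel would go here instead.
def var_ustat(d):
    n = len(d)
    return 0.5 * np.mean((d[:, None] - d[None, :]) ** 2) * n / (n - 1)

rng = np.random.default_rng(0)
x = rng.normal(size=40)
stat = jel_ratio(jackknife_pseudo_values(x, var_ustat), theta0=1.0)
print(stat, "reject at 5%" if stat > chi2.ppf(0.95, df=1) else "fail to reject")
```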
A new approach is developed for computational modelling of microstructure evolution problems. The approach combines the phase-field method with the recently developed laminated element technique (LET), a simple and efficient method for modelling weak discontinuities using nonconforming finite-element meshes. The essence of LET lies in treating the elements that are cut by an interface as simple laminates of the two phases; this idea is here extended to propagating interfaces so that the volume fraction of the phases and the lamination orientation vary accordingly. In the proposed LET-PF approach, the phase-field variable (order parameter), which is governed by an evolution equation of the Ginzburg-Landau type, plays the role of a level-set function that implicitly defines the position of the (sharp) interface. The mechanical equilibrium subproblem is then solved using the semisharp LET technique. The performance of LET-PF is illustrated by numerical examples. In particular, it is shown that, for the problems studied, LET-PF exhibits higher accuracy than the conventional phase-field method, so that qualitatively correct results can be obtained using a significantly coarser mesh, and thus at a lower computational cost.
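For context, an evolution equation of the Ginzburg-Landau (Allen-Cahn) type has the generic form (notation assumed here, not taken from the paper)
\[
\frac{\partial \phi}{\partial t} = -M \,\frac{\delta \Psi}{\delta \phi}
= -M \left( \frac{\partial f(\phi)}{\partial \phi} - \kappa \, \nabla^2 \phi \right),
\]
where $\phi$ is the order parameter, $M$ a mobility (kinetic) coefficient, and $\Psi$ a free-energy functional with bulk density $f$ and gradient-energy coefficient $\kappa$. In LET-PF, an iso-surface of $\phi$ (e.g. $\phi = 1/2$) implicitly locates the sharp interface that the laminated elements resolve.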
Deploying complicated neural network models on hardware with limited resources is critical yet challenging. This paper proposes a novel model quantization method, named Low-Cost Proxy-Based Adaptive Mixed-Precision Model Quantization (LCPAQ), which contains three key modules. The hardware-aware module is designed by considering the hardware limitations, while an adaptive mixed-precision quantization module is developed to evaluate quantization sensitivity using the Hessian matrix and Pareto frontier techniques, with integer linear programming used to fine-tune the quantization across different layers. The low-cost proxy neural architecture search module then efficiently explores the ideal quantization hyperparameters. Experiments on ImageNet demonstrate that LCPAQ achieves quantization accuracy comparable or superior to existing mixed-precision models, while requiring only 1/200 of the search time of existing methods, offering a practical shortcut to quantization for resource-limited devices.
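As an illustration of the integer-programming step, the sketch below allocates per-layer bit-widths under a size budget using a HAWQ-style Hessian-trace sensitivity proxy. All numbers and the cost model are hypothetical; the abstract does not specify LCPAQ's exact objective or constraints.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical per-layer data: parameter counts and Hessian-trace proxies.
params = np.array([1e5, 5e5, 2e6, 1e6])      # parameters per layer
hess_trace = np.array([8.0, 3.0, 0.5, 1.2])  # sensitivity proxy per layer
bits = np.array([2, 4, 8])                   # candidate bit-widths
L, B = len(params), len(bits)

# HAWQ-style proxy: assigning bit-width b to layer l costs roughly
# (Hessian trace) x (quantization error), the latter shrinking ~4x per bit.
cost = (hess_trace[:, None] / 4.0 ** bits[None, :]).ravel()

# Exactly one bit-width per layer: for each l, sum_b x[l, b] == 1.
A_choice = np.kron(np.eye(L), np.ones(B))
# Model-size budget in bytes: sum_{l,b} params[l] * bits[b] / 8 * x[l, b].
size = (params[:, None] * bits[None, :] / 8.0).ravel()
budget = 0.5 * params.sum()  # e.g. half the size of a uniform 8-bit model

res = milp(c=cost,
           constraints=[LinearConstraint(A_choice, lb=1, ub=1),
                        LinearConstraint(size[None, :], lb=0, ub=budget)],
           integrality=np.ones(L * B), bounds=Bounds(0, 1))
assign = res.x.reshape(L, B).argmax(axis=1)
print("bit-widths per layer:", bits[assign])
```

The solver spends the budget where the Hessian trace says precision matters most, pushing insensitive layers down to low bit-widths.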
Prediction models are increasingly proposed for guiding treatment decisions, but most fail to address the special role of treatments, leading to inappropriate use. This paper highlights the limitations of using standard prediction models for treatment decision support. We identify 'causal blind spots' in three common approaches to handling treatments in prediction modelling and illustrate potential harmful consequences in several medical applications. We advocate for an extension of guidelines for development, reporting, clinical evaluation and monitoring of prediction models to ensure that the intended use of the model is matched to an appropriate risk estimand. For decision support this requires a shift towards developing predictions under the specific treatment options under consideration ('predictions under interventions'). We argue that this will improve the efficacy of prediction models in guiding treatment decisions and prevent potential negative effects on patient outcomes.
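The estimand shift at stake can be written compactly (in counterfactual notation assumed here, not taken from the paper): a standard prognostic model targets
\[
P(Y = 1 \mid X = x),
\]
marginal over whatever treatments patients happened to receive, whereas decision support requires the risk under each treatment option $t$ under consideration,
\[
P(Y^{t} = 1 \mid X = x),
\]
the 'prediction under interventions', which coincides with the observational conditional $P(Y = 1 \mid X = x, T = t)$ only under identification assumptions such as no unmeasured confounding.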