We study the non-parametric estimation of the value $\theta(f)$ of a linear functional evaluated at an unknown density function $f$ with support on $\mathbb{R}_+$, based on an i.i.d. sample with multiplicative measurement errors. The proposed estimation procedure combines the estimation of the Mellin transform of the density $f$ with a regularisation of the inverse Mellin transform by a spectral cut-off. In order to bound the mean squared error, we distinguish several scenarios characterised by different decay behaviours of the Mellin transforms involved and by the smoothness of the linear functional. In particular, we identify scenarios where a non-trivial choice of the tuning parameter is necessary and propose a data-driven choice based on a Goldenshluger-Lepski method. Additionally, we show minimax-optimality of the estimator over Mellin-Sobolev spaces.
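To make the construction concrete, the following is a minimal numerical sketch of such a spectral cut-off Mellin estimator, under illustrative assumptions not taken from the abstract: the error is Beta(2,1) with known Mellin transform $2/(s+1)$, the functional is $\theta(f)=P(a\le X\le b)$, and the cut-off $k$ is fixed by hand rather than chosen by the Goldenshluger-Lepski method.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)

# Simulate Y = X * U: X has unknown density f (here lognormal), and the
# multiplicative error U ~ Beta(2, 1) has known Mellin transform 2 / (s + 1).
n = 20_000
X = rng.lognormal(mean=0.0, sigma=0.5, size=n)
Y = X * rng.beta(2.0, 1.0, size=n)

# Target linear functional: theta(f) = P(a <= X <= b), i.e. psi = 1_[a, b].
a, b = 0.5, 2.0

# Work on the critical line s = 1/2 + i t, truncated at the cut-off k.
k = 10.0                                  # spectral cut-off (tuning parameter)
t = np.linspace(-k, k, 801)
s = 0.5 + 1j * t
M_psi = (b ** s - a ** s) / s             # Mellin transform of the indicator
M_err = 2.0 / (s + 1.0)                   # Mellin transform of Beta(2, 1)

# Empirical Mellin transform of Y, deconvolved by the error's transform.
M_Y = np.array([np.mean(Y ** (si - 1.0)) for si in s])
M_f_hat = M_Y / M_err

# Plug into the Mellin-Parseval identity, restricted to |t| <= k.
integrand = np.real(M_psi * np.conj(M_f_hat))
theta_hat = np.sum(integrand) * (t[1] - t[0]) / (2.0 * np.pi)

theta_true = lognorm(s=0.5).cdf(b) - lognorm(s=0.5).cdf(a)
print(f"theta_hat = {theta_hat:.3f}, theta(f) = {theta_true:.3f}")
```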
While in recent years a number of new statistical approaches have been proposed to model group differences under differing assumptions about the measurement invariance of the instruments, the tools for detecting local misspecifications of these models have not yet been fully developed. In this study, we present a novel approach using a Deep Neural Network (DNN). We compared the proposed model with the most popular traditional methods: Modification Indices (MI) and Expected Parameter Change (EPC) indicators from Confirmatory Factor Analysis (CFA) modeling, logistic DIF detection, and the sequential procedure introduced with the CFA alignment approach. Simulation studies show that the proposed method outperformed the traditional methods in almost all scenarios, or was at least as accurate as the best of them. We also provide an empirical example utilizing European Social Survey data with items known to be mistranslated, which are correctly identified by the presented DNN approach.
Estimating and reacting to external disturbances is of fundamental importance for robust control of quadrotors. Existing estimators typically require significant tuning or training with a large amount of data, including the ground truth, to achieve satisfactory performance. This paper proposes a data-efficient differentiable moving horizon estimation (DMHE) algorithm that can automatically tune the MHE parameters online and also adapt to different scenarios. We achieve this by deriving the analytical gradient of the estimated trajectory from MHE with respect to the tuning parameters, enabling end-to-end learning for auto-tuning. Most interestingly, we show that the gradient can be calculated efficiently from a Kalman filter in a recursive form. Moreover, we develop a model-based policy gradient algorithm to learn the parameters directly from the trajectory tracking errors without the need for the ground truth. The proposed DMHE can be further embedded as a layer with other neural networks for joint optimization. Finally, we demonstrate the effectiveness of the proposed method via both simulation and experiments on quadrotors, where challenging scenarios such as sudden payload change and flying in downwash are examined.
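As a rough illustration of differentiating an estimator with respect to its tuning parameters, the sketch below collapses MHE to a full-horizon quadratic smoother for a scalar toy system with a single weight $q$, and differentiates the estimate analytically via implicit differentiation; unlike the paper, it uses the ground truth in the training loss and does not use the recursive Kalman-filter form of the gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar system: random-walk state, noisy measurements.
T = 100
x_true = np.cumsum(0.1 * rng.standard_normal(T))
y = x_true + 0.5 * rng.standard_normal(T)

# Path-graph Laplacian encoding the smoothness (dynamics) penalty.
L = np.diag(np.r_[1.0, 2.0 * np.ones(T - 2), 1.0])
L -= np.diag(np.ones(T - 1), 1) + np.diag(np.ones(T - 1), -1)

def mhe(log_q):
    """Quadratic MHE over the full horizon: min_x ||y - x||^2 + q * x' L x."""
    q = np.exp(log_q)
    H = np.eye(T) + q * L
    x_hat = np.linalg.solve(H, y)
    # Implicit differentiation: H dx/dq = -L x_hat; chain rule for log_q.
    dx_dlogq = np.linalg.solve(H, -L @ x_hat) * q
    return x_hat, dx_dlogq

# Auto-tune the weight by gradient descent on the estimation error.
log_q = 0.0
for step in range(300):
    x_hat, dx = mhe(log_q)
    grad = 2.0 * (x_hat - x_true) @ dx / T   # d/dlog_q of MSE(x_hat, x_true)
    log_q -= 0.5 * grad
print(f"tuned q = {np.exp(log_q):.2f}, MSE = {np.mean((x_hat - x_true)**2):.4f}")
```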
A binary modified de Bruijn sequence is an infinite periodic binary sequence derived by removing a zero from the longest run of zeros in a binary de Bruijn sequence. The minimal polynomial of the modified sequence is its unique least-degree characteristic polynomial. Leveraging a recent characterization, we devise a novel general approach to determine the minimal polynomial. We translate the characterization into the problem of identifying a Hamiltonian cycle in a specially constructed graph. Along the way, we demonstrate the usefulness of computational tools from the cycle joining method in the modified setup.
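For readers who want to experiment, the sketch below computes the minimal polynomial of a small modified de Bruijn sequence by brute force with the Berlekamp-Massey algorithm; this is a baseline for checking results, not the graph-based method of the abstract.

```python
def de_bruijn(n):
    """Greedy 'prefer one' construction of a binary de Bruijn sequence of order n."""
    seq = [0] * n
    seen = {tuple(seq)}
    while True:
        for bit in (1, 0):
            window = tuple(seq[-(n - 1):] + [bit]) if n > 1 else (bit,)
            if window not in seen:
                seen.add(window)
                seq.append(bit)
                break
        else:
            break
    return seq[: 1 << n]          # one full period, length 2^n

def berlekamp_massey(bits):
    """Shortest LFSR for a binary sequence: connection polynomial and complexity L."""
    c, b = [1], [1]
    L, m = 0, -1
    for i, bit in enumerate(bits):
        d = bit
        for j in range(1, L + 1):       # discrepancy against current LFSR
            d ^= c[j] & bits[i - j]
        if d:
            t, shift = c[:], i - m
            c = c + [0] * max(0, shift + len(b) - len(c))
            for j, bj in enumerate(b):
                c[j + shift] ^= bj
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return c, L

n = 3
s = "".join(map(str, de_bruijn(n)))
s = s.replace("0" * n, "0" * (n - 1), 1)   # drop one zero: period 2^n - 1
bits = [int(ch) for ch in s * 3]           # a few periods suffice (L <= period)
conn, L = berlekamp_massey(bits)
# conn is the connection polynomial C(x); the minimal polynomial is its
# reciprocal x^L * C(1/x).
print(f"period {len(s)}, linear complexity {L}, connection poly {conn}")
```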
In this work, we study a random orthogonal projection based least squares estimator for the stable solution of a multivariate nonparametric regression (MNPR) problem. More precisely, given an integer $d\geq 1$ corresponding to the dimension of the MNPR problem, a positive integer $N\geq 1$ and a real parameter $\alpha\geq -\frac{1}{2},$ we show that a fairly large class of $d$-variate regression functions is well and stably approximated by its random projection over the orthonormal set of tensor-product $d$-variate Jacobi polynomials with parameters $(\alpha,\alpha).$ The associated univariate Jacobi polynomials have degree at most $N$ and their tensor products are orthonormal over $\mathcal U=[0,1]^d,$ with respect to the associated multivariate Jacobi weight. In particular, if we consider $n$ random sampling points $\mathbf X_i$ following the $d$-variate Beta distribution with parameters $(\alpha+1,\alpha+1),$ then we give a relation among $n, N$ and $\alpha$ that ensures that the resulting $(N+1)^d\times (N+1)^d$ random projection matrix is well conditioned. Moreover, we provide the integrated squared error as well as the $L^2$-risk error of this estimator. Precise estimates of these errors are given in the case where the regression function belongs to an isotropic Sobolev space $H^s(\mathcal U)$ with $s> \frac{d}{2}.$ Also, to handle the general and practical case of an unknown distribution of the $\mathbf X_i,$ we use Shepard's scattered interpolation scheme to generate fairly precise approximations of the observed data at $n$ i.i.d. sampling points $\mathbf X_i$ following a $d$-variate Beta distribution. Finally, we illustrate the performance of the proposed multivariate nonparametric estimator through numerical simulations with synthetic as well as real data.
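A minimal sketch of the estimator follows, with an illustrative regression function and parameter choices that are not taken from the abstract; the Shepard interpolation step for unknown designs is omitted, and the conditioning of the projection matrix is simply reported rather than guaranteed via the relation among $n$, $N$ and $\alpha$.

```python
import numpy as np
from itertools import product
from scipy.special import eval_jacobi

rng = np.random.default_rng(2)

d, N, alpha = 2, 5, 0.0       # dimension, maximal degree, Jacobi parameter
n = 4000                      # sample size, large relative to (N + 1)^d

# Tensor-product Jacobi design matrix; scipy's Jacobi polynomials live on
# [-1, 1], so points in [0, 1]^d are mapped via x -> 2x - 1.
multi_idx = list(product(range(N + 1), repeat=d))
def design(pts):
    Phi = np.ones((len(pts), len(multi_idx)))
    for j, k in enumerate(multi_idx):
        for axis in range(d):
            Phi[:, j] *= eval_jacobi(k[axis], alpha, alpha, 2.0 * pts[:, axis] - 1.0)
    return Phi

# Random design X_i ~ Beta(alpha + 1, alpha + 1) per coordinate, noisy responses.
X = rng.beta(alpha + 1.0, alpha + 1.0, size=(n, d))
f = lambda x: np.sin(2.0 * np.pi * x[:, 0]) * np.exp(-x[:, 1])
y = f(X) + 0.1 * rng.standard_normal(n)

Phi = design(X)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Conditioning of the (column-normalised) random projection matrix.
Phi_n = Phi / (np.linalg.norm(Phi, axis=0) / np.sqrt(n))
print("condition number:", np.linalg.cond(Phi_n.T @ Phi_n / n))

X_new = rng.beta(alpha + 1.0, alpha + 1.0, size=(5000, d))
rmse = np.sqrt(np.mean((design(X_new) @ coef - f(X_new)) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f}")
```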
We propose a new wavelet-based method for density estimation when the data are size-biased. More specifically, we consider a power of the density of interest, where this power exceeds 1/2. Warped wavelet bases are employed, where the warping is induced by a continuous cumulative distribution function. This can be seen as a general framework in which conventional orthonormal wavelet estimation is the special case where the warping distribution is the standard uniform c.d.f. We show that both linear and nonlinear wavelet estimators are consistent, with optimal and/or near-optimal rates. Monte Carlo simulations are performed to compare four special settings which are easy to interpret in practice. An application to a real dataset on fatal traffic accidents involving alcohol illustrates the method. We observe that warped bases provide more flexible and superior estimates for both simulated and real data. Moreover, we find that estimating a power of the density (for instance, its square root) further improves the results.
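The sketch below illustrates the warped linear estimator with size-bias correction in the simplest (Haar, level $J$) case, with a hypothetical exponential warping c.d.f.; the power-of-density step described in the abstract is omitted.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(3)

# Size-biased sampling: W has density w f(w) / mu with f = Gamma(2, 1) here,
# so W ~ Gamma(3, 1) (length-biased gamma).
n = 5000
W = rng.gamma(3.0, 1.0, size=n)

# Inverse-size weights undo the bias: mu * E_biased[phi(W) / W] = E_f[phi(X)].
mu_hat = n / np.sum(1.0 / W)             # since E_biased[1/W] = 1/mu
weights = mu_hat / (n * W)

# Warping c.d.f. G (a hypothetical choice); G = identity on [0, 1] would
# recover the conventional orthonormal wavelet estimator.
G = lambda x: 1.0 - np.exp(-x / 2.0)
g = lambda x: np.exp(-x / 2.0) / 2.0

# Linear Haar estimator at level J of the warped density q, where f = (q o G) * g.
J = 4
edges = np.linspace(0.0, 1.0, 2 ** J + 1)
alpha_hat = np.array([np.sum(weights[(lo <= G(W)) & (G(W) < hi)])
                      for lo, hi in zip(edges[:-1], edges[1:])]) * 2 ** (J / 2)

def f_hat(x):
    """Unwarp: f(x) = q(G(x)) g(x), with q the Haar series at level J."""
    k = np.clip((G(x) * 2 ** J).astype(int), 0, 2 ** J - 1)
    return alpha_hat[k] * 2 ** (J / 2) * g(x)

x = np.linspace(0.1, 8.0, 5)
print(np.c_[x, f_hat(x), gamma(2.0).pdf(x)])   # estimate vs. true density
```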
For multivariate stationary time series, many important properties, such as partial correlations, graphical models and autoregressive representations, are encoded in the inverse of the spectral density matrix. This is not true for nonstationary time series, where the pertinent information lies in the inverse of the infinite-dimensional covariance matrix operator associated with the multivariate time series. This necessitates the study of the covariance of a multivariate nonstationary time series and its relationship to its inverse. We show that if the rows/columns of the infinite-dimensional covariance matrix decay at a certain rate, then the rate (up to a factor) transfers to the rows/columns of the inverse covariance matrix. This is used to obtain a nonstationary autoregressive representation of the time series and a Baxter-type bound between the parameters of the infinite-order autoregressive representation and those of the corresponding finite-order autoregressive projection. The aforementioned results lay the foundation for the subsequent analysis of locally stationary time series. In particular, we show that smoothness properties of the covariance matrix transfer to (i) the inverse covariance, (ii) the parameters of the vector autoregressive representation and (iii) the partial covariances. All results are set up in such a way that the constants involved depend only on the eigenvalues of the covariance matrix, and they can be applied in high-dimensional settings with non-diverging eigenvalues.
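The decay-transfer phenomenon is easy to observe numerically. The toy example below builds a nonstationary covariance matrix with cubic off-diagonal decay and bounded eigenvalues (an illustrative construction, not one from the abstract) and compares the off-diagonal decay of the matrix with that of its inverse.

```python
import numpy as np

p = 200
j, k = np.indices((p, p))
# Kernel with cubic off-diagonal decay; the row/column-dependent modulation
# makes the covariance genuinely nonstationary, and diagonal dominance keeps
# the eigenvalues bounded away from zero and infinity.
K = np.cos(0.1 * j) * np.cos(0.07 * k) / (1.0 + np.abs(j - k)) ** 3
Sigma = np.eye(p) + 0.4 * (K + K.T) / 2.0
print("eigenvalue range:", np.linalg.eigvalsh(Sigma)[[0, -1]])

Theta = np.linalg.inv(Sigma)
for lag in (1, 5, 10, 50):
    dS = np.abs(np.diag(Sigma, lag)).max()   # decay of the covariance
    dT = np.abs(np.diag(Theta, lag)).max()   # decay of its inverse
    print(f"lag {lag:3d}: |Sigma| ~ {dS:.2e}, |Sigma^-1| ~ {dT:.2e}")
```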
We establish a new perturbation theory for orthogonal polynomials using a Riemann--Hilbert approach and consider applications in numerical linear algebra and random matrix theory. This new approach shows that the orthogonal polynomials with respect to two measures can be effectively compared using the difference of their Stieltjes transforms on a suitably chosen contour. Moreover, when two measures are close and satisfy some regularity conditions, we use the theta functions of a hyperelliptic Riemann surface to derive explicit and accurate expansion formulae for the perturbed orthogonal polynomials. In contrast to other approaches, a key strength of the methodology is that estimates can remain valid as the degree of the polynomial grows. The results are applied to analyze several numerical algorithms from linear algebra, including the Lanczos tridiagonalization procedure, the Cholesky factorization and the conjugate gradient algorithm. As a case study, we investigate these algorithms applied to a general spiked sample covariance matrix model by considering the eigenvector empirical spectral distribution and its limits. For the first time, we give precise estimates on the output of the algorithms, applied to this wide class of random matrices, as the number of iterations diverges. In this setting, beyond the first order expansion, we also derive a new mesoscopic central limit theorem for the associated orthogonal polynomials and other quantities relevant to numerical algorithms.
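The connection to numerical linear algebra can be made tangible: the Jacobi (tridiagonal) matrix produced by Lanczos applied to a matrix $A$ with starting vector $v$ collects the three-term recurrence coefficients of the orthogonal polynomials with respect to the eigenvector empirical spectral distribution of $(A, v)$. The sketch below runs plain Lanczos on a spiked sample covariance matrix; the model parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Spiked model: Sigma = I + spikes, sample covariance S = (1/n) X X^T.
p, n = 300, 900
spikes = np.array([8.0, 4.0])
sqrt_Sigma = np.eye(p)
sqrt_Sigma[:2, :2] += np.diag(np.sqrt(1.0 + spikes) - 1.0)
X = sqrt_Sigma @ rng.standard_normal((p, n))
S = X @ X.T / n

def lanczos(A, v, m):
    """Plain Lanczos tridiagonalisation (no reorthogonalisation); returns the
    Jacobi matrix, whose entries are the orthogonal-polynomial recurrence
    coefficients for the eigenvector empirical spectral distribution of (A, v)."""
    alpha, beta = [], []
    q_prev, q = np.zeros(len(v)), v / np.linalg.norm(v)
    for _ in range(m):
        w = A @ q
        a = q @ w
        w -= a * q + (beta[-1] if beta else 0.0) * q_prev
        b = np.linalg.norm(w)
        alpha.append(a); beta.append(b)
        q_prev, q = q, w / b
    return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

T = lanczos(S, rng.standard_normal(p), 30)
print("largest Ritz values:      ", np.linalg.eigvalsh(T)[-4:])
print("largest eigenvalues of S: ", np.linalg.eigvalsh(S)[-4:])
```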
In this work, the problem of 4-degree-of-freedom (3D position and heading) robot-to-robot relative frame transformation estimation using onboard odometry and inter-robot distance measurements is studied. Firstly, we present a theoretical analysis of the problem, namely the derivation and interpretation of the Cramér-Rao Lower Bound (CRLB), the Fisher Information Matrix (FIM) and its determinant. Secondly, we propose optimization-based methods to solve the problem, including a quadratically constrained quadratic programming (QCQP) formulation and the corresponding semidefinite programming (SDP) relaxation. Moreover, we address practical issues that were ignored in previous works, such as accounting for spatial-temporal offsets between the ultra-wideband (UWB) and odometry sensors, rejecting UWB outliers and checking for singular configurations before commencing operation. Lastly, extensive simulations and real-life experiments with aerial robots show that the proposed QCQP and SDP methods outperform state-of-the-art methods, especially under geometrically poor configurations or large measurement noise. In general, the QCQP method provides the best results at the expense of computational time, while the SDP method runs much faster and is sufficiently accurate in most cases.
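As a point of reference, the following sketch solves the same 4-DoF problem by plain local nonlinear least squares on simulated odometry and ranges (all quantities synthetic and illustrative); such a local solver can get trapped in poor geometries, which is one motivation for certifiable QCQP/SDP formulations.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

def yaw_rot(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Simulated odometry trajectories in each robot's own frame, plus the unknown
# 4-DoF transform (translation t, heading psi) between the frames.
m = 60
p1 = np.cumsum(0.3 * rng.standard_normal((m, 3)), axis=0)
p2 = np.cumsum(0.3 * rng.standard_normal((m, 3)), axis=0)
t_true, psi_true = np.array([2.0, -1.0, 0.5]), 0.8
d = np.linalg.norm(p1 - (p2 @ yaw_rot(psi_true).T + t_true), axis=1)
d += 0.02 * rng.standard_normal(m)        # UWB range noise

def residuals(theta):
    t, psi = theta[:3], theta[3]
    return np.linalg.norm(p1 - (p2 @ yaw_rot(psi).T + t), axis=1) - d

sol = least_squares(residuals, x0=np.zeros(4))
print("estimate:", sol.x.round(3), " truth:", np.r_[t_true, psi_true])
```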
Conventional speech spoofing countermeasures (CMs) are designed to make a binary decision on an input trial. However, a CM trained on a closed-set database is theoretically not guaranteed to perform well on unknown spoofing attacks. In some scenarios, an alternative strategy is to let the CM defer a decision when it is not confident. The question is then how to estimate a CM's confidence regarding an input trial. We investigated a few confidence estimators that can be easily plugged into a CM. On the ASVspoof2019 logical access database, the results demonstrate that an energy-based estimator and a neural-network-based one achieved acceptable performance in identifying unknown attacks in the test set. On a test set with additional unknown attacks and bona fide trials from other databases, the confidence estimators performed moderately well, and the CMs better discriminated bona fide and spoofed trials that had a high confidence score. Additional results also revealed the difficulty in enhancing a confidence estimator by adding unknown attacks to the training set.
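A sketch of the energy-based estimator, assuming a CM that outputs two logits (bona fide, spoof); the deferral threshold here is an arbitrary illustrative value.

```python
import numpy as np
from scipy.special import logsumexp

def energy_confidence(logits, T=1.0):
    """Energy-based confidence score: the negative free energy -E(x) =
    T * logsumexp(logits / T). Larger means more confident, so the CM can
    defer a decision whenever the score falls below a threshold."""
    return T * logsumexp(logits / T, axis=-1)

# Toy CM logits for (bona fide, spoof); near-uniform logits signal low confidence.
logits = np.array([[4.0, -3.0],    # confident bona fide
                   [0.2, 0.1],     # unsure -> defer
                   [-2.5, 3.5]])   # confident spoof
conf = energy_confidence(logits)
decision = np.where(conf < 1.0, "defer",
                    np.where(logits[:, 0] > logits[:, 1], "bona fide", "spoof"))
print(np.c_[conf.round(2), decision])
```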
We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques from distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that, in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
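A minimal sketch of the idea: for a $\chi^2$-divergence ball of radius $\rho/n$ around the empirical distribution, the distributionally robust objective expands to the mean loss plus $\sqrt{2\rho\,\mathrm{Var}_n(\ell)/n}$, so one can descend this variance-regularized surrogate directly. The code below does so for logistic loss with illustrative constants; the exact procedure would solve the inner empirical-likelihood maximization rather than use this first-order expansion.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy binary classification data.
n, d = 400, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true + 0.5 * rng.standard_normal(n))

def losses_and_grads(w):
    z = y * (X @ w)
    ell = np.logaddexp(0.0, -z)                    # logistic loss per sample
    g = -(y / (1.0 + np.exp(z)))[:, None] * X      # per-sample gradients
    return ell, g

rho = 5.0
c = np.sqrt(2.0 * rho / n)                         # robustness radius rho / n
w = np.zeros(d)
for _ in range(500):
    ell, g = losses_and_grads(w)
    sd = ell.std() + 1e-12
    # Gradient of mean(ell) + c * std(ell): the variance term reweights samples
    # by their centred loss, penalising high-variance solutions automatically.
    grad = g.mean(axis=0) + c * ((ell - ell.mean())[:, None] * g).mean(axis=0) / sd
    w -= 0.5 * grad

ell, _ = losses_and_grads(w)
print(f"robust surrogate: {ell.mean() + c * ell.std():.4f}")
```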