We extend Huber's (1963) inequality for the joint distribution function of negatively dependent scores in round-robin tournaments. As a byproduct, this extension implies convergence in probability of the maximal score in round-robin tournaments in a more general setting.
We show that the solution of the Hermite-Pad\'{e} type I approximation problem leads in a natural way to a subclass of solutions of the Hirota (discrete Kadomtsev-Petviashvili) system and of its adjoint linear problem. Our result explains the appearance of various ingredients of integrable systems theory in applications to multiple orthogonal polynomials, numerical algorithms, random matrices, and other branches of mathematical physics and applied mathematics where the Hermite-Pad\'{e} approximation problem is relevant. We also present a geometric algorithm, based on the notion of Desargues maps, for constructing solutions of the problem in the projective space over the field of rational functions. As a byproduct we obtain the corresponding generalization of the Wynn recurrence. We isolate the boundary data of the Hirota system which provide solutions to the Hermite-Pad\'{e} problem, showing that the corresponding reduction lowers the dimensionality of the system. In particular, we obtain certain equations which, in addition to the known ones given by Paszkowski, can be considered as direct analogs of the Frobenius identities. We study the place of the reduced system within the integrability theory, which results in a multidimensional (in the number of variables) extension of the discrete-time Toda chain equations.
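For orientation, the following is a standard statement of the Hermite-Pad\'{e} type I problem (the notation here is ours, not necessarily that of the paper):

```latex
% Hermite--Pad\'e type I approximation (classical formulation):
% given formal power series $f_1,\dots,f_m$ and a multi-index
% $(n_1,\dots,n_m)$, find polynomials $A_1,\dots,A_m$, not all zero,
% with $\deg A_j \le n_j$, such that
\[
  \sum_{j=1}^{m} A_j(x)\, f_j(x) = O\!\left(x^{|n|+m-1}\right),
  \qquad |n| = n_1 + \cdots + n_m .
\]
```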
We investigate the problem of approximating the matrix function $f(A)$ by $r(A)$, with $f$ a Markov function, $r$ a rational interpolant of $f$, and $A$ a symmetric Toeplitz matrix. In a first step, we obtain a new upper bound for the relative interpolation error $1-r/f$ on the spectral interval of $A$. By minimizing this upper bound over all interpolation points, we obtain a new, simple and sharp a priori bound for the relative interpolation error. We then consider three different approaches to representing and computing the rational interpolant $r$. Theoretical and numerical evidence is given that any of these methods achieves high precision for a scalar argument, even in the presence of finite-precision arithmetic. We finally investigate the problem of efficiently evaluating $r(A)$, where it turns out that the relative error for a matrix argument is only small if we use a partial fraction decomposition of $r$ following Antoulas and Mayo. An important role is played by a new stopping criterion which automatically finds the degree of $r$ leading to a small error, even in the presence of finite-precision arithmetic.
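The evaluation strategy singled out above, writing $r$ in partial fraction form so that $r(A)b$ reduces to a sequence of shifted linear solves, can be sketched as follows. This is a generic illustration: the poles/weights interface and the dense solver are assumptions, not the paper's implementation (which would exploit the Toeplitz structure of $A$).

```python
import numpy as np

def eval_partial_fraction(A, b, poles, weights, c0=0.0):
    """Evaluate r(A) @ b for r(z) = c0 + sum_i weights[i] / (z - poles[i]).

    Each term costs one shifted linear solve; for a symmetric Toeplitz A
    these solves could use fast structured solvers, but np.linalg.solve
    keeps the sketch self-contained.
    """
    n = A.shape[0]
    y = c0 * b
    for z, w in zip(poles, weights):
        x = np.linalg.solve(A - z * np.eye(n), b)  # (A - z I) x = b
        y = y + w * x
    return y

# Toy usage with placeholder poles/weights (for a Markov function such
# as f(z) = 1/sqrt(z), the poles of r would lie on the negative axis):
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
print(eval_partial_fraction(A, b, poles=[-1.0], weights=[1.0]))
```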
Given an $n$-point metric space $(\mathcal{X},d)$ where each point belongs to one of $m=O(1)$ different categories or groups, and a set of integers $k_1, \ldots, k_m$, the fair Max-Min diversification problem is to select $k_i$ points belonging to category $i\in [m]$, such that the minimum pairwise distance between selected points is maximized. The problem was introduced by Moumoulidou et al. [ICDT 2021] and is motivated by the need to down-sample large data sets in various applications so that the derived sample achieves a balance between diversity, i.e., the minimum distance between a pair of selected points, and fairness, i.e., ensuring that enough points of each category are included. We prove the following results: 1. We first consider general metric spaces. We present a randomized polynomial-time algorithm that returns a $2$-approximation to the diversity but only satisfies the fairness constraints in expectation. Building upon this result, we present a $6$-approximation that is guaranteed to satisfy the fairness constraints up to a factor $1-\epsilon$ for any constant $\epsilon$. We also present a linear-time algorithm returning an $(m+1)$-approximation with exact fairness. The best previous result was a $(3m-1)$-approximation. 2. We then focus on Euclidean metrics. We first show that the problem can be solved exactly in one dimension. For constant dimension, a constant number of categories, and any constant $\epsilon>0$, we present a $(1+\epsilon)$-approximation algorithm that runs in $O(nk) + 2^{O(k)}$ time, where $k=k_1+\ldots+k_m$. We can improve the running time to $O(nk)+ \mathrm{poly}(k)$ at the expense of only picking $(1-\epsilon) k_i$ points from category $i\in [m]$. Finally, we present algorithms suitable for processing massive data sets, including single-pass data stream algorithms and composable coresets for distributed processing.
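For intuition, a simple farthest-point greedy that respects the per-category quotas can be written as below. This baseline is only illustrative; it is not one of the approximation algorithms analyzed in the paper, and `points`, `labels`, `quotas`, and `dist` are assumed interfaces.

```python
def fair_maxmin_greedy(points, labels, quotas, dist):
    """Farthest-point greedy under per-category quotas (illustrative).

    points: list of items; labels[i] in range(m); quotas[c] = k_c;
    dist(x, y): the metric.  Greedily adds the admissible point
    farthest from the current selection.
    """
    selected = []
    remaining = dict(enumerate(quotas))            # category -> open slots
    for i, c in enumerate(labels):                 # seed with any admissible point
        if remaining[c] > 0:
            selected.append(i); remaining[c] -= 1
            break
    while any(v > 0 for v in remaining.values()):
        best, best_d = None, -1.0
        for i, c in enumerate(labels):
            if i in selected or remaining[c] == 0:
                continue
            d_i = min(dist(points[i], points[j]) for j in selected)
            if d_i > best_d:
                best, best_d = i, d_i
        if best is None:                           # quotas not satisfiable
            break
        selected.append(best); remaining[labels[best]] -= 1
    return selected
```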
A priori error bounds have been derived for various balancing-related model reduction methods. The most classical result is a bound for balanced truncation and singular perturbation approximation that is applicable to asymptotically stable linear time-invariant systems with homogeneous initial conditions. Recently, there have been a few attempts to generalize the balancing-related reduction methods to the case of inhomogeneous initial conditions, but the existing error bounds for these generalizations are quite restrictive. In particular, the initial conditions must be restricted to a low-dimensional subspace that has to be chosen before the reduced model is constructed. In this paper, we propose an error estimator that circumvents this hard constraint completely. Our estimator is applicable to a large class of reduction methods, whereas the earlier results were derived only for certain specific methods. Moreover, our approach yields significantly more effective error estimation, as will also be demonstrated numerically.
We study the off-policy evaluation (OPE) problem in an infinite-horizon Markov decision process with continuous states and actions. We recast $Q$-function estimation as a special form of the nonparametric instrumental variables (NPIV) estimation problem. We first show that, under a mild condition, the NPIV formulation of $Q$-function estimation is well-posed in the sense of the $L^2$-measure of ill-posedness with respect to the data-generating distribution, bypassing a strong assumption on the discount factor $\gamma$ imposed in the recent literature for obtaining $L^2$ convergence rates of various $Q$-function estimators. Thanks to this new well-posedness property, we derive the first minimax lower bounds for the convergence rates of nonparametric estimation of the $Q$-function and its derivatives in both sup-norm and $L^2$-norm, which are shown to be the same as those for classical nonparametric regression (Stone, 1982). We then propose a sieve two-stage least squares estimator and establish its rate optimality in both norms under some mild conditions. Our general results on well-posedness and minimax lower bounds are of independent interest for studying not only other nonparametric estimators of the $Q$-function but also efficient estimation of the value of any target policy in off-policy settings.
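To make the estimator concrete, a minimal sieve 2SLS sketch with linear sieve bases follows. All interfaces here ($\phi$, $\psi$, the policy feature map) are our assumptions for illustration; the paper's estimator and conditions are more general.

```python
import numpy as np

def sieve_2sls_q(phi, psi, S, A, R, S_next, pi_phi, gamma):
    """Sieve 2SLS for the Q-function (illustrative sketch).

    Bellman restriction: E[R + gamma * Q(S', pi) - Q(S, A) | S, A] = 0,
    an NPIV moment condition with instrument (S, A).  Q is modeled as
    phi(s, a) @ beta; pi_phi(s') returns E_{a' ~ pi} phi(s', a').
    """
    Phi = np.stack([phi(s, a) for s, a in zip(S, A)])       # n x K
    Phi_pi = np.stack([pi_phi(s2) for s2 in S_next])        # n x K
    Psi = np.stack([psi(s, a) for s, a in zip(S, A)])       # n x J
    X = Phi - gamma * Phi_pi                                # "endogenous" design
    P = Psi @ np.linalg.pinv(Psi.T @ Psi) @ Psi.T           # projection on instruments
    beta = np.linalg.pinv(X.T @ P @ X) @ (X.T @ P @ np.asarray(R))
    return lambda s, a: phi(s, a) @ beta                    # fitted Q-hat
```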
This paper deals with robust inference for parametric copula models. Estimation using Canonical Maximum Likelihood can be unstable, especially in the presence of outliers. We propose to use a procedure based on the Maximum Mean Discrepancy (MMD) principle. We derive non-asymptotic oracle inequalities, consistency and asymptotic normality of this new estimator. In particular, the oracle inequality holds without any assumption on the copula family and can be applied in the presence of outliers or under misspecification. Moreover, in our MMD framework, statistical inference becomes feasible for copula models that have no density with respect to the Lebesgue measure on $[0,1]^d$, such as the Marshall-Olkin copula. A simulation study demonstrates the robustness of our new procedures, especially compared to pseudo-maximum likelihood estimation. An R package implementing the MMD estimator for copula models is available.
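As a sketch of the MMD principle in this setting, one can compare pseudo-observations of the data with samples simulated from the parametric copula and pick the parameter minimizing an empirical MMD. The kernel, bandwidth, and grid search below are illustrative simplifications, not the paper's (or the R package's) implementation.

```python
import numpy as np

def gaussian_mmd2(X, Y, bw=0.5):
    """Biased squared MMD between samples X, Y (arrays of shape (n, d))."""
    def k(U, V):
        d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def mmd_fit(pseudo_obs, sampler, grid):
    """Minimum-MMD parameter over a grid; sampler(theta, n) simulates
    n points from the copula with parameter theta."""
    n = len(pseudo_obs)
    return min(grid, key=lambda t: gaussian_mmd2(pseudo_obs, sampler(t, n)))
```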
The Gromov-Hausdorff distance $d_{GH}$ proves to be a useful distance measure between shapes. In order to approximate $d_{GH}$ for compact subsets $X,Y\subset\mathbb{R}^d$, we look into its relationship with $d_{H,iso}$, the infimum of the Hausdorff distance under Euclidean isometries. As is already known for dimension $d\geq 2$, $d_{H,iso}$ cannot be bounded above by a constant factor times $d_{GH}$. For $d=1$, however, we prove that $d_{H,iso}\leq\frac{5}{4}d_{GH}$, and we show that this bound is tight. In effect, this gives rise to an $O(n\log{n})$-time algorithm to approximate $d_{GH}$ within an approximation factor of $\frac{5}{4}$.
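For $d=1$ and finite point sets, the ingredients are easy to prototype: the Hausdorff distance between subsets of $\mathbb{R}$ is computable in $O(n\log n)$ via sorting and binary search, and one can then minimize over reflections and candidate translations. The candidate set below is a heuristic stand-in for the exact alignment step behind the paper's algorithm.

```python
import numpy as np

def hausdorff_1d(X, Y):
    """Hausdorff distance between finite subsets of R (O(n log n))."""
    X, Y = np.sort(np.asarray(X, float)), np.sort(np.asarray(Y, float))
    def directed(U, V):
        idx = np.searchsorted(V, U)
        left = np.abs(U - V[np.clip(idx - 1, 0, len(V) - 1)])
        right = np.abs(U - V[np.clip(idx, 0, len(V) - 1)])
        return np.minimum(left, right).max()
    return max(directed(X, Y), directed(Y, X))

def hausdorff_under_isometry(X, Y):
    """Minimize over reflections and a few candidate translations;
    a heuristic, not the paper's exact procedure."""
    X = np.asarray(X, float)
    best = np.inf
    for Z in (np.asarray(Y, float), -np.asarray(Y, float)):
        cands = [(X.min() + X.max()) / 2 - (Z.min() + Z.max()) / 2,
                 X.min() - Z.min(), X.max() - Z.max()]
        for t in cands:
            best = min(best, hausdorff_1d(X, Z + t))
    return best
```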
Knowledge Graph (KG) embedding is a fundamental problem in data mining research with many real-world applications. It aims to encode the entities and relations of the graph into a low-dimensional vector space that can be used by subsequent algorithms. Negative sampling, which samples negative triplets from the non-observed ones in the training data, is an important step in KG embedding. Recently, generative adversarial networks (GANs) have been introduced into negative sampling. By sampling negative triplets with large scores, these methods avoid the problem of vanishing gradients and thus obtain better performance. However, using a GAN makes the original model more complex and harder to train, since reinforcement learning must be used. In this paper, motivated by the observation that negative triplets with large scores are important but rare, we propose to keep track of them directly with a cache. How to sample from and how to update the cache are then two important design questions. We carefully design solutions that are not only efficient but also achieve a good balance between exploration and exploitation. In this way, our method acts as a "distilled" version of previous GAN-based methods, which does not waste training time on additional parameters to fit the full distribution of negative triplets. Extensive experiments show that our method yields significant improvements across various KG embedding models and outperforms the state-of-the-art negative sampling methods based on GANs.
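A minimal sketch of the cache mechanism, i.e., sample a negative from the cache and then refresh the cache with a few freshly scored random candidates, is given below. Uniform sampling and a top-$k$ refresh are simplifications of the exploration/exploitation scheme described in the paper; all names are ours.

```python
import numpy as np

def cache_negative_step(cache, candidate_pool, score_fn, n_lazy=50):
    """One step of cache-based negative sampling (simplified sketch).

    cache: 1-D array of negative-triplet ids with (believed) large scores.
    Returns the sampled training negative and the refreshed cache.
    """
    neg = np.random.choice(cache)                       # exploit the cache
    lazy = np.random.choice(candidate_pool, size=n_lazy, replace=False)
    pool = np.concatenate([cache, lazy])                # explore new candidates
    scores = score_fn(pool)                             # model scores, larger = harder
    cache = pool[np.argsort(-scores)[: len(cache)]]     # keep the top scorers
    return neg, cache
```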
Implicit probabilistic models are models defined naturally in terms of a sampling procedure; they often induce likelihood functions that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, yet can be shown to be equivalent to maximum likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also report encouraging experimental results.
In this paper we study the frequentist convergence rate for Latent Dirichlet Allocation (Blei et al., 2003) topic models. We show that the maximum likelihood estimator converges to one of the finitely many equivalent parameters in the Wasserstein distance at a rate of $n^{-1/4}$, without assuming separability or non-degeneracy of the underlying topics and/or the existence of more than three words per document, thus generalizing the previous works of Anandkumar et al. (2012, 2014) from an information-theoretic perspective. We also show that the $n^{-1/4}$ convergence rate is optimal in the worst case.