Compared to the nominal scale, the ordinal scale for a categorical outcome variable makes a monotonicity assumption for the covariate effects meaningful. This assumption is encoded in the commonly used proportional odds model, but there it is combined with other parametric assumptions such as linearity and additivity. Herein, the considered models are non-parametric, and the only condition imposed is that the effects of the covariates on the outcome categories are stochastically monotone according to the ordinal scale. We are not aware of other comparable multivariable models suitable for inference purposes. We generalize our previously proposed Bayesian monotonic multivariable regression model to ordinal outcomes, and propose an estimation procedure based on reversible jump Markov chain Monte Carlo. The model is based on a marked point process construction, which allows it to approximate arbitrary monotonic regression function shapes, and it has a built-in covariate selection property. We study the performance of the proposed approach through extensive simulation studies, and demonstrate its practical application in two real data examples.
We present a framework for performing regression when both covariate and response are probability distributions on a compact interval $\Omega\subset\mathbb{R}$. Our regression model is based on the theory of optimal transportation and links the conditional Fr\'echet mean of the response distribution to the covariate distribution via an optimal transport map. We define a Fr\'echet-least-squares estimator of this regression map, and establish its consistency and rate of convergence to the true map, under both full and partial observation of the regression pairs. Computation of the estimator is shown to reduce to an isotonic regression problem, and thus our regression model can be implemented with ease. We illustrate our methodology using real and simulated data.
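Since computation of the Fréchet-least-squares estimator is stated to reduce to an isotonic regression problem, the workhorse for any such fit is the pool-adjacent-violators algorithm (PAVA). A minimal pure-Python sketch of PAVA as a generic building block (illustrative only; this is not the paper's estimator itself):

```python
# Pool-adjacent-violators algorithm (PAVA): finds the nondecreasing
# sequence closest in least squares to y. Generic sketch of the
# isotonic-regression step, not the paper's Frechet-least-squares map.
def pava(y):
    # Each block stores [sum, count]; merge blocks while the running
    # means violate monotonicity.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # -> [1.0, 2.5, 2.5, 4.0]
```

The violating pair (3, 2) is pooled to its mean 2.5, which is the least-squares monotone projection.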
In fields such as finance, insurance, and system reliability, it is often of interest to measure the dependence among variables by modeling a multivariate distribution using a copula. Parametric copula models are easy to estimate but can be highly biased when their assumptions are false, while empirical copulas are non-smooth and often not genuine copulas, making inference about dependence challenging in practice. As a compromise, the empirical Bernstein copula provides a smooth estimator, but the estimation of its tuning parameters remains elusive. In this paper, using the so-called empirical checkerboard copula, we build a hierarchical empirical Bayes model that enables the estimation of a smooth copula function in arbitrary dimensions. The proposed estimator, based on multivariate Bernstein polynomials, is itself a genuine copula, and the selection of its dimension-varying degrees is data-dependent. We also show that the proposed copula estimator provides a more accurate estimate of several multivariate dependence measures, which can be obtained in closed form. We investigate the asymptotic and finite-sample performance of the proposed estimator and compare it with some nonparametric estimators through simulation studies. An application to portfolio risk management is presented along with a quantification of estimation uncertainty.
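The smoothing described above starts from the non-smooth empirical copula built from ranks. A minimal bivariate sketch of that raw empirical step (the paper's hierarchical Bayes Bernstein smoothing is far more elaborate; the function name is illustrative, and ties are assumed absent):

```python
# Empirical (bivariate) copula from ranks -- the non-smooth raw
# ingredient that Bernstein-polynomial methods smooth. Illustrative
# sketch only; assumes no ties in the data.
def empirical_copula(xs, ys):
    n = len(xs)
    rx = [sorted(xs).index(x) + 1 for x in xs]   # ranks of xs
    ry = [sorted(ys).index(y) + 1 for y in ys]   # ranks of ys
    u = [r / n for r in rx]                      # pseudo-observations
    v = [r / n for r in ry]
    def C(a, b):
        return sum(1 for ui, vi in zip(u, v) if ui <= a and vi <= b) / n
    return C

# Comonotone toy data: the empirical copula tracks the upper
# Frechet bound min(u, v).
C = empirical_copula([0.2, 1.5, 0.7, 2.1], [3.0, 9.0, 5.0, 11.0])
print(C(0.5, 0.5))  # -> 0.5
```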
The logistic and probit link functions are the most common choices for regression models with a binary response. However, these choices are not robust to the presence of outliers/unexpected observations. The robit link function, which is equal to the inverse CDF of the Student's $t$-distribution, provides a robust alternative to the probit and logistic link functions. A multivariate normal prior for the regression coefficients is the standard choice for Bayesian inference in robit regression models. The resulting posterior density is intractable and a Data Augmentation (DA) Markov chain is used to generate approximate samples from the desired posterior distribution. Establishing geometric ergodicity for this DA Markov chain is important as it provides theoretical guarantees for asymptotic validity of MCMC standard errors for desired posterior expectations/quantiles. Previous work [Roy(2012)] established geometric ergodicity of this robit DA Markov chain assuming (i) the sample size $n$ dominates the number of predictors $p$, and (ii) an additional constraint which requires the sample size to be bounded above by a fixed constant which depends on the design matrix $X$. In particular, modern high-dimensional settings where $n < p$ are not considered. In this work, we show that the robit DA Markov chain is trace-class (i.e., the eigenvalues of the corresponding Markov operator are summable) for arbitrary choices of the sample size $n$, the number of predictors $p$, the design matrix $X$, and the prior mean and variance parameters. The trace-class property implies geometric ergodicity. Moreover, this property allows us to conclude that the sandwich robit chain (obtained by inserting an inexpensive extra step in between the two steps of the DA chain) is strictly better than the robit DA chain in an appropriate sense.
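As a concrete illustration of the robit link: for $\nu = 1$ degrees of freedom the Student's $t$-distribution is Cauchy, whose quantile function has a closed form; general $\nu$ requires a numerical $t$ quantile (e.g. `scipy.stats.t.ppf`). A hedged sketch of this special case:

```python
import math

# Robit link = inverse CDF of Student's t. For nu = 1 the t-distribution
# is Cauchy, with the closed-form quantile tan(pi * (p - 1/2)); for
# general nu one would call a numerical t quantile such as
# scipy.stats.t.ppf(p, nu). Sketch of the nu = 1 case only.
def robit_link_nu1(p):
    return math.tan(math.pi * (p - 0.5))

def robit_inv_nu1(eta):
    # Cauchy CDF: maps a linear predictor back to a probability.
    return 0.5 + math.atan(eta) / math.pi

print(robit_link_nu1(0.5))                            # -> 0.0
print(round(robit_inv_nu1(robit_link_nu1(0.9)), 6))   # -> 0.9
```

The heavy Cauchy tails are what make extreme observations less influential than under the probit or logistic link.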
In this paper we study nonparametric estimators of copulas and copula densities. We first focus on a copula density estimator based on a polynomial orthogonal projection of the joint density. A new copula estimator is then deduced. Its asymptotic properties are studied: we provide a large functional class for which this construction is optimal in the minimax and maxiset senses, and we propose a selection method for the smoothing parameter. An intensive simulation study shows the very good performance of both the copula and copula density estimators, which we compare to a large panel of competitors. A real dataset in actuarial science illustrates this approach.
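A generic orthogonal-projection density estimator on $[0,1]^2$ can be sketched with the first few shifted Legendre polynomials (orthonormal on $[0,1]$); the coefficients are simply sample means of the basis products. This is an illustrative sketch of the projection idea only, not the paper's basis or its smoothing-parameter selection:

```python
import math

# Projection estimator of a copula density on [0,1]^2 with shifted
# Legendre polynomials (orthonormal on [0,1]). Illustrative sketch of
# the orthogonal-projection idea; basis choice and truncation selection
# here are simplified assumptions.
def phi(j, u):
    if j == 0:
        return 1.0
    if j == 1:
        return math.sqrt(3.0) * (2.0 * u - 1.0)
    return math.sqrt(5.0) * (6.0 * u * u - 6.0 * u + 1.0)   # j == 2

def projection_density(U, V, J=2):
    n = len(U)
    # Coefficient theta_{jk} is the empirical mean of phi_j(U) phi_k(V).
    theta = {(j, k): sum(phi(j, u) * phi(k, v) for u, v in zip(U, V)) / n
             for j in range(J + 1) for k in range(J + 1)}
    def c_hat(u, v):
        return sum(t * phi(j, u) * phi(k, v) for (j, k), t in theta.items())
    return theta, c_hat

theta, c_hat = projection_density([0.1, 0.4, 0.6, 0.9], [0.2, 0.5, 0.5, 0.8])
print(theta[(0, 0)])  # -> 1.0: the constant coefficient is always exactly one
```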
In the study of life tables the random variable of interest is usually assumed discrete, since mortality rates are studied for integer ages. In dynamic life tables a time domain is included to account for the evolution of the hazard rates over time. In this article we follow a survival analysis approach and use a nonparametric description of the hazard rates. We construct a discrete-time stochastic process that reflects dependence across age as well as over time. This process is used as a Bayesian nonparametric prior distribution on the hazard rates for the study of evolutionary life tables. Prior properties of the process are studied and posterior distributions are derived. We present a simulation study, with the inclusion of right-censored observations, as well as a real data analysis to show the performance of our model.
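The deterministic life-table identities underlying such models are the discrete hazard $h(x) = d_x / l_x$ and the survival function $S(x) = \prod_{k \le x} (1 - h(k))$. A small sketch of these quantities (the ones a nonparametric prior would be placed on), with hypothetical cohort counts:

```python
# Discrete-time life-table identities: hazard h(x) = d_x / l_x and
# survival S(x) as the product of (1 - h(k)). Deterministic sketch of
# the quantities modelled nonparametrically; the cohort counts below
# are hypothetical.
def hazards(l):
    # l[x] = number alive at integer age x; d_x = l[x] - l[x+1].
    return [(l[x] - l[x + 1]) / l[x] for x in range(len(l) - 1)]

def survival(h):
    s, out = 1.0, []
    for hx in h:
        s *= 1.0 - hx
        out.append(s)
    return out

l = [1000, 990, 970, 940]
h = hazards(l)
print([round(x, 4) for x in h])   # -> [0.01, 0.0202, 0.0309]
print(round(survival(h)[-1], 3))  # -> 0.94 (the product telescopes to 940/1000)
```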
We study the problem of designing consistent sequential one- and two-sample tests in a nonparametric setting. Guided by the principle of \emph{testing by betting}, we reframe the task of constructing sequential tests into that of selecting payoff functions that maximize the wealth of a fictitious bettor, betting against the null in a repeated game. The resulting sequential test rejects the null when the bettor's wealth process exceeds an appropriate threshold. We propose a general strategy for selecting payoff functions as predictable estimates of the \emph{witness function} associated with the variational representation of some statistical distance measures, such as integral probability metrics~(IPMs) and $\varphi$-divergences. Overall, this approach ensures that (i) the wealth process is a non-negative martingale under the null, thus allowing tight control over the type-I error, and (ii) it grows to infinity almost surely under the alternative, thus implying consistency. We accomplish this by designing composite e-processes that remain bounded in expectation under the null, but grow to infinity under the alternative. We instantiate the general test for some common distance metrics to obtain sequential versions of the Kolmogorov-Smirnov~(KS) test, the $\chi^2$-test and the kernel-MMD test, and empirically demonstrate their ability to adapt to the unknown hardness of the problem. The sequential testing framework constructed in this paper is versatile, and we end with a discussion on applying these ideas to two related problems: testing for higher-order stochastic dominance, and testing for symmetry.
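The testing-by-betting principle can be illustrated on the simplest null, a fair coin: the bettor stakes a fixed fraction of current wealth on heads each round, the wealth is a non-negative martingale under the null, and Ville's inequality gives $P(\sup_t W_t \ge 1/\alpha) \le \alpha$. This toy instance (fixed bet fraction, binary data) is a heavily simplified sketch, not the paper's witness-function payoff construction:

```python
# Testing by betting for H0: coin is fair. The bettor stakes a fraction
# lam of current wealth on heads; under H0 the wealth process is a
# nonnegative martingale, so by Ville's inequality rejecting when
# wealth >= 1/alpha controls the type-I error at level alpha.
# Toy sketch only -- not the paper's general payoff construction.
def betting_test(flips, lam=0.3, alpha=0.05):
    wealth = 1.0
    for x in flips:                      # x in {0, 1}
        wealth *= 1.0 + lam * (2 * x - 1)
        if wealth >= 1.0 / alpha:
            return True, wealth          # reject H0, stop betting
    return False, wealth                 # never crossed the threshold

print(betting_test([1] * 15)[0])     # -> True: an all-heads stream rejects
print(betting_test([0, 1] * 50)[0])  # -> False: a balanced stream never rejects
```

On the all-heads stream the wealth is $1.3^t$, which first exceeds $1/\alpha = 20$ at $t = 12$; on the balanced stream each (tails, heads) pair multiplies wealth by $0.7 \times 1.3 = 0.91 < 1$.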
It is argued that all model-based approaches to the selection of covariates in linear regression have failed. This applies to frequentist approaches based on P-values and to Bayesian approaches, although for different reasons. In the first part of the paper, 13 model-based procedures are compared to the model-free Gaussian covariate procedure in terms of the covariates selected and the time required. The comparison is based on four data sets and two simulations. There is nothing special about these data sets, which are often used as examples in the literature. All the model-based procedures failed. In the second part of the paper it is argued that the cause of this failure is the very use of a model. If the model involves all the available covariates, standard P-values can be used, and their use in this situation is quite straightforward. As soon as the model specifies only some unknown subset of the covariates, the problem being to identify this subset, the situation changes radically: there are many P-values, they are dependent, and most of them are invalid. The Bayesian paradigm also assumes a correct model, and although there are no conceptual problems with a large number of covariates, there is a considerable overhead causing computational and allocation problems even for moderately sized data sets. The Gaussian covariate procedure is based on P-values defined as the probability that a random Gaussian covariate is better than the covariate being considered. These P-values are exact and valid whatever the situation. The allocation requirements and the algorithmic complexity are both linear in the size of the data, making the procedure capable of handling large data sets. It outperforms all the other procedures in every respect.
In this paper, we consider time-inhomogeneous nonlinear time series regression for a general class of locally stationary time series. On the one hand, we propose sieve nonparametric estimators for the time-varying regression functions which can achieve the minimax optimal rate. On the other hand, we develop a unified simultaneous inferential theory which can be used to conduct both structural and exact-form tests on the functions. Our proposed statistics are powerful even under locally weak alternatives. We also propose a multiplier bootstrap procedure for practical implementation. Our methodology and theory do not require any structural assumptions on the regression functions, and we allow the functions to be supported on an unbounded domain. We also establish a sieve approximation theory for 2-D functions on unbounded domains and a Gaussian approximation result for affine and quadratic forms of high-dimensional locally stationary time series, which may be of independent interest. Numerical simulations and a real financial data analysis are provided to support our results.
Robots are still poor at traversing the large cluttered obstacles required for important applications like search and rescue. By contrast, animals are excellent at doing so, often using direct physical interaction with obstacles rather than avoiding them. Here, towards understanding the dynamics of cluttered obstacle traversal, we developed a minimalistic stochastic dynamics simulation inspired by our recent study of insects traversing grass-like beams. The 2-D model system consists of a forward self-propelled circular locomotor translating on a frictionless level plane with a lateral random force and interacting with two adjacent horizontal beams that form a gate. We found that traversal probability increases monotonically with propulsive force, but first increases then decreases with random force magnitude. For asymmetric beams with different stiffness, traversal is more likely towards the side of the less stiff beam. These observations are in accord with those expected from a potential energy landscape approach. Furthermore, we extended the single gate in a lattice configuration to form a large cluttered obstacle field. A Markov chain Monte Carlo method was applied to predict traversal in the large field, using the input-output probability map obtained from single gate simulations. This method achieved high accuracy in predicting the statistical distribution of the final location of the body within the obstacle field, while saving computation time by a factor of 10^5.
Large margin nearest neighbor (LMNN) is a metric learner which optimizes the performance of the popular $k$NN classifier. However, its resulting metric relies on pre-selected target neighbors. In this paper, we address the feasibility of LMNN's optimization constraints regarding these target points, and introduce a mathematical measure to evaluate the size of the feasible region of the optimization problem. We enhance the optimization framework of LMNN by a weighting scheme which prefers data triplets which yield a larger feasible region. This increases the chances to obtain a good metric as the solution of LMNN's problem. We evaluate the performance of the resulting feasibility-based LMNN algorithm using synthetic and real datasets. The empirical results show an improved accuracy for different types of datasets in comparison to regular LMNN.
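The large-margin constraint at the heart of LMNN, for a triplet $(i, j, l)$ with target neighbor $j$ and impostor $l$, is $d(x_i, x_j)^2 + 1 \le d(x_i, x_l)^2$. A sketch that counts satisfied triplets under the plain Euclidean metric (the paper's feasible-region measure and weighting build on such per-triplet checks; the data and function names are illustrative):

```python
# LMNN's large-margin constraint for a triplet (i, target j, impostor l):
#   d(x_i, x_j)^2 + margin <= d(x_i, x_l)^2.
# Sketch counting satisfied triplets under the Euclidean metric;
# the paper's feasibility measure and weighting build on such checks.
def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def satisfied(x, triplets, margin=1.0):
    return sum(1 for i, j, l in triplets
               if sq_dist(x[i], x[j]) + margin <= sq_dist(x[i], x[l]))

# Point 0 with target neighbor 1 (distance^2 = 0.25) and a far
# impostor 2 (distance^2 = 9.0): the constraint 0.25 + 1 <= 9.0 holds.
x = [(0.0, 0.0), (0.5, 0.0), (3.0, 0.0)]
print(satisfied(x, [(0, 1, 2)]))  # -> 1
```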