Meta-regression is often used to form hypotheses about what is associated with heterogeneity in a meta-analysis and to estimate the extent to which effects vary across cohorts and other distinguishing factors. However, the study-level variables, called moderators, that are available and used in a meta-regression will rarely explain all of the heterogeneity. Measuring and trying to understand residual heterogeneity therefore remains important in meta-regression, although it is not clear how some heterogeneity measures should be used in this context. The coefficient of variation, and its variants, are useful measures of relative heterogeneity. We consider these measures in the context of meta-regression, which allows researchers to investigate heterogeneity at different levels of the moderator as well as average relative heterogeneity overall. We also provide confidence intervals (CIs) for the measures, and our simulation studies show that these intervals have good coverage properties. We recommend these measures and the corresponding intervals as tools that can provide useful insights into moderators that may be contributing to the presence of heterogeneity in a meta-analysis, and that can lead to a better understanding of estimated mean effects.
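As a rough illustration of the kind of measure involved (our notation, which may differ in detail from the paper's): for a random-effects meta-regression with moderator vector $x$, residual heterogeneity variance $\tau^2$, and mean effect $x^\top \beta$, a relative heterogeneity measure at moderator level $x$, together with its average over the $K$ studies' moderator values, can be written as
\[
\mathrm{CV}(x) = \frac{\tau}{\lvert x^\top \beta \rvert},
\qquad
\overline{\mathrm{CV}} = \frac{1}{K} \sum_{i=1}^{K} \frac{\tau}{\lvert x_i^\top \beta \rvert},
\]
so that relative heterogeneity can be reported both at specific moderator values and averaged overall.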
We consider a participatory budgeting problem in which each voter submits a proposal for how to divide a single divisible resource (such as money or time) among several possible alternatives (such as public projects or activities) and these proposals must be aggregated into a single aggregate division. Under $\ell_1$ preferences -- for which a voter's disutility is given by the $\ell_1$ distance between the aggregate division and the division he or she most prefers -- the social welfare-maximizing mechanism, which minimizes the average $\ell_1$ distance between the outcome and each voter's proposal, is incentive compatible (Goel et al. 2016). However, it fails to satisfy the natural fairness notion of proportionality, placing too much weight on majority preferences. Leveraging a connection between market prices and the generalized median rules of Moulin (1980), we introduce the independent markets mechanism, which is both incentive compatible and proportional. We unify the social welfare-maximizing mechanism and the independent markets mechanism by defining a broad class of moving phantom mechanisms that includes both. We show that every moving phantom mechanism is incentive compatible. Finally, we characterize the social welfare-maximizing mechanism as the unique Pareto-optimal mechanism in this class, suggesting an inherent tradeoff between Pareto optimality and proportionality.
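For intuition, here is a minimal sketch of a moving-phantom-style aggregation. The phantom system $f_k(t) = \min(k t, 1)$ used below is an assumption for illustration and may not coincide exactly with the independent markets mechanism: for a given $t$, each alternative receives the median of the $n$ proposals together with the $n+1$ phantoms, and $t$ is chosen (here by bisection) so the aggregate division sums to one.

```python
# Hedged sketch of a moving-phantom aggregation of budget proposals.
import numpy as np

def aggregate(P, t):
    """Coordinate-wise median of the n proposals plus n+1 phantoms f_k(t) = min(k*t, 1)."""
    n, m = P.shape
    phantoms = np.minimum(np.arange(n + 1) * t, 1.0)
    return np.array([np.median(np.concatenate([P[:, a], phantoms])) for a in range(m)])

def moving_phantom_outcome(P, tol=1e-10):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                      # bisection on the phantom parameter t
        mid = (lo + hi) / 2
        if aggregate(P, mid).sum() < 1.0:
            lo = mid
        else:
            hi = mid
    return aggregate(P, (lo + hi) / 2)

# Three voters dividing a budget over three alternatives.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.4, 0.3, 0.3]])
print(moving_phantom_outcome(P), moving_phantom_outcome(P).sum())
```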
We consider the problem of finding tuned regularized parameter estimators for linear models. We start by showing that three known optimal linear estimators belong to a wider class of estimators that can be formulated as the solution to a weighted and constrained minimization problem. The optimal weights, however, are typically unknown in many applications. This raises the question: how should we choose the weights using only the data? We propose using the covariance-fitting SPICE methodology to obtain data-adaptive weights and show that the resulting class of estimators yields tuned versions of known regularized estimators, such as ridge regression, the LASSO, and regularized least absolute deviation. These theoretical results unify several important estimators under a common umbrella. The resulting tuned estimators are also shown to be practically relevant by means of a number of numerical examples.
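For reference, the "known regularized estimators" mentioned above have the familiar penalized forms (generic textbook definitions with a hand-tuned penalty $\lambda$, not the tuned data-adaptive versions derived in the paper):
\[
\hat w_{\mathrm{ridge}} = \arg\min_w \, \lVert y - Xw \rVert_2^2 + \lambda \lVert w \rVert_2^2,
\qquad
\hat w_{\mathrm{lasso}} = \arg\min_w \, \lVert y - Xw \rVert_2^2 + \lambda \lVert w \rVert_1,
\qquad
\hat w_{\mathrm{rlad}} = \arg\min_w \, \lVert y - Xw \rVert_1 + \lambda \lVert w \rVert_1.
\]
Roughly speaking, the contribution is to replace the hand-tuned $\lambda$ with data-adaptive weights obtained from covariance (SPICE) fitting.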
We study the Bahadur efficiency of several weighted L2-type goodness-of-fit tests based on the empirical characteristic function. The methods considered are for normality and exponentiality testing, and for testing goodness-of-fit to the logistic distribution. Our results are helpful in deciding which specific test a practitioner should apply. For the celebrated BHEP and energy tests for normality we obtain novel efficiency results, some of them in the multivariate case, while for the logistic distribution this is the first time that efficiencies have been computed for any composite goodness-of-fit test.
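For concreteness, a minimal sketch (in Python, using the standard closed form of the statistic; the bandwidth beta and the helper name are our choices) of the BHEP statistic for a $d$-variate sample:

```python
# Illustrative BHEP test statistic based on the empirical characteristic function
# with a Gaussian weight; not the paper's code.
import numpy as np

def bhep_statistic(X, beta=1.0):
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    # Standardize (whiten) the sample.
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    Y = (X - X.mean(axis=0)) @ np.linalg.inv(L).T
    D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)   # pairwise squared distances
    norms2 = (Y ** 2).sum(axis=1)
    b2 = beta ** 2
    term1 = np.exp(-b2 * D2 / 2.0).sum() / n
    term2 = 2.0 * (1.0 + b2) ** (-d / 2.0) * np.exp(-b2 * norms2 / (2.0 * (1.0 + b2))).sum()
    term3 = n * (1.0 + 2.0 * b2) ** (-d / 2.0)
    return term1 - term2 + term3

rng = np.random.default_rng(0)
print(bhep_statistic(rng.normal(size=(200, 3))))        # moderate under normality
print(bhep_statistic(rng.exponential(size=(200, 3))))   # much larger under a skewed alternative
```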
This paper studies inference on the regression coefficient matrix in multivariate response linear regression in the presence of hidden variables. A novel procedure for constructing confidence intervals for entries of the coefficient matrix is proposed. Our method first exploits the multivariate nature of the responses by estimating and adjusting for the hidden effects to construct an initial estimator of the coefficient matrix. By further deploying a low-dimensional projection procedure to reduce the bias introduced by the regularization in the previous step, a refined estimator is obtained and shown to be asymptotically normal. The asymptotic variance of the resulting estimator is derived in closed form and can be consistently estimated. In addition, we propose a testing procedure for the existence of hidden effects and provide its theoretical justification. Both our procedures and their analyses remain valid even when the feature dimension and the number of responses exceed the sample size. Our results are further supported by extensive simulations and a real data analysis.
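Schematically, the projection-based refinement is in the spirit of debiased regularized regression; a generic one-step correction for entry $(j,k)$ and its interval take the form (our notation; the paper's construction additionally adjusts for the estimated hidden effects, so this is only a sketch)
\[
\tilde B_{jk} = \hat B_{jk} + \frac{1}{n}\, \hat u_j^\top X^\top \bigl(Y_{\cdot k} - X \hat B_{\cdot k}\bigr),
\qquad
\tilde B_{jk} \pm z_{1-\alpha/2}\, \widehat{\mathrm{se}}_{jk},
\]
where $\hat B$ is the initial regularized estimator and $\hat u_j$ is a low-dimensional projection direction chosen to offset the regularization bias in the $j$-th coordinate.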
We study the propagation of singularities in solutions of linear convection equations with spatially heterogeneous nonlocal interactions. A spatially varying nonlocal horizon parameter is adopted in the model, which measures the range of nonlocal interactions. Via heterogeneous localization, this can lead to the seamless coupling of the local and nonlocal models. We are interested in understanding the impact on singularity propagation of the heterogeneity of the nonlocal horizon and of the transition between the local and nonlocal regimes. We first analytically derive equations characterizing the propagation of different types of singularities for various forms of the nonlocal horizon parameter in the nonlocal regime. We then use asymptotically compatible schemes to discretize the equations and carry out numerical simulations to illustrate the propagation patterns in different scenarios.
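One representative form of such a model (our choice of one-sided kernel; the paper's operator may differ in detail) uses a spatially varying horizon $\delta(x)$:
\[
\partial_t u(x,t) + \int_0^1 \frac{u(x,t) - u\bigl(x - \delta(x)\,s,\; t\bigr)}{\delta(x)}\, \rho(s)\, \mathrm{d}s = 0,
\qquad
\int_0^1 s\, \rho(s)\, \mathrm{d}s = 1,
\]
so that wherever $\delta(x) \to 0$ the nonlocal term formally reduces to the local transport term $\partial_x u$, which is what enables the seamless coupling of local and nonlocal regions.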
In this work, we introduce a time-memory formalism in a poroelasticity model that couples pressure and displacement. We assume this multiphysics process occurs in multicontinuum media. The mathematical model contains a coupled system of equations for the pressures in each continuum and elasticity equations for the displacements of the medium. Following previous works in the literature, we assume that the temporal dynamics are governed by fractional derivatives. We derive an implicit finite difference approximation for the time discretization based on the Caputo time-fractional derivative. A Discrete Fracture Model (DFM) is used to model fluid flow through fractures and to treat the complex fracture network. We assume different fractional powers in the fractures and the matrix due to their slow and fast dynamics. We develop a coarse-grid approximation based on the Generalized Multiscale Finite Element Method (GMsFEM), where we solve local spectral problems to construct the multiscale basis functions. We present numerical results for two-dimensional model problems in fractured heterogeneous porous media. We analyze the error between the reference (fine-scale) solution and the multiscale solution for different numbers of multiscale basis functions. The results show that the proposed method can provide good accuracy on a coarse grid.
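For concreteness, here is a minimal sketch of the standard implicit L1 finite-difference discretization of a Caputo derivative of order $\alpha \in (0,1)$, applied to a scalar fractional relaxation problem $D_t^\alpha u = -\lambda u$, $u(0)=1$, rather than to the coupled poroelasticity system (problem parameters are ours):

```python
# L1 scheme for the Caputo derivative: D^a u(t_n) ~ c0 * (u_n - hist_n), with
# weights b_j = (j+1)^(1-a) - j^(1-a) and c0 = tau^(-a)/Gamma(2-a).
import numpy as np
from math import gamma

alpha, lam, T, N = 0.7, 1.0, 2.0, 200
tau = T / N
b = np.array([(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(N)])
c0 = tau ** (-alpha) / gamma(2 - alpha)

u = np.empty(N + 1)
u[0] = 1.0
for n in range(1, N + 1):
    # History term: sum_{j=1}^{n-1} (b_{j-1} - b_j) u_{n-j} + b_{n-1} u_0.
    hist = sum((b[j - 1] - b[j]) * u[n - j] for j in range(1, n)) + b[n - 1] * u[0]
    # Implicit step: c0 * (u_n - hist) = -lam * u_n.
    u[n] = c0 * hist / (c0 + lam)
print(u[-1])
```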
Counterfactual explanations are usually generated through heuristics that are sensitive to the search's initial conditions. The absence of guarantees of performance and robustness hinders trustworthiness. In this paper, we take a disciplined approach towards counterfactual explanations for tree ensembles. We advocate for a model-based search aiming at "optimal" explanations and propose efficient mixed-integer programming approaches. We show that isolation forests can be modeled within our framework to focus the search on plausible explanations with a low outlier score. We provide comprehensive coverage of additional constraints that model important objectives, heterogeneous data types, structural constraints on the feature space, along with resource and actionability restrictions. Our experimental analyses demonstrate that the proposed search approach requires a computational effort that is orders of magnitude smaller than previous mathematical programming algorithms. It scales up to large data sets and tree ensembles, where it provides, within seconds, systematic explanations grounded on well-defined models solved to optimality.
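To give a flavor of the model-based search, here is a deliberately tiny mixed-integer program (using PuLP; the stump "ensemble", thresholds, and target score are made up, and the paper's formulation is far more general) that finds the closest feature vector, in $\ell_1$ distance, whose ensemble score reaches a target:

```python
# Toy counterfactual search over a 2-stump ensemble encoded as a MIP.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

x0 = {"income": 0.30, "debt": 0.70}                 # instance to explain, features in [0, 1]
stumps = [("income", 0.50, 0.0, 1.0),               # (feature, threshold, value_below, value_above)
          ("debt",   0.40, 1.0, 0.0)]
target_score, M = 1.5, 1.0                          # required score and big-M constant

prob = LpProblem("counterfactual", LpMinimize)
x = {f: LpVariable(f"x_{f}", lowBound=0, upBound=1) for f in x0}
d = {f: LpVariable(f"d_{f}", lowBound=0) for f in x0}                     # |x - x0|
z = [LpVariable(f"z_{i}", cat=LpBinary) for i, _ in enumerate(stumps)]    # 1 = "above threshold"

prob += lpSum(d.values())                            # objective: L1 distance to x0
for f in x0:
    prob += d[f] >= x[f] - x0[f]
    prob += d[f] >= x0[f] - x[f]

score_terms = []
for (f, thr, lo_val, hi_val), zi in zip(stumps, z):
    prob += x[f] <= thr + M * zi                     # zi = 0  =>  x[f] <= thr
    prob += x[f] >= thr - M * (1 - zi)               # zi = 1  =>  x[f] >= thr
    score_terms.append(lo_val * (1 - zi) + hi_val * zi)

prob += lpSum(score_terms) >= target_score           # require the desired prediction
prob.solve()
print({f: value(v) for f, v in x.items()}, "objective:", value(prob.objective))
```

Real formulations additionally encode full tree paths, isolation-forest outlier scores, heterogeneous data types, and actionability constraints, as described above.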
This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
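A toy numerical illustration of tuning to criticality (our example; the book derives this analytically): with ReLU activations, initializing weights with variance $C_W/\text{fan-in}$ and $C_W = 2$ keeps the typical activation magnitude roughly constant with depth, while smaller or larger $C_W$ makes forward signals vanish or explode.

```python
# Forward-signal propagation in a deep ReLU MLP at, below, and above criticality.
import numpy as np

def forward_rms(depth=60, width=512, c_w=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=width)
    for _ in range(depth):
        W = rng.normal(scale=np.sqrt(c_w / width), size=(width, width))
        x = np.maximum(W @ x, 0.0)          # ReLU layer, no biases
    return np.sqrt(np.mean(x ** 2))         # typical activation magnitude at the last layer

for c_w in (1.0, 2.0, 4.0):
    print(c_w, forward_rms(c_w=c_w))        # vanishes, stays O(1), explodes
```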
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and the resulting estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the performance of the classical and the proposed estimators in estimating the causal quantities. The comparison is carried out across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is striking when the causal effects are accounted for correctly.
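As a generic illustration of the underlying issue (not the estimators proposed in the paper): when credit decisions depend on a confounder that also drives repayment, a naive difference in means is biased, while a standard confounding adjustment such as inverse propensity weighting recovers the effect in simulation.

```python
# Naive vs. inverse-propensity-weighted (IPW) estimate of the effect of approval
# on repayment, with a confounder driving both; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
risk = rng.normal(size=n)                                  # confounder seen by the lender
approve = rng.binomial(1, 1 / (1 + np.exp(risk)))          # low risk -> more approvals
repay = 1.0 * approve - 2.0 * risk + rng.normal(size=n)    # true lift of approval = 1.0

naive = repay[approve == 1].mean() - repay[approve == 0].mean()

p = 1 / (1 + np.exp(risk))                                 # propensity score (known here)
ipw = np.mean(approve * repay / p) - np.mean((1 - approve) * repay / (1 - p))

print(f"naive: {naive:.2f}   IPW: {ipw:.2f}   true effect: 1.00")
```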
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
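A minimal sketch of the fade-in step used when a new resolution block is added during progressive growing (module structure and names here are illustrative, not the paper's code): the new block's RGB output is blended with an upsampled copy of the previous resolution's RGB output, with the blending weight alpha ramping from 0 to 1 over training.

```python
# Fade-in of a newly added resolution stage in a progressively grown generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FadeInStage(nn.Module):
    def __init__(self, old_to_rgb, new_block, new_to_rgb):
        super().__init__()
        self.old_to_rgb = old_to_rgb    # 1x1 conv producing RGB at the old resolution
        self.new_block = new_block      # upsample + convs at the new resolution
        self.new_to_rgb = new_to_rgb    # 1x1 conv producing RGB at the new resolution

    def forward(self, features, alpha):
        old_rgb = F.interpolate(self.old_to_rgb(features), scale_factor=2, mode="nearest")
        new_rgb = self.new_to_rgb(self.new_block(features))
        return (1.0 - alpha) * old_rgb + alpha * new_rgb    # alpha ramps 0 -> 1

# Toy usage: 8x8 feature maps being grown to a 16x16 output.
stage = FadeInStage(
    old_to_rgb=nn.Conv2d(64, 3, kernel_size=1),
    new_block=nn.Sequential(nn.Upsample(scale_factor=2),
                            nn.Conv2d(64, 64, 3, padding=1),
                            nn.LeakyReLU(0.2)),
    new_to_rgb=nn.Conv2d(64, 3, kernel_size=1),
)
out = stage(torch.randn(2, 64, 8, 8), alpha=0.3)
print(out.shape)   # torch.Size([2, 3, 16, 16])
```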