To mitigate the imbalance in the number of assignees in the Hospitals/Residents problem, Goko et al. [Goko et al., Maximally Satisfying Lower Quotas in the Hospitals/Residents Problem with Ties, Proc. STACS 2022, pp. 31:1--31:20] studied the Hospitals/Residents problem with lower quotas, whose goal is to find a stable matching that satisfies lower quotas as much as possible. In their paper, preference lists are assumed to be complete, that is, the preference list of each resident (resp., hospital) is assumed to contain all the hospitals (resp., residents). In this paper, we study a more general model where preference lists may be incomplete. For four natural scenarios, we obtain maximum gaps between the best and worst solutions, approximability results, and inapproximability results.
There is growing interest in developing causal inference methods for multi-valued treatments with a focus on pairwise average treatment effects. Here we focus on a clinically important, yet less-studied estimand: causal drug-drug interactions (DDIs), which quantify the degree to which the causal effect of drug A is altered by the presence versus the absence of drug B. Confounding adjustment when studying the effects of DDIs can be accomplished via inverse probability of treatment weighting (IPTW), a standard approach originally developed for binary treatments and later generalized to multi-valued treatments. However, this approach generally yields biased estimates when the propensity score model is misspecified. Motivated by the need for more robust techniques, we propose two empirical likelihood-based weighting approaches that allow for specifying a set of propensity score models, with the second method balancing user-specified covariates directly by incorporating additional, nonparametric constraints. The resulting estimators from both methods are consistent when the postulated set of propensity score models contains a correct one; this property has been termed multiple robustness. We then evaluate their finite sample performance through simulation. The results demonstrate that the proposed estimators outperform the standard IPTW method in terms of both robustness and efficiency. Finally, we apply the proposed methods to evaluate the impact of renin-angiotensin system inhibitors (RAS-I) on the comparative nephrotoxicity of nonsteroidal anti-inflammatory drugs (NSAID) and opioids, using data derived from electronic medical records from a large multi-hospital health system.
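The standard IPTW baseline that the abstract generalizes can be sketched in a few lines: each unit is weighted by the inverse of the estimated probability of the treatment level it actually received. This is a minimal illustration of multi-valued IPTW only, not the paper's empirical likelihood estimators; the function name and toy numbers are hypothetical.

```python
import numpy as np

def iptw_weights(a, ps):
    """Inverse probability of treatment weights for a multi-valued
    treatment: w_i = 1 / P(A_i = a_i | X_i).

    a  : (n,) integer treatment labels in {0, ..., K-1}
    ps : (n, K) estimated propensity scores, rows summing to 1
    """
    n = len(a)
    return 1.0 / ps[np.arange(n), a]

# Toy check with known propensities for a three-valued treatment.
ps = np.array([[0.2, 0.5, 0.3],
               [0.4, 0.4, 0.2]])
a = np.array([1, 0])
w = iptw_weights(a, ps)  # -> [2.0, 2.5]
```

When the propensity model behind `ps` is misspecified, these weights are what introduces the bias the proposed multiply robust estimators guard against.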
This paper is devoted to the robust approximation, via a variational phase field approach, of multiphase mean curvature flows with possibly highly contrasted mobilities. The case of harmonically additive mobilities has been addressed recently using a suitable metric to define the gradient flow of the phase field approximate energy. We generalize this approach to arbitrary nonnegative mobilities using a decomposition as sums of harmonically additive mobilities. We establish the consistency of the resulting method by analyzing the sharp interface limit of the flow: a formal expansion of the phase field shows that the method is of second order. We propose a simple numerical scheme to approximate the solutions to our new model. Finally, we present some numerical experiments in dimensions 2 and 3 that illustrate the utility and effectiveness of our approach, in particular for approximating flows in which the mobility of some phases is zero.
Algorithmic decision-making in societal contexts, such as retail pricing, loan administration, recommendations on online platforms, etc., often involves experimentation with decisions for the sake of learning, which results in perceptions of unfairness among people impacted by these decisions. It is hence necessary to embed appropriate notions of fairness in such decision-making processes. The goal of this paper is to highlight the rich interface between temporal notions of fairness and online decision-making through a novel meta-objective of ensuring fairness at the time of decision. Given some arbitrary comparative fairness notion for static decision-making (e.g., students should pay at most 90% of the general adult price), a corresponding online decision-making algorithm satisfies fairness at the time of decision if the said notion of fairness is satisfied for any entity receiving a decision in comparison to all the past decisions. We show that this basic requirement introduces new methodological challenges in online decision-making. We illustrate the novel approaches necessary to address these challenges in the context of stochastic convex optimization with bandit feedback under a comparative fairness constraint that imposes lower bounds on the decisions received by entities depending on the decisions received by everyone in the past. The paper showcases novel research opportunities in online decision-making stemming from temporal fairness concerns.
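The meta-objective of fairness at the time of decision can be made concrete with a small check: a new decision is admissible only if it satisfies the comparative constraint against every decision made so far. The function below is a hypothetical illustration (the bound `0.9 * x` echoes the abstract's student-price example and is not the paper's actual constraint class).

```python
def fair_at_time_of_decision(x_new, past_decisions,
                             lower_bound=lambda x: 0.9 * x):
    """Illustrative check: the new decision x_new must clear a lower
    bound derived from each past decision (here, 90% of it), so no
    entity receives a decision too far below anyone served earlier."""
    return all(x_new >= lower_bound(x) for x in past_decisions)

# A decision close to past ones passes; a much lower one does not.
ok = fair_at_time_of_decision(1.0, [1.0, 1.1])    # True
bad = fair_at_time_of_decision(0.5, [1.0])        # False
```

The methodological difficulty the abstract points to is visible even here: an exploratory (low) decision made for learning permanently constrains, via `past_decisions`, what the algorithm may try later.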
This work considers Gaussian process interpolation with a periodized version of the Matérn covariance function (Stein, 1999, Section 6.7) with Fourier coefficients $\phi (\alpha^2 + j^2)^{-\nu - 1/2}$. Convergence rates are studied for the joint maximum likelihood estimation of $\nu$ and $\phi$ when the data is sampled according to the model. The mean integrated squared error is also analyzed with fixed and estimated parameters, showing that maximum likelihood estimation yields asymptotically the same error as if the ground truth were known. Finally, the case where the observed function is a "deterministic" element of a continuous Sobolev space is also considered, suggesting that bounding assumptions on some parameters can lead to different estimates.
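Under the abstract's notation, the periodized Matérn model can be read as a covariance given by a Fourier series with polynomially decaying coefficients; the display below is a sketch of that reading (the exact normalization is the paper's), with $\nu$ governing smoothness and $\phi$ the scale:

```latex
% Periodized Mat\'ern covariance as a Fourier series on the unit torus:
% the coefficient of frequency j is \phi (\alpha^2 + j^2)^{-\nu - 1/2}.
k(x, y) = \sum_{j \in \mathbb{Z}} \phi \, (\alpha^2 + j^2)^{-\nu - 1/2}
          \, e^{2\pi i j (x - y)}
```

Larger $\nu$ makes the coefficients decay faster, hence smoother sample paths, which is why joint estimation of $(\nu, \phi)$ controls the interpolation error rate.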
Interval-censored multi-state data arise in many studies of chronic diseases, where the health status of a subject can be characterized by a finite number of disease states and the transition between any two states is only known to occur over a broad time interval. We formulate the effects of potentially time-dependent covariates on multi-state processes through semiparametric proportional intensity models with random effects. We adopt nonparametric maximum likelihood estimation (NPMLE) under general interval censoring and develop a stable expectation-maximization (EM) algorithm. We show that the resulting parameter estimators are consistent and that the finite-dimensional components are asymptotically normal with a covariance matrix that attains the semiparametric efficiency bound and can be consistently estimated through profile likelihood. In addition, we demonstrate through extensive simulation studies that the proposed numerical and inferential procedures perform well in realistic settings. Finally, we provide an application to a major epidemiologic cohort study.
In this paper, we investigate the matrix estimation problem in the multi-response regression model with measurement errors. A nonconvex error-corrected estimator based on a combination of the amended loss function and the nuclear norm regularizer is proposed to estimate the matrix parameter. Then, under the (near) low-rank assumption, we analyze statistical and computational theoretical properties of global solutions of the nonconvex regularized estimator from a general point of view. In the statistical aspect, we establish the nonasymptotic recovery bound for any global solution of the nonconvex estimator, under restricted strong convexity on the loss function. In the computational aspect, we solve the nonconvex optimization problem via the proximal gradient method. The algorithm is proved to converge to a near-global solution and achieve a linear convergence rate. In addition, we also verify sufficient conditions for the general results to hold, in order to obtain probabilistic consequences for specific types of measurement errors, including additive noise and missing data. Finally, theoretical consequences are demonstrated by several numerical experiments on corrupted errors-in-variables multi-response regression models. Simulation results reveal excellent consistency with our theory under high-dimensional scaling.
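The proximal gradient method for nuclear-norm-regularized regression reduces, at each step, to a gradient step on the quadratic loss followed by singular value thresholding. The sketch below uses a clean least-squares surrogate; the paper's error-corrected loss would replace the Gram terms `X'X/n` and `X'Y/n` with bias-adjusted versions depending on the measurement-error model, and all names here are hypothetical.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_grad(X, Y, lam, iters=200):
    """Proximal gradient for
        min_Theta  (1/2n) ||Y - X Theta||_F^2 + lam ||Theta||_*.
    """
    n, p = X.shape
    q = Y.shape[1]
    G = X.T @ X / n          # quadratic term of the loss
    H = X.T @ Y / n          # linear term of the loss
    step = 1.0 / np.linalg.norm(G, 2)   # 1/L, L = largest eigenvalue of G
    Theta = np.zeros((p, q))
    for _ in range(iters):
        grad = G @ Theta - H
        Theta = svt(Theta - step * grad, step * lam)
    return Theta
```

With a well-conditioned `G`, the iterates contract geometrically toward the regularized solution, mirroring the linear convergence rate the abstract establishes for the error-corrected problem.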
Classical results in general equilibrium theory assume divisible goods and convex preferences of market participants. In many real-world markets, participants have non-convex preferences and the allocation problem needs to consider complex constraints. Electricity markets are a prime example. In such markets, Walrasian prices are impossible, and heuristic pricing rules based on the dual of the relaxed allocation problem are used in practice. However, these rules have been criticized for high side-payments and inadequate congestion signals. We show that existing pricing heuristics optimize specific design goals that can be conflicting. The trade-offs can be substantial, and we establish that the design of pricing rules is fundamentally a multi-objective optimization problem addressing different incentives. In addition to traditional multi-objective optimization techniques using weighting of individual objectives, we introduce a novel parameter-free pricing rule that minimizes incentives for market participants to deviate locally. Our findings show how the new pricing rule combines the advantages of the existing pricing rules under scrutiny today. It leads to prices that incur low make-whole payments while providing adequate congestion signals and low lost opportunity costs. Our suggested pricing rule does not require weighting of objectives; it is computationally scalable and balances trade-offs in a principled manner, addressing an important policy issue in electricity markets.
To estimate causal effects, analysts performing observational studies in health settings utilize several strategies to mitigate bias due to confounding by indication. There are two broad classes of approaches for these purposes: use of confounders and instrumental variables (IVs). Because such approaches are largely characterized by untestable assumptions, analysts must operate under the working assumption that these methods will work imperfectly. In this tutorial, we formalize a set of general principles and heuristics for estimating causal effects in the two approaches when the assumptions are potentially violated. This crucially requires reframing the process of observational studies as hypothesizing potential scenarios where the estimates from one approach are less inconsistent than those from the other. While most of our discussion of methodology centers around the linear setting, we touch upon complexities in non-linear settings and flexible procedures such as targeted minimum loss-based estimation (TMLE) and double machine learning (DML). To demonstrate the application of our principles, we investigate the use of donepezil off-label for mild cognitive impairment (MCI). We compare and contrast results from confounder and IV methods, traditional and flexible, within our analysis and to a similar observational study and clinical trial.
We consider a potential outcomes model in which interference may be present between any two units but the extent of interference diminishes with spatial distance. The causal estimand is the global average treatment effect, which compares outcomes under the counterfactuals that all or no units are treated. We study a class of designs in which space is partitioned into clusters that are randomized into treatment and control. For each design, we estimate the treatment effect using a Horvitz-Thompson estimator that compares the average outcomes of units with all or no neighbors treated, where the neighborhood radius is of the same order as the cluster size dictated by the design. We derive the estimator's rate of convergence as a function of the design and degree of interference and use this to obtain estimator-design pairs that achieve near-optimal rates of convergence under relatively minimal assumptions on interference. We prove that the estimators are asymptotically normal and provide a variance estimator. For practical implementation of the designs, we suggest partitioning space using clustering algorithms.
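The Horvitz-Thompson contrast described above compares units whose entire neighborhood is treated against units whose neighborhood is entirely untreated, inverse-weighting each by the exposure probability induced by the cluster-randomized design. The sketch below is a bare-bones illustration under that reading; the function name, argument layout, and toy data are hypothetical, and computing the exposure indicators and probabilities from a spatial design is the part the paper actually develops.

```python
import numpy as np

def ht_gate(y, exposed_all, exposed_none, p_all, p_none):
    """Horvitz-Thompson estimate of the global average treatment effect.

    y            : (n,) observed outcomes
    exposed_all  : (n,) bool, all neighbors within the radius are treated
    exposed_none : (n,) bool, no neighbors within the radius are treated
    p_all, p_none: (n,) design probabilities of the two exposures
    """
    n = len(y)
    return (np.sum(y[exposed_all] / p_all[exposed_all])
            - np.sum(y[exposed_none] / p_none[exposed_none])) / n

# Degenerate toy design: singleton clusters, radius zero, so exposure
# equals own treatment and both exposure probabilities are 1/2.
z = np.array([True, False] * 5)
y = 1.0 + z                      # treatment raises the outcome by 1
p = np.full(10, 0.5)
est = ht_gate(y, z, ~z, p, p)    # -> 1.0
```

The rate-of-convergence analysis in the abstract comes from the tension this sketch hides: larger clusters make the all/none exposures more likely (smaller weights) but leave fewer usable units near cluster boundaries.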
This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and proposed estimators in terms of their power for estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial when the causal effects are accounted for correctly.