
I propose a new type of confidence interval for correct asymptotic inference after using data to select a model of interest, without assuming any model is correctly specified. This hybrid confidence interval is constructed by combining techniques from the selective inference and post-selection inference literatures to yield a short confidence interval across a wide range of data realizations. I show that hybrid confidence intervals have correct asymptotic coverage, uniformly over a large class of probability distributions that do not bound scaled model parameters. I illustrate the use of these confidence intervals in the problem of inference after using the LASSO objective function to select a regression model of interest, and provide evidence of their desirable length and coverage properties in small samples via a set of Monte Carlo experiments covering a variety of data distributions, as well as via an empirical application to the predictors of diabetes disease progression.
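
As a rough illustration of the setting, the following sketch runs the selection step and the naive (unadjusted) post-selection interval that the hybrid construction is designed to correct; the data-generating process and tuning values here are illustrative assumptions, and the hybrid interval itself is not reproduced.

```python
# Select a regression model with the LASSO, then compute the naive OLS
# confidence interval on the selected model; this interval ignores selection
# and is exactly what a selection-adjusted procedure needs to fix.
import numpy as np
from scipy import stats
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = 0.5 * X[:, 0] + rng.standard_normal(n)   # only the first predictor matters

# Step 1: model selection via the LASSO objective.
selected = np.flatnonzero(Lasso(alpha=0.1).fit(X, y).coef_ != 0)

# Step 2: naive post-selection OLS interval for the first selected coefficient.
Xs = X[:, selected]
beta = np.linalg.lstsq(Xs, y, rcond=None)[0]
resid = y - Xs @ beta
sigma2 = resid @ resid / (n - len(selected))
cov = sigma2 * np.linalg.inv(Xs.T @ Xs)
z = stats.norm.ppf(0.975)
se = np.sqrt(cov[0, 0])
print(f"selected variables: {selected}")
print(f"naive 95% CI for first selected coef: "
      f"({beta[0] - z * se:.3f}, {beta[0] + z * se:.3f})")
```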

Related content

Researchers are often interested in learning not only the effect of treatments on outcomes, but also the pathways through which these effects operate. A mediator is a variable that is affected by the treatment and subsequently affects the outcome. Existing methods for penalized mediation analyses either assume that finite-dimensional linear models are sufficient to remove confounding bias, or perform no confounding control at all. In practice, these assumptions may not hold. We propose a method that treats the confounding functions as nuisance parameters to be estimated using data-adaptive methods. We then apply a novel regularization method to this objective function to identify a set of important mediators. We derive the asymptotic properties of our estimator and establish the oracle property under certain assumptions. Asymptotic results are also presented in a local setting that contrasts the proposal with the standard adaptive lasso. We also propose a perturbation bootstrap technique to provide asymptotically valid post-selection inference for the mediated effects of interest. The performance of these methods is demonstrated through simulation studies.
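
The sketch below illustrates only the mediator-selection step, using a plain adaptive-lasso-style reweighting on simulated data; the data-adaptive confounding adjustment and the perturbation bootstrap described above are not reproduced, and all parameter values are illustrative assumptions.

```python
# Toy mediation model: treatment T -> mediators M -> outcome Y.
# Select active mediators with a weighted lasso and estimate the
# product-of-paths mediated effects a_j * b_j.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p = 500, 20
T = rng.binomial(1, 0.5, n).astype(float)            # treatment
A = np.zeros(p); A[:3] = 1.0                         # true T -> M paths
M = np.outer(T, A) + rng.standard_normal((n, p))     # mediators
b = np.zeros(p); b[:3] = 0.8                         # true M -> Y paths
Y = M @ b + 0.5 * T + rng.standard_normal(n)

# Adaptive-lasso-style weights from an initial unpenalized fit.
init = LinearRegression().fit(np.column_stack([T, M]), Y).coef_[1:]
w = 1.0 / (np.abs(init) + 1e-8)

# Weighted lasso via the standard column-rescaling reparameterization.
fit = Lasso(alpha=0.05).fit(np.column_stack([T, M / w]), Y)
b_hat = fit.coef_[1:] / w
a_hat = np.array([LinearRegression().fit(T[:, None], M[:, j]).coef_[0]
                  for j in range(p)])
mediated = a_hat * b_hat
print("selected mediators:", np.flatnonzero(b_hat != 0))
print("estimated mediated effects (first 5):", np.round(mediated[:5], 3))
```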

The goal of this paper is to develop a practical and general-purpose approach to constructing confidence intervals for differentially private parametric estimation. We find that the parametric bootstrap is a simple and effective solution. It cleanly accounts for the variability of both the data sample and the randomized privacy mechanism, and applies "out of the box" to a wide class of private estimation routines. It can also help correct bias caused by clipping data to limit sensitivity. We prove that the parametric bootstrap gives consistent confidence intervals in two broadly relevant settings, including a novel adaptation to linear regression that avoids accessing the covariate data multiple times. We demonstrate its effectiveness for a variety of estimators, and find that it provides confidence intervals with good coverage even at modest sample sizes and performs better than alternative approaches.
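
A minimal sketch of the idea for a private mean, assuming a Gaussian data model with known scale, clipping, and a Laplace mechanism; the settings and constants are illustrative, not the paper's exact construction.

```python
# Parametric bootstrap for a differentially private mean: refit the assumed
# model at the private estimate, then re-simulate BOTH the data and the
# privacy mechanism to approximate the sampling distribution.
import numpy as np

rng = np.random.default_rng(2)
n, clip, eps = 500, 3.0, 1.0

def private_mean(x, rng):
    x = np.clip(x, -clip, clip)                      # limit sensitivity
    noise = rng.laplace(scale=2 * clip / (len(x) * eps))
    return x.mean() + noise

data = rng.normal(loc=1.0, scale=1.0, size=n)
theta_hat = private_mean(data, rng)

B, boots = 2000, []
for _ in range(B):
    sim = rng.normal(loc=theta_hat, scale=1.0, size=n)  # scale assumed known
    boots.append(private_mean(sim, rng))
lo, hi = np.quantile(boots, [0.025, 0.975])
# Basic (pivotal) bootstrap interval: (2*theta_hat - hi, 2*theta_hat - lo).
ci = (2 * theta_hat - hi, 2 * theta_hat - lo)
print(f"private estimate: {theta_hat:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```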

This article studies global testing of the slope function in the functional linear regression model within the framework of reproducing kernel Hilbert spaces. We propose a new test statistic based on smoothness-regularization estimators. The asymptotic distribution of the test statistic is established under the null hypothesis. It is shown that the null asymptotic distribution is determined jointly by the reproducing kernel and the covariance function. Our theoretical analysis shows that the proposed test is consistent over a class of smooth local alternatives. Despite the generality of the regularization method, we show that the procedure is easily implementable. Numerical examples are provided to demonstrate the empirical advantages over competing methods.
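
As a toy numerical analogue, the sketch below discretizes the functional linear model, forms a roughness-penalized (ridge-type) slope estimate, and calibrates a global null test by permutation rather than by the asymptotic distribution derived in the paper; all choices here are illustrative assumptions.

```python
# Global null test in a discretized functional linear model
# Y = \int X(t) beta(t) dt + e, via a regularized slope estimate.
import numpy as np

rng = np.random.default_rng(3)
n, m = 150, 50                       # samples, grid points on [0, 1]
t = np.linspace(0, 1, m)
X = np.cumsum(rng.standard_normal((n, m)), axis=1) / np.sqrt(m)  # rough paths
beta = np.sin(2 * np.pi * t)         # smooth slope function under H1
y = X @ beta / m + 0.1 * rng.standard_normal(n)

def stat(y):
    # Smoothness-regularized estimate: ridge on the discretized integral.
    lam = 1e-3
    A = X.T @ X / m**2 + lam * np.eye(m)
    b_hat = np.linalg.solve(A, X.T @ y / m)
    return b_hat @ b_hat / m         # squared L2 norm of the estimate

obs = stat(y)
perm = [stat(rng.permutation(y)) for _ in range(500)]
pval = (1 + sum(s >= obs for s in perm)) / 501
print(f"test statistic: {obs:.4f}, permutation p-value: {pval:.3f}")
```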

In vertical federated learning (FL), the features of a data sample are distributed across multiple agents. As such, inter-agent collaboration can be beneficial not only during the learning phase, as is the case for standard horizontal FL, but also during the inference phase. A fundamental theoretical question in this setting is how to quantify the cost, or performance loss, of decentralization for learning and/or inference. In this paper, we consider general supervised learning problems with any number of agents, and provide a novel information-theoretic quantification of the cost of decentralization in the presence of privacy constraints on inter-agent communication within a Bayesian framework. The cost of decentralization for learning and/or inference is quantified in terms of conditional mutual information between the features and the label variables.
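
A small worked example of the kind of quantity involved: for binary variables with an assumed (hypothetical) joint pmf, the conditional mutual information I(X2; Y | X1) measures what is lost at inference time when agent 2's feature X2 is unavailable.

```python
# Compute I(X2; Y | X1) from a discrete joint distribution p[x1, x2, y].
import numpy as np

# An arbitrary illustrative joint pmf over three binary variables.
p = np.array([[[0.20, 0.05], [0.05, 0.20]],
              [[0.05, 0.20], [0.20, 0.05]]])
assert abs(p.sum() - 1.0) < 1e-12

def H(q):
    """Shannon entropy in bits of an array of probabilities."""
    q = q[q > 0]
    return -(q * np.log2(q)).sum()

p_x1 = p.sum(axis=(1, 2))
p_x1x2 = p.sum(axis=2)
p_x1y = p.sum(axis=1)
# I(X2; Y | X1) = H(X1,X2) + H(X1,Y) - H(X1,X2,Y) - H(X1)
cmi = H(p_x1x2) + H(p_x1y) - H(p) - H(p_x1)
print(f"I(X2; Y | X1) = {cmi:.4f} bits")   # inference cost of dropping X2
```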

We advocate for a practical Maximum Likelihood Estimation (MLE) approach towards designing loss functions for regression and forecasting, as an alternative to the typical approach of direct empirical risk minimization on a specific target metric. The MLE approach is better suited to capture inductive biases such as prior domain knowledge in datasets, and can output post-hoc estimators at inference time that can optimize different types of target metrics. We present theoretical results to demonstrate that our approach is competitive with any estimator for the target metric under some general conditions. In two example practical settings, Poisson and Pareto regression, we show that our competitive results can be used to prove that the MLE approach has better excess risk bounds than directly minimizing the target metric. We also demonstrate empirically that our method instantiated with a well-designed general purpose mixture likelihood family can obtain superior performance for a variety of tasks across time-series forecasting and regression datasets with different data distributions.
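
A minimal sketch of the MLE-then-post-hoc idea for the Poisson case: fit the likelihood once, then read off different point predictions for different target metrics from the same fitted distribution. The data and optimizer choices are illustrative assumptions.

```python
# Fit a Poisson regression by maximum likelihood, then derive post-hoc
# estimators: the fitted mean is optimal for squared error, the fitted
# median for absolute error.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(4)
n, p = 400, 3
X = rng.standard_normal((n, p))
w_true = np.array([0.8, -0.5, 0.3])
y = rng.poisson(np.exp(X @ w_true))

def nll(w):                                  # Poisson negative log-likelihood
    eta = X @ w
    return np.sum(np.exp(eta) - y * eta)

w_hat = minimize(nll, np.zeros(p), method="BFGS").x
mu = np.exp(X @ w_hat)                       # fitted Poisson means

pred_mse = mu                                # optimal for squared error
pred_mae = poisson.median(mu)                # optimal for absolute error
print("MSE-optimal vs MAE-optimal predictions (first 5):")
print(np.round(pred_mse[:5], 2), pred_mae[:5])
```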

This paper studies the localization behaviour of Bose-Einstein condensates in disorder potentials, modeled by a Gross-Pitaevskii eigenvalue problem on a bounded interval. In the regime of weak particle interaction, we are able to quantify exponential localization of the ground state, depending on statistical parameters and the strength of the potential. Numerical studies further show delocalization if we leave the identified parameter range, which is in agreement with experimental data. These mathematical and numerical findings allow the prediction of physically relevant regimes where localization of ground states may be observed experimentally.
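
A numerical sketch under assumed discretization and parameter choices: a finite-difference Gross-Pitaevskii eigenproblem on (0, 1) with a random piecewise-constant disorder potential, solved by a simple self-consistent iteration that is plausible only in the weak-interaction regime; the participation ratio serves as a crude localization diagnostic.

```python
# Ground state of -u'' + V u + kappa |u|^2 u = lam u on (0, 1) with
# homogeneous Dirichlet conditions, via fixed-point eigenvalue iteration.
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(5)
m, kappa = 400, 0.1                          # grid size, weak interaction
h = 1.0 / (m + 1)

# Disorder: piecewise-constant potential taking values 0 or V_max on cells.
V_max, cells = 1e4, 40
V = (V_max * rng.integers(0, 2, cells)).repeat(m // cells).astype(float)

u = np.ones(m) / np.sqrt(m * h)              # normalized initial guess
for _ in range(50):                          # self-consistent loop
    diag = 2 / h**2 + V + kappa * u**2       # linearized operator at current u
    off = -np.ones(m - 1) / h**2
    lam, vecs = eigh_tridiagonal(diag, off, select="i", select_range=(0, 0))
    u_new = vecs[:, 0] / np.sqrt(h)          # L2-normalize on the grid
    u_new = np.sign(u_new @ u) * u_new       # fix the sign for comparison
    if np.sqrt(h) * np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

# Participation ratio: small values indicate strong spatial localization.
pr = 1.0 / (h * np.sum(u**4))
print(f"ground-state energy: {lam[0]:.2f}, participation ratio: {pr:.4f}")
```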

We propose a framework for estimation and inference when the model may be misspecified. We rely on a local asymptotic approach where the degree of misspecification is indexed by the sample size. We construct estimators whose mean squared error is minimax in a neighborhood of the reference model, based on one-step adjustments. In addition, we provide confidence intervals that contain the true parameter under local misspecification. As a tool to interpret the degree of misspecification, we map it to the local power of a specification test of the reference model. Our approach allows for systematic sensitivity analysis when the parameter of interest may be partially or irregularly identified. As illustrations, we study three applications: an empirical analysis of the impact of conditional cash transfers in Mexico where misspecification stems from the presence of stigma effects of the program, a cross-sectional binary choice model where the error distribution is misspecified, and a dynamic panel data binary choice model where the number of time periods is small and the distribution of individual effects is misspecified.
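
A stylized sketch of the bias-aware interval idea, not the paper's full minimax construction: widen the usual interval to cover under any local bias up to b_max = c / sqrt(n), where c indexes the degree of misspecification; all values here are illustrative assumptions.

```python
# Confidence interval for a mean that remains valid when the reference
# model may induce a local bias of magnitude at most b_max.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, c = 200, 2.0                       # c indexes the degree of misspecification
x = rng.normal(loc=0.1, scale=1.0, size=n)

theta_hat = x.mean()
se = x.std(ddof=1) / np.sqrt(n)
b_max = c / np.sqrt(n)                # worst-case local bias

# A simple conservative choice: add the worst-case bias (in standard-error
# units) to the usual critical value, guaranteeing coverage for |bias| <= b_max.
naive = stats.norm.ppf(0.975)
cv = naive + b_max / se
print(f"naive 95% CI:      ({theta_hat - naive*se:.3f}, {theta_hat + naive*se:.3f})")
print(f"bias-aware 95% CI: ({theta_hat - cv*se:.3f}, {theta_hat + cv*se:.3f})")
```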

In this paper, we consider possibly misspecified stochastic differential equation models driven by L\'{e}vy processes. Regardless of whether the driving noise is Gaussian or not, the Gaussian quasi-likelihood estimator can estimate the unknown parameters in the drift and scale coefficients. However, in the misspecified case, the asymptotic distribution of the estimator changes due to the correction of the misspecification bias, and consistent estimators of the asymptotic variance proposed for the correctly specified case may lose their theoretical validity. As a solution, we propose a bootstrap method for approximating the asymptotic distribution. We show that our bootstrap method is theoretically valid in both the correctly specified and misspecified cases, without assuming the precise distribution of the driving noise.
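
The sketch below is a generic illustration, not the authors' specific bootstrap: Gaussian quasi-likelihood estimation for a hypothetical linear model dX = -theta X dt + sigma dL with Laplace (non-Gaussian) increments, followed by a residual bootstrap for the sampling distribution of theta_hat.

```python
# Euler-discretized SDE with non-Gaussian driving noise; Gaussian
# quasi-likelihood estimation and a residual bootstrap for theta_hat.
import numpy as np

rng = np.random.default_rng(7)
n, dt, theta, sigma = 2000, 0.01, 1.5, 0.8

def simulate(th, increments):
    X = np.zeros(n + 1)
    for i in range(n):
        X[i + 1] = X[i] - th * X[i] * dt + increments[i]
    return X

def gqmle(X):
    # Explicit Gaussian quasi-likelihood estimators for this linear model.
    dX, Xl = np.diff(X), X[:-1]
    th = -np.sum(Xl * dX) / (dt * np.sum(Xl**2))
    s2 = np.sum((dX + th * Xl * dt)**2) / (n * dt)
    return th, s2

# Non-Gaussian driver: Laplace increments with variance sigma^2 * dt.
dL = rng.laplace(scale=sigma * np.sqrt(dt / 2), size=n)
X = simulate(theta, dL)
th_hat, s2_hat = gqmle(X)

# Residual bootstrap: resample fitted noise increments, rebuild paths
# under the fitted drift, and re-estimate.
resid = np.diff(X) + th_hat * X[:-1] * dt
boot = [gqmle(simulate(th_hat, rng.choice(resid, size=n, replace=True)))[0]
        for _ in range(500)]
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"theta_hat = {th_hat:.3f}, basic bootstrap 95% CI: "
      f"({2 * th_hat - hi:.3f}, {2 * th_hat - lo:.3f})")
```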

We develop a post-selective Bayesian framework to jointly and consistently estimate parameters in group-sparse linear regression models. After selection with the Group LASSO (or generalized variants such as the overlapping, sparse, or standardized Group LASSO), uncertainty estimates for the selected parameters are unreliable in the absence of adjustments for selection bias. Existing post-selective approaches are limited to uncertainty estimation for (i) real-valued projections onto very specific selected subspaces of the group-sparse problem, and (ii) selection events broadly categorized as polyhedral events, i.e., events expressible as linear inequalities in the data variables. Our Bayesian methods address these gaps by deriving a likelihood adjustment factor, and an approximation thereof, that eliminates bias from selection. At a very modest price for this adjustment, experiments on simulated data and on data from the Human Connectome Project demonstrate the efficacy of our methods for jointly estimating group-sparse parameters and their uncertainties after selection.
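
For concreteness, the sketch below implements only the selection step (group lasso via proximal gradient with block soft-thresholding) on simulated data; the selection-adjusted Bayesian likelihood correction described above is not reproduced, and the tuning values are illustrative.

```python
# Group lasso by proximal gradient: gradient step on the squared loss,
# then a block soft-thresholding proximal step per group.
import numpy as np

rng = np.random.default_rng(8)
n, p = 200, 15
groups = [np.arange(0, 5), np.arange(5, 10), np.arange(10, 15)]
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 1.0           # only the first group is active
y = X @ beta + rng.standard_normal(n)

lam = 20.0
step = 1.0 / np.linalg.norm(X, 2)**2         # 1 / Lipschitz constant
b = np.zeros(p)
for _ in range(500):
    g = X.T @ (X @ b - y)                    # gradient of 0.5 * ||y - Xb||^2
    z = b - step * g
    for idx in groups:                       # block soft-thresholding prox
        nrm = np.linalg.norm(z[idx])
        thr = step * lam * np.sqrt(len(idx))
        z[idx] = 0.0 if nrm <= thr else (1 - thr / nrm) * z[idx]
    b = z
active = [k for k, idx in enumerate(groups) if np.linalg.norm(b[idx]) > 0]
print("selected groups:", active)
```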

Inference of directed relations given some unspecified interventions, that is, interventions whose targets are not known, is important yet challenging. For instance, it is of high interest to unravel the regulatory roles of genes with inherited genetic variants such as single-nucleotide polymorphisms (SNPs), which can act as unspecified interventions because of their regulatory function on some unknown genes. In this article, we test hypothesized directed relations with unspecified interventions. First, we derive conditions that yield an identifiable model. Unlike classical inference, hypothesis testing here requires identifying the ancestral relations and relevant interventions for each hypothesis-specific primary variable, a task referred to as causal discovery. To this end, we propose a peeling algorithm that establishes a hierarchy of primary variables as nodes, starting with leaf nodes at the bottom of the hierarchy, and we derive a difference-of-convex (DC) algorithm for the associated nonconvex minimization. Moreover, we prove that the peeling algorithm yields consistent causal discovery, and that the DC algorithm is a low-order polynomial algorithm capable of finding a global minimizer almost surely under the data-generating distribution. Second, we propose a modified likelihood ratio test that eliminates nuisance parameters to increase power. To enhance finite-sample performance, we integrate the modified likelihood ratio test with a data perturbation scheme that accounts for the uncertainty in identifying ancestral relations and relevant interventions. We also show that the distribution of the data-perturbation test statistic converges to the target distribution in high dimensions. Numerical examples demonstrate the utility and effectiveness of the proposed methods, including an application to inferring gene regulatory networks.
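
As a small self-contained illustration of the DC idea (with a truncated-lasso penalty, not necessarily the authors' exact objective): min(|b|, tau) = |b| - max(|b| - tau, 0), so each DC iteration linearizes the concave part and reduces to a weighted lasso, solved here by the standard column-rescaling trick.

```python
# DC (majorize-minimize) iterations for a truncated-lasso penalized
# regression: coordinates with |b_j| > tau lose their penalty weight.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
n, p, tau, lam = 200, 30, 0.5, 0.05
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = 2.0
y = X @ beta + rng.standard_normal(n)

b = np.zeros(p)
for _ in range(10):
    # Linearized concave part: penalty weight 0 where |b_j| > tau.
    w = (np.abs(b) <= tau).astype(float)
    w_safe = np.maximum(w, 1e-4)             # small floor for numerical safety
    fit = Lasso(alpha=lam, max_iter=5000).fit(X / w_safe, y)
    b_new = fit.coef_ / w_safe               # undo the column rescaling
    if np.max(np.abs(b_new - b)) < 1e-6:
        b = b_new
        break
    b = b_new
print("nonzero coefficients:", np.flatnonzero(np.abs(b) > 1e-6))
```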
