
In this article, we investigate the robust optimal design problem for the prediction of response when the fitted regression models are only approximately specified and observations might be missing completely at random. The intuitive idea is as follows: we assume that data are missing completely at random and that the complete-case analysis is applied. To account for the occurrence of missing data, the design criterion we choose is the mean, over the missing indicator, of the averaged (over the design space) mean squared errors of the predictions. To describe the uncertainty in the specification of the true underlying model, we impose a neighborhood structure on the deterministic part of the regression response and maximize, analytically, the \textbf{M}ean of the averaged \textbf{M}ean squared \textbf{P}rediction \textbf{E}rrors (MMPE) over the entire neighborhood. The maximized MMPE is the ``worst'' loss in the neighborhood of the fitted regression model. Minimizing the maximum MMPE over the class of designs, we obtain robust ``minimax'' designs. The constructed robust designs afford protection against increases in prediction error resulting from model misspecification.
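Schematically, and in our own notation rather than the paper's, the criterion and the minimax problem described above take the form

\[
\mathrm{MMPE}(\xi) = \mathbb{E}_{\boldsymbol m}\!\left[\frac{1}{|\chi|}\int_{\chi} \mathbb{E}\left\{\hat y(\boldsymbol x)-\mathbb{E}[y\mid\boldsymbol x]\right\}^{2} d\boldsymbol x\right],
\qquad
\xi^{*} \in \arg\min_{\xi}\,\sup_{f\in\mathcal F}\,\mathrm{MMPE}(\xi;f),
\]

where $\xi$ is a design, $\boldsymbol m$ the vector of missingness indicators, $\chi$ the design space, and $\mathcal F$ the neighborhood of the fitted response; the inner supremum is the ``worst'' loss referred to above.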

Related content

Numerous studies have examined the associations between long-term exposure to fine particulate matter (PM2.5) and adverse health outcomes. Recently, many of these studies have begun to employ high-resolution predicted PM2.5 concentrations, which are subject to measurement error. Previous approaches for exposure measurement error correction have either been applied in non-causal settings or have only considered a categorical exposure. Moreover, most procedures have failed to account for uncertainty induced by error correction when fitting an exposure-response function (ERF). To remedy these deficiencies, we develop a multiple imputation framework that combines regression calibration and Bayesian techniques to estimate a causal ERF. We demonstrate how the output of the measurement error correction steps can be seamlessly integrated into a Bayesian additive regression trees (BART) estimator of the causal ERF. We also demonstrate how locally weighted smoothing of the posterior samples from BART can be used to create a more accurate ERF estimate. Our proposed approach also properly propagates the exposure measurement error uncertainty to yield accurate standard error estimates. We assess the robustness of our proposed approach in an extensive simulation study. We then apply our methodology to estimate the effects of PM2.5 on all-cause mortality among Medicare enrollees in New England from 2000 to 2012.
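To convey the multiple-imputation structure described above, here is a minimal sketch, under our own simplifying assumptions: a validation subsample with monitored exposures is available, and scikit-learn's gradient boosting stands in for BART (which has no scikit-learn implementation). Function and variable names are ours, not the paper's.

```python
# Sketch of a regression-calibration + multiple-imputation ERF fit.
# Assumptions: error-prone exposures `x_err`, confounders `w`, outcomes `y`,
# and a validation set with true exposures `x_val_true`.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def multiple_imputation_erf(x_err, w, y, x_val_err, w_val, x_val_true,
                            n_imputations=20, grid=None):
    """Return a grid and per-imputation exposure-response curves."""
    if grid is None:
        grid = np.linspace(x_err.min(), x_err.max(), 50)
    # Step 1: regression-calibration model fit on the validation data.
    calib = LinearRegression().fit(
        np.column_stack([x_val_err, w_val]), x_val_true)
    resid_sd = np.std(
        x_val_true - calib.predict(np.column_stack([x_val_err, w_val])))
    curves = []
    for _ in range(n_imputations):
        # Step 2: impute the true exposure, adding calibration noise so
        # that imputation uncertainty is propagated downstream.
        x_imp = calib.predict(np.column_stack([x_err, w]))
        x_imp += rng.normal(0.0, resid_sd, size=x_imp.shape)
        # Step 3: fit the outcome model on the imputed exposure
        # (gradient boosting as a stand-in for BART).
        fit = GradientBoostingRegressor().fit(np.column_stack([x_imp, w]), y)
        # Step 4: evaluate the ERF on a grid, averaging over confounders.
        curves.append([fit.predict(
            np.column_stack([np.full(len(w), g), w])).mean() for g in grid])
    return grid, np.array(curves)  # pool across imputations downstream
```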

Multivariate analysis of variance (MANOVA) is a well-established tool for examining multivariate endpoints. While classical approaches depend on restrictive assumptions like normality and homogeneity, there is a recent trend toward more general and flexible procedures. In this paper, we proceed on this path, but do not follow the typical mean-focused perspective. Instead, we consider general quantiles, in particular the median, for a more robust multivariate analysis. The resulting methodology is applicable to all kinds of factorial designs and is shown to be asymptotically valid. Our theoretical results are complemented by an extensive simulation study for small and moderate sample sizes. An illustrative data analysis is also presented.
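To make the median-based idea concrete, here is a minimal two-group sketch in Python; it is our simplification (a componentwise median comparison with a bootstrapped covariance), not the paper's full factorial methodology.

```python
# Wald-type comparison of componentwise medians for two groups of
# multivariate observations; an illustrative toy, not the paper's test.
import numpy as np

def median_wald_stat(x, y, n_boot=2000, seed=0):
    """Statistic for H0: the componentwise medians of x and y are equal."""
    rng = np.random.default_rng(seed)
    diff = np.median(x, axis=0) - np.median(y, axis=0)
    # Bootstrap the covariance of the median difference.
    boots = np.empty((n_boot, x.shape[1]))
    for b in range(n_boot):
        xb = x[rng.integers(0, len(x), len(x))]
        yb = y[rng.integers(0, len(y), len(y))]
        boots[b] = np.median(xb, axis=0) - np.median(yb, axis=0)
    cov = np.cov(boots, rowvar=False)
    return diff @ np.linalg.pinv(cov) @ diff  # approx. chi^2, df = dimension
```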

We propose a method for estimation and inference on bounds for heterogeneous causal effect parameters in general sample selection models, where the treatment can affect whether an outcome is observed and no exclusion restrictions are available. The method provides conditional effect bounds as functions of policy-relevant pre-treatment variables and allows for valid statistical inference on the unidentified conditional effects. We use a flexible debiased/double machine learning approach that can accommodate non-linear functional forms and high-dimensional confounders. Easily verifiable high-level conditions for estimation and misspecification-robust inference guarantees are provided as well. Re-analyzing data from a large-scale field experiment on Facebook, we find significant depolarization effects of counter-attitudinal news subscription nudges. The effect bounds are highly heterogeneous and suggest strong depolarization effects for moderates, conservatives, and younger users.
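The following rough sketch conveys the structure of worst-case (Lee-style) bounds with ML-estimated selection probabilities; it is only meant to illustrate the kind of object being bounded. The paper's bounds are conditional on covariates and debiased, which this unconditional toy version omits.

```python
# Lee-style trimming bounds with ML nuisances; an illustrative sketch only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def selection_bounds(y, d, s, x):
    """Bounds on the effect for the always-observed subpopulation.

    y: outcome (meaningful only where s == 1), d: treatment indicator,
    s: selection/observation indicator, x: pre-treatment covariates.
    """
    # Selection probabilities under treatment and control (ML nuisances).
    p1 = RandomForestClassifier().fit(x[d == 1], s[d == 1]).predict_proba(x)[:, 1]
    p0 = RandomForestClassifier().fit(x[d == 0], s[d == 0]).predict_proba(x)[:, 1]
    trim = np.clip(p0.mean() / p1.mean(), 0.0, 1.0)  # share of treated to keep
    y1 = y[(d == 1) & (s == 1)]
    y0 = y[(d == 0) & (s == 1)]
    q_lo, q_hi = np.quantile(y1, [trim, 1.0 - trim])
    lower = y1[y1 <= q_lo].mean() - y0.mean()  # keep bottom `trim` share
    upper = y1[y1 >= q_hi].mean() - y0.mean()  # keep top `trim` share
    return lower, upper
```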

Symbol-level precoding (SLP) manipulates the transmitted signals to accurately exploit the multi-user interference (MUI) in the multi-user downlink, so that all the resultant interference contributes to correct detection; this is the so-called constructive interference (CI). Its performance superiority comes at the cost of solving a nonlinear optimization problem on a symbol-by-symbol basis, whose complexity becomes prohibitive in realistic wireless communication systems. In this paper, we investigate low-complexity SLP algorithms for both phase-shift keying (PSK) and quadrature amplitude modulation (QAM). Specifically, we first prove that the max-min SINR balancing (SB) SLP problem for PSK signaling is not separable, contrary to the power minimization (PM) SLP problem, so existing decomposition methods are not applicable. Next, we establish an explicit duality between the PM-SLP and SB-SLP problems for PSK modulation. The proposed duality facilitates obtaining the solution to the SB-SLP given the solution to the PM-SLP, without the need for a one-dimensional search, and vice versa. We then propose a closed-form power scaling algorithm that solves the SB-SLP via the PM-SLP, taking advantage of the separability of the latter. For QAM modulation, we convert the PM-SLP problem into a separable equivalent optimization problem and decompose the new problem into several simple parallel subproblems with closed-form solutions, leveraging the proximal Jacobian alternating direction method of multipliers (PJ-ADMM). We further prove that the proposed duality generalizes to the multi-level modulation case, based on which a power-scaling, parallel, inverse-free algorithm is also proposed to solve the SB-SLP for QAM signaling. Numerical results show that the proposed algorithms offer optimal performance with lower complexity than the state of the art.
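For orientation, the PM-SLP problem for $M$-PSK under the standard CI formulation reads as follows (our paraphrase of the usual formulation in the CI literature; the notation is not taken from the paper):

\[
\min_{\boldsymbol x}\ \|\boldsymbol x\|_2^2
\quad\text{s.t.}\quad
\bigl|\Im\{\boldsymbol h_k^{H}\boldsymbol x\,e^{-j\angle s_k}\}\bigr|
\le\bigl(\Re\{\boldsymbol h_k^{H}\boldsymbol x\,e^{-j\angle s_k}\}-\sqrt{\gamma_k}\,\sigma\bigr)\tan\theta,
\quad k=1,\dots,K,
\]

where $\theta=\pi/M$, and $\boldsymbol h_k$, $s_k$, and $\gamma_k$ are user $k$'s channel, symbol, and SINR target, with $\sigma$ the noise standard deviation; the SB-SLP counterpart instead maximizes the smallest SINR margin subject to a power budget, which is the pair of problems linked by the duality above.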

Ordinal optimization (OO) is a widely studied technique for optimizing discrete-event dynamic systems (DEDS). It evaluates the performance of the system designs in a finite set by sampling and aims to make correct ordinal comparisons among the designs. A well-known method in OO is the optimal computing budget allocation (OCBA). It establishes optimality conditions for the number of samples allocated to each design, and the sample allocation that satisfies these conditions is shown to asymptotically maximize the probability of correctly selecting the best design. In this paper, we investigate two popular OCBA algorithms. With known variances for the samples of each design, we characterize their convergence rates with respect to different performance measures. We first demonstrate that the two OCBA algorithms achieve the optimal convergence rate under the measures of probability of correct selection and expected opportunity cost, filling a void in the convergence analysis of OCBA algorithms. Next, we extend our analysis to the measure of cumulative regret, a measure widely studied in machine learning. We show that, with minor modifications, the two OCBA algorithms can reach the optimal convergence rate under cumulative regret, which indicates the potential for broader use of algorithms designed based on the OCBA optimality conditions.
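The optimality conditions referenced above are the classical OCBA conditions (Chen et al.), and they translate directly into an allocation rule. The sketch below assumes known variances, a unique best design, and minimization; it illustrates the standard rule rather than either of the two specific algorithms analyzed in the paper.

```python
# Classical OCBA budget allocation from the standard optimality conditions.
import numpy as np

def ocba_allocation(means, variances, total_budget):
    """Split `total_budget` samples across designs per the OCBA conditions."""
    means = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    b = int(np.argmin(means))        # index of the best design
    delta = means - means[b]         # optimality gaps (zero only at b)
    nb = np.arange(len(means)) != b  # mask of non-best designs
    ratio = np.empty_like(means)
    ref = np.flatnonzero(nb)[0]
    # N_i / N_ref = (sigma_i^2 / delta_i^2) / (sigma_ref^2 / delta_ref^2).
    ratio[nb] = (var[nb] / delta[nb] ** 2) / (var[ref] / delta[ref] ** 2)
    # N_b = sigma_b * sqrt(sum over non-best designs of N_i^2 / sigma_i^2).
    ratio[b] = np.sqrt(var[b] * np.sum(ratio[nb] ** 2 / var[nb]))
    alloc = total_budget * ratio / ratio.sum()
    return np.maximum(1, np.floor(alloc)).astype(int)
```

For example, `ocba_allocation([1.0, 1.2, 1.5], [1.0, 1.0, 1.0], 300)` concentrates samples on the best design and its closest competitor; sequential OCBA algorithms re-solve this allocation as the sample means are updated.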

Standard multi-modal models assume the use of the same modalities in the training and inference stages. In practice, however, the environment in which multi-modal models operate may not satisfy this assumption, and their performance degrades drastically if any modality is missing at inference. We ask: how can we train a model that is robust to missing modalities? This paper seeks a set of good practices for multi-modal action recognition, with a particular interest in circumstances where some modalities are not available at inference time. First, we study how to effectively regularize the model during training (e.g., with data augmentation). Second, we investigate fusion methods for robustness to missing modalities: we find that transformer-based fusion shows better robustness to a missing modality than summation or concatenation. Third, we propose a simple modular network, ActionMAE, which learns missing-modality predictive coding by randomly dropping modality features and reconstructing them from the remaining modality features. Combining these good practices, we build a model that is not only effective in multi-modal action recognition but also robust to missing modalities. Our model achieves state-of-the-art results on multiple benchmarks and maintains competitive performance even in missing-modality scenarios. Code is available at https://github.com/sangminwoo/ActionMAE.
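The drop-a-modality-and-reconstruct mechanism lends itself to a compact sketch. The PyTorch toy below is our illustration of that idea, not the authors' implementation (see the linked repository); all module names and sizes are assumptions.

```python
# Toy modality-masking autoencoder: drop one modality's features per sample,
# fuse the rest with a transformer, and reconstruct the dropped features.
import torch
import torch.nn as nn

class ModalityMAE(nn.Module):
    def __init__(self, num_modalities=3, dim=256):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.reconstruct = nn.Linear(dim, dim)

    def forward(self, feats):  # feats: (batch, num_modalities, dim)
        b, m, _ = feats.shape
        drop = torch.randint(0, m, (b,))  # one dropped modality per sample
        masked = feats.clone()
        masked[torch.arange(b), drop] = self.mask_token.squeeze(0)
        fused = self.fusion(masked)       # transformer-based fusion
        pred = self.reconstruct(fused[torch.arange(b), drop])
        target = feats[torch.arange(b), drop]
        recon_loss = nn.functional.mse_loss(pred, target)
        return fused.mean(dim=1), recon_loss  # pooled feature + aux loss
```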

The purpose of this article is to develop a general parametric estimation theory that allows the derivation of the limit distribution of estimators in non-regular models, where the true parameter value may lie on the boundary of the parameter space or where even identifiability fails. To this end, we propose a more general local approximation of the parameter space (at the true value) than in previous studies. This estimation theory is comprehensive in that it can handle penalized estimation as well as quasi-maximum likelihood estimation under such non-regular models. Moreover, our results apply to so-called non-ergodic statistics, where the Fisher information is random in the limit, including the regular experiment that is locally asymptotically mixed normal. In penalized estimation, depending on the boundary constraint, even the Bridge estimator with $q<1$ does not necessarily give selection consistency; we therefore describe a sufficient condition for selection consistency, precisely evaluating the balance between the boundary constraint and the form of the penalty. Examples handled in the paper are: (i) ML estimation of the generalized inverse Gaussian distribution, (ii) quasi-ML estimation of the diffusion parameter in a non-ergodic It\^o process whose parameter space consists of positive semi-definite symmetric matrices, while the drift parameter is treated as a nuisance, and (iii) penalized ML estimation of variance components of random effects in linear mixed models.
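For concreteness, the Bridge-type penalized estimator referred to above has the standard form (generic notation, not specific to the paper):

\[
\hat\theta_n \in \arg\max_{\theta\in\Theta}\left\{\ell_n(\theta)-\lambda_n\sum_{j}|\theta_j|^{q}\right\},\qquad 0<q<1,
\]

where $\ell_n$ is the (quasi-)log-likelihood; the point above is that when $\Theta$ has a boundary, even this nonconvex penalty does not by itself guarantee selection consistency.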

We consider performing simulation experiments in the presence of covariates, where covariates refer to input information, other than the system designs themselves, that also affects the system performance. To make decisions, decision makers need to know the covariate values of the problem. Traditionally in simulation-based decision making, simulation samples are collected after the covariate values are known; in contrast, as a new framework, simulation with covariates starts the simulation before the covariate values are revealed and collects samples on covariate values that might appear later. When the covariate values are revealed, the collected simulation samples are directly used to predict the desired results. This framework significantly reduces the decision time compared to the traditional approach. In this paper, we follow this framework and suppose there is a finite number of system designs. We adopt the metamodel of stochastic kriging (SK) and use it to predict the system performance of each design and the best design. The goal is to study how fast the prediction errors diminish with the number of covariate points sampled. This is a fundamental problem in simulation with covariates and helps quantify the relationship between the offline simulation effort and the online prediction accuracy. In particular, we adopt the measures of maximal integrated mean squared error (IMSE) and integrated probability of false selection (IPFS) for assessing the errors of the system performance and best-design predictions, and we establish convergence rates for the two measures under mild conditions. Finally, these convergence behaviors are illustrated numerically using test examples.
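A minimal sketch of the offline-simulate / online-predict workflow described above follows, with scikit-learn's Gaussian process regressor standing in for stochastic kriging (SK additionally models heteroscedastic simulation noise, which this sketch omits); `simulate` and all names here are our placeholders.

```python
# Offline stage: fit one metamodel per design over sampled covariate points.
# Online stage: predict every design at the revealed covariate, no new runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def offline_stage(simulate, designs, covariate_points):
    """covariate_points: (n, p) array of sampled covariate values."""
    models = []
    for d in designs:
        y = np.array([simulate(d, c) for c in covariate_points])
        gp = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(
            covariate_points, y)
        models.append(gp)
    return models

def online_stage(models, covariate_value):
    """Predict each design's performance and pick the best (minimization)."""
    preds = np.array([m.predict(np.atleast_2d(covariate_value))[0]
                      for m in models])
    return preds, int(np.argmin(preds))
```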

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for inference and reasoning about the behavior of stochastic systems affected by external manipulations (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effect estimators for instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. We furthermore propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable to additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
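As a reference point for the K-class result mentioned above, recall the standard closed form of the K-class estimator in a linear instrumental-variable model (our notation, not the thesis's):

\[
\hat\beta(\kappa)=\bigl(X^{\top}(I-\kappa M_Z)X\bigr)^{-1}X^{\top}(I-\kappa M_Z)Y,
\qquad M_Z=I-Z(Z^{\top}Z)^{-1}Z^{\top},
\]

where $\kappa=0$ gives OLS and $\kappa=1$ gives two-stage least squares; the distributional robustness properties concern the behavior of this family as $\kappa$ varies.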

Analyzing observational data from multiple sources can be useful for increasing statistical power to detect a treatment effect; however, practical constraints such as privacy considerations may restrict individual-level information sharing across data sets. This paper develops federated methods that utilize only summary-level information from heterogeneous data sets. Our federated methods provide doubly robust point estimates of treatment effects as well as variance estimates. We derive the asymptotic distributions of our federated estimators, which are shown to be asymptotically equivalent to the corresponding estimators from the combined, individual-level data. We show that, to achieve these properties, federated methods should be adjusted based on conditions such as whether models are correctly specified and stable across the heterogeneous data sets.
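The sketch below illustrates the summary-level pattern in its simplest form: each site computes a doubly robust (AIPW) estimate and its variance locally, and only those summaries are pooled. The paper's actual adjustments for model misspecification and instability across sites are not reproduced here.

```python
# Federated doubly robust estimation from site-level summaries only.
import numpy as np

def aipw_site_summary(y, z, e_hat, mu1_hat, mu0_hat):
    """One site's (estimate, variance, n); only these summaries are shared.

    y: outcomes, z: treatment, e_hat: propensity scores,
    mu1_hat/mu0_hat: outcome-model predictions under treatment/control.
    """
    psi = (z * (y - mu1_hat) / e_hat + mu1_hat
           - (1 - z) * (y - mu0_hat) / (1 - e_hat) - mu0_hat)
    return psi.mean(), psi.var(ddof=1) / len(y), len(y)

def federate(summaries):
    """Inverse-variance weighted combination of the site summaries."""
    est = np.array([s[0] for s in summaries])
    var = np.array([s[1] for s in summaries])
    weights = (1.0 / var) / np.sum(1.0 / var)
    return float(np.sum(weights * est)), float(1.0 / np.sum(1.0 / var))
```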
