
Response functions linking regression predictors to properties of the response distribution are fundamental components of many statistical models. However, the choice of these functions is typically based on the domain of the modeled quantities and is not further scrutinized. For example, the exponential response function is usually assumed for parameters restricted to be positive, although it implies a multiplicative model, which may not necessarily be desired. Consequently, applied researchers might easily face misleading results when relying on such defaults without further investigation. As an alternative to the exponential response function, we propose the use of the softplus function to construct alternative link functions for parameters restricted to be positive. As a major advantage, we can construct differentiable link functions that correspond closely to the identity function for positive values of the regression predictor, which implies a quasi-additive model and thus allows practitioners to interpret the estimated effects additively. We demonstrate the applicability of the softplus response function using both simulations and real data. In four applications featuring count data regression and Bayesian distributional regression, we contrast our approach with the commonly used exponential response function.
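To make the contrast concrete, here is a minimal numerical sketch (not the paper's implementation) comparing the two response functions: the exponential maps additive changes in the predictor to multiplicative changes in the parameter, while the softplus, $\log(1+\exp(\eta))$, stays close to the identity once the predictor is clearly positive.

```python
# Minimal sketch (not the paper's code): exponential vs. softplus response
# functions mapping a linear predictor eta to a positive parameter.
import numpy as np

def softplus(eta):
    # log(1 + exp(eta)), computed stably; approximately equal to eta for large positive eta
    return np.logaddexp(0.0, eta)

eta = np.array([-2.0, 0.0, 2.0, 5.0, 10.0])
print(np.exp(eta))    # multiplicative scale: a unit change in eta multiplies the parameter by e
print(softplus(eta))  # quasi-additive scale: close to eta itself for clearly positive eta
```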

Related content

We propose a new wavelet-based method for density estimation when the data are size-biased. More specifically, we consider a power of the density of interest, where this power exceeds 1/2. Warped wavelet bases are employed, with warping attained by some continuous cumulative distribution function. This can be seen as a general framework in which conventional orthonormal wavelet estimation is the special case where the warping distribution is the standard uniform c.d.f. We show that both linear and nonlinear wavelet estimators are consistent, with optimal and/or near-optimal rates. Monte Carlo simulations are performed to compare four special settings that are easy to interpret in practice. An application to a real dataset on fatal traffic accidents involving alcohol illustrates the method. We observe that warped bases provide more flexible and superior estimates for both simulated and real data. Moreover, we find that estimating a power of the density (for instance, its square root) further improves the results.
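As a toy illustration of the size-bias correction underlying such estimators, here is a sketch under strong simplifying assumptions (support on $[0,1]$, weight $w(x)=x$, Haar basis, uniform warping); the paper's warped bases, nonlinear thresholding, and power-of-density estimation are not reproduced here.

```python
# Toy sketch (not the paper's estimator): linear Haar-coefficient density estimate
# from size-biased data X ~ g(x) proportional to x f(x) on [0, 1]; a non-uniform
# warping c.d.f. G would enter by evaluating the basis at G(X) instead of X.
import numpy as np

def haar_phi(x, j, k):
    # Haar scaling function at resolution j, shift k: 2^{j/2} on [k/2^j, (k+1)/2^j)
    return 2 ** (j / 2) * ((2 ** j * x >= k) & (2 ** j * x < k + 1))

def linear_haar_density(x, j=4):
    w = x                                    # size-bias weight w(x) = x
    mu_hat = 1.0 / np.mean(1.0 / w)          # estimates E_f[w(X)], since E_g[1/w(X)] = 1/E_f[w(X)]
    # c_{jk} = integral of phi_{jk} f = E_f[w(X)] * E_g[phi_{jk}(X) / w(X)], estimated empirically
    coef = np.array([mu_hat * np.mean(haar_phi(x, j, k) / w) for k in range(2 ** j)])
    grid = np.linspace(0, 1, 256, endpoint=False)
    return grid, sum(coef[k] * haar_phi(grid, j, k) for k in range(2 ** j))

rng = np.random.default_rng(0)
x = rng.beta(2, 1, size=5_000)           # size-biased sample when the target f is Uniform(0, 1)
grid, f_hat = linear_haar_density(x)     # f_hat should be roughly flat at 1
```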

Multi-species distribution models and joint species distribution models extend single-species distribution models to describe the relationship between environmental variables and a community of species. Under certain assumptions, one can model either the marginal distribution of each species in the community (multi-species models) or their joint distribution (joint species models), but a model describing both entities simultaneously has not been available. We propose a novel model that allows the description of both the joint distribution of multiple species and the marginal single-species distributions within the framework of multivariate transformation models. Model parameters can be estimated from abundance data by two approximate maximum-likelihood procedures. Using a model community of three fish-eating birds, we demonstrate that inter-specific food competition over the course of a year can be modeled using count transformation models equipped with three time-dependent Spearman's rank correlation parameters. We use the same data set to compare the performance of our model to that of a competitor model from the species distribution modeling literature. Multi-species count transformation models provide an alternative to multi- and joint-species distribution models. In addition to marginal transformation models capturing single-species distributions, the interaction between species can be expressed by Spearman's rank correlations in an overarching model formulation that allows simultaneous inference for all model parameters. A software implementation is available in the "cotram" add-on package to the R system for statistical computing.
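The cotram package itself is written in R; as a language-agnostic sketch of the underlying idea (count marginals coupled through a latent Gaussian dependence structure, with the latent correlation tied to Spearman's rank correlation), consider the following; it is illustrative only and not the cotram implementation.

```python
# Minimal sketch (not the cotram implementation): couple two count marginals via a
# latent Gaussian copula. For continuous margins, the latent correlation rho maps to
# Spearman's rho_S = (6/pi) * arcsin(rho/2); for counts this holds only approximately.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.6                                            # assumed latent correlation between two species
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)
u = stats.norm.cdf(z)                                # copula (uniform) scale
counts = np.column_stack([stats.poisson.ppf(u[:, 0], mu=3),    # marginal count model, species 1
                          stats.poisson.ppf(u[:, 1], mu=7)])   # marginal count model, species 2
rho_s, _ = stats.spearmanr(counts[:, 0], counts[:, 1])
print(rho_s, 6 / np.pi * np.arcsin(rho / 2))         # empirical vs. continuous-scale reference value
```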

Phase-type (PH) distributions are a popular tool for the analysis of univariate risks in numerous actuarial applications. Their multivariate counterparts (MPH$^\ast$), however, have not seen such a proliferation, due to a lack of explicit formulas and complicated estimation procedures. A simple construction of multivariate phase-type distributions -- mPH -- is proposed for the parametric description of multivariate risks, leading to models of considerable probabilistic flexibility and statistical tractability. The main idea is to start different Markov processes at the same state and allow them to evolve independently thereafter, leading to dependent absorption times. By dimension augmentation arguments, this construction can be cast under the umbrella of the MPH$^\ast$ class, but it enjoys explicit formulas that the general specification lacks, including common measures of dependence. Moreover, it is shown that the class is still rich enough to be dense in the set of multivariate risks supported on the positive orthant, and it is the smallest known sub-class with this property. In particular, the latter result provides a new short proof of the denseness of the MPH$^\ast$ class. In practice, this means that the mPH class allows for the modeling of bivariate risks with any given correlation or copula. We derive an EM algorithm for its statistical estimation and illustrate it on bivariate insurance data. Extensions to more general settings are outlined.
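The construction can be illustrated by simulation; the sketch below (illustrative only, not the paper's EM estimation, and with arbitrary assumed sub-intensity matrices) starts two absorbing Markov chains in the same randomly drawn state and lets them run independently, which already induces dependent absorption times.

```python
# Simulation sketch of the mPH idea: two phase-type margins share the same random
# starting state, then evolve independently; the shared start creates dependence.
import numpy as np

rng = np.random.default_rng(1)

def absorption_time(start, T):
    # Time to absorption of a CTMC with sub-intensity matrix T, starting in
    # transient state `start`; the leftover rate of each row leads to absorption.
    p, t, state = T.shape[0], 0.0, int(start)
    while state is not None:
        t += rng.exponential(1.0 / -T[state, state])
        rates = np.append(np.maximum(T[state], 0.0), -T[state].sum())  # transitions + exit rate
        nxt = rng.choice(p + 1, p=rates / rates.sum())
        state = nxt if nxt < p else None
    return t

pi = np.array([0.5, 0.5])                       # common initial distribution (assumed)
T1 = np.array([[-2.0, 1.0], [0.0, -3.0]])       # sub-intensity matrix, margin 1 (assumed)
T2 = np.array([[-1.0, 0.5], [0.2, -2.0]])       # sub-intensity matrix, margin 2 (assumed)
starts = rng.choice(2, size=5_000, p=pi)        # the same start state is used for both margins
x = np.array([[absorption_time(s, T1), absorption_time(s, T2)] for s in starts])
print(np.corrcoef(x.T)[0, 1])                   # positive dependence induced by the shared start
```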

Classical methods for quantile regression fail in cases where the quantile of interest is extreme and only a few or no training data points exceed it. Asymptotic results from extreme value theory can be used to extrapolate beyond the range of the data, and several approaches exist that use linear regression, kernel methods or generalized additive models. Most of these methods break down if the predictor space has more than a few dimensions or if the regression function of the extreme quantiles is complex. We propose a method for extreme quantile regression that combines the flexibility of random forests with the theory of extrapolation. Our extremal random forest (ERF) estimates the parameters of a generalized Pareto distribution, conditional on the predictor vector, by maximizing a local likelihood with weights extracted from a quantile random forest. Under certain assumptions, we show consistency of the estimated parameters. Furthermore, we penalize the shape parameter in this likelihood to regularize its variability in the predictor space. Simulation studies show that our ERF outperforms both classical quantile regression methods and existing regression approaches from extreme value theory. We apply our methodology to extreme quantile prediction for U.S. wage data.
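The core computational step is a weighted generalized Pareto fit; the sketch below shows that step in isolation (a simplified sketch, not the ERF package: the forest weights are replaced by a uniform placeholder and the shape-parameter penalty is omitted).

```python
# Minimal sketch: fit a generalized Pareto distribution to threshold exceedances by
# maximizing a *weighted* log-likelihood; in ERF the weights w_i would come from a
# quantile random forest localized at a test point, not the uniform placeholder here.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

def weighted_gpd_fit(exceedances, w):
    def nll(theta):
        log_sigma, xi = theta
        lp = genpareto.logpdf(exceedances, c=xi, scale=np.exp(log_sigma))
        val = -np.sum(w * lp)
        return val if np.isfinite(val) else 1e10   # guard against invalid support regions
    res = minimize(nll, x0=np.array([0.0, 0.1]), method="Nelder-Mead")
    return np.exp(res.x[0]), res.x[1]              # (scale sigma, shape xi)

rng = np.random.default_rng(2)
y = genpareto.rvs(c=0.2, scale=1.0, size=500, random_state=rng)  # toy exceedances
w = np.ones_like(y) / y.size                                     # placeholder forest weights
print(weighted_gpd_fit(y, w))
```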

In recent years, the literature on Bayesian high-dimensional variable selection has rapidly grown. It is increasingly important to understand whether these Bayesian methods can consistently estimate the model parameters. To this end, shrinkage priors are useful for identifying relevant signals in high-dimensional data. For multivariate linear regression models with Gaussian response variables, Bai and Ghosh (2018) proposed a multivariate Bayesian model with shrinkage priors (MBSP) for estimation and variable selection in high-dimensional settings. However, the proofs of posterior consistency for the MBSP method (Theorems 3 and 4 of Bai and Ghosh (2018)) were incorrect. In this paper, we provide a corrected proof of Theorems 3 and 4 of Bai and Ghosh (2018). We leverage these new proofs to extend the MBSP model to multivariate generalized linear models (GLMs). Under our proposed model (MBSP-GLM), multiple responses belonging to the exponential family are simultaneously modeled and mixed-type responses are allowed. We show that the MBSP-GLM model achieves strong posterior consistency when $p$ grows at a subexponential rate with $n$. Furthermore, we quantify the posterior contraction rate at which the posterior shrinks around the true regression coefficients and allow the dimension of the responses $q$ to grow as $n$ grows. Thus, we strengthen the previous results on posterior consistency, which did not provide rate results. This greatly expands the scope of the MBSP model to include response variables of many data types, including binary and count data. To the best of our knowledge, this is the first posterior contraction result for multivariate Bayesian GLMs.
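For orientation, a generic row-wise global-local shrinkage prior of the kind used in this model class can be written schematically as (my rendering, not a verbatim quotation of Bai and Ghosh's specification)

$$ b_j \mid \xi_j \sim \mathcal{N}_q\!\left(0,\; \tau\,\xi_j\,\Sigma\right), \qquad \xi_j \sim \pi(\xi_j), \qquad j = 1, \dots, p, $$

where $b_j$ is the $j$th row of the $p \times q$ coefficient matrix, $\tau$ is a global shrinkage parameter, $\pi$ is a polynomial-tailed mixing density, and the GLM extension replaces the Gaussian likelihood by an exponential-family likelihood with linear predictor $XB$.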

Empirical risk minimization (ERM) is known in practice to be non-robust to distributional shift, where the training and test distributions differ. A suite of approaches, such as importance weighting and variants of distributionally robust optimization (DRO), has been proposed to solve this problem. But a line of recent work has empirically shown that these approaches do not significantly improve over ERM in real applications with distribution shift. The goal of this work is to obtain a comprehensive theoretical understanding of this intriguing phenomenon. We first posit the class of Generalized Reweighting (GRW) algorithms as a broad category of approaches that iteratively update model parameters based on iterative reweighting of the training samples. We show that when overparameterized models are trained under GRW, the resulting models are close to those obtained by ERM. We also show that adding a small amount of regularization that does not greatly affect the empirical training accuracy does not help. Together, our results show that the broad category of what we term GRW approaches is not able to achieve distributionally robust generalization. Our work thus has the following sobering takeaway: to make progress towards distributionally robust generalization, we either have to develop non-GRW approaches, or perhaps devise novel classification/regression loss functions that are adapted to the class of GRW approaches.
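The following toy loop sketches what I understand a GRW-style algorithm to look like (my reading of the abstract, not the paper's definition): alternate between updating per-sample weights and taking gradient steps on the weighted empirical loss. Other members of the family would use static importance weights or group-level reweighting instead of the loss-driven weights used here.

```python
# Sketch of a Generalized Reweighting (GRW)-style training loop on a toy linear model.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
theta_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ theta_true + 0.1 * rng.standard_normal(200)

theta = np.zeros(5)
w = np.full(200, 1 / 200)                      # start from uniform weights (plain ERM)
lr, temp = 0.1, 50.0
for _ in range(500):
    residuals = X @ theta - y
    losses = residuals ** 2
    # Weight update: upweight high-loss samples (a DRO-flavored choice of reweighting).
    w = np.exp(losses / temp)
    w /= w.sum()
    grad = 2 * X.T @ (w * residuals)           # gradient of the weighted empirical loss
    theta -= lr * grad
print(theta)                                   # for overparameterized models, GRW and ERM coincide
```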

We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimation. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Our analysis also shows consistency of union support recovery for the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer coincides with the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
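A sketch of the kind of objective I read from the abstract is given below: Gaussian linear-SEM losses for the $K$ tasks plus an $l_1/l_2$ (group-lasso) penalty that couples the same edge across tasks. It assumes unit noise variances and a common causal order, and it is not the paper's exact estimator.

```python
# Sketch of a multi-task l1/l2-regularized objective for K linear SEMs.
import numpy as np

def joint_sem_objective(Bs, datasets, lam):
    # Bs: list of K (p, p) coefficient matrices, B[k][i, j] = effect of variable j on i
    # (assumed to respect a common causal order); datasets: list of (n_k, p) data arrays.
    nll = 0.0
    for B, X in zip(Bs, datasets):
        resid = X - X @ B.T                   # structural residuals X_i - sum_j B_ij X_j
        nll += 0.5 * np.sum(resid ** 2) / X.shape[0]
    # l1/l2 penalty: l2 over tasks for each edge, then l1 over edges -> shared sparse support
    penalty = np.sum(np.sqrt(np.sum(np.stack(Bs) ** 2, axis=0)))
    return nll + lam * penalty
```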

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effect estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. Furthermore, we propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
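For readers unfamiliar with the K-class estimators referenced above, here is the classical closed form (standard econometrics material, not code from the thesis): the estimator interpolates between OLS ($\kappa = 0$) and two-stage least squares ($\kappa = 1$); the thesis relates such estimators to distributional robustness, which is not shown here.

```python
# Classical K-class estimator: beta(kappa) = (X'(I - kappa*M_Z)X)^{-1} X'(I - kappa*M_Z)y,
# where M_Z is the residual-maker (annihilator) matrix of the instruments Z.
import numpy as np

def k_class(X, y, Z, kappa):
    n = Z.shape[0]
    M_Z = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)   # annihilator of the instrument space
    A = np.eye(n) - kappa * M_Z                           # kappa = 0 -> OLS, kappa = 1 -> 2SLS
    return np.linalg.solve(X.T @ A @ X, X.T @ A @ y)
```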

We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones a human would pick when selecting candidate answers; this is done by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-to-sequence question-generation model with a copy mechanism. Empirically, our key-phrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This two-stage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.
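To make the copy mechanism concrete, here is a tiny numeric sketch of a pointer-generator-style output distribution (a common realization of copy mechanisms; the abstract does not pin down this exact variant): the decoder mixes its vocabulary softmax with the attention over source tokens, so source words can be copied verbatim.

```python
# Toy illustration of a copy mechanism: p_final(w) = p_gen * P_vocab(w) + (1 - p_gen) * copy mass.
import numpy as np

vocab = ["<unk>", "what", "produces", "oxygen", "trees"]
source_tokens = ["trees", "produce", "oxygen"]       # tokens of the source document

p_vocab = np.array([0.05, 0.40, 0.20, 0.20, 0.15])   # decoder's generation distribution (toy values)
attention = np.array([0.7, 0.1, 0.2])                # attention over source positions (toy values)
p_gen = 0.6                                          # learned gate: generate vs. copy

p_final = p_gen * p_vocab.copy()
for pos, tok in enumerate(source_tokens):
    # In the real mechanism, OOV source words get their own extended-vocabulary slot;
    # here they simply fall back to <unk> to keep the example short.
    idx = vocab.index(tok) if tok in vocab else 0
    p_final[idx] += (1 - p_gen) * attention[pos]
print(dict(zip(vocab, np.round(p_final, 3))))        # sums to 1; "trees" gains copy probability
```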

Given a knowledge base (KB) containing (noisy) facts about common nouns or generics, such as "all trees produce oxygen" or "some animals live in forests", we consider the problem of inferring additional such facts at a precision similar to that of the starting KB. Such KBs capture general knowledge about the world and are crucial for various applications such as question answering. Different from commonly studied named-entity KBs such as Freebase, generics KBs involve quantification, have more complex underlying regularities, tend to be more incomplete, and violate the commonly used local closed-world assumption (LCWA). We show that existing KB completion methods struggle with this new task, and we present the first approach that is successful. Our results demonstrate that external information, such as relation schemas and entity taxonomies, if used appropriately, can be a surprisingly powerful tool in this setting. First, our simple yet effective knowledge-guided tensor factorization approach achieves state-of-the-art results on two generics KBs (80% precise) for science, doubling their size at 74%-86% precision. Second, our novel taxonomy-guided, submodular, active learning method for collecting annotations about rare entities (e.g., oriole, a bird) is 6x more effective at inferring further new facts about them than multiple active learning baselines.
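The abstract does not spell out the factorization; as an illustrative stand-in, here is a DistMult-style trilinear scorer (an assumption on my part, not the paper's exact knowledge-guided model), with a comment hinting at where schema or taxonomy information could enter.

```python
# Toy sketch of a tensor-factorization scorer for (subject, relation, object) triples.
import numpy as np

rng = np.random.default_rng(4)
d = 16
entities = {"tree": rng.standard_normal(d), "oxygen": rng.standard_normal(d)}
relations = {"produces": rng.standard_normal(d)}

def score(subj, rel, obj):
    # DistMult-style trilinear score; taxonomy guidance could be injected, e.g., by tying
    # an entity's embedding to its hypernym's (an assumption, not the paper's exact model).
    return float(np.sum(entities[subj] * relations[rel] * entities[obj]))

print(score("tree", "produces", "oxygen"))   # higher scores indicate more plausible facts
```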
