Unlike univariate extreme value theory, multivariate extreme value distributions cannot be specified through a finite-dimensional parametric family of distributions. Instead, the many facets of multivariate extremes are mirrored in the inherent dependence structure of component-wise maxima, which must be dissociated from the limiting extreme behaviour of the marginal distribution functions before a probabilistic characterisation of an extreme value quality can be determined. Mechanisms applied to elicit extremal dependence typically rely on standardisation of the unknown marginal distribution functions, from which pseudo-observations with either Pareto or Fréchet marginals result. The relative merits of both of these choices of marginal transformation have been discussed in the literature, particularly in the context of domains of attraction of an extreme value distribution. This paper is set within this context of modelling penultimate dependence, as it proposes a unifying class of estimators for the residual dependence index that eschews consideration of the choice of marginals. In addition, a reduced-bias variant of the new class of estimators is introduced and its asymptotic properties are developed. The pivotal role of the unifying marginal transform in effectively removing bias is borne out by a comprehensive simulation study. The leading application in this paper comprises an analysis of asymptotic independence between rainfall occurrences originating from monsoon-related events at several locations in Ghana.
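
The marginal standardisation step mentioned above is easy to make concrete. The following is a minimal sketch of the standard rank-based construction of pseudo-observations with approximate unit-Pareto or unit-Fréchet margins; it illustrates the transformation choice the abstract discusses, not the paper's new estimator.

```python
import numpy as np

def pseudo_observations(x, marginal="pareto"):
    """Rank-transform a sample with unknown marginal distribution to
    approximate unit-Pareto or unit-Frechet margins."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1      # ranks 1..n
    u = ranks / (n + 1.0)                      # empirical CDF, kept inside (0, 1)
    if marginal == "pareto":
        return 1.0 / (1.0 - u)                 # unit Pareto: 1 / (1 - U)
    if marginal == "frechet":
        return -1.0 / np.log(u)                # unit Frechet: -1 / log(U)
    raise ValueError("marginal must be 'pareto' or 'frechet'")

rng = np.random.default_rng(0)
x = rng.gamma(2.0, size=500)                   # sample with unknown marginal
z = pseudo_observations(x, marginal="frechet")
```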

Related content

Logistic regression models for binomial responses are routinely used in statistical practice. However, the maximum likelihood estimate may not exist due to data separability. We address this issue by considering a conjugate prior penalty which always produces finite estimates. Such a specification has a clear Bayesian interpretation and enjoys several invariance properties, making it an appealing prior choice. We show that the proposed method leads to an accurate approximation of the reduced-bias approach of Firth (1993), resulting in estimators with smaller asymptotic bias than maximum likelihood and whose existence is always guaranteed. Moreover, the considered penalized likelihood can be expressed as a genuine likelihood in which the original data are replaced with a collection of pseudo-counts. Hence, our approach may leverage well-established and scalable algorithms for logistic regression. Compared with alternative reduced-bias methods, our estimator vastly improves computational performance while achieving appealing inferential results.
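
To make the pseudo-count idea concrete, here is a hedged sketch in Python. The augmentation scheme below (eps/2 pseudo-successes and eps/2 pseudo-failures attached to every covariate row) is a generic illustration of data augmentation for logistic regression, not the paper's exact conjugate-prior construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_pseudo_count_logistic(X, y, eps=0.5):
    """Fit logistic regression on data augmented with weighted
    pseudo-counts (illustrative scheme: eps/2 pseudo-successes and
    eps/2 pseudo-failures per covariate row), so that the penalized
    estimate stays finite even under complete separation."""
    n = len(y)
    X_aug = np.vstack([X, X, X])
    y_aug = np.concatenate([y, np.ones(n), np.zeros(n)])
    w = np.concatenate([np.ones(n), np.full(n, eps / 2), np.full(n, eps / 2)])
    model = LogisticRegression(penalty=None, max_iter=1000)
    model.fit(X_aug, y_aug, sample_weight=w)
    return model

# perfectly separated toy data: the plain MLE would diverge here
X = np.array([[0.1], [0.5], [1.2], [2.0]])
y = np.array([0, 0, 1, 1])
print(fit_pseudo_count_logistic(X, y).coef_)
```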

We propose a new discretization method for PDEs on moving domains in the setting of unfitted finite element methods, which is provably higher-order accurate in space and time. In the considered setting, the physical domain, which evolves essentially arbitrarily through a time-independent computational background domain, is represented by a level set function. For the time discretization, the application of standard time stepping schemes that are based on finite difference approximations of the time derivative is not directly possible, as the degrees of freedom may become active or inactive across such a finite difference stencil in time. In [Lehrenfeld, Olshanskii. An Eulerian finite element method for PDEs in time-dependent domains. ESAIM: M2AN, 53:585--614, 2019] this problem is overcome by extending the discrete solution at every timestep to a sufficiently large neighborhood, so that all the degrees of freedom that are relevant at the next time step stay active. But that paper considers only low-order methods. We advance these results by introducing and analyzing realizable techniques for the extension to higher order. To obtain higher-order convergence in space and time, we combine BDF time stepping with the isoparametric unfitted FEM. The latter has been used and analyzed for several stationary problems before. However, for moving domains the key ingredient of the method, the transformation of the underlying mesh, becomes time-dependent, which gives rise to some technical issues. We treat these with special care, carry out an a priori error analysis, and present two numerical experiments.
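
As an illustration of the time-stepping ingredient only (the unfitted spatial discretization is well beyond a snippet), the sketch below applies the BDF2 stencil to a scalar ODE. The two-step stencil makes clear why past solution values must remain available, which is what the extension step provides on a moving domain.

```python
import numpy as np

def bdf2(f, u0, t0, t1, n_steps):
    """BDF2 time stepping for u'(t) = f(t, u). The implicit stage is
    solved by fixed-point iteration, which suffices for this demo.
    The two-step stencil (u^{n+1}, u^n, u^{n-1}) is why past values
    must stay accessible at each new time level."""
    dt = (t1 - t0) / n_steps
    # bootstrap the multistep scheme with one backward-Euler step
    t, u_prev, u = t0 + dt, u0, u0
    for _ in range(5):
        u = u0 + dt * f(t, u)
    for _ in range(n_steps - 1):
        t += dt
        u_new = u
        for _ in range(5):                      # implicit solve
            u_new = (4 * u - u_prev + 2 * dt * f(t, u_new)) / 3
        u_prev, u = u, u_new
    return u

# decay test u' = -u: compare against the exact value exp(-1)
print(bdf2(lambda t, u: -u, 1.0, 0.0, 1.0, 100), np.exp(-1.0))
```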

We propose a new simulation-based estimation method, adversarial estimation, for structural models. The estimator is formulated as the solution to a minimax problem between a generator (which generates synthetic observations using the structural model) and a discriminator (which classifies whether an observation is synthetic). The discriminator maximizes the accuracy of its classification while the generator minimizes it. We show that, with a sufficiently rich discriminator, the adversarial estimator attains parametric efficiency under correct specification and the parametric rate under misspecification. We advocate the use of a neural network as a discriminator that can exploit adaptivity properties and attain fast rates of convergence. We apply our method to a model of the elderly's saving decisions and show that the estimator uncovers the bequest motive as an important source of saving across the wealth distribution, not only among the rich.
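
A toy version of the minimax construction fits in a few lines. In this hedged sketch the structural model is simply a Gaussian location family, the discriminator is a logistic regression, and the minimization is a grid search; all three are stand-ins chosen for brevity rather than taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def discriminator_accuracy(real, synthetic):
    """Fit a logistic discriminator to classify real vs. synthetic
    draws and return its in-sample accuracy."""
    X = np.concatenate([real, synthetic]).reshape(-1, 1)
    labels = np.concatenate([np.ones(len(real)), np.zeros(len(synthetic))])
    clf = LogisticRegression().fit(X, labels)
    return clf.score(X, labels)

# "structural model": Gaussian location family; true parameter is 2
real = rng.normal(2.0, 1.0, size=1000)
noise = rng.normal(0.0, 1.0, size=1000)         # common random numbers
grid = np.linspace(0.0, 4.0, 41)
acc = [discriminator_accuracy(real, theta + noise) for theta in grid]
print(grid[int(np.argmin(acc))])                # generator minimises accuracy
```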

This paper is concerned with two improved variants of the Hutch++ algorithm for estimating the trace of a square matrix, implicitly given through matrix-vector products. Hutch++ combines randomized low-rank approximation in a first phase with stochastic trace estimation in a second phase. As a result, Hutch++ requires only $O\left(\varepsilon^{-1}\right)$ matrix-vector products to approximate the trace within a relative error $\varepsilon$ with high probability. This compares favorably with the $O\left(\varepsilon^{-2}\right)$ matrix-vector products needed when using stochastic trace estimation alone. In Hutch++, the number of matrix-vector products is fixed a priori and distributed in a prescribed fashion among the two phases. In this work, we derive an adaptive variant of Hutch++, which outputs an estimate of the trace that is within some prescribed error tolerance with a controllable failure probability, while splitting the matrix-vector products in a near-optimal way among the two phases. For the special case of symmetric positive semi-definite matrices, we present another variant of Hutch++, called Nyström++, which utilizes the so-called Nyström approximation and requires only one pass over the matrix, as compared to two passes with Hutch++. We extend the analysis of Hutch++ to Nyström++. Numerical experiments demonstrate the effectiveness of our two new algorithms.
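
The basic Hutch++ scheme, before the adaptive and Nyström++ variants introduced here, is short enough to state in code. The following sketch follows the standard published description, splitting the m matrix-vector products evenly across sketching, projected trace, and residual estimation.

```python
import numpy as np

def hutchpp(matvec, n, m, rng):
    """Hutch++ with m matrix-vector products in total: m/3 for the
    sketch, m/3 for the projected trace, m/3 for the Hutchinson
    estimate of the residual (Rademacher test vectors throughout)."""
    k = m // 3
    S = rng.choice([-1.0, 1.0], size=(n, k))    # sketching probes
    G = rng.choice([-1.0, 1.0], size=(n, k))    # Hutchinson probes
    Q, _ = np.linalg.qr(matvec(S))              # orthonormal range basis
    t_lowrank = np.trace(Q.T @ matvec(Q))       # exact trace of Q^T A Q
    G_perp = G - Q @ (Q.T @ G)                  # project out the range
    t_resid = np.trace(G_perp.T @ matvec(G_perp)) / k
    return t_lowrank + t_resid

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 60))
A = B @ B.T                                     # PSD test matrix
print(hutchpp(lambda X: A @ X, 200, 90, rng), np.trace(A))
```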

In the context of principal components analysis (PCA), the bootstrap is commonly applied to solve a variety of inference problems, such as constructing confidence intervals for the eigenvalues of the population covariance matrix $\Sigma$. However, when the data are high-dimensional, there are relatively few theoretical guarantees that quantify the performance of the bootstrap. Our aim in this paper is to analyze how well the bootstrap can approximate the joint distribution of the leading eigenvalues of the sample covariance matrix $\hat\Sigma$, and we establish non-asymptotic rates of approximation with respect to the multivariate Kolmogorov metric. Under certain assumptions, we show that the bootstrap can achieve the dimension-free rate of ${\tt{r}}(\Sigma)/\sqrt n$ up to logarithmic factors, where ${\tt{r}}(\Sigma)$ is the effective rank of $\Sigma$, and $n$ is the sample size. From a methodological standpoint, our work also illustrates that applying a transformation to the eigenvalues of $\hat\Sigma$ before bootstrapping is an important consideration in high-dimensional settings.
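
A minimal sketch of the procedure is given below, with the natural logarithm as a stand-in for the eigenvalue transformation; the transformation the paper actually recommends may differ.

```python
import numpy as np

def bootstrap_top_eigs(X, n_boot, k, rng, transform=np.log):
    """Bootstrap the k leading eigenvalues of the sample covariance,
    applying a monotone transform to the eigenvalues before collecting
    the bootstrap replicates."""
    n = X.shape[0]
    boot = np.empty((n_boot, k))
    for b in range(n_boot):
        Xb = X[rng.integers(0, n, size=n)]          # resample rows
        evals = np.linalg.eigvalsh(np.cov(Xb, rowvar=False))
        boot[b] = transform(evals[::-1][:k])        # k largest, transformed
    return boot

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50)) @ np.diag(np.linspace(2.0, 0.5, 50))
boot = bootstrap_top_eigs(X, n_boot=500, k=3, rng=rng)
print(np.percentile(boot, [2.5, 97.5], axis=0))     # intervals on log scale
```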

Fast and accurate predictions of uncertainties in the computed dose are crucial for the determination of robust treatment plans in radiation therapy. This requires the solution of particle transport problems with uncertain parameters or initial conditions. Monte Carlo methods are often used to solve transport problems, especially for applications that require high accuracy. In these cases, common non-intrusive solution strategies that involve repeated simulations of the problem at different points in the parameter space quickly become infeasible due to their long run-times. Intrusive methods, however, limit usability in combination with proprietary simulation engines. In our previous paper [51], we demonstrated the application of a new non-intrusive uncertainty quantification approach for Monte Carlo simulations in proton dose calculations with normally distributed errors on realistic patient data. In this paper, we introduce a generalized formulation and focus on a more in-depth theoretical analysis of this method concerning bias, error, and convergence of the estimates. The multivariate input model of the proposed approach further supports almost arbitrary error correlation models. We demonstrate how this framework can be used to model and efficiently quantify complex auto-correlated and time-dependent errors.
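
As an illustration of the kind of multivariate-normal input model described above, the sketch below draws auto-correlated Gaussian error trajectories from an exponential correlation kernel. The kernel choice is ours, for illustration only; the paper's framework supports almost arbitrary correlation models.

```python
import numpy as np

def sample_autocorrelated_errors(n_times, length_scale, sigma, n_samples, rng):
    """Draw zero-mean Gaussian error trajectories with exponential
    autocorrelation: Cov(e_s, e_t) = sigma^2 * exp(-|s - t| / length_scale)."""
    t = np.arange(n_times)
    cov = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / length_scale)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_times))   # jitter for stability
    return (L @ rng.standard_normal((n_times, n_samples))).T

rng = np.random.default_rng(0)
errors = sample_autocorrelated_errors(100, 10.0, 0.02, 5, rng)
print(errors.shape)   # (5, 100): five correlated error trajectories
```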

When evaluating and comparing models using leave-one-out cross-validation (LOO-CV), the uncertainty of the estimate is typically assessed using the variance of the sampling distribution. Considering the uncertainty is important, as the variability of the estimate can be high in some cases. An important result by Bengio and Grandvalet (2004) states that no general unbiased variance estimator can be constructed that applies to any utility or loss measure and any model. We show that an unbiased estimator can nevertheless be constructed for a specific predictive performance measure and model. We demonstrate an unbiased sampling distribution variance estimator for the Bayesian normal model with fixed model variance using the expected log pointwise predictive density (elpd) utility score. This example demonstrates that it is possible to obtain improved, problem-specific, unbiased estimators for assessing the uncertainty in LOO-CV estimation.
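
For a normal model with known variance, the LOO log predictive densities are available in closed form. The hedged sketch below computes the elpd and the usual naive variance estimate, which is the generally biased quantity the paper's problem-specific estimator improves upon; the unbiased correction itself is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def loo_elpd_normal(y, sigma):
    """Exact LOO log predictive densities for a normal model with known
    variance sigma^2 and a flat prior on the mean: leaving out y_i, the
    posterior predictive is N(mean(y_{-i}), sigma^2 * (1 + 1/(n-1)))."""
    n = len(y)
    loo_mean = (y.sum() - y) / (n - 1)
    pred_sd = sigma * np.sqrt(1.0 + 1.0 / (n - 1))
    lpd = norm.logpdf(y, loc=loo_mean, scale=pred_sd)
    naive_var = n * np.var(lpd, ddof=1)   # the usual, generally biased, estimate
    return lpd.sum(), naive_var

rng = np.random.default_rng(0)
y = rng.normal(0.3, 1.0, size=50)
print(loo_elpd_normal(y, sigma=1.0))
```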

Estimating individualized treatment rules - particularly in the context of right-censored outcomes - is challenging because the treatment effect heterogeneity of interest is often small and thus difficult to detect. While this motivates the use of very large datasets such as those from multiple health systems or centres, data privacy may be a concern, with participating data centres reluctant to share individual-level data. In this case study on the treatment of depression, we demonstrate an application of distributed regression for privacy protection, used in combination with dynamic weighted survival modelling (DWSurv), to estimate an optimal individualized treatment rule whilst obscuring individual-level data. In simulations, we demonstrate the flexibility of this approach to address local treatment practices that may affect confounding, and show that DWSurv retains its double robustness even when performed through a (weighted) distributed regression approach. The work is motivated by, and illustrated with, an analysis of treatment for unipolar depression using the United Kingdom's Clinical Practice Research Datalink.
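
The privacy-preserving ingredient can be illustrated with ordinary least squares: each site shares only aggregated summaries, never individual records. The sketch below is a generic distributed (weighted) regression illustration, not the authors' code; per the abstract, the same pattern carries over to the weighted regressions used by DWSurv.

```python
import numpy as np

def site_summaries(X, y, w=None):
    """Per-site summaries for a (weighted) least-squares fit: only
    X^T W X and X^T W y leave the site, never individual records."""
    w = np.ones(len(y)) if w is None else w
    Xw = X * w[:, None]
    return Xw.T @ X, Xw.T @ y

def pooled_fit(summaries):
    """Combine the site-level summaries and solve the normal equations."""
    XtX = sum(s[0] for s in summaries)
    Xty = sum(s[1] for s in summaries)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0, 0.5])
sites = []
for _ in range(3):                        # three centres, no raw data shared
    X = rng.standard_normal((200, 3))
    y = X @ beta_true + rng.standard_normal(200)
    sites.append(site_summaries(X, y))
print(pooled_fit(sites))
```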

Implicit probabilistic models are defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
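
To fix ideas about what "implicit" means here, the toy simulator below produces samples by pushing Gaussian noise through a nonlinear map, so the induced density has no closed form. This illustrates the model class only; the paper's estimation method is not reproduced here.

```python
import numpy as np

def implicit_simulator(theta, n, rng):
    """An implicit model: samples are produced by pushing Gaussian noise
    through a nonlinear map, so the induced density has no closed form."""
    z = rng.standard_normal(n)
    return np.tanh(theta[0] * z) + theta[1] * z**2

rng = np.random.default_rng(0)
draws = implicit_simulator(np.array([1.5, 0.3]), 1000, rng)
print(draws.mean(), draws.std())   # sampling is easy; p(x | theta) is not
```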

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
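
A direct implementation of the mean-plus-variance criterion conveys the idea, with the caveat that the raw objective below is not convex in general; the paper's contribution is precisely a convex distributionally robust surrogate for this objective.

```python
import numpy as np
from scipy.optimize import minimize

def variance_regularized_erm(X, y, C=1.0):
    """Minimise mean logistic loss plus C * sqrt(Var/n). Labels y are
    in {-1, +1}; the small ridge inside the square root avoids a
    non-differentiable point at zero variance."""
    n = len(y)

    def objective(w):
        losses = np.logaddexp(0.0, -y * (X @ w))   # stable logistic loss
        return losses.mean() + C * np.sqrt(losses.var() / n + 1e-12)

    w0 = np.full(X.shape[1], 0.01)
    return minimize(objective, w0, method="BFGS").x

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))
w_true = np.array([1.0, -1.0, 0.5, 0.0])
y = np.where(X @ w_true + 0.5 * rng.standard_normal(300) > 0, 1.0, -1.0)
print(variance_regularized_erm(X, y, C=1.0))
```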
