
In various scientific fields, researchers are interested in exploring the relationship between some response variable Y and a vector of covariates X. Before a specific model for the dependence structure can be used, one must first check whether the conditional density function of Y given X belongs to a given parametric family. We propose a consistent bootstrap-based goodness-of-fit test for this purpose. The test statistic measures the difference between a nonparametric and a semi-parametric estimate of the marginal distribution function of Y. As its asymptotic null distribution is not distribution-free, a parametric bootstrap method is used to determine the critical value. A simulation study shows that, in some cases, the new method is more sensitive to deviations from the parametric model than other tests found in the literature. We also apply our method to real-world datasets.
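
As a concrete illustration, here is a minimal sketch of such a parametric-bootstrap goodness-of-fit test, assuming the candidate family is the homoscedastic Gaussian linear model Y | X ~ N(X'beta, sigma^2); the function names and the choice of a sup-distance statistic are illustrative, not the paper's exact construction.

```python
# A minimal sketch of a parametric-bootstrap goodness-of-fit test for a
# conditional model, comparing the empirical CDF of Y with the model-implied
# marginal CDF (1/n) sum_i F(y | X_i; theta_hat).  Names are illustrative.
import numpy as np
from scipy import stats

def fit_gaussian_linear(X, y):
    """MLE of (beta, sigma) in the working parametric family."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.sqrt(np.mean((y - X @ beta) ** 2))
    return beta, sigma

def test_statistic(X, y):
    """Sup distance between nonparametric and semi-parametric estimates
    of the marginal distribution function of Y."""
    beta, sigma = fit_gaussian_linear(X, y)
    grid = np.sort(y)
    emp_cdf = np.arange(1, len(y) + 1) / len(y)
    model_cdf = stats.norm.cdf((grid[:, None] - (X @ beta)[None, :]) / sigma).mean(axis=1)
    return np.max(np.abs(emp_cdf - model_cdf)), beta, sigma

def bootstrap_pvalue(X, y, n_boot=500, rng=np.random.default_rng(0)):
    t_obs, beta, sigma = test_statistic(X, y)
    t_star = np.empty(n_boot)
    for b in range(n_boot):
        # Resample responses from the *fitted* conditional model and refit
        # each time (parametric bootstrap).
        y_star = X @ beta + sigma * rng.standard_normal(len(y))
        t_star[b], _, _ = test_statistic(X, y_star)
    return np.mean(t_star >= t_obs)
```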

Related content

We consider the problem of causal inference based on observational data (or the related missing data problem) with a binary or discrete treatment variable. In that context, we study inference for the counterfactual density functions and contrasts thereof, which can provide more nuanced information than counterfactual means and the average treatment effect. We impose the shape-constraint of log-concavity, a type of unimodality constraint, on the counterfactual densities, and then develop doubly robust estimators of the log-concave counterfactual density based on augmented inverse-probability weighted pseudo-outcomes. We provide conditions under which the estimator is consistent in various global metrics. We also develop asymptotically valid pointwise confidence intervals for the counterfactual density functions and differences and ratios thereof, which serve as a building block for more comprehensive analyses of distributional differences. Finally, we present a method for using our estimator to implement density confidence bands.
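
The AIPW pseudo-outcome construction at the heart of the doubly robust approach can be sketched as follows for the counterfactual distribution function; the nuisance estimators `pi_hat` and `mu_hat` are assumed to be supplied, and the paper's log-concave projection step is not implemented here.

```python
# A minimal sketch of the augmented inverse-probability-weighted (AIPW) step
# for the counterfactual distribution function F_1(y) = P(Y(1) <= y), assuming
# nuisance estimates are given: pi_hat(x) for the propensity P(A=1|X=x) and
# mu_hat(y, x) for P(Y <= y | A=1, X=x).  The paper's log-concave projection
# would replace the raw, merely monotonized CDF returned below.
import numpy as np

def aipw_counterfactual_cdf(y_grid, Y, A, X, pi_hat, mu_hat):
    """Doubly robust estimate of F_1 on y_grid via AIPW pseudo-outcomes."""
    F = np.empty(len(y_grid))
    w = A / pi_hat(X)                      # inverse-probability weights
    for k, y in enumerate(y_grid):
        m = mu_hat(y, X)                   # outcome-regression term
        pseudo = w * ((Y <= y) - m) + m    # AIPW pseudo-outcome; mean is F_1(y)
        F[k] = np.mean(pseudo)
    return np.clip(np.maximum.accumulate(F), 0.0, 1.0)  # enforce a valid CDF
```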

Large-scale eigenvalue problems arise in various fields of science and engineering and demand computationally efficient solutions. In this study, we investigate the subspace approximation for parametric linear eigenvalue problems, aiming to mitigate the computational burden associated with high-fidelity systems. We provide general error estimates under non-simple eigenvalue conditions, establishing the theoretical foundations for our methodology. Numerical examples, ranging from one-dimensional to three-dimensional setups, demonstrate the efficacy of the reduced basis method in handling parametric variations in boundary conditions and coefficient fields; the method achieves significant computational savings while maintaining high accuracy, making it a promising tool for practical applications in large-scale eigenvalue computations.
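
A minimal sketch of the offline/online split of a reduced basis method for a parametric eigenvalue problem, using a toy 1D operator (a Laplacian plus a parametric potential) rather than the paper's benchmarks; all sizes and parameter ranges are placeholders.

```python
# Reduced basis method, sketched: collect eigenvector snapshots at training
# parameters (offline), compress them into an orthonormal basis V, then solve
# a small projected eigenproblem at new parameters (online).
import numpy as np
from scipy.linalg import eigh

def full_operator(mu, n=200):
    """1D finite-difference operator for -u'' + mu * x * u on (0, 1)."""
    x = np.linspace(0, 1, n + 2)[1:-1]          # interior grid points
    h = 1.0 / (n + 1)
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    return K + mu * np.diag(x)

# Offline: eigenvector snapshots over a training set of parameters.
k = 4                                            # eigenpairs of interest
snapshots = []
for mu in np.linspace(0.0, 5.0, 10):
    w, v = eigh(full_operator(mu))
    snapshots.append(v[:, :k])
V, _, _ = np.linalg.svd(np.hstack(snapshots), full_matrices=False)
V = V[:, :20]                                    # orthonormal reduced basis

# Online: project and solve a small dense eigenproblem at a new parameter.
mu_new = 2.7
w_rb = eigh(V.T @ full_operator(mu_new) @ V, eigvals_only=True)[:k]
w_hf = eigh(full_operator(mu_new), eigvals_only=True)[:k]
print("reduced-basis vs high-fidelity eigenvalues:", w_rb, w_hf)
```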

An extremely schematic model of the forces acting on a sailing yacht equipped with a system of foils is presented and discussed here. The role of the foils is to raise the hull from the water, thereby reducing the total resistance and increasing the speed. CFD simulations provide the total resistance of the bare hull at several values of speed and displacement, as well as the characteristics (drag and lift coefficients) of the 2D foil sections used for the appendages. A parametric study has been performed to characterize a foil of finite dimensions. The equilibrium of the vertical forces and longitudinal moments, together with a reduced displacement, is obtained by controlling the pitch angle of the foils. The total resistance of the yacht with foils is then compared with the case without foils, identifying the speed regime, if any, in which an advantage is obtained.
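
The vertical-force balance described above can be sketched as a one-dimensional trim problem: find the foil pitch angle at which the foils carry a prescribed fraction of the weight, the rest being supported by the reduced hull displacement. The linear lift model, all numerical values, and the omission of the longitudinal moment equation are simplifications of my own, not the paper's data.

```python
# A minimal sketch of the vertical-force equilibrium used to trim the foils,
# assuming a finite-span-corrected linear lift slope.  Numbers are placeholders.
import numpy as np
from scipy.optimize import brentq

RHO = 1025.0          # sea-water density [kg/m^3]
G = 9.81
AREA = 1.0            # total foil planform area [m^2]
AR = 6.0              # foil aspect ratio
MASS = 2000.0         # yacht displacement mass [kg]

def lift(alpha, speed):
    """Foil lift [N] with a finite-span-corrected linear lift slope."""
    cl = 2.0 * np.pi * alpha / (1.0 + 2.0 / AR)
    return 0.5 * RHO * speed ** 2 * AREA * cl

def trim_pitch(speed, lift_fraction=0.6):
    """Pitch angle [rad] at which the foils carry lift_fraction of the
    weight; the remainder is supported by the reduced hull displacement."""
    target = lift_fraction * MASS * G
    return brentq(lambda a: lift(a, speed) - target, 0.0, np.deg2rad(12.0))

for v in (6.0, 8.0, 10.0):   # boat speed [m/s]
    print(f"speed {v} m/s -> pitch {np.rad2deg(trim_pitch(v)):.2f} deg")
```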

This manuscript describes the notions of blocker and interdiction applied to well-known optimization problems. The main interest of these two concepts is the ability to analyze whether a combinatorial structure survives some modifications, such as removing vertices or links in a network. In the interdiction version, we are given a modification budget and aim to reduce the size of a given combinatorial structure as much as possible. In the blocker version, by contrast, we minimize the number of modifications needed so that the network no longer contains a given combinatorial structure. We consider matching, connectivity, shortest path, max flow, and clique problems, and for each of them we analyze either the blocker version or the interdiction one. Applying the concept of blocker or interdiction to a well-known optimization problem can change its complexity: some problems become harder. We therefore provide complexity analyses showing when an optimization problem, or its associated decision problem, becomes harder. Another fundamental aspect developed in the manuscript is the use of exact methods to tackle these problems; see the sketch below for a toy illustration of the blocker notion. Our main approach is to model them as integer linear programs, which makes it possible to analyze the strength of the models theoretically using cutting planes. For most of the problems studied in this manuscript, a polyhedral analysis is performed to prove the strength of inequalities or to describe new families of inequalities. The exact algorithms proposed are based on Branch-and-Cut or Branch-and-Price, with dedicated separation and pricing algorithms.
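
To make the blocker notion concrete, here is a deliberately naive brute-force sketch for the edge blocker of perfect matching (the fewest edge deletions after which no perfect matching remains). It only scales to tiny graphs; the ILP-based Branch-and-Cut / Branch-and-Price machinery of the manuscript is what handles realistic instances.

```python
# Brute-force edge blocker for perfect matching: try all edge subsets in
# increasing size and return the first removal that destroys every perfect
# matching.  Exponential, for illustration only.
import itertools
import networkx as nx

def min_edge_blocker_perfect_matching(G):
    edges = list(G.edges)
    for size in range(len(edges) + 1):
        for removed in itertools.combinations(edges, size):
            H = G.copy()
            H.remove_edges_from(removed)
            # max_weight_matching with maxcardinality gives a maximum matching.
            M = nx.max_weight_matching(H, maxcardinality=True)
            if 2 * len(M) < H.number_of_nodes():   # no perfect matching left
                return removed
    return None

G = nx.cycle_graph(6)                 # even cycle: admits a perfect matching
print(min_edge_blocker_perfect_matching(G))   # two deletions suffice here
```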

There are concerns about the fairness of clinical prediction models. 'Fair' models are those whose performance or predictions are not inappropriately influenced by protected attributes such as ethnicity, gender, or socio-economic status. Researchers have raised concerns that current algorithmic fairness paradigms enforce strict egalitarianism in healthcare, levelling down the performance of models in higher-performing subgroups instead of improving it in lower-performing ones. We propose assessing the fairness of a prediction model by expanding the concept of net benefit, using it to quantify and compare the clinical impact of a model in different subgroups. We use this to explore how a model distributes benefit across a population, its impact on health inequalities, and its role in the achievement of health equity. We show how resource constraints might introduce necessary trade-offs between health equity and other objectives of healthcare systems. We showcase our proposed approach with the development of two clinical prediction models: 1) a prognostic type 2 diabetes model used by clinicians to enrol patients into a preventive care lifestyle intervention programme, and 2) a lung cancer screening algorithm used to allocate diagnostic scans across the population. This approach helps modellers better understand whether a model upholds health equity by considering its performance in a clinical and social context.
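
Subgroup-wise net benefit, the quantity the abstract builds on, has a standard closed form, sketched below; the function names and the example threshold are illustrative.

```python
# Net benefit at decision threshold p_t:  NB = TP/n - FP/n * p_t / (1 - p_t).
# Computing it per protected subgroup compares the clinical impact of a
# model across subgroups, as the proposed fairness assessment requires.
import numpy as np

def net_benefit(y_true, risk, p_t):
    treat = risk >= p_t                      # model-guided treatment decision
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * p_t / (1.0 - p_t)

def net_benefit_by_group(y_true, risk, group, p_t=0.1):
    """Clinical impact of the model within each protected subgroup."""
    return {g: net_benefit(y_true[group == g], risk[group == g], p_t)
            for g in np.unique(group)}
```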

Weighting with the inverse probability of censoring is an approach to dealing with censoring in regression analyses where the outcome may be missing due to right-censoring. In this paper, three separate approaches built on this idea are compared in a setting where the Kaplan--Meier estimator is used to estimate the censoring probability: weighted regression, regression with a weighted outcome, and regression of a jack-knife pseudo-observation based on a weighted estimator. Expressions for the asymptotic variances are given in each case and compared to each other and to the uncensored case. No approach has uniformly lowest asymptotic variance; which one does depends on the censoring distribution. Expressions for the limit of the standard sandwich variance estimator in the three cases are also provided, revealing that it overestimates the variance under the implied assumptions.
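
A minimal sketch of the first approach, IPCW weighted regression: the censoring survival function G is estimated by Kaplan--Meier with the roles of event and censoring swapped, and each uncensored subject is weighted by 1 / G_hat(T_i-). Distinct event times and a log-linear outcome model are simplifying assumptions of the sketch.

```python
# Inverse probability of censoring weighting (IPCW) with a Kaplan-Meier
# estimate of the censoring distribution, followed by weighted least squares.
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of G(t) = P(C > t); assumes distinct times."""
    order = np.argsort(time)
    t, d = time[order], 1 - event[order]     # d = 1 marks a censoring "event"
    at_risk = len(t) - np.arange(len(t))
    return t, np.cumprod(1.0 - d / at_risk)

def ipcw_weights(time, event):
    t_sorted, surv = km_censoring_survival(time, event)
    # G_hat just before each subject's own time (left-continuous version).
    G_minus = np.array([surv[t_sorted < ti][-1] if np.any(t_sorted < ti) else 1.0
                        for ti in time])
    return event / G_minus                   # censored subjects get weight zero

def ipcw_linear_regression(X, time, event, transform=np.log):
    w = ipcw_weights(time, event)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * transform(time), rcond=None)
    return beta
```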

This paper is devoted to the stochastic approximation of entropically regularized Wasserstein distances between two probability measures, also known as Sinkhorn divergences. The semi-dual formulation of such regularized optimal transportation problems can be rewritten as a non-strongly concave optimization problem, which allows one to implement a Robbins-Monro stochastic algorithm that estimates the Sinkhorn divergence from a sequence of data sampled from one of the two distributions. Our main contribution is to establish the almost sure convergence and the asymptotic normality of a new recursive estimator of the Sinkhorn divergence between two probability measures in the discrete and semi-discrete settings. We also study the rate of convergence of the expected excess risk of this estimator in the absence of strong concavity of the objective function. Numerical experiments on synthetic and real datasets illustrate the usefulness of our approach for data analysis.
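
A minimal sketch of the Robbins-Monro recursion on the semi-dual, for a discrete target measure nu supported on points y with squared-Euclidean cost; one sample from mu is consumed per iteration. Averaging the semi-dual integrand along the iterates would then estimate the divergence itself. Step-size constants and the toy measures are placeholders.

```python
# Stochastic semi-dual entropic OT: the gradient of the semi-dual in the
# potential v is nu - p(x, v), where p is a softmax over the support of nu;
# the recursion v <- v + gamma_n * (nu - p) is the Robbins-Monro update.
import numpy as np

def sinkhorn_semidual_sgd(sample_mu, y, nu, eps=0.1, gamma=1.0, a=0.75,
                          n_iter=50_000, rng=np.random.default_rng(0)):
    v = np.zeros(len(y))                   # dual potential on the support of nu
    for n in range(1, n_iter + 1):
        x = sample_mu(rng)
        cost = np.sum((y - x) ** 2, axis=1)
        logits = (v - cost) / eps + np.log(nu)
        logits -= logits.max()             # numerically stabilized softmax
        p = np.exp(logits)
        p /= p.sum()
        v += gamma / n ** a * (nu - p)     # unbiased stochastic gradient step
    return v

# Toy usage: mu = N(0, I_2), nu = uniform on 5 random support points.
y = np.random.default_rng(1).normal(size=(5, 2))
nu = np.full(5, 0.2)
v_hat = sinkhorn_semidual_sgd(lambda rng: rng.normal(size=2), y, nu)
```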

Generalized linear models are a popular tool in applied statistics, with their maximum likelihood estimators enjoying asymptotic Gaussianity and efficiency. As all models are wrong, it is desirable to understand these estimators' behaviours under model misspecification. We study semiparametric multilevel generalized linear models, where only the conditional mean of the response is taken to follow a specific parametric form. Pre-existing estimators from mixed effects models and generalized estimating equations require specification of a conditional covariance, which when misspecified can result in inefficient estimates of the fixed effects parameters. It is nevertheless often computationally attractive to consider a restricted, finite-dimensional class of estimators, such as the one these models naturally imply. We introduce sandwich regression, which selects the estimator of minimal variance within a parametric class of estimators over all distributions in the full semiparametric model. We demonstrate numerically, on simulated and real data, the attractive improvements our sandwich regression approach enjoys over classical mixed effects models and generalized estimating equations.
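
One plausible reading of the idea can be sketched as follows: within a working-covariance class (here, exchangeable correlation indexed by rho), estimate each candidate's sandwich variance and keep the estimator whose estimated variance is smallest. This is my simplified interpretation for a clustered linear mean model, not the paper's general procedure, and all names are illustrative.

```python
# GLS estimators under exchangeable working correlation, each paired with its
# cluster-robust (sandwich) covariance estimate; the candidate with minimal
# estimated total fixed-effects variance is selected.
import numpy as np

def gee_beta_and_sandwich(X_list, y_list, rho):
    """GLS estimate of beta plus its sandwich covariance A^{-1} B A^{-1}."""
    A, rhs, Vinvs = 0.0, 0.0, []
    for X, y in zip(X_list, y_list):
        m = len(y)
        V = (1 - rho) * np.eye(m) + rho * np.ones((m, m))
        Vi = np.linalg.inv(V)
        Vinvs.append(Vi)
        A = A + X.T @ Vi @ X
        rhs = rhs + X.T @ Vi @ y
    beta = np.linalg.solve(A, rhs)
    B = 0.0
    for (X, y), Vi in zip(zip(X_list, y_list), Vinvs):
        s = X.T @ Vi @ (y - X @ beta)      # cluster score contribution
        B = B + np.outer(s, s)
    Ainv = np.linalg.inv(A)
    return beta, Ainv @ B @ Ainv

def sandwich_regression(X_list, y_list, rhos=np.linspace(0.0, 0.9, 10)):
    fits = [(rho, *gee_beta_and_sandwich(X_list, y_list, rho)) for rho in rhos]
    # Keep the working correlation with the smallest estimated variance.
    return min(fits, key=lambda f: np.trace(f[2]))
```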

We consider the problem of estimating the error when solving a system of differential algebraic equations. Richardson extrapolation is a classical technique that can be used to judge when computational errors are irrelevant and to estimate the discretization error; we derive and illustrate it using a variety of numerical experiments. We have simulated molecular dynamics with constraints using the GROMACS library and found that the output is not always amenable to Richardson extrapolation, and we identify two necessary conditions that are not always satisfied by the GROMACS library.
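
Richardson extrapolation itself is easy to state: for a method of order p, solutions at step sizes h and h/2 yield the discretization-error estimate (y_h - y_{h/2}) / (2^p - 1) for y_{h/2}. The sketch below illustrates this on a velocity-Verlet integrator (p = 2) for a harmonic oscillator; the GROMACS runs in the text play the role of `integrate` here.

```python
# Richardson extrapolation for error estimation, demonstrated on a toy
# second-order integrator where the exact solution x(t) = cos(t) is known.
import numpy as np

def integrate(h, t_end=1.0):
    """Velocity Verlet for x'' = -x, x(0) = 1, x'(0) = 0; returns x(t_end)."""
    n = int(round(t_end / h))
    x, v = 1.0, 0.0
    for _ in range(n):
        v_half = v - 0.5 * h * x
        x = x + h * v_half
        v = v_half - 0.5 * h * x
    return x

p = 2                                        # order of the method
for h in (0.1, 0.05, 0.025):
    y_h, y_h2 = integrate(h), integrate(h / 2)
    est = (y_h - y_h2) / (2 ** p - 1)        # estimated error of y_{h/2}
    true_err = y_h2 - np.cos(1.0)
    print(f"h={h:6.3f}  estimate={est: .2e}  true error={true_err: .2e}")
```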

We develop graph-based tests for spherical symmetry of a multivariate distribution using a method based on data augmentation. These tests are constructed from a new notion of signs and ranks computed along a path obtained by optimizing an objective function based on pairwise dissimilarities among the observations in the augmented data set. The resulting tests have the exact distribution-free property: irrespective of the dimension of the data, the null distributions of the test statistics remain the same. They can be conveniently used for high-dimensional data, even when the dimension is much larger than the sample size. Under appropriate regularity conditions, we prove the consistency of these tests in the high-dimensional asymptotic regime, where the dimension grows to infinity while the sample size may or may not grow with it. We also propose a generalization of our methods to handle situations where the center of symmetry is not specified by the null hypothesis. Several simulated data sets and a real data set are analyzed to demonstrate the utility of the proposed tests.
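
To convey the flavor of such augmentation-based, distribution-free constructions, here is a heavily simplified sketch that is not the paper's procedure: each observation is paired with a Haar-randomly rotated copy (equal in law under spherical symmetry about the origin), a cheap greedy path through the augmented data stands in for the optimized path, and the number of label runs along the path is calibrated exactly by re-randomizing labels within pairs.

```python
# A toy path-based test of spherical symmetry about the origin via data
# augmentation.  Dimension-free by construction: only pairwise distances
# and within-pair label swaps enter the calibration.
import numpy as np

def haar_rotation(d, rng):
    """Haar-distributed random orthogonal matrix via QR with sign fix."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))

def greedy_path(Z):
    """Greedy nearest-neighbour ordering of the rows of Z."""
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
    order = [0]
    D[:, 0] = np.inf
    while len(order) < len(Z):
        nxt = int(np.argmin(D[order[-1]]))
        order.append(nxt)
        D[:, nxt] = np.inf
    return np.array(order)

def runs(lab):
    return 1 + int(np.sum(lab[1:] != lab[:-1]))

def symmetry_test(X, n_flips=999, rng=np.random.default_rng(0)):
    n, d = X.shape
    X_rot = np.array([haar_rotation(d, rng) @ x for x in X])   # augmentation
    order = greedy_path(np.vstack([X, X_rot]))
    base = np.r_[np.ones(n), np.zeros(n)].astype(bool)
    t_obs = runs(base[order])
    t_null = np.empty(n_flips)
    for b in range(n_flips):
        flip = rng.random(n) < 0.5        # swap labels within each pair
        lab = base.copy()
        lab[:n] ^= flip
        lab[n:] ^= flip
        t_null[b] = runs(lab[order])
    # Few runs = originals and rotated copies separate along the path.
    return (1 + np.sum(t_null <= t_obs)) / (n_flips + 1)
```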
