
Invariant coordinate selection (ICS) is an unsupervised multivariate data transformation useful in many contexts, such as outlier detection or clustering. It is based on the simultaneous diagonalization of two affine equivariant and positive definite scatter matrices. Its classical implementation relies on a non-symmetric eigenvalue problem (EVP), diagonalizing one scatter matrix relative to the other. In the case of collinearity, at least one of the scatter matrices is singular and the problem cannot be solved. To address this limitation, three approaches are proposed, based on: a Moore-Penrose pseudo-inverse (GINV), a dimension reduction (DR), and a generalized singular value decomposition (GSVD). Their properties are investigated theoretically and in different empirical applications. Overall, the extension based on the GSVD seems the most promising, even though it restricts the choice to scatter matrices that can be expressed as cross-products. In practice, some of the approaches also look suitable in the context of high-dimension low-sample-size (HDLSS) data.
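
As a minimal illustration of the GINV extension, the sketch below replaces the matrix inverse in the classical EVP with a Moore-Penrose pseudo-inverse, using the common scatter pair (covariance, fourth-moment scatter). The scaling of Cov4 and the simulated collinear data are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of ICS with the pseudo-inverse (GINV) extension.
import numpy as np

def cov4(X, S_inv):
    """Fourth-moment scatter (classical 1/(p+2) scaling; rank issues ignored here)."""
    Xc = X - X.mean(axis=0)
    d2 = np.einsum("ij,jk,ik->i", Xc, S_inv, Xc)   # squared Mahalanobis distances
    return (Xc * d2[:, None]).T @ Xc / (X.shape[0] * (X.shape[1] + 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X = np.hstack([X, X[:, :1]])        # duplicate a column -> collinearity, singular Cov

S1 = np.cov(X, rowvar=False)
S1_pinv = np.linalg.pinv(S1)        # Moore-Penrose pseudo-inverse replaces inv()
S2 = cov4(X, S1_pinv)

# Non-symmetric EVP: eigendecompose pinv(S1) @ S2 instead of inv(S1) @ S2.
eigvals, B = np.linalg.eig(S1_pinv @ S2)
order = np.argsort(eigvals.real)[::-1]
scores = (X - X.mean(axis=0)) @ B[:, order].real   # invariant coordinates
```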

Related content

We introduce an algebraic concept of a frame for abstract conditional independence (CI) models, together with the basic operations with respect to which such a frame should be closed: copying and marginalization. Three standard examples of such frames are (discrete) probabilistic CI structures, semi-graphoids, and structural semi-graphoids. We concentrate on those frames which are closed under the operation of set-theoretical intersection because, for these, the respective families of CI models are lattices. This allows one to apply results from lattice theory and formal concept analysis to describe such families in terms of implications among CI statements. The central concept of this paper is that of self-adhesivity, defined in algebraic terms, which is a combinatorial reflection of the self-adhesivity concept studied earlier in the context of polymatroids and information theory. The generalization also leads to a self-adhesivity operator defined on the hyper-level of CI frames. We answer some of the questions related to this approach and raise other open questions. The core of the paper lies in computations. The combinatorial approach to computation might overcome some memory and space limitations of software packages based on polyhedral geometry, in particular if SAT solvers are utilized. We characterize some basic CI families over 4 variables in terms of canonical implications among CI statements. We apply our method in an information-theoretic context to the task of entropic region demarcation over 5 variables.
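
This is not the paper's SAT-based machinery, but a toy sketch of the combinatorial setting: closing a set of CI statements <A,B|C> under the semi-graphoid axioms (symmetry, decomposition, weak union, contraction) over a small ground set. Statements are triples of pairwise disjoint frozensets, an assumed encoding.

```python
# Toy semi-graphoid closure of CI statements <A,B|C> by fixpoint iteration.
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def semigraphoid_closure(stmts):
    closed = set()
    frontier = {(frozenset(a), frozenset(b), frozenset(c)) for a, b, c in stmts}
    while frontier:
        closed |= frontier
        new = set()
        for (a, b, c) in closed:
            new.add((b, a, c))                          # symmetry
            for bp in subsets(b):
                if bp and bp != b:
                    new.add((a, bp, c))                 # decomposition
                    new.add((a, bp, c | (b - bp)))      # weak union
        for (a1, b1, c1) in closed:
            for (a2, b2, c2) in closed:
                # contraction: <A,B|C u D> and <A,D|C> imply <A,B u D|C>
                if a1 == a2 and c1 == c2 | b2 and not (b1 & b2):
                    new.add((a1, b1 | b2, c2))
        frontier = new - closed
    return closed

# Example over {1,2,3,4}: the closure of <1,{2,3}|4> contains <1,2|4>, <1,2|{3,4}>, ...
cl = semigraphoid_closure([({1}, {2, 3}, {4})])
```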

Modelling multivariate spatio-temporal data with complex dependency structures is a challenging task, but it can be simplified by assuming that the original variables are generated from independent latent components. If these components are found, they can be modelled univariately. Blind source separation aims to recover the latent components by estimating the unmixing transformation based on the observed data only. Current methods for spatio-temporal blind source separation are restricted to linear unmixing, and nonlinear variants have not been implemented. In this paper, we extend the identifiable variational autoencoder to the nonlinear nonstationary spatio-temporal blind source separation setting and demonstrate its performance using comprehensive simulation studies. Additionally, we introduce two alternative methods for latent dimension estimation, a crucial task for obtaining the correct latent representation. Finally, we illustrate the proposed methods using a meteorological application, where we estimate the latent dimension and the latent components, interpret the components, and show how nonstationarity can be accounted for and prediction accuracy improved by using the proposed nonlinear blind source separation method as a preprocessing step.
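
The sketch below is not the proposed identifiable-VAE method; it shows the linear blind source separation baseline that the paper generalizes, using FastICA on simulated sources as a point of reference.

```python
# Linear BSS baseline: recover independent sources from a linear mixture.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
S = np.c_[np.sin(3 * t),                        # three latent source signals
          np.sign(np.cos(2 * t)),
          rng.laplace(size=t.size)]
A = rng.standard_normal((3, 3))                 # unknown linear mixing matrix
X = S @ A.T                                     # observed multivariate series

ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)                    # recovered up to sign/scale/order
```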

Split conformal prediction techniques are applied to regression problems with circular responses by introducing a suitable conformity score, leading to prediction sets with adaptive arc length and finite-sample coverage guarantees for any circular predictive model under exchangeable data. Leveraging the high performance of existing predictive models designed for linear responses, we analyze a general projection procedure that converts any linear response regression model into one suitable for circular responses. When random forests serve as base models in this projection procedure, we harness the out-of-bag dynamics to eliminate the need for a separate calibration sample in the construction of prediction sets. For synthetic and real datasets, the resulting projected random forests model produces more efficient out-of-bag conformal prediction sets, with shorter median arc length, than the split conformal prediction sets generated by two existing alternative models.
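
Here is a hedged sketch of the two main ingredients, using split calibration rather than the paper's out-of-bag construction: a random forest projected to angles via (cos, sin) components, and an arc-shaped prediction set calibrated with an angular-distance conformity score (an assumed, natural choice).

```python
# Split conformal prediction arcs for a circular response via projection.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ang_dist(a, b):
    """Absolute angular distance on the circle, in [0, pi]."""
    return np.abs(np.angle(np.exp(1j * (a - b))))

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(600, 3))
theta = np.mod(2 * X[:, 0] + 0.3 * rng.standard_normal(600), 2 * np.pi)

tr, cal, te = np.split(rng.permutation(600), [300, 500])
rf_c = RandomForestRegressor(random_state=0).fit(X[tr], np.cos(theta[tr]))
rf_s = RandomForestRegressor(random_state=0).fit(X[tr], np.sin(theta[tr]))
pred = lambda Z: np.arctan2(rf_s.predict(Z), rf_c.predict(Z))  # projected angle

alpha = 0.1
scores = ang_dist(theta[cal], pred(X[cal]))
q = np.quantile(scores, np.ceil((1 - alpha) * (len(cal) + 1)) / len(cal))
arcs = np.c_[pred(X[te]) - q, pred(X[te]) + q]  # 90% prediction arcs (mod 2*pi)
```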

Many economic panel and dynamic models, such as those based on rational behavior and Euler equations, imply that the parameters of interest are identified by conditional moment restrictions. We introduce a novel inference method that requires no prior information about which conditioning instruments are weak or irrelevant. Building on Bierens (1990), we propose penalized maximum statistics and combine bootstrap inference with model selection. Our method optimizes asymptotic power by solving a data-dependent max-min problem for tuning parameter selection. Extensive Monte Carlo experiments, based on an empirical example, demonstrate the extent to which our inference procedure is superior to those available in the literature.
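
For orientation, here is a schematic Bierens-type maximum statistic with a wild bootstrap; the exponential instrument grid and the unpenalized max form are illustrative assumptions, not the paper's penalized procedure.

```python
# Bierens-type max statistic over exponential instruments, wild bootstrap p-value.
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.standard_normal(n)
y = 1.0 + rng.standard_normal(n)                 # H0: E[y - mu | x] = 0

u = y - y.mean()                                 # restricted residuals under H0
t_grid = np.linspace(-2, 2, 41)                  # instruments w(x, t) = exp(t * x)
W = np.exp(np.outer(x, t_grid))                  # n x T instrument matrix

T_stat = np.max(np.abs(W.T @ u) / np.sqrt(n))    # max over the instrument grid

B = 499
T_boot = np.empty(B)
for b in range(B):
    e = rng.choice([-1.0, 1.0], size=n)          # Rademacher multiplier weights
    T_boot[b] = np.max(np.abs(W.T @ (u * e)) / np.sqrt(n))
p_value = (1 + np.sum(T_boot >= T_stat)) / (B + 1)
```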

The problem of regression extrapolation, or out-of-distribution generalization, arises when predictions are required at test points outside the range of the training data. In such cases, the non-parametric guarantees for regression methods from both statistics and machine learning typically fail. Based on the theory of tail dependence, we propose a novel statistical extrapolation principle. After a suitable, data-adaptive marginal transformation, it assumes a simple relationship between predictors and the response at the boundary of the training predictor samples. This assumption holds for a wide range of models, including non-parametric regression functions with additive noise. Our semi-parametric method, progression, leverages this extrapolation principle and offers guarantees on the approximation error beyond the training data range. We demonstrate how this principle can be effectively integrated with existing approaches, such as random forests and additive models, to improve extrapolation performance on out-of-distribution samples.
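
A toy demonstration of the motivating failure mode: a standard random forest cannot extrapolate, so its predictions plateau outside the training range, which is exactly what an extrapolation principle must remedy.

```python
# Random forest predictions flatten beyond the training range of x.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
x_tr = rng.uniform(0, 5, size=(500, 1))
y_tr = 2.0 * x_tr[:, 0] + rng.standard_normal(500)   # linear signal on [0, 5]

rf = RandomForestRegressor(random_state=0).fit(x_tr, y_tr)
x_te = np.linspace(0, 10, 11)[:, None]               # second half lies outside [0, 5]
print(np.c_[x_te, rf.predict(x_te)])                 # predictions plateau for x > 5
```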

Radial basis functions (RBFs) play an important role in function interpolation, in particular on arbitrary sets of interpolation nodes. The accuracy of the interpolation depends on a parameter called the shape parameter. There are many approaches in the literature on how to choose it appropriately so as to increase the accuracy of interpolation while avoiding instability issues. However, finding the optimal shape parameter value in general remains a challenge. In this work, we present a novel approach to determining the shape parameter in RBFs. First, we construct an optimisation problem to obtain a shape parameter that leads to an interpolation matrix with a bounded condition number. Then, we introduce a data-driven method that controls the condition of the interpolation matrix to avoid numerically unstable interpolations while maintaining very good accuracy. In addition, a fall-back procedure is proposed to enforce a strict upper bound on the condition number, as well as a learning strategy that improves the performance of the data-driven method by learning from previously run simulations. We present numerical test cases to assess the performance of the proposed methods in interpolation tasks and in an RBF-based finite difference (RBF-FD) method, in one and two space dimensions.
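
A minimal sketch of the core idea of controlling the condition number through the shape parameter, here via a simple bisection on a Gaussian RBF matrix in one dimension. The target condition-number window is an illustrative assumption; the paper's data-driven and learning strategies go well beyond this.

```python
# Tune the Gaussian RBF shape parameter eps so that cond(A) lies in a target window.
import numpy as np

def rbf_matrix(nodes, eps):
    """Gaussian RBF interpolation matrix A_ij = exp(-(eps * |x_i - x_j|)^2)."""
    d = np.abs(nodes[:, None] - nodes[None, :])
    return np.exp(-(eps * d) ** 2)

nodes = np.sort(np.random.default_rng(5).uniform(0, 1, 20))
kappa_lo, kappa_hi = 1e10, 1e12                 # illustrative condition-number window
eps_lo, eps_hi = 0.01, 100.0                    # cond(A) decreases as eps grows

for _ in range(60):                             # bisection in log scale
    eps = np.sqrt(eps_lo * eps_hi)              # geometric midpoint
    kappa = np.linalg.cond(rbf_matrix(nodes, eps))
    if kappa > kappa_hi:
        eps_lo = eps                            # too ill-conditioned: sharpen the basis
    elif kappa < kappa_lo:
        eps_hi = eps                            # too well-conditioned: accuracy suffers
    else:
        break
```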

We derive an implicit description of the image of a semialgebraic set under a birational map, provided that the denominators of the map are positive on the set. For statistical models which are globally rationally identifiable, this yields model-defining constraints which facilitate model membership testing, representation learning, and model equivalence tests. Many examples illustrate the applicability of our results. The implicit equations recover well-known Markov properties of classical graphical models, as well as other well-studied equations such as the Verma constraint. They also provide Markov properties for generalizations of these frameworks, such as colored or interventional graphical models, staged trees, and the recently introduced Lyapunov models. Under a further mild assumption, we show that our implicit equations generate the vanishing ideal of the model up to a saturation, generalizing previous results of Geiger, Meek and Sturmfels; Duarte and Görgen; Sullivant; and others.
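
A small worked instance of the implicitization task, assuming sympy: eliminating the parameters of the 2x2 independence model p_ij = a_i * b_j recovers its well-known model-defining constraint, the vanishing of the 2x2 determinant.

```python
# Implicitization by elimination: Groebner basis with parameters ordered first.
import sympy as sp

a, b, p11, p12, p21, p22 = sp.symbols("a b p11 p12 p21 p22")
eqs = [p11 - a * b, p12 - a * (1 - b),
       p21 - (1 - a) * b, p22 - (1 - a) * (1 - b)]

# Lex order with a, b first eliminates the parameters from the ideal.
G = sp.groebner(eqs, a, b, p11, p12, p21, p22, order="lex")
implicit = [g for g in G.exprs if not g.has(a, b)]
print(implicit)   # contains p11 + p12 + p21 + p22 - 1 and p11*p22 - p12*p21
```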

By filling in missing values in datasets, imputation allows these datasets to be used with algorithms that cannot handle missing values by themselves. However, missing values may in principle contribute useful information that is lost through imputation. The missing-indicator approach can be used in combination with imputation to instead represent this information as a part of the dataset. There are several theoretical considerations for why missing-indicators may or may not be beneficial, but there has not been any large-scale practical experiment on real-life datasets to test this question for machine learning predictions. We perform this experiment for three imputation strategies and a range of different classification algorithms, on the basis of twenty real-life datasets. In a first follow-up experiment, we determine attribute-specific missingness thresholds for each classifier above which missing-indicators are more likely than not to increase classification performance. In a second follow-up experiment, we evaluate numerical imputation of one-hot encoded categorical attributes. We reach the following conclusions. Firstly, missing-indicators generally increase classification performance. Secondly, with missing-indicators, nearest neighbour and iterative imputation do not lead to better performance than simple mean/mode imputation. Thirdly, for decision trees, pruning is necessary to prevent overfitting. Fourthly, the thresholds above which missing-indicators are more likely than not to improve performance are lower for categorical attributes than for numerical attributes. Lastly, mean imputation of numerical attributes preserves some of the information from missing values. Consequently, when not using missing-indicators, it can be advantageous to apply mean imputation to one-hot encoded categorical attributes instead of mode imputation.
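
The missing-indicator approach is directly available in scikit-learn, where SimpleImputer's add_indicator=True appends one binary indicator column per attribute containing missing values:

```python
# Mean imputation combined with missing-indicators via scikit-learn.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan],
              [2.0, 4.0],
              [np.nan, 6.0]])
imp = SimpleImputer(strategy="mean", add_indicator=True)
print(imp.fit_transform(X))
# Output: mean-imputed values in the first two columns,
# followed by one 0/1 indicator column per attribute with missing values.
```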

Although quantile regression has emerged as a powerful tool for understanding various quantiles of a response variable conditioned on a set of covariates, the development of quantile regression for count responses has received far less attention. This paper proposes a new Bayesian approach to quantile regression for count data, which provides a more flexible and interpretable alternative to existing approaches. The proposed approach associates a continuous latent variable with the discrete response and nonparametrically estimates the joint distribution of the latent variable and a set of covariates. Then, by regressing the estimated continuous conditional quantile on the covariates, the posterior distributions of the covariate effects on the conditional quantiles are obtained through general Bayesian updating via simple optimization. A simulation study and real data analysis demonstrate that the proposed method overcomes the limitations of existing approaches and enhances quantile estimation and the interpretation of variable relationships, making it a valuable tool for practitioners handling count data.
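
This is not the paper's Bayesian procedure, but a point of comparison: the classical frequentist baseline links counts to a continuous latent variable by jittering (Machado and Santos Silva, 2005) and then applies ordinary quantile regression.

```python
# Count quantile regression via jittering: z = y + U[0,1) is continuous.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = rng.uniform(0, 2, 400)
y = rng.poisson(np.exp(0.5 + 0.7 * x))           # count response

z = y + rng.uniform(size=y.size)                 # continuous latent variable
X = sm.add_constant(x)
fit = sm.QuantReg(z, X).fit(q=0.5)               # median regression on latent scale
print(fit.params)
```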

In various scientific fields, researchers are interested in exploring the relationship between a response variable Y and a vector of covariates X. Before a specific model for the dependence structure can be used, it first has to be checked whether the conditional density function of Y given X fits into a given parametric family. We propose a consistent bootstrap-based goodness-of-fit test for this purpose. The test statistic measures the difference between a nonparametric and a semi-parametric estimate of the marginal distribution function of Y. As its asymptotic null distribution is not distribution-free, a parametric bootstrap method is used to determine the critical value. A simulation study shows that, in some cases, the new method is more sensitive to deviations from the parametric model than other tests found in the literature. We also apply our method to real-world datasets.
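
A schematic version of such a test, assuming a normal linear model as the parametric family: the statistic compares the empirical CDF of Y with the semi-parametric estimate n^{-1} sum_i F_thetahat(y | X_i), and the critical value comes from a parametric bootstrap. The Kolmogorov-type sup norm below is an illustrative choice, not necessarily the paper's statistic.

```python
# Parametric-bootstrap goodness-of-fit test for a normal linear model of Y | X.
import numpy as np
from scipy import stats

def test_stat(y, x):
    b, a = np.polyfit(x, y, 1)                   # least-squares fit y = a + b*x
    s = np.std(y - (a + b * x), ddof=2)
    grid = np.sort(y)
    F_np = np.arange(1, len(y) + 1) / len(y)     # nonparametric (empirical) CDF
    F_sp = stats.norm.cdf((grid[:, None] - (a + b * x)) / s).mean(axis=1)
    return np.max(np.abs(F_np - F_sp)), (a, b, s)

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 200)
y = 1 + 2 * x + rng.standard_normal(200)         # data generated under H0

T, (a, b, s) = test_stat(y, x)
T_boot = []
for _ in range(199):                             # parametric bootstrap under the fit
    y_b = a + b * x + s * rng.standard_normal(len(x))
    T_boot.append(test_stat(y_b, x)[0])
p_value = (1 + np.sum(np.array(T_boot) >= T)) / 200
```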
