
We introduce a simple diagnostic test for assessing the goodness of fit of linear regression, and in particular for detecting hidden confounding. We propose to evaluate the sensitivity of the regression coefficient with respect to changes in the marginal distribution of the covariates by comparing the so-called higher-order least squares estimates with the usual least squares estimates. In spite of its simplicity, this strategy is extremely general and powerful. Specifically, we show that it allows us to distinguish between confounded and unconfounded predictor variables, as well as to determine ancestor variables in structural equation models.
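To make the comparison concrete, here is a minimal single-covariate sketch in Python (the variable names and the specific third-moment ratio are illustrative assumptions, not the paper's exact higher-order least squares estimator). Under the unconfounded model y = βx + ε with a centered, skewed covariate x and independent noise, both cov(x, y)/var(x) and E[x²y]/E[x³] estimate β; a hidden confounder drives the two moment ratios apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def ols_vs_higher_order(x, y):
    """Compare the OLS slope with a simple higher-order moment analog.

    Under y = beta*x + eps, with eps independent of the centered, skewed x,
    both ratios estimate beta; hidden confounding makes them differ.
    Requires E[x^3] != 0, i.e., some non-Gaussianity of x.
    """
    x = x - x.mean()
    y = y - y.mean()
    beta_ols = np.mean(x * y) / np.mean(x**2)
    beta_ho = np.mean(x**2 * y) / np.mean(x**3)
    return beta_ols, beta_ho

# Unconfounded: the two estimates agree (both close to 2.0).
x = rng.exponential(1.0, n)
y = 2.0 * x + rng.normal(0.0, 1.0, n)
print(ols_vs_higher_order(x, y))

# A hidden confounder h affecting both x and y pulls the estimates
# apart (roughly 3.5 vs 5.0 in this configuration).
h = rng.exponential(1.0, n)
x = h + rng.normal(0.0, 1.0, n)
y = 2.0 * x + 3.0 * h + rng.normal(0.0, 1.0, n)
print(ols_vs_higher_order(x, y))
```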

Related Content

Shape constraints such as positive semi-definiteness (PSD) for matrices or convexity for functions play a central role in many applications in machine learning and the sciences, including metric learning, optimal transport, and economics. Yet, very few function models exist that enforce PSD-ness or convexity with good empirical performance and theoretical guarantees. In this paper, we introduce a kernel sum-of-squares model for functions that take values in the PSD cone, which extends the kernel sum-of-squares models recently proposed to encode non-negative scalar functions. We provide a representer theorem for this class of PSD functions, show that it constitutes a universal approximator of PSD functions, and derive eigenvalue bounds in the case of subsampled equality constraints. We then apply our results to modeling convex functions by enforcing a kernel sum-of-squares representation of their Hessian, and show that any smooth and strongly convex function may be represented in this way. Finally, we illustrate our methods on a PSD matrix-valued regression task and on scalar-valued convex regression.
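As a rough illustration of the construction (a minimal sketch with assumed anchor points and kernel, not the paper's exact model or fitting procedure): with features φ(x) = (k(z_1, x), ..., k(z_m, x)) and any PSD coefficient matrix A, the model f(x) = φ(x)ᵀAφ(x) is non-negative by construction; the matrix-valued case replaces φ(x) by a block feature map so that the output lands in the PSD cone.

```python
import numpy as np

def rbf(z, x, gamma=1.0):
    """Gaussian kernel matrix between anchors z (m, d) and inputs x (n, d)."""
    d2 = ((z[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)  # shape (m, n)

rng = np.random.default_rng(0)
z = rng.normal(size=(20, 2))   # anchor points (an assumption of this sketch)
B = rng.normal(size=(20, 20))
A = B @ B.T                    # PSD coefficient matrix => f(x) >= 0 everywhere

x = rng.normal(size=(5, 2))
phi = rbf(z, x)                # (m, n)
f = np.einsum("mn,mk,kn->n", phi, A, phi)  # f(x_i) = phi_i^T A phi_i
print(f)                       # non-negative values, by construction
```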

The mesh sensitivity of finite element solutions of linear elliptic partial differential equations is analyzed. A bound for the change in the finite element solution is obtained in terms of the mesh deformation and its gradient. The bound shows that the finite element solution changes continuously with the mesh. The result holds in any dimension, for arbitrary unstructured simplicial meshes, for general linear elliptic partial differential equations, and for general finite element approximations.

In Bayesian analysis, the selection of a prior distribution is typically done by considering each parameter in the model individually. While this can be convenient, in many scenarios it may be desirable to place a prior on a summary measure of the model instead. In this work, we propose a prior on the model fit, as measured by a Bayesian coefficient of determination (R2), which in turn induces a prior on the individual parameters. We achieve this by placing a beta prior on R2 and then deriving the induced prior on the global variance parameter for generalized linear mixed models. We derive closed-form expressions in many scenarios and present several approximation strategies for when an analytic form is not available and/or to allow for easier computation. In these situations, we suggest approximating the prior with a generalized beta prime distribution that matches it closely. This approach is quite flexible and can be easily implemented in standard Bayesian software. Lastly, we demonstrate the performance of the method on simulated data, where it particularly shines in high-dimensional examples, as well as on real-world data, which shows its ability to model spatial correlation in the random effects.
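A minimal sketch of the induced-prior mechanism (the variance decomposition below assumes standardized covariates and is illustrative, not the paper's general derivation): solving R² = τ²/(τ² + σ²) for the global variance gives τ² = σ²R²/(1 − R²), so a Beta(a, b) prior on R² induces a scaled beta prime prior on τ².

```python
import numpy as np

rng = np.random.default_rng(0)

def induced_global_variance(a, b, sigma2, size):
    """Sample the global variance tau^2 induced by R2 ~ Beta(a, b).

    Assumes the decomposition R2 = tau^2 / (tau^2 + sigma2) (e.g., with
    standardized covariates), so that tau^2 = sigma2 * R2 / (1 - R2)
    is a scaled beta prime draw.
    """
    r2 = rng.beta(a, b, size)
    return sigma2 * r2 / (1.0 - r2)

# b > a pulls R2 toward 0, shrinking the induced global variance.
tau2 = induced_global_variance(a=1.0, b=5.0, sigma2=1.0, size=10_000)
print(np.mean(tau2), np.median(tau2))
```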

We consider the problem of deciding termination of single-path while loops with integer variables, affine updates, and affine guard conditions. The question is whether such a loop terminates on all integer initial values. This problem is known to be decidable for the subclass of loops whose update matrices are diagonalisable, but the general case had remained open since being conjectured decidable by Tiwari in 2004. In this paper we show decidability of termination for arbitrary update matrices, confirming Tiwari's conjecture. For the class of loops considered in this paper, the question of deciding termination on a specific initial value is a longstanding open problem in number theory. The key to our decision procedure is showing how to circumvent the difficulties inherent in deciding termination on a fixed initial value.
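For concreteness, a loop in this class has the following shape (a hypothetical instance; simulating from one initial value can only refute, or fail to observe, termination within a step budget, which is exactly why a decision procedure is needed):

```python
import numpy as np

# Single-path affine while loop: while g.x > c: x <- A @ x + b.
# The decision problem asks whether such a loop terminates on ALL
# integer initial values; the matrices below are an arbitrary example.
A = np.array([[2, -1],
              [1,  0]])
b = np.array([0, 1])
g = np.array([1, -1])
c = 0

def simulate(x, max_steps=1_000):
    """Run the loop from one initial value, up to a step budget."""
    for step in range(max_steps):
        if not (g @ x > c):
            return step        # terminated after `step` iterations
        x = A @ x + b
    return None                # inconclusive: no termination observed

print(simulate(np.array([3, 1])))  # -> 2 for this initial value
```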

We consider the problem of parameter estimation in slowly varying regression models with sparsity constraints. We formulate the problem as a mixed-integer optimization problem and demonstrate that it can be reformulated exactly as a binary convex optimization problem through a novel exact relaxation. The relaxation utilizes a new identity for Moore-Penrose inverses that convexifies the non-convex objective function while coinciding with the original objective on all feasible binary points. This allows us to solve the problem significantly more efficiently and to provable optimality using a cutting-plane algorithm. We develop a highly optimized implementation of this algorithm, which substantially improves upon the asymptotic computational complexity of a straightforward implementation. We further develop a heuristic method that is guaranteed to produce a feasible solution and, as we empirically illustrate, generates high-quality warm-start solutions for the binary optimization problem. We show, on both synthetic and real-world datasets, that the resulting algorithm outperforms competing formulations in comparable times across a variety of metrics, including out-of-sample predictive performance, support recovery accuracy, and false positive rate. The algorithm enables us to train models with tens of thousands of parameters, is robust to noise, and effectively captures the underlying slowly changing support of the data-generating process.
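A plausible reading of the estimation problem, written out for concreteness (our illustrative reconstruction; the paper's exact objective, penalties, and constraints may differ):

\[
\min_{\beta_1,\dots,\beta_T,\; z \in \{0,1\}^p} \;
\sum_{t=1}^{T} \lVert y_t - X_t \beta_t \rVert_2^2
\;+\; \lambda \sum_{t=2}^{T} \lVert \beta_t - \beta_{t-1} \rVert_2^2
\quad \text{s.t.} \quad
\beta_{tj} = 0 \text{ if } z_j = 0, \qquad \sum_{j=1}^{p} z_j \le k,
\]

where the binary vector $z$ encodes the sparse support and the smoothness penalty with weight $\lambda$ forces the coefficients to vary slowly over time.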

This paper introduces a new class of numerical methods for the time integration of evolution equations set as Cauchy problems of ODEs or PDEs. The systematic design of these methods combines the Runge-Kutta formalism with collocation techniques, in such a way that the methods are linearly implicit and have high order. The fact that these methods are implicit makes it possible to avoid CFL conditions when the large systems to be integrated come from the space discretization of evolution PDEs. Moreover, these methods are expected to be efficient, since they require solving only one linear system of equations at each time step, and efficient techniques from the literature can be used to do so. After introducing the methods, we give suitable definitions of consistency and stability for them. This allows us to prove that linearly implicit methods of arbitrarily high order exist and converge when applied to ODEs. Finally, we perform numerical experiments on ODEs and PDEs that illustrate our theoretical results for ODEs, and compare our methods with standard methods for several evolution PDEs.
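As a minimal first-order illustration of the "one linear solve per time step" structure (linearly implicit Euler; the paper's methods achieve arbitrarily high order, which this sketch does not attempt):

```python
import numpy as np

def linearly_implicit_euler(f, jac, u0, t0, t1, n_steps):
    """Integrate u' = f(u) with one linear solve per step.

    Each step solves (I - h*J) du = h*f(u_n) with J = jac(u_n), then
    sets u_{n+1} = u_n + du.  No nonlinear (Newton) iteration is
    needed, which is the hallmark of linearly implicit schemes.
    """
    u = np.array(u0, dtype=float)
    h = (t1 - t0) / n_steps
    I = np.eye(u.size)
    for _ in range(n_steps):
        J = jac(u)
        du = np.linalg.solve(I - h * J, h * f(u))
        u = u + du
    return u

# Stiff scalar test problem u' = -50*(u - 1); the exact solution
# relaxes to 1, and the scheme is stable despite the large time step.
f = lambda u: -50.0 * (u - 1.0)
jac = lambda u: np.array([[-50.0]])
print(linearly_implicit_euler(f, jac, [0.0], 0.0, 1.0, 20))  # -> ~1.0
```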

We introduce a simple diagnostic test for assessing the overall or partial goodness of fit of linear regression. We propose to evaluate the sensitivity of the regression coefficient with respect to changes in the marginal distribution of the covariates by comparing the so-called higher-order least squares estimates with the usual least squares estimates. In spite of its simplicity, this strategy is extremely general and powerful, including in high-dimensional settings. Specifically, we show that it allows us to distinguish between confounded and unconfounded predictor variables, as well as to determine ancestor variables in linear structural equation models, assuming some non-Gaussianity. Thus, we provide a test for partial goodness of fit.

Gibbs sampling methods for mixture models are based on data augmentation schemes that account for the unobserved partition of the data. Conditional samplers are known to suffer from slow mixing in infinite mixtures, where some form of truncation, either deterministic or random, is required. In mixtures with a random number of components, the exploration of parameter spaces of different dimensions can also be challenging. We tackle these issues by expressing the mixture components in the random order of appearance in an exchangeable sequence directed by the mixing distribution. We derive a sampler that is straightforward to implement for mixing distributions with tractable size-biased ordered weights. In infinite mixtures, no form of truncation is necessary. As for finite mixtures with random dimension, a simple update of the number of components is obtained by a blocking argument, thus easing the challenges found in trans-dimensional moves via Metropolis-Hastings steps. Additionally, sampling occurs in the space of ordered partitions with blocks labelled in least-element order. This improves mixing and promotes a consistent labelling of mixture components across iterations. The performance of the proposed algorithm is evaluated on simulated data.
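For the Dirichlet process, the weights in the random order of appearance are available in closed form via stick-breaking, which gives a minimal sketch of what "tractable size-biased ordered weights" means (the truncation at m below is only for printing; the sampler itself requires none):

```python
import numpy as np

rng = np.random.default_rng(0)

def size_biased_weights(alpha, m):
    """First m weights of a DP(alpha) in size-biased (appearance) order.

    Stick-breaking: w_k = v_k * prod_{j<k} (1 - v_j) with
    v_k ~ Beta(1, alpha).  These are exactly the weights indexed by
    order of appearance in an exchangeable sequence directed by the
    mixing distribution.
    """
    v = rng.beta(1.0, alpha, size=m)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

print(size_biased_weights(alpha=2.0, m=10))
```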

Meta-regression is often used to form hypotheses about what is associated with heterogeneity in a meta-analysis and to estimate the extent to which effects can vary between cohorts and other distinguishing factors. However, the study-level variables, called moderators, that are available and used in a meta-regression analysis will rarely explain all of the heterogeneity. Therefore, measuring and trying to understand residual heterogeneity remains important in a meta-regression, although it is not clear how some heterogeneity measures should be used in the meta-regression context. The coefficient of variation, and its variants, are useful measures of relative heterogeneity. We consider these measures in the context of meta-regression, which allows researchers to investigate heterogeneity at different levels of the moderator as well as average relative heterogeneity overall. We also provide confidence intervals for the measures, and our simulation studies show that these intervals have good coverage properties. We recommend these measures and the corresponding intervals as a way to gain useful insights into which moderators may be contributing to the presence of heterogeneity in a meta-analysis, leading to a better understanding of estimated mean effects.
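A minimal sketch of a relative-heterogeneity measure at a given moderator level (the exact variants and interval constructions in the paper may differ): with model-implied mean effect μ(x) = xᵀβ and residual between-study variance τ² from the meta-regression, one natural measure is CV(x) = τ/|μ(x)|.

```python
import numpy as np

def cv_heterogeneity(tau2, beta, x):
    """Coefficient of variation of effects at moderator level x.

    CV(x) = tau / |mu(x)| with mu(x) = x . beta the model-implied mean
    effect and tau^2 the residual between-study variance.  Smaller
    values mean effects vary little relative to the mean effect at
    that moderator level.
    """
    mu = np.dot(x, beta)
    return np.sqrt(tau2) / abs(mu)

beta = np.array([0.4, 0.1])   # intercept and moderator slope (illustrative)
print(cv_heterogeneity(tau2=0.04, beta=beta, x=np.array([1.0, 2.0])))
```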

Hypercontractivity is one of the most powerful tools in Boolean function analysis. Originally studied over the discrete hypercube, it has in recent years attracted increasing interest in extended settings like the $p$-biased cube, the slice, or the Grassmannian, where variants of hypercontractivity have found a number of breakthrough applications, including the resolution of Khot's 2-2 Games Conjecture (Khot, Minzer, Safra FOCS 2018). In this work, we develop a new theory of hypercontractivity on high dimensional expanders (HDX), an important class of expanding complexes that has recently seen similarly impressive applications in both coding theory and approximate sampling. Our results lead to a new understanding of the structure of Boolean functions on HDX, including a tight analog of the KKL theorem and a new characterization of non-expanding sets. Unlike previous settings satisfying hypercontractivity, HDX can be asymmetric, sparse, and very far from products, which makes the application of traditional proof techniques challenging. We handle these barriers by introducing two new tools of independent interest: an explicit combinatorial Fourier basis for HDX that behaves well under restriction, and a local-to-global method for analyzing higher moments. Interestingly, unlike analogous second-moment methods that apply equally across all types of expanding complexes, our tools rely inherently on simplicial structure. This suggests a new distinction among high dimensional expanders based on their behavior beyond the second moment.
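For reference, the classical Bonami hypercontractive inequality on the Boolean cube, which the theory above generalizes to HDX, states that the noise operator $T_\rho$ smooths functions enough to be bounded from $L^2$ into a higher norm:

\[
\lVert T_\rho f \rVert_q \;\le\; \lVert f \rVert_2
\qquad \text{for all } f\colon \{-1,1\}^n \to \mathbb{R},
\ \text{whenever } q \ge 2 \text{ and } 0 \le \rho \le \tfrac{1}{\sqrt{q-1}}.
\]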
