Conformal prediction equips machine learning models with a reasonable notion of uncertainty quantification without making strong distributional assumptions. It wraps around any black-box prediction model and converts point predictions into set predictions with a predefined marginal coverage guarantee. However, conformal prediction only works if we fix the underlying machine learning model in advance. A relatively unaddressed issue in conformal prediction is that of model selection and/or aggregation: for a given problem, which of the plethora of prediction methods (random forests, neural nets, regularized linear models, etc.) should we conformalize? This paper proposes a new approach to conformal model aggregation in online settings, which combines the prediction sets from several algorithms by voting, with the weights on the models adapted over time based on past performance.
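
As a minimal sketch of online aggregation of conformal prediction sets by weighted voting (the voting rule, loss, and exponential-weights update below are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def aggregate_conformal_sets(candidate_sets, weights, threshold=0.5):
    """Combine prediction sets from several conformalized models by weighted voting.

    candidate_sets: list of Python sets of candidate labels, one per model.
    weights: nonnegative model weights summing to one.
    Returns the labels whose weighted vote meets the threshold.
    """
    labels = set().union(*candidate_sets)
    votes = {y: sum(w for s, w in zip(candidate_sets, weights) if y in s) for y in labels}
    return {y for y, v in votes.items() if v >= threshold}

def update_weights(weights, losses, eta=0.1):
    """Exponential-weights update: models with smaller past loss
    (e.g., miscoverage plus set size) receive larger weight in later rounds."""
    w = np.asarray(weights, dtype=float) * np.exp(-eta * np.asarray(losses, dtype=float))
    return w / w.sum()

# One illustrative round: three models, their current weights, and their sets for a test point.
sets_t = [{"cat", "dog"}, {"cat"}, {"cat", "fox"}]
weights_t = np.array([0.5, 0.3, 0.2])
print(aggregate_conformal_sets(sets_t, weights_t))           # {'cat', 'dog'}
weights_next = update_weights(weights_t, losses=[0.4, 0.1, 0.7])
```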

Related Content

Estimating the sharing of genetic effects across different conditions is important to many statistical analyses of genomic data. The patterns of sharing arising from these data are often highly heterogeneous. To flexibly model these heterogeneous sharing patterns, Urbut et al. (2019) proposed the multivariate adaptive shrinkage (MASH) method to jointly analyze genetic effects across multiple conditions. However, multivariate analyses using MASH (as well as other multivariate analyses) require good estimates of the sharing patterns, and estimating these patterns efficiently and accurately remains challenging. Here we describe new empirical Bayes methods that provide improvements in speed and accuracy over existing methods. The two key ideas are: (1) adaptive regularization to improve accuracy in settings with many conditions; (2) faster model-fitting algorithms that exploit analytical results on covariance estimation. In simulations, we show that the new methods provide better model fits, better out-of-sample performance, and improved power and accuracy in detecting the true underlying signals. In an analysis of eQTLs in 49 human tissues, our new analysis pipeline achieves better model fits and better out-of-sample performance than the existing MASH analysis pipeline. We have implemented the new methods, which we call "Ultimate Deconvolution", in an R package, udr, available on GitHub.
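
For context, the empirical Bayes model behind MASH-style analyses is typically of the form (the notation here is illustrative)

$$ \hat{b}_j \mid b_j \sim N_R(b_j, \hat{S}_j), \qquad b_j \sim \sum_{k=1}^{K} \pi_k \, N_R(0, U_k), $$

where $\hat{b}_j$ collects the estimated effects of unit $j$ across $R$ conditions, $\hat{S}_j$ is the corresponding standard-error covariance matrix, and the prior covariance matrices $U_1, \dots, U_K$ encode the sharing patterns; the methods described above target faster and more accurate estimation of the $U_k$.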

We study the problem of parametric estimation for a continuously observed stochastic differential equation driven by fractional Brownian motion. Under some assumptions on the drift and diffusion coefficients, we construct a maximum likelihood estimator of the drift parameter and establish its asymptotic normality and moment convergence as the small dispersion coefficient vanishes.
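
A typical small-dispersion setting of this kind (the display is only schematic; the precise assumptions vary across papers) is

$$ \mathrm{d}X_t = b(X_t, \theta)\,\mathrm{d}t + \varepsilon\,\sigma(X_t)\,\mathrm{d}B^H_t, \qquad X_0 = x_0, \quad t \in [0, T], $$

where $B^H$ is a fractional Brownian motion with Hurst parameter $H$ and $\varepsilon > 0$ is the small dispersion coefficient; one then studies a maximum likelihood estimator $\hat{\theta}_\varepsilon$ of the drift parameter $\theta$ and its limiting Gaussian distribution as $\varepsilon \to 0$.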

Optimal experimental design (OED) aims to choose the observations in an experiment to be as informative as possible, according to certain statistical criteria. In the linear case (when the observations depend linearly on the unknown parameters), it seeks the optimal weights over the rows of the design matrix $A$ under certain criteria. Classical OED assumes a discrete design space and thus a design matrix with finitely many rows. In many practical situations, however, the design space is continuous-valued, so the OED problem becomes one of optimizing over a continuous-valued design space. The objective then becomes a functional of a probability measure, instead of a function of a finite-dimensional vector. This change of perspective requires a new set of techniques that can handle optimization over probability measures, and Wasserstein gradient flow becomes a natural candidate. Both the first-order criticality conditions and the convexity properties of the OED objective are presented. Computationally, Monte Carlo particle simulation is deployed to formulate the main algorithm. This algorithm is applied to two elliptic inverse problems.
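
To make the change of perspective concrete, in the A-optimal linear case (used here purely as an illustration) the design is a probability measure $\rho$ on the design space and the objective is the functional

$$ F(\rho) = \operatorname{Tr}\!\left[ \left( \int a(x)\, a(x)^{\top}\, \mathrm{d}\rho(x) \right)^{-1} \right], $$

where $a(x)$ plays the role of a row of the design matrix at design point $x$; a Wasserstein gradient flow on $F$ can then be discretized by moving particles $x_i$ along $-\nabla_x \frac{\delta F}{\delta \rho}(x_i)$, which is the basis of Monte Carlo particle algorithms of the kind described above.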

We devise fast and provably accurate algorithms to transform between an $N\times N \times N$ Cartesian voxel representation of a three-dimensional function and its expansion into the ball harmonics, that is, the eigenbasis of the Dirichlet Laplacian on the unit ball in $\mathbb{R}^3$. Given $\varepsilon > 0$, our algorithms achieve relative $\ell^1$-$\ell^\infty$ accuracy $\varepsilon$ in time $O(N^3 (\log N)^2 + N^3 |\log \varepsilon|^2)$, while their dense counterparts have time complexity $O(N^6)$. We illustrate our methods on numerical examples.
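
For reference, the ball harmonics, i.e., the eigenfunctions of the Dirichlet Laplacian on the unit ball in $\mathbb{R}^3$, take the separated form

$$ f_{\ell, m, s}(r, \theta, \varphi) = c_{\ell, s}\, j_\ell(\lambda_{\ell, s}\, r)\, Y_\ell^m(\theta, \varphi), \qquad j_\ell(\lambda_{\ell, s}) = 0, $$

where $j_\ell$ is the spherical Bessel function of the first kind, $\lambda_{\ell, s}$ is its $s$-th positive zero (which enforces the Dirichlet boundary condition), $Y_\ell^m$ is a spherical harmonic, and $c_{\ell, s}$ is a normalization constant.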

We consider a convex-constrained Gaussian sequence model and characterize necessary and sufficient conditions for the least squares estimator (LSE) to be optimal in a minimax sense. For a closed convex set $K\subset \mathbb{R}^n$, we observe $Y=\mu+\xi$, where $\xi\sim N(0,\sigma^2\mathbb{I}_n)$ and $\mu\in K$, and aim to estimate $\mu$. We characterize the worst-case risk of the LSE in multiple ways by analyzing the behavior of the local Gaussian width on $K$. We demonstrate that optimality is equivalent to a Lipschitz property of the local Gaussian width mapping. We also provide theoretical algorithms that search for the worst-case risk. We then provide examples showing optimality or suboptimality of the LSE on various sets, including $\ell_p$ balls for $p\in[1,2]$, pyramids, solids of revolution, and multivariate isotonic regression, among others.
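
Here the local Gaussian width at a point $\mu \in K$ and scale $t > 0$ is the standard quantity (up to the normalization chosen in the paper)

$$ w_\mu(t) = \mathbb{E} \sup_{\nu \in K,\ \|\nu - \mu\| \le t} \langle \xi, \nu - \mu \rangle, \qquad \xi \sim N(0, \sigma^2 \mathbb{I}_n), $$

and the optimality characterization above concerns the Lipschitz behavior of the mapping $\mu \mapsto w_\mu(\cdot)$ over $K$.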

In causal inference, sensitivity models assess how unmeasured confounders could alter causal analyses, but the sensitivity parameter -- which quantifies the degree of unmeasured confounding -- is often difficult to interpret. For this reason, researchers sometimes compare the sensitivity parameter to an estimate of measured confounding; this is known as calibration. Although calibration can aid interpretation, it is typically conducted post hoc, and uncertainty in the point estimate of measured confounding is rarely accounted for. To address these limitations, we propose novel calibrated sensitivity models, which directly bound the degree of unmeasured confounding by a multiple of measured confounding. The calibrated sensitivity parameter is interpretable as an intuitive, unit-less ratio of unmeasured to measured confounding, and uncertainty due to estimating measured confounding can be incorporated. Incorporating this uncertainty shows that causal analyses can be more or less robust to unmeasured confounding than standard approaches would suggest. We develop efficient estimators and inferential methods for bounds on the average treatment effect under three calibrated sensitivity models, establishing parametric efficiency and asymptotic normality under doubly robust-style nonparametric conditions. We illustrate our methods with a data analysis of the effect of mothers' smoking on infant birthweight.
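
In symbols (our notation, not the paper's), a calibrated sensitivity model replaces a free-standing bound on unmeasured confounding with a bound of the form

$$ \text{unmeasured confounding} \;\le\; \Lambda \times \text{measured confounding}, $$

so the calibrated sensitivity parameter $\Lambda$ is the unit-less ratio of unmeasured to measured confounding referred to above, and the uncertainty in estimating the right-hand side can be propagated into the resulting bounds on the average treatment effect.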

Studies on software tutoring systems for complex learning have shown that confusion has a beneficial relationship with the learning experience and student engagement (Arguel et al., 2017). Inducing confusion can prevent boredom, while signs of confusion can serve as a signal of genuine learning and as a precursor to frustration. There is little to no research on the role of confusion in early childhood education and playful learning, as existing studies primarily focus on high school and university students during complex learning tasks. Despite that, the field acknowledges that confusion may be caused by an inconsistency between incoming information and a student's internal model, referred to as cognitive disequilibrium in the theory of cognitive development, which was originally theorized from observational studies of young children (D'Mello and Graesser, 2012). Therefore, there is reason to expect that the virtues of confusion also apply to young children engaging in learning activities, such as playful learning. To investigate the role of confusion in playful learning, we conducted an observational study in which the behavior and expressed emotions of young children were recorded by familiar pedagogues, using a web app, while the children engaged with playful learning games designed for kindergartens. The expressed emotions were analyzed using a likelihood metric to determine the likely transitions between emotions (D'Mello and Graesser, 2012). The preliminary results showed that during short play sessions, children express confusion, frustration, and boredom. Furthermore, the observed emotional transitions matched previously established models of affect dynamics during complex learning. We argue that games with a learning objective can benefit from purposely confusing the player, and we discuss how the player's confusion may be managed to improve the learning experience.
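
The transition-likelihood metric of D'Mello and Graesser (2012) is commonly written, up to notation, as

$$ L(E_t \to E_{t+1}) = \frac{P(E_{t+1} \mid E_t) - P(E_{t+1})}{1 - P(E_{t+1})}, $$

which is positive when the transition from emotion $E_t$ to emotion $E_{t+1}$ occurs more often than expected by chance, zero at chance level, and negative below chance.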

Markov state modeling has gained popularity in various scientific fields due to its ability to reduce complex time series data to transitions between a few states. Yet, current frameworks are limited by the assumption that a single Markov chain describes the data, and they are unable to discern heterogeneities. As a solution, this paper proposes a variational expectation-maximization (EM) algorithm that identifies a mixture of Markov chains in a time-series data set. The method is agnostic to the definition of the Markov states, whether data-driven (e.g., by spectral clustering) or based on domain knowledge. Variational EM efficiently and organically identifies the number of Markov chains and the dynamics of each chain without expensive model comparisons or posterior sampling. The approach is supported by a theoretical analysis and numerical experiments, including simulated and observational data sets based on ${\tt Last.fm}$ music listening, ultramarathon running, and gene expression. The results show the new algorithm is competitive with contemporary mixture modeling approaches and powerful in identifying meaningful heterogeneities in time series data.
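
As a minimal sketch of fitting a mixture of Markov chains over pre-specified states with a plain EM loop (the paper's variational algorithm additionally infers the number of chains; everything below, including the smoothing constant, is an illustrative simplification):

```python
import numpy as np

def em_markov_mixture(sequences, n_states, n_chains, n_iter=100, seed=0):
    """Fit a mixture of Markov chains to integer-coded state sequences via EM.

    sequences: list of 1-D integer arrays, each a trajectory over {0, ..., n_states-1}.
    Returns mixture weights (n_chains,), transition matrices (n_chains, n_states, n_states),
    and per-sequence responsibilities (n_sequences, n_chains).
    """
    rng = np.random.default_rng(seed)
    pi = np.full(n_chains, 1.0 / n_chains)                            # mixture weights
    T = rng.dirichlet(np.ones(n_states), size=(n_chains, n_states))   # transition matrices

    # Per-sequence transition counts C[i, a, b] = number of a -> b transitions in sequence i.
    C = np.zeros((len(sequences), n_states, n_states))
    for i, seq in enumerate(sequences):
        s = np.asarray(seq)
        np.add.at(C[i], (s[:-1], s[1:]), 1.0)

    for _ in range(n_iter):
        # E-step: responsibilities proportional to pi_k * prod_{a,b} T_k[a,b]^C[i,a,b], in log space.
        log_r = np.log(pi) + np.einsum('iab,kab->ik', C, np.log(T))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate mixture weights and transition matrices from weighted counts.
        pi = r.mean(axis=0)
        counts = np.einsum('ik,iab->kab', r, C) + 1e-8                # small smoothing
        T = counts / counts.sum(axis=2, keepdims=True)

    return pi, T, r
```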

Symbolic computation for systems of differential equations is often computationally expensive. Many practical differential models take the form of a polynomial or rational ODE system with specified outputs. A basic symbolic approach to analyzing such models is to compute and then symbolically process the polynomial system obtained from sufficiently many Lie derivatives of the output functions with respect to the vector field given by the ODE system. In this paper, we present a method for speeding up Gr\"obner basis computation for this class of polynomial systems by using a specific monomial ordering, with weights for the variables derived from the structure of the ODE model. We provide empirical results showing improvements across different symbolic computing frameworks and apply the method to speed up structural identifiability analysis of ODE models.
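
As a small illustration of the kind of polynomial system involved, here is a sympy sketch with a toy two-state model; the model and variable names are ours, and the standard 'grevlex' ordering stands in for the paper's weighted monomial ordering:

```python
import sympy as sp

# Toy ODE model: x1' = -a*x1, x2' = a*x1 - b*x2, with scalar output y = x2.
x1, x2, a, b = sp.symbols('x1 x2 a b')
states, field = [x1, x2], [-a*x1, a*x1 - b*x2]

def lie_derivative(h, states, field):
    """Lie derivative of h along the vector field of the ODE system."""
    return sum(sp.diff(h, s) * f for s, f in zip(states, field))

# Successive Lie derivatives of the output give the polynomial system
# that is then processed symbolically (e.g., for structural identifiability).
derivs = [x2]
for _ in range(2):
    derivs.append(sp.expand(lie_derivative(derivs[-1], states, field)))

# Equate the Lie derivatives to symbols for the observed output derivatives
# and compute a Groebner basis of the resulting polynomial system.
y0, y1, y2 = sp.symbols('y0 y1 y2')
polys = [d - v for d, v in zip(derivs, (y0, y1, y2))]
G = sp.groebner(polys, x1, x2, a, b, y0, y1, y2, order='grevlex')
print(G)
```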

We propose a meta-learning method for positive and unlabeled (PU) classification, which improves the performance of binary classifiers obtained from only PU data in unseen target tasks. PU learning is an important problem since PU data naturally arise in real-world applications such as outlier detection and information retrieval. Existing PU learning methods require large amounts of PU data, but sufficient data are often unavailable in practice. The proposed method minimizes the test classification risk after the model is adapted to PU data, by using related tasks that consist of positive, negative, and unlabeled data. We formulate the adaptation as an estimation problem for the Bayes optimal classifier, i.e., the classifier that minimizes the classification risk. The proposed method embeds each instance into a task-specific space using neural networks. With the embedded PU data, the Bayes optimal classifier is estimated through density-ratio estimation of PU densities, whose solution is obtained in closed form. The closed-form solution enables us to efficiently and effectively minimize the test classification risk. We empirically show that the proposed method outperforms existing methods on one synthetic and three real-world datasets.
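
As a rough sketch of the closed-form density-ratio step in an embedded space, here is a generic uLSIF-style estimator with Gaussian basis functions; the embedding network, the meta-learning loop, and the paper's exact estimator are not reproduced, and all names below are ours:

```python
import numpy as np

def closed_form_density_ratio(z_pos, z_unl, centers, sigma=1.0, lam=1e-3):
    """Estimate r(z) ~ p_P(z) / p_U(z) as a linear combination of Gaussian basis
    functions, with coefficients obtained in closed form (ridge-regularized uLSIF).

    z_pos: (n_P, d) embedded positive examples; z_unl: (n_U, d) embedded unlabeled examples;
    centers: (m, d) basis centers, e.g. a subset of the embedded positives.
    Returns a function mapping embeddings to estimated density ratios.
    """
    def phi(z):
        d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))                       # (n, m) basis matrix

    H = phi(z_unl).T @ phi(z_unl) / len(z_unl)                         # second moment under the unlabeled data
    h = phi(z_pos).mean(axis=0)                                        # first moment under the positive data
    alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)         # closed-form coefficients
    return lambda z: np.maximum(phi(z) @ alpha, 0.0)                   # clip negative ratios at zero
```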
