
So far, many researchers have investigated the following question: Given the total number of citations, what is the estimated range of the h index? Here we consider the converse question. Namely, the aim of this paper is to estimate the total number of citations of a researcher using only his h index, his h core and perhaps a relatively small number of his citations from the tail. For this purpose, we use the asymptotic formula for the mode of the Durfee square size as n tends to infinity, which was proved by Canfield, Corteel and Savage (1998), seven years before Hirsch (2005) defined the h index. This formula confirms the asymptotic normality of the Hirsch citation h index. Using this asymptotic formula, in Section 4 we propose five estimates of the total number of citations of a researcher based on his h index and his h core. These estimates are refined mainly by using a small number of additional citations from the h tail of a researcher. Numerous related computational results are given in Section 5. Notice that the relative errors delta(B) of the estimate B of the total number of citations of a researcher are surprisingly close to zero for E. Garfield, H.D. White (Table 2), G. Andrews, L. Leydesdorf and C.D. Savage (Table 5).
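A minimal numerical sketch, assuming only the cited asymptotic formula for the mode of the Durfee square side, d(n) ~ c*sqrt(n) with c = sqrt(6)*ln 2/pi, which when inverted gives the crudest estimate of the total citation count (roughly 3.42*h^2); a conjugation-symmetry heuristic then illustrates how knowledge of the h core can refine it. The function names and the particular refinement are illustrative and are not the estimates of Section 4.

```python
import math

# Mode of the Durfee square side of a random partition of n
# (Canfield, Corteel and Savage, 1998): d(n) ~ C * sqrt(n).
C = math.sqrt(6) * math.log(2) / math.pi  # ~= 0.54

def naive_total_citations(h):
    """Crudest estimate from the h index alone: invert d ~ C * sqrt(n),
    giving n ~ (h / C)^2, i.e. roughly 3.42 * h^2."""
    return (h / C) ** 2

def core_refined_estimate(h, h_core_citations):
    """Illustrative refinement using the h core (not the paper's formula):
    by conjugation symmetry of random partitions, the tail mass is comparable
    to the core mass outside the Durfee square, so total ~= core + (core - h^2)."""
    core = sum(h_core_citations)
    return 2 * core - h * h

# Example: h = 30 with a hypothetical h core.
print(round(naive_total_citations(30)))                      # ~3.42 * 900 ~= 3080
print(core_refined_estimate(30, [120, 95, 80] + [40] * 27))  # 2*1375 - 900 = 1850
```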

Related content

Mediation analysis is widely used in health science research to evaluate the extent to which an intermediate variable explains an observed exposure-outcome relationship. However, the validity of the analysis can be compromised when the exposure is measured with error. Motivated by the Health Professionals Follow-up Study (HPFS), we investigate the impact of exposure measurement error on assessing mediation with a survival outcome, based on the Cox proportional hazards outcome model. When the outcome is rare and there is no exposure-mediator interaction, we show that the uncorrected estimators of the natural indirect and direct effects can be biased in either direction, but the uncorrected estimator of the mediation proportion is approximately unbiased as long as the measurement error is not large or the mediator-exposure association is not strong. We develop ordinary regression calibration and risk set regression calibration approaches to correct the exposure measurement error-induced bias when estimating mediation effects, allowing for an exposure-mediator interaction in the Cox outcome model. The proposed approaches require a validation study to characterize the measurement error process. We apply the proposed approaches to the HPFS (1986-2016) to evaluate the extent to which reduced body mass index mediates the protective effect of vigorous physical activity on the risk of cardiovascular diseases, and we compare the finite-sample properties of the proposed estimators via simulations.
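A hedged sketch of the ordinary regression calibration step, under the stated assumption that a validation study observes the true exposure X alongside the error-prone measurement W and covariates C: E[X | W, C] is fitted in the validation data and predicted in the main study, after which the calibrated exposure replaces W in the Cox outcome and mediator models. Variable names are illustrative, the risk set variant is not shown, and lifelines is only one possible choice of Cox implementation.

```python
import numpy as np

def ordinary_regression_calibration(W_val, C_val, X_val, W_main, C_main):
    """Fit E[X | W, C] by least squares in the validation study (where the
    true exposure X is observed) and predict the calibrated exposure X_hat
    for the main study."""
    design_val = np.column_stack([np.ones(len(W_val)), W_val, C_val])
    coef, *_ = np.linalg.lstsq(design_val, X_val, rcond=None)
    design_main = np.column_stack([np.ones(len(W_main)), W_main, C_main])
    return design_main @ coef

# Downstream use (sketch only; column names illustrative):
# df["X_hat"] = ordinary_regression_calibration(W_val, C_val, X_val, W, C)
# from lifelines import CoxPHFitter
# CoxPHFitter().fit(df[["X_hat", "M", "C", "T", "E"]],
#                   duration_col="T", event_col="E")  # Cox outcome model
```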

We examine the tension between academic impact - the volume of citations received by publications - and scientific disruption. Intuitively, one would expect disruptive scientific work to be rewarded by high volumes of citations and, symmetrically, impactful work to also be disruptive. A number of recent studies have instead shown that such intuition is often at odds with reality. In this paper, we break down the relationship between impact and disruption with a detailed correlation analysis in two large data sets of publications in Computer Science and Physics. We find that highly disruptive papers tend to be cited at higher rates than average. In contrast, the converse does not hold, as we do not find highly impactful papers to be particularly disruptive. Notably, these results qualitatively hold even within individual scientific careers, as we find that - on average - an author's most disruptive work tends to be well cited, whereas their most cited work does not tend to be disruptive. We discuss the implications of our findings in the context of academic evaluation systems, and show how they can help reconcile seemingly contradictory results in the literature.
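A toy sketch (not the paper's data or methodology) of the kind of conditional comparison behind the reported asymmetry: the average citation count of the most disruptive papers versus the average disruption of the most cited papers, each set against the overall mean.

```python
import numpy as np

def asymmetry_summary(citations, disruption, top_frac=0.1):
    """Compare the two conditional averages behind the asymmetry: how well
    cited the most disruptive papers are, versus how disruptive the most
    cited papers are, relative to the overall means."""
    citations = np.asarray(citations, dtype=float)
    disruption = np.asarray(disruption, dtype=float)
    k = max(1, int(top_frac * len(citations)))
    top_disruptive = np.argsort(disruption)[-k:]
    top_cited = np.argsort(citations)[-k:]
    return {
        "mean citations | top disruption": citations[top_disruptive].mean(),
        "mean citations (all)": citations.mean(),
        "mean disruption | top citations": disruption[top_cited].mean(),
        "mean disruption (all)": disruption.mean(),
    }
```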

Current physics-informed (standard or operator) neural networks still rely on accurately learning the initial conditions of the system they are solving. In contrast, standard numerical methods evolve such initial conditions without needing to learn them. In this study, we propose to improve current physics-informed deep learning strategies so that initial conditions do not need to be learned and are represented exactly in the predicted solution. Moreover, this method guarantees that when a DeepONet is applied multiple times to time-step a solution, the resulting function is continuous.
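One common way (not necessarily the authors' construction) to represent the initial condition exactly rather than learn it is to wrap the network in an ansatz that reduces to the known initial state at t = 0, e.g. u(x, t) = u0(x) + t * N(x, t). A minimal sketch with a placeholder network:

```python
import math

def hard_constrained_solution(network, u0):
    """Wrap a learned network N(x, t) so that the predicted solution
    u(x, t) = u0(x) + t * N(x, t) satisfies u(x, 0) = u0(x) exactly,
    with no need to learn the initial condition."""
    def u(x, t):
        return u0(x) + t * network(x, t)
    return u

# Toy usage with a stand-in "network" (in practice a trained PINN/DeepONet):
u0 = math.sin                        # known initial condition u(x, 0) = sin(x)
network = lambda x, t: 0.3 * x * t   # placeholder for a trained network
u = hard_constrained_solution(network, u0)
assert u(1.2, 0.0) == u0(1.2)        # initial condition reproduced exactly
```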

In real life, success is often contingent upon multiple critical steps that are distant in time from each other and from the final reward. These critical steps are challenging to identify with traditional reinforcement learning (RL) methods that rely on the Bellman equation for credit assignment. Here, we present a new RL algorithm that uses offline contrastive learning to home in on critical steps. This algorithm, which we call contrastive introspection (ConSpec), can be added to any existing RL algorithm. ConSpec learns a set of prototypes for the critical steps in a task via a novel contrastive loss and delivers an intrinsic reward when the current state matches one of these prototypes. The prototypes in ConSpec provide two key benefits for credit assignment: (1) They enable rapid identification of all the critical steps. (2) They do so in a readily interpretable manner, enabling out-of-distribution generalization when sensory features are altered. Distinct from other contemporary RL approaches to credit assignment, ConSpec takes advantage of the fact that it is easier to retrospectively identify the small set of steps that success is contingent upon than it is to prospectively predict reward at every step taken in the environment. Altogether, ConSpec improves learning in a diverse set of RL tasks, including both those with explicit, discrete critical steps and those with complex, continuous critical steps.
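A minimal sketch of the prototype-matching intrinsic reward described above, assuming states are encoded as embedding vectors and using cosine similarity with an arbitrary threshold; the contrastive loss that learns the prototypes is ConSpec's own and is not reproduced here, and all names and thresholds are illustrative.

```python
import numpy as np

def intrinsic_reward(state_embedding, prototypes, threshold=0.6, bonus=1.0):
    """Return an intrinsic reward when the current state's embedding matches
    one of the learned prototypes (cosine similarity above a threshold),
    together with the index of the best-matching prototype."""
    s = state_embedding / (np.linalg.norm(state_embedding) + 1e-8)
    P = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    similarities = P @ s
    reward = bonus if similarities.max() >= threshold else 0.0
    return reward, int(similarities.argmax())

# Toy usage: 3 prototypes in a 4-dimensional embedding space.
prototypes = np.eye(3, 4)
r, which = intrinsic_reward(np.array([0.9, 0.1, 0.0, 0.1]), prototypes)
```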

In this article we extend and strengthen the seminal work by Niyogi, Smale, and Weinberger on the learning of the homotopy type from a sample of an underlying space. In their work, Niyogi, Smale, and Weinberger studied samples of $C^2$ manifolds with positive reach embedded in $\mathbb{R}^d$. We extend their results in the following ways: In the first part of our paper we consider both manifolds of positive reach -- a more general setting than $C^2$ manifolds -- and sets of positive reach embedded in $\mathbb{R}^d$. The sample $P$ of such a set $\mathcal{S}$ does not have to lie directly on it. Instead, we assume that the two one-sided Hausdorff distances -- $\varepsilon$ and $\delta$ -- between $P$ and $\mathcal{S}$ are bounded. We provide explicit bounds in terms of $\varepsilon$ and $ \delta$, that guarantee that there exists a parameter $r$ such that the union of balls of radius $r$ centred at the sample $P$ deformation-retracts to $\mathcal{S}$. In the second part of our paper we study homotopy learning in a significantly more general setting -- we investigate sets of positive reach and submanifolds of positive reach embedded in a \emph{Riemannian manifold with bounded sectional curvature}. To this end we introduce a new version of the reach in the Riemannian setting inspired by the cut locus. Yet again, we provide tight bounds on $\varepsilon$ and $\delta$ for both cases (submanifolds as well as sets of positive reach), exhibiting the tightness by an explicit construction.
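For reference, the two standard notions the abstract relies on, stated in the Euclidean setting (which of $\varepsilon$ and $\delta$ bounds which direction follows the paper's convention and is not fixed here):
\[
\operatorname{reach}(\mathcal{S}) \;=\; \sup\bigl\{\, r \ge 0 \;:\; \text{every } x \in \mathbb{R}^d \text{ with } \operatorname{dist}(x,\mathcal{S}) < r \text{ has a unique nearest point in } \mathcal{S} \,\bigr\},
\]
\[
\vec{d}_H(P,\mathcal{S}) \;=\; \sup_{p \in P} \operatorname{dist}(p,\mathcal{S}),
\qquad
\vec{d}_H(\mathcal{S},P) \;=\; \sup_{s \in \mathcal{S}} \operatorname{dist}(s,P).
\]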

While several previous studies have devised methods for the segmentation of polyps, most of these methods are not rigorously assessed on multi-center datasets. Variability in the appearance of polyps from one center to another, differences in endoscopic instrument grades, and differences in acquisition quality result in methods that perform well on in-distribution test data but poorly on out-of-distribution or underrepresented samples. Unfair models have serious implications and pose a critical challenge to clinical applications. We adapt an implicit bias mitigation method that leverages Bayesian epistemic uncertainties during training to encourage the model to focus on underrepresented sample regions. We demonstrate the potential of this approach to improve generalisability without sacrificing state-of-the-art performance on a challenging multi-center polyp segmentation dataset (PolypGen) with different centers and image modalities.
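One common way (not necessarily the exact method adapted in the paper) to obtain Bayesian epistemic uncertainties for this kind of implicit bias mitigation is Monte Carlo dropout, with the resulting pixel-wise uncertainty map up-weighting the segmentation loss in regions the model is unsure about. A hedged PyTorch sketch with illustrative names:

```python
import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model, images, n_samples=8):
    """Pixel-wise epistemic uncertainty via Monte Carlo dropout: keep dropout
    active and take the variance of the predicted foreground probabilities
    across several stochastic forward passes."""
    model.train()  # keeps dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(images)) for _ in range(n_samples)])
    return probs.var(dim=0)  # higher variance = more epistemic uncertainty

def uncertainty_weighted_loss(logits, targets, uncertainty, alpha=1.0):
    """Up-weight the per-pixel BCE loss in uncertain (often underrepresented)
    regions so the model focuses on them during training."""
    per_pixel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    weights = 1.0 + alpha * uncertainty / (uncertainty.max() + 1e-8)
    return (weights * per_pixel).mean()
```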

When students and users of statistical methods first learn about regression analysis, there is an emphasis on the technical details of models and estimation methods that invariably runs ahead of the purposes for which these models might be used. More broadly, statistics is widely understood to provide a body of techniques for "modelling data", underpinned by what we describe as the "true model myth", according to which the task of the statistician/data analyst is to build a model that closely approximates the true data generating process. By way of our own historical examples and a brief review of mainstream clinical research journals, we describe how this perspective leads to a range of problems in the application of regression methods, including misguided "adjustment" for covariates, misinterpretation of regression coefficients and the widespread fitting of regression models without a clear purpose. We then outline an alternative approach to the teaching and application of regression methods, which begins by focussing on a clear definition of the substantive research question within one of three distinct types: descriptive, predictive, or causal. The simple univariable regression model may be introduced as a tool for description, while the development and application of multivariable regression models should proceed differently according to the type of question. Regression methods will no doubt remain central to statistical practice as they provide a powerful tool for representing variation in a response or outcome variable as a function of "input" variables, but their conceptualisation and usage should follow from the purpose at hand.

A challenging category of robotics problems arises when sensing incurs substantial costs. This paper examines settings in which a robot wishes to limit its observations of state, for instance, motivated by specific considerations of energy management, stealth, or implicit coordination. We formulate the problem of planning under uncertainty when the robot's observations are intermittent but their timing is known via a pre-declared schedule. After establishing the appropriate notion of an optimal policy for such settings, we tackle the problem of jointly optimizing the cumulative execution cost and the number of state observations, both taken in expectation under discounting. To approach this multi-objective optimization problem, we introduce an algorithm that can identify the Pareto front for a class of schedules that are advantageous in the discounted setting. The algorithm proceeds in an accumulative fashion, prepending additions to a working set of schedules and then computing incremental changes to the value functions. Because full exhaustive construction becomes computationally prohibitive for moderate-sized problems, we propose a filtering approach to prune the working set. Empirical results demonstrate that this filtering is effective at reducing computation while incurring only a negligible reduction in quality. In summarizing our findings, we provide a characterization of the run-time versus quality trade-off involved.
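The paper's accumulative algorithm is not reproduced here; the sketch below only illustrates the generic non-dominated filtering that underlies a Pareto front over (expected cost, expected number of observations) pairs, with illustrative values.

```python
def pareto_front(points):
    """Keep only the non-dominated (expected cost, expected #observations)
    pairs, both objectives to be minimized: a point is dominated if some
    other point is no worse in both objectives and strictly better in one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Toy usage: candidate schedules summarized by (cost, number of observations).
candidates = [(10.0, 5), (8.0, 7), (12.0, 4), (9.0, 7), (8.0, 9)]
print(pareto_front(candidates))  # [(10.0, 5), (8.0, 7), (12.0, 4)]
```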

Vector addition systems with states (VASS) are a popular model for concurrent systems. However, many decision problems have prohibitively high complexity. Therefore, it is sometimes useful to consider overapproximating semantics in which these problems can be decided more efficiently. We study an overapproximation, called monus semantics, that slightly relaxes the semantics of decrements: a key property of vector addition systems is that in order to decrement a counter, this counter must have a positive value. In contrast, our semantics allows decrements of zero-valued counters: if such a transition is executed, the counter simply remains zero. It turns out that if only a subset of the transitions is used with monus semantics (and the others with classical semantics), then reachability is undecidable. However, we show that if monus semantics is used throughout, reachability remains decidable. In particular, we show that reachability for VASS with monus semantics is as hard as for classical VASS (i.e. Ackermann-hard), while zero-reachability and coverability are easier (EXPSPACE-complete and NP-complete, respectively). We provide a comprehensive account of the complexity of the general reachability problem, reachability of zero configurations, and coverability under monus semantics. We study these problems in general VASS, two-dimensional VASS, and one-dimensional VASS, with unary and binary counter updates.
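A small sketch contrasting the two semantics on the counter vector of a single transition (the finite-state component is omitted; the formulation as truncated subtraction is the natural reading of the description above):

```python
def classical_step(counters, update):
    """Classical VASS semantics: the transition is enabled only if no counter
    would become negative; returns None when it is disabled."""
    result = [c + u for c, u in zip(counters, update)]
    return result if all(c >= 0 for c in result) else None

def monus_step(counters, update):
    """Monus semantics: decrements of zero-valued counters are allowed and the
    counter simply remains zero (componentwise truncated subtraction)."""
    return [max(0, c + u) for c, u in zip(counters, update)]

# Example: decrementing the second counter, which is already zero.
print(classical_step([2, 0], [-1, -1]))  # None: disabled classically
print(monus_step([2, 0], [-1, -1]))      # [1, 0]: the zero counter stays zero
```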

Calibration is a pivotal aspect of predictive modeling, as it ensures that the predictions closely correspond with what we observe empirically. The contemporary calibration framework, however, is predominantly focused on prediction models where the outcome is a binary variable. We extend the logistic calibration framework to a generalized calibration framework that includes all members of the exponential family of distributions. We propose two different methods to estimate the calibration curve in this setting: a generalized linear model and a non-parametric smoother. In addition, we define two measures that summarize the calibration performance: the generalized calibration slope, which quantifies the amount of over- or underfitting, and the generalized calibration intercept or calibration-in-the-large, which measures the agreement between the global empirical average and the average predicted value. We provide an illustrative example using a simulated data set and thereby show how the generalized calibration framework can be used to assess the calibration of different types of prediction models.
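A minimal sketch of GLM-based estimation of the generalized calibration slope and intercept, assuming a Poisson outcome with log link purely for illustration; statsmodels is one possible implementation, and the paper's exact estimators and non-parametric smoother are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

def generalized_calibration(y, y_pred, family=sm.families.Poisson()):
    """Regress the observed outcome on the linear predictor implied by the
    model's predictions (here the log of the predicted mean, for a log link).
    Slope ~= 1 suggests neither over- nor underfitting; intercept ~= 0
    (calibration-in-the-large) suggests agreement of the overall averages."""
    eta = family.link(y_pred)  # predicted values on the linear-predictor scale
    slope_fit = sm.GLM(y, sm.add_constant(eta), family=family).fit()
    intercept_fit = sm.GLM(y, np.ones(len(y)), family=family,
                           offset=eta).fit()  # slope fixed at 1 via the offset
    return {"calibration_slope": slope_fit.params[1],
            "calibration_intercept": intercept_fit.params[0]}
```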
