
In this short note, we discuss the Barndorff-Nielsen lemma, which is a generalization of the well-known Borel-Cantelli lemma. Although the result stated in the Barndorff-Nielsen lemma is correct, it does not follow from the argument given in the corresponding proof. In this note, we demonstrate this and offer an alternative proof of the lemma. We also propose a new generalization of the Borel-Cantelli lemma.
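For context, the two results can be stated as follows (our notation; the hypotheses of the version discussed in the note may differ slightly). The first Borel-Cantelli lemma reads
$$\sum_{n\ge 1} P(A_n) < \infty \;\Longrightarrow\; P\Big(\limsup_{n\to\infty} A_n\Big) = 0,$$
while the Barndorff-Nielsen lemma weakens the summability hypothesis to
$$P(A_n) \to 0 \quad\text{and}\quad \sum_{n\ge 1} P\big(A_n \setminus A_{n+1}\big) < \infty \;\Longrightarrow\; P\Big(\limsup_{n\to\infty} A_n\Big) = 0.$$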

Related content

In health cohort studies, repeated measures of markers are often used to describe the natural history of a disease. Joint models make it possible to study their evolution while accounting for the possibly informative dropout usually due to clinical events. However, joint modeling developments have mostly focused on continuous Gaussian markers while, in an increasing number of studies, the actual quantity of interest is not directly measurable; it constitutes a latent variable evaluated by a set of observed indicators from questionnaires or measurement scales. Classical examples include anxiety, fatigue, and cognition. In this work, we explain how joint models can be extended to the framework of a latent quantity measured over time by indicators of different natures (e.g., continuous, binary, ordinal). The longitudinal submodel describes the evolution over time of the quantity of interest, defined as a latent process in a structural mixed model, and links the latent process to each observation of the indicators through appropriate measurement models. Simultaneously, the risk of a multi-cause event is modelled via a proportional cause-specific hazard model that includes a function of the mixed-model elements as a linear predictor to account for the association between the latent process and the risk of event. Estimation, carried out in the maximum likelihood framework and implemented in the R package JLPM, has been validated by simulations. The methodology is illustrated in the French cohort on Multiple-System Atrophy (MSA), a rare and fatal neurodegenerative disease, with the study of dysphagia progression over time stopped by the occurrence of death.
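Schematically, such a joint model can be written as follows (the notation is ours and only illustrative of the model class; see the paper and the JLPM documentation for the exact specification):
$$\Lambda_i(t) = X_i(t)^\top \beta + Z_i(t)^\top b_i, \qquad b_i \sim \mathcal{N}(0, B) \quad \text{(latent process, structural mixed model)},$$
with each observed indicator $Y_{ijk}$ linked to $\Lambda_i(t_{ijk})$ through an indicator-specific measurement model (linear for continuous indicators, threshold or cumulative-link for binary and ordinal ones), and
$$\lambda_{ik}(t) = \lambda_{0k}(t) \exp\!\big( W_i^\top \gamma_k + \eta_k\, f\big(\Lambda_i(t), b_i\big) \big) \quad \text{(cause-specific hazard for cause } k\text{)},$$
where $f(\cdot)$ is the function of the mixed-model elements entering the linear predictor.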

This work addresses the cooperation facilitator (CF) model, in which network nodes coordinate through a rate-limited communication device. For independent multiple-access channel (MAC) encoders, the CF model is known to yield significant rate benefits, even when the rate of cooperation is negligible. Specifically, the benefit in MAC sum-rate, as a function of the cooperation rate $C_{CF}$, sometimes has an infinite slope at $C_{CF}=0$. This work studies the question of whether cooperation through a CF can yield similar infinite-slope benefits when applied to internal network encoders, for which dependence among MAC transmitters can be established without the help of the CF. Towards this end, this work studies the CF model when applied to the relay nodes of a single-source, single-terminal diamond network consisting of a broadcast channel followed by a MAC. In the relay channel with orthogonal receiver components, careful generalization of the partial-decode-forward/compress-forward lower bound to the CF model yields sufficient conditions for an infinite-slope benefit. Additional results include the derivation of a family of diamond networks for which the infinite-slope rate benefit derives directly from the properties of the corresponding MAC component studied in isolation.
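To make the "infinite slope" statement concrete in our own notation (an illustration, not the paper's exact definition), write $S(C_{CF})$ for the maximal sum-rate achievable with cooperation rate $C_{CF}$; the phenomenon in question is
$$\lim_{C_{CF}\to 0^{+}} \frac{S(C_{CF}) - S(0)}{C_{CF}} = \infty,$$
i.e., an arbitrarily small amount of cooperation already buys a disproportionately large sum-rate gain.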

How does language differ across one's Facebook status updates vs. one's text messages (SMS)? In this study, we show how Facebook and SMS use differ in psycholinguistic characteristics and how these differences drive downstream analyses, with an illustration of depression diagnosis. We use a sample of consenting participants who shared Facebook status updates, SMS data, and answered a standard psychological depression screener. We quantify domain differences using psychologically driven lexical methods and find that language on Facebook involves more personal concerns, experiences, and content features, while the language in SMS contains more informal and style features. Next, we estimate depression from both text domains, using a depression model trained on Facebook data, and find a drop in accuracy when predicting self-reported depression assessments from the SMS-based depression estimates. Finally, we evaluate a simple domain adaptation correction based on the words driving the cross-platform differences and apply it to the SMS-derived depression estimates, resulting in a significant improvement in prediction. Our work shows the Facebook vs. SMS difference in language use and suggests the necessity of cross-domain adaptation for text-based predictions.
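As a rough illustration of what a word-level correction of this kind could look like, the sketch below scores documents with a linear lexical model after removing the words whose relative frequency differs most between platforms. The function names, the use of unigram frequencies, and the removal rule are our own assumptions, not the authors' implementation.

from collections import Counter

def relative_frequencies(docs):
    """Unigram relative frequencies over a list of tokenized documents."""
    counts = Counter(tok for doc in docs for tok in doc)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def domain_shifted_words(facebook_docs, sms_docs, top_k=200):
    """Words whose relative frequency differs most between the two platforms."""
    fb, sms = relative_frequencies(facebook_docs), relative_frequencies(sms_docs)
    gaps = {w: abs(fb.get(w, 0.0) - sms.get(w, 0.0)) for w in set(fb) | set(sms)}
    return set(sorted(gaps, key=gaps.get, reverse=True)[:top_k])

def corrected_score(doc, word_weights, shifted):
    """Lexical depression score with the most domain-shifted words dropped."""
    return sum(word_weights.get(tok, 0.0) for tok in doc if tok not in shifted)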

This paper focuses on quantifications whose nature, we believe, is generally undervalued within the Knowledge Representation community: they range over a set of concepts, i.e., over intensional objects identified in the ontology. Hence, we extend first-order logic to allow referring to the intension of a symbol, i.e., to the concept it represents. Our formalism is more elaboration tolerant than simpler formalisms that require reification, but it also introduces the possibility of syntactically incorrect formulas. We introduce a guarding mechanism to make formulas syntactically correct, and present a method to verify correctness. The complexity of the method is linear in the length of the formula. We also extend FO($\cdot$) (aka FO-dot), a logic-based knowledge representation language, in a similar way, and show how it helped solve practical problems. The value of expressing intensional statements has been well established in modal logic. We show how our approach expands on the understanding of intensions as studied in modal settings by, e.g., Fitting, in a way that is of value in non-modal settings as well.
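As a schematic illustration of the kind of statement involved (the notation is ours and deliberately simplified; the paper's FO($\cdot$) syntax and guarding mechanism differ in detail), consider "every unary concept that applies to alice also applies to bob", where the guard restricts the quantified variable to unary concepts so that the application $c(x)$ is syntactically meaningful:
$$\forall c\, \big( \mathrm{Concept}(c) \wedge \mathrm{arity}(c) = 1 \;\Rightarrow\; ( c(\mathit{alice}) \Rightarrow c(\mathit{bob}) ) \big).$$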

To reduce the spread of misinformation, social media platforms may take enforcement actions against offending content, such as adding informational warning labels, reducing distribution, or removing content entirely. However, both their actions and their inactions have been controversial and plagued by allegations of partisan bias. The controversy can in part be explained by a lack of clarity about which actions should be taken, as such decisions may not reduce neatly to questions of factual accuracy. When decisions are contested, the legitimacy of decision-making processes becomes crucial to public acceptance. Platforms have tried to legitimize their decisions by following well-defined procedures through rules and codebooks. In this paper, we consider an alternate source of legitimacy -- the will of the people. Surprisingly little is known about what ordinary people want the platforms to do about specific content. We provide empirical evidence about lay raters' preferences for platform actions on 368 news articles. Our results confirm that on many items there is no clear consensus on which actions to take. There is no partisan difference in how many items deserve platform actions, but liberals do prefer somewhat more action on content from conservative sources, and vice versa. We find a clear hierarchy of perceived severity, with inform being the least severe action, followed by reduce, and then remove. We also find that judgments about two holistic properties, misleadingness and harm, could serve as an effective proxy for determining which actions would be approved by a majority of raters. We conclude with the promise of the will of the people, while acknowledging the practical details that would have to be worked out.
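The following sketch shows how a two-property proxy of this kind could be turned into a decision rule. The averaging of the two ratings, the thresholds, and the mapping onto the inform/reduce/remove hierarchy are purely illustrative assumptions, not values estimated in the paper.

def proxy_action(misleadingness, harm,
                 thresholds=((1.5, "inform"), (2.5, "reduce"), (3.5, "remove"))):
    """Map mean holistic ratings (e.g., on a 1-5 scale) to the most severe action
    whose illustrative threshold the combined score reaches; "none" otherwise."""
    score = (misleadingness + harm) / 2.0
    chosen = "none"
    for cutoff, action in thresholds:
        if score >= cutoff:
            chosen = action
    return chosen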

The focus of disentanglement approaches has been on identifying independent factors of variation in data. However, the causal variables underlying real-world observations are often not statistically independent. In this work, we bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data in a large-scale empirical study (including 4260 models). We show and quantify that systematically induced correlations in the dataset are being learned and reflected in the latent representations, which has implications for downstream applications of disentanglement such as fairness. We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
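One simple form of post-hoc correction with a small number of labels is to fit a linear map from the learned latents to the ground-truth factors on the labeled subset and use the mapped representation afterwards. The sketch below shows this idea; it is a minimal stand-in and not necessarily the correction procedure used in the paper.

import numpy as np

def fit_posthoc_correction(latents, factors):
    """Fit a linear map from (possibly correlated) latents to ground-truth factors
    using a small labeled subset, via least squares."""
    Z = np.hstack([latents, np.ones((latents.shape[0], 1))])  # add bias column
    W, *_ = np.linalg.lstsq(Z, factors, rcond=None)
    return W

def apply_correction(latents, W):
    """Project new latents onto the factor-aligned axes learned above."""
    Z = np.hstack([latents, np.ones((latents.shape[0], 1))])
    return Z @ W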

Knowledge graphs (KGs) are of great importance to many real-world applications, but they generally suffer from incomplete information in the form of missing relations between entities. Knowledge graph completion (also known as relation prediction) is the task of inferring missing facts given existing ones. Most existing work approaches this task by maximizing the likelihood of observed instance-level triples. Not much attention, however, is paid to ontological information, such as the type information of entities and relations. In this work, we propose a type-augmented relation prediction (TaRP) method, in which we use both type information and instance-level information for relation prediction. In particular, type information and instance-level information are encoded as prior probabilities and likelihoods of relations, respectively, and are combined by following Bayes' rule. Our proposed TaRP method achieves significantly better performance than state-of-the-art methods on three benchmark datasets: FB15K, YAGO26K-906, and DB111K-174. In addition, we show that TaRP achieves significantly improved data efficiency. More importantly, the type information extracted from a specific dataset generalizes well to other datasets through the proposed TaRP model.
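The Bayes-rule combination can be sketched as follows; the dictionary-based interface and the smoothing constant are our own illustrative choices, not the exact TaRP implementation.

import math

def rank_relations(head_type, tail_type, instance_scores, type_prior):
    """Combine a type-based prior P(r | type(h), type(t)) with an instance-level
    likelihood score for each candidate relation r, then rank by log posterior."""
    posterior = {}
    for r, likelihood in instance_scores.items():
        prior = type_prior.get((head_type, tail_type, r), 1e-9)  # small floor for unseen type pairs
        posterior[r] = math.log(prior) + math.log(likelihood)
    return sorted(posterior, key=posterior.get, reverse=True)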

Product recommendation systems are important for major movie studios during the movie greenlight process and as part of machine learning personalization pipelines. Collaborative Filtering (CF) models have proved to be effective at powering recommender systems for online streaming services with explicit customer feedback data. CF models do not perform well in scenarios in which feedback data is not available, in cold start situations such as new product launches, and in situations with markedly different customer tiers (e.g., high-frequency customers vs. casual customers). Generative natural language models that create useful theme-based representations of an underlying corpus of documents can be used to represent new product descriptions, such as new movie plots. When combined with CF, they have been shown to increase performance in cold start situations. Outside of those cases in which explicit customer feedback is available, however, recommender engines must rely on binary purchase data, which materially degrades performance. Fortunately, purchase data can be combined with product descriptions to generate meaningful representations of products and customer trajectories in a convenient product space in which proximity represents similarity. Learning to measure the distance between points in this space can be accomplished with a deep neural network that trains on customer histories and on dense vectorizations of product descriptions. We developed a system based on Collaborative (Deep) Metric Learning (CML) to predict the purchase probabilities of new theatrical releases. We trained and evaluated the model using a large dataset of customer histories, and tested the model on a set of movies released outside of the training window. Initial experiments show gains relative to models that do not train on collaborative preferences.
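A minimal sketch of the kind of metric-learning objective involved is given below, assuming PyTorch; the architecture, dimensions, and loss settings are illustrative assumptions rather than the production system described above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CMLRecommender(nn.Module):
    """Embed customers and products into a shared metric space in which small
    distance means high purchase affinity; products enter through a dense text
    vector (e.g., from a topic or language model over plot descriptions)."""

    def __init__(self, n_customers, text_dim, embed_dim=64):
        super().__init__()
        self.customer_emb = nn.Embedding(n_customers, embed_dim)
        self.product_proj = nn.Sequential(
            nn.Linear(text_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, customer_ids, product_text_vecs):
        return self.customer_emb(customer_ids), self.product_proj(product_text_vecs)

def triplet_step(model, customer_ids, purchased_text, not_purchased_text, margin=1.0):
    """One training step: a purchased product should sit closer to the customer
    than a non-purchased one, by at least the margin."""
    anchor, pos = model(customer_ids, purchased_text)
    _, neg = model(customer_ids, not_purchased_text)
    return F.triplet_margin_loss(anchor, pos, neg, margin=margin)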

We report an evaluation of the effectiveness of existing knowledge base embedding models for relation prediction and for relation extraction on a wide range of benchmarks. We also introduce a new benchmark, which is much larger and more complex than previous ones, to help validate the effectiveness of models on both tasks. The results demonstrate that knowledge base embedding models are generally effective for relation prediction but, with the existing strategies, unable to improve the state-of-the-art neural relation extraction model, while also pointing out the limitations of existing methods.

Using a low-dimensional vector space to represent words has been very effective in many NLP tasks. However, this approach does not work well when faced with rare and unseen words. In this paper, we propose to leverage the knowledge in a semantic dictionary, in combination with morphological information, to build an enhanced vector space. We obtain an improvement of 2.3% over the state-of-the-art HeidelTime system in temporal expression recognition, and a large gain in other named entity recognition (NER) tasks. The semantic dictionary HowNet alone also shows promising results in computing lexical similarity.
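A rough sketch of how dictionary and morphological information could back off for rare words is shown below; the function names and the back-off order are illustrative assumptions, not the paper's exact construction.

import numpy as np

def enhanced_vector(word, word_vectors, sememe_vectors, sememes_of, counts, min_count=5):
    """Return the word's own embedding when it is frequent enough; otherwise fall
    back to the average of its dictionary sememe vectors, and finally to the
    average of its character/morpheme embeddings."""
    if counts.get(word, 0) >= min_count and word in word_vectors:
        return word_vectors[word]
    sememes = sememes_of.get(word, [])
    if sememes:
        return np.mean([sememe_vectors[s] for s in sememes], axis=0)
    pieces = [ch for ch in word if ch in word_vectors]
    if pieces:
        return np.mean([word_vectors[ch] for ch in pieces], axis=0)
    return np.zeros(next(iter(word_vectors.values())).shape)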
