
During recent crises like COVID-19, microblogging platforms have become popular channels for affected people seeking assistance such as medical supplies and rescue operations from emergency responders and the public. Despite this common practice, the affordances of microblogging services for help-seeking during crises that need immediate attention are not well understood. To fill this gap, we analyzed 8K posts from COVID-19 patients or caregivers requesting urgent medical assistance on Weibo, the largest microblogging site in China. Our mixed-methods analyses suggest that existing microblogging functions need to be improved in multiple aspects to sufficiently facilitate help-seeking in emergencies, including the ability to search and track requests, ease of use, and privacy protection. We also find that people tend to stick to certain well-established functions for publishing requests, even after better alternatives emerge. These findings have implications for designing microblogging tools to better support help requesting and responding during crises.

Related Content

Drawing a direct analogy with the well-studied vibration or elastic modes, we introduce an object's fracture modes, which constitute its preferred or most natural ways of breaking. We formulate a sparsified eigenvalue problem, which we solve iteratively to obtain the n lowest-energy modes. These can be precomputed for a given shape to obtain a prefracture pattern that can substitute for the state of the art in real-time applications at no runtime cost but with significantly greater realism. Furthermore, any real-time impact can be projected onto our modes to obtain impact-dependent fracture patterns without the need for any online crack-propagation simulation. We not only introduce this theoretically novel concept, but also show its fundamental and practical superiority in a diverse set of examples and contexts.
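To make the two computational steps concrete, here is a minimal sketch, not the paper's implementation: precomputing the n lowest-energy modes of a sparse symmetric operator with a standard eigensolver, then projecting a runtime impact vector onto that basis. The operator K, the impact vector, and all names are illustrative placeholders (a 1D Laplacian stands in for the fracture-energy operator).

```python
# Hedged sketch: lowest-energy modes of a sparse SPD operator, plus
# projection of a runtime impact onto the precomputed mode basis.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def precompute_modes(K, n_modes=8):
    """Return the n_modes lowest-energy eigenpairs of a sparse SPD operator."""
    # Shift-invert around sigma=0 with which='LM' returns the eigenvalues
    # nearest zero, i.e. the lowest-energy modes, and converges faster
    # than which='SM' for stiffness-like matrices.
    vals, vecs = eigsh(K, k=n_modes, sigma=0, which='LM')
    return vals, vecs  # vecs[:, i] is the i-th mode

def project_impact(impact, modes):
    """Per-mode activation of a runtime impact vector (no simulation)."""
    return modes.T @ impact

# Toy stand-in operator: a 1D Laplacian (symmetric positive definite).
n = 100
K = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csc')
vals, modes = precompute_modes(K, n_modes=4)
coeffs = project_impact(np.random.default_rng(0).standard_normal(n), modes)
print(vals, coeffs)
```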

The Covid-19 pandemic has caused severe damage and disruption in social, economic, and health systems (among others), and posed unprecedented challenges to public health and policy/decision-makers concerning the design and implementation of measures to mitigate its strong negative impacts. The Portuguese health authorities are currently using some decision analysis-like techniques to assess the impact of this pandemic and implementing measures for each county, region, or the whole country. Such decision tools drew some criticism, and many stakeholders asked for novel approaches, in particular ones that take into consideration dynamical changes in pandemic behavior arising, e.g., from new virus variants or vaccines. A multidisciplinary team formed by researchers of the Covid-19 Committee of Instituto Superior Técnico at Universidade de Lisboa (CCIST analysts team) and medical doctors from the Crisis Office of the Portuguese Medical Association (GCOM experts team) joined efforts to propose a new tool to help politicians and decision-makers combat the pandemic. This paper presents the main steps and elements that led to the construction of a pandemic impact assessment composite indicator, applied to the particular case of Covid-19 in Portugal. A multiple criteria approach based on an additive multi-attribute value theory (MAVT) aggregation model was used to construct the pandemic assessment composite indicator (PACI). The parameters of the additive model were built through a sociotechnical co-constructive interactive process between CCIST and GCOM team members. The deck of cards method was the technical tool adopted to help build the value functions and assess the criteria weights.
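The additive MAVT aggregation has a simple computational shape, sketched below under stated assumptions: each criterion's raw reading passes through a value function into [0, 1], and the composite indicator is the weighted sum. The criteria names, breakpoints, and weights here are illustrative placeholders, not the parameters elicited by the CCIST/GCOM teams.

```python
# Hedged sketch of an additive MAVT composite indicator:
# PACI = sum_i w_i * v_i(x_i), with piecewise-linear value functions.
import numpy as np

def piecewise_value(x, xs, vs):
    """Piecewise-linear value function through elicited breakpoints."""
    return float(np.interp(x, xs, vs))

# Illustrative criteria: (breakpoints in natural units, values in [0, 1]).
criteria = {
    "incidence_per_100k": ([0, 120, 480, 960], [1.0, 0.7, 0.3, 0.0]),
    "Rt":                 ([0.6, 1.0, 1.4, 2.0], [1.0, 0.6, 0.2, 0.0]),
    "icu_occupancy_pct":  ([0, 25, 60, 100],    [1.0, 0.8, 0.3, 0.0]),
}
# Weights as could come out of a deck-of-cards elicitation (sum to 1).
weights = {"incidence_per_100k": 0.4, "Rt": 0.35, "icu_occupancy_pct": 0.25}

def composite_indicator(readings):
    """Additive aggregation over all criteria."""
    return sum(weights[c] * piecewise_value(readings[c], *criteria[c])
               for c in criteria)

print(composite_indicator(
    {"incidence_per_100k": 240, "Rt": 1.1, "icu_occupancy_pct": 35}))
```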

Annotated data is an essential ingredient in natural language processing for training and evaluating machine learning models. It is therefore very desirable for the annotations to be of high quality. Recent work, however, has shown that several popular datasets contain a surprising amount of annotation errors or inconsistencies. To alleviate this issue, many methods for annotation error detection have been devised over the years. While researchers show that their approaches work well on their newly introduced datasets, they rarely compare their methods to previous work or on the same datasets. This raises strong concerns about methods' general performance and makes it difficult to assess their strengths and weaknesses. We therefore reimplement 18 methods for detecting potential annotation errors and evaluate them on 9 English datasets for text classification as well as token and span labeling. In addition, we define a uniform evaluation setup, including a new formalization of the annotation error detection task, an evaluation protocol, and general best practices. To facilitate future research and reproducibility, we release our datasets and implementations in an easy-to-use, open-source software package.
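For flavor, here is a minimal sketch of one simple baseline in this family, not a specific method from the paper: train a classifier with cross-validation and flag items whose gold label receives low out-of-fold probability. The data is synthetic with injected label flips.

```python
# Hedged sketch of a confidence-based annotation error detector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
y_noisy = y_true.copy()
flip = rng.choice(500, size=25, replace=False)   # inject 5% label errors
y_noisy[flip] = 1 - y_noisy[flip]

# Out-of-fold probability of each item's *annotated* label.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y_noisy,
                          cv=5, method="predict_proba")
gold_proba = proba[np.arange(len(y_noisy)), y_noisy]

# Rank items by how unlikely their annotation looks; inspect the top-k.
suspects = np.argsort(gold_proba)[:25]
recall = len(set(suspects) & set(flip)) / len(flip)
print(f"recall of injected errors in top-25 suspects: {recall:.2f}")
```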

There are many examples of cases where access to improved models of human behavior and cognition has allowed creation of robots that can better interact with humans, not least in road vehicle automation, a rapidly growing area of research. Human-robot interaction (HRI) therefore provides an important applied setting for human behavior modeling - but given the vast complexity of human behavior, how complete and accurate do these models need to be? Here, we outline some possible ways of thinking about this problem, starting from the suggestion that modelers need to keep the right end goal in sight: a successful human-robot interaction, in terms of safety, performance, and human satisfaction. Efforts toward model completeness and accuracy should be focused on those aspects of human behavior to which interaction success is most sensitive. We emphasise that identifying those aspects is a difficult scientific objective in its own right, distinct for each given HRI context. We propose and exemplify an approach to formulating a priori hypotheses on this matter, for cases where robots are to be involved in interactions that currently take place between humans, such as in automated driving. Our perspective also highlights some possible risks of over-reliance on machine-learned models of human behavior in HRI, and how to mitigate those risks.

Natural language understanding (NLU) has made massive progress driven by large benchmarks, but benchmarks often leave a long tail of infrequent phenomena underrepresented. We reflect on the question: have transfer learning methods sufficiently addressed the poor performance of benchmark-trained models on the long tail? We conceptualize the long tail using macro-level dimensions (e.g., underrepresented genres, topics, etc.), and perform a qualitative meta-analysis of 100 representative papers on transfer learning research for NLU. Our analysis asks three questions: (i) Which long tail dimensions do transfer learning studies target? (ii) Which properties of adaptation methods help improve performance on the long tail? (iii) Which methodological gaps have the greatest negative impact on long tail performance? Our answers highlight major avenues for future research in transfer learning for the long tail. Lastly, using our meta-analysis framework, we perform a case study comparing the performance of various adaptation methods on clinical narratives, which provides interesting insights that may enable us to make progress along these future avenues.

Most children infected with COVID-19 have no or mild symptoms and recover on their own, but some pediatric COVID-19 patients need to be hospitalized or even to receive intensive medical care (e.g., invasive mechanical ventilation or cardiovascular support) to recover from the illness. It is therefore critical to predict the severe health risk that COVID-19 infection poses to children, in order to provide precise and timely medical care for vulnerable pediatric COVID-19 patients. However, predicting the severe health risk for COVID-19 patients, including children, remains a significant challenge because many of the underlying medical factors affecting the risk are still largely unknown. In this work, instead of searching for a small number of most useful features to make predictions, we design a novel large-scale bag-of-words-like method to represent the various medical conditions and measurements of COVID-19 patients. After simple feature filtering based on logistic regression, the large feature set is used with a deep learning method to predict both the hospitalization risk for COVID-19 infected children and the severe complication risk for hospitalized pediatric COVID-19 patients. The method was trained and tested on the datasets of the Biomedical Advanced Research and Development Authority (BARDA) Pediatric COVID-19 Data Challenge held from Sept. 15 to Dec. 17, 2021. The results show that the approach predicts the risk of hospitalization and severe complication for pediatric COVID-19 patients rather accurately, and that deep learning is more accurate than other machine learning methods.
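The pipeline shape described above can be sketched in a few lines, under stated assumptions: a bag-of-words encoding over medical condition/measurement tokens, a logistic-regression pass to filter weak features, then a neural classifier on the reduced set. The tokens and labels below are synthetic placeholders, not BARDA challenge data.

```python
# Hedged sketch of the bag-of-words -> LR filter -> deep model pipeline.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Each patient record is a "document" of condition/measurement tokens.
records = ["fever cough hypoxia", "fever", "asthma fever cough",
           "hypoxia tachycardia", "cough", "asthma"] * 50
labels = np.array([1, 0, 1, 1, 0, 0] * 50)  # 1 = hospitalized (toy)

X = CountVectorizer().fit_transform(records)

# Feature filtering: keep tokens whose |LR coefficient| clears a threshold.
lr = LogisticRegression(max_iter=1000).fit(X, labels)
keep = np.flatnonzero(np.abs(lr.coef_[0]) > 0.1)
X_kept = X[:, keep]

# A small neural network stands in for the paper's deep learning model.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_kept, labels)
print(f"kept {len(keep)} features, train acc {clf.score(X_kept, labels):.2f}")
```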

Covid-19 has radically changed our lives, with many governments and businesses mandating work-from-home (WFH) and remote education. However, work-from-home policy is not always known globally, and even when enacted, compliance can vary. These uncertainties suggest a need to measure WFH and confirm actual policy implementation. We present new algorithms that detect WFH from changes in network use during the day. We show that change-sensitive networks reflect mobile computer use, detecting WFH from changes in network intensity, the diurnal and weekly patterns of IP address response. Our algorithm provides new analysis of existing, continuous, global scans of most of the responsive IPv4 Internet (about 5.1M /24 blocks). Reuse of existing data allows us to study the emergence of Covid-19, revealing global reactions. We demonstrate the algorithm on networks with known ground truth, evaluate the data reconstruction and algorithm design choices with studies of real-world data, and validate our approach by testing random samples against news reports. In addition to Covid-related WFH, we also find other government-mandated lockdowns. Our results are the first use of network intensity to infer real-world behavior and policies.
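To illustrate the underlying signal, here is a minimal sketch under stated assumptions: track the diurnal intensity of address responsiveness per network and flag the day the day/night ratio shifts. The series is synthetic; the actual system works on global IPv4 scan data, not this toy.

```python
# Hedged sketch: diurnal intensity ratio plus a naive changepoint check.
import numpy as np

rng = np.random.default_rng(1)
days = 60
# Responsive-address counts per day; after day 30, a work-from-home shift
# raises daytime residential responsiveness in this toy network.
day_counts = np.r_[rng.poisson(100, 30), rng.poisson(160, 30)]
night_counts = rng.poisson(90, days)

ratio = day_counts / night_counts  # diurnal intensity signal

def changepoint(x, window=7):
    """Pick the day with the largest jump between adjacent window means."""
    scores = [abs(x[t:t + window].mean() - x[t - window:t].mean())
              for t in range(window, len(x) - window)]
    return window + int(np.argmax(scores))

print("detected shift on day", changepoint(ratio))
```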

Interpretability methods are developed to understand the working mechanisms of black-box models, which is crucial to their responsible deployment. Fulfilling this goal requires both that the explanations generated by these methods are correct and that people can easily and reliably understand them. While the former has been addressed in prior work, the latter is often overlooked, resulting in informal model understanding derived from a handful of local explanations. In this paper, we introduce explanation summary (ExSum), a mathematical framework for quantifying model understanding, and propose metrics for its quality assessment. On two domains, ExSum highlights various limitations in the current practice, helps develop accurate model understanding, and reveals easily overlooked properties of the model. We also connect understandability to other properties of explanations such as human alignment, robustness, and counterfactual minimality and plausibility.
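A minimal sketch in the spirit of this framework, with illustrative predicates and synthetic data rather than ExSum's actual definitions: a rule pairs an applicability predicate over (token, saliency) instances with a behavior predicate over saliencies, and we can score how often the rule applies and how often its claimed behavior holds when it does.

```python
# Hedged sketch of rule metrics over local explanations.
import numpy as np

# Local explanations: (token, saliency score) pairs from some explainer.
rng = np.random.default_rng(2)
tokens = rng.choice(["not", "good", "movie", "the"], size=1000)
saliency = rng.normal(0, 0.1, size=1000)
saliency[tokens == "not"] -= 0.4   # toy: negations get negative saliency

applies = tokens == "not"                        # applicability predicate
behaves = (saliency > -0.8) & (saliency < -0.1)  # claimed behavior range

coverage = applies.mean()            # how much of the data the rule covers
validity = behaves[applies].mean()   # how often the claim holds when it applies
print(f"coverage={coverage:.2f}, validity={validity:.2f}")
```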

With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning and specifically deep learning methods can be exploited to analyse healthcare data. A major limitation of existing methods has been the focus on grid-like data; however, the structure of physiological recordings is often irregular and unordered, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system, with interactive nodes connected by edges whose weights can be either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application including functional connectivity, anatomical structure and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
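To make concrete how edge weights (e.g., temporal associations or anatomical junctions) enter the computation, here is a minimal sketch of one standard graph-convolution layer, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), on a toy graph; the sensors and features are placeholders.

```python
# Hedged sketch of a single GCN layer with symmetric normalization.
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: normalize adjacency, propagate, activate."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy graph: 4 sensors, edge weights = correlations between recordings.
A = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.2, 0.0],
              [0.1, 0.2, 0.0, 0.7],
              [0.0, 0.0, 0.7, 0.0]])
H = np.random.default_rng(3).standard_normal((4, 8))   # node features
W = np.random.default_rng(4).standard_normal((8, 16))  # learnable weights
print(gcn_layer(A, H, W).shape)  # (4, 16)
```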

Clinical Named Entity Recognition (CNER) aims to identify and classify clinical terms such as diseases, symptoms, treatments, exams, and body parts in electronic health records, which is a fundamental and crucial task for clinical and translational research. In recent years, deep neural networks have achieved significant success in named entity recognition and many other Natural Language Processing (NLP) tasks. Most of these algorithms are trained end-to-end, and can automatically learn features from large-scale labeled datasets. However, these data-driven methods typically lack the capability of processing rare or unseen entities. Previous statistical methods and feature engineering practice have demonstrated that human knowledge can provide valuable information for handling rare and unseen cases. In this paper, we address the problem by incorporating dictionaries into deep neural networks for the Chinese CNER task. Two different architectures that extend the Bi-directional Long Short-Term Memory (Bi-LSTM) neural network and five different feature representation schemes are proposed to handle the task. Computational results on the CCKS-2017 Task 2 benchmark dataset show that the proposed method achieves highly competitive performance compared with state-of-the-art deep learning methods.
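One plausible dictionary feature scheme, sketched below as an illustration rather than the paper's exact design: tag each character with a BMES-style position marker when it falls inside a dictionary match, yielding a per-character one-hot vector to concatenate with character embeddings before the Bi-LSTM. The dictionary and sentence are toy examples.

```python
# Hedged sketch of BMES dictionary features for character-level CNER.
import numpy as np

dictionary = {"糖尿病", "头痛"}            # toy clinical-term dictionary
TAGS = {"B": 0, "M": 1, "E": 2, "S": 3, "O": 4}

def dict_features(sentence):
    tags = ["O"] * len(sentence)
    for term in dictionary:
        start = sentence.find(term)
        while start != -1:
            if len(term) == 1:
                tags[start] = "S"
            else:
                tags[start] = "B"
                tags[start + len(term) - 1] = "E"
                for i in range(start + 1, start + len(term) - 1):
                    tags[i] = "M"
            start = sentence.find(term, start + 1)
    feats = np.zeros((len(sentence), len(TAGS)))
    feats[np.arange(len(sentence)), [TAGS[t] for t in tags]] = 1.0
    return tags, feats  # one-hot rows, one per character

tags, feats = dict_features("患者有糖尿病和头痛")
print(tags)  # ['O', 'O', 'O', 'B', 'M', 'E', 'O', 'B', 'E']
```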
