
Solo research is the result of individual authorship decisions that accumulate over time and accompany academic careers. This research is the first to comprehensively study the gender solo research gap among all internationally visible scientists within an entire national higher education system: we examine the gap through individual publication portfolios constructed for every Polish university professor who holds at least a doctoral degree and was internationally visible in the decade 2009-2018. Solo research is a special case of academic publishing in which scientists compete individually, sending clear signals about their research ability. Solo research has been expected to disappear for half a century, yet it persists. Our focus is on how male and female scientists of various biological ages, age groups, academic positions, and institutional types make use of, and benefit from, solo publishing. We tested the hypothesis that male and female scientists differ in their use of solo publishing, and we term this difference the gender solo research gap. The highest share of solo research for both genders is found among middle-aged scientists working as associate professors, rather than among young scientists as reported in previous studies. The low journal prestige of female solo publications may suggest a propensity of women scientists to choose less competitive publication outlets. Our unique biographical, administrative, publication, and citation database (the Polish Science Observatory) contains metadata on all Polish scientists present in Scopus (N = 25,463) and on their 158,743 Scopus-indexed articles published in 2009-2018, including 18,900 solo articles.

Related content

LESS is an open-source style language influenced by Sass. Strictly speaking, LESS is a nested metalanguage: any CSS statement that conforms to the CSS syntax is also valid Less code.
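To make the superset relationship concrete, here is a minimal sketch, assuming the third-party Python compiler lesscpy (an assumption, not mentioned above): plain CSS passes through essentially unchanged, while nested Less rules are flattened into valid CSS.

```python
import io
import lesscpy  # assumed third-party Less-to-CSS compiler: pip install lesscpy

# A syntactically valid CSS rule is also valid Less: it compiles through.
plain_css = "a { color: red; }"
print(lesscpy.compile(io.StringIO(plain_css), minify=True))
# roughly: a{color:red;}

# Less adds nesting on top of CSS; the compiler flattens it back out.
nested_less = """
nav {
  ul { margin: 0; }
  a { color: blue; }
}
"""
print(lesscpy.compile(io.StringIO(nested_less), minify=True))
# roughly: nav ul{margin:0;} nav a{color:blue;}
```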

Biometric technology has been increasingly deployed over the past decade, offering greater security and convenience than traditional methods of personal recognition. Although the quality of biometric signals heavily affects a biometric system's performance, prior research on evaluating quality is limited. Quality is a critical issue in security, especially in adverse scenarios involving surveillance cameras, forensics, portable devices, or remote access through the Internet. This article analyzes which factors negatively impact biometric quality, how to overcome them, and how to incorporate quality measures into biometric systems. A review of the state of the art in these matters provides an overall framework for the challenges of biometric quality.
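One common way to incorporate quality measures into a biometric system, as mentioned above, is quality-weighted score fusion. The following is a minimal illustrative sketch; the function name, value ranges, and the linear weighting scheme are assumptions for illustration, not the article's method.

```python
def quality_weighted_fusion(scores, qualities):
    """Fuse per-matcher similarity scores, weighting by signal quality.

    scores:    match scores in [0, 1], one per matcher/modality
    qualities: quality estimates in [0, 1] for the corresponding samples
    A low-quality sample (e.g., a blurry face image from a surveillance
    camera) contributes less to the fused decision score.
    """
    total = sum(qualities)
    if total == 0:
        raise ValueError("all quality estimates are zero")
    return sum(s * q for s, q in zip(scores, qualities)) / total

# Hypothetical example: the fingerprint is sharp, the face capture is poor,
# so the fused score stays close to the fingerprint score.
fused = quality_weighted_fusion(scores=[0.91, 0.40], qualities=[0.95, 0.20])
print(round(fused, 3))  # 0.821
```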

Deep learning belongs to the field of artificial intelligence, in which machines perform tasks that typically require some form of human intelligence. Similar to the basic structure of the brain, a deep learning algorithm consists of an artificial neural network that resembles the biological brain's structure. Mimicking the way humans learn through their senses, deep learning networks are fed with (sensory) data such as texts, images, videos, or sounds. These networks outperform state-of-the-art methods on a range of tasks, and as a result the field has grown exponentially in recent years, producing well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already returned over 11,000 results for the search term 'deep learning' as of Q3 2020, with around 90% of these results from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain, and in the near future it may become difficult to obtain an overview of even a subfield. However, several review articles on deep learning exist that focus on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks such as object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines. The categories (computer vision, language processing, medical informatics, and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges, and future directions for every sub-category.
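The PubMed counts cited above can be reproduced programmatically. The following is a minimal sketch (not from the original article) using NCBI's public E-utilities `esearch` endpoint; the query string and date range are illustrative assumptions.

```python
import requests

# NCBI E-utilities: count PubMed records matching a query.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term, mindate=None, maxdate=None):
    """Return the number of PubMed records matching `term`.

    Dates are strings like '2017' or '2020/09/30'; both are optional
    but must be given together (publication-date filter).
    """
    params = {"db": "pubmed", "term": term, "retmode": "json"}
    if mindate and maxdate:
        params.update({"datetype": "pdat",
                       "mindate": mindate, "maxdate": maxdate})
    reply = requests.get(ESEARCH, params=params, timeout=30)
    reply.raise_for_status()
    return int(reply.json()["esearchresult"]["count"])

if __name__ == "__main__":
    total = pubmed_count("deep learning")
    recent = pubmed_count("deep learning", "2017", "2020")
    print(f"total: {total}, recent window: {recent}")
```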

Within the environmental context, several simulation-based tools have been proposed to analyze the physical phenomena of heat and mass transfer in porous materials. However, it remains an open challenge to propose tools that capture the dominant processes without requiring computations. This article therefore explores the advantages of a dimensionless analysis obtained by scaling the governing equations of heat and mass transfer. The proposed methodology introduces dimensionless numbers and their nonlinear distortions. This investigation highlights the preponderant phenomena, making it possible to (i) compare different categories of materials, (ii) evaluate the competition between heat and mass transfer for each material, or (iii) describe the transfer in multi-layered wall configurations. It also permits defining hygrothermal kinetic, geometric, and dynamic similarities among different physical materials, so that equivalent systems can be characterized in the framework of experimental or wall designs. Three cases are presented for similarity studies in terms of (i) equivalent material length, (ii) time of heat and mass transfer, and (iii) experimental configurations. All these advantages are illustrated in the article using 49 building materials separated into 7 categories.
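To illustrate the kind of scaling the abstract refers to, here is a minimal, generic worked example, not the article's coupled hygrothermal system: nondimensionalizing the one-dimensional heat diffusion equation, where a single dimensionless group emerges.

```latex
% Dimensional form, with thermal diffusivity \alpha = k / (\rho c):
\[
  \frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2},
  \qquad \alpha = \frac{k}{\rho c}.
\]
% Introduce reference scales: length L, time t_0, temperature rise \Delta T:
\[
  u = \frac{T - T_0}{\Delta T}, \qquad
  x^\ast = \frac{x}{L}, \qquad
  t^\ast = \frac{t}{t_0}.
\]
% Substitution yields a dimensionless equation governed by one
% dimensionless number, the Fourier number Fo, which compares the
% diffusion time scale to the chosen observation time:
\[
  \frac{\partial u}{\partial t^\ast}
    = \mathrm{Fo}\,\frac{\partial^2 u}{\partial x^{\ast 2}},
  \qquad \mathrm{Fo} = \frac{\alpha\, t_0}{L^2}.
\]
```

Comparing such dimensionless numbers across materials is what allows dominant processes to be ranked without running full simulations.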

Computer vision and multimedia information processing have made extreme progress within the last decade, and many tasks can now be performed with human-level accuracy or better. This is because we leverage huge amounts of data available for training, we have enormous computer processing power available, and we have seen the evolution of machine learning as a suite of techniques for processing data and delivering accurate vision-based systems. What kinds of applications use this processing? Examples include autonomous vehicle navigation, security applications such as searching CCTV footage, and medical image analysis for healthcare diagnostics. One application that is not widespread is image or video search carried out directly by users. In this paper we present the need for such image finding or re-finding by examining human memory and when it fails, thus motivating the need for a different approach to image search, which we outline along with the computer vision requirements to support it.

Over the past decades, the machine learning and deep learning community has celebrated great achievements on challenging tasks such as image classification. The deep architecture of artificial neural networks, together with the plenitude of available data, makes it possible to describe highly complex relations. Yet it is still impossible to fully capture what a deep learning model has learned, or to verify that it operates fairly and without creating bias, especially in critical tasks, for instance those arising in the medical field. One example of such a task is the detection of distinct facial expressions, called Action Units, in facial images. For this specific task, our research aims to provide transparency regarding bias, specifically in relation to gender and skin color. We train a neural network for Action Unit classification and analyze its performance quantitatively, based on its accuracy, and qualitatively, based on heatmaps. A structured review of our results indicates that we are able to detect bias. Although we cannot conclude from our results that the lower classification performance emerged solely from gender and skin color bias, these biases must be addressed, which is why we end by giving suggestions on how the detected bias can be avoided.
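As a minimal sketch of the quantitative side of such a bias analysis (the grouping attribute, names, and data below are illustrative assumptions, not the paper's code), one can compare classification accuracy across demographic subgroups:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Per-subgroup accuracy: a simple quantitative bias probe.

    y_true, y_pred: binary Action Unit labels/predictions, shape (n,)
    groups: demographic label per sample, shape (n,)
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Hypothetical usage: a large accuracy gap between subgroups is a
# red flag that warrants qualitative follow-up (e.g., heatmaps).
acc = accuracy_by_group(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 0, 0, 0],
    groups=["female", "female", "female", "male", "male", "male"],
)
print(acc)  # {'female': 1.0, 'male': 0.333...}
```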

Purpose: The purpose of this paper is to present empirical evidence on the implementation, acceptance, and quality-related aspects of research information systems (RIS) in academic institutions.

Design/methodology/approach: The study is based on a 2018 survey of 160 German universities and research institutions.

Findings: The paper presents recent figures on the implementation of RIS in German academic institutions, including results on satisfaction, perceived usefulness, and ease of use. It also contains information about perceived data quality and preferred quality management. RIS acceptance can be achieved only if the highest possible data quality is ensured. For this reason, the impact of data quality on the technology acceptance model (TAM) is examined, and the relation between the level of data quality and user acceptance of the associated institutional RIS is addressed.

Research limitations/implications: The data provide empirical elements for a better understanding of the role of data quality in the acceptance of RIS, within the framework of a TAM. The study focuses on commercial and open-source solutions; in-house developments were excluded. Also, mainly because of the small sample size, the data analysis was limited to descriptive statistics.

Practical implications: The results are helpful for the management of RIS projects, for increasing acceptance of and satisfaction with the system, and for the further development of RIS functionalities.

Originality/value: The number of empirical studies on the implementation and acceptance of RIS is low, and very few address the question of data quality in this context. This study aims to fill that gap.

As software becomes more complex and assumes an even greater role in our lives, formal verification is set to become the gold standard for securing software systems into the future, since it can guarantee the absence of errors and of entire classes of attack. Recent advances in formal verification are being used to secure everything from unmanned drones to the internet. At the same time, the usable security research community has made huge progress in improving the usability of security products and end-users' comprehension of security issues. However, there have been no human-centered studies focused on the impact of formal verification on the use and adoption of formally verified software products. We propose a research agenda to fill this gap and to contribute the first collection of studies on people's mental models of formal verification and the associated security and privacy guarantees and threats. The proposed research has the potential to increase the adoption of more secure products, and it can be used directly by the security and formal methods communities to create more effective and secure software tools.

The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress over the past couple of decades. However, most work has been devoted to point sets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time-series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves which generalizes both the discrete Fr\'echet and Dynamic Time Warping distances, the most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Query time complexity is comparable to, or significantly improved over, existing methods; our algorithm is especially efficient when the length of the curves is bounded.
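For reference, the discrete Fr\'echet distance that the above notion generalizes can be computed by a standard dynamic program over the two vertex sequences. The following is a minimal sketch of that classical algorithm, for illustration only; it is not the paper's ANN data structure.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polygonal curves P and Q.

    P, Q: arrays of shape (n, d) and (m, d) holding curve vertices.
    Standard O(n*m) dynamic program; replacing the outer max(...) with
    a sum over d[i, j] would give Dynamic Time Warping instead.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    n, m = d.shape
    ca = np.empty((n, m))
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1],
                               ca[i, j - 1]),
                           d[i, j])
    return float(ca[-1, -1])

# Example: two parallel three-vertex curves in the plane.
print(discrete_frechet([[0, 0], [1, 0], [2, 0]],
                       [[0, 1], [1, 1], [2, 1]]))  # 1.0
```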

Multispectral imaging is an important technique for improving the readability of written or printed text where the letters have faded, either through deliberate erasure or simply through the ravages of time. Often the text can be read simply by looking at individual wavelengths, but in some cases the images need further enhancement to maximize the chances of reading the text. There are many possible enhancement techniques, and this paper assesses and compares an extended set of dimensionality reduction methods for image processing. We assess 15 dimensionality reduction methods on two different manuscripts. The assessment was performed both subjectively, by asking scholars who were experts in the languages used in the manuscripts which of the techniques they preferred, and objectively, by using the Davies-Bouldin and Dunn indexes to assess the quality of the resulting image clusters. We found that the Canonical Variates Analysis (CVA) method, for which we used a Matlab implementation that we had previously employed to enhance multispectral images, was indeed superior to all the other tested methods. However, it is very likely that other approaches will be more suitable in specific circumstances, so we would still recommend trying a range of these techniques. In particular, CVA is a supervised clustering technique, so it requires considerably more user time and effort than an unsupervised technique such as the much more commonly used Principal Component Analysis (PCA). If the results from PCA are adequate to allow a text to be read, the added effort required for CVA may not be justified. For the purposes of comparing computational times and image results, a CVA method was also implemented in the C programming language using the GNU (GNU's Not Unix) Scientific Library (GSL) and the OpenCV (Open Source Computer Vision) library.
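As a rough sketch of the unsupervised end of this pipeline (illustrative only; the paper's implementations are in Matlab and C, and the random cube below stands in for real manuscript data), PCA and the Davies-Bouldin index are both available off the shelf in scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import davies_bouldin_score

# Hypothetical multispectral cube: height x width x spectral bands.
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 12))

# Flatten to one spectrum per pixel, then reduce dimensionality.
pixels = cube.reshape(-1, cube.shape[-1])            # (4096, 12)
components = PCA(n_components=3).fit_transform(pixels)

# Cluster the reduced pixels (e.g., ink vs. parchment vs. stains)
# and score cluster quality; a lower Davies-Bouldin index is better.
labels = KMeans(n_clusters=3, n_init=10,
                random_state=0).fit_predict(components)
print("Davies-Bouldin index:", davies_bouldin_score(components, labels))

# Each component can be viewed as a grayscale "enhanced" image.
enhanced = components[:, 0].reshape(64, 64)
```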

Music recommender systems (MRS) have experienced a boom in recent years, thanks to the emergence and success of online streaming services, which nowadays make almost all of the world's music available at the user's fingertips. While today's MRS considerably help users find interesting music in these huge catalogs, MRS research still faces substantial challenges. In particular, when it comes to building, incorporating, and evaluating recommendation strategies that integrate information beyond simple user-item interactions or content-based descriptors, and instead dig deep into the very essence of listener needs, preferences, and intentions, MRS research becomes a major endeavor and related publications are quite sparse. The purpose of this trends-and-survey article is twofold. First, we identify and shed light on what we believe are the most pressing challenges facing MRS research, from both academic and industry perspectives, review the state of the art towards solving these challenges, and discuss its limitations. Second, we detail possible future directions and visions we contemplate for the further evolution of the field. The article should therefore serve two purposes: giving the interested reader an overview of current challenges in MRS research, and providing guidance for young researchers by identifying interesting yet under-researched directions in the field.
