Biological age is an important sociodemographic factor in studies on academic careers (research productivity, scholarly impact, and collaboration patterns). It is commonly assumed that academic age, that is, the time elapsed since the first publication, is a good proxy for biological age. In this study, we analyze the limitations of this proxy in academic career studies, using as an example the entire population of Polish academic scientists visible in global science over the last decade and holding at least a PhD (N = 20,569). The proxy works well for science, technology, engineering, mathematics, and medicine (STEMM) disciplines; however, for non-STEMM disciplines (particularly the humanities and social sciences), its performance is dramatically worse. This negative conclusion is particularly important for systems that have only recently become visible in global academic journals. The micro-level data suggest a delayed participation of social scientists and humanists in global science networks, with practical implications for predicting biological age from academic age. We calculate correlation coefficients, present a contingency analysis of academic career stages against academic positions and age groups, and build a linear multivariate regression model. Our research suggests that in scientifically developing countries, academic age must be used more cautiously as a proxy for biological age than in advanced countries: ideally, it should be used only for STEMM disciplines.
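To make the analysis concrete, the following is a minimal sketch, on synthetic data, of the two core computations mentioned above: a Pearson correlation between academic age and biological age computed separately for STEMM and non-STEMM groups, and a simple multivariate linear regression. All variable names, group sizes, and noise levels are illustrative assumptions, not the study's data.

```python
# Illustrative sketch (not the authors' code): correlating academic age with
# biological age and fitting a simple multivariate linear model.
# All data below are synthetic; variable names are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 1000
stemm = rng.integers(0, 2, n)                      # 1 = STEMM, 0 = non-STEMM
academic_age = rng.uniform(1, 40, n)               # years since first publication
# Assume a noisier mapping to biological age for non-STEMM (delayed first publication).
noise_sd = np.where(stemm == 1, 2.0, 8.0)
biological_age = 27 + academic_age + rng.normal(0, noise_sd)

for label, mask in [("STEMM", stemm == 1), ("non-STEMM", stemm == 0)]:
    r, _ = pearsonr(academic_age[mask], biological_age[mask])
    print(f"{label}: Pearson r = {r:.2f}")

# Multivariate OLS: biological_age ~ intercept + academic_age + STEMM indicator.
X = np.column_stack([np.ones(n), academic_age, stemm])
coef, *_ = np.linalg.lstsq(X, biological_age, rcond=None)
print("intercept, academic_age, STEMM coefficients:", np.round(coef, 2))
```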
Recent breakthroughs in high-resolution imaging of biomolecules in solution with cryo-electron microscopy (cryo-EM) have opened new doors for the reconstruction of molecular volumes, thereby promising further advances in biology, chemistry, and pharmacological research, among others. Despite significant headway, the challenges in cryo-EM data analysis remain legion and intricately interdisciplinary in nature, requiring insights from physicists, structural biologists, computer scientists, statisticians, and applied mathematicians. Meanwhile, recent next-generation volume reconstruction algorithms that combine generative modeling with end-to-end unsupervised deep learning techniques have shown promising results on simulated data, but still face considerable hurdles when applied to experimental cryo-EM images. In light of the proliferation of such methods and given the interdisciplinary nature of the task, we propose here a critical review of recent advances in the field of deep generative modeling for high-resolution cryo-EM volume reconstruction. The present review aims to (i) compare and contrast these new methods, while (ii) presenting them from a perspective and in terminology familiar to scientists in each of the five aforementioned fields with no specific background in cryo-EM. The review begins with an introduction to the mathematical and computational challenges of deep generative models for cryo-EM volume reconstruction, along with an overview of the baseline methodology shared across this class of algorithms. Having established the common thread weaving through these different models, we provide a practical comparison of these state-of-the-art algorithms, highlighting their relative strengths and weaknesses along with the assumptions on which they rely. This allows us to identify bottlenecks in current methods and avenues for future research.
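As an orientation aid for readers outside the field, here is a minimal, hedged sketch of the baseline template shared by this class of algorithms: a neural decoder maps a per-image latent variable (conformation) plus a pose to a 2D projection, which would then be compared against the observed particle image. It is not any specific published model; the architecture, grid size, and the omission of CTF modeling and pose estimation are simplifying assumptions.

```python
# Minimal sketch (assumptions, not any specific published model) of the common
# template behind deep generative cryo-EM reconstruction: decode a volume from
# a latent, rotate it by the pose, and integrate along one axis to get an image.
import torch
import torch.nn as nn

class VolumeDecoder(nn.Module):
    """Coordinate-based decoder: density at 3D points, conditioned on a latent z."""
    def __init__(self, z_dim=8, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, z):
        # coords: (N, 3) grid points; z: (z_dim,) latent for one particle image
        zb = z.expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, zb], dim=-1)).squeeze(-1)

def project(decoder, z, rotation, grid_size=32):
    """Render a 2D projection by rotating a 3D grid and summing along one axis."""
    lin = torch.linspace(-1, 1, grid_size)
    xx, yy, zz = torch.meshgrid(lin, lin, lin, indexing="ij")
    coords = torch.stack([xx, yy, zz], dim=-1).reshape(-1, 3) @ rotation.T
    density = decoder(coords, z).reshape(grid_size, grid_size, grid_size)
    return density.sum(dim=-1)  # crude line integral along the projection axis

decoder = VolumeDecoder()
z = torch.zeros(8)                  # latent describing one conformation
pose = torch.eye(3)                 # pose would come from upstream estimation
image = project(decoder, z, pose)   # to be compared with the observed particle image
print(image.shape)                  # torch.Size([32, 32])
```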
The promise of increased agility, autonomy, scalability, and reusability has made the microservices architecture a de facto standard for the development of large-scale and cloud-native commercial applications. Software patterns are an important design tool, and they are often selected and combined with the goal of obtaining a set of desired quality attributes. However, from a research standpoint, many patterns have not been widely validated against industry practice, making them not much more than interesting theories. To address this, we investigated how practitioners perceive the impact of 14 patterns on 7 quality attributes. To this end, we conducted 9 semi-structured interviews to collect industry expertise regarding (1) knowledge and adoption of software patterns, (2) the perceived architectural trade-offs of patterns, and (3) the metrics professionals use to measure quality attributes. We found that many of the trade-offs reported in our study matched the documentation of each respective pattern, and we identified several gains and pains that have not yet been reported, leading to novel insights about microservice patterns.
We present Project IRL (In Real Life), a suite of five mobile apps we created to explore novel ways of supporting in-person social interactions with augmented reality. In recent years, the tone of public discourse surrounding digital technology has become increasingly critical, and technology's influence on the way people relate to each other has been blamed for making people feel "alone together," diverting their attention from truly engaging with one another when they interact in person. Motivated by this challenge, we focus on an under-explored design space: playful co-located interactions. We evaluated the apps through a deployment study that involved interviews and participant observations with 101 people. We synthesized the results into a series of design guidelines that focus on four themes: (1) device arrangement (e.g., are people sharing one phone, or does each person have their own?), (2) enablers (e.g., should the activity focus on an object, body part, or pet?), (3) affordances of modifying reality (i.e., features of the technology that enhance its potential to encourage various aspects of social interaction), and (4) co-located play (i.e., using technology to make in-person play engaging and inviting). We conclude by presenting our design guidelines for future work on embodied social AR.
The electrical generation and transmission infrastructures of many countries are under increased pressure. This partially reflects the move towards low-carbon economies and the increased reliance on renewable power generation systems. There has been a reduction in the use of traditional fossil fuel generation systems, which provide a stable base load, and these have been replaced with more unpredictable renewable generation. As a consequence, the available load on the grid is becoming more unstable. To cope with this variability, the UK National Grid has placed emphasis on the investigation of various technical mechanisms (e.g. the implementation of smart grids, energy storage technologies, and auxiliary power sources) that may be able to prevent critical situations in which the grid may become unstable. The successful implementation of these mechanisms may require large numbers of electrical consumers (e.g. HVAC systems, food refrigeration systems) to make additional investments in energy storage technologies (food refrigeration systems) or to integrate the electrical demand of their industrial processes into the National Grid (HVAC systems). However, in the case of food refrigeration systems, even if the thermal inertia within the refrigeration system can maintain effective performance for a short period (e.g. under 1 minute) when the electrical input to the system is reduced during these critical situations, this still poses a serious food safety risk, even over such short periods. Therefore, before considering any future actions (e.g. investing in energy storage technologies) to prevent critical situations in which the grid becomes unstable, it is also necessary to understand how temperature profiles evolve over time during normal use inside these massive networks of food refrigeration systems.
In this paper we examine the concept of complexity as it applies to generative and evolutionary art and design. Complexity has many different, discipline-specific definitions, such as complexity in physical systems (entropy), algorithmic measures of information complexity, and the field of "complex systems". We apply a series of different complexity measures to three different evolutionary art datasets and look at the correlations between complexity and either individual aesthetic judgement by the artist (in the case of two datasets) or the physically measured complexity of generative 3D forms. Our results show that the degree of correlation differs for each dataset and measure, indicating that there is no overall "better" measure. However, specific measures do perform well on individual datasets, indicating that careful choice can increase the value of using such measures. We then assess the value of complexity measures for the audience by undertaking a large-scale survey on the perception of complexity and aesthetics. We conclude by discussing the value of direct measures in generative and evolutionary art, reinforcing recent findings from neuroimaging and psychology which suggest that human aesthetic judgement is informed by many extrinsic factors beyond the measurable properties of the object being judged.
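As a concrete illustration of the kind of analysis described above, here is a minimal sketch (on synthetic data, not the paper's datasets) of one widely used family of measures: compression-based image complexity, correlated against aesthetic ratings with a rank correlation. The noise model, image size, and placeholder ratings are all assumptions made for illustration.

```python
# Hedged sketch (not the paper's code) of a compression-based complexity
# measure correlated against aesthetic ratings. Data here are synthetic.
import zlib
import numpy as np
from scipy.stats import spearmanr

def compression_complexity(img):
    """Ratio of zlib-compressed size to raw size: a rough algorithmic-complexity proxy."""
    raw = img.astype(np.uint8).tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

rng = np.random.default_rng(1)
images, ratings = [], []
for _ in range(50):
    noise_level = rng.uniform(0, 1)                     # more noise -> harder to compress
    img = (rng.random((64, 64)) < noise_level) * 255
    images.append(img)
    ratings.append(rng.normal(loc=noise_level, scale=0.2))  # placeholder aesthetic score

complexities = [compression_complexity(im) for im in images]
rho, p = spearmanr(complexities, ratings)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```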
Recommender systems have been widely applied in different real-life scenarios to help us find useful information. Recently, Reinforcement Learning (RL) based recommender systems have become an emerging research topic. Owing to their interactive nature and autonomous learning ability, they often surpass traditional recommendation models and even most deep learning-based methods. Nevertheless, there are various challenges in applying RL to recommender systems. To this end, we first provide a thorough overview, comparison, and summarization of RL approaches for five typical recommendation scenarios, following the three main categories of RL: value-function, policy search, and Actor-Critic. Then, we systematically analyze the challenges and relevant solutions on the basis of the existing literature. Finally, through a discussion of the open issues and limitations of RL for recommendation, we highlight some potential research directions in this field.
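To ground the value-function category named above, the following is a minimal, hedged sketch: tabular Q-learning in which the state is the user, the action is the recommended item, and the reward is simulated click feedback. The toy environment, the one-step (bandit-like) setting, and all parameter values are illustrative assumptions rather than any surveyed method.

```python
# Minimal sketch of a value-function approach to recommendation: the "action"
# is which item to recommend and the reward is simulated user feedback.
import numpy as np

n_users, n_items = 5, 10
rng = np.random.default_rng(2)
true_pref = rng.random((n_users, n_items))      # hidden click probabilities

Q = np.zeros((n_users, n_items))                # state = user, action = item
alpha, eps = 0.1, 0.1                           # gamma = 0: one-step (bandit-like) case

for _ in range(20_000):
    user = rng.integers(n_users)
    if rng.random() < eps:                      # epsilon-greedy exploration
        item = rng.integers(n_items)
    else:
        item = int(np.argmax(Q[user]))
    reward = float(rng.random() < true_pref[user, item])   # simulated click
    Q[user, item] += alpha * (reward - Q[user, item])       # TD update with gamma = 0

print("best item per user:", np.argmax(Q, axis=1))
print("true best item:    ", np.argmax(true_pref, axis=1))
```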
Recommender systems, a pivotal tool for alleviating the information overload problem, aim to predict users' preferred items from millions of candidates by analyzing observed user-item relations. To tackle the sparsity and cold start problems encountered by recommender systems, uncovering hidden (indirect) user-item relations by employing side information and knowledge to enrich the observed information has recently proven promising; its performance is largely determined by the scalability of recommendation models in the face of the high complexity and large scale of side information and knowledge. Research into graph embedding techniques, which has made great strides towards efficiently utilizing complex and large-scale data, is therefore a major topic. Equipping recommender systems with graph embedding techniques helps them outperform conventional recommendation methods that operate directly on graph topology analysis, and this approach has been widely studied in recent years. This article systematically reviews graph embedding-based recommendation in terms of embedding techniques for bipartite graphs, general graphs, and knowledge graphs, and proposes a general design pipeline for such systems. In addition, a comparison of several representative graph embedding-based recommendation models with the most commonly used conventional recommendation models in simulations shows that the conventional models overall outperform the graph embedding-based ones in predicting implicit user-item interactions, revealing a relative weakness of graph embedding-based recommendation in these tasks. To foster future research, this article proposes constructive suggestions on making a trade-off between graph embedding-based recommendation and conventional recommendation in different tasks, as well as some open questions.
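To illustrate the basic mechanism behind embedding a bipartite user-item interaction graph, here is a small, hedged sketch: each user and item node receives a low-dimensional vector, an observed interaction is scored by a dot product, and the embeddings are trained with a BPR-style pairwise objective. The random interaction data, embedding dimension, and training schedule are placeholders, not any model evaluated in the article.

```python
# Sketch of bipartite-graph embedding for recommendation: dot-product scoring
# of user/item vectors trained with a BPR-style pairwise objective.
import numpy as np

rng = np.random.default_rng(3)
n_users, n_items, dim = 20, 30, 8
edges = [(rng.integers(n_users), rng.integers(n_items)) for _ in range(200)]

U = rng.normal(0, 0.1, (n_users, dim))          # user embeddings
V = rng.normal(0, 0.1, (n_items, dim))          # item embeddings
lr = 0.05

for _ in range(50):                             # training epochs
    for u, i in edges:
        j = rng.integers(n_items)               # sample a random negative item
        # Pairwise objective: an observed edge should outscore a non-edge.
        x = U[u] @ (V[i] - V[j])
        g = 1.0 / (1.0 + np.exp(x))             # gradient scale of log(sigmoid(x))
        du = g * (V[i] - V[j])
        di, dj = g * U[u], -g * U[u]
        U[u] += lr * du
        V[i] += lr * di
        V[j] += lr * dj

scores = U @ V.T                                # predicted affinity for all pairs
print("top item for user 0:", int(np.argmax(scores[0])))
```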
Images can convey rich semantics and induce various emotions in viewers. Recently, with the rapid advancement of emotional intelligence and the explosive growth of visual data, extensive research efforts have been dedicated to affective image content analysis (AICA). In this survey, we comprehensively review the development of AICA over the last two decades, focusing especially on state-of-the-art methods with respect to three main challenges: the affective gap, perception subjectivity, and label noise and absence. We begin with an introduction to the key emotion representation models that have been widely employed in AICA and a description of the available datasets for evaluation, together with a quantitative comparison of label noise and dataset bias. We then summarize and compare the representative approaches on (1) emotion feature extraction, including both handcrafted and deep features, (2) learning methods for dominant emotion recognition, personalized emotion prediction, emotion distribution learning, and learning from noisy data or few labels, and (3) AICA-based applications. Finally, we discuss some challenges and promising research directions for the future, such as image content and context understanding, group emotion clustering, and viewer-image interaction.
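As a concrete, hedged illustration of the "deep features" side of emotion feature extraction, the sketch below uses a pretrained-style CNN backbone as a feature extractor and a small linear head over discrete emotion categories. The eight-class setup, the untrained weights, and the dummy batch are assumptions made to keep the example self-contained; this is not the survey's pipeline.

```python
# Sketch of deep feature extraction for affective image analysis: a CNN
# backbone produces image features, a linear head maps them to emotion classes.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None)               # in practice, ImageNet-pretrained weights
backbone.fc = nn.Identity()                     # strip the classifier, keep 512-d features

emotion_head = nn.Linear(512, 8)                # 8 emotion categories (an assumption)

images = torch.randn(4, 3, 224, 224)            # a dummy batch of images
with torch.no_grad():
    feats = backbone(images)                    # (4, 512) deep features
logits = emotion_head(feats)                    # (4, 8) emotion scores
print(logits.softmax(dim=-1).shape)
```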
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now, it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
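To make the ML-based modelling category concrete, here is an illustrative sketch (entirely synthetic, with made-up design parameters and target) of learning a surrogate that predicts a performance metric from design parameters instead of invoking a slow simulator for every candidate design.

```python
# Illustrative sketch of ML-based modelling: a surrogate regressor predicts a
# synthetic "latency" from design parameters. Features and data are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 500
cache_kb = rng.choice([256, 512, 1024, 2048], n)
issue_width = rng.integers(1, 9, n)
freq_ghz = rng.uniform(1.0, 4.0, n)
# Synthetic "ground truth" latency with noise, standing in for simulator output.
latency = 200 / freq_ghz - 5 * np.log2(cache_kb / 256 + 1) - 2 * issue_width \
          + rng.normal(0, 2, n)

X = np.column_stack([cache_kb, issue_width, freq_ghz])
X_tr, X_te, y_tr, y_te = train_test_split(X, latency, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out design points: {model.score(X_te, y_te):.2f}")
```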
Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. This study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, their compatibility with major deep learning frameworks, and their extensibility for new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
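As a minimal, hedged illustration of one of the search algorithms such a review covers, the sketch below runs a random search over a hyper-parameter space with cross-validated scoring. The model (an SVM), the search space, and the synthetic dataset are placeholders chosen only to keep the example self-contained.

```python
# Sketch of one basic HPO algorithm: random search over a hyper-parameter
# space with cross-validated scoring. Model, space, and data are placeholders.
import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

search = RandomizedSearchCV(
    SVC(),
    param_distributions={
        "C": loguniform(1e-3, 1e3),            # sample C on a log scale
        "gamma": loguniform(1e-4, 1e1),
        "kernel": ["rbf", "linear"],
    },
    n_iter=25,                                  # budget: 25 sampled configurations
    cv=3,
    random_state=0,
)
search.fit(X, y)
print("best configuration:", search.best_params_)
print(f"best CV accuracy:  {search.best_score_:.3f}")
```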