In this study, we propose a clustering-based approach on time-series data to capture COVID-19 spread patterns in the early period of the pandemic. We analyze the spread dynamics during the early and later stages of COVID-19 for countries in different geographical locations. Furthermore, we investigate confinement policies and their effect on the spread. We find that the same confinement policies yield different results in different countries. Specifically, lockdowns become less effective in densely populated regions because of reluctance to comply with social distancing measures. Lack of testing, contact tracing, and social awareness in some countries prevents people from self-isolating and maintaining social distance. Large labor camps with unhealthy living conditions also contribute to high community transmission in countries that depend on foreign labor. Distrust in government policies and fake news instigate the spread in both developed and under-developed countries. Large social gatherings play a vital role in causing rapid outbreaks almost everywhere. While some countries were able to contain the spread by implementing strict and widely adopted confinement policies, others contained it through social distancing measures and rigorous testing. An early and rapid response at the beginning of the pandemic is necessary to contain the spread, yet it is not always sufficient.
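For concreteness, a minimal sketch of the kind of time-series clustering described here, assuming synthetic daily-case curves, per-country shape normalization, and k-means as the clustering algorithm; the paper's actual data preparation and clustering method may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: daily new-case counts per country over the first 120 days.
rng = np.random.default_rng(0)
n_countries, n_days = 40, 120
cases = rng.poisson(np.exp(rng.normal(2, 1, (n_countries, 1)))
                    * np.linspace(0.5, 2.0, n_days))  # toy epidemic-like curves

# Normalize each country's curve so clusters reflect the *shape* of the spread,
# not the absolute magnitude of the outbreak.
curves = cases / (cases.max(axis=1, keepdims=True) + 1e-9)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(curves)
for k in range(4):
    print(f"cluster {k}: {np.sum(labels == k)} countries")
```

Countries in the same cluster would then be compared in terms of their confinement policies and timing, as the abstract describes.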
The ability to rewire ties in communication networks is vital for large-scale human cooperation and the spread of new ideas. Especially important for knowledge dissemination is the ability to form new weak ties -- ties which act as bridges between distant parts of the social system and enable the flow of novel information. Here we show that lack of researcher co-location during the COVID-19 lockdown caused the loss of more than 4800 weak ties over 18 months in the email network of a large North American university -- the MIT campus. Furthermore, we find that the re-introduction of partial co-location through a hybrid work mode starting in September 2021 led to a partial regeneration of weak ties, especially between researchers who work in close proximity. We quantify the effect of co-location in renewing ties -- a process that we have termed nexogenesis -- through a novel model based on physical proximity, which is able to reproduce all empirical observations. Results highlight that employees who are not co-located are less likely to form ties, weakening the spread of information in the workplace. Such findings could contribute to a better understanding of the spatio-temporal dynamics of human communication networks -- and help organizations that are moving towards the implementation of hybrid work policies to evaluate the minimum amount of in-person interaction necessary for a healthy work life.
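As an illustration of the general idea of a proximity-driven tie-formation model (not the authors' calibrated model), a toy simulation in which the probability that two unconnected researchers form a new tie decays exponentially with their physical distance; the office locations, base rate, and decay scale below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                    # researchers
pos = rng.uniform(0, 1, size=(n, 2))       # hypothetical office locations (arbitrary units)

def tie_formation_prob(d, p0=0.02, d0=0.1):
    # Per-step probability that two unconnected researchers form a new tie,
    # decaying exponentially with their physical distance d.
    return p0 * np.exp(-d / d0)

dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
p = tie_formation_prob(dist)
adj = np.zeros((n, n), dtype=bool)

for _ in range(52):                        # 52 weekly time steps
    new = np.triu((rng.random((n, n)) < p) & ~adj, k=1)
    adj |= new | new.T

print("ties formed over one year:", adj.sum() // 2)
```

Removing co-location in such a model (e.g., setting all distances large) suppresses new-tie formation, which is the qualitative effect reported for the lockdown period.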
Diverse disciplines are interested in how the coordination of interacting agents' movements, emotions, and physiology over time impacts social behavior. Here, we describe a new multivariate procedure for automating the investigation of this kind of behaviorally relevant "interactional synchrony", and introduce a novel interactional synchrony measure based on features of dynamic time warping (DTW) paths. We demonstrate that our DTW path-based measure of interactional synchrony between the facial action units of two people interacting freely in a natural social interaction can be used to predict how much trust they will display in a subsequent Trust Game. We also show that our approach outperforms univariate head movement models, models that consider participants' facial action units independently, and models that use previously proposed synchrony or similarity measures. The insights of this work can be applied to any research question that aims to quantify the temporal coordination of multiple signals, and have immediate applications in psychology, medicine, and robotics.
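A minimal sketch of a DTW path-based synchrony feature, assuming a plain dynamic-programming DTW and using the warping path's mean deviation from the diagonal as the synchrony score; the specific path features used in the paper may differ:

```python
import numpy as np

def dtw_path(x, y):
    """DTW between two multivariate series of shape (T1, d) and (T2, d);
    returns the optimal warping path as a list of (i, j) index pairs."""
    T1, T2 = len(x), len(y)
    cost = np.full((T1 + 1, T2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    i, j, path = T1, T2, []                      # backtrack to recover the path
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def path_synchrony(path, T1, T2):
    """One possible path-based feature: mean absolute deviation of the warping
    path from the diagonal (0 = perfectly synchronous, larger = more lag/warping)."""
    return float(np.mean([abs(i / (T1 - 1) - j / (T2 - 1)) for i, j in path]))

# Toy usage with two hypothetical facial-action-unit time series.
rng = np.random.default_rng(0)
au_a = rng.normal(size=(100, 5))
au_b = np.roll(au_a, 3, axis=0) + 0.1 * rng.normal(size=(100, 5))  # lagged, noisy copy
p = dtw_path(au_a, au_b)
print("path deviation:", path_synchrony(p, 100, 100))
```

Features of this kind, computed per dyad, could then be fed to a downstream model predicting Trust Game behavior.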
Context. Computer workers in general, and software developers specifically, are under a high amount of stress due to continuous deadlines and, often, over-commitment. Objective. This study investigates the effects of a neuroplasticity practice, a specific breathing practice, on the attention awareness, well-being, perceived productivity, and self-efficacy of computer workers. Method. We created a questionnaire, mainly from existing, validated scales, as an entry and exit survey providing data points for comparison before and after the intervention. The intervention was a 12-week program with a weekly live session that included a talk on a well-being topic and a facilitated group breathing session. During the intervention period, we solicited one daily journal note and one weekly well-being rating. We replicated the intervention in a similarly structured 8-week program. The data was analyzed using a Bayesian multi-level model for the quantitative part and thematic analysis for the qualitative part. Results. The intervention showed improvements in participants' experienced inner states despite an ongoing pandemic and intense outer circumstances for most. Over the course of the study, we found an improvement in the participants' ratings of how often they found themselves in good spirits as well as in a calm and relaxed state. We also gathered a large number of deep inner reflections and growth processes that may not have surfaced for the participants without deliberate engagement in such a program. Conclusion. The data indicates the usefulness and effectiveness of an intervention for computer workers in terms of increasing well-being and resilience. Everyone needs a way to deliberately relax, unplug, and recover. A breathing practice is a simple way to do so, and the results call for establishing a larger body of work to make this a common practice.
In this paper we examine the concept of complexity as it applies to generative and evolutionary art and design. Complexity has many different, discipline-specific definitions, such as complexity in physical systems (entropy), algorithmic measures of information complexity, and the field of "complex systems". We apply a series of different complexity measures to three different evolutionary art datasets and look at the correlations between complexity and individual aesthetic judgement by the artist (in the case of two datasets) or the physically measured complexity of generative 3D forms. Our results show that the degree of correlation differs for each dataset and measure, indicating that there is no overall "better" measure. However, specific measures do perform well on individual datasets, indicating that careful choice can increase the value of using such measures. We then assess the value of complexity measures for the audience by undertaking a large-scale survey on the perception of complexity and aesthetics. We conclude by discussing the value of direct measures in generative and evolutionary art, reinforcing recent findings from neuroimaging and psychology which suggest that human aesthetic judgement is informed by many extrinsic factors beyond the measurable properties of the object being judged.
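For illustration, one simple compression-based complexity proxy of the kind that can be applied to image data; this is an assumed example measure, not necessarily one of those evaluated in the paper:

```python
import zlib
import numpy as np

def compression_complexity(image):
    """Rough image-complexity proxy: ratio of zlib-compressed size to raw size.
    Values near 0 indicate highly regular images; values near 1 indicate
    incompressible (noise-like) content."""
    raw = np.asarray(image, dtype=np.uint8).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

flat = np.zeros((256, 256), dtype=np.uint8)                               # minimal structure
noise = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)  # maximal "noise"
print(compression_complexity(flat), compression_complexity(noise))
```

Measures of this sort can then be correlated against artist or audience aesthetic ratings, as the abstract describes.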
Estimation of heterogeneous treatment effects is an active area of research in causal inference. Most of the existing methods, however, focus on estimating the conditional average treatment effects of a single, binary treatment given a set of pre-treatment covariates. In this paper, we propose a method to estimate the heterogeneous causal effects of high-dimensional treatments, which poses unique challenges in terms of estimation and interpretation. The proposed approach is based on a Bayesian mixture of regularized regressions to identify groups of units who exhibit similar patterns of treatment effects. By directly modeling cluster membership with covariates, the proposed methodology allows one to explore the unit characteristics that are associated with different patterns of treatment effects. Our motivating application is conjoint analysis, which is a popular survey experiment in social science and marketing research and is based on a high-dimensional factorial design. We apply the proposed methodology to the conjoint data, where survey respondents are asked to select one of two immigrant profiles with randomly selected attributes. We find that a group of respondents with a relatively high degree of prejudice appears to discriminate against immigrants from non-European countries like Iraq. An open-source software package is available for implementing the proposed methodology.
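A rough, non-Bayesian sketch of the underlying idea of a mixture of regularized regressions, fit here with EM over ridge components; the paper's actual Bayesian model, priors, and covariate-driven cluster membership are not reproduced:

```python
import numpy as np

def em_mixture_ridge(X, y, K=2, lam=1.0, n_iter=100, seed=0):
    """EM for a mixture of ridge regressions: units are softly grouped by which
    coefficient pattern best explains their outcomes."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    resp = rng.dirichlet(np.ones(K), size=n)          # soft cluster memberships
    betas, sigma2, pi = np.zeros((K, d)), np.ones(K), np.full(K, 1 / K)
    for _ in range(n_iter):
        for k in range(K):                            # M-step: weighted ridge fits
            w = resp[:, k]
            A = (X * w[:, None]).T @ X + lam * np.eye(d)
            betas[k] = np.linalg.solve(A, (X * w[:, None]).T @ y)
            r = y - X @ betas[k]
            sigma2[k] = (w * r ** 2).sum() / w.sum() + 1e-8
            pi[k] = w.mean()
        loglik = np.stack([                           # E-step: Gaussian responsibilities
            np.log(pi[k]) - 0.5 * np.log(2 * np.pi * sigma2[k])
            - 0.5 * (y - X @ betas[k]) ** 2 / sigma2[k] for k in range(K)], axis=1)
        loglik -= loglik.max(axis=1, keepdims=True)
        resp = np.exp(loglik)
        resp /= resp.sum(axis=1, keepdims=True)
    return betas, resp

# Toy usage: outcomes generated from two distinct coefficient patterns.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
signal = np.where(np.arange(400) < 200, X @ np.array([1., 0, 0, 0, 0]),
                  X @ np.array([0., 0, 0, 0, -1.]))
y = signal + 0.3 * rng.normal(size=400)
betas, resp = em_mixture_ridge(X, y, K=2)
print(np.round(betas, 2))
```

In the conjoint setting, the columns of X would be (high-dimensional) attribute indicators and the recovered clusters would correspond to respondent groups with distinct treatment-effect patterns.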
International academic collaborations cultivate diversity in the research landscape and facilitate multiperspective methods, as the scope of each country's science depends on its needs, history, wealth, etc. Moreover, the quality of science differs significantly amongst nations \cite{king2004scientific}, which renders international collaborations a potential source for understanding the dynamics between countries and their advancements. Analyzing these collaborations can reveal expertise shared between two countries in different fields, the most well-known institutions of a nation, the overall success of collaborative efforts compared to local ones, etc. Such analyses were initially performed using statistical metrics \cite{melin1996studying}, but network analysis has since proven much more expressive \cite{wagner2005mapping,gonzalez2008coauthorship}. In this exploratory analysis, we aim to examine the collaboration patterns between French and US institutions. Towards this, we capitalize on the Microsoft Academic Graph (MAG) \cite{sinha2015overview}, the largest open bibliographic dataset, which contains detailed information on authors, publications, and institutions. We use geographic coordinates to assign affiliations to France or the USA. In cases where the coordinates of an affiliation were absent, we used its Wikipedia URL and named entity recognition to identify the country from the address on the Wikipedia page. We stress that institute names in France have been volatile over the last decade (due to the creation of university federations), so this is a best-effort attempt. The results indicate an intensive and increasing scientific production, with certain institutions such as Harvard, MIT, and CNRS standing out.
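As a minimal sketch of the coordinate-based country assignment step, assuming crude bounding boxes for metropolitan France and the contiguous USA (the actual pipeline may use precise polygons, and falls back to the Wikipedia/NER route described above when coordinates are missing):

```python
def country_from_coords(lat, lon):
    """Assign an affiliation to France or the USA from its coordinates using
    rough bounding boxes; returns None when neither box matches."""
    if 41.0 <= lat <= 51.5 and -5.5 <= lon <= 10.0:     # metropolitan France
        return "France"
    if 24.0 <= lat <= 49.5 and -125.0 <= lon <= -66.5:  # contiguous USA
        return "USA"
    return None

print(country_from_coords(48.8566, 2.3522))    # Paris      -> France
print(country_from_coords(42.3601, -71.0942))  # Cambridge  -> USA
```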
Opinions are an integral part of how we perceive the world and each other. They shape collective action, playing a role in democratic processes, the evolution of norms, and cultural change. For decades, researchers in the social and natural sciences have tried to describe how shifting individual perspectives and social exchange lead to archetypal states of public opinion like consensus and polarization. Here we review some of the many contributions to the field, focusing both on idealized models of opinion dynamics, and attempts at validating them with observational data and controlled sociological experiments. By further closing the gap between models and data, these efforts may help us understand how to face current challenges that require the agreement of large groups of people in complex scenarios, such as economic inequality, climate change, and the ongoing fracture of the sociopolitical landscape.
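As an example of the idealized models reviewed here, a minimal simulation of the bounded-confidence (Deffuant-Weisbuch) model, in which randomly paired agents compromise only when their opinions already lie within a confidence threshold; parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
n, eps, mu, steps = 500, 0.2, 0.5, 50_000   # agents, confidence bound, convergence rate, interactions
opinions = rng.uniform(0, 1, n)

for _ in range(steps):
    i, j = rng.integers(0, n, 2)
    if i != j and abs(opinions[i] - opinions[j]) < eps:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift                # both agents move toward each other
        opinions[j] -= shift

print("sampled final opinions:", np.round(np.sort(opinions), 2)[::50])
```

Depending on eps, the population converges to a single consensus value or splits into several persistent opinion clusters, a stylized form of polarization.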
We study Nash dynamics in the context of blockchain protocols. Specifically, we introduce a formal model within which one can assess whether the Nash dynamics can lead utility-maximizing participants to defect from "honest" protocol operation towards variations that exhibit one or more undesirable infractions, such as abstaining from participation or extending conflicting protocol histories. Blockchain protocols that do not lead to such infraction states are said to be compliant. Armed with this model, we study the compliance of various Proof-of-Work (PoW) and Proof-of-Stake (PoS) protocols with respect to different utility functions and reward schemes, leading to the following results: i) PoS ledgers under resource-proportional rewards can be compliant if costs are negligible, but non-compliant if costs are significant; ii) PoW and PoS under block-proportional rewards exhibit different compliance behavior, depending on the lossiness of the network; iii) considering externalities, such as exchange rate fluctuations, we quantify the benefit of economic penalties in the context of PoS protocols with respect to compliance.
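A toy illustration of the compliance notion (not the paper's formal model): a protocol is treated as compliant if no unilateral deviation into an infraction strategy improves a participant's utility; the payoff numbers are hypothetical and mirror result (i) above:

```python
def is_compliant(utilities, honest="honest", infractions=("abstain", "fork"), eps=0.0):
    """utilities: strategy -> payoff for one participant, holding everyone else honest.
    Compliant if no infraction strategy beats honest play by more than eps."""
    return all(utilities[s] <= utilities[honest] + eps for s in infractions)

# Illustrative payoffs under resource-proportional rewards with participation cost c.
reward, cost = 1.0, 0.2
payoffs = {
    "honest": reward - cost,   # earn the reward, pay the operational cost
    "abstain": reward,         # resource-proportional: still rewarded, no cost
    "fork": reward - cost,     # extending conflicting histories gains nothing here
}
print(is_compliant(payoffs))   # False: abstaining is strictly profitable when cost > 0
```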
This paper proposes a general unplanned-incident analysis framework for public transit systems from the supply and demand sides, using automated fare collection (AFC) and automated vehicle location (AVL) data. Specifically, on the supply side, we propose an incident-based network redundancy index to analyze the network's ability to provide alternative services under a specific rail disruption. The impacts on operations are analyzed through headway changes. On the demand side, the analysis takes place at two levels: aggregate flows and individual responses. We calculate the demand changes of different rail lines, rail stations, bus routes, and bus stops to better understand passenger flow redistribution under incidents. Individual behavior is analyzed using a binary logit model based on passengers' mode choices and socio-demographics inferred from AFC data. The public transit system of the Chicago Transit Authority is used as a case study. Two rail disruption cases are analyzed, one with high network redundancy around the impacted stations and the other with low redundancy. Results show that the service frequency of the incident line was greatly reduced (by around 30% to 70%) during the incident. Nearby rail lines with substitutional functions were also slightly affected. Passengers showed different behavioral responses in the two incident scenarios. In the low-redundancy case, most passengers chose to use nearby buses, either to reach their destinations or to transfer to nearby rail lines. In the high-redundancy case, most passengers transferred directly to nearby lines. Corresponding policy implications and operating suggestions are discussed.
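A minimal sketch of the individual-response step, assuming a binary logit fit with statsmodels on synthetic inferred choices (bus substitution vs. rail transfer) and two illustrative covariates; the actual covariates and the inference of choices from AFC data are not reproduced:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical inferred choices during a disruption: 1 = switched to a nearby bus,
# 0 = transferred to another rail line.
rng = np.random.default_rng(0)
n = 1000
walk_dist = rng.uniform(0.1, 1.5, n)   # km to the nearest bus alternative
frequent = rng.integers(0, 2, n)       # frequent-rider flag inferred from AFC history
logit_p = 1.5 - 2.0 * walk_dist + 0.5 * frequent
chose_bus = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(np.column_stack([walk_dist, frequent]))
model = sm.Logit(chose_bus, X).fit(disp=0)
print(model.params)   # coefficients: intercept, walking distance, frequent rider
```

The fitted coefficients summarize how attributes such as access distance shift the probability of substituting bus for rail during an incident.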
When we humans look at a video of human-object interaction, we can not only infer what is happening but also extract actionable information and imitate those interactions. Current recognition and geometric approaches, on the other hand, lack the physicality of action representation. In this paper, we take a step towards a more physical understanding of actions. We address the problem of inferring contact points and physical forces from videos of humans interacting with objects. One of the main challenges in tackling this problem is obtaining ground-truth labels for forces. We sidestep this problem by instead using a physics simulator for supervision. Specifically, we use a simulator to predict effects and enforce that the estimated forces must lead to the same effect as depicted in the video. Our quantitative and qualitative results show that (a) we can predict meaningful forces from videos whose effects lead to accurate imitation of the observed motions, (b) by jointly optimizing for contact point and force prediction, we improve performance on both tasks compared to independent training, and (c) we can learn a representation from this model that generalizes to novel objects using few-shot examples.
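To make the simulator-as-supervision idea concrete, a toy 1-D sketch in which predicted forces are integrated by a simple point-mass simulator and penalized when the resulting trajectory deviates from the observed one; the paper's simulator and action representation are of course far richer than this:

```python
import numpy as np

def simulate(forces, x0=0.0, v0=0.0, mass=1.0, dt=1 / 30):
    """Toy 1-D point-mass 'simulator': integrates predicted forces into a trajectory."""
    x, v, traj = x0, v0, []
    for f in forces:
        v += (f / mass) * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

def effect_loss(predicted_forces, observed_traj):
    """Supervision signal: predicted forces must reproduce the motion seen in the video."""
    return float(np.mean((simulate(predicted_forces) - observed_traj) ** 2))

# Hypothetical 'video-observed' trajectory produced by an unknown true force profile.
observed = simulate(np.full(60, 0.8))
print(effect_loss(np.full(60, 0.8), observed))  # ~0: these forces explain the motion
print(effect_loss(np.full(60, 0.2), observed))  # large: wrong forces, wrong effect
```

In the full system, a loss of this kind provides gradients (or feedback) for training the force predictor without ground-truth force labels.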