Population protocols are a model of distributed computation intended for the study of networks of independent computing agents with dynamic communication structure. Each agent has a finite number of states, and communication opportunities occur nondeterministically, allowing the agents involved to change their states based on each other's states. Population protocols are often studied in terms of reaching a consensus on whether the input configuration satisfies some predicate. A desirable property of a computation model is modularity, the ability to combine existing simpler computations in a straightforward way. In this paper we introduce a more general notion of the functionality implemented by a population protocol. This notion makes it possible to design multiphase protocols as combinations of independently defined phases. The additional generality also increases the range of behaviours that can be captured in applications.
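As a minimal illustrative sketch (ours, not from the paper), the following Python snippet simulates the classic epidemic-style protocol computing the OR predicate "at least one agent has input 1"; the transition rule and uniform random pairing are standard textbook choices.

```python
import random

# A minimal sketch of a population protocol computing the predicate
# "at least one agent started in state 1" (the OR predicate).
# States double as outputs here; real protocols map states to outputs.
def transition(p, q):
    # If either agent has seen a 1, both adopt 1 (epidemic-style spread).
    if 1 in (p, q):
        return 1, 1
    return p, q

def run(inputs, steps=10_000):
    agents = list(inputs)
    for _ in range(steps):
        i, j = random.sample(range(len(agents)), 2)  # random pairwise meeting
        agents[i], agents[j] = transition(agents[i], agents[j])
    return agents

print(run([0, 0, 1, 0]))  # converges to all 1s: the predicate holds
```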
Antisocial behavior can be contagious, spreading from individual to individual and rippling through social networks. Moreover, it can spread not only through third-party influence from observation, just like innovations or individual behavior do, but also through direct experience, via "pay-it-forward" retaliation. Here, we distinguish between the effects of observation and victimization for the contagion of antisocial behavior by analyzing large-scale digital-trace data. We study the spread of cheating in more than a million matches of an online multiplayer first-person shooter game, in which up to 100 players compete individually or in teams against strangers. We identify event sequences in which a player who observes or is killed by a certain number of cheaters starts cheating, and evaluate the extent to which these sequences would appear if we preserve the team and interaction structure but assume alternative gameplay scenarios. The results reveal that social contagion is only likely to exist for those who both observe and experience cheating, suggesting that third-party influence and "pay-it-forward" reciprocity interact positively. In addition, the effect is present only for those who both observe and experience more than once, suggesting that cheating is more likely to spread after repeated or multi-source exposure. Approaching online games as models of social systems, we use the findings to discuss strategies for targeted interventions to stem the spread of cheating and antisocial behavior more generally in online communities, schools, organizations, and sports.
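The following Python fragment is a hedged illustration of the null-model logic (our simplification, not the authors' pipeline): observed counts of "exposed at least k times, later starts cheating" sequences are compared against counts obtained by reshuffling cheating labels while holding each match's structure fixed.

```python
import random

# A hedged illustration (ours): count "exposed to k+ cheaters, later starts
# cheating" sequences, then compare against a null model that reassigns who
# cheats within each match while keeping the match composition fixed.
def contagion_count(matches, k=2):
    """matches: list of lists of (n_exposures, later_cheated) per player."""
    return sum(1 for m in matches for exp, cheated in m if exp >= k and cheated)

def null_counts(matches, k=2, rounds=1000):
    counts = []
    for _ in range(rounds):
        reshuffled = []
        for m in matches:
            flags = [c for _, c in m]
            random.shuffle(flags)  # reassign cheating labels within the match
            reshuffled.append([(e, f) for (e, _), f in zip(m, flags)])
        counts.append(contagion_count(reshuffled, k))
    return counts
```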
We formalize the simulation paradigm of cryptography in terms of category theory and show that protocols secure against abstract attacks form a symmetric monoidal category, thus giving an abstract model of composable security definitions in cryptography. Our model is able to incorporate computational security, set-up assumptions and various attack models such as colluding or independently acting subsets of adversaries in a modular, flexible fashion. We conclude by using string diagrams to rederive no-go results concerning the limits of bipartite and tripartite cryptography, ruling out, e.g., composable commitments and broadcasting. On the way, we exhibit two categorical constructions of resource theories that might be of independent interest: one capturing resources shared among n parties and one capturing resource conversions that succeed asymptotically.
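Schematically (our notation, not necessarily the paper's), the two composition operations underlying such a symmetric monoidal structure can be written as follows:

```latex
% Schematic composition rules (our notation): secure protocols compose
% sequentially and in parallel, with security preserved under both.
\[
  \pi : R \to S,\; \rho : S \to T
  \;\Longrightarrow\; \rho \circ \pi : R \to T,
  \qquad
  \pi : R \to S,\; \pi' : R' \to S'
  \;\Longrightarrow\; \pi \otimes \pi' : R \otimes R' \to S \otimes S'.
\]
```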
Ensuring public safety in a Smart City (SC) environment is a critical and increasingly complicated task due to the involvement of multiple agencies and the city's expansion across cyber and social layers. In this paper, we propose an extensive-form perfect-information game to model interactions and optimal city resource allocations when a Terrorist Organization (TO) performs attacks on multiple targets across two conceptual SC layers: a physical and a cyber-social one. The Smart City Defense Game (SCDG) considers three players, each initially endowed with a specific finite budget. Two SC agencies, which must defend their physical and social territories respectively, fight against a common enemy, the TO. Each layer consists of multiple targets, and the outcome of an attack depends on whether the resources allocated there by the associated agency exceed the TO's. Each player's utility is equal to the number of successfully defended targets. The two agencies are allowed to make budget transfers provided that it is beneficial for both. We completely characterize the Subgame Perfect Nash Equilibrium (SPNE) of the SCDG, which consists of strategies for optimal resource exchanges between the SC agencies and accounts for the TO's budget allocation across the physical and social targets. Also, we present numerical and comparative results demonstrating that when the SC players act according to the SPNE, they maximize the number of successfully defended targets. The SCDG is shown to be a promising solution for modeling critical resource allocations between SC parties in the face of multi-layer simultaneous terrorist attacks.
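As a toy numeric sketch of the payoff rule just described (our construction, not the paper's formal definitions; how ties are resolved is our assumption):

```python
# A target is successfully defended iff the resources the responsible
# agency allocates there exceed the TO's allocation at that target
# (tie-breaking in the TO's favor is our assumption).
def defended_targets(agency_alloc, to_alloc):
    """Per-target budget allocations for one layer; returns the agency's utility."""
    return sum(1 for a, t in zip(agency_alloc, to_alloc) if a > t)

# Example: the physical-layer agency defends 2 of its 3 targets.
print(defended_targets([5, 4, 0], [4, 3, 2]))  # -> 2
```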
We discuss local linear smooth backfitting for additive non-parametric models. This procedure is well known for achieving optimal convergence rates under appropriate smoothness conditions. In particular, it allows each component of an additive model to be estimated with the same asymptotic accuracy as if the other components were known. The asymptotic analysis of local linear smooth backfitting is rather complex because a detailed discussion typically requires overwhelming notation. In this paper we interpret the local linear smooth backfitting estimator as a projection of the data onto a linear space equipped with a suitably chosen semi-norm. This approach simplifies both the mathematical analysis and the intuitive understanding of the properties of this version of smooth backfitting.
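In standard notation (a schematic statement, not copied from the paper), the additive model and the projection view of the estimator read:

```latex
% Schematic: the additive model and the smooth backfitting estimator as
% the closest additive function to the data in a suitable semi-norm.
\[
  Y = m_0 + \sum_{j=1}^{d} m_j(X_j) + \varepsilon,
  \qquad
  \hat m = \operatorname*{arg\,min}_{g \in \mathcal{G}_{\mathrm{add}}}
           \lVert Y - g \rVert^2 ,
\]
% where $\mathcal{G}_{\mathrm{add}}$ is the linear space of additive
% functions and $\lVert\cdot\rVert$ is the suitably chosen semi-norm.
```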
We propose and study a class of rearrangement problems under a novel pick-n-swap prehensile manipulation model, in which a robotic manipulator, capable of carrying an item and making item swaps, is tasked to sort items stored in lattices of variable dimensions in a time-optimal manner. We systematically analyze the intrinsic optimality structure, which is fairly rich and intriguing, under different levels of item distinguishability (fully labeled, where each item has a unique label, or partially labeled, where multiple items may be of the same type) and different lattice dimensions. Focusing on the most practical setting of one and two dimensions, we develop low-polynomial-time cycle-following algorithms that optimally perform rearrangements on 1D lattices under both fully- and partially-labeled settings. On the other hand, we show that rearrangement on 2D and higher-dimensional lattices is computationally intractable to solve optimally. Despite this NP-hardness, we prove that efficient cycle-following algorithms remain asymptotically optimal in expectation for the 2D fully- and partially-labeled settings, exploiting the fact that random permutations induce only a small number of cycles. We further improve these algorithms to provide 1.x-optimality when the number of items is small. Simulation studies corroborate the effectiveness of our algorithms.
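A minimal sketch of cycle-following for the fully-labeled 1D case (our simplification; the pick/swap/place action naming is ours):

```python
# Cycle-following: pick up one misplaced item, carry it to its goal slot,
# swap with the occupant, and repeat until the cycle closes. Each cycle of
# length c costs one pick, c-1 swaps, and one place.
def cycle_following(lattice):
    """lattice[i] = label of the item at slot i; sorted iff lattice[i] == i."""
    actions = []
    for start in range(len(lattice)):
        if lattice[start] == start:
            continue  # already in place, or fixed by an earlier cycle
        held = lattice[start]                 # pick up the misplaced item
        actions.append(("pick", start))
        while held != start:                  # follow the cycle
            goal = held                       # the held item belongs at slot `goal`
            held, lattice[goal] = lattice[goal], held  # place it, grab the occupant
            actions.append(("swap", goal))
        lattice[start] = held                 # close the cycle
        actions.append(("place", start))
    return actions

print(cycle_following([1, 2, 0, 3]))
# [('pick', 0), ('swap', 1), ('swap', 2), ('place', 0)]
```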
A syntactic model is presented for the specification of finite-state synchronous digital logic systems with complex input/output interfaces, which control the flow of data between opaque computational elements, and for the composition of compatible systems to form closed-loop systems with no inputs or outputs. This model improves upon similar existing models with a novel approach to specifying input and output ports in a uniform and symmetric way. An automaton model is also presented for encoding arbitrary computational processes, and an algorithm is given to generate an automaton representation of a closed-loop system. Using the automaton model, the problem of timing-agnostic verification of closed-loop systems against a desired behavioural specification, encoded as the similarity of closed-loop systems in terms of the set of computations performed, is shown to be decidable. The relationship between the models and real-world implementations of systems is discussed.
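The following toy Python sketch (ours, not the paper's algorithm) conveys the flavor of the closed-loop construction: two synchronous components each read the other's previous output, so the composite is a finite automaton over the product state space with no external inputs or outputs.

```python
# Two synchronous components stepped in lockstep; each reads the other's
# output from the previous cycle, yielding a closed-loop system.
def closed_loop_run(step_a, step_b, a, b, out_a, out_b, steps):
    """step_x(state, incoming_signal) -> (next_state, output)."""
    trace = []
    for _ in range(steps):
        # Both components step simultaneously on last cycle's outputs.
        a, new_out_a = step_a(a, out_b)
        b, new_out_b = step_b(b, out_a)
        out_a, out_b = new_out_a, new_out_b
        trace.append((out_a, out_b))
    return trace

# Example: a mod-3 counter (ignores its input) driving a parity tracker.
inc = lambda s, x: ((s + 1) % 3, (s + 1) % 3)
par = lambda s, x: (s ^ (x & 1), s ^ (x & 1))
print(closed_loop_run(inc, par, 0, 0, 0, 0, steps=5))
```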
Technology companies have been leading the way to a renewable energy transformation by investing in renewable energy sources to reduce the carbon footprint of their datacenters. In addition to helping build new solar and wind farms, companies make power purchase agreements or purchase carbon offsets rather than relying on renewable energy every hour of the day, every day of the week (24/7). Relying on renewable energy 24/7 is challenging due to the intermittent nature of wind and solar energy. Inherent variations in solar and wind energy production cause excess or insufficient supply at different times. To cope with the fluctuations of renewable energy generation, multiple solutions must be applied, including capacity sizing with a mix of solar and wind power, energy storage options, and carbon-aware workload scheduling. However, the carbon-optimal solution varies depending on the region and datacenter workload characteristics. Existing work in this space does not give a holistic view of the trade-offs of each solution and often ignores the embodied carbon cost of the solutions. In this work, we provide a framework, Carbon Explorer, to analyze the multi-dimensional solution space, taking into account both the operational and embodied carbon footprints of the solutions, to help datacenters operate on renewable energy 24/7. The solutions we analyze include capacity sizing with a mix of solar and wind power, battery storage, and carbon-aware workload scheduling, which entails shifting workloads from times when renewable supply is scarce to times when it is abundant. Carbon Explorer will be open-sourced soon.
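As a simplified sketch of the workload-shifting idea (ours, not the Carbon Explorer implementation; the flexible fraction and proportional packing rule are assumptions):

```python
# Shift deferrable work from hours with a renewable deficit to hours with
# surplus renewable supply, leaving the inflexible base load in place.
def shift_load(demand, renewable, flexible_fraction=0.3):
    """Hourly demand and renewable supply (same units); returns shifted demand."""
    movable = [d * flexible_fraction for d in demand]
    fixed = [d - m for d, m in zip(demand, movable)]
    surplus = [max(r - f, 0.0) for r, f in zip(renewable, fixed)]
    total_surplus = sum(surplus)
    if total_surplus == 0:
        return demand                    # no surplus hours to shift toward
    pool = sum(movable)                  # total deferrable work
    # Pack deferrable work into surplus hours, proportionally to surplus.
    return [f + pool * s / total_surplus for f, s in zip(fixed, surplus)]

demand    = [10, 10, 10, 10]
renewable = [2, 14, 16, 4]
print([round(x, 1) for x in shift_load(demand, renewable)])
```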
Architectural education faces limitations due to its tactile approach to learning in classrooms equipped with only 2-D and 3-D tools. At a higher level, virtual reality offers the potential to deliver more information to individuals undergoing design learning. This paper investigates a hypothesis that establishes grounds for new research in Building Information Modeling (BIM) and Virtual Reality (VR). The hypothesis aims to determine best practices for content creation and tactile virtual object interaction, which can potentially improve learning in architectural and construction education with a less costly approach and easier access to well-known buildings. We explored this hypothesis through a step-by-step game design demonstration in VR, showcasing the exploration of the Farnsworth House and reproducing its assemblage at different levels of game difficulty that correspond to varying BIM levels of development (LODs). The game design prototype also provides an entry point and a learning style for users with or without formal architectural or construction education who seek to understand design tectonics within diverse or cross-disciplinary study cases. This paper shows that developing geometric abstract concepts of design pedagogy, using varying LODs for game content and levels, and utilizing newly developed features such as snap-to-grid, snap-to-position, and snap-to-angle to improve user engagement during assemblage may provide deeper learning objectives for architectural precedent study.
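The snapping assists mentioned above reduce to simple quantization; a minimal sketch (ours, not the prototype's engine code; the cell size and angle step are assumed parameters):

```python
# Snap placements to the nearest grid point / angle increment so that
# assembled parts align cleanly during VR assemblage.
def snap_to_grid(pos, cell=0.5):
    return tuple(round(c / cell) * cell for c in pos)

def snap_to_angle(deg, step=15.0):
    return round(deg / step) * step % 360

print(snap_to_grid((1.23, 0.48, 2.91)))  # -> (1.0, 0.5, 3.0)
print(snap_to_angle(47.0))               # -> 45.0
```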
Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm that has drawn tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states of the participating devices. In theory, heterogeneity can exert a huge influence on the FL training process, e.g., by making a device unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol while taking heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two major causes of the degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers should account for heterogeneity during evaluation. On the other hand, our findings urge system providers to design specific mechanisms to mitigate its impacts.
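A schematic sketch of how such heterogeneity enters a FedAvg-style round (our illustration, not the paper's platform; the Device class, failure_rate, and the toy local update are assumptions):

```python
import random

# Some sampled devices fail or time out, so only the survivors' updates are
# aggregated, which both slows training and biases participation.
class Device:
    def __init__(self, bias):
        self.bias = bias
    def local_train(self, model):
        # Stand-in for local SGD: nudge each weight toward this device's data.
        return [w + self.bias for w in model]

def fl_round(model, devices, sample_size=10, failure_rate=0.2):
    sampled = random.sample(devices, min(sample_size, len(devices)))
    updates = [d.local_train(model) for d in sampled
               if random.random() >= failure_rate]   # drop failed devices
    if not updates:
        return model                                  # wasted round
    return [sum(ws) / len(updates) for ws in zip(*updates)]  # FedAvg mean

devices = [Device(bias=random.gauss(0, 1)) for _ in range(100)]
print(fl_round([0.0, 0.0], devices))
```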
Objects are made of parts, each with distinct geometry, physics, functionality, and affordances. Developing such a distributed, physical, interpretable representation of objects will facilitate intelligent agents to better explore and interact with the world. In this paper, we study physical primitive decomposition---understanding an object through its components, each with physical and geometric attributes. As annotated data for object parts and physics are rare, we propose a novel formulation that learns physical primitives by explaining both an object's appearance and its behaviors in physical events. Our model performs well on block towers and tools in both synthetic and real scenarios; we also demonstrate that visual and physical observations often provide complementary signals. We further present ablation and behavioral studies to better understand our model and contrast it with human performance.
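Schematically (our notation, not the paper's exact loss), such a formulation trains the primitive decomposition to explain both signals jointly:

```latex
% A schematic objective: the primitive decomposition z must explain both
% the object's appearance and its behavior in physical events.
\[
  \mathcal{L}(z) =
  \mathcal{L}_{\mathrm{appearance}}\bigl(\mathrm{render}(z),\, I\bigr)
  + \lambda\,
  \mathcal{L}_{\mathrm{physics}}\bigl(\mathrm{simulate}(z),\, \tau\bigr),
\]
% where $I$ is the observed image, $\tau$ the object's observed trajectory
% in a physical event, and $\lambda$ a weighting term (our assumption).
```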