This white paper was written by the members of the Work Group focusing on design practices (WG1) of COST Action 18230 - Interactive Narrative Design for Complexity Representations (INDCOR). It presents an overview of Interactive Digital Narrative (IDN) design for complexity representation through IDN workflows and methodologies, authoring tools and applications. It provides definitions of the central elements of IDN alongside its best practices, designs and methods. Finally, it describes complexity as a feature of IDNs, with related examples. In summary, this white paper serves as an orienting map for the field of IDN design, situating it within the contemporary panorama while charting the ground for its promising future.
In today's world, many technologically advanced countries have recognized that real power lies not in physical strength but in educated minds. As a result, countries everywhere have begun restructuring their education systems to meet the demands of technology. As a country in the midst of these developments, we cannot remain indifferent to this transformation in education. In the Information Age of the 21st century, rapid access to information is crucial for the development of individuals and societies. To take our place among the knowledge societies of a world moving rapidly towards globalization, we must follow technological innovations closely and meet the requirements of technology. This can be achieved by providing learning opportunities to anyone who wishes to pursue education in their area of interest. This study examines the advantages and disadvantages of internet-based learning compared to traditional teaching methods, the importance of computer usage in internet-based learning, the negative factors affecting internet-based learning, and recommendations for addressing these issues. In today's world, it is impossible to talk about education without technology, or technology without education.
Compositional data are currently defined as positive vectors whose element ratios are of interest to the researcher. Financial statement analysis by means of accounting ratios fits this definition to the letter. Compositional data analysis solves the major problems in the statistical analysis of standard financial ratios at the industry level, such as skewness, non-normality, non-linearity and the dependence of results on the choice of which accounting figure goes in the numerator and which in the denominator of the ratio. In spite of this, compositional applications to financial statement analysis are still rare. In this article, we present some transformations within compositional data analysis that are particularly useful for financial statement analysis. We show how to compute industry or sub-industry means of standard financial ratios from a compositional perspective. We show how to visualise firms in an industry with a compositional biplot, to classify them with compositional cluster analysis, and to relate financial and non-financial indicators with compositional regression models. We show an application to the accounting statements of Spanish wineries using DuPont analysis, and a step-by-step tutorial for the compositional freeware CoDaPack.
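The compositional machinery described above can be sketched in a few lines. The snippet below shows the centered log-ratio (clr) transform and a closed geometric mean, the standard way to obtain a compositional center for an industry; the firm figures are invented for illustration, and this is a minimal sketch under common compositional-data conventions, not CoDaPack's implementation.

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of positive compositions."""
    logs = np.log(x)
    return logs - logs.mean(axis=-1, keepdims=True)

def compositional_mean(X):
    """Closed geometric mean of compositions (rows of X)."""
    g = np.exp(np.log(X).mean(axis=0))  # component-wise geometric mean
    return g / g.sum()                  # closure back to unit sum

# Hypothetical accounting figures for three firms, already closed to 1:
# columns = (net income, sales, total assets), as in DuPont analysis.
X = np.array([
    [0.05, 0.60, 0.35],
    [0.02, 0.70, 0.28],
    [0.08, 0.55, 0.37],
])
center = compositional_mean(X)
# An industry-level "mean ratio", e.g. return on sales = net income / sales,
# is then read off the compositional center rather than averaged directly:
ros = center[0] / center[1]
```

Averaging the ratio of center components, instead of averaging each firm's raw ratio, is what avoids the skewness and numerator/denominator-choice problems mentioned above.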
While the Blackboard Architecture has been in use since the 1980s, it has recently been proposed for modeling computer networks to assess their security. To do this, it must account for complex network attack patterns involving multiple attack routes and possible mid-attack system state changes. This paper proposes a data structure that can be used to model paths from an ingress point to a given egress point in Blackboard Architecture-modeled computer networks. It is designed to contain the pertinent information required for a systematic traversal through a changing network. This structure, called a reality path, represents a single potential pathway through the network with a given set of facts in a particular sequence of states. Another structure, called a variant, is used during the traversal of nodes (called containers) modeled in the network. The two structures - reality paths and variants - facilitate the use of a traversal algorithm that finds all possible attack paths in Blackboard Architecture-modeled networks. This paper introduces and assesses the efficacy of variants and reality paths.
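A minimal sketch of how reality paths and variants might be represented is shown below. The field names, the `extend`/`visited` helpers, and the example facts are illustrative assumptions, not the paper's exact structure; the sketch only shows the idea of a path as an ordered record of (container, facts) steps.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Variant:
    """Hypothetical per-container record tracked during traversal:
    which container was entered and which facts held at that point."""
    container: str
    facts: frozenset

@dataclass
class RealityPath:
    """One potential attack pathway: an ordered sequence of variants
    from an ingress container toward an egress container."""
    steps: list = field(default_factory=list)

    def extend(self, container, facts):
        """Return a new path with one more traversal step appended."""
        new = RealityPath(steps=list(self.steps))
        new.steps.append(Variant(container, frozenset(facts)))
        return new

    def visited(self, container, facts):
        """True if this container was already seen with these facts,
        guarding the traversal against revisiting identical states."""
        return Variant(container, frozenset(facts)) in self.steps

# Sketch of usage: grow a path as the traversal moves through containers.
p = RealityPath().extend("ingress", {"port_open"})
p = p.extend("web_server", {"port_open", "cred_stolen"})
```

Keeping paths immutable-by-copy, as in `extend`, lets an exhaustive traversal branch freely at each container without paths corrupting one another.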
The nature of interaction within Interactive Digital Narrative (IDN) is inherently complex. This is due, in part, to the wide range of potential interaction modes through which IDNs can be conceptualised, produced and deployed, and the complex dynamics this might entail. The purpose of this white paper is to provide IDN practitioners with essential knowledge about the nature of interaction in IDNs and to allow them to make informed design decisions that lead to the incorporation of complexity thinking throughout the design pipeline, the implementation of the work, and the ways its audience perceives it. This white paper is concerned with the complexities of authoring, delivering and processing dynamic interactive content from the perspectives of both creators and audiences. It is part of a series of publications by the INDCOR COST Action 18230 (Interactive Narrative Design for Complexity Representations), which together clarify how IDNs representing complexity can be understood and applied (INDCOR WP 0 - 5, 2023).
In recent years, online social networks have been the target of adversaries who seek to introduce discord into societies, to undermine democracies and to destabilize communities. Often the goal is not to favor a certain side of a conflict but to increase disagreement and polarization. To get a mathematical understanding of such attacks, researchers use opinion-formation models from sociology, such as the Friedkin--Johnsen model, and formally study how much discord the adversary can produce when altering the opinions of only a small set of users. In this line of work, it is commonly assumed that the adversary has full knowledge of the network topology and the opinions of all users. However, the latter assumption is often unrealistic in practice, where user opinions are unavailable or simply difficult to estimate accurately. To address this concern, we raise the following question: Can an attacker sow discord in a social network, even when only the network topology is known? We answer this question affirmatively. We present approximation algorithms for detecting a small set of users who are highly influential for the disagreement and polarization in the network. We show that when the adversary radicalizes these users, and if the initial disagreement/polarization in the network is not very high, our method gives a constant-factor approximation with respect to the setting in which the user opinions are known. To find the set of influential users, we provide a novel approximation algorithm for a variant of MaxCut in graphs with positive and negative edge weights. We experimentally evaluate our methods, which have access only to the network topology, and find that they perform similarly to methods that have access to both the network topology and all user opinions. We further present an NP-hardness proof, resolving an open question posed by Chen and Racz [IEEE Trans. Netw. Sci. Eng., 2021].
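To make the underlying combinatorial problem concrete, the snippet below implements the classic local-search heuristic for MaxCut on a graph with positive and negative edge weights: flip a vertex across the cut whenever doing so increases the cut value. This is a simple baseline for the same objective, not the paper's approximation algorithm.

```python
def local_search_maxcut(n, edges):
    """Local-search heuristic for MaxCut with signed edge weights.
    edges: list of (u, v, weight) triples; weights may be negative."""
    side = [False] * n  # all vertices start on the same side

    def gain(v):
        # Change in cut weight if v switches sides: a currently-cut
        # incident edge becomes uncut (-w), an uncut one becomes cut (+w).
        g = 0.0
        for (a, b, w) in edges:
            if v in (a, b):
                g += -w if side[a] != side[b] else w
        return g

    improved = True
    while improved:  # terminates: cut value strictly increases per flip
        improved = False
        for v in range(n):
            if gain(v) > 1e-12:
                side[v] = not side[v]
                improved = True
    value = sum(w for (a, b, w) in edges if side[a] != side[b])
    return side, value

# Toy signed graph: two "disagreement" edges and one antagonistic edge.
side, value = local_search_maxcut(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, -1.0)])
```

With negative weights the usual 0.878 guarantee of Goemans-Williamson no longer applies directly, which is why the variant studied in the abstract needs its own approximation analysis.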
Structural bias or segregation of networks refers to situations where two or more disparate groups are present in the network, such that the groups are highly connected internally but loosely connected to each other. In many cases it is of interest to increase the connectivity of disparate groups so as to, e.g., minimize social friction or expose individuals to diverse viewpoints. A commonly-used mechanism for increasing network connectivity is to add edge shortcuts between pairs of nodes. In many applications of interest, edge shortcuts typically translate to recommendations, e.g., what video to watch or what news article to read next. The problem of reducing structural bias or segregation via edge shortcuts has recently been studied in the literature, and random walks have been an essential tool for modeling navigation and connectivity in the underlying networks. Existing methods, however, either do not offer approximation guarantees, or engineer the objective so that it satisfies certain desirable properties that simplify the optimization task. In this paper we address the problem of adding a given number of shortcut edges to the network so as to directly minimize the average hitting time and the maximum hitting time between two disparate groups. Our algorithm for minimizing the average hitting time is a greedy bicriteria algorithm that relies on supermodularity. In contrast, the maximum hitting time is not supermodular. Despite this, we develop an approximation algorithm for that objective as well, by leveraging connections with the average hitting time and the asymmetric k-center problem.
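To illustrate the quantities involved, here is a naive sketch: random-walk hitting times computed by solving the standard linear system, and a brute-force greedy that adds the shortcut reducing the average hitting time the most. The paper's bicriteria algorithm is more sophisticated; everything below, including the candidate-edge interface, is an illustrative assumption.

```python
import numpy as np

def avg_hitting_time(A, sources, targets):
    """Average random-walk hitting time from `sources` to the set
    `targets`, given adjacency matrix A: solve (I - Q) h = 1 on the
    non-target states, where Q restricts the transition matrix."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)      # transition matrix
    free = [i for i in range(n) if i not in targets]
    Q = P[np.ix_(free, free)]
    h = np.linalg.solve(np.eye(len(free)) - Q, np.ones(len(free)))
    full = np.zeros(n)
    full[free] = h                            # h = 0 on target states
    return full[list(sources)].mean()

def greedy_shortcuts(A, sources, targets, candidates, k):
    """Greedily add up to k shortcut edges, each round picking the
    candidate that most reduces the average hitting time."""
    A = A.copy().astype(float)
    chosen = []
    for _ in range(k):
        best, best_val = None, avg_hitting_time(A, sources, targets)
        for (u, v) in candidates:
            if u != v and A[u, v] == 0:
                A2 = A.copy()
                A2[u, v] = A2[v, u] = 1.0
                val = avg_hitting_time(A2, sources, targets)
                if val < best_val:
                    best, best_val = (u, v), val
        if best is None:
            break
        u, v = best
        A[u, v] = A[v, u] = 1.0
        chosen.append(best)
    return chosen, A

# Path graph 0-1-2-3; the shortcut (0, 3) is the obvious greedy pick.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
chosen, A_new = greedy_shortcuts(A, sources=[0], targets={3},
                                 candidates=[(0, 3), (1, 3)], k=1)
```

Supermodularity of the average hitting time is exactly what lets a greedy scheme of this shape come with guarantees; the maximum hitting time lacks this property, hence the detour through the asymmetric k-center problem.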
While a strength of Interactive Digital Narrative (IDN) is its support for multiperspectivity, this also poses a substantial challenge for its evaluation. Moreover, evaluation has to assess both the system's ability to represent a complex reality and the user's understanding of that complex reality as a result of the experience of interacting with the system. This is needed to measure an IDN's efficiency and effectiveness in representing the chosen complex phenomenon. Here we present some empirical methods employed by INDCOR members in their research, including UX toolkits and scales. In particular, we consider the impact of IDN on transformative learning and its evaluation through self-reporting and other alternatives.
Multivariate sequential data collected in practice often exhibit temporal irregularities, including nonuniform time intervals and component misalignment. If uneven spacing and asynchrony are endogenous characteristics of the data rather than a result of insufficient observation, the information content of these irregularities plays a defining role in characterizing the multivariate dependence structure. Existing approaches to probabilistic forecasting either overlook the resulting statistical heterogeneities, are susceptible to imputation biases, or impose parametric assumptions on the data distribution. This paper proposes an end-to-end solution that overcomes these limitations by letting the observation arrival times, which are at the core of temporal irregularities, play the central role in model construction. To acknowledge temporal irregularities, we first give each component its own hidden states, so that the arrival times can dictate when, how, and which hidden states to update. We then develop a conditional flow representation to non-parametrically model the data distribution, which is typically non-Gaussian, and supervise this representation by carefully factorizing the log-likelihood objective to select the conditional information that facilitates capturing time variation and path dependency. The broad applicability and superiority of the proposed solution are confirmed by comparing it with existing approaches through ablation studies and testing on real-world datasets.
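A toy sketch of arrival-time-driven state updates: each component keeps its own hidden state, and an incoming observation decays and refreshes only the state of the component it belongs to, so irregular arrival times dictate when and which states change. The decay-and-tanh rule and all names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def update_states(H, decays, t_last, obs):
    """H: per-component hidden states, shape (components, dim).
    obs: irregular, asynchronous stream of (time, component, value)."""
    for (t, i, x) in obs:
        dt = t - t_last[i]                     # elapsed time for component i
        H[i] = H[i] * np.exp(-decays[i] * dt)  # time-aware decay of state i only
        H[i] = np.tanh(H[i] + x)               # simple refresh with new value
        t_last[i] = t
    return H, t_last

# Two components with 4-dim states; observations arrive unevenly and
# asynchronously, and each touches exactly one component's state.
H, t_last = update_states(np.zeros((2, 4)), np.array([0.1, 0.2]),
                          np.zeros(2), [(0.5, 0, 1.0), (2.0, 1, -0.3)])
```

The point of the sketch is the bookkeeping: because updates are keyed to arrival times rather than a fixed grid, no imputation of missing components is ever needed.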
Model complexity is a fundamental problem in deep learning. In this paper we present a systematic overview of the latest studies on model complexity in deep learning. The model complexity of deep learning can be categorized into expressive capacity and effective model complexity. We review the existing studies on these two categories along four important factors: model framework, model size, optimization process and data complexity. We also discuss applications of deep-learning model complexity, including understanding model generalization capability, model optimization, and model selection and design. We conclude by proposing several interesting future directions.
Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related, and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the Predictive, Descriptive, Relevant (PDR) framework for discussing interpretations. The PDR framework provides three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post-hoc categories, with sub-groups including sparsity, modularity and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often under-appreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.