The rising diffusion of information systems (IS) throughout society poses an increasingly serious threat to privacy as a social value. One approach to alleviating this threat is to establish transparency of information privacy practices (TIPP) so that consumers can better understand how their information is processed. However, the design of transparency artifacts (e.g., privacy notices) has clearly not followed this approach, given the ever-increasing volume of information processing. Hence, consumers face a situation where they cannot see the forest for the trees when aiming to ascertain whether information processing meets their privacy expectations. A key problem is that overly comprehensive information presentation results in information overload and is thus counterproductive for establishing TIPP. We depart from the extant design logic of transparency artifacts and develop a theoretical foundation (TIPP theory) for transparency artifact designs useful for establishing TIPP from the perspective of privacy as a social value. We present TIPP theory in two parts to capture the sociotechnical interplay. The first part translates abstract knowledge on the IS artifact and privacy into a description of the social subsystems of transparency artifacts, and the second part conveys prescriptive design knowledge in the form of a corresponding IS design theory. TIPP theory establishes a bridge from the complexity of the privacy concept to a metadesign for transparency artifacts that is useful for establishing TIPP in any IS. In essence, transparency artifacts must accomplish more than offering comprehensive information; they must also be adaptive to the current information needs of consumers.
Online social networks offer vast opportunities for computational social science, but effective user embedding is crucial for downstream tasks. Traditionally, researchers have used pre-defined network-based user features, such as degree and centrality measures, and/or content-based features, such as posts and reposts. However, these measures may not capture the complex characteristics of social media users. In this study, we propose a user embedding method based on the URL domain co-occurrence network, which is simple but effective for representing social media users in competing events. We assessed the performance of this method in binary classification tasks using benchmark datasets that included Twitter users related to COVID-19 infodemic topics (QAnon, Biden, Ivermectin). Our results revealed that user embeddings generated directly from the retweet network, and those based on language, performed below expectations. In contrast, our domain-based embeddings outperformed these methods while reducing computation time. These findings suggest that domain-based user embedding can serve as an effective tool to characterize social media users participating in competing events, such as political campaigns and public health crises.
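The abstract leaves the embedding construction at a high level. As a minimal sketch of one plausible instantiation (the toy users and domains are made up, and truncated SVD is only one of several reasonable ways to factorize the co-occurrence matrix), user vectors can be derived from the URL domains they share:

```python
import numpy as np

# Hypothetical toy data: which URL domains each user has shared.
user_domains = {
    "u1": {"cnn.com", "nytimes.com"},
    "u2": {"cnn.com", "nytimes.com", "npr.org"},
    "u3": {"breitbart.com", "infowars.com"},
    "u4": {"breitbart.com", "npr.org"},
}

domains = sorted({d for ds in user_domains.values() for d in ds})
d_idx = {d: i for i, d in enumerate(domains)}

# Binary user-by-domain incidence matrix.
M = np.zeros((len(user_domains), len(domains)))
for u, (_, ds) in enumerate(sorted(user_domains.items())):
    for d in ds:
        M[u, d_idx[d]] = 1.0

# Domain co-occurrence network: two domains are linked when the same
# user has shared both of them.
C = M.T @ M

# Low-rank domain embedding via truncated SVD of the co-occurrence matrix.
U, s, _ = np.linalg.svd(C)
k = 2
domain_emb = U[:, :k] * s[:k]

# A user embedding as the mean of the embeddings of their shared domains.
user_emb = (M @ domain_emb) / M.sum(axis=1, keepdims=True)
print(user_emb.shape)  # (4, 2)
```

The appeal of this construction is that the co-occurrence matrix is tiny (domains, not users), which is consistent with the reduced computation time reported above.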
Statistical techniques are needed to analyse data structures with complex dependencies such that clinically useful information can be extracted. Individual-specific networks, which capture dependencies in complex biological systems, are often summarized by graph-theoretical features. These features, which lend themselves to outcome modelling, can be subject to high variability due to arbitrary decisions in network inference and noise. Correlation-based adjacency matrices often need to be sparsified before meaningful graph-theoretical features can be extracted, requiring data analysts to determine an optimal threshold. To address this issue, we propose to incorporate a flexible weighting function over the full range of possible thresholds to capture the variability of graph-theoretical features over the threshold domain. The potential of this approach, which extends concepts from functional data analysis to a graph-theoretical setting, is explored in a plasmode simulation study using real functional magnetic resonance imaging (fMRI) data from the Autism Brain Imaging Data Exchange (ABIDE) Preprocessed initiative. The simulations show that our modelling approach yields accurate estimates of the functional form of the weight function, improves inference efficiency, and achieves a comparable or reduced root mean square prediction error compared to competitor modelling approaches. This holds both in settings where complex functional forms underlie the outcome-generating process and in settings where a universal threshold value is employed. We demonstrate the practical utility of our approach by using resting-state fMRI data to predict biological age in children. Our study establishes the flexible modelling approach as a statistically principled, serious competitor to ad-hoc methods with superior performance.
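The core idea, replacing one arbitrary sparsification threshold with a weight function over the whole threshold range, can be pictured with a small sketch. The edge-density feature and the triangular weight below are illustrative stand-ins: the paper estimates the weight function from data rather than fixing it in advance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical individual-specific correlation matrix, e.g. from fMRI
# time series over 10 brain regions.
X = rng.normal(size=(200, 10))
A = np.corrcoef(X, rowvar=False)
np.fill_diagonal(A, 0.0)

def graph_feature(A, t):
    """Edge density of the network sparsified at threshold t."""
    adj = np.abs(A) >= t
    np.fill_diagonal(adj, False)
    n = adj.shape[0]
    return adj.sum() / (n * (n - 1))

# Instead of picking a single threshold, evaluate the feature on a grid...
thresholds = np.linspace(0.0, 1.0, 101)
dt = thresholds[1] - thresholds[0]
f = np.array([graph_feature(A, t) for t in thresholds])

# ...and summarize it with a weight function w(t); here an arbitrary
# triangular weight peaking at t = 0.3, normalized to integrate to ~1.
w = np.maximum(0.0, 1.0 - np.abs(thresholds - 0.3) / 0.3)
w /= w.sum() * dt

# Weighted integral of the feature over the threshold domain.
weighted_feature = np.sum(w * f) * dt
print(round(weighted_feature, 3))
```

A single-threshold analysis corresponds to a weight function that is a spike at one point; the functional-data view lets the data decide which threshold regions carry predictive information.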
Over the past decades, cognitive neuroscientists and behavioral economists have recognized the value of describing the process of decision making in detail and modeling the emergence of decisions over time. For example, the time it takes to decide can reveal more about an agent's true hidden preferences than the decision alone. Similarly, data that track the ongoing decision process, such as eye movements or neural recordings, contain critical information that can be exploited, even if no decision is made. Here, we argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time and should incorporate related process data to improve AI predictions in general and human-AI interactions in particular. First, we introduce a well-established computational framework that assumes decisions to emerge from the noisy accumulation of evidence, and we present related empirical work in psychology, neuroscience, and economics. Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making. Finally, we outline how a more principled inclusion of the evidence-accumulation framework into the training and use of AI can help to improve human-AI interactions in the future.
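The evidence-accumulation framework referred to here is commonly formalized as a drift-diffusion model. A minimal simulation (parameter values chosen arbitrarily for illustration) shows how both choices and response times emerge from a single noisy process:

```python
import numpy as np

rng = np.random.default_rng(42)

def ddm_trial(drift=0.5, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Simulate one drift-diffusion trial: noisy evidence accumulates
    until it hits +threshold (choice A) or -threshold (choice B)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= threshold else 0), t

choices, rts = zip(*(ddm_trial() for _ in range(500)))
print(f"P(choice A) = {np.mean(choices):.2f}, mean RT = {np.mean(rts):.2f}s")
```

The same drift parameter governs both which option wins and how long the decision takes, which is why response times and other process data carry information about preferences beyond the choice itself.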
This article investigates synthetic model-predictive control (MPC) problems to demonstrate that an increased precision of the internal prediction model (PM) automatically entails an improvement of the controller as a whole. In contrast to reinforcement learning (RL), MPC uses the PM to predict subsequent states of the controlled system (CS), instead of directly recommending suitable actions. To assess how the precision of the PM translates into the quality of the model-predictive controller, we compare a DNN-based PM to the optimal baseline PM for three well-known control problems of varying complexity. The baseline PM achieves perfect accuracy by accessing the simulation of the CS itself. Based on the obtained results, we argue that an improvement of the PM will always improve the controller as a whole, without considering the impact of other components such as action selection (which, in this article, relies on evolutionary optimization).
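The loop described above (a PM predicting subsequent CS states inside an optimizer over candidate action sequences) can be sketched as follows. Note the substitutions: random shooting stands in for the article's evolutionary optimization, and a 1D point mass stands in for its benchmark control problems; the "optimal baseline PM" is mimicked by letting the controller predict with the true dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(state, action):
    """True controlled system: a 1D point mass (position, velocity)."""
    pos, vel = state
    vel = vel + 0.1 * action
    pos = pos + 0.1 * vel
    return np.array([pos, vel])

# The optimal baseline PM has access to the simulator itself.
predict = dynamics

def mpc_step(state, horizon=10, n_candidates=200):
    """Random-shooting MPC (a simplified stand-in for evolutionary
    optimization): roll candidate action sequences through the PM and
    return the first action of the cheapest sequence."""
    best_cost, best_a0 = np.inf, 0.0
    for _ in range(n_candidates):
        actions = rng.uniform(-1, 1, size=horizon)
        s, cost = state, 0.0
        for a in actions:
            s = predict(s, a)
            cost += s[0] ** 2 + 0.1 * s[1] ** 2  # drive the mass to rest at 0
        if cost < best_cost:
            best_cost, best_a0 = cost, actions[0]
    return best_a0

state = np.array([1.0, 0.0])
for _ in range(50):
    state = dynamics(state, mpc_step(state))
print(f"final position: {state[0]:.3f}")
```

Swapping `predict` for a learned DNN is the only change needed to reproduce the article's comparison: the controller structure stays fixed, so any quality gap is attributable to PM precision.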
In natural language processing (NLP), deep neural networks (DNNs) can model complex interactions between contextual features and have achieved impressive results on a range of NLP tasks. Prior works on feature interaction attribution mainly focus on studying symmetric interactions, which only explain the additional influence of a set of words in combination and fail to capture the asymmetric influences that contribute to model predictions. In this work, we propose an asymmetric feature interaction attribution explanation model that aims to explore asymmetric higher-order feature interactions in the inference of deep neural NLP models. By representing our explanation with a directed interaction graph, we experimentally demonstrate the interpretability of the graph in discovering asymmetric feature interactions. Experimental results on two sentiment classification datasets show the superiority of our model over state-of-the-art feature interaction attribution methods in identifying influential features for model predictions. Our code is available at //github.com/StillLu/ASIV.
In epidemiology and social sciences, propensity score methods are popular for estimating treatment effects using observational data, and multiple imputation is popular for handling covariate missingness. However, how to appropriately use multiple imputation for propensity score analysis is not completely clear. This paper aims to bring clarity to the consistency (or lack thereof) of methods that have been proposed, focusing on the within approach (where the effect is estimated separately in each imputed dataset and then the multiple estimates are combined) and the across approach (where typically propensity scores are averaged across imputed datasets before being used for effect estimation). We show that the within method is valid and can be used with any causal effect estimator that is consistent in the full-data setting. Existing across methods are inconsistent, but a different across method that averages the inverse probability weights across imputed datasets is consistent for propensity score weighting. We also comment on methods that rely on imputing a function of the missing covariate rather than the covariate itself, including imputation of the propensity score and of the probability weight. Based on consistency results and practical flexibility, we recommend generally using the standard within method. Throughout, we provide intuition to make the results meaningful to the broad audience of applied researchers.
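The mechanics of the two pooling strategies can be illustrated with a deliberately simplified simulation. Caveats: the marginal normal imputation below is not proper multiple imputation (it ignores the treatment and outcome), so both estimates carry some bias; the point is only to show where the within approach (pool effect estimates) and the consistent across variant (pool inverse probability weights) diverge in the computation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2000, 20                     # sample size, number of imputations

# Hypothetical data: confounder X, treatment Z, outcome Y (true effect = 1).
X = rng.normal(size=n)
Z = rng.binomial(1, 1 / (1 + np.exp(-X)))
Y = 1.0 * Z + X + rng.normal(size=n)
X_obs = np.where(rng.random(n) < 0.3, np.nan, X)   # 30% missing (MCAR)

def fit_ps(X, Z):
    """Crude logistic fit of P(Z=1|X) via a few Newton (IRLS) steps."""
    b = np.zeros(2)
    D = np.column_stack([np.ones_like(X), X])
    for _ in range(25):
        p = 1 / (1 + np.exp(-D @ b))
        b += np.linalg.solve(D.T @ (D * (p * (1 - p))[:, None]), D.T @ (Z - p))
    return 1 / (1 + np.exp(-D @ b))

# Simplistic marginal imputation of X (for illustration only).
obs = ~np.isnan(X_obs)
mu, sd = X_obs[obs].mean(), X_obs[obs].std()

within_estimates, all_weights = [], []
for _ in range(m):
    Xi = np.where(obs, X_obs, rng.normal(mu, sd, size=n))
    ps = fit_ps(Xi, Z)
    w = Z / ps + (1 - Z) / (1 - ps)
    all_weights.append(w)
    # Within: one IPW effect estimate per imputed dataset, pooled at the end.
    within_estimates.append(
        np.average(Y, weights=Z * w) - np.average(Y, weights=(1 - Z) * w))

within = np.mean(within_estimates)

# Consistent across variant: average the *weights*, then estimate once.
w_bar = np.mean(all_weights, axis=0)
across_w = (np.average(Y, weights=Z * w_bar)
            - np.average(Y, weights=(1 - Z) * w_bar))
print(f"within: {within:.2f}, across-weights: {across_w:.2f}")
```

The inconsistent across methods discussed in the paper would instead average the propensity scores `ps` themselves before forming weights; averaging the weights, as above, is the variant shown to be consistent.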
Many data science students and practitioners don't see the value in making time to learn and adopt good coding practices as long as the code "works". However, code standards are an important part of modern data science practice, and they play an essential role in the development of data acumen. Good coding practices lead to more reliable code and save more time than they cost, making them important even for beginners. We believe that principled coding is vital for quality data science practice. To effectively instill these practices within academic programs, instructors and programs need to begin establishing these practices early, to reinforce them often, and to hold themselves to a higher standard while guiding students. We describe key aspects of good coding practices for data science, illustrating with examples in R and in Python, though similar standards are applicable to other software environments. Practical coding guidelines are organized into a top ten list.
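The paper's top-ten list is not reproduced here, but a tiny before/after example (entirely hypothetical data and names) conveys the flavor of the practices at stake: descriptive names, no magic numbers, and documented, reusable functions.

```python
# Fragile but "working" style often seen in student code:
#   d2 = d[d.iloc[:, 3] > 100]

# The same step written to a higher standard:
MIN_DAILY_SALES = 100  # named constant instead of a magic number

def filter_high_sales(sales, threshold=MIN_DAILY_SALES):
    """Return only the records whose daily sales exceed `threshold`."""
    return [row for row in sales if row["daily_sales"] > threshold]

records = [{"daily_sales": 50}, {"daily_sales": 150}]
print(filter_high_sales(records))  # [{'daily_sales': 150}]
```

Both versions "work", but only the second one can be read, tested, and reused six months later, which is the time-saving argument made above.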
We study various aspects of the first-order transduction quasi-order, which provides a way of measuring the relative complexity of classes of structures based on whether one can encode the other using a formula of first-order (FO) logic. In contrast with the conjectured simplicity of the transduction quasi-order for monadic second-order logic, the FO-transduction quasi-order is very complex; in particular, we prove that the quotient partial order is not a lattice, although it is a bounded distributive join-semilattice, as is the subposet of additive classes. Many standard properties from structural graph theory and model theory naturally appear in this quasi-order. For example, we characterize transductions of paths, cubic graphs, and cubic trees in terms of bandwidth, bounded degree, and treewidth. We establish that the classes of all graphs with pathwidth at most~$k$, for $k\geq 1$, form a strict hierarchy in the FO-transduction quasi-order, and we leave open whether the same is true for treewidth. This leads to considering whether properties admit maximum or minimum classes in this quasi-order. We prove that many properties do not admit a maximum class, and that star forests are the minimum class that is not a transduction of a class with bounded degree, which can be seen as an instance of transduction duality. We close with a notion of dense analogues of sparse classes, and discuss several related conjectures. As a ubiquitous tool in our results, we prove a normal form for FO-transductions that manifests the locality of FO logic. This is one of several technical results about FO-transductions that we anticipate will be broadly useful.
Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion or link prediction, i.e., to predict whether a relationship not in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
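As a concrete instance of the model family such surveys cover, here is a toy TransE-style training loop (margin-based ranking against corrupted triples); the entities, facts, and hyperparameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# TransE, a classic translation-based embedding model: a true triple
# (h, r, t) should satisfy  h + r ≈ t  in the embedding space.
entities = ["paris", "france", "berlin", "germany"]
relations = ["capital_of"]
dim = 16
E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
R = {r: rng.normal(scale=0.1, size=dim) for r in relations}

def score(h, r, t):
    """Distance-based plausibility: lower = more likely to be true."""
    return np.linalg.norm(E[h] + R[r] - E[t])

triples = [("paris", "capital_of", "france"), ("berlin", "capital_of", "germany")]
lr, margin = 0.05, 1.0

for _ in range(200):
    for h, r, t in triples:
        t_neg = rng.choice([e for e in entities if e != t])  # corrupted tail
        d_pos = E[h] + R[r] - E[t]
        d_neg = E[h] + R[r] - E[t_neg]
        # Margin-based update: pull the true tail closer, push the
        # corrupted one away, but only while the margin is violated.
        if np.linalg.norm(d_pos) + margin > np.linalg.norm(d_neg):
            E[h] -= lr * (d_pos - d_neg)
            R[r] -= lr * (d_pos - d_neg)
            E[t] += lr * d_pos
            E[t_neg] -= lr * d_neg
    for e in entities:                  # keep entity vectors on the unit sphere
        E[e] /= np.linalg.norm(E[e])

# Link prediction: rank candidate tails for ("paris", "capital_of", ?).
ranking = sorted(entities, key=lambda t: score("paris", "capital_of", t))
print(ranking[0])
```

Link prediction, as described above, amounts to scoring every candidate triple not in the graph and ranking them; the surveyed models differ mainly in the scoring function.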
Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
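The coarse-to-fine mechanism, cropping the second-stage input to the first stage's candidate region, can be sketched with plain array operations. The volume shape, mask placement, and margin below are invented for illustration; the sketch only shows how a bounding-box crop shrinks the voxel count the second FCN must process.

```python
import numpy as np

# Hypothetical coarse mask from the first-stage FCN
# (True = candidate organ voxels in a 64^3 CT volume).
volume = np.zeros((64, 64, 64))
coarse_mask = np.zeros(volume.shape, dtype=bool)
coarse_mask[20:40, 25:45, 30:50] = True

def crop_to_candidate(volume, mask, margin=4):
    """The bounding box of the stage-1 prediction (plus a safety margin)
    defines the much smaller input region for the stage-2 FCN."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

roi = crop_to_candidate(volume, coarse_mask)
reduction = roi.size / volume.size
print(f"stage-2 input covers {reduction:.1%} of the original voxels")
```

Shrinking the input this way is what lets the second FCN spend its capacity on fine boundaries of small structures such as the pancreas rather than on empty background.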