Small and Medium Enterprises (SMEs) are pivotal in the global economy, accounting for over 90% of businesses and 60% of employment worldwide. Despite their significance, SMEs have been largely left out of cybersecurity initiatives, leaving them ill-equipped to deal with the growing frequency, sophistication, and destructiveness of cyber-attacks. We systematically reviewed the cybersecurity literature on SMEs published between 2017 and 2023, focusing on research discussing cyber threats, adopted controls, challenges, and constraints SMEs face in pursuing cybersecurity resilience. Our search yielded 916 studies, which we narrowed to 77 relevant papers. We identified 44 unique themes and categorised them as either novel findings or established knowledge. This distinction revealed that research on SMEs is shallow and has made little progress in understanding SMEs' roles, threats, and needs: studies often repeated early discoveries without replicating them or offering new insights. The existing research indicates that the main challenges SMEs face in attaining cybersecurity resilience are a lack of awareness of cybersecurity risks, limited cybersecurity literacy, and constrained financial resources, although resource availability varied between developed and developing countries. Our analysis indicated a relationship among these themes, suggesting that limited literacy is the root cause of both the awareness and resource-constraint issues.
Randomized experiments are a powerful methodology for data-driven evaluation of decisions or interventions. Yet, their validity may be undermined by network interference. This occurs when the treatment of one unit impacts not only its outcome but also that of connected units, biasing traditional treatment effect estimations. Our study introduces a new framework to accommodate complex and unknown network interference, moving beyond specialized models in the existing literature. Our framework, which we term causal message-passing, is grounded in a high-dimensional approximate message passing methodology and is specifically tailored to experimental design settings with prevalent network interference. Utilizing causal message-passing, we present a practical algorithm for estimating the total treatment effect and demonstrate its efficacy in four numerical scenarios, each with its unique interference structure.
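The bias that network interference introduces into traditional estimators can be seen in a small simulation. The sketch below is a toy linear-interference model, not the authors' causal message-passing algorithm; the network, effect sizes, and function names are illustrative assumptions. Under it, the naive difference-in-means recovers only the direct effect and misses the spillover component of the total treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Toy random network: each unit has ~10 neighbours on average.
A = (rng.random((n, n)) < 10 / n).astype(float)
np.fill_diagonal(A, 0)

tau_direct, tau_spill = 1.0, 0.5  # assumed direct and spillover effects

def outcomes(w):
    # Outcome = noise + direct effect + spillover from treated neighbours.
    frac_treated = A @ w / np.maximum(A.sum(1), 1)
    return rng.normal(size=n) + tau_direct * w + tau_spill * frac_treated

# Total treatment effect: treat everyone vs. treat no one (~1.5 here).
tte = outcomes(np.ones(n)).mean() - outcomes(np.zeros(n)).mean()

# Naive difference-in-means under a Bernoulli(0.5) design (~1.0 here):
# both groups see ~50% treated neighbours, so the spillover cancels out.
w = rng.binomial(1, 0.5, n)
y = outcomes(w)
naive = y[w == 1].mean() - y[w == 0].mean()
```

The gap between `tte` and `naive` is exactly the kind of bias the framework is designed to correct without assuming a specific interference model like this one.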
The advent of Web3 has ushered in a new era of decentralized digital economy, promising a shift from centralized authority to distributed, peer-to-peer interactions. However, the underlying infrastructure of this decentralized ecosystem often relies on centralized cloud providers, creating a paradoxical concentration of value and power. This paper investigates the mechanics of value accrual and extraction within the Web3 ecosystem, focusing on the roles and revenues of centralized clouds. Through an analysis of publicly available material, we elucidate the financial implications of cloud services in purportedly decentralized contexts. We further explore the individual's perspective on value creation and accumulation, examining the interplay between user participation and centralized monetization strategies. Key findings indicate that while blockchain technology has the potential to significantly reduce infrastructure costs for financial services, the current Web3 landscape is marked by a substantial reliance on cloud providers for hosting, scalability, and performance.
In recent years, Large Language Models (LLMs) have demonstrated remarkable generative abilities, but can they judge the quality of their own generations? A popular concept, referred to as self-refinement, postulates that LLMs can detect and correct the errors in their generations when asked to do so. However, recent empirical evidence points in the opposite direction, suggesting that LLMs often struggle to accurately identify errors when reasoning is involved. To address this, we propose a reasoning-with-refinement objective called ART: Ask, Refine, and Trust, which asks necessary questions to decide when an LLM should refine its output, and either affirms or withholds trust in the refinement by ranking the refinement against the initial prediction. On two multi-step reasoning tasks, mathematical word problems (GSM8K) and question answering (StrategyQA), ART achieves a performance gain of +5 points over self-refinement baselines, while using a much smaller model as the decision maker. We also demonstrate the benefit of using smaller models to make refinement decisions as a cost-effective alternative to fine-tuning a larger model.
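The decision procedure the abstract describes — ask whether to refine, then trust the refinement only if it outranks the initial prediction — can be sketched as control flow. Everything below is a hypothetical stand-in: `generate`, `refine`, `needs_refinement`, and `rank` would be model calls in the actual system (with the smaller model acting as the decision maker), not the functions shown here.

```python
def art(question, generate, refine, needs_refinement, rank):
    """Sketch of an Ask-Refine-Trust loop over a hypothetical interface."""
    initial = generate(question)
    # Ask: decide whether the output needs refinement at all.
    if not needs_refinement(question, initial):
        return initial
    refined = refine(question, initial)
    # Trust: keep the refinement only if it ranks above the initial answer.
    better = rank(question, refined) > rank(question, initial)
    return refined if better else initial

# Toy stand-ins that exercise the control flow.
answer = art(
    "2 + 2?",
    generate=lambda q: "5",
    refine=lambda q, a: "4",
    needs_refinement=lambda q, a: a != "4",
    rank=lambda q, a: 1.0 if a == "4" else 0.0,
)  # refinement outranks the initial answer, so it is trusted
```

The ranking step is what distinguishes this from plain self-refinement: a refinement that scores below the initial prediction is discarded rather than blindly accepted.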
The increasing success of Large Language Models (LLMs) in a variety of tasks has led to their widespread use in our lives, which necessitates examining these models from different perspectives. The alignment of these models with human values is an essential concern for establishing trust that we have safe and responsible systems. In this paper, we aim to find out which values and principles are embedded in LLMs in the process of moral justification. For this purpose, we define three moral perspective categories: the Western tradition perspective (WT), the Abrahamic tradition perspective (AT), and the Spiritualist/Mystic tradition perspective (SMT). In two experiment settings, we asked models to choose principles from these three categories when suggesting a moral action, and to evaluate the moral permissibility of an action when it is justified on the basis of these categories, respectively. Our experiments indicate that the tested LLMs favor the Western tradition moral perspective over the others. Additionally, we observe a potential over-alignment towards the religious values represented in the Abrahamic tradition, which causes models to fail to recognize that an action is immoral when it is presented as a "religious action". We believe these results are essential for directing our attention in future efforts.
Evaluation is essential to understanding the value that digital creativity brings to people's experience, for example in terms of their enjoyment, creativity, and engagement. There is a substantial body of research on how to design and evaluate interactive arts and digital creativity applications. There is also extensive Human-Computer Interaction (HCI) literature on how to evaluate user interfaces and user experiences. However, it can be difficult for artists, practitioners, and researchers to navigate such a broad and disparate collection of materials when considering how to evaluate technology they create that is at the intersection of art and interaction. This chapter provides a guide to designing robust user studies of creative applications at the intersection of art, technology and interaction, which we refer to as Media and Arts Technology (MAT). We break MAT studies down into two main kinds: proof-of-concept and comparative studies. As MAT studies are exploratory in nature, their evaluation requires the collection and analysis of both qualitative data, such as free-text questionnaire responses, interviews, and observations, and quantitative data, such as questionnaires, number of interactions, and length of time spent interacting. This chapter draws on over 15 years of experience in designing and evaluating novel interactive systems to provide a concrete template for structuring a MAT evaluation study that is both rigorous and repeatable, and for reporting study results that are publishable and accessible to a wide readership in art and science communities alike.
Text-based misinformation permeates online discourses, yet evidence of people's ability to discern truth from such deceptive textual content is scarce. We analyze novel data from a TV game show in which conversations in a high-stakes environment between individuals with conflicting objectives result in lies. We investigate the manifestation of potentially verifiable language cues of deception in the presence of objective truth, a distinguishing feature absent from previous text-based deception datasets. We show that there exists a class of detectors (algorithms) with truth-detection performance similar to that of human subjects, even though the detectors access only the language cues while the humans engage in conversations with complete access to all potential sources of cues (language and audio-visual). Our model, built on a large language model, employs a bottleneck framework to learn discernible cues for determining truth, an act of reasoning at which human subjects often perform poorly, even with incentives. Our model detects novel but accurate language cues in many cases where humans failed to detect deception, opening up the possibility of humans collaborating with algorithms to improve their ability to detect the truth.
Exploring data is crucial in data analysis, as it helps users understand and interpret the data more effectively. However, performing effective data exploration requires in-depth knowledge of the dataset and expertise in data analysis techniques. Not being familiar with either can create obstacles that make the process time-consuming and overwhelming for data analysts. To address this issue, we introduce InsightPilot, an LLM (Large Language Model)-based, automated data exploration system designed to simplify the data exploration process. InsightPilot automatically selects appropriate analysis intents, such as understanding, summarizing, and explaining. Then, these analysis intents are concretized by issuing corresponding intentional queries (IQueries) to create a meaningful and coherent exploration sequence. In brief, an IQuery is an abstraction and automation of data analysis operations, which mimics the approach of data analysts and simplifies the exploration process for users. By employing an LLM to iteratively collaborate with a state-of-the-art insight engine via IQueries, InsightPilot is effective in analyzing real-world datasets, enabling users to gain valuable insights through natural language inquiries. We demonstrate the effectiveness of InsightPilot in a case study, showing how it can help users gain valuable insights from their datasets.
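The loop the abstract describes — pick an analysis intent, concretize it as an IQuery, send it to the insight engine, and repeat — can be sketched as below. The function names and the IQuery shape are assumptions for illustration only; in InsightPilot both the intent chooser (the LLM) and the insight engine are real components rather than the stubs shown here.

```python
INTENTS = ("understand", "summarize", "explain")

def explore(dataset_name, pick_intent, insight_engine, steps=3):
    """Iteratively build a coherent exploration sequence of insights."""
    sequence = []
    for _ in range(steps):
        # The LLM chooses the next analysis intent given what was found so far.
        intent = pick_intent(sequence)
        assert intent in INTENTS
        # The intent is concretized as an IQuery for the insight engine.
        iquery = {"intent": intent, "dataset": dataset_name}
        sequence.append(insight_engine(iquery))
    return sequence

# Toy stand-ins: cycle through the intents, echo each query as an "insight".
trace = explore(
    "sales.csv",
    pick_intent=lambda seq: INTENTS[len(seq) % len(INTENTS)],
    insight_engine=lambda q: f"{q['intent']} insight on {q['dataset']}",
)
```

Feeding the running `sequence` back into the intent chooser is what lets each step build on earlier findings, which is how the system keeps the exploration sequence coherent rather than issuing independent queries.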
Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices continues to grow, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey on the use of XAI for IoT. To bridge this gap, in this paper we examine XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely-used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest the implementation choice of XAI models over IoT systems in these applications with appropriate examples and summarize the key inferences for future works. Moreover, we present the cutting-edge development in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored for the demands of future IoT use cases.
Recently, Mutual Information (MI) has attracted attention for bounding the generalization error of Deep Neural Networks (DNNs). However, it is intractable to accurately estimate the MI in DNNs, so most previous works have had to relax the MI bound, which in turn weakens the information-theoretic explanation for generalization. To address this limitation, this paper introduces a probabilistic representation of DNNs for accurately estimating the MI. Leveraging the proposed MI estimator, we validate the information-theoretic explanation for generalization, and derive a tighter generalization bound than the state-of-the-art relaxations.
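As background for the quantity the estimator targets, the sketch below computes mutual information exactly for small discrete joint distributions. This toy code is not the paper's estimator — the paper's contribution is precisely that such quantities are intractable for DNN representations and need a probabilistic representation to estimate — but it pins down what is being bounded.

```python
import numpy as np

def mutual_info(p_xy):
    """Exact I(X;Y) in nats for a discrete joint pmf given as a 2-D array."""
    px = p_xy.sum(axis=1, keepdims=True)   # marginal of X
    py = p_xy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = p_xy > 0                        # 0 * log 0 is taken as 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (px @ py)[mask])).sum())

# Independent variables share no information: I(X;Y) = 0.
p_ind = np.outer([0.5, 0.5], [0.25, 0.75])
# A deterministic relationship attains I(X;Y) = H(X) = log 2.
p_dep = np.array([[0.5, 0.0], [0.0, 0.5]])
```

In the generalization-bound setting the arguments would be, e.g., the training set and the learned weights, whose joint distribution is exactly what makes direct computation like this infeasible.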
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect the GNNs' training speed. Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution. Empirical results confirm that our theoretical results for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice.
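The linear-rate convergence claim for linearized GNNs can be illustrated on a toy one-layer linear GNN trained by gradient descent on realizable labels. The graph, dimensions, and step size below are illustrative assumptions, not the paper's setup: with no nonlinearity, the loss is a convex quadratic in the weights, so the loss decreases geometrically.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 5
W_adj = rng.random((n, n))
W_adj = (W_adj + W_adj.T) / 2                    # toy weighted graph
A_hat = W_adj / W_adj.sum(1, keepdims=True)      # row-normalized adjacency
X = rng.normal(size=(n, d))                      # node features
Y = A_hat @ X @ rng.normal(size=(d, 1))          # labels realizable by the model

M = A_hat @ X                                    # fixed "propagated" features
lr = 1.0 / np.linalg.eigvalsh(M.T @ M / n).max() # safe step size

W = np.zeros((d, 1))
losses = []
for _ in range(500):
    resid = M @ W - Y                   # one-layer linearized GNN error
    losses.append(float(np.mean(resid ** 2)))
    W -= lr * (M.T @ resid / n)         # gradient step on the squared loss
```

Each eigendirection of the quadratic contracts by a fixed factor per step, which is the "linear rate" in the abstract; the paper's analysis establishes when the analogous guarantee holds for actual GNN training under mild assumptions.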