The Circular Economy (CE) is regarded as a solution to the environmental crisis. However, mainstream CE measures skirt around challenging the ethos of ever-increasing economic growth, overlooking social impacts and under-representing solutions such as reducing overall consumption. Circular Societies (CS) address these concerns by challenging this ethos. They emphasize ground-up social reorganization, address over-consumption through sufficiency strategies, and highlight the need to consider the complex inter-dependencies between nature, society, and technology at local, regional, and global levels. However, no blueprint exists for forming CSs. An initial objective of my thesis is to explore existing social-network ontologies and develop a broadly applicable model for CSs. Since ground-up social reorganization at local, regional, and global levels has compounding effects on network complexity, a technological framework that digitizes these inter-dependencies is necessary. Finally, adhering to the CS principles of transparency and democratization, a system of trust is necessary to achieve collaborative consensus on the network state.
This paper focuses on statistical modelling using additive Gaussian process (GP) models and their efficient implementation for large-scale spatio-temporal data with a multi-dimensional grid structure. To achieve this, we exploit the Kronecker product structures of the covariance kernel. While this method has gained popularity in the GP literature, the existing approach is limited to covariance kernels with a tensor product structure and does not allow flexible modelling and selection of interaction effects, which are an important component of spatio-temporal analysis. We extend the method to a more general class of additive GP models that accounts for main effects and selected interaction effects. Our approach allows for easy identification and interpretation of interaction effects. The proposed model is applied to the analysis of NO$_2$ concentrations during the COVID-19 lockdown in London. Our scalable method enables analysis of large-scale, hourly-recorded data collected from 59 different stations across the city, providing additional insights beyond findings from previous research using daily or weekly averaged data.
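To make the Kronecker computation concrete, the following is a minimal sketch (not the authors' implementation) of exact GP solves on a space-time grid: when the covariance factors as $K = K_s \otimes K_t$, eigendecomposing the two small factor matrices suffices to solve the full linear system. The squared-exponential kernels, lengthscales, grid sizes, and synthetic observations below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix on a 1-D input grid.
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Illustrative grid: 59 stations (collapsed to a 1-D index here) x 200 hourly time points.
space = np.linspace(0.0, 1.0, 59)
time = np.linspace(0.0, 1.0, 200)
Ks = rbf_kernel(space, lengthscale=0.3)
Kt = rbf_kernel(time, lengthscale=0.05)

# Eigendecompose the small factors instead of the (59*200) x (59*200) full covariance.
ws, Vs = np.linalg.eigh(Ks)
wt, Vt = np.linalg.eigh(Kt)

noise = 0.1
y = np.random.randn(59 * 200)             # placeholder observations on the grid
Y = y.reshape(59, 200)

# Solve (Ks kron Kt + noise * I) alpha = y via the Kronecker eigenstructure:
# the eigenvalues of the Kronecker product are all pairwise products ws_i * wt_j.
tmp = Vs.T @ Y @ Vt                        # rotate into the joint eigenbasis
tmp = tmp / (np.outer(ws, wt) + noise)     # divide by eigenvalues plus noise variance
alpha = (Vs @ tmp @ Vt.T).ravel()          # rotate back; alpha solves the large system
```

An additive model with main and selected interaction effects would correspond to sums of such structured terms, though the exact formulation in the paper may differ.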
Pricing insurance for risks associated with information technology systems presents a complex modelling challenge, combining the disciplines of operations management, security, and economics. This work proposes a socioeconomic model for cyber-insurance decisions comprised of entity relationship diagrams, security maturity models, and economic models, addressing a long-standing research challenge of capturing organizational structure in the design and pricing of cyber-insurance policies. Insurance pricing is usually informed by the long experience insurance companies have of the magnitude and frequency of losses that arise in organizations based on their size, industry sector, and location. Consequently, their calculations of premia will start from a baseline determined by these considerations. A unique challenge of cyber-insurance is that data history is limited and not necessarily informative of future loss risk, meaning that actuarial methodology established for other lines of insurance may not be the optimal pricing strategy. The model proposed in this paper provides a vehicle for agreement between practitioners in the cyber-insurance ecosystem on cyber-security risks and allows users to choose their desired level of abstraction in the description of a system.
Learning optimal control policies directly on physical systems is challenging, since even a single failure can lead to costly hardware damage. Most existing model-free learning methods that guarantee safety, i.e., no failures, during exploration are limited to local optima. A notable exception is the GoSafe algorithm, which, unfortunately, cannot handle high-dimensional systems and hence cannot be applied to most real-world dynamical systems. This work proposes GoSafeOpt as the first algorithm that can safely discover globally optimal policies for high-dimensional systems while providing safety and optimality guarantees. We demonstrate the superiority of GoSafeOpt over competing model-free safe learning methods on a robot arm, a task that would be prohibitive for GoSafe.
A Digital Twin (DT) is a virtual replica of a physical object or system, created to monitor, analyze, and optimize its behavior and characteristics. A Spatial Digital Twin (SDT) is a specific type of digital twin that emphasizes the geospatial aspects of the physical entity, incorporating precise location and dimensional attributes for a comprehensive understanding within its spatial environment. The current body of research on SDTs primarily concentrates on analyzing their potential impact and opportunities within various application domains. As building an SDT is a complex process that requires a variety of spatial computing technologies, it is not straightforward for practitioners and researchers in this multi-disciplinary domain to grasp the underlying details of the technologies that enable SDTs. In this paper, we are the first to systematically analyze the different spatial technologies relevant to building an SDT in a layered approach (from data acquisition to visualization). More specifically, we organize the key components of SDTs into four layers of technologies: (i) data acquisition; (ii) spatial database management \& big data analytics systems; (iii) GIS middleware software, maps \& APIs; and (iv) key functional components such as visualization, querying, mining, simulation, and prediction. Moreover, we discuss how modern technologies such as AI/ML, blockchains, and cloud computing can be effectively utilized in enabling and enhancing SDTs. Finally, we identify a number of research challenges and opportunities in SDTs. This work serves as an important resource for SDT researchers and practitioners as it explicitly distinguishes SDTs from traditional DTs, identifies unique applications, outlines the essential technological components of SDTs, and presents a vision for their future development along with the challenges that lie ahead.
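As a purely illustrative sketch (not taken from the paper), the four-layer organization above can be pictured as a simple data structure; the example components listed are hypothetical placeholders rather than specific tool recommendations.

```python
from dataclasses import dataclass, field

@dataclass
class SDTLayer:
    name: str
    example_components: list = field(default_factory=list)

# Hypothetical instantiation of the four-layer SDT stack described above.
sdt_stack = [
    SDTLayer("Data acquisition", ["GPS traces", "LiDAR scans", "IoT sensor feeds"]),
    SDTLayer("Spatial database management & big data analytics", ["spatial DBMS", "distributed analytics engine"]),
    SDTLayer("GIS middleware, maps & APIs", ["map tiles", "geocoding API"]),
    SDTLayer("Functional components", ["visualization", "querying", "mining", "simulation", "prediction"]),
]

for layer in sdt_stack:
    print(f"{layer.name}: {', '.join(layer.example_components)}")
```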
The digitalization of the reproductive body has engaged myriad cutting-edge technologies in supporting people to understand and manage their intimate health. Generally understood as female technologies (aka female-oriented technologies or 'FemTech'), these products and systems collect a wide range of intimate data, which are processed, transferred, saved, and shared with other parties. In this paper, we explore how the "data-hungry" nature of this industry and the lack of proper safeguarding mechanisms, standards, and regulations for vulnerable data can lead to complex harms or faint agentic potential. We adopted mixed methods to explore users' understanding of the security and privacy (SP) of these technologies. Our findings show that while users can speculate about the range of harms and risks associated with these technologies, they are not equipped with the technological skills to protect themselves against such risks. We discuss a number of approaches, including participatory threat modelling and SP by design, in the context of this work and conclude that such approaches are critical to protecting users of these sensitive systems.
The current study systematically analyzes recent advances in the field of Multimodal eXplainable Artificial Intelligence (MXAI). In particular, the relevant primary prediction tasks and publicly available datasets are initially described. Subsequently, a structured presentation of the MXAI methods in the literature is provided, taking into account the following criteria: a) the number of involved modalities, b) the stage at which explanations are produced, and c) the type of methodology adopted (i.e., mathematical formalism). Then, the metrics used for MXAI evaluation are discussed. Finally, a comprehensive analysis of current challenges and future research directions is provided.
Smart buildings are increasingly using Internet of Things (IoT)-based wireless sensing systems to reduce their energy consumption and environmental impact. Owing to their compact size and their ability to sense, measure, and compute electrical properties, IoT devices have become increasingly important in our society. A major contribution of this study is the development of a comprehensive IoT-based framework for smart city energy management that incorporates multiple components of IoT architecture. Such an IoT framework is an essential system component for intelligent energy management applications that employ intelligent analysis: it collects and stores information and also serves as a platform on which other companies can develop applications. Furthermore, we study intelligent energy management solutions based on intelligent mechanisms. The depletion of energy resources and the growth in energy demand have led to increases in energy consumption and building maintenance costs. The data collected is used to monitor, control, and enhance the efficiency of the system.
Encompassing a diverse population of developers, non-technical users, organizations, and many other stakeholders, open source software (OSS) development has expanded from its initial product development aims into a broader social movement. Ideology, as a coherent system of ideas, offers value commitments and normative implications for any social movement, and OSS ideology does so for the open source movement. However, the literature on open source ideology is often fragmented or lacking in empirical evidence. In this paper, we sought to develop a comprehensive empirical theory of ideologies in the open source software movement. Following a grounded theory procedure, we collected and analyzed data from 22 semi-structured interviews and 41 video recordings of Open Source Initiative (OSI) board members' public speeches. An empirical theory of OSS ideology emerged from our analysis, with six key categories: membership, norms/values, goals, activities, resources, and positions/group relations, each consisting of a number of themes and subthemes. We discussed a subset of carefully selected themes and subthemes in detail based on their theoretical significance. Through this ideological lens, we examined the implications of and insights into open source development, and shed light on future research into open source as a socio-cultural construction.
Diffusion models are a class of deep generative models that have shown impressive results on various tasks and rest on a solid theoretical foundation. Although diffusion models achieve higher quality and diversity of sample synthesis than other state-of-the-art models, they still suffer from costly sampling procedures and sub-optimal likelihood estimation. Recent studies have shown great enthusiasm for improving the performance of diffusion models. In this article, we present a first comprehensive review of existing variants of diffusion models. Specifically, we provide a first taxonomy of diffusion models and categorize the variants into three types, namely sampling-acceleration enhancement, likelihood-maximization enhancement, and data-generalization enhancement. We also introduce in detail five other generative models (i.e., variational autoencoders, generative adversarial networks, normalizing flows, autoregressive models, and energy-based models), and clarify the connections between diffusion models and these generative models. We then make a thorough investigation into the applications of diffusion models, including computer vision, natural language processing, waveform signal processing, multi-modal modeling, molecular graph generation, time series modeling, and adversarial purification. Furthermore, we propose new perspectives pertaining to the development of this class of generative models.
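For readers new to the area, here is a minimal sketch of the standard DDPM-style forward noising process that the surveyed variants build on; the linear variance schedule, number of steps, and toy data are assumptions chosen for illustration, not specifics of any surveyed method.

```python
import numpy as np

def forward_diffuse(x0, t, betas):
    # Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = np.random.randn(*x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return x_t, noise

# Linear variance schedule over T = 1000 steps (a common default choice).
T = 1000
betas = np.linspace(1e-4, 0.02, T)

x0 = np.random.randn(32, 32)               # placeholder "image"
x_noisy, eps = forward_diffuse(x0, t=500, betas=betas)
# A denoising network would be trained to predict eps from (x_noisy, t);
# its predictions define the reverse, generative process, which the
# sampling-acceleration and likelihood-maximization variants modify.
```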
Along with the massive growth of the Internet from the 1990s until now, various innovative technologies have been created to bring users breathtaking experiences with more virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed with immersive experience and digital transformation in mind, but most remain incoherent rather than being integrated into a single platform. In this context, the metaverse, a term formed by combining meta and universe, has been introduced as a shared virtual world fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among these technologies, AI has shown great importance for processing big data to enhance immersive experiences and enable the human-like intelligence of virtual agents. In this survey, we explore the role of AI in the foundation and development of the metaverse. We first deliver a preliminary overview of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then convey a comprehensive investigation of AI-based methods concerning six technical aspects that hold potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twin, and neural interface. Subsequently, several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied with respect to their deployment in virtual worlds. Finally, we conclude with the key contributions of this survey and open some future research directions in AI for the metaverse.