As we make tremendous advances in machine learning and artificial intelligence, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we do not end up in technology-induced dystopias. As Green strongly argues in his book The Smart Enough City, incorporating technology into city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and worth designing. Philosophical and ethical questions are involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities, and globally there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, makes this argument in his book Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to the successful deployment of AI and ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also discusses current limitations, pitfalls, and future directions of research in these domains, and how they can fill the current gaps and lead to better solutions.
Smart contracts are programs that run on blockchains to execute transactions. When input constraints or security properties are violated at runtime, the transaction being executed by a smart contract needs to be reverted to avoid undesirable consequences. On Ethereum, the most popular blockchain that supports smart contracts, developers can choose among three transaction-reverting statements (i.e., require, if...revert, and if...throw) to handle anomalous transactions. While these transaction-reverting statements are vital for preventing smart contracts from exhibiting abnormal behaviors or suffering malicious attacks, there is limited understanding of how they are used in practice. In this work, we perform the first empirical study to characterize transaction-reverting statements in Ethereum smart contracts. We measured the prevalence of these statements in 3,866 verified smart contracts from popular dapps and built a taxonomy of their purposes by manually analyzing 557 transaction-reverting statements. We also compared template contracts with their corresponding custom contracts to understand how developers customize the use of transaction-reverting statements. Finally, we analyzed the security impact of transaction-reverting statements by removing them from smart contracts and comparing the mutated contracts against the original ones. Our study led to important findings, which can shed light on further research in the broad area of smart contract quality assurance and provide practical guidance to smart contract developers on the appropriate use of transaction-reverting statements.
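To illustrate the guard pattern that transaction-reverting statements implement, the following is a minimal sketch in Python rather than Solidity; the `require` helper and the toy `transfer` function are invented for illustration and mimic how a Solidity `require` aborts a transaction when a constraint is violated:

```python
class TransactionReverted(Exception):
    """Raised to abort a transaction, analogous to a Solidity revert."""

def require(condition, message="requirement failed"):
    # Mimics Solidity's `require`: abort the transaction when an input
    # constraint or security property is violated (hypothetical helper).
    if not condition:
        raise TransactionReverted(message)

def transfer(balances, sender, recipient, amount):
    # A toy token transfer guarded the way a smart contract would be.
    require(amount > 0, "amount must be positive")
    require(balances.get(sender, 0) >= amount, "insufficient balance")
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount
    return balances

balances = {"alice": 100, "bob": 0}
transfer(balances, "alice", "bob", 30)
print(balances)  # {'alice': 70, 'bob': 30}

try:
    transfer(balances, "alice", "bob", 1000)
except TransactionReverted as e:
    print("reverted:", e)  # reverted: insufficient balance
```

In a real contract, raising the exception corresponds to reverting all state changes made by the transaction; the sketch only models the control-flow aspect, not state rollback.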
The increasing complexity of modern interlockings poses a major challenge to ensuring railway safety and calls for the application of formal methods to the assurance and verification of their safety. We have developed an industry-strength toolset, called SafeCap, for the formal verification of interlockings. Our aim was to overcome the main barriers to deploying formal methods in industry. The proposed approach verifies interlocking data developed by signalling engineers in the form in which it is designed by industry. It ensures fully automated verification of safety properties using state-of-the-art techniques (automated theorem provers and solvers), and provides diagnostics in terms of the notations used by engineers. In the last two years SafeCap has been successfully used to verify 26 real-world mainline interlockings, developed by different suppliers and design offices. SafeCap is currently used in an advisory capacity, supplementing manual checking and testing processes by providing an additional level of verification and enabling earlier identification of errors. We are now developing a safety case to support its use as an alternative to some of these activities.
A smart city can be seen as a framework composed of Information and Communication Technologies (ICT). An intelligent network of connected devices that collect data with their sensors and transmit it using cloud technologies in order to communicate with other assets in the ecosystem plays a pivotal role in this framework. Maximizing citizens' quality of life, making better use of resources, cutting costs, and improving sustainability are the ultimate goals of a smart city. Hence, data collected from connected devices is continuously and thoroughly analyzed to gain better insights into the services being offered across the city, so that it can be used to make the whole system more efficient. Robots and physical machines are inseparable parts of a smart city. Embodied AI is the field of study that takes a deeper look at these machines and explores how they can fit into real-world environments. It focuses on learning through interaction with the surrounding environment, as opposed to Internet AI, which tries to learn from static datasets. Embodied AI aims to train an agent that can See (Computer Vision), Talk (NLP), Navigate and Interact with its environment (Reinforcement Learning), and Reason (General Intelligence), all at the same time. Autonomous cars and personal companions are some of the examples that benefit from Embodied AI today. In this paper, we attempt a concise review of this field. We go through its definitions, characteristics, and current achievements, along with the different algorithms, approaches, and solutions used in its various components (e.g. Vision, NLP, RL). We then explore the available simulators and interactive 3D databases that make research in this area feasible. Finally, we address the field's challenges and identify its potential for future research.
A number of studies in Information and Communication Technologies for Development (ICT4D) focus on projects' sustainability and resilience. Over the years, scholars have identified many elements that enable the achievement of these goals. Nevertheless, barriers to achieving them remain a common reality in the field. In this paper, we propose that special attention should be paid to communities' relationships, self-organizing, and social capital - and the people's networks that enable them - within ICT4D scholarship and practice, as a way to achieve sustainability and resilience. Building on Green's work (2016) on social change as a force that cannot be understood without focusing on systems and power, we claim that ICT4D would benefit from intentionally growing social capital and fostering networks within its systems. We propose "network weaving" (Holley, 2013) as a practical approach, and we explore its potential to complement and advance existing ICT4D frameworks and practices, including the sense of community of the researchers themselves.
Globally, the discourse of e-government has gathered momentum in public service delivery, and no country has been left untouched by the implementation of e-government. Several government departments and agencies now use information and communication technologies (ICTs) to deliver government services and information to citizens, other government departments, and businesses. However, most government departments have not provided all of their services electronically, or at least the most important ones, which creates a phenomenon of e-government service gaps. The objective of this study was to investigate the contextual factors contributing to e-government service gaps in a developing country. To achieve this aim, the TOE framework was employed together with a qualitative case study to guide data collection and analysis. The data was collected through semi-structured interviews with government employees involved in the implementation of e-government services in Zimbabwe, as well as with citizens and businesses. Eleven (11) factors were identified and grouped under the TOE framework. This research contributes significantly to the implementation and utilisation of e-government services in Zimbabwe. The study also provides a strong theoretical understanding of the factors contributing to the e-government service gaps explored in the research model.
Information and communication technologies (ICTs) have become increasingly vital for people on the move, including the nearly 80 million displaced globally by conflict, violence, and human rights violations. However, existing research on ICTs and migrants, which has focused almost entirely on migrants' ICT use 'en route' or within developed economies, principally from the perspectives of researchers in these regions, is very fragmented, making it difficult to understand the key objects of research. Moreover, ICTs are often celebrated as liberating and exploitable at migrants' rational discretion, even though they are 'double-edged swords' with significant risks, burdens, pressures, and inequality challenges, particularly for vulnerable migrants, including those forcefully displaced and trafficked. To address these limitations and illuminate future directions, this paper first scrutinises the existing research vis-a-vis ICTs' liberating and authoritarian roles, particularly for vulnerable migrants, thereby explicating key issues in the research domain. Second, it identifies key gaps and opportunities for future research. Using a tailored methodology, the broad literature relating to ICTs and migration/development published in the period 1990-2020 was surveyed, resulting in 157 selected publications, which were critically appraised with respect to their key themes, the major technologies dealt with, and the methodologies and theories/concepts adopted. Furthermore, key insights, trends, gaps, and future research opportunities pertaining to both the existing and missing objects of research in ICTs and migration/development are spotlighted.
The purpose of the present paper is to show how blockchain and IoT technologies can benefit smart city projects, which are spreading in the context of the sharing economy. The article also aims to describe the challenges and potential of smart city projects. It was found that technology platforms can serve as a strategy for building the basis for product development (goods and services) and technology-based innovation.
Interpretability in machine learning (ML) is crucial for high stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML, and dispel common misunderstandings that dilute the importance of this crucial topic. We also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem. Some of these problems are classically important, and some are recent problems that have arisen in the last few years. These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints; (9) Characterization of the "Rashomon set" of good models; and (10) Interpretable reinforcement learning. This survey is suitable as a starting point for statisticians and computer scientists interested in working in interpretable machine learning.
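As an illustration of challenge (2), a scoring system is a sparse linear model with small integer coefficients that a human can evaluate by hand. The following Python sketch shows the form such a model takes; the feature names, point values, and threshold are invented for illustration and are not taken from the survey:

```python
# A hypothetical scoring system: each binary feature contributes a small
# integer number of points, and the total is compared to a threshold.
SCORECARD = {
    "age_over_60": 2,
    "elevated_blood_pressure": 1,
    "prior_event": 3,
}
THRESHOLD = 4  # predict "high risk" when total points reach this value

def score(patient):
    # Sum the points for each feature present; fully auditable by hand.
    return sum(points for feature, points in SCORECARD.items()
               if patient.get(feature))

def predict_high_risk(patient):
    return score(patient) >= THRESHOLD

patient = {"age_over_60": True, "prior_event": True}
print(score(patient), predict_high_risk(patient))  # 5 True
```

The research challenge the survey highlights is not evaluating such a model, which is trivial, but learning the integer point values from data under sparsity constraints, which is a hard discrete optimization problem.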
The concept of the smart grid has been introduced as a new vision of the conventional power grid to enable the efficient integration of green and renewable energy technologies. In this context, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to ensuring access to energy anywhere, at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large and growing number of connections can be a challenging issue for the traditional centralized grid system. Consequently, the smart grid is undergoing a transformation from its centralized form to a decentralized topology. Blockchain, on the other hand, has some excellent features that make it a promising application for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey on the application of blockchain in the smart grid. We identify the significant security challenges of smart grid scenarios that can be addressed by blockchain, and then present a number of recent blockchain-based research works in the literature that address security issues in the area of the smart grid. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions for applying blockchain to smart grid security issues.
Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related, and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the Predictive, Descriptive, Relevant (PDR) framework for discussing interpretations. The PDR framework provides three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post-hoc categories, with sub-groups including sparsity, modularity and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often under-appreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.