Robotic and Autonomous Agricultural Technologies (RAAT) are increasingly available yet may fail to be adopted. This paper focuses specifically on cognitive factors that affect adoption, including the inability to generate trust, loss of farming knowledge, and reduced social cognition. It is recommended that agriculture develop its own framework for the performance and safety of RAAT, drawing on human factors research in aerospace engineering, including human inputs (individual variance in knowledge, skills, abilities, preferences, needs and traits), trust, situational awareness, and cognitive load. The kinds of cognitive impacts depend on the RAAT's level of autonomy, i.e. whether it has automatic, partially autonomous, or fully autonomous functionality, and on the stage of adoption, i.e. adoption, initial use, or post-adoptive use. The more autonomous a system is, the less a human needs to know to operate it and the lower the cognitive load; but it also means farmers have less situational awareness of on-farm activities, which in turn may affect strategic decision-making about their enterprise. Some cognitive factors may be hidden when RAAT is first adopted but play a greater role during prolonged or intense post-adoptive use. Systems with partial autonomy need intuitive user interfaces, engaging system information, and clear signalling to be trusted with low-level tasks, and to complement and augment higher-order decision-making on farm.
Exoskeletons and orthoses are wearable mobile systems that provide mechanical benefits to their users. Despite significant improvements in recent decades, the technology is not yet mature enough to be adopted for strenuous and non-programmed tasks. To address this shortcoming, different aspects of the technology need to be analysed and improved. Numerous studies have addressed particular aspects of exoskeletons, e.g. mechanism design, intent prediction, and control schemes. However, most works have focused on a specific element of design or application without providing a comprehensive review framework. This study aims to analyse and survey the aspects contributing to the improvement and broad adoption of this technology. To this end, after introducing assistive devices and exoskeletons, the main design criteria are investigated from a physical Human-Robot Interface (HRI) perspective. The study is further developed by outlining several examples of known assistive devices in different categories. In order to establish an intelligent HRI strategy and enable intuitive control for users, cognitive HRI is investigated. Various approaches to this strategy are reviewed, and a model for intent prediction is proposed. This model is utilised to predict the gait phase from a single Electromyography (EMG) channel input. The modelling outcomes show the potential of single-channel input in low-power assistive devices. Furthermore, the proposed model can provide redundancy in devices with a complex control strategy.
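The abstract does not detail the prediction model, so the following is only a minimal sketch, assuming windowed time-domain features extracted from one EMG channel and fed to a lightweight classifier; the sampling rate, window sizes, and stance/swing labels are hypothetical placeholders, not the paper's setup.

```python
# Minimal sketch (not the paper's exact model): gait-phase classification
# from a single EMG channel using windowed time-domain features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(emg, fs, win_s=0.2, hop_s=0.05):
    """Slide a window over the signal and extract simple time-domain features."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    feats = []
    for start in range(0, len(emg) - win, hop):
        w = emg[start:start + win]
        feats.append([
            np.sqrt(np.mean(w ** 2)),          # RMS amplitude
            np.mean(np.abs(np.diff(w))),       # mean waveform change per sample
            np.mean(np.abs(w)),                # mean absolute value
            ((w[:-1] * w[1:]) < 0).sum(),      # zero crossings
        ])
    return np.array(feats)

rng = np.random.default_rng(0)
fs = 1000                                      # Hz (assumed sampling rate)
emg = rng.standard_normal(fs * 60)             # placeholder 60 s single-channel signal

X = window_features(emg, fs)
y = rng.integers(0, 2, size=len(X))            # placeholder stance/swing labels per window

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("gait-phase accuracy:", clf.score(X_te, y_te))
```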
Augmented Business Process Management Systems (ABPMSs) are an emerging class of process-aware information systems that draws upon trustworthy AI technology. An ABPMS enhances the execution of business processes with the aim of making these processes more adaptable, proactive, explainable, and context-sensitive. This manifesto presents a vision for ABPMSs and discusses research challenges that need to be surmounted to realize this vision. To this end, we define the concept of ABPMS, we outline the lifecycle of processes within an ABPMS, we discuss core characteristics of an ABPMS, and we derive a set of challenges to realize systems with these characteristics.
The world is moving into a new era with the deployment of 5G communication infrastructure, and many new developments are being built around this technology. One such advancement is 5G Vehicle-to-Everything (V2X) communication, which can be used for applications such as driverless delivery of goods, immediate response to emergencies, and improved traffic efficiency. The concept of Intelligent Transport Systems (ITS), which are fully autonomous, is built around this capability. This paper studies security attacks carried out over a 5G network, particularly the Distributed Denial of Service (DDoS) attack. The aim is to implement a machine learning model capable of classifying different types of DDoS attacks and predicting the quality of 5G latency. The initial implementation steps involved the synthetic addition of 5G parameters to the dataset. Subsequently, the data was label-encoded, and minority classes were oversampled to match the other classes. Finally, the data was split into training and testing sets, and machine learning models were applied. Although the work produced a model that predicts DDoS attacks, the acquired dataset significantly lacked 5G-related information, and the 5G classification model needed further modification. The research was based largely on quantitative methods in a simulated environment; hence, its biggest limitations have been the lack of resources for data collection and sole reliance on online datasets. Ideally, a V2X project would greatly benefit from an autonomous 5G-enabled vehicle connected to a mobile edge cloud, whereas this project was conducted solely online on a single PC, which further limits the outcomes. Although the model underperformed, this paper can serve as a framework for future research in Intelligent Transport System development.
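As a concrete illustration of the pipeline steps named above (label encoding, minority-class oversampling, train/test split, model fitting), here is a minimal scikit-learn sketch; the file name `ddos_5g.csv`, the column `attack_type`, and the choice of random-forest classifier are placeholders, not the paper's actual data or model.

```python
# Minimal sketch of the described pipeline: label encoding, minority
# oversampling, train/test split, and classification.
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.utils import resample
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Placeholder: in the paper's setting this would be the acquired DDoS dataset
# after the synthetic addition of 5G parameters.
df = pd.read_csv("ddos_5g.csv")

# Label-encode every categorical column, including the target.
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])

# Oversample each minority class up to the size of the largest class.
largest = df["attack_type"].value_counts().max()
df = pd.concat([
    resample(grp, replace=True, n_samples=largest, random_state=0)
    for _, grp in df.groupby("attack_type")
])

X, y = df.drop(columns=["attack_type"]), df["attack_type"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```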
Ranking, recommendation, and retrieval systems are widely used in online platforms and other societal systems, including e-commerce, media-streaming, admissions, gig platforms, and hiring. In the recent past, a large "fair ranking" research literature has been developed around making these systems fair to the individuals, providers, or content that are being ranked. Most of this literature defines fairness for a single instance of retrieval, or as a simple additive notion for multiple instances of retrievals over time. This work provides a critical overview of this literature, detailing the often context-specific concerns that such an approach misses: the gap between high ranking placements and true provider utility, spillovers and compounding effects over time, induced strategic incentives, and the effect of statistical uncertainty. We then provide a path forward for a more holistic and impact-oriented fair ranking research agenda, including methodological lessons from other fields and the role of the broader stakeholder community in overcoming data bottlenecks and designing effective regulatory environments.
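To make the critique concrete, the contrast between single-instance fairness and the additive notion can be written schematically as follows (our notation, not the survey's): each item $i$ receives exposure determined by its position in ranking $\sigma_t$, and fairness either constrains every ranking or only the sum over $T$ rankings, relative to a merit score $q(i)$.

```latex
% Exposure of item i in ranking sigma, with non-increasing position weights w(k):
\[
  \mathrm{Exp}_{\sigma}(i) \;=\; w\!\bigl(\sigma(i)\bigr),
  \qquad w(1) \ge w(2) \ge \dots \ge 0 .
\]
% Per-instance fairness constrains every ranking; the additive notion
% constrains only the total, leaving spillovers and timing unaddressed:
\[
  \underbrace{\frac{\mathrm{Exp}_{\sigma_t}(i)}{q(i)}
    \approx \frac{\mathrm{Exp}_{\sigma_t}(j)}{q(j)} \;\;\forall t}_{\text{single instance}}
  \qquad \text{vs.} \qquad
  \underbrace{\frac{\sum_{t=1}^{T}\mathrm{Exp}_{\sigma_t}(i)}{q(i)}
    \approx \frac{\sum_{t=1}^{T}\mathrm{Exp}_{\sigma_t}(j)}{q(j)}}_{\text{additive over } T \text{ instances}} .
\]
```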
Advances in emerging Information and Communications Technology (ICT) push the boundaries of what is possible and open up new markets for innovative ICT products and services. The adoption of ICT products and systems with security properties depends on consumers' confidence and markets' trust in the security functionalities, and on whether the assurance measures applied to these products meet the inherent security requirements. Such confidence and trust are primarily gained through the rigorous development of security requirements, validation criteria, evaluation, and certification. The Common Criteria for Information Technology Security Evaluation (often referred to as Common Criteria or CC) is an international standard (ISO/IEC 15408) for cyber security certification. In this paper, we conduct a systematic review of the CC standards and their adoption. Barriers to CC adoption are also investigated based on an analysis of current trends in security evaluation. Specifically, we share the experiences and lessons gained through the recent Development of Australian Cyber Criteria Assessment (DACCA) project, which promotes the CC among stakeholders in ICT security products across specification, development, evaluation, certification and approval, procurement, and deployment. Best practices for developing Protection Profiles, recommendations, and future directions for trusted cybersecurity advancement are presented.
Controlling the spread of infectious diseases, such as the ongoing SARS-CoV-2 pandemic, is one of the most challenging problems facing human civilization. The world is more populous and connected than ever before, and the rate of contagion for such diseases can therefore become enormous. The development and distribution of testing kits cannot keep up with demand, making it impossible to test everyone. The next best option is to identify and isolate the people who come into close contact with an infected person. However, this apparently simple process, commonly known as contact tracing, suffers from two major pitfalls: the large amount of manpower required to track infected individuals manually, and the breach of privacy and security incurred when automating the process. Here, we propose Bluetooth-based contact tracing hardware with anonymous IDs that addresses both drawbacks of existing approaches. The hardware is a wearable device that every user can carry conveniently. The device measures the distance between two users and exchanges IDs anonymously in the case of a close encounter. The anonymous IDs stored in the device of any newly infected individual are used to trace risky contacts, and the status of those IDs is updated accordingly by authorized personnel. To demonstrate the concept, we simulate the working procedure and highlight the effectiveness of our technique in curbing the spread of a contagious disease.
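As an illustration of the device logic sketched above, the following assumes rotating anonymous IDs derived from an on-device secret and a log-distance path-loss model for RSSI-based ranging; the constants, thresholds, and function names are hypothetical, not taken from the paper.

```python
# Minimal sketch of the wearable's logic: anonymous ID rotation and
# proximity-gated encounter logging (all parameters are assumptions).
import hashlib, secrets, time

SECRET = secrets.token_bytes(32)      # per-device secret, never transmitted
EPOCH_S = 15 * 60                     # rotate the anonymous ID every 15 minutes
CLOSE_CONTACT_M = 2.0                 # distance threshold for a "close encounter"

def anonymous_id(now=None):
    """Derive the current anonymous ID from the secret and the time epoch."""
    epoch = int((now or time.time()) // EPOCH_S)
    return hashlib.sha256(SECRET + epoch.to_bytes(8, "big")).hexdigest()[:16]

def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, n=2.0):
    """Log-distance path-loss model: d = 10 ** ((txPower - RSSI) / (10 n))."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

encounter_log = []  # (timestamp, peer_id) pairs kept on-device

def on_advertisement(peer_id, rssi_dbm):
    """Record the peer's anonymous ID only if the encounter is close enough."""
    if estimate_distance_m(rssi_dbm) <= CLOSE_CONTACT_M:
        encounter_log.append((time.time(), peer_id))

# On a positive diagnosis, authorized personnel would collect encounter_log
# and flag the stored anonymous IDs as at-risk.
```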
The explanation dimension of Artificial Intelligence (AI)-based systems has been a hot topic in recent years. Different communities have raised concerns about the increasing presence of AI in people's everyday tasks and how it can affect their lives. Much research addresses the interpretability and transparency concepts of explainable AI (XAI), which are usually related to algorithms and Machine Learning (ML) models. But in decision-making scenarios, people need more awareness of how AI works and of its outcomes in order to build a relationship with the system. Decision-makers usually need to justify their decisions to others in different domains. If a decision is somehow based on or influenced by an AI system's outcome, an explanation of how the AI reached that result is key to building trust between AI and humans in decision-making scenarios. In this position paper, we discuss the role of XAI in decision-making scenarios, present our vision of decision-making with an AI system in the loop, and explore one case from the literature on how XAI can affect how people justify their decisions, considering the importance of building the human-AI relationship in those scenarios.
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skill for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
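The paper's formalization is considerably more detailed, but its core idea can be paraphrased schematically: intelligence measures how efficiently skill is acquired over a scope of tasks, controlling for the priors and experience that could otherwise "buy" skill.

```latex
% Schematic paraphrase in our notation, not the paper's exact formula:
% intelligence as skill-acquisition efficiency over a scope of tasks.
\[
  I_{\text{system}}
  \;\propto\;
  \operatorname*{avg}_{T \,\in\, \text{scope}}
  \frac{\text{skill attained on } T \;\times\; \text{generalization difficulty of } T}
       {\text{priors} \;+\; \text{experience on } T}
\]
```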
Machine Learning models are becoming increasingly proficient at complex tasks. However, even for experts in the field, it can be difficult to understand what a model has learned. This hampers trust and acceptance, and it obstructs the possibility of correcting the model. There is therefore a need for transparency in machine learning models. The development of transparent classification models has received much attention, but there are few developments towards transparent Reinforcement Learning (RL) models. In this study we propose a method that enables an RL agent to explain its behavior in terms of the expected consequences of state transitions and outcomes. First, we define a translation of states and actions into a description that is easier for human users to understand. Second, we develop a procedure that enables the agent to obtain the consequences of a single action, as well as of its entire policy. The method calculates contrasts between the consequences of a policy derived from a user query and those of the agent's learned policy. Third, we construct a format for generating explanations. A pilot survey study was conducted to explore users' preferences for different explanation properties. Results indicate that human users tend to favor explanations about policy rather than about single actions.
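The authors' implementation is not given in the abstract; the following is a minimal sketch of such a contrastive procedure, assuming a gym-like environment that can be reset to a chosen state and a user-supplied `describe` function that translates terminal states into human-readable outcomes.

```python
# Minimal sketch of a contrastive consequence explanation: roll out the
# learned policy and the user's query policy from the same state, then
# compare the distributions of human-readable outcomes.

def rollout_outcomes(env, policy, start_state, describe, n=100, horizon=50):
    """Estimate how often each human-readable outcome occurs under a policy."""
    counts = {}
    for _ in range(n):
        state, done, t = env.reset(start_state), False, 0  # assumed env API
        while not done and t < horizon:
            state, _, done, _ = env.step(policy(state))
            t += 1
        outcome = describe(state)              # translate state to user terms
        counts[outcome] = counts.get(outcome, 0) + 1
    return {k: v / n for k, v in counts.items()}

def contrastive_explanation(env, learned, query, start_state, describe):
    """Contrast consequences of the learned policy vs. the user's alternative."""
    fact = rollout_outcomes(env, learned, start_state, describe)
    foil = rollout_outcomes(env, query, start_state, describe)
    lines = []
    for outcome in sorted(set(fact) | set(foil)):
        lines.append(f"'{outcome}': {fact.get(outcome, 0):.0%} under my policy "
                     f"vs. {foil.get(outcome, 0):.0%} under yours")
    return "\n".join(lines)
```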
Chatbots have become an important solution to the rapidly increasing customer care demands on social media in recent years. However, current work on chatbots for customer care ignores a key factor in user experience: tone. In this work, we create a novel tone-aware chatbot that generates toned responses to user requests on social media. We first conduct a formative study in which the effects of tones are examined; it uncovers significant and varied influences of different tones on user experience. With this knowledge, we design a deep learning based chatbot that takes tone information into account. We train our system on over 1.5 million real customer care conversations collected from Twitter. The evaluation reveals that our tone-aware chatbot generates responses to user requests that are as appropriate as those of human agents. More importantly, our chatbot is perceived to be even more empathetic than human agents.
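The abstract does not specify the architecture; one common way to condition a generative model on tone is to prepend a tone control token to the input request, as in the illustrative PyTorch sketch below (the tone set, model sizes, and sequence-to-sequence layout are assumptions, not the paper's design).

```python
# Illustrative tone-conditioned seq2seq: the encoder reads the request with a
# tone token prepended, so the decoder's initial state carries the tone.
import torch
import torch.nn as nn

TONES = ["<empathetic>", "<passionate>", "<neutral>"]  # assumed tone set

class ToneAwareSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)       # shared embeddings
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # src_ids starts with a tone token id, e.g. [<empathetic>, w1, w2, ...]
        _, state = self.encoder(self.embed(src_ids))
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)                         # per-token vocab logits

model = ToneAwareSeq2Seq(vocab_size=30000)  # vocabulary size is a placeholder
```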