Brain-computer interface (BCI) systems are controlled by users through neurophysiological input for a variety of applications, including communication, environmental control, motor rehabilitation, and cognitive training. Although individuals with severe speech and physical impairment are the primary users of this technology, BCIs have emerged as a potential tool for broader populations, especially with regard to delivering cognitive training or interventions with neurofeedback (NFB). The goal of this study was to investigate the feasibility of using a BCI system with neurofeedback as an intervention for people with mild Alzheimer's disease (AD). The study focused on visual attention and language, since AD is often associated with functional impairments in language and reading. The study enrolled five adults with mild AD in a nine- to thirteen-week EEG-based BCI neurofeedback intervention to improve attention and reading skills. Two participants completed the intervention in its entirety. The remaining three participants could not complete the intervention phase because of COVID-19-related restrictions. Pre- and post-assessment measures were used to assess the reliability of outcome measures and the generalization of treatment to functional reading, processing speed, attention, and working memory skills. Participants demonstrated steady improvement in most cognitive measures across experimental phases, although there was not a significant effect of NFB on most measures of attention. One participant demonstrated statistically significant improvement in letter cancellation during NFB. All participants with mild AD learned to operate a BCI system with training. The results have broad implications for the design and use of BCI systems for participants with cognitive impairment. Preliminary evidence justifies implementing NFB-based cognitive measures in AD.
Artificial intelligence algorithms are increasingly adopted as decision aids by public bodies, with the promise of overcoming biases of human decision-makers. At the same time, they may introduce new biases in the human-algorithm interaction. Drawing on psychology and public administration literatures, we investigate two key biases: overreliance on algorithmic advice even in the face of "warning signals" from other sources (automation bias), and selective adoption of algorithmic advice when this corresponds to stereotypes (selective adherence). We assess these via three experimental studies conducted in the Netherlands: In study 1 (N=605), we test automation bias by exploring participants' adherence to an algorithmic prediction compared to an equivalent human-expert prediction. We do not find evidence for automation bias. In study 2 (N=904), we replicate these findings and also test selective adherence. We find a stronger propensity for adherence when the advice is aligned with group stereotypes, with no significant differences between algorithmic and human-expert advice. In study 3 (N=1,345), we replicate our design with a sample of civil servants. This study was conducted shortly after a major scandal involving public authorities' reliance on an algorithm with discriminatory outcomes (the "childcare benefits scandal"). The scandal is itself illustrative of our theory and of the patterns diagnosed empirically in our experiments; yet in study 3, while our prior findings on automation bias are supported, we do not find patterns of selective adherence. We suggest this is driven by bureaucrats' enhanced awareness of discrimination and algorithmic biases in the aftermath of the scandal. We discuss the implications of our findings for public sector decision-making in the age of automation.
Exoskeletons and orthoses are wearable mobile systems that provide mechanical benefits to their users. Despite significant improvements in recent decades, the technology is not yet mature enough to be adopted for strenuous and non-programmed tasks. To address this shortcoming, different aspects of this technology need to be analysed and improved. Numerous studies have tried to address particular aspects of exoskeletons, e.g. mechanism design, intent prediction, and control schemes. However, most works have focused on a specific element of design or application without providing a comprehensive review framework. This study aims to analyse and survey the aspects that contribute to the improvement and broad adoption of this technology. To address this, after introducing assistive devices and exoskeletons, the main design criteria are investigated from a physical Human-Robot Interface (HRI) perspective. The study is further developed by outlining several examples of known assistive devices in different categories. In order to establish an intelligent HRI strategy and enable intuitive control for users, cognitive HRI is investigated. Various approaches to this strategy are reviewed, and a model for intent prediction is proposed. This model is utilised to predict the gait phase from a single Electromyography (EMG) channel input. The modelling outcomes show the potential of using single-channel input in low-power assistive devices. Furthermore, the proposed model can provide redundancy in devices with a complex control strategy.
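To make the single-channel idea concrete, a minimal sketch of such a pipeline is given below. It is an illustrative assumption rather than the proposed model: the window lengths, features, and classifier choice are hypothetical, and only the overall shape (window a single EMG channel, extract simple time-domain features, classify the gait phase) reflects the abstract above.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def emg_features(window):
    # Simple time-domain features from one EMG window: RMS, mean absolute value, zero crossings.
    rms = np.sqrt(np.mean(window ** 2))
    mav = np.mean(np.abs(window))
    zc = np.sum(np.diff(np.sign(window)) != 0)
    return np.array([rms, mav, zc])

def make_dataset(emg, phase_labels, fs=1000, win_ms=200, step_ms=50):
    # Slide a window over a single-channel EMG recording and pair each window
    # with the gait-phase label at the window's end.
    win, step = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
    X, y = [], []
    for start in range(0, len(emg) - win, step):
        X.append(emg_features(emg[start:start + win]))
        y.append(phase_labels[start + win - 1])
    return np.array(X), np.array(y)

# Hypothetical usage: emg is a 1-D signal, gait_phase holds per-sample phase labels.
# X, y = make_dataset(emg, gait_phase)
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)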
The brain functional connectome, the collection of neural circuits interconnected along functional networks, is one of the most cutting-edge neuroimaging traits and has the potential to play a mediating role within the effect pathway between an exposure and an outcome. While existing mediation analytic approaches are capable of providing insight into complex processes, they mainly focus on a univariate mediator or a mediator vector, without considering network-variate mediators. To fill this methodological gap and address this exciting and urgent application, we propose in this paper an integrative mediation analysis under a Bayesian paradigm with networks entailing the mediation effect. To parameterize the network measurements, we introduce individually specified stochastic block models with unknown block allocation, and naturally bridge effect elements through the latent network mediators induced by the connectivity weights across network modules. To enable the identification of truly active mediating components, we simultaneously impose feature selection across the network mediators. We show the superiority of our model in estimating different effect components and selecting active mediating network structures. As a practical illustration of this approach's application to network neuroscience, we characterize the relationship between a therapeutic intervention and opioid abstinence as mediated by brain functional sub-networks.
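For orientation, a deliberately simplified, linear sketch of the mediation structure is given below. It is not the authors' Bayesian specification; the symbols (exposure X, outcome Y, latent network mediators m_k induced by between-module connectivity weights) are introduced here only to show how the natural indirect and direct effects decompose.

% Schematic linear mediation with K latent network mediators (illustrative only):
\begin{aligned}
  m_k &= \alpha_k X + \epsilon_{m_k}, \qquad k = 1, \dots, K, \\
  Y   &= \gamma X + \sum_{k=1}^{K} \beta_k m_k + \epsilon_Y, \\
  \text{NIE} &= \sum_{k=1}^{K} \alpha_k \beta_k, \qquad \text{NDE} = \gamma,
\end{aligned}
% so that selecting "truly active" mediators amounts to identifying the k with nonzero alpha_k * beta_k.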
From denial-of-service attacks to the spreading of ransomware or other malware across an organization's network, it is possible that manually operated defenses are not able to respond in real time at the scale required, and by the time a breach is detected and remediated the damage has already been done. Autonomous cyber defenses therefore become essential to mitigate the risk of successful attacks and their damage, especially when the response time, effort, and accuracy required of those defenses are impractical or impossible to achieve with defenses operated exclusively by humans. Autonomous agents have the potential to use machine learning (ML) with large amounts of data about known cyberattacks as input, in order to learn patterns and predict characteristics of future attacks. Moreover, learning from past and present attacks enables defenses to adapt to new threats that share characteristics with previous attacks. On the other hand, autonomous cyber defenses introduce risks of unintended harm. Actions arising from autonomous defense agents may have harmful consequences of a functional, safety, security, ethical, or moral nature. Here we focus on machine learning training, algorithmic feedback, and algorithmic constraints, with the aim of motivating a discussion on achieving trust in autonomous cyber defenses.
`Tracking' is the collection of data about an individual's activity across multiple distinct contexts and the retention, use, or sharing of data derived from that activity outside the context in which it occurred. This paper aims to introduce tracking on the web, smartphones, and the Internet of Things, to an audience with little or no previous knowledge. It covers these topics primarily from the perspective of computer science and human-computer interaction, but also includes relevant law and policy aspects. Rather than a systematic literature review, it aims to provide an over-arching narrative spanning this large research space. Section 1 introduces the concept of tracking. Section 2 provides a short history of the major developments of tracking on the web. Section 3 presents research covering the detection, measurement and analysis of web tracking technologies. Section 4 delves into the countermeasures against web tracking and mechanisms that have been proposed to allow users to control and limit tracking, as well as studies into end-user perspectives on tracking. Section 5 focuses on tracking on `smart' devices, including smartphones and the Internet of Things. Section 6 covers emerging issues affecting the future of tracking across these different platforms.
Both logic programming in general, and Prolog in particular, have a long and fascinating history, intermingled with that of many disciplines they inherited from or catalyzed. A large body of research has been gathered over the last 50 years, supported by many Prolog implementations. Many implementations are still actively developed, while new ones keep appearing. Often, the features added by different systems were motivated by the interdisciplinary needs of programmers and implementors, yielding systems that, while sharing the "classic" core language and, in particular, the main aspects of the ISO-Prolog standard, also depart from each other in other aspects. This obviously poses challenges for code portability. The field has also inspired many related but quite different languages that have created their own communities. This article aims to integrate and apply the main lessons learned throughout the evolution of Prolog. It is structured into three major parts. Firstly, we overview the evolution of Prolog systems and the community approximately up to the ISO standard, considering both the main historic developments and the motivations behind several Prolog implementations, as well as other logic programming languages influenced by Prolog. Then, we discuss the Prolog implementations that have been most active since the appearance of the standard: their visions, goals, commonalities, and incompatibilities. Finally, we perform a SWOT analysis in order to better identify the potential of Prolog, and propose future directions along which Prolog might continue to add useful features, interfaces, libraries, and tools, while at the same time improving compatibility between implementations.
Mixed reality head-mounted displays (mxdR-HMD) have the potential to visualize volumetric medical imaging data as holograms, providing a true sense of volumetric depth. An effective user interface, however, has yet to be thoroughly studied. Tangible user interfaces (TUIs) enable tactile interaction with a hologram through a physical object. The object has physical properties that indicate how it might be used with multiple degrees of freedom. We propose a TUI that uses a planar object (PO) for holographic medical volume visualization and exploration. We refer to it as the mxdR hologram slicer (mxdR-HS). Users can slice the hologram to examine particular regions of interest (ROIs) and intermix complementary data and annotations. The mxdR-HS introduces a novel real-time, ad-hoc, marker-less PO tracking method that works with any PO whose corners are visible. The aim of mxdR-HS is to maintain minimal computational latency while preserving practical tracking accuracy, to enable seamless TUI integration in commercial mxdR-HMDs, which have limited computational resources. We implemented the mxdR-HS on a commercial Microsoft HoloLens with a built-in depth camera. Our experimental results showed that the mxdR-HS had superior computational latency but marginally lower tracking accuracy compared with two marker-based tracking methods, and achieved better computational latency and tracking accuracy than 10 marker-less tracking methods. In a medical environment, our mxdR-HS could serve as a visual guide for displaying complex volumetric medical imaging data.
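The paper's tracking method itself is not reproduced here; as a rough, conventional sketch of the underlying idea (find the visible corners of a planar object and recover its pose from them), an OpenCV pipeline of the following shape could be assumed, where the PO's physical width and height and the camera intrinsics are hypothetical inputs.

import cv2
import numpy as np

def find_po_corners(frame):
    # Detect the largest quadrilateral contour in the frame and return its four corners.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            return approx.reshape(4, 2).astype(np.float32)
    return None

def po_pose(corners, po_width, po_height, camera_matrix, dist_coeffs):
    # Estimate the planar object's 6-DoF pose from its corners via PnP.
    # Note: the detected corners must be ordered consistently with object_pts.
    object_pts = np.array([[0, 0, 0], [po_width, 0, 0],
                           [po_width, po_height, 0], [0, po_height, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else (None, None)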
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
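As a loose paraphrase (not the paper's exact formulation), the skill-acquisition-efficiency view can be summarized as follows, where GD_T is the generalization difficulty of task T, and P_T and E_T are the priors and experience the system consumes to reach a given skill level on T.

% Simplified paraphrase of intelligence as skill-acquisition efficiency (illustrative only):
I_{\text{system}} \;\propto\; \operatorname*{avg}_{T \in \text{scope}} \frac{GD_{T}}{P_{T} + E_{T}}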
In recent years, with the rise of Cloud Computing (CC), many companies providing services in the cloud have added a new series of services, such as data mining (DM) and data processing, to their catalogues, taking advantage of the vast computing resources available to them. Different service-definition proposals have been put forward to address the problem of describing CC services in a comprehensive way. Bearing in mind that each provider has its own definition of the logic of its services, and specifically of its DM services, the possibility of describing services in a flexible way across providers is fundamental to maintaining the usability and portability of this type of CC service. Using semantic technologies based on the Linked Data (LD) proposal for the definition of services allows DM services to be designed and modelled with a high degree of interoperability. In this article, a schema for the definition of DM services on CC is presented; it also considers all the key aspects of a CC service, such as prices, interfaces, Service Level Agreements, instances, and experimentation workflows, among others. The proposal is based on LD, so it reuses other schemata to obtain a better definition of the service. To validate the schema, a series of DM services have been created in which some of the best-known algorithms, such as Random Forest and KMeans, are modelled as services.
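A minimal sketch of what such an LD-based description could look like programmatically is shown below, using rdflib and a hypothetical dm-service vocabulary; the namespace, class, and property names are illustrative assumptions and do not reproduce the schema proposed in the article.

from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF

# Hypothetical vocabulary for illustration only; not the article's schema.
DMS = Namespace("http://example.org/dm-service#")

g = Graph()
svc = URIRef("http://example.org/services/random-forest")
g.add((svc, RDF.type, DMS.DataMiningService))
g.add((svc, DMS.algorithm, Literal("Random Forest")))
g.add((svc, DMS.pricePerHour, Literal(0.12)))
g.add((svc, DMS.interface, Literal("REST")))

print(g.serialize(format="turtle"))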
In recent years, correlation filters have shown dominant and spectacular results for visual object tracking. The types of features employed in this family of trackers significantly affect tracking performance. The ultimate goal is to utilize robust features that are invariant to any kind of appearance change of the object, while predicting the object location as accurately as in the case of no appearance change. As deep learning based methods have emerged, the study of learning features for specific tasks has accelerated. For instance, discriminative visual tracking methods based on deep architectures have been studied with promising performance. Nevertheless, correlation filter based (CFB) trackers confine themselves to pre-trained networks that were trained for the object classification problem. To this end, this manuscript formulates the problem of learning deep fully convolutional features for CFB visual tracking. In order to learn the proposed model, a novel and efficient backpropagation algorithm is presented based on the loss function of the network. The proposed learning framework enables the network model to be flexible for a custom design. Moreover, it alleviates the dependency on networks trained for classification. Extensive performance analysis shows the efficacy of the proposed custom design in the CFB tracking framework. By fine-tuning the convolutional parts of a state-of-the-art network and integrating this model into a CFB tracker, the top-performing tracker of VOT2016, an 18% increase is achieved in terms of expected average overlap, and tracking failures are decreased by 25%, while maintaining superiority over state-of-the-art methods on the OTB-2013 and OTB-2015 tracking datasets.
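For readers unfamiliar with correlation filter based tracking, a generic single-channel formulation is sketched below; it shows only the standard closed-form ridge-regression filter and its detection step, while the manuscript's actual contribution, backpropagating a tracking loss through the feature-extraction network, is not reproduced.

import numpy as np

def train_correlation_filter(x, y, lam=1e-2):
    # Closed-form ridge-regression correlation filter in the Fourier domain
    # (single feature channel). x: template patch, y: desired response
    # (e.g. a Gaussian centered on the target), lam: regularization weight.
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)

def detect(w_hat, z):
    # Correlate the learned filter with a search patch z; the peak of the
    # returned response map gives the predicted target translation.
    return np.real(np.fft.ifft2(w_hat * np.fft.fft2(z)))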