This paper highlights the design philosophy and architecture of the Health Guardian, a platform developed by the IBM Digital Health team to accelerate the discovery of new digital biomarkers and the development of digital health technologies. The Health Guardian allows for rapid translation of artificial intelligence (AI) research into cloud-based microservices that can be tested with data from clinical cohorts to understand disease and enable early prevention. The platform can be connected to mobile applications, wearables, or Internet of Things (IoT) devices to collect health-related data into a secure database. Once the analytics are created, researchers can containerize and deploy their code on the cloud using pre-defined templates and validate the models using the data collected from one or more sensing devices. The Health Guardian platform currently supports time-series, text, audio, and video inputs with 70+ analytic capabilities and is used for non-commercial scientific research. We provide an example of the Alzheimer's disease (AD) assessment microservice, which uses AI methods to extract linguistic features from audio recordings to evaluate an individual's mini-mental state, estimate the likelihood of having AD, and predict the onset of AD before the age of 85. Today, IBM research teams across the globe use the Health Guardian internally as a test bed for early-stage research ideas, and externally with collaborators to support and enhance AI model development and clinical study efforts.
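To make the microservice workflow concrete, the following is a minimal sketch of a containerizable analytic service of the kind the abstract describes. The endpoint name, payload fields, and scoring rule are invented for illustration; the Health Guardian's actual API is not documented here.

    # Hypothetical analytic microservice sketch (Flask); not the Health
    # Guardian API. A deployed service would load a trained model instead
    # of the placeholder scoring rule below.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/analyze", methods=["POST"])
    def analyze():
        # Assume upstream tooling has already extracted linguistic
        # features (e.g., pause rate) from the audio recording.
        features = request.get_json()
        # Placeholder score; "pause_rate" is an assumed feature name.
        risk = min(1.0, 0.1 + 0.5 * features.get("pause_rate", 0.0))
        return jsonify({"ad_risk": risk})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)  # templates typically expose one port

A pre-defined deployment template would then wrap such a service in a container image and register it with the platform's catalog of analytic capabilities.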
Developing intelligent, interoperable Cyber Threat Information (CTI) sharing technologies can help build strong defences against modern cyber threats. CTI sharing allows the community to exchange information about cybercriminals' threats, vulnerabilities, and countermeasures, so that organizations can defend themselves or detect malicious activity. A crucial requirement for success is that the data connected to cyber risks be understandable, organized, and of good quality, so that the receiving parties can grasp its content and utilize it effectively. This article describes an innovative cyber threat intelligence management platform (CTIMP) for industrial environments, one of the significant elements of the Cyber-pi project. In particular, the proposed architecture uses cyber knowledge from trusted public sources and integrates it with relevant information from the organization's supervised infrastructure in an entirely interoperable and intelligent way. Combined with an advanced visualization mechanism and user interface, the services mentioned above provide administrators with the situational awareness they require, while also allowing for extended cooperation, intelligent selection of advanced coping strategies, and a set of automated self-healing rules for dealing with threats.
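The abstract does not name an exchange format, but STIX 2.1 (here via the python-stix2 package) is a common choice for interoperable CTI and illustrates what "understandable, organized, and of good quality" threat data can look like in practice. The indicator content below is invented for illustration.

    # Illustrative STIX 2.1 indicator; not the Cyber-pi data model.
    from stix2 import Bundle, Indicator

    indicator = Indicator(
        name="Known C2 server",
        description="IP observed in the organization's supervised infrastructure",
        pattern="[ipv4-addr:value = '198.51.100.7']",
        pattern_type="stix",
    )
    bundle = Bundle(indicator)
    print(bundle.serialize(pretty=True))  # JSON ready to share with trusted parties

Because the bundle is standards-based JSON, any receiving party with a STIX-aware tool can parse and act on it without custom integration work.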
There has been increasing interest in studying the effect of giving birth following an unintended pregnancy on later-life physical and mental health. In this article, we provide the protocol for our planned observational study on the long-term mental and physical health consequences for mothers who bear children resulting from unintended pregnancies. We aim to use data from the Wisconsin Longitudinal Study (WLS) to examine the effect of births from unintended pregnancies on a broad range of outcomes, including depression, psychological well-being, physical health, alcohol usage, and economic well-being. To strengthen our causal findings, we plan to address our research questions in two subgroups, Catholics and non-Catholics, and discover the "replicable" outcomes for which the effect of unintended pregnancy is negative (or positive) in both subgroups. Following the idea of non-random cross-screening, the data will be split according to whether the woman is Catholic or not; one part of the data will then be used to select the hypotheses and design the corresponding tests for the second part of the data. In past uses of cross-screening (automatic cross-screening), a single team of investigators dealt with both parts of the data, so the investigators had to decide on an analysis plan before looking at the data. In this protocol, we describe plans to carry out a novel flexible cross-screening in which there will be two teams of investigators, each with access to only one part of the data, and each team will use its part of the data to decide how to plan the analysis for the other team's data. In addition to the above replicability analysis, we also discuss the plan to test the global null hypothesis, which is intended to identify the outcomes that are affected by unintended pregnancy in at least one of the two subgroups of Catholics and non-Catholics.
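The cross-screening logic can be sketched numerically. In the toy example below, each half of the (synthetic) data is used to screen outcomes and choose one-sided directions, which are then tested on the other half; outcomes rejected in both directions of the swap are called replicable. The screening rule, test, and significance levels are illustrative stand-ins, not the WLS protocol.

    # Minimal cross-screening sketch on synthetic data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, k = 500, 5  # subjects per subgroup, number of outcomes
    catholic = rng.normal(0.2, 1, size=(n, k))      # stand-in effect data
    non_catholic = rng.normal(0.2, 1, size=(n, k))

    def screen(data, alpha=0.10):
        # Step 1: pick promising outcomes and their signs on one part.
        t, p = stats.ttest_1samp(data, 0.0)
        return [(j, np.sign(t[j])) for j in range(k) if p[j] < alpha]

    def test(data, selected, alpha=0.025):
        # Step 2: one-sided tests of the selected hypotheses on the other
        # part, Bonferroni-corrected within the selection.
        rejections = []
        for j, sign in selected:
            t, p_two = stats.ttest_1samp(data[:, j], 0.0)
            p_one = p_two / 2 if np.sign(t) == sign else 1 - p_two / 2
            if p_one < alpha / max(len(selected), 1):
                rejections.append(j)
        return rejections

    # Each team screens on its own part; the other part is used for testing.
    rej_a = test(non_catholic, screen(catholic))
    rej_b = test(catholic, screen(non_catholic))
    print("replicable outcomes:", sorted(set(rej_a) & set(rej_b)))

In the flexible variant described in the protocol, the screening step would be carried out by a human team that may use any analysis of its half, rather than the fixed rule hard-coded here.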
In electromagnetic inverse scattering, we aim to reconstruct object permittivity from scattered waves. Deep learning is a promising alternative to traditional iterative solvers, but it has been used mostly in a supervised framework to regress permittivity patterns from scattered fields or back-projections. While such methods are fast at test time and achieve good results for specific data distributions, they are sensitive to distribution drift in the scattered fields, which is common in practice. If the distribution of the scattered fields changes due to changes in frequency, the number of transmitters and receivers, or any other real-world factor, an end-to-end neural network must be re-trained or fine-tuned on a new dataset. In this paper, we propose a new data-driven framework for inverse scattering based on deep generative models. We model the target permittivities by a low-dimensional manifold that is learned from data and acts as a regularizer. Unlike supervised methods, which require both scattered fields and target signals, we only need the target permittivities for training; the trained model can then be used with any experimental setup. We show that the proposed framework significantly outperforms traditional iterative methods, especially for strong scatterers, while having reconstruction quality comparable to state-of-the-art deep learning methods such as U-Net.
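A rough sketch of the generative-prior idea: once a decoder has been trained on permittivity patterns alone, reconstruction reduces to optimizing a latent code so that the decoded permittivity explains the measured fields under whatever forward operator matches the experimental setup. The decoder below is untrained and the forward operator is a toy linear stand-in (a real scattering operator is nonlinear), so this only illustrates the optimization structure.

    # Latent-space inversion sketch; decoder and operator are toy stand-ins.
    import torch

    torch.manual_seed(0)
    d_latent, d_perm, d_meas = 8, 64, 32

    decoder = torch.nn.Sequential(          # pretend this was pre-trained on
        torch.nn.Linear(d_latent, 128),     # target permittivity patterns
        torch.nn.ReLU(),
        torch.nn.Linear(128, d_perm),
    )
    A = torch.randn(d_meas, d_perm) / d_perm**0.5  # linearized forward operator
    y = A @ torch.rand(d_perm)                     # observed scattered field

    z = torch.zeros(d_latent, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = torch.norm(A @ decoder(z) - y) ** 2  # data fidelity only; the
        loss.backward()                             # learned manifold is the prior
        opt.step()

    eps_hat = decoder(z).detach()  # reconstruction constrained to the manifold

Because the forward operator appears only in the data-fidelity term, changing the frequency or the transmitter/receiver layout means swapping A, not re-training the decoder, which is the setup-independence the abstract claims.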
Automated segmentation of anatomical sub-regions with high precision has become a necessity to enable the quantification and characterization of cells/tissues in histology images. Currently, no machine learning model is available for analyzing sub-anatomical regions of the brain in 2D histological images. Scientists instead rely on manual segmentation of anatomical sub-regions, which is extremely time-consuming and prone to labeler-dependent bias. One of the major challenges in accomplishing such a task is the lack of high-quality annotated images that can be used to train a generic artificial intelligence model. In this study, we employed a UNet-based architecture and compared model performance across various combinations of encoders, image sizes, and sample-selection techniques. Additionally, to increase the sample set, we resorted to data augmentation, which provided data diversity and robust learning. We trained our best-fit model on approximately one thousand annotated 2D brain images stained with Nissl/Haematoxylin and Tyrosine Hydroxylase enzyme (TH, an indicator of dopaminergic neuron viability). The dataset comprises images from different animal studies, enabling the model to be trained on varied data. The model effectively detects the two sub-regions of the substantia nigra, pars compacta (SNCD) and pars reticulata (SNr), in all the images. In spite of limited training data, our best model achieves a mean intersection over union (IoU) of 79% and a mean Dice coefficient of 87%. In conclusion, the UNet-based model with EfficientNet as the encoder outperforms all other encoders, resulting in a first-of-its-kind robust model for multiclass segmentation of brain sub-regions in 2D images.
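For reference, the two reported metrics can be computed per class on integer label maps and averaged, as in this self-contained sketch (the masks here are random placeholders, and the class layout is an assumption):

    # Mean IoU and mean Dice over classes for multiclass segmentation masks.
    import numpy as np

    def iou_and_dice(pred, target, n_classes):
        ious, dices = [], []
        for c in range(n_classes):
            p, t = pred == c, target == c
            inter = np.logical_and(p, t).sum()
            union = np.logical_or(p, t).sum()
            if union == 0:          # class absent in both masks: skip it
                continue
            ious.append(inter / union)
            dices.append(2 * inter / (p.sum() + t.sum()))
        return np.mean(ious), np.mean(dices)

    rng = np.random.default_rng(0)
    pred = rng.integers(0, 3, size=(256, 256))    # assumed: 0=background, 1=SNCD, 2=SNr
    target = rng.integers(0, 3, size=(256, 256))
    miou, mdice = iou_and_dice(pred, target, n_classes=3)
    print(f"mean IoU={miou:.2f}, mean Dice={mdice:.2f}")

Dice weighs the overlap against the average mask size rather than the union, which is why the reported Dice (87%) sits above the IoU (79%) on the same predictions.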
Every day, large amounts of sensitive data are distributed across mobile phones, wearable devices, and other sensors. Traditionally, these enormous datasets have been processed on a single system, with complex models being trained to make valuable predictions. Distributed machine learning techniques such as Federated Learning and Split Learning have recently been developed to better protect user data and privacy while ensuring high performance. Both of these distributed learning architectures have advantages and disadvantages. In this paper, we examine these tradeoffs and suggest a new hybrid Federated Split Learning architecture that combines the efficiency and privacy benefits of both. Our evaluation demonstrates how our hybrid Federated Split Learning approach can lower the amount of processing power required by each client running a distributed learning system and reduce training and inference time while maintaining similar accuracy. We also discuss the resiliency of our approach to deep learning privacy inference attacks and compare our solution to other recently proposed benchmarks.
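A toy sketch of how such a hybrid can be wired: as in split learning, each client runs only the front layers and ships cut-layer activations to a shared server half, which keeps client compute low; as in federated learning, the client halves are periodically averaged. The model sizes, data, and averaging schedule below are illustrative, not the paper's architecture.

    # Hybrid federated split learning sketch (toy sizes, random data).
    import copy
    import torch

    torch.manual_seed(0)
    make_front = lambda: torch.nn.Sequential(torch.nn.Linear(20, 16), torch.nn.ReLU())
    clients = [make_front() for _ in range(3)]      # client-side model halves
    server = torch.nn.Linear(16, 2)                 # shared server-side half
    opt_server = torch.optim.SGD(server.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()

    for rnd in range(5):
        for front in clients:                       # each client's local step
            x = torch.randn(32, 20)                 # private local batch
            y = torch.randint(0, 2, (32,))
            opt_client = torch.optim.SGD(front.parameters(), lr=0.1)
            smashed = front(x)                      # only activations leave
            loss = loss_fn(server(smashed), y)      # the client device
            opt_client.zero_grad()
            opt_server.zero_grad()
            loss.backward()                         # gradients flow back
            opt_client.step()                       # through the cut layer
            opt_server.step()
        # FedAvg over the client halves at the end of each round
        avg = {k: torch.stack([c.state_dict()[k] for c in clients]).mean(0)
               for k in clients[0].state_dict()}
        for c in clients:
            c.load_state_dict(copy.deepcopy(avg))

Raw inputs never leave the clients (only "smashed" activations do), while averaging keeps the client halves consistent across participants.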
Context: Stack Overflow (SO) has gained the attention of software engineers (e.g., architects) as a place to learn, practice, and utilize development knowledge, such as Architectural Knowledge (AK). But little is known about the AK communicated in SO, which is a type of high-level but important knowledge in development. Objective: This study aims to investigate the AK in SO posts in terms of its categories and characteristics, as well as its usefulness from the point of view of SO users. Method: We conducted an exploratory study by qualitatively analyzing a statistically representative sample of 968 Architecture Related Posts (ARPs) from SO. Results: The main findings are: (1) architecture related questions can be classified into 9 core categories, in which "architecture configuration" is the most common category, followed by the "architecture decision" category, and (2) architecture related questions that provide clear descriptions together with architectural diagrams increase their likelihood of getting more than one answer, while poorly structured architecture questions tend to get only one answer. Conclusions: Our findings suggest that future research can focus on enabling automated approaches and tools that could facilitate the search and (re)use of AK in SO. SO users can refer to our proposed guidelines to compose architecture related questions that are more likely to get responses in SO.
Games and simulators can be a valuable platform for executing complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting the development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of AI algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with AI algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.
Deployment of Internet of Things (IoT) devices and data fusion techniques has gained popularity in public and government domains. This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically results in a complex data problem. Since the military is investigating how heterogeneous IoT devices can aid its processes and tasks, we investigate a multi-sensor approach. Moreover, we propose a signal-to-image encoding approach that transforms and fuses signals from IoT wearable devices into an image that is invertible and easier to visualize, supporting decision making. Furthermore, we investigate the challenge of enabling an intelligent identification and detection operation and demonstrate the feasibility of the proposed deep learning and anomaly detection models, which can support future applications that utilize hand-gesture data from wearable devices.
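The abstract does not specify the encoding, but the Gramian Angular Summation Field (GASF) is one widely used signal-to-image encoding that is invertible from its diagonal, and it illustrates the transform-and-recover property claimed above. The sine wave below is a stand-in for a wearable-sensor channel.

    # GASF encode/decode sketch; shown as one possible invertible encoding,
    # not necessarily the paper's.
    import numpy as np

    def gasf(signal):
        lo, hi = signal.min(), signal.max()
        s = (signal - lo) / (hi - lo)          # rescale to [0, 1]
        phi = np.arccos(s)                     # polar-angle representation
        return np.cos(phi[:, None] + phi[None, :]), lo, hi

    def inverse_gasf(image, lo, hi):
        # Diagonal holds cos(2*phi_i) = 2*s_i^2 - 1, so s_i is recoverable.
        s = np.sqrt((np.diag(image) + 1) / 2)
        return s * (hi - lo) + lo

    x = np.sin(np.linspace(0, 4 * np.pi, 64))  # stand-in sensor signal
    img, lo, hi = gasf(x)                      # 64x64 image, easy to visualize
    x_rec = inverse_gasf(img, lo, hi)
    assert np.allclose(x, x_rec)               # encoding is invertible

Stacking one such image per sensor channel yields a multi-channel image that standard convolutional models (and anomaly detectors built on them) can consume directly.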
While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, these may not account for perturbations encountered in several real-world settings. In many such cases, although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not identically distributed with, but deviates from, the training domain. While this deviation may not be exactly known, its broad characterization is specified a priori in terms of attributes. We propose an adversarial training approach that learns to generate new samples so as to maximize exposure of the classifier to the attribute space, without having access to data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations and the outer minimization finding model parameters that minimize the loss on those perturbations. We demonstrate the applicability of our approach on three types of naturally occurring perturbations: object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained using our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
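The min-max structure can be sketched for a single attribute, rotation: the inner step searches the attribute range for the angle that most hurts the classifier, and the outer step updates the weights on those worst-case samples. The worst-of-K search below is a simple stand-in for the paper's learned sample generator, and the model and data are placeholders.

    # Attribute-space adversarial training sketch (rotation attribute).
    import torch
    import torchvision.transforms.functional as TF

    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()

    def inner_max(x, y, k=8, max_deg=30.0):
        # Inner maximization: pick the rotation in the specified attribute
        # range that maximizes the loss on this batch.
        worst_x, worst_loss = x, -1.0
        for _ in range(k):
            deg = float(torch.empty(1).uniform_(-max_deg, max_deg))
            x_rot = TF.rotate(x, deg)
            with torch.no_grad():
                loss = loss_fn(model(x_rot), y).item()
            if loss > worst_loss:
                worst_x, worst_loss = x_rot, loss
        return worst_x

    for step in range(100):                      # outer minimization
        x = torch.rand(64, 1, 28, 28)            # stand-in for MNIST batches
        y = torch.randint(0, 10, (64,))
        opt.zero_grad()
        loss_fn(model(inner_max(x, y)), y).backward()
        opt.step()

Replacing the random worst-of-K search with a differentiable generator over attribute parameters recovers the gradient-based inner maximization the abstract describes.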