The release of Microsoft's HoloLens headset raises new types of design challenges that would have been difficult to address without such hardware. This semi-transparent visor headset lets the wearer view 3D virtual objects projected into their real environment. The user can also interact with these 3D objects, which can in turn interact with each other. This new technology is called mixed reality. We had the opportunity to digitally transform a conventional human nutrition workshop for patients awaiting bariatric surgery by developing a piece of software called HOLO_NUTRI for the HoloLens headset. Despite our experience as users and as programmers specialized in the development of interactive 3D graphics applications, we found that such a mixed reality experience required programming concepts quite different from those of conventional software or of virtual reality applications, but above all demanded careful thought about communication with users. In this article, we describe our communication design (graphic materials, hardware tutorials, explanatory videos), a step that proved crucial to the smooth progress of our project. The software was used by thirty patients at Le Puy-en-Velay Hospital over ten ninety-minute sessions, during which patients learned to handle the headset and the HOLO_NUTRI software. We also asked patients a series of questions to assess both the adequacy and the importance of this communication approach for such an experience. As mixed reality is a very recent technology while the number of applications based on it is growing rapidly, the reflections on the communication elements described in this article (videos, a learning exercise for using the headset, a communication leaflet, etc.) can help developers of such applications.
The use of AI systems in healthcare for the early screening of diseases is of great clinical importance. Deep learning has shown great promise in medical imaging, but concerns about the reliability and trustworthiness of AI systems limit their deployment in real clinical settings, where patient safety is at stake. Uncertainty estimation plays a pivotal role in producing a confidence measure alongside the prediction of a deep model. This is particularly important in medical imaging, where the uncertainty in a model's predictions can be used to identify areas of concern or to provide additional information to the clinician. In this paper, we review the various types of uncertainty in deep learning, including aleatoric uncertainty and epistemic uncertainty, and discuss how they can be estimated in medical imaging. More importantly, we review recent advances in deep learning models that incorporate uncertainty estimation in medical imaging. Finally, we discuss the challenges and future directions of uncertainty estimation in deep learning for medical imaging. We hope this review will ignite further interest in the community and provide researchers with an up-to-date reference on applications of uncertainty estimation models in medical imaging.
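A widely used way to approximate epistemic uncertainty, for instance, is Monte Carlo dropout: keep dropout active at prediction time and treat the spread across repeated stochastic forward passes as a confidence signal. The sketch below is a minimal illustration in PyTorch under assumed placeholder data and a toy model; it is not a method from any specific paper reviewed here.

```python
import torch
import torch.nn as nn

# Toy classifier with dropout; in practice this would be a medical-imaging model.
model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 2),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo dropout: keep dropout active and average stochastic passes."""
    model.train()  # enables dropout even though no weights are updated
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)              # predictive distribution per input
    epistemic = probs.var(dim=0).sum(-1)  # spread across passes ~ model uncertainty
    return mean, epistemic

x = torch.randn(8, 64)  # batch of 8 dummy feature vectors (placeholder data)
mean, uncertainty = mc_dropout_predict(model, x)
print(mean.shape, uncertainty.shape)  # torch.Size([8, 2]) torch.Size([8])
```

Inputs whose `uncertainty` is high can then be flagged for review by a clinician rather than decided automatically.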
We propose a self-stabilizing leader election (SS-LE) protocol on ring networks in the population protocol model. Given rough knowledge $\psi = \lceil \log n \rceil + O(1)$ of the population size $n$, the proposed protocol lets the population reach a safe configuration within $O(n^2 \log n)$ steps with high probability, starting from any configuration; thereafter, the population keeps the unique leader forever. Since no protocol solves SS-LE in $o(n^2)$ steps with high probability, the convergence time is near-optimal: the gap is only an $O(\log n)$ multiplicative factor. The protocol uses only $\mathrm{polylog}(n)$ states. Two state-of-the-art algorithms in the literature solve SS-LE on ring networks: the first uses a polynomial number of states and solves SS-LE in $O(n^2)$ steps, whereas the second requires exponential time but uses only a constant number of states. Our proposed algorithm provides an excellent middle ground between these two.
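As background, a population protocol proceeds by repeatedly picking a pair of agents at random and applying a joint transition to their states. The sketch below simulates this scheduler for a deliberately simple (and non-self-stabilizing) leader election rule on a complete interaction graph, purely to make the interaction model concrete; the paper's protocol on rings is far richer, and none of its details appear here.

```python
import random

def simulate(n, steps):
    """Uniform-scheduler population protocol: 'L' = leader, 'F' = follower.
    Toy rule: when two leaders interact, the responder is demoted.
    Illustrates the model only, not the paper's SS-LE protocol."""
    states = ['L'] * n  # worst case for this rule: every agent starts as a leader
    for _ in range(steps):
        i, j = random.sample(range(n), 2)  # initiator i, responder j
        if states[i] == 'L' and states[j] == 'L':
            states[j] = 'F'
    return states.count('L')

print(simulate(n=100, steps=100_000))  # typically reports a single leader
```

Even this toy rule needs on the order of $n^2$ expected interactions to eliminate the last redundant leader, which gives some intuition for why $o(n^2)$-step SS-LE is impossible.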
Access to individual-level health data is essential for gaining new insights and advancing science. In particular, modern methods based on artificial intelligence rely on the availability of, and access to, large datasets. In the health sector, access to individual-level data is often challenging due to privacy concerns. A promising alternative is the generation of fully synthetic data, i.e. data generated through a randomised process that have statistical properties similar to those of the original data, but no one-to-one correspondence with the original individual-level records. In this study, we use a state-of-the-art synthetic data generation method and perform in-depth quality analyses of the generated data for a specific use case in the field of nutrition. We demonstrate the need for careful analyses of synthetic data that go beyond descriptive statistics, and we provide valuable insights into how to realise the full potential of synthetic datasets. By extending the methods, and by thoroughly analysing the effects of sampling from a trained model, we are able to largely reproduce significant real-world analysis results in the chosen use case.
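One common analysis that goes beyond descriptive statistics is a train-on-synthetic, test-on-real check: fit a model on the synthetic sample and verify that it predicts held-out real records about as well as a model trained on real data. The sketch below illustrates this generic check with scikit-learn on placeholder arrays; it is an assumption-laden illustration, not the pipeline or the generation method used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder data; in practice X_real/X_synth would come from the real dataset
# and from sampling the trained generative model, respectively.
X_real, y_real = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)
X_synth, y_synth = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)
X_test, y_test = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)

auc_real = roc_auc_score(
    y_test, LogisticRegression().fit(X_real, y_real).predict_proba(X_test)[:, 1])
auc_synth = roc_auc_score(
    y_test, LogisticRegression().fit(X_synth, y_synth).predict_proba(X_test)[:, 1])
# Comparable AUCs suggest the synthetic data preserve the predictive signal.
print(f"train-on-real: {auc_real:.3f}  train-on-synthetic: {auc_synth:.3f}")
```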
Finite-state dimension, introduced early in this century as a finite-state version of classical Hausdorff dimension, is a quantitative measure of the lower asymptotic density of information in an infinite sequence over a finite alphabet, as perceived by finite automata. Finite-state dimension is a robust concept that now has equivalent formulations in terms of finite-state gambling, lossless finite-state data compression, finite-state prediction, entropy rates, and automatic Kolmogorov complexity. The Schnorr-Stimm dichotomy theorem gave the first automata-theoretic characterization of normal sequences, which had been studied in analytic number theory since Borel defined them. This theorem implies that a sequence (or a real number having this sequence as its base-$b$ expansion) is normal if and only if it has finite-state dimension 1. One of the most powerful classical tools for investigating normal numbers is the Weyl criterion, which characterizes normality in terms of exponential sums. Such sums are well-studied objects with many connections to other areas of analytic number theory, and this has made use of the Weyl criterion especially fruitful. This raises the question of whether the Weyl criterion can be generalized from finite-state dimension 1 to arbitrary finite-state dimensions, thereby making it a quantitative tool for studying data compression, prediction, etc. This paper does exactly this. We extend the Weyl criterion from a characterization of sequences with finite-state dimension 1 to a criterion that characterizes every finite-state dimension. This turns out not to be a routine generalization of the original Weyl criterion: even though the exponential sums may diverge for non-normal numbers, finite-state dimension can be characterized in terms of the dimensions of the subsequence limits of the exponential sums. We demonstrate the utility of our criterion through examples.
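For reference, the classical criterion, stated here in its standard form from equidistribution theory rather than quoted from this paper, says that a real number $x$ is normal in base $b$ exactly when the orbit $(b^n x)_{n \ge 0}$ is equidistributed modulo 1, which holds if and only if all the associated exponential sums vanish in the limit:

\[
  \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} e^{2\pi i k\, b^{n} x} = 0
  \qquad \text{for every integer } k \neq 0 .
\]

Normality (finite-state dimension 1) thus corresponds to all of these averages vanishing; the generalization described above asks what the limiting behaviour of these sums reveals when they do not vanish.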
Exploiting knowledge about the structure of a problem can greatly benefit the efficiency and scalability of an Evolutionary Algorithm (EA). Model-Based EAs (MBEAs) are capable of doing this by explicitly modeling the problem structure. The Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) is among the state of the art of MBEAs due to its use of a linkage model and the optimal mixing variation operator. GOMEA is known to excel especially in a Gray-Box Optimization (GBO) setting that allows for partial evaluations, i.e., the relatively efficient evaluation of a partial modification of a solution. Such GBO settings are known to exist in various real-world applications to which GOMEA has successfully been applied. In this work, we introduce the GOMEA library, making existing GOMEA code in C++ accessible through Python, which serves as a centralized way of maintaining and distributing GOMEA code for various optimization domains. Moreover, it allows for the straightforward definition of Black-Box Optimization (BBO) as well as GBO fitness functions within Python, which are called from the C++ optimization code for each required (partial) evaluation. We describe the structure of the GOMEA library and how it can be used, and we show its performance in both GBO and BBO settings.
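To give a flavour of the workflow, the sketch below defines a BBO fitness function in Python and hands it to the C++ optimizer. The class and parameter names follow the style of the library's documentation but are assumptions here and should be checked against the actual `gomea` package.

```python
import numpy as np
import gomea

# A BBO fitness function defined in Python; GOMEA's C++ core calls back into
# objective_function for every evaluation. Names below (BBOFitnessFunctionDiscrete,
# DiscreteGOMEA, linkage.Univariate) are assumed from the library's documented
# style and may differ in the installed version.
class OneMax(gomea.fitness.BBOFitnessFunctionDiscrete):
    def objective_function(self, objective_index, variables):
        return np.sum(variables)  # maximize the number of ones

fitness = OneMax(dimensionality=20, value_to_reach=20)
ea = gomea.DiscreteGOMEA(fitness=fitness,
                         linkage_model=gomea.linkage.Univariate())
result = ea.run()
```

In a GBO setting, the fitness class would additionally expose partial evaluations so that the optimizer can re-evaluate only the variables it modified.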
Mobile health (mHealth) applications have become increasingly valuable in preventive healthcare and in reducing the burden on healthcare organizations. The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps and to identify the underlying structure that shapes users' behavioral intention. An online study employing a factorial survey design with vignettes was conducted, and a total of 1,669 participants from eight countries across four continents were included. Structural equation modeling was employed to quantitatively assess how various factors collectively contribute to users' willingness to use mHealth apps. The results indicate that users' digital literacy has the strongest impact on their willingness to use such apps, followed by their online habit of sharing personal information, while users' concerns about personal privacy have only a weak impact. Furthermore, users' demographic background, such as country of residence, age, ethnicity, and education, has a significant moderating effect. Our findings have implications for app designers, healthcare practitioners, and policymakers: efforts are needed to regulate data collection and sharing, and to promote digital literacy among the general population, in order to facilitate the widespread adoption of mHealth apps.
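As an illustration of the modeling step (not the authors' actual specification), a structural equation model relating such factors to behavioral intention can be fit in Python with the semopy package; the variable names and the data path below are hypothetical.

```python
import pandas as pd
import semopy

# Hypothetical survey data: one row per respondent, columns are scale scores.
data = pd.read_csv("survey.csv")  # placeholder path

# Structural part: behavioral intention regressed on the three factors
# discussed above (variable names are illustrative, not the paper's).
desc = """
intention ~ digital_literacy + sharing_habit + privacy_concern
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```

Moderating effects of demographics would typically be examined via multi-group versions of such a model, one group per demographic stratum.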
Many networks in political and social research are bipartite, with edges connecting exclusively across two distinct types of nodes. A common example is a cosponsorship network, in which legislators are connected indirectly through the bills they support. Yet most existing network models are designed for unipartite networks, where edges can arise between any pair of nodes. We show that using a unipartite network model to analyze bipartite networks, as is often done in practice, can result in aggregation bias. To address this methodological problem, we develop a statistical model of bipartite networks by extending the popular mixed-membership stochastic blockmodel. Our model allows researchers to identify the groups of nodes, within each node type, that share common patterns of edge formation. The model also incorporates both node- and dyad-level covariates as predictors of edge formation patterns. We develop an efficient computational algorithm for fitting the model and apply it to cosponsorship data from the United States Senate. We show that senators tapped into communities defined by party lines and seniority when cosponsoring bills, while the pattern of cosponsorship depended on the timing and substance of legislation. We also find evidence for norms of reciprocity, and we uncover the substantial role played by policy expertise in the formation of cosponsorships between senators and legislation. An open-source software package is available for implementing the proposed methodology.
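For intuition, a bipartite extension of the mixed-membership stochastic blockmodel, sketched here in its generic form without the covariates the paper adds, can be written as the following generative process for a potential edge between legislator $i$ and bill $j$:

\[
\pi_i \sim \mathrm{Dirichlet}(\alpha), \qquad \phi_j \sim \mathrm{Dirichlet}(\beta),
\]
\[
z_{ij} \sim \mathrm{Categorical}(\pi_i), \qquad
w_{ij} \sim \mathrm{Categorical}(\phi_j), \qquad
Y_{ij} \mid z_{ij}=g,\ w_{ij}=h \ \sim\ \mathrm{Bernoulli}(B_{gh}),
\]

where $\pi_i$ and $\phi_j$ are the mixed-membership vectors of legislator $i$ and bill $j$ over their respective, distinct sets of groups, and $B$ collects the between-group edge probabilities. Keeping the two group sets separate is what distinguishes this from forcing both node types into a single unipartite blockmodel.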
Along with the massive growth of the Internet from the 1990s until now, various innovative technologies have been created to bring users breathtaking experiences with more virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed with immersive experience and digital transformation in mind, but most remain incoherent rather than integrated into a single platform. In this context, the metaverse, a term formed by combining meta and universe, has been introduced as a shared virtual world that is fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among such technologies, AI has shown great importance in processing big data to enhance immersive experience and enable human-like intelligence of virtual agents. In this survey, we explore the role of AI in the foundation and development of the metaverse. We first provide a preliminary overview of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then present a comprehensive investigation of AI-based methods concerning six technical aspects with potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twin, and neural interface. Subsequently, we study several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, that can be deployed in virtual worlds. Finally, we conclude by summarizing the key contributions of this survey and outlining open research directions in AI for the metaverse.
We describe ACE0, a lightweight platform for evaluating the suitability and viability of AI methods for behaviour discovery in multi-agent simulations. Specifically, ACE0 was designed to explore AI methods for multi-agent simulations used in operations research studies of new technologies such as autonomous aircraft. Simulation environments used in production are often high-fidelity and complex, require significant domain knowledge, and as a result have high R&D costs. Minimal, lightweight simulation environments can help researchers and engineers evaluate the viability of new AI technologies for behaviour discovery in a more agile and potentially more cost-effective manner. In this paper we describe the motivation for the development of ACE0. We provide a technical overview of the system architecture, describe a case study of behaviour discovery in the aerospace domain, and provide a qualitative evaluation of the system. The evaluation includes a brief description of collaborative research projects with academic partners exploring different AI behaviour-discovery methods.
Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, a growing number of applications in power systems collect data from non-Euclidean domains and represent them as graph-structured data with high-dimensional features and interdependencies among nodes. The complexity of graph-structured data poses significant challenges for existing deep neural networks defined in Euclidean domains. Recently, many studies extending deep neural networks to graph-structured data in power systems have emerged. In this paper, we present a comprehensive overview of graph neural networks (GNNs) in power systems. Specifically, several classical paradigms of GNN architectures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and research trends concerning applications of GNNs in power systems are discussed.
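To make the basic building block concrete, the sketch below implements a standard graph convolutional layer (in its well-known Kipf-and-Welling form, used here purely as an illustration rather than as any specific power-system model from the reviewed literature): node features are propagated through a symmetrically normalized adjacency matrix with self-loops.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolutional layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    A: (n, n) adjacency matrix, H: (n, f_in) node features, W: (f_in, f_out)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Tiny example: 4 buses in a line topology, 3 input features, 2 outputs.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)
```

In a power-system application, `A` might encode the grid topology and `H` per-bus measurements, with stacked layers feeding a task head such as a fault classifier or power-flow regressor.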