Tinnitus is a prevalent hearing disorder that can be caused by various factors such as age, hearing loss, exposure to loud noises, ear infections or tumors, certain medications, head or neck injuries, and psychological conditions like anxiety and depression. While not every patient requires medical attention, about 20% of sufferers seek clinical intervention. Early diagnosis is crucial for effective treatment, and recent developments in tinnitus detection aim to support it. Over the past few years, there has been notable growth in the use of electroencephalography (EEG) to study variations in oscillatory brain activity related to tinnitus. However, results vary greatly across studies, leading to conflicting conclusions. Currently, clinicians rely solely on their expertise to identify individuals with tinnitus. Researchers in this field have incorporated various data modalities and machine-learning techniques to aid clinicians in identifying tinnitus characteristics and classifying people with tinnitus. The purpose of this article is to review studies that use machine learning (ML) to identify or predict tinnitus in patients from EEG signals. We evaluated 11 articles published between 2016 and 2023 using the systematic literature review (SLR) method. This article provides concise summaries of all the reviewed research and compares the most significant aspects of each study. Additionally, we performed statistical analyses to gain a deeper understanding of the most recent research in this area. Almost all of the reviewed articles followed a five-step procedure for tinnitus detection. Finally, we discuss the open issues and challenges of this approach to tinnitus recognition or prediction and suggest future directions for research.
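The five-step procedure common to the reviewed studies can be illustrated with a minimal sketch, assuming band-power features and an SVM classifier; the sampling rate, channel count, frequency bands, and synthetic data below are hypothetical placeholders, not taken from any reviewed article.

```python
# Illustrative EEG-based tinnitus classification pipeline:
# acquisition -> preprocessing -> feature extraction -> classification -> evaluation.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(epochs):
    """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_bands)."""
    feats = []
    for epoch in epochs:
        f, psd = welch(epoch, fs=FS, nperseg=FS * 2, axis=-1)
        # Average power over channels within each frequency band.
        feats.append([psd[:, (f >= lo) & (f < hi)].mean()
                      for lo, hi in BANDS.values()])
    return np.array(feats)

# Synthetic stand-in data: 40 subjects, 19 channels, 8 s of EEG each.
rng = np.random.default_rng(0)
X = band_power_features(rng.standard_normal((40, 19, 8 * FS)))
y = rng.integers(0, 2, size=40)  # tinnitus vs. control labels (dummy)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```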
We present a result according to which certain functions of covariance matrices are maximized at scalar multiples of the identity matrix. This is used to show that experimental designs that are optimal under an assumption of independent, homoscedastic responses can be minimax robust in broad classes of alternative covariance structures. In particular, it can justify the common practice of disregarding possible dependence, or heteroscedasticity, at the design stage of an experiment.
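The flavour of such results can be illustrated with a classical instance (not the paper's exact theorem): among covariance matrices of fixed trace, the determinant is maximized at a scalar multiple of the identity, by the AM–GM inequality on eigenvalues.

```latex
\[
\text{If } \Sigma \succ 0 \text{ and } \operatorname{tr}\Sigma = c, \text{ then }
\det \Sigma = \prod_{i=1}^{n} \lambda_i
\le \Big(\tfrac{1}{n}\sum_{i=1}^{n} \lambda_i\Big)^{\!n}
= \Big(\tfrac{c}{n}\Big)^{\!n} = \det\!\Big(\tfrac{c}{n}\, I_n\Big),
\]
with equality iff $\lambda_1 = \cdots = \lambda_n$, i.e.\ $\Sigma = (c/n)\, I_n$.
```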
With the rapid progress in virtual reality (VR) technology, the scope of VR applications has greatly expanded across various domains. However, the superiority of VR training over traditional methods and its impact on learning efficacy are still uncertain. To investigate whether VR training is more effective than traditional methods, we designed virtual training systems for mechanical assembly on both VR and desktop platforms, subsequently conducting pre-test and post-test experiments. A cohort of 53 students, all enrolled in an engineering drawing course and with no prior differences in relevant knowledge, was randomly divided into three groups: physical training, desktop virtual training, and immersive VR training. Our investigation utilized analysis of covariance (ANCOVA) to examine the differences in post-test scores among the three groups while controlling for pre-test scores. The group that received VR training showed the highest scores on the post-test. Another facet of our study examined the sense of presence elicited by the virtual system, for which we developed a specialized scale tailored to our research objectives. Our findings indicate that VR training can enhance the sense of presence, particularly in terms of sensory factors and realism factors. Moreover, correlation analysis uncovers connections between the various dimensions of presence. This study confirms that VR training can improve learning efficacy and presence in the context of mechanical assembly, surpassing traditional training methods. Furthermore, it provides empirical evidence supporting the integration of VR technology in higher education and engineering training, and serves as a reference for the practical application of VR technology in different fields.
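A minimal sketch of the reported ANCOVA, assuming a data frame with hypothetical columns `pre` (pre-test score), `post` (post-test score), and `group` in {"physical", "desktop", "vr"}; the synthetic scores are placeholders for the experimental data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 53
df = pd.DataFrame({
    "group": rng.choice(["physical", "desktop", "vr"], size=n),
    "pre": rng.normal(60, 10, size=n),
})
# Dummy post-test scores; real values would come from the experiment.
df["post"] = 0.5 * df["pre"] + rng.normal(30, 5, size=n)

# ANCOVA: post-test ~ group, controlling for pre-test as a covariate.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-test for the group effect
print(model.params)                     # covariate-adjusted group contrasts
```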
Generating physical movement behaviours from their symbolic description is a long-standing challenge in artificial intelligence (AI) and robotics, requiring insights into numerical optimization methods as well as into formalizations from symbolic AI and reasoning. In this paper, a novel approach to finding a reward function from a symbolic description is proposed. The intended system behaviour is modelled as a hybrid automaton, which reduces the system state space to allow more efficient reinforcement learning. The approach is applied to bipedal walking by modelling the walking robot as a hybrid automaton over state-space orthants, and is used with the compass walker to derive a reward that incentivizes following the hybrid automaton cycle. As a result, training times of reinforcement learning controllers are reduced while the final walking speed is increased. The approach can serve as a blueprint for generating reward functions from symbolic AI and reasoning.
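A hedged sketch of an automaton-derived reward of this kind: the discrete mode is the orthant (sign pattern) of selected state variables, the intended gait is a fixed cycle of orthants, and the reward pays for advancing along that cycle. The cycle and state layout below are hypothetical placeholders, not the paper's exact construction.

```python
import numpy as np

CYCLE = [(1, 1), (1, -1), (-1, -1), (-1, 1)]  # assumed orthant cycle

def orthant(state):
    """Sign pattern of the two state variables defining the discrete mode."""
    return tuple(np.sign(state[:2]).astype(int))

def automaton_reward(prev_state, state):
    """+1 for moving to the next orthant in the cycle, -1 for leaving it."""
    prev_o, cur_o = orthant(prev_state), orthant(state)
    if cur_o == prev_o:
        return 0.0  # staying inside the current mode is neutral
    try:
        i = CYCLE.index(prev_o)
    except ValueError:
        return -1.0  # previous mode was already off-cycle: penalize
    return 1.0 if cur_o == CYCLE[(i + 1) % len(CYCLE)] else -1.0
```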
Nestedness is a property of bipartite complex networks that has been shown to characterize the peculiar structure of biological and economic networks. In a nested network, a node of low degree has its neighborhood included in the neighborhood of nodes of higher degree. The emergence of nestedness is commonly attributed to two different mechanisms: i) mutualistic behavior of nodes, where nodes of each class have an advantage in associating with each other, as in plant-pollination or seed-dispersal networks; ii) geographic distribution of species, captured in a so-called biogeographic network where species form one class and geographical areas the other. Nestedness has useful applications to real-world networks, such as node ranking and link prediction. Motivated by analogies with biological networks, we study the nestedness property of the public Internet peering ecosystem, an important part of the Internet where autonomous systems (ASes) exchange traffic at Internet eXchange Points (IXPs). We propose two representations of this ecosystem using a bipartite graph derived from PeeringDB data. The first graph captures the AS [is member of] IXP relationship, which is reminiscent of mutualistic networks. The second graph groups IXPs into countries, and we define the AS [is present at] country relationship to mimic a biogeographic network. We statistically confirm the nestedness property of both graphs, which has never been observed before in Internet topology data. From this unique observation, we show that we can use node metrics to extract new key ASes and efficiently predict newly created links over a two-year period.
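A minimal sketch of a standard nestedness metric (NODF, nestedness based on overlap and decreasing fill) that such a statistical confirmation could build on; the tiny binary biadjacency matrix below is a made-up placeholder, not PeeringDB data.

```python
import numpy as np
from itertools import combinations

def nodf(B):
    """NODF in [0, 100] for a binary biadjacency matrix B (rows x cols)."""
    def paired(M):
        total = 0.0
        deg = M.sum(axis=1)
        for i, j in combinations(range(M.shape[0]), 2):
            hi, lo = (i, j) if deg[i] > deg[j] else (j, i)
            if deg[hi] == deg[lo] or deg[lo] == 0:
                continue  # equal fills contribute 0 by definition
            overlap = np.logical_and(M[hi], M[lo]).sum()
            total += 100.0 * overlap / deg[lo]
        return total
    n, m = B.shape
    pairs = n * (n - 1) / 2 + m * (m - 1) / 2
    return (paired(B) + paired(B.T)) / pairs  # row pairs + column pairs

B = np.array([[1, 1, 1, 1],
              [1, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
print(nodf(B))  # perfectly nested example -> 100.0
```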
While the problem of computing the genus of a knot is now fairly well understood, no algorithm is known for its four-dimensional variants, both in the smooth and in the topological locally flat category. In this article, we investigate a class of knots and links called Hopf arborescent links, which are obtained as the boundaries of some iterated plumbings of Hopf bands. We show that for such links, computing the genus defects, which measure how much the four-dimensional genera differ from the classical genus, is decidable. Our proof is non-constructive and is obtained by proving that the Seifert surfaces of Hopf arborescent links, under a minor relation defined by containment, form a well-quasi-order.
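For readers unfamiliar with the key notion, the standard reasoning that turns a well-quasi-order into a non-constructive decidability result can be summarized as follows.

```latex
A quasi-order $(Q, \preceq)$ is a \emph{well-quasi-order} if every infinite
sequence $q_1, q_2, \dots$ contains indices $i < j$ with $q_i \preceq q_j$;
equivalently, $Q$ has no infinite antichain and no infinite strictly
descending chain. Consequently, any minor-closed property $P \subseteq Q$ is
characterized by the finitely many $\preceq$-minimal elements of
$Q \setminus P$, so membership in $P$ is decidable whenever the minor
relation itself is, even if this finite obstruction set is not explicitly
known --- hence the non-constructive character of such proofs.
```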
Medical procedures are an essential part of healthcare delivery, and the acquisition of procedural skills is a critical component of medical education. Unfortunately, procedural skill is not evenly distributed among medical providers. Skills may vary within departments or institutions, and across geographic regions, depending on the provider's training and ongoing experience. We present a mixed reality real-time communication system to increase access to procedural skill training and to improve remote emergency assistance. Our system allows a remote expert to guide a local operator through a medical procedure. RGBD cameras capture a volumetric view of the local scene including the patient, the operator, and the medical equipment. The volumetric capture is augmented onto the remote expert's view to allow the expert to spatially guide the local operator using visual and verbal instructions. We evaluated our mixed reality communication system in a study in which experts taught the ultrasound-guided placement of a central venous catheter (CVC) to students in a simulation setting. The study compares state-of-the-art video communication against our system. The results indicate that, compared to video teleconference-based training, our system enhances visual communication and offers new possibilities for it.
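The core step of such volumetric capture, back-projecting an RGBD frame into a colored point cloud via the pinhole camera model, can be sketched as follows; the camera intrinsics are hypothetical placeholders, not those of the actual capture rig.

```python
import numpy as np

FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0  # assumed camera intrinsics

def depth_to_points(depth, rgb):
    """depth: (H, W) metres, rgb: (H, W, 3) -> (N, 6) array of XYZRGB."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX  # pinhole back-projection
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3)
    valid = pts[:, 2] > 0  # drop pixels with no depth reading
    return np.hstack([pts[valid], cols[valid]])

# Example with a synthetic frame; a real system would stream sensor frames.
cloud = depth_to_points(np.full((480, 640), 1.5), np.zeros((480, 640, 3)))
print(cloud.shape)  # (307200, 6)
```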
Understanding causality helps to structure interventions aimed at specific goals and enables predictions under interventions. With the growing importance of learning causal relationships, causal discovery has transitioned from traditional methods that infer potential causal structures from observational data to deep learning-based pattern recognition. The rapid accumulation of massive data has promoted the emergence of causal search methods with excellent scalability. Existing surveys of causal discovery methods mainly focus on traditional approaches based on constraints, scores, and functional causal models (FCMs); they lack a systematic organization and elaboration of deep learning-based methods, and they rarely consider or explore causal discovery from the perspective of variable paradigms. Therefore, we divide possible causal discovery tasks into three types according to the variable paradigm and define each of the three tasks; for each task, we define and instantiate the relevant datasets and the final causal model to be constructed, and we then review the main existing causal discovery methods for each task. Finally, we propose roadmaps from several perspectives for the current research gaps in the field of causal discovery and point out future research directions.
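As a concrete instance of the traditional constraint-based family mentioned above, here is a toy PC-style skeleton search using Fisher-z (partial) correlation tests with conditioning sets of size at most one; the data are synthetic, and the full PC algorithm would grow the conditioning sets further.

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm

def independent(corr, i, j, k, n, alpha=0.05):
    """Fisher-z test of X_i independent of X_j given X_k (k may be None)."""
    if k is None:
        r = corr[i, j]
    else:  # first-order partial correlation
        r = ((corr[i, j] - corr[i, k] * corr[j, k])
             / np.sqrt((1 - corr[i, k] ** 2) * (1 - corr[j, k] ** 2)))
    s = 0 if k is None else 1
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - s - 3)
    return 2 * (1 - norm.cdf(abs(z))) > alpha  # True -> cannot reject independence

def pc_skeleton(X, alpha=0.05):
    n, d = X.shape
    corr = np.corrcoef(X, rowvar=False)
    edges = {(i, j) for i, j in combinations(range(d), 2)}
    for i, j in list(edges):
        conds = [None] + [k for k in range(d) if k not in (i, j)]
        if any(independent(corr, i, j, k, n, alpha) for k in conds):
            edges.discard((i, j))  # a separating set was found
    return edges

rng = np.random.default_rng(2)
x0 = rng.normal(size=2000)
x1 = x0 + 0.1 * rng.normal(size=2000)  # x0 -> x1
x2 = x1 + 0.1 * rng.normal(size=2000)  # x1 -> x2
print(pc_skeleton(np.column_stack([x0, x1, x2])))  # expect {(0, 1), (1, 2)}
```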
We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, our SubgraphX explains its predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that our SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
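The Shapley-value scoring idea can be sketched as follows, assuming a simplified setting where candidate node groups act as players and `model_score` stands in for a trained GNN's probability of the predicted class on a masked graph; this illustrates the general permutation-sampling estimator, not the authors' exact approximation scheme.

```python
import random

def shapley_estimate(target, players, model_score, n_samples=200, seed=0):
    """Monte Carlo Shapley value of node group `target` among `players`."""
    rng = random.Random(seed)
    others = [p for p in players if p != target]
    total = 0.0
    for _ in range(n_samples):
        # A random permutation position for the target induces a coalition.
        rng.shuffle(others)
        cut = rng.randrange(len(others) + 1)
        coalition = frozenset().union(*others[:cut])
        # Marginal contribution of the target subgraph to the model's score.
        total += model_score(coalition | target) - model_score(coalition)
    return total / n_samples

# Dummy scorer: score grows with how much of an "important" set is kept.
important = {1, 2, 3}
score = lambda nodes: len(nodes & important) / len(important)
groups = [frozenset({1, 2}), frozenset({3}), frozenset({7, 8})]
print({tuple(g): round(shapley_estimate(g, groups, score), 2) for g in groups})
```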
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
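Such adversarial domain classifiers are commonly implemented with a gradient reversal layer (GRL); below is a minimal PyTorch sketch of an image-level classifier head of this kind, where the small head architecture is an illustrative placeholder rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # identity on the forward pass
    @staticmethod
    def backward(ctx, grad_output):
        # Negated (scaled) gradient on the backward pass.
        return -ctx.lambd * grad_output, None

class ImageLevelDomainClassifier(nn.Module):
    """Predicts source vs. target domain from backbone feature maps."""
    def __init__(self, in_channels=512, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 256, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 1))
    def forward(self, feat):
        return self.head(GradReverse.apply(feat, self.lambd))

# Minimizing BCE trains the classifier to tell domains apart, while the
# reversed gradient pushes the backbone toward domain-invariant features.
feat = torch.randn(2, 512, 32, 32, requires_grad=True)
logit = ImageLevelDomainClassifier()(feat)
nn.functional.binary_cross_entropy_with_logits(
    logit, torch.tensor([[0.0], [1.0]])).backward()
print(feat.grad.shape)
```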
While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
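A minimal sketch of such a deep-feature perceptual distance, assuming unit-normalized VGG-16 activations at one layer per block and uniform (unlearned) weights; this illustrates the idea rather than the paper's calibrated metric.

```python
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
LAYERS = {3, 8, 15, 22, 29}  # last ReLU of each block (assumed layer choice)

def deep_features(x):
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            # Unit-normalize in the channel dimension before comparing.
            feats.append(x / (x.norm(dim=1, keepdim=True) + 1e-10))
    return feats

@torch.no_grad()
def perceptual_distance(img0, img1):
    """img0, img1: (N, 3, H, W) in [-1, 1] -> per-pair distance."""
    return sum((f0 - f1).pow(2).mean(dim=(1, 2, 3))
               for f0, f1 in zip(deep_features(img0), deep_features(img1)))

x0 = torch.rand(1, 3, 64, 64) * 2 - 1
x1 = torch.rand(1, 3, 64, 64) * 2 - 1
print(perceptual_distance(x0, x1))
```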