The integration of technology into exercise regimens has emerged as a strategy to enhance normal human capabilities and to restore human motor function after injury or illness by improving motor learning and retention. Much research has focused on how active devices, whether confined to a lab or built into a wearable format, can apply forces at set times and under set conditions to optimize the learning process. However, the focus on active force production often confines such devices to simple movements or interventions. In this paper, we therefore investigate how passive device behaviors on their own can contribute to motor learning. Our approach uses a wearable resistance (WR) device, outfitted with elastic bands, to apply a force field that changes in response to a person's movements during exercise. We develop a method to measure the forces produced by the device without impeding its function, and we characterize the device's force-generation abilities. We then present a study assessing the impact of the WR device on motor learning of proper squat form compared to visual feedback or no feedback. Biometrics such as knee and hip angles were used to monitor and assess subject performance. Our findings indicate that the force fields produced while training with the WR device can improve performance in full-body exercises similarly to a more direct visual feedback mechanism, though the improvement is not consistent across all performance metrics. Through our research, we contribute insights into the application of passive wearable resistance technology in practical exercise settings.
The integration of Artificial Intelligence (AI) into automation systems has the potential to enhance efficiency and to address currently unsolved technical challenges. However, industry-wide adoption of AI is hindered by the lack of standardized documentation for the complex compositions of automation systems, AI software, production hardware, and their interdependencies. This paper proposes a formal model using standards and ontologies to provide clear and structured documentation of AI applications in automation systems. The proposed information model for artificial intelligence in automation systems (AIAS) uses ontology design patterns to map and link the various aspects of automation systems and AI software. Validated through a practical example, the model demonstrates its effectiveness in improving documentation practices and aiding the sustainable implementation of AI in industrial settings.
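As a rough illustration of the kind of documentation such an ontology-based model enables, the sketch below records an AI component, a piece of production hardware, and their interdependency as RDF triples using rdflib. The vocabulary here (namespace, class, and property names) is hypothetical; the actual AIAS model defines its own terms via ontology design patterns.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace; the actual AIAS model defines its own vocabulary.
AIAS = Namespace("http://example.org/aias#")
g = Graph()
g.bind("aias", AIAS)

# Document an AI model, the automation asset it runs on, and their link.
g.add((AIAS.QualityModel, RDF.type, AIAS.AISoftware))
g.add((AIAS.QualityModel, AIAS.trainedOn, Literal("line3_sensor_data_2023")))
g.add((AIAS.WeldingCell, RDF.type, AIAS.ProductionHardware))
g.add((AIAS.QualityModel, AIAS.deployedOn, AIAS.WeldingCell))

print(g.serialize(format="turtle"))  # structured, machine-readable documentation
```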
Many optimization problems in engineering and industrial design can be formulated as optimization problems with highly nonlinear objectives subject to multiple complex constraints. Solving such problems requires sophisticated algorithms and optimization techniques. A major trend in recent years is the use of nature-inspired metaheuristic algorithms (NIMAs). Despite their popularity, there remain challenging issues and open problems to be resolved. Two main issues concern current NIMAs: there are over 540 such algorithms in the literature, and there is no unified framework for understanding their search mechanisms. Therefore, this paper analyses some similarities and differences among different algorithms and then presents a generalized evolutionary metaheuristic (GEM) in an attempt to unify some of the existing algorithms. After a brief discussion of some insights into nature-inspired algorithms and some open problems, we propose a generalized evolutionary metaheuristic algorithm that unifies more than 20 different algorithms so as to understand their main steps and search mechanisms. We then test the unified GEM using 15 test benchmarks to validate its performance. Finally, further research topics are briefly discussed.
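To make the idea of a unifying template concrete, here is a minimal Python sketch of a generic population-based evolutionary metaheuristic: each candidate is attracted toward the current best solution (exploitation) and randomly perturbed (exploration), followed by greedy selection. The update rule and parameter names are illustrative assumptions, not the paper's actual GEM formulation, but many nature-inspired algorithms differ mainly in how these two terms are weighted and randomized.

```python
import random

def generic_metaheuristic(objective, dim, bounds, pop_size=30, iters=200,
                          alpha=0.5, beta=0.5, sigma=0.1):
    """Sketch of a generalized population-based metaheuristic loop."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iters):
        new_pop = []
        for x in pop:
            cand = [xi + alpha * random.random() * (bi - xi)   # attraction to best
                    + beta * random.gauss(0.0, sigma)          # random exploration
                    for xi, bi in zip(x, best)]
            cand = [min(max(ci, lo), hi) for ci in cand]       # keep within bounds
            new_pop.append(cand if objective(cand) < objective(x) else x)  # greedy selection
        pop = new_pop
        best = min(pop + [best], key=objective)
    return best

# Example: minimize the sphere function on [-5, 5]^10
sphere = lambda x: sum(v * v for v in x)
print(sphere(generic_metaheuristic(sphere, dim=10, bounds=(-5.0, 5.0))))
```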
Industrial processes generate vast amounts of time series data, yet extracting meaningful relationships and insights remains challenging. This paper introduces a framework for automated knowledge graph learning from time series data, specifically tailored for industrial applications. Our framework addresses the complexities inherent in industrial datasets, transforming them into knowledge graphs that improve decision-making, process optimization, and knowledge discovery. Additionally, it employs Granger causality to identify key attributes that can inform the design of predictive models. To illustrate the practical utility of our approach, we also present a motivating use case demonstrating the benefits of our framework in a real-world industrial scenario. Further, we demonstrate how the automated conversion of time series data into knowledge graphs can identify causal influences or dependencies between important process parameters.
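As a sketch of how pairwise Granger tests could propose edges for such a knowledge graph, the snippet below uses statsmodels' grangercausalitytests and adds a directed "influences" edge whenever the F-test p-value at some lag is significant. The edge-construction logic and significance threshold are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def causal_edges(series, names, maxlag=4, alpha=0.05):
    """Return directed edges (cause, 'influences', effect) from pairwise Granger tests.

    `series` is an (n_samples, n_signals) array; an edge is proposed when the
    ssr F-test p-value at any lag up to `maxlag` falls below `alpha`.
    """
    edges = []
    n = series.shape[1]
    for i in range(n):            # candidate effect
        for j in range(n):        # candidate cause
            if i == j:
                continue
            # statsmodels tests whether column 1 Granger-causes column 0
            res = grangercausalitytests(series[:, [i, j]], maxlag=maxlag, verbose=False)
            pvals = [res[lag][0]["ssr_ftest"][1] for lag in res]
            if min(pvals) < alpha:
                edges.append((names[j], "influences", names[i]))
    return edges

# Toy example: temperature drives pressure with a one-step lag
rng = np.random.default_rng(0)
temp = rng.normal(size=300).cumsum()
pressure = 0.8 * np.roll(temp, 1) + rng.normal(scale=0.5, size=300)
print(causal_edges(np.column_stack([temp, pressure]), ["temperature", "pressure"]))
```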
With the development of speech synthesis, recent research has focused on challenging tasks such as speaker generation and emotion intensity control. Attribute interpolation is a common approach to these tasks. However, most previous methods for attribute interpolation require specific modules or training procedures. We propose an attribute interpolation method for speech synthesis based on model merging. Model merging creates new parameters simply by averaging the parameters of base models; the merged model can then generate output with features intermediate between those of the base models. Because it uses only existing trained base models, the method is easily applicable without specific modules or training procedures. We merged two text-to-speech models to achieve attribute interpolation and evaluated its performance on speaker generation and emotion intensity control tasks. Our proposed method achieved smooth attribute interpolation while preserving the linguistic content in both tasks.
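The core operation is simple enough to sketch directly: given two checkpoints of the same architecture, interpolate every parameter tensor. A minimal PyTorch sketch, with hypothetical checkpoint paths, might look as follows.

```python
import torch

def merge_models(state_a, state_b, weight=0.5):
    """Interpolate two compatible model checkpoints parameter-by-parameter.

    weight=0.0 reproduces model A and weight=1.0 model B; intermediate
    values are intended to yield intermediate attributes (e.g. speaker
    identity or emotion intensity) in the merged TTS model.
    """
    assert state_a.keys() == state_b.keys(), "base models must share an architecture"
    return {k: (1.0 - weight) * state_a[k] + weight * state_b[k] for k in state_a}

# Usage sketch (hypothetical checkpoint paths):
# model.load_state_dict(merge_models(torch.load("tts_speaker_a.pt"),
#                                    torch.load("tts_speaker_b.pt"), weight=0.3))
```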
Quantum relative entropy programs are convex optimization problems which minimize a linear functional over an affine section of the epigraph of the quantum relative entropy function. Recently, the self-concordance of a natural barrier function was proved for this set. This has opened up the opportunity to use interior-point methods for nonsymmetric cone programs to solve these optimization problems. In this paper, we show how common structures arising from applications in quantum information theory can be exploited to improve the efficiency of solving quantum relative entropy programs using interior-point methods. First, we show that the natural barrier function for the epigraph of the quantum relative entropy composed with positive linear operators is optimally self-concordant, even when these linear operators map to singular matrices. Second, we show how we can exploit a catalogue of common structures in these linear operators to compute the inverse Hessian products of the barrier function more efficiently. This step is typically the bottleneck when solving quantum relative entropy programs using interior-point methods, and therefore improving the efficiency of this step can significantly improve the computational performance of the algorithm. We demonstrate how these methods can be applied to important applications in quantum information theory, including quantum key distribution, quantum rate-distortion, quantum channel capacities, and estimating the ground state energy of Hamiltonians. Our numerical results show that these techniques improve computation times by up to several orders of magnitude, and allow previously intractable problems to be solved.
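For background, the quantum relative entropy, the epigraph set in question, and the natural barrier commonly associated with it (the one whose self-concordance the abstract refers to) can be written as follows; the composed variant with positive linear maps is the structure this paper exploits.

```latex
% Quantum relative entropy of positive definite matrices X, Y:
S(X \,\|\, Y) = \operatorname{tr}\!\left[ X (\log X - \log Y) \right]

% Epigraph set over which quantum relative entropy programs optimize:
\mathcal{K} = \left\{ (t, X, Y) : t \ge S(X \,\|\, Y),\; X \succ 0,\; Y \succ 0 \right\}

% Natural logarithmic barrier for this set:
F(t, X, Y) = -\log\!\big( t - S(X \,\|\, Y) \big) - \log\det X - \log\det Y

% Composed variant, for positive linear maps G and H acting on X:
t \ge S\big( \mathcal{G}(X) \,\|\, \mathcal{H}(X) \big)
```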
The adaptive processing of structured data is a long-standing research topic in machine learning that investigates how to automatically learn a mapping from a structured input to outputs of various natures. Recently, there has been increasing interest in the adaptive processing of graphs, which has led to the development of different neural network-based methodologies. In this thesis, we take a different route and develop a Bayesian Deep Learning framework for graph learning. The dissertation begins with a review of the principles on which most methods in the field are built, followed by a study of reproducibility issues in graph classification. We then bridge the basic ideas of deep learning for graphs with the Bayesian world by building our deep architectures in an incremental fashion. This framework allows us to consider graphs with discrete and continuous edge features, producing unsupervised embeddings rich enough to reach state-of-the-art performance on several classification tasks. Our approach is also amenable to a Bayesian nonparametric extension that automates the choice of almost all of the model's hyper-parameters. Two real-world applications demonstrate the efficacy of deep learning for graphs. The first concerns the prediction of information-theoretic quantities for molecular simulations with supervised neural models. The second exploits our Bayesian models to solve a malware-classification task while remaining robust to intra-procedural code obfuscation techniques. We conclude the dissertation with an attempt to blend the best of the neural and Bayesian worlds. The resulting hybrid model can predict multimodal distributions conditioned on input graphs, and consequently models stochasticity and uncertainty better than most prior work. Overall, we aim to provide a Bayesian perspective on the multifaceted research field of deep learning for graphs.
Embedding entities and relations into a continuous multi-dimensional vector space has become the dominant method for knowledge graph embedding in representation learning. However, most existing models fail to represent hierarchical knowledge, such as the similarities and dissimilarities of entities within one domain. We propose to learn domain representations on top of existing knowledge graph embedding models, such that entities with similar attributes are organized into the same domain. Such hierarchical domain knowledge provides further evidence for link prediction. Experimental results show that domain embeddings yield a significant improvement over recent state-of-the-art baseline knowledge graph embedding models.
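To illustrate the general idea of layering domain representations over a base embedding model, the following PyTorch sketch augments a TransE-style scorer with a domain vector shared by the entities of each group. The scoring function, the 0.1 weighting, and the entity-to-domain mapping are illustrative assumptions rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class DomainTransE(nn.Module):
    """Illustrative TransE-style scorer augmented with domain embeddings.

    Each entity is tied to a domain vector shared by entities with similar
    attributes; the score combines the usual translation term with a
    domain-compatibility term. A sketch of the idea, not the paper's model.
    """
    def __init__(self, n_ent, n_rel, n_dom, ent2dom, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.dom = nn.Embedding(n_dom, dim)
        self.register_buffer("ent2dom", torch.as_tensor(ent2dom))

    def score(self, h, r, t):
        trans = self.ent(h) + self.rel(r) - self.ent(t)              # TransE term
        dom = self.dom(self.ent2dom[h]) - self.dom(self.ent2dom[t])  # domain term
        return -trans.norm(p=1, dim=-1) - 0.1 * dom.norm(p=1, dim=-1)

# Usage sketch: 5 entities grouped into 2 domains
m = DomainTransE(n_ent=5, n_rel=2, n_dom=2, ent2dom=[0, 0, 1, 1, 1], dim=16)
print(m.score(torch.tensor([0]), torch.tensor([1]), torch.tensor([3])))
```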
Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application on embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very few convolutional parameters while maintaining accuracy. Second, we propose reverse attention to guide this side-output residual learning in a top-down manner. By erasing the currently predicted salient regions from the side-output features, the network eventually explores the missing object parts and details, yielding high-resolution and accurate results. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in simplicity, efficiency (45 FPS), and model size (81 MB).
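The reverse attention step described above admits a compact sketch: the side-output features are reweighted by the complement of the sigmoid-activated coarse prediction, so already-detected salient regions are suppressed and the residual branch must account for what is missing. Tensor shapes below are illustrative.

```python
import torch

def reverse_attention(side_feat, coarse_pred):
    """Weight side-output features by the complement of the current prediction.

    Regions already predicted as salient receive weights near 0, pushing the
    residual branch to recover missing object parts and fine details.
    `coarse_pred` is a logit map upsampled to `side_feat`'s spatial size.
    """
    attn = 1.0 - torch.sigmoid(coarse_pred)   # erase confident salient regions
    return side_feat * attn                   # broadcast over feature channels

# Toy shapes: batch of 2, 64-channel side features, 1-channel coarse prediction
feats = torch.randn(2, 64, 56, 56)
pred = torch.randn(2, 1, 56, 56)
print(reverse_attention(feats, pred).shape)   # torch.Size([2, 64, 56, 56])
```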
As a new classification platform, deep learning has recently received increasing attention from researchers and has been successfully applied to many domains. In some domains, such as bioinformatics and robotics, it is very difficult to construct a large-scale, well-annotated dataset due to the expense of data acquisition and costly annotation, which limits development. Transfer learning relaxes the hypothesis that the training data must be independent and identically distributed (i.i.d.) with the test data, which motivates the use of transfer learning to address the problem of insufficient training data. This survey reviews current research on transfer learning using deep neural networks, along with its applications. We define deep transfer learning, categorize it, and review recent research works based on the techniques used.
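As a concrete instance of one common category of deep transfer learning (network-based transfer, i.e., reusing a pretrained feature extractor and retraining only a task-specific head), here is a minimal PyTorch/torchvision sketch; the 10-class head is an arbitrary example.

```python
import torch.nn as nn
from torchvision import models

# Network-based deep transfer learning: reuse a pretrained feature extractor
# and retrain only the task-specific head on the small target dataset.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                               # freeze transferred layers
backbone.fc = nn.Linear(backbone.fc.in_features, 10)      # new head: 10 target classes
# During fine-tuning, only backbone.fc's parameters receive gradients.
```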
Deep learning has yielded state-of-the-art performance on many natural language processing tasks, including named entity recognition (NER). However, it typically requires large amounts of labeled data. In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning. While active learning is sample-efficient, it can be computationally expensive because it requires iterative retraining. To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model, consisting of convolutional character and word encoders and a long short-term memory (LSTM) tag decoder. The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than the best-performing models. We carry out incremental active learning during the training process and are able to nearly match state-of-the-art performance with just 25\% of the original training data.
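A skeleton of such a CNN-CNN-LSTM tagger is sketched below in PyTorch: a convolutional character encoder max-pooled into per-word features, a convolutional word-level encoder, and an LSTM tag decoder. Dimensions, kernel sizes, and the simple linear output layer are illustrative assumptions, not the paper's reported hyper-parameters or exact decoding scheme.

```python
import torch
import torch.nn as nn

class CnnCnnLstmTagger(nn.Module):
    """Skeleton of a CNN-CNN-LSTM sequence tagger (illustrative dimensions)."""
    def __init__(self, n_chars, n_words, n_tags, c_dim=25, w_dim=100, h_dim=200):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, c_dim)
        self.char_cnn = nn.Conv1d(c_dim, c_dim, kernel_size=3, padding=1)
        self.word_emb = nn.Embedding(n_words, w_dim)
        self.word_cnn = nn.Conv1d(w_dim + c_dim, h_dim, kernel_size=3, padding=1)
        self.decoder = nn.LSTM(h_dim, h_dim, batch_first=True)
        self.out = nn.Linear(h_dim, n_tags)

    def forward(self, chars, words):
        # chars: (batch, seq, chars_per_word); words: (batch, seq)
        b, s, c = chars.shape
        ch = self.char_emb(chars).view(b * s, c, -1).transpose(1, 2)
        ch = self.char_cnn(ch).max(dim=2).values.view(b, s, -1)   # per-word char feature
        x = torch.cat([self.word_emb(words), ch], dim=-1).transpose(1, 2)
        x = self.word_cnn(x).transpose(1, 2).contiguous()         # word-level encoder
        h, _ = self.decoder(x)                                    # LSTM tag decoder
        return self.out(h)                                        # tag logits per token

# Toy forward pass: batch of 2 sentences, 7 words, 12 characters per word
model = CnnCnnLstmTagger(n_chars=100, n_words=5000, n_tags=9)
logits = model(torch.randint(100, (2, 7, 12)), torch.randint(5000, (2, 7)))
print(logits.shape)   # torch.Size([2, 7, 9])
```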