Machine learning surrogate emulators are needed in engineering design and optimization tasks to rapidly emulate computationally expensive physics-based models. In micromechanics problems, the local full-field response variables are desired at microstructural length scales. While there has been a great deal of work on establishing architectures for these tasks, there has been relatively little work on establishing microstructural experimental design strategies. This work demonstrates that intelligent selection of microstructural volume elements for subsequent physics simulations enables the establishment of more accurate surrogate models. Two key challenges stand in the way of a suitable framework: (1) quantification of microstructural features and (2) establishment of a criterion that encourages construction of a diverse training data set. Three feature extraction strategies are used, as well as three design criteria. A novel contrastive feature extraction approach is established for automated, self-supervised extraction of microstructural summary statistics. Results indicate that, for the problem considered, up to an 8\% improvement in surrogate performance may be achieved using the proposed design and training strategy. Trends indicate this approach may be even more beneficial when scaled to larger problems. These results demonstrate that the selection of an efficient experimental design is an important consideration when establishing machine-learning-based surrogate models.
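The design criterion component can be made concrete with a small sketch. The following is a minimal illustration, not the paper's actual criterion: greedy max-min (farthest-point) selection of volume elements in a feature space, with random vectors standing in for the contrastive summary statistics.

```python
# Minimal sketch of a diversity-encouraging design criterion: greedy
# max-min (farthest-point) selection of microstructure volume elements
# in a learned feature space. Features are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 32))    # hypothetical: 500 candidate VEs, 32-D features

def max_min_design(X, n_select):
    """Greedily pick points that maximize the minimum distance to the design."""
    chosen = [0]                          # seed with an arbitrary candidate
    d = np.linalg.norm(X - X[0], axis=1)  # distance of every point to the chosen set
    for _ in range(n_select - 1):
        nxt = int(np.argmax(d))           # farthest point from the current design
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

design = max_min_design(features, n_select=50)
print(design[:10])                        # indices of VEs to simulate first
```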
Keypoint data has received a considerable amount of attention in machine learning for tasks like action detection and recognition. However, human experts in movement, such as doctors, physiotherapists, sports scientists and coaches, use a notion of joint angles standardised by the International Society of Biomechanics to precisely and efficiently communicate static body poses and movements. In this paper, we introduce the basic biomechanical notions and show how they can be used to convert common keypoint data into joint angles that uniquely describe the given pose and have various desirable mathematical properties, such as independence of both the camera viewpoint and the person performing the action. We experimentally demonstrate that the joint angle representation of keypoint data is suitable for machine learning applications and can in some cases bring an immediate performance gain. The use of joint angles as a human-meaningful representation of kinematic data is particularly promising for applications where interpretability and dialogue with human experts are important, such as many sports and medical applications. To facilitate further research in this direction, we will release a Python package to convert keypoint data into joint angles as outlined in this paper.
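As a minimal illustration of the conversion (the full ISB convention defines three-dimensional joint coordinate systems, which this sketch does not reproduce), the included angle at a joint can be computed from three keypoints:

```python
# Minimal sketch: elbow flexion as the included angle at the elbow keypoint.
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at keypoint b formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

shoulder = np.array([0.0, 1.4, 0.0])   # hypothetical 3D keypoints (metres)
elbow    = np.array([0.0, 1.1, 0.1])
wrist    = np.array([0.2, 1.0, 0.3])
print(joint_angle(shoulder, elbow, wrist))
```

Because the angle is built from relative vectors only, rotating or translating every keypoint leaves it unchanged, which illustrates the camera-viewpoint independence mentioned above.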
Machine learning (ML) plays an important role in quantum chemistry, providing fast-to-evaluate predictive models for various properties of molecules. However, most existing ML models for molecular electronic properties use density functional theory (DFT) databases as ground truth in training, and their prediction accuracy cannot surpass that of DFT. In this work, we developed a unified ML method for electronic structures of organic molecules using the gold-standard CCSD(T) calculations as training data. Tested on hydrocarbon molecules, our model outperforms DFT with widely used hybrid and double-hybrid functionals in both computational cost and prediction accuracy across various quantum chemical properties. As case studies, we apply the model to aromatic compounds and semiconducting polymers, for both ground-state and excited-state properties, demonstrating its accuracy and generalization capability for complex systems that are hard to calculate at the CCSD(T) level.
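The general recipe (train a fast surrogate on CCSD(T)-level labels rather than DFT labels) can be sketched as follows; the kernel ridge model, descriptors, and data below are illustrative stand-ins, not the authors' architecture.

```python
# Minimal sketch: regression against CCSD(T)-level reference labels.
# Descriptors and energies are random placeholders for real data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))        # hypothetical molecular descriptors
y = rng.normal(size=200)              # hypothetical CCSD(T) reference energies

def gaussian_kernel(A, B, gamma=0.05):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(y)), y)   # kernel ridge fit

X_new = rng.normal(size=(5, 64))      # descriptors of unseen molecules
pred = gaussian_kernel(X_new, X) @ alpha
print(pred)
```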
Current physics-informed (standard or deep operator) neural networks still rely on accurately learning the initial and/or boundary conditions of the system of differential equations they are solving. In contrast, standard numerical methods involve such conditions in computations without needing to learn them. In this study, we propose to improve current physics-informed deep learning strategies such that initial and/or boundary conditions do not need to be learned and are represented exactly in the predicted solution. Moreover, this method guarantees that when a deep operator network is applied multiple times to time-step a solution of an initial value problem, the resulting function is at least continuous.
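One well-known construction in this spirit (a sketch of the general idea, not necessarily the paper's exact formulation) multiplies the network output by a factor that vanishes where the condition is imposed, so the initial condition holds exactly by construction:

```python
# Minimal sketch: the ansatz u(t, x) = u0(x) + t * N(t, x) satisfies
# u(0, x) = u0(x) identically, so the initial condition is never learned.
import torch

net = torch.nn.Sequential(             # plain MLP standing in for a PINN
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def u0(x):
    return torch.sin(torch.pi * x)     # hypothetical initial condition

def u_hat(t, x):
    """Prediction that matches u0 exactly at t = 0."""
    inp = torch.stack([t, x], dim=-1)
    return u0(x) + t * net(inp).squeeze(-1)

t = torch.zeros(5); x = torch.linspace(0.0, 1.0, 5)
print(torch.allclose(u_hat(t, x), u0(x)))   # True: IC holds exactly
```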
Understanding the mechanisms through which neural networks extract statistics from input-label pairs through feature learning is one of the most important unsolved problems in supervised learning. Prior works demonstrated that the Gram matrices of the weights (the neural feature matrices, NFM) and the average gradient outer products (AGOP) become correlated during training, in a statement known as the neural feature ansatz (NFA). Through the NFA, those works introduced mapping with the AGOP as a general mechanism for neural feature learning. However, they did not provide a theoretical explanation for this correlation or its origins. In this work, we further clarify the nature of this correlation, and explain its emergence. We show that this correlation is equivalent to alignment between the left singular structure of the weight matrices and the newly defined pre-activation tangent features at each layer. We further establish that the alignment is driven by the interaction of weight changes induced by SGD with the pre-activation features, and analyze the resulting dynamics analytically at early times in terms of simple statistics of the inputs and labels. Finally, motivated by the observation that the NFA is driven by this centered correlation, we introduce a simple optimization rule that dramatically increases the NFA correlations at any given layer and improves the quality of features learned.
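For a first-layer illustration, the two objects whose correlation the NFA asserts can be computed directly; the network and data below are random placeholders, and the cosine similarity of the flattened matrices is one common way to measure the correlation.

```python
# Minimal sketch: first-layer NFM (W1^T W1) vs. AGOP of the network
# with respect to its inputs, compared via cosine similarity.
import torch

torch.manual_seed(0)
W1 = torch.nn.Linear(10, 32)
net = torch.nn.Sequential(W1, torch.nn.ReLU(), torch.nn.Linear(32, 1))
X = torch.randn(256, 10, requires_grad=True)

# AGOP: average of grad_x f(x) grad_x f(x)^T over the data.
grads = torch.autograd.grad(net(X).sum(), X)[0]   # rows are per-sample grad_x f
agop = grads.T @ grads / len(X)

# NFM: Gram matrix of the first-layer weights.
nfm = W1.weight.T @ W1.weight

cos = torch.nn.functional.cosine_similarity(
    nfm.flatten(), agop.flatten(), dim=0)
print(float(cos))   # per the NFA, this correlation grows during training
```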
This work presents a methodology to predict a near-optimal spacing function, which defines the element sizes, suitable for performing steady RANS turbulent viscous flow simulations. The strategy aims to utilise existing high-fidelity simulations to compute a target spacing function and to train an artificial neural network (ANN) to predict the spacing function for new simulations, involving either unseen operating conditions or unseen geometric configurations. Several challenges induced by the use of highly stretched elements are addressed. The final goal is to substantially reduce the time and human expertise that are currently required to produce suitable meshes for simulations. Numerical examples involving turbulent compressible flows in two dimensions are used to demonstrate the ability of the trained ANN to predict a suitable spacing function. The influence of the ANN architecture and the size of the training dataset are discussed. Finally, the suitability of the predicted meshes for performing simulations is investigated.
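A minimal sketch of the regression task is given below; the architecture, the inputs, and the choice to predict log-spacing (a plausible transform given that RANS meshes span orders of magnitude in element size near walls, but an assumption here) are illustrative only.

```python
# Minimal sketch: an MLP mapping point coordinates plus operating
# conditions to a local element size, trained against a log-spacing target.
import torch

spacing_net = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def predict_spacing(xy, mach, alpha):
    """Element size at point xy for given (hypothetical) flow conditions."""
    inp = torch.cat([xy, torch.tensor([mach, alpha])])
    return torch.exp(spacing_net(inp))   # exp undoes the log-spacing target

h = predict_spacing(torch.tensor([0.3, 0.05]), mach=0.7, alpha=2.0)
print(float(h))
```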
Stress and material deformation field predictions are among the most important tasks in computational mechanics. These predictions are typically made by solving the governing equations of continuum mechanics using finite element analysis (FEA), which can become computationally prohibitive for complex microstructures and material behaviors. Machine learning (ML) methods offer potentially cost-effective surrogates for these applications. However, existing ML surrogates are either limited to low-dimensional problems and/or do not provide uncertainty estimates in the predictions. This work proposes an ML surrogate framework for stress field prediction and uncertainty quantification for diverse materials microstructures. A modified Bayesian U-net architecture is employed to provide a data-driven image-to-image mapping from initial microstructure to stress field with prediction (epistemic) uncertainty estimates. The Bayesian posterior distributions for the U-net parameters are estimated using three state-of-the-art inference algorithms: the posterior-sampling-based Hamiltonian Monte Carlo method and two variational approaches, the Monte Carlo Dropout method and the Bayes by Backprop algorithm. A systematic comparison of the predictive accuracy and uncertainty estimates for these methods is performed for a fiber-reinforced composite material and a polycrystalline microstructure application. It is shown that the proposed methods yield predictions of high accuracy compared to the FEA solution, while uncertainty estimates depend on the inference approach. Generally, the Hamiltonian Monte Carlo and Bayes by Backprop methods provide consistent uncertainty estimates. Uncertainty estimates from Monte Carlo Dropout, on the other hand, are more difficult to interpret and depend strongly on the method's design.
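Of the three inference methods, Monte Carlo Dropout is the simplest to sketch: dropout is kept active at prediction time and the spread over repeated stochastic forward passes is read as epistemic uncertainty. The toy model below stands in for the Bayesian U-net.

```python
# Minimal sketch of Monte Carlo Dropout inference for an image-to-image model.
import torch

model = torch.nn.Sequential(             # placeholder for the Bayesian U-net
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Dropout2d(p=0.1),
    torch.nn.Conv2d(8, 1, 3, padding=1),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Mean stress field and per-pixel epistemic std from MC Dropout."""
    model.train()                        # keep dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)

micro = torch.randn(1, 1, 64, 64)        # hypothetical microstructure image
mean_stress, stress_std = mc_dropout_predict(model, micro)
```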
Continual learning refers to the capability of a machine learning model to learn and adapt to new information without compromising its performance on previously learned tasks. Although several studies have investigated continual learning methods for information retrieval tasks, a well-defined task formulation is still lacking, and it is unclear how typical learning strategies perform in this context. To address this challenge, a systematic task formulation of continual neural information retrieval is presented, along with a multiple-topic dataset that simulates continuous information retrieval. A comprehensive continual neural information retrieval framework consisting of typical retrieval models and continual learning strategies is then proposed. Empirical evaluations illustrate that the proposed framework can successfully prevent catastrophic forgetting in neural information retrieval and enhance performance on previously learned tasks. The results indicate that embedding-based retrieval models experience a decline in their continual learning performance as the topic shift distance and dataset volume of new tasks increase. In contrast, pretraining-based models do not show any such correlation. Adopting suitable learning strategies can mitigate the effects of topic shift and growing data volume.
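As one example of a typical continual learning strategy such a framework might include (an illustrative assumption; the paper's specific strategies are not reproduced here), experience replay maintains a buffer of examples from earlier topics via reservoir sampling and mixes them into later training batches:

```python
# Minimal sketch of an experience-replay buffer for sequential topics.
import random

class ReplayBuffer:
    def __init__(self, capacity=1000):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, example):
        """Reservoir sampling keeps a uniform sample over all topics seen."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.data[i] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

buffer = ReplayBuffer()
for qid in range(5000):                  # stand-in for (query, document) pairs
    buffer.add(("query_%d" % qid, "doc_%d" % qid))
replayed = buffer.sample(32)             # mixed into the next topic's batches
```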
Gestures are an important means of non-verbal communication whose visual modality allows humans to convey information during interaction, facilitating both interpersonal and human-machine interactions. However, automatically recognising gestures is considered difficult. In this work, we explore three different means of recognising hand signs using deep learning applied to 3D moving skeleton data: supervised learning based methods, self-supervised methods and visualisation based techniques. Supervised learning is used to train fully connected, CNN and LSTM models. Then, a self-supervised reconstruction method is applied to unlabelled data in simulated settings using a CNN as a backbone, and the learnt features are used to perform prediction on the remaining labelled data. Lastly, Grad-CAM is applied to discover the focus of the models. Our experimental results show that the supervised learning methods are capable of recognising gestures accurately, with self-supervised learning increasing the accuracy in simulated settings. Finally, Grad-CAM visualisation shows that the models indeed focus on the skeleton joints relevant to the associated gesture.
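A minimal Grad-CAM sketch for skeleton data is shown below; the 1-D CNN shapes (22 joints with 3 coordinates, 10 classes) are hypothetical, and the class-score gradients weight the feature maps to localise the time steps the model relies on.

```python
# Minimal Grad-CAM sketch for a 1-D CNN over skeleton sequences
# (channels = joint coordinates, length = time).
import torch

conv = torch.nn.Conv1d(66, 16, 5, padding=2)   # 22 joints x 3 coords, hypothetical
head = torch.nn.Linear(16, 10)                 # 10 gesture classes, hypothetical

x = torch.randn(1, 66, 40)                     # one 40-frame sequence
feats = conv(x)                                # (1, 16, 40) feature maps
feats.retain_grad()                            # keep gradients of this non-leaf
score = head(feats.mean(-1))[0].max()          # top class score
score.backward()

weights = feats.grad.mean(dim=-1, keepdim=True)  # per-map importance
cam = torch.relu((weights * feats).sum(1))       # (1, 40) relevance over time
print(cam.squeeze())
```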
We introduce a novel two-level overlapping additive Schwarz preconditioner for accelerating the training of scientific machine learning applications. The design of the proposed preconditioner is motivated by the nonlinear two-level overlapping additive Schwarz preconditioner. The neural network parameters are decomposed into groups (subdomains) with overlapping regions. In addition, the network's feed-forward structure is indirectly imposed through a novel subdomain-wise synchronization strategy and a coarse-level training step. Through a series of numerical experiments, which consider physics-informed neural networks and operator learning approaches, we demonstrate that the proposed two-level preconditioner significantly speeds up the convergence of the standard (L-BFGS) optimizer while also yielding more accurate machine learning models. Moreover, the devised preconditioner is designed to take advantage of model-parallel computations, which can further reduce the training time.
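The basic additive Schwarz structure can be sketched on a toy quadratic loss: parameters are split into overlapping blocks, each subdomain optimizes only its own block, and the local corrections are combined additively. The subdomain synchronization and coarse-level training described above are deliberately omitted from this sketch.

```python
# Minimal sketch of one additive Schwarz iteration on a toy problem.
import numpy as np

def loss_grad(w):                       # toy quadratic loss standing in for training loss
    return w - np.arange(len(w))

def schwarz_step(w, blocks, lr=0.1, local_steps=5):
    correction = np.zeros_like(w)
    for idx in blocks:                  # one local solve per subdomain
        w_loc = w.copy()
        for _ in range(local_steps):
            g = loss_grad(w_loc)
            w_loc[idx] -= lr * g[idx]   # update only this block's parameters
        correction += w_loc - w         # additive combination of corrections
    return w + correction

w = np.zeros(8)
blocks = [np.arange(0, 5), np.arange(3, 8)]   # two blocks overlapping at 3..4
for _ in range(20):
    w = schwarz_step(w, blocks)
print(w)                                # approaches the minimizer [0, 1, ..., 7]
```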
Agglomeration techniques are important to reduce the computational costs of numerical simulations and stand at the basis of multilevel algebraic solvers. To automatically perform the agglomeration of polyhedral grids, we propose a novel Geometrical Deep Learning-based algorithm that can exploit the geometrical and physical information of the underlying computational domain to construct the agglomerated grid and simultaneously guarantee the agglomerated grid's quality. In particular, we propose a bisection model based on Graph Neural Networks (GNNs) to partition a suitable connectivity graph of computational three-dimensional meshes. The new approach has a high online inference speed and can simultaneously process the graph structure of the mesh, the geometrical information of the mesh (e.g. elements' volumes, centers' coordinates), and the physical information of the domain (e.g. physical parameters). Taking advantage of this new approach, our algorithm can automatically agglomerate meshes of a domain composed of heterogeneous media. The proposed GNN techniques are compared with the k-means algorithm and METIS, standard approaches for graph partitioning that are meant to process only the connectivity information of the mesh. We demonstrate that our algorithm outperforms available approaches in terms of quality metrics and runtimes. Moreover, it shows a good level of generalization when applied to more complex geometries, such as three-dimensional geometries reconstructed from medical images. Finally, the capabilities of the model in performing agglomeration of heterogeneous domains are tested on problems containing microstructures and on a complex geometry such as the human brain.
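A minimal sketch of the bisection step follows; a single mean-aggregation message-passing layer produces soft assignments of mesh elements to two agglomerates. The features, connectivity, and model itself are illustrative placeholders, and the real method trains the GNN with a partitioning quality objective not reproduced here.

```python
# Minimal sketch of GNN-driven bisection over a mesh connectivity graph.
import torch

n = 50
feats = torch.rand(n, 4)                 # hypothetical: element volume + centroid coords
adj = (torch.rand(n, n) < 0.1).float()   # hypothetical element connectivity
adj = ((adj + adj.T) > 0).float()        # symmetrize

lin_self = torch.nn.Linear(4, 16)
lin_neigh = torch.nn.Linear(4, 16)
out = torch.nn.Linear(16, 2)

deg = adj.sum(1, keepdim=True).clamp(min=1)
h = torch.relu(lin_self(feats) + lin_neigh(adj @ feats / deg))  # mean aggregation
probs = torch.softmax(out(h), dim=1)     # soft assignment to the two parts
part = probs.argmax(1)                   # one bisection; applied recursively in practice
print(part.bincount())
```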