We present a new theoretical and computational framework for modelling electro-chemo-mechanical fracture. The model combines a phase field description of fracture with a fully coupled characterisation of electrolyte behaviour, surface chemical reactions and stress-assisted diffusion. Importantly, a new physics-based formulation is presented to describe electrolyte-containing phase field cracks, appropriately capturing the sensitivity of electrochemical transport and reaction kinetics to the crack opening height. Unlike other existing methods, this approach is shown to accurately capture the results obtained with discrete fracture simulations. The potential of the electro-chemo-mechanical model presented is demonstrated by particularising it to the analysis of hydrogen embrittlement in metallic samples exposed to aqueous electrolytes. The finite element implementation takes as nodal degrees of freedom the electrolyte potential, the concentrations of relevant ionic species, the surface coverage, the concentration of diluted species, the displacement field and the phase field order parameter. Particular attention is devoted to improving stability and efficiency, resulting in the development of strategies for avoiding ill-constrained degrees of freedom and lumped integration schemes that eliminate numerical oscillations. The numerical experiments conducted showcase the ability of the model to deliver assumption-free predictions for systems involving both free-flowing and crack-contained electrolytes. The results obtained highlight the role of electrolyte behaviour in driving the cracking process, evidencing the limitations of existing models.
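For illustration, a minimal sketch of the standard AT2-type phase field ingredients that such a framework typically builds on is given below; the degradation function, length scale and toughness value are illustrative assumptions rather than the paper's specific choices.

```python
import numpy as np

def degradation(phi, k=1e-7):
    """Stiffness degradation g(phi) = (1 - phi)^2 + k (AT2-type model, illustrative)."""
    return (1.0 - phi) ** 2 + k

def crack_surface_density(phi, grad_phi, ell=0.01):
    """Regularised crack surface density: phi^2/(2*ell) + (ell/2)*|grad phi|^2."""
    return phi ** 2 / (2.0 * ell) + 0.5 * ell * np.sum(grad_phi ** 2, axis=-1)

# Example: fracture energy contribution at a few quadrature points
Gc = 2.7  # critical energy release rate (illustrative value)
phi = np.array([0.0, 0.4, 0.9])
grad_phi = np.array([[0.0, 0.0], [5.0, 0.0], [20.0, 0.0]])
print(Gc * crack_surface_density(phi, grad_phi))  # regularised fracture energy density
print(degradation(phi))                           # factor multiplying the elastic stiffness
```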
The Koopman operator provides a linear perspective on nonlinear dynamics by focusing on the evolution of observables in an invariant subspace. Observables of interest are typically linearly reconstructed from the Koopman eigenfunctions. Despite the broad use of Koopman operators over the past few years, there exist some misconceptions about their applicability to dynamical systems with more than one fixed point. In this work, an explanation is provided for the mechanism of lifting for the Koopman operator of nonlinear systems with multiple attractors. Considering the example of the Duffing oscillator, we show that by exploiting the inherent symmetry between the basins of attraction, a linear reconstruction with three degrees of freedom in the Koopman observable space is sufficient to globally linearize the system.
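As an illustration of the lifting idea, the sketch below fits a finite-dimensional Koopman approximation to the two-attractor Duffing oscillator via extended dynamic mode decomposition (EDMD) with a generic polynomial dictionary; this is a standard EDMD construction, not the paper's three-observable symmetric reconstruction, and the damping, dictionary and data amounts are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped, unforced Duffing oscillator with two stable fixed points at x = +/-1.
def duffing(t, z, delta=0.5):
    x, v = z
    return [v, -delta * v + x - x ** 3]

# Collect snapshot pairs from many short trajectories.
rng = np.random.default_rng(0)
X, Y, dt = [], [], 0.05
for _ in range(100):
    z0 = rng.uniform(-2, 2, size=2)
    sol = solve_ivp(duffing, [0, 10], z0, t_eval=np.arange(0, 10, dt))
    X.append(sol.y[:, :-1])
    Y.append(sol.y[:, 1:])
X, Y = np.hstack(X), np.hstack(Y)

# Dictionary of observables (monomials up to degree 3 -- an illustrative choice).
def lift(Z):
    x, v = Z
    return np.vstack([np.ones_like(x), x, v, x * v,
                      x ** 2, v ** 2, x ** 3, v ** 3, x ** 2 * v, x * v ** 2])

PsiX, PsiY = lift(X), lift(Y)
K = PsiY @ np.linalg.pinv(PsiX)          # finite-dimensional Koopman approximation
print(np.sort(np.abs(np.linalg.eigvals(K)))[::-1])  # leading eigenvalue magnitudes
```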
Polyurethane (PU) is an ideal thermal insulation material due to its excellent thermal properties. The incorporation of Phase Change Material (PCM) capsules into PU has been shown to be effective in building envelopes: this design can significantly increase the stability of the indoor thermal environment and reduce fluctuations in indoor air temperature. We develop a multiscale model of a PU-PCM foam composite and study the thermal conductivity of this material; the predicted conductivity can then be used to optimize the material design. We conduct a case study of a single room whose envelope incorporates the PU-PCM composite, fully considering the thermal comfort of the occupants, and we also predict the energy consumption of this case. All the outcomes show that this design is promising, enabling passive building energy design and significantly improving occupants' comfort.
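As a hedged illustration of the homogenisation step in such a multiscale model, the snippet below estimates the effective conductivity of PCM capsules dispersed in a PU matrix with the classical Maxwell-Eucken relation; the conductivity values and volume fractions are assumed, not taken from the study.

```python
def maxwell_eucken(k_matrix, k_inclusion, f):
    """Effective conductivity of spherical inclusions in a continuous matrix
    (Maxwell-Eucken relation); an illustrative homogenisation step, not the
    paper's full multiscale model."""
    km, ki = k_matrix, k_inclusion
    return km * (2 * km + ki + 2 * f * (ki - km)) / (2 * km + ki - f * (ki - km))

# Illustrative values: PU foam matrix with dispersed PCM capsules
k_pu = 0.025   # W/(m K), assumed PU foam conductivity
k_pcm = 0.20   # W/(m K), assumed PCM capsule conductivity
for f in (0.05, 0.10, 0.20):
    print(f, round(maxwell_eucken(k_pu, k_pcm, f), 4))
```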
Within the framework of computational plasticity, recent advances show that the quasi-static response of an elasto-plastic structure under cyclic loading may exhibit time-multiscale behaviour. In particular, the system response can be computed in terms of time microscale and macroscale modes using a weakly intrusive multi-time Proper Generalized Decomposition (MT-PGD). In this work, such a micro-macro characterization of the time response is exploited to build a data-driven model of the elasto-plastic constitutive relation. This can be viewed as a predictor-corrector scheme in which the prediction is driven by the macrotime evolution and the correction is performed via sparse sampling in space. Once the nonlinear term is forecast, the multi-time PGD algorithm allows the fast computation of the total strain. The algorithm shows considerable gains in computational time, opening new perspectives for the numerical simulation of history-dependent problems defined over very large time intervals.
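The micro-macro separation of the time response can be illustrated with a simple sketch: a long response history is reshaped into a (macro cycle) x (micro time) array and compressed with a truncated SVD, standing in for the separated representation that the MT-PGD computes. The signal, array sizes and rank below are purely illustrative assumptions.

```python
import numpy as np

# A long "response history": slow drift (ratcheting-like) plus a fast cyclic part.
n_macro, n_micro = 200, 100            # macro cycles x micro time steps per cycle
t = np.arange(n_macro * n_micro)
signal = 1e-3 * t / t[-1] + 0.1 * np.sin(2 * np.pi * t / n_micro)

# Separated (macro x micro) representation: reshape and truncate an SVD.
S = signal.reshape(n_macro, n_micro)   # rows: macro scale, columns: micro scale
U, s, Vt = np.linalg.svd(S, full_matrices=False)
rank = 2
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
print("relative error:", np.linalg.norm(approx - S) / np.linalg.norm(S))
```

For this toy history the rank-2 separated representation is essentially exact, which is the compression effect that makes the micro-macro description attractive for long cyclic loadings.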
Early sensory systems in the brain rapidly adapt to fluctuating input statistics, which requires recurrent communication between neurons. Mechanistically, such recurrent communication is often indirect and mediated by local interneurons. In this work, we explore the computational benefits of mediating recurrent communication via interneurons compared with direct recurrent connections. To this end, we consider two mathematically tractable recurrent linear neural networks that statistically whiten their inputs -- one with direct recurrent connections and the other with interneurons that mediate recurrent communication. By analyzing the corresponding continuous synaptic dynamics and numerically simulating the networks, we show that the network with interneurons is more robust to initialization than the network with direct recurrent connections in the sense that the convergence time for the synaptic dynamics in the network with interneurons (resp. direct recurrent connections) scales logarithmically (resp. linearly) with the spectrum of their initialization. Our results suggest that interneurons are computationally useful for rapid adaptation to changing input statistics. Interestingly, the network with interneurons is an overparameterized solution of the whitening objective for the network with direct recurrent connections, so our results can be viewed as a recurrent linear neural network analogue of the implicit acceleration phenomenon observed in overparameterized feedforward linear neural networks.
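The overparameterization argument can be illustrated with a toy gradient-descent sketch on a whitening objective, comparing a direct parameterization of the whitening matrix with a factored ("interneuron-like") one; the objective, step sizes and initializations below are illustrative assumptions, not the network models or synaptic dynamics analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)            # input covariance (illustrative, well conditioned)

def loss_and_grad(M):
    E = M @ C @ M.T - np.eye(n)        # whitening residual: M C M^T should equal I
    return 0.25 * np.sum(E ** 2), E @ M @ C

def run_direct(steps=50000, lr=2e-3, tol=1e-6):
    M = 0.1 * np.eye(n)                # single-matrix ("direct recurrent") parameterization
    for k in range(steps):
        L, G = loss_and_grad(M)
        if L < tol:
            return k
        M -= lr * G
    return steps

def run_factored(steps=50000, lr=2e-3, tol=1e-6):
    W1, W2 = 0.1 * np.eye(n), np.eye(n)  # overparameterized ("interneuron-like") factorization
    for k in range(steps):
        L, G = loss_and_grad(W2 @ W1)
        if L < tol:
            return k
        W1, W2 = W1 - lr * (W2.T @ G), W2 - lr * (G @ W1.T)
    return steps

print("iterations, direct:", run_direct(), "factored:", run_factored())
```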
1. Automated analysis of bioacoustic recordings using machine learning (ML) methods has the potential to greatly scale biodiversity monitoring efforts. The use of ML for high-stakes applications, such as conservation research, demands a data-centric approach with a focus on utilizing carefully annotated and curated evaluation and training data that is relevant and representative. Creating annotated datasets of sound recordings presents a number of challenges, such as managing large collections of recordings with associated metadata, developing flexible annotation tools that can accommodate the diverse range of vocalization profiles of different organisms, and addressing the scarcity of expert annotators. 2. We present Whombat, a user-friendly, browser-based interface for managing audio recordings and annotation projects, with several visualization, exploration, and annotation tools. It enables users to quickly annotate, review, and share annotations, as well as visualize and evaluate a set of machine learning predictions on a dataset. The tool facilitates an iterative workflow in which user annotations and machine learning predictions feed back into each other to enhance model performance and annotation quality. 3. We demonstrate the flexibility of Whombat by showcasing two distinct use cases: a project aimed at enhancing automated UK bat call identification at the Bat Conservation Trust (BCT), and a collaborative effort between USDA Forest Service and Oregon State University researchers exploring bioacoustic applications and extending automated avian classification models in the Pacific Northwest, USA. 4. Whombat is a flexible tool that can effectively address the challenges of annotation for bioacoustic research. It can be used for individual and collaborative work, hosted on a shared server or accessed remotely, or run on a personal computer without the need for coding skills.
Neural dynamical systems with stable attractor structures, such as point attractors and continuous attractors, are hypothesized to underlie meaningful temporal behavior that requires working memory. However, working memory may not support useful learning signals necessary to adapt to changes in the temporal structure of the environment. We show that in addition to the continuous attractors that are widely implicated, periodic and quasi-periodic attractors can also support learning arbitrarily long temporal relationships. Unlike the continuous attractors that suffer from the fine-tuning problem, the less explored quasi-periodic attractors are uniquely qualified for learning to produce temporally structured behavior. Our theory has broad implications for the design of artificial learning systems and makes predictions about observable signatures of biological neural dynamics that can support temporal dependence learning and working memory. Based on our theory, we developed a new initialization scheme for artificial recurrent neural networks that outperforms standard methods for tasks that require learning temporal dynamics. Moreover, we propose a robust recurrent memory mechanism for integrating and maintaining head direction without a ring attractor.
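One way to realize oscillatory attractor structure at initialization is a block-diagonal rotation initialization of the recurrent weights, sketched below as a hypothetical example; the function name, frequency choices and block structure are assumptions for illustration, not necessarily the initialization scheme proposed in the paper.

```python
import numpy as np

def rotation_block_init(hidden_size, freqs=None, rng=None):
    """Initialize a recurrent weight matrix as a block-diagonal of 2x2 rotations.
    Each block yields oscillatory (periodic) linear dynamics; incommensurate
    frequencies across blocks give quasi-periodic trajectories. Illustrative only."""
    rng = rng or np.random.default_rng()
    n_blocks = hidden_size // 2
    if freqs is None:
        freqs = rng.uniform(0.0, np.pi, size=n_blocks)
    W = np.zeros((hidden_size, hidden_size))
    for i, th in enumerate(freqs):
        c, s = np.cos(th), np.sin(th)
        W[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, -s], [s, c]]
    return W

W = rotation_block_init(8)
print(np.abs(np.linalg.eigvals(W)))   # all on the unit circle: no decay, no blow-up
```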
Joint models (JM) for longitudinal and survival data have gained increasing interest and found applications in a wide range of clinical and biomedical settings. These models facilitate the understanding of the relationship between outcomes and enable individualized predictions. In many applications, more complex event processes arise, necessitating joint longitudinal and multistate models. However, their practical application can be hindered by computational challenges due to increased model complexity and large sample sizes. Motivated by a longitudinal multimorbidity analysis of large UK health records, we have developed a scalable Bayesian methodology for such joint multistate models that is capable of handling complex event processes and large datasets, with straightforward implementation. We propose two blockwise inference approaches for different inferential purposes, based on different levels of decomposition of the multistate processes. These approaches leverage parallel computing, ease the specification of different models for different transitions, and allow model and variable selection to be performed within a Bayesian framework using Bayesian leave-one-out cross-validation. Using a simulation study, we show that the proposed approaches achieve satisfactory performance regarding posterior point and interval estimation, with notable gains in sampling efficiency compared to the standard estimation strategy. We illustrate our approaches using a large UK electronic health record dataset in which we analysed the coevolution of routinely measured systolic blood pressure (SBP) and the progression of multimorbidity, defined as combinations of three chronic conditions. Our analysis identified distinct association structures between SBP and different disease transitions.
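The blockwise idea can be sketched schematically: each transition's submodel is fitted independently and in parallel. The toy data, constant-hazard "model" and names below are purely illustrative stand-ins, not the Bayesian joint longitudinal-multistate models used in the paper.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

# Toy summary data per transition: total time at risk and number of observed events.
# In a blockwise scheme, each transition's submodel can be fitted independently.
transitions = {
    ("healthy", "condition A"):      {"time_at_risk": 5400.0, "events": 140},
    ("condition A", "condition B"):  {"time_at_risk": 2100.0, "events": 60},
    ("condition B", "death"):        {"time_at_risk": 800.0,  "events": 35},
}

def fit_transition(item):
    (src, dst), d = item
    rate = d["events"] / d["time_at_risk"]          # MLE of a constant hazard
    se = np.sqrt(d["events"]) / d["time_at_risk"]   # approximate standard error
    return (src, dst), rate, se

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:             # one worker per transition block
        for (src, dst), rate, se in pool.map(fit_transition, transitions.items()):
            print(f"{src} -> {dst}: rate={rate:.4f} (se {se:.4f})")
```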
In this work, we are interested in solving large linear systems stemming from the Extra-Membrane-Intra (EMI) model, which is employed for simulating excitable tissues at a cellular scale. After setting up the related systems of partial differential equations (PDEs), equipped with proper boundary conditions, we provide numerical approximation schemes for the EMI PDEs and focus on the resulting large linear systems. We first give a relatively complete spectral analysis using tools from the theory of Generalized Locally Toeplitz matrix sequences. The obtained spectral information is then used to design appropriate (preconditioned) Krylov solvers. We show, through numerical experiments, that the presented solution strategy is robust with respect to the problem and discretization parameters, efficient, and scalable.
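For a flavour of the solver setup, the sketch below runs a preconditioned conjugate gradient solve on a generic finite-difference Laplacian with SciPy; the matrix and the ILU preconditioner are placeholders chosen for illustration, whereas the paper's preconditioners are informed by the GLT spectral analysis of the actual EMI systems.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spilu, LinearOperator

# Illustrative stand-in for a discretized PDE system: a 2D finite-difference Laplacian.
n = 64
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

# Incomplete-LU preconditioned conjugate gradient (generic choice, not the paper's).
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"cg returned {info}",
      "| residual:", np.linalg.norm(b - A @ x))
```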
Navigating dynamic environments requires the robot to generate collision-free trajectories and actively avoid moving obstacles. Most previous works design path planning algorithms based on a single map representation, such as a geometric, occupancy, or ESDF map. Although they have shown success in static environments, those methods cannot reliably handle static and dynamic obstacles simultaneously due to the limitations of the map representation. To address this problem, this paper proposes a gradient-based B-spline trajectory optimization algorithm that utilizes the robot's onboard vision. The depth vision enables the robot to track and represent dynamic objects geometrically based on the voxel map. The proposed optimization first adopts the circle-based guide-point algorithm to approximate the costs and gradients for avoiding static obstacles. Then, with the vision-detected moving objects, our receding-horizon distance field is simultaneously used to prevent dynamic collisions. Finally, the iterative re-guide strategy is applied to generate the collision-free trajectory. Simulation and physical experiments show that our method can run in real time to navigate dynamic environments safely.
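A heavily simplified sketch of gradient-based control-point optimization is shown below: a smoothness cost on the B-spline control polygon plus a clearance penalty around one static circular obstacle. The guide-point logic, receding-horizon distance field and all parameter values of the actual planner are not reproduced; everything here is an illustrative assumption.

```python
import numpy as np

# Initial control points on a straight line; one inflated circular obstacle.
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
P = np.linspace(start, goal, 12)
obs_c, obs_r, margin = np.array([5.0, 0.2]), 1.0, 0.5

def cost_grad(P):
    G = np.zeros_like(P)
    # Smoothness: penalize squared second differences of the control polygon.
    acc = P[:-2] - 2 * P[1:-1] + P[2:]
    G[:-2] += 2 * acc
    G[1:-1] += -4 * acc
    G[2:] += 2 * acc
    # Clearance: quadratic penalty on control points inside the inflated radius.
    d = P - obs_c
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    pen = np.maximum(obs_r + margin - dist, 0.0)
    G += -2 * pen * d / np.maximum(dist, 1e-9)
    return G

for _ in range(300):
    G = cost_grad(P)
    G[0] = G[-1] = 0.0                 # keep start and goal control points fixed
    P -= 0.03 * G                      # plain gradient descent step

print(np.round(P, 2))                  # control points pushed smoothly around the obstacle
```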
Most state-of-the-art machine learning techniques revolve around the optimisation of loss functions. Defining appropriate loss functions is therefore critical to successfully solving problems in this field. We present a survey of the most commonly used loss functions for a wide range of applications, divided into classification, regression, ranking, sample generation and energy-based modelling. Overall, we introduce 33 different loss functions and organise them into an intuitive taxonomy. Each loss function is given a theoretical backing, and we describe where it is best used. This survey aims to provide a reference of the most essential loss functions for both beginner and advanced machine learning practitioners.
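For concreteness, a few of the commonly surveyed losses are sketched in NumPy below (cross-entropy for classification, hinge for max-margin classification, Huber for robust regression); these are minimal illustrations rather than reference implementations.

```python
import numpy as np

def cross_entropy(p_true, p_pred, eps=1e-12):
    """Multi-class cross-entropy for one-hot targets (classification)."""
    return -np.sum(p_true * np.log(p_pred + eps), axis=-1).mean()

def hinge(y_true, score):
    """Hinge loss with labels in {-1, +1} (max-margin classification)."""
    return np.maximum(0.0, 1.0 - y_true * score).mean()

def huber(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails (robust regression)."""
    r = np.abs(y_true - y_pred)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta)).mean()

print(cross_entropy(np.array([[0, 1, 0]]), np.array([[0.2, 0.7, 0.1]])))
print(hinge(np.array([1, -1]), np.array([0.3, -2.0])))
print(huber(np.array([0.0, 3.0]), np.array([0.5, 0.0])))
```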