Compartmental models provide simple and efficient tools to analyze the relevant transmission processes during an outbreak, to produce short-term forecasts or transmission scenarios, and to assess the impact of vaccination campaigns. However, their calibration is not straightforward, since many factors contribute to the rapid change of transmission dynamics during an epidemic: for example, changes in individual awareness, the imposition of non-pharmaceutical interventions, and the emergence of new variants. As a consequence, model parameters such as the transmission rate are bound to change over time, making their estimation more challenging. Here, we propose to use Physics-Informed Neural Networks (PINNs) to track the temporal changes in the model parameters and to provide an estimate of the model state variables. PINNs have recently gained attention in many engineering applications thanks to their ability to combine information from data (typically uncertain) with the governing equations of the system. The ability of PINNs to identify unknown model parameters makes them particularly suitable for solving ill-posed inverse problems, such as those arising in the application of epidemiological models. Here, we develop a reduced-split approach for the implementation of PINNs to estimate the temporal changes in the state variables and transmission rate of an epidemic, based on the SIR model equations and data on infectious individuals. The main idea is to split the training: first on the epidemiological data, and then on the residual of the system equations. The proposed method is applied to five synthetic test cases and two real scenarios reproducing the first months of the COVID-19 pandemic in Italy. Our results show that the split implementation of PINNs outperforms the standard approach in terms of accuracy (up to one order of magnitude) and computational time (a speed-up of about 20%).
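To make the split-training idea concrete, the following is a minimal sketch assuming a PyTorch implementation in which a single network outputs S, I, R and the time-varying transmission rate beta(t); the architecture, loss weights, and hyperparameters are illustrative placeholders, not the authors' code.

```python
# Illustrative sketch (not the authors' code) of split PINN training for an
# SIR model with a time-varying transmission rate beta(t).
import torch
import torch.nn as nn

class SIRPINN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 4))  # outputs: S, I, R, beta

    def forward(self, t):
        out = self.net(t)
        sir = torch.nn.functional.softplus(out[:, :3])    # keep states non-negative
        beta = torch.nn.functional.softplus(out[:, 3:4])  # keep beta(t) positive
        return sir, beta

def sir_residual(model, t, gamma, N):
    """Mean squared residual of the SIR equations at collocation times t."""
    t = t.requires_grad_(True)
    sir, beta = model(t)
    S, I, R = sir[:, :1], sir[:, 1:2], sir[:, 2:3]
    dS = torch.autograd.grad(S, t, torch.ones_like(S), create_graph=True)[0]
    dI = torch.autograd.grad(I, t, torch.ones_like(I), create_graph=True)[0]
    dR = torch.autograd.grad(R, t, torch.ones_like(R), create_graph=True)[0]
    rS = dS + beta * S * I / N
    rI = dI - beta * S * I / N + gamma * I
    rR = dR - gamma * I
    return (rS**2 + rI**2 + rR**2).mean()

model = SIRPINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
t_obs, I_obs = torch.rand(100, 1), torch.rand(100, 1)  # placeholder observations
t_col = torch.linspace(0, 1, 200).reshape(-1, 1)        # collocation points
gamma, N = 0.1, 1.0

# Phase 1: fit the observed infectious data only.
for _ in range(2000):
    opt.zero_grad()
    sir, _ = model(t_obs)
    loss = ((sir[:, 1:2] - I_obs)**2).mean()
    loss.backward(); opt.step()

# Phase 2: minimize the SIR residual (plus a small data term to anchor the fit).
for _ in range(2000):
    opt.zero_grad()
    sir, _ = model(t_obs)
    loss = sir_residual(model, t_col, gamma, N) + 0.1 * ((sir[:, 1:2] - I_obs)**2).mean()
    loss.backward(); opt.step()
```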
Speech emotion conversion is the task of converting the expressed emotion of a spoken utterance to a target emotion while preserving the lexical content and speaker identity. While most existing works in speech emotion conversion rely on acted-out datasets and parallel data samples, in this work we specifically focus on the more challenging in-the-wild scenario and do not rely on parallel data. To this end, we propose a diffusion-based generative model for speech emotion conversion, EmoConv-Diff, which is trained to reconstruct an input utterance while also conditioning on its emotion. At inference, a target emotion embedding is then employed to convert the emotion of the input utterance to the given target emotion. Instead of performing emotion conversion on categorical representations, we use the continuous arousal dimension to represent emotions, which also enables intensity control. We validate the proposed methodology on a large in-the-wild dataset, MSP-Podcast v1.10. Our results show that the proposed diffusion model is indeed capable of synthesizing speech with a controllable target emotion. Crucially, the proposed approach shows improved performance at the extreme values of arousal and thereby addresses a common challenge in the speech emotion conversion literature.
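The conditioning mechanism can be sketched as follows; module names, shapes, and the way the arousal embedding enters the denoiser are assumptions for illustration, not the EmoConv-Diff implementation.

```python
# Illustrative sketch of arousal-conditioned denoising (not the EmoConv-Diff code).
import torch
import torch.nn as nn

class ArousalEmbed(nn.Module):
    """Map a continuous arousal scalar (e.g. in [-1, 1]) to a conditioning vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, arousal):            # arousal: (batch, 1)
        return self.proj(arousal)

class Denoiser(nn.Module):
    """Placeholder denoising network conditioned on the arousal embedding."""
    def __init__(self, feat_dim=80, cond_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + cond_dim + 1, 256), nn.SiLU(),
                                 nn.Linear(256, feat_dim))

    def forward(self, x_t, t, cond):       # x_t: noisy features, t: diffusion step
        return self.net(torch.cat([x_t, cond, t], dim=-1))

# Training: reconstruct the utterance conditioned on its *own* arousal.
# Inference: keep the content/speaker conditioning but embed the *target* arousal.
embed, denoiser = ArousalEmbed(), Denoiser()
x_t, t = torch.randn(4, 80), torch.rand(4, 1)
target_arousal = torch.full((4, 1), 0.9)   # high-arousal target
eps_hat = denoiser(x_t, t, embed(target_arousal))
```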
Time-optimal control of a multi-rotor remains an open problem due to the under-actuation and nonlinearity of its dynamics, which make the problem difficult to solve directly. In this paper, the time-optimal control problem of the multi-rotor is studied. First, a thrust-limit optimal decomposition method is proposed, which reasonably decomposes the limited thrust into three directions according to the current state and the target state. As a result, the thrust-limit constraint reduces to a linear constraint. With this linear constraint and the decoupled dynamics, a time-optimal guidance trajectory can be obtained. Then, a cost function is defined based on the time-optimal guidance trajectory; it has a quadratic form and can be used to evaluate the time-optimal performance of the system outputs. Finally, based on this cost function, the time-optimal control problem is reformulated as a model predictive control (MPC) problem. Experimental results demonstrate the feasibility and validity of the proposed methods.
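The abstract does not give the exact expressions; a generic form consistent with the description (quadratic tracking of the time-optimal guidance trajectory under a linearized thrust-limit constraint, with our own notation) would be
\[
\min_{u_0,\dots,u_{N-1}} \;\; J=\sum_{k=0}^{N-1}\big(x_k-x_k^{\mathrm{guid}}\big)^{\top} Q\,\big(x_k-x_k^{\mathrm{guid}}\big)+u_k^{\top} R\,u_k
\quad \text{s.t.} \quad x_{k+1}=f(x_k,u_k),\;\; A_u\,u_k \le b_u,
\]
where \(x_k^{\mathrm{guid}}\) denotes the time-optimal guidance trajectory obtained from the decoupled, linearly constrained problem and \(A_u u_k \le b_u\) the linearized thrust-limit constraint.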
Within Bayesian nonparametrics, dependent Dirichlet process mixture models provide a highly flexible approach for conducting inference about the conditional density function. However, several formulations of this class make either rather restrictive modelling assumptions or involve intricate algorithms for posterior inference, thus preventing their widespread use. In response to these challenges, we present a flexible, versatile, and computationally tractable model for density regression based on a single-weights dependent Dirichlet process mixture of normal distributions model for univariate continuous responses. We assume an additive structure for the mean of each mixture component and incorporate the effects of continuous covariates through smooth nonlinear functions. The key components of our modelling approach are penalised B-splines and their bivariate tensor product extension. Our proposed method also seamlessly accommodates parametric effects of categorical covariates, linear effects of continuous covariates, interactions between categorical and/or continuous covariates, varying coefficient terms, and random effects, which is why we refer to our model as a Dirichlet process mixture of normal structured additive regression models. A noteworthy feature of our method is its efficiency in posterior simulation through Gibbs sampling, as closed-form full conditional distributions for all model parameters are available. Results from a simulation study demonstrate that our approach successfully recovers true conditional densities and other regression functionals in various challenging scenarios. Applications to a toxicology, disease diagnosis, and agricultural study are provided and further underscore the broad applicability of our modelling framework. An R package, \texttt{DDPstar}, implementing the proposed method is publicly available at \url{//bitbucket.org/mxrodriguez/ddpstar}.
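In generic notation (priors and the tensor-product, categorical, and random-effect terms omitted), a single-weights dependent Dirichlet process mixture of normals for a continuous response \(y\) given covariates \(\mathbf{x}\) can be written as
\[
f(y\mid \mathbf{x})=\sum_{k=1}^{\infty} w_k\,\mathcal{N}\big(y\mid \mu_k(\mathbf{x}),\sigma_k^2\big),
\qquad
\mu_k(\mathbf{x})=\beta_{0k}+\sum_{j} f_{jk}(x_j),
\]
with stick-breaking weights \(w_k=v_k\prod_{l<k}(1-v_l)\), \(v_k\sim\mathrm{Beta}(1,\alpha)\), shared across covariate values (hence "single-weights"), and each smooth term \(f_{jk}\) represented by penalised B-splines.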
Scientific machine learning for inferring dynamical systems combines data-driven modeling, physics-based modeling, and empirical knowledge. It plays an essential role in engineering design and digital twinning. In this work, we primarily focus on an operator inference methodology that builds dynamical models, preferably of low dimension, with a prior hypothesis on the model structure, often determined by known physics or given by experts. For inference, we aim to learn the operators of a model by setting up an appropriate optimization problem. One of the critical properties of dynamical systems is stability; however, this property is not guaranteed by the inferred models. In this work, we propose inference formulations to learn quadratic models that are stable by design. Specifically, we discuss parameterizations of quadratic systems that are locally and globally stable. Moreover, for quadratic systems that have no stable equilibrium yet exhibit bounded trajectories (e.g., the chaotic Lorenz model), we discuss how to parameterize such bounded behavior in the learning process. Using these parameterizations, we set up inference problems, which are then solved using a gradient-based optimization method. Furthermore, to avoid numerical derivatives while still learning continuous-time systems, we make use of an integral form of the differential equations. We present several numerical examples illustrating the preservation of stability and comparing the results with an existing state-of-the-art operator inference approach. By means of numerical examples, we also demonstrate how the proposed methods can be employed to discover governing equations and energy-preserving models.
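To illustrate the integral-form idea with a toy baseline, the sketch below infers a quadratic model by plain least squares on a trapezoidal integral of the equations, thereby avoiding numerical time derivatives; the stability-enforcing parameterizations and gradient-based optimization discussed above are deliberately omitted, so this is not the paper's method.

```python
# Illustrative sketch: least-squares operator inference of a quadratic model
#   x' = A x + H (x kron x)
# using a trapezoidal integral form of the ODE (no numerical derivatives).
import numpy as np

def kron_rows(X):
    """Row-wise Kronecker products: each row x -> x kron x (length n*n)."""
    return np.einsum("ij,ik->ijk", X, X).reshape(X.shape[0], -1)

def infer_quadratic(X, t):
    """X: (m, n) state snapshots at times t (length m). Returns A (n, n), H (n, n*n)."""
    dt = np.diff(t)[:, None]
    lin = 0.5 * dt * (X[:-1] + X[1:])                         # trapezoid, linear part
    quad = 0.5 * dt * (kron_rows(X[:-1]) + kron_rows(X[1:]))  # trapezoid, quadratic part
    D = np.hstack([lin, quad])                                # data matrix
    Y = X[1:] - X[:-1]                                        # integrated left-hand side
    Theta, *_ = np.linalg.lstsq(D, Y, rcond=None)
    n = X.shape[1]
    return Theta[:n].T, Theta[n:].T                           # A, H
```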
In a topology optimization setting, design-dependent fluidic pressure loads pose several challenges, as their direction, magnitude, and location alter with the evolving topology. This paper offers a compact 100-line MATLAB code, TOPress, for topology optimization of structures subjected to fluidic pressure loads using the method of moving asymptotes. The code is intended for pedagogical purposes and aims to ease beginners' and students' entry into topology optimization with design-dependent fluidic pressure loads. TOPress is developed following the approach first reported in Kumar et al. (Struct Multidisc Optim 61(4):1637-1655, 2020). Darcy's law, in conjunction with a drainage term, is used to model the applied pressure load. The consistent nodal loads are determined from the obtained pressure field. The employed approach facilitates inexpensive computation of the load sensitivities using the adjoint-variable method. Compliance-minimization problems subject to a volume constraint are solved. The success and efficacy of the code are demonstrated on benchmark numerical examples involving pressure loads, wherein the importance of the load sensitivities is also shown. TOPress is organized into six main parts, is described in detail, and can be extended to solve different problems. Steps to include a projection filter are provided to achieve load-bearing designs close to~0-1. The code is provided in Appendix~B and can also be downloaded, along with its extensions, from \url{//github.com/PrabhatIn/TOPress}.
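In generic notation (exact density interpolations and boundary conditions omitted), the Darcy-plus-drainage pressure model underlying this approach takes the form
\[
\nabla\cdot\big(K(\bar{\rho})\,\nabla p\big)-D(\bar{\rho})\,\big(p-p_{\mathrm{ext}}\big)=0,
\qquad
\mathbf{q}=-K(\bar{\rho})\,\nabla p,
\]
where \(K(\bar{\rho})\) is a design-dependent flow coefficient, \(D(\bar{\rho})\) the drainage coefficient, and \(p_{\mathrm{ext}}\) the external pressure; the consistent nodal loads are assembled from the resulting pressure field and their sensitivities are computed with the adjoint-variable method.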
This work aims to improve the energy efficiency of decentralized learning by optimizing the mixing matrix, which controls the communication demands during the learning process. Through rigorous analysis based on a state-of-the-art decentralized learning algorithm, the problem is formulated as a bi-level optimization, with the lower level solved by graph sparsification. A solution with guaranteed performance is proposed for the special case of a fully-connected base topology, and a greedy heuristic is proposed for the general case. Simulations based on a real topology and dataset show that the proposed solution can lower the energy consumption at the busiest node by 54%-76% while maintaining the quality of the trained model.
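A rough sketch of a greedy sparsification heuristic in this flavour is given below; the Metropolis-weight mixing matrix and the spectral criterion are assumptions for illustration, not the paper's exact algorithm.

```python
# Illustrative greedy mixing-matrix sparsification (assumes Metropolis weights
# and a spectral-gap criterion); not the paper's exact algorithm.
import numpy as np
import networkx as nx

def metropolis_matrix(G, n):
    # Nodes are assumed to be labelled 0..n-1.
    W = np.zeros((n, n))
    for i, j in G.edges():
        W[i, j] = W[j, i] = 1.0 / (1 + max(G.degree[i], G.degree[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

def mixing_penalty(W, n):
    # Spectral norm of W - (1/n) 11^T: smaller means faster mixing.
    return np.linalg.norm(W - np.ones((n, n)) / n, 2)

def greedy_sparsify(G, budget):
    """Remove up to `budget` links, each time the one hurting mixing the least."""
    G = G.copy(); n = G.number_of_nodes()
    for _ in range(budget):
        best_edge, best_pen = None, np.inf
        for e in list(G.edges()):
            G.remove_edge(*e)
            if nx.is_connected(G):
                pen = mixing_penalty(metropolis_matrix(G, n), n)
                if pen < best_pen:
                    best_edge, best_pen = e, pen
            G.add_edge(*e)
        if best_edge is None:
            break
        G.remove_edge(*best_edge)
    return G
```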
This study develops a computationally efficient phase-field lattice Boltzmann (LB) model with the capability to simulate thermocapillary flows. The model was implemented in the open-source simulation framework waLBerla and extended to conduct the collision stage using central moments. The multiphase model was coupled with both a passive-scalar thermal LB scheme and a Runge-Kutta (RK) solution of the energy equation in order to resolve temperature-dependent surface tension phenomena. Various lattice stencils (D3Q7, D3Q15, D3Q19, D3Q27) were tested for the passive-scalar LB, and both the second- and fourth-order RK methods were investigated. No significant difference was observed in the accuracy of the LB and RK schemes. The passive-scalar D3Q7 LB discretisation tended to provide computational benefits, while the second-order RK scheme was superior in memory usage. This paper makes contributions to the modelling of thermocapillary flows and to understanding the behaviour of droplet capture with thermal sources analogous to thermal tweezers. Four primary contributions to the literature are identified. First, a new 3D thermocapillary, central-moment phase-field LB model is presented and implemented in the open-source software waLBerla. Second, the accuracy and computational performance of various techniques to resolve the energy equation for multiphase, incompressible fluids are investigated. Third, the dynamic droplet transport behaviour in the presence of thermal sources is studied, and insight is provided into the potential to manipulate droplets via local domain heating. Finally, a concise analysis of the computational performance, together with near-perfect scaling results on NVIDIA and AMD GPU clusters, is shown. This research enables the detailed study of droplet manipulation and control in thermocapillary devices.
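Thermocapillary (Marangoni) effects are commonly introduced through a temperature-dependent surface tension; a standard linear closure (our notation, not necessarily the exact form used here) is
\[
\sigma(T)=\sigma_{\mathrm{ref}}+\sigma_T\,\big(T-T_{\mathrm{ref}}\big),
\qquad
\sigma_T=\left.\frac{\partial\sigma}{\partial T}\right|_{T_{\mathrm{ref}}},
\]
with \(\sigma_T\) typically negative, so interfacial temperature gradients drive flow from warm (low \(\sigma\)) toward cold (high \(\sigma\)) regions; this is the mechanism exploited when capturing droplets with localized thermal sources.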
Deploying end-to-end speech recognition models with limited computing resources remains challenging, despite their impressive performance. Given the gradual increase in model size and the wide range of model applications, selectively executing model components for different inputs to improve inference efficiency is of great interest. In this paper, we propose a dynamic layer-skipping method that leverages the CTC blank output from intermediate layers to trigger the skipping of the last few encoder layers for frames with high blank probabilities. Furthermore, we factorize the CTC output distribution and perform knowledge distillation on intermediate layers to reduce computation and improve recognition accuracy. Experimental results show that by utilizing the CTC blank, the encoder layer depth can be adjusted dynamically, resulting in a 29% acceleration of CTC model inference with minor performance degradation.
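A minimal sketch of the frame-level skipping logic is shown below; the module names, the intermediate CTC head, and the threshold are illustrative assumptions, not the paper's implementation.

```python
# Illustrative frame-level layer skipping driven by intermediate CTC blank
# probabilities (not the paper's implementation; the threshold is a free parameter).
import torch
import torch.nn as nn

class SkippingEncoder(nn.Module):
    def __init__(self, layers, inter_ctc_head, skip_after, blank_id=0, threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(layers)   # full stack of encoder layers
        self.inter_ctc = inter_ctc_head       # linear head producing CTC logits
        self.skip_after = skip_after          # index of the intermediate layer
        self.blank_id = blank_id
        self.threshold = threshold

    def forward(self, x):                     # x: (batch, time, dim)
        for layer in self.layers[: self.skip_after]:
            x = layer(x)
        # Blank posterior from the intermediate CTC output decides which frames
        # may bypass the remaining layers.
        p_blank = self.inter_ctc(x).softmax(-1)[..., self.blank_id]  # (batch, time)
        keep = (p_blank < self.threshold).unsqueeze(-1)              # frames to process
        y = x
        for layer in self.layers[self.skip_after:]:
            y = layer(y)
        # High-blank frames retain their intermediate representation (skipped);
        # a real implementation would gather only the kept frames to save compute.
        return torch.where(keep, y, x)
```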
Agent-based modeling and simulation has evolved into a powerful tool for modeling complex systems, offering insights into emergent behaviors and interactions among diverse agents. Integrating large language models into agent-based modeling and simulation presents a promising avenue for enhancing simulation capabilities. This paper surveys the landscape of utilizing large language models in agent-based modeling and simulation, examining the challenges and promising future directions. Since this is an interdisciplinary field, we first introduce the background of agent-based modeling and simulation and of large language model-empowered agents. We then discuss the motivation for applying large language models to agent-based simulation and systematically analyze the challenges in environment perception, human alignment, action generation, and evaluation. Most importantly, we provide a comprehensive overview of recent work on large language model-empowered agent-based modeling and simulation across multiple scenarios, which can be divided into four domains: cyber, physical, social, and hybrid, covering simulations of both real-world and virtual environments. Finally, since this area is new and quickly evolving, we discuss the open problems and promising future directions.
Most object recognition approaches predominantly focus on learning discriminative visual patterns while overlooking the holistic object structure. Though important, structure modeling usually requires significant manual annotation and is therefore labor-intensive. In this paper, we propose to "look into object" (explicitly yet intrinsically model the object structure) by incorporating self-supervision into the traditional framework. We show that the recognition backbone can be substantially enhanced for more robust representation learning, without any extra annotation cost or loss of inference speed. Specifically, we first propose an object-extent learning module that localizes the object according to the visual patterns shared among instances of the same category. We then design a spatial context learning module that models the internal structure of the object by predicting relative positions within the extent. These two modules can be easily plugged into any backbone network during training and detached at inference time. Extensive experiments show that our look-into-object approach (LIO) achieves large performance gains on a number of benchmarks, including generic object recognition (ImageNet) and fine-grained object recognition tasks (CUB, Cars, Aircraft). We also show that this learning paradigm is highly generalizable to other tasks such as object detection and segmentation (MS COCO). Project page: //github.com/JDAI-CV/LIO.
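A rough sketch of how such auxiliary heads can be attached during training and dropped at inference is given below; the head designs and outputs are placeholders, not the LIO implementation.

```python
# Illustrative plug-in of auxiliary self-supervised heads on a backbone
# (placeholders, not the LIO implementation); the heads are used only in training.
import torch
import torch.nn as nn

class RecognitionModel(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                      # any CNN returning a feature map
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Auxiliary heads: object-extent mask and relative-position prediction.
        self.extent_head = nn.Conv2d(feat_dim, 1, kernel_size=1)
        self.context_head = nn.Conv2d(feat_dim, 2, kernel_size=1)  # (dx, dy) offsets

    def forward(self, x):
        fmap = self.backbone(x)                       # (B, C, H, W)
        logits = self.classifier(fmap.mean(dim=(2, 3)))
        if not self.training:                         # heads detached at inference
            return logits
        extent = self.extent_head(fmap)               # pseudo object-extent map
        rel_pos = self.context_head(fmap)             # relative positions within extent
        return logits, extent, rel_pos
```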