In this paper we consider an approach to improve the performance of exponential integrators/Lawson schemes in cases where the solution of a related, but usually much simpler, problem can be computed efficiently. While for implicit methods such an approach is common (e.g. by using preconditioners), for exponential integrators this has proven more challenging. Here we propose to extract a constant coefficient differential operator from advection-diffusion-reaction equations for which we are then able to compute the required matrix functions efficiently. Both a linear stability analysis and numerical experiments show that the resulting schemes can be unconditionally stable. In fact, we find that exponential integrators and Lawson schemes can have better stability properties than similarly constructed implicit-explicit schemes. We also propose new Lawson type integrators that further improve on these stability properties. The effectiveness of the approach is highlighted by a number of numerical examples in two and three space dimensions.
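As a rough illustration of the idea (not the authors' implementation), the following sketch extracts a constant-coefficient diffusion operator from a 1D reaction-diffusion problem on a periodic domain and applies a Lawson-Euler step, with the matrix exponential computed cheaply because the operator is diagonal in Fourier space. The grid size, diffusivity, and cubic reaction term are illustrative choices.

```python
import numpy as np

# Hypothetical 1D setup: u_t = d*u_xx + f(u) on a periodic domain, with the
# constant-coefficient diffusion operator A = d*d_xx extracted so that e^{hA}
# is diagonal in Fourier space.
n, d, h = 128, 0.1, 0.01                            # grid size, diffusivity, step
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi  # angular wavenumbers
expA = np.exp(-d * k**2 * h)                        # e^{hA} in Fourier space

def lawson_euler_step(u, f):
    """One Lawson-Euler step: u_{n+1} = e^{hA} (u_n + h f(u_n))."""
    return np.fft.ifft(expA * np.fft.fft(u + h * f(u))).real

u = np.sin(x)
for _ in range(100):
    u = lawson_euler_step(u, lambda v: v - v**3)    # illustrative reaction term
```

Because the extracted operator has constant coefficients, the only cost beyond an explicit step is a pair of FFTs, which is what makes this approach competitive with preconditioned implicit methods.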
This paper proposes an actor-critic algorithm for controlling the temperature of a battery pack using a cooling fluid. The system is modeled by coupled 1D partial differential equations (PDEs) with a controlled advection term that determines the speed of the cooling fluid. The Hamilton-Jacobi-Bellman (HJB) equation is a PDE that characterizes the optimality of the value function and determines an optimal controller. We propose an algorithm that treats the value network as a Physics-Informed Neural Network (PINN) to solve the continuous-time HJB equation, rather than a discrete-time Bellman optimality equation, and we derive from the value function an optimal controller for the environment. Our experiments show that a hybrid-policy method, which updates the value network using the HJB equation and updates the policy network identically to PPO, achieves the best results in the control of this PDE system.
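The key property exploited above is that the HJB residual vanishes exactly at the optimal value function, so it can serve as a PINN-style training loss. A minimal sketch of this fact on a scalar LQR toy problem (an illustrative stand-in, not the paper's battery model): the dynamics, cost weights, and quadratic value ansatz below are all assumptions.

```python
import numpy as np

# Toy scalar LQR problem: dynamics x' = a*x + b*u, running cost q*x^2 + r*u^2.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
# Positive root of the algebraic Riccati equation b^2 p^2 / r - 2 a p - q = 0,
# which makes V(x) = p*x^2 the optimal value function.
p = r * (a + np.sqrt(a**2 + b**2 * q / r)) / b**2

def hjb_residual(x):
    """Stationary HJB residual 0 = min_u [q x^2 + r u^2 + V'(x)(a x + b u)],
    evaluated at the minimizing control u* = -b V'(x) / (2 r)."""
    dV = 2 * p * x
    u = -b * dV / (2 * r)
    return q * x**2 + r * u**2 + dV * (a * x + b * u)

xs = np.linspace(-2, 2, 9)
# The residual is identically zero for the optimal V; a PINN-style value loss
# penalizes this residual at sampled states instead of a discrete Bellman error.
```

For the PDE environment in the abstract, the same residual idea applies with the value network's automatic derivatives in place of the closed-form `dV`.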
Neural closure models have recently been proposed as a method for efficiently approximating small scales in multiscale systems with neural networks. The choice of loss function and associated training procedure has a large effect on the accuracy and stability of the resulting neural closure model. In this work, we systematically compare three distinct procedures: "derivative fitting", "trajectory fitting" with discretise-then-optimise, and "trajectory fitting" with optimise-then-discretise. Derivative fitting is conceptually the simplest and computationally the most efficient approach and is found to perform reasonably well on one of the test problems (Kuramoto-Sivashinsky) but poorly on the other (Burgers). Trajectory fitting is computationally more expensive but is more robust and is therefore the preferred approach. Of the two trajectory fitting procedures, the discretise-then-optimise approach produces more accurate models than the optimise-then-discretise approach. While the optimise-then-discretise approach can still produce accurate models, care must be taken in choosing the length of the trajectories used for training, in order to train the models on long-term behaviour while still producing reasonably accurate gradients during training. Two existing theorems are interpreted in a novel way that gives insight into the long-term accuracy of a neural closure model based on how accurate it is in the short term.
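The distinction between the two fitting procedures can be sketched on a toy closure problem (illustrative only; the paper uses neural closures for PDEs). Here the missing closure is modeled with a one-parameter ansatz, derivative fitting is a closed-form least-squares problem on the time derivatives, and a grid search over rolled-out trajectories stands in for backpropagating through the solver in the discretise-then-optimise approach.

```python
import numpy as np

# True dynamics du/dt = -u; known coarse term f(u) = -2u; the missing closure
# is c(u) = u, modeled by the hypothetical ansatz c_theta(u) = theta * u.
h, steps = 0.05, 40
t = h * np.arange(steps + 1)
u_data = np.exp(-t)                       # exact trajectory, u(0) = 1
dudt_data = -u_data                       # exact time derivatives

# Derivative fitting: least squares on f(u) + theta*u = du/dt.
theta_deriv = np.sum(u_data * (dudt_data + 2 * u_data)) / np.sum(u_data**2)

# Trajectory fitting: compare rolled-out forward-Euler trajectories to the
# data; the grid search replaces differentiating through the solver.
def rollout(theta):
    u, traj = 1.0, [1.0]
    for _ in range(steps):
        u = u + h * (-2 * u + theta * u)
        traj.append(u)
    return np.array(traj)

thetas = np.linspace(0.5, 1.5, 201)
losses = [np.sum((rollout(th) - u_data) ** 2) for th in thetas]
theta_traj = thetas[np.argmin(losses)]
```

Note that the trajectory fit compensates for the solver's discretisation error (its optimum is slightly offset from the true closure parameter), which is exactly the kind of interaction between training procedure and discretisation the abstract examines.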
The solution of computational fluid dynamics problems is among the most computationally demanding tasks, especially in the case of complex geometries and turbulent flow regimes. We propose to use Tensor Train (TT) methods, which possess logarithmic complexity in the problem size and share strong structural similarities with quantum algorithms in their representation of data. We develop the Tensor train Finite Element Method -- TetraFEM -- and an explicit numerical scheme for the solution of the incompressible Navier-Stokes equations via Tensor Trains. We test this approach on the simulation of the mixing of liquids in a T-shaped mixer, which, to our knowledge, is the first application of tensor methods to such a non-trivial geometry. As expected, we achieve exponential compression in memory of all FEM matrices and demonstrate an exponential speed-up compared to a conventional FEM implementation on dense meshes. In addition, we discuss the possibility of extending this method to quantum computers to solve more complex problems. This paper is based on work we conducted for Evonik Industries AG.
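The compression mechanism behind TT methods can be sketched with a minimal TT-SVD (a generic textbook construction, not the TetraFEM code): a d-way tensor is decomposed into a train of small cores by successive reshapes and truncated SVDs, and low-rank structure translates into exponential memory savings.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a tensor into TT cores via successive truncated SVDs."""
    shape, cores, r, c = tensor.shape, [], 1, tensor
    for n in shape[:-1]:
        U, s, Vt = np.linalg.svd(c.reshape(r * n, -1), full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))     # truncate small modes
        cores.append(U[:, :rank].reshape(r, n, rank))
        c, r = s[:rank, None] * Vt[:rank], rank
    cores.append(c.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# A rank-1 tensor compresses to TT ranks (1, 1): 3*8 core entries instead of
# 8^3 full entries, and the gap widens exponentially with dimension.
np.random.seed(0)
a, b, c = np.random.rand(8), np.random.rand(8), np.random.rand(8)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(T)
```

FEM operators on tensor-product-structured meshes often admit low TT ranks in exactly this sense, which is the source of the exponential compression reported in the abstract.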
The reconfigurable intelligent surface (RIS) is an emerging technology that is changing how wireless networks are conceived; its potential benefits and applications are therefore under intense research and investigation. In this letter, we focus on electromagnetically consistent models for RISs, building on a recently proposed model based on mutually coupled loaded wire dipoles. While existing related research focuses on free-space wireless channels, thereby ignoring interactions between the RIS and scattering objects present in the propagation environment, we introduce an RIS-aided channel model that is applicable to more realistic scenarios, in which the scattering objects are themselves modeled as loaded wire dipoles. By adjusting the parameters of the wire dipoles, the properties of general natural and engineered material objects can be modeled. Based on this model, we introduce a provably convergent and efficient iterative algorithm that jointly optimizes the RIS and transmitter configurations to maximize the system sum-rate. Extensive numerical results show the net performance improvement provided by the proposed method compared with existing optimization algorithms.
We propose an efficient, accurate and robust implicit solver for the incompressible Navier-Stokes equations, based on a DG spatial discretization and on the TR-BDF2 method for time discretization. The effectiveness of the method is demonstrated in a number of classical benchmarks, which highlight its superior efficiency with respect to other widely used implicit approaches. The parallel implementation of the proposed method in the framework of the deal.II software package allows for accurate and efficient adaptive simulations in complex geometries, which makes the proposed solver attractive for large scale industrial applications.
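The TR-BDF2 scheme named above combines a trapezoidal stage with a BDF2 stage. A minimal sketch on the linear test problem u' = Au (an illustrative scalar stiff test, unrelated to the deal.II implementation) shows the structure of the two stages and the L-stable damping of stiff decay:

```python
import numpy as np

g = 2 - np.sqrt(2)   # standard stage fraction making the scheme L-stable

def trbdf2_step(A, u, h):
    """One TR-BDF2 step for u' = A u."""
    I = np.eye(len(u))
    # Stage 1: trapezoidal rule over [t, t + g*h]
    u_mid = np.linalg.solve(I - 0.5 * g * h * A, (I + 0.5 * g * h * A) @ u)
    # Stage 2: BDF2 over the nodes t, t + g*h, t + h
    rhs = u_mid / (g * (2 - g)) - (1 - g) ** 2 / (g * (2 - g)) * u
    return np.linalg.solve(I - (1 - g) / (2 - g) * h * A, rhs)

# Stiff decay test: the numerical solution decays monotonically toward zero.
A = np.array([[-50.0]])
u, h = np.array([1.0]), 0.01
for _ in range(100):
    u = trbdf2_step(A, u, h)
```

In the Navier-Stokes setting of the abstract, `A` is replaced by the DG spatial operator and each stage requires a (preconditioned) linear solve, but the two-stage structure is the same.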
We consider the numerical approximation of a sharp-interface model for two-phase flow, which is given by the incompressible Navier-Stokes equations in the bulk domain together with the classical interface conditions on the interface. We propose structure-preserving finite element methods for the model, meaning in particular that volume preservation and energy decay are satisfied on the discrete level. For the evolving fluid interface, we employ parametric finite element approximations that introduce an implicit tangential velocity to improve the quality of the interface mesh. For the two-phase Navier-Stokes equations, we consider two different approaches: an unfitted and a fitted finite element method, respectively. In the unfitted approach, the constructed method is based on an Eulerian weak formulation, while in the fitted approach a novel arbitrary Lagrangian-Eulerian (ALE) weak formulation is introduced. Using suitable discretizations of these two formulations, we introduce two finite element methods and prove their structure-preserving properties. Numerical results are presented to show the accuracy and efficiency of the introduced methods.
We implement a method from computer science to address a challenge in Paleolithic archaeology: how to infer differences in cognition from material culture. Archaeological material culture is linked to cognition: more complex ancient technologies are assumed to have required more complex cognition. We present an application of Petri net analysis to compare Neanderthal tar production technologies and tie the results to cognitive requirements. We applied three complexity metrics, each relying on its own definition of complexity, to the modelled production sequences. Based on the results, we suggest that Neanderthal working memory requirements may have been similar to human preferences regarding working memory use today. This method also enables us to distinguish the higher-order cognitive functions, combining traits such as planning, inhibitory control, and learning, that were likely required by different ancient technological processes. The Petri net approach can contribute to our understanding of technological and cognitive evolution, as it can be applied to different materials and technologies, across time and species.
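For readers unfamiliar with the formalism, a Petri net models a process as places holding tokens and transitions that fire when their input places are marked. The following minimal sketch encodes a hypothetical two-step production sequence (the names and the crude transition-count metric are illustrative; the paper's tar-production models and its three complexity metrics are far richer):

```python
# Places hold token counts; a transition fires when every input place holds
# at least the required number of tokens.
def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical two-step sequence: gather bark, then heat it to produce tar.
transitions = {
    "gather_bark": ({"start": 1}, {"bark": 1}),
    "heat_bark":   ({"bark": 1, "fire": 1}, {"tar": 1}),
}
marking = {"start": 1, "fire": 1}
for name, (pre, post) in transitions.items():
    if enabled(marking, pre):
        marking = fire(marking, pre, post)

# A crude structural complexity metric: the number of transitions in the net.
complexity = len(transitions)
```

Richer metrics on the same representation (e.g., counting concurrently marked places as a proxy for items held in working memory) are what allow production sequences to be compared in cognitive terms.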
Lattice Boltzmann schemes are efficient numerical methods for solving a broad range of problems formulated as conservation laws. However, they suffer from a chronic lack of clear theoretical foundations. In particular, the consistency analysis and the derivation of the modified equations are still open issues. This has so far prevented the formulation of an analogue of the Lax equivalence theorem for lattice Boltzmann schemes. We propose a rigorous consistency study and the derivation of the modified equations for any lattice Boltzmann scheme under acoustic and diffusive scalings. This is done by passing from a kinetic (lattice Boltzmann) to a macroscopic (Finite Difference) point of view at the fully discrete level, in order to eliminate the non-conserved moments relaxing away from equilibrium. We rewrite the lattice Boltzmann scheme as a multi-step Finite Difference scheme on the conserved variables, as introduced in our previous contribution. We then perform the usual Finite Difference analyses by exploiting its precise characterization in terms of matrices of Finite Difference operators. Although we present the derivation of the modified equations only up to second order under the acoustic scaling, we provide all the elements needed to extend it to higher orders, since the kinetic-macroscopic connection is established at the fully discrete level. Finally, we show that our strategy yields, in a more rigorous setting, the same results as previous works in the literature.
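The kinetic-to-macroscopic rewrite can be seen in its simplest instance (a standard textbook example, not the paper's general construction): a D1Q2 lattice Boltzmann scheme for linear advection with relaxation parameter 1 collapses, on the conserved moment alone, to the classical Lax-Friedrichs Finite Difference scheme.

```python
import numpy as np

# D1Q2 scheme for u_t + V u_x = 0: two populations moving at speeds +lam and
# -lam, collided to equilibrium (relaxation parameter 1), then streamed.
n, V, lam = 64, 0.5, 1.0                  # cells, advection speed, lattice speed
rho = np.exp(-10 * (np.linspace(0, 1, n, endpoint=False) - 0.5) ** 2)

def lbm_step(rho):
    fp = 0.5 * rho * (1 + V / lam)        # equilibrium population at +lam
    fm = 0.5 * rho * (1 - V / lam)        # equilibrium population at -lam
    return np.roll(fp, 1) + np.roll(fm, -1)   # stream, then take the moment

def lax_friedrichs_step(rho):
    c = V / lam                           # Courant number (dt = dx / lam)
    return 0.5 * (np.roll(rho, 1) + np.roll(rho, -1)) \
        + 0.5 * c * (np.roll(rho, 1) - np.roll(rho, -1))

r1, r2 = rho.copy(), rho.copy()
for _ in range(20):
    r1, r2 = lbm_step(r1), lax_friedrichs_step(r2)
```

The two updates agree to rounding error, illustrating how eliminating the non-conserved moment turns a kinetic scheme into a Finite Difference scheme on the conserved variable; with general relaxation parameters the resulting scheme becomes multi-step, which is the setting of the abstract.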
In this paper, we implement exponential integrators, specifically Integrating Factor (IF) and Exponential Time Differencing (ETD) methods, using pseudo-spectral techniques to solve phase-field equations within a Python framework. These exponential integrators have showcased robust performance and accuracy when addressing stiff nonlinear partial differential equations. We compare these integrators to the well-known implicit-explicit (IMEX) Euler integrators used in phase-field modeling. The synergy between pseudo-spectral techniques and exponential integrators yields significant benefits for modeling intricate systems governed by phase-field dynamics, such as solidification processes and pattern formation. Our comprehensive Python implementation illustrates the effectiveness of this combined approach in solving phase-field model equations. The results obtained from this implementation highlight the accuracy and computational advantages of the ETD method compared to other numerical techniques.
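A minimal sketch of the ETD approach combined with pseudo-spectral techniques (illustrative only; the equation, parameters, and first-order ETD variant are assumptions, not the paper's exact setup) on a 1D Allen-Cahn-type phase-field equation:

```python
import numpy as np

# u_t = eps*u_xx + u - u^3 on a periodic domain. The linear part L is treated
# exactly in Fourier space; the nonlinearity enters through the phi_1 function.
n, eps, h = 128, 0.01, 0.1
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi
L = -eps * k**2 + 1.0                     # Fourier symbol of the linear part
z = h * L
# phi_1(z) = (e^z - 1)/z, with the removable singularity at z = 0 handled.
phi1 = np.where(np.abs(z) > 1e-8, np.expm1(z) / np.where(z == 0, 1, z), 1.0)

def etd1_step(u):
    """ETD1: u_{n+1} = e^{hL} u_n + h phi_1(hL) N(u_n), with N(u) = -u^3."""
    u_hat = np.fft.fft(u)
    n_hat = np.fft.fft(-u**3)
    return np.fft.ifft(np.exp(z) * u_hat + h * phi1 * n_hat).real

u = 0.1 * np.cos(x)
for _ in range(200):
    u = etd1_step(u)                      # solution saturates toward +/- 1
```

Treating the stiff linear part exactly is what lets the scheme take time steps far larger than an explicit method would allow, while the FFT keeps the per-step cost at O(n log n).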
In this work, we develop a new algorithm to solve large-scale incompressible time-dependent fluid--structure interaction (FSI) problems using a matrix-free finite element method in an arbitrary Lagrangian--Eulerian (ALE) frame of reference. We derive a semi-implicit time integration scheme that improves on the geometry-convective explicit (GCE) scheme for problems involving the interaction between incompressible hyperelastic solids and incompressible fluids. The proposed algorithm relies on the reformulation of the time-discrete problem as a generalized Stokes problem with strongly variable coefficients, for which optimal preconditioners have recently been developed. The resulting algorithm is scalable, optimal, and robust: we test our implementation on model problems that mimic the classical Turek benchmarks in two and three dimensions, and we investigate timing and scalability results.