The proposed two-dimensional geometrically exact beam element extends our previous work by including the effects of shear distortion as well as of distributed forces and moments acting along the beam. The general flexibility-based formulation exploits the kinematic equations combined with the inverted sectional equations and the integrated form of the equilibrium equations. The resulting set of three first-order differential equations is discretized by finite differences and the boundary value problem is converted into an initial value problem using the shooting method. Owing to the special structure of the governing equations, the scheme remains explicit even though the first derivatives are approximated by central differences, leading to high accuracy. The main advantage of the adopted approach is that the error can be efficiently reduced by refining the computational grid used for finite differences at the element level while keeping the number of global degrees of freedom low. Efficiency is further increased by dealing directly with the global centerline coordinates and the sectional inclination with respect to the global axes as the primary unknowns at the element level, thereby avoiding transformations between local and global coordinates. Two formulations of the sectional equations, referred to as the Reissner and Ziegler models, are presented and compared. In particular, the stability of an axially loaded beam/column is investigated and the connections to the Haringx and Engesser stability theories are discussed. Both approaches are tested in a series of numerical examples, which illustrate (i) high accuracy with quadratic convergence as the spatial discretization is refined, (ii) easy modeling of variable stiffness along the element (such as rigid joint offsets), and (iii) efficient and accurate characterization of buckling and post-buckling behavior.
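To make the shooting idea concrete, here is a minimal Python sketch: a two-point boundary value problem (the classical Bratu problem stands in for the beam equations, which are not reproduced here) is converted into an initial value problem in the unknown initial slope, and a root finder enforces the far-end boundary condition. The model problem and all names are illustrative, not the element formulation above.

```python
# Minimal shooting-method sketch: the Bratu problem u'' + exp(u) = 0 with
# u(0) = u(1) = 0 stands in for the beam equations. The BVP is converted
# into an IVP in the unknown initial slope, which a root finder adjusts
# until the far-end boundary condition holds.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def end_value(slope):
    """Integrate the IVP with guessed initial slope u'(0); return u(1)."""
    sol = solve_ivp(lambda x, y: [y[1], -np.exp(y[0])],
                    (0.0, 1.0), [0.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Enforce the boundary condition u(1) = 0 on the lower solution branch.
slope = brentq(end_value, 0.1, 2.0)
print(f"u'(0) = {slope:.6f}")  # approx 0.549 on the lower Bratu branch
```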
We present an explicit temporal discretization of particle-in-cell schemes for the Vlasov equation that results in exact energy conservation when combined with an appropriate spatial discretization. The scheme is inspired by a simple, second-order explicit scheme that conserves energy exactly in the Eulerian context. We show that a direct translation to particle-in-cell does not result in strict conservation, but derive a simple correction based on an analytically solvable optimization problem that recovers conservation. While this optimization problem is not guaranteed to have a real solution for every particle, we provide a correction that makes imaginary values extremely rare in practice while keeping fractional energy errors at the $\mathcal{O}(10^{-12})$ level for practical simulation parameters. We present the scheme in both electrostatic -- where we use the Amp\`{e}re formulation -- and electromagnetic contexts. With an electromagnetic field solve, the field update is most naturally linearly implicit, but the more computationally intensive particle update remains fully explicit. We also show how the scheme can be extended to use the fully explicit leapfrog and pseudospectral analytic time-domain (PSATD) field solvers. The scheme is tested on standard kinetic plasma problems, confirming its conservation properties.
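As a schematic of the kind of per-particle correction described (not the paper's actual update), one can picture a tentative explicit push followed by a scalar rescaling obtained from a quadratic energy-matching condition; a negative right-hand side plays the role of the "imaginary" case, where the sketch falls back to the uncorrected push. `corrected_push` and `dE_target` are hypothetical names.

```python
import numpy as np

def corrected_push(v_old, accel, dt, dE_target):
    """Hypothetical per-particle correction: take a tentative explicit push,
    then rescale the new velocity by a factor g chosen so the kinetic-energy
    change matches a target. The condition
        0.5*|g*v_star|^2 - 0.5*|v_old|^2 = dE_target
    is a quadratic in g; no real g exists when g^2 would be negative."""
    v_star = v_old + accel * dt               # tentative explicit update
    g2 = (0.5 * v_old @ v_old + dE_target) / (0.5 * v_star @ v_star)
    if g2 < 0.0:                              # 'imaginary' correction factor
        return v_star                         # fall back to uncorrected push
    return np.sqrt(g2) * v_star
```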
Splitting methods are widely used for solving initial value problems (IVPs) due to their ability to simplify complicated evolutions into more manageable subproblems which can be solved efficiently and accurately. Traditionally, these methods are derived using analytic and algebraic techniques from numerical analysis, including truncated Taylor series and their Lie-algebraic analogue, the Baker--Campbell--Hausdorff formula. These tools enable the development of high-order numerical methods that provide exceptional accuracy for small timesteps. Moreover, these methods often (nearly) conserve important physical invariants, such as mass, unitarity, and energy. However, in many practical applications computational resources are limited, so it is crucial to identify methods that achieve the best accuracy within a fixed computational budget, which may require taking relatively large timesteps. In this regime, high-order methods derived with the traditional tools often exhibit large errors, since they are designed only to be asymptotically optimal. Machine learning techniques offer a potential solution, since they can be trained to solve a given IVP efficiently with fewer computational resources. However, they are often purely data-driven, come with limited convergence guarantees in the small-timestep regime, and do not necessarily conserve physical invariants. In this work, we propose a framework for finding machine-learned splitting methods that are computationally efficient for large timesteps and have provable convergence and conservation guarantees in the small-timestep limit. We demonstrate numerically that the learned methods, which by construction converge quadratically in the timestep size, can be significantly more efficient than established methods for the Schr\"{o}dinger equation if the computational budget is limited.
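For reference, the classical second-order baseline that learned splittings of this kind are measured against is Strang splitting; a minimal pseudospectral sketch for a one-dimensional Schrödinger equation with a placeholder potential might look as follows (grid size, timestep, and potential are arbitrary choices, not values from the paper).

```python
import numpy as np

# Strang splitting for i u_t = -u_xx + V(x) u on a periodic grid: half
# potential step, full kinetic step in Fourier space, half potential step.
N, L, dt, steps = 256, 2 * np.pi, 1e-3, 1000
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = np.cos(x)                                   # placeholder potential
u = np.exp(-10 * (x - np.pi) ** 2).astype(complex)
norm0 = np.linalg.norm(u)

half_V = np.exp(-0.5j * dt * V)                 # exp(-i dt/2 V)
kinetic = np.exp(-1j * dt * k ** 2)             # exp(-i dt k^2)
for _ in range(steps):
    u = half_V * u
    u = np.fft.ifft(kinetic * np.fft.fft(u))
    u = half_V * u

print("norm drift:", abs(np.linalg.norm(u) / norm0 - 1.0))  # ~ roundoff
```

Each substep is unitary, so the scheme preserves mass (the $L^2$ norm) to machine precision, which illustrates the kind of conservation property the learned methods are constructed to retain.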
In this work, we present a model order reduction technique for nonlinear structures assembled from components. The reduced-order model is constructed by reducing the substructures with proper orthogonal decomposition (POD) and connecting them by a mortar-tied contact formulation. The snapshots for the substructure projection matrices are computed on the substructure level by POD, using a random sampling procedure based on a parametrization of the boundary conditions. To reduce the computational effort of the snapshot computation, full-order simulations of the substructures are computed only when the error of the reduced solution exceeds a threshold. In numerical examples, we show the accuracy and efficiency of the method for nonlinear problems involving material and geometric nonlinearity as well as non-matching meshes. We are able to predict solutions of systems that were not included in our snapshot computations.
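A minimal sketch of the POD ingredient, assuming snapshots are collected column-wise and the basis size is chosen by a relative energy tolerance; the snapshot matrix and tolerance below are placeholders, not the paper's substructure data.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-8):
    """POD via thin SVD of the snapshot matrix (columns are snapshots):
    keep the smallest basis capturing a 1 - tol fraction of the energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

S = np.random.rand(1000, 40)            # placeholder substructure snapshots
V = pod_basis(S)                        # n x r projection matrix
q = V.T @ S[:, 0]                       # reduced coordinates of one snapshot
print(np.linalg.norm(V @ q - S[:, 0]))  # small reconstruction error
```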
Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking, and are exactly the formulae in the modal logics characterizing classic behavioural equivalences and preorders for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some finite, loop-free process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek's branching-time spectrum. Since characteristic formulae in each of those logics are exactly the consistent and prime ones, it presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, this article also studies the complexity of constructing characteristic formulae in the modal logics characterizing simulation-based semantics, both when such formulae are presented in explicit form and via systems of equations.
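For orientation, one standard recursive construction of characteristic formulae for the simulation preorder over a finite, loop-free process $p$ reads as follows (empty disjunctions are read as $\mathsf{ff}$):

```latex
% Characteristic formula for the simulation preorder of a finite,
% loop-free process p with transitions p -a-> p':
\chi(p) \;=\; \bigwedge_{a \in \mathit{Act}} \, [a] \bigvee_{p \xrightarrow{a} p'} \chi(p')
```

so that $q$ is simulated by $p$ precisely when $q \models \chi(p)$; such formulae are consistent and prime, which is the characterization the complexity results build on.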
This paper presents a new Bayesian framework for quantifying discretization errors in numerical solutions of ordinary differential equations. By modelling the errors as random variables, we impose a monotonicity constraint on the variances, referred to as discretization error variances. The key to our approach is the use of a shrinkage prior for the variances coupled with variable transformations. This methodology extends existing Bayesian isotonic regression techniques to tackle the challenge of estimating the variances of a normal distribution. An additional key feature is the use of a Gaussian mixture model for the $\log$-$\chi^2_1$ distribution, enabling the development of an efficient Gibbs sampling algorithm for the corresponding posterior.
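As a sketch of the auxiliary-mixture idea, the step below samples mixture-component indicators given transformed data; the three-component mixture parameters are crude placeholders standing in for a tabulated Gaussian-mixture approximation of the $\log$-$\chi^2_1$ density, and all names are illustrative rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder 3-component mixture (weights w, means m, variances v); a real
# implementation would use published parameter tables for log-chi^2_1.
w = np.array([0.2, 0.5, 0.3])
m = np.array([-5.0, -1.0, 0.7])
v = np.array([4.0, 1.5, 0.6])

def sample_indicators(y, log_var):
    """One Gibbs step: sample mixture indicators given y = log(e_i^2) and
    the current log-variances, so that y - log_var ~ sum_k w_k N(m_k, v_k)."""
    resid = y - log_var                       # approximately log-chi^2_1
    logp = (np.log(w) - 0.5 * np.log(2 * np.pi * v)
            - 0.5 * (resid[:, None] - m) ** 2 / v)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    u = rng.random(len(y))
    return (p.cumsum(axis=1) > u[:, None]).argmax(axis=1)
```

Conditional on the indicators, the model is linear-Gaussian in the transformed variances, which is what makes the remaining Gibbs updates tractable.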
We perform a quantitative assessment of different strategies to compute the contribution due to surface tension in incompressible two-phase flows using a conservative level set (CLS) method. More specifically, we compare classical approaches, such as the direct computation of the curvature from the level set or the Laplace--Beltrami operator, with an evolution equation for the mean curvature recently proposed in the literature. We consider the test case of a static bubble, for which an exact solution for the pressure jump across the interface is available, and the test case of an oscillating bubble, highlighting the pros and cons of the different approaches.
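A minimal sketch of the "direct" curvature computation being compared, $\kappa = \nabla \cdot (\nabla\phi/|\nabla\phi|)$, using centred differences on a uniform grid; this illustrates the classical approach on the static-bubble test, not the paper's CLS implementation.

```python
import numpy as np

def mean_curvature(phi, h):
    """kappa = div( grad(phi)/|grad(phi)| ) on a uniform 2-D grid with
    spacing h, using centred differences."""
    px, py = np.gradient(phi, h)
    norm = np.sqrt(px ** 2 + py ** 2) + 1e-12  # avoid division by zero
    nx, ny = px / norm, py / norm
    return np.gradient(nx, h, axis=0) + np.gradient(ny, h, axis=1)

# static-bubble check: for a circle of radius R, kappa = 1/R at the interface
h = 1.0 / 256
g = np.arange(0.0, 1.0, h)
X, Y = np.meshgrid(g, g, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.25
print(mean_curvature(phi, h)[128, 64])  # on the interface: approx 1/0.25 = 4
```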
Moist thermodynamics is a fundamental driver of atmospheric dynamics across all scales, making accurate modeling of these processes essential for reliable weather forecasts and climate change projections. However, atmospheric models often make a variety of inconsistent approximations in representing moist thermodynamics. These inconsistencies can introduce spurious sources and sinks of energy, potentially compromising the integrity of the models. Here, we present a thermodynamically consistent and structure-preserving formulation of the moist compressible Euler equations. When discretised with a summation-by-parts method, our spatial discretisation conserves mass, water, entropy, and energy. These properties are achieved by discretising a skew-symmetric form of the moist compressible Euler equations, using entropy as a prognostic variable, and exploiting the summation-by-parts property of discrete derivative operators. Additionally, we derive a discontinuous Galerkin spectral element method with energy- and tracer-variance-stable numerical fluxes, and experimentally verify our theoretical results through numerical simulations.
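The summation-by-parts structure invoked here can be illustrated in a few lines: a first-derivative operator $D = H^{-1}Q$ with $Q + Q^\top = \mathrm{diag}(-1, 0, \dots, 0, 1)$ satisfies a discrete integration-by-parts identity exactly. The sketch below checks this for the standard second-order SBP operator (an illustration, not the paper's spectral element operators).

```python
import numpy as np

# Second-order SBP first-derivative operator on n nodes with spacing h:
# D = H^{-1} Q, Q + Q^T = diag(-1, 0, ..., 0, 1), so that
# u^T H (D v) + (D u)^T H v = u_n v_n - u_0 v_0 holds exactly.
n, h = 50, 1.0 / 49
H = h * np.eye(n); H[0, 0] = H[-1, -1] = h / 2
Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)

x = np.linspace(0.0, 1.0, n)
u, v = np.sin(x), np.cos(x)
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
print(np.isclose(lhs, u[-1] * v[-1] - u[0] * v[0]))  # True: discrete IBP
```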
We consider goal-oriented optimal design of experiments for infinite-dimensional Bayesian linear inverse problems governed by partial differential equations (PDEs). Specifically, we seek sensor placements that minimize the posterior variance of a prediction or goal quantity of interest, which is assumed to be a nonlinear functional of the inversion parameter. We propose a goal-oriented optimal experimental design (OED) approach that uses a quadratic approximation of the goal functional to define a goal-oriented design criterion. The proposed criterion, which we call the Gq-optimality criterion, is obtained by integrating the posterior variance of the quadratic approximation over the set of likely data. Under the assumption of Gaussian prior and noise models, we derive a closed-form expression for this criterion. To guide the development of discretization-invariant computational methods, the derivations are performed in an infinite-dimensional Hilbert space setting. Subsequently, we propose efficient and accurate computational methods for evaluating the Gq-optimality criterion, and a greedy approach is used to obtain Gq-optimal sensor placements. We illustrate the proposed approach for two model inverse problems governed by PDEs. Our numerical results demonstrate the effectiveness of the proposed strategy; in particular, the proposed approach outperforms non-goal-oriented (A-optimal) and linearization-based (c-optimal) approaches.
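A finite-dimensional caricature of the greedy step, assuming a linear-Gaussian discretisation: sensors are added one at a time to minimise the posterior variance of a linearised goal functional. The operators below are random placeholders rather than PDE-based quantities, and the criterion shown is the simpler c-optimal-style variance, not the Gq-optimality criterion itself.

```python
import numpy as np

rng = np.random.default_rng(1)

n, n_candidates, n_sensors, noise = 50, 30, 5, 1e-2
F = rng.standard_normal((n_candidates, n))  # rows: candidate sensor functionals
C_prior = np.eye(n)                         # placeholder prior covariance
c = rng.standard_normal(n)                  # gradient of the goal functional

def goal_variance(rows):
    """Posterior variance c^T C_post c for the chosen sensor rows."""
    A = F[rows]
    C_post = np.linalg.inv(np.linalg.inv(C_prior) + A.T @ A / noise ** 2)
    return c @ C_post @ c

chosen = []
for _ in range(n_sensors):
    best = min((i for i in range(n_candidates) if i not in chosen),
               key=lambda i: goal_variance(chosen + [i]))
    chosen.append(best)
print("selected sensors:", chosen)
```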
With the rapid advancements in medical data acquisition and production, increasingly rich representations exist to characterize medical information. However, such large-scale data often exceed available computing resources or the scalability of standard algorithms, and can only be processed after compression or reduction, at the potential loss of information. In this work, we consider specific Gaussian mixture models (HD-GMM), tailored to deal with high-dimensional data and to limit information loss by providing component-specific lower-dimensional representations. We also design an incremental algorithm to compute such representations for large data sets, overcoming hardware limitations of standard methods. Our procedure is illustrated in a magnetic resonance fingerprinting study, where it achieves a 97% dictionary compression for faster and more accurate map reconstructions.
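A sketch of the incremental ingredient, assuming data arrive in batches: responsibilities are computed from the current model and folded into running sufficient statistics, so no batch needs to be retained in memory. Names and initial parameters are illustrative; this is generic mini-batch GMM accumulation, not the HD-GMM algorithm itself.

```python
import numpy as np

def responsibilities(X, w, mu, cov_inv, log_det):
    """E-step for one batch (terms constant across components cancel)."""
    d = X[:, None, :] - mu                                  # (n, K, p)
    maha = np.einsum('nkp,kpq,nkq->nk', d, cov_inv, d)
    logr = np.log(w) - 0.5 * (maha + log_det)
    logr -= logr.max(axis=1, keepdims=True)
    r = np.exp(logr)
    return r / r.sum(axis=1, keepdims=True)

def accumulate(stats, X, r):
    """Fold one batch into the running sufficient statistics."""
    stats["N"] += r.sum(axis=0)
    stats["S1"] += r.T @ X
    stats["S2"] += np.einsum('nk,np,nq->kpq', r, X, X)

K, p = 3, 20
w, mu = np.full(K, 1.0 / K), np.zeros((K, p))
cov_inv, log_det = np.tile(np.eye(p), (K, 1, 1)), np.zeros(K)
stats = {"N": np.zeros(K), "S1": np.zeros((K, p)), "S2": np.zeros((K, p, p))}
for batch in np.array_split(np.random.randn(10_000, p), 100):
    accumulate(stats, batch, responsibilities(batch, w, mu, cov_inv, log_det))
mu_new = stats["S1"] / stats["N"][:, None]  # M-step example: updated means
```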
We consider the stochastic heat equation driven by a multiplicative Gaussian noise that is white in time and homogeneous in space. Assuming that the spatial correlation function is given by a Riesz kernel of order $\alpha \in (0,1)$, we prove a central limit theorem for power variations and other related functionals of the solution. To our surprise, there is no asymptotic bias despite the low regularity of the noise coefficient in the multiplicative case. We trace this circumstance back to cancellation effects between the error terms arising naturally in second-order limit theorems for power variations.
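Schematically, the functionals in question are of the following form (stated for temporal increments at a fixed spatial point; the exact centring and normalisation, which depend on $\alpha$, are omitted):

```latex
% Power variation of order p of the solution u along the grid t_i = i/n,
% at a fixed spatial point x:
V_n^p(u) \;=\; \sum_{i=1}^{n} \bigl| u(t_i, x) - u(t_{i-1}, x) \bigr|^p
```

and the central limit theorem describes the Gaussian fluctuations of $V_n^p(u)$ around its mean after suitable centring and scaling.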