The widely favored phase field method (PFM) has encountered challenges in the finite strain fracture modeling of nearly or truly incompressible hyperelastic materials. We identified that the underlying cause lies in the inherent contradiction between incompressibility and smeared crack opening. Drawing on the stiffness-degradation idea in PFM, we resolved this contradiction by loosening the incompressibility constraint of the damaged phase without affecting the incompressibility of the intact material. By modifying the perturbed Lagrangian approach, we derived a novel mixed formulation. On the numerical side, the finite element discretization uses the classical Q1/P0 and the high-order P2/P1 schemes. To alleviate mesh distortion at large strains, an adaptive mesh-deletion technique is also developed. The validity and robustness of the proposed mixed framework are corroborated by four representative numerical examples. By comparing the performance of Q1/P0 and P2/P1, we conclude that the Q1/P0 formulation is the better choice for finite strain fracture in nearly incompressible cases. Moreover, the numerical examples also show that the combination of the proposed framework and methodology has vast potential for simulating complex peeling and tearing problems.
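To sketch the idea behind such a degraded mixed formulation (a schematic potential with illustrative symbols, not the authors' exact functional), one can make the penalty modulus of a perturbed-Lagrangian term damage-dependent:

\[
% Schematic: g(d) degrades the isochoric energy; kappa(d) releases the
% damaged phase (d -> 1) from the incompressibility constraint that still
% binds the intact material (d = 0).
\Pi(\mathbf{u}, p, d) = \int_{\Omega} \Big[ g(d)\, \psi_{\mathrm{iso}}(\bar{\mathbf{C}}) + p\,(J - 1) - \frac{p^{2}}{2\,\kappa(d)} \Big] \, \mathrm{d}V + \Gamma_{c}(d), \qquad g(d) = (1 - d)^{2}.
\]

Stationarity with respect to the pressure-like multiplier $p$ gives $p = \kappa(d)\,(J - 1)$, so choosing $\kappa(d)$ to vanish as $d \to 1$ degrades the volumetric stiffness of the damaged phase, while a very large (or infinite) $\kappa(0)$ enforces incompressibility of the intact material; $\Gamma_{c}(d)$ denotes the usual phase-field crack surface energy.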
String diagrams are a powerful and intuitive graphical syntax that originated in the study of symmetric monoidal categories. In the last few years, they have found application in the modelling of various computational structures, in fields as diverse as Computer Science, Physics, Control Theory, Linguistics, and Biology. In many such proposals, the transformations of the described systems are modelled as rewrite rules of diagrams. These developments demand a mathematical foundation for string diagram rewriting: whereas rewrite theory for terms is well understood, the two-dimensional nature of string diagrams poses additional challenges. This work systematises and expands a series of recent conference papers laying down such a foundation. As a first step, we focus on rewrite systems for string diagrammatic theories that feature a Frobenius algebra. This situation appears ubiquitously in various approaches: for instance, in the algebraic semantics of linear dynamical systems, Frobenius structures model the wiring of circuits; in categorical quantum mechanics, they model interacting quantum observables. Our work introduces a combinatorial interpretation of string diagram rewriting modulo Frobenius structures, in terms of double-pushout hypergraph rewriting. Furthermore, we prove this interpretation to be sound and complete. In the last part, we also show that the approach can be generalised to model rewriting modulo multiple Frobenius structures. As a proof of concept, we show how to derive from these results a termination strategy for Interacting Bialgebras, an important rewrite theory in the study of quantum circuits and signal flow graphs.
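To give a concrete (and deliberately simplified) flavour of the combinatorial interpretation, the Python sketch below represents a diagram as a hypergraph with labelled hyperedges and applies a naive rewrite step that replaces one matched hyperedge by a right-hand side. All names are illustrative; a faithful double-pushout implementation would also track the interface and check the gluing conditions.

```python
from itertools import count

class Hypergraph:
    """Nodes are integers; hyperedges are (label, sources, targets) triples."""
    def __init__(self):
        self.fresh = count()
        self.nodes = set()
        self.edges = []

    def add_node(self):
        v = next(self.fresh)
        self.nodes.add(v)
        return v

    def add_edge(self, label, sources, targets):
        self.edges.append((label, tuple(sources), tuple(targets)))

def rewrite_once(g, lhs_label, rhs):
    """Replace the first hyperedge labelled `lhs_label` by the hyperedges
    produced by `rhs`. This mimics one DPO step whose left-hand side is a
    single hyperedge: delete the match, then glue in the replacement."""
    for i, (label, srcs, tgts) in enumerate(g.edges):
        if label == lhs_label:
            del g.edges[i]                         # pushout complement
            for lab, s, t in rhs(g, srcs, tgts):   # pushout (gluing)
                g.add_edge(lab, s, t)
            return True
    return False

# Illustrative rule: split a binary "mul" edge into two edges wired
# through a freshly created internal node.
def split_mul(g, srcs, tgts):
    mid = g.add_node()
    return [("fst", (srcs[0],), (mid,)), ("snd", (srcs[1], mid), tgts)]

g = Hypergraph()
a, b, c = g.add_node(), g.add_node(), g.add_node()
g.add_edge("mul", (a, b), (c,))
rewrite_once(g, "mul", split_mul)
print(g.edges)  # [('fst', (0,), (3,)), ('snd', (1, 3), (2,))]
```

In the actual theory, the Frobenius structure is what justifies quotienting diagrams by wire rearrangements, which is precisely what makes the hypergraph representation (where wiring is implicit in shared nodes) sound and complete.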
The task of multi-dimensional numerical integration is frequently encountered in physics and other scientific fields, e.g., in modeling the effects of systematic uncertainties in physical systems and in Bayesian parameter estimation. Multi-dimensional integration is often time-prohibitive on CPUs, and efficient implementation on many-core architectures is challenging because the workload across the integration space cannot be predicted a priori. We propose m-Cubes, a novel implementation of the well-known VEGAS algorithm for execution on GPUs. VEGAS transforms the integration variables and then computes a Monte Carlo estimate of the integral using adaptive partitioning of the resulting space. m-Cubes improves performance on GPUs by maintaining a relatively uniform workload across the processors. As a result, our optimized CUDA implementation for Nvidia GPUs outperforms parallelization approaches proposed in past literature. We further demonstrate the efficiency of m-Cubes by evaluating a six-dimensional integral from a cosmology application, achieving significant speedup and greater precision than the CUBA library's CPU implementation of VEGAS. We also evaluate m-Cubes on a standard integrand test suite, where it outperforms the serial implementations of the CUBA and GSL libraries by orders of magnitude while maintaining comparable accuracy. Our approach yields a speedup of at least 10 when compared against publicly available Monte Carlo-based GPU implementations. In summary, m-Cubes can solve integrals that are prohibitively expensive using standard libraries and custom implementations. A modern, header-only C++ interface makes m-Cubes portable and allows its use in complicated pipelines with easy-to-define stateful integrals. Compatibility with non-Nvidia GPUs is achieved with our initial implementation of m-Cubes using the Kokkos framework.
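For readers unfamiliar with the VEGAS family of algorithms, the sketch below shows the core computational pattern that m-Cubes parallelizes: the domain is cut into equal sub-cubes, each contributing an independent Monte Carlo estimate whose variance drives the error estimate. This is a minimal serial illustration in Python, not the m-Cubes code or its CUDA kernels.

```python
import numpy as np

def stratified_mc(f, dim, intervals_per_axis=4, samples_per_cube=64, rng=None):
    """Stratified Monte Carlo over the unit hypercube [0,1]^dim.
    Each sub-cube gets the same number of samples, which is the
    uniform-workload structure a GPU version can map onto thread blocks."""
    rng = rng or np.random.default_rng(0)
    n = intervals_per_axis
    width = 1.0 / n
    total, var = 0.0, 0.0
    # Enumerate all n^dim sub-cubes by their integer grid coordinates.
    for idx in np.ndindex(*([n] * dim)):
        lo = np.array(idx) * width
        x = lo + width * rng.random((samples_per_cube, dim))
        vals = np.apply_along_axis(f, 1, x)
        vol = width ** dim
        total += vol * vals.mean()
        var += (vol ** 2) * vals.var(ddof=1) / samples_per_cube
    return total, np.sqrt(var)

# Example: a smooth integrand in 3 dimensions.
est, err = stratified_mc(lambda x: np.exp(-np.sum((x - 0.5) ** 2)), dim=3)
print(f"estimate = {est:.6f} +/- {err:.6f}")
```

The full VEGAS algorithm additionally adapts a separable grid so that samples concentrate where the integrand varies most; that adaptation step is omitted here for brevity.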
Algorithm configuration (AC) is concerned with the automated search for the most suitable parameter configuration of a parametrized algorithm. There is currently a wide variety of AC problem variants and methods proposed in the literature. Existing reviews do not take into account all derivatives of the AC problem, nor do they offer a complete classification scheme. To this end, we introduce taxonomies to describe the AC problem and the features of configuration methods, respectively. We review the existing AC literature through the lens of our taxonomies, outline relevant design choices of configuration approaches, contrast methods and problem variants against each other, and describe the state of AC in industry. Finally, our review provides researchers and practitioners with an outlook on future research directions in the field of AC.
During locomotion, legged robots interact with the ground by sequentially establishing and breaking contact. The interaction wrenches that arise from contact are used to steer the robot's Center of Mass (CoM) and to reject perturbations that make the system deviate from the desired trajectory and often cause it to fall. The feasibility of a given control target (desired CoM wrench or acceleration) is conditioned by the contact point distribution, ground friction, and actuation limits. In this work, we develop an algorithm to compute the set of feasible wrenches that a legged robot can exert on its CoM through contact. The presented method can be used with any number of non-coplanar contacts, and it takes into account actuation limits as well as the limitations imposed by an inelastic contact model with Coulomb friction. This is exemplified with a planar biped model standing with the feet at different heights. Exploiting assumptions from the contact model, we explain how to compute the set of wrenches that are feasible on the CoM when the contacts remain in position, as well as those that are feasible when some of the contacts are broken. The algorithm can therefore be used to assess whether a switch in contact configuration is feasible while achieving a given control task. Furthermore, the method can identify the directions in which the system is not actuated (i.e., directions in which no wrench can be exerted). Using a spatial model of a lower-extremity exoskeleton, we show how making a joint actuated or passive changes the non-actuated wrench directions of the robot at a given pose, which also makes the algorithm useful during the design phase of the system. Overall, this work presents a tool for the control and design of legged systems that extends the current state of the art.
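As a schematic of the kind of computation involved (not the authors' algorithm), the following Python sketch builds a set of feasible planar CoM wrenches as the Minkowski sum of per-contact wrench sets, with friction cones linearized into generators and normal forces bounded as a crude stand-in for actuation limits. Contact positions, friction coefficient, and force bounds are made-up example values.

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

def contact_wrench_vertices(p, mu, f_max):
    """Vertices of one contact's wrench set in (fx, fz, tau) about the CoM.
    The linearized friction cone has two edge directions; the normal force
    is bounded by f_max (a crude proxy for actuation limits)."""
    edges = [np.array([ mu, 1.0]), np.array([-mu, 1.0])]  # cone edge directions
    verts = [np.zeros(3)]  # zero force: the contact may be unloaded
    for e in edges:
        f = f_max * e / e[1]              # scale so normal component = f_max
        tau = p[0] * f[1] - p[1] * f[0]   # planar cross product p x f
        verts.append(np.array([f[0], f[1], tau]))
    return verts

# Two feet at different heights (x, z), as in a planar biped example.
contacts = [np.array([-0.2, 0.0]), np.array([0.25, 0.1])]
per_contact = [contact_wrench_vertices(p, mu=0.6, f_max=400.0)
               for p in contacts]

# Minkowski sum: every combination of one vertex per contact, then the hull.
points = np.array([sum(combo) for combo in product(*per_contact)])
hull = ConvexHull(points)
print(f"feasible wrench polytope: {len(hull.vertices)} vertices")
```

Checking whether a desired CoM wrench is achievable then reduces to a point-in-polytope test, and directions absent from the polytope's span correspond to non-actuated wrench directions.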
We present an implicit-explicit finite volume scheme for isentropic two-phase flow in all Mach number regimes. The underlying model belongs to the class of symmetric hyperbolic thermodynamically compatible models. The key element of the scheme is a linearisation of the pressure and enthalpy terms at a reference state. The resulting stiff linear parts are integrated implicitly, whereas the non-linear higher-order and transport terms are treated explicitly. Due to the flux splitting, the scheme is stable under a CFL condition determined by the resolution of the slow material waves, which allows large time steps even in the presence of fast acoustic waves. Further, the singular Mach number limits of the model are studied and the asymptotic-preserving property of the scheme is proven. In numerical simulations, the consistency with single-phase flow, the accuracy, and the approximation of material waves in different Mach number regimes are assessed.
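Schematically (in generic notation, not the paper's exact operators), such a flux splitting separates the stiff acoustic part from the slow transport part and treats only the former implicitly:

\[
% Generic IMEX splitting: the stiff flux is linearised about q_ref so that
% the implicit update requires only a linear solve.
\partial_t q + \nabla \cdot F_{\mathrm{slow}}(q) + \nabla \cdot F_{\mathrm{stiff}}(q) = 0, \qquad \frac{q^{n+1} - q^{n}}{\Delta t} + \nabla \cdot F_{\mathrm{slow}}\big(q^{n}\big) + \nabla \cdot \big( A(q_{\mathrm{ref}})\, q^{n+1} \big) = 0,
\]

so the time step restriction comes only from the material speed in $F_{\mathrm{slow}}$, roughly $\Delta t \lesssim \Delta x / \max |u|$, independently of the sound speed hidden in the linearised operator $A(q_{\mathrm{ref}})$.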
Optimal-order uniform-in-time $H^1$-norm error estimates are given for semi- and full discretizations of mean curvature flow of surfaces in arbitrarily high codimension. The proposed and studied numerical method is based on a parabolic system coupling the surface flow to evolution equations for the mean curvature vector and for the orthogonal projection onto the tangent space. The algorithm uses evolving surface finite elements and linearly implicit backward difference formulae. This numerical method admits a convergence analysis in the case of finite elements of polynomial degree at least two and backward difference formulae of orders two to five. Numerical experiments in codimension 2 illustrate and complement our theoretical results.
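To fix ideas (a standard identity, not the paper's full coupled system), mean curvature flow in any codimension moves each point of the surface with velocity equal to the mean curvature vector, which coincides with the Laplace-Beltrami operator applied to the position:

\[
% Mean curvature flow of an evolving surface Gamma(t): the velocity equals
% the mean curvature vector, the surface Laplacian of the position X.
\partial_t X = \mathbf{H} = \Delta_{\Gamma(t)} X .
\]

The method described above augments this law with evolution equations for $\mathbf{H}$ itself and for the orthogonal projection onto the tangent space, producing the parabolic system that the evolving surface finite elements and backward difference formulae then discretize.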
A novel meshing scheme based on regular tetrakaidecahedron (truncated octahedron) cells is presented for use in spatial topology optimization. A tetrakaidecahedron mesh ensures face connectivity between elements, thereby eliminating singular solutions from the solution space. Various other benefits of implementing the said mesh are also highlighted, and the corresponding finite element is introduced. The material mask overlay strategy (MMOS), a feature-based method for topology optimization, is extended to three dimensions (MMOS-3D) via the aforementioned finite element and spheroidal negative masks. The formulation for density computation and sensitivity analysis for gradient-based optimization is developed. Examples of traditional structural topology optimization problems are presented, with a detailed discussion of the efficacy of the proposed approach.
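As an illustration of how a feature-based density field might be computed from negative masks (a hedged sketch with an invented sigmoid smoothing; the paper's exact formulation may differ), each element's density is reduced wherever its centroid falls inside a spheroidal mask:

```python
import numpy as np

def densities_from_masks(centroids, masks, beta=20.0):
    """Element densities in [0, 1] from spheroidal negative masks.
    centroids: (n_elem, 3) element centroid coordinates.
    masks: list of (center, semi_axes) pairs; material is removed inside.
    beta: sharpness of the (illustrative) sigmoid boundary, which keeps
    the density differentiable for gradient-based optimization."""
    rho = np.ones(len(centroids))
    for center, semi_axes in masks:
        # Normalized spheroid equation: g < 1 inside the mask, > 1 outside.
        g = np.sum(((centroids - center) / semi_axes) ** 2, axis=1)
        inside = 1.0 / (1.0 + np.exp(-beta * (1.0 - g)))  # ~1 inside mask
        rho *= 1.0 - inside  # negative mask: remove material inside
    return rho

# Toy example: a 3D grid of centroids with one spheroidal void.
xs = np.linspace(0, 1, 5)
centroids = np.array(np.meshgrid(xs, xs, xs)).reshape(3, -1).T
masks = [(np.array([0.5, 0.5, 0.5]), np.array([0.3, 0.2, 0.2]))]
rho = densities_from_masks(centroids, masks)
print(f"min density {rho.min():.3f}, max density {rho.max():.3f}")
```

Because the density depends smoothly on the mask parameters (centers and semi-axes), sensitivities for a gradient-based optimizer follow by differentiating through this mapping.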
We propose and explore a new, general-purpose method for the implicit time integration of elastica. Key to our approach is the use of a mixed variational principle. In turn, its finite element discretization leads to an efficient alternating projections solver with a superset of the desirable properties of many previous fast solution strategies. This framework fits a range of elastic constitutive models and remains stable across a wide span of timestep sizes and material parameters, including problems that are quasi-static or approximately rigid. It is efficient to evaluate and easily applicable to volume, surface, and rod models. We demonstrate the efficacy of our approach on a number of simulated examples across all three domains.
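The sketch below shows the general shape of an alternating local/global solver for implicit elastodynamics, in the spirit of projective dynamics for a 2D mass-spring system; it is not the paper's mixed variational method, and all parameters are illustrative.

```python
import numpy as np

def pd_step(x, v, edges, rest, m, h, k, iters=10):
    """One implicit timestep via local-global alternation.
    x, v: (n, 2) positions and velocities; edges: (i, j) spring pairs;
    rest: rest lengths; m: mass per node; h: timestep; k: stiffness."""
    f_ext = np.zeros_like(x); f_ext[:, 1] = -9.8     # gravity
    y = x + h * v + h * h * f_ext                    # inertial target
    # Global matrix (constant, so it could be prefactored once).
    A = np.diag(np.full(len(x), m / h**2))
    for i, j in edges:
        A[i, i] += k; A[j, j] += k; A[i, j] -= k; A[j, i] -= k
    x_new = y.copy()
    for _ in range(iters):
        rhs = (m / h**2) * y
        for (i, j), r in zip(edges, rest):
            d = x_new[i] - x_new[j]
            p = r * d / (np.linalg.norm(d) + 1e-12)  # local projection
            rhs[i] += k * p; rhs[j] -= k * p
        x_new = np.linalg.solve(A, rhs)              # global linear solve
    return x_new, (x_new - x) / h

# Two nodes connected by one spring, falling under gravity.
x = np.array([[0.0, 0.0], [1.0, 0.0]]); v = np.zeros_like(x)
x, v = pd_step(x, v, edges=[(0, 1)], rest=[1.0], m=1.0, h=0.01, k=1e4)
print(x)
```

The appeal of this solver family, which the abstract's claims echo, is that the global matrix is constant (prefactorable) and each local step is an independent, trivially parallel projection.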
We use persistent homology and persistence images as an observable of three different variants of the two-dimensional XY model in order to identify and study their phase transitions. We examine models with the classical XY action, a topological lattice action, and an action with an additional nematic term. In particular, we introduce a new way of computing the persistent homology of lattice spin model configurations and, by considering the fluctuations in the output of logistic regression and k-nearest neighbours models trained on persistence images, we develop a methodology to extract estimates of the critical temperature and the critical exponent of the correlation length. We place particular emphasis on finite-size scaling behaviour and on producing estimates with quantifiable error. For each model we successfully identify its phase transition(s) and obtain an accurate determination of the critical temperatures and critical exponents of the correlation length.
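The following Python sketch illustrates the generic shape of such a pipeline: persistence diagrams are vectorized into persistence images (here hand-rolled as persistence-weighted Gaussians on a birth-persistence grid) and fed to a logistic regression classifier. The diagrams below are synthetic stand-ins, and all parameters are illustrative; the paper's own construction of diagrams from spin configurations differs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def persistence_image(diagram, grid=20, sigma=0.05, extent=(0, 1)):
    """Rasterize a diagram [(birth, death), ...] into a flattened image.
    Points are mapped to (birth, persistence) and smeared with Gaussians
    weighted by persistence, a common vectorization choice."""
    axis = np.linspace(extent[0], extent[1], grid)
    bx, py = np.meshgrid(axis, axis)
    img = np.zeros((grid, grid))
    for birth, death in diagram:
        pers = death - birth
        img += pers * np.exp(-((bx - birth) ** 2 + (py - pers) ** 2)
                             / (2 * sigma ** 2))
    return img.ravel()

# Synthetic stand-in: 'low-T' diagrams have few long-lived features,
# 'high-T' diagrams many short-lived ones.
rng = np.random.default_rng(1)
def fake_diagram(hot):
    n = rng.integers(20, 30) if hot else rng.integers(3, 6)
    births = rng.uniform(0, 0.5, n)
    pers = rng.uniform(0, 0.1, n) if hot else rng.uniform(0.3, 0.5, n)
    return np.column_stack([births, births + pers])

X = np.array([persistence_image(fake_diagram(hot)) for hot in [0, 1] * 100])
y = np.array([0, 1] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```

Loosely following the abstract's methodology, a critical temperature estimate can then be read off from where the classifier's output is maximally uncertain, with its fluctuations across ensembles feeding the finite-size scaling analysis.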
We consider a family of unadjusted HMC samplers, which includes standard position HMC samplers and discretizations of the underdamped Langevin process. A detailed analysis and optimization of the parameters are conducted in the Gaussian case. Then, a stochastic gradient version of the samplers is considered, for which dimension-free convergence rates are established for log-concave smooth targets, gathering in a unified framework previous results on both processes. Both results indicate that partial refreshments of the velocity are more efficient than standard full refreshments.
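The sampler family interpolates between the two regimes via a single refreshment parameter, as the following Python sketch shows (an illustrative implementation, not the paper's exact scheme or tuning):

```python
import numpy as np

def unadjusted_hmc(grad_U, x0, n_iter, h, n_leap, eta, rng=None):
    """Unadjusted HMC with partial velocity refreshment (no Metropolis
    correction). eta = 0 gives standard full refreshments; eta close to 1
    approaches underdamped-Langevin-like dynamics."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    v = rng.standard_normal(x.shape)
    samples = []
    for _ in range(n_iter):
        # Partial refreshment: autoregressive update preserving N(0, I).
        v = eta * v + np.sqrt(1.0 - eta**2) * rng.standard_normal(x.shape)
        # Leapfrog integration of the Hamiltonian dynamics.
        v -= 0.5 * h * grad_U(x)
        for _ in range(n_leap - 1):
            x += h * v
            v -= h * grad_U(x)
        x += h * v
        v -= 0.5 * h * grad_U(x)
        samples.append(x.copy())
    return np.array(samples)

# Standard Gaussian target: U(x) = |x|^2 / 2, so grad_U(x) = x.
chain = unadjusted_hmc(lambda x: x, x0=np.zeros(2), n_iter=5000,
                       h=0.2, n_leap=5, eta=0.7)
print("sample mean:", chain.mean(axis=0), " sample var:", chain.var(axis=0))
```

Being unadjusted, the chain targets the desired distribution only up to a discretization bias controlled by the step size h, which is the setting in which the dimension-free rates above are stated.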