We present a cut finite element method for the heat equation on two overlapping meshes. By overlapping meshes we mean a mesh hierarchy with a stationary background mesh at the bottom and an overlapping mesh that is allowed to move around on top of the background mesh. Overlapping meshes can be used as an alternative to costly remeshing for problems with changing or evolving interior geometry. In this paper the overlapping mesh is prescribed a cG(1) movement, meaning that its location as a function of time is continuous and piecewise linear. For the discrete function space, we use continuous Galerkin in space and discontinuous Galerkin in time, with the addition of a discontinuity on the boundary between the two meshes. The finite element formulation is based on Nitsche's method and also includes an integral term over the space-time boundary that mimics the standard discontinuous Galerkin time-jump term. The cG(1) mesh movement results in a space-time discretization for which existing analysis methodologies either fail or are unsuitable. We therefore propose, to the best of our knowledge, a new energy analysis framework that is general and robust enough to be applicable to the current setting$^*$. The energy analysis consists of a stability estimate that is slightly stronger than the standard basic one and an a priori error estimate that is of optimal order with respect to both time step and mesh size. We also present numerical results for a problem in one spatial dimension that verify the analytic error convergence orders. $*$ UPDATE and CORRECTION: After this work was made public, it came to our attention that the core components of the new energy analysis framework appear to have been discovered independently by us and by Cangiani, Dong, and Georgoulis in [1].
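For orientation on the two ingredients named above, the standard discontinuous Galerkin time-jump term and a generic symmetric Nitsche coupling on a mesh boundary $\Gamma$ take the following forms (the notation here is assumed for illustration; the paper's actual space-time boundary term is a variant adapted to the moving interface):
$$\sum_{n=1}^{N-1}\big([u]_n, v_n^+\big)_{\Omega}, \qquad [u]_n := u_n^+ - u_n^-, \quad u_n^{\pm} := \lim_{s \to 0^{\pm}} u(t_n + s),$$
$$-\big(\langle \partial_{\mathbf{n}} u\rangle, [v]\big)_{\Gamma} - \big([u], \langle \partial_{\mathbf{n}} v\rangle\big)_{\Gamma} + \frac{\gamma}{h}\big([u], [v]\big)_{\Gamma},$$
where $[\cdot]$ and $\langle\cdot\rangle$ denote jumps and averages across $\Gamma$, and $\gamma > 0$ is a penalty parameter.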
This work introduces a reduced order modeling (ROM) framework for the solution of parameterized second-order linear elliptic partial differential equations formulated on unfitted geometries. The goal is to construct efficient projection-based ROMs, which rely on techniques such as the reduced basis method and discrete empirical interpolation. The presence of geometrical parameters in unfitted domain discretizations entails challenges for the application of standard ROMs. Therefore, in this work we propose a methodology based on i) the extension of snapshots onto the background mesh and ii) localization strategies to reduce the number of reduced basis functions. The resulting method is computationally efficient and accurate, and it is agnostic to the underlying choice of discretization. We test the applicability of the proposed framework with numerical experiments on two model problems, namely the Poisson and linear elasticity problems. In particular, we study several benchmarks formulated on two-dimensional, trimmed domains discretized with splines, and we observe a significant reduction of the online computational cost compared to standard ROMs for the same level of accuracy. Moreover, we show the applicability of our methodology to a three-dimensional geometry of a linear elastic problem.
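To make the snapshot-extension idea concrete, the following is a minimal sketch of the projection-based ROM pipeline that such extended snapshots feed into. All data and dimensions are hypothetical toys, and the paper's localization and empirical interpolation steps are omitted here.

```python
# Minimal projection-based ROM sketch: snapshots (already extended to a common
# background mesh) -> POD basis via truncated SVD -> cheap Galerkin-projected
# online solve. Toy data throughout; not the paper's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: each column is a high-fidelity solution on the
# background mesh (n dofs, m parameter samples); rank-5 toy data.
n, m = 2000, 40
S = rng.standard_normal((n, 5)) @ rng.standard_normal((5, m))

# POD: truncated SVD of the snapshot matrix gives the reduced basis V.
U, sig, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sig**2) / np.sum(sig**2)
r = int(np.searchsorted(energy, 1.0 - 1e-8) + 1)
V = U[:, :r]                       # reduced basis, n x r

# Galerkin projection of a toy full-order system A u = f onto the basis.
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
f = rng.standard_normal(n)
A_r, f_r = V.T @ A @ V, V.T @ f    # r x r system: cheap online solve
u_rom = V @ np.linalg.solve(A_r, f_r)
print(r, u_rom.shape)              # e.g. r = 5 for this rank-5 toy data
```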
In this work, we study discrete minimizers of the Ginzburg-Landau energy in finite element spaces. Special focus is given to the influence of the Ginzburg-Landau parameter $\kappa$. This parameter is of physical interest as large values can trigger the appearance of vortex lattices. Since the vortices have to be resolved on sufficiently fine computational meshes, it is important to translate the size of $\kappa$ into a mesh resolution condition, which can be done through error estimates that are explicit with respect to $\kappa$ and the spatial mesh width $h$. To this end, we first work in an abstract framework for a general class of discrete spaces, where we present convergence results in a problem-adapted $\kappa$-weighted norm. Afterwards, we apply our findings to Lagrangian finite elements and a particular generalized finite element construction. In numerical experiments we confirm that our derived $L^2$- and $H^1$-error estimates are indeed optimal in $\kappa$ and $h$.
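For reference, a standard form of the Ginzburg-Landau energy is shown below. Normalization conventions vary across the literature, so this is indicative rather than the paper's exact functional:
$$E(u) = \frac{1}{2}\int_{\Omega}\Big|\Big(\frac{i}{\kappa}\nabla + A\Big)u\Big|^2 \,\mathrm{d}x + \frac{1}{4}\int_{\Omega}\big(1 - |u|^2\big)^2 \,\mathrm{d}x,$$
where $u$ is the complex order parameter and $A$ a given magnetic vector potential. In this scaling, vortex cores have a width of order $1/\kappa$, which is what turns the size of $\kappa$ into a resolution requirement on $h$.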
Probabilistic models based on continuous latent spaces, such as variational autoencoders, can be understood as uncountable mixture models where components depend continuously on the latent code. They have proven to be expressive tools for generative and probabilistic modelling, but are at odds with tractable probabilistic inference, that is, computing marginals and conditionals of the represented probability distribution. Meanwhile, tractable probabilistic models such as probabilistic circuits (PCs) can be understood as hierarchical discrete mixture models, and thus are capable of performing exact inference efficiently but often show subpar performance in comparison to continuous latent-space models. In this paper, we investigate a hybrid approach, namely continuous mixtures of tractable models with a small latent dimension. While these models are analytically intractable, they are amenable to numerical integration schemes based on a finite set of integration points. With a large enough number of integration points, the approximation becomes de facto exact. Moreover, for a finite set of integration points, the integration method effectively compiles the continuous mixture into a standard PC. In experiments, we show that this simple scheme proves remarkably effective, as PCs learnt this way set a new state of the art for tractable models on many standard density estimation benchmarks.
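A minimal sketch of the integration idea follows, assuming a one-dimensional latent code and a toy Gaussian component density (the paper's components are learned tractable models). Gauss-Hermite quadrature turns the continuous mixture into a K-component mixture, i.e., a small flat probabilistic circuit.

```python
# Numerically integrating a continuous mixture over a 1-D latent z compiles
# it into a finite mixture. Toy choice: p(x|z) = N(x; z, 1) with z ~ N(0, 1),
# so the exact marginal is N(x; 0, 2).
import numpy as np

def component_logpdf(x, z):
    # Toy "decoder": log N(x; mean=z, std=1). Hypothetical component.
    return -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)

# Integration points/weights for z ~ N(0,1): probabilists' Gauss-Hermite.
K = 32
nodes, weights = np.polynomial.hermite_e.hermegauss(K)
log_w = np.log(weights / np.sqrt(2 * np.pi))   # weights normalized to sum to 1

def mixture_logpdf(x):
    # log p(x) ~ log sum_k w_k p(x | z_k): a K-component mixture, i.e. a
    # one-sum-node probabilistic circuit; log-sum-exp for stability.
    lps = np.array([lw + component_logpdf(x, z) for lw, z in zip(log_w, nodes)])
    m = lps.max(axis=0)
    return m + np.log(np.exp(lps - m).sum(axis=0))

x = np.linspace(-4, 4, 9)
print(mixture_logpdf(x))   # matches log N(x; 0, 2) to quadrature accuracy
```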
To facilitate widespread adoption of automated engineering design techniques, existing methods must become more efficient and generalizable. In the field of topology optimization, this requires the coupling of modern optimization methods with solvers capable of handling arbitrary problems. In this work, a topology optimization method for general multiphysics problems is presented. We leverage a convolutional neural parameterization of a level set for a description of the geometry and use this in an unfitted finite element method that is differentiable with respect to the level set everywhere in the domain. We construct the parameter-to-objective map in such a way that the gradient can be computed entirely by automatic differentiation at roughly the cost of an objective function evaluation. The method produces optimized topologies that are similar in performance to, yet exhibit greater regularity than, those of baseline approaches on standard benchmarks, while also being able to solve a more general class of problems, e.g., interface-coupled multiphysics.
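One common way to obtain differentiability with respect to a level set everywhere in the domain is to map level-set values to material densities through a smoothed Heaviside function; the sketch below illustrates that step only. The particular regularization, and everything else (the CNN parameterization, the unfitted FEM solve), is an assumption here, not the paper's implementation.

```python
# Level-set-to-density map via a C^1 smoothed Heaviside: a standard choice
# that makes downstream assembly differentiable in the level-set values.
import numpy as np

def smoothed_heaviside(phi, eps=0.05):
    # H_eps(phi) = (1/2)(1 + phi/eps + sin(pi*phi/eps)/pi) inside |phi| <= eps.
    h = 0.5 + phi / (2 * eps) + np.sin(np.pi * phi / eps) / (2 * np.pi)
    return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h))

# Toy level set: a disk of radius 0.3 on a unit-square grid (phi > 0 inside).
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
phi = 0.3 - np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2)
rho = smoothed_heaviside(phi)    # smooth density field usable in assembly
print(rho.mean())                # ~ pi * 0.3^2 = 0.283, the disk's area
```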
Meshfree Lagrangian frameworks for free-surface flow simulations do not conserve fluid volume. Meshfree particle methods like SPH are not mimetic, in the sense that discrete mass conservation does not imply discrete volume conservation. On the other hand, meshfree collocation methods typically do not use any notion of mass. As a result, they are neither mass conservative nor volume conservative at the discrete level. In this paper, we give an overview of various sources of conservation errors across different meshfree methods. The present work focuses on one specific issue: unreliable volume and mass definitions. We introduce the concept of representative masses and densities, which are essential for accurate post-processing, especially in meshfree collocation methods. Using these, we introduce an artificial compression or expansion in the fluid to rectify errors in volume conservation. Numerical experiments show that the introduced frameworks significantly improve volume conservation behaviour, even for complex industrial test cases such as automotive water crossing.
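A minimal sketch of the underlying bookkeeping, under assumed notation: each particle carries a representative mass $m_i$ and density $\rho_i$, so its representative volume is $V_i = m_i/\rho_i$, and a drift of the total discrete volume away from the mass-consistent reference can be removed by a global density rescaling, which acts as a slight artificial compression or expansion. The correction rule below is illustrative, not the paper's exact scheme.

```python
# Representative volumes V_i = m_i / rho_i and a global artificial
# compression/expansion that restores the reference volume. Toy data.
import numpy as np

rng = np.random.default_rng(1)
m = np.full(1000, 1.0e-3)                                  # masses [kg]
rho = 1000.0 * (1.0 + 0.01 * rng.standard_normal(1000))    # drifted densities

V_ref = m.sum() / 1000.0          # reference volume from total mass, rho0 = 1000
V_cur = np.sum(m / rho)           # current discrete volume
alpha = V_cur / V_ref             # volume drift factor
rho *= alpha                      # artificial compression (alpha > 1) / expansion
print(abs(np.sum(m / rho) - V_ref))   # volume restored up to round-off
```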
Real-world manipulation problems in heavy clutter require robots to reason about potential contacts with objects in the environment. We focus on pick-and-place style tasks to retrieve a target object from a shelf where some 'movable' objects must be rearranged in order to solve the task. In particular, our motivation is to allow the robot to reason over and consider non-prehensile rearrangement actions that lead to complex robot-object and object-object interactions, where multiple objects might be moved by the robot simultaneously, and objects might tilt, lean on each other, or topple. To support this, we query a physics-based simulator to forward simulate these interaction dynamics, which makes action evaluation during planning computationally very expensive. To make the planner tractable, we establish a connection between the domain of Manipulation Among Movable Objects and Multi-Agent Pathfinding that lets us decompose the problem into two phases, over which our M4M algorithm iterates. First, we solve a multi-agent planning problem that reasons about the configurations of movable objects but does not forward simulate a physics model. Next, an arm motion planning problem is solved that uses a physics-based simulator but does not search over possible configurations of movable objects. We run simulated and real-world experiments with the PR2 robot and compare against relevant baseline algorithms. Our results highlight that M4M generates complex 3D interactions and solves at least twice as many problems as the baselines with competitive performance.
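A schematic of the two-phase iteration follows. Every helper here is a hypothetical toy stand-in (not the authors' API) kept only so the skeleton executes; the point is the control flow: phase 1 plans rearrangements without physics, phase 2 validates an arm motion in a physics simulator, and failures feed back as constraints.

```python
# Two-phase MAPF + physics-validated motion planning skeleton (toy stand-ins).
import random

def solve_mapf(movables, constraints):
    # Toy stand-in for the multi-agent pathfinding phase: propose an ordering
    # of movable objects that is not ruled out by accumulated constraints.
    order = tuple(sorted(movables))
    return None if order in constraints else order

def plan_arm_motion(rearrangement):
    # Toy stand-in for the physics-validated arm planner: succeed at random,
    # otherwise report the failed rearrangement as the conflict.
    if random.random() < 0.3:
        return list(rearrangement), None
    return None, rearrangement

def m4m(movables, max_iters=50):
    constraints = set()
    for _ in range(max_iters):
        rearrangement = solve_mapf(movables, constraints)   # phase 1: no physics
        if rearrangement is None:
            return None                                     # no valid rearrangement left
        arm_plan, conflict = plan_arm_motion(rearrangement) # phase 2: physics sim
        if arm_plan is not None:
            return arm_plan
        constraints.add(conflict)                           # feed failure back
    return None

print(m4m([0, 1, 2]))
```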
Negative control is a strategy for learning the causal relationship between treatment and outcome in the presence of unmeasured confounding. The treatment effect can nonetheless be identified if two auxiliary variables are available: a negative control treatment (which has no effect on the actual outcome), and a negative control outcome (which is not affected by the actual treatment). These auxiliary variables can also be viewed as proxies for a traditional set of control variables, and they bear resemblance to instrumental variables. I propose a family of algorithms based on kernel ridge regression for learning nonparametric treatment effects with negative controls. Examples include dose response curves, dose response curves with distribution shift, and heterogeneous treatment effects. Data may be discrete or continuous, and low, high, or infinite dimensional. I prove uniform consistency and provide finite sample rates of convergence. I estimate the dose response curve of cigarette smoking on infant birth weight adjusting for unobserved confounding due to household income, using a data set of singleton births in the state of Pennsylvania between 1989 and 1991.
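As a concrete anchor for the estimator family, the sketch below shows the kernel ridge regression building block on synthetic data. The full negative-control estimators compose several such regressions over treatment, proxies, and outcome; that composition is omitted here, and all data and names are illustrative.

```python
# Kernel ridge regression building block: fit f(d) from (d, y) pairs with an
# RBF kernel and return a closed-form predictor. Synthetic toy data.
import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def krr_fit(X, y, lam=1e-2):
    n = len(X)
    K = rbf_kernel(X, X)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)   # ridge-regularized
    return lambda Xnew: rbf_kernel(Xnew, X) @ alpha

rng = np.random.default_rng(2)
d = rng.uniform(-2, 2, size=(200, 1))                # toy "dose"
y = np.sin(d[:, 0]) + 0.1 * rng.standard_normal(200) # toy "response"
f_hat = krr_fit(d, y)
print(f_hat(np.array([[0.0], [1.0]])))               # approx [0.0, 0.84]
```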
Gyroscopic alignment of a fluid occurs when flow structures align with the rotation axis. This often gives rise to highly spatially anisotropic columnar structures that, in combination with complex domain boundaries, pose challenges for efficient numerical discretizations and computations. We define gyroscopic polynomials to be three-dimensional polynomials expressed in a coordinate system that conforms to rotational alignment. We remap the original domain with radius-dependent boundaries onto a right cylindrical or annular domain to create the computational domain in this coordinate system. We find that the volume element expressed in gyroscopic coordinates leads naturally to a hierarchy of orthonormal bases. We build the bases out of Jacobi polynomials in the vertical direction and generalized Jacobi polynomials in the radial direction. Because these coordinates explicitly conform to flow structures found in rapidly rotating systems, the bases represent fields with a relatively small number of modes. We develop the operator structure for one-dimensional semi-classical orthogonal polynomials as a building block for differential operators in the full three-dimensional cylindrical and annular domains. The differential operators of generalized Jacobi polynomials generate sparse linear systems for the discretization of differential operators acting on the gyroscopic bases. This enables efficient simulation of systems with strong gyroscopic alignment.
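The kind of one-dimensional operator structure that produces such sparsity can be seen in the classical Jacobi derivative relation $\frac{d}{dx} P_n^{(a,b)}(x) = \frac{n+a+b+1}{2}\, P_{n-1}^{(a+1,b+1)}(x)$: differentiation maps each mode to exactly one mode in a shifted-parameter family. The check below verifies this numerically; it illustrates the sparsity mechanism, not the paper's generalized Jacobi construction.

```python
# Verify d/dx P_n^{(a,b)}(x) = (n+a+b+1)/2 * P_{n-1}^{(a+1,b+1)}(x) by
# comparing a central finite difference against the closed-form right side.
import numpy as np
from scipy.special import eval_jacobi

n, a, b = 5, 1.0, 0.5
x = np.linspace(-0.9, 0.9, 7)
h = 1e-6
dP = (eval_jacobi(n, a, b, x + h) - eval_jacobi(n, a, b, x - h)) / (2 * h)
rhs = 0.5 * (n + a + b + 1) * eval_jacobi(n - 1, a + 1, b + 1, x)
print(np.max(np.abs(dP - rhs)))   # ~1e-8: one mode maps to exactly one mode
```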
Predictive simulations of the shock-to-detonation transition (SDT) in heterogeneous energetic materials (EM) are vital to the design and control of their energy release and sensitivity. Due to the complexity of the thermo-mechanics of EM during the SDT, both the macro-scale response and sub-grid mesoscale energy localization must be captured accurately. This work proposes an efficient and accurate multiscale framework for SDT simulations of EM that uses deep learning to model the mesoscale energy localization of shock-initiated EM microstructures. The proposed framework is divided into two stages. First, a physics-aware recurrent convolutional neural network (PARC) is trained using direct numerical simulations (DNS) of hotspot ignition and growth within microstructures of pressed HMX material subjected to different input shock strengths. Second, the trained PARC is employed to supply hotspot ignition and growth rates for macroscale SDT simulations. We show that PARC can play the role of a surrogate model in a multiscale simulation framework, while drastically reducing the computation cost and providing improved representations of the sub-grid physics. The proposed multiscale modeling approach will provide a new tool for material scientists in designing high-performance and safer energetic materials.
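The coupling pattern can be summarized schematically as below: a trained surrogate replaces mesoscale DNS by mapping local shock state to a sub-grid reaction-rate source term that the macroscale solver consumes. Every quantity and function here is a toy stand-in (the actual PARC model is a recurrent convolutional network evaluated on microstructure images).

```python
# Schematic surrogate-in-the-loop coupling: macroscale cells query a
# sub-grid rate model instead of running mesoscale DNS. Toy stand-ins only.
import numpy as np

def surrogate_ignition_rate(pressure):
    # Hypothetical stand-in for a trained surrogate: local shock pressure ->
    # hotspot-driven reaction rate. Arbitrary toy closure.
    return 1e-3 * np.maximum(pressure - 5.0, 0.0) ** 2

p = np.linspace(0.0, 15.0, 11)    # toy cell pressures
lam = np.zeros_like(p)            # reaction progress per macro cell
dt = 1e-2
for _ in range(100):
    # Advance reaction progress with the surrogate-supplied sub-grid rate
    # (transport and energy equations omitted in this sketch).
    lam = np.clip(lam + dt * surrogate_ignition_rate(p) * (1 - lam), 0.0, 1.0)
print(lam)                        # stronger shocks react further, as expected
```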
The design and engineering of molecular communication (MC) components capable of processing chemical concentration signals is key to unleashing the potential of MC for interdisciplinary applications. By controlling the signaling pathways and molecule exchange between cell devices, synthetic biology provides the MC community with tools and techniques to achieve various signal processing functions. In this paper, we propose a design framework to realize concentration shift keying (CSK) systems of any order based on simple and reusable single-input single-output cells. The design framework also exploits distributed multicellular consortia with spatial segregation, which have advantages in system scalability, low genetic manipulation requirements, and signal orthogonality. We also create a small library of reusable engineered cells and apply them to implement binary CSK (BCSK) and quadruple CSK (QCSK) systems to demonstrate the feasibility of our proposed design framework. Importantly, we establish a mathematical framework to theoretically characterize our proposed distributed multicellular systems. Specifically, we divide a system into fundamental building blocks, derive the impulse response of each block, and obtain the end-to-end response of the system as the cascade of these impulse responses. Simulation results obtained from the agent-based simulator BSim not only validate our CSK design framework but also demonstrate the accuracy of the proposed mathematical analysis.
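The cascade principle in the analysis can be illustrated as follows: if each building block is (approximately) linear and time-invariant with impulse response $h_k(t)$, the end-to-end response is the convolution of the blocks' responses. The first-order responses below are illustrative placeholders, not the derived gene-expression kinetics.

```python
# End-to-end impulse response of two cascaded blocks as a discrete-time
# approximation of continuous convolution. Toy first-order responses.
import numpy as np

dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)
h1 = np.exp(-1.0 * t)             # toy sender-cell block, rate 1.0
h2 = 2.0 * np.exp(-2.0 * t)       # toy receiver/reporter block, rate 2.0
h_end = np.convolve(h1, h2)[: len(t)] * dt   # cascade: h_end = h1 * h2

u = (t < 1.0).astype(float)       # rectangular input concentration pulse
y = np.convolve(h_end, u)[: len(t)] * dt     # system output to the pulse
print(y.max())
```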