Ferromagnetic materials in indoor environments give rise to disturbances in the ambient magnetic field. Maps of these magnetic disturbances can be used for indoor localisation. A Gaussian process can be used to learn the spatially varying magnitude of the magnetic field using magnetometer measurements and information about the position of the magnetometer. The position of the magnetometer, however, is frequently only approximately known. This negatively affects the quality of the magnetic field map. In this paper, we investigate how an array of magnetometers can be used to improve the quality of the magnetic field map. The position of the array is only approximately known, but the relative locations of the magnetometers on the array are known exactly. We include this information in a novel method to make a map of the ambient magnetic field. We study the properties of our method in simulation and show that it improves the map quality. We also demonstrate the efficacy of our method with experimental data, mapping the magnetic field using an array of 30 magnetometers.
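The core of such a map is Gaussian process regression on scalar field-magnitude measurements. The following minimal sketch (plain NumPy, with a squared-exponential kernel; the hyperparameter values and sensor positions are purely illustrative assumptions, not taken from the paper) shows how a posterior mean is obtained at a query position:

```python
import numpy as np

def se_kernel(X1, X2, ell=0.5, sf=1.0):
    """Squared-exponential covariance between two sets of 2-D positions."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior_mean(X_train, y_train, X_test, noise=0.05):
    """Posterior mean of a zero-mean GP at the test positions."""
    K = se_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    Ks = se_kernel(X_test, X_train)
    return Ks @ np.linalg.solve(K, y_train)

# Toy example: field magnitude measured at known magnetometer positions.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))                        # sensor positions
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(50)   # noisy magnitudes
Xq = np.array([[0.5, 0.5]])                                # query position
print(gp_posterior_mean(X, y, Xq))
```

Approximately known sensor positions enter this picture as input noise on `X_train`, which is what degrades the map and what the array structure helps to mitigate.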
This work is concerned with solving high-dimensional Fokker-Planck equations from the novel perspective that solving the PDE can be reduced to independent instances of density estimation based on trajectories sampled from the associated particle dynamics. This approach sidesteps the error accumulation that arises from integrating the PDE dynamics on a parameterized function class. It also significantly simplifies deployment, as one is freed from the challenges of implementing loss terms based on the differential equation. In particular, we introduce a novel class of high-dimensional functions called the functional hierarchical tensor (FHT). The FHT ansatz leverages a hierarchical low-rank structure, offering runtime and memory complexity that scale linearly with the dimension. We introduce a sketching-based technique that performs density estimation over particles simulated from the particle dynamics associated with the equation, thereby obtaining a representation of the Fokker-Planck solution in terms of our ansatz. We apply the proposed approach successfully to three challenging time-dependent Ginzburg-Landau models with hundreds of variables.
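The trajectory-then-density-estimation idea is easiest to see in one dimension, where the Fokker-Planck equation is associated with an overdamped Langevin SDE. In the sketch below, a generic kernel density estimate stands in for the paper's hierarchical-tensor sketching, and the double-well potential, particle count, and step size are illustrative assumptions only:

```python
import numpy as np
from scipy.stats import gaussian_kde

def grad_V(x):
    """Gradient of a 1-D double-well potential V(x) = (x**2 - 1)**2 / 4."""
    return x**3 - x

def simulate_particles(n=5000, dt=1e-3, steps=2000, seed=0):
    """Euler-Maruyama for the Langevin SDE dX = -grad V(X) dt + sqrt(2) dW,
    whose law solves the Fokker-Planck equation for this potential."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    for _ in range(steps):
        x += -grad_V(x) * dt + np.sqrt(2 * dt) * rng.standard_normal(n)
    return x

particles = simulate_particles()
density = gaussian_kde(particles)   # density estimate = FP solution snapshot
print(density(np.array([0.0, 1.0])))
```

Each time slice is an independent density estimation task on the simulated particles, so no error is carried forward from earlier slices.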
Slender beams are often employed as constituents in engineering materials and structures. Prior experiments on lattices of slender beams have highlighted their complex failure response, where the interplay between buckling and fracture plays a critical role. In this paper, we introduce a novel computational approach for modeling fracture in slender beams subjected to large deformations. We adopt a state-of-the-art geometrically exact Kirchhoff beam formulation to describe the finite deformations of beams in three dimensions. We develop a discontinuous Galerkin finite element discretization of the beam governing equations, incorporating discontinuities in the position and tangent degrees of freedom at the inter-element boundaries of the finite elements. Before fracture initiation, we enforce compatibility of nodal positions and tangents weakly, via the exchange of variationally consistent forces and moments at the interfaces between adjacent elements. At the onset of fracture, these forces and moments transition to cohesive laws modeling interface failure. We conduct a series of numerical tests to verify our computational framework against a set of benchmarks, and we demonstrate its ability to capture the tensile and bending fracture modes in beams exhibiting large deformations. Finally, we validate our framework against fracture experiments on dry spaghetti rods subjected to sudden relaxation of curvature.
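As a concrete example of what "transition to cohesive laws" means, a generic linear-softening cohesive law can be written as below. This is not the specific law calibrated in the paper; the critical traction and critical opening are placeholder values:

```python
def cohesive_traction(delta, sigma_c=1.0, delta_c=0.1):
    """Generic linear-softening cohesive law: the traction transmitted across
    the interface decays linearly from sigma_c at zero opening to zero at the
    critical opening delta_c. The area under the curve is the fracture energy
    G_c = sigma_c * delta_c / 2."""
    if delta < 0:
        raise ValueError("opening displacement must be non-negative")
    return sigma_c * max(1.0 - delta / delta_c, 0.0)

print(cohesive_traction(0.0), cohesive_traction(0.05), cohesive_traction(0.2))
# -> 1.0 0.5 0.0
```

In the paper's setting, analogous laws act on both the force (position jump) and moment (tangent jump) pairs at each inter-element interface.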
In this work, an efficient and robust isogeometric three-dimensional solid-beam finite element is developed for large deformations and finite rotations, with merely displacements as degrees of freedom. The finite strain theory and hyperelastic constitutive models are considered, and B-Splines and NURBS are employed for the finite element discretization. Similar to finite elements based on Lagrange polynomials, NURBS-based formulations are also affected by the non-physical phenomenon of locking, which constrains the field variables, degrades the solution accuracy, and deteriorates the convergence behavior. To avoid this problem within the context of a solid-beam formulation, the Assumed Natural Strain (ANS) method is applied to alleviate membrane and transverse shear locking, and the Enhanced Assumed Strain (EAS) method to counteract Poisson thickness locking. Furthermore, the Mixed Integration Point (MIP) method is employed to make the formulation more efficient and robust. The proposed novel isogeometric solid-beam element is tested on several single-patch and multi-patch benchmark problems, and it is validated against classical solid finite elements and isoparametric solid-beam elements. The results show that the proposed formulation alleviates the locking effects and significantly improves the performance of the isogeometric solid-beam element. With the developed element, efficient and accurate predictions of the mechanical properties of lattice-based structured materials can be achieved. The proposed solid-beam element inherits the merits of both solid elements, e.g. flexible boundary conditions, and beam elements, i.e. higher computational efficiency.
We propose a non-linear state-space model to examine the relationship between CO$_2$ emissions, energy sources, and macroeconomic activity, using data from 1971 to 2019. CO$_2$ emissions are modeled as a weighted sum of fossil fuel use, with emission conversion factors that evolve over time to reflect technological changes. GDP is expressed as the outcome of linearly increasing energy efficiency and total energy consumption. The model is estimated using CO$_2$ data from the Global Carbon Budget, GDP statistics from the World Bank, and energy data from the International Energy Agency (IEA). Model projections of CO$_2$ emissions and GDP from 2020 to 2100 are based on energy scenarios from the Shared Socioeconomic Pathways (SSP) and the IEA's Net Zero roadmap. Emissions projections from the model are consistent with these scenarios but predict lower GDP growth. An alternative model version, assuming exponential energy efficiency improvement, produces GDP growth rates more in line with the benchmark projections. Our results imply that if internationally agreed net-zero objectives are to be fulfilled and economic growth is to follow SSP or IEA scenarios, then drastic changes in energy efficiency, not consistent with historical trends, are needed.
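The deterministic skeleton of such a model, stripped of the state-space estimation machinery, is easy to write down. In the sketch below, every series and coefficient is a synthetic placeholder (not IEA, World Bank, or Global Carbon Budget data), and the conversion factors follow an illustrative random walk to mimic the time-varying states:

```python
import numpy as np

years = np.arange(1971, 2020)
T = len(years)

# Hypothetical energy use by source (EJ/yr): coal, oil, gas.
energy = np.stack([
    np.linspace(60, 160, T),   # coal
    np.linspace(100, 190, T),  # oil
    np.linspace(35, 140, T),   # gas
])

# Emission conversion factors (GtCO2 per EJ), drifting slowly over time
# to mimic technological change (illustrative random-walk states).
rng = np.random.default_rng(1)
phi0 = np.array([0.095, 0.070, 0.056])
phi = phi0[:, None] * np.exp(np.cumsum(0.002 * rng.standard_normal((3, T)), axis=1))

co2 = (phi * energy).sum(axis=0)          # emissions: weighted sum of fuel use

# GDP = (linearly increasing energy efficiency) x (total energy consumption).
efficiency = 0.05 + 0.002 * np.arange(T)  # trillion $ per EJ (illustrative)
gdp = efficiency * energy.sum(axis=0)

print(co2[-1], gdp[-1])
```

The paper's point about efficiency follows directly from this structure: with energy paths fixed by a scenario, GDP growth is driven entirely by how fast `efficiency` is allowed to rise.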
This paper introduces a time-domain combined field integral equation for electromagnetic scattering by a perfect electric conductor. The new equation is obtained by leveraging the quasi-Helmholtz projectors, which separate both the unknown and the source fields into solenoidal and irrotational components. These two components are then appropriately rescaled to cure the loss of accuracy that occurs when the time step is large. Yukawa-type integral operators with a purely imaginary wave number are also used as a Calderon preconditioner to eliminate the ill-conditioning of the matrix systems. The stabilized time-domain electric and magnetic field integral equations are linearly combined in a Calderon-like fashion, then temporally discretized using a proper pair of trial functions, resulting in a marching-on-in-time linear system. The novel formulation is immune to spurious resonances, dense-discretization breakdown, large-time-step breakdown, and dc instabilities stemming from non-trivial kernels. Numerical results for both simply connected and multiply connected scatterers corroborate the theoretical analysis.
In this work we consider the two-dimensional instationary Navier-Stokes equations with homogeneous Dirichlet/no-slip boundary conditions. We show error estimates for the fully discrete problem, where a discontinuous Galerkin method in time and inf-sup stable finite elements in space are used. Recently, best approximation type error estimates for the Stokes problem in the $L^\infty(I;L^2(\Omega))$, $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms have been shown. The main result of the present work extends the error estimate in the $L^\infty(I;L^2(\Omega))$ norm to the Navier-Stokes equations, by pursuing an error splitting approach and an appropriate duality argument. In order to discuss the stability of solutions to the discrete primal and dual equations, a specially tailored discrete Gronwall lemma is presented. The techniques developed towards showing the $L^\infty(I;L^2(\Omega))$ error estimate also allow us to show best approximation type error estimates in the $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms, complementing the main result.
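For orientation, the standard discrete Gronwall inequality that such stability arguments build on (shown here in its basic form, not the specially tailored variant of this paper) reads:

```latex
% If the nonnegative sequence $(a_n)$ satisfies
% $a_n \le A + \sum_{k=0}^{n-1} b_k a_k$ with $b_k \ge 0$ for all $k$, then
\[
  a_n \;\le\; A \prod_{k=0}^{n-1} \bigl(1 + b_k\bigr)
      \;\le\; A \exp\Bigl(\sum_{k=0}^{n-1} b_k\Bigr).
\]
```

The point of such lemmas in fully discrete analyses is that the bound depends only on the accumulated coefficients $\sum_k b_k$, uniformly in the time step.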
Functional magnetic resonance imaging analytical workflows are highly flexible, with no definite consensus on how to choose a pipeline. While methods have been developed to explore this analytical space, there is still a lack of understanding of the relationships between the different pipelines. We use community detection algorithms to explore the pipeline space and assess its stability across different contexts. We show that there are subsets of pipelines that give similar results, especially those sharing specific parameters (e.g. the number of motion regressors or the software package), with relative stability across groups of participants. By visualizing the differences between these subsets, we describe the effect of pipeline parameters and derive general relationships in the analytical space.
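Community detection on a pipeline-similarity graph can be mimicked with any standard algorithm. The sketch below uses a deterministic label-propagation scheme on a toy graph of eight hypothetical pipelines; the graph, the sweep count, and the tie-breaking rule are illustrative choices, not the algorithms or data of the study:

```python
import numpy as np

def label_propagation(adj, sweeps=10):
    """Deterministic label propagation: each node repeatedly adopts the most
    common label among its neighbours (ties -> smallest label)."""
    n = adj.shape[0]
    labels = np.arange(n)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            neigh = labels[adj[i].astype(bool)]
            if neigh.size == 0:
                continue
            vals, counts = np.unique(neigh, return_counts=True)
            best = vals[counts == counts.max()].min()
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels

# Toy "pipeline space": two 4-pipeline cliques joined by a single edge,
# mimicking subsets of pipelines that give similar results.
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[3, 7] = A[7, 3] = 1

print(label_propagation(A))
```

On this graph the two cliques receive two distinct labels, i.e. the two "subsets of pipelines that give similar results" are recovered as communities.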
Finite-dimensional truncations are routinely used to approximate partial differential equations (PDEs), either to obtain numerical solutions or to derive reduced-order models. The resulting discretized equations are known to violate certain physical properties of the system. In particular, first integrals of the PDE may not remain invariant after discretization. Here, we use the method of reduced-order nonlinear solutions (RONS) to ensure that the conserved quantities of the PDE survive its finite-dimensional truncation. In particular, we develop two methods: Galerkin RONS and finite volume RONS. Galerkin RONS ensures the conservation of first integrals in Galerkin-type truncations, whether used for direct numerical simulations or reduced-order modeling. Similarly, finite volume RONS conserves any number of first integrals of the system, including its total energy, after finite volume discretization. Both methods are applicable to general time-dependent PDEs and can be easily incorporated into existing Galerkin-type or finite volume codes. We demonstrate the efficacy of our methods on two examples: direct numerical simulations of the shallow water equation and a reduced-order model of the nonlinear Schrödinger equation. As a byproduct, we also generalize RONS to phenomena described by a system of PDEs.
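The idea of modifying a truncation so that a first integral survives can be illustrated in the simplest possible setting: projecting the right-hand side of an ODE onto the tangent space of a level set of the invariant. This is a generic Lagrange-multiplier construction in the spirit of, but much simpler than, the RONS equations of the paper; the damped oscillator and the step size are arbitrary illustrative choices:

```python
import numpy as np

def project_rhs(f, grad_I, x):
    """Remove the component of f(x) that changes the invariant I:
    f_proj = f - (<grad I, f> / |grad I|^2) grad I, so that dI/dt = 0
    along solutions of x' = f_proj(x)."""
    g = grad_I(x)
    return f(x) - (g @ f(x)) / (g @ g) * g

# Toy truncation: a damped oscillator (the damping violates conservation
# of the energy E(z) = (x^2 + v^2) / 2).
f = lambda z: np.array([z[1], -z[0] - 0.1 * z[1]])
grad_E = lambda z: z                      # gradient of E

z = np.array([1.0, 0.0])
dt = 1e-3
for _ in range(10000):
    z = z + dt * project_rhs(f, grad_E, z)  # forward Euler on projected field

print(0.5 * z @ z)   # energy stays near its initial value 0.5
```

Without the projection, the same integration would dissipate roughly two thirds of the energy over this time span; with it, only the $O(\Delta t)$ drift of the explicit time stepper remains.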
A countable structure is indivisible if for every coloring with finite range there is a monochromatic isomorphic subcopy of the structure. Each indivisible structure $\mathcal{S}$ naturally corresponds to an indivisibility problem $\mathsf{Ind}\ \mathcal{S}$, which outputs such a subcopy given a presentation and coloring. We investigate the Weihrauch complexity of the indivisibility problems for two structures: the rational numbers $\mathbb{Q}$ as a linear order, and the equivalence relation $\mathscr{E}$ with countably many equivalence classes each having countably many members. We separate the Weihrauch degrees of both $\mathsf{Ind}\ \mathbb{Q}$ and $\mathsf{Ind}\ \mathscr{E}$ from several benchmark problems, showing in particular that $\mathsf{C}_\mathbb{N} \vert_\mathrm{W} \mathsf{Ind}\ \mathbb{Q}$ and hence $\mathsf{Ind}\ \mathbb{Q}$ is strictly weaker than the problem of finding an interval in which some color is dense for a given coloring of $\mathbb{Q}$; and that the Weihrauch degree of $\mathsf{Ind}\ \mathscr{E}_k$ is strictly between those of $\mathsf{SRT}^2_k$ and $\mathsf{RT}^2_k$, where $\mathsf{Ind}\ \mathcal{S}_k$ is the restriction of $\mathsf{Ind}\ \mathcal{S}$ to $k$-colorings.
We consider the numerical approximation of a continuum model of antiferromagnetic and ferrimagnetic materials. The state of the material is described in terms of two unit-length vector fields, which can be interpreted as the magnetizations averaging the spins of two sublattices. For the static setting, which requires the solution of a constrained energy minimization problem, we introduce a discretization based on first-order finite elements and prove its $\Gamma$-convergence. Then, we propose and analyze two iterative algorithms for the computation of low-energy stationary points. The algorithms are obtained from (semi-)implicit time discretizations of gradient flows of the energy. Finally, we extend the algorithms to the dynamic setting, which consists of a nonlinear system of two Landau-Lifshitz-Gilbert equations solved by the two fields, and we prove unconditional stability and convergence of the finite element approximations toward a weak solution of the problem. Numerical experiments assess the performance of the algorithms and demonstrate their applicability for the simulation of physical processes involving antiferromagnetic and ferrimagnetic materials.