A data-driven modeling approach is presented to quantify the influence of morphology on effective properties in nanostructured sodium vanadium phosphate $\mathrm{Na}_3\mathrm{V}_2(\mathrm{PO}_4)_3$/carbon composites (NVP/C), which are used as cathode material in sodium-ion batteries. This approach is based on the combination of advanced imaging techniques, experimental nanostructure characterization and stochastic modeling of the 3D nanostructure consisting of NVP, carbon and pores. By 3D imaging and subsequent post-processing involving image segmentation, the spatial distribution of NVP is resolved in 3D, and the spatial distribution of carbon and pores is resolved in 2D. Based on this information, a parametric stochastic model, specifically a pluri-Gaussian model, is calibrated to the 3D morphology of the nanostructured NVP/C particles. Model validation is performed by comparing the nanostructure of simulated NVP/C composites with image data in terms of morphological descriptors that have not been used for model calibration. Finally, the stochastic model is used for predictive simulation to quantify the effect of varying the amount of carbon while keeping the amount of NVP constant. The presented methodology opens new possibilities for a resource-efficient optimization of the morphology of NVP/C particles by modeling and simulation.
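As a purely illustrative sketch of the pluri-Gaussian construction mentioned above, the snippet below thresholds two correlated Gaussian random fields to obtain a three-phase structure; the field correlation lengths, thresholds and grid size are placeholder assumptions, not the calibrated values of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal pluri-Gaussian sketch: two independent smoothed Gaussian random
# fields are thresholded jointly to assign each voxel to one of three phases
# (NVP, carbon, pore). Correlation lengths and thresholds are illustrative only.
rng = np.random.default_rng(0)
shape = (64, 64, 64)

# Correlated Gaussian fields obtained by filtering white noise
x = gaussian_filter(rng.standard_normal(shape), sigma=3.0)
y = gaussian_filter(rng.standard_normal(shape), sigma=5.0)
x /= x.std()
y /= y.std()

# Phase assignment rule ("lithotype rule"): thresholds control volume fractions
nvp    = x > 0.0                      # phase 1: NVP where the first field is high
carbon = (~nvp) & (y > 0.5)           # phase 2: carbon in part of the remainder
pore   = ~(nvp | carbon)              # phase 3: pores elsewhere

phases = np.zeros(shape, dtype=np.uint8)
phases[nvp], phases[carbon], phases[pore] = 1, 2, 3
print("volume fractions:", [float((phases == k).mean()) for k in (1, 2, 3)])
```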
Current AI-based methods do not provide comprehensible physical interpretations of the utilized data, extracted features, or predictions/inference operations. As a result, deep learning models trained on high-resolution satellite imagery lack transparency and explainability and can merely be seen as black boxes, which limits their wide adoption. Experts need support in understanding the complex behavior of AI models and the underlying decision-making process. Explainable artificial intelligence (XAI) is an emerging field that provides the means for a robust, practical, and trustworthy deployment of AI models. Several XAI techniques have been proposed for image classification tasks, whereas the interpretation of image segmentation remains largely unexplored. This paper bridges this gap by adapting recent XAI classification algorithms and making them usable for multi-class image segmentation, where we mainly focus on building segmentation from high-resolution satellite images. To benchmark and compare the performance of the proposed approaches, we introduce a new XAI evaluation methodology and metric based on "Entropy" to measure model uncertainty. Conventional XAI evaluation methods rely mainly on feeding area-of-interest regions of the image back to the pre-trained (utility) model and then calculating the average change in the probability of the target class. Such evaluation metrics lack the needed robustness, and we show that using entropy to monitor the model's uncertainty in segmenting the pixels within the target class is more suitable. We hope this work will pave the way for additional XAI research on image segmentation and applications in the remote sensing discipline.
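A minimal sketch of an entropy-based uncertainty score of the kind described above is given below; the aggregation (mean per-pixel Shannon entropy over the explained region) and all variable names are illustrative assumptions, not the exact metric of the paper.

```python
import numpy as np

def mean_entropy_in_region(probs, mask, eps=1e-12):
    """Average per-pixel Shannon entropy of the softmax output `probs`
    (shape H x W x C), restricted to the pixels selected by the boolean
    `mask` (e.g. the explained building region). Higher values indicate
    larger model uncertainty for that region."""
    p = np.clip(probs[mask], eps, 1.0)           # (N_pixels, C)
    entropy = -np.sum(p * np.log(p), axis=-1)    # Shannon entropy per pixel
    return float(entropy.mean())

# Illustrative usage with random data standing in for real model outputs
rng = np.random.default_rng(0)
logits = rng.normal(size=(256, 256, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
building_mask = rng.random((256, 256)) > 0.8
print(mean_entropy_in_region(probs, building_mask))
```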
Modern inelastic material model formulations rely on tensor-valued internal variables. When the inelastic phenomena include softening, simulations are prone to localization. An accurate regularization of the tensor-valued internal variables is therefore essential to obtain physically correct results. Here, we focus on the regularization of anisotropic damage at finite strains. To this end, a flexible anisotropic damage model with isotropic, kinematic, and distortional hardening is equipped with three gradient-extensions, using a full regularization of the damage tensor and two reduced ones. Theoretical and numerical comparisons of the three gradient-extensions yield excellent agreement between the full regularization and the reduced regularization based on a volumetric-deviatoric split, which requires only two nonlocal degrees of freedom.
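As a purely illustrative sketch (assumed notation, not necessarily the exact formulation of the paper), a volumetric-deviatoric reduction may regularize only two scalar measures of the damage tensor:

```latex
% Illustrative sketch: volumetric-deviatoric split of the damage tensor and
% micromorphic-type regularization of two scalar measures only.
\[
  \boldsymbol{D} \;=\; \tfrac{1}{3}\operatorname{tr}(\boldsymbol{D})\,\boldsymbol{I}
  \;+\; \operatorname{dev}(\boldsymbol{D}),
  \qquad
  d_{\mathrm{vol}} := \operatorname{tr}(\boldsymbol{D}),
  \quad
  d_{\mathrm{dev}} := \lVert \operatorname{dev}(\boldsymbol{D}) \rVert .
\]
\[
  \bar d_{\bullet} \;-\; \ell_{\bullet}^{2}\,\Delta \bar d_{\bullet} \;=\; d_{\bullet},
  \qquad \bullet \in \{\mathrm{vol}, \mathrm{dev}\},
\]
% so that only the two nonlocal fields $\bar d_{\mathrm{vol}}$ and $\bar d_{\mathrm{dev}}$
% are carried instead of the six independent components of $\boldsymbol{D}$.
```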
The ultimate goal of any numerical scheme for partial differential equations (PDEs) is to compute an approximation of user-prescribed accuracy at quasi-minimal computational time. To this end, algorithmically, the standard adaptive finite element method (AFEM) integrates an inexact solver and nested iterations with discerning stopping criteria balancing the different error components. The analysis ensuring optimal convergence order of AFEM with respect to the overall computational cost critically hinges on the concept of R-linear convergence of a suitable quasi-error quantity. This work tackles several shortcomings of previous approaches by introducing a new proof strategy. First, the standard algorithm requires several fine-tuned parameters in order to make the underlying analysis work. A redesign of the standard line of reasoning and the introduction of a summability criterion for R-linear convergence allow us to remove the restrictions on these parameters. Second, the usual assumption of a (quasi-)Pythagorean identity is replaced by the generalized notion of quasi-orthogonality from [Feischl, Math. Comp., 91 (2022)]. Importantly, this paves the way towards extending the analysis to general inf-sup stable problems beyond the energy minimization setting. Numerical experiments investigate the choice of the adaptivity parameters.
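For context, R-linear convergence of a quasi-error quantity $\Delta_\ell$ is commonly formalized as follows (a standard definition; the precise quasi-error used in the paper may differ):

```latex
% R-linear convergence of the quasi-error sequence (\Delta_\ell)_{\ell \ge 0}:
% there exist C_{\mathrm{lin}} \ge 1 and 0 < q_{\mathrm{lin}} < 1 such that
\[
  \Delta_{\ell + k} \;\le\; C_{\mathrm{lin}}\, q_{\mathrm{lin}}^{\,k}\, \Delta_{\ell}
  \qquad \text{for all } k, \ell \in \mathbb{N}_0 .
\]
```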
We present and analyze a discontinuous Galerkin method for the numerical modeling of the non-linear fully-coupled thermo-hydro-mechanical problem. We propose a high-order symmetric weighted interior penalty scheme that supports general polytopal grids and is robust with respect to strong heterogeneities in the model coefficients. We focus on the treatment of the non-linear convective transport term in the energy conservation equation, and we propose suitable stabilization techniques that make the scheme robust for advection-dominated regimes. The stability analysis of the problem and the convergence of the fixed-point linearization strategy are addressed theoretically under mild requirements on the problem's data. A complete set of numerical simulations is presented in order to assess the convergence and robustness properties of the proposed method.
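To make the term "symmetric weighted interior penalty" concrete, the following is a sketch of the diffusivity-dependent face weights and penalty scaling typically used in SWIP schemes (a standard construction; the exact coefficients of the proposed method may differ):

```latex
% Weighted averages on an interior face F shared by elements T^+ and T^-,
% with \delta^\pm the normal diffusivity seen from each side:
\[
  \{\!\{ v \}\!\}_\omega \;=\; \omega^{+} v^{+} + \omega^{-} v^{-},
  \qquad
  \omega^{\pm} \;=\; \frac{\delta^{\mp}}{\delta^{+} + \delta^{-}} ,
\]
% together with a penalty coefficient scaling with the harmonic mean of the
% diffusivities,
\[
  \sigma_F \;\sim\; \frac{2\,\delta^{+}\delta^{-}}{\delta^{+} + \delta^{-}}
  \cdot \frac{\eta\, p^{2}}{h_F},
\]
% which is what renders the scheme robust to strong coefficient heterogeneities.
```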
In this work, a class of continuous-time autonomous dynamical systems describing many important phenomena and processes arising in real-world applications is considered. We apply the nonstandard finite difference (NSFD) methodology proposed by Mickens to design a generalized NSFD method for the dynamical system models under consideration. This method is constructed based on a novel non-local approximation for the right-hand side functions of the dynamical systems. It is proved by rigorous mathematical analyses that the NSFD method is dynamically consistent with respect to positivity, asymptotic stability and three classes of conservation laws, namely direct conservation, generalized conservation and sub-conservation laws. Furthermore, the NSFD method is easy to implement and can be applied to solve a broad range of mathematical models arising in real-life applications. Finally, a set of numerical experiments is performed to illustrate the theoretical findings and to show the advantages of the proposed NSFD method.
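A minimal sketch of the kind of non-local right-hand side approximation underlying NSFD schemes is shown below for the logistic equation, a standard Mickens-type example rather than the generalized scheme of the paper; step size, initial value and denominator function are illustrative choices.

```python
import numpy as np

# Mickens-type NSFD sketch for the logistic equation x' = x(1 - x):
# the nonlinear term x^2 is approximated non-locally as x_k * x_{k+1}, and the
# step size h is replaced by the denominator function phi(h) = exp(h) - 1.
# Solving (x_{k+1} - x_k)/phi = x_k - x_k*x_{k+1} for x_{k+1} gives an explicit
# update that preserves positivity and the equilibria x = 0 and x = 1.
def nsfd_logistic(x0, h, n_steps):
    phi = np.exp(h) - 1.0
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = (1.0 + phi) * x[k] / (1.0 + phi * x[k])
    return x

def explicit_euler_logistic(x0, h, n_steps):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + h * x[k] * (1.0 - x[k])
    return x

# With a large step the standard scheme overshoots and oscillates around 1,
# while the NSFD update stays positive and approaches 1 monotonically.
print(explicit_euler_logistic(0.1, 2.5, 10))
print(nsfd_logistic(0.1, 2.5, 10))
```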
Learnable embedding vectors are among the most important components in machine learning and are widely used in various database-related domains. However, the high dimensionality of sparse data in recommendation tasks and the huge volume of corpora in retrieval-related tasks lead to a large memory consumption of the embedding table, which poses a great challenge to the training and deployment of models. Recent research has proposed various methods to compress the embeddings at the cost of a slight decrease in model quality or the introduction of other overheads. Nevertheless, the relative performance of these methods remains unclear. Existing experimental comparisons only cover a subset of these methods and focus on limited metrics. In this paper, we perform a comprehensive comparative analysis and experimental evaluation of embedding compression. We introduce a new taxonomy that categorizes these techniques based on their characteristics and methodologies, and further develop a modular benchmarking framework that integrates 14 representative methods. Under a uniform test environment, our benchmark fairly evaluates each approach, presents their strengths and weaknesses under different memory budgets, and recommends the best method based on the use case. In addition to providing useful guidelines, our study also uncovers the limitations of current methods and suggests potential directions for future research.
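The sketch below illustrates one representative family of compression techniques, a hashing-based shared embedding table; it is purely illustrative and not one of the 14 benchmarked implementations, and the hash constants and sizes are arbitrary assumptions.

```python
import numpy as np

class HashedEmbedding:
    """Minimal sketch of a hashing-based compressed embedding table: each of
    the (potentially billions of) categorical ids is mapped to rows of a much
    smaller table via two independent hash functions, and the final embedding
    is the sum of the two rows. Memory drops from num_ids*dim to 2*buckets*dim.
    Illustrative only; not one of the benchmarked implementations."""

    def __init__(self, num_buckets, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.table_a = rng.normal(0.0, 0.01, size=(num_buckets, dim))
        self.table_b = rng.normal(0.0, 0.01, size=(num_buckets, dim))
        self.num_buckets = num_buckets

    def lookup(self, ids):
        ids = np.asarray(ids, dtype=np.uint64)
        idx_a = (ids * 2654435761) % self.num_buckets        # hash function 1
        idx_b = (ids * 40503 + 2531011) % self.num_buckets   # hash function 2
        return self.table_a[idx_a] + self.table_b[idx_b]

emb = HashedEmbedding(num_buckets=10_000, dim=16)
print(emb.lookup([3, 123456789, 987654321]).shape)   # (3, 16)
```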
A system of coupled oscillators on an arbitrary graph is locally driven by the tendency toward mutual synchronization between nearby oscillators, but can, and often does, exhibit nonlinear behavior on the whole graph. Understanding such nonlinear behavior has been a key challenge in predicting whether all oscillators in such a system will eventually synchronize. In this paper, we demonstrate that, surprisingly, such nonlinear behavior of coupled oscillators can be effectively linearized in certain latent dynamic spaces. The key insight is that there is a small number of `latent dynamics filters', each with a specific association with synchronizing and non-synchronizing dynamics on subgraphs, so that any observed dynamics on subgraphs can be approximated by a suitable linear combination of such elementary dynamic patterns. Taking an ensemble of subgraph-level predictions provides an interpretable predictor of whether the system on the whole graph reaches global synchronization. We propose algorithms based on supervised matrix factorization to learn such latent dynamics filters. We demonstrate that our method performs competitively in synchronization prediction tasks against baselines and black-box classification algorithms, despite its simple and interpretable architecture.
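A minimal sketch of the general recipe (factorize subgraph-level dynamics into a few latent filters, then classify in coefficient space and ensemble over subgraphs) is given below; plain NMF plus logistic regression stands in for the supervised matrix factorization of the paper, and the data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

# Rows of X are nonnegative feature vectors of observed dynamics on sampled
# subgraphs; y indicates whether the full system eventually synchronized.
# Random placeholders stand in for real oscillator trajectories.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 200)))       # subgraph dynamics features
y = rng.integers(0, 2, size=500)              # synchronization labels

nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
coeffs = nmf.fit_transform(X)                 # coefficients in the latent-filter basis
filters = nmf.components_                     # the learned "latent dynamics filters"

clf = LogisticRegression(max_iter=1000).fit(coeffs, y)

# Ensemble prediction for a new graph: average subgraph-level probabilities.
X_new_subgraphs = np.abs(rng.normal(size=(30, 200)))
probs = clf.predict_proba(nmf.transform(X_new_subgraphs))[:, 1]
print("predicted probability of global synchronization:", probs.mean())
```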
When modeling scientific and industrial problems, geometries are typically modeled by explicit boundary representations obtained from computer-aided design software. Unfitted (also known as embedded or immersed) finite element methods offer a significant advantage in dealing with complex geometries, eliminating the need for generating unstructured body-fitted meshes. However, current unfitted finite elements on nonlinear geometries are restricted to implicit (possibly high-order) level set geometries. In this work, we introduce a novel automatic computational pipeline to approximate solutions of partial differential equations on domains defined by explicit nonlinear boundary representations. For the geometrical discretization, we propose a novel algorithm to generate quadratures for the bulk and surface integration on nonlinear polytopes required to compute all the terms in unfitted finite element methods. The algorithm relies on a nonlinear triangulation of the boundary, a kd-tree refinement of the surface cells that reduces the nonlinear intersections of surface and background cells to simple cases that are diffeomorphically equivalent to linear intersections, robust polynomial root-finding algorithms, and surface parameterization techniques. We prove the correctness of the proposed algorithm. We have successfully applied this algorithm to simulate partial differential equations with unfitted finite elements on nonlinear domains described by computer-aided design models, demonstrating the robustness of the geometric algorithm and showing high-order accuracy of the overall method.
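To illustrate one ingredient of such pipelines, polynomial root finding for curve/cell-face intersections, the following is a small sketch; the parameterization, tolerances and the companion-matrix root finder of numpy are assumptions standing in for the robust root-finding algorithms described in the paper.

```python
import numpy as np

def curve_face_intersections(coeffs_x, plane_x, tol=1e-12):
    """Parameter values t in [0, 1] at which the polynomial curve component
    x(t) (coefficients in increasing degree order) crosses the background-cell
    face x = plane_x. numpy's companion-matrix roots stand in for the robust
    root finder used in the actual pipeline."""
    poly = np.polynomial.Polynomial(coeffs_x) - plane_x
    roots = poly.roots()
    real = roots[np.abs(roots.imag) < tol].real
    return np.sort(real[(real > -tol) & (real < 1.0 + tol)])

# Quadratic curve component x(t) = 3 t^2 - 3 t + 0.8 against the face x = 0.5:
# two intersections inside the parameter interval [0, 1] are recovered.
print(curve_face_intersections([0.8, -3.0, 3.0], 0.5))
```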
This document defines a simple method for FIR system modelling that relies solely on the introduction and removal of phase (allpass filters). As the magnitude is not altered, the processing is numerically stable. The method is limited to phase alterations that maintain the time-domain magnitude, keeping the system within its linear limits.
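A minimal numerical sketch of the phase-only idea follows: multiplying a frequency response by a unit-magnitude factor changes the phase but leaves the magnitude response exactly unchanged. The prototype filter and the phase function are arbitrary illustrations, not the method's actual design.

```python
import numpy as np

# Phase-only (allpass) sketch: multiplying the frequency response by
# exp(j*theta(w)) introduces phase while keeping |H| exactly unchanged.
h = np.array([0.1, 0.3, 0.5, 0.3, 0.1])          # some FIR impulse response
n_fft = 256
H = np.fft.rfft(h, n_fft)                         # frequency response

w = np.linspace(0.0, np.pi, H.size)               # digital frequencies
theta = -2.0 * np.sin(3.0 * w)                    # arbitrary allpass phase term
H_mod = H * np.exp(1j * theta)                    # phase introduced, magnitude kept

h_mod = np.fft.irfft(H_mod, n_fft)                # modified impulse response
print(np.max(np.abs(np.abs(H_mod) - np.abs(H))))  # ~0: magnitude response unchanged
```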
Improving the resolution of fluorescence microscopy beyond the diffraction limit can be achieved by acquiring and processing multiple images of the sample under different illumination conditions. One of the simplest techniques, Random Illumination Microscopy (RIM), forms the super-resolved image from the variance of images obtained with random speckled illuminations. However, the validity of this process has not been fully theorized. In this work, we characterize mathematically the sample information contained in the variance of diffraction-limited speckled images as a function of the statistical properties of the illuminations. We show that an unambiguous two-fold resolution gain is obtained when the speckle correlation length coincides with the width of the observation point spread function. Last, we analyze the difference between the variance-based techniques using random speckled illuminations (as in RIM) and those obtained using random fluorophore activation (as in Super-resolution Optical Fluctuation Imaging, SOFI).
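A 1D toy simulation of the variance-of-speckle-images principle is sketched below; the object, PSF cutoff and speckle correlation length are arbitrary assumptions chosen so that the speckle correlation length matches the observation PSF width, as in the regime discussed above.

```python
import numpy as np

# 1D toy sketch of the RIM principle: image the product of a fluorescent object
# with many independent random speckle illuminations through a low-pass PSF,
# then form the pixel-wise variance over the realizations.
rng = np.random.default_rng(0)
n, n_speckles = 512, 2000

obj = np.zeros(n)
obj[200], obj[205], obj[300] = 1.0, 1.0, 0.7      # point-like fluorophores

def lowpass(signal, cutoff):
    """Ideal low-pass filter, acting as a diffraction-limited OTF."""
    F = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n)
    F[freqs > cutoff] = 0.0
    return np.fft.irfft(F, n)

cutoff_psf = 0.04                                  # observation OTF cutoff
cutoff_speckle = 0.04                              # speckle correlation ~ PSF width

images = np.empty((n_speckles, n))
for k in range(n_speckles):
    field = lowpass(rng.standard_normal(n), cutoff_speckle)  # correlated field
    speckle = field ** 2                            # nonnegative speckle intensity
    images[k] = lowpass(obj * speckle, cutoff_psf)  # diffraction-limited image

mean_img = images.mean(axis=0)                      # ~ conventional widefield image
var_img = images.var(axis=0)                        # RIM-type variance signal
print(mean_img[195:215].round(4))
print(var_img[195:215].round(6))
```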