Fracture of viscoelastic materials is considered a complex phenomenon due to their highly rate-sensitive behavior. In this context, we are interested in the quasi-static response of a viscoelastic solid subjected to damage. This paper outlines a new incremental variational approach and its computational implementation for modeling damage in viscoelastic solids. The variational formalism allows us to embed the local constitutive equations into a global incremental potential, the minimization of which provides the solution to the mechanical problem. Softening damage models in their local form are known to produce spurious mesh-sensitive results, and hence non-locality (or regularization) has to be introduced to preserve the mathematical relevance of the problem. In the present paper, we consider two different regularization techniques for the viscoelastic damage model: a particular phase-field approach and a lip-field approach. The model parameters are calibrated to obtain some equivalence between the two approaches. Numerical results are then presented for the two-dimensional case, and the two approaches compare well. The numerical results also demonstrate the ability of the model to qualitatively reproduce the typical rate-dependent behaviour of viscoelastic materials. Moreover, the novelty of the present work lies in the use of the lip-field approach, for the first time, in a viscoelastic context.
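To fix ideas, the incremental potential and the two regularizations can be sketched in generic notation as follows (our own illustrative notation, not the paper's exact formulation):

```latex
% Generic incremental potential for a viscoelastic solid with damage
% (illustrative notation: g(d) degradation function, w(d) damage
% dissipation density, psi^ve the condensed viscoelastic potential;
% none of these symbols are taken from the paper).
(u_{n+1}, d_{n+1}) \in \arg\min_{u,\; d \geq d_n}\;
\int_\Omega \Big[\, g(d)\,\psi^{\mathrm{ve}}_{\Delta t}\big(\varepsilon(u);\,\text{history}\big)
 + w(d) \,\Big]\,\mathrm{d}\Omega \;-\; \mathcal{W}_{\mathrm{ext}}(u)
```

where a phase-field regularization adds a gradient term of the type $\tfrac12 G_c \ell\, |\nabla d|^2$ to the integrand, while the lip-field approach keeps the local functional and instead constrains $d$ to satisfy a Lipschitz bound, $|d(x)-d(y)| \leq |x-y|/\ell$ for all $x, y \in \Omega$.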
We present a novel numerical method for solving the anisotropic diffusion equation in toroidally confined magnetic fields that is efficient, accurate and provably stable. The continuous problem is written in terms of a derivative operator for the perpendicular transport and a linear operator, obtained through field-line tracing, for the parallel transport. We derive energy estimates for the solution of the continuous initial boundary value problem. A discrete formulation is presented using operator splitting in time, with a summation-by-parts (SBP) finite difference approximation of the spatial derivatives for the perpendicular diffusion operator. Weak penalty procedures are derived for implementing both the boundary conditions and the parallel diffusion operator obtained by field-line tracing. We prove that the fully discrete approximation is unconditionally stable and asymptotic-preserving. The discrete energy estimates are shown to match the continuous energy estimate for the correct choice of penalty parameters. Convergence tests are shown for the perpendicular operator on its own, and the ``NIMROD benchmark'' problem is used as a manufactured solution to show that the full scheme converges even when the perpendicular diffusion is zero. Finally, we present a magnetic field with chaotic regions and islands and show that the solution contours of the anisotropic diffusion equation reproduce key features of the field.
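For orientation, the strongly anisotropic diffusion problem referred to above is typically of the following form (generic notation assumed by us, not quoted from the paper), which motivates the splitting into perpendicular and parallel operators:

```latex
% Generic strongly anisotropic diffusion along a magnetic field direction b
% (illustrative form; coefficients, source terms and splitting order are
% assumptions, not taken from the paper).
\partial_t T = \nabla\cdot\big(\kappa_\parallel\, \hat{b}\,\hat{b}\cdot\nabla T\big)
             + \nabla\cdot\big(\kappa_\perp\,(\mathbb{I} - \hat{b}\hat{b})\cdot\nabla T\big),
\qquad \kappa_\parallel \gg \kappa_\perp
```

An operator-split step then advances the two contributions separately, e.g. $T^{n+1} = \mathcal{S}^{\Delta t}_{\parallel}\,\mathcal{S}^{\Delta t}_{\perp}\,T^{n}$, with the perpendicular sub-step discretized by SBP finite differences and the parallel sub-step handled through the field-line-tracing operator with weak (SAT-type) penalties.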
Recurrent neural networks have been extensively developed for effectively solving time-varying problems arising in complex environments. However, because processing is centralized, model performance is strongly affected in practice by factors such as model and data silos. The emergence of distributed artificial intelligence such as federated learning (FL) therefore makes dynamic aggregation among models possible. However, the integration process of FL is still server-dependent, which poses a significant risk to the overall model. Moreover, FL only allows collaboration between homogeneous models and offers no good solution for the interaction between heterogeneous models. We therefore propose a Distributed Computation Model (DCM) based on a consortium blockchain network to improve the credibility of the overall model and to enable effective coordination among heterogeneous models. In addition, a Distributed Hierarchical Integration (DHI) algorithm is designed for the global solution process. Within a group, permissioned nodes collect the local models' results from different permissionless nodes and then send the aggregated results back to all the permissionless nodes to regularize the processing of the local models. After the iterations are completed, a secondary integration of the local results is performed among the permissioned nodes to obtain the global results. In the experiments, we verify the efficiency of DCM; the results show that the proposed model outperforms many state-of-the-art models based on a federated learning framework.
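The two-level aggregation described above can be sketched in simplified form (a hypothetical Python illustration using plain averaging; the function names, the regularization rule and the weights are our own assumptions, and the blockchain/consensus layer of DCM is not modelled here):

```python
import numpy as np

# Hypothetical sketch of the group -> global aggregation described above;
# names and the plain-averaging rule are our own simplifications, not the
# paper's DHI algorithm.

def group_aggregate(local_results):
    """A permissioned node averages the results received from the
    permissionless nodes in its group."""
    return np.mean(np.stack(local_results), axis=0)

def regularize(local_result, group_result, lam=0.5):
    """Each permissionless node pulls its local result towards the group
    aggregate (simple convex combination, chosen here for illustration)."""
    return (1.0 - lam) * local_result + lam * group_result

def global_integration(group_results):
    """Secondary integration among permissioned nodes after the local
    iterations have completed."""
    return np.mean(np.stack(group_results), axis=0)

# Example: two groups of permissionless nodes, each producing a
# 3-dimensional result vector.
groups = [
    [np.array([1.0, 2.0, 3.0]), np.array([1.2, 1.8, 3.1])],
    [np.array([0.9, 2.1, 2.9])],
]
group_results = [group_aggregate(g) for g in groups]
global_result = global_integration(group_results)
```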
This paper investigates high-quality analytical reconstructions for multiple source-translation computed tomography (mSTCT) under an extended field of view (FOV). For larger FOVs, the previously proposed backprojection filtration (BPF) algorithms for mSTCT, namely D-BPF and S-BPF, produce intolerable errors at the image edges due to an unstable backprojection weighting factor and the half-scan mode, which deviates from the intention of mSTCT imaging. In this paper, to achieve reconstruction with as little error as possible under an extremely extended FOV, we propose two strategies: deriving a no-weighting D-BPF (NWD-BPF) for mSTCT, and introducing BPFs into a special full-scan mSTCT (F-mSTCT) to balance errors, abbreviated FD-BPF and FS-BPF. For the first strategy, we eliminate the unstable backprojection weighting factor by introducing a special variable relationship into D-BPF. For the second strategy, we combine the F-mSTCT geometry with BPFs, study their performance, and derive a suitable redundancy weighting function for F-mSTCT. The experiments demonstrate the proposed methods for both strategies. Among them, NWD-BPF weakens the instability at the image edges but blurs details, while FS-BPF yields high-quality, stable images when imaging a large object under an extremely extended FOV, at the cost of more projections than FD-BPF. For different practical requirements in extended-FOV imaging, we give suggestions on algorithm selection.
In this paper we study a non-local Cahn-Hilliard equation with a singular single-well potential and degenerate mobility. It arises as a particular case of a more general model derived for a binary, saturated, closed and incompressible mixture, composed of a tumor phase and a healthy phase, evolving in a bounded domain. The general system couples a Darcy-type evolution for the average velocity field with a convective reaction-diffusion equation for the nutrient concentration and a non-local convective Cahn-Hilliard equation for the tumor phase. The main mathematical difficulty is the proof of the separation property for the tumor phase in the Cahn-Hilliard equation: to our knowledge, this problem is still open in the literature. For this reason, in the present contribution we restrict the analytical study to the Cahn-Hilliard equation alone. For the non-local Cahn-Hilliard equation with singular single-well potential and degenerate mobility, we study the existence and uniqueness of weak solutions for spatial dimensions $d\leq 3$. After showing existence, we prove the strict separation property in three spatial dimensions, which implies the same property in lower spatial dimensions and opens the way to the proof of uniqueness of solutions. Finally, we propose a well-posed and gradient-stable continuous finite element approximation of the model for $d\leq 3$, which preserves the physical properties of the continuous solution and is computationally efficient, and we show simulation results in two spatial dimensions that demonstrate the consistency of the proposed scheme and describe the phase-ordering dynamics associated with the system.
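For reference, a non-local Cahn-Hilliard system of the kind considered here typically takes the following form (generic notation; the exact potential, mobility and boundary conditions used in the paper are not reproduced):

```latex
% Typical non-local Cahn-Hilliard equation with degenerate mobility
% (illustrative form; J is a symmetric interaction kernel, and the
% specific choices below are assumptions, not the paper's).
\begin{cases}
\partial_t \varphi = \nabla\cdot\big( m(\varphi)\,\nabla\mu \big), \\[2pt]
\mu = F'(\varphi) + (J * 1)\,\varphi - J * \varphi,
\qquad (J * \varphi)(x) = \displaystyle\int_\Omega J(x-y)\,\varphi(y)\,\mathrm{d}y,
\end{cases}
```

with a degenerate mobility such as $m(\varphi) = \varphi(1-\varphi)$ and a singular single-well potential $F$ blowing up at one endpoint of the physical range of $\varphi$, which is precisely what makes the strict separation property delicate to establish.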
A novel numerical strategy is introduced for computing approximations of solutions to a Cahn-Hilliard model with degenerate mobilities. This model has recently been introduced as a second-order phase-field approximation of surface diffusion flows. Its numerical discretization is challenging due to the degeneracy of the mobilities, which generally requires an implicit treatment to avoid stability issues, at the price of an increased computational cost. To mitigate this drawback, we consider new first- and second-order Scalar Auxiliary Variable (SAV) schemes that, unlike existing approaches, focus on the relaxation of the mobility rather than of the Cahn-Hilliard energy. These schemes are introduced and analysed theoretically in the general context of gradient flows and are then specialised to the Cahn-Hilliard equation with degenerate mobilities. Various numerical experiments are conducted to highlight the advantages of the new schemes in terms of accuracy, effectiveness and computational cost.
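As background, the classical energy-based SAV construction for a gradient flow $\partial_t\phi = -\mathcal{G}\mu$ with energy $E(\phi) = \tfrac12(\phi,\mathcal{L}\phi) + E_1(\phi)$ can be sketched as follows; the schemes proposed here differ in that the auxiliary variable relaxes the mobility rather than the energy, which is not reproduced in this sketch.

```latex
% Classical (energy-based) first-order SAV scheme for a gradient flow;
% illustrative background only -- the paper's schemes instead attach
% the auxiliary variable to the mobility.
r(t) = \sqrt{E_1(\phi) + C_0}, \qquad
\begin{cases}
\dfrac{\phi^{n+1}-\phi^{n}}{\Delta t} = -\mathcal{G}\,\mu^{n+1},\\[4pt]
\mu^{n+1} = \mathcal{L}\phi^{n+1}
  + \dfrac{r^{n+1}}{\sqrt{E_1(\phi^{n}) + C_0}}\, U(\phi^{n}),
  \qquad U = \dfrac{\delta E_1}{\delta \phi},\\[4pt]
r^{n+1} - r^{n} = \dfrac{\big(U(\phi^{n}),\, \phi^{n+1}-\phi^{n}\big)}
  {2\sqrt{E_1(\phi^{n}) + C_0}},
\end{cases}
```

which is linear in $(\phi^{n+1},\mu^{n+1},r^{n+1})$ and unconditionally dissipates the modified energy $\tfrac12(\phi^{n+1},\mathcal{L}\phi^{n+1}) + |r^{n+1}|^2$.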
Computational optimal transport (OT) has recently emerged as a powerful framework with applications in various fields. In this paper we focus on a relaxation of the original OT problem, the entropic OT problem, which admits efficient and practical algorithmic solutions, even in high-dimensional settings. This formulation, also known as the Schr\"odinger Bridge problem, notably connects with Stochastic Optimal Control (SOC) and can be solved with the popular Sinkhorn algorithm. In the case of discrete state spaces, this algorithm is known to converge exponentially; achieving a similar rate of convergence in more general settings is still an active area of research. In this work, we analyze the convergence of the Sinkhorn algorithm for probability measures defined on the $d$-dimensional torus $\mathbb{T}_L^d$ that admit densities with respect to the Haar measure of $\mathbb{T}_L^d$. In particular, we prove pointwise exponential convergence of the Sinkhorn iterates and their gradients. Our proof relies on the connection between these iterates and the evolution, along the Hamilton-Jacobi-Bellman equations, of value functions obtained from SOC problems. Our approach is novel in that it is purely probabilistic and relies on coupling-by-reflection techniques for controlled diffusions on the torus.
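For reference, the Sinkhorn algorithm mentioned above takes a very compact form in the discrete setting (a minimal Python sketch for two discrete probability vectors on a grid; the paper's analysis concerns measures on the torus $\mathbb{T}_L^d$, which is not reflected here):

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=1000, tol=1e-9):
    """Entropic OT between discrete distributions a and b with cost
    matrix C and regularization eps. Returns the coupling matrix.
    (Illustrative discrete sketch, not the setting analyzed in the paper.)"""
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u_prev = u
        u = a / (K @ v)             # scale rows to match marginal a
        v = b / (K.T @ u)           # scale columns to match marginal b
        if np.max(np.abs(u - u_prev)) < tol:
            break
    return u[:, None] * K * v[None, :]

# Example: uniform source and a Gaussian-like target on a 1D grid.
n = 50
x = np.linspace(0.0, 1.0, n)
a = np.ones(n) / n
b = np.exp(-((x - 0.7) ** 2) / 0.01)
b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2  # quadratic cost
P = sinkhorn(a, b, C, eps=0.01)
```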
Given a set of inelastic material models, a microstructure, a macroscopic structural geometry, and a set of boundary conditions, one can in principle always solve the governing equations to determine the system's mechanical response. For large systems, however, this procedure can quickly become computationally overwhelming, especially in three dimensions when the microstructure is locally complex. In such settings, multi-scale modeling offers a route to a more efficient model: it holds out the promise of a framework with fewer degrees of freedom that, at the same time, faithfully represents the behavior of the system up to a certain scale. In this paper, we present a methodology that produces such models for inelastic systems on the basis of a variational scheme. The essence of the scheme is a variational statement that constructs the free energy and the dissipation potential of a coarse-scale model in terms of the free energy and dissipation functions of the fine-scale model. From the coarse-scale energy and dissipation we can then generate coarse-scale material models that are computationally far more efficient than either directly solving the fine-scale model or resorting to FE$^2$-type modeling. Moreover, the coarse-scale model preserves the essential mathematical structure of the fine-scale model. An essential feature of such schemes is the proper definition of the coarse-scale inelastic variables. By way of concrete examples, we illustrate the steps needed to generate successful models through applications to problems in classical plasticity; comparisons with direct numerical simulations of the microstructure are included to illustrate the accuracy of the proposed methodology.
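The variational structure behind such a construction can be summarized in generic notation as follows (a sketch under our own notation and averaging assumptions, not the paper's precise definitions):

```latex
% Generic coarse-graining of the free energy and dissipation potential
% (illustrative notation: eps-bar is the coarse strain, z-bar the coarse
% internal variables, Y the fine-scale domain; the constraint sets A and B
% are assumptions standing in for the paper's definitions).
\begin{aligned}
\bar{\psi}(\bar{\varepsilon}, \bar{z})
&= \min_{(\varepsilon,\, z) \,\in\, \mathcal{A}(\bar{\varepsilon}, \bar{z})}
\;\frac{1}{|Y|}\int_Y \psi\big(\varepsilon(y), z(y)\big)\,\mathrm{d}y, \\
\bar{\phi}(\dot{\bar{z}})
&= \min_{\dot{z} \,\in\, \mathcal{B}(\dot{\bar{z}})}
\;\frac{1}{|Y|}\int_Y \phi\big(\dot{z}(y)\big)\,\mathrm{d}y,
\end{aligned}
```

where $\mathcal{A}$ and $\mathcal{B}$ encode how the coarse-scale variables constrain the fine-scale fields (for example through volume averages); the coarse-scale evolution then follows from the usual incremental minimization of $\bar{\psi} + \Delta t\,\bar{\phi}$, so the coarse model inherits the variational structure of the fine one.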
We develop a new model for spatial random field reconstruction of a binary-valued spatial phenomenon. In our model, sensors are deployed in a wireless sensor network across a large geographical region. Each sensor measures a non-Gaussian inhomogeneous temporal process which depends on the spatial phenomenon. Two types of sensors are employed: one collects point observations at specific time points, while the other collects integral observations over time intervals. Subsequently, the sensors transmit these time-series observations to a Fusion Center (FC), and the FC infers the spatial phenomenon from them. We show that the resulting posterior predictive distribution is intractable and develop a tractable two-step procedure to perform inference. First, we develop algorithms to perform approximate likelihood ratio tests on the time-series observations, compressing them to a single bit for both point sensors and integral sensors. Second, once the compressed observations are transmitted to the FC, we utilize a Spatial Best Linear Unbiased Estimator (S-BLUE) to reconstruct the binary spatial random field at any desired spatial location. The performance of the proposed approach is studied using simulations. We further illustrate the effectiveness of our method using a weather dataset from the National Environment Agency (NEA) of Singapore containing temperature and relative humidity fields.
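To illustrate the second step, a generic best linear unbiased (kriging-type) predictor can be sketched as follows (the squared-exponential kernel, noise model and example data are our own assumptions; the paper's S-BLUE additionally operates on the one-bit compressed observations and on a binary field):

```python
import numpy as np

# Generic BLUE/kriging sketch; kernel, noise model and data are assumptions,
# not the paper's S-BLUE construction.

def se_kernel(X1, X2, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 2-D locations."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def blue_predict(X_obs, z_obs, X_new, noise=0.1, mean=0.0):
    """Best linear unbiased predictor of a latent Gaussian field at X_new
    given noisy observations z_obs at locations X_obs."""
    K = se_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    k_star = se_kernel(X_new, X_obs)
    return mean + k_star @ np.linalg.solve(K, z_obs - mean)

# Example: reconstruct the field on a 50x50 grid from 20 scattered sensors.
rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 10, size=(20, 2))
z_obs = np.sin(X_obs[:, 0]) * np.cos(X_obs[:, 1]) + 0.1 * rng.standard_normal(20)
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
X_new = np.column_stack([gx.ravel(), gy.ravel()])
z_hat = blue_predict(X_obs, z_obs, X_new).reshape(50, 50)
```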
During the usage phase, a technical product system is in permanent interaction with its environment. This interaction can lead to failures that significantly endanger the safety of the user and negatively affect the quality and reliability of the product. Conventional methods of failure analysis focus on the technical product system itself; the interaction of the product with its environment during the usage phase is not sufficiently considered, so potential failures of the product that lead to complaints remain undetected. To address this, a methodology for failure identification is developed which is continuously improved through product usage scenarios. The use cases are modelled according to a systems engineering approach with four views. Linking the product system, physical effects, events and environmental factors enables the analysis of fault chains. These four parameters are subject to great complexity and must be systematically analysed using databases and expert knowledge. The scenarios are continuously updated with field data and complaints. The new approach can identify potential failures in a more systematic and holistic way. Complaints provide direct input to the scenarios. Unknown, previously unrecognized events can be systematically identified through continuous improvement. The complexity of the relationship between the product system and its environmental factors can thus be adequately taken into account in product development.
Keywords: failure analysis, methodology, product development, systems engineering, scenario analysis, scenario improvement, environmental factors, product environment, continuous improvement.
When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.