Mesoscale simulations of discrete defects in metals provide an ideal framework to investigate the micro-scale mechanisms governing plastic deformation under high thermal and mechanical loading conditions. To bridge size and time scales while limiting computational effort, the concept of representative volume elements (RVEs) is typically employed, whereby the microstructure evolution is considered in a volume that is representative of the overall material behavior. However, in settings with complex thermal and mechanical loading histories, careful consideration is required of how modeling constraints, in terms of time scale and simulation domain, affect predicted results. We address the representation of heterogeneous dislocation structure formation in simulation volumes using the example of residual stress formation during cool-down in laser powder-bed fusion (LPBF) of AISI 316L stainless steel. This is achieved by a series of large-scale three-dimensional discrete dislocation dynamics (DDD) simulations assisted by thermo-mechanical finite element modeling of the LPBF process. Our results show that an insufficient size of periodic simulation domains can result in dislocation patterns that reflect the boundaries of the primary cell. The more pronounced dislocation interaction observed for larger domains highlights the significance of simulation domain constraints for predicting mechanical properties. We formulate criteria that characterize representative volume elements by capturing the conformity of the dislocation structure to the bulk material. This work provides a basis for future investigations of heterogeneous microstructure formation in mesoscale simulations of bulk material behavior.
The Circular Economy (CE) is regarded as a solution to the environmental crisis. However, mainstream CE measures skirt around challenging the ethos of ever-increasing economic growth, overlooking social impacts and under-representing solutions such as reducing overall consumption. Circular Societies (CS) address these concerns by challenging this ethos. They emphasize ground-up social reorganization, address over-consumption through sufficiency strategies, and highlight the need to consider the complex inter-dependencies between nature, society, and technology on local, regional, and global levels. However, no blueprint exists for forming CSs. An initial objective of my thesis is to explore existing social-network ontologies and to develop a broadly applicable model for CSs. Since ground-up social reorganization on local, regional, and global levels has compounding effects on network complexities, a technological framework digitizing these inter-dependencies is necessary. Finally, adhering to the CS principles of transparency and democratization, a system of trust is necessary to achieve collaborative consensus on the network state.
We provide exact expressions for the 1-Wasserstein distance between independent location-scale distributions. The expressions are given in terms of the location and scale parameters and special functions such as the standard Gaussian CDF or the Gamma function. Specifically, we find that the 1-Wasserstein distance between independent univariate location-scale distributions is equal to the mean of a folded distribution within the same family, whose underlying location and scale equal the differences of the locations and scales of the original distributions. A new linear upper bound on the 1-Wasserstein distance is presented, and asymptotic bounds on the 1-Wasserstein distance are detailed in the Gaussian case. The effect of differential privacy using the Laplace and Gaussian mechanisms on the 1-Wasserstein distance is studied using the closed-form expressions and bounds.
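As a quick numerical illustration of the Gaussian case (a sketch, not code from the paper; all function names are mine): the closed form is the mean of a folded normal with location mu1 - mu2 and scale |s1 - s2|, which can be checked against the quantile-function representation of the 1-Wasserstein distance.

```python
# Sketch (illustrative): Gaussian case of the closed form, checked against
# the quantile representation of W1.
import numpy as np
from scipy import integrate
from scipy.stats import norm

def w1_gaussian(mu1, s1, mu2, s2):
    """Mean of a folded normal with location mu1 - mu2 and scale |s1 - s2|."""
    mu_d, s_d = mu1 - mu2, abs(s1 - s2)
    if s_d == 0.0:
        return abs(mu_d)
    return (s_d * np.sqrt(2.0 / np.pi) * np.exp(-mu_d**2 / (2.0 * s_d**2))
            + mu_d * (1.0 - 2.0 * norm.cdf(-mu_d / s_d)))

def w1_reference(mu1, s1, mu2, s2):
    """W1 = integral over (0,1) of |F^{-1}(u) - G^{-1}(u)| du."""
    f = lambda u: abs(norm.ppf(u, mu1, s1) - norm.ppf(u, mu2, s2))
    return integrate.quad(f, 0.0, 1.0, limit=200)[0]

print(w1_gaussian(1.0, 2.0, -0.5, 0.5))   # closed form
print(w1_reference(1.0, 2.0, -0.5, 0.5))  # numerical check, should agree
```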
We introduce a next of kin of the generalized additive model for location, scale, and shape (GAMLSS), aiming at distribution-free and parsimonious regression modelling for arbitrary outcomes. We replace the strict parametric distribution formulating such a model with a transformation function, which in turn is estimated from data. Doing so not only makes the model distribution-free but also allows the number of linear or smooth model terms to be limited to a pair of location-scale predictor functions. We derive the likelihood for continuous, discrete, and randomly censored observations, along with the corresponding score functions. A plethora of existing algorithms is leveraged for model estimation, including constrained maximum likelihood, the original GAMLSS algorithm, and transformation trees. Parameter interpretability in the resulting models is closely connected to model selection. We propose the application of a novel best-subset selection procedure to achieve especially simple ways of interpretation. All techniques are motivated and illustrated by a collection of applications from different domains, including crossing and partial proportional hazards, complex count regression, non-linear ordinal regression, and growth curves. All analyses are reproducible with the help of the "tram" add-on package to the R system for statistical computing and graphics.
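The continuous-outcome case can be sketched as follows. This is a hedged Python toy, not the "tram" package, and all names and the simulated data are illustrative: it fits a shift-only relative of the model above, P(Y <= y | x) = Phi(h(y) - beta * x), with the transformation h estimated as a monotone Bernstein polynomial.

```python
# Hedged toy (Python, *not* the "tram" package): maximum likelihood for a
# continuous-outcome transformation model  P(Y <= y | x) = Phi(h(y) - beta*x),
# with h a monotone Bernstein polynomial on the observed range of y.
import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(size=400)
y = np.exp(1.0 + 0.5 * x + 0.5 * rng.normal(size=400))  # skewed outcome

ORDER = 6
LO, HI = y.min() - 1e-6, y.max() + 1e-6

def bernstein(y, deriv=False):
    """Bernstein basis of h (or the difference basis of h') on [LO, HI]."""
    t = (y - LO) / (HI - LO)
    if not deriv:
        k = np.arange(ORDER + 1)
        return binom.pmf(k[None, :], ORDER, t[:, None])
    k = np.arange(ORDER)
    return binom.pmf(k[None, :], ORDER - 1, t[:, None]) * ORDER / (HI - LO)

def unpack(par):
    # positive gaps (softplus) keep the coefficients increasing => h monotone
    gaps = np.log1p(np.exp(par[1:ORDER + 1]))
    theta = par[0] + np.concatenate([[0.0], np.cumsum(gaps)])
    return theta, par[-1]

def negloglik(par):
    theta, beta = unpack(par)
    h = bernstein(y) @ theta - beta * x
    h_prime = bernstein(y, deriv=True) @ np.diff(theta)  # change of variables
    return -(norm.logpdf(h) + np.log(h_prime)).sum()

fit = minimize(negloglik, np.zeros(ORDER + 2), method="BFGS")
print("beta_hat:", unpack(fit.x)[1])  # ~1 for this data-generating process
```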
High-dimensional, high-fidelity nonlinear dynamical systems appear naturally when the goal is to accurately model real-world phenomena. Many physical properties are thereby encoded in the internal differential structure of these resulting large-scale nonlinear systems. The high dimensionality of the dynamics causes computational bottlenecks, especially when these large-scale systems need to be simulated for a variety of situations, such as different forcing terms. This motivates model reduction, where the goal is to replace the full-order dynamics with accurate reduced-order surrogates. Interpolation-based model reduction has proven to be an effective tool for the construction of cheap-to-evaluate surrogate models that preserve the internal structure in the case of weak nonlinearities. In this paper, we consider the construction of multivariate interpolants in the frequency domain for structured quadratic-bilinear systems. We propose definitions for structured variants of the symmetric subsystem and generalized transfer functions of quadratic-bilinear systems and provide conditions for structure-preserving interpolation by projection. The theoretical results are illustrated using two numerical examples, including the simulation of molecular dynamics in crystal structures.
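To make the projection idea concrete, here is a minimal unstructured sketch under my own assumptions (random matrices, interpolation points chosen arbitrarily; the paper's structured transfer functions impose further conditions): a basis V built from samples of the linear resolvent is used to project every system matrix, which enforces interpolation of the linear transfer function at the sampling points.

```python
# Minimal sketch: Galerkin projection of a quadratic-bilinear system
#   E x' = A x + H (x kron x) + N x u + B u,   y = C x.
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 6
E = np.eye(n)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
H = 0.01 * rng.standard_normal((n, n * n))
N = 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

sigmas = 1j * np.array([0.1, 1.0, 10.0])              # interpolation points
cols = np.hstack([np.linalg.solve(s * E - A, B) for s in sigmas])
V = np.linalg.qr(np.hstack([cols.real, cols.imag]))[0][:, :r]

# project every system matrix, preserving the QB structure
Er, Ar, Nr = (V.T @ M @ V for M in (E, A, N))
Hr = V.T @ H @ np.kron(V, V)   # reduced quadratic term (r x r^2)
Br, Cr = V.T @ B, C @ V

for s in sigmas:  # reduced linear transfer function matches at each sigma
    G = C @ np.linalg.solve(s * E - A, B)
    Gr = Cr @ np.linalg.solve(s * Er - Ar, Br)
    print(f"sigma = {s}: |G - Gr| = {abs(G - Gr).max():.2e}")
```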
This article proposes a novel high-performance computing approach for the prediction of the temperature field in powder bed fusion (PBF) additive manufacturing processes. In contrast to many existing approaches to part-scale simulations, the underlying computational model consistently resolves physical scan tracks without additional heat source scaling, agglomeration strategies, or any other heuristic modeling assumptions. A growing, adaptively refined mesh accurately captures all details of the laser beam motion. Critically, the fine spatial resolution required for resolved scan tracks, in combination with the high scan velocities underlying these processes, mandates the use of comparatively small time steps to resolve the underlying physics. Explicit time integration schemes are well-suited for this setting, while unconditionally stable implicit time integration schemes are employed for the interlayer cool-down phase governed by significantly larger time scales. These two schemes are combined and implemented in an efficient fast operator evaluation framework providing significant performance gains and optimization opportunities. The capabilities of the novel framework are demonstrated through realistic AM examples on the centimeter scale, including the first scan-resolved simulation of the entire NIST AM Benchmark cantilever specimen, with a computation time of less than one day. Apart from physical insights gained through these simulation examples, numerical aspects are also thoroughly studied on the basis of weak and strong parallel scaling tests. As potential applications, the proposed thermal PBF simulation framework can serve as a basis for part-scale microstructure and thermo-mechanical predictions, and can also be used to assess the influence of scan pattern and part geometry on melt pool shape and temperature, which are important indicators of well-known process instabilities.
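The explicit/implicit split can be illustrated with a deliberately simple 1D heat-equation toy (my own sketch, not the paper's framework; all parameters are arbitrary): small conditionally stable forward-Euler steps while a moving source is active, then large unconditionally stable backward-Euler steps for cool-down.

```python
# Toy illustration: explicit steps during "scanning", implicit cool-down.
import numpy as np

nx, L, alpha = 200, 1e-2, 5e-6        # grid points, domain length [m], diffusivity
dx = L / (nx - 1)
grid = np.linspace(0.0, L, nx)
T = np.full(nx, 300.0)                # initial temperature [K]

# explicit phase: forward Euler, dt bounded by dt <= dx^2 / (2 alpha)
dt_e = 0.4 * dx**2 / alpha
v, t = 0.02, 0.0                      # "scan" speed [m/s]
for _ in range(2000):
    src = 1e5 * np.exp(-((grid - v * t) / 5e-4) ** 2)          # toy source [K/s]
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2   # periodic stencil
    T = T + dt_e * (alpha * lap + src)
    T[0] = T[-1] = 300.0              # crude Dirichlet boundaries
    t += dt_e

# implicit cool-down: backward Euler (I - dt A) T_new = T_old, much larger steps
dt_i = 1e-2
A = alpha / dx**2 * (np.diag(np.ones(nx - 1), -1) - 2.0 * np.eye(nx)
                     + np.diag(np.ones(nx - 1), 1))
for _ in range(100):
    T = np.linalg.solve(np.eye(nx) - dt_i * A, T)
    T[0] = T[-1] = 300.0

print("peak temperature after cool-down [K]:", T.max())
```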
Using Bayesian methods for extreme value analysis offers an alternative to frequentist ones, with several advantages such as easily dealing with parametric uncertainty or studying irregular models. However, computation can be challenging, and the efficiency of algorithms can be degraded by poor modelling choices; among these, the parameterization is crucial. We focus on the Poisson process characterization of univariate extremes and outline two key benefits of an orthogonal parameterization. First, Markov chain Monte Carlo convergence is improved when chains are run on orthogonal parameters. This analysis relies on convergence diagnostics computed over several simulations. Second, orthogonalization also helps in deriving Jeffreys and penalized complexity priors and in establishing their posterior propriety. Our framework is applied to return level estimation for Garonne flow data (France).
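The first benefit can be illustrated generically (a toy sketch of my own, not the Poisson-process model of the paper): a random-walk Metropolis sampler on a strongly correlated Gaussian "posterior" mixes noticeably better after rotating to orthogonal coordinates.

```python
# Toy: random-walk Metropolis before and after orthogonalizing the parameters.
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.99], [0.99, 1.0]])
prec = np.linalg.inv(cov)
logpost = lambda th: -0.5 * th @ prec @ th        # correlated "posterior"

def metropolis(logp, steps=20000, scale=0.5):
    th, chain, acc = np.zeros(2), [], 0
    for _ in range(steps):
        prop = th + scale * rng.standard_normal(2)
        if np.log(rng.uniform()) < logp(prop) - logp(th):
            th, acc = prop, acc + 1
        chain.append(th)
    return np.array(chain), acc / steps

Lchol = np.linalg.cholesky(cov)                   # theta = Lchol @ eta
logpost_orth = lambda eta: logpost(Lchol @ eta)   # eta: identity covariance

for name, lp in [("original", logpost), ("orthogonal", logpost_orth)]:
    chain, rate = metropolis(lp)
    z = chain[:, 0] - chain[:, 0].mean()
    print(name, "acceptance:", round(rate, 2),
          "lag-1 autocorr:", round(float(z[1:] @ z[:-1] / (z @ z)), 3))
```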
This paper studies the order of power series that are linear combinations of a given finite set of power series. The order of a formal power series $f$, denoted $\textrm{ord}(f)$, is defined as the minimum exponent of $x$ that has a non-zero coefficient in $f(x)$. Our first result is that the order of the Wronskian of these power series is equal, up to a polynomial factor, to the maximum order that occurs among linear combinations of these power series. This implies that the Wronskian approach used in (Kayal and Saha, TOCT'2012) to upper bound the order of sums of square roots is optimal up to a polynomial blowup. We also establish upper bounds, similar to those of (Kayal and Saha, TOCT'2012), on the order of power series in a variety of other scenarios, and we solve a special case of the inequality testing problem outlined in (Etessami et al., TOCT'2014). In the second part of the paper, we study the equality variant of the sum-of-square-roots problem, which is decidable in polynomial time due to (Bl\"omer, FOCS'1991). We investigate a natural generalization of this problem when the input integers are given as straight-line programs. Under the assumption of the Generalized Riemann Hypothesis (GRH), we show that this problem can be reduced to the so-called ``one-dimensional'' variant, and we identify the key mathematical challenges in solving this variant.
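A toy check of the first result (my own example, with an arbitrarily chosen pair of series): for f = exp(x) and its degree-2 truncation g, the Wronskian has order 2, while the best linear combination, f - g, has order 3, consistent with the two quantities agreeing up to a small factor.

```python
# Toy check of the Wronskian/order connection using truncated series.
import sympy as sp

x = sp.symbols('x')
PREC = 16  # truncation order; assumed high enough to read off the order

def ord_of(expr):
    """Order of a power series: smallest exponent with non-zero coefficient."""
    poly = sp.Poly(sp.series(expr, x, 0, PREC).removeO(), x)
    return min(m[0] for m in poly.monoms())

f = sp.exp(x)
g = 1 + x + x**2 / 2                               # degree-2 truncation of f
wronskian = f * sp.diff(g, x) - sp.diff(f, x) * g  # 2x2 Wronskian determinant

print("ord(Wronskian)       =", ord_of(wronskian))  # 2
print("max ord(c1*f + c2*g) =", ord_of(f - g))      # 3, attained at c1=1, c2=-1
```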
This work addresses the patient-specific characterisation of the morphology and pathologies of musculoskeletal districts (e.g., wrist, spine) to support diagnostic activities and follow-up exams through the integration of morphological and tissue information. We propose different methods for integrating morphological information, retrieved from the geometrical analysis of 3D surface models, with tissue information extracted from volume images. For the qualitative and quantitative validation, we discuss the localisation of bone erosion sites on the wrist to monitor rheumatic diseases and the characterisation of the three functional regions of the spinal vertebrae to study the presence of osteoporotic fractures. The proposed approach supports the quantitative and visual evaluation of possible damage, surgery planning, and early diagnosis or follow-up studies. Finally, our analysis is general enough to be applied to different districts.
Artificial Intelligence (AI) is rapidly being integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 that avoids this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only be able to survive within a complex operational environment, it will also thrive, benefiting from the inevitable shocks and volatility of war.
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
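A compact sketch of the re-weighting rule (illustrative numpy, consistent with the formula above; the function names are mine): each class weight is inversely proportional to its effective number of samples, and the weights are normalized before being plugged into a weighted cross-entropy.

```python
# Sketch: class-balanced weights from the effective number of samples.
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """w_c proportional to 1 / E_{n_c}, where E_n = (1 - beta^n) / (1 - beta)."""
    eff_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    w = 1.0 / eff_num
    return w * len(samples_per_class) / w.sum()  # normalize: weights sum to C

def class_balanced_cross_entropy(logits, labels, weights):
    """Softmax cross-entropy, each sample re-weighted by its label's weight."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(weights[labels] * log_probs[np.arange(len(labels)), labels]).mean()

counts = np.array([5000, 500, 50])      # long-tailed class counts
print(class_balanced_weights(counts))   # rare classes get much larger weights
```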