The wide adoption of composite structures in the aerospace industry requires reliable numerical methods to account for the effects of various damage mechanisms, including delamination. Cohesive elements are a versatile and physically representative way of modelling delamination. However, in their standard form, which conforms to solid substrate elements, multiple elements are required within the narrow cohesive zone, demanding an excessively fine mesh and hindering their applicability in practical scenarios. The present work focuses on the implementation and testing of triangular thin plate substrate elements and compatible cohesive elements, which satisfy C1-continuity in the domain. The improved regularity meets the continuity requirement of Kirchhoff plate theory, and the triangular shape allows conformity to complex geometries. The overall model is validated for mode I delamination, the case with the smallest cohesive zone. Very accurate predictions of the limit load and of the crack propagation phase are achieved using elements as large as 11 times the cohesive zone length.
We propose a quantum soft-covering problem for a given general quantum channel and one of its output states, which consists of finding the minimum rank of an input state needed to approximate the given channel output. We then prove a one-shot quantum covering lemma in terms of smooth min-entropies by leveraging decoupling techniques from quantum Shannon theory. This covering result is shown to be equivalent to a coding theorem for rate distortion under a posterior (reverse) channel distortion criterion, previously established by two of the present authors. Both one-shot results directly yield corollaries about the i.i.d. asymptotics, in terms of the coherent information of the channel. The power of our quantum covering lemma is demonstrated by two additional applications: first, we formulate a quantum channel resolvability problem and provide one-shot as well as asymptotic upper and lower bounds; second, we provide new upper bounds on the unrestricted and simultaneous identification capacities of quantum channels, in particular separating for the first time the simultaneous identification capacity from the unrestricted one, proving a long-standing conjecture of the last author.
We show the consistency of the maximum likelihood estimator for mixtures of elliptically symmetric distributions as an estimator of its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In a situation where $P$ is a mixture of sufficiently well-separated but nonparametric distributions, it is shown that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis in the case that $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
Topology optimization is an important basis for the design of components. Here, the optimal structure is found within a design space subject to boundary conditions. The specific material law has a strong impact on the final design. An important kind of material behavior is hardening: a structure optimized under, for instance, a linear-elastic material law is no longer optimal if the loads induce plastic deformation. Since hardening behavior has a remarkable impact on the resulting stress field, it needs to be accounted for during topology optimization. In this contribution, we present an extension of thermodynamic topology optimization that accounts for this nonlinear material behavior due to the evolution of plastic strains. To this end, we develop a novel surrogate model that allows us to compute the plastic strain tensor corresponding to the current structural design for arbitrary hardening behavior. We show the agreement of the model with the classic plasticity model for monotonic loading. Furthermore, we demonstrate that accounting for hardening material behavior in the topology optimization results in structural changes.
We solve the Landau-Lifshitz-Gilbert equation in the finite-temperature regime, where thermal fluctuations are modeled by a random magnetic field whose variance is proportional to the temperature. By rescaling the temperature proportionally to the computational cell size $\Delta x$ ($T \to T\,\Delta x/a_{\text{eff}}$, where $a_{\text{eff}}$ is the lattice constant) [M. B. Hahn, J. Phys. Comm., 3:075009, 2019], we obtain Curie temperatures $T_{\text{C}}$ that are in line with the experimental values for cobalt, iron and nickel. For finite-sized objects such as nanowires (1D) and nanolayers (2D), the Curie temperature varies with the smallest size $d$ of the system. We show that the difference between the computed finite-size $T_{\text{C}}$ and the bulk $T_{\text{C}}$ follows a power law of the form $(\xi_0/d)^\lambda$, where $\xi_0$ is the correlation length at zero temperature and $\lambda$ is a critical exponent. We obtain values of $\xi_0$ in the nanometer range, also in accordance with other simulations and experiments. The computed critical exponent is close to $\lambda=2$ for all considered materials and geometries. This is the expected result for a mean-field approach, but slightly larger than the values observed experimentally.
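As a rough illustration of the finite-size scaling relation above, the following Python sketch generates synthetic finite-size Curie temperatures from an assumed power law and recovers $\lambda$ and $\xi_0$ with a log-log fit. All numerical values and the exact normalisation of the law are illustrative assumptions, not results from the simulations.

```python
import numpy as np

# Hypothetical illustration of a finite-size scaling law of the form
#   (T_C_bulk - T_C(d)) / T_C_bulk = (xi0 / d)^lambda
# (the precise normalisation used in the paper may differ).
Tc_bulk = 1388.0      # bulk Curie temperature in K (illustrative value for cobalt)
xi0 = 2.0             # zero-temperature correlation length in nm (assumed)
lam = 2.0             # critical exponent (mean-field value)

d = np.array([4.0, 6.0, 8.0, 12.0, 16.0])       # smallest system size in nm
Tc_d = Tc_bulk * (1.0 - (xi0 / d) ** lam)       # synthetic finite-size T_C

# Recover lambda and xi0 from the synthetic data via a log-log linear fit:
# log((Tc_bulk - Tc_d)/Tc_bulk) = lam * log(1/d) + lam * log(xi0).
y = np.log((Tc_bulk - Tc_d) / Tc_bulk)
x = np.log(1.0 / d)
lam_fit, intercept = np.polyfit(x, y, 1)
xi0_fit = np.exp(intercept / lam_fit)

print(f"fitted lambda = {lam_fit:.3f}, fitted xi0 = {xi0_fit:.3f} nm")
```

On noise-free synthetic data the fit returns the input parameters exactly; with simulated $T_{\text{C}}$ values the same fit would yield estimates of $\lambda$ and $\xi_0$.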
This manuscript examines the problem of nonlinear stochastic fractional neutral integro-differential equations with weakly singular kernels. Our focus is on obtaining precise estimates that cover all possible cases of Abel-type singular kernels. First, we establish the existence and uniqueness of the true solution and its continuous dependence on the initial value, assuming a local Lipschitz condition and a linear growth condition. Additionally, we develop the Euler-Maruyama method for the numerical solution of the equation and prove its strong convergence under the same conditions as for well-posedness. Moreover, we determine the precise convergence rate of this method under a global Lipschitz condition and a linear growth condition.
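For readers unfamiliar with the baseline scheme, the following minimal sketch shows the Euler-Maruyama method for a plain scalar SDE $dX_t = a(X_t)\,dt + b(X_t)\,dW_t$. This is only the basic ingredient; the method developed in the paper additionally handles the neutral term and the weakly singular Abel-type kernel, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(a, b, x0, T, n):
    """Simulate one path of dX = a(X) dt + b(X) dW on [0, T] with n steps."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dW
    return x

# Illustrative Ornstein-Uhlenbeck-type example: mean-reverting drift,
# constant diffusion (parameters chosen arbitrarily).
path = euler_maruyama(a=lambda x: -x, b=lambda x: 0.5, x0=1.0, T=1.0, n=1000)
print(path[-1])
```

Under a global Lipschitz condition on `a` and `b`, this scheme is known to converge strongly with rate 1/2 in the step size; the paper quantifies the analogous rate for the singular-kernel setting.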
High-dimensional linear models have been widely studied, but the developments in high-dimensional generalized linear models, or GLMs, have been slower. In this paper, we propose an empirical or data-driven prior leading to an empirical Bayes posterior distribution which can be used for estimation of and inference on the coefficient vector in a high-dimensional GLM, as well as for variable selection. We prove that our proposed posterior concentrates around the true/sparse coefficient vector at the optimal rate, provide conditions under which the posterior can achieve variable selection consistency, and prove a Bernstein--von Mises theorem that implies asymptotically valid uncertainty quantification. Computation of the proposed empirical Bayes posterior is simple and efficient, and is shown to perform well in simulations compared to existing Bayesian and non-Bayesian methods in terms of estimation and variable selection.
Widely available measurement equipment in electrical distribution grids, such as power-quality measurement devices, substation meters, or customer smart meters, does not provide phasor measurements due to the lack of high-resolution time synchronisation. Instead, such measurement devices provide the magnitudes of voltages and currents and the local phase angle between them. In addition, these measurements are subject to measurement errors of up to a few percent of the measurand. In order to utilize such measurements for grid monitoring, this paper presents and assesses a stochastic grid calculation approach that allows confidence regions to be derived for the resulting current and voltage phasors. Two different metering models are introduced: a PMU model, which is used to validate theoretical properties of the estimator, and an Electric Meter model for which a Gaussian approximation is introduced. The estimator results are compared for the two meter models, and case study results for a real Danish distribution grid are presented.
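To give a feel for how magnitude and local-angle errors translate into a region in the complex plane, the following Monte Carlo sketch propagates assumed Gaussian measurement errors through the phasor reconstruction. The error levels and the independence assumption are illustrative only and are not the paper's metering models or its estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" measurement: magnitude in volts, local angle in radians.
V_true, phi_true = 230.0, 0.1
sigma_V = 0.01 * V_true     # ~1 % magnitude error (assumed)
sigma_phi = 0.01            # angle error in rad (assumed)

n = 100_000
V = V_true + rng.normal(0.0, sigma_V, n)
phi = phi_true + rng.normal(0.0, sigma_phi, n)
phasors = V * np.exp(1j * phi)          # sampled complex phasors

# The empirical spread of the samples characterises a confidence region
# around the reconstructed phasor in the complex plane.
print(np.mean(phasors))
print(np.std(phasors.real), np.std(phasors.imag))
```

A magnitude error stretches the region radially while an angle error stretches it tangentially, which is why the resulting confidence regions are in general not circular.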
The study of neural operators has paved the way for approaches to solving partial differential equations (PDEs) that are more efficient than traditional methods. However, most existing neural operators lack the capability to provide uncertainty measures for their predictions, a crucial aspect, especially in data-driven scenarios with limited available data. In this work, we propose a novel Neural Operator-induced Gaussian Process (NOGaP), which exploits the probabilistic characteristics of Gaussian processes (GPs) while leveraging the learning prowess of operator learning. The proposed framework leads to improved prediction accuracy and offers a quantifiable measure of uncertainty. It is extensively evaluated through experiments on various PDE examples, including Burgers' equation, Darcy flow, the non-homogeneous Poisson equation, and the wave-advection equation. Furthermore, a comparative study with state-of-the-art operator learning algorithms is presented to highlight the advantages of NOGaP. The results demonstrate superior accuracy and expected uncertainty characteristics, suggesting the promising potential of the proposed framework.
We describe a simple deterministic near-linear time approximation scheme for uncapacitated minimum cost flow in undirected graphs with real edge weights, a problem also known as transshipment. Specifically, our algorithm takes as input a (connected) undirected graph $G = (V, E)$, vertex demands $b \in \mathbb{R}^V$ such that $\sum_{v \in V} b(v) = 0$, positive edge costs $c \in \mathbb{R}_{>0}^E$, and a parameter $\varepsilon > 0$. In $O(\varepsilon^{-2} m \log^{O(1)} n)$ time, it returns a flow $f$ such that the net flow out of each vertex is equal to the vertex's demand and the cost of the flow is within a $(1 + \varepsilon)$ factor of optimal. Our algorithm is combinatorial and has no running time dependency on the demands or edge costs. With the exception of a recent result presented at STOC 2022 for polynomially bounded edge weights, all almost- and near-linear time approximation schemes for transshipment relied on randomization to embed the problem instance into low-dimensional space. Our algorithm instead deterministically approximates the cost of routing decisions that would be made if the input were subject to a random tree embedding. To avoid computing the $\Omega(n^2)$ vertex-vertex distances that an approximation of this kind suggests, we also limit the available routing decisions using distances explicitly stored in the well-known Thorup-Zwick distance oracle.
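The algorithm's output comes with two checkable guarantees: the net flow out of each vertex equals its demand, and the cost is within a $(1+\varepsilon)$ factor of optimal. The following sketch verifies both for a candidate flow on a tiny illustrative instance; the instance, the helper `check_flow`, and the sign convention for flow on undirected edges are assumptions for illustration, not part of the paper's algorithm.

```python
from collections import defaultdict

def check_flow(edges, flow, b, cost_opt, eps):
    """edges: list of (u, v, cost); flow[i] is signed flow on edge i (u -> v).

    Returns (feasible, near_optimal): whether net outflow matches the demand
    b(v) at every vertex, and whether cost(f) <= (1 + eps) * cost_opt.
    """
    net = defaultdict(float)
    total_cost = 0.0
    for (u, v, c), f in zip(edges, flow):
        net[u] += f                     # flow leaving u
        net[v] -= f                     # flow entering v
        total_cost += c * abs(f)        # undirected: cost charged on |flow|
    vertices = set(net) | set(b)
    feasible = all(abs(net[u] - b.get(u, 0.0)) < 1e-9 for u in vertices)
    return feasible, total_cost <= (1 + eps) * cost_opt

# Tiny example: route one unit from s to t via a (cost 3) instead of the
# direct edge (cost 4); demands sum to zero as required.
edges = [("s", "a", 1.0), ("a", "t", 2.0), ("s", "t", 4.0)]
flow = [1.0, 1.0, 0.0]
ok, near_opt = check_flow(edges, flow, {"s": 1.0, "t": -1.0},
                          cost_opt=3.0, eps=0.1)
print(ok, near_opt)
```

Such a check runs in linear time in the number of edges, so certifying the guarantees is much cheaper than computing the flow itself.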
A novel strategy is proposed for the coupling of field and circuit equations when modeling power devices in the low-frequency regime. The resulting systems of differential-algebraic equations have a particular geometric structure which explicitly encodes the energy storage, dissipation, and transfer mechanisms. This implies a power balance on the continuous level which can be preserved under appropriate discretization in space and time. The models and main results are presented in detail for linear constitutive models, but the extension to nonlinear elements and more general coupling mechanisms is possible. The theoretical findings are demonstrated by numerical results.