Spinodal metamaterials, with architectures inspired by natural phase-separation processes, offer a compelling alternative to periodic and symmetric morphologies in the design of mechanical metamaterials with extreme performance. While their elastic properties have been systematically determined, their large-deformation, nonlinear responses have been challenging to predict and design, in part due to limited data sets and the need for complex nonlinear simulations. This work presents a physics-enhanced machine learning (ML) and optimization framework tailored to the challenges of designing intricate spinodal metamaterials with customized mechanical properties in large-deformation scenarios where computational modeling is restrictive and experimental data are sparse. By utilizing large-deformation experimental data directly, this approach enables the inverse design of spinodal structures with precise finite-strain mechanical responses. The framework sheds light on instability-induced pattern formation in spinodal metamaterials -- observed experimentally and in selected nonlinear simulations -- by leveraging physics-based inductive biases in the form of nonconvex energetic potentials. Altogether, this combined ML, experimental, and computational effort provides a route to the efficient and accurate design of complex spinodal metamaterials for large-deformation scenarios where energy absorption and the prediction of nonlinear failure mechanisms are essential.
We address the problem of best uniform approximation of a continuous function on a convex domain. The approximation is by linear combinations of a finite system of functions (not necessarily Chebyshev) under arbitrary linear constraints. By modifying the concept of alternance and the Remez iterative procedure, we present a method that demonstrates its efficiency on numerical problems. A linear rate of convergence is proved under certain favourable assumptions. Special attention is paid to systems of complex exponentials, Gaussian functions, and lacunary algebraic and trigonometric polynomials. Applications to signal processing, linear ODEs, switching dynamical systems, and Markov-Bernstein-type inequalities are considered.
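To make the underlying problem concrete: on a discretized domain, best uniform approximation by a finite function system reduces to a small linear program. The sketch below is only the naive fixed-grid LP formulation, not the paper's modified Remez iteration; the basis `{1, x, x^2}`, the target `exp`, and the grid are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

# Discretized best uniform (Chebyshev) approximation on [0, 1]:
# minimize t subject to |f(x_j) - sum_i c_i g_i(x_j)| <= t at every grid point.
def best_uniform_lp(f, basis, grid):
    n = len(basis)
    G = np.column_stack([g(grid) for g in basis])   # design matrix
    fx = f(grid)
    # variables: (c_1, ..., c_n, t); objective: minimize t
    cost = np.zeros(n + 1)
    cost[-1] = 1.0
    ones = np.ones((len(grid), 1))
    A_ub = np.vstack([np.hstack([G, -ones]),        #  G c - f <= t
                      np.hstack([-G, -ones])])      #  f - G c <= t
    b_ub = np.concatenate([fx, -fx])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]                     # coefficients, sup-error

grid = np.linspace(0.0, 1.0, 201)
basis = [np.ones_like, lambda x: x, lambda x: x**2]  # system {1, x, x^2}
coeffs, err = best_uniform_lp(np.exp, basis, grid)
```

A Remez-type method replaces the dense grid with a small, iteratively updated set of alternance points, which is what makes the approach in the paper efficient for the non-Chebyshev systems listed above.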
This article concerns the development of a fully conservative, positivity-preserving, and entropy-bounded discontinuous Galerkin scheme for the multicomponent, chemically reacting, compressible Navier-Stokes equations with complex thermodynamics. In particular, we extend to viscous flows the fully conservative, positivity-preserving, and entropy-bounded discontinuous Galerkin method for the chemically reacting Euler equations that we previously introduced. An important component of the formulation is the positivity-preserving Lax-Friedrichs-type viscous flux function devised by Zhang [J. Comput. Phys., 328 (2017), pp. 301-343], which was adapted to multicomponent flows by Du and Yang [J. Comput. Phys., 469 (2022), 111548] in a manner that treats the inviscid and viscous fluxes as a single flux. Here, we similarly extend the aforementioned flux function to multicomponent flows but separate the inviscid and viscous fluxes, resulting in a different dissipation coefficient. This separation of the fluxes allows for the use of other inviscid flux functions, as well as enforcement of entropy boundedness on only the convective contribution to the evolved state, as motivated by physical and mathematical principles. We also detail how to account for boundary conditions and incorporate previously developed techniques to reduce spurious pressure oscillations into the positivity-preserving framework. Furthermore, potential issues associated with the Lax-Friedrichs-type viscous flux function in the case of zero species concentrations are discussed and addressed. The resulting formulation is compatible with curved, multidimensional elements and general quadrature rules with positive weights. A variety of multicomponent, viscous flows is computed, ranging from a one-dimensional shock tube problem to multidimensional detonation waves and shock/mixing-layer interaction.
An essential problem in statistics and machine learning is the estimation of expectations involving PDFs with intractable normalizing constants. The self-normalized importance sampling (SNIS) estimator, which normalizes the IS weights, has become the standard approach due to its simplicity. However, the SNIS has been shown to exhibit high variance in challenging estimation problems, e.g, involving rare events or posterior predictive distributions in Bayesian statistics. Further, most of the state-of-the-art adaptive importance sampling (AIS) methods adapt the proposal as if the weights had not been normalized. In this paper, we propose a framework that considers the original task as estimation of a ratio of two integrals. In our new formulation, we obtain samples from a joint proposal distribution in an extended space, with two of its marginals playing the role of proposals used to estimate each integral. Importantly, the framework allows us to induce and control a dependency between both estimators. We propose a construction of the joint proposal that decomposes in two (multivariate) marginals and a coupling. This leads to a two-stage framework suitable to be integrated with existing or new AIS and/or variational inference (VI) algorithms. The marginals are adapted in the first stage, while the coupling can be chosen and adapted in the second stage. We show in several examples the benefits of the proposed methodology, including an application to Bayesian prediction with misspecified models.
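For readers unfamiliar with the baseline being improved upon, the standard SNIS estimator can be sketched in a few lines. This is the textbook single-proposal estimator, not the paper's coupled two-proposal construction; the unnormalized Gaussian target, the standard-normal proposal, and the test function are arbitrary illustrative choices.

```python
import numpy as np

# Self-normalized importance sampling (SNIS): estimate E_pi[f(X)] when the
# target pi is known only up to a normalizing constant, via a tractable
# proposal q. Here pi is proportional to N(1, 0.5^2) and q = N(0, 1).
rng = np.random.default_rng(0)

def log_target_unnorm(x):
    # log of the unnormalized target density (normalizer unknown)
    return -0.5 * ((x - 1.0) / 0.5) ** 2

def snis(f, n=100_000):
    x = rng.standard_normal(n)        # draws from the proposal q = N(0, 1)
    log_q = -0.5 * x**2               # log q up to a constant (it cancels)
    log_w = log_target_unnorm(x) - log_q
    w = np.exp(log_w - log_w.max())   # stabilized, unnormalized weights
    return np.sum(w * f(x)) / np.sum(w)

est = snis(lambda x: x)               # the true target mean is 1.0
```

Viewing the numerator and denominator of the final ratio as two separate integral estimates, each with its own proposal, is exactly the reformulation the paper exploits to induce a variance-reducing dependency between them.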
This article discusses futility analyses for the MCP-Mod methodology. Formulas are derived for calculating predictive and conditional power for MCP-Mod, which also cover the case in which longitudinal models are used, allowing incomplete data from patients to be utilized at interim. A simulation study is conducted to evaluate the repeated-sampling properties of the proposed decision rules and to assess the benefit of using a longitudinal model versus a completer-only model for decision making at interim. The results suggest that the proposed methods perform adequately and that a longitudinal analysis outperforms a completer-only analysis, particularly when recruitment is faster and the correlation over time is stronger. The proposed methodology is illustrated using real data from a dose-finding study for severe uncontrolled asthma.
Weakly Supervised Semantic Segmentation (WSSS) employs weak supervision, such as image-level labels, to train the segmentation model. Despite the impressive achievements of recent WSSS methods, we identify that introducing weak labels with high mean Intersection over Union (mIoU) does not guarantee high segmentation performance. Existing studies have emphasized the importance of prioritizing precision and reducing noise to improve overall performance. In the same vein, we propose ORANDNet, an advanced ensemble approach tailored for WSSS. ORANDNet combines Class Activation Maps (CAMs) from two different classifiers to increase the precision of pseudo-masks (PMs). To further mitigate small noise in the PMs, we incorporate curriculum learning. This involves training the segmentation model initially with pairs of smaller-sized images and corresponding PMs, gradually transitioning to the original-sized pairs. By combining the original CAMs of ResNet-50 and ViT, we significantly improve segmentation performance over both the single-best model and the naive ensemble model. We further extend our ensemble method to CAMs from AMN (ResNet-like) and MCTformer (ViT-like) models, achieving performance benefits in advanced WSSS models. This highlights the potential of ORANDNet as a final add-on module for WSSS models.
Eigenmaps are important in analysis, geometry, and machine learning, especially in nonlinear dimension reduction. Approximation of the eigenmaps of a Laplace operator depends crucially on the scaling parameter $\epsilon$. If $\epsilon$ is too small or too large, then the approximation is inaccurate or completely breaks down. However, an analytic expression for the optimal $\epsilon$ is out of reach. In our work, we use some explicitly solvable models and Monte Carlo simulations to find the approximately optimal range of $\epsilon$ that gives, on average, a relatively accurate approximation of the eigenmaps. Numerically, we consider several model situations where eigen-coordinates can be computed analytically, including intervals with uniform and weighted measures, squares, tori, spheres, and the Sierpinski gasket. In broader terms, we intend to study eigen-coordinates on weighted Riemannian manifolds, possibly with boundary, and on some metric measure spaces, such as fractals.
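The sensitivity to $\epsilon$ is easy to reproduce on a model where the eigen-coordinates are known exactly. The sketch below builds a Gaussian-kernel graph Laplacian on random points of the unit circle, whose first nontrivial eigenfunctions span $\{\cos\theta, \sin\theta\}$; the point count, the two $\epsilon$ values, and the residual-based error measure are our own illustrative choices, not the paper's experimental setup.

```python
import numpy as np

# Graph-Laplacian approximation of eigenmaps on the unit circle, where the
# leading eigen-coordinates are known exactly, so the effect of the kernel
# scale eps can be measured directly.
def eigenmap_error(eps, n=400, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
    pts = np.column_stack([np.cos(theta), np.sin(theta)])
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)                    # Gaussian affinities
    L = np.diag(K.sum(axis=1)) - K           # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    v = vecs[:, 1]                           # first nontrivial eigenvector
    v /= np.linalg.norm(v)
    # residual of v outside span{cos(theta), sin(theta)}
    Q, _ = np.linalg.qr(np.column_stack([np.cos(theta), np.sin(theta)]))
    return np.linalg.norm(v - Q @ (Q.T @ v))

# A very small eps effectively disconnects the neighborhood graph and the
# eigenvector localizes; a moderate eps recovers the eigen-coordinates well.
errors = {eps: eigenmap_error(eps) for eps in (1e-4, 0.05)}
```

Sweeping $\epsilon$ over a fine grid in such solvable models is precisely how an approximately optimal range can be located empirically.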
In the field of materials science and manufacturing, a vast amount of heterogeneous data exists, encompassing measurement and simulation data, machine data, publications, and more. This data serves as the bedrock of valuable knowledge that can be leveraged for various engineering applications. However, efficiently storing and handling such diverse data remains a significant challenge, often due to the lack of standardization and integration across different organizational units. Addressing these issues is crucial for fully utilizing the potential of data-driven approaches in these fields. In this paper, we present a novel technology stack named Dataspace Management System (DSMS) for powering dataspace solutions. The core of DSMS lies in its distinctive knowledge management approach, tuned to meet the specific demands of the materials science and manufacturing domain, all while adhering to the FAIR principles. This includes data integration, linkage, exploration, visualization, processing, and enrichment, in order to support engineers in decision-making and in solving design and optimization problems. We provide an architectural overview and describe the core components of DSMS. Additionally, we demonstrate the applicability of DSMS to typical data processing tasks in materials science through use cases from two research projects, namely StahlDigital and KupferDigital, both part of the German MaterialDigital initiative.
In the study of extremes, the presence of asymptotic independence signifies that extreme events across multiple variables are unlikely to occur together. Although well understood in the bivariate context, the concept remains relatively unexplored when addressing the nuances of the joint occurrence of extremes in higher dimensions. In this paper, we propose a notion of mutual asymptotic independence to capture the behavior of joint extremes in dimensions larger than two and contrast it with the classical notion of (pairwise) asymptotic independence. Furthermore, we define $k$-wise asymptotic independence, which lies in between pairwise and mutual asymptotic independence. The concepts are compared using examples of Archimedean, Gaussian and Marshall-Olkin copulas, among others. Notably, for the popular Gaussian copula, we provide explicit conditions on the correlation matrix for mutual asymptotic independence to hold; moreover, we are able to compute exact tail orders for various tail events.
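For orientation, the classical bivariate notion (standard in the extremes literature) says that random variables $X_1, X_2$ with marginal distributions $F_1, F_2$ are asymptotically independent if
\[
\lim_{u \uparrow 1} \Pr\bigl(F_1(X_1) > u \,\big|\, F_2(X_2) > u\bigr) = 0,
\]
i.e., one component being extreme makes a simultaneous extreme of the other vanishingly unlikely. In dimension $d > 2$, pairwise asymptotic independence asks this for every pair of components, whereas the mutual notion proposed here constrains the joint tail event in which components are simultaneously extreme; the precise $d$-dimensional and $k$-wise definitions are given in the paper.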
A non-linear complex system governed by multi-spatial and multi-temporal physics scales cannot be fully understood with a single diagnostic, as each provides only a partial view and much information is lost during data extraction. Combining multiple diagnostics also results in imperfect projections of the system's physics. By identifying hidden inter-correlations between diagnostics, we can leverage mutual support to fill in these gaps, but uncovering these inter-correlations analytically is too complex. We introduce a machine learning methodology to address this issue. Our multimodal approach generates super-resolution data encompassing multiple physics phenomena, capturing detailed structural evolution and responses to perturbations previously unobservable. This methodology addresses a critical problem in fusion plasmas: the Edge Localized Mode (ELM), a plasma instability that can severely damage reactor walls. One method to stabilize ELMs uses resonant magnetic perturbations to trigger magnetic islands. However, the low spatial and temporal resolution of measurements limits the analysis of these magnetic islands because of their small size, rapid dynamics, and complex interactions within the plasma. With super-resolution diagnostics, we can experimentally verify theoretical models of magnetic islands for the first time, providing unprecedented insights into their role in ELM stabilization. This advancement aids in developing effective ELM suppression strategies for future fusion reactors like ITER and has broader applications, potentially revolutionizing diagnostics in fields such as astronomy, astrophysics, and medical imaging.
Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, cannot handle different types of hypergraphs and are typically not generic across learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention-based graph neural network called Hyper-SAGNN, applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
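The key property claimed above, handling hyperedges of variable size, is naturally obtained with self-attention, since attention operates on sets of any cardinality. The toy sketch below scores a candidate hyperedge by letting its member-node embeddings attend to one another and pooling to a probability; the weight shapes, random initialization, and mean-pooling rule are generic illustrative choices, not the exact Hyper-SAGNN architecture.

```python
import numpy as np

# Generic self-attention scoring of a variable-sized candidate hyperedge:
# nodes attend to the other members, then per-node scores are pooled.
rng = np.random.default_rng(0)
d = 16
W_q, W_k, W_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
w_out = rng.standard_normal(d) / np.sqrt(d)

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def score_hyperedge(X):
    # X: (k, d) embeddings of the k candidate member nodes; k may vary
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    A = softmax(Q @ K.T / np.sqrt(d))              # (k, k) attention weights
    H = A @ V                                      # context-aware node features
    logits = H @ w_out                             # per-node scores
    return 1.0 / (1.0 + np.exp(-logits.mean()))    # pooled edge probability

p3 = score_hyperedge(rng.standard_normal((3, d)))  # a size-3 hyperedge
p5 = score_hyperedge(rng.standard_normal((5, d)))  # a size-5 hyperedge
```

Because the same parameters apply regardless of `k`, one trained scorer covers hyperedges of all sizes, which is the structural flexibility the abstract emphasizes.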