Finding the optimal design of experiments in the Bayesian setting typically requires estimation and optimization of the expected information gain functional. This functional consists of one outer and one inner integral, with the logarithm function applied to the inner integral. When the mathematical model of the experiment contains uncertainty about the parameters of interest as well as nuisance uncertainty (i.e., uncertainty about parameters that affect the model but are not themselves of interest to the experimenter), two inner integrals must be estimated. Thus, the already considerable computational effort required to determine good approximations of the expected information gain increases further. The Laplace approximation has been applied successfully in the context of experimental design in various ways, and we propose two novel estimators featuring the Laplace approximation to considerably alleviate the computational burden of both inner integrals. The first estimator applies Laplace's method followed by a Laplace approximation, introducing a bias. The second estimator uses two Laplace approximations as importance sampling measures for Monte Carlo approximations of the inner integrals. Both estimators use Monte Carlo approximation for the remaining outer integral. We provide four numerical examples demonstrating the applicability and effectiveness of the proposed estimators.
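In standard notation (ours, not taken from the abstract), the quantity to be estimated with parameters of interest $\theta$ and nuisance parameters $\eta$ can be written as

\[
I = \iiint \ln\!\left(\frac{p(y\mid\theta)}{p(y)}\right) p(y\mid\theta,\eta)\, p(\theta,\eta)\, \mathrm{d}\eta\, \mathrm{d}\theta\, \mathrm{d}y,
\qquad
p(y\mid\theta) = \int p(y\mid\theta,\eta)\, p(\eta\mid\theta)\, \mathrm{d}\eta,
\qquad
p(y) = \iint p(y\mid\theta,\eta)\, p(\theta,\eta)\, \mathrm{d}\theta\, \mathrm{d}\eta,
\]

so that the two inner integrals, the conditional evidence $p(y\mid\theta)$ and the marginal evidence $p(y)$, both sit inside the logarithm of the outer integrand; these are the integrals targeted by the Laplace-based estimators described above.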
We present here the classical Schwarz method with a time-domain decomposition applied to unconstrained parabolic optimal control problems. In contrast to the Dirichlet-Neumann and Neumann-Neumann algorithms, we find different properties arising from the forward-backward structure of the optimality system. Variants can be obtained using only Dirichlet and Neumann transmission conditions. Some of these variants are only good smoothers, while others could lead to efficient solvers.
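For context, a minimal model problem (ours, not taken from the abstract) exhibiting this forward-backward structure is the tracking-type control of the heat equation, minimizing $\tfrac12\|y-y_d\|^2 + \tfrac{\alpha}{2}\|u\|^2$ subject to the heat equation with source $u$; its reduced first-order optimality system couples a forward state equation to a backward adjoint equation,

\[
\partial_t y - \Delta y = -\tfrac{1}{\alpha}\lambda, \quad y(0) = y_0,
\qquad
-\partial_t \lambda - \Delta \lambda = y - y_d, \quad \lambda(T) = 0,
\]

and a time-domain Schwarz iteration exchanges Dirichlet or Neumann data in time for both $y$ and $\lambda$ across the subinterval interfaces, which is where the behavior departs from that of the purely forward (initial value) case.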
We consider the use of multipreconditioning, which allows multiple preconditioners to be applied in parallel, on high-frequency Helmholtz problems. Typical applications present challenging sparse linear systems that are complex-valued and non-Hermitian and, due to the pollution effect, either very large or else still large but under-resolved in terms of the physics. These factors make finding general-purpose, efficient, and scalable solvers difficult, and no single approach has become the clear method of choice. In this work we take inspiration from domain decomposition strategies known as sweeping methods, which have gained notable interest for their ability to yield nearly linear asymptotic complexity and which can also be favourable for high-frequency problems. While successful approaches exist, such as those based on higher-order interface conditions, perfectly matched layers (PMLs), or complex tracking of wave fronts, they can often be quite involved or tedious to implement. We investigate here the use of simple sweeping techniques applied in different directions, which can then be incorporated in parallel into a multipreconditioned GMRES strategy. Preliminary numerical results on a two-dimensional benchmark problem demonstrate the potential of this approach.
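To sketch the idea in generic notation (ours, not the paper's): given $Au = b$ and $k$ preconditioners $M_1,\dots,M_k$, for instance one sweeping preconditioner per sweep direction, a multipreconditioned GMRES-type method enlarges its search space with all preconditioned directions at once rather than selecting a single one,

\[
\mathcal{Z}_1 = \operatorname{span}\{M_1^{-1} r_0, \ldots, M_k^{-1} r_0\},
\qquad
\mathcal{Z}_{j+1} = \operatorname{span}\{M_i^{-1} A z : z \in \mathcal{Z}_j,\ i = 1,\ldots,k\},
\]

with the iterate chosen to minimize the residual over the accumulated space; the $k$ preconditioner applications in each step are independent and can be performed in parallel.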
The method of multivariable Mendelian randomization uses genetic variants to instrument multiple exposures, in order to estimate the effect that a given exposure has on an outcome conditional on all other exposures included in a linear model. Unfortunately, the inclusion of every additional exposure makes a weak instruments problem more likely, because we require conditionally strong genetic predictors of each exposure. This issue is well appreciated in practice, with different versions of F-statistics routinely reported as measures of instrument strength. Less transparently, however, these F-statistics are sometimes used to guide instrument selection, and even to decide whether to report empirical results. Rather than discarding findings with low F-statistics, researchers can use weak-instrument-robust methods that provide valid inference even when instruments are weak. For multivariable Mendelian randomization with two-sample summary data, we encourage use of the inference strategy of Andrews (2018), which reports both robust and non-robust confidence sets along with a statistic that measures how reliable the non-robust confidence set is in terms of coverage. We also propose a novel adjusted-Kleibergen statistic that corrects for overdispersion heterogeneity in genetic associations with the outcome.
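For orientation, in the two-sample summary-data setting (generic notation, not taken from the abstract) the genetic associations with the outcome are regressed on the associations with the $K$ exposures,

\[
\hat{\Gamma}_j \approx \theta_1 \hat{\gamma}_{1j} + \cdots + \theta_K \hat{\gamma}_{Kj}, \qquad j = 1,\ldots,J,
\]

where $\hat{\Gamma}_j$ and $\hat{\gamma}_{kj}$ are the estimated associations of variant $j$ with the outcome and with exposure $k$, and $\theta_k$ is the direct effect of exposure $k$ conditional on the other exposures; weak instruments correspond to the $\hat{\gamma}_{kj}$ providing little conditionally independent variation in some exposure, which is what the various F-statistics attempt to quantify.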
Tow steering technologies, such as automated fiber placement, enable the fabrication of composite laminates with curvilinear fiber, tow, or tape paths. Designers may therefore tailor tow orientations locally according to the expected local stress state within a structure, such that the strong and stiff orientations of the tow are (for example) optimized to provide maximal mechanical benefit. Tow path optimization can be an effective tool in automating this design process, yet it has a tendency to create complex designs that may be challenging to manufacture. In the context of tow steering, these complexities can manifest as defects such as tow wrinkling, gaps, and overlaps. In this work, we implement manufacturing constraints within the tow path optimization formulation to restrict the minimum tow turning radius and the maximum density of gaps between and overlaps of tows. This is achieved by bounding the local value of the curl and divergence of the vector field associated with the tow orientations. The resulting local constraints are effectively enforced in the optimization framework through the augmented Lagrangian method. The resulting optimization methodology is demonstrated by designing 2D and 3D structures with optimized tow orientation paths that maximize stiffness (minimize compliance) under various levels of manufacturing restrictions. The optimized tow paths are shown to be structurally efficient and to respect the imposed manufacturing constraints. As expected, the more geometric complexity the feedstock tow and placement technology can achieve, the higher the stiffness of the resulting optimized design.
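Schematically (our notation, not the paper's), if $\mathbf{t}(x)$ denotes the unit vector field of tow orientations, the manufacturing restrictions take the form of pointwise bounds

\[
\lvert \nabla \times \mathbf{t}(x) \rvert \le c_{\max},
\qquad
\lvert \nabla \cdot \mathbf{t}(x) \rvert \le d_{\max},
\]

with $c_{\max}$ tied to the minimum turning radius and $d_{\max}$ to the admissible gap/overlap density; in an augmented Lagrangian treatment, each local constraint violation $g_e$ contributes a term of the generic form $\mu_e g_e + \tfrac{\rho}{2} g_e^2$ to the objective, with multipliers $\mu_e$ and penalty $\rho$ updated between optimization cycles.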
Multifactorial experimental designs allow us to assess the contribution of several factors, and potentially their interactions, to one or several responses of interest. Following the principles of the partition of variance advocated by Sir R.A. Fisher, the experimental responses are factored into the quantitative contributions of main factors and interactions. A popular approach to performing this factorization in both ANOVA and ASCA(+) is through general linear models. Subsequently, different inferential approaches can be used to identify whether the contributions are statistically significant or not. Unfortunately, the performance of inferential approaches in terms of Type I and Type II errors can be heavily affected by missing data, outliers, and/or departures from normality of the distribution of the responses, which are commonplace problems in modern analytical experiments. In this paper, we study these problems and suggest good practices of application.
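To make the GLM-based factorization concrete, here is a minimal sketch (illustrative only: the balanced two-factor design, the sum coding, and all variable names are our assumptions, not taken from the paper) of partitioning multivariate responses into main-effect and interaction contributions by least squares, in the spirit of ASCA+:

import numpy as np

# Balanced two-factor design: factor A with 2 levels, factor B with 3 levels, 4 replicates.
levels_a = np.repeat([0, 1], 12)
levels_b = np.tile(np.repeat([0, 1, 2], 4), 2)
rng = np.random.default_rng(0)
Y = rng.normal(size=(24, 5))            # 24 observations of 5 responses

def dummy(codes):
    """Sum-coded dummy matrix (last level dropped) so the effects are centered."""
    n_levels = codes.max() + 1
    D = np.eye(n_levels)[codes][:, :-1]
    D[codes == n_levels - 1] = -1.0
    return D

Da, Db = dummy(levels_a), dummy(levels_b)
Dab = np.einsum('ij,ik->ijk', Da, Db).reshape(len(Y), -1)   # interaction columns
X = np.hstack([np.ones((len(Y), 1)), Da, Db, Dab])

B, *_ = np.linalg.lstsq(X, Y, rcond=None)                   # GLM fit of all responses at once
# Partition the fitted values into per-term effect matrices (ASCA+ style).
cols = np.cumsum([1, Da.shape[1], Db.shape[1], Dab.shape[1]])
terms = np.split(np.arange(X.shape[1]), cols[:-1])
effects = {name: X[:, idx] @ B[idx] for name, idx in zip(['mean', 'A', 'B', 'AxB'], terms)}
residual = Y - X @ B
print({k: np.sum(v**2).round(2) for k, v in effects.items()}, 'SSE:', np.sum(residual**2).round(2))

Inference on whether a given contribution is significant would then proceed, for instance, by permutation testing of the effect matrices, and it is precisely this inferential step whose robustness to missing data, outliers, and non-normality is at issue above.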
This paper establishes a universal framework for the nonlocal modeling of anisotropic damage at finite strains. By combining two recent works, the new framework allows for the flexible incorporation of different established hyperelastic finite strain material formulations into anisotropic damage, whilst ensuring mesh-independent results by employing a generic set of micromorphic gradient-extensions. First, the anisotropic damage model, which generally satisfies the damage growth criterion, is investigated for the specific choice of a Neo-Hookean material on a single element. Next, the model is applied with different gradient-extensions in structural simulations of an asymmetrically notched specimen to identify an efficient choice in the form of a volumetric-deviatoric regularization. Thereafter, the universal framework, here specified without loss of generality for a Neo-Hookean material with a volumetric-deviatoric gradient-extension, is successfully applied to the complex simulation of a pressure-loaded rotor blade. Upon acceptance of the manuscript, we will make the code of the material subroutines publicly accessible at //doi.org/10.5281/zenodo.11171630.
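As a rough illustration of the type of model meant here (our notation, written with a scalar damage variable for brevity even though the framework treats anisotropic damage; the paper's actual formulation may differ), a micromorphic gradient-extension of a damaged Neo-Hookean energy could take the form

\[
\psi = f(d)\, \psi_{\mathrm{NH}}(\mathbf{C}) + \frac{H}{2}\,(d - \bar{d})^2 + \frac{A}{2}\,\lvert \nabla \bar{d} \rvert^2,
\]

where $f(d)$ is a degradation function, $\bar{d}$ a micromorphic field whose gradient term provides the regularization, and $H$, $A$ penalty and internal-length parameters; a volumetric-deviatoric gradient-extension would regularize the volumetric and isochoric contributions to $\psi_{\mathrm{NH}}$ separately.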
This work studies the distributionally robust evaluation of expected values over temporal data. The set of alternative measures is characterized via causal optimal transport. We prove strong duality and recast the causality constraint as a minimization over an infinite-dimensional space of test functions. We approximate the test functions by neural networks and establish the sample complexity via Rademacher complexity. An example is given to validate the feasibility of the technical assumptions. Moreover, when structural information is available to further restrict the ambiguity set, we prove the dual formulation and provide efficient optimization methods. Our framework outperforms classic counterparts in the distributionally robust portfolio selection problem. The connection with the naive strategy is also investigated numerically.
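In generic notation (ours, not the paper's), the object of interest is a worst-case expectation over an ambiguity set defined by a causal optimal transport distance around the reference law $P$ of the temporal data $X = (X_1,\ldots,X_T)$,

\[
\sup_{Q}\ \mathbb{E}_Q\big[f(X_1,\ldots,X_T)\big]
\quad\text{subject to}\quad
\mathcal{W}_c^{\mathrm{causal}}(P, Q) \le \rho,
\]

where the causality requirement restricts transport plans to those respecting the temporal flow of information; the strong duality mentioned above converts this supremum into a dual problem in which the causality constraint is enforced through test functions, and it is these test functions that are approximated by neural networks.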
Mammals can generate autonomous behaviors in various complex environments through the coordination and interaction of activities at different levels of their central nervous system. In this paper, we propose a novel hierarchical learning control framework that mimics the hierarchical structure of the central nervous system along with its coordination and interaction behaviors. The framework combines active and passive control systems to improve both the flexibility and the reliability of the control system, as well as to achieve more diverse autonomous behaviors of robots. Specifically, the framework has a backbone of independent neural network controllers at different levels and takes a three-level dual descending pathway structure, inspired by the functionality of the cerebral cortex, cerebellum, and spinal cord. We comprehensively validate the proposed approach through simulations and experiments with a hexapod robot in various complex environments, including obstacle crossing and rapid recovery after partial damage. This study reveals the principle that governs autonomous behavior in the central nervous system and demonstrates the effectiveness of the hierarchical control approach, whose salient features are the hierarchical learning control architecture and the combination of active and passive control systems.
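Purely as a schematic of the three-level idea (the layer sizes, names, and the simple additive blending of active and passive terms below are illustrative assumptions on our part, not the paper's implementation):

import numpy as np

class Level:
    """One level of the cascade: a tiny random feedforward net mapping its input to a command."""
    def __init__(self, n_in, n_out, seed):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
    def __call__(self, x):
        return np.tanh(self.W @ x)

high = Level(n_in=8, n_out=4, seed=0)    # "cortex": task context -> gait goal
mid  = Level(n_in=4, n_out=6, seed=1)    # "cerebellum": goal -> leg coordination
low  = Level(n_in=6, n_out=18, seed=2)   # "spinal cord": coordination -> joint commands

def control(task_context, joint_error):
    active = low(mid(high(task_context)))   # descending (active) pathway
    passive = -0.5 * joint_error            # passive, spring-like reflex term
    return active + passive

print(control(np.ones(8), np.zeros(18)).shape)   # (18,) joint commands for a hexapod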
The optimization of yields in multi-reactor systems, which are advanced tools in heterogeneous catalysis research, presents a significant challenge due to hierarchical technical constraints. To this end, this work introduces a novel approach called process-constrained batch Bayesian optimization via Thompson sampling (pc-BO-TS), together with its generalized hierarchical extension (hpc-BO-TS). The method, tailored to the efficiency demands of multi-reactor systems, integrates experimental constraints and balances exploration and exploitation in a sequential batch optimization strategy, offering an improvement over existing Bayesian optimization methods. The performance of pc-BO-TS and hpc-BO-TS is validated on synthetic cases as well as in a realistic scenario based on data from high-throughput experiments performed on a multi-reactor system of the REALCAT platform. The proposed methods often outperform other sequential Bayesian optimization methods and existing process-constrained batch Bayesian optimization methods. This work thus provides a novel approach to optimizing the yield of a reaction in a multi-reactor system, marking a significant step forward in digital catalysis and, more generally, in optimization methods for chemical engineering.
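As a loose sketch of process-constrained batch Thompson sampling (the shared-temperature constraint, the kernel choice, and all names below are illustrative assumptions on our part, not the authors' implementation): fit a surrogate, draw a posterior sample, and maximize it over candidate batches that respect the shared process variable.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
def yield_fn(x):                      # illustrative toy ground truth: x = (temperature, flow rate)
    return np.exp(-((x[:, 0] - 0.6)**2 + (x[:, 1] - 0.3)**2) / 0.05)

X = rng.uniform(size=(10, 2)); y = yield_fn(X)          # initial experiments
gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)

# One batch for 4 parallel reactors: the temperature (dim 0) is a shared process
# variable, while the per-reactor flow rate (dim 1) may differ -> hierarchical constraint.
temps = rng.uniform(size=32)                             # candidate shared temperatures
flows = rng.uniform(size=64)                             # candidate per-reactor flow rates
best_batch, best_val = None, -np.inf
for t in temps:
    cand = np.column_stack([np.full_like(flows, t), flows])
    sample = gp.sample_y(cand, random_state=rng.integers(1 << 31)).ravel()  # Thompson draw
    top = np.argsort(sample)[-4:]                        # best 4 flow rates at this temperature
    if sample[top].sum() > best_val:
        best_val, best_batch = sample[top].sum(), cand[top]
print(best_batch)    # next batch: one shared temperature, four different flow rates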
We propose a new second-order accurate lattice Boltzmann formulation for linear elastodynamics that is stable for arbitrary combinations of material parameters under a CFL-like condition. The construction of the numerical scheme uses an equivalent first-order hyperbolic system of equations as an intermediate step, for which a vectorial lattice Boltzmann formulation is introduced. The only difference from conventional lattice Boltzmann formulations is the use of vector-valued populations, so that all computational benefits of the algorithm are preserved. Using the asymptotic expansion technique and the notion of pre-stability structures, we further establish second-order consistency as well as analytical stability estimates. Lastly, we introduce a second-order consistent initialization of the populations as well as a boundary formulation for Dirichlet boundary conditions on 2D rectangular domains. All theoretical derivations are numerically verified by convergence studies using manufactured solutions and long-term stability tests.
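For readers unfamiliar with the construction, the scheme can be summarized schematically (standard lattice Boltzmann notation, not taken verbatim from the paper): the usual scalar populations are replaced by vector-valued populations $\mathbf{f}_i \in \mathbb{R}^m$ associated with the unknowns of the first-order hyperbolic system, evolved by the familiar stream-and-collide update

\[
\mathbf{f}_i(\mathbf{x} + \mathbf{c}_i \Delta t,\; t + \Delta t)
= \mathbf{f}_i(\mathbf{x}, t) + \omega \left( \mathbf{f}_i^{\mathrm{eq}}(\mathbf{x}, t) - \mathbf{f}_i(\mathbf{x}, t) \right),
\]

so that the streaming and collision steps, and hence the computational structure of a standard lattice Boltzmann code, are unchanged; only the equilibria $\mathbf{f}_i^{\mathrm{eq}}$ are chosen so that the asymptotic limit reproduces the equations of linear elastodynamics.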