The rise of machine learning has fueled the discovery of new materials and, especially, metamaterials--truss lattices being their most prominent class. While their tailorable properties have been explored extensively, the design of truss-based metamaterials has remained highly limited and often heuristic, due to the vast, discrete design space and the lack of a comprehensive parameterization. We here present a graph-based deep learning generative framework, which combines a variational autoencoder and a property predictor, to construct a reduced, continuous latent representation covering an enormous range of trusses. This unified latent space allows for the fast generation of new designs through simple operations (e.g., traversing the latent space or interpolating between structures). We further demonstrate an optimization framework for the inverse design of trusses with customized mechanical properties in both the linear and nonlinear regimes, including designs exhibiting exceptionally stiff, auxetic, pentamode-like, and tailored nonlinear behaviors. This generative model can predict manufacturable (and counter-intuitive) designs with extreme target properties beyond the training domain.
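To make the architecture concrete, below is a minimal, hedged sketch of the core idea -- a variational autoencoder trained jointly with a property predictor attached to the latent code. It is not the paper's graph-based model: plain MLPs act on a hypothetical fixed-size truss descriptor, and all layer sizes, the descriptor dimension, and the loss weights are illustrative assumptions.

```python
# Sketch only: VAE + property predictor over a hypothetical fixed-size truss
# descriptor (e.g., flattened node coordinates and connectivity entries).
import torch
import torch.nn as nn

class TrussVAE(nn.Module):
    def __init__(self, in_dim=64, latent_dim=8, prop_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
        # Property head maps the latent code to effective properties
        # (e.g., stiffness, Poisson's ratio), organizing the latent space
        # by mechanics and enabling gradient-based inverse design.
        self.prop_head = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                       nn.Linear(64, prop_dim))

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), self.prop_head(z), mu, logvar

def loss_fn(x, props, model, beta=1e-3, gamma=1.0):
    recon, pred_props, mu, logvar = model(x)
    rec = nn.functional.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    prop = nn.functional.mse_loss(pred_props, props)
    return rec + beta * kld + gamma * prop
```

Once trained, new designs are obtained by decoding points sampled or interpolated in the latent space, and inverse design amounts to optimizing the latent code against the property head's output.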
The ability to extract material parameters of perovskites from quantitative experimental analysis is essential for the rational design of photovoltaic and optoelectronic applications. However, the difficulty of this analysis increases significantly with the complexity of the theoretical model and the number of material parameters. Here we use Bayesian optimization to develop an analysis platform that can extract up to 8 fundamental material parameters of an organometallic perovskite semiconductor from a transient photoluminescence experiment, based on a complex full physics model that includes drift-diffusion of carriers and dynamic defect occupation. An example study of thermal degradation reveals that the carrier mobility and trap-assisted recombination coefficient are reduced noticeably, while the defect energy level remains nearly unchanged. The reduced carrier mobility can dominate the overall effect on thermal degradation of perovskite solar cells by reducing the fill factor, despite the opposite, fill-factor-increasing effect of the reduced trap-assisted recombination coefficient. In the future, this platform can be conveniently applied to other experiments or to combinations of experiments, accelerating materials discovery and optimization of semiconductor materials for photovoltaics and other applications.
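A hedged sketch of the parameter-extraction loop follows: Bayesian optimization searches the material-parameter space so that a simulated transient photoluminescence (TRPL) decay matches the measured trace. Here `simulate_trpl` is a hypothetical placeholder for the full drift-diffusion/defect-occupation model, the parameter ranges are illustrative, and scikit-optimize is used only as one possible Bayesian-optimization backend, not necessarily the one used in the work.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

t_meas = np.linspace(0, 1e-6, 200)      # measurement time grid (s)
pl_meas = np.exp(-t_meas / 2e-7)        # placeholder "measured" decay

def simulate_trpl(mu_e, mu_h, k_trap, e_trap):
    """Hypothetical stand-in for the full-physics TRPL solver."""
    tau_eff = 1.0 / (2e6 + k_trap * 1e17)   # toy lifetime: more trapping -> faster decay
    return np.exp(-t_meas / tau_eff)

space = [
    Real(1e-2, 1e2, prior="log-uniform", name="mu_e"),      # cm^2/Vs
    Real(1e-2, 1e2, prior="log-uniform", name="mu_h"),      # cm^2/Vs
    Real(1e-12, 1e-8, prior="log-uniform", name="k_trap"),  # cm^3/s
    Real(0.05, 0.6, name="e_trap"),                         # eV below band edge
    # ... the full platform extracts up to 8 such parameters
]

def misfit(params):
    pl_sim = simulate_trpl(*params)
    return float(np.mean((np.log(pl_sim + 1e-12) - np.log(pl_meas + 1e-12)) ** 2))

result = gp_minimize(misfit, space, n_calls=40, random_state=0)
print("best-fit parameters:", result.x)
```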
Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have proved that robust models built from Wasserstein ambiguity sets have nice generalization guarantees, breaking the curse of dimensionality. However, these results are obtained in specific cases, at the cost of approximations, or under assumptions that are difficult to verify in practice. In contrast, we establish, in this article, exact generalization guarantees that cover all practical cases, including any transport cost function and any loss function, potentially nonconvex and nonsmooth. For instance, our result applies to deep learning, without requiring restrictive assumptions. We achieve this result through a novel proof technique that combines arguments from nonsmooth analysis with classical concentration results. Our approach is general enough to extend to the recent versions of Wasserstein/Sinkhorn distributionally robust problems that involve (double) regularizations.
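For context, the Wasserstein distributionally robust problem referred to above takes the standard form (generic notation, not specific to this article's theorems):
\[
\min_{\theta \in \Theta} \; \sup_{Q \,:\, W_c(Q, \widehat{P}_n) \le \rho} \; \mathbb{E}_{\xi \sim Q}\big[\ell(\theta; \xi)\big],
\qquad
W_c(Q, P) \;=\; \inf_{\pi \in \Pi(Q, P)} \mathbb{E}_{(\xi, \xi') \sim \pi}\big[c(\xi, \xi')\big],
\]
where \(\widehat{P}_n\) is the empirical distribution of the training sample, \(c\) the transport cost, \(\ell\) the (possibly nonconvex, nonsmooth) loss, and \(\rho\) the ambiguity radius; the generalization question is how the robust objective evaluated at \(\widehat{P}_n\) relates to the objective under the true data-generating distribution.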
Optogenetics is widely used to study the effects of neural circuit manipulation on behavior. However, the paucity of causal inference methodological work on this topic has resulted in analysis conventions that discard information and constrain the scientific questions that can be posed. To fill this gap, we introduce a nonparametric causal inference framework for analyzing "closed-loop" designs, which use dynamic policies that assign treatment based on covariates. In this setting, standard methods can introduce bias and occlude causal effects. Building on the sequentially randomized experiments literature in causal inference, our approach extends history-restricted marginal structural models for dynamic regimes. In practice, our framework can identify a wide range of causal effects of optogenetics on trial-by-trial behavior, such as fast/slow-acting, dose-response, additive/antagonistic, and floor/ceiling effects. Importantly, it does so without requiring negative controls, and can estimate how causal effect magnitudes evolve across time points. From another view, our work extends "excursion effect" methods--popular in the mobile health literature--to enable estimation of causal contrasts for treatment sequences of length greater than one, in the presence of positivity violations. We derive rigorous statistical guarantees, enabling hypothesis testing of these causal effects. We demonstrate our approach on data from a recent study of the effect of dopaminergic activity on learning, and show how our method reveals relevant effects obscured in standard analyses.
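As a rough illustration (generic potential-outcome notation, not the article's exact estimand), the contrasts targeted by such methods compare outcomes under alternative treatment sequences of length \(\Delta \ge 1\) beginning at trial \(t\):
\[
\beta(t, \Delta; \bar{a}, \bar{a}') \;=\; \mathbb{E}\big[\, Y_{t+\Delta}\big(\bar{A}_{t:t+\Delta-1} = \bar{a}\big) \;-\; Y_{t+\Delta}\big(\bar{A}_{t:t+\Delta-1} = \bar{a}'\big) \,\big],
\]
where \(Y_{t+\Delta}(\cdot)\) denotes the potential outcome at trial \(t+\Delta\) and \(\bar{A}_{t:t+\Delta-1}\) the treatment sequence over the intervening trials; setting \(\Delta = 1\) recovers an excursion-effect-style contrast, while larger \(\Delta\) corresponds to the longer treatment sequences mentioned above.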
Regression methods dominate the practice of biostatistical analysis, but biostatistical training emphasises the details of regression models and methods ahead of the purposes for which such modelling might be useful. More broadly, statistics is widely understood to provide a body of techniques for "modelling data", underpinned by what we describe as the "true model myth": that the task of the statistician/data analyst is to build a model that closely approximates the true data generating process. By way of our own historical examples and a brief review of mainstream clinical research journals, we describe how this perspective has led to a range of problems in the application of regression methods, including misguided "adjustment" for covariates, misinterpretation of regression coefficients and the widespread fitting of regression models without a clear purpose. We then outline a new approach to the teaching and application of biostatistical methods, which situates them within a framework that first requires clear definition of the substantive research question at hand within one of three categories: descriptive, predictive, or causal. Within this approach, the simple univariable regression model may be introduced as a tool for description, while the development and application of multivariable regression models as well as other advanced biostatistical methods should proceed differently according to the type of question. Regression methods will no doubt remain central to statistical practice as they provide a powerful tool for representing variation in a response or outcome variable as a function of "input" variables, but their conceptualisation and usage should follow from the purpose at hand.
The main objective of this work is to explore the possibility of incorporating radiomic information from multiple lesions into survival models. We hypothesise that when more lesions are present, their inclusion can improve model performance, and we aim to find an optimal strategy for using multiple distinct regions in modelling. The idea of using multiple regions of interest (ROIs) to extract radiomic features for predictive models has been implemented in many recent works. However, in almost all studies, analogous regions were segmented according to particular criteria for all patients -- for example, the primary tumour and peritumoral area, or subregions of the primary tumour. Such regions can be included in a model in a straightforward way as additional features. A more interesting scenario occurs when multiple distinct ROIs are present, such as multiple lesions in a regionally disseminated cancer. Since the number of such regions may differ between patients, their inclusion in a model is non-trivial and requires additional processing steps. We proposed several methods of handling multiple ROIs, representing either an ROI-aggregation or a risk-aggregation strategy, compared them to a previously published method, and evaluated their performance in different classes of survival models in a Monte Carlo cross-validation scheme. We demonstrated the effectiveness of the methods using a cohort of 115 non-small cell lung cancer patients, for whom we predicted the metastasis risk based on features extracted from PET images in their original resolution or interpolated to the CT image resolution. For both feature sets, incorporating all available lesions, as opposed to a single ROI representing the primary tumour, allowed for a considerable improvement in predictive ability regardless of the model.
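As a hedged sketch of the two aggregation families (ROI aggregation versus risk aggregation), the snippet below pools either per-lesion feature vectors or per-lesion risk scores for a patient with a variable number of lesions. The pooling choices, the feature layout, and the toy risk model are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np

def aggregate_rois(lesion_features: list, how: str = "mean") -> np.ndarray:
    """ROI aggregation: pool per-lesion radiomic feature vectors into one vector."""
    stack = np.vstack(lesion_features)        # shape: (n_lesions, n_features)
    if how == "mean":
        return stack.mean(axis=0)
    if how == "max":
        return stack.max(axis=0)
    if how == "largest":                      # e.g., keep the largest lesion only
        volumes = stack[:, 0]                 # assume feature 0 is lesion volume
        return stack[np.argmax(volumes)]
    raise ValueError(how)

def aggregate_risks(lesion_features: list, risk_model, how: str = "max") -> float:
    """Risk aggregation: score each lesion separately, then pool the risks."""
    risks = np.array([risk_model(f) for f in lesion_features])
    return {"max": risks.max(), "mean": risks.mean(), "sum": risks.sum()}[how]

# Example: a toy linear risk model applied to a patient with three lesions.
rng = np.random.default_rng(0)
patient_lesions = [rng.normal(size=10) for _ in range(3)]
w = rng.normal(size=10)
toy_model = lambda f: float(f @ w)
x_patient = aggregate_rois(patient_lesions, how="mean")   # feeds a survival model
r_patient = aggregate_risks(patient_lesions, toy_model)   # patient-level risk score
```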
Utilizing robotic systems in the construction industry is gaining popularity due to gains in build time, precision, and efficiency. In this paper, we introduce a system that allows the coordination of multiple manipulator robots for construction activities. As a case study, we chose robotic brick wall assembly. By utilizing a multi-robot system in which arm manipulators collaborate with each other, the entirety of a potentially long wall can be assembled simultaneously. However, the reduction in overall bricklaying time depends on minimizing the time required by each individual manipulator. In this paper, we simulate various placements of the material and the robot bases, as well as different robot configurations, to determine the optimal positions of the robots and material and the best robot configuration. The simulation results provide users with insights into how to find the best placement of robots and raw materials for brick wall assembly.
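A hedged illustration of that placement search: sweep candidate robot-base and brick-pallet positions and keep the configuration that minimizes assembly time. Here `simulate_assembly_time` is a hypothetical stand-in for the simulation described above, and the candidate positions and cost model are purely illustrative.

```python
import itertools

def simulate_assembly_time(base_xy, pallet_xy):
    """Hypothetical simulator: time grows with the reach distance to the pallet."""
    dx, dy = base_xy[0] - pallet_xy[0], base_xy[1] - pallet_xy[1]
    return 60.0 + 5.0 * (dx * dx + dy * dy) ** 0.5   # seconds per wall segment

base_candidates = [(x, 0.0) for x in (0.5, 1.0, 1.5, 2.0)]
pallet_candidates = [(x, 0.6) for x in (0.5, 1.0, 1.5)]

best = min(itertools.product(base_candidates, pallet_candidates),
           key=lambda bp: simulate_assembly_time(*bp))
print("best base/pallet placement:", best)
```

With several manipulators working on separate wall segments, minimizing each manipulator's time in this way also minimizes the slowest one, which is what governs the overall bricklaying time.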
This work highlights an approach for incorporating realistic uncertainties into scientific computing workflows based on finite elements, focusing on applications in computational mechanics and design optimization. We leverage Mat\'ern-type Gaussian random fields (GRFs) generated using the SPDE method to model aleatoric uncertainties, including environmental influences, varying material properties, and geometric ambiguities. Our focus lies on delivering practical GRF realizations that accurately capture imperfections and variations and on understanding how they impact the predictions of computational models and the topology of optimized designs. We describe a numerical algorithm based on solving a generalized SPDE to sample GRFs on arbitrary meshed domains. The algorithm leverages established techniques and integrates seamlessly with the open-source finite element library MFEM and associated scientific computing workflows, like those found in industrial and national laboratory settings. Our solver scales efficiently for large-scale problems and supports various domain types, including surfaces and embedded manifolds. We showcase its versatility through biomechanics and topology optimization applications. The flexibility and efficiency of SPDE-based GRF generation empower us to run large-scale optimization problems on 2D and 3D domains, including finding optimized designs on embedded surfaces, and to generate topologies beyond the reach of conventional techniques. Moreover, these capabilities allow us to model geometric uncertainties of reconstructed submanifolds, such as the surfaces of cerebral aneurysms. In addition to offering benefits in these specific domains, the proposed techniques transcend specific applications and generalize to arbitrary forward and inverse problems in uncertainty quantification involving finite elements.
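For reference, the SPDE method alluded to above samples a Mat\'ern-type GRF \(u\) as the solution of a fractional elliptic SPDE driven by Gaussian white noise \(\mathcal{W}\) (standard Whittle--Mat\'ern form; the generalized variant used in the work is an extension of this):
\[
\big(\kappa^2 - \Delta\big)^{\alpha/2}\, u(x) \;=\; \mathcal{W}(x), \qquad x \in \mathcal{D} \subset \mathbb{R}^d, \quad \alpha = \nu + d/2,
\]
so that \(u\) has Mat\'ern covariance with smoothness \(\nu\) and correlation length controlled by \(\kappa\) (practical range \(\approx \sqrt{8\nu}/\kappa\)). Discretizing this equation with finite elements on the same mesh as the forward problem is what lets the approach handle arbitrary meshed domains, surfaces, and embedded manifolds.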
Learning to Optimize (L2O) stands at the intersection of traditional optimization and machine learning, utilizing the capabilities of machine learning to enhance conventional optimization techniques. As real-world optimization problems frequently share common structures, L2O provides a tool to exploit these structures for better or faster solutions. This tutorial dives deep into L2O techniques, showing how to accelerate optimization algorithms, rapidly estimate solutions, or even reshape the optimization problem itself to make it better suited to real-world applications. By considering the prerequisites for successful applications of L2O and the structure of the optimization problems at hand, this tutorial provides a comprehensive guide for practitioners and researchers alike.
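As a hedged illustration of one classic flavor of L2O (not code from the tutorial), the sketch below replaces a hand-designed update rule with a small recurrent network that maps gradients to parameter updates; in practice the network's own weights would be meta-trained by unrolling this inner loop over a distribution of training problems.

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    def __init__(self, hidden=20):
        super().__init__()
        self.rnn = nn.LSTMCell(1, hidden)   # operates coordinate-wise on gradients
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad, state):
        g = grad.reshape(-1, 1)
        h, c = self.rnn(g, state)
        return self.out(h).reshape(grad.shape), (h, c)

# Inner loop on a toy quadratic f(x) = ||x - 1||^2, applying the learned update.
opt_net = LearnedOptimizer()
x = torch.zeros(5, requires_grad=True)
state = (torch.zeros(5, 20), torch.zeros(5, 20))
for _ in range(10):
    loss = ((x - 1.0) ** 2).sum()
    grad, = torch.autograd.grad(loss, x)
    update, state = opt_net(grad, state)
    x = (x + update).detach().requires_grad_(True)
```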
Inference for functional linear models in the presence of heteroscedastic errors has received insufficient attention given its practical importance; in fact, even a central limit theorem has not been studied in this case. The difficulty is that conditional mean estimates have complicated sampling distributions due to the infinite-dimensional regressors, where truncation bias and scaling issues are compounded by non-constant variance under heteroscedasticity. As a foundation for distributional inference, we establish a central limit theorem for the estimated conditional mean under general dependent errors, and subsequently we develop a paired bootstrap method to provide better approximations of sampling distributions. The proposed paired bootstrap does not follow the standard bootstrap algorithm for finite-dimensional regressors, as that version is valid only within a narrow implementation window when used with functional regressors. The reason is a bias that arises with functional regressors in a naive bootstrap construction. Our bootstrap proposal incorporates debiasing and thereby attains much broader validity and flexibility with respect to truncation parameters for inference under heteroscedasticity; even when the naive approach is valid, the proposed bootstrap method performs better numerically. The bootstrap is applied to construct confidence intervals for centered projections and to conduct hypothesis tests for multiple conditional means. Our theoretical results on bootstrap consistency are demonstrated through simulation studies and also illustrated with a real data example.
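In generic notation (not necessarily the article's exact setup), the model in question is
\[
Y_i \;=\; \mu + \int_0^1 \beta(t)\, X_i(t)\, dt + \varepsilon_i, \qquad \mathbb{E}[\varepsilon_i \mid X_i] = 0, \quad \mathrm{Var}(\varepsilon_i \mid X_i) = \sigma^2(X_i),
\]
with \(\beta\) estimated through a truncated expansion \(\widehat{\beta}(t) = \sum_{j=1}^{k_n} \widehat{b}_j\, \widehat{\phi}_j(t)\) in estimated functional principal components \(\widehat{\phi}_j\). The truncation level \(k_n\) is the source of the bias that a debiased bootstrap must account for, and the non-constant variance \(\sigma^2(X_i)\) is what paired (rather than residual) resampling is designed to accommodate.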
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve on existing information-theoretic bounds, are applicable to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
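For context (and not as this article's result), the classical information-theoretic bound that such works refine states that, for a \(\sigma\)-subgaussian loss, the expected generalization gap of an algorithm with output \(W\) trained on a sample \(S\) of size \(n\) satisfies
\[
\big|\mathbb{E}\big[\mathrm{gen}(W, S)\big]\big| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},
\]
where \(I(W;S)\) is the mutual information between the learned weights and the training set. Bounds of the kind described above instead measure information through the predictions, which can remain finite and estimable even when \(I(W;S)\) is vacuous for a deterministic learner.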