A well-established approach for inferring full displacement and stress fields from possibly sparse data is to calibrate the parameters of a given constitutive model using a Bayesian update. After calibration, a (stochastic) forward simulation is conducted with the identified model parameters to resolve physical fields in regions that were not accessible to the measurement device. A shortcoming of model calibration is that the model is deemed to best represent reality, which is not always the case, especially in the context of the aging of structures and materials. While this issue is often addressed with repeated model calibration, a different approach is followed in the recently proposed statistical Finite Element Method (statFEM). Instead of using Bayes' theorem to update model parameters, the displacement is chosen as the stochastic prior and updated to fit the measurement data more closely. For this purpose, the statFEM framework introduces a so-called model-reality mismatch, parametrized by only three hyperparameters. This makes the inference of full-field data computationally efficient in an online stage: if the stochastic prior can be computed offline, solving the underlying partial differential equation (PDE) online is unnecessary. Compared to solving a PDE, identifying only three hyperparameters and conditioning the state on the sensor data requires far fewer computational resources. This paper presents two contributions to the existing statFEM approach: First, we use a non-intrusive polynomial chaos method to compute the prior, enabling the use of complex mechanical models in deterministic formulations. Second, we examine the influence of prior material models (linear elastic and St. Venant-Kirchhoff material with uncertain Young's modulus) on the updated solution. We present statFEM results for 1D and 2D examples; an extension to 3D is straightforward.
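As an illustration of why the online stage is cheap, the sketch below conditions a precomputed Gaussian displacement prior on sensor readings, assuming a statFEM-style observation model y = ρPu + d + e with a squared-exponential mismatch kernel parametrized by (ρ, σ_d, l_d). Function and variable names are ours, and the published statFEM formulation (multiple readings, hyperparameter estimation) is richer than this minimal Gaussian update.

```python
# Minimal sketch of a statFEM-style update (assumed notation, not the paper's code):
# prior displacement u ~ N(mu_u, C_u) from a non-intrusive PCE surrogate,
# sensors y = rho * P u + d + e with mismatch d ~ GP(0, k_d(sigma_d, l_d))
# and i.i.d. noise e ~ N(0, sigma_e^2 I).
import numpy as np

def sq_exp_kernel(x, sigma_d, l_d):
    """Squared-exponential kernel for the model-reality mismatch d."""
    diff = x[:, None] - x[None, :]
    return sigma_d**2 * np.exp(-0.5 * (diff / l_d) ** 2)

def statfem_update(mu_u, C_u, P, y, x_sensors, rho, sigma_d, l_d, sigma_e):
    """Condition the Gaussian displacement prior on sensor readings y."""
    C_d = sq_exp_kernel(x_sensors, sigma_d, l_d)               # mismatch covariance
    C_e = sigma_e**2 * np.eye(len(y))                          # sensor-noise covariance
    S = rho**2 * P @ C_u @ P.T + C_d + C_e                     # marginal covariance of y
    K = rho * C_u @ P.T @ np.linalg.solve(S, np.eye(len(y)))   # "Kalman gain"
    mu_post = mu_u + K @ (y - rho * P @ mu_u)                  # posterior mean
    C_post = C_u - rho * K @ P @ C_u                           # posterior covariance
    return mu_post, C_post
```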
Model calibration consists of using experimental or field data to estimate the unknown parameters of a mathematical model. The presence of model discrepancy and measurement bias in the data complicates this task. Satellite interferograms, for instance, are widely used for calibrating geophysical models in geological hazard quantification. In this work, we use satellite interferograms to relate ground deformation observations to the properties of the magma chamber at K\={\i}lauea Volcano in Hawai`i. We derive closed-form marginal likelihoods and implement posterior sampling procedures that simultaneously estimate the model discrepancy of physical models and the measurement bias from the atmospheric error in satellite interferograms. We find that model calibration by aggregating multiple interferograms and downsampling the pixels in the interferograms can reduce the computational complexity compared to calibration approaches based on multiple data sets. We study the conditions under which data aggregation and downsampling incur no loss of information. Simulations illustrate that both discrepancy and measurement bias can be estimated, and real applications demonstrate that modeling both effects helps obtain a reliable estimate of a physical model's unobserved parameters and enhances its predictive accuracy. We implement the computational tools in the RobustCalibration package available on CRAN.
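As a sketch of the structure involved (not the paper's closed-form likelihoods and not the RobustCalibration API; names below are illustrative), once a Gaussian process discrepancy, a correlated measurement bias, and i.i.d. noise are marginalized out, the data follow y ~ N(f(θ), K_δ + K_b + σ²I), whose log-likelihood can be evaluated with a Cholesky factorization:

```python
# Schematic Gaussian marginal likelihood for calibration with discrepancy and bias.
# Illustrative only; the closed forms derived in the paper differ in detail.
import numpy as np

def log_marginal_likelihood(y, f_theta, K_delta, K_bias, sigma2):
    """log N(y; f(theta), K_delta + K_bias + sigma2 * I) via Cholesky."""
    n = len(y)
    K = K_delta + K_bias + sigma2 * np.eye(n)
    r = y - f_theta                                  # data minus simulator output
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    return (-0.5 * r @ alpha                         # -0.5 r' K^{-1} r
            - np.sum(np.log(np.diag(L)))             # -0.5 log det K
            - 0.5 * n * np.log(2 * np.pi))
```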
Time-dependent Maxwell's equations govern electromagnetics. Under certain conditions, these equations can be rewritten as a second-order partial differential equation, in this case the vectorial wave equation. For the vectorial wave equation, we investigate its numerical treatment and the challenges that arise in the implementation. For this purpose, we consider a space-time variational setting, i.e., time is treated as an additional dimension on the same footing as the spatial ones. More specifically, we apply integration by parts in time as well as in space, leading to a space-time variational formulation with different trial and test spaces. Conforming discretizations of tensor-product type result in a Galerkin--Petrov finite element method that requires a CFL condition for stability. For this Galerkin--Petrov variational formulation, we study the CFL condition and its sharpness. To overcome the CFL condition, we use a Hilbert-type transformation that leads to a variational formulation with equal trial and test spaces. Conforming space-time discretizations result in a new Galerkin--Bubnov finite element method that is unconditionally stable. In numerical examples, we demonstrate the effectiveness of this Galerkin--Bubnov finite element method. Furthermore, we investigate different projections of the right-hand side and their influence on the convergence rates. This paper is a first step towards a more stable computation and a better understanding of vectorial wave equations in a conforming space-time approach.
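For concreteness, eliminating the magnetic field $H$ from the first-order Maxwell system $\varepsilon\,\partial_t E - \nabla \times H = -J$, $\mu\,\partial_t H + \nabla \times E = 0$ (with permittivity $\varepsilon$, permeability $\mu$, and current density $J$; sign and scaling conventions may differ from those used in the paper) yields the second-order vectorial wave equation for the electric field $E$:
\[
\varepsilon\,\partial_{tt} E + \nabla \times \bigl(\mu^{-1}\,\nabla \times E\bigr) = -\,\partial_t J .
\]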
This work presents a rigorous mathematical formulation for topology optimization of a macrostructure undergoing ductile failure. Predicting failure in ductile solid materials, which exhibit dominant plastic deformation, is a challenging task that plays an important role in various engineering applications. Here, we rely on the phase-field approach to fracture, a widely adopted framework for modeling and computing fracture failure phenomena in solids. The first objective is to optimize the topology of the structure in order to minimize its mass while accounting for structural damage. To do so, a topological phase transition function (between solid and void phases) is introduced, resulting in an extension of all the governing equations. Our second objective is to additionally enhance the fracture resistance of the structure. Accordingly, two different formulations are proposed: one requires only the residual force vector of the deformation field as a constraint, while the second simultaneously imposes the residual force vectors of the deformation and the phase-field fracture. Incremental minimization principles for a class of gradient-type dissipative materials are used to derive the governing equations. Level-set-based topology optimization is employed to seek an optimal layout with smooth and clear boundaries. Sensitivities are derived using the analytical gradient-based adjoint method to update the level-set surface for both formulations. The evolution of the level-set surface is realized by a reaction-diffusion equation, maximizing the strain energy of the structure while a certain volume of the design domain is prescribed. Several three-dimensional numerical examples are presented to substantiate our algorithmic developments.
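For orientation, reaction-diffusion driven level-set updates are commonly written in the generic form (conventions and signs vary, and the exact sensitivity term here is the adjoint-based one derived in the paper):
\[
\frac{\partial \phi}{\partial t} = -K\left(\bar{s} - \tau\,\nabla^{2}\phi\right),
\]
where $\phi$ is the level-set function, $\bar{s}$ the design sensitivity augmented by a Lagrange multiplier for the volume constraint, $\tau$ a regularization parameter controlling geometric complexity, and $K$ a positive proportionality coefficient.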
Recent work has shown that representation learning plays a critical role in sample-efficient reinforcement learning (RL) from pixels. Unfortunately, in real-world scenarios, representation learning is usually fragile to task-irrelevant distractions such as variations in background or viewpoint. To tackle this problem, we propose a novel clustering-based approach, namely Clustering with Bisimulation Metrics (CBM), which learns robust representations by grouping visual observations in the latent space. Specifically, CBM alternates between two steps: (1) grouping observations by measuring their bisimulation distances to the learned prototypes; (2) learning a set of prototypes according to the current cluster assignments. Computing cluster assignments with bisimulation metrics enables CBM to capture task-relevant information, as bisimulation metrics quantify the behavioral similarity between observations. Moreover, CBM encourages the consistency of representations within each group, which facilitates filtering out task-irrelevant information and thus induces robust representations against distractions. An appealing feature is that CBM can achieve sample-efficient representation learning even if multiple distractions exist simultaneously. Experiments demonstrate that CBM significantly improves the sample efficiency of popular visual RL algorithms and achieves state-of-the-art performance in both multiple- and single-distraction settings. The code is available at https://github.com/MIRALab-USTC/RL-CBM.
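The two-step alternation can be sketched as follows, using a simplified bisimulation-style distance that combines reward differences and latent-transition differences; the names, the exact distance, and the prototype update (empty clusters are not handled) are our assumptions, not the authors' implementation.

```python
# Minimal sketch of the CBM-style alternation (simplified; not the authors' code).
# Assumes encoded latents, per-observation rewards r, and predicted next latents z_next.
import numpy as np

def bisim_distance(z_next_a, r_a, z_next_b, r_b, gamma=0.99):
    """Simplified bisimulation-style distance: reward gap + scaled transition gap."""
    return abs(r_a - r_b) + gamma * np.linalg.norm(z_next_a - z_next_b)

def assign_clusters(z_next, r, proto_z_next, proto_r):
    """Step (1): assign each observation to its closest prototype."""
    n, k = len(r), len(proto_r)
    d = np.array([[bisim_distance(z_next[i], r[i], proto_z_next[j], proto_r[j])
                   for j in range(k)] for i in range(n)])
    return d.argmin(axis=1)

def update_prototypes(z_next, r, assignments, k):
    """Step (2): refit prototypes from the current cluster assignments."""
    proto_z_next = np.stack([z_next[assignments == j].mean(axis=0) for j in range(k)])
    proto_r = np.array([r[assignments == j].mean() for j in range(k)])
    return proto_z_next, proto_r
```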
This work introduces, analyzes, and demonstrates an efficient and theoretically sound filtering strategy to ensure the well-conditioning of the least-squares problem solved at each iteration of Anderson acceleration. The filtering strategy consists of two steps: the first controls the length disparity between columns of the least-squares matrix, and the second enforces a lower bound on the angles between subspaces spanned by the columns of that matrix. The combined strategy is shown to control the condition number of the least-squares matrix at each iteration. The method is shown to be effective on a range of problems based on discretizations of partial differential equations, and particularly so for problems where the initial iterate may lie far from the solution and which progress through distinct preasymptotic and asymptotic phases.
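A minimal sketch of such a two-step column filter is given below, operating on the matrix of residual differences used in the Anderson least-squares problem; the tolerances, the ordering (newest column last), and the angle test via an orthogonal remainder are illustrative assumptions rather than the paper's exact criteria.

```python
# Two-step column filter for the Anderson acceleration least-squares matrix F
# (columns = consecutive residual differences, newest column assumed last).
# Tolerances and criteria are illustrative, not the paper's exact choices.
import numpy as np

def filter_columns(F, len_tol=1e6, angle_tol=1e-3):
    """Return sorted indices of columns kept after length and angle filtering."""
    norms = np.linalg.norm(F, axis=0)
    newest = norms[-1]
    # Step 1: drop columns whose length differs too much from the newest column.
    keep = [j for j in range(F.shape[1]) if norms[j] <= len_tol * newest]
    # Step 2: keep a column only if its angle with the span of already-kept columns
    # is bounded below (checked via the norm of its orthogonal component).
    kept, Q = [], np.zeros((F.shape[0], 0))
    for j in reversed(keep):                     # start from the newest column
        f = F[:, j]
        resid = f - Q @ (Q.T @ f)                # component orthogonal to span(Q)
        if np.linalg.norm(resid) >= angle_tol * np.linalg.norm(f):
            kept.append(j)
            Q = np.column_stack([Q, resid / np.linalg.norm(resid)])
    return sorted(kept)
```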
This paper presents $\Psi$-GNN, a novel Graph Neural Network (GNN) approach for solving ubiquitous Poisson PDE problems with mixed boundary conditions. By leveraging the Implicit Layer Theory, $\Psi$-GNN models an ``infinitely'' deep network, thus avoiding the empirical tuning of the number of Message Passing layers required to attain the solution. Its original architecture explicitly takes into account the boundary conditions, a critical prerequisite for physical applications, and is able to adapt to any initially provided solution. $\Psi$-GNN is trained using a ``physics-informed'' loss, and the training process is stable by design and insensitive to initialization. Furthermore, the consistency of the approach is theoretically proven, and its flexibility and generalization efficiency are experimentally demonstrated: the same learned model can accurately handle unstructured meshes of various sizes as well as different boundary conditions. To the best of our knowledge, $\Psi$-GNN is the first physics-informed GNN-based method that can handle various unstructured domains, boundary conditions, and initial solutions while also providing convergence guarantees.
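The implicit-layer idea, iterating a single parametrized propagation step to a fixed point rather than stacking a fixed number of layers, can be sketched generically as below; the operator, features, and stopping rule are placeholders and do not reflect the actual $\Psi$-GNN architecture or its boundary-condition handling.

```python
# Generic sketch of an implicit ("infinitely deep") message-passing layer:
# iterate one propagation step until a fixed point instead of stacking layers.
import numpy as np

def implicit_message_passing(A_hat, X, W, b, tol=1e-6, max_iter=10_000):
    """Fixed-point iteration H = tanh(A_hat @ H @ W + X + b) on node features."""
    H = np.zeros_like(X)
    for _ in range(max_iter):
        H_new = np.tanh(A_hat @ H @ W + X + b)   # one message-passing step
        if np.linalg.norm(H_new - H) < tol * (1.0 + np.linalg.norm(H)):
            return H_new                         # converged fixed point
        H = H_new
    return H
```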
Researchers use recall to evaluate rankings across a variety of retrieval, recommendation, and machine learning tasks. While there is a colloquial interpretation of recall in set-based evaluation, the research community is far from a principled understanding of recall metrics for rankings. This lack of a principled understanding of, or motivation for, recall has led to criticism within the retrieval community questioning whether recall is useful as a measure at all. In this light, we reflect on the measurement of recall in rankings from a formal perspective. Our analysis is composed of three tenets: recall, robustness, and lexicographic evaluation. First, we formally define `recall-orientation' as sensitivity to movement of the bottom-ranked relevant item. Second, we analyze our concept of recall orientation from the perspective of robustness with respect to possible searchers and content providers. Finally, we extend this conceptual and theoretical treatment of recall by developing a practical preference-based evaluation method based on lexicographic comparison. Through extensive empirical analysis across 17 TREC tracks, we establish that our new evaluation method, lexirecall, is correlated with existing recall metrics and exhibits substantially higher discriminative power and stability in the presence of missing labels. Our conceptual, theoretical, and empirical analysis substantially deepens our understanding of recall and motivates its adoption through connections to robustness and fairness.
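An illustrative sketch of a lexicographic, recall-oriented preference between two rankings follows: compare the positions of relevant items starting from the worst-ranked one, breaking ties by moving upward. This is a simplification for intuition, not the exact definition of lexirecall in the paper.

```python
# Illustrative lexicographic, recall-oriented preference between two rankings
# of the same corpus (simplified relative to lexirecall as defined in the paper).

def relevant_positions(ranking, relevant):
    """Ranks (1-based) of relevant items, worst-ranked first."""
    return sorted((i + 1 for i, doc in enumerate(ranking) if doc in relevant),
                  reverse=True)

def lexicographic_preference(ranking_a, ranking_b, relevant):
    """Return 'a', 'b', or 'tie' by comparing relevant-item ranks from the bottom up."""
    for ra, rb in zip(relevant_positions(ranking_a, relevant),
                      relevant_positions(ranking_b, relevant)):
        if ra != rb:                       # smaller rank = better placement
            return 'a' if ra < rb else 'b'
    return 'tie'

# Example: system 'a' places the worst-ranked relevant document higher.
print(lexicographic_preference(['d1', 'd9', 'd3', 'd2'],
                               ['d1', 'd9', 'd2', 'd3'],
                               relevant={'d1', 'd3'}))   # -> 'a'
```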
This paper investigates the inverse source problem with a single propagating mode at multiple frequencies in an acoustic waveguide. The goal is to provide both theoretical justifications and efficient algorithms for imaging extended sources using sampling methods. In contrast to the existing far/near-field operators in sampling methods, which are based on integrals over the space variable, a multi-frequency far-field operator is introduced based on an integral over the frequency variable. This far-field operator is defined so as to incorporate the possibly non-linear dispersion relation, a unique feature of waveguides. The factorization method is deployed to establish a rigorous characterization of the range support, i.e., the support of the source in the direction of wave propagation. A related factorization-based sampling method is also discussed. These sampling methods are shown to be capable of imaging the range support of the source. Numerical examples are provided to illustrate the performance of the sampling methods, including an example that images a complete sound-soft block.
Our goal is to produce methods for observational causal inference that are auditable, easy to troubleshoot, yield accurate treatment effect estimates, and scale to high-dimensional data. We describe an almost-exact matching approach that achieves these goals by (i) learning a distance metric via outcome modeling, (ii) creating matched groups using the distance metric, and (iii) using the matched groups to estimate treatment effects. Our proposed method uses variable importance measurements to construct the distance metric, making it a flexible method that can be adapted to various applications. Concentrating on scalability in the number of potential confounders, we operationalize our approach with LASSO. We derive performance guarantees for settings where LASSO outcome modeling consistently identifies all confounders (importantly, without requiring the linear model to be correctly specified). We also provide experimental results demonstrating the auditability of matches, as well as extensions to more general nonparametric outcome modeling.
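The three steps can be sketched as below with scikit-learn's Lasso supplying the variable importances; the data layout, the weighted nearest-neighbor matching rule, and the within-group mean-difference estimator are illustrative choices rather than the paper's exact algorithm.

```python
# Minimal sketch of LASSO-based almost-exact matching (illustrative, not the
# paper's exact algorithm). X: covariates (n, p), y: outcomes, t: 0/1 treatment.
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_coded_matching(X, y, t, n_neighbors=10):
    """(i) learn a distance metric, (ii) match, (iii) estimate unit-level effects."""
    # (i) Variable importances from a LASSO outcome model define the metric weights.
    w = np.abs(LassoCV(cv=5).fit(np.column_stack([X, t]), y).coef_[:-1])
    # (ii) Matched group per unit: nearest neighbors in the weighted squared metric.
    D = ((X[:, None, :] - X[None, :, :]) ** 2 * w).sum(axis=-1)
    effects = np.full(len(y), np.nan)
    for i in range(len(y)):
        group = np.argsort(D[i])[:n_neighbors]
        treated, control = group[t[group] == 1], group[t[group] == 0]
        # (iii) Effect estimate: within-group treated/control outcome difference.
        if len(treated) and len(control):
            effects[i] = y[treated].mean() - y[control].mean()
    return effects
```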
This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). The thesis contributes to the research areas of causal effect estimation, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction. We present novel and consistent linear and non-linear causal effect estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. Furthermore, we propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
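For orientation, the general K-class estimator referenced above has the textbook form shown below (outcome Y, endogenous regressors X, instruments Z); κ = 0 recovers OLS and κ = 1 recovers TSLS. This is included only for context and is not one of the thesis's new estimators.

```python
# Textbook K-class estimator: beta(kappa) = (X'(I - kappa*M_Z)X)^{-1} X'(I - kappa*M_Z)Y,
# where M_Z = I - Z(Z'Z)^{-1}Z'. kappa = 0 gives OLS, kappa = 1 gives TSLS.
import numpy as np

def k_class_estimator(Y, X, Z, kappa):
    """Compute the K-class estimate of the coefficients of X on Y."""
    n = len(Y)
    P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection onto the instruments
    M = np.eye(n) - kappa * (np.eye(n) - P_Z)      # I - kappa * M_Z
    return np.linalg.solve(X.T @ M @ X, X.T @ M @ Y)
```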