Bilevel optimization, with broad applications in machine learning, has an intricate hierarchical structure. Gradient-based methods have emerged as a common approach to large-scale bilevel problems. However, computing the hyper-gradient involves a Hessian-inverse vector product, which limits efficiency and is widely regarded as a bottleneck. To circumvent the inverse, we construct a sequence of low-dimensional approximate Krylov subspaces with the aid of the Lanczos process. The constructed subspaces dynamically and incrementally approximate the Hessian-inverse vector product at low cost, and thus yield a favorable estimate of the hyper-gradient. Moreover, we propose a provable subspace-based framework for bilevel problems in which one central step is to solve a small tridiagonal linear system. To the best of our knowledge, this is the first time that subspace techniques are incorporated into bilevel optimization. The proposed method not only enjoys an $\mathcal{O}(\epsilon^{-1})$ convergence rate but also demonstrates efficiency on a synthetic problem and two deep learning tasks.
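As a rough illustration of the core idea, the following minimal NumPy sketch (not the paper's implementation) approximates a Hessian-inverse vector product $H^{-1}v$ by running a few Lanczos steps with a Hessian-vector product oracle and then solving the resulting small tridiagonal system; the function names and the toy SPD Hessian are placeholders.

```python
# Minimal sketch: approximate H^{-1} v via k Lanczos steps, which reduce the problem
# to a small k x k tridiagonal solve. H is assumed symmetric positive definite and is
# only accessed through Hessian-vector products.
import numpy as np


def lanczos_inv_vec(hess_vec, v, k):
    """Approximate H^{-1} v in a k-dimensional Krylov subspace.

    hess_vec: callable returning H @ x (Hessian-vector product oracle)
    v: right-hand-side vector; k: subspace dimension
    """
    n = v.size
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    q = v / np.linalg.norm(v)
    q_prev = np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = hess_vec(q)
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        beta[j] = np.linalg.norm(w)
        q_prev, q = q, w / (beta[j] + 1e-16)   # small shift guards against breakdown
    # Galerkin condition gives the tridiagonal system T y = ||v|| e_1, then H^{-1} v ~ Q y.
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    e1 = np.zeros(k)
    e1[0] = np.linalg.norm(v)
    return Q @ np.linalg.solve(T, e1)


# Toy check against a dense solve.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A @ A.T + 50 * np.eye(50)                  # well-conditioned SPD "Hessian"
v = rng.standard_normal(50)
approx = lanczos_inv_vec(lambda x: H @ x, v, k=20)
print(np.linalg.norm(approx - np.linalg.solve(H, v)))
```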
Various iterative eigenvalue solvers have been developed to compute parts of the spectrum of a large sparse matrix, including the power method, Krylov subspace methods, contour integral methods, and preconditioned solvers such as the so-called LOBPCG method. All of these solvers rely on random matrices to determine, e.g., starting vectors that have, with high probability, a non-negligible overlap with the eigenvectors of interest. For this purpose, a safe and common choice is to use unstructured Gaussian random matrices. In this work, we investigate the use of random Khatri-Rao products in eigenvalue solvers. On the one hand, we establish a novel subspace embedding property that provides theoretical justification for the use of such structured random matrices. On the other hand, we highlight the potential algorithmic benefits when solving eigenvalue problems with Kronecker product structure, as they arise frequently from the discretization of eigenvalue problems for differential operators on tensor product domains. In particular, we consider the use of random Khatri-Rao products within a contour integral method and LOBPCG. Numerical experiments indicate that the gains for the contour integral method strongly depend on the ability to efficiently and accurately solve (shifted) matrix equations with low-rank right-hand sides. The flexibility of LOBPCG to directly employ preconditioners makes it easier to benefit from Khatri-Rao product structure, at the expense of having less theoretical justification.
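The following sketch illustrates the kind of structured randomness under discussion, assuming SciPy's khatri_rao and lobpcg routines and a toy Kronecker-sum operator; it is an illustration of the idea, not the paper's setup.

```python
# Hedged sketch: a random Khatri-Rao product (column-wise Kronecker product of two
# Gaussian matrices) used as the starting block for LOBPCG applied to an operator with
# Kronecker-sum structure, e.g. a 2D Laplacian A = L1d (x) I + I (x) L1d.
import numpy as np
import scipy.sparse as sp
from scipy.linalg import khatri_rao
from scipy.sparse.linalg import lobpcg

n = 30
L1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))   # 1D Laplacian stencil
I_n = sp.identity(n)
A = sp.kron(L1d, I_n) + sp.kron(I_n, L1d)               # operator on R^{n^2}

rng = np.random.default_rng(0)
k = 6
# Khatri-Rao starting block: column i is kron(G1[:, i], G2[:, i]); only 2*n*k Gaussian
# entries are needed instead of n^2 * k for an unstructured Gaussian block.
G1 = rng.standard_normal((n, k))
G2 = rng.standard_normal((n, k))
X0 = khatri_rao(G1, G2)

# A preconditioner could be passed directly via the M argument to speed up convergence.
eigvals, _ = lobpcg(A, X0, largest=False, tol=1e-4, maxiter=500)
print(np.sort(eigvals))   # approximations to the smallest eigenvalues
```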
Generative diffusion models apply the concept of Langevin dynamics in physics to machine learning, attracting considerable interest from industrial applications, but a complete picture of their inherent mechanisms is still lacking. In this paper, we provide a transparent physics analysis of diffusion models, deriving the fluctuation theorem, the entropy production, and the Franz-Parisi potential to understand the intrinsic phase transitions discovered recently. Our analysis is rooted in non-equilibrium physics together with concepts from equilibrium physics: both the forward and backward dynamics are treated as Langevin dynamics, and the reverse generative process is treated as statistical inference, in which the time-dependent state variables serve as quenched disorder, as studied in spin-glass theory. This unified principle is expected to guide machine learning practitioners in designing better algorithms and theoretical physicists in linking machine learning to non-equilibrium thermodynamics.
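For concreteness, the standard forward/reverse-time dynamics underlying score-based diffusion models can be written as the following pair of Langevin-type equations (generic notation, not necessarily the conventions of the paper):
\begin{align}
  \mathrm{d}x_t &= f(x_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t
  && \text{(forward noising dynamics)}, \\
  \mathrm{d}x_t &= \left[ f(x_t, t) - g(t)^2\, \nabla_x \log p_t(x_t) \right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{W}_t
  && \text{(reverse-time generative dynamics)},
\end{align}
where $p_t$ denotes the marginal density of the forward process and $\bar{W}_t$ is a reverse-time Wiener process.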
We propose an operator learning approach to accelerate geometric Markov chain Monte Carlo (MCMC) for solving infinite-dimensional Bayesian inverse problems (BIPs). While geometric MCMC employs high-quality proposals that adapt to posterior local geometry, it requires repeated computations of gradients and Hessians of the log-likelihood, which becomes prohibitive when the parameter-to-observable (PtO) map is defined through expensive-to-solve parametric partial differential equations (PDEs). We consider a delayed-acceptance geometric MCMC method driven by a neural operator surrogate of the PtO map, where the proposal exploits fast surrogate predictions of the log-likelihood and, simultaneously, its gradient and Hessian. To achieve a substantial speedup, the surrogate must accurately approximate the PtO map and its Jacobian, which often demands a prohibitively large number of PtO map samples via conventional operator learning methods. In this work, we present an extension of derivative-informed operator learning [O'Leary-Roseberry et al., J. Comput. Phys., 496 (2024)] that uses joint samples of the PtO map and its Jacobian. This leads to derivative-informed neural operator (DINO) surrogates that accurately predict the observables and posterior local geometry at a significantly lower training cost than conventional methods. Cost and error analyses for reduced-basis DINO surrogates are provided. Numerical studies demonstrate that DINO-driven MCMC generates effective posterior samples 3--9 times faster than geometric MCMC and 60--97 times faster than prior geometry-based MCMC. Furthermore, the training cost of DINO surrogates breaks even compared to geometric MCMC after just 10--25 effective posterior samples.
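To make the two-stage structure concrete, here is a minimal sketch of a generic delayed-acceptance Metropolis-Hastings step, in which a cheap surrogate screens proposals before the expensive log-posterior is ever evaluated; the names (log_post_true, log_post_surrogate) are placeholders, and a plain random-walk proposal stands in for the geometric (gradient- and Hessian-informed) proposals used in the paper.

```python
# Generic delayed-acceptance MH step: stage 1 uses the surrogate, stage 2 corrects with
# the expensive model so the chain still targets the true posterior.
import numpy as np

rng = np.random.default_rng(0)


def delayed_acceptance_step(x, log_post_true, log_post_surrogate, step=0.3):
    y = x + step * rng.standard_normal(x.shape)           # symmetric proposal
    # Stage 1: screen with the cheap surrogate.
    a1 = min(1.0, np.exp(log_post_surrogate(y) - log_post_surrogate(x)))
    if rng.uniform() >= a1:
        return x, False                                    # rejected without a true solve
    # Stage 2: correct with the expensive model (Christen-Fox acceptance ratio).
    a2 = min(1.0, np.exp(
        (log_post_true(y) - log_post_true(x))
        - (log_post_surrogate(y) - log_post_surrogate(x))
    ))
    return (y, True) if rng.uniform() < a2 else (x, False)


# Toy usage: true posterior N(0, I), surrogate slightly biased.
log_true = lambda x: -0.5 * np.sum(x ** 2)
log_surr = lambda x: -0.5 * np.sum((x - 0.1) ** 2)
x = np.zeros(2)
for _ in range(1000):
    x, _ = delayed_acceptance_step(x, log_true, log_surr)
print(x)
```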
When modeling scientific and industrial problems, geometries are typically described by explicit boundary representations obtained from computer-aided design software. Unfitted (also known as embedded or immersed) finite element methods offer a significant advantage in dealing with complex geometries, eliminating the need to generate unstructured body-fitted meshes. However, current unfitted finite elements on nonlinear geometries are restricted to implicit (possibly high-order) level set geometries. In this work, we introduce a novel automatic computational pipeline to approximate solutions of partial differential equations on domains defined by explicit nonlinear boundary representations. For the geometrical discretization, we propose a novel algorithm to generate the quadratures for bulk and surface integration on nonlinear polytopes that are required to compute all the terms in unfitted finite element methods. The algorithm relies on a nonlinear triangulation of the boundary; a kd-tree refinement of the surface cells that reduces the nonlinear intersections of surface and background cells to simple cases that are diffeomorphically equivalent to linear intersections; robust polynomial root-finding algorithms; and surface parameterization techniques. We prove the correctness of the proposed algorithm. We have successfully applied this algorithm to simulate partial differential equations with unfitted finite elements on nonlinear domains described by computer-aided design models, demonstrating the robustness of the geometric algorithm and showing the high-order accuracy of the overall method.
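As a toy illustration of one ingredient (root finding to split a cut cell into smooth pieces before applying tensor Gauss quadrature), the following sketch builds a bulk quadrature for the part of a background cell lying below an explicit nonlinear boundary curve; it is a simplified 2D stand-in, not the proposed algorithm.

```python
# Toy cut-cell quadrature on the background cell [0,1]^2 for the region below an
# explicit boundary curve y = gamma(x). Root finding locates where the curve exits
# the cell so the x-range splits into smooth pieces; tensor Gauss rules are then applied.
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.optimize import brentq

gamma = lambda x: 0.4 + 0.8 * x ** 2            # toy explicit boundary representation

# Where the boundary crosses the top face y = 1 of the background cell.
x_star = brentq(lambda x: gamma(x) - 1.0, 0.0, 1.0)

xg, wg = leggauss(8)                            # Gauss-Legendre nodes/weights on [-1, 1]


def cell_integral(f):
    """Integrate f over {(x, y) in [0,1]^2 : y <= gamma(x)}."""
    total = 0.0
    # Piece 1: x in [0, x_star], column height gamma(x); piece 2: full columns of height 1.
    for a, b, height in [(0.0, x_star, gamma), (x_star, 1.0, lambda x: 1.0)]:
        for xi, wx in zip(xg, wg):
            x = 0.5 * (b - a) * xi + 0.5 * (a + b)
            h = height(x)
            for yj, wy in zip(xg, wg):
                y = 0.5 * h * yj + 0.5 * h
                total += 0.25 * (b - a) * h * wx * wy * f(x, y)
    return total


print(cell_integral(lambda x, y: 1.0))          # area of the cut cell
```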
Robust optimisation is a well-established framework for optimising functions in the presence of uncertainty. The goal is to identify a collection of inputs whose outputs are both desirable to the decision maker and robust to the underlying uncertainties in the problem. In this work, we study the multi-objective extension of this problem from a computational standpoint. We observe that the majority of robust multi-objective algorithms rely on two key operations: robustification and scalarisation. Robustification refers to the strategy used to marginalise over the uncertainty in the problem, whilst scalarisation refers to the procedure used to encode the relative importance of each objective. As these operations are not necessarily commutative, the order in which they are performed affects the solutions that are identified and the final decisions that are made. This work aims to give an exposition of the philosophical differences between these two operations and to highlight when one should opt for one ordering over the other. As part of our analysis, we showcase how many existing risk concepts can be easily integrated into the specification and solution of a robust multi-objective optimisation problem. Besides this, we also demonstrate how one can define, in a principled manner, the notion of a robust Pareto front and a robust performance metric based on our robustify-and-scalarise methodology. To illustrate the efficacy of these new ideas, we present two insightful numerical case studies based on real-world data sets.
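A small numerical illustration of the non-commutativity, with made-up objectives, a worst-case robustification, and a weighted-sum scalarisation (all of these choices are placeholders, not the paper's):

```python
# Comparing the two pipelines: scalarise-then-robustify vs robustify-then-scalarise.
# In general they assign different values to a design and hence rank candidates differently.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.5, size=200)          # samples of the uncertain parameter
weights = np.array([0.5, 0.5])


def objectives(x, z):
    """Two toy objectives of design x under uncertainty z (to be minimised)."""
    return np.array([(x - z) ** 2, np.abs(x + z)])


def scalarise_then_robustify(x):
    # Worst case of the weighted sum.
    return max(weights @ objectives(x, z) for z in noise)


def robustify_then_scalarise(x):
    # Weighted sum of the per-objective worst cases.
    worst = np.max([objectives(x, z) for z in noise], axis=0)
    return weights @ worst


for x in [0.0, 0.5, 1.0]:
    print(x, scalarise_then_robustify(x), robustify_then_scalarise(x))
```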
Machine learning surrogates are needed in engineering design and optimization tasks to rapidly emulate computationally expensive physics-based models. In micromechanics problems, the local full-field response variables are desired at microstructural length scales. While there has been a great deal of work on establishing architectures for these tasks, relatively little work has addressed microstructural experimental design strategies. This work demonstrates that intelligent selection of microstructural volume elements for subsequent physics simulations enables the establishment of more accurate surrogate models. There exist two key challenges towards establishing a suitable framework: (1) microstructural feature quantification and (2) the establishment of a criterion that encourages construction of a diverse training data set. Three feature extraction strategies are considered, along with three design criteria. A novel contrastive feature extraction approach is established for automated self-supervised extraction of microstructural summary statistics. Results indicate that, for the problem considered, up to an 8\% improvement in surrogate performance can be achieved using the proposed design and training strategy. Trends indicate this approach may be even more beneficial when scaled towards larger problems. These results demonstrate that the selection of an efficient experimental design is an important consideration when establishing machine learning based surrogate models.
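As one example of a diversity-encouraging design criterion, the following sketch performs greedy max-min (farthest-point) selection in a feature space; the random features stand in for learned (e.g., contrastive) microstructural summary statistics, so this is an illustration rather than the paper's exact strategy.

```python
# Greedy max-min selection: pick the budgeted number of candidate volume elements that
# greedily maximise the minimum pairwise distance in feature space.
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((500, 16))        # candidate VEs x feature dimension


def greedy_maxmin(features, budget):
    selected = [0]                               # arbitrary seed candidate
    dists = np.linalg.norm(features - features[0], axis=1)
    while len(selected) < budget:
        nxt = int(np.argmax(dists))              # farthest point from the current set
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return selected


print(greedy_maxmin(features, budget=10))
```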
Numerical modeling is essential for comprehending intricate physical phenomena in different domains. To handle this complexity, sensitivity analysis, particularly screening, is crucial for identifying influential input parameters. Kernel-based methods, such as the Hilbert-Schmidt Independence Criterion (HSIC), are valuable for analyzing dependencies between inputs and outputs. Moreover, due to the computational expense of such models, metamodels (or surrogate models) are often unavoidable. Implementing metamodels and HSIC requires data from the original model, which leads to the need for space-filling designs. While existing methods like Latin Hypercube Sampling (LHS) are effective for independent variables, incorporating dependence is challenging. This paper introduces a novel LHS variant, Quantization-based LHS, which leverages Voronoi vector quantization to address correlated inputs. The method ensures comprehensive coverage of the stratified variables, improving the distribution across the marginals. The paper outlines expectation estimators based on Quantization-based LHS in various dependency settings and demonstrates their unbiasedness. The method is applied to several models of increasing complexity: first to simple examples that illustrate the theory, then to more complex environmental hydrological models, with known or unknown dependence structures and increasingly many interacting processes and factors. The final application concerns the digital twin of a French vineyard catchment (Beaujolais region), where the goal is to design a vegetative filter strip that reduces water, sediment, and pesticide transfers from the fields to the river. Quantization-based LHS is used to compute HSIC measures and independence tests, demonstrating its usefulness, especially in the context of complex models.
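The following sketch conveys the underlying idea under simplifying assumptions (k-means as the vector quantizer, a toy correlated Gaussian input, and a toy model); it is not the paper's exact construction: stratify a large cloud drawn from the dependent input distribution via Voronoi cells, draw one representative per cell, and form a proportion-weighted expectation estimator.

```python
# Quantization-based stratified sampling for a dependent input distribution (toy sketch).
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

# Correlated inputs: a bivariate Gaussian with strong dependence.
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
cloud = rng.multivariate_normal([0.0, 0.0], cov, size=20000)

n_strata = 50
centroids, labels = kmeans2(cloud, n_strata, minit='++')   # Voronoi quantization

model = lambda x: np.sin(x[:, 0]) + x[:, 1] ** 2            # toy "expensive" model

# One design point per Voronoi cell, weighted by the cell's empirical probability.
design, weights = [], []
for c in range(n_strata):
    members = np.where(labels == c)[0]
    if len(members) == 0:                                   # skip empty cells, if any
        continue
    design.append(cloud[rng.choice(members)])
    weights.append(len(members) / len(cloud))
design = np.array(design)

estimate = np.dot(weights, model(design))
print(estimate, model(cloud).mean())                        # stratified vs crude Monte Carlo
```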
Data-driven, machine learning (ML) models of atomistic interactions are often based on flexible and non-physical functions that map nuanced aspects of atomic arrangements to predictions of energies and forces. As a result, these potentials are only as good as their training data (usually the results of so-called ab initio simulations), and we need to make sure that it contains enough information for a model to become sufficiently accurate, reliable, and transferable. The main challenge stems from the fact that descriptors of chemical environments are often sparse high-dimensional objects without a well-defined continuous metric. Therefore, it is rather unlikely that any ad hoc method of choosing training examples will be indiscriminate, and it is easy to fall into the trap of confirmation bias, where the same narrow and biased sampling is used to generate both the training and test sets. We demonstrate that classical concepts of statistical planning of experiments and optimal design can help to mitigate such problems at a relatively low computational cost. The key feature of the methods we investigate is that they allow us to assess the informativeness of the data (how much the model can be improved by adding or swapping a training example) and to verify whether training is feasible with the current set before obtaining any reference energies and forces -- a so-called off-line approach. In other words, we focus on an approach that is easy to implement and does not require sophisticated frameworks involving automated access to high-performance computing (HPC) resources.
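As a minimal sketch of an off-line informativeness criterion (assuming, for illustration, a model that is linear in some descriptor space; this is not necessarily the criterion studied in the paper), greedy D-optimal selection scores each candidate by its gain in the log-determinant of the information matrix before any reference calculations are run:

```python
# Greedy D-optimal design for a linear(ised) model y ~ Phi(x) w, where Phi collects
# descriptors of chemical environments. No reference energies/forces are needed to
# rank candidates -- only the descriptors themselves.
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((2000, 30))            # candidate configurations x descriptors


def greedy_d_optimal(Phi, budget, ridge=1e-6):
    n, p = Phi.shape
    selected = []
    M = ridge * np.eye(p)                        # regularised information matrix
    for _ in range(budget):
        Minv = np.linalg.inv(M)
        # Gain in log det from adding row phi is log(1 + phi^T M^{-1} phi).
        gains = np.einsum('ij,jk,ik->i', Phi, Minv, Phi)
        gains[selected] = -np.inf                # never pick the same candidate twice
        best = int(np.argmax(gains))
        selected.append(best)
        M = M + np.outer(Phi[best], Phi[best])
    return selected


print(greedy_d_optimal(Phi, budget=40)[:10])
```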
The Bayesian evidence, a crucial ingredient for model selection, is arguably the most important quantity in Bayesian data analysis; at the same time, it is also one of the most difficult to compute. In this paper we present a hierarchical method that leverages a multivariate normalised approximant of the posterior probability density to infer the evidence for a model from a set of posterior samples drawn with an arbitrary sampling scheme.
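In generic notation (not necessarily the paper's), the identity exploited by posterior-approximant evidence estimators is
\begin{equation}
  Z \;=\; \frac{\mathcal{L}(d \mid \theta)\,\pi(\theta)}{p(\theta \mid d)}
    \;\approx\; \frac{\mathcal{L}(d \mid \theta_i)\,\pi(\theta_i)}{q(\theta_i)},
  \qquad \theta_i \sim p(\theta \mid d),
\end{equation}
where $q(\theta)$ is a normalised approximant of the posterior; the per-sample estimates can then be combined hierarchically into a single posterior for $Z$.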
We establish central limit theorems for principal eigenvalues and eigenvectors under a large factor model setting, and develop two-sample tests of both principal eigenvalues and principal eigenvectors. One important application is to detect structural breaks in large factor models. Compared with existing methods for detecting structural breaks, our tests provide unique insights into the source of structural breaks because they can distinguish between individual principal eigenvalues and/or eigenvectors. We demonstrate the application by comparing the principal eigenvalues and principal eigenvectors of S\&P500 Index constituents' daily returns over different years.
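For illustration only (this is not the paper's test statistic), the quantities being compared can be computed along the following lines, with placeholder return arrays of shape (days, assets):

```python
# Leading eigenvalues/eigenvectors of sample covariance matrices from two periods,
# plus simple descriptive comparisons (eigenvalue ratios and eigenvector alignments).
import numpy as np

rng = np.random.default_rng(0)
returns_a = rng.standard_normal((250, 100))      # stand-in for year-1 daily returns
returns_b = rng.standard_normal((250, 100))      # stand-in for year-2 daily returns


def leading_pc(returns, k=3):
    cov = np.cov(returns, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)             # ascending order
    return vals[::-1][:k], vecs[:, ::-1][:, :k]  # top-k eigenvalues/eigenvectors


vals_a, vecs_a = leading_pc(returns_a)
vals_b, vecs_b = leading_pc(returns_b)

print("eigenvalue ratios:", vals_a / vals_b)
print("eigenvector alignment:", np.abs(np.sum(vecs_a * vecs_b, axis=0)))
```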