High-order tensor methods for solving both convex and nonconvex optimization problems have generated significant research interest, leading to algorithms with optimal global rates of convergence and local rates that are faster than Newton's method. On each iteration, these methods require the unconstrained local minimization of a (potentially nonconvex) multivariate polynomial of degree higher than two, constructed using third-order (or higher) derivative information, and regularized by an appropriate power of the step norm. Developing efficient techniques for solving such subproblems is an ongoing topic of research, and this paper addresses the case of the third-order tensor subproblem. We propose the CQR algorithmic framework, for minimizing a nonconvex Cubic multivariate polynomial with Quartic Regularisation, by minimizing a sequence of local quadratic models that incorporate simple cubic and quartic terms. The role of the cubic term is to crudely approximate local tensor information, while the quartic one controls model regularization and progress. We provide necessary and sufficient optimality conditions that fully characterise the global minimizers of these cubic-quartic models. We then turn these conditions into secular equations that can be solved using nonlinear eigenvalue techniques. We show, using our optimality characterisations, that a CQR algorithmic variant has the optimal-order evaluation complexity of $\mathcal{O}(\epsilon^{-3/2})$ when applied to minimizing our quartically-regularised cubic subproblem, which can be further improved in special cases. We propose practical CQR variants that use local tensor information to construct the local cubic-quartic models. We test these variants numerically and observe them to be competitive with ARC and other subproblem solvers on typical instances and even superior on ill-conditioned subproblems with special structure.
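For concreteness, the subproblem and the local model can be sketched as follows; the notation here ($T$ for the third-order tensor, $\sigma$, $\alpha$, $\beta$ for the regularisation and model weights) is ours and only schematic, not necessarily the paper's:

```latex
% AR3-type quartically-regularised cubic subproblem (schematic):
\min_{s \in \mathbb{R}^n} \; m(s) = f + g^{\top}s + \tfrac{1}{2}\,s^{\top}Hs
    + \tfrac{1}{6}\,T[s]^3 + \tfrac{\sigma}{4}\,\|s\|^4
% A cubic-quartic local model of the CQR kind: a quadratic plus
% simple (scalar) cubic and quartic terms in the step norm:
\min_{s \in \mathbb{R}^n} \; q(s) = f + g^{\top}s + \tfrac{1}{2}\,s^{\top}Hs
    + \tfrac{\alpha}{3}\,\|s\|^3 + \tfrac{\beta}{4}\,\|s\|^4
```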
Consistency models, which were proposed to mitigate the high computational overhead during the sampling phase of diffusion models, facilitate single-step sampling while attaining state-of-the-art empirical performance. When integrated into the training phase, consistency models attempt to train a sequence of consistency functions capable of mapping any point at any time step of the diffusion process to its starting point. Despite the empirical success, a comprehensive theoretical understanding of consistency training remains elusive. This paper takes a first step towards establishing theoretical underpinnings for consistency models. We demonstrate that, in order to generate samples within $\varepsilon$ proximity to the target in distribution (measured by some Wasserstein metric), it suffices for the number of steps in consistency learning to exceed the order of $d^{5/2}/\varepsilon$, with $d$ the data dimension. Our theory offers rigorous insights into the validity and efficacy of consistency models, illuminating their utility in downstream inference tasks.
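As a reminder of the object being trained (the standard formulation from the consistency-models literature, with our notation $[t_{\min}, T]$ for the time horizon to avoid clashing with the accuracy $\varepsilon$ above):

```latex
% A consistency function f maps any point on a diffusion trajectory
% back to the trajectory's starting point, so it must satisfy
f(x_t, t) = f(x_{t'}, t')
  \quad \text{for all } t, t' \in [t_{\min}, T] \text{ on the same trajectory},
% together with the boundary condition
f(x_{t_{\min}}, t_{\min}) = x_{t_{\min}}.
```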
We present a study on asymptotically compatible Galerkin discretizations for a class of parametrized nonlinear variational problems. The abstract analytical framework is based on variational convergence, or Gamma-convergence. We demonstrate the broad applicability of the theoretical framework by developing asymptotically compatible finite element discretizations of some representative nonlinear nonlocal variational problems on a bounded domain. These include nonlocal nonlinear problems with classically-defined, local boundary constraints through heterogeneous localization at the boundary, as well as nonlocal problems posed on parameter-dependent domains.
Neural operators have been explored as surrogate models for simulating physical systems to overcome the limitations of traditional partial differential equation (PDE) solvers. However, most existing operator learning methods assume that the data originate from a single physical mechanism, limiting their applicability and performance in more realistic scenarios. To address this, we propose the Physical Invariant Attention Neural Operator (PIANO) to decipher and integrate the physical invariants (PI) for operator learning from PDE series governed by various physical mechanisms. PIANO employs self-supervised learning to extract physical invariant embeddings and an attention mechanism to integrate them into dynamic convolutional layers. Compared to existing techniques, PIANO reduces the relative error by 13.6\%-82.2\% on PDE forecasting tasks across varying coefficients, forces, or boundary conditions. Additionally, varied downstream tasks reveal that the PI embeddings deciphered by PIANO align well with the underlying invariants in the PDE systems, verifying the physical significance of PIANO. The source code will be publicly available at: //github.com/optray/PIANO.
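One way to read "attention integrating PI embeddings into dynamic convolutions" is sketched below; this is our illustration, not PIANO's actual code, and the names (`DynamicConv`, `n_kernels`, `pi_embed`) are ours:

```python
# Sketch: a self-supervised encoder (not shown) produces a physical-invariant
# (PI) embedding per sample; attention over a bank of convolution kernels
# turns that embedding into a per-sample dynamic convolution.
import torch
import torch.nn as nn

class DynamicConv(nn.Module):
    def __init__(self, channels: int, n_kernels: int = 4, embed_dim: int = 32):
        super().__init__()
        # Bank of candidate kernels; the PI embedding decides how to mix them.
        self.kernels = nn.Parameter(torch.randn(n_kernels, channels, channels, 3, 3) * 0.02)
        self.attn = nn.Linear(embed_dim, n_kernels)  # attention logits over the bank

    def forward(self, u: torch.Tensor, pi_embed: torch.Tensor) -> torch.Tensor:
        # u: (batch, channels, H, W); pi_embed: (batch, embed_dim)
        weights = torch.softmax(self.attn(pi_embed), dim=-1)          # (batch, n_kernels)
        mixed = torch.einsum("bk,koihw->boihw", weights, self.kernels)
        # Apply a different mixed kernel to each sample via grouped convolution.
        b, c, h, w = u.shape
        out = nn.functional.conv2d(
            u.reshape(1, b * c, h, w),
            mixed.reshape(b * c, c, 3, 3),
            padding=1, groups=b,
        )
        return out.reshape(b, c, h, w)
```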
We present a machine learning framework capable of consistently inferring mathematical expressions of hyperelastic energy functionals for incompressible materials from sparse experimental data and physical laws. To achieve this goal, we propose a polyconvex neural additive model (PNAM) that enables us to express the hyperelastic model in a learnable feature space while enforcing polyconvexity. An upshot of this feature space obtained via the PNAM is that (1) it is spanned by a set of univariate basis functions that can be re-parametrized with a more complex mathematical form, and (2) the resultant elasticity model is guaranteed to fulfill polyconvexity, which ensures that the acoustic tensor remains elliptic for any deformation. To further improve interpretability, we use genetic programming to convert each univariate basis function into a compact mathematical expression. The resultant multivariate mathematical models obtained from this proposed framework are not only more interpretable but are also proven to fulfill physical laws. By controlling the compactness of the learned symbolic form, the machine learning-generated mathematical model also requires fewer arithmetic operations than its deep neural network counterparts during deployment. This latter attribute is crucial for large-scale simulations, where the constitutive responses of every integration point must be updated within each incremental time step. We compare our proposed model discovery framework against other state-of-the-art alternatives to assess the robustness and efficiency of the training algorithms and examine the trade-off between interpretability, accuracy, and precision of the learned symbolic hyperelastic models obtained from different approaches. Our numerical results suggest that our approach extrapolates well outside the training data regime due to the precise incorporation of physics-based knowledge.
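A minimal sketch of the additive-model idea, under our own assumptions (one convex, non-decreasing univariate sub-network per polyconvexity-compatible invariant, ICNN-style non-negative weights); the class names and widths are illustrative, not the paper's implementation:

```python
# Energy = sum of univariate sub-networks, one per invariant (e.g. I1, I2, J).
# Non-negative weights keep each sub-network convex and non-decreasing, which
# (for suitable invariants) is the standard route to a polyconvex energy.
import torch
import torch.nn as nn

class ConvexUnivariate(nn.Module):
    """Convex, non-decreasing scalar map x -> psi(x)."""
    def __init__(self, width: int = 16):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(width, 1) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(width))
        self.w2 = nn.Parameter(torch.randn(1, width) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # softplus(w) >= 0 preserves convexity/monotonicity of the composition
        h = nn.functional.softplus(x @ nn.functional.softplus(self.w1).T + self.b1)
        return h @ nn.functional.softplus(self.w2).T

class PolyconvexAdditiveEnergy(nn.Module):
    def __init__(self, n_invariants: int = 3):
        super().__init__()
        self.terms = nn.ModuleList(ConvexUnivariate() for _ in range(n_invariants))

    def forward(self, invariants: torch.Tensor) -> torch.Tensor:
        # invariants: (batch, n_invariants) -> energy: (batch, 1)
        return sum(f(invariants[:, i : i + 1]) for i, f in enumerate(self.terms))
```

Each learned univariate term is then a one-dimensional curve, which is what makes the subsequent genetic-programming step (fitting a compact symbolic expression to each curve) tractable.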
In the present work, the applicability of physics-augmented neural network (PANN) constitutive models for complex electro-elastic finite element analysis is demonstrated. For the investigations, PANN models for electro-elastic material behavior at finite deformations are calibrated to different synthetically generated datasets, including an analytical isotropic potential, a homogenised rank-one laminate, and a homogenised metamaterial with a spherical inclusion. Subsequently, boundary value problems inspired by engineering applications of composite electro-elastic materials are considered. Scenarios with large electrically induced deformations and instabilities are particularly challenging and thus necessitate extensive investigations of the PANN constitutive models in the context of finite element analyses. First, excellent prediction quality is required of the model for the very general load cases occurring in the simulations. Furthermore, the simulation of large deformations and instabilities poses challenges for the stability of the numerical solver, which is closely related to the constitutive model. In all cases studied, the PANN models yield excellent prediction quality and stable numerical behavior, even in highly nonlinear scenarios. This can be traced back to the PANN models' excellent performance in learning both the first and second derivatives of the ground-truth electro-elastic potentials, even though they are calibrated only on the first derivatives. Overall, this work demonstrates the applicability of PANN constitutive models for the efficient and robust simulation of engineering applications of composite electro-elastic materials.
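The calibration idea behind potential-based models of this kind can be sketched as follows; this is our illustration (sign conventions and names such as `psi_net`, `P_data`, `D_data` are assumptions), not the paper's code:

```python
# A network predicts a scalar potential psi(F, E); the loss is placed on its
# *first* derivatives, obtained by automatic differentiation: the stress
# P = d psi / d F and the electric displacement D = -d psi / d E.
import torch

def derivative_loss(psi_net, F, E, P_data, D_data):
    F = F.requires_grad_(True)
    E = E.requires_grad_(True)
    psi = psi_net(F, E).sum()  # scalar potential, summed over the batch
    dpsi_dF, dpsi_dE = torch.autograd.grad(psi, (F, E), create_graph=True)
    # Supervise only the first derivatives; the second derivatives (the
    # tangents the FE solver needs) are never supervised directly.
    P_pred, D_pred = dpsi_dF, -dpsi_dE
    return ((P_pred - P_data) ** 2).mean() + ((D_pred - D_data) ** 2).mean()
```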
We introduce a novel continual learning method based on multifidelity deep neural networks. This method learns the correlation between the output of previously trained models and the desired output of the model on the current training dataset, limiting catastrophic forgetting. On its own, the multifidelity continual learning method shows robust results that limit forgetting across several datasets. Additionally, we show that the multifidelity method can be combined with existing continual learning methods, including replay and memory aware synapses, to further limit catastrophic forgetting. The proposed continual learning method is especially suited for physical problems where the data satisfy the same physical laws on each domain, or for physics-informed neural networks, because in these cases we expect a strong correlation between the output of the previous model and that of the model on the current training domain.
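A minimal sketch of the multifidelity composition, under our assumptions (the frozen previous model acts as the "low-fidelity" source; the correlation is modeled as a linear part plus a learned nonlinear correction, as in standard multifidelity DNNs); names like `MultifidelityHead` are ours:

```python
import torch
import torch.nn as nn

class MultifidelityHead(nn.Module):
    def __init__(self, prev_model: nn.Module, in_dim: int, out_dim: int):
        super().__init__()
        self.prev = prev_model.eval()      # frozen "low-fidelity" model
        for p in self.prev.parameters():
            p.requires_grad_(False)
        self.alpha = nn.Parameter(torch.ones(1))   # linear correlation term
        self.correction = nn.Sequential(            # nonlinear correlation term
            nn.Linear(in_dim + out_dim, 64), nn.Tanh(), nn.Linear(64, out_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            y_low = self.prev(x)           # previous model's prediction
        # New prediction = scaled old output + learned correction of it.
        return self.alpha * y_low + self.correction(torch.cat([x, y_low], dim=-1))
```

Only `alpha` and `correction` are trained on the new domain, so the previous model's knowledge is preserved by construction rather than by rehearsal alone.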
The comparison of frequency distributions is a common statistical task with broad applications and a long history of methodological development. However, existing measures do not quantify the magnitude and direction by which one distribution is shifted relative to another. In the present study, we define distributional shift (DS) as the concentration of frequencies away from the greatest discrete class, e.g., a histogram's right-most bin. We derive a measure of DS based on the sum of cumulative frequencies, intuitively quantifying shift as a statistical moment. We then define relative distributional shift (RDS) as the difference in DS between distributions. Using simulated random sampling, we demonstrate that RDS is strongly related to measures commonly used to compare frequency distributions. Focusing on a specific use case, i.e., simulated healthcare Evaluation and Management coding profiles, we show how RDS can be used to examine many pairs of empirical and expected distributions via shift-significance plots. In comparison to other measures, RDS has the unique advantage of being a signed (directional) measure based on a simple difference in an intuitive property.
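A small sketch of one plausible reading of these definitions (the exact normalization is our assumption, not the paper's): mass piled in low classes makes the cumulative sums rise early, giving a large DS; mass in the greatest class gives a small DS, and RDS is the signed difference.

```python
import numpy as np

def distributional_shift(freqs: np.ndarray) -> float:
    p = freqs / freqs.sum()   # relative frequencies per discrete class
    cum = np.cumsum(p)        # cumulative frequencies; the last entry is 1
    k = len(p)
    # Rescaled so DS = 0 when all mass sits in the greatest (last) class
    # and DS = 1 when all mass sits in the first class.
    return float((cum.sum() - 1.0) / (k - 1.0))

def relative_distributional_shift(f_a: np.ndarray, f_b: np.ndarray) -> float:
    # Signed: positive means f_a is shifted further from the greatest class.
    return distributional_shift(f_a) - distributional_shift(f_b)

# Example over 5 classes: mass concentrated low vs. high.
print(distributional_shift(np.array([10, 4, 2, 1, 1])))  # close to 1
print(distributional_shift(np.array([1, 1, 2, 4, 10])))  # close to 0
```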
The deformed energy method has been shown to be a good option for dimensional synthesis of mechanisms. In this paper, some new features are introduced into this approach. First, constraints fixing the dimensions of certain links are introduced in the error function of the synthesis problem. Second, requirements on distances between specific nodes are included in the error function for the analysis of the deformed position problem. Both the overall synthesis error function and the inner analysis error function are optimized using a Sequential Quadratic Programming (SQP) approach. This also reduces the probability of branch or circuit defects. For the inner function, analytical derivatives are used, while in the synthesis optimization approximate derivatives have been introduced. Furthermore, the constraints are analyzed under two formulations: the Euclidean distance, and an alternative formulation that uses its square. The latter is often used in kinematics and simplifies the computation of derivatives. Some examples are provided to show the convergence order of the error function and the fulfilment of the constraints in both formulations under different topological situations and achieved energy levels.
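To see why the squared formulation simplifies the derivatives the SQP solver needs, compare the two constraint gradients for nodes $p_i$, $p_j$ and target distance $d$ (a generic illustration, not the paper's code):

```python
# Euclidean form:  c = ||p_i - p_j|| - d,   grad wrt p_i = (p_i - p_j)/||p_i - p_j||
#                  (involves a norm in the denominator; singular as p_i -> p_j)
# Squared form:    c = ||p_i - p_j||^2 - d^2,  grad wrt p_i = 2 (p_i - p_j)
#                  (purely polynomial; smooth everywhere)
import numpy as np

def euclidean_constraint(pi, pj, d):
    r = pi - pj
    n = np.linalg.norm(r)
    return n - d, r / n            # value, gradient wrt pi (undefined at n = 0)

def squared_constraint(pi, pj, d):
    r = pi - pj
    return r @ r - d * d, 2.0 * r  # value, gradient wrt pi (smooth everywhere)
```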
Multi-fidelity models provide a framework for integrating computational models of varying complexity, allowing for accurate predictions while optimizing computational resources. These models are especially beneficial when acquiring high-accuracy data is costly or computationally intensive. This review offers a comprehensive analysis of multi-fidelity models, focusing on their applications in scientific and engineering fields, particularly in optimization and uncertainty quantification. It classifies publications on multi-fidelity modeling according to several criteria, including application area, surrogate model selection, types of fidelity, combination methods and year of publication. The study investigates techniques for combining different fidelity levels, with an emphasis on multi-fidelity surrogate models. This work discusses reproducibility, open-sourcing methodologies and benchmarking procedures to promote transparency. The manuscript also includes educational toy problems to enhance understanding. Additionally, this paper outlines best practices for presenting multi-fidelity-related savings in a standardized, succinct and yet thorough manner. The review concludes by examining current trends in multi-fidelity modeling, including emerging techniques, recent advancements, and promising research directions.
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
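For context, the classical weight-based bound that prediction-based bounds improve upon is the following standard result (Xu and Raginsky; not this paper's bound), for a loss that is $\sigma$-subgaussian:

```latex
\bigl|\mathbb{E}[\mathrm{gen}(S, W)]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)},
% where W is the output of the training algorithm and S the n-sample
% training set. For a deterministic algorithm, I(W; S) can be infinite;
% bounds based on the information in the predictions avoid this failure mode.
```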