In materials science, models are derived to predict emergent material properties (e.g. elasticity, strength, conductivity) and their relation to processing conditions. A major drawback is the calibration of model parameters that depend on the processing conditions. Currently, these parameters must be optimized to fit measured data, since their relation to processing conditions (e.g. deformation temperature, strain rate) is not fully understood. We present a new approach, based on genetic programming, that identifies the functional dependency of calibration parameters on processing conditions. We propose two methods, an explicit and an implicit one, to identify these dependencies and generate short, interpretable expressions. The approach is used to extend a physics-based constitutive model for deformation processes. This constitutive model operates with internal material variables, such as dislocation density, and contains a number of parameters, among them three calibration parameters. The derived expressions extend the constitutive model and replace the calibration parameters, thus enabling interpolation between different processing conditions. Our results show that the implicit method is computationally more expensive than the explicit approach but also produces significantly better results.
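As a minimal sketch of the explicit idea, one can fit previously optimized calibration-parameter values as a symbolic function of the processing conditions via genetic programming. The abstract names no implementation; the gplearn library, the toy data, and the choice of conditions (temperature and strain rate) below are assumptions for illustration only.

```python
# Minimal sketch of the "explicit" idea: symbolic regression of a calibration
# parameter as a function of processing conditions. The abstract names no
# library; gplearn and the toy data below are assumptions for illustration.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
# Hypothetical processing conditions: deformation temperature [K], strain rate [1/s]
T = rng.uniform(600.0, 900.0, 200)
eps_dot = rng.uniform(0.01, 10.0, 200)
X = np.column_stack([T, np.log(eps_dot)])

# Stand-in for calibration-parameter values fitted per condition beforehand
y = 1e-3 * T - 0.05 * np.log(eps_dot) + rng.normal(0.0, 0.01, 200)

sr = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul", "div", "log"),
                       parsimony_coefficient=0.001, random_state=0)
sr.fit(X, y)
print(sr._program)  # short, interpretable expression replacing the parameter
```

The fitted expression can then be substituted into the constitutive model in place of the calibration parameter, which is what allows interpolation across processing conditions.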
Physically inspired latent force models offer an interpretable alternative to purely data-driven tools for inference in dynamical systems. They carry the structure of differential equations and the flexibility of Gaussian processes, yielding interpretable parameters and dynamics-imposed latent functions. However, the existing inference techniques associated with these models rely on the exact computation of posterior kernel terms that are seldom available in analytical form. Most applications relevant to practitioners, such as Hill equations or diffusion equations, are hence intractable. In this paper, we overcome these computational problems by proposing a variational solution to a general class of non-linear and parabolic partial differential equation latent force models. Further, we show that a neural operator approach can scale our model to thousands of instances, enabling fast, distributed computation. We demonstrate the efficacy and flexibility of our framework by achieving competitive performance on several tasks where the kernels are of varying degrees of tractability.
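For context (generic notation, not taken from the paper), a standard first-order latent force model couples a linear ODE with Gaussian-process forcing,

$$\frac{\mathrm{d}x_q(t)}{\mathrm{d}t} + D_q\,x_q(t) \;=\; B_q + \sum_{r=1}^{R} S_{qr}\,u_r(t), \qquad u_r \sim \mathcal{GP}\bigl(0,\, k_r(t,t')\bigr),$$

and because the operator is linear, the outputs $x_q$ are themselves Gaussian processes whose posterior kernel terms are available in closed form. It is exactly this closed form that is lost for non-linear or parabolic PDE operators such as the Hill and diffusion equations mentioned above, which is the gap the proposed variational formulation is meant to fill.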
The numerical optimization of an electrical machine entails computationally intensive and time-consuming magneto-static finite element (FE) simulations. Generally, these FE simulations involve varying input geometry, electrical, and material parameters of the electrical machine. The result of an FE simulation characterizes the electromagnetic behavior of the electrical machine. It usually includes nonlinear iron losses as well as the electromagnetic torque and flux at different time-steps of an electrical cycle at each operating point (varying electrical input phase current and control angle). In this paper, we present a novel data-driven deep learning (DL) approach to approximate the electromagnetic behavior of an electrical machine by predicting intermediate measures: the nonlinear iron losses and a non-negligible fraction ($\frac{1}{6}$ of a whole electrical period) of the electromagnetic torque and flux at different time-steps for each operating point. The remaining time-steps of the electromagnetic flux and torque for an electrical cycle are estimated by exploiting the magnetic state symmetry of the electrical machine. These calculations, along with the system parameters, are then fed as input to physics-based analytical models to estimate characteristic maps and key performance indicators (KPIs) such as material cost, maximum torque, power, torque ripple, etc. The key idea is to train the proposed multi-branch deep neural network (DNN) step by step on a large volume of stored FE data in a supervised manner. Preliminary results show that the predictions of the intermediate measures and the subsequent computations of the KPIs are close to the ground truth for new machine designs in the input design space. Finally, a quantitative analysis confirms that the hybrid approach is more accurate than the existing DNN-based direct prediction of KPIs, which skips the electromagnetic calculations.
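A hypothetical sketch of such a multi-branch network is given below; the layer widths, time resolution, and branch structure are assumptions rather than the paper's architecture. A shared trunk encodes the design parameters plus the operating point, and separate branches predict the iron losses and the torque/flux waveforms over one sixth of the electrical period.

```python
# Hypothetical multi-branch surrogate (sizes and structure are assumptions):
# shared trunk over design parameters + operating point, separate heads for
# iron losses and the torque/flux waveforms over T/6 of the electrical period.
import torch
import torch.nn as nn

class MultiBranchSurrogate(nn.Module):
    def __init__(self, n_inputs=40, n_steps=30):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_inputs, 256), nn.ReLU(),
                                   nn.Linear(256, 256), nn.ReLU())
        self.iron_loss = nn.Linear(256, 1)       # nonlinear iron losses
        self.torque = nn.Linear(256, n_steps)    # torque over T/6
        self.flux = nn.Linear(256, 2 * n_steps)  # d/q flux over T/6

    def forward(self, x):
        h = self.trunk(x)
        return self.iron_loss(h), self.torque(h), self.flux(h)

model = MultiBranchSurrogate()
x = torch.randn(8, 40)   # design parameters + operating point (current, angle)
losses, torque_sixth, flux_sixth = model(x)
# The remaining 5/6 of the electrical cycle would be reconstructed from these
# waveforms via the machine's magnetic-state symmetry, then passed with the
# system parameters to the analytical post-processing that yields the KPIs.
```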
The PGAS model is well suited for executing irregular applications on cluster-based systems, due to its efficient support for short, one-sided messages. However, PGAS applications currently face two major limitations. The first relates to scalability: despite the availability of APIs that support non-blocking operations in special cases, many PGAS operations on remote locations are synchronous by default, which can lead to long-latency stalls and poor scalability. The second relates to productivity: while it is simpler for the developer to express all communications at the fine granularity that is natural to the application, experiments have shown that such a natural expression results in performance that is 20x slower than more efficient but less productive code that requires manual message aggregation and termination detection. In this paper, we introduce a new programming system for PGAS applications in which point-to-point remote operations can be expressed as fine-grained asynchronous actor messages. In this approach, the programmer does not need to worry about programming complexities related to message aggregation and termination detection. Our approach can also be viewed as extending the classical Bulk Synchronous Parallelism model with fine-grained asynchronous communications within a phase or superstep. We believe that our approach offers a desirable point in the productivity-performance space for PGAS applications, with more scalable performance and higher productivity relative to past approaches. Specifically, for seven irregular mini-applications from the Bale benchmark suite executed on 2048 cores of the NERSC Cori system, our approach shows geometric-mean performance improvements of >=20x relative to standard PGAS versions (UPC and OpenSHMEM) while maintaining comparable productivity to those versions.
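As a purely conceptual, single-process analogue (not the paper's PGAS runtime, which targets UPC/OpenSHMEM-class systems), the sketch below mimics one superstep in which ranks issue many fine-grained, one-sided messages that are buffered and handled at the destination, with the end-of-superstep drain playing the role of termination detection plus barrier. The histogram pattern loosely mirrors one of the Bale mini-applications.

```python
# Conceptual single-process analogue of fine-grained asynchronous actor
# messaging inside a BSP superstep; ranks, mailboxes, and the workload are
# placeholders for illustration only.
from collections import deque, defaultdict

class Rank:
    def __init__(self, rank_id):
        self.id = rank_id
        self.mailbox = deque()
        self.histogram = defaultdict(int)

    def send(self, dest, msg):
        # Fine-grained asynchronous send: enqueue at the destination and
        # return immediately; the sender never waits for a reply.
        dest.mailbox.append(msg)

    def drain(self):
        # Message handler, run at the destination when messages are delivered.
        while self.mailbox:
            self.histogram[self.mailbox.popleft()] += 1

world = [Rank(i) for i in range(4)]
local_keys = {r.id: range(r.id, 40, 3) for r in world}

# One superstep: every rank sends each key to its owning rank; draining all
# mailboxes stands in for termination detection and the superstep barrier.
for r in world:
    for key in local_keys[r.id]:
        r.send(world[key % len(world)], key)
for r in world:
    r.drain()

print(dict(world[0].histogram))   # counts of keys owned by rank 0
```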
Error-bounded lossy compression is becoming an indispensable technique for the success of today's scientific projects, which produce vast volumes of data during simulations or instrument data acquisition. Not only can it significantly reduce data size, but it can also control the compression error according to user-specified error bounds. Autoencoder (AE) models have been widely used in image compression, but few AE-based compression approaches support error-bounding features, which scientific applications require. To address this issue, we explore using convolutional autoencoders to improve error-bounded lossy compression for scientific data, with the following three key contributions. (1) We provide an in-depth investigation of the characteristics of various autoencoder models and develop an error-bounded autoencoder-based framework based on the SZ model. (2) We optimize the compression quality of the main stages in our designed AE-based error-bounded compression framework, fine-tuning the block sizes and latent sizes and also optimizing the compression efficiency of the latent vectors. (3) We evaluate our proposed solution on five real-world scientific datasets and compare it with six other related works. Experiments show that our solution exhibits a very competitive compression quality among all the compressors in our tests. In absolute terms, it obtains a much better compression quality (100% ~ 800% improvement in compression ratio at the same data distortion) than SZ2.1 and ZFP in cases with a high compression ratio.
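The error-bounding mechanism can be sketched as an SZ-style residual correction on top of the autoencoder's reconstruction: quantizing the residual into bins of width twice the error bound guarantees pointwise compliance. The code below is a minimal numpy illustration under that assumption, with a synthetic signal standing in for real scientific data and a perturbed copy standing in for the convolutional AE's output.

```python
# Sketch of an SZ-style error-bounding stage on top of an AE prediction;
# `ae_reconstruction` stands in for the convolutional autoencoder's output.
import numpy as np

def error_bounded_encode(data, ae_reconstruction, eps):
    residual = data - ae_reconstruction
    # Quantize residuals into bins of width 2*eps so the decoded value is
    # guaranteed to lie within eps of the original.
    codes = np.round(residual / (2.0 * eps)).astype(np.int32)
    return codes  # entropy-coded / losslessly compressed in a full pipeline

def error_bounded_decode(codes, ae_reconstruction, eps):
    return ae_reconstruction + codes * (2.0 * eps)

rng = np.random.default_rng(1)
data = np.sin(np.linspace(0, 8 * np.pi, 4096)) + 0.05 * rng.standard_normal(4096)
recon = data + 0.1 * rng.standard_normal(4096)   # imperfect AE reconstruction
eps = 1e-2
decoded = error_bounded_decode(error_bounded_encode(data, recon, eps), recon, eps)
assert np.max(np.abs(decoded - data)) <= eps + 1e-12   # bound is respected
```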
Running machine learning algorithms on large and rapidly growing volumes of data is often computationally expensive. One common trick to reduce the size of a data set, and thus the computational cost of machine learning algorithms, is \emph{probability sampling}: it creates a sampled data set by including each data point from the original data set with a known probability. Although the benefit of running machine learning algorithms on the reduced data set is obvious, one major concern is that the performance of the solution obtained from the samples might be much worse than that of the optimal solution for the full data set. In this paper, we examine the performance loss caused by probability sampling in the context of adaptive submodular maximization. We consider a simple probability sampling method that selects each data point with probability at least $r\in[0,1]$; setting $r=1$ reduces our problem to finding a solution based on the original full data set. We define the sampling gap as the largest ratio between the optimal solution obtained from the full data set and the optimal solution obtained from the samples, over independence systems. Our main contribution is to show that if the sampling probability of each data point is at least $r$ and the utility function is policywise submodular, then the sampling gap is both upper bounded and lower bounded by $1/r$, i.e., it is exactly $1/r$. We show that policywise submodularity holds in a wide range of real-world applications, including pool-based active learning and adaptive viral marketing.
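The sampling step itself is simple; a minimal numpy sketch with a placeholder data set is shown below. The substance of the result above is that, under the policywise-submodularity condition, the adaptive-submodular objective achievable from such a sample is within a factor $1/r$ of what the full data set allows.

```python
# Minimal sketch of the probability-sampling step described above: each data
# point is kept independently with probability at least r (exactly r here).
import numpy as np

def probability_sample(data, r, rng):
    keep = rng.random(len(data)) < r
    return data[keep]

rng = np.random.default_rng(0)
data = np.arange(100_000)                      # placeholder data set
sample = probability_sample(data, r=0.25, rng=rng)
print(len(sample) / len(data))                 # roughly 0.25: downstream cost
# shrinks accordingly, while (per the abstract's result, assuming policywise
# submodularity) the achievable objective is within a factor 1/r of optimal.
```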
Conventional magneto-static finite element analysis of electrical machine models is time-consuming and computationally expensive. Since each machine topology has a distinct set of parameters, design optimization is commonly performed for each topology independently. This paper presents a novel method for predicting Key Performance Indicators (KPIs) of differently parameterized electrical machine topologies at the same time by mapping the high-dimensional integrated design parameters into a lower-dimensional latent space using a variational autoencoder. After training, the decoder and a multi-layer neural network operate on this latent space as meta-models for sampling new designs and predicting the associated KPIs, respectively. This enables parameter-based concurrent multi-topology optimization.
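A hypothetical architecture sketch of such a meta-model is shown below; the layer sizes, dimensions, and joint objective are assumptions, not taken from the paper. A VAE compresses the integrated design parameters, and a KPI regressor is attached to the shared latent space.

```python
# Hypothetical sketch (sizes and structure are assumptions): a VAE over
# integrated design parameters with a KPI regressor on the latent space.
import torch
import torch.nn as nn

class DesignVAEMetaModel(nn.Module):
    def __init__(self, n_params=60, latent_dim=8, n_kpis=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_params, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_params))
        self.kpi_head = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                      nn.Linear(64, n_kpis))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), self.kpi_head(z), mu, logvar

model = DesignVAEMetaModel()
x = torch.randn(16, 60)                 # a batch of normalized design vectors
recon, kpis, mu, logvar = model(x)
# After training, decoder(z) samples new designs and kpi_head(z) predicts
# their KPIs, enabling optimization directly in the shared latent space.
```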
A toric code, introduced by Hansen to extend the Reed-Solomon code as a $k$-dimensional subspace of $\mathbb{F}_q^n$, is determined by a toric variety or its associated integral convex polytope $P \subseteq [0,q-2]^n$, where $k=|P \cap \mathbb{Z}^n|$ (the number of integer lattice points of $P$). Two parameters determine the quality of a code: the information rate, which measures how much information is carried by each symbol of a codeword, and the relative minimum distance, which measures how many errors can be corrected relative to the length of a codeword. Soprunov and Soprunova defined a good infinite family of codes to be a sequence of codes of unbounded polytope dimension such that neither the corresponding information rates nor the relative minimum distances go to 0 in the limit. We examine different ways of constructing families of codes by considering polytope operations such as the join and direct sum. In doing so, we give conditions under which no good family can exist, and strong evidence that there is no such good family of codes.
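To make the two parameters concrete, using standard toric-code conventions rather than anything specific to this paper (and writing $m$ for the polytope dimension so it does not collide with the block length $n$): for $P \subseteq [0,q-2]^m$,

$$n = (q-1)^m, \qquad k = |P \cap \mathbb{Z}^m|, \qquad R(C_P) = \frac{k}{n}, \qquad \delta(C_P) = \frac{d(C_P)}{n},$$

and a good infinite family in the sense of Soprunov and Soprunova is a sequence of such codes with $m \to \infty$ for which both $\liminf R(C_{P_m}) > 0$ and $\liminf \delta(C_{P_m}) > 0$.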
A method of Sequential Log-Convex Programming (SLCP) is constructed that exploits the log-convex structure present in many engineering design problems. The mathematical structure of Geometric Programming (GP) is combined with the ability of Sequential Quadratic Programming (SQP) to accommodate a wide range of objective and constraint functions, resulting in a practical algorithm that can be adopted with little to no modification of existing design practices. Three test problems are considered to demonstrate the SLCP algorithm, comparing it with SQP and the modified Logspace Sequential Quadratic Programming (LSQP). In these cases, SLCP shows up to a 77% reduction in the number of iterations compared to SQP, and an 11% reduction compared to LSQP. The airfoil analysis code XFOIL is integrated into one of the case studies to show how SLCP can be used to evolve the fidelity of design problems that were initially modeled as GP-compatible. Finally, a methodology for design based on GP and SLCP is briefly discussed.
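The log-convex structure referred to here is the one underlying GP: a posynomial becomes convex after a change of variables to log space,

$$f(u) = \sum_{k} c_k \prod_{i} u_i^{a_{ik}},\ c_k>0 \quad\xrightarrow{\;u_i = e^{x_i}\;}\quad \log f\!\left(e^{x}\right) = \log\sum_{k} \exp\!\bigl(a_k^{\mathsf T}x + \log c_k\bigr),$$

which is a convex log-sum-exp function of $x$. Many engineering functions remain approximately convex in these log-transformed variables even when they are not exactly posynomial, which is presumably what allows each SLCP iteration to work with a log-convex local model in place of SQP's quadratic one.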
Computational design problems arise in a number of settings, from synthetic biology to computer architectures. In this paper, we aim to solve data-driven model-based optimization (MBO) problems, where the goal is to find a design input that maximizes an unknown objective function given access only to a static dataset of prior experiments. Such data-driven optimization procedures are the only practical methods in many real-world domains where active data collection is expensive (e.g., when optimizing over proteins) or dangerous (e.g., when optimizing over aircraft designs). Typical methods for MBO that optimize the design against a learned model suffer from distributional shift: it is easy to find a design that "fools" the model into predicting a high value. To overcome this, we propose conservative objective models (COMs), a method that learns a model of the objective function which lower-bounds the actual value of the ground-truth objective on out-of-distribution inputs, and uses it for optimization. Structurally, COMs resemble adversarial training methods used to overcome adversarial examples. COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems, including optimizing protein sequences, robot morphologies, neural network weights, and superconducting materials.
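A minimal training-step sketch in the spirit of the description above is given below; the network, data, and coefficients are placeholders, and the exact objective is an assumption rather than the paper's. The model is fit to the static dataset while its predictions are pushed down on designs found by gradient ascent against the model itself, which is the adversarial-training-like ingredient that makes it conservative.

```python
# Sketch of a conservative training step in the spirit of the method above;
# model, data, and coefficients are placeholders for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def adversarial_designs(x, steps=10, lr=0.05):
    # Gradient *ascent* on the learned model finds designs that "fool" it
    # into predicting high values (the out-of-distribution inputs to penalize).
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = model(x_adv).sum()
        (grad,) = torch.autograd.grad(score, x_adv)
        x_adv = (x_adv + lr * grad).detach().requires_grad_(True)
    return x_adv.detach()

def conservative_step(x, y, alpha=1.0):
    x_adv = adversarial_designs(x)
    mse = ((model(x).squeeze(-1) - y) ** 2).mean()
    # Push predictions down on adversarial designs and up on dataset designs,
    # so the learned objective tends to lower-bound the truth off-distribution.
    conservatism = model(x_adv).mean() - model(x).mean()
    loss = mse + alpha * conservatism
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x, y = torch.randn(64, 32), torch.randn(64)   # placeholder static dataset
conservative_step(x, y)
```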
This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
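As a minimal illustration of the sketch-and-solve idea (the generic randomized range-finder, not necessarily the specific algorithms or error bounds of the paper): draw a random test matrix, form the sketch $Y = A\Omega$, orthonormalize it, and build a rank-$r$ approximation from the small projected matrix.

```python
# Generic randomized range-finder sketch; oversampling and the test-matrix
# distribution are conventional choices, not the paper's prescriptions.
import numpy as np

def low_rank_from_sketch(A, r, oversample=10, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.standard_normal((n, r + oversample))  # random test matrix
    Y = A @ Omega                                     # the sketch of A
    Q, _ = np.linalg.qr(Y)                            # orthonormal range basis
    B = Q.T @ A                                       # project A onto that range
    U, s, Vt = np.linalg.svd(B, full_matrices=False)  # truncate to rank r
    return (Q @ U[:, :r]) * s[:r] @ Vt[:r]

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))  # rank 40
A_hat = low_rank_from_sketch(A, r=40)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))  # near machine precision
```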