We present monostatic sampling methods for limited-aperture scattering problems in two dimensions. The direct sampling method (DSM) is well known to provide a robust, stable, and fast numerical scheme for imaging inhomogeneities from multistatic measurements, even with only one or two incident fields. However, in practical applications, monostatic measurements in the limited-aperture configuration are frequently encountered. A monostatic sampling method (MSM) was studied in the full-aperture configuration in recent literature. In this paper, we develop the MSM in the limited-aperture configuration and derive an asymptotic formula for the corresponding indicator function. Based on the asymptotic formula, we then analyze the imaging performance of the proposed method depending on the range of measurement directions and the geometric and material properties of the inhomogeneities. Furthermore, we propose a modified numerical scheme with multi-frequency measurements that improves imaging performance, especially for small anomalies. Numerical simulations are presented to validate the analytical results.
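A minimal numerical sketch of a monostatic sampling indicator of the form $I(z) = \big|\sum_n u(\theta_n)\, e^{-2ik\theta_n\cdot z}\big|$ evaluated over a sampling grid with a limited range of measurement directions. The synthetic Born-type data, the test function, and the normalization are illustrative assumptions, not the exact formulas or asymptotics derived in the paper.

```python
import numpy as np

k = 2 * np.pi            # wavenumber (assumed)
N = 32                   # number of monostatic measurement directions
aperture = np.pi / 2     # limited aperture: directions span a quarter circle
angles = np.linspace(0.0, aperture, N)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (N, 2)

# Toy monostatic data for a small scatterer at x0 (Born-type approximation,
# used here only to make the sketch self-contained).
x0 = np.array([0.3, -0.2])
u = np.exp(2j * k * dirs @ x0)

# Sampling grid and indicator I(z) = |sum_n u_n * exp(-2ik theta_n . z)| / N.
xs = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs)
Z = np.stack([X.ravel(), Y.ravel()], axis=1)                # (M, 2)
indicator = np.abs(np.exp(-2j * k * Z @ dirs.T) @ u) / N
indicator = indicator.reshape(X.shape)

peak = np.unravel_index(indicator.argmax(), indicator.shape)
print("indicator peak near:", X[peak], Y[peak])             # close to (0.3, -0.2)
```

With a reduced aperture the peak stays at the scatterer location in this toy setting, but the indicator becomes elongated along directions not covered by the measurements, which is the kind of behavior the asymptotic analysis quantifies.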
In this paper we develop a new, simple, and effective isogeometric analysis for modeling thermal buckling of stiffened laminated composite plates with cutouts using level sets. We employ a first-order shear deformation theory to approximate the displacement field of the stiffeners and the plate. Numerical modeling with a treatment of trimmed objects, such as internal cutouts, in terms of NURBS-based isogeometric analysis presents several challenges, primarily due to the need to use the tensor product of the NURBS basis functions. Because of this feature, refinement operations can only be performed globally on the domain and not locally around the cutout. The new approach overcomes the drawbacks of modeling complex geometries with multiple patches, as level sets are used to describe the internal cutouts while numerical integration is performed only inside the physical domain. Results of parametric studies are presented which show the influence of ply orientation, the size and orientation of the cutout, and the position and profile of the curvilinear stiffeners. The numerical examples show the high reliability and efficiency of the present method compared with other published solutions and ABAQUS.
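A minimal sketch of the integration idea described above, assuming a rectangular plate with a circular cutout represented by a level-set function: quadrature points with negative level-set value (inside the cutout) are simply discarded, so no multi-patch trimming of the geometry is required. The geometry, mesh, and quadrature rule are illustrative, and the plate model itself is omitted.

```python
import numpy as np

# Plate [0, a] x [0, b] with a circular cutout of radius r centered at (xc, yc).
a, b = 2.0, 1.0
xc, yc, r = 1.0, 0.5, 0.25

def level_set(x, y):
    """Signed distance to the cutout boundary: > 0 in the physical domain."""
    return np.hypot(x - xc, y - yc) - r

# Gauss-Legendre points per background element of a uniform mesh of the plate.
gp, gw = np.polynomial.legendre.leggauss(4)
nelx, nely = 40, 20
hx, hy = a / nelx, b / nely

area = 0.0
for ex in range(nelx):
    for ey in range(nely):
        # Map the reference points from [-1, 1]^2 into the current element.
        x = ex * hx + 0.5 * hx * (gp + 1.0)
        y = ey * hy + 0.5 * hy * (gp + 1.0)
        X, Y = np.meshgrid(x, y)
        W = np.outer(gw, gw) * 0.25 * hx * hy
        # Keep only quadrature points inside the physical domain (outside the cutout).
        inside = level_set(X, Y) > 0.0
        area += np.sum(W[inside])

print(f"integrated area  = {area:.4f}")
print(f"exact plate area = {a * b - np.pi * r**2:.4f}")
```

The same point-rejection strategy carries over when the integrand is a stiffness or geometric-stiffness contribution instead of the constant 1 used here.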
Approximate message passing (AMP) is a promising technique for unknown signal reconstruction in certain high-dimensional linear systems with non-Gaussian signaling. A distinguishing feature of AMP-type algorithms is that their dynamics can be rigorously described by state evolution. However, state evolution does not necessarily guarantee the convergence of iterative algorithms. To solve the convergence problem of AMP-type algorithms in principle, this paper proposes a memory AMP (MAMP) under a sufficient statistic condition, named sufficient statistic MAMP (SS-MAMP). We show that the covariance matrices of SS-MAMP are L-banded and convergent. Given an arbitrary MAMP, we can construct an SS-MAMP by damping, which not only ensures the convergence of the MAMP but also preserves its orthogonality, i.e., its dynamics can still be rigorously described by state evolution. As a byproduct, we prove that the Bayes-optimal orthogonal/vector AMP (BO-OAMP/VAMP) is an SS-MAMP. As a result, we reveal two interesting properties of BO-OAMP/VAMP for large systems: 1) the covariance matrices are L-banded and convergent, and 2) damping and memory are useless (i.e., they bring no performance improvement). As an example, we construct a sufficient statistic Bayes-optimal MAMP (SS-BO-MAMP), which is Bayes optimal if its state evolution has a unique fixed point. In addition, the mean square error (MSE) of SS-BO-MAMP is not worse than that of the original BO-MAMP. Finally, simulations are provided to verify the validity and accuracy of the theoretical results.
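The abstract attributes convergence to a damping step built from the covariance of previous iterates. As a small illustration (an assumption made here, not the paper's exact SS-MAMP construction), the sketch below forms the variance-optimal affine combination of stored iterates, $w \propto V^{-1}\mathbf{1}$, which is one standard way damping can be realized when an estimate of the error covariance $V$ is available.

```python
import numpy as np

def damping_weights(V):
    """Weights w minimizing w^T V w subject to sum(w) = 1 (variance-optimal damping)."""
    ones = np.ones(V.shape[0])
    w = np.linalg.solve(V, ones)
    return w / w.sum()

# Toy example: three correlated, unbiased estimates of the same signal.
rng = np.random.default_rng(0)
n = 100_000
x_true = rng.standard_normal(n)
C = np.array([[1.0, 0.6, 0.3],
              [0.6, 0.5, 0.2],
              [0.3, 0.2, 0.4]])          # error covariance of the three estimates
noise = np.linalg.cholesky(C) @ rng.standard_normal((3, n))
iterates = x_true[None, :] + noise       # shape (3, n)

w = damping_weights(C)
x_damped = w @ iterates
mse_each = ((iterates - x_true) ** 2).mean(axis=1)
mse_damped = ((x_damped - x_true) ** 2).mean()
print("per-iterate MSE :", np.round(mse_each, 3))
print("damped MSE      :", round(mse_damped, 3), "with weights", np.round(w, 3))
```

Because the individual iterates are feasible choices of $w$, the damped combination is never worse than the best single iterate in this toy setting, which mirrors the "damping does not hurt" flavor of the results stated above.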
A deep learning-based model reduction (DeePMR) method for simplifying chemical kinetics is proposed and validated using high-temperature auto-ignition, perfectly stirred reactors (PSRs), and one-dimensional freely propagating flames of n-heptane/air mixtures. Mechanism reduction is modeled as an optimization problem on Boolean space, where a Boolean vector, each entry corresponding to a species, represents a reduced mechanism. The optimization goal is to minimize the reduced mechanism size given the error tolerance of a group of pre-selected benchmark quantities. The key idea of DeePMR is to employ a deep neural network (DNN) to formulate the objective function in the optimization problem. In order to explore the high-dimensional Boolean space efficiently, an iterative DNN-assisted data sampling and DNN training procedure is implemented. The results show that DNN assistance improves sampling efficiency significantly, selecting only $10^5$ samples out of $10^{34}$ possible samples for the DNN to achieve sufficient accuracy. The results demonstrate the capability of the DNN to recognize key species and reasonably predict reduced mechanism performance. The well-trained DNN guarantees the optimal reduced mechanism by solving an inverse optimization problem. Comparing ignition delay times, laminar flame speeds, and temperatures in PSRs, the resulting skeletal mechanism has fewer species (45 species) but the same level of accuracy as the skeletal mechanism (56 species) obtained by the Path Flux Analysis (PFA) method. In addition, the skeletal mechanism can be further reduced to 28 species if only atmospheric, near-stoichiometric conditions (equivalence ratio between 0.6 and 1.2) are considered. DeePMR provides an innovative way to perform model reduction and demonstrates the great potential of data-driven methods in the combustion area.
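A schematic of the iterative DNN-assisted sampling loop described above, with hypothetical stand-ins for the expensive parts: `evaluate_error` plays the role of the chemistry solver plus benchmark-quantity comparison, and an off-the-shelf regressor plays the role of the DNN surrogate. The encoding (one Boolean entry per species) follows the abstract; everything else is illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

n_species = 30                              # toy mechanism size (far smaller than n-heptane)
rng = np.random.default_rng(1)
importance = rng.random(n_species)          # hidden "importance" of each species (toy ground truth)

def evaluate_error(mask):
    """Hypothetical expensive evaluation: error incurred by removing species where mask == 0."""
    return float(importance[mask == 0].sum())

tolerance = 1.0
X, y = [], []

# Warm-up: random Boolean vectors evaluated exactly.
for _ in range(200):
    mask = (rng.random(n_species) < 0.7).astype(float)
    X.append(mask); y.append(evaluate_error(mask))

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)

best_mask, best_size = np.ones(n_species), n_species
for it in range(5):
    surrogate.fit(np.array(X), np.array(y))
    # Propose many candidates; keep the smallest ones the surrogate predicts to be feasible.
    cand = (rng.random((5000, n_species)) < rng.random((5000, 1))).astype(float)
    pred = surrogate.predict(cand)
    feasible = cand[pred < tolerance]
    order = np.argsort(feasible.sum(axis=1))
    for mask in feasible[order[:20]]:       # verify only the most promising candidates exactly
        err = evaluate_error(mask)
        X.append(mask); y.append(err)
        if err < tolerance and mask.sum() < best_size:
            best_mask, best_size = mask, int(mask.sum())
    print(f"iteration {it}: best reduced size = {best_size} / {n_species}")

print("retained species indices:", np.flatnonzero(best_mask))
```

The point of the loop is the same as in the abstract: the surrogate screens a huge Boolean space cheaply so that only a small, targeted set of candidates ever reaches the expensive evaluation.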
Nash equilibrium is a central concept in game theory. Several Nash solvers exist, yet none scale to normal-form games with many actions and many players, especially those with payoff tensors too big to be stored in memory. In this work, we propose an approach that iteratively improves an approximation to a Nash equilibrium through joint play. It accomplishes this by tracing a previously established homotopy that defines a continuum of equilibria for the game regularized with decaying levels of entropy. This continuum asymptotically approaches the limiting logit equilibrium, proven by McKelvey and Palfrey (1995) to be unique in almost all games, thereby partially circumventing the well-known equilibrium selection problem of many-player games. To encourage iterates to remain near this path, we efficiently minimize average deviation incentive via stochastic gradient descent, intelligently sampling entries in the payoff tensor as needed. Monte Carlo estimates of the stochastic gradient from joint play are biased due to the appearance of a nonlinear max operator in the objective, so we introduce additional innovations to the algorithm to alleviate gradient bias. The descent process can also be viewed as repeatedly constructing and reacting to a polymatrix approximation to the game. In these ways, our proposed approach, average deviation incentive descent with adaptive sampling (ADIDAS), is most similar to three classical approaches, namely homotopy-type, Lyapunov, and iterative polymatrix solvers. The lack of local convergence guarantees for biased gradient descent prevents guaranteed convergence to Nash; however, we demonstrate through extensive experiments the ability of this approach to approximate a unique Nash equilibrium in normal-form games with as many as seven players and twenty-one actions (several billion outcomes) that are orders of magnitude larger than those possible with prior algorithms.
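Average deviation incentive (ADI) is the quantity being descended on. A small worked example for a two-player normal-form game with explicit payoff tensors is shown below; note that ADIDAS itself samples tensor entries stochastically precisely to avoid materializing payoffs like this.

```python
import numpy as np

rng = np.random.default_rng(0)
U1 = rng.random((3, 3))        # payoff tensor of player 1 (rows index its own actions)
U2 = rng.random((3, 3))        # payoff tensor of player 2 (columns index its own actions)

def average_deviation_incentive(x1, x2):
    """Mean over players of (best unilateral deviation payoff - current expected payoff)."""
    ev1 = x1 @ U1 @ x2                      # player 1's expected payoff under (x1, x2)
    ev2 = x1 @ U2 @ x2                      # player 2's expected payoff under (x1, x2)
    gain1 = (U1 @ x2).max() - ev1           # best pure deviation for player 1
    gain2 = (x1 @ U2).max() - ev2           # best pure deviation for player 2
    return 0.5 * (gain1 + gain2)

uniform = np.full(3, 1.0 / 3.0)
print("ADI at uniform play:", round(average_deviation_incentive(uniform, uniform), 4))
# ADI equals zero exactly at a Nash equilibrium; descending (an entropy-regularized
# version of) this quantity along the homotopy is the core of the iteration.
```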
In this paper, we propose a class of faster adaptive Gradient Descent Ascent (GDA) methods for solving nonconvex-strongly-concave minimax problems based on unified adaptive matrices, which include almost all existing coordinate-wise and global adaptive learning rates. Specifically, we propose a fast Adaptive Gradient Descent Ascent (AdaGDA) method based on the basic momentum technique, which reaches a lower gradient complexity of $O(\kappa^4\epsilon^{-4})$ for finding an $\epsilon$-stationary point without large batches, improving the results of the existing adaptive GDA methods by a factor of $O(\sqrt{\kappa})$. At the same time, we present an accelerated version of AdaGDA (VR-AdaGDA) based on the momentum-based variance-reduction technique, which achieves a lower gradient complexity of $O(\kappa^{4.5}\epsilon^{-3})$ for finding an $\epsilon$-stationary point without large batches, improving the results of the existing adaptive GDA methods by a factor of $O(\epsilon^{-1})$. Moreover, we prove that our VR-AdaGDA method reaches the best known gradient complexity of $O(\kappa^{3}\epsilon^{-3})$ with mini-batch size $O(\kappa^3)$. In particular, we provide an effective convergence analysis framework for our adaptive GDA methods. Some experimental results on policy evaluation and fair classifier tasks demonstrate the efficiency of our algorithms.
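A toy sketch of an adaptive GDA loop on a simple quadratic saddle problem, combining basic momentum with an AdaGrad-style coordinate-wise adaptive scaling (a diagonal instance of the unified adaptive matrices mentioned above). The objective, step sizes, and update details are illustrative placeholders rather than the AdaGDA/VR-AdaGDA algorithms analyzed in the paper.

```python
import numpy as np

# Toy saddle problem: min_x max_y  f(x, y) = 0.1*x^2 + x*y - 0.5*y^2  (saddle at the origin).
def grad_x(x, y): return 0.2 * x + y
def grad_y(x, y): return x - y

x, y = 3.0, -2.0
mx = my = 0.0                 # momentum buffers
vx = vy = 1e-8                # accumulators defining the adaptive scaling (scalars here)
eta, beta = 0.5, 0.9

for t in range(2000):
    gx, gy = grad_x(x, y), grad_y(x, y)
    # Momentum estimates of the gradients.
    mx = beta * mx + (1 - beta) * gx
    my = beta * my + (1 - beta) * gy
    # AdaGrad-style adaptive scaling (a diagonal adaptive matrix in higher dimensions).
    vx += gx ** 2
    vy += gy ** 2
    x -= eta * mx / np.sqrt(vx)       # descent step on x
    y += eta * my / np.sqrt(vy)       # ascent step on y

print(f"approximate stationary point: x = {x:.4f}, y = {y:.4f}")   # both should be near 0
```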
The paper presents the results from a multi-year effort to develop and validate image processing methods for selecting the best physical models based on solar image observations. The approach consists of selecting the physical models based on their agreement with coronal holes extracted from the images. Ultimately, the goal is to use physical models to predict geomagnetic storms. We decompose the problem into three subproblems: (i) coronal hole segmentation based on physical constraints, (ii) matching clusters of coronal holes between different maps, and (iii) physical map classification. For segmenting coronal holes, we develop a multi-modal method that uses segmentation maps from three different methods to initialize a level-set method that evolves the initial coronal hole segmentation to the magnetic boundary. Then, we introduce a new method based on Linear Programming for matching clusters of coronal holes. The final matching is then performed using Random Forests. The methods were carefully validated using consensus maps derived from multiple readers, manual clustering, manual map classification, and method validation for 50 maps. The proposed multi-modal segmentation method significantly outperformed SegNet, U-net, Henney-Harvey, and FCN by providing accurate boundary detection. Overall, the method gave a 95.5% map classification accuracy.
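The cluster-matching step is a combinatorial assignment problem, itself a special case of Linear Programming. The sketch below illustrates the idea on toy cluster centroids with a Euclidean-distance cost; the features, costs, and constraints actually used in the paper's formulation are not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Centroids (latitude, longitude) of coronal-hole clusters in two maps (toy values).
clusters_a = np.array([[10.0, 30.0], [-45.0, 120.0], [60.0, 200.0]])
clusters_b = np.array([[58.0, 205.0], [12.0, 28.0], [-40.0, 118.0], [5.0, 300.0]])

# Cost: pairwise Euclidean distance between centroids.
cost = np.linalg.norm(clusters_a[:, None, :] - clusters_b[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"cluster {i} in map A  <->  cluster {j} in map B   (cost {cost[i, j]:.1f})")
```

Unmatched clusters (here, one cluster of map B) fall out naturally because the assignment is one-to-one on the smaller side.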
Gaussian processes and random fields have a long history, covering multiple approaches to representing spatial and spatio-temporal dependence structures, such as covariance functions, spectral representations, reproducing kernel Hilbert spaces, and graph-based models. This article describes how the stochastic partial differential equation approach to generalising Mat\'ern covariance models via Hilbert space projections connects with several of these approaches, with each connection being useful in different situations. In addition to an overview of the main ideas, some important extensions, theory, applications, and other recent developments are discussed. The methods include both Markovian and non-Markovian models, non-Gaussian random fields, non-stationary fields and space-time fields on arbitrary manifolds, and practical computational considerations.
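As a concrete anchor for the covariance-function and SPDE viewpoints mentioned above, the sketch below evaluates the Mat\'ern covariance and notes the standard Whittle/Lindgren-Rue-Lindstr\"om link by which a field with this covariance solves $(\kappa^2 - \Delta)^{\alpha/2}(\tau u) = \mathcal{W}$ with $\alpha = \nu + d/2$; the parameter values are arbitrary.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_covariance(h, sigma2=1.0, kappa=3.0, nu=1.5):
    """Matern covariance C(h) = sigma2 * 2^(1-nu)/Gamma(nu) * (kappa*h)^nu * K_nu(kappa*h)."""
    h = np.asarray(h, dtype=float)
    c = np.full_like(h, sigma2)                       # C(0) = sigma2
    pos = h > 0
    kh = kappa * h[pos]
    c[pos] = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * kh ** nu * kv(nu, kh)
    return c

h = np.linspace(0.0, 2.0, 5)
print(np.round(matern_covariance(h), 4))
# In dimension d, a stationary field with this covariance solves the SPDE
# (kappa^2 - Laplacian)^(alpha/2) (tau u) = white noise, with alpha = nu + d/2;
# discretizing the SPDE (e.g., via finite element projections) yields the sparse
# Markov/GMRF representation used for fast computation.
```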
We investigate the quality of space approximation of a class of stochastic integral equations of convolution type with Gaussian noise. Such equations arise, for example, when considering mild solutions of stochastic fractional-order partial differential equations, but also when considering mild solutions of classical stochastic partial differential equations. The key requirement for the equations is a smoothing property of the deterministic evolution operator, which is typical of parabolic-type problems. We show that if one has access to nonsmooth-data estimates for the deterministic error operator of a space discretization procedure, together with its derivative, then one obtains error estimates in pathwise H\"older norms with rates that can be read off from the deterministic error rates. We illustrate the main result by considering a class of stochastic fractional-order partial differential equations and space approximations performed by spectral Galerkin methods and finite elements. We also improve an existing result on the stochastic heat equation.
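One prototypical member of this class, written here only to fix ideas (the assumptions in the paper are stated more generally, covering fractional-order resolvent families), is the stochastic convolution
\[
U(t) \;=\; S(t)U_0 \;+\; \int_0^t S(t-s)\,\mathrm{d}W(s), \qquad
\|A^{\mu} S(t)x\| \;\le\; C\, t^{-\rho\mu}\,\|x\|, \quad 0<\mu\le 1,\ t\in(0,T],
\]
where the second display is the smoothing property referred to above, with some $\rho>0$ depending on the order of the equation; for the classical stochastic heat equation, $S(t)=e^{t\Delta}$ and $\rho=1$.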
In real-world applications, data often come in a growing manner, where both the data volume and the number of classes may increase dynamically. This brings a critical challenge for learning: given the increasing data volume or number of classes, one has to instantaneously adjust the neural model capacity to obtain promising performance. Existing methods either ignore the growing nature of data or seek to independently search for an optimal architecture for a given dataset, and are thus incapable of promptly adjusting the architecture when the data change. To address this, we present a neural architecture adaptation method, namely Adaptation eXpert (AdaXpert), to efficiently adjust previous architectures on growing data. Specifically, we introduce an architecture adjuster to generate a suitable architecture for each data snapshot, based on the previous architecture and the extent of the difference between the current and previous data distributions. Furthermore, we propose an adaptation condition to determine the necessity of adjustment, thereby avoiding unnecessary and time-consuming adjustments. Extensive experiments on two growth scenarios (increasing data volume and increasing number of classes) demonstrate the effectiveness of the proposed method.
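A sketch of one possible adaptation condition in the spirit described above, using a kernel maximum mean discrepancy between feature representations of the previous and current snapshots as the difference measure; the metric, bandwidth, threshold, and features are illustrative assumptions rather than AdaXpert's actual criterion.

```python
import numpy as np

def mmd2(X, Y, bandwidth=4.0):
    """Biased estimate of the squared maximum mean discrepancy with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def should_adapt(prev_feats, curr_feats, threshold=0.05):
    """Trigger an architecture adjustment only when the snapshots differ enough."""
    return mmd2(prev_feats, curr_feats) > threshold

rng = np.random.default_rng(0)
prev = rng.normal(0.0, 1.0, size=(500, 16))        # features of the previous snapshot
same = rng.normal(0.0, 1.0, size=(500, 16))        # new data from the same distribution
shifted = rng.normal(0.7, 1.0, size=(500, 16))     # new data with a distribution shift

print("adapt on same distribution?   ", should_adapt(prev, same))      # expected: False
print("adapt on shifted distribution?", should_adapt(prev, shifted))   # expected: True
```

The bandwidth here is chosen roughly as the square root of the feature dimension; in practice one would tune both it and the threshold on held-out snapshots.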
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
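A single-machine sketch of the local smoothing idea behind DRS: the nonsmooth objective $f$ is replaced by the Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$, whose gradient is estimated from subgradients at randomly perturbed points. The example objective (an $\ell_1$ distance), sample count, and step sizes are illustrative, and the distributed communication structure of DRS is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
target = rng.normal(size=d)

def f(x):
    """Nonsmooth convex objective: f(x) = ||x - target||_1."""
    return np.abs(x - target).sum()

def subgrad(x):
    return np.sign(x - target)

def smoothed_grad(x, gamma=0.1, m=64):
    """Monte Carlo gradient estimate of the smoothed objective f_gamma(x) = E[f(x + gamma*Z)]."""
    Z = rng.normal(size=(m, d))
    return subgrad(x[None, :] + gamma * Z).mean(axis=0)

x = np.zeros(d)
for t in range(2000):
    x -= (0.5 / np.sqrt(t + 1.0)) * smoothed_grad(x)   # diminishing-step subgradient descent

print("initial objective:", round(f(np.zeros(d)), 3))
print("final objective  :", round(f(x), 3))   # decreases toward 0, up to smoothing/sampling noise
```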