The work provides a comprehensive comparison of representative families of topology optimization methods for 3D structural optimization, namely the Solid Isotropic Material with Penalization (SIMP), Level-set, Bidirectional Evolutionary Structural Optimization (BESO), and Variational Topology Optimization (VARTOP) methods. The main differences and similarities of these approaches are first highlighted from an algorithmic standpoint. The comparison is carried out via the study of a set of numerical benchmark cases using industrial-like fine-discretization meshes (around 1 million finite elements) and Matlab as the common computational platform, to ensure fair comparisons. The results obtained for every benchmark case with the different methods are then compared in terms of computational cost, topology quality, achieved minimum value of the objective function, and robustness of the computations (convergence in objective function and topology). Finally, quantitative and qualitative results are presented, from which a qualification of the methods in terms of their relative performance is attempted.
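For orientation, the SIMP family referenced above is conventionally built around a power-law interpolation of the material stiffness inside a compliance-minimization problem; a generic statement (not necessarily the exact formulation adopted in the paper) reads
$$ E_e(\rho_e) = E_{\min} + \rho_e^{\,p}\,(E_0 - E_{\min}), \qquad \min_{\boldsymbol{\rho}} \; c(\boldsymbol{\rho}) = \mathbf{u}^{T}\mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} \quad \text{s.t.} \quad \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} = \mathbf{f}, \;\; \textstyle\sum_{e} \rho_e v_e \le V_{\max}, $$
where $\rho_e \in [0,1]$ is the elemental density, $p \approx 3$ the penalization exponent, and $v_e$ the element volume.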
In this paper, we derive improved a priori error estimates for families of hybridizable interior penalty discontinuous Galerkin (H-IP) methods using a variable penalty for second-order elliptic problems. The strategy is to use a penalization function of the form $\mathcal{O}(1/h^{1+\delta})$, where $h$ denotes the mesh size and $\delta$ is a user-chosen parameter. We then quantify its direct impact on the convergence analysis, namely, the (strong) consistency, discrete coercivity, and boundedness (with $h^{\delta}$-dependency), and we derive updated error estimates for both the discrete energy- and $L^{2}$-norms. The originality of the error analysis lies specifically in the use of conforming interpolants of the exact solution. All theoretical results are supported by numerical evidence.
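For context, in the classical (non-hybridized) symmetric interior penalty setting such a penalization enters the face terms of the discrete bilinear form; a sketch with the variable penalty $\tau_F = \alpha / h_F^{1+\delta}$ (an assumed generic form; the hybridizable variants analyzed in the paper additionally carry a facet unknown) is
$$ a_h(u_h, v_h) = \sum_{K} \int_{K} \nabla u_h \cdot \nabla v_h - \sum_{F} \int_{F} \big( \{\!\!\{\nabla u_h\}\!\!\} \cdot \mathbf{n}_F \, [\![v_h]\!] + \{\!\!\{\nabla v_h\}\!\!\} \cdot \mathbf{n}_F \, [\![u_h]\!] \big) + \sum_{F} \int_{F} \tau_F \, [\![u_h]\!] \, [\![v_h]\!], $$
where $\{\!\!\{\cdot\}\!\!\}$ and $[\![\cdot]\!]$ denote the face average and jump operators; the role of $\tau_F$ in the coercivity and boundedness arguments is analogous in the hybridized case.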
Weight initialization plays an important role in training neural networks and strongly affects a wide range of deep learning applications. Various weight initialization strategies have been developed for different activation functions and network architectures. These initialization algorithms are based on minimizing the variance of the parameters between layers and may still fail when neural networks are deep, e.g., due to dying ReLU. To address this challenge, we study neural networks from a nonlinear-computation point of view and propose a novel weight initialization strategy based on the linear product structure (LPS) of neural networks. The proposed strategy is derived from a polynomial approximation of the activation functions and uses theories of numerical algebraic geometry to guarantee finding all local minima. We also provide a theoretical analysis showing that LPS initialization has a lower probability of dying ReLU compared with other existing initialization strategies. Finally, we test the LPS initialization algorithm on both fully connected and convolutional neural networks to demonstrate its feasibility, efficiency, and robustness on public datasets.
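As a point of reference, the variance-based strategies that the abstract contrasts with LPS (e.g., Xavier/He initialization) amount to scaling the weight distribution by the layer fan-in; the minimal Python sketch below illustrates that baseline family only, not the proposed LPS initialization.

```python
import numpy as np

def he_init(fan_in, fan_out, rng=None):
    """Classical variance-scaling (He) initialization for a ReLU layer:
    weights are drawn so that the activation variance is roughly preserved
    from layer to layer, which is the principle the LPS strategy departs from."""
    rng = rng or np.random.default_rng(0)
    std = np.sqrt(2.0 / fan_in)  # variance 2/fan_in compensates for ReLU zeroing half the signal
    return rng.normal(0.0, std, size=(fan_out, fan_in))

# Example: initialize a fully connected network with layer widths 784-256-64-10.
widths = [784, 256, 64, 10]
weights = [he_init(n_in, n_out) for n_in, n_out in zip(widths[:-1], widths[1:])]
```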
In the present work, we describe a framework for modeling how models are built that integrates concepts and methods from a wide range of fields. The informational gap between the real world and what can be gathered and considered by any individual information-processing agent is characterized and discussed, followed by the presentation of the requisites adopted while developing the modeling approach. The issue of mapping from datasets into models is subsequently addressed, along with some of the implied difficulties and limitations. Based on these considerations, an approach to meta-modeling how models are built is then progressively developed. First, the reference $M^{*}$ meta-model framework is presented, which relies critically on associating whole datasets and their respective models in terms of a strict equivalence relation. Among the interesting features of this model are its ability to bridge the gap between data and modeling, as well as paving the way to an algebra of both data and models that can be employed to combine models in a hierarchical manner. After illustrating the $M^{*}$ model in terms of patterns derived from regular lattices, the reported modeling approach continues by discussing how sampling issues, error, and overlooked data can be addressed, leading to the $M^{<\epsilon>}$ variant. The situation in which the data needs to be represented in terms of probability densities is treated next, yielding the $M^{<\sigma>}$ meta-model, which is then illustrated on a real-world dataset (the iris flowers data). Several considerations about how the developed framework can provide insights into data clustering, complexity, collaborative research, deep learning, and creativity are then presented, followed by overall conclusions.
This paper is concerned with the numerical solution of transport problems in heterogeneous porous media. A semi-discrete, continuous-in-time formulation of the linear advection-diffusion equation is obtained by using a mixed hybrid finite element method, in which the flux variable represents both the advective and diffusive flux, and the Lagrange multiplier arising from the hybridization is used for the discretization of the advective term. Based on global-in-time, nonoverlapping domain decomposition, we propose two implicit local time-stepping methods to solve the semi-discrete problem. The first method uses a time-dependent Steklov-Poincar\'e-type operator and the second uses optimized Schwarz waveform relaxation (OSWR) with Robin transmission conditions. For each method, we formulate a space-time interface problem which is solved iteratively. Each iteration involves solving the subdomain problems independently and globally in time; thus, different time steps can be used in the subdomains. The convergence of the fully discrete OSWR algorithm with nonmatching time grids is proved. Numerical results for problems with various P\'eclet numbers and discontinuous coefficients, including a prototype for the simulation of underground storage of nuclear waste, are presented to illustrate the performance of the proposed local time-stepping methods.
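For reference, the Robin transmission conditions underlying OSWR exchange a weighted combination of flux and trace across the space-time interface $\Gamma \times (0,T)$ between two subdomains; for a pure diffusion operator (the advection-diffusion setting of the paper additionally incorporates the advective flux in the Robin operator), iteration $k$ takes the generic form
$$ D\,\partial_{\mathbf{n}_1} u_1^{k} + p\,u_1^{k} = D\,\partial_{\mathbf{n}_1} u_2^{k-1} + p\,u_2^{k-1}, \qquad D\,\partial_{\mathbf{n}_2} u_2^{k} + p\,u_2^{k} = D\,\partial_{\mathbf{n}_2} u_1^{k-1} + p\,u_1^{k-1} \quad \text{on } \Gamma \times (0,T), $$
where $p > 0$ is the Robin parameter that the "optimized" variants tune to accelerate the convergence of the interface iteration.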
We consider linear sparse recovery problems where additional structure regarding the support of the solution is known. The structure considered consists of non-overlapping sets of indices, each of which contains part of the support. An algorithm based on iterative hard thresholding is proposed to solve this problem. The convergence and error of the method are analyzed with respect to mutual coherence. Numerical simulations are presented in the context of an inverse source problem, including modifications for off-grid recovery.
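For concreteness, plain iterative hard thresholding, on which the proposed algorithm builds, can be sketched as follows in Python (the paper's variant additionally uses the known non-overlapping index sets to constrain the selected support; the step-size choice and names below are illustrative).

```python
import numpy as np

def iht(A, y, s, n_iter=200, step=None):
    """Plain iterative hard thresholding for y ~= A @ x with ||x||_0 <= s."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step from the spectral norm of A
    x = np.zeros(n)
    for _ in range(n_iter):
        g = x + step * A.T @ (y - A @ x)         # gradient step on 0.5 * ||y - A x||^2
        support = np.argsort(np.abs(g))[-s:]     # hard threshold: keep the s largest-magnitude entries
        x = np.zeros(n)
        x[support] = g[support]
    return x
```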
Computational design problems arise in a number of settings, from synthetic biology to computer architectures. In this paper, we aim to solve data-driven model-based optimization (MBO) problems, where the goal is to find a design input that maximizes an unknown objective function given access to only a static dataset of prior experiments. Such data-driven optimization procedures are the only practical methods in many real-world domains where active data collection is expensive (e.g., when optimizing over proteins) or dangerous (e.g., when optimizing over aircraft designs). Typical methods for MBO that optimize the design against a learned model suffer from distributional shift: it is easy to find a design that "fools" the model into predicting a high value. To overcome this, we propose conservative objective models (COMs), a method that learns a model of the objective function that lower-bounds the actual value of the ground-truth objective on out-of-distribution inputs, and uses it for optimization. Structurally, COMs resemble adversarial training methods used to defend against adversarial examples. COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems, including optimizing protein sequences, robot morphologies, neural network weights, and superconducting materials.
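The conservative training idea can be pictured as a standard regression loss augmented with a term that penalizes the model's optimism on inputs found by gradient ascent against it; the PyTorch-style sketch below illustrates this general mechanism only (the hyper-parameters `alpha`, `ascent_steps`, and `ascent_lr` are illustrative placeholders, not the paper's settings).

```python
import torch

def conservative_training_step(model, x, y, opt, alpha=0.5, ascent_steps=10, ascent_lr=0.05):
    # Find "adversarial" designs that the current model scores highly.
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(ascent_steps):
        score = model(x_adv).sum()
        grad, = torch.autograd.grad(score, x_adv)
        x_adv = (x_adv + ascent_lr * grad).detach().requires_grad_(True)

    pred = model(x)
    mse = torch.nn.functional.mse_loss(pred, y)          # ordinary supervised fit on the dataset
    overestimation = model(x_adv).mean() - pred.mean()   # penalize high predictions off-distribution
    loss = mse + alpha * overestimation

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```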
Since deep neural networks were developed, they have made substantial contributions to everyday life, and machine-learning-based decision support has become commonplace in many of its aspects. However, despite this progress, the design and training of neural networks remain challenging and often unpredictable procedures. To lower the technical barrier for non-expert users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods to define their value ranges. The research then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for user-designed modules. The paper concludes with open problems in applying HPO to deep learning, a comparison of optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
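As a concrete example of the simplest class of search algorithms such a review covers, a random-search HPO loop can be written in a few lines; `objective` and `space` below are placeholder names for a user-supplied train-and-validate routine and search space.

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Minimal random-search loop: sample configurations from `space` and
    keep the one with the best validation score returned by `objective`."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy usage: the "objective" here is a stand-in for an actual training run.
space = {"lr": [1e-4, 1e-3, 1e-2], "batch_size": [32, 64, 128], "dropout": [0.0, 0.2, 0.5]}
best_cfg, best_score = random_search(lambda cfg: -(cfg["lr"] - 1e-3) ** 2, space)
```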
There has been appreciable progress in unsupervised network representation learning (UNRL) approaches over graphs recently, with flexible random-walk approaches, new optimization objectives, and deep architectures. However, there is no common ground for a systematic comparison of embeddings to understand their behavior for different graphs and tasks. In this paper we theoretically group different approaches under a unifying framework and empirically investigate the effectiveness of different network representation methods. In particular, we argue that most of the UNRL approaches either explicitly or implicitly model and exploit the context information of a node. Consequently, we propose a framework that casts a variety of approaches -- random walk based, matrix factorization based, and deep learning based -- into a unified context-based optimization function. We systematically group the methods based on their similarities and differences, and study these differences in detail, which we later use to explain their performance differences on downstream tasks. We conduct a large-scale empirical study considering 9 popular and recent UNRL techniques and 11 real-world datasets with varying structural properties, on two common tasks -- node classification and link prediction. We find that there is no single method that is a clear winner and that the choice of a suitable method is dictated by properties of the embedding methods, the task, and the structural properties of the underlying graph. In addition, we report common pitfalls in the evaluation of UNRL methods and offer suggestions for experimental design and interpretation of results.
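As one concrete instance of such a context-based objective (the skip-gram-with-negative-sampling form popularized by random-walk methods such as DeepWalk and node2vec; the paper's unified function is more general), node embeddings $\mathbf{z}_u$ are fit by maximizing
$$ \sum_{(u,v) \in \mathcal{C}} \Big[ \log \sigma(\mathbf{z}_u^{\top} \mathbf{z}_v) + \sum_{i=1}^{k} \mathbb{E}_{v' \sim P_n} \log \sigma(-\mathbf{z}_u^{\top} \mathbf{z}_{v'}) \Big], $$
where $\mathcal{C}$ is the set of node-context pairs (e.g., co-occurrences on random walks), $\sigma$ the sigmoid function, and $P_n$ a negative-sampling distribution.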
Detecting objects and estimating their pose remains one of the major challenges for the computer vision research community. There is a trade-off between localizing the objects and estimating their viewpoints: the detector ideally needs to be view-invariant, while the pose estimation process should be able to generalize towards the category level. This work explores the use of deep learning models to solve both problems simultaneously. To do so, we propose three novel deep learning architectures that perform joint detection and pose estimation, in which the two tasks are gradually decoupled. We also investigate whether the pose estimation problem should be treated as a classification or a regression problem, which is still an open question in the computer vision community. We present a comparative analysis of all our solutions and of the methods that currently define the state of the art for this problem, with a thorough experimental evaluation on the PASCAL3D+ and ObjectNet3D datasets. With the proposed models, we achieve state-of-the-art performance on both datasets.