In topology optimization, the state of structures is typically obtained by numerically evaluating a discretized PDE-based model. The degrees of freedom of such a model can be partitioned into free and prescribed sets to define the boundary conditions. A multi-partition problem involves multiple partitions of the same discretization, typically corresponding to different loading scenarios. As a result, solving multi-partition problems involves multiple factorizations or preconditionings of the system matrix, requiring high computational effort. In this paper, a novel method is proposed to efficiently calculate the responses and accompanying design sensitivities in such multi-partition problems using static condensation, for use in gradient-based topology optimization. A main problem class that benefits from the proposed method is the topology optimization of small-displacement multi-input-multi-output compliant mechanisms. However, the method is applicable to any linear problem. We present its formulation and an algorithmic complexity analysis to estimate the computational advantages for both direct and iterative solution methods, verified by numerical experiments. It is demonstrated that substantial gains are achievable for large-scale multi-partition problems. This is especially true for problems in which a small number of degrees of freedom fully describes the performance of the structure and the different partitions are largely similar. A major contribution to the gain is that no large adjoint analyses are required to obtain the sensitivities of the performance measure.
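To make the mechanics concrete, here is a minimal NumPy sketch of static condensation on a generic linear system K u = f: the "slave" degrees of freedom are eliminated via a Schur complement, leaving a small condensed system on the degrees of freedom that describe the performance. The matrix, load, and partitioning are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Illustrative symmetric positive definite "stiffness" matrix and load.
rng = np.random.default_rng(0)
n = 8
G = rng.standard_normal((n, n))
K = G @ G.T + n * np.eye(n)
f = rng.standard_normal(n)

m = [0, 1]                                  # "master" dofs describing the performance
s = [i for i in range(n) if i not in m]     # condensed-out "slave" dofs

Kmm, Kms = K[np.ix_(m, m)], K[np.ix_(m, s)]
Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]

# Schur complement: condensed stiffness and load on the master dofs only.
K_cond = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
f_cond = f[m] - Kms @ np.linalg.solve(Kss, f[s])

u_m = np.linalg.solve(K_cond, f_cond)       # small condensed solve
u_full = np.linalg.solve(K, f)              # reference full solve
assert np.allclose(u_m, u_full[m])          # condensation is exact for linear problems
```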
Chance constraints provide a principled framework to mitigate the risk of high-impact extreme events by modifying the controllable properties of a system. The low probability and rare occurrence of such events, however, impose severe sampling and computational requirements on classical solution methods that render them impractical. This work proposes a novel sampling-free method for solving rare chance constrained optimization problems affected by uncertainties that follow general Gaussian mixture distributions. By integrating modern developments in large deviation theory with tools from convex analysis and bilevel optimization, we propose tractable formulations that can be solved by off-the-shelf solvers. Our formulations enjoy several advantages compared to classical methods: their size and complexity are independent of event rarity, they do not require linearity or convexity assumptions on system constraints, and, under easily verifiable conditions, they serve as safe conservative approximations or asymptotically exact reformulations of the true problem. Computational experiments on linear, nonlinear and PDE-constrained problems from applications in portfolio management, structural engineering and fluid dynamics illustrate the broad applicability of our method and its advantages over classical sampling-based approaches in terms of both accuracy and efficiency.
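As a toy illustration of the sampling burden (not the paper's large-deviation machinery), the following sketch compares the exact tail probability of a rare event under an assumed two-component Gaussian mixture with a naive Monte Carlo estimate:

```python
import numpy as np
from scipy.stats import norm

# Uncertain parameter xi follows an assumed two-component Gaussian mixture.
weights = np.array([0.7, 0.3])
means = np.array([0.0, 1.0])
stds = np.array([1.0, 0.5])

threshold = 6.0                  # "failure" event {xi > threshold} is rare

# Exact tail probability: weighted sum of the component Gaussian tails.
p_exact = float(np.sum(weights * norm.sf(threshold, loc=means, scale=stds)))

# Naive Monte Carlo with a generous (but finite) sample budget.
rng = np.random.default_rng(0)
n = 100_000
comp = rng.choice(2, size=n, p=weights)
xi = rng.normal(means[comp], stds[comp])
p_mc = float(np.mean(xi > threshold))

print(f"exact tail: {p_exact:.2e}   Monte Carlo ({n} samples): {p_mc:.2e}")
# The estimate is almost surely 0 here, so a sampled chance constraint
# cannot certify probabilities at this rarity level.
```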
Learning problems commonly exhibit an interesting feedback mechanism wherein the population data reacts to competing decision makers' actions. This paper formulates a new game-theoretic framework for this phenomenon, called multi-player performative prediction. We focus on two distinct solution concepts, namely (i) performatively stable equilibria and (ii) Nash equilibria of the game. The latter equilibria are arguably more informative, but can be found efficiently only when the game is monotone. We show that, under mild assumptions, the performatively stable equilibria can be found efficiently by a variety of algorithms, including repeated retraining and repeated (stochastic) gradient play. We then establish transparent sufficient conditions for strong monotonicity of the game and use them to develop algorithms for finding Nash equilibria. We investigate derivative-free methods and adaptive gradient algorithms wherein each player alternates between learning a parametric description of their distribution and gradient steps on the empirical risk. Synthetic and semi-synthetic numerical experiments illustrate the results.
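A minimal sketch of repeated (stochastic) gradient play in a two-player setting, under the illustrative assumption of quadratic losses and a location-family distribution whose mean shifts linearly in both players' decisions; with small shift parameters the iterates converge to the performatively stable equilibrium:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two players; player i minimizes E_{z ~ D_i(theta)} (theta_i - z)^2 / 2,
# where the data mean reacts linearly to both decisions (performativity):
#     mean_i(theta) = a_i + eps * theta_i + gamma * theta_{-i}.
a = np.array([1.0, -2.0])
eps, gamma = 0.3, 0.2              # shifts assumed small: unique stable point

theta = np.zeros(2)
lr = 0.5
for t in range(300):
    mean = a + eps * theta + gamma * theta[::-1]
    z = rng.normal(mean, 1.0, size=(256, 2))   # fresh samples from D(theta)
    theta -= lr * (theta - z.mean(axis=0))     # gradient step, D treated as fixed

# Performative stability: theta_i = mean_i(theta). Solve the linear system.
M = np.eye(2) - np.array([[eps, gamma], [gamma, eps]])
print("iterates:", theta, " exact stable point:", np.linalg.solve(M, a))
```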
Spectral clustering has been one of the most widely used methods for community detection in networks. However, large-scale networks bring computational challenges to the eigenvalue decomposition therein. In this paper, we study spectral clustering using randomized sketching algorithms from a statistical perspective, where we typically assume the network data are generated from a stochastic block model that is not necessarily of full rank. To do this, we first use recently developed sketching algorithms to obtain two randomized spectral clustering algorithms, namely the random projection-based and the random sampling-based spectral clustering. Then we study the theoretical bounds of the resulting algorithms in terms of the approximation error for the population adjacency matrix, the misclassification error, and the estimation error for the link probability matrix. It turns out that, under mild conditions, the randomized spectral clustering algorithms lead to the same theoretical bounds as those of the original spectral clustering algorithm. We also extend the results to degree-corrected stochastic block models. Numerical experiments support our theoretical findings and show the efficiency of randomized methods. A new R package called Rclust is developed and made available to the public.
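The following sketch illustrates the random projection idea on a two-block stochastic block model, using a generic Gaussian randomized range finder with one power iteration (an assumption; not necessarily the exact algorithm studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix sampled from a two-block stochastic block model.
n, k = 400, 2
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.10, 0.02)
A = np.triu(rng.random((n, n)) < P, 1).astype(float)
A = A + A.T

# Randomized range finder with one power iteration, then Rayleigh-Ritz.
p = k + 5                                     # small oversampling
Q, _ = np.linalg.qr(A @ rng.standard_normal((n, p)))
Q, _ = np.linalg.qr(A @ Q)                    # power iteration sharpens the sketch
w, V = np.linalg.eigh(Q.T @ A @ Q)
U = Q @ V[:, np.argsort(-np.abs(w))[:k]]      # approximate leading eigenvectors

# Two communities: the sign of the second eigenvector splits the blocks.
pred = (U[:, 1] > 0).astype(int)
err = min(np.mean(pred != labels), np.mean(pred != 1 - labels))
print(f"misclassification rate: {err:.3f}")
```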
The ever-increasing number of nodes in current and future wireless communication networks brings unprecedented challenges for the allocation of the available communication resources. This is caused by the combinatorial nature of the resource allocation problems, which limits the performance of state-of-the-art techniques when the network size increases. In this paper, we take a new direction and investigate how methods from statistical physics can be used to address resource allocation problems in large networks. To this end, we propose a novel model of the wireless network based on a type of disordered physical systems called spin glasses. We show that resource allocation problems have the same structure as the problem of finding specific configurations in spin glasses. Based on this parallel, we investigate the use of the Survey Propagation method from statistical physics in the solution of resource allocation problems in wireless networks. Through numerical simulations, we show that the proposed statistical-physics-based resource allocation algorithm is a promising tool for the efficient allocation of communication resources in large wireless communication networks. Given a fixed number of resources, we are able to serve a larger number of nodes, compared to state-of-the-art reference schemes, without introducing more interference into the system.
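To illustrate the mapping, the toy sketch below encodes channel assignment on an assumed random interference graph as an antiferromagnetic Potts-like spin-glass energy and minimizes it with simulated annealing, a much simpler heuristic than the Survey Propagation method used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random interference graph: an edge (i, j) means nodes i and j interfere
# whenever they are assigned the same channel (resource).
n_nodes, n_channels, p_edge = 60, 3, 0.1
adj = np.triu(rng.random((n_nodes, n_nodes)) < p_edge, 1)
edges = np.argwhere(adj)

def energy(s):
    # Potts-like Hamiltonian: +1 for every interfering pair on one channel.
    return int(np.sum(s[edges[:, 0]] == s[edges[:, 1]]))

s = rng.integers(n_channels, size=n_nodes)    # "spins" = channel assignments
e0 = energy(s)
T = 2.0
for sweep in range(400):
    for i in rng.permutation(n_nodes):
        old = s[i]
        s[i] = rng.integers(n_channels)       # propose a new channel
        nbrs = np.concatenate([edges[edges[:, 0] == i, 1],
                               edges[edges[:, 1] == i, 0]])
        dE = np.sum(s[nbrs] == s[i]) - np.sum(s[nbrs] == old)
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            s[i] = old                        # Metropolis rejection
    T *= 0.99                                 # cooling schedule

print("interfering pairs before/after:", e0, energy(s))
```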
A support structure is required to successfully create structural parts in the powder bed fusion process for additive manufacturing. In this study, we present the topology optimization of a support structure that improves heat dissipation in the building process. First, we construct a numerical method that obtains the temperature field in the building process, represented as a transient heat conduction phenomenon with a volumetric heat flux. Next, we formulate an optimization problem for maximizing heat dissipation and develop an optimization algorithm that incorporates level-set-based topology optimization. The sensitivity of the objective function is derived using the adjoint variable method. Finally, several numerical examples are provided to demonstrate the effectiveness and validity of the proposed method.
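A minimal sketch of the forward analysis only: explicit finite differences for 1D transient heat conduction with a volumetric heat flux in a "scanned" region, accumulating an illustrative heat-dissipation measure. Geometry, material data, and the dissipation measure are assumptions; the level-set optimization and adjoint sensitivity are not reproduced here.

```python
import numpy as np

# 1D rod, explicit finite differences: dT/dt = alpha * d2T/dx2 + q/(rho*c).
nx, length = 101, 0.01            # nodes, domain length [m]
dx = length / (nx - 1)
alpha = 1e-5                      # thermal diffusivity [m^2/s]
rho_c = 4.0e6                     # volumetric heat capacity [J/(m^3 K)]
cond = alpha * rho_c              # conductivity k = alpha * rho * c [W/(m K)]
dt = 0.4 * dx**2 / alpha          # stable explicit time step
q = np.zeros(nx)
q[40:60] = 1e8                    # volumetric heat flux in the scanned region [W/m^3]

T = np.zeros(nx)                  # temperature rise above ambient
dissipated = 0.0
for step in range(2000):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T += dt * (alpha * lap + q / rho_c)
    T[0] = T[-1] = 0.0            # ends held at ambient (base plate / support)
    # Illustrative dissipation measure: heat conducted out through the ends.
    dissipated += dt * cond * ((T[1] - T[0]) + (T[-2] - T[-1])) / dx

print(f"peak temperature rise: {T.max():.1f} K, heat out: {dissipated:.1f} J/m^2")
```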
In a wide range of practical problems, such as forming operations and impact tests, assuming that one of the contacting bodies is rigid is an excellent approximation to the physical phenomenon. In this work, the well-established dual mortar method is adopted to enforce interface constraints in the finite deformation frictionless contact of rigid and deformable bodies. The efficiency of the nonlinear contact algorithm proposed here is based on two main contributions. Firstly, a variational formulation of the method using the so-called Petrov-Galerkin scheme is investigated, as it unlocks a significant simplification by removing the need to explicitly evaluate the dual basis functions. The corresponding first-order dual mortar interpolation is presented in detail. Particular focus is then placed on the extension to second-order interpolation by employing a piecewise linear interpolation scheme, which critically retains the geometrical information of the finite element mesh. Secondly, a new definition for the nodal orthonormal moving frame attached to each contact node is suggested. It reduces the geometrical coupling between the nodes and consequently decreases the stiffness matrix bandwidth. The proposed contributions decrease the computational complexity of dual mortar methods for rigid/deformable interaction, especially in the three-dimensional setting, while preserving accuracy and robustness.
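As a small illustration of one ingredient, the nodal orthonormal moving frame, here is one standard construction of an orthonormal triad from a nodal normal vector (an assumption for illustration; not necessarily the paper's new definition):

```python
import numpy as np

def nodal_frame(normal):
    """Orthonormal frame (n, t1, t2) built from a nodal normal vector."""
    n = normal / np.linalg.norm(normal)
    # Pick the coordinate axis least aligned with n to avoid degeneracy.
    helper = np.eye(3)[np.argmin(np.abs(n))]
    t1 = np.cross(n, helper)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)          # already unit length
    return n, t1, t2

n, t1, t2 = nodal_frame(np.array([0.3, 0.4, 0.866]))
F = np.column_stack([n, t1, t2])
assert np.allclose(F.T @ F, np.eye(3))   # orthonormality check
```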
We study the dihedral multi-reference alignment problem of estimating the orbit of a signal from multiple noisy observations of the signal acted on by random elements of the dihedral group. We show that if the group elements are drawn from a generic distribution, the orbit of a generic signal is uniquely determined from the second moment of the observations. This implies that the optimal estimation rate in the high noise regime is proportional to the square of the variance of the noise. This is the first result of this type for multi-reference alignment over a non-abelian group with a non-uniform distribution of group elements. Based on tools from invariant theory and algebraic geometry, we also delineate conditions for unique orbit recovery for multi-reference alignment models over finite groups (namely, when the dihedral group is replaced by a general finite group) when the group elements are drawn from a generic distribution. Finally, we design and study numerically three computational frameworks for estimating the signal based on group synchronization, expectation-maximization, and the method of moments.
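The sketch below numerically checks the key statistic behind the method of moments: the empirical second moment of noisy observations of a signal acted on by random dihedral group elements, drawn from an assumed non-uniform distribution, concentrates around its population counterpart (including the noise bias). It verifies the moment only; it does not implement the recovery frameworks.

```python
import numpy as np

rng = np.random.default_rng(0)
L, sigma, n_obs = 16, 0.5, 100_000
x = rng.standard_normal(L)                       # ground-truth signal

def dihedral_act(x, shift, reflect):
    """Apply a dihedral group element: optional reflection, then cyclic shift."""
    y = x[::-1] if reflect else x
    return np.roll(y, shift)

# Generic (non-uniform) distribution over the 2L dihedral group elements.
probs = rng.random(2 * L)
probs /= probs.sum()

# Population second moment: sum_g p_g (g.x)(g.x)^T plus the noise bias.
M_pop = sigma**2 * np.eye(L)
for i, p in enumerate(probs):
    gx = dihedral_act(x, i % L, i >= L)
    M_pop += p * np.outer(gx, gx)

# Empirical second moment from noisy observations y = g.x + noise.
g_idx = rng.choice(2 * L, size=n_obs, p=probs)
M_emp = np.zeros((L, L))
for i in g_idx:
    y = dihedral_act(x, i % L, i >= L) + sigma * rng.standard_normal(L)
    M_emp += np.outer(y, y)
M_emp /= n_obs

print("relative error of the second moment:",
      np.linalg.norm(M_emp - M_pop) / np.linalg.norm(M_pop))
```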
Large sparse linear systems of equations are ubiquitous in science and engineering, such as those arising from discretizations of partial differential equations. Algebraic multigrid (AMG) methods are among the most common methods for solving such linear systems, with an extensive body of underlying mathematical theory. A system of linear equations defines a graph on the set of unknowns, and each level of a multigrid solver requires the selection of an appropriate coarse graph along with restriction and interpolation operators that map to and from the coarse representation. The efficiency of the multigrid solver depends critically on this selection, and many selection methods have been developed over the years. Recently, it has been demonstrated that it is possible to directly learn the AMG interpolation and restriction operators, given a coarse graph selection. In this paper, we consider the complementary problem of learning to coarsen graphs for a multigrid solver, a necessary step in developing fully learnable AMG methods. We propose a method using a reinforcement learning (RL) agent based on graph neural networks (GNNs), which can learn to perform graph coarsening on small planar training graphs and then be applied to unstructured large planar graphs, assuming bounded node degree. We demonstrate that this method can produce better coarse graphs than existing algorithms, even as the graph size increases and other properties of the graph are varied. We also propose an efficient inference procedure for performing graph coarsening that runs in time linear in the graph size.
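For context, here is a minimal sketch of a classical baseline such a learned agent competes with: greedy heavy-edge coarsening that merges strongly coupled unknowns and builds a piecewise-constant interpolation operator P, so the coarse operator is P^T A P. This is a standard aggregation scheme, not the paper's RL/GNN method.

```python
import numpy as np

def heavy_edge_coarsen(A):
    """Greedy heavy-edge matching on the graph of a symmetric matrix.

    Returns each node's aggregate index and a piecewise-constant
    interpolation operator P; the coarse operator is P.T @ A @ P.
    """
    n = A.shape[0]
    agg = -np.ones(n, dtype=int)
    n_coarse = 0
    for i in range(n):
        if agg[i] >= 0:
            continue
        # Strongest unmatched off-diagonal neighbour of i.
        w = np.abs(A[i]).copy()
        w[i] = 0
        w[agg >= 0] = 0
        j = int(np.argmax(w))
        agg[i] = n_coarse
        if w[j] > 0:
            agg[j] = n_coarse       # merge the heavy edge (i, j)
        n_coarse += 1
    P = np.zeros((n, n_coarse))
    P[np.arange(n), agg] = 1.0
    return agg, P

# 1D Poisson matrix as a toy "fine" operator.
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
agg, P = heavy_edge_coarsen(A)
print("aggregates:", agg)
print("coarse operator:\n", P.T @ A @ P)
```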
In this paper, we propose an effective non-rigid object tracking method based on spatial-temporal consistent saliency detection. In contrast to most existing trackers that use a bounding box to specify the tracked target, the proposed method can extract the accurate regions of the target as tracking output, which achieves a better description of non-rigid objects while reducing background pollution of the target model. Furthermore, our model has several unique features. First, a tailored deep fully convolutional neural network (TFCN) is developed to model the local saliency prior for a given image region, which not only provides the pixel-wise outputs but also integrates the semantic information. Second, a multi-scale multi-region mechanism is proposed to generate local region saliency maps that effectively consider visual perceptions with different spatial layouts and scale variations. Subsequently, these saliency maps are fused via a weighted entropy method, resulting in a final discriminative saliency map. Finally, we present a non-rigid object tracking algorithm based on the proposed saliency detection method by utilizing a spatial-temporal consistent saliency map (STCSM) model to conduct target-background classification and using a simple fine-tuning scheme for online updating. Numerous experimental results demonstrate that the proposed algorithm achieves competitive performance in comparison with state-of-the-art methods for both saliency detection and visual tracking, especially outperforming other related trackers on the non-rigid object tracking datasets.
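A small sketch of the fusion step: combining several saliency maps with weights that favor low-entropy (more confident) maps. The specific weighting rule is one plausible reading of "weighted entropy", used here as an assumption rather than the paper's exact formula:

```python
import numpy as np

def fuse_saliency_maps(maps, eps=1e-8):
    """Fuse saliency maps in [0, 1]; low-entropy maps get higher weight."""
    weights = []
    for m in maps:
        p = m.ravel() / (m.sum() + eps)          # normalize to a distribution
        entropy = -np.sum(p * np.log(p + eps))   # high entropy = diffuse map
        weights.append(1.0 / (entropy + eps))
    weights = np.array(weights) / np.sum(weights)
    fused = sum(w * m for w, m in zip(weights, maps))
    return fused / (fused.max() + eps)           # rescale to [0, 1]

# Toy example: a sharp map and a diffuse one; the sharp map dominates.
rng = np.random.default_rng(0)
sharp = np.zeros((32, 32))
sharp[12:20, 12:20] = 1.0
diffuse = rng.random((32, 32))
fused = fuse_saliency_maps([sharp, diffuse])
print("fused peak lies in the target region:",
      fused[12:20, 12:20].mean() > fused.mean())
```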
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely, when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
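A compact sketch of the dual approach on a toy consensus problem with quadratic $f_i$ on a path graph (both illustrative assumptions): the affine constraint $L\mathbf{x} = 0$ encodes agreement, the dual gradient only requires neighbor communication, and Nesterov momentum accelerates the dual ascent:

```python
import numpy as np

# Toy consensus problem: minimize sum_i (x_i - b_i)^2 / 2  s.t.  L x = 0,
# where L is the Laplacian of a path graph; L x = 0 forces agreement, so
# the optimum sets every x_i to the average of b.
m = 10
rng = np.random.default_rng(0)
b = rng.standard_normal(m)
L = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
L[0, 0] = L[-1, -1] = 1.0

# Dual function q(lam) = min_x sum_i (x_i - b_i)^2/2 + lam^T L x has the
# closed-form minimizer x(lam) = b - L lam and gradient L x(lam), which each
# node can evaluate by exchanging values with its neighbors only.
step = 1.0 / np.linalg.norm(L, 2) ** 2    # 1 / Lipschitz constant of grad q
lam = y = np.zeros(m)
t = 1.0
for _ in range(500):
    x = b - L @ y                          # primal recovery at the momentum point
    lam_next = y + step * (L @ x)          # ascent step on the concave dual
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
    y = lam_next + (t - 1.0) / t_next * (lam_next - lam)   # Nesterov momentum
    lam, t = lam_next, t_next

print("consensus values:", b - L @ lam)
print("target average:  ", b.mean())
```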