We study the finite element approximation of the solid isotropic material with penalization (SIMP) model for the topology optimization of the compliance of a linearly elastic structure. To ensure the existence of a minimizer to the infinite-dimensional problem, we consider two popular restriction methods: $W^{1,p}$-type regularization and density filtering. Previous results prove weak(-*) convergence in the solution space of the material distribution to an unspecified minimizer of the infinite-dimensional problem. In this work, we show that, for every isolated local or global minimizer, there exists a sequence of finite element minimizers that strongly converges to the minimizer in the solution space. As a by-product, this ensures that there exists a sequence of unfiltered discretized material distributions that does not exhibit checkerboarding.
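For orientation, a generic statement of the SIMP compliance problem reads as follows; the exact function spaces, the restriction term ($W^{1,p}$-regularization or filtering), and the lower bound $\rho_{\min}$ vary between formulations, so this display is a sketch rather than the precise problem analyzed above.

```latex
% Generic SIMP compliance minimization (a sketch; restriction terms vary).
\begin{aligned}
  \min_{\rho}\quad & J(\rho) = \int_{\Omega} f \cdot u \,\mathrm{d}x
  && \text{(compliance)}\\
  \text{s.t.}\quad & -\operatorname{div}\bigl(\rho^{\,p}\,\mathbb{C}\,\varepsilon(u)\bigr) = f
  \ \text{in } \Omega, \quad p > 1,
  && \text{(penalized elasticity)}\\
  & \int_{\Omega} \rho \,\mathrm{d}x \le \gamma\,\lvert\Omega\rvert, \qquad
  0 < \rho_{\min} \le \rho \le 1 \ \text{a.e.,}
  && \text{(volume and box constraints)}
\end{aligned}
```

Density filtering replaces $\rho$ in the material law by a smoothed field $\tilde\rho = F\rho$ (e.g., convolution with a compactly supported kernel), while $W^{1,p}$-type regularization instead seeks $\rho \in W^{1,p}(\Omega)$; both restore the existence of minimizers.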
We study the problem of estimating the convex hull of the image $f(X)\subset\mathbb{R}^n$ of a compact set $X\subset\mathbb{R}^m$ with smooth boundary under a smooth function $f:\mathbb{R}^m\to\mathbb{R}^n$. Assuming that $f$ is a diffeomorphism or a submersion, we derive new bounds on the Hausdorff distance between the convex hull of $f(X)$ and the convex hull of the images $f(x_i)$ of $M$ samples $x_i$ on the boundary of $X$. When applied to the problem of geometric inference from random samples, our results give tighter and more general error bounds than the state of the art. We present applications to robust optimization, to reachability analysis of dynamical systems, and to robust trajectory optimization under bounded uncertainty.
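As a concrete illustration of the sampling scheme, the sketch below takes $X$ to be the unit disk in $\mathbb{R}^2$ and a smooth map $f$ of our own choosing (both assumptions made for illustration), estimates $\mathrm{conv}(f(X))$ from $M$ boundary samples, and measures the Hausdorff error through support functions, using the identity $d_H(\mathrm{conv}\,A, \mathrm{conv}\,B) = \sup_{\|u\|=1} |h_A(u) - h_B(u)|$ for compact convex sets.

```python
# Sketch: estimate conv(f(X)) from boundary samples of X.  X = unit disk and
# the map f below are illustrative assumptions, not the paper's examples.
import numpy as np

def f(x):
    # a smooth map R^2 -> R^2, chosen arbitrarily for illustration
    return np.stack([x[:, 0] + 0.3 * x[:, 1] ** 2,
                     x[:, 1] + 0.2 * np.sin(x[:, 0])], axis=1)

def boundary_samples(M, rng):
    theta = rng.uniform(0.0, 2.0 * np.pi, size=M)   # samples on bd(X)
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)

def support(points, dirs):
    # support function h(u) = max_a <a, u>; identical for a finite point
    # set and its convex hull, so no explicit hull computation is needed
    return (dirs @ points.T).max(axis=1)

rng = np.random.default_rng(0)
estimate = f(boundary_samples(50, rng))        # hull estimate from M = 50 points
reference = f(boundary_samples(100_000, rng))  # dense stand-in for conv f(X)

# Hausdorff distance between the two hulls = sup over unit directions of the
# support-function gap, approximated here on a fine grid of directions.
phi = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
dirs = np.stack([np.cos(phi), np.sin(phi)], axis=1)
print("Hausdorff error:",
      np.abs(support(estimate, dirs) - support(reference, dirs)).max())
```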
Information cascades in online social networks can have harmful effects; for example, the spread of rumors may trigger panic. To limit the influence of misinformation effectively and efficiently, the influence minimization (IMIN) problem has been studied in the literature: given a graph G and a seed set S, block at most b vertices so that the influence spread of the seed set is minimized. In this paper, we are the first to prove that the IMIN problem is NP-hard and hard to approximate. Due to this hardness, existing works resort to greedy solutions and use Monte Carlo simulations, but these are cost-prohibitive on large graphs since they must enumerate all candidate blockers and compute the decrease in expected spread when blocking each of them. To improve efficiency, we propose the AdvancedGreedy (AG) algorithm, based on a new graph-sampling technique that exploits the dominator-tree structure and can compute the decrease in expected spread of all candidate blockers at once. We further propose the GreedyReplace (GR) algorithm, which takes the relationships among candidate blockers into account. Extensive experiments on 8 real-life graphs demonstrate that AG and GR are faster than the state-of-the-art by up to 6 orders of magnitude, and that GR achieves better effectiveness at a time cost close to that of AG.
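For concreteness, the cost-prohibitive baseline looks roughly as follows under the independent cascade model; the graph encoding, propagation probability `p`, and simulation count `R` are our illustrative assumptions. AG and GR avoid exactly this per-candidate re-simulation.

```python
# Naive greedy baseline for IMIN: repeatedly block the vertex whose removal
# yields the largest decrease of the Monte Carlo estimate of the spread.
import random

def ic_spread(adj, seeds, blocked, p=0.1, R=1000):
    """Monte Carlo estimate of the expected spread of `seeds` in the graph
    `adj` (dict: vertex -> neighbours) with `blocked` vertices removed,
    under the independent cascade model with uniform probability p."""
    total = 0
    for _ in range(R):
        active = set(seeds) - blocked
        frontier = list(active)
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj.get(u, ()):
                    if v not in active and v not in blocked and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / R

def greedy_blockers(adj, seeds, b):
    blocked = set()
    candidates = set(adj) - set(seeds)
    for _ in range(b):
        # enumerating every candidate blocker is what makes this prohibitive
        best = min(candidates - blocked,
                   key=lambda v: ic_spread(adj, seeds, blocked | {v}))
        blocked.add(best)
    return blocked
```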
In this paper we deal with the problem of sequential testing of multiple hypotheses. The main goal is minimising the expected sample size (ESS) under restrictions on the error probabilities. We use a variant of the method of Lagrange multipliers based on the minimisation of an auxiliary objective function (the Lagrangian), defined as a weighted sum of the test characteristics of interest: the error probabilities and the ESS evaluated at some points of interest. Our definition of the Lagrangian involves the ESS evaluated at any finite number of fixed parameter points, not necessarily those representing the hypotheses. We then develop a computer-oriented method for minimising the Lagrangian that provides, depending on the specific choice of parameter points, optimal tests in various concrete settings, such as the Bayesian and Kiefer-Weiss settings. To exemplify the proposed methods, for the particular case of sampling from a Bernoulli population we develop a set of computer algorithms for designing sequential tests that minimise the Lagrangian, and for the numerical evaluation of test characteristics such as the error probabilities, the ESS, and other related quantities. For the Bernoulli model, we carry out a series of computer evaluations related to the optimality of sequential multi-hypothesis tests in the particular case of three hypotheses, together with a numerical comparison with the matrix sequential probability ratio test.
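Concretely, with error probabilities $\alpha_i(T)$ of a test $T$ and the ESS evaluated at user-chosen parameter points $\theta_1,\dots,\theta_K$, the Lagrangian has the form below; the notation is ours, chosen to match the description above, and the weights and points are user choices.

```latex
% Lagrangian as a weighted sum of test characteristics (notation ours).
L(T) \;=\; \sum_{j=1}^{K} c_j \,\mathrm{ESS}_{\theta_j}(T)
      \;+\; \sum_{i} \lambda_i\, \alpha_i(T),
\qquad c_j \ge 0,\ \lambda_i \ge 0 .
```

Placing the weight on points representing the hypotheses yields a Bayesian-type objective, while a single point lying between the hypotheses corresponds to the (modified) Kiefer-Weiss setting.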
Robotics research has increasingly focused on cooperative multi-agent problems, in which agents must work together and communicate to achieve a shared objective. To tackle this challenge, we explore imitation learning algorithms: these methods learn a controller by observing the demonstrations of an expert, such as a centralised omniscient controller that can perceive the entire environment, including the state and observations of all agents. Performing tasks with complete knowledge of the state of a system is relatively easy, but centralised solutions might not be feasible in real scenarios, since agents have no direct access to the state, only to their own observations. To overcome this issue, we train end-to-end neural networks on demonstrations from the omniscient centralised controller; each network takes as input an agent's local observations, i.e., its sensor readings and the communications received, and produces as output the action to be performed and the communication to be transmitted. This study concentrates on two cooperative tasks solved by a distributed controller: distributing the robots evenly in space and colouring them based on their position relative to others. While an explicit exchange of messages between the agents is required to solve the second task, in the first a communication protocol is unnecessary, although it may improve performance. The experiments are run in Enki, a high-performance open-source simulator for planar robots that provides collision detection and limited physics support for robots moving on a flat surface, and can simulate groups of robots hundreds of times faster than real time. The results show that applying a communication strategy improves the performance of the distributed model, letting it decide which actions to take almost as precisely and quickly as the expert controller.
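A minimal sketch of such a network follows, assuming fixed-size sensor and message vectors; all sizes below are illustrative placeholders, not the ones used in the experiments.

```python
# Distributed controller: local observation (sensors + received message)
# in, (action, outgoing message) out.  Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DistributedController(nn.Module):
    def __init__(self, n_sensors=7, comm_size=1, action_size=2, hidden=64):
        super().__init__()
        self.action_size = action_size
        self.net = nn.Sequential(
            nn.Linear(n_sensors + comm_size, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, action_size + comm_size),
        )

    def forward(self, sensors, comm_in):
        out = self.net(torch.cat([sensors, comm_in], dim=-1))
        # first action_size entries drive the robot; the remainder is the
        # message broadcast to neighbouring agents at the next time step
        return out[..., :self.action_size], out[..., self.action_size:]

# Imitation learning then regresses the predicted action on the expert's,
# e.g. loss = nn.functional.mse_loss(pred_action, expert_action).
```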
Modeling and shaping how information spreads through a network is a major research topic in network analysis. While the focus was initially mostly on efficiency, fairness criteria have recently been taken into account in this setting. Most work has focused on maximin criteria, however, under which different groups can still receive very different shares of information. In this work we propose to consider fairness as a notion to be guaranteed by an algorithm rather than as a criterion to be maximized. To this end, we propose three optimization problems that aim at maximizing the overall spread while enforcing strict levels of demographic parity fairness via constraints (either ex-post or ex-ante). The level of fairness thus becomes a user choice rather than a property to be observed upon output. We study this setting from various perspectives. First, we prove that the cost of introducing demographic parity can be high in terms of both overall spread and computational complexity: the price of fairness may be unbounded for all three problems, and optimal solutions are hard to compute, in some cases even approximately or when fairness constraints may be violated. For one of our problems, we nevertheless design an algorithm with both a constant approximation factor and a constant fairness violation. We also give two heuristics that allow the user to choose the tolerated fairness violation. By means of an extensive experimental study, we show that our algorithms perform well in practice, that is, they achieve the best demographic parity fairness values. For certain instances we even obtain an overall spread comparable to that of the most efficient algorithms that come without any fairness guarantee, indicating that the empirical price of fairness may actually be small when using our algorithms.
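As a sketch of what such a constrained problem may look like (notation ours, stated in expectation; the ex-post and ex-ante variants above differ in whether the constraint binds realized outcomes or probabilities): with communities $C_1,\dots,C_k$, overall spread $\sigma(S)$, and $\sigma_{C_i}(S)$ the expected number of nodes of $C_i$ reached from seed set $S$,

```latex
\max_{S \,:\, |S| \le \beta} \ \sigma(S)
\qquad \text{s.t.} \qquad
\left| \frac{\sigma_{C_i}(S)}{|C_i|} - \frac{\sigma_{C_j}(S)}{|C_j|} \right|
\;\le\; \varepsilon
\quad \text{for all } i, j,
```

where the tolerance $\varepsilon$, the allowed fairness violation, is chosen by the user.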
Improvements in technology lead to the increasing availability of large data sets, which makes the need for data reduction and informative subsamples ever more important. In this paper we construct $D$-optimal subsampling designs for polynomial regression in one covariate for invariant distributions of the covariate. We study quadratic regression more closely for specific distributions. In particular, we characterize the shape of the resulting optimal subsampling designs and the effect of the subsample size on the design. To illustrate the advantage of the optimal subsampling designs, we examine the efficiency of uniform random subsampling.
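The following sketch illustrates the D-criterion gap that such efficiency comparisons quantify, for quadratic regression with a uniform covariate; the "tails + center" rule below is a plausible stand-in chosen for illustration, not the optimal design constructed in the paper.

```python
# D-criterion for quadratic regression y = b0 + b1*x + b2*x^2: compare the
# information determinant of a uniform random subsample against a subsample
# keeping the most extreme and most central covariate values (illustrative).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=100_000)   # full data, symmetric covariate
k = 1_000                                   # subsample size

def info_det(xs):
    F = np.stack([np.ones_like(xs), xs, xs ** 2], axis=1)  # regression functions
    return np.linalg.det(F.T @ F / len(xs))   # normalized information matrix

uniform = rng.choice(x, size=k, replace=False)
order = np.argsort(np.abs(x))
tails_center = np.concatenate([x[order[: k // 3]],         # central values
                               x[order[-2 * (k // 3):]]])   # extreme values

for name, xs in [("uniform random", uniform), ("tails + center", tails_center)]:
    print(f"{name:15s}  det(M) = {info_det(xs):.3e}")
# D-efficiency of random subsampling is the (det ratio)^(1/3) for 3 parameters.
```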
Consider the problem of solving a system of linear algebraic equations $Ax=b$ with a real symmetric positive definite matrix $A$ using the conjugate gradient (CG) method. To stop the algorithm at the appropriate moment, it is important to monitor the quality of the approximate solution. One of the most relevant quantities for this purpose is the $A$-norm of the error. This quantity cannot be computed easily; however, it can be estimated. In this paper we discuss and analyse the behaviour of the Gauss-Radau upper bound on the $A$-norm of the error, based on viewing CG as a procedure for approximating a certain Riemann-Stieltjes integral. This upper bound depends on a prescribed underestimate $\mu$ of the smallest eigenvalue of $A$. We concentrate on explaining a phenomenon observed during computations: in later CG iterations, the upper bound loses its accuracy and becomes almost independent of $\mu$. We construct a model problem that is used to demonstrate and study the behaviour of the upper bound in dependence on $\mu$, and we develop formulas that are helpful in understanding this behaviour. We show that the phenomenon mentioned above is closely related to the convergence of the smallest Ritz value to the smallest eigenvalue of $A$: it occurs when the smallest Ritz value is a better approximation of the smallest eigenvalue than the prescribed underestimate $\mu$. We also suggest an adaptive strategy for improving the accuracy of the upper bound in the earlier iterations.
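The quadrature view behind the bound can be summarized as follows (standard facts from the moments-and-quadrature framework of Golub and Meurant):

```latex
% A-norm of the error as a Riemann-Stieltjes integral (standard identities).
\|x - x_k\|_A^2 \;=\; r_k^T A^{-1} r_k
  \;=\; \|r_k\|^2 \int \lambda^{-1}\, \mathrm{d}\omega_k(\lambda),
\qquad
\|x - x_k\|_A^2 - \|x - x_{k+1}\|_A^2 \;=\; \gamma_k \|r_k\|^2,
```

where $\omega_k$ is the normalized spectral measure of $A$ associated with $r_k$ and $\gamma_k$ is the CG step length. The Gauss quadrature rule for the integral yields a lower bound on the squared error norm, while the Gauss-Radau rule with one node fixed at $\mu \le \lambda_{\min}(A)$ yields the upper bound discussed above; the second identity propagates these bounds across iterations.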
Prophet inequalities for reward maximization are fundamental to optimal stopping theory, with extensive applications to mechanism design and online optimization. We study the \emph{cost minimization} counterpart of the classical prophet inequality: a decision maker faces a sequence of costs $X_1, X_2, \dots, X_n$ drawn from known distributions in an online manner and \emph{must} ``stop'' at some point, taking the last cost seen. The goal is to compete with a ``prophet'' who sees the realizations of all the $X_i$ upfront and always selects the minimum, obtaining a cost of $\mathbb{E}[\min_i X_i]$. If the $X_i$ are not identically distributed, no strategy can achieve a bounded approximation, even for random arrival order and $n = 2$. This leads us to consider the case where the $X_i$ are independent and identically distributed (i.i.d.). For the i.i.d. case, we show that if the distribution satisfies a mild condition, the optimal stopping strategy achieves a (distribution-dependent) constant-factor approximation to the prophet's cost; moreover, for MHR distributions, this constant is at most $2$. All our results are tight. We also exhibit an example distribution that does not satisfy the condition and for which the competitive ratio of any algorithm is infinite. Turning our attention to single-threshold strategies, we design a threshold that achieves an $O\left(\operatorname{polylog} n\right)$-factor approximation, where the exponent in the logarithmic factor is a distribution-dependent constant, and we show a matching lower bound. Finally, we note that our results can be used to design approximately optimal posted-price-style mechanisms for procurement auctions, which may be of independent interest. Our techniques utilize the \emph{hazard rate} of the distribution in a novel way, allowing for a fine-grained analysis that could find further applications in prophet inequalities.
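A quick Monte Carlo check of the single-threshold idea: accept the first cost at or below a threshold $\tau$, otherwise take the last one. The $1/n$-quantile threshold below is our illustrative choice, not the threshold constructed in the paper.

```python
# Empirical competitive ratio E[ALG] / E[min_i X_i] of a single-threshold
# strategy on i.i.d. costs; the quantile threshold is an illustrative choice.
import numpy as np

rng = np.random.default_rng(2)
n, trials = 20, 200_000

def run(sample):
    X = sample()                                   # (trials, n) i.i.d. costs
    tau = np.quantile(X[:, 0], 1.0 / n)            # threshold (illustrative)
    hit = X <= tau
    # index of the first cost below tau; if none, we must take the last one
    first = np.where(hit.any(axis=1), hit.argmax(axis=1), n - 1)
    alg = X[np.arange(trials), first]
    return alg.mean() / X.min(axis=1).mean()

print("exponential costs:", run(lambda: rng.exponential(size=(trials, n))))
print("uniform costs    :", run(lambda: rng.uniform(size=(trials, n))))
```

Both example distributions are MHR, so the optimal stopping strategy would be 2-competitive; the simple threshold is naturally somewhat worse.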
We study the scaling limits of stochastic gradient descent (SGD) with constant step-size in the high-dimensional regime. We prove limit theorems for the trajectories of summary statistics (i.e., finite-dimensional functions) of SGD as the dimension goes to infinity. Our approach allows one to choose the summary statistics that are tracked, the initialization, and the step-size; it yields both ballistic (ODE) and diffusive (SDE) limits, with the limit depending dramatically on these choices. We exhibit a critical scaling regime for the step-size: below it, the effective ballistic dynamics matches gradient flow for the population loss, while at it, a new correction term appears that changes the phase diagram. Around the fixed points of this effective dynamics, the corresponding diffusive limits can be quite complex and even degenerate. We demonstrate our approach on popular examples, including estimation for spiked matrix and tensor models and classification via two-layer networks for binary and XOR-type Gaussian mixture models. These examples exhibit surprising phenomena, including multimodal timescales to convergence as well as convergence to sub-optimal solutions with probability bounded away from zero from random (e.g., Gaussian) initializations. At the same time, we demonstrate the benefit of overparametrization by showing that the latter probability goes to zero as the width of the second layer grows.
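A simulation in the spirit of these results, for a toy teacher-student linear model of our own choosing (far simpler than the spiked-tensor and two-layer examples above): online SGD with constant step-size, tracking the overlap with the ground truth as the dimension grows.

```python
# Online SGD for a linear teacher-student model, tracking the summary
# statistic m_k = <x_k, v>/d.  Model, loss and scalings are illustrative.
import numpy as np

def overlap_trajectory(d, steps, delta, rng):
    v = np.ones(d)                          # teacher vector, |v|^2 = d
    x = rng.normal(size=d)                  # random (Gaussian) initialization
    m = np.empty(steps)
    for k in range(steps):
        a = rng.normal(size=d) / np.sqrt(d)      # fresh sample each step
        y = a @ v                                 # noiseless teacher label
        x -= delta * (a @ x - y) * a              # SGD step on squared loss
        m[k] = (x @ v) / d                        # tracked summary statistic
    return m

rng = np.random.default_rng(3)
for d in (100, 1_000, 10_000):
    m = overlap_trajectory(d, steps=8 * d, delta=0.5, rng=rng)
    print(f"d={d:6d}  overlap after 8*d steps: {m[-1]:.3f}")
# As d grows, the trajectory of m (on the time scale of d steps) concentrates
# around a deterministic curve, the ballistic (ODE) limit.
```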
The extragradient method has recently gained increasing attention due to its convergence behavior on smooth games. In $n$-player differentiable games, the eigenvalues of the Jacobian of the vector field are distributed over the complex plane; thus, compared to classical (i.e., single-player) minimization, games exhibit more convoluted dynamics, in which the extragradient method succeeds while the simple gradient method can fail. Yet, in this work, instead of focusing on a specific problem class, we follow a reverse path: starting from the momentum extragradient method as the selected optimizer and using polynomial-based analyses, we identify problem subclasses in which the use of momentum within extragradient leads to optimal performance. Depending on the hyperparameter setup, we show that the momentum extragradient method exhibits three different modes of convergence, namely when the eigenvalues are distributed $(i)$ on the real line, $(ii)$ on the real line together with complex conjugate pairs, and $(iii)$ only as complex conjugate pairs. We then derive the optimal hyperparameters for each case and show that the method achieves an accelerated convergence rate.
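For reference, the iteration in question can be written as follows (our notation), with extrapolation step $\gamma$, step-size $\eta$, and momentum parameter $m$ acting on the game vector field $F$:

```latex
% Momentum extragradient on a vector field F (notation illustrative):
x_{k+1/2} \;=\; x_k - \gamma\, F(x_k), \qquad
x_{k+1} \;=\; x_k - \eta\, F(x_{k+1/2}) \;+\; m\,\bigl(x_k - x_{k-1}\bigr).
```

The three convergence modes above correspond to where the Jacobian eigenvalues that this recursion must damp lie in the complex plane.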