To gain a better theoretical understanding of how evolutionary algorithms (EAs) cope with plateaus of constant fitness, we propose the $n$-dimensional Plateau$_k$ function as a natural benchmark and analyze how different variants of the $(1 + 1)$ EA optimize it. The Plateau$_k$ function has a plateau of second-best fitness in a ball of radius $k$ around the optimum. As the evolutionary algorithm, we consider the $(1 + 1)$ EA with an arbitrary unbiased mutation operator. Denoting by $\alpha$ the random number of bits flipped in an application of this operator and assuming that $\Pr[\alpha = 1]$ is at least some small sub-constant value, we show the surprising result that for all constant $k \ge 2$, the runtime $T$ follows a distribution close to the geometric one with success probability equal to the probability of flipping between $1$ and $k$ bits divided by the size of the plateau. Consequently, the expected runtime is the inverse of this quantity, and thus depends only on the probability of flipping between $1$ and $k$ bits, but not on other characteristics of the mutation operator. Our result also implies that the optimal mutation rate for standard bit mutation here is approximately $k/(en)$. Our main analysis tool is a combined analysis of the Markov chains on the search point space and on the Hamming level space, an approach that promises to be useful for other plateau problems as well.
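A minimal sketch of the setting, assuming the Plateau$_k$ formalization spelled out in the comments (the all-ones string is the unique optimum, every other point within Hamming distance $k$ of it shares the second-best value, and the rest of the cube is scored by OneMax); all function and parameter names are illustrative:

```python
import numpy as np

def plateau_k(x, k):
    """One plausible formalization of Plateau_k: the all-ones string is
    optimal, every other point within Hamming distance k of it shares the
    second-best fitness, and the rest of the cube is scored by OneMax."""
    n = len(x)
    ones = int(x.sum())
    if ones == n:
        return n + 1          # unique optimum
    if ones >= n - k:
        return n - k          # plateau of second-best fitness
    return ones               # OneMax slope toward the plateau

def one_plus_one_ea(n, k, rng, max_iters=10**7):
    """(1+1) EA with standard bit mutation at the rate k/(en) that the
    abstract identifies as approximately optimal."""
    p = k / (np.e * n)
    x = rng.integers(0, 2, n)
    fx = plateau_k(x, k)
    for t in range(1, max_iters + 1):
        flip = rng.random(n) < p
        y = np.where(flip, 1 - x, x)
        fy = plateau_k(y, k)
        if fy >= fx:          # accepting ties lets the walk traverse the plateau
            x, fx = y, fy
        if fx == n + 1:
            return t          # runtime in fitness evaluations
    return max_iters

rng = np.random.default_rng(0)
print(one_plus_one_ea(n=100, k=2, rng=rng))
```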
We present a unified analysis method that relies on the generalized cosine rule and $\phi$-convexity for online optimization in normed vector spaces, using dynamic regret as the performance metric. Combining the update rules, we start with strategy $S$ (a two-parameter variant strategy covering Optimistic-FTRL with surrogate linearized losses) and obtain $S$-I (a type-I relaxed variant of $S$) and $S$-II (a type-II relaxed variant of $S$, which is Optimistic-MD) by relaxation. The regret bounds for $S$-I and $S$-II are the tightest possible. As instantiations, the regret bounds of normalized exponentiated subgradient and of greedy/lazy projection improve on the currently known optimal results. By replacing the losses of the online game with monotone operators and extending the definition of regret, namely to regret$^n$, we extend online convex optimization to online monotone optimization, which broadens the application scope of $S$-I and $S$-II.
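As a hedged illustration of the type-II (Optimistic-MD) update in the simplest Euclidean geometry: the learner plays a point shifted by a gradient hint, then performs the projected mirror-descent correction with the observed gradient. The $\ell_2$-ball domain and all names are assumptions for illustration, not the paper's general normed-space setup:

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def optimistic_greedy_projection(grads, hints, eta=0.1, dim=5):
    """Euclidean sketch of an optimistic mirror-descent / greedy-projection
    scheme: play using the hint (a guess of the next gradient), then update
    the internal state with the gradient actually observed."""
    x_md = np.zeros(dim)
    plays = []
    for g, h in zip(grads, hints):
        plays.append(project_l2_ball(x_md - eta * h))  # optimistic play
        x_md = project_l2_ball(x_md - eta * g)         # correction step
    return plays
```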
We consider a class of submodular maximization problems in which decision-makers have limited access to the objective function. We explore scenarios where the decision-maker can observe only pairwise information, i.e., can evaluate the objective function on sets of size two. We begin with a negative result: no algorithm using only $k$-wise information can guarantee performance better than $k/n$. We present two algorithms that utilize only pairwise information about the function and characterize their performance relative to the optimal solution, with guarantees that depend on the curvature of the submodular function. Additionally, if the submodular function possesses a property called supermodularity of conditioning, then we provide a method to bound the performance based purely on pairwise information. The proposed algorithms offer significant computational speedups over a traditional greedy strategy. A by-product of our study is the introduction of two new notions of curvature, the $k$-Marginal Curvature and the $k$-Cardinality Curvature. Finally, we present experiments highlighting the performance of our proposed algorithms in terms of approximation and time complexity.
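The following is one plausible pairwise-information heuristic, sketched under the assumption that sets of size at most two can be evaluated; it is not necessarily either of the paper's two algorithms, but it illustrates how submodularity turns pairwise values into optimistic proxies for marginal gains:

```python
def pairwise_greedy(elements, f, budget):
    """Hypothetical greedy heuristic using only evaluations of f on sets of
    size at most two. By submodularity, f(S + j) - f(S) <= f({i, j}) - f({i})
    for any i in S, so the min over i in S is an optimistic upper bound on
    the true marginal gain of j."""
    singles = {j: f({j}) for j in elements}
    S = [max(elements, key=lambda j: singles[j])]
    while len(S) < budget:
        remaining = [j for j in elements if j not in S]
        proxy = {j: min(f({i, j}) - singles[i] for i in S) for j in remaining}
        S.append(max(remaining, key=proxy.get))
    return S

# Usage with a small coverage function (a canonical submodular objective):
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(pairwise_greedy(list(cover), f, budget=2))
```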
In this paper, we revisit constrained stochastic continuous submodular maximization in both offline and online settings. For each $\gamma$-weakly DR-submodular function $f$, we use a factor-revealing optimization equation to derive an optimal auxiliary function $F$ whose stationary points provide a $(1-e^{-\gamma})$-approximation to the global maximum value (denoted $OPT$) of the problem $\max_{\boldsymbol{x}\in\mathcal{C}}f(\boldsymbol{x})$. Naturally, projected (mirror) gradient ascent on this non-oblivious function achieves $(1-e^{-\gamma}-\epsilon^{2})OPT-\epsilon$ after $O(1/\epsilon^{2})$ iterations, beating the traditional $(\frac{\gamma^{2}}{1+\gamma^{2}})$-approximation of gradient ascent \citep{hassani2017gradient} for submodular maximization. Similarly, based on $F$, the classical Frank-Wolfe algorithm equipped with a variance reduction technique \citep{mokhtari2018conditional} also returns a solution with objective value larger than $(1-e^{-\gamma}-\epsilon^{2})OPT-\epsilon$ after $O(1/\epsilon^{3})$ iterations. In the online setting, we first consider adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious search, achieving a regret of $O(\sqrt{D})$ (where $D$ is the sum of the delays of the gradient feedback) against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight. Finally, extensive numerical experiments demonstrate the efficiency of our boosting methods.
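A sketch of the offline boosting idea under an assumed weighting: the non-oblivious gradient is taken to be $\nabla F(\boldsymbol{x}) \propto \mathbb{E}_{z\sim p}[\nabla f(z\boldsymbol{x})]$ with density $p(z)\propto e^{\gamma z}$ on $[0,1]$, a standard construction in the boosting literature that may match the paper's $F$ only up to a positive constant (absorbable into the stepsize):

```python
import numpy as np

def boosted_grad(grad_f, x, gamma=1.0, samples=32, rng=None):
    """Monte Carlo estimate of E_{z~p}[grad_f(z * x)] with p(z) ∝ e^{gamma*z}
    on [0, 1]; sampled by inverse CDF: z = log(1 + u*(e^gamma - 1)) / gamma.
    Assumed to agree with the paper's non-oblivious gradient up to a
    positive scaling folded into the stepsize."""
    rng = rng or np.random.default_rng()
    u = rng.random(samples)
    z = np.log1p(u * np.expm1(gamma)) / gamma
    return np.mean([grad_f(zi * x) for zi in z], axis=0)

def projected_boosted_ascent(grad_f, project, x0, eta=0.05, iters=200, gamma=1.0):
    """Projected gradient ascent on the non-oblivious surrogate."""
    x = x0
    for _ in range(iters):
        x = project(x + eta * boosted_grad(grad_f, x, gamma))
    return x
```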
Modern machine learning systems such as deep neural networks are often highly over-parameterized so that they can fit the noisy training data exactly, yet they can still achieve small test errors in practice. In this paper, we study this "benign overfitting" phenomenon of the maximum margin classifier for linear classification problems. Specifically, we consider data generated from sub-Gaussian mixtures, and provide a tight risk bound for the maximum margin linear classifier in the over-parameterized setting. Our results precisely characterize the condition under which benign overfitting can occur in linear classification problems, and improve on previous work. They also have direct implications for over-parameterized logistic regression.
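A toy experiment in the spirit of this setting, with illustrative parameters: over-parameterized mixture data fit by gradient descent on the logistic loss, which on separable data converges in direction to the max-margin classifier; one typically observes interpolation of the training set together with a small test error:

```python
import numpy as np

def max_margin_experiment(n=50, d=2000, mean_norm=5.0, seed=0):
    """Benign-overfitting toy check (parameters are illustrative): d >> n
    Gaussian mixture data; logistic-loss gradient descent serves as a
    stand-in for the max-margin classifier via its implicit bias."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(d); mu[0] = mean_norm              # cluster mean
    y = rng.choice([-1.0, 1.0], n)
    X = y[:, None] * mu + rng.standard_normal((n, d))
    w = np.zeros(d)
    for _ in range(2000):                            # logistic-loss GD
        m = np.clip(y * (X @ w), -30, 30)
        w += 0.1 * (X.T @ (y / (1 + np.exp(m)))) / n
    yt = rng.choice([-1.0, 1.0], 10 * n)             # fresh test data
    Xt = yt[:, None] * mu + rng.standard_normal((10 * n, d))
    return np.mean(y * (X @ w) <= 0), np.mean(yt * (Xt @ w) <= 0)

print(max_margin_experiment())   # train error 0 (interpolation), small test error
```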
The convex minimization of $f(\mathbf{x})+g(\mathbf{x})+h(\mathbf{A}\mathbf{x})$ over $\mathbb{R}^n$, with differentiable $f$ and a linear operator $\mathbf{A}: \mathbb{R}^n\rightarrow \mathbb{R}^m$, has been well studied in the literature. By considering the primal-dual optimality conditions of the problem, many algorithms have been proposed from different perspectives, such as monotone operator schemes and fixed-point theory. In this paper, we start with a base algorithm that reveals the connection between several algorithms such as AFBA, PD3O, and Chambolle-Pock. Then, we prove its convergence under a relaxed assumption on the linear operator and characterize the general constraint on the primal and dual stepsizes. The result improves the upper bound on the stepsizes of AFBA and indicates that Chambolle-Pock, as the special case of the base algorithm with $f=0$, can take a dual stepsize up to $4/3$ times the previously proven bound.
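For concreteness, here is the classical Chambolle-Pock iteration for the $f=0$ case, $\min_{\mathbf{x}} g(\mathbf{x})+h(\mathbf{A}\mathbf{x})$; the abstract's contribution concerns relaxing the standard stepsize condition $\tau\sigma\|\mathbf{A}\|^2\le 1$ assumed in the comments below:

```python
import numpy as np

def chambolle_pock(prox_g, prox_hconj, A, x0, y0, tau, sigma, iters=500):
    """Chambolle-Pock primal-dual iteration for min_x g(x) + h(Ax).
    prox_hconj is the prox of the convex conjugate h*, obtainable from
    prox_h via Moreau's identity. The classical requirement is
    tau * sigma * ||A||^2 <= 1; the abstract shows the dual stepsize
    bound can be pushed beyond this."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x_new = prox_g(x - tau * (A.T @ y), tau)      # primal prox step
        x_bar = 2 * x_new - x                         # extrapolation
        y = prox_hconj(y + sigma * (A @ x_bar), sigma)  # dual prox step
        x = x_new
    return x, y
```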
Approximate message passing (AMP) is a promising technique for reconstructing unknown signals in certain high-dimensional linear systems with non-Gaussian signaling. A distinguishing feature of AMP-type algorithms is that their dynamics can be rigorously described by state evolution. However, state evolution does not necessarily guarantee the convergence of the iterative algorithm. To solve the convergence problem of AMP-type algorithms in principle, this paper proposes a memory AMP (MAMP) under a sufficient statistic condition, named sufficient statistic MAMP (SS-MAMP). We show that the covariance matrices of SS-MAMP are L-banded and convergent. Given an arbitrary MAMP, we can construct an SS-MAMP by damping, which not only ensures the convergence of the MAMP but also preserves its orthogonality, i.e., its dynamics can still be rigorously described by state evolution. As a byproduct, we prove that Bayes-optimal orthogonal/vector AMP (BO-OAMP/VAMP) is an SS-MAMP. As a result, we reveal two interesting properties of BO-OAMP/VAMP for large systems: 1) its covariance matrices are L-banded and convergent, and 2) damping and memory are useless (i.e., they bring no performance improvement) for BO-OAMP/VAMP. As an example, we construct a sufficient statistic Bayes-optimal MAMP (SS-BO-MAMP), which is Bayes optimal if its state evolution has a unique fixed point, and whose MSE is not worse than that of the original BO-MAMP. Finally, simulations are provided to verify the validity and accuracy of the theoretical results.
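A small utility sketch, assuming the usual definition of an L-banded matrix (every entry equals the diagonal entry indexed by the larger of its two indices, so each L-shaped band is constant); the function name is illustrative:

```python
import numpy as np

def is_l_banded(C, tol=1e-9):
    """Check the assumed L-banded property: C[i, j] == C[max(i, j), max(i, j)]
    for all i, j, i.e., each diagonal entry together with the row segment to
    its left and the column segment above it is constant."""
    n = C.shape[0]
    d = np.diag(C)
    idx = np.maximum.outer(np.arange(n), np.arange(n))
    return np.allclose(C, d[idx], atol=tol)

# An L-banded matrix built from a decreasing variance sequence:
v = np.array([4.0, 3.0, 2.5, 2.4])
print(is_l_banded(v[np.maximum.outer(np.arange(4), np.arange(4))]))  # True
```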
This paper analyzes a two-timescale stochastic algorithm framework for bilevel optimization. Bilevel optimization is a class of problems that exhibit a two-level structure, and its goal is to minimize an outer objective function whose variables are constrained to be the optimal solution of an (inner) optimization problem. We consider the case when the inner problem is unconstrained and strongly convex, while the outer problem is constrained and has a smooth objective function. We propose a two-timescale stochastic approximation (TTSA) algorithm for tackling such a bilevel problem. In the algorithm, a stochastic gradient update with a larger step size is used for the inner problem, while a projected stochastic gradient update with a smaller step size is used for the outer problem. We analyze the convergence rates for the TTSA algorithm under various settings: when the outer problem is strongly convex (resp.~weakly convex), the TTSA algorithm finds an $\mathcal{O}(K^{-2/3})$-optimal (resp.~$\mathcal{O}(K^{-2/5})$-stationary) solution, where $K$ is the total iteration number. As an application, we show that a two-timescale natural actor-critic proximal policy optimization algorithm can be viewed as a special case of our TTSA framework. Importantly, the natural actor-critic algorithm is shown to converge at a rate of $\mathcal{O}(K^{-1/4})$ in terms of the gap in expected discounted reward compared to a global optimal policy.
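A minimal sketch of the two-timescale structure described above; the stepsize exponents are illustrative placeholders, since the paper tunes them separately for the strongly convex and weakly convex outer settings:

```python
def ttsa(grad_outer, grad_inner, project, x0, y0, K=10000, c_out=0.5, c_in=0.5):
    """Two-timescale stochastic approximation sketch for bilevel problems:
    the inner (strongly convex) variable y tracks the inner optimum with a
    larger, slower-decaying stepsize, while the outer variable x takes
    projected stochastic gradient steps with a smaller stepsize."""
    x, y = x0.copy(), y0.copy()
    for k in range(1, K + 1):
        beta = c_in / k**0.4       # larger inner stepsize (illustrative decay)
        alpha = c_out / k**0.6     # smaller outer stepsize (illustrative decay)
        y = y - beta * grad_inner(x, y)             # stochastic inner update
        x = project(x - alpha * grad_outer(x, y))   # projected outer update
    return x, y
```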
String vibration is an active field of research in acoustics. Small-amplitude vibration is often assumed, leading to simplified physical models that can be simulated efficiently. However, the inclusion of nonlinear phenomena due to larger string stretching is necessary to capture important features, and efficient numerical algorithms are currently lacking in this context. Of the available techniques, many lead to schemes that can only be solved iteratively, resulting in high computational cost and additional concerns about the existence and uniqueness of solutions. Slow and fast waves are present concurrently in the transverse and longitudinal directions of motion, adding further complications related to numerical dispersion. This work presents a linearly implicit scheme for the simulation of the geometrically exact nonlinear string model. The scheme conserves a numerical energy, expressed as a sum of quadratic terms only, and includes an auxiliary state variable that yields the nonlinear effects. The scheme treats the transverse and longitudinal waves separately, using a mixed finite difference/modal scheme for the two directions of motion, thus allowing the wave speeds to be accurately resolved at reference sample rates. Numerical experiments are presented throughout.
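As a hedged scalar analogue (the paper treats the full geometrically exact string PDE with a mixed finite difference/modal discretization), the following sketch applies the same auxiliary-variable idea to a cubic oscillator: the scheme is linearly implicit, requiring only one small linear solve per step with no iteration, and exactly conserves a numerical energy that is a sum of quadratic terms:

```python
import numpy as np

def linearly_implicit_oscillator(u0=1.0, v0=0.0, dt=0.01, steps=10000, eps=1.0):
    """Scalar sketch for u'' = -u - u^3 with potential V(u) = u^4/4 and the
    auxiliary variable psi = sqrt(2*V(u) + eps), so that the discrete energy
    (u^2 + v^2 + psi^2)/2 is quadratic and conserved exactly. Freezing the
    coefficient g = V'(u)/sqrt(2*V(u) + eps) at the known time step makes
    each update a single 3x3 linear solve."""
    u, v = u0, v0
    psi = np.sqrt(u**4 / 2 + eps)
    a = dt / 2
    for _ in range(steps):
        g = u**3 / np.sqrt(u**4 / 2 + eps)   # d(psi)/du, frozen at step n
        M = np.array([[1.0,     -a,     0.0],
                      [  a,    1.0,   a * g],
                      [0.0, -a * g,     1.0]])
        rhs = np.array([u + a * v,
                        v - a * u - a * g * psi,
                        psi + a * g * v])
        u, v, psi = np.linalg.solve(M, rhs)  # trapezoidal-type update
    return u, v, (u**2 + v**2 + psi**2) / 2  # last value is invariant over steps
```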
Implicit probabilistic models are defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, yet can be shown to be equivalent to maximum likelihood estimation under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
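One hypothetical rendering of such an estimator, for the trivially reparameterizable sampler $x=\theta+\varepsilon$: match each data point to its nearest model sample and reduce the matched distances; the sampler and all names are assumptions for illustration, not the paper's general method:

```python
import numpy as np

def nearest_sample_fit(data, theta0=0.0, n_samples=64, iters=500, lr=0.05, seed=0):
    """Nearest-sample matching sketch: draw samples from the implicit model
    x = theta + noise, match each data point to its nearest sample, and
    descend on the matched squared distance. For this simple sampler,
    d(sample)/d(theta) = 1, so gradients pass through the samples directly."""
    rng = np.random.default_rng(seed)
    theta = theta0
    for _ in range(iters):
        samples = theta + rng.standard_normal(n_samples)
        nearest = samples[np.argmin(np.abs(samples[None, :] - data[:, None]), axis=1)]
        theta -= lr * np.mean(2 * (nearest - data))  # gradient of matched distance
    return theta

data = 3.0 + np.random.default_rng(1).standard_normal(200)
print(nearest_sample_fit(data))   # approaches the data mean, 3.0
```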
In this paper, we study optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
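A toy sketch of accelerated dual ascent under the affine consensus constraint $W\mathbf{x}=0$, with quadratic local functions so the inner minimization is closed-form; each dual gradient costs one multiplication by the gossip matrix $W$, i.e., one round of communication (the matrix, functions, and stepsizes here are illustrative choices):

```python
import numpy as np

def distributed_dual_nesterov(W, b, iters=300, L=None):
    """Accelerated dual ascent for min sum_i f_i(x_i) s.t. W x = 0 (consensus),
    with the toy choice f_i(x) = (x - b_i)^2 / 2. The dual gradient is
    W @ x*(lambda), where x*(lambda) = b - W @ lambda is the closed-form local
    minimizer; W is a gossip matrix (here a graph Laplacian, ker W = span(1))."""
    n = len(b)
    L = L or np.linalg.norm(W, 2)**2             # smoothness of the dual
    lam, lam_prev = np.zeros(n), np.zeros(n)
    for k in range(iters):
        mu = lam + (k / (k + 3)) * (lam - lam_prev)   # Nesterov momentum
        x = b - W @ mu                                # local minimizations
        lam_prev = lam
        lam = mu + (1 / L) * (W @ x)                  # dual gradient step
    return b - W @ lam                                # primal iterate

A = np.eye(5, k=1) + np.eye(5, k=-1)      # path graph adjacency
W = np.diag(A.sum(1)) - A                 # its Laplacian
b = np.arange(5, dtype=float)
print(distributed_dual_nesterov(W, b))    # every node approaches mean(b) = 2.0
```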