We are interested in the optimization of convex domains under a PDE constraint. Because convex domains in $\mathbb{R}^3$ are difficult to approximate numerically, we restrict to rotationally symmetric domains, which reduces shape optimization problems to a two-dimensional setting. For the optimization of an eigenvalue arising in a problem of optimal insulation, we prove the existence of an optimal domain. We propose an algorithm that applies to general shape optimization problems under the geometric constraints of convexity and rotational symmetry, and we discuss the approximate optimal domains obtained for the eigenvalue problem in optimal insulation.
In this paper we apply methods originating in complexity theory to some problems of approximation theory. We observe that the construction of Alman and Williams that disproves the rigidity of Walsh--Hadamard matrices provides good $\ell_p$-approximation for $p<2$. It follows that the first $n$ functions of the Walsh system can be approximated with an error $n^{-\delta}$ by a linear space of dimension $n^{1-\delta}$: $$ d_{n^{1-\delta}}(\{w_1,\ldots,w_n\}, L_p[0,1]) \le n^{-\delta},\quad p\in[1,2),\;\delta=\delta(p)>0. $$ We do not know if this is possible for the trigonometric system. We show that the algebraic method of Alon--Frankl--R\"odl for bounding the number of low-signum-rank matrices works for tensors: almost all signum-tensors have large signum-rank and cannot be $\ell_1$-approximated by low-rank tensors. This implies lower bounds for $\Theta_m$ -- the error of $m$-term approximation of multivariate functions by sums of tensor products $u^1(x_1)\cdots u^d(x_d)$. In particular, for the set of trigonometric polynomials with spectrum in $\prod_{j=1}^d[-n_j,n_j]$ and of norm $\|t\|_\infty\le 1$ we have $$ \Theta_m(\mathcal T(n_1,\ldots,n_d)_\infty,L_1[-\pi,\pi]^d) \ge c_1(d)>0,\quad m\le c_2(d)\frac{\prod n_j}{\max\{n_j\}}. $$ Sharp bounds follow for classes of dominated mixed smoothness: $$ \Theta_m(W^{(r,r,\ldots,r)}_p,L_q[0,1]^d)\asymp m^{-\frac{rd}{d-1}},\quad 2\le p\le\infty,\; 1\le q\le 2. $$
We introduce a new method for Estimation of Signal Parameters based on Iterative Rational Approximation (ESPIRA) for sparse exponential sums. Our algorithm uses the AAA algorithm for rational approximation of the discrete Fourier transform of the given equidistant signal values. We show that ESPIRA can be interpreted as a matrix pencil method applied to Loewner matrices. These Loewner matrices are closely connected with the Hankel matrices which are usually employed for signal recovery. Due to the construction of the Loewner matrices via an adaptive selection of index sets, the matrix pencil method is stabilized. ESPIRA achieves similar recovery results for exact data as ESPRIT and the matrix pencil method but with less computational effort. Moreover, ESPIRA strongly outperforms ESPRIT and the matrix pencil method for noisy data and for signal approximation by short exponential sums.
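ESPIRA itself is built on AAA rational approximation and Loewner matrices; for orientation, the classical Hankel-based matrix pencil approach it is compared against can be sketched in a few lines. This is an illustrative toy for exact, noise-free data with distinct nodes, not the ESPIRA algorithm, and the function name is hypothetical:

```python
import numpy as np

def matrix_pencil_nodes(f, n):
    """Recover the n nodes z_j of f(k) = sum_j c_j z_j^k from the samples
    f(0), ..., f(2n-1) via the classical Hankel matrix pencil: the nodes
    are the eigenvalues of H0^{-1} H1 for shifted Hankel matrices H0, H1."""
    H0 = np.array([[f[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[f[i + j + 1] for j in range(n)] for i in range(n)])
    return np.linalg.eigvals(np.linalg.solve(H0, H1))

# Example: two real exponential terms with nodes 0.9 and 0.5.
z_true = np.array([0.9, 0.5])
c = np.array([1.0, 2.0])
samples = np.array([np.sum(c * z_true**k) for k in range(4)])
z_est = np.sort(matrix_pencil_nodes(samples, 2).real)
```

For noisy data the Hankel matrices are rectangular and rank-truncated via an SVD, which is exactly where the stabilized Loewner construction of ESPIRA differs.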
Optimal experimental design (OED) plays an important role in identifying uncertainty from limited experimental data. In many applications, we seek to minimize the uncertainty of a predicted quantity of interest (QoI) based on the solution of the inverse problem, rather than of the inversion parameter itself. For these scenarios, we develop an efficient method for goal-oriented optimal experimental design (GOOED) for large-scale Bayesian linear inverse problems that finds sensor locations maximizing the expected information gain (EIG) for a predicted QoI. By deriving a new formula for the EIG and exploiting the low-rank structure of two appropriate operators, we are able to employ an online-offline decomposition scheme and a swapping greedy algorithm to maximize the EIG at a cost, measured in model solutions, that is independent of the problem dimensions. We provide a detailed error analysis of the approximate EIG, and demonstrate the efficiency, accuracy, and both data- and parameter-dimension independence of the proposed algorithm on a contaminant transport inverse problem with an infinite-dimensional parameter field.
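The flavor of greedy sensor selection can be illustrated with a much simpler stand-in: a forward-greedy loop maximizing the Bayesian D-optimal criterion $\log\det(I + A_S A_S^\top)$ for a linear-Gaussian model with unit noise and identity prior. This is a hedged sketch of the general idea only, not the paper's swapping greedy algorithm or its goal-oriented EIG formula; the function name and toy operator are hypothetical:

```python
import numpy as np

def greedy_doptimal(A, k):
    """Pick k rows of the candidate observation operator A greedily,
    maximizing log det(I + A_S A_S^T), a D-optimal proxy for expected
    information gain in a linear-Gaussian model with unit noise and
    identity prior covariance."""
    chosen = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(A.shape[0]):
            if i in chosen:
                continue
            S = A[chosen + [i]]
            _, val = np.linalg.slogdet(np.eye(S.shape[0]) + S @ S.T)
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen

# Toy candidate operator: three possible sensors of decreasing sensitivity.
sensors = greedy_doptimal(np.diag([3.0, 1.0, 0.1]), 2)
```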
In this paper, we consider downlink low Earth orbit (LEO) satellite communication systems in which multiple LEO satellites are distributed uniformly over a sphere at a certain altitude according to a homogeneous binomial point process (BPP). Based on the properties of the BPP, we analyze the distance distributions and the possible cases for the serving satellite. We derive the exact outage probability analytically, and obtain an approximate expression using the Poisson limit theorem. With these expressions, we formulate the system throughput maximization problem under satellite-visibility and outage constraints. To solve this problem, we reformulate it over bounded feasible sets and propose an iterative algorithm that obtains near-optimal solutions. Simulation results match the derived exact expressions for the outage probability and system throughput, and the analytical results from the approximate expressions are close to the exact ones. The proposed throughput-maximization algorithm is also shown to perform very close to the optimal performance obtained by a two-dimensional exhaustive search.
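The role of the Poisson limit theorem here can be checked numerically in one line each: the number of satellites falling in a visibility cap of probability $p$ is Binomial$(N,p)$ under a BPP, and for large $N$ and small $p$ it is well approximated by Poisson$(Np)$. The constellation size and cap fraction below are hypothetical, chosen only to illustrate the quality of the approximation:

```python
import math

# N satellites each land in the visibility cap with probability p, so the
# probability of seeing no satellite (a simple outage proxy) is (1-p)^N;
# the Poisson limit theorem replaces Binomial(N, p) by Poisson(N * p).
N, p = 200, 0.01                       # hypothetical constellation / cap size
exact_outage = (1 - p) ** N            # exact binomial "empty cap" probability
poisson_outage = math.exp(-N * p)      # Poisson-limit approximation
```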
We give two approximation algorithms for the Stochastic Boolean Function Evaluation (SBFE) problem for symmetric Boolean functions. The first is an $O(\log n)$-approximation algorithm based on the submodular goal-value approach of Deshpande, Hellerstein, and Kletenik. Our second, simpler algorithm builds on the algorithm of Salloum, Breuer, and Ben-Dov for the SBFE problem for $k$-of-$n$ functions. It achieves a $(B-1)$-approximation factor, where $B$ is the number of blocks of 0's and 1's in the standard vector representation of the symmetric Boolean function. As part of the design of the first algorithm, we prove that the goal value of any symmetric Boolean function is less than $n(n+1)/2$. Finally, we give an example showing that for symmetric Boolean functions, the minimum expected verification cost and the minimum expected evaluation cost are not necessarily equal. This contrasts with a previous result of Das, Jafarpour, Orlitsky, Pan, and Suresh, which showed that equality holds in the unit-cost case.
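The block parameter $B$ is easy to make concrete: a symmetric Boolean function is determined by its value vector, whose entry $k$ gives the function value on inputs with exactly $k$ ones, and $B$ counts the maximal constant runs of that vector. A minimal sketch (the helper name is hypothetical):

```python
def num_blocks(values):
    """Count the maximal constant runs (blocks) in the value vector of a
    symmetric Boolean function, where values[k] is the function value on
    inputs with exactly k ones; the second algorithm above is a
    (B-1)-approximation in this quantity."""
    return 1 + sum(values[i] != values[i + 1] for i in range(len(values) - 1))

# Majority on 3 variables has value vector 0,0,1,1 -> B = 2,
# so the (B-1) factor is 1, i.e. the algorithm is optimal there.
b_majority = num_blocks([0, 0, 1, 1])
```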
The Sliced-Wasserstein distance (SW) is increasingly used in machine learning as an alternative to the Wasserstein distance, offering significant computational and statistical benefits. Since it is defined as an expectation over random projections, SW is commonly approximated by Monte Carlo. We adopt a new perspective to approximate SW by exploiting the concentration of measure phenomenon: under mild assumptions, one-dimensional projections of a high-dimensional random vector are approximately Gaussian. Based on this observation, we develop a simple deterministic approximation of SW. Our method does not require sampling random projections and is therefore both accurate and easy to use compared to the usual Monte Carlo approximation. We derive nonasymptotic guarantees for our approach and show that the approximation error vanishes as the dimension increases, under a weak dependence condition on the data distribution. We validate our theoretical findings on synthetic datasets and illustrate the proposed approximation on a generative modeling problem.
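The Monte Carlo baseline that this deterministic approximation aims to replace can be sketched directly for empirical distributions: draw random directions, project both samples, and average one-dimensional Wasserstein distances. This illustrates the standard estimator, not the paper's Gaussian-concentration method, and the function name is hypothetical:

```python
import numpy as np

def sliced_wasserstein_mc(X, Y, n_proj=500, p=2, rng=None):
    """Monte Carlo estimate of the Sliced-Wasserstein distance between two
    empirical distributions with equally many samples: average the 1D
    p-Wasserstein distances over random projection directions."""
    rng = np.random.default_rng(rng)
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform directions
    # For equal-size point sets, the 1D W_p pairs the sorted projections.
    px = np.sort(X @ theta.T, axis=0)
    py = np.sort(Y @ theta.T, axis=0)
    return np.mean(np.abs(px - py) ** p) ** (1.0 / p)
```

The estimator's variance comes entirely from the random `theta`; the deterministic approximation in the abstract removes this sampling step altogether.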
We consider the problem of domain approximation in finite element methods for Maxwell equations on curved domains, i.e., when affine or polynomial meshes fail to cover the domain of interest exactly. In such cases, one is forced to approximate the domain by a sequence of polyhedral domains arising from inexact meshes. We deduce conditions on the quality of these approximations that ensure rates of convergence of the discrete solutions, posed on the approximate domains, to the continuous solution in the original domain.
Many problems in signal processing reduce to nonparametric function estimation. We propose a new methodology, piecewise convex fitting (PCF), and give a two-stage adaptive estimate. In the first stage, the number and locations of the change points are estimated using strong smoothing. In the second stage, a constrained smoothing spline fit is performed, with the smoothing level chosen to minimize the MSE. The imposed constraint is that a single change point occurs in a region about each empirical change point of the first-stage estimate. This constraint is equivalent to requiring that the third derivative of the second-stage estimate have a single sign in a small neighborhood about each first-stage change point. We sketch how PCF may be applied to signal recovery, instantaneous frequency estimation, surface reconstruction, image segmentation, spectral estimation, and multivariate adaptive regression.
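The first stage can be caricatured in a few lines: smooth heavily, then look for sign changes of the second difference, which mark empirical convexity change points. This is a hedged toy version of the idea only (the actual first stage and its smoothing-level selection are more refined), and the function name is hypothetical:

```python
import numpy as np

def stage_one_change_points(y, window=21):
    """Stage-1 sketch: smooth the data heavily with a moving average, then
    report indices where the second difference of the smoothed curve
    changes sign, i.e. empirical convexity change points."""
    s = np.convolve(y, np.ones(window) / window, mode="same")
    d2 = np.diff(s, 2)                     # discrete second derivative
    interior = d2[window:-window]          # discard boundary effects
    jumps = np.nonzero(np.diff(np.sign(interior)) != 0)[0]
    return jumps + window + 1              # index back into y

# A cubic has a single convexity change at x = 0 (index ~200 on this grid).
x = np.linspace(-1.0, 1.0, 400)
cps = stage_one_change_points(x ** 3)
```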
This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
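The basic sketch-and-project pattern behind such methods fits in a few lines: form a random linear image of the matrix, orthonormalize it, and truncate the projected factor. This is a minimal randomized range-finder sketch in the spirit of the suite described above, not a reproduction of any particular algorithm from it, and the function name is hypothetical:

```python
import numpy as np

def sketch_low_rank(A, r, oversample=10, rng=None):
    """Rank-r approximation of A from a single random sketch Y = A @ Omega:
    orthonormalize the sketch, project A onto its range, and truncate the
    small factor's SVD to the requested rank."""
    rng = np.random.default_rng(rng)
    Omega = rng.normal(size=(A.shape[1], r + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)     # orthonormal basis for the sketch
    B = Q.T @ A                        # small (r + oversample) x n factor
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U[:, :r]) * s[:r] @ Vt[:r]
```

The oversampling parameter trades a slightly larger sketch for a sharper a priori error bound, which is exactly the kind of user-facing knob the abstract refers to.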
In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and achieves the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
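The centralized building block, Nesterov's accelerated gradient method for a smooth and strongly convex function, can be sketched on a toy quadratic. This is the plain centralized scheme only, with assumed constant momentum for the strongly convex case, not the distributed dual algorithm analyzed above:

```python
import numpy as np

def nesterov(grad, x0, L, mu, iters):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly convex
    function, with the constant momentum beta = (1 - q)/(1 + q) for
    q = sqrt(mu/L); converges linearly at rate roughly 1 - sqrt(mu/L)."""
    q = np.sqrt(mu / L)
    beta = (1 - q) / (1 + q)
    x = y = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_next = y - grad(y) / L           # gradient step at extrapolated point
        y = x_next + beta * (x_next - x)   # momentum extrapolation
        x = x_next
    return x

# Toy quadratic f(x) = 0.5 x^T D x with D = diag(1, 10): L = 10, mu = 1.
D = np.array([1.0, 10.0])
x = nesterov(lambda v: D * v, np.array([5.0, 5.0]), L=10.0, mu=1.0, iters=200)
```

In the distributed setting of the paper, the same acceleration is applied to the dual of the affinely constrained problem, with the spectral gap of the interaction matrix entering the effective condition number.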