In this work we show an error estimate for a first-order Gaussian beam at a fold caustic, approximating time-harmonic waves governed by the Helmholtz equation. For the caustic that we study, the exact solution can be constructed using Airy functions, and there are explicit formulae for the Gaussian beam parameters. Via precise comparisons we show that the pointwise error on the caustic is of order $O(k^{-5/6})$, where $k$ is the wave number in the Helmholtz equation.
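For orientation, the canonical picture behind this estimate, written here in schematic notation of our own choosing, is
\[
\Delta u + k^2 n^2(x)\, u = 0, \qquad u(x) \sim C(x)\, \mathrm{Ai}\!\big(k^{2/3} \rho(x)\big) \ \text{near the fold},
\]
where $\mathrm{Ai}$ is the Airy function, $n$ is an index of refraction and $\rho$ is a smooth function vanishing on the caustic; the exact fold-caustic solution used for the comparison is of this Airy type.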
Goal-oriented error estimation provides the ability to approximate the discretization error in a chosen functional quantity of interest. Adaptive mesh methods make it possible to control this discretization error and obtain accurate quantity-of-interest approximations while remaining computationally feasible. Traditional discrete goal-oriented error estimates incur linearization errors in their derivation. In this paper, we investigate the role of linearization errors in adaptive goal-oriented simulations. In particular, we develop a novel two-level goal-oriented error estimate that is free of linearization errors. Additionally, we highlight how linearization errors can facilitate the verification of the adjoint solution used in goal-oriented error estimation. We then verify the newly proposed error estimate by applying it to a model nonlinear problem for several quantities of interest and further demonstrate its asymptotic effectiveness as mesh sizes are reduced. In an adaptive mesh context, we then compare the newly proposed estimate to a more traditional two-level goal-oriented error estimate. We show that accounting for linearization errors in the error estimate can improve its effectiveness in certain situations and that localizing linearization errors can lead to better-adapted meshes.
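For context, a standard adjoint-weighted residual identity (stated schematically, not necessarily in the authors' exact form) reads
\[
J(u) - J(u_h) \;=\; R(u_h)(z) \;+\; \mathcal{R}, \qquad R(u_h)(v) := \ell(v) - a(u_h; v),
\]
where $z$ solves an adjoint problem obtained by linearizing about $u_h$ (or $u$), and the remainder $\mathcal{R}$ is quadratic in the error $u - u_h$. Traditional discrete estimates drop $\mathcal{R}$ and further approximate $z$, which is where the linearization errors discussed above enter.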
We consider multivariate signals spanned by the integer shifts of a set of generating functions with distinct frequency profiles, and the problem of reconstructing them from samples taken on a random periodic set. We show that such a sampling strategy succeeds with high probability provided that the density of the sampling pattern exceeds the number of frequency profiles by a logarithmic factor. The signal model includes bandlimited functions with multi-band spectra. While in this well-studied setting delicate constructions provide sampling strategies that meet the information-theoretic benchmark of Shannon and Landau, the sampling pattern that we consider provides, at the price of a logarithmic oversampling factor, a simple alternative that is accompanied by favorable a priori stability margins (snug frames). More generally, we also treat bandlimited functions with arbitrary compact spectra, together with different measures of the spectrum's complexity and its approximation rates by integer tiles. At the technical level, we elaborate on recent work on relevant sampling, with the key difference that the reconstruction guarantees that we provide hold uniformly for all signals, rather than for a subset of well-concentrated ones. This is achieved by methods of concentration of measure formulated on the Zak domain.
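In the notation we adopt for illustration (the symbols $g_j$, $c_{j,k}$ and $N$ are ours), the signal model is
\[
f(x) \;=\; \sum_{j=1}^{N} \sum_{k \in \mathbb{Z}^d} c_{j,k}\, g_j(x - k), \qquad x \in \mathbb{R}^d,
\]
with generators $g_1, \dots, g_N$ having distinct frequency profiles and square-summable coefficients $(c_{j,k})$; the result then requires the density of the random periodic sampling set to exceed $N$ up to a logarithmic factor.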
The moment-sum-of-squares (moment-SOS) hierarchy is one of the most celebrated and widely applied methods for approximating the minimum of an $n$-variate polynomial over a feasible region defined by polynomial (in)equalities. A key feature of the hierarchy is that, at a fixed level, it can be formulated as a semidefinite program of size polynomial in the number of variables $n$. Although this suggests that it can be computed in polynomial time, this is not necessarily the case. Indeed, as O'Donnell (2017) and later Raghavendra & Weitz (2017) show, there exist examples where the SOS representations used in the hierarchy have exponential bit-complexity. We study the computational complexity of the moment-SOS hierarchy, complementing and expanding upon earlier work of Raghavendra & Weitz (2017). In particular, we establish algebraic and geometric conditions under which polynomial-time computation is guaranteed to be possible.
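As a reminder, in standard notation (which may differ from the paper's), level $r$ of the SOS side of the hierarchy for minimizing $f$ over $\{x \in \mathbb{R}^n : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$ computes
\[
\mathrm{sos}_r(f) \;=\; \sup\Big\{ \lambda \in \mathbb{R} \;:\; f - \lambda = \sigma_0 + \sum_{i=1}^{m} \sigma_i g_i, \ \ \sigma_i \text{ sums of squares},\ \deg(\sigma_0),\, \deg(\sigma_i g_i) \le 2r \Big\},
\]
a semidefinite program with matrix blocks of size $\binom{n+r}{r}$, i.e., of size polynomial in $n$ for fixed $r$; the bit-complexity issue concerns the size of the numbers appearing in the certificates $\sigma_i$, not the dimension of the program.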
It is well known that the Cauchy problem for the Laplace equation is ill-posed in the sense of Hadamard: small deviations in the Cauchy data may lead to large errors in the solution. It has been observed that if a bound is imposed on the solution, a conditional stability estimate holds, which gives a reasonable way to construct stable algorithms. However, one cannot expect good results at all points in the domain. Although numerical methods for the Cauchy problem for the Laplace equation have been studied for a long time, some questions remain open, for example how to evaluate the numerical solutions, that is, whether we can approximate the Cauchy data well while keeping the bound on the solution, and at which points the numerical results are reliable. In this paper, we prove a conditional stability estimate that is quantitatively related to harmonic measure. The harmonic measure can be used as an indicator function to evaluate the numerical result pointwise, which further enables us to find a reliable subdomain where the local convergence rate is higher than a certain order.
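A classical estimate of the kind referred to (stated schematically; the precise hypotheses and norms are those of the paper) is the two-constants bound
\[
|u(x)| \;\le\; C\, M^{\,1-\omega(x)}\, \epsilon^{\,\omega(x)}, \qquad x \in \Omega,
\]
for a harmonic function $u$ bounded by $M$ in $\Omega$ whose data on an accessible boundary portion $\Gamma \subset \partial\Omega$ is of size $\epsilon$, where $\omega(x)$ is the harmonic measure of $\Gamma$ with respect to $\Omega$ at $x$. Since $\omega$ can be computed, it can serve as the pointwise indicator mentioned above.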
We develop a shape-Newton method for solving generic free-boundary problems where one of the free-boundary conditions is governed by the Bernoulli equation. The Newton-like scheme is derived by employing shape derivatives of the weak forms, which allows us to update the position of the free surface and the potential on the free boundary by solving a boundary-value problem at each iteration. To validate the effectiveness of the approach, we apply the scheme to a problem involving the flow over a submerged triangular obstacle.
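A common form of the dynamic free-surface condition in question (with the symbols $\phi$, $\eta$ and $g$ introduced here for illustration) is the Bernoulli equation
\[
\tfrac{1}{2}\,|\nabla \phi|^2 + g\,\eta = \text{const} \qquad \text{on the free boundary},
\]
where $\phi$ is the velocity potential and $\eta$ the free-surface elevation; the shape-Newton scheme linearizes the weak form of such conditions with respect to perturbations of the surface position as well as of $\phi$, yielding a linear boundary-value problem for the updates at each iteration.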
The efficient representation of random fields on geometrically complex domains is crucial for Bayesian modelling in engineering and machine learning. Today's prevalent random field representations are restricted to unbounded domains or are too limited in the field properties they can represent. As a result, new techniques leveraging the historically established link between stochastic PDEs (SPDEs) and random fields are especially appealing for engineering applications with complex geometries that already have a finite element discretisation for solving the physical conservation equations. Unlike the dense covariance matrix of a random field, its inverse, the precision matrix, is usually sparse and equal to the stiffness matrix of a Helmholtz-like SPDE. In this paper, we use the SPDE representation to develop a scalable framework for large-scale statistical finite element analysis (statFEM) and Gaussian process (GP) regression on geometrically complex domains. We use the SPDE formulation to obtain the relevant prior probability densities with a sparse precision matrix. The properties of the priors are governed by the parameters and possibly fractional order of the Helmholtz-like SPDE, so that we can model anisotropic, non-homogeneous random fields with arbitrary smoothness on bounded domains and manifolds. We assemble the sparse precision matrix using the same finite element mesh employed for solving the physical conservation equations. The observation models for statFEM and GP regression are such that the posterior probability densities are Gaussian with closed-form mean and precision. The expressions for the mean vector and the precision matrix can be evaluated using only sparse matrix operations. We demonstrate the versatility of the proposed framework and its convergence properties with one- and two-dimensional Poisson and thin-shell examples.
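The Helmholtz-like SPDE alluded to can be written schematically (in our notation, following the standard Whittle-Mat\'ern construction) as
\[
\big(\kappa^2 - \Delta\big)^{\alpha/2}\, u(x) \;=\; \mathcal{W}(x),
\]
with $\mathcal{W}$ Gaussian white noise, a length-scale parameter $\kappa$ and a (possibly fractional) order $\alpha$ controlling the smoothness; for integer $\alpha$ a finite element discretization yields a precision matrix built from the sparse mass and stiffness matrices of the mesh, which is what makes the framework scale.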
The large bandwidth combined with ultra-massive multiple-input multiple-output (UM-MIMO) arrays enables terahertz (THz) systems to achieve terabits-per-second throughput. THz systems are expected to operate in the near-, intermediate-, and far-field regions. As such, channel estimation strategies suitable for the near, intermediate, or far field have been introduced in the literature. In this work, we propose a cross-field compressive channel estimation strategy, i.e., one able to operate in the near, intermediate, and far field. For an array-of-subarrays (AoSA) architecture, the proposed method compares the received signals across the arrays to determine whether a near-, intermediate-, or far-field channel estimation approach is appropriate. Subsequently, compressed estimation is performed in which the proximity of multiple subarrays (SAs) at the transmitter and receiver is exploited to reduce computational complexity and increase estimation accuracy. Numerical results show that the proposed method can improve channel estimation accuracy and reduce complexity at all distances of interest.
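In a generic compressive formulation (not necessarily the authors' exact model; the symbols below are ours), the pilot observations at an SA pair can be written as
\[
\mathbf{y} \;=\; \mathbf{\Phi}\, \mathbf{A}\, \mathbf{x} + \mathbf{n},
\]
where $\mathbf{\Phi}$ collects the measurement (precoding/combining) matrices, $\mathbf{A}$ is a dictionary of near- or far-field array response vectors chosen according to the detected regime, and $\mathbf{x}$ is a sparse vector of path gains; channel estimation then reduces to a sparse recovery problem, and the proximity of SAs at each end can be exploited when solving these problems jointly.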
We introduce and analyze a new finite-difference scheme, based on the theta-method, for solving monotone second-order mean field games. These games are described by a coupled system of Fokker-Planck and Hamilton-Jacobi-Bellman equations. The theta-method is used for discretizing the diffusion terms: we approximate them with a convex combination of an implicit and an explicit term. In contrast, we use an explicit centered scheme for the first-order terms. Assuming that the running cost is strongly convex and regular, we first prove the monotonicity and stability of our theta-scheme under a CFL condition. Taking advantage of the regularity of the solution of the continuous problem, we estimate the consistency error of the theta-scheme. Our main result is a convergence rate of order $\mathcal{O}(h^r)$ for the theta-scheme, where $h$ is the step length of the space variable and $r \in (0,1)$ is related to the H\"older continuity of the solution of the continuous problem and some of its derivatives.
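Schematically, for the diffusion part alone and in our own notation, the theta-discretization reads
\[
\frac{u^{n+1}_i - u^n_i}{\Delta t} \;=\; \theta\, \big(\Delta_h u^{n+1}\big)_i + (1-\theta)\, \big(\Delta_h u^n\big)_i, \qquad \theta \in [0,1],
\]
where $\Delta_h$ is the centered second-difference operator; the first-order terms are discretized with an explicit centered scheme, and the CFL condition constrains $\Delta t$ in terms of $h$ through the explicit part of the scheme.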
We present a mass lumping approach based on an isogeometric Petrov-Galerkin method that preserves higher-order spatial accuracy in explicit dynamics calculations irrespective of the polynomial degree of the spline approximation. To discretize the test function space, our method uses an approximate dual basis, whose functions are smooth, have local support and satisfy approximate bi-orthogonality with respect to a trial space of B-splines. The resulting mass matrix is ``close'' to the identity matrix. Specifically, a lumped version of this mass matrix preserves all relevant polynomials when utilized in a Galerkin projection. Consequently, the mass matrix can be lumped (via row-sum lumping) without compromising spatial accuracy in explicit dynamics calculations. We address the imposition of Dirichlet boundary conditions and the preservation of approximate bi-orthogonality under geometric mappings. In addition, we establish a link between the exact dual and approximate dual basis functions via an iterative algorithm that improves the approximate dual basis towards exact bi-orthogonality. We demonstrate the performance of our higher-order accurate mass lumping approach via convergence studies and spectral analyses of discretized beam, plate and shell models.
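For reference, row-sum lumping replaces the consistent mass matrix $M$ by the diagonal matrix
\[
M^{\mathrm{lump}}_{ii} \;=\; \sum_{j} M_{ij}, \qquad M^{\mathrm{lump}}_{ij} = 0 \ \text{ for } i \ne j,
\]
which can be inverted trivially in explicit time stepping; the role of the approximate dual test functions is to make $M$ close enough to the identity that this replacement does not degrade the spatial order of accuracy.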
We give query complexity lower bounds for convex optimization and the related feasibility problem. We show that quadratic memory is necessary to achieve the optimal oracle complexity for first-order convex optimization. In particular, this shows that center-of-mass cutting-plane algorithms in dimension $d$ which use $\tilde O(d^2)$ memory and $\tilde O(d)$ queries are Pareto-optimal for both convex optimization and the feasibility problem, up to logarithmic factors. Precisely, we prove that to minimize $1$-Lipschitz convex functions over the unit ball to $1/d^4$ accuracy, any deterministic first-order algorithm using at most $d^{2-\delta}$ bits of memory must make $\tilde\Omega(d^{1+\delta/3})$ queries, for any $\delta\in[0,1]$. For the feasibility problem, in which an algorithm only has access to a separation oracle, we show a stronger trade-off: for at most $d^{2-\delta}$ memory, the number of queries required is $\tilde\Omega(d^{1+\delta})$. This resolves a COLT 2019 open problem of Woodworth and Srebro.