For the $h$-finite-element method ($h$-FEM) applied to the Helmholtz equation, the question of how quickly the meshwidth $h$ must decrease with the frequency $k$ to maintain accuracy as $k$ increases has been studied since the mid-1980s. Nevertheless, except in one dimension, the literature still contains no $k$-explicit bounds on the relative error of the FEM solution (the measure of the FEM error most often used in practical applications). Our main result is sharp: for the lowest fixed-order conforming FEM (with polynomial degree $p$ equal to one), the condition "$h^2 k^3$ sufficiently small" is sufficient for the relative error of the FEM solution in 2 or 3 dimensions to be controllably small (independent of $k$) for scattering of a plane wave by a nontrapping obstacle and/or a nontrapping inhomogeneous medium. We also prove relative-error bounds on the FEM solution for arbitrary fixed-order methods applied to scattering by a nontrapping obstacle, but these bounds are not sharp for $p\geq 2$. A key ingredient in our proofs is a result describing the oscillatory behaviour of the solution of the plane-wave scattering problem, which we prove using semiclassical defect measures.
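To make the meshwidth condition concrete: holding $h^2k^3$ fixed forces $h \sim k^{-3/2}$, so a quasi-uniform mesh in $d$ dimensions needs on the order of $k^{3d/2}$ degrees of freedom. A minimal sketch (the tolerance constant `C` below is a hypothetical user choice, not a quantity specified in the abstract):

```python
# Sketch of the scaling implied by "h^2 k^3 sufficiently small".

def meshwidth(k: float, C: float = 1.0) -> float:
    """Largest h satisfying h^2 * k^3 <= C, i.e. h = (C / k^3)**0.5."""
    return (C / k**3) ** 0.5

def dof_estimate(k: float, dim: int = 2, C: float = 1.0) -> float:
    """A quasi-uniform mesh of meshwidth h has O(h**(-dim)) degrees of freedom."""
    return meshwidth(k, C) ** (-dim)

for k in [10.0, 100.0, 1000.0]:
    print(f"k={k:g}: h={meshwidth(k):.2e}, DoFs ~ {dof_estimate(k):.2e}")
```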
We present an index theory of equilibria for extensive form games. This requires developing an index theory for games where the strategy sets of players are general polytopes and their payoff functions are multiaffine in the product of these polytopes. Such polytopes arise from identifying (topologically) equivalent mixed strategies of a normal form game.
Recent advances in quantized compressed sensing and high-dimensional estimation have shown that signal recovery is feasible even under strong non-linear distortions in the observation process. An important characteristic of the associated guarantees is uniformity, i.e., recovery succeeds for an entire class of structured signals with a fixed measurement ensemble. However, despite significant results in various special cases, a general understanding of uniform recovery from non-linear observations is still missing. This paper develops a unified approach to this problem under the assumption of i.i.d. sub-Gaussian measurement vectors. Our main result shows that a simple least-squares estimator with any convex constraint can serve as a universal recovery strategy, which is outlier-robust and does not require explicit knowledge of the underlying non-linearity. Based on empirical process theory, a key technical novelty is an approximative increment condition that can be implemented for all common types of non-linear models. This flexibility allows us to apply our approach to a variety of problems in non-linear compressed sensing and high-dimensional statistics, leading to several new and improved guarantees. Each of these applications is accompanied by a conceptually simple and systematic proof, which does not rely on any deeper properties of the observation model. On the other hand, known local stability properties can be incorporated into our framework in a plug-and-play manner, thereby implying near-optimal error bounds.
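A sketch of the constrained least-squares estimator, with an $\ell_1$-ball constraint as one example of a convex constraint set; the projected-gradient solver and all parameter choices below are our illustration, not the paper's prescription:

```python
import numpy as np

def project_l1(v, radius):
    """Euclidean projection onto the l1-ball of the given radius."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def constrained_lsq(A, y, radius, steps=500):
    """Projected gradient descent for min ||A x - y||^2 s.t. ||x||_1 <= radius."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = project_l1(x - lr * A.T @ (A @ x - y), radius)
    return x

# 1-bit observations y = sign(A x0): the estimator is run with no knowledge
# of the sign non-linearity and recovers the direction of x0.
rng = np.random.default_rng(0)
m, n, s = 200, 500, 5
x0 = np.zeros(n); x0[:s] = 1.0 / np.sqrt(s)
A = rng.standard_normal((m, n))
x_hat = constrained_lsq(A, np.sign(A @ x0), radius=np.sqrt(s))
print("correlation:", x_hat @ x0 / (np.linalg.norm(x_hat) * np.linalg.norm(x0)))
```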
In this paper we examine the concept of complexity as it applies to generative and evolutionary art and design. Complexity has many different, discipline-specific definitions, such as complexity in physical systems (entropy), algorithmic measures of information complexity, and the field of "complex systems". We apply a series of different complexity measures to three different evolutionary art datasets and examine the correlations between computed complexity and either the individual aesthetic judgement of the artist (for two of the datasets) or the physically measured complexity of generative 3D forms (for the third). Our results show that the degree of correlation differs across datasets and measures, indicating that there is no overall "better" measure. However, specific measures do perform well on individual datasets, indicating that careful choice can increase the value of using such measures. We then assess the value of complexity measures for the audience by undertaking a large-scale survey on the perception of complexity and aesthetics. We conclude by discussing the value of direct measures in generative and evolutionary art, reinforcing recent findings from neuroimaging and psychology which suggest that human aesthetic judgement is informed by many extrinsic factors beyond the measurable properties of the object being judged.
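As a simple illustration of the kind of measure involved (this particular measure is our example; the abstract does not list the measures used), the Shannon entropy of an image's grey-level histogram is one discipline-specific notion of complexity:

```python
import numpy as np

def shannon_entropy(image: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of the grey-level histogram of an image
    with intensities in [0, 1]."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Correlating the measure with per-image aesthetic scores over a dataset:
# np.corrcoef([shannon_entropy(im) for im in images], scores)[0, 1]
```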
Inclusion of a term $-\gamma \nabla \nabla \cdot u$, forcing $\nabla \cdot u$ to be pointwise small, is an effective tool for improving mass conservation in discretizations of incompressible flows. However, the added grad-div term couples all velocity components, decreases sparsity, and increases the condition number of the linear systems that must be solved every time step. To address these three issues, various sparse grad-div regularizations and a modular grad-div method have been developed. We develop and analyze herein a synthesis of the fully decoupled, parallel sparse grad-div method of Guermond and Minev with the modular grad-div method. Let $G^{\ast }=-\mathrm{diag}(\partial _{x}^{2},\partial _{y}^{2},\partial _{z}^{2})$ denote the diagonal of $G=-\nabla \nabla \cdot$, and let $\alpha \geq 0$ be an adjustable parameter. The 2-step method considered is
\begin{align}
\text{Step 1:}\quad & \frac{\widetilde{u}^{n+1}-u^{n}}{k}+u^{n}\cdot \nabla \widetilde{u}^{n+1}+\nabla p^{n+1}-\nu \Delta \widetilde{u}^{n+1}=f \quad\text{and}\quad \nabla \cdot \widetilde{u}^{n+1}=0,\\
\text{Step 2:}\quad & \left[ \frac{1}{k}I+(\gamma +\alpha )G^{\ast }\right] u^{n+1}=\frac{1}{k}\widetilde{u}^{n+1}+\left[ (\gamma +\alpha )G^{\ast }-\gamma G\right] u^{n}.
\end{align}
The analysis also establishes that the method controls the persistent size of $\Vert \nabla \cdot u \Vert$ in general and controls the transients in $\Vert \nabla \cdot u \Vert$ for a cold start when $\alpha >0.5\gamma$. Numerical tests consistent with the theory are presented.
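Because $G^{\ast}$ is diagonal in the velocity components, Step 2 decouples into independent scalar solves, one per component. The sketch below illustrates this for a single component on a 1D grid; the grid, the homogeneous boundary values, and the lumping of the $[(\gamma+\alpha)G^{\ast}-\gamma G]u^{n}$ coupling term into an explicit right-hand side are simplifying assumptions of ours, not the authors' full 3D scheme:

```python
import numpy as np

def step2_component(u_tilde, coupling, k, gamma, alpha, h):
    """Solve [ (1/k) I + (gamma+alpha) Gstar ] u = u_tilde / k + coupling
    for one velocity component, with Gstar = -d^2/dx^2 discretized by
    second-order central differences (homogeneous Dirichlet values)."""
    n = len(u_tilde)
    main = np.full(n, 1.0 / k + (gamma + alpha) * 2.0 / h**2)
    off = np.full(n - 1, -(gamma + alpha) / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, u_tilde / k + coupling)
```

Each component yields its own tridiagonal system (assembled densely here for brevity), which is what restores sparsity and lets the three component solves run in parallel.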
Although Deep Neural Networks (DNNs) have shown impressive performance in perception and control tasks, several trustworthiness issues remain open. One of the most discussed topics is the existence of adversarial perturbations, which has opened an interesting research line on provable techniques capable of quantifying the robustness of a given input. In this regard, the Euclidean distance of the input from the classification boundary is a well-established robustness measure, as it corresponds to the minimal adversarial perturbation. Unfortunately, computing such a distance is highly complex due to the non-convex nature of DNNs. Although several methods have been proposed to address this issue, to the best of our knowledge no provable results have been presented that estimate and bound the error committed. This paper addresses this issue by proposing two lightweight strategies to find the minimal adversarial perturbation. Differently from the state of the art, the proposed approach allows us to formulate an error-estimation theory for the approximate distance with respect to the theoretical one. Finally, a substantial set of experiments is reported to evaluate the performance of the algorithms and support the theoretical findings. The obtained results show that the proposed strategies approximate the theoretical distance for samples close to the classification boundary, leading to provable robustness guarantees against any adversarial attack.
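One generic way to obtain an upper bound on the distance to the classification boundary is bisection along a fixed direction; this is an illustrative strategy of ours (the paper's two strategies are not specified in the abstract), and `classify` is a hypothetical black-box classifier:

```python
import numpy as np

def boundary_distance_upper_bound(classify, x, direction, hi=10.0, iters=30):
    """classify(x) -> label. Returns an upper bound on the distance from x
    to the classification boundary along `direction`, assuming the label
    flips somewhere before radius `hi`."""
    d = direction / np.linalg.norm(direction)
    label = classify(x)
    assert classify(x + hi * d) != label, "increase `hi`"
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if classify(x + mid * d) == label:
            lo = mid          # still on the original side
        else:
            hi = mid          # label already flipped
    return hi                 # ||hi * d|| bounds the minimal perturbation
```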
For each partition of a data set into a given number of parts there is a partition such that every part is as much as possible a good model (an "algorithmic sufficient statistic") for the data in that part. Since this can be done for every number of parts between one and the number of data items, the result is a function, the cluster structure function. It maps the number of parts of a partition to values related to the deficiencies of the parts as good models. Such a function starts at a value of at least zero for the one-part partition of the data set and descends to zero for the partition of the data set into singleton parts. The optimal clustering is the one chosen to minimize the cluster structure function. The theory behind the method is expressed in algorithmic information theory (Kolmogorov complexity). In practice the Kolmogorov complexities involved are approximated by a concrete compressor. We give examples using real data sets: the MNIST handwritten digits and the segmentation of real cells as used in stem cell research.
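A minimal sketch of the compressor approximation the abstract mentions; zlib is our choice of concrete compressor, not one prescribed by the paper:

```python
import zlib

def K(data: bytes) -> int:
    """Compressed length in bytes: a computable proxy for the (uncomputable)
    Kolmogorov complexity K(data)."""
    return len(zlib.compress(data, level=9))

# A part of a partition is a good model for its members when pooling the
# members compresses much better than compressing them separately:
part = [b"0001111000" * 20, b"0011111000" * 20, b"0001101000" * 20]
separate = sum(K(m) for m in part)
pooled = K(b"".join(part))
print(separate, pooled)  # pooled << separate for mutually similar members
```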
Determining the adsorption isotherms is an issue of significant importance in preparative chromatography. A modern technique for estimating adsorption isotherms is to solve an inverse problem so that the simulated batch separation coincides with actual experimental results. However, due to the ill-posedness and high non-linearity of the corresponding physical model, and the need to quantify uncertainties, the existing deterministic inversion methods are usually inefficient in real-world applications. To overcome these difficulties and study the uncertainties of the adsorption-isotherm parameters, in this work we propose, within the Bayesian sampling framework, a statistical approach for estimating the adsorption isotherms in various chromatography systems. Two modified Markov chain Monte Carlo algorithms are developed for the numerical realization of our statistical approach. Numerical experiments with both synthetic and real data are presented to show the efficiency of the proposed method.
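For orientation, here is a generic random-walk Metropolis-Hastings baseline of the kind such approaches build on; the paper's two *modified* MCMC algorithms are not specified in the abstract, and the forward model `simulate` in the usage comment is a hypothetical stand-in:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, steps=10_000, scale=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler for a log-posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(steps):
        prop = theta + scale * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept w.p. min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.asarray(chain)

# With a forward model simulate(theta) mapping isotherm parameters to a
# chromatogram, observed data y_obs, noise level sigma, and a log-prior:
# log_post = lambda th: -0.5 * np.sum((simulate(th) - y_obs)**2) / sigma**2 \
#            + log_prior(th)
```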
The simulation of long, nonlinear dispersive waves in bounded domains usually requires the use of slip-wall boundary conditions. Boussinesq systems appearing in the literature are generally not well-posed when such boundary conditions are imposed, or, if they are well-posed, implementing the boundary conditions in numerical approximations is very cumbersome. In the present paper a new Boussinesq system is proposed for the study of long waves of small amplitude in a basin when slip-wall boundary conditions are required. The new system is derived using asymptotic techniques under the assumption of small bathymetric variations, and a mathematical proof of well-posedness for the new system is developed. The new system is also solved numerically using a Galerkin finite-element method, where the boundary conditions are imposed with the help of Nitsche's method. Convergence of the numerical method is analyzed, and precise error estimates are provided. The method is then implemented, and the convergence is verified through numerical experiments. Numerical simulations of solitary waves shoaling on a plane slope are also presented. The results are compared with experimental data, and excellent agreement is found.
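To illustrate the boundary treatment only, here is a minimal sketch of Nitsche's method on the 1D model problem $-u''=1$ with $u(0)=u(1)=0$ imposed weakly via P1 finite elements; the authors' Boussinesq solver is of course far richer, and the penalty value is our choice:

```python
import numpy as np

n = 64                      # number of elements
h = 1.0 / n
beta = 10.0                 # Nitsche penalty (must be sufficiently large)
N = n + 1
A = np.zeros((N, N))
b = np.zeros(N)

# Standard P1 stiffness matrix and load vector for -u'' = 1 on [0, 1].
for e in range(n):
    A[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    b[e:e+2] += h / 2.0

# Nitsche consistency, symmetry, and penalty terms at x = 0 ...
A[0, 0] += -2.0 / h + beta / h
A[0, 1] += 1.0 / h
A[1, 0] += 1.0 / h
# ... and at x = 1.
A[N-1, N-1] += -2.0 / h + beta / h
A[N-1, N-2] += 1.0 / h
A[N-2, N-1] += 1.0 / h

u = np.linalg.solve(A, b)
x = np.linspace(0.0, 1.0, N)
print("max nodal error:", np.max(np.abs(u - 0.5 * x * (1.0 - x))))
```

The boundary condition is never imposed on the discrete space itself; it is enforced weakly through the extra boundary terms, which is what makes the approach convenient for slip-wall conditions on general domains.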
We consider the question: how can one sample good negative examples for contrastive learning? We argue that, as with metric learning, learning contrastive representations benefits from hard negative samples (i.e., points that are difficult to distinguish from an anchor point). The key challenge in using hard negatives is that contrastive methods must remain unsupervised, making it infeasible to adopt existing negative-sampling strategies that use label information. In response, we develop a new class of unsupervised methods for selecting hard negative samples in which the user can control the amount of hardness. A limiting case of this sampling results in a representation that tightly clusters each class and pushes different classes as far apart as possible. The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
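A sketch of hardness-controlled negative weighting in the spirit of the abstract (the exact estimator in the paper may differ; the hardness parameter `beta` and the reweighting scheme are our illustration):

```python
import numpy as np

def hard_negative_info_nce(anchor, positive, negatives, t=0.5, beta=1.0):
    """InfoNCE-style loss in which harder (more similar) negatives receive
    exponentially larger weights; beta = 0 recovers uniform weighting,
    i.e. the standard unsupervised contrastive loss."""
    pos = np.exp(anchor @ positive / t)
    sims = negatives @ anchor / t          # similarity to each negative
    w = np.exp(beta * sims)
    w /= w.sum()                           # hardness weights, summing to 1
    neg = len(negatives) * np.sum(w * np.exp(sims))
    return -np.log(pos / (pos + neg))
```

Increasing `beta` concentrates the negative term on the negatives closest to the anchor, which is the user-controlled "amount of hardness".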
Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods yields the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through an experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete-action environments as well as continuous-control tasks. Our results show that RL with parameter noise learns more efficiently than either traditional RL with action-space noise or evolutionary strategies alone.
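A simplified sketch of adaptive parameter noise (the adaptation rule follows the general idea above; the constants and the action-distance proxy are our placeholders, not the paper's exact scheme):

```python
import numpy as np

def perturb(params, sigma, rng):
    """Return a copy of the parameter vector with Gaussian noise added."""
    return params + sigma * rng.standard_normal(params.shape)

def adapt_sigma(sigma, action_dist, target_dist, factor=1.01):
    """Grow sigma while perturbed actions stay too close to unperturbed
    ones, shrink it otherwise, so exploration stays near a target level."""
    return sigma * factor if action_dist < target_dist else sigma / factor

rng = np.random.default_rng(0)
params = np.zeros(8)        # stand-in for policy weights
sigma = 0.1
for _ in range(100):
    noisy = perturb(params, sigma, rng)
    # Proxy for the distance between perturbed and unperturbed actions;
    # in practice this would compare the two policies' outputs on states.
    action_dist = np.linalg.norm(noisy - params)
    sigma = adapt_sigma(sigma, action_dist, target_dist=0.3)
print("adapted sigma:", sigma)
```

Because the noise lives in parameter space, the perturbed policy is consistent within an episode, in contrast to independent per-step action noise.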