We consider nodal-based Lagrangian interpolations for the finite element approximation of the Maxwell eigenvalue problem. The first approach is a standard Galerkin method on Powell-Sabin meshes, which has recently been shown to yield convergent approximations in two dimensions, whereas the other two are stabilized formulations that can be motivated by a variational multiscale approach. For the latter, a mixed formulation equivalent to the original problem is used, in which the operator has a saddle point structure. The Lagrange multiplier introduced to enforce the divergence constraint vanishes in an appropriate functional setting. The first stabilized method consists of an augmented formulation with a mesh-dependent term that can be regarded as the Laplacian of the multiplier of the divergence constraint. The second formulation is based on orthogonal projections and can be recast as a residual-based stabilization technique. We rely on classical spectral theory to analyze the approximation methods for the eigenproblem; their stability and convergence properties are inherited from the associated source problems. We investigate the numerical performance of the proposed formulations and provide convergence results that validate the theoretical ones on several benchmark tests, including problems with smooth and singular solutions.
This paper applies the weak Galerkin (WG) finite element method to the Steklov eigenvalue problem, focusing on obtaining lower bounds of the eigenvalues. The nonconforming finite element space of the weak Galerkin method is the key to obtaining lower bounds of the eigenvalues. Lower bound estimates of arbitrarily high order are given, and guaranteed lower bounds of the eigenvalues are also discussed. Numerical results demonstrate the accuracy and the lower bound property of the numerical scheme.
The task of maximizing a monotone submodular function under a cardinality constraint is at the core of many machine learning and data mining applications, including data summarization, sparse regression and coverage problems. We study this classic problem in the fully dynamic setting, where elements can be both inserted and removed. Our main result is a randomized algorithm that maintains an efficient data structure with a poly-logarithmic amortized update time and yields a $(1/2-\epsilon)$-approximate solution. We complement our theoretical analysis with an empirical study of the performance of our algorithm.
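For context, here is a minimal sketch of the classic static greedy baseline for this problem, which achieves a $(1-1/e)$-approximation in the offline setting; the paper's contribution is a dynamic data structure for the same objective under insertions and deletions, not this algorithm. The coverage function in the example is one standard instance of a monotone submodular function.

```python
# Classic static greedy for maximizing a monotone submodular function f
# under a cardinality constraint k. This is NOT the paper's dynamic
# algorithm; it only illustrates the underlying optimization problem.

def greedy_submodular(ground_set, f, k):
    """Return a set S, |S| <= k, built by repeatedly adding the element
    with the largest marginal gain f(S + e) - f(S)."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - S:
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element with positive marginal gain remains
            break
        S.add(best)
    return S

# Example: a coverage function (size of the union) is monotone submodular.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_submodular(set(sets), f, 2))  # -> {1, 3}
```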
A recent development in Bayesian optimization is the use of local optimization strategies, which can deliver strong empirical performance on high-dimensional problems compared to traditional global strategies. The "folk wisdom" in the literature is that the focus on local optimization sidesteps the curse of dimensionality; however, little is concretely known about the expected behavior or convergence of Bayesian local optimization routines. We first study the behavior of the local approach and find that the statistics of individual local solutions of Gaussian process sample paths are surprisingly good compared to what we would expect to recover with global methods. We then present the first rigorous analysis of such a Bayesian local optimization algorithm, namely the one recently proposed by M\"uller et al. (2021), and derive convergence rates in both the noisy and noiseless settings.
We review different (reduced) models for thin structures that use bending as the principal mechanism to undergo large deformations. Each model consists of the minimization of a fourth-order energy, potentially subject to a nonconvex constraint. Equilibrium deformations are approximated using local discontinuous Galerkin (LDG) finite elements. The design of the discrete energies relies on a discrete Hessian operator, defined on discontinuous functions, with better approximation properties than the piecewise Hessian. Discrete gradient flows drive the minimization process; they are chosen for their robustness and their ability to preserve the nonconvex constraint. Several numerical experiments are presented to showcase the large variety of shapes that can be achieved with these models.
This paper proposes and analyzes a novel efficient high-order finite volume method for ideal magnetohydrodynamics (MHD). As a distinctive feature, the method simultaneously preserves a discretely divergence-free (DDF) constraint on the magnetic field and the positivity-preserving (PP) property, which ensures the positivity of density, pressure, and internal energy. To enforce the DDF condition, we design a new discrete projection approach that projects the reconstructed point values at the cell interfaces into a DDF space, without using any approximation polynomials. This projection method is highly efficient, easy to implement, and particularly suitable for standard high-order finite volume WENO methods, which typically return only the point values in the reconstruction. Moreover, we develop a new finite volume framework for constructing provably PP schemes for the ideal MHD system. The framework comprises the discrete projection technique, a suitable approximation to the Godunov--Powell source terms, and a simple PP limiter. We provide a rigorous analysis of the PP property of the proposed finite volume method, demonstrating that the DDF condition and the proper approximation to the source terms eliminate the influence of the magnetic divergence terms on the PP property. The analysis is challenging due to the nonlinearity of the internal energy function and the intricate relationship between the DDF and PP properties. To address these challenges, the recently developed geometric quasilinearization approach is adopted, which transforms a nonlinear constraint into a family of linear constraints. Finally, we validate the effectiveness of the proposed method on several benchmark and demanding numerical examples. The results demonstrate that the proposed method is robust, accurate, and highly effective, confirming the significance of the proposed DDF projection and PP techniques.
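As a point of reference for the PP-limiter ingredient, below is a minimal sketch of the standard scaling-limiter idea (in the spirit of Zhang and Shu) for a single scalar quantity such as the density; the paper's limiter for MHD is more involved and must also control pressure and internal energy, so this only illustrates the basic mechanism.

```python
import numpy as np

# Scaling limiter sketch: reconstructed point values in a cell are pulled
# toward the (assumed positive) cell average just enough to enforce a floor
# eps > 0, without changing the cell average and hence without affecting
# conservation.

def scaling_limiter(point_values, cell_average, eps=1e-13):
    u = np.asarray(point_values, dtype=float)
    u_min = u.min()
    if u_min >= eps:                    # already positive, nothing to do
        return u
    theta = (cell_average - eps) / (cell_average - u_min)   # theta in (0, 1)
    return cell_average + theta * (u - cell_average)

# Reconstructed point values with a negative undershoot (average is 0.75):
vals = [1.2, 0.8, -0.1, 1.1]
print(scaling_limiter(vals, cell_average=0.75))
```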
We present a novel approach for computing a variant of eigenvector centrality for multilayer networks with inter-layer constraints on node importance. Specifically, we consider a multilayer network defined by multiple edge-weighted, potentially directed, graphs over the same set of nodes with each graph representing one layer of the network and no inter-layer edges. As in the standard eigenvector centrality construction, the importance of each node in a given layer is based on the weighted sum of the importance of adjacent nodes in that same layer. Unlike standard eigenvector centrality, we assume that the adjacency relationship and the importance of adjacent nodes may be based on distinct layers. Importantly, this type of centrality constraint is only partially supported by existing frameworks for multilayer eigenvector centrality that use edges between nodes in different layers to capture inter-layer dependencies. For our model, constrained, layer-specific eigenvector centrality values are defined by a system of independent eigenvalue problems and dependent pseudo-eigenvalue problems, whose solution can be efficiently realized using an interleaved power iteration algorithm.
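To make the interleaving concrete, here is a hedged sketch: each layer has its own adjacency matrix, but the importance of a node's neighbors may be read from a (possibly different) source layer. The `src` mapping, the normalization, and the convergence handling below are illustrative assumptions rather than the paper's exact algorithm; with `src[l] == l` the iteration reduces to independent per-layer eigenvector centrality.

```python
import numpy as np

# Interleaved power-iteration sketch for layer-constrained eigenvector
# centrality. A[l] is the (possibly directed, weighted) adjacency matrix of
# layer l; src[l] names the layer whose centrality vector supplies the
# importance of neighbors when updating layer l.

def interleaved_power_iteration(A, src, iters=200, tol=1e-10):
    L, n = len(A), A[0].shape[0]
    x = [np.ones(n) / np.sqrt(n) for _ in range(L)]
    for _ in range(iters):
        x_new = []
        for l in range(L):
            y = A[l] @ x[src[l]]       # neighbor importance taken from layer src[l]
            x_new.append(y / np.linalg.norm(y))
        if max(np.linalg.norm(a - b) for a, b in zip(x, x_new)) < tol:
            return x_new
        x = x_new
    return x

# Two layers over 3 nodes; layer 1 reads neighbor importance from layer 0.
A = [np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float),
     np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], float)]
print(interleaved_power_iteration(A, src=[0, 0]))
```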
Efficient simulation of stochastic partial differential equations (SPDE) on general domains requires noise discretization. This paper employs piecewise linear interpolation of the noise in a fully discrete finite element approximation of a semilinear stochastic reaction-advection-diffusion equation on a convex polyhedral domain. The Gaussian noise is white in time, spatially correlated, and modeled as a standard cylindrical Wiener process on a reproducing kernel Hilbert space. This paper provides the first rigorous analysis of the resulting noise discretization error for a general spatial covariance kernel. The kernel is assumed to be defined on a larger regular domain, which allows sampling by the circulant embedding method. The error bound under mild kernel assumptions requires non-trivial techniques such as Hilbert--Schmidt bounds on products of finite element interpolants, entropy numbers of fractional Sobolev space embeddings, and an error bound for interpolants in fractional Sobolev norms. Examples with kernels encountered in applications are illustrated in numerical simulations using the FEniCS finite element software. Key findings include: noise interpolation does not introduce additional errors for Mat\'ern kernels in $d\ge2$; there exist kernels that yield dominant interpolation errors; and generating noise on a coarser mesh does not always compromise accuracy.
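For readers unfamiliar with circulant embedding, the following is a minimal one-dimensional sketch of the sampling idea on a uniform grid (the paper's setting is more general): the first row of the stationary covariance matrix is extended to a circulant vector, which the FFT diagonalizes, so a Gaussian sample is obtained at FFT cost. Clamping negative eigenvalues, as done here, is a simplistic assumption; in practice the embedding is enlarged until the spectrum is nonnegative.

```python
import numpy as np

# 1D circulant embedding sketch: sample a stationary Gaussian field with
# covariance kernel c(|x - y|) on a uniform grid of n points with spacing h.

def circulant_embedding_sample(kernel, n, h, rng):
    r = kernel(h * np.arange(n))             # covariances c(0), c(h), ...
    c = np.concatenate([r, r[-2:0:-1]])      # circulant extension, length 2n-2
    lam = np.fft.fft(c).real                 # eigenvalues of the circulant
    lam = np.maximum(lam, 0.0)               # simplistic fix for negative values
    m = len(c)
    xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    z = np.sqrt(m) * np.fft.ifft(np.sqrt(lam) * xi)
    return z.real[:n]                        # one Gaussian sample on the grid

rng = np.random.default_rng(0)
exp_kernel = lambda d: np.exp(-np.abs(d))    # exponential (Matern nu=1/2) kernel
sample = circulant_embedding_sample(exp_kernel, n=256, h=1 / 255, rng=rng)
print(sample[:5])
```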
We consider a social choice setting in which agents and alternatives are represented by points in a metric space, and the cost of an agent for an alternative is the distance between the corresponding points. The goal is to choose a single alternative to (approximately) minimize the social cost (the total cost of all agents) or the maximum cost of any agent, when only limited information about the preferences of the agents is given. Previous work has shown that the best possible distortion one can hope to achieve is $3$ when given access to the ordinal preferences of the agents, even when the distances between alternatives in the metric space are known. We improve upon this bound of $3$ by designing deterministic mechanisms that exploit a bit of cardinal information. We show that it is possible to achieve distortion $1+\sqrt{2}$ by using the ordinal preferences of the agents, the distances between alternatives, and a threshold approval set per agent that contains all alternatives for which her cost is within an appropriately chosen factor of her cost for her most-preferred alternative. We show that this bound is the best possible for any deterministic mechanism in general metric spaces, and we also provide improved bounds for the fundamental case of a line metric.
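To illustrate the elicitation, here is a small sketch of how a threshold approval set could be computed from an agent's (hidden) cardinal costs; the threshold factor `tau` below is set to $1+\sqrt{2}$ purely to echo the distortion bound and is an assumption for illustration, not necessarily the factor chosen by the paper's mechanism.

```python
import math

# An agent approves every alternative whose cost is within a factor tau of
# the cost of her most-preferred alternative.

def threshold_approval_set(costs, tau=1 + math.sqrt(2)):
    """costs: dict alternative -> this agent's cost. Returns approved set."""
    c_min = min(costs.values())
    return {a for a, c in costs.items() if c <= tau * c_min}

# One agent on a line at position 0, alternatives at distances 1, 2, and 5:
costs = {"x": 1.0, "y": 2.0, "z": 5.0}
print(threshold_approval_set(costs))  # {'x', 'y'}: 2 <= (1 + sqrt 2) * 1 < 5
```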
Stein thinning is a promising algorithm proposed by Riabiz et al. (2022) for post-processing outputs of Markov chain Monte Carlo (MCMC). Its main principle is to greedily minimize the kernelized Stein discrepancy (KSD), which requires only the gradient of the log-target distribution and is thus well-suited for Bayesian inference. The main advantages of Stein thinning are the automatic removal of the burn-in period, the correction of the bias introduced by recent MCMC algorithms, and its asymptotic convergence towards the target distribution. Nevertheless, Stein thinning suffers from several empirical pathologies, which may result in poor approximations, as observed in the literature. In this article, we conduct a theoretical analysis of these pathologies to clearly identify the mechanisms at play, and we suggest improved strategies. We then introduce the regularized Stein thinning algorithm to alleviate the identified pathologies. Finally, theoretical guarantees and extensive experiments demonstrate the high efficiency of the proposed algorithm.
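For concreteness, below is a hedged sketch of the vanilla (unregularized) greedy Stein thinning step with a Gaussian base kernel; the choice of base kernel and bandwidth `h` are illustrative assumptions, and the regularized variant proposed in the paper adds further terms. Each greedy step selects, possibly with repetition, the sample that minimizes the increase in the KSD of the selected set.

```python
import numpy as np

# Greedy Stein thinning sketch. X: MCMC samples, shape (n, d).
# S: scores grad log p evaluated at each sample, shape (n, d).
# The Langevin--Stein kernel is built on k(x, y) = exp(-||x - y||^2 / (2 h^2)).

def stein_kernel_row(X, S, j, h):
    """k_p(x_j, x_i) for all i."""
    diff = X[j] - X                                 # x_j - x_i, shape (n, d)
    sq = np.sum(diff**2, axis=1)
    k = np.exp(-sq / (2 * h**2))
    d = X.shape[1]
    term_kk = (d / h**2 - sq / h**4) * k            # div_x div_y k
    term_x = (diff @ S[j]) / h**2 * k               # s(x_j) . grad_y k
    term_y = -np.sum(S * diff, axis=1) / h**2 * k   # s(x_i) . grad_x k
    term_ss = (S @ S[j]) * k                        # s(x_j) . s(x_i) k
    return term_kk + term_x + term_y + term_ss

def stein_thinning(X, S, m, h=1.0):
    n, d = X.shape
    diag = d / h**2 + np.sum(S**2, axis=1)          # k_p(x_i, x_i)
    acc = np.zeros(n)                               # sum of k_p(selected, .)
    chosen = []
    for _ in range(m):
        j = int(np.argmin(diag / 2 + acc))          # greedy KSD minimization
        chosen.append(j)
        acc += stein_kernel_row(X, S, j, h)
    return chosen

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) + 1.0             # shifted "MCMC" samples
S = -X                                              # score of a N(0, I) target
print(stein_thinning(X, S, m=10))
```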
With the rapid development of facial forgery techniques, forgery detection has attracted increasing attention due to security concerns. Existing approaches attempt to use frequency information to mine subtle artifacts in high-quality forged faces. However, their exploitation of frequency information is coarse-grained and, more importantly, their vanilla learning process struggles to extract fine-grained forgery traces. To address this issue, we propose a progressive enhancement learning framework that exploits both RGB and fine-grained frequency clues. Specifically, we perform a fine-grained decomposition of RGB images to completely decouple the real and fake traces in the frequency space. We then build the framework on a two-branch network combined with self-enhancement and mutual-enhancement modules. The self-enhancement module captures the traces in different input spaces based on spatial noise enhancement and channel attention. The mutual-enhancement module concurrently enhances the RGB and frequency features by communicating in the shared spatial dimension. The progressive enhancement process facilitates the learning of discriminative features with fine-grained face forgery clues. Extensive experiments on several datasets show that our method outperforms state-of-the-art face forgery detection methods.
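Since the self-enhancement module builds on channel attention, the following is a generic squeeze-and-excitation style channel-attention block for orientation; it is a standard component, not the paper's module, which additionally performs spatial noise enhancement and differs in detail.

```python
import torch
import torch.nn as nn

# Generic channel-attention block: global average pooling "squeezes" each
# channel to a scalar, a small bottleneck MLP produces per-channel gates in
# (0, 1), and the input feature map is reweighted channel-wise.

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel gate
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature channels

feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```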