In the context of image processing, given a $k$-th order, homogeneous and linear differential operator with constant coefficients, we study a class of variational problems whose regularizing terms depend on the operator. Precisely, the regularizers are integrals of spatially inhomogeneous integrands with convex dependence on the differential operator applied to the image function. The setting is made rigorous by means of the theory of Radon measures and of suitable function spaces modeled on $BV$. We prove the lower semicontinuity of the functionals at stake and the existence of minimizers for the corresponding variational problems. We then embed the latter into a bilevel scheme in order to automatically compute the space-dependent regularization parameters, thus allowing for good flexibility and preservation of details in the reconstructed image. We establish existence of optima for the scheme and finally substantiate its feasibility by numerical examples in image denoising. The cases that we treat are Huber versions of the first- and second-order total variation, with both the Huber parameter and the regularization parameter being spatially dependent. Notably, the spatially dependent version of the second-order total variation produces high-quality reconstructions when compared to regularizations of similar type, and the introduction of the spatially dependent Huber parameter leads to a further enhancement of the image details.
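For orientation, a schematic first-order instance of such a regularizer (notation ours, not taken verbatim from the paper) combines the Huber function with a spatially varying threshold $\gamma(x)$ and weight $\alpha(x)$:
\[
\varphi_{\gamma}(t) =
\begin{cases}
\dfrac{t^{2}}{2\gamma}, & |t| \le \gamma,\\[4pt]
|t| - \dfrac{\gamma}{2}, & |t| > \gamma,
\end{cases}
\qquad
\mathcal{R}(u) = \int_{\Omega} \alpha(x)\, \varphi_{\gamma(x)}\big(|\nabla u(x)|\big)\, \mathrm{d}x,
\]
and the bilevel scheme then selects the maps $\alpha(\cdot)$ and $\gamma(\cdot)$ in an outer optimization problem.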
Discretization of continuous-time diffusion processes is a widely used approach to sampling. However, requiring the potential to be smooth (gradient Lipschitz), as is often done, is a considerable restriction. This paper studies sampling through Euler discretization, where the potential function is assumed to be a mixture of weakly smooth distributions and to satisfy a weak dissipativity condition. We establish convergence in Kullback-Leibler (KL) divergence, with the number of iterations needed to reach an $\epsilon$-neighborhood of the target distribution depending only polynomially on the dimension. We relax the degenerate convexity-at-infinity condition of \citet{erdogdu2020convergence} and prove convergence guarantees under a Poincar\'{e} inequality or non-strong convexity outside a ball. In addition, we provide convergence in the $L_{\beta}$-Wasserstein metric for the smoothed potential.
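For context, the Euler discretization of the overdamped Langevin diffusion $dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dW_t$, i.e. the unadjusted Langevin algorithm underlying such analyses, can be sketched as follows; the potential gradient grad_U, the step size eta, and the iteration count are placeholders.

```python
import numpy as np

def ula(grad_U, x0, eta, n_iter, rng=None):
    """Unadjusted Langevin algorithm: Euler discretization of dX_t = -grad U(X_t) dt + sqrt(2) dW_t."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_U(x) + np.sqrt(2.0 * eta) * noise
    return x

# Toy usage: standard Gaussian target, U(x) = ||x||^2 / 2, so grad U(x) = x.
sample = ula(lambda x: x, x0=np.zeros(2), eta=1e-2, n_iter=5000)
```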
This article considers the massive MIMO unsourced random access problem over a quasi-static Rayleigh fading channel. Given a fixed message length and a prescribed number of channel uses, the objective is to construct a coding scheme that minimizes the energy-per-bit subject to a fixed probability of error. The proposed scheme differs from other state-of-the-art schemes in that it blends activity detection, single-user coding, pilot-aided and temporary-decision-aided iterative channel estimation and decoding, minimum mean-squared error (MMSE) estimation, and successive interference cancellation (SIC). We show that an appropriate combination of these ideas can substantially outperform state-of-the-art coding schemes when the number of active users exceeds 100, making this the best-performing scheme known for this regime.
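As a toy illustration of two of the ingredients above, the following numpy sketch shows a regularized (LMMSE-style) pilot-based channel estimate and an interference-cancellation step; the matrix shapes, the unit-variance channel assumption, and the function names are illustrative, not the paper's implementation.

```python
import numpy as np

def estimate_channels(Y_pilot, P, noise_var):
    """LMMSE-style channel estimate for Y_pilot ~ H @ P + noise, assuming i.i.d. unit-variance channels.

    Y_pilot: (n_antennas, n_pilot) received pilot observations.
    P:       (n_users, n_pilot) known pilot sequences of the detected users.
    Returns an (n_antennas, n_users) estimate of H.
    """
    G = P @ P.conj().T + noise_var * np.eye(P.shape[0])
    return Y_pilot @ P.conj().T @ np.linalg.inv(G)

def sic_residual(Y, H_hat, X_decoded):
    """Successive interference cancellation: subtract the re-encoded contribution of decoded users."""
    return Y - H_hat @ X_decoded
```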
We introduce and discuss shape-based models for finding the best interpolation data in the compression of images with noise. The aim is to reconstruct missing regions by minimizing a data fitting term in the $L^2$-norm between the images and their reconstructed counterparts using time-dependent PDE inpainting. We analyze the proposed models in the framework of $\Gamma$-convergence from two different points of view. First, we consider a continuous stationary PDE model, obtained by focusing on the first iteration of the discretized time-dependent PDE, and get pointwise information on the "relevance" of each pixel by a topological asymptotic method. Second, we introduce a finite dimensional setting of the continuous model based on "fat pixels" (balls with positive radius), and we study by $\Gamma$-convergence the asymptotics when the radius vanishes. Numerical computations are presented that confirm the usefulness of our theoretical findings for non-stationary PDE-based image compression.
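A minimal numpy sketch of time-dependent PDE inpainting in its simplest (heat-equation) form is given below; the mask marks the kept interpolation data, and the choice of diffusion, step size, and boundary handling is purely illustrative.

```python
import numpy as np

def heat_inpaint(image, mask, n_steps=2000, tau=0.2):
    """Evolve u_t = Laplacian(u) on the unknown pixels while the kept interpolation
    data (mask == True) stay fixed. Periodic boundaries via np.roll, for brevity."""
    data = image.astype(float)
    u = data.copy()
    for _ in range(n_steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = np.where(mask, data, u + tau * lap)
    return u
```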
A mass-preserving two-step Lagrange-Galerkin scheme of second order in time for convection-diffusion problems is presented, and convergence with optimal error estimates is proved in the framework of $L^2$-theory. The introduced scheme maintains the advantages of the Lagrange-Galerkin method, i.e., CFL-free robustness for convection-dominated problems and a symmetric and positive coefficient matrix resulting from the discretization. In addition, the scheme conserves the mass on the discrete level if the involved integrals are computed exactly. Unconditional stability and error estimates of second order in time are proved by employing two new key lemmas on the truncation error of the material derivative in conservative form and on a discrete Gronwall inequality for multistep methods. The mass-preserving property is achieved by the Jacobian multiplication technique introduced by Rui and Tabata in 2010, and the accuracy of second order in time is obtained based on the idea of the multistep Galerkin method along characteristics originally introduced by Ewing and Russell in 1981. For the first time step, the mass-preserving scheme of first order in time by Rui and Tabata in 2010 is employed, which is efficient and does not cause any loss of convergence order in the $\ell^\infty(L^2)$- and $\ell^2(H^1_0)$-norms. For the time increment $\Delta t$, the mesh size $h$ and a conforming finite element space of polynomial degree $k$, the convergence order is of $O(\Delta t^2 + h^k)$ in the $\ell^\infty(L^2)\cap \ell^2(H^1_0)$-norm and of $O(\Delta t^2 + h^{k+1})$ in the $\ell^\infty(L^2)$-norm if the duality argument can be employed. Error estimates of $O(\Delta t^{3/2}+h^k)$ in discrete versions of the $L^\infty(H^1_0)$- and $H^1(L^2)$-norm are additionally proved. Numerical results confirm the theoretical convergence orders in one, two and three dimensions.
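As a small generic aside (not taken from the paper), reported orders such as $O(\Delta t^2 + h^k)$ are typically verified numerically by computing observed orders from errors on successively refined meshes; the error values below are made up purely for illustration.

```python
import numpy as np

def observed_order(errors, h):
    """Observed convergence order between consecutive refinement levels."""
    errors, h = np.asarray(errors, float), np.asarray(h, float)
    return np.log(errors[:-1] / errors[1:]) / np.log(h[:-1] / h[1:])

# Hypothetical L2-errors on meshes h = 1/8, 1/16, 1/32; values close to 2 would be
# consistent with O(h^{k+1}) for k = 1 when the duality argument applies.
print(observed_order([2.1e-2, 5.4e-3, 1.4e-3], [1/8, 1/16, 1/32]))
```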
In this paper, we propose a multi-stage image segmentation framework that incorporates a weighted difference of anisotropic and isotropic total variation (AITV). The segmentation framework generally consists of two stages, smoothing and thresholding, and is thus referred to as SaT. In the first stage, a smoothed image is obtained from an AITV-regularized Mumford-Shah (MS) model, which can be solved efficiently by the alternating direction method of multipliers (ADMM) with a closed-form solution of the proximal operator of the $\ell_1 - \alpha \ell_2$ regularizer. Convergence of the ADMM algorithm is analyzed. In the second stage, we threshold the smoothed image by $k$-means clustering to obtain the final segmentation result. Numerical experiments demonstrate that the proposed segmentation framework is versatile for both grayscale and color images, efficient in producing high-quality segmentation results within a few seconds, and robust to input images that are corrupted with noise, blur, or both. We compare the AITV method with its original convex TV and nonconvex TV$^p$ $(0<p<1)$ counterparts, showcasing the qualitative and quantitative advantages of our proposed method.
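The ADMM subproblem mentioned above relies on the closed-form proximal operator of $\lambda(\|\cdot\|_1 - \alpha\|\cdot\|_2)$; the sketch below transcribes the form derived by Lou and Yan (2018) for a generic vector, as an illustration rather than the paper's exact routine.

```python
import numpy as np

def prox_l1_minus_alpha_l2(y, lam, alpha):
    """Proximal operator of lam * (||x||_1 - alpha * ||x||_2), following Lou and Yan (2018)."""
    y = np.asarray(y, dtype=float)
    ymax = np.max(np.abs(y))
    if ymax > lam:
        z = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)   # soft-thresholding
        return z * (np.linalg.norm(z) + alpha * lam) / np.linalg.norm(z)
    x = np.zeros_like(y)
    if ymax > (1.0 - alpha) * lam:
        i = np.argmax(np.abs(y))
        x[i] = np.sign(y[i]) * (ymax + (alpha - 1.0) * lam)  # one-sparse solution
    return x
```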
We consider generalized Nash equilibrium problems (GNEPs) with non-convex strategy spaces and non-convex cost functions. This general class of games includes the important case of games with mixed-integer variables, for which only a few results are known in the literature. We present a new approach to characterize equilibria via a convexification technique using the Nikaido-Isoda function. For any given instance of the GNEP, we construct a set of convexified instances and show that a feasible strategy profile is an equilibrium for the original instance if and only if it is an equilibrium for any convexified instance and the convexified cost functions coincide with the initial ones. We further develop this approach along three dimensions. We first show that for quasi-linear models, in which there exists a convexified instance where, for fixed strategies of the opponent players, the cost function of every player is linear and the respective strategy space is polyhedral, the convexification reduces the GNEP to a standard (non-linear) optimization problem. Second, we derive two complete characterizations of those GNEPs for which the convexification leads to a jointly constrained or a jointly convex GNEP, respectively. These characterizations require new concepts related to the interplay of the convex hull operator applied to restricted subsets of feasible strategies and may be of independent interest. Finally, we demonstrate the applicability of our results by presenting a numerical study on the computation of equilibria for a class of integral network flow GNEPs.
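For reference, with $N$ players, cost functions $\theta_i$, and a strategy profile $x=(x_i, x_{-i})$, the Nikaido-Isoda function used for such characterizations is commonly written as (notation ours)
\[
\Psi(x, y) = \sum_{i=1}^{N} \big[\, \theta_i(x_i, x_{-i}) - \theta_i(y_i, x_{-i}) \,\big],
\]
and a feasible $x$ is an equilibrium precisely when $\sup_{y} \Psi(x, y) \le 0$, the supremum being taken over the feasible unilateral deviations $y_i$ of each player.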
Purpose: To develop a synergistic image reconstruction framework that exploits multicontrast (MC), multicoil, and compressed sensing (CS) redundancies in magnetic resonance imaging (MRI). Approach: CS, MC acquisition, and parallel imaging (PI) have each been well developed individually, but their combination has not been equally well studied, much less the potential benefits of isotropy within such a setting. Inspired by total variation theory, we introduce an isotropic MC image regularizer and attain its full potential by integrating it into compressed MC multicoil MRI. A convex optimization problem is posed to model the new variational framework, and a first-order algorithm is developed to solve it. Results: The proposed isotropic regularizer outperforms many state-of-the-art reconstruction methods, not only in its rotation-invariant preservation of symmetric features but also in suppressing the noise and streaking artifacts normally encountered in PI methods at aggressive undersampling rates. Moreover, the new framework largely prevents intercontrast leakage of contrast-specific details, a situation that some variational and low-rank MC reconstruction approaches appear to handle with difficulty. Conclusions: The new framework is a viable option for image reconstruction in fast protocols of MC parallel MRI, potentially reducing patient discomfort in otherwise long and time-consuming scans.
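A toy sketch of one natural isotropic coupling across contrasts in the spirit of such regularizers is given below (the paper's precise regularizer may differ): the gradient magnitudes of all contrasts and directions are pooled under a single square root at each pixel.

```python
import numpy as np

def joint_isotropic_tv(images, eps=1e-8):
    """Isotropic multicontrast TV: sum over pixels of sqrt(sum over contrasts and
    directions of |gradient|^2). images: (n_contrasts, H, W), possibly complex."""
    gx = np.diff(images, axis=2, append=images[:, :, -1:])  # forward differences, replicated edge
    gy = np.diff(images, axis=1, append=images[:, -1:, :])
    return np.sum(np.sqrt(np.sum(np.abs(gx) ** 2 + np.abs(gy) ** 2, axis=0) + eps))
```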
Due to the COVID-19 pandemic, there is an increasing demand for portable CT machines worldwide in order to diagnose patients in a variety of settings. This has led to a need for CT image reconstruction algorithms that can produce high-quality images when multiple types of geometry parameters have been perturbed. In this paper we present an alternating minimization algorithm to address this issue, in which one step minimizes a regularized linear least squares problem and the other minimizes a bounded nonlinear least squares problem. Additionally, we survey existing methods for accelerating the convergence of the algorithm and discuss implementation details. Finally, numerical experiments are conducted to illustrate the effectiveness of the algorithm.
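A schematic of such an alternating scheme is sketched below; the forward operator A_of, the Tikhonov regularizer, the geometry bounds, and all names are placeholders and do not reproduce the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.sparse.linalg import lsqr

def alternating_recon(y, A_of, g0, n_pixels, beta=1e-2, n_outer=10, g_bounds=(-0.1, 0.1)):
    """Alternate between (i) a regularized linear least squares step in the image x with the
    geometry parameters g fixed, and (ii) a bounded nonlinear least squares step in g with x fixed.
    A_of(g) must return the system matrix (or scipy LinearOperator) for geometry g."""
    g = np.asarray(g0, dtype=float)
    x = np.zeros(n_pixels)
    for _ in range(n_outer):
        # Image step: min_x ||A(g) x - y||^2 + beta ||x||^2, solved by damped LSQR.
        x = lsqr(A_of(g), y, damp=np.sqrt(beta), x0=x)[0]
        # Geometry step: bounded nonlinear least squares in the geometry parameters.
        g = least_squares(lambda p: A_of(p).dot(x) - y, g, bounds=g_bounds).x
    return x, g
```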
We consider the Cauchy problem for a second-order nonlinear evolution equation in a Hilbert space. This equation is an abstract generalization of the Ball integro-differential equation. The general nonlinear case is considered with respect to the terms of the equation that involve the square of the norm of a gradient. A three-layer semi-discrete scheme is proposed in order to find an approximate solution. In this scheme, the nonlinear terms that depend on the gradient are approximated by means of an integral mean. We show that the solution of the nonlinear discrete problem, together with the corresponding difference analogue of its first-order derivative, is uniformly bounded. For the solution of the corresponding linear discrete problem, high-order a priori estimates are obtained by using two-variable Chebyshev polynomials. Based on these estimates, we prove the stability of the nonlinear discrete problem. For smooth solutions, we provide error estimates for the approximate solution. An iteration method is applied in order to find an approximate solution at each time step. The convergence of the iteration process is proved.
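As a generic illustration only (the paper's exact scheme is not reproduced here), a three-layer semi-discretization in time with step $\tau$ typically approximates the time derivatives at level $t_k$ by symmetric differences,
\[
u''(t_k) \approx \frac{u^{k+1} - 2u^{k} + u^{k-1}}{\tau^{2}},
\qquad
u'(t_k) \approx \frac{u^{k+1} - u^{k-1}}{2\tau},
\]
with the gradient-dependent nonlinearity evaluated through an integral mean rather than at a single time level, as indicated in the abstract.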
This paper is concerned with inverse acoustic scattering problems for an obstacle or a cavity with a sound-soft or a sound-hard boundary. A direct imaging method relying on the boundary conditions is proposed for reconstructing the shape of the obstacle or cavity. First, the scattered fields are approximated by Fourier-Bessel functions using measurements on a closed curve. Then, the indicator functions are built from superpositions of the total fields, or of their derivatives, corresponding to the incident point sources. We prove that the indicator functions vanish only on the boundary of the obstacle or cavity. Numerical examples are included to demonstrate the effectiveness of the method.
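A small numpy/scipy sketch of the first ingredient is given below: fitting field samples on a closed curve by a truncated Fourier-Bessel expansion $\sum_{|n|\le N} c_n J_n(kr)e^{in\theta}$ via least squares. The truncation order, wavenumber, and function name are illustrative assumptions.

```python
import numpy as np
from scipy.special import jv

def fourier_bessel_fit(points, values, k, N):
    """Least-squares fit of field samples by sum_{|n|<=N} c_n J_n(k r) exp(i n theta).

    points: (M, 2) measurement locations on a closed curve; values: (M,) complex samples.
    """
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0])
    orders = np.arange(-N, N + 1)
    A = jv(orders[None, :], k * r[:, None]) * np.exp(1j * orders[None, :] * theta[:, None])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return orders, coeffs
```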