We propose a monotone discretization for the integral fractional Laplace equation on bounded Lipschitz domains with the homogeneous Dirichlet boundary condition. The method is inspired by the quadrature-based finite difference method of Huang and Oberman, but is defined on unstructured grids in arbitrary dimensions with a more flexible domain for approximating the singular integral. The scale of the singular integral domain depends not only on the local grid size, but also on the distance to the boundary, since the H\"{o}lder coefficient of the solution deteriorates as the boundary is approached. By using a discrete barrier function that also reflects the distance to the boundary, we show optimal pointwise convergence rates in terms of the H\"{o}lder regularity of the data on both quasi-uniform and graded grids. Several numerical examples are provided to illustrate the sharpness of the theoretical results.
Flexible sparsity regularization means stably approximating sparse solutions of operator equations by using coefficient-dependent penalizations. We propose and analyse a general nonconvex approach in this respect, from both theoretical and numerical perspectives. Namely, we show convergence of the regularization method and establish convergence properties of two majorization approaches for the associated nonconvex problems. We also test a monotone algorithm on an academic example where the operator is an $M$-matrix, and on a time-dependent optimal control problem, pointing out the advantages of employing variable penalties over a fixed penalty.
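The coefficient-dependent (variable) penalization and majorization ideas above can be illustrated with a generic majorize-minimize (iteratively reweighted least-squares) sketch for a smoothed nonconvex $\ell_p$ penalty. This is a minimal sketch under illustrative assumptions (a random matrix operator and the parameter choices `lam`, `p`, `eps`), not the scheme analysed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))          # illustrative linear operator
x_true = np.zeros(50); x_true[[3, 17, 40]] = [1.0, -2.0, 1.5]
b = A @ x_true                             # data generated from a sparse signal
lam, p, eps = 0.1, 0.5, 1e-8               # illustrative penalty parameters

def objective(x):
    # smoothed nonconvex l_p penalty, p < 1 promotes sparsity
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum((x ** 2 + eps) ** (p / 2))

x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-norm initial guess
obj = [objective(x)]
for _ in range(30):
    # concavity of t -> t^{p/2} gives a quadratic majorizer with
    # coefficient-dependent weights, i.e. a variable penalty per coefficient
    w = lam * p * (x ** 2 + eps) ** (p / 2 - 1)
    x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
    obj.append(objective(x))
```

By the majorize-minimize guarantee, the objective values `obj` are nonincreasing, which is the monotonicity one wants from such a scheme.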
Stochastic gradient descent with momentum (SGDM) is the dominant algorithm in many optimization scenarios, including convex optimization instances and non-convex neural network training. Yet, in the stochastic setting, momentum interferes with gradient noise, often requiring specific step size and momentum choices in order to guarantee convergence, let alone acceleration. Proximal point methods, on the other hand, have gained much attention due to their numerical stability and resilience to imperfect tuning. Their stochastic accelerated variants, though, have received limited attention: how momentum interacts with the stability of (stochastic) proximal point methods remains largely unstudied. To address this, we focus on the convergence and stability of the stochastic proximal point algorithm with momentum (SPPAM), and show that SPPAM enjoys a faster linear convergence rate than the stochastic proximal point algorithm (SPPA), with a better contraction factor, under proper hyperparameter tuning. In terms of stability, we show that SPPAM depends on problem constants more favorably than SGDM, allowing a wider range of step sizes and momentum values that lead to convergence.
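A stochastic proximal point step with a heavy-ball-style momentum term can be sketched on a toy noiseless least-squares problem, where the single-sample proximal step has a closed form. The update placement of the momentum term, the step size `eta`, and the momentum `beta` are illustrative assumptions, not the tuned quantities from the abstract's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
x_true = rng.standard_normal(5)
b = A @ x_true                      # noiseless (interpolation) regime

def prox_step(y, a_i, b_i, eta):
    # closed-form proximal step for the single-sample loss
    # f_i(z) = 0.5 * (a_i @ z - b_i)**2:
    # argmin_z f_i(z) + ||z - y||^2 / (2 * eta)
    return y - (eta / (1.0 + eta * a_i @ a_i)) * (a_i @ y - b_i) * a_i

x_prev = x = np.zeros(5)
eta, beta = 0.5, 0.2                # illustrative step size / momentum
for _ in range(2000):
    i = rng.integers(len(b))        # sample one data point
    y = x + beta * (x - x_prev)     # momentum (extrapolation) step
    x_prev, x = x, prox_step(y, A[i], b[i], eta)
```

Note that the proximal step solves a small implicit subproblem rather than taking a raw gradient step, which is the source of the stability advantage the abstract refers to.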
We investigate a gradient flow structure of the Ginzburg--Landau--Devonshire (GLD) model for anisotropic ferroelectric materials by reconstructing its energy form. We show that the modified energy form admits at least one minimizer. Under some regularity assumptions for the electric charge distribution and the initial polarization field, we prove that the $L^2$ gradient flow structure has a unique solution. To simulate the GLD model numerically, we propose an energy-stable semi-implicit time-stepping scheme and a hybridizable discontinuous Galerkin method for space discretization. Some numerical tests are provided to verify the stability and convergence of the proposed numerical scheme as well as some properties of ferroelectric materials.
We derive a priori error estimates for the Godunov method applied to the multidimensional Euler system of gas dynamics. To this end, we apply the relative energy principle and estimate the distance between the numerical solution and the strong solution. This also yields estimates of the $L^2$-norm of the errors in density, momentum and entropy. Under the assumption that the numerical density and energy are bounded, we obtain a convergence rate of $1/2$ for the relative energy in the $L^1$-norm. Further, under the assumption that the total variation of the numerical solution is bounded, we obtain a first-order convergence rate for the relative energy in the $L^1$-norm. Consequently, the numerical solutions (density, momentum and entropy) converge in the $L^2$-norm with a convergence rate of $1/2$. The numerical results presented for Riemann problems are consistent with our theoretical analysis.
We introduce an unfitted finite element method with Lagrange multipliers to study an Eulerian time-stepping scheme for moving domain problems, applied to a model problem in which the domain motion is itself part of the solution. We consider the heat equation as the partial differential equation (PDE) in the bulk domain, and the domain motion is described by an ordinary differential equation (ODE), coupled to the bulk PDE through the transfer of forces at the moving interface. The discretisation is based on an unfitted finite element discretisation on a time-independent mesh. The method-of-lines time discretisation is enabled by an implicit extension of the bulk solution through additional stabilisation, as introduced by Lehrenfeld & Olshanskii (ESAIM: M2AN, 53:585-614, 2019). The analysis of the coupled problem relies on the Lagrange-multiplier formulation, the fact that the Lagrange-multiplier solution is equal to the normal stress at the interface, and the fact that the motion of the interface is given through rigid-body motion. This paper covers the complete stability analysis of the method and an error estimate in the energy norm. This includes the dynamic error in the domain motion resulting from the discretised ODE and the forces from the discretised PDE. To the best of our knowledge, this is the first error analysis of this type of coupled moving domain problem in a fully Eulerian framework. Numerical examples illustrate the theoretical results.
Our objective is to stabilise and accelerate the time-domain boundary element method (TDBEM) for the three-dimensional wave equation. To overcome the potential time instability, we considered using the Burton--Miller-type boundary integral equation (BMBIE) instead of the ordinary boundary integral equation (OBIE), which consists of the single- and double-layer potentials. In addition, we introduced a smooth temporal basis, i.e. the B-spline temporal basis of order $d$, whereas $d=1$ was used together with the OBIE in a previous study \cite{takahashi2014}. Corresponding to these new techniques, we generalised the interpolation-based fast multipole method that was developed in \cite{takahashi2014}. In particular, we constructed the multipole-to-local (M2L) formula so that even for $d\ge 2$ we can maintain the computational complexity of the entire algorithm, i.e. $O(N_{\rm s}^{1+\delta} N_{\rm t})$, where $N_{\rm s}$ and $N_{\rm t}$ denote the number of boundary elements and the number of time steps, respectively, and $\delta$ is theoretically estimated as $1/3$ or $1/2$. The numerical examples indicated that the BMBIE is indispensable for solving the homogeneous Dirichlet problem, but the order $d$ cannot exceed 1 owing to cancellation of significant digits when calculating the corresponding layer potentials. Regarding the homogeneous Neumann problem, the previous TDBEM based on the OBIE with $d=1$ can be unstable, whereas it was found that the BMBIE with $d=2$ can be stable and accurate. The present study will enhance the usefulness of the TDBEM for 3D scalar wave problems.
High-index saddle dynamics provides an effective means to compute saddle points of any index and to construct the solution landscape. In this paper we prove optimal-order error estimates for the Euler discretization of high-index saddle dynamics with respect to the time step size, which remains untreated in the literature. We overcome the main difficulties that lie in the strong nonlinearity of the saddle dynamics and in the orthonormalization procedure in the numerical scheme, which is uncommon in standard discretizations of differential equations. The derived techniques are further extended to study the generalized high-index saddle dynamics for non-gradient systems and provide theoretical support for the accuracy of numerical implementations.
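The interplay of explicit Euler stepping and the orthonormalization procedure mentioned above can be sketched on a toy quadratic $f(x)=\tfrac12 x^\top D x$ with an index-2 saddle at the origin: the position follows a reflected gradient flow, while a frame of two directions is re-orthonormalized (here via QR) after each Euler step. The step size, iteration count, and diagonal test Hessian are illustrative assumptions, not the setting of the abstract's analysis:

```python
import numpy as np

D = np.diag([1.0, -1.0, -2.0])    # Hessian: one stable + two unstable directions
grad = lambda x: D @ x            # gradient of f(x) = 0.5 * x @ D @ x

x = np.array([1.0, 0.8, 0.6])     # initial position
# initial orthonormal frame of 2 directions (index-2 search)
V = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 2)))[0]
dt = 0.05                         # illustrative Euler step size
for _ in range(800):
    # reflected gradient: ascend along span(V), descend elsewhere
    x = x - dt * (grad(x) - 2.0 * V @ (V.T @ grad(x)))
    # Euler step on the directions, then re-orthonormalize via QR
    V = np.linalg.qr(V - dt * (D @ V))[0]
```

In this linear toy model the frame update reduces to a subspace iteration, so `V` aligns with the two unstable eigendirections and `x` converges to the saddle at the origin.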
Several decades ago the Proximal Point Algorithm (PPA) started to attract long-lasting interest from both the abstract operator theory and numerical optimization communities. Even in modern applications, researchers still use proximal minimization theory to design scalable algorithms that overcome nonsmoothness. Remarkable works such as \cite{Fer:91,Ber:82constrained,Ber:89parallel,Tom:11} established tight relations between the convergence behavior of PPA and the regularity of the objective function. In this manuscript we derive the nonasymptotic iteration complexity of exact and inexact PPA for minimizing convex functions under $\gamma$-H\"{o}lderian growth: $\BigO{\log(1/\epsilon)}$ (for $\gamma \in [1,2]$) and $\BigO{1/\epsilon^{\gamma - 2}}$ (for $\gamma > 2$). In particular, we recover well-known results on PPA: finite convergence for sharp minima and linear convergence for quadratic growth, even in the presence of inexactness. However, without taking into account the concrete computational effort paid for computing each PPA iteration, any iteration complexity remains abstract and purely informative. Therefore, using an inner (proximal) gradient/subgradient method subroutine that computes an inexact PPA iteration, we then show novel computational complexity bounds on a restarted inexact PPA, available when no information on the growth of the objective function is known. In the numerical experiments we confirm the practical performance and implementability of our framework.
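The finite-convergence phenomenon under sharp minima ($\gamma = 1$) can be seen on the simplest sharp function $f(x)=|x|$, whose exact proximal step is soft-thresholding. The starting point and regularization parameter below are illustrative choices:

```python
def ppa_step(x, lam):
    # exact PPA step for f(x) = |x|:
    # prox_{lam*|.|}(x) = argmin_z |z| + (z - x)^2 / (2*lam), i.e. soft-thresholding
    return max(abs(x) - lam, 0.0) * (1.0 if x >= 0 else -1.0)

x, lam, iters = 1.0, 0.3, 0       # illustrative starting point and parameter
while x != 0.0:
    x = ppa_step(x, lam)
    iters += 1
# the sharp minimum x = 0 is reached *exactly* after finitely many steps
```

Each step shrinks $|x|$ by the fixed amount $\lambda$ until the threshold absorbs the iterate, so the minimizer is hit exactly (here in 4 steps) rather than only approached asymptotically, in contrast with plain gradient descent on the same function.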
There has been growing interest in unsupervised domain adaptation (UDA) to alleviate the data scalability issue, but existing works usually focus on classifying discrete labels treated as independent categories. However, in many tasks (e.g., medical diagnosis), the labels are discrete and ordered. UDA for ordinal classification requires imposing a non-trivial ordinal distribution prior on the latent space. To this end, a partially ordered set (poset) is defined to constrain the latent vector. Instead of the typical i.i.d. Gaussian latent prior, in this work a recursively conditional Gaussian (RCG) set is proposed for modeling the ordered constraint, which admits a tractable joint distribution prior. Furthermore, we are able to control the density of content vectors that violate the poset constraint by a simple "three-sigma rule". We explicitly disentangle the cross-domain images into a shared ordinal content space, induced by the ordinal prior, and two separate source/target ordinal-unrelated spaces, and self-training operates exclusively on the shared space for ordinal-aware domain alignment. Extensive experiments on UDA medical diagnosis and facial age estimation demonstrate its effectiveness.
Many problems in signal processing reduce to nonparametric function estimation. We propose a new methodology, piecewise convex fitting (PCF), and give a two-stage adaptive estimate. In the first stage, the number and locations of the change points are estimated using strong smoothing. In the second stage, a constrained smoothing spline fit is performed with the smoothing level chosen to minimize the MSE. The imposed constraint is that a single change point occurs in a region about each empirical change point of the first-stage estimate. This constraint is equivalent to requiring that the third derivative of the second-stage estimate has a single sign in a small neighborhood about each first-stage change point. We sketch how PCF may be applied to signal recovery, instantaneous frequency estimation, surface reconstruction, image segmentation, spectral estimation and multivariate adaptive regression.