Time-space fractional Bloch-Torrey equations have been developed to investigate the relationship between diffusion and fractional-order dynamics. In this paper, we first propose a second-order scheme for this equation by employing the recently proposed L2-type formula [A.~A.~Alikhanov, C.~Huang, Appl.~Math.~Comput.~(2021) 126545]. Then, we prove the stability and the convergence of this scheme. Based on this numerical scheme, an L2-type all-at-once system is derived. In order to solve this system in a parallel-in-time fashion, a bilateral preconditioning technique is designed according to the special structure of the system. We theoretically show that the condition number of the preconditioned matrix is uniformly bounded by a constant for the time fractional order $\alpha \in (0,0.3624)$. Numerical results are reported to show the efficiency of our method.
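As a minimal sketch of the two-sided (bilateral) preconditioning pattern used for such an all-at-once system, one can solve $P_L^{-1} A P_R^{-1} y = P_L^{-1} b$ and recover $x = P_R^{-1} y$ with GMRES; the matrices below are placeholders, not the actual L2-type Bloch-Torrey discretization or the preconditioner constructed in the paper.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Placeholder all-at-once matrix A and right-hand side b (illustrative only;
# the actual matrix couples all time levels of the fractional scheme).
rng = np.random.default_rng(0)
n = 200
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Hypothetical left/right preconditioners; the paper's bilateral preconditioner
# instead exploits the special structure of the L2-type system.
P_L = np.diag(np.diag(A))
P_R = np.eye(n)

def apply_prec_operator(y):
    # Action of P_L^{-1} A P_R^{-1} on a vector.
    return np.linalg.solve(P_L, A @ np.linalg.solve(P_R, y))

op = LinearOperator((n, n), matvec=apply_prec_operator)
y, info = gmres(op, np.linalg.solve(P_L, b))
x = np.linalg.solve(P_R, y)           # recover the original unknown
print(info, np.linalg.norm(A @ x - b))
\end{verbatim}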
We describe a new approach to derive numerical approximations of boundary conditions for high-order accurate finite-difference approximations. The approach, called the Local Compatibility Boundary Condition (LCBC) method, uses boundary conditions and compatibility boundary conditions derived from the governing equations, as well as interior and boundary grid values, to construct a local polynomial, whose degree matches the order of accuracy of the interior scheme, centered at each boundary point. The local polynomial is then used to derive a discrete formula for each ghost point in terms of the data. This approach leads to centered approximations that are generally more accurate and stable than one-sided approximations. Moreover, the stencil approximations are local since they do not couple to neighboring ghost-point values, as can occur with traditional compatibility conditions. The local polynomial is derived using continuous operators and derivatives, which enables the automatic construction of stencil approximations at different orders of accuracy. The LCBC method is developed here for problems governed by second-order partial differential equations, and it is verified for a wide range of sample problems, both time-dependent and time-independent, in two space dimensions and for schemes up to sixth-order accuracy.
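A one-dimensional toy sketch of the underlying idea (assumed here for illustration; the actual LCBC construction is two-dimensional, uses compatibility conditions derived from the PDE, and reaches sixth order) is to fit a local polynomial through interior grid values and the boundary condition and then evaluate it at the ghost point:
\begin{verbatim}
import numpy as np

# Toy 1D illustration: second-order interior scheme with one ghost point.
# Data: Dirichlet value u(0) = g and interior values u(h), u(2h).
h = 0.1
g = 1.0                      # boundary value u(0)
u1, u2 = 1.2, 1.5            # interior grid values at x = h, 2h (made-up numbers)

# Fit the quadratic p(x) = a0 + a1*x + a2*x^2 through the three data points.
x_data = np.array([0.0, h, 2 * h])
V = np.vander(x_data, 3, increasing=True)
coeffs = np.linalg.solve(V, np.array([g, u1, u2]))

# Evaluate the local polynomial at the ghost point x = -h.
ghost = coeffs @ np.array([1.0, -h, (-h) ** 2])
print(ghost)
\end{verbatim}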
This paper studies quasi-Newton methods for solving strongly-convex-strongly-concave saddle point problems (SPP). We propose a variant of the general greedy Broyden family update for SPP, which has an explicit local superlinear convergence rate of ${\mathcal O}\big(\big(1-\frac{1}{n\kappa^2}\big)^{k(k-1)/2}\big)$, where $n$ is the dimension of the problem, $\kappa$ is the condition number, and $k$ is the number of iterations. The design and analysis of the proposed algorithm are based on estimating the square of the indefinite Hessian matrix, which differs from classical quasi-Newton methods in convex optimization. We also present two specific Broyden family algorithms with BFGS-type and SR1-type updates, which enjoy the faster local convergence rate of $\mathcal O\big(\big(1-\frac{1}{n}\big)^{k(k-1)/2}\big)$.
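For reference, the classical BFGS-type member of the Broyden family (in the form commonly used in the greedy quasi-Newton literature, not necessarily the exact update of this paper) moves a Hessian estimate $G \succ 0$ toward a target matrix $A \succ 0$ along a direction $u$ via
\[
\mathrm{BFGS}(G, A, u) \;=\; G - \frac{G u u^{\top} G}{u^{\top} G u} + \frac{A u u^{\top} A}{u^{\top} A u};
\]
in the saddle point setting the update is applied, roughly speaking, to an estimate of the square of the (indefinite) Hessian rather than to the Hessian itself.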
Given a target distribution $\mu \propto e^{-\mathcal{H}}$ to sample from, with Hamiltonian $\mathcal{H}$, in this paper we propose and analyze new Metropolis-Hastings sampling algorithms that target an alternative distribution $\mu^f_{1,\alpha,c} \propto e^{-\mathcal{H}^{f}_{1,\alpha,c}}$, where $\mathcal{H}^{f}_{1,\alpha,c}$ is a landscape-modified Hamiltonian which we introduce explicitly. The advantage of the Metropolis dynamics which targets $\mu^f_{1,\alpha,c}$ is that it enjoys reduced critical height, described by the threshold parameter $c$, the function $f$, and a penalty parameter $\alpha \geq 0$ that controls the state-dependent effect. First, we investigate the case of fixed $\alpha$: we propose a self-normalized estimator that corrects for the bias of sampling, and prove asymptotic convergence results and a Chernoff-type bound for the proposed estimator. Next, we consider the case of annealing the penalty parameter $\alpha$. We prove strong ergodicity and bounds on the total variation mixing time of the resulting non-homogeneous chain subject to appropriate assumptions on the decay of $\alpha$. We illustrate the proposed algorithms by comparing their mixing times with those of the original Metropolis dynamics on statistical physics models, including the ferromagnetic Ising model on the hypercube or the complete graph and the $q$-state Potts model on the two-dimensional torus. In these cases, the mixing times of the classical Glauber dynamics are at least exponential in the system size, as the critical height grows at least linearly with the size, while the proposed annealing algorithm, with appropriate choices of $f$, $c$, and annealing schedule on $\alpha$, mixes rapidly with at most polynomial dependence on the size. The crux of the proof rests on the observation that the reduced critical height can be bounded independently of the system size, which gives rise to rapid mixing.
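A minimal Metropolis-Hastings sketch targeting a modified energy is given below; the landscape-modified Hamiltonian $\mathcal{H}^{f}_{1,\alpha,c}$ is left as a placeholder (\texttt{H\_mod}), since its explicit form is introduced in the paper and not reproduced here, and the proposal kernel, state space, and parameters are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def H(x):
    # Placeholder original Hamiltonian (illustrative double-well potential).
    return (x**2 - 1.0) ** 2

def H_mod(x, alpha=1.0, c=0.5):
    # Placeholder for the landscape-modified Hamiltonian H^f_{1,alpha,c};
    # replace with the actual transformation defined in the paper.
    return H(x)

def metropolis(H_target, x0=0.0, n_steps=10_000, step=0.5):
    # Random-walk Metropolis chain targeting exp(-H_target).
    x, samples = x0, []
    for _ in range(n_steps):
        y = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < H_target(x) - H_target(y):
            x = y
        samples.append(x)
    return np.array(samples)

samples = metropolis(H_mod)
\end{verbatim}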
In this paper, we develop a high order residual distribution (RD) method for solving steady state conservation laws in a novel Hermite weighted essentially non-oscillatory (HWENO) framework recently developed in [24]. In particular, we design a high order HWENO integration for the integrals of the source term and fluxes based on the point values of the solution and its spatial derivatives, and the principles of residual distribution schemes are adapted to obtain steady state solutions. Two advantages of the novel HWENO framework have been shown in [24]. First, compared with the traditional HWENO framework, the proposed method does not need to introduce additional auxiliary equations to update the derivatives of the unknown variable; it simply computes them from the current point values of the solution and its old spatial derivatives, which saves computational storage and CPU time and thereby improves the computational efficiency of the traditional HWENO framework. Second, compared with the traditional WENO method, the reconstruction stencil of the HWENO methods is more compact, the boundary treatment is simpler, and the numerical errors are smaller on the same grid. Thus, the scheme remains compact also when designed for higher order accuracy, compared with the scheme proposed by Chou and Shu in [11]. Extensive numerical experiments for one- and two-dimensional scalar equations and systems confirm the high order accuracy and good quality of our scheme.
In this paper, we propose a $C^{0}$ interior penalty method for the $m$th-Laplace equation on a bounded Lipschitz polyhedral domain in $\mathbb{R}^{d}$, where $m$ and $d$ can be any positive integers. The standard $H^{1}$-conforming piecewise $r$-th order polynomial space is used to approximate the exact solution $u$, where $r$ can be any integer greater than or equal to $m$. Unlike the interior penalty method in [T.~Gudi and M.~Neilan, {\em An interior penalty method for a sixth-order elliptic equation}, IMA J. Numer. Anal., \textbf{31(4)} (2011), pp. 1734--1753], we avoid computing $D^{m}$ of the numerical solution on each element and high-order normal derivatives of the numerical solution along mesh interfaces. Therefore our method can be easily implemented. After proving that the discrete $H^{m}$-norm is bounded by the natural energy semi-norm associated with our method, we obtain stability and optimal convergence with respect to the discrete $H^{m}$-norm. Numerical experiments validate our theoretical estimates.
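For concreteness, the model problem is of the form
\[
(-\Delta)^{m} u = f \quad \text{in } \Omega, \qquad \frac{\partial^{j} u}{\partial n^{j}} = 0 \quad \text{on } \partial\Omega, \quad j = 0, 1, \dots, m-1,
\]
stated here with homogeneous Dirichlet-type boundary conditions for illustration; the precise boundary conditions treated in the paper may differ.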
Multiple antenna arrays play a key role in wireless networks, not only for communications but also for localization and sensing applications. The use of large antenna arrays at high carrier frequencies (in the mmWave range) pushes towards a propagation regime in which the wavefront is no longer planar but spherical. This makes it possible to infer the position and orientation of a transmitting source from the received signal without the need for multiple anchor nodes located at known positions. To understand the fundamental limits of large antenna arrays for localization, this paper combines wave propagation theory with estimation theory and computes the Cram\'er-Rao Bound (CRB) for the estimation of the source position on the basis of the three Cartesian components of the electric field, observed over a rectangular surface area. The problem is referred to as holographic positioning and is formulated by taking into account the radiation angular pattern of the transmitting source, which is typically ignored in standard signal processing models. We assume that the source is a Hertzian dipole and address the holographic positioning problem in both cases, that is, with and without a priori knowledge of its orientation. To simplify the analysis and gain further insights, we also consider the case in which the dipole is located on the line perpendicular to the surface center. Numerical and asymptotic results are given to quantify the CRBs and the effect of various system parameters on the ultimate estimation accuracy. It turns out that surfaces of practical size may guarantee centimeter-level accuracy in the mmWave bands.
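Recall that for an unbiased estimator $\hat{\boldsymbol{p}}$ of the source position $\boldsymbol{p}$, the Cram\'er-Rao bound follows from the standard Fisher information inequality
\[
\mathrm{cov}(\hat{\boldsymbol{p}}) \succeq \mathbf{I}(\boldsymbol{p})^{-1}, \qquad
[\mathbf{I}(\boldsymbol{p})]_{ij} = \mathbb{E}\!\left[\frac{\partial \ln p(\mathbf{y};\boldsymbol{p})}{\partial p_{i}}\,\frac{\partial \ln p(\mathbf{y};\boldsymbol{p})}{\partial p_{j}}\right],
\]
where $p(\mathbf{y};\boldsymbol{p})$ denotes the likelihood of the observed field; the specific likelihood and parametrization used in the paper are not reproduced here.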
In this work, we investigate the recovery of a parameter in a diffusion process given by the order of derivation in time for a class of diffusion type equations, including both classical and time-fractional diffusion equations, from the flux measurement observed at one point on the boundary. The mathematical model for time-fractional diffusion equations involves a Djrbashian-Caputo fractional derivative in time. We prove a uniqueness result in an unknown medium (e.g., diffusion coefficients, obstacle, initial condition and source), i.e., the recovery of the order of derivation in a diffusion process having several pieces of unknown information. The proof relies on the analyticity of the solution at large time, asymptotic decay behavior, strong maximum principle of the elliptic problem and suitable application of the Hopf lemma. Further we provide an easy-to-implement reconstruction algorithm based on a nonlinear least-squares formulation, and several numerical experiments are presented to complement the theoretical analysis.
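A schematic version of such a nonlinear least-squares reconstruction, with a toy forward map standing in for the numerical solution of the (fractional) diffusion problem (which is where the actual work lies), might look as follows:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

t_obs = np.linspace(0.1, 5.0, 50)   # observation times at the boundary point

def flux_model(alpha, t):
    # Toy stand-in for the forward map: in practice this solves the
    # time-fractional diffusion equation with order alpha and returns
    # the flux at the observation point on the boundary.
    return np.exp(-t ** alpha)

# Synthetic data generated with a "true" order alpha = 0.7 plus noise.
rng = np.random.default_rng(0)
flux_obs = flux_model(0.7, t_obs) + 1e-3 * rng.standard_normal(t_obs.size)

def residual(a):
    return flux_model(a[0], t_obs) - flux_obs

# Nonlinear least-squares recovery of the order of derivation.
result = least_squares(residual, x0=[0.5], bounds=(1e-3, 1.0))
print(result.x)   # should be close to 0.7
\end{verbatim}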
The convergence rates of iterative methods for solving a linear system $\mathbf{A} x = b$ typically depend on the condition number of the matrix $\mathbf{A}$. Preconditioning is a common way of speeding up these methods by reducing that condition number in a computationally inexpensive way. In this paper, we revisit the decades-old problem of how to best improve $\mathbf{A}$'s condition number by left or right diagonal rescaling. We make progress on this problem in several directions. First, we provide new bounds for the classic heuristic of scaling $\mathbf{A}$ by its diagonal values (a.k.a. Jacobi preconditioning). We prove that this approach reduces $\mathbf{A}$'s condition number to within a quadratic factor of the best possible scaling. Second, we give a solver for structured mixed packing and covering semidefinite programs (MPC SDPs) which computes a constant-factor optimal scaling for $\mathbf{A}$ in $\widetilde{O}(\text{nnz}(\mathbf{A}) \cdot \text{poly}(\kappa^\star))$ time; this matches the cost of solving the linear system after scaling up to a $\widetilde{O}(\text{poly}(\kappa^\star))$ factor. Third, we demonstrate that a sufficiently general width-independent MPC SDP solver would imply near-optimal runtimes for the scaling problems we consider, and natural variants concerned with measures of average conditioning. Finally, we highlight connections of our preconditioning techniques to semi-random noise models, as well as applications in reducing risk in several statistical regression models.
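The classic heuristic can be sketched in a few lines: symmetrically rescale $\mathbf{A}$ by its diagonal and compare condition numbers (a dense toy example for illustration; the paper's contribution concerns the theory and structured/sparse settings).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Build a symmetric positive definite matrix with badly scaled rows/columns.
n = 100
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
scales = 10.0 ** rng.uniform(-3, 3, size=n)
A = np.diag(scales) @ A @ np.diag(scales)

# Jacobi (diagonal) preconditioning: D^{-1/2} A D^{-1/2}.
d = np.sqrt(np.diag(A))
A_jacobi = A / np.outer(d, d)

print(np.linalg.cond(A), np.linalg.cond(A_jacobi))
\end{verbatim}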
In this paper, we investigate the imaging inverse problem by employing an infinite-dimensional Bayesian inference method with a general fractional total variation-Gaussian (GFTG) prior. This novel hybrid prior is a development of the total variation-Gaussian (TG) prior and the non-local total variation-Gaussian (NLTG) prior: it combines the Gaussian prior with a general fractional total variation regularization term that covers a wide class of fractional derivatives. Compared to the TG prior, the GFTG prior can effectively reduce the staircase effect and enhance the texture details of the images, while still admitting a complete theoretical analysis in the infinite-dimensional limit, similarly to the TG prior. The separability of the state space in Bayesian inference is essential for the development of probability and integration theory in the infinite-dimensional setting, so we first introduce the corresponding general fractional Sobolev space and prove that it is a separable Banach space. Thereafter, we establish the well-posedness and finite-dimensional approximation of the posterior measure of the Bayesian inverse problem based on the GFTG prior, and then samples are drawn from the posterior distribution using the preconditioned Crank-Nicolson (pCN) algorithm. Finally, we give several numerical examples of image reconstruction under linear and nonlinear models to illustrate the advantages of the proposed prior.
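For reference, one pCN step for a posterior with Gaussian prior $N(0,\mathcal{C})$ and negative log-likelihood $\Phi$ takes the form sketched below (a finite-dimensional toy version with illustrative choices of $\mathcal{C}$, $\Phi$, and step parameter $\beta$; in the paper the GFTG prior enters through the definition of $\Phi$ and the reference measure):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, beta = 50, 0.2
C_sqrt = np.eye(n)            # illustrative square root of the prior covariance

def Phi(u):
    # Placeholder negative log-likelihood (data misfit); illustrative only.
    return 0.5 * np.sum((u - 1.0) ** 2)

def pcn_step(u):
    # Preconditioned Crank-Nicolson proposal: prior-reversible, dimension-robust.
    v = np.sqrt(1.0 - beta ** 2) * u + beta * (C_sqrt @ rng.standard_normal(n))
    if np.log(rng.uniform()) < Phi(u) - Phi(v):
        return v              # accept
    return u                  # reject

u = np.zeros(n)
for _ in range(1000):
    u = pcn_step(u)
\end{verbatim}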
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
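The local smoothing underlying DRS replaces $f$ by its Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ with $Z \sim \mathcal{N}(0, I)$, whose gradient admits the zeroth-order Monte Carlo estimator sketched below (a stand-alone illustration under these assumptions; DRS itself combines such estimates with the distributed primal-dual communication scheme of the paper).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def smoothed_grad(f, x, gamma=0.1, n_samples=100):
    # Monte Carlo estimate of the gradient of the Gaussian smoothing
    # f_gamma(x) = E[f(x + gamma Z)], using the identity
    # grad f_gamma(x) = E[(f(x + gamma Z) - f(x)) Z] / gamma.
    d = x.size
    Z = rng.standard_normal((n_samples, d))
    vals = np.array([f(x + gamma * z) for z in Z]) - f(x)
    return (vals[:, None] * Z).mean(axis=0) / gamma

# Example with a non-smooth convex function f(x) = ||x||_1.
f = lambda x: np.abs(x).sum()
print(smoothed_grad(f, np.array([1.0, -2.0, 0.0])))
\end{verbatim}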