For the discretization of the convective term in the Navier-Stokes equations (NSEs), the commonly used convective formulation (CONV) does not conserve energy if the divergence constraint is only weakly enforced. In this paper, we apply the skew-symmetrization technique of [B. Cockburn, G. Kanschat and D. Sch\"{o}tzau, Math. Comp., 74 (2005), pp. 1067-1095] to conforming finite element methods, which restores energy conservation for CONV. The crucial idea is to replace the discrete advective velocity in CONV with an $H(\operatorname{div})$-conforming divergence-free approximation. We prove that the modified convective formulation also conserves linear momentum, helicity, 2D enstrophy, and total vorticity in appropriate senses, and that its Picard-type linearization conserves these quantities as well. Under the assumption $\boldsymbol{u}\in L^{2}(0,T;\boldsymbol{W}^{1,\infty}(\Omega))$, we show that the Gronwall constant in the error estimates does not explicitly depend on the Reynolds number. Long-time numerical simulations show that the linearized, modified convective formulation performs similarly to the EMAC formulation and outperforms the usual skew-symmetric formulation (SKEW).
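To make the modification concrete, here is a minimal sketch in standard notation (the projection symbol $\Pi_h^{\mathrm{div}}$ is our notation for the $H(\operatorname{div})$-conforming divergence-free approximation, not necessarily the paper's): with $c(\boldsymbol{w};\boldsymbol{u},\boldsymbol{v})=(\boldsymbol{w}\cdot\nabla\boldsymbol{u},\boldsymbol{v})$ the convective trilinear form, the modified term reads
$$ c_h(\boldsymbol{u}_h;\boldsymbol{u}_h,\boldsymbol{v}_h)=\big((\Pi_h^{\mathrm{div}}\boldsymbol{u}_h)\cdot\nabla\boldsymbol{u}_h,\,\boldsymbol{v}_h\big),\qquad \nabla\cdot\big(\Pi_h^{\mathrm{div}}\boldsymbol{u}_h\big)=0\ \text{pointwise}. $$
Because the advecting field is exactly divergence-free, testing with $\boldsymbol{v}_h=\boldsymbol{u}_h$ and integrating by parts makes the convective contribution vanish identically, which is the mechanism behind the restored energy conservation.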
In this paper, both semidiscrete and fully discrete finite element methods are analyzed for the penalized two-dimensional unsteady Navier-Stokes equations with nonsmooth initial data. The first-order backward Euler method is applied for the time discretization, whereas a conforming finite element method is used for the spatial discretization. Optimal $L^2$ error estimates for the semidiscrete as well as the fully discrete approximations of the velocity and of the pressure are derived under realistic assumptions on the data. The main ingredients in the proof are the appropriate exploitation of the inverse of the penalized Stokes operator, negative-norm estimates, and time-weighted estimates. Numerical examples that confirm our theoretical results are discussed at the end.
A singularly perturbed parabolic problem of convection-diffusion type with a discontinuous initial condition is examined. An analytic function is identified which matches the discontinuity in the initial condition and also satisfies the homogeneous parabolic differential equation associated with the problem. The difference between this analytical function and the solution of the parabolic problem is approximated numerically, using an upwind finite difference operator combined with an appropriate layer-adapted mesh. The numerical method is shown to be parameter-uniform. Numerical results are presented to illustrate the theoretical error bounds established in the paper.
In this paper, we propose Nesterov Accelerated Shuffling Gradient (NASG), a new algorithm for convex finite-sum minimization problems. Our method integrates classical Nesterov acceleration momentum with different shuffling sampling schemes. We show that our algorithm achieves an improved rate of $\mathcal{O}(1/T)$ under unified shuffling schemes, where $T$ is the number of epochs. This rate is better than that of other shuffling gradient methods in the convex regime. Our convergence analysis requires neither a bounded-domain assumption nor a bounded-gradient condition. For randomized shuffling schemes, we improve the convergence bound further. Under a suitable initial condition, we show that our method converges faster in a small neighborhood of the solution. Numerical simulations demonstrate the efficiency of our algorithm.
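As an illustration only, here is a minimal sketch of the kind of update the abstract describes: one shuffled incremental pass over the component gradients per epoch, combined with a Nesterov-style extrapolation applied at epoch anchors. The anchor placement, step size, and momentum weight below are plausible assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(w, i):
    # Gradient of the i-th component f_i(w) = 0.5 * (a_i^T w - b_i)^2.
    return (A[i] @ w - b[i]) * A[i]

def nasg_sketch(epochs=50, lr=0.01):
    w = np.zeros(d)
    w_prev = w.copy()
    for t in range(1, epochs + 1):
        perm = rng.permutation(n)                  # random reshuffling each epoch
        y = w + (t - 1) / (t + 2) * (w - w_prev)   # Nesterov extrapolation at the epoch anchor (assumed placement)
        v = y.copy()
        for i in perm:                             # one incremental pass over the shuffled data
            v -= lr / n * grad_i(v, i)
        w_prev, w = w, v
    return w

w_hat = nasg_sketch()
print(np.linalg.norm(A @ w_hat - b))  # residual of the least-squares toy problem
```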
In high-dimensional regression, we attempt to estimate a parameter vector $\beta_0\in\mathbb{R}^p$ from $n\lesssim p$ observations $\{(y_i,x_i)\}_{i\leq n}$, where $x_i\in\mathbb{R}^p$ is a vector of predictors and $y_i$ is a response variable. A well-established approach uses convex regularizers to promote specific structures (e.g. sparsity) of the estimate $\widehat{\beta}$, while allowing for practical algorithms. Theoretical analysis implies that convex penalization schemes have nearly optimal estimation properties in certain settings. However, in general the gaps between statistically optimal estimation (with unbounded computational resources) and convex methods are poorly understood. We show that when the statistician has very simple structural information about the distribution of the entries of $\beta_0$, a large gap frequently exists between the best performance achieved by any convex regularizer satisfying a mild technical condition and either (i) the optimal statistical error or (ii) the statistical error achieved by optimal approximate message passing algorithms. Remarkably, a gap occurs at high enough signal-to-noise ratio if and only if the distribution of the coordinates of $\beta_0$ is not log-concave. These conclusions follow from an analysis of standard Gaussian designs. Our lower bounds are expected to be generally tight, and we prove tightness under certain conditions.
An evolving surface finite element discretisation is analysed for the evolution of a closed two-dimensional surface governed by a system coupling a generalised forced mean curvature flow and a reaction--diffusion process on the surface, inspired by a gradient flow of a coupled energy. Two algorithms are proposed, both based on a system coupling the diffusion equation to evolution equations for geometric quantities in the velocity law for the surface. One of the numerical methods is proved to converge with optimal order in the $H^1$ norm for finite elements of degree at least two. We present numerical experiments illustrating the convergence behaviour and demonstrating the qualitative properties of the flow: preservation of mean convexity, loss of convexity, weak maximum principles, and the occurrence of self-intersections.
The CUR decomposition is a technique for low-rank approximation that selects small subsets of the columns and rows of a given matrix to use as bases for its column and row spaces. It has recently attracted much interest, as it has several advantages over traditional low-rank decompositions based on orthonormal bases. These include the preservation of properties such as sparsity or non-negativity, the ability to interpret data, and reduced storage requirements. The problem of finding the skeleton sets that minimize the norm of the residual error is known to be NP-hard, but classical pivoting schemes such as column-pivoted QR tend to work well in practice. When combined with randomized dimension-reduction techniques, classical pivoting-based methods become particularly effective and have proven capable of very rapidly computing approximate CUR decompositions of large, potentially sparse, matrices. Another class of popular algorithms for computing CUR decompositions draws the columns and rows randomly from the full index sets, using specialized probability distributions based on leverage scores. Such sampling-based techniques are particularly appealing for very large-scale problems and are well supported by theoretical performance guarantees. This manuscript provides a comparative study of the various randomized algorithms for computing CUR decompositions that have recently been proposed. Additionally, it proposes some modifications and simplifications to the existing algorithms that lead to faster execution times.
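For orientation, here is a minimal sketch of the leverage-score sampling approach mentioned above, using exact SVD-based scores on a small dense matrix; the algorithms surveyed in the manuscript use randomized approximations and differ in how the middle factor is formed.

```python
import numpy as np

def cur_leverage_sketch(X, k, c, r, rng=np.random.default_rng(0)):
    """Sample c columns and r rows of X with probabilities from rank-k leverage scores."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    col_scores = np.sum(Vt[:k] ** 2, axis=0)    # column leverage scores from top-k right singular vectors
    row_scores = np.sum(U[:, :k] ** 2, axis=1)  # row leverage scores from top-k left singular vectors
    cols = rng.choice(X.shape[1], size=c, replace=False, p=col_scores / col_scores.sum())
    rows = rng.choice(X.shape[0], size=r, replace=False, p=row_scores / row_scores.sum())
    C, R = X[:, cols], X[rows, :]
    U_mid = np.linalg.pinv(C) @ X @ np.linalg.pinv(R)  # least-squares-optimal middle factor for fixed C, R
    return C, U_mid, R

# Toy usage on an approximately low-rank matrix.
X = np.random.default_rng(1).standard_normal((100, 80)) @ np.random.default_rng(2).standard_normal((80, 60))
C, U, R = cur_leverage_sketch(X, k=10, c=20, r=20)
print(np.linalg.norm(X - C @ U @ R) / np.linalg.norm(X))  # relative residual
```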
The aim of this work is to provide the first strong convergence result for the numerical approximation of a general time-fractional second-order stochastic partial differential equation involving a Caputo derivative in time of order $\alpha\in(\frac 12, 1)$ and driven simultaneously by a multiplicative standard Brownian motion and an additive fractional Brownian motion (fBm) with Hurst parameter $H\in(\frac 12, 1)$, which is more realistic for modeling random effects on the transport of particles in a medium with thermal memory. We prove existence and uniqueness results, and perform the spatial discretization using the finite element method and the temporal discretization using a fractional exponential integrator scheme. We provide temporal and spatial convergence proofs for our fully discrete scheme, and the results show that the convergence orders depend on the regularity of the initial data, the power of the fractional derivative, and the Hurst parameter $H$.
We obtain new equitightness and $C([0,T];L^p(\mathbb{R}^N))$-convergence results for numerical approximations of generalized porous medium equations of the form $$ \partial_tu-\mathfrak{L}[\varphi(u)]=g\qquad\text{in $\mathbb{R}^N\times(0,T)$}, $$ where $\varphi:\mathbb{R}\to\mathbb{R}$ is continuous and nondecreasing, and $\mathfrak{L}$ is a local or nonlocal diffusion operator. Our results include slow diffusions, strongly degenerate Stefan problems, and fast diffusions above a critical exponent. These results improve the previous $C([0,T];L_{\text{loc}}^p(\mathbb{R}^N))$-convergence obtained in a series of papers on the topic by the authors. To have equitightness and global $L^p$-convergence, some additional restrictions on $\mathfrak{L}$ and $\varphi$ are needed. Most commonly used symmetric operators $\mathfrak{L}$ are still included: the Laplacian, fractional Laplacians, and other generators of symmetric L\'evy processes with some fractional moment. We also discuss extensions to nonlinear, possibly strongly degenerate convection-diffusion equations.
The capacity of finite state channels (FSCs) with feedback has been shown to be the limit of a sequence of multi-letter expressions. Despite many efforts, a closed-form single-letter capacity characterization is unknown to date. In this paper, the feedback capacity is studied from a fundamental algorithmic point of view by addressing the question of whether or not the capacity can be algorithmically computed. To this end, the concept of Turing machines is used, which provides fundamental performance limits for digital computers. It is shown that the feedback capacity of FSCs is not Banach-Mazur computable and therefore not Borel-Turing computable. As a consequence, either the achievability or the converse part is not Banach-Mazur computable, which means that there are computable FSCs for which it is impossible to find computable tight upper and lower bounds. Furthermore, it is shown that the feedback capacity cannot be characterized as the maximization of a finite-letter formula of entropic quantities.
Graph convolution is the core of most Graph Neural Networks (GNNs) and usually approximated by message passing between direct (one-hop) neighbors. In this work, we remove the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC). GDC leverages generalized graph diffusion, examples of which are the heat kernel and personalized PageRank. It alleviates the problem of noisy and often arbitrarily defined edges in real graphs. We show that GDC is closely related to spectral-based models and thus combines the strengths of both spatial (message passing) and spectral methods. We demonstrate that replacing message passing with graph diffusion convolution consistently leads to significant performance improvements across a wide range of models on both supervised and unsupervised tasks and a variety of datasets. Furthermore, GDC is not limited to GNNs but can trivially be combined with any graph-based model or algorithm (e.g. spectral clustering) without requiring any changes to the latter or affecting its computational complexity. Our implementation is available online.
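As a rough illustration, the personalized-PageRank instance of generalized graph diffusion admits a closed form, $S = \alpha\,(I - (1-\alpha)T)^{-1}$ with $T$ a normalized transition matrix. The sketch below computes it densely on a toy graph and sparsifies small entries; a practical implementation would use sparse and approximate computations, and details such as the normalization choice are assumptions here.

```python
import numpy as np

def gdc_ppr(A, alpha=0.15, eps=1e-4):
    """Dense personalized-PageRank diffusion of an adjacency matrix (sketch).

    Computes S = alpha * (I - (1 - alpha) * T)^{-1}, where T is the
    symmetrically normalized adjacency; small entries are then zeroed
    to keep the resulting convolution spatially localized."""
    n = A.shape[0]
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    T = D_inv_sqrt @ A @ D_inv_sqrt                    # symmetric transition matrix
    S = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * T)
    S[S < eps] = 0.0                                   # sparsification threshold
    return S

# Toy usage on a random undirected graph.
A = (np.random.default_rng(0).random((6, 6)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
S = gdc_ppr(A)
# S replaces A as the message-passing matrix in a GNN layer, e.g. H' = S @ H @ W.
```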