We propose a unified theoretical framework to examine the energy dissipation properties at all stages of explicit exponential Runge-Kutta (EERK) methods for gradient flow problems. The core of the framework is to construct the differential form of an EERK method by using the difference coefficients of the method and the so-called discrete orthogonal convolution kernels. As the main result, we prove that an EERK method unconditionally preserves the original energy dissipation law if the associated differentiation matrix is positive semi-definite. We also introduce a simple indicator, the average dissipation rate, to evaluate the overall energy dissipation rate of a multi-stage EERK method, so that one can choose proper parameters in parameterized EERK methods or compare different families of EERK methods. Several existing EERK methods from the literature are evaluated from the perspectives of preserving the original energy dissipation law and of the energy dissipation rate. Numerical examples are included to support our theory.
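As a minimal illustration of the setting (my own toy example, not the paper's framework), the sketch below applies the simplest one-stage EERK method, exponential Euler, to a discrete semilinear gradient flow and monitors the discrete energy along the iteration; the model problem, mesh, and step size are all assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import expm

# Toy gradient flow (an assumption of this sketch): u' = -L u + N(u) with
# N(u) = u - u^3 and energy E(u) = 0.5 u^T L u + sum(0.25 u^4 - 0.5 u^2),
# where L is a 1D Dirichlet Laplacian on [0, 1].
n, tau = 64, 0.1
h = 1.0 / n
L = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2

def energy(u):
    return 0.5 * u @ L @ u + np.sum(0.25 * u**4 - 0.5 * u**2)

def phi1(A):
    # phi_1(A) = A^{-1}(e^A - I), via the standard augmented-matrix trick:
    # expm([[A, I], [0, 0]]) has top-right block phi_1(A)
    m = A.shape[0]
    M = np.zeros((2 * m, 2 * m))
    M[:m, :m] = A
    M[:m, m:] = np.eye(m)
    return expm(M)[:m, m:]

E_tauL = expm(-tau * L)
P = phi1(-tau * L)

rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal(n)
energies = [energy(u)]
for _ in range(50):
    # exponential Euler: the simplest one-stage EERK method
    u = E_tauL @ u + tau * P @ (u - u**3)
    energies.append(energy(u))

print("max energy increment:", max(np.diff(energies)))  # expected non-positive
```

Here the strong linear damping dominates the mild nonlinearity, so the discrete energy is observed to decay monotonically, consistent with the dissipation law the framework analyzes.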
We give, for the first time, a complete explicit description of the quasi-cyclic structure of all classical finite generalized quadrangles. Using these descriptions, we construct families of quasi-cyclic LDPC codes derived from the point-line incidence matrices of the quadrangles by explicitly calculating quasi-cyclic generator and parity-check matrices for these codes. This allows us to construct parity-check and generator matrices of all such codes of length up to 400,000. These codes cover a wide range of transmission rates, are easy and fast to implement, and perform close to the Shannon limit with no visible error floors. We also include performance data for these codes. Furthermore, we give a complete explicit description of the quasi-cyclic structure of the point-line and point-hyperplane incidences of the finite projective and affine spaces.
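The GQ-derived codes themselves require the explicit incidence descriptions from the paper; the following generic sketch only illustrates the quasi-cyclic format they share: a parity-check matrix assembled from circulant blocks, so that a small table of shift exponents determines the whole matrix. The block size and exponents below are arbitrary choices of this sketch.

```python
import numpy as np

# Generic quasi-cyclic parity-check construction (illustrative, not the
# GQ-derived codes): each b x b block is a circulant permutation matrix
# given by a shift exponent, so the 2x3 exponent table encodes all of H.
def circulant(shift, b):
    return np.roll(np.eye(b, dtype=int), shift, axis=1)

b = 5
exponents = [[0, 1, 2],
             [0, 2, 4]]
H = np.block([[circulant(e, b) for e in row] for row in exponents])
print(H.shape)  # (10, 15), stored via just the exponent table
```

This block-circulant form is what enables the compact storage and fast shift-register encoding mentioned above.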
We consider the application of the generalized Convolution Quadrature (gCQ) to approximate the solution of an important class of sectorial problems. The gCQ is a generalization of Lubich's Convolution Quadrature (CQ) that allows for variable time steps. The available stability and convergence theory for the gCQ requires unrealistic regularity assumptions on the data, which fail in many applications of interest, such as the approximation of subdiffusion equations. It is well known that for insufficiently smooth data the original CQ, with uniform steps, suffers an order reduction close to the singularity. We generalize the analysis of the gCQ to data satisfying realistic regularity assumptions and provide sufficient conditions for stability and convergence on arbitrary sequences of time points. We consider the particular case of graded meshes and show how to choose them optimally, according to the behaviour of the data. An important advantage of the gCQ method is that it admits a fast, memory-efficient implementation. We describe how this fast and oblivious gCQ can be implemented and illustrate our theoretical results with several numerical experiments.
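A graded mesh of the kind referred to above can be sketched in a few lines; the grading law t_n = T (n/N)^gamma is the standard choice (an assumption of this sketch), with larger gamma clustering points near the initial singularity.

```python
import numpy as np

# Graded time mesh t_n = T * (n/N)**gamma: gamma = 1 recovers uniform
# steps, gamma > 1 concentrates points near t = 0 where the solution of a
# subdiffusion problem is typically singular.
def graded_mesh(T, N, gamma):
    return T * (np.arange(N + 1) / N) ** gamma

t = graded_mesh(T=1.0, N=10, gamma=3.0)
dt = np.diff(t)
print(t[:3])   # very small first steps near the singularity
print(dt[0] < dt[-1])  # steps grow away from t = 0
```

The optimal gamma depends on the regularity of the data, which is exactly the trade-off the analysis above quantifies.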
Differential abundance analysis is a key component of microbiome studies. Although dozens of methods exist, there is currently no consensus on which should be preferred. Correctness of results in differential abundance analysis is an ambiguous concept that cannot be evaluated without simulated data, but we argue that consistency of results across datasets should be considered an essential quality of a well-performing method. We compared the performance of 14 differential abundance analysis methods on datasets from 54 taxonomic profiling studies based on 16S rRNA gene or shotgun sequencing. For each method, we examined how well the results replicated between random partitions of each dataset and between datasets from independent studies. While certain methods showed good consistency, some widely used methods produced a substantial number of conflicting findings. Overall, the highest consistency without an unnecessary reduction in sensitivity was attained by analyzing relative abundances with a non-parametric method (Wilcoxon test or ordinal regression model) or with linear regression (MaAsLin2). Comparable performance was also attained by analyzing the presence/absence of taxa with logistic regression.
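As a toy illustration (synthetic counts, not the study's data) of one of the best-performing recipes above, testing relative abundances with the Wilcoxon rank-sum test:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic example: 20 samples per group, 3 taxa; taxon 0 is shifted in
# group B. Note that because relative abundances are compositional, the
# unshifted taxa also move slightly in relative terms.
rng = np.random.default_rng(1)
counts_a = rng.poisson(lam=[50, 20, 30], size=(20, 3))  # group A
counts_b = rng.poisson(lam=[80, 20, 30], size=(20, 3))  # group B

rel_a = counts_a / counts_a.sum(axis=1, keepdims=True)
rel_b = counts_b / counts_b.sum(axis=1, keepdims=True)

for taxon in range(3):
    # Wilcoxon rank-sum test = Mann-Whitney U test, per taxon
    stat, p = mannwhitneyu(rel_a[:, taxon], rel_b[:, taxon])
    print(f"taxon {taxon}: p = {p:.3g}")
```

In practice the per-taxon p-values would then be corrected for multiple testing across all taxa.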
The problems of optimal recovery of univariate functions and their derivatives are studied. To solve these problems, two variants of the truncation method are constructed, which are order-optimal both in the sense of accuracy and in the amount of Galerkin information involved. For numerical summation, we establish how the parameters characterizing the problem under consideration affect the stability of the method.
We explore a class of splitting schemes that employ implicit-explicit (IMEX) time-stepping to achieve accurate and energy-stable solutions for thin-film equations and Cahn-Hilliard models with variable mobility. The splitting incorporates a linear, constant-coefficient implicit step, facilitating efficient computational implementation. We computationally investigate the influence of the stabilizing splitting parameters on the numerical solution for various initial conditions, and we generate energy-stability plots for the proposed methods over different choices of splitting-parameter values and time-step sizes. These methods improve the accuracy of the original biharmonic-modified (BHM) approach while preserving its energy-decreasing property and achieving second-order accuracy. We present numerical experiments to illustrate the performance of the proposed methods.
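A minimal 1D sketch of this kind of splitting (my simplification: constant mobility, periodic domain, first order in time, arbitrary parameter values) for the Cahn-Hilliard equation, treating a constant-coefficient biharmonic term implicitly so each step is a diagonal solve in Fourier space:

```python
import numpy as np

# Cahn-Hilliard: u_t = Lap(u^3 - u) - eps^2 Lap^2 u.  Split with parameter
# B >= eps^2: treat -B*Lap^2 u implicitly, the remainder explicitly:
#   (u^{n+1}-u^n)/tau = -B Lap^2 u^{n+1} + (B-eps^2) Lap^2 u^n + Lap(u^3-u)^n
n, eps, B, tau = 128, 0.18, 0.05, 1e-3
x = np.linspace(0.0, 2*np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0/n)      # integer wavenumbers on [0, 2*pi)
k2, k4 = k**2, k**4
u = 0.05*np.cos(3*x)

def energy(u):
    ux = np.real(np.fft.ifft(1j*k*np.fft.fft(u)))
    return np.mean(0.5*eps**2*ux**2 + 0.25*(u**2 - 1.0)**2)

E0 = energy(u)
for _ in range(200):
    uh = np.fft.fft(u)
    rhs = uh + tau*((B - eps**2)*k4*uh - k2*np.fft.fft(u**3 - u))
    u = np.real(np.fft.ifft(rhs/(1.0 + tau*B*k4)))  # diagonal implicit solve
print(E0, energy(u))  # free energy decreases
```

The splitting parameter B plays the stabilizing role discussed above: larger B damps the explicit terms more strongly, at some cost in accuracy.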
We present a class of high-order Eulerian-Lagrangian Runge-Kutta finite volume methods that numerically solve Burgers' equation with shock formation and can be extended to general scalar conservation laws. Eulerian-Lagrangian (EL) and semi-Lagrangian (SL) methods have recently seen increased development and have become a staple for allowing large time-stepping sizes. Yet maintaining relatively large time-stepping sizes after shock formation remains quite challenging. Our proposed scheme integrates the partial differential equation over a space-time region partitioned by linear approximations to the characteristics, determined by the Rankine-Hugoniot jump condition. We trace the characteristics forward in time and present a merging procedure for the mesh cells to handle characteristics that intersect due to shocks. Following this partitioning, we write the equation in a time-differential form and evolve it with Runge-Kutta methods in a method-of-lines fashion. High-resolution methods such as ENO and WENO-AO schemes are used for spatial reconstruction. Extension to higher dimensions is done via dimensional splitting. Numerical experiments demonstrate the scheme's high-order accuracy and its ability to sharply capture post-shock solutions with large time-stepping sizes.
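For contrast, the following minimal first-order finite-volume (Godunov) solver for Burgers' equation, which is not the paper's Eulerian-Lagrangian method, shows the shock-capturing baseline and the CFL-limited time step that EL methods aim to relax:

```python
import numpy as np

# First-order Godunov scheme for u_t + (u^2/2)_x = 0 on a periodic domain,
# run past the shock-formation time of u0 = sin(x).
def godunov_flux(ul, ur):
    # exact Riemann flux for the convex flux f(u) = u^2/2
    fl, fr = 0.5*ul*ul, 0.5*ur*ur
    return np.where(ul <= ur,
                    np.where((ul <= 0) & (ur >= 0), 0.0, np.minimum(fl, fr)),
                    np.maximum(fl, fr))

n = 200
dx = 2*np.pi/n
x = (np.arange(n) + 0.5)*dx
u = np.sin(x)
t, T = 0.0, 1.5
while t < T:
    # Eulerian CFL restriction: dt ~ dx / max|u| (what EL methods relax)
    dt = min(0.5*dx/max(np.max(np.abs(u)), 1e-12), T - t)
    F = godunov_flux(np.roll(u, 1), u)      # flux at each left interface
    u = u - dt/dx*(np.roll(F, -1) - F)
    t += dt
print(np.max(np.abs(u)))  # max principle: stays <= 1 after the shock forms
```

The monotone Godunov update obeys a discrete maximum principle under this CFL condition, so the post-shock solution remains bounded by the initial data.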
This paper investigates the diffusion behavior of particles moving in stochastic flows under a structure-preserving scheme. We compute the effective diffusivity for normally diffusive random flows and establish the power law between the spatial and temporal variables in cases exhibiting anomalous diffusion. Following a Lagrangian approach, we split the corresponding stochastic differential equations (SDEs) into sub-problems and construct a one-step structure-preserving method to solve them. Using modified equation systems, we then provide a convergence analysis for the computation of the effective diffusivity and compare the structure-preserving scheme with the Euler-Maruyama scheme. We also provide an error estimate for the structure-preserving scheme when computing the power law for a series of super-diffusive random flows. Finally, we compute the effective diffusivity and the anomalous diffusion behavior for a series of 2D and 3D random fields.
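The Lagrangian estimation of effective diffusivity can be sketched with the baseline Euler-Maruyama scheme (the comparison scheme above, not the structure-preserving one); the steady cellular velocity field below is an assumption of this sketch rather than one of the paper's random flows:

```python
import numpy as np

# Evolve tracer SDEs dX = v(X) dt + sigma dW with Euler-Maruyama and
# estimate the effective diffusivity from the mean-squared displacement:
# D_eff ~ E|X_T - X_0|^2 / (2 d T) in dimension d.
rng = np.random.default_rng(0)
M, T, dt, sigma = 2000, 10.0, 1e-2, 1.0
steps = int(T/dt)
X = np.zeros((M, 2))
for _ in range(steps):
    vx = -np.cos(X[:, 0])*np.sin(X[:, 1])   # divergence-free cellular flow
    vy = np.sin(X[:, 0])*np.cos(X[:, 1])
    X[:, 0] += vx*dt + sigma*np.sqrt(dt)*rng.standard_normal(M)
    X[:, 1] += vy*dt + sigma*np.sqrt(dt)*rng.standard_normal(M)
D_eff = np.mean(np.sum(X**2, axis=1))/(2*2*T)
print("estimated D_eff:", D_eff)  # the incompressible flow enhances the
                                  # molecular diffusivity sigma^2/2 = 0.5
```

The structure-preserving scheme of the paper replaces the Euler-Maruyama step by one that respects the geometric structure of the flow, which is what improves long-time accuracy of such estimates.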
Semitopologies model consensus in distributed systems by equating the notion of a quorum -- a set of participants sufficient to make local progress -- with that of an open set. This yields a topology-like theory of consensus, but semitopologies generalise topologies, since the intersection of two quorums need not itself be a quorum. The semitopological model of consensus is naturally heterogeneous and local, just as topologies can be heterogeneous and local, and for the same reasons: points may have different quorums, and there is no restriction that open sets / quorums be uniformly generated (e.g. open sets can be something other than two-thirds majorities of the points in the space). Semiframes are an algebraic abstraction of semitopologies: they are to semitopologies as frames are to topologies. We give a notion of semifilter, which plays a role analogous to that of filters, and show how to build a semiframe out of the open sets of a semitopology, and a semitopology out of the semifilters of a semiframe. We define suitable notions of category and morphism, prove a categorical duality between (sober) semiframes and (spatial) semitopologies, and investigate well-behavedness properties of semitopologies and semiframes across the duality. Surprisingly, the structure of semiframes is not what one might initially expect from looking at semitopologies, and the canonical structure required for the duality result -- a compatibility relation *, generalising set intersection -- is also canonical for expressing well-behavedness properties. Overall, we deliver a new categorical, algebraic, abstract framework within which to study consensus in distributed systems, one that is also simply interesting to consider as a mathematical theory in its own right.
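A tiny concrete example (my own, not from the paper) of why semitopologies are not topologies: taking strict-majority quorums as the open sets, the intersection of two quorums need not be a quorum.

```python
from itertools import combinations

# 4 participants; a quorum is any strict majority (>= 3 of 4 points).
points = range(4)
quorums = {frozenset(c) for k in (3, 4) for c in combinations(points, k)}

q1, q2 = frozenset({0, 1, 2}), frozenset({1, 2, 3})
print(q1 in quorums, q2 in quorums)   # both are quorums
print((q1 & q2) in quorums)           # their intersection {1, 2} is not
```

So the open sets here are not closed under finite intersection, which is exactly the axiom semitopologies drop.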
Distributionally robust optimization has emerged as an attractive way to train robust machine learning models that capture data uncertainty and distribution shifts. Recent statistical analyses have shown that robust models built from Wasserstein ambiguity sets enjoy good generalization guarantees, breaking the curse of dimensionality. However, these results hold only in specific cases, at the cost of approximations, or under assumptions that are difficult to verify in practice. In contrast, we establish in this article exact generalization guarantees that cover all practical cases, including any transport cost function and any loss function, potentially non-convex and nonsmooth. For instance, our result applies to deep learning without requiring restrictive assumptions. We achieve this through a novel proof technique that combines nonsmooth analysis with classical concentration results. Our approach is general enough to extend to recent versions of Wasserstein/Sinkhorn distributionally robust problems that involve (double) regularization.
In many statistical modeling problems, such as classification and regression, it is common to encounter sparse and blocky coefficients. The sparse fused lasso is specifically designed to recover such sparse, blocky structured features, especially when the design matrix has ultrahigh dimension, that is, when the number of features significantly exceeds the number of samples. The quantile loss is a well-known robust loss function widely used in statistical modeling. In this paper, we propose a new sparse fused lasso classification model and develop a unified multi-block linearized alternating direction method of multipliers (ADMM) algorithm that effectively selects sparse and blocky features for both regression and classification. We prove that the algorithm converges at a linear rate. Moreover, the algorithm has a significant advantage in time complexity over existing methods for solving ultrahigh-dimensional sparse fused lasso regression and classification models, and it extends readily to various existing fused lasso models. Finally, we present numerical results on several synthetic and real-world examples, which demonstrate the robustness, scalability, and accuracy of the proposed classification model and algorithm.
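Solvers of this ADMM type are built from closed-form proximal maps; the two standard ones below (soft-thresholding for the l1 terms and the proximal map of the quantile/pinball loss) are textbook formulas, not the paper's full algorithm:

```python
import numpy as np

def soft_threshold(v, lam):
    # prox of lam*||.||_1: elementwise shrinkage toward zero
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_pinball(v, lam, tau):
    # prox of lam*rho_tau with the quantile loss rho_tau(r) = r*(tau - 1{r<0})
    return np.where(v > lam*tau, v - lam*tau,
           np.where(v < -lam*(1.0 - tau), v + lam*(1.0 - tau), 0.0))

v = np.array([-2.0, -0.1, 0.0, 0.3, 2.0])
print(soft_threshold(v, 0.5))       # [-1.5  0.   0.   0.   1.5]
print(prox_pinball(v, 1.0, 0.7))    # [-1.7  0.   0.   0.   1.3]
```

In a fused lasso ADMM, the soft-threshold is applied both to the coefficients and to their successive differences, while the pinball prox handles the quantile-loss residuals; each iteration therefore costs only elementwise operations plus matrix-vector products, which is the source of the low time complexity noted above.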