Consider a measure $\mu$ on a CAT($k$) space $M$ that is stratified as a finite union of manifolds and admits local exponential maps near the Fr\'echet mean $\bar\mu$. Any such measure yields a continuous "tangential collapse" from the tangent cone of $M$ at $\bar\mu$ to a vector space that preserves the Fr\'echet mean, restricts to an isometry on the "fluctuating cone" of directions in which the Fr\'echet mean can vary under perturbation of $\mu$, and preserves angles between arbitrary and fluctuating tangent vectors at the Fr\'echet mean.
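For reference, the Fr\'echet mean is the (assumed unique) minimizer of the expected squared distance to the data,
\[
\bar\mu \;=\; \operatorname*{arg\,min}_{x \in M} \int_M d(x,y)^2 \, d\mu(y).
\]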
Twenty years after the discovery of the F5 algorithm, Gr\"obner bases with signatures remain challenging to understand and to adapt to different settings. This contrasts with Buchberger's algorithm, which can be bent in many directions while keeping correctness and termination obvious. I propose an axiomatic approach to Gr\"obner bases with signatures, with the purpose of uncoupling the theory from the algorithms and of giving general results applicable in many different settings (e.g., Gr\"obner bases for submodules, F4-style reduction, noncommutative rings, non-Noetherian settings, etc.).
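To make the contrast concrete, the following is a minimal sketch of the classical, signature-free Buchberger loop in SymPy (naive pair handling, default lex order; for illustration only, not the signature-based machinery axiomatized here):

```python
from sympy import symbols, expand, cancel, lcm, LT, reduced

def buchberger(F, *gens):
    G = list(F)
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        lt_i, lt_j = LT(G[i], *gens), LT(G[j], *gens)
        l = lcm(lt_i, lt_j)
        # S-polynomial: cancel the leading terms of G[i] and G[j]
        s = expand(cancel(l / lt_i) * G[i] - cancel(l / lt_j) * G[j])
        if s == 0:
            continue
        _, r = reduced(s, G, *gens)   # remainder of S-polynomial mod G
        if r != 0:                    # non-zero remainder: enlarge the basis
            pairs += [(len(G), k) for k in range(len(G))]
            G.append(r)
    return G

x, y = symbols('x y')
print(buchberger([x**2 + y, x*y - 1], x, y))
```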
We consider the surface Stokes equation on a smooth closed hypersurface in three-dimensional space. For the discretization of this problem, a generalization of the surface finite element method (SFEM) of Dziuk and Elliott, combined with a Hood-Taylor pair of finite element spaces, has been used in the literature; we call this method Hood-Taylor-SFEM. It uses a penalty technique to weakly satisfy the tangentiality constraint. In this paper we present a discretization error analysis of this method, resulting in optimal discretization error bounds in an energy norm. We also address linear algebra aspects related to (pre)conditioning of the system matrix.
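A typical penalty approach of this kind (sketched here in generic form; the paper's precise bilinear forms and parameter choices may differ) augments the weak formulation with a term penalizing the normal component of the discrete velocity,
\[
a_h(\mathbf{u}_h,\mathbf{v}_h) \;+\; \eta \int_\Gamma (\mathbf{u}_h\cdot\mathbf{n})(\mathbf{v}_h\cdot\mathbf{n})\,ds,
\]
where $\mathbf{n}$ is the surface normal and $\eta$ a mesh-dependent penalty parameter, so that tangentiality $\mathbf{u}_h\cdot\mathbf{n}\approx 0$ is enforced only weakly.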
Since its introduction to computational geometry by Alt and Godau in 1992, the Fr\'echet distance has been a mainstay of algorithmic research on curve similarity computations. The focus of this research has been on comparing polygonal curves, with the notable exception of an algorithm for the decision problem for planar piecewise smooth curves due to Rote (2007). We present an algorithm for the decision problem for piecewise smooth curves that is conceptually simpler and that naturally extends to the first algorithm for the problem for piecewise smooth curves in $\mathbb{R}^d$. We assume that the algorithm is given two continuous curves, each consisting of a sequence of $m$, resp.\ $n$, smooth pieces, where each piece belongs to a sufficiently well-behaved class of curves, such as the set of algebraic curves of bounded degree. We introduce a decomposition of the free space diagram into a controlled number of pieces that can be used to solve the decision problem similarly to the polygonal case, in $O(mn)$ time, leading to a computation of the Fr\'echet distance that runs in $O(mn\log(mn))$ time. Furthermore, we study approximation algorithms for piecewise smooth curves that are also $c$-packed for some fixed value $c$. We adapt the existing framework for $(1+\epsilon)$-approximations and show that an approximate decision can be computed in $O(cn/\epsilon)$ time for any $\epsilon > 0$.
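The $m \times n$ grid structure underlying the free space diagram is easiest to see in the simpler discrete Fr\'echet distance, sketched below (a related but distinct variant of the problem, shown only to illustrate the $O(mn)$ dynamic-programming layout):

```python
import numpy as np

def discrete_frechet(P, Q):
    # O(mn) dynamic program over the same m-by-n grid that underlies
    # the free space diagram in the continuous setting.
    m, n = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    ca = np.full((m, n), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(m):
        for j in range(n):
            if i == j == 0:
                continue
            prev = min(ca[i-1, j] if i else np.inf,
                       ca[i, j-1] if j else np.inf,
                       ca[i-1, j-1] if i and j else np.inf)
            ca[i, j] = max(prev, d[i, j])
    return ca[-1, -1]

P = np.array([[0, 0], [1, 0], [2, 0]], float)
Q = np.array([[0, 1], [1, 1], [2, 1]], float)
print(discrete_frechet(P, Q))   # 1.0
```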
For finite element approximations of transport phenomena, it is often necessary to apply a form of limiting to ensure that the discrete solution remains well-behaved and satisfies physical constraints. However, these limiting procedures are typically performed at discrete nodal locations, which is not sufficient to ensure the robustness of the scheme when the solution must be evaluated at arbitrary locations (e.g., for adaptive mesh refinement, remapping in arbitrary Lagrangian--Eulerian solvers, overset meshes, etc.). In this work, a novel limiting approach for discontinuous Galerkin methods is presented that ensures that the solution is continuously bounds-preserving (i.e., across the entire solution polynomial) for any arbitrary choice of basis, approximation order, and mesh element type. Through a modified formulation for the constraint functionals, the proposed approach requires only the solution of a single spatial scalar minimization problem per element, for which a highly efficient numerical optimization procedure is presented. The efficacy of this approach is shown in numerical experiments by enforcing continuous constraints in high-order unstructured discontinuous Galerkin discretizations of hyperbolic conservation laws, ranging from scalar transport with maximum-principle-preserving constraints to compressible gas dynamics with positivity-preserving constraints.
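As a toy illustration of continuous (rather than nodal) bounds enforcement, the following one-dimensional sketch scales the high-order Legendre modes toward the cell mean until the entire polynomial lies within the bounds; this is a Zhang--Shu-style scaling used for illustration, not the constraint-functional formulation proposed in the paper:

```python
import numpy as np
from numpy.polynomial import legendre as L

def continuous_scaling_limiter(c, lo, hi, eps=1e-12):
    # Scale the high-order Legendre modes of a 1D solution toward the
    # cell mean c[0] so the *entire* polynomial on [-1, 1] stays in
    # [lo, hi], not just its nodal values. Assumes lo <= c[0] <= hi.
    mean = c[0]
    # candidate extrema: endpoints plus real critical points in [-1, 1]
    crit = L.legroots(L.legder(c)) if len(c) > 2 else np.array([])
    crit = np.asarray(crit, dtype=complex)
    crit = crit.real[np.abs(crit.imag) < 1e-9]
    xs = np.concatenate(([-1.0, 1.0], crit[(crit >= -1.0) & (crit <= 1.0)]))
    vals = L.legval(xs, c)
    umin, umax = vals.min(), vals.max()
    theta = min(1.0,
                (mean - lo) / max(mean - umin, eps),
                (hi - mean) / max(umax - mean, eps))
    out = np.array(c, dtype=float)
    out[1:] *= theta                     # linear squeeze toward the mean
    return out

c = np.array([0.5, 0.4, 0.3])            # exceeds 1 at x = 1 (value 1.2)
print(continuous_scaling_limiter(c, 0.0, 1.0))
```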
Let $G$ be a graph of order $n$. A classical upper bound for the domination number $\gamma(G)$ of a graph $G$ with no isolated vertices is $\lfloor\frac{n}{2}\rfloor$. However, for several families of graphs we have $\gamma(G) \le \lfloor\sqrt{n}\rfloor$, which is a substantially better bound. In this paper, we give a necessary condition for a graph $G$ to satisfy $\gamma(G) \le \lfloor\sqrt{n}\rfloor$, and several sufficient conditions. We also present a characterization of all connected graphs $G$ of order $n$ with $\gamma(G) = \lfloor\sqrt{n}\rfloor$. Further, we prove that for a graph $G$ not satisfying $\operatorname{rad}(G)=\operatorname{diam}(G)=\operatorname{rad}(\overline{G})=\operatorname{diam}(\overline{G})=2$, deciding whether $\gamma(G) \le \lfloor\sqrt{n}\rfloor$ or $\gamma(\overline{G}) \le \lfloor\sqrt{n}\rfloor$ can be done in polynomial time. We conjecture that this decision problem can be solved in polynomial time for any graph $G$.
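For small graphs the bound is easy to check directly; the sketch below (our own brute-force helper, exponential time, illustration only) verifies $\gamma(G) \le \lfloor\sqrt{n}\rfloor$ on the Petersen graph using NetworkX:

```python
import networkx as nx
from itertools import combinations
from math import isqrt

def domination_number(G):
    # Smallest k such that some k-set S dominates V(G), i.e. every
    # vertex is in S or adjacent to S. Exponential; small graphs only.
    V = list(G)
    for k in range(1, len(V) + 1):
        for S in combinations(V, k):
            dominated = set(S)
            for v in S:
                dominated.update(G[v])
            if len(dominated) == len(V):
                return k

G = nx.petersen_graph()                        # n = 10, gamma(G) = 3
print(domination_number(G) <= isqrt(len(G)))   # 3 <= floor(sqrt(10)) = 3
```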
Quasiperiodic systems, which are connected to irrational numbers, are space-filling structures without decay or translation invariance. Accurately recovering these systems, especially in non-smooth cases, presents a significant challenge in numerical computation. In this paper, we propose a new algorithm, the finite points recovery (FPR) method, which is applicable to both smooth and non-smooth cases, to address this challenge. The FPR method first establishes a homomorphism between the lower-dimensional definition domain of the quasiperiodic function and a higher-dimensional torus, then recovers the global quasiperiodic system by employing an interpolation technique with finite points in the definition domain, without dimensional lifting. Furthermore, we develop accurate and efficient strategies for selecting these finite points according to the arithmetic properties of irrational numbers. The corresponding mathematical theory, convergence analysis, and computational complexity analysis of the point selection are presented. Numerical experiments demonstrate the effectiveness and superiority of the FPR approach in recovering both smooth quasiperiodic functions and piecewise constant Fibonacci quasicrystals, whereas existing spectral methods encounter difficulties in accurately recovering non-smooth quasiperiodic functions.
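The torus structure underlying the method can be illustrated in one dimension: a quasiperiodic function with frequencies $1$ and $\sqrt{2}$ is the trace of a periodic function on a two-dimensional torus along an irrational line. The snippet below demonstrates this lift numerically (an illustration of the underlying structure only, not the FPR algorithm itself):

```python
import numpy as np

# f(x) = cos(x) + cos(sqrt(2) x) is the trace of the periodic function
# F(y1, y2) = cos(y1) + cos(y2) along the irrational line (x, sqrt(2) x).
alpha = np.sqrt(2.0)
F = lambda y1, y2: np.cos(y1) + np.cos(y2)
x = np.linspace(0.0, 50.0, 2001)
f_direct = np.cos(x) + np.cos(alpha * x)
f_lifted = F(x % (2*np.pi), (alpha*x) % (2*np.pi))  # evaluate on the torus
print(np.max(np.abs(f_direct - f_lifted)))  # agrees to machine precision
```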
The mutual-visibility problem in a graph $G$ asks for the cardinality of a largest set of vertices $S\subseteq V(G)$ such that for any two vertices $x,y\in S$ there is a shortest $x,y$-path $P$ whose internal vertices all avoid $S$. In this case $x$ and $y$ are said to be visible with respect to $S$, or $S$-visible for short. Variations of this problem are known, based on extending the visibility requirement to vertices inside and/or outside $S$; these variations are called the total, outer and dual mutual-visibility problems. This work focuses on studying the corresponding four visibility parameters in graphs of diameter two, establishing bounds and/or closed formulae for these parameters. The mutual-visibility problem in the Cartesian product of two complete graphs is equivalent to (an instance of) the celebrated Zarankiewicz problem. Here we study the dual and outer mutual-visibility problems for the Cartesian product of two complete graphs, as well as all the mutual-visibility problems for the direct product of such graphs. We also study all the mutual-visibility problems for the line graphs of complete and complete bipartite graphs. As a consequence of this study, we present several relationships between the mentioned problems and some instances of the classical Tur\'an problem. Moreover, we study the visibility problems for cographs and for several non-trivial diameter-two graphs of minimum size.
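The defining property is straightforward to test by brute force on small graphs; the following NetworkX sketch (our own helper, not from the paper) checks whether a given set is a mutual-visibility set:

```python
import networkx as nx
from itertools import combinations

def is_mutual_visibility_set(G, S):
    # Every pair x, y in S must admit a shortest x,y-path whose
    # internal vertices all avoid S (checked by brute force).
    S = set(S)
    return all(any(not (set(p[1:-1]) & S)
                   for p in nx.all_shortest_paths(G, x, y))
               for x, y in combinations(S, 2))

G = nx.cycle_graph(5)
print(is_mutual_visibility_set(G, [0, 1, 3]))  # True
print(is_mutual_visibility_set(G, [0, 1, 2]))  # False: 0, 2 only see via 1
```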
The multigrid V-cycle method is a popular method for solving systems of linear equations. It computes an approximate solution by using smoothing on the fine levels and solving a system of linear equations on the coarsest level. The choice of coarsest-level solver depends on the size and difficulty of the problem. If the size permits, it is typical to use a direct method based on LU or Cholesky decomposition. In settings with large coarsest-level problems, approximate solvers such as iterative Krylov subspace methods, or direct methods based on low-rank approximation, are often used instead. The accuracy of the coarsest-level solver is typically determined based on the experience of the users with the particular problems and methods. In this paper we present an approach to analyzing the effects of approximate coarsest-level solves on the convergence of the V-cycle method for symmetric positive definite problems. Using these results, we derive a coarsest-level stopping criterion through which we may control the difference between the approximation computed by a V-cycle method with an approximate coarsest-level solver and the approximation that would be computed if the coarsest-level problems were solved exactly. The coarsest-level stopping criterion may thus be set up such that the V-cycle method converges to a chosen finest-level accuracy in (nearly) the same number of V-cycle iterations as the V-cycle method with an exact coarsest-level solver. We also use the theoretical results to discuss how the convergence of the V-cycle method may be affected by the choice of tolerance in a coarsest-level stopping criterion based on the relative residual norm.
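The setting can be illustrated with a two-grid cycle for the 1D Poisson problem in which the coarsest-level system is solved by CG up to a relative residual tolerance `tol_c`, the quantity whose effect on convergence the paper analyzes. This is a generic sketch with our own naming, not the paper's algorithm; the `rtol` keyword assumes SciPy >= 1.12:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def poisson(n):
    # 1D Dirichlet Laplacian on n interior points (up to scaling)
    return diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')

def prolongation(nc):
    # linear interpolation from nc coarse to n = 2*nc + 1 fine points
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        P[2*j, j], P[2*j + 1, j], P[2*j + 2, j] = 0.5, 1.0, 0.5
    return P

def two_grid(A, b, x, Ac, P, tol_c, nu=2, omega=2/3):
    D = A.diagonal()
    for _ in range(nu):                       # pre-smoothing (weighted Jacobi)
        x = x + omega * (b - A @ x) / D
    r_c = P.T @ (b - A @ x)                   # restrict the residual
    e_c, _ = cg(Ac, r_c, rtol=tol_c)          # inexact coarsest-level solve
    x = x + P @ e_c                           # prolongate the correction
    for _ in range(nu):                       # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

nc = 31
A, P = poisson(2 * nc + 1), prolongation(nc)
Ac = P.T @ (A @ P)                            # Galerkin coarse operator
b, x = np.ones(2 * nc + 1), np.zeros(2 * nc + 1)
for _ in range(10):
    x = two_grid(A, b, x, Ac, P, tol_c=1e-2)  # vary tol_c to see its effect
print(np.linalg.norm(b - A @ x))
```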
In this contribution we deal with Gaussian quadrature rules based on orthogonal polynomials associated with the weight function $w(x)= x^{\alpha} e^{-x}$ supported on an interval $(0,z)$, $z>0$. The modified Chebyshev algorithm is used to test the accuracy of the computation of the coefficients of the three-term recurrence relation and of the zeros and weights, as well as their dependence on the parameter $z$.
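Once the recurrence coefficients are available, the zeros and weights follow from the eigendecomposition of the associated Jacobi matrix (the standard Golub--Welsch step). The sketch below assumes the coefficients are given; in the paper they are produced by the modified Chebyshev algorithm:

```python
import numpy as np

def golub_welsch(a, b, mu0):
    # Gauss nodes/weights from the three-term recurrence
    # p_{k+1}(x) = (x - a_k) p_k(x) - b_k p_{k-1}(x), mu0 = integral of w.
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, V = np.linalg.eigh(J)               # Jacobi matrix eigenproblem
    return nodes, mu0 * V[0, :]**2

# sanity check against Gauss-Legendre on (-1, 1): w(x) = 1, mu0 = 2
n = 5
a = np.zeros(n)
b = np.array([0.0] + [k**2 / (4*k**2 - 1) for k in range(1, n)])
x, w = golub_welsch(a, b, 2.0)
print(np.allclose(x, np.polynomial.legendre.leggauss(n)[0]))
```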
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which uses convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an exploratory attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close together. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
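A generic pairwise deep-hashing objective of the kind described, pulling similar pairs together, pushing dissimilar pairs apart, and penalizing deviation from binary codes, can be sketched in PyTorch as follows (a generic loss for illustration, not the SRH loss itself):

```python
import torch
import torch.nn.functional as F

def pairwise_hash_loss(codes, sim, margin=2.0, q_weight=0.1):
    # codes: (N, bits) real-valued CNN outputs; sim: (N, N) 0/1 matrix.
    # Pull similar pairs together, push dissimilar pairs at least
    # `margin` apart, and push each output toward the binary set {-1, 1}.
    d = torch.cdist(codes, codes)
    pos = sim * d.pow(2)
    neg = (1 - sim) * F.relu(margin - d).pow(2)
    quant = (codes.abs() - 1).pow(2).mean()    # quantization penalty
    return (pos + neg).mean() + q_weight * quant

codes = torch.randn(8, 32, requires_grad=True)  # 8 images, 32-bit codes
sim = (torch.arange(8)[:, None] % 2 == torch.arange(8)[None, :] % 2).float()
pairwise_hash_loss(codes, sim).backward()
```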