This article is about finding the limit set of banded Toeplitz matrices. Our main result is a new approach to approximating the limit set $\Lambda(b)$, where $b$ is the symbol of the banded Toeplitz matrix. The new approach is geometric and based on the formula $\Lambda(b) = \cap_{\rho \in (0, \infty)} \text{sp } T(b_\rho)$. We show that the full intersection can be approximated by the intersection for a finite number of $\rho$'s, and that intersecting polygon approximations of $\text{sp } T(b_\rho)$ yields an approximating polygon for $\Lambda(b)$ that converges to $\Lambda(b)$ in the Hausdorff metric. Further, we show that the polygon approximations of $\text{sp } T(b_\rho)$ can be slightly expanded to ensure that they contain $\text{sp } T(b_\rho)$. Taking the intersection then yields an approximating superset of $\Lambda(b)$ that converges to $\Lambda(b)$ in the Hausdorff metric and is guaranteed to contain $\Lambda(b)$. We implement the algorithm in Python and test it. It performs on par with, and in some cases better than, existing algorithms. We argue, but do not prove, that the average time complexity of the algorithm is $O(n^2 + mn\log m)$, where $n$ is the number of $\rho$'s and $m$ is the number of vertices of the polygons approximating $\text{sp } T(b_\rho)$. Further, we argue that the distance from $\Lambda(b)$ to both the approximating polygon and the approximating superset decreases as $O(1/\sqrt{k})$ on most of $\Lambda(b)$, where $k$ is the number of elementary operations required by the algorithm.
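The intersection formula suggests a simple pointwise membership test: $\lambda \in \text{sp } T(b_\rho)$ exactly when $\lambda$ lies on the scaled symbol curve $b(\rho e^{i\theta})$ or that curve has nonzero winding number around $\lambda$. The following sketch (our own illustration of the finite intersection, not the paper's polygon-based algorithm) tests membership for a sampled grid of $\rho$'s:

```python
import numpy as np

def winding_number(curve, lam):
    """Winding number of a closed sampled curve around the point lam."""
    dphi = np.angle((curve[1:] - lam) / (curve[:-1] - lam))  # turns, each in (-pi, pi]
    return int(round(dphi.sum() / (2 * np.pi)))

def in_sp_T(b, rho, lam, n_theta=2048):
    """lam is (approximately) in sp T(b_rho) iff the scaled symbol curve
    b(rho * e^{i theta}) winds a nonzero number of times around lam."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)  # closed: endpoints coincide
    return winding_number(b(rho * np.exp(1j * theta)), lam) != 0

def in_limit_set(b, lam, rhos):
    """Finite-intersection approximation of Lambda(b)."""
    return all(in_sp_T(b, rho, lam) for rho in rhos)

# For the shift symbol b(z) = z, sp T(b_rho) is the closed disc of radius rho,
# so the finite intersection shrinks toward Lambda(b) = {0}.
b = lambda z: z
rhos = np.geomspace(0.3, 3.0, 7)
print(in_limit_set(b, 0.0 + 0.0j, rhos))   # True: 0 lies in every disc
print(in_limit_set(b, 0.5 + 0.0j, rhos))   # False: excluded once rho < 0.5 is sampled
```

Refining the grid of $\rho$'s and the grid of test points $\lambda$ tightens this pointwise picture toward the full intersection.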
Over the last two decades, the field of geometric curve evolutions has attracted significant attention from scientific computing. One of the most popular numerical methods for solving geometric flows is the so-called BGN scheme, proposed by Barrett, Garcke, and Nürnberg (J. Comput. Phys., 222 (2007), pp. 441–467), owing to its favorable properties (e.g., its computational efficiency and good mesh properties). However, the BGN scheme is limited to first-order accuracy in time, and developing a higher-order numerical scheme is challenging. In this paper, we propose a fully discrete, temporally second-order parametric finite element method, which incorporates a mesh regularization technique when necessary, for solving geometric flows of curves. The scheme is based on the BGN formulation and combines a semi-implicit Crank-Nicolson leap-frog time stepping discretization with a linear finite element approximation in space. More importantly, we point out that shape metrics, such as the manifold distance and the Hausdorff distance, rather than function norms, should be employed to measure numerical errors. Extensive numerical experiments demonstrate that the proposed BGN-based scheme is second-order accurate in time with respect to shape metrics. Moreover, by employing the classical BGN scheme as a mesh regularization technique when necessary, our proposed second-order scheme exhibits good properties with respect to the mesh distribution.
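The flavor of a Crank-Nicolson leap-frog discretization can be seen on a scalar toy problem: the linear part is averaged implicitly over the levels $n-1$ and $n+1$, while the remaining terms are taken explicitly at level $n$. A minimal sketch, using our own test equation $u' = -u + g(t)$ with known exact solution rather than a geometric flow:

```python
import numpy as np

def cn_leapfrog(tau, T=2.0):
    """Crank-Nicolson leap-frog for u' = -u + g(t): the linear term is
    treated implicitly via the average of levels n-1 and n+1 over the
    double step 2*tau, and the source explicitly at level n.
    With g(t) = cos(t) - sin(t), the exact solution is u(t) = cos(t).
    Returns the error at the final time T."""
    g = lambda t: np.cos(t) - np.sin(t)
    n_steps = int(round(T / tau))
    u_prev = 1.0                   # u(0) = cos(0)
    u_curr = np.cos(tau)           # exact value at t = tau as a starting step
    for n in range(1, n_steps):
        t_n = n * tau
        # (u^{n+1} - u^{n-1})/(2 tau) = -(u^{n+1} + u^{n-1})/2 + g(t_n)
        u_next = ((1.0 / (2.0 * tau) - 0.5) * u_prev + g(t_n)) / (1.0 / (2.0 * tau) + 0.5)
        u_prev, u_curr = u_curr, u_next
    return abs(u_curr - np.cos(T))

# Halving the step size reduces the error by about a factor of 4,
# consistent with second-order accuracy in time.
print(cn_leapfrog(0.01) / cn_leapfrog(0.005))
```

The error-ratio check mirrors, in miniature, the temporal convergence tests the paper performs with shape metrics instead of function norms.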
This survey explores modern approaches for computing low-rank approximations of high-dimensional matrices by means of the randomized SVD, randomized subspace iteration, and randomized block Krylov iteration. The paper compares the procedures via theoretical analyses and numerical studies to highlight how the best choice of algorithm depends on spectral properties of the matrix and the computational resources available. Despite superior performance for many problems, randomized block Krylov iteration has not been widely adopted in computational science. The paper strengthens the case for this method in three ways. First, it presents new pseudocode that can significantly reduce computational costs. Second, it provides a new analysis that yields simple, precise, and informative error bounds. Last, it showcases applications to challenging scientific problems, including principal component analysis for genetic data and spectral clustering for molecular dynamics data.
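As a point of reference for the methods surveyed, the randomized SVD with a few steps of subspace iteration can be sketched in a few lines (a textbook version, not the paper's improved pseudocode):

```python
import numpy as np

def randomized_svd(A, k, p=5, q=2):
    """Randomized SVD of A with target rank k, oversampling p, and q steps
    of subspace iteration. Power steps sharpen accuracy when the singular
    values decay slowly; re-orthogonalization keeps the iteration stable."""
    rng = np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal range sketch
    for _ in range(q):                        # subspace iteration
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                               # small (k+p) x n projection
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :k], s[:k], Vt[:k]

# Sanity check on an exactly rank-10 matrix: the factorization is recovered
# to near machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 150))
U, s, Vt = randomized_svd(A, k=10)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```

Block Krylov iteration replaces the power steps above by accumulating the whole Krylov basis $[A\Omega, (AA^T)A\Omega, \dots]$, which is what the survey argues deserves wider adoption.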
A rigidity circuit (in 2D) is a minimal dependent set in the rigidity matroid, i.e. a minimal graph supporting a non-trivial stress in any generic placement of its vertices in $\mathbb R^2$. Any rigidity circuit on $n\geq 5$ vertices can be obtained from rigidity circuits on fewer vertices by applying the combinatorial resultant (CR) operation. The inverse operation is called a combinatorial resultant decomposition (CR-decomp). Any rigidity circuit on $n\geq 5$ vertices can be successively decomposed into smaller circuits, until the complete graphs $K_4$ are reached. This sequence of CR-decomps has the structure of a rooted binary tree called the combinatorial resultant tree (CR-tree). A CR-tree encodes an elimination strategy for computing circuit polynomials via Sylvester resultants. Different CR-trees lead to elimination strategies that can vary greatly in time and memory consumption. It is an open problem to establish criteria for optimal CR-trees, or at least to characterize those CR-trees that lead to good elimination strategies. In [12] we presented an algorithm for enumerating CR-trees, including polynomial-time algorithms for decomposing 3-connected rigidity circuits. In this paper we focus on those circuits that are not 3-connected, which we simply call 2-connected. To enumerate the CR-decomps of a 2-connected circuit $G$, a brute-force exponential-time search has to be performed among the subgraphs induced by the subsets of $V(G)$. This bottleneck is not present in the 3-connected case. We argue that we do not have to account for all possible CR-decomps of 2-connected rigidity circuits to find a good elimination strategy; it suffices to account for those CR-decomps that are a 2-split, all of which can be enumerated in polynomial time. We present algorithms and computational evidence in support of this heuristic.
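The elimination step that a CR-tree schedules is the classical Sylvester resultant. As a small self-contained illustration (ours, not code from the paper), the resultant of two univariate polynomials is the determinant of their Sylvester matrix, and it vanishes exactly when the polynomials share a root:

```python
import numpy as np

def sylvester_resultant(p, q):
    """Resultant of two polynomials given as coefficient lists
    (highest degree first), computed as det of the Sylvester matrix:
    deg(q) shifted copies of p stacked over deg(p) shifted copies of q."""
    m, n = len(p) - 1, len(q) - 1      # degrees of p and q
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = p
    for i in range(m):
        S[n + i, i:i + n + 1] = q
    return round(np.linalg.det(S))

# p = x^2 - 1 and q = x - 1 share the root x = 1, so the resultant is 0;
# against q = x - 2 there is no common root and the resultant is nonzero.
print(sylvester_resultant([1, 0, -1], [1, -1]))   # 0
print(sylvester_resultant([1, 0, -1], [1, -2]))   # 3
```

In the circuit-polynomial setting the entries are themselves polynomials in the remaining variables, which is why the choice of CR-tree, i.e. the order of eliminations, has such a large effect on cost.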
We present a new splitting method to solve Lyapunov equations in Kronecker product form. Although the resulting matrix is of order $n^2$, each iteration demands only two operations with the matrix $A$: a multiplication of the form $(A-\sigma I) \tilde{B}$ and a solve of the form $(A-\sigma I)^{-1}\tilde{B}$. We show that, for a suitable choice of the parameter, all eigenvalues of the iteration matrix are less than 1 in absolute value. Moreover, we present a theorem that enables us to obtain a good starting vector for the method.
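The paper's method is not reproduced here, but the flavor of such shifted splitting iterations can be seen in the classical Smith (single-shift ADI) iteration for $AX + XA^T + Q = 0$, which likewise needs one shifted solve and one shifted multiplication per step. A sketch under the assumption that $A$ is stable and $\sigma > 0$, so that the Cayley transform $(A-\sigma I)^{-1}(A+\sigma I)$ has spectral radius less than 1:

```python
import numpy as np

def smith_lyapunov(A, Q, sigma, iters=50):
    """Classical Smith iteration for A X + X A^T + Q = 0 (an illustrative
    stand-in for the paper's splitting method). Each sweep applies the
    Cayley transform S = (A - sigma I)^{-1}(A + sigma I), a contraction
    whenever A is stable and sigma > 0."""
    n = A.shape[0]
    I = np.eye(n)
    S = np.linalg.solve(A - sigma * I, A + sigma * I)
    M = np.linalg.solve(A - sigma * I, Q)
    Q_s = 2.0 * sigma * np.linalg.solve(A - sigma * I, M.T).T  # (A-sI)^{-1} Q (A-sI)^{-T}
    X = Q_s.copy()
    for _ in range(iters):
        X = S @ X @ S.T + Q_s      # fixed point solves the Lyapunov equation
    return X

A = np.array([[-2.0, 1.0], [0.0, -3.0]])   # stable test matrix
Q = np.eye(2)
X = smith_lyapunov(A, Q, sigma=2.0)
print(np.linalg.norm(A @ X + X @ A.T + Q))  # residual near machine precision
```

For this test matrix the Cayley eigenvalues $(\lambda+\sigma)/(\lambda-\sigma)$ are $0$ and $0.2$, so the iteration converges very quickly.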
Topology optimization (TO) has experienced a dramatic development over the last decades, aided by the rise of metamaterials and additive manufacturing (AM) techniques, and is poised to meet current and future challenges. In this paper we propose an extension to linear orthotropic materials of a three-dimensional TO algorithm which directly operates on the six elastic properties -- three longitudinal moduli and three shear moduli, with the three Poisson ratios held fixed -- of the finite element (FE) discretization of a given analysis domain. By performing a gradient-descent-like optimization on these properties, the standard deviation of a strain-energy measurement is minimized, thus producing optimized, strain-homogenized structures with variable longitudinal and shear stiffness along their different material directions. To this end, an orthotropic formulation with two approaches -- direct (strain-based) and complementary (stress-based) -- has been developed for this optimization problem, the stress-based approach being the more efficient, as previous work on this topic has shown. The key advantages we propose are: (1) the use of orthotropic instead of isotropic materials, which enables a more versatile optimization process since the design space is six times larger, and (2) no constraint (such as a maximum volume) needs to be imposed, in contrast to other methods widely used in this field such as Solid Isotropic Material with Penalization (SIMP), all of this with a single hyper-parameter. Results for four designed load cases show that this orthotropic-TO algorithm outperforms the isotropic case, both against the similar algorithm it extends and against a SIMP run in commercial FE software, at a comparable computational cost. It works particularly well on pure-shear or shear-governed problems such as torsion loading.
We consider a sharp interface formulation for the multi-phase Mullins-Sekerka flow. The flow is characterized by a network of curves evolving such that the total surface energy of the curves is reduced, while the areas of the enclosed phases are conserved. Making use of a variational formulation, we introduce a fully discrete finite element method. Our discretization features a parametric approximation of the moving interfaces that is independent of the discretization used for the equations in the bulk. The scheme can be shown to be unconditionally stable and to satisfy an exact volume conservation property. Moreover, an inherent tangential velocity for the vertices on the discrete curves leads to asymptotically equidistributed vertices, meaning no remeshing is necessary in practice. Several numerical examples, including a convergence experiment for the three-phase Mullins-Sekerka flow, demonstrate the capabilities of the introduced method.
Neuromorphic computing is one of the few current approaches with the potential to significantly reduce power consumption in Machine Learning and Artificial Intelligence. Imam & Cleland presented an odour-learning algorithm that runs on a neuromorphic architecture and is inspired by circuits described in the mammalian olfactory bulb. They assess the algorithm's performance in "rapid online learning and identification" of gaseous odorants and odorless gases ("gases" for short) using a set of gas sensor recordings of different odour presentations, corrupted by impulse noise. We replicated parts of the study and discovered limitations that affect some of the conclusions drawn. First, the dataset used suffers from sensor drift and a non-randomised measurement protocol, rendering it of limited use for odour identification benchmarks. Second, we found that the model is restricted in its ability to generalise over repeated presentations of the same gas. We demonstrate that the task the study refers to can be solved with a simple hash table approach, matching or exceeding the reported results in accuracy and runtime. Therefore, a validation of the model that goes beyond restoring a learned data sample remains to be shown, in particular its suitability for odour identification tasks.
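The point about the hash-table baseline can be made concrete with a minimal sketch (our own construction, not the paper's code, with hypothetical sensor dimensions and gas names): each (sensor index, reading) pair votes for the labels it was stored with, and because impulse noise corrupts only some entries, the uncorrupted entries still produce a clear majority.

```python
from collections import defaultdict
import random

class HashTableIdentifier:
    """Toy hash-table baseline for restoring a learned, noise-corrupted sample."""
    def __init__(self):
        # (sensor index, quantised reading) -> {label: count}
        self.votes = defaultdict(lambda: defaultdict(int))

    def store(self, sample, label):
        for i, v in enumerate(sample):
            self.votes[(i, v)][label] += 1

    def identify(self, sample):
        tally = defaultdict(int)
        for i, v in enumerate(sample):
            for label, n in self.votes[(i, v)].items():
                tally[label] += n
        return max(tally, key=tally.get) if tally else None

random.seed(0)
ident = HashTableIdentifier()
samples = {g: [random.randrange(16) for _ in range(72)] for g in ["gas_a", "gas_b"]}
for g, s in samples.items():
    ident.store(s, g)

noisy = list(samples["gas_a"])
for i in random.sample(range(72), 36):   # impulse noise on half the channels
    noisy[i] = random.randrange(16)
print(ident.identify(noisy))   # still identifies gas_a from the intact channels
```

The corrupted sample keeps ~36 exact matches with its stored original, while chance matches with other stored gases are an order of magnitude rarer, so the vote is decisive.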
We make two contributions to the Isolation Forest method for anomaly and outlier detection. The first is an information-theoretically motivated generalisation of the score function used to aggregate scores across random tree estimators. This generalisation allows one to take into account not just the ensemble average across trees but the whole distribution. The second is an alternative scoring function at the level of the individual tree estimator, in which we replace the depth-based scoring of the Isolation Forest with one based on the hyper-volumes associated with an isolation tree's leaf nodes. We motivate both methods on generated data and also evaluate them on 34 datasets from the recent and exhaustive "ADBench" benchmark, finding significant improvement over the standard isolation forest for both variants on some datasets, and improvement on average across all datasets for one of the two variants. The code to reproduce our results is made available as part of the submission.
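To make the aggregation point concrete, here is a minimal isolation forest from scratch, with the standard score (which passes only the *mean* depth through the scoring transform) next to one simple distribution-aware aggregate (the mean of per-tree scores). This is our own toy instance of "using more than the ensemble average", not the paper's information-theoretic score:

```python
import numpy as np

rng = np.random.default_rng(0)
EULER = 0.5772156649

def _c(n):
    # Average path length of an unsuccessful BST search on n points.
    return 0.0 if n <= 1 else 2.0 * (np.log(n - 1) + EULER) - 2.0 * (n - 1) / n

def grow_tree(X, depth=0, max_depth=8):
    if len(X) <= 1 or depth >= max_depth:
        return depth + _c(len(X))              # leaf: adjusted path length
    j = int(rng.integers(X.shape[1]))          # random split coordinate
    lo, hi = X[:, j].min(), X[:, j].max()
    if lo == hi:
        return depth + _c(len(X))
    s = rng.uniform(lo, hi)                    # random split value
    mask = X[:, j] < s
    return (j, s, grow_tree(X[mask], depth + 1, max_depth),
                  grow_tree(X[~mask], depth + 1, max_depth))

def path_length(tree, x):
    while isinstance(tree, tuple):
        j, s, left, right = tree
        tree = left if x[j] < s else right
    return tree

X = rng.standard_normal((256, 2))
psi = 64                                        # subsample size per tree
trees = [grow_tree(X[rng.choice(len(X), psi, replace=False)]) for _ in range(100)]

def scores(x):
    h = np.array([path_length(t, x) for t in trees])
    mean_agg = 2.0 ** (-h.mean() / _c(psi))     # standard Isolation Forest score
    dist_agg = np.mean(2.0 ** (-h / _c(psi)))   # toy distribution-aware aggregate
    return mean_agg, dist_agg

print(scores(np.array([0.0, 0.0])))   # inlier: low scores
print(scores(np.array([6.0, 6.0])))   # outlier: high scores
```

Because `dist_agg` averages the per-tree scores rather than the depths, trees in which a point isolates unusually quickly contribute disproportionately, which is one way the shape of the depth distribution can enter the final score.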
Trajectory segmentation refers to dividing a trajectory into meaningful consecutive sub-trajectories. This paper focuses on trajectory segmentation for 3D rigid-body motions. Most segmentation approaches in the literature represent the body's trajectory as a point trajectory, considering only its translation and neglecting its rotation. We propose a novel trajectory representation for rigid-body motions that incorporates both translation and rotation, and additionally exhibits several invariant properties. This representation consists of a geometric progress rate and a third-order trajectory-shape descriptor. Concepts from screw theory are used to make this representation time-invariant and invariant to the choice of body reference point. The new representation is validated in a self-supervised segmentation approach, both in simulation and using real recordings of human-demonstrated pouring motions. The results show a more robust detection of consecutive submotions with distinct features and a more consistent segmentation compared to conventional representations. We believe that other existing segmentation methods may benefit from using this trajectory representation to improve their invariance.
We propose an approach to compute inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning, or controller comparison. We propose an interval-based method which allows for tractable yet tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
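The mechanics of interval-based inner/outer approximation can be illustrated on a toy universally quantified constraint of our own choosing, $S = \{p \in [0,2] : \forall x \in [-1,1],\; p\,x \le 1\}$ (true answer $S = [0,1]$): interval evaluation over the whole $x$-domain certifies boxes inside $S$, point evaluations falsify boxes outside it, and the rest are bisected.

```python
def imul(a, b):
    # Interval product [a] * [b].
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(ps), max(ps))

def classify(P, X=(-1.0, 1.0)):
    """Classify a parameter box P for 'forall x in X: p*x <= 1'."""
    lo, hi = imul(P, X)
    if hi <= 1.0:
        return "inner"            # constraint certified for every p in P and x in X
    for xs in X:                  # try the endpoints of X as falsifiers
        flo, _ = imul(P, (xs, xs))
        if flo > 1.0:
            return "outside"      # this x violates the constraint for all p in P
    return "unknown"

def pave(P, eps=1e-3):
    # Branch-and-prune paving: inner boxes form an inner-approximation,
    # inner + undecided boxes form an outer-approximation.
    cls = classify(P)
    if cls != "unknown" or P[1] - P[0] < eps:
        return [(P, cls)]
    mid = 0.5 * (P[0] + P[1])
    return pave((P[0], mid), eps) + pave((mid, P[1]), eps)

boxes = pave((0.0, 2.0))
inner = max(b[1] for b, c in boxes if c == "inner")     # inner approx. reaches p = 1
outer = max(b[1] for b, c in boxes if c != "outside")   # outer approx. just past p = 1
print(inner, outer)   # both bracket the true boundary p = 1
```

Shrinking `eps` tightens the gap between the two approximations, which is the one-dimensional analogue of the tight bracketing the method aims for.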