Dynamic topological logic ($\mathbf{DTL}$) is a trimodal logic designed for reasoning about dynamic topological systems. It was shown by Fern\'andez-Duque that the natural set of axioms for $\mathbf{DTL}$ is incomplete, but he provided a complete axiomatisation in an extended language. In this paper, we consider dynamic topological logic over scattered spaces, which are topological spaces where every nonempty subspace has an isolated point. Scattered spaces appear in the context of computational logic as they provide semantics for provability and enjoy definable fixed points. We exhibit the first sound and complete dynamic topological logic in the original trimodal language. In particular, we show that the version of $\mathbf{DTL}$ based on the class of scattered spaces is finitely axiomatisable over the original language, and that the natural axiomatisation is sound and complete.
In this paper, we provide sufficient conditions for dissipativity and local asymptotic stability of discrete-time dynamical systems parametrized by deep neural networks. We leverage the representation of neural networks as pointwise affine maps, thus exposing their local linear operators and making them accessible to classical system-analytic and design methods. This allows us to "crack open the black box" of the neural dynamical system's behavior by evaluating its dissipativity and estimating its stationary points and state-space partitioning. We relate the norms of these local linear operators to the energy stored in the dissipative system, with supply rates represented by their aggregate bias terms. Empirically, we analyze the variance in dynamical behavior and eigenvalue spectra of these local linear operators under varying weight factorizations, activation functions, bias terms, and depths.
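The pointwise affine-map view described above can be made concrete for piecewise-linear networks: at any input, a ReLU network coincides with a single affine map determined by the activation pattern. The sketch below (a minimal illustration assuming plain dense ReLU layers with a linear output layer, not the paper's implementation) extracts that local linear operator and bias:

```python
import numpy as np

def forward(weights, biases, x):
    """Plain dense ReLU network with a linear output layer."""
    h = x
    for W, c in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + c, 0.0)
    return weights[-1] @ h + biases[-1]

def local_affine(weights, biases, x):
    """Return (A, b) with forward(y) == A @ y + b for every y sharing
    x's ReLU activation pattern: the network's local linear operator."""
    A, b, h = np.eye(x.size), np.zeros(x.size), x
    for W, c in zip(weights[:-1], biases[:-1]):
        z = W @ h + c
        D = (z > 0).astype(float)          # activation pattern at x
        A = D[:, None] * (W @ A)           # mask rows killed by ReLU
        b = D * (W @ b + c)
        h = np.maximum(z, 0.0)
    return weights[-1] @ A, weights[-1] @ b + biases[-1]
```

For a neural state map with equal input and output dimension, `np.linalg.eigvals(A)` then yields the local eigenvalue spectrum analyzed in the abstract.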
A mobile robot's precise location information is critical for navigation and task processing, especially for a multi-robot system (MRS) collaborating to collect valuable data from the field. However, a robot that has no access to GPS signals, such as in environmentally controlled, indoor, or underground environments, finds it difficult to localize itself using its onboard sensors alone. As a result, robots sharing their local information to improve their localization estimates benefits the entire MRS team. There have been several attempts at model-based multi-robot localization using the Received Signal Strength Indicator (RSSI) as a source of bearing information. We likewise utilize the RSSI of the wireless networks generated through the communication of multiple robots in a system, aiming to localize agents with high accuracy and efficiency in a dynamic environment by fusing shared information to refine the localization estimates. This paper proposes a decentralized Multi-robot Synergistic Localization System (MRSL) for dense and dynamic environments. Robots update their position estimates whenever new information is received from their neighbors. When a robot senses the presence of other robots in its vicinity, it exchanges position estimates and merges the received data to improve its localization accuracy. Our approach uses Bayesian rule-based integration, which has been shown to be computationally efficient and applicable to asynchronous robot communication; this estimator structure reduces one source of measurement correlation while appropriately incorporating others. We have performed extensive simulation experiments with varying numbers of robots to analyze the algorithm. MRSL's localization accuracy with RSSI outperforms other algorithms from the literature, showing significant promise for future development.
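At the core of such shared-information fusion is merging a robot's own position estimate with an estimate received from a neighbor. A minimal sketch of Bayesian (information-filter) fusion of two independent Gaussian estimates is shown below; this is a generic illustration of the kind of update involved, not the paper's exact MRSL equations:

```python
import numpy as np

def fuse_gaussians(mu1, P1, mu2, P2):
    """Fuse two independent Gaussian position estimates (mean, covariance)
    in information form: precisions add, and the fused mean is the
    precision-weighted combination of the two means."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)            # fused covariance
    mu = P @ (I1 @ mu1 + I2 @ mu2)        # fused mean
    return mu, P
```

The fused covariance is never larger than either input covariance, which is why exchanging estimates with neighbors improves localization accuracy.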
The Quantified Reflection Calculus with one modality, denoted by $\mathsf{QRC}_1$, is a strictly positive quantified modal logic inspired by the unimodal fragment of the Reflection Calculus. The quantified strictly positive language consists of a verum constant and relation symbols as atomic formulas, with the only available connectives being the conjunction, the diamond, and the universal quantifier. $\mathsf{QRC}_1$ statements are assertions of the form $\varphi \leadsto \psi$ where $\varphi$ and $\psi$ are in this strictly positive language. $\mathsf{QRC}_1$ was born out of the wish for a nice quantified provability logic for theories of arithmetic such as Peano Arithmetic, even though Vardanyan showed in 1986 that this is impossible in general. However, restricting the language to the strictly positive fragment is a viable solution, as shown by the author and Joosten in 2022. $\mathsf{QRC}_1$ has been proved sound and complete with respect to constant domain Kripke models. Furthermore, it has the finite model property with respect to both the domain size and the number of worlds, implying its decidability. Coq is an interactive theorem prover with which one can write definitions, executable algorithms, statements, and machine-checked proofs. We describe a Coq formalization of $\mathsf{QRC}_1$ and of some relevant results, most notably the Kripke soundness theorem. In the future, we aim to formalize the full completeness proof and extract a decision procedure in order to mechanically check whether a given implication of strictly positive formulas is provable in $\mathsf{QRC}_1$ or not.
We develop a framework for epistemic logic that combines relevant modal logic with classical propositional logic. In our framework the agent is modeled as reasoning in accordance with a relevant modal logic while the propositional fragment of our logics is classical. In order to achieve this feature, we modify the relational semantics for relevant modal logics so that validity in a model is defined as satisfaction throughout a set of designated states that, as far as propositional connectives are concerned, behave like classical possible worlds. The main technical result of the paper is a modular completeness theorem parametrized by the relevant modal logic formalizing the agent's reasoning.
Learning interpretable representations of neural dynamics at a population level is a crucial first step to understanding how neural activity relates to perception and behavior. Models of neural dynamics often focus on either low-dimensional projections of neural activity, or on learning dynamical systems that explicitly relate to the neural state over time. We discuss how these two approaches are interrelated by considering dynamical systems as representative of flows on a low-dimensional manifold. Building on this concept, we propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time-series data as a sparse combination of simpler, more interpretable components. The decomposed nature of the dynamics generalizes previous switched approaches and enables modeling of overlapping and non-stationary drifts in the dynamics. We further present a dictionary-learning-driven approach to model fitting, where we leverage recent results in tracking sparse vectors over time. We demonstrate that our model can learn efficient representations and smooth transitions between dynamical modes in both continuous-time and discrete-time examples. We show results on low-dimensional linear and nonlinear attractors to demonstrate that our decomposed dynamical systems model approximates nonlinear dynamics well. Additionally, we apply our model to C. elegans data, illustrating a diversity of dynamics that is obscured when classified into discrete states.
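The decomposed model represents each transition as a sparse combination of simpler dynamics drawn from a dictionary of operators. The toy sketch below is an illustrative re-implementation using plain ISTA on a per-time-step lasso objective with a hypothetical two-operator dictionary, not the paper's tracking-based fitting procedure; it recovers which component is active in each regime:

```python
import numpy as np

def sparse_dynamics_codes(X, ops, lam=0.01, n_iter=500):
    """For each observed transition x_t -> x_{t+1}, fit sparse coefficients
    c_t with x_{t+1} ~= sum_j c_t[j] * (A_j @ x_t) by running ISTA on a
    lasso objective (least squares plus an l1 penalty)."""
    codes = []
    for t in range(len(X) - 1):
        M = np.column_stack([A @ X[t] for A in ops])      # candidate flows
        step = 1.0 / max(np.linalg.norm(M, 2) ** 2, 1e-12)
        c = np.zeros(M.shape[1])
        for _ in range(n_iter):
            c = c - step * (M.T @ (M @ c - X[t + 1]))     # gradient step
            c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # shrink
        codes.append(c)
    return np.array(codes)
```

On data that switches from a rotation to a decay operator, the dominant coefficient tracks the active regime at each time step.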
Evolutionary algorithms are bio-inspired algorithms that can easily adapt to changing environments. Recent results in the area of runtime analysis have pointed out that algorithms such as the (1+1)~EA and Global SEMO can efficiently reoptimize linear functions under a dynamic uniform constraint. Motivated by this study, we investigate single- and multi-objective baseline evolutionary algorithms for the classical knapsack problem where the capacity of the knapsack varies over time. We establish different benchmark scenarios where the capacity changes every $\tau$ iterations according to a uniform or normal distribution. Our experimental investigations analyze the behavior of our algorithms in terms of the magnitude of changes determined by parameters of the chosen distribution, the frequency determined by $\tau$, and the class of knapsack instance under consideration. Our results show that the multi-objective approaches using a population that caters for dynamic changes have a clear advantage on many benchmark scenarios when the frequency of changes is not too high. Furthermore, we demonstrate that the diversity mechanisms used in popular evolutionary multi-objective algorithms such as NSGA-II and SPEA2 do not necessarily result in better performance and can even lead to inferior results compared to our simple multi-objective approaches.
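As a concrete baseline, the (1+1)~EA mentioned above can be sketched in a few lines. The capacity-violation penalty below is one common fitness choice for constrained knapsack instances and is an assumption, not necessarily the paper's exact setup; in a dynamic scenario the same loop simply continues after `capacity` changes:

```python
import numpy as np

def one_plus_one_ea(profits, weights, capacity, iters=5000, seed=0):
    """(1+1) EA for knapsack: flip each bit independently with probability
    1/n, accept the offspring if it is not worse. Infeasible solutions are
    penalized proportionally to the capacity violation."""
    rng = np.random.default_rng(seed)
    n = len(profits)
    x = np.zeros(n, dtype=bool)

    def fitness(s):
        w, p = weights[s].sum(), profits[s].sum()
        return p - (n * profits.max()) * max(0.0, w - capacity)

    for _ in range(iters):
        y = x ^ (rng.random(n) < 1.0 / n)     # standard bit mutation
        if fitness(y) >= fitness(x):
            x = y
    return x
```

On a tiny instance this reliably reaches the optimal packing within a few thousand iterations.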
This paper presents a multi-scale method for convection-dominated diffusion problems in the regime of large P\'eclet numbers. The application of the solution operator to piecewise constant right-hand sides on some arbitrary coarse mesh defines a finite-dimensional coarse ansatz space with favorable approximation properties. For some relevant error measures, including the $L^2$-norm, the Galerkin projection onto this generalized finite element space even yields $\varepsilon$-independent error bounds, $\varepsilon$ being the singular perturbation parameter. By constructing an approximate local basis, the approach becomes a novel multi-scale method in the spirit of the Super-Localized Orthogonal Decomposition (SLOD). The error caused by basis localization can be estimated in an a posteriori way. In contrast to existing multi-scale methods, numerical experiments indicate $\varepsilon$-independent convergence without preasymptotic effects even in the under-resolved regime of large mesh P\'eclet numbers.
The distance function to a compact set plays a crucial role in the paradigm of topological data analysis. In particular, the sublevel sets of the distance function are used in the computation of persistent homology -- a backbone of the topological data analysis pipeline. Despite its stability to perturbations in the Hausdorff distance, persistent homology is highly sensitive to outliers. In this work, we develop a framework of statistical inference for persistent homology in the presence of outliers. Drawing inspiration from recent developments in robust statistics, we propose a $\textit{median-of-means}$ variant of the distance function ($\textsf{MoM Dist}$), and establish its statistical properties. In particular, we show that, even in the presence of outliers, the sublevel filtrations and weighted filtrations induced by $\textsf{MoM Dist}$ are consistent estimators of their true underlying population counterparts, and their rates of convergence in the bottleneck metric are controlled by the fraction of outliers in the data. Finally, we demonstrate the advantages of the proposed methodology through simulations and applications.
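The median-of-means idea can be sketched directly: split the sample into disjoint blocks, compute the vanilla distance function on each block, and take the blockwise median, so that a small fraction of outliers can corrupt only a minority of blocks. The random blocking and function name below are illustrative; the paper's estimator and the choice of the number of blocks are more refined:

```python
import numpy as np

def mom_dist(data, queries, Q, seed=0):
    """Median-of-means variant of the distance function: partition `data`
    into Q disjoint random blocks, compute the distance from each query
    point to each block, and return the blockwise median."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(data)), Q)
    per_block = [
        np.linalg.norm(queries[:, None, :] - data[blk][None, :, :],
                       axis=-1).min(axis=1)              # distance to block
        for blk in blocks
    ]
    return np.median(np.stack(per_block), axis=0)
```

Unlike the plain distance function, which vanishes at every data point including outliers, the median screens out blocks contaminated by a few outliers.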
We present a collection of modular open source C++ libraries for the development of logic synthesis applications. These libraries can be used to develop applications for the design of classical and emerging technologies, as well as for the implementation of quantum compilers. All libraries are well documented and well tested. Furthermore, being header-only, the libraries can be readily used as core components in complex logic synthesis systems.
In this paper we develop accelerated first-order methods for convex optimization with locally Lipschitz continuous gradient (LLCG), which is beyond the well-studied class of convex optimization with Lipschitz continuous gradient. In particular, we first consider unconstrained convex optimization with LLCG and propose accelerated proximal gradient (APG) methods for solving it. The proposed APG methods are equipped with a verifiable termination criterion and enjoy an operation complexity of ${\cal O}(\varepsilon^{-1/2}\log \varepsilon^{-1})$ and ${\cal O}(\log \varepsilon^{-1})$ for finding an $\varepsilon$-residual solution of an unconstrained convex and strongly convex optimization problem, respectively. We then consider constrained convex optimization with LLCG and propose a first-order proximal augmented Lagrangian method for solving it by applying one of our proposed APG methods to approximately solve a sequence of proximal augmented Lagrangian subproblems. The resulting method is equipped with a verifiable termination criterion and enjoys an operation complexity of ${\cal O}(\varepsilon^{-1}\log \varepsilon^{-1})$ and ${\cal O}(\varepsilon^{-1/2}\log \varepsilon^{-1})$ for finding an $\varepsilon$-KKT solution of a constrained convex and strongly convex optimization problem, respectively. All the proposed methods in this paper are parameter-free or almost parameter-free, except that knowledge of the convexity parameter is required. To the best of our knowledge, no prior study has investigated accelerated first-order methods with complexity guarantees for convex optimization with LLCG. All the complexity results obtained in this paper are entirely new.
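For intuition, an accelerated proximal gradient method can cope without a global Lipschitz constant by backtracking: it doubles a local estimate $L$ until a sufficient-decrease test holds, which is what makes the locally-Lipschitz setting tractable. The sketch below is a generic FISTA-style method in this spirit with a simplified gradient-mapping stopping rule, not the paper's APG methods:

```python
import numpy as np

def apg(f, grad, prox, x0, L0=1.0, tol=1e-10, max_iter=5000):
    """Accelerated proximal gradient (FISTA-style) with backtracking on a
    local Lipschitz estimate L. `f`/`grad` are the smooth part and its
    gradient; `prox(v, s)` is the proximal map of the nonsmooth part."""
    x = np.asarray(x0, dtype=float)
    y, t, L = x.copy(), 1.0, L0
    for _ in range(max_iter):
        g = grad(y)
        while True:                                  # backtracking line search
            x_new = prox(y - g / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d) + 1e-12:
                break
            L *= 2.0                                 # local estimate too small
        if L * np.linalg.norm(d) <= tol:             # gradient-mapping residual
            return x_new
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov momentum
        x, t = x_new, t_new
    return x
```

On a toy lasso-style problem $\min_x \tfrac12(x-3)^2 + |x|$, whose solution is the soft threshold $x^* = 2$, the sketch converges to high accuracy.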