Systems consisting of spheres rolling on elastic membranes have been used to introduce a core conceptual idea of General Relativity (GR): how curvature guides the movement of matter. However, such schemes cannot accurately represent relativistic dynamics in the laboratory because of the dominance of dissipation and external gravitational fields. Here we demonstrate that an ``active'' object (a wheeled robot), which moves in a straight line on level ground and can alter its speed depending on the curvature of the deformable terrain it moves on, can exactly capture dynamics in curved relativistic spacetimes. Via a systematic study of the robot's dynamics in the radial and orbital directions, we develop a mapping from the emergent trajectories of a wheeled vehicle on a spandex membrane to motion in a curved spacetime. Our mapping demonstrates how the driven robot's dynamics mix space and time in a metric, and shows how active particles need not follow geodesics in real space but instead follow geodesics in a fiducial spacetime. The mapping further reveals how parameters such as the membrane elasticity and the instantaneous speed allow the programming of a desired spacetime, such as the Schwarzschild metric near a non-rotating black hole. Our mapping and framework facilitate the low-cost creation of a robophysical analog to a general relativistic system in the laboratory, which can provide insights into active matter in deformable environments and robot exploration in complex landscapes.
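For context (a standard fact of GR, not a result of this paper), the Schwarzschild metric for a non-rotating black hole of mass $M$ takes the following form in Schwarzschild coordinates:

\[
ds^2 = -\left(1-\frac{2GM}{c^2 r}\right)c^2\,dt^2 + \left(1-\frac{2GM}{c^2 r}\right)^{-1}dr^2 + r^2\left(d\theta^2+\sin^2\theta\,d\phi^2\right),
\]

which illustrates the mixing of spatial and temporal contributions that the robophysical mapping is designed to reproduce.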
We propose a topological mapping and localization system able to operate on real human colonoscopies, despite significant shape and illumination changes. The map is a graph where each node codes a colon location by a set of real images, while edges represent traversability between nodes. For close-in-time images, where scene changes are minor, place recognition can be successfully managed with recent transformer-based local feature matching algorithms. However, under long-term changes -- such as different colonoscopies of the same patient -- feature-based matching fails. To address this, we train a deep global descriptor on real colonoscopies that achieves high recall under significant scene changes. The addition of a Bayesian filter boosts the accuracy of long-term place recognition, enabling relocalization in a previously built map. Our experiments show that ColonMapper is able to autonomously build a map and localize against it in two important use cases: localization within the same colonoscopy or within different colonoscopies of the same patient. Code will be available upon acceptance.
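As an illustration of the Bayesian-filter idea, here is a minimal sketch of a discrete Bayes filter over the nodes of a topological map. The motion model, the descriptor-similarity likelihood, and all names are our own illustrative assumptions, not ColonMapper's actual implementation:

```python
import numpy as np

def bayes_filter_step(belief, likelihood, adjacency, stay_prob=0.5):
    """One step of a discrete Bayes filter over map nodes (illustrative).

    belief:     (N,) prior probability of being at each node
    likelihood: (N,) p(current image | node), e.g. from global-descriptor similarity
    adjacency:  (N, N) 0/1 traversability matrix of the topological map
    """
    # Motion model: stay at the current node or move to a uniformly chosen neighbour.
    neigh = adjacency / np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
    trans = stay_prob * np.eye(len(belief)) + (1 - stay_prob) * neigh
    predicted = trans.T @ belief          # predict step
    posterior = predicted * likelihood    # update with the observation
    return posterior / posterior.sum()    # normalize
```

Accumulating evidence across consecutive frames in this way suppresses spurious single-image matches, which is the mechanism by which such a filter boosts long-term place recognition.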
Krenn, Gu and Zeilinger initiated the study of PMValid edge-colourings because of their connection to a problem from quantum physics. A graph is said to have a PMValid $k$-edge-colouring if it admits a $k$-edge-colouring (i.e. an edge-colouring with $k$ colours) with the property that all perfect matchings are monochromatic and each of the $k$ colour classes contains at least one perfect matching. The matching index of a graph $G$, denoted $\mu(G)$, is defined as the maximum value of $k$ for which $G$ admits a PMValid $k$-edge-colouring. It is easy to see that $\mu(G)\geq 1$ if and only if $G$ has a perfect matching (due to the trivial $1$-edge-colouring, which is PMValid). Bogdanov observed that $\mu(G)\leq 2$ for all graphs $G$ not isomorphic to $K_4$, while $\mu(K_4)=3$. However, a characterisation of the graphs for which $\mu(G)=1$ and those for which $\mu(G)=2$ was not known. In this work, we answer this question. Using this characterisation, we also give a fast algorithm to compute $\mu(G)$ for a graph $G$. In view of our work, the structure of PMValid $k$-edge-colourable graphs is now fully understood for all $k$. Our characterisation also has an implication for the aforementioned quantum physics problem. In particular, it settles a conjecture of Krenn and Gu for a sub-class of graphs.
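To make the definition concrete, here is a minimal brute-force sketch that checks whether a given edge-colouring of a small graph is PMValid by enumerating all perfect matchings; it illustrates the definition only and is unrelated to the paper's fast algorithm:

```python
def perfect_matchings(vertices, edges):
    """Yield every perfect matching of a small graph as a frozenset of edges."""
    if not vertices:
        yield frozenset()
        return
    v = min(vertices)                      # match the smallest vertex first
    for e in edges:
        if v in e:
            u = e[0] if e[1] == v else e[1]
            rest_v = vertices - {v, u}
            rest_e = [f for f in edges if v not in f and u not in f]
            for m in perfect_matchings(rest_v, rest_e):
                yield m | {e}

def is_pmvalid(vertices, colouring):
    """Check the PMValid property for `colouring`, a dict edge -> colour:
    every perfect matching must be monochromatic, and every colour class
    must contain at least one perfect matching."""
    colours_used = set()
    for m in perfect_matchings(set(vertices), list(colouring)):
        cols = {colouring[e] for e in m}
        if len(cols) != 1:
            return False                   # a non-monochromatic perfect matching
        colours_used |= cols
    return colours_used == set(colouring.values())

# K_4 with its proper 3-edge-colouring is PMValid, witnessing mu(K_4) = 3.
c = {(0, 1): "r", (2, 3): "r", (0, 2): "g", (1, 3): "g", (0, 3): "b", (1, 2): "b"}
print(is_pmvalid(range(4), c))             # True
```

Running the maximum of $k$ over all PMValid colourings found this way computes $\mu(G)$ for tiny graphs, though only the paper's characterisation makes the computation fast in general.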
Understanding fluid movement in multi-pored materials is vital for energy security and physiology. For instance, shale (a geological material) and bone (a biological material) both exhibit multiple pore networks. Double porosity/permeability models provide a mechanics-based approach to describe hydrodynamics in such porous materials. However, current theoretical results primarily address the steady-state response, and their counterparts in the transient regime are still wanting. The primary aim of this paper is to fill this knowledge gap. We present three principal properties -- with rigorous mathematical arguments -- that solutions under the double porosity/permeability model satisfy in the transient regime: backward-in-time uniqueness, reciprocity, and a variational principle. We employ the ``energy method'' -- exploiting the physical total kinetic energy of the flowing fluid -- to establish the first property, and Cauchy-Riemann convolutions to prove the next two. The results reported in this paper -- which qualitatively describe the dynamics of fluid flow in double-pored media -- have (a) theoretical significance, (b) practical applications, and (c) considerable pedagogical value. In particular, these results will benefit practitioners and computational scientists in checking the accuracy of numerical simulators. The backward-in-time uniqueness lays a firm theoretical foundation for pursuing inverse problems in which one predicts the prescribed initial conditions based on data available about the solution at a later instant.
In this work we extend the shifted Laplacian approach to the elastic Helmholtz equation. The shifted Laplacian multigrid method is a common preconditioning approach for the discretized acoustic Helmholtz equation. In some cases, like geophysical seismic imaging, one needs to consider the elastic Helmholtz equation, which is harder to solve: it is three times larger and contains a nullity-rich grad-div term. These properties make the solution of the equation more difficult for multigrid solvers. The key idea in this work is combining the shifted Laplacian with approaches for linear elasticity. We provide local Fourier analysis and numerical evidence that the convergence rate of our method is independent of the Poisson's ratio. Moreover, to better handle the problem size, we complement our multigrid method with the domain decomposition approach, which works in synergy with the local nature of the shifted Laplacian, so we enjoy the advantages of both methods without sacrificing performance. We demonstrate the efficiency of our solver on 2D and 3D problems in heterogeneous media.
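For readers unfamiliar with the technique, the shifted Laplacian idea in the acoustic case (a standard construction; sign and shift conventions vary across the literature) replaces the Helmholtz operator with a damped one that multigrid handles well, and uses it as a preconditioner:

\[
\mathcal{H}u = -\Delta u - k^2 u, \qquad \mathcal{M}_{\beta}\,u = -\Delta u - (1 - \beta i)\,k^2 u, \qquad 0 < \beta \lesssim 0.5,
\]

where one multigrid cycle approximates $\mathcal{M}_{\beta}^{-1}$ inside a Krylov method; the present work carries this complex shift over to the elastic operator.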
Orthogonality is a notion based on the duality between programs and their environments, used to determine when they can be safely combined. For instance, it is a powerful tool for establishing termination properties in classical formal systems. Hyland and Schalk gave it a general treatment with the concept of an orthogonality category, of which numerous models of linear logic are instances. This paper considers the subclass of focused orthogonalities. We develop a theory of fixpoint constructions in focused orthogonality categories. The central results are lifting theorems for initial algebras and final coalgebras, which crucially hinge on the insight that focused orthogonality categories are relational fibrations. The theory provides an axiomatic categorical framework for models of linear logic with least and greatest fixpoints of types. We further investigate domain-theoretic settings, showing how to lift bifree algebras, used to solve mixed-variance recursive type equations, to focused orthogonality categories.
Linear logic has provided new perspectives on proof theory, denotational semantics and the study of programming languages. One of its main successes is proof-nets: canonical representations of proofs that lie at the intersection of logic and graph theory. In the case of the minimalist proof-system of multiplicative linear logic without units (MLL), these two aspects are completely fused: proof-nets for this system are graphs satisfying a correctness criterion that can be fully expressed in the language of graphs. For more expressive logical systems (containing logical constants, quantifiers and exponential modalities), this is not completely the case. The purely graphical approach of proof-nets deprives them of any sequential structure, yet such structure is crucial to represent the order in which arguments are presented and is necessary for these extensions. Rebuilding this order of presentation -- sequentializing the graph -- is thus a requirement for a graph to be logical. The presentation and study of the artifacts ensuring that sequentialization can be done, such as boxes or jumps, are an integral part of research on linear logic. Jumps, extensively studied by Faggian and di Giamberardino, can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. We propose to analyze the logical strength of jumps by internalizing them in an extension of MLL where axioms on a specific formula, the jumping formula, introduce constraints on the possible sequentializations. The jumping formula needs to be treated non-linearly, which we do either axiomatically, or by embedding it in a very controlled fragment of multiplicative-exponential linear logic, uncovering the exponential logic of sequentialization.
This study examines clusterability testing for signed graphs in the bounded-degree model. Our contributions are twofold. First, we provide a quantum algorithm with query complexity $\tilde{O}(N^{1/3})$ for testing clusterability, which yields a polynomial speedup over the best known classical clusterability tester [arXiv:2102.07587]. Second, we prove an $\tilde{\Omega}(\sqrt{N})$ classical query lower bound for testing clusterability, which nearly matches the upper bound from [arXiv:2102.07587]. This settles the classical query complexity of clusterability testing and shows that our quantum algorithm has an advantage over any classical algorithm.
Finding the distribution of the velocities and pressures of a fluid (by solving the Navier-Stokes equations) is a principal task in the chemical, energy, and pharmaceutical industries, as well as in mechanical engineering and the design of pipeline systems. With existing solvers, such as OpenFOAM and Ansys, simulations of fluid dynamics in intricate geometries are computationally expensive and require re-simulation whenever the geometric parameters or the initial and boundary conditions are altered. Physics-informed neural networks are a promising tool for simulating fluid flows in complex geometries, as they can adapt to changes in the geometry and mesh definitions, allowing for generalization across different shapes. We present a hybrid quantum physics-informed neural network that simulates laminar fluid flows in 3D Y-shaped mixers. Our approach combines the expressive power of a quantum model with the flexibility of a physics-informed neural network, resulting in a 21% higher accuracy compared to a purely classical neural network. Our findings highlight the potential of machine learning approaches, and in particular hybrid quantum physics-informed neural networks, for complex shape optimization tasks in computational fluid dynamics. By improving the accuracy of fluid simulations in complex geometries, our research using hybrid quantum models contributes to the development of more efficient and reliable fluid dynamics solvers.
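To illustrate the physics-informed ingredient, here is a minimal, purely classical sketch of one PINN loss term: the residual of the incompressibility constraint $\nabla\cdot\mathbf{u}=0$ evaluated with automatic differentiation. The PyTorch framing and all names are illustrative assumptions; they stand in for, and do not reproduce, the hybrid quantum model above:

```python
import torch

def continuity_residual(model, xyz):
    """PINN-style loss term penalizing violation of incompressibility,
    div(u) = 0, at a batch of collocation points `xyz` of shape (N, 3).
    `model` maps (x, y, z) -> (u, v, w, p)."""
    xyz = xyz.clone().requires_grad_(True)
    u, v, w, _ = model(xyz).unbind(dim=-1)
    # du/dx + dv/dy + dw/dz via automatic differentiation
    div = sum(
        torch.autograd.grad(f.sum(), xyz, create_graph=True)[0][:, i]
        for i, f in enumerate((u, v, w))
    )
    return (div ** 2).mean()
```

In a full PINN, analogous residuals of the momentum equations and of the boundary conditions are summed into one loss, so the network is trained against the physics rather than against precomputed simulation data.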
Learning distance functions between complex objects, such as the Wasserstein distance to compare point sets, is a common goal in machine learning applications. However, functions on such complex objects (e.g., point sets and graphs) are often required to be invariant to a wide variety of group actions, e.g. permutations or rigid transformations. Therefore, continuous and symmetric product functions (such as distance functions) on such complex objects must also be invariant to the product of such group actions. We call these functions symmetric and factor-wise group invariant (SFGI functions for short). In this paper, we first present a general neural network architecture for approximating SFGI functions. The main contribution of this paper combines this general architecture with a sketching idea to develop a specific and efficient neural network which can approximate the $p$-th Wasserstein distance between point sets. Very importantly, the required model complexity is independent of the sizes of the input point sets. On the theoretical front, to the best of our knowledge, this is the first result showing that there exists a neural network capable of approximating the Wasserstein distance with bounded model complexity. Our work provides an interesting integration of sketching ideas for geometric problems with the universal approximation of symmetric functions. On the empirical front, we present a range of results showing that our newly proposed neural network architecture performs comparably to or better than other models (including a SOTA Siamese-autoencoder-based approach). In particular, our neural network generalizes significantly better and trains much faster than the SOTA Siamese AE. Finally, this line of investigation could be useful in exploring effective neural network designs for solving a broad range of geometric optimization problems (e.g., $k$-means in a metric space).
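For equal-size point sets with uniform weights, the $p$-th Wasserstein distance that such a network is trained to approximate reduces to an optimal assignment problem, which the following sketch computes exactly (standard SciPy routines; this is the training target, not the proposed network):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def wasserstein_p(X, Y, p=2):
    """Exact p-Wasserstein distance between two (n, d) point sets with
    uniform weights, via an optimal bijective matching."""
    C = cdist(X, Y) ** p                    # pairwise ground costs d(x_i, y_j)^p
    rows, cols = linear_sum_assignment(C)   # minimum-cost perfect matching
    return C[rows, cols].mean() ** (1 / p)  # (1/n * sum of matched costs)^(1/p)
```

The cubic cost of the assignment step in the set size is precisely what motivates learning a fast neural approximation whose model complexity does not grow with $n$.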
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
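For reference, Shepard's universal law of generalization (a classical result from cognitive science, stated here in its simplest form) holds that generalization, i.e. perceived similarity, decays exponentially with distance $d(x,y)$ in psychological space:

\[
s(x, y) = e^{-\,d(x, y)},
\]

so two explanations are judged similar to the extent that they lie close together in the underlying similarity space.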