Initial orbit determination (IOD) is an important early step in the processing chain that makes sense of and reconciles the multiple optical observations of a resident space object. IOD methods generally operate on line-of-sight (LOS) vectors extracted from images of the object; the LOS vectors can thus be seen as discrete point samples of the raw optical measurements. Typically, the number of LOS vectors used by an IOD method is much smaller than the number of available measurements (\ie, the set of pixel intensity values), so current IOD methods arguably under-utilize the rich information present in the data. In this paper, we propose a \emph{direct} IOD method called D-IOD that fits the orbital parameters directly to the observed streak images, without requiring LOS extraction. Since it does not utilize LOS vectors, D-IOD avoids potential inaccuracies or errors due to an imperfect LOS extraction step. Two innovations underpin our novel orbit-fitting paradigm: first, we introduce a novel non-linear least-squares objective function that computes the loss between candidate-orbit-generated streak images and the observed streak images. Second, the objective function is minimized with a gradient descent approach embedded in our proposed optimization strategies designed for streak images. We demonstrate the effectiveness of D-IOD on a variety of simulated scenarios and challenging real streak images.
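Schematically, and with notation not taken from the paper, the direct orbit fit described here can be pictured as a non-linear least-squares problem posed over the image pixels themselves:
\[
\hat{\boldsymbol{\theta}} \;=\; \arg\min_{\boldsymbol{\theta}} \sum_{k=1}^{K} \big\| I_k - \mathcal{R}_k(\boldsymbol{\theta}) \big\|_2^2,
\]
where $\boldsymbol{\theta}$ denotes the candidate orbital parameters, $I_k$ the $k$-th observed streak image, and $\mathcal{R}_k$ a renderer that generates the streak image the candidate orbit would produce in the $k$-th exposure; the exact objective and the optimization strategies are those defined in the paper.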
We present a polymorphic linear lambda-calculus as a proof language for second-order intuitionistic linear logic. The calculus includes addition and scalar multiplication, enabling the proof of a linearity result at the syntactic level.
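As a hedged illustration only (the precise rules are those of the paper), typing rules for the additive constructs of such a calculus might take the form
\[
\frac{\Gamma \vdash t : A \qquad \Gamma \vdash u : A}{\Gamma \vdash t + u : A}
\qquad\qquad
\frac{\Gamma \vdash t : A}{\Gamma \vdash s \cdot t : A}\;\;(s \text{ a scalar}),
\]
so that sums and scalar multiples of proofs of $A$ are again proofs of $A$, which is the shape of judgment on which a syntactic linearity result can be stated.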
We introduce a novel algorithm that converges to level-set convex viscosity solutions of high-dimensional Hamilton-Jacobi equations. The algorithm is applicable to a broad class of curvature motion PDEs, as well as to a recently developed Hamilton-Jacobi equation for the Tukey depth, a statistical depth measure of data points. A main contribution of our work is a new monotone scheme for approximating the direction of the gradient, which allows for monotone discretizations of pure partial derivatives in the direction of, and orthogonal to, the gradient. We provide a convergence analysis of the algorithm on both regular Cartesian grids and unstructured point clouds in any dimension. Numerical experiments demonstrate the effectiveness of the algorithm in approximating solutions of the affine flow in two dimensions and the Tukey depth measure of high-dimensional datasets such as MNIST and FashionMNIST.
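The paper's scheme itself is not reproduced in this abstract; for intuition, the following sketch shows a classical monotone (Rouy--Tourin-type) upwind discretization of the gradient norm on a 2-D Cartesian grid, the kind of building block on which monotone approximations involving the gradient direction rest. All identifiers are illustrative.

```python
import numpy as np

def monotone_grad_norm(u, h):
    """Classical Rouy-Tourin monotone approximation of |grad u| on a
    2-D grid with spacing h (illustrative; boundaries treated as
    periodic via np.roll for brevity)."""
    dxp = (np.roll(u, -1, axis=0) - u) / h   # forward difference D+_x u
    dxm = (u - np.roll(u, 1, axis=0)) / h    # backward difference D-_x u
    dyp = (np.roll(u, -1, axis=1) - u) / h
    dym = (u - np.roll(u, 1, axis=1)) / h
    # Upwind selection max(D-, -D+, 0) keeps the scheme monotone.
    gx = np.maximum(np.maximum(dxm, -dxp), 0.0)
    gy = np.maximum(np.maximum(dym, -dyp), 0.0)
    return np.sqrt(gx**2 + gy**2)
```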
A comprehensive mathematical model of the multiphysics flow of blood and cerebrospinal fluid (CSF) in the brain can be expressed as the coupling of a poromechanics system and Stokes' equations: the former describes fluid filtration through the cerebral tissue and the tissue's elastic response, while the latter models the flow of the CSF in the brain ventricles. This model describes the functioning of the brain's waste clearance mechanism, which has recently been discovered to play an essential role in the progression of neurodegenerative diseases. To model the interactions between different scales in the porous medium, we propose a physically consistent coupling between Multi-compartment Poroelasticity (MPE) equations and Stokes' equations. In this work, we introduce a numerical scheme for the discretization of the coupled MPE-Stokes system, employing a high-order discontinuous Galerkin method on polytopal grids to efficiently account for the geometric complexity of the domain. We analyze the stability and convergence of the semidiscrete-in-space formulation, prove a priori error estimates, and present a temporal discretization based on a combination of Newmark's $\beta$-method for the elastic wave equation and the $\theta$-method for the other equations of the model. Numerical simulations carried out on test cases with manufactured solutions validate the theoretical error estimates. We also present numerical results on a two-dimensional slice of a patient-specific brain geometry reconstructed from diagnostic images, to test in practice the advantages of the proposed approach.
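For reference, the standard single-step forms of the two time-stepping schemes mentioned above (the paper's specific assembly is omitted) read
\[
\begin{aligned}
\mathbf{u}^{n+1} &= \mathbf{u}^{n} + \Delta t\,\mathbf{v}^{n} + \Delta t^{2}\left[\left(\tfrac{1}{2}-\beta\right)\mathbf{a}^{n} + \beta\,\mathbf{a}^{n+1}\right],\\
\mathbf{v}^{n+1} &= \mathbf{v}^{n} + \Delta t\left[(1-\gamma)\,\mathbf{a}^{n} + \gamma\,\mathbf{a}^{n+1}\right],
\end{aligned}
\]
for Newmark's method applied to the elastic wave equation (with displacement $\mathbf{u}$, velocity $\mathbf{v}$, and acceleration $\mathbf{a}$), and
\[
\mathbf{y}^{n+1} = \mathbf{y}^{n} + \Delta t\left[(1-\theta)\,\mathbf{f}^{n} + \theta\,\mathbf{f}^{n+1}\right]
\]
for the $\theta$-method applied to the remaining equations $\dot{\mathbf{y}} = \mathbf{f}$.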
Detecting and exploiting similarities between seemingly distant objects is at the core of analogical reasoning, which itself is at the core of artificial intelligence. This paper develops, {\em from the ground up}, an abstract algebraic and {\em qualitative} notion of similarity based on the observation that sets of generalizations encode important properties of elements. We show that similarity defined in this way has appealing mathematical properties. Since we construct our notion of similarity from first principles using only elementary concepts of universal algebra, we show, to convince the reader of its plausibility, that it can be naturally embedded into first-order logic via model-theoretic types.
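As a toy illustration only (not the paper's algebraic construction), the idea of comparing elements through their sets of generalizations can already be seen in a finite partial order, where the generalizations of an element are taken to be everything above it; all identifiers below are hypothetical.

```python
# Toy sketch: similarity via shared generalizations in a finite order.
def ups(x, leq, universe):
    """Set of 'generalizations' of x: all y with x <= y."""
    return {y for y in universe if leq(x, y)}

def common_generalizations(a, b, leq, universe):
    """Shared generalizations of a and b; larger overlap ~ more similar."""
    return ups(a, leq, universe) & ups(b, leq, universe)

# Example: divisibility order on {1,...,12}, reading "x <= y" as "x divides y".
U = set(range(1, 13))
leq = lambda x, y: y % x == 0
print(sorted(common_generalizations(2, 3, leq, U)))  # [6, 12]: common multiples in U
```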
This work focuses on the conservation of quantities such as Hamiltonians, mass, and momentum when solution fields of partial differential equations are approximated with nonlinear parametrizations such as deep networks. The proposed approach builds on Neural Galerkin schemes, which are based on the Dirac--Frenkel variational principle and train nonlinear parametrizations sequentially in time. We first show that merely adding constraints that aim to conserve quantities in continuous time can be insufficient: because of the nonlinear dependence on the parameters, even quantities that are linear in the solution fields become nonlinear in the parameters and thus are challenging to discretize in time. Instead, we propose Neural Galerkin schemes that compute at each time step an explicit embedding onto the manifold of nonlinearly parametrized solution fields to guarantee conservation of quantities. The embeddings can be combined with standard explicit and implicit time integration schemes. Numerical experiments demonstrate that the proposed approach conserves quantities up to machine precision.
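Schematically (notation ours, not the paper's), if $g(\theta)$ denotes a conserved quantity expressed through the parametrization, the per-step embedding can be pictured as projecting the unconstrained parameter update $\tilde{\theta}^{\,n+1}$ back onto the constraint set,
\[
\theta^{n+1} \;=\; \arg\min_{\theta}\; \big\| \theta - \tilde{\theta}^{\,n+1} \big\|_2^2
\quad \text{subject to} \quad g(\theta) = g(\theta^{0}),
\]
which enforces conservation at every discrete time step rather than only in continuous time.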
The Time-Aware Shaper (TAS) is a time-triggered scheduling mechanism that ensures bounded latency for time-critical Scheduled Traffic (ST) flows. The Linux kernel implementation (a.k.a.\ TAPRIO) has limited capabilities due to varying CPU workloads and thus does not offer tight latency bounds for the ST flows; moreover, it currently supports only large cycle times. Other software implementations are limited to simulation studies without physical implementation. In this paper, we present $\mu$TAS, a MicroC-based hardware implementation of TAS on a programmable SmartNIC. $\mu$TAS takes advantage of the parallel-processing architecture of the SmartNIC to configure the scheduling behaviour of its queues at runtime. To demonstrate the effectiveness of $\mu$TAS, we built a Time-Sensitive Networking (TSN) testbed from scratch. It consists of multiple end-hosts capable of generating ST and Best-Effort (BE) flows, and of TSN switches equipped with SmartNICs running $\mu$TAS. Time synchronization is maintained between the switches and hosts. Our experiments demonstrate that the ST flows experience a bounded latency on the order of tens of microseconds.
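Independently of the MicroC/SmartNIC implementation, the gating semantics that any TAS realizes can be sketched in a few lines: a cyclic gate control list (GCL) decides which egress queues may transmit at each instant. The GCL below is purely illustrative.

```python
# Minimal sketch of TAS gating semantics (illustrative values only).
GCL = [  # (window duration in ns, bitmask of open queue gates)
    (300_000, 0b001),  # gate open only for the ST queue
    (700_000, 0b110),  # gates open for the BE queues
]
CYCLE_NS = sum(duration for duration, _ in GCL)

def open_gates(t_ns):
    """Return the queue bitmask in force at absolute time t_ns."""
    t = t_ns % CYCLE_NS          # position within the repeating cycle
    for duration, gates in GCL:
        if t < duration:
            return gates
        t -= duration
    return 0  # unreachable for a well-formed GCL

assert open_gates(100_000) == 0b001  # inside the ST window
assert open_gates(500_000) == 0b110  # inside the BE window
```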
Structural convergence is a framework for convergence of graphs by Ne\v{s}et\v{r}il and Ossona de Mendez that unifies the dense (left) graph convergence and Benjamini-Schramm convergence. They posed a problem asking whether, for a given sequence of graphs $(G_n)$ converging to a limit $L$ and a vertex $r$ of $L$, it is possible to find a sequence of vertices $(r_n)$ such that $L$ rooted at $r$ is the limit of the graphs $G_n$ rooted at $r_n$. A counterexample was found by Christofides and Kr\'{a}l', but they showed that the statement holds for almost all vertices $r$ of $L$. We offer another perspective on the original problem by considering the size of definable sets to which the root $r$ belongs. We prove that if $r$ is an algebraic vertex (i.e., one that belongs to a finite definable set), the sequence of roots $(r_n)$ always exists.
It is known from the literature that the choice of basis functions in hp-FEM heavily influences the computational cost of obtaining an approximate solution. Depending on the choice of the reference element, suitable tensor-product-like basis functions of Jacobi polynomials with different weights lead to optimal properties with respect to condition number and sparsity. This paper presents basis functions that are biorthogonal to the primal basis functions mentioned above. The authors investigate hypercubes and simplices as reference elements, as well as the cases of $H^1$ and $H(\mathrm{curl})$. The biorthogonal functions can be expressed as sums of tensor products of Jacobi polynomials with at most two summands.
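In the usual sense (and up to normalization constants), biorthogonality of a dual basis $\{\psi_i\}$ to the primal basis $\{\varphi_j\}$ on a reference element $\hat{K}$ means
\[
\int_{\hat{K}} \psi_i\,\varphi_j \,\mathrm{d}x \;=\; c_i\,\delta_{ij},
\]
so that testing against $\psi_i$ extracts the $i$-th primal coefficient directly; the concrete Jacobi weights and element types are those studied in the paper.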
The generalized Golub-Kahan bidiagonalization has been used to solve saddle-point systems where the leading block is symmetric and positive definite. We extend this iterative method to the case where the symmetry condition no longer holds. We do so by relying on the algorithm's known connection with the Conjugate Gradient method and by following the line of reasoning that adapts the latter into the Full Orthogonalization Method. We propose appropriate stopping criteria based on the residual and on an estimate of the energy norm of the error associated with the primal variable. A numerical comparison with GMRES highlights the advantages of our proposed strategy, in particular its low memory requirements and their practical implications.
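For orientation, saddle-point systems of the kind mentioned here are generically of the block form
\[
\begin{pmatrix} M & A \\ A^{\top} & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\]
where the classical generalized Golub-Kahan setting assumes the leading block $M$ to be symmetric positive definite, the extension above drops that symmetry requirement, and $u$ is the primal variable to which the energy-norm error estimate refers.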
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
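Shepard's universal law of generalization, in its standard exponential form, relates generalization strength to distance in a psychological (similarity) space:
\[
s(x, y) \;=\; e^{-c\, d(x, y)},
\]
where $d$ is a distance in the similarity space, $c > 0$ is a sensitivity parameter, and $s(x, y)$ is the probability that a response learned for $x$ generalizes to $y$; how this quantity enters the theory's predictions of explainee inference is specified in the paper.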