The Elvis problem has been studied in [2], which proves existence of solutions. However, computing these solutions in the non-smooth case has remained an open problem. A bisection method is proposed to solve the Elvis problem in two space dimensions for general convex bounded velocity sets, and its convergence rate is proved to be linear. Finally, numerical tests are performed on smooth and non-smooth velocity sets, demonstrating the robustness of the algorithm.
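For intuition, the sketch below applies bisection to the classical smooth, isotropic special case of the Elvis problem (not the paper's general algorithm for convex velocity sets): two half-planes with constant speeds, where the optimal path consists of two straight segments and the travel time is a convex function of the crossing point on the interface. The geometry and function names are illustrative assumptions.

```python
import math

def elvis_crossing(start, goal, r1, r2, tol=1e-10):
    """Bisection sketch for the smooth, isotropic Elvis problem in the plane.

    The two media are the half-planes y > 0 (speed r1) and y < 0 (speed r2),
    separated by the x-axis.  The optimal path is a pair of straight segments
    meeting the axis at a crossing point (x, 0); the travel time
        T(x) = dist(start, (x, 0)) / r1 + dist((x, 0), goal) / r2
    is convex in x, so we bisect on its derivative.
    """
    (ax, ay), (bx, by) = start, goal

    def dT(x):  # derivative of the travel time w.r.t. the crossing abscissa
        return (x - ax) / (r1 * math.hypot(x - ax, ay)) + \
               (x - bx) / (r2 * math.hypot(x - bx, by))

    lo, hi = min(ax, bx), max(ax, bx)  # the minimizer lies between the abscissas
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dT(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: start on the beach at (0, 3), ball in the water at (8, -2);
# running speed 6, swimming speed 1.
x_star = elvis_crossing((0.0, 3.0), (8.0, -2.0), r1=6.0, r2=1.0)
```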
We explore the features of two methods of stabilization, aggregation and supremizer methods, for reduced-order modeling of parametrized optimal control problems. In both methods, the reduced basis spaces are augmented to guarantee stability. For the aggregation method, the reduced basis approximation spaces for the state and adjoint variables are augmented so that the two spaces become identical. For the supremizer method, the reduced basis approximation space for the state-control product space is augmented with the solutions of a supremizer equation. We implement both methods for several parametrized control problems and assess their performance. Results indicate that the number of reduced basis vectors needed to approximate the solution space to a given tolerance is much larger for the supremizer method, possibly double that required by aggregation. There are also cases where the supremizer method fails to produce a converged solution. We compare the accuracy, efficiency, and computational costs of both approaches; the results suggest that aggregation is the superior stabilization method for control problems.
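As a schematic illustration, in generic reduced-basis notation that need not match the paper's, the two enrichment strategies can be written as follows, where $y$ and $p$ denote state and adjoint snapshots at training parameters $\mu_n$, $X$ the state space, and $b(\cdot,\cdot;\mu)$ the bilinear form of the state equation:

```latex
% Generic notation (not necessarily the paper's): y = state, p = adjoint,
% \mu_n = training parameters, X = state space with inner product (.,.)_X,
% b(.,.;\mu) = bilinear form of the state equation.
\begin{align*}
  % Aggregation: state and adjoint snapshots are pooled into one common
  % space, which is then used for both variables.
  Z_N^{\mathrm{agg}} &= \operatorname{span}\{\, y(\mu_n),\; p(\mu_n) : n = 1,\dots,N \,\}, \\
  % Supremizer enrichment: each adjoint snapshot q = p(\mu_n) contributes a
  % supremizer T^{\mu} q, the Riesz representative of b(\cdot, q; \mu):
  (T^{\mu} q,\, v)_X &= b(v, q; \mu) \quad \text{for all } v \in X .
\end{align*}
```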
The Taylor expansion, which stems from Linear Logic and its differential extensions, is an approximation framework for the $\lambda$-calculus (and many of its variants). The reduction of the approximants of a $\lambda$-term induces a reduction on the $\lambda$-term itself, which enjoys a simulation property: whenever a term reduces to another, the approximants reduce accordingly. In recent work, we extended this result to an infinitary $\lambda$-calculus (namely, $\Lambda_{\infty}^{001}$). This short paper settles the question of whether the converse property also holds: if the approximants of some term reduce to the approximants of another term, is there a $\beta$-reduction between these terms? We show that this is indeed the case for the $\lambda$-calculus, but our proof fails in the infinitary setting. We exhibit a counter-example refuting conservativity for $\Lambda_{\infty}^{001}$.
We investigate predicative aspects of constructive univalent foundations. By predicative and constructive, we respectively mean that we do not assume Voevodsky's propositional resizing axioms or excluded middle. Our work complements existing work on predicative mathematics by exploring what cannot be done predicatively in univalent foundations. Our first main result is that nontrivial (directed or bounded) complete posets are necessarily large. That is, if such a nontrivial poset is small, then weak propositional resizing holds. It is possible to derive full propositional resizing if we strengthen nontriviality to positivity. The distinction between nontriviality and positivity is analogous to the distinction between nonemptiness and inhabitedness. Moreover, we prove that locally small, nontrivial (directed or bounded) complete posets necessarily lack decidable equality. We prove our results for a general class of posets, which includes e.g. directed complete posets, bounded complete posets, sup-lattices and frames. Secondly, the fact that these nontrivial posets are necessarily large has the important consequence that Tarski's theorem (and similar results) cannot be applied in nontrivial instances. Furthermore, we explain that generalizations of Tarski's theorem that allow for large structures are provably false by showing that the ordinal of ordinals in a univalent universe has small suprema in the presence of set quotients. The latter also leads us to investigate the inter-definability and interaction of type universes of propositional truncations and set quotients, as well as a set replacement principle. Thirdly, we clarify, in our predicative setting, the relation between the traditional definition of sup-lattice that requires suprema for all subsets and our definition that asks for suprema of all small families.
We study the Online Traveling Salesperson Problem (OLTSP) with predictions. In OLTSP, a sequence of initially unknown requests arrives over time at points (locations) of a metric space. The goal is, starting from a particular point of the metric space (the origin), to serve all these requests while minimizing the total time spent. The server moves with unit speed or is "waiting" (zero speed) at some location. We consider two variants: in the open variant, the goal is achieved when the last request is served; in the closed one, the server additionally has to return to the origin. We adopt a prediction model, introduced for OLTSP on the line, in which the predictions correspond to the locations of the requests, and we extend it to more general metric spaces. We first propose an oracle-based algorithmic framework, inspired by previous work. This framework allows us to design online algorithms for general metric spaces that provide competitive ratio guarantees which, given perfect predictions, beat the best possible classical guarantee (consistency). Moreover, they degrade gracefully as the prediction error increases (smoothness), while always remaining within a constant factor of the best known competitive ratio in the classical case (robustness). Having reduced the problem to designing suitable efficient oracles, we describe how to achieve this for general metric spaces as well as for specific metric spaces (rings, trees and flowers), the resulting algorithms being tractable in the latter case. The consistency guarantees of our algorithms are tight in almost all cases, and their smoothness guarantees only suffer a linear dependency on the error, which we show is necessary. Finally, we provide robustness guarantees improving upon previous results.
Our work concerns algorithms for an unweighted variant of Maximum Flow. In the All-Pairs Connectivity (APC) problem, we are given a graph $G$ on $n$ vertices and $m$ edges, and are tasked with computing the maximum number of edge-disjoint paths from $s$ to $t$ (equivalently, the size of a minimum $(s,t)$-cut) in $G$, for all pairs of vertices $(s,t)$. Although over undirected graphs APC can be solved in essentially optimal $n^{2+o(1)}$ time, the true time complexity of APC over directed graphs remains open: this problem can be solved in $\tilde{O}(m^\omega)$ time, where $\omega \in [2, 2.373)$ is the exponent of matrix multiplication, but no matching conditional lower bound is known. We study a variant of APC called the $k$-Bounded All-Pairs Connectivity ($k$-APC) problem. In this problem, we are given an integer $k$ and a graph $G$, and are tasked with reporting the size of a minimum $(s,t)$-cut only for pairs $(s,t)$ of vertices whose minimum cut has size less than $k$ (if the minimum $(s,t)$-cut has size at least $k$, we just report that it is "large" instead of computing the exact value). We present an algorithm solving $k$-APC in directed graphs in $\tilde{O}((kn)^\omega)$ time. This runtime is $\tilde O(n^\omega)$ for all $k$ polylogarithmic in $n$, which is essentially optimal under popular conjectures from fine-grained complexity. Previously, this runtime was only known for $k\le 2$ [Georgiadis et al., ICALP 2017]. We also study a variant of $k$-APC, the $k$-Bounded All-Pairs Vertex Connectivity ($k$-APVC) problem, which considers internally vertex-disjoint paths instead of edge-disjoint paths. We present an algorithm solving $k$-APVC in directed graphs in $\tilde{O}(k^2n^\omega)$ time. Previous work solved an easier version of the $k$-APVC problem in $\tilde O((kn)^\omega)$ time [Abboud et al., ICALP 2019].
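To make the $k$-bounded output format concrete, here is a naive per-pair baseline (a hypothetical helper, not the paper's algebraic algorithm): it runs at most $k$ augmenting-path searches of Ford-Fulkerson on a unit-capacity digraph, so it returns the exact minimum $(s,t)$-cut size when that size is below $k$ and simply saturates at $k$ ("large") otherwise.

```python
from collections import defaultdict

def capped_mincut(adj, s, t, k):
    """Return min(k, size of a minimum (s,t)-cut) in a directed unit-capacity graph.

    Naive baseline: run at most k augmenting-path (DFS) searches of
    Ford-Fulkerson; once k paths are found we stop and report "large" (i.e. k).
    This only pins down the k-APC output -- the paper's algorithm is algebraic
    and runs in ~O((kn)^omega) time overall, not per pair.
    """
    cap = defaultdict(int)  # residual capacities: 1 on original edges
    for u, vs in adj.items():
        for v in vs:
            cap[(u, v)] += 1
    nodes = set(adj) | {v for vs in adj.values() for v in vs}

    def dfs(u, visited):
        if u == t:
            return True
        visited.add(u)
        for v in nodes:
            if v not in visited and cap[(u, v)] > 0 and dfs(v, visited):
                cap[(u, v)] -= 1  # push one unit of flow along the path
                cap[(v, u)] += 1
                return True
        return False

    flow = 0
    while flow < k and dfs(s, set()):
        flow += 1
    return flow  # equals k exactly when the true min cut is "large" (>= k)

# Tiny example: two edge-disjoint s->t paths, so with k = 3 we report 2.
g = {'s': ['a', 'b'], 'a': ['t'], 'b': ['t']}
print(capped_mincut(g, 's', 't', k=3))  # -> 2
```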
We consider the design of sublinear space and query complexity algorithms for estimating the cost of a minimum spanning tree (MST) and the cost of a minimum traveling salesman (TSP) tour in a metric on $n$ points. We first consider the $o(n)$-space regime and show that, when the input is a stream of all $\binom{n}{2}$ entries of the metric, for any $\alpha \ge 2$, both MST and TSP cost can be $\alpha$-approximated using $\tilde{O}(n/\alpha)$ space, and that $\Omega(n/\alpha^2)$ space is necessary for this task. Moreover, we show that even if the streaming algorithm is allowed $p$ passes over a metric stream, it still requires $\tilde{\Omega}(\sqrt{n/\alpha p^2})$ space. We next consider the semi-streaming regime, where computing even the exact MST cost is easy and the main challenge is to estimate TSP cost to within a factor that is strictly better than $2$. We show that, if the input is a stream of all edges of the weighted graph that induces the underlying metric, for any $\varepsilon > 0$, any one-pass $(2-\varepsilon)$-approximation of TSP cost requires $\Omega(\varepsilon^2 n^2)$ space; on the other hand, there is an $\tilde{O}(n)$ space two-pass algorithm that approximates the TSP cost to within a factor of 1.96. Finally, we consider the query complexity of estimating metric TSP cost to within a factor that is strictly better than $2$, when the algorithm is given access to a matrix that specifies pairwise distances between all points. For MST estimation in this model, it is known that a $(1+\varepsilon)$-approximation is achievable with $\tilde{O}(n/\varepsilon^{O(1)})$ queries. We design an algorithm that performs $\tilde{O}(n^{1.5})$ distance queries and achieves a strictly better than $2$-approximation when either the metric is known to contain a spanning tree supported on weight-$1$ edges or the algorithm is given access to a minimum spanning tree of the graph.
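As context for the "strictly better than 2" target, the sketch below implements the classical baseline: for any metric, the optimal tour cost lies between the MST cost and twice the MST cost, so doubling an exact MST computation (here Prim's algorithm on the full distance matrix, i.e. with no space or query savings) is always a 2-approximation. The code is illustrative and not taken from the paper.

```python
import math

def mst_cost(dist):
    """Prim's algorithm on an explicit metric given as a full distance matrix.

    Classical factor-2 baseline: MST <= TSP <= 2 * MST for any metric, so
    2 * mst_cost(dist) is always a 2-approximation of the optimal tour cost.
    The paper's question is how well this can be done with sublinear space
    or sublinear queries.
    """
    n = len(dist)
    in_tree = [False] * n
    best = [math.inf] * n
    best[0], total = 0.0, 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v] = dist[u][v]
    return total

# Four points on a line at 0, 1, 2, 3: MST = 3, optimal tour = 6, estimate = 6.
pts = [0, 1, 2, 3]
d = [[abs(a - b) for b in pts] for a in pts]
print(2 * mst_cost(d))  # -> 6
```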
SARSA, a classical on-policy control algorithm for reinforcement learning, is known to chatter when combined with linear function approximation: SARSA does not diverge but oscillates in a bounded region. However, little is known about how fast SARSA converges to that region and how large the region is. In this paper, we make progress towards this open problem by establishing the convergence rate of projected SARSA to a bounded region. Importantly, the region is much smaller than the region onto which we project, provided that the magnitude of the reward is not too large. Existing works on the convergence of linear SARSA to a fixed point all require the Lipschitz constant of SARSA's policy improvement operator to be sufficiently small; our analysis instead applies to arbitrary Lipschitz constants and thus characterizes the behavior of linear SARSA in a new regime.
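A schematic rendering of a projected linear SARSA iteration of the kind analyzed here; the environment interface, the softmax policy-improvement operator, and the projection radius are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def projected_sarsa(env, phi, num_features, radius, alpha=0.05, gamma=0.9,
                    tau=1.0, steps=10_000, seed=0):
    """Schematic projected linear SARSA.

    Assumes an env with .reset() -> s, .step(a) -> (s', r, done) and
    .num_actions, plus features phi(s, a) -> np.ndarray of length num_features.
    The policy improvement operator is a softmax with temperature tau (its
    Lipschitz constant grows as tau shrinks); after every update the weights
    are projected back onto the ball of the given radius -- the region whose
    size the paper's analysis refines.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(num_features)

    def policy(s):
        q = np.array([w @ phi(s, a) for a in range(env.num_actions)])
        p = np.exp((q - q.max()) / tau)
        return rng.choice(env.num_actions, p=p / p.sum())

    s = env.reset()
    a = policy(s)
    for _ in range(steps):
        s2, r, done = env.step(a)
        a2 = policy(s2)
        target = r + (0.0 if done else gamma * (w @ phi(s2, a2)))
        w += alpha * (target - w @ phi(s, a)) * phi(s, a)   # semi-gradient step
        norm = np.linalg.norm(w)
        if norm > radius:                                   # projection step
            w *= radius / norm
        if done:
            s = env.reset()
            a = policy(s)
        else:
            s, a = s2, a2
    return w
```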
An old problem in multivariate statistics is that linear Gaussian models are often unidentifiable, i.e. some parameters cannot be uniquely estimated. In factor (component) analysis, an orthogonal rotation of the factors is unidentifiable, while in linear regression, the direction of effect cannot be identified. For such linear models, non-Gaussianity of the (latent) variables has been shown to provide identifiability. In the case of factor analysis, this leads to independent component analysis, while in the case of the direction of effect, non-Gaussian versions of structural equation modelling solve the problem. More recently, we have shown how even general nonparametric nonlinear versions of such models can be estimated. Non-Gaussianity is not enough in this case, but assuming we have time series, or that the distributions are suitably modulated by some observed auxiliary variables, the models are identifiable. This paper reviews the identifiability theory for the linear and nonlinear cases, considering both factor analytic models and structural equation models.
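A small numerical illustration of the linear case reviewed here, using scikit-learn: with non-Gaussian (here Laplacian) independent sources, ICA recovers the sources up to permutation and sign, whereas second-order methods such as PCA leave an orthogonal rotation undetermined. The source distribution and mixing matrix below are arbitrary choices for the demo.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(0)
S = rng.laplace(size=(5000, 2))          # independent non-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # "unknown" mixing matrix
X = S @ A.T                              # observed mixtures

S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
S_pca = PCA(n_components=2).fit_transform(X)

# ICA estimates correlate strongly (up to sign/permutation) with the true
# sources; PCA components generally do not, reflecting the rotation
# indeterminacy of Gaussian factor models.
print(np.abs(np.corrcoef(S.T, S_ica.T))[0:2, 2:4].round(2))
print(np.abs(np.corrcoef(S.T, S_pca.T))[0:2, 2:4].round(2))
```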
Automated detection of contraband items in X-ray images can significantly increase public safety, by enhancing the productivity and alleviating the mental load of security officers in airports, subways, customs/post offices, etc. The large volume and high throughput of passengers, mailed parcels, etc., during rush hours make it a Big Data analysis task. Modern computer vision algorithms relying on Deep Neural Networks (DNNs) have proven capable of undertaking this task even under resource-constrained and embedded execution scenarios, e.g., as is the case with fast, single-stage, anchor-based object detectors. This paper proposes two complementary improvements of such algorithms for the X-ray analysis domain. Firstly, more efficient anchors are obtained by hierarchically clustering the sizes of the ground-truth training-set bounding boxes; thus, the resulting anchors follow a natural hierarchy aligned with the semantic structure of the data. Secondly, the default Non-Maximum Suppression (NMS) algorithm at the end of the object detection pipeline is modified to better handle occluded object detection and to reduce the number of false predictions, by inserting the Efficient Intersection over Union (E-IoU) metric into the Weighted Cluster NMS method. E-IoU provides more discriminative geometrical correlations between the candidate bounding boxes/Regions-of-Interest (RoIs). The proposed method is implemented on a common single-stage object detector (YOLOv5), and its experimental evaluation on a relevant public dataset indicates significant accuracy gains over both the baseline and competing approaches. This highlights the potential of Big Data analysis in enhancing public safety.
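A sketch of the anchor-generation step; the linkage criterion, the log-scale transform, and the choice of cluster representative below are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def anchors_from_boxes(wh, num_anchors=9, method="ward"):
    """Derive anchor sizes by hierarchical clustering of ground-truth
    bounding-box (width, height) pairs.

    Schematic version of the idea described above: build a dendrogram over
    the (log-scaled) box sizes, cut it into num_anchors clusters, and take a
    per-cluster representative as the anchor.
    """
    wh = np.asarray(wh, dtype=float)
    Z = linkage(np.log(wh), method=method)                 # dendrogram over log-sizes
    labels = fcluster(Z, t=num_anchors, criterion="maxclust")
    anchors = np.array([np.exp(np.median(np.log(wh[labels == c]), axis=0))
                        for c in range(1, labels.max() + 1)])
    return anchors[np.argsort(anchors.prod(axis=1))]       # sort by anchor area

# Example with a few synthetic ground-truth box sizes (in pixels).
boxes = [(12, 30), (14, 28), (40, 40), (44, 38), (120, 60), (130, 64)]
print(anchors_from_boxes(boxes, num_anchors=3))
```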
The paper introduces a novel non-parametric Riemannian regression method using Isometric Riemannian Manifolds (IRMs). The proposed technique, Intrinsic Local Polynomial Regression on IRMs (ILPR-IRMs), enables global data mapping between Riemannian manifolds while preserving the underlying geometries. The ILPR method is generalized to handle multivariate covariates on any Riemannian manifold and isometry. Specifically, for manifolds equipped with Euclidean Pullback Metrics (EPMs), a closed analytical formula is derived for the multivariate ILPR (ILPR-EPM). Asymptotic statistical properties of the ILPR-EPM for the multivariate local linear case are established, including a formula for the asymptotic bias and a proof of the estimator's consistency. The paper showcases possible applications of the method by focusing on a family of Riemannian metrics on the Symmetric Positive Definite (SPD) manifold, which arises in machine learning and neuroscience. It is demonstrated that several metrics on the SPD manifold are EPMs, resulting in a closed analytical expression for the multivariate ILPR estimator on the SPD manifold. The paper evaluates the ILPR estimator's performance under two specific EPMs, Log-Cholesky and Log-Euclidean, on simulated data on the SPD manifold and compares it with extrinsic LPR under the Affine-Invariant metric as the manifold and covariate dimensions grow. The results show that the ILPR using the Log-Cholesky metric is computationally faster and provides a better trade-off between error and time than the other metrics. Finally, the Log-Cholesky metric on the SPD manifold is employed to implement an efficient and intrinsic version of Rie-SNE for visualizing high-dimensional SPD data. Code implementing ILPR-EPMs and the other relevant calculations is available on GitHub.
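As a simplified illustration of the pullback idea for one EPM, the sketch below performs kernel-weighted regression of SPD-valued responses under the Log-Euclidean metric: map to the log domain, average in Euclidean space, and map back. It is locally constant rather than locally polynomial, and the kernel and bandwidth are illustrative choices, not the paper's estimator.

```python
import numpy as np
from scipy.linalg import logm, expm

def log_euclidean_local_regression(x_train, spd_train, x_eval, bandwidth=0.5):
    """Nadaraya-Watson-style local regression of SPD-valued responses.

    Sketch of the Euclidean-pullback idea for the Log-Euclidean metric: the
    matrix logarithm maps SPD matrices into the Euclidean space of symmetric
    matrices, where an ordinary kernel-weighted average is formed and then
    mapped back with the matrix exponential.
    """
    logs = np.array([logm(S) for S in spd_train])            # pull back to the log domain
    w = np.exp(-0.5 * ((np.asarray(x_train) - x_eval) / bandwidth) ** 2)
    w /= w.sum()
    mean_log = np.tensordot(w, logs, axes=1)                  # weighted Euclidean average
    return expm(mean_log)                                     # push forward to the SPD manifold

# Example: noisy SPD responses S(x) = expm(x * B + noise) observed at a few covariates.
B = np.array([[0.3, 0.1], [0.1, 0.2]])
xs = np.linspace(0.0, 2.0, 15)
data = [expm(x * B + 0.05 * np.diag(np.random.default_rng(i).normal(size=2)))
        for i, x in enumerate(xs)]
print(log_euclidean_local_regression(xs, data, x_eval=1.0))
```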