This work explores the relationship between the set of Wardrop equilibria~(WE) of a routing game, the total demand of that game, and the occurrence of Braess's paradox~(BP). The BP formalizes the counter-intuitive fact that, for some networks, removing a path from the network decreases congestion at WE. For single origin-destination routing games with affine cost functions, the first part of this work provides tools for analyzing the evolution of the WE as the demand varies. It characterizes the piecewise-affine nature of this dependence by showing that the set of directions in which the WE can vary in each piece is the solution of a variational inequality problem. In the process, we establish various properties of changes in the set of used and minimal-cost paths as demand varies. As a consequence of these characterizations, we derive a procedure to obtain the WE for all demands above a certain threshold. The second part of the paper deals with detecting the presence of BP in a network. We supply a number of sufficient conditions that reveal the presence of BP and that are computationally tractable. We also discuss a different perspective on BP, where we establish that a path causing BP at a particular demand must be strictly beneficial to the network at a lower demand. Several examples throughout this work illustrate and elaborate our findings.
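As a concrete illustration of BP, the classic four-node Braess network (a textbook instance, not this paper's construction; all names below are ours) can be worked out in a few lines: removing the zero-latency shortcut lowers the equilibrium cost.

```python
# Classic Braess network: nodes s, v, w, t with edges
#   s->v latency x (x = flow on edge),  v->t latency 1,
#   s->w latency 1,                     w->t latency x,
# and total demand d = 1.
d = 1.0

# Without the shortcut v->w: two symmetric paths, flow splits evenly,
# and each traveler pays flow + 1.
flow = d / 2
cost_without = flow + 1          # 0.5 + 1 = 1.5

# With a zero-latency shortcut v->w, all demand uses s->v->w->t at
# equilibrium (the two original paths then also cost 2, so no one deviates).
cost_with = d + 0 + d            # 1 + 0 + 1 = 2.0

print(cost_without, cost_with)   # 1.5 2.0: removing the shortcut helps
```

The added edge raises everyone's travel time from 1.5 to 2, which is exactly the phenomenon the sufficient conditions in the paper are designed to detect.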
We observe a large variety of robots in terms of their bodies, sensors, and actuators. Given the commonalities in their skill sets, teaching each skill to each robot independently is inefficient and does not scale across this variety. If we can learn the correspondences between the sensorimotor spaces of different robots, a skill learned on one robot can be transferred to other robots more directly and easily. In this paper, we propose a method to learn correspondences among two or more robots that may have different morphologies. Specifically, beyond robots with similar morphologies but different degrees of freedom, we show that a fixed-base manipulator robot with joint control and a differential-drive mobile robot can be handled within the proposed framework. To set up the correspondence among the robots considered, an initial base task is demonstrated to the robots to achieve the same goal. A common latent representation is then learned along with the individual robot policies for achieving that goal. After this initial learning stage, observing one robot execute a new task becomes sufficient to generate the latent-space representations that allow the other robots to achieve the same task. We verified our system in a set of experiments where the correspondence between robots is learned (1) when the robots must follow the same paths to achieve the same task, (2) when the robots must follow different trajectories to achieve the same task, and (3) when the complexities of the required sensorimotor trajectories differ across robots. We also provide a proof-of-concept realization of correspondence learning between a real manipulator robot and a simulated mobile robot.
Maximal regularity is a class of a priori estimates for parabolic-type equations, and it plays an important role in the theory of nonlinear differential equations. The aim of this paper is to investigate the temporally discrete counterpart of maximal regularity for the discontinuous Galerkin (DG) time-stepping method. We establish such an estimate, without a logarithmic factor, over quasi-uniform temporal meshes. To show the main result, we introduce a temporally regularized Green's function and then reduce the discrete maximal regularity to a weighted error estimate for its DG approximation. Our results should be useful for the investigation of DG approximations of nonlinear parabolic problems.
We present NCGD, a method for constrained nonsmooth convex optimization. In each iteration, NCGD finds the best norm-constrained descent direction by considering the worst bound over all local subgradients. We prove several global convergence rates for NCGD. For well-behaved nonsmooth functions (weakly smooth and Lipschitz continuous), NCGD converges in $O(\epsilon^{-1})$ iterations, where $\epsilon$ is the desired optimality gap. NCGD converges in $O(\epsilon^{-0.5})$ iterations for strongly convex, weakly smooth functions. Furthermore, if the function is strongly convex and smooth, then NCGD achieves linear convergence (i.e., $O(-\log \epsilon)$). The overall efficiency of NCGD depends on the efficiency of solving a minimax optimization problem involving the subdifferential of the objective function in each iteration.
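A minimal sketch of the inner minimax step, under our own assumed setup (this is not the authors' NCGD code): at a kink of $f(x,y)=\max(2x+y,\,-x+y)$, both pieces are active at the origin, so the subdifferential is the segment between the two active gradients, and the best Euclidean-norm-constrained descent direction is minus the minimum-norm subgradient.

```python
import numpy as np

# Active gradients of f(x, y) = max(2x + y, -x + y) at the origin.
g1 = np.array([2.0, 1.0])
g2 = np.array([-1.0, 1.0])

# Minimum-norm point on the segment g1 + t (g2 - g1), t in [0, 1]:
# this solves min_{||d|| <= 1} max_{g in conv{g1, g2}} g . d in closed form.
d = g2 - g1
t = np.clip(-(g1 @ d) / (d @ d), 0.0, 1.0)
g_min = g1 + t * d                        # minimum-norm subgradient: [0, 1]
direction = -g_min / np.linalg.norm(g_min)

# Every subgradient has negative slope along this direction, so it is a
# genuine descent direction for the nonsmooth max.
assert max(g1 @ direction, g2 @ direction) < 0
print(g_min, direction)
```

With more than two active pieces, this closed-form projection becomes the small minimax (quadratic) subproblem mentioned in the abstract.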
We consider a general nonsymmetric second-order linear elliptic PDE in the framework of the Lax-Milgram lemma. We formulate and analyze an adaptive finite element algorithm with arbitrary polynomial degree that steers the adaptive mesh-refinement and the inexact iterative solution of the arising linear systems. More precisely, the iterative solver employs, as an outer loop, the so-called Zarantonello iteration to symmetrize the system and, as an inner loop, a uniformly contractive algebraic solver, e.g., an optimally preconditioned conjugate gradient method or an optimal geometric multigrid algorithm. We prove that the proposed inexact adaptive iteratively symmetrized finite element method (AISFEM) leads to full linear convergence and, for sufficiently small adaptivity parameters, to optimal convergence rates with respect to the overall computational cost, i.e., the total computational time. Numerical experiments underline the theory.
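The outer Zarantonello loop can be sketched on a tiny nonsymmetric linear system (a toy matrix of ours, not the PDE setting of the paper): each step solves a problem with only the symmetric part of the operator, which is what makes an optimal symmetric solver such as PCG or multigrid usable as the inner loop.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])     # nonsymmetric, with positive definite symmetric part
b = np.array([1.0, 1.0])
S = (A + A.T) / 2              # symmetrized operator used by the inner solver
delta = 0.5                    # damping parameter of the Zarantonello iteration

x = np.zeros(2)
for _ in range(50):
    # Each update requires only a solve with the symmetric matrix S.
    x = x + delta * np.linalg.solve(S, b - A @ x)

print(np.linalg.norm(b - A @ x))   # residual after the outer iterations
```

For a sufficiently small damping parameter the iteration is a contraction, so an inexact (iterative) inner solve, as in AISFEM, suffices to preserve convergence.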
We propose new linear combinations of compositions of a basic second-order scheme with appropriately chosen coefficients to construct higher-order numerical integrators for differential equations. They can be considered a generalization of extrapolation methods and multi-product expansions. A general analysis is provided, and new methods up to order 8 are built and tested. The new approach is shown to reduce the latency problem when implemented in a parallel environment and leads to schemes that are significantly more efficient than standard extrapolation when the linear combination is delayed by a number of steps.
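The simplest instance of this idea (classic Richardson-type extrapolation, not one of the paper's new order-8 methods) combines compositions of a symmetric second-order scheme $S$ as $(4\,S_{h/2}^2 - S_h)/3$ to cancel the leading error term and gain fourth order. For the harmonic oscillator the schemes are matrices, so the order can be checked directly:

```python
import numpy as np

def S(h):
    # One leapfrog (velocity Verlet) step for x' = v, v' = -x, written as a
    # matrix acting on (x, v): a basic symmetric second-order scheme.
    return np.array([[1 - h**2 / 2, h],
                     [-h * (1 - h**2 / 4), 1 - h**2 / 2]])

def M4(h):
    # Linear combination of compositions: (4 S(h/2)^2 - S(h)) / 3
    # cancels the O(h^3) local error of the symmetric base scheme.
    return (4 * S(h / 2) @ S(h / 2) - S(h)) / 3

def global_error(step, h, T=1.0):
    n = round(T / h)
    exact = np.array([[np.cos(T), np.sin(T)],
                      [-np.sin(T), np.cos(T)]])
    return np.linalg.norm(np.linalg.matrix_power(step(h), n) - exact)

# Halving h should cut the global error by about 2^4 = 16.
e1, e2 = global_error(M4, 0.1), global_error(M4, 0.05)
print(e1 / e2)
```

Note that the two inner compositions $S_{h/2}^2$ and $S_h$ are independent, which is the source of the parallelism exploited by the paper's higher-order combinations.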
We propose a simple multivariate normality test based on the Kac-Bernstein characterization, which can be conducted by applying existing statistical independence tests to sums and differences of data samples. We also perform an empirical investigation, which reveals that, for high-dimensional data, the proposed approach may be more efficient than alternative ones. The accompanying code repository is provided at \url{//shorturl.at/rtuy5}.
In this work we extend the shifted Laplacian approach to the elastic Helmholtz equation. The shifted Laplacian multigrid method is a common preconditioning approach for the discretized acoustic Helmholtz equation. In some cases, like geophysical seismic imaging, one needs to consider the elastic Helmholtz equation, which is harder to solve: it is three times larger and contains a nullity-rich grad-div term. These properties make the solution of the equation more difficult for multigrid solvers. The key idea in this work is combining the shifted Laplacian with approaches for linear elasticity. We provide local Fourier analysis and numerical evidence that the convergence rate of our method is independent of the Poisson's ratio. Moreover, to better handle the problem size, we complement our multigrid method with the domain decomposition approach, which works in synergy with the local nature of the shifted Laplacian, so we enjoy the advantages of both methods without sacrificing performance. We demonstrate the efficiency of our solver on 2D and 3D problems in heterogeneous media.
We consider the extension of two-variable guarded fragment logic with local Presburger quantifiers. These are quantifiers that can express properties such as ``the number of incoming blue edges plus twice the number of outgoing red edges is at most three times the number of incoming green edges'' and capture various description logics with counting, but without constant symbols. We show that the satisfiability of this logic is EXP-complete. While the lower bound already holds for the standard two-variable guarded fragment logic, the upper bound is established by a novel yet simple deterministic graph-theoretic algorithm.
The strong convergence of the semi-implicit Euler-Maruyama (EM) method for stochastic differential equations with non-linear coefficients driven by a class of L\'evy processes is investigated. The dependence of the convergence order of the numerical scheme on the parameters of this class of L\'evy processes is revealed, in contrast to existing results. In addition, the existence and uniqueness of the numerical invariant measure of the semi-implicit EM method are studied, and its convergence to the underlying invariant measure is also proved. Numerical examples are provided to confirm our theoretical results.
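For orientation, a single drift-implicit (semi-implicit) EM path can be sketched with Brownian noise as a simple stand-in for the L\'evy drivers treated in the paper (all parameter names and values below are ours). For the linear drift $a(x) = -\lambda x$, the implicit step $X_{n+1} = X_n + a(X_{n+1})h + \sigma\,\Delta W_n$ has a closed-form solution:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, sigma, h, n = 2.0, 0.5, 0.01, 1000   # drift rate, noise scale, step, steps

x = np.empty(n + 1)
x[0] = 1.0
for k in range(n):
    dW = np.sqrt(h) * rng.standard_normal()
    # Implicit in the drift, explicit in the noise:
    # x[k+1] = x[k] - lam * x[k+1] * h + sigma * dW
    x[k + 1] = (x[k] + sigma * dW) / (1 + lam * h)

print(x[-1])
```

Treating the drift implicitly is what keeps the scheme stable for stiff or super-linearly growing coefficients; for a genuinely non-linear drift, each step would require a scalar root-find instead of the closed-form division.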
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
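Shepard's law states that perceived similarity decays exponentially with distance in psychological similarity space. A purely illustrative sketch (the feature vectors, the decay constant, and the interpretation below are our own, not the paper's fitted model) compares the saliency map a participant would give with the AI's saliency map:

```python
import numpy as np

def shepard_similarity(a, b, c=1.0):
    # Shepard's universal law of generalization: similarity decays
    # exponentially with distance in the similarity space.
    return float(np.exp(-c * np.linalg.norm(np.asarray(a) - np.asarray(b))))

# Hypothetical saliency weights over three image regions: the explanation
# the participant would give vs. the explanation the AI gave.
own_map = np.array([0.8, 0.1, 0.1])
ai_map  = np.array([0.7, 0.2, 0.1])

# High similarity predicts the participant expects the AI to decide as
# they would; low similarity predicts the participant infers a different
# decision process.
print(shepard_similarity(own_map, ai_map))
```

The decay constant $c$ would be fit to behavioral data in an actual study; here it is fixed only to make the sketch runnable.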