Assouad-Nagata dimension addresses both the large- and small-scale behavior of metric spaces and is a refinement of Gromov's asymptotic dimension. A metric space $M$ is a minor-closed metric if there exists an (edge-)weighted graph $G$ in a fixed minor-closed family such that the underlying space of $M$ is the vertex set of $G$ and the metric of $M$ is the distance function in $G$. Minor-closed metrics arise naturally when removing redundant edges of the underlying graphs via edge deletions and edge contractions. In this paper, we determine the Assouad-Nagata dimension of every minor-closed metric. This result is a common generalization of known results for the asymptotic dimension of $H$-minor-free unweighted graphs and for the Assouad-Nagata dimension of some 2-dimensional continuous spaces (e.g.\ complete Riemannian surfaces with finite Euler genus), together with their corollaries.
In this article, we focus on the error committed when computing the matrix logarithm using Gauss--Legendre quadrature rules. These formulas can be interpreted as Pad\'e approximants of a suitable Gauss hypergeometric function. Empirical observation tells us that the convergence of these quadratures becomes slow when the matrix is not close to the identity matrix, suggesting the use of an inverse scaling and squaring approach to obtain a matrix with this property. The novelty of this work is the introduction of error estimates that can be used to select a priori both the number of Legendre points needed to obtain a given accuracy and the number of inverse scaling and squaring steps to be performed. We include some numerical experiments to show the reliability of the estimates introduced.
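As a concrete illustration, the following minimal sketch (assuming NumPy and SciPy are available) implements the quadrature via the integral representation $\log(A)=\int_0^1 (A-I)\,[t(A-I)+I]^{-1}\,dt$, preceded by $k$ inverse scaling steps (matrix square roots) and followed by multiplication by $2^k$. The node count $m$ and square-root count $k$ are fixed by hand here; the paper's contribution is precisely the a priori error estimates that would select them.

```python
# Minimal sketch: matrix logarithm via Gauss--Legendre quadrature
# combined with inverse scaling and squaring, using the integral
# representation log(A) = \int_0^1 (A - I) [t(A - I) + I]^{-1} dt.
import numpy as np
from scipy.linalg import sqrtm, logm, solve
from numpy.polynomial.legendre import leggauss

def logm_gauss_legendre(A, m=8, k=4):
    """Approximate log(A) with m Legendre nodes after k square roots."""
    n = A.shape[0]
    I = np.eye(n)
    # Inverse scaling: take square roots until A^(1/2^k) is near I.
    B = A.copy()
    for _ in range(k):
        B = sqrtm(B)
    # Gauss--Legendre nodes/weights on [-1, 1], mapped to [0, 1].
    x, w = leggauss(m)
    t, w = (x + 1) / 2, w / 2
    E = B - I
    L = sum(wi * solve(ti * E + I, E) for ti, wi in zip(t, w))
    # Squaring phase: log(A) = 2^k log(A^(1/2^k)).
    return 2**k * L

rng = np.random.default_rng(0)
A = np.eye(5) + 0.25 * rng.standard_normal((5, 5))  # moderately away from I
err = np.linalg.norm(logm_gauss_legendre(A) - logm(A))
print(f"error vs scipy.linalg.logm: {err:.2e}")
```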
Information geometry of Markov chains has been studied by Nagaoka, Takeuchi and others using the dually flat structure of the space of transition probabilities. In this context, a submanifold of the space is called a Markov model. In the present paper, we seek a theory of extended spaces of Markov models in the following sense. As a prototype, for the space of probability distributions on a finite set, Amari introduced the space of positive measures simply by removing the constraint that the total mass equals $1$, and investigated this extended space by identifying suitable Bregman and $F$-divergences. Following this line, we introduce an extension of the space of transition probabilities equipped with a suitable $F$-divergence for a given Markov chain. We regard it as the space of positive transition measures on a Markov chain and study the dually flat structure on this space. This provides new insight into the geometry of Markov chains. We also discuss relations to other existing work.
Nonignorable missing outcomes are common in real-world datasets and often require strong parametric assumptions to achieve identification. These assumptions can be implausible or untestable, and so we may forgo them in favour of partially identified models that narrow the set of a priori possible values to an identification region. Here we propose a new nonparametric Bayes method that allows multiple clinically relevant restrictions of the parameter space to be incorporated simultaneously. We focus on two common restrictions, instrumental variables and the direction of missing-data bias, and investigate how these restrictions narrow the identification region for parameters of interest. Additionally, we propose a rejection sampling algorithm that allows us to quantify the evidence for these assumptions in the data. We compare our method to a standard Heckman selection model both in simulation studies and in an applied problem examining the effectiveness of cash transfers for people experiencing homelessness.
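To convey the flavor of the rejection-sampling idea, the toy sketch below draws from a hypothetical unrestricted posterior for a missing-data bias parameter, discards draws violating a sign restriction, and reads the acceptance rate as a rough measure of the data's support for the assumption. The Gaussian "posterior" and the parameter `delta` are illustrative stand-ins, not the model used in the paper.

```python
# Toy sketch of the rejection-sampling idea: draw from an
# unrestricted posterior, keep only draws satisfying a restriction,
# and read the acceptance rate as evidence for that restriction.
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are posterior draws of the missing-data bias delta
# (outcome mean among nonrespondents minus respondents); the
# Gaussian form is a hypothetical stand-in.
delta_draws = rng.normal(loc=0.4, scale=0.3, size=100_000)

# Restriction: bias is nonnegative (nonrespondents fare no better).
accepted = delta_draws[delta_draws >= 0]

acceptance_rate = accepted.size / delta_draws.size
print(f"acceptance rate (support for restriction): {acceptance_rate:.3f}")
print(f"restricted posterior mean of delta: {accepted.mean():.3f}")
```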
We provide a new characterization of both belief update and belief revision in terms of a Kripke-Lewis semantics. We consider frames consisting of a set of states, a Kripke belief relation and a Lewis selection function. Adding a valuation to a frame yields a model. Given a model and a state, we identify the initial belief set K with the set of formulas believed at that state, and we identify either the updated or the revised belief set, prompted by the input represented by a formula A, with the set of formulas that are the consequents of conditionals that (1) are believed at that state and (2) have A as antecedent. We show that this class of models characterizes both the Katsuno-Mendelzon (KM) belief update functions and the AGM belief revision functions, in the following sense: (1) each model gives rise to a partial belief function that can be completed into a full KM/AGM update/revision function, and (2) for every KM/AGM update/revision function there is a model whose associated belief function coincides with it. The difference between update and revision can be reduced to two semantic properties that appear in a stronger form in revision than in update, thus confirming the finding by Peppas et al. (1996) that, "for a fixed theory K, revising K is much the same as updating K".
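The semantics can be made concrete on a toy finite model. The sketch below (a hypothetical three-state frame, valuation and selection function, not one from the paper) computes the initial belief set K and the belief set prompted by an input formula, following the definitions above: a conditional A > phi holds at a state s when f(s, [[A]]) is contained in [[phi]].

```python
# Toy illustration of the Kripke-Lewis semantics: K is the set of
# formulas true at every state the agent considers possible, and
# K*A collects the consequents phi of believed conditionals A > phi.
STATES = {"s1", "s2", "s3"}
ATOMS = {"p", "q"}

# Valuation: which atoms hold at which states (hypothetical).
VAL = {"s1": {"p", "q"}, "s2": {"p"}, "s3": {"q"}}

# Kripke belief relation: states considered possible at s1.
BELIEF = {"s1": {"s1", "s2"}}

# Lewis selection function f(state, proposition) -> set of states,
# here the single "closest" state in the proposition (hypothetical).
def f(state, prop):
    order = {"s1": ["s1", "s2", "s3"],
             "s2": ["s2", "s1", "s3"],
             "s3": ["s3", "s1", "s2"]}
    return {next(s for s in order[state] if s in prop)}

def truth_set(atom):
    return {s for s in STATES if atom in VAL[s]}

def belief_set(state):
    """K: atoms true at every state the agent considers possible."""
    return {a for a in ATOMS if BELIEF[state] <= truth_set(a)}

def conditional_belief_set(state, antecedent):
    """K*A: atoms phi such that A > phi is believed at `state`."""
    A = truth_set(antecedent)
    return {phi for phi in ATOMS
            if all(f(s, A) <= truth_set(phi) for s in BELIEF[state])}

print("K     =", belief_set("s1"))                  # {'p'}
print("K * q =", conditional_belief_set("s1", "q")) # {'p', 'q'}
```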
We show convergence rates for a sparse grid approximation of the distribution of solutions of the stochastic Landau-Lifshitz-Gilbert equation. Beyond being a frequently studied equation in engineering and physics, the stochastic Landau-Lifshitz-Gilbert equation poses many interesting challenges that do not appear simultaneously in previous works on uncertainty quantification: The equation is strongly nonlinear, time-dependent, and has a non-convex side constraint. Moreover, the parametrization of the stochastic noise features countably many unbounded parameters and low regularity compared to other elliptic and parabolic problems studied in uncertainty quantification. We use a novel technique to establish uniform holomorphic regularity of the parameter-to-solution map based on a Gronwall-type estimate combined with previously known methods that use the implicit function theorem. We demonstrate numerically the feasibility of the stochastic collocation method and show a clear advantage of a multi-level stochastic collocation scheme for the stochastic Landau-Lifshitz-Gilbert equation.
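For readers unfamiliar with stochastic collocation, the following one-parameter toy (a scalar linear ODE, a hypothetical stand-in for the far more involved LLG problem) shows the basic mechanism: the parameter-to-solution map is evaluated at Gauss--Legendre collocation nodes and combined with the quadrature weights to approximate an expectation. The analytic dependence on the parameter is what makes such schemes converge quickly, which is why holomorphic regularity is the key property established in the paper.

```python
# Minimal one-parameter stochastic collocation sketch on a scalar
# toy ODE u'(t) = -(1 + 0.5*y) u(t), u(0) = 1, with y ~ U(-1, 1).
# This is a hypothetical stand-in for the stochastic LLG problem,
# which is vector-valued, nonlinear and sphere-constrained.
import numpy as np
from numpy.polynomial.legendre import leggauss

def u_at_T(y, T=1.0):
    """Parameter-to-solution map: exact solution at time T."""
    return np.exp(-(1.0 + 0.5 * y) * T)

# Evaluate the solution map at Gauss-Legendre collocation nodes and
# combine with the quadrature weights to approximate E[u(T)].
nodes, weights = leggauss(5)
mean_colloc = 0.5 * np.dot(weights, u_at_T(nodes))

# Closed-form reference: E[u(T)] = exp(-1/2) - exp(-3/2) for T = 1.
exact = np.exp(-0.5) - np.exp(-1.5)
print(f"collocation mean: {mean_colloc:.10f}")
print(f"abs. error with 5 nodes: {abs(mean_colloc - exact):.2e}")
```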
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising because it can allow histopathological analysis without an invasive biopsy procedure. Here, we tested a method for generating microscopic histological images from MRI scans of the corpus callosum using a conditional generative adversarial network (cGAN) architecture. To our knowledge, this is the first multimodal translation of brain MRI to a volumetric histological representation of the same sample. The technique was assessed by training paired image translation models on sets of images from MRI scans and microscopy. The use of cGANs for this purpose is challenging because microscopy images are large and sample availability is typically low. The current work demonstrates that the framework reliably synthesizes histology images from MRI scans of the corpus callosum, emphasizing the network's ability to train on high-resolution histology images paired with relatively lower-resolution MRI scans. While the ultimate goal is to avoid biopsies, the proposed tool can already be used for educational purposes.
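As a sketch of the general setup (not the paper's actual architecture), a minimal pix2pix-style conditional GAN training step in PyTorch looks as follows; the tiny networks and the random tensors standing in for MRI/histology pairs are illustrative only.

```python
# Minimal pix2pix-style cGAN sketch for paired image translation
# (MRI slice -> histology slice). Tiny conv nets and random tensors
# are illustrative stand-ins for the paper's architecture and data.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, mri):
        return self.net(mri)

class Discriminator(nn.Module):
    """PatchGAN-style: scores (MRI, histology) pairs patch-wise."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, mri, hist):
        return self.net(torch.cat([mri, hist], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One training step on a fake paired batch (4 single-channel 64x64).
mri  = torch.randn(4, 1, 64, 64)
hist = torch.randn(4, 1, 64, 64).clamp(-1, 1)

# Discriminator: real pairs -> 1, generated pairs -> 0.
real_logits = D(mri, hist)
fake_logits = D(mri, G(mri).detach())
d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator: fool D, plus an L1 reconstruction term (as in pix2pix).
fake = G(mri)
gen_logits = D(mri, fake)
g_loss = bce(gen_logits, torch.ones_like(gen_logits)) + \
         100.0 * nn.functional.l1_loss(fake, hist)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(f"d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```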
We consider a new splitting based on the Sherman-Morrison-Woodbury formula, which is particularly effective with iterative methods for the numerical solution of large linear systems. These systems involve matrices that are perturbations of circulant or block-circulant matrices, which commonly arise in the discretization of differential equations by finite element or finite difference methods. We prove the convergence of the new iteration without making any assumptions regarding the symmetry or diagonal dominance of the matrix. To illustrate the efficacy of the new iteration, we present various applications, including extensions to block matrices that arise in certain saddle point problems as well as in two-dimensional finite difference discretizations. The new method exhibits fast convergence in all of the test cases we used. It has minimal storage requirements, is straightforward to implement, and is compatible with nearly circulant matrices via the Fast Fourier Transform. For these reasons it can be a valuable tool for the solution of various finite element and finite difference discretizations of differential equations.
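The central building block can be sketched as follows: for a matrix $A = C + UV^T$ that is a low-rank perturbation of a circulant matrix $C$, the Sherman-Morrison-Woodbury formula reduces the solve to a few circulant solves, each performed in $O(n\log n)$ with the FFT. The random matrices below are illustrative, and the paper's actual splitting iteration is not reproduced here.

```python
# Sketch of the core building block: solving (C + U V^T) x = b,
# where C is circulant, via the Sherman-Morrison-Woodbury formula,
# with every application of C^{-1} done in O(n log n) by FFT.
import numpy as np
from scipy.linalg import circulant

def circ_solve(c, y):
    """Solve C x = y for circulant C with first column c, via FFT."""
    return np.fft.ifft(np.fft.fft(y, axis=0) /
                       np.fft.fft(c)[:, None], axis=0).real

def smw_solve(c, U, V, b):
    """Solve (C + U V^T) x = b by Sherman-Morrison-Woodbury."""
    Cinv_b = circ_solve(c, b[:, None])[:, 0]
    Cinv_U = circ_solve(c, U)
    k = U.shape[1]
    small = np.eye(k) + V.T @ Cinv_U        # k x k capacitance matrix
    return Cinv_b - Cinv_U @ np.linalg.solve(small, V.T @ Cinv_b)

rng = np.random.default_rng(2)
n, k = 256, 3
c = rng.standard_normal(n); c[0] += n       # well-conditioned circulant
U = rng.standard_normal((n, k)); V = rng.standard_normal((n, k))
b = rng.standard_normal(n)

x = smw_solve(c, U, V, b)
A = circulant(c) + U @ V.T                  # dense check matrix
print("residual:", np.linalg.norm(A @ x - b))
```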
Quantum information scrambling is a unitary process that destroys local correlations and spreads information throughout the system, effectively hiding it in nonlocal degrees of freedom. In principle, unscrambling this information is possible with perfect knowledge of the unitary dynamics [arXiv:1710.03363]. However, this work demonstrates that, even without prior knowledge of the internal dynamics, information can be efficiently decoded from an unknown scrambler by monitoring the outgoing information of a local subsystem. Surprisingly, we show that scramblers with unknown internal dynamics, which are rapidly mixing but not fully chaotic, can be decoded using Clifford decoders. The essential properties of a scrambling unitary can be efficiently recovered, even if the process is exponentially complex. Specifically, we establish that a unitary operator composed of $t$ non-Clifford gates admits a Clifford decoder for $t\le n$.
Microring resonators (MRRs) are promising devices for time-delay photonic reservoir computing, but the impact of the different physical effects taking place in MRRs on reservoir computing performance is yet to be fully understood. We numerically analyze the impact of linear losses, as well as of the relaxation times of thermo-optic and free-carrier effects, on the prediction error for the time-series task NARMA-10. We demonstrate the existence of three regions, defined by the input power and the frequency detuning between the optical source and the microring resonance, that reveal the cavity's transition from linear to nonlinear regimes. One of these regions offers very low time-series prediction error at relatively low input power and with a small number of nodes, while the other regions either lack nonlinearity or become unstable. This study provides insight into the design of MRRs and the optimization of their physical properties for improving the prediction performance of time-delay reservoir computing.
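As a point of reference for the benchmark, the sketch below generates the NARMA-10 task and solves it with a generic time-delay reservoir (tanh virtual nodes, a random input mask, and a ridge-regression readout); this generic nonlinearity is a hypothetical stand-in for the microring physics, and the gains and node count are illustrative choices.

```python
# NARMA-10 benchmark solved with a generic time-delay reservoir:
# tanh virtual nodes, random input mask, ridge readout. The mask,
# feedback gain `gamma`, input scaling `eta` and node count are
# illustrative stand-ins for the MRR physics studied in the paper.
import numpy as np

rng = np.random.default_rng(3)
T, N = 4000, 50                       # time steps, virtual nodes

# NARMA-10 input/target sequences.
u = rng.uniform(0.0, 0.5, T)
y = np.zeros(T)
for t in range(9, T - 1):
    y[t + 1] = (0.3 * y[t] + 0.05 * y[t] * y[t - 9:t + 1].sum()
                + 1.5 * u[t - 9] * u[t] + 0.1)

# Time-delay reservoir: each virtual node sees the masked input
# plus feedback from its state one delay loop earlier.
mask = rng.uniform(-1, 1, N)
eta, gamma = 0.4, 0.8
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = np.tanh(eta * mask * u[t] + gamma * X[t - 1])

# Ridge-regression readout, trained on the first 3000 steps.
split, lam = 3000, 1e-6
Xtr, ytr = X[10:split], y[10:split]
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(N), Xtr.T @ ytr)
pred = X[split:] @ W
nmse = np.mean((pred - y[split:])**2) / np.var(y[split:])
print(f"NARMA-10 test NMSE: {nmse:.3f}")
```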
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which uses convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as a preliminary attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to one another. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
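The generic supervised deep-hashing setup described above can be sketched in a few lines of PyTorch: a small CNN maps images to continuous codes, a pairwise loss pulls together the codes of same-label images and pushes apart different-label ones, and a quantization penalty drives outputs toward $\pm 1$. The "shadow" mechanism of SRH is specific to the paper and is not reproduced here; network, margin and weights are illustrative choices.

```python
# Generic deep supervised hashing sketch: a small CNN emits K-bit
# continuous codes; a pairwise loss on squared code distances pulls
# similar images together and pushes dissimilar ones apart, while a
# quantization term drives outputs toward {-1, +1}.
import torch
import torch.nn as nn

K = 16  # code length in bits

cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, K), nn.Tanh(),
)

def hashing_loss(codes, labels, margin=2.0, q_weight=0.1):
    """Pairwise loss on squared code distances + quantization term."""
    diff = codes[:, None, :] - codes[None, :, :]
    d2 = diff.pow(2).sum(-1)                           # squared L2
    same = (labels[:, None] == labels[None, :]).float()
    pull = same * d2                                   # similar: close
    push = (1 - same) * (margin**2 - d2).clamp(min=0)  # dissimilar: apart
    quant = (codes.abs() - 1).pow(2).mean()            # push toward +/-1
    return (pull + push).mean() + q_weight * quant

# One optimization step on a fake CIFAR-10-like batch (32x32 RGB).
imgs = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss = hashing_loss(cnn(imgs), labels)
opt.zero_grad(); loss.backward(); opt.step()

binary_codes = torch.sign(cnn(imgs)).detach()          # retrieval codes
print("loss:", loss.item(), "codes shape:", tuple(binary_codes.shape))
```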