Assouad-Nagata dimension addresses both large- and small-scale behaviors of metric spaces and is a refinement of Gromov's asymptotic dimension. A metric space $M$ is a minor-closed metric if there exists an edge-weighted graph $G$ in a fixed minor-closed family such that the underlying set of $M$ is the vertex set of $G$, and the metric of $M$ is the distance function of $G$. Minor-closed metrics naturally arise when redundant edges of the underlying graphs are removed via edge-deletions and edge-contractions. In this paper, we determine the Assouad-Nagata dimension of every minor-closed metric. Our result is a common generalization of known results for the asymptotic dimension of $H$-minor-free unweighted graphs and for the Assouad-Nagata dimension of some 2-dimensional continuous spaces (e.g.\ complete Riemannian surfaces of finite Euler genus), as well as of their corollaries.
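For context, recall the standard definition underlying this notion (paraphrased here; the abstract itself does not restate it): a metric space $X$ satisfies $\dim_{AN}(X)\le n$ if there is a constant $c>0$ such that for every scale $r>0$ there is a cover $\mathcal{U}$ of $X$ with
\[
\operatorname{mesh}(\mathcal{U})\le c\,r \qquad\text{and}\qquad \text{every ball of radius } r \text{ meets at most } n+1 \text{ members of } \mathcal{U}.
\]
Asymptotic dimension relaxes the linear bound $c\,r$ to an arbitrary function $D(r)$, which is why Assouad-Nagata dimension refines it.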
We give an adequate, concrete categorical model for Lambda-S, a typed version of a linear-algebraic lambda calculus extended with measurements. Lambda-S is an extension of the first-order lambda calculus unifying two approaches to non-cloning in quantum lambda calculi: forbidding the duplication of variables, and treating all lambda terms as algebraic linear functions. The type system of Lambda-S has a superposition constructor $S$ such that a type $A$ is considered as the basis of a vector space while $SA$ is its span. Our model interprets $S$ as the composition of two functors in an adjunction between the category of sets and the category of vector spaces over $\mathbb{C}$. The right adjoint is a forgetful functor $U$, which is hidden in the language and plays a central role in the computational reasoning.
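Spelled out (a standard reading consistent with the abstract, with notation mine), the adjunction is the free/forgetful one,
\[
F \colon \mathbf{Set} \rightleftarrows \mathbf{Vec}_{\mathbb{C}} \colon U, \qquad F \dashv U, \qquad S \;\cong\; U \circ F,
\]
where $F$ sends a set to the free $\mathbb{C}$-vector space on it and $U$ forgets the linear structure. The induced monad $S = UF$ maps (the interpretation of) a type $A$ to formal $\mathbb{C}$-linear combinations of its elements, matching the reading of $SA$ as the span of $A$.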
A comprehensive mathematical model of the multiphysics flow of blood and cerebrospinal fluid (CSF) in the brain can be expressed as the coupling of a poromechanics system and Stokes' equations: the former describes fluid filtration through the cerebral tissue and the tissue's elastic response, while the latter models the flow of the CSF in the brain ventricles. This model describes the functioning of the brain's waste clearance mechanism, which has recently been discovered to play an essential role in the progression of neurodegenerative diseases. To model the interactions between different scales in the porous medium, we propose a physically consistent coupling between Multi-compartment Poroelasticity (MPE) equations and Stokes' equations. In this work, we introduce a numerical scheme for the discretization of the coupled MPE-Stokes system, employing a high-order discontinuous Galerkin method on polytopal grids to efficiently account for the geometric complexity of the domain. We analyze the stability and convergence of the semidiscrete-in-space formulation, prove a priori error estimates, and present a temporal discretization based on a combination of Newmark's $\beta$-method for the elastic wave equation and the $\theta$-method for the other equations of the model. Numerical simulations carried out on test cases with manufactured solutions validate the theoretical error estimates. We also present numerical results on a two-dimensional slice of a patient-specific brain geometry reconstructed from diagnostic images, to test in practice the advantages of the proposed approach.
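For reference, the two time-stepping building blocks are standard (stated here in generic form, not in the paper's notation). The $\theta$-method for $\dot{y}=f(t,y)$ reads
\[
\frac{y^{n+1}-y^{n}}{\Delta t} \;=\; \theta\, f\bigl(t^{n+1},y^{n+1}\bigr) + (1-\theta)\, f\bigl(t^{n},y^{n}\bigr), \qquad \theta\in[0,1],
\]
recovering forward Euler ($\theta=0$), Crank-Nicolson ($\theta=\tfrac12$), and backward Euler ($\theta=1$); Newmark's method advances the second-order elastic equation by updating displacement and velocity with two parameters $\beta$ and $\gamma$ that weight the old and new accelerations.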
In situations where both extreme and non-extreme data are of interest, modelling the whole data set accurately is important. In a univariate framework, modelling the bulk and tail of a distribution has been extensively studied. However, when more than one variable is of interest, models that aim specifically at capturing both regions correctly are scarce in the literature. We propose a dependence model that blends two copulas with different characteristics over the whole range of the data support. One copula is tailored to the bulk and the other to the tail, with a dynamic weighting function employed to transition smoothly between them. Tail dependence properties are investigated numerically, and simulation is used to confirm that the blended model is sufficiently flexible to capture a wide variety of structures. The model is applied to study the dependence between temperature and ozone concentration at two sites in the UK and is compared with a single-copula fit. The proposed model provides a better, more flexible fit to the data and is also capable of capturing complex dependence structures.
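The abstract does not give the construction explicitly; one illustrative form of such a blend (notation mine) is
\[
C(u,v) \;=\; \pi(u,v)\, C_{\mathrm{bulk}}(u,v) \;+\; \bigl(1-\pi(u,v)\bigr)\, C_{\mathrm{tail}}(u,v),
\]
where $\pi$ is a smooth weight close to $1$ in the body of the distribution and close to $0$ in the joint tail, subject to conditions on $\pi$ ensuring the blend remains a valid copula.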
Radiomics is an emerging area of medical imaging data analysis, particularly for cancer. It involves the conversion of digital medical images into mineable ultra-high dimensional data. Machine learning algorithms are widely used in radiomics data analysis to develop powerful decision-support models that improve precision in diagnosis, assessment of prognosis, and prediction of therapy response. However, machine learning algorithms for causal inference have not previously been employed in radiomics analysis. In this paper, we evaluate the value of machine learning algorithms for causal inference in radiomics. We select three recent competitive variable selection algorithms for causal inference: outcome-adaptive lasso (OAL), generalized outcome-adaptive lasso (GOAL), and causal ball screening (CBS). We use a sure independence screening (SIS) procedure to propose extensions of GOAL and OAL to ultra-high dimensional data, SIS + GOAL and SIS + OAL. We compare SIS + GOAL, SIS + OAL, and CBS using a simulation study and two radiomics cancer datasets, osteosarcoma and gliosarcoma. Both the radiomics studies and the simulation study identify SIS + GOAL as the optimal variable selection algorithm.
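To make the two-stage pattern concrete, here is a minimal sketch of sure independence screening followed by an adaptive-lasso-style weighted fit. It illustrates only the generic idea: the actual OAL/GOAL procedures weight the penalty of a propensity-score model by outcome associations, and all function names and parameter choices below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: SIS screening + adaptive-lasso-style selection (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def sis_screen(X, y, d):
    """Stage 1: keep the d covariates with largest |marginal correlation| with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(corr)[-d:]

def adaptive_lasso(X, y, gamma=1.0, alpha=0.1):
    """Stage 2: lasso with penalty weights taken from an initial ridge fit."""
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)
    fit = Lasso(alpha=alpha).fit(X / w, y)  # column rescaling encodes the weights
    return fit.coef_ / w

rng = np.random.default_rng(0)
n, p = 200, 5000                                # ultra-high dimensional: p >> n
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(n)

keep = sis_screen(X, y, d=int(n / np.log(n)))   # screening step
beta = adaptive_lasso(X[:, keep], y)            # weighted selection step
print("selected covariates:", keep[np.abs(beta) > 1e-3])
```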
We explore a linear inhomogeneous elasticity equation with random Lam\'e parameters. The latter are parameterized by a countably infinite number of terms in separated expansions. The main aim of this work is to estimate expected values (considered as infinite-dimensional integrals over the parametric space corresponding to the random coefficients) of linear functionals acting on the solution of the elasticity equation. To achieve this, the expansions of the random parameters are truncated, a high-order quasi-Monte Carlo (QMC) rule is combined with a sparse grid approach to approximate the high-dimensional integral, and a Galerkin finite element method (FEM) is used to approximate the solution of the elasticity equation over the physical domain. The errors from (1) truncating the infinite expansion, (2) the Galerkin FEM, and (3) the QMC sparse grid quadrature rule are all estimated. For this purpose, we show certain required regularity properties of the continuous solution with respect to both the parametric and physical variables. To obtain our theoretical regularity and convergence results, some reasonable assumptions on the expansions of the random coefficients are imposed. Finally, some numerical results are presented.
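Schematically (notation mine, following the standard affine-parametric setting), the random parameter and the quantity of interest take the form
\[
\lambda(\boldsymbol{x},\boldsymbol{y}) \;=\; \lambda_0(\boldsymbol{x}) + \sum_{j\ge 1} y_j\,\lambda_j(\boldsymbol{x}), \qquad
\mathbb{E}[F] \;=\; \int_{[-\frac12,\frac12]^{\mathbb{N}}} F(\boldsymbol{y})\,\mathrm{d}\boldsymbol{y} \;\approx\; \frac{1}{N}\sum_{i=0}^{N-1} F_s\bigl(\boldsymbol{y}^{(i)}\bigr),
\]
where the expansion is truncated after $s$ terms and $\{\boldsymbol{y}^{(i)}\}$ are QMC points; the three error sources listed above correspond to the truncation in $s$, the FEM mesh size, and the number of quadrature points $N$.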
This work is concerned with the uniform accuracy of implicit-explicit backward differentiation formulas (IMEX-BDF) for general linear hyperbolic relaxation systems satisfying the structural stability condition proposed previously by the third author. We prove the uniform stability and accuracy of a class of IMEX-BDF schemes discretized spatially by a Fourier spectral method. The result reveals that the accuracy of the fully discretized schemes is independent of the relaxation time in all regimes. This is verified by numerical experiments on several applications to traffic flow, rarefied gas dynamics, and kinetic theory.
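In schematic form (the first-order member of the family, with notation mine), for a relaxation system $\partial_t u + A\,\partial_x u = \tfrac{1}{\varepsilon} Q u$ with relaxation time $\varepsilon$, the IMEX idea treats the stiff relaxation term implicitly and the convection term explicitly:
\[
\frac{u^{n+1}-u^{n}}{\Delta t} + A\,\partial_x u^{n} \;=\; \frac{1}{\varepsilon}\, Q u^{n+1};
\]
higher-order IMEX-BDF members replace the left-hand side by a BDF difference and the explicit term by an extrapolation, and uniform accuracy means error bounds independent of $\varepsilon$.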
We couple the L1 discretization of the Caputo fractional derivative in time with a Galerkin scheme in space to devise a linear numerical method for the semilinear subdiffusion equation. Two important features of our setting are nonsmooth initial data and a time-dependent diffusion coefficient. We prove the stability and convergence of the method under weak assumptions on the regularity of the diffusivity. We derive error estimates that are optimal pointwise in space and global in time, and verify them with several numerical experiments.
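For reference, the L1 discretization of the Caputo derivative of order $\alpha\in(0,1)$ on a uniform grid $t_n = n\tau$ is the standard approximation
\[
\partial_t^{\alpha} u(t_n) \;\approx\; \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{k=1}^{n} b_{n-k}\,\bigl(u^{k}-u^{k-1}\bigr), \qquad b_j = (j+1)^{1-\alpha} - j^{1-\alpha},
\]
obtained by taking $u$ piecewise linear in time inside the fractional integral.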
We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that, at each iteration, sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, possibly with a local or boundary control, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits excellent performance and robustness with respect to the parameters of interest.
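Schematically (notation mine), for a sample-based linear-quadratic problem the discrete optimality system couples $N$ state/adjoint pairs through a shared control:
\[
A\,u_i = f_i + B q, \qquad A^{\top} p_i = M\,(u_d - u_i), \qquad \beta\,M_q\, q = \frac{1}{N}\sum_{i=1}^{N} B^{\top} p_i, \qquad i=1,\dots,N.
\]
A collective smoother restricts this system to the unknowns attached to a single mesh node, yielding at each node a small saddle-point problem whose size grows with $N$; this is the reduced system solved with $O(N)$ complexity.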
The emergence of complex structures in systems governed by a simple set of rules is among the most fascinating aspects of Nature. A particularly powerful and versatile model for investigating this phenomenon is provided by cellular automata, with the Game of Life being one of the most prominent examples. However, this simplified model can be too limiting as a tool for modelling real systems. To address this, we introduce and study an extended version of the Game of Life, with a dynamical process governing the rule selection at each step. We show that the introduced modification significantly alters the behaviour of the game. We also demonstrate that the choice of the synchronization policy can be used to control the trade-off between stability and growth in the system.
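A minimal sketch of dynamical rule selection follows; the specific rule pair, the Markov switching process, and all parameters are illustrative assumptions, not the paper's construction.

```python
# Sketch: a Game-of-Life-style automaton whose rule is reselected each step
# by a dynamical process (here, a two-state Markov chain over rule sets).
import numpy as np
from scipy.signal import convolve2d

RULES = {
    "life":     ({3}, {2, 3}),      # B3/S23: Conway's Game of Life
    "highlife": ({3, 6}, {2, 3}),   # B36/S23: HighLife
}
KERNEL = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])

def step(grid, rule):
    """Apply one synchronous update of the given birth/survival rule."""
    birth, survive = RULES[rule]
    n = convolve2d(grid, KERNEL, mode="same", boundary="wrap")  # neighbour counts
    born = (grid == 0) & np.isin(n, list(birth))
    kept = (grid == 1) & np.isin(n, list(survive))
    return (born | kept).astype(int)

rng = np.random.default_rng(1)
grid = (rng.random((64, 64)) < 0.3).astype(int)
rule, p_switch = "life", 0.1        # Markov switching between the two rules
for t in range(100):
    if rng.random() < p_switch:
        rule = "highlife" if rule == "life" else "life"
    grid = step(grid, rule)
print("alive cells after 100 steps:", grid.sum())
```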
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as a first attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close to one another. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
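The "shadow" mechanism is the author's own contribution and is not specified in the abstract; the sketch below shows only the generic ingredient such methods share, a pairwise similarity-preserving loss with a quantization term pushing outputs toward binary codes (all names and parameters are mine).

```python
# Sketch: a generic pairwise similarity-preserving deep-hashing loss
# (illustrative; not the paper's SRH "shadow" mechanism).
import torch

def pairwise_hash_loss(codes, labels, margin=4.0, quant_weight=0.1):
    """codes: (B, K) real-valued CNN outputs; labels: (B,) class ids."""
    sim = (labels[:, None] == labels[None, :]).float()           # 1 if same class
    d2 = (codes[:, None, :] - codes[None, :, :]).pow(2).sum(-1)  # squared L2
    pull = sim * d2                                              # similar -> close
    push = (1.0 - sim) * torch.clamp(margin - d2, min=0.0)       # dissimilar -> apart
    quant = (codes.abs() - 1.0).pow(2).mean()                    # drive outputs to +-1
    return (pull + push).mean() + quant_weight * quant

codes = torch.randn(8, 16, requires_grad=True)  # stand-in for CNN outputs
labels = torch.randint(0, 3, (8,))
loss = pairwise_hash_loss(codes, labels)
loss.backward()                                 # gradients flow back to the network
print(float(loss))
```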