Many combinatorial optimization problems can be formulated as the search for a subgraph that satisfies certain properties and minimizes the total weight. We assume here that the vertices correspond to points in a metric space and can take any position in given uncertainty sets. Then, the cost function to be minimized is the sum of the distances for the worst positions of the vertices in their uncertainty sets. We propose two types of polynomial-time approximation algorithms. The first one relies on solving a deterministic counterpart of the problem in which the uncertain distances are replaced with maximum pairwise distances. We study in detail the resulting approximation ratio, which depends on the structure of the feasible subgraphs and on whether the metric space is Ptolemaic. The second algorithm is a fully polynomial-time approximation scheme for the special case of $s$-$t$ paths.
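As a toy illustration of the deterministic counterpart used by the first algorithm, the sketch below (our own, with hypothetical uncertainty sets given as finite point samples in the plane) computes the maximum pairwise distance between uncertainty sets; these worst-case distances can then be handed as edge weights to any standard minimum-weight subgraph solver.

```python
# A minimal sketch, assuming uncertainty sets are finite samples in R^2.
import itertools
import math

def max_pairwise_distance(set_u, set_v):
    """Worst-case Euclidean distance between two uncertainty sets."""
    return max(math.dist(p, q) for p in set_u for q in set_v)

# Hypothetical uncertainty sets for three vertices.
U = {
    "a": [(0.0, 0.0), (0.5, 0.1)],
    "b": [(2.0, 0.0), (2.2, 0.4)],
    "c": [(1.0, 1.5)],
}
# Deterministic edge weights for the counterpart problem.
weights = {
    (u, v): max_pairwise_distance(U[u], U[v])
    for u, v in itertools.combinations(U, 2)
}
print(weights)
```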
To solve many problems on graphs, graph traversals are used; the usual variants are depth-first search and breadth-first search. A graph traversal successively visits all vertices of the graph that belong to one connected component. Breadth-first search is the usual choice when constructing efficient algorithms for finding the connected components of a graph. Simple-iteration methods for solving systems of linear equations with modified graph adjacency matrices and with a properly specified right-hand side can be regarded as graph traversal algorithms. These traversal algorithms are, generally speaking, equivalent to neither depth-first search nor breadth-first search. An example of such a traversal algorithm is the one associated with the Gauss-Seidel method. For an arbitrary connected graph, this algorithm requires no more iterations to visit all vertices than breadth-first search does; for many instances of the problem, it requires fewer.
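The contrast with breadth-first search can be made concrete with a small sketch (our reading of the idea, not the paper's implementation): propagating Boolean reachability with in-place, Gauss-Seidel-style updates lets a single sweep reuse values computed earlier in the same sweep, so a whole component may be marked in fewer sweeps than the number of BFS levels.

```python
# A hedged illustration: Gauss-Seidel-style in-place sweeps for reachability.
def gauss_seidel_sweeps(adj, source):
    """Count in-place sweeps until every vertex reachable from source is marked."""
    n = len(adj)
    reached = [False] * n
    reached[source] = True
    sweeps = 0
    while not all(reached):
        sweeps += 1
        changed = False
        for i in range(n):  # updates take effect immediately (Gauss-Seidel)
            if not reached[i] and any(reached[j] for j in adj[i]):
                reached[i] = True
                changed = True
        if not changed:
            break  # remaining vertices are in other components
    return sweeps

# Path graph 0-1-2-3: BFS from 0 needs 3 levels, one in-order sweep marks all.
adj = [[1], [0, 2], [1, 3], [2]]
print(gauss_seidel_sweeps(adj, 0))  # 1
```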
Robust optimisation is a well-established framework for optimising functions in the presence of uncertainty. The inherent goal of this problem is to identify a collection of inputs whose outputs are desirable for the decision maker whilst also being robust to the underlying uncertainties in the problem. In this work, we study the multi-objective case of this problem. We observe that the majority of robust multi-objective algorithms rely on two key operations: robustification and scalarisation. Robustification refers to the strategy used to account for the uncertainty in the problem. Scalarisation refers to the procedure used to encode the relative importance of each objective in a scalar-valued reward. As these operations are not necessarily commutative, the order in which they are performed has an impact on the solutions that are identified and on the final decisions that are made. The purpose of this work is to give a thorough exposition of the effects of these different orderings and, in particular, to highlight when one should opt for one ordering over the other. As part of our analysis, we showcase how many existing risk concepts can be integrated into the specification and solution of a robust multi-objective optimisation problem. Besides this, we also demonstrate how one can define, in a principled way, the notion of a robust Pareto front and a robust performance metric based on our ``robustify and scalarise'' methodology. To illustrate the efficacy of these new ideas, we present two insightful case studies based on real-world data sets.
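To see why robustification and scalarisation need not commute, consider the following toy sketch (illustrative numbers of our own, not from the case studies): with a worst-case robustifier and a weighted-sum scalariser, the two orderings give different values on the same outcomes.

```python
# Toy non-commutativity example: worst case over scenarios (robustify)
# versus weighted sum over objectives (scalarise), assuming minimisation.
import numpy as np

# Hypothetical outcomes for one design: rows are scenarios, columns objectives.
Y = np.array([[1.0, 4.0],
              [3.0, 0.0]])
w = np.array([0.5, 0.5])  # objective weights

# Scalarise first, then take the worst scenario.
scalarise_then_robustify = np.max(Y @ w)          # max over scenarios of weighted sums
# Robustify each objective first, then scalarise.
robustify_then_scalarise = np.max(Y, axis=0) @ w  # weighted sum of per-objective worst cases

print(scalarise_then_robustify)  # 2.5
print(robustify_then_scalarise)  # 3.5 -> the two orderings disagree
```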
Several different types of identification problems have already been studied in the literature, where the objective is to distinguish any two vertices of a graph by their unique neighborhoods in a suitably chosen dominating or total-dominating set of the graph, often referred to as a \emph{code}. To study such problems from a unifying point of view, reformulations of the already studied problems in terms of covering problems in suitably constructed hypergraphs have been provided. Analyzing these hypergraph representations, we introduce a new separation property, called \emph{full-separation}, which has not been considered in the literature so far. We study it in combination with both domination and total-domination, and call the resulting codes \emph{full-separating-dominating codes} (or \emph{FD-codes} for short) and \emph{full-separating-total-dominating codes} (or \emph{FTD-codes} for short), respectively. We address the conditions for the existence of FD- and FTD-codes, bounds on their size, and their relation to codes of the other types. We show that the problems of determining an FD- or an FTD-code of minimum cardinality in a graph are NP-hard. We also show that the cardinalities of minimum FD- and FTD-codes differ by at most one, but that it is NP-complete to decide whether they are equal for a given graph in general. We find the exact values of the minimum cardinalities of FD- and FTD-codes on some familiar graph classes such as paths, cycles, half-graphs and spiders. This helps us compare the two codes with other codes on these graph families, thereby exhibiting extremal cases for several lower bounds.
Recently, constructions of optimal linear codes from simplicial complexes have attracted much attention, and several nice related works have been presented. Let $q$ be a prime power. In this paper, using simplicial complexes of ${\mathbb F}_{q}^m$ with a single maximal element, we construct four families of linear codes over the ring ${\mathbb F}_{q}+u{\mathbb F}_{q}$ ($u^2=0$), which generalizes the results of [IEEE Trans. Inf. Theory 66(6):3657-3663, 2020]. The parameters and Lee weight distributions of these four families of codes are completely determined. Most notably, via the Gray map, we obtain several classes of optimal linear codes over ${\mathbb F}_{q}$, including (near) Griesmer codes and distance-optimal codes.
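For readers unfamiliar with the Gray map on ${\mathbb F}_{q}+u{\mathbb F}_{q}$, the following sketch (specialised to $q=2$; our notation, not the paper's code) applies the standard map $a+ub \mapsto (b, a+b)$ coordinate-wise and computes the Lee weight of a word as the Hamming weight of its Gray image.

```python
# Gray map and Lee weight over F_2 + u*F_2 with u^2 = 0 (q = 2 case).
def gray_map(word):
    """word: list of pairs (a, b) representing a + u*b over F_2 + u*F_2."""
    left = [b for a, b in word]
    right = [(a + b) % 2 for a, b in word]
    return left + right  # image in F_2^{2n}

def lee_weight(word):
    """Lee weight = Hamming weight of the Gray image."""
    return sum(gray_map(word))

# Example: the length-2 word (1 + u, u) has Gray image (1, 1, 0, 1).
print(gray_map([(1, 1), (0, 1)]), lee_weight([(1, 1), (0, 1)]))
```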
The discretization of fluid-poromechanics systems is typically highly demanding in terms of computational effort. This is particularly true for models of multiphysics flows in the brain, due to the geometrical complexity of the cerebral anatomy, which requires a very fine computational mesh for finite element discretization, and to the high number of variables involved. Indeed, problems of this kind can be modeled by a coupled system encompassing the Stokes equations for the cerebrospinal fluid in the brain ventricles and Multiple-network Poro-Elasticity (MPE) equations describing the brain tissue, the interstitial fluid, and the blood vascular networks at different space scales. The present work aims to rigorously derive a posteriori error estimates for the coupled Stokes-MPE problem, as a first step towards the design of adaptive refinement strategies or reduced order models that decrease the computational demand of the problem. Through numerical experiments, we verify the reliability and optimal efficiency of the proposed a posteriori estimator and identify the role of the different solution variables in its composition.
A module of a graph G is a set of vertices that have the same set of neighbours outside it. The modules of a graph form a so-called partitive family and can thereby be represented by a unique tree MD(G), called the modular decomposition tree. Motivated by the central role of modules in numerous algorithmic graph theory questions, the problem of efficiently computing MD(G) has been investigated since the early 1970s. To date, the best algorithms run in linear time but are all rather complicated. By combining previous algorithmic paradigms developed for the problem, we are able to present a simpler linear-time algorithm that relies on very simple data structures, namely slice decomposition and sequences of rooted ordered trees.
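For intuition, the module property can be checked by brute force in quadratic time; the sketch below (ours, not the paper's linear-time algorithm) verifies that every vertex outside the candidate set sees either all of it or none of it.

```python
# Brute-force module test: M is a module of G iff every vertex outside M
# is adjacent to all of M or to none of M.
def is_module(adj, M):
    M = set(M)
    for v in adj:
        if v in M:
            continue
        hits = len(adj[v] & M)
        if 0 < hits < len(M):
            return False  # v distinguishes two vertices of M
    return True

# Example graph: {a, b} is a module (c sees both, d sees neither).
adj = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
print(is_module(adj, {"a", "b"}))  # True
print(is_module(adj, {"a", "c"}))  # False (b sees c but also a; d sees c only)
```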
We study the numerical approximation of the stochastic heat equation with a distributional reaction term $b$. Under a condition on the Besov regularity of $b$, it was proven recently that a strong solution exists and is unique in the pathwise sense, in a class of H\"older continuous processes. For a suitable choice of a sequence $(b^k)_{k\in \mathbb{N}}$ approximating $b$, we prove that the error between the solution $u$ of the SPDE with reaction term $b$ and its tamed Euler finite-difference scheme with mollified drift $b^k$ converges to $0$ in $L^m(\Omega)$ with a rate that depends on the Besov regularity of $b$. In particular, two interesting cases can be considered: first, even when $b$ is only a (finite) measure, a rate of convergence is obtained; on the other hand, when $b$ is a bounded measurable function, the (almost) optimal rate of convergence $(\frac{1}{2}-\varepsilon)$ in space and $(\frac{1}{4}-\varepsilon)$ in time is achieved. Stochastic sewing techniques are used in the proofs, in particular to deduce new regularising properties of the discrete Ornstein-Uhlenbeck process.
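The following is a schematic of a tamed Euler finite-difference scheme (our simplified variant; the paper's discretization, taming, and mollification may differ in detail) for the stochastic heat equation with a mollified drift $b^k$ and space-time white noise on $[0,1]$ with homogeneous Dirichlet boundary.

```python
# Schematic tamed Euler finite-difference scheme for du = (u_xx + b_k(u)) dt + dW.
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 50, 10_000, 1.0          # space intervals, time steps, horizon
dx = 1.0 / N
dt = T / M                          # dt <= dx**2 / 2 for explicit stability
x = np.linspace(0.0, 1.0, N + 1)
u = np.sin(np.pi * x)               # initial condition

def b_k(v, k=10.0):
    """Hypothetical smooth (mollified) approximation of the drift b."""
    return k * np.tanh(v)

for _ in range(M):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    drift = b_k(u)
    drift /= 1.0 + dt * np.abs(drift)   # taming keeps the drift increment bounded
    noise = rng.normal(0.0, np.sqrt(dt / dx), size=u.shape)  # discrete white noise
    u = u + dt * (lap + drift) + noise
    u[0] = u[-1] = 0.0                  # homogeneous Dirichlet boundary
```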
The homogenization procedure developed here is conducted on a laminate with periodic space-time modulation on the fine scale: at leading order, this modulation creates convection in the low-wavelength regime if both parameters are modulated. However, if only one parameter is modulated, which is more realistic, this convective term disappears and one recovers a standard diffusion equation with effective homogeneous parameters; this does not describe the non-reciprocity and the propagation of the field observed in exact dispersion diagrams. This inconsistency is corrected here by considering second-order homogenization, which results in a non-reciprocal propagation term that is proved to be non-zero for any laminate and is verified via numerical simulation. The same methodology is also applied to the case where the density is modulated in the heat equation, leading to a corrective advective term that cancels out non-reciprocity at leading order but not at second order.
Motivated by the recent successful application of physics-informed neural networks (PINNs) to solving Boltzmann-type equations [S. Jin, Z. Ma, and K. Wu, J. Sci. Comput., 94 (2023), pp. 57], we provide a rigorous error analysis for PINNs approximating the solution of the Boltzmann equation near a global Maxwellian. The challenge arises from the nonlocal quadratic interaction term defined on the unbounded domain of velocity space. Analyzing this term on an unbounded domain requires the inclusion of a truncation function, which demands delicate analysis techniques. As a generalization of this analysis, we also provide a proof of the asymptotic-preserving property when using micro-macro decomposition-based neural networks.
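As a rough illustration of the PINN setup (a toy free-transport residual of our own, not the Boltzmann collision operator analysed in the paper), the sketch below samples collocation points with velocities restricted to a bounded interval, mirroring the role of the truncation function in the analysis.

```python
# Minimal PINN sketch: minimise a PDE residual on sampled collocation points,
# with velocities truncated to [-V, V] as a stand-in for the unbounded domain.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
V = 5.0  # hypothetical velocity truncation level

def residual_loss(n=256):
    t = torch.rand(n, 1, requires_grad=True)
    x = torch.rand(n, 1, requires_grad=True)
    v = (2 * torch.rand(n, 1) - 1) * V            # truncated velocities
    f = net(torch.cat([t, x, v], dim=1))
    f_t = torch.autograd.grad(f.sum(), t, create_graph=True)[0]
    f_x = torch.autograd.grad(f.sum(), x, create_graph=True)[0]
    # Free-transport residual f_t + v f_x ~ 0 (collision term omitted).
    return ((f_t + v * f_x) ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = residual_loss()
    loss.backward()
    opt.step()
```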
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval due to its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to extract the semantic features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
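Since SRH's details are not given here, the sketch below shows only a generic pairwise similarity-preserving hashing loss of the kind alluded to (codes of similar images are pulled together, dissimilar ones pushed apart by a margin); the "shadow" mechanism itself is not modelled.

```python
# Generic pairwise hashing loss sketch (not the SRH method itself).
import torch

def pairwise_hash_loss(codes, labels, margin=8.0):
    """codes: (n, bits) continuous CNN outputs; labels: (n,) class ids."""
    dist = torch.cdist(codes, codes)                      # pairwise distances
    sim = (labels[:, None] == labels[None, :]).float()    # 1 if same class
    pull = sim * dist.pow(2)                              # similar -> close
    push = (1 - sim) * torch.clamp(margin - dist, min=0).pow(2)
    return (pull + push).mean()

codes = torch.randn(16, 48, requires_grad=True)  # stand-in for CNN outputs
labels = torch.randint(0, 10, (16,))
loss = pairwise_hash_loss(codes, labels)
loss.backward()
hash_bits = codes.detach().sign()                # binarised hash codes
```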