A linear arrangement is a mapping $\pi$ from the $n$ vertices of a graph $G$ to $n$ distinct consecutive integers. Linear arrangements can be represented by drawing the vertices along a horizontal line and drawing the edges as semicircles above that line. In this setting, the length of an edge is defined as the absolute value of the difference between the positions of its two vertices in the arrangement, and the cost of an arrangement as the sum of all edge lengths. Here we study two variants of the Maximum Linear Arrangement problem (MaxLA), which consists of finding an arrangement that maximizes the cost. In the planar variant for free trees, vertices have to be arranged in such a way that there are no edge crossings. In the projective variant for rooted trees, arrangements have to be planar and the root of the tree cannot be covered by any edge. In this paper we present algorithms that solve planar and projective MaxLA for trees in linear time and space. We also prove several properties of maximum projective and planar arrangements, and show that caterpillar trees maximize planar MaxLA over all trees of a fixed size, thereby generalizing a previous extremal result on trees.
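As a concrete illustration of these definitions (a minimal Python sketch, not the paper's linear-time algorithm; the function name and input format are ours):

```python
# Minimal sketch: cost of a linear arrangement of a graph, with edge length
# |pi[u] - pi[v]| and cost equal to the sum of all edge lengths.
def arrangement_cost(edges, pi):
    """edges: list of (u, v) pairs; pi: dict mapping vertex -> position."""
    return sum(abs(pi[u] - pi[v]) for u, v in edges)

# Example: the path 0-1-2-3 arranged in the order 0, 2, 1, 3.
edges = [(0, 1), (1, 2), (2, 3)]
pi = {0: 1, 2: 2, 1: 3, 3: 4}
print(arrangement_cost(edges, pi))  # 2 + 1 + 2 = 5
```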
In this paper we examine the use of low-rank approximations for handling radiation boundary conditions in a transient heat equation in a cavity radiation setting. The finite element discretization that arises from cavity radiation is well known to be dense, which poses difficulties for the efficiency and scalability of solvers. Here we consider a special treatment of the cavity radiation discretization using a block low-rank approximation combined with hierarchical matrices. We provide an overview of the methodology and discuss techniques that can be used to improve efficiency within the framework of hierarchical matrices, including the use of the adaptive cross approximation (ACA) method. We provide a number of numerical results that demonstrate the accuracy and efficiency of the approach on practical problems, showing significant speedup and memory reduction compared to the more conventional "dense matrix" approach.
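To give a flavor of the kind of low-rank compression involved, here is a small cross-approximation sketch in Python. It uses full pivoting on a dense matrix purely for readability, whereas a genuine ACA implementation works adaptively on matrix entries generated on the fly; all names below are illustrative.

```python
import numpy as np

# Illustrative sketch only: a cross-approximation style low-rank factorization
# A ~= U @ V, built one rank-1 cross (pivot row and column) at a time.
def cross_approx(A, tol=1e-8, max_rank=None):
    R = A.astype(float).copy()          # residual
    m, n = R.shape
    max_rank = max_rank or min(m, n)
    U, V = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)  # pivot entry
        if abs(R[i, j]) < tol:
            break
        u = R[:, j] / R[i, j]           # scaled pivot column
        v = R[i, :].copy()              # pivot row
        U.append(u); V.append(v)
        R -= np.outer(u, v)             # peel off the rank-1 update
    return np.array(U).T, np.array(V)

# Smooth kernels (such as radiation view factors) are numerically low rank:
x = np.linspace(0, 1, 50)
A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :] + 2.0))
U, V = cross_approx(A, tol=1e-10)
print(U.shape[1], np.linalg.norm(A - U @ V) / np.linalg.norm(A))
```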
In this paper, we put forward a model of a zero-error distributed function compression system for two binary memoryless sources X and Y, in which two encoders En1 and En2 are connected to one decoder De by two channels (En1, De) and (En2, De) with capacity constraints C1 and C2, respectively. The encoder En1 can observe X or (X,Y) and the encoder En2 can observe Y or (X,Y), according to whether the switches s1 and s2 are open or closed (taking the values 0 or 1, respectively). The decoder De is required to compress the binary arithmetic sum f(X,Y)=X+Y with zero error by using the system multiple times. We use (s1s2;C1,C2;f) to denote the model, where we assume C1 \geq C2 by symmetry. The compression capacity of the model is defined as the maximum average number of times that the function f can be compressed with zero error per use of the system, which measures the efficiency of using the system. We fully characterize the compression capacities for all four cases of the model (s1s2;C1,C2;f), namely s1s2 = 00, 01, 10, 11. The characterization of the compression capacity for the case (01;C1,C2;f) with C1>C2 is highly nontrivial, and for it we develop a novel graph coloring approach. Furthermore, we apply the compression capacity for (01;C1,C2;f) to an open problem in network function computation: whether the best known upper bound of Guang et al. on the computing capacity is tight in general.
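For orientation only, the sketch below illustrates the classical single-encoder setting in which graph coloring enters zero-error function computation (the decoder knows Y as side information); the paper's two-encoder model is different and its coloring argument is more involved. All names are illustrative.

```python
# Toy illustration (not the paper's two-encoder model): one encoder sees X, the
# decoder already knows Y and wants f(X, Y) with zero error.  Two values x, x'
# of X are "confusable" if some y makes f(x, y) != f(x', y); any proper coloring
# of this confusability graph is a valid zero-error encoding of X.
def confusability_edges(xs, ys, f):
    return {(x, xp) for x in xs for xp in xs if x < xp
            and any(f(x, y) != f(xp, y) for y in ys)}

xs = ys = [0, 1]
f = lambda x, y: x + y                    # binary arithmetic sum
print(confusability_edges(xs, ys, f))     # {(0, 1)}: X must be sent in full (1 bit)
```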
The Straightness is a measure designed to characterize a pair of vertices in a spatial graph. It is defined as the ratio of the Euclidean distance to the graph distance between these vertices. It is often used as an average, for instance to describe the accessibility of a single vertex relative to all the other vertices in the graph, or even to summarize the graph as a whole. In some cases, one needs to compute the Straightness not only between vertices, but also between any other points constituting the graph of interest. Suppose for instance that our graph represents a road network and we do not want to limit ourselves to crossroad-to-crossroad itineraries, but allow any street number to be a starting point or destination. In this situation, the standard approach consists in: 1) discretizing the graph edges, 2) computing the vertex-to-vertex Straightness taking into account the additional vertices resulting from this discretization, and 3) performing the appropriate average over the obtained values. However, this discrete approximation can be computationally expensive on large graphs, and its precision has not been clearly assessed. In this article, we adopt a continuous approach to average the Straightness over the edges of spatial graphs. This allows us to derive five distinct measures that precisely characterize the accessibility of the whole graph, as well as of individual vertices and edges. Our method is generic and could be applied to other measures designed for spatial graphs. We perform an experimental evaluation of our continuous average Straightness measures, and show how they behave differently from the traditional vertex-to-vertex ones. Moreover, we also study their discrete approximations, and show that our approach is globally less demanding in terms of both processing time and memory usage. Our R source code is publicly available under an open source license.
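A minimal Python sketch of the basic vertex-to-vertex Straightness (not the continuous averages introduced in the article; the small square graph and the function names are our own):

```python
import math
import networkx as nx

# Straightness(u, v) = Euclidean distance / graph distance in a spatial graph.
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}   # vertex coordinates
G = nx.Graph()
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    G.add_edge(u, v, weight=math.dist(pos[u], pos[v]))  # edge length = Euclidean

def straightness(G, pos, u, v):
    d_graph = nx.shortest_path_length(G, u, v, weight="weight")
    return math.dist(pos[u], pos[v]) / d_graph

print(straightness(G, pos, 0, 2))  # sqrt(2)/2 ~ 0.707: the path goes around the square
```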
We give an $\widetilde{O}(\sqrt{n})$-space single-pass $0.483$-approximation streaming algorithm for estimating the maximum directed cut size (Max-DICUT) in a directed graph on $n$ vertices. This improves over an $O(\log n)$-space algorithm with approximation ratio $4/9 < 0.45$ due to Chou, Golovnev, and Velusamy (FOCS 2020), which was known to be optimal for $o(\sqrt{n})$-space algorithms. Max-DICUT is a special case of a constraint satisfaction problem (CSP). In this broader context, we give the first CSP for which algorithms with $\widetilde{O}(\sqrt{n})$ space can provably outperform $o(\sqrt{n})$-space algorithms. The key technical contribution of our work is the development of the notions of a first-order snapshot of a (directed) graph and of estimates of such snapshots. These snapshots can be used to simulate certain (non-streaming) Max-DICUT algorithms, including the "oblivious" algorithms introduced by Feige and Jozeph (Algorithmica, 2015), who showed that one such algorithm achieves a 0.483-approximation. Previous work of the authors (SODA 2023) studied the restricted case of bounded-degree graphs, and observed that in this setting it is straightforward to estimate the snapshot with small $\ell_1$ error, which suffices to simulate oblivious algorithms. For unbounded-degree graphs, however, even defining an achievable and sufficient notion of estimation is subtle. We describe a new notion of snapshot estimation and prove its sufficiency using careful smoothing techniques, and then develop an algorithm that sketches such an estimate via a delicate process of intertwined vertex- and edge-subsampling. Prior to our work, the only streaming algorithms for any CSP on general instances were based on generalizations of the $O(\log n)$-space algorithm for Max-DICUT, and thus our work opens the possibility of a new class of algorithms for approximating CSPs.
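For intuition, here is a hedged Python sketch of an oblivious-style Max-DICUT heuristic in the spirit of Feige and Jozeph: each vertex picks a side using only its own bias. The smooth bias-to-probability map below is a placeholder, not the specific step function that achieves the 0.483 guarantee in their paper.

```python
import random

# Oblivious heuristic sketch: a vertex joins the "source" side S with a
# probability depending only on its bias (out-degree minus in-degree, normalized);
# the DICUT value is the number of edges from S to its complement.
def oblivious_dicut(n, edges, prob=lambda bias: (1.0 + bias) / 2.0, seed=0):
    random.seed(seed)
    out_deg, in_deg = [0] * n, [0] * n
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
    def bias(v):
        d = out_deg[v] + in_deg[v]
        return 0.0 if d == 0 else (out_deg[v] - in_deg[v]) / d
    side = [random.random() < prob(bias(v)) for v in range(n)]   # True = in S
    return sum(1 for u, v in edges if side[u] and not side[v])   # edges cut S -> V\S

edges = [(0, 1), (0, 2), (3, 1), (2, 3)]
print(oblivious_dicut(4, edges))
```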
We consider $t$-Lee-error-correcting codes of length $n$ over the residue ring $\mathbb{Z}_m := \mathbb{Z}/m\mathbb{Z}$ and determine upper and lower bounds on the number of such codes. We use two different methods, namely estimating isolated nodes in bipartite graphs and the graph container method. The former yields density results for codes of fixed size, and the latter for codes of any size. This confirms some recent density results for linear Lee metric codes and provides new density results for nonlinear codes. To apply a variant of the graph container algorithm, we also investigate some geometric properties of balls in the Lee metric.
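As a small illustration of the underlying geometry (helper names are ours; brute-force enumeration only):

```python
from itertools import product

# Lee distance over Z_m^n and the size of a Lee ball of radius t,
# the basic geometric object studied in the paper.
def lee_weight(a, m):
    a %= m
    return min(a, m - a)

def lee_distance(x, y, m):
    return sum(lee_weight(xi - yi, m) for xi, yi in zip(x, y))

def lee_ball_size(n, m, t):
    # number of words within Lee distance t of the all-zero word
    return sum(1 for w in product(range(m), repeat=n)
               if sum(lee_weight(a, m) for a in w) <= t)

print(lee_ball_size(n=3, m=5, t=1))  # 1 center + 3*2 words at Lee distance 1 = 7
```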
This paper combines modern numerical computation with theoretical results to improve our understanding of the growth factor problem for Gaussian elimination. On the computational side, we obtain lower bounds on the maximum growth factor under complete pivoting for $n = 1, \dots, 75$ and $n = 100$ using the Julia JuMP optimization package. At $n = 100$ we obtain a growth factor bigger than $3n$. The numerical evidence suggests that the maximum growth factor is bigger than $n$ if and only if $n \ge 11$. We also present a number of theoretical results. We show that the maximum growth factor over matrices with entries restricted to a subset of the reals is nearly equal to the maximum growth factor over all real matrices. We also show that the growth factors under floating point arithmetic and exact arithmetic are nearly identical. Finally, through numerical search, and stability and extrapolation results, we provide improved lower bounds on the maximum growth factor. Specifically, we find that the largest growth factor is bigger than $1.0045n$, and that the lim sup of the ratio of the maximum growth factor to $n$ is at least $3.317$. In contrast to the old conjecture that the growth might never exceed $n$, it seems likely that the maximum growth factor divided by $n$ tends to infinity as $n \rightarrow \infty$.
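For reference, a short Python sketch of the quantity being maximized, the growth factor of Gaussian elimination with complete pivoting (illustration only, not the JuMP optimization search used in the paper; assumes nonzero pivots):

```python
import numpy as np

# Growth factor g(A) = max_k max_{i,j} |a_ij^(k)| / max_{i,j} |a_ij|,
# where a^(k) are the intermediate matrices of the elimination.
def growth_factor_complete_pivoting(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    orig = np.max(np.abs(A))
    g = orig
    for k in range(n - 1):
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)  # complete pivot
        A[[k, k + i], :] = A[[k + i, k], :]                  # row swap
        A[:, [k, k + j]] = A[:, [k + j, k]]                  # column swap
        A[k+1:, k:] -= np.outer(A[k+1:, k] / A[k, k], A[k, k:])
        g = max(g, np.max(np.abs(A[k+1:, k+1:])))
    return g / orig

# A 4x4 Hadamard matrix, a classical example of growth under complete pivoting:
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]], dtype=float)
print(growth_factor_complete_pivoting(H))   # 4.0
```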
The page number of a directed acyclic graph $G$ is the minimum $k$ for which there is a topological ordering of $G$ and a $k$-coloring of the edges such that no two edges of the same color cross, i.e., have alternating endpoints along the topological ordering. We address the long-standing open problem asking for the largest page number among all upward planar graphs. We improve the best known lower bound to $5$ and present the first asymptotic improvement over the trivial $O(n)$ upper bound, where $n$ denotes the number of vertices in $G$. Specifically, we first prove that the page number of every upward planar graph is bounded in terms of its width, as well as its height. We then combine both approaches to show that every $n$-vertex upward planar graph has page number $O(n^{2/3} \log(n)^{2/3})$.
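To make the crossing condition concrete, here is a small Python sketch (the function names and example are ours) that checks whether a page assignment is valid for a given topological ordering:

```python
# Two edges cross on a page iff their endpoints alternate along the vertex
# ordering; a k-page assignment is valid iff no two same-colored edges alternate.
def edges_cross(order, e1, e2):
    pos = {v: i for i, v in enumerate(order)}
    a, b = sorted((pos[e1[0]], pos[e1[1]]))
    c, d = sorted((pos[e2[0]], pos[e2[1]]))
    return a < c < b < d or c < a < d < b

def valid_page_assignment(order, edges, page):
    return not any(page[e1] == page[e2] and edges_cross(order, e1, e2)
                   for i, e1 in enumerate(edges) for e2 in edges[i+1:])

order = [0, 1, 2, 3]
edges = [(0, 2), (1, 3), (0, 3)]
page = {(0, 2): 0, (1, 3): 1, (0, 3): 0}   # (0,2) and (1,3) alternate, so different pages
print(valid_page_assignment(order, edges, page))   # True
```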
Inferring causal structures from time series data is the central interest of many scientific inquiries. A major barrier to such inference is the problem of subsampling, i.e., the frequency of measurements is much lower than that of causal influence. To overcome this problem, numerous model-based and model-free methods have been proposed, yet these are either limited to the linear case or fail to establish identifiability. In this work, we propose a model-free algorithm that can identify the entire causal structure from subsampled time series, without any parametric constraint. The idea is that the challenge of subsampling arises mainly from \emph{unobserved} time steps and should therefore be handled with tools designed for unobserved variables. Among these tools, we find the proxy variable approach a particularly good fit, in the sense that the proxy of a variable at an unobserved time step is naturally the same variable at an observed time step. Following this intuition, we establish comprehensive structural identifiability results. Our method is constraint-based and requires no more regularity than common continuity and differentiability assumptions. These theoretical advantages are reflected in our experimental results.
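A toy simulation of the subsampling problem itself (not the proposed algorithm; the coefficients and variable names are arbitrary) may help fix ideas:

```python
import numpy as np

# X causes Y with a one-step lag at the fine timescale, but only every k-th
# step is observed, so the lag-1 dependence is hidden between samples.
rng = np.random.default_rng(0)
T, k = 2000, 3                        # fine-grained length, subsampling factor
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = 0.7 * x[t-1] + rng.normal()
    y[t] = 0.8 * x[t-1] + 0.2 * y[t-1] + rng.normal()
x_obs, y_obs = x[::k], y[::k]         # what the analyst actually sees
print(np.corrcoef(x_obs[:-1], y_obs[1:])[0, 1])   # cross-correlation at the observed lag
```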
It is often desirable to summarise a probability measure on a space $X$ in terms of a mode, or MAP estimator, i.e.\ a point of maximum probability. Such points can be rigorously defined using masses of metric balls in the small-radius limit. However, the theory is not entirely straightforward: the literature contains multiple notions of mode and various examples of pathological measures that have no mode in any sense. Since the masses of balls induce natural orderings on the points of $X$, this article aims to shed light on some of the problems in non-parametric MAP estimation by taking an order-theoretic perspective, which appears to be a new one in the inverse problems community. This point of view opens up attractive proof strategies based upon the Cantor and Kuratowski intersection theorems; it also reveals that many of the pathologies arise from the distinction between greatest and maximal elements of an order, and from the existence of incomparable elements of $X$, which we show can be dense in $X$, even for an absolutely continuous measure on $X = \mathbb{R}$.
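For orientation, one widely used small-ball notion of mode (the article compares several related definitions) declares a point $x^{\ast} \in X$ a strong mode of $\mu$ if
\[
  \lim_{r \to 0} \frac{\mu\bigl(B_r(x^{\ast})\bigr)}{\sup_{x \in X} \mu\bigl(B_r(x)\bigr)} = 1 ,
\]
and, for each radius $r > 0$, the ball masses induce a natural preorder on $X$,
\[
  x \preceq_r y \iff \mu\bigl(B_r(x)\bigr) \le \mu\bigl(B_r(y)\bigr),
\]
which is the order-theoretic structure exploited in the article.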
The trade algorithm, which includes the curveball and fastball implementations, is the state of the art for uniformly sampling $r \times c$ binary matrices with fixed row and column sums. The mixing time of the trade algorithm is currently unknown, although $5r$ is currently used as a heuristic. We propose a distribution-based approach to estimating the mixing time which can also return a sample of matrices that are nearly guaranteed to be drawn uniformly at random. In numerical experiments on matrices that vary in size, fill, and row and column sum distributions, we find that the upper bound on the mixing time is at least $10r$, and that it increases as a function of both $c$ and the fraction of cells containing a 1.
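A minimal Python sketch of a single curveball-style trade, the move the chain repeats (the data layout and names are illustrative, not the fastball implementation):

```python
import random

# Rows are stored as sets of column indices holding a 1.  A trade picks two
# rows and swaps a random subset of their non-shared 1s, which preserves both
# row sums and column sums.
def trade(rows, rng=random):
    i, j = rng.sample(range(len(rows)), 2)
    shared = rows[i] & rows[j]
    pool = list((rows[i] | rows[j]) - shared)   # columns held by exactly one row
    rng.shuffle(pool)
    k = len(rows[i]) - len(shared)              # row i keeps its row sum
    rows[i] = shared | set(pool[:k])
    rows[j] = shared | set(pool[k:])

# 4 x 5 matrix as sets of column indices; apply a handful of trades.
rows = [{0, 1}, {1, 2, 3}, {0, 4}, {2, 3, 4}]
random.seed(1)
for _ in range(5 * len(rows)):
    trade(rows)
print(rows)   # row sums (2, 3, 2, 3) and all column sums are unchanged
```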