The ternary relation $B(x,y,z)$ of betweenness states that an element $y$ is between the elements $x$ and $z$, in a sense that depends on the considered structure. In a partially ordered set $(N,\leq)$, $B(x,y,z):\Longleftrightarrow x<y<z\vee z<y<x$. The corresponding betweenness structure is $(N,B)$. The class of betweenness structures of linear orders is first-order definable. That of partial orders is monadic second-order definable. An order-theoretic tree is a partial order such that the set of elements larger than any element is linearly ordered and any two elements have an upper bound. Finite or infinite rooted trees ordered by the ancestor relation are order-theoretic trees. In an order-theoretic tree, we define $B(x,y,z)$ to mean that $x<y<z$ or $z<y<x$ or $x<y\leq x\sqcup z$ or $z<y\leq x\sqcup z$, where the last two cases require $x$ and $z$ to be incomparable and their least upper bound $x\sqcup z$ to be defined. In a previous article, we established that the corresponding class of betweenness structures is monadic second-order definable. We prove here that the induced substructures of the betweenness structures of the countable order-theoretic trees form a monadic second-order definable class, denoted by IBO. The proof uses a variant of cographs, the partitioned probe cographs, and their six known finite minimal excluded induced subgraphs, called the bounds of the class. This proof links two apparently unrelated topics: cographs and order-theoretic trees. Furthermore, the class IBO has finitely many bounds, i.e., minimal excluded finite induced substructures. Hence it is first-order definable. However, the proof of finiteness uses well-quasi-orders and does not provide the finite list of bounds, so the associated first-order defining sentence is not known.
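As a concrete illustration of this definition (our own example, not taken from the article), consider the three-element order-theoretic tree consisting of a root $r$ above two incomparable leaves $x$ and $z$, ordered by the ancestor relation so that $r$ is the largest element:
$$ x < r, \qquad z < r, \qquad x \text{ and } z \text{ incomparable}, \qquad x \sqcup z = r. $$
Here the clause $x<y\leq x\sqcup z$ applies with $y=r$, so $B(x,r,z)$ holds, whereas $B(r,x,z)$ does not, since no clause applies to $y=x$.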
The classic algorithm of Bodlaender and Kloks [J. Algorithms, 1996] solves the following problem in linear fixed-parameter time: given a tree decomposition of a graph of (possibly suboptimal) width k, compute an optimum-width tree decomposition of the graph. In this work, we prove that this problem can also be solved in MSO in the following sense: for every positive integer k, there is an MSO transduction from tree decompositions of width k to tree decompositions of optimum width. Together with our recent results [LICS 2016], this implies that for every k there exists an MSO transduction which inputs a graph of treewidth k and nondeterministically outputs a tree decomposition of optimum width. We also show that MSO transductions can be implemented in linear fixed-parameter time, which enables us to derive the algorithmic result of Bodlaender and Kloks as a corollary of our main result.
In this paper we are concerned with a sequence of univariate random variables with piecewise polynomial means and independent sub-Gaussian noise. The underlying polynomials are allowed to be of arbitrary but fixed degrees. All other model parameters are allowed to vary with the sample size. We propose a two-step estimation procedure based on $\ell_0$-penalisation and provide upper bounds on the localisation error. We complement these results by deriving a global information-theoretic lower bound, which shows that our two-step estimators are nearly minimax rate-optimal. We also show that our estimator enjoys nearly optimal adaptive performance by attaining individual localisation errors depending on the level of smoothness at individual change points of the underlying signal. In addition, under a special smoothness constraint, we provide a minimax lower bound on the localisation errors. This lower bound is independent of the polynomial orders and is sharper than the global minimax lower bound.
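The $\ell_0$-penalised segmentation underlying the first step can be pictured as an optimal-partitioning dynamic programme over candidate change points. The following Python sketch (our own schematic illustration with hypothetical function names; it is not the two-step procedure proposed here, nor its choice of penalty) fits one polynomial of fixed degree per segment and charges a penalty lam per change point:

import numpy as np

def seg_cost(y, i, j, degree):
    # Residual sum of squares of a degree-`degree` polynomial fit to y[i:j].
    if j - i <= degree + 1:          # too few points: exact interpolation, zero residual
        return 0.0
    x = np.arange(i, j)
    coef = np.polyfit(x, y[i:j], degree)
    resid = y[i:j] - np.polyval(coef, x)
    return float(resid @ resid)

def l0_piecewise_poly(y, degree, lam):
    # Minimise  (sum of segment RSS)  +  lam * (number of change points).
    n = len(y)
    F = np.full(n + 1, np.inf)       # F[j] = best cost for the prefix y[:j]
    F[0] = -lam                      # cancels the penalty charged to the first segment
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = F[i] + lam + seg_cost(y, i, j, degree)
            if c < F[j]:
                F[j], prev[j] = c, i
    cps, j = [], n                   # backtrack the estimated change-point locations
    while j > 0:
        j = prev[j]
        if j > 0:
            cps.append(j)
    return sorted(cps)

The sketch only illustrates the $\ell_0$-penalised segmentation idea; the estimators analysed here add a second step on top of such a preliminary segmentation.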
We are interested in the problem of learning the directed acyclic graph (DAG) when data are generated from a linear structural equation model (SEM) and the causal structure can be characterized by a polytree. Under Gaussian polytree models, we study sufficient conditions on the sample sizes for the well-known Chow-Liu algorithm to exactly recover both the skeleton and the equivalence class of the polytree, the latter being uniquely represented by a CPDAG. On the other hand, necessary conditions on the required sample sizes for both skeleton and CPDAG recovery are also derived in terms of information-theoretic lower bounds, which match the respective sufficient conditions and thereby give a sharp characterization of the difficulty of these tasks. We also consider extensions to the sub-Gaussian case, and then study the estimation of the inverse correlation matrix under such models. Our theoretical findings are illustrated by comprehensive numerical simulations, and experiments on benchmark data also demonstrate the robustness of polytree learning when the true graphical structures can only be approximated by polytrees.
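Since the skeleton estimate is the classical Chow-Liu tree, a minimal Python sketch of that step (our own illustration, assuming strictly positive pairwise correlations so that every candidate edge has nonzero weight; it is not the full CPDAG-recovery procedure studied here) is a maximum-weight spanning tree on pairwise Gaussian mutual informations $I(i,j) = -\tfrac12\log(1-\rho_{ij}^2)$:

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_skeleton(X):
    # X: n-by-p data matrix; returns the estimated skeleton as a sorted edge list.
    corr = np.corrcoef(X, rowvar=False)
    mi = -0.5 * np.log1p(-np.clip(corr ** 2, 0.0, 1.0 - 1e-12))  # Gaussian mutual information
    np.fill_diagonal(mi, 0.0)
    # A maximum-weight spanning tree is a minimum spanning tree on negated weights.
    mst = minimum_spanning_tree(-mi)
    rows, cols = mst.nonzero()
    return sorted((min(i, j), max(i, j)) for i, j in zip(rows, cols))

Orienting the recovered skeleton into the CPDAG then requires an additional step (e.g., identifying v-structures), which is not shown here.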
In this paper, we introduce a new representation for team-coordinated game-theoretic decision making, which we coin the team belief DAG form. In our representation, at every timestep, a team coordinator observes the information that is public to all its members, and then decides on a prescription for all the possible states consistent with its observations. Our representation unifies and extends recent approaches to team coordination. Similar to the approach of Carminati et al. (2021), our team belief DAG form can be used to capture adversarial team games, and it enables standard, out-of-the-box game-theoretic techniques, including no-regret learning (e.g., CFR and its state-of-the-art modern variants such as DCFR and PCFR+) and first-order methods. However, our representation can be exponentially smaller, and it can be viewed as a lossless abstraction of theirs into a directed acyclic graph. In particular, like the LP-based algorithm of Zhang & Sandholm (2022), the size of our representation scales with the amount of information uncommon to the team; in fact, using linear programming on top of our team belief DAG form to solve for a team correlated equilibrium in adversarial team games recovers almost exactly their algorithm. Unlike that paper, however, our representation explicitly exposes the structure of the decision space, which is what enables the aforementioned game-theoretic techniques.
Partitioning a graph into balanced blocks such that few edges run between blocks is a key problem for large-scale distributed processing. A current trend for partitioning huge graphs is the use of streaming algorithms, which require few computational resources. In this work, we present a shared-memory streaming multi-recursive partitioning scheme that performs recursive multi-sections on the fly, without knowing the overall input graph. Our approach has a considerably lower running-time complexity than state-of-the-art non-buffered one-pass partitioning algorithms for the standard graph partitioning case. Moreover, if the topology of a distributed system is known, it is possible to further optimize the communication costs by mapping partitions onto processing elements. Our experiments indicate that our algorithm is faster and produces better process mappings than competing tools. In the case of graph partitioning, our framework is up to two orders of magnitude faster at the cost of 5% more cut edges compared to Fennel.
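For contrast with the non-buffered one-pass baselines mentioned above, the following Python sketch (our own illustration of a Fennel-style one-pass streaming assignment, not the multi-recursive scheme proposed in this work) greedily places each arriving vertex in the block that balances attraction to already-placed neighbours against block size:

def fennel_stream_partition(stream, k, n, m, gamma=1.5):
    # stream: iterable of (vertex, neighbour_list) in arrival order;
    # n, m: number of vertices and edges; k: number of blocks.
    alpha = m * (k ** (gamma - 1)) / (n ** gamma)   # standard Fennel scaling
    sizes = [0] * k
    assignment = {}
    for v, neighbours in stream:
        def gain(i):
            internal = sum(1 for u in neighbours if assignment.get(u) == i)
            return internal - alpha * gamma * (sizes[i] ** (gamma - 1))
        best = max(range(k), key=gain)
        assignment[v] = best
        sizes[best] += 1
    return assignment

The approach presented in this work instead performs recursive multi-sections on the fly rather than a single flat assignment of this kind.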
We analyse a class of time discretizations for solving the Gross-Pitaevskii equation at low regularity on an arbitrary Lipschitz domain $\Omega \subset \mathbb{R}^d$, $d \le 3$, with a non-smooth potential. We show that these schemes, together with their optimal local error structure, allow for convergence under lower regularity assumptions on both the solution and the potential than is required by classical methods, such as splitting or exponential integrator methods. Moreover, in the case of periodic boundary conditions, we show convergence in any fractional Sobolev space $H^{r}$ with $r \ge 0$, beyond the more typical $L^2$ error analysis.
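For concreteness, in one standard normalization (sign and scaling conventions, and the constant $\mu$, may differ from the precise setting analysed here) the equation in question reads
$$ \mathrm{i}\,\partial_t u(t,x) = -\Delta u(t,x) + V(x)\,u(t,x) + \mu\,|u(t,x)|^2 u(t,x), \qquad u(0,x) = u_0(x), \quad x \in \Omega, $$
with a possibly non-smooth potential $V$ and $\mu \in \mathbb{R}$.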
Trefftz methods are numerical methods for the approximation of solutions to boundary and/or initial value problems. They are Galerkin methods with particular test and trial functions, which locally solve the governing partial differential equation (PDE). This property is called the Trefftz property. Quasi-Trefftz methods were introduced to leverage the advantages of Trefftz methods for problems governed by variable-coefficient PDEs, by relaxing the Trefftz property into a so-called quasi-Trefftz property: test and trial functions are not exact solutions but rather local approximate solutions to the governing PDE. In order to develop quasi-Trefftz methods for aero-acoustic problems governed by the convected Helmholtz equation, the present work tackles the question of the definition, construction and approximation properties of three families of quasi-Trefftz functions: two based on generalizations of plane wave solutions, and one polynomial.
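For orientation, one common constant-coefficient form of the convected Helmholtz equation, for a uniform subsonic mean flow of Mach number $M$ along the $x$-axis and wavenumber $k$ (the sign of the convection term depends on the chosen time convention, and the present work targets the variable-coefficient generalization of such a model), is
$$ (1-M^2)\,\partial_{xx} u + \partial_{yy} u + \partial_{zz} u - 2\,\mathrm{i}\,k M\,\partial_x u + k^2 u = 0. $$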
A radio labelling of a graph $G$ is a mapping $f : V(G) \rightarrow \{0, 1, 2,\ldots\}$ such that $|f(u)-f(v)| \geq diam(G) + 1 - d(u,v)$ for every pair of distinct vertices $u,v$ of $G$, where $diam(G)$ is the diameter of $G$ and $d(u,v)$ is the distance between $u$ and $v$ in $G$. The radio number $rn(G)$ of $G$ is the smallest integer $k$ such that $G$ admits a radio labelling $f$ with $\max\{f(v):v \in V(G)\} = k$. The weight of a tree $T$ from a vertex $v \in V(T)$ is the sum of the distances in $T$ from $v$ to all other vertices, and a vertex of $T$ achieving the minimum weight is called a weight center of $T$. It is known that any tree has one or two weight centers. A tree is called a two-branch tree if the removal of all its weight centers results in a forest with exactly two components. In this paper we obtain a sharp lower bound for the radio number of two-branch trees, which improves a known lower bound for general trees. We also give a necessary and sufficient condition for this improved lower bound to be achieved. Using these results, we determine the radio number of two families of level-wise regular two-branch trees.
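Because the definition is purely metric, checking a candidate labelling, and hence computing $rn(G)$ by exhaustive search on very small trees, is straightforward. The following Python sketch (our own brute-force illustration using networkx, exponential in the number of vertices and in no way the method used in this paper) does exactly that:

from itertools import permutations
import networkx as nx

def is_radio_labelling(G, f, diam, dist):
    # Check |f(u) - f(v)| >= diam(G) + 1 - d(u, v) for every pair of distinct vertices.
    V = list(G.nodes)
    return all(abs(f[u] - f[v]) >= diam + 1 - dist[u][v]
               for i, u in enumerate(V) for v in V[i + 1:])

def radio_number_bruteforce(G, max_label):
    # Smallest k <= max_label such that G admits a radio labelling of span k.
    dist = dict(nx.all_pairs_shortest_path_length(G))
    diam = nx.diameter(G)
    V = list(G.nodes)
    for k in range(max_label + 1):
        # Radio labellings are injective (d(u,v) <= diam forces distinct labels),
        # so it suffices to try injective labellings into {0, ..., k} of span exactly k.
        for labels in permutations(range(k + 1), len(V)):
            if max(labels) != k:
                continue
            f = dict(zip(V, labels))
            if is_radio_labelling(G, f, diam, dist):
                return k
    return None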
Safe Policy Improvement (SPI) aims at provable guarantees that a learned policy is at least approximately as good as a given baseline policy. Building on SPI with Soft Baseline Bootstrapping (Soft-SPIBB) by Nadjahi et al., we identify theoretical issues in their approach, provide a corrected theory, and derive a new algorithm that is provably safe on finite Markov Decision Processes (MDPs). Additionally, we provide a heuristic algorithm that exhibits the best performance among many state-of-the-art SPI algorithms on two different benchmarks. Furthermore, we introduce a taxonomy of SPI algorithms and empirically show an interesting property of two classes of SPI algorithms: while algorithms that incorporate the uncertainty as a penalty on the action-value achieve higher mean performance, actively restricting the set of policies more consistently produces good policies and is, thus, safer.
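To make this taxonomy concrete, the two classes can be written schematically as follows (this is our own stylized summary, not necessarily the exact formulation of any of the algorithms compared here): penalty-based methods greedily improve
$$ \pi(\cdot\,|\,s) \in \operatorname{arg\,max}_{\pi} \sum_{a} \pi(a\,|\,s)\bigl(\widehat{Q}(s,a) - \kappa\, e(s,a)\bigr), $$
whereas Soft-SPIBB-style methods restrict the policy set, maximizing $\sum_{a} \pi(a\,|\,s)\,\widehat{Q}(s,a)$ subject to $\sum_{a} e(s,a)\,\bigl|\pi(a\,|\,s) - \pi_b(a\,|\,s)\bigr| \le \epsilon$ for every state $s$, where $\pi_b$ is the baseline policy and $e(s,a)$ is an uncertainty estimate for the state-action pair.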
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $\ell_1/\ell_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimations. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis also shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer coincides with the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
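Schematically, and ignoring how the noise variances enter the exact Gaussian likelihood, a group-penalized least-squares surrogate of such a joint estimator takes the form
$$ \min_{\{B^{(k)}\}_{k=1}^{K}\ \text{acyclic}} \ \sum_{k=1}^{K} \frac{1}{2 n_k}\bigl\| X^{(k)} - X^{(k)} B^{(k)} \bigr\|_F^2 \ +\ \lambda \sum_{i \neq j} \Bigl(\sum_{k=1}^{K} \bigl(B^{(k)}_{ij}\bigr)^2\Bigr)^{1/2}, $$
where $X^{(k)}$ is the $n_k \times p$ data matrix of task $k$ and $B^{(k)}$ its weighted adjacency matrix (the precise objective studied here may differ); the $\ell_1/\ell_2$ group penalty couples the edge $(i,j)$ across all $K$ tasks, which is what encourages a shared sparse union of supports.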