In the context of two-player games on graphs, a language $L$ is called half-positional if, in every game with $L$ as the winning objective, the protagonist can play optimally using positional strategies, that is, strategies that do not depend on the history of the play. In this work, we describe the class of parity automata recognising half-positional languages, providing a complete characterisation of half-positionality for $\omega$-regular languages. As corollaries, we obtain the decidability of half-positionality in polynomial time, finite-to-infinite and one-to-two-player lifts, and the closure under union of prefix-independent half-positional objectives, answering a conjecture of Kopczy\'nski.
We show that the known list-decoding algorithms for univariate multiplicity and folded Reed-Solomon (FRS) codes can be made to run in $\tilde{O}(n)$ time. Univariate multiplicity codes and FRS codes are natural variants of Reed-Solomon codes that were discovered and studied for their applications to list decoding. It is known that for every $\epsilon>0$ and rate $r \in (0,1)$, there exist explicit families of these codes that have rate $r$ and can be list decoded from a $(1-r-\epsilon)$ fraction of errors with constant list size in polynomial time (Guruswami & Wang (IEEE Trans. Inform. Theory 2013) and Kopparty, Ron-Zewi, Saraf & Wootters (SIAM J. Comput. 2023)). In this work, we present randomized algorithms that perform the above list-decoding tasks in $\tilde{O}(n)$ time, where $n$ is the block length of the code. Our algorithms have two main components. The first component builds upon the lattice-based approach of Alekhnovich (IEEE Trans. Inform. Theory 2005), who designed a $\tilde{O}(n)$ time list-decoding algorithm for Reed-Solomon codes approaching the Johnson radius. As part of the second component, we design $\tilde{O}(n)$ time algorithms for two natural algebraic problems: given an $(m+2)$-variate polynomial $Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i=0}^m Q_i(x)\cdot y_i$, the first algorithm solves order-$m$ linear differential equations of the form $Q\left(x, f(x), \frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$, while the second solves functional equations of the form $Q\left(x, f(x), f(\gamma x), \dots,f(\gamma^m x)\right) \equiv 0$, where $m$ is an arbitrary constant and $\gamma$ is a field element of sufficiently high order. These algorithms can be viewed as generalizations of the classical $\tilde{O}(n)$ time algorithms of Sieveking (Computing 1972) and Kung (Numer. Math. 1974) for computing the modular inverse of a power series, and might be of independent interest.
We propose a game-based formulation for learning dimensionality-reducing representations of feature vectors, when only prior knowledge on future prediction tasks is available. In this game, the first player chooses a representation, and then the second player adversarially chooses a prediction task from a given class, representing the prior knowledge. The first player aims to minimize, and the second player to maximize, the regret: the minimal prediction loss using the representation, compared to the same loss using the original features. For the canonical setting in which the representation, the response to predict and the predictors are all linear functions, and under the mean squared error loss function, we derive the theoretically optimal representation in pure strategies, which shows the effectiveness of the prior knowledge, and the optimal regret in mixed strategies, which shows the usefulness of randomizing the representation. For general representations and loss functions, we propose an efficient algorithm to optimize a randomized representation. The algorithm requires only the gradients of the loss function, and is based on incrementally adding a representation rule to a mixture of such rules.
For a fixed graph $H$, in the graph homomorphism problem, denoted by $Hom(H)$, we are given a graph $G$ and have to determine whether there exists an edge-preserving mapping $\varphi: V(G) \to V(H)$. Note that $Hom(C_3)$, where $C_3$ is the cycle of length $3$, is equivalent to $3$-Coloring. Whether $3$-Coloring is polynomial-time solvable on diameter-$2$ graphs is a well-known open problem. In this paper we study the $Hom(C_{2k+1})$ problem on bounded-diameter graphs for $k\geq 2$, i.e., for all odd cycles other than $C_3$. We prove that for $k\geq 2$, the $Hom(C_{2k+1})$ problem is polynomial-time solvable on diameter-$(k+1)$ graphs -- note that such a result for $k=1$ would be precisely a polynomial-time algorithm for $3$-Coloring of diameter-$2$ graphs. Furthermore, we give subexponential-time algorithms for diameter-$(k+2)$ and -$(k+3)$ graphs. We complement these results with a lower bound for diameter-$(2k+2)$ graphs -- in this class the $Hom(C_{2k+1})$ problem is NP-hard and cannot be solved in subexponential time, unless the ETH fails. Finally, we consider another direction of generalizing $3$-Coloring on diameter-$2$ graphs: we consider target graphs $H$ other than odd cycles but restrict ourselves to diameter $2$. We show that if $H$ is triangle-free, then $Hom(H)$ is polynomial-time solvable on diameter-$2$ graphs.
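To make the definition of $Hom(H)$ concrete, here is a minimal brute-force sketch (the function name is hypothetical and the search is exponential in $|V(G)|$; it is only meant to illustrate the definition on tiny graphs such as odd cycles):

```python
from itertools import product

def has_homomorphism(G_edges, G_vertices, H_edges, H_vertices):
    # Hypothetical brute-force Hom(H) check: try every mapping
    # V(G) -> V(H) and test that each edge of G maps to an edge of H.
    H_adj = set()
    for u, v in H_edges:
        H_adj.add((u, v))
        H_adj.add((v, u))
    for image in product(H_vertices, repeat=len(G_vertices)):
        phi = dict(zip(G_vertices, image))
        if all((phi[u], phi[v]) in H_adj for u, v in G_edges):
            return True
    return False

C3 = ([(0, 1), (1, 2), (2, 0)], [0, 1, 2])
C5 = ([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], [0, 1, 2, 3, 4])

print(has_homomorphism(*C5, *C3))  # True: C5 maps homomorphically onto C3
print(has_homomorphism(*C3, *C5))  # False: C5 has no closed walk of length 3
```

The two example calls reflect the general fact that $C_{2k+1}$ admits a homomorphism to $C_{2\ell+1}$ exactly when $\ell \leq k$.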
In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
Data sets tend to live in low-dimensional non-linear subspaces, and ideal data analysis tools should therefore account for this non-linear geometry. The symmetric Riemannian geometry setting is suitable for a variety of reasons. First, it comes with a rich mathematical structure that accounts for a wide range of non-linear geometries and, as empirical evidence from classical non-linear embedding suggests, is able to capture the data geometry. Second, many standard data analysis tools initially developed for data in Euclidean space can be generalised efficiently to data on a symmetric Riemannian manifold. A conceptual challenge comes from the lack of guidelines for constructing a symmetric Riemannian structure on the data space itself, and for adapting data analysis algorithms that are successful on symmetric Riemannian manifolds to this setting. This work considers these challenges in the setting of pullback Riemannian geometry through a diffeomorphism. The first part of the paper characterises diffeomorphisms that result in proper, stable and efficient data analysis. The second part then uses these best practices to guide the construction of such diffeomorphisms through deep learning. As a proof of concept, different types of pullback geometries -- among which the proposed construction -- are tested on several data analysis tasks and several toy data sets. The numerical experiments confirm the predictions from theory: for proper, stable and efficient data analysis, the diffeomorphism generating the pullback geometry needs to map the data manifold into a geodesic subspace of the pulled-back Riemannian manifold while preserving local isometry around the data manifold, and pulling back positive curvature can be problematic in terms of stability.
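As a toy illustration of pullback geometry (not the paper's learned construction), a global diffeomorphism $\varphi$ of $\mathbb{R}^n$ pulls the Euclidean distance and straight-line geodesics back to the data space; here $\varphi = \sinh$ applied componentwise is a hypothetical stand-in for a learned map:

```python
import numpy as np

def phi(x):
    # Hypothetical diffeomorphism of R^n (componentwise sinh).
    return np.sinh(x)

def phi_inv(y):
    # Its global inverse (componentwise arcsinh).
    return np.arcsinh(y)

def pullback_distance(x, y):
    # The Euclidean distance measured in the image of phi.
    return np.linalg.norm(phi(x) - phi(y))

def pullback_geodesic(x, y, t):
    # Straight lines in the image of phi pull back to geodesics.
    return phi_inv((1 - t) * phi(x) + t * phi(y))
```

Under this metric the point `pullback_geodesic(x, y, 0.5)` is equidistant from both endpoints, which is the sense in which subsets of the data space can be geodesic for the pulled-back structure.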
We show that certain diagrams of $\infty$-logoses are reconstructed in internal languages of their oplax limits via lex, accessible modalities, which enables us to use plain homotopy type theory to reason about not only a single $\infty$-logos but also a diagram of $\infty$-logoses. This also provides a higher dimensional version of Sterling's synthetic Tait computability -- a type theory for higher dimensional logical relations. To prove the main result, we establish a precise correspondence between the lex, accessible localizations of an $\infty$-logos and the lex, accessible modalities in the internal language of the $\infty$-logos. To do this, we also partly develop the Kripke-Joyal semantics of homotopy type theory in $\infty$-logoses.
We continue the study of balanceable graphs, defined by Caro, Hansberg, and Montejano in 2021 as graphs $G$ such that any $2$-coloring of the edges of a sufficiently large complete graph containing sufficiently many edges of each color contains a balanced copy of $G$. While the problem of recognizing balanceable graphs was conjectured to be NP-complete by Dailly, Hansberg, and Ventura in 2021, balanceable graphs admit an elegant combinatorial characterization: a graph is balanceable if and only if there exist two vertex subsets, one containing half of all the graph's edges and another one such that the corresponding cut contains half of all the graph's edges. We consider a special case of this property, namely when one of the two sets is a vertex cover, and call the corresponding graphs simply balanceable. We prove a number of results on balanceable and simply balanceable regular graphs. First, we characterize simply balanceable regular graphs via a condition involving the independence number of the graph. Second, we address a question of Dailly, Hansberg, and Ventura from 2021 and show that every cubic graph is balanceable. Third, using Brooks' theorem, we show that every $4$-regular graph with order divisible by $4$ is balanceable. Finally, we show that it is NP-complete to determine if a $9$-regular graph is simply balanceable.
Machine learning classification methods usually assume that all possible classes are sufficiently present within the training set. Due to their inherent rarity, extreme events are always under-represented, and classifiers tailored for predicting extremes need to be carefully designed to handle this under-representation. In this paper, we address the question of how to assess and compare classifiers with respect to their capacity to capture extreme occurrences. This is also related to the topic of scoring rules used in the forecasting literature. In this context, we propose and study a risk function adapted to extremal classifiers. The inferential properties of our empirical risk estimator are derived under the framework of multivariate regular variation and hidden regular variation. A simulation study compares different classifiers and indicates their performance with respect to our risk function. To conclude, we apply our framework to the analysis of extreme river discharges in the Danube river basin. The application compares different predictive algorithms and tests their capacity to forecast river discharges from other river stations.
The central path problem is a variation on the single facility location problem. The aim is to find, in a given connected graph $G$, a path $P$ minimizing its eccentricity, which is the maximal distance from $P$ to any vertex of the graph $G$. The path eccentricity of $G$ is the minimal eccentricity achievable over all paths in $G$. In this article we consider the path eccentricity of the class of $k$-AT-free graphs. They are graphs in which any set of three vertices contains a pair for which every path between them uses at least one vertex of the closed neighborhood at distance $k$ of the third. We prove that they have path eccentricity bounded by $k$. Moreover, we answer a question of G\'omez and Guti\'errez asking if there is a relation between path eccentricity and the consecutive ones property. The latter is the property of a binary matrix to admit a permutation of the rows placing the 1's consecutively in each column. It was already known that graphs whose adjacency matrices have the consecutive ones property have path eccentricity at most 1, and that the same remains true when the augmented adjacency matrix (with ones on the diagonal) has the consecutive ones property. We generalize these results as follows. We study graphs whose adjacency matrices can be made to satisfy the consecutive ones property after changing some values on the diagonal, and show that those graphs have path eccentricity at most 2, by showing that they are 2-AT-free.
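The notion of path eccentricity can be checked directly on small examples; a hypothetical brute-force sketch (exponential in the number of vertices, for tiny connected graphs only):

```python
from collections import deque
from itertools import permutations

def path_eccentricity(adj):
    # Hypothetical brute force: enumerate all simple paths of a small
    # connected graph (adjacency dict) and return the best eccentricity.
    vertices = list(adj)

    def eccentricity(path):
        # Multi-source BFS from every vertex of the path.
        dist = {v: 0 for v in path}
        queue = deque(path)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return max(dist[v] for v in vertices)

    best = len(vertices)  # trivial upper bound
    for r in range(1, len(vertices) + 1):
        for p in permutations(vertices, r):
            if all(p[i + 1] in adj[p[i]] for i in range(r - 1)):
                best = min(best, eccentricity(p))
    return best

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}  # K_{1,3}
# No Hamiltonian path exists, but the path (1, 0, 2) misses only vertex 3.
print(path_eccentricity(star))  # 1
```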
This paper concerns an expansion of first-order Belnap-Dunn logic whose connectives and quantifiers are all familiar from classical logic. The language and logical consequence relation of the logic are defined, a proof system for the defined logic is presented, and the soundness and completeness of the presented proof system are established. The close relationship between the logical consequence relations of the defined logic and the version of classical logic with the same language is illustrated by the minor differences between the presented proof system and a sound and complete proof system for the version of classical logic with the same language. Moreover, fifteen classical laws of logical equivalence are given by which the logical equivalence relation of the defined logic distinguishes itself from the logical equivalence relations of many logics that are closely related at first glance.