For every finitary monad $T$ on sets and every endofunctor $F$ on the category of $T$-algebras we introduce the concept of an ffg-Elgot algebra for $F$, that is, an algebra admitting coherent solutions for finite systems of recursive equations with effects represented by the monad $T$. The goal is to study the existence and construction of free ffg-Elgot algebras. To this end, we investigate the locally ffg fixed point $\varphi F$, i.e. the colimit of all $F$-coalgebras with free finitely generated carrier, which is shown to be the initial ffg-Elgot algebra. This is the technical foundation for our main result: the category of ffg-Elgot algebras is monadic over the category of $T$-algebras.
Existing relay-assisted terahertz (THz) wireless systems are limited to dual-hop transmission with pointing errors and short-term fading, without considering the shadowing effect. This paper analyzes the performance of a multihop-assisted backhaul link mixed with an access link under shadowed fading with antenna misalignment errors. We derive statistical results for the signal-to-noise ratio (SNR) of the multihop link by considering independent but not identically distributed (i.n.i.d.) $\alpha$-$\mu$ fading channels with pointing errors, employing channel-assisted (CA) and fixed-gain (FG) amplify-and-forward (AF) relaying for each hop. We analyze the outage probability, average bit error rate (BER), and ergodic capacity of the mixed system considering the generalized-$K$ shadowed fading model with AF and decode-and-forward (DF) protocols employed for the access link. We derive exact expressions of the performance metrics for the CA-multihop system with DF relaying for the last hop, and an upper bound on the performance of the FG-multihop system using FG and DF relaying at the last relay. We also develop an asymptotic analysis in the high-SNR regime to derive the diversity order of the system, and use computer simulations to provide design and deployment guidelines for multiple relays in the backhaul link to extend the communication range of THz wireless transmissions.
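As a rough illustration of how outage probability is typically estimated for such relayed links, the following Monte Carlo sketch uses the standard channel-assisted AF end-to-end SNR $\gamma_1\gamma_2/(\gamma_1+\gamma_2+1)$ under simple Rayleigh fading; the paper's $\alpha$-$\mu$ and generalized-$K$ models, pointing errors, and full multihop chain are more general, and all parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
avg_snr = 10.0      # hypothetical average SNR per hop (linear scale)
gamma_th = 1.0      # hypothetical outage threshold

# Rayleigh fading gives exponentially distributed instantaneous SNR per hop.
g1 = rng.exponential(avg_snr, N)
g2 = rng.exponential(avg_snr, N)

# Channel-assisted (variable-gain) AF end-to-end SNR, standard harmonic form.
g_e2e = g1 * g2 / (g1 + g2 + 1.0)

# Outage probability: fraction of realizations below the threshold.
outage = np.mean(g_e2e < gamma_th)
```

Since $\gamma_{e2e} < \min(\gamma_1, \gamma_2)$, the dual-hop outage always exceeds the single-hop outage, which the simulation reproduces.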
We consider phase-field models with and without lateral flow for the numerical simulation of lateral phase separation and coarsening in lipid membranes. For the numerical solution of these models, we apply an unfitted finite element method that is flexible in handling complex and possibly evolving shapes in the absence of an explicit surface parametrization. Through several numerical tests, we investigate the effect of the presence of lateral flow on the evolution of phases. In particular, we focus on understanding how variable line tension, viscosity, membrane composition, and surface shape affect the pattern formation.

Keywords: Lateral phase separation, surface Cahn-Hilliard equation, lateral flow, surface Navier-Stokes-Cahn-Hilliard system, TraceFEM
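For intuition about the phase-separation dynamics (though not the unfitted surface finite element method used in the paper), here is a minimal 1D periodic Cahn-Hilliard time step with an explicit finite-difference scheme; the grid size, interface parameter, and time step are illustrative choices only.

```python
import numpy as np

N, L = 32, 1.0
dx = L / N
eps = 0.1        # interface-width parameter (illustrative)
dt = 5e-6        # explicit scheme needs dt = O(dx^4 / eps^2) for stability

rng = np.random.default_rng(1)
phi = 0.01 * rng.standard_normal(N)   # small perturbation of the mixed state
mass0 = phi.sum()                      # total composition, conserved quantity

def lap(f):
    # Periodic second-difference Laplacian.
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

for _ in range(2000):
    mu = phi**3 - phi - eps**2 * lap(phi)  # chemical potential
    phi = phi + dt * lap(mu)               # conservative Cahn-Hilliard update
```

The update is in divergence form, so the total composition is conserved to machine precision, mirroring the mass conservation of the surface models.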
Persistence modules have a natural home in the setting of stratified spaces and constructible cosheaves. In this article, we first give explicit constructible cosheaves for common data-motivated persistence modules, namely, for modules that arise from zig-zag filtrations (including monotone filtrations), and for augmented persistence modules (which encode the data of instantaneous events). We then identify an equivalence of categories between a particular notion of zig-zag modules and the combinatorial entrance path category on stratified $\mathbb{R}$. Finally, we compute the algebraic $K$-theory of generalized zig-zag modules and describe connections to both Euler curves and $K_0$ of the monoid of persistence diagrams as described by Bubenik and Elchesen.
The error threshold of a one-parameter family of quantum channels is defined as the largest noise level such that the quantum capacity of the channel remains positive. This in turn guarantees the existence of a quantum error correction code for noise modeled by that channel. Discretizing the single-qubit errors leads to the important family of Pauli quantum channels; curiously, multipartite entangled states can increase the threshold of these channels beyond the so-called hashing bound, an effect termed superadditivity of coherent information. In this work, we divide the simplex of Pauli channels into one-parameter families and compute numerical lower bounds on their error thresholds. We find substantial increases of error thresholds relative to the hashing bound for large regions in the Pauli simplex corresponding to biased noise, which is a realistic noise model in promising quantum computing architectures. The error thresholds are computed on the family of graph states, a special type of stabilizer state. In order to determine the coherent information of a graph state, we devise an algorithm that exploits the symmetries of the underlying graph, resulting in a substantial computational speed-up. This algorithm uses tools from computational group theory and allows us to consider symmetric graph states on a large number of vertices. Our algorithm works particularly well for repetition codes and concatenated repetition codes (or cat codes), for which our results provide the first comprehensive study of superadditivity for arbitrary Pauli channels. In addition, we identify a novel family of quantum codes based on tree graphs. The error thresholds of these tree graph states outperform repetition and cat codes in large regions of the Pauli simplex, and hence form a new code family with desirable error correction properties.
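The hashing bound mentioned above is the single-copy coherent information $1 - H(\mathbf{p})$ of a Pauli channel with error probabilities $\mathbf{p} = (p_I, p_X, p_Y, p_Z)$. The sketch below computes it and bisects for the hashing threshold of the depolarizing family; superadditivity via multipartite entangled states is precisely what pushes thresholds beyond this value.

```python
import numpy as np

def shannon(probs):
    """Shannon entropy in bits, ignoring zero entries."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def hashing_rate(p_x, p_y, p_z):
    """Single-copy coherent information of a Pauli channel: 1 - H(p)."""
    p_i = 1.0 - p_x - p_y - p_z
    return 1.0 - shannon([p_i, p_x, p_y, p_z])

def depolarizing_threshold():
    """Bisect for the noise level where the hashing rate hits zero."""
    lo, hi = 0.0, 0.25
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if hashing_rate(mid / 3, mid / 3, mid / 3) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For depolarizing noise split evenly over $X$, $Y$, $Z$, the bisection recovers the well-known hashing threshold of roughly $18.9\%$.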
Regular expressions with capture variables, also known as regex-formulas, extract relations of spans (intervals identified by their start and end indices) from text. In turn, the class of regular document spanners is the closure of the regex-formulas under the Relational Algebra. We investigate the computational complexity of querying text by aggregate functions, such as sum, average, and quantile, on top of regular document spanners. To this end, we formally define aggregate functions over regular document spanners and analyze the computational complexity of exact and approximate computation. More precisely, we show that in a restricted case, all studied aggregate functions can be computed in polynomial time. In general, however, even though exact computation is intractable, some aggregates can still be approximated with fully polynomial-time randomized approximation schemes (FPRAS).
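As a toy instance of span aggregation, the snippet below uses a plain Python regex in place of a regex-formula: it extracts digit-run spans from a document and computes sum, average, and median aggregates over the numbers those spans cover. Real document spanners, and the complexity results above, are far more general.

```python
import re
import statistics

doc = "ids: 17, 4, 23, 8"

# A toy spanner: one capture variable matching maximal digit runs,
# represented by (start, end) index pairs into the document.
spans = [(m.start(), m.end()) for m in re.finditer(r"\d+", doc)]
values = [int(doc[s:e]) for (s, e) in spans]

total = sum(values)             # sum aggregate
avg = statistics.mean(values)   # average aggregate
med = statistics.median(values) # a quantile aggregate (the median)
```

On this document the spanner extracts four spans with values 17, 4, 23, 8, giving sum 52, average 13, and median 12.5.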
The framework of document spanners abstracts the task of information extraction from text as a function that maps every document (a string) into a relation over the document's spans (intervals identified by their start and end indices). For instance, the regular spanners are the closure under the Relational Algebra (RA) of the regular expressions with capture variables, and the expressive power of the regular spanners is precisely captured by the class of VSet-automata, a restricted class of transducers that mark the endpoints of selected spans. In this work, we embark on the investigation of document spanners that can annotate extractions with auxiliary information such as confidence, support, and confidentiality measures. To this end, we adopt the abstraction of provenance semirings by Green et al., where tuples of a relation are annotated with the elements of a commutative semiring, and where the annotation propagates through the (positive) RA operators via the semiring operators. Hence, the proposed spanner extension, referred to as an annotator, maps every string into an annotated relation over the spans. As a specific instantiation, we explore weighted VSet-automata that, similarly to weighted automata and transducers, attach semiring elements to transitions. We investigate key aspects of expressiveness, such as the closure under the positive RA, and key aspects of computational complexity, such as the enumeration of annotated answers and their ranked enumeration in the case of numeric semirings. For a number of these problems, fundamental properties of the underlying semiring, such as positivity, are crucial for establishing tractability.
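To see how semiring annotations propagate through the positive RA operators, here is a toy example over the counting semiring $(\mathbb{N}, +, \times, 0, 1)$: join multiplies annotations and union adds them. An annotator additionally produces these annotated relations from a document, which is omitted in this sketch.

```python
from itertools import product

# Annotated unary relations: each tuple carries a counting-semiring element.
R = {("a",): 2, ("b",): 3}
S = {("a",): 5, ("c",): 1}

def annotated_join(r, s):
    """Natural join: matching tuples combine via semiring multiplication."""
    out = {}
    for (t1, n1), (t2, n2) in product(r.items(), s.items()):
        if t1 == t2:
            out[t1] = out.get(t1, 0) + n1 * n2
    return out

def annotated_union(r, s):
    """Union: annotations of the same tuple combine via semiring addition."""
    out = dict(r)
    for t, n in s.items():
        out[t] = out.get(t, 0) + n
    return out
```

Here the join yields `("a",)` with annotation $2 \times 5 = 10$, while the union yields annotations $7$, $3$, and $1$; swapping in another commutative semiring (e.g. max-plus for confidence) changes only the two operations.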
We consider the problem of deterministically enumerating all minimum $k$-cut-sets in a given hypergraph for any fixed $k$. The input here is a hypergraph $G = (V, E)$ with non-negative hyperedge costs. A subset $F$ of hyperedges is a $k$-cut-set if the number of connected components in $G - F$ is at least $k$, and it is a minimum $k$-cut-set if it has the least cost among all $k$-cut-sets. For fixed $k$, we refer to the problem of finding a minimum $k$-cut-set as Hypergraph-$k$-Cut and the problem of enumerating all minimum $k$-cut-sets as Enum-Hypergraph-$k$-Cut. The special cases of Hypergraph-$k$-Cut and Enum-Hypergraph-$k$-Cut restricted to graph inputs are well-known to be solvable in (randomized as well as deterministic) polynomial time. In contrast, it is only recently that polynomial-time algorithms for Hypergraph-$k$-Cut were developed. The randomized polynomial-time algorithm for Hypergraph-$k$-Cut that was designed in 2018 (Chandrasekaran, Xu, and Yu, SODA 2018) showed that the number of minimum $k$-cut-sets in a hypergraph is $O(n^{2k-2})$, where $n$ is the number of vertices in the input hypergraph, and that they can all be enumerated in randomized polynomial time, thus resolving Enum-Hypergraph-$k$-Cut in randomized polynomial time. A deterministic polynomial-time algorithm for Hypergraph-$k$-Cut was subsequently designed in 2020 (Chandrasekaran and Chekuri, FOCS 2020), but it is not guaranteed to enumerate all minimum $k$-cut-sets. In this work, we give the first deterministic polynomial-time algorithm to solve Enum-Hypergraph-$k$-Cut (this is non-trivial even for $k = 2$). Our algorithms are based on new structural results that allow for efficient recovery of all minimum $k$-cut-sets by solving minimum $(S,T)$-terminal cut problems. Our techniques give new structural insights even for enumerating all minimum cut-sets (i.e., minimum 2-cut-sets) in a given hypergraph.
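For concreteness, minimum $k$-cut-sets in a small hypergraph can be enumerated by brute force over hyperedge subsets. This is exponential-time, unlike the polynomial-time algorithms above, but it makes the definitions explicit; the instance below is illustrative.

```python
from itertools import combinations

def components(vertices, hyperedges):
    """Count connected components via union-find with path halving."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in hyperedges:
        e = list(e)
        for u in e[1:]:
            parent[find(u)] = find(e[0])
    return len({find(v) for v in vertices})

def min_k_cut_sets(vertices, hyperedges, costs, k=2):
    """Return the minimum k-cut-set cost and ALL cut-sets attaining it."""
    best, best_sets = float("inf"), []
    for r in range(len(hyperedges) + 1):
        for F in combinations(range(len(hyperedges)), r):
            rest = [hyperedges[i] for i in range(len(hyperedges)) if i not in F]
            if components(vertices, rest) >= k:
                c = sum(costs[i] for i in F)
                if c < best:
                    best, best_sets = c, [set(F)]
                elif c == best:
                    best_sets.append(set(F))
    return best, best_sets
```

On the hypergraph with vertices $\{1,2,3,4\}$, hyperedges $\{1,2\}, \{2,3\}, \{3,4\}, \{1,2,3\}$ and costs $1,1,1,2$, the unique minimum 2-cut-set is $\{\{3,4\}\}$ with cost 1, since removing it isolates vertex 4.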
A \textit{$k$-total coloring} of a graph $G$ is an assignment of $k$ colors to its elements (vertices and edges) so that adjacent or incident elements have different colors. The total chromatic number is the smallest integer $k$ for which the graph $G$ has a $k$-total coloring. Clearly, this number is at least $\Delta(G)+1$, where $\Delta(G)$ is the maximum degree of $G$. When the lower bound is reached, the graph is said to be Type~1. The upper bound of $\Delta(G)+2$ is a central problem that has been open for fifty years; it is verified for graphs with maximum degree 4 but not for regular graphs. Most classified direct products of graphs are Type~1. The particular cases of the direct product of cycle graphs $C_m \times C_n$, for $m = 3p, 5\ell$ and $8\ell$ with $p \geq 2$ and $\ell \geq 1$, and arbitrary $n \geq 3$, were previously known to be Type~1 and motivated the conjecture that, except for $C_4 \times C_4$, all direct products of cycle graphs $C_m \times C_n$ with $m,n \geq 3$ are Type~1. We give a general pattern proving that all $C_m \times C_n$ are Type~1, except for $C_4 \times C_4$. Additionally, we investigate sufficient conditions to ensure that the direct product reaches the lower bound for the total chromatic number.
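The definition can be checked mechanically. Below, a small verifier confirms a 3-total-coloring of $K_3$, which has $\Delta = 2$ and is therefore Type~1; the graph and coloring are illustrative, not taken from the paper's constructions for $C_m \times C_n$.

```python
def is_total_coloring(vertex_color, edge_color):
    """Check the three constraints of a total coloring."""
    edges = list(edge_color)
    # Adjacent vertices must get different colors.
    for (u, v) in edges:
        if vertex_color[u] == vertex_color[v]:
            return False
    # Each edge must differ from both of its endpoints.
    for (u, v) in edges:
        if edge_color[(u, v)] in (vertex_color[u], vertex_color[v]):
            return False
    # Edges sharing an endpoint must get different colors.
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if set(edges[i]) & set(edges[j]) and \
               edge_color[edges[i]] == edge_color[edges[j]]:
                return False
    return True

# A 3-total-coloring of K3: Delta + 1 = 3 colors suffice, so K3 is Type 1.
vc = {"a": 1, "b": 2, "c": 3}
ec = {("a", "b"): 3, ("b", "c"): 1, ("c", "a"): 2}
```

Changing any single color in `ec` to match an incident element makes the check fail, which is how the lower bound $\Delta(G)+1$ becomes tight only for carefully arranged patterns.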
Process tomography, the experimental characterization of physical processes, is a central task in science and engineering. Here we investigate the axiomatic requirements that guarantee the in-principle feasibility of process tomography in general physical theories. Specifically, we explore the requirement that process tomography should be achievable with a finite number of auxiliary systems and with a finite number of input states. We show that this requirement is satisfied in every theory equipped with universal extensions, that is, correlated states from which all other correlations can be generated locally with non-zero probability. We show that universal extensions are guaranteed to exist in two cases: (1) theories permitting conclusive state teleportation, and (2) theories satisfying three properties of Causality, Pure Product States, and Purification. In case (2), the existence of universal extensions follows from a symmetry property of Purification, whereby all pure bipartite states with the same marginal on one system are locally interconvertible. Crucially, our results hold even in theories that do not satisfy Local Tomography, the property that the state of any composite system can be identified from the correlations of local measurements. Summarizing, the existence of universal extensions, without any additional requirement of Local Tomography, is a sufficient guarantee for the characterizability of physical processes using a finite number of auxiliary systems.
Social media has developed rapidly owing to its ability to spread new information, which also allows rumors to circulate widely. Meanwhile, detecting rumors among such massive amounts of information in social media is becoming an arduous challenge. Therefore, deep learning methods such as the Recursive Neural Network (RvNN) have been applied to discover rumors through the way they spread. However, these deep learning methods only take into account the patterns of deep propagation but ignore the structures of wide dispersion in rumor detection. Actually, propagation and dispersion are two crucial characteristics of rumors. In this paper, we propose a novel bi-directional graph model, named Bi-Directional Graph Convolutional Networks (Bi-GCN), to explore both characteristics by operating on both top-down and bottom-up propagation of rumors. It leverages a GCN with a top-down directed graph of rumor spreading to learn the patterns of rumor propagation, and a GCN with an opposite directed graph of rumor diffusion to capture the structures of rumor dispersion. Moreover, the information from the source post is incorporated into each GCN layer to enhance the influence of the roots of rumors. Encouraging empirical results on several benchmarks confirm the superiority of the proposed method over state-of-the-art approaches.
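A minimal numpy sketch of the bi-directional idea: one GCN layer is applied to the top-down propagation graph and one to its transpose (bottom-up), and the two node representations are concatenated. The root-feature enhancement and training details described above are omitted, and all sizes and weights are illustrative.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

rng = np.random.default_rng(0)
# Toy propagation tree: source post 0 -> replies 1, 2; reply 1 -> reply 3.
A_td = np.zeros((4, 4))
for parent, child in [(0, 1), (0, 2), (1, 3)]:
    A_td[parent, child] = 1.0

X = rng.normal(size=(4, 8))        # node features (e.g. post text embeddings)
W_td = rng.normal(size=(8, 5))     # randomly initialized layer weights
W_bu = rng.normal(size=(8, 5))

H_td = gcn_layer(A_td, X, W_td)    # top-down graph: rumor propagation
H_bu = gcn_layer(A_td.T, X, W_bu)  # bottom-up graph: rumor dispersion
H = np.concatenate([H_td, H_bu], axis=1)  # per-node bi-directional features
```

Pooling `H` over nodes would then feed a classifier; using the transpose for the second GCN is what lets the same tree expose both the deep-propagation and wide-dispersion structure.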