For a skew polynomial ring $R=A[X;\theta,\delta]$ where $A$ is a commutative Frobenius ring, $\theta$ an endomorphism of $A$ and $\delta$ a $\theta$-derivation of $A$, we consider cyclic left module codes $\mathcal{C}=Rg/Rf\subset R/Rf$ where $g$ is a left and right divisor of $f$ in $R$. In this paper we derive a parity-check matrix when $A$ is a finite commutative Frobenius ring, using only the framework of skew polynomial rings. We consider rings $A=B[a_1,\ldots,a_s]$ which are free $B$-algebras where the restrictions of $\delta$ and $\theta$ to $B$ are polynomial maps. If a Gr\"obner basis can be computed over $B$, then we show that all Euclidean and Hermitian dual-containing codes $\mathcal{C}=Rg/Rf\subset R/Rf$ can be computed using a Gr\"obner basis. We also give an algorithm to test whether the dual code is again a cyclic left module code. We illustrate our approach for rings of order $4$ with non-trivial endomorphism and for the Galois ring of characteristic $4$.
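The commutation rule that defines multiplication in $R=A[X;\theta,\delta]$, namely $Xa = \theta(a)X + \delta(a)$, can be made concrete in a few lines. The sketch below is our own illustration with hypothetical names, not the paper's algorithm: it implements skew multiplication for user-supplied $\theta$ and $\delta$, using $A=\mathbb{C}$ with $\theta$ complex conjugation and $\delta=0$ as a toy stand-in for a ring with a non-trivial endomorphism.

```python
# Toy model of a skew polynomial ring A[X; theta, delta] with the
# commutation rule  X * a = theta(a) * X + delta(a).
# Polynomials are lists of coefficients: q[i] is the coefficient of X^i.

def x_times(q, theta, delta):
    """Left-multiply the skew polynomial q by X."""
    out = [0] * (len(q) + 1)
    for j, b in enumerate(q):
        out[j + 1] += theta(b)   # theta(b) * X^(j+1)
        out[j] += delta(b)       # delta(b) * X^j
    return out

def skew_mul(p, q, theta, delta):
    """Product p*q in A[X; theta, delta], for commutative A."""
    result = [0] * (len(p) + len(q) - 1)
    shifted = list(q)                      # X^i * q, starting with i = 0
    for a in p:
        for j, c in enumerate(shifted):
            if j < len(result):
                result[j] += a * c         # left-multiply X^i * q by a
        shifted = x_times(shifted, theta, delta)
    return result

theta = lambda z: z.conjugate()  # a ring endomorphism of C
delta = lambda z: 0              # the trivial theta-derivation
```

As a sanity check, $X \cdot i = \theta(i)X = -iX$, and the product is associative.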
For a positive integer $k$, a proper $k$-coloring of a graph $G$ is a mapping $f: V(G) \rightarrow \{1,2, \ldots, k\}$ such that $f(u) \neq f(v)$ for each edge $uv$ of $G$. The smallest integer $k$ for which there is a proper $k$-coloring of $G$ is called the chromatic number of $G$, denoted by $\chi(G)$. A locally identifying coloring (for short, lid-coloring) of a graph $G$ is a proper $k$-coloring of $G$ such that any two adjacent vertices with distinct closed neighborhoods have distinct sets of colors in their closed neighborhoods. The smallest integer $k$ such that $G$ has a lid-coloring with $k$ colors is called the locally identifying chromatic number (for short, lid-chromatic number) of $G$, denoted by $\chi_{lid}(G)$. This paper studies the lid-coloring of the Cartesian product and tensor product of two graphs. We prove that if $G$ and $H$ are two connected graphs having at least two vertices each, then (a) $\chi_{lid}(G \square H) \leq \chi(G) \chi(H)-1$ and (b) $\chi_{lid}(G \times H) \leq \chi(G) \chi(H)$. Here $G \square H$ and $G \times H$ denote the Cartesian and tensor products of $G$ and $H$, respectively. We determine the lid-chromatic numbers of $C_m \square P_n$, $C_m \square C_n$, $P_m \times P_n$, $C_m \times P_n$ and $C_m \times C_n$, where $C_m$ and $P_n$ denote a cycle and a path on $m$ and $n$ vertices, respectively.
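The two definitions above are easy to check mechanically. A minimal sketch (our own helper names, assuming graphs are given as adjacency dicts mapping each vertex to its set of neighbors):

```python
# Check whether a proper coloring is also locally identifying (lid).

def closed_neighborhood(G, v):
    return G[v] | {v}

def is_proper(G, f):
    return all(f[u] != f[v] for u in G for v in G[u])

def is_lid_coloring(G, f):
    """f must be proper, and for every edge uv with N[u] != N[v],
    the color sets f(N[u]) and f(N[v]) must differ."""
    if not is_proper(G, f):
        return False
    for u in G:
        for v in G[u]:
            Nu, Nv = closed_neighborhood(G, u), closed_neighborhood(G, v)
            if Nu != Nv and {f[w] for w in Nu} == {f[w] for w in Nv}:
                return False
    return True

# The path P4 as a toy example.
P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

For $P_4$, a proper 2-coloring is never locally identifying: the two middle vertices have distinct closed neighborhoods but receive the same color set, whereas coloring all four vertices differently passes the check.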
Consider the problem of estimating a random variable $X$ from noisy observations $Y = X+ Z$, where $Z$ is standard normal, under the $L^1$ fidelity criterion. It is well known that the optimal Bayesian estimator in this setting is the conditional median. This work shows that the only prior distribution on $X$ that induces linearity in the conditional median is Gaussian. Along the way, several other results are presented. In particular, it is demonstrated that if the conditional distribution $P_{X|Y=y}$ is symmetric for all $y$, then $X$ must follow a Gaussian distribution. Additionally, we consider other $L^p$ losses and observe the following phenomenon: for $p \in [1,2]$, Gaussian is the only prior distribution that induces a linear optimal Bayesian estimator, and for $p \in (2,\infty)$, infinitely many prior distributions on $X$ can induce linearity. Finally, extensions are provided to encompass noise models leading to conditional distributions from certain exponential families.
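The linearity phenomenon for the conditional median can be observed numerically. The sketch below is our own illustration (not taken from the paper): it computes the posterior median on a grid. For the standard Gaussian prior, $P_{X|Y=y}$ is $N(y/2, 1/2)$, so the median is $y/2$; a uniform prior yields a visibly nonlinear median.

```python
# Grid-based conditional median for Y = X + Z with Z ~ N(0,1).
import numpy as np

def conditional_median(prior_pdf, y, xs):
    post = prior_pdf(xs) * np.exp(-0.5 * (y - xs) ** 2)  # unnormalized P_{X|Y=y}
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return np.interp(0.5, cdf, xs)       # invert the (discrete) CDF at 1/2

xs = np.linspace(-10.0, 10.0, 20001)
gauss = lambda x: np.exp(-0.5 * x ** 2)            # X ~ N(0,1), up to a constant
unif = lambda x: (np.abs(x) <= 1).astype(float)    # X ~ Unif[-1,1]

ys = np.array([-2.0, -1.0, 0.5, 1.0, 3.0])
med_gauss = np.array([conditional_median(gauss, y, xs) for y in ys])
med_unif = np.array([conditional_median(unif, y, xs) for y in ys])
```

`med_gauss` tracks the line $y/2$, while `med_unif` saturates inside the prior's support $[-1,1]$ as $y$ grows, so it cannot be linear.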
We present a comprehensive analysis of the coupled scheme introduced in [Springer Proceedings in Mathematics \& Statistics, vol 237. Springer, Cham 2018 \cite{S2018}] for linear and Hamilton-Jacobi equations. This method merges two distinct schemes, each tailored to handle specific solution characteristics. It offers a versatile framework for coupling various schemes, enabling the integration of accurate methods for smooth solutions and the treatment of discontinuities and gradient jumps. In \cite{S2018}, the emphasis was on coupling an anti-dissipative scheme designed for discontinuous solutions with a semi-Lagrangian scheme developed for smooth solutions. In this paper, we rigorously establish the essential properties of the resulting coupled scheme, especially in the linear case. To illustrate the effectiveness of this coupled approach, we present a series of one-dimensional examples.
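To convey the flavor of such a coupling, the sketch below blends a first-order upwind scheme (robust at discontinuities) with the second-order Lax-Wendroff scheme (accurate on smooth data) for the linear advection equation $u_t + a u_x = 0$, switching on a crude local smoothness indicator. This is our own generic illustration, not the scheme of \cite{S2018}.

```python
# Coupled upwind / Lax-Wendroff step for linear advection on a periodic grid.
import numpy as np

def coupled_step(u, nu):
    """One time step; nu = a*dt/dx is the CFL number (0 < nu <= 1)."""
    um, up = np.roll(u, 1), np.roll(u, -1)
    upwind = u - nu * (u - um)
    lw = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2 * u + um)
    # crude smoothness indicator: fall back to upwind where the jump is large
    jump = np.abs(up - um)
    rough = jump > 2.0 * (np.mean(jump) + 1e-12)
    return np.where(rough, upwind, lw)

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)  # discontinuous initial data
```

At $\nu = 1$ both component schemes reduce to an exact shift, so advecting the profile for one full period returns the initial data, a standard consistency check.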
In the Steiner point removal (SPR) problem, we are given a (weighted) graph $G$ and a subset $T$ of its vertices called terminals, and the goal is to compute a (weighted) graph $H$ on $T$ that is a minor of $G$, such that the distance between every pair of terminals is preserved to within some small multiplicative factor, called the stretch of $H$. It has been shown that on general graphs we can achieve stretch $O(\log |T|)$ [Filtser, 2018]. On the other hand, the best-known stretch lower bound is $8$ [Chan-Xia-Konjevod-Richa, 2006], which holds even for trees. In this work, we show an improved lower bound of $\tilde\Omega\big(\sqrt{\log |T|}\big)$.
Given a large graph $G$ with a subset $|T|=k$ of its vertices called terminals, a quality-$q$ flow sparsifier is a small graph $G'$ that contains $T$ and preserves all multicommodity flows that can be routed between terminals in $T$, to within factor $q$. The problem of constructing flow sparsifiers with good (small) quality and (small) size has been a central problem in graph compression for decades. A natural approach of constructing $O(1)$-quality flow sparsifiers, which was adopted in most previous constructions, is contraction. Andoni, Krauthgamer, and Gupta constructed a sketch of size $f(k,\varepsilon)$ that stores all feasible multicommodity flows up to a factor of $(1+\varepsilon)$, raised the question of constructing quality-$(1+\varepsilon)$ flow sparsifiers whose size only depends on $k,\varepsilon$ (but not the number of vertices in the input graph $G$), and proposed a contraction-based framework towards it using their sketch result. In this paper, we settle their question for contraction-based flow sparsifiers, by showing that quality-$(1+\varepsilon)$ contraction-based flow sparsifiers with size $f(\varepsilon)$ exist for all $5$-terminal graphs, but not for all $6$-terminal graphs. Our hardness result on $6$-terminal graphs improves upon a recent hardness result by Krauthgamer and Mosenzon on exact (quality-$1$) flow sparsifiers, for contraction-based constructions. Our construction and proof utilize the notion of tight spans in metric geometry, which we believe is a powerful tool for future work.
We consider online reinforcement learning (RL) in episodic Markov decision processes (MDPs) under the linear $q^\pi$-realizability assumption, where it is assumed that the action-values of all policies can be expressed as linear functions of state-action features. This class is known to be more general than linear MDPs, where the transition kernel and the reward function are assumed to be linear functions of the feature vectors. As our first contribution, we show that the difference between the two classes is the presence of states in linearly $q^\pi$-realizable MDPs where, for any policy, all the actions have approximately equal values; skipping over these states by following an arbitrary fixed policy in those states transforms the problem into a linear MDP. Based on this observation, we derive a novel (computationally inefficient) learning algorithm for linearly $q^\pi$-realizable MDPs that simultaneously learns which states should be skipped over and runs another learning algorithm on the linear MDP hidden in the problem. The method returns an $\epsilon$-optimal policy after $\text{polylog}(H, d)/\epsilon^2$ interactions with the MDP, where $H$ is the time horizon and $d$ is the dimension of the feature vectors, giving the first polynomial-sample-complexity online RL algorithm for this setting. The results are proved for the misspecified case, where the sample complexity is shown to degrade gracefully with the misspecification error.
The top-$k$-sum operator computes the sum of the largest $k$ components of a given vector. The Euclidean projection onto the top-$k$-sum constraint serves as a crucial subroutine in iterative methods to solve composite superquantile optimization problems. In this paper, we introduce a solver that implements two finite-termination algorithms to compute this projection. Both algorithms have complexity $O(n)$ when applied to a sorted $n$-dimensional input vector, where the hidden constant is independent of $k$. This stands in contrast to the existing grid-search-inspired method, which has $O(k(n-k))$ complexity. The improvement is significant when $k$ is linearly dependent on $n$, which is frequently the case in practical superquantile optimization applications. In instances where the input vector is unsorted, an additional cost is incurred to (partially) sort the vector. To reduce this cost, we further derive a rigorous procedure that leverages approximate sorting to compute the projection, which is particularly useful when solving a sequence of similar projection problems. Numerical results show that our methods solve problems of scale $n=10^7$ and $k=10^4$ within $0.05$ seconds, whereas the existing grid-search-based method and the Gurobi QP solver can take minutes to hours.
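The operator itself can be evaluated in expected linear time with a selection routine instead of a full sort; the sketch below (our own code, not the paper's projection solver) uses NumPy's `partition` for this.

```python
# Top-k-sum via linear-time selection rather than an O(n log n) sort.
import numpy as np

def top_k_sum(x, k):
    """Sum of the k largest entries of x, in O(n) expected time."""
    if k >= len(x):
        return float(np.sum(x))
    return float(np.sum(np.partition(x, -k)[-k:]))  # select the k largest

def feasible(x, k, r):
    """Membership test for the top-k-sum constraint {x : top_k_sum(x) <= r}."""
    return top_k_sum(x, k) <= r
```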
We present approximation algorithms for the Fault-tolerant $k$-Supplier with Outliers ($\mathsf{F}k\mathsf{SO}$) problem. This is a common generalization of two known problems -- $k$-Supplier with Outliers, and Fault-tolerant $k$-Supplier -- each of which generalizes the well-known $k$-Supplier problem. In the $k$-Supplier problem the goal is to serve $n$ clients $C$, by opening $k$ facilities from a set of possible facilities $F$; the objective function is the farthest that any client must travel to access an open facility. In $\mathsf{F}k\mathsf{SO}$, each client $v$ has a fault-tolerance $\ell_v$, and now desires $\ell_v$ facilities to serve it; so each client $v$'s contribution to the objective function is now its distance to the $\ell_v^{\text{th}}$ closest open facility. Furthermore, we are allowed to choose $m$ clients that we will serve, and only those clients contribute to the objective function, while the remaining $n-m$ are considered outliers. Our main result is a $\min\{4t-1,2^t+1\}$-approximation for the $\mathsf{F}k\mathsf{SO}$ problem, where $t$ is the number of distinct values of $\ell_v$ that appear in the instance. At $t=1$, i.e. in the case where the $\ell_v$'s are uniformly some $\ell$, this yields a $3$-approximation, improving upon the $11$-approximation given for the uniform case by Inamdar and Varadarajan [2020], who also introduced the problem. Our result for the uniform case matches the tight $3$-approximations that exist for $k$-Supplier, $k$-Supplier with Outliers, and Fault-tolerant $k$-Supplier. Our key technical contribution is an application of the round-or-cut schema to $\mathsf{F}k\mathsf{SO}$. Guided by an LP relaxation, we reduce to a simpler optimization problem, which we can solve to obtain distance bounds for the "round" step, and valid inequalities for the "cut" step.
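The objective described above can be made precise with a small evaluator (our own illustration, with hypothetical names): each served client $v$ pays its distance to the $\ell_v$-th closest open facility, and with $m$ clients to serve, the best choice of outliers makes the objective the $m$-th smallest payment.

```python
# Evaluate the FkSO objective for a fixed set of open facilities.

def fkso_objective(dist, open_facilities, ell, m):
    """dist[v][f]: distance from client v to facility f;
    ell[v]: fault tolerance of client v;
    m: number of clients that must be served (the rest are outliers)."""
    costs = []
    for v, lv in ell.items():
        d = sorted(dist[v][f] for f in open_facilities)
        costs.append(d[lv - 1])   # distance to the lv-th closest open facility
    costs.sort()
    return costs[m - 1]           # optimal outlier choice: drop the priciest
```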
$\textit{De novo}$ genome assembly is one of the most important tasks in computational biology. ELBA is the state-of-the-art distributed-memory parallel algorithm for the overlap detection and layout simplification steps of $\textit{de novo}$ genome assembly, but it has a performance bottleneck in pairwise alignment. In this work, we propose three GPU schedulers for ELBA to accommodate multiple MPI processes and multiple GPUs. The GPU schedulers enable multiple MPI processes to perform computation on GPUs in a round-robin fashion. Both strong and weak scaling experiments show that the three schedulers significantly improve the performance of the baseline, although there is a trade-off between parallelism and GPU scheduler overhead. In the best-performing configuration, the one-to-one scheduler achieves a $\sim$7-8$\times$ speed-up with 25 MPI processes compared with the baseline vanilla ELBA GPU scheduler.
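The round-robin idea can be sketched in a few lines (the names are ours, not ELBA's API): MPI ranks are mapped to the available devices cyclically, so several processes share each GPU.

```python
# Round-robin assignment of MPI ranks to GPUs.

def assign_gpu(rank, num_gpus):
    """Rank r computes on GPU (r mod num_gpus)."""
    return rank % num_gpus

def schedule(num_ranks, num_gpus):
    """Group ranks by the GPU they are assigned to."""
    groups = {g: [] for g in range(num_gpus)}
    for r in range(num_ranks):
        groups[assign_gpu(r, num_gpus)].append(r)
    return groups
```

With 5 ranks and 2 GPUs, ranks 0, 2, 4 share device 0 and ranks 1, 3 share device 1; increasing the number of ranks per GPU raises parallelism but also scheduling overhead, which is the trade-off observed above.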
In this paper we study the problem of maximizing the distance to a given point $C_0$ over a polytope $\mathcal{P}$. Assuming that the polytope is circumscribed by a known ball, we construct an intersection of balls that preserves the vertices of the polytope on the boundary of this ball, and we show that the intersection of balls approximates the polytope arbitrarily well. Then, we use known results on the maximization of distances to a given point over an intersection of balls to create a new polytope which preserves the maximizers of the original problem. Next, a new intersection of balls is obtained in a similar fashion, and we conjecture that, after a finite number of such iterations, we end up with an intersection of balls over which we can maximize the distance to the given point. The distance obtained in this way is shown to be a nontrivial upper bound on the original distance. Tests are performed for maximizing the distance to a random point over the unit hypercube up to dimension $n = 100$. Several detailed 2-d examples are also shown.
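The hypercube test case admits a closed-form answer that such experiments can be checked against: over $[0,1]^n$ the squared distance to $C_0$ separates per coordinate, and since a convex function attains its maximum over a polytope at a vertex, the farthest point takes $x_i = 1$ when $(C_0)_i < 1/2$ and $x_i = 0$ otherwise. A minimal sketch:

```python
# Closed-form maximizer of the distance to a point c over the unit hypercube.
import numpy as np

def farthest_vertex(c):
    """Per coordinate, pick the endpoint of [0,1] farther from c_i."""
    return (c < 0.5).astype(float)

def max_distance_to_point(c):
    return float(np.linalg.norm(farthest_vertex(c) - c))
```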