Let $\mathbb{F}_q$ be a finite field of characteristic $p$. In this paper we prove that the $c$-boomerang uniformity, $c \neq 0$, of every permutation monomial $x^d$ over $\mathbb{F}_q$, where $d > 1$ and $p \nmid d$, is bounded by $d^2$. Further, we utilize this bound to estimate the $c$-boomerang uniformity of a large class of Generalized Triangular Dynamical Systems, a polynomial-based approach to describing cryptographic permutations that includes the well-known Substitution-Permutation Network.
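For orientation, the classical ($c = 1$) boomerang uniformity of a permutation $F$ of $\mathbb{F}_q$ is recalled below; the $c$-variant studied in the paper generalizes it by twisting the table entries with the multiplier $c$, and that exact $c$-twisted definition is not reproduced here.

\[
\mathcal{B}_F(a,b) \;=\; \#\bigl\{\, x \in \mathbb{F}_q \;:\; F^{-1}\bigl(F(x+a)+b\bigr) - F^{-1}\bigl(F(x)+b\bigr) = a \,\bigr\},
\qquad
\beta_F \;=\; \max_{a,\,b \in \mathbb{F}_q^{*}} \mathcal{B}_F(a,b).
\]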
Since the Radon transform (RT) is a line integral, modeling assumptions are made about the Computed Tomography (CT) system, which makes analytical image reconstruction methods, such as Filtered Backprojection (FBP), sensitive to artifacts and noise. On the other hand, a new integral transform, the Scale Space Radon Transform (SSRT), of which the RT is a particular case, has recently been introduced. Thanks to its interesting properties, such as good scale-space behavior, the SSRT has found a number of new applications. In this paper, with the aim of improving the quality of the reconstructed images, we propose to model the X-ray beam with the Scale Space Radon Transform (SSRT), whose assumptions about the physical dimensions of the CT system elements better reflect reality. After presenting the basic properties and the inversion of the SSRT, the FBP algorithm is used to reconstruct the image from the SSRT sinogram, where the RT spectrum used in FBP is replaced by the SSRT spectrum and the Gaussian kernel, both expressed in the frequency domain. PSNR and SSIM are used as quality measures to compare RT- and SSRT-based image reconstruction on the Shepp-Logan head and anthropomorphic abdominal phantoms. The first findings show that the SSRT-based method outperforms the RT-based methods, especially when the number of projections is reduced, making it more appropriate for applications requiring low-dose radiation, such as medical X-ray CT. While SSRT-FBP and RT-FBP have almost the same runtime, the experiments show that SSRT-FBP is more robust to the Poisson-Gaussian noise corrupting CT data.
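As an illustration of this kind of pipeline, the sketch below assumes the SSRT sinogram equals the RT sinogram blurred by a Gaussian of width `sigma` along the detector axis, and combines the usual ramp filter with a regularized Gaussian deconvolution in the frequency domain before a simple backprojection. The function name `fbp_from_ssrt` and the regularization constant are illustrative choices, not the authors' implementation.

```python
import numpy as np

def fbp_from_ssrt(sinogram, thetas, sigma):
    """Hedged sketch: FBP-style reconstruction from an SSRT-like sinogram.

    Assumes the sinogram (shape: detectors x views) is the Radon transform
    convolved with a Gaussian of width `sigma` along the detector axis, so the
    filtering step combines the ramp filter with a Gaussian deconvolution.
    """
    n_det, n_views = sinogram.shape
    freqs = np.fft.fftfreq(n_det)                      # cycles per detector pixel
    ramp = np.abs(freqs)                               # standard ramp filter
    gauss = np.exp(-2 * (np.pi * sigma * freqs) ** 2)  # Fourier transform of the Gaussian kernel
    filt = ramp / np.maximum(gauss, 1e-3)              # regularized deconvolution (illustrative cutoff)

    spectrum = np.fft.fft(sinogram, axis=0) * filt[:, None]
    filtered = np.real(np.fft.ifft(spectrum, axis=0))

    # Plain backprojection onto an n_det x n_det grid.
    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for k, theta in enumerate(thetas):
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += filtered[idx, k]
    return recon * np.pi / (2 * n_views)

# Usage sketch (hypothetical sinogram `ssrt_sino`):
# recon = fbp_from_ssrt(ssrt_sino,
#                       np.linspace(0, np.pi, ssrt_sino.shape[1], endpoint=False),
#                       sigma=1.5)
```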
In this paper, we propose two new classes of binary array codes, termed V-ETBR and V-ESIP codes, by reformulating and generalizing the variant technique that derives the well-known generalized row-diagonal parity~(RDP) codes from shortened independent parity~(IP) codes. The V-ETBR and V-ESIP codes are both based on binary parity-check matrices and are essentially variants of two classes of codes over a special polynomial ring (termed ETBR and ESIP codes in this paper). To explore the conditions under which the variant codes are binary Maximum Distance Separable~(MDS) array codes, which achieve optimal storage efficiency, this paper derives the connections between V-ETBR/V-ESIP codes and ETBR/ESIP codes. These connections are beneficial for constructing various forms of the variant codes. By utilizing them, this paper also explicitly presents constructions of V-ETBR and V-ESIP MDS array codes with any number of parity columns $r$, along with their fast syndrome computations. In terms of construction, the total number of data columns in all proposed MDS array codes grows exponentially with the column size, whereas in alternative codes it grows only linearly. In terms of computation, the proposed syndrome computations make the corresponding encoding/decoding asymptotically require $\lfloor \lg r \rfloor+1$ XOR~(exclusive OR) operations per data bit as the total number of data columns approaches infinity. This is the lowest known asymptotic complexity among MDS codes.
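For context only, here is a small sketch of classic RDP encoding, the starting point that the variant technique above builds on; the layout with $p-1$ data columns, one row-parity column, and one diagonal-parity column follows the standard RDP description, and this is not the paper's V-ETBR/V-ESIP construction.

```python
import numpy as np

def rdp_encode(data, p):
    """Hedged sketch of classic RDP encoding (context, not the paper's codes).

    `data` is a (p-1) x (p-1) bit array of data columns; p must be prime.
    Returns the (p-1) x (p+1) array with a row-parity column (index p-1)
    and a diagonal-parity column (index p) appended.
    """
    assert data.shape == (p - 1, p - 1)
    arr = np.zeros((p - 1, p + 1), dtype=np.uint8)
    arr[:, : p - 1] = data
    # Row parity: XOR of the data blocks in each row.
    arr[:, p - 1] = np.bitwise_xor.reduce(data, axis=1)
    # Diagonal parity: diagonal d collects blocks with (r + c) % p == d over the
    # first p columns (row-parity column included); diagonal p-1 is left unstored.
    for d in range(p - 1):
        acc = 0
        for c in range(p):
            r = (d - c) % p
            if r < p - 1:          # row p-1 does not exist in the array
                acc ^= int(arr[r, c])
        arr[d, p] = acc
    return arr

code = rdp_encode(np.random.randint(0, 2, size=(4, 4), dtype=np.uint8), p=5)
print(code)
```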
For a positive integer $k$, a proper $k$-coloring of a graph $G$ is a mapping $f: V(G) \rightarrow \{1,2, \ldots, k\}$ such that $f(u) \neq f(v)$ for each edge $uv$ of $G$. The smallest integer $k$ for which there is a proper $k$-coloring of $G$ is called the chromatic number of $G$, denoted by $\chi(G)$. A locally identifying coloring (for short, lid-coloring) of a graph $G$ is a proper $k$-coloring of $G$ such that any two adjacent vertices with distinct closed neighborhoods have distinct sets of colors in their closed neighborhoods. The smallest integer $k$ such that $G$ has a lid-coloring with $k$ colors is called the locally identifying chromatic number (for short, lid-chromatic number) of $G$, denoted by $\chi_{lid}(G)$. This paper studies lid-colorings of the Cartesian product and tensor product of two graphs. We prove that if $G$ and $H$ are two connected graphs having at least two vertices each, then (a) $\chi_{lid}(G \square H) \leq \chi(G) \chi(H)-1$ and (b) $\chi_{lid}(G \times H) \leq \chi(G) \chi(H)$. Here $G \square H$ and $G \times H$ denote the Cartesian and tensor products of $G$ and $H$, respectively. We determine the lid-chromatic numbers of $C_m \square P_n$, $C_m \square C_n$, $P_m \times P_n$, $C_m \times P_n$ and $C_m \times C_n$, where $C_m$ and $P_n$ denote a cycle and a path on $m$ and $n$ vertices, respectively.
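The definitions above can be checked mechanically. The sketch below (using networkx, with hypothetical helper names) verifies the lid-condition for a given coloring and brute-forces the lid-chromatic number of a small Cartesian product; it is only a reference check, not the constructions used in the paper's proofs.

```python
import networkx as nx
from itertools import product as iproduct

def is_lid_coloring(G, coloring):
    """Check that `coloring` is proper and locally identifying."""
    if any(coloring[u] == coloring[v] for u, v in G.edges()):
        return False                                   # not a proper coloring
    closed = {v: set(G[v]) | {v} for v in G}           # closed neighborhoods N[v]
    for u, v in G.edges():
        if closed[u] != closed[v]:
            if {coloring[w] for w in closed[u]} == {coloring[w] for w in closed[v]}:
                return False                           # color sets must differ
    return True

def lid_chromatic_number(G, kmax):
    """Brute-force search, feasible only for very small graphs."""
    nodes = list(G)
    for k in range(1, kmax + 1):
        for assignment in iproduct(range(k), repeat=len(nodes)):
            if is_lid_coloring(G, dict(zip(nodes, assignment))):
                return k
    return None

# C_3 \square P_2 (triangular prism); by bound (a), at most chi(C_3)*chi(P_2)-1 = 5 colors suffice.
G = nx.cartesian_product(nx.cycle_graph(3), nx.path_graph(2))
print(lid_chromatic_number(G, kmax=5))
```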
Synthesis is the problem of deciding whether a given labeled transition system (TS) $A$ can be implemented by a net $N$ of type $\tau$. In case of a negative decision, it may be possible to convert $A$ into an implementable TS $B$ by applying various modification techniques, like relabeling edges that previously had the same label, suppressing edges/states/events, etc. It may however be useful to limit the number of such modifications to stay close to the original problem, or to optimize the technique. In this paper, we show that most of the corresponding problems are NP-complete if $\tau$ corresponds to the type of flip-flop nets or to some flip-flop net derivatives.
We introduce Clifford Group Equivariant Neural Networks: a novel approach for constructing $\mathrm{O}(n)$- and $\mathrm{E}(n)$-equivariant models. We identify and study the $\textit{Clifford group}$, a subgroup inside the Clifford algebra whose definition we adjust to achieve several favorable properties. Primarily, the group's action forms an orthogonal automorphism that extends beyond the typical vector space to the entire Clifford algebra while respecting the multivector grading. This leads to several non-equivalent subrepresentations corresponding to the multivector decomposition. Furthermore, we prove that the action respects not just the vector space structure of the Clifford algebra but also its multiplicative structure, i.e., the geometric product. These findings imply that every polynomial in multivectors constitutes an equivariant map with respect to the Clifford group, allowing us to parameterize equivariant neural network layers. An advantage worth mentioning is that we obtain expressive layers that can elegantly generalize to inner-product spaces of any dimension. We demonstrate, notably from a single core implementation, state-of-the-art performance on several distinct tasks, including a three-dimensional $n$-body experiment, a four-dimensional Lorentz-equivariant high-energy physics experiment, and a five-dimensional convex hull experiment.
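To make the compatibility with the geometric product concrete, here is a minimal numerical check in a hand-coded $\mathrm{Cl}(2,0)$: the sandwich action of a unit rotor commutes with the geometric product, which is the kind of multiplicative equivariance described above. This toy check is illustrative only and is not the paper's implementation.

```python
import numpy as np

# Cl(2,0) with basis (1, e1, e2, e12); multivectors are length-4 coefficient arrays.
# MUL[i][j] = (sign, index) of the product of basis elements i and j.
MUL = [
    [(+1, 0), (+1, 1), (+1, 2), (+1, 3)],
    [(+1, 1), (+1, 0), (+1, 3), (+1, 2)],
    [(+1, 2), (-1, 3), (+1, 0), (-1, 1)],
    [(+1, 3), (-1, 2), (+1, 1), (-1, 0)],
]

def gp(a, b):
    """Geometric product of two multivectors."""
    out = np.zeros(4)
    for i in range(4):
        for j in range(4):
            s, k = MUL[i][j]
            out[k] += s * a[i] * b[j]
    return out

def reverse(a):
    """Reversion: flips the sign of the bivector component in Cl(2,0)."""
    return a * np.array([1.0, 1.0, 1.0, -1.0])

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi)
R = np.array([np.cos(theta / 2), 0.0, 0.0, -np.sin(theta / 2)])   # unit rotor

x, y = rng.normal(size=4), rng.normal(size=4)
act = lambda m: gp(gp(R, m), reverse(R))                          # sandwich action

# The action commutes with the geometric product: act(x y) == act(x) act(y).
assert np.allclose(act(gp(x, y)), gp(act(x), act(y)))
print("sandwich action respects the geometric product")
```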
Subset Sum Ratio is the following optimization problem: Given a set of $n$ positive numbers $I$, find disjoint subsets $X,Y \subseteq I$ minimizing the ratio $\max\{\Sigma(X)/\Sigma(Y),\Sigma(Y)/\Sigma(X)\}$, where $\Sigma(Z)$ denotes the sum of all elements of $Z$. Subset Sum Ratio is an optimization variant of the Equal Subset Sum problem. It was introduced by Woeginger and Yu in '92 and is known to admit an FPTAS [Bazgan, Santha, Tuza '98]. The best approximation schemes before this work had running time $O(n^4/\varepsilon)$ [Melissinos, Pagourtzis '18], $\tilde O(n^{2.3}/\varepsilon^{2.6})$ and $\tilde O(n^2/\varepsilon^3)$ [Alonistiotis et al. '22]. In this work, we present an improved approximation scheme for Subset Sum Ratio running in time $O(n / \varepsilon^{0.9386})$. Here we assume that the items are given in sorted order; otherwise we need an additional running time of $O(n \log n)$ for sorting. Our improved running time simultaneously improves the dependence on $n$ to linear and the dependence on $1/\varepsilon$ to sublinear. For comparison, the related Subset Sum problem admits an approximation scheme running in time $O(n/\varepsilon)$ [Gens, Levner '79]. Achieving an approximation scheme with running time $\tilde O(n / \varepsilon^{0.99})$ for Subset Sum would falsify the Strong Exponential Time Hypothesis [Abboud, Bringmann, Hermelin, Shabtay '19] as well as the Min-Plus-Convolution Hypothesis [Bringmann, Nakos '21]. We thus establish that Subset Sum Ratio admits faster approximation schemes than Subset Sum. This comes as a surprise, since at every point in time before this work the best known approximation scheme for Subset Sum Ratio had a worse running time than the best known approximation scheme for Subset Sum.
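To make the objective concrete, here is a brute-force reference solver for small instances; it enumerates all disjoint subset pairs in exponential time and is not the approximation scheme of the paper.

```python
from itertools import product

def subset_sum_ratio_bruteforce(items):
    """Exact reference solver for Subset Sum Ratio on tiny inputs.

    Each item is placed into X (label 1), Y (label 2), or neither (label 0);
    the best ratio max(sum(X), sum(Y)) / min(sum(X), sum(Y)) over nonempty
    disjoint X, Y is returned. Exponential time; for illustration only.
    """
    best = float("inf")
    for labels in product((0, 1, 2), repeat=len(items)):
        sx = sum(v for v, l in zip(items, labels) if l == 1)
        sy = sum(v for v, l in zip(items, labels) if l == 2)
        if sx > 0 and sy > 0:
            best = min(best, max(sx, sy) / min(sx, sy))
    return best

print(subset_sum_ratio_bruteforce([3, 5, 8, 19]))   # X = {8}, Y = {3, 5} gives ratio 1.0
```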
Recently, Graph Transformer (GT) models have been widely used in the task of Molecular Property Prediction (MPP) due to their high reliability in characterizing the latent relationships among graph nodes (i.e., the atoms in a molecule). However, most existing GT-based methods only explore the basic interactions between pairwise atoms, and thus they fail to consider the important interactions among critical motifs (e.g., functional groups consisting of several atoms) of molecules. As motifs in a molecule are significant patterns of great importance for determining molecular properties (e.g., toxicity and solubility), overlooking motif interactions inevitably hinders the effectiveness of MPP. To address this issue, we propose a novel Atom-Motif Contrastive Transformer (AMCT), which not only explores the atom-level interactions but also considers the motif-level interactions. Since the representations of atoms and motifs for a given molecule are actually two different views of the same instance, they are naturally aligned to generate the self-supervisory signals for model training. Meanwhile, the same motif can exist in different molecules, and hence we also employ a contrastive loss to maximize the representation agreement of identical motifs across different molecules. Finally, in order to clearly identify the motifs that are critical in deciding the properties of each molecule, we further incorporate a property-aware attention mechanism into our learning framework. Our proposed AMCT is extensively evaluated on seven popular benchmark datasets, and both quantitative and qualitative results firmly demonstrate its effectiveness when compared with the state-of-the-art methods.
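As a hedged sketch of the cross-view alignment idea, the following InfoNCE-style loss treats the pooled atom-level and motif-level embeddings of the same molecule as a positive pair and the other molecules in the batch as negatives; the function name, pooling choice, and temperature are assumptions for illustration, not the exact AMCT loss.

```python
import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(atom_repr, motif_repr, temperature=0.1):
    """Generic cross-view contrastive alignment (illustrative, not AMCT's exact loss).

    atom_repr, motif_repr: (batch, dim) molecule-level embeddings obtained by
    pooling atom-level and motif-level token representations, respectively.
    """
    a = F.normalize(atom_repr, dim=-1)
    m = F.normalize(motif_repr, dim=-1)
    logits = a @ m.t() / temperature                       # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=a.device)     # diagonal = positive pairs
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Usage sketch with random embeddings standing in for the two views:
loss = cross_view_contrastive_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```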
$\textit{De Novo}$ genome assembly is one of the most important tasks in computational biology. ELBA is the state-of-the-art distributed-memory parallel algorithm for the overlap detection and layout simplification steps of $\textit{De Novo}$ genome assembly, but it suffers from a performance bottleneck in pairwise alignment. In this work, we propose three GPU schedulers for ELBA to accommodate multiple MPI processes and multiple GPUs. The GPU schedulers enable multiple MPI processes to perform computation on GPUs in a round-robin fashion. Both strong and weak scaling experiments show that the three schedulers are able to significantly improve the performance of the baseline, while exposing a trade-off between parallelism and GPU scheduler overhead. The best-performing implementation, the one-to-one scheduler, achieves a $\sim$7-8$\times$ speed-up using 25 MPI processes compared with the baseline vanilla ELBA GPU scheduler.
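A minimal sketch of the round-robin idea, assuming mpi4py and CuPy are available: each MPI rank selects a device by rank modulo the GPU count. This illustrates the scheduling policy only and is not the actual ELBA code, which is implemented in C++/CUDA.

```python
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

num_gpus = cp.cuda.runtime.getDeviceCount()
gpu_id = rank % num_gpus            # round-robin mapping of MPI ranks to devices
cp.cuda.Device(gpu_id).use()

# ... each rank would now launch its pairwise-alignment batches on its device ...
print(f"rank {rank} -> GPU {gpu_id} of {num_gpus}")
```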
Uniform sampling from the set $\mathcal{G}(\mathbf{d})$ of graphs with a given degree sequence $\mathbf{d} = (d_1, \dots, d_n) \in \mathbb N^n$ is a classical problem in the study of random graphs. We consider an analogue for temporal graphs, in which the edges are labeled with integer timestamps. The input to this generation problem is a tuple $\mathbf{D} = (\mathbf{d}, T) \in \mathbb N^n \times \mathbb N_{>0}$ and the task is to output a uniform random sample from the set $\mathcal{G}(\mathbf{D})$ of temporal graphs with degree sequence $\mathbf{d}$ and timestamps in the interval $[1, T]$. Because repeated edges with distinct timestamps are allowed, $\mathcal{G}(\mathbf{D})$ can be non-empty even if $\mathcal{G}(\mathbf{d})$ is empty, and as a consequence, existing algorithms are difficult to apply. We describe an algorithm for this generation problem which runs in expected time $O(M)$ if $\Delta^{2+\epsilon} = O(M)$ for some constant $\epsilon > 0$ and $T - \Delta = \Omega(T)$, where $M = \sum_i d_i$ and $\Delta = \max_i d_i$. Our algorithm applies the switching method of McKay and Wormald $[1]$ to temporal graphs: we first generate a random temporal multigraph and then remove self-loops and duplicated edges with switching operations which rewire the edges in a degree-preserving manner.
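A sketch of the first stage under the model stated above: pair degree stubs uniformly at random (configuration model) and attach a uniform timestamp in $[1, T]$ to every edge. The degree-preserving switchings that remove self-loops and duplicate (edge, timestamp) pairs are not reproduced here.

```python
import random

def temporal_configuration_model(d, T, seed=None):
    """Random temporal multigraph with degree sequence d and timestamps in [1, T].

    The sum of degrees must be even. The result may contain self-loops and
    repeated (edge, timestamp) pairs; the paper's algorithm removes those with
    switching operations, which this sketch omits.
    """
    rng = random.Random(seed)
    stubs = [v for v, deg in enumerate(d) for _ in range(deg)]
    rng.shuffle(stubs)
    edges = []
    for i in range(0, len(stubs) - 1, 2):
        u, v = stubs[i], stubs[i + 1]
        edges.append((u, v, rng.randint(1, T)))
    return edges

print(temporal_configuration_model([2, 2, 1, 1], T=3, seed=0))
```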
In this paper we study the problem of maximizing the distance to a given point $C_0$ over a polytope $\mathcal{P}$. Assuming that the polytope is circumscribed by a known ball, we construct an intersection of balls which preserves the vertices of the polytope on the boundary of this ball, and show that this intersection of balls approximates the polytope arbitrarily well. Then, we use known results on maximizing the distance to a given point over an intersection of balls to create a new polytope which preserves the maximizers of the original problem. Next, a new intersection of balls is obtained in a similar fashion, and we conjecture that after a finite number of iterations we end up with an intersection of balls over which we can maximize the distance to the given point. The obtained distance is shown to be a non-trivial upper bound on the original distance. Tests are made with maximizing the distance to a random point over the unit hypercube up to dimension $n = 100$. Several detailed 2-d examples are also shown.
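For the hypercube test case mentioned above there is a simple closed-form baseline: the farthest point of $[0,1]^n$ from $C_0$ is the vertex that picks, in each coordinate, whichever of $0$ or $1$ is farther from the corresponding coordinate of $C_0$. The sketch below computes it; it covers only this special case, not the paper's method for general polytopes.

```python
import numpy as np

def max_distance_over_unit_cube(c0):
    """Closed-form maximizer of ||x - c0|| over the unit hypercube [0, 1]^n."""
    c0 = np.asarray(c0, dtype=float)
    x_star = (c0 < 0.5).astype(float)      # per coordinate, pick the farther endpoint
    return x_star, np.linalg.norm(x_star - c0)

x_star, dist = max_distance_over_unit_cube(np.random.rand(100))
print(dist)
```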