We use a multidimensional circulant approach to construct new qutrit stabilizer $\dsb{\ell, 0, d}$ codes with parameters $(\ell, d) \in \{(51, 16), (52, 16), (54, 17), (55, 17), (57, 17)\}$ through symplectic self-dual additive codes over $\F_9$. In addition to these five new codes, we use a bordered construction to derive two more qutrit codes, with parameters $(\ell, d) \in \{(53, 16), (56, 17)\}$, that improve upon the previously best known parameters.
In fair division of a connected graph $G = (V, E)$, each of $n$ agents receives a share of $G$'s vertex set $V$. These shares partition $V$, with each share required to induce a connected subgraph. Agents use their own valuation functions to assign non-negative numerical values to the shares, and these values determine whether the allocation is fair in some specified sense. We introduce forbidden substructures called graph cutsets, which block divisions that are fair in the EF1 (envy-free up to one item) sense by cutting the graph into "too many pieces". Two parameters, gap and valence, determine the blocked values of $n$. If $G$ guarantees connected EF1 allocations for $n$ agents with valuations that are CA (common and additive), then $G$ contains no elementary cutset of gap $k \ge 2$ and valence in the interval $[n - k + 1, n - 1]$. If $G$ guarantees connected EF1 allocations for $n$ agents with valuations in the broader CM (common and monotone) class, then $G$ contains no cutset of gap $k \ge 2$ and valence in the interval $[n - k + 1, n - 1]$. These results rule out the existence of connected EF1 allocations in a variety of situations. For some graphs $G$ we can, with help from some new positive results, pin down $G$'s spectrum: the list of exactly which values of $n$ do and do not guarantee connected EF1 allocations. Examples suggest a conjectured common spectral pattern for all graphs. Further, we show that it is NP-hard to determine whether a graph admits a cutset. We also provide an example of a (non-traceable) graph on eight vertices that has no cutsets of gap $\ge 2$ at all, yet fails to guarantee connected EF1 allocations for three agents with CA preferences.
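For concreteness, the EF1 condition above can be checked by brute force on small instances with a common additive valuation. The sketch below (our illustration, not from the paper; all names are ours) enumerates the partitions of $V$ into $n$ connected shares and tests each for EF1.
\begin{verbatim}
# Brute-force connected-EF1 check for a common additive valuation
# (illustrative sketch only; exponential in |V|).
from itertools import product

def connected(adj, share):
    """True iff `share` induces a connected subgraph of the graph `adj`."""
    share = set(share)
    if not share:
        return True  # treat an empty share as trivially connected
    stack, seen = [next(iter(share))], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(u for u in adj[v] if u in share)
    return seen == share

def connected_ef1_exists(adj, value, n):
    """`value`: dict vertex -> non-negative number (common to all agents)."""
    verts = sorted(adj)
    for labels in product(range(n), repeat=len(verts)):
        shares = [[v for v, l in zip(verts, labels) if l == i]
                  for i in range(n)]
        if not all(connected(adj, s) for s in shares):
            continue
        worth = [sum(value[v] for v in s) for s in shares]
        # EF1: each share is worth at least any other share minus
        # that share's single most valuable vertex.
        if all(worth[i] >= worth[j] - max(value[v] for v in shares[j])
               for i in range(n) for j in range(n) if shares[j]):
            return True
    return False

# e.g. a path on four vertices, unit values, two agents -> True
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(connected_ef1_exists(path, {v: 1 for v in path}, 2))
\end{verbatim}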
It is well known that a complex circulant matrix can be diagonalized by the discrete Fourier matrix with imaginary unit $\mathtt{i}$. The main aim of this paper is to demonstrate that a quaternion circulant matrix cannot be diagonalized by a discrete quaternion Fourier matrix with the three imaginary units $\mathtt{i}$, $\mathtt{j}$, and $\mathtt{k}$. Instead, a quaternion circulant matrix can be block-diagonalized, with 1-by-1 and 2-by-2 blocks, by a permuted discrete quaternion Fourier transform matrix. With such a block-diagonalized form, the inverse of a quaternion circulant matrix can be determined efficiently, in analogy with the inverse of a complex circulant matrix. We make use of this block-diagonalized form to study the quaternion tensor singular value decomposition of quaternion tensors, i.e., tensors whose entries are quaternion numbers. Applications to computing the inverse of a quaternion circulant matrix and to solving quaternion Toeplitz systems arising in linear prediction of quaternion signals demonstrate the efficiency of our block-diagonalization results. A numerical example with a color video, viewed as a third-order quaternion tensor, validates the effectiveness of the quaternion tensor singular value decomposition.
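The classical complex fact cited in the first sentence is easy to verify numerically: conjugating a circulant matrix by the DFT matrix produces a diagonal matrix whose entries are the DFT of the first column. A minimal NumPy check (our illustration; the paper's point is that the quaternion analogue only block-diagonalizes):
\begin{verbatim}
import numpy as np

n = 5
c = np.random.randn(n) + 1j * np.random.randn(n)        # first column
C = np.column_stack([np.roll(c, k) for k in range(n)])  # C[i,j] = c[(i-j) % n]

F = np.fft.fft(np.eye(n))             # DFT matrix, F[j,m] = exp(-2*pi*1j*j*m/n)
eigs = np.fft.fft(c)                  # eigenvalues = DFT of the first column
D = F @ C @ np.linalg.inv(F)          # conjugate C by the DFT matrix
assert np.allclose(D, np.diag(eigs))  # C is diagonalized, as claimed
\end{verbatim}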
A graph $G=(V,E)$ is said to be distance magic if there is a bijection $f$ from the vertex set of $G$ to the first $|V(G)|$ natural numbers such that for each vertex $v$, its weight $\sum_{u \in N(v)}f(u)$ is constant, where $N(v)$ is the open neighborhood of $v$. In this paper, we introduce the concept of $p$-distance magic labeling and establish a necessary and sufficient condition for a graph to be distance magic. Additionally, we give necessary and sufficient conditions for a connected regular graph to be distance magic in terms of the eigenvalues of its adjacency and Laplacian matrices. Furthermore, we study the spectra of distance magic graphs, focusing on singular distance magic graphs. Finally, we show that the number of distance magic labelings of a graph is at most the size of its automorphism group.
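As a small illustration of the definition (ours, not the paper's): the $4$-cycle $C_4$ is distance magic with constant weight $5$.
\begin{verbatim}
# C4 with labels f(v1..v4) = (1, 2, 4, 3): every open-neighborhood
# sum equals 5, so the labeling is distance magic.
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}  # the 4-cycle
f = {1: 1, 2: 2, 3: 4, 4: 3}                        # bijection onto {1,...,4}

weights = {v: sum(f[u] for u in adj[v]) for v in adj}
assert len(set(weights.values())) == 1              # constant weight
print(weights)                                      # {1: 5, 2: 5, 3: 5, 4: 5}
\end{verbatim}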
Let $X$ be a $d$-dimensional simplicial complex. A function $F\colon X(k)\to \{0,1\}^k$ is said to be a direct product function if there exists a function $f\colon X(1)\to \{0,1\}$ such that $F(\sigma) = (f(\sigma_1), \ldots, f(\sigma_k))$ for each $k$-face $\sigma$. In an effort to simplify components of the PCP theorem, Goldreich and Safra introduced the problem of direct product testing, which asks whether one can test if $F\colon X(k)\to \{0,1\}^k$ is correlated with a direct product function by querying $F$ on only $2$ inputs. Dinur and Kaufman conjectured that there exist bounded-degree complexes with a direct product test in the small-soundness regime. We resolve their conjecture by showing that for all $\delta>0$, there exists a family of high-dimensional expanders with degree $O_{\delta}(1)$ and a $2$-query direct product tester with soundness $\delta$. We use the characterization given by a subset of the authors, and independently by Dikstein and Dinur, who showed that some form of non-Abelian coboundary expansion (which they called "Unique-Games coboundary expansion") is a necessary and sufficient condition for a complex to admit such direct product testers. Our main technical contribution is a general technique for showing coboundary expansion of complexes with coefficients in a non-Abelian group. This allows us to prove that the high-dimensional expanders constructed by Chapman and Lubotzky satisfy the necessary conditions, and thus admit a $2$-query direct product tester with small soundness.
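For intuition, one standard form of a $2$-query direct product test samples two $k$-faces that overlap in $t$ points and accepts iff the two local views agree on the overlap. The sketch below (our illustration, with our own names; local views are modeled as dicts) runs on the complete complex; the paper's contribution is achieving such a test on bounded-degree complexes.
\begin{verbatim}
import random

def dp_test(F, N, k, t):
    """One test: F maps a k-subset of range(N) to a dict {vertex: bit}."""
    shared = random.sample(range(N), t)
    rest = [v for v in range(N) if v not in shared]
    sigma1 = frozenset(shared + random.sample(rest, k - t))
    sigma2 = frozenset(shared + random.sample(
        [v for v in rest if v not in sigma1], k - t))
    a, b = F(sigma1), F(sigma2)
    return all(a[v] == b[v] for v in shared)  # accept iff the views agree

# A genuine direct product function passes with probability 1:
f = {v: random.randint(0, 1) for v in range(50)}
F = lambda sigma: {v: f[v] for v in sigma}
assert all(dp_test(F, N=50, k=8, t=4) for _ in range(100))
\end{verbatim}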
The frame scaling problem is: given vectors $U := \{u_{1}, ..., u_{n} \} \subseteq \mathbb{R}^{d}$, marginals $c \in \mathbb{R}^{n}_{++}$, and precision $\varepsilon > 0$, find left and right scalings $L \in \mathbb{R}^{d \times d}, r \in \mathbb{R}^n$ such that $(v_1,\dots,v_n) := (Lu_1 r_1,\dots,Lu_nr_n)$ simultaneously satisfies $\sum_{i=1}^n v_i v_i^{\mathsf{T}} = I_d$ and $\|v_{j}\|_{2}^{2} = c_{j}, \forall j \in [n]$, up to error $\varepsilon$. This problem has appeared in a variety of fields throughout linear algebra and computer science. In this work, we give a strongly polynomial algorithm for frame scaling with $\log(1/\varepsilon)$ convergence. This answers a question of Diakonikolas, Tzamos and Kane (STOC 2023), who gave the first strongly polynomial randomized algorithm with poly$(1/\varepsilon)$ convergence for the special case $c = \frac{d}{n} 1_{n}$. Our algorithm is deterministic, applies for general $c \in \mathbb{R}^{n}_{++}$, and requires $O(n^{3} \log(n/\varepsilon))$ iterations as compared to $O(n^{5} d^{11}/\varepsilon^{5})$ iterations of DTK. By lifting the framework of Linial, Samorodnitsky and Wigderson (Combinatorica 2000) for matrix scaling to frames, we are able to simplify both the algorithm and analysis. Our main technical contribution is to generalize the potential analysis of LSW to the frame setting and compute an update step in strongly polynomial time that achieves geometric progress in each iteration. In fact, we can adapt our results to give an improved analysis of strongly polynomial matrix scaling, reducing the $O(n^{5} \log(n/\varepsilon))$ iteration bound of LSW to $O(n^{3} \log(n/\varepsilon))$. Additionally, we prove a novel bound on the size of approximate frame scaling solutions, involving the condition measure $\bar{\chi}$ studied in the linear programming literature, which may be of independent interest.
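To make the problem statement concrete, below is the classical alternating fixed-point heuristic for frame scaling (a naive sketch for intuition only; emphatically not the strongly polynomial algorithm of this paper): alternately enforce the norm constraints by a right scaling and the frame-operator constraint by whitening. Note that feasibility forces $\sum_j c_j = d$, since $\sum_j \|v_j\|_2^2 = \mathrm{tr}(\sum_j v_j v_j^{\mathsf{T}}) = \mathrm{tr}(I_d)$.
\begin{verbatim}
import numpy as np

def frame_scale(U, c, iters=500):
    """U: d x n matrix with columns u_1..u_n; c: marginals, sum(c) = d."""
    V = U.copy()
    for _ in range(iters):
        V = V * np.sqrt(c / np.sum(V * V, axis=0))  # right scaling: ||v_j||^2 = c_j
        G = np.linalg.cholesky(V @ V.T)             # frame operator = G G^T
        V = np.linalg.inv(G) @ V                    # left scaling: whiten to I_d
    return V

d, n = 3, 7
U = np.random.randn(d, n)
c = np.full(n, d / n)                               # uniform marginals
V = frame_scale(U, c)
print(np.linalg.norm(V @ V.T - np.eye(d)),          # ~0: frame condition
      np.abs(np.sum(V * V, axis=0) - c).max())      # ~0: marginal condition
\end{verbatim}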
In this paper, we establish conditions for input-to-state stability (ISS) and incremental input-to-state stability ($\delta$ISS) of Gated Graph Neural Networks (GGNNs). We show that this recurrent version of Graph Neural Networks (GNNs) can be expressed as a distributed dynamical system and, as a consequence, can be analysed using model-based techniques to assess its stability and robustness properties. The resulting stability criteria can then be exploited as constraints during the training process to enforce the internal stability of the neural network. Two distributed control examples, flocking and multi-robot motion control, show that using these conditions improves the performance and robustness of gated GNNs.
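As a generic illustration of enforcing stability criteria during training (a stand-in example only: the paper derives its own ISS/$\delta$ISS conditions for GGNNs, which we do not reproduce here), a common recipe is to project weights after each optimizer step so that a norm bound holds.
\begin{verbatim}
import numpy as np

def project_weights(W, bound=0.99):
    """Rescale W so that its spectral norm is at most `bound`.
    (A typical sufficient condition for stability of recurrent models;
    used here only as a placeholder for the paper's actual criteria.)"""
    s = np.linalg.norm(W, 2)  # largest singular value
    return W if s <= bound else W * (bound / s)

# after each optimizer step on a recurrent weight matrix W:
# W = project_weights(W)
\end{verbatim}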
Most mathematical distortions used in ML are fundamentally integral in nature: $f$-divergences, Bregman divergences, (regularized) optimal transport distances, integral probability metrics, geodesic distances, etc. In this paper, we develop a grounded theory and tools that can help improve these distortions to better cope with ML requirements. We start with a generalization of Riemann integration that also encapsulates functions that are not strictly additive but are, more generally, $t$-additive, as in nonextensive statistical mechanics. Notably, this recovers Volterra's product integral as a special case. We then generalize the Fundamental Theorem of Calculus using an extension of the (Euclidean) derivative. This, along with a series of more specific theorems, serves as a basis for results showing how one can specifically design, alter, or change fundamental properties of distortion measures in a simple way, with a special emphasis on geometric and ML-related properties: metricity, hyperbolicity, and encoding. We show how to apply the theory to a problem that has recently gained traction in ML: hyperbolic embeddings with a "cheap" yet accurate encoding along the hyperbolic-vs-Euclidean scale. We also unveil a new application for which the Poincar\'e disk model has very appealing features and our theory comes in handy: \textit{model} embeddings for boosted combinations of decision trees, trained using the log-loss (trees) and logistic loss (combinations).
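For orientation, in the scalar commutative case Volterra's product integral, recovered above as a special case, reduces to the exponential of an ordinary integral (a standard identity, recalled here only to fix intuition):
\[
\prod_a^b \bigl(1 + f(x)\,\mathrm{d}x\bigr)
\;=\; \lim_{\max_i \Delta x_i \to 0} \prod_i \bigl(1 + f(x_i)\,\Delta x_i\bigr)
\;=\; \exp\!\left(\int_a^b f(x)\,\mathrm{d}x\right).
\]
The $t$-additive generalization deforms this exponential picture; the results above exploit such deformations to alter properties like metricity and hyperbolicity.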
Two graphs $G$ and $H$ are homomorphism indistinguishable over a class of graphs $\mathcal{F}$ if for all graphs $F \in \mathcal{F}$ the number of homomorphisms from $F$ to $G$ is equal to the number of homomorphisms from $F$ to $H$. Many natural equivalence relations comparing graphs, such as (quantum) isomorphism and spectral and logical equivalences, can be characterised as homomorphism indistinguishability relations over certain graph classes. Abstracting from the wealth of such instances, we show in this paper that equivalence w.r.t. any self-complementary logic admitting a characterisation as a homomorphism indistinguishability relation can be characterised by homomorphism indistinguishability over a minor-closed graph class. Self-complementarity is a mild property satisfied by most well-studied logics. This result follows from a correspondence between closure properties of a graph class and preservation properties of its homomorphism indistinguishability relation. Furthermore, we classify all graph classes which are in a sense finite (essentially profinite) and satisfy the maximality condition of being homomorphism distinguishing closed, i.e. adding any graph to the class strictly refines its homomorphism indistinguishability relation. Thereby, we answer various questions raised by Roberson (2022) on general properties of the homomorphism distinguishing closure.
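The quantity at the heart of the definition is simple to compute by brute force on small graphs, which suffices to check small instances of homomorphism indistinguishability (our illustration):
\begin{verbatim}
from itertools import product

def hom(F_verts, F_edges, G_verts, G_edges):
    """Number of homomorphisms from F to G (undirected, brute force)."""
    G_adj = {(u, v) for u, v in G_edges} | {(v, u) for u, v in G_edges}
    count = 0
    for image in product(G_verts, repeat=len(F_verts)):
        phi = dict(zip(F_verts, image))
        if all((phi[u], phi[v]) in G_adj for u, v in F_edges):
            count += 1
    return count

# e.g. homomorphisms from the triangle K3 to itself: 3! = 6
K3 = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
print(hom(*K3, *K3))  # -> 6
\end{verbatim}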
The \emph{Fast Gaussian Transform} (FGT) enables subquadratic-time multiplication of an $n\times n$ Gaussian kernel matrix $\mathsf{K}_{i,j}= \exp ( - \| x_i - x_j \|_2^2 ) $ with an arbitrary vector $h \in \mathbb{R}^n$, where $x_1,\dots, x_n \in \mathbb{R}^d$ are a set of \emph{fixed} source points. This kernel plays a central role in machine learning and random feature maps. Nevertheless, in most modern data analysis applications, datasets change dynamically (yet often have low rank), and recomputing the FGT from scratch in (kernel-based) algorithms incurs a major computational overhead ($\gtrsim n$ time for a single source update in $\mathbb{R}^d$). These applications motivate a \emph{dynamic FGT} algorithm, which maintains a dynamic set of sources under \emph{kernel-density estimation} (KDE) queries in \emph{sublinear time}, while retaining matrix-vector multiplication accuracy and speed. Assuming the dynamic data points $x_i$ lie in a (possibly changing) $k$-dimensional subspace ($k\leq d$), our main result is an efficient dynamic FGT algorithm supporting the following operations in $\log^{O(k)}(n/\varepsilon)$ time: (1) adding or deleting a source point, and (2) estimating the ``kernel density'' of a query point with respect to the sources with $\varepsilon$ additive accuracy. The core of the algorithm is a dynamic data structure that maintains the \emph{projected} ``interaction rank'' between source and target boxes, decoupled into finite truncations of Taylor and Hermite expansions.
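For scale, the exact baseline that a dynamic FGT must beat is direct summation, which costs $\Theta(n)$ per KDE query (a simple NumPy illustration of the kernel defined above, not the paper's data structure):
\begin{verbatim}
import numpy as np

def kde_query(sources, q):
    """(1/n) * sum_i exp(-||x_i - q||_2^2), by direct summation."""
    d2 = np.sum((sources - q) ** 2, axis=1)
    return np.mean(np.exp(-d2))

sources = np.random.randn(1000, 3)   # n = 1000 source points in R^3
print(kde_query(sources, np.zeros(3)))
\end{verbatim}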
While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, these may not account for perturbations encountered in many real-world settings. In such cases, although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. with the training domain but deviates from it. While this deviation may not be known exactly, its broad characterization is specified a priori in terms of attributes. We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attribute space, without having access to data from the test domain. Our adversarial training solves a min-max optimization problem: the inner maximization generates adversarial perturbations, and the outer minimization finds model parameters that minimize the loss on those perturbations. We demonstrate the applicability of our approach to three types of naturally occurring perturbations: object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate its usefulness by showing the robustness gains of deep neural networks trained with our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
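Schematically, the inner maximization can be as simple as a search over the specified attribute space. The sketch below (our illustration; `loss_fn` is a hypothetical model loss, and a single rotation attribute stands in for the attribute space) selects the worst-case rotation of an input before the outer gradient step.
\begin{verbatim}
import numpy as np

def rotate(img, theta):
    """Nearest-neighbour rotation of a square grayscale image by theta."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    out = np.zeros_like(img)
    c, s = np.cos(theta), np.sin(theta)
    for y in range(h):
        for x in range(w):
            sy = int(round(c * (y - cy) + s * (x - cx) + cy))
            sx = int(round(-s * (y - cy) + c * (x - cx) + cx))
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = img[sy, sx]
    return out

def inner_max(loss_fn, img, label, max_deg=30.0, steps=13):
    """Grid-search the attribute space (here: rotation angle) for the
    perturbation that maximizes the classifier's loss."""
    angles = np.linspace(-np.radians(max_deg), np.radians(max_deg), steps)
    return max((rotate(img, a) for a in angles),
               key=lambda adv: loss_fn(adv, label))

# outer loop: take gradient steps on inner_max(...) samples as usual
\end{verbatim}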