The Erd\H{o}s, Gr\"unwald and Weiszfeld theorem provides a characterization of the infinite graphs which are Eulerian, that is, which admit infinite Eulerian trails. In this article we complement this theorem with a characterization of those finite trails that can be extended to infinite Eulerian trails. This allows us to prove an effective version of the Erd\H{o}s, Gr\"unwald and Weiszfeld theorem for a class of graphs that includes non-locally finite ones, generalizing a theorem of D. Bean.
A major challenge in computed tomography is reconstructing objects from incomplete data. An increasingly popular solution for these problems is to incorporate deep learning models into reconstruction algorithms. This study introduces a novel approach by integrating a Fourier neural operator (FNO) into the filtered backprojection (FBP) reconstruction method, yielding the FNO backprojection (FNO-BP) network. We employ moment conditions for sinogram extrapolation to assist the model in mitigating artefacts from limited data. Notably, our deep learning architecture maintains a runtime comparable to classical FBP reconstructions, ensuring swift performance during both inference and training. We assess our reconstruction method in the context of the Helsinki Tomography Challenge 2022 and compare it against regular FBP methods.
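As a rough, hedged illustration of the core idea (not the authors' implementation; all names, shapes, and the toy backprojection below are our assumptions), the sketch replaces the fixed ramp filter of FBP with a learnable spectral filter on each sinogram row, followed by a differentiable backprojection:

```python
# Hypothetical sketch of the FNO-BP idea: a learnable spectral filter in place
# of FBP's fixed ramp filter, then plain backprojection. Illustrative only.
import torch
import torch.nn as nn

class SpectralFilter1d(nn.Module):
    """Learnable complex multiplier on the lowest `modes` Fourier modes of each row."""
    def __init__(self, modes: int):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(0.02 * torch.randn(modes, dtype=torch.cfloat))

    def forward(self, sino: torch.Tensor) -> torch.Tensor:
        # sino: (batch, n_angles, n_detectors); filter along the detector axis
        spec = torch.fft.rfft(sino, dim=-1)
        out = torch.zeros_like(spec)
        out[..., : self.modes] = spec[..., : self.modes] * self.weight
        return torch.fft.irfft(out, n=sino.shape[-1], dim=-1)

def backproject(filtered: torch.Tensor, angles: torch.Tensor, size: int) -> torch.Tensor:
    """Nearest-neighbour backprojection of filtered projections onto a size x size grid."""
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing="ij")
    recon = torch.zeros(filtered.shape[0], size, size)
    n_det = filtered.shape[-1]
    for i, theta in enumerate(angles):
        t = xs * torch.cos(theta) + ys * torch.sin(theta)   # detector coordinate per pixel
        idx = ((t + 1) / 2 * (n_det - 1)).round().clamp(0, n_det - 1).long()
        recon = recon + filtered[:, i, :][:, idx.reshape(-1)].reshape(-1, size, size)
    return recon * (torch.pi / len(angles))

# e.g. a limited-angle sinogram: 60 views over 60 degrees, 128 detector bins
sino = torch.randn(1, 60, 128)
angles = torch.linspace(0, torch.pi / 3, 60)
recon = backproject(SpectralFilter1d(modes=32)(sino), angles, size=64)  # differentiable
```

Since both the spectral filtering and the backprojection are fixed-cost linear-algebra kernels, inference cost stays close to classical FBP, consistent with the runtime claim above.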
Given a finite, simple, connected graph $G=(V,E)$ with $|V|=n$, we consider the associated graph Laplacian matrix $L = D - A$ with eigenvalues $0 = \lambda_1 < \lambda_2 \leq \dots \leq \lambda_n$. One can also consider the same graph equipped with positive edge weights $w:E \rightarrow \mathbb{R}_{> 0}$ normalized to $\sum_{e \in E} w_e = |E|$ and the associated weighted Laplacian matrix $L_w$. We say that $G$ is conformally rigid if constant edge-weights maximize the second eigenvalue $\lambda_2(w)$ of $L_w$ over all $w$, and minimize $\lambda_n(w')$ of $L_{w'}$ over all $w'$, i.e., for all $w,w'$, $$ \lambda_2(w) \leq \lambda_2(1) \leq \lambda_n(1) \leq \lambda_n(w').$$ Conformal rigidity requires an extraordinary amount of symmetry in $G$. Every edge-transitive graph is conformally rigid. We prove that every distance-regular graph, and hence every strongly regular graph, is conformally rigid. Certain special graph embeddings can be used to characterize conformal rigidity. Cayley graphs can be conformally rigid but need not be; we prove a sufficient criterion. We also find a small set of conformally rigid graphs that do not belong to any of the above categories; these include the Hoffman graph, the crossing number graph 6B, and others. Conformal rigidity can be certified via semidefinite programming; we provide explicit examples.
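The defining inequalities can be checked numerically. A minimal sketch (our own, not from the paper; random weights can only fail to refute rigidity, certification is what the semidefinite program is for) on the edge-transitive cycle $C_8$:

```python
# Numerical sanity check of the conformal rigidity inequalities on the
# edge-transitive cycle C_8 (illustrative; not a proof).
import numpy as np

def weighted_laplacian(edges, weights, n):
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

n = 8
edges = [(i, (i + 1) % n) for i in range(n)]
ev = np.linalg.eigvalsh(weighted_laplacian(edges, np.ones(len(edges)), n))
lam2_unit, lamn_unit = ev[1], ev[-1]          # lambda_2(1) and lambda_n(1)

rng = np.random.default_rng(0)
for _ in range(1000):
    w = rng.random(len(edges))
    w *= len(edges) / w.sum()                 # normalize: sum of weights = |E|
    ev = np.linalg.eigvalsh(weighted_laplacian(edges, w, n))
    assert ev[1] <= lam2_unit + 1e-9 and ev[-1] >= lamn_unit - 1e-9
print("lambda_2(w) <= lambda_2(1) <= lambda_n(1) <= lambda_n(w) held in all trials")
```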
Many recent works address the question of characterizing induced obstructions to bounded treewidth. In 2022, Lozin and Razgon completely answered this question for graph classes defined by finitely many forbidden induced subgraphs. Their result also implies a characterization of graph classes defined by finitely many forbidden induced subgraphs that are $(tw,\omega)$-bounded, that is, treewidth can only be large due to the presence of a large clique. This condition is known to be satisfied for any graph class with bounded tree-independence number, a graph parameter introduced independently by Yolov in 2018 and by Dallard, Milani\v{c}, and \v{S}torgel in 2024. Dallard et al. conjectured that $(tw,\omega)$-boundedness is actually equivalent to bounded tree-independence number. We address this conjecture in the context of graph classes defined by finitely many forbidden induced subgraphs and prove it for the case of graph classes excluding an induced star. We also prove it for subclasses of the class of line graphs, determine the exact values of the tree-independence numbers of line graphs of complete graphs and line graphs of complete bipartite graphs, and characterize the tree-independence number of $P_4$-free graphs, which implies a linear-time algorithm for its computation. Applying the algorithmic framework provided in a previous paper of the series leads to polynomial-time algorithms for the Maximum Weight Independent Set problem in an infinite family of graph classes.
We propose a simple empirical representation of expectations such that, for a number of samples above a certain threshold drawn from any probability distribution with a finite fourth moment, the proposed estimator outperforms the empirical average when tested against the actual population, with respect to the quadratic loss. For datasets smaller than this threshold, the result still holds, but only for a class of distributions determined by their first four moments. Our approach leverages the duality between distributionally robust and risk-averse optimization.
Normalizing flows are a class of deep generative models for efficient sampling and likelihood estimation that achieve attractive performance, particularly in high dimensions. A flow is often implemented as a sequence of invertible residual blocks. Existing works adopt special network architectures and regularization of flow trajectories. In this paper, we develop a neural ODE flow network called JKO-iFlow, inspired by the Jordan-Kinderlehrer-Otto (JKO) scheme, which unfolds the discrete-time dynamics of the Wasserstein gradient flow. The proposed method stacks residual blocks one after another and trains them block-wise, avoiding the sampling of SDE trajectories as well as score matching or variational learning, thus reducing the memory load and the difficulty of end-to-end training. We also develop adaptive time reparameterization of the flow network, with progressive refinement of the induced trajectory in probability space, to further improve model accuracy. Experiments with synthetic and real data show that the proposed JKO-iFlow network achieves competitive performance compared with existing flow and diffusion models at a significantly reduced computational and memory cost.
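A minimal two-dimensional sketch of the block-wise training pattern described above (our illustration under simplifying assumptions: exact $2 \times 2$ Jacobian log-determinants instead of the neural-ODE blocks and trace estimators of the paper, and invertibility assumed rather than enforced):

```python
# 2-D toy of JKO-style block-wise flow training: each residual block is trained
# alone to lower KL(pushforward || N(0, I)) plus a Wasserstein movement penalty.
import torch
import torch.nn as nn

def make_block():
    f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
    nn.init.zeros_(f[-1].weight); nn.init.zeros_(f[-1].bias)  # start as identity map
    return f

def block_loss(f, x, h=1.0):
    y = x + f(x)
    # exact per-sample log|det(I + df/dx)|; affordable in 2-D, replaced by
    # stochastic trace estimators in realistic implementations
    logdets = torch.stack([torch.logdet(torch.autograd.functional.jacobian(
        lambda z: z + f(z), xi, create_graph=True)) for xi in x])
    # KL to N(0, I) up to an additive constant, plus the W2-type movement penalty
    return (0.5 * (y ** 2).sum(1) - logdets + (0.5 / h) * ((y - x) ** 2).sum(1)).mean()

torch.manual_seed(0)
x = torch.randn(512, 2) * torch.tensor([2.0, 0.5]) + 3.0   # toy source samples
blocks = []
for _ in range(4):                       # stack residual blocks one after another
    f = make_block()
    opt = torch.optim.Adam(f.parameters(), lr=1e-3)
    for _ in range(200):                 # train only this block
        opt.zero_grad()
        block_loss(f, x[torch.randperm(len(x))[:64]]).backward()
        opt.step()
    with torch.no_grad():                # freeze it, push samples to the next block
        x = x + f(x)
    blocks.append(f)
```

Only one block's activations and parameters are live at any time, which is the source of the memory savings claimed above.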
Adversarial attacks can mislead automatic speech recognition (ASR) systems into predicting an arbitrary target text, thus posing a clear security threat. To prevent such attacks, we propose DistriBlock, an efficient detection strategy applicable to any ASR system that predicts a probability distribution over output tokens in each time step. We measure a set of characteristics of this distribution: the median, maximum, and minimum of the output probabilities, the entropy of the distribution, and the Kullback-Leibler and Jensen-Shannon divergences with respect to the distribution of the subsequent time step. Then, by leveraging the characteristics observed for both benign and adversarial data, we apply binary classifiers, including simple threshold-based classification, ensembles of such classifiers, and neural networks. Through extensive analysis across different state-of-the-art ASR systems and language data sets, we demonstrate the strong performance of this approach, with a mean area under the receiver operating characteristic curve of 99\% for distinguishing target adversarial examples from clean data and 97\% from noisy data. To assess the robustness of our method, we show that adaptive adversarial examples that can circumvent DistriBlock are much noisier, which makes them easier to detect through filtering and creates another avenue for preserving the system's robustness.
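A small sketch of the per-step characteristics as we read them from the abstract (not the authors' code; the feature layout is our assumption):

```python
# Per-step distribution characteristics: median/max/min probability, entropy,
# and KL / Jensen-Shannon divergence to the next step's distribution.
import numpy as np

def step_characteristics(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """probs: (T, V), one distribution over V output tokens per time step.
    Returns (T-1, 6) features; the last step has no successor for the divergences."""
    p = np.clip(probs, eps, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    feats = []
    for t in range(len(p) - 1):
        cur, nxt = p[t], p[t + 1]
        kl = (cur * np.log(cur / nxt)).sum()
        m = 0.5 * (cur + nxt)
        js = 0.5 * (cur * np.log(cur / m)).sum() + 0.5 * (nxt * np.log(nxt / m)).sum()
        feats.append([np.median(cur), cur.max(), cur.min(), entropy[t], kl, js])
    return np.asarray(feats)

# a threshold-based detector could then flag an utterance whose mean entropy
# falls below a tuned threshold tau: step_characteristics(probs)[:, 3].mean() < tau
```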
Under a generalised estimating equation analysis approach, approximate design theory is used to determine Bayesian D-optimal designs for stepped-wedge cluster randomised trials with binary outcome data. For two examples, considering simple exchangeable and exponential decay correlation structures, we compare the efficiency of the identified optimal designs to balanced stepped-wedge designs and to corresponding stepped-wedge designs determined by optimising under a normal approximation approach. The dependence of the Bayesian D-optimal designs on the assumed correlation structure is explored; for the considered settings, smaller decay in the correlation between outcomes across time periods, along with larger values of the intra-cluster correlation, leads to optimal designs that are closer to a balanced design. Unlike for normal data, it is shown that the optimal design need not be centro-symmetric in the binary outcome case. The efficiency of the Bayesian D-optimal design relative to a balanced design can be large, but situations are demonstrated in which the advantages are small. Similarly, the optimal design from a normal approximation approach is often not much less efficient than the Bayesian D-optimal design. Bayesian D-optimal designs can thus be readily identified for stepped-wedge cluster randomised trials with binary outcome data. In certain circumstances, principally ones with strong time period effects, they will indicate that a design unlikely to have been identified by previous methods may be substantially more efficient. However, they require a larger number of assumptions than existing optimal designs, and in many situations existing theory under a normal approximation will provide an easier means of identifying an efficient design for binary outcome data.
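A schematic sketch of the Bayesian D-criterion under a GEE working model with exchangeable correlation (our construction; the prior, effect sizes, and the small balanced stepped-wedge design are illustrative assumptions):

```python
# Bayesian D-criterion E_prior[log det M(beta)] for a toy stepped-wedge design
# with binary outcomes, logistic marginal means, and exchangeable correlation.
import numpy as np

def gee_information(X, beta, alpha):
    """X: (T, p) design matrix for one cluster. Returns D' V^{-1} D."""
    T = X.shape[0]
    mu = 1 / (1 + np.exp(-X @ beta))
    D = X * (mu * (1 - mu))[:, None]                      # d mu / d beta
    A = np.diag(mu * (1 - mu))
    R = np.full((T, T), alpha); np.fill_diagonal(R, 1.0)  # exchangeable correlation
    V = np.sqrt(A) @ R @ np.sqrt(A)
    return D.T @ np.linalg.solve(V, D)

def bayesian_D(designs, prior_draws, alpha=0.05):
    """designs: list of (T, p) cluster design matrices; prior_draws: (S, p) betas."""
    vals = []
    for beta in prior_draws:
        M = sum(gee_information(X, beta, alpha) for X in designs)
        vals.append(np.linalg.slogdet(M)[1])
    return np.mean(vals)

# 4 clusters, 5 periods; columns: intercept, period effects (2..5), treatment
T, p = 5, 6
def cluster_design(switch_period):
    X = np.zeros((T, p)); X[:, 0] = 1
    for j in range(1, T): X[j, j] = 1                     # period indicators
    X[switch_period:, -1] = 1                             # treated after the switch
    return X

designs = [cluster_design(s) for s in (1, 2, 3, 4)]       # a balanced stepped wedge
prior = np.random.default_rng(1).normal([0, .1, .2, .3, .4, .5], 0.2, size=(100, p))
print(bayesian_D(designs, prior))
```

Comparing this value across candidate switch patterns is the basic optimisation step; the efficiency comparisons above amount to ratios of such criteria.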
We study a colored generalization of the famous simple-switch Markov chain for sampling the set of graphs with a fixed degree sequence. Here we consider the space of graphs with colored vertices, in which we fix the degree sequence and another statistic arising from the vertex coloring, and prove that this space can be connected by simple color-preserving switches or moves. These moves form a basis for defining an irreducible Markov chain, which is necessary for testing statistical model fit to block-partitioned network data. Our methods further generalize well-known algebraic results from the 1990s, namely that the corresponding moves can be used to construct a regular triangulation of a generalization of the second hypersimplex. On the other hand, in contrast to the monochromatic case, we show that for simple graphs the 1-norm of the moves necessary to connect the space increases with the number of colors.
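To make the moves concrete, here is an illustrative sketch of one color-preserving switch (our own; the abstract does not fix the colored statistic, so we take it to be the number of edges between color classes, and, per the last sentence above, such unit switches alone need not suffice for simple graphs):

```python
# One attempted color-preserving switch on a simple graph given as edge tuples.
import random

def try_switch(edges: list, color: dict, rng: random.Random) -> bool:
    """Pick edges {a,b},{c,d}; propose {a,d},{c,b}. Any switch preserves degrees;
    requiring color[b] == color[d] also preserves the multiset of edge color-pairs."""
    i, j = rng.sample(range(len(edges)), 2)
    a, b = edges[i]
    c, d = edges[j]
    if rng.random() < 0.5:                  # randomize which endpoints are traded
        c, d = d, c
    if color[b] != color[d]:
        return False                        # move would change the colored statistic
    present = {frozenset(e) for e in edges}
    if a == d or b == c or frozenset((a, d)) in present or frozenset((c, b)) in present:
        return False                        # keep the graph simple (no loops/doubles)
    edges[i], edges[j] = (a, d), (c, b)
    return True

# e.g. a chain on a 2-colored 6-cycle:
edges = [(i, (i + 1) % 6) for i in range(6)]
color = {v: v % 2 for v in range(6)}
rng = random.Random(0)
accepted = sum(try_switch(edges, color, rng) for _ in range(1000))
```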
We give a fully polynomial-time randomized approximation scheme (FPRAS) for two-terminal reliability in directed acyclic graphs (DAGs). In contrast, we show that the complementary problem of approximating two-terminal unreliability in DAGs is #BIS-hard.
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
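A compact sketch of one such block following the description above (our paraphrase, not the released Timm code; affine normalizations and LayerScale are omitted for brevity):

```python
# Minimal ResMLP block: cross-patch linear mixing shared across channels,
# then a per-patch two-layer feed-forward network, each with a residual.
import torch
import torch.nn as nn

class ResMLPBlock(nn.Module):
    def __init__(self, n_patches: int, dim: int, expansion: int = 4):
        super().__init__()
        self.cross_patch = nn.Linear(n_patches, n_patches)    # (i) patches interact
        self.cross_channel = nn.Sequential(                   # (ii) channels interact
            nn.Linear(dim, expansion * dim), nn.GELU(), nn.Linear(expansion * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_patches, dim); the patch mixer is shared across channels
        x = x + self.cross_patch(x.transpose(1, 2)).transpose(1, 2)
        return x + self.cross_channel(x)                      # applied per patch

x = torch.randn(2, 196, 384)             # 14x14 patches, embedding dimension 384
print(ResMLPBlock(196, 384)(x).shape)    # torch.Size([2, 196, 384])
```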