The study of nonplanar drawings of graphs with restricted crossing configurations is a well-established topic in graph drawing, often referred to as beyond-planar graph drawing. One of the most studied types of drawings in this area is the $k$-planar drawing $(k \geq 1)$, in which each edge is crossed at most $k$ times. We generalize $k$-planar drawings by introducing the new family of min-$k$-planar drawings. In a min-$k$-planar drawing, edges may be crossed an arbitrary number of times, but for any two crossing edges, at least one of the two must have no more than $k$ crossings. We prove a general upper bound on the number of edges of min-$k$-planar drawings, a finer upper bound for $k=3$, and tight upper bounds for $k=1,2$. Also, we study the inclusion relations between min-$k$-planar graphs (i.e., graphs admitting min-$k$-planar drawings) and $k$-planar graphs.
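To make the defining condition concrete, here is a minimal sketch (not from the paper) that checks whether a drawing, given only as a list of crossing edge pairs, satisfies the min-$k$-planar condition; the edge labels and input encoding are illustrative assumptions.

```python
from collections import Counter

def is_min_k_planar(crossings, k):
    """Check the min-k-planar condition for a drawing described by its
    crossings: `crossings` lists one (e, f) pair per crossing point.
    For every pair of crossing edges, at least one of the two edges
    must be crossed at most k times in total."""
    count = Counter()                          # total crossings per edge
    for e, f in crossings:
        count[e] += 1
        count[f] += 1
    return all(count[e] <= k or count[f] <= k for e, f in crossings)

# Edge 'a' crosses 'b', 'c', 'd' once each, so 'a' has 3 crossings,
# but every edge crossing 'a' has only 1 crossing: min-1-planar.
crossings = [("a", "b"), ("a", "c"), ("a", "d")]
print(is_min_k_planar(crossings, 1))           # True

# Three mutually crossing edges: every edge is crossed twice,
# so every crossing pair violates the condition for k = 1.
crossings_bad = [("a", "b"), ("a", "c"), ("b", "c")]
print(is_min_k_planar(crossings_bad, 1))       # False
```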
This tutorial gives an advanced introduction to string diagrams and graph languages for higher-order computation. The subject matter develops in a principled way, starting from the two-dimensional syntax of key categorical concepts such as functors, adjunctions, and strictification, and leading up to Cartesian Closed Categories, the core mathematical model of the lambda calculus and of functional programming languages. This methodology inverts the usual approach of proceeding from syntax to a categorical interpretation, by rationally reconstructing a syntax from the categorical model. The result is a graph syntax -- more precisely, a hierarchical hypergraph syntax -- which is shown to improve on the conventional linear term syntax in many ways. The rest of the tutorial focuses on applications of interest to programming languages: operational semantics, general frameworks for type inference, and complex whole-program transformations such as closure conversion and automatic differentiation.
Higher-order regularization problem formulations are popular frameworks used in machine learning, inverse problems, and image/signal processing. In this paper, we consider the computational problem of finding the minimizer of the Sobolev $\mathrm{W}^{1,p}$ semi-norm with a data-fidelity term. We propose a discretization procedure and prove convergence rates between our numerical solution and the target function. Our approach consists of discretizing an appropriate gradient flow problem in space and time. The space discretization is a nonlocal approximation of the $p$-Laplacian operator, and our rates directly depend on the localization parameter $\epsilon_n$ and the time mesh-size $\tau_n$. We precisely characterize the asymptotic behaviour of $\epsilon_n$ and $\tau_n$ in order to ensure convergence to the considered minimizer. Finally, we apply our results to the setting of random graph models.
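As a rough illustration of the kind of scheme described, the sketch below runs an explicit-Euler gradient flow for a nonlocal $p$-Dirichlet energy with an $L^2$ data-fidelity term; the kernel, scaling constants, and step sizes are assumptions chosen for illustration, not the paper's precise discretization or rates.

```python
import numpy as np

def nonlocal_p_laplacian_flow(x, f, p=1.5, eps=0.3, tau=1e-3, lam=1.0, n_steps=500):
    """Explicit-Euler gradient flow of a nonlocal p-Dirichlet energy plus an
    L2 data-fidelity term (illustrative scaling only, not the paper's scheme).

    x : (n, d) sample locations, f : (n,) observed/noisy values."""
    n = x.shape[0]
    # nonlocal weights: indicator kernel of radius eps (any compactly
    # supported kernel eta(|x_i - x_j| / eps) could be used instead)
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    w = (dist <= eps).astype(float)
    np.fill_diagonal(w, 0.0)

    u = f.copy()
    for _ in range(n_steps):
        diff = u[:, None] - u[None, :]                            # u_i - u_j
        flux = w * (np.abs(diff) + 1e-12) ** (p - 2) * diff       # |u_i-u_j|^{p-2}(u_i-u_j)
        grad = (p / (n * eps ** p)) * flux.sum(axis=1) + lam * (u - f)
        u = u - tau * grad                                        # explicit Euler step
    return u

# toy usage: denoise noisy samples of a smooth 1D signal
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(200, 1))
f = np.sin(2 * np.pi * x[:, 0]) + 0.1 * rng.standard_normal(200)
u = nonlocal_p_laplacian_flow(x, f)
```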
We study monotonicity testing of functions $f \colon \{0,1\}^d \to \{0,1\}$ using sample-based algorithms, which are only allowed to observe the value of $f$ on points drawn independently from the uniform distribution. A classic result by Bshouty-Tamon (J. ACM 1996) proved that monotone functions can be learned with $\exp(O(\min\{\frac{1}{\varepsilon}\sqrt{d},d\}))$ samples, and it is not hard to show that this bound extends to testing. Prior to our work, the only lower bound for this problem was $\Omega(\sqrt{\exp(d)/\varepsilon})$ in the small $\varepsilon$ parameter regime, when $\varepsilon = O(d^{-3/2})$, due to Goldreich-Goldwasser-Lehman-Ron-Samorodnitsky (Combinatorica 2000). Thus, the sample complexity of monotonicity testing was wide open for $\varepsilon \gg d^{-3/2}$. We resolve this question, obtaining a tight lower bound of $\exp(\Omega(\min\{\frac{1}{\varepsilon}\sqrt{d},d\}))$ for all $\varepsilon$ at most a sufficiently small constant. In fact, we prove a much more general result, showing that the sample complexity of $k$-monotonicity testing and learning for functions $f \colon \{0,1\}^d \to [r]$ is $\exp(\Theta(\min\{\frac{rk}{\varepsilon}\sqrt{d},d\}))$. For testing with one-sided error, we show that the sample complexity is $\exp(\Theta(d))$. Beyond the hypercube, we prove nearly tight bounds (up to polylog factors of $d,k,r,1/\varepsilon$ in the exponent) of $\exp(\widetilde{\Theta}(\min\{\frac{rk}{\varepsilon}\sqrt{d},d\}))$ on the sample complexity of testing and learning measurable $k$-monotone functions $f \colon \mathbb{R}^d \to [r]$ under product distributions. Our upper bound improves upon the previous bound of $\exp(\widetilde{O}(\min\{\frac{k}{\varepsilon^2}\sqrt{d},d\}))$ by Harms-Yoshida (ICALP 2022) for Boolean functions ($r=2$).
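For intuition about the model, the sketch below shows the natural one-sided, sample-based monotonicity tester: it observes uniform samples of $f$ and rejects only if the sample itself contains a violating comparable pair. It is purely illustrative and unrelated to the paper's lower-bound construction; the sample sizes are arbitrary.

```python
import itertools
import random

def sample_based_monotonicity_test(f, d, num_samples):
    """One-sided sample-based tester: observe f on uniform samples and reject
    only if the sample contains a comparable pair violating monotonicity."""
    samples = [tuple(random.randint(0, 1) for _ in range(d)) for _ in range(num_samples)]
    labeled = [(x, f(x)) for x in samples]

    def dominated(x, y):                      # x <= y coordinate-wise
        return all(a <= b for a, b in zip(x, y))

    for (x, fx), (y, fy) in itertools.combinations(labeled, 2):
        if (dominated(x, y) and fx > fy) or (dominated(y, x) and fy > fx):
            return False                      # violation witnessed: reject
    return True                               # accept (never errs on monotone f)

# Majority is monotone; the parity of the first two bits is far from monotone.
d = 10
maj = lambda x: int(sum(x) > d // 2)
par = lambda x: x[0] ^ x[1]
print(sample_based_monotonicity_test(maj, d, 500))   # True
print(sample_based_monotonicity_test(par, d, 500))   # very likely False
```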
The development of secure cryptographic protocols, and of attacks against them, has long been studied in the literature. Since sophisticated quantum attacks threaten the classical cryptographic protocols used in everyday applications, developing post-quantum protocols has become a pressing need. In post-quantum cryptography, elliptic-curve-based protocols are of particular interest to researchers. While elliptic curves over finite fields have been studied comprehensively, a comparable study over finite rings is still missing. In this work, we generalize the study of Weierstrass elliptic curves to the finite ring $\mathbb{Z}_n$ through a classification. Several expressions for computing quantities that are critical to the study of elliptic curves are derived. A comprehensive computational classification of Weierstrass elliptic curves over $\mathbb{Z}_n$ is also provided for rigorous understanding.
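As a small illustration of the objects being classified, the following sketch brute-forces the affine solution count of a short Weierstrass equation over $\mathbb{Z}_n$ and checks the usual nonsingularity condition (discriminant a unit, assuming $\gcd(n,6)=1$), together with the CRT multiplicativity of the count. This is elementary background, not the paper's classification.

```python
from math import gcd

def affine_points_weierstrass(a, b, n):
    """Brute-force count of affine solutions of y^2 = x^3 + a*x + b over Z_n."""
    return sum(1 for x in range(n) for y in range(n)
               if (y * y - (x * x * x + a * x + b)) % n == 0)

def is_nonsingular(a, b, n):
    """Usual nonsingularity condition (with gcd(n, 6) = 1 assumed): the
    discriminant -16*(4a^3 + 27b^2) must be a unit modulo n."""
    disc = -16 * (4 * a ** 3 + 27 * b ** 2)
    return gcd(disc, n) == 1

# CRT multiplicativity: the affine count over Z_15 is the product of the
# counts over Z_3 and Z_5.
a, b = 1, 1
print(is_nonsingular(a, b, 15))                                      # True
print(affine_points_weierstrass(a, b, 15))                           # 24
print(affine_points_weierstrass(a, b, 3) * affine_points_weierstrass(a, b, 5))  # 3 * 8 = 24
```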
While previous CNN-based models have exhibited promising results for salient object detection (SOD), their ability to explore global long-range dependencies is restricted. Our previous work, the Visual Saliency Transformer (VST), addressed this constraint from a transformer-based sequence-to-sequence perspective to unify RGB and RGB-D SOD. In VST, we developed a multi-task transformer decoder that concurrently predicts saliency and boundary outcomes in a pure transformer architecture. Moreover, we introduced a novel token upsampling method called reverse T2T for predicting a high-resolution saliency map effortlessly within transformer-based structures. Building upon the VST model, we further propose an efficient and stronger VST version in this work, i.e., VST++. To mitigate the computational costs of the VST model, we propose a Select-Integrate Attention (SIA) module, which partitions the foreground into fine-grained segments and aggregates background information into a single coarse-grained token. To incorporate 3D depth information with low cost, we design a novel depth position encoding method tailored for depth maps. Furthermore, we introduce a token-supervised prediction loss to provide straightforward guidance for the task-related tokens. We evaluate our VST++ model across various transformer-based backbones on RGB, RGB-D, and RGB-T SOD benchmark datasets. Experimental results show that our model outperforms existing methods while achieving a 25% reduction in computational costs without significant performance compromise. The demonstrated strong generalization ability, enhanced performance, and heightened efficiency of our VST++ model highlight its potential.
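The following is only a rough reading of the Select-Integrate idea as stated above: keep fine-grained foreground tokens, pool the background into one coarse token, and attend over the result. All shapes, the threshold, and the pooling rule are assumptions, not the paper's SIA module.

```python
import torch
import torch.nn.functional as F

def select_integrate_attention(q, tokens, fg_prob, thresh=0.5):
    """Keep fine-grained foreground tokens, collapse the background into one
    averaged token, then run scaled dot-product attention over the result.
    q: (B, Nq, C) queries, tokens: (B, N, C), fg_prob: (B, N) in [0, 1]."""
    outputs = []
    for b in range(q.shape[0]):                      # per-sample token selection
        fg = tokens[b][fg_prob[b] >= thresh]         # fine-grained foreground tokens
        bg = tokens[b][fg_prob[b] < thresh]
        kv = fg
        if bg.shape[0] > 0:
            kv = torch.cat([fg, bg.mean(dim=0, keepdim=True)], dim=0)  # + 1 coarse token
        attn = F.softmax(q[b] @ kv.t() / kv.shape[-1] ** 0.5, dim=-1)
        outputs.append(attn @ kv)
    return torch.stack(outputs)

# toy usage
B, N, Nq, C = 2, 196, 4, 64
out = select_integrate_attention(torch.randn(B, Nq, C), torch.randn(B, N, C),
                                 torch.rand(B, N))
print(out.shape)  # torch.Size([2, 4, 64])
```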
This paper considers correlation clustering on unweighted complete graphs. We give a combinatorial algorithm that returns a single clustering solution that is simultaneously $O(1)$-approximate for all $\ell_p$-norms of the disagreement vector. This proves that minimal sacrifice is needed in order to optimize different norms of the disagreement vector. Our algorithm is the first combinatorial approximation algorithm for the $\ell_2$-norm objective, and more generally the first combinatorial algorithm for the $\ell_p$-norm objective when $2 \leq p < \infty$. It is also faster than all previous algorithms that minimize the $\ell_p$-norm of the disagreement vector, with run-time $O(n^\omega)$, where $\omega$ is the exponent of $n \times n$ matrix multiplication. When the maximum positive degree in the graph is at most $\Delta$, this can be improved to a run-time of $O(n\Delta^2 \log n)$.
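For reference, the sketch below computes the per-vertex disagreement vector of a clustering on a complete graph with $+/-$ labels and evaluates its $\ell_p$-norms; the instance is a toy example and the code is purely definitional, not the paper's algorithm.

```python
import numpy as np
from itertools import combinations

def disagreement_vector(similar, clustering, n):
    """Disagreement vector of a clustering on the complete graph with +/- labels:
    entry u counts edges at u that are '+' but cut, or '-' but inside a cluster.
    `similar` is the set of '+' edges (as frozensets); all other pairs are '-'."""
    disagreements = np.zeros(n, dtype=int)
    for u, v in combinations(range(n), 2):
        same_cluster = clustering[u] == clustering[v]
        positive = frozenset((u, v)) in similar
        if positive != same_cluster:          # this edge is in disagreement
            disagreements[u] += 1
            disagreements[v] += 1
    return disagreements

def lp_cost(disagreements, p):
    return np.linalg.norm(disagreements.astype(float), ord=p)

# toy instance: a '+' triangle {0,1,2} plus a '+' edge (2,3)
n = 4
similar = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2), (2, 3)]}
clustering = [0, 0, 0, 1]                     # keep the triangle together
d = disagreement_vector(similar, clustering, n)
print(d, lp_cost(d, 1), lp_cost(d, 2), lp_cost(d, np.inf))  # [0 0 1 1] 2.0 1.41.. 1.0
```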
We present streaming algorithms for the graph $k$-matching problem in both the insert-only and dynamic models. Our algorithms, with space complexity matching the best upper bounds, have optimal or near-optimal update time, significantly improving on previous results. More specifically, for the insert-only streaming model, we present a one-pass algorithm with optimal space complexity $O(k^2)$ and optimal update time $O(1)$, that with high probability computes a maximum weighted $k$-matching of a given weighted graph. The update time of our algorithm significantly improves the previous upper bound of $O(\log k)$, which was derived only for $k$-matching on unweighted graphs. For the dynamic streaming model, we present a one-pass algorithm that with high probability computes a maximum weighted $k$-matching in $O(Wk^2 \cdot \mbox{polylog}(n))$ space and with $O(\mbox{polylog}(n))$ update time, where $W$ is the number of distinct edge weights. Again, the update time of our algorithm improves the previous upper bound of $O(k^2 \cdot \mbox{polylog}(n))$. This algorithm, when applied to unweighted graphs, gives a streaming algorithm on the dynamic model whose space and update time complexities are both near-optimal. Our results also imply a streaming approximation algorithm for maximum weighted $k$-matching whose space complexity matches the best known upper bound with a significantly improved update time.
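As a definitional reference for the objective these streaming algorithms compute, the sketch below finds a maximum weight $k$-matching offline by brute force; it is exponential and unrelated to the streaming algorithms themselves.

```python
from itertools import combinations

def max_weight_k_matching(edges, k):
    """Brute-force reference (offline, exponential): a maximum weight set of
    at most k pairwise vertex-disjoint edges.

    edges: list of (u, v, weight) triples."""
    best, best_w = [], 0.0
    for size in range(1, k + 1):
        for subset in combinations(edges, size):
            vertices = [x for (u, v, _) in subset for x in (u, v)]
            if len(vertices) == len(set(vertices)):        # pairwise vertex-disjoint
                w = sum(wt for (_, _, wt) in subset)
                if w > best_w:
                    best, best_w = list(subset), w
    return best, best_w

edges = [(0, 1, 3.0), (1, 2, 2.5), (2, 3, 4.0), (0, 3, 1.0)]
print(max_weight_k_matching(edges, k=2))   # ([(0, 1, 3.0), (2, 3, 4.0)], 7.0)
```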
The Koopman operator presents an attractive approach to achieve global linearization of nonlinear systems, making it a valuable method for simplifying the understanding of complex dynamics. While data-driven methodologies have exhibited promise in approximating finite-dimensional Koopman operators, they grapple with various challenges, such as the judicious selection of observables, dimensionality reduction, and the ability to predict complex system behaviours accurately. This study presents a novel approach termed Mori-Zwanzig autoencoder (MZ-AE) to robustly approximate the Koopman operator in low-dimensional spaces. The proposed method leverages a nonlinear autoencoder to extract key observables for approximating a finite invariant Koopman subspace and integrates a non-Markovian correction mechanism using the Mori-Zwanzig formalism. Consequently, this approach yields a closed representation of dynamics within the latent manifold of the nonlinear autoencoder, thereby enhancing the precision and stability of the Koopman operator approximation. Demonstrations showcase the technique's ability to capture regime transitions in the flow around a circular cylinder. It also provides a low-dimensional approximation of the chaotic Kuramoto-Sivashinsky dynamics with promising short-term predictability and robust long-term statistical performance. By bridging the gap between data-driven techniques and the mathematical foundations of Koopman theory, MZ-AE offers a promising avenue for improved understanding and prediction of complex nonlinear dynamics.
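The sketch below is a rough rendering of the stated idea: an autoencoder whose latent state is advanced by a linear Koopman operator plus a learned non-Markovian memory correction over past latent states. Layer sizes, memory length, and the form of the correction are assumptions, not the paper's MZ-AE architecture.

```python
import torch
import torch.nn as nn

class MZAutoencoderSketch(nn.Module):
    """Autoencoder whose latent state evolves via a linear Koopman operator
    plus a learned non-Markovian memory correction over past latent states."""

    def __init__(self, state_dim, latent_dim=8, memory_len=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                     nn.Linear(64, state_dim))
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)   # Markovian part
        self.memory = nn.ModuleList([nn.Linear(latent_dim, latent_dim, bias=False)
                                     for _ in range(memory_len)])      # memory kernels

    def step(self, z_history):
        """Advance the latent state given [z_t, z_{t-1}, ...] (newest first)."""
        z_next = self.koopman(z_history[0])
        for lag, corr in enumerate(self.memory):
            if lag + 1 < len(z_history):
                z_next = z_next + corr(z_history[lag + 1])   # non-Markovian correction
        return z_next

# toy rollout: encode an initial state, advance in latent space, decode
model = MZAutoencoderSketch(state_dim=32)
history = [model.encoder(torch.randn(1, 32))]
for _ in range(10):
    history.insert(0, model.step(history))
x_pred = model.decoder(history[0])
print(x_pred.shape)  # torch.Size([1, 32])
```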
Link prediction on knowledge graphs (KGs) is a key research topic. Previous work mainly focused on binary relations, paying less attention to higher-arity relations although they are ubiquitous in real-world KGs. This paper considers link prediction upon n-ary relational facts and proposes a graph-based approach to this task. The key to our approach is to represent the n-ary structure of a fact as a small heterogeneous graph, and model this graph with edge-biased fully-connected attention. The fully-connected attention captures universal inter-vertex interactions, while edge-aware attentive biases are introduced to specifically encode the graph structure and its heterogeneity. In this fashion, our approach fully models global and local dependencies in each n-ary fact, and hence can more effectively capture associations therein. Extensive evaluation verifies the effectiveness and superiority of our approach. It performs substantially and consistently better than the current state of the art across a variety of n-ary relational benchmarks. Our code is publicly available.
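A generic form of edge-biased fully-connected attention is sketched below: standard scaled dot-product attention over the vertices of a small fact graph, with an additive learnable bias per edge type. The exact parameterization used in the paper may differ; all dimensions here are illustrative.

```python
import torch
import torch.nn.functional as F

def edge_biased_attention(h, edge_type, num_edge_types):
    """Fully-connected attention over the vertices of a small fact graph with
    an additive learnable bias per edge type.

    h: (N, C) vertex embeddings, edge_type: (N, N) integer edge-type ids."""
    N, C = h.shape
    # In practice these projections and the bias table would be trained
    # module parameters; they are created inline here only for illustration.
    wq, wk, wv = (torch.nn.Linear(C, C) for _ in range(3))
    bias_table = torch.nn.Embedding(num_edge_types, 1)    # one scalar bias per type

    q, k, v = wq(h), wk(h), wv(h)
    scores = q @ k.t() / C ** 0.5                         # vertex-pair scores (N, N)
    scores = scores + bias_table(edge_type).squeeze(-1)   # edge-aware attentive bias
    return F.softmax(scores, dim=-1) @ v                  # (N, C)

# toy n-ary fact graph: 5 vertices, 3 edge types
h = torch.randn(5, 16)
edge_type = torch.randint(0, 3, (5, 5))
print(edge_biased_attention(h, edge_type, num_edge_types=3).shape)  # torch.Size([5, 16])
```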
The Q-learning algorithm is known to be affected by the maximization bias, i.e., the systematic overestimation of action values, an important issue that has recently received renewed attention. Double Q-learning has been proposed as an efficient algorithm to mitigate this bias. However, this comes at the price of an underestimation of action values, in addition to increased memory requirements and slower convergence. In this paper, we introduce a new way to address the maximization bias in the form of a "self-correcting algorithm" for approximating the maximum of an expected value. Our method balances the overestimation of the single estimator used in conventional Q-learning and the underestimation of the double estimator used in Double Q-learning. Applying this strategy to Q-learning results in Self-correcting Q-learning. We show theoretically that this new algorithm enjoys the same convergence guarantees as Q-learning while being more accurate. Empirically, it performs better than Double Q-learning in domains with rewards of high variance, and it even attains faster convergence than Q-learning in domains with rewards of zero or low variance. These advantages transfer to a Deep Q Network implementation that we call Self-correcting DQN and which outperforms regular DQN and Double DQN on several tasks in the Atari 2600 domain.
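The bias contrast referred to above can be reproduced numerically: with one action of true value 0.5 and the rest of value 0, the single estimator (max of sample means, as in Q-learning) overestimates the true maximum, while the double estimator (argmax on one half of the data, evaluated on the other half, as in Double Q-learning) underestimates it. The paper's self-correcting estimator, which sits between the two, is not reproduced here.

```python
import numpy as np

def estimator_bias_demo(num_actions=10, samples_per_action=20, trials=20000, seed=0):
    """One action has true value 0.5, the rest 0, so the true maximum is 0.5.
    Returns the average single-estimator and double-estimator estimates."""
    rng = np.random.default_rng(seed)
    true_means = np.zeros(num_actions)
    true_means[0] = 0.5
    single, double = [], []
    for _ in range(trials):
        rewards = true_means[:, None] + rng.standard_normal((num_actions, samples_per_action))
        half = samples_per_action // 2
        mu_a = rewards[:, :half].mean(axis=1)     # estimator A (picks the argmax)
        mu_b = rewards[:, half:].mean(axis=1)     # estimator B (evaluates that action)
        single.append(rewards.mean(axis=1).max()) # single estimator: max of sample means
        double.append(mu_b[np.argmax(mu_a)])      # double estimator: cross-evaluation
    return np.mean(single), np.mean(double)

print(estimator_bias_demo())  # (above 0.5, below 0.5): over- vs underestimation
```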