In this paper, we consider algorithms for edge-coloring multigraphs $G$ of bounded maximum degree, i.e., $\Delta(G) = O(1)$. Shannon's theorem states that any multigraph of maximum degree $\Delta$ can be properly edge-colored with $\lfloor 3\Delta/2\rfloor$ colors. Our main results include algorithms for computing such colorings. We design deterministic and randomized sequential algorithms with running time $O(n\log n)$ and $O(n)$, respectively. This is the first improvement since the $O(n^2)$ algorithm in Shannon's original paper, and our randomized algorithm is optimal up to constant factors. We also develop distributed algorithms in the $\mathsf{LOCAL}$ model of computation. Namely, we design deterministic and randomized $\mathsf{LOCAL}$ algorithms with running time $\tilde O(\log^5 n)$ and $O(\log^2 n)$, respectively. The deterministic sequential algorithm is a simplified extension of earlier work of Gabow et al. on edge-coloring simple graphs. The other algorithms apply the entropy compression method in a way similar to recent work by the author and Bernshteyn, which designs algorithms for Vizing's theorem on simple graphs. We also extend their results to Vizing's theorem for multigraphs.
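As a point of reference for the problem being solved, here is a minimal sketch of naive greedy edge coloring of a multigraph. It uses at most $2\Delta - 1$ colors, weaker than the $\lfloor 3\Delta/2\rfloor$ Shannon bound the paper's algorithms achieve, and it is not one of the algorithms described above.

```python
# Naive greedy proper edge-coloring of a multigraph.  Each edge conflicts with
# at most 2*(Delta-1) incident edges, so greedy needs at most 2*Delta - 1
# colors -- weaker than Shannon's floor(3*Delta/2) bound, but it illustrates
# the problem the paper's algorithms solve faster and with fewer colors.

def greedy_edge_coloring(edges):
    """edges: list of (u, v) pairs; parallel edges and any hashable vertices allowed."""
    colors_at = {}          # vertex -> set of colors already used on incident edges
    coloring = []           # color assigned to edges[i]
    for u, v in edges:
        used = colors_at.setdefault(u, set()) | colors_at.setdefault(v, set())
        c = 0
        while c in used:    # smallest color free at both endpoints
            c += 1
        coloring.append(c)
        colors_at[u].add(c)
        colors_at[v].add(c)
    return coloring

# Example: a triangle with one doubled edge (Delta = 3).
print(greedy_edge_coloring([(0, 1), (0, 1), (1, 2), (0, 2)]))
```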
This paper presents Deep Networks for Improved Segmentation Edges (DeNISE), a novel data enhancement technique that uses edge detection and segmentation models to improve the boundary quality of segmentation masks. DeNISE utilizes the inherent differences between two sequential deep neural architectures to improve the accuracy of the predicted segmentation edge. DeNISE applies to all types of neural networks and is not trained end-to-end, allowing rapid experiments to discover which models complement each other. We test and apply DeNISE to building segmentation in aerial images. Aerial images are known for difficult conditions: they have low resolution and optical noise such as reflections, shadows, and visual obstructions. Overall, the paper demonstrates the potential of DeNISE. Using the technique, we improve on the baseline results, reaching a building IoU of 78.9%.
We consider the problem of computing the Maximal Exact Matches (MEMs) of a given pattern $P[1 .. m]$ on a large repetitive text collection $T[1 .. n]$, which is represented as a (hopefully much smaller) run-length context-free grammar of size $g_{rl}$. We show that the problem can be solved in time $O(m^2 \log^\epsilon n)$, for any constant $\epsilon > 0$, on a data structure of size $O(g_{rl})$. Further, on a locally consistent grammar of size $O(\delta\log\frac{n}{\delta})$, the time decreases to $O(m\log m(\log m + \log^\epsilon n))$. The value $\delta$ is a function of the substring complexity of $T$, and $\Omega(\delta\log\frac{n}{\delta})$ is a tight lower bound on the compressibility of repetitive texts $T$, so our structure has optimal size in terms of $n$ and $\delta$. We extend our results to several related problems, such as finding $k$-MEMs, MUMs, and rare MEMs, and to various applications.
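For concreteness, the following brute-force baseline spells out the MEM definition, assuming the standard notion that a MEM is a substring of $P$ that occurs in $T$ and cannot be extended in either direction; it runs in roughly $O(m^2 n)$ time and uses none of the grammar-based machinery contributed by the paper.

```python
# Naive enumeration of the Maximal Exact Matches (MEMs) of a pattern P with
# respect to a text T: substrings P[i..j] that occur in T and cannot be
# extended to the left or to the right while still occurring in T.

def mems(P, T):
    m = len(P)
    out = []
    for i in range(m):
        j = i
        while j < m and P[i:j + 1] in T:   # longest match starting at position i
            j += 1
        if j == i:
            continue                        # P[i] does not occur in T at all
        if i > 0 and P[i - 1:j] in T:
            continue                        # not left-maximal
        out.append((i, j - 1))              # P[i..j-1] is a MEM (0-based, inclusive)
    return out

print(mems("abracadabra", "cadabxabra"))
```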
In this work, we consider the list-decodability and list-recoverability of codes in the zero-rate regime. Briefly, a code $\mathcal{C} \subseteq [q]^n$ is $(p,\ell,L)$-list-recoverable if for all tuples of input lists $(Y_1,\dots,Y_n)$ with each $Y_i \subseteq [q]$ and $|Y_i|=\ell$ the number of codewords $c \in \mathcal{C}$ such that $c_i \notin Y_i$ for at most $pn$ choices of $i \in [n]$ is less than $L$; list-decoding is the special case of $\ell=1$. In recent work by Resch, Yuan and Zhang~(ICALP~2023) the zero-rate threshold for list-recovery was determined for all parameters: that is, the work explicitly computes $p_*:=p_*(q,\ell,L)$ with the property that for all $\epsilon>0$ (a) there exist infinite families of positive-rate $(p_*-\epsilon,\ell,L)$-list-recoverable codes, and (b) any $(p_*+\epsilon,\ell,L)$-list-recoverable code has rate $0$. In fact, in the latter case the code has constant size, independent of $n$. However, the constant size in their work is quite large in $1/\epsilon$, at least $|\mathcal{C}|\geq (\frac{1}{\epsilon})^{O(q^L)}$. Our contribution in this work is to show that for all choices of $q,\ell$ and $L$ with $q \geq 3$, any $(p_*+\epsilon,\ell,L)$-list-recoverable code must have size $O_{q,\ell,L}(1/\epsilon)$, and furthermore this upper bound is complemented by a matching lower bound $\Omega_{q,\ell,L}(1/\epsilon)$. This greatly generalizes work by Alon, Bukh and Polyanskiy~(IEEE Trans.\ Inf.\ Theory~2018) which focused only on the case of a binary alphabet (and thus necessarily only list-decoding). We remark that we can in fact recover the same result for $q=2$ and even $L$, as obtained by Alon, Bukh and Polyanskiy: we thus strictly generalize their work.
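The definition above can be made concrete with a brute-force checker, feasible only for toy parameters; the code and the example parameter values below are illustrative assumptions, not constructions from the paper.

```python
# Brute-force check of (p, l, L)-list-recoverability: for every tuple of input
# lists (Y_1, ..., Y_n) with Y_i a size-l subset of the alphabet, count the
# codewords c with c_i not in Y_i in at most p*n coordinates; the code is
# list-recoverable iff this count is always below L.

from itertools import combinations, product

def is_list_recoverable(code, q, p, l, L):
    n = len(code[0])
    for lists in product(combinations(range(q), l), repeat=n):
        covered = sum(
            1 for c in code
            if sum(c[i] not in lists[i] for i in range(n)) <= p * n
        )
        if covered >= L:
            return False
    return True

# Toy example: the ternary repetition code of length 4.
code = [(a,) * 4 for a in range(3)]
print(is_list_recoverable(code, q=3, p=0.25, l=1, L=2))   # True
```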
We study the differential privacy (DP) of a core ML problem, linear ordinary least squares (OLS), a.k.a. $\ell_2$-regression. Our key result is that the approximate LS algorithm (ALS) (Sarlos, 2006), a randomized solution to the OLS problem primarily used to improve performance on large datasets, also preserves privacy. ALS achieves a better privacy/utility tradeoff, without modifications or further noising, when compared to alternative private OLS algorithms which modify and/or noise OLS. We give the first {\em tight} DP-analysis for the ALS algorithm and the standard Gaussian mechanism (Dwork et al., 2014) applied to OLS. Our methodology directly improves the privacy analysis of (Blocki et al., 2012) and (Sheffet, 2019) and introduces new tools which may be of independent interest: (1) the exact spectrum of $(\epsilon, \delta)$-DP parameters (``DP spectrum") for mechanisms whose output is a $d$-dimensional Gaussian, and (2) an improved DP spectrum for random projection (compared to (Blocki et al., 2012) and (Sheffet, 2019)). All methods for private OLS (including ours) assume, often implicitly, restrictions on the input database, such as bounds on leverage and residuals. We prove that such restrictions are necessary. Hence, computing the privacy of mechanisms such as ALS requires estimating these database parameters, which can be infeasible for big datasets. For more complex ML models, DP bounds may not even be tractable. There is a need for blackbox DP-estimators (Lu et al., 2022) which empirically estimate data-dependent privacy. We demonstrate the effectiveness of such a DP-estimator by empirically recovering a DP-spectrum that matches our theory for OLS. This validates the DP-estimator in a nontrivial ML application, opening the door to its use in more complex nonlinear ML settings where theory is unavailable.
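As an illustration of the sketch-and-solve idea underlying ALS, the following toy example solves a small OLS problem through a plain Gaussian random projection; the exact sketching transform, its dimensions, and all of the privacy accounting are assumptions or omissions relative to the paper.

```python
# Minimal sketch-and-solve least squares: project (A, b) down with a random
# matrix S and solve the smaller problem min ||S A x - S b||.  No privacy
# analysis is performed here; this only shows the randomized OLS primitive.

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 10, 200                # samples, features, sketch size (assumed values)

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

S = rng.standard_normal((k, n)) / np.sqrt(k)      # random Gaussian projection
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

print(np.linalg.norm(x_exact - x_sketch))         # small: the sketched solution is close
```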
In this paper, we propose a probabilistic reduced-dimensional vector autoregressive (PredVAR) model with oblique projections. This model partitions the measurement space into a dynamic subspace and a static subspace that do not need to be orthogonal. The partition allows us to apply an oblique projection to extract dynamic latent variables (DLVs) from high-dimensional data with maximized predictability. We develop an alternating iterative PredVAR algorithm that exploits the interaction between updating the latent VAR dynamics and estimating the oblique projection, using expectation maximization (EM) and a statistical constraint. In addition, the noise covariance matrices are estimated as a natural outcome of the EM method. A simulation case study of the nonlinear Lorenz oscillation system illustrates the advantages of the proposed approach over two alternatives.
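To make the notion of an oblique projection concrete, the following small numerical check builds a projector $P = V(W^\top V)^{-1}W^\top$ and verifies that it is idempotent but not symmetric; the matrices here are random placeholders, not quantities estimated by the PredVAR EM algorithm.

```python
# An oblique projector P = V (W^T V)^{-1} W^T maps onto the column space of V
# along directions determined by W.  It satisfies P @ P = P but, unlike an
# orthogonal projector, is generally not symmetric.

import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 2                       # ambient dimension and subspace dimension
V = rng.standard_normal((n, r))   # basis of the dynamic subspace (placeholder)
W = rng.standard_normal((n, r))   # defines the direction of the projection

P = V @ np.linalg.inv(W.T @ V) @ W.T

print(np.allclose(P @ P, P))      # True: P is a projector
print(np.allclose(P, P.T))        # False (almost surely): the projection is oblique
```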
We consider the problem of dynamically maintaining the convex hull of a set $S$ of points in the plane under the following special sequence of insertions and deletions (called {\em window-sliding updates}): insert a point to the right of all points of $S$ and delete the leftmost point of $S$. We propose an $O(|S|)$-space data structure that can handle each update in $O(1)$ amortized time, such that standard binary-search-based queries on the convex hull of $S$ can be answered in $O(\log h)$ time, where $h$ is the number of vertices of the convex hull of $S$, and the convex hull itself can be output in $O(h)$ time.
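For intuition, the insertion half of a window-sliding update coincides with the standard monotone-chain step, sketched below for the upper hull; handling the deletion of the leftmost point within the same amortized bound, together with the $O(\log h)$ queries, is exactly what the proposed data structure contributes and is not shown here.

```python
# "Insert a point to the right of all current points" is the classic monotone-
# chain push: pop hull vertices that stop being convex, then append the new
# point (amortized O(1) per insertion).

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def push_right(upper_hull, p):
    """Append point p (rightmost so far) to the upper hull."""
    while len(upper_hull) >= 2 and cross(upper_hull[-2], upper_hull[-1], p) >= 0:
        upper_hull.pop()
    upper_hull.append(p)

hull = []
for p in [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]:
    push_right(hull, p)
print(hull)   # upper hull of the inserted points
```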
This work develops a novel all-at-once space-time preconditioning approach for resistive magnetohydrodynamics (MHD), with a focus on model problems targeting fusion reactor design. We consider parallel-in-time methods due to the long time domains required to capture the physics of interest, as well as the complexity of the underlying system and the resulting computational cost of long-time integration. To ameliorate this cost by using many processors, we develop an approach to solving the whole space-time system that is parallelizable in both space and time. We follow the space-time block preconditioning concept first introduced by Danieli et al. in 2022 for incompressible flow, in which an effective preconditioner for classic sequential time-stepping is extended to the space-time setting. The starting point for our derivation is the continuous Schur complement preconditioner of Cyr et al. in 2021, which we generalise in order to produce, to our knowledge, the first space-time block preconditioning approach for the challenging equations governing incompressible resistive MHD. The numerical results are promising for the model problems of island coalescence and tearing mode, with the computational overhead of space-time preconditioning relative to sequential time-stepping being modest, primarily in the range of 2x-5x, which is low for parallel-in-time schemes in general. Additionally, the scaling results for inner (linear) and outer (nonlinear) iterations are flat in the case of fixed time-step size and grow only very slowly in the case of time-step refinement.
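To illustrate what an all-at-once space-time system looks like, the sketch below assembles the block lower-bidiagonal matrix for backward Euler applied to a linear toy problem $u' = Au$; the matrix $A$, the step size, and the step count are arbitrary assumptions, and neither the MHD physics nor the block preconditioner itself is represented.

```python
# All-at-once space-time system for u' = A u with backward Euler: stacking all
# time steps gives a block lower-bidiagonal matrix; solving it in one shot is
# equivalent to sequential time stepping.  Space-time block preconditioners
# target systems of this all-at-once form.

import numpy as np
from scipy.sparse import identity, eye, kron, random as sprandom
from scipy.sparse.linalg import spsolve

nx, nt, dt = 20, 30, 0.01
A = -identity(nx) + 0.1 * sprandom(nx, nx, density=0.2, random_state=0)

step = identity(nx) - dt * A                     # one backward-Euler step
# `step` on the block diagonal, -I on the first block subdiagonal.
M = kron(identity(nt), step) - kron(eye(nt, k=-1), identity(nx))

rhs = np.zeros(nx * nt)
rhs[:nx] = np.ones(nx)                           # initial condition enters the first block row

u_all = spsolve(M.tocsc(), rhs)                  # all time steps solved at once
print(u_all.reshape(nt, nx)[-1][:5])             # state at the final time step
```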
Image fusion aims to generate a high-quality image from multiple images captured under varying conditions. The key problem of this task is to preserve complementary information while filtering out irrelevant information for the fused result. However, existing methods address this problem by leveraging static convolutional neural networks (CNNs), which suffer from two inherent limitations during feature extraction: they cannot handle spatial-variant contents and they lack guidance from the multiple inputs. In this paper, we propose a novel mutual-guided dynamic network (MGDN) for image fusion, which allows for effective information utilization across different locations and inputs. Specifically, we design a mutual-guided dynamic filter (MGDF) for adaptive feature extraction, composed of a mutual-guided cross-attention (MGCA) module and a dynamic filter predictor, where the former incorporates additional guidance from different inputs and the latter generates spatial-variant kernels for different locations. In addition, we introduce a parallel feature fusion (PFF) module to effectively fuse local and global information of the extracted features. To further reduce the redundancy among the extracted features while simultaneously preserving their shared structural information, we devise a novel loss function that combines the minimization of normalized mutual information (NMI) with an estimated gradient mask. Experimental results on five benchmark datasets demonstrate that our proposed method outperforms existing methods on four image fusion tasks. The code and model are publicly available at: //github.com/Guanys-dar/MGDN.
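A minimal cross-attention sketch in the spirit of the mutual-guided cross-attention idea is given below, with each input's features attending to the other's; the layer sizes and tensor shapes are assumptions, and this is not the released MGDN/MGCA implementation.

```python
# Two inputs guide each other: features of one branch act as queries while the
# other branch supplies keys/values, and vice versa.

import torch
import torch.nn as nn

class CrossGuidedBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, tokens, dim) feature maps flattened over space
        a_guided, _ = self.attn_ab(feat_a, feat_b, feat_b)   # A queries B
        b_guided, _ = self.attn_ba(feat_b, feat_a, feat_a)   # B queries A
        return feat_a + a_guided, feat_b + b_guided

block = CrossGuidedBlock()
a, b = torch.randn(2, 256, 64), torch.randn(2, 256, 64)
out_a, out_b = block(a, b)
print(out_a.shape, out_b.shape)   # torch.Size([2, 256, 64]) twice
```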
We propose a novel single shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector, by a semantic segmentation branch and a global activation module. The segmentation branch is supervised by weak segmentation ground-truth, i.e., no extra annotation is required. In conjunction with that, we employ a global activation module which learns the relationship between channels and object classes in a self-supervised manner. Comprehensive experimental results on both PASCAL VOC and MS COCO detection datasets demonstrate the effectiveness of the proposed method. In particular, with a VGG16 based DES, we achieve an mAP of 81.7 on VOC2007 test and an mAP of 32.8 on COCO test-dev with an inference time of 31.5 milliseconds per image on a Titan Xp GPU. With a lower-resolution version, we achieve an mAP of 79.7 on VOC2007 with an inference time of 13.0 milliseconds per image.
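One plausible reading of a global channel-activation module is a squeeze-and-excitation-style re-weighting of detection features, sketched below; this is an assumed interpretation for illustration only, not the DES implementation.

```python
# Global pooling followed by a small bottleneck produces per-channel weights
# that re-scale the feature map (a generic channel-attention sketch).

import torch
import torch.nn as nn

class GlobalActivation(nn.Module):
    def __init__(self, channels=512, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (batch, channels, H, W)
        w = self.fc(x.mean(dim=(2, 3)))       # global average pool -> channel weights
        return x * w[:, :, None, None]        # re-weight each channel

feat = torch.randn(2, 512, 38, 38)
print(GlobalActivation()(feat).shape)         # torch.Size([2, 512, 38, 38])
```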
In this paper, we propose a conceptually simple and geometrically interpretable objective function, i.e. additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and inter-class difference is large is of great importance in order to achieve good performance. Recently, Large-margin Softmax and Angular Softmax have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works. We also emphasize and discuss the importance of feature normalization in the paper. Most importantly, our experiments on LFW BLUFR and MegaFace show that our additive margin softmax loss consistently performs better than the current state-of-the-art methods using the same network architecture and training dataset. Our code has also been made available at //github.com/happynear/AMSoftmax
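A compact sketch of the additive-margin formulation follows: features and class weights are L2-normalized, the margin $m$ is subtracted from the target-class cosine, and the logits are scaled by $s$ before a standard cross-entropy; the values $s=30$ and $m=0.35$ are commonly used settings assumed here, and this is not the released AMSoftmax code.

```python
# AM-Softmax-style loss: cross-entropy over s * (cos(theta) - m * one_hot).

import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, feats, labels):
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))  # cosine similarities
        margin = F.one_hot(labels, cos.size(1)) * self.m              # subtract m on the target class
        return F.cross_entropy(self.s * (cos - margin), labels)

loss_fn = AMSoftmaxLoss(feat_dim=128, num_classes=10)
feats, labels = torch.randn(4, 128), torch.randint(0, 10, (4,))
print(loss_fn(feats, labels))
```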