We revisit two well-studied problems, Bounded Degree Vertex Deletion and Defective Coloring, where the input is a graph $G$ and a target degree $\Delta$, and we are asked either to edit or to partition the graph so that its maximum degree becomes bounded by $\Delta$. Both problems are known to be intractable when parameterized by treewidth. We revisit this parameterization, as well as several related parameters, and present a more fine-grained picture of the complexity of both problems. Both admit straightforward DP algorithms with table sizes $(\Delta+2)^{\mathrm{tw}}$ and $(\chi_\mathrm{d}(\Delta+1))^{\mathrm{tw}}$ respectively, where $\mathrm{tw}$ is the input graph's treewidth and $\chi_\mathrm{d}$ the number of available colors. We show that both algorithms are optimal under SETH, even if we replace treewidth by pathwidth. Along the way, we also obtain an algorithm for Defective Coloring whose complexity is quasi-linear in the table size, thus settling the complexity of both problems for these parameters. We then consider the more restricted parameter tree-depth and bridge the gap left by known lower bounds by showing that neither problem can be solved in time $n^{o(\mathrm{td})}$ under ETH. To do so, we employ a recursive low-tree-depth construction that may be of independent interest. Finally, we show that for both problems a $\mathrm{vc}^{o(\mathrm{vc})}$-time algorithm would violate ETH, so the already known algorithms are optimal. Our proof relies on a new application of the technique of $d$-detecting families introduced by Bonamy et al. Our results, although mostly negative in nature, paint a clear picture of the complexity of both problems in the landscape of parameterized complexity, since in all cases we provide essentially matching upper and lower bounds.
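For intuition on where these table sizes come from, the following sketches the folklore DP state space over a tree decomposition (an illustration of the standard approach, not necessarily the authors' exact formulation):

```latex
% Bounded Degree Vertex Deletion: each vertex $v$ in a bag is either
% deleted or kept along with a counter of its surviving neighbors seen
% so far, which may never exceed $\Delta$:
\[
  \mathrm{state}(v) \in \{\mathrm{deleted}\} \cup \{0,1,\dots,\Delta\}
  \quad\Longrightarrow\quad (\Delta+2)^{\mathrm{tw}} \text{ table entries.}
\]
% Defective Coloring: a kept vertex instead stores one of $\chi_\mathrm{d}$
% colors plus a same-color-neighbor counter in $\{0,\dots,\Delta\}$,
% giving $(\chi_\mathrm{d}(\Delta+1))^{\mathrm{tw}}$ table entries.
```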
In this paper, we establish a sharp upper bound on the number of fixed points that a certain class of neural networks can have. The networks under study (autoencoders) can be viewed as discrete dynamical systems whose nonlinearities are given by the choice of activation functions. To this end, we introduce a new class $\mathcal{F}$ of $C^1$ activation functions that is closed under composition and contains, e.g., the logistic sigmoid function. We use this class to show that any 1-dimensional neural network of arbitrary depth with activation functions in $\mathcal{F}$ has at most three fixed points. Due to the simple nature of such networks, we are able to completely understand their fixed points, providing a foundation for the much-needed connection between the application and theory of deep neural networks.
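To make the statement concrete, the sketch below numerically locates the fixed points of a depth-2, width-1 network built from logistic sigmoids, a member of the class $\mathcal{F}$ described above (the weights are hypothetical choices for illustration, not taken from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def net(x, w1=8.0, b1=-4.0, w2=8.0, b2=-4.0):
    # A 1-dimensional, depth-2 network: f(x) = sigma(w2 * sigma(w1*x + b1) + b2)
    return sigmoid(w2 * sigmoid(w1 * x + b1) + b2)

# A fixed point solves f(x) = x; locate sign changes of g(x) = f(x) - x.
xs = np.linspace(-1.0, 2.0, 100000)
g = net(xs) - xs
roots = xs[:-1][np.sign(g[:-1]) != np.sign(g[1:])]
print(roots)  # three roots (near 0.02, 0.5, 0.98), consistent with the bound of three
```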
An expurgating linear function (ELF) is a linear outer code that disallows the low-weight codewords of the inner code. ELFs can be designed either to maximize the minimum distance or to minimize the codeword error rate (CER) of the expurgated code. List decoding of the inner code from the noiseless all-zeros codeword is an efficient way to identify ELFs that maximize the minimum distance of the expurgated code. For convolutional inner codes, this paper provides distance spectrum union (DSU) upper bounds on the CER of the concatenated code. For short codeword lengths, ELFs transform a good inner code into a great concatenated code. For a constant message size of $K=64$ bits or a constant codeword blocklength of $N=152$ bits, an ELF can reduce the gap at CER $10^{-6}$ between the DSU and the random-coding union (RCU) bounds from over 1 dB for the inner code alone to 0.23 dB for the concatenated code. The DSU bounds can also characterize puncturing that mitigates the rate overhead of the ELF while maintaining the DSU-to-RCU gap. The reduction in DSU-to-RCU gap comes with a minimal increase in average complexity. List Viterbi decoding guided by the ELF approaches maximum likelihood (ML) decoding of the concatenated code, and the average list size converges to 1 as SNR increases. Thus, average complexity is similar to Viterbi decoding on the trellis of the inner code. For rare large-magnitude noise events, which occur with probability below the frame error rate (FER) of the inner code, a deep search in the list finds the ML codeword.
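The decoding loop just described can be summarized schematically as below (a sketch under stated assumptions: `list_viterbi` is a hypothetical generator yielding message candidates in decreasing-likelihood order, and `H_elf` a parity-check matrix for the ELF over GF(2); neither name comes from the paper):

```python
import numpy as np

def elf_guided_decode(received, list_viterbi, H_elf, max_list_size=1024):
    """Return the most likely list candidate that is consistent with the ELF.

    At high SNR the first candidate almost always passes, so the average
    list rank (and hence complexity) approaches that of plain Viterbi
    decoding of the inner code; deep ranks are reached only for rare
    large-magnitude noise events.
    """
    for rank, candidate in enumerate(list_viterbi(received), start=1):
        if rank > max_list_size:
            break                                    # give up and declare a failure
        if not np.any((H_elf @ candidate) % 2):      # zero ELF syndrome: candidate valid
            return candidate, rank                   # rank feeds the average list size
    return None, max_list_size
```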
In the field of decision trees, most previous studies have difficulty ensuring the statistical optimality of predictions on new data and suffer from overfitting, because trees are usually used only to represent prediction functions constructed from the given data. In contrast, some studies, including this paper, use trees to represent the stochastic data observation processes behind the given data. Moreover, they derive the statistically optimal prediction, which is robust against overfitting, based on Bayesian decision theory by assuming a prior distribution over the trees. However, these studies still face a problem in computing this Bayes optimal prediction, because it involves an infeasible summation over all division patterns of the feature space, which are represented by the trees and some parameters. In particular, an open problem is the summation with respect to combinations of division axes, i.e., the assignment of features to the inner nodes of the tree. We solve this with a Markov chain Monte Carlo method whose step size is adaptively tuned according to a posterior distribution over the trees.
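For orientation, here is a minimal Metropolis-Hastings skeleton of the sampling idea (a generic sketch: `propose` and `log_posterior` are hypothetical stand-ins for the paper's tree proposal, e.g. re-drawing the division axis at an inner node, and its tree posterior; the adaptive step-size tuning mentioned above is omitted):

```python
import math
import random

def metropolis_hastings(tree, log_posterior, propose, n_steps=10000):
    # Assumes a symmetric proposal, so the acceptance ratio needs no
    # proposal-density correction.
    samples, lp = [], log_posterior(tree)
    for _ in range(n_steps):
        candidate = propose(tree)          # e.g. resample one inner node's feature axis
        lp_candidate = log_posterior(candidate)
        if math.log(random.random()) < lp_candidate - lp:
            tree, lp = candidate, lp_candidate
        samples.append(tree)
    return samples  # Bayes-optimal predictions are then averaged over these samples
```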
We introduce the local information cost (LIC), which quantifies the amount of information that nodes in a network need to learn when solving a graph problem. We show that the local information cost yields a natural lower bound on the communication complexity of distributed algorithms. For the synchronous CONGEST $KT_1$ model, where each node has initial knowledge of its neighbors' IDs, we prove that $\Omega(\frac{\text{LIC}_\gamma(P)}{\log\tau \log n})$ bits are required for solving a graph problem $P$ with a $\tau$-round algorithm that errs with probability at most $\gamma$. Our result is the first lower bound that yields a general trade-off between communication and time for graph problems in the CONGEST $KT_1$ model. We demonstrate how to apply the local information cost by deriving a lower bound on the communication complexity of computing routing tables for all-pairs-shortest-paths (APSP) routing, as well as of computing a spanner with multiplicative stretch $2t-1$ that consists of at most $O(n^{1+\frac{1}{t} + \epsilon})$ edges, where $\epsilon = O( {1}/{t^2} )$. More concretely, we derive the following lower bounds in the CONGEST model under the $KT_1$ assumption: For constructing routing tables, we show that any $O(\text{poly}(n))$-time algorithm has a communication complexity of $\Omega( {n^2}/{\log^2 n} )$ bits. Our main result is for constructing graph spanners: we show that any $O(\text{poly}(n))$-time algorithm must send at least $\tilde\Omega(\tfrac{1}{t^2} n^{1+{1}/{2t}})$ bits. Previously, only a trivial lower bound of $\tilde \Omega(n)$ bits was known for these problems.
This paper studies the fair range clustering problem, in which the data points come from different demographic groups and the goal is to pick $k$ centers with minimum clustering cost such that each group is at least minimally represented in the center set and no group dominates it. More precisely, given a set of $n$ points in a metric space $(P,d)$, where each point belongs to one of $\ell$ different demographic groups (i.e., $P = P_1 \uplus P_2 \uplus \cdots \uplus P_\ell$), and a set of $\ell$ intervals $[\alpha_1, \beta_1], \cdots, [\alpha_\ell, \beta_\ell]$ on the desired number of centers from each group, the goal is to pick a set of $k$ centers $C$ with minimum $\ell_p$-clustering cost (i.e., $(\sum_{v\in P} d(v,C)^p)^{1/p}$) such that $|C\cap P_i| \in [\alpha_i, \beta_i]$ for each group $i\in [\ell]$. In particular, fair range $\ell_p$-clustering captures fair range $k$-center, $k$-median, and $k$-means as special cases. In this work, we provide an $O(1)$-approximation algorithm for fair range $\ell_p$-clustering that picks at most $k+2\ell$ centers and may violate the upper bound of each demographic group only by an additive term of at most $2$.
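The objective and the feasibility constraint from this definition are simple to state in code (an illustrative sketch of the problem definition, not the paper's approximation algorithm):

```python
def lp_cost(points, centers, dist, p):
    # (sum_{v in P} d(v, C)^p)^(1/p), with d(v, C) = min over centers in C
    return sum(min(dist(v, c) for c in centers) ** p for v in points) ** (1.0 / p)

def is_fair(centers, group_of, intervals):
    # intervals[i] = (alpha_i, beta_i); group_of[c] is the demographic of center c
    counts = {}
    for c in centers:
        counts[group_of[c]] = counts.get(group_of[c], 0) + 1
    return all(alpha <= counts.get(i, 0) <= beta
               for i, (alpha, beta) in enumerate(intervals))
```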
The linear saturation number $\mathrm{sat}^{\mathrm{lin}}_k(n,\mathcal{F})$ (linear extremal number $\mathrm{ex}^{\mathrm{lin}}_k(n,\mathcal{F})$) of $\mathcal{F}$ is the minimum (maximum) number of hyperedges of an $n$-vertex linear $k$-uniform hypergraph containing no member of $\mathcal{F}$ as a subgraph, but such that the addition of any new hyperedge for which the resulting hypergraph is still a linear $k$-uniform hypergraph creates a copy of some hypergraph in $\mathcal{F}$. Determining $\mathrm{ex}_3^{\mathrm{lin}}(n,$ Berge-$C_3)$ is equivalent to the famous (6,3)-problem, which was settled in 1976. Since then, the linear extremal numbers of Berge cycles have been extensively studied. As the saturation counterpart of this line of work, we consider the problem of determining the linear saturation numbers of Berge cycles. In this paper, we prove that $\mathrm{sat}^{\mathrm{lin}}_k(n,$ Berge-$C_t)\ge \big\lfloor\frac{n-1}{k-1}\big\rfloor$ for all integers $k\ge3$ and $t\ge 3$, with equality when $t=3$. In addition, we provide an upper bound for $\mathrm{sat}^{\mathrm{lin}}_3(n,$ Berge-$C_4)$ and, for any disconnected Berge-$C_4$-saturated linear 3-uniform hypergraph, a lower bound on its number of hyperedges.
Finite element methods and kinematically coupled schemes that decouple the fluid velocity and the structure's displacement have been extensively studied for incompressible fluid-structure interaction (FSI) over the past decade. While these methods are known to be stable and easy to implement, optimal error analysis has remained challenging. Previous work has primarily relied on the classical elliptic projection technique, which is only suitable for parabolic problems and does not lead to optimal convergence of numerical solutions to FSI problems in the standard $L^2$ norm. In this article, we propose a new kinematically coupled scheme for an incompressible FSI thin-structure model and establish a new framework for the numerical analysis of FSI problems in terms of a newly introduced coupled non-stationary Ritz projection, which allows us to prove optimal-order convergence of the proposed method in the $L^2$ norm. The methodology presented in this article is also applicable to numerous other FSI models and serves as a fundamental tool for advancing research in this field.
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from, or the same as, the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods for learning causality and relations, along with the connections between causality and machine learning. This work points out, on a case-by-case basis, how big data facilitates, complicates, or motivates each approach.
Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining their value ranges. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. This study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
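As a concrete anchor for the class of algorithms such a review covers, here is a minimal random-search baseline (an illustrative sketch; `train_and_score` is a hypothetical user-supplied routine returning a validation score, not an API from any toolkit discussed in the paper):

```python
import random

def random_search(space, train_and_score, n_trials=50):
    # space maps each hyper-parameter name to a list of candidate values
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: random.choice(values) for name, values in space.items()}
        score = train_and_score(cfg)          # e.g. validation accuracy
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Example search space over common training hyper-parameters:
space = {"lr": [1e-4, 1e-3, 1e-2], "batch_size": [32, 64, 128], "depth": [2, 4, 8]}
```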
Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
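The core smoothing step is easy to illustrate: with a negative-entropy regularizer, the smoothed max is the log-sum-exp operator and its gradient is the softmax, i.e. a differentiable relaxation of the argmax (a minimal sketch of this one ingredient, not the full differentiable DP layer):

```python
import numpy as np

def smoothed_max(x, gamma=1.0):
    # gamma -> 0 recovers the hard max; larger gamma smooths more
    x = np.asarray(x, dtype=float)
    return gamma * np.log(np.sum(np.exp(x / gamma)))

def smoothed_argmax(x, gamma=1.0):
    # Gradient of smoothed_max: a softmax, i.e. a relaxed one-hot argmax.
    z = np.exp(np.asarray(x, dtype=float) / gamma)
    return z / z.sum()

print(smoothed_max([1.0, 3.0, 2.0], gamma=0.1))    # ~3.0 (close to the hard max)
print(smoothed_argmax([1.0, 3.0, 2.0], gamma=0.1)) # ~[0, 1, 0]
```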