Let $S_d(n)$ denote the minimum number of wires of a depth-$d$ (unbounded fan-in) circuit encoding an error-correcting code $C:\{0, 1\}^n \to \{0, 1\}^{32n}$ with distance at least $4n$. G\'{a}l, Hansen, Kouck\'{y}, Pudl\'{a}k, and Viola [IEEE Trans. Inform. Theory 59(10), 2013] proved that $S_d(n) = \Theta_d(\lambda_d(n)\cdot n)$ for any fixed $d \ge 3$. By improving their construction and analysis, we prove $S_d(n) = O(\lambda_d(n)\cdot n)$. Letting $d = \alpha(n)$, a version of the inverse Ackermann function, we obtain circuits of linear size. This depth $\alpha(n)$ is the minimum possible to within an additive constant of 2; we credit the nearly-matching depth lower bound to G\'{a}l et al., since it follows directly from their method (although it is not explicitly claimed or fully verified in that work), and is obtained by making some constants explicit in a graph-theoretic lemma of Pudl\'{a}k [Combinatorica, 14(2), 1994], extending it to super-constant depths. We also study a subclass of MDS codes $C: \mathbb{F}^n \to \mathbb{F}^m$ characterized by the Hamming-distance relation $\mathrm{dist}(C(x), C(y)) \ge m - \mathrm{dist}(x, y) + 1$ for any distinct $x, y \in \mathbb{F}^n$. (For linear codes this is equivalent to the generator matrix being totally invertible.) We call these superconcentrator-induced codes, and we show their tight connection with superconcentrators. Specifically, we observe that any linear or nonlinear circuit encoding a superconcentrator-induced code must be a superconcentrator graph, and that any superconcentrator graph can be converted to a linear circuit, over a sufficiently large field (exponential in the size of the graph), encoding a superconcentrator-induced code.
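To make the defining distance relation concrete, here is a small, self-contained check with made-up parameters (an illustration only): over $\mathbb{F}_{13}$ we take a Cauchy matrix as the generator, a standard example of a totally invertible matrix, and verify $\mathrm{dist}(C(x), C(y)) \ge m - \mathrm{dist}(x, y) + 1$ exhaustively via the nonzero differences $z = x - y$.

```python
# Small numerical check of the superconcentrator-induced code property
# dist(C(x), C(y)) >= m - dist(x, y) + 1 for a linear code C(x) = G x whose
# generator matrix G is totally invertible (every square submatrix nonsingular).
# A Cauchy matrix over a prime field is a standard example of such a matrix;
# the field size and dimensions below are illustrative only.
import itertools

P = 13            # prime field F_P
n, m = 3, 5       # message length and codeword length, m >= n

# Cauchy matrix G[i][j] = 1 / (a_i - b_j) mod P, with all a_i, b_j distinct.
a = [0, 1, 2, 3, 4]
b = [5, 6, 7]
G = [[pow((a[i] - b[j]) % P, -1, P) for j in range(n)] for i in range(m)]

def encode(x):
    return [sum(G[i][j] * x[j] for j in range(n)) % P for i in range(m)]

def weight(v):
    return sum(1 for c in v if c != 0)

# For a linear code, dist(C(x), C(y)) = wt(C(x - y)) and dist(x, y) = wt(x - y),
# so it suffices to check every nonzero difference vector z.
for z in itertools.product(range(P), repeat=n):
    if any(z):
        assert weight(encode(z)) >= m - weight(z) + 1
print("distance relation verified for all nonzero messages")
```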
The Skolem problem is a long-standing open problem in linear dynamical systems: can a linear recurrence sequence (LRS) ever reach 0 from a given initial configuration? Similarly, the positivity problem asks whether the LRS stays positive from an initial configuration. Deciding Skolem (or positivity) has been open for half a century: the best known decidability results are for LRS with special properties (e.g., low-order recurrences). But these problems are easier for "uninitialized" variants, where the initial configuration is not fixed but can vary arbitrarily: checking whether there is an initial configuration from which the LRS stays positive can be decided in polynomial time (Tiwari in 2004, Braverman in 2006). In this paper, we consider problems that lie between the initialized and uninitialized variants. More precisely, we ask whether 0 (resp. negative numbers) can be avoided from every initial configuration in a neighbourhood of a given initial configuration. This can be considered a robust variant of the Skolem (resp. positivity) problem. We show that these problems lie at the frontier of decidability: if the neighbourhood is given as part of the input, then robust Skolem and robust positivity are Diophantine-hard, i.e., solving either would entail major breakthroughs in Diophantine approximation, as happens for (non-robust) positivity. However, if one asks whether such a neighbourhood exists, then the problems turn out to be decidable with PSPACE complexity. Our techniques also allow us to tackle robustness for ultimate positivity, which asks whether there is a bound on the number of steps after which the LRS remains positive. There are two variants depending on whether we ask for a "uniform" bound on this number of steps. For the non-uniform variant, when the neighbourhood is open, the problem turns out to be tractable, even when the neighbourhood is given as input.
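As a purely illustrative sketch (not a decision procedure, since Skolem and positivity concern the entire infinite orbit), the snippet below simulates a hypothetical LRS from a given initial configuration and inspects a finite prefix for zeros and sign changes; the coefficients and initial values are made up.

```python
# Illustration only: simulate a linear recurrence sequence (LRS)
# u_{k+d} = c_1 u_{k+d-1} + ... + c_d u_k from a given initial configuration
# and inspect a finite prefix. This does NOT decide Skolem or positivity --
# both ask about the entire infinite orbit -- it merely illustrates the objects.
from fractions import Fraction

def lrs_prefix(coeffs, init, steps):
    """First `steps` terms of the LRS with recurrence coefficients `coeffs`
    (ordered c_1, ..., c_d) and initial configuration `init` (u_0, ..., u_{d-1})."""
    u = [Fraction(v) for v in init]
    while len(u) < steps:
        u.append(sum(c * x for c, x in zip(coeffs, reversed(u[-len(coeffs):]))))
    return u[:steps]

# Example: u_{k+2} = 3 u_{k+1} - 2 u_k with initial configuration (2, 3).
prefix = lrs_prefix([Fraction(3), Fraction(-2)], [2, 3], 20)
print("hits zero in prefix:   ", any(x == 0 for x in prefix))
print("stays positive so far: ", all(x > 0 for x in prefix))
```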
Standard mixed-integer programming formulations for the stable set problem on $n$-node graphs require $n$ integer variables. We prove that this is almost optimal: we give a family of $n$-node graphs for which every polynomial-size MIP formulation requires $\Omega(n/\log^2 n)$ integer variables. By a polyhedral reduction we obtain an analogous result for $n$-item knapsack problems. In both cases, this improves the previously known bounds of $\Omega(\sqrt{n}/\log n)$ by Cevallos, Weltge & Zenklusen (SODA 2018). To this end, we show that there exists a family of $n$-node graphs whose stable set polytopes satisfy the following: any $(1+\varepsilon/n)$-approximate extended formulation for these polytopes, for some constant $\varepsilon > 0$, has size $2^{\Omega(n/\log n)}$. Our proof extends and simplifies the information-theoretic methods due to G\"o\"os, Jain & Watson (FOCS 2016, SIAM J. Comput. 2018), who showed the same result for the case of exact extended formulations (i.e., $\varepsilon = 0$).
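For context, the "standard" formulation referred to in the first sentence is the textbook edge formulation with one binary variable per node: maximize $\sum_v x_v$ subject to $x_u + x_v \le 1$ for every edge $uv$ and $x_v \in \{0,1\}$. The sketch below writes it down for a 5-cycle and solves it by brute force (no MIP solver needed at this toy size; purely illustrative).

```python
# Standard stable set formulation with one binary variable per node, instantiated
# on the 5-cycle C_5 and solved by exhaustive enumeration of 0/1 assignments.
from itertools import product

nodes = range(5)
edges = [(i, (i + 1) % 5) for i in nodes]   # 5-cycle C_5

best = max(
    (x for x in product((0, 1), repeat=len(nodes))
     if all(x[u] + x[v] <= 1 for u, v in edges)),   # edge constraints x_u + x_v <= 1
    key=sum,
)
print("maximum stable set of C_5 has size", sum(best))   # -> 2
```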
We give a procedure for computing group-level $(\epsilon, \delta)$-DP guarantees for DP-SGD, when using Poisson sampling or fixed batch size sampling. Up to discretization errors in the implementation, the DP guarantees computed by this procedure are tight (assuming we release every intermediate iterate).
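For orientation only: the snippet below applies the generic group-privacy conversion, which is typically far from tight, to turn an illustrative item-level $(\epsilon, \delta)$ guarantee into a group-level one. It is not the procedure described above, merely a baseline showing what a group-level guarantee asserts and why tight accounting is valuable.

```python
# Not the paper's procedure: the generic group-privacy conversion, which turns an
# item-level (eps, delta)-DP guarantee into a (typically loose) group-level one
# for groups of size k:  (k * eps,  k * e^{(k-1) eps} * delta).
import math

def generic_group_dp(eps, delta, k):
    return k * eps, k * math.exp((k - 1) * eps) * delta

item_eps, item_delta = 1.0, 1e-6      # illustrative item-level guarantee
for k in (1, 2, 4, 8):
    g_eps, g_delta = generic_group_dp(item_eps, item_delta, k)
    print(f"group size {k}: eps = {g_eps:.1f}, delta = {g_delta:.2e}")
```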
Most diffusion models assume that the reverse process adheres to a Gaussian distribution. However, this approximation has not been rigorously validated, especially at the singularities t=0 and t=1. Improperly handling these singularities leads to an average-brightness issue in applications and limits the generation of images with extreme brightness or darkness. We focus on tackling these singularities from both theoretical and practical perspectives. First, we establish error bounds for the reverse-process approximation and demonstrate its Gaussian characteristics at the singular time steps. Based on this theoretical insight, we confirm that the singularity at t=1 is conditionally removable, whereas the one at t=0 is an inherent property. Building on these conclusions, we propose a novel plug-and-play method, SingDiffusion, for sampling at the initial singular time step; it not only resolves the average-brightness issue for a wide range of diffusion models without extra training effort, but also enhances their generation capability, achieving notably lower FID scores. Code and models are released at //github.com/PangzeCheung/SingDiffusion.
A subset $S$ of vertices in a graph $G$ is a secure dominating set of $G$ if $S$ is a dominating set of $G$ and, for each vertex $u \not\in S$, there is a vertex $v \in S$ such that $uv$ is an edge and $(S \setminus \{v\}) \cup \{u\}$ is also a dominating set of $G$. The secure domination number of $G$, denoted by $\gamma_{s}(G)$, is the cardinality of a smallest secure dominating set of $G$. In this paper, we prove that for any outerplanar graph $G$ with $n \geq 4$ vertices, $\gamma_{s}(G) \geq (n+4)/5$, and that this bound is tight.
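The definitions translate directly into a brute-force checker; the following sketch (exponential time, for tiny graphs only) verifies secure domination as defined above and computes $\gamma_{s}$ of the 5-cycle, an outerplanar graph, by exhaustive search.

```python
# Brute-force illustration of the definitions in the abstract: check whether a
# vertex subset S is a (secure) dominating set, and compute gamma_s by exhaustive
# search on a small graph. Purely illustrative; exponential time in general.
from itertools import combinations

def is_dominating(S, adj, vertices):
    S = set(S)
    return all(v in S or adj[v] & S for v in vertices)

def is_secure_dominating(S, adj, vertices):
    S = set(S)
    if not is_dominating(S, adj, vertices):
        return False
    # every u outside S needs a defender v in S with uv an edge such that
    # swapping v for u still leaves a dominating set
    return all(
        any(is_dominating((S - {v}) | {u}, adj, vertices) for v in adj[u] & S)
        for u in vertices if u not in S
    )

def secure_domination_number(adj):
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if is_secure_dominating(S, adj, vertices):
                return k

# Example: the 5-cycle C_5 (an outerplanar graph on n = 5 vertices).
n = 5
adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
print("gamma_s(C_5) =", secure_domination_number(adj))   # -> 3
```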
The presence of toxic and gender-identity-derogatory language in open-source software (OSS) communities has recently become a focal point for researchers. Such comments not only lead to frustration and disengagement among developers but may also drive them to leave OSS projects. Despite ample evidence suggesting that diverse teams enhance productivity, the existence of toxic or gender-identity-discriminatory communication poses a significant threat to the participation of individuals from marginalized groups and, as such, may act as a barrier to fostering diversity and inclusion in OSS projects. However, there is a notable lack of research dedicated to exploring the association between gender-based toxic and derogatory language and the perceptible diversity of open-source software teams. Consequently, this study aims to investigate how such content influences the gender, ethnicity, and tenure diversity of open-source software development teams. To achieve this, we extract data from active GitHub projects, assess various project characteristics, and identify instances of toxic and gender-discriminatory language within issue/pull request comments. Using these attributes, we construct a regression model to explore how they are associated with the perceptible diversity of those projects.
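Schematically, the regression step can be pictured as in the sketch below; it uses synthetic data and invented variable names, and is not the study's actual model specification.

```python
# Purely schematic sketch of a project-level regression of a diversity measure on
# project attributes, including a toxic-comment count. Synthetic data throughout;
# variable names and the model form are illustrative assumptions only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_projects = 300
X = np.column_stack([
    rng.poisson(5, n_projects),          # toxic / derogatory comment count
    rng.integers(2, 200, n_projects),    # number of contributors
    rng.integers(1, 120, n_projects),    # project age in months
])
gender_diversity = (0.3 - 0.01 * X[:, 0] + 0.001 * X[:, 1]
                    + rng.normal(0, 0.05, n_projects))

model = sm.OLS(gender_diversity, sm.add_constant(X)).fit()
print(model.params)                      # fitted intercept and coefficients
```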
Equipping the rototranslation group $SE(2)$ with a sub-Riemannian structure inspired by the visual cortex V1, we propose algorithms for image inpainting and enhancement based on hypoelliptic diffusion. We improve on previous implementations of the methods of Citti, Sarti, and Boscain et al. by proposing an alternative that prevents fading and is capable of producing sharper results, in a procedure that we call WaxOn-WaxOff. We also exploit the sub-Riemannian structure to define a completely new unsharp filter using $SE(2)$, analogous to the classical unsharp filter for 2D image processing. We demonstrate our method on the enhancement of blood vessels in retinal scans.
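For reference, the classical 2D unsharp filter that the $SE(2)$ construction is analogous to can be sketched in a few lines (the Euclidean baseline only, not the sub-Riemannian filter proposed here; parameter values are arbitrary).

```python
# Classical 2D unsharp mask: add back a multiple of the detail that a Gaussian
# blur removes. Euclidean baseline only, not the SE(2) filter of the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.5):
    """image: 2D float array in [0, 1]; sigma: blur scale; amount: sharpening strength."""
    blurred = gaussian_filter(image, sigma=sigma)
    sharpened = image + amount * (image - blurred)
    return np.clip(sharpened, 0.0, 1.0)

# Toy usage on random data (a real use case would load e.g. a retinal scan).
img = np.random.rand(128, 128)
out = unsharp_mask(img)
print(out.shape, out.min(), out.max())
```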
When training deep neural networks, the phenomenon of $\textit{dying neurons}$ (units that become inactive or saturated and output zero during training) has traditionally been viewed as undesirable, linked with optimization challenges, and contributing to plasticity loss in continual learning scenarios. In this paper, we reassess this phenomenon, focusing on sparsity and pruning. By systematically exploring the impact of various hyperparameter configurations on dying neurons, we unveil their potential to facilitate simple yet effective structured pruning algorithms. We introduce $\textit{Demon Pruning}$ (DemP), a method that controls the proliferation of dead neurons, dynamically leading to network sparsity. Achieved through a combination of noise injection on active units and a one-cycled regularization schedule, DemP stands out for its simplicity and broad applicability. Experiments on the CIFAR-10 and ImageNet datasets demonstrate that DemP surpasses existing structured pruning techniques, showcasing superior accuracy-sparsity tradeoffs and training speedups. These findings suggest a novel perspective on dying neurons as a valuable resource for efficient model compression and optimization.
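To illustrate the underlying signal (dead units as a structured-pruning opportunity) rather than DemP itself, here is a minimal numpy sketch with made-up layer sizes: it detects hidden ReLU units that never fire on a batch and removes them structurally.

```python
# Illustration of the signal exploited above, not the DemP algorithm itself:
# a ReLU unit that outputs zero on every input in a batch is "dead" and can be
# removed structurally, shrinking the layer. Tiny numpy MLP with made-up sizes;
# the small weight scale typically leaves some units that never fire.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = 0.05 * rng.standard_normal((256, 32)), rng.standard_normal(256)
W2, b2 = rng.standard_normal((10, 256)), rng.standard_normal(10)

def forward(x):                      # x: (batch, 32)
    h = np.maximum(x @ W1.T + b1, 0) # ReLU hidden layer, shape (batch, 256)
    return h @ W2.T + b2, h

x = rng.standard_normal((512, 32))
_, h = forward(x)
alive = (h > 0).any(axis=0)          # dead = never positive on this batch
print(f"dead hidden units: {int((~alive).sum())} of {alive.size}")

# Structured pruning: drop each dead unit's incoming row and outgoing column.
W1_pruned, b1_pruned, W2_pruned = W1[alive], b1[alive], W2[:, alive]
print("pruned hidden width:", W1_pruned.shape[0])
```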
The large-matrix limit laws of the rescaled largest eigenvalue of the orthogonal, unitary and symplectic $n$-dimensional Gaussian ensembles -- and of the corresponding Laguerre ensembles (Wishart distributions) for various regimes of the parameter $\alpha$ (degrees of freedom $p$) -- are known to be the Tracy-Widom distributions $F_\beta$ ($\beta=1,2,4$). We will establish (paying particular attention to large, or small, ratios $p/n$) that, with careful choices of the rescaling constants and the expansion parameter $h$, the limit laws embed into asymptotic expansions in powers of $h$, where $h \asymp n^{-2/3}$ resp. $h \asymp (n\,\wedge\,p)^{-2/3}$. We find explicit analytic expressions of the first few expansion terms as linear combinations, with rational polynomial coefficients, of higher-order derivatives of the limit law $F_\beta$. With a proper parametrization, the expansions in the Gaussian cases can be understood, for given $n$, as the limit $p\to\infty$ of the Laguerre cases. Whereas the results for $\beta=2$ are presented with proof, the discussion of the cases $\beta=1,4$ is based on some hypotheses, focussing on the algebraic aspects of actually computing the polynomial coefficients. For the purposes of illustration and validation, the various results are checked against simulation data with a sample size of a thousand million.
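A minimal simulation sketch for the unitary case ($\beta=2$) shows the rescaling in question; the dimension and sample size here are tiny compared with the validation runs described above, and the sketch only illustrates the limit law, not the higher-order corrections.

```python
# GUE (beta = 2): the rescaled largest eigenvalue n^(1/6) * (lambda_max - 2*sqrt(n))
# converges in law to the Tracy-Widom distribution F_2 (asymptotic mean about -1.77,
# standard deviation about 0.90). Tiny n and sample size, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n, samples = 200, 500

def gue_largest_eigenvalue(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (a + a.conj().T) / 2         # off-diagonal complex variance 1, diagonal N(0,1)
    return np.linalg.eigvalsh(h)[-1]

rescaled = np.array([n ** (1 / 6) * (gue_largest_eigenvalue(n) - 2 * np.sqrt(n))
                     for _ in range(samples)])
print("sample mean:", rescaled.mean(), " sample std:", rescaled.std())
```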
Let $\mathsf{TH}_k$ denote the $k$-out-of-$n$ threshold function: given $n$ input Boolean variables, the output is $1$ if and only if at least $k$ of the inputs are $1$. We consider the problem of computing the $\mathsf{TH}_k$ function using noisy readings of the Boolean variables, where each reading is incorrect with some fixed and known probability $p \in (0,1/2)$. As our main result, we show that, when $k = o(n)$, it is both sufficient and necessary to use $$(1 \pm o(1)) \frac{n\log \frac{k}{\delta}}{D_{\mathsf{KL}}(p || 1-p)}$$ queries in expectation to compute the $\mathsf{TH}_k$ function with a vanishing error probability $\delta = o(1)$, where $D_{\mathsf{KL}}(p || 1-p)$ denotes the Kullback-Leibler divergence between $\mathsf{Bern}(p)$ and $\mathsf{Bern}(1-p)$ distributions. In particular, this says that $(1 \pm o(1)) \frac{n\log \frac{1}{\delta}}{D_{\mathsf{KL}}(p || 1-p)}$ queries in expectation are both sufficient and necessary to compute the $\mathsf{OR}$ and $\mathsf{AND}$ functions of $n$ Boolean variables. Compared to previous work, our result tightens the dependence on $p$ in both the upper and lower bounds.
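For a feel of the scale of this bound, the snippet below evaluates the leading term $n\log(k/\delta)/D_{\mathsf{KL}}(p\,\|\,1-p)$ for some arbitrary example parameters (natural logarithms throughout; the expression is base-independent as long as the logarithm and the divergence use the same base).

```python
# Numerical evaluation of the leading-order query bound from the abstract,
#   n * log(k / delta) / D_KL(p || 1 - p),
# ignoring the (1 +/- o(1)) factor; parameter values are arbitrary examples.
import math

def d_kl_bern(p, q):
    """KL divergence D(Bern(p) || Bern(q)), natural log."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def threshold_query_bound(n, k, delta, p):
    return n * math.log(k / delta) / d_kl_bern(p, 1 - p)

n, delta, p = 10**6, 1e-3, 0.1
for k in (1, 10, 100):               # k = 1 is the OR function
    print(f"k = {k:>3}: ~{threshold_query_bound(n, k, delta, p):.3e} queries")
```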