Any continuous piecewise-linear function $F\colon \mathbb{R}^{n}\to \mathbb{R}$ can be represented as a linear combination of $\max$ functions of at most $n+1$ affine-linear functions. In our previous paper [``Representing piecewise linear functions by functions with small arity'', AAECC, 2023], we showed that this upper bound of $n+1$ arguments is tight. In the present paper, we extend this result by establishing a correspondence between the function $F$ and the minimal number of arguments that are needed in any such decomposition. We show that the tessellation of the input space $\mathbb{R}^{n}$ induced by the function $F$ has a direct connection to the number of arguments in the $\max$ functions.
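For instance, in dimension $n=1$ the function $|x|$ already needs the full $n+1=2$ arguments, since $|x|=\max(x,-x)$ is not affine, while a function such as $|x|+|x-1|=\max(x,-x)+\max(x-1,1-x)$ is a linear combination of such maxima.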
Not all convex functions on $\mathbb{R}^n$ have finite minimizers; some can only be minimized by sequences that tend to infinity. In this work, we aim to develop a theory for understanding such minimizers at infinity. We study astral space, a compact extension of $\mathbb{R}^n$ to which such points at infinity have been added. Astral space is constructed to be as small as possible while still ensuring that all linear functions can be continuously extended to the new space. Although astral space includes all of $\mathbb{R}^n$, it is not a vector space, nor even a metric space. However, it is sufficiently well-structured to allow useful and meaningful extensions of concepts of convexity, conjugacy, and subdifferentials. We develop these concepts and analyze various properties of convex functions on astral space, including the detailed structure of their minimizers, exact characterizations of continuity, and convergence of descent algorithms.
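A simple illustration of this phenomenon: the convex function $f(x)=e^{x}$ on $\mathbb{R}$ has infimum $0$ but no finite minimizer; it is minimized only along sequences $x_t\to-\infty$, i.e., at exactly the kind of point at infinity that astral space adjoins.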
Calibrating simulation models that take large quantities of multi-dimensional data as input is a hard simulation optimization problem. Existing adaptive sampling strategies offer a methodological solution. However, due to extreme noise levels and heteroskedasticity of system responses, they may not sufficiently reduce the computational cost of estimation or sustain the solution algorithm's progress within a limited budget. We propose integrating stratification with adaptive sampling to improve the efficiency of optimization. Stratification can exploit local dependence in the simulation inputs and outputs. Yet, the state of the art does not provide a full capability to adaptively stratify the data as different solution alternatives are evaluated. We devise two procedures for data-driven calibration problems that involve a large dataset with multiple covariates, aiming to calibrate models within a fixed overall simulation budget. The first approach dynamically stratifies the input data using binary trees, while the second approach uses closed-form solutions based on linearity assumptions between the objective function and concomitant variables. We find that dynamic adjustment of the stratification structure accelerates optimization and reduces run-to-run variability in the generated solutions. In a case study, calibrating a wind power simulation model that is widely used in the wind industry with the proposed stratified adaptive sampling yields better-calibrated parameters under a limited budget.
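A minimal sketch of the stratification idea (not the paper's procedures: it assumes within-stratum noise is estimated from pilot replications and the remaining budget is split by Neyman allocation, and all names are illustrative):
\begin{verbatim}
import numpy as np

# Illustrative sketch (not the paper's procedure): split a simulation budget
# across strata in proportion to stratum weight times estimated noise level.
def neyman_allocation(pilot_outputs, weights, budget):
    """pilot_outputs: list of 1-D arrays of pilot outputs, one per stratum;
    weights: stratum probabilities summing to one; budget: replications to allocate."""
    stds = np.array([np.std(y, ddof=1) for y in pilot_outputs])
    scores = np.asarray(weights) * stds
    if scores.sum() == 0.0:                       # degenerate case: spread evenly
        return np.full(len(pilot_outputs), budget // len(pilot_outputs))
    alloc = budget * scores / scores.sum()
    return np.floor(alloc).astype(int)

# Toy usage: three strata with very different noise levels get very different budgets.
rng = np.random.default_rng(0)
pilot = [rng.normal(0.0, s, size=20) for s in (0.1, 1.0, 5.0)]
print(neyman_allocation(pilot, [0.5, 0.3, 0.2], budget=1000))
\end{verbatim}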
Let ${\cal G}$ and ${\cal H}$ be minor-closed graph classes. The pair $({\cal H},{\cal G})$ is an Erd\H{o}s-P\'osa pair (EP-pair) if there is a function $f$ such that, for every $k$ and every $G\in{\cal G},$ either $G$ has $k$ pairwise vertex-disjoint subgraphs not belonging to ${\cal H},$ or there is a set $S\subseteq V(G)$ with $|S|\leq f(k)$ and $G-S\in{\cal H}.$ The classic result of Erd\H{o}s and P\'osa says that if $\mathcal{F}$ is the class of forests, then $({\cal F},{\cal G})$ is an EP-pair for every ${\cal G}$. The class ${\cal G}$ is an EP-counterexample for ${\cal H}$ if ${\cal G}$ is minimal with the property that $({\cal H},{\cal G})$ is not an EP-pair. We prove that for every ${\cal H}$ the set $\mathfrak{C}_{\cal H}$ of all EP-counterexamples for ${\cal H}$ is finite. In particular, we provide a complete characterization of $\mathfrak{C}_{\cal H}$ for every ${\cal H}$ and give a constructive upper bound on its size. Each class ${\cal G}\in \mathfrak{C}_{\cal H}$ can be described as the class of all minors of the graphs in a sequence of grid-like graphs $\langle \mathscr{W}_{k} \rangle_{k\in \mathbb{N}}.$ Moreover, each $\mathscr{W}_{k}$ admits a half-integral packing: $k$ copies of some $H\not\in{\cal H}$ where no vertex is used more than twice. This gives a complete delineation of the half-integrality threshold of the Erd\H{o}s-P\'osa property for minors and yields a constructive proof of Thomas' conjecture on the half-integral Erd\H{o}s-P\'osa property for minors (recently confirmed, non-constructively, by Liu). Let $h$ be the maximum size of a graph in ${\cal H}.$ For every class ${\cal H},$ we construct an algorithm that, given a graph $G$ and an integer $k,$ either outputs a half-integral packing of $k$ copies of some $H \not\in {\cal H}$ or outputs a set of at most $2^{k^{{\cal O}_h(1)}}$ vertices whose deletion creates a graph in ${\cal H},$ in time $2^{2^{k^{{\cal O}_h(1)}}}\cdot |G|^4\log |G|.$
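For concreteness, the classical instance of this definition reads: there is a constant $c$ such that, for every graph $G$ and every $k$, either $G$ contains $k$ pairwise vertex-disjoint cycles, or there is a set $S\subseteq V(G)$ with $|S|\leq c\,k\log k$ and $G-S\in{\cal F}$; that is, $({\cal F},{\cal G})$ is an EP-pair with $f(k)=O(k\log k)$ for every ${\cal G}$.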
Let $\Omega$ be a finite set of finitary operation symbols and let $\mathfrak V$ be a nontrivial variety of $\Omega$-algebras. Assume that for some set $\Gamma\subseteq\Omega$ of group operation symbols, all $\Omega$-algebras in $\mathfrak V$ are groups under the operations associated with the symbols in $\Gamma$. In other words, $\mathfrak V$ is assumed to be a nontrivial variety of expanded groups. In particular, $\mathfrak V$ can be a nontrivial variety of groups or rings. Our main result is that there are no post-quantum weakly pseudo-free families in $\mathfrak V$, even in the worst-case setting and/or the black-box model. In this paper, we restrict ourselves to families $(H_d\mathbin|d\in D)$ of computational and black-box $\Omega$-algebras (where $D\subseteq\{0,1\}^*$) such that for every $d\in D$, each element of $H_d$ is represented by a unique bit string of length polynomial in the length of $d$. In our main result, we use straight-line programs to represent nontrivial relations between elements of $\Omega$-algebras. Note that under certain conditions, this result depends on the classification of finite simple groups. Also, we define and study some types of weak pseudo-freeness for families of computational and black-box $\Omega$-algebras.
The inference of a large symmetric signal matrix $\mathbf{S} \in \mathbb{R}^{N\times N}$ corrupted by additive Gaussian noise is considered for two regimes of growth of the rank $M$ as a function of $N$. For sub-linear ranks $M=\Theta(N^\alpha)$ with $\alpha\in(0,1)$, the mutual information and minimum mean-square error (MMSE) are derived for two classes of signal matrices: (a) $\mathbf{S}=\mathbf{X}\mathbf{X}^\intercal$ with the entries of $\mathbf{X}\in\mathbb{R}^{N\times M}$ independent and identically distributed; (b) $\mathbf{S}$ sampled from a rotationally invariant distribution. Surprisingly, the formulas match the rank-one case. Two efficient algorithms are explored and conjectured to saturate the MMSE when no statistical-to-computational gap is present: (1) Decimation Approximate Message Passing; (2) a spectral algorithm based on a Rotation Invariant Estimator. For linear ranks $M=\Theta(N)$, the mutual information is rigorously derived for signal matrices drawn from a rotationally invariant distribution. Close connections with scalar inference in free probability are uncovered, which allow us to deduce a simple formula for the MMSE as an integral involving only the limiting spectral measure of the data matrix. An interesting issue is whether the information-theoretic phase transitions known for rank-one, and hence also sub-linear-rank, matrices persist at linear rank. Our analysis suggests that only a smoothed-out trace of the transitions persists. Furthermore, the change of behavior between the low- and truly high-rank regimes only happens at the linear scale $\alpha=1$.
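Under the scaling customary in this line of work (an illustrative reading of the setup, not a verbatim quote of the model), the observation takes the form \[\mathbf{Y} \;=\; \sqrt{\tfrac{\lambda}{N}}\,\mathbf{S} \;+\; \mathbf{Z},\] where $\mathbf{Z}$ is a symmetric matrix with i.i.d. Gaussian entries and $\lambda$ sets the signal-to-noise ratio; the mutual information and MMSE above refer to recovering $\mathbf{S}$ from $\mathbf{Y}$.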
We consider a weakly supervised learning scenario where the supervision signal is generated by a transition function $\sigma$ applied to the labels associated with multiple input instances. We formulate this problem as \emph{multi-instance Partial Label Learning (multi-instance PLL)}, an extension of the standard PLL problem. This problem arises in different fields, including latent structural learning and neuro-symbolic integration. Despite the existence of many learning techniques, limited theoretical analysis has been dedicated to this problem. In this paper, we provide the first theoretical study of multi-instance PLL with a possibly unknown transition $\sigma$. Our main contributions are as follows. Firstly, we establish a necessary and sufficient condition for the learnability of the problem. This condition non-trivially generalizes and relaxes the existing small ambiguity degree condition from the PLL literature, since we allow the transition to be deterministic. Secondly, we derive Rademacher-style error bounds based on a top-$k$ surrogate loss that is widely used in the neuro-symbolic literature. Finally, we conclude with empirical experiments on learning under unknown transitions. The empirical results align with our theoretical findings; however, they also expose the issue of scalability in the weak supervision literature.
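A minimal sketch of one possible top-$k$ surrogate in this setting (the exact surrogate analyzed in the paper may differ; the function and variable names are illustrative): the model produces per-instance class probabilities, and the loss keeps only the $k$ most likely label tuples consistent with the weak label.
\begin{verbatim}
import itertools
import numpy as np

# Illustrative sketch of a top-k surrogate loss for multi-instance PLL
# (the exact surrogate analyzed in the paper may differ; names are ours).
def topk_surrogate_loss(probs, sigma, target, k=3):
    """probs: (m, C) per-instance class probabilities; sigma: label tuple -> weak label;
    target: observed weak label; returns -log of the mass of the k most likely
    label tuples consistent with the weak label."""
    m, C = probs.shape
    consistent = []
    for tup in itertools.product(range(C), repeat=m):   # brute force; fine for small m, C
        if sigma(tup) == target:
            consistent.append(np.prod(probs[np.arange(m), tup]))
    if not consistent:
        raise ValueError("no label tuple is consistent with the weak label")
    topk = np.sort(consistent)[-k:]
    return -np.log(topk.sum() + 1e-12)

# Toy usage: two digit-like instances whose *sum* is the supervision signal.
probs = np.array([[0.1, 0.7, 0.2],    # instance 1 over classes {0, 1, 2}
                  [0.6, 0.3, 0.1]])   # instance 2
print(topk_surrogate_loss(probs, sigma=lambda t: sum(t), target=1, k=2))
\end{verbatim}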
We introduce the \textbf{B}i-Directional \textbf{S}parse \textbf{Hop}field Network (\textbf{BiSHop}), a novel end-to-end framework for deep tabular learning. BiSHop handles the two major challenges of deep tabular learning: the non-rotationally invariant data structure and the feature sparsity of tabular data. Our key motivation comes from the recently established connection between associative memory and attention mechanisms. Consequently, BiSHop uses a dual-component approach, sequentially processing data both column-wise and row-wise through two interconnected directional learning modules. Computationally, these modules are built from generalized sparse modern Hopfield layers, a sparse extension of the modern Hopfield model with adaptable sparsity. Methodologically, BiSHop facilitates multi-scale representation learning, capturing both intra-feature and inter-feature interactions, with adaptive sparsity at each scale. Empirically, through experiments on diverse real-world datasets, we demonstrate that BiSHop surpasses current SOTA methods with significantly fewer HPO runs, marking it as a robust solution for deep tabular learning.
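To illustrate the retrieval mechanism underlying such layers, here is a minimal sketch using plain sparsemax attention over stored patterns; BiSHop's generalized sparse Hopfield layers use a learnable-sparsity generalization of this idea, and all names below are illustrative rather than the paper's implementation.
\begin{verbatim}
import numpy as np

# Illustrative sketch: BiSHop's layers generalize this with adaptable sparsity;
# here plain sparsemax attention shows the sparse Hopfield retrieval mechanism.
def sparsemax(z):
    """Projection of z onto the probability simplex; yields exactly sparse weights."""
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted) - 1.0
    ks = np.arange(1, z.size + 1)
    support = z_sorted - cssv / ks > 0
    tau = cssv[support][-1] / ks[support][-1]
    return np.maximum(z - tau, 0.0)

def sparse_hopfield_retrieve(memories, query, beta=1.0):
    """One retrieval step: sparse attention over stored patterns, then readout."""
    weights = sparsemax(beta * memories @ query)   # (num_patterns,)
    return memories.T @ weights                    # retrieved pattern, shape (dim,)

# Toy usage: a noisy query is snapped back toward the closest stored pattern.
rng = np.random.default_rng(0)
memories = rng.normal(size=(3, 8))
query = memories[0] + 0.1 * rng.normal(size=8)
print(sparse_hopfield_retrieve(memories, query, beta=2.0))
\end{verbatim}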
We prove optimal concentration of measure for lifted functions on high dimensional expanders (HDX). Let $X$ be a $k$-dimensional HDX. We show for any $i\leq k$ and $f:X(i)\to [0,1]$: \[\Pr_{s\in X(k)}\left[\left|\underset{t\subseteq s}{\mathbb{E}}[f(t)]-\mu\right|\geq\varepsilon\right]\leq \exp\left(-\varepsilon^2\frac{k}{i}\right).\] Using this fact, we prove that high dimensional expanders are reverse hypercontractive, a powerful functional inequality from discrete analysis implying that for any sets $A,B \subset X(k)$, the probability that a $\rho$-correlated pair passes between them is at least \[\Pr_{s,s' \sim T_\rho}[s \in A, s' \in B] \geq \Pr[A]^{O(1)} \Pr[B]^{O(1)}.\] Our results hold under weak spectral assumptions on $X$. Namely, we prove exponential concentration of measure for any complex below the `Trickling-Down Threshold' (beyond which concentration may be arbitrarily poor), and optimal concentration for $\sqrt{k}$-skeletons of such complexes. We also show optimal bounds for the top dimension of stronger HDX, among other settings. We leverage our inequalities to prove several new agreement testing theorems on high dimensional expanders, including a new 99%-regime test for subsets, and a variant of the `Z-test' achieving inverse exponential soundness under the stronger assumption of $\ell_\infty$-expansion. The latter gives rise to the first optimal testers beyond the complete complex and products, a stepping stone toward the use of HDX in strong soundness PCPs. We also give applications within expansion, analysis, combinatorics, and coding theory, including a proof that two-sided HDX have optimal geometric overlap (giving the first explicit bounded-degree construction), near-optimal double samplers, new super-exponential degree lower bounds for certain HDX, distance-amplified list-decodable and locally testable codes, a Frankl-R\"odl Theorem, and more.
Given a graph $G=(V,E)$, a function $f:V\to \{0,1,2\}$ is said to be a \emph{Roman Dominating function} if for every $v\in V$ with $f(v)=0$, there exists a vertex $u\in N(v)$ such that $f(u)=2$. A Roman Dominating function $f$ is said to be an \emph{Independent Roman Dominating function} (or IRDF) if $V_1\cup V_2$ forms an independent set, where $V_i=\{v\in V~\vert~f(v)=i\}$ for $i\in \{0,1,2\}$. The total weight of $f$ is equal to $\sum_{v\in V} f(v)$ and is denoted by $w(f)$. The \emph{Independent Roman Domination Number} of $G$, denoted by $i_R(G)$, is defined as $\min\{w(f)~\vert~f$ is an IRDF of $G\}$. For a given graph $G$, the problem of computing $i_R(G)$ is known as the \emph{Minimum Independent Roman Domination problem}. The problem is already known to be NP-hard for bipartite graphs. In this paper, we further study the algorithmic complexity of the problem and propose polynomial-time algorithms to solve the Minimum Independent Roman Domination problem for distance-hereditary graphs, split graphs, and $P_4$-sparse graphs.
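To make the definition concrete, a minimal brute-force computation of $i_R(G)$ directly from the definition (illustrative only; the algorithms above are polynomial-time for the stated graph classes, whereas this sketch is exponential in $|V|$):
\begin{verbatim}
from itertools import product

# Brute-force computation of i_R(G) straight from the definition;
# exponential in |V|, intended only to illustrate the definition on tiny graphs.
def independent_roman_domination_number(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    vertices = list(adj)
    best = None
    for values in product((0, 1, 2), repeat=len(vertices)):
        f = dict(zip(vertices, values))
        # Roman domination: every vertex labelled 0 needs a neighbour labelled 2.
        if any(f[v] == 0 and all(f[u] != 2 for u in adj[v]) for v in vertices):
            continue
        # Independence: no edge between two vertices with positive labels.
        positives = [v for v in vertices if f[v] > 0]
        if any(u in adj[v] for v in positives for u in positives):
            continue
        best = sum(values) if best is None else min(best, sum(values))
    return best

# Toy usage: the path on four vertices 1-2-3-4 has i_R(P_4) = 3
# (e.g. f(2) = 2, f(4) = 1, f(1) = f(3) = 0).
p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(independent_roman_domination_number(p4))
\end{verbatim}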
We study a general factor analysis framework where the $n$-by-$p$ data matrix is assumed to follow a general exponential family distribution entry-wise. While this model framework has been proposed before, here we further relax its distributional assumption by using a quasi-likelihood setup. By parameterizing the mean-variance relationship of the data entries, we additionally introduce a dispersion parameter and entry-wise weights to model large variations and missing values. The resulting model is thus not only robust to distribution misspecification but also more flexible and able to capture non-Gaussian covariance structures of the data matrix. Our main focus is on efficient computational approaches to performing the factor analysis. Previous modeling frameworks rely on simulated maximum likelihood (SML) to find the factorization solution, but this method was shown to incur asymptotic bias when the simulated sample size grows more slowly than the square root of the sample size $n$, making it impractical for data matrices with large $n$. Borrowing from expectation-maximization (EM) and stochastic gradient descent (SGD), we investigate three estimation procedures based on iterative factorization updates. Our proposed procedures do not exhibit asymptotic bias and scale even better for large matrix factorizations, with error $O(1/p)$. To support our findings, we conduct simulation experiments and discuss applications in three case studies.
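A minimal sketch of the kind of iterative factorization update such procedures build on, here for a log-link quasi-Poisson working model with entry-wise weights (the specific losses, links, and update rules in the paper may differ; all names below are illustrative):
\begin{verbatim}
import numpy as np

# Illustrative sketch only: the paper's EM/SGD-based procedures may differ;
# this shows one iterative factorization update for a log-link quasi-Poisson
# working model with entry-wise weights (zero weight = missing entry).
def quasi_poisson_factorize(Y, W, rank, lr=0.01, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    U = 0.01 * rng.normal(size=(n, rank))
    V = 0.01 * rng.normal(size=(p, rank))
    for _ in range(iters):
        M = np.exp(np.clip(U @ V.T, -8.0, 8.0))   # entry-wise means; clip for stability
        G = W * (M - Y)                           # gradient of the deviance w.r.t. U V^T
        U, V = U - lr * (G @ V) / p, V - lr * (G.T @ U) / n
    return U, V

# Toy usage: a noisy rank-2 count matrix with roughly 20% of the entries missing.
rng = np.random.default_rng(1)
U0, V0 = rng.normal(size=(50, 2)), rng.normal(size=(30, 2))
Y = rng.poisson(np.exp(0.3 * U0 @ V0.T)).astype(float)
W = (rng.random(Y.shape) > 0.2).astype(float)
U, V = quasi_poisson_factorize(Y, W, rank=2)
print(np.mean(W * (np.exp(U @ V.T) - Y) ** 2))    # weighted fit error on observed entries
\end{verbatim}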