The question of characterizing the (finite) representable relation algebras in a ``nice'' way is open. The class $\mathbf{RRA}$ is known not to be finitely axiomatizable in first-order logic. Nevertheless, it is conjectured that ``almost all'' finite relation algebras are representable. All finite relation algebras with three or fewer atoms are representable, so one may ask: over what cardinalities of sets are they representable? This question was answered completely by Andr\'eka and Maddux (``Representations for small relation algebras,'' \emph{Notre Dame J. Form. Log.}, \textbf{35} (1994)), who determined the spectrum of every finite relation algebra with three or fewer atoms. In the present paper, we restrict attention to cyclic group representations and completely determine the cyclic group spectrum for all seven symmetric integral relation algebras on three atoms. We find that in some instances the spectrum and the cyclic spectrum agree; in others they disagree for finitely many $n$; and in the remaining instances they disagree for infinitely many $n$. The proofs employ constructions, SAT solvers, and the probabilistic method.
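For orientation, the two notions of spectrum at play can be stated as follows; the notation $\mathrm{Spec}$ and $\mathrm{Spec}_c$ is ours, and ``representation over $\mathbb{Z}_n$'' is meant, roughly, as a representation whose base set is the cyclic group $\mathbb{Z}_n$:
\[
\mathrm{Spec}(\mathfrak{A}) \;=\; \{\, n \in \mathbb{N} : \mathfrak{A} \text{ has a representation over a set of cardinality } n \,\},
\qquad
\mathrm{Spec}_c(\mathfrak{A}) \;=\; \{\, n \in \mathbb{N} : \mathfrak{A} \text{ has a representation over the cyclic group } \mathbb{Z}_n \,\}.
\]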
In many real-life applications, the types and numbers of inputs to a salient object detection (SOD) algorithm may change dynamically. However, existing SOD algorithms are mainly designed or trained for one particular type of input and fail to generalize to other types. Consequently, several types of SOD algorithms must be prepared in advance to handle different types of inputs, incurring substantial hardware and research costs. In this paper, we instead propose a new type of SOD task, termed Arbitrary Modality SOD (AM SOD). The most prominent characteristic of AM SOD is that both the modality types and the number of modalities are arbitrary and may change dynamically. The former means that the inputs to an AM SOD algorithm may be of arbitrary modalities, such as RGB, depth, or any combination of them, while the latter means that the number of input modalities varies with the input type, e.g., a single-modality RGB image, dual-modality RGB-Depth (RGB-D) images, or triple-modality RGB-Depth-Thermal (RGB-D-T) images. Accordingly, a preliminary solution to the above challenges, i.e., a modality switch network (MSN), is proposed in this paper. In particular, a modality switch feature extractor (MSFE) is first designed to extract discriminative features from each modality effectively by introducing modality indicators, which generate weights for modality switching. Subsequently, a dynamic fusion module (DFM) is proposed to adaptively fuse features from a variable number of modalities based on a novel Transformer structure. Finally, a new dataset, named AM-XD, is constructed to facilitate research on AM SOD. Extensive experiments demonstrate that our AM SOD method can effectively cope with changes in the type and number of input modalities for robust salient object detection.
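To make the variable-modality setting concrete, the following is a minimal PyTorch sketch of fusing features from a variable number of modalities using learned modality indicators and attention. The class and variable names are ours and this is only an illustration of the idea, not the authors' MSN/DFM implementation.
\begin{verbatim}
# Illustrative sketch only (not the paper's MSN/DFM): fuse features from a
# variable number of modalities using learned modality indicators + attention.
import torch
import torch.nn as nn

class ToyDynamicFusion(nn.Module):
    def __init__(self, dim=256, num_known_modalities=3, heads=8):
        super().__init__()
        # one learned indicator (embedding) per known modality type
        self.indicators = nn.Embedding(num_known_modalities, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats, modality_ids):
        # feats: list of tensors, each of shape (B, N, dim), one per present modality
        # modality_ids: list of ints identifying each entry's modality type
        tokens = []
        for f, mid in zip(feats, modality_ids):
            ind = self.indicators(torch.tensor(mid, device=f.device))
            tokens.append(f + ind)          # tag tokens with their modality
        x = torch.cat(tokens, dim=1)        # works for 1, 2, or 3 modalities
        fused, _ = self.attn(x, x, x)       # modality-agnostic self-attention
        return self.norm(x + fused)
\end{verbatim}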
We study the question of whether a sequence $d = (d_1,d_2, \ldots, d_n)$ of positive integers is the degree sequence of some outerplanar (a.k.a. 1-page book embeddable) graph $G$. If so, $G$ is an outerplanar realization of $d$ and $d$ is an outerplanaric sequence. The case where $\sum d \leq 2n - 2$ is easy, as $d$ has a realization by a forest (which is trivially an outerplanar graph). In this paper, we consider the family $\cD$ of all sequences $d$ of even sum $2n\leq \sum d \le 4n-6-2\multipl_1$, where $\multipl_x$ is the number of $x$'s in $d$. (The second inequality is a necessary condition for a sequence $d$ with $\sum d\geq 2n$ to be outerplanaric.) We partition $\cD$ into two disjoint subfamilies, $\cD=\cD_{NOP}\cup\cD_{2PBE}$, such that every sequence in $\cD_{NOP}$ is provably non-outerplanaric, and every sequence in $\cD_{2PBE}$ is given a realizing graph $G$ enjoying a 2-page book embedding (and moreover, one of the pages is also bipartite).
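As a quick sanity check, membership in $\cD$ is just arithmetic on the sequence; the function below (its name is ours) encodes the defining conditions directly.
\begin{verbatim}
# Sketch (ours): test whether a degree sequence d lies in the family D, i.e.,
# sum(d) is even and 2n <= sum(d) <= 4n - 6 - 2*(number of 1's in d).
def in_family_D(d):
    n = len(d)
    s = sum(d)
    mult_1 = d.count(1)
    return s % 2 == 0 and 2 * n <= s <= 4 * n - 6 - 2 * mult_1

# Example: in_family_D([2, 2, 2, 2]) -> True (realized by the 4-cycle).
\end{verbatim}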
We study local filters for the Lipschitz property of real-valued functions $f: V \to [0,r]$, where the Lipschitz property is defined with respect to an arbitrary undirected graph $G=(V,E)$. We give nearly optimal local Lipschitz filters both with respect to $\ell_1$-distance and $\ell_0$-distance. Previous work only considered unbounded-range functions over $[n]^d$. Jha and Raskhodnikova (SICOMP '13) gave an algorithm for such functions with lookup complexity exponential in $d$, which Awasthi et al. (ACM Trans. Comput. Theory) showed was necessary in this setting. We demonstrate that important applications of local Lipschitz filters can be accomplished with filters for functions with bounded range. For functions $f: [n]^d\to [0,r]$, we circumvent the lower bound and achieve running time $(d^r\log n)^{O(\log r)}$ for the $\ell_1$-respecting filter and $d^{O(r)}\operatorname{polylog} n$ for the $\ell_0$-respecting filter. Our local filters provide a novel Lipschitz extension that can be implemented locally. Furthermore, we show that our algorithms have nearly optimal dependence on $r$ for the domain $\{0,1\}^d$. In addition, our lower bound resolves an open question of Awasthi et al., removing one of the conditions necessary for their lower bound for general range. We prove our lower bound via a reduction from distribution-free Lipschitz testing and a new technique for proving hardness for adaptive algorithms. We provide two applications of our local filters to arbitrary real-valued functions. In the first application, we use them in conjunction with the Laplace mechanism for differential privacy and noisy binary search to provide mechanisms for privately releasing outputs of black-box functions, even in the presence of malicious clients. In the second application, we use our local filters to obtain the first nontrivial tolerant tester for the Lipschitz property.
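Throughout, the Lipschitz property with respect to $G$ can be stated as follows (the usual normalization with Lipschitz constant 1, where $d_G$ denotes shortest-path distance in $G$):
\[
|f(u)-f(v)| \;\le\; d_G(u,v) \quad \text{for all } u,v\in V,
\qquad\text{equivalently,}\qquad
|f(u)-f(v)| \;\le\; 1 \ \text{ for every edge } \{u,v\}\in E.
\]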
Folklore in complexity theory holds that circuit lower bounds against $\mathbf{NC}^1$ or $\mathbf{P}/\operatorname{poly}$, currently out of reach, are a necessary step towards proving strong proof complexity lower bounds for systems like Frege or Extended Frege. Establishing such a connection formally, however, is already daunting, as it would imply the breakthrough separation $\mathbf{NEXP} \not\subseteq \mathbf{P}/\operatorname{poly}$, as recently observed by Pich and Santhanam (2023). We show such a connection conditionally for the Implicit Extended Frege proof system ($\mathsf{iEF}$) introduced by Kraj\'i\v{c}ek (The Journal of Symbolic Logic, 2004), which is capable of formalizing most of contemporary complexity theory. In particular, we show that if $\mathsf{iEF}$ proves efficiently the standard derandomization assumption that a concrete Boolean function is hard on average for subexponential-size circuits, then any superpolynomial lower bound on the length of $\mathsf{iEF}$ proofs implies $\#\mathbf{P} \not\subseteq \mathbf{FP}/\operatorname{poly}$ (which would in turn imply, for example, $\mathbf{PSPACE} \not\subseteq \mathbf{P}/\operatorname{poly}$). Our proof exploits the formalization inside $\mathsf{iEF}$ of the soundness of the sum-check protocol of Lund, Fortnow, Karloff, and Nisan (Journal of the ACM, 1992). This has consequences for the self-provability of circuit upper bounds in $\mathsf{iEF}$. Interestingly, further improving our result seems to require progress in constructing interactive proof systems with more efficient provers.
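Schematically, and with notation that is ours rather than the paper's, the conditional result has the shape
\[
\mathsf{iEF} \text{ efficiently proves ``$f$ is hard on average for subexponential-size circuits''}
\;\Longrightarrow\;
\Bigl(\, \mathsf{iEF} \text{ is not polynomially bounded} \;\Longrightarrow\; \#\mathbf{P} \not\subseteq \mathbf{FP}/\operatorname{poly} \,\Bigr).
\]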
We optimize pipeline parallelism for deep neural network (DNN) inference by partitioning model graphs into $k$ stages and minimizing the running time of the bottleneck stage, including communication. We give practical and effective algorithms for this NP-hard problem, but our emphasis is on tackling the practitioner's dilemma of deciding when a solution is good enough. To this end, we design novel mixed-integer programming (MIP) relaxations for proving lower bounds. Applying these methods to a diverse testbed of 369 production models, for $k \in \{2, 4, 8, 16, 32, 64\}$, we empirically show that these lower bounds are strong enough to be useful in practice. Our lower bounds are substantially stronger than standard combinatorial bounds. For example, evaluated via geometric means across our production testbed with $k = 16$ pipeline stages, our MIP formulations raised the lower bound from 0.4598 to 0.9452, expressed as a fraction of the best partition found. In other words, our improved lower bounds closed the optimality gap by a factor of 9.855x.
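In symbols (notation ours), the problem is to partition the nodes of the model graph into stages $S_1,\dots,S_k$ so as to minimize the bottleneck stage time, including communication:
\[
\min_{V \,=\, S_1 \,\dot\cup\, \cdots \,\dot\cup\, S_k}\ \ \max_{1\le j\le k}\ \bigl( \mathrm{compute}(S_j) + \mathrm{communication}(S_j) \bigr).
\]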
If $G$ is a group, we say a subset $S$ of $G$ is product-free if the equation $xy=z$ has no solutions with $x,y,z \in S$. For $D \in \mathbb{N}$, a group $G$ is said to be $D$-quasirandom if the minimal dimension of a nontrivial complex irreducible representation of $G$ is at least $D$. Gowers showed that in a $D$-quasirandom finite group $G$, the maximal size of a product-free set is at most $|G|/D^{1/3}$. This disproved a longstanding conjecture of Babai and S\'os from 1985. For the special unitary group, $G=SU(n)$, Gowers observed that his argument yields an upper bound of $n^{-1/3}$ on the measure of a measurable product-free subset. In this paper, we improve Gowers' upper bound to $\exp(-cn^{1/3})$, where $c>0$ is an absolute constant. In fact, we establish something stronger, namely, product-mixing for measurable subsets of $SU(n)$ with measure at least $\exp(-cn^{1/3})$; for this product-mixing result, the $n^{1/3}$ in the exponent is sharp. Our approach involves introducing novel hypercontractive inequalities, which imply that the non-Abelian Fourier spectrum of the indicator function of a small set concentrates on high-dimensional irreducible representations. Our hypercontractive inequalities are obtained via methods from representation theory, harmonic analysis, random matrix theory and differential geometry. We generalize our hypercontractive inequalities from $SU(n)$ to an arbitrary $D$-quasirandom compact connected Lie group for $D$ at least an absolute constant, thereby extending our results on product-free sets to such groups. We also demonstrate various other applications of our inequalities to geometry (viz., non-Abelian Brunn-Minkowski type inequalities), mixing times, and the theory of growth in compact Lie groups.
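For reference, the two notions defined in the opening sentences can be written compactly as
\[
S \subseteq G \text{ is product-free} \iff \text{there are no } x,y,z \in S \text{ with } xy = z,
\qquad
G \text{ is } D\text{-quasirandom} \iff \dim(\rho) \ge D \text{ for every nontrivial complex irreducible representation } \rho \text{ of } G.
\]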
We study frequency-domain electromagnetic scattering at a bounded, penetrable, and inhomogeneous obstacle $ \Omega \subset \mathbb{R}^3 $. From the Stratton-Chu integral representation, we derive a new representation formula when constant reference coefficients are given for the interior domain. The resulting integral representation contains the usual layer potentials, but also volume potentials on $\Omega$. It is then possible to follow a single-trace approach to obtain boundary integral equations perturbed by traces of compact volume integral operators with weakly singular kernels. The coupled boundary and volume integral equations are discretized with a Galerkin approach using the usual curl-conforming and div-conforming finite elements on the boundary and in the volume. Compression techniques and special quadrature rules for singular integrands are required for an efficient and accurate method. Numerical experiments provide evidence that our new formulation enjoys promising properties.
For a graph $G$, a subset $S\subseteq V(G)$ is called a resolving set of $G$ if, for any two vertices $u,v\in V(G)$, there exists a vertex $w\in S$ such that $d(w,u)\neq d(w,v)$. The Metric Dimension problem takes as input a graph $G$ on $n$ vertices and a positive integer $k$, and asks whether there exists a resolving set of size at most $k$. In another metric-based graph problem, Geodetic Set, the input is a graph $G$ and an integer $k$, and the objective is to determine whether there exists a subset $S\subseteq V(G)$ of size at most $k$ such that, for any vertex $u \in V(G)$, there are two vertices $s_1, s_2 \in S$ such that $u$ lies on a shortest path from $s_1$ to $s_2$. These two classical problems turn out to be intractable with respect to the natural parameter, i.e., the solution size, as well as most structural parameters, including the feedback vertex set number and pathwidth. Among the very few existing tractability results, both problems are FPT with respect to the vertex cover number $vc$. More precisely, we observe that both problems admit an FPT algorithm running in time $2^{\mathcal{O}(vc^2)}\cdot n^{\mathcal{O}(1)}$, and a kernelization algorithm that outputs a kernel with $2^{\mathcal{O}(vc)}$ vertices. We prove that, unless the Exponential Time Hypothesis fails, Metric Dimension and Geodetic Set, even on graphs of bounded diameter, admit neither an FPT algorithm running in time $2^{o(vc^2)}\cdot n^{\mathcal{O}(1)}$, nor a kernelization algorithm that reduces the solution size and outputs a kernel with $2^{o(vc)}$ vertices. The versatility of our technique enables us to apply it to both of these problems. We only know of one other problem in the literature that admits such a tight lower bound. Similarly, the list of known problems with exponential lower bounds on the number of vertices in kernelized instances is very short.
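As a concrete illustration of the first definition, the following brute-force check (function names ours) verifies whether a candidate set $S$ resolves a graph given as an adjacency list, using BFS distances.
\begin{verbatim}
# Sketch (ours, not from the paper): S resolves G iff the distance vectors
# (d(w, v))_{w in S} are pairwise distinct over all vertices v.
from collections import deque

def bfs_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_resolving_set(adj, S):
    dists = {w: bfs_distances(adj, w) for w in S}
    seen = set()
    for v in adj:
        sig = tuple(dists[w].get(v) for w in S)   # None marks unreachable vertices
        if sig in seen:
            return False
        seen.add(sig)
    return True

# Example: on the path 0-1-2, the single vertex {0} is a resolving set.
# is_resolving_set({0: [1], 1: [0, 2], 2: [1]}, {0}) -> True
\end{verbatim}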
We consider generalizations of the classical inverse problem to Bayesian-type estimators, where the result is not a single optimal parameter but an optimal probability distribution in parameter space. The practical computational tool for computing these distributions is the Metropolis Monte Carlo algorithm. We derive kinetic theories for the Metropolis Monte Carlo method in different scaling regimes. The derived equations yield a different point of view on the classical algorithm. They further inspire modifications that exploit the different scalings, which we illustrate on a simulation example of the Lorenz system.
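For readers less familiar with the baseline algorithm, a minimal random-walk Metropolis sampler (the standard method, not the kinetic-theory variants derived in the paper) looks as follows; here log_post is assumed to return the log of the unnormalized posterior density over the parameters.
\begin{verbatim}
# Minimal random-walk Metropolis sketch (standard algorithm, names ours).
import numpy as np

def metropolis(log_post, theta0, steps=10000, step_size=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(steps):
        proposal = theta + step_size * rng.standard_normal(theta.shape)
        lp_prop = log_post(proposal)
        # accept with probability min(1, pi(proposal) / pi(theta))
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
\end{verbatim}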
The list-labeling problem captures the basic task of storing a dynamically changing set of up to $n$ elements in sorted order in an array of size $m = (1 + \Theta(1))n$. The goal is to support insertions and deletions while moving around elements within the array as little as possible. Until recently, the best known upper bound stood at $O(\log^2 n)$ amortized cost. This bound, which was first established in 1981, was finally improved two years ago, when a randomized $O(\log^{3/2} n)$ expected-cost algorithm was discovered. The best randomized lower bound for this problem remains $\Omega(\log n)$, and closing this gap is considered to be a major open problem in data structures. In this paper, we present the See-Saw Algorithm, a randomized list-labeling solution that achieves a nearly optimal bound of $O(\log n \operatorname{polyloglog} n)$ amortized expected cost. This bound is achieved despite at least three lower bounds showing that this type of result is impossible for large classes of solutions.
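To fix the interface and cost model only, here is a toy baseline of ours (emphatically not the See-Saw Algorithm): keep the elements spread out in the array and count how many are moved per operation. The naive rebalancing below pays $O(n)$ moves per insertion, whereas the bounds discussed above are polylogarithmic amortized.
\begin{verbatim}
# Toy baseline (ours): maintain up to n elements in sorted order in an array
# of size m, counting element moves per insertion.
class NaiveListLabeling:
    def __init__(self, m):
        self.slots = [None] * m          # array of size m = (1 + Theta(1)) n

    def insert(self, x):
        values = sorted([v for v in self.slots if v is not None] + [x])
        assert len(values) <= len(self.slots), "array is full"
        # Naive rebalance: spread all elements evenly across the array.
        # This moves O(n) elements per insertion, in contrast with the
        # polylogarithmic amortized bounds for list labeling.
        self.slots = [None] * len(self.slots)
        gap = len(self.slots) / len(values)
        for j, v in enumerate(values):
            self.slots[int(j * gap)] = v
        return len(values)               # number of elements (re)placed
\end{verbatim}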