A strong blocking set in a finite projective space is a set of points that intersects each hyperplane in a spanning set. We provide a new graph-theoretic construction of such sets: combining constant-degree expanders with asymptotically good codes, we explicitly construct strong blocking sets in the $(k-1)$-dimensional projective space over $\mathbb{F}_q$ that have size $O(qk)$. Since strong blocking sets have recently been shown to be equivalent to minimal linear codes, our construction gives the first explicit construction of $\mathbb{F}_q$-linear minimal codes of length $n$ and dimension $k$, for every prime power $q$, for which $n = O(qk)$. This solves one of the main open problems on minimal codes.
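The defining property is easy to test by brute force on small examples. Below is a minimal Python sketch (not the expander-based construction itself, which is more involved): it enumerates the hyperplanes of PG$(k-1,p)$ for a prime $p$ and checks that the points of the candidate set lying on each hyperplane span it. The function names are ours.

```python
from itertools import product

def rank_mod_p(vectors, p):
    """Rank of a list of vectors over F_p (p prime) via Gaussian elimination."""
    rows = [list(v) for v in vectors]
    ncols = len(rows[0]) if rows else 0
    rank, col = 0, 0
    while rank < len(rows) and col < ncols:
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # inverse by Fermat; needs p prime
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                c = rows[i][col]
                rows[i] = [(a - c * b) % p for a, b in zip(rows[i], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

def is_strong_blocking_set(points, k, p):
    """Check the defining property in PG(k-1, p): for every hyperplane
    (kernel of a nonzero functional f), the points of the set lying on
    it must span the (k-1)-dimensional kernel."""
    for f in product(range(p), repeat=k):
        if not any(f):
            continue  # skip the zero functional
        on_hyp = [v for v in points if sum(a * b for a, b in zip(f, v)) % p == 0]
        if rank_mod_p(on_hyp, p) < k - 1:
            return False
    return True

# sanity check: the set of all points of PG(2, 2) is a strong blocking set
all_points = [v for v in product(range(2), repeat=3) if any(v)]
print(is_strong_blocking_set(all_points, k=3, p=2))  # True
```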
Computing a shortest path between two nodes in an undirected unweighted graph is among the most basic algorithmic tasks. Breadth-first search solves this problem in linear time, which is clearly also a lower bound in the worst case. However, several works have shown how to solve this problem in sublinear time in expectation when the input graph is drawn from one of several classes of random graphs. In this work, we extend these results by giving sublinear time shortest path (and short path) algorithms for expander graphs. We thus identify a natural deterministic property of a graph (one satisfied by typical random regular graphs) which suffices for sublinear time shortest paths. The algorithms are very simple, involving only bidirectional breadth-first search and short random walks. We also complement our new algorithms by near-matching lower bounds.
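As a rough illustration of the bidirectional part (the random-walk component is omitted), here is a Python sketch under our own conventions, not the paper's exact algorithm: grow BFS balls from both endpoints a layer at a time and stop as soon as they intersect. On an expander both balls grow exponentially, so they meet after exploring far fewer than $n$ vertices.

```python
def bidirectional_bfs(adj, s, t):
    """Exact s-t distance via BFS grown from both endpoints, layer by layer.

    adj: dict mapping each vertex to an iterable of neighbours.  The first
    time the two labelled sets intersect, the minimum of ds[v] + dt[v] over
    doubly-labelled vertices equals the true distance."""
    if s == t:
        return 0
    ds, dt = {s: 0}, {t: 0}
    fs, ft = [s], [t]
    while fs or ft:
        # grow the currently smaller side to keep the explored sets balanced
        grow_t = bool(ft) and (not fs or len(ft) <= len(fs))
        dist, frontier = (dt, ft) if grow_t else (ds, fs)
        nxt = []
        for v in frontier:
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    nxt.append(w)
        if grow_t:
            ft = nxt
        else:
            fs = nxt
        meet = set(ds) & set(dt)
        if meet:
            return min(ds[v] + dt[v] for v in meet)
    return None  # s and t lie in different components
```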
The strong convergence of numerical methods for stochastic differential equations (SDEs) on $t\in[0,\infty)$ is proved. The result applies to any one-step numerical method with the Markov property that has finite-time strong convergence and uniformly bounded moments. In addition, the convergence of the numerical stationary distribution to the underlying one can be derived from this result. To demonstrate an application of this result, the strong convergence over the infinite horizon of the backward Euler--Maruyama method, in the $L^p$ sense for some small $p\in(0,1)$, is proved for SDEs with super-linear coefficients; this is also a standalone new result. Numerical simulations are provided to illustrate the theoretical results.
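To make the backward (drift-implicit) Euler--Maruyama scheme concrete, here is a small Python sketch under our own assumptions: a scalar SDE with the illustrative super-linear drift $f(x) = x - x^3$ and diffusion $g(x) = x$, and a Newton solver for the implicit step. It is not the paper's code.

```python
import math, random

def backward_em(x0, T, n, drift, drift_prime, diffusion, seed=0):
    """Backward (drift-implicit) Euler--Maruyama on [0, T] with n steps.

    Each step solves  y = x + h*drift(y) + diffusion(x)*dW  for y by
    Newton's method; the implicit drift is what tames super-linearly
    growing coefficients."""
    rng = random.Random(seed)
    h = T / n
    x = x0
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(h))
        rhs = x + diffusion(x) * dW
        y = x  # Newton iteration for y - h*drift(y) - rhs = 0
        for _ in range(50):
            step = (y - h * drift(y) - rhs) / (1.0 - h * drift_prime(y))
            y -= step
            if abs(step) < 1e-12:
                break
        x = y
    return x

# toy SDE with super-linear drift: dX = (X - X^3) dt + X dW
x_T = backward_em(1.0, T=10.0, n=10_000,
                  drift=lambda x: x - x**3,
                  drift_prime=lambda x: 1 - 3 * x**2,
                  diffusion=lambda x: x)
```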
Concerning the recent notion of the circular chromatic number of signed graphs, for each given integer $k$ we introduce two signed bipartite graphs, each on $2k^2-k+1$ vertices, whose shortest negative cycle has length $2k$ and whose circular chromatic number is 4. Each of the constructions can be viewed as a bipartite analogue of the generalized Mycielski graphs on odd cycles, $M_{\ell}(C_{2k+1})$. In the course of proving our result, we also obtain a simple proof of the fact that $M_{\ell}(C_{2k+1})$ and some similar quadrangulations of the projective plane have circular chromatic number 4. These proofs have the advantage that they illuminate, in an elementary manner, the strong relation between algebraic topology and graph coloring problems.
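For readers unfamiliar with generalized Mycielski graphs, the following Python sketch builds $M_{\ell}(G)$ under one common definition; indexing conventions vary across the literature, and the function and variable names are ours.

```python
def generalized_mycielski(edges, n, levels):
    """Generalized Mycielskian of a graph on vertices 0..n-1.

    Vertices: (i, j) for 0 <= i < n and level 0 <= j < levels, plus an
    apex 'u'.  Level 0 carries a copy of G; (a, j) is joined to (b, j+1)
    whenever ab is an edge of G; the apex is joined to the top level."""
    E = set()
    for a, b in edges:
        E.add(frozenset({(a, 0), (b, 0)}))          # copy of G on level 0
        for j in range(levels - 1):
            E.add(frozenset({(a, j), (b, j + 1)}))  # cross-level edges
            E.add(frozenset({(b, j), (a, j + 1)}))
    for i in range(n):
        E.add(frozenset({'u', (i, levels - 1)}))    # apex to top level
    return E

# levels=2 on C_5 gives the classical Mycielskian: the Groetzsch graph,
# with 11 vertices and 20 edges
C5 = [(i, (i + 1) % 5) for i in range(5)]
G = generalized_mycielski(C5, n=5, levels=2)
print(len(G))  # 20
```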
This paper introduces a novel compatibility attack to detect a steganographic message embedded in the DCT domain of a JPEG image at high quality factors (close to 100). Because JPEG compression is not a surjective function, i.e., not every DCT block can be obtained from a pixel block, embedding a message in the DCT domain can create incompatible blocks. We propose a method to find such blocks, which directly proves that a block has been modified during the embedding. This theoretical method provides many advantages: it is completely independent of cover-source mismatch, it has good detection power, and it is perfectly reliable, since false alarms are impossible once an incompatible block is found. We show that finding an incompatible block is equivalent to proving the infeasibility of an integer linear programming (ILP) problem. However, solving such a problem requires considerable computational power and remains out of reach for $8\times 8$ blocks. Instead, a timing-attack approach is presented that, given large computing power, performs steganalysis potentially without any false alarms.
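To illustrate what an incompatible block is, here is a toy Python sketch that replaces the $8\times 8$ DCT pipeline with a hypothetical 2-point orthonormal transform of our own choosing. For real JPEG blocks the analogous brute-force search is infeasible, which is exactly why the paper turns to ILP feasibility.

```python
import itertools, math

def quantize(u, v, q):
    """Toy 2-pixel 'JPEG': orthonormal 2-point transform, then uniform
    quantization with step q (a stand-in for the real 8x8 DCT pipeline)."""
    c0 = (u + v) / math.sqrt(2)
    c1 = (u - v) / math.sqrt(2)
    return round(c0 / q), round(c1 / q)

def is_compatible(k0, k1, q):
    """Brute-force search for an integer pixel pair mapping to (k0, k1).

    A quantized block with no pixel preimage is 'incompatible': observing
    it proves the coefficients were modified after compression."""
    return any(quantize(u, v, q) == (k0, k1)
               for u, v in itertools.product(range(256), repeat=2))

# with step q = 1 some coefficient pairs have no preimage, e.g. (2, 3):
# it forces u + v = 3 and u - v = 4, which has no integer solution
print(is_compatible(2, 3, 1))  # False
```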
Given a graph $G=(V,E)$, for a vertex set $S\subseteq V$, let $N(S)$ denote the set of vertices in $V$ that have a neighbor in $S$. Extending the concept of the binding number of graphs by Woodall~(1973), for a vertex set $X \subseteq V$, we define the binding number of $X$, denoted by $\bind(X)$, as the maximum number $b$ such that for every $S \subseteq X$ with $N(S)\neq V(G)$ it holds that $|N(S)|\ge b |S|$. Given this definition, we prove that if a graph $G$ contains a subset $X$ with $\bind(X) = 1/k$, where $k$ is an integer, then $G$ possesses a matching of size at least $|X|/(k+1)$. Using this statement, we derive tight bounds for estimators of the matching size in planar graphs. These estimators were previously used in designing sublinear-space algorithms for approximating the matching size in the data stream model of computation. In particular, we show that the number of locally superior vertices is a $3$-factor approximation of the matching size in planar graphs; the previous analysis by Jowhari (2023) proved an approximation factor of $3.5$. As another application, we show that a simple variant of an estimator by Esfandiari et al. (2015) achieves a $3$-factor approximation of the matching size in planar graphs. Namely, let $s$ be the number of edges with both endpoints having degree at most $2$ and let $h$ be the number of vertices with degree at least $3$. We prove that when the graph is planar, the size of a maximum matching is at least $(s+h)/3$. This result generalizes the known fact that every planar graph on $n$ vertices with minimum degree $3$ has a matching of size at least $n/3$.
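The estimator in the last claim is straightforward to compute. Here is a short Python sketch (names ours) that evaluates the lower bound $(s+h)/3$ from a plain adjacency structure.

```python
def planar_matching_lower_bound(adj):
    """Estimator from the abstract: s = #edges with both endpoints of
    degree <= 2, h = #vertices of degree >= 3; for planar graphs the
    maximum matching has size at least (s + h) / 3.

    adj: dict vertex -> set of neighbours (vertices must be comparable)."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    s = sum(1 for v in adj for w in adj[v]
            if v < w and deg[v] <= 2 and deg[w] <= 2)
    h = sum(1 for v in adj if deg[v] >= 3)
    return (s + h) / 3

# K4 (planar, 3-regular): s = 0, h = 4, bound = 4/3; matching size is 2
K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(planar_matching_lower_bound(K4))
```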
The origins of proof-theoretic semantics lie in the question of what constitutes the meaning of the logical connectives, and in its response: the rules of inference that govern the use of the connective. However, what if we go a step further and ask about the meaning of a proof as a whole? In this paper we address this question and lay out a framework to distinguish sense and denotation of proofs. Two questions are central here. First, if we have two (syntactically) different derivations, does this always lead to a difference in sense, and does it lead to a difference in denotation? Second, how do different kinds of proof systems (here: natural deduction vs. sequent calculi) relate to this distinction? Do the different forms of representing a proof necessarily correspond to a difference in how the inferential steps are given? In our framework it is possible to identify the denotation as well as the sense of proofs not only within one proof system but also across different kinds of proof systems. Thus, we give an account that distinguishes a mere syntactic divergence from a divergence in meaning, and a divergence in meaning from a divergence of proof objects, analogous to Frege's distinction for singular terms and sentences.
We consider error-correction coding schemes for adversarial wiretap channels (AWTCs) in which the channel can (a) read a fraction of the codeword bits up to a bound $r$ and (b) flip a fraction of the bits up to a bound $p$. The channel can freely choose the locations of the bit reads and bit flips via a process with unbounded computational power. Codes for the AWTC are of broad interest in the area of information security, as they can provide data resiliency in settings where an attacker has limited access to a storage or transmission medium. We investigate a family of non-linear codes known as pseudolinear codes, which were first proposed by Guruswami and Indyk (FOCS 2001) for constructing list-decodable codes, independently of the AWTC setting. Unlike general non-linear codes, pseudolinear codes admit efficient encoders and have succinct representations. We focus on unique decoding and show that random pseudolinear codes can achieve rates up to the binary symmetric channel (BSC) capacity $1-H_2(p)$ for any $p,r$ in the less noisy region, $p<1/2$ and $r<1-H_2(p)$, where $H_2(\cdot)$ is the binary entropy function. Thus, pseudolinear codes are the first known optimal-rate binary code family for the less noisy AWTC that admits efficient encoders. This result can be viewed as a derandomization of random general codes in the AWTC setting, which in turn opens new avenues for applying derandomization techniques to randomized constructions of AWTC codes. Our proof applies a novel concentration inequality for sums of random variables with limited independence, which may be of interest as an analysis tool more generally.
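For concreteness, the rate region is trivial to evaluate numerically. The following Python lines (ours, purely illustrative) compute the BSC capacity $1-H_2(p)$ and test membership in the less noisy region.

```python
import math

def H2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def in_less_noisy_region(p, r):
    """The region p < 1/2 and r < 1 - H2(p), in which random pseudolinear
    codes are shown to achieve the BSC capacity 1 - H2(p)."""
    return p < 0.5 and r < 1 - H2(p)

print(1 - H2(0.11), in_less_noisy_region(0.11, 0.3))  # capacity ~ 0.5, True
```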
Reed--Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed--Solomon codes, a fundamental question in coding theory is whether Reed--Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed--Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly punctured Reed--Solomon codes over fields of exponentially large size $2^{O(n)}$, where $n$ is the block length of the code. A natural question is whether Reed--Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed--Solomon codes are list-decodable to capacity with field size $O(n^2)$. We show that Reed--Solomon codes are list-decodable to capacity with linear field size $O(n)$, which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size $q$ and the code length $n$ cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list size $O(1/\varepsilon)$ and near-optimal alphabet size $2^{O(1/\varepsilon^2)}$, where $\varepsilon$ is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list size $O(1/\varepsilon)$ was previously not known to be achievable with any linear code over a constant alphabet size (even non-constructively). Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices.
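The codes in question are just evaluation codes. The following Python sketch (names ours) encodes a message as the evaluation vector of its low-degree polynomial over a small prime field; the punctured codes in the results above keep only a subset of the evaluation points.

```python
def rs_encode(message, xs, p):
    """Evaluate the degree-<k polynomial with coefficient list `message`
    (elements of F_p, p prime) at the distinct field elements `xs`.

    The resulting evaluation vector is a Reed--Solomon codeword."""
    assert len(set(xs)) == len(xs), "evaluation points must be distinct"
    return [sum(c * pow(x, i, p) for i, c in enumerate(message)) % p
            for x in xs]

# an [n=6, k=3] RS code over F_7: encode the polynomial 2 + 3x + x^2
codeword = rs_encode([2, 3, 1], xs=range(6), p=7)
print(codeword)
```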
A subset $M$ of the edges of a hypergraph $H$ is hitting if $M$ covers each vertex of $H$ at least once, and $M$ is $t$-shallow if it covers each vertex of $H$ at most $t$ times. We consider the existence of shallow hitting edge sets and the maximum size of shallow edge sets in $r$-uniform hypergraphs $H$ that are regular or have a large minimum degree. Specifically, we show the following. Every $r$-uniform regular hypergraph has a $t$-shallow hitting edge set with $t = O(r)$. Every $r$-uniform regular hypergraph with $n$ vertices has a $t$-shallow edge set of size $\Omega(nt/r^{1+1/t})$. Every $r$-uniform hypergraph with $n$ vertices and minimum degree $\delta_{r-1}(H) \geq n/((r-1)t+1)$ has a $t$-shallow hitting edge set. Every $r$-uniform $r$-partite hypergraph with $n$ vertices in each part and minimum degree $\delta'_{r-1}(H) \geq n/((r-1)t+1) + 1$ has a $t$-shallow hitting edge set. We complement our results with constructions of $r$-uniform hypergraphs showing that most of our bounds are best possible.
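The two defining properties are easy to verify programmatically. The following Python sketch (names and the Fano-plane example are ours) checks whether a chosen edge set is a $t$-shallow hitting set; note that in the example $r = t = 3$, consistent with the $t = O(r)$ guarantee.

```python
from collections import Counter

def is_t_shallow_hitting(edges, M, t):
    """edges: all hyperedges (iterables of vertices); M: the chosen edges.

    M is hitting if it covers every vertex at least once, and t-shallow
    if it covers every vertex at most t times."""
    vertices = {v for e in edges for v in e}
    cover = Counter(v for e in M for v in e)
    return all(1 <= cover[v] <= t for v in vertices)

# the Fano plane: a 3-uniform, 3-regular hypergraph on 7 points; the
# three lines through the point 1 hit every point, each at most 3 times
fano = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]
print(is_t_shallow_hitting(fano, [{1,2,3}, {1,4,5}, {1,6,7}], t=3))  # True
```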
We consider the problem of uncertainty quantification in change point regressions, where the signal can be piecewise polynomial of arbitrary but fixed degree. That is, we seek disjoint intervals which, uniformly at a given confidence level, must each contain a change point location. We propose a procedure based on performing local tests at a number of scales and locations on a sparse grid, which adapts to the choice of grid in the sense that by choosing a sparser grid one explicitly pays a lower price for multiple testing. The procedure is fast, as its computational complexity is always of the order $\mathcal{O}(n \log n)$, where $n$ is the length of the data, and optimal in the sense that, under certain mild conditions, every change point is detected with high probability and the widths of the returned intervals match the minimax localisation rates for the associated change point problem up to log factors. A detailed simulation study shows that our procedure is competitive against state-of-the-art algorithms for similar problems. Our procedure is implemented in the R package ChangePointInference, which is available at https://github.com/gaviosha/ChangePointInference.
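As a generic illustration of multiscale local testing on a sparse grid (not the paper's exact procedure, for which see the ChangePointInference package), the following Python sketch runs CUSUM tests for a mean change on overlapping dyadic intervals; the threshold and the piecewise-constant setting are our simplifying assumptions. Since each scale costs $O(n)$ and there are $O(\log n)$ scales, the total work is $\mathcal{O}(n \log n)$.

```python
import math, random

def cusum(x, lo, hi, s):
    """|CUSUM| statistic for a mean change at s within x[lo:hi]."""
    n1, n2 = s - lo, hi - s
    m1 = sum(x[lo:s]) / n1
    m2 = sum(x[s:hi]) / n2
    return abs(m1 - m2) * math.sqrt(n1 * n2 / (n1 + n2))

def dyadic_grid_tests(x, threshold):
    """Run local tests on O(n log n) overlapping dyadic intervals and
    return those whose statistic exceeds the threshold; at the chosen
    level, each returned interval should contain a change point."""
    n = len(x)
    hits = []
    scale = 2
    while scale <= n:
        for lo in range(0, n - scale + 1, scale // 2):
            hi = lo + scale
            if cusum(x, lo, hi, lo + scale // 2) > threshold:
                hits.append((lo, hi))
        scale *= 2
    return hits

# flags intervals straddling the mean change at index 200
random.seed(1)
x = [random.gauss(0, 1) for _ in range(200)] + \
    [3 + random.gauss(0, 1) for _ in range(200)]
print(dyadic_grid_tests(x, threshold=5.0))
```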