For each $d\leq3$, we construct a finite set $F_d$ of multigraphs such that for each graph $H$ of girth at least $5$ obtained from a multigraph $G$ by subdividing each edge at least two times, $H$ has twin-width at most $d$ if and only if $G$ has no minor in $F_d$. This answers a question of Berg\'{e}, Bonnet, and D\'{e}pr\'{e}s asking for the structure of graphs $G$ such that each long subdivision of $G$ has twin-width $4$. As a corollary, we show that the $7\times7$ grid has twin-width $4$, which answers a question of Schidler and Szeider.
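
To make the subdivision operation concrete, here is a minimal sketch (in Python with networkx; not taken from the paper, and the helper name subdivide_edges is ours) that replaces every edge of a multigraph by a path with two internal vertices:

    # Illustrative only: subdivide each edge of a multigraph at least twice,
    # as in the hypothesis on H above.
    import networkx as nx

    def subdivide_edges(G, times=2):
        """Return a simple graph obtained from the multigraph G by inserting
        `times` new vertices on every edge."""
        H = nx.Graph()
        H.add_nodes_from(G.nodes)
        for idx, (u, v, _key) in enumerate(G.edges(keys=True)):
            path = [u] + [f"s{idx}_{i}" for i in range(times)] + [v]
            nx.add_path(H, path)
        return H

    G = nx.MultiGraph()
    G.add_edges_from([(0, 1), (0, 1), (1, 2)])       # a multigraph with a parallel edge
    H = subdivide_edges(G, times=2)
    print(H.number_of_nodes(), H.number_of_edges())  # 3 + 3*2 = 9 vertices, 3*3 = 9 edges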

Related content

We investigate shift-invariant vectorial Boolean functions on $n$~bits that are lifted from Boolean functions on $k$~bits, for $k\leq n$. We consider vectorial functions that are not necessarily permutations, but are, in some sense, almost bijective. In this context, we define an almost lifting as a Boolean function for which there is an upper bound on the number of collisions of its lifted functions that does not depend on $n$. We show that if a Boolean function with diameter $k$ is an almost lifting, then the maximum number of collisions of its lifted functions is $2^{k-1}$ for any $n$. Moreover, we search for functions in the class of almost liftings that have good cryptographic properties and for which the non-bijectivity does not cause major security weaknesses. These functions generalize the well-known map $\chi$ used in the Keccak hash function.
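
A toy illustration of the lifting construction (our own sketch; the helpers lift and collisions are ours, and collisions simply counts how far the lifted map is from being injective): F(x)_i = f(x_i, x_{i+1}, ..., x_{i+k-1}) with indices mod n, instantiated with the local rule of Keccak's $\chi$.

    from itertools import product

    def lift(f, k, n):
        def F(x):                      # x is a tuple of n bits
            return tuple(f(*(x[(i + j) % n] for j in range(k))) for i in range(n))
        return F

    def collisions(F, n):
        images = {}
        for x in product((0, 1), repeat=n):
            images.setdefault(F(x), []).append(x)
        return sum(len(v) - 1 for v in images.values())   # 2^n minus number of distinct images

    chi = lambda a, b, c: a ^ ((b ^ 1) & c)    # local rule of Keccak's chi (k = 3)
    print(collisions(lift(chi, 3, 5), 5))      # 0: chi is a permutation when n is odd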

We derive a general lower bound for the generalized Hamming weights of nested matrix-product codes, with a particular emphasis on the cases with two and three constituent codes. We also provide an upper bound which is reminiscent of the bounds used for the minimum distance of matrix-product codes. When the constituent codes are two Reed-Solomon codes, we obtain an explicit formula for the generalized Hamming weights of the resulting matrix-product code. We also treat the non-nested case when there are two constituent codes.
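
For context (standard background, not a contribution of this paper): given constituent codes $C_1,\dots,C_s\subseteq\mathbb{F}_q^{m}$ with minimum distances $d_1,\dots,d_s$ and an $s\times\ell$ matrix $A$ over $\mathbb{F}_q$, the matrix-product code is
$$[C_1\cdots C_s]\cdot A=\bigl\{\,[\,c_1^{\top}\ \cdots\ c_s^{\top}\,]\,A\;:\;c_i\in C_i\,\bigr\}\subseteq\mathbb{F}_q^{m\times\ell}\cong\mathbb{F}_q^{m\ell}.$$
For example, $A=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ yields the classical $(u,\,u+v)$ construction $\{(c_1,\;c_1+c_2):c_1\in C_1,\,c_2\in C_2\}$, whose minimum distance is $\min(2d_1,d_2)$; the upper bound mentioned above is reminiscent of bounds of this kind.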

We consider the discretization of the $1d$-integral Dirichlet fractional Laplacian by $hp$-finite elements. We present quadrature schemes to set up the stiffness matrix and load vector that preserve the exponential convergence of $hp$-FEM on geometric meshes. The schemes are based on Gauss-Jacobi and Gauss-Legendre rules. We show that taking a number of quadrature points slightly exceeding the polynomial degree is enough to preserve root exponential convergence. The total number of algebraic operations to set up the system is $\mathcal{O}(N^{5/2})$, where $N$ is the problem size. Numerical examples illustrate the analysis. We also extend our analysis to the fractional Laplacian in higher dimensions for $hp$-finite element spaces based on shape-regular meshes.
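
A small self-contained illustration of the quadrature family mentioned above (ours, not the paper's scheme; scipy's roots_jacobi supplies nodes and weights): a Gauss-Jacobi rule with weight $(1-x)^{-1/2}$ integrates a singular integrand accurately with very few nodes, because the singular factor is built into the quadrature weight.

    import numpy as np
    from scipy.special import roots_jacobi

    alpha = -0.5                               # integrable endpoint singularity (1 - x)^(-1/2)
    x8,  w8  = roots_jacobi(8,  alpha, 0.0)    # 8-point rule on [-1, 1]
    x40, w40 = roots_jacobi(40, alpha, 0.0)    # fine rule used as reference
    val8, ref = np.dot(w8, np.cos(x8)), np.dot(w40, np.cos(x40))
    print(abs(val8 - ref))                     # tiny: only the smooth factor cos(x) is approximated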

Houdr\'e and Tetali defined a class of isoperimetric constants $\varphi_p$ of graphs for $0 \leq p \leq 1$, and conjectured a Cheeger-type inequality for $\varphi_{1/2}$ of the form $$\lambda_2 \lesssim \varphi_{1/2} \lesssim \sqrt{\lambda_2}$$ where $\lambda_2$ is the second smallest eigenvalue of the normalized Laplacian matrix. If true, the conjecture would be a strengthening of the hard direction of the classical Cheeger's inequality. Morris and Peres proved Houdr\'e and Tetali's conjecture up to an additional log factor, using techniques from evolving sets. We present the following related results on this conjecture.
- We provide a family of counterexamples to the conjecture of Houdr\'e and Tetali, showing that the logarithmic factor is needed.
- We match Morris and Peres's bound using standard spectral arguments.
- We prove that Houdr\'e and Tetali's conjecture is true for any constant $p$ strictly bigger than $\frac12$, which is also a strengthening of the hard direction of Cheeger's inequality.
Furthermore, our results can be extended to directed graphs using Chung's definition of eigenvalues for directed graphs.
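
To ground the two quantities in the classical inequality that this conjecture strengthens, here is a small numerical check (our illustration with networkx/numpy; it uses the standard edge conductance rather than $\varphi_{1/2}$) of $\lambda_2/2 \leq \phi \leq \sqrt{2\lambda_2}$ on a cycle:

    import numpy as np, networkx as nx

    n = 40
    G = nx.cycle_graph(n)
    L = nx.normalized_laplacian_matrix(G).toarray()
    lam2 = np.sort(np.linalg.eigvalsh(L))[1]        # second smallest eigenvalue
    phi = nx.conductance(G, set(range(n // 2)))     # best cut is a half-arc: 2 / n
    print(lam2 / 2 <= phi <= np.sqrt(2 * lam2))     # True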

The goal of trace reconstruction is to reconstruct an unknown $n$-bit string $x$ given only independent random traces of $x$, where a random trace of $x$ is obtained by passing $x$ through a deletion channel. A Statistical Query (SQ) algorithm for trace reconstruction is an algorithm which can only access statistical information about the distribution of random traces of $x$ rather than individual traces themselves. Such an algorithm is said to be $\ell$-local if each of its statistical queries corresponds to an $\ell$-junta function over some block of $\ell$ consecutive bits in the trace. Since several -- but not all -- known algorithms for trace reconstruction fall under the local statistical query paradigm, it is interesting to understand the abilities and limitations of local SQ algorithms for trace reconstruction. In this paper we establish nearly-matching upper and lower bounds on local Statistical Query algorithms for both worst-case and average-case trace reconstruction. For the worst-case problem, we show that there is an $\tilde{O}(n^{1/5})$-local SQ algorithm that makes all its queries with tolerance $\tau \geq 2^{-\tilde{O}(n^{1/5})}$, and also that any $\tilde{O}(n^{1/5})$-local SQ algorithm must make some query with tolerance $\tau \leq 2^{-\tilde{\Omega}(n^{1/5})}$. For the average-case problem, we show that there is an $O(\log n)$-local SQ algorithm that makes all its queries with tolerance $\tau \geq 1/\mathrm{poly}(n)$, and also that any $O(\log n)$-local SQ algorithm must make some query with tolerance $\tau \leq 1/\mathrm{poly}(n).$
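
A toy sketch of the objects above (ours, not the paper's algorithm; the helpers trace and local_sq are made up for illustration): a deletion channel producing random traces of $x$, and one $\ell$-local statistical query, namely the empirical mean of a junta evaluated on a block of $\ell$ consecutive trace bits.

    import random

    def trace(x, q=0.5):
        """Pass the bit string x through a deletion channel: keep each bit with prob. 1 - q."""
        return [b for b in x if random.random() > q]

    def local_sq(x, block_start, pattern, samples=10000, q=0.5):
        """Estimate Pr[trace bits block_start .. block_start+len(pattern)-1 equal pattern]."""
        ell = len(pattern)
        hits = 0
        for _ in range(samples):
            t = trace(x, q)
            hits += t[block_start:block_start + ell] == pattern
        return hits / samples

    x = [random.randint(0, 1) for _ in range(100)]
    # A 1-local query: empirical probability that the first surviving bit equals x[0].
    print(local_sq(x, block_start=0, pattern=[x[0]]))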

A fundamental quantity of interest in Shannon theory, classical or quantum, is the error exponent of a given channel $W$ and rate $R$: the constant $E(W,R)$ which governs the exponential decay of decoding error when using ever larger optimal codes of fixed rate $R$ to communicate over ever more (memoryless) instances of a given channel $W$. Nearly matching lower and upper bounds are well-known for classical channels. Here I show a lower bound on the error exponent of communication over arbitrary classical-quantum (CQ) channels which matches Dalai's sphere-packing upper bound [IEEE TIT 59, 8027 (2013)] for rates above a critical value, exactly analogous to the case of classical channels. Unlike the classical case, however, the argument does not proceed via a refined analysis of a suitable decoder, but instead by leveraging a bound by Hayashi on the error exponent of the cryptographic task of privacy amplification [CMP 333, 335 (2015)]. This bound is then related to the coding problem via tight entropic uncertainty relations and Gallager's method of constructing capacity-achieving parity-check codes for arbitrary channels. Along the way, I find a lower bound on the error exponent of the task of compression of classical information relative to quantum side information that matches the sphere-packing upper bound of Cheng et al. [IEEE TIT 67, 902 (2021)]. In turn, the polynomial prefactors to the sphere-packing bound found by Cheng et al. may be translated to the privacy amplification problem, sharpening a recent result by Li, Yao, and Hayashi [IEEE TIT 69, 1680 (2023)], at least for linear randomness extractors.
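
For orientation, in the purely classical setting (standard textbook material, recalled here only for context) the two exponents in question are built from Gallager's function:
$$E_0(\rho,P)=-\log\sum_{y}\Bigl(\sum_{x}P(x)\,W(y|x)^{\frac{1}{1+\rho}}\Bigr)^{1+\rho},\qquad E_r(R)=\max_{P}\max_{0\le\rho\le1}\bigl[E_0(\rho,P)-\rho R\bigr],\qquad E_{\mathrm{sp}}(R)=\max_{P}\sup_{\rho\ge0}\bigl[E_0(\rho,P)-\rho R\bigr],$$
and $E_r(R)=E_{\mathrm{sp}}(R)$ for all rates above the critical rate, where the optimizing $\rho$ is at most $1$. The result above is the analogue of this matching for CQ channels, with the classical quantities replaced by suitable quantum ones.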

Consider an operator that takes the Fourier transform of a discrete measure supported in $\mathcal{X}\subset[-\frac 12,\frac 12)^d$ and restricts it to a compact $\Omega\subset\mathbb{R}^d$. We provide lower bounds for its smallest singular value when $\Omega$ is either a ball or cube of radius $m$, and under different types of geometric assumptions on $\mathcal{X}$. We first show that if distances between points in $\mathcal{X}$ are lower bounded by a $\delta$ that is allowed to be arbitrarily small, then the smallest singular value is at least $Cm^{d/2} (m\delta)^{\lambda-1}$, where $\lambda$ is the maximum number of elements in $\mathcal{X}$ contained within any ball or cube of an explicitly given radius. This estimate communicates a localization effect of the Fourier transform. While it is sharp, the smallest singular value behaves better than expected for many $\mathcal{X}$, including when we dilate a generic set by parameter $\delta$. We next show that if there is an $\eta$ such that, for each $x\in\mathcal{X}$, the set $\mathcal{X}\setminus\{x\}$ locally consists of at most $r$ hyperplanes whose distances to $x$ are at least $\eta$, then the smallest singular value is at least $C m^{d/2} (m\eta)^r$. For dilations of a generic set by $\delta$, the lower bound becomes $C m^{d/2} (m\delta)^{\lceil (\lambda-1)/d\rceil }$. The appearance of a $1/d$ factor in the exponent indicates that compared to worst case scenarios, the condition number of nonharmonic Fourier transforms is better than expected for typical sets and improves with higher dimensionality.
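
The operator can be inspected numerically in $d=1$ (our discretized sketch; the helper smallest_sv and the Riemann-sum normalization of the $L^2(\Omega)$ norm are ours): build the matrix $(e^{-2\pi i\omega x_j})$ for $\omega$ sampled in $[-m,m]$ and watch its smallest singular value deteriorate as the points of $\mathcal{X}$ cluster.

    import numpy as np

    def smallest_sv(X, m, grid=4000):
        """Smallest singular value of c -> (sum_j c_j e^{-2 pi i w x_j})_{w in [-m, m]}, discretized."""
        omega = np.linspace(-m, m, grid)
        A = np.exp(-2j * np.pi * np.outer(omega, X)) * np.sqrt(2 * m / grid)  # Riemann-sum L^2 norm
        return np.linalg.svd(A, compute_uv=False)[-1]

    m = 20
    for delta in (0.05, 0.01, 0.002):
        X = np.array([0.0, delta, 2 * delta])   # three delta-separated points in [-1/2, 1/2)
        print(delta, smallest_sv(X, m))         # shrinks as the cluster tightens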

It is a notorious open question whether integer programs (IPs), with an integer coefficient matrix $M$ whose subdeterminants are all bounded by a constant $\Delta$ in absolute value, can be solved in polynomial time. We answer this question in the affirmative if we further require that, by removing a constant number of rows and columns from $M$, one obtains a submatrix $A$ that is the transpose of a network matrix. Our approach focuses on the case where $A$ arises from $M$ after removing $k$ rows only, where $k$ is a constant. We achieve our result in two main steps, the first related to the theory of IPs and the second related to graph minor theory. First, we derive a strong proximity result for the case where $A$ is a general totally unimodular matrix: Given an optimal solution of the linear programming relaxation, an optimal solution to the IP can be obtained by finding a constant number of augmentations by circuits of $[A\; I]$. Second, for the case where $A$ is the transpose of a network matrix, we reformulate the problem as a maximum constrained integer potential problem on a graph $G$. We observe that if $G$ is $2$-connected, then it has no rooted $K_{2,t}$-minor for $t = \Omega(k \Delta)$. We leverage this to obtain a tree-decomposition of $G$ into highly structured graphs for which we can solve the problem locally. This allows us to solve the global problem via dynamic programming.
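
To make the hypothesis concrete, here is a brute-force check (illustrative only; the helper max_subdeterminant is ours) of the quantity $\Delta$, the largest absolute value of any subdeterminant of an integer matrix. The incidence-style matrix below is totally unimodular, i.e. $\Delta = 1$.

    from itertools import combinations
    import numpy as np

    def max_subdeterminant(M):
        M = np.asarray(M)
        best = 0
        for k in range(1, min(M.shape) + 1):
            for rows in combinations(range(M.shape[0]), k):
                for cols in combinations(range(M.shape[1]), k):
                    d = abs(int(round(np.linalg.det(M[np.ix_(rows, cols)]))))
                    best = max(best, d)
        return best

    M = [[1, -1, 0, 0],
         [0, 1, -1, 0],
         [0, 0, 1, -1]]            # arc-vertex incidence pattern of a directed path
    print(max_subdeterminant(M))   # 1: every subdeterminant lies in {-1, 0, 1}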

\v{C}ech persistence diagrams (PDs) are topological descriptors routinely used to capture the geometry of complex datasets. They are commonly compared using the Wasserstein distances $OT_{p}$; however, the extent to which PDs are stable with respect to these metrics remains poorly understood. We partially close this gap by focusing on the case where datasets are sampled on an $m$-dimensional submanifold of $\mathbb{R}^{d}$. Under this manifold hypothesis, we show that convergence with respect to the $OT_{p}$ metric happens exactly when $p > m$. We also provide improvements upon the bottleneck stability theorem in this case and prove new laws of large numbers for the total $\alpha$-persistence of PDs. Finally, we show how these theoretical findings shed new light on the behavior of the feature maps on the space of PDs that are used in ML-oriented applications of Topological Data Analysis.
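
As a tiny illustration of the summary statistic named above (our own sketch, with a made-up diagram), the total $\alpha$-persistence of a diagram with points $(b_i, d_i)$ is simply $\sum_i (d_i-b_i)^{\alpha}$:

    import numpy as np

    def total_persistence(diagram, alpha):
        """Total alpha-persistence of a diagram given as an array of (birth, death) pairs."""
        births, deaths = diagram[:, 0], diagram[:, 1]
        return np.sum((deaths - births) ** alpha)

    dgm = np.array([[0.0, 1.2], [0.3, 0.5], [0.1, 0.15]])   # illustrative diagram, not real data
    for a in (0.5, 1.0, 2.0):
        print(a, total_persistence(dgm, a))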

The convex dimension of a $k$-uniform hypergraph is the smallest dimension $d$ for which there is an injective mapping of its vertices into $\mathbb{R}^d$ such that the set of $k$-barycenters of all hyperedges is in convex position. We completely determine the convex dimension of complete $k$-uniform hypergraphs, which settles an open question by Halman, Onn and Rothblum, who solved the problem for complete graphs. We also provide lower and upper bounds for the extremal problem of estimating the maximal number of hyperedges of $k$-uniform hypergraphs on $n$ vertices with convex dimension $d$. To prove these results, we restate them in terms of affine projections that preserve the vertices of the hypersimplex. More generally, we provide a full characterization of the projections that preserve its $i$-dimensional skeleton. In particular, we obtain a hypersimplicial generalization of the linear van Kampen-Flores theorem: for each $n$, $k$ and $i$ we determine onto which dimensions the $(n,k)$-hypersimplex can be linearly projected while preserving its $i$-skeleton. Our results have direct interpretations in terms of $k$-sets and $(i,j)$-partitions, and are closely related to the problem of finding large convexly independent subsets in Minkowski sums of $k$ point sets.
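
A concrete instance of the definition (a standard one, sketched with scipy; the helper in_convex_position is ours): if the $n$ vertices are sent to the standard basis of $\mathbb{R}^n$, the $k$-barycenters are exactly the vertices of the $(n,k)$-hypersimplex scaled by $1/k$, hence in convex position.

    from itertools import combinations
    import numpy as np
    from scipy.spatial import ConvexHull

    def in_convex_position(points):
        """Points are in convex position iff every point is a vertex of their convex hull."""
        hull = ConvexHull(points)
        return len(hull.vertices) == len(points)

    n, k = 6, 3
    E = np.eye(n)                                  # vertex i -> standard basis vector e_i
    bary = np.array([E[list(S)].mean(axis=0) for S in combinations(range(n), k)])
    # The barycenters are (1/k times) the hypersimplex vertices; drop one coordinate so the
    # point set is full-dimensional before calling Qhull.
    print(in_convex_position(bary[:, :-1]))        # True: vertices of a polytope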
