
The reconfiguration graph $\mathcal{C}_k(G)$ for the $k$-colourings of a graph $G$ has a vertex for each proper $k$-colouring of $G$, and two vertices of $\mathcal{C}_k(G)$ are adjacent precisely when those $k$-colourings differ on a single vertex of $G$. Much work has focused on bounding the maximum value of ${\rm{diam}}~\mathcal{C}_k(G)$ over all $n$-vertex graphs $G$. We consider the analogous problems for list colourings and for correspondence colourings. We conjecture that if $L$ is a list-assignment for a graph $G$ with $|L(v)|\ge d(v)+2$ for all $v\in V(G)$, then ${\rm{diam}}~\mathcal{C}_L(G)\le n(G)+\mu(G)$. We also conjecture that if $(L,H)$ is a correspondence cover for a graph $G$ with $|L(v)|\ge d(v)+2$ for all $v\in V(G)$, then ${\rm{diam}}~\mathcal{C}_{(L,H)}(G)\le n(G)+\tau(G)$. (Here $\mu(G)$ and $\tau(G)$ denote the matching number and vertex cover number of $G$.) For every graph $G$, we give constructions showing that both conjectures are best possible. Our first main result proves the upper bounds (for the list and correspondence versions, respectively) ${\rm{diam}}~\mathcal{C}_L(G)\le n(G)+2\mu(G)$ and ${\rm{diam}}~\mathcal{C}_{(L,H)}(G)\le n(G)+2\tau(G)$. Our second main result proves that both conjectured bounds hold, whenever all $v$ satisfy $|L(v)|\ge 2d(v)+1$. We conclude by proving one or both conjectures for various classes of graphs such as complete bipartite graphs, subcubic graphs, cactuses, and graphs with bounded maximum average degree.
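
To make the central object concrete, here is a minimal Python sketch (not taken from the paper) that builds the reconfiguration graph $\mathcal{C}_k(G)$ of a small graph by brute force and computes its diameter; the choice of $G = P_3$ and $k = 3$ is arbitrary.

```python
# Illustrative sketch: build the reconfiguration graph C_k(G) of proper
# k-colourings of a small graph G and compute its diameter by brute force.
from itertools import product
from collections import deque

def proper_colourings(num_vertices, edges, k):
    """All proper k-colourings of G, as tuples indexed by vertex 0..n-1."""
    return [c for c in product(range(k), repeat=num_vertices)
            if all(c[u] != c[v] for u, v in edges)]

def reconfiguration_diameter(num_vertices, edges, k):
    cols = proper_colourings(num_vertices, edges, k)
    # Two colourings are adjacent iff they differ on exactly one vertex of G.
    adj = [[] for _ in cols]
    for i, c in enumerate(cols):
        for j in range(i + 1, len(cols)):
            if sum(a != b for a, b in zip(c, cols[j])) == 1:
                adj[i].append(j)
                adj[j].append(i)
    # BFS from every colouring; the diameter is the largest eccentricity.
    diam = 0
    for s in range(len(cols)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if len(dist) < len(cols):
            return float('inf')   # C_k(G) is disconnected
        diam = max(diam, max(dist.values()))
    return diam

# Example: the path on 3 vertices with k = 3 colours.
print(reconfiguration_diameter(3, [(0, 1), (1, 2)], 3))
```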

Related content

Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev spaces $W^s(L_q(\Omega))$ and Besov spaces $B^s_r(L_q(\Omega))$, with error measured in the $L_p(\Omega)$ norm. This problem is important when studying the application of neural networks in a variety of fields, including scientific computing and signal processing, and has previously been solved only when $p=q=\infty$. Our contribution is to provide a complete solution for all $1\leq p,q\leq \infty$ and $s > 0$ for which the corresponding Sobolev or Besov space compactly embeds into $L_p$. The key technical tool is a novel bit-extraction technique which gives an optimal encoding of sparse vectors. This enables us to obtain sharp upper bounds in the non-linear regime where $p > q$. We also provide a novel method for deriving $L_p$-approximation lower bounds based upon VC-dimension when $p < \infty$. Our results show that very deep ReLU networks significantly outperform classical methods of approximation in terms of the number of parameters, but that this comes at the cost of parameters which are not encodable.
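
As a hedged illustration of the approximation setting only (not the paper's bit-extraction construction), the sketch below realizes the piecewise-linear interpolant of a one-dimensional target as a shallow ReLU network and reports how the sup-norm error decays with the number of neurons; the target $\sin(2\pi x)$ and the knot counts are arbitrary choices.

```python
# Illustrative sketch: a shallow ReLU network that realizes the piecewise-linear
# interpolant of a 1-D target on [0,1], showing how the sup-norm error shrinks
# as the number of parameters (neurons) grows.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_interpolant(f, num_knots):
    """Return a ReLU network x -> f(0) + sum_j c_j * relu(x - t_j) that
    interpolates f at num_knots + 1 equispaced knots in [0, 1]."""
    t = np.linspace(0.0, 1.0, num_knots + 1)
    slopes = np.diff(f(t)) / np.diff(t)        # slope on each knot interval
    coeffs = np.diff(slopes, prepend=0.0)      # slope changes at the knots
    def net(x):
        return f(t[0]) + relu(x[:, None] - t[None, :-1]) @ coeffs
    return net

f = lambda x: np.sin(2 * np.pi * x)
xs = np.linspace(0.0, 1.0, 10_000)
for n in (8, 32, 128):
    err = np.max(np.abs(relu_interpolant(f, n)(xs) - f(xs)))
    print(f"{n:4d} neurons: sup-norm error ~ {err:.2e}")   # roughly O(n^-2)
```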

The orthogonality dimension of a graph $G$ over $\mathbb{R}$ is the smallest integer $k$ for which one can assign a nonzero $k$-dimensional real vector to each vertex of $G$, such that every two adjacent vertices receive orthogonal vectors. We prove that for every sufficiently large integer $k$, it is $\mathsf{NP}$-hard to decide whether the orthogonality dimension of a given graph over $\mathbb{R}$ is at most $k$ or at least $2^{(1-o(1)) \cdot k/2}$. We further prove such hardness results for the orthogonality dimension over finite fields as well as for the closely related minrank parameter, which is motivated by the index coding problem in information theory. This in particular implies that it is $\mathsf{NP}$-hard to approximate these graph quantities to within any constant factor. Previously, the hardness of approximation was known to hold either assuming certain variants of the Unique Games Conjecture or for approximation factors smaller than $3/2$. The proofs involve the concept of line digraphs and bounds on their orthogonality dimension and on the minrank of their complement.
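
To make the definition concrete, the following sketch (illustrative only) checks whether a vector assignment is a valid orthogonal representation; the graphs $K_3$ and $C_4$ and their assignments are arbitrary examples, not constructions from the paper.

```python
# Illustrative sketch: check that a vector assignment is an orthogonal
# representation, i.e. adjacent vertices receive orthogonal nonzero vectors.
import numpy as np

def is_orthogonal_representation(edges, vectors, tol=1e-9):
    if any(np.linalg.norm(v) <= tol for v in vectors.values()):
        return False                                   # vectors must be nonzero
    return all(abs(np.dot(vectors[u], vectors[v])) <= tol for u, v in edges)

# K_3 needs 3 pairwise-orthogonal vectors, so its orthogonality dimension is 3.
k3_edges = [(0, 1), (1, 2), (0, 2)]
k3_vecs = {i: np.eye(3)[i] for i in range(3)}
print(is_orthogonal_representation(k3_edges, k3_vecs))    # True

# The 4-cycle already admits an orthogonal representation in R^2.
c4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
c4_vecs = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]),
           2: np.array([1.0, 0.0]), 3: np.array([0.0, 1.0])}
print(is_orthogonal_representation(c4_edges, c4_vecs))    # True
```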

We consider the following general model of a sorting procedure: we fix a hereditary permutation class $\mathcal{C}$, which corresponds to the operations that the procedure is allowed to perform in a single step. The input of sorting is a permutation $\pi$ of the set $[n]=\{1,2,\dotsc,n\}$, i.e., a sequence where each element of $[n]$ appears once. In every step, the sorting procedure picks a permutation $\sigma$ of length $n$ from $\mathcal{C}$, and rearranges the current permutation of numbers by composing it with $\sigma$. The goal is to transform the input $\pi$ into the sorted sequence $1,2,\dotsc,n$ in as few steps as possible. This model of sorting captures not only classical sorting algorithms, like insertion sort or bubble sort, but also sorting by series of devices, like stacks or parallel queues, as well as sorting by block operations commonly considered, e.g., in the context of genome rearrangement. Our goal is to describe the possible asymptotic behavior of the worst-case number of steps needed when sorting with a hereditary permutation class. As the main result, we show that any hereditary permutation class $\mathcal{C}$ falls into one of five distinct categories. Disregarding the classes that cannot sort all permutations, the number of steps needed to sort any permutation of $[n]$ with $\mathcal{C}$ is either $\Theta(n^2)$, a function between $O(n)$ and $\Omega(\sqrt{n})$, a function between $O(\log^2 n)$ and $\Omega(\log n)$, or $1$, and for each of these cases we provide a structural characterization of the corresponding hereditary classes.
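
For a concrete feel of the sorting model (illustrative only, not from the paper), the sketch below instantiates the allowed class with single adjacent transpositions, so each step composes the current sequence with one such $\sigma$; the number of steps then equals the number of inversions of the input.

```python
# Illustrative sketch of the sorting model: in each step we compose the current
# sequence with a permutation sigma from the allowed class.  Here the class
# consists of single adjacent transpositions (plus the identity), so the number
# of steps needed equals the number of inversions of the input.
def apply(perm, sigma):
    """Rearrange perm by sigma: the i-th entry of the result is perm[sigma[i]]."""
    return [perm[s] for s in sigma]

def adjacent_transposition(n, i):
    """The permutation of {0, ..., n-1} that swaps positions i and i+1."""
    sigma = list(range(n))
    sigma[i], sigma[i + 1] = sigma[i + 1], sigma[i]
    return sigma

def sort_with_adjacent_swaps(pi):
    pi, steps = list(pi), 0
    n = len(pi)
    while pi != sorted(pi):
        # Greedily pick any adjacent inversion and fix it in one step.
        i = next(i for i in range(n - 1) if pi[i] > pi[i + 1])
        pi = apply(pi, adjacent_transposition(n, i))
        steps += 1
    return steps

print(sort_with_adjacent_swaps([3, 1, 4, 2]))      # 3 inversions -> 3 steps
print(sort_with_adjacent_swaps(range(5, 0, -1)))   # reversed sequence -> 10 steps
```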

We investigate the linear chromatic number $\chi_{\text{lin}}(G(n,p))$ of the binomial random graph $G(n,p)$ on $n$ vertices in which each edge appears independently with probability $p=p(n)$. For dense random graphs ($np \to \infty$ as $n \to \infty$), we show that asymptotically almost surely $\chi_{\text{lin}}(G(n,p)) \ge n (1 - O( (np)^{-1/2} ) ) = n(1-o(1))$. Understanding the order of the linear chromatic number for subcritical random graphs ($np < 1$) and critical ones ($np=1$) is relatively easy. However, supercritical sparse random graphs ($np = c$ for some constant $c > 1$) remain to be investigated.

A matrix $\Phi \in \mathbb{R}^{Q \times N}$ satisfies the restricted isometry property if $\|\Phi x\|_2^2$ is approximately equal to $\|x\|_2^2$ for all $k$-sparse vectors $x$. We give a construction of RIP matrices with the optimal $Q = O(k \log(N/k))$ rows using $O(k\log(N/k)\log(k))$ bits of randomness. The main technical ingredient is an extension of the Hanson-Wright inequality to $\epsilon$-biased distributions.
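
As an illustration of the property being constructed (not the paper's derandomized construction), the following sketch draws a Gaussian matrix with $Q = O(k\log(N/k))$ rows and checks empirically that it nearly preserves the norms of random $k$-sparse vectors; all parameters are arbitrary, and sampling random sparse vectors is of course weaker than the uniform guarantee RIP demands.

```python
# Illustrative sketch: empirically check restricted-isometry behaviour of a
# random Gaussian matrix on random k-sparse vectors.
import numpy as np

rng = np.random.default_rng(0)
N, k = 4000, 20
Q = int(4 * k * np.log(N / k))                  # Q = O(k log(N/k)) rows
Phi = rng.standard_normal((Q, N)) / np.sqrt(Q)  # scaled so E||Phi x||^2 = ||x||^2

ratios = []
for _ in range(200):
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = rng.standard_normal(k)          # a random k-sparse vector
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

print(f"Q = {Q} rows; ||Phi x||^2 / ||x||^2 in [{min(ratios):.3f}, {max(ratios):.3f}]")
```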

We provide a simple and natural solution to the problem of generating all $2^n \cdot n!$ signed permutations of $[n] = \{1,2,\ldots,n\}$. Our solution provides a pleasing generalization of the most famous ordering of permutations: plain changes (Steinhaus-Johnson-Trotter algorithm). In plain changes, the $n!$ permutations of $[n]$ are ordered so that successive permutations differ by swapping a pair of adjacent symbols, and the order is often visualized as a weaving pattern involving $n$ ropes. Here we model a signed permutation using $n$ ribbons with two distinct sides, and each successive configuration is created by twisting (i.e., swapping and turning over) two neighboring ribbons or a single ribbon. By greedily prioritizing $2$-twists of the largest symbol before $1$-twists of the largest symbol, we create a signed version of plain changes' memorable zig-zag pattern. We provide a loopless algorithm (i.e., worst-case $\mathcal{O}(1)$-time per object) by extending the well-known mixed-radix Gray code algorithm.
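
To illustrate the objects and moves involved (this is not the paper's loopless greedy ordering), the sketch below implements $1$-twists and $2$-twists on signed permutations and verifies by breadth-first search that these moves reach all $2^n \cdot n!$ configurations for small $n$.

```python
# Illustrative sketch: signed permutations of [n], a 1-twist (turn over one
# ribbon) and a 2-twist (swap and turn over two neighbouring ribbons).  A BFS
# confirms these moves connect all 2^n * n! signed permutations.
from collections import deque
from math import factorial

def one_twists(p):
    for i in range(len(p)):
        yield p[:i] + (-p[i],) + p[i + 1:]

def two_twists(p):
    for i in range(len(p) - 1):
        yield p[:i] + (-p[i + 1], -p[i]) + p[i + 2:]

def reachable(n):
    start = tuple(range(1, n + 1))
    seen, queue = {start}, deque([start])
    while queue:
        p = queue.popleft()
        for q in list(one_twists(p)) + list(two_twists(p)):
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return len(seen)

for n in (1, 2, 3, 4):
    print(n, reachable(n), 2 ** n * factorial(n))   # the two counts agree
```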

We study matrix-matrix multiplication of two $n \times n$ matrices $A$ and $B$, which produces an $n \times n$ matrix $C$. Our goal is to compute $C$ as efficiently as possible given a cache: a small, fast memory holding a limited set of data values on which the elementary operations (additions, multiplications, etc.) are performed. That is, we attempt to reuse as much data from $A$, $B$, and $C$ as possible during the computation (or equivalently, to serve as many accesses as possible from the fast cache). Firstly, we introduce the matrix-matrix multiplication algorithm. Secondly, we present a standard two-memory model to simulate the architecture of a computer, and we explain the LRU (Least Recently Used) cache policy, which is standard in most computers. Thirdly, we introduce a basic model Cache Simulator, which has $\mathcal{O}(M)$ time complexity, where $M$ is the cache size (so we are limited to small values of $M$). Then we discuss and model the LFU (Least Frequently Used) cache policy and the explicit-control cache policy. Finally, we introduce the main result of this paper, the $\mathcal{O}(1)$ Cache Simulator, and use it to compare experimentally the savings in time, energy, and communication achieved by the ideal cache-efficient algorithm for matrix-matrix multiplication. The Cache Simulator simulates the amount of data movement that occurs between the main memory and the cache of the computer. One of the findings of this project is that, in some cases, there is a significant discrepancy in communication values between an LRU cache algorithm and explicit cache control. We propose to alleviate this problem by ``tricking'' the LRU cache algorithm: we update the timestamps of the data we want to keep in cache (namely the entries of matrix $C$). This enables us to have the benefits of an explicit cache policy while being constrained by the LRU paradigm (the realistic policy on a CPU).
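
As a toy version of the cache simulation described above (not the paper's $\mathcal{O}(1)$ simulator; entry-level granularity and the chosen sizes are arbitrary simplifications), the following sketch counts LRU cache misses incurred by the naive triple-loop multiplication.

```python
# Illustrative sketch: count cache misses of the naive triple-loop matrix
# multiplication under an LRU policy, treating each matrix entry as one cache
# entry.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.slots, self.misses = capacity, OrderedDict(), 0

    def access(self, address):
        if address in self.slots:
            self.slots.move_to_end(address)   # hit: mark as most recently used
        else:
            self.misses += 1                  # miss: load, evicting the LRU entry
            if len(self.slots) >= self.capacity:
                self.slots.popitem(last=False)
            self.slots[address] = True

def matmul_misses(n, capacity):
    cache = LRUCache(capacity)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                cache.access(('A', i, k))
                cache.access(('B', k, j))
                cache.access(('C', i, j))     # accumulate into C[i][j]
    return cache.misses

n = 32
for capacity in (64, 256, 1024):
    print(f"cache of {capacity} entries: {matmul_misses(n, capacity)} misses")
```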

We investigate the problem of approximating an incomplete preference relation $\succsim$ on a finite set by a complete preference relation. We aim to obtain this approximation in such a way that the choices on the basis of two preferences, one incomplete, the other complete, have the smallest possible discrepancy in the aggregate. To this end, we use the top-difference metric on preferences, and define a best complete approximation of $\succsim$ as a complete preference relation nearest to $\succsim$ relative to this metric. We prove that such an approximation must be a maximal completion of $\succsim$, and that it is, in fact, any one completion of $\succsim$ with the largest index. Finally, we use these results to provide a sufficient condition for the best complete approximation of a preference to be its canonical completion. This leads to closed-form solutions to the best approximation problem in the case of several incomplete preference relations of interest.

The matrix semigroup membership problem asks, given square matrices $M,M_1,\ldots,M_k$ of the same dimension, whether $M$ lies in the semigroup generated by $M_1,\ldots,M_k$. It is classical that this problem is undecidable in general but decidable in case $M_1,\ldots,M_k$ commute. In this paper we consider the problem of whether, given $M_1,\ldots,M_k$, the semigroup generated by $M_1,\ldots,M_k$ contains a non-negative matrix. We show that in case $M_1,\ldots,M_k$ commute, this problem is decidable subject to Schanuel's Conjecture. We show also that the problem is undecidable if the commutativity assumption is dropped. A key lemma in our decidability result is a procedure to determine, given a matrix $M$, whether the sequence of matrices $(M^n)_{n\geq 0}$ is ultimately nonnegative. This answers a problem posed by S. Akshay (arXiv:2205.09190). The latter result is in stark contrast to the notorious fact that it is not known how to determine effectively whether for any specific matrix index $(i,j)$ the sequence $(M^n)_{i,j}$ is ultimately nonnegative (which is a formulation of the Ultimate Positivity Problem for linear recurrence sequences).
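
The following sketch (illustrative only) inspects finitely many powers $M^n$ for entrywise nonnegativity; such finite checking can only provide evidence, and deciding ultimate nonnegativity is precisely what the paper's procedure achieves, subject to Schanuel's Conjecture. The example matrix is arbitrary.

```python
# Illustrative sketch: inspect whether the powers M^n of a matrix appear to
# become (and stay) entrywise nonnegative up to a finite bound.
import numpy as np

def first_nonnegative_power(M, bound=200, tol=1e-12):
    """Smallest n <= bound such that M^n, ..., M^bound are all entrywise
    nonnegative, or None if no such n exists within the bound."""
    P, nonneg_from = np.eye(M.shape[0]), None
    for n in range(1, bound + 1):
        P = P @ M
        if (P >= -tol).all():
            if nonneg_from is None:
                nonneg_from = n
        else:
            nonneg_from = None
    return nonneg_from

M = np.array([[0.9, 0.2],
              [0.3, -0.1]])
print(first_nonnegative_power(M))   # evidence that (M^n) is ultimately nonnegative
```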

In this paper, we investigate the problem of deciding whether two random databases $\mathsf{X}\in\mathcal{X}^{n\times d}$ and $\mathsf{Y}\in\mathcal{Y}^{n\times d}$ are statistically dependent or not. This is formulated as a hypothesis testing problem, where under the null hypothesis, these two databases are statistically independent, while under the alternative, there exists an unknown row permutation $\sigma$, such that $\mathsf{X}$ and $\mathsf{Y}^\sigma$, a permuted version of $\mathsf{Y}$, are statistically dependent with some known joint distribution, but have the same marginal distributions as the null. We characterize the thresholds at which optimal testing is information-theoretically impossible and possible, as a function of $n$, $d$, and some spectral properties of the generative distributions of the datasets. For example, we prove that if a certain function of the eigenvalues of the likelihood function and $d$ is below a certain threshold, as $d\to\infty$, then weak detection (performing slightly better than random guessing) is statistically impossible, no matter what the value of $n$ is. This mimics the performance of an efficient test that thresholds a centered version of the log-likelihood function of the observed matrices. We also analyze the case where $d$ is fixed, for which we derive strong (vanishing error) and weak detection lower and upper bounds.
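
As a hedged illustration of the testing problem (not the paper's analysis), the sketch below simulates tiny Gaussian databases in which, under the alternative, each row of $\mathsf{X}$ is correlated with a hidden-permutation-matched row of $\mathsf{Y}$, and computes by brute force the logarithm of the permutation-averaged likelihood ratio; the Gaussian model, the correlation $\rho$, and the sizes $n, d$ are arbitrary choices.

```python
# Illustrative sketch: tiny Gaussian databases X and Y.  Under the alternative,
# an unknown row permutation makes row i of X correlated (coefficient rho, per
# coordinate) with the matched row of Y.  For small n, the permutation-averaged
# likelihood ratio can be computed by brute force; its logarithm separates the
# two hypotheses as d grows.
from itertools import permutations
import numpy as np

rng = np.random.default_rng(1)
n, d, rho = 5, 100, 0.3

def sample(dependent):
    X = rng.standard_normal((n, d))
    Z = rng.standard_normal((n, d))
    Y = rho * X + np.sqrt(1 - rho ** 2) * Z if dependent else Z
    if dependent:
        Y = Y[rng.permutation(n)]            # hide the row correspondence
    return X, Y

def log_pairwise_ratio(x, y):
    """log of f_rho(x, y) / (f(x) f(y)) for d independent bivariate-normal coords."""
    return (-0.5 * d * np.log(1 - rho ** 2)
            - (rho ** 2 * (x @ x + y @ y) - 2 * rho * (x @ y)) / (2 * (1 - rho ** 2)))

def log_likelihood_ratio(X, Y):
    """log of (1/n!) * sum over sigma of prod_i f_rho(X_i, Y_sigma(i)) / nulls."""
    L = np.array([[log_pairwise_ratio(X[i], Y[j]) for j in range(n)] for i in range(n)])
    terms = [sum(L[i, s[i]] for i in range(n)) for s in permutations(range(n))]
    m = max(terms)
    return m + np.log(np.mean(np.exp(np.array(terms) - m)))   # stable log-mean-exp

null = [log_likelihood_ratio(*sample(False)) for _ in range(30)]
alt = [log_likelihood_ratio(*sample(True)) for _ in range(30)]
print(f"null: mean {np.mean(null):+.1f}   alternative: mean {np.mean(alt):+.1f}")
```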
