
The weight distribution and the weight hierarchy of a linear code are two important research topics in coding theory. In this paper, we choose the defining set $D=\big\{(x,y)\in \big(\mathbb{F}_{p^{s_1}}\times\mathbb{F}_{p^{s_2}}\big)\backslash\{(0,0)\}: f(x)+\mathrm{Tr}_1^{s_2}(\alpha y)=0\big\}$, where $\alpha\in\mathbb{F}_{p^{s_2}}^*$ and $f(x)$ is a quadratic form over $\mathbb{F}_{p^{s_1}}$ with values in $\mathbb{F}_p$. Whether or not $f(x)$ is non-degenerate, this choice yields a family of three-weight $p$-ary linear codes, and we determine their weight distributions and weight hierarchies. Most of these codes can be used in secret sharing schemes.
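
As a toy illustration of the defining-set construction, the following Python snippet takes hypothetical tiny parameters $p=3$, $s_1=s_2=1$ (so both traces are the identity map) and $f(x)=x^2$, builds $D$, and counts codeword weights by brute force. These parameters are far too small to exhibit the three-weight phenomenon; the sketch only shows the mechanics of the construction.

```python
# Toy defining-set construction: p = 3, s1 = s2 = 1, f(x) = x^2,
# so traces are the identity and all arithmetic is mod p.
from collections import Counter

p, alpha = 3, 1
# defining set D = {(x, y) != (0, 0) : f(x) + alpha*y = 0 in F_p}
D = [(x, y) for x in range(p) for y in range(p)
     if (x, y) != (0, 0) and (x * x + alpha * y) % p == 0]

weights = Counter()
for a in range(p):
    for b in range(p):
        # codeword c_{a,b} = (a*x + b*y mod p)_{(x,y) in D}
        cw = [(a * x + b * y) % p for (x, y) in D]
        weights[sum(v != 0 for v in cw)] += 1
print(sorted(D), dict(weights))  # defining set and weight distribution
```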

Related content

We prove a few new lower bounds on the randomized competitive ratio for the $k$-server problem and other related problems, resolving some long-standing conjectures. In particular, for metrical task systems (MTS) we asymptotically settle the competitive ratio and obtain the first improvement to an existential lower bound since the introduction of the model 35 years ago (in 1987). More concretely, we show:

1. There exist $(k+1)$-point metric spaces in which the randomized competitive ratio for the $k$-server problem is $\Omega(\log^2 k)$. This refutes the folklore conjecture (which is known to hold in some families of metrics) that in all metric spaces with at least $k+1$ points, the competitive ratio is $\Theta(\log k)$.

2. Consequently, there exist $n$-point metric spaces in which the randomized competitive ratio for MTS is $\Omega(\log^2 n)$. This matches the upper bound that holds for all metrics. The previously best existential lower bound was $\Omega(\log n)$ (which was known to be tight for some families of metrics).

3. For all $k<n\in\mathbb N$, for *all* $n$-point metric spaces the randomized $k$-server competitive ratio is at least $\Omega(\log k)$, and consequently the randomized MTS competitive ratio is at least $\Omega(\log n)$. These universal lower bounds are asymptotically tight. The previous bounds were $\Omega(\log k/\log\log k)$ and $\Omega(\log n/\log\log n)$, respectively.

4. The randomized competitive ratio for the $w$-set metrical service systems problem, and for its equivalent width-$w$ layered graph traversal problem, is $\Omega(w^2)$. This slightly improves the previous lower bound and matches the recently discovered upper bound.

5. Our results imply improved lower bounds for other problems such as $k$-taxi, distributed paging, and metric allocation.

These lower bounds share a common thread, and all but the third also share a common construction.

Lattices with a circulant generator matrix represent a subclass of cyclic lattices. This subclass can be described by a basis consisting of a vector and its circular shifts. In this paper, we present conditions under which the norm expression of an arbitrary vector of such a lattice is substantially simplified, and we then investigate some of the lattices obtained under these conditions. We exhibit systems of nonlinear equations whose solutions yield lattices as dense as $D_n$ in odd dimensions. As for even dimensions, we obtain lattices denser than $A_n$ whenever $n \in 2\mathbb{Z} \backslash 4\mathbb{Z}$.
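
To make the setup concrete, here is a small numpy sketch (illustrative names and a heuristic search box, not the paper's method) that builds a circulant generator matrix from a vector and its circular shifts and computes the center density, the quantity behind comparisons with $D_n$ and $A_n$.

```python
# Sketch: circulant generator matrix and a brute-force estimate of the
# minimum norm. The coefficient box is a heuristic, not a certified
# shortest-vector search.
import itertools
import numpy as np

def circulant(v):
    v = np.asarray(v, dtype=float)
    return np.stack([np.roll(v, k) for k in range(len(v))])

def min_norm2(B, box=2):
    n = len(B)
    best = np.inf
    for c in itertools.product(range(-box, box + 1), repeat=n):
        if any(c):  # skip the zero vector
            best = min(best, float(np.sum((np.array(c) @ B) ** 2)))
    return best

B = circulant([2, 1, 1])          # basis: (2, 1, 1) and its circular shifts
lam2 = min_norm2(B)               # squared minimum of the lattice
covol = abs(np.linalg.det(B))     # covolume
# center density (lam2 / 4)^(n/2) / covolume
print(lam2, (lam2 / 4) ** (len(B) / 2) / covol)
```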

Learning generic high-dimensional tasks is notably hard, as it requires a number of training data exponential in the dimension. Yet, deep convolutional neural networks (CNNs) have shown remarkable success in overcoming this challenge. A popular hypothesis is that learnable tasks are highly structured and that CNNs leverage this structure to build a low-dimensional representation of the data. However, little is known about how much training data they require, and how this number depends on the data structure. This paper answers this question for a simple classification task that seeks to capture relevant aspects of real data: the Random Hierarchy Model. In this model, each of the $n_c$ classes corresponds to $m$ synonymic compositions of high-level features, which are in turn composed of sub-features through an iterative process repeated $L$ times. We find that the number of training data $P^*$ required by deep CNNs to learn this task (i) grows asymptotically as $n_c m^L$, which is only polynomial in the input dimensionality; (ii) coincides with the training set size such that the representation of a trained network becomes invariant to exchanges of synonyms; (iii) corresponds to the number of data at which the correlations between low-level features and classes become detectable. Overall, our results indicate how deep CNNs can overcome the curse of dimensionality by building invariant representations, and provide an estimate of the number of data required to learn a task based on its hierarchically compositional structure.
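
For concreteness, a minimal generator for data of this hierarchical type might look as follows. The vocabulary size `v` and branching factor `s` are assumptions of this sketch, and the exact sampling details of the Random Hierarchy Model may differ from what is shown here.

```python
# Sketch of a Random-Hierarchy-Model-style generator: each symbol expands
# into one of m synonymic tuples of s sub-features, repeated over L levels.
import random

def make_rules(v, m, s, rng):
    # each of the v symbols gets m synonymic productions of s sub-features
    return {sym: [tuple(rng.randrange(v) for _ in range(s)) for _ in range(m)]
            for sym in range(v)}

def sample_input(label, rules_per_level, rng):
    seq = [label]
    for rules in rules_per_level:          # expand L times, top to bottom
        seq = [f for sym in seq for f in rng.choice(rules[sym])]
    return seq                             # low-level features, length s**L

rng = random.Random(0)
v, m, s, L, n_c = 8, 2, 2, 3, 8            # n_c <= v classes
levels = [make_rules(v, m, s, rng) for _ in range(L)]
x = sample_input(rng.randrange(n_c), levels, rng)
print(len(x), x)                           # 2**3 = 8 low-level features
```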

For a graph class $\mathcal{G}$, we define the $\mathcal{G}$-modular cardinality of a graph $G$ as the minimum size of a vertex partition of $G$ into modules, each of which induces a graph in $\mathcal{G}$. This generalizes other module-based graph parameters such as neighborhood diversity and iterated type partition. Moreover, if $\mathcal{G}$ has bounded modular-width, the W[1]-hardness of a problem in $\mathcal{G}$-modular cardinality implies hardness on modular-width, clique-width, and other related parameters. On the other hand, fixed-parameter tractable (FPT) algorithms in $\mathcal{G}$-modular cardinality may provide new ideas for algorithms using such parameters. Several FPT algorithms based on modular partitions compute a solution table in each module, then combine the tables into a global solution. This works well when each table has a succinct representation, but as we argue, when no such representation exists, the problem is typically W[1]-hard. We illustrate these ideas on the generic $(\alpha, \beta)$-domination problem, which asks for a set of vertices that contains at least a fraction $\alpha$ of the adjacent vertices of each unchosen vertex, plus some (possibly negative) amount $\beta$. This generalizes known domination problems such as Bounded Degree Deletion, $k$-Domination, and $\alpha$-Domination. We show that for graph classes $\mathcal{G}$ that require arbitrarily large solution tables, these problems are W[1]-hard in the $\mathcal{G}$-modular cardinality, whereas they are fixed-parameter tractable when they admit succinct solution tables. This leads to several new positive and negative results for many domination problems parameterized by known and novel structural graph parameters such as clique-width, modular-width, and cluster-modular cardinality.
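
A small checker makes the $(\alpha, \beta)$-domination condition concrete. This is one reading of the definition above (every unchosen vertex $v$ needs at least $\alpha\cdot\deg(v) + \beta$ chosen neighbours), with illustrative names only:

```python
# Check whether `chosen` is an (alpha, beta)-dominating set of the graph
# given as an adjacency dict.
def is_alpha_beta_dominating(adj, chosen, alpha, beta):
    for v, nbrs in adj.items():
        if v in chosen:
            continue
        # unchosen vertices need enough chosen neighbours
        if sum(u in chosen for u in nbrs) < alpha * len(nbrs) + beta:
            return False
    return True

# path a-b-c: {b} is (1/2, 0)-dominating, since a and c each have their
# single neighbour chosen
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(is_alpha_beta_dominating(adj, {"b"}, 0.5, 0))  # True
```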

To improve the convergence properties of the randomized Kaczmarz (RK) method for solving linear systems, Bai and Wu (SIAM J. Sci. Comput., 40(1):A592--A606, 2018) introduced a greedy probability criterion for effectively selecting the working row from the coefficient matrix and constructed the greedy randomized Kaczmarz (GRK) method. Owing to its simplicity and efficiency, this approach has inspired numerous subsequent works in recent years, such as the capped adaptive sampling rule, the greedy augmented randomized Kaczmarz method, and the greedy randomized coordinate descent method. Since the iterates of the GRK method are random variables, existing convergence analyses are all stated in terms of the expectation of the error. In this note, we prove that the linear convergence rate of the GRK method is deterministic, i.e., it holds for every realization of the iterates rather than only in expectation. Moreover, we incorporate Polyak's heavy-ball momentum technique to improve the performance of the GRK method. We propose a refined convergence analysis, compared with the technique used in Loizou and Richt\'{a}rik (Comput. Optim. Appl., 77(3):653--710, 2020) for momentum variants of randomized iterative methods, which shows that the proposed GRK method with momentum (mGRK) also enjoys deterministic linear convergence. Numerical experiments show that the mGRK method is more efficient than the GRK method.
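
For reference, a minimal numpy sketch of the GRK iteration as described by Bai and Wu (2018) for a consistent system $Ax=b$; variable names are illustrative. The momentum variant mGRK would additionally add a term $\beta(x_k - x_{k-1})$ to each update.

```python
# Greedy randomized Kaczmarz: build a greedy index set of rows with large
# relative residuals, sample one proportionally to |r_i|^2, and project.
import numpy as np

def grk(A, b, x0, max_iter=1000, tol=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    row_norms2 = np.sum(A * A, axis=1)       # ||a_i||^2 for every row
    fro2 = row_norms2.sum()                  # ||A||_F^2
    for _ in range(max_iter):
        r = b - A @ x                        # residual
        r2 = r * r
        res2 = r2.sum()
        if res2 < tol ** 2:
            break
        ratios = r2 / row_norms2             # |r_i|^2 / ||a_i||^2
        eps = 0.5 * (ratios.max() / res2 + 1.0 / fro2)
        # greedy index set: rows whose relative residual is large enough
        U = np.flatnonzero(ratios >= eps * res2)
        probs = r2[U] / r2[U].sum()          # sample within U prop. to |r_i|^2
        i = rng.choice(U, p=probs)
        x += (r[i] / row_norms2[i]) * A[i]   # orthogonal projection onto H_i
    return x
```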

Vision transformers (ViTs) have pushed the state of the art for visual perception tasks. The self-attention mechanism underpinning the strength of ViTs has quadratic complexity in both computation and memory usage. This motivates the development of linear-complexity approximations to self-attention. However, an in-depth analysis in this work reveals that existing methods are either theoretically flawed or empirically ineffective for visual recognition. We identify that their limitations are rooted in retaining softmax-based self-attention during the approximation, that is, normalizing the scaled dot-product between token feature vectors with the softmax function, since preserving the softmax operation hampers any subsequent linearization effort. Based on this insight, we propose a family of Softmax-Free Transformers (SOFT). Specifically, a Gaussian kernel function is adopted to replace the dot-product similarity, enabling the full self-attention matrix to be approximated via a low-rank matrix decomposition. For computational robustness, we estimate the Moore-Penrose inverse using an iterative Newton-Raphson method in the forward pass only, while computing its theoretical gradients only once in the backward pass. To further broaden applicability (e.g., to dense prediction tasks), an efficient symmetric normalization technique is introduced. Extensive experiments on ImageNet, COCO, and ADE20K show that SOFT significantly improves the computational efficiency of existing ViT variants. With linear complexity, SOFT permits much longer token sequences, resulting in a superior trade-off between accuracy and complexity. Code and models are available at //github.com/fudan-zvg/SOFT.
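
The low-rank structure can be sketched in a few lines of numpy. This loosely mimics the decomposition described above, with random landmark sampling standing in for the paper's bottleneck tokens; it is an illustration under those assumptions, not the released implementation.

```python
# Softmax-free attention sketch: Gaussian kernel similarity, a Nystrom-style
# low-rank factorization, and a Newton-Schulz pseudoinverse iteration.
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2) replaces dot-product similarity
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def newton_pinv(A, iters=20):
    # Newton-Schulz iteration V <- V(2I - AV) for the Moore-Penrose inverse,
    # run in the forward pass only
    V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        V = V @ (2 * I - A @ V)
    return V

def soft_attention(X, V, m=16, gamma=1.0, seed=0):
    # sample m landmark tokens; attention ~= K_nm pinv(K_mm) K_mn V,
    # costing O(n*m) instead of the O(n^2) full kernel matrix
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    L = X[idx]
    K_nm = gaussian_kernel(X, L, gamma)
    K_mm = gaussian_kernel(L, L, gamma)
    return K_nm @ (newton_pinv(K_mm) @ (K_nm.T @ V))

X = np.random.default_rng(1).normal(size=(128, 32))
print(soft_attention(X, X).shape)  # (128, 32)
```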

A binary code of blocklength $n$ and codebook size $M$ is called an $(n,M)$ code; we study such codes for memoryless binary symmetric channels (BSCs) under maximum likelihood (ML) decoding. For any $n \geq 2$, some optimal codes among the linear $(n,4)$ codes were explicitly characterized in a previous study, but whether the optimal linear codes outperform all nonlinear codes was unknown. In this paper, we first show that for any $n\geq 2$, there exists an optimal code (among all $(n,4)$ codes) that is either linear or belongs to a subset of nonlinear codes, called Class-I codes. We identify all the optimal codes among the linear $(n,4)$ codes for each blocklength $n\geq 2$, including ones not previously reported in the literature. For every $n$ from $2$ to $300$, we identify all the optimal $(n,4)$ codes; except for $n=3$, all of them are equivalent to linear codes, while for $n=3$ there exist optimal codes that are not equivalent to linear codes. Furthermore, we derive a subset of nonlinear codes called Class-II codes and show that for any $n >300$, the set composed of linear, Class-I, and Class-II codes and their equivalent codes contains all the optimal $(n,4)$ codes. Both Class-I and Class-II codes are close to linear codes in the sense that they involve only one type of column not appearing in linear codes. Our results rely on a new technique for comparing the ML decoding performance of two codes, based on a partition of the entire range of the channel output.
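
The kind of exhaustive comparison underlying such results can be illustrated with a brute-force ML decoder. This snippet is only a toy that enumerates all $2^n$ channel outputs (feasible for tiny $n$); it is not the paper's partition technique.

```python
# Exact ML decoding success probability of an (n, M) code on a BSC(p),
# assuming equiprobable messages and p < 1/2.
from itertools import product

def ml_success_prob(code, p):
    n = len(code[0])
    total = 0.0
    for y in product((0, 1), repeat=n):
        # ML on a BSC picks the codeword at minimum Hamming distance
        d = min(sum(a != b for a, b in zip(y, c)) for c in code)
        total += p ** d * (1 - p) ** (n - d)  # likelihood of the ML codeword
    return total / len(code)

linear = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # a linear (3,4) code
print(ml_success_prob(linear, 0.1))
```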

Let $G$ be an undirected graph, and $s,t$ distinguished vertices of $G$. A minimal $s,t$-separator is an inclusion-wise minimal vertex set whose removal places $s$ and $t$ in distinct connected components. We present an algorithm that lists the minimal $s,t$-separators of a graph in non-decreasing order of cardinality, with polynomial delay. This problem finds applications in various algorithms parameterized by treewidth, including query evaluation in relational databases, probabilistic inference, and many more. In the process, we prove several results that are of independent interest. We establish a new island of tractability for the intensively studied 2-disjoint connected subgraphs problem, which is NP-complete even for restricted graph classes that include planar graphs, and we prove new characterizations of minimal $s,t$-separators. Ours is the first ranked enumeration algorithm for minimal separators whose delay is polynomial in the size of the input graph.
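
A simple checker for the defining property (assuming $s,t \notin S$); this is a helper for intuition, not the paper's enumeration algorithm.

```python
# Verify that S is a minimal s,t-separator: removing S disconnects s from t,
# and no proper subset of S does.
import networkx as nx

def separates(G, S, s, t):
    H = G.subgraph(set(G) - set(S))
    return not nx.has_path(H, s, t)

def is_minimal_separator(G, S, s, t):
    # inclusion-wise minimal: dropping any single vertex of S reconnects s, t
    return (separates(G, S, s, t)
            and all(not separates(G, S - {v}, s, t) for v in S))

G = nx.cycle_graph(6)                          # 0-1-2-3-4-5-0
print(is_minimal_separator(G, {1, 4}, 0, 3))   # True: both 1 and 4 are needed
```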

We give the first almost-linear time algorithm for computing the \emph{maximal $k$-edge-connected subgraphs} of an undirected unweighted graph for any constant $k$. More specifically, given an $n$-vertex $m$-edge graph $G=(V,E)$ and a number $k = \log^{o(1)}n$, we can deterministically compute in $O(m+n^{1+o(1)})$ time the unique vertex partition $\{V_{1},\dots,V_{z}\}$ such that, for every $i$, $V_{i}$ induces a $k$-edge-connected subgraph while every superset $V'_{i}\supset V_{i}$ does not. Previous algorithms with linear time work only when $k\le2$ [Tarjan SICOMP'72], otherwise they all require $\Omega(m+n\sqrt{n})$ time even when $k=3$ [Chechik et al. SODA'17; Forster et al. SODA'20]. Our algorithm also extends to the decremental graph setting; we can deterministically maintain the maximal $k$-edge-connected subgraphs of a graph undergoing edge deletions in $m^{1+o(1)}$ total update time. Our key idea is a reduction to the dynamic algorithm supporting pairwise $k$-edge-connectivity queries [Jin and Sun FOCS'20].
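
For contrast with the almost-linear algorithm, the classical recursive approach splits along any global cut of weight less than $k$ and recurses on both sides. A naive networkx sketch of that baseline, assuming a connected input graph:

```python
# Naive reference routine for maximal k-edge-connected subgraphs (the
# classical slow recursion, NOT the paper's almost-linear algorithm).
import networkx as nx

def max_k_edge_connected(G, k):
    if len(G) <= 1:
        return [set(G)]
    cut_value, (A, B) = nx.stoer_wagner(G)     # a global minimum cut
    if cut_value >= k:
        return [set(G)]                        # G itself is k-edge-connected
    # no k-edge-connected set can straddle a cut of weight < k, so recurse
    return (max_k_edge_connected(G.subgraph(A).copy(), k)
            + max_k_edge_connected(G.subgraph(B).copy(), k))

G = nx.disjoint_union(nx.complete_graph(4), nx.complete_graph(4))
G.add_edge(0, 4)                               # bridge between the two K4's
print(max_k_edge_connected(G, 3))              # the two K4 vertex sets
```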

We show that every Borel graph $G$ of subexponential growth has a Borel proper edge-coloring with $\Delta(G) + 1$ colors. We deduce this from a stronger result, namely that an $n$-vertex (finite) graph $G$ of subexponential growth can be properly edge-colored using $\Delta(G) + 1$ colors by an $O(\log^\ast n)$-round deterministic distributed algorithm in the $\mathsf{LOCAL}$ model, where the implied constants in the $O(\cdot)$ notation are determined by a bound on the growth rate of $G$.
