
We construct a family of $(n,k)$ convolutional codes with degree $\delta \in \{k, n-k\}$ that have a maximum distance profile. The field size required for our construction is of the order $n^{2\delta}$, which improves upon the known constructions of convolutional codes with a maximum distance profile. Our construction is based on the theory of skew polynomials.
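
For context, the property referred to above can be recalled as follows (a standard definition from the MDP literature, restated in our own notation rather than taken from the abstract): an $(n,k,\delta)$ convolutional code has column distances $d_j^c$ obeying

$$ d_j^c \le (n-k)(j+1)+1 \quad \text{for all } j \ge 0, $$

and it has a maximum distance profile if equality holds for every $0 \le j \le L$, where $L = \lfloor \delta/k \rfloor + \lfloor \delta/(n-k) \rfloor$.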

Related content

Let $G=(V,E)$ be an undirected unweighted planar graph. Consider a vector storing the distances from an arbitrary vertex $v$ to all vertices $S = \{ s_1 , s_2 , \ldots , s_k \}$ of a single face in their cyclic order. The pattern of $v$ is obtained by taking the difference between every pair of consecutive values of this vector. In STOC'19, Li and Parter used a VC-dimension argument to show that in planar graphs, the number of distinct patterns, denoted $x$, is only $O(k^3)$. This resulted in a simple compression scheme requiring $\tilde O(\min \{ k^4+|T|, k\cdot |T|\})$ space to encode the distances between $S$ and a subset of terminal vertices $T \subseteq V$. This is known as the Okamura-Seymour metric compression problem. We give an alternative proof of the $x=O(k^3)$ bound that exploits planarity beyond the VC-dimension argument. Namely, our proof relies on cut-cycle duality, as well as on the fact that distances among vertices of $S$ are bounded by $k$. Our method implies the following: (1) An $\tilde{O}(x+k+|T|)$ space compression of the Okamura-Seymour metric, thus improving the compression of Li and Parter to $\tilde O(\min \{k^3+|T|,k \cdot |T| \})$. (2) An optimal $\tilde{O}(k+|T|)$ space compression of the Okamura-Seymour metric, in the case where the vertices of $T$ induce a connected component in $G$. (3) A tight bound of $x = \Theta(k^2)$ for the family of Halin graphs, whereas the VC-dimension argument is limited to showing $x=O(k^3)$.
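
To make the pattern notion concrete, here is a minimal sketch (the helper names and the toy graph are our own; `networkx` is assumed only to compute BFS distances): for a vertex $v$, take its distances to the face vertices in cyclic order and record the consecutive differences.

```python
import networkx as nx

def pattern(G, v, face):
    """Consecutive differences of the distances from v to the face
    vertices, taken in their cyclic order around the face."""
    dist = nx.single_source_shortest_path_length(G, v)
    d = [dist[s] for s in face]
    return tuple(d[i + 1] - d[i] for i in range(len(d) - 1))

def count_distinct_patterns(G, face):
    """Number x of distinct patterns over all vertices of G."""
    return len({pattern(G, v, face) for v in G.nodes})

# Toy example: a 3x3 grid graph with its outer face listed in cyclic order.
G = nx.grid_2d_graph(3, 3)
face = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
print(count_distinct_patterns(G, face))
```

Counting the distinct tuples over all $v$ gives the quantity $x$ bounded by $O(k^3)$ above.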

We study the local complexity landscape of locally checkable labeling (LCL) problems on constant-degree graphs, with a focus on complexities below $\log^* n$. Our contribution is threefold. Our main contribution is that we complete the classification of the complexity landscape of LCL problems on trees in the LOCAL model, by proving that every LCL problem with local complexity $o(\log^* n)$ actually has complexity $O(1)$. This improves upon the previous speedup result, from $o(\log \log^* n)$ to $O(1)$, by [Chang, Pettie, FOCS 2017]. In the related LCA and Volume models [Alon, Rubinfeld, Vardi, Xie, SODA 2012; Rubinfeld, Tamir, Vardi, Xie, 2011; Rosenbaum, Suomela, PODC 2020], we prove the same speedup from $o(\log^* n)$ to $O(1)$ for all bounded-degree graphs. Similarly, we complete the classification of the LOCAL complexity landscape of oriented $d$-dimensional grids by proving that any LCL problem with local complexity $o(\log^* n)$ actually has complexity $O(1)$. This improves upon the previous speedup from $o(\sqrt[d]{\log^* n})$ by Suomela in [Chang, Pettie, FOCS 2017].

Reed-Muller (RM) codes are known for their good minimum distance. One can use their structure to construct polar-like codes with good distance properties by choosing the information set as the rows of the polarization matrix with the highest Hamming weight, instead of the most reliable synthetic channels. However, the information length options of RM codes are quite limited due to their specific structure. In this work, we present sufficient conditions under which the information length of some underlying RM codes can be increased by at least one bit, in order to obtain pre-transformed polar-like codes with the same minimum distance as the lower-rate codes. Moreover, our findings are combined with the method presented in [1] to further reduce the number of minimum-weight codewords. Numerical results show that the designed codes perform close to the meta-converse bound at short blocklengths and better than the polarization-adjusted convolutional polar codes with the same parameters.
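
To make the selection rule mentioned above concrete, here is a minimal sketch (our own illustration of the RM-like rule, not the paper's design procedure; the tie-breaking choice is ours): build the polarization matrix $G_N = F^{\otimes m}$ with kernel $F = \begin{pmatrix}1&0\\1&1\end{pmatrix}$ and pick the $K$ rows of highest Hamming weight as the information set, in contrast to reliability-based polar design.

```python
import numpy as np

def polarization_matrix(m):
    """G_N as the m-fold Kronecker power of the 2x2 kernel F."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, F)
    return G

def rm_like_information_set(m, K):
    """Indices of the K rows of G_N with the largest Hamming weight
    (ties broken here by row index), i.e. the RM-like selection rule."""
    G = polarization_matrix(m)
    weights = G.sum(axis=1)
    order = sorted(range(len(weights)), key=lambda i: (-int(weights[i]), i))
    return sorted(order[:K])

# Example: N = 32, K = 16; the selected rows are exactly those of weight >= 8,
# matching the second-order Reed-Muller code RM(2, 5).
print(rm_like_information_set(m=5, K=16))
```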

Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks. This makes them applicable to practically important use-cases where training data is scarce. Rather than being learned, this knowledge can be embedded by enforcing invariance to those transformations. Invariance can be imposed using group-equivariant convolutions followed by a pooling operation. For rotation-invariance, previous work investigated replacing the spatial pooling operation with invariant integration which explicitly constructs invariant representations. Invariant integration uses monomials which are selected using an iterative approach requiring expensive pre-training. We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems. Additionally, we replace monomials with different functions such as weighted sums, multi-layer perceptrons and self-attention, thereby streamlining the training of invariant-integration-based architectures. We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets where rotation-invariant-integration-based Wide-ResNet architectures using monomials and weighted sums outperform the respective baselines in the limited sample regime. We achieve state-of-the-art results using full data on Rotated-MNIST and SVHN where rotation is a main source of intraclass variation. On STL-10 we outperform a standard and a rotation-equivariant convolutional neural network using pooling.
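
As a toy illustration of the invariant-integration idea (a simplified sketch of ours using only the four 90° rotations instead of the continuous rotation group and learned equivariant features; all names and positions below are hypothetical): a monomial of feature values is averaged over the transformation group, which makes the pooled scalar invariant to those rotations of the input.

```python
import numpy as np

def monomial(x, positions, exponents):
    """Product of feature-map values at fixed positions raised to fixed
    exponents -- the monomial form used by invariant integration."""
    return np.prod([x[p] ** e for p, e in zip(positions, exponents)])

def invariant_integration_c4(x, positions, exponents):
    """Average the monomial over the group C4 of 90-degree rotations.
    The result is unchanged if x is replaced by any rotation of x."""
    return np.mean([monomial(np.rot90(x, k), positions, exponents)
                    for k in range(4)])

# Toy check of invariance on a random 8x8 "feature map".
rng = np.random.default_rng(0)
x = rng.random((8, 8))
pos, exp = [(1, 2), (4, 4), (6, 3)], [1, 2, 1]
a = invariant_integration_c4(x, pos, exp)
b = invariant_integration_c4(np.rot90(x), pos, exp)
print(np.isclose(a, b))  # True: the pooled value is rotation invariant
```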

The first aim of this article is to give information about the algebraic properties of alternate bases $\boldsymbol{\beta}=(\beta_0,\dots,\beta_{p-1})$ determining sofic systems. We show that a necessary condition is that the product $\delta=\prod_{i=0}^{p-1}\beta_i$ is an algebraic integer and all of the bases $\beta_0,\ldots,\beta_{p-1}$ belong to the algebraic field ${\mathbb Q}(\delta)$. On the other hand, we also give a sufficient condition: if $\delta$ is a Pisot number and $\beta_0,\ldots,\beta_{p-1}\in {\mathbb Q}(\delta)$, then the system associated with the alternate base $\boldsymbol{\beta}=(\beta_0,\dots,\beta_{p-1})$ is sofic. The second aim of this paper is to provide an analogue of Frougny's result concerning normalization of representations in real bases. We show that given an alternate base $\boldsymbol{\beta}=(\beta_0,\dots,\beta_{p-1})$ such that $\delta$ is a Pisot number and $\beta_0,\ldots,\beta_{p-1}\in {\mathbb Q}(\delta)$, the normalization function is computable by a finite B\"uchi automaton, and furthermore, we effectively construct such an automaton. An important tool in our study is the spectrum of numeration systems associated with alternate bases. The spectrum of a real number $\delta>1$ and an alphabet $A\subset {\mathbb Z}$ was introduced by Erd\H{o}s et al. For our purposes, we use a generalized concept with $\delta\in{\mathbb C}$ and $A\subset{\mathbb C}$ and study its topological properties.
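
For reference, the spectrum mentioned above is, in the generalized form used here with $\delta\in{\mathbb C}$ and $A\subset{\mathbb C}$ (recalled in our own notation; the role it plays in the proofs is not restated):

$$ X^{A}(\delta) \;=\; \Bigl\{ \sum_{i=0}^{n} a_i \delta^{\,i} \;:\; n \in \mathbb{N},\ a_i \in A \Bigr\}. $$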

Let $A(n, d)$ denote the maximum number of codewords in a binary code of length $n$ and minimum Hamming distance $d$. Deriving upper and lower bounds on $A(n, d)$ has been a subject of extensive research in coding theory. In this paper, we examine upper and lower bounds on $A(n, d)$ in the high-minimum-distance regime, in particular, when $d = n/2 - \Theta(\sqrt{n})$. We first provide a lower bound based on a cyclic construction for codes of length $n= 2^m -1$ and show that $A(n, d= n/2 - 2^{c-1}\sqrt{n}) \geq n^c$, where $c$ is an integer with $1 \leq c \leq m/2-1$. With a Fourier-analytic view of Delsarte's linear program, novel upper bounds on $A(n, n/2 - \sqrt{n})$ and $A(n, n/2 - 2 \sqrt{n})$ are obtained, and, to the best of the authors' knowledge, these are the first upper bounds scaling polynomially in $n$ for the regime with $d = n/2 - \Theta(\sqrt{n})$.
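
As background for the upper bounds mentioned above, here is a minimal sketch of the classical Delsarte linear program (the paper's Fourier-analytic refinement is not reproduced; `scipy` is assumed for the LP solver): the LP optimum over distance distributions upper-bounds $A(n,d)$.

```python
import math
from scipy.optimize import linprog

def krawtchouk(n, k, x):
    """Binary Krawtchouk polynomial K_k(x) for length n."""
    return sum((-1) ** j * math.comb(x, j) * math.comb(n - x, k - j)
               for j in range(k + 1))

def delsarte_lp_bound(n, d):
    """Classical Delsarte LP upper bound on A(n, d).

    Variables A_d, ..., A_n >= 0 (with A_0 = 1 and A_1 = ... = A_{d-1} = 0);
    maximize 1 + sum_i A_i subject to
    K_k(0) + sum_{i=d}^{n} A_i K_k(i) >= 0 for k = 1, ..., n.
    """
    idx = list(range(d, n + 1))
    c = [-1.0] * len(idx)                                   # maximize sum A_i
    A_ub = [[-krawtchouk(n, k, i) for i in idx] for k in range(1, n + 1)]
    b_ub = [krawtchouk(n, k, 0) for k in range(1, n + 1)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(idx))
    return 1.0 - res.fun

# Example: n = 16, d = 6; the printed LP value is an upper bound on A(16, 6).
print(delsarte_lp_bound(16, 6))
```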

Graph convolution is the core of most Graph Neural Networks (GNNs) and usually approximated by message passing between direct (one-hop) neighbors. In this work, we remove the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC). GDC leverages generalized graph diffusion, examples of which are the heat kernel and personalized PageRank. It alleviates the problem of noisy and often arbitrarily defined edges in real graphs. We show that GDC is closely related to spectral-based models and thus combines the strengths of both spatial (message passing) and spectral methods. We demonstrate that replacing message passing with graph diffusion convolution consistently leads to significant performance improvements across a wide range of models on both supervised and unsupervised tasks and a variety of datasets. Furthermore, GDC is not limited to GNNs but can trivially be combined with any graph-based model or algorithm (e.g. spectral clustering) without requiring any changes to the latter or affecting its computational complexity. Our implementation is available online.
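
A minimal dense numpy sketch of the diffusion step described above, using the personalized PageRank kernel as the example of generalized graph diffusion (thresholding is used as a simple stand-in for sparsification; parameter names are ours):

```python
import numpy as np

def gdc_ppr(A, alpha=0.15, eps=1e-4):
    """Graph diffusion with the personalized PageRank kernel,
    S = alpha * (I - (1 - alpha) * T)^(-1), followed by sparsification.

    A     : dense symmetric adjacency matrix, shape [n, n]
    alpha : teleport probability of personalized PageRank
    eps   : entries of S below eps are dropped (simple sparsification)
    """
    n = A.shape[0]
    A_tilde = A + np.eye(n)                        # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    T = D_inv_sqrt @ A_tilde @ D_inv_sqrt          # symmetric transition matrix
    S = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * T)
    S[S < eps] = 0.0                               # sparsify the dense kernel
    return S

# The diffused matrix S then replaces the adjacency used for message passing,
# e.g. one propagation step becomes H' = S @ H @ W instead of A @ H @ W.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(gdc_ppr(A).round(3))
```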

Graph neural networks, which generalize deep neural network models to graph-structured data, have attracted increasing attention in recent years. They usually learn node representations by transforming, propagating and aggregating node features, and have been proven to improve the performance of many graph-related tasks such as node classification and link prediction. To apply graph neural networks to the graph classification task, approaches to generate the \textit{graph representation} from node representations are needed. A common way is to globally combine the node representations. However, rich structural information is then overlooked. Thus a hierarchical pooling procedure is desired to preserve the graph structure during graph representation learning. There are some recent works on hierarchically learning graph representations, analogous to the pooling step in conventional convolutional neural networks (CNNs). However, the local structural information is still largely neglected during the pooling process. In this paper, we introduce a pooling operator based on the graph Fourier transform, which can utilize the node features and local structures during the pooling process. We then design pooling layers based on this operator, which are further combined with traditional GCN convolutional layers to form a graph neural network framework for graph classification. Theoretical analysis is provided to understand the pooling operator from both local and global perspectives. Experimental results of the graph classification task on $6$ commonly used benchmarks demonstrate the effectiveness of the proposed framework.
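
A toy sketch of pooling with the graph Fourier transform (our own simplified illustration, not the paper's exact operator): project the node features of a cluster onto the low-frequency eigenvectors of that cluster's Laplacian, so the pooled representation retains both the features and the local structure.

```python
import numpy as np

def graph_fourier_pool(A, X, num_components=2):
    """Pool a cluster's node features X (shape [n, f]) using the graph
    Fourier basis of its adjacency A: project onto the eigenvectors of the
    Laplacian with the smallest eigenvalues (the low-frequency modes)."""
    L = np.diag(A.sum(axis=1)) - A                # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    U = eigvecs[:, :num_components]               # low-frequency Fourier basis
    return U.T @ X                                # pooled representation [c, f]

# Toy cluster: 4 nodes on a path graph with 3-dimensional features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(12, dtype=float).reshape(4, 3)
print(graph_fourier_pool(A, X))
```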

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, network architecture and the complexity of properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
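
The anytime validity of the bound follows from weak duality; a schematic version of the relaxation described above, in our own notation (with $f$ defined so that positive values correspond to violations, $h_i$ equality constraints encoding the network, and $\mathcal{X}$ a relaxed input/activation domain; the paper's specific treatment of activations is not reproduced):

$$ p^{\star} \;=\; \max_{x \in \mathcal{X},\; h_i(x)=0\ \forall i} f(x) \;\le\; g(\lambda) \;=\; \max_{x \in \mathcal{X}} \Bigl[ f(x) + \sum_i \lambda_i h_i(x) \Bigr] \quad \text{for every } \lambda, $$

so whichever dual iterate $\lambda$ is current when the solver is stopped certifies an upper bound $g(\lambda)$ on the worst-case violation, and the specification is verified as soon as some $g(\lambda) < 0$.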

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
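
For reference, the local smoothing underlying DRS is of the standard randomized-smoothing form (recalled here in a hedged way; the exact distribution and constants used in the paper are not restated):

$$ f_{\gamma}(x) \;=\; \mathbb{E}_{u \sim \mu}\bigl[f(x + \gamma u)\bigr], $$

where $\mu$ is, e.g., the uniform distribution on the unit ball; $f_\gamma$ is convex, within $O(\gamma)$ of $f$ when $f$ is Lipschitz, and smooth, which is what allows trading a small approximation error for faster distributed first-order convergence.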
