
Determining the matrix multiplication exponent $\omega$ is one of the greatest open problems in theoretical computer science. We show that it is impossible to prove $\omega = 2$ by starting with structure tensors of modules of fixed degree and using arbitrary restrictions. This implies that the same is impossible by starting with $1_A$-generic non-diagonal tensors of fixed size with minimal border rank. This generalizes the work of Bl\"aser and Lysikov [3]. Our methods come from both commutative algebra and complexity theory.
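
For readers unfamiliar with the objects involved, the central example of a structure tensor is the matrix multiplication tensor itself. The following NumPy sketch (our illustration, not from the paper) builds the $\langle 2,2,2\rangle$ tensor and verifies Strassen's classical rank-7 decomposition, which is exactly what yields the bound $\omega \le \log_2 7$:

```python
import itertools
import numpy as np

# Matrix multiplication tensor <2,2,2>: T[(i,k),(k,j),(i,j)] = 1,
# with index pairs flattened row-major, e.g. A_{ik} -> index 2*i + k.
T = np.zeros((4, 4, 4))
for i, j, k in itertools.product(range(2), repeat=3):
    T[2 * i + k, 2 * k + j, 2 * i + j] = 1

# Strassen's rank-7 decomposition: T = sum_r U[r] (x) V[r] (x) W[r].
# Row r encodes the r-th product, e.g. M_1 = (A11 + A22)(B11 + B22).
U = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [-1, 0, 1, 0], [0, 1, 0, -1]])
V = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, -1], [-1, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
W = np.array([[1, 0, 0, 1], [0, 0, 1, -1], [0, 1, 0, 1], [1, 0, 1, 0],
              [-1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]])

S = np.einsum('ra,rb,rc->abc', U, V, W)
assert np.array_equal(S, T)  # rank(<2,2,2>) <= 7, hence omega <= log2(7)
```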

Related content

Predicting the evolution of a representative sample of a material with microstructure is a fundamental problem in homogenization. In this work we propose a graph convolutional neural network that utilizes the discretized representation of the initial microstructure directly, without segmentation or clustering. Compared to feature-based and pixel-based convolutional neural network models, the proposed method has a number of advantages: (a) it is deep in that it does not require featurization but can benefit from it, (b) it has a simple implementation with standard convolutional filters and layers, (c) it works natively on unstructured and structured grid data without interpolation (unlike pixel-based convolutional neural networks), and (d) it preserves rotational invariance like other graph-based convolutional neural networks. We demonstrate the performance of the proposed network and compare it to traditional pixel-based convolutional neural network models and feature-based graph convolutional neural networks on multiple large datasets.
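
As a concrete illustration of point (b), a single layer of this kind of graph convolution takes only a few lines. Below is a minimal NumPy sketch of one mean-aggregation graph-convolution layer acting on per-node (e.g. per-element) features; the names and the specific aggregation rule are our illustrative choices, not the paper's implementation:

```python
import numpy as np

def graph_conv_layer(X, A, W):
    """One mean-aggregation graph convolution layer.

    X: (n_nodes, n_features) node features (e.g. per-element microstructure data)
    A: (n_nodes, n_nodes) binary adjacency of the discretization graph
    W: (n_features, n_out) trainable weights
    """
    A_hat = A + np.eye(A.shape[0])                     # include each node itself
    mean_agg = (A_hat @ X) / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(mean_agg @ W, 0.0)               # ReLU nonlinearity
```

Because the aggregation uses only graph connectivity, the same layer applies unchanged to structured grids and unstructured meshes, which is the property highlighted in (c).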

We study the modular Hamiltonian associated with a Gaussian state on the Weyl algebra. We obtain necessary/sufficient criteria for the local equivalence of Gaussian states, independently of the classical results by Araki and Yamagami, Van Daele, and Holevo. We also present a criterion for a Bogoliubov automorphism to be weakly inner in the GNS representation. The main application of our analysis is the description of the vacuum modular Hamiltonian associated with a time-zero interval in the scalar, massive, free QFT in two spacetime dimensions, thus complementing recent results in higher space dimensions. In particular, we obtain a formula for the local entropy of a one-dimensional Klein-Gordon wave packet and for Araki's vacuum relative entropy of a coherent state on a double cone von Neumann algebra. We also derive the type $\mathrm{III}_1$ factor property. Along the way, we encounter certain positive selfadjoint extensions of the Laplacian with outer boundary conditions that appear not to have been considered before.

We identify the algebraic structure of the material histories generated by concurrent processes. Specifically, we extend existing categorical theories of resource convertibility to capture concurrent interaction. Our formalism admits an intuitive graphical presentation via string diagrams for proarrow equipments. We also consider certain induced categories of resource transducers, which are of independent interest due to their unusual structure.

In the current work we are concerned with sequences of graphs having a grid geometry, with a uniform local structure in a bounded domain $\Omega\subset {\mathbb R}^d$, $d\ge 1$. When $\Omega=[0,1]$, such graphs include the standard Toeplitz graphs and, for $\Omega=[0,1]^d$, the considered class includes $d$-level Toeplitz graphs. In the general case, the underlying sequence of adjacency matrices has a canonical eigenvalue distribution, in the Weyl sense, and it has been shown in the theoretical part of this work that we can associate to it a symbol $\boldsymbol{\mathfrak{f}}$. The knowledge of the symbol and of its basic analytical features provides key information on the eigenvalue structure in terms of localization, spectral gap, clustering, and global distribution. In the present paper, many different applications are discussed and various numerical examples are presented in order to underline the practical use of the developed theory. Tests and applications are mainly obtained from the approximation of differential operators via numerical schemes such as Finite Differences (FDs), Finite Elements (FEs), and Isogeometric Analysis (IgA). Moreover, we show that more applications can be taken into account, since the results presented here can be applied as well to study the spectral properties of adjacency matrices and Laplacian operators of general large graphs and networks, whenever the involved matrices enjoy a uniform local structure.
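
A minimal numerical instance of the symbol machinery, in the simplest FD case (our toy check, not one of the paper's tests): the 3-point discretization of $-u''$ on $[0,1]$ yields the tridiagonal Toeplitz matrix with stencil $[-1, 2, -1]$ and symbol $\boldsymbol{\mathfrak{f}}(\theta) = 2 - 2\cos\theta$, and here the eigenvalues are exactly uniform samples of the symbol:

```python
import numpy as np

n = 200
# Tridiagonal Toeplitz matrix from the 3-point finite-difference stencil [-1, 2, -1]
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

eigs = np.sort(np.linalg.eigvalsh(T))
theta = np.pi * np.arange(1, n + 1) / (n + 1)     # uniform grid in (0, pi)
symbol = np.sort(2 - 2 * np.cos(theta))           # samples of f(theta) = 2 - 2cos(theta)

print(np.max(np.abs(eigs - symbol)))              # ~1e-13: eigenvalues = symbol samples
```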

The tensor power method generalizes the matrix power method to higher-order arrays, or tensors. As in the matrix case, the fixed points of the tensor power method are the eigenvectors of the tensor. While every real symmetric matrix has an eigendecomposition, the vectors generating a symmetric decomposition of a real symmetric tensor are not always eigenvectors of the tensor. In this paper we show that whenever an eigenvector is a generator of the symmetric decomposition of a symmetric tensor, then (if the order of the tensor is sufficiently high) this eigenvector is robust, i.e., it is an attracting fixed point of the tensor power method. We exhibit new classes of symmetric tensors whose symmetric decomposition consists of eigenvectors. Generalizing orthogonally decomposable tensors, we consider equiangular tight frame decomposable and equiangular set decomposable tensors. Our main result implies that such tensors can be decomposed using the tensor power method.
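
To make the iteration concrete, here is a small NumPy sketch of the order-3 tensor power method run on an orthogonally decomposable tensor, whose generators are robust eigenvectors; the test tensor and names are our own illustration:

```python
import numpy as np

def tensor_power_method(T, x0, iters=200):
    """Power iteration for a symmetric order-3 tensor: x <- T(x,x) / ||T(x,x)||."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)    # contract T with x twice
        x = y / np.linalg.norm(y)
    return x

# Orthogonally decomposable test tensor: T = sum_r v_r (x) v_r (x) v_r
# with orthonormal v_r, so each v_r is a robust eigenvector.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
vecs = Q.T                                       # rows are orthonormal generators
T = np.einsum('ri,rj,rk->ijk', vecs, vecs, vecs)

x = tensor_power_method(T, rng.standard_normal(5))
print(np.max(np.abs(vecs @ x)))                  # ~1.0: converged to one of the v_r
```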

For points $(a,b)$ on an algebraic curve over a field $K$ with height $\mathfrak{h}$, the asymptotic relation between $\mathfrak{h}(a)$ and $\mathfrak{h}(b)$ has been extensively studied in diophantine geometry. When $K=\overline{k(t)}$ is the field of algebraic functions in $t$ over a field $k$ of characteristic zero, Eremenko in 1998 proved the following quasi-equivalence for an absolute logarithmic height $\mathfrak{h}$ in $K$: Given $P\in K[X,Y]$ irreducible over $K$ and $\epsilon>0$, there is a constant $C$ only depending on $P$ and $\epsilon$ such that for each $(a,b)\in K^2$ with $P(a,b)=0$, $$ (1-\epsilon) \deg(P,Y) \mathfrak{h}(b)-C \leq \deg(P,X) \mathfrak{h}(a) \leq (1+\epsilon) \deg(P,Y) \mathfrak{h}(b)+C. $$ In this article, we shall give an explicit bound for the constant $C$ in terms of the total degree of $P$, the height of $P$ and $\epsilon$. This result is expected to have applications in some other areas such as symbolic computation of differential and difference equations.
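
As a quick sanity check of the quasi-equivalence (our example, using the degree height on $K=\overline{k(t)}$), consider the cuspidal cubic $P(X,Y)=Y^2-X^3$, with $\deg(P,X)=3$ and $\deg(P,Y)=2$. Every solution has the form $(a,b)=(s^2,s^3)$ with $s\in K$, and $\mathfrak{h}(s^2)=2\,\mathfrak{h}(s)$, $\mathfrak{h}(s^3)=3\,\mathfrak{h}(s)$, so
$$ \deg(P,X)\,\mathfrak{h}(a) = 6\,\mathfrak{h}(s) = \deg(P,Y)\,\mathfrak{h}(b), $$
and both inequalities hold here even with $\epsilon=0$ and $C=0$.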

In this paper we propose a new algorithm for solving large-scale algebraic Riccati equations with low-rank structure. The algorithm is based on an elegant closed form of the stabilizing solution that we derive, which involves an intrinsic Toeplitz structure; the fast Fourier transform is used to accelerate the multiplication of a Toeplitz matrix by vectors. The algorithm works without unnecessary assumptions, shift selection strategies, or matrix computations of cubic order in the problem size. Numerical examples are given to illustrate its features. We also show that it is theoretically equivalent to several algorithms existing in the literature, in the sense that they all produce the same sequence under the same parameter setting.
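
The FFT acceleration mentioned here rests on the standard circulant embedding of a Toeplitz matrix. A minimal sketch of the resulting $O(n\log n)$ matrix-vector product (ours, independent of the paper's algorithm):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x.

    Embeds the n x n Toeplitz matrix in a 2n x 2n circulant, whose action
    is diagonalized by the FFT, giving O(n log n) instead of O(n^2).
    """
    n = len(x)
    col = np.concatenate([c, [0.0], r[-1:0:-1]])      # circulant's first column
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against the dense product
rng = np.random.default_rng(0)
n = 64
c, r, x = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
r[0] = c[0]                                           # diagonal entry must agree
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```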

This paper deals with the modular irregularity strength of a graph on $n$ vertices, a new graph invariant obtained from the irregularity strength by changing the condition on the vertex-weight set of the well-known irregular labeling from $n$ distinct positive integers to $\mathbb{Z}_n$, the group of integers modulo $n$. Investigating the triangular book graph $B_m^{(3)}$, we first find its irregularity strength $s(B_m^{(3)})$, which is a lower bound for the modular irregularity strength, and then construct a modular irregular $s(B_m^{(3)})$-labeling. The result shows that triangular book graphs admit a modular irregular labeling, and that their modular irregularity strength and irregularity strength coincide, except for one small case and the case where the invariant is infinite.
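
For readers new to the invariant: an edge labeling with labels in $\{1,\dots,s\}$ is modular irregular if the induced vertex weights (sums of incident edge labels) are pairwise distinct modulo $n$, and the modular irregularity strength is the smallest such $s$. A brute-force sketch (our illustration, run on the triangle $C_3 = B_1^{(3)}$ rather than a larger book graph):

```python
import itertools

def modular_irregularity_strength(edges, n, s_max=10):
    """Smallest s admitting a modular irregular labeling with labels in {1..s}."""
    for s in range(1, s_max + 1):
        for labels in itertools.product(range(1, s + 1), repeat=len(edges)):
            weights = [0] * n
            for (u, v), lab in zip(edges, labels):
                weights[u] += lab
                weights[v] += lab
            if len({w % n for w in weights}) == n:   # all weights distinct mod n
                return s
    return None

# Triangle on 3 vertices: the labeling (1, 2, 3) works, and no smaller s does.
print(modular_irregularity_strength([(0, 1), (1, 2), (0, 2)], n=3))  # -> 3
```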

We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results when extrapolating with neural networks: while feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), do not extrapolate well in certain simple tasks, Graph Neural Networks (GNNs), structured networks with MLP modules, have shown some success in more complex tasks. Working towards a theoretical explanation, we identify conditions under which MLPs and GNNs extrapolate well. First, we quantify the observation that ReLU MLPs quickly converge to linear functions along any direction from the origin, which implies that ReLU MLPs do not extrapolate most nonlinear functions. However, they can provably learn a linear target function when the training distribution is sufficiently diverse. Second, in connection with analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features. Our theoretical analysis builds on a connection of over-parameterized networks to the neural tangent kernel. Empirically, our theory holds across different training settings.
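
The first observation, that a ReLU MLP becomes linear along any ray from the origin, is easy to see empirically even without training: once $t$ exceeds every unit's activation threshold, $f(t\mathbf{v})$ is exactly affine in $t$. A small NumPy sketch of this check (illustrative, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((64, 4)), rng.standard_normal(64)
w2, b2 = rng.standard_normal(64), rng.standard_normal()

def mlp(x):
    """Two-layer ReLU MLP: R^4 -> R."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

v = rng.standard_normal(4)
v /= np.linalg.norm(v)

# The directional slope f((t+1)v) - f(tv) stabilizes as t grows: each ReLU
# unit's sign freezes once t passes its threshold, so f is eventually
# affine along the ray.
for t in [1.0, 10.0, 100.0, 1000.0]:
    print(t, mlp((t + 1) * v) - mlp(t * v))
```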

Graph Neural Networks (GNN) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or equivariant (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or equivariant linear operator. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Finally, unlike many previous settings that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
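
The invariant/equivariant dichotomy is easy to test numerically. Below is a NumPy sketch of a toy instance of the single-hidden-layer architecture described (an equivariant linear operator, a pointwise nonlinearity, then an invariant linear operator), together with a permutation check; all names and the particular equivariant channels are our assumptions:

```python
import numpy as np

def equivariant_layer(A, X, W_self, W_nbr, W_glob):
    """Equivariant linear op (self / neighbor-sum / global-mean channels) + ReLU."""
    n = X.shape[0]
    Z = X @ W_self + A @ X @ W_nbr + np.tile(X.mean(axis=0), (n, 1)) @ W_glob
    return np.maximum(Z, 0.0)

def invariant_net(A, X, Ws, w_out):
    """Summing over nodes after the equivariant layer gives an invariant output."""
    return equivariant_layer(A, X, *Ws).sum(axis=0) @ w_out

rng = np.random.default_rng(0)
n, d, h = 6, 3, 8
A = np.triu(rng.integers(0, 2, (n, n)), 1)
A = A + A.T                                    # symmetric adjacency matrix
X = rng.standard_normal((n, d))
Ws = rng.standard_normal((3, d, h))
w_out = rng.standard_normal(h)

P = np.eye(n)[rng.permutation(n)]              # random permutation matrix
out = invariant_net(A, X, Ws, w_out)
out_perm = invariant_net(P @ A @ P.T, P @ X, Ws, w_out)
print(np.allclose(out, out_perm))              # True: permuting nodes leaves output fixed
```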
