
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated (with error measured in $L^p$) by ReLU neural networks with an increasing number of coefficients, subject to bounds on the magnitude of the coefficients and the number of hidden layers. We prove embedding theorems between these spaces for different values of $p$. Furthermore, we derive sharp embeddings of these approximation spaces into H\"older spaces. We find that, analogous to the case of classical function spaces (such as Sobolev or Besov spaces), it is possible to trade "smoothness" (i.e., approximation rate) for increased integrability. Combined with our earlier results in [arXiv:2104.02746], our embedding theorems imply a somewhat surprising fact related to "learning" functions from a given neural network space based on point samples: if accuracy is measured with respect to the uniform norm, then an optimal "learning" algorithm for reconstructing functions that are well approximable by ReLU neural networks is simply given by piecewise constant interpolation on a tensor product grid.
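To make the closing claim concrete, here is a minimal sketch (our illustration, not code from the paper) of the reconstruction scheme it describes: sample the target function on a uniform tensor product grid and return the piecewise constant interpolant. The helper `pc_interpolant` and the toy target `f` are illustrative choices.

```python
import numpy as np

def pc_interpolant(f, m, d):
    """Sample f: [0,1]^d -> R at the centers of a uniform tensor grid
    with m cells per dimension, and return the piecewise constant
    interpolant as a callable."""
    centers = (np.arange(m) + 0.5) / m                  # cell centers per axis
    grid = np.stack(np.meshgrid(*([centers] * d), indexing="ij"), axis=-1)
    values = np.apply_along_axis(f, -1, grid)           # point samples of f

    def u(x):
        # Map x to the index of the grid cell containing it.
        idx = np.minimum((np.asarray(x) * m).astype(int), m - 1)
        return values[tuple(idx)]

    return u

# Toy usage: reconstruct a smooth bivariate function from 16x16 samples.
f = lambda x: np.sin(np.pi * x[0]) * x[1]
u = pc_interpolant(f, m=16, d=2)
print(abs(f([0.3, 0.7]) - u([0.3, 0.7])))  # for Lipschitz f, the uniform error decays like 1/m
```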

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the areas of expertise represented on the Neural Networks editorial board include psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications. Official website:

The purpose of this article is to develop machinery to study the capacity of deep neural networks (DNNs) to approximate high-dimensional functions. In particular, we show that DNNs have the expressive power to overcome the curse of dimensionality in the approximation of a large class of functions. More precisely, we prove that these functions can be approximated by DNNs on compact sets such that the number of parameters necessary to represent the approximating DNNs grows at most polynomially in the reciprocal $1/\varepsilon$ of the approximation accuracy $\varepsilon>0$ and in the input dimension $d\in \mathbb{N} =\{1,2,3,\dots\}$. To this end, we introduce certain approximation spaces, consisting of sequences of functions that can be efficiently approximated by DNNs. We then establish closure properties which we combine with known and new bounds on the number of parameters necessary to approximate locally Lipschitz continuous functions, maximum functions, and product functions by DNNs. The main result of this article demonstrates that DNNs have sufficient expressiveness to approximate certain sequences of functions which can be constructed by means of a finite number of compositions using locally Lipschitz continuous functions, maxima, and products without the curse of dimensionality.
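As a concrete instance of one of the building blocks mentioned above, the maximum of two numbers admits an exact ReLU representation, max(a, b) = relu(a − b) + b; composing it in a binary tournament realizes the d-ary maximum with O(d) parameters and O(log d) hidden layers. A minimal sketch (our illustration, not code from the paper):

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def relu_max(a, b):
    # Exact ReLU identity: max(a, b) = relu(a - b) + b.
    return relu(a - b) + b

def relu_max_n(xs):
    # Binary tournament: O(d) pairwise maxima arranged in O(log d) rounds,
    # each round corresponding to one block of hidden layers.
    xs = list(xs)
    while len(xs) > 1:
        xs = [relu_max(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)] \
             + ([xs[-1]] if len(xs) % 2 else [])
    return xs[0]

print(relu_max_n([0.3, -1.2, 2.5, 0.9]))    # 2.5
```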

The virtual element method (VEM) is a Galerkin approximation method that extends the finite element method to polytopal meshes. In this paper, we present two different conforming virtual element formulations for the numerical approximation of the Stokes problem that work on polygonal meshes. The velocity vector field is approximated in the virtual element spaces of the two formulations, while the pressure variable is approximated by discontinuous polynomials. Both formulations are inf-sup stable and convergent, with optimal convergence rates in the $L^2$ and energy norms. We assess the effectiveness of these numerical approximations by investigating their behavior on a representative benchmark problem. The observed convergence rates are in accordance with the theoretical expectations, and a weak form of the zero-divergence constraint is satisfied at the machine-precision level.
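The standard way to check such a statement numerically is to estimate the empirical order of convergence from errors on successively refined meshes, i.e., the slope of log(error) against log(h). A minimal sketch with illustrative, made-up numbers (not the paper's benchmark data):

```python
import numpy as np

# Hypothetical errors from a sequence of refined meshes (illustrative only).
h   = np.array([1/4, 1/8, 1/16, 1/32])              # mesh sizes
err = np.array([2.1e-2, 5.4e-3, 1.35e-3, 3.4e-4])   # e.g. L2 velocity errors

# Empirical order of convergence between consecutive refinements:
rates = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(rates)   # values near 2 would match an O(h^2) theoretical rate
```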

Inference with network data necessitates mapping its nodes into a vector space where the relationships are preserved. However, with multi-layered networks, where multiple types of relationships exist for the same set of nodes, it is crucial to exploit the information shared between layers, in addition to the distinct aspects of each layer. In this paper, we propose a novel approach that first obtains node embeddings in all layers jointly via DeepWalk on a \textit{supra} graph, which allows interactions between layers, and then fine-tunes the embeddings to encourage cohesive structure in the latent space. With empirical studies in node classification, link prediction, and multi-layered community detection, we show that the proposed approach outperforms existing single- and multi-layered network embedding algorithms on several benchmarks. In addition to effectively scaling to a large number of layers (tested up to $37$), our approach consistently produces highly modular community structure, even when compared to methods that directly optimize for the modularity function.
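One plausible construction of such a supra graph (our sketch; the coupling scheme and weights are assumptions, not necessarily the paper's exact choices) replicates each node once per layer, copies the intra-layer edges, and links the replicas of the same node across layers so that random walks can cross layers:

```python
import networkx as nx

def build_supra_graph(layers, coupling_weight=1.0):
    """layers: list of nx.Graph over a shared node set.
    Returns a supra graph whose nodes are (node, layer_index) pairs,
    with intra-layer edges copied from each layer and inter-layer
    edges linking the copies of the same node across layers."""
    supra = nx.Graph()
    for i, g in enumerate(layers):
        for u, v in g.edges():
            supra.add_edge((u, i), (v, i), weight=1.0)     # intra-layer
    nodes = set().union(*(g.nodes() for g in layers))
    for u in nodes:
        for i in range(len(layers)):
            for j in range(i + 1, len(layers)):
                supra.add_edge((u, i), (u, j),
                               weight=coupling_weight)     # inter-layer coupling
    return supra

# DeepWalk-style random walks on `supra` can now cross between layers
# through the coupling edges before the embeddings are fine-tuned.
```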

Network embedding has attracted increasing attention over the past few years. As an effective approach to graph mining problems, network embedding aims to learn a low-dimensional feature vector representation for each node of a given network. The vast majority of existing network embedding algorithms, however, are designed only for unsigned networks, whereas signed networks, which contain both positive and negative links, have quite distinct properties from their unsigned counterparts. In this paper, we propose a deep network embedding model to learn low-dimensional node vector representations that preserve structural balance in signed networks. The model employs a semi-supervised stacked auto-encoder to reconstruct the adjacency connections of a given signed network. Because the adjacency connections are overwhelmingly positive in real-world signed networks, we impose a larger penalty to make the auto-encoder focus more on reconstructing the scarce negative links than the abundant positive links. In addition, to preserve the structural balance property of signed networks, we design pairwise constraints that place positively connected nodes much closer than negatively connected nodes in the embedding space. Based on the network representations learned by the proposed model, we conduct link sign prediction and community detection in signed networks. Extensive experimental results on real-world datasets demonstrate the superiority of the proposed model over state-of-the-art network embedding algorithms for graph representation learning in signed networks.
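The following PyTorch sketch illustrates the two loss ingredients described above: a reconstruction loss that up-weights the scarce negative links, and a pairwise term that pushes positively connected nodes closer than negatively connected ones. The weighting scheme and the hinge form are our illustrative choices, not the paper's exact formulation.

```python
import torch

def signed_losses(adj, adj_hat, z, beta=10.0, margin=1.0):
    """adj: signed adjacency (float tensor with entries in {-1, 0, +1});
    adj_hat: auto-encoder reconstruction of adj; z: node embeddings.
    beta up-weights the scarce negative links; the hinge term asks the
    average positive-pair distance to undercut the average negative-pair
    distance by `margin`."""
    w = torch.ones_like(adj)
    w[adj < 0] = beta                                  # penalize negative links more
    recon = (w * (adj - adj_hat) ** 2).sum()

    d = torch.cdist(z, z)                              # pairwise embedding distances
    pos, neg = d[adj > 0], d[adj < 0]
    balance = torch.relu(pos.mean() - neg.mean() + margin)
    return recon + balance
```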

UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.
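Using UMAP through the reference umap-learn package is a one-liner; the data and parameter values below are placeholders:

```python
import numpy as np
import umap   # pip install umap-learn

X = np.random.rand(500, 64)                  # placeholder data
reducer = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1)
X2 = reducer.fit_transform(X)                # (500, 2) embedding
print(X2.shape)
```

Because `n_components` is unrestricted, the same call serves equally for 2-D visualization and for higher-dimensional general-purpose reduction.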

The ever-growing interest in the acquisition and development of unmanned aerial vehicles (UAVs), commonly known as drones, has in the past few years given rise to a very promising and effective technology. Because of their small size and fast deployment, UAVs have proven effective at collecting data over unreachable areas and restricted coverage zones. Moreover, their flexible capacity enables them to collect information at a very high level of detail, yielding high-resolution images. UAVs originally served mainly in military scenarios; in the last decade, however, they have been broadly adopted in civilian applications as well. The task of aerial surveillance and situational awareness is usually accomplished by integrating intelligence, surveillance, observation, and navigation systems, all interacting within the same operational framework. To build this capability, UAVs are well-suited tools that can be equipped with a wide variety of sensors, such as cameras or radars. Deep learning has been widely recognized as a prominent approach in many computer vision applications. In particular, one-stage and two-stage object detectors are regarded as the two most important groups of convolutional-neural-network-based object detection methods. One-stage object detectors usually outperform two-stage detectors in speed but normally trail them in detection accuracy. In this study, the focal-loss-based RetinaNet, a one-stage object detector, is utilized for UAV-based object detection, matching the speed of regular one-stage detectors while surpassing two-stage detectors in accuracy. State-of-the-art performance is demonstrated on the UAV-captured image dataset, the Stanford Drone Dataset (SDD).
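For reference, the focal loss at the heart of RetinaNet is $\mathrm{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$, which down-weights well-classified examples so that training focuses on the hard ones. A minimal sketch of the binary form:

```python
import torch

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    logits: raw scores; targets: float tensor in {0, 1}, same shape.
    alpha=0.25, gamma=2.0 are the values commonly used with RetinaNet."""
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)        # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    ce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")             # ce = -log(p_t)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```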

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
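The mechanism behind the anytime property is weak duality: for any choice of Lagrange multipliers, maximizing the decoupled Lagrangian yields a valid upper bound on the worst-case violation. The toy sketch below (our illustration on a one-hidden-layer ReLU network, not the paper's algorithm) bounds max_x f(x) over a box and improves the bound by a crude local search over the multipliers; every iterate already gives a valid bound.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy one-hidden-layer ReLU net f(x) = w2 @ relu(W1 @ x + b1), x in [l, u].
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
w2 = rng.normal(size=8)
l, u = -np.ones(4), np.ones(4)

# Interval bounds on the pre-activations z = W1 @ x + b1.
zl = np.minimum(W1, 0) @ u + np.maximum(W1, 0) @ l + b1
zu = np.maximum(W1, 0) @ u + np.minimum(W1, 0) @ l + b1

def dual_bound(lam):
    """Relax z = W1 @ x + b1 with multipliers lam; maximizing the decoupled
    terms over the boxes x in [l, u], z in [zl, zu] upper-bounds max f(x)
    for ANY lam (weak duality)."""
    c = W1.T @ lam
    t1 = np.maximum(c * l, c * u).sum() + lam @ b1      # max of linear term over x
    # max over z_j in [zl_j, zu_j] of w2_j*relu(z_j) - lam_j*z_j:
    # piecewise linear, so check both endpoints and the kink at 0.
    cand = [w2 * np.maximum(zl, 0) - lam * zl,
            w2 * np.maximum(zu, 0) - lam * zu]
    kink = np.where((zl < 0) & (zu > 0), 0.0, -np.inf)
    t2 = np.maximum.reduce(cand + [kink]).sum()
    return t1 + t2

# Anytime behavior: every iterate of this crude search is a valid bound.
lam, best = np.zeros(8), np.inf
for _ in range(200):
    trial = lam + 0.1 * rng.normal(size=8)
    if dual_bound(trial) < dual_bound(lam):
        lam = trial
    best = min(best, dual_bound(lam))
print("upper bound on max f(x):", best)
```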

Network embedding has attracted considerable research attention recently. However, existing methods are incapable of handling billion-scale networks, because they are computationally expensive and, at the same time, difficult to accelerate with distributed computing schemes. To address these problems, we propose RandNE, a novel and simple billion-scale network embedding method. Specifically, we propose a Gaussian random projection approach to map the network into a low-dimensional embedding space while preserving the high-order proximities between nodes. To reduce the time complexity, we design an iterative projection procedure that avoids the explicit calculation of the high-order proximities. Theoretical analysis shows that our method is extremely efficient and friendly to distributed computing schemes, incurring no communication cost in the calculation. We demonstrate the efficacy of RandNE over state-of-the-art methods in network reconstruction and link prediction tasks on multiple datasets of different scales, ranging from thousands to billions of nodes and edges.
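The iterative projection idea can be sketched in a few lines: draw a Gaussian random matrix once, then repeatedly multiply by the sparse adjacency so that the q-th term captures q-step proximities, without ever forming a dense high-order proximity matrix. The decay weights below are illustrative, not tuned values from the paper.

```python
import numpy as np
import scipy.sparse as sp

def randne_sketch(A, dim=128, order=3, weights=(1.0, 1.0, 1e-1, 1e-2), seed=0):
    """Iterative Gaussian random projection in the spirit of RandNE.
    A: (sparse) n x n adjacency matrix. Each step costs one sparse matmul,
    so A^q is never formed explicitly."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=1.0 / np.sqrt(dim), size=(n, dim))   # U_0 = random R
    emb = weights[0] * U
    for q in range(1, order + 1):
        U = A @ U                        # U_q = A @ U_{q-1}: q-step proximity term
        emb = emb + weights[q] * U
    return emb

# Usage sketch on a random sparse graph:
# A = sp.random(10_000, 10_000, density=1e-4, format="csr")
# Z = randne_sketch(A + A.T, dim=128)
```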

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding (sketched below). Due to the flexible mapping options and the arbitrary request graph topologies, a novel linear programming formulation is required: only with this formulation, which accounts for the structure of the request graphs, can convex combinations of valid mappings be computed. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and we show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show (i) that computing extraction orders of minimal width is NP-hard and (ii) that computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.
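At a high level, randomized rounding on a decomposable LP solution works as follows (a bare-bones illustration; the paper's algorithms add the approximation guarantees and the extraction-order machinery): decompose the fractional solution into a convex combination of valid mappings, then sample mappings according to their weights until one respects the substrate resource constraints.

```python
import random

def randomized_rounding(decomposition, fits, max_tries=1000):
    """decomposition: list of (probability, mapping) pairs forming a convex
    combination of valid mappings extracted from the LP solution.
    fits(mapping): checks the substrate resource constraints (hypothetical
    callback supplied by the caller)."""
    probs, mappings = zip(*decomposition)
    for _ in range(max_tries):
        m = random.choices(mappings, weights=probs, k=1)[0]   # sample per LP weight
        if fits(m):
            return m
    return None   # report failure after max_tries attempts
```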

This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
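The basic single-sketch range-finder idea underlying such methods fits in a few lines (a generic sketch, not one of the paper's specific algorithms, which add structure preservation and fixed-rank truncation): form Y = AΩ for a random test matrix Ω, orthonormalize, and project.

```python
import numpy as np

def sketch_low_rank(A, rank, oversample=10, seed=0):
    """Randomized low-rank approximation from a random linear image of A.
    Returns factors Q, B with A ~= Q @ B, where Q has rank + oversample
    orthonormal columns spanning the sketch Y = A @ Omega."""
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(A.shape[1], rank + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for the sketch
    B = Q.T @ A                          # project A onto that basis
    return Q, B

# Usage: a 200 x 300 matrix of rank at most 50 is recovered almost exactly.
A = np.random.default_rng(1).normal(size=(200, 50)) @ \
    np.random.default_rng(2).normal(size=(50, 300))
Q, B = sketch_low_rank(A, rank=50)
print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))   # near machine precision
```

The oversampling parameter plays the role of the a priori error control mentioned above: enlarging it tightens the expected approximation error at modest extra cost.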
