
Topologically interlocked materials and structures, which are assemblies of unbonded interlocking building blocks, are promising concepts for versatile structural applications. They have been shown to exhibit exceptional mechanical properties, including outstanding combinations of stiffness, strength, and toughness, beyond those achievable with common engineering materials. Recent work has established a theoretical upper limit for the strength and toughness of beam-like topologically interlocked structures. However, this limit is attainable only with unrealistically high friction coefficients, so whether it can be reached in real structures has remained unknown. Here, we demonstrate that a hierarchical approach to topological interlocking, inspired by biological systems, overcomes these limitations and provides a path toward optimized mechanical performance. We consider beam-like topologically interlocked structures whose blocks present a sinusoidal surface morphology with controllable amplitude and wavelength, and we examine their properties using numerical simulations. The results show that the surface morphology increases the effective frictional strength of the interfaces and, if well designed, enables the structures to reach the theoretical limit of the structural carrying capacity with realistic friction coefficients. Furthermore, we observe that the contribution of the surface morphology to the effective friction coefficient of the interface is well described by a criterion combining the surface curvature and surface gradient. Our study demonstrates that architecting the surface morphology of beam-like topologically interlocked structures can significantly enhance their structural performance.
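
As a rough illustration of how such a geometry-based criterion might be evaluated, the sketch below computes the gradient and curvature of a sinusoidal interface y(x) = A sin(2*pi*x/L) and folds the maximum gradient into a first-order estimate of the effective friction coefficient. The additive form mu_eff = mu_0 + max|y'| is a hypothetical stand-in, not the criterion from the study, and the function names are illustrative.

```python
import numpy as np

def sinusoid_geometry(amplitude, wavelength, n=1000):
    """Slope and curvature of a sinusoidal interface y(x) = A*sin(2*pi*x/L)."""
    x = np.linspace(0.0, wavelength, n)
    k = 2.0 * np.pi / wavelength
    slope = amplitude * k * np.cos(k * x)          # y'(x), the surface gradient
    curvature = -amplitude * k**2 * np.sin(k * x)  # y''(x), small-slope curvature
    return slope, curvature

def effective_friction(mu0, amplitude, wavelength):
    """Hypothetical first-order estimate: geometric interlocking adds the
    maximum surface gradient A*2*pi/L to the nominal friction coefficient."""
    slope, _ = sinusoid_geometry(amplitude, wavelength)
    return mu0 + np.max(np.abs(slope))

print(effective_friction(mu0=0.2, amplitude=0.05, wavelength=1.0))  # ~0.514
```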

Related Content

Extremely large-scale MIMO (XL-MIMO) is a promising technique for future 6G communications. The sharp increase in the number of antennas causes electromagnetic propagation to change from far-field to near-field. Due to the near-field effect, exhaustive near-field beam training over all angles and distances incurs very high overhead. Improved fast near-field beam training schemes based on time-delay structures can reduce this overhead, but they suffer from the high hardware cost and energy consumption of time-delay circuits. In this paper, we propose a near-field two-dimensional (2D) hierarchical beam training scheme that reduces the overhead without extra hardware circuits. Specifically, we first formulate the multi-resolution near-field codeword design problem, where codewords cover different angular and distance ranges. Next, inspired by phase retrieval problems in digital holographic imaging, we propose a Gerchberg-Saxton (GS)-based algorithm to acquire the theoretical codeword under an ideal fully digital architecture. Based on the theoretical codeword, an alternating optimization algorithm is then proposed to acquire the practical codeword under a hybrid digital-analog architecture. Finally, with the help of the multi-resolution codebooks, we propose a near-field 2D hierarchical beam training scheme that significantly reduces the training overhead, as verified by extensive simulation results.
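
The following sketch illustrates the Gerchberg-Saxton-style alternating projection behind the theoretical (fully digital) codeword design, under simplifying assumptions: a far-field uniform-linear-array dictionary stands in for the paper's near-field angle-distance dictionary, and the hybrid digital-analog factorization step is omitted. Names such as gs_codeword are illustrative.

```python
import numpy as np

def gs_codeword(A, target_mag, iters=200, seed=0):
    """Gerchberg-Saxton-style loop for a fully digital codeword (sketch).

    A          : (num_dirs, num_antennas) steering dictionary
    target_mag : (num_dirs,) desired beam-pattern magnitude over the grid
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A.shape[1]) + 1j * rng.standard_normal(A.shape[1])
    for _ in range(iters):
        pattern = A @ w
        # enforce the target magnitude, keep the current phase (GS projection)
        g = target_mag * np.exp(1j * np.angle(pattern))
        # least-squares back-projection onto the antenna domain
        w, *_ = np.linalg.lstsq(A, g, rcond=None)
    return w / np.linalg.norm(w)  # power normalization

# toy far-field stand-in: 32-antenna ULA, flat-top beam over a block of angles
N, D = 32, 128
grid = np.linspace(-1.0, 1.0, D)                      # sin(theta) samples
A = np.exp(1j * np.pi * np.outer(grid, np.arange(N)))
w = gs_codeword(A, target_mag=(np.abs(grid) < 0.25).astype(float))
```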

Given the proximity of many wireless users and their diversity in consuming local resources (e.g., data plans, computation, and energy resources), device-to-device (D2D) resource sharing is a promising approach toward realizing a sharing economy. This paper adopts an easy-to-implement greedy matching algorithm that runs in a distributed fashion with only sub-linear O(log n) parallel complexity (in the number of users n) for large-scale D2D sharing. Practical cases indicate that the greedy matching's average performance is far better than its worst-case approximation ratio of 50% relative to the optimum. However, no rigorous average-case analysis exists in the literature to back up these encouraging findings, and this paper is the first to present such an analysis for multiple representative classes of graphs. For 1D linear networks, we prove that our greedy algorithm achieves better than 86.5% of the optimum on average. For 2D grids, although dynamic programming cannot be applied directly, we still prove this average performance ratio to be above 76%. For the more challenging Erdos-Renyi random graphs, we equivalently reduce the problem to the asymptotic analysis of random trees and prove a ratio of up to 79%. Finally, we conduct experiments using real data to simulate realistic D2D networks, and show that our analytical performance measure approximates practical cases well.
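
For concreteness, here is a minimal centralized sketch of the greedy matching rule; the paper's distributed, O(log n)-parallel variant applies the same edge-selection logic without a global sort, and the toy edge weights below are illustrative.

```python
def greedy_matching(edges):
    """Greedy maximum-weight matching: scan edges by non-increasing weight
    and keep an edge whenever both endpoints are still unmatched.
    Guarantees at least 1/2 of the optimal total weight in the worst case."""
    matched, matching, total = set(), [], 0.0
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
            total += w
    return matching, total

# toy 1D linear network: users on a line, illustrative sharing weights
edges = [(i, i + 1, 1.0 / (1 + i % 3)) for i in range(9)]
print(greedy_matching(edges))
```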

Probabilistic logical rule learning has shown great strength in logical rule mining and knowledge graph completion. It learns logical rules to predict missing edges by reasoning on existing edges in the knowledge graph. However, previous efforts have largely been limited to modeling only chain-like Horn clauses such as $R_1(x,z)\land R_2(z,y)\Rightarrow H(x,y)$. This formulation overlooks additional contextual information from the neighboring sub-graphs of the entity variables $x$, $y$, and $z$. Intuitively, there is a large gap here, as local sub-graphs have been found to provide important information for knowledge graph completion. Inspired by these observations, we propose Logical Entity RePresentation (LERP) to encode contextual information of entities in the knowledge graph. A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph. It is an interpretable representation that allows for differentiable optimization. We can then incorporate LERP into probabilistic logical rule learning to learn more expressive rules. Empirical results demonstrate that with LERP, our model outperforms other rule learning methods in knowledge graph completion and is comparable or even superior to state-of-the-art black-box methods. Moreover, we find that our model can discover a more expressive family of logical rules. LERP can also be combined with embedding learning methods such as TransE to make them more interpretable.
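
As background, chain-like Horn clauses of the form above admit a standard differentiable realization as products of relation adjacency matrices (the approach of TensorLog-style rule learners). The sketch below shows that baseline, not LERP itself, which additionally attaches learned logical functions over each entity's neighboring sub-graph.

```python
import numpy as np

# toy knowledge graph with 4 entities; R1, R2 are relation adjacency matrices
R1 = np.array([[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]], dtype=float)
R2 = np.array([[0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [1, 0, 0, 0]], dtype=float)

# chain rule R1(x,z) AND R2(z,y) => H(x,y) realized as a matrix product:
# score[x, y] sums the rule body over all intermediate entities z
H_scores = R1 @ R2
print(H_scores)  # H_scores[x, y] > 0 exactly where the rule body is satisfied
```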

Due to complex interactions among various deep neural network (DNN) optimization techniques, modern DNNs can have weights and activations that are dense or sparse with diverse sparsity degrees. To offer a good trade-off between accuracy and hardware performance, an ideal DNN accelerator should be flexible enough to efficiently translate DNN sparsity into reductions in energy and/or latency without incurring significant complexity overhead. This paper introduces hierarchical structured sparsity (HSS), built on the key insight that diverse sparsity degrees can be represented systematically by composing them hierarchically from multiple simple sparsity patterns. As a result, HSS simplifies the underlying hardware, which only needs to support simple sparsity patterns; this significantly reduces the sparsity acceleration overhead and improves efficiency. Motivated by this opportunity, we propose a simultaneously efficient and flexible accelerator, named HighLight, to accelerate DNNs with diverse sparsity degrees (including dense). Owing to the flexibility of HSS, different HSS patterns can be introduced to DNNs to meet different applications' accuracy requirements. Compared to existing works, HighLight achieves up to 6.4x better geomean energy-delay product (EDP) across workloads with diverse sparsity degrees, and always sits on the EDP-accuracy Pareto frontier for representative DNNs.
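
A minimal sketch of the composition idea, assuming a two-level scheme in which an N:M pattern over block norms is composed with an N:M pattern over elements inside the surviving blocks; the specific pattern parameters and scoring rules are illustrative and need not match the patterns HighLight supports.

```python
import numpy as np

def nm_mask(x, n, m):
    """Elementwise N:M structured sparsity: keep the n largest-magnitude
    entries in every group of m consecutive entries."""
    x = x.reshape(-1, m)
    keep = np.argsort(-np.abs(x), axis=1)[:, :n]
    mask = np.zeros_like(x)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    return mask.reshape(-1)

def hierarchical_mask(w, block=4, n1=2, m1=4, n2=2, m2=4):
    """Two-level HSS-style mask (sketch): an N:M pattern over block norms,
    composed with an N:M pattern over elements inside surviving blocks."""
    norms = np.linalg.norm(w.reshape(-1, block), axis=1)
    block_mask = np.repeat(nm_mask(norms, n1, m1), block)
    return block_mask * nm_mask(w, n2, m2)

w = np.random.default_rng(0).standard_normal(32)
print(hierarchical_mask(w))  # overall density = (n1/m1) * (n2/m2) = 25%
```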

Graph Neural Networks (GNNs) draw their strength from explicitly modeling the topological information of structured data. However, existing GNNs have limited capability in capturing hierarchical graph representations, which play an important role in graph classification. In this paper, we propose a hierarchical graph capsule network (HGCN) that can jointly learn node embeddings and extract graph hierarchies. Specifically, disentangled graph capsules are established by identifying heterogeneous factors underlying each node, such that their instantiation parameters represent different properties of the same entity. To learn the hierarchical representation, HGCN characterizes the part-whole relationship between lower-level capsules (parts) and higher-level capsules (wholes) by explicitly considering the structure information among the parts. Experimental studies demonstrate the effectiveness of HGCN and the contribution of each component.
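
For readers unfamiliar with capsules, the sketch below shows generic routing-by-agreement between lower-level and higher-level capsules (in the style of Sabour et al.); HGCN further conditions this part-whole routing on the structure information among the parts, which is omitted here.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Capsule nonlinearity: shrink short vectors while preserving direction."""
    n2 = np.sum(v * v, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def route(u_hat, iters=3):
    """Routing-by-agreement (sketch): u_hat[i, j, :] is the prediction of
    higher capsule j made by lower capsule i; coupling grows with agreement."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over j
        s = np.einsum('ij,ijd->jd', c, u_hat)                 # weighted sum
        v = squash(s)                                         # higher capsules
        b += np.einsum('ijd,jd->ij', u_hat, v)                # agreement update
    return v

u_hat = np.random.default_rng(0).standard_normal((6, 3, 8))
print(route(u_hat).shape)  # (3, 8): three higher-level capsules
```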

Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs. They are presented here as generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters instead of banks of classical convolutional filters. Otherwise, GNNs operate as CNNs do: filters are composed with pointwise nonlinearities and stacked in layers. It is shown that GNN architectures exhibit equivariance to permutation and stability to graph deformations. These properties help explain the good performance of GNNs that is observed empirically. It is also shown that if graphs converge to a limit object, a graphon, GNNs converge to a corresponding limit object, a graphon neural network. This convergence justifies the transferability of GNNs across networks with different numbers of nodes.
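
The permutation equivariance claim can be checked numerically for a single layer: a polynomial graph filter y = sum_k h_k S^k x applied to a relabeled graph equals the relabeling of the filtered signal, and composing with pointwise nonlinearities preserves this property. A small self-contained check:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
S = rng.random((n, n)); S = (S + S.T) / 2           # symmetric graph shift operator
x = rng.standard_normal(n)
h = [0.5, -1.0, 0.25]                               # filter taps

def graph_filter(S, x, h):
    """Polynomial graph filter: y = sum_k h_k S^k x."""
    y, Sk = np.zeros_like(x), np.eye(len(x))
    for hk in h:
        y += hk * (Sk @ x)
        Sk = Sk @ S
    return y

P = np.eye(n)[rng.permutation(n)]                   # permutation matrix
lhs = graph_filter(P @ S @ P.T, P @ x, h)           # filter the relabeled graph
rhs = P @ graph_filter(S, x, h)                     # relabel the filtered signal
print(np.allclose(lhs, rhs))                        # True: permutation equivariance
```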

Graph Neural Networks (GNNs), which generalize deep neural networks to graph-structured data, have drawn considerable attention and achieved state-of-the-art performance in numerous graph-related tasks. However, existing GNN models mainly focus on designing graph convolution operations, while graph pooling (or downsampling) operations, which play an important role in learning hierarchical representations, are usually overlooked. In this paper, we propose a novel graph pooling operator, called Hierarchical Graph Pooling with Structure Learning (HGP-SL), which can be integrated into various graph neural network architectures. HGP-SL incorporates graph pooling and structure learning into a unified module to generate hierarchical representations of graphs. More specifically, the graph pooling operation adaptively selects a subset of nodes to form an induced subgraph for the subsequent layers. To preserve the integrity of the graph's topological information, we further introduce a structure learning mechanism that learns a refined graph structure for the pooled graph at each layer. By combining the HGP-SL operator with graph neural networks, we perform graph-level representation learning with a focus on the graph classification task. Experimental results on six widely used benchmarks demonstrate the effectiveness of our proposed model.
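
A minimal sketch of the pooling step, assuming a simple magnitude-based node score and a fixed keep ratio; HGP-SL's learned scoring and its structure-learning refinement of the pooled adjacency are only indicated by a comment.

```python
import numpy as np

def topk_pool(A, X, ratio=0.5):
    """Top-k graph pooling (sketch): score nodes, keep the top fraction,
    and take the induced subgraph as input to the next layer."""
    scores = np.abs(X).sum(axis=1)                 # stand-in scoring function
    k = max(1, int(ratio * len(scores)))
    idx = np.argsort(-scores)[:k]
    A_pool = A[np.ix_(idx, idx)]                   # induced subgraph
    X_pool = X[idx] * scores[idx, None]            # gate the kept features
    # structure-learning step (refining A_pool, e.g. via sparse attention
    # over node-feature similarity) is omitted in this sketch
    return A_pool, X_pool

rng = np.random.default_rng(0)
A = (rng.random((8, 8)) < 0.3).astype(float)
A_pool, X_pool = topk_pool(A, rng.standard_normal((8, 4)))
print(A_pool.shape, X_pool.shape)                  # (4, 4) (4, 4)
```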

Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs, a limitation that is especially problematic for graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DiffPool, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DiffPool yields an average improvement of 5-10% in accuracy on graph classification benchmarks compared to all existing pooling approaches, achieving a new state of the art on four out of five benchmark data sets.
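
The coarsening step itself is compact: with soft assignments S obtained by a row-wise softmax over assignment logits, DiffPool forms cluster embeddings S^T Z and cluster connectivity S^T A S. The sketch below uses random placeholders for the GNN outputs Z and the assignment logits, which in the actual module come from two GNNs.

```python
import numpy as np

def diffpool(A, Z, S_logits):
    """One DiffPool coarsening step (sketch).

    A        : (n, n) adjacency, Z: (n, d) node embeddings from a GNN,
    S_logits : (n, c) cluster-assignment logits from a second GNN.
    """
    S = np.exp(S_logits)
    S /= S.sum(axis=1, keepdims=True)   # row-wise softmax: soft assignments
    X_coarse = S.T @ Z                  # cluster embeddings      (c, d)
    A_coarse = S.T @ A @ S              # cluster connectivity    (c, c)
    return A_coarse, X_coarse

rng = np.random.default_rng(0)
n, d, c = 10, 16, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A2, X2 = diffpool(A, rng.standard_normal((n, d)), rng.standard_normal((n, c)))
print(A2.shape, X2.shape)               # (3, 3) (3, 16)
```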

Recently, deep learning has achieved very promising results in visual object tracking. However, deep neural networks in existing tracking methods require a lot of training data to learn their large numbers of parameters, and training data is scarce in visual object tracking because annotations of the target object are only available in the first frame of a test sequence. In this paper, we propose to learn hierarchical features for visual object tracking using tree-structured Recursive Neural Networks (RNNs), which have fewer parameters than other deep neural networks such as Convolutional Neural Networks (CNNs). First, we learn the RNN parameters to discriminate between the target object and the background in the first frame of a test sequence. A tree structure over local patches of an exemplar region is generated randomly using a bottom-up greedy search strategy. Given the learned RNN parameters, we create two dictionaries, over target regions and their corresponding local patches, from the hierarchical features at the top and leaf nodes of multiple random trees. In each subsequent frame, we perform sparse dictionary coding on all candidates and select the best candidate as the new target location. In addition, we update the two dictionaries online to handle appearance changes of the target object. Experimental results demonstrate that our feature learning algorithm can significantly improve tracking performance on benchmark datasets.
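
As an illustration of the sparse-dictionary-coding selection step, the sketch below codes each candidate over a template dictionary with a tiny orthogonal matching pursuit and picks the candidate with the smallest reconstruction error; the dictionary atoms here are random stand-ins for the learned hierarchical RNN features.

```python
import numpy as np

def omp(D, y, n_nonzero=3):
    """Tiny orthogonal matching pursuit for sparse coding over dictionary D."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef, np.linalg.norm(residual)

def pick_candidate(D, candidates):
    """Choose the candidate with the smallest sparse-reconstruction error."""
    errs = [omp(D, c)[2] for c in candidates]
    return int(np.argmin(errs))

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 10))                 # 10 stand-in feature atoms
D /= np.linalg.norm(D, axis=0)
cands = [D[:, :3] @ rng.standard_normal(3) + 0.05 * rng.standard_normal(64)
         for _ in range(5)]
print(pick_candidate(D, cands))                   # index of the best candidate
```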
