
Generalized low-density parity-check (GLDPC) codes are a class of codes in which the single parity-check nodes of a conventional low-density parity-check (LDPC) code are replaced by stronger linear component codes acting as generalized check constraints. In this paper, we introduce a new method of constructing GLDPC codes by inserting generalized check nodes for partial doping. While a conventional protograph GLDPC code dopes the protograph check nodes by replacing them with generalized check nodes, the proposed code is constructed by adding generalized check nodes and partially doping selected variable nodes, which provides higher degrees of freedom; we call it a partially doped GLDPC (PD-GLDPC) code. The proposed PD-GLDPC codes enable more accurate extrinsic information transfer (EXIT) analysis and allow finer doping granularity at the protograph level than conventional GLDPC codes. We also propose a constraint on the typical minimum distance of PD-GLDPC codes and prove that PD-GLDPC codes satisfying this condition have the linear minimum distance growth property. Furthermore, we obtain threshold-optimized protographs for both regular and irregular ensembles of the proposed PD-GLDPC codes over the binary erasure channel (BEC). Specifically, we propose construction algorithms for both regular and irregular protograph-based PD-GLDPC codes that enable the construction of GLDPC codes with higher rates than conventional ones. The block error rate performance of the proposed PD-GLDPC codes shows a good waterfall region with a low error floor, outperforming other LDPC codes of the same code rate, code length, and degree distribution.
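
For readers who want to reproduce the kind of BEC threshold analysis referred to above, the sketch below runs standard protograph density evolution over the binary erasure channel for a given base matrix and bisects for the erasure threshold. It only handles conventional single-parity-check nodes; a PD-GLDPC variant would replace the check-node update with the erasure-probability (EXIT) function of the generalized component code. Function names and tolerances are illustrative, not taken from the paper.

```python
import numpy as np

def protograph_bec_de(B, eps, max_iter=2000, tol=1e-12):
    """Protograph density evolution over the BEC (single parity-check nodes only).

    B[i][j] = number of edges between check node i and variable node j in the
    protograph; eps = channel erasure probability. Returns True if the
    variable-to-check erasure probabilities converge to (almost) zero.
    """
    B = np.asarray(B, dtype=int)
    m, n = B.shape
    x = np.where(B > 0, eps, 0.0)        # VN-to-CN erasure prob per edge type
    for _ in range(max_iter):
        y = np.zeros_like(x)             # CN-to-VN erasure probabilities
        for i in range(m):
            for j in range(n):
                if B[i, j] == 0:
                    continue
                prod = 1.0
                for jp in range(n):
                    k = B[i, jp] - (1 if jp == j else 0)   # exclude the outgoing edge
                    prod *= (1.0 - x[i, jp]) ** k
                y[i, j] = 1.0 - prod
        x_new = np.zeros_like(x)
        for i in range(m):
            for j in range(n):
                if B[i, j] == 0:
                    continue
                prod = eps
                for ip in range(m):
                    k = B[ip, j] - (1 if ip == i else 0)   # exclude the outgoing edge
                    prod *= y[ip, j] ** k
                x_new[i, j] = prod
        if np.max(np.abs(x_new - x)) < tol:
            x = x_new
            break
        x = x_new
    return bool(np.max(x[B > 0]) < 1e-6)

def bec_threshold(B, steps=30):
    """Bisect for the largest erasure probability that still decodes."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if protograph_bec_de(B, mid) else (lo, mid)
    return lo
```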

Related content

Exploits constitute malware in the form of application inputs. They take advantage of security vulnerabilities inside programs in order to yield execution control to attackers. The root cause of successful exploitation lies in emergent functionality, called `Weird Machines' (WMs), introduced when programs are compiled and loaded into memory for execution. Essentially, WMs are unexpected virtual machines that execute the attacker's bytecode, complicating malware analysis whenever the bytecode set is unknown. We take the position that WM bytecode is best understood at the level of the process memory layout attained by exploit execution. Each step building towards this memory layout comprises an exploit primitive, an exploit's basic building block. This work presents a WM reconstruction algorithm that identifies pre-defined exploit primitive-related behaviour during the dynamic analysis of target binaries and associates it with the responsible exploit segment, i.e., the WM bytecode. In this manner, any analyst familiar with exploit programming will immediately recognise the semantics of the reconstructed WM bytecode. This work is a first attempt at studying the feasibility of this method and focuses on web browsers targeted by JavaScript exploits.
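
A minimal sketch of the primitive-matching idea, under the assumption that dynamic analysis already yields a trace of events tagged with the exploit segment (e.g., the JavaScript snippet) active at the time; the primitive signatures below are placeholders, not the detectors used in this work.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TraceEvent:
    kind: str      # e.g. "mem_write", "indirect_call", "alloc"
    addr: int
    size: int
    segment: str   # exploit segment active when the event occurred (assumed known)

def classify_primitive(ev: TraceEvent) -> Optional[str]:
    """Placeholder signatures for pre-defined exploit primitives."""
    if ev.kind == "alloc" and ev.size >= 0x1000000:
        return "heap spray"
    if ev.kind == "mem_write" and ev.size >= 8:
        return "arbitrary write"
    if ev.kind == "indirect_call":
        return "control-flow hijack"
    return None

def reconstruct_wm(trace: List[TraceEvent]) -> List[Tuple[str, str]]:
    """Pair each detected primitive with the exploit segment responsible for it,
    producing a primitive-level, human-readable view of the WM bytecode."""
    reconstructed = []
    for ev in trace:
        label = classify_primitive(ev)
        if label is not None:
            reconstructed.append((label, ev.segment))
    return reconstructed
```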

Effectively structuring deep knowledge plays a pivotal role in its transfer from teacher to student, especially in semantic vision tasks. In this paper, we present a simple knowledge structure to exploit and encode information inside the detection system to facilitate detector knowledge distillation. Specifically, aiming to solve the feature imbalance problem while further excavating the missing relations between semantic instances, we design a graph whose nodes correspond to instance proposal-level features and whose edges represent the relations between nodes. To further refine this graph, we design an adaptive background loss weight to reduce node noise and use background sample mining to prune trivial edges. We transfer the entire graph as an encoded knowledge representation from teacher to student, capturing local and global information simultaneously. We achieve new state-of-the-art results on the challenging COCO object detection task with diverse student-teacher pairs on both one- and two-stage detectors. We also experiment with instance segmentation to demonstrate the robustness of our method. Notably, distilled Faster R-CNN with ResNet18-FPN and ResNet50-FPN yields 38.68 and 41.82 Box AP, respectively, on the COCO benchmark, and Faster R-CNN with ResNet101-FPN achieves 43.38 AP, outperforming its ResNet152-FPN teacher by about 0.7 AP. Code: //github.com/dvlab-research/Dsig.
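
A rough PyTorch-style sketch of the two ingredients described above, assuming proposal-level features have already been extracted from teacher and student: a node term with down-weighted background proposals (standing in for the adaptive background loss weight) and an edge term matching the pairwise relation matrices of the two graphs. The exact losses and weighting in Dsig may differ.

```python
import torch
import torch.nn.functional as F

def graph_distillation_loss(student_feats, teacher_feats, fg_mask=None, bg_weight=0.1):
    """Sketch of structured-graph distillation.

    student_feats, teacher_feats: (N, C) instance proposal-level features.
    fg_mask: optional (N,) bool tensor marking foreground proposals; background
    nodes are down-weighted as a stand-in for the adaptive background weight.
    """
    # node term: match proposal features, with background proposals down-weighted
    node_w = torch.ones(student_feats.size(0), device=student_feats.device)
    if fg_mask is not None:
        node_w = torch.where(fg_mask, node_w, torch.full_like(node_w, bg_weight))
    node_loss = (node_w[:, None] * (student_feats - teacher_feats) ** 2).mean()

    # edge term: match the relation (cosine-similarity) matrices of the two graphs
    def relation(feats):
        feats = F.normalize(feats, dim=1)
        return feats @ feats.t()
    edge_loss = F.mse_loss(relation(student_feats), relation(teacher_feats))

    return node_loss + edge_loss
```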

We consider n robots with limited visibility: each robot can observe other robots only up to a constant distance, denoted the viewing range. The robots operate in discrete rounds that are either fully synchronous (FSync) or semi-synchronous (SSync). Most previously studied formation problems in this setting seek to bring the robots closer together (e.g., Gathering or Chain-Formation). In this work, we introduce the Max-Line-Formation problem, which has the contrary goal: to arrange the robots on a straight line of maximal length. First, we prove that the problem is impossible to solve for robots with a constant-sized circular viewing range. The impossibility holds under comparably strong assumptions: robots that agree on both axes of their local coordinate systems in FSync. On the positive side, we show that the problem is solvable by robots with a constant square viewing range, i.e., the robots can observe other robots that lie within a constant-sized square centered at their position. In this case, the robots need to agree on only one axis of their local coordinate systems. We derive two algorithms: the first considers oblivious robots and converges to the optimal configuration in time $\mathcal{O}(n^2 \cdot \log (n/\varepsilon))$ under the SSync scheduler. The other algorithm makes use of locally visible lights (LUMI); it is designed for the FSync scheduler and solves the problem exactly in optimal time $\Theta(n)$. Afterward, we show that both the algorithmic and the analysis techniques can also be applied to the Gathering and Chain-Formation problems: we introduce an algorithm with a reduced viewing range for Gathering and give new and improved runtime bounds for the Chain-Formation problem.
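
The round model itself is easy to make concrete. The skeleton below simulates one semi-synchronous (SSync) step in the limited-visibility setting with a square viewing range; the actual Max-Line-Formation movement rules are left as a user-supplied stub, so this fixes only the model, not the algorithm.

```python
import random

def ssync_round(positions, rule, view_range=1.0, p_active=0.5, rng=random):
    """One SSync round: an arbitrary subset of robots is activated; each active
    robot observes only the robots inside a square of side 2*view_range centred
    on itself (in relative coordinates) and moves according to `rule`."""
    new_positions = list(positions)
    for i, (x, y) in enumerate(positions):
        if rng.random() > p_active:
            continue                           # robot i is not activated this round
        visible = [(px - x, py - y) for (px, py) in positions
                   if abs(px - x) <= view_range and abs(py - y) <= view_range]
        dx, dy = rule(visible)                 # user-supplied movement rule
        new_positions[i] = (x + dx, y + dy)
    return new_positions
```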

This paper presents a novel algorithm for a swarm of unmanned aerial vehicles (UAVs) to search for an unknown source. The proposed method is inspired by the well-known particle swarm optimization (PSO) algorithm and is called acceleration-based particle swarm optimization (APSO); it addresses the source-seeking problem with no a priori information. Unlike the conventional PSO algorithm, where the particle velocity is updated based on the self-cognition and social-cognition information, here the update is performed on the particle acceleration. A theoretical analysis is provided, showing the stability and convergence of the proposed APSO algorithm. Conditions on the parameters of the resulting third-order update equations are obtained using Jury's stability test. High-fidelity simulations performed in CoppeliaSim show the improved performance of the proposed APSO algorithm in searching for an unknown source compared with state-of-the-art particle swarm-based source-seeking algorithms. From the obtained results, it is observed that the proposed method performs better than existing methods under scenarios such as different inter-UAV communication network topologies, varying numbers of UAVs in the swarm, different sizes of the search region, restricted source movement, and measurement noise.
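
The core change relative to PSO is straightforward to illustrate: the cognition terms drive the particle acceleration, which is then integrated into velocity and position, yielding a third-order update. The exact form and coefficients used in the paper may differ; the sketch below is an assumed, illustrative variant.

```python
import numpy as np

def apso_step(x, v, a, pbest, gbest, w=0.7, c1=1.5, c2=1.5, dt=1.0, rng=np.random):
    """One assumed APSO update for a swarm with arrays of shape (num_particles, dim).

    x, v, a: positions, velocities, accelerations
    pbest:   each particle's best-known position
    gbest:   the swarm's best-known position (broadcast over particles)
    """
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    a = w * a + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # update acceleration
    v = v + dt * a                                             # integrate to velocity
    x = x + dt * v                                             # integrate to position
    return x, v, a
```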

Quantum low-density parity-check (LDPC) codes are an important class of quantum error-correcting codes. In such codes, each qubit only affects a constant number of syndrome bits, and each syndrome bit only relies on a constant number of qubits. Constructing quantum LDPC codes is challenging. It is an open problem whether there exist good quantum LDPC codes, i.e., codes with constant rate and constant relative distance. Furthermore, techniques to perform fault-tolerant gates on them are poorly understood. We present a unified way to address these problems. Our main results are a) a bound on the distance, b) a bound on the code dimension, and c) limitations on certain fault-tolerant gates that can be applied to quantum LDPC codes. All three of these bounds are cast as functions of the graph separator of the connectivity graph representation of the quantum code. We find that unless the connectivity graph contains an expander, the code is severely limited. This implies a necessary, but not sufficient, condition for constructing good codes. This is the first bound on the limitations of quantum LDPC codes that does not rely on locality. As an application, we present novel bounds on quantum LDPC codes associated with local graphs in $D$-dimensional hyperbolic space.
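
As a rough way to probe the expander condition numerically, one can build the connectivity graph of a code from its check matrix and compute its algebraic connectivity (Fiedler value), which is small whenever the graph has a small separator. The sketch below (using networkx) is an illustration of that diagnostic, not the bounds derived in the paper.

```python
import numpy as np
import networkx as nx

def connectivity_graph(H):
    """Connectivity graph: qubits i and j are adjacent if some check acts on both.
    H is the binary check matrix (rows = syndrome bits, columns = qubits)."""
    H = np.asarray(H)
    G = nx.Graph()
    G.add_nodes_from(range(H.shape[1]))
    for row in H:
        support = np.flatnonzero(row)
        for a in range(len(support)):
            for b in range(a + 1, len(support)):
                G.add_edge(int(support[a]), int(support[b]))
    return G

def expansion_proxy(H):
    """Fiedler value of the connectivity graph: a near-zero value signals a
    small separator (and hence, by the results above, a severely limited code)."""
    return nx.algebraic_connectivity(connectivity_graph(H))
```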

Quantum low-density parity-check (LDPC) codes are a promising avenue to reduce the cost of constructing scalable quantum circuits. However, it is unclear how to implement these codes in practice. Seminal results of Bravyi & Terhal, and of Bravyi, Poulin & Terhal, have shown that quantum LDPC codes implemented through local interactions obey restrictions on their dimension $k$ and distance $d$. Here we address the complementary question of how many long-range interactions are required to implement a quantum LDPC code with parameters $k$ and $d$. In particular, in 2D we show that a quantum LDPC code with distance $n^{1/2 + \epsilon}$ requires $\Omega(n^{1/2 + \epsilon})$ interactions of length $\widetilde{\Omega}(n^{\epsilon})$. Further, a code satisfying $k \propto n$ with distance $d \propto n^\alpha$ requires $\widetilde{\Omega}(n)$ interactions of length $\widetilde{\Omega}(n^{\alpha/2})$. Our results are derived using bounds on quantum codes from graph metrics. As an application of these results, we consider a model called a stacked architecture, which has previously been considered as a potential way to implement quantum LDPC codes. In this model, although most interactions are local, a few of them are allowed to be very long. We prove that limited long-range connectivity implies quantitative bounds on the distance and code dimension.
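
To make the counting question concrete, the sketch below measures, for a hypothetical 2D layout of the qubits, how long each check's interactions must be: the number of checks whose span exceeds some length L is then the number of long-range interactions that layout pays for. This is only a measurement utility, not the lower-bound argument of the paper.

```python
import numpy as np

def check_spans(H, layout):
    """For each check (row of the binary check matrix H), return the largest
    Euclidean distance between two qubits in its support, given a 2D layout
    mapping qubit index -> (x, y) coordinates."""
    pts = np.asarray([layout[j] for j in range(np.asarray(H).shape[1])], dtype=float)
    spans = []
    for row in np.asarray(H):
        support = np.flatnonzero(row)
        if len(support) < 2:
            spans.append(0.0)
            continue
        sub = pts[support]
        d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
        spans.append(float(d.max()))
    return np.array(spans)

# e.g. number of interactions longer than L for a candidate layout:
# num_long_range = int((check_spans(H, layout) > L).sum())
```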

Graph convolution is the core of most Graph Neural Networks (GNNs) and usually approximated by message passing between direct (one-hop) neighbors. In this work, we remove the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC). GDC leverages generalized graph diffusion, examples of which are the heat kernel and personalized PageRank. It alleviates the problem of noisy and often arbitrarily defined edges in real graphs. We show that GDC is closely related to spectral-based models and thus combines the strengths of both spatial (message passing) and spectral methods. We demonstrate that replacing message passing with graph diffusion convolution consistently leads to significant performance improvements across a wide range of models on both supervised and unsupervised tasks and a variety of datasets. Furthermore, GDC is not limited to GNNs but can trivially be combined with any graph-based model or algorithm (e.g. spectral clustering) without requiring any changes to the latter or affecting its computational complexity. Our implementation is available online.
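
A minimal sketch of the GDC preprocessing step with the personalized-PageRank kernel: compute the dense diffusion matrix $S = \alpha (I - (1-\alpha) T)^{-1}$ on the symmetrically normalized adjacency, drop small entries, and renormalize so that S can replace the adjacency in any message-passing model. This follows the recipe described above but is written for toy graph sizes (it uses a dense matrix inverse).

```python
import numpy as np

def gdc_ppr(A, alpha=0.15, eps=1e-4):
    """Graph diffusion convolution preprocessing with the PPR kernel.

    A: dense (N, N) adjacency matrix of an undirected graph.
    alpha: teleport probability; eps: sparsification threshold."""
    N = A.shape[0]
    A_hat = A + np.eye(N)                                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    T = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]     # symmetric transition matrix
    S = alpha * np.linalg.inv(np.eye(N) - (1.0 - alpha) * T)  # PPR diffusion matrix
    S[S < eps] = 0.0                                          # drop small entries (sparsify)
    col_sum = S.sum(axis=0, keepdims=True)
    col_sum[col_sum == 0.0] = 1.0
    return S / col_sum                                        # column-normalize the result
```

The resulting matrix is simply used wherever the downstream model would otherwise use the adjacency matrix for message passing, which is why the method combines with other graph models without changing them.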

To facilitate access to knowledge graphs for general users, increasing effort is being devoted to constructing graph-structured queries from given natural language questions. At the core of the construction is deducing the structure of the target query and determining the vertices/edges that constitute the query. Existing query construction methods rely on question understanding and conventional graph-based algorithms, which leads to inefficiency and degraded performance on complex natural language questions over large-scale knowledge graphs. In this paper, we focus on this problem and propose a novel framework built on recent knowledge graph embedding techniques. Our framework first encodes the underlying knowledge graph into a low-dimensional embedding space by leveraging generalized local knowledge graphs. Given a natural language question, the learned embedding representations of the knowledge graph are utilized to compute the query structure and assemble vertices/edges into the target query. Extensive experiments were conducted on the benchmark dataset, and the results demonstrate that our framework outperforms state-of-the-art baseline models in terms of effectiveness and efficiency.
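
A toy sketch of the assembly step under simplifying assumptions: given an embedding of the question and pre-trained embeddings of the knowledge graph's entities and relations, rank candidate vertices and edges by cosine similarity and keep the best ones to instantiate the target query. The scoring rule and names here are illustrative, not the framework's actual procedure.

```python
import numpy as np

def rank_candidates(question_vec, candidate_embs, top_k=3):
    """Rank candidates by cosine similarity to the question embedding.
    candidate_embs: dict mapping candidate name -> embedding vector."""
    names = list(candidate_embs)
    M = np.stack([candidate_embs[n] for n in names])
    sims = M @ question_vec / (
        np.linalg.norm(M, axis=1) * np.linalg.norm(question_vec) + 1e-12)
    order = np.argsort(-sims)[:top_k]
    return [(names[i], float(sims[i])) for i in order]

def assemble_query(question_vec, entity_embs, relation_embs, top_k=3):
    """Pick the highest-scoring entities and relations as the query's vertices/edges."""
    return {"vertices": rank_candidates(question_vec, entity_embs, top_k),
            "edges": rank_candidates(question_vec, relation_embs, top_k)}
```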

Various 3D reconstruction methods have enabled civil engineers to detect damage on road surfaces. To achieve the millimetre accuracy required for road condition assessment, a disparity map with subpixel resolution needs to be used. However, none of the existing stereo matching algorithms is particularly suitable for reconstructing road surfaces. Hence, in this paper, we propose a novel dense subpixel disparity estimation algorithm with high computational efficiency and robustness. This is achieved by first transforming the perspective view of the target frame into the reference view, which not only increases the accuracy of block matching on the road surface but also improves the processing speed. The disparities are then estimated iteratively using our previously published algorithm, where the search range is propagated from three estimated neighbouring disparities. Since the search range is obtained from the previous iteration, errors may occur when the propagated search range is not sufficient. Therefore, a correlation maxima verification is performed to rectify this issue, and subpixel resolution is achieved by a parabola interpolation enhancement. Furthermore, a novel disparity global refinement approach derived from Markov Random Fields and Fast Bilateral Stereo is introduced to further improve the accuracy of the estimated disparity map, where disparities are updated iteratively by minimising an energy function related to their interpolated correlation polynomials. The algorithm is implemented in C with near real-time performance. The experimental results illustrate that the absolute error of the reconstruction varies from 0.1 mm to 3 mm.
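
The parabola interpolation enhancement mentioned above is a standard refinement and easy to state in code: fit a parabola through the matching costs at the best integer disparity and its two neighbours, and take the parabola's extremum as the subpixel disparity. The sketch below shows only this step, not the block matching, search-range propagation, or global refinement.

```python
import numpy as np

def subpixel_parabola(cost, d):
    """Refine an integer disparity d to subpixel resolution.

    cost: 1D array of matching costs over candidate disparities;
    d:    index of the best cost, with valid neighbours d-1 and d+1."""
    c0, c1, c2 = cost[d - 1], cost[d], cost[d + 1]
    denom = c0 - 2.0 * c1 + c2
    if abs(denom) < 1e-12:              # degenerate (flat) neighbourhood
        return float(d)
    return d + 0.5 * (c0 - c2) / denom  # extremum of the fitted parabola
```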

Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs because they are tied to vertex identity. In this work, we introduce the Role2Vec framework, which uses the flexible notion of attributed random walks and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning, as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective, with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.
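
A small sketch of the attributed random walks at the core of Role2Vec, assuming nodes have already been mapped to types (e.g., by binning degrees or hashing attributes): walks record sequences of node types rather than node identities, so the learned features are no longer tied to specific vertices and can transfer to new nodes and graphs. The downstream skip-gram training is not shown.

```python
import random

def attributed_random_walks(adj, node_type, num_walks=10, walk_len=20, seed=0):
    """Generate attributed random walks.

    adj: dict mapping node -> list of neighbour nodes
    node_type: dict mapping node -> hashable type (attribute/role label)
    Returns a list of walks, each a sequence of node types."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            node = start
            walk = [node_type[node]]
            for _ in range(walk_len - 1):
                neighbours = adj[node]
                if not neighbours:
                    break
                node = rng.choice(neighbours)
                walk.append(node_type[node])
            walks.append(walk)
    return walks  # these "sentences" can be fed to any skip-gram model
```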
