
Graph neural networks (GNNs) have become an important class of neural network models that have gained popularity in domains such as social and financial network analysis. Different phases of GNN computations can be modeled using both dense and sparse matrix operations. There have been many frameworks and optimization techniques proposed in the literature to accelerate GNNs. However, getting consistently high performance across many input graphs with different sparsity patterns and GNN embedding sizes has remained difficult. In this paper, we propose different algebraic reassociations of GNN computations that lead to novel dense and sparse matrix primitive selections and compositions. We show that the profitability of these compositions depends on the input graph, embedding size, and the target hardware. We developed SENSEi, a system that uses a data-driven adaptive strategy to select the best composition given the input graph and GNN embedding sizes. Our evaluations on a wide range of graphs and embedding sizes show that SENSEi achieves geomean speedups of $1.105\times$ (up to $2.959\times$) and $1.187\times$ (up to $1.99\times$) on graph convolutional networks and geomean speedups of $2.307\times$ (up to $35.866\times$) and $1.44\times$ (up to $5.69\times$) on graph attention networks on CPUs and GPUs respectively over the widely used Deep Graph Library. Further, we show that the compositions yield notable synergistic performance benefits on top of other established sparse optimizations such as sparse matrix tiling by evaluating against a well-tuned baseline.
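The core reassociation can be illustrated with a minimal numpy/scipy sketch (sizes and names are illustrative, not SENSEi's implementation): a GCN-style layer can aggregate first and then transform, or transform first and then aggregate. The two orderings are algebraically identical, but which is cheaper depends on the graph's sparsity and on the input/output embedding sizes.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)
n, f_in, f_out = 1000, 256, 16           # nodes, input and output embedding sizes
A = sparse_random(n, n, density=0.01, random_state=0, format="csr")  # sparse adjacency
X = rng.standard_normal((n, f_in))       # dense node features
W = rng.standard_normal((f_in, f_out))   # dense layer weights

# Composition 1: aggregate first -- the sparse multiply (SpMM) sees f_in columns.
Y1 = (A @ X) @ W
# Composition 2: transform first -- the dense GEMM shrinks the operand to f_out
# columns before the sparse multiply, often cheaper when f_out << f_in.
Y2 = A @ (X @ W)
```

When f_out is much smaller than f_in (as here, 16 vs 256), composition 2 moves far less data through the sparse multiply; when f_out is larger, composition 1 can win, which is why the profitable choice depends on the input.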

Related content

With the development of wireless communication, people have put forward higher requirements for train-ground communications in high-speed railway (HSR) scenarios. With the help of mobile relays (MRs) installed on the roof of the train, applying millimeter-wave (mm-wave) communication, which has rich spectrum resources, to the train-ground communication system can realize high data rates, so as to meet users' increasing demand for broadband multimedia access. In addition, full-duplex (FD) technology can theoretically double the spectral efficiency. In this paper, we formulate the user association and transmission scheduling problem in the mm-wave train-ground communication system with MRs operating in FD mode as a nonlinear programming problem. To maximize the system throughput and the number of users meeting quality of service (QoS) requirements, we propose a coalition-game-based algorithm to solve this challenging NP-hard problem, and we prove the convergence of the algorithm and the Nash stability of the structure it produces. Extensive simulation results demonstrate that the proposed algorithm effectively improves system throughput and meets the QoS requirements of as many users as possible, giving the communication system a degree of QoS awareness.
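A coalition-switch loop of the kind described can be sketched as follows (the utility model below, equal time-sharing of a per-MR peak rate, is a simplifying assumption, not the paper's system model). Each user unilaterally switches to the coalition that improves its own rate until no beneficial switch remains, i.e., the partition is Nash-stable.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_mrs = 12, 3
peak = rng.uniform(1.0, 4.0, size=n_mrs)        # hypothetical per-MR peak rates (Gbps)
assign = rng.integers(0, n_mrs, size=n_users)   # initial coalition of each user

def utility(u, m, assign):
    # Equal time-sharing within a coalition: the per-user rate shrinks as it grows.
    size = np.sum(assign == m) + (0 if assign[u] == m else 1)
    return peak[m] / size

changed = True
while changed:                   # switch operations until the partition is Nash-stable
    changed = False
    for u in range(n_users):
        best = max(range(n_mrs), key=lambda m: utility(u, m, assign))
        if utility(u, best, assign) > utility(u, assign[u], assign) + 1e-12:
            assign[u] = best     # unilateral switch that strictly improves u's rate
            changed = True
```

With this utility the game admits a Rosenthal-style potential that strictly increases at every accepted switch, so the loop terminates at a Nash-stable partition; the paper's convergence proof plays the analogous role for its richer model.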

One technology that has the potential to improve wireless communications in the years to come is integrated sensing and communication (ISAC). In this study, we exploit the potential advantages of reconfigurable intelligent surfaces (RIS) to achieve ISAC using the same frequency and resources. Specifically, through its reflecting elements, the RIS dynamically modifies the strength or phase of radio waves in order to change the radio transmission environment and increase the ISAC system's transmission rate. We investigate a single-cell downlink communication scenario with RIS assistance. We jointly optimize the ISAC base station's (BS) beamforming and the RIS's discrete phase shifts to maximize the sum rate while guaranteeing the sensing signal. We use alternating maximization to find practical solutions, dividing the challenge into two subproblems. The first subproblem, power allocation, is non-convex; we convert it to a convex problem and solve it with CVX. A local search strategy is used to solve the second subproblem, phase-shift optimization. Simulation results show that using an RIS with adjusted phase shifts can significantly enhance the ISAC system's performance.
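The phase-shift subproblem can be sketched with an element-wise local search over a discrete phase alphabet (the single-antenna channel model and sizes below are illustrative assumptions, not the paper's setup). Each RIS element in turn is set to the phase that maximizes the effective channel gain, and passes repeat until no element can improve; the objective is monotone non-decreasing, so the search terminates.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ris = 16
h_d = complex(rng.standard_normal(), rng.standard_normal())          # direct BS-user link
h_r = rng.standard_normal(n_ris) + 1j * rng.standard_normal(n_ris)   # RIS-user link
g = rng.standard_normal(n_ris) + 1j * rng.standard_normal(n_ris)     # BS-RIS link

phase_set = np.exp(1j * np.pi / 2 * np.arange(4))  # 2-bit discrete phase alphabet
phi = np.ones(n_ris, dtype=complex)                # initial reflection coefficients

def gain(phi):
    # Power of the effective channel: direct path plus RIS-reflected path.
    return abs(h_d + np.sum(g * phi * h_r)) ** 2

improved = True
while improved:                 # element-wise local search (monotone, so it terminates)
    improved = False
    for i in range(n_ris):
        for p in phase_set:
            cand = phi.copy()
            cand[i] = p
            if gain(cand) > gain(phi) + 1e-12:
                phi = cand
                improved = True
```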

Including prior information about model parameters is a fundamental step of any Bayesian statistical analysis. It is viewed positively by some because it allows, among other things, the quantitative incorporation of expert opinion about model parameters. It is viewed negatively by others because it sets the stage for subjectivity in statistical analysis. Certainly, it creates problems when the inference is skewed due to a conflict with the data collected. According to the theory of conflict resolution (O'Hagan and Pericchi, 2012), a solution to such problems is to diminish the impact of conflicting prior information, yielding inference consistent with the data. This is typically achieved by using heavy-tailed priors. We study both theoretically and numerically the efficacy of such a solution in a regression framework where the prior information about the coefficients takes the form of a product of density functions with known location and scale parameters. We study functions with regularly varying tails (Student distributions) and log-regularly-varying tails (as introduced in Desgagn\'e (2015)), and propose functions with slower tail decay that allow any conflict arising under that regression framework to be resolved, contrary to the two previous types of functions. The code to reproduce all numerical experiments is available online.
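The conflict-resolution effect of heavy tails can be illustrated in a one-dimensional location model (a toy setting, not the paper's regression framework). With an observation far from the prior location, a normal prior forces an unreasonable compromise, while a Student-t(1) (Cauchy) prior, whose tails are regularly varying, lets the posterior follow the data.

```python
import numpy as np

theta = np.linspace(-20, 30, 20001)       # grid for 1-D posterior integration
y = 10.0                                   # one observation, far from the prior location 0

def post_mean(log_prior):
    log_post = -(y - theta) ** 2 / 2 + log_prior     # N(y | theta, 1) likelihood
    w = np.exp(log_post - log_post.max())            # unnormalized posterior on the grid
    return np.sum(theta * w) / np.sum(w)

normal_prior = -(theta ** 2) / 2                     # N(0, 1): light tails
cauchy_prior = -np.log(1 + theta ** 2)               # Student-t(1): regularly varying tails

m_normal = post_mean(normal_prior)   # compromise: pulled halfway toward the prior (~5)
m_cauchy = post_mean(cauchy_prior)   # conflict resolved: stays near the data (~9.8)
```

The same mechanism is what the slower-decaying tail functions proposed in the paper push further: the heavier the tail, the more completely a conflicting prior is discounted.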

Modeling complex systems that consist of different types of objects leads to multilayer networks, in which vertices are connected by both inter-layer and intra-layer edges. In this paper, we investigate multiplex networks, in which vertices in different layers are identified with each other, and the only inter-layer edges are those that connect a vertex with its copy in other layers. Let the third-order adjacency tensor $\mathcal{A}\in\mathbb{R}^{N\times N\times L}$ and the parameter $\gamma\geq 0$, which is associated with the ease of communication between layers, represent a multiplex network with $N$ vertices and $L$ layers. To measure the ease of communication in a multiplex network, we focus on the average inverse geodesic length, which we refer to as the multiplex global efficiency $e_\mathcal{A}(\gamma)$, computed by means of the multiplex path length matrix $P\in\mathbb{R}^{N\times N}$. This paper generalizes the approach proposed in \cite{NR23} for single-layer networks. We describe an algorithm based on min-plus matrix multiplication to construct $P$, as well as variants $P^K$ that only take into account multiplex paths made up of at most $K$ intra-layer edges. These matrices are applied to detect redundant edges and to determine non-decreasing lower bounds $e_\mathcal{A}^K(\gamma)$ for $e_\mathcal{A}(\gamma)$, for $K=1,2,\dots,N-2$. Finally, the sensitivity of $e_\mathcal{A}^K(\gamma)$ to changes of the entries of the adjacency tensor $\mathcal{A}$ is investigated to determine edges that should be strengthened to enhance the multiplex global efficiency the most.
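The min-plus building block can be sketched as follows (a toy single-layer example; the paper's algorithm operates on the full multiplex structure). In the min-plus (tropical) semiring, the $K$-th power of the weighted adjacency matrix gives shortest path lengths using at most $K$ edges, which is exactly the kind of truncated matrix $P^K$ described above.

```python
import numpy as np

INF = np.inf

def min_plus(P, Q):
    # (P (x) Q)[i, j] = min_k (P[i, k] + Q[k, j]) -- the min-plus (tropical) product
    return np.min(P[:, :, None] + Q[None, :, :], axis=1)

# Distances of a directed 3-vertex path graph (INF = no edge, 0 on the diagonal).
P1 = np.array([[0.0, 1.0, INF],
               [INF, 0.0, 1.0],
               [INF, INF, 0.0]])

P2 = min_plus(P1, P1)   # shortest path lengths using at most 2 edges
```

Here `P2[0, 2] == 2.0`: vertex 0 reaches vertex 2 through vertex 1 once two edges are allowed, while `P1[0, 2]` is still infinite.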

Invertible neural networks (INNs) represent an important class of deep neural network architectures that have been widely used in several applications. The universal approximation properties of INNs have also been established recently. However, an analysis of the approximation rate of INNs is still largely missing. In this work, we provide an analysis of the capacity of a class of coupling-based INNs to approximate bi-Lipschitz continuous mappings on a compact domain, and the result shows that they can well approximate both forward and inverse maps simultaneously. Furthermore, we develop an approach for approximating bi-Lipschitz maps on infinite-dimensional spaces that simultaneously approximates the forward and inverse maps, by combining model reduction with principal component analysis and INNs for approximating the reduced map, and we analyze the overall approximation error of the approach. Preliminary numerical results show the feasibility of the approach for approximating the solution operator for parameterized second-order elliptic problems.
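Why coupling-based INNs give both maps at once can be seen from a single additive coupling block (a purely illustrative sketch with a random coupling network `t`): the block is exactly invertible regardless of `t`, because the inverse just subtracts the same `t` output.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.standard_normal((2, 8)); b1 = rng.standard_normal(8)
W2 = rng.standard_normal((8, 2)); b2 = rng.standard_normal(2)

def t(x1):
    # Arbitrary (non-invertible) coupling network; invertibility never depends on it.
    return np.tanh(x1 @ W1 + b1) @ W2 + b2

def forward(x):
    x1, x2 = x[:2], x[2:]
    return np.concatenate([x1, x2 + t(x1)])   # additive coupling: y1 = x1, y2 = x2 + t(x1)

def inverse(y):
    y1, y2 = y[:2], y[2:]
    return np.concatenate([y1, y2 - t(y1)])   # exact inverse, reusing the same t

x = rng.standard_normal(4)
```

Stacking such blocks (with the split alternating between halves) yields the coupling-based INNs whose approximation rates the paper analyzes.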

Back-propagation (BP) is a widely used learning algorithm for neural network optimization. However, BP incurs an enormous computation cost and is too slow to run on a central processing unit (CPU). Therefore, current neural network optimization is performed on a graphics processing unit (GPU) with compute unified device architecture (CUDA) programming. In this paper, we propose a lightweight, fast learning algorithm on the CPU that is as fast as CUDA acceleration on the GPU. This algorithm is based on a forward-propagation method, using the concept of dual numbers from algebraic geometry.
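The dual-number idea can be sketched in a few lines (a minimal illustration, not the paper's optimized implementation): a dual number $a + b\,\varepsilon$ with $\varepsilon^2 = 0$ carries a derivative alongside each value, so one forward pass computes both $f(x)$ and $f'(x)$ exactly, with no backward pass.

```python
class Dual:
    # a + b*eps with eps**2 == 0: the b component propagates the exact derivative.
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule falls out of (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps.
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1       # f'(x) = 6x + 2

out = f(Dual(4.0, 1.0))                # seed the derivative with 1 at x = 4
# out.a is f(4) = 57, out.b is f'(4) = 26
```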

The prevalence of machine learning in biomedical research is rapidly growing, yet the trustworthiness of such research is often overlooked. While some previous works have investigated the ability of adversarial attacks to degrade model performance in medical imaging, the ability to falsely improve performance via recently developed "enhancement attacks" may be a greater threat to biomedical machine learning. In the spirit of developing attacks to better understand trustworthiness, we developed two techniques to drastically enhance prediction performance of classifiers with minimal changes to features: 1) general enhancement of prediction performance, and 2) enhancement of a particular method over another. Our enhancement framework falsely improved classifiers' accuracy from 50% to almost 100% while maintaining high feature similarities between original and enhanced data (Pearson's r > 0.99). Similarly, the method-specific enhancement framework was effective in falsely improving the performance of one method over another. For example, a simple neural network outperformed logistic regression by 17% on our enhanced dataset, although no performance differences were present in the original dataset. Crucially, the original and enhanced data were still similar (r = 0.99). Our results demonstrate the feasibility of minor data manipulations to achieve any desired prediction performance, which presents an interesting ethical challenge for the future of biomedical machine learning. These findings emphasize the need for more robust data provenance tracking and other precautionary measures to ensure the integrity of biomedical machine learning research.
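The flavor of such an enhancement attack can be sketched as follows (the perturbation recipe, magnitudes, and classifier are hypothetical, not the paper's method): a small class-correlated shift manufactures strong separability in otherwise signal-free data, while the enhanced features remain highly correlated with the originals.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 400, 100
X = rng.standard_normal((n, d))            # features with no real class signal
y = rng.integers(0, 2, size=n)             # random binary labels

# Enhancement: nudge each sample along one class-correlated direction.
direction = rng.standard_normal(d)
direction /= np.linalg.norm(direction)
X_enh = X + 2.0 * np.where(y[:, None] == 1, direction, -direction)

def centroid_acc(X, y):
    # In-sample accuracy of a nearest-class-centroid classifier.
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return float((pred == y).mean())

r = np.corrcoef(X.ravel(), X_enh.ravel())[0, 1]   # original vs enhanced similarity
```

Because the shift is spread thinly across all features, each individual feature barely changes (keeping the correlation high), yet the classifier's accuracy jumps far above chance.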

Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains. However, there is still enormous potential to be tapped before fully supervised performance is reached. In this paper, we present a novel active learning strategy to assist knowledge transfer in the target domain, dubbed active domain adaptation. We start from the observation that energy-based models exhibit free-energy biases when training (source) and test (target) data come from different distributions. Inspired by this inherent mechanism, we empirically show that a simple yet efficient energy-based sampling strategy selects more valuable target samples than existing approaches that require particular architectures or distance computations. Our algorithm, Energy-based Active Domain Adaptation (EADA), queries groups of target data that incorporate both domain characteristics and instance uncertainty into every selection round. Meanwhile, by constraining the free energy of target data to be compact around that of the source domain via a regularization term, the domain gap can be implicitly diminished. Through extensive experiments, we show that EADA surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world. Code is available at //github.com/BIT-DA/EADA.
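The free-energy scoring at the heart of such a strategy can be sketched in a few lines (the logits here are synthetic, and EADA's full criterion combines a domain term with an instance-uncertainty term): the free energy of a sample is the negative log-sum-exp of its logits, so flat, low-confidence predictions get high energy and are queried first.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(5)
# Hypothetical classifier logits for 1000 unlabeled target samples over 10 classes;
# a per-sample scale makes some predictions confident (peaked) and others flat.
logits = rng.standard_normal((1000, 10)) * rng.uniform(0.2, 3.0, size=(1000, 1))

T = 1.0                                            # temperature
free_energy = -T * logsumexp(logits / T, axis=1)   # higher = flatter, less source-like
k = 50
query = np.argsort(free_energy)[-k:]               # annotate the k highest-energy samples
```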

Heterogeneous graph neural networks (HGNNs), as an emerging technique, have shown a superior capacity for dealing with heterogeneous information networks (HINs). However, most HGNNs follow a semi-supervised learning paradigm, which notably limits their use in practice since labels are usually scarce in real applications. Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms and shows great potential when no labels are available. In this paper, we study the problem of self-supervised HGNNs and propose a novel co-contrastive learning mechanism for HGNNs, named HeCo. Different from traditional contrastive learning, which only focuses on contrasting positive and negative samples, HeCo employs a cross-view contrastive mechanism. Specifically, two views of a HIN (the network-schema and meta-path views) are proposed to learn node embeddings, so as to capture both local and high-order structures simultaneously. Then cross-view contrastive learning, together with a view mask mechanism, is proposed to extract the positive and negative embeddings from the two views. This enables the two views to collaboratively supervise each other and finally learn high-level node embeddings. Moreover, two extensions of HeCo are designed to generate harder negative samples of high quality, which further boosts its performance. Extensive experiments conducted on a variety of real-world networks show the superior performance of the proposed methods over the state of the art.
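The cross-view contrastive objective can be sketched with a standard InfoNCE-style loss over two embedding views (a generic sketch; HeCo's actual loss defines positives via meta-paths rather than only the diagonal pairing used here). Each node's embedding under one view is pulled toward its own embedding under the other view and pushed away from all other nodes, so the two views supervise each other.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, tau = 8, 16, 0.5
z_sc = rng.standard_normal((n, d))    # network-schema view embeddings
z_mp = rng.standard_normal((n, d))    # meta-path view embeddings

def normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def infonce(za, zb):
    sim = normalize(za) @ normalize(zb).T / tau       # cosine similarity matrix
    log_p = sim - np.log(np.exp(sim).sum(1, keepdims=True))
    return -np.mean(np.diag(log_p))                   # positives on the diagonal

loss = infonce(z_sc, z_mp) + infonce(z_mp, z_sc)      # symmetric co-contrastive loss
```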

Graph neural networks (GNNs) have been proven effective in various network-related tasks. Most existing GNNs usually exploit the low-frequency signals of node features, which gives rise to one fundamental question: is low-frequency information all we need in real-world applications? In this paper, we first present an experimental investigation assessing the roles of low-frequency and high-frequency signals, and the results clearly show that exploring only low-frequency signals falls far short of learning effective node representations in different scenarios. How can we adaptively learn more information beyond low-frequency information in GNNs? A well-informed answer can help GNNs enhance their adaptability. We tackle this challenge and propose a novel Frequency Adaptation Graph Convolutional Network (FAGCN) with a self-gating mechanism, which can adaptively integrate different signals in the process of message passing. For a deeper understanding, we theoretically analyze the roles of low-frequency and high-frequency signals in learning node representations, which further explains why FAGCN can perform well on different types of networks. Extensive experiments on six real-world networks validate that FAGCN not only alleviates the over-smoothing problem but also has advantages over the state of the art.
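A propagation step in this style can be sketched as follows (the gating parameterization and constants are simplified assumptions, not FAGCN's exact layer): an edge-wise gate in $(-1, 1)$, computed from the two endpoint features, lets each message act as low-pass averaging (positive gate) or high-pass differencing (negative gate) over the symmetrically normalized adjacency.

```python
import numpy as np

rng = np.random.default_rng(7)
# A tiny undirected graph (symmetric adjacency, no self-loops).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(1)
A_norm = A / np.sqrt(np.outer(deg, deg))    # D^{-1/2} A D^{-1/2}

H = rng.standard_normal((4, 3))             # node features (4 nodes, 3 dims)
g = rng.standard_normal(6)                  # gating vector over [h_i || h_j]

# Edge-wise gate in (-1, 1): positive ~ low-pass averaging, negative ~ high-pass.
pairs = np.concatenate([H[:, None, :].repeat(4, 1),
                        H[None, :, :].repeat(4, 0)], axis=2)  # pairs[i, j] = [h_i || h_j]
alpha = np.tanh(pairs @ g)                  # (4, 4) signed edge coefficients

eps = 0.3                                   # residual weight on the raw features
H_out = eps * H + (alpha * A_norm) @ H      # self-gated frequency-adaptive propagation
```

Keeping the residual term `eps * H` is what lets negative (high-pass) gates sharpen differences between neighbors instead of merely averaging them away, which is the mechanism behind the over-smoothing relief described above.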
