
The exclusive or (xor) function is one of the simplest examples illustrating why nonlinear feedforward networks are superior to linear regression for machine learning applications. We review the xor representation and approximation problems and discuss their solutions in terms of probabilistic logic and associative copula functions. After briefly reviewing the specification of feedforward networks, we compare the dynamics of learned error surfaces under different activation functions, such as ReLU and tanh, through a set of colorful three-dimensional charts. The copula representations extend xor from Boolean to real values, thereby providing a convenient way to demonstrate the concept of cross-validation on in-sample and out-of-sample data sets. Our approach is pedagogical and is meant to be a machine learning prolegomenon.
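To make the opening example concrete, here is a minimal sketch (our illustration, not the paper's copula-based construction) of a tiny feedforward network with a tanh hidden layer fitting xor by gradient descent; a purely linear model cannot fit this data, which is the point of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 3 tanh hidden units; linear regression on X alone cannot represent xor.
W1 = rng.normal(size=(2, 3)); b1 = np.zeros((1, 3))
W2 = rng.normal(size=(3, 1)); b2 = np.zeros((1, 1))

lr = 0.5
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)              # nonlinear hidden layer
    out = H @ W2 + b2                     # linear readout
    err = (out - y) / len(X)              # mean-squared-error gradient
    gW2 = H.T @ err; gb2 = err.sum(0, keepdims=True)
    dH = (err @ W2.T) * (1 - H ** 2)      # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dH; gb1 = dH.sum(0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(np.round(np.tanh(X @ W1 + b1) @ W2 + b2, 2))  # typically ≈ [[0],[1],[1],[0]]
```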

Related Content

A quantum network distributes quantum entanglement between remote nodes and is key to many applications in secure communication, quantum sensing, and distributed quantum computing. This paper explores the fundamental trade-off between the throughput and the quality of entanglement distribution in a multi-hop quantum repeater network. In contrast to existing work, which aims to heuristically maximize the entanglement distribution rate (EDR) and/or entanglement fidelity, our goal is to characterize the maximum achievable worst-case fidelity while satisfying a bound on the expected EDR between an arbitrary pair of quantum nodes. This characterization provides fundamental bounds on the achievable performance region of a quantum network, which can assist the design of quantum network topology, protocols, and applications. However, the task is highly non-trivial and, as we prove, NP-hard. Our main contribution is a fully polynomial-time approximation scheme for the achievable worst-case fidelity subject to a strict expected EDR bound, combining an optimal fidelity-agnostic EDR-maximizing formulation with a worst-case isotropic noise model. The EDR and fidelity guarantees can be implemented by a post-selection-and-storage protocol with quantum memories. Using a discrete-time quantum network simulator that we developed, we conduct simulations to show the characterized performance region (the approximate Pareto frontier) of a network, and demonstrate that the designed protocol can achieve this region while existing protocols exhibit a substantial gap.
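As a concrete illustration of the worst-case isotropic noise model (our simplified reading, with an assumed uniform per-link fidelity and noiseless swapping), isotropic (Werner) states compose multiplicatively in the parameter w = (4F - 1)/3 under entanglement swapping, so end-to-end fidelity decays geometrically with hop count:

```python
# Illustrative sketch, not the paper's exact model: end-to-end fidelity of an
# n-hop path when every link holds an isotropic (Werner) state and
# entanglement swapping is noiseless.
def path_fidelity(link_fidelities):
    """Worst-case end-to-end fidelity after swapping isotropic links."""
    w = 1.0
    for F in link_fidelities:
        w *= (4.0 * F - 1.0) / 3.0
    return (3.0 * w + 1.0) / 4.0

# Fidelity decays geometrically with hop count: one source of the
# throughput/fidelity trade-off, since longer (higher-EDR) routes cost fidelity.
for hops in (1, 2, 4, 8):
    print(hops, round(path_fidelity([0.95] * hops), 4))
```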

Graph neural networks (GNNs) are widely used for modeling complex interactions between entities represented as vertices of a graph. Despite recent efforts to theoretically analyze the expressive power of GNNs, a formal characterization of their ability to model interactions is lacking. The current paper aims to address this gap. Formalizing the strength of interactions through an established measure known as separation rank, we quantify the ability of certain GNNs to model interaction between a given subset of vertices and its complement, i.e., between the sides of a given partition of input vertices. Our results reveal that the ability to model interaction is primarily determined by the partition's walk index, a graph-theoretical characteristic defined by the number of walks originating from the boundary of the partition. Experiments with common GNN architectures corroborate this finding. As a practical application of our theory, we design an edge sparsification algorithm named Walk Index Sparsification (WIS), which preserves the ability of a GNN to model interactions when input edges are removed. WIS is simple, computationally efficient, and in our experiments markedly outperforms alternative methods in terms of induced prediction accuracy. More broadly, it showcases the potential of improving GNNs by theoretically analyzing the interactions they can model.
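A hedged sketch of a walk-index-style quantity, under our simplified reading of the definition (counting length-L walks that start at vertices incident to cut-crossing edges):

```python
import numpy as np

def walk_index(A, side, L):
    """A: (n, n) 0/1 adjacency matrix; side: boolean mask of one partition side."""
    side = np.asarray(side, dtype=bool)
    cross = A * (side[:, None] ^ side[None, :])      # edges crossing the cut
    boundary = cross.any(axis=1)                     # endpoints of crossing edges
    walks_from = np.linalg.matrix_power(A, L) @ np.ones(A.shape[0])
    return walks_from[boundary].sum()                # walks starting on the boundary

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])                         # path graph 0-1-2-3
print(walk_index(A, [True, True, False, False], L=2))
```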

We consider a causal inference model in which individuals interact through a social network and may not comply with their assigned treatments. In particular, we suppose that the form of network interference is unknown to researchers. To estimate meaningful causal parameters in this situation, we introduce a new concept of exposure mapping, which summarizes potentially complicated spillover effects into a fixed-dimensional statistic of instrumental variables. We investigate identification conditions for the intention-to-treat effects and the average treatment effects for compliers, while explicitly allowing for the possibility that the exposure mapping is misspecified. Based on our identification results, we develop nonparametric estimation procedures via inverse probability weighting. Their asymptotic properties, including consistency and asymptotic normality, are investigated using an approximate neighborhood interference framework. For an empirical illustration, we apply our method to experimental data from an anti-conflict school intervention program. The proposed methods are readily available with the companion R package latenetwork.
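A stripped-down sketch of the inverse-probability-weighting idea (a generic Horvitz-Thompson contrast at a fixed exposure level; the estimator in latenetwork is more involved and handles the misspecification-robust identification above):

```python
import numpy as np

def ipw_effect(y, z, e, pi1, pi0, e_value):
    """Horvitz-Thompson contrast of mean outcomes at a fixed exposure level.
    y: outcomes; z: 0/1 assigned treatment; e: exposure statistic per unit;
    pi1[i] = P(Z_i = 1, E_i = e_value); pi0[i] = P(Z_i = 0, E_i = e_value)."""
    n = len(y)
    hit = (e == e_value)                         # units at the exposure level
    term1 = np.sum(y * ((z == 1) & hit) / pi1) / n
    term0 = np.sum(y * ((z == 0) & hit) / pi0) / n
    return term1 - term0
```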

Gradient-based first-order convex optimization algorithms find widespread applicability in a variety of domains, including machine learning tasks. Motivated by recent advances in the fixed-time stability theory of continuous-time dynamical systems, we introduce a generalized framework for designing accelerated optimization algorithms with fixed-time convergence guarantees that further extend to a subclass of non-convex functions. In particular, we introduce the GenFlow algorithm and its momentum variant, which provably converge in fixed time to the optimal solution of objective functions satisfying the Polyak-Łojasiewicz (PL) inequality. Moreover, for functions that admit non-degenerate saddle points, we show that for the proposed GenFlow algorithm the time required to escape these saddle points is uniformly bounded for all initial conditions. Finally, for strongly convex-strongly concave minimax problems whose optimal solution is a saddle point, a similar scheme is shown to arrive at the optimal solution, again in a fixed time. The superior convergence properties of our algorithm are validated experimentally on a variety of benchmark datasets.
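For intuition, a hedged sketch of a generic fixed-time gradient flow in the spirit of the paper (not the authors' exact GenFlow dynamics; the constants c1, c2 and exponents p, q are our illustrative choices):

```python
# Forward-Euler discretization of
#   dx/dt = -c1 * g / ||g||^(1-p) - c2 * g / ||g||^(1-q),   g = grad f(x),
# with 0 < p < 1 < q: the q-term gives a bounded travel time from any start,
# the p-term gives finite-time convergence near the optimum.
import numpy as np

def fixed_time_step(x, grad_f, c1=1.0, c2=1.0, p=0.5, q=2.0, dt=0.01):
    g = grad_f(x)
    norm = np.linalg.norm(g) + 1e-12          # avoid division by zero
    return x - dt * (c1 * g / norm**(1 - p) + c2 * g / norm**(1 - q))

# Toy run on f(x) = 0.5 * ||x||^2, which satisfies the PL inequality.
x = np.array([5.0, -3.0])
for _ in range(2000):
    x = fixed_time_step(x, lambda v: v)
print(np.round(x, 3))                         # ≈ [0, 0]
```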

Integrated sensing and communications (ISAC) systems have gained significant interest because of their ability to jointly and efficiently access, utilize, and manage the scarce electromagnetic spectrum. The coexistence approach toward ISAC focuses on receiver processing of overlaid radar and communications signals coming from independent transmitters. A specific ISAC coexistence problem is dual-blind deconvolution (DBD), wherein the transmit signals and channels of both radar and communications are unknown to the receiver. Prior DBD works ignore the evolution of the signal model over time. In this work, we consider a dynamic DBD scenario using a linear state space model (LSSM) such that, apart from the transmit signals and channels of both systems, the LSSM parameters are also unknown. We employ a factor graph representation to model these unknown variables. Rather than the conventional matrix inversion approach, we estimate the unknown variables with an efficient expectation-maximization algorithm, in which each iteration performs Gaussian message passing over the factor graph structure. Numerical experiments demonstrate accurate estimation of the radar and communications channels, including in the presence of noise.
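On a chain-structured LSSM factor graph, forward Gaussian message passing reduces to Kalman-filter updates; below is a minimal sketch of that building block (our illustration, not the paper's full DBD factor graph, which also treats the signals, channels, and LSSM parameters as unknowns):

```python
# Forward Gaussian message passing on x_t = A x_{t-1} + w_t, y_t = C x_t + v_t,
# with process noise cov Q and measurement noise cov R.
import numpy as np

def kalman_forward(y, A, C, Q, R, mu0, P0):
    mu, P, means = mu0, P0, []
    for yt in y:
        mu_p, P_p = A @ mu, A @ P @ A.T + Q            # predict message
        S = C @ P_p @ C.T + R                          # innovation covariance
        K = P_p @ C.T @ np.linalg.inv(S)               # Kalman gain
        mu = mu_p + K @ (yt - C @ mu_p)                # update message
        P = (np.eye(len(mu0)) - K @ C) @ P_p
        means.append(mu)
    return np.array(means)
```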

In multi-rate transmission environments, access point (AP) connection methods have been proposed that maximize system throughput, i.e., the throughput of the entire system, on the basis of cooperative user behavior. These methods derive optimal positions for cooperating users, meaning that new users move to positions that improve system throughput before connecting to an AP. However, the conventional method considers only the transmission rate of new users and ignores existing users, even though the transmission rates of all users must be considered to improve system throughput. Moreover, these methods do not take into account interference between users. In this paper, we propose an AP connection method that maximizes system throughput by considering the interference between users and the initial positions of all users. Evaluations show that the proposed method improves system throughput by up to about 6% compared with conventional methods.
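The throughput effect driving this design can be sketched in a few lines (our toy model, assuming airtime-fair sharing as in 802.11 DCF and ignoring hidden terminals): an AP serving users with heterogeneous PHY rates delivers the harmonic-mean-style total n / (1/r_1 + ... + 1/r_n), so one slow user drags down the whole cell, which is why AP selection must look at all users' rates, not only the new user's.

```python
def ap_throughput(rates):
    # Equal airtime per user: total throughput is n / sum(1/r_i).
    return len(rates) / sum(1.0 / r for r in rates) if rates else 0.0

def system_throughput(assignment):
    """assignment: dict mapping AP id -> list of user PHY rates (Mb/s)."""
    return sum(ap_throughput(rs) for rs in assignment.values())

# A new user can reach AP1 at 54 Mb/s or AP2 at 6 Mb/s; compare the two
# options by *system* throughput rather than by the new user's own rate.
print(system_throughput({1: [54, 6, 54], 2: [54]}))   # new user joins AP1
print(system_throughput({1: [54, 6], 2: [54, 6]}))    # new user joins AP2
```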

Coding schemes for several problems in network information theory are constructed starting from point-to-point channel codes that are designed for symmetric channels. Given that the point-to-point codes satisfy certain properties pertaining to the rate, the error probability, and the distribution of decoded sequences, bounds on the performance of the coding schemes are derived and shown to hold irrespective of other properties of the codes. In particular, we consider the problems of lossless and lossy source coding, Slepian-Wolf coding, Wyner-Ziv coding, Berger-Tung coding, multiple description coding, asymmetric channel coding, Gelfand-Pinsker coding, coding for multiple access channels, Marton coding for broadcast channels, and coding for cloud radio access networks (C-RANs). We show that the coding schemes can achieve the best known inner bounds for these problems, provided that the constituent point-to-point channel codes are rate-optimal. This allows one to leverage commercial off-the-shelf codes for point-to-point symmetric channels in the practical implementation of codes over networks. Simulation results demonstrate the gain of the proposed coding schemes compared to existing practical solutions to these problems.
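For one of the listed problems, here is a self-contained toy of the classical syndrome-binning construction for Slepian-Wolf coding (a textbook technique, not the paper's specific schemes), built from the parity-check matrix of a point-to-point linear code: the encoder sends only the syndrome of the source sequence, and the decoder picks the member of that coset closest to its side information.

```python
import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (minimum distance 3), so a
# single-bit discrepancy between source and side information decodes uniquely.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(x):
    return H @ x % 2                     # 3-bit syndrome instead of 7 bits

def decode(s, y):
    # Brute force: closest sequence to y within the coset of syndrome s.
    best = None
    for cand in itertools.product([0, 1], repeat=H.shape[1]):
        cand = np.array(cand)
        if np.array_equal(H @ cand % 2, s):
            d = int(np.sum(cand != y))
            if best is None or d < best[0]:
                best = (d, cand)
    return best[1]

x = np.array([1, 0, 1, 1, 0, 1, 0])      # source sequence
y = x.copy(); y[4] ^= 1                  # side information: one bit flipped
print(decode(encode(x), y))              # recovers x exactly
```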

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. These methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs; this makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically change the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end fashion, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
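A hedged sketch of the adaptive-propagation idea as we read it (a simplification, not the paper's exact layer): a learned per-edge homophily score in [-1, 1] gates whether a neighbor's message pulls representations together or pushes them apart.

```python
import torch
import torch.nn as nn

class AdaptivePropagation(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)   # homophily degree per edge
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, edge_index):
        src, dst = edge_index                         # (2, E) COO edges
        # Learned homophily degree: +1 pull together, -1 push apart.
        g = torch.tanh(self.score(torch.cat([h[src], h[dst]], dim=-1)))
        msg = g * h[src]                              # signed, gated messages
        agg = torch.zeros_like(h).index_add_(0, dst, msg)
        return torch.relu(self.lin(h + agg))

# Toy usage: 4 nodes, 3 undirected edges given in both directions.
h = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
out = AdaptivePropagation(8)(h, edge_index)
```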

A community reveals features and connections of its members that differ from those in other communities in a network. Detecting communities is of great significance in network analysis. Beyond the classical spectral clustering and statistical inference methods, deep learning techniques for community detection have developed significantly in recent years, with advantages in handling high-dimensional network data. Hence, a comprehensive overview of the latest progress in community detection through deep learning is timely for both academics and practitioners. This survey devises and proposes a new taxonomy covering different categories of state-of-the-art methods, including models based on deep neural networks, deep nonnegative matrix factorization, and deep sparse filtering. The main category, deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks, and autoencoders. The survey also summarizes popular benchmark data sets, model evaluation metrics, and open-source implementations to address experimentation settings. We then discuss the practical applications of community detection in various domains and point to implementation scenarios. Finally, we outline future directions by suggesting challenging topics in this fast-growing deep learning field.

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning domain classifiers in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
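Such adversarial domain classifiers are commonly trained with a gradient reversal layer (GRL, Ganin & Lempitsky); below is a minimal PyTorch sketch of that standard trick (the paper's exact implementation may differ):

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)            # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient's sign: the feature extractor is pushed to
        # *confuse* the domain classifier, aligning the two domains.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: domain_logits = domain_classifier(grad_reverse(features))
```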
