
Learning the relationships between various entities from time-series data is essential in many applications. Gaussian graphical models have been studied to infer these relationships. However, existing algorithms process data in a batch at a central location, limiting their applicability in scenarios where data is gathered by different agents. In this paper, we propose a distributed sparse inverse covariance algorithm to learn the network structure (i.e., dependencies among observed entities) in real time from data collected by distributed agents. Our approach is built on an online graphical alternating minimization algorithm, augmented with a consensus term that allows agents to learn the desired structure cooperatively. We allow the system designer to select the number of communication rounds and optimization steps per data point. We characterize the rate of convergence of our algorithm and provide simulations on synthetic datasets.
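
To make the ingredients concrete, below is a minimal NumPy sketch, not the authors' algorithm, of one way to interleave a proximal-gradient step on each agent's local graphical-lasso objective with a consensus averaging step. The mixing matrix `W`, the step size, the single optimization step per round, and the absence of a positive-definiteness safeguard are all simplifying assumptions.

```python
import numpy as np

def soft_threshold(A, t):
    """Entrywise soft-thresholding (proximal map of the l1 norm)."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def distributed_sparse_precision(samples, W, rounds=200, lr=0.01, lam=0.1):
    """samples[i]: (n_i, d) data held by agent i; W: doubly stochastic
    mixing matrix supported on the communication graph.  Per round, each
    agent takes one proximal-gradient step on its local objective
    tr(S_i Theta) - logdet(Theta) + lam * ||Theta||_1, then averages its
    estimate with its neighbors' (the consensus term)."""
    m, d = len(samples), samples[0].shape[1]
    thetas = [np.eye(d) for _ in range(m)]
    for _ in range(rounds):
        for i in range(m):
            S = samples[i].T @ samples[i] / samples[i].shape[0]
            grad = S - np.linalg.inv(thetas[i])   # gradient of the smooth part
            thetas[i] = soft_threshold(thetas[i] - lr * grad, lr * lam)
        thetas = [sum(W[i, j] * thetas[j] for j in range(m))
                  for i in range(m)]              # consensus averaging
    return thetas
```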

Related Content

We consider the problem of inferring the conditional independence graph (CIG) of a sparse, high-dimensional stationary multivariate Gaussian time series. A sparse-group lasso-based frequency-domain formulation of the problem, based on a frequency-domain sufficient statistic for the observed time series, is presented. We investigate an alternating direction method of multipliers (ADMM) approach for optimizing the sparse-group lasso penalized log-likelihood. We provide sufficient conditions for convergence in the Frobenius norm of the inverse PSD estimators to the true value, jointly across all frequencies, where the number of frequencies is allowed to increase with sample size. This result also yields a rate of convergence. We also empirically investigate selection of the tuning parameters based on the Bayesian information criterion, and illustrate our approach with numerical examples utilizing both synthetic and real data.
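
The building block inside any ADMM (or proximal) iteration for this penalty is the proximal map of the sparse-group lasso. The NumPy sketch below shows its standard closed form, entrywise soft-thresholding followed by groupwise block shrinkage; it illustrates the penalty only and is not the paper's frequency-domain solver.

```python
import numpy as np

def prox_sparse_group_lasso(x, groups, t, lam1, lam2):
    """Proximal map of  lam1 * ||x||_1 + lam2 * sum_g ||x_g||_2  with step
    size t: entrywise soft-thresholding, then block shrinkage per group."""
    out = np.sign(x) * np.maximum(np.abs(x) - t * lam1, 0.0)
    for g in groups:                      # g = index array of one group
        norm = np.linalg.norm(out[g])
        scale = max(0.0, 1.0 - t * lam2 / norm) if norm > 0 else 0.0
        out[g] = scale * out[g]
    return out
```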

We obtain a canonical representation for block matrices. The representation facilitates simple computation of the determinant, the matrix inverse, and other powers of a block matrix, as well as the matrix logarithm and the matrix exponential. These results are particularly useful for block covariance and block correlation matrices, where evaluation of the Gaussian log-likelihood and estimation are greatly simplified. We illustrate this with an empirical application using a large panel of daily asset returns. Moreover, the representation paves new ways to regularize large covariance/correlation matrices, test block structures in matrices, and estimate regressions with many variables.
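
The canonical representation itself is not given in the abstract, but the flavor of the block computations it streamlines can be seen in the classical Schur-complement identities, sketched below in NumPy; the 2x2 block partition and the assumption that the top-left block is invertible are illustrative.

```python
import numpy as np

def block_det_inv(A, B, C, D):
    """Determinant and inverse of M = [[A, B], [C, D]] via the Schur
    complement S = D - C A^{-1} B (assumes A is invertible)."""
    A_inv = np.linalg.inv(A)
    S = D - C @ A_inv @ B                 # Schur complement of A in M
    det = np.linalg.det(A) * np.linalg.det(S)
    S_inv = np.linalg.inv(S)
    top_left = A_inv + A_inv @ B @ S_inv @ C @ A_inv
    inv = np.block([[top_left,            -A_inv @ B @ S_inv],
                    [-S_inv @ C @ A_inv,   S_inv]])
    return det, inv
```

For a block covariance matrix, the same identity splits the log-determinant in the Gaussian log-likelihood into two smaller log-determinants, which is one source of the simplification the paper exploits.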

Community detection for unweighted networks has been widely studied in network analysis, but the case of weighted networks remains a challenge. In this paper, a Distribution-Free Model (DFM) is proposed for networks in which nodes are partitioned into different communities. DFM is a general, interpretable, and identifiable model for both unweighted and weighted networks. The proposed model requires no prior knowledge of a specific distribution for the elements of the adjacency matrix, only of their expected values. The distribution-free property of DFM even allows the adjacency matrix to have negative elements. We develop an efficient spectral algorithm to fit DFM. By introducing a noise matrix, we build a theoretical framework based on perturbation analysis to show that the proposed algorithm stably yields consistent community detection under DFM. Numerical experiments on both synthetic networks and two social networks from the literature illustrate the algorithm.
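
As a point of reference, a generic spectral fitting step, not necessarily the exact algorithm in the paper, looks as follows: take the K leading eigenvectors of the adjacency matrix and cluster the rows. Ranking eigenvalues by magnitude is an assumption made here so the step also makes sense when the adjacency matrix has negative entries, as DFM allows.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_communities(A, K):
    """Cluster rows of the K leading eigenvectors of a symmetric
    (possibly weighted, possibly signed) adjacency matrix A."""
    vals, vecs = np.linalg.eigh(A)
    lead = np.argsort(np.abs(vals))[::-1][:K]   # K largest in magnitude
    return KMeans(n_clusters=K, n_init=10).fit_predict(vecs[:, lead])
```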

We consider the problem of secure distributed matrix computation (SDMC), where a \textit{user} queries a function of data matrices generated at distributed \textit{source} nodes. We assume the availability of $N$ honest but curious computation servers, which are connected to the sources, the user, and each other through orthogonal and reliable communication links. Our goal is to minimize the amount of data that must be transmitted from the sources to the servers, called the \textit{upload cost}, while guaranteeing that no $T$ colluding servers can learn any information about the source matrices, and the user cannot learn any information beyond the computation result. We first focus on secure distributed matrix multiplication (SDMM), considering two matrices, and propose a novel polynomial coding scheme using the properties of finite field discrete Fourier transform, which achieves an upload cost significantly lower than the existing results in the literature. We then generalize the proposed scheme to include straggler mitigation, and to the multiplication of multiple matrices while keeping the input matrices, the intermediate computation results, as well as the final result secure against any $T$ colluding servers. We also consider a special case, called computation with own data, where the data matrices used for computation belong to the user. In this case, we drop the security requirement against the user, and show that the proposed scheme achieves the minimal upload cost. We then propose methods for performing other common matrix computations securely on distributed servers, including changing the parameters of secret sharing, matrix transpose, matrix exponentiation, solving a linear system, and matrix inversion, which are then used to show how arbitrary matrix polynomials can be computed securely on distributed servers using the proposed procedure.
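
For intuition about polynomial-coded SDMM, here is a toy NumPy sketch for two matrices, $T=1$ collusion, and three servers; the small prime field, the evaluation points, and the Shamir-style linear shares are illustrative choices, and the paper's DFT-based scheme achieves a strictly lower upload cost than this baseline.

```python
import numpy as np

P = 65_537   # small prime field so int64 matmuls cannot overflow (illustrative)

def shares(M, evals, rng):
    """T=1 Shamir-style sharing of matrix M over GF(P): the share for
    point x is f(x) = M + R*x mod P with a uniformly random mask R, so
    any single server sees a uniformly random matrix."""
    R = rng.integers(0, P, size=M.shape)
    return [(M + R * x) % P for x in evals]

def secure_matmul(A, B, rng=np.random.default_rng(0)):
    """Toy SDMM: each of 3 servers multiplies its two shares, giving
    evaluations of the degree-2 polynomial h(x) = f(x) g(x); the user
    recovers A @ B = h(0) (mod P) by Lagrange interpolation at x = 0.
    A and B are integer matrices with entries in GF(P)."""
    evals = [1, 2, 3]
    prods = [(fa @ fb) % P for fa, fb in zip(shares(A, evals, rng),
                                             shares(B, evals, rng))]
    result = 0
    for i, xi in enumerate(evals):
        c = 1                              # Lagrange coefficient L_i(0)
        for j, xj in enumerate(evals):
            if j != i:
                c = c * xj % P * pow(xj - xi, -1, P) % P
        result = (result + c * prods[i]) % P
    return result
```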

This paper introduces the R package drpop to flexibly estimate total population size from incomplete lists. Total population estimation, also called capture-recapture, is an important problem in many biological and social sciences. A typical dataset consists of incomplete lists of individuals from the population of interest along with some covariate information. The goal is to estimate the number of unobserved individuals and, equivalently, the total population size. drpop flexibly models heterogeneity using the covariate information, under the assumption that two lists are conditionally independent given covariates. This can be a much weaker assumption than the full marginal independence often required by classical methods. Moreover, it can incorporate complex and high-dimensional covariates, and does not require parametric models, unlike other popular methods. In particular, our estimator is doubly robust and has fast convergence rates even under flexible nonparametric setups. drpop lets the user choose the model for estimating intermediate parameters and returns the estimated population size, confidence intervals, and other related quantities. In this paper, we illustrate applications of drpop in different scenarios and present some performance summaries.
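
drpop's own API is not reproduced here; as background, the classical covariate-free two-list estimator that the package generalizes can be written in a few lines of Python. The conditional-independence-given-covariates assumption in drpop replaces the marginal independence this simple estimator needs.

```python
def lincoln_petersen(list1, list2):
    """Classical two-list estimate N = n1 * n2 / m, where m is the size of
    the overlap; requires marginal independence of the two lists, which
    drpop weakens to conditional independence given covariates."""
    s1, s2 = set(list1), set(list2)
    m = len(s1 & s2)
    if m == 0:
        raise ValueError("no overlap between lists: N is not identified")
    return len(s1) * len(s2) / m

# lincoln_petersen([1, 2, 3, 4], [3, 4, 5, 6])  ->  4 * 4 / 2 = 8.0
```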

We study sparse linear regression over a network of agents, modeled as an undirected graph (with no centralized node). The estimation problem is formulated as the minimization of the sum of the local LASSO loss functions plus a quadratic penalty on the consensus constraint, the latter being instrumental in obtaining distributed solution methods. While penalty-based consensus methods have been extensively studied in the optimization literature, their statistical and computational guarantees in the high-dimensional setting remain unclear. This work provides an answer to this open problem. Our contribution is two-fold. First, we establish statistical consistency of the estimator: under a suitable choice of the penalty parameter, the optimal solution of the penalized problem achieves the near-optimal minimax rate $\mathcal{O}(s \log d/N)$ in $\ell_2$-loss, where $s$ is the sparsity value, $d$ is the ambient dimension, and $N$ is the total sample size in the network; this matches centralized sample rates. Second, we show that the proximal-gradient algorithm applied to the penalized problem, which naturally leads to distributed implementations, converges linearly up to a tolerance of the order of the centralized statistical error, with a rate that scales as $\mathcal{O}(d)$, revealing an unavoidable speed-accuracy dilemma. Numerical results demonstrate the tightness of the derived sample-rate and convergence-rate scalings.
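
A minimal sketch of the penalized formulation and the proximal-gradient iteration, under the assumption that the consensus penalty is written through the graph Laplacian `L` of the agent network, with illustrative step size and iteration count rather than the paper's exact tuning:

```python
import numpy as np

def soft(v, t):
    """Entrywise soft-thresholding (proximal map of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def penalized_distributed_lasso(Xs, ys, L, lam, rho, lr=0.01, iters=500):
    """Proximal gradient on
        sum_i [ ||y_i - X_i th_i||^2 / (2 n_i) + lam * ||th_i||_1 ]
              + (rho/2) * tr(th^T L th),
    where row i of th is agent i's local estimate.  Each agent needs only
    its local data and its neighbors' current iterates (via L @ th)."""
    m, d = len(Xs), Xs[0].shape[1]
    th = np.zeros((m, d))
    for _ in range(iters):
        grads = np.stack([Xs[i].T @ (Xs[i] @ th[i] - ys[i]) / len(ys[i])
                          for i in range(m)])
        grads += rho * (L @ th)           # gradient of the consensus penalty
        th = soft(th - lr * grads, lr * lam)
    return th
```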

In this work, we study the multi-agent decision problem in which agents try to coordinate to optimize a given system-level objective. While solving for the global optimum is intractable in many cases, the greedy algorithm is a well-studied and efficient way to obtain good approximate solutions, notably for submodular optimization problems. Executing the greedy algorithm requires the agents to be ordered, with each agent performing a local optimization based on the solutions of the previous agents. However, in limited-information settings, passing the solution from the previous agents may be nontrivial, as some agents may not be able to communicate with each other directly. Thus, the communication time required to execute the greedy algorithm is closely tied to the order in which the agents are processed. In this work, we characterize the interplay between communication complexity and agent orderings by showing that the complexity under the best ordering is $O(n)$ and increases considerably to $O(n^2)$ under the worst ordering. Motivated by this, we also propose an algorithm that can find an ordering and execute the greedy algorithm quickly, in a distributed fashion. We also show that such an execution of the greedy algorithm is advantageous over current methods for distributed submodular maximization.
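
The sequential structure that drives the communication cost is easy to see in a sketch of the greedy step itself (generic Python, with hypothetical `choices` and `value` callables): each agent's local optimization needs the picks of every earlier agent in the ordering, so those picks must be relayed along whatever paths the communication network provides.

```python
def sequential_greedy(agents, choices, value):
    """agents: the ordering; choices[i]: agent i's feasible actions;
    value(picks): system-level objective on a partial assignment.
    Agent i greedily picks the action with the best marginal gain
    given the picks of all earlier agents."""
    picked = []
    for i in agents:
        best = max(choices[i], key=lambda a: value(picked + [(i, a)]))
        picked.append((i, best))
    return picked
```

For monotone submodular objectives, this style of ordered greedy enjoys well-known constant-factor approximation guarantees, which is why the ordering, rather than solution quality, becomes the communication bottleneck.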

The aim of this work is to develop a fully distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion for designing the communication topology between agents so that it matches the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
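
As a small illustration of the inference side, here is one GCN layer from a single agent's viewpoint, under the standard propagation rule relu(A_hat @ H @ W); how the neighbor features in `H_needed` are gathered over the sparse topology, and how the shared weights `W` are updated, are exactly what the paper's distributed procedure specifies and are not reproduced here.

```python
import numpy as np

def local_gcn_layer(A_block, H_needed, W):
    """One GCN layer computed by one agent.  A_block: rows of the
    normalized adjacency for the agent's own nodes, restricted to the
    columns of nodes whose current features the agent holds (its own,
    plus those received from neighboring agents, stacked in H_needed).
    W is the weight matrix shared across agents."""
    return np.maximum(A_block @ H_needed @ W, 0.0)
```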

Network embedding aims to learn latent, low-dimensional vector representations of network nodes that are effective in supporting various network analytic tasks. While prior work on network embedding focuses primarily on preserving network topology to learn node representations, recently proposed attributed network embedding algorithms attempt to integrate rich node content information with the network topological structure to enhance the quality of network embedding. In reality, networks often have sparse content, incomplete node attributes, and a discrepancy between the node attribute feature space and the network structure space, which severely deteriorates the performance of existing methods. In this paper, we propose a unified framework for attributed network embedding, attri2vec, that learns node embeddings by discovering a latent node attribute subspace via a network-structure-guided transformation of the original attribute space. The resulting latent subspace respects the network structure more consistently, yielding high-quality node representations. We formulate an optimization problem that is solved by an efficient stochastic gradient descent algorithm with time complexity linear in the number of nodes. We investigate a series of linear and non-linear transformations of node attributes and empirically validate their effectiveness on various types of networks. Another advantage of attri2vec is its ability to solve out-of-sample problems, where embeddings of newly arriving nodes can be inferred from their attributes through the learned mapping function. Experiments on various types of networks confirm that attri2vec is superior to state-of-the-art baselines for node classification, node clustering, and out-of-sample link prediction tasks. The source code of this paper is available at //github.com/daokunzhang/attri2vec.
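
A heavily simplified, linear variant of the idea can be sketched as follows; the logistic link-prediction loss with one negative sample, the SGD schedule, and the purely linear transform `U` are assumptions of this illustration rather than the paper's exact objective.

```python
import numpy as np

def attri2vec_sketch(X, edges, dim=32, lr=0.025, epochs=5, neg=1,
                     rng=np.random.default_rng(0)):
    """Embeddings as a learned transform of attributes, z_i = X[i] @ U,
    trained so that linked node pairs score higher than random pairs
    (logistic loss with negative sampling)."""
    n, f = X.shape
    U = rng.normal(scale=0.1, size=(f, dim))
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    for _ in range(epochs):
        for i, j in edges:
            for k, label in [(j, 1.0)] + [(int(rng.integers(n)), 0.0)
                                          for _ in range(neg)]:
                zi, zk = X[i] @ U, X[k] @ U
                g = sig(zi @ zk) - label          # d(loss)/d(score)
                U -= lr * g * (np.outer(X[i], zk) + np.outer(X[k], zi))
    return X @ U, U
```

Because the embedding is a function of attributes, a newly arriving node's embedding is simply `x_new @ U`, which is the out-of-sample property noted above.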

In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improving the condition numbers.
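
For reference, the centralized recursion in question, Nesterov's accelerated gradient with the constant momentum coefficient used in the smooth, strongly convex regime, is sketched below; in the decentralized setting the same recursion is run on the dual problem, where each gradient evaluation reduces to local computation plus one round of neighbor communication.

```python
import numpy as np

def nesterov(grad, x0, L, mu, iters=1000):
    """Accelerated gradient for an L-smooth, mu-strongly convex function."""
    x = y = np.asarray(x0, dtype=float)
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    for _ in range(iters):
        x_next = y - grad(y) / L          # gradient step from look-ahead point
        y = x_next + beta * (x_next - x)  # momentum extrapolation
        x = x_next
    return x
```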
