
In globally distributed projects, virtual teams are often partially dispersed. One common setup occurs when several members from one company work with a large outsourcing vendor based in another country. Further, the introduction of the popular BizDevOps concept has increased the necessity to cooperate across departments and reduce the age-old disconnect between business strategy and technical development. Establishing good collaboration in partially distributed BizDevOps teams requires extensive collaboration and communication techniques. Nowadays, a common approach is to rely on collaboration through pull requests and frequent communication on Slack. To investigate barriers to pull requests in distributed teams, we examined an organization located in Scandinavia where cross-functional BizDevOps teams collaborated with off-site team members in India. Data were collected through 14 interviews, 23 full days of observation with the team, and observation of 37 meetings. We found that the pull-request approach worked very well locally but not across sites. We identified barriers such as domain complexity, differing agile processes (timeboxed vs. flow-based development), and employee turnover. Using an intellectual capital lens on our findings, we discuss these barriers and their positive and negative effects on the success of the pull-request approach.

Related Content

Although distributed machine learning has opened up many new and exciting research frontiers, fragmentation of models and data across machines, nodes, and sites still results in considerable communication overhead, impeding reliable training in real-world contexts. The focus on gradients as the primary shared statistic during training has spawned a number of intuitive algorithms for distributed deep learning; however, gradient-centric training of large deep neural networks (DNNs) tends to be communication-heavy, often requiring additional adaptations such as sparsity constraints, compression, and quantization to curtail bandwidth. We introduce a communication-friendly approach for training distributed DNNs that capitalizes on the outer-product structure of the gradient as revealed by the mechanics of auto-differentiation. The exposed structure of the gradient suggests a new class of distributed learning algorithms that is naturally more communication-efficient than full gradient sharing. Our approach, called distributed auto-differentiation (dAD), builds on a combination of rank-based compression and the innate outer-product structure of the gradient. We demonstrate that dAD trains more efficiently than other state-of-the-art distributed methods on modern architectures, such as transformers, when applied to large-scale text and imaging datasets. The future of distributed learning, we conclude, need not be dominated by gradient-centric algorithms.
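
To make the outer-product idea concrete, here is a minimal sketch (not the authors' dAD implementation) showing why sharing the two factors of a linear layer's gradient is cheaper than sharing the full gradient when the batch size is small relative to the layer widths; the sizes below are arbitrary assumptions, and the rank-based compression of the factors is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
B, d_in, d_out = 32, 4096, 2048               # hypothetical batch size and layer widths

x = rng.standard_normal((B, d_in))            # layer inputs on this worker
delta = rng.standard_normal((B, d_out))       # backpropagated errors for this layer

full_grad = delta.T @ x                       # d_out x d_in: what full gradient sharing transmits

# dAD-style alternative: share the two outer-product factors instead and let the
# receiver reconstruct the gradient locally; compressing the factors further
# (as the paper does) reduces the traffic even more.
sent_factors = x.size + delta.size
print(f"factors: {sent_factors} floats vs full gradient: {full_grad.size} floats "
      f"({sent_factors / full_grad.size:.2%} of the traffic)")

reconstructed = delta.T @ x                   # exact reconstruction on the receiving side
assert np.allclose(reconstructed, full_grad)
```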

Traditional industrial systems, e.g., power plants and water treatment plants, were built to operate in a highly isolated and controlled environment. Recently, Industrial Control Systems (ICSs) have been exposed to the Internet for ease of access and adaptation to advanced technologies. However, this exposure creates security vulnerabilities, which attackers often exploit to launch attacks on ICSs. To counter this, threat hunting is performed to proactively monitor the security of ICS networks and protect them against threats that could make the systems malfunction. A threat hunter manually identifies threats and formulates a hypothesis based on the available threat intelligence. In this paper, we highlight the lack of research on automating threat hunting in ICS networks. We propose automated extraction of threat intelligence together with automated generation and validation of hypotheses, and we present an automated threat hunting framework built on the threat intelligence provided by the MITRE ATT&CK for ICS framework. Unlike existing hunting solutions, which are cloud-based, costly, and prone to human error, our solution is centralized and open source, implemented using open-source technologies, e.g., Elasticsearch, Conpot, Metasploit, a Web Single Page Application (SPA), and a machine learning analyser. Our results demonstrate that the proposed threat hunting solution can identify attacks on the network and alert a threat hunter with a hypothesis generated from the tactics, techniques, and procedures (TTPs) in MITRE ATT&CK for ICS. A machine learning classifier then automatically predicts the future actions of the attack.
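
As a purely illustrative sketch of the hypothesis-generation step described above, the snippet below matches observed network events against a tiny hand-made lookup of MITRE ATT&CK for ICS techniques and summarises the matches for a threat hunter. The event format, indicator names, and the technique IDs in the lookup are assumptions for illustration (they should be checked against the live ATT&CK for ICS matrix), and none of the framework's actual components (Elasticsearch, Conpot, Metasploit, the SPA, or the ML analyser) are modelled here.

```python
# Illustrative mapping from network indicators to ATT&CK for ICS techniques.
# IDs and names are placeholders for this sketch; verify against the live matrix.
ICS_TTP_LOOKUP = {
    "modbus_function_scan": ("T0846", "Remote System Discovery"),
    "unauthorized_register_write": ("T0836", "Modify Parameter"),
    "default_credential_login": ("T0812", "Default Credentials"),
}

def generate_hypothesis(events):
    """Summarise events that match known ICS techniques into a hunter-facing hypothesis."""
    matched = [(e, *ICS_TTP_LOOKUP[e["indicator"]])
               for e in events if e["indicator"] in ICS_TTP_LOOKUP]
    if not matched:
        return "No known ATT&CK for ICS technique matched; continue monitoring."
    lines = [f"- {tid} {name} (src={e['src_ip']})" for e, tid, name in matched]
    return "Hypothesis: the source(s) below may be staging an ICS attack:\n" + "\n".join(lines)

events = [  # hypothetical alerts, e.g. parsed from honeypot or IDS logs
    {"indicator": "modbus_function_scan", "src_ip": "10.0.0.23"},
    {"indicator": "unauthorized_register_write", "src_ip": "10.0.0.23"},
]
print(generate_hypothesis(events))
```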

Federated Learning (FL) is a privacy-preserving machine learning scheme in which training happens on data that remains federated across devices and never leaves them, thereby sustaining user privacy. This is achieved by sending untrained or partially trained models directly to the individual devices, training them locally "on-device" using device-owned data, and having the server aggregate the partially trained models to update a global model. Although almost all model learning schemes in the federated learning setup use gradient descent, the non-IID nature of data availability introduces characteristic differences that affect training in comparison to centralized schemes. In this paper, we discuss the various factors that affect federated learning training, both because of the non-IID distribution of the data and because of the inherent differences between the federated learning approach and typical centralized gradient descent techniques. We empirically demonstrate the effect of the number of samples per device and the distribution of output labels on federated learning. In addition to the privacy advantage we seek through federated learning, we also study whether there is a cost advantage in using federated learning frameworks. We show that federated learning does have a cost advantage when the models to be trained are not very large. All in all, we highlight the need for careful model design with respect to both performance and cost.
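
For readers unfamiliar with the training loop sketched above, the following is a minimal FedAvg-style example (synthetic data, arbitrary hyperparameters): each device runs a few local gradient steps on its own data, and the server aggregates the resulting models weighted by the devices' sample counts, so no raw data ever leaves a device.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
true_w = rng.standard_normal(d)               # generating weights for the synthetic data

def make_device(n):
    # Crude non-IID setting: each device draws inputs at a different scale.
    X = rng.standard_normal((n, d)) * rng.uniform(0.5, 2.0)
    return X, X @ true_w + 0.1 * rng.standard_normal(n)

devices = [make_device(n) for n in (20, 200, 50)]       # unequal samples per device

def local_update(w, X, y, lr=0.05, epochs=5):
    for _ in range(epochs):
        w = w - lr * (2 / len(y)) * X.T @ (X @ w - y)   # local full-batch gradient steps
    return w

w_global = np.zeros(d)
for _ in range(100):                                    # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in devices]
    n_k = np.array([len(y) for _, y in devices])
    w_global = np.average(local_ws, axis=0, weights=n_k)  # FedAvg: weight by sample count

print("distance to the generating weights:", float(np.linalg.norm(w_global - true_w)))
```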

Decentralized optimization is gaining increased traction due to its widespread applications in large-scale machine learning and multi-agent systems. However, the same mechanism that enables its success, i.e., information sharing among participating agents, also leads to the disclosure of individual agents' private information, which is unacceptable when sensitive data are involved. As differential privacy is becoming a de facto standard for privacy preservation, results have recently emerged that integrate differential privacy with distributed optimization. Although such differential-privacy-based approaches are efficient in both computation and communication, directly incorporating differential-privacy designs into existing distributed optimization approaches significantly compromises optimization accuracy. In this paper, we redesign and tailor gradient methods for differentially private distributed optimization, proposing two differential-privacy-oriented gradient methods that can ensure both privacy and optimality. We prove that the proposed distributed algorithms ensure almost sure convergence to an optimal solution under any persistent and variance-bounded differential-privacy noise, which, to the best of our knowledge, has not been reported before. The first algorithm is based on static-consensus gradient methods and shares only one variable in each iteration. The second algorithm is based on dynamic-consensus (gradient-tracking) distributed optimization methods and is hence applicable to general directed interaction graph topologies. Numerical comparisons with existing counterparts confirm the effectiveness of the proposed approaches.
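
The snippet below is a hedged, generic illustration of the underlying mechanism (not the paper's two algorithms): each agent perturbs the variable it shares with privacy noise, mixes the noisy values from its neighbours with a diminishing consensus weight, and takes a local gradient step with a faster-decaying stepsize, so the persistent noise is progressively attenuated. The quadratic objectives, graph, noise scale, and stepsize schedules are assumptions chosen only to make the sketch run.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 2
W = np.full((n, n), 1.0 / n)                  # complete graph with uniform mixing weights
targets = rng.standard_normal((n, d))         # local objectives f_i(x) = 0.5 * ||x - targets[i]||^2
x = rng.standard_normal((n, d))

for k in range(1, 2001):
    gamma = k ** -0.6                         # diminishing consensus weight (attenuates the shared noise)
    alpha = 1.0 / k                           # faster-decaying gradient stepsize
    noise = rng.laplace(scale=0.05, size=(n, d))
    shared = x + noise                        # each agent only ever reveals a noisy value
    x = x + gamma * (W @ shared - x) - alpha * (x - targets)

optimum = targets.mean(axis=0)                # minimiser of the sum of the f_i
# Convergence under persistent noise is slow; this only shows the mechanism.
print("max |x_i - optimum|:", float(np.abs(x - optimum).max()))
```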

This paper considers the distributed convex optimization problem over a multi-agent system, in which each agent possesses its own cost function and communicates with its neighbours over a sequence of time-varying directed graphs. Communication delays may occur when agents receive information from other agents, and we seek the optimal value of the sum of the agents' loss functions in this setting. We address this problem with the push-sum distributed dual averaging (PS-DDA) algorithm. We prove that the algorithm converges and that the error decays at a rate $\mathcal{O}\left(T^{-0.5}\right)$ with a proper step size, where $T$ is the number of iterations. The main result also shows that the convergence of the proposed algorithm depends on the maximum communication delay over a single edge. Finally, we apply the theoretical results in numerical simulations that demonstrate the PS-DDA algorithm's performance.
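
For intuition, here is a minimal sketch of one common push-sum dual averaging variant on a small directed ring, without the communication delays analysed in the paper; the objectives, graph, constraint box, and $1/\sqrt{t}$ stepsize are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
targets = np.array([1.0, -2.0, 3.0, 0.5])     # f_i(x) = 0.5 * (x - targets[i])^2, x constrained to [-5, 5]
out_deg = 2                                   # directed ring: each node sends to itself and its successor

z = np.zeros(n)                               # dual (accumulated-gradient) variables
w = np.ones(n)                                # push-sum weights correcting for the directed graph
x = np.zeros(n)

T = 5000
for t in range(1, T + 1):
    g = x - targets                           # local gradients at the current iterates
    z_out, w_out = z / out_deg, w / out_deg   # each node splits its values among out-neighbours
    z = z_out + np.roll(z_out, 1) + g         # receive from self and ring predecessor, add local gradient
    w = w_out + np.roll(w_out, 1)
    step = 1.0 / np.sqrt(t)                   # O(1/sqrt(t)) stepsize, matching the stated rate
    x = np.clip(-step * z / w, -5.0, 5.0)     # dual-averaging projection onto the box

print("node estimates:", np.round(x, 3), "  optimum:", targets.mean())
```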

Learning the similarity between structured data, especially graphs, is an essential problem. Besides approaches such as graph kernels, the Gromov-Wasserstein (GW) distance has recently drawn considerable attention due to its flexibility in capturing both topological and feature characteristics, as well as its handling of permutation invariance. However, structured data are widely distributed across different data mining and machine learning applications, and with privacy concerns, access to the decentralized data is limited to individual clients or separate silos. To tackle these issues, we propose a privacy-preserving framework that analyzes the GW discrepancy of node embeddings learned locally by graph neural networks in a federated flavor, and then explicitly applies local differential privacy (LDP) based on a Multi-bit Encoder to protect sensitive information. Our experiments show that, with the strong privacy protection guaranteed by the $\varepsilon$-LDP algorithm, the proposed framework not only preserves privacy in graph learning but also provides a noised structural metric under the GW distance, yielding comparable and even better performance in classification and clustering tasks. Moreover, we explain the rationale behind the LDP-based GW distance both analytically and empirically.
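
The Multi-bit Encoder itself is not reproduced here; as a simplified stand-in, the sketch below perturbs each embedding coordinate with a single-bit $\varepsilon$-LDP randomized-response mechanism and debiases the result, which is the kind of noisy-but-unbiased embedding on which a server could then compute GW discrepancies. The embedding values, dimensions, and privacy-budget split are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_bit_ldp(values, eps):
    """Perturb values in [-1, 1] with a 1-bit eps-LDP randomized-response mechanism
    per entry and return unbiased (but noisy) estimates of the originals."""
    u = (np.clip(values, -1.0, 1.0) + 1.0) / 2.0               # rescale to [0, 1]
    p = (u * (np.exp(eps) - 1.0) + 1.0) / (np.exp(eps) + 1.0)  # P(bit = 1), bounded away from 0 and 1
    bits = rng.random(values.shape) < p
    u_hat = (bits * (np.exp(eps) + 1.0) - 1.0) / (np.exp(eps) - 1.0)  # debias: E[u_hat] = u
    return 2.0 * u_hat - 1.0

# A client's locally learned node embeddings (synthetic stand-ins), perturbed before
# leaving the device; a server would compute GW discrepancies on the noisy embeddings.
embeddings = np.tanh(rng.standard_normal((50, 16)))            # 50 nodes, 16 dims, values in (-1, 1)
eps_per_dim = 4.0 / embeddings.shape[1]                        # naive split of a total budget of 4
noisy = one_bit_ldp(embeddings, eps_per_dim)
print("mean absolute estimation error:", round(float(np.abs(noisy - embeddings).mean()), 3))
```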

Multiplying matrices is among the most fundamental and compute-intensive operations in machine learning. Consequently, there has been significant work on efficiently approximating matrix multiplies. We introduce a learning-based algorithm for this task that greatly outperforms existing methods. Experiments using hundreds of matrices from diverse domains show that it often runs $100\times$ faster than exact matrix products and $10\times$ faster than current approximate methods. In the common case that one matrix is known ahead of time, our method also has the interesting property that it requires zero multiply-adds. These results suggest that a mixture of hashing, averaging, and byte shuffling (the core operations of our method) could be a more promising building block for machine learning than the sparsified, factorized, and/or scalar-quantized matrix products that have recently been the focus of substantial research and hardware investment.
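
As a rough sketch of the hash-and-lookup idea (not the authors' learned hash function), the example below uses naive product-quantization-style encoding: subvectors of A are mapped to their nearest prototypes, dot products between prototypes and the known matrix B are precomputed into tables, and the product is approximated by summing table lookups, with no multiply-adds involving A at query time. Prototypes here are just random training rows, so the accuracy is deliberately crude; learning good encodings cheaply is precisely the paper's contribution.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, M = 512, 64, 32          # A is N x D (arrives at runtime), B is D x M (known ahead of time)
C, K = 8, 16                   # C codebooks over D/C-dim subspaces, K prototypes each
sub = D // C

A = rng.standard_normal((N, D))
B = rng.standard_normal((D, M))

# "Training": pick prototypes per subspace from a training set (random rows here;
# the real method learns a fast hash function instead of nearest-prototype search).
A_train = rng.standard_normal((4096, D))
protos = np.stack([
    A_train[rng.choice(4096, K, replace=False), c * sub:(c + 1) * sub]
    for c in range(C)
])                                                           # C x K x sub

# Precompute lookup tables of prototype-vs-B dot products (possible because B is known):
# tables[c, k, m] = protos[c, k] @ B[c*sub:(c+1)*sub, m]
tables = np.einsum('cks,csm->ckm', protos, B.reshape(C, sub, M))

# Encoding: map each subvector of A to its nearest prototype (one small code per subspace).
codes = np.stack([
    np.argmin(((A[:, c * sub:(c + 1) * sub, None] - protos[c].T[None]) ** 2).sum(axis=1), axis=1)
    for c in range(C)
], axis=1)                                                   # N x C integer codes

# Approximate product: sum the table entries selected by the codes.
approx = sum(tables[c, codes[:, c]] for c in range(C))       # N x M
exact = A @ B
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print("relative Frobenius error (crude, unlearned prototypes):", round(float(err), 3))
```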

In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now be run on mobile devices. To take advantage of the resources available on mobile devices and to preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and only uploads computation results, rather than the original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers but also protects users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.

The Alternating Direction Method of Multipliers (ADMM) is a widely used tool for machine learning in distributed settings, where a machine learning model is trained over distributed data sources through an iterative process of local computation and message passing. Such an iterative process can raise privacy concerns for data owners. The goal of this paper is to provide differential privacy for ADMM-based distributed machine learning. Prior approaches to differentially private ADMM exhibit low utility under high privacy guarantees and often assume that the objective functions of the learning problems are smooth and strongly convex. To address these concerns, we propose a novel differentially private ADMM-based distributed learning algorithm called DP-ADMM, which combines an approximate augmented Lagrangian function with time-varying Gaussian noise addition in the iterative process to achieve higher utility for general objective functions under the same differential privacy guarantee. We also apply the moments accountant method to bound the end-to-end privacy loss. The theoretical analysis shows that DP-ADMM can be applied to a wider class of distributed learning problems, is provably convergent, and offers an explicit utility-privacy tradeoff. To our knowledge, this is the first paper to provide explicit convergence and utility guarantees for differentially private ADMM-based distributed learning algorithms. The evaluation results demonstrate that our approach achieves good convergence and model accuracy under a high end-to-end differential privacy guarantee.
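
To illustrate the general shape of such an algorithm (a hedged sketch, not DP-ADMM itself, which additionally uses an approximate augmented Lagrangian and the moments accountant), the example below runs consensus ADMM on local least-squares problems and adds time-varying Gaussian noise to each agent's update before it is shared; the problem sizes, penalty parameter, and noise schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, d, rho = 5, 8, 1.0
true_x = rng.standard_normal(d)
data = []
for _ in range(n_agents):                       # local problems f_i(x) = 0.5 * ||A_i x - b_i||^2
    A = rng.standard_normal((40, d))
    data.append((A, A @ true_x + 0.1 * rng.standard_normal(40)))

z = np.zeros(d)                                 # global consensus variable
u = np.zeros((n_agents, d))                     # scaled dual variables

for k in range(1, 101):
    sigma = 0.5 / k                             # time-varying (decaying) Gaussian noise scale
    x_shared = np.zeros((n_agents, d))
    for i, (A, b) in enumerate(data):
        # Exact closed-form local x-update (DP-ADMM instead linearises this step).
        x_i = np.linalg.solve(A.T @ A + rho * np.eye(d), A.T @ b + rho * (z - u[i]))
        x_shared[i] = x_i + sigma * rng.standard_normal(d)   # perturb before sharing
    z = (x_shared + u).mean(axis=0)             # aggregator's z-update on the noisy values
    u = u + x_shared - z                        # dual update

print("distance to the generating weights:", float(np.linalg.norm(z - true_x)))
```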

The pervasive use of social media provides massive data about individuals' online social activities and their social relations. The building block of most existing recommendation systems is the similarity between users with social relations, i.e., friends. While friendship ensures some homophily, the similarity of a user to her friends can vary as the number of friends increases. Research from sociology suggests that friends are more similar than strangers, but friends can still have different interests. Exogenous information such as comments and ratings may help discern different degrees of agreement (i.e., congruity) among similar users. In this paper, we investigate whether users' congruity can be incorporated into recommendation systems to improve their performance. Experimental results demonstrate the effectiveness of embedding congruity-related information into recommendation systems.
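
The abstract does not specify the recommendation model, so the following is only a speculative illustration of one way congruity-like information could enter a recommender: a matrix-factorization model whose social regularizer pulls friends' latent factors together in proportion to a congruity score computed, as a stand-in for comment and rating signals, from their rating agreement. All model choices, names, and hyperparameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 30, 40, 5
R = (rng.random((n_users, n_items)) < 0.1) * rng.integers(1, 6, (n_users, n_items))  # sparse ratings
friends = [(i, (i + 1) % n_users) for i in range(n_users)]                           # toy friendship pairs

def congruity(u, v):
    """Illustrative congruity score: agreement on co-rated items (a stand-in for
    the exogenous comment/rating signals the abstract refers to)."""
    both = (R[u] > 0) & (R[v] > 0)
    if not both.any():
        return 0.0
    return float(np.exp(-np.abs(R[u, both] - R[v, both]).mean()))

P = 0.1 * rng.standard_normal((n_users, k))    # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))    # item latent factors
lr, reg, social = 0.01, 0.05, 0.1

for epoch in range(50):
    for u, i in zip(*np.nonzero(R)):           # SGD over observed ratings
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])
    for u, v in friends:                       # pull friends together in proportion to congruity
        c = congruity(u, v)
        P[u] -= lr * social * c * (P[u] - P[v])
        P[v] -= lr * social * c * (P[v] - P[u])

mask = R > 0
rmse = np.sqrt((((P @ Q.T - R) ** 2)[mask]).mean())
print("training RMSE on observed ratings:", round(float(rmse), 3))
```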
