The internet's key points of global control lie in the hands of a few actors, primarily private organizations based in the United States. These control points, as they exist today, pose structural risks to the global internet's long-term stability. I argue that the problem is not that these control points exist, but that there is no popular governance over them. I advocate for a localist approach to internet governance: small internets deployed at municipal scale, interoperating selectively and carefully with this internet and with one another.

Related Content

Recent developments in advanced Human-Vehicle Interaction rely on the concept of the Internet of Vehicles (IoV) to achieve large-scale communication and synchronization of data in practice. IoV closely resembles a distributed system: each vehicle is a node, and all nodes are coordinated by a centralized server. Data privacy is therefore a significant concern, since every vehicle collects, processes, and shares personal statistics (e.g., multi-modal data and driving status). It is thus important to understand how well modern privacy-preserving techniques suit IoV. We present the most comprehensive study to date characterizing modern privacy-preserving techniques for IoV. We focus on Differential Privacy (DP), a representative family of mechanisms with mathematical guarantees for both privacy-preserving processing and sharing of sensitive data. The purpose of our study is to demystify the trade-offs, in terms of service quality, of deploying DP techniques. We first characterize representative privacy-preserving processing mechanisms enabled by advanced DP approaches. We then perform a detailed study of an emerging in-vehicle, deep-neural-network-driven application and examine the upsides and downsides of DP for diverse types of data streams. Our study yields 11 key findings, from which we highlight the five most significant observations. We conclude that enabling privacy-preserving IoV with low overhead in service quality presents a wealth of challenges and opportunities for future work.
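
For concreteness, here is a minimal sketch of the Laplace mechanism, the canonical DP primitive that such approaches build on; the telemetry field, sensitivity, and epsilon values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    """Release `value` with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privatize a vehicle's reported speed (km/h) before sharing it
# with the central IoV server. Sensitivity and epsilon are assumed values.
true_speed = 87.4
private_speed = laplace_mechanism(true_speed, sensitivity=1.0, epsilon=0.5)
print(f"reported speed: {private_speed:.1f} km/h")
```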

As more and more autonomous vehicles (AVs) are deployed on public roads, designing socially compatible behaviors for them is becoming increasingly important. To generate safe and efficient actions, AVs need not only to predict the future behaviors of other traffic participants, but also to be aware of the uncertainties associated with such behavior prediction. In this paper, we propose an uncertainty-aware integrated prediction and planning (UAPP) framework. It allows AVs to infer the characteristics of other road users online and to generate behaviors optimizing not only their own rewards, but also their courtesy to others and their confidence regarding prediction uncertainties. We first propose definitions for courtesy and confidence, and then explore their influence on the behavior of AVs in interactive driving scenarios. Moreover, we evaluate the proposed algorithm on naturalistic human driving data by comparing the generated behavior against ground truth. Results show that online inference can significantly improve the human-likeness of the generated behaviors. Furthermore, we find that human drivers show great courtesy to others, even toward those without the right-of-way. We also find that such driving preferences vary significantly across cultures.
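
As a rough illustration of how such an objective might be composed, the sketch below combines an ego reward, a courtesy term, and a confidence penalty; the weighting scheme and all names are assumptions, not the paper's actual formulation.

```python
def total_reward(ego_reward: float,
                 other_reward: float,
                 prediction_variance: float,
                 courtesy_weight: float = 0.5,
                 confidence_weight: float = 0.2) -> float:
    """Combine the AV's own reward with a courtesy term (reward accruing to
    other road users) and a penalty for acting under high prediction
    uncertainty. All weights here are hypothetical."""
    courtesy = courtesy_weight * other_reward
    confidence_penalty = confidence_weight * prediction_variance
    return ego_reward + courtesy - confidence_penalty

# An assertive maneuver that inconveniences another driver and rests on an
# uncertain prediction (0.42) scores below a patient alternative (0.58).
print(total_reward(ego_reward=1.0, other_reward=-0.8, prediction_variance=0.9))
print(total_reward(ego_reward=0.6, other_reward=0.0, prediction_variance=0.1))
```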

It is argued that all model-based approaches to the selection of covariates in linear regression have failed. This applies to frequentist approaches based on P-values and to Bayesian approaches, although for different reasons. In the first part of the paper, 13 model-based procedures are compared with the model-free Gaussian covariate procedure in terms of the covariates selected and the time required. The comparison is based on four data sets and two simulations; there is nothing special about these data sets, which are often used as examples in the literature. All the model-based procedures failed. In the second part of the paper, it is argued that the cause of this failure is the very use of a model. If the model involves all the available covariates, standard P-values can be used, and their use in this situation is quite straightforward. As soon as the model specifies only some unknown subset of the covariates, the problem being to identify this subset, the situation changes radically: there are many P-values, they are dependent, and most of them are invalid. The Bayesian paradigm also assumes a correct model, and although a large number of covariates poses no conceptual problems, it carries considerable overhead, causing computational and memory-allocation problems even for moderately sized data sets. The Gaussian covariate procedure is based on P-values defined as the probability that a random Gaussian covariate is better than the covariate under consideration. These P-values are exact and valid in every situation. The memory requirements and algorithmic complexity are both linear in the size of the data, making the procedure capable of handling large data sets. It outperforms all the other procedures in every respect.
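
To make the key definition concrete, here is a Monte Carlo sketch of the Gaussian covariate P-value for a single candidate covariate, approximated as the fraction of random Gaussian covariates that fit the response at least as well; the paper computes this P-value exactly, so the simulation is only illustrative.

```python
import numpy as np

def gaussian_covariate_pvalue(y: np.ndarray, x: np.ndarray,
                              n_sim: int = 10_000) -> float:
    """Fraction of random Gaussian covariates that explain y at least as
    well as x, using |correlation| as the (single-covariate) fit measure."""
    rng = np.random.default_rng(0)
    def strength(v):
        return abs(np.corrcoef(y, v)[0, 1])
    candidate = strength(x)
    hits = sum(strength(rng.standard_normal(len(y))) >= candidate
               for _ in range(n_sim))
    return hits / n_sim

# Example: x is weakly but genuinely related to y, so the P-value is small.
rng = np.random.default_rng(1)
y = rng.standard_normal(100)
x = 0.3 * y + rng.standard_normal(100)
print(gaussian_covariate_pvalue(y, x))
```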

Traffic classification, i.e., identifying the type of application flowing in a network, is a strategic task for numerous activities (e.g., intrusion detection, routing). This task faces critical challenges that current deep learning approaches do not address. Current approaches are not designed around the fact that networking hardware (e.g., routers) often runs with limited computational resources. Further, they do not meet the need for faithful explainability highlighted by regulatory bodies. Finally, these traffic classifiers are evaluated on small datasets that fail to reflect the diversity of applications in real commercial settings. This paper therefore introduces a Lightweight, Efficient and eXplainable-by-design convolutional neural network (LEXNet) for Internet traffic classification, which relies on a new residual block (for lightweightness and efficiency) and a prototype layer (for explainability). On a commercial-grade dataset, our evaluation shows that LEXNet maintains the same accuracy as the best-performing state-of-the-art neural network while providing the additional features mentioned above. Moreover, we demonstrate that LEXNet significantly reduces model size and inference time compared to state-of-the-art neural networks with explainability-by-design and post hoc explainability methods. Finally, we illustrate the explainability of our approach, which stems from communicating the detected application prototypes to the end user, and we highlight the faithfulness of LEXNet's explanations through a comparison with post hoc methods.
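
As a rough sketch of what a prototype layer looks like in general (not LEXNet's actual architecture), the PyTorch module below scores a flow embedding by its similarity to learned prototype vectors, which is what makes the prediction explainable by design; all dimensions are assumed.

```python
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    def __init__(self, n_prototypes: int, feat_dim: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, feat_dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Similarity = negative squared Euclidean distance to each prototype;
        # the closest prototype names the application pattern shown to the user.
        return -torch.cdist(features, self.prototypes) ** 2

layer = PrototypeLayer(n_prototypes=8, feat_dim=64)
scores = layer(torch.randn(4, 64))      # a batch of 4 flow embeddings
print(scores.argmax(dim=1))             # nearest prototype per flow
```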

A large number of current machine learning methods rely upon deep neural networks. Yet, viewing neural networks as nonlinear dynamical systems, it quickly becomes apparent that rigorously establishing the existence of certain patterns generated by the nodes in the network is mathematically extremely difficult. Indeed, it is well understood in the nonlinear dynamics of complex systems that, even in low-dimensional models, analytical pencil-and-paper techniques quickly reach their limits. In this work, we propose a completely different perspective via the paradigm of rigorous numerical methods of nonlinear dynamics. The idea is to use computer-assisted proofs to validate mathematically the existence of nonlinear patterns in neural networks. As a case study, we consider a class of recurrent neural networks, where we prove via computer assistance the existence of several hundred Hopf bifurcation points and their non-degeneracy, and hence also the existence of several hundred periodic orbits. Our paradigm can rigorously verify complex nonlinear behaviour of neural networks, providing a first step toward explaining the full abilities, as well as potential sensitivities, of machine learning methods via computer-assisted proofs.
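
For intuition, the sketch below performs the floating-point detection step that a computer-assisted proof would then make rigorous: locating a parameter value where a complex-conjugate eigenvalue pair of the Jacobian crosses the imaginary axis (the Hopf condition). The system here is the two-dimensional Hopf normal form, a toy example rather than the recurrent networks studied in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def max_real_eig(mu: float) -> float:
    # Jacobian of the Hopf normal form at the origin has eigenvalues mu +/- i,
    # so its leading real part crosses zero exactly at the bifurcation.
    J = np.array([[mu, -1.0], [1.0, mu]])
    return np.linalg.eigvals(J).real.max()

# Bracket the sign change and solve for the crossing; a rigorous method would
# replace this root-finding with interval arithmetic and a non-degeneracy check.
mu_star = brentq(max_real_eig, -1.0, 1.0)
print(f"Hopf bifurcation detected near mu = {mu_star:.6f}")  # ~ 0.0
```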

Graph Convolutional Networks (GCNs) have achieved extraordinary success in learning effective task-specific representations of nodes in graphs. However, regarding Heterogeneous Information Networks (HINs), existing HIN-oriented GCN methods still suffer from two deficiencies: (1) they cannot flexibly explore all possible meta-paths and extract the most useful ones for a target object, which hinders both effectiveness and interpretability; (2) they often need to generate intermediate meta-path-based dense graphs, which leads to high computational complexity. To address these issues, we propose an interpretable and efficient Heterogeneous Graph Convolutional Network (ie-HGCN) to learn the representations of objects in HINs. It is designed as a hierarchical aggregation architecture: object-level aggregation first, followed by type-level aggregation. This architecture automatically extracts useful meta-paths for each object from all possible meta-paths (within a length limit), which provides good model interpretability. It also reduces computational cost by avoiding intermediate HIN transformations and neighborhood attention. We provide a theoretical analysis of ie-HGCN in terms of evaluating the usefulness of all possible meta-paths, its connection to spectral graph convolution on HINs, and its quasi-linear time complexity. Extensive experiments on three real network datasets demonstrate the superiority of ie-HGCN over state-of-the-art methods.
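
A minimal sketch of the hierarchical aggregation idea, assuming mean pooling at the object level and a softmax over types at the type level; these operator choices are illustrative stand-ins, not ie-HGCN's exact definitions.

```python
import torch
import torch.nn.functional as F

def hierarchical_aggregate(per_type_neighbors, type_logits):
    """per_type_neighbors: dict mapping type name -> (n_neighbors, dim) tensor.
    type_logits: one learnable score per neighbor type, shape (n_types,)."""
    # Object-level: one summary vector per neighbor type.
    summaries = torch.stack([feats.mean(dim=0)
                             for feats in per_type_neighbors.values()])
    # Type-level: the softmax weights reveal which type (and hence which
    # meta-path) mattered, which is the source of interpretability.
    weights = F.softmax(type_logits, dim=0)
    return weights @ summaries

out = hierarchical_aggregate(
    {"author": torch.randn(5, 16), "venue": torch.randn(2, 16)},
    type_logits=torch.tensor([0.8, 0.2]),
)
print(out.shape)  # torch.Size([16])
```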

Active learning has long been a topic of study in machine learning. However, as increasingly complex and opaque models have become standard practice, the process of active learning has become more opaque as well. There has been little investigation into interpreting which specific trends and patterns an active learning strategy may be exploring. This work extends the Local Interpretable Model-agnostic Explanations framework (LIME) to provide explanations for active learning recommendations. We demonstrate how LIME can be used to generate locally faithful explanations for an active learning strategy, and how these explanations can be used to understand how different models and datasets explore a problem space over time. To quantify per-subgroup differences in how an active learning strategy queries spatial regions, we introduce a notion of uncertainty bias (based on disparate impact) that measures the discrepancy in a model's prediction confidence between one subgroup and another. Using this measure, we show that our query explanations accurately reflect the subgroup focus of the active learning queries, allowing for an interpretable explanation of what is being learned as points with similar sources of uncertainty have their uncertainty bias resolved. We demonstrate that this technique can be applied to track uncertainty bias over user-defined clusters or over clusters generated automatically from the source of uncertainty.
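
A minimal sketch of an uncertainty-bias measure in the disparate-impact style: the ratio of mean model confidence between two subgroups, where 1.0 means no bias. The paper's exact definition may differ; this only illustrates the idea.

```python
import numpy as np

def uncertainty_bias(confidences: np.ndarray, groups: np.ndarray) -> float:
    """confidences: model confidence per point; groups: 0/1 subgroup labels.
    Returns lower mean confidence over higher, so small values mean one
    subgroup is systematically less certain than the other."""
    c0 = confidences[groups == 0].mean()
    c1 = confidences[groups == 1].mean()
    return min(c0, c1) / max(c0, c1)

conf = np.array([0.90, 0.80, 0.95, 0.55, 0.60, 0.50])
grp = np.array([0, 0, 0, 1, 1, 1])
print(f"uncertainty bias: {uncertainty_bias(conf, grp):.2f}")  # ~0.62
```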

Classifying large-scale networks into categories and distinguishing them according to their fine structure is of great importance, with several real-life applications. However, most studies of complex networks focus on the properties of a single network; they seldom address classification, clustering, and comparison between different networks, where each network is treated as a whole. Due to the non-Euclidean nature of the data, conventional methods can hardly be applied to networks directly. In this paper, we propose a novel complex network classifier (CNC) framework that integrates network embedding and a convolutional neural network to tackle the problem of network classification. By training the classifier on synthetic complex network data and on real international trade network data, we show that CNC not only classifies networks with high accuracy and robustness but also automatically extracts network features.
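
A minimal sketch of the pipeline's shape under stated assumptions: embed a network's nodes in two dimensions, rasterize the embedding into an image, and classify the image with a small CNN. Each component (spectral embedding, grid size, CNN layout) is an assumed stand-in for the paper's actual choices.

```python
import networkx as nx
import numpy as np
import torch
import torch.nn as nn

def network_to_image(G: nx.Graph, grid: int = 32) -> torch.Tensor:
    # 2-D spectral embedding of the nodes, binned into a grid x grid histogram.
    pos = np.array(list(nx.spectral_layout(G, dim=2).values()))
    img, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=grid,
                               range=[[-1, 1], [-1, 1]])
    return torch.tensor(img, dtype=torch.float32)[None, None]  # (1, 1, H, W)

# Tiny classifier over, e.g., two synthetic network classes (ER vs. BA).
cnn = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(), nn.Linear(8, 2))
logits = cnn(network_to_image(nx.erdos_renyi_graph(100, 0.05)))
print(logits.shape)  # torch.Size([1, 2])
```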

Steve Jobs, one of the greatest visionaries of our time, was quoted in 1996 as saying "a lot of times, people do not know what they want until you show it to them" [38], indicating that he advocated developing products based on human intuition rather than research. With the advancement of mobile devices, social networks, and the Internet of Things, enormous amounts of complex data, both structured and unstructured, are being captured in the hope of allowing organizations to make better business decisions, as data is now vital to an organization's success. These enormous amounts of data are referred to as Big Data, which enables a competitive advantage over rivals when processed and analyzed appropriately. However, Big Data analytics faces several concerns, including data lifecycle management, privacy and security, and data representation. This paper reviews the fundamental concepts of Big Data, the data storage domain, and the MapReduce programming paradigm used to process these large datasets; it then focuses on two case studies showing the effectiveness of Big Data analytics and presents how it could deliver greater value in the future if handled appropriately.
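
A single-process word-count sketch of the MapReduce paradigm the paper reviews: a map phase emits (key, 1) pairs, a shuffle groups them by key, and a reduce phase sums each group. Real deployments such as Hadoop distribute these phases across a cluster; this version only shows the programming model.

```python
from collections import defaultdict

def map_phase(document: str):
    # Map: emit a (word, 1) pair for every word occurrence.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle_and_reduce(pairs):
    # Shuffle groups pairs by key; reduce sums the counts within each group.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data enables decisions", "big data needs big storage"]
pairs = (pair for doc in docs for pair in map_phase(doc))
print(shuffle_and_reduce(pairs))  # {'big': 3, 'data': 2, ...}
```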
