
We present F-PKI, an enhancement to the HTTPS public-key infrastructure that gives trust flexibility to both clients and domain owners while giving certification authorities (CAs) means to enforce stronger security measures. In today's web PKI, all CAs are equally trusted, and security is defined by the weakest link. We address this problem by introducing trust flexibility in two dimensions: with F-PKI, each domain owner can define a domain policy (specifying, for example, which CAs are authorized to issue certificates for their domain name) and each client can set or choose a validation policy based on trust levels. F-PKI thus supports a property that is sorely needed in today's Internet: trust heterogeneity. Different parties can express different trust preferences while still being able to verify all certificates. In contrast, today's web PKI only allows clients to fully distrust suspicious/misbehaving CAs, which is likely to cause collateral damage in the form of legitimate certificates being rejected. Our contribution is to present a system that is backward compatible, provides sensible security properties to both clients and domain owners, ensures the verifiability of all certificates, and prevents downgrade attacks. Furthermore, F-PKI provides a ground for innovation, as it gives CAs an incentive to deploy new security measures to attract more customers, without having these measures undercut by vulnerable CAs.
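To make the two policy dimensions concrete, here is a minimal Python sketch of how a domain policy and a client validation policy could combine during certificate validation. The data structures, trust levels, and names are illustrative assumptions, not F-PKI's actual formats.

```python
# Minimal sketch of F-PKI-style policy checks (illustrative only; these
# structures are assumptions, not the paper's actual data formats).
from dataclasses import dataclass

@dataclass
class DomainPolicy:
    domain: str
    allowed_cas: set[str]          # CAs the domain owner authorizes for issuance

@dataclass
class ClientPolicy:
    trust_levels: dict[str, int]   # client-assigned trust level per CA
    min_trust: int                 # lowest trust level the client accepts

def validate(cert_domain: str, issuer_ca: str,
             dp: DomainPolicy, cp: ClientPolicy) -> bool:
    """Accept only if the issuer is authorized by the domain owner
    and meets the client's trust threshold."""
    if cert_domain == dp.domain and issuer_ca not in dp.allowed_cas:
        return False               # domain policy violated: unauthorized issuer
    return cp.trust_levels.get(issuer_ca, 0) >= cp.min_trust

dp = DomainPolicy("example.com", {"StrongCA"})
cp = ClientPolicy({"StrongCA": 3, "WeakCA": 1}, min_trust=2)
print(validate("example.com", "StrongCA", dp, cp))  # True
print(validate("example.com", "WeakCA", dp, cp))    # False: rejected without
                                                    # fully distrusting WeakCA
```

Note how the client can down-rank a suspicious CA rather than removing it outright, which is the trust heterogeneity the abstract describes.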

Related Content

Hypertext Transfer Protocol Secure (HTTPS) is a combination of the Hypertext Transfer Protocol (HTTP) and SSL/TLS, used to provide encrypted communication and to authenticate the identity of a web server.
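As a small illustration of that combination, the following Python sketch opens an HTTPS connection with server-certificate verification using only the standard library; the host is a placeholder.

```python
# Minimal sketch: an HTTPS request with certificate verification,
# standard library only. "example.com" is a placeholder host.
import ssl
import http.client

context = ssl.create_default_context()   # verifies the server's certificate chain
conn = http.client.HTTPSConnection("example.com", 443, context=context)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)
# The TLS layer encrypts the HTTP exchange and authenticates the server
# against the trusted CA store, exactly the combination described above.
conn.close()
```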

We develop two distributed downlink resource allocation algorithms for user-centric, cell-free, spatially distributed, multiple-input multiple-output (MIMO) networks. In such networks, each user is served by a subset of nearby transmitters that we call distributed units (DUs). The operation of the DUs in a region is controlled by a central unit (CU). Our first scheme is implemented at the DUs, while the second is implemented at the CUs controlling these DUs. We define a hybrid quality-of-service metric that enables distributed optimization of system resources in a proportional-fair manner. Specifically, each of our algorithms performs user scheduling, beamforming, and power control while accounting for channel estimation errors. Importantly, our algorithm does not require information exchange amongst DUs (CUs) for the DU-distributed (CU-distributed) system, while also converging smoothly. Our results show that the CU-distributed system provides 1.3 to 1.8 times the network throughput of the DU-distributed system, with minor increases in complexity and front-haul load, and substantial gains over benchmark schemes like local zero-forcing. We also analyze the trade-offs introduced by the CU-distributed system, highlighting the significance of deploying multiple CUs in user-centric cell-free networks.
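For intuition about the proportional-fair criterion underlying the hybrid metric, here is a toy Python sketch of PF user scheduling. It ignores beamforming, power control, and channel estimation error, and the rate distribution and parameters are illustrative assumptions, not the paper's algorithm.

```python
# Toy proportional-fair scheduler: serve the user maximizing
# instantaneous rate / long-term average rate, then update the average.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_slots, beta = 4, 1000, 0.05    # beta: EWMA forgetting factor
avg_rate = np.full(n_users, 1e-3)         # long-term average throughput

for _ in range(n_slots):
    inst_rate = rng.rayleigh(1.0, n_users)    # stand-in for per-slot rates
    k = np.argmax(inst_rate / avg_rate)       # PF metric: rate / average
    served = np.zeros(n_users)
    served[k] = inst_rate[k]
    avg_rate = (1 - beta) * avg_rate + beta * served

print("long-term shares:", np.round(avg_rate / avg_rate.sum(), 2))
# Shares come out roughly equal: PF trades peak rate for fairness.
```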

The global datasphere is growing fast and is expected to reach 175 zettabytes by 2025. However, most of this content is unstructured and is not understandable by machines. Structuring this data into a knowledge graph enables a multitude of intelligent applications such as deep question answering, recommendation systems, and semantic search. The knowledge graph is an emerging technology that supports logical reasoning and uncovers new insights using content along with context. It thereby provides the syntax and reasoning semantics that enable machines to solve complex problems in healthcare, security, finance, economics, and business. As a result, enterprises are putting effort into constructing and maintaining knowledge graphs to support various downstream applications. Manual approaches are too expensive; automated schemes can reduce the cost of building knowledge graphs by a factor of 15 to 250. This paper critiques state-of-the-art automated techniques for autonomously producing knowledge graphs of near-human quality. Additionally, it highlights different research issues that need to be addressed to deliver high-quality knowledge graphs.
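To make "structuring data into a knowledge graph" concrete, the following Python sketch stores facts as (subject, predicate, object) triples and applies one toy inference rule; the entities and the rule are illustrative, not from the paper.

```python
# A knowledge graph in its simplest form: a set of
# (subject, predicate, object) triples plus one toy inference rule.
triples = {
    ("aspirin", "treats", "headache"),
    ("headache", "is_a", "symptom"),
    ("aspirin", "is_a", "drug"),
}

def objects(subject, predicate):
    """All objects linked to `subject` by `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Toy reasoning: anything that treats a symptom is a candidate therapy.
inferred = {(s, "candidate_therapy_for", o)
            for s, p, o in triples
            if p == "treats" and "symptom" in objects(o, "is_a")}
print(inferred)  # {('aspirin', 'candidate_therapy_for', 'headache')}
```

The point of the sketch is the "content along with context" idea: the new fact follows only by combining two stored triples.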

The Hypertext Transfer Protocol Secure (HTTPS) protocol has become an integral part of modern internet technology. It is currently the primary protocol for commercial web applications. It provides a fast, secure connection with a certain level of privacy and integrity, and it has become a basic assumption of most web services on the internet. However, HTTPS cannot provide security assurances for request data during computation, so the computing environment remains subject to risks and vulnerabilities. A hardware-based trusted execution environment (TEE) such as Intel Software Guard Extensions (SGX) provides in-memory encryption to help protect runtime computation and reduce the risk of private information being leaked or modified illegitimately. The central concept of SGX is that computation happens inside an enclave, a protected environment that encrypts the code and data pertaining to a security-sensitive computation. In addition, SGX provides security assurances via remote attestation to the web client, including TCB identity, vendor identity, and verification identity. Here we propose an HTTP protocol, called HTTPS Attestable (HTTPA), that adds a remote attestation process to the HTTPS protocol to address privacy and security concerns on the web and over Internet access. With HTTPA, we can provide security assurances to establish trustworthiness with web services and ensure the integrity of request handling for web users. We expect remote attestation to become a new trend adopted to reduce web service security risks, and we propose the HTTPA protocol to unify web attestation and access to services in a standard and efficient way.
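The following Python sketch illustrates, in heavily simplified form, the attestation step that HTTPA layers on top of HTTPS: the service presents a quote binding a measurement of its enclave code, and the client checks it against an expected value. The function names, quote fields, and mock signature are assumptions for illustration; real SGX quotes and the actual HTTPA message formats differ.

```python
# Heavily simplified remote-attestation flow (mock values throughout;
# not SGX's real quote structure or HTTPA's wire format).
import hashlib

def enclave_quote(code: bytes, signing_key: str) -> dict:
    """Stand-in for an SGX quote: binds a measurement of the enclave
    code to a (mock) hardware-backed signature."""
    measurement = hashlib.sha256(code).hexdigest()
    return {"mrenclave": measurement,
            "signature": f"signed({measurement},{signing_key})"}

def client_verify(quote: dict, expected_measurement: str) -> bool:
    # A real verifier would also validate the vendor's attestation
    # signature chain, not just compare measurements.
    return quote["mrenclave"] == expected_measurement

service_code = b"def handle(request): ..."
quote = enclave_quote(service_code, "hw_key")
expected = hashlib.sha256(service_code).hexdigest()
print(client_verify(quote, expected))  # True: the client now has evidence
                                       # of the TCB identity it is talking to
```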

Despite being a hot research topic for a decade, drones are still not part of our everyday life. In this article, we analyze the reasons for this state of affairs and look for ways of improving the situation. We rely on the achievements of Technology Assessment (TA), an interdisciplinary research field that aims to provide knowledge for better-informed and well-reflected decisions concerning new technologies. We demonstrate that the most critical area requiring further development is safety. Since Unmanned Aerial System Traffic Management (UTM) systems promise to address this problem in a systematic manner, we also indicate relevant UTM solutions that have to be designed by wireless experts. Moreover, we suggest project implementation guidelines for several drone applications. The guidelines take into account the public acceptance levels estimated in the state-of-the-art literature of the corresponding field.

Promising solutions exist today that can accurately track mobile entities indoors using visual-inertial odometry in favorable visual conditions, or by leveraging fine-grained ranging (RF, ultrasonic, IR, etc.) to reference anchors. However, they are unable to directly cater to "dynamic" indoor environments (e.g., first-responder scenarios, multi-player AR/VR gaming in everyday spaces) that are devoid of such favorable conditions. Indeed, we show that the need for "infrastructure-free" operation, and for robustness to "node mobility" and "visual conditions" in such environments, motivates a robust RF-based approach, along with the need to address a novel and challenging variant of its infrastructure-free (i.e., peer-to-peer) localization problem that is latency-bounded: accurate tracking of mobile entities imposes a latency budget that affects not only the solution computation but also the collection of the peer-to-peer ranges themselves. In this work, we present the design and deployment of DynoLoc, which addresses this latency-bounded, infrastructure-free RF localization problem. To this end, DynoLoc unravels the fundamental tradeoff between latency and localization accuracy and incorporates design elements that judiciously leverage the available ranging resources to adaptively estimate the joint topology of nodes, coupled with a robust algorithm that maximizes localization accuracy even in the face of practical environmental artifacts (wireless connectivity and multipath, node mobility, etc.). This allows DynoLoc to track, every second, a network of a few tens of mobile entities moving at speeds of 1-2 m/s with median accuracies under 1-2 m (compared to 5 m or more with baselines), without infrastructure support. We demonstrate DynoLoc's potential in a real-world firefighters' drill, as well as in two other use cases: (i) multi-player AR/VR gaming, and (ii) active-shooter tracking by first responders.
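At the heart of any ranging-based localization scheme like this is a position solve from noisy peer ranges. The Python sketch below shows standard linearized least-squares multilateration; DynoLoc's joint, latency-bounded topology estimation is considerably richer, and the positions and noise level here are illustrative.

```python
# Linearized least-squares multilateration: subtract the first anchor's
# range equation from the others to eliminate the quadratic term in x.
import numpy as np

def trilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    a0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
noise = np.random.default_rng(1).normal(0, 0.1, 4)   # ~10 cm ranging noise
ranges = np.linalg.norm(anchors - true_pos, axis=1) + noise
print(trilaterate(anchors, ranges))  # approximately [3, 4]
```

In the peer-to-peer setting the "anchors" are themselves moving estimates, which is why the latency budget on collecting the ranges matters so much.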

The confluence of 5G and AI is transforming wireless networks to deliver diverse services at the Edge, driving towards a vision of pervasive distributed intelligence. Future 6G networks will need to deliver quality of experience through seamless integration of communication, computation and AI. Therefore, networks must become intelligent, distributed, scalable, and programmable platforms across the continuum of data delivery to address the ever-increasing service requirements and deployment complexity. We present novel results across three research directions that are expected to be integral to 6G systems and also discuss newer 6G metrics.

This paper presents a study of how cooperation versus non-cooperation, and centralization versus distribution, impact the performance of a traffic game of autonomous vehicles. A model using a particle-based Lagrangian representation is developed, instead of the Eulerian, flow-based one usual in routing problems under the game-theoretical approach. This choice allows the representation of phenomena such as fuel exhaustion, vehicle collision, and wave propagation. The elements necessary to represent interactions in a multi-agent transportation system are defined, including a distributed, priority-based resource allocation protocol, where resources are nodes and links in a spatial network and individual routing strategies are performed. A fuel consumption model is developed to account for energy cost and for vehicles having limited range. The analysis shows that only the scenarios with cooperative resource allocation achieve optimal values of either collective cost or the equity coefficient, corresponding respectively to the centralized and the distributed cases.
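As a minimal illustration of a priority-based allocation protocol of the kind described, the Python sketch below grants each requested node to at most one vehicle per step, in priority order. The network, vehicles, and priorities are illustrative stand-ins, not the paper's model.

```python
# Toy priority-based resource allocation: vehicles request nodes of a
# spatial network; conflicts are resolved in descending priority order.
def allocate(requests: dict[str, str], priority: dict[str, int]) -> dict[str, str]:
    """requests: vehicle -> desired node; returns vehicle -> granted node."""
    granted, taken = {}, set()
    for vehicle in sorted(requests, key=lambda v: -priority[v]):
        node = requests[vehicle]
        if node not in taken:       # grant and reserve the resource
            granted[vehicle] = node
            taken.add(node)
    return granted

requests = {"v1": "n3", "v2": "n3", "v3": "n5"}   # v1 and v2 conflict on n3
priority = {"v1": 2, "v2": 5, "v3": 1}
print(allocate(requests, priority))  # {'v2': 'n3', 'v3': 'n5'}; v1 must wait
```

Whether priorities are assigned by a central planner or negotiated locally is precisely the centralized-versus-distributed axis the study varies.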

As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks on robustness and the corresponding defenses; and 3) inference attacks on privacy and the corresponding defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by the various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
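For readers new to FL, the protocol component most surveyed attacks target is the server's aggregation step. Below is a minimal Python sketch of FedAvg-style weighted averaging, with toy numpy vectors standing in for model updates; it is a generic illustration, not a scheme from the survey.

```python
# FedAvg-style aggregation: average client updates weighted by the
# size of each client's local dataset.
import numpy as np

def fed_avg(client_weights: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(client_weights, n_samples))

clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
sizes = [100, 50, 50]
print(fed_avg(clients, sizes))  # [1.75 1.5]
# A single poisoned update can shift this average arbitrarily, which is
# why robust aggregation rules are a central topic in the survey.
```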

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
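As a rough schematic of this definition (our simplified notation, not the paper's exact formula), intelligence can be read as skill-acquisition efficiency averaged over a scope of tasks:

```latex
% Simplified schematic only; the paper's full definition is richer.
\[
  I \;=\; \operatorname*{Avg}_{T \,\in\, \text{scope}}
  \left[ \omega_T \cdot \frac{GD_T}{P_T + E_T} \right]
\]
% omega_T: task weight; GD_T: generalization difficulty of task T;
% P_T: priors supplied to the system; E_T: experience consumed.
```

Intuitively, a system scores higher when it attains skill on hard-to-generalize tasks while consuming few priors and little experience, which is exactly why unlimited priors or training data can "buy" skill without demonstrating intelligence.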

Music recommender systems (MRS) have experienced a boom in recent years, thanks to the emergence and success of online streaming services, which nowadays put almost all of the world's music at the user's fingertips. While today's MRS considerably help users find interesting music in these huge catalogs, MRS research still faces substantial challenges. In particular, when it comes to building, incorporating, and evaluating recommendation strategies that integrate information beyond simple user-item interactions or content-based descriptors and dig deep into the very essence of listener needs, preferences, and intentions, MRS research becomes a big endeavor, and related publications are quite sparse. The purpose of this trends-and-survey article is twofold. We first identify and shed light on what we believe are the most pressing challenges MRS research is facing, from both academic and industry perspectives. We review the state of the art towards solving these challenges and discuss its limitations. Second, we detail possible future directions and visions we contemplate for the further evolution of the field. The article should therefore serve two purposes: giving the interested reader an overview of current challenges in MRS research, and providing guidance for young researchers by identifying interesting yet under-researched directions in the field.
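To ground the phrase "simple user-item interactions": the baseline that richer strategies are contrasted against is typically matrix factorization over an interaction matrix. The Python sketch below is a generic illustration with toy data and hyperparameters, not a method from the article.

```python
# Toy matrix factorization over a user-item rating matrix by gradient
# descent on observed entries only (0 marks an unobserved rating).
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 0, 3], [4, 0, 0], [0, 2, 5]], dtype=float)
mask = R > 0
k, lr, reg = 2, 0.05, 0.01                      # latent dim, step, L2 penalty
U = rng.normal(0, 0.1, (3, k))                  # user factors
V = rng.normal(0, 0.1, (3, k))                  # item factors

for _ in range(500):
    err = mask * (R - U @ V.T)                  # error on observed entries
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

print(np.round(U @ V.T, 1))                     # predictions fill the blanks
```

Everything about listener context, intent, and need that the article discusses is, by construction, invisible to a model like this, which is the gap the surveyed research tries to close.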
