
Multi-signature schemes aggregate signatures from multiple users on the same message into a joint signature, and are widely applied in blockchains to reduce the fraction of block space occupied by signatures and thereby improve transaction throughput. The $k$-sum attacks are one of the major challenges in designing secure multi-signature schemes. In this work, we address $k$-sum attacks from a novel angle by defining a Public Third Party (PTP), an automatic, publicly verifiable process that prevents the signing phase from continuing until commitments have been received from all signers. We further propose MEMS, a two-round multi-signature scheme with a PTP that is secure under the discrete logarithm assumption in the random oracle model. Because each signer communicates directly with the PTP instead of with the other co-signers, the total amount of communication is significantly reduced. In addition, since the PTP participates in the computation of the aggregation and signing algorithms, the computation cost left to each signer and verifier remains the same as that of the underlying Schnorr signature. To the best of our knowledge, this is the maximum efficiency that a Schnorr-based multi-signature scheme can achieve. Finally, MEMS is applied to a blockchain platform, e.g., Fabric, to improve transaction efficiency.
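The abstract does not include the construction itself; as a rough illustration of the coordination pattern it describes, the sketch below shows a two-round, coordinator-mediated Schnorr-style aggregation in which a PTP-like process withholds the second round until every commitment has arrived. The group parameters, class names, and naive key aggregation are assumptions made for the example; it is not the MEMS scheme and omits the defenses (including against rogue-key and $k$-sum attacks) that the actual construction provides.

```python
# Illustrative sketch only: a two-round, coordinator-mediated Schnorr-style
# multi-signature.  Toy parameters and naive key aggregation are used; this is
# NOT the MEMS construction and omits rogue-key / k-sum defenses.
import hashlib, secrets

p, q, g = 467, 233, 4          # toy prime-order subgroup: order(g) = q, q | p - 1

def H(*parts):
    h = hashlib.sha256("|".join(str(x) for x in parts).encode()).digest()
    return int.from_bytes(h, "big") % q

class Signer:
    def __init__(self):
        self.x = secrets.randbelow(q - 1) + 1     # secret key
        self.X = pow(g, self.x, p)                # public key
    def round1(self):
        self.r = secrets.randbelow(q - 1) + 1     # per-signature nonce
        return pow(g, self.r, p)                  # commitment R_i
    def round2(self, R, X_agg, msg):
        c = H(R, X_agg, msg)                      # joint challenge
        return (self.r + c * self.x) % q          # partial signature s_i

class PTP:
    """Public third party: starts round 2 only after ALL commitments arrived."""
    def run(self, signers, msg):
        commits = [s.round1() for s in signers]   # round 1: collect every R_i first
        R = 1
        for Ri in commits:
            R = (R * Ri) % p                      # aggregate commitment
        X_agg = 1
        for s in signers:
            X_agg = (X_agg * s.X) % p             # naive key aggregation
        parts = [s.round2(R, X_agg, msg) for s in signers]   # round 2
        return R, sum(parts) % q, X_agg

def verify(R, s, X_agg, msg):
    c = H(R, X_agg, msg)
    return pow(g, s, p) == (R * pow(X_agg, c, p)) % p        # plain Schnorr-style check

signers = [Signer() for _ in range(3)]
R, s, X_agg = PTP().run(signers, "tx-payload")
assert verify(R, s, X_agg, "tx-payload")
```

Note that the verification equation is the same as for a plain Schnorr signature, which is the efficiency property the abstract highlights for the verifier.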

Related content

It is commonly assumed that the end-to-end networking performance of edge offloading is purely dictated by that of the network connectivity between end devices and edge computing facilities, where ongoing innovation in 5G/6G networking can help. However, with the growing complexity of edge-offloaded computation and dynamic load-balancing requirements, an offloaded task often goes through a multi-stage pipeline that spans multiple compute nodes and proxies interconnected via a dedicated network fabric within a given edge computing facility. As the latest hardware-accelerated transport technologies such as RDMA and GPUDirect RDMA are adopted to build such network fabrics, there is a need for a good understanding of the full potential of these technologies in the context of computation offloading, and of the effect of factors such as GPU scheduling and the characteristics of the computation on the net performance gain they can achieve. This paper unveils detailed insights into the latency overhead in typical machine learning (ML)-based computation pipelines and analyzes the potential benefits of adopting hardware-accelerated communication. To this end, we build a model-serving framework that supports various communication mechanisms. Using the framework, we identify performance bottlenecks in state-of-the-art model-serving pipelines and show how hardware-accelerated communication can alleviate them. For example, we show that GPUDirect RDMA can save 15--50\% of model-serving latency, which amounts to 70--160 ms.
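As a hedged illustration of the kind of measurement such a framework enables, the sketch below times each stage of a toy serving pipeline separately, so that host-device copy stages (the ones GPUDirect RDMA would bypass) show up as their own line items. The stage names and placeholder functions are assumptions, not the paper's framework.

```python
# Hedged sketch: per-stage latency breakdown for a multi-stage serving pipeline.
# Stage functions and names are placeholders; the point is only that transfer
# stages are measured separately from compute stages.
import time

def timed(stage_fn, *args):
    t0 = time.perf_counter()
    out = stage_fn(*args)
    return out, (time.perf_counter() - t0) * 1e3  # milliseconds

def run_request(request, stages):
    """stages: ordered list of (name, fn); returns output and per-stage latency."""
    latencies, data = {}, request
    for name, fn in stages:
        data, latencies[name] = timed(fn, data)
    return data, latencies

# Example pipeline; replace the lambdas with real preprocessing, copies and inference.
pipeline = [
    ("deserialize",    lambda x: x),
    ("host_to_device", lambda x: x),   # copy a GPUDirect RDMA path could eliminate
    ("inference",      lambda x: x),
    ("device_to_host", lambda x: x),   # copy a GPUDirect RDMA path could eliminate
    ("serialize",      lambda x: x),
]
_, lat = run_request(b"payload", pipeline)
print({k: round(v, 3) for k, v in lat.items()})
```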

Wearable sensors such as Inertial Measurement Units (IMUs) are often used to assess the performance of human exercise. Common approaches use handcrafted features based on domain expertise or automatically extracted features using time series analysis. Multiple sensors are required to achieve high classification accuracy, which is not very practical. These sensors require calibration and synchronization and may lead to discomfort over longer time periods. Recent work utilizing computer vision techniques has shown similar performance using video, without the need for manual feature engineering, and avoiding some pitfalls such as sensor calibration and placement on the body. In this paper, we compare the performance of IMUs to a video-based approach for human exercise classification on two real-world datasets consisting of Military Press and Rowing exercises. We compare the performance using a single camera that captures video in the frontal view versus using 5 IMUs placed on different parts of the body. We observe that an approach based on a single camera can outperform a single IMU by 10 percentage points on average. Additionally, a minimum of 3 IMUs are required to outperform a single camera. We observe that working with the raw data using multivariate time series classifiers outperforms traditional approaches based on handcrafted or automatically extracted features. Finally, we show that an ensemble model combining the data from a single camera with a single IMU outperforms either data modality. Our work opens up new and more realistic avenues for this application, where a video captured using a readily available smartphone camera, combined with a single sensor, can be used for effective human exercise classification.
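A minimal sketch of the late-fusion ensemble idea follows: one classifier per modality, with class probabilities averaged. It assumes generic scikit-learn random forests on flattened windows and synthetic data so that it is self-contained; the paper itself uses multivariate time-series classifiers, and the feature shapes here are illustrative only.

```python
# Hedged sketch of late fusion across modalities: a video-derived classifier and a
# single-IMU classifier, with predicted probabilities averaged.  Synthetic data and
# plain scikit-learn estimators stand in for the paper's time-series classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200                                   # exercise repetitions (windows)
X_video = rng.normal(size=(n, 8 * 161))   # e.g. 8 pose-keypoint series, flattened
X_imu   = rng.normal(size=(n, 6 * 161))   # e.g. 6 IMU channels, flattened
y       = rng.integers(0, 3, size=n)      # correct form vs. two deviation classes

def fit_modality(X, y):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    return clf

video_clf = fit_modality(X_video, y)
imu_clf   = fit_modality(X_imu, y)

# Late fusion: average the class-probability estimates of the two modalities.
proba = (video_clf.predict_proba(X_video) + imu_clf.predict_proba(X_imu)) / 2
y_pred = proba.argmax(axis=1)
```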

Time is a crucial factor in modelling dynamic behaviours of intelligent agents: activities have a determined temporal duration in a real-world environment, and previous actions influence agents' behaviour. In this paper, we propose a language for modelling concurrent interaction between agents that also allows the specification of temporal intervals in which particular actions occur. Such a language exploits a timed version of Abstract Argumentation Frameworks to realise a shared memory used by the agents to communicate and reason on the acceptability of their beliefs with respect to a given time interval. An interleaving model on a single processor is used for basic computation steps, with maximum parallelism for time elapsing. Following this approach, only one of the enabled agents is executed at each moment. To demonstrate the capabilities of the language, we also show how it can be used to model interactions such as debates and dialogue games taking place between intelligent agents. Lastly, we present an implementation of the language that can be accessed via a web interface. Under consideration in Theory and Practice of Logic Programming (TPLP).
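To make the notion of time-dependent acceptability concrete, the sketch below implements a toy timed argumentation framework in which each argument is available over a set of intervals and acceptability at a time instant is computed with a simple grounded-style fixpoint. This is an illustrative simplification, not the semantics or implementation of the proposed language.

```python
# Hedged sketch: a toy timed abstract argumentation framework.  Acceptability at a
# time instant t is checked with a grounded-style fixpoint restricted to the
# arguments available at t.  Illustration only, not the paper's exact semantics.

def available(intervals, t):
    return any(lo <= t <= hi for lo, hi in intervals)

def grounded_at(args, attacks, t):
    """args: {name: [(lo, hi), ...]}; attacks: set of (attacker, target) pairs."""
    active = {a for a, iv in args.items() if available(iv, t)}
    att = {(a, b) for a, b in attacks if a in active and b in active}
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in active - accepted - rejected:
            attackers = {x for x, y in att if y == a}
            if attackers <= rejected:          # all attackers already rejected
                accepted.add(a); changed = True
            elif attackers & accepted:         # attacked by an accepted argument
                rejected.add(a); changed = True
    return accepted

args = {"a": [(0, 10)], "b": [(3, 6)], "c": [(0, 10)]}
attacks = {("b", "a"), ("c", "b")}
print(grounded_at(args, attacks, 1))   # {'a', 'c'}: b is not yet available
print(grounded_at(args, attacks, 5))   # {'a', 'c'}: c defeats b, reinstating a
```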

This paper investigates semantic-extraction-task-oriented dynamic multi-time-scale user admission and resource allocation in mobile edge computing (MEC) systems. Amid the prevalence of artificial intelligence applications in various industries, offloading semantic extraction tasks, which mainly consist of convolutional neural networks for computer vision, poses a great challenge for communication bandwidth and computing capacity allocation in MEC systems. Considering the stochastic nature of semantic extraction tasks, we formulate a stochastic optimization problem by modeling their dynamic arrival in the temporal domain. We jointly optimize the system revenue and cost, represented respectively by long-term user admission and short-term resource allocation. To handle the proposed stochastic optimization problem, we decompose it into short-time-scale subproblems and a long-time-scale subproblem using the Lyapunov optimization technique. The short-time-scale resource allocation variables, including user association, bandwidth allocation, and computing capacity allocation, are then obtained in closed form, while the long-time-scale user admission problem is solved by a heuristic iterative method. On this basis, a multi-time-scale user admission and resource allocation algorithm is proposed for dynamic semantic extraction task computing in MEC systems. Simulation results demonstrate that, compared with the benchmarks, the proposed algorithm improves user admission and resource allocation efficiently and achieves a flexible trade-off between system revenue and cost across multiple time scales for semantic extraction tasks.
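The drift-plus-penalty pattern behind this decomposition can be sketched as follows; the arrival process, cost model, and constraint are stand-ins chosen for illustration and do not reproduce the paper's system model.

```python
# Hedged sketch of the Lyapunov drift-plus-penalty pattern: a virtual queue tracks a
# long-term admission target, and each slot makes a greedy short-time-scale decision
# weighing immediate cost against queue backlog.
import random

V = 10.0                  # trade-off weight between per-slot cost and queue drift
Q = 0.0                   # virtual queue for the long-term admission constraint
target_rate = 0.6         # required long-term fraction of admitted task load

random.seed(0)
for slot in range(1000):
    arrivals = random.randint(0, 5)                 # semantic-extraction tasks this slot
    # Candidate short-time-scale actions: admit k of the arrived tasks.
    best_k, best_score = 0, float("inf")
    for k in range(arrivals + 1):
        cost = 1.5 * k                              # stand-in resource cost of serving k tasks
        # drift-plus-penalty objective: V * cost + Q * (constraint slack under action k)
        score = V * cost + Q * (target_rate * arrivals - k)
        if score < best_score:
            best_k, best_score = k, score
    # Virtual queue grows when we fall short of the long-term target, shrinks otherwise.
    Q = max(0.0, Q + target_rate * arrivals - best_k)

print(f"final virtual queue backlog: {Q:.2f}")
```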

Maritime activities represent a major domain of economic growth with several emerging maritime Internet of Things use cases, such as smart ports, autonomous navigation, and ocean monitoring systems. The major enabler for this exciting ecosystem is the provision of broadband, low-delay, and reliable wireless coverage to the ever-increasing number of vessels, buoys, platforms, sensors, and actuators. Towards this end, the integration of unmanned aerial vehicles (UAVs) in maritime communications introduces an aerial dimension to wireless connectivity, going above and beyond current deployments, which mainly rely on shore-based base stations with limited coverage and satellite links with high latency. Considering the potential of UAV-aided wireless communications, this survey presents the state-of-the-art in UAV-aided maritime communications, which, in general, are based on both conventional optimization and machine-learning-aided approaches. More specifically, relevant UAV-based network architectures are discussed together with the role of their building blocks. Then, physical-layer, resource management, and cloud/edge computing and caching UAV-aided solutions in maritime environments are discussed and grouped based on their performance targets. Moreover, as UAVs are characterized by flexible deployment with high re-positioning capabilities, studies on UAV trajectory optimization for maritime applications are thoroughly discussed. In addition, aiming to shed light on the current status of real-world deployments, experimental studies on UAV-aided maritime communications are presented and implementation details are given. Finally, several important open issues in the area of UAV-aided maritime communications are given, related to the integration of sixth generation (6G) advancements.
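One recurring sub-problem in the surveyed literature, placing a UAV to serve scattered maritime nodes, can be illustrated with the following sketch; the free-space path-loss model, grid search, and parameter values are assumptions for the example rather than any specific surveyed scheme.

```python
# Hedged illustration: choose a UAV hover position so that the worst-served vessel's
# free-space path loss is minimized.  All values are illustrative assumptions.
import math

vessels = [(0.0, 0.0), (3.0, 1.0), (1.0, 4.0), (5.0, 5.0)]   # km, sea-level nodes
altitude = 0.5                                                # km, fixed UAV altitude
f_ghz = 2.4

def path_loss_db(uav_xy, node_xy):
    dx, dy = uav_xy[0] - node_xy[0], uav_xy[1] - node_xy[1]
    d_km = math.sqrt(dx * dx + dy * dy + altitude ** 2)
    return 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45   # free-space loss

# Grid search over candidate hover positions (min-max placement).
best_pos, best_worst = None, float("inf")
for i in range(51):
    for j in range(51):
        pos = (i * 0.1, j * 0.1)
        worst = max(path_loss_db(pos, v) for v in vessels)
        if worst < best_worst:
            best_pos, best_worst = pos, worst

print(f"hover position {best_pos} km, worst-case path loss {best_worst:.1f} dB")
```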

Blockchain is an emerging decentralized data collection, sharing, and storage technology, which has provided transparent, secure, tamper-proof, and robust ledger services for various real-world use cases. Recent years have witnessed notable developments of blockchain technology itself as well as of blockchain-adopting applications. Most existing surveys limit their scope to a few particular issues of blockchain or its applications, and therefore struggle to depict the general picture of the current, sprawling blockchain ecosystem. In this paper, we investigate recent advances in both blockchain technology and its most active research topics in real-world applications. We first review the recent developments of consensus mechanisms and storage mechanisms in general blockchain systems. Then an extensive literature review is conducted on blockchain-enabled IoT, edge computing, federated learning, and several emerging applications, including healthcare, the COVID-19 pandemic, social networks, and supply chains, with detailed research topics discussed for each. Finally, we discuss the future directions, challenges, and opportunities in both academia and industry.

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communications and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. Their combination enables each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches with no knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
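The core loop, an agent learning which peers to allocate subtasks to while scaling exploration by how optimal it believes its current strategy is, might look roughly like the sketch below; the reward model, peer qualities, and confidence heuristic are illustrative assumptions, not the paper's four algorithms.

```python
# Hedged sketch: a single agent learns allocation values for its peers and reduces
# exploration as its recent outcomes approach the best value it has seen.
import random

random.seed(1)
peers = ["agent_a", "agent_b", "agent_c"]
true_quality = {"agent_a": 0.9, "agent_b": 0.6, "agent_c": 0.3}   # hidden from the learner

q = {p: 0.0 for p in peers}        # estimated value of allocating to each peer
alpha = 0.1                        # learning rate
recent = []                        # recent rewards, used to estimate strategy optimality

for step in range(500):
    # Believed optimality: how close recent rewards are to the best estimated value.
    belief = (sum(recent) / len(recent)) / max(q.values()) if recent and max(q.values()) > 0 else 0.0
    epsilon = max(0.05, 1.0 - belief)          # explore less as confidence grows
    if random.random() < epsilon:
        choice = random.choice(peers)          # explore the system
    else:
        choice = max(q, key=q.get)             # exploit the current strategy
    reward = random.gauss(true_quality[choice], 0.1)   # peer completes the subtask
    q[choice] += alpha * (reward - q[choice])          # reinforcement-learning update
    recent = (recent + [reward])[-20:]                 # sliding window of outcomes

print({p: round(v, 2) for p, v in q.items()})
```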

Games and simulators can be a valuable platform to execute complex multi-agent, multiplayer, imperfect information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of artificial intelligence algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community aiming to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with artificial intelligence algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.

The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
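As a minimal, hedged example of the simplest family of techniques the survey covers, the sketch below performs one-shot global magnitude pruning to a target sparsity and runs a masked forward pass; pruning schedules, structured sparsity, and retraining are deliberately omitted.

```python
# Hedged sketch: one-shot global magnitude pruning of dense weight matrices to a
# target sparsity, followed by masked (sparse) inference on random data.
import numpy as np

rng = np.random.default_rng(0)
weights = {"fc1": rng.normal(size=(256, 128)), "fc2": rng.normal(size=(128, 10))}
target_sparsity = 0.9                      # fraction of parameters to remove

# Global threshold: the magnitude below which 90% of all parameters fall.
all_mags = np.concatenate([np.abs(w).ravel() for w in weights.values()])
threshold = np.quantile(all_mags, target_sparsity)

masks = {name: (np.abs(w) > threshold) for name, w in weights.items()}
pruned = {name: w * masks[name] for name, w in weights.items()}

kept = sum(int(m.sum()) for m in masks.values())
total = sum(m.size for m in masks.values())
print(f"kept {kept}/{total} parameters ({100 * kept / total:.1f}% dense)")

# Masked forward pass for a two-layer network: zeroed weights contribute nothing.
x = rng.normal(size=(1, 256))
h = np.maximum(x @ pruned["fc1"], 0.0)     # ReLU(x W1), with W1 sparsified
logits = h @ pruned["fc2"]
```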

The concept of the smart grid has been introduced as a new vision of the conventional power grid, aiming at an efficient way of integrating green and renewable energy technologies. Along these lines, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to making energy available anywhere at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large and growing number of connections can be a challenging issue for the traditional centralized grid system. Consequently, the smart grid is undergoing a transformation from its centralized form to a decentralized topology. On the other hand, blockchain has some excellent features which make it a promising technology for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey on the application of blockchain in the smart grid. As such, we identify the significant security challenges of smart grid scenarios that can be addressed by blockchain. Then, we review a number of recent blockchain-based research works in the literature that address security issues in the area of the smart grid. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions of applying blockchain to smart grid security issues.
