Cooperation is fundamental for human prosperity. Blockchain, as a trust machine, is a cooperative institution in cyberspace that supports cooperation through distributed trust with consensus protocols. While studies in computer science focus on fault tolerance problems with consensus algorithms, economic research utilizes incentive designs to analyze agent behaviors. To achieve cooperation on blockchains, emerging interdisciplinary research introduces rationality and game-theoretical solution concepts to study the equilibrium outcomes of various consensus protocols. However, existing studies do not consider the possibility for agents to learn from historical observations. Therefore, we abstract a general consensus protocol as a dynamic game environment, apply a solution concept of bounded rationality to model agent behavior, and derive the initial conditions that lead to three different stable equilibria. In our game, agents imitatively learn from the global history in an evolutionary process toward equilibria, and we evaluate the outcomes from both computing and economic perspectives in terms of safety, liveness, validity, and social welfare. Our research contributes to the literature across disciplines, including distributed consensus in computer science, game theory in economics on blockchain consensus, evolutionary game theory at the intersection of biology and economics, bounded rationality at the interplay between psychology and economics, and cooperative AI with joint insights into computing and social science. Finally, we discuss how future protocol design can better achieve the most desired outcomes of our honest stable equilibria by increasing the reward-punishment ratio and lowering both the cost-punishment ratio and the pivotality rate.
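
To make the imitative-learning dynamic concrete, here is a minimal Python sketch of a two-strategy (honest/deviate) population in which each agent occasionally copies whichever strategy earned more in the previous round. The payoff parameters `R`, `C`, `P` (reward, cost, punishment) and the 0.1 switching rate are hypothetical placeholders rather than the paper's calibration, and this toy model has only two basins of attraction where the paper resolves three stable equilibria; the point is simply that the long-run outcome hinges on the initial share of honest agents.

```python
import random

# Illustrative parameters (hypothetical, not the paper's calibration):
# R = reward, C = cost of honest participation, P = punishment for deviating.
R, C, P = 4.0, 1.0, 2.0
N, ROUNDS = 1000, 200

def payoff(honest, frac_honest):
    """Stylized stage payoff: honest play pays off only when enough
    of the population is honest (consensus succeeds)."""
    if honest:
        return R - C if frac_honest >= 0.5 else -C
    return -P if frac_honest >= 0.5 else 0.0

def simulate(init_honest_frac):
    pop = [random.random() < init_honest_frac for _ in range(N)]
    for _ in range(ROUNDS):
        frac = sum(pop) / N
        pay_h, pay_d = payoff(True, frac), payoff(False, frac)
        if pay_h != pay_d:
            # Imitative learning from the global history: with probability
            # 0.1, an agent adopts the strategy that earned more last round.
            better = pay_h > pay_d
            pop = [s if random.random() > 0.1 else better for s in pop]
    return sum(pop) / N

# 0.5 sits near the tipping point between the two basins of attraction.
for f0 in (0.2, 0.5, 0.8):
    print(f"initial honest fraction {f0:.1f} -> final {simulate(f0):.2f}")
```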

Related content

The proliferation of Deepfake synthetic media circulating on the internet has had a serious social impact on politicians, celebrities, and ordinary people alike. In this survey, we provide a thorough review of existing Deepfake detection studies from the reliability perspective. We define the reliability-oriented research challenges of the current Deepfake detection domain in three aspects, namely, transferability, interpretability, and robustness. While solutions to these three challenges have been frequently proposed, the general reliability of a detection model has barely been considered, leaving a lack of reliable evidence for real-life use and even for prosecuting Deepfake-related cases in court. We therefore introduce a model reliability metric, based on statistical random sampling and publicly available benchmark datasets, to assess the reliability of existing detection models on arbitrary Deepfake candidates. Case studies are further conducted on real-life Deepfake cases involving different groups of victims, using the detection models that this survey qualifies as reliable. Our reviews and experiments on existing approaches yield informative discussions and future research directions for Deepfake detection.
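
To illustrate the statistical-sampling idea, the following is a minimal sketch of the standard normal-approximation confidence interval one could compute for a detector's accuracy over randomly sampled benchmark candidates; the survey's actual reliability metric may be more elaborate, and the function name and the 912-of-1000 figures here are hypothetical.

```python
import math

def accuracy_confidence_interval(correct, n, z=1.96):
    """Normal-approximation 95% confidence interval for a detector's
    accuracy, estimated from n randomly sampled candidate media files."""
    p = correct / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical usage: a detector labels 912 of 1000 random samples correctly.
p, lo, hi = accuracy_confidence_interval(912, 1000)
print(f"estimated accuracy {p:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```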

Transaction fees are a major incentive in many blockchain systems, serving as payment for processing transactions. Unfortunately, they also introduce an enormous amount of incentive asymmetry compared to alternatives like fixed block rewards. We analyze some of the incentive compatibility issues that arise from transaction fees, which relate to the bids that users submit, the allocation rules that miners use to choose which transactions to include, and where miners choose to mine in the context of longest-chain consensus. We start by surveying a variety of mining attacks, including undercutting, fee sniping, and fee-optimized selfish mining. Then, we analyze mechanistic notions of user incentive compatibility, myopic miner incentive compatibility, and off-chain-agreement-proofness, as well as why they are provably incompatible in their full form. We then discuss weaker notions of nearly and $\gamma$-weak incentive compatibility, and how all of these forms of incentive compatibility hold or fail in the trustless auctioneer setup of blockchains, examining classical mechanisms as well as more recent ones such as Ethereum's EIP-1559 mechanism and \cite{chung}'s burning second-price auction. Throughout, we generalize and interrelate existing notions, provide new unifying perspectives and intuitions, and discuss both specific and overarching open problems for future work.
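
For concreteness, here is a simplified sketch of the EIP-1559 base fee update mentioned above: the protocol targets half-full blocks and moves the base fee by at most 1/8 per block, and the base fee is burned so that only the user's priority tip reaches the block producer. This omits details of the full specification (e.g., the minimum-delta clamp), so treat it as an approximation rather than the spec.

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # per EIP-1559: at most 12.5% change

def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    """EIP-1559 base fee update (simplified): the fee drifts toward the
    level at which blocks are exactly half full (gas_used == gas_target).
    The base fee is burned; only the priority tip goes to the producer."""
    delta = base_fee * (gas_used - gas_target) // gas_target
    return base_fee + delta // BASE_FEE_MAX_CHANGE_DENOMINATOR

fee = 100_000_000_000  # 100 gwei
for used in (30_000_000, 15_000_000, 0):  # full block, at target, empty
    print(used, "->", next_base_fee(fee, used, 15_000_000))
```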

As the impact of AI on various scientific fields increases, it is crucial to embrace interdisciplinary knowledge to understand the impact of technology on society. The goal is to foster a research environment beyond disciplines that values diversity and creates, critiques, and develops new conceptual and theoretical frameworks. Even though research beyond disciplines is essential for understanding complex societal issues and creating positive impact, it is notoriously difficult to evaluate and is often not recognized in current academic career progression. The motivation for this paper is to engage in broad discussion across disciplines and to identify guiding principles for AI research beyond disciplines in a structured and inclusive way, revealing new perspectives and contributing to societal and human wellbeing and sustainability.

This paper explores reward mechanisms for a query incentive network in which agents seek information from social networks. In a query tree issued by the task owner, each agent is rewarded by the owner for contributing to the solution, for instance, by solving the task or inviting others to solve it. The reward mechanism determines the reward for each agent and should motivate all agents to propagate and report their information truthfully, while the total reward cannot exceed the budget set by the task owner. However, our impossibility results demonstrate that a reward mechanism cannot simultaneously achieve Sybil-proofness (no agent benefits from manipulating multiple fake identities), collusion-proofness (no group of agents benefits from pretending to be a single agent to improve their reward), and other essential properties. To address these issues, we propose two novel reward mechanisms. The first achieves Sybil-proofness and collusion-proofness separately; the second sacrifices exact Sybil-proofness to achieve approximate versions of both properties. Additionally, we show experimentally that our second reward mechanism outperforms existing ones.
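
The paper's two mechanisms are not reproduced here; as a point of reference, the sketch below shows a classic baseline instead: the geometric split of the budget along the solver's invitation chain popularized by the DARPA Network Challenge. Its payouts form a geometric series, so the budget constraint holds by construction, while the susceptibility of such splits to Sybil and collusion manipulation is precisely the kind of issue the impossibility results above concern. The agent names are hypothetical.

```python
def geometric_chain_rewards(chain, budget, ratio=0.5):
    """Classic baseline reward split: the solver receives half the budget,
    and each inviter up the chain receives half of the next agent's reward.
    Total payout is a geometric series, so it never exceeds the budget."""
    rewards = {}
    share = budget * ratio
    # chain[0] is the solver; chain[1:] are inviters toward the root.
    for agent in chain:
        rewards[agent] = share
        share *= ratio
    return rewards

# Hypothetical invitation chain: dave solved the task; carol invited dave,
# bob invited carol, alice invited bob. Payouts sum to 93.75 <= 100.
print(geometric_chain_rewards(["dave", "carol", "bob", "alice"], budget=100))
```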

With the proliferation of wireless demands, the wireless local area network (WLAN) has become one of the most important wireless networks. Network intelligence is promising for next generation wireless networks and has attracted considerable attention. Sensing is an efficient enabler of network intelligence, since it can obtain diverse and valuable non-communication information. Thus, integrated sensing and communication (ISAC) is a promising technology for future wireless networks. Sensing-assisted communication (SAC) is an important branch of ISAC, but few related works focus on a systematic and comprehensive analysis of SAC in WLAN. This article is the first to systematically analyze SAC in the next generation WLAN from the system simulation perspective. We analyze the scenarios and advantages of SAC. Then, from the system simulation perspective, we identify several sources of performance gain brought by SAC, namely reductions in beam link failure, protocol overhead, and intra-physical layer protocol data unit (intra-PPDU) performance degradation, and we describe several important influencing factors in detail. The performance evaluation is analyzed in depth, and the performance gain of SAC in both living room and street canyon scenarios is verified by system simulation. Finally, we provide our insights on future directions of SAC for the next generation WLAN.

Diffusion models have shown incredible capabilities as generative models; indeed, they power the current state-of-the-art models on text-conditioned image generation such as Imagen and DALL-E 2. In this work we review, demystify, and unify the understanding of diffusion models across both variational and score-based perspectives. We first derive Variational Diffusion Models (VDM) as a special case of a Markovian Hierarchical Variational Autoencoder, where three key assumptions enable tractable computation and scalable optimization of the ELBO. We then prove that optimizing a VDM boils down to learning a neural network to predict one of three potential objectives: the original source input from any arbitrary noisification of it, the original source noise from any arbitrarily noisified input, or the score function of a noisified input at any arbitrary noise level. We then dive deeper into what it means to learn the score function, and connect the variational perspective of a diffusion model explicitly with the Score-based Generative Modeling perspective through Tweedie's Formula. Lastly, we cover how to learn a conditional distribution using diffusion models via guidance.
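
For reference, the identity invoked above is Tweedie's formula, restated here in common DDPM notation (with $\bar\alpha_t$ the cumulative noise schedule); this is a compact statement under standard Gaussian assumptions, not a derivation from the paper itself.

```latex
% Tweedie's formula for a Gaussian variable z ~ N(\mu_z, \Sigma_z):
%   E[\mu_z \mid z] = z + \Sigma_z \nabla_z \log p(z).
% Applied to the diffusion forward kernel
%   q(x_t \mid x_0) = N(\sqrt{\bar\alpha_t}\, x_0, (1 - \bar\alpha_t) I),
% it gives
\[
  \sqrt{\bar\alpha_t}\, x_0
  \;=\; x_t + (1 - \bar\alpha_t)\,\nabla_{x_t} \log p(x_t),
\]
% linking the "predict x_0" and "predict noise" objectives to the score.
```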

Graph neural networks (GNNs) have been a focus of recent research and are widely utilized in diverse applications. However, with ever-larger data and deeper models, there is an urgent demand to accelerate GNNs for more efficient execution. In this paper, we provide a comprehensive survey on acceleration methods for GNNs from an algorithmic perspective. We first present a new taxonomy that classifies existing acceleration methods into five categories. Based on this classification, we systematically discuss these methods and highlight their correlations. Next, we compare the methods in terms of their efficiency and characteristics. Finally, we suggest some promising prospects for future research.
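
As one concrete instance of the algorithmic techniques such surveys cover, the sketch below shows GraphSAGE-style neighbor sampling, which caps the per-layer fan-out so that each mini-batch only touches a bounded subgraph; the adjacency dictionary and fan-out values are toy placeholders, and this survey's taxonomy may slot sampling methods differently.

```python
import random

def sample_neighbors(adj, batch_nodes, fanouts):
    """GraphSAGE-style neighbor sampling: instead of aggregating over
    every neighbor at every layer, cap the per-node fan-out so each
    mini-batch touches a bounded subgraph."""
    layers = [list(batch_nodes)]
    frontier = set(batch_nodes)
    for fanout in fanouts:            # one fan-out per GNN layer
        nxt = set()
        for v in frontier:
            neigh = adj.get(v, [])
            nxt.update(random.sample(neigh, min(fanout, len(neigh))))
        layers.append(sorted(nxt))
        frontier = nxt
    return layers                     # nodes needed at each hop

# Toy graph: a 2-layer GNN on node 0 samples at most 2 neighbors per hop.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(sample_neighbors(adj, batch_nodes=[0], fanouts=[2, 2]))
```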

Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc., which not only degrades user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading approaches to these aspects in industry. To unify the currently fragmented approaches toward trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition to model development, to system development and deployment, and finally to continuous monitoring and governance. In this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, including the need for a paradigm shift toward comprehensive trustworthy AI systems.

In the past few decades, artificial intelligence (AI) technology has experienced swift development, changing everyone's daily life and profoundly altering the course of human society. The intention of developing AI is to benefit humans by reducing human labor, bringing everyday convenience to human lives, and promoting social good. However, recent research and AI applications show that AI can cause unintentional harm to humans, such as making unreliable decisions in safety-critical scenarios or undermining fairness by inadvertently discriminating against one group. Thus, trustworthy AI has attracted immense attention recently, requiring careful consideration to avoid the adverse effects that AI may bring to humans, so that humans can fully trust and live in harmony with AI technologies. Recent years have witnessed a tremendous amount of research on trustworthy AI. In this work, we present a comprehensive survey of trustworthy AI from a computational perspective, to help readers understand the latest technologies for achieving trustworthy AI. Trustworthy AI is a large and complex area involving various dimensions. Here, we focus on six of the most crucial dimensions: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being. For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. We also discuss the accordant and conflicting interactions among different dimensions and identify potential aspects of trustworthy AI to investigate in the future.

Bid optimization for online advertising from a single advertiser's perspective has been thoroughly investigated in both academic research and industrial practice. However, existing work typically assumes that competitors do not change their bids, i.e., that the winning price is fixed, leading to poor performance of the derived solution. Although a few studies use multi-agent reinforcement learning to set up a cooperative game, they still suffer from the following drawbacks: (1) they fail to avoid collusion solutions in which all the advertisers involved in an auction deliberately collude to bid an extremely low price, and (2) previous works cannot handle the underlying complex bidding environment well, leading to poor model convergence. This problem is amplified when handling multiple advertiser objectives, which are practical demands not considered by previous work. In this paper, we propose a novel multi-objective cooperative bid optimization formulation called Multi-Agent Cooperative bidding Games (MACG). MACG sets up a carefully designed multi-objective optimization framework in which different advertiser objectives are incorporated. A global objective to maximize the overall profit of all advertisements is added to encourage better cooperation and to protect self-bidding advertisers. To avoid collusion, we also introduce an extra platform revenue constraint. We analyze the optimal functional form of the bidding formula theoretically and design a policy network accordingly to generate auction-level bids. We then design an efficient multi-agent evolutionary strategy for model optimization. Offline experiments and online A/B tests conducted on the Taobao platform indicate that both the single advertiser's objective and global profit are significantly improved compared to state-of-the-art methods.
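
MACG's actual optimizer is not reproduced here; the sketch below shows only a generic evolution-strategy step of the kind the phrase "multi-agent evolutionary strategy" suggests: perturb the bid-policy parameters, score each perturbation on the advertiser objective, and move along the reward-weighted direction. The one-parameter toy objective (profit peaking at a bid multiplier of 1.5) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def es_step(theta, objective, pop=50, sigma=0.1, lr=0.05):
    """One step of a simple evolution strategy: sample Gaussian
    perturbations of the bid-policy parameters, score each on the
    objective, and update along the reward-weighted average direction."""
    eps = rng.standard_normal((pop, theta.size))
    rewards = np.array([objective(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return theta + lr / (pop * sigma) * eps.T @ rewards

# Toy stand-in objective: profit peaks at a bid multiplier of 1.5.
objective = lambda th: -float((th[0] - 1.5) ** 2)
theta = np.zeros(1)
for _ in range(200):
    theta = es_step(theta, objective)
print("learned bid multiplier:", round(float(theta[0]), 2))  # ~1.5
```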
