
Blockchains protect an ecosystem worth more than $500bn, with security properties derived from the principle of decentralization. But is today's blockchain really decentralized? In this paper, we empirically study one of the least decentralized parts of Ethereum, the most widely used blockchain system in practice, and shed light on the decentralization issue from a new perspective. To avoid centralization caused by Maximal Extractable Value (MEV), Ethereum adopts a novel mechanism that produces blocks through a builder market. After two years in operation, however, the builder market has evolved into a highly centralized one, with three builders producing more than 90% of blocks. Why does the builder market centralize, given that it is permissionless and anyone can join? Moreover, what are the security implications of a centralized builder market for MEV-Boost auctions? Through a rigorous empirical study of the builder market's core mechanism, MEV-Boost auctions, we answer these two questions using a large-scale auction dataset we have curated since 2022. Unlike previous works that focus on who wins the auctions, we focus on why they win, shedding light on the openness, competitiveness, and efficiency of MEV-Boost auctions. Our findings also help identify directions for improving the decentralization of builder markets.
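
For intuition, here is a minimal sketch of the auction flow the abstract describes, assuming a simplified relay that simply forwards the highest-paying bid to the proposer; the class and field names (Relay, Bid, value_wei) are illustrative and not the real MEV-Boost API.

```python
# Minimal sketch of a simplified MEV-Boost-style auction: builders stream
# bids to a relay, and the relay forwards the highest-paying block header
# to the proposer. Names and structure are illustrative, not the real API.
from dataclasses import dataclass

@dataclass
class Bid:
    builder: str        # builder identifier (a pubkey in practice)
    value_wei: int      # payment to the proposer if this block wins
    block_hash: str     # commitment to the block the builder produced

class Relay:
    def __init__(self):
        self.bids: list[Bid] = []

    def submit_bid(self, bid: Bid) -> None:
        # Real relays also simulate the block to validate the payment.
        self.bids.append(bid)

    def get_header(self) -> Bid | None:
        # The proposer sees only the winning bid, not the block contents.
        return max(self.bids, key=lambda b: b.value_wei, default=None)

relay = Relay()
relay.submit_bid(Bid("builder_a", 120_000_000_000_000_000, "0xaa..."))
relay.submit_bid(Bid("builder_b", 95_000_000_000_000_000, "0xbb..."))
winner = relay.get_header()
print(winner.builder if winner else "no bids")  # -> builder_a
```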

Related Content

A blockchain is a distributed database system maintained by participating nodes. Its defining properties are that records cannot be altered or forged, and it can be understood as a ledger system. It is a core concept of Bitcoin: a complete copy of the Bitcoin blockchain records every transaction of its tokens. With this information, one can determine the value held by any address at any point in its history.
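
As a small illustration of this ledger property, the sketch below replays a toy transaction history up to a chosen block height to recover an address's balance; the data and function names are invented for the example.

```python
# Toy ledger sketch: replaying recorded transactions up to a block height
# recovers the balance an address held at that point in history.
transactions = [
    # (block_height, sender, receiver, amount); sender None = newly minted
    (1, None, "alice", 50),
    (2, "alice", "bob", 20),
    (3, "bob", "carol", 5),
]

def balance_at(address, height):
    balance = 0
    for blk, sender, receiver, amount in transactions:  # assumed sorted
        if blk > height:
            break  # only count transactions recorded up to this height
        if receiver == address:
            balance += amount
        if sender == address:
            balance -= amount
    return balance

print(balance_at("alice", 1))  # 50
print(balance_at("alice", 3))  # 30
```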


The need for a disaster-related event monitoring system has arisen due to the societal and economic impact caused by the increasing number of severe disaster events. An event monitoring system should be able to extract event-related information from texts and discriminate between event instances. We demonstrate our open-source event monitoring system, Master of Disaster (MoD), which receives news streams, extracts event information, links the extracted information to a knowledge graph (KG), in this case Wikidata, and discriminates event instances visually. The goal of event visualization is to group event mentions that refer to the same real-world event instance, so that event instance discrimination can be achieved by visual screening.
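
As a rough illustration of the grouping step (a sketch under our own assumptions, not the MoD codebase), event mentions that share the same extracted type, linked Wikidata entity, and date can be treated as the same event instance:

```python
# Illustrative sketch: grouping event mentions that share the same event
# type, linked Wikidata location, and date, which is one simple way to
# approximate event-instance discrimination. All data here is made up.
from collections import defaultdict

mentions = [
    {"type": "flood", "location_qid": "Q1741", "date": "2021-07-14",
     "text": "Flooding hits Vienna"},
    {"type": "flood", "location_qid": "Q1741", "date": "2021-07-14",
     "text": "Vienna under water after storms"},
    {"type": "earthquake", "location_qid": "Q1490", "date": "2021-10-07",
     "text": "Quake shakes Tokyo"},
]

def group_mentions(mentions):
    groups = defaultdict(list)
    for m in mentions:
        # Mentions with identical (type, place, date) keys are assumed to
        # refer to the same real-world event instance.
        key = (m["type"], m["location_qid"], m["date"])
        groups[key].append(m["text"])
    return groups

for key, texts in group_mentions(mentions).items():
    print(key, "->", texts)
```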

A cautious interpretation of AI regulations and policy in the EU and the USA places explainability as a central deliverable of compliant AI systems. However, from a technical perspective, explainable AI (XAI) remains an elusive and complex target: even state-of-the-art methods often produce erroneous, misleading, and incomplete explanations. "Explainability" has multiple meanings that are often used interchangeably, and there is an even greater number of XAI methods, none of which presents a clear edge. Indeed, each XAI method has multiple failure modes, requiring application-specific development and continuous evaluation. In this paper, we analyze legislative and policy developments in the United States and the European Union, such as the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the AI Act, the AI Liability Directive, and the General Data Protection Regulation (GDPR), from a right-to-explanation perspective. We argue that these AI regulations and current market conditions threaten effective AI governance and safety because the objective of trustworthy, accountable, and transparent AI is intrinsically linked to the questionable ability of AI operators to provide meaningful explanations. Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements that take technical realities into account, AI governance risks becoming a vacuous "box-ticking" exercise in which scientific standards are replaced with legalistic thresholds, providing only a false sense of security in XAI.

The careful planning and safe deployment of 5G technologies will bring enormous benefits to society and the economy. Higher frequencies, beamforming, and small cells are key technologies that will provide unmatched throughput and seamless connectivity to 5G users. Superficial knowledge of these technologies has raised concerns among the general public about the harmful effects of radiation. Several standardization bodies are working to put limits on emissions, based on a defined set of radiation measurement methodologies. However, due to peculiarities of 5G such as beam dynamicity, network densification, and Time Division Duplexing operation, existing EMF measurement methods may provide inaccurate results. In this context, we discuss our experimental studies on measuring the radiation caused by beam-based transmissions from a 5G base station equipped with an Active Antenna System (AAS). We elaborate on the shortcomings of current measurement methodologies and address several open questions. Next, we demonstrate that with user-specific downlink beamforming, not only is better performance achieved compared to non-beamformed downlink, but the radiation in the vicinity of the intended user is also significantly decreased. Further, we show that under weak reception conditions, an uplink transmission can cause significantly elevated radiation in the vicinity of the user equipment. We believe that our work will help clear up several misleading conceptions about 5G EMF radiation effects. We conclude by providing guidelines to improve EMF measurement methodology by considering the spatiotemporal dynamicity of 5G transmissions.

With the increase in the number of privacy regulations, small development teams are forced to make privacy decisions on their own. In this paper, we conduct a mixed-methods survey study, including statistical and qualitative analysis, to evaluate the privacy perceptions, practices, and knowledge of members involved in various phases of the Software Development Life Cycle (SDLC). Our survey includes 362 participants from 23 countries, encompassing roles such as product managers, developers, and testers. Our results show diverse definitions of privacy across SDLC roles, emphasizing the need for a holistic privacy approach throughout the SDLC. We find that software teams, regardless of their region, are less familiar with privacy concepts (such as anonymization) and rely on self-teaching and forums. Most participants are more familiar with GDPR and HIPAA than with other regulations, and multi-jurisdictional compliance is their primary concern. Our results underscore the need for role-dependent solutions to address privacy challenges, and we highlight research directions and educational takeaways to help improve privacy-aware SDLC practices.

In studies of educational production functions or intergenerational mobility, it is common to transform the key variables into percentile ranks. Yet it remains unclear what the regression coefficient estimates when the outcome or the treatment is measured in ranks. In this paper, we derive effective causal estimands for a broad class of commonly used regression methods, including ordinary least squares (OLS), two-stage least squares (2SLS), difference-in-differences (DiD), and regression discontinuity designs (RDD). Specifically, we introduce a novel primitive causal estimand, the Rank Average Treatment Effect (rank-ATE), and prove that it serves as the building block of the effective estimands of all the aforementioned econometric methods. For 2SLS, DiD, and RDD, we show that direct application to outcome ranks identifies parameters that are difficult to interpret. To address this issue, we develop alternative methods that identify more interpretable causal parameters.
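
For intuition, one natural formalization of such a rank-based estimand (a hedged sketch; the paper's exact definition of the rank-ATE may differ) compares potential outcomes after a reference rank transformation $F$:

\[
\tau_{\text{rank}} \;=\; \mathbb{E}\!\left[ F\big(Y_i(1)\big) - F\big(Y_i(0)\big) \right],
\]

where $Y_i(1)$ and $Y_i(0)$ denote the potential outcomes of unit $i$, and $F$ is the cumulative distribution function used to convert raw outcomes into percentile ranks.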

Developing an IT strategy and ensuring that it best serves the business is a key problem many organizations face. This is the problem of linking business architecture to IT architecture in general, and to application architecture specifically. In our earlier work, we proposed category theory as a formal language to unify the business and IT worlds, with the ability to represent the concepts and relations between the two in a unified way. We used rCOS as the underlying model for the specification of interfaces, contracts, and components. The concept of a pseudo-category was then used to represent the business and application architecture specifications and the relationships contained within them. The linkages between them can then be established by matching the business component contracts with the application component contracts. However, that matching was a manual process; in this paper, we extend the work by considering an automated component matching process and lay the groundwork for a tool to support it.
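
As a rough illustration of what automated matching could look like (a sketch under our own assumptions, not the paper's rCOS tooling), the snippet below matches components purely on operation signatures, ignoring the pre- and post-conditions that full contracts would also compare:

```python
# Illustrative sketch of automated component matching by signature:
# keep the application components that provide every operation the
# business component requires. Names and types are invented examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    name: str
    inputs: tuple   # parameter type names
    output: str     # return type name

def match_components(required_ops, app_contracts):
    # Signature equality only; real rCOS contracts would also compare
    # pre- and post-conditions before declaring a match.
    return [name for name, ops in app_contracts.items()
            if required_ops <= ops]

business = {Operation("createOrder", ("Customer", "Item"), "Order")}
apps = {
    "OrderService": {Operation("createOrder", ("Customer", "Item"), "Order")},
    "BillingService": {Operation("invoice", ("Order",), "Invoice")},
}
print(match_components(business, apps))  # -> ['OrderService']
```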

In the area of blockchain, numerous methods have been proposed for suppressing intentional forks by attackers more effectively than the random rule. However, all of them, except for the random rule, require major updates, rely on a trusted third party, or assume strong synchrony. Hence, it is challenging to apply these methods to existing systems such as Bitcoin. To address these issues, we propose a countermeasure that can be easily applied to existing proof-of-work blockchain systems. Our method is a tie-breaking rule that uses partial proof of work, which does not function as a block, as a time standard with finer granularity. Exploiting this property of partial proof of work, the proposed method enables miners to choose the last-generated block in a chain tie, which suppresses intentional forks by attackers. Only weak synchrony, which is already met by existing systems such as Bitcoin, is required for effective functioning. We evaluated the proposed method through a detailed analysis that is lacking in existing works. In networks that adopt our method, the proportion of attacker hashrate necessary for selfish mining is approximately 0.31479 or higher, regardless of the attacker's block propagation capability. Furthermore, we demonstrate through extended selfish mining that the impact of the Match against a pre-generated block, a concern for all last-generated rules, can be mitigated with appropriate parameter settings.
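
The following sketch illustrates the idea of such a last-generated tie-breaking rule under our own simplifying assumptions (it is not the paper's exact specification): partial proof-of-work shares observed on top of each competing tip serve as a fine-grained clock, and the miner extends the tip that appears to have been generated last.

```python
# Sketch of a last-generated tie-breaking rule: partial PoW shares arrive
# far more often than blocks, so the time of the first share observed on
# top of a tip upper-bounds when that tip was generated. Illustrative only.
import time

def choose_tip(tips, partial_pow_log):
    """tips: competing block hashes at the same height.
    partial_pow_log: block hash -> arrival times of partial proof-of-work
    shares observed on top of that block."""
    def first_share_time(tip):
        shares = partial_pow_log.get(tip, [])
        # A tip with no observed shares yet is treated as just generated.
        return min(shares, default=time.time())
    # The later the first partial PoW on a tip appeared, the later the tip
    # itself must have been generated, so pick the last-generated block.
    return max(tips, key=first_share_time)

log = {"0xaa": [100.0, 101.5], "0xbb": [104.2]}
print(choose_tip(["0xaa", "0xbb"], log))  # -> 0xbb (generated later)
```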

We study the problem of realizing families of subgroups as the sets of stabilizers of configurations from a subshift of finite type (SFT). This problem generalizes both the existence of strongly and of weakly aperiodic SFTs. We show that a finitely generated normal subgroup is realizable if and only if the quotient by the subgroup admits a strongly aperiodic SFT. We also show that if a subgroup is realizable, its subgroup membership problem must be decidable. We further introduce periodically rigid groups, groups for which every weakly aperiodic subshift of finite type is strongly aperiodic. We conjecture that the only finitely generated periodically rigid groups are virtually $\mathbb{Z}$ groups and torsion-free virtually $\mathbb{Z}^2$ groups. Finally, we show that virtually nilpotent and polycyclic groups satisfy the conjecture.

More than one hundred benchmarks have been developed to test the commonsense knowledge and commonsense reasoning abilities of artificial intelligence (AI) systems. However, these benchmarks are often flawed, and many aspects of common sense remain untested. Consequently, we do not currently have any reliable way of measuring to what extent existing AI systems have achieved these abilities. This paper surveys the development and uses of AI commonsense benchmarks. We discuss the nature of common sense; the role of common sense in AI; the goals served by constructing commonsense benchmarks; and desirable features of commonsense benchmarks. We analyze the common flaws in benchmarks, and we argue that it is worthwhile to invest the work needed to ensure that benchmark examples are consistently of high quality. We survey the various methods of constructing commonsense benchmarks. We enumerate 139 commonsense benchmarks that have been developed: 102 text-based, 18 image-based, 12 video-based, and 7 based on simulated physical environments. We discuss the gaps in the existing benchmarks and aspects of commonsense reasoning that are not addressed in any existing benchmark. We conclude with a number of recommendations for the future development of commonsense AI benchmarks.

Co-evolving time series appear in a multitude of applications, such as environmental monitoring, financial analysis, and smart transportation. This paper addresses two challenges: (C1) how to incorporate explicit relationship networks of the time series, and (C2) how to model the implicit relationships of the temporal dynamics. We propose a novel model called Network of Tensor Time Series, which comprises two modules: a Tensor Graph Convolutional Network (TGCN) and a Tensor Recurrent Neural Network (TRNN). TGCN tackles the first challenge by generalizing the Graph Convolutional Network (GCN) from flat graphs to tensor graphs, capturing the synergy between the multiple graphs associated with the tensors. TRNN leverages tensor decomposition to model the implicit relationships among co-evolving time series. Experimental results on five real-world datasets demonstrate the efficacy of the proposed method.
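
As a rough illustration of the TGCN idea (a sketch under our own assumptions, not the authors' implementation), the snippet below applies a normalized adjacency matrix along each mode of a data tensor, so that every graph associated with the tensor contributes to the convolution:

```python
# Sketch of a tensor graph convolution: a co-evolving tensor time series
# with one graph per mode is smoothed by multiplying each mode's
# normalized adjacency along that mode. Data and graphs are invented.
import numpy as np

def normalize(adj):
    # Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}.
    a = adj + np.eye(adj.shape[0])
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d @ a @ d

def mode_product(x, mat, mode):
    # Multiply `mat` along the given tensor mode, preserving axis order.
    out = np.tensordot(mat, x, axes=([1], [mode]))
    return np.moveaxis(out, 0, mode)

def tensor_graph_conv(x, graphs):
    # x: (n_1, ..., n_k, t) data tensor; graphs[i]: adjacency over mode i.
    for mode, adj in enumerate(graphs):
        x = mode_product(x, normalize(adj), mode)
    return np.maximum(x, 0.0)  # ReLU nonlinearity

# Example: 3 stations x 4 sensor types observed over 5 time steps, with
# one graph over stations and one over sensor types.
x = np.arange(60, dtype=float).reshape(3, 4, 5)
a_station = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
a_sensor = np.array([[0, 1, 1, 0], [1, 0, 0, 1],
                     [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
print(tensor_graph_conv(x, [a_station, a_sensor]).shape)  # (3, 4, 5)
```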
